Open main menu
Home
Random
Recent changes
Special pages
Community portal
Preferences
About Wikipedia
Disclaimers
Incubator escapee wiki
Search
User menu
Talk
Dark mode
Contributions
Create account
Log in
Editing
Register machine
Warning:
You are not logged in. Your IP address will be publicly visible if you make any edits. If you
log in
or
create an account
, your edits will be attributed to your username, along with other benefits.
Anti-spam check. Do
not
fill this in!
{{Short description|Type of abstract computing machine}} {{Use dmy dates|date=May 2023|cs1-dates=y}} {{Use list-defined references|date=May 2023}} In [[mathematical logic]] and [[theoretical computer science]], a '''register machine''' is a generic class of [[abstract machine]]s, analogous to a [[Turing machine]] and thus [[Turing completeness|Turing complete]]. Unlike a Turing machine that uses a tape and head, a register machine utilizes multiple uniquely addressed registers to store non-negative integers. There are several sub-classes of register machines, including [[Counter machine|counter machines]], [[Pointer machine|pointer machines]], [[Random-access machine|random-access machines (RAM)]], and [[Random-access stored-program machine|Random-Access Stored-Program Machine (RASP)]], each varying in complexity. These machines, particularly in theoretical studies, help in understanding computational processes. The concept of register machines can also be applied to [[Virtual machine|virtual machines]] in practical computer science, for educational purposes and reducing dependency on specific hardware architectures. ==Overview== The register machine gets its name from its use of one or more "[[Processor register|register]]s". In contrast to the tape and head used by a [[Turing machine]], the [[model]] uses '''multiple uniquely addressed registers''', each of which holds a single non-negative [[integer]]. There are at least four sub-classes found in the [[literature]]. In ascending order of complexity: * [[Counter machine]] â the most primitive and reduced [[theoretical]] model of computer hardware. This machine lacks indirect addressing, and instructions are in the [[Finite-state machine|finite state machine]] in the manner of the [[Harvard architecture]]. * [[Pointer machine]] â a blend of the counter machine and RAM models with less common and more abstract than either model. Instructions are in the finite state machine in the manner of Harvard architecture. * [[Random-access machine]] (RAM) â a counter machine with indirect addressing and, usually, an augmented instruction set. Instructions are in the finite state machine in the manner of the Harvard architecture. * [[Random-access stored-program machine]] model (RASP) â a RAM with instructions in its registers analogous to the [[Universal Turing machine]], making it an example of the [[von Neumann architecture]]. But unlike a computer, the model is ''idealized'' with effectively infinite registers (and if used, effectively infinite special registers such as [[Accumulator (computing)|accumulators]]). As compared to a modern computer, however, the instruction set is still reduced in number and complexity. Any properly defined register machine model is [[Turing completeness|Turing complete]]. Computational speed is very dependent on the model specifics. In practical computer science, a related concept known as a [[virtual machine]] is occasionally employed to reduce reliance on underlying machine architectures. These [[Virtual machine|virtual machines]] are also utilized in educational settings. In textbooks, the term "register machine" is sometimes used interchangeably to describe a virtual machine.<ref name="Abelson-Sussman_1996"/> == Formal definition == A register machine consists of: #'''An unbounded number of labelled, discrete, unbounded registers unbounded in extent (capacity)''': a finite (or infinite in some models) set of registers <math>r_0 \ldots r_n</math> each considered to be of infinite extent and each holding a single non-negative integer (0, 1, 2, ...).<ref group="nb" name="NB1"/> The registers may do their own arithmetic, or there may be one or more special registers that do the arithmetic (e.g. an "accumulator" and/or "address register"). ''See also [[Random-access machine]].'' #'''Tally counters or marks''':<ref group="nb" name="NB2"/> discrete, indistinguishable objects or marks of only one sort suitable for the model. In the most-reduced [[counter machine]] model, per each arithmetic operation only one object/mark is either added to or removed from its location/tape. In some counter machine models (e.g. Melzak,<ref name="Melzak_1961"/> Minsky<ref name="Minsky_1961"/>) and most RAM and RASP models more than one object/mark can be added or removed in one operation with "addition" and usually "subtraction"; sometimes with "multiplication" and/or "division". Some models have control operations such as "copy" (or alternatively: "move", "load", "store") that move "clumps" of objects/marks from register to register in one action. #'''A limited set of instructions''': the instructions tend to divide into two classes: arithmetic and control. The instructions are drawn from the two classes to form "instruction sets", such that an instruction set must allow the model to be Turing equivalent (it must be able to compute any [[partial recursive function]]). ##'''Arithmetic''': Arithmetic instructions may operate on all registers or on a specific register, such as an accumulator. Typically, they are selected from the following sets, though exceptions exist: Counter machine: { Increment (r), Decrement (r), Clear-to-zero (r) } Reduced RAM, RASP: { Increment (r), Decrement (r), Clear-to-zero (r), Load-immediate-constant k, Add (<math>r_1,r_2</math>), Proper-Subtract (<math>r_1,r_2</math>), Increment accumulator, Decrement accumulator, Clear accumulator, Add the contents of register <math>r</math> to the accumulator, Proper-Subtract the contents of register <math>r</math> from the accumulator } Augmented RAM, RASP: Includes all of the reduced instructions as well as: { Multiply, Divide, various [[Boolean function|Boolean bit-wise operations]] (left-shift, bit test, etc.) }. ##'''Control''': Counter machine models: Optionally include { Copy (<math>r_1,r_2</math>) }. RAM and RASP models: Most include { Copy (<math>r_1,r_2</math>) }, or { Load Accumulator from <math>r</math>, Store accumulator into <math>r</math>, Load Accumulator with an immediate constant }. All models: Include at least one conditional "jump" (branch, goto) following the test of a register, such as { Jump-if-zero, Jump-if-not-zero (i.e., Jump-if-positive), Jump-if-equal, Jump-if-not-equal }. All models optionally include: { unconditional program jump (goto) }. ##'''Register-addressing method''': ##*Counter machine: no indirect addressing, immediate operands possible in highly atomized models ##*RAM and RASP: indirect addressing available, immediate operands typical ##'''Input-output''': optional in all models #'''State register''': A special [[Instruction register|Instruction Register]] (IR), distinct from the registers mentioned earlier, stores the current instruction to be executed along with its address in the instruction table. This register, along with its associated table, is located within the finite state machine. The IR is inaccessible in all models. In the case of RAM and RASP, for determining the "address" of a register, the model can choose either (i) the address specified by the table and temporarily stored in the IR for direct addressing, or (ii) the contents of the register specified by the instruction in the IR for indirect addressing. It's important to note that the IR is not the "program counter" (PC) of the RASP (or conventional computer). The PC is merely another register akin to an accumulator but specifically reserved for holding the number of the RASP's current register-based instruction. Thus, a RASP possesses two "instruction/program" registers: (i) the IR (finite state machine's Instruction Register), and (ii) a PC ([[Program counter|Program Counter]]) for the program stored in the registers. Additionally, aside from the PC, a RASP may also dedicate another register to the "Program-Instruction Register" (referred to by various names such as "PIR," "IR," "PR," etc.). #'''List of labelled instructions, usually in sequential order''': A finite list of instructions <math>I_1 \ldots I_m</math>. In the case of the counter machine, random-access machine (RAM), and pointer machine, the instruction store is in the "TABLE" of the finite state machine, thus these models are examples of the Harvard architecture. In the case of the RASP, the program store is in the registers, thus this is an example of the Von Neumann architecture. ''See also [[Random-access machine]] and [[Random-access stored-program machine]].''<br>The instructions are usually listed in sequential order, like [[computer program]]s, unless a jump is successful. In this case, the default sequence continues in numerical order. An exception to this is the abacus<ref name="Lambek_1961"/><ref name="Minsky_1961"/> counter machine modelsâevery instruction has at least one "next" instruction identifier "z", and the conditional branch has two. #*Observe also that the abacus model combines two instructions, JZ then DEC: e.g. { INC ( r, z ), JZDEC ( r, z<sub>true</sub>, z<sub>false</sub> ) }. See [[McCarthy Formalism]] for more about the ''conditional expression'' "IF r=0 THEN z<sub>true</sub> ELSE z<sub>false</sub>"<ref name="McCarthy_1960" /> == Historical development of the register machine model == Two trends appeared in the early 1950s. The first was to characterize the computer as a Turing machine. The second was to define computer-like modelsâmodels with sequential instruction sequences and conditional jumpsâwith the power of a Turing machine, a so-called Turing equivalence. Need for this work was carried out in the context of two "hard" problems: the unsolvable word problem posed by [[Emil Post]]<ref name="Post_1936"/>âhis problem of "tag"âand the very "hard" problem of [[Hilbert's problems]]âthe 10th question around [[Diophantine equation]]s. Researchers were questing for Turing-equivalent models that were less "logical" in nature and more "arithmetic."<ref name="Melzak_1961"/>{{rp|page=281}}<ref name="Shepherdson-Sturgis_1963"/>{{rp|page=218}} The first step towards characterizing computers originated<ref group="nb" name="NB3"/> with [[Hans Hermes]] (1954),<ref name="Hermes_1954"/> [[RĂłzsa PĂ©ter]] (1958),<ref name="PĂ©ter_1958"/> and Heinz Kaphengst (1959),<ref name="Kaphengst_1959"/> the second step with [[Hao Wang (academic)|Hao Wang]] (1954,<ref name="Wang_1954"/> 1957<ref name="Wang_1957"/>) and, as noted above, furthered along by Zdzislaw Alexander Melzak (1961),<ref name="Melzak_1961"/> [[Joachim Lambek]] (1961)<ref name="Lambek_1961"/> and [[Marvin Minsky]] (1961,<ref name="Minsky_1961"/> 1967<ref name="Minsky_1967"/>). The last five names are listed explicitly in that order by [[Yuri Matiyasevich]]. He follows up with: :"''Register machines [some authors use "register machine" synonymous with "counter-machine"] are particularly suitable for constructing Diophantine equations. Like Turing machines, they have very primitive instructions and, in addition, they deal with numbers''".<ref name="Matiyasevich_1993"/> Lambek, Melzak, Minsky, Shepherdson and Sturgis independently discovered the same idea at the same time. See note on [[#Precedence|precedence]] below. The history begins with Wang's model. === Wang's (1954, 1957) model: PostâTuring machine === Wang's work followed from Emil Post's (1936)<ref name="Post_1936"/> paper and led Wang to his definition of his [[Wang B-machine]]âa two-symbol [[PostâTuring machine]] computation model with only four atomic instructions: { LEFT, RIGHT, PRINT, JUMP_if_marked_to_instruction_z } To these four both Wang (1954,<ref name="Wang_1954"/> 1957<ref name="Wang_1957"/>) and then C. Y. Lee (1961)<ref name="Lee_1961"/> added another instruction from the Post set { ERASE }, and then a Post's unconditional jump { JUMP_to_ instruction_z } (or to make things easier, the conditional jump JUMP_IF_blank_to_instruction_z, or both. Lee named this a "W-machine" model: { LEFT, RIGHT, PRINT, ERASE, JUMP_if_marked, [maybe JUMP or JUMP_IF_blank] } Wang expressed hope that his model would be "a rapprochement"<!-- which one? <ref name="Wang_1954"/> or <ref name="Wang_1957"/> -->{{rp|page=63}} between the theory of Turing machines and the practical world of the computer. Wang's work was highly influential. We find him referenced by Minsky (1961)<ref name="Minsky_1961"/> and (1967),<ref name="Minsky_1967"/> Melzak (1961),<ref name="Melzak_1961"/> Shepherdson and Sturgis (1963).<ref name="Shepherdson-Sturgis_1963"/> Indeed, Shepherdson and Sturgis (1963) remark that: :"''...we have tried to carry a step further the 'rapprochement' between the practical and theoretical aspects of computation suggested by Wang,''"<ref name="Shepherdson-Sturgis_1963"/>{{rp|page=218}} [[Martin Davis (mathematician)|Martin Davis]] eventually evolved this model into the (2-symbol) PostâTuring machine. '''Difficulties with the Wang/PostâTuring model''': Except there was a problem: the Wang model (the six instructions of the 7-instruction PostâTuring machine) was still a single-tape Turing-like device, however nice its ''sequential program instruction-flow'' might be. Both Melzak (1961)<ref name="Melzak_1961"/> and Shepherdson and Sturgis (1963)<ref name="Shepherdson-Sturgis_1963"/> observed this (in the context of certain proofs and investigations): :"''...a Turing machine has a certain opacity... a Turing machine is slow in (hypothetical) operation and, usually, complicated. This makes it rather hard to design it, and even harder to investigate such matters as time or storage optimization or a comparison between the efficiency of two algorithms.<ref name="Melzak_1961" />{{rp|page=281}} "...although not difficult... proofs are complicated and tedious to follow for two reasons: (1) A Turing machine has only a head so that one is obliged to break down the computation into very small steps of operations on a single digit. (2) It has only one tape so that one has to go to some trouble to find the number one wishes to work on and keep it separate from other numbers''"<ref name="Shepherdson-Sturgis_1963" />{{rp|page=218}} Indeed, as examples in [[Turing machine examples]], PostâTuring machine and [[partial function|partial functions]] show, the work can be "complicated". <!-- Example: Multiply '''a''' x '''b''' = '''c''', for example: 3 x 4 = 12. The scanned square is indicated by brackets around the mark i.e. ['''1''']. An extra mark serves to indicate the symbol "0". At the start of a computation, just as ShepherdsonâSturgis and Melzak complain, we see the variables expressed in unaryâi.e. the tally marks for '''a'''= '''| | | |''' and '''b''' = '''| | | | |''' â "in a line" (concatenated on what Melzak calls a "linear tape"). Space must be available for '''c''' at the end of the computation, extending without bounds to the right: {|class="wikitable" |- style="font-size:9pt" align="center" valign="bottom" | width="14.4" Height="11.4" | | width="13.8" | | width="13.8" | | width="13.8" | top | width="13.8" | a | width="13.8" | a | width="13.8" | a | width="13.8" | | width="13.8" | top | width="13.8" | b | width="13.8" | b | width="13.8" | b | width="15.6" | b | width="13.8" | | width="13.8" | btm | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | | width="13.8" | | width="13.8" | |- style="font-size:9pt" align="center" valign="bottom" | Height="11.4" | | | |style="background-color:#FFFF99" | [1] |style="background-color:#FFFF99" | 1 |style="background-color:#FFFF99" | 1 |style="background-color:#FFFF99" | 1 | |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 | | | | | | | | | | | | | | | | | | | | |} At the end of the computation the multiplier '''b''' is 5 marks "in a line" (i.e. concatenated) to left of the 13 marks of product '''c'''. {|class="wikitable" |- style="font-size:9pt" align="center" valign="bottom" | width="14.4" Height="11.4" | | width="13.8" | | width="13.8" | | width="13.8" | top | width="13.8" | a | width="13.8" | a | width="13.8" | a | width="13.8" | | width="13.8" | top | width="13.8" | b | width="13.8" | b | width="13.8" | b | width="15.6" | b | width="13.8" | | width="16.8" | btm | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c | width="13.8" | c |- style="font-size:9pt" align="center" valign="bottom" | Height="11.4" | | | | | | | | |style="background-color:#CCFFCC" | [1] |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 | |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 |style="background-color:#99CCFF" | 1 | | | |}--> ===Minsky, MelzakâLambek and ShepherdsonâSturgis models "cut the tape" into many=== {{Tone|section|date=January 2024}} Initial thought leads to 'cutting the tape' so that each is infinitely long (to accommodate any size integer) but left-ended. These three tapes are called "PostâTuring (i.e. Wang-like) tapes". The individual heads move to the left (for decrementing) and to the right (for incrementing). In a sense, the heads indicate "the top of the stack" of concatenated marks. Or in Minsky (1961)<ref name="Minsky_1961"/> and Hopcroft and Ullman (1979),<ref name="Hopcroft-Ullman_1979"/>{{rp|pages=171ff}} the tape is always blank except for a mark at the left endâat no time does a head ever print or erase. Care must be taken to write the instructions so that a test for zero and a jump occur ''before'' decrementing, otherwise the machine will "fall off the end" or "bump against the end"âcreating an instance of a [[partial function]]. Minsky (1961)<ref name="Minsky_1961"/> and ShepherdsonâSturgis (1963)<ref name="Shepherdson-Sturgis_1963"/> prove that only a few tapesâas few as oneâstill allow the machine to be Turing equivalent if the data on the tape is represented as a [[Gödel number]] (or some other uniquely encodable Encodable-decodable number); this number will evolve as the computation proceeds. In the one tape version with [[Gödel numbering|Gödel number]] encoding the counter machine must be able to (i) multiply the Gödel number by a constant (numbers "2" or "3"), and (ii) divide by a constant (numbers "2" or "3") and jump if the remainder is zero. Minsky (1967)<ref name="Minsky_1967" /> shows that the need for this bizarre instruction set can be relaxed to { INC (r), JZDEC (r, z) } and the convenience instructions { CLR (r), J (r) } if two tapes are available. However, a simple Gödelization is still required. A similar result appears in ElgotâRobinson (1964)<ref name="Elgot-Robinson_1964" /> with respect to their RASP model.<!-- To do a multiplication algorithm we don't need the extra mark to indicate "0", but we will need an extra "temporary" tape '''t'''. And we will need an extra "blank/zero" register (e.g. register #0) for an unconditional jump: {|class="wikitable" |- style="font-size:9pt" align="center" valign="bottom" |style="font-weight:bold" width="83.4" Height="12" | At the start: | width="16.8" | | width="15" | | width="16.2" | | width="15" | | width="15" | | width="15" | | width="15" | | width="15" | | width="15" | | width="15" | | width="15" | | width="15" | | width="15" | | width="15" | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | register 0: |style="font-weight:bold" | [] | | | | | | | | | | | | | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | a = register 1: |style="background-color:#FFFF99" | 1 |style="background-color:#FFFF99" | 1 |style="background-color:#FFFF99" | 1 |style="background-color:#FFFF99;font-weight:bold" | [] |style="font-weight:bold" | | | | | | | | | | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | b = register 2: |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC;font-weight:bold" | [] | | | | | | | | | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | c = register 3: |style="background-color:#CCFFFF;font-weight:bold" | [] | | | | | | | | | | | | | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | t = register 4: |style="font-weight:bold" | [] | | | | | | | | | | | | | |- style="font-size:9pt" align="center" valign="bottom" | Height="3" | | | | | | | | | | | | | | | |- style="font-size:9pt" |style="font-weight:bold" Height="12" align="center" valign="bottom" | At the end: | valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | | align="center" valign="bottom" | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | register 0: |style="font-weight:bold" | [] | | | | | | | | | | | | | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | a = register 1: |style="background-color:#FFFF99;font-weight:bold" | [] | | | |style="font-weight:bold" | | | | | | | | | | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | b = register 2: |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC" | 1 |style="background-color:#CCFFCC;font-weight:bold" | [] | | | | | | | | | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | c = register 3: |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF" | 1 |style="background-color:#CCFFFF;font-weight:bold" | [] | |- style="font-size:9pt" align="center" valign="bottom" | Height="12" | t = register 4: | 1 | 1 | 1 | 1 |style="font-weight:bold" | [] | | | | | | | | | |} We can write simple PostâTuring "subroutines" to atomize "increment" and "decrement" into PostâTuring instructions. Note that the head stays always just one square to the right of the top-most printed mark, i.e. at the "top of the stack". "r" is a parameter in the instructions that symbolizes the tape-as-register to be moved and printed or erased, and tested: : : "Increment r" = PRINT_SCANNED_SQUARE_of_TAPE_r, MOVE_TAPE_r_LEFT; i.e. (or: move tape r's head right) ::'''X+''' r is equivalent to '''P''' r; '''L''' r : "Decrement r" = JUMP_IF_TAPE_r_BLANK(ZERO) TO XXX, ELSE MOVE_TAPE_r_RIGHT, ERASE_SCANNED_SQUARE_of_TAPE_rN; (or: move tape r's head left) ::'''X-''' r is equivalent to '''J0''' r, xxx; '''R''' r; '''E''' r Indeed this is similar to the approach that Minsky (1961)<ref name="Minsky_1961"/> took. He started with 4 left-ended tape-machine that: : "used the basic arithmetic device of the present paper. Then, two of the tapes were eliminated by the prime-factor method".<ref name="Minsky_1961"/>{{rp|page=438}} He then observed that: : "we may formulate these results so that the operations act essentially only on the ''length'' of the strings"<ref name="Minsky_1961"/>{{rp|page=449}} His first model, "1961" (it had changed by 1967<ref name="Minsky_1967"/>) started out with only a single mark at the left end of each tape-as-register. The machine was not allowed to '''P'''rint any marks, just move '''L'''eft or '''R'''ight and test for the mark = "1" in the following example. Thus the conventional PostâTuring-like instruction set went from : { R; L; P; E; J0 xxx; J1 xxx, H } to, for each tape-as-register: : { R; L; J1 xxx, H } where '''R''' can be renamed '''INC'''rement, '''R''' can be renamed '''DEC'''rement, "J1" can be combined with "DEC" to create a non-atomized instruction, or can be kept separate and renamed '''J'''ump if '''Z'''ero. --> === Melzak's (1961) model is different: clumps of pebbles go into and out of holes === Melzak's (1961)<ref name="Melzak_1961"/> model is significantly different. He took his own model, flipped the tapes vertically, called them "holes in the ground" to be filled with "pebble counters". Unlike Minsky's "increment" and "decrement", Melzak allowed for proper subtraction of any count of pebbles and "adds" of any count of pebbles. He defines indirect addressing for his model<ref name="Melzak_1961"/>{{rp|page=288}} and provides two examples of its use;<ref name="Melzak_1961"/>{{rp|page=89}} his "proof"<ref name="Melzak_1961"/>{{rp|pages=290â292}} that his model is Turing equivalent is so sketchy that the reader cannot tell whether or not he intended the indirect addressing to be a requirement for the proof. Legacy of Melzak's model is Lambek's simplification and the reappearance of his mnemonic conventions in Cook and Reckhow 1973.<ref name="Cook-Reckhow_1973"/> === Lambek (1961) atomizes Melzak's model into the Minsky (1961) model: INC and DEC-with-test === [[Joachim Lambek|Lambek]] (1961)<ref name="Lambek_1961"/> took Melzak's ternary model and atomized it down to the two unary instructionsâX+, Xâ if possible else jumpâexactly the same two that Minsky (1961)<ref name="Minsky_1961"/> had come up with. However, like the Minsky (1961)<ref name="Minsky_1961"/> model, the Lambek model does execute its instructions in a default-sequential mannerâboth X+ and Xâ carry the identifier of the next instruction, and Xâ also carries the jump-to instruction if the zero-test is successful. === ElgotâRobinson (1964) and the problem of the RASP without indirect addressing === A RASP or random-access stored-program machine begins as a counter machine with its "program of instruction" placed in its "registers". Analogous to, but independent of, the finite state machine's "Instruction Register", at least one of the registers (nicknamed the "program counter" (PC)) and one or more "temporary" registers maintain a record of, and operate on, the current instruction's number. The finite state machine's TABLE of instructions is responsible for (i) fetching the current ''program'' instruction from the proper register, (ii) parsing the ''program'' instruction, (iii) fetching operands specified by the ''program'' instruction, and (iv) executing the ''program'' instruction. Except there is a problem: If based on the ''counter machine'' chassis this computer-like, [[John von Neumann|von Neumann]] machine will not be Turing equivalent. It cannot compute everything that is computable. Intrinsically the model is bounded by the size of its (very-) ''finite'' state machine's instructions. The counter machine based RASP can compute any [[primitive recursive function]] (e.g. multiplication) but not all [[mu recursive function]]s (e.g. the [[Ackermann function]]). ElgotâRobinson investigate the possibility of allowing their RASP model to "self modify" its program instructions.<ref name="Elgot-Robinson_1964"/> The idea was an old one, proposed by BurksâGoldstineâvon Neumann (1946â1947),<ref name="Burks-Goldstine-Neumann_1947"/> and sometimes called "the computed goto". Melzak (1961)<ref name="Melzak_1961"/> specifically mentions the "computed goto" by name but instead provides his model with indirect addressing. '''Computed goto:''' A RASP ''program'' of instructions that modifies the "goto address" in a conditional- or unconditional-jump ''program'' instruction. But this does not solve the problem (unless one resorts to [[Gödel number]]s). What is necessary is a method to fetch the address of a program instruction that lies (far) "beyond/above" the upper bound of the ''finite'' state machine instruction register and TABLE. :Example: A counter machine equipped with only four unbounded registers can e.g. multiply any two numbers ( m, n ) together to yield pâand thus be a primitive recursive functionâno matter how large the numbers m and n; moreover, less than 20 instructions are required to do this! e.g. { 1: CLR ( p ), 2: JZ ( m, done ), 3 outer_loop: JZ ( n, done ), 4: CPY ( m, temp ), 5: inner_loop: JZ ( m, outer_loop ), 6: DEC ( m ), 7: INC ( p ), 8: J ( inner_loop ), 9: outer_loop: DEC ( n ), 10 J ( outer_loop ), HALT } However, with only 4 registers, this machine has not nearly big enough to build a RASP that can execute the multiply algorithm as a ''program''. No matter how big we build our finite state machine there will always be a ''program'' (including its parameters) which is larger. So by definition the bounded program machine that does not use unbounded encoding tricks such as Gödel numbers cannot be ''universal''. Minsky (1967)<ref name="Minsky_1967"/> hints at the issue in his investigation of a counter machine (he calls them "program computer models") equipped with the instructions { CLR (r), INC (r), and RPT ("a" times the instructions m to n) }. He doesn't tell us how to fix the problem, but he does observe that: : "''... the program computer has to have some way to keep track of how many RPT's remain to be done, and this might exhaust any particular amount of storage allowed in the finite part of the computer. RPT operations require infinite registers of their own, in general, and they must be treated differently from the other kinds of operations we have considered.''"<ref name="Minsky_1967"/>{{rp|page=214}} But Elgot and Robinson solve the problem:<ref name="Elgot-Robinson_1964"/> They augment their P<sub>0</sub> RASP with an indexed set of instructionsâa somewhat more complicated (but more flexible) form of indirect addressing. Their P'<sub>0</sub> model addresses the registers by adding the contents of the "base" register (specified in the instruction) to the "index" specified explicitly in the instruction (or vice versa, swapping "base" and "index"). Thus the indexing P'<sub>0</sub> instructions have one more parameter than the non-indexing P<sub>0</sub> instructions: : Example: INC ( r<sub>base</sub>, index ) ; effective address will be [r<sub>base</sub>] + index, where the [[natural number]] "index" is derived from the finite-state machine instruction itself. === Hartmanis (1971) === By 1971, Hartmanis has simplified the indexing to [[indirection]] for use in his RASP model.<ref name="Hartmanis_1971"/> '''Indirect addressing:''' A pointer-register supplies the finite state machine with the address of the target register required for the instruction. Said another way: The ''contents'' of the pointer-register is the ''address'' of the "target" register to be used by the instruction. If the pointer-register is unbounded, the RAM, and a suitable RASP built on its chassis, will be Turing equivalent. The target register can serve either as a source or destination register, as specified by the instruction. Note that the finite state machine does not have to explicitly specify this target register's address. It just says to the rest of the machine: Get me the contents of the register pointed to by my pointer-register and then do xyz with it. It must specify explicitly by name, via its instruction, this pointer-register (e.g. "N", or "72" or "PC", etc.) but it doesn't have to know what number the pointer-register actually contains (perhaps 279,431). === Cook and Reckhow (1973) describe the RAM === Cook and Reckhow (1973)<ref name="Cook-Reckhow_1973"/> cite Hartmanis (1971)<ref name="Hartmanis_1971"/> and simplify his model to what they call a random-access machine (RAMâi.e. a machine with indirection and the Harvard architecture). In a sense we are back to Melzak (1961)<ref name="Melzak_1961"/> but with a much simpler model than Melzak's. == Precedence == Minsky was working at the [[MIT Lincoln Laboratory]] and published his work there; his paper was received for publishing in the ''Annals of Mathematics'' on 15 August 1960, but not published until November 1961.<ref name="Minsky_1961"/> While receipt occurred a full year before the work of Melzak<ref name="Melzak_1961"/> and Lambek<ref name="Lambek_1961"/> was received and published (received, respectively, May and 15 June 1961, and published side-by-side September 1961). That (i) both were Canadians and published in the [[Canadian Mathematical Bulletin]], (ii) neither would have had reference to Minsky's work because it was not yet published in a peer-reviewed journal, but (iii) Melzak references Wang, and Lambek references Melzak, leads one to hypothesize that their work occurred simultaneously and independently. Almost exactly the same thing happened to Shepherdson and Sturgis.<ref name="Shepherdson-Sturgis_1961"/> Their paper was received in December 1961âjust a few months after Melzak and Lambek's work was received. Again, they had little (at most 1 month) or no benefit of reviewing the work of Minsky. They were careful to observe in footnotes that papers by Ershov,<ref name="Ershov_1958"/> Kaphengst<ref name="Kaphengst_1959"/> and PĂ©ter<ref name="PĂ©ter_1958"/> had "recently appeared"<ref name="Shepherdson-Sturgis_1961"/>{{rp|page=219}} These were published much earlier but appeared in the German language in German journals so issues of accessibility present themselves. The final paper of Shepherdson and Sturgis did not appear in a peer-reviewed journal until 1963.<ref name="Shepherdson-Sturgis_1963"/> And as they note in their Appendix A, the 'systems' of Kaphengst (1959),<ref name="Kaphengst_1959"/> Ershov (1958),<ref name="Ershov_1958"/> and PĂ©ter (1958)<ref name="PĂ©ter_1958"/> are all so similar to what results were obtained later as to be indistinguishable to a set of the following: : produce 0 i.e. 0 → n : increment a number i.e. n+1 → n ::"i.e. of performing the operations which generate the natural numbers"<ref name="Shepherdson-Sturgis_1963"/>{{rp|page=246}} : copy a number i.e. n → m : to "change the course of a computation", either comparing two numbers or decrementing until 0 Indeed, Shepherson and Sturgis conclude: ::"''The various minimal systems are very similar''"<ref name="Shepherdson-Sturgis_1963"/>{{rp|page=246}} By order of ''publishing'' date the work of Kaphengst (1959),<ref name="Kaphengst_1959"/> Ershov (1958),<ref name="Ershov_1958"/> PĂ©ter (1958) were first.<ref name="PĂ©ter_1958"/> ==See also== {{div col|colwidth=25em}} * [[Counter machine]] ** [[Counter-machine model]] * [[Pointer machine]] * [[Random-access machine]] * [[Random-access stored-program machine]] * [[Turing machine]] ** [[Universal Turing machine]] ** [[Turing machine examples]] * [[Wang B-machine]] * [[PostâTuring machine]] - description plus examples * [[Algorithm]] ** [[Algorithm characterizations]] * [[Halting problem]] * [[Busy beaver]] * [[Stack machine]] * [[WDR paper computer]] {{div col end}} == Bibliography == '''Background texts:''' The following bibliography of source papers includes a number of texts to be used as background. The mathematics that led to the flurry of papers about abstract machines in the 1950s and 1960s can be found in van Heijenoort (1967)<ref name="Heijenoort_1967"/>âan assemblage of original papers spanning the 50 years from Frege (1879)<ref name="Frege_1879"/> to Gödel (1931).<ref name="Gödel_1931"/> Davis (ed.) ''The Undecidable'' (1965)<ref name="Davis_1965"/> carries the torch onward beginning with Gödel (1931)<ref name="Gödel_1931"/> through Gödel's (1964) postscriptum;<ref name="Gödel_1964"/>{{rp|page=71}} the original papers of [[Alan Turing]] (1936<ref name="Turing_1936"/>â1937) and Emil Post (1936)<ref name="Post_1936"/> are included in ''The Undecidable''. The mathematics of Church, Rosser, and Kleene that appear as reprints of original papers in ''The Undecidable'' is carried further in Kleene (1952),<ref name="Kleene_1952"/> a mandatory text for anyone pursuing a deeper understanding of the mathematics behind the machines. Both Kleene (1952)<ref name="Kleene_1952"/> and Davis (1958)<ref name="Davis_1958"/> are referenced by a number of the papers. For a good treatment of the counter machine see Minsky (1967) Chapter 11 "Models similar to Digital Computers"âhe calls the counter machine a "program computer".<ref name="Minsky_1967"/> A recent overview is found at van Emde Boas (1990).<ref name="EmdeBoas_1990"/> A recent treatment of the Minsky (1961)<ref name="Minsky_1961"/>/Lambek (1961)<ref name="Lambek_1961"/> model can be found BoolosâBurgessâJeffrey (2002);<ref name="Boolos-Burgess-Jeffrey_2002"/> they reincarnate Lambek's "abacus model" to demonstrate equivalence of Turing machines and partial recursive functions, and they provide a graduate-level introduction to both abstract machine models (counter- and Turing-) and the mathematics of recursion theory. Beginning with the first edition BoolosâBurgess (1970)<ref name="Boolos-Burgess_1970"/> this model appeared with virtually the same treatment. '''The papers''': The papers begin with Wang (1957)<ref name="Wang_1957"/> and his dramatic simplification of the Turing machine. Turing (1936),<ref name="Turing_1936"/> Kleene (1952),<ref name="Kleene_1952"/> Davis (1958),<ref name="Davis_1958"/> and in particular Post (1936)<ref name="Post_1936"/> are cited in Wang (1957);<ref name="Wang_1957"/> in turn, Wang is referenced by Melzak (1961),<ref name="Melzak_1961"/> Minsky (1961),<ref name="Minsky_1961"/> and ShepherdsonâSturgis (1961â1963)<ref name="Shepherdson-Sturgis_1961"/><ref name="Shepherdson-Sturgis_1963"/> as they independently reduce the Turing tapes to "counters". Melzak (1961)<ref name="Melzak_1961"/> provides his pebble-in-holes counter machine model with indirection but doesn't carry the treatment further. The work of ElgotâRobinson (1964)<ref name="Elgot-Robinson_1964"/> define the RASPâthe computer-like random-access stored-program machinesâand appear to be the first to investigate the failure of the bounded [[counter machine]] to calculate the mu-recursive functions. This failureâexcept with the draconian use of [[Gödel number]]s in the manner of Minsky (1961)<ref name="Minsky_1961"/>âleads to their definition of "indexed" instructions (i.e. indirect addressing) for their RASP model. ElgotâRobinson (1964)<ref name="Elgot-Robinson_1964"/> and more so Hartmanis (1971)<ref name="Hartmanis_1971"/> investigate RASPs with self-modifying programs. Hartmanis (1971)<ref name="Hartmanis_1971"/> specifies an instruction set with indirection, citing lecture notes of Cook (1970).<ref name="Cook_1970"/> For use in investigations of computational complexity Cook and his graduate student Reckhow (1973)<ref name="Cook-Reckhow_1973"/> provide the definition of a RAM (their model and mnemonic convention are similar to Melzak's, but offer him no reference in the paper). The pointer machines are an offshoot of Knuth (1968,<ref name="Knuth_1968"/> 1973) and independently Schönhage (1980).<ref name="Schönhage_1980"/> For the most part the papers contain mathematics beyond the undergraduate levelâin particular the [[primitive recursive function]]s and [[mu recursive function]]s presented elegantly in Kleene (1952)<ref name="Kleene_1952"/> and less in depth, but useful nonetheless, in BoolosâBurgessâJeffrey (2002).<ref name="Boolos-Burgess-Jeffrey_2002"/> All texts and papers excepting the four starred have been witnessed. These four are written in German and appear as references in ShepherdsonâSturgis (1963)<ref name="Shepherdson-Sturgis_1963"/> and ElgotâRobinson (1964);<ref name="Elgot-Robinson_1964"/> ShepherdsonâSturgis (1963)<ref name="Shepherdson-Sturgis_1963"/> offer a brief discussion of their results in ShepherdsonâSturgis' Appendix A. The terminology of at least one paper (Kaphengst (1959)<ref name="Kaphengst_1959"/> seems to hark back to the BurkeâGoldstineâvon Neumann (1946â1947)<ref name="Burks-Goldstine-Neumann_1947"/> analysis of computer architecture. {|class="wikitable" style="text-align:center; font-size:88%" |- style="vertical-align:bottom;" ! style="width:114.6;"| Author ! style="width:43.2;"| Year ! style="width:46.8;"| Reference ! style="width:41.4;"| Turing machine ! style="width:43.2;"| Counter machine ! style="width:27.6;"| RAM ! style="width:28.2;"| RASP ! style="width:38.4;"| Pointer machine ! style="width:54px;"| Indirect addressing ! style="width:43.8;"| Self-modifying program |- style="vertical-align:bottom;" | style="text-align:left" | Goldstine & von Neumann | 1947<ref name="Burks-Goldstine-Neumann_1947"/> | {{ya}} | | | | | | | {{ya}} |- style="vertical-align:bottom;" | style="text-align:left" | Kleene | 1952<ref name="Kleene_1952"/> | {{ya}} | | | | | | | |- style="vertical-align:bottom;" | style="text-align:left" | Hermes | 1954â1955<ref name="Hermes_1954"/> | | | {{dunno}} | | | | | |- style="vertical-align:bottom;" | style="text-align:left" | Wang | 1957<ref name="Wang_1957"/> | {{ya}} | {{ya}} | hints | | | | hints | |- style="vertical-align:bottom;" | style="text-align:left" | PĂ©ter | 1958<ref name="PĂ©ter_1958"/> | | | {{dunno}} | | | | | |- style="vertical-align:bottom;" | style="text-align:left" | Davis | 1958<ref name="Davis_1958"/> | {{ya}} | {{ya}} | | | | | | |- style="vertical-align:bottom;" | style="text-align:left" | Ershov | 1959<ref name="Ershov_1958"/> | | | {{dunno}} | | | | | |- style="vertical-align:bottom;" | style="text-align:left" | Kaphengst | 1959<ref name="Kaphengst_1959"/> | | | {{dunno}} | | {{ya}} | | | |- style="vertical-align:bottom;" | style="text-align:left" | Melzak | 1961<ref name="Melzak_1961"/> | | | {{ya}} | | | | {{ya}} | hints |- style="vertical-align:bottom;" | style="text-align:left" | Lambek | 1961<ref name="Lambek_1961"/> | | | {{ya}} | | | | | |- style="vertical-align:bottom;" | style="text-align:left" | Minsky | 1961<ref name="Minsky_1961"/> | | | {{ya}} | | | | | |- style="vertical-align:bottom;" | style="text-align:left" | Shepherdson & Sturgis | 1963<ref name="Shepherdson-Sturgis_1963"/> | | | {{ya}} | | | | hints | |- style="vertical-align:bottom;" | style="text-align:left" | Elgot & Robinson | 1964<ref name="Elgot-Robinson_1964"/> | | | | | {{ya}} | | {{ya}} | {{ya}} |- style="vertical-align:bottom;" | style="text-align:left" | Davis- Undecidable | 1965<ref name="Davis_1965"/> | {{ya}} | {{ya}} | | | | | | |- style="vertical-align:bottom;" | style="text-align:left" | van Heijenoort | 1967<ref name="Heijenoort_1967"/> | {{ya}} | | | | | | | |- style="vertical-align:bottom;" | style="text-align:left" | Minsky | 1967<ref name="Minsky_1967"/> | | | {{ya}} | hints | | | hints | |- style="vertical-align:bottom;" | style="text-align:left" | Knuth | 1968,<ref name="Knuth_1968"/> 1973 | {{ya}} | | | | | {{ya}} | {{ya}} | {{ya}} |- style="vertical-align:bottom;" | style="text-align:left" | Hartmanis | 1971<ref name="Hartmanis_1971"/> | | | | | {{ya}} | | | {{ya}} |- style="vertical-align:bottom;" | style="text-align:left" | Cook & Reckhow | 1973<ref name="Cook-Reckhow_1973"/> | | | | {{ya}} | {{ya}} | | {{ya}} | {{ya}} |- style="vertical-align:bottom;" | style="text-align:left" | Schönhage | 1980<ref name="Schönhage_1980"/> | | | | {{ya}} | | {{ya}} | {{ya}} | |- style="vertical-align:bottom;" | style="text-align:left" | van Emde Boas | 1990<ref name="EmdeBoas_1990"/> | {{ya}} | {{ya}} | {{ya}} | {{ya}} | {{ya}} | {{ya}} | | |- style="vertical-align:bottom;" | style="text-align:left" | Boolos & Burgess; Boolos, Burgess & Jeffrey | 1970<ref name="Boolos-Burgess_1970"/>â2002<ref name="Boolos-Burgess-Jeffrey_2002"/> | {{ya}} | {{ya}} | {{ya}} | | | | | |} ==Notes== {{reflist|group="nb"|refs= <ref group="nb" name="NB1">". . . a denumerable sequence of registers numbered 1, 2, 3, ..., each of which can store any natural number 0, 1, 2, .... Each particular program, however, involves only a finite number of these registers, the others remaining empty (i.e. containing 0) throughout the computation." (Shepherdson and Sturgis 1961: p. 219); (Lambek 1961: p. 295) proposed: "a countably infinite set of ''locations'' (holes, wires, etc).</ref> <ref group="nb" name="NB2">For example, (Lambek 1961: p. 295) proposed the use of pebbles, beads, etc.</ref> <ref group="nb" name="NB3">See the "Note" in (Shepherdson and Sturgis 1963: p. 219). In their Appendix A the authors follow up with a listing and discussions of Kaphengst's, Ershov's and PĂ©ter's instruction sets (cf p. 245ff).</ref> }} ==References== {{reflist|refs= <ref name="Abelson-Sussman_1996">[[Harold Abelson]] and [[Gerald Jay Sussman]] with Julie Sussman, [[Structure and Interpretation of Computer Programs]], [[MIT Press]], [[Cambridge, Massachusetts]], 2nd edition, 1996</ref> <ref name="Melzak_1961">{{cite journal |author-last=Melzak |author-first=Zdzislaw Alexander |author-link1=:d:Q95321564 |date=September 1961 |title=An Informal Arithmetical Approach to Computability and Computation |journal=[[Canadian Mathematical Bulletin]] |volume=4 |issue=3 |pages=89, 279â293 [89, 281, 288, 290â292] |doi=10.4153/CMB-1961-031-9 |doi-access=free}} The manuscript was received by the journal on 15 May 1961. Melzak offers no references but acknowledges "the benefit of conversations with Drs. R. Hamming, D. McIlroy and V. Vyssotsky of the Bell Telephone Laboratories and with Dr. H. Wang of Oxford University." [https://web.archive.org/web/20230520150655/https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C94994F05E6820B5014B7A3A14024726/S0008439500050955a.pdf/an-informal-arithmetical-approach-to-computability-and-computation.pdf]</ref> <ref name="Minsky_1961">{{cite journal |author-last=Minsky |author-first=Marvin |author-link=Marvin Minsky |date=1961 |title=Recursive Unsolvability of Post's Problem of 'Tag' and Other Topics in Theory of Turing Machines |journal=Annals of Mathematics |jstor=1970290 |volume=74 |issue=3 |pages=437â455 [438, 449] |doi=10.2307/1970290}}</ref> <ref name="Lambek_1961">{{cite journal |author-last=Lambek |author-first=Joachim |author-link=Joachim Lambek |date=September 1961 |title=How to Program an Infinite Abacus |journal=[[Canadian Mathematical Bulletin]] |volume=4 |issue=3 |pages=295â302 [295] |doi=10.4153/CMB-1961-032-6 |doi-access=free}} The manuscript was received by the journal on 15 June 1961. In his Appendix II, Lambek proposes a "formal definition of 'program'. He references Melzak (1961) and Kleene (1952) ''Introduction to Metamathematics''.</ref> <ref name="McCarthy_1960">McCarthy (1960)</ref> <ref name="Hermes_1954">[[Hans Hermes]] "Die UniversalitĂ€t programmgesteuerter Rechenmaschinen". ''Math.-Phys. Semesterberichte'' (Göttingen) 4 (1954), 42â53.</ref> <ref name="PĂ©ter_1958">[[RĂłzsa PĂ©ter|PĂ©ter, RĂłzsa]] "Graphschemata und rekursive Funktionen", ''Dialectica'' 12 (1958), 373.</ref> <ref name="Kaphengst_1959">{{ill|Heinz Kaphengst|de|Heinz Kaphengst|lt=Kaphengst, Heinz}}, "Eine Abstrakte Programmgesteuerte Rechenmaschine", ''Zeitschrift fur mathematische Logik und Grundlagen der Mathematik'' 5 (1959), 366â379. [https://onlinelibrary.wiley.com/doi/abs/10.1002/malq.19590051413]</ref> <ref name="Wang_1954">[[Hao Wang (academic)|Hao Wang]] "Variant to Turing's Theory of Computing Machines". Presented at the meeting of the Association, 23â25 June 1954.</ref> <ref name="Wang_1957">[[Hao Wang (academic)|Hao Wang]] (1957), "A Variant to Turing's Theory of Computing Machines", ''JACM'' (''Journal of the Association for Computing Machinery'') 4; 63â92. Presented at the meeting of the Association, 23â25 June 1954.</ref> <ref name="Minsky_1967">{{cite book |author-last=Minsky |author-first=Marvin |author-link=Marvin Minsky |date=1967 |title=Computation: Finite and Infinite Machines |url=https://archive.org/details/computationfinit0000mins |url-access=registration |edition=1st |publisher=Prentice-Hall, Inc. |location=Englewood Cliffs, New Jersey, USA |page=214}} In particular see chapter 11: ''Models Similar to Digital Computers'' and chapter 14: ''Very Simple Bases for Computability''. In the former chapter he defines "Program machines" and in the later chapter he discusses "Universal Program machines with Two Registers" and "...with one register", etc.</ref> <ref name="Shepherdson-Sturgis_1963">Shepherdson, Sturgis (1963): [[John C. Shepherdson]] and [[H. E. Sturgis]] (1961) received December 1961 "Computability of Recursive Functions", ''Journal of the Association for Computing Machinery'' (JACM) 10:217â255 [218, 219, 245ff, 246], 1963. An extremely valuable reference paper. In their Appendix A the authors cite 4 others with reference to "Minimality of Instructions Used in 4.1: Comparison with Similar Systems".</ref> <ref name="Matiyasevich_1993">[[Yuri Matiyasevich]], ''Hilbert's Tenth Problem'', commentary to Chapter 5 of the book, at http://logic.pdmi.ras.ru/yumat/H10Pbook/commch_5htm.)</ref> <ref name="Post_1936">Emil Post (1936)</ref> <ref name="Lee_1961">C. Y. Lee (1961)</ref> <ref name="Hopcroft-Ullman_1979">[[John Hopcroft]], [[Jeffrey Ullman]] (1979). ''Introduction to Automata Theory, Languages and Computation'', 1st ed., Reading Mass: Addison-Wesley. {{isbn|0-201-02988-X}}, pp. 171ff. A difficult book centered around the issues of machine-interpretation of "languages", NP-Completeness, etc.</ref> <ref name="Elgot-Robinson_1964">[[Calvin Elgot]] and [[Abraham Robinson]] (1964), "Random-Access Stored-Program Machines, an Approach to Programming Languages", ''Journal of the Association for Computing Machinery'', Vol. 11, No. 4 (October 1964), pp. 365â399.</ref> <ref name="Cook-Reckhow_1973">[[Stephen Cook|Stephen A. Cook]] and Robert A. Reckhow (1972), ''Time-bounded random access machines'', Journal of Computer Systems Science 7 (1973), 354â375.</ref> <ref name="Burks-Goldstine-Neumann_1947">[[Arthur Burks]], [[Herman Goldstine]], [[John von Neumann]] (1946â1947), "Preliminary discussion of the logical design of an electronic computing instrument", reprinted pp. 92ff in [[Gordon Bell]] and [[Allen Newell]] (1971), ''Computer Structures: Readings and Examples'', McGraw-Hill Book Company, New York. {{isbn|0-07-004357-4}}.</ref> <ref name="Hartmanis_1971">[[Juris Hartmanis]] (1971), "Computational Complexity of Random Access Stored Program Machines," ''Mathematical Systems Theory'' 5, 3 (1971) pp. 232â245.</ref> <ref name="Shepherdson-Sturgis_1961">Shepherdson, Sturgis (1961), p. 219</ref> <ref name="Ershov_1958">[[Andrey Ershov|Ershov, Andrey P.]] "On operator algorithms", (Russian) ''Dok. Akad. Nauk'' 122 (1958), 967â970. English translation, Automat. Express 1 (1959), 20â23.</ref> <ref name="Davis_1958">[[Martin Davis (mathematician)|Martin Davis]] (1958), ''Computability & Unsolvability'', McGraw-Hill Book Company, Inc. New York.</ref> <ref name="Knuth_1968">[[Donald Knuth]] (1968), ''The Art of Computer Programming'', Second Edition 1973, Addison-Wesley, Reading, Massachusetts. Cf pages 462â463 where he defines "a new kind of abstract machine or 'automaton' which deals with linked structures."</ref> <ref name="EmdeBoas_1990">[[Peter van Emde Boas]], "Machine Models and Simulations" pp. 3â66, in: [[Jan van Leeuwen]], ed. ''Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity'', The MIT PRESS/Elsevier, 1990. {{isbn|0-444-88071-2}} (volume A). QA 76.H279 1990. van Emde Boas' treatment of SMMs appears on pp. 32â35. This treatment clarifies SchĆnhage 1980âit closely follows but expands slightly the SchĆnhage treatment. Both references may be needed for effective understanding.</ref> <ref name="Kleene_1952">[[Stephen Kleene]] (1952), ''Introduction to Metamathematics'', North-Holland Publishing Company, Amsterdam, Netherlands. {{isbn|0-7204-2103-9}}.</ref> <ref name="Schönhage_1980">[[Arnold Schönhage]] (1980), ''Storage Modification Machines'', Society for Industrial and Applied Mathematics, SIAM J. Comput. Vol. 9, No. 3, August 1980. Wherein SchĆnhage shows the equivalence of his SMM with the "successor RAM" (Random Access Machine), etc. resp. ''Storage Modification Machines'', in ''Theoretical Computer Science'' (1979), pp. 36â37</ref> <ref name="Boolos-Burgess-Jeffrey_2002">[[George Boolos]], [[John P. Burgess]], [[Richard Jeffrey]] (2002), ''Computability and Logic: Fourth Edition'', Cambridge University Press, Cambridge, England. The original Boolos-Jeffrey text has been extensively revised by Burgess: more advanced than an introductory textbook. "Abacus machine" model is extensively developed in Chapter 5 ''Abacus Computability''; it is one of three models extensively treated and comparedâthe Turing machine (still in Boolos' original 4-tuple form) and recursion the other two.</ref> <ref name="Boolos-Burgess_1970">[[George Boolos]], [[John P. Burgess]] (1970)</ref> <ref name="Frege_1879">Frege (1879)</ref> <ref name="Gödel_1931">Gödel (1931)</ref> <ref name="Gödel_1964">Gödel (1964), postscriptum p. 71.</ref> <ref name="Davis_1965">Davis (ed.) ''The Undecidable'' (1965)</ref> <ref name="Heijenoort_1967">van Heijenoort (1967)</ref> <ref name="Turing_1936">Turing (1936)</ref> <ref name="Cook_1970">Cook (1970)</ref> }} ==Further reading== * {{cite book |last=Wolfram |first=Stephen |author-link=Stephen Wolfram |title=A New Kind of Science |url=https://www.wolframscience.com/nks/ |publisher=Wolfram Media, Inc. |date=2002 |pages=[https://www.wolframscience.com/nks/p97--register-machines/ 97â102] |isbn=1-57955-008-8}} ==External links== {{Commons category|Register machines}} * {{MathWorld|title=Register machine|urlname=RegisterMachine}} * [http://www.igblan.free-online.co.uk/igblan/ca/minsky.html Igblan - Minsky Register Machines] {{DEFAULTSORT:Register Machine}} [[Category:Models of computation]] [[Category:Register machines| ]]
Edit summary
(Briefly describe your changes)
By publishing changes, you agree to the
Terms of Use
, and you irrevocably agree to release your contribution under the
CC BY-SA 4.0 License
and the
GFDL
. You agree that a hyperlink or URL is sufficient attribution under the Creative Commons license.
Cancel
Editing help
(opens in new window)
Pages transcluded onto the current version of this page
(
help
)
:
Template:Ambox
(
edit
)
Template:Cite book
(
edit
)
Template:Commons category
(
edit
)
Template:DMCA
(
edit
)
Template:Div col
(
edit
)
Template:Div col end
(
edit
)
Template:Dunno
(
edit
)
Template:MathWorld
(
edit
)
Template:Reflist
(
edit
)
Template:Rp
(
edit
)
Template:SfnRef
(
edit
)
Template:Short description
(
edit
)
Template:Sister project
(
edit
)
Template:Tone
(
edit
)
Template:Use dmy dates
(
edit
)
Template:Use list-defined references
(
edit
)
Template:Ya
(
edit
)