== History and implementations ==
A description of such a method, requiring only two values at a time to be held in registers, with a limited set of pre-defined operands that could be extended by defining further operands, functions and subroutines, was first provided at a conference by [[Robert S. Barton]] in 1961.<ref name="Barton_1961"/><ref name="Barton_1987"/>

===Commercial stack machines===
{{see also|High-level language computer architecture}}
Examples of stack instruction sets directly executed in hardware include:
* the [[Z4 (computer)|Z4]] (1945) computer by [[Konrad Zuse]], which had a 2-level stack<ref name="Blaauw-Brooks_1997"/><ref name="LaForest_2007"/>
* the [[Burroughs large systems]] architecture (since 1961)
* the [[English Electric KDF9]] machine. First delivered in 1964, the KDF9 had a 19-level deep pushdown stack of arithmetic registers, and a 17-level deep stack for subroutine return addresses
* the [[Collins Radio]] [[Collins Adaptive Processing System]] minicomputer (CAPS, since 1969) and [[Rockwell Collins]] [[Advanced Architecture Microprocessor]] (AAMP, since 1981)<ref name="Greve-Wilding_1998"/>
* the [[Xerox Star#Hardware|Xerox Dandelion]], introduced 27 April 1981, and the [[Xerox Daybreak]], which used a stack machine architecture to save memory<ref name="Xerox_Mesa"/><ref name="DigiBarn_Star"/>
* the [[UCSD Pascal]] p-machine (as the [[Pascal MicroEngine]] and many others), which supported a complete student programming environment on early 8-bit microprocessors with poor instruction sets and little RAM, by compiling to a virtual stack machine
* [[Manchester computers|MU5]] and the [[ICL 2900 Series]], hybrid stack and accumulator machines. The accumulator register buffered the memory stack's top data value. Variants of load and store opcodes controlled when that register was spilled to the memory stack or reloaded from there.
* the [[HP 3000]] (Classic, not PA-RISC)
* [[HP 9000]] systems based on the [[HP FOCUS]] microprocessor<ref>{{cite journal |title=Instruction Set for a Single-Chip 32-Bit Processor |journal=Hewlett-Packard Journal |volume=34 |issue=8 |publisher=Hewlett-Packard |date=August 1983 |url=https://archive.org/details/Hewlett-Packard_Journal_Vol._34_No._8_1983-08_Hewlett-Packard/page/n9/mode/1up |access-date=5 February 2024}}</ref>
* the [[Tandem Computers]] T/16, like the HP 3000 except that compilers, not microcode, controlled when the register stack spilled to the memory stack or was refilled from it
* the [[Atmel]] [[MARC4]] [[microcontroller]]<ref name="MARC4"/>
* several "Forth chips"<ref name="Colorforth"/> such as the RTX2000, the [[RTX2010]], the F21<ref name="UT_F21"/> and the [[Ignite (microprocessor)|PSC1000]]<ref name="ForthHub_2017"/><ref name="Java_1999"/>
* the [[Setun]] [[ternary computer]], which performed [[balanced ternary]] arithmetic using a stack
* Patriot Scientific's [[Ignite (microprocessor)|Ignite]] stack machine, designed by [[Charles H. Moore]], which holds a leading ''functional density'' benchmark
* the [[Saab Ericsson Space]] Thor [[radiation hardened]] microprocessor<ref name="Lundqvist_1995"/>
* [[Inmos]] [[transputer]]s
* the [[ZPU (microprocessor)|ZPU]], a physically small CPU designed to supervise [[FPGA]] systems<ref name="ZPU"/>
* some technical handheld calculators, which use reverse Polish notation in their keyboard interface instead of having parenthesis keys. This is a form of stack machine: the Plus key relies on its two operands already being at the topmost positions of the user-visible stack (see the sketch after this list).
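A minimal C sketch of this keyboard-style evaluation, using an explicit operand stack (the names <code>enter</code> and <code>plus</code> and the 16-slot stack are illustrative, not taken from any particular calculator):

<syntaxhighlight lang="c">
#include <stdio.h>

/* Illustrative operand stack; real calculators expose only a few
   user-visible levels. */
static double stack[16];
static int depth = 0;               /* number of occupied slots */

static void enter(double x) {       /* keying in a number pushes it */
    stack[depth++] = x;
}

static void plus(void) {            /* the Plus key consumes the two topmost values */
    double b = stack[--depth];
    double a = stack[--depth];
    stack[depth++] = a + b;
}

int main(void) {
    enter(3.0);
    enter(4.0);
    plus();
    printf("%g\n", stack[depth - 1]);   /* prints 7 */
    return 0;
}
</syntaxhighlight>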
===Virtual stack machines===
Examples of [[Virtual machine|virtual]] stack machines interpreted in software:
* the [[Whetstone (benchmark)|Whetstone]] [[ALGOL 60]] interpretive code,<ref name="Randell-Russell_1964"/> on which some features of the Burroughs B6500 were based
* the [[UCSD Pascal]] p-machine, which closely resembled the Burroughs machines
* the [[p-code machine#Example machine|Niklaus Wirth p-code machine]]
* [[Smalltalk]]
* the [[Java virtual machine]] instruction set (only the abstract instruction set is stack based; HotSpot, Sun's Java virtual machine, for instance, implements its interpreter as handwritten assembly stubs rather than in software)
* the [[WebAssembly]] bytecode
* the [[Virtual Execution System]] (VES) for the [[Common Intermediate Language]] (CIL) instruction set of the [[.NET Framework]] (ECMA 335)
* the [[Forth (programming language)|Forth]] programming language, especially its integral virtual machine
* Adobe's [[PostScript]]
* [[Sun Microsystems]]' SwapDrop programming language for [[Sun Ray]] [[smartcard]] identification
* Adobe's [[ActionScript]] Virtual Machine 2 (AVM2)
* [[Ethereum]]'s EVM
* the [[CPython]] [[bytecode]] interpreter
* the [[Ruby (programming language)|Ruby]] [[YARV]] bytecode interpreter
* the [[Rubinius]] virtual machine
* the [[bs (programming language)|bs]] language in [[Unix]], which uses a virtual stack machine to process commands after first transposing the input language into reverse Polish notation
* [[dc (computer program)|dc]], one of the oldest [[Unix]] programs
* the [[Lua (programming language)|Lua]] C API
* the [[TON Virtual Machine (TVM)|TON Virtual Machine]] (TVM) for [[The Open Network]] smart contracts

===Hybrid machines===
{{distinguish|text=[[hybrid computer]]s that combine both digital and analogue features}}
Pure stack machines are quite inefficient for procedures which access multiple fields of the same object, because the stack machine code must reload the object pointer for each pointer+offset calculation. A common fix is to add some register-machine features to the stack machine: a visible register file dedicated to holding addresses, and register-style instructions for doing loads and simple address calculations. It is uncommon for the registers to be fully general purpose, because then there is no strong reason to have an expression stack and postfix instructions.

Another common hybrid is to start with a register machine architecture and add another memory address mode which emulates the push or pop operations of stack machines: 'memaddress = reg; reg += instr.displ'. This was first used in [[Digital Equipment Corporation|DEC]]'s [[PDP-11]] minicomputer.<ref name="Duncan_1977"/> The feature was carried forward in [[VAX]] computers and in the [[Motorola 6809]] and [[Motorola 68000|M68000]] microprocessors. It allowed the use of simpler stack methods in early compilers, and it efficiently supported virtual machines using stack interpreters or [[threaded code]]. However, it did not help the register machine's own code become as compact as pure stack machine code, nor did it match the speed of code compiled well for the register architecture: it is faster to change the top-of-stack pointer only occasionally (once per call or return) than to step it up and down throughout each program statement, and it is faster still to avoid memory references entirely.
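A minimal C sketch of this push/pop address mode, treating memory as a word array and the stack pointer as an ordinary register (the names <code>mem</code>, <code>sp</code>, <code>push</code> and <code>pop</code> are illustrative):

<syntaxhighlight lang="c">
#include <stdint.h>

#define MEM_WORDS 1024
static int32_t mem[MEM_WORDS];   /* word-addressed "memory" */
static int32_t sp = MEM_WORDS;   /* stack-pointer register; stack grows downward */

/* PDP-11-style auto-decrement, '-(SP)': adjust the register, then use it as the address. */
static void push(int32_t value) {
    sp -= 1;
    mem[sp] = value;
}

/* PDP-11-style auto-increment, '(SP)+': use the register as the address, then adjust it. */
static int32_t pop(void) {
    int32_t value = mem[sp];
    sp += 1;
    return value;
}

int main(void) {
    push(3);
    push(4);
    return (pop() + pop() == 7) ? 0 : 1;   /* 4 + 3 */
}
</syntaxhighlight>

In hardware the register adjustment and the memory access happen within a single instruction; the two helper functions above merely make the 'memaddress = reg; reg += instr.displ' steps explicit.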
More recently, so-called second-generation stack machines have adopted a dedicated collection of registers to serve as address registers, off-loading the task of memory addressing from the data stack. For example, MuP21 relies on a register called "A", while the more recent GreenArrays processors rely on two registers, A and B.<ref name="Colorforth_F18A"/>

The Intel x86 family of microprocessors has a register-style (accumulator) instruction set for most operations, but uses stack instructions for its [[x87]] ([[Intel 8087]]) floating point arithmetic, dating back to the iAPX87 (8087) coprocessor for the 8086 and 8088. That is, there are no programmer-accessible floating point registers, only an 80-bit wide, 8-level deep stack. The x87 relies heavily on the x86 CPU for assistance in performing its operations.

===Computers using call stacks and stack frames===
Most current computers (of any instruction set style) and most compilers use a large [[call stack|call-return stack]] in memory to organize the short-lived local variables and return links for all currently active procedures or functions. Each nested call creates a new '''stack frame''' in memory, which persists until that call completes. This call-return stack may be entirely managed by the hardware via specialized address registers and special address modes in the instructions, it may be merely a set of conventions followed by the compilers using generic registers and register+offset address modes, or it may be something in between. Since this technique is now nearly universal, even on register machines, it is not helpful to refer to all these machines as stack machines; that term is commonly reserved for machines which also use an expression stack and stack-only arithmetic instructions to evaluate the pieces of a single statement.

Computers commonly provide direct, efficient access to the program's [[global variable]]s and to the local variables of only the current, innermost procedure or function: the topmost stack frame. 'Up level' addressing of the contents of callers' stack frames is usually not needed and is not supported as directly by the hardware. If needed, compilers support it by passing in frame pointers as additional, hidden parameters. Some Burroughs stack machines do support up-level references directly in the hardware, with specialized address modes and a special 'display' register file holding the frame addresses of all outer scopes. Currently, only the MCST [[Elbrus (computer)|Elbrus]] does this in hardware. When [[Niklaus Wirth]] developed the first [[Pascal (programming language)|Pascal]] compiler for the [[CDC 6000 series|CDC 6000]], he found that it was faster overall to pass the frame pointers as a chain rather than constantly updating complete arrays of frame pointers. This software method also adds no overhead for common languages like C which lack up-level references.

The same Burroughs machines also supported nesting of tasks or threads. A task and its creator share the stack frames that existed at the time of task creation, but not the creator's subsequent frames nor the task's own frames. This was supported by a [[Parent pointer tree|cactus stack]], whose layout diagram resembles the trunk and arms of a [[saguaro]] cactus. Each task had its own memory segment holding its stack and the frames that it owns, and the base of this stack was linked to the middle of its creator's stack. In machines with a conventional flat address space, the creator stack and task stacks would instead be separate heap objects in one heap.
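The frame-pointer-chain technique described above can be sketched in C by making the hidden parameter explicit (the <code>frame</code> struct and the function names are purely illustrative):

<syntaxhighlight lang="c">
#include <stdio.h>

/* Each frame carries a static link to the frame of the lexically
   enclosing procedure, plus that procedure's locals. */
struct frame {
    struct frame *static_link;   /* chain to the next outer scope */
    int counter;                 /* a local of the enclosing procedure */
};

/* A lexically nested routine reaches an outer local through the
   frame pointer passed in as a hidden parameter. */
static void nested(struct frame *enclosing) {
    enclosing->counter += 1;     /* 'up level' access to the caller's local */
}

static int outer(void) {
    struct frame f = { NULL, 0 };    /* outer's own frame */
    nested(&f);                      /* a compiler would pass &f implicitly */
    nested(&f);
    return f.counter;                /* 2 */
}

int main(void) {
    printf("%d\n", outer());         /* prints 2 */
    return 0;
}
</syntaxhighlight>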
In some programming languages, the outer-scope data environments are not always nested in time. These languages organize their procedure 'activation records' as separate heap objects rather than as stack frames appended to a linear stack.

In simple languages like [[Forth (programming language)|Forth]] that lack local variables and naming of parameters, stack frames would contain nothing more than return branch addresses and frame management overhead, so their return stack holds bare return addresses rather than frames. The return stack is kept separate from the data value stack, to improve the flow of call setup and returns.
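A minimal C sketch of this dual-stack arrangement, with a data stack for operand values and a separate return stack holding bare return addresses (the opcodes and threaded-code layout are illustrative, not Forth's actual implementation):

<syntaxhighlight lang="c">
#include <stdio.h>

enum op { LIT, ADD, CALL, RET, PRINT, HALT };

int main(void) {
    /* Main code pushes 3 and 4, calls the word at index 8 (ADD, RET),
       then prints the result. */
    int code[] = { LIT, 3, LIT, 4, CALL, 8, PRINT, HALT,
                   ADD, RET };
    int dstack[16], dsp = 0;    /* data stack: operand values only     */
    int rstack[16], rsp = 0;    /* return stack: return addresses only */
    int ip = 0;                 /* instruction pointer                 */

    for (;;) {
        switch (code[ip++]) {
        case LIT:   dstack[dsp++] = code[ip++];            break;
        case ADD:   dsp--; dstack[dsp - 1] += dstack[dsp]; break;
        case CALL:  rstack[rsp++] = ip + 1; ip = code[ip]; break;
        case RET:   ip = rstack[--rsp];                    break;
        case PRINT: printf("%d\n", dstack[--dsp]);         break;  /* prints 7 */
        case HALT:  return 0;
        }
    }
}
</syntaxhighlight>

Because calls and returns touch only the return stack while arithmetic touches only the data stack, call setup never has to move operand values aside, which is the benefit described above.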