==<span id="NATIVE"></span>Instructions==
[[Machine_code|Machine language]] is built up from discrete ''statements'' or ''instructions''. On the processing architecture, a given instruction may specify:
*[[opcode]] (the instruction to be performed), e.g. add, copy, test
*any explicit operands:
::[[processor register|registers]]
::literal/constant values
::[[addressing mode]]s used to access memory

More complex operations are built up by combining these simple instructions, which are executed sequentially, or as otherwise directed by [[control flow]] instructions; a toy example combining several of these instruction types appears after the lists below.

===Instruction types===
Examples of operations common to many instruction sets include:

====Data handling and memory operations====
*''Set'' a [[processor register|register]] to a fixed constant value.
*''Copy'' data from a memory location or a register to a memory location or a register (such a machine instruction is often called ''move''; however, the term is misleading, as the data is copied rather than moved). These instructions are used to store the contents of a register, the contents of another memory location or the result of a computation, or to retrieve stored data to perform a computation on it later. They are often called [[load–store unit|''load'' or ''store'']] operations.
*''Read'' or ''write'' data from hardware devices.

====Arithmetic and logic operations====
*''Add'', ''subtract'', ''multiply'', or ''divide'' the values of two registers, placing the result in a register, possibly setting one or more [[flag (programming)|condition codes]] in a [[status register]].{{Sfn|Hennessy|Patterson|2003|p=108}}
**''{{vanchor|increment}}'' and ''{{vanchor|decrement}}'' in some ISAs, saving an operand fetch in trivial cases.
*Perform [[bitwise operation]]s, e.g., taking the ''[[logical conjunction|conjunction]]'' and ''[[logical disjunction|disjunction]]'' of corresponding bits in a pair of registers, or taking the ''[[logical negation|negation]]'' of each bit in a register.
*''Compare'' two values in registers (for example, to see if one is less, or if they are equal).
*''{{vanchor|Floating-point instruction}}s'' for arithmetic on floating-point numbers.{{Sfn|Hennessy|Patterson|2003|p=108}}

====Control flow operations====
*''[[Branch (computer science)|Branch]]'' to another location in the program and execute instructions there.
*''[[Conditional branch|Conditionally branch]]'' to another location if a certain condition holds.
*''[[Indirect branch|Indirectly branch]]'' to another location.
*''Skip'' one or more instructions, depending on conditions.
*''Trap'': explicitly cause an [[interrupt]], either conditionally or unconditionally.
*''[[Subroutine|Call]]'' another block of code, while saving, e.g., the location of the next instruction, as a point to return to.

====Coprocessor instructions====
*Load/store data to and from a coprocessor, or exchange it with CPU registers.
*Perform coprocessor operations.
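To illustrate how the instruction types above combine into programs, the following Python sketch interprets a short program for an invented three-register toy instruction set (not any real architecture). The program sums memory cells 1 through 3 into cell 0, using ''set'', ''load'', ''store'', and ''add'' instructions executed sequentially under the direction of a conditional branch:

<syntaxhighlight lang="python">
# Toy interpreter for a made-up instruction set (illustrative only, not a real ISA).
# Instructions are (opcode, operands...) tuples; the machine has registers r0..r2.
def run(program, memory):
    regs = [0, 0, 0]                  # three general-purpose registers
    pc = 0                            # program counter
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "set":               # data handling: set a register to a constant
            regs[args[0]] = args[1]
        elif op == "load":            # data handling: copy memory[reg] into a register
            regs[args[0]] = memory[regs[args[1]]]
        elif op == "store":           # data handling: copy a register into memory[reg]
            memory[regs[args[1]]] = regs[args[0]]
        elif op == "add":             # arithmetic: add one register into another
            regs[args[0]] += regs[args[1]]
        elif op == "bnez":            # control flow: branch if a register is non-zero
            if regs[args[0]] != 0:
                pc = args[1]
    return memory

memory = {0: 0, 1: 10, 2: 20, 3: 30}
program = [
    ("set", 0, 0),     # r0 = 0, the running total
    ("set", 1, 3),     # r1 = 3, index of the last cell to add
    ("load", 2, 1),    # r2 = memory[r1]        <- loop body starts here (index 2)
    ("add", 0, 2),     # r0 = r0 + r2
    ("set", 2, -1),    # r2 = -1
    ("add", 1, 2),     # r1 = r1 - 1
    ("bnez", 1, 2),    # if r1 != 0, branch back to the load at index 2
    ("set", 1, 0),     # r1 = 0, address of the result cell
    ("store", 0, 1),   # memory[r1] = r0, i.e. memory[0] = 60
]
print(run(program, memory))   # {0: 60, 1: 10, 2: 20, 3: 30}
</syntaxhighlight>

A real processor performs the same dispatch in hardware or microcode rather than in software, but the division of labour between data handling, arithmetic, and control flow instructions is the same.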
===Complex instructions===
Processors may include "complex" instructions in their instruction set. A single "complex" instruction does something that may take many instructions on other computers. Such instructions are typified by instructions that take multiple steps, control multiple functional units, or otherwise appear on a larger scale than the bulk of simple instructions implemented by the given processor. Some examples of "complex" instructions include:
*transferring multiple registers to or from memory (especially the [[Call stack|stack]]) at once
*moving large blocks of memory (e.g. [[string copy]] or [[DMA transfer]])
*complicated integer and [[floating-point arithmetic]] (e.g. [[square root]], or [[transcendental function]]s such as [[logarithm]], [[sine]], [[cosine]], etc.)
*''{{vanchor|[[SIMD]] instruction|SIMD instruction}}s'', a single instruction performing an operation on many homogeneous values in parallel, possibly in dedicated [[SIMD register]]s
*performing an atomic [[test-and-set]] instruction or other [[read–modify–write]] [[atomic instruction]]
*instructions that perform [[arithmetic logic unit|ALU]] operations with an operand from memory rather than a register

Complex instructions are more common in CISC instruction sets than in RISC instruction sets, but RISC instruction sets may include them as well. RISC instruction sets generally do not include ALU operations with memory operands, or instructions to move large blocks of memory, but most RISC instruction sets include [[Single instruction, multiple data|SIMD]] or [[vector processing|vector]] instructions that perform the same arithmetic operation on multiple pieces of data at the same time. SIMD instructions can manipulate large vectors and matrices in minimal time. SIMD instructions allow easy [[parallelization]] of algorithms commonly involved in sound, image, and video processing. Various SIMD implementations have been brought to market under trade names such as [[MMX (instruction set)|MMX]], [[3DNow!]], and [[AltiVec]].

{{Anchor|Parts of an instruction}}
===Instruction encoding===
[[File:Mips32 addi.svg|thumb|right|upright=1.7|One instruction may have several fields, which identify the logical operation, and may also include source and destination addresses and constant values. This is the MIPS "Add Immediate" instruction, which allows selection of source and destination registers and inclusion of a small constant.]]
On traditional architectures, an instruction includes an [[opcode]] that specifies the operation to perform, such as ''add contents of memory to register'', and zero or more [[operand]] specifiers, which may specify [[processor register|registers]], memory locations, or literal data. The operand specifiers may have [[addressing mode]]s determining their meaning or may be in fixed fields. In [[very long instruction word]] (VLIW) architectures, which include many [[microcode]] architectures, multiple simultaneous opcodes and operands are specified in a single instruction.

Some exotic instruction sets, such as [[transport triggered architecture]]s (TTA), do not have an opcode field, only operand(s).

Most [[stack machine]]s have "[[0-operand instruction set|0-operand]]" instruction sets in which arithmetic and logical operations lack any operand specifier fields; only instructions that push operands onto the evaluation stack or that pop operands from the stack into variables have operand specifiers. The instruction set carries out most ALU actions with postfix ([[reverse Polish notation]]) operations that work only on the expression [[Stack (abstract data type)|stack]], not on data registers or arbitrary main memory cells. This can be very convenient for compiling high-level languages, because most arithmetic expressions can be easily translated into postfix notation.<ref>{{cite web|url=http://www.cs.kent.edu/~durand/CS0/Notes/Chapter05/isa.html|title=Instruction Set Architecture (ISA)|work=Introduction to Computer Science CS 0|first=Paul|last=Durand}}</ref>
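For illustration, the following Python sketch models a toy 0-operand stack machine (not any particular architecture): only ''push'' and ''pop'' carry an operand specifier naming a variable, while ''add'' takes both of its operands implicitly from the expression stack:

<syntaxhighlight lang="python">
# Toy 0-operand (stack) machine; only push/pop name a memory variable (illustrative only).
def run_stack(program, memory):
    stack = []
    for op, *args in program:
        if op == "push":            # push the value of a variable onto the stack
            stack.append(memory[args[0]])
        elif op == "pop":           # pop the top of the stack into a variable
            memory[args[0]] = stack.pop()
        elif op == "add":           # 0-operand: both operands come from the stack
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return memory

# C = A + B in postfix form: push a, push b, add, pop c  (four instructions)
memory = {"a": 2, "b": 3, "c": 0}
program = [("push", "a"), ("push", "b"), ("add",), ("pop", "c")]
print(run_stack(program, memory))   # {'a': 2, 'b': 3, 'c': 5}
</syntaxhighlight>

The four-instruction sequence <code>push a</code>, <code>push b</code>, <code>add</code>, <code>pop c</code> is exactly the postfix form of <code>C = A+B</code> discussed under ''Number of operands'' below.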
<!-- A conditional branch that falls through may still have other effects, e.g., decrementing a count register. -->
Conditional instructions often have a predicate field—a few bits that encode the specific condition to cause an operation to be performed rather than not performed. For example, a conditional branch instruction will transfer control if the condition is true, so that execution proceeds to a different part of the program, and not transfer control if the condition is false, so that execution continues sequentially. Some instruction sets also have conditional moves, so that the move will be executed, and the data stored in the target location, if the condition is true, and not executed, and the target location not modified, if the condition is false. Similarly, IBM [[z/Architecture]] has a conditional store instruction. A few instruction sets include a predicate field in every instruction. Having predicates for non-branch instructions is called [[predication (computer architecture)|predication]].
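As a toy illustration of predicated execution (again with an invented instruction format rather than a real ISA), the following Python sketch attaches a predicate to each instruction; a predicated instruction takes effect only if its condition register is non-zero, so a conditional move whose condition is false leaves its destination unmodified:

<syntaxhighlight lang="python">
# Toy predicated instruction stream (illustrative only): each instruction carries a
# predicate naming a condition register and takes effect only if that register is
# non-zero.  A predicate of None means "always execute".
def run_predicated(program, regs):
    for pred, op, dst, src in program:
        if pred is not None and regs[pred] == 0:
            continue                  # condition false: the instruction has no effect
        if op == "mov":               # (conditional) register-to-register move
            regs[dst] = regs[src]
        elif op == "add":             # (conditional) add
            regs[dst] += regs[src]
    return regs

regs = {"r0": 7, "r1": 42, "p": 0}    # p is the condition register
program = [
    (None, "mov", "r0", "r1"),        # unconditional move: r0 = 42
    ("p",  "add", "r0", "r1"),        # predicated add: skipped, since p == 0
]
print(run_predicated(program, regs))  # {'r0': 42, 'r1': 42, 'p': 0}
</syntaxhighlight>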
====Number of operands====
Instruction sets may be categorized by the maximum number of operands ''explicitly'' specified in instructions. (In the examples that follow, ''a'', ''b'', and ''c'' are (direct or calculated) addresses referring to memory cells, while ''reg1'' and so on refer to machine registers.) Each example shows how the statement <code>C = A+B</code> is carried out.
*0-operand (''zero-address machines''), so-called [[stack machine]]s: all arithmetic operations take place using the top one or two positions on the stack:{{Sfn|Hennessy|Patterson|2003|p=92}} <code>push a</code>, <code>push b</code>, <code>add</code>, <code>pop c</code>.
**<code>C = A+B</code> needs ''four instructions''.{{Sfn|Hennessy|Patterson|2003|p=93}} For stack machines, the terms "0-operand" and "zero-address" apply to arithmetic instructions, but not to all instructions, as 1-operand push and pop instructions are used to access memory.
*1-operand (''one-address machines''), so-called [[accumulator machine]]s, include early computers and many small [[microcontroller]]s: most instructions specify a single right operand (that is, a constant, a register, or a memory location), with the implicit [[accumulator (computing)|accumulator]] as the left operand (and the destination if there is one): <code>load a</code>, <code>add b</code>, <code>store c</code> (simulated in the sketch at the end of this subsection).
**<code>C = A+B</code> needs ''three instructions''.{{Sfn|Hennessy|Patterson|2003|p=93}}
*2-operand — many CISC and RISC machines fall under this category:
**CISC — <code>move a</code> to ''c''; then <code>add b</code> to ''c''.
***<code>C = A+B</code> needs ''two instructions''. This effectively 'stores' the result without an explicit ''store'' instruction.
**CISC — Often machines are [https://web.archive.org/web/20131105155703/http://cs.smith.edu/~thiebaut/ArtOfAssembly/CH04/CH04-3.html#HEADING3-79 limited to one memory operand] per instruction: <code>load a,reg1</code>; <code>add b,reg1</code>; <code>store reg1,c</code>. This requires a load/store pair for any memory movement, regardless of whether the <code>add</code> result is stored to a different place, as in <code>C = A+B</code>, or to the same memory location, as in <code>A = A+B</code>.
***<code>C = A+B</code> needs ''three instructions''.
**RISC — Requiring explicit memory loads, the instructions would be: <code>load a,reg1</code>; <code>load b,reg2</code>; <code>add reg1,reg2</code>; <code>store reg2,c</code>.
***<code>C = A+B</code> needs ''four instructions''.
*3-operand, allowing better reuse of data:<ref name="Cocke">{{Cite journal |last1=Cocke |first1=John |last2=Markstein |first2=Victoria |date=January 1990 |title=The evolution of RISC technology at IBM |url=https://www.cis.upenn.edu/~milom/cis501-Fall11/papers/cocke-RISC.pdf |journal=IBM Journal of Research and Development |volume=34 |issue=1 |pages=4–11 |doi=10.1147/rd.341.0004 |access-date=2022-10-05}}</ref>
**CISC — It becomes either a single instruction: <code>add a,b,c</code>
***<code>C = A+B</code> needs ''one instruction''.
**CISC — Or, on machines limited to two memory operands per instruction: <code>move a,reg1</code>; <code>add reg1,b,c</code>.
***<code>C = A+B</code> needs ''two instructions''.
**RISC — arithmetic instructions use registers only, so explicit 2-operand load/store instructions are needed: <code>load a,reg1</code>; <code>load b,reg2</code>; <code>add reg1,reg2,reg3</code>; <code>store reg3,c</code>.
***<code>C = A+B</code> needs ''four instructions''.
***Unlike the 2-operand or 1-operand case, this leaves all three values a, b, and c in registers, available for further reuse.<ref name=Cocke/>
*more operands—some CISC machines permit a variety of addressing modes that allow more than 3 operands (registers or memory accesses), such as the [[VAX]] "POLY" polynomial evaluation instruction.

Due to the large number of bits needed to encode the three registers of a 3-operand instruction, RISC architectures that have 16-bit instructions are invariably 2-operand designs, such as the Atmel AVR, [[TI MSP430]], and some versions of [[ARM Thumb]]. RISC architectures that have 32-bit instructions are usually 3-operand designs, such as the [[ARM architecture family|ARM]], [[AVR32]], [[MIPS architecture|MIPS]], [[Power ISA]], and [[SPARC]] architectures.

Each instruction specifies some number of operands (registers, memory locations, or immediate values) ''explicitly''. Some instructions give one or both operands implicitly, such as by being stored on top of the [[stack (data structure)|stack]] or in an implicit register. If some of the operands are given implicitly, fewer operands need be specified in the instruction. When a "destination operand" explicitly specifies the destination, an additional operand must be supplied. Consequently, the number of operands encoded in an instruction may differ from the mathematically necessary number of arguments for a logical or arithmetic operation (the [[arity]]). Operands are either encoded in the "opcode" representation of the instruction, or else are given as values or addresses following the opcode.
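As a companion to the stack-machine sketch above, the following Python fragment simulates the 1-operand accumulator sequence for <code>C = A+B</code> (again a toy model, not a real instruction set); because the accumulator is the implicit left operand and destination, only three instructions are needed:

<syntaxhighlight lang="python">
# Toy 1-operand accumulator machine (illustrative only): every arithmetic instruction
# names one memory operand; the accumulator is implicit.
def run_accumulator(program, memory):
    acc = 0                              # the implicit accumulator register
    for op, addr in program:
        if op == "load":                 # acc = memory[addr]
            acc = memory[addr]
        elif op == "add":                # acc = acc + memory[addr]
            acc += memory[addr]
        elif op == "store":              # memory[addr] = acc
            memory[addr] = acc
    return memory

# C = A + B needs three instructions on this machine.
memory = {"a": 2, "b": 3, "c": 0}
program = [("load", "a"), ("add", "b"), ("store", "c")]
print(run_accumulator(program, memory))  # {'a': 2, 'b': 3, 'c': 5}
</syntaxhighlight>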
==={{Anchor|REGISTER-PRESSURE}}Register pressure===
''Register pressure'' measures the availability of free registers at any point in time during the program execution. Register pressure is high when a large number of the available registers are in use; thus, the higher the register pressure, the more often the register contents must be [[register spilling|spilled]] into memory. Increasing the number of registers in an architecture decreases register pressure but increases the cost.<ref>{{cite book |last=Page |first=Daniel |title=A Practical Introduction to Computer Architecture |chapter=11. Compilers |year=2009 |publisher=Springer |isbn=978-1-84882-255-9 |page=464 |bibcode=2009pica.book.....P}}</ref>

While embedded instruction sets such as [[ARM Thumb|Thumb]] suffer from extremely high register pressure because they have small register sets, general-purpose RISC ISAs like [[MIPS architecture|MIPS]] and [[DEC Alpha|Alpha]] enjoy low register pressure. CISC ISAs like x86-64 offer low register pressure despite having smaller register sets. This is due to the many addressing modes and optimizations (such as sub-register addressing, memory operands in ALU instructions, absolute addressing, PC-relative addressing, and register-to-register spills) that CISC ISAs offer.<ref>{{cite conference |last1=Venkat |first1=Ashish |last2=Tullsen |first2=Dean M. |title=Harnessing ISA Diversity: Design of a Heterogeneous-ISA Chip Multiprocessor |year=2014 |conference=41st Annual International Symposium on Computer Architecture |url=http://dl.acm.org/citation.cfm?id=2665692}}</ref>

==={{Anchor|Fixed length|Fixed width|Variable length|Variable width}}Instruction length===
The size or length of an instruction varies widely, from as little as four bits in some [[microcontroller]]s to many hundreds of bits in some [[very long instruction word|VLIW]] systems. Processors used in [[personal computer]]s, [[mainframe computer|mainframe]]s, and [[supercomputer]]s have minimum instruction sizes between 8 and 64 bits. The longest possible instruction on x86 is 15 bytes (120 bits).<ref>{{cite web|title=Intel® 64 and IA-32 Architectures Software Developer's Manual|url=https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html|publisher=Intel Corporation|access-date=5 October 2022}}</ref> Within an instruction set, different instructions may have different lengths.

In some architectures, notably most [[reduced instruction set computer]]s (RISC), {{vanchor|instructions are a fixed length|FIXED_LENGTH_INSTRUCTIONS}}, typically corresponding with that architecture's [[word (data type)|word size]]. In other architectures, instructions have [[variable-length code|variable length]], typically integral multiples of a [[byte]] or a [[halfword]]. Some, such as the [[ARMv7|ARM]] with ''Thumb'' extension, have ''mixed'' variable encoding, that is, two fixed encodings, usually 32-bit and 16-bit, where instructions cannot be mixed freely but must be switched between on a branch (or an exception boundary in ARMv8).

Fixed-length instructions are less complicated to handle than variable-length instructions for several reasons (not having to check whether an instruction straddles a cache line or virtual memory page boundary,<ref name=Cocke/> for instance), and are therefore somewhat easier to optimize for speed.
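As a concrete example of a fixed-length encoding, the following Python sketch packs and unpacks the fields of the 32-bit MIPS I-type format used by the "Add Immediate" instruction pictured in the ''Instruction encoding'' section above (a 6-bit opcode, two 5-bit register fields, and a 16-bit immediate); the particular register numbers and immediate value are arbitrary example choices:

<syntaxhighlight lang="python">
# Pack and unpack a 32-bit MIPS I-type instruction word:
#   opcode (6 bits) | rs (5 bits) | rt (5 bits) | immediate (16 bits)
def encode_itype(opcode, rs, rt, imm):
    return (opcode << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFF)

def decode_itype(word):
    return {
        "opcode": (word >> 26) & 0x3F,
        "rs": (word >> 21) & 0x1F,
        "rt": (word >> 16) & 0x1F,
        "imm": word & 0xFFFF,
    }

# addi $t0, $zero, 5: opcode 8 (addi), rs = $zero (0), rt = $t0 (8), imm = 5
word = encode_itype(0b001000, 0, 8, 5)
print(hex(word))            # 0x20080005
print(decode_itype(word))   # {'opcode': 8, 'rs': 0, 'rt': 8, 'imm': 5}
</syntaxhighlight>

Because every instruction occupies exactly one 32-bit word, a decoder can extract these fields with fixed shifts and masks, which is part of why fixed-length instructions are easier to handle.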
===Code density===
In early 1960s computers, main memory was expensive and very limited, even on mainframes. Minimizing the size of a program to make sure it would fit in the limited memory was often central. Thus the size of the instructions needed to perform a particular task, the ''code density'', was an important characteristic of any instruction set. It remained important on the initially tiny memories of minicomputers and then microprocessors. Density remains important today, for smartphone applications, applications downloaded into browsers over slow Internet connections, and in ROMs for embedded applications. A more general advantage of increased density is improved effectiveness of caches and instruction prefetch.

Computers with high code density often have complex instructions for procedure entry, parameterized returns, loops, etc. (therefore retroactively named ''Complex Instruction Set Computers'', [[complex instruction set computer|CISC]]). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using [[addressing mode]]s such as direct, indirect, indexed, etc.). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment, etc. Software-implemented instruction sets may have even more complex and powerful instructions.

''Reduced instruction-set computers'', [[RISC]], were first widely implemented during a period of rapidly growing memory subsystems. They sacrifice code density to simplify implementation circuitry, and try to increase performance via higher clock frequencies and more registers. A single RISC instruction typically performs only a single operation, such as an "add" of registers or a "load" from a memory location into a register. A RISC instruction set normally has a fixed [[#Instruction length|instruction length]], whereas a typical CISC instruction set has instructions of widely varying length. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less optimal use of bus bandwidth and cache memories.

Certain embedded RISC ISAs like [[ARM architecture#Thumb|Thumb]] and [[AVR32]] typically exhibit very high density owing to a technique called code compression. This technique packs two 16-bit instructions into one 32-bit word, which is then unpacked at the decode stage and executed as two instructions.<ref name=weaver>{{cite conference|last1=Weaver|first1=Vincent M.|last2=McKee|first2=Sally A.|title=Code density concerns for new architectures|year=2009|conference=IEEE International Conference on Computer Design|doi=10.1109/ICCD.2009.5413117|citeseerx=10.1.1.398.1967}}</ref>

[[Minimal instruction set computer]]s (MISC) are commonly a form of [[stack machine]], where there are few separate instructions (8–32), so that multiple instructions can be fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA ([[field-programmable gate array]]) or in a [[multi-core]] form. The code density of MISC is similar to the code density of RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task.<ref>{{Cite web|title=RISC vs. CISC|url=https://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/risccisc/|access-date=2021-12-18|website=cs.stanford.edu}}</ref>{{Failed verification|reason=That discusses RISC and CISC, but not MISC.|date=December 2021}}
<!-- Need examples here -->

There has been research into [[executable compression]] as a mechanism for improving code density. The mathematics of [[Kolmogorov complexity]] describes the challenges and limits of this.

In practice, code density is also dependent on the [[compiler]]. Most [[optimizing compiler]]s have options that control whether to optimize code generation for execution speed or for code density. For instance, [[GNU Compiler Collection|GCC]] has the option <code>-Os</code> to optimize for small machine code size, and <code>-O3</code> to optimize for execution speed at the cost of larger machine code.
===Representation===
The instructions constituting a program are rarely specified using their internal, numeric form ([[machine code]]); they may be specified by programmers using an [[assembly language]] or, more commonly, may be generated from [[high-level programming language]]s by [[compiler]]s.{{Sfn|Hennessy|Patterson|2003|p=120}}