===Code density===
In early 1960s computers, main memory was expensive and very limited, even on mainframes. Minimizing the size of a program to make sure it would fit in the limited memory was often a central concern. Thus the size of the instructions needed to perform a particular task, the ''code density'', was an important characteristic of any instruction set. It remained important on the initially tiny memories of minicomputers and then microprocessors. Density remains important today for smartphone applications, for applications downloaded into browsers over slow Internet connections, and for ROMs in embedded applications. A more general advantage of increased density is improved effectiveness of caches and instruction prefetch.

Computers with high code density often have complex instructions for procedure entry, parameterized returns, loops, etc. (therefore retroactively named ''Complex Instruction Set Computers'', [[complex instruction set computer|CISC]]). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using [[addressing mode]]s such as direct, indirect, indexed, etc.). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment. Software-implemented instruction sets may have even more complex and powerful instructions.

''Reduced instruction-set computers'', [[RISC]], were first widely implemented during a period of rapidly growing memory subsystems. They sacrifice code density to simplify implementation circuitry, and try to increase performance via higher clock frequencies and more registers. A single RISC instruction typically performs only a single operation, such as an "add" of registers or a "load" from a memory location into a register. A RISC instruction set normally has a fixed [[#Instruction length|instruction length]], whereas a typical CISC instruction set has instructions of widely varying length. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less efficient use of bus bandwidth and cache memories.

Certain embedded RISC ISAs like [[ARM architecture#Thumb|Thumb]] and [[AVR32]] typically exhibit very high density owing to a technique called code compression. This technique packs two 16-bit instructions into one 32-bit word, which is then unpacked at the decode stage and executed as two instructions.<ref name=weaver>{{cite conference|last1=Weaver|first1=Vincent M.|last2=McKee|first2=Sally A.|title=Code density concerns for new architectures|year=2009|conference=IEEE International Conference on Computer Design|doi=10.1109/ICCD.2009.5413117|citeseerx=10.1.1.398.1967}}</ref>

[[Minimal instruction set computer]]s (MISC) are commonly a form of [[stack machine]], where there are few separate instructions (8–32), so that multiple instructions fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA ([[field-programmable gate array]]) or in a [[multi-core]] form. The code density of MISC is similar to the code density of RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task.<ref>{{Cite web|title=RISC vs. CISC|url=https://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/risccisc/|access-date=2021-12-18|website=cs.stanford.edu}}</ref>{{Failed verification|reason=That discusses RISC and CISC, but not MISC.|date=December 2021}} <!-- Need examples here -->

There has been research into [[executable compression]] as a mechanism for improving code density. The mathematics of [[Kolmogorov complexity]] describes the challenges and limits of this.

In practice, code density is also dependent on the [[compiler]]. Most [[optimizing compiler]]s have options that control whether to optimize code generation for execution speed or for code density. For instance, [[GNU Compiler Collection|GCC]] has the option <code>-Os</code> to optimize for small machine code size, and <code>-O3</code> to optimize for execution speed at the cost of larger machine code.
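As a minimal sketch of the speed-versus-density trade-off, the following C fragment (the file name, function, and reported behaviour are illustrative assumptions, not measurements from any cited source) shows how the same code can be compiled with <code>-Os</code> or <code>-O3</code> and the resulting code sizes compared:

<syntaxhighlight lang="c">
/* density.c -- illustrative example; file name and commands are hypothetical.
 * The same source can be built for size or for speed, e.g.:
 *
 *   gcc -Os -c density.c && size density.o   # favour small machine code
 *   gcc -O3 -c density.c && size density.o   # favour execution speed
 *
 * The "size" utility reports the .text (code) section size of each object
 * file; the -O3 build is typically larger than the -Os build.
 */
#include <stddef.h>

/* A simple loop: under -O3 the compiler may unroll or vectorise it,
 * producing faster but larger code; under -Os it stays compact. */
void scale(float *dst, const float *src, size_t n, float factor)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * factor;
}
</syntaxhighlight>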