{{short description|CPU that implements instruction-level parallelism within a single processor}} {{Redirect|Superscaler|the Sega arcade system board|Sega Super Scaler}} {{more footnotes|date=October 2017}} [[File:Superscalarpipeline.svg|thumb|Simple superscalar pipeline. By fetching and dispatching two instructions at a time, a maximum of two instructions per cycle can be completed. (IF = instruction fetch, ID = instruction decode, EX = execute, MEM = memory access, WB = register write-back, ''i'' = instruction number, ''t'' = clock cycle [i.e. time])]] [[File:Processor board cray-2 hg.jpg|thumb|Processor board of a [[Cray T3E|CRAY T3e]] supercomputer with four ''superscalar'' [[Alpha 21164]] processors]] A '''superscalar processor''' (or '''multiple-issue processor'''<ref>P. Pacheco, ''Introduction to Parallel Programming'', 2011, section 2.2.5, "There are two main approaches to ILP: pipelining ... and multiple issue ... A processor that supports dynamic multiple issue is sometimes said to be superscalar." [[Andrew A. Chien|A. Chien]], ''Computer Architecture for Scientists'', 2022, page 102, "multiple-issue (aka superscalar)".</ref>) is a [[Central processing unit|CPU]] that implements a form of [[Parallel computer|parallelism]] called [[instruction-level parallelism]] within a single processor.<ref>{{Cite web |title=What is a Superscalar Processor? - Definition from Techopedia |url=http://www.techopedia.com/definition/2897/superscalar-processor |access-date=2022-08-29 |website=Techopedia.com |date=28 February 2019 |language=en}}</ref> In contrast to a [[scalar processor]], which can execute at most one instruction per clock cycle, a superscalar processor can execute or start executing more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different [[execution unit]]s on the processor.
It therefore allows greater [[throughput]] (the number of instructions that can be executed in a unit of time) than would otherwise be possible at a given [[clock rate]], so the average number of clock cycles per instruction can fall below one. Each execution unit is not a separate processor (or a core if the processor is a [[multi-core processor]]), but an execution resource within a single CPU such as an [[arithmetic logic unit]]. While a superscalar CPU is typically also [[instruction pipeline|pipelined]], superscalar execution and pipelining are considered distinct performance-enhancement techniques. The former executes multiple instructions in parallel by using multiple execution units, whereas the latter overlaps multiple instructions in the same execution unit by dividing execution into stages. In the "Simple superscalar pipeline" figure, fetching two instructions at the same time is superscalar execution, and fetching the next two before the first pair has been written back is pipelining.

The superscalar technique is traditionally associated with several identifying characteristics (within a given CPU):
* Instructions are issued from a sequential instruction stream
* The CPU dynamically checks for [[data dependencies]] between instructions at run time (versus software checking at [[compile time]])
* The CPU can execute multiple instructions per clock cycle

==History==
[[Seymour Cray]]'s [[CDC 6600]] from 1964, while not capable of issuing multiple instructions per cycle, is often cited as an early influence on modern superscalar processors for its ability to execute instructions simultaneously through multiple functional units. The 1967 [[IBM System/360 Model 91]] was another early influence; it introduced out-of-order execution, pioneering the use of [[Tomasulo's algorithm]].<ref>{{cite journal |last1=Smith |first1=James E. |last2=Sohi |first2=Gurindar S.
|title=The Microarchitecture of Superscalar Processors |journal=Proceedings of the IEEE |date=December 1995 |volume=83 |issue=12 |page=1609 |doi=10.1109/5.476078 |url=https://minds.wisconsin.edu/bitstream/handle/1793/9476/file_1.pdf}}</ref> The [[Intel i960]]CA (1989),<ref>{{cite conference |last1=McGeady |first1=Steven |title=The i960CA SuperScalar implementation of the 80960 architecture |journal=Digest of Papers Compcon Spring '90. |conference=Thirty-Fifth IEEE Computer Society International Conference on Intellectual Leverage |date=Spring 1990 |pages=232–240 |doi=10.1109/CMPCON.1990.63681|isbn=0-8186-2028-5 |s2cid=13206773 }}</ref> the [[AMD 29000]]-series 29050 (1990), and the Motorola [[MC88110]] (1991)<ref>{{cite book |last1=Diefendorff |first1=K. |last2=Allen |first2=M. |chapter=The Motorola 88110 Superscalar RISC microprocessor |title=Digest of Papers COMPCON Spring 1992 |date=Spring 1992 |pages=157–162 |doi=10.1109/CMPCON.1992.186702|isbn=0-8186-2655-0 |s2cid=34913907 }}</ref> were the first commercial single-chip superscalar microprocessors. [[RISC]] microprocessors like these were the first to have superscalar execution, because RISC architectures free up transistors and die area that can be used to include multiple execution units, and their traditionally uniform instruction sets favor superscalar dispatch (multiple dispatch is far more complicated when instructions have variable length, which is partly why RISC designs were faster than [[Complex instruction set computer|CISC]] designs through the 1980s and into the 1990s). Except for CPUs used in [[Low-power electronics|low-power]] applications, [[embedded system]]s, and [[Battery (electricity)|battery]]-powered devices, essentially all general-purpose CPUs developed since about 1998 are superscalar.
The [[P5 (microarchitecture)|P5 Pentium]] was the first superscalar x86 processor. The [[Nx586]], [[P6 (microarchitecture)|P6 Pentium Pro]] and [[AMD K5]] were among the first designs which decode [[x86]] instructions asynchronously into dynamic [[microcode]]-like ''[[micro-op]]'' sequences prior to actual execution on a superscalar [[microarchitecture]]. This opened the way for dynamic scheduling of buffered ''partial'' instructions and enabled more parallelism to be extracted compared to the more rigid methods used in the simpler P5 Pentium; it also simplified [[speculative execution]] and allowed higher clock frequencies compared to designs such as the advanced [[Cyrix 6x86]].

==Scalar to superscalar==
The simplest processors are scalar processors. Each instruction executed by a scalar processor typically manipulates one or two data items at a time. By contrast, each instruction executed by a [[vector processor]] operates simultaneously on many data items. An analogy is the difference between [[Scalar (mathematics)|scalar]] and vector arithmetic. A superscalar processor is a mixture of the two. Each instruction processes one data item, but because there are multiple execution units within the CPU, multiple instructions can process separate data items concurrently.

Superscalar CPU design emphasizes improving the accuracy of the instruction dispatcher and allowing it to keep the multiple execution units in use at all times. This has become increasingly important as the number of units has increased. While early superscalar CPUs would have two [[Arithmetic logic unit|ALU]]s and a single [[floating-point unit|FPU]], a later design such as the [[PowerPC 970]] includes four ALUs, two FPUs, and two SIMD units. If the dispatcher is ineffective at keeping all of these units fed with instructions, the performance of the system will be no better than that of a simpler, cheaper design.
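The dispatcher's task of keeping execution units busy can be illustrated with a toy model. The following is only an illustrative sketch, not any real CPU's design: the unit counts, issue width, and instruction names are hypothetical, units are assumed fully pipelined (free again each cycle), and dependencies and instruction latencies are ignored.

```python
# Toy model of an in-order dual-issue dispatcher: each cycle it sends
# up to `width` instructions to free execution units, stalling at the
# first instruction whose unit is unavailable.
from collections import deque

UNITS = {"alu": 2, "fpu": 1}  # hypothetical unit counts

def dispatch(program, width=2):
    """program: list of (name, unit) tuples in program order.
    Returns a dict mapping each instruction name to its issue cycle."""
    queue = deque(program)
    issue_cycle = {}
    cycle = 0
    while queue:
        free = dict(UNITS)  # units assumed free again each cycle
        issued = 0
        # In-order issue: stop at the first instruction that cannot go.
        while queue and issued < width and free.get(queue[0][1], 0) > 0:
            name, unit = queue.popleft()
            free[unit] -= 1
            issue_cycle[name] = cycle
            issued += 1
        cycle += 1
    return issue_cycle

prog = [("i1", "alu"), ("i2", "alu"), ("i3", "fpu"), ("i4", "alu")]
print(dispatch(prog))  # i1 and i2 issue together; i3 and i4 the next cycle
```

With a scalar processor (width=1) the same four instructions would need four cycles; the dual-issue model completes them in two, which is the source of the throughput gain described above.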
A superscalar processor usually sustains an execution rate in excess of one [[Cycles per instruction|instruction per machine cycle]]. But merely processing multiple instructions concurrently does not make an architecture superscalar, since [[Instruction pipeline|pipelined]], [[multiprocessor]] or [[Multi-core (computing)|multi-core]] architectures also achieve that, but with different methods. In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching each to one of the several execution units contained inside a single CPU. Therefore, a superscalar processor can be envisioned as having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread. Most modern superscalar CPUs also have logic to reorder the instructions to try to avoid pipeline stalls and increase parallel execution.

==Limitations==
Available performance improvement from superscalar techniques is limited by three key areas:
* The degree of intrinsic parallelism in the instruction stream (instructions requiring the same computational resources from the CPU)
* The complexity and time cost of dependency checking logic and [[register renaming]] circuitry
* Branch instruction processing

Existing binary executable programs have varying degrees of intrinsic parallelism. In some cases instructions are not dependent on each other and can be executed simultaneously. In other cases they are inter-dependent: one instruction impacts either resources or results of the other. The instructions <code>a = b + c; d = e + f</code> can be run in parallel because none of the results depend on other calculations. However, the instructions <code>a = b + c; b = e + f</code> might not be runnable in parallel, depending on the order in which the instructions complete while they move through the units.
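The test applied to the two instruction pairs above can be sketched as the pairwise hazard check a dispatcher performs before issuing instructions together. This is an illustrative sketch only: the tuple encoding and register names are hypothetical, and it ignores register renaming, which can eliminate the write-after-read and write-after-write cases. Note that a real dispatcher must run such a check between every pair of candidate instructions, which is why the checking logic grows roughly quadratically with issue width.

```python
# Pairwise dependency (hazard) check between two instructions, each
# encoded as (destination_register, set_of_source_registers).
def can_issue_together(first, second):
    d1, src1 = first
    d2, src2 = second
    raw = d1 in src2   # read-after-write: second reads first's result
    war = d2 in src1   # write-after-read: second overwrites first's input
    waw = d1 == d2     # write-after-write: both write the same register
    return not (raw or war or waw)

# a = b + c; d = e + f  ->  no shared registers, safe to issue in parallel
print(can_issue_together(("a", {"b", "c"}), ("d", {"e", "f"})))  # True
# a = b + c; b = e + f  ->  WAR hazard on b, ordering must be preserved
print(can_issue_together(("a", {"b", "c"}), ("b", {"e", "f"})))  # False
```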
Although the instruction stream may contain no inter-instruction dependencies, a superscalar CPU must nonetheless check for that possibility, since there is no assurance otherwise and failure to detect a dependency would produce incorrect results. No matter how advanced the [[semiconductor device fabrication|semiconductor process]] or how fast the switching speed, this places a practical limit on how many instructions can be simultaneously dispatched. While process advances will allow ever greater numbers of execution units (e.g. ALUs), the burden of checking instruction dependencies grows rapidly, as does the complexity of register renaming circuitry to mitigate some dependencies. Collectively the [[CPU power dissipation|power consumption]], complexity and gate delay costs limit the achievable superscalar speedup. However, even given infinitely fast dependency-checking logic on an otherwise conventional superscalar CPU, an instruction stream with many dependencies would still limit the possible speedup. Thus the degree of intrinsic parallelism in the code stream forms a second limitation.

==Alternatives==
Collectively, these limits drive investigation into alternative architectural changes such as [[very long instruction word]] (VLIW), [[explicitly parallel instruction computing]] (EPIC), [[simultaneous multithreading]] (SMT), and [[Multi-core (computing)|multi-core computing]].

With VLIW, the burdensome task of dependency checking by [[hardware logic]] at run time is removed and delegated to the [[compiler]]. EPIC is like VLIW with extra cache prefetching instructions.

Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar processors. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures.
Because the threads are independent, the instructions of one thread can be executed out of order and/or in parallel with those of another. Also, one thread will not produce a pipeline bubble in the code stream of another, for example due to a branch.

Superscalar processors differ from [[multi-core processor]]s in that the several execution units are not entire processors. A single processor is composed of finer-grained execution units such as the [[Arithmetic logic unit|ALU]], [[Integer (computer science)|integer]] [[Binary multiplier|multiplier]], integer shifter, [[Floating-point unit|FPU]], etc. There may be multiple versions of each execution unit to enable the execution of many instructions in parallel. This differs from a multi-core processor that concurrently processes instructions from ''multiple'' threads, one thread per [[Central processing unit|processing unit]] (called a "core"). It also differs from a [[instruction pipelining|pipelined processor]], where the multiple instructions can concurrently be in various stages of execution, [[Assembly line|assembly-line]] fashion.

The various alternative techniques are not mutually exclusive; they can be (and frequently are) combined in a single processor. Thus a multicore CPU is possible where each core is an independent processor containing multiple parallel pipelines, each pipeline being superscalar. Some processors also include [[vector processor|vector]] capability.
==See also==
* [[Eager execution]]
* [[Hyper-threading]]
* [[Simultaneous multithreading]]
* [[Out-of-order execution]]
* [[Shelving buffer]]
* [[Speculative execution]]
* [[Software lockout]], a multiprocessor issue similar to logic dependencies on superscalars
* [[Super-threading]]

==References==
{{Reflist}}
* [[William Michael (Mike) Johnson (technologist)|Mike Johnson]], ''Superscalar Microprocessor Design'', Prentice-Hall, 1991, {{ISBN|0-13-875634-1}}
* Sorin Cotofana, Stamatis Vassiliadis, "On the Design Complexity of the Issue Logic of Superscalar Machines", [[EUROMICRO]] 1998: 10277-10284
* [[Steven McGeady]], et al., "Performance Enhancements in the Superscalar i960MM Embedded Microprocessor," ''ACM Proceedings of the 1991 Conference on Computer Architecture (Compcon)'', 1991, pp. 4–7

==External links==
* [http://www.cs.clemson.edu/~mark/eager.html Eager Execution / Dual Path / Multiple Path], By Mark Smotherman

{{CPU technologies}}
{{Parallel computing}}

[[Category:Superscalar microprocessors| ]]
[[Category:Classes of computers]]
[[Category:Computer architecture]]
[[Category:Parallel computing]]