== Design considerations ==

=== Balancing the stages ===
Since the throughput of a pipeline cannot be better than that of its slowest element, the designer should try to divide the work and resources among the stages so that they all take the same time to complete their tasks. In the car assembly example above, if the three tasks took 15 minutes each, instead of 20, 10, and 15 minutes, the latency would still be 45 minutes, but a new car would then be finished every 15 minutes, instead of 20.

=== Buffering ===
Under ideal circumstances, if all processing elements are synchronized and take the same amount of time to process, then each item can be received by each element just as it is released by the previous one, in a single [[clock cycle]]. That way, the items will flow through the pipeline at a constant speed, like waves in a water channel. In such "wave pipelines",<ref>{{cite book |author1=O. Hauck |author2=Sorin A. Huss |author3=M. Garg |title=Proceedings. Fifth International Symposium on Advanced Research in Asynchronous Circuits and Systems |chapter=Two-phase asynchronous wave-pipelines and their application to a 2D-DCT |year=1999 |pages=219–228 |doi=10.1109/ASYNC.1999.761536 |isbn=0-7695-0031-5 |s2cid=206515615 }}</ref> no synchronization or buffering is needed between the stages, besides the storage needed for the data items.

More generally, buffering between the pipeline stages is necessary when the processing times are irregular, or when items may be created or destroyed along the pipeline. For example, in a graphics pipeline that processes triangles to be rendered on the screen, an element that checks the visibility of each triangle may discard the triangle if it is invisible, or may split it into two or more smaller triangles if it is partly hidden. Buffering is also needed to accommodate irregularities in the rates at which the application feeds items to the first stage and consumes the output of the last one.

The buffer between two stages may simply be a [[hardware register]] with suitable synchronization and signalling logic between the two stages. When a stage A stores a data item in the register, it sends a "data available" signal to the next stage B. Once B has used that data, it responds with a "data received" signal to A. Stage A halts, waiting for this signal, before storing the next data item into the register. Stage B halts, waiting for the "data available" signal, if it is ready to process the next item but stage A has not provided it yet.

If the processing times of an element are variable, the whole pipeline may often have to stop, waiting for that element and all the previous ones to consume the items in their input buffers. The frequency of such [[pipeline stall]]s can be reduced by providing space for more than one item in the input buffer of that stage. Such a multiple-item buffer is usually implemented as a [[FIFO (computing and electronics)|first-in, first-out queue]]. The upstream stage may still have to be halted when the queue gets full, but the frequency of those events will decrease as more buffer slots are provided. [[Queueing theory]] can be used to determine the number of buffer slots needed, depending on the variability of the processing times and on the desired performance.
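The buffering behaviour just described can be sketched in software. The following Python fragment is a minimal illustration, not drawn from any particular hardware or library implementation; the stage names, item count, processing delays and buffer size are arbitrary assumptions chosen for the example. A bounded first-in, first-out queue stands in for the multi-slot buffer between a stage A and a stage B: the upstream stage blocks (stalls) when the queue is full and the downstream stage blocks when it is empty, which plays the role of the "data available"/"data received" handshake.

<syntaxhighlight lang="python">
import queue
import random
import threading
import time

BUFFER_SLOTS = 4   # illustrative size of the inter-stage FIFO
ITEMS = 10         # number of data items pushed through the pipeline

fifo = queue.Queue(maxsize=BUFFER_SLOTS)  # multi-slot buffer between stage A and stage B

def stage_a():
    """Upstream stage: produces items at an irregular rate."""
    for i in range(ITEMS):
        time.sleep(random.uniform(0.01, 0.05))  # variable processing time
        fifo.put(i)        # blocks (stalls) if the FIFO is full
    fifo.put(None)         # sentinel: no more items

def stage_b():
    """Downstream stage: consumes items at its own irregular rate."""
    while True:
        item = fifo.get()  # blocks (stalls) if the FIFO is empty
        if item is None:
            break
        time.sleep(random.uniform(0.01, 0.05))
        print(f"stage B finished item {item}")

threads = [threading.Thread(target=stage_a), threading.Thread(target=stage_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
</syntaxhighlight>

Increasing <code>BUFFER_SLOTS</code> makes the upstream stage block less often, at the cost of more storage, mirroring the trade-off analysed with queueing theory above.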
=== Nonlinear pipelines ===
If some stage takes (or may take) much longer than the others and cannot be sped up, the designer can provide two or more processing elements to carry out that task in parallel, with a single input buffer and a single output buffer. As each element finishes processing its current data item, it delivers it to the common output buffer and takes the next data item from the common input buffer. This concept of a "non-linear" or "dynamic" pipeline is exemplified by shops or banks that have two or more cashiers serving clients from a single waiting queue.

=== Dependencies between items ===
In some applications, the processing of an item Y by a stage A may depend on the results or effect of processing a previous item X by some later stage B of the pipeline. In that case, stage A cannot correctly process item Y until item X has cleared stage B.

This situation occurs very often in instruction pipelines. For example, suppose that Y is an arithmetic instruction that reads the contents of a register that was supposed to have been modified by an earlier instruction X. Let A be the stage that fetches the instruction [[operand]]s, and B be the stage that writes the result to the specified register. If stage A tries to process instruction Y before instruction X reaches stage B, the register may still contain the old value, and the effect of Y would be incorrect.

In order to handle such conflicts correctly, the pipeline must be provided with extra circuitry or logic that detects them and takes the appropriate action. Strategies for doing so include:

* '''Stalling:''' Every affected stage, such as A, is halted until the dependency is resolved, that is, until the required information is available and/or the required state has been achieved (see the sketch below).
* '''Reordering items:''' Instead of stalling, stage A may put item Y aside and look for a subsequent item Z in its input stream that has no pending dependencies on any earlier item. In instruction pipelines, this technique is called [[out-of-order execution]].
* '''Guess and backtrack:''' One important example of item-to-item dependency is the handling of a [[conditional branch]] instruction X by an instruction pipeline. The first stage A of the pipeline, which fetches the next instruction Y to be executed, cannot perform its task until X has fetched its operand and determined whether the branch is to be taken or not. That may take many clock cycles, since the operand of X may in turn depend on previous instructions that fetch data from main memory.
: Rather than halt while waiting for X to be finished, stage A may guess whether the branch will be taken or not, and fetch the next instruction Y based on that guess. If the guess later turns out to be incorrect (hopefully rarely), the system has to backtrack and resume with the correct choice. Namely, all the changes made to the machine's state by stage A and subsequent stages based on that guess have to be undone, the instructions following X already in the pipeline have to be flushed, and stage A has to restart with the correct [[instruction pointer]]. This [[branch prediction]] strategy is a special case of [[speculative execution]].
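The stalling strategy listed above can be illustrated with a short, cycle-by-cycle simulation. The following Python fragment is a minimal sketch under simplifying assumptions: the three-stage pipeline (operand fetch, execute, write-back), the two-instruction program and the register names <code>r1</code> and <code>r2</code> are invented for this example and do not describe any real instruction set. Stage A refuses to issue instruction Y while the register written by the earlier instruction X is still marked as pending, producing the stall described in the list above.

<syntaxhighlight lang="python">
import collections

# Instruction: name, destination register, source registers, function of the source values
Instr = collections.namedtuple("Instr", "name dest sources op")

registers = {"r1": 0, "r2": 0}
program = [
    Instr("X", "r1", [],     lambda vals: 5),            # X: r1 <- 5
    Instr("Y", "r2", ["r1"], lambda vals: vals[0] + 1),   # Y: r2 <- r1 + 1, reads what X writes
]

pending = set()      # destination registers of instructions still in the pipeline
exec_latch = None    # (instruction, operand values) waiting in the execute stage
wb_latch = None      # (instruction, result) waiting in the write-back stage (stage B)
pc, cycle = 0, 0

while pc < len(program) or exec_latch or wb_latch:
    cycle += 1

    # Stage B (write-back): commit the result and release the pending register.
    if wb_latch:
        instr, value = wb_latch
        registers[instr.dest] = value
        pending.discard(instr.dest)
        print(f"cycle {cycle}: B writes {instr.dest}={value} ({instr.name})")
        wb_latch = None

    # Execute stage: compute the result and pass it on to stage B.
    if exec_latch:
        instr, vals = exec_latch
        wb_latch = (instr, instr.op(vals))
        exec_latch = None

    # Stage A (operand fetch): stall if a source register is still pending (read-after-write hazard).
    if pc < len(program):
        instr = program[pc]
        if any(src in pending for src in instr.sources):
            print(f"cycle {cycle}: A stalls on {instr.name}, waiting for {instr.sources}")
        else:
            exec_latch = (instr, [registers[s] for s in instr.sources])
            pending.add(instr.dest)
            print(f"cycle {cycle}: A issues {instr.name}")
            pc += 1

print("final registers:", registers)
</syntaxhighlight>

In a real processor the stall is produced by hazard-detection hardware rather than software, and techniques such as operand forwarding remove many such stalls; the sketch only shows the control decision involved.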