{{Short description|Parallel computing architecture}}
{{Essay-like|date=April 2017}}
[[Image:MISD.svg|right|225px]]
{{Flynn's Taxonomy}}

In [[computing]], '''multiple instruction, single data''' ('''MISD''') is a type of [[parallel computing]] [[computer architecture|architecture]] in which many functional units perform different operations on the same data. [[Pipeline (computing)|Pipeline]] architectures belong to this type, though a purist might say that the data is different after processing by each stage of the pipeline. [[Fault tolerance|Fault-tolerant]] computers executing the same instructions redundantly in order to detect and mask errors, in a manner known as [[Replication (computer science)|task replication]], may also be considered to belong to this type, although the replicated units perform the same operation rather than different ones.

Applications for this architecture are much less common than those for [[Multiple instruction, multiple data|MIMD]] and [[Single instruction, multiple data|SIMD]], as the latter two are often more appropriate for common data-parallel techniques; specifically, they allow better scaling and better use of computational resources. One prominent example of MISD in computing, however, is the set of [[Space Shuttle]] flight control computers.<ref>{{cite journal |last1=Spector |first1=A. |last2=Gifford |first2=D. |date=September 1984 |title=The space shuttle primary computer system |journal=Communications of the ACM |volume=27 |issue=9 |pages=872–900 |doi=10.1145/358234.358246 |s2cid=39724471 |doi-access=free}}</ref>

==Systolic arrays==
[[Systolic array]]s (a specialized form of '''wavefront''' processor), first described by [[H. T. Kung]] and [[Charles E. Leiserson]], are an example of MISD architecture. In a typical systolic array, [[parallel computing|parallel]] input [[data]] flows through a network of hard-wired [[Microprocessor|processor]] [[node (networking)|node]]s, which combine, process, [[merge algorithm|merge]] or [[sorting algorithm|sort]] the input data into a derived result. Systolic arrays are often hard-wired for a specific operation, such as "multiply and accumulate", to perform massively [[parallel computing|parallel]] integration, [[convolution]], [[correlation]], [[matrix multiplication]] or data-sorting tasks.

A systolic array typically consists of a large monolithic network of primitive computing [[node (computer science)|node]]s, which can be hardwired or software-configured for a specific application. The nodes are usually fixed and identical, while the interconnect is programmable. More general '''wavefront''' processors, by contrast, employ sophisticated and individually programmable nodes, which may or may not be monolithic, depending on the array size and design parameters. Because the [[wave]]-like propagation of data through a [[Systolic array|systolic]] array resembles the [[pulse]] of the human circulatory system, the name ''systolic'' was coined from medical terminology.

A significant benefit of systolic arrays is that all operand data and partial results are contained within (passing through) the processor array; there is no need to access external buses, main memory or internal caches during each operation, as is the case with standard sequential machines. The sequential limits on parallel performance dictated by [[Amdahl's law]] also do not apply in the same way, because data dependencies are implicitly handled by the programmable node interconnect.
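The multiply-and-accumulate behaviour described above can be illustrated with a short cycle-level simulation. The following Python sketch is an illustrative model only, not code from any cited source: the function name, the output-stationary organization, the square-matrix restriction and the skewed operand feeds are assumptions of the sketch, and a real systolic array is hard-wired rather than simulated.

<syntaxhighlight lang="python">
"""Minimal sketch: cycle-level model of an output-stationary systolic array.

Assumptions (not from the article's sources): square n x n matrices, one
multiply-accumulate cell per output element, operands of A entering each row
from the left and operands of B entering each column from the top, with row i
and column j delayed ("skewed") by i and j cycles respectively.
"""

def systolic_matmul(A, B):
    n = len(A)                             # square n x n matrices assumed
    acc = [[0] * n for _ in range(n)]      # local accumulator in each cell
    a_reg = [[0] * n for _ in range(n)]    # A operand currently in cell (i, j)
    b_reg = [[0] * n for _ in range(n)]    # B operand currently in cell (i, j)

    # 3n - 2 cycles are enough to drain the skewed streams through the grid.
    for t in range(3 * n - 2):
        # Operands advance one cell right/down (back to front, no overwrite).
        for i in range(n):
            for j in range(n - 1, 0, -1):
                a_reg[i][j] = a_reg[i][j - 1]
        for j in range(n):
            for i in range(n - 1, 0, -1):
                b_reg[i][j] = b_reg[i - 1][j]
        # Feed the skewed edges: row i of A lags i cycles, column j of B lags j.
        for i in range(n):
            k = t - i
            a_reg[i][0] = A[i][k] if 0 <= k < n else 0
        for j in range(n):
            k = t - j
            b_reg[0][j] = B[k][j] if 0 <= k < n else 0
        # Every cell fires its multiply-accumulate in the same cycle.
        for i in range(n):
            for j in range(n):
                acc[i][j] += a_reg[i][j] * b_reg[i][j]
    return acc


if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    assert systolic_matmul(A, B) == [[19, 22], [43, 50]]
</syntaxhighlight>

Note how every cell performs the same fixed multiply-accumulate each cycle while operands merely pass from neighbour to neighbour: once fed in at the edges, all data and partial results remain inside the array, which is the property claimed above.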
These properties make systolic arrays extremely good at artificial intelligence, image processing, pattern recognition, computer vision and other tasks that animal brains perform exceptionally well. Wavefront processors in general can also be very good at machine learning by implementing self-configuring neural networks in hardware.

While systolic arrays are officially classified as MISD, their classification is somewhat problematic. Because the input is typically a vector of independent values, the systolic array is not [[Single instruction, single data|SISD]]. Since these [[input (computer science)|input]] values are merged and combined into the result(s) and do not maintain their [[independence]] as they would in a [[Single instruction, multiple data|SIMD]] vector processing unit, the [[array data structure|array]] cannot be classified as SIMD either. Consequently, the array cannot be classified as [[Multiple instruction, multiple data|MIMD]], since MIMD can be viewed as a mere collection of smaller SISD and SIMD machines.

Finally, because the data is transformed as it passes through the array from node to node, the multiple nodes are not operating on the same data, which makes the MISD classification a [[misnomer]]. The other reason why a systolic array should not qualify as MISD is the same as the one which disqualifies it from the SISD category: the input data is typically a vector, not a '''single''' data value, although one could argue that any given input vector is a single dataset.

The above notwithstanding, systolic arrays are often offered as a classic example of MISD architecture in textbooks on parallel computing and in engineering classes. If the array is viewed from the outside as [[atomic operation|atomic]], it should perhaps be classified as '''SFMuDMeR''': ''single function, multiple data, merged result(s)''.<ref>Michael J. Flynn, Kevin W. Rudd. ''Parallel Architectures''. CRC Press, 1996.</ref><ref>Quinn, Michael J. ''Parallel Programming in C with MPI and OpenMP''. Boston: McGraw Hill, 2004.</ref><ref>Ibaroudene, Djaffer. "Parallel Processing, EG6370G: Chapter 1, Motivation and History." St Mary's University, San Antonio, TX. Spring 2008.</ref><ref>{{cite book |last1=Null |first1=Linda |last2=Lobur |first2=Julia |title=The Essentials of Computer Organization and Architecture |url=https://archive.org/details/essentialsofcomp00null |url-access=registration |year=2006 |publisher=Jones and Bartlett |page=468}}</ref>

==Footnotes==
{{reflist}}

{{CPU technologies}}
{{Parallel computing}}

[[Category:Flynn's taxonomy]]
[[Category:Parallel computing]]
[[Category:Classes of computers|Misd]]

[[de:Flynnsche Klassifikation#MISD (Multiple Instruction, Single Data)]]