{{Short description|Use of two or more central processing units (CPUs) within a single computer system}}
{{Use dmy dates|date=April 2014}}
{{refimprove|date=February 2014}}

'''Multiprocessing''' ('''MP''') is the use of two or more [[central processing unit]]s (CPUs) within a single [[computer system]].<ref name="Rajagopal1999">{{cite book |author=Raj Rajagopal |title=Introduction to Microsoft Windows NT Cluster Server: Programming and Administration |url=https://books.google.com/books?id=kUJnHJJlnpUC&pg=PA4 |year=1999 |publisher=CRC Press |isbn=978-1-4200-7548-9 |page=4}}</ref><ref name="EbbersKettner2012"/> The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined ([[multi-core (computing)|multiple cores]] on one [[Die (integrated circuit)|die]], multiple dies in one [[Chip carrier|package]], multiple packages in one [[system unit]], etc.).

A '''multiprocessor''' is a computer system having two or more [[Central processing unit|processing units]] (multiple processors), each sharing [[main memory]] and peripherals, in order to process programs simultaneously.<ref>{{cite web |url=http://www.yourdictionary.com/multiprocessor |title=Multiprocessor dictionary definition - multiprocessor defined |website=www.yourdictionary.com |access-date=16 March 2018 |archive-date=16 March 2018 |archive-url=https://web.archive.org/web/20180316151954/http://www.yourdictionary.com/multiprocessor |url-status=live }}</ref><ref>{{cite web |url=http://www.thefreedictionary.com/multiprocessor |title=multiprocessor |access-date=16 March 2018 |via=The Free Dictionary |archive-date=16 March 2018 |archive-url=https://web.archive.org/web/20180316151656/http://www.thefreedictionary.com/multiprocessor |url-status=live }}</ref> A 2009 textbook defined a multiprocessor system similarly, but noted that the processors may share "some or all of the system's memory and I/O facilities"; it also gave '''tightly coupled system''' as a synonymous term.<ref>{{cite book |author=Irv Englander |title=The Architecture of Computer Hardware and Systems Software: An Information Technology Approach |publisher=Wiley |date=2009 |edition=4th |isbn=978-0471715429 |page=265}}</ref>

At the [[operating system]] level, ''multiprocessing'' is sometimes used to refer to the execution of multiple concurrent [[Process (computing)|processes]] in a system, with each process running on a separate CPU or core, as opposed to a single process at any one instant.<ref name="MorleyParker2012">{{cite book |author1=Deborah Morley |author2=Charles Parker |title=Understanding Computers: Today and Tomorrow, Comprehensive |url=https://books.google.com/books?id=-2Ewg8QX8U4C&pg=PA183 |date=13 February 2012 |publisher=Cengage Learning |isbn=978-1-133-19024-0 |page=183}}</ref><ref name="Shibu">{{cite book |author=Shibu K. V. |title=Introduction to Embedded Systems |url=https://books.google.com/books?id=8hfn4gwR90MC&pg=PA402 |publisher=Tata McGraw-Hill Education |isbn=978-0-07-014589-4 |page=402}}</ref> When used with this definition, multiprocessing is sometimes contrasted with [[Computer multitasking|multitasking]], which may use just a single processor but switch it in time slices between tasks (i.e. a [[time-sharing system]]). Multiprocessing, however, means true parallel execution of multiple processes using more than one processor.<ref name="Shibu"/> Multiprocessing does not necessarily mean that a single process or task uses more than one processor simultaneously; the term [[Parallel computing|parallel processing]] is generally used to denote that scenario.<ref name="MorleyParker2012"/>
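The distinction can be illustrated with a minimal sketch (illustrative only, not taken from the cited sources; it assumes a Unix-like system, and the worker count and workload are arbitrary). Each <code>fork()</code> creates an independent process, which a multiprocessing kernel may run on its own CPU at the same time as the others, whereas a single processor could only interleave them in time slices:

<syntaxhighlight lang="c">
/* Illustrative sketch: several independent processes that an SMP kernel
 * may schedule onto different CPUs at the same time.  On a uniprocessor
 * the same processes would merely be time-sliced. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void cpu_bound_work(int id)
{
    volatile unsigned long sum = 0;
    unsigned long i;
    for (i = 0; i < 100000000UL; i++)
        sum += i;                      /* stand-in for a real computation */
    printf("worker %d (pid %ld) finished\n", id, (long)getpid());
}

int main(void)
{
    const int nworkers = 4;            /* arbitrary number of processes */
    int i;

    for (i = 0; i < nworkers; i++) {
        pid_t pid = fork();            /* create an independent process */
        if (pid < 0)
            return 1;                  /* fork failed */
        if (pid == 0) {
            cpu_bound_work(i);         /* child: do the work and exit */
            _exit(0);
        }
    }
    for (i = 0; i < nworkers; i++)
        wait(NULL);                    /* parent: reap all children */
    return 0;
}
</syntaxhighlight>

Whether the children actually run in parallel is decided by the operating system scheduler and by how many CPUs the machine has.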
Other authors prefer to refer to the operating system techniques as [[multiprogramming]] and reserve the term ''multiprocessing'' for the hardware aspect of having more than one processor.<ref name="EbbersKettner2012">{{cite book |author1=Mike Ebbers |author2=John Kettner |author3=Wayne O'Brien |author4=Bill Ogden |title=Introduction to the New Mainframe: z/OS Basics |url=https://books.google.com/books?id=c-a1AgAAQBAJ&pg=PA96 |year=2012 |publisher=IBM |isbn=978-0-7384-3534-3 |page=96}}</ref><ref name="Arora2006">{{cite book |author=Ashok Arora |title=Foundations of Computer Science |url=https://books.google.com/books?id=CrcoszZBMowC&pg=PA149 |year=2006 |publisher=Laxmi Publications |isbn=978-81-7008-971-1 |page=149}}</ref> The remainder of this article discusses multiprocessing only in this hardware sense.

In [[Flynn's taxonomy]], multiprocessors as defined above are [[Multiple instruction, multiple data|MIMD]] machines.<ref name="Giladi2008"/><ref name="Shiva2005">{{cite book |author=Sajjan G. Shiva |title=Advanced Computer Architectures |url=https://books.google.com/books?id=DhdCwk5AhbEC&pg=PA221 |date=20 September 2005 |publisher=CRC Press |isbn=978-0-8493-3758-1 |page=221}}</ref> As the term "multiprocessor" normally refers to tightly coupled systems in which all processors share memory, multiprocessors are not the entire class of MIMD machines, which also contains [[message passing]] multicomputer systems.<ref name="Giladi2008">{{cite book |author=Ran Giladi |title=Network Processors: Architecture, Programming, and Implementation |url=https://books.google.com/books?id=_7aH_4axpwAC&pg=PA293 |year=2008 |publisher=Morgan Kaufmann |isbn=978-0-08-091959-1 |page=293}}</ref>

== Key topics ==

===Processor symmetry===
In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and [[operating system]] software design considerations determines the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one particular CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one particular CPU, whereas user-mode code may be executed in any combination of processors. Multiprocessing systems are often easier to design if such restrictions are imposed, but they tend to be less efficient than systems in which all CPUs are utilized.

Systems that treat all CPUs equally are called [[symmetric multiprocessing]] (SMP) systems. In systems where not all CPUs are equal, system resources may be divided in a number of ways, including [[asymmetric multiprocessing]] (ASMP), [[non-uniform memory access]] (NUMA) multiprocessing, and [[computer cluster|clustered]] multiprocessing.
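Many operating systems also let software impose such asymmetry on itself. The following sketch (illustrative only and Linux-specific, not taken from the cited sources) confines the calling process to one particular CPU, much as a designer might dedicate one processor to interrupt handling or kernel-mode work:

<syntaxhighlight lang="c">
/* Illustrative, Linux-specific sketch: confine the calling process to
 * CPU 0 only, in the spirit of reserving particular CPUs for particular
 * kinds of work in an asymmetric configuration. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);                          /* allow CPU 0 only */

    /* A pid of 0 means "the calling process". */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("this process is now restricted to CPU 0\n");
    return 0;
}
</syntaxhighlight>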
====Master/slave multiprocessor system====
In a master/slave multiprocessor system, the master CPU is in control of the computer and the slave CPU(s) perform assigned tasks. The CPUs can be completely different in terms of speed and architecture. Some (or all) of the CPUs can share a common bus, each can also have a private bus (for private resources), or they may be isolated except for a common communications pathway. Likewise, the CPUs can share common RAM and/or have private RAM that the other processor(s) cannot access. The roles of master and slave can change from one CPU to another.

Two early examples of a mainframe master/slave multiprocessor are the [[Bull Gamma 60]] and the [[Burroughs Large Systems#B5000, B5500, and B5700|Burroughs B5000]].<ref>{{cite manual |title = The Operational Characteristics of the Processors for the Burroughs B5000 |id = 5000-21005A |version = Revision A |year = 1963 |url = http://www.bitsavers.org/pdf/burroughs/LargeSystems/B5000_5500_5700/5000-21005_B5000_operChar.pdf |publisher = [[Burroughs Corporation|Burroughs]] |access-date = June 27, 2023 |archive-date = 30 May 2023 |archive-url = https://web.archive.org/web/20230530061204/http://www.bitsavers.org/pdf/burroughs/LargeSystems/B5000_5500_5700/5000-21005_B5000_operChar.pdf |url-status = live }}</ref>

An early microprocessor-based example of a master/slave multiprocessor system is the Tandy/Radio Shack [[TRS-80 Model 16]] desktop computer, which was released in February 1982 and ran the multi-user/multi-tasking [[Xenix]] operating system, Microsoft's version of UNIX (called TRS-XENIX). The Model 16 has two microprocessors: an 8-bit [[Zilog Z80]] CPU running at 4 MHz, and a 16-bit [[Motorola 68000]] CPU running at 6 MHz. When the system is booted, the Z80 is the master, and the Xenix boot process initializes the slave 68000 and then transfers control to it, whereupon the CPUs change roles: the Z80 becomes a slave processor responsible for all I/O operations (including disk, communications, printer and network, as well as the keyboard and integrated monitor), while the operating system and applications run on the 68000 CPU. The Z80 can also be used for other tasks.

The earlier [[TRS-80 Model II]], which was released in 1979, could also be considered a multiprocessor system, as it had both a Z80 CPU and an Intel 8021<ref>{{cite book |title=TRS-80 Model II Technical Reference Manual |date=1980 |publisher=Radio Shack |page=135}}</ref> microcontroller in the keyboard. The 8021 made the Model II the first desktop computer system with a separate detachable lightweight keyboard connected by a single thin flexible wire, and likely the first keyboard to use a dedicated microcontroller, both attributes that would be copied years later by Apple and IBM.

===Instruction and data streams===
In multiprocessing, the processors can be used to execute a single sequence of instructions in multiple contexts ([[single instruction, multiple data]] or SIMD, often used in [[vector processing]]), multiple sequences of instructions in a single context ([[multiple instruction, single data]] or MISD, used for [[Redundancy (engineering)|redundancy]] in fail-safe systems and sometimes applied to describe [[Pipeline (computing)|pipelined processors]] or [[hyper-threading]]), or multiple sequences of instructions in multiple contexts ([[multiple instruction, multiple data]] or MIMD).

===Processor coupling===
====Tightly coupled multiprocessor system====
Tightly coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP or [[Uniform Memory Access|UMA]]), or may participate in a memory hierarchy with both local and shared memory ([[non-uniform memory access|NUMA]]).
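From the programmer's point of view, the defining property of such a system is that cooperating processes can read and write the same memory. A minimal POSIX sketch of that programming model (illustrative only, not taken from the cited sources; it assumes a Unix-like system where anonymous shared mappings are available):

<syntaxhighlight lang="c">
/* Illustrative POSIX sketch: a parent and child process communicating
 * through a shared region of memory, the programming model that a
 * tightly coupled (shared-memory) multiprocessor supports in hardware. */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One shared integer, visible to both parent and child. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;
    *shared = 0;

    if (fork() == 0) {                 /* child: write into shared memory */
        *shared = 42;
        _exit(0);
    }

    wait(NULL);                        /* parent: wait, then read the value */
    printf("value written by the child: %d\n", *shared);
    munmap(shared, sizeof(int));
    return 0;
}
</syntaxhighlight>

On a tightly coupled machine, the hardware keeps such a shared region coherent even when the cooperating processes run on different CPUs.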
The [[IBM p690]] Regatta is an example of a high-end SMP system. [[Intel]] [[Xeon]] processors dominated the multiprocessor market for business PCs and were the only major x86 option until the release of [[AMD]]'s [[Opteron]] range of processors in 2004. Both ranges of processors had their own onboard cache but provided access to shared memory; the Xeon processors via a common pipe and the Opteron processors via independent pathways to the system [[Random-access memory|RAM]].

Chip multiprocessing, also known as [[Multi-core (computing)|multi-core]] computing, involves more than one processor placed on a single chip and can be thought of as the most extreme form of tightly coupled multiprocessing. Mainframe systems with multiple processors are often tightly coupled.

====Loosely coupled multiprocessor system====
{{main|Shared nothing architecture}}
Loosely coupled multiprocessor systems (often referred to as [[Computer cluster|clusters]]) are based on multiple standalone [[commodity computer]]s with relatively low processor counts, interconnected via a high-speed communication system ([[Gigabit Ethernet]] is common). A Linux [[Beowulf cluster]] is an example of a [[loose coupling|loosely coupled]] system.

Tightly coupled systems perform better and are physically smaller than loosely coupled systems, but have historically required greater initial investments and may [[depreciation|depreciate]] rapidly; nodes in a loosely coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster.

Power consumption is also a consideration. Tightly coupled systems tend to be much more energy-efficient than clusters, because a considerable reduction in power consumption can be realized by designing components to work together from the beginning, whereas loosely coupled systems use components that were not necessarily intended specifically for such use.

Loosely coupled systems also have the ability to run different operating systems or OS versions on different nodes.

== Disadvantages ==
Merging data from multiple [[Thread (computing)|threads]] or [[Process (computing)|processes]] may incur significant overhead due to [[conflict resolution]], [[data consistency]], versioning, and synchronization.<ref>{{Cite book |title=Concurrent Programming: Algorithms, Principles, and Foundations |date=23 December 2012 |publisher=Springer |isbn=978-3642320262}}</ref>
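The pattern can be sketched with POSIX threads (illustrative only, not taken from the cited source; the worker count and workload are arbitrary): each worker computes independently, but folding the partial results into a single total must be serialized behind a lock, and that synchronization is one source of the overhead described above.

<syntaxhighlight lang="c">
/* Illustrative sketch using POSIX threads: each worker computes a partial
 * result independently, but merging into the shared total must be
 * serialized with a lock.  Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4                     /* arbitrary worker count */

static long total = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long local = 0;
    long i;

    (void)arg;
    for (i = 0; i < 1000000; i++)
        local += 1;                    /* independent work, no contention */

    pthread_mutex_lock(&lock);         /* the merge is the serialized part */
    total += local;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[NWORKERS];
    int i;

    for (i = 0; i < NWORKERS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (i = 0; i < NWORKERS; i++)
        pthread_join(t[i], NULL);

    printf("total = %ld\n", total);
    return 0;
}
</syntaxhighlight>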
==See also==
*[[Multiprocessor system architecture]]
*[[Symmetric multiprocessing]]
*[[Asymmetric multiprocessing]]
*[[Multi-core processor]]
*[[BMDFM]] – Binary Modular Dataflow Machine, an SMP MIMD runtime environment
*[[Software lockout]]
*[[OpenHMPP]]

==References==
{{Reflist|40em}}

{{Parallel computing}}

[[Category:Parallel computing]]
[[Category:Classes of computers]]
[[Category:Computing terminology]]