{{redirect|Parallelization|parallelization of manifolds|Parallelization (mathematics)}}
{{short description|Programming paradigm in which many processes are executed simultaneously}}
[[File:IBM Blue Gene P supercomputer.jpg|thumb|300px|Large [[Supercomputer|supercomputers]] such as IBM's [[Blue Gene|Blue Gene/P]] are designed to heavily exploit parallelism.]]
'''Parallel computing''' is a type of [[computing|computation]] in which many calculations or [[Process (computing)|process]]es are carried out simultaneously.<ref>{{cite book |last=Gottlieb |first=Allan |url=http://dl.acm.org/citation.cfm?id=160438 |title=Highly parallel computing |author2=Almasi, George S. |publisher=Benjamin/Cummings |year=1989 |isbn=978-0-8053-0177-9 |location=Redwood City, Calif. |language=en-US}}</ref> Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: [[Bit-level parallelism|bit-level]], [[Instruction-level parallelism|instruction-level]], [[Data parallelism|data]], and [[task parallelism]]. Parallelism has long been employed in [[high-performance computing]], but has gained broader interest due to the physical constraints preventing [[frequency scaling]].<ref name=":0">S.V. Adve ''et al.'' (November 2008). [https://graphics.cs.illinois.edu/sites/default/files/upcrc-wp.pdf "Parallel Computing Research at Illinois: The UPCRC Agenda"] {{webarchive|url=https://web.archive.org/web/20180111165735/https://graphics.cs.illinois.edu/sites/default/files/upcrc-wp.pdf |date=2018-01-11 }} (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall. The [[computer industry]] has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."</ref> As power consumption (and consequently heat generation) by computers has become a concern in recent years,<ref>[[Krste Asanović|Asanovic]] ''et al.'' Old [conventional wisdom]: Power is free, but [[transistor]]s are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are "free".</ref> parallel computing has become the dominant paradigm in [[computer architecture]], mainly in the form of [[multi-core processor]]s.<ref name="View-Power">[[Asanovic, Krste]] ''et al.'' (December 18, 2006). [http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf "The Landscape of Parallel Computing Research: A View from Berkeley"] (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance… Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limits."</ref>

[[File:Parallelism_vs_concurrency.png|thumb|Parallelism vs concurrency]]
In [[computer science]], '''parallelism''' and concurrency are two different things: a parallel program uses [[Multi-core processor|multiple CPU cores]], each core performing a task independently. Concurrency, on the other hand, enables a program to deal with multiple tasks even on a single CPU core; the core switches between tasks (i.e. [[Thread (computing)|threads]]) without necessarily completing each one. A program can have both, neither, or a combination of parallelism and concurrency characteristics.<ref>{{Cite book |last=Marlow |first=Simon |title=Parallel and Concurrent Programming in Haskell |publisher=O'Reilly Media |year=2013 |isbn=9781449335922}}</ref> A short sketch below illustrates the distinction.

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and [[Symmetric multiprocessing|multi-processor]] computers having multiple [[processing element]]s within a single machine, while [[Computer cluster|clusters]], [[Massively parallel (computing)|MPPs]], and [[Grid computing|grids]] use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks.

In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly [[parallel algorithm]]s, particularly those that use concurrency, are more difficult to write than [[sequential algorithm|sequential]] ones,<ref>{{cite book|last=Hennessy|first=John L.|author-link=John L. Hennessy|title=Computer organization and design: the hardware/software interface|year=1999|publisher=Kaufmann|location=San Francisco|isbn=978-1-55860-428-5|edition=2. ed., 3rd print.|author2=Patterson, David A.|author-link2=David Patterson (computer scientist)|author3=Larus, James R.|author-link3=James Larus|url=https://archive.org/details/computerorganiz000henn}}</ref> because concurrency introduces several new classes of potential [[software bug]]s, of which [[race condition]]s are the most common. [[Computer networking|Communication]] and [[Synchronization (computer science)|synchronization]] between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance.

A theoretical [[upper bound]] on the [[Speedup|speed-up]] of a single program as a result of parallelization is given by [[Amdahl's law]], which states that the speed-up is limited by the fraction of the program that cannot be parallelized.

{{TOC limit|limit=4}}
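One common way to state this bound as a formula (the notation here is standard, but not introduced by the article's cited sources): if <math>p</math> is the fraction of a program's execution time that can be parallelized and <math>s</math> is the speed-up achieved on that parallelizable portion, the overall speed-up is

:<math>S = \frac{1}{(1 - p) + \frac{p}{s}}.</math>

Even as <math>s</math> grows without bound, <math>S</math> only approaches <math>1/(1 - p)</math>; for example, if 90% of a program can be parallelized (<math>p = 0.9</math>), no number of processors can make it run more than 10 times faster than the sequential version.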
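The distinction between parallelism and concurrency drawn above can be illustrated with a minimal Python sketch (the language and the hypothetical CPU-bound function <code>count_down</code> are illustrative choices, not taken from the cited sources): the two processes may execute simultaneously on separate cores, while the two threads share a single CPython interpreter and, because of its global interpreter lock, take turns rather than overlap.

<syntaxhighlight lang="python">
# Illustrative sketch: the same CPU-bound task run in parallel
# (separate processes, potentially on separate cores) and concurrently
# (threads interleaved within a single CPython interpreter).
import multiprocessing
import threading

def count_down(n):
    """A CPU-bound task: busy-loop from n down to zero."""
    while n > 0:
        n -= 1

if __name__ == "__main__":
    # Parallelism: two processes can execute at the same time on two cores.
    processes = [multiprocessing.Process(target=count_down, args=(10_000_000,))
                 for _ in range(2)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()

    # Concurrency: two threads share one interpreter; CPython's global
    # interpreter lock makes them take turns on a single core.
    threads = [threading.Thread(target=count_down, args=(10_000_000,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
</syntaxhighlight>

The <code>if __name__ == "__main__":</code> guard keeps the child processes started by <code>multiprocessing</code> from re-running the process-creation code when the module is re-imported under the spawn start method.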