===Parallel computation===
{{main|Parallel computation}}
[[Parallel computing]] is a form of [[computing|computation]] in which many calculations are carried out simultaneously,<ref>{{cite book|last=Gottlieb|first=Allan|title=Highly parallel computing|year=1989|publisher=Benjamin/Cummings|location=Redwood City, Calif.|isbn=978-0-8053-0177-9|url=http://dl.acm.org/citation.cfm?id=160438|author2=Almasi, George S.}}</ref> operating on the principle that large problems can often be divided into smaller ones, which are then solved [[Parallelism (computing)|"in parallel"]]. There are several different forms of parallel computing: [[bit-level parallelism|bit-level]], [[instruction level parallelism|instruction level]], [[data parallelism|data]], and [[task parallelism]]. Parallelism has been employed for many years, mainly in [[high performance computing|high-performance computing]], but interest in it has grown lately due to the physical constraints preventing [[frequency scaling]].<ref>S.V. Adve et al. (November 2008). [http://www.upcrc.illinois.edu/documents/UPCRC_Whitepaper.pdf "Parallel Computing Research at Illinois: The UPCRC Agenda"] {{Webarchive|url=https://web.archive.org/web/20081209154000/http://www.upcrc.illinois.edu/documents/UPCRC_Whitepaper.pdf |date=2008-12-09 }} (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits – increased clock frequency and smarter but increasingly complex architectures – are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."</ref> As power consumption (and consequently heat generation) by computers has become a concern in recent years,<ref>[[Krste Asanović|Asanovic]] et al. Old [conventional wisdom]: Power is free, but transistors are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are "free".</ref> parallel computing has become the dominant paradigm in [[computer architecture]], mainly in the form of [[multi-core processor]]s.<ref name="View-Power">[[Krste Asanović|Asanovic, Krste]] et al. (December 18, 2006). [http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf "The Landscape of Parallel Computing Research: A View from Berkeley"] (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance ... Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limit."</ref> [[Parallel algorithm|Parallel computer programs]] are more difficult to write than sequential ones,<ref>{{cite book|last=Hennessy|first=John L.|title=Computer organization and design : the hardware/software interface|year=1999|publisher=Kaufmann|location=San Francisco|isbn=978-1-55860-428-5|edition=2. ed., 3rd print.|author2=Patterson, David A.|author3=Larus, James R.|url=https://archive.org/details/computerorganiz000henn}}</ref> because concurrency introduces several new classes of potential [[software bug]]s, of which [[race condition]]s are the most common.
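A race condition occurs when the result of a program depends on the unsynchronized interleaving of operations in different threads. The following Python fragment is a minimal illustrative sketch (not drawn from the cited sources): several threads increment a shared counter, which is a read-modify-write sequence that is not atomic, and a lock is then used to serialize the update.

<syntaxhighlight lang="python">
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # read-modify-write; not atomic across threads

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # the lock serializes the read-modify-write
            counter += 1

def run(target):
    global counter
    counter = 0
    threads = [threading.Thread(target=target, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe_increment))  # may print less than 400000 if updates interleave
print(run(safe_increment))    # always prints 400000
</syntaxhighlight>

How often the unsafe version actually loses updates depends on the interpreter's thread-scheduling; the point is that correctness must not rely on a particular interleaving.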
[[Computer networking|Communication]] and [[Synchronization (computer science)|synchronization]] between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. The maximum possible [[speedup|speed-up]] of a single program as a result of parallelization is given by [[Amdahl's law]].
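In one common formulation (the notation below is illustrative): if a fraction <math>P</math> of a program's running time can be parallelized and the remaining <math>1 - P</math> is inherently serial, the speed-up on <math>N</math> processors is

<math display="block">S(N) = \frac{1}{(1 - P) + \frac{P}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - P}.</math>

For example, if 95% of a program can be parallelized (<math>P = 0.95</math>), then no number of processors can yield more than a 20-fold speed-up.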