=== Relevant laws ===
[[File:AmdahlsLaw.svg|right|thumbnail|300px|A graphical representation of [[Amdahl's law]]. The law demonstrates the theoretical maximum [[speedup]] of an overall system and the concept of diminishing returns. If exactly 50% of the work can be parallelized, the best possible speedup is 2 times. If 95% of the work can be parallelized, the best possible speedup is 20 times. According to the law, even with an infinite number of processors, the speedup is constrained by the unparallelizable portion.]]
[[File:Optimizing-different-parts.svg|thumb|300px|Assume that a task has two independent parts, ''A'' and ''B''. Part ''B'' takes roughly 25% of the time of the whole computation. With considerable effort, one may be able to make this part 5 times faster, but this shortens the whole computation only slightly. In contrast, less work may be needed to make part ''A'' twice as fast, which speeds up the overall computation far more than optimizing part ''B'', even though part ''B'''s speedup is greater by ratio (5 times versus 2 times).]]
{{Main|Amdahl's law}}

Optimally, the [[speedup]] from parallelization would be linear: doubling the number of processing elements should halve the runtime, and doubling it a second time should again halve the runtime. However, very few parallel algorithms achieve optimal speedup. Most of them show a near-linear speedup for small numbers of processing elements, which flattens out into a constant value for large numbers of processing elements. The maximum potential speedup of an overall system can be calculated by [[Amdahl's law]].<ref name=":02">{{Citation |last=Bakos |first=Jason D. |title=Chapter 2 - Multicore and data-level optimization: OpenMP and SIMD |date=2016-01-01 |work=Embedded Systems |pages=49–103 |editor-last=Bakos |editor-first=Jason D. |url=https://linkinghub.elsevier.com/retrieve/pii/B978012800342800002X |access-date=2024-11-18 |place=Boston |publisher=Morgan Kaufmann |doi=10.1016/b978-0-12-800342-8.00002-x |isbn=978-0-12-800342-8|url-access=subscription }}</ref>

Amdahl's law indicates that optimal performance improvement is achieved by balancing enhancements to both the parallelizable and non-parallelizable components of a task. Furthermore, it shows that increasing the number of processors yields diminishing returns, with negligible speedup gains beyond a certain point.<ref>{{Cite book |title=The Art of Multiprocessor Programming, Revised Reprint |date=22 May 2012 |publisher=Morgan Kaufmann |isbn=9780123973375}}</ref><ref>{{Cite book |title=Programming Many-Core Chips |isbn=9781441997395 |last1=Vajda |first1=András |date=10 June 2011 |publisher=Springer}}</ref>

Amdahl's law has limitations: it assumes a fixed workload, neglects [[inter-process communication]] and [[Synchronization (computer science)|synchronization]] overheads, and focuses primarily on the computational aspect while ignoring extrinsic factors such as data persistence, I/O operations, and memory access overheads.<ref>{{Cite book |last=Amdahl |first=Gene M. |chapter=Validity of the single processor approach to achieving large scale computing capabilities |date=1967-04-18 |title=Proceedings of the April 18-20, 1967, spring joint computer conference on - AFIPS '67 (Spring) |chapter-url=https://dl.acm.org/doi/10.1145/1465482.1465560 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=483–485 |doi=10.1145/1465482.1465560 |isbn=978-1-4503-7895-6}}</ref><ref name=":1">{{Cite book |title=Computer Architecture: A Quantitative Approach |date=2003 |publisher=Morgan Kaufmann |isbn=978-8178672663}}</ref><ref>{{Cite book |title=Parallel Computer Architecture A Hardware/Software Approach |date=1999 |publisher=Elsevier Science |isbn=9781558603431}}</ref> [[Gustafson's law]] and the [[Neil J. Gunther#Universal Scalability Law|Universal Scalability Law]] give a more realistic assessment of parallel performance.<ref>{{cite book |last1=McCool |first1=Michael |title=Structured Parallel Programming: Patterns for Efficient Computation |last2=Reinders |first2=James |last3=Robison |first3=Arch |publisher=Elsevier |year=2013 |isbn=978-0-12-415993-8 |pages=61}}</ref><ref>{{Cite book |last=Gunther |first=Neil |title=Guerrilla Capacity Planning: A Tactical Approach to Planning for Highly Scalable Applications and Services |year=2007 |isbn=978-3540261384}}</ref>
[[File:Gustafson.png|thumb|right|300px|A graphical representation of [[Gustafson's law]]]]
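In a common formulation, writing <math>p</math> for the fraction of the execution time that can be parallelized and <math>n</math> for the number of processing elements, Amdahl's law bounds the achievable speedup by

: <math>S(n) = \frac{1}{(1 - p) + \frac{p}{n}},</math>

which approaches <math>\tfrac{1}{1 - p}</math> as <math>n</math> grows without bound; with <math>p = 0.95</math>, for example, the speedup can never exceed 20 regardless of the number of processors.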
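Gustafson's law, by contrast, assumes that the parallelizable portion of the workload grows with the number of processors while the serial portion stays fixed. If <math>p</math> now denotes the fraction of the execution time on the parallel machine spent in the parallelizable part, the scaled speedup is

: <math>S(n) = (1 - p) + pn,</math>

which continues to grow with <math>n</math> rather than levelling off at <math>\tfrac{1}{1 - p}</math>.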