==Software==

===Parallel programming languages===
{{main|List of concurrent and parallel programming languages}}
[[List of concurrent and parallel programming languages|Concurrent programming languages]], [[Library (computing)|libraries]], [[Application programming interface|APIs]], and [[parallel programming model]]s (such as [[algorithmic skeleton]]s) have been created for programming parallel computers. These can generally be divided into classes based on the assumptions they make about the underlying memory architecture—shared memory, distributed memory, or shared distributed memory. Shared memory programming languages communicate by manipulating shared memory variables. Distributed memory uses [[message passing]]. [[POSIX Threads]] and [[OpenMP]] are two of the most widely used shared memory APIs, whereas [[Message Passing Interface]] (MPI) is the most widely used message-passing system API.<ref>The [http://awards.computer.org/ana/award/viewPastRecipients.action?id=16 Sidney Fernbach Award given to MPI inventor Bill Gropp] {{Webarchive|url=https://web.archive.org/web/20110725191103/http://awards.computer.org/ana/award/viewPastRecipients.action?id=16 |date=2011-07-25 }} refers to MPI as "the dominant HPC communications interface".</ref> One concept used in parallel programming is the [[Futures and promises|future]], in which one part of a program promises to deliver a required datum to another part of the program at some future time.

Efforts to standardize parallel programming include an open standard called [[OpenHMPP]] for hybrid multi-core parallel programming. The OpenHMPP directive-based programming model offers a syntax to efficiently offload computations onto hardware accelerators and to optimize data movement to and from the hardware memory using [[remote procedure call]]s.

The rise of consumer GPUs has led to support for [[compute kernel]]s, either in graphics APIs (referred to as [[compute shader]]s), in dedicated APIs (such as [[OpenCL]]), or in other language extensions.

===Automatic parallelization===
{{main|Automatic parallelization}}
[[Automatic parallelization]] of a sequential program by a [[compiler]] is the "holy grail" of parallel computing, especially with the aforementioned limit of processor frequency. Despite decades of work by compiler researchers, automatic parallelization has had only limited success.<ref>{{cite book |last1=Shen |first1=John Paul |last2=Lipasti |first2=Mikko H. |year=2004 |title=Modern processor design: fundamentals of superscalar processors |edition=1st |publisher=McGraw-Hill |location=Dubuque, Iowa |isbn=978-0-07-057064-1 |page=561 |quote=However, the holy grail of such research—automated parallelization of serial programs—has yet to materialize. While automated parallelization of certain classes of algorithms has been demonstrated, such success has largely been limited to scientific and numeric applications with predictable flow control (e.g., nested loop structures with statically determined iteration counts) and statically analyzable memory access patterns. (e.g., walks over large multidimensional arrays of float-point data).}}</ref>

Mainstream parallel programming languages remain either [[Explicit parallelism|explicitly parallel]] or (at best) [[Implicit parallelism|partially implicit]], in which a programmer gives the compiler [[Directive (programming)|directives]] for parallelization.
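As a minimal illustration of the directive-based, shared-memory style described above, the following C sketch parallelizes a summation with a single OpenMP directive; the array size and contents are arbitrary choices for the example.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <omp.h>

int main(void) {
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;

    /* Fill the array; the values are arbitrary for this illustration. */
    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* The directive asks the compiler to split the loop iterations among
       threads. The reduction clause gives each thread a private partial sum
       that is combined into the shared variable when the loop ends. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.1f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}
</syntaxhighlight>

An OpenMP-aware compiler (for example, GCC with the <code>-fopenmp</code> flag) distributes the loop iterations across the threads of a shared-memory machine; a compiler without OpenMP support simply ignores the pragma and runs the loop sequentially.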
A few fully implicit parallel programming languages exist—[[SISAL]], Parallel [[Haskell]], [[SequenceL]], [[SystemC]] (for [[Field-programmable gate array|FPGAs]]), [[Mitrionics|Mitrion-C]], [[VHDL]], and [[Verilog]].

===Application checkpointing===
{{main|Application checkpointing}}
As a computer system grows in complexity, the [[mean time between failures]] usually decreases. [[Application checkpointing]] is a technique whereby the computer system takes a "snapshot" of the application—a record of all current resource allocations and variable states, akin to a [[core dump]]. This information can be used to restore the program if the computer should fail. Application checkpointing means that the program has to restart only from its last checkpoint rather than from the beginning. While checkpointing provides benefits in a variety of situations, it is especially useful in highly parallel systems with a large number of processors used in [[high performance computing]].<ref>''Encyclopedia of Parallel Computing, Volume 4'' by David Padua 2011 {{ISBN|0387097651}} page 265</ref>
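In its simplest form, a checkpoint is just the program's state written to stable storage so that a restarted run can resume from the last snapshot instead of the beginning. The C sketch below is a minimal single-process illustration; the file name, state layout, and checkpoint interval are arbitrary choices, and real checkpointing systems for parallel machines must additionally coordinate snapshots across many processes.

<syntaxhighlight lang="c">
#include <stdio.h>

/* State to be preserved across failures; the layout is an arbitrary choice
   for this illustration. */
struct state {
    long step;   /* next loop iteration to execute */
    double acc;  /* running result of the work done so far */
};

/* Write the current state to a checkpoint file (name chosen for the example). */
static void save_checkpoint(const struct state *s) {
    FILE *f = fopen("checkpoint.bin", "wb");
    if (f) {
        fwrite(s, sizeof *s, 1, f);
        fclose(f);
    }
}

/* Load the last checkpoint, if any; returns 1 on success, 0 otherwise. */
static int load_checkpoint(struct state *s) {
    FILE *f = fopen("checkpoint.bin", "rb");
    if (!f)
        return 0;
    int ok = fread(s, sizeof *s, 1, f) == 1;
    fclose(f);
    return ok;
}

int main(void) {
    struct state s = { 0, 0.0 };

    /* On restart, resume from the last snapshot rather than the beginning. */
    if (load_checkpoint(&s))
        printf("resuming from step %ld\n", s.step);

    for (; s.step < 1000000; s.step++) {
        if (s.step % 100000 == 0)
            save_checkpoint(&s);   /* snapshot before this block of work */
        s.acc += s.step * 0.5;     /* stand-in for the real computation */
    }

    printf("done, acc = %.1f\n", s.acc);
    return 0;
}
</syntaxhighlight>

Because the snapshot is taken before the work of the current step, re-executing from the saved step after a failure does not double-count any completed work.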