{{short description|Multiprocessing memory architecture}}
{{Use British English|date=February 2024}}
{{Use dmy dates|date=February 2024}}
{{More citations needed|date=February 2024}}
[[Image:Distributed Memory.jpeg|right|300px|thumb|An illustration of a distributed memory system of three computers.]]

In [[computer science]], '''distributed memory''' refers to a [[Multiprocessing|multiprocessor computer system]] in which each [[Central processing unit|processor]] has its own private [[Computer memory|memory]].<ref name="pama21" /> Computational tasks can operate only on local data; if remote data are required, the task must communicate with one or more remote processors. In contrast, a [[Shared memory architecture|shared memory]] multiprocessor offers a single memory space used by all processors. Processors do not have to be aware of where data resides, except that there may be performance penalties and that race conditions must be avoided.

In a distributed memory system there is typically a processor, a memory, and some form of interconnection that allows programs on each processor to interact with one another. The interconnect can be organised with [[Network_topology#Point-to-point|point-to-point links]], or separate hardware can provide a switching network. The [[network topology]] is a key factor in determining how the multiprocessor machine [[Scalability|scales]]. The links between nodes can be implemented using a standard network protocol (for example, [[Ethernet]]), using bespoke network links (as in, for example, the [[transputer]]), or using [[Dual-ported RAM|dual-ported memories]].

==Programming distributed memory machines==
The key issue in programming distributed memory systems is how to distribute the data over the memories. Depending on the problem being solved, the data can be distributed statically, or it can be moved through the nodes. Data can be moved on demand, or it can be pushed to the new nodes in advance.

As an example, if a problem can be described as a pipeline where data ''x'' is processed successively through functions ''f'', ''g'', ''h'', etc. (the result being ''h''(''g''(''f''(''x'')))), then it can be expressed as a distributed memory problem in which the data is transmitted first to the node that computes ''f'', which passes the result on to the second node, which computes ''g'', and finally to the third node, which computes ''h''. This is also known as [[Systolic array|systolic computation]].
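The pattern can be illustrated with a minimal sketch in C using the [[Message Passing Interface]] (MPI), a common standard for message passing on distributed memory machines. The stage functions ''f'', ''g'' and ''h'' and the sample input are placeholders for arbitrary computations:

<syntaxhighlight lang="c">
#include <mpi.h>
#include <stdio.h>

/* Placeholder stage functions; any computation on local data would do. */
static double f(double x) { return x + 1.0; }
static double g(double x) { return x * 2.0; }
static double h(double x) { return x - 3.0; }

int main(int argc, char **argv)
{
    int rank, size;
    double value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 3) {
        /* The pipeline needs one process per stage. */
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Each process holds only its own private `value`; data moves
       between stages exclusively via explicit messages. */
    if (rank == 0) {                       /* first node: computes f */
        value = f(42.0);                   /* 42.0 is a sample input x */
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {                /* second node: computes g */
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        value = g(value);
        MPI_Send(&value, 1, MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);
    } else if (rank == 2) {                /* third node: computes h */
        MPI_Recv(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        value = h(value);
        printf("h(g(f(x))) = %f\n", value);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Run with at least three processes (for example, <code>mpirun -np 3 ./pipeline</code>); each process addresses only its own private memory, and the three stages may reside on separate machines connected by the interconnect.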
Data can be kept statically in nodes if most computations happen locally and only changes on edges have to be reported to other nodes. An example of this is a simulation in which data are modelled using a grid and each node simulates a small part of the larger grid. On every iteration, nodes inform all neighbouring nodes of the new edge data.

==Distributed shared memory==
Similarly, in [[distributed shared memory]] each node of a cluster has access to a large shared memory in addition to each node's limited non-shared private memory.

==Shared memory vs. distributed memory vs. distributed shared memory==
* The advantage of (distributed) shared memory is that it offers a unified address space in which all data can be found.
* The advantage of distributed memory is that it excludes race conditions and that it forces the programmer to think about data distribution.
* The advantage of distributed (shared) memory is that it is easier to design a machine that scales with the algorithm.

Distributed shared memory hides the mechanism of communication; it does not hide the latency of communication.

==See also==
* [[Memory virtualization]]
* [[Distributed cache]]

==References==
{{reflist|refs=
<ref name="pama21">{{cite book |title=Modeling of Resistivity and Acoustic Borehole Logging Measurements Using Finite Element Methods |first1=David |last1=Pardo |first2=Paweł J. |last2=Matuszyk |first3=Vladimir |last3=Puzyrev |first4=Carlos |last4=Torres-Verdín |first5=Myung Jin |last5=Nam |first6=Victor M. |last6=Calo |year=2021 |chapter=Parallel implementation |publisher=Elsevier |doi=10.1016/C2019-0-02722-7 |isbn=978-0-12-821454-1 |via=ScienceDirect |quote=Distributed memory refers to a computing system in which each processor has its memory. Computational tasks efficiently operate with local data, but when remote data is required, the task must communicate (using explicit messages) with remote processors to transfer data. This type of parallel computing is standard on supercomputers equipped with many thousands of computing nodes.}}</ref>
}}

{{Parallel Computing}}

{{DEFAULTSORT:Distributed Memory}}
[[Category:Parallel computing]]
[[Category:Distributed computing architecture]]