===An example===
Consider the computational problem of finding a coloring of a given graph ''G''. Different fields might take the following approaches:

; Centralized algorithms<ref name=":0" />
* The graph ''G'' is encoded as a string, and the string is given as input to a computer. The computer program finds a coloring of the graph, encodes the coloring as a string, and outputs the result.
; Parallel algorithms
* Again, the graph ''G'' is encoded as a string. However, multiple computers can access the same string in parallel. Each computer might focus on one part of the graph and produce a coloring for that part.
* The main focus is on high-performance computation that exploits the processing power of multiple computers in parallel.
; Distributed algorithms
* The graph ''G'' is the structure of the computer network. There is one computer for each node of ''G'' and one communication link for each edge of ''G''. Initially, each computer only knows about its immediate neighbors in the graph ''G''; the computers must exchange messages with each other to discover more about the structure of ''G''. Each computer must produce its own color as output.
* The main focus is on coordinating the operation of an arbitrary distributed system.<ref name=":0" />

While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is much interaction between the two fields. For example, the [[Cole–Vishkin algorithm]] for graph coloring<ref>{{harvtxt|Cole|Vishkin|1986}}. {{harvtxt|Cormen|Leiserson|Rivest|1990}}, Section 30.5.</ref> was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm. Moreover, a parallel algorithm can be implemented either in a parallel system (using shared memory) or in a distributed system (using message passing).<ref>{{harvtxt|Andrews|2000}}, p. ix.</ref> The traditional boundary between parallel and distributed algorithms (choose a suitable network vs. run in any given network) does not lie in the same place as the boundary between parallel and distributed systems (shared memory vs. message passing).
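The distributed setting described above can be sketched in code. The following is an illustrative simulation, not an algorithm from the article: each node knows only its own identifier and its immediate neighbors, and it learns neighbors' colors solely through messages exchanged in synchronous rounds. The round rule (a node commits when it has the highest ID among its still-uncolored neighbors, then picks the smallest color absent from its inbox) is one simple way to guarantee a proper coloring; the function name and data layout are this sketch's own conventions.

```python
def distributed_greedy_coloring(adjacency):
    """Simulate a synchronous message-passing network coloring itself.

    adjacency: dict mapping node id -> set of neighbor ids (the graph G,
    which is also the structure of the simulated network).
    """
    colors = {}                          # colors committed so far
    inbox = {v: {} for v in adjacency}   # messages received: neighbor -> color
    while len(colors) < len(adjacency):
        # One round: an uncolored node commits if every uncolored
        # neighbor has a smaller ID, so no two adjacent nodes commit
        # in the same round.
        decided = {}
        for v in adjacency:
            if v in colors:
                continue
            if all(u in colors or u < v for u in adjacency[v]):
                used = set(inbox[v].values())
                c = 0
                while c in used:
                    c += 1
                decided[v] = c
        # Message exchange: each newly colored node informs its neighbors.
        for v, c in decided.items():
            colors[v] = c
            for u in adjacency[v]:
                inbox[u][v] = c
    return colors

# A 4-cycle: each node sees only its two neighbors, yet the network
# converges on a proper coloring through local decisions alone.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = distributed_greedy_coloring(ring)
assert all(coloring[u] != coloring[v] for u in ring for v in ring[u])
```

Note the contrast with the centralized approach: no piece of code here ever inspects the whole graph at once; each decision uses only information that arrived over the node's own links.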