==Design considerations and variations==
{{unreferenced section|date=December 2015}}
One feature of distributed grids is that they can be formed from computing resources belonging to one or multiple individuals or organizations (known as multiple [[administrative domain]]s). This can facilitate commercial transactions, as in [[utility computing]], or make it easier to assemble [[volunteer computing]] networks.

One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes.

However, due to the lack of central control over the hardware, there is no way to guarantee that [[Node (computer science)|nodes]] will not drop out of the network at random times. Some nodes (like laptops or [[dial-up]] Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results in the expected time.

Another set of what could be termed social compatibility issues in the early days of grid computing related to the goals of grid developers to carry their innovation beyond the original field of high-performance computing and across disciplinary boundaries into new fields, like that of high-energy physics.<ref>{{Cite journal|last1=Kertcher|first1=Zack|last2=Coslor|first2=Erica|date=2018-07-10|title=Boundary Objects and the Technical Culture Divide: Successful Practices for Voluntary Innovation Teams Crossing Scientific and Professional Fields|journal=Journal of Management Inquiry|volume=29|language=en|pages=76–91|doi=10.1177/1056492618783875|issn=1056-4926|hdl=11343/212143|s2cid=149911242|url=http://minerva-access.unimelb.edu.au/bitstream/11343/212143/5/Kertcher%20%26%20Coslor%20-%20Boundary%20Objects%20and%20the%20Technical%20Culture%20Divide%202018-02-13.pdf|doi-access=free|access-date=2019-09-18|archive-date=2022-03-28|archive-url=https://web.archive.org/web/20220328181127/https://minerva-access.unimelb.edu.au/bitstream/11343/212143/5/Kertcher%20%26%20Coslor%20-%20Boundary%20Objects%20and%20the%20Technical%20Culture%20Divide%202018-02-13.pdf|url-status=live}}</ref>

The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors. In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust "client" nodes must place in the central system, such as placing applications in virtual machines.
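The redundant-validation and deadline-based reassignment strategies described above can be illustrated with a short sketch. The following Python fragment is an illustrative sketch only, not the scheduler of any real grid system; the names <code>WorkUnit</code>, <code>Scheduler</code>, <code>QUORUM</code>, and <code>DEADLINE</code> are assumptions introduced for this example.

<syntaxhighlight lang="python">
import random
import time
from collections import defaultdict

# Illustrative sketch only (not from any real grid system): a hypothetical
# scheduler that assigns each work unit to randomly chosen nodes, accepts a
# result once a quorum of nodes agree, flags disagreeing nodes, and reassigns
# units whose nodes miss a reporting deadline.

QUORUM = 2          # at least two matching results are required
DEADLINE = 3600.0   # seconds a node has to report its result


class WorkUnit:
    def __init__(self, unit_id, payload):
        self.unit_id = unit_id
        self.payload = payload
        self.results = defaultdict(list)   # result value -> nodes reporting it
        self.assigned = {}                 # node -> time of assignment
        self.validated = None              # accepted result, once quorum reached


class Scheduler:
    def __init__(self, nodes, units):
        self.nodes = nodes
        self.units = {u.unit_id: u for u in units}

    def assign(self, unit_id):
        """Assign the unit to a random node that has not worked on it yet."""
        unit = self.units[unit_id]
        candidates = [n for n in self.nodes if n not in unit.assigned]
        if not candidates:
            return None
        node = random.choice(candidates)
        unit.assigned[node] = time.time()
        return node

    def report(self, unit_id, node, result):
        """Record a node's result and validate once QUORUM nodes agree.

        Returns the nodes whose answers disagree with a validated result;
        these are candidates for being malfunctioning or malicious.
        """
        unit = self.units[unit_id]
        unit.results[result].append(node)
        if len(unit.results[result]) >= QUORUM:
            unit.validated = result
        if unit.validated is None:
            return []
        return [n for r, nodes in unit.results.items()
                if r != unit.validated for n in nodes]

    def reassign_overdue(self):
        """Hand out fresh copies of units whose nodes missed the deadline."""
        now = time.time()
        for unit in self.units.values():
            if unit.validated is not None:
                continue
            for node in [n for n, t in unit.assigned.items()
                         if now - t > DEADLINE]:
                del unit.assigned[node]
                self.assign(unit.unit_id)
</syntaxhighlight>

In practice, as noted above, each work unit would also be made large enough to tolerate intermittent connectivity, so that nodes need not stay online between receiving a unit and reporting its result.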
Public systems, or those crossing administrative domains (including different departments in the same organization), often need to run on [[Heterogeneous computing|heterogeneous]] systems, using different [[operating systems]] and [[computer architecture|hardware architectures]]. With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). [[Cross-platform]] languages can reduce the need to make this trade-off, though potentially at the expense of high performance on any given [[Node (computer science)|node]] (due to run-time interpretation or lack of optimization for the particular platform). Various [[middleware]] projects have created generic infrastructure to allow diverse scientific and commercial projects to harness a particular associated grid or to set up new grids. [[BOINC]] is a common one for various academic projects seeking public volunteers; more are listed at the [[Grid computing#See also|end of the article]].

The middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware-independent. Example areas include [[Service level agreement|SLA]] management, trust and security, [[Virtual organization (grid computing)|virtual organization]] management, license management, portals, and data management. These technical areas may be taken care of in a commercial solution, though the cutting edge of each area is often found within specific research projects examining the field.
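As an illustration of the middleware role described above, the following Python fragment sketches a minimal, hypothetical middleware-style layer that accepts a platform-neutral job description and matches it to heterogeneous resources. The names <code>GridMiddleware</code> and <code>JobDescription</code> and their methods are assumptions introduced for this example and do not correspond to the API of BOINC or any other real middleware.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

# Illustrative sketch only: a hypothetical, minimal middleware-style layer
# between applications and heterogeneous resources. The names below are
# assumptions for this example, not the API of any real grid middleware.


@dataclass
class JobDescription:
    """Platform-neutral description of a job, so one submission can target
    different operating systems and hardware architectures."""
    executable: str                     # logical name, resolved per platform
    arguments: list = field(default_factory=list)
    requirements: dict = field(default_factory=dict)  # e.g. {"arch": "x86_64"}


class GridMiddleware:
    """Thin layer between the application and the underlying resources."""

    def __init__(self, resources):
        # Each resource is described by a dict of properties, e.g.
        # {"name": "clusterA", "arch": "x86_64", "os": "linux"}.
        self.resources = resources
        self.jobs = []

    def submit(self, job):
        """Match the job against resource properties and queue it."""
        matching = [r for r in self.resources
                    if all(r.get(k) == v for k, v in job.requirements.items())]
        if not matching:
            raise RuntimeError("no resource satisfies the job requirements")
        self.jobs.append((job, matching[0]))
        return len(self.jobs) - 1        # job id handed back to the caller

    def status(self, job_id):
        """Return a coarse job state; dispatch and accounting are omitted."""
        return "queued" if 0 <= job_id < len(self.jobs) else "unknown"


if __name__ == "__main__":
    grid = GridMiddleware([{"name": "clusterA", "arch": "x86_64", "os": "linux"}])
    job_id = grid.submit(JobDescription("simulate", requirements={"arch": "x86_64"}))
    print(grid.status(job_id))           # -> "queued"
</syntaxhighlight>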