== Development ==
[[File:BSC-Beowulf-cluster.JPG|thumb|Detail of the first Beowulf cluster at [[Barcelona Supercomputing Center]]]]
A description of the Beowulf cluster, from the original "how-to", which was published by Jacek Radajewski and Douglas Eadline under the [[Linux Documentation Project]] in 1998:<ref>{{Cite web |title=Beowulf HOWTO |last1=Radajewski |first1=Jacek |last2=Eadline |first2=Douglas |work=ibiblio.org |date=22 November 1998 |access-date=8 June 2021 |url=http://ibiblio.org/pub/Linux/docs/HOWTO/archive/Beowulf-HOWTO.html#ss2.2 |version=v1.1.1}}</ref>

{{Blockquote|
Beowulf is a multi-computer [[computer architecture|architecture]] which can be used for [[parallel computing|parallel computations]]. It is a system which usually consists of one server node, and one or more client nodes connected via [[Ethernet]] or some other network. It is a system built using commodity hardware components, like any PC capable of running a [[Unix-like]] operating system, with standard Ethernet adapters, and switches. It does not contain any custom hardware components and is trivially reproducible. Beowulf also uses commodity software like the FreeBSD, Linux or Solaris operating system, Parallel Virtual Machine ([[Parallel Virtual Machine|PVM]]) and Message Passing Interface ([[Message Passing Interface|MPI]]).

The server node controls the whole cluster and serves files to the client nodes. It is also the cluster's console and [[gateway (computer networking)|gateway]] to the outside world. Large Beowulf machines might have more than one server node, and possibly other nodes dedicated to particular tasks, for example consoles or monitoring stations. In most cases, client nodes in a Beowulf system are dumb, the dumber the better. Nodes are configured and controlled by the server node, and do only what they are told to do. In a disk-less client configuration, a client node doesn't even know its [[IP address]] or name until the server tells it.

One of the main differences between Beowulf and a Cluster of Workstations (COW) is that Beowulf behaves more like a single machine rather than many workstations. In most cases client nodes do not have keyboards or monitors, and are accessed only via remote login or possibly serial terminal. Beowulf nodes can be thought of as a [[Central processing unit|CPU]] + memory package which can be plugged into the cluster, just like a CPU or memory module can be plugged into a motherboard.

Beowulf is not a special software package, new network topology, or the latest kernel hack. Beowulf is a technology of clustering computers to form a parallel, virtual supercomputer. Although there are many software packages such as kernel modifications, PVM and MPI libraries, and configuration tools which make the Beowulf architecture faster, easier to configure, and much more usable, one can build a Beowulf class machine using a standard Linux distribution without any additional software. If you have two networked computers which share at least the <code>/home</code> file system via [[Network File System|NFS]], and trust each other to execute remote shells ([[Remote Shell|rsh]]), then it could be argued that you have a simple, two node Beowulf machine.
}}
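The quotation above names MPI as one of the commodity message-passing layers used on Beowulf clusters. As an illustration only (it is not part of the original HOWTO), the following minimal C program sketches the programming model, assuming an MPI implementation such as Open MPI or MPICH is installed on every node: the same binary runs as one process per node (or per core), and each process learns its rank within the cluster-wide communicator.

<syntaxhighlight lang="c">
/*
 * Illustrative sketch only: each MPI process reports its rank and the
 * node it is running on. Assumes an MPI implementation (e.g. Open MPI
 * or MPICH) is installed on all nodes. Compile with: mpicc hello.c -o hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start the MPI runtime          */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes      */
    MPI_Get_processor_name(name, &name_len);  /* hostname of the node           */

    printf("Process %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();                           /* shut down the MPI runtime      */
    return 0;
}
</syntaxhighlight>

Such a program is typically compiled with the <code>mpicc</code> wrapper and started from the server node with <code>mpirun</code>, which launches one copy of the process on each participating client node.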
=== Operating systems ===
{{unreferenced section|date=November 2020}}
[[File:Beowulf.jpg|thumb|A home-built Beowulf cluster composed of [[White box (computer hardware)|white box]] PCs]]
{{As of|2014}} a number of [[Linux distribution]]s, and at least one [[Berkeley Software Distribution|BSD]], are designed for building Beowulf clusters. These include:
* [[MOSIX]], geared toward computationally intensive, IO-low applications
* [[Rocks Cluster Distribution]], latest release 2017
* [[DragonFly BSD]], latest release 2022
* [[Quantian]], latest release 2006, a [[live CD|live DVD]] with scientific applications, remastered from [[Knoppix]]
* [[Kentucky Linux Athlon Testbed]], a physical installation at the University of Kentucky

The following are no longer maintained:
* [[Kerrighed]] (end of life 2013)
* [[OpenMosix]] (end of life 2008), forked from MOSIX
* [[ClusterKnoppix]], forked from [[Knoppix]] and based on [[OpenMosix]]
* [[PelicanHPC]], latest release 2016, based on [[Debian#Live images|Debian Live]]

A cluster can be set up by using Knoppix bootable CDs in combination with [[OpenMosix]]. The computers link together automatically, without the need for complex configuration, to form a Beowulf cluster using all CPUs and [[Random-access memory|RAM]] in the cluster. A Beowulf cluster is scalable to a nearly unlimited number of computers, limited only by the overhead of the network.

Provisioning of operating systems and other software for a Beowulf cluster can be automated using software such as [[Open Source Cluster Application Resources]] (OSCAR). OSCAR installs on top of a standard installation of a supported Linux distribution on a cluster's head node.
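On a manually administered cluster (as opposed to one provisioned with a tool such as OSCAR), work is commonly distributed from the head node by pointing the MPI launcher at a list of client nodes. The following is a sketch only, using Open MPI's hostfile syntax with placeholder hostnames (<code>node01</code>, <code>node02</code>) and the <code>hello</code> program from the example above:

<pre>
# hostfile: one line per client node, with the number of processes to start on each
node01 slots=4
node02 slots=4

# run from the server (head) node; starts 8 processes across the listed nodes
mpirun --hostfile hostfile -np 8 ./hello
</pre>

Because the nodes share a file system such as NFS and allow remote execution, the head node can start the same binary on every client without copying it by hand, which is what makes the cluster behave like a single machine.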