Google File System
==Design==
[[File:GoogleFileSystemGFS.svg|thumb|300px|Google File System is designed for system-to-system interaction, and not for user-to-system interaction. The chunk servers replicate the data automatically.]]

GFS is enhanced for Google's core data storage and usage needs (primarily the [[Google (search engine)|search engine]]), which can generate enormous amounts of data that must be retained. Google File System grew out of an earlier Google effort, "BigFiles", developed by [[Larry Page]] and [[Sergey Brin]] in the early days of Google, while it was still located at [[Stanford]]. Files are divided into fixed-size ''chunks'' of 64 [[megabyte]]s, similar to clusters or sectors in regular file systems; chunks are only extremely rarely overwritten or shrunk, as files are usually appended to or read. GFS is also designed and optimized to run on Google's computing clusters, dense nodes built from cheap "commodity" computers, so precautions must be taken against the high failure rate of individual nodes and the resulting data loss. Other design decisions favor high data [[throughput]], even when it comes at the cost of [[Latency (engineering)#Computer hardware and operating systems|latency]].

A GFS cluster consists of multiple nodes, divided into two types: a single ''Master'' node and multiple ''chunkservers''. Each file is divided into fixed-size chunks, which the chunkservers store. Each chunk is assigned a globally unique 64-bit label by the master node at the time of creation, and the master maintains the logical mappings of files to their constituent chunks. Each chunk is replicated several times throughout the network.
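Because chunks have a fixed 64 MB size, a client can compute which chunk holds any byte of a file with simple integer arithmetic before ever contacting the master. The following is an illustrative sketch, not code from GFS itself; the function name is hypothetical:

```python
CHUNK_SIZE = 64 * 1024 * 1024  # GFS uses fixed-size 64 MB chunks

def chunk_index(byte_offset: int) -> int:
    """Map a byte offset within a file to the index of the chunk
    that holds it (chunks never overlap, so this is pure division)."""
    return byte_offset // CHUNK_SIZE

# A client translates (filename, byte offset) into (filename, chunk index),
# then asks the master for that chunk's 64-bit handle and replica locations.
assert chunk_index(0) == 0                  # first byte is in chunk 0
assert chunk_index(CHUNK_SIZE - 1) == 0     # last byte of chunk 0
assert chunk_index(200 * 1024 * 1024) == 3  # byte 200 MiB falls in chunk 3
```

This is the translation step described above: the master is only consulted for the chunk index, never for the data itself.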
By default, each chunk is replicated three times, but this is configurable.{{Sfn | Ghemawat | Gobioff | Leung | 2003}} Files in high demand may have a higher replication factor, while files for which the application client uses strict storage optimizations may be replicated fewer than three times, in order to cope with quick garbage-collection policies.{{Sfn | Ghemawat | Gobioff | Leung | 2003}}

The Master server does not usually store the actual chunks, but rather all the [[metadata]] associated with them: the tables mapping the 64-bit labels to chunk locations and to the files they make up (the file-to-chunk mapping), the locations of the copies of each chunk, which processes are reading or writing a particular chunk, and whether a chunk is being "snapshotted" in order to replicate it (usually at the instigation of the Master server when, due to node failures, the number of copies of a chunk has fallen below the set number). The Master server keeps all this metadata current by periodically receiving updates from each chunkserver ("heartbeat messages").

Permissions for modifications are handled by a system of time-limited, expiring "leases": the Master server grants a process permission to modify a chunk for a finite period, during which no other process is granted permission by the Master server to modify that chunk. The modifying chunkserver, which is always the primary chunk holder, then propagates the changes to the chunkservers holding the backup copies. The changes are not saved until all chunkservers acknowledge them, thus guaranteeing the completion and [[Atomicity (database systems)|atomicity]] of the operation.

Programs access the chunks by first querying the Master server for the locations of the desired chunks; if the chunks are not being operated on (i.e. no outstanding leases exist), the Master replies with the locations, and the program then contacts and receives the data from the chunkserver directly (similar to [[Kazaa]] and its [[supernode (networking)|supernode]]s). Unlike most other file systems, GFS is not implemented in the [[kernel (operating system)|kernel]] of an [[operating system]], but is instead provided as a [[userspace]] library.<ref>{{Cite book| last=Kyriazis|first=Dimosthenis| title=Data Intensive Storage Services for Cloud Environments|publisher=IGI Global| year=2013|isbn=9781466639355|pages=13| url=https://books.google.com/books?id=mNCeBQAAQBAJ&pg=PA13}}</ref>
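The read path described above, where the master serves only metadata and the bulk data flows directly between client and chunkserver, can be sketched as follows. This is a minimal illustration under assumed names (the `Master`, `Chunkserver`, and `read` identifiers are hypothetical, not the real GFS API):

```python
import random

CHUNK_SIZE = 64 * 1024 * 1024  # fixed 64 MB chunks

class Master:
    """Holds only metadata: for each file, a list of
    (64-bit chunk handle, replica locations) per chunk index."""
    def __init__(self):
        self.files = {}  # filename -> [(handle, [chunkserver ids]), ...]

    def lookup(self, filename, chunk_index):
        return self.files[filename][chunk_index]

class Chunkserver:
    """Stores the actual chunk data, keyed by chunk handle."""
    def __init__(self):
        self.chunks = {}  # handle -> bytes

    def read(self, handle, offset, length):
        return self.chunks[handle][offset:offset + length]

def read(master, servers, filename, offset, length):
    # Step 1: translate the byte offset to a chunk index and ask the
    # master for that chunk's handle and replica locations (metadata only).
    idx = offset // CHUNK_SIZE
    handle, replicas = master.lookup(filename, idx)
    # Step 2: fetch the data directly from one replica; the bulk
    # data never passes through the master.
    server = servers[random.choice(replicas)]
    return server.read(handle, offset % CHUNK_SIZE, length)
```

A short usage example, with one chunkserver holding a single replicated chunk:

```python
servers = {"cs1": Chunkserver()}
servers["cs1"].chunks[7] = b"hello world"   # handle 7 lives on cs1
master = Master()
master.files["/logs/a"] = [(7, ["cs1"])]    # file -> chunk metadata
assert read(master, servers, "/logs/a", 0, 5) == b"hello"
```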