=== Increasing rebuild time and failure probability ===
Drive capacity has grown at a much faster rate than transfer speed, and error rates have only fallen a little in comparison. Therefore, larger-capacity drives may take hours if not days to rebuild, during which time other drives may fail or yet-undetected read errors may surface. The rebuild rate is also limited if the entire array is still in operation at reduced capacity.<ref>Patterson, D., Hennessy, J. (2009). ''Computer Organization and Design''. New York: Morgan Kaufmann Publishers. pp. 604–605.</ref> Given an array with only one redundant drive (which applies to RAID levels 3, 4 and 5, and to "classic" two-drive RAID 1), a second drive failure would cause complete failure of the array. Even though individual drives' [[mean time between failure]] (MTBF) has increased over time, this increase has not kept pace with the increased storage capacity of the drives. The time to rebuild the array after a single drive failure, as well as the chance of a second failure during a rebuild, have increased over time.<ref name="StorageForum">{{cite web |url=http://www.enterprisestorageforum.com/technology/features/article.php/3839636 |title=RAID's Days May Be Numbered |last=Newman |first=Henry |date=2009-09-17 |access-date=2010-09-07 |work=EnterpriseStorageForum}}</ref> Some commentators have declared that RAID 6 is only a "band aid" in this respect, because it only kicks the problem a little further down the road.<ref name="StorageForum" /> However, according to the 2006 [[NetApp]] study of Berriman et al., the chance of failure decreases by a factor of about 3,800 (relative to RAID 5) for a proper implementation of RAID 6, even when using commodity drives.<ref name="ACMQ" />{{cnf}} Nevertheless, if the currently observed technology trends remain unchanged, in 2019 a RAID 6 array will have the same chance of failure as its RAID 5 counterpart had in 2010.<ref name="ACMQ" />{{Unreliable source?|date=October 2020}}
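The scaling argument above can be made concrete with a back-of-the-envelope calculation. The sketch below assumes independent, exponentially distributed drive failures and best-case sequential rebuild throughput; the specific figures (12 TB drives, 150 MB/s sustained rate, a vendor-quoted MTBF of 1.2 million hours, an 8-drive single-parity array) are illustrative assumptions, not values from the cited sources.

```python
import math

def rebuild_hours(capacity_tb, speed_mb_s):
    """Best-case time to rewrite one whole drive at a sustained rate,
    ignoring contention from the array's ongoing production I/O."""
    return capacity_tb * 1e12 / (speed_mb_s * 1e6) / 3600

def p_second_failure(n_drives, rebuild_h, mtbf_h):
    """P(at least one surviving drive fails during the rebuild window),
    assuming independent, exponentially distributed failures."""
    surviving = n_drives - 1
    return 1 - math.exp(-surviving * rebuild_h / mtbf_h)

# Illustrative (assumed) figures: 12 TB drives, 150 MB/s, 8-drive RAID 5
t = rebuild_hours(12, 150)         # roughly a day of exposure per rebuild
p = p_second_failure(8, t, 1.2e6)  # risk of array loss during that window
```

Because the rebuild window grows roughly linearly with capacity while MTBF has grown far more slowly, this per-rebuild risk has trended upward, which is the quantitative core of the "band aid" criticism of single-redundancy schemes.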
Mirroring schemes such as RAID 10 have a bounded recovery time as they require the copy of a single failed drive, compared with parity schemes such as RAID 6, which require the copy of all blocks of the drives in an array set. Triple parity schemes, or triple mirroring, have been suggested as one approach to improve resilience to an additional drive failure during this large rebuild time.<ref name="ACMQ">{{cite web |title=Triple-Parity RAID and Beyond. ACM Queue, Association for Computing Machinery |url=https://queue.acm.org/detail.cfm?id=1670144 |first=Adam |last=Leventhal |date=2009-12-01 |access-date=2012-11-30}}</ref>{{Unreliable source?|date=October 2020}}
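The difference in rebuild workload between mirroring and parity schemes can be sketched as follows; the array size and drive capacity used here are assumed example values.

```python
def rebuild_read_tb(scheme, n_drives, drive_tb):
    """Data that must be read to reconstruct one failed drive."""
    if scheme == "mirror":    # RAID 1/10: copy the single surviving mirror
        return drive_tb
    if scheme == "parity":    # RAID 5/6: read every surviving drive's blocks
        return (n_drives - 1) * drive_tb
    raise ValueError(f"unknown scheme: {scheme}")

# Example: 8 x 12 TB array. A mirror rebuild reads 12 TB no matter how
# large the array grows; a parity rebuild reads 84 TB.
mirror_tb = rebuild_read_tb("mirror", 8, 12)
parity_tb = rebuild_read_tb("parity", 8, 12)
```

This is why mirror rebuild time is bounded by a single drive's capacity, while parity rebuild time grows with the width of the array set, motivating triple parity or triple mirroring as extra insurance over that longer window.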