==Data center design==
The field of '''data center design''' has been growing for decades in various directions, including new construction big and small along with the creative re-use of existing facilities, such as abandoned retail space, old salt mines, and war-era bunkers.
* A 65-story data center has already been proposed.<ref>{{cite news|newspaper=[[Computerworld]]|date=April 12, 2016|author=Patrick Thibodeau|url=https://www.computerworld.com/article/3054603/data-center/a-65-story-data-center-design-that-soars-with-ideas.html|title=Envisioning a 65-story data center|archive-date=October 28, 2018|access-date=October 28, 2018|archive-url=https://web.archive.org/web/20181028225606/https://www.computerworld.com/article/3054603/data-center/a-65-story-data-center-design-that-soars-with-ideas.html|url-status=live}}</ref>
* As of 2016, the number of data centers had grown beyond 3 million USA-wide, with more than triple that number worldwide.<ref name="DataM" />

Local building codes may govern minimum ceiling heights and other parameters. Some of the considerations in the design of data centers are:
[[File:Rack001.jpg|thumb|right|A typical server rack, commonly seen in [[colocation centre|colocation]]]]
* Size - one room of a building, one or more floors, or an entire building
* Capacity - can hold up to or past 1,000 servers<ref>{{Cite web|url=https://www.youtube.com/watch?v=zRwPSFpLX8I|archive-url=https://ghostarchive.org/varchive/youtube/20211104/zRwPSFpLX8I|archive-date=2021-11-04|url-status=live|title=Google container data center tour (video)|website=[[YouTube]]|date=7 April 2009}}{{cbignore}}</ref>
* Other considerations - space, power, cooling, and costs in the data center<ref>{{cite web|url=http://www.networkcomputing.com/data-center/231000669|title=Romonet Offers Predictive Modeling Tool For Data Center Planning|date=June 29, 2011|access-date=February 8, 2012|archive-date=August 23, 2011|archive-url=https://web.archive.org/web/20110823041831/http://www.networkcomputing.com/data-center/231000669|url-status=dead}}</ref>
* Mechanical engineering infrastructure - heating, ventilation and air conditioning ([[HVAC]]); humidification and dehumidification equipment; pressurization<ref name="nxtbook.com">{{cite web|title=BICSI News Magazine - May/June 2010|url=http://www.nxtbook.com/nxtbooks/bicsi/news_20100506/#/26|website=www.nxtbook.com|access-date=2012-02-08|archive-date=2019-04-20|archive-url=https://web.archive.org/web/20190420132241/http://www.nxtbook.com/nxtbooks/bicsi/news_20100506/#/26|url-status=dead}}</ref>
* Electrical engineering infrastructure design - utility service planning; distribution, switching and bypass from power sources; [[uninterruptible power source]] (UPS) systems; and more<ref name="nxtbook.com" /><ref>{{cite web|title=Hedging Your Data Center Power|url=http://www.datacenterjournal.com/design/hedging-your-data-center-power/|access-date=2012-02-08|archive-date=2024-05-17|archive-url=https://web.archive.org/web/20240517235843/https://www.datacenterjournal.com/design/hedging-your-data-center-power/|url-status=live}}</ref>
[[File:CRAC Cabinets 2.jpg|thumb|A CRAC air handler]]
===Design criteria and trade-offs===
* '''Availability expectations''': The cost of avoiding downtime should not exceed the cost of the downtime itself (a worked example appears below).<ref>Clark, Jeffrey. "The Price of Data Center Availability—How much availability do you need?", Oct. 12, 2011, The Data Center Journal {{cite web|url=http://www.datacenterjournal.com/home/news/languages/item/2792-the-price-of-data-center-availability|title=Data Center Outsourcing in India projected to grow according to Gartner|access-date=2012-02-08|url-status=dead|archive-url=https://web.archive.org/web/20111203145721/http://www.datacenterjournal.com/home/news/languages/item/2792-the-price-of-data-center-availability|archive-date=2011-12-03}}</ref>
* '''Site selection''': Location factors include proximity to power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services. Other considerations include flight paths, neighboring power drains, geological risks, and climate (associated with cooling costs).<ref>{{cite web|url=http://searchcio.techtarget.com/news/1312614/Five-tips-on-selecting-a-data-center-location|title=Five tips on selecting a data center location}}</ref>
** Often, power availability is the hardest to change.

====High availability====
{{Main|High availability}}
Various metrics exist for measuring the data availability that results from data-center availability beyond 95% uptime, with the top of the scale counting how many ''nines'' can be placed after ''99%''.<ref>{{Cite web|url=https://www.youtube.com/watch?v=DPcM5UePTY0|archive-url=https://web.archive.org/web/20120829083144/http://www.youtube.com/watch?v=DPcM5UePTY0&gl=US&hl=en|archive-date=2012-08-29|url-status=dead|title=IBM zEnterprise EC12 Business Value Video|website=[[YouTube]]}}</ref>
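As a rough illustration of what the ''nines'' imply, and of the downtime-cost trade-off noted under the design criteria above, the following sketch converts an availability level into the downtime it permits per year and prices that downtime at an assumed hourly cost. The cost figure is a hypothetical placeholder for illustration, not a sourced value.

<syntaxhighlight lang="python">
# Illustrative sketch: convert availability "nines" into downtime per year
# and price it at a hypothetical cost per hour of downtime.
# The cost figure is assumed for illustration; it is not a sourced value.

HOURS_PER_YEAR = 365.25 * 24

def annual_downtime_hours(availability: float) -> float:
    """Downtime per year implied by a given availability fraction."""
    return (1.0 - availability) * HOURS_PER_YEAR

ASSUMED_COST_PER_HOUR = 10_000  # hypothetical downtime cost, dollars/hour

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = annual_downtime_hours(availability)
    cost = downtime * ASSUMED_COST_PER_HOUR
    print(f"{availability:.3%} uptime -> {downtime:8.2f} h/yr downtime, "
          f"~${cost:,.0f}/yr at the assumed rate")
</syntaxhighlight>

Each additional nine cuts the permissible downtime by a factor of ten, which is why the cost of the redundancy needed to reach the top of the scale rises steeply.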
====Modularity and flexibility====
{{main|Modular data center}}
Modularity and flexibility are key elements in allowing a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.<ref>Niles, Susan. "Standardization and Modularity in Data Center Physical Infrastructure," 2011, Schneider Electric, page 4. {{cite web|url=http://www.apcmedia.com/salestools/VAVR-626VPD_R1_EN.pdf|title=Standardization and Modularity in Data Center Physical Infrastructure|access-date=2012-02-08|url-status=dead|archive-url=https://web.archive.org/web/20120416120624/http://www.apcmedia.com/salestools/VAVR-626VPD_R1_EN.pdf|archive-date=2012-04-16}}</ref> A modular data center may consist of data center equipment contained within shipping containers or similar portable containers.<ref>{{cite web|url=http://www.datacenterknowledge.com/archives/2011/09/08/strategies-for-the-containerized-data-center/|title=Strategies for the Containerized Data Center|date=September 8, 2011}}</ref> Components of the data center can be prefabricated and standardized, which facilitates moving if needed.<ref>{{cite web|url=http://www.infoworld.com/d/green-it/hp-says-prefab-data-center-cuts-costs-in-half-837?page=0,0|title=HP says prefab data center cuts costs in half|first=James|last=Niccolai|date=2010-07-27}}</ref>

===Electrical power===
[[File:Datacenter Backup Batteries.jpg|thumb|right|A bank of batteries in a large data center, used to provide power until diesel generators can start]]
[[File:Power generator of a hospital data center.jpg|thumb|Diesel-powered generator of a hospital data center]]
Backup power consists of one or more [[uninterruptible power supply|uninterruptible power supplies]], battery banks, and/or [[Diesel generator|diesel]] / [[gas turbine]] generators.<ref>Detailed explanation of UPS topologies: {{cite web|url=http://www.emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf|title=Evaluating the Economic Impact of UPS Technology|url-status=dead|archive-url=https://web.archive.org/web/20101122074817/http://emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf|archive-date=2010-11-22}}</ref>

To prevent [[single point of failure|single points of failure]], all elements of the electrical systems, including backup systems, are typically given [[Redundancy (engineering)|redundant copies]], and critical servers are connected to both the ''A-side'' and ''B-side'' power feeds. This arrangement is often made to achieve [[N+1 redundancy]] in the systems. [[Transfer switch#Static transfer switch|Static transfer switches]] are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
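The benefit of the dual ''A-side''/''B-side'' arrangement can be seen with a short availability calculation. This is a minimal sketch assuming the two feeds fail independently, an idealization since real feeds share some upstream infrastructure, and using an illustrative per-feed availability of 99.9% (an assumed figure, not a sourced one):

<syntaxhighlight lang="python">
# Minimal sketch: availability gain from redundant A/B power feeds.
# Assumes the two feeds fail independently (an idealization) and an
# illustrative per-feed availability of 99.9% (assumed, not sourced).

def dual_feed_availability(single_feed: float) -> float:
    """A dual-corded load fails only if both independent feeds fail."""
    return 1.0 - (1.0 - single_feed) ** 2

a = 0.999                       # assumed availability of one feed
print(f"single feed: {a:.4%}")                          # 99.9000%
print(f"A + B feeds: {dual_feed_availability(a):.4%}")  # 99.9999%
</syntaxhighlight>

N+1 sizing applies the same idea to capacity rather than paths: one more unit (UPS, generator, or cooling) is installed than the load strictly requires, so any single unit can fail or be taken down for service without an outage.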
===Low-voltage cable routing===
Options include:
* Data cabling routed through overhead [[cable tray]]s<ref>{{cite web|url=https://www.cablinginstall.com/articles/print/volume-24/issue-4/features/data-center/cable-tray-systems-support-cables-journey-through-the-data-center.html|title=Cable tray systems support cables' journey through the data center|date=April 2016}}</ref>
* Raised-floor cabling, both for security reasons and to avoid the extra cost of cooling systems over the racks
* Anti-static tiles as a flooring surface, in smaller or less expensive data centers

===Airflow and environmental control===
[[Airflow]] management is the practice of achieving data center [[computer cooling|cooling]] efficiency by preventing the recirculation of hot exhaust air and by reducing bypass airflow. Common approaches include hot-aisle/cold-aisle containment and the deployment of in-row cooling units, which position cooling directly between server racks to intercept exhaust heat before it mixes with room air.<ref name="fox">{{cite news|author=Mike Fox|title=Stulz announced it has begun manufacturing In Row server cooling units under the name "CyberRow".|url=http://datacenterfix.com/news/stulz-release-cyberrow-row-cooling-units|access-date=27 February 2012|newspaper=DataCenterFix|date=15 February 2012|url-status=dead|archive-url=https://web.archive.org/web/20120301084918/http://www.datacenterfix.com/news/stulz-release-cyberrow-row-cooling-units|archive-date=1 March 2012}}</ref>
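The airflow a rack requires follows from a sensible-heat balance: the air stream must absorb the rack's dissipated power at an acceptable intake-to-exhaust temperature rise. Below is a rough sketch using standard air properties; the 10 kW rack load and 12 °C rise are assumed figures for illustration, not sourced values.

<syntaxhighlight lang="python">
# Rough sensible-heat sketch: airflow needed to carry away a rack's heat,
# Q = P / (rho * c_p * dT). Rack power and temperature rise are assumed
# for illustration; they are not sourced values.

RHO_AIR = 1.2    # kg/m^3, density of air near sea level
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def required_airflow_m3s(power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove power_w at delta_t_k rise."""
    return power_w / (RHO_AIR * CP_AIR * delta_t_k)

flow = required_airflow_m3s(power_w=10_000, delta_t_k=12)  # assumed 10 kW rack
print(f"{flow:.2f} m^3/s (~{flow * 2118.88:.0f} CFM)")     # about 0.69 m^3/s
</syntaxhighlight>

Recirculated exhaust raises the effective intake temperature and shrinks the usable temperature rise, while bypass air consumes fan and cooling capacity without ever passing through the equipment, which is why containment targets both.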
Maintaining suitable temperature and humidity levels is critical to preventing equipment damage caused by [[Overheating (electricity)|overheating]]. Overheating can cause components, usually the silicon or copper of wires or circuits, to melt, loosening connections and creating fire hazards. Typical control methods include:
* [[Air conditioning]]
* Indirect cooling, such as the use of outside air,<ref>{{cite news|url=https://www.reuters.com/article/pressRelease/idUS141369+14-Sep-2009+PRN20090914|archive-url=https://web.archive.org/web/20090926171248/http://www.reuters.com/article/pressRelease/idUS141369+14-Sep-2009+PRN20090914|url-status=dead|archive-date=26 September 2009|work=Reuters|title=tw telecom and NYSERDA Announce Co-location Expansion|date=14 September 2009}}</ref><ref>{{cite web|url=http://www.datacenterdynamics.com/focus/archive/2013/09/air-air-combat-indirect-air-cooling-wars-0|title=Air to air combat – indirect air cooling wars}}</ref><ref group="note">Indirect systems can reduce or eliminate the need for mechanical chillers or conventional air conditioners, resulting in energy savings.</ref> indirect evaporative cooling (IDEC) units, and seawater cooling

Humidity control not only prevents moisture-related issues; excess humidity also causes dust to adhere more readily to fan blades and heat sinks, impeding air cooling and leading to higher temperatures.<ref>{{cite web|title=The Importance of Humidity Management in Data Centres and comms Rooms|url=https://treske.com.au/blogs/treske-business-blogs/the-importance-of-humidity-management-in-data-centres-and-comms-rooms-ensuring-optimal-performance-and-reliability|website=Treske Pty Limited|language=en|date=10 February 2024}}</ref>

====Aisle containment====
[[File:Cabinet Asile.jpg|thumb|upright|Typical cold-aisle configuration with server rack fronts facing each other and cold air distributed through the [[raised floor]]]]
Cold-aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers. This is similar to how large-scale food companies refrigerate and store their products.

Computer cabinets/[[server farm]]s are often organized for containment of hot/cold aisles. Proper air duct placement prevents the cold and hot air from mixing. Rows of cabinets are paired to face each other so that the cool air intakes and hot air exhausts do not mix, which would severely reduce cooling efficiency. Alternatively, a range of underfloor panels can create efficient cold-air pathways directed to the raised-floor vented tiles. Either the cold aisle or the hot aisle can be contained.<ref>[http://www.missioncriticalmagazine.com/ext/resources/MC/Home/Files/PDFs/WP-APC-Hot_vs_Cold_Aisle.pdf/ Hot-Aisle vs. Cold-Aisle Containment for Data Centers], John Niemann, Kevin Brown, and Victor Avelar, APC by Schneider Electric White Paper 135, Revision 1</ref>

Another option is fitting cabinets with vertical exhaust-duct [[chimney]]s.<ref>{{Cite web|url=https://patents.justia.com/patent/20180042143|title=US Patent Application for DUCTED EXHAUST EQUIPMENT ENCLOSURE Patent Application (Application #20180042143 issued February 8, 2018) - Justia Patents Search|website=patents.justia.com|language=en|access-date=2018-04-17}}</ref> Hot exhaust pipes/vents/ducts can direct the air into a [[plenum space]] above a [[dropped ceiling]] and back to the cooling units or to outside vents. With this configuration, a traditional hot/cold-aisle layout is not a requirement.<ref>{{Cite news|url=https://datacenterfrontier.com/airflow-management-basics-comparing-containment-systems|title=Airflow Management Basics – Comparing Containment Systems • Data Center Frontier|date=2017-07-27|work=Data Center Frontier|access-date=2018-04-17|archive-date=2019-02-19|archive-url=https://web.archive.org/web/20190219015621/https://datacenterfrontier.com/airflow-management-basics-comparing-containment-systems/|url-status=dead}}</ref>

===Fire protection===
[[File:FM200 Three.jpg|thumb|[[FM200]] fire suppression tanks]]
Data centers feature [[fire protection]] systems, including [[passive fire protection|passive]] and [[active fire protection|active]] design elements, as well as the implementation of [[fire prevention]] programs in operations. [[Smoke detector]]s are usually installed to provide early warning of a fire at its incipient stage.

Although the main room usually does not allow [[wet pipe sprinkler|wet-pipe systems]] due to the fragile nature of [[circuit board]]s, systems that can be used in the rest of the facility, or in cold/hot-aisle air circulation systems that are [[closed system]]s, include:<ref>{{Cite web|url=https://www.facilitiesnet.com/datacenters/article/Data-Center-Fire-Suppression-Systems-What-Facility-Managers-Should-Consider--14595|title=Data Center Fire Suppression Systems: What Facility Managers Should Consider|website=Facilitiesnet|access-date=2018-10-28|archive-date=2024-05-22|archive-url=https://web.archive.org/web/20240522091545/https://www.facilitiesnet.com/datacenters/article/Data-Center-Fire-Suppression-Systems-What-Facility-Managers-Should-Consider--14595|url-status=live}}</ref>
* [[fire sprinkler system|Sprinkler systems]]
* [[Mist]]ing, which uses high pressure to create extremely small water droplets and can therefore be used in sensitive rooms

Other means of putting out fires, especially in sensitive areas, typically use [[gaseous fire suppression]], of which [[Halon gas]] was the most popular until the negative effects of producing and using it were discovered.[https://www.epa.gov/ozone-layer-protection/halons-program]
===Security===
{{main|Data center security}}
Physical access is usually restricted. Layered security often starts with fencing, [[bollard]]s, and [[mantrap (access control)|mantraps]].<ref>{{cite web|author=Sarah D. Scalet|url=http://www.csoonline.com/article/220665|title=19 Ways to Build Physical Security Into a Data Center|publisher=Csoonline.com|date=2005-11-01|access-date=2013-08-30|archive-date=2008-04-21|archive-url=https://web.archive.org/web/20080421020352/http://www.csoonline.com/article/220665|url-status=dead}}</ref> [[Video camera]] surveillance and permanent [[security guard]]s are almost always present if the data center is large or contains sensitive information. Fingerprint-recognition mantraps are becoming commonplace.

Logging access is required by some data protection regulations; some organizations tightly link this to access control systems. Multiple log entries can occur at the main entrance, at entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent [[power distribution unit]]s, so that locks are networked through the same appliance.<ref>{{Citation|title=Systems and methods for controlling an electronic lock for a remote device|date=2016-08-01|url=https://patents.google.com/patent/US9865109B2/en|access-date=2018-04-25|archive-date=2023-03-06|archive-url=https://web.archive.org/web/20230306024610/https://patents.google.com/patent/US9865109B2/en|url-status=live}}</ref>