{{Short description|Computing with supercomputers and clusters}}
{{distinguish|High-throughput computing|Many-task computing}}
[[Image:Nanoscience High-Performance Computing Facility.jpg|thumb|300px|The Center for Nanoscale Materials at the [[Advanced Photon Source]]]]

'''High-performance computing''' ('''HPC''') is the use of [[supercomputer]]s and [[computer cluster]]s to solve advanced computation problems.

==Overview==
HPC integrates [[systems administrator|systems administration]] (including network and security knowledge) and [[parallel computing|parallel programming]] into a multidisciplinary field that combines [[digital electronics]], [[computer architecture]], [[system software]], [[programming language]]s, [[algorithm]]s and computational techniques.<ref name="tstc2005"/> HPC technologies are the tools and systems used to implement and create high-performance computing systems.<ref name="usdoe2004"/> Recently{{When|date=November 2011}}, HPC systems have shifted from supercomputing to computing [[computer cluster|cluster]]s and [[grid computing|grid]]s.<ref name="tstc2005"/> Because of the need for networking in clusters and grids, high-performance computing technologies are being promoted{{By whom|date=November 2011}} through the use of a [[collapsed backbone network|collapsed network backbone]], because the collapsed backbone architecture is simple to troubleshoot and upgrades can be applied to a single router rather than to multiple ones.

HPC integrates with data analytics in [[Artificial intelligence engineering|AI engineering]] workflows to generate new data streams that increase the ability of simulations to answer "what if" questions.<ref>{{Cite web |date=April 24, 2024 |title=AI-powered engineering: A modern must-have |url=https://www.fastcompany.com/91109775/ai-powered-engineering-a-modern-must-have |access-date=March 5, 2025}}</ref>

The term is most commonly associated with computing used for scientific research or [[computational science]]. A related term, [[high-performance technical computing]] (HPTC), generally refers to the engineering applications of cluster-based computing (such as [[computational fluid dynamics]] and the building and testing of [[virtual prototypes]]). HPC has also been applied to [[business]] uses such as [[data warehouse]]s, [[line of business]] (LOB) applications, and [[transaction processing]].

The term "high-performance computing" arose after the term "supercomputing".<ref>{{Cite OED|supercomputing|id=319183}} "Supercomputing" is attested from 1944.</ref> HPC is sometimes used as a synonym for supercomputing; in other contexts, "supercomputer" refers to a more powerful subset of "high-performance computers", and "supercomputing" correspondingly denotes a subset of "high-performance computing". The potential for confusion over the use of these terms is apparent.

Because most current applications are not designed for HPC technologies but are retrofitted to them, they are not designed or tested for scaling to more powerful processors or machines.<ref name="usdoe2004"/> Since networking clusters and grids use [[multiprocessing|multiple processors]] and computers, these scaling problems can cripple critical systems in future supercomputing systems.
Therefore, either the existing tools do not address the needs of the high-performance computing community, or the HPC community is unaware of these tools.<ref name="usdoe2004"/>

A few examples of commercial uses of HPC technologies include:
* the simulation of car crashes for structural design
* molecular interaction for new drug design
* the airflow over automobiles or airplanes

In government and research institutions, scientists simulate [[galaxy formation and evolution]], fusion energy, and global warming, as well as work to create more accurate short- and long-term weather forecasts.<ref name="drdobbs2007"/> The world's most powerful supercomputer in 2008, [[IBM Roadrunner]] (located at the [[United States Department of Energy]]'s [[Los Alamos National Laboratory]]),<ref>{{cite web | title=Launching a New Class of U.S. Supercomputing | publisher=Department of Energy| date=17 November 2022 | url=https://www.energy.gov/science/articles/launching-new-class-us-supercomputing}}</ref> simulated the performance, safety, and reliability of nuclear weapons and certified their functionality.<ref name="usdoe2008"/>

==TOP500==
{{main|TOP500}}
TOP500 ranks the world's 500 fastest high-performance computers, as measured by the [[HPL (benchmark)|High Performance LINPACK]] (HPL) benchmark. Not all existing computers are ranked, either because they are ineligible (e.g., they cannot run the HPL benchmark) or because their owners have not submitted an HPL score (e.g., because they do not wish the size of their system to become public information, for defense reasons). In addition, the use of the single LINPACK benchmark is controversial, in that no single measure can test all aspects of a high-performance computer. To help overcome the limitations of the LINPACK test, the U.S. government commissioned one of its originators, [[Jack Dongarra]] of the University of Tennessee, to create a suite of benchmark tests that includes LINPACK and others, called the HPC Challenge benchmark suite. This evolving suite has been used in some HPC procurements, but, because it is not reducible to a single number, it has been unable to overcome the publicity advantage of the less useful TOP500 LINPACK test. The TOP500 list is updated twice a year: once in June at the ISC European Supercomputing Conference and again in November at the US Supercomputing Conference.

Many ideas for the new wave of [[grid computing]] were originally borrowed from HPC.
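The ranking figure that HPL produces is obtained by dividing the benchmark's nominal floating-point operation count, conventionally taken as 2/3·n³ + 2·n² for a dense linear system of order n, by the measured wall-clock time. The following sketch illustrates that arithmetic only; the function name and the problem size and runtime used in the example are hypothetical and do not correspond to any ranked system.

<syntaxhighlight lang="python">
# Illustrative only: how an HPL-style sustained-performance figure is derived.
# The problem size n and the runtime below are hypothetical example values.

def hpl_performance(n: int, runtime_seconds: float) -> float:
    """Return sustained performance in FLOP/s for an HPL run of order n."""
    # Operation count conventionally credited to HPL for an n x n dense system:
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / runtime_seconds

# Example: a hypothetical matrix of order 10,000,000 solved in 3,000 seconds.
perf = hpl_performance(n=10_000_000, runtime_seconds=3_000.0)
print(f"sustained performance ~ {perf / 1e18:.3f} exaFLOPS")
</syntaxhighlight>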
==High-performance computing in the cloud==
{{Main|Cloud computing}}
Traditionally, HPC has involved an [[On-premises software|on-premises]] infrastructure, with organizations investing in their own supercomputers or computer clusters. Over the last decade, [[cloud computing]] has grown in popularity as a way of offering computing resources to the commercial sector regardless of organizations' investment capabilities.<ref name=uk /> Characteristics such as scalability and [[OS-level virtualization|containerization]] have also raised interest in academia.<ref>{{cite web |author1=Sebastian von Alfthan |title=High-performance computing in the cloud?|year=2016|url=https://www.csc.fi/documents/10180/187845/High-performance+computing+in+the+cloud.pdf |publisher=CSC – IT Center for Science}}</ref> However, [[Cloud computing security|cloud security]] concerns such as data confidentiality are still weighed when deciding between cloud and on-premises HPC resources.<ref name=uk>{{cite web |author1=Morgan Eldred |author2=Alice Good |author3=Carl Adams |title=A case study on data protection and security decisions in cloud HPC|date=24 January 2018<!--embedded in PDF -->|url=https://pure.port.ac.uk/ws/portalfiles/portal/8527525/A_case_study_on_data_protection.pdf |publisher=School of Computing, University of Portsmouth, Portsmouth, U.K.}}</ref>

==Current leading supercomputers==
Below is a list of the most powerful HPC systems by computing power, as reported in the TOP500 list:<ref>{{cite web | title= List Top 500 November 2024| publisher=Top500.org| url=https://top500.org/lists/top500/2024/11/}}</ref>
* [[El Capitan (supercomputer)|El Capitan]]: this HPE Cray EX255a system reaches 1.742 [[FLOPS|exaFLOPS]] with 1,051,392 [[CPU]] cores and 9,988,224 accelerator cores, totaling 11,039,616 cores. It uses Slingshot-11 interconnect technology and is housed at the [[Lawrence Livermore National Laboratory]], [[United States|USA]].<ref>{{cite web | title=AMD-powered El Capitan is now the world's fastest supercomputer with 1.7 exaflops of performance| publisher=Tomshardware.com| url=https://www.tomshardware.com/pc-components/cpus/amd-powered-el-capitan-is-now-the-worlds-fastest-supercomputer-with-1-7-exaflops-of-performance-fastest-intel-machine-falls-to-third-place-on-top500-list}}</ref>
* [[Frontier (supercomputer)|Frontier]]: boasting 1.353 exaFLOPS, this HPE Cray EX235a system features 614,656 CPU cores and 8,451,520 accelerator cores, making a total of 9,066,176 cores. It operates with Slingshot-11 interconnects at [[Oak Ridge National Laboratory]], USA.<ref>{{cite web | title=Frontier named one of Time's 2023 Best Inventions| publisher=Hpe.com| url=https://www.hpe.com/asia_pac/en/compute/hpc/cray/oak-ridge-national-laboratory.html}}</ref>
* [[Aurora (supercomputer)|Aurora]]: this Intel-powered system delivers 1.012 exaFLOPS, leveraging [[Xeon]] and Ponte Vecchio architectures. It is installed at [[Argonne National Laboratory]], USA.<ref>{{cite web | title=Aurora Supercomputer Achieves Exascale Capacity | publisher=Admin-magazine.com| url=https://www.admin-magazine.com/News/Aurora-Supercomputer-Achieves-Exascale-Capacity}}</ref>
* Eagle: powered by Intel Xeon Platinum 8480C 48C 2GHz processors and NVIDIA H100 GPUs, Eagle reaches 561.20 petaFLOPS of computing power, with 2,073,600 cores. It features NVIDIA Infiniband NDR for high-speed connectivity and is hosted by [[Microsoft Azure]], USA.<ref>{{cite web | title=Microsoft is deploying the equivalent of five 561 petaflops supercomputers every month | publisher=Datacenterdynamics.com| url=https://www.datacenterdynamics.com/en/news/microsoft-deploying-equivalent-of-five-561-petaflops-supercomputers-every-month/}}</ref>
* [[HPC6]]: the most powerful industrial supercomputer in the world, HPC6 was developed by [[Eni]] and launched in November 2024. With 606 petaFLOPS of computing power, it is used for energy research and operates in [[Italy]], in the Eni Green Data Center in [[Ferrera Erbognone]], in the province of Pavia.<ref>{{cite web | title=Eni launches new supercomputer HPC6 that ranks No.5. in the TOP500 list | publisher=Eni.com| url=https://www.eni.com/en-IT/media/press-release/2024/11/eni-launches-supercomputer-hpc6-top500-list.html}}</ref>
* [[Fugaku (supercomputer)|Fugaku]]: developed by [[Fujitsu]], this system achieves 442.01 petaFLOPS using A64FX 48C 2.2GHz processors and Tofu interconnect D technology. It is located at the [[Riken|RIKEN]] Center for Computational Science, [[Japan]].<ref>{{cite web | title=Supercomputer Fugaku retains first place worldwide in HPCG and Graph500 rankings| publisher=Fujitsu.com| url=https://www.fujitsu.com/global/about/resources/news/press-releases/2024/1119-01.html}}</ref>
* [[Alps (supercomputer)|Alps]]: this HPE Cray EX254n system reaches 434.90 petaFLOPS, powered by [[NVIDIA]] Grace 72C 3.1GHz processors and NVIDIA GH200 Superchips, connected through Slingshot-11 interconnects. It is located at [[Swiss National Supercomputing Centre|CSCS]], [[Switzerland]].<ref>{{cite web | title=Swiss National Supercomputing Centre unveils Alps supercomputer | publisher=Datacenterdynamics.com| url=https://www.datacenterdynamics.com/en/news/swiss-national-supercomputing-centre-unveils-alps-supercomputer/}}</ref>
* [[LUMI]]: one of Europe's fastest supercomputers, LUMI achieves 379.70 petaFLOPS with [[AMD]] Optimized 3rd Generation EPYC 64C 2GHz processors and AMD Instinct MI250X accelerators. It is hosted by [[CSC – IT Center for Science|CSC]], [[Finland]], as part of the [[European High-Performance Computing Joint Undertaking|EuroHPC]] initiative.<ref>{{cite web | title=This is LUMI, Europe’s Most Powerful Supercomputer and a Benchmark in AI | publisher=Silicon.eu| url=https://www.silicon.eu/this-is-lumi-europes-most-powerful-supercomputer-and-a-benchmark-15075.html}}</ref>
* [[Leonardo (supercomputer)|Leonardo]]: developed under the EuroHPC initiative, this [[BullSequana]] XH2000 system reaches 241.20 petaFLOPS with [[Xeon]] Platinum 8358 32C 2.6GHz processors and NVIDIA A100 SXM4 64GB accelerators. It is installed at [[CINECA]], [[Italy]].<ref>{{cite web | title=A GPU Upgrade For “Leonardo” Supercomputer But Not A Budget Upgrade | publisher=Nextplatform.com| url=https://www.nextplatform.com/2024/09/23/a-gpu-upgrade-for-leonardo-supercomputer-but-not-a-budget-upgrade/}}</ref>
* Tuolumne: this system achieves 208.10 petaFLOPS and is powered by AMD 4th Gen EPYC 24C 1.8GHz processors and AMD Instinct MI300A accelerators. It operates at [[Lawrence Livermore National Laboratory]], USA.<ref>{{cite web | title=Tuolumne is El Capitan's Little Brother with 200+ PetaFLOPS Performance| publisher=Tomshardware.com| url=https://www.tomshardware.com/news/tuolumne-el-capitan-little-brother-announced}}</ref>
* [[MareNostrum]] 5 ACC: this BullSequana XH3000 system runs at 175.30 petaFLOPS, featuring Xeon Platinum 8460Y+ 32C 2.3GHz processors and NVIDIA H100 64GB accelerators. It is hosted by the [[Barcelona Supercomputing Center]] (BSC), [[Spain]], as part of EuroHPC.<ref>{{cite web | title=Global Standing of EuroHPC Supercomputers: Three Systems in the Top 10 and Two New Entries| publisher=Eurohpc-ju.europa.eu| url=https://eurohpc-ju.europa.eu/global-standing-eurohpc-supercomputers-three-systems-top-10-and-two-new-entries-2024-05-13_en}}</ref>
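The performance figures above are measured HPL results; a separate, purely theoretical figure can be estimated from hardware specifications alone by multiplying core count, clock rate, and floating-point operations per core per cycle. The sketch below shows that estimate for a hypothetical machine; the function name and all numbers in it are illustrative assumptions, not the specifications of any system listed above.

<syntaxhighlight lang="python">
# Illustrative only: estimating a theoretical peak from hardware specifications.
# All figures are hypothetical; TOP500 entries report measured HPL results,
# which are lower than the theoretical peak.

def theoretical_peak_flops(cores: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak = cores x clock rate x FLOPs issued per core per cycle."""
    return cores * clock_hz * flops_per_cycle

# Hypothetical system: 1,000,000 cores at 2.0 GHz, 16 double-precision
# FLOPs per core per cycle (e.g., wide SIMD units with fused multiply-add).
peak = theoretical_peak_flops(cores=1_000_000, clock_hz=2.0e9, flops_per_cycle=16)
print(f"theoretical peak ~ {peak / 1e15:.1f} petaFLOPS")  # ~ 32.0 petaFLOPS
</syntaxhighlight>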
==See also==
{{Div col|colwidth=20em}}
* [[Distributed computing]]
* [[Quantum computing]]
* [[Metacomputing]]
* [[Grand Challenge]]
* [[High Productivity Computing Systems]]
* [[High-availability cluster]]
* [[High-throughput computing]]
* [[Many-task computing]]
* [[Urgent computing]]
* [[Workstation#Current market|GPU workstation]]
{{Div col end}}

==References==
{{Reflist |refs=
<ref name="tstc2005">{{Cite report|last1 = Brazell | first1 = Jim| last2 = Bettersworth | first2 = Michael|url=http://system.tstc.edu/forecasting/techbriefs/HPC.asp|title=High Performance Computing|publisher=Texas State Technical College|date=2005| archive-url = https://web.archive.org/web/20100731043053/http://system.tstc.edu:80/forecasting/techbriefs/HPC.asp| archive-date = 2010-07-31}}</ref>
<ref name="drdobbs2007">Schulman, Michael. [http://drdobbs.com/high-performance-computing/199201209 "High Performance Computing: RAM vs CPU"]. Dr. Dobbs High Performance Computing, April 30, 2007.</ref>
<ref name="usdoe2004">{{Cite report|last1 = Collette | first1 = Michael| last2 = Corey | first2 = Bob| last3 = Johnson | first3 = John|url=https://computing.llnl.gov/tutorials/performance_tools/HighPerformanceToolsTechnologiesLC.pdf|title=High Performance Tools & Technologies|publisher=Lawrence Livermore National Laboratory, U.S. Department of Energy|date=December 2004| archive-url = https://web.archive.org/web/20170830034938/https://computing.llnl.gov/tutorials/performance_tools/HighPerformanceToolsTechnologiesLC.pdf| archive-date = 2017-08-30}}</ref>
<ref name="usdoe2008">{{Cite web|url=http://cio.energy.gov/high-performance-computing.htm|archive-url=https://web.archive.org/web/20090730002907/http://cio.energy.gov/high-performance-computing.htm|archive-date=30 July 2009|publisher=US Department of Energy|title=High Performance Computing}}</ref>
}}

==External links==
* [http://www.HPCwire.com HPCwire]
* [http://www.top500.org Top 500 supercomputers]
* [http://www.rocksclusters.org Rocks Clusters] Open-Source High Performance Linux Clusters
* [https://web.archive.org/web/20120610080201/http://www.icse.umich.edu/news-articles--policy-reports.html News Articles & Policy Reports on High-Performance Scientific Computing]
* [http://www.modelingimmunity.org The Center for Modeling Immunity to Enteric Pathogens (MIEP)]
* [https://theartofhpc.com/ The Art of HPC: Textbooks by Victor Eijkhout of TACC]
** Vol. 1: The Science of Computing
** Vol. 2: Parallel Programming for Science Engineering
** Vol. 3: Introduction to Scientific Programming in C++17/Fortran2008
** Vol. 4: Tutorials for High Performance Scientific Computing

{{Parallel computing}}
{{Authority control}}

[[Category:Parallel computing]]