Exascale computing refers to computing systems capable of calculating at least 10^18 IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaFLOPS);[1] it is a measure of supercomputer performance.
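For scale, this definition amounts to a simple unit conversion (plain arithmetic, not tied to any particular benchmark):

\[ 1\ \text{exaFLOPS} = 10^{18}\ \text{FLOPS} = 10^{3}\ \text{petaFLOPS} = 10^{6}\ \text{teraFLOPS} \]

so an exascale system performs roughly as many double-precision operations in one second as a 1 teraFLOPS machine would in about 11.6 days (10^6 seconds).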
In 2022, the world's first public exascale computer, Frontier, was announced.[8] As of November 2024, Lawrence Livermore National Laboratory's El Capitan is the world's fastest exascale supercomputer.[9]
Whilst a distributed computing system had broken the 1 exaFLOPS barrier before Frontier, the metric typically refers to single computing systems. Supercomputers had also previously broken the 1 exaFLOPS barrier using alternative precision measures; again, these do not meet the criteria for exascale computing using the standard metric.[1] It has been recognized that HPLinpack may not be a good general measure of supercomputer utility in real-world applications, but it remains the common standard for performance measurement.[11][12]
Technological challenges
It has been recognized that enabling applications to fully exploit the capabilities of exascale computing systems is not straightforward.[13] Developing data-intensive applications over exascale platforms requires new and effective programming paradigms and runtime systems.[14] The Folding@home project, the first distributed system to break the 1 exaFLOPS barrier, relied on a network of servers sending pieces of work to hundreds of thousands of clients using a client–server network architecture.[15][16]
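The client–server pattern mentioned above can be illustrated with a minimal sketch (hypothetical Python; this is not the Folding@home software, and names such as WorkUnit, fill_queue and run_client are invented for illustration): a server splits a job into independent work units, clients fetch units, compute locally, and return results for the server to aggregate.

```python
# Minimal, hypothetical sketch of a client-server work-distribution model.
# It is NOT the Folding@home protocol; it only illustrates a server handing
# independent work units to many clients and collecting their results.
import queue
import threading

class WorkUnit:
    """A self-contained piece of work the server hands to a client."""
    def __init__(self, unit_id, payload):
        self.unit_id = unit_id
        self.payload = payload

def fill_queue(work_queue, num_units):
    # Server side: split the overall job into independent units.
    for i in range(num_units):
        work_queue.put(WorkUnit(i, payload=list(range(i, i + 100))))

def run_client(work_queue, results, lock):
    # Client side: fetch a unit, compute locally, report the result back.
    while True:
        try:
            unit = work_queue.get_nowait()
        except queue.Empty:
            return
        value = sum(x * x for x in unit.payload)  # stand-in for real computation
        with lock:                                # "send" the result to the server
            results[unit.unit_id] = value

if __name__ == "__main__":
    work_queue, results, lock = queue.Queue(), {}, threading.Lock()
    fill_queue(work_queue, num_units=16)
    clients = [threading.Thread(target=run_client, args=(work_queue, results, lock))
               for _ in range(4)]
    for c in clients:
        c.start()
    for c in clients:
        c.join()
    print(f"server aggregated {len(results)} results")  # expect 16
```

The property that lets such a model scale to hundreds of thousands of clients is that the work units are independent, so clients communicate only with the server and never with each other.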
History
The first petascale (10^15 FLOPS) computer entered operation in 2008.[17] At a supercomputing conference in 2009, Computerworld projected exascale implementation by 2018.[18] In June 2014, the stagnation of the Top500 supercomputer list led observers to question the possibility of exascale systems by 2020.[19]
In June 2020,[25] the Japanese supercomputer Fugaku achieved 1.42 exaFLOPS using the alternative HPL-AI benchmark.
In 2022, the world's first public exascale computer, Frontier, was announced, achieving an Rmax of 1.102 exaFLOPS in June 2022.[8] As of November 2024, the world's fastest supercomputer is El Capitan at 1.742 exaFLOPS.[9]
United States
In January 2012, Intel purchased the InfiniBand product line from QLogic for US$125 million in order to fulfill its promise of developing exascale technology by 2018.[28]
By 2012, the United States had allotted $126 million for exascale computing development.[29]
On 29 July 2015, Barack Obama signed an executive order creating a National Strategic Computing Initiative calling for the accelerated development of an exascale system and funding research into post-semiconductor computing.[32] The Exascale Computing Project (ECP) aimed to build an exascale computer by 2021.[33]
On 18 March 2019, the United States Department of Energy and Intel announced that the first exaFLOPS supercomputer would be operational at Argonne National Laboratory by late 2022. The computer, named Aurora, was to be delivered to Argonne by Intel and Cray (now Hewlett Packard Enterprise), was expected to use Intel Xe GPGPUs alongside a future Xeon Scalable CPU, and was to cost US$600 million.[34][35]
On 7 May 2019, the U.S. Department of Energy announced a contract with Cray (now Hewlett Packard Enterprise) to build the Frontier supercomputer at Oak Ridge National Laboratory. Frontier was anticipated to be fully operational in 2022[36] and, with a performance of greater than 1.5 exaFLOPS, was expected to become the world's most powerful computer.[37]
On 4 March 2020, the U.S. Department of Energy announced a contract with Hewlett Packard Enterprise and AMD to build the El Capitan supercomputer at a cost of US$600 million, to be installed at Lawrence Livermore National Laboratory (LLNL). It is intended to be used primarily (but not exclusively) for nuclear weapons modeling. El Capitan had first been announced in August 2019, when the DOE and LLNL revealed the purchase of a Shasta supercomputer from Cray. At the time of the announcement, El Capitan was expected to become operational in early 2023 with a performance of 2 exaFLOPS, to use AMD CPUs and GPUs with four Radeon Instinct GPUs per EPYC Zen 4 CPU to speed up artificial intelligence tasks, and to consume around 40 MW of electric power.[38][39]
In May 2022, the United States had its first exascale supercomputer, Frontier. In June 2024, Argonne National Laboratory's Aurora became the country's second exascale computer, and El Capitan became operational five months later. As of November 2024, the United States remains the only country with exascale supercomputers on the Top500 list.
Japan
In Japan, in 2013, the RIKEN Advanced Institute for Computational Science began planning an exascale system for 2020, intended to consume less than 30 megawatts.[40] In 2014, Fujitsu was awarded a contract by RIKEN to develop a next-generation supercomputer to succeed the K computer. The successor, called Fugaku, was intended to have a performance of at least 1 exaFLOPS and to be fully operational in 2021. In 2015, Fujitsu announced at the International Supercomputing Conference that this supercomputer would use processors implementing the ARMv8 architecture with extensions it was co-designing with ARM Limited.[41] Fugaku was partially put into operation in June 2020[25] and achieved 1.42 exaFLOPS on the mixed-precision (fp16 with fp64) HPL-AI benchmark, making it the first supercomputer to reach 1 exaFLOPS on that benchmark.[42] Named after Mount Fuji, Japan's tallest peak, Fugaku retained the No. 1 position in the Top500 ranking announced on 17 November 2020, with a calculation speed of 442 quadrillion calculations per second, or 0.442 exaFLOPS.[43]
Japan is expected to have its first exascale supercomputer around May 2026, which would make it the second country, after the United States, to operate one; the United States itself had only one exascale supercomputer for around two years, between May 2022 and May 2024.[citation needed]
China
As of June 2022, China had two of the ten fastest supercomputers in the world. According to the national plan for the next generation of high performance computers and the head of the school of computing at the National University of Defense Technology (NUDT), China was to develop an exascale computer during the 13th Five-Year Plan period (2016–2020) that would enter service in the latter half of 2020.[44] The government of Tianjin Binhai New Area, NUDT and the National Supercomputing Center in Tianjin are working on the project. After Tianhe-1 and Tianhe-2, the exascale successor is planned to be named Tianhe-3. As of 2023, China is reported to have two operational exascale computers, Tianhe-3 and Sunway OceanLight, with a third being built; neither is listed on the Top500.[45][46]
Europe
In 2011, several projects aimed at developing technologies and software for exascale computing were started in the European Union, including the CRESTA project (Collaborative Research into Exascale Systemware, Tools and Applications),[47] the DEEP project (Dynamical ExaScale Entry Platform),[48] and the Mont-Blanc project.[49] A major European project based on the exascale transition is MaX (Materials at the Exascale).[50] The Energy oriented Centre of Excellence (EoCoE) exploits exascale technologies to support carbon-free energy research and applications.[51]
In 2015, the Scalable, Energy-Efficient, Resilient and Transparent Software Adaptation (SERT) project, a major research collaboration between the University of Manchester and the STFC Daresbury Laboratory in Cheshire, was awarded c. £1 million by the United Kingdom's Engineering and Physical Sciences Research Council (EPSRC). The project was due to start in March 2015, funded by EPSRC under the Software for the Future II programme, in partnership with the Numerical Analysis Group (NAG), Cluster Vision and the Science and Technology Facilities Council (STFC).[52]
On 28 September 2018, the European High-Performance Computing Joint Undertaking (EuroHPC JU) was formally established by the EU, with the aim of building an exascale supercomputer by 2022/2023. It is jointly funded by its public members with a budget of around €1 billion, of which the EU's financial contribution is €486 million.[53][54]
In March 2023, the government of the United Kingdom announced it would invest £900 million in the development of an exascale computer.[55] The project was cancelled in August 2024.[56]
Taiwan
In June 2017, Taiwan's National Center for High-Performance Computing began working towards the first Taiwanese exascale supercomputer by funding construction of an intermediary supercomputer based on a full technology transfer from Fujitsu of Japan, which was at the time building Japan's fastest and most powerful AI supercomputer.[57][58][59][60][61]
Numerous other independent efforts in Taiwan have focused on the rapid development of exascale supercomputing technology. For example, Foxconn designed and built the largest and fastest supercomputer in Taiwan, intended as a stepping stone in research and development towards the design and construction of a state-of-the-art exascale supercomputer.[62][63][64][65]
India
In 2012, the Indian Government proposed to commit US$2.5 billion to supercomputing research during the 12th Five-Year Plan period (2012–2017). The project was to be handled by the Indian Institute of Science (IISc), Bangalore.[66] It was later revealed that India planned to develop a supercomputer with processing power in the exaFLOPS range,[67] to be developed by C-DAC within five years of approval[68] using microprocessors indigenously developed by C-DAC in India.[69] In a 2023 presentation, C-DAC set out plans for an indigenously developed exascale supercomputer named Param Shankh, powered by an indigenous 96-core, ARM-based processor nicknamed AUM (ॐ).[70]
Anderson, Mark (7 January 2020). "Full Page Reload". IEEE Spectrum: Technology, Engineering, and Science News. Archived from the original on 24 June 2020. Retrieved 6 July 2020.
Gropp, William (2009). "MPI at Exascale: Challenges for Data Structures and Algorithms". Recent Advances in Parallel Virtual Machine and Message Passing Interface. Lecture Notes in Computer Science. Vol. 5759. Berlin: Springer. p. 3. Bibcode:2009LNCS.5759....3G. doi:10.1007/978-3-642-03770-2_3. ISBN 978-3-642-03769-6.