Supercomputing
Copyright 2022, Faulkner Information Services. All
Rights Reserved.
Docid: 00018025
Publication Date: 2205
Publication Type: TUTORIAL
Preview
Supercomputing is a form of electronic data processing in which
ultra-fast computers, known appropriately as supercomputers, are used to
conduct scientific, engineering, or other research. Found
primarily in universities, government labs, and private sector R&D
centers, supercomputers are often employed in the testing of mathematical
models for complex physical phenomena or industrial designs.
Report Contents:
- Executive Summary
- Related Reports
- Supercomputing Basics
- Supercomputing Applications
- Supercomputing Trends
- Web Links
Executive Summary
Supercomputing is a form of electronic data processing in which
ultra-fast computers, known appropriately as supercomputers, are used to
conduct scientific, engineering, or other research.
Deployed primarily in universities, government labs, and private sector
R&D centers, supercomputers are often employed in the testing of
mathematical models for complex physical phenomena or industrial designs.1
Related Faulkner Reports:
- IBM Grid Computing Product
- Quantum Computing Tutorial
- Hewlett-Packard Enterprise Company Profile Vendor
Famously associated with long-term weather forecasting and other
real-world simulations, supercomputers, like the NASA model pictured in
Figure 1, have broad application in areas such as:
- Climate change prediction;
- Aircraft and nuclear reactor design;
- Pharmaceutical development;
- Cryptography; and
- Basic research into the evolution of the cosmos, from the Big Bang to
present day.2
Figure 1. NASA Pleiades Supercomputer
Source: NASA
New developments in supercomputing have focused on making the technology
more accessible, with cloud-based supercomputing becoming more of a
reality.
Recently cracking the Top 10 in supercomputer performance – as
recognized in the 58th edition of the “TOP500” list, published November
2021 – is a Microsoft Azure system called Voyager-EUS2, which is “based on
an AMD EPYC processor with 48 cores and 2.45GHz working together with an
NVIDIA A100 GPU and 80 GB of memory.”3
Supercomputing Basics
Supercomputing Glossary
Supercomputing is frequently confused with other, similar concepts. Consequently, a few definitions are in order.
Mainframe Computing
– Mainframe computing features
high-performance computers (mainframes) boasting large amounts of memory
and large numbers of processors.4 Today, mainframes are
largely responsible for the computational heavy lifting required by
data-intensive industries like banking, insurance, and retail. By
one estimate, “[one] mainframe can process 2.5 billion transactions in a
single day, which is the equivalent of handling 100 Cyber Mondays – and
over $790 billion – on one system.”5
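Taken at face value, the quoted throughput works out to roughly 29,000 sustained transactions per second around the clock. A quick back-of-the-envelope check (the daily figure is the article's; the arithmetic is illustrative):

```python
# Back-of-the-envelope check of the quoted mainframe figure:
# 2.5 billion transactions in a single 24-hour day.
transactions_per_day = 2.5e9
seconds_per_day = 24 * 3600

tps = transactions_per_day / seconds_per_day
print(round(tps))  # roughly 28,935 sustained transactions per second
```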
High Performance Computing (HPC)
– HPC refers to the use of
multiple supercomputers to process “complex and large calculations.” The term HPC, however, is often used interchangeably with supercomputing.6
Parallel Processing
– Parallel processing occurs when “multiple
CPUs work on solving a single calculation at a given time.” While
supercomputers can invoke parallel processing, so too can high performance
computers.7
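The idea of multiple CPUs working on a single calculation can be sketched in a few lines of Python: one computation (here, a sum of squares) is split into chunks that separate worker processes attack simultaneously. This is an illustrative sketch, not supercomputer code; the chunking scheme and worker count are arbitrary choices.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over one chunk of the overall range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Split one calculation into chunks that multiple CPUs work on at once.
    step = n // workers
    chunks = [(w * step, (w + 1) * step) for w in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1_000_000))
```

The partial results are combined at the end, which is the essential shape of parallel processing regardless of scale.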
Grid Computing
– As described by analysts Robert Sheldon and Jim
O’Reilly, “Grid computing is a system for connecting a large number of
computer nodes into a distributed architecture that delivers the compute
resources necessary to solve complex problems. The nodes can include
servers or personal computers that are loosely linked together by the
internet or other networks and, in many cases, distributed across multiple
geographic regions. Grid computing uses the resources available to
each node to run independent tasks that contribute to the larger
endeavor.”8
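In miniature, the model Sheldon and O'Reilly describe is a pool of loosely coupled nodes, each running independent tasks that contribute to a larger result. The sketch below simulates the "nodes" with threads on one machine; a real grid would dispatch the same independent work units over a network.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def work_unit(task_id):
    # Each task is self-contained: no task needs another task's result.
    return task_id, sum(range(task_id * 100))

def run_grid(num_tasks, num_nodes=8):
    """Fan independent tasks out across the 'nodes' and collect results."""
    results = {}
    with ThreadPoolExecutor(max_workers=num_nodes) as nodes:
        futures = [nodes.submit(work_unit, t) for t in range(num_tasks)]
        for future in as_completed(futures):
            task_id, value = future.result()
            results[task_id] = value
    return results

print(len(run_grid(20)))  # all 20 independent results collected
```

Because the tasks never depend on one another, it does not matter that nodes finish out of order or sit in different places, which is what distinguishes grid computing from the tightly coupled parallelism of a single supercomputer.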
Quantum Computing
– Quantum computing leverages certain phenomena,
like superposition and entanglement that occur at the subatomic level, to
solve problems. As detailed by the Institute for Quantum Computing
at the University of Waterloo:
- “Superposition is essentially the ability of a quantum system to be in
multiple states at the same time.”
- “Entanglement is an extremely strong correlation that exists between
quantum particles – so strong, in fact, that two or more quantum particles
can be inextricably linked in perfect unison, even if separated by great
distances.”9
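Superposition can be made concrete with a little linear algebra. The NumPy sketch below builds the two-qubit Bell state (|00⟩ + |11⟩)/√2: the system carries equal amplitude in two basis states at once, and the measurement outcomes of the two qubits are perfectly correlated. This simulates the mathematics on a classical machine, not quantum hardware.

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>) / sqrt(2): a superposition of
# the |00> and |11> basis states with equal amplitude.
bell = np.zeros(4)
bell[0b00] = 1 / np.sqrt(2)  # amplitude of |00>
bell[0b11] = 1 / np.sqrt(2)  # amplitude of |11>

# Measurement probabilities are the squared amplitudes.
probs = bell ** 2
# Only |00> and |11> are ever observed: measuring either qubit as 0
# forces the other to 0, and likewise for 1 -- the entanglement correlation.
print(probs)
```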
Supercomputing Technology
Unlike traditional computers, supercomputers, as defined by IBM, “use
more than one central processing unit (CPU). These CPUs are grouped
into compute nodes, comprising a processor or a group of processors –
symmetric multiprocessing (SMP) – and a memory block. At scale, a
supercomputer can contain tens of thousands of nodes. With
interconnect communication capabilities, these nodes can collaborate on
solving a specific problem. Nodes also use interconnects to
communicate with I/O systems, like data storage and networking.
“Supercomputing is measured in floating-point operations per second
(FLOPS). Petaflops are a measure of a computer’s processing speed
equal to a thousand trillion flops. And a 1-petaflop computer system
can perform one quadrillion (10^15) flops. From a different perspective,
supercomputers can [deliver] one million times more processing power than
the fastest laptop.”10
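The petaflop arithmetic translates directly into wall-clock estimates. The sketch below compares an idealized 1-petaflop machine against a hypothetical 100-gigaflop laptop on the same workload; both performance figures are illustrative assumptions, and real runtimes are dominated by I/O and memory, not raw flops.

```python
PFLOPS = 1e15  # one petaflop = one quadrillion (10**15) flops

def runtime_seconds(total_flops, flops_per_second):
    """Idealized wall-clock time, ignoring I/O and memory bottlenecks."""
    return total_flops / flops_per_second

workload = 3.6e18            # hypothetical job: 3.6 quintillion operations
laptop = 100e9               # assumed ~100 gigaflops for a fast laptop
one_petaflop_machine = 1 * PFLOPS

print(runtime_seconds(workload, laptop) / 3600)                # 10,000 hours
print(runtime_seconds(workload, one_petaflop_machine) / 3600)  # 1 hour
```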
Supercomputing Market
As revealed by research conducted by Mordor Intelligence, the market for
supercomputers is expected to expand at a compound annual growth rate
(CAGR) of 9.49 percent during the period from 2022 to 2027. Prominent players in a fairly concentrated supercomputer market include:
- HP Enterprise
- Atos
- Dell
- Fujitsu
- IBM
- Lenovo
- NEC11
Supercomputer Performance
According to the 58th edition of the TOP500 list – a sort of
Fortune 500 list for supercomputer performance – the “Fugaku
supercomputer” holds the No. 1 position. Jointly developed by RIKEN
and Fujitsu, the Fugaku has successfully retained the top spot for four
consecutive rankings.
Interestingly from a cloud-based supercomputer perspective, coming in
tenth is a Microsoft Azure system called Voyager-EUS2, which is “based on
an AMD EPYC processor with 48 cores and 2.45GHz working together with an
NVIDIA A100 GPU and 80 GB of memory.”12
Supercomputing Applications
Famously associated with long-term weather forecasting and other
real-world simulations, supercomputers have broad application in areas
such as:
- Climate change prediction;
- Aircraft and nuclear reactor design;
- Pharmaceutical development;
- Cryptography; and
- Basic research into the evolution of the cosmos, from the Big Bang to
present day.13
Other use cases include:
Combating Cancer
IBM sees supercomputing-enabled artificial intelligence as a vehicle for
mining cancer data. “Machine learning algorithms will help supply
medical researchers with a comprehensive view of the US cancer population
at a granular level of detail.”14
Identifying Next-Generation Materials
IBM also views super AI as an instrument for advancing materials research
and management. “Deep learning could help scientists identify
materials for better batteries, more resilient building materials, and
more efficient semiconductors.”15
Creating an AI Supercomputer
Meta (formerly Facebook) is developing an “AI supercomputer.” Analysts Arunima Sarkar and Nikhil Malhotra report that “The first phase
of its creation is already complete, and by the end of 2022 the second
phase is expected to be finished. At that point, Meta’s
supercomputer will contain some 16,000 total GPUs, and the company has
promised that it will be able to train AI systems with more than a
trillion parameters on data sets as large as an exabyte – or one thousand
petabytes.
“Meta has promised a host of revolutionary uses of its supercomputer,
from ultra-fast gaming to instant and seamless translation of
mind-bendingly large quantities of text, images and videos at once – think
about a group of people simultaneously speaking different languages, and
being able to communicate seamlessly. It could also be used to scan
huge quantities of images or videos for harmful content, or identify one
face within a huge crowd of people.”16
Calculating Pi to a Ridiculous Level of Precision
Researchers at the Swiss university Fachhochschule Graubünden have
calculated pi (the ratio of a circle’s circumference to its diameter) to a
staggering 62.8 trillion digits, besting the previous record of 50
trillion digits. More remarkably, they completed their computations
nearly four times faster than the existing record holders.
While such an exercise may seem silly, there is a proverbial “method to
the madness.” As Project Leader Thomas Keller explains, “The number of pi
is (except for a few very well-known digits) irrelevant to us and probably
to anyone else in science and mathematics. For us, the record is a
byproduct of tuning our system for future computation tasks.”
As analyst Caroline Delbert adds, “calculating pi has become a way for
computers to flex their computational abilities, as programmers look
toward extremely resource-intensive tasks, like modeling the universe or
even making high-performance imagined worlds in video games.”17
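Record attempts rely on far more sophisticated software and hardware, but the underlying idea can be sketched with the Chudnovsky series, the family of formulas behind modern digit-count records, using only Python's standard decimal module. This is a toy for tens of digits, not trillions.

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    """Approximate pi via the Chudnovsky series (~14 new digits per term)."""
    getcontext().prec = digits + 10          # extra guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(13591409)
    for i in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3      # exact integer recurrence
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return str(C / S)[: digits + 1]          # truncate to requested length

print(chudnovsky_pi(30))
```

Scaling the same mathematics to trillions of digits is exactly the kind of memory- and I/O-bound stress test Keller's team used it for.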
Supercomputing Trends
“The operating power of supercomputers has
increased something like a trillion times over the last 40 years.”
— John Spizzirri18
Solving Complex Problems
In reflecting on the evolution of supercomputing, Salman Habib, Director
of Argonne’s Computational Science division, observes that “The big
difference between the supercomputers of the past and today’s
supercomputers is that the early supercomputers could solve certain types
of problems very quickly, but you couldn’t solve very complex
problems. The supercomputers of today can do that.”19
The COVID-19 Infusion
Although the COVID-19 pandemic appears to be receding, COVID-related
research is still fueling demand for supercomputers, artificial
intelligence, and machine learning. At issue is the concern over new virus
variants, and the potential for completely new viruses.
Promoting Expanded Access
As analyst David A. Bader reminds us, supercomputing, like all
information technology, has a political component. “[The]
problem-solving capabilities of supercomputers will only improve as more
people gain access to and learn to use the technologies. Women and
other underrepresented groups in STEM fields currently have limited access
to the power of supercomputing, and the high-performance computing field
is currently losing out on important perspectives.
“Increasing investment and expanding the user base of supercomputers
helps drive innovation and improvement forward in academia, government,
and the private sector. If we can’t get advanced supercomputers in
the hands of more people, the US will fall behind globally in solving some
of tomorrow’s most pressing problems.”20
Edge-Filtered Data
There is a growing consensus that edge computing can help facilitate
supercomputing.
Edge computing involves the positioning of compute, storage, and
networking resources proximate to the end users they serve and the various
devices these end users employ. A rapidly emerging field, especially
given the exponential growth of the Internet of Things (IoT) and the
resultant proliferation of edge devices like smartphones and smart
sensors, edge computing is demanding the attention of enterprise
technologists, just as cloud computing did a decade ago.
The impetus behind edge computing is the realization that some data, like
data generated by autonomous automobile sensors, must be processed
immediately. It cannot be shuttled to a cloud repository, processed
by a backend analytics package, and returned to an automotive steering
system, at least not in time to prevent an accident. The data must
be processed on the spot, or “at the network edge.” The situation is
analogous to a paramedic attending an accident victim. The paramedic
can relay the patient’s vital signs to the hospital, but must act
immediately to stop any major bleeding.
Where edge computing and supercomputing intersect is the potential for
edge systems to regulate supercomputer input, “[filtering] only the
important or interesting data to [enterprise] supercomputers for
heavy-duty analysis.” This will, of course, enhance supercomputer
efficiency since I/O is always the major bottleneck.21
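At the code level, the filtering idea is very simple: a hypothetical edge node inspects a sensor stream locally and forwards only anomalous readings upstream for heavy-duty analysis. The names, baseline, and threshold below are illustrative assumptions, not a real protocol.

```python
def edge_filter(readings, baseline=20.0, threshold=3.0):
    """Yield only readings that deviate sharply from the baseline."""
    for reading in readings:
        if abs(reading - baseline) > threshold:
            yield reading

# Simulated sensor stream: mostly routine values, two anomalies.
stream = [20.1, 19.8, 35.2, 20.3, 4.7, 20.0]
to_supercomputer = list(edge_filter(stream))
print(to_supercomputer)  # only the anomalous readings leave the edge
```

Everything routine is discarded at the edge, so the supercomputer's I/O channels carry only the data worth analyzing.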
Cloud-Based Supercomputing
Eventually, everything – software, hardware, services – moves to the
cloud. So, too, does supercomputing.
As analyst Joris Poort points out, “While the cloud is now ubiquitous in
enterprise computing, there is one area where the shift to cloud has only
just quietly begun: supercomputing. [Supercomputers] were once
available only to governments, research universities, and the most
well-heeled corporations, and were used for cracking enemy codes,
simulating weather, and designing nuclear reactors. But today, the
cloud is bringing supercomputing into the mainstream.
“This transition has the potential to accelerate (or disrupt) how
businesses deliver complex engineered products, from designing rockets
capable of reaching space and supersonic jets to creating new drugs and
discovering vast pools of oil and gas hidden deep underground.”22
Super Cyber Concerns
As supercomputing becomes more mainstream, it becomes more vulnerable to
mainstream threats. According to Mordor Intelligence, “in May 2020,
multiple supercomputers across Europe were infected with cryptocurrency
mining malware. Security incidents have been reported in the United
Kingdom, Germany, and Switzerland, while similar [intrusions were]
reported in computing centers in Spain. Such incidents are forcing
companies to focus on the security aspect of supercomputers.”23
Web Links
- Continuity Central: http://www.continuitycentral.com/
- National Center for Supercomputing Applications: http://www.ncsa.illinois.edu/
- SANS Institute: http://www.sans.org/
- TOP500: http://www.top500.org/
- US National Institute of Standards and Technology: http://www.nist.gov/
References
1-2 Britannica.com. 2022.
3 “Still Waiting for Exascale: Japan’s Fugaku Outperforms All
Competition Once Again.” TOP500.org. 2021.
4-5 “The Rockstar in Your Data Center: The IBM Z Mainframe.”
Rocket Software, Inc. 2021:5.
6-7 “What Is Supercomputing?” IBM Corporation. 2022.
8 Robert Sheldon and Jim O’Reilly. “Grid Computing.” TechTarget. December 2021.
9 “Quantum Computing 101.” Institute for Quantum Computing,
University of Waterloo.
10 “What Is Supercomputing?” IBM Corporation. 2022.
11 “Supercomputers Market – Growth, Trends, COVID-19 Impact,
and Forecasts (2022 – 2027).” Mordor Intelligence LLP. February 2022.
12 “Still Waiting for Exascale: Japan’s Fugaku Outperforms All
Competition Once Again.” TOP500.org. 2021.
13 Britannica.com. 2022.
14-15 “What Is Supercomputing?” IBM Corporation. 2022.
16 Arunima Sarkar and Nikhil Malhotra. “Supercomputers, AI and
the Metaverse: Here’s What You Need to Know.” World Economic Forum.
February 4, 2022.
17 Caroline Delbert. “A Supercomputer Calculated Pi to a
Record 62.8 Trillion Digits.” Popular Mechanics. August 18, 2021.
18-19 John Spizzirri. “The Age of Exascale and the Future of
Supercomputing.” Argonne National Laboratory. November 15, 2021.
20 David A. Bader. “The Future of Supercomputers:
Democratization Is Critical.” InformationWeek | Informa PLC. June 4, 2021.
21 Oliver Peckham. “The Case for an Edge-Driven Future for
Supercomputing.” HPCwire. September 24, 2021.
22 Joris Poort. “How Cloud-Based Supercomputing Is Changing
R&D.” Harvard Business Review | Harvard Business School Publishing.
November 29, 2021.
23 “Supercomputers Market – Growth, Trends, COVID-19 Impact,
and Forecasts (2022 – 2027).” Mordor Intelligence LLP. February 2022.
About the Author
James G. Barr is a leading business continuity analyst
and business writer with more than 40 years’ IT experience. A member of
“Who’s Who in Finance and Industry,” Mr. Barr has designed, developed, and
deployed business continuity plans for a number of Fortune 500 firms. He
is the author of several books, including How to Succeed in Business
by Really Trying, a member of Faulkner’s Advisory Panel, and a
senior editor for Faulkner’s Security Management Practices.
Mr. Barr can be reached via e-mail at jgbarr@faulkner.com.