The differences in speeds between these computers are striking, but the processes for most of our computing machinery are essentially the same. Both minivans and Formula One racers convert energy into motion, but it’s the wait at the finish line that justifies the difference in engineering costs.


18 ZEROS

Like miles per hour or zero-to-sixty benchmarks, there are standard measures for computer speeds, and it’s these numbers that create the labels for the different types. General-purpose computers use millions of instructions per second (MIPS) as markers, but speeds for the supercomputer class are measured in floating-point operations per second (FLOPS). Speeds can reach 10¹⁷ FLOPS, or one hundred quadrillion operations per second (written in shorthand as 100 petaFLOPS). Yet as inconceivable as any activity at that number of events might be, there’s still a ceiling above that.
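
The shorthand is just SI prefixes applied to powers of ten. Here is a minimal sketch in Python; the function name and prefix table are my own, for illustration:

```python
# Converting a raw FLOPS count into its SI-prefixed shorthand.
# The prefix table is standard; the example value (10**17, i.e.,
# 100 petaFLOPS) comes from the text above.
PREFIXES = [
    (10**18, "exaFLOPS"),
    (10**15, "petaFLOPS"),
    (10**12, "teraFLOPS"),
    (10**9, "gigaFLOPS"),
    (10**6, "megaFLOPS"),
]

def flops_shorthand(flops: float) -> str:
    """Return a human-readable label for a FLOPS figure."""
    for scale, name in PREFIXES:
        if flops >= scale:
            return f"{flops / scale:g} {name}"
    return f"{flops:g} FLOPS"

print(flops_shorthand(10**17))  # -> "100 petaFLOPS"
print(flops_shorthand(10**18))  # -> "1 exaFLOPS"
```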

Any supercomputer that’s able to calculate at least 10¹⁸ FLOPS qualifies as an exascale system. Seventeen zeros, and it’s still a supercomputer; eighteen, and it’s in exascale territory. For a surreal human comparison, Network World offered this scenario: “To match what a one ExaFLOP computer can do in just one second, you’d have to perform one calculation every second for 31,688,765,000 years.”
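
That figure checks out with simple arithmetic. A minimal sketch in Python, assuming a 365.25-day year:

```python
# Checking the Network World scenario: at one calculation per second,
# how long would it take to match one second of work by a 10**18 FLOPS
# (one exaFLOP) machine? Assumes a 365.25-day year.
calculations = 10**18                     # one second of exascale work
seconds_per_year = 365.25 * 24 * 60 * 60  # 31,557,600 seconds
years = calculations / seconds_per_year
print(f"{years:,.0f} years")  # -> 31,688,738,507 years, matching the quoted figure to within rounding
```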

But supercomputers and exascale computers aren’t distinct machines. A dramatic example of this was reported by Andy Patrizio of Network World in April 2020. He described a project called Folding@Home, in which 700,000 volunteers offered idle time on their home computers for molecular research on a biological process called protein folding. The combined off-hours power of all those home machines set a record. Patrizio reported, “While the supercomputing stalwarts continue to build their systems, Folding@Home just crossed the exaFLOP barrier ahead of IBM, Intel, Nvidia, and the Department of Energy.” The project’s peak performance of 1.5 exaFLOPs “makes it more than seven times faster than the world’s fastest supercomputer, Summit, at the Oak Ridge National Laboratory.” The feat was accomplished by home PCs that weren’t even aware of one another within the system.
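
To get a feel for how ordinary machines add up to such a number, here is a back-of-the-envelope sketch; the per-machine throughput is an assumption of mine (roughly a midrange GPU), not a figure from the article:

```python
# Rough sketch: the aggregate throughput of a volunteer network is just
# the sum of its members' contributions. The ~2 teraFLOPS-per-machine
# figure (roughly a midrange GPU) is an illustrative assumption, not a
# number from the article.
volunteers = 700_000
flops_per_machine = 2e12                    # assumed ~2 teraFLOPS each
aggregate = volunteers * flops_per_machine  # 1.4e18 FLOPS
print(f"{aggregate / 1e18:.1f} exaFLOPS")   # -> "1.4 exaFLOPS", near the reported 1.5 peak
```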


TOP500

To keep track of progress in the growing field of supercomputers, the website TOP500 publishes two lists each year, one in the summer and one in the fall, ranking the top 500 supercomputers in the world. In the most recent fall list, number one is Japan’s Fugaku supercomputer running on 7,630,848 cores. Among the top five, the United States has numbers two (IBM’s Summit at the Oak Ridge National Laboratory), three (IBM’s Sierra), and five (NVIDIA’s Selene), and China’s Sunway TaihuLight at the National Supercomputing Center in Wuxi is the world’s number four. Rounding out the top 10 are one each from the U.S., China, Germany, Italy, and Saudi Arabia. TOP500 also offers statistics on efficiency, power consumption, and performance development.


MOORE'S LAW

Almost from the beginning, the standard measure for development in computing came from Intel cofounder Gordon Moore. He projected that the number of transistors on a microchip would double every two years, while the cost of computing would halve even as speeds increased.
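
The projection compounds quickly. A minimal sketch of the arithmetic, with illustrative starting values of my own rather than historical data:

```python
# Moore's projection as arithmetic: transistor counts double every two
# years, so count(t) = count(0) * 2**(t / 2). The starting count and
# time span below are illustrative choices, not historical data.
def moore(start_count: int, years: float, doubling_period: float = 2.0) -> float:
    return start_count * 2 ** (years / doubling_period)

# e.g., 2,300 transistors (on the order of the earliest commercial
# microprocessors) doubled every two years for 40 years:
print(f"{moore(2_300, 40):,.0f}")  # -> 2,411,724,800
```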

That measure no longer works. According to Lori Diachin, deputy director of the Department of Energy’s Exascale Computing Project, “We’ve come to the end of an era in which computing speeds can be doubled every 18 months by adding more and more transistors to the silicon chip. Transistors have become so small that they have reached their physical limits.… [E]xascale computing needs to tap into the latest innovations in microarchitectures, like increasing the number of processing cores.” Consider the 7.6 million cores in the current number-one supercomputer, Fugaku.
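
With per-core speeds flat, aggregate performance now scales with core count. A back-of-the-envelope sketch; the per-core throughput below is an assumption of mine, not a published specification:

```python
# With per-core speeds plateaued, peak performance scales with core
# count: aggregate = cores * per-core FLOPS. The per-core figure is an
# illustrative assumption, not a published Fugaku specification.
cores = 7_630_848      # Fugaku's core count, from the TOP500 list
flops_per_core = 60e9  # assumed ~60 gigaFLOPS per core
print(f"{cores * flops_per_core / 1e15:.0f} petaFLOPS")  # -> roughly 458 petaFLOPS
```

The result lands in the same ballpark as Fugaku’s measured performance, which is the point: the number of cores, not the speed of any one of them, now drives the headline figure.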

Intel’s Aurora computer is expected to launch in 2021 with exascale computing capability. Perhaps the significance of exascale’s arrival is best seen in the belief, shared by many researchers, that exascale machines approach the estimated processing power of the human brain operating at the neural level. The long-sought mapping of the brain’s neural connectome, the collective neural pathways, might be on the horizon.
