At a networking event, I once told a fellow industrial engineer that I work on supercomputers.

He replied, “Nice! I should get a couple of those for the brewery I work at.”

I was puzzled, smiled politely, and moved on. Later, it hit me: he probably imagined a supercomputer as some kind of souped‑up laptop—RGB lights, absurd specs, maybe liquid cooling if things got really fancy. That confusion is understandable. My parent company makes laptops and printers, after all.

The truth is, most people outside IT—even engineers—don’t really know what a supercomputer (a.k.a. High Performance Computing, or HPC) actually is. So depending on who I’m talking to, I explain it very differently.

Explaining Supercomputers to a 12‑Year‑Old

If I were explaining this to my 12‑year‑old—who knows gaming laptops pretty well—I’d say something like:

“You know how your gaming laptop has around 10 CPU cores and maybe an NVIDIA GPU?
The supercomputers I work on have over a million CPU and GPU cores combined, consume enough electricity to power a small town, and circulate about 6,000 gallons of water every minute just to stay cool.”

That usually does the trick.

And yes, they cost hundreds of millions of dollars, so unfortunately they don’t fit into your local brewery’s budget for discovering the next great IPA.

Who Owns Supercomputers?

Most of the world’s top supercomputers belong to the richest entities on the planet. In the U.S., that’s primarily the federal government—specifically the Department of Energy—which sponsors several national laboratories in partnership with universities.

Other federal governments run their own systems, and a few extremely wealthy corporations do as well. (Saudi ARAMCO, for example, has its own supercomputers.) These machines are used for everything from climate modeling and nuclear simulations to materials science, astrophysics, and AI research.

How Do You Measure “Fast”?

You can’t exactly run Crysis on a supercomputer and call it a benchmark.

Instead, there’s a project called Top500, started in 1993, which ranks and documents the 500 most powerful non‑distributed computer systems in the world. The list is published twice a year, and rankings are based on a benchmark called HPL (High Performance Linpack).

HPL measures how fast a computer can solve a massive math problem: a dense system of linear equations.

In very simple terms, it solves something like:

A · x = b

Where A is a huge, dense matrix of known coefficients, x is the vector of unknowns, and b is a known right‑hand‑side vector.
To solve this, HPL mainly uses Gaussian elimination—a systematic way of eliminating variables until the solution falls out. (Math teachers everywhere nod approvingly.)
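For the curious, here's the same math in miniature: a toy Gaussian elimination solver with partial pivoting, in plain Python. (HPL itself is heavily optimized C running over MPI; this sketch only illustrates the underlying algorithm on a 3×3 system.)

```python
# Toy Gaussian elimination with partial pivoting, solving A x = b.
# HPL does the same mathematics, on a dense matrix with millions of
# rows, spread across thousands of nodes.

def solve(A, b):
    """Solve A x = b via Gaussian elimination with partial pivoting."""
    n = len(A)
    A = [row[:] for row in A]  # work on copies, leave caller's data intact
    b = b[:]
    for k in range(n):
        # Partial pivoting: bring the largest entry in column k to the top.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k from every row below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back-substitution on the now upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0, -1.0],
     [-3.0, -1.0, 2.0],
     [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(solve(A, b))  # x = 2, y = 3, z = -1
```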

Parallelism: One Problem, Thousands of Brains

Of course, no single computer handles the entire matrix.

Instead, the matrix is chopped into blocks. Each compute node (or group of CPU/GPU cores) works on its own chunk at the same time. Nodes perform calculations locally, eliminate rows, update values—and then share updated data with other nodes over ultra‑fast interconnects like Slingshot or InfiniBand.
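Here's a rough sketch of that chopping step in Python. This is purely illustrative: it does a simple block split onto a 2×2 "node" grid, whereas real HPL uses a fancier block‑cyclic layout to keep every node busy throughout the run.

```python
# Conceptual sketch (not actual HPL code): chop an n x n matrix into a
# grid of blocks, the way a matrix gets distributed across compute nodes.
# Each "node" here is just a (row, col) grid position holding its block.

def partition(matrix, grid):
    """Split a square matrix into grid x grid equal blocks."""
    n = len(matrix)
    bs = n // grid  # block size; assumes grid divides n evenly
    blocks = {}
    for br in range(grid):
        for bc in range(grid):
            blocks[(br, bc)] = [row[bc * bs:(bc + 1) * bs]
                                for row in matrix[br * bs:(br + 1) * bs]]
    return blocks

# An 8x8 matrix split across a 2x2 "node" grid: four 4x4 blocks.
M = [[r * 8 + c for c in range(8)] for r in range(8)]
blocks = partition(M, 2)
print(len(blocks))           # 4 blocks, one per node
print(blocks[(1, 0)][0][0])  # top-left element of the bottom-left block: 32
```

In the real benchmark, each elimination step forces the nodes holding the pivot row and column to broadcast their updates to everyone else, which is exactly why the interconnect matters so much.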

When you have thousands of nodes talking to each other constantly, you need a common language. That’s where MPI (Message Passing Interface) comes in. MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation."

MPI is the de‑facto standard for parallel programming in HPC. If you’re running HPL, you’re running MPI.

The final HPL result is reported as floating‑point operations per second—FLOPS—scaled up to:

gigaFLOPS (10⁹)
teraFLOPS (10¹²)
petaFLOPS (10¹⁵)
exaFLOPS (10¹⁸)

(Each step is a thousand times bigger than the last, just in case your brain wasn’t already hurting.)
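As a quick illustration, here is a tiny (hypothetical) helper that walks up that prefix ladder, dividing by 1,000 at each rung:

```python
# Illustrative only: convert a raw FLOPS count into the usual HPC prefixes.
PREFIXES = ["", "kilo", "mega", "giga", "tera", "peta", "exa", "zetta"]

def describe(flops):
    """Return a human-readable FLOPS figure, e.g. 2.5e15 -> '2.5 petaFLOPS'."""
    value, idx = float(flops), 0
    while value >= 1000 and idx < len(PREFIXES) - 1:
        value /= 1000  # each prefix is a factor of a thousand
        idx += 1
    return f"{value:g} {PREFIXES[idx]}FLOPS"

print(describe(10**18))  # "1 exaFLOPS"
```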

A Bit of Supercomputing History

The very first system on the Top500 list was the CM‑5 (Connection Machine) from Thinking Machines Corporation in 1993.

Fun fact: a CM‑5 appeared in the movie Jurassic Park, sitting ominously in the control room with red lights glowing in the background. Clearly, Hollywood knew powerful computers should always look slightly dangerous.

Welcome to the Exascale Era

The world’s first exascale supercomputer, Frontier, crossed the exaflop barrier on May 30, 2022.

Frontier is liquid‑cooled using four 350‑horsepower pumps, circulating about 6,000 gallons of non‑pre‑chilled water per minute. This allows roughly five times the compute density of traditional air‑cooled designs.

It consumes about 40 megawatts of power—roughly what 15,000 single‑family homes would use.

An exaflop is:

1,000,000,000,000,000,000 operations per second (10¹⁸ FLOPS)

To put that in perspective: if every person on Earth performed one calculation per second, it would take over four years to match what an exascale computer does in one second.
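You can sanity-check that claim in a few lines of Python, assuming a world population of roughly 7.9 billion (the 2022 figure):

```python
# Back-of-the-envelope check of the "four years" claim.
EXAFLOP = 10**18              # operations per second
POPULATION = 7_900_000_000    # assumed: one calculation per person per second
seconds = EXAFLOP / POPULATION          # time for humanity to match 1 second
years = seconds / (365 * 24 * 3600)
print(f"{years:.2f} years")             # just over four years
```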

The Top Exascale Systems (as of November 2025)

El Capitan, Frontier, and Aurora are the three exascale supercomputers installed at U.S. Department of Energy laboratories. Together with Europe’s JUPITER, they are currently the only exascale machines on Earth.

Rank 1: El Capitan (HPE Cray EX255a)
Site: Lawrence Livermore National Laboratory, California, USA
Total cores: 11,340,000
Performance: 1.809 exaflops
CPU: AMD 4th Gen EPYC 24C 1.8GHz
Accelerator: AMD Instinct MI300A
Operating system: TOSS*
Interconnect: Slingshot-11

Rank 2: Frontier (HPE Cray EX235a)
Site: Oak Ridge National Laboratory, Tennessee, USA
Total cores: 9,066,176
Performance: 1.353 exaflops
CPU: AMD Optimized 3rd Generation EPYC 64C 2GHz
Accelerator: AMD Instinct MI250X
Operating system: HPE Cray OS
Interconnect: Slingshot-11

Rank 3: Aurora (HPE Cray EX - Intel Exascale Compute Blade)
Site: Argonne Leadership Computing Facility, Illinois, USA
Total cores: 9,264,128
Performance: 1.012 exaflops
CPU: Intel Xeon CPU Max 9470 52C 2.4GHz
Accelerator: Intel Data Center GPU Max
Operating system: SLES
Interconnect: Slingshot-11

Rank 4: JUPITER (BullSequana XH3000)
Site: Jülich Supercomputing Centre, Germany
Total cores: 4,801,344
Performance: 1.0 exaflops
CPU: Nvidia GH Superchip 72C 3GHz
Accelerator: NVIDIA GH200 Superchip
Operating system: RHEL
Interconnect: NVIDIA InfiniBand NDR200

*TOSS stands for Tri-Lab Operating System Stack and is a Linux distribution based on Red Hat Enterprise Linux (RHEL) that was created to provide a software stack for high performance computing (HPC) clusters at laboratories within the National Nuclear Security Administration (NNSA).

And how exactly are your tax dollars being spent on these beasts …

El Capitan: national security simulations (e.g., stockpile stewardship); high‑fidelity modeling and physics simulations; complex scientific modeling and AI‑driven workflows.

Frontier: broad scientific research; physics and engineering simulations; climate and materials science; AI and computational science applications.

Aurora: aerospace and airflow research; cancer research and materials discovery; energy systems modeling; large‑scale data analysis and scientific simulation.

Vendor and Country Share of Supercomputers

What Comes After Exascale?

The first petaflop computer, Roadrunner, debuted in 2008. It took 14 years to reach exascale.

So what’s next?

Zettascale—machines capable of 10²¹ floating‑point operations per second.

Getting there won’t be easy. There are encouraging signs: newer systems like JUPITER achieve exascale performance with roughly half as many cores as the top supercomputer, and the latest Slingshot switches run at 400 Gbps, twice the networking speed of the Slingshot-11 fabric in today’s top system. But simply shrinking transistors and increasing network speeds will only take us so far.

New ideas will be required. Maybe radically different architectures. Maybe specialized accelerators. Maybe even quantum co‑processors someday. With AI/ML empowering research, it’s likely that the first zettascale system will arrive within the next decade.

Until then, a new wave of powerful machines is already being commissioned—like the Department of Energy’s Lux and Discovery, and France’s Alice Recoque.

The supercomputing race continues—quietly, in climate‑controlled buildings, fueled by water, electricity, and an absurd amount of math.