
Google claims Ironwood chip is 24x faster than the world’s fastest supercomputer

WHY THIS MATTERS IN BRIEF

Google keeps making massive and bold claims about the performance of its computer hardware, but the devil’s in the details.


Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

Google and gang do love to play with numbers, which is why we’re now starting to see the world’s first Zettascale Artificial Intelligence (AI) supercomputers. In the latest example of number magic, at the recent Google Cloud Next 2025 event the tech giant claimed that its new Ironwood TPU v7p pod is a massive 24 times faster than El Capitan, the exascale-class supercomputer at Lawrence Livermore National Laboratory. But Timothy Prickett Morgan of TheNextPlatform has dismissed the claim.

 


“Google is comparing the sustained performance of El Capitan with 44,544 AMD ‘Antares-A’ Instinct MI300A hybrid CPU-GPU compute engines running the High Performance LINPACK (HPL) benchmark at 64-bit floating point precision against the theoretical peak performance of an Ironwood pod with 9,216 of the TPU v7p compute engines,” he wrote. “This is a perfectly silly comparison, and Google’s top brass not only should know better, but does.”

Prickett Morgan argues that even though comparisons between AI systems and HPC machines can be made, the two serve different purposes – El Capitan is optimized for high-precision simulations, while the Ironwood pod is tailored to low-precision AI inference and training. What matters, he adds, is not just peak performance but cost.
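
To see where a headline figure like 24x can come from under the comparison Prickett Morgan describes, here is a minimal back-of-the-envelope sketch in Python. It assumes El Capitan’s published sustained HPL result of roughly 1.74 exaflops at FP64 (a Top500 figure, not quoted in this article) and the roughly 42.5 exaflops of FP8 peak attributed to an Ironwood pod further down; both numbers are approximations used purely for illustration.

```python
# Back-of-the-envelope sketch of the apples-to-oranges comparison described above:
# a theoretical low-precision AI peak divided by a sustained FP64 HPL result.
# Both figures are approximate and assumed (see the note above), not taken verbatim
# from the article.

el_capitan_hpl_fp64 = 1.74    # exaflops, sustained HPL at FP64 (approximate Top500 figure)
ironwood_pod_peak_fp8 = 42.5  # exaflops, theoretical peak at FP8 (Google's quoted figure)

ratio = ironwood_pod_peak_fp8 / el_capitan_hpl_fp64
print(f"FP8 peak vs FP64 sustained HPL: {ratio:.1f}x")  # roughly 24x
```

The multiple only appears because a theoretical low-precision peak is being divided by a sustained high-precision benchmark result – exactly the mismatch being criticised.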

“High performance has to have the lowest cost possible, and no one gets better deals on HPC gear than the US government’s Department of Energy.”

 


Estimates from TheNextPlatform put the Ironwood pod at 21.26 exaflops of FP16 and 42.52 exaflops of FP8 performance, at a cost of $445 million to build or $1.1 billion to rent over three years. That works out to roughly $21 per FP16 teraflop to build, or $52 per teraflop to rent. Meanwhile, El Capitan delivers 43.68 exaflops of FP16 and 87.36 exaflops of FP8 at a build cost of $600 million, or about $14 per teraflop.

“El Capitan has 2.05X more performance at FP16 and FP8 resolution than an Ironwood pod at peak theoretical performance,” Prickett Morgan notes. “The Ironwood pod does not have 24X the performance of El Capitan.”
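
As a sanity check on the arithmetic, here is a short Python sketch that reproduces the cost-per-teraflop estimates and the roughly 2x FP16 gap from the figures quoted above; the dollar and exaflops numbers are TheNextPlatform’s estimates as reported here, and the rounding is approximate.

```python
# Reproduce the cost-per-teraflop estimates and the FP16 performance ratio
# from the figures quoted above (1 exaflop = 1,000,000 teraflops).

EXA_TO_TERA = 1_000_000

ironwood_fp16_exaflops = 21.26
ironwood_build_cost = 445_000_000    # USD to build, per the estimate above
ironwood_rent_cost = 1_100_000_000   # USD to rent over three years

el_capitan_fp16_exaflops = 43.68
el_capitan_build_cost = 600_000_000  # USD to build

print(f"Ironwood build:  ${ironwood_build_cost / (ironwood_fp16_exaflops * EXA_TO_TERA):.0f} per FP16 teraflop")     # ~$21
print(f"Ironwood rental: ${ironwood_rent_cost / (ironwood_fp16_exaflops * EXA_TO_TERA):.0f} per FP16 teraflop")      # ~$52
print(f"El Capitan:      ${el_capitan_build_cost / (el_capitan_fp16_exaflops * EXA_TO_TERA):.0f} per FP16 teraflop") # ~$14
print(f"FP16 ratio:      {el_capitan_fp16_exaflops / ironwood_fp16_exaflops:.2f}x")                                  # ~2.05x
```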

He adds: “HPL-MxP uses a bunch of mixed precision calculations to converge to the same result as all-FP64 math on the HPL test, and these days delivers around an order of magnitude effective performance boost.”

The article also includes a comprehensive table comparing top-end AI and HPC systems on performance, memory, storage, and cost-efficiency. While Google’s TPU pods remain competitive, Prickett Morgan maintains that, from a cost/performance standpoint, El Capitan still holds a clear advantage.

 


“This comparison is not perfect, we realize,” he admits. “All estimates are shown in bold red italics, and we have question marks where we are not able to make an estimate at this time.”

Either way, however the numbers are sliced and diced, we are seeing the development of computer chips and spines with huge amounts of performance – some of them moving more data every second than the entire internet does. So yes, Google is being wild with its claims, but as the world’s thirst for computing increases exponentially, so too does the performance of the computing platforms we rely on to run our AI-fuelled digital world.
