For decades, computing advanced from the top down: breakthroughs appeared first in the colossal supercomputers of yesteryear and only later trickled into the compact chips found in our everyday devices. Over the past decade and a half, that flow has reversed, and innovation now surges upward, thanks to the power of GPUs, or graphics processing units. Originally designed for gaming, GPUs have become a cornerstone of accelerated computing, fundamentally altering the landscape of supercomputing and driving the AI revolution in cutting-edge scientific computing systems.
A prime example of this new era is JUPITER, housed at Forschungszentrum Jülich. This supercomputer is not only among the most efficient of its kind, achieving an impressive 63.3 gigaflops per watt, but it is also a formidable force in the realm of artificial intelligence, delivering 116 AI exaflops, up from the 92 exaflops recorded at the ISC High Performance 2025 conference.
This transformation is often referred to as a "flip." To provide some context, back in 2019, nearly 70% of the TOP100 high-performance computing systems relied solely on CPUs, or Central Processing Units. However, today, that figure has plummeted to below 15%. Now, an astounding 88 out of the TOP100 systems are accelerated, with a significant 80% of them powered by NVIDIA GPUs.
Expanding our view to the broader TOP500 list, we find that 388 systems, or 78%, now incorporate NVIDIA technology. This includes 218 systems that are GPU-accelerated—a 34-system increase from the previous year—and 362 systems that leverage high-performance NVIDIA networking. This trend is unmistakable: accelerated computing has emerged as the new standard.
However, the true revolution lies in AI performance. With advanced architectures such as NVIDIA Hopper and Blackwell, alongside systems like JUPITER, researchers now have access to orders of magnitude more AI computing power than ever before. AI FLOPS (floating-point operations per second at the reduced precisions AI workloads use) have become the new benchmark for progress, enabling breakthroughs in diverse fields such as climate modeling, drug discovery, and quantum simulation, all challenges that demand both scale and efficiency.
Several years ago, Jensen Huang, the CEO of NVIDIA, foresaw the impact AI would have on the world's most powerful computing systems. He likened the advent of deep learning to Thor's hammer, a powerful tool that would revolutionize the way we tackle some of the world's most complex problems. Even at the time, the simple math of power consumption already pointed toward GPUs as inevitable. But it was the AI revolution, fueled by the NVIDIA CUDA-X computing platform, that propelled these machines to new heights.
The introduction of CUDA-X, a platform built on the foundation of NVIDIA GPUs, dramatically extended what supercomputers could do. The same machines could now run meaningful scientific computation at double precision (FP64), exploit mixed precision (FP32, FP16), and drop to ultra-efficient formats such as INT8 and beyond, the formats that form the backbone of modern AI.
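To make that precision spectrum concrete, here is a minimal sketch, assuming PyTorch and, ideally, a CUDA-capable GPU (neither is named in the original article): the same matrix multiply runs once in FP64, as a traditional simulation kernel would, and once under mixed-precision autocast, as an AI training step would. It is illustrative only, not NVIDIA's benchmark code.

```python
import torch

# Pick the GPU when one is available; fall back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16

# Double precision (FP64): the workhorse format of traditional simulation.
a64 = torch.randn(2048, 2048, dtype=torch.float64, device=device)
b64 = torch.randn(2048, 2048, dtype=torch.float64, device=device)
c64 = a64 @ b64

# Mixed precision: FP32 inputs, with the matmul dispatched to a
# reduced-precision kernel (FP16 on GPU, BF16 on CPU) under autocast.
a32 = torch.randn(2048, 2048, dtype=torch.float32, device=device)
b32 = torch.randn(2048, 2048, dtype=torch.float32, device=device)
with torch.autocast(device_type=device, dtype=amp_dtype):
    c_mixed = a32 @ b32

print(c64.dtype, c_mixed.dtype)  # float64 vs. float16 (bfloat16 on CPU)
```

The point is not the code itself but the trade-off it exposes: the reduced-precision path moves far fewer bytes and maps onto tensor-core hardware, which is where the large gains in performance per watt come from.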
This newfound flexibility allowed researchers to stretch their power budgets further than ever before, running larger, more complex simulations and training deeper neural networks while maximizing performance per watt. Even before AI took hold, the raw numbers were already compelling. Power budgets are non-negotiable, and researchers at NVIDIA and across the broader scientific community recognized that the road ahead was paved with GPUs.
To reach exascale computing levels without incurring astronomical electricity costs, researchers needed a solution. GPUs offered significantly more operations per watt compared to CPUs. This was the pre-AI indicator of what was to come, and it explains why, when the AI boom finally arrived, large-scale GPU systems were already gaining momentum.
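A back-of-the-envelope calculation shows why, as sketched below: the sustained power a system needs is simply its target throughput divided by its efficiency. The 63.3 gigaflops per watt is JUPITER's efficiency quoted earlier; the 5 gigaflops per watt CPU-only figure is an illustrative assumption, not a measured number.

```python
EXAFLOP = 1e18  # one exaflop = 10^18 floating-point operations per second

def power_megawatts(target_flops: float, gigaflops_per_watt: float) -> float:
    """Sustained electrical power (in MW) needed to deliver target_flops
    at a given efficiency."""
    watts = target_flops / (gigaflops_per_watt * 1e9)
    return watts / 1e6

# At JUPITER-class efficiency (63.3 GF/W), an FP64 exaflop needs roughly 16 MW.
print(power_megawatts(EXAFLOP, 63.3))  # ~15.8

# At an assumed CPU-only 5 GF/W, the same exaflop would need roughly 200 MW.
print(power_megawatts(EXAFLOP, 5.0))   # 200.0
```

Whatever the exact CPU-only number, the gap is large enough that efficiency, not peak speed, decided which architecture could realistically reach exascale.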
The seeds of this transformation were planted with the introduction of Titan in 2012 at Oak Ridge National Laboratory. Titan was one of the first major U.S. systems to pair CPUs with GPUs at unprecedented scale, demonstrating how hierarchical parallelism could unlock substantial gains in application performance.
In Europe, the Piz Daint supercomputer set a new standard for both performance and efficiency in 2013, proving its capabilities in real-world applications such as the COSMO weather-forecasting model.
By 2017, the shift was undeniable. Summit at Oak Ridge National Laboratory and Sierra at Lawrence Livermore National Laboratory set a new standard for leadership-class systems built around acceleration. These systems not only ran faster but also redefined the kinds of questions science could ask in domains like climate modeling, genomics, materials science, and more.
Today, these systems can accomplish much more with significantly less. On the Green500 list of the most efficient supercomputers, the top eight are NVIDIA-accelerated, with NVIDIA Quantum InfiniBand connecting seven of the top ten.
The story behind these impressive numbers is how AI capabilities have become the new benchmark. JUPITER, for instance, delivers an astonishing 116 AI exaflops alongside one exaflop of FP64 performance—a clear indication of how science is now a blend of simulation and AI. Power efficiency did not just make exascale computing attainable; it made AI at exascale levels practical. With AI at scale, scientific research has entered a new era of unprecedented achievements.
What It Means Next
This transformation is not merely about achieving impressive benchmarks; it has real-world implications for scientific progress. Here are a few key areas where this shift is making a difference:
- Faster and more accurate weather and climate models: The ability to simulate complex weather patterns and climate changes with greater accuracy has profound implications for our understanding of the environment and our ability to respond to climate challenges.
- Breakthroughs in drug discovery and genomics: The enhanced computing power allows researchers to accelerate the discovery of new drugs and gain deeper insights into the complexities of genomics, potentially leading to groundbreaking medical advancements.
- Simulations of fusion reactors and quantum systems: Researchers can now simulate the behavior of fusion reactors and quantum systems with unprecedented precision, paving the way for advancements in energy production and quantum computing.
- New frontiers in AI-driven research across every discipline: The integration of AI into scientific research opens up new possibilities for innovation across a wide range of fields, from biology to physics to social sciences.
What began as a power-efficiency imperative evolved into an architectural advantage and has now matured into a scientific superpower. The combination of simulation and AI, operating at unprecedented scale, is not only reshaping scientific computing but also setting the stage for a broader transformation across the entire computing landscape.
For more details on this topic, you can refer to the original article on the NVIDIA blog.