AI-generated image of 4,000 teraflops of computing power

This week, NVIDIA started shipping its “DGX H100” systems to customers all over the globe to muster the incredible amounts of computing power needed for AI workloads. Announced last year, the DGX system features 8 “Hopper H100” datacenter chips. “And this chip is blaaaaazing fast. Featuring a brand new GPU architecture built on a 4-nanometer process with a whopping 80 billion transistors, every chip can deliver up to 4,000 teraflops of computing power,” Sander Hofman writes in his weekly newsletter.

So… what’s a teraflop?

Sander Hofman explains: a teraflop is a measure of a chip’s ability to perform one trillion floating-point operations per second, often abbreviated TFLOP or TFLOPS. Because of the sheer amounts of computing power involved, the unit comes up most often with supercomputers, where you will also come across petaflops (1,000 teraflops) and exaflops (1,000,000 teraflops, for exascale computing). To give you some idea of what that means in practice, Apple’s entry-level M2 chip offers 3.6 teraflops, and the PlayStation 5 delivers over 10 teraflops.
And NVIDIA’s Hopper H100 can do a whopping 4,000 teraflops, making it a data-cruncher extraordinaire. As you may imagine, this chip is aimed at delivering industrial-scale computing power for artificial intelligence, machine learning, deep neural networks, and other high-performance computing (HPC) workloads.
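To make “one trillion floating-point operations per second” concrete, here is a minimal Python sketch (not from the article) that estimates a machine’s sustained throughput by timing dense matrix multiplies with NumPy. The 2·n³ operation count is the standard estimate for an n×n matrix multiply; the result will be far below a datacenter GPU’s peak, since it measures your CPU.

```python
import time
import numpy as np

def estimate_tflops(n: int = 2048, repeats: int = 10) -> float:
    """Roughly estimate sustained compute by timing n x n matrix multiplies.

    Each multiply needs about 2 * n**3 floating-point operations
    (n multiplies plus n adds per output element).
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up run so one-time setup costs are excluded
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = time.perf_counter() - start
    flops = 2 * n**3 * repeats / elapsed
    return flops / 1e12  # 1 teraflop = 10**12 operations per second

print(f"~{estimate_tflops():.2f} TFLOPS sustained on this machine")
```

A typical laptop CPU lands somewhere in the tens to hundreds of gigaflops with this test, which puts the H100’s headline “up to 4,000 teraflops” figure in perspective.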

Revolutionizing Industries

NVIDIA’s DGX H100 systems are now shipping worldwide, bringing advanced artificial intelligence (AI) capabilities to various industries. Customers across the globe are using these AI supercomputers to transform sectors such as finance, healthcare, law, IT, and telecom. Green Physics AI predicts factory equipment aging for improved efficiency, while the Boston Dynamics AI Institute uses the DGX H100 to develop dexterous mobile robots. Startups like Scissero and DeepL harness the power of generative AI through DGX H100 systems for legal processes and translation services. Healthcare organizations, universities, and various industries are leveraging these systems to accelerate research, optimize data science pipelines, and create innovative AI-driven solutions. The DGX H100 combines eight NVIDIA H100 Tensor Core GPUs, NVIDIA NVLink, and 400 Gbps ultra-low latency NVIDIA Quantum InfiniBand, making it a powerhouse for AI innovation.

Green Physics AI: Predicting the Aging of Factory Equipment

Green Physics AI aims to enhance the efficiency of future factories by predicting the aging of factory equipment. Manufacturers can develop powerful AI models and create digital twins by adding information such as an object’s CO2 footprint, age, and energy consumption to SORDI.ai, the largest synthetic dataset in manufacturing. These digital twins optimize the efficiency of factories and warehouses, as well as energy and CO2 savings for the factory’s products and their components.

Boston Dynamics AI Institute: Dexterous Mobile Robots

The AI Institute, a research organization rooted in Boston Dynamics, uses DGX H100 systems to develop dexterous mobile robots that can perform useful tasks in factories, warehouses, disaster sites, and, eventually, homes. Al Rizzi, CTO of The AI Institute, envisions a robot valet that follows people and performs tasks for them. The DGX H100 will initially tackle reinforcement learning tasks, a key technique in robotics, before running AI inference jobs while connected directly to prototype bots in the lab.

Start-ups Riding the Generative AI Wave

Startups are utilizing DGX H100 systems to explore the potential of generative AI. Scissero, a legal tech startup, employs a GPT-powered chatbot to streamline legal processes by drafting legal documents, generating reports, and conducting legal research. DeepL, a language translation company, uses several DGX H100 systems to expand its services, offering translation between dozens of languages for customers like Nikkei, Japan’s largest publishing company. DeepL has also released an AI writing assistant called DeepL Write.

Improving Healthcare and Patient Outcomes

Many DGX H100 systems are being used to advance healthcare and improve patient outcomes. In Tokyo, DGX H100s run simulations and AI to accelerate the drug discovery process as part of the Tokyo-1 supercomputer project. Xeureka, a Mitsui & Co. Ltd. startup, manages the system. Hospitals and academic healthcare organizations in Germany, Israel, and the US are among the first users of DGX H100 systems.

Universities and Research Institutions Embrace DGX H100

Universities across the globe are adopting DGX H100 systems for research in various fields. The Johns Hopkins University Applied Physics Laboratory will use a DGX H100 to train large language models, while the KTH Royal Institute of Technology in Sweden will use the system to provide state-of-the-art computer science programs for higher education. Other use cases include Japan’s CyberAgent, which is creating smart digital ads and celebrity avatars, and Telconet, a leading telecommunications provider in Ecuador, which is building intelligent video analytics for safe cities and language services to support customers across Spanish dialects.

An Engine of AI Innovation

Each NVIDIA H100 Tensor Core GPU in a DGX H100 system provides around six times more performance than prior GPUs. The eight H100 GPUs connect over NVIDIA NVLink to create one giant GPU. Organizations can connect hundreds of DGX H100 nodes to an AI supercomputer using 400 Gbps ultra-low latency NVIDIA Quantum InfiniBand, twice the speed of prior networks.
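The scaling story above reduces to simple arithmetic. As a back-of-the-envelope sketch (my own illustration, using the article’s figures of up to 4,000 teraflops per H100 and 8 GPUs per system):

```python
# Peak-compute arithmetic for DGX H100 systems, using the article's figures.
TFLOPS_PER_GPU = 4000  # "up to 4,000 teraflops" per H100 (peak)
GPUS_PER_NODE = 8      # eight H100 GPUs per DGX H100 system

def cluster_petaflops(nodes: int) -> float:
    """Aggregate peak compute of `nodes` DGX H100 systems, in petaflops."""
    total_tflops = nodes * GPUS_PER_NODE * TFLOPS_PER_GPU
    return total_tflops / 1000  # 1 petaflop = 1,000 teraflops

print(cluster_petaflops(1))    # a single node: 32.0 petaflops peak
print(cluster_petaflops(100))  # a 100-node cluster: 3200.0 petaflops (3.2 exaflops)
```

So a single DGX H100 node already sits at roughly 32 petaflops of peak compute, and connecting hundreds of nodes over Quantum InfiniBand pushes the aggregate into exaflop territory; in practice, sustained throughput depends heavily on precision mode, interconnect, and workload.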