Since its introduction, the NVIDIA Hopper architecture has transformed the AI and high-performance computing (HPC) landscape, helping enterprises, researchers and developers tackle the world’s most complex challenges with higher performance and greater energy efficiency.
During the Supercomputing 2024 conference, NVIDIA announced the availability of the NVIDIA H200 NVL PCIe GPU — the latest addition to the Hopper family. H200 NVL is ideal for organizations with data centers looking for lower-power, air-cooled enterprise rack designs with flexible configurations to deliver acceleration for every AI and HPC workload, regardless of size.
### Ideal for Data Centers
According to a recent survey, roughly 70% of enterprise racks are 20 kW and below and use air cooling. This makes PCIe GPUs essential, as they provide granularity of node deployment, whether using one, two, four or eight GPUs — enabling data centers to pack more computing power into smaller spaces. Companies can then use their existing racks and select the number of GPUs that best suits their needs.
### Accelerating AI and HPC Workloads
Enterprises can use H200 NVL to accelerate AI and HPC applications, while also improving energy efficiency through reduced power consumption. With a 1.5x memory increase and 1.2x bandwidth increase over NVIDIA H100 NVL, companies can use H200 NVL to fine-tune LLMs within a few hours and deliver up to 1.7x faster inference performance. For HPC workloads, performance is boosted up to 1.3x over H100 NVL and 2.5x over the NVIDIA Ampere architecture generation.
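As a quick sanity check on the generational ratios above, the arithmetic can be reproduced from the publicly listed spec-sheet figures. The capacity and bandwidth numbers below are assumptions drawn from NVIDIA's product datasheets, not from this article:

```python
# Back-of-the-envelope check of the cited generational ratios, using
# assumed spec-sheet numbers (H100 NVL: 94 GB HBM3, ~3.9 TB/s;
# H200 NVL: 141 GB HBM3e, ~4.8 TB/s).

h100_nvl = {"memory_gb": 94, "bandwidth_tbs": 3.9}
h200_nvl = {"memory_gb": 141, "bandwidth_tbs": 4.8}

mem_ratio = h200_nvl["memory_gb"] / h100_nvl["memory_gb"]
bw_ratio = h200_nvl["bandwidth_tbs"] / h100_nvl["bandwidth_tbs"]

print(f"Memory:    {mem_ratio:.1f}x")   # ~1.5x
print(f"Bandwidth: {bw_ratio:.1f}x")    # ~1.2x
```

Both ratios line up with the 1.5x memory and 1.2x bandwidth figures quoted for H200 NVL over H100 NVL.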
### Complementing the H200 NVL
Complementing the raw power of the H200 NVL is NVIDIA NVLink technology. The latest generation of NVLink provides GPU-to-GPU communication 7x faster than fifth-generation PCIe — delivering higher performance to meet the needs of HPC, large language model inference and fine-tuning.
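The 7x figure can likewise be reproduced from the commonly cited peak bidirectional bandwidths. Both numbers below are assumptions (the NVLink bridge bandwidth for H200 NVL and the PCIe Gen5 x16 link rate), not figures stated in this article:

```python
# Rough check of the "7x faster than fifth-generation PCIe" claim,
# assuming peak bidirectional bandwidths:
#   NVLink (H200 NVL bridge): ~900 GB/s
#   PCIe Gen5 x16 link:       ~128 GB/s

nvlink_gbs = 900
pcie_gen5_x16_gbs = 128

print(f"NVLink / PCIe Gen5: {nvlink_gbs / pcie_gen5_x16_gbs:.1f}x")  # ~7.0x
```

Under those assumptions, 900 / 128 ≈ 7, matching the stated speedup for GPU-to-GPU communication.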
### Software Tools
The NVIDIA H200 NVL is paired with powerful software tools that enable enterprises to accelerate applications from AI to HPC. It comes with a five-year subscription for NVIDIA AI Enterprise, a cloud-native software platform for the development and deployment of production AI. NVIDIA AI Enterprise includes NVIDIA NIM microservices for the secure, reliable deployment of high-performance AI model inference.
### Companies Tapping into Power of H200 NVL
With H200 NVL, NVIDIA provides enterprises with a full-stack platform to develop and deploy their AI and HPC workloads. Customers are seeing significant impact for multiple AI and HPC use cases across industries, such as visual AI agents and chatbots for customer service, trading algorithms for finance, medical imaging for improved anomaly detection in healthcare, pattern recognition for manufacturing, and seismic imaging for federal science organizations.
### Availability
H200 NVL will be available across the ecosystem from Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro. Additionally, H200 NVL will be available in platforms from Aivres, ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, MSI, Pegatron, QCT, Wistron, and Wiwynn. Some systems are based on the NVIDIA MGX modular architecture, which enables computer makers to quickly and cost-effectively build a vast array of data center infrastructure designs.
### Conclusion
The NVIDIA H200 NVL is a powerful addition to the Hopper family, offering enterprises a flexible and scalable solution for accelerating AI and HPC workloads while improving energy efficiency. With greater memory capacity and bandwidth than its predecessor, faster NVLink interconnect, and a full software stack included, it brings data center-class acceleration to standard air-cooled enterprise racks.
### FAQs
Q: What is the NVIDIA Hopper architecture?
A: The NVIDIA Hopper architecture is NVIDIA's data center GPU architecture, the successor to the Ampere generation, designed to accelerate AI and high-performance computing (HPC) workloads.
Q: What is the NVIDIA H200 NVL PCIe GPU?
A: The NVIDIA H200 NVL PCIe GPU is the latest addition to the Hopper family, offering a flexible and scalable solution for accelerating AI and HPC workloads while improving energy efficiency.
Q: What are the key features of the H200 NVL?
A: The key features of the H200 NVL include a 1.5x memory increase and 1.2x bandwidth increase over NVIDIA H100 NVL, as well as support for NVIDIA NVLink technology.
Q: What software tools are included with the H200 NVL?
A: The H200 NVL comes with a five-year subscription for NVIDIA AI Enterprise, a cloud-native software platform for the development and deployment of production AI.
Q: Who are the partners that will be offering H200 NVL?
A: H200 NVL will be available from Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro, as well as other leading global partners.