The NVIDIA® H100 Tensor Core GPU, powered by the Hopper architecture, delivers the next massive leap in our accelerated compute data center platform, securely accelerating workloads of every scale, from small enterprise applications to exascale HPC and trillion-parameter AI, in every data center. It enables innovators to fulfill their life's work at the fastest pace in human history.
Thinkmate has a wide variety of systems that support the NVIDIA H100 in various form factors, GPU densities, and storage capacities. Our team of system design experts has hand-selected systems from a variety of manufacturers that we believe best support the breadth and depth of our clients' needs. Each system is highly configurable with components from industry-leading technology providers.
The list of systems that support the NVIDIA H100 is constantly growing, so visit these pages to see what systems are available today.
H100 brings massive amounts of compute to data centers. To fully utilize that compute performance, H100 is the world's first GPU with HBM3 memory, delivering a class-leading 3 terabytes per second (TB/s) of memory bandwidth. H100 is also the first GPU to support PCIe Gen5, providing the highest speeds possible at 128 GB/s (bidirectional). This fast communication enables optimal connectivity with the highest-performing CPUs, as well as with NVIDIA ConnectX-7 SmartNICs and BlueField-3 DPUs, which allow up to 400 Gb/s Ethernet or NDR 400 Gb/s InfiniBand networking acceleration for secure HPC and AI workloads.
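The 128 GB/s figure follows directly from the PCIe 5.0 link parameters. As a hedged back-of-the-envelope sketch (assuming the PCIe 5.0 spec values of 32 GT/s per lane and 128b/130b line encoding, and a full x16 link):

```python
# Sanity-check the PCIe Gen5 x16 bandwidth quoted above.
# Assumptions: 32 GT/s per lane and 128b/130b encoding, per the PCIe 5.0 spec.
GT_PER_LANE = 32            # giga-transfers per second, per lane
ENCODING = 128 / 130        # 128b/130b line-encoding efficiency
LANES = 16                  # full x16 slot

# One transfer carries one bit per lane; divide by 8 for bytes.
per_direction_gbs = GT_PER_LANE * ENCODING * LANES / 8   # GB/s, one direction
bidirectional_gbs = per_direction_gbs * 2

print(f"per direction: {per_direction_gbs:.1f} GB/s")    # ~63.0 GB/s
print(f"bidirectional: {bidirectional_gbs:.1f} GB/s")    # ~126 GB/s
```

The raw result is just over 126 GB/s; the marketed 128 GB/s figure uses the unencoded 32 GT/s rate.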
As workloads explode in complexity, there's a need for multiple GPUs to work together with extremely fast communication between them. NVIDIA H100 leverages PCIe Gen5 to improve GPU-to-CPU bandwidth and fourth-generation NVLink for GPU-to-GPU communication. With NVLink and NVSwitch, multiple H100 GPUs can be combined into the world's most powerful scale-up servers.
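To put the scale-up claim in perspective, a hedged sketch of the interconnect math (assuming NVIDIA's published figures of 18 fourth-generation NVLink links per H100 at 50 GB/s bidirectional each, against ~128 GB/s for PCIe Gen5 x16):

```python
# Compare aggregate NVLink GPU-to-GPU bandwidth against PCIe Gen5 x16.
# Assumptions (NVIDIA published figures): 18 NVLink links per H100,
# 50 GB/s bidirectional per link; PCIe Gen5 x16 at ~128 GB/s bidirectional.
NVLINK_LINKS = 18
GBS_PER_LINK = 50           # bidirectional, per link
PCIE_GEN5_X16 = 128         # bidirectional, approximate

nvlink_total = NVLINK_LINKS * GBS_PER_LINK   # aggregate per GPU, GB/s
speedup = nvlink_total / PCIE_GEN5_X16

print(f"NVLink aggregate: {nvlink_total} GB/s (~{speedup:.0f}x PCIe Gen5 x16)")
```

That roughly 7x gap over PCIe is what makes NVLink/NVSwitch the preferred fabric for scale-up GPU-to-GPU traffic.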
The NVIDIA H100 is available as a server building block in the form of integrated HGX baseboards with four or eight H100 GPUs. Leveraging the power of H100 multi-precision Tensor Cores, an 8-way HGX H100 provides over 32 petaFLOPS of FP8 deep learning compute performance. This performance density is critical to powering the most demanding workloads in HPC and AI today.
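The 32-petaFLOPS figure is simply the per-GPU FP8 peak multiplied across the baseboard. A minimal sketch, assuming NVIDIA's published peak of roughly 4 petaFLOPS of FP8 compute per H100 SXM GPU (with sparsity):

```python
# Rough check of the 8-way HGX H100 aggregate FP8 figure.
# Assumption: ~4 petaFLOPS FP8 per H100 SXM GPU (peak, with sparsity).
GPUS = 8
FP8_PFLOPS_PER_GPU = 4.0    # approximate per-GPU peak

total_pflops = GPUS * FP8_PFLOPS_PER_GPU
print(f"aggregate FP8: ~{total_pflops:.0f} petaFLOPS")   # ~32 petaFLOPS
```

Real-world training throughput depends on sparsity, precision mix, and interconnect efficiency, so this is a peak-rate ceiling rather than an expected sustained figure.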