NVIDIA GB300 NVL72

Experience the future of AI computing with the most powerful superchip ever created. 72 Blackwell Ultra GPUs delivering unprecedented performance for trillion-parameter models.

72
Blackwell Ultra GPUs
40 TB
Fast Memory
130 TB/s
NVLink Bandwidth
1,400
PFLOPS FP4
NVIDIA GB300 NVL72 Superchip
Bitkom Member
German Data Center Association
Eco Verband
Nvidia Partner
Hosted in EU
GDPR Compliant
Made in Germany
Quality Infrastructure
Data Sovereignty
Full Transparency

Built for the Age of AI Reasoning

The GB300 NVL72 with Blackwell Ultra delivers 70x faster FP4 inference than H100 systems, featuring 72 Blackwell Ultra GPUs and 36 Grace CPUs optimized for massive-scale AI reasoning and inference.

Configuration

72 NVIDIA Blackwell Ultra GPUs, 36 NVIDIA Grace CPUs, 2,592 Arm Neoverse V2 cores

Memory & Bandwidth

288 GB HBM3e per GPU, 20 TB total HBM, 40 TB fast memory, 130 TB/s NVLink
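For readers checking the math, the aggregate figures above follow from the per-GPU numbers. A quick sketch (the assumption here, based on NVIDIA's published system description, is that the 40 TB "fast memory" figure additionally counts the LPDDR5X attached to the 36 Grace CPUs):

```python
# Sanity-check the GB300 NVL72 aggregate memory figures from per-GPU specs.
GPUS = 72
HBM3E_PER_GPU_GB = 288

total_hbm_tb = GPUS * HBM3E_PER_GPU_GB / 1000  # quoted as "20 TB total HBM"
print(f"Total HBM3e: {total_hbm_tb:.1f} TB")  # 20.7 TB

# Assumption: "fast memory" (40 TB) = GPU HBM3e plus Grace-attached
# LPDDR5X, coherently addressable over NVLink-C2C.
FAST_MEMORY_TB = 40
lpddr_share_tb = FAST_MEMORY_TB - total_hbm_tb
print(f"Implied Grace LPDDR5X share: ~{lpddr_share_tb:.1f} TB")
```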

Tensor Core Performance

1,400 PFLOPS FP4, 720 PFLOPS FP8/FP6, 360 PFLOPS FP16/BF16, 180 PFLOPS TF32
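The rack-level peaks above scale roughly by a factor of two per precision step; dividing by the GPU count gives an approximate per-GPU figure (a back-of-envelope sketch from the quoted numbers, not an official per-GPU spec):

```python
# Rack-level Tensor Core peaks (PFLOPS) quoted for the GB300 NVL72.
rack_pflops = {"FP4": 1400, "FP8/FP6": 720, "FP16/BF16": 360, "TF32": 180}

GPUS = 72
for precision, pflops in rack_pflops.items():
    # Each halving of precision roughly doubles peak throughput.
    print(f"{precision:>10}: {pflops} PFLOPS rack-wide, "
          f"~{pflops / GPUS:.1f} PFLOPS per GPU")
```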

Networking

800 Gb/s per GPU (PCIe Gen6), ConnectX-8 SuperNIC, Quantum-X800 InfiniBand support

AI Performance

70x FP4 inference vs H100, 1.5x FLOPS vs GB200, 2x attention acceleration vs Blackwell

System Architecture

Liquid-cooled rack-scale, 72-GPU NVLink domain, optimized for AI reasoning workloads

Experience the future of AI computing with the most powerful superchip ever created.

Request a Quote

Technological Breakthroughs

Revolutionary innovations that redefine the boundaries of AI computing performance, delivering unprecedented capabilities for the most demanding AI workloads.

AI Reasoning Inference

Test-time scaling and AI reasoning increase the compute necessary to achieve quality of service and maximum throughput. NVIDIA Blackwell Ultra's Tensor Cores are supercharged with 2x the attention-layer acceleration and 1.5x more AI compute FLOPS compared to NVIDIA Blackwell GPUs.

288 GB of HBM3e

Larger memory capacity allows for larger batch sizes and maximum throughput performance. NVIDIA Blackwell Ultra GPUs offer 1.5x larger HBM3e memory, which, combined with added AI compute, boosts AI reasoning throughput for the largest context lengths.

Fifth-Generation NVIDIA NVLink

Unlocking the full potential of accelerated computing requires seamless communication between every GPU. The fifth generation of NVIDIA NVLink™ is a scale-up interconnect that unleashes accelerated performance for AI reasoning models.

NVIDIA ConnectX-8 SuperNIC

The NVIDIA ConnectX-8 SuperNIC's input/output (IO) module hosts two ConnectX-8 devices, providing 800 gigabits per second (Gb/s) of network connectivity for each GPU in the NVIDIA GB300 NVL72.

NVIDIA Grace CPU

A breakthrough processor designed for modern data center workloads with 2x the energy efficiency of today's leading server processors.

NVIDIA Mission Control

Streamlines AI factory operations with world-class expertise delivered as software, bringing instant agility for inference and training.

NVIDIA Cloud Partner

NVIDIA Preferred Partner

Polarise has achieved NVIDIA Preferred Partner status and is listed as an official NVIDIA Cloud Service Provider (CSP), solidifying our position as a trusted leader in cloud innovation. This designation is reserved for select partners who operate large clusters built in coordination with NVIDIA, adhering to a tested and optimized reference architecture.

Ready to start your AI project?

Let's discuss your specific requirements in a personal conversation. I'll help you find the perfect AI infrastructure solution for your organization.

Nils - Your AI Infrastructure Expert

Nils Herhaus

Business Development

@Polarise