CoreWeave Unveils 34×-Bigger Nvidia GPU Cluster

Ran 2,496 Nvidia GB200 Grace Blackwell GPUs, the largest MLPerf Training submission by a factor of 34

Jun 04, 2025
Summary
  • First to scale Grace Blackwell GPUs in April; CRWV customers gain a major efficiency advantage
  • Public since March, CoreWeave scaled thousands of GB200 GPUs in April, cementing its AI leadership

CoreWeave (CRWV) rose about 3% after unveiling record-breaking MLPerf Training v5.0 results using Nvidia's GB200 Grace Blackwell GPUs on its AI-optimized cloud.

The firm ran 2,496 Blackwell GPUs, a cluster 34 times larger than the next-biggest cloud submission, completing the Llama 3.1 405B training benchmark in 27.3 minutes and delivering more than twice the training performance of comparable setups.

CoreWeave co-founder and CTO Peter Salanki said the results confirm the company's leadership in high-end AI workloads. Faster training shortens model development cycles and lowers total cost of ownership, letting customers scale and deploy cutting-edge AI models more efficiently and months ahead of competitors.

After going public in March and scaling thousands of Grace Blackwell GPUs in April, CoreWeave now offers a platform with unmatched performance for demanding AI tasks.

Disclosures

I/we have no positions in any stocks mentioned, and have no plans to buy any new positions in the stocks mentioned within the next 72 hours.