- CoreWeave (CRWV), NVIDIA, and IBM achieve record-breaking MLPerf Training v5.0 results with 2,496 NVIDIA GB200 GPUs.
- The cluster completed Llama 3.1 405B model training in just 27.3 minutes, 2x faster than similar-sized clusters.
- CoreWeave's NVIDIA GB200 NVL72 cluster is 34 times larger than other cloud provider submissions.
CoreWeave (CRWV), in collaboration with NVIDIA and IBM, has delivered a landmark performance in the MLPerf Training v5.0 benchmarks. Using a massive cluster of 2,496 NVIDIA GB200 Blackwell GPUs, CoreWeave achieved the largest-ever MLPerf submission, demonstrating the scale of its AI cloud platform.
This record-setting configuration, the most expansive NVIDIA GB200 NVL72 cluster to date, is 34 times the size of other cloud provider submissions. The cluster trained the Llama 3.1 405B model in just 27.3 minutes, twice as fast as similarly sized clusters.
CoreWeave's achievement underscores its strength in AI workload performance, offering significant model development speed and cost efficiency benefits to its clients. The company’s prominence in AI infrastructure is further confirmed by its Platinum tier ranking in SemiAnalysis' ClusterMAX, bolstered by its strong performance in both MLPerf Inference v5.0 and Training v5.0 benchmarks.
Peter Salanki, Chief Technology Officer and Co-founder of CoreWeave, stated, "AI labs and enterprises choose CoreWeave because we deliver a purpose-built cloud platform with the scale, performance, and reliability that their workloads demand."
The company's successful training of one of the largest AI models on record positions CoreWeave as a key player in the competitive AI infrastructure market, poised to capture significant market share by offering cutting-edge cloud solutions ahead of competitors.