Nvidia Unveils Expansive AI Roadmap at GTC 2025

Nvidia announced at GTC 2025 that AI computing demands are expected to increase 100-fold.

Mar 18, 2025
Summary
  • Data center investments are projected to exceed $1 trillion by 2030.

Nvidia (NVDA, Financials) announced at its GTC 2025 event that artificial intelligence computing will require 100 times more processing power than previously estimated, underscoring the growing demand for accelerated computing.

Nvidia Chief Executive Officer Jensen Huang called GTC the "Super Bowl of AI" and highlighted the growth of agentic artificial intelligence, in which autonomous AI systems create fresh computing demand. The company said the four largest cloud providers (Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud) have ordered 3.6 million of its Blackwell GPUs, up from the 1.3 million Hopper GPUs purchased last year.

Nvidia estimated that AI-focused data center spending will exceed $1 trillion by the end of the decade. The company is expanding its CUDA-X software libraries to support developments in quantum computing, physics, 5G/6G technologies, gene sequencing, and computational lithography. Working with partners including Ansys (ANSS, Financials) and Cadence Design Systems (CDNS, Financials), Nvidia unveiled cuDSS, a new CUDA library for computer-aided engineering.

The company also announced partnerships with Cisco (CSCO, Financials) and T-Mobile (TMUS, Financials) to build full-stack radio networks in the United States, along with a collaboration with General Motors (GM, Financials) on its autonomous vehicle efforts.

Nvidia also laid out a multi-year GPU roadmap, beginning with the Grace Blackwell platform now in full production. Blackwell Ultra, expected in the second half of 2025, will offer twice the bandwidth of Blackwell along with more memory. The Rubin series, scheduled for delivery in 2026, will feature HBM4 memory and efficiency improvements, while Rubin Ultra, planned for 2027, is expected to deliver 15 times more exaflops for inference and notable gains in training capacity. The Feynman architecture, scheduled for 2028, will mark the next step in AI computing.

In networking, Nvidia unveiled Quantum-X, a silicon photonics switch developed in partnership with Taiwan Semiconductor (TSM, Financials) and due to ship in 2025, as well as a Spectrum-X Ethernet switch scheduled for 2026.

These advancements reinforce Nvidia's position in next-generation networking, cloud infrastructure, and artificial intelligence computing.

Disclosures

I/we have no positions in any stocks mentioned, and have no plans to buy any new positions in the stocks mentioned within the next 72 hours. Click for the complete disclosure