OpenAI Diversifies with Google's AI Chips, Reducing Reliance on Nvidia (NVDA)

2 days ago

OpenAI, a startup backed by Microsoft (MSFT), has begun using Google's (GOOGL) AI chips to power products like ChatGPT. This marks a significant shift from its previous reliance solely on Nvidia (NVDA) chips, and the adoption of Google's Tensor Processing Units (TPUs) is OpenAI's first substantial move to diversify its chip suppliers.

Historically, OpenAI depended heavily on Nvidia chips both for training AI models and for running inference. The new strategy aims to lower inference costs by leasing TPUs from Google Cloud, which could position TPUs as a cost-effective alternative to Nvidia's GPUs.

Earlier this month, OpenAI announced plans to integrate Google Cloud services to meet its growing computational demands, highlighting an unexpected partnership between two prominent AI competitors. Analysts from Morgan Stanley have expressed support for Google, suggesting that the agreement reflects Google's confidence in its long-term search business and accelerates Google Cloud's growth, potentially boosting its valuation.

Google's collaboration with OpenAI coincides with its push to make TPUs more widely available. Previously used mainly for internal projects, TPUs are now attracting major tech companies such as Apple (AAPL), as well as AI competitors founded by former OpenAI members. However, Google is not leasing its most advanced TPU models to OpenAI, reserving them for internal projects, including its Gemini language model.

Disclosures

I/We may personally own shares in some of the companies mentioned above. However, those positions are not material either to the companies or to my/our portfolios.