OpenAI, backed by Microsoft, has confirmed testing Google's Tensor Processing Units (TPUs) but has no plans for widespread deployment. TPUs are custom ASICs designed by Google for machine learning workloads, accelerating neural network training and inference through efficient matrix multiplication and reduced memory latency.
OpenAI will continue to rely on NVIDIA GPUs and AMD accelerators, citing their proven performance and existing supply agreements. Although OpenAI has started using some Google AI chips for specific tasks, these are lower-tier TPUs, with Google's most advanced chips reserved for internal use.
Despite a recent agreement with Google Cloud to meet broader infrastructure needs, OpenAI has no immediate plans to shift significant computational workloads to the TPU platform. Analysts had viewed a potential TPU collaboration as a sign that OpenAI was seeking alternatives to NVIDIA, but the company's stance highlights the complexity of large-scale hardware deployment and the stickiness of existing supplier relationships.
OpenAI's commitment to NVIDIA and AMD as core suppliers may limit the growth of Google's AI hardware market share, despite advancements in TPU technology. Investors are keenly watching OpenAI's infrastructure updates and Google Cloud's financial reports for any signs of changes in TPU usage or supplier diversification.