OpenAI, backed by Microsoft (MSFT), is reportedly evaluating Google's (GOOGL) Tensor Processing Units (TPUs) to meet its growing AI compute demands. The company has clarified, however, that it is only running preliminary tests on some of Google's TPUs and has no active plans to deploy them at scale.
Currently, OpenAI relies heavily on NVIDIA's (NVDA) GPUs and AMD's (AMD) AI chips to support its growing requirements. A potential deal with Google could signal a strategy to diversify suppliers and reduce OpenAI's dependence on NVIDIA chips for AI model training and inference.
According to a Morgan Stanley analyst, OpenAI adopting Google's chips for AI inference workloads would be a significant endorsement of Google's hardware capabilities. Reports suggest OpenAI has begun using Google's AI chips for product development, although Google has withheld its most advanced TPUs, reserving them for internal projects such as its Gemini large language model.
Last month, OpenAI signed an agreement with Google Cloud to address its computational needs.