
Update 2025-08-09 (PST) (AI summary of creator comment): - Multiple companies: If GPT-6 uses chips from multiple companies (including different phases like pretraining/posttraining), the market will resolve by allocating percentages to each company based on the creator’s best estimate of their share of total compute.
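For illustration, a minimal sketch of that allocation rule, assuming hypothetical per-phase compute estimates; the vendor names and numbers below are placeholders, not the creator's actual figures:

```python
# Minimal sketch of the multi-company resolution rule described above.
# All names and numbers are hypothetical placeholders, not real estimates.

# Estimated compute per (phase, chip vendor), e.g. in FLOP.
compute_estimates = {
    ("pretraining", "Nvidia"): 8e25,   # hypothetical
    ("posttraining", "Nvidia"): 1e25,  # hypothetical
    ("posttraining", "AMD"): 1e25,     # hypothetical
}

# Sum compute per vendor across all phases (pretraining + posttraining).
totals: dict[str, float] = {}
for (_phase, vendor), flop in compute_estimates.items():
    totals[vendor] = totals.get(vendor, 0.0) + flop

# Each vendor's resolution percentage is its share of total compute.
grand_total = sum(totals.values())
resolution = {vendor: 100 * flop / grand_total for vendor, flop in totals.items()}

print(resolution)  # e.g. {'Nvidia': 90.0, 'AMD': 10.0}
```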
June 30 - OpenAI said it has no active plans to use Google's in-house chip to power its products, two days after Reuters and other news outlets reported on the AI lab's move to turn to its competitor's artificial intelligence chips to meet growing demand.
A spokesperson for OpenAI said on Sunday that while the AI lab is in early testing with some of Google's tensor processing units (TPUs), it has no plans to deploy them at scale right now.
Google declined to comment.
While it is common for AI labs to test out different chips, using new hardware at scale could take much longer and would require different architecture and software support. OpenAI is actively using Nvidia's graphics processing units (GPUs) and AMD's AI chips to power its growing demand. OpenAI is also developing its own chip, an effort that is on track to meet the "tape-out" milestone this year, when the chip's design is finalized and sent for manufacturing.
OpenAI has signed up for Google Cloud service to meet its growing needs for computing capacity, Reuters had exclusively reported earlier this month, marking a surprising collaboration between two prominent competitors in the AI sector. Most of the computing power used by OpenAI would be from GPU servers provided by the so-called neocloud company CoreWeave.
Google has been expanding the external availability of its in-house AI chips, or TPUs, which were historically reserved for internal use. That helped Google win customers, including Big Tech player Apple, as well as startups like Anthropic and Safe Superintelligence, two ChatGPT-maker competitors launched by former OpenAI leaders.
@ahalekelly how does the market resolve if it uses multiple different companies' chips
[Microsoft’s] new Maia 100 AI Accelerator will power some of the largest internal AI workloads running on Microsoft Azure. Additionally, OpenAI has provided feedback on Azure Maia, and Microsoft’s deep insights into how OpenAI’s workloads run on infrastructure tailored for its large language models are helping inform future Microsoft designs.
“Since first partnering with Microsoft, we’ve collaborated to co-design Azure’s AI infrastructure at every layer for our models and unprecedented training needs,” said Sam Altman, CEO of OpenAI. “We were excited when Microsoft first shared their designs for the Maia chip, and we’ve worked together to refine and test it with our models. Azure’s end-to-end AI architecture, now optimized down to the silicon with Maia, paves the way for training more capable models and making those models cheaper for our customers.”
@SaviorofPlant I thought it was unlikely since Google Gemini directly competes with ChatGPT, but ok I have added it