OpenAI to use Oracle’s chips for more AI compute

OpenAI and Microsoft are teaming up with Oracle to get more compute capacity to run ChatGPT. As part of a partnership announced this week, the three companies are working together so that OpenAI can use the Microsoft Azure AI platform on Oracle’s infrastructure.

OpenAI CEO Sam Altman hasn’t shied away from the fact that his company needs substantially more infrastructure to power its services. He has even been in discussions to raise billions of dollars for an AI chip venture. In the press release for the Oracle deal this week, he said Oracle’s chips will “enable OpenAI to continue to scale.”

To date, OpenAI has relied fully on Microsoft for its compute needs. In turn, Microsoft has invested $13 billion for a 49 percent stake in OpenAI’s for-profit subsidiary and the exclusive right to commercially license its technology. But as this Oracle deal makes clear, OpenAI needs more compute than Microsoft alone can provide if it wants to keep up with demand and prevent future ChatGPT outages.

Microsoft and OpenAI are clearly sensitive about how this Oracle deal is perceived. On Wednesday, OpenAI issued a follow-up statement saying that “our strategic cloud relationship with Microsoft is unchanged” and that the new partnership “enables OpenAI to use the Azure AI platform on OCI infrastructure for inference and other needs.” (Inference refers to the act of running AI models in production through applications like ChatGPT.)

OpenAI also made clear that the pre-training of its frontier models “continues to happen on supercomputers built in partnership with Microsoft.”


