(Reuters) – U.S. businessman Elon Musk recently told investors that his artificial intelligence startup xAI plans to build a supercomputer to power the next version of its AI chatbot Grok, The Information reported on Saturday, citing a presentation to investors.
Musk said he wants to get the proposed supercomputer running by the fall of 2025, the report said, adding that xAI could partner with Oracle to develop the massive computer.
xAI could not be immediately reached for comment. Oracle did not respond to a Reuters request for comment.
When completed, the connected groups of chips, Nvidia’s flagship H100 graphics processing units (GPUs), would be at least four times the size of the biggest GPU clusters that exist today, The Information reported, quoting Musk from a presentation made to investors in May.
Nvidia’s H100 family of powerful GPUs dominates the data center chip market for AI but can be hard to obtain due to high demand.
Musk founded xAI last year as a challenger to Microsoft-backed OpenAI, which he also co-founded, and Alphabet’s Google.
Earlier this year, Musk said training the Grok 2 model took about 20,000 Nvidia H100 GPUs, adding that the Grok 3 model and beyond would require 100,000 of the chips.
(Reporting by Mrinmay Dey in Bengaluru; Editing by Josie Kao)