xAI combined Nvidia Hopper GPUs with Nvidia's Spectrum-X platform to supercharge AI model training at its Colossus site in ...
TL;DR: Elon Musk's xAI is upgrading its Colossus AI supercomputer from 100,000 to 200,000 NVIDIA Hopper AI GPUs. Colossus, ...
Although Nvidia's hardware should retain its computing superiority, Wall Street's artificial intelligence (AI) darling is set ...
With reference designs derived from the existing Nvidia Cloud Partner reference architecture, Nvidia right-sizes for ...
Musk’s xAI is using Nvidia’s advanced networking technology to boost Colossus’s performance for training Grok.
Elon Musk's xAI Colossus AI supercomputer with 200,000 H200 GPUs uses Nvidia's Spectrum-X Ethernet to connect servers.
Nvidia is apparently easing the supply of Hopper-generation accelerator cards ... The NVL version uses the GH100 GPU with 132 active streaming multiprocessors, i.e. 16,896 ...
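The 16,896 figure follows directly from the SM count: in the Hopper architecture each streaming multiprocessor carries 128 FP32 CUDA cores, so a quick sketch of the arithmetic (assuming that per-SM core count from Nvidia's published Hopper specs) is:

```python
# Hopper (GH100) architecture: 128 FP32 CUDA cores per streaming
# multiprocessor, per Nvidia's published Hopper specifications.
SMS_ENABLED = 132          # active SMs on the H100 NVL variant
CORES_PER_SM = 128         # FP32 CUDA cores per Hopper SM

cuda_cores = SMS_ENABLED * CORES_PER_SM
print(cuda_cores)  # 16896
```

The full GH100 die has more SMs in silicon; 132 is the enabled count for this product variant, which is why different H100 SKUs quote different core totals.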
Now let's talk about where Nvidia may be six months after launch. History offers us an initial clue, showing us that in the ...
Colossus, the world’s largest AI supercomputer, is being used to train xAI’s Grok family of large language models, with chatbots offered as a feature for X Premium subscribers. xAI is in the process ...
xAI’s Colossus supercomputer cluster, comprising 100,000 Nvidia Hopper GPUs, achieved this scale by using the Nvidia Spectrum-X Ethernet networking platform, which is designed to power multi-tenant, ...
SANTA CLARA, Calif., Oct. 28, 2024 (GLOBE NEWSWIRE) -- NVIDIA today announced that xAI’s Colossus supercomputer cluster comprising 100,000 NVIDIA Hopper Tensor Core GPUs in Memphis, Tennessee ...