[ Our Gigafactory of Compute ]
We’re running the world’s biggest supercomputer, Colossus. Built in 122 days, far faster than any estimate, it was the most powerful AI training system yet built. Then we doubled it to 200k GPUs in just 92 days. This is just the beginning.

We go further, faster

We were told it would take 24 months to build. So we took the project into our own hands, questioned everything, removed whatever was unnecessary, and accomplished our goal in four months.

[Aerial view of the Colossus site]

Unprecedented scale

We doubled our compute at an unprecedented rate, with a roadmap to 1M GPUs. Progress in AI is driven by compute, and no one has come close to building at this magnitude and speed.

Number of GPUs: 200K
Total Memory Bandwidth: 194 Petabytes/s
Network Bandwidth per Server: 3.6 Terabits/s
Storage Capacity: >1 Exabyte
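To put the aggregate figures above in per-device terms, here is a back-of-envelope sketch derived only from the numbers listed; the 8-GPUs-per-server count is an assumption (typical of HGX-class servers), not a figure from this page.

```python
# Back-of-envelope arithmetic from the aggregate figures listed above.
TOTAL_GPUS = 200_000
TOTAL_MEM_BW_PBPS = 194        # petabytes/s, cluster-wide memory bandwidth
NET_BW_PER_SERVER_TBPS = 3.6   # terabits/s, network bandwidth per server
GPUS_PER_SERVER = 8            # assumption: typical HGX-class server layout

# Average memory bandwidth per GPU, in TB/s (1 PB = 1000 TB).
mem_bw_per_gpu_tbps = TOTAL_MEM_BW_PBPS * 1000 / TOTAL_GPUS

# Average network bandwidth per GPU, in Gb/s (1 Tb = 1000 Gb).
net_bw_per_gpu_gbps = NET_BW_PER_SERVER_TBPS * 1000 / GPUS_PER_SERVER

# Implied server count.
servers = TOTAL_GPUS // GPUS_PER_SERVER

print(f"~{mem_bw_per_gpu_tbps:.2f} TB/s memory bandwidth per GPU")
print(f"~{net_bw_per_gpu_gbps:.0f} Gb/s network bandwidth per GPU")
print(f"~{servers:,} servers")
```

Under these assumptions, the aggregates work out to roughly 0.97 TB/s of memory bandwidth and 450 Gb/s of network bandwidth per GPU, across about 25,000 servers.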

Our path of progress

We’re building toward a future where we harness our cluster’s full power to solve otherwise intractable problems. What’s one seemingly impossible question you’d answer for humanity?

Timeline: May 2024 → Aug 2024 → Nov 2024 → Feb 2025

Today (17 Feb): Running jobs with 80k+ GPUs and 99% uptime

Latest news

Grok 3

Grok 3 Beta — The Age of Reasoning Agents

We are thrilled to unveil an early preview of Grok 3, our most advanced model yet, blending superior reasoning with extensive pretraining knowledge.

February 19, 2025

Series C

xAI raises $6B Series C

We are partnering with A16Z, BlackRock, Fidelity Management & Research Company, Kingdom Holdings, Lightspeed, MGX, Morgan Stanley, OIA, QIA, Sequoia Capital, Valor Equity Partners and Vy Capital, among others.

December 23, 2024

Grok for all

Bringing Grok to Everyone

Grok is now faster and sharper, with improved multilingual support. It is available to everyone on the 𝕏 platform.

December 12, 2024