
Special Report: Sam Altman Wants 250 Gigawatts of Power. Is That Possible?

Special Report Series
Sep 29, 2025


By Anissa Gardizy

Welcome to the first edition of The Information's newsletter on AI infrastructure. In the coming months, we'll cover the data centers, chips, networking and energy that power AI.

Artificial intelligence is hungry for power at a scale that defies belief.

Last week, OpenAI and Nvidia said they would work together to develop 10 gigawatts of data center capacity over an unspecified period. Inside OpenAI, Sam Altman floated an even more staggering number: 250 GW of compute in total by 2033, roughly one-third of the peak power consumption in the entire U.S.!

Let that sink in for a minute. A large data center used to mean 10 to 50 megawatts of power. Now, developers are pitching single campuses in the multigigawatt range—on par with the energy draw of entire cities—all to power clusters of AI chips.

Or think of it this way: A typical nuclear power plant generates around 1 GW of power. Altman's target would mean the equivalent of 250 plants just to support his own company's AI. And at today's cost of around $50 billion to build a 1 GW facility, 250 of them would cost $12.5 trillion.
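
To make the scale concrete, here is a minimal back-of-envelope sketch in Python. The per-gigawatt cost and the one-plant-per-gigawatt equivalence are taken from the figures above, not independently reported.

```python
# Back-of-envelope check of the figures above (all inputs from the text).
TARGET_GW = 250             # Altman's floated total by 2033
GW_PER_NUCLEAR_PLANT = 1.0  # a typical nuclear plant generates ~1 GW
COST_PER_GW_USD = 50e9      # ~$50 billion to build a 1 GW facility today

plants = TARGET_GW / GW_PER_NUCLEAR_PLANT
total_cost_usd = TARGET_GW * COST_PER_GW_USD

print(f"Nuclear-plant equivalents: {plants:.0f}")                        # 250
print(f"Implied build-out cost: ${total_cost_usd / 1e12:.1f} trillion")  # $12.5 trillion
```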

"We are in a compute competition against better-resourced companies," Altman wrote to his team last week, likely referring to Google and Meta Platforms, which also have discussed or planned large, multigigawatt expansions. (XAI CEO Elon Musk also knows a thing or two about raising incredible amounts of capital.) 

"We must maintain our lead," Altman said.

OpenAI expects to exit 2025 with about 2.4 GW of computing capacity powered by Nvidia chips, said a person with knowledge of the plan, up from 230 MW at the start of 2024.

Ambition is one thing. Reality is another, and it's hard to see how the ChatGPT maker would leap from today's level to hundreds of gigawatts within the next eight years. Obviously, that figure is aspirational. 
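
A quick sketch shows just how steep that leap would be. Assuming the 2.4 GW year-end figure above as the starting point and the 2033 target as the finish line, the implied compound growth rate works out to roughly 79% per year:

```python
# Implied compound annual growth rate (CAGR) from ~2.4 GW at the end
# of 2025 to 250 GW by 2033 (both figures from the text above).
start_gw, target_gw, years = 2.4, 250.0, 8

cagr = (target_gw / start_gw) ** (1 / years) - 1
print(f"Required growth: ~{cagr:.0%} per year for {years} straight years")
# ~79% per year: OpenAI's fleet would have to nearly double annually.
```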

Then again, OpenAI's fast-rising server needs surprised even Nvidia executives, said people on both sides of the relationship.

Before the events of last week, OpenAI had contracted to have around 8 GW by 2028, almost entirely consisting of servers with Nvidia graphics processing units. That's already a staggering jump, and OpenAI is planning to pay hundreds of billions of dollars in cash to the cloud providers who develop the sites. 

To put it into perspective, Microsoft's entire Azure cloud business operated at about 5 GW at the end of 2023—and that was to serve all of its customers, not just AI. (Azure is No. 2 after Amazon's cloud business.)

Bigger Is Still Better

Data center developers tell me most of OpenAI's top competitors are asking for single campuses in the 8 to 10 GW range, an order of magnitude bigger than anything the industry has ever attempted to build.

A year and a half ago, OpenAI's plan with Microsoft to build a single Stargate supercomputer costing $100 billion seemed like science fiction. Barring a seismic macroeconomic change, these types of projects now seem like a real possibility.

The rationale behind them is simple: Altman and his rivals believe that the bigger the GPU cluster, the stronger the AI model they can produce. Our team has been at the forefront of reporting on some of the limitations of this scaling law: the step-up in quality from GPT-4 to GPT-5 was smaller than the one from GPT-3 to GPT-4.

Nevertheless, Nvidia's fast pace of GPU improvements has strengthened the belief of Altman and his ilk that training runs conducted with Blackwell chip clusters this year and with Rubin chips next year will crack open significant gains, according to people who work for these leaders.

In the early days of the AI boom, it was hard to develop clusters of a few thousand GPUs. Now firms are stringing together 250,000, and they want to connect millions in the future. 

That desire runs into a pretty important constraint: electricity. Companies are already trying to overcome that hurdle in unconventional ways, by building their own power plants instead of waiting for utilities to provide grid power, or by putting facilities in remote areas where energy is easier to secure. 

Still, the gap between company announcements and the reality on the ground is enormous. Utilities by nature are conservative when it comes to adding new power generation. They won't race to build new plants if there's a risk of ending up with too much capacity—no matter who is asking.

'Activating the Full Industrial Base'

OpenAI's largest cluster under development, in Abilene, Texas, currently uses grid power and natural gas turbines. But other projects it has announced in Texas will use a combination of natural gas, wind and solar. 

Milam County, where OpenAI is planning one of its next facilities, recently approved a 5 GW solar cell plant, for instance. And gas is expected to be the biggest source of power for the planned sites, according to a person with knowledge of the plans.

To accomplish its goals, OpenAI and its partners will need the makers of gas and wind turbines to greatly expand their supply chains. That's not an easy task, given that it involves some risk-taking on the part of the suppliers. Perhaps Nvidia's commitment to funding OpenAI's data centers while maintaining control of the GPUs will make those conversations easier.

Altman told his team that obtaining boatloads of servers "means activating the full industrial base of the world—energy, manufacturing, logistics, labor, supply chain—everything upstream that will make large-scale compute possible."

There are other bottlenecks, such as getting enough chipmaking machines from ASML and getting enough manufacturing capacity from Taiwan Semiconductor Manufacturing Co., which produces Nvidia's GPUs. Negotiating for that new capacity will fall to Nvidia.

Predicting the future is notoriously difficult, but a lot of things will need to go right for OpenAI and its peers to get all the servers they want. In the meantime, they will keep making a lot of headlines in their quest to turn the endeavor into a self-fulfilling prophecy.

