Building AI-level rack densities? You could be making a mistake
Why few organisations need to plan for AI workloads today.

AI data centres are all over the news, but most companies would be making a huge mistake by planning for AI-level rack densities today.
In today's UnfilteredFriday, let's talk about AI data centres - and why few organisations globally need to even think of catering to AI workloads.
AI data centres
I've written extensively about AI data centres over the last year or two.
Some topics include defining what an AI data centre is, their heavy use of liquid cooling, and how they are concentrated in just a few countries.
More recently, I shared what I learned at my fireside chat with an Oracle VP on building an AI data centre.
- We are all new to AI data centres.
- Making educated bets on new systems.
A split in the path
I've long argued that AI data centres are diverging from traditional data centres. And this divergence is widening at an accelerating pace.
Let's take a closer look at them.
AI data centres
The energy requirements of AI data centres are staggering. A rack packed with GPUs can draw more than ten times the power of a typical enterprise rack.
- Racks of H200 GPUs: 40-50kW per rack.
- Racks of B200 GPUs: 130kW per rack.
- Future generations: 600kW or more per rack.
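To see where figures like these come from, here is a rough back-of-envelope sketch. The per-GPU wattage, GPU count, and overhead fraction below are my own illustrative assumptions, not vendor specifications:

```python
def rack_power_kw(num_gpus: int, gpu_watts: float, overhead_frac: float = 0.3) -> float:
    """Rough rack power estimate: GPU draw plus an assumed ~30% extra
    for CPUs, networking, fans, and power-conversion losses."""
    return num_gpus * gpu_watts * (1 + overhead_frac) / 1000

# Illustrative only: a dense rack of 72 B200-class GPUs at ~1,200W each.
print(round(rack_power_kw(72, 1200), 1))  # prints 112.3
```

Even with conservative assumptions, a dense GPU rack lands in the same ballpark as the 130kW figure above - an order of magnitude beyond what most enterprise racks are built for.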
Enterprise data centres
Globally, the average power draw per rack was around 8kW in 2024. This will likely rise as more AI data centres come online.
But I expect enterprise racks to hover around 10-20kW. That means building out infrastructure to support 50kW or even 130kW racks would be a huge waste of money.
But what if you need AI?
But won't enterprises need to roll out their own AI deployments at some point? Some will. Even then, it won't make sense to go beyond 50kW per rack.
Cost of GPU servers
GPU servers are extremely costly. A single server with 8x H200 GPUs could cost upward of US$300,000. If you are deploying 1-2 of them, then 20kW racks are more than adequate.
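As a quick sanity check on that claim, here is a sketch assuming a peak draw of roughly 10kW per 8x-GPU server - a hypothetical figure of mine, in line with published specs for DGX-class systems:

```python
SERVER_PEAK_KW = 10.2      # assumed peak draw of one 8x H200 server (hypothetical)
SERVER_COST_USD = 300_000  # cost figure quoted above

def deployment_summary(num_servers: int) -> str:
    """Total peak power and hardware cost for a small GPU deployment."""
    kw = num_servers * SERVER_PEAK_KW
    cost = num_servers * SERVER_COST_USD
    return f"{num_servers} server(s): {kw:.1f}kW peak, ~US${cost:,}"

print(deployment_summary(1))  # prints: 1 server(s): 10.2kW peak, ~US$300,000
print(deployment_summary(2))  # prints: 2 server(s): 20.4kW peak, ~US$600,000
```

Even at peak, two such servers sit around 20kW - one or two standard enterprise racks handle it comfortably, with nothing to gain from provisioning 130kW.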
Inference hardware
Moreover, most enterprises will be doing inference, not AI training. And dedicated inference hardware is expected to take off.
Dedicated inference hardware, such as Groq's LPUs, consumes far less power than Nvidia GPUs.
- Current Groq LPUs consume 20kW per rack.
- Next-gen racks will require "slightly more."
Future proofing
In my personal view, an enterprise planning for the future will be well-served to plan for up to 50kW racks - unless they have unique use cases already lined up.
What do you think?