The rack density myth
Most organisations need far less rack density than they think.

How much power do you need per rack in your data centre? Most organisations probably need far less than they think.
Since last year, I've made a concerted effort to enquire about rack densities, expecting to hear about high double-digit or even triple-digit kW deployments. I've asked CEOs with new data centres, country leaders overseeing multiple facilities, and vendors selling GPU servers.
Turns out high-density racks are still a rarity. One country leader told me about low take-up for optional cooling systems that can handle up to 40kW at his facility. The reason? Customers simply don't need it.
AI workloads drive the split
I've previously written that data centres are diverging into AI data centres and enterprise data centres, and that this split is accelerating. We're witnessing the emergence of two fundamentally different types of data centre, each with radically different power, cooling, and infrastructure requirements.
There's no question that AI data centres operated by OpenAI, Meta, and xAI require incredible amounts of electricity for energy-guzzling GPUs. Rack power requirements in these facilities track the Nvidia GPUs deployed inside them: racks packed with H200 GPUs draw 40-50kW, while B200-based racks need around 130kW. Future deployments are expected to hit 600kW or more per rack.
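As a rough sanity check on those figures, rack draw can be estimated as servers per rack multiplied by per-server power. A minimal sketch, assuming an 8-GPU H200 server draws roughly 10kW (an approximation for illustration, not a vendor specification):

```python
# Back-of-envelope rack power: servers per rack x per-server draw.
# The ~10kW figure for an 8x H200 server is an assumed approximation.

def rack_power_kw(servers_per_rack: int, server_kw: float) -> float:
    """Total rack power draw in kW for a rack of identical servers."""
    return servers_per_rack * server_kw

# Four to five 8-GPU H200 servers per rack lands in the 40-50kW band.
print(rack_power_kw(4, 10.0))  # 40.0
print(rack_power_kw(5, 10.0))  # 50.0
```

The same arithmetic explains why Blackwell-class racks jump so sharply: pack more, hotter GPUs into the same footprint and the per-rack total climbs past 100kW.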
These power requirements are pulling far ahead of non-AI data centres, which Uptime Institute says averaged just 8kW per rack in 2024. That's a massive gulf between AI and enterprise requirements.
Future-proofing without overbuilding
But let's assume for a moment that I haven't talked to enough people. Or perhaps your organisation wants to future-proof its data centre deployment. What would be a sensible rack density for a new deployment today?
Barring any special requirements, I'd say 40-50kW should be more than sufficient. Here's my thinking: it provides ample headroom for 20-25kW HPC systems, supports Groq inferencing servers at around 20kW per rack, and can even accommodate cutting-edge B200 GPUs at one-third of their full ~130kW rack density.
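That headroom claim is easy to verify with simple arithmetic. A minimal check, using the approximate figures above (25kW as the HPC upper bound, ~20kW Groq racks, one-third of a ~130kW B200 rack):

```python
# Check that a 50kW rack budget covers each workload mentioned above.
# All wattages are approximations from the article, not measured values.
BUDGET_KW = 50

workloads = {
    "HPC system (upper bound)": 25,
    "Groq inferencing rack": 20,
    "B200 rack at 1/3 density": 130 / 3,  # ~43.3kW
}

for name, kw in workloads.items():
    print(f"{name}: {kw:.1f}kW draw, {BUDGET_KW - kw:.1f}kW headroom")
```

Even the tightest case, the one-third-populated B200 rack, leaves several kW of margin under a 50kW budget.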
While 50kW racks will require non-standard cooling, they can be supported by existing active rear-door heat exchanger (RDHx) solutions. RDHx systems are generally more cost-effective to deploy and maintain than liquid cooling infrastructure, and far easier to retrofit into an existing data centre.
The overspecification trap
The reality is that most enterprise workloads haven't fundamentally changed. Yes, there's more virtualisation and containerisation, but the actual power draw per rack remains modest. Even organisations running some AI inferencing workloads rarely need the extreme densities being promoted.
What do you think? What is a good cooling capacity to future-proof an enterprise data centre deployment without overbuilding for requirements that may never materialise?