Rising AI demands push Asia Pacific data centres to adapt, says Vertiv

Key Takeaways
- Rapid AI adoption in Asia Pacific is overwhelming traditional data centres due to high energy and cooling demands from GPU clusters.
- Rack power densities are expected to reach extreme levels (potentially 1 MW) by 2030, necessitating a shift from upgrades to purpose-built "AI factory" data centres.
- Cooling challenges require advanced solutions like hybrid systems combining direct-to-chip liquid cooling with air-based methods.
- Power delivery must evolve to handle rapidly fluctuating AI workloads, requiring smarter distribution units and load balancing.
- Future data centre architecture will be hybrid and integrated, designed around liquid cooling paths and advanced power management from the outset rather than retrofitted into older facilities.

The surge in artificial intelligence adoption across Asia Pacific industries is placing immense pressure on existing data centre infrastructure, which struggles with the heavy energy and cooling demands of modern GPU-driven workloads. Projections indicate that rack power densities could reach 1 MW by 2030, making incremental upgrades insufficient and driving the need for entirely new, purpose-built "AI factory" data centres. Paul Churchill of Vertiv Asia highlighted that this transition requires smarter, future-ready strategies encompassing high-capacity power systems and advanced thermal management, such as hybrid direct-to-chip liquid cooling.

The region's data centre market is projected to grow exponentially, fuelled by digitalisation and generative AI applications, demanding solutions for rack densities rising from 40 kW up to 250 kW. Consequently, data centre architecture is shifting toward hybrid designs in which cooling and power systems are integrated from the chip level upward to support liquid-cooled GPU pods. This architectural redesign is crucial for Asia Pacific as it prepares to overtake the US in data centre capacity by 2030, balancing performance expectations with sustainability goals.
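
To put the cooling shift in perspective, the sketch below compares roughly how much air versus water it would take to remove a rack's heat load, using the basic heat-transport relation Q = ṁ · c_p · ΔT. The rack powers (40 kW to 250 kW) come from the article; the temperature rises and fluid properties are generic textbook assumptions rather than Vertiv figures.

```python
# Back-of-the-envelope comparison of air versus direct-to-chip liquid cooling
# for a single high-density rack, based on Q = m_dot * c_p * delta_T.
# Rack powers follow the figures quoted in the article; the temperature rises
# and fluid properties below are generic assumptions, not Vertiv specifications.

AIR_CP = 1.005          # kJ/(kg*K), specific heat of air
AIR_DENSITY = 1.2       # kg/m^3 at roughly 20 degC
AIR_DELTA_T = 15.0      # K, assumed inlet-to-outlet air temperature rise

WATER_CP = 4.18         # kJ/(kg*K), specific heat of water
WATER_DENSITY = 1000.0  # kg/m^3
WATER_DELTA_T = 10.0    # K, assumed coolant temperature rise


def air_flow_m3_per_s(rack_kw: float) -> float:
    """Volumetric airflow needed to carry away rack_kw of heat."""
    mass_flow = rack_kw / (AIR_CP * AIR_DELTA_T)        # kg/s
    return mass_flow / AIR_DENSITY


def water_flow_l_per_min(rack_kw: float) -> float:
    """Coolant flow needed to remove the same heat with a liquid loop."""
    mass_flow = rack_kw / (WATER_CP * WATER_DELTA_T)    # kg/s
    return mass_flow / WATER_DENSITY * 1000 * 60        # litres per minute


for rack_kw in (40, 100, 250):
    print(f"{rack_kw:>4} kW rack: ~{air_flow_m3_per_s(rack_kw):.1f} m^3/s of air "
          f"vs ~{water_flow_l_per_min(rack_kw):.0f} L/min of water")
```

Under these assumptions, a 250 kW rack would need well over ten cubic metres of air per second but only a few hundred litres of water per minute, which illustrates why the purpose-built, liquid-ready designs described in the article become necessary at the densities cited.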