Liquid and immersion cooling for AI data centres taking off
- Babak Baghaei
- 3 days ago

As AI workloads scale, the data-centre industry is undergoing a fundamental shift: air cooling alone is no longer enough. High-power GPUs and accelerators, often running at hundreds of watts per chip and tens of kilowatts per rack, are pushing facilities towards liquid and immersion cooling at speed.
Market data shows just how quickly this shift is happening. The global data-centre liquid immersion cooling market is estimated at about USD 2.1 billion in 2024 and is forecast to reach around USD 7.2 billion by 2030, a compound annual growth rate of roughly 22%. Another analysis projects immersion cooling growing from about USD 1.3–1.6 billion in 2024–25 to over USD 7 billion by 2034, at a CAGR of around 18%.
For liquid cooling more broadly, one recent market study estimates that direct-to-chip solutions account for over 40% of the liquid-cooling market today, while immersion cooling is expected to grow at ~27% CAGR through 2030.
Industry tracking from TrendForce suggests that in AI data centres specifically, liquid-cooling penetration could jump from around 14% in 2024 to about 33% in 2025, driven by deployments built around NVIDIA’s GB200 NVL72 and similar high-density AI racks.

From air to liquid: what’s driving the change?
Several factors are pushing operators towards liquid and immersion solutions:
Rising power density – AI racks are headed towards hundreds of kilowatts and even 1 MW IT racks, as highlighted by Google’s OCP presentations, where liquid cooling is treated as a prerequisite rather than an option.
Thermal limits of air cooling – fan-based systems struggle to remove heat efficiently from densely packed high-TDP devices, particularly in retrofitted halls.
Energy efficiency and PUE – liquid cooling can cut the energy overhead of cooling systems, improving power usage effectiveness (PUE) and lowering operating costs.
Hardware reliability – better temperature control reduces thermal cycling and hotspots, improving component lifetime at high loads.
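The PUE point above can be made concrete with a small calculation. PUE is simply total facility power divided by IT power, so cutting cooling overhead shows up directly in the ratio. The sketch below uses illustrative load figures (none are from the article or any measured facility):

```python
# Illustrative PUE comparison. All load figures below are assumptions
# chosen only to show the arithmetic, not measured data.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

it_load = 1000.0  # kW of IT load (assumed)
other = 80.0      # kW of UPS losses, lighting, etc. (assumed)

# Fan/CRAC-based air cooling vs pumps + CDU + dry coolers (assumed overheads)
air_cooled = pue(it_load, cooling_kw=400.0, other_kw=other)
liquid_cooled = pue(it_load, cooling_kw=150.0, other_kw=other)

print(f"air-cooled PUE:    {air_cooled:.2f}")     # 1.48
print(f"liquid-cooled PUE: {liquid_cooled:.2f}")  # 1.23
```

With these assumed numbers, halving-plus of the cooling overhead moves PUE from about 1.48 to 1.23, which at megawatt scale is a large operating-cost difference.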
Hyperscale operators are responding accordingly. Google, Microsoft, Meta, Amazon and Alibaba are all reported to be investing heavily in liquid cooling for AI services. JLL’s 2025 global data-centre outlook bluntly notes that GPU advancements require a shift to liquid cooling as part of the wider AI infrastructure build-out.
Direct-to-chip vs immersion
Within the broad move to liquids, two main architectures are emerging:
Direct-to-chip (cold-plate) liquid cooling
Coolant flows through cold plates attached directly to CPUs/GPUs.
Often seen as the “easiest” path for brownfield sites because it can co-exist with existing air-cooling and rack layouts.
Immersion cooling (single-phase or two-phase)
Servers or whole racks are submerged in non-conductive fluid.
Offers very high heat-flux capability and can radically simplify airflow needs.
More often discussed for greenfield deployments or highly dense AI clusters.
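The heat-flux advantage of liquids over air comes straight from the sensible-heat equation Q = ṁ·cp·ΔT: water carries roughly four times the specific heat of air and is far denser, so the same temperature rise moves far more heat per unit flow. A back-of-envelope sketch, with illustrative flow rates and temperatures (not vendor figures):

```python
# Back-of-envelope heat removal via Q = m_dot * cp * delta_T.
# Flow rate and temperature rise are illustrative assumptions.

def heat_removal_kw(mass_flow_kg_s: float, cp_j_per_kg_k: float,
                    delta_t_k: float) -> float:
    """Sensible heat carried away by a coolant stream, in kW."""
    return mass_flow_kg_s * cp_j_per_kg_k * delta_t_k / 1000.0

# Water loop through cold plates: ~0.5 kg/s per rack, 10 K rise (assumed)
water_kw = heat_removal_kw(0.5, 4186.0, 10.0)   # cp of water ≈ 4186 J/(kg·K)

# Air at the same mass flow and temperature rise carries far less heat
air_kw = heat_removal_kw(0.5, 1005.0, 10.0)     # cp of air ≈ 1005 J/(kg·K)

print(f"water loop:          {water_kw:.1f} kW")  # ~20.9 kW
print(f"air, same mass flow: {air_kw:.1f} kW")    # ~5.0 kW
```

And because air is roughly 800 times less dense than water, matching that mass flow with air requires enormous volumetric flow and fan power, which is exactly the wall that dense AI racks are hitting.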
Analyses from IEEE Spectrum and others describe liquid cooling as “the AI heat solution”, arguing that without it, the industry cannot realistically sustain the next wave of model sizes and training clusters.
Market signals and consolidation
The growth trajectory is also visible in corporate moves. Thermal-management and power companies are expanding aggressively into the data-centre cooling space, including large acquisitions focused on liquid-cooling technology portfolios aimed squarely at AI and high-density workloads.
At the same time, research and standards activity has accelerated: recent reviews in the academic literature examine direct liquid, immersion, two-phase, spray and hybrid cooling through the lenses of performance, commercial readiness and environmental impact, particularly for AI-intensive facilities.
Design and engineering implications
For data-centre designers, operators and their engineering partners, this transition brings several practical implications:
Early thermal planning is now critical: cooling strategy (air vs liquid vs immersion, or hybrids) needs to be decided much earlier in the design process, alongside power and layout decisions.
System-level thinking: moving to liquid cooling affects not just the rack, but secondary loops, heat-rejection plant, water use, and sometimes opportunities for waste-heat reuse.
Complex trade-offs: each architecture has different footprints in terms of CAPEX, OPEX, maintainability, water use, and risk profile; these must be quantified and compared rather than chosen on headline efficiency alone.
Growing role for modelling: high-fidelity thermal–fluid simulation, combined with techno-economic analysis, is increasingly used to test cooling concepts, evaluate retrofits, and de-risk novel architectures before large capital commitments.
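The CAPEX/OPEX trade-off in the list above can be quantified even at a toy level. The sketch below compares the extra capital cost of a liquid-cooling plant against annual energy savings from a lower PUE; every figure is an illustrative assumption, and for simplicity it treats all non-IT power as cooling overhead:

```python
# Toy techno-economic comparison: liquid-cooling CAPEX premium vs annual
# energy savings from a lower PUE. All numbers are illustrative assumptions;
# non-IT power is treated entirely as cooling overhead for simplicity.

def annual_overhead_cost(it_kw: float, pue: float,
                         usd_per_kwh: float) -> float:
    """Yearly cost of the non-IT (overhead) energy at a given PUE."""
    overhead_kw = it_kw * (pue - 1.0)
    return overhead_kw * 8760 * usd_per_kwh  # 8760 hours per year

it_kw = 1000.0
price = 0.10                 # USD per kWh (assumed)
cost_air = annual_overhead_cost(it_kw, pue=1.5, usd_per_kwh=price)
cost_liquid = annual_overhead_cost(it_kw, pue=1.2, usd_per_kwh=price)

extra_capex = 600_000.0      # assumed premium for liquid-cooling plant
annual_savings = cost_air - cost_liquid
payback_years = extra_capex / annual_savings

print(f"annual savings: USD {annual_savings:,.0f}")   # USD 262,800
print(f"simple payback: {payback_years:.1f} years")   # ~2.3 years
```

A real analysis would add maintenance, water use, fluid replacement and risk costs, but even this toy version shows why the decision should be quantified per site rather than made on headline efficiency alone.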
The overall direction is clear: as AI workloads continue to grow, liquid and immersion cooling are moving rapidly from niche options to mainstream requirements. Future data centres, particularly those hosting large AI clusters, are likely to be designed around liquids from the outset, with air cooling playing a supporting rather than central role.

Sources:
https://www.grandviewresearch.com/industry-analysis/data-center-liquid-immersion-cooling-market-report
https://www.mordorintelligence.com/industry-reports/data-center-liquid-cooling-market
https://www.datacenterdynamics.com/en/analysis/liquid-cooling-in-the-generative-ai-era/



