
Powering the AI era: How data centres need to evolve

The AI boom is reshaping the digital landscape – and nowhere is this more evident than in data centres. As models grow larger and more compute-intensive, the infrastructure supporting them must evolve rapidly. Racks that once consumed 30kW are now pushing beyond 100kW, with 1MW+ configurations quickly becoming a reality.

Meeting these demands is not just about scaling capacity – it is about rethinking the entire power delivery chain for maximum efficiency and sustainability. This includes the adoption of high-voltage DC (HVDC) architectures, which offer improved power distribution efficiency and reduced conversion losses, and the introduction of liquid cooling technologies, which are essential for managing the thermal loads of ultra-dense compute environments.

Sponsored

Introducing liquid cooling

As AI and high-performance computing (HPC) workloads continue to exceed the thermal limits of traditional air-cooled systems, liquid cooling has become the go-to solution for dealing with the excessive heat produced in high-density compute environments. Direct-to-chip cooling technology is enabling significant improvements in energy efficiency and sustainability, including zero water consumption, an over-50% decrease in cooling power usage, and an 18% decrease in total power consumption. At scale, these gains could prevent an annual emission of 35 million metric tons of CO2.
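The relationship between the cited cooling and total-power figures can be checked with simple arithmetic. The sketch below assumes (hypothetically – the article does not state it) that cooling accounts for roughly 36% of facility power; halving that share then yields roughly the quoted 18% total reduction:

```python
# Back-of-the-envelope check of the cited figures. The cooling_fraction
# value is an illustrative assumption, not a number from the article.
cooling_fraction = 0.36    # assumed share of facility power spent on cooling
cooling_reduction = 0.50   # cited: over 50% decrease in cooling power usage

total_reduction = cooling_fraction * cooling_reduction
print(f"Total facility power reduction: {total_reduction:.0%}")  # 18%
```

The point of the exercise: because cooling is only one slice of facility power, even a dramatic cooling improvement shows up as a smaller (but still substantial) total-power saving.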

While AI workloads dominate the headlines, they still represent only a fraction of global data centre power usage. Most of the infrastructure remains dedicated to CPU-based workloads, which also benefit from advanced cooling solutions. Innovations like standalone liquid cooling systems are designed to integrate seamlessly into existing data centre environments, delivering immediate performance and efficiency improvements without requiring major infrastructure changes. Increasingly, hybrid cooling approaches – combining air and liquid cooling – are being adopted to optimise thermal management across diverse workloads, striking a balance between efficiency, scalability, and flexibility.

Rethinking power architectures

As AI workloads continue to push data centre rack densities higher, operators are rethinking how to meet energy consumption demands with maximum efficiency, scalability, and sustainability. A key innovation gaining traction is the shift toward HVDC architectures, particularly +/- 400 V DC and 800 V DC systems, along with the solid-state technologies they enable. These configurations have the potential to reduce conduction losses, enable longer cable runs, and minimise the conversion stages required to step power down from the grid. The result is improved overall system efficiency and reduced thermal management complexity.
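The conduction-loss benefit follows directly from Ohm's law: for a fixed delivered power, loss in a cable scales as I²R, so doubling the distribution voltage halves the current and quarters the loss. A minimal sketch, with hypothetical rack power and cable resistance chosen only to show the scaling:

```python
# Illustrative comparison of cable conduction losses (P_loss = I^2 * R)
# when delivering the same power at different distribution voltages.
# RACK_POWER and CABLE_R are assumed values, not figures from the article.
def conduction_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss for a given delivered power and bus voltage."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

RACK_POWER = 100_000   # 100 kW rack (assumed)
CABLE_R = 0.001        # 1 milliohm cable run (assumed)

for v in (48, 400, 800):
    loss = conduction_loss(RACK_POWER, v, CABLE_R)
    print(f"{v:>4} V: {loss:,.1f} W lost in the cable")
```

With these numbers, the 48 V legacy case burns kilowatts in the cable while the 800 V case loses only tens of watts over the same run – which is also why HVDC permits longer cable runs at an acceptable loss budget.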

Another advancement sees a power shelf system optimised for next-generation AI platforms, achieving 97.5% efficiency at half-load. By leveraging native 800 V DC input, the system streamlines power conversion and reduces the need for intermediate AC stages. This improves energy efficiency while simplifying infrastructure design, allowing for denser deployments and faster scalability within the same data centre footprint.
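Why removing intermediate stages matters: the efficiency of a power chain is the product of its stage efficiencies, so every stage eliminated compounds the gain. The stage counts and per-stage values below are illustrative assumptions, not the specification of any particular product:

```python
# Illustrative: chain efficiency is the product of stage efficiencies,
# so eliminating intermediate AC stages compounds the improvement.
# All per-stage figures are assumed for the sketch.
from math import prod

legacy_chain = [0.98, 0.96, 0.97, 0.96]   # e.g. UPS, transformer, PSU, VRM
hvdc_chain = [0.985, 0.975]               # e.g. rectifier + 800 V DC shelf

print(f"Legacy chain efficiency: {prod(legacy_chain):.1%}")  # 87.6%
print(f"HVDC chain efficiency:   {prod(hvdc_chain):.1%}")    # 96.0%
```

Under these assumptions, cutting four conversion stages to two recovers nearly ten percentage points of end-to-end efficiency – heat that never has to be generated or cooled.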


The future of power

Looking ahead, the next generation of data centre infrastructure will be defined by radical efficiency gains – not just in energy consumption, but in physical space and system design. Traditionally, converting incoming AC power to a DC voltage usable at the chip level required several conversion steps, each of which negatively impacted energy efficiency. But higher DC voltages are now emerging in the data centre, including 800 V DC, which allows direct connection to renewable energy systems, and +/- 400 V DC, which is required for the integration of capacitive energy storage systems (CESS), battery energy storage systems (BESS), and microgrid applications.

Condensing power conversion into a single solid-state transformer not only produces efficiency gains but also significantly reduces the square footage required for electrical equipment – which, when combined with higher-density compute and cooling, could mean up to 90% smaller data centre footprints by 2030. This opens new paths to profitability: saving on construction costs, or increasing compute capacity in the existing envelope by adding more racks. We call this the convergence of power and IT, and it is a welcome step forward.

Building a scalable, sustainable AI infrastructure

As AI continues to evolve, so too must the infrastructure that powers it. From liquid cooling to HVDC systems and solid-state transformers, the future of data centres lies in integrated, efficient, and sustainable design. The next few years will be critical in shaping how we compute – and how responsibly we do it.
