
Rise of the machines

With growth in data centre demand set to continue unabated, the focus on using efficient cooling technology is becoming even more important, says Lone Hansen

A combination of energy-efficiency measures and rising energy costs has led companies to search for ways of lowering their PUE (Power Usage Effectiveness) and operating costs.
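As a reminder of what the metric measures, PUE is simply the total energy drawn by a facility divided by the energy delivered to the IT equipment itself; a value of 1.0 would mean zero overhead from cooling, power distribution and lighting. The sketch below uses illustrative figures, not data from this article.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    1.0 is the theoretical ideal (no cooling/power-distribution overhead);
    lower values indicate a more efficient facility.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures: a site drawing 1.5 GWh in total for 1.0 GWh of IT load
print(round(pue(1_500_000, 1_000_000), 2))  # → 1.5
```

Cooling is typically the largest contributor to the overhead term, which is why switching cooling technology has a direct effect on the ratio.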

This is especially crucial for colocation data centres. Big Data companies in particular have been criticised for their inefficiency, prompting their adoption of newer technologies.

Traditional Close Control (CRAC and CRAH) is still fit for purpose in many countries, but it is gradually losing its share to newer technologies, especially evaporative cooling.

Evaporative cooling exploits water's capacity as a natural coolant: when warm, dry air is humidified, evaporation lowers its temperature, enabling significant savings in operating costs.

Evaporative cooling can use either a pressurised or compressed water mist (evaporative system) or wetted pad media (adiabatic system).

Systems are also divided into direct (external air is allowed into the data hall) and indirect (external air does not mix with the internal air in the data hall).
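The cooling effect achievable by either approach is commonly estimated with the standard wet-bulb effectiveness model: the supply air approaches the ambient wet-bulb temperature, with how closely depending on the equipment. The effectiveness figure below is a typical value for wetted-pad media, chosen for illustration rather than taken from the article.

```python
def evaporative_supply_temp(dry_bulb_c: float, wet_bulb_c: float,
                            effectiveness: float = 0.85) -> float:
    """Approximate supply-air temperature from a direct evaporative cooler.

    Standard wet-bulb effectiveness model:
        T_supply = T_db - eff * (T_db - T_wb)
    An effectiveness around 0.85 is typical for wetted-pad (adiabatic)
    media; the value is illustrative, not from the article.
    """
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Warm, dry ambient air at 30 °C dry bulb / 18 °C wet bulb
print(round(evaporative_supply_temp(30.0, 18.0), 1))  # → 19.8
```

The model also makes the limitation clear: in humid climates the wet-bulb temperature sits close to the dry-bulb temperature, so there is little cooling to be had.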

One major downside of evaporative systems is that they consume water (itself an issue where supplies are scarce), and the evaporative process can cause scaling of pipework and heat exchangers in high-pressure systems. Being mostly large units, the products are better suited to new-build projects because of their space and height requirements.

Close coupled solutions embrace a range of products located close to the heat source: in-row, rack, rear door heat exchangers and overhead terminal units. These tend to be more expensive and more suitable for higher-density racks.

Liquid cooling (direct on-chip or immersion) takes water, or another heat-rejection medium such as Novec 1230, directly to the server.

Depending on the application and the technology chosen, the server equipment can be completely submerged in the coolant.

However, there is still a certain stigma around liquids being at the heart of IT equipment, and it remains a rather niche product, used mainly to deal with extremely high densities in the HPC segment (above 35kW per rack).

The UK and the US are major data centre users, sharing a similar profile for the choice of cooling technology used.

However, it is believed that as more applications move to offsite data centres, operators will increasingly be looking for locations in low-cost countries.

The US accounts for approximately 40 per cent of the worldwide precision cooling market.

It is the market with the largest share of evaporative cooling, representing 26 per cent of the total market. Meanwhile, the UK has traditionally been a big data centre market, being Europe’s main banking and financial centre.

Big Data

It has been claimed that more data has been captured in the past 12 months than in the previous 5,000 years – with Big Data companies and Cloud providers driving the biggest changes.

The strategies they employ to identify locations for their next data centres are defined not only by cost-saving measures that capitalise on favourable tax regimes, but also by colder climates that allow them to maximise the use of free cooling.

Within EMEA, the popular destinations for Big Data companies have extended beyond Ireland to the Netherlands and the Nordics (Sweden, Finland), creating new data centre hotspots in the region.

Big Data companies choose a variety of technologies to cool their data centres in the most energy-efficient way.

In June 2015, a study by Jonathan Koomey and Jon Taylor confirmed the view that a large proportion of the servers inside data centres consume electricity while not being used or accessed in any way.

The study concluded that these so-called comatose servers account for 30 per cent of all servers inside data centres.

Having identified that redundant and rarely accessed data makes up a large share of what is held in the public cloud, in 2013 Facebook separated its older data into a "cold storage" category.

Cold storage can be defined as the retention of inactive data that an organisation or an individual rarely, if ever, expects to access.

Previously, it was believed that servers in data centres needed to be "always on" to provide immediate access to users' data, but the servers in a cold storage facility sit in "sleep mode" and wake only when there is a request for archived information.

This introduces a delay in accessing data, but that slight delay is considered acceptable for archived material.

Facebook has devoted considerable attention to the hardware used for its cold storage system through the Open Compute Project, which has been working on improvements to the hardware systems used in data centres, including cooling.

This led to the adoption of direct evaporative cooling in Facebook's data centres and, following some teething problems in 2013, it has been implemented across an increasing number of its storage centres with improved controls.

In 2014 Facebook also presented a modular approach to data centre cooling through Open Compute, opening the technology to wider use in data centres.

Lone Hansen is manager of the IT cable group at BSRIA
