Following the launch of the Carbon Trust’s (CT) data centre design service, Rob Jones, lead consultant for ICT at the CT’s Advisory Services, offers advice on efficiency.
“There are encouraging signs of a shift towards more efficient design and operation of data centres in western economies, but due to the flexibility of new architectures such as cloud there is a risk that emissions may simply be exported to other countries with less stringent operating environments.”
Regarding efficiency savings, Mr Jones says: “It depends on the original design and scale of the data centre. We can achieve energy savings of 70 per cent or more at some centres through radical optimisation of the IT layer as well as re-design of data halls and supporting electrical and cooling infrastructure, or even through a complete new build facility. If we’re dealing with a state-of-the-art new build facility that is essentially well designed to begin with, we might improve savings by a few percentage points through some targeted design tweaks. Savings could therefore be several MWh or a few kWh, depending on scale.”
“To date we have worked with eight data centres but have a long track record with data centre providers and enterprise ‘in house’ systems. We are in discussions with the next tranche of customers – about another eight.”
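The point about scale can be made concrete with a rough calculation. This is a minimal, illustrative sketch only: the facility sizes, PUE values and the 24/7 load assumption are examples, not Carbon Trust figures; the 70 per cent and few-per-cent saving levels come from the quote above.

```python
# Illustrative only: how a percentage saving translates into absolute energy
# at different facility scales. All input figures below are assumed examples.

def annual_saving_kwh(it_load_kw: float, pue: float, saving_fraction: float) -> float:
    """Annual energy saved (kWh) for a facility running 24/7.

    it_load_kw      -- average IT load in kW (assumed value)
    pue             -- Power Usage Effectiveness of the facility (assumed)
    saving_fraction -- fraction of total energy saved (e.g. 0.70)
    """
    hours_per_year = 8760
    total_kwh = it_load_kw * pue * hours_per_year
    return total_kwh * saving_fraction

# Legacy facility: 500 kW IT load, PUE 2.0, radical optimisation (~70%)
legacy = annual_saving_kwh(500, 2.0, 0.70)   # ~6.1 GWh per year

# Well-designed new build: same load, PUE 1.2, targeted tweaks (~3%)
modern = annual_saving_kwh(500, 1.2, 0.03)   # ~158 MWh per year
```

The same percentage point of saving is therefore worth orders of magnitude more energy at a large, inefficient site than at a small, well-designed one.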
CARBON TRUST DATA CENTRE ADVICE
· Free air cooling, rather than chiller plant and traditional Computer Room Air Conditioning (CRAC) units, has a significant impact on energy efficiency and performance. The geographical location of the data centre in relation to the local climate is therefore a key consideration.
· Some of the most efficient new build data centres use evaporative free cooling technologies combined with modular provisioning and operation. They also incorporate intelligent building management systems with temperature and humidity sensors, allowing the cooling response to be ‘trimmed’ in line with dynamic IT loads and the resulting thermal output.
· It’s important to note that compute density plays a large part in determining what type of cooling technology and medium is used. Free air cooling has been modelled to be effective at compute densities of up to 18kW per rack, although ‘safe operation’ is regarded as being significantly below this due to issues such as ‘time to failure’. This should not be a problem in a properly configured environment.
· Future proofing data centres in terms of the cooling medium is imperative: the advent of multi-core chip architectures and highly virtualised environments is resulting in very high compute densities and thermal outputs. This has led some organisations to invest in infrastructure that can be hooked up to a water interface at some point in the future, avoiding an expensive retro-fit or new build. Others have looked at new super-efficient refrigerants and dual-phase systems. Future technologies may involve on-chip cooling or immersion in non-conducting cooling mediums, both of which look promising.
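The density thresholds above lend themselves to a simple planning check. In this sketch, the 18kW modelled limit comes from the advice above, but the lower ‘safe operation’ threshold (12kW) and the rack figures are assumed example values, not Carbon Trust guidance.

```python
# Illustrative sketch: flag racks whose power density exceeds what free air
# cooling can handle. MODELLED_LIMIT_KW is from the article; SAFE_LIMIT_KW
# and the sample rack densities are assumed example values.

MODELLED_LIMIT_KW = 18.0   # modelled effective limit for free air cooling
SAFE_LIMIT_KW = 12.0       # assumed conservative margin for 'safe operation'

def classify_rack(density_kw: float) -> str:
    """Classify a rack's suitability for free air cooling by power density."""
    if density_kw <= SAFE_LIMIT_KW:
        return "safe for free air cooling"
    if density_kw <= MODELLED_LIMIT_KW:
        return "marginal - review time-to-failure risk"
    return "needs alternative cooling (e.g. water interface)"

racks = {"rack-a": 6.5, "rack-b": 14.0, "rack-c": 22.0}
for name, kw in racks.items():
    print(f"{name}: {kw} kW -> {classify_rack(kw)}")
```

Racks beyond the modelled limit are exactly the case the future-proofing advice addresses: provisioning a water interface now avoids a retro-fit later.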
What are the biggest mistakes or no-nos in terms of inefficient data centre cooling design or technology usage?
· Hot / cold aisle orientation – simply making sure servers are in the same orientation in the racks can make a significant difference. A recent survey we conducted of 175 data centres in the UK suggested that 68 per cent did not have a rigorous hot / cold aisle policy, and hence air was mixing in the aisles.
· Not considering cost-effective hot / cold aisle containment – ‘butcher’s curtains’, for example. Our survey suggested that 78 per cent of the 175 data centres sampled do not have a rigorous hot / cold aisle containment policy.
· Optimising at the IT layer without considering the impact on the cooling configuration, which may negate much of the saving at the IT layer through the creation of thermal hot spots.
· Opting to replace technology on a like-for-like basis rather than exploring new solutions that have the potential to significantly reduce energy use and cooling demand.
· Over-provisioning capacity by constructing large data halls which fill up over time, rather than using modular data halls provisioned with modular cooling infrastructure that can be expanded according to need.
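The over-provisioning point can be illustrated numerically. This is a toy model, not a Carbon Trust method: the idle-overhead fraction and the hall sizes are assumptions chosen purely to show why unused provisioned capacity carries an ongoing cooling cost.

```python
# Toy model of the over-provisioning mistake: idle provisioned capacity is
# assumed to still incur a fixed fraction of its rated cooling load.
# The 15% idle-overhead fraction and all capacities are assumed examples.

def cooling_overhead_kw(provisioned_kw: float, used_kw: float,
                        idle_overhead_fraction: float = 0.15) -> float:
    """Cooling/infrastructure overhead attributable to unused capacity (kW)."""
    unused = max(provisioned_kw - used_kw, 0.0)
    return unused * idle_overhead_fraction

# Monolithic build: 2 MW hall built on day one, only 500 kW in use
mono = cooling_overhead_kw(2000, 500)     # ~225 kW of standing overhead

# Modular build: 500 kW modules, one deployed so far, 500 kW in use
modular = cooling_overhead_kw(500, 500)   # no idle capacity, no overhead
```

Under these assumptions the monolithic hall pays a continuous penalty on capacity it will not use for years, which is the saving modular provisioning captures.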