
Too hot to handle?

Our annual Data Centre Cooling Question Time brought together another crop of experts to consider some of the challenges facing the rapidly changing sector, from heat recovery to energy efficiency

Panel One: Cooling and data centre design

Adam Beaumont I would describe myself as both a thermodynamicist and a data centre operator, so I have a foot in both camps. I founded aql 16 years ago, and prior to that I worked for the MOD and before that I was a university lecturer.

It means I have been a member of the Royal Society of Chemistry, where we are sitting today, for over 20 years. I specialised in applied chemistry and thermodynamics, so I have got my geek badge.

About six years ago, we realised we could control costs better by stopping being a tenant and building our own data centres instead.

We started with a very small inefficient facility in Leeds with a PUE of 3 point-something, but when we got more organised, we realised that we wanted to put more kit in Leeds – with secondary sites in London, Manchester and Glasgow – so we wanted larger premises.

At this point, we thought we should get the textbooks out again, so as to apply some science to what we wanted to cool.

The theory was that we apply the science and then leave the engineering to our supply partners to make sure it all runs to spec.

Andy Hayes I am an electronics engineer by background, but for the past ten to 15 years I have been trying to become more of a mechanical engineer.

As a business we start with the IT side of things, trying to understand what impacts that might create on the engineering side of the data centre.

We work early with our customers on feasibility studies, primarily looking at the mechanical and electrical systems that will suit the IT applications they are trying to achieve.

We have also worked on many MW of design-and-build, up and down the country. I have personally consulted on over 150 MW, so have experienced most of the cooling solutions on offer.

Jon Summers I have been at the University of Leeds School of Mechanical Engineering since 1996, starting in applied mathematics, so I would describe myself as a scientist-cum-renegade engineer.

Since 1998 I have been involved in high performance computing, and we have built and run our own computer in a research facility, looking at thermo-fluid problems – effectively, how to get the heat out of computing.

We have been doing quite a bit of work on liquid cooling, as you will hear later.

What is interesting is the silo mentality in the data centre industry – you’ve got the IT and you’ve got Facilities. And as soon as you introduce liquid, you’ve got an invasion of one on the other. That’s currently a barrier.

The other interesting thing is that carbon footprints are now becoming an issue as data centre growth has made the sector a sore thumb for government.

Robert Tozer I am different to you guys in that I am an engineer who aspires to be a geek. I did an engineering degree in Argentina and then decided to do a PhD at South Bank University.

At Operational Intelligence we believe that you can always find a better way to do things, so what we do is optimise: we optimise design, we optimise for risk, we optimise energy – when we leave, we want the knowledge to remain in the data centre.

Q How do you go about reusing the heat from a data centre in practice?

AB When we started, there weren’t any data centres in Leeds at all. You can either build where there is a high-voltage cable and look to bring all the fibre in, or you can build where the fibre is and bring the power there.

We looked everywhere in Leeds and found an area on the south bank where the metal bashing factories had been, so there was power, but it was also where all the fibre passes through the city.

None of that fibre had been broken out; it was just passing through for other locations.

We had to look for the pinch point where all the operators come together, whether it is a pylon or a canal, and the only place that happens is in a city. That is the only logical place to have a data centre.

We build carrier-neutral facilities for all the different content providers and internet service providers.

The content providers need to be near where all the requests from users are coming from, because that makes it more efficient and lower cost.

The problem then is you have created this antisocial monster in the centre of the city and you can’t build around it much, because the data centre is sucking up the power.

You also have what could be an antisocial noise problem as you are spinning up fans and rejecting heat.

The city planners say ‘we want to build nice balconies but all we can hear is the sound of a plague of locusts below’. So we have to try and decrease that level of ‘antisociality’. We do that by offering to share our heat and by keeping our noise down. As a data centre operator, you can’t just get involved with your own scheme, you become an unpaid city planner.

So I found myself involved with the Leeds district heating scheme, identifying the hot spots with the consultants and trying to work out what level of heat was needed and what we can do.

The problem with data centres is in normal operation you are down at the 30 deg C level, which is good for space heating but difficult to put into district heating schemes, unless you create your own, via warm water exchange with nearby buildings, underfloor heating or radiant heating, or whatever.

So we helped design the scheme in Leeds so that we could inject heat into the cold return of the heating scheme. The only problem with this is you have got to have a base load all year round.

Rejecting the heat to atmosphere is seen as antisocial, so it is a great driver to get the city councils to foster cooperation with more parties who can take that heat. The new buildings have to be in a position where they can take that low-grade heat.

We are just starting to do that in Leeds, but it is challenging, as it is a cost and there isn't a direct route to grant funding for developers – they see it almost as an eco-tax.

We are going to be energy-sharing with the 338 homes being built next to us.

They are only going to require 1 kilowatt per home to keep them heated, so they are amazingly efficient.

The legal obligations are quite onerous. We also don’t charge for our heat – as an employer, we want to be doing good stuff. If you can give away something that would be wasted, what’s the harm in that?

The ideal would be to get the scheme operator to take all the heat all the time, so that we don’t have to invest in any fans, but on a residential scheme you couldn’t do that, as it would get too hot in summer. But if they joined a district heating scheme, they could regulate that.

There is a district heating scheme in Sheffield and there will soon be one in Leeds. They are key in smoothing out the demand. It is still unregulated, but that will help, certainly.

AH In the conversations we have with our clients, there is often a commercial barrier to reusing the heat. It is often low-grade heat, and it is a mass of air, which is difficult to transport, so it has to go back into water.

The heat is generated by data centres 24/7 too, let’s remember. Large data centres could be generating 40 or 50 MW of heat all the time.

So how do you make it viable technically and commercially to reuse it?

We have done projects at the smaller computer room end where we have reused the waste heat on the return of the water loop for underfloor heating, because that requires the heat all the time, based on a 13 to 14 deg C intake.

That’s at the smaller end, but at the larger scale there are interesting things going on with eco-villages, where you recapture the heat in various ways, or you could consider building a swimming pool next to the data centre. Or greenhouses.

Q Do you see scope for absorption chillers?

AH We are working on two projects at design stage with absorption chillers actually.

While energy is important, the driver for the colocation centre is the lowest cost per kilowatt to compute. The schemes would utilise the heat from 30 to 60 MW waste-to-energy plants, take the steam from them and turn it into coolth for the data centre. We can also take direct electrical feeds from them. There are quite a number of those plants at the 40 MW-plus size that are quite under-utilised at the moment.

AB There is a small one of those in Leeds and it is no coincidence that I have just bought the plot of land next to it.

Q Has the focus of attention from governments in Europe and the UK on energy changed the picture for data centres?

AB The data centre layer is sadly becoming commoditised by the growth of the Cloud, since cloud-based operators say ‘well I can just turn off your data centre and our other 20 sites will pick up the load’. So it becomes essential to have the best energy efficiency for the lowest cost. It is not necessarily just about being green.

AH I think resilience and redundancy still come first, but energy efficiency is next. Yet the growth of data will see the carbon footprint of the sector overtaking the aerospace sector soon – in some countries it already has – and that will probably see more legislation. We are a bit constrained by the IT technology – that still creates the base load, of course.

RT I think the idea of risk versus energy is a bit of a myth. It is no longer a case of simply adding another system for redundancy, running at full power, and thus doubling the energy, because you can add redundancy by having two CRAH systems running at half the speed, each with independent controls. Two fans running at half the speed will give you one eighth the power under the fan power rule. If you can do that with variable-speed drives on the fans, pumps and compressors you are onto a winner for efficiency – and you have additional heat exchange area.
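The 'fan power rule' here is the fan affinity law, under which fan power scales roughly with the cube of speed. A minimal sketch of the arithmetic, using a purely illustrative 10 kW fan rather than any figure from the panel:

# Fan affinity law: power scales with the cube of speed (idealised).
def fan_power_kw(full_speed_power_kw, speed_fraction):
    return full_speed_power_kw * speed_fraction ** 3

one_fan_full = 10.0                              # kW at 100% speed (illustrative)
per_fan_half = fan_power_kw(one_fan_full, 0.5)   # 1.25 kW - one eighth per fan
two_fans_half = 2 * per_fan_half                 # 2.5 kW - a quarter of the original total
print(per_fan_half, two_fans_half)

Each half-speed fan draws roughly an eighth of its full-speed power, so a redundant pair running at half speed moves the same air for about a quarter of the energy of one fan at full speed – the efficiency win Tozer describes.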

JS Cooling technology is actually very efficient now – with evaporative cooling, for instance, the PUE is very low, but the problem is that the IT equipment is more variable. You can buy IT that idles at 75 per cent or at 25 per cent. We should be beating up the IT guys about it, not the M&E guys. The low efficiency often comes from not using the IT enough.

Q What one message would you give to the supply industry?

AB Understand the difference between heat and temperature – and that goes for suppliers and for clients – that is where we will make the step change.

The two are not sufficiently understood – we often tell our clients' lawyers that they should base the SLAs on cooling, rather than temperature.

So it is about how we exchange a certain amount of heat, not keeping the centre to 21 deg C: you can provide a nice trickle of air at 21 degrees, but the equipment will be starving.

We need to produce guidance that can be easily understood by the lawyers. We should be talking about thermodynamic equilibrium – if you go into one of our data centres, it is quiet.

AH Aside from delivery times of equipment, I agree with Adam that there needs to be more understanding around the IT, about things such as airflows – it isn't just about the kilowatt capacity of the cooling equipment, it is also about the airflow capacity. Many legacy CRAC units are undersized for the airflow requirements of a modern data centre.

JS If you are cooling by air, it is difficult to handle, and gets turbulent, so it is more down to the design of how you get the air to the IT so that exhaust air doesn’t mix with the supply air.

The great thing about using liquid, by contrast, is that you require a smaller volume and it is much easier to move around. But I understand that the industry is still a little bit averse to the method.

Q What is more important to the panel – the operating costs or the capital cost?

AB I have no problem with upfront costs; I want my operating costs to be as low as possible. We look at our costs over ten years. If we can make savings over that period, that is the way to go.

Look at the way the data centre sector has changed in the last ten years – you go into some London facilities and the floor is uneven from the amount of cables under the tiles. The cost of retrofitting anything is absolutely massive – whether it be managing legal risk or adding extra power.

AH There has been a real change in the last five or six years in the knowledge in the supply base.

Our advice to clients on total cost of ownership is that the biggest cost across the lifecycle of the facility will be power usage and associated costs and the upfront Capex will be fairly insignificant against that.

So if you put 30 or 40 per cent more investment in upfront, you will see a significant improvement in TCO. Educated clients ask us for that, but we tell all clients to write a ten-year TCO into their tenders and then see what the upfront cost might be.

Q Do you see potential for river source heat pumps to cool data centres?

AB River source is a funny one. You can't get a permit for putting heat into a river; you can only get a permit for water usage. For data centre cooling you would be taking water out and putting it back, but you can't put it back more than one and a half degrees warmer than you took it out.

So depending on your heat load you have to use the right mass of water to meet that.

You don’t have to use a heat pump per se; you could simply use a heat exchanger. The issue is that there is only a certain amount of heat load any river can take.
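As a hedged illustration of that point about heat load and water mass (the 1 MW figure below is an assumption, not a number from the discussion), the permitted 1.5 deg C rise fixes the river water flow you need via Q = m_dot x cp x deltaT:

# Required river water flow for a given heat load and permitted temperature rise.
heat_load_w = 1_000_000      # 1 MW of rejected heat (illustrative assumption)
cp_water = 4186              # J/(kg K), specific heat capacity of water
dt_allowed = 1.5             # K, maximum permitted rise on discharge
m_dot = heat_load_w / (cp_water * dt_allowed)
print(round(m_dot), "kg/s of river water")   # roughly 160 kg/s per MW of heat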

Q Will IT equipment be emitting more heat in five years' time as it gets more powerful?

RT As far as trends go, we can see computer use going up, and therefore the load densities will go up too. If the load densities are going up, so will the Delta Ts across your IT equipment. The inlet temperatures are going to go up, but the outlet temperatures will go up even further.

Before you had air going in at 20 deg C and going out at 28 deg C, but now we could see it going in at 25 deg C and going out as high as 50 deg C. What is it going to mean for the cooling industry?

Containment is going to be an absolute must.

Secondly, hot aisle temperatures are going to go up, to the extent that you may not want to do cold aisle containment but instead have a chimney-type solution.

And if I have higher return temperatures coming back from the equipment, that will give more opportunity for free cooling.

AH We have seen that happening: in the past 18 months we have installed about 5 MW of compressor-free cooling in high density facilities, and we are looking at dynamically controlling the inlet temperature of the space based on the ambient.

As the temperature rises outside, we will raise the inlet temperature so we don't need the chillers. Manufacturers are now warrantying servers for up to 40 deg C in short-term conditions.

If you can contain the heat in chimney racks, you can use indirect free cooling without any compressors.

AB And as you start to raise inlet temperatures you have fewer cold spots and can start to raise the humidity levels.

It was a concern in the past but standards on humidity were written with ticker tape in mind, keeping the humidity low. It is not so relevant these days.

You are also increasing the specific heat capacity of your air, so per unit mass of airflow you can exchange more heat energy, with less risk of it condensing on any part of the equipment.
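As a rough sketch of that point (the humidity ratios below are illustrative, not figures from the panel), the heat carried per kilogram of dry air rises with the moisture content, roughly cp_dry + w x cp_vapour:

# Effect of humidity ratio w (kg vapour per kg dry air) on heat carried per kg of dry air.
cp_dry_air = 1.006    # kJ/(kg K)
cp_vapour = 1.86      # kJ/(kg K)
for w in (0.005, 0.010, 0.015):       # illustrative humidity ratios
    print(w, round(cp_dry_air + w * cp_vapour, 3), "kJ/(kg K)")
# A few per cent more heat can be moved per unit of airflow at the higher humidity.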

Panel Two: Data centres in operation

Frank Mills I am basically an M&E engineer who has specialised in sustainability.

I am a member of Ashrae’s Technical Committee 9.9 for mission-critical facilities, as well as a Cibse council member.

I’ve been involved in Media City’s district heating system in Salford, as well as the extension to Vertex’s data centre facility, which used both liquid cooling and free cooling and halved its energy use.

Free cooling is a good option in the UK – we did an interesting job for Burnley schools where we had a data centre supplying nine schools, and along with free cooling and liquid cooling, we used a heat pump with phase change material either side.

On the cooling side, we had a buffer vessel storing energy in the phase change material at 8 deg C, while on the heat side, we were storing at 45 deg C.

That is because you can never really match the time you want heating and cooling, so this way, the heat pump runs most efficiently.

Stu Redshaw My background is that I did a degree in thermodynamics at Nottingham University and spent about six years as a post-doc studying fridge cycles.

These ranged from absorption to adsorption chillers, but they were all very inefficient, so I spent all my time worrying about how to stop losing heat from them.

We started up a business called 4Energy, which looked to take that knowledge and apply it to the IT sector.

We designed free-air cooling systems for mobile base stations. As a vendor, I felt we couldn't answer all the customers' needs, so I took the difficult decision to come out of there and create a consultancy. We have applied that 'poacher turned gamekeeper' knowledge to helping customers work out how to save energy.

The tuning techniques are really simple and 90 per cent of this audience would have no problem in applying them, but because the customers are concentrating on their business, they rely on this sector to help them.

As an example, we were working with a big IT manufacturer and we had a thermal issue which was leading that company to investigate rack-based cooling.

We got into a conversation about racks and frames – they are really different but sometimes people think they are interchangeable.

The vendor in this case had specified the equipment to be in a frame, but in the UK this was interpreted as meaning a cabinet with a door and a mesh grille – that box within a box is adding 5 degrees to the Delta T, yet the vendor has no idea that is going on. That is the window for free cooling – that is where we come in.

Q What is the most common and fixable error you find when you look at the operation of the data centres?

FM I think understanding of airflows – we often find problems with turbulence, and often that is caused by air coming in around the cables. It may not seem like a problem to have cold air coming up and into the servers, but the Ashrae guidance is based on the cold air coming into the cold aisle, then in front of the rack, flowing through it and out again, so those cable holes are causing unnecessary turbulence and affecting efficiency.

SR Over-provision of airflow can often be a problem. Our methodology is to start metering out airflow, viewing it as the precious resource that it is, and then make sure that everything adheres to TC 9.9. We don't mind if there are some openings in the floor, but we are extremely tight on airflow. It sometimes needs a culture change – people are used to seeing nice shiny open tiles, but they should get used to tiles that are shut. Even shut tiles release 250 m³ of air an hour, which has a cost to it. So even closed tiles are leaking.

FM We have undertaken CFD modelling so that visualising where the air should be is part of the commissioning process. If there is a problem with a data centre, you can go back and run the CFD – we use Tile Flow, but others are available – to see what the airflows should be. It is not a particularly expensive tool to use.

JS I have taught CFD to engineers but people have been rightly critical because if you don’t set it up correctly it is garbage in, garbage out. You should treat air like liquid – you don’t let liquid leak. It is amazing how leaky some equipment is – and the first port of call, the low-hanging fruit, should be blanking plates. Don’t let the cooling and heating mix.

FM The challenge is how we deal with an environment where the IT gets changed over every five years but our equipment does not get changed for 20 years or so. This is where BIM [Building Information Modelling] may become crucial – it will create a living piece of data, where all the interventions are recorded. It is a bit of a revolutionary idea for construction – recording all the maintenance and upgrades – but it will allow everyone to see the costs, quantities and time schedules, and potentially the energy in operation too.

SR And you could link BIM with the BMS to add a fourth dimension, to see everything working in real time. Naturally I have a commercial interest in this, but I think every data centre should get a tune-up regularly. We regularly get 25 per cent energy reductions on the first visit, and it is more difficult to repeat that on the second visit, but it is important to keep the airflow expertise on hand six months or a year down the line.

I am not a huge fan of regulation, but perhaps the airflow environment should be regulated in the same way as, say, F-Gas, as it is so important. If data centres were to have an air conditioning inspection, in the same way buildings do under the EPBD, that would be the single biggest impact on the data centre environment.

FM The new provisions of the EPBD will see the ESOS [Energy Savings Opportunity Scheme], which will require all major users to have an energy audit by the end of this year, identifying energy-saving opportunities for the user. As an engineer I think this is a brilliant idea, so that people know where they can save energy, but politically I am not sure how much enthusiasm there is.

JS And let’s not forget the EU Code of Conduct on data centres, which makes 155 recommendations for best practice that operators can follow.

FM If we get the IT people involved with us, we can produce something that works much better, because we can understand what they require. Unfortunately, we don't seem to have a similar group in the UK. Perhaps we could get Cibse to start a data centre group – I am sure Ashrae would support us.

Jon Summers: advancing the technology

As you may know, at the University of Leeds, we have been looking at different ways of extracting heat from the IT equipment.

Digital growth has, of course, been fuelling the power demand. The estimates vary, but one figure suggests data centres alone require 35 to 40 GW of power. Some work done by Greenpeace on total electricity use by the internet put it at the fifth biggest user in the world, behind four countries: the US, China, Russia and Japan.

The problem with data centres is that they are huge power users and that they emit a lot of waste heat along with their data – there is a battle between the sector's growth and its efficiency gains, and that makes for a large carbon footprint, at 2.5 per cent of the global CO2 equivalent, or 1 gigatonne. One report says that by 2020 the sector will climb to 4 per cent of the footprint.

IT is becoming more efficient, but not fast enough. The reason you see the word ‘effectiveness’ and not ‘efficiency’ in data centres is because they are not very good at reusing the heat.

But if you look at the cost equation, you can see why it hasn't been a priority: if you say that 25 per cent of our GDP is based on our data centre activity, the revenue generated by a kilowatt hour would be around £50 currently. But the cost of the energy is only about 10p, so it doesn't stack up. As we have been discussing, it has to be more a question of corporate social responsibility than a real issue of economics.

Greenpeace has also stated that it will “scrutinise the working patterns of data centres and keep a constant check on which are using renewable energy”.

There is a problem though in understanding – someone in the audience was recalling a conversation with government which ran “we won’t need data centres in future if everything is going into the Cloud”.

We have a problem of scale – the heat is generated by the microprocessor which is tiny, but the data centre is large. If you look at what nature does with the problem of scale it uses liquid, by way of blood supply to move the oxygen large distances. We can do the same thing with heat in data centres.

If you are using liquids you can reduce heat dissipation; it is also easier to reduce heat conduction, because pipework is easier to insulate than ductwork; and it is also easier to reduce turbulence with liquid than with volumes of air. Liquids also have a much larger specific heat capacity than air and therefore it is more efficient to move heat with a liquid and to pump it, than to use fans.
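A back-of-envelope comparison of that last point, using textbook property values rather than anything quoted on the day: per unit volume and per degree of temperature rise, water carries several thousand times more heat than air.

# Volumetric heat capacity = density x specific heat capacity.
rho_cp_water = 998 * 4186     # J/(m3 K), roughly 4.2 MJ/(m3 K)
rho_cp_air = 1.2 * 1006       # J/(m3 K), roughly 1.2 kJ/(m3 K)
print(round(rho_cp_water / rho_cp_air))   # ~3,500 times more heat per unit volume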

Of course if you are in an urban area, there are also benefits to not using fans in terms of noise. Also, you will probably need to transport any waste heat using water, as we have heard earlier, and according to Newton’s law of cooling you can carry liquid two hundred times further than air with the same thermal losses. So if you can pipe it and insulate it, you don’t need any extra plant in the room.

These days you are often seeing some sort of liquid being used in the data centre for cooling: indirect liquid cooling is increasingly common, where you have a rear door heat exchanger or an in-row heat exchanger. The next step is direct liquid cooling, where you capture most of the IT heat in the liquid.

The final classification of ‘total liquid cooling’ picks up all the heat and zero per cent is lost to air. This of course requires the IT and FM sectors to work together.

At the University of Leeds, we have the servers immersed in a dielectric liquid – a favourite demonstration of dielectric liquids is to immerse a mobile phone in the liquid and to ring the number, as all the electronics will continue to work.

We have got up to 89 per cent heat capture in the liquid at 45 deg C. A simple calculation shows that this could potentially offset 40 per cent of GHG emissions from the data centre.

Robert Tozer: the data centre environment in 2015

Let's start with why we are all here. If you look at a conventional data centre with a PUE (power usage effectiveness) of 2, the biggest energy user is clearly the refrigeration, followed by fans and the UPS. But when you look at a modern data centre, where the PUE is 1.2, the whole picture has changed – while the IT is the same, there are dramatic changes for the refrigeration. Those changes are what we are here to talk about – and of course you can even have zero refrigeration, using free cooling, in certain circumstances if you want.
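For readers unfamiliar with the metric, PUE is total facility energy divided by IT energy, so the shift Tozer describes is easiest to see with an assumed 1 MW IT load (an illustrative figure only):

# PUE = total facility power / IT power.
it_power_mw = 1.0                           # assumed IT load
for pue in (2.0, 1.2):
    total_mw = it_power_mw * pue
    overhead_mw = total_mw - it_power_mw    # refrigeration, fans, UPS losses, etc.
    print(f"PUE {pue}: {total_mw} MW total, {overhead_mw:.1f} MW overhead")
# The overhead falls from 1 MW to 0.2 MW, which is why refrigeration dominates the savings.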

One of the problems in a data centre of course is waste heat. Air at temperatures as high as 30 to 40 deg C is all too often simply expelled to the atmosphere. One solution is to put a heat pump in the return path of the data centre hot air and to use that energy to produce hot water at around 50 deg C. If the outside air is 0-1 deg C, you can have the heat pump working very efficiently. Another option is to use the hot water directly at 30 deg C, for radiant heating.

A further option could be to use absorption chillers in a combined heat and cooling system. You need 80 or 90 deg C for the chillers’ generation cycle, so you won’t get it to work by simply taking the 30 deg C waste heat from the data centre unfortunately. I have seen that claimed, but it’s thermodynamically impossible.

You could work with it in a combined system with power generation or with a steam turbine, but I would have to ask how practical it would be. The technology and expertise required make it a big undertaking.

Another of the questions we were asked to consider is: “Has chilled water had its day?”

We have used a conventional chilled water circuit but then put a heat exchanger between the condenser water side and the chilled water side, so that when it is cold enough outside we can bring down the temperature in the condenser circuit and free cool the data centre.

A project we were involved with in Paris had an average PUE of 1.22 throughout the year, but it was using a chiller unit for 2 per cent of the time for the extremes of temperature, which meant that the maximum PUE went right up to 1.6.

Why did they need all the equipment for this 2 per cent? Why couldn't they have just let it run hotter for that time?

That is the big debate with the client.

They could have saved not only the cost of installing the chiller but also the cost of the electrical installation that supports it. The moral of the story is that your average PUE is related to the energy, but the maximum PUE is related to the capital cost.
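A hedged reconstruction of the Paris arithmetic (the free-cooling-mode PUE below is assumed to make the numbers work, and a constant IT load is assumed so hours can be used as weights):

# Annual average PUE as a time-weighted blend of the two operating modes.
chiller_fraction = 0.02          # chiller runs 2 per cent of the year
pue_chiller_mode = 1.6           # reported maximum
pue_free_cooling = 1.21          # assumed for illustration
avg_pue = chiller_fraction * pue_chiller_mode + (1 - chiller_fraction) * pue_free_cooling
print(round(avg_pue, 2))         # ~1.22, close to the reported annual average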

Then we move on to: has compressor-driven refrigeration had its day?

We did a map which worked on the basis that if you comply with Ashrae's recommended limit of 27 deg C air inlet and use an indirect water cooling system with a cooling tower, using a 'seven-degree approach' where the outdoor wet bulb is 20 degrees or lower, a number of places in Europe and North America could achieve zero refrigeration.

But if you use a ‘four degree approach’, where the outdoor wet bulb is 23 degrees or lower, it makes a massive difference – in fact, most of Europe could achieve zero refrigeration following this approach.
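A small sketch of the 'approach' arithmetic behind those two maps, assuming the achievable supply air temperature is roughly the outdoor wet bulb plus the approach:

# Zero-refrigeration check against the Ashrae recommended inlet limit.
ashrae_recommended_max_c = 27
for approach_k, wet_bulb_limit_c in ((7, 20), (4, 23)):
    supply_c = wet_bulb_limit_c + approach_k
    feasible = supply_c <= ashrae_recommended_max_c
    print(f"{approach_k} K approach, wet bulb <= {wet_bulb_limit_c} C: "
          f"supply ~{supply_c} C, zero refrigeration possible: {feasible}")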

But has refrigeration had its day? I will leave that up to the panel to debate.
