The benefits of improved control systems in a mission-critical data centre are significant and immediate, says Steve Nicholls
A breathtaking scale of investment is being poured into building new data centre facilities in the UK and around the world. Google alone invested £1bn in building new data centres in just three months last year, during April to June. Spending on data centres by operators globally is predicted to reach £90bn this year.
In the UK, the market is expanding rapidly. Despite concerns over the availability of adequate power supplies within the M25, London is the largest single data centre market in Europe. Growth outside the capital is also rapid, driven by the lower costs of running data centres in the provinces.
The growth in specialist exhibitions for the data centre market, such as the recent Data Centre World in London and upcoming Data Centre Europe in Monaco, is testimony to the scale of investment flowing into the sector.
The statistics around data centres quickly move into mind-boggling territory. Take the fact that there are reckoned to be more than 500,000 data centre facilities across the world, occupying 285m sq ft of space.
They range from mega-facilities, covering the equivalent area of a small town, to local server rooms supporting the needs of individual companies.
Estimates suggest that data centres hold in excess of 1.2tn gigabytes of data. This is equivalent to 75 billion 16 GB iPods. All that data has to be housed somewhere, driving the need for construction of new data centre facilities.
And as we all know, data centres are huge users of power, accounting for 2 per cent of the world’s total energy consumption. The effect is that in some locations, power use may be near the available maximum, giving little headroom for surges in demand, as experienced during spells of hot weather or peak usage.
As a result, and following recent sharp rises in energy costs, there has been a big focus on increasing efficiency. Operators have made significant progress, improving designs and introducing more heat-resistant chips able to operate at higher temperatures without suffering damage or loss of capacity.
Despite this, as a result of the constant increase in computing power and growth in the number of facilities, overall power consumption by data centres has continued to rise.
As many will know, a key measure of data centre efficiency is the PUE (Power Usage Effectiveness), calculated by dividing the total power used by a data centre by the power used to run its computer infrastructure alone. The ratio falls toward 1 as overall efficiency improves, with a PUE of 1 representing the theoretical ideal of every watt reaching the IT load.
According to data centre specialist the Uptime Institute, a ‘typical’ data centre has an average PUE of 2.5. This means that for every 2.5 watts of power at the utility meter, only one watt is delivered to the IT load.
But significantly, the Institute estimates most facilities could achieve 1.6 PUE using the most efficient equipment and best practices.
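The arithmetic behind those figures can be set out in a few lines. This is an illustrative sketch only; the megawatt values are hypothetical, chosen to match the ratios quoted above.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT load power."""
    return total_facility_kw / it_load_kw

# A 'typical' facility drawing 2.5 MW at the utility meter for 1 MW of IT load:
print(pue(2500, 1000))  # 2.5

# The same 1 MW IT load run to the Uptime Institute's achievable best practice:
print(pue(1600, 1000))  # 1.6
```

Note that PUE says nothing about how efficiently the IT load itself is used; it measures only the overhead the facility adds on top of it.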
A high proportion of the energy used by data centres – up to 50 per cent – goes into removing heat generated by the servers. There has been a big push by some major operators to develop alternative methods of cooling, to replace the power-hungry DX and chilled water-based cooling traditionally used.
In the UK, there have been well-publicised projects using very high volumes of ambient air to cool servers. Evaporative cooling is being trialled by some as a means of overcoming the need for refrigerant-based mechanical cooling, relying instead on natural adiabatic cooling driven by fan-powered air.
All approaches have pros and cons, however, and no single solution has yet been found that addresses all the issues. Naturally, this provides for a great deal of debate.
A vital requirement for many data centres is resilience and security. This is a critical issue for facilities handling sensitive government information or financial transactions. Here, the need to maintain continuity of operation is the overriding concern, and requires risks to be minimised and, if possible, eliminated.
In these cases, which represent a significant percentage of data centre facilities, tried and trusted, reliable systems are essential.
The good news is that the efficiency, resilience and security of data centres can be significantly improved with the use of the latest intelligent control and monitoring systems. These have a key role to play in reducing energy use, cutting running costs and improving the resilience of mission-critical facilities.
Results achieved by applying today’s latest control systems to other high-energy use sectors, such as food retailing and the manufacturing industry, have proven their effectiveness. Depending on the level of sophistication of the existing system, energy savings of up to 30 per cent can be achieved, delivering huge savings in running costs.
This translates into a substantial improvement in a data centre’s PUE rating. Major end-user clients of data centres are increasingly paying attention to PUE scores, as part of their own sustainability agenda and corporate social responsibility policies. A poor PUE score, perhaps for an older existing data centre, can be quickly and cost-effectively transformed by installing intelligent controls on plant such as air conditioning, fans and humidity control equipment.
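To illustrate how an energy saving feeds through to PUE, consider a sketch in which the saving falls on the non-IT overhead (cooling, fans, humidity control) while the IT load is unchanged. The 30 per cent figure and the starting PUE of 2.5 are taken from the article; treating the whole saving as overhead-side is a simplifying assumption.

```python
def pue_after_overhead_saving(pue_before: float, overhead_saving: float) -> float:
    """New PUE if a fraction of the non-IT (overhead) energy is saved.

    Normalises the IT load to 1, so overhead energy per unit of IT
    load is simply pue_before - 1.
    """
    overhead = pue_before - 1.0
    return 1.0 + overhead * (1.0 - overhead_saving)

# A 30% saving on overhead energy, starting from a 'typical' PUE of 2.5:
print(pue_after_overhead_saving(2.5, 0.30))  # 2.05
```

Under these assumptions a typical facility moves a long way toward the Uptime Institute's achievable figure of 1.6 from controls alone.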
Data centres suffer an average of 2.5 outages a year, according to a study by data protection research group the Ponemon Institute, with an average duration of 134 minutes. If that trend is extended over the global estate of more than 500,000 data centres, it works out to 2.84 million hours of data centre downtime each year.
The key is how much this downtime threatens to cost operators, directly and otherwise. The business cost of an average data centre outage is estimated at £180,000 an hour. This reinforces the value of uptime, and the importance of effective control and monitoring systems in avoiding plant breakdown.
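The extrapolation above is straightforward to check. Using exactly 500,000 facilities gives roughly 2.79 million hours; the 2.84 million quoted presumably reflects the "more than" in the estate figure. The per-outage cost is derived by combining the two averages.

```python
centres = 500_000          # lower bound on the global estate
outages_per_year = 2.5     # Ponemon Institute average
minutes_per_outage = 134   # Ponemon Institute average duration
cost_per_hour = 180_000    # estimated business cost of an outage, GBP

total_downtime_hours = centres * outages_per_year * minutes_per_outage / 60
print(f"{total_downtime_hours / 1e6:.2f} million hours a year")  # 2.79

cost_per_outage = minutes_per_outage / 60 * cost_per_hour
print(f"£{cost_per_outage:,.0f} per average outage")  # £402,000
```

At £402,000 for a single average-length outage, even one failure prevented by early fault detection can repay a control and monitoring system many times over.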
Effective controls will provide the ability to detect problems before they become a costly issue. Warning signs of gradual equipment failure, such as progressive compressor or fan faults on air conditioning, can be spotted well before failure leads to breakdown.
Preventative maintenance can be undertaken in a planned way, rather than more expensive reactive maintenance. In this way, an appropriately designed control and monitoring system puts operators back in control, by making energy use and plant performance completely transparent, giving the ability to manage problems before they result in a crisis.
Using Resource Data Management’s Data Manager system, for instance, all aspects of the operation of a data centre can be viewed from a PC or smartphone. This gives complete control and transparency. Alarms and faults can be tracked in real time, on site or remotely, and the response of local and outside service contractors automatically recorded.
This gives an immensely valuable insight, and helps data centre operators manage the day-to-day maintenance of critical plant on their sites. In some applications, cost savings from improved estate management can be on a par with those resulting from improved energy efficiency.
In the future, the global economy – indeed “life as we know it” – will increasingly depend on a secure, resilient and efficient global network of data centres.
One of the vital elements in the facilities of the future will be intelligent control and monitoring systems that put users back in control. My company aims to be at the forefront, which is why we were proud to be headline sponsor of Data Centre Question Time, where the industry can debate these challenges openly.
Steven Nicholls is sales manager for Resource Data Management