Still in the dark about cutting data centre energy costs?

21 August 2014

Experts point out that there is no point in using more efficient hardware without having any energy management software systems in place.

Developing an energy efficient data centre means creating the right balance between resilience, compliance and performance, as RAHIEL NASIR finds out.

On 1 July 2014, the Climate Change Agreement (CCA) for the UK data centre industry came into force. The government has negotiated CCAs with specific industry sectors, giving them reductions in some carbon taxes if they meet set energy efficiency targets. To qualify under the new CCA, data centre (DC) operators will have to make a 30 per cent reduction in non-IT energy consumption between 2011 and 2020.

The agreement for the industry comes after organisations such as techUK campaigned for the government to apply a more “intelligent” approach to improving energy efficiency without penalising growth in a globally important sector. TelecityGroup is a member of techUK’s Data Centre Council and was involved in the CCA’s development from the outset. Group MD Rob Coupland says: “The CCA will give operators and investors greater certainty regarding the way that policy instruments are applied to the sector. This should lead to better confidence when planning investment strategies and expansion programmes which can deliver carbon reductions whilst encouraging growth.”

While the agreement has been broadly welcomed, some experts point out that efficiencies must always be balanced with resilience. “The main challenge facing all DC operators is that uptime remains the most critical consideration when designing their infrastructure,” says Robert Williams, technical director with Custodian Data Centre. “With the new CCA enabling DCs to implement energy efficiencies with the support of the government, one of the challenges is to ensure that they continue to make technologically sound decisions rather than adopt incentivised changes at the risk of compromising uptime.”

Williams says that there is a difference between being able to build an energy efficient DC from scratch and tackling energy efficiency retrospectively in a legacy centre. “One of our challenges as a colocation facility is to educate our clients to utilise our data floors optimally, taking advantage of our design. The other challenge is that every rack has varying power requirements, meaning that solutions need to be flexible and adaptable.”

Ian Bitterlin, CTO of Emerson Network Power EMEA, echoes this view and adds: “The most important area for energy saving is in the ICT hardware itself which remains under-utilised in many DCs and, critically, consumes too high a proportion of full load current when at low IT load.”

Reducing equipment energy consumption certainly sounds like a no-brainer. But according to Raritan, the key issue is not so much minimising the use of power as maximising efficiency, so that the limited available power can be used for more revenue-earning racks and servers. “In other words, making the kilowatts go further,” says Richard May, technical manager for PowerIQ energy and risk control, Raritan UK. “This drives the need to maximise overall data centre efficiency.”

What’s clear therefore is that data centre operators need to adopt a holistic approach to energy efficiency. And as well as the new CCA, they will also have to take into account other considerations, such as EU directive 2012/27/EU, which comes into force in 2016 and requires companies in all member states to undergo an energy audit to ensure they are reducing power consumption.

Chilled out

So how are the data centre operators performing this tricky act of balancing efficiency, resilience and compliance? The cost of powering cooling systems typically represents a big chunk of a DC’s energy bill, and many have made improvements through technologies such as fresh-air cooling. 

For example, Daisy Group runs a number of centres including one in Manchester where it has implemented cold aisle containment. It says this has allowed it to reduce its cooling and subsequent power usage by more than 20 per cent. 

The firm also has a facility in London that was built in the mid-1990s. But rather than go through the expense of trying to re-engineer this to be more energy efficient, Daisy decided it would be more cost effective to acquire an alternative site, and has spent the last 12 months migrating its London operation to the new premises. Chris Smith, the company’s head of data centre services, says: “Our new North London data centre has a design PUE (power usage effectiveness) of less than 1.5 and is powered by 100 per cent renewable sources. The cooling is supplied by intelligent free air chillers which, if the outside ambient temperature is less than 13°C, can offer 100 per cent free cooling.”
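
For readers unfamiliar with the metric, PUE is simply the ratio of total facility power to the power delivered to the IT equipment, so a design figure of 1.5 means that for every watt reaching the IT load, another half a watt goes on cooling, distribution losses and other overheads. A minimal sketch of the calculation, using illustrative figures rather than any operator’s actual measurements:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT load power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Illustrative figures only: a 1,000kW IT load with 500kW of cooling,
# UPS losses and other overheads gives a PUE of 1.5.
print(pue(total_facility_kw=1500, it_load_kw=1000))   # 1.5

# The same IT load with only 150kW of overhead gives 1.15, the sort of
# figure Custodian claims to achieve.
print(pue(total_facility_kw=1150, it_load_kw=1000))   # 1.15
```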

Custodian reckons it has Daisy beat when it comes to PUE. The firm says it was an early proponent of fresh air cooling and cold aisle containment, and took local weather metrics and data into consideration when designing its data centre in Kent.

“Additionally, our cooling system supplies the correct amount of cooling via tuneable, high capacity overhead vents,” says Williams. “For most of the year, the system runs solely on UPS backed fans, with chillers used to ‘clip’ the temperature. We have a highly resilient yet energy efficient cooling system where air supply can be varied down to individual rack level.” He goes on to claim that the DC regularly achieves PUE figures of below 1.15. 

Some advise caution when it comes to PUE ratings. Steve Hall, director of Ark Data Centres, says: “Data rooms come in all sorts of different sizes, and the less forward-thinking data centre providers quote their PUE on just that room. This is confusing, and dare I say misleading, for customers.”

Hall says that when Ark calculates PUE, it measures the power usage of everything the customer uses and will be billed for. “Literally the whole building. Because if a customer is paying for part of it, then it’s part of their PUE overhead. The next step change with PUE is whether or not it’s something your data centre provider will be prepared to guarantee. We do. It gives us a good measure of how everything is performing and in turn means that we can pass savings directly onto our customers.”

Hall says most companies tend to run at a building PUE of 2.5 or higher. A building PUE of 1.25 or less means there is potential to achieve savings of around £1.1m per megawatt per year. “From an environmental perspective, that’s 6000 tonnes of carbon that you could potentially be taxed on.”
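
Hall’s figures can be sanity-checked with some rough arithmetic. Assuming a 1MW IT load running year-round, an electricity price of around 10p per kWh and a grid carbon factor of roughly 0.5kg of CO2 per kWh (all assumptions for illustration, not Ark’s numbers), the gap between a building PUE of 2.5 and one of 1.25 comes out in the same ballpark:

```python
IT_LOAD_KW = 1000          # 1MW of IT load
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10       # assumed £/kWh, not Ark's tariff
CARBON_KG_PER_KWH = 0.5    # assumed grid carbon factor

def annual_facility_kwh(pue: float) -> float:
    """Total annual energy for the whole building at a given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR

saving_kwh = annual_facility_kwh(2.5) - annual_facility_kwh(1.25)
print(f"Energy saved:  {saving_kwh / 1000:,.0f} MWh/year")           # ~10,950 MWh
print(f"Cost saved:    £{saving_kwh * PRICE_PER_KWH:,.0f}/year")     # ~£1.1m
print(f"Carbon saved:  {saving_kwh * CARBON_KG_PER_KWH / 1000:,.0f} tonnes CO2")  # ~5,500 tonnes
```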

When it comes to efficiency, the UPS is now playing a more prominent role. For example, Daisy says it also installed a new UPS system at its Manchester centre to improve the efficiency of the power infrastructure by more than 20 per cent. But in the words of Alan Beresford, technical and managing director of evaporative cooling specialist EcoCooling, some UPS systems have “horrendous losses”, all of which end up as waste heat. He points out that this needs to be extracted by the cooling system. And so reductions in UPS losses not only cut the UPS load but also the cooling load.

This is certainly one area that will continue to see improvement, according to Alan Luscombe, sales director of Uninterruptible Power Supplies Ltd (UPSL). He says that significant gains have already been made with UPS devices, such as when transformerless, modular technology was introduced a decade ago. “The most recent step change came from the introduction of ‘ECO’ mode, which can push efficiency to 99 per cent or above. Our own calculations show that operating a 1MVA system in ECO mode could save an additional £11,000 in a twelve month period, compared to operating the system in true online double conversion mode.”
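
The order of magnitude of that saving can be reproduced with a simple loss calculation. The efficiencies, load level and tariff below are assumptions chosen for illustration rather than UPSL’s own figures, but they land close to the £11,000 quoted:

```python
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10       # assumed tariff, not UPSL's figure

def annual_ups_losses_kwh(load_kw: float, efficiency: float) -> float:
    """Energy dissipated by the UPS itself over a year at a steady load."""
    return load_kw * (1 / efficiency - 1) * HOURS_PER_YEAR

load_kw = 450   # assume a 1MVA UPS at 0.9 power factor, running at 50 per cent load

double_conversion = annual_ups_losses_kwh(load_kw, efficiency=0.96)
eco_mode = annual_ups_losses_kwh(load_kw, efficiency=0.99)

saving = (double_conversion - eco_mode) * PRICE_PER_KWH
print(f"Annual saving from ECO mode: £{saving:,.0f}")   # roughly £12,000 on these assumptions
```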

Luscombe goes on to say that the latest modular UPS products offer the opportunity to ‘rightsize’ the equipment to the current load, so you are not wasting energy on an oversized system – something that was common ten years ago. He says rightsizing allows DCs to add additional modules to their racks as needed, so they never pay for capacity they’re not using. “Historically, the UPS had to be run close to capacity to get the best out of the system. Now, in line with the user’s actual requirements, the highest possible efficiency can be achieved even if the system is only running at 25-50 per cent of capacity. That has made a tremendous difference to energy consumption.

“It also means redundancy is available in an N+1 formation, rather than N+N. Before, if your load was 1MVA, you would have two systems, both 1MVA (N+N), with one unit simply for redundancy. Nowadays that seems insane as you were never operating at more than 50 per cent load. The latest modular UPS systems enable you to divide the load requirement into modules of up to 100kVA each. So a 1MVA load can be serviced with eleven 100kVA modules, significantly decreasing energy consumption and improving efficiency.”
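
The arithmetic behind that example is straightforward: divide the load into modules, add one spare, and compare the installed capacity with the old approach of duplicating an entire system. A brief sketch, assuming 100kVA modules as in Luscombe’s example:

```python
import math

def modules_required(load_kva: float, module_kva: float = 100, redundant: int = 1) -> int:
    """N+1 module count: enough modules to carry the load, plus spares."""
    return math.ceil(load_kva / module_kva) + redundant

load_kva = 1000   # 1MVA load, as in Luscombe's example

n_plus_1 = modules_required(load_kva)   # 10 modules to carry the load, plus 1 spare = 11
n_plus_n_kva = 2 * load_kva             # the old 2N approach: two full 1MVA systems

print(f"N+1 installed capacity: {n_plus_1 * 100} kVA ({n_plus_1} x 100kVA modules)")
print(f"2N installed capacity:  {n_plus_n_kva} kVA")
print(f"Load shared across N+1: {load_kva / n_plus_1:.0f} kVA per module, "
      f"so each runs near its efficient operating point")
```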

Watching the wattage 

There is no point in using more efficient hardware without having any energy management software systems in place. As colo and connectivity specialist C4L says, it’s crucial to understand where you are not being efficient so that you can capture and correct any problems. 

At its DC in Bournemouth, the firm says it has the ability to monitor many points along its infrastructure, from the individual rack IT load all the way back to the M&E load. But DC manager Phil Wrightson adds that it is very difficult for colo facilities to monitor their customers’ power usage beyond the individual racks themselves. “However, for server infrastructure owners, if not all systems are required or at full load, then using management systems to power down unused equipment is definitely recommended.”
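
As a crude illustration of the kind of check Wrightson describes, a management system might simply flag servers whose measured draw suggests they are idle. The readings, names and threshold below are invented for illustration; a real deployment would pull live data from PDU or DCIM interfaces:

```python
# Hypothetical sketch: flag servers whose power draw suggests they are idle,
# so they can be consolidated or powered down. Figures are invented.
rack_readings_watts = {
    "rack01-srv03": 412,
    "rack01-srv07": 68,    # drawing little more than idle power
    "rack02-srv01": 355,
    "rack02-srv09": 61,
}

IDLE_THRESHOLD_WATTS = 80   # assumed idle ceiling; varies by server model

candidates = [name for name, watts in rack_readings_watts.items()
              if watts < IDLE_THRESHOLD_WATTS]

print("Candidates for power-down or consolidation:", candidates)
```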

Emerson’s Bitterlin agrees here. He says DC managers have traditionally focused on improving the power supply and cooling to determine energy saving solutions. 

But citing recent research from Jisc (formerly the Joint Information Systems Committee), he notes: “With server equipment consuming about 30-40 per cent of the total energy used in data centres, it is one of the primary areas to implement effective energy saving measures.” 

Bespoke energy management solutions are therefore essential and should form part of any strategic plan. After all, if you don’t know what your energy use is, you can’t manage it. Having said that, it would be hard to refute May when he says: “No amount of high-efficiency hardware will make up for poorly engineered deployment.”

He says most (but not all) DCs now have some form of energy and environmental measurement in place. But often, the human operators are overwhelmed by the amount of data, and so the control system envisaged doesn’t materialise in practice. 

“The next step is to introduce systems (software assisted or otherwise) to enable this data to be simplified so it is in a format that allows humans to take both real-time and longer-term decisions based on it. In helping DC clients to simplify and analyse their data, we often find that we need to adjust the granularity and the way the info is presented to match their business model. Colos, enterprises, banks, etc, are all very different in their info needs, and no two DCs have the same requirement.” 

Leo Craig, GM of UPS specialist Riello, supports this view. But he also says that not everything needs to be monitored: “If you monitor everything, you’ll end up with information overload. Things like rack temperatures at the top, middle and bottom, the power draw, and the air conditioning return temperature are good starting points for monitoring. But whatever is monitored needs to have value and meaning. If you can’t change it, there’s no need to monitor it.”
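
Craig’s shortlist translates naturally into a small set of monitored points with alert thresholds. The values below are placeholders rather than Riello recommendations:

```python
# Placeholder warning thresholds for the monitoring points Craig suggests as a starting set.
monitoring_points = {
    "rack_temp_top_c":    {"warn_above": 32},
    "rack_temp_middle_c": {"warn_above": 30},
    "rack_temp_bottom_c": {"warn_above": 27},
    "rack_power_draw_kw": {"warn_above": 6.0},
    "crac_return_temp_c": {"warn_above": 28},
}

def check(readings: dict) -> list:
    """Return the points whose readings breach their warning thresholds."""
    return [p for p, r in readings.items()
            if p in monitoring_points and r > monitoring_points[p]["warn_above"]]

print(check({"rack_temp_top_c": 34, "rack_power_draw_kw": 4.2}))   # ['rack_temp_top_c']
```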

Craig goes on to emphasise that it’s important for DC managers to see how data suites are managed to make sure they don’t have “zombie” servers lying dormant. And where possible, it is also worth looking at where they run processes and try to make sure that the overall power draw is more even. “For example, many processes are run overnight which means that there are huge energy peaks which will impact on the cooling and air conditioning. Instead, it is recommended to stagger the processes to flatten this peak out and ensure a more even power usage.”
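
The effect Craig describes is easy to see with a toy schedule: running every batch job at midnight stacks the extra draw into a single peak that the cooling plant has to absorb, whereas staggering the same jobs spreads it out. The job list and figures are invented for illustration:

```python
# Invented batch jobs: (name, duration in hours, average extra draw in kW)
jobs = [("backup", 2, 40), ("reporting", 2, 25), ("etl", 2, 35)]

def peak_draw(start_hours: list) -> int:
    """Peak additional power draw (kW) over a 24-hour day for the given job start times."""
    hourly = [0] * 24
    for (name, duration, kw), start in zip(jobs, start_hours):
        for h in range(start, start + duration):
            hourly[h % 24] += kw
    return max(hourly)

print("All jobs at midnight:", peak_draw([0, 0, 0]), "kW")   # 100kW peak
print("Staggered overnight: ", peak_draw([0, 2, 4]), "kW")   # 40kW peak
```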

In addition, he advises DC managers to implement a strict regime such as making sure blanking plates are in place to ensure efficient cooling and airflow. “This can save up to 20 per cent on cooling requirements. Measures like these do not incur great costs, but the savings on offer are huge. A lot of small to medium-size data centres are still not implementing the basics.”

Diminishing returns

Is there a limit to what DC operators and vendors can do when it comes to deploying and developing more energy efficient equipment? C4L says the only real limitation is how much budget you have to implement certain features as they come out. But when it comes to the data floor, Wrightson agrees that there is a finite saving in efficiency, and that the industry is not “far off” from reaching this.

It is therefore generally agreed that while data centres can always become more efficient, the savings to be had are becoming smaller with each incremental improvement. And it will always be a balancing act between absolute efficiency versus reliability. 

EcoCooling’s Beresford believes that once you get below a PUE of 1.1 – by using good electrical distribution, efficient and well-balanced UPS, and fresh air/evaporative cooling – then there is a limit to what else can be developed. He reckons that a PUE much lower than that is approaching the practical floor.

So what about UPS systems – how does the law of diminishing returns apply here? UPSL’s Luscombe says: “Running a UPS in ECO mode can deliver efficiencies of 99 per cent or above, so it is clear that there will be a limit to the further gains that can be made in this area.”

Custodian’s Williams adds: “Single UPS facilities can run at higher loads (around 90 per cent) whereas dual feed facilities can only ever load their UPS systems to a maximum of 50 per cent. To improve efficiency in this scenario, you would lose resilience.”

That said, Luscombe says UPS systems certainly haven’t reached the “end of the development road”, given ongoing work to reduce footprints and cooling requirements. And while the ‘greening’ of IT tends to be all about power and energy usage, there are other aspects of the infrastructure that can be made more environmentally-friendly.

For instance, Wrightson says small changes such as using Twinaxial (or ‘Twinax’) cables instead of Cat6 can make “substantial” differences over a large infrastructure: “For example, Twinax could use only around 0.1 watts per transceiver in comparison to Cat6 which is usually 4-8 watts per port. On its own that’s a small saving. But over a few thousand servers that is a substantial difference.”
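
Taking Wrightson’s per-port figures at face value – roughly 0.1W for Twinax against 4-8W for Cat6 – and assuming, purely for illustration, an estate of 5,000 connected servers, the aggregate saving is easy to estimate:

```python
HOURS_PER_YEAR = 8760

twinax_w_per_port = 0.1
cat6_w_per_port = 6.0       # midpoint of the 4-8W range quoted
servers = 5000              # assumed estate size, for illustration only

saving_kw = (cat6_w_per_port - twinax_w_per_port) * servers / 1000
saving_mwh = saving_kw * HOURS_PER_YEAR / 1000

print(f"Continuous saving: {saving_kw:.1f} kW")         # ~29.5kW before any cooling overhead
print(f"Annual saving:     {saving_mwh:.0f} MWh/year")  # ~258 MWh
```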

Much has already been written about how virtualisation and cloud-based technologies can reduce the number of physical assets used to operate IT systems, and other energy saving measures also include consolidation of data centres and server rooms. Raritan says another area that can also contribute significantly is the use of remote access and management technologies so that engineer visits to the data centre are minimised. “Having a central pool of experts and cutting out engineer travel can save a very significant CO2 load. And the work we do with clients on risk reduction avoids unnecessary outages and all of the disruption and engineer time needed too,” claims May.

Energy efficiency continues to be a key trend in the data centre power market as operators look to manage rising energy bills. But it should not be the sole consideration for those engaged in data centre design and build, advises Riello’s Craig: “In the long term, only a solution that offers the right combination of efficiency, resilience and total cost of ownership will provide the best results.”