Still Plenty of Green in the Data Center

by Arthur Cole, IT Business Edge

For many data centers, going green meant going virtual -- that is, the vast majority of power savings over the past two or three years has been the by-product of server consolidation.

While there's nothing wrong with that, it does lead to something of a conundrum as virtualization reaches its practical limits: How do you maintain energy-efficiency gains once your consolidation ratios start to encroach on service-level and reliability requirements?

The good news is that there are still plenty of areas in the data center that are ripe for significant energy-use makeovers. The bad news is that many of the most effective measures require a substantial up-front commitment, both in terms of capital costs and systems architecture.

One of the most promising is good old-fashioned power-usage management. The latest systems automation platforms not only relieve IT staff of many of the repetitive functions of data center management, but also provide real-time systems analysis to continuously determine the most energy-efficient way to distribute workloads.

"To have a viable power management system, you have to take a holistic view of the data center rather than focus on one specific part of the system," says Clemens Pfeiffer, chief technology officer at energy management firm Power Assure. "Otherwise you will make the wrong decisions.

"While most of the savings come from shutting down idle servers or putting them to sleep, there are other components that need to be factored in," he adds. "First, servers need to be taken out of allocation for each individual application. Load balancing may need to be adjusted; routing may need to be changed; site-to-site load shifting might be required; and cooling might need to be adjusted as servers are turned on or off."
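The sequence Pfeiffer describes -- drain a server from its application pools, let load balancing and routing adjust, power it down, then trim cooling to match -- can be sketched as a simple orchestration loop. Everything below (data shapes, thresholds, the idle cutoff) is hypothetical and for illustration only; it is not Power Assure's product or API.

```python
# Illustrative sketch of the shutdown sequence described above.
# All names and thresholds are assumptions, not any vendor's API.

def consolidate_idle_servers(servers, load_balancer, cooling, idle_cpu_pct=5):
    """Power down idle servers in the order Pfeiffer outlines."""
    powered_down = []
    for server in servers:
        if server["cpu_pct"] >= idle_cpu_pct or not server["drainable"]:
            continue
        # 1. Take the server out of allocation for the application.
        load_balancer["pool"].remove(server["name"])
        # 2. Load balancing adjusts to the smaller pool.
        load_balancer["weight_per_node"] = round(100 / len(load_balancer["pool"]), 1)
        # 3. Power the server off (sleep would work the same way).
        server["state"] = "off"
        powered_down.append(server["name"])
    # 4. Cooling is adjusted to the reduced heat load (0.5 F per server
    #    is an arbitrary illustrative figure).
    cooling["setpoint_f"] += 0.5 * len(powered_down)
    return powered_down

servers = [
    {"name": "web-01", "cpu_pct": 62, "drainable": True, "state": "on"},
    {"name": "web-02", "cpu_pct": 3, "drainable": True, "state": "on"},
    {"name": "web-03", "cpu_pct": 2, "drainable": True, "state": "on"},
]
lb = {"pool": ["web-01", "web-02", "web-03"]}
cooling = {"setpoint_f": 72.0}
powered = consolidate_idle_servers(servers, lb, cooling)
print(powered)      # -> ['web-02', 'web-03']
print(lb["pool"])   # -> ['web-01']
```

The point of the holistic view is visible even in this toy: the pool, the weights and the cooling setpoint all change together, not just the server's power state.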

Naturally, this kind of automation does not lend itself to commodity software products. Expect to form a very tight relationship with power management vendors like EnerNOC, which takes a customized approach to how energy is parceled out in specific data centers.

"IT and facility teams often start by looking at their peak load on the grid, since the highest load is a critical factor in monthly electricity bills and carbon footprints," says Tim Healy, chairman and CEO of EnerNOC. "(We) work with hundreds of data centers to use back-up generation during times of peak demand to alleviate stress on the grid, which has many positive benefits for the data center, the grid and the environment.
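The peak-shaving idea Healy describes -- shift to back-up generation whenever grid demand crosses a contracted peak, so the billed peak never exceeds it -- reduces to a simple threshold rule. This is a toy sketch with made-up numbers, not EnerNOC's system.

```python
# Illustrative peak-shaving logic: serve any demand above the contracted
# peak from back-up generation, capping the billed grid peak. Numbers
# are invented for the example.

def shave_peaks(hourly_grid_kw, peak_threshold_kw):
    """Return (billed grid peak in kW, hours served from back-up generation)."""
    on_generator = [h for h, kw in enumerate(hourly_grid_kw) if kw > peak_threshold_kw]
    billed = [min(kw, peak_threshold_kw) for kw in hourly_grid_kw]
    return max(billed), on_generator

demand = [820, 910, 1180, 1240, 1050, 870]  # kW over a sample afternoon
peak, gen_hours = shave_peaks(demand, peak_threshold_kw=1000)
print(peak)       # -> 1000
print(gen_hours)  # -> [2, 3, 4]
```

Since peak demand charges are billed on the single highest interval, capping three hours here lowers the billed peak from 1,240 kW to the 1,000 kW contract level.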

"The move to system-wide energy management is not dissimilar to the call for more holistic ERP deployments, which blindsided many IT departments. Executives saw a need for an enterprise-wide application, but the full ramifications for IT were not clear up front. With economic recovery on the horizon, now is a great time for IT leaders to create a bold vision for energy management across the enterprise and get ahead of the next big wave in the evolution of IT responsibility."

New Ties Between Hardware and Software Platforms


Energy management has become such an integral component of data center operations that many IT vendors are forging direct ties between hardware and software platforms and the power systems they rely on. Before it became part of Oracle, Sun Microsystems inked a deal with Emerson Network Power designed to provide custom energy management services for Sun users. The agreement provides for tighter integration between Emerson products like the Liebert cooling system and Sun's range of high-end server products.

"As the IT industry moves to blade server utilization, the need for specialized power and cooling solutions will continue," says Bob Miller, vice president of Liebert marquee accounts at Emerson Network Power. "Incorporating power and cooling allows Sun to deploy the latest generation of high-performance servers in the smallest footprint possible."

Once you get beyond an integrated power management system, there is a plethora of energy-saving opportunities at your disposal, if you know where to look. Since almost half of data center energy consumption goes toward keeping hardware cool, there is a strong financial incentive to squeeze as much efficiency out of this process as possible.
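That "almost half goes to cooling" figure maps directly onto the industry's standard PUE metric (total facility power divided by IT equipment power). A quick back-of-the-envelope calculation, with purely illustrative numbers:

```python
# Back-of-the-envelope cooling share using the standard PUE definition:
# PUE = total facility power / IT equipment power. Figures are illustrative.

it_load_kw = 500.0
cooling_kw = 450.0        # "almost half" of total consumption
other_overhead_kw = 50.0  # lighting, UPS and distribution losses, etc.

total_kw = it_load_kw + cooling_kw + other_overhead_kw
pue = total_kw / it_load_kw
cooling_share = cooling_kw / total_kw

print(round(pue, 2))            # -> 2.0
print(round(cooling_share, 2))  # -> 0.45
```

A facility like this one runs at a PUE of 2.0; every watt saved on cooling improves the ratio without touching the IT load at all.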

For many, that has led to a rethinking of data center ergonomics, with a renewed emphasis on hot-aisle/cold-aisle rack placement and the use of natural, "free" resources such as cool ambient air or nearby water sources. Naturally, these solutions favor data centers in cooler climates, such as WETA Digital's facility in Wellington, New Zealand, which uses a combination of water and air to keep the thermostat down on nearly 4,000 HP blades.

And many organizations are coming to the realization that cool does not have to mean "frigid." Most hardware today can operate comfortably in temperatures in the mid-to-high 70s (F), entering the red zone past 85 degrees or so. Be forewarned, though, that a robust temperature monitoring system should be in place if you intend to push these margins, particularly if you also hope to increase your hardware densities.
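A monitoring system for pushing those margins can be as simple as a threshold check per rack inlet. The warning and critical values below follow the ranges mentioned above (comfortable into the mid-to-high 70s F, red zone past roughly 85 F); the exact cutoffs and rack names are illustrative assumptions.

```python
# Minimal inlet-temperature classification reflecting the margins above:
# comfortable into the high 70s F, red zone past roughly 85 F.
# Thresholds and rack names are illustrative.

WARN_F, CRITICAL_F = 78.0, 85.0

def classify(inlet_f):
    if inlet_f > CRITICAL_F:
        return "critical"   # red zone: throttle or shed load
    if inlet_f > WARN_F:
        return "warning"    # still safe, but watch the trend
    return "ok"

readings = {"rack-a1": 74.5, "rack-a2": 79.2, "rack-b1": 86.1}
status = {rack: classify(t) for rack, t in readings.items()}
print(status)
# -> {'rack-a1': 'ok', 'rack-a2': 'warning', 'rack-b1': 'critical'}
```

The higher the rack density, the narrower the gap between "warning" and "critical" in practice, which is why monitoring has to come before the setpoint is raised.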

Also be aware that some energy-efficiency gains will come about simply as a result of normal refresh cycles. Just about every piece of hardware on the market, from major server, storage and networking platforms right down to the processor, is being engineered these days with low-power operation in mind. And new configurations of seemingly disparate components are allowing data centers to cut down on the amount of equipment required to perform advanced functions. This is especially true on the LAN, where convergence onto Ethernet platforms is doing away with much of the redundancy of individual storage, data and even voice architectures.

"By moving to a converged 10G fabric, the customer will see a much better price/performance and realize savings in power and reduced management overhead," says Graham Smith, director of product management at BLADE Network Technologies. "Most customers today are using 4 Gb FC and 1 Gb Ethernet adapters for their SAN and LAN connectivity. Consolidating to 10Gb Ethernet fabrics will provide an increase in available bandwidth, while reducing the number of adapters, switches, cables and management overhead."
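The savings Smith describes come largely from simple subtraction: converged network adapters replace separate FC and Ethernet ports. A rough per-rack count, with assumed (illustrative) server and port numbers:

```python
# Illustrative per-rack count of what converging SAN and LAN onto a single
# 10Gb Ethernet fabric saves. Server and port counts are assumptions.

servers_per_rack = 20
before = {"fc_hbas": 2, "gbe_nics": 2}   # redundant 4Gb FC + 1Gb Ethernet
after = {"converged_10g_ports": 2}       # redundant pair of converged adapters

ports_before = servers_per_rack * sum(before.values())
ports_after = servers_per_rack * sum(after.values())
print(ports_before, ports_after)  # -> 80 40
print(f"{100 * (1 - ports_after / ports_before):.0f}% fewer adapter ports and cables")
# -> 50% fewer adapter ports and cables
```

Half the ports means half the cables and correspondingly fewer switch ports to power and manage, which is where the power and operational savings accumulate.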

One thing is clear: None of this will happen without clear direction from top management. Without a well-conceived plan delineating short-, medium- and long-term goals, energy efficiency measures in the data center will be piecemeal at best and counter-productive at worst.

But with dramatically lower capital and operating costs as the payoff, it soon becomes clear that energy efficiency is well worth the effort.


This article was originally published on Monday May 10th 2010