If a data center were to follow Maslow's Hierarchy of Needs, power and cooling would be at the physiological level.
Once taken for granted as much as water and air, power and cooling are now the biggest problem for data centers, according to 76 percent of the audience at "The Data Center Power and Cooling Challenge" session led by Gartner Research Vice President Michael Bell. The session was part of the research firm's IT Infrastructure & Management Summit 2007 held this week in Kissimmee, Fla.
Considering that worldwide, 50 percent of the data centers are nearly out of capacity, their concerns are well-founded.
As recently as the early '90s, the data center had a fairly modest power footprint. It has since grown by a factor of 20. Today, Bell said, much of the cost of a new data center is attributed to power and cooling.
Bell said the power demands of the four largest Internet companies are equivalent to the power demands of Las Vegas.
Google alone, for example, has more than 1 million servers, and has been steadily adding 300,000 to 400,000 per year. To meet this demand, Google has located a server facility near the Columbia River in Oregon. The location was chosen specifically because of the river's hydroelectric power capabilities.
This is not atypical. Bell noted that the most critical resource for an IT-intensive company is electricity.
While Google may be the most extreme when it comes to computing demand, Research Vice President Will Cappelli in his address, "The Convergence of Operations and Energy Management," noted a changed mindset in what constitutes capital. "It now equates to racks of servers, whereas previously it was factory smoke stacks."
This is, not surprisingly, taxing a global electricity grid whose "topology is not in the healthiest state to start with," Cappelli said.
Bell acknowledged that it's "hard to talk about power without talking about cooling," and they do have a chicken-and-egg relationship. However, power and cooling are, in fact, separate items that require different management techniques.
High energy prices and a faltering global electric grid are fueling the power issues, according to Cappelli, while increasingly dense servers are driving cooling demands upward, Bell said. Cooling increases result in higher energy bills and further tax the grid.
In some cases, according to Bell, as little as 33 percent of power actually goes into productive work. Not surprisingly, for many enterprises, the cost of electricity now exceeds the cost of the IT equipment.
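The arithmetic behind that figure is stark. A minimal sketch (the 100 kW feed and the helper function are hypothetical illustrations; only the 33 percent figure comes from Bell):

```python
# If only 33% of the power drawn does productive work, every productive
# watt implies roughly two additional watts of overhead (cooling,
# power conversion losses, idle equipment).

def overhead_kw(total_kw: float, productive_fraction: float = 0.33) -> float:
    """Return the kW lost to cooling, conversion, and idle overhead."""
    return total_kw * (1.0 - productive_fraction)

total = 100.0  # hypothetical 100 kW data center feed
print(f"Productive: {total * 0.33:.0f} kW, overhead: {overhead_kw(total):.0f} kW")
```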
And thus a vicious cycle spins, one in which easing one problem can strain the other.
Take, for example, consolidation through virtualization, which Gartner views as a short-term, tactical solution to the power and cooling quagmire. Bell cited as much as a 20 percent to 30 percent reduction in servers through virtualization. That space savings increases density, however, and greater density means more cooling is needed.
In a best-case scenario, the space saved is consumed by support equipment such as a UPS or units that supply extra cooling. A worse possibility Bell cited is "having the power to run but not cool."
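The density trade-off Bell describes is easy to see with hypothetical numbers (the server counts, wattage, and rack counts below are illustrative, not from Gartner): a 25 percent cut in servers, packed into half the racks, lowers the total draw but raises the heat load each rack's cooling must handle.

```python
# Illustrative only: consolidation cuts total power but concentrates it.

def per_rack_kw(servers: int, watts_each: float, racks: int) -> float:
    """Average heat load per rack, in kW."""
    return servers * watts_each / racks / 1000.0

before = per_rack_kw(servers=200, watts_each=400, racks=20)  # 4.0 kW/rack
after = per_rack_kw(servers=150, watts_each=400, racks=10)   # 6.0 kW/rack
print(f"Before consolidation: {before} kW/rack; after: {after} kW/rack")
```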
So one could say it boils down to power. After all, without power, there is no cooling. Cappelli emphasizes looking at this from a global perspective. The emergence of India and China as major economic powers, for example, will have a tremendous impact on global energy.
Cappelli recommends looking at energy consumption from the perspective of what is being bought and what is actually being used. Enterprises must think about end-to-end energy management by capturing, tracking and measuring data.
Few tools are available to do so, however.
Aperture VISTA is one offering that may help. VISTA is designed to enable admins to manage equipment, space, power, cooling and overall data center capacity. It delivers a complete view of power consumption, including three-phase power and power connectivity.
The tool visually recognizes 30,000 pieces of IT equipment, CEO William Clifford told ServerWatch.
When initially deployed, VISTA inventories what's on the data center floor, and on average claims "80 percent better accuracy of what's out there, even for companies that have monitoring software," Clifford said.
VISTA, however, is a reporting tool, not a monitoring tool, so enterprises will need to fill the monitoring gap with other products.
Another issue Cappelli urged enterprises to be cognizant of is the carbon footprint. It's estimated that data centers in North America represent about 4 percent of carbon usage on the continent. Considering IT purchases accounted for 50 percent of enterprise spending in 2006, that figure is expected to grow over time.
Despite cooling's recent rise in importance, little has changed in the options available. Enterprises have a choice of in-chassis cooling, in-rack cooling, in-row cooling or in-room cooling.
Most data centers will require a combination.
Bell said that classic, under-floor cooling is still recommended, but a data center with less than 15,000 square feet to cool can get away with chilling from above; beyond that size, overhead cooling is not as effective. Other commonly deployed technologies include blanking panels and sealed floors.
New solutions are coming to market to expand these options, however. SprayCool, a company that cut its teeth in the defense industry by cooling radar equipment, has an offering designed to cut heat at the source. With that comes a commensurate reduction in both cooling and energy needs.
SprayCool takes industry-standard rackmount servers and retrofits them with a liquid cooling system, Kevin Engelbert, application engineering manager, told ServerWatch. The company takes the heat sinks out of the server and replaces them with units that circulate Fluorinert through the box via tubing. Fluorinert is a liquid developed by 3M that has been used since the early '90s to clean circuit boards. It is particularly effective because it boils to remove additional heat via evaporative cooling, and it doesn't damage the equipment.
The Fluorinert-filled tubes lead to a rack-based heat exchanger from SprayCool that transfers the heat the Fluorinert has picked up to water. The cooled Fluorinert returns to the rack, and the hot water is directed out of the server room.
SprayCool has been selling to enterprises for about two years now, Engelbert said. Currently, it's compatible only with rackmount servers. Compatibility with blades and storage devices is planned, and telco-optimized and Cisco-compatible offerings are also on the company's roadmap.
Gartner's Bell described SprayCool's offering as "very efficient, very effective technology."
He also spoke highly of HP's vendor-agnostic Dynamic Smart Cooling solution: "Very promising. They've been running it in their own environment for two years now. It works ... It alone is not the answer, but it is part of the answer."
Even SprayCool is aware that its solution is only part of the answer. Engelbert told ServerWatch, "A raised floor is still needed, but it [SprayCool] eliminates the dependency."
As compelling as these solutions are, there are some hiccups. "Monitoring in real-time is a missing link at this point," Bell said.
Also, servers are but one component in the cooling ecosystem. The final link often lies outside the purview of IT.
Bridging to Facilities
Both Bell and Cappelli had several essentially cost-free tips to minimize power consumption and cooling needs. In all cases, they involve facilities. According to Cappelli, "At a minimum, closer ties must be drawn between facilities and infrastructure and operations."
Aperture's Clifford concurs. He said he believes that a best practice among CIOs is to incorporate staff from the facilities side into the team managing the data center.
Bell described the optimal data center location as a moderate climate (e.g., Minnesota or Oregon) where conditions are temperate to cool, and electricity is plentiful and inexpensive. He also advises looking at the building itself. A midlevel high-rise, for example, is not an ideal location, and the geometric shape of the building can also come into play.
The layout of the data center also has a tremendous impact, as does how the racks are configured. How equipment is deployed matters as well. Bell recommends diversifying equipment in a given location. He suggests having a "zone" provisioned with specialty cooling, to which equipment can be relocated as needed.
Bell also advises keeping a close eye on airflow dynamics and reassessing them every 18 months to two years.
For those looking to take more drastic steps, Bell recommends considering alternative energy sources. IBM's Cooling Bank, which stores energy overnight and releases it during times of high demand, is but one example. He cautions that such "technology is coming into play but likely won't take data centers off the grid."
Cappelli encourages creative thinking, such as looking at ways to harness the value of the heat generated, perhaps transferring it to warm other parts of the building in a business-efficient way.
Cappelli advises enterprises to "look at environmental legislation with a cold hard eye. It will be a significant reality within the next 18 months." He likens the various looming agreements to the Sarbanes-Oxley Act, with which enterprises had no choice but to be compliant within a very tight time frame.
The Big Picture
Thomas Bittman, vice president, distinguished analyst at Gartner, in the first keynote of the show ("Stepping Up to the Challenge: Creating the Future of Infrastructure and Operations") described power and cooling as "more of a challenge than an opportunity."
With 50 percent of current data centers nearing insufficient power and cooling capacity to meet the demands of high-density equipment, the future does indeed look bleak.
The good news is that companies are well aware of this and making changes. Gartner anticipates many improvements surfacing in the 2011 time frame. In the meantime, enterprises are leading the charge toward green because it makes their infrastructures sustainable, Bittman said.