
Delivering Data Center Efficiency

Oct. 20, 2010
How to minimize energy consumption and enhance efficiency in a data center without compromising performance

The data center as we know it today started to take shape as the dot-com bubble expanded in the late 1990s. Growth slowed when the bubble burst, but by 2003 the pace of change was accelerating again as IT organizations scrambled to meet demand for computing and expectations for 24/7 availability.

At the same time, a new issue was emerging: energy consumption. The Uptime Institute, a consortium of companies devoted to maximizing efficiency and uptime in data centers and IT organizations, released the “Revolutionizing Data Center Energy Efficiency” report in 2008. In it, the group reported that data center energy use doubled between 2000 and 2006 and predicted it would double again by 2012. With this in mind, the industry started to turn its attention to reducing energy consumption.

Those efforts ramped up in the second half of 2008 as the U.S. economy entered a deep recession, and companies were forced to find ways to reduce spending. IT organizations began to look seriously at energy efficiency in terms of cost savings as well as environmental responsibility. This is reflected in survey data compiled by the Data Center Users’ Group (DCUG), a group of approximately 2,000 influential data center, IT, and facility managers founded by Emerson Network Power. DCUG members surveyed in 2005 did not include energy efficiency in their top five data center concerns. In spring of 2008, efficiency made the list at No. 5. By spring of 2009, efficiency had moved to the second position.

Enhance efficiency without compromising availability

The challenge for data centers is to maintain or improve availability in increasingly dense computing environments while reducing costs and increasing efficiency. In 2008 and 2009, a number of companies experienced well-publicized data center outages, including Rackspace, Google, Equinix, PayPal, and Air New Zealand. Outages such as these can disrupt services, cause data loss, and cost companies hundreds of thousands of dollars.

In the wake of those outages, respondents to the fall 2009 DCUG survey showed a renewed respect for availability. In fact, it jumped from the fourth most important concern just six months earlier to the No. 1 concern, the first time it had occupied the top spot since the DCUG formed in 2004. The likely reason again is economic: One significant outage can be so costly that it wipes out years of savings achieved through incremental efficiency improvements.

To meet the sometimes conflicting objectives of reducing costs and increasing availability, data center management must enter a new stage of maturity. This can be accomplished by establishing data center infrastructures that leverage four distinct opportunities to enhance efficiency without compromising availability — opportunities that will drive data center infrastructure design and management in the years to come.

1. High-density design

Data centers are already moving toward high-density computing environments as newer, denser servers are deployed. Moving to a high-density computing environment, however, requires a different approach to infrastructure design, including:

High-density cooling: Placing high-efficiency cooling units closer to the source of heat (e.g., near the rack) complements the base room air-conditioning system. These systems can reduce cooling power consumption by as much as 32% compared to traditional room-only designs. High-density cooling has become a basic building block of the data center of the future, meeting the needs of today’s 10kW, 20kW, and 30kW racks while supporting densities of 60kW or higher in the future.

Intelligent aisle containment: The efficient and well-established practice of arranging racks in alternating hot and cold aisles sets the stage for a further step, containment. Aisle containment prevents hot and cold air from mixing, which improves cooling efficiency. Both hot-aisle and cold-aisle containment systems are available.

High-density power distribution: Power distribution has evolved from single-stage to two-stage designs to enable increased density, reduced cabling, and more effective use of data center space. Single-stage distribution is often unable to support the number of devices in today’s data center because breaker space is expended long before system capacity is reached. Two-stage distribution eliminates this limitation by separating deliverable capacity and physical distribution capability into subsystems.

The first stage receives high-voltage power from the UPS and can be configured with a mix of circuit- and branch-level distribution breakers. The second stage, or load-level units, can be tailored to the requirements of specific racks or rows. Growing density can be supported by adding breakers to the primary distribution unit and deploying additional load-level units.
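To make the contrast concrete, here is a minimal Python sketch of the capacity-versus-breaker-space trade-off described above; the unit capacities, breaker pole counts, and 5 kVA rack circuits are assumptions chosen for illustration, not figures from any particular product.

```python
from dataclasses import dataclass, field

# Illustrative model of single-stage vs. two-stage power distribution.
# All capacities, pole counts, and circuit sizes are assumed for the example.

@dataclass
class DistributionUnit:
    name: str
    capacity_kva: float          # deliverable capacity of the unit
    breaker_positions: int       # physical breaker positions available
    loads_kva: list = field(default_factory=list)

    def add_load(self, kva: float) -> bool:
        """Add a downstream circuit if both capacity and a breaker position remain."""
        if len(self.loads_kva) >= self.breaker_positions:
            return False                      # out of breaker space
        if sum(self.loads_kva) + kva > self.capacity_kva:
            return False                      # out of deliverable capacity
        self.loads_kva.append(kva)
        return True

# Single-stage: every 5 kVA rack circuit lands on the main board directly.
single = DistributionUnit("single-stage PDU", capacity_kva=500, breaker_positions=42)
racks_single = 0
while single.add_load(5):
    racks_single += 1

# Two-stage: the primary unit feeds a few load-level units; rack circuits land on those.
primary = DistributionUnit("primary unit", capacity_kva=500, breaker_positions=12)
racks_two_stage = 0
while primary.add_load(100):                  # each feeder serves one load-level unit
    load_unit = DistributionUnit("load-level unit", capacity_kva=100, breaker_positions=24)
    while load_unit.add_load(5):
        racks_two_stage += 1

print(f"Single-stage supports {racks_single} rack circuits "
      f"({racks_single * 5} of {single.capacity_kva} kVA used)")
print(f"Two-stage supports {racks_two_stage} rack circuits")
```

Under these assumed numbers, the single-stage unit runs out of breaker positions with most of its kVA unused, while the two-stage arrangement reaches its full deliverable capacity.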

2. Ensuring availability

As mentioned earlier, a number of high-profile data center outages refocused businesses on the importance of availability. In the fall 2009 DCUG survey, availability was a major concern for 56% of respondents versus just 41% in the spring 2009 survey. Because a large percentage of outages are triggered by electrical or thermal issues, the challenge is to capture the efficiency gains available in power and cooling while respecting the criticality of the IT load and its availability requirements. Some of the choices to be made and the potential trade-offs between efficiency and availability include:

Uninterruptible power supply: Data center managers should consider both the power topology and availability requirements when selecting a UPS. Several topologies are available in the market today: standby, line-interactive, double conversion, and delta conversion. Double-conversion and delta-conversion UPSs are commonly used in larger data center facilities.

Economization: Economizers, which use outside air to reduce work required by the cooling system, can be an effective approach to lowering energy consumption if they are properly applied. Two base methods exist: air side and water side. Water-side economization allows organizations to achieve the benefits of economization without the risks of contaminants presented by air-side approaches. All approaches have pros and cons. Data center professionals should discuss the appropriate applications with local experts.
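For illustration, the sketch below shows the kind of simple decision a water-side economizer control might make, comparing achievable water temperature against the required setpoint; the setpoint, approach temperatures, and sample weather data are all assumptions, not figures from any specific site or product.

```python
# Simplified water-side economizer availability check, for illustration only.
# Setpoint, approach temperatures, and the sample weather data are assumed.

CHILLED_WATER_SETPOINT_F = 50.0   # temperature the cooling system must supply
TOWER_APPROACH_F = 7.0            # assumed cooling-tower approach to wet bulb
HX_APPROACH_F = 3.0               # assumed plate heat-exchanger approach

def economizer_available(outdoor_wet_bulb_f: float) -> bool:
    """True if outdoor conditions can carry the load without running chillers."""
    achievable_water_temp = outdoor_wet_bulb_f + TOWER_APPROACH_F + HX_APPROACH_F
    return achievable_water_temp <= CHILLED_WATER_SETPOINT_F

# Hypothetical hourly wet-bulb readings for one winter day (degrees F).
sample_hours = [28, 27, 26, 26, 25, 27, 30, 34, 38, 41, 43, 44,
                45, 44, 42, 40, 38, 36, 34, 33, 32, 31, 30, 29]

free_cooling_hours = sum(economizer_available(t) for t in sample_hours)
print(f"Economizer hours in sample day: {free_cooling_hours} of {len(sample_hours)}")
```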

Service: A proactive view of service and preventive maintenance in the data center can deliver additional efficiencies. Making business decisions with the goal of minimizing service-related expenses may reduce spending up-front, but it can increase life cycle costs. Meanwhile, establishing and following a comprehensive service and preventive maintenance program can extend the life cycle of IT equipment and delay major capital investments.

An Emerson Network Power study of the impact of preventive maintenance on UPS reliability revealed that the mean time between failures (MTBF) for units that received two preventive maintenance service visits per year is 23 times better than that of units receiving no preventive maintenance visits. According to the study, reliability continued to increase steadily with additional visits when those visits were conducted by highly trained engineers.
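To put that ratio in perspective, the short sketch below converts an assumed baseline MTBF into expected failures per year with and without the reported 23x improvement; the baseline MTBF value is hypothetical and chosen only to show the arithmetic.

```python
# Rough effect of the reported 23x MTBF improvement on expected failures.
# The baseline MTBF is a hypothetical value, not a figure from the study.

HOURS_PER_YEAR = 8760
baseline_mtbf_hours = 50_000                      # assumed MTBF with no preventive maintenance
improved_mtbf_hours = baseline_mtbf_hours * 23    # applying the cited 23x ratio

for label, mtbf in [("no PM visits", baseline_mtbf_hours),
                    ("two PM visits per year", improved_mtbf_hours)]:
    expected_failures_per_year = HOURS_PER_YEAR / mtbf
    print(f"{label}: MTBF {mtbf:,} h -> "
          f"{expected_failures_per_year:.3f} expected failures per year")
```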

3. Providing flexible support

IT demand can swing with everything from holiday buying seasons and Wall Street activity to strategic organizational changes and new applications. Responding to those swings without compromising efficiency requires infrastructure technologies capable of dynamically adapting to short-term changes while providing the scalability to support long-term growth. Previous generations of infrastructure systems were unable to adjust to variations in load. Cooling systems had to operate at full capacity all the time, regardless of actual load demands. UPS systems, meanwhile, operated most efficiently at full load, yet full-load operation is the exception rather than the norm. This lack of flexibility in the power and cooling systems led to inherent energy inefficiency.

Today, technologies enable the infrastructure to adapt to those changes. Where previous generation data centers were unable to achieve optimum efficiency at anything less than full load, today’s facilities can take full advantage of these innovative technologies to match the data center’s power and cooling needs more precisely, regardless of the load demands and operating conditions.

Cooling systems: Newer data center cooling technologies can adapt to change and deliver high efficiency at reduced loads. For example, variable-speed drive fans allow fan speed and power draw to be increased or reduced to match the load, resulting in fan energy savings of 50% or more.
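The savings follow from the fan affinity laws, under which fan power varies roughly with the cube of fan speed; the brief sketch below illustrates the relationship, with the roughly 80% speed point corresponding to about half the fan power.

```python
# Fan affinity law: power varies approximately with the cube of fan speed.
# Speeds are expressed as a fraction of full speed; values are illustrative.

def fan_power_fraction(speed_fraction: float) -> float:
    """Approximate fan power draw as a fraction of full-speed power."""
    return speed_fraction ** 3

for speed in (1.0, 0.9, 0.8, 0.7, 0.6):
    power = fan_power_fraction(speed)
    savings = 1.0 - power
    print(f"{speed:.0%} speed -> {power:.0%} power ({savings:.0%} fan energy savings)")
```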

Power systems: New power system designs deliver improved component performance at 40% to 60% load compared to full load. Efficiency curves that once rose steadily with load have effectively been flattened, so peak efficiency can now be achieved in the 40% to 50% load range where many systems actually operate. Scalable UPS solutions also allow data center managers to add capacity when needed.

Distribution systems: In the power distribution system, new TP1-rated transformers are more efficient at half load than they are at full load while pushing overall efficiency above 98%. In-rack PDUs allow rack power distribution systems to adapt to changing technology requirements. They also provide monitoring at the receptacle level to give data center and IT managers the ability to proactively manage changes.
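As a minimal sketch of how receptacle-level monitoring supports proactive change management, the following example rolls receptacle readings up to a rack-level capacity view; the PDU layout, circuit limit, and readings are invented for illustration.

```python
# Rolling receptacle-level power readings up to a rack-level view.
# The rack layout, branch circuit rating, and readings are hypothetical.

BRANCH_CIRCUIT_LIMIT_W = 5000   # assumed usable capacity of the rack's feed

# Hypothetical receptacle-level readings (watts) from an in-rack PDU.
receptacle_readings_w = {
    "A1": 310, "A2": 295, "A3": 0,    # A3 powered off / unused
    "A4": 480, "A5": 455, "A6": 330,
    "B1": 410, "B2": 390, "B3": 275,
}

rack_total_w = sum(receptacle_readings_w.values())
headroom_w = BRANCH_CIRCUIT_LIMIT_W - rack_total_w
utilization = rack_total_w / BRANCH_CIRCUIT_LIMIT_W

print(f"Rack draw: {rack_total_w} W ({utilization:.0%} of capacity)")
print(f"Headroom before the next deployment exceeds the circuit: {headroom_w} W")

# Flag receptacles drawing nothing -- candidates for reclaiming stranded outlets.
idle = [name for name, watts in receptacle_readings_w.items() if watts == 0]
print(f"Idle receptacles: {idle}")
```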

4. Visibility and control enable optimization

If you can’t monitor and control infrastructure performance, you can’t improve it. Management systems that provide a holistic view of the entire data center are key to ensuring availability, improving efficiency, planning for the future, and managing change. Today’s data center supports more critical, interdependent devices and IT systems in higher density environments than ever before. This fact has increased the complexity of data center management and created the need for more sophisticated, automated approaches to IT infrastructure management.

Gaining control of the infrastructure environment leads to an optimized data center that improves availability and energy efficiency, extends equipment life, proactively manages the inventory and capacity of the IT operation, increases the effectiveness of staff, and decreases the consumption of resources. The key to achieving these performance optimization benefits is a comprehensive infrastructure management solution.

Infrastructure management typically progresses through phases. The first phase should involve a data center assessment to provide insight into current conditions in the data center and opportunities for improvement. After establishing that baseline, a sensor network is strategically deployed to collect power, temperature, and equipment status for critical devices at the rack, row, and room level. Centralized monitoring systems continuously collect data from the sensor network, providing a window into equipment and facility performance while also revealing trends and helping prevent problems wherever they may occur.
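The sketch below illustrates, in simplified form, the kind of threshold and trend check a centralized monitoring layer can perform on collected readings; the sensor name, thresholds, and sample temperatures are assumptions, not output from any particular monitoring product.

```python
# Illustrative threshold and trend check over collected sensor readings.
# Sensor names, thresholds, and the sample data are assumptions.

INLET_TEMP_ALARM_F = 80.0          # assumed rack-inlet temperature alarm point
TREND_WARNING_F_PER_HOUR = 1.0     # assumed rate-of-rise warning threshold

# Hypothetical hourly inlet temperatures for one rack sensor (degrees F).
readings = {"rack-12-inlet": [72.1, 72.4, 73.0, 73.9, 75.1, 76.6, 78.4]}

for sensor, temps in readings.items():
    latest = temps[-1]
    # Simple trend: average change per sample over the collected window.
    rate = (temps[-1] - temps[0]) / (len(temps) - 1)

    if latest >= INLET_TEMP_ALARM_F:
        print(f"{sensor}: ALARM at {latest:.1f} F")
    elif rate >= TREND_WARNING_F_PER_HOUR:
        hours_to_alarm = (INLET_TEMP_ALARM_F - latest) / rate
        print(f"{sensor}: rising {rate:.1f} F/h; "
              f"alarm threshold in about {hours_to_alarm:.1f} h if the trend holds")
    else:
        print(f"{sensor}: normal ({latest:.1f} F)")
```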

A network of communication and control devices can also be used with power and cooling systems to better match performance to demand and increase efficiency.

The final phase is optimization. A comprehensive infrastructure management system can help data center managers improve equipment utilization, reduce server deployment times, and more accurately forecast future equipment requirements, resulting in operating and capital expense reductions. Managers not only improve inventory and capacity management, but also process management, ensuring all assets are performing at optimum levels. Effective optimization can provide a common window into the data center, improving forecasts, managing supply and demand, and improving levels of efficiency and availability.
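As a simple illustration of the forecasting piece, the following sketch fits a linear trend to assumed monthly load readings and projects when an assumed capacity limit would be reached; every figure in it is hypothetical.

```python
# Linear trend forecast of facility power draw against an assumed capacity limit.
# The monthly readings and the capacity figure are hypothetical.

CAPACITY_KW = 800.0

# Hypothetical average monthly IT load (kW) over the past year.
monthly_load_kw = [512, 518, 527, 530, 541, 549, 556, 563, 570, 581, 588, 597]

n = len(monthly_load_kw)
months = range(n)

# Least-squares slope and intercept, computed without external libraries.
mean_x = sum(months) / n
mean_y = sum(monthly_load_kw) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, monthly_load_kw))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

months_to_limit = (CAPACITY_KW - intercept) / slope
print(f"Load is growing about {slope:.1f} kW per month")
print(f"At this rate, the {CAPACITY_KW:.0f} kW limit is reached "
      f"around month {months_to_limit:.0f} (month 0 = first reading)")
```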

Opportunities to improve efficiency and optimize performance can be revealed throughout the data center life cycle — from design and deployment through operations, management, and planning. Businesses that succeed in this effort will look beyond energy when considering efficiency and take every opportunity throughout that life cycle to achieve efficiencies without compromising performance and availability.

Panfil is vice president and general manager of Liebert AC Power, Emerson Network Power, Columbus, Ohio. He can be reached at [email protected].

