
Modular Data Center Design Trends

March 1, 2012
Container data centers lead to a modular design revolution

In 2006, Sun Microsystems unveiled the first container data center. Project Blackbox, an energy-efficient, water-cooled turnkey data center housed in an International Organization for Standardization (ISO) intermodal shipping container, was heralded as a cutting-edge solution for low-cost, rapid deployment in remote and rugged environments. It was adopted for select applications and locations, such as oil rigs running seismic modeling, windmills in underdeveloped rural areas, and U.S. military posts. “The military used container data centers for rapid deployment and rugged environments,” says Rich Hering, technical director of mission critical facilities at the Phoenix office of M+W Group, a global engineering and construction partner for technology-based clients.

Not long afterward, a few high-tech companies began their own experiments with containerized data centers. Google parked one in an underground parking garage and also proposed a floating data center aboard a cargo ship. In 2008, Microsoft revealed a configuration of 220 shipping containers for a data center in Chicago. “For us, containers have been used more in high-performance data centers,” says Hering. “They’re used in the cloud environment, or if a company needs to expand space without building a whole new data center. They use containers as additional space.”

Still others, attracted by the scalability, efficiency, and economics, looked into using a containerized design. Containers became a hot topic for data center operators wanting to increase capacity without waiting for clients to sign on. “It dovetailed into the data center industry, because it was trying to create white space as fast as possible without having to make large investments,” says Hering. “They wanted to build the right-sized solution so that they could maximize their dollars-to-square-foot delivery. It was their answer to just-in-time.”

Off the Rack

However, when organizations began planning container data centers for enterprise applications, drawbacks to the design became evident. “The belief was that there’s a huge market, and that many [end-users] would start using these containers,” said Rakesh Kumar, VP at Gartner Research, the Stamford, Conn.-based information technology research and advisory company, speaking at a data center forum in March 2011. “What we have found is that the market is nowhere near as big as the vendors thought it was going to be, because of fundamental problems with [the containerized] design.”

Surprisingly, the problems Kumar points to undercut the very reasons an organization would choose a container design in the first place. First, the promise of a lower cost data center hasn’t always come true, especially in comparison with small, early-phase traditional data centers. In fact, the cost of a container data center often reaches the price of a Tier IV-classified center without the fault tolerance that level of facility offers. Although industry analysts expect the price of data center construction to drop in the next few years, few attribute the decline directly to container design (see sidebar, “Data Center Construction Costs”).

In addition, the speed of deployment, often described as plug-and-play, is oversimplified. Container designs still require a chilled water supply and a secure site, and, in some cases, building permit applications can delay deployment for months. “That may take many months in many cases to get that,” added Kumar, who also noted that some data center clients may not like the idea of a remote location for their data. “You can’t just position this [container] in a car park and not put any security around it. It still contains mission-critical data, and you may have compliance issues with the industry that you’re in.”

Reliability is also an issue. “Your tech-savvy companies, like Microsoft or Google, are doing things in a cloud where equipment failure rates are not as big of an impact,” says Scott T. Scheibner, P.E., associate senior electrical engineer at the Cambridge, Mass.-based office of KlingStubbins, an internationally recognized design firm with more than 60 years of experience on a wide range of projects. “But companies that have definitely shied away from that type of thing are the financial industry, where each individual component tends to perform its own task, so a single failure may be very impactful to what they’re doing.”

Furthermore, because vendors provide the container data center as a whole, the equipment inside the container is proprietary to that vendor, leading to vendor lock-in. “A lot of clients have experienced that major manufacturers that make the boxes put their own computers inside of them,” says Hering. “They want to sell the entire compute offering, and a lot of companies don’t standardize on one particular brand of computers or servers. There’s a lack of standardization between container offerings, which has hurt the market. You can’t use one container across many different compute platforms.”

In addition, because the industry is new and evolving, container hardware and hookups change quickly from one generation to the next. “I’ve pushed the market, for probably the last two years, to try to develop standards for containers, thinking that the container solution is going to be the PC box of the ’80s,” continues Hering. “It’s going to become a box of delivered components, so it has to somehow standardize how it’s hooked up — the power and the cooling technologies — so that when a container comes to a site, you have the same docking station, as I refer to them, for each type of container. Right now, the container that you ordered six months earlier has a different connection point than the one you’re ordering now. So if I’ve built my facility to house the ones that I ordered six months ago, now I have to reconfigure my facility to fit the new ones.”
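As a rough sketch of what such a docking standard might pin down, the following Python fragment models a site-side “docking station” as a simple data structure and checks an arriving container against it. Every field name and value here is hypothetical; no such industry standard existed at the time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DockingInterface:
    """Hypothetical hookup spec for a container data center site.

    All fields and values are illustrative assumptions, not drawn
    from any published standard.
    """
    power_connector: str       # e.g., pin-and-sleeve connector type
    voltage: int               # service voltage, V
    water_coupling: str        # chilled water quick-connect fitting
    supply_water_temp_f: int   # design chilled water supply, deg F
    network_uplink: str        # e.g., fiber pair type

def is_compatible(site: DockingInterface, container: DockingInterface) -> bool:
    """A container docks cleanly only if every hookup matches the site."""
    return site == container

# The mismatch Hering describes: a container ordered six months later
# arrives with a different connection point than the site was built for.
site = DockingInterface("IEC 60309 400A", 480, "2-in. quick-connect", 45, "LC fiber pair")
new_container = DockingInterface("IEC 60309 400A", 415, "2-in. quick-connect", 45, "LC fiber pair")

print(is_compatible(site, new_container))  # False -- facility must be reconfigured
```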

Standards could help improve reliability, which is always a key aspect of data centers, according to Scheibner. “Being able to break it down into a standard process and fabricating the systems in a controlled environment will help that reliability,” he says. “I think that will have a high level of interest to a lot of clients.”

Finally, the very nature of container data centers — the actual container — imposes size restrictions, which leave less room for setup and maintenance. “You really have to want to use containers because they’re not the easiest environment to build,” says Hering, who cites a lack of traditional data center niceties, such as a raised floor, computer room space, and clearance in front of and behind the racks. “They’re not easy to move servers in and out of,” he continues. “They’re just not built the same way. So you either have to have a need for a 20-ft container fully loaded and populated, or you have to be able to put up with the downside of operating a data center in a container. So in that case, most of our clients have looked at them as temporary space.”

Out of the Box

Notwithstanding incremental changes to data center design over the years (see sidebar, “Be Cool”), the container represented the first major innovation in the basic physical infrastructure of the data center since the mainframe era. Furthermore, lessons learned from container data center design have led to a potential revolution in overall data center design. “The things that are being attempted with some of the containerized solutions now are helping to push a change,” says Scheibner.

According to Scheibner, designers are now considering issues such as how to achieve more processing with less infrastructure, and at what point accepting less reliability outweighs the up-front cost of a more robust infrastructure. “The thinking needs to shift a little bit for a fair number of engineers who still design in the traditional way,” says Scheibner. “If you start looking at it more as the data center itself is a piece of equipment rather than the end-user’s equipment within the data center, then it would help to change that thinking. It’s something that is going to take a little bit of getting used to from an engineering standpoint.”

A natural progression in this new way of thinking about data center design is from container to modular, which permits customized components to be prefabricated and commissioned off-site. While modular design draws on important aspects of containerized design, it is better suited to enterprise applications (see sidebar, “Major Differences Between Modular and Container Data Centers”). “In mostly colocation environments, or smaller data centers, the modular environments are the things they kind of lean toward because they’re a little more flexible,” says Hering.

The most significant data center design trend of the decade, according to The Uptime Institute (the self-described data center authority), is modular, phased design. “Certainly, every company building a data center today is using phased, modular design to take advantage of the capital expense savings,” said Vince Renaud, VP and managing principal at The Uptime Institute Professional Services, speaking during a panel of globally recognized data center design experts at the institute’s sixth annual symposium in Santa Clara, Calif., in May 2011. According to the Uptime Institute’s spring 2011 survey, almost 50% of the large data center operators interviewed are considering “going modular.”

Additionally, a number of experts in the mission critical field predict dramatic growth in the use of modular data centers over the next five years. IDC, the Framingham, Mass.-based provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets, predicts modular units will increase from 144 units in 2011 to about 220 units this year, although both numbers may be conservative, given the covert nature of some adopting industries, such as the military. Meanwhile, in an August 2011 survey of its readers, Data Center Knowledge found that 35% were either using modular products or evaluating them with an eye toward adopting the technology in the next 12 to 24 months.

Results of a survey of 90 mission critical professionals, conducted by Mortenson Construction, a U.S. data center contractor, and released in February, revealed that 83% of colocation provider respondents and 60% of corporate and public entity respondents will likely make mission-critical investments in the next 12 to 24 months. Today, containerized or modular data centers are a major part of operations for only 23% of respondents, a minor part for another 23%, and absent entirely for the remaining 54%. Looking ahead, however, 43% of respondents predict they will be a major trend, and 52% say they will be at least a minor one.

“Today’s data center is obsolete when taking modularity and the fast maturation of this market into consideration,” says Jason Schafer, research manager at Tier1 Research, which studies enterprise IT innovation within emerging technology segments. “If data center owners and operators are not at least exploring and considering modular components as a means for data center expansions and new builds, they are putting themselves at a significant disadvantage from a scalability, cost, and possibly maintenance standpoint.”

Modular Conveniences

To cut down on the lengthy and expensive process of general nonresidential construction, prefabrication has become more widespread in the last decade. In the wake of the success of more rapid deployment through prefabrication, many public projects now require it as an element of the contract. It is quickly becoming standard practice on state and municipal work, particularly in departments of transportation. According to the Freedonia Group, nonresidential prefabricated building system demand in the United States is expected to increase 7.8% annually to $15.2 billion in 2015.

A traditional data center build-out is estimated to take 18 to 24 months. With modular design, prefabrication of data center elements is becoming more common, cutting this time down significantly. “There’s a lot of advanced planning that needs to happen with the larger systems, because you’ve got much larger duct bank runs and a lot more hurdles because of the system sizes,” says Hering. “Whereas with the modular solution, you’re doing a lot of wiring prefab off-site. It cuts schedule, saving time. Plus, with a smaller building block, it might be a little bit easier to get some of the flexibility in the installation.”

Likewise, the prefabrication that comes with modular designs removes the integration clashes associated with fully on-site construction methods. “The biggest allure that you find is twofold,” says Scheibner. “You get the rapid deployment, as well as the standard factory design, which gives you a little bit higher quality assurance. It’s a repeatable process, and you’re able to test everything in the environment as it’s installed, so you don’t have the large commissioning times that you see on a larger, traditional design-build system.”

For example, the realty industry uses what it refers to as turnkey data centers: generalized spaces of about 2 MW made up of predesigned electrical components. “It’s more or less cookie cutter,” says Hering. “You’re delivering these spaces, and your designs are all the same — so that’s called a modularized design.”

According to Scheibner, a modular design makes planning easier, too. “The end-user of the data center doesn’t necessarily have a great understanding of what their five-year or even 10-year outlook will be,” he says. “So in order to make an investment to build the right system in a traditional build, you have to take into account a large time frame of growth so that you aren’t going through the process consecutively and getting to a state of continuous building.” As a result, says Scheibner, most traditional data centers are overbuilt.
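The overbuild penalty is easy to see with a little arithmetic. The sketch below compares the capital tied up in a traditional build (all capacity on day one) against a phased modular build; the $10 million per megawatt cost, the demand ramp, and the 2-MW module size are all illustrative assumptions, not figures from this article.

```python
# Capital tied up in unused capacity: traditional build vs. phased modular.
# All numbers below are illustrative assumptions, not sourced data.

COST_PER_MW = 10.0                 # $M per MW of built capacity (assumed)
DEMAND_BY_YEAR = [2, 4, 6, 8, 10]  # MW of IT load, years 1-5 (assumed ramp)
MODULE_SIZE_MW = 2                 # capacity per modular increment (assumed)

# Traditional: build for the five-year peak on day one.
peak = max(DEMAND_BY_YEAR)

# Modular: add modules only as demand catches up to installed capacity.
capacity = 0
for year, demand in enumerate(DEMAND_BY_YEAR, start=1):
    while capacity < demand:
        capacity += MODULE_SIZE_MW
    idle_capex = (peak - demand) * COST_PER_MW
    print(f"Year {year}: demand {demand} MW | modular capacity {capacity} MW "
          f"| traditional idle capex ${idle_capex:.0f}M")
```

Under these assumptions, the traditional build carries $80 million of idle capacity in year one, while the modular build never runs more than one 2-MW module ahead of demand.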

In this way, modular designs are more flexible than even containers, says Hering. “They offer connectivity to a broader range of computers,” he says. “So again, I’m building this modular system for delivery of power that is going to go to, say, the power distribution unit (PDU) level, and from there I’m going to supply power out of there. Or I’m delivering a larger air-handler unit, which will supply so many cubic feet per minute (CFM) to a space, so I can build my space based on that amount. Modularity is so much more flexible.”
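That airflow-based sizing follows from the standard sensible-heat relationship for air, Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F). The sketch below applies it to a rack load; the 20-kW load and 20°F supply-to-return temperature rise are assumed for illustration, not taken from the article.

```python
# Airflow a modular air handler must deliver for a given IT load, using
# the standard sensible-heat equation for air:
#   Q (BTU/hr) = 1.08 x CFM x delta-T (deg F)
# The 20 kW load and 20 deg F rise are illustrative assumptions.

BTU_PER_HR_PER_KW = 3412     # conversion: 1 kW = 3,412 BTU/hr
SENSIBLE_HEAT_FACTOR = 1.08  # standard-air constant (BTU/hr per CFM per deg F)

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove it_load_kw of heat at a
    supply-to-return temperature rise of delta_t_f."""
    heat_btu_hr = it_load_kw * BTU_PER_HR_PER_KW
    return heat_btu_hr / (SENSIBLE_HEAT_FACTOR * delta_t_f)

print(f"{required_cfm(20, 20):,.0f} CFM")  # roughly 3,159 CFM for a 20-kW rack
```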

Sidebar: Be Cool

A recent study released by IMS Research USA, Austin, Texas, reveals the market for rack-level cooling devices will grow by 12% from 2011 to 2016, reaching more than half a billion U.S. dollars in 2016. According to the study, “The World Market for Data Center Cooling – 2012 Ed.,” the fastest-growing national markets for rack-level cooling equipment are forecast to grow by more than 20% over this five-year period.
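If the 12% figure is read as a compound annual growth rate (an assumption; the study summary does not say whether it is annual or cumulative), the 2016 figure implies a 2011 base of roughly $284 million:

```python
# Back out the implied 2011 market size from the study's 2016 figure.
# Assumptions: the quoted 12% is a compound annual growth rate, and the
# 2016 market is taken as a flat $500M ("more than half a billion USD").

market_2016_musd = 500.0  # assumed 2016 market size, $M
cagr = 0.12               # assumed compound annual growth rate
years = 5                 # 2011 -> 2016

implied_2011 = market_2016_musd / (1 + cagr) ** years
print(f"Implied 2011 market: ${implied_2011:.0f}M")  # ~$284M
```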

Data centers require specialized cooling to handle the heat load from IT equipment. The increasing computing capacity and power density of new servers have created the need to cool hot spots in the data center more efficiently, and rack- and row-level cooling is the most suitable solution. “Unlike in the early years of data centers, when the only objective was to maintain general room temperature inside the data center, efficiency has become a major concern for data center operators because of the increasing cost of operating the cooling equipment,” says Andrés Gallardo, IMS Research analyst. “Furthermore, this equipment has also become more crucial to a company’s operation, and technology has enabled facilities with increasing densities. For these reasons, the growth of rack-level solutions continues to outpace that of room-cooling products, as these products offer efficient cooling for high-density loads.”

Despite the high growth of rack-based cooling, room cooling will remain the largest part of the market over the next five years as most data center designs still call for room cooling. Moreover, new builds in emerging markets are often lower density and do not require supplemental cooling.

Sidebar: Major Differences Between Modular and Container Data Centers


Sidebar: Data Center Construction Costs

Christian Belady, general manager of data center research at Microsoft’s Global Foundation Services, estimates the average construction cost of data centers will decrease from $15 million per megawatt to $6 million per megawatt within five years, resulting in a flattening of year-over-year growth in data center construction. Yet Belady attributes the reduction in cost not to container design but to the cloud computing model, whose designs leverage scale and application redundancy rather than finding savings in infrastructure.

“There is a force that is affecting the annual growth in data center construction that is a bit more tangible: the shift from highly redundant and well-controlled data centers to lower cost, highly efficient cloud data center designs,” says Belady. “These designs leverage their scale and application redundancy, as opposed to hardware redundancy to drive down cost. In addition, these designs use aggressive economizations that employ either liquid or air to help drive down cost and substantially improve efficiency. These data centers today are characterized by costs on the order of $6 million per megawatt and will continue to go lower in the future.”
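At the rates Belady cites, the savings scale directly with facility size. For an assumed 20-MW build (the capacity is illustrative; the per-megawatt figures are from the text above):

```python
# Construction cost at Belady's cited rates for an assumed 20 MW build.
# The 20 MW capacity is an illustrative assumption; the $/MW figures
# come from the article.

capacity_mw = 20
traditional_cost = capacity_mw * 15  # $15M per MW today
cloud_cost = capacity_mw * 6         # $6M per MW, cloud-style design

savings_pct = 100 * (1 - cloud_cost / traditional_cost)
print(f"Traditional: ${traditional_cost}M | cloud-style: ${cloud_cost}M "
      f"| {savings_pct:.0f}% lower")  # $300M vs. $120M, 60% lower
```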

Belady based his estimates on Microsoft’s internal numbers and on public numbers from cloud-scale providers, such as Yahoo’s $5 million per megawatt Yahoo! Computing Coop.
