In 2006, Sun Microsystems unveiled the first container data center. Project Blackbox, an energy-efficient, water-cooled turnkey data center housed in an International Organization for Standardization (ISO) intermodal shipping container, was heralded as a cutting-edge solution for low-cost, rapid deployment in remote and rugged environments. It was adopted for select applications and locations, such as oil rigs for seismic modeling, windmill sites in underdeveloped rural areas, and U.S. military posts. “The military used container data centers for rapid deployment and rugged environments,” says Rich Hering, technical director of mission critical facilities at the Phoenix office of M+W Group, a leading global engineering and construction partner for technology-based clients.

Not long afterward, a few high-tech companies began their own experiments with containerized data centers. Google parked one in an underground parking garage and also proposed a floating data center aboard a cargo ship. In 2008, Microsoft revealed a configuration of 220 shipping containers for a data center in Chicago. “For us, containers have been used more in high-performance data centers,” says Hering. “They’re used in the cloud environment or if a company needs to expand space without wanting to build a whole brand new data center. They use containers as additional space.”

Still others, attracted by the scalability, efficiency, and economics, looked into using a containerized design. Containers became a hot topic for data center operators wanting to increase capacity without waiting for clients to sign on. “It dovetailed into the data center industry, because it was trying to create white space as fast as possible without having to make large investments,” says Hering. “They wanted to build the right-sized solution so that they could maximize their dollars-to-square-foot delivery. It was their answer to just-in-time.”

Off the Rack

However, when planning a container data center for enterprise applications, drawbacks to the container design became evident. “The belief was that there’s a huge market, and that many [end-users] would start using these containers,” said Rakesh Kumar, VP of research at Gartner, the Stamford, Conn.-based information technology research and advisory company, speaking at a data center forum in March 2011. “What we have found is that the market is nowhere near as big as the vendors thought it was going to be, because of fundamental problems with [the containerized] design.”

Surprisingly, the problems Kumar points to reverse the very reasons an organization would choose a container design in the first place. First, the promise of a lower-cost data center hasn’t always come true, especially in comparison with small, early-phase traditional data centers. In fact, the cost of a container data center often approaches the price of a Tier 4-classified facility but without the capabilities offered by that level of data center. Although industry analysts expect the price of data center construction to fall in the next few years, few attribute that directly to container design (Data Center Construction Costs).

In addition, the promised plug-and-play speed of deployment is overstated. Container designs still require a chilled water supply and a secure site, and, in some cases, applications for building permits can delay deployment for months. “That may take many months in many cases to get that,” added Kumar, who also said that some data center clients may not like the idea of a remote location for their data. “You can’t just position this [container] in a car park and not put any security around it. It still contains mission-critical data, and you may have compliance issues with the industry that you’re in.”

Reliability is also an issue. “Your tech-savvy companies, like Microsoft or Google, are doing things in a cloud where equipment failure rates are not as big of an impact,” says Scott T. Scheibner, P.E., associate senior electrical engineer at the Cambridge, Mass.-based office of KlingStubbins, an internationally recognized design firm with more than 60 years of experience on a wide range of projects. “But companies that have definitely shied away from that type of thing are the financial industry, where each individual component tends to perform its own task, so a single failure may be very impactful to what they’re doing.”

Furthermore, because vendors supply the container data center as a whole, the equipment inside the container is proprietary to that vendor, which leads to vendor lock-in for container designs. “A lot of clients have experienced that major manufacturers that make the boxes put their own computers inside of them,” says Hering. “They want to sell the entire compute offering, and a lot of companies don’t standardize on one particular brand of computers or servers. There’s a lack of standardization between container offerings, which has hurt the market. You can’t use one container across many different compute platforms.”

In addition, because the industry is new and still evolving, the equipment becomes outdated fairly quickly. “I’ve pushed the market, for probably the last two years, to try to develop standards for containers, thinking that the container solution is going to be the PC box of the ’80s,” continues Hering. “It’s going to become a box of delivered components, so it has to somehow standardize how it’s hooked up — the power and the cooling technologies — so that when a container comes to a site, you have the same docking station, as I refer to them, for each type of container. Right now, the container that you ordered six months earlier has a different connection point than the one you’re ordering now. So if I’ve built my facility to house the ones that I ordered six months ago, now I have to reconfigure my facility to fit the new ones.”

Standards could help improve reliability, which is always a key aspect of data centers, according to Scheibner. “Being able to break it down into a standard process and fabricating the systems in a controlled environment will help that reliability,” he says. “I think that will have a high level of interest to a lot of clients.”

Finally, the very nature of container data centers, the container itself, imposes size restrictions that leave less space for setup and maintenance. “You really have to want to use containers because they’re not the easiest environment to build,” says Hering, who cites a lack of traditional data center niceties, such as a raised floor, computer room space, and adequate clearance in front of and behind the racks. “They’re not easy to move servers in and out of,” he continues. “They’re just not built the same way. So you either have to have a need for a 20-ft container fully loaded and populated, or you have to be able to put up with the downside of operating a data center in a container. So in that case, most of our clients have looked at them as temporary space.”

Out of the Box

Notwithstanding minor changes to data center design (Be Cool), the container buzz marked the first major innovation in the basic physical infrastructure of the data center since the mainframe era. Furthermore, lessons learned from container data center design have led to a potential revolution in overall data center design. “The things that are being attempted with some of the containerized solutions now are helping to push a change,” says Scheibner.

According to Scheibner, designers are now weighing questions such as how to achieve more processing with less infrastructure, and at what point accepting less reliability outweighs the up-front cost of a more robust infrastructure. “The thinking needs to shift a little bit for a fair number of engineers who still design in the traditional way,” says Scheibner. “If you start looking at it more as the data center itself is a piece of equipment rather than the end-user’s equipment within the data center, then it would help to change that thinking. It’s something that is going to take a little bit of getting used to from an engineering standpoint.”

A natural progression in this new way of thinking about data center design is from container to modular, which permits customized components to be prefabricated and commissioned off-site. While modular design draws on the key aspects of containerized design, such as prefabrication and scalability, it is better suited to enterprise applications (Major Differences Between Modular and Container Data Centers). “In mostly colocation environments, or smaller data centers, the modular environments are the things they kind of lean toward because they’re a little more flexible,” says Hering.

The most significant data center design trend of the decade, according to The Uptime Institute (the self-described data center authority), is modular, phased construction. “Certainly, every company building a data center today is using phased, modular design to take advantage of the capital expense savings,” said Vince Renaud, VP and managing principal at The Uptime Institute Professional Services, speaking during a panel featuring globally recognized data center design experts at the institute’s sixth annual symposium in Santa Clara, Calif., in May 2011. According to the Uptime Institute’s spring 2011 survey, almost 50% of the large data center operators interviewed are considering “going modular.”

Additionally, a number of experts in the mission critical field predict dramatic growth in the use of modular data centers over the next five years. IDC, the Framingham, Mass.-based provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets, predicts modular units will increase from 144 units in 2011 to about 220 units this year, although both numbers may be conservative given the covert nature of some of the industries involved, such as the military. Meanwhile, in August 2011, Data Center Knowledge surveyed its readers and found that 35% were either using modular products or evaluating them with an eye toward adopting the technology in the next 12 to 24 months.

Results of a survey of 90 mission critical professionals conducted by Mortenson Construction, a U.S. data center contractor, and released in February, revealed that 83% of colocation provider respondents and 60% of corporate and public entity respondents will likely make mission-critical investments in the next 12 to 24 months. Today, only 23% of respondents said containerized or modular data centers are a major part of their current operations, another 23% called them a minor part, and 54% said they are not part of current operations at all. Looking ahead, however, 43% predicted they will be a major trend, and 52% said they will be at least a minor one.

“Today’s data center is obsolete when taking modularity and the fast maturation of this market into consideration,” says Jason Schafer, research manager at Tier1 Research, which studies enterprise IT innovation within emerging technology segments. “If data center owners and operators are not at least exploring and considering modular components as a means for data center expansions and new builds, they are putting themselves at a significant disadvantage from a scalability, cost, and possibly maintenance standpoint.”