To cut the time and cost of general nonresidential construction, prefabrication has become more widespread over the last decade. Following the success of faster deployment through prefabrication, many public projects now require it as an element of the contract. It is quickly becoming standard practice on state and municipal work, particularly in departments of transportation. According to the Freedonia Group, nonresidential prefabricated building system demand in the United States is expected to increase 7.8% annually to $15.2 billion in 2015.
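A quick compound-growth check shows what that forecast implies for the base-year market. The five-year window (2010 to 2015) is an assumption made here for illustration; the article does not state the forecast's start year.

```python
# Back out the implied base-year market size from the Freedonia forecast:
# $15.2 billion in 2015 at 7.8% annual growth. The 2010-2015 window is an
# assumption for illustration, not a figure from the article.

def implied_base(final_value, annual_growth, years):
    """Starting value implied by a final value and a compound annual growth rate."""
    return final_value / (1 + annual_growth) ** years

base = implied_base(15.2, 0.078, 5)
print(f"Implied 2010 demand: ${base:.1f} billion")  # about $10.4 billion
```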

A traditional data center build-out is estimated to take 18 to 24 months. With modular design, prefabrication of data center elements is becoming more common, cutting this time significantly. “There’s a lot of advanced planning that needs to happen with the larger systems, because you’ve got much larger duct bank runs and a lot more hurdles because of the system sizes,” says Hering. “Whereas with the modular solution, you’re doing a lot of wiring prefab off-site. It cuts schedule, saving time. Plus, with a smaller building block, it might be a little bit easier to get some of the flexibility in the installation.”

Likewise, the prefabrication that comes with modular designs removes integration clashes associated with full on-site construction methods. “The biggest allure that you find is twofold,” says Scheibner. “You get the rapid deployment, as well as the standard factory design, which gives you a little bit higher quality assurance. It’s a repeatable process, and you’re able to test everything in the environment as it’s installed so you don’t have the large commissioning times that you see on a larger, traditional design built system.”

For example, the realty industry uses what it refers to as turnkey data centers: generalized spaces of about 2 MW made up of predesigned electrical components. “It’s more or less cookie cutter,” says Hering. “You’re delivering these spaces, and your designs are all the same — so that’s called a modularized design.”

According to Scheibner, a modular design also makes planning easier. “The end-user of the data center doesn’t necessarily have a great understanding of what their five-year or even 10-year outlook will be,” he says. “So in order to make an investment to build the right system in a traditional build, you have to take into account a large time frame of growth so that you aren’t going through the process consecutively and getting to a state of continuous building.” As a result, says Scheibner, most traditional data centers are overbuilt.
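The overbuilding Scheibner describes can be sketched numerically. In this hypothetical comparison (all demand and capacity figures are invented for illustration), a traditional build sized up front for ten-year peak demand carries far more idle, stranded capacity than a modular build that adds 2 MW blocks as demand grows:

```python
# Hypothetical illustration of why traditional builds end up overbuilt:
# a traditional build is sized on day one for projected peak demand, while
# a modular build adds capacity in increments as demand materializes.

def stranded_capacity(installed_mw, demand_mw):
    """Capacity paid for but not yet used, in MW."""
    return installed_mw - demand_mw

# Assume demand grows from 2 MW to 8 MW over ten years (made-up numbers).
demand_by_year = [2, 2.5, 3, 4, 5, 6, 6.5, 7, 7.5, 8]

# Traditional: build the full 8 MW peak capacity up front.
traditional_installed = [8] * 10

# Modular: start at 2 MW and add a 2 MW block whenever headroom runs low.
modular_installed = []
installed = 2
for d in demand_by_year:
    if d > installed - 0.5:   # less than 0.5 MW headroom left: add a block
        installed += 2
    modular_installed.append(installed)

trad_stranded = sum(stranded_capacity(i, d)
                    for i, d in zip(traditional_installed, demand_by_year))
mod_stranded = sum(stranded_capacity(i, d)
                   for i, d in zip(modular_installed, demand_by_year))

print(f"Traditional stranded capacity: {trad_stranded:.1f} MW-years")
print(f"Modular stranded capacity:     {mod_stranded:.1f} MW-years")
```

Under these made-up numbers the traditional build strands roughly twice the capacity of the modular one (28.5 versus 14.5 MW-years), which is the effect Scheibner describes.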

In this way, modular designs are more flexible than even containers, says Hering. “They offer connectivity to a broader number of computers,” he says. “So again, I’m building this modular system for delivery of power that is going to go to, say, the power distribution unit (PDU) level, and from there I’m going to supply power out of there. Or I’m delivering a larger air-handler unit, which will supply so much cubic feet per minute (CFM) to a space, so I can build my space based on that amount. Modularity is so much more flexible.”
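The CFM-based sizing Hering describes can be illustrated with the standard sensible-heat relation for air cooling, Q [BTU/hr] ≈ 1.08 × CFM × ΔT [°F]. The airflow and supply/return temperature split below are hypothetical, chosen only to show the arithmetic:

```python
# Rough sizing sketch (not from the article): estimate how much IT heat
# load a given air-handler airflow can absorb, using the standard
# sensible-heat relation Q [BTU/hr] = 1.08 * CFM * delta-T [F].

def cooling_capacity_kw(cfm, delta_t_f):
    """Sensible heat an airflow can remove, in kW (1 kW = 3412 BTU/hr)."""
    btu_per_hr = 1.08 * cfm * delta_t_f
    return btu_per_hr / 3412

# Example: a 20,000 CFM air handler with a 20 F supply/return split.
capacity = cooling_capacity_kw(20_000, 20)
print(f"Supportable IT load: {capacity:.0f} kW")  # about 127 kW
```

Knowing the kW of IT load each air-handler block can support is what lets a designer "build the space based on that amount," as Hering puts it.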


A recent study released by IMS Research USA, Austin, Texas, reveals that the market for rack-level cooling devices will grow by 12% from 2011 to 2016, accounting for more than half a billion dollars in 2016. According to the study, “The World Market for Data Center Cooling – 2012 Ed.,” the fastest-growing national markets for rack-level cooling equipment are forecast to grow by more than 20% over this five-year period.

Data centers require specialized cooling to handle the heat load from IT equipment. The increasing computing capacity and power density of new servers has created the need to cool hot spots in the data center more efficiently, and rack- and row-level cooling is the most suitable solution. “Unlike in the early years of data centers, when the only objective was to maintain general room temperature inside the data center, efficiency has become a major concern for data center operators because of the increasing cost of operating the cooling equipment,” says Andrés Gallardo, IMS Research analyst. “Furthermore, this equipment has also become more crucial to a company’s operation; and technology has enabled facilities with increasing densities. For these reasons, the growth of rack-level solutions continues to outpace that of room-cooling products as these products offer efficient cooling for high-density loads.”

Despite the high growth of rack-based cooling, room cooling will remain the largest part of the market over the next five years as most data center designs still call for room cooling. Moreover, new builds in emerging markets are often lower density and do not require supplemental cooling.

Sidebar: Major Differences Between Modular & Container Data Centers

(see Table)

Sidebar: Data Center Construction Costs

Christian Belady, general manager, data center research at Microsoft’s Global Foundation Services, estimates that the average construction cost of data centers will decrease from $15 million per megawatt to $6 million per megawatt within five years, resulting in a flattening of year-over-year growth. Yet Belady attributes the reduction in cost not to container design but to the cloud computing model. According to Belady, these designs leverage scale and application redundancy, as opposed to finding savings in infrastructure.

“There is a force that is affecting the annual growth in data center construction that is a bit more tangible: the shift from highly redundant and well-controlled data centers to lower cost, highly efficient cloud data center designs,” says Belady. “These designs leverage their scale and application redundancy, as opposed to hardware redundancy to drive down cost. In addition, these designs use aggressive economizations that employ either liquid or air to help drive down cost and substantially improve efficiency. These data centers today are characterized by costs on the order of $6 million per megawatt and will continue to go lower in the future.”
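The two cost figures Belady cites translate into a simple back-of-envelope comparison. The 10 MW facility size below is hypothetical; only the per-megawatt costs come from the article:

```python
# Back-of-envelope comparison of the construction costs Belady cites:
# roughly $15M per MW for a traditional build versus $6M per MW for a
# cloud-style design. The 10 MW facility size is a made-up example.

def build_cost_musd(capacity_mw, cost_per_mw_musd):
    """Total construction cost in millions of dollars."""
    return capacity_mw * cost_per_mw_musd

capacity_mw = 10
traditional = build_cost_musd(capacity_mw, 15)  # $15M/MW traditional
cloud_style = build_cost_musd(capacity_mw, 6)   # $6M/MW cloud design
savings_pct = 100 * (traditional - cloud_style) / traditional

print(f"Traditional: ${traditional}M, cloud-style: ${cloud_style}M")
print(f"Reduction: {savings_pct:.0f}%")  # a 60% drop in construction cost
```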

Belady based his estimates on Microsoft’s internal numbers and on public figures from cloud-scale providers, such as Yahoo’s $5 million per megawatt Yahoo! Computing Coop.