These electrical support systems require in-depth evaluation at the design stage.

In an internetworked age, a high-performance data center becomes a company's command center. Balancing the need for maximum system availability and an economically feasible power solution at such sites is something system designers, facility managers, and owners struggle with all the time. Data centers, telephone- or Internet-service-provider (ISP) hub sites, telephone switch facilities, and satellite earth stations all support telecommunications and critical data equipment that need clean and uninterrupted electrical power.

Redundancy is the most important means of gaining reliability: the practice of providing equipment that isn't necessary under normal operating conditions, but that can be substituted for failed equipment. It is usually expressed as n + r, where n is the number of units needed to carry the load and r is the number of redundant units - the degree of redundancy.
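The payoff of an n + r arrangement can be quantified if each unit's availability is known. The sketch below is illustrative only - it assumes identical, independently failing units and a made-up per-unit availability of 99%:

```python
from math import comb

def bank_availability(n, r, a):
    """Probability that at least n of n + r identical, independent
    units are working, given per-unit availability a."""
    total = n + r
    return sum(comb(total, k) * a**k * (1 - a)**(total - k)
               for k in range(n, total + 1))

# Three UPS modules needed to carry the load, one redundant (n + 1),
# each assumed 99% available on its own:
print(round(bank_availability(3, 1, 0.99), 6))  # ~0.999408
```

A single 99%-available module jumps to better than three-nines once one spare is added, which is why n + 1 is the usual starting point for critical systems.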

Another important factor directly related to reliability is availability: the proportion of time during which a utility service, power supply, or other component in the electrical-supply path is able to meet equipment needs. This factor is directly related to the probability of failure of particular pieces of equipment. For example, if power to a load that requires conditioned power is available only through the utility bypass circuit, it may be counted as a loss of service, because that power is not conditioned. Availability depends not only on how often failures occur, but also on the repair time needed to restore service.
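The dependence on both failure frequency and repair time is captured by the standard steady-state formula, availability = MTBF / (MTBF + MTTR). A minimal sketch, with assumed (not measured) MTBF and MTTR figures:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A unit that fails, on average, once every 50,000 h and takes 8 h to repair:
print(f"{availability(50_000, 8):.6f}")
# Halving the repair time helps roughly as much as doubling the MTBF:
print(f"{availability(50_000, 4):.6f}")
print(f"{availability(100_000, 8):.6f}")
```

The comparison shows why fast repair (spares on site, service contracts) is as much an availability tool as more reliable hardware.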

Availability is sometimes expressed as a decimal. For example, 0.99999 is five-nines availability, meaning that, on average, service is available 99.999% of the time - or unavailable for roughly five minutes per year. Because high-reliability facilities handle business transactions and could lose huge sums of money if service were interrupted, these sites need six-nines or seven-nines availability, and they are willing to pay for it. To provide high availability, an infrastructure designer must understand the effect each piece of equipment has on overall availability and specify components that reduce the possibility of failure.
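Converting the nines into annual downtime makes the differences concrete. A quick sketch:

```python
def downtime_minutes_per_year(availability):
    """Average unavailable minutes per year at a given availability."""
    return (1 - availability) * 365 * 24 * 60

for a in (0.999, 0.9999, 0.99999, 0.999999):
    print(f"{a:.6f}: {downtime_minutes_per_year(a):8.2f} min/yr")
```

Three-nines allows nearly nine hours of downtime a year; five-nines about 5.3 minutes; six-nines only about half a minute. Each added nine cuts permissible downtime tenfold, which is why each added nine also multiplies the cost.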

Typically, the electrical design for a high-availability power system includes multiple utility power feeds from separate utility substations, standby diesel generators, UPS units, local power distribution units (PDUs) to serve computers, and multiple-path branch-circuit wiring. A data center using such a design may offer a calculated availability of 0.999999, or six-nines. At that level, the electric supply to the computer equipment may run for five years or more without an unplanned interruption; in most cases, the computers will become technologically obsolete and be removed before power ever fails.
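Each element in that chain must work for power to reach the load, so the availabilities of series elements multiply. The figures below are illustrative assumptions, not measured values:

```python
# Assumed availability of each series element in a single power path:
path = {
    "utility service": 0.9999,
    "UPS system":      0.99999,
    "PDU":             0.999999,
    "branch wiring":   0.9999999,
}

overall = 1.0
for element, a in path.items():
    overall *= a  # series elements: all must work, so availabilities multiply

print(f"overall availability of one path: {overall:.7f}")
```

Note that the chain is only about as good as its weakest link (here, the assumed four-nines utility service). That is why high-availability designs add generators, dual feeds, and parallel paths around the weakest elements rather than gold-plating the strong ones.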

"In a 24-7 world, reliability is essential," notes Adam Krupp, sales managing director for CS Technology, a New York-based technology and design company that has completed more than 1500 technology projects. "Thousands of dollars in infrastructure equipment costs are needed to achieve six-nines of availability, but not everyone needs this level of performance. That's why, at the start of the job, it is important to sit down and determine up-front exactly what the client needs. Basically, every project should start off with a needs assessment - what systems are they putting in? You want to find out the initial requirements for startup, but then you have to get into the nitty-gritty of growth projections. For the expenditure, most systems must not only meet initial projections, but should withstand interruptions as the electronic equipment load grows."

According to Krupp, some of CST's clients think their site needs a six-nines or better design because they've read about the concept in a trade publication. But when asked whether they could, in reality, afford to be down for at least one weekend a year, they will admit that they could - say, a long weekend during a relatively slow period.

When a facility accepts this lower availability target, the savings in equipment costs are often considerable. Without the need to maintain equipment and sustain the critical load concurrently, a site won't require expensive solid-state transfer switches, bypass maintenance circuits, and the like.

Another important set of factors in site design is preplanning and scalability. William Angle, managing director for CST's Washington office, states their importance in strong terms. He notes that most corporations allot 90% of their data center budgets to electronics such as computers, servers, and telecommunications. As such, the infrastructure or building enclosure becomes secondary, because the structure and everything else make up the remaining 10% of the cost.

This approach contrasts greatly with typical construction-cost allocations. In these, the building is a greater component of the cost, and offices that house people are easier to move. Hence, anyone planning to build a data center must develop the project under a different set of financial considerations.

Before starting any design, an owner should establish an expansion plan - one that offers the ability to scale up the site at a later date. This is difficult because electronic equipment changes at an accelerated rate compared to the life of the data-center building and support systems. Usually the best design practice is to base the space requirements for a new data center on at least a five-year growth plan, which provides adequate expansion to ensure an orderly installation of more equipment in the future.

This is essential, because once the computer and telecom equipment is online and serving critical functions, construction costs would escalate if the site couldn't easily accept additional electronic equipment and power distribution equipment without an outage.

"It is like building a new operating room while performing surgery in the same area," says Angle.

Costs associated with precise construction to avoid service interruption can be considerable. For that reason, Angle tells clients to spend 10% of the data center's design cost to preplan the project.

"In today's technology economy, it is important to think ahead. Consider a data-center site that is built to carry out operations that don't involve critical functions, but two years later it becomes the Internet hub for a company's financial, manufacturing, and ordering operations. At this point, can you buy the ultrareliability features you need?"

Preplanning this adaptability is well worth the expenditure. The experience of CST's engineers and planners is that while substantial growth in information processing and storage capacities may require little additional floor space, thanks to constant electronic-equipment compaction, electrical and mechanical requirements will expand dramatically. Electrical density and the corresponding heat dissipation will continue to climb as computer-industry compaction continues. Two major influences on data-center design over the past decade are the development of fault-tolerant dual-power computers and systems, and static-switch technology for single-cord equipment.

Fast-acting static switches in the electrical source enable a computer with a single power cord to draw from two paths of supply. If one path fails, the switch acts quickly enough to prevent an outage. These systems should be designed with care: if many switches toggle at the same time during an interruption, the infrastructure must sustain large power shifts (block loads).

The first factor - dual-power equipment - will eventually eliminate static-switch technology: once electronic equipment has two power cords and an internal power supply for each, all switching will occur within the computers themselves. However, this will not relieve the infrastructure equipment of block-load shifting.

In most cases, a variety of small design details can help create a smooth-functioning data center. If all redundant and backup systems were to fail, the site operator should still be able to keep the facility operating or restart the systems. For example, a connection box in the electric power distribution system can allow a rental generator to be plugged in - with prior arrangements made with a generator rental company, of course.

Another design feature is to provide manual electrical system interconnections - which are key-interlocked. With manual interconnections, members of a trained staff can reconfigure the electrical system to correct major component failures.

The ceiling grid, lighting fixtures, equipment racks, floor tiles, and cable-tray locations should be coordinated and aligned. A coordinate system should be set up and included in room circuit schedules to allow a technician to quickly locate power circuits under access floors.

Where possible, branch-circuit wiring should use multi-circuit type AC cables, to minimize cross-section fill of power conductors in cable trays. An 8/C No. 10 AWG hospital-grade cable (having one neutral conductor per circuit, an equipment ground, and an isolated ground wire) is usually recommended. The No. 10 conductor allows for increased harmonic currents - common in data facilities - and the larger conductor cross section also reduces voltage drop. Hospital-grade type AC cable is recommended because it has stringent grounding requirements, and is UL-rated for use in cable tray and environmental air-handling spaces.
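The voltage-drop benefit of the larger No. 10 conductor can be checked with the standard circular-mil method (K of about 12.9 ohm-cmil/ft for copper). The circuit length and load below are illustrative assumptions:

```python
# Two-wire voltage drop by the circular-mil method.
K_COPPER = 12.9        # ohm-cmil/ft, approximate for copper at 75 deg C
CMIL_10_AWG = 10_380   # circular mils, No. 10 AWG
CMIL_12_AWG = 6_530    # circular mils, No. 12 AWG, for comparison

def voltage_drop(amps, one_way_feet, cmil=CMIL_10_AWG, k=K_COPPER):
    """Out-and-back voltage drop in volts for a two-wire circuit."""
    return 2 * k * one_way_feet * amps / cmil

vd10 = voltage_drop(16, 75)                   # assumed 16 A load, 75 ft run
vd12 = voltage_drop(16, 75, cmil=CMIL_12_AWG)
print(f"No. 10: {vd10:.2f} V ({vd10 / 120 * 100:.1f}% on 120 V)")
print(f"No. 12: {vd12:.2f} V ({vd12 / 120 * 100:.1f}% on 120 V)")
```

On this assumed run, the No. 10 conductor drops about 3 V versus nearly 5 V for No. 12 - the cross-section advantage the text describes, before even counting the extra headroom for harmonic heating.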

Finally, specify NEMA L5-20 twist-lock receptacles in floor outlets to prevent accidental removal of plug-strip connections, and thus the inadvertent loss of power.

24-7 site. A facility that conducts critical processing continuously and cannot tolerate downtime.

Availability. The degree to which a system performs as specified when called upon to do so. Often expressed as a percentage of time (99.999%).

Concurrent maintenance. Process that enables the performance of regular maintenance on critical systems without disrupting normal operation.

Critical load. Electronic-data-processing (EDP), telecommunications, or other equipment that processes or supports the processing of network traffic.

Critical service. A building service which, if not furnished with specific parameters, causes financial hardship and loss by disrupting the operation of critical loads.

Critical system. A building system that generates, transmits, or delivers a critical service.

Fault-tolerant system. A system that can survive the failure of any component or subsystem without causing the loss of operations.

Redundancy. The provision of one or more components or subsystems within a larger system beyond what the critical load actually requires at any given time.

Reliability. The extent to which a system or component can be expected to operate as specified. Often expressed as mean time between failure (MTBF).

Single point of failure. Any component, subsystem, or element of a system, whose failure (independent of any other component, subsystem or element) causes interruption of any critical service.