Data Center Power Featured Articles
Moore's Law and Murphy's Law Rule in Data Centers
Moore’s Law is a rule of thumb attributed to Intel co-founder Gordon Moore: the number of chip elements called transistors doubles approximately every two years. Transistors are used to amplify and switch electronic signals and electrical power. Researchers keep finding ways to maintain a trend that two generations ago would have been science fiction: computers continue to get smaller even as they get more powerful. The trend shows no sign of slowing, either; IDC predicts chip sales will rise from $315 billion in 2012 to $380 billion in 2016.
Image via Intel
On the other hand, there is Murphy’s Law: anything that can go wrong will go wrong.
The demand for ever-higher computing capacity continues to drive up cabinet densities, and the power required to operate these systems has increased accordingly. Data center floor space is also among the most expensive property per square foot to build.
“The demand for high computing capacity and the operating costs have led to three-phase power commonly being brought down to the cabinet level with voltage requirements in the 400 V / 480 V range and with current densities at 60 or even 100 A to meet these greater compute demands,” Nicholson wrote.
Higher voltages have the additional benefit of allowing servers to operate more efficiently, which reduces operating costs. Efficiency is becoming more critical as electricity prices continue to rise and power availability in many areas becomes scarce.
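As a rough illustration of the figures Nicholson cites, the real power delivered by a balanced three-phase feed is P = √3 · V · I · PF. The sketch below uses assumed values (415 V line-to-line, 60 A, 0.95 power factor, none of which come from the article) to show why higher distribution voltages matter: at a fixed power draw, raising the voltage lowers the current, and resistive losses fall with the square of that current.

```python
import math

def three_phase_power_kw(line_voltage_v, current_a, power_factor=0.95):
    """Real power of a balanced three-phase load in kW:
    P = sqrt(3) * V_line * I * PF."""
    return math.sqrt(3) * line_voltage_v * current_a * power_factor / 1000.0

def required_current_a(power_kw, line_voltage_v, power_factor=0.95):
    """Current per phase needed to deliver a given real power."""
    return power_kw * 1000.0 / (math.sqrt(3) * line_voltage_v * power_factor)

# A 415 V / 60 A three-phase feed (assumed figures):
print(round(three_phase_power_kw(415, 60), 1))  # → 41.0 (kW)

# The same 41 kW at 208 V would need roughly twice the current:
print(round(required_current_a(41.0, 208), 1))
```

This is a simplification (it assumes a balanced load and ignores harmonics), but it captures the basic economics behind bringing 400 V / 480 V three-phase power to the cabinet.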
Data centers assume Murphy’s Law is as true as Moore’s Law and plan for redundancy. Hundreds of hours are spent designing critical facilities around LEED standards and determining the appropriate Tier level for a data center facility. Redundant systems, including power, reduce downtime and help facilities absorb often-unplanned increases in demand.
“Power and environmental monitoring with device control will continue to be the ‘new’ law as data centers move forward. As The Green Grid coined the term long ago, ‘You cannot improve what you are not measuring,’” Nicholson said.

Smart Load Shedding, which automatically shuts off non-critical devices in priority order under conditions such as rising temperatures or exceeded thresholds, is one feature that can be implemented to ensure equipment is not damaged. It is well suited to remote locations, secure locations that are difficult to access, and other unattended installations.
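The priority-ordered shutdown that Smart Load Shedding describes can be sketched in a few lines. This is an illustrative model only; the outlet names, priorities, and thresholds below are assumptions, and a real PDU would expose this through a vendor-specific API rather than Python objects.

```python
from dataclasses import dataclass

@dataclass
class Outlet:
    name: str
    priority: int      # higher number = less critical, shed first; 0 = never shed
    load_watts: float
    on: bool = True

def shed_load(outlets, temperature_c, temp_threshold_c=30.0, watts_to_shed=0.0):
    """If the temperature threshold is exceeded, switch off non-critical
    outlets, least critical first, until the requested wattage is shed.
    Returns the names of the outlets that were turned off."""
    if temperature_c <= temp_threshold_c:
        return []
    shed, remaining = [], watts_to_shed
    for outlet in sorted(outlets, key=lambda o: o.priority, reverse=True):
        if remaining <= 0:
            break
        if outlet.on and outlet.priority > 0:
            outlet.on = False
            remaining -= outlet.load_watts
            shed.append(outlet.name)
    return shed

# Hypothetical rack: the core switch (priority 0) is never shed.
rack = [Outlet("core-switch", 0, 300.0),
        Outlet("dev-server", 2, 450.0),
        Outlet("office-monitor", 3, 60.0)]
print(shed_load(rack, 35.0, watts_to_shed=500.0))  # → ['office-monitor', 'dev-server']
```

The key design point is the priority floor: critical equipment is marked unsheddable, so even a worst-case threshold breach degrades the rack gracefully instead of taking everything down.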
Edited by Rich Steeves
Data Center Power Resources
Featured White Papers
This article explores the various monitoring systems typically found within the data center ecosystem and how to navigate getting the power and environmental information needed to make better decisions within your data center facility.
RF Code provides an enterprise-class, wire-free sensor solution that is ideal for monitoring in real time the environmental conditions in IT-dense areas such as data centers and IT closets.
Within enterprise data centers, power used for operating the facility, lighting, running IT loads and cooling is the largest operational expense. Numerous papers and articles have been published by The Green Grid, The Uptime Institute, PG&E, Lawrence Berkeley Laboratories and others discussing ways to measure, monitor and increase efficiencies. This paper discusses the effect on efficiency of load balancing across phases in a 3-phase distribution system.
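The load-balancing topic of that last paper can be quantified with a simple imbalance metric. The "maximum deviation from the mean phase current" definition below is one common convention, assumed here for illustration; the sample currents are hypothetical.

```python
def phase_imbalance_pct(phase_currents):
    """Maximum deviation from the average phase current, as a percentage.
    A rough measure of load imbalance on a 3-phase distribution circuit."""
    avg = sum(phase_currents) / len(phase_currents)
    if avg == 0:
        return 0.0
    return max(abs(i - avg) for i in phase_currents) / avg * 100.0

# Hypothetical per-phase currents in amps (L1, L2, L3):
print(round(phase_imbalance_pct([30.0, 24.0, 18.0]), 1))  # → 25.0
```

A perfectly balanced circuit scores 0 percent; the further the score drifts upward, the more capacity is stranded on the lightly loaded phases.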