Data Center Power Featured Articles
Cutting Data Center Power Costs Using Untraditional Methods
Power dissipation and the associated cooling costs in a data center are enormous. While power usage effectiveness (PUE) is a good measure of a data center's energy efficiency, achieving a lower PUE number is not a trivial task, and the ideal value of 1.0 is especially difficult to reach. PUE compares total data center electrical consumption to the amount delivered to useful computing equipment. A common value of 2.0 means that of every two watts coming into the data center, only one watt reaches a server. The lost power converts into heat and must be appropriately managed with cooling systems that cost money.
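The PUE ratio described above is simple enough to sketch in a few lines of code. The function name and the sample figures here are hypothetical, chosen only to reproduce the 2.0 example from the article:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power drawn divided by
    the power actually delivered to IT equipment. 1.0 is the ideal."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 2,000 kW that delivers 1,000 kW to servers has PUE 2.0:
print(pue(2000, 1000))  # 2.0
```

Every watt of the gap between the two figures ends up as heat the cooling plant must remove.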
A recent article on InfoWorld’s Data Center site, by Mel Beckman, proposed eight novel methods of cutting power loss and, thereby, reducing associated heat and cooling costs. The proposed methods are simple to implement, according to Beckman.
The eight novel methods outlined to cut data center power are: crank up the heat; power down servers that aren't in use; use "free" outside-air cooling; use data center heat to warm office spaces; use SSDs for highly active read-only data sets; use direct current in the data center; bury heat in the earth; and move heat to the sea via pipes.
The first energy-saving method, crank up the heat, calls for turning up the data center thermostat. Conventionally, the data center temperature is set around 68 °F or below. The logic behind this practice is that these temperatures extend equipment life and give facility managers more time to react in the event of a cooling-system failure, explains Beckman. But the new trend is to push temperatures upward.
At last year's GreenNet conference, Google energy czar Bill Weihl cited Google's experience with raising data center temperatures. As per the InfoWorld write-up, 80 °F can be safely used as a new set point, provided the data center implements a simple prerequisite: separating hot- and cold-air flows as much as possible, using curtains or solid barriers if needed.
While 80 °F looks like a safe bet, Microsoft's experience shows you could go higher. The software giant's Dublin, Ireland, data center operates in chiller-less mode, using free outside-air cooling, with server inlet temperatures as high as 95 °F. But, Beckman cautions, there is a point of diminishing returns as you raise the temperature, owing to the higher server fan speeds required, which further increase power consumption.
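One way to see the diminishing returns Beckman describes is through the fan affinity laws, under which fan power rises roughly with the cube of fan speed. The numbers below are hypothetical, and this is only a rough sketch of the scaling relationship, not a thermal model:

```python
def fan_power(base_power_w, speed_ratio):
    """Fan affinity law: fan power scales roughly with the cube of fan speed.
    speed_ratio is the new speed relative to the baseline speed."""
    return base_power_w * speed_ratio ** 3

# Doubling fan speed costs roughly 8x the fan power:
print(fan_power(10, 2.0))  # 80.0
```

A modest rise in inlet temperature that forces fans to spin substantially faster can therefore claw back much of the cooling savings.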
The second idea for reducing data center power is to cut power to servers that are unused. Is the increased "business agility" of keeping servers ever ready worth the cost of the excess power they consume, asks Beckman. He suggests that data center operators find instances where entire servers can be powered down, achieving the lowest power usage of all: zero.
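The savings from powering idle servers fully off are easy to estimate. The function and figures below are hypothetical, intended only to illustrate the arithmetic behind Beckman's suggestion:

```python
def idle_savings_kwh(idle_servers, watts_per_server, hours_off):
    """Energy saved by powering idle servers fully off for a period,
    instead of leaving them drawing their idle wattage."""
    return idle_servers * watts_per_server * hours_off / 1000

# 20 servers idling at 150 W each, powered down for 12 hours overnight:
print(idle_savings_kwh(20, 150, 12))  # 36.0 kWh
```

And because every watt saved at the server is a watt the cooling plant no longer has to remove, the real saving is larger than the raw figure.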
Another option is to use free outside-air cooling. In this case, if you're trying to maintain 80 °F and the outside air is at 70 °F, the operator can get all the cooling needed by blowing that air into the data center, writes Beckman.
Data center managers can also use data center heat to warm office spaces. By redirecting the data center's waste BTUs into office heating, the facility saves twice: less energy spent removing the heat, and less spent generating it elsewhere.
Likewise, InfoWorld’s fifth suggestion recommends the use of solid-state drives (SSDs) for highly active read-only data sets. Owing to faster access times, lower power consumption, and very low heat emissions, SSDs are gaining popularity in netbooks, tablets, and laptops, as well as servers. Until recently, however, their cost and reliability have been barriers to adoption. Fortunately, writes Beckman, SSDs have dropped in price considerably in the last two years and have become attractive for quick energy savings in data centers. When employed correctly, SSDs can knock a fair chunk off the price of powering and cooling disk arrays, with 50 percent lower electrical consumption and near-zero heat output.
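Beckman's 50-percent figure translates directly into watts saved at the array. A minimal sketch, with hypothetical disk counts and per-disk wattage:

```python
def ssd_array_savings_w(disk_count, watts_per_disk, reduction=0.50):
    """Power saved by replacing spinning disks with SSDs, using the
    roughly 50% consumption reduction cited by Beckman as the default."""
    return disk_count * watts_per_disk * reduction

# A 100-disk array at 10 W per spinning disk:
print(ssd_array_savings_w(100, 10))  # 500.0 W saved
```

As with powered-down servers, the saving compounds: lower draw at the array also means less heat for the cooling system to handle.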
Using direct current (DC) is another way to cut data center energy costs. Method six eliminates the AC-to-DC conversion performed by each server's internal power supply, along with the losses that conversion entails.
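The benefit comes from removing conversion stages, each of which loses a few percent. The chains and efficiency figures below are hypothetical assumptions for illustration, not measurements from the article:

```python
def delivered_fraction(*stage_efficiencies):
    """Fraction of input power that survives a chain of power-conversion
    stages, each given as an efficiency between 0 and 1."""
    frac = 1.0
    for eff in stage_efficiencies:
        frac *= eff
    return frac

# Hypothetical AC chain: UPS (0.92) -> PDU (0.98) -> server PSU AC-to-DC (0.90)
ac = delivered_fraction(0.92, 0.98, 0.90)
# Hypothetical DC chain: one facility rectifier (0.95) -> DC-DC at server (0.95)
dc = delivered_fraction(0.95, 0.95)
print(ac, dc)
```

With fewer stages, the DC chain delivers a larger fraction of the incoming power to the server, which is the whole argument for method six.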
Burying heat in the earth is another way to manage heat cost-effectively. By sending pipes into the earth, hot water carrying server-generated heat can be circulated to depths where the surrounding ground will carry the heat away by conduction, according to the seventh suggestion for cutting data center power. However, this method requires that the operator analyze the heat-absorption capability of the ground to know how much heat a given area can absorb.
Unlike geothermal heat sinks, the ocean is effectively an infinite heat sink for data center purposes. Beckman’s method 8 moves heat to the sea through pipes.
Server Technology, a company that works to design, develop and provide the world's best power management products and systems, has been a leader in the industry for the past 25 years and has also been working to increase data center efficiencies.
“Overall, the power usage has gone up but they are doing a lot more work, they are getting a lot more done and they are providing a lot more services with the power that is being used,” he said. “I think they have gotten more efficient but the demand has gone up radically.”
Ashok Bindra is a veteran writer and editor with more than 25 years of editorial experience covering RF/wireless technologies, semiconductors and power electronics. To read more of his articles, please visit his columnist page.
Data Center Power Resources
Featured White Papers
As the need to balance current and future IT requirements against resource consumption becomes more urgent, the data center industry increasingly views capacity planning as a critical component of planning a new build or retrofit. Data center capacity planning can be a complex undertaking with far-reaching strategic and operational implications. DCD Intelligence has therefore compiled this White Paper in order to share some industry insights and lessons on the practical steps that are needed to develop a successful power and capacity planning strategy.
Server Technology had the recent opportunity, along with other partner companies, to participate in discussions across the globe with data center IT and facility managers as part of a road show seminar: Data Center Energy and Operational Efficiency.
The demand for more power in the computer cabinet has led many data centers to upgrade to three-phase power distribution. Proper three-phase power distribution has traditionally meant dividing power into multiple branches within the rack PDU (Power Distribution Unit). In this paper we will explore the advantages of a new, less common approach to PDU design: alternating each phase on a per-receptacle basis instead of a per-branch basis.
Increasing power and cooling demands within the data center have been the topics of choice for Data Center (DC) and Facility Managers for several years now. Increased power demands result from the need for more compute power and the higher-density devices that deliver it. These high-density installations include stacks and stacks of servers and the trend toward blade servers within these server "farms." Cooling problems follow directly from the increased power demands, for the simple reason that more power drawn means more heat to remove.