Data Center Power Featured Articles
Data Center Power: Making Data Centers Last 10-15 Years
There are two distinct product models in the data center industry, according to a recent Data Center Knowledge article: enterprise class and inexpensive commodity facilities. Today, data centers must be built to last the 10 to 15 years they are designed to live, and the designs need to "be specifically tailored to a business' current and future needs or they need to have the inherent flexibility to adapt as their needs change," the article states.
The two models the data center industry revolves around are the enterprise class, which tends to focus on reliability and extended life cycle, "usually at the expense of efficiency," and inexpensive, quick-to-build commodity facilities, designed "primarily to meet immediate needs with little understanding or consideration for full life cycle usage."
“Today, the wholesale data center industry is at a crossroads,” the article states. “It can either continue to produce quite unremarkable accelerated obsolescence-inspired designs that box in customers or they can give the enterprise what it wants: Industrial-strength innovation around scalability, efficiency and flexibility.”
The article adds that seven-year-old data centers are "obsolete" and that some users' needs will grow beyond their data center's capabilities in as little as two to three years.
To create data centers that will last in today’s world, companies must design with the future in mind. Enterprise data center users do make large investments in both capital and innovation, but “the time lag between each build can be substantial, thus not allowing for continued and aggressive innovation in efficiency and design methods.”
One key to maximizing flexibility and realizing the full design life cycle is vertical scalability, which "allows businesses to grow within the walls, thereby reducing operating expenses and risks and increasing efficiencies as power densities increase."
As densities grow, “operating efficiencies improve and capital spent per server/application decreases.”
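To make the economics behind that claim concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (shell cost, fit-out cost per kW, rack count, server draw) is a hypothetical assumption, not a number from the article; the point is only that a fixed building cost amortized over more servers pushes capital per server down as rack power density rises within the same walls.

    # Hypothetical illustration (not from the article): growing "within the walls"
    # can lower capital spent per server as rack power density increases.
    # All constants below are assumptions chosen only to show the arithmetic.

    SHELL_COST = 10_000_000      # fixed cost of the building shell, $ (assumed)
    FITOUT_COST_PER_KW = 7_000   # electrical/cooling fit-out cost per kW (assumed)
    RACK_COUNT = 200             # racks that fit inside the existing walls (assumed)
    SERVER_POWER_KW = 0.5        # average draw per server, kW (assumed)

    def capital_per_server(rack_density_kw: float) -> float:
        """Capital cost per server for a given rack power density (kW per rack)."""
        total_it_load_kw = RACK_COUNT * rack_density_kw
        servers = total_it_load_kw / SERVER_POWER_KW
        total_capital = SHELL_COST + FITOUT_COST_PER_KW * total_it_load_kw
        return total_capital / servers

    for density in (4, 8, 16):  # kW per rack
        print(f"{density:>2} kW/rack -> ${capital_per_server(density):,.0f} per server")

Under these assumed numbers, capital per server falls from roughly $9,750 at 4 kW per rack to about $5,060 at 16 kW per rack, because the fixed shell cost is spread across far more servers while only the per-kW fit-out cost scales with load.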
Server Technology, a data center power monitoring company, performs 100 percent product testing, a process that has met with much success.
“Our processes are very well established,” Edi Murway, head test engineer for Server Technology, told TMCnet in a recent interview. “We’ve always had a very low failure rate, but things do fail. People are involved in the process and whenever people or machines are involved in the process there are always going to be failures. So, having the confidence that when we ship that product it’s going to be 100 percent of what it’s designed to do and what the customer expects makes it worth it.”
The process has become even more robust in the past few years, and the company, which creates solutions for data center power and telecommunications, prides itself on offering 100 percent performance testing.
David Sims is a contributing editor for TMCnet. To read more of David’s articles, please visit his columnist page. He also blogs for TMCnet here.
Edited by Carrie Schmelkin
Data Center Power Resources
Featured White Papers
This article explores the various monitoring systems typically found within the data center ecosystem and how to navigate getting the required power and environmental information needed to make better decisions within your data center facility.
RF Code provides an enterprise class, wire-free sensor solution that is ideal for monitoring in real time the environmental conditions in IT dense areas such as data centers and IT closets.
Within enterprise data centers, power used for operating the facility, lighting, running IT loads and cooling is the largest operational expense. Numerous papers and articles have been published by The Green Grid, The Uptime Institute, PG&E, Lawrence Berkeley Laboratories and others discussing ways to measure, monitor and increase efficiencies. This paper discusses the effect on efficiency of load balancing across phases in a 3-phase distribution system.