Data Center Power Featured Articles
Virtualization Offers Big Cost Savings, Environmental Advantages
Data centers seem to be multiplying at a rapid rate in an effort to keep up with the demand for high-capacity cloud computing resources. As cloud applications and services generate an ever-increasing amount of big data, the demands on data center power consumption and server bandwidth have exploded.
Data center operators are now seeking ways to constrain costs and power consumption while still offering the high levels of availability, redundancy and security that their customers demand. Virtualization, of both servers and general physical infrastructure, could very well prove to be the key to the efficiencies needed in today’s demanding data center environments.
Very simply, virtualization software decouples applications and operating systems from the underlying physical hardware, enabling multiple systems to run simultaneously on the same server. Organizations can immediately reap cost savings on hardware and operating expenses, since fewer physical servers are needed to run the same variety of systems and software. Virtualization is the powerhouse behind cloud computing and offers massive potential for how data centers operate.
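A toy back-of-the-envelope sketch (with illustrative numbers of our own, not figures from the article) shows where those hardware and power savings come from: if each physical host can safely carry several VMs, the physical footprint, and with it the power bill, shrinks roughly in proportion.

```python
# Illustrative consolidation estimate -- the workload counts, wattage,
# and electricity price below are hypothetical, not from IDC.

def consolidated_servers(workloads: int, vms_per_host: int) -> int:
    """Physical hosts needed after virtualizing one-app-per-server workloads."""
    return -(-workloads // vms_per_host)  # ceiling division

def annual_power_cost(servers: int, watts_per_server: float,
                      dollars_per_kwh: float) -> float:
    """Rough 24x7 power cost, ignoring cooling overhead."""
    kwh_per_year = servers * watts_per_server / 1000 * 24 * 365
    return kwh_per_year * dollars_per_kwh

hosts = consolidated_servers(100, 10)            # 100 workloads, 10 VMs/host
before = annual_power_cost(100, 400, 0.12)       # 100 one-app servers
after = annual_power_cost(hosts, 400, 0.12)      # 10 virtualized hosts
print(f"physical hosts: 100 -> {hosts}")
print(f"annual power cost: ${before:,.0f} -> ${after:,.0f}")
```

Real savings depend on safe consolidation ratios and cooling overhead, but the direction of the arithmetic is why IDC's projections are so large.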
According to research firm IDC, virtualization of servers and physical infrastructure could save businesses in the Asia-Pacific region up to $106 billion by 2020. The firm examined server spending along with the associated costs of administration, power and cooling, and physical floor space. The technology also has a significant environmental impact, with the potential to eliminate 6.4 million tons of carbon dioxide emissions in the region through 2020.
In the U.S., virtualization has the potential to save companies $1.9 trillion in gross energy and fuel costs through 2020, along with 9.1 gigatons of carbon dioxide emissions. IDC adds that the technology can also significantly reduce time to market for services, which leads to a better ROI.
Virtualization can present complications, however, and Stefan Bernbo, CEO of Compuverde, recently wrote about some of the drawbacks. (Compuverde specializes in big data cloud storage solutions.)
According to Bernbo, rapid virtualization can create issues like data congestion unless the underlying hardware keeps pace with expansion. As virtual machines (VMs) are deployed rapidly, bottlenecks can occur when every server and VM connects to the same shared storage. Data center operators need to ensure their infrastructure architectures keep up with the pace of virtualization to avoid these problems.
Bernbo suggests organizations look at solutions used by early virtualization adopters like telcos and service providers to deal with congestion issues. Seeking out solutions with multiple data entry points that distribute the load across all servers can optimize performance and minimize lag time.
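A minimal sketch of the multiple-entry-point idea (our own illustration, not Compuverde's implementation; the node names are hypothetical): rather than funneling every request through one shared-storage gateway, each client hashes a request key to one of several entry-point nodes, spreading load across all servers with no central dispatcher to become a bottleneck.

```python
# Sketch: deterministic load spreading across multiple entry points.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical entry points

def entry_point(key: str, nodes=NODES) -> str:
    """Pick an entry node deterministically by hashing the request key."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Every client computes the same mapping independently, so requests
# fan out evenly instead of piling onto a single point of entry.
counts = {n: 0 for n in NODES}
for i in range(10_000):
    counts[entry_point(f"vm-disk-{i}")] += 1
print(counts)  # roughly 2,500 requests per node
```

Production systems typically use consistent hashing so that adding or removing a node remaps only a fraction of the keys, but the load-spreading principle is the same.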
Running VMs inside the storage node itself is another way to tackle the problem. “With this approach, the whole architecture is flattened out,” writes Bernbo. “If the organization is using shared storage in a SAN, the VM usually hosts from the top of the storage layer, turning it into one giant storage system with only one point of entry. To fix the data congestion issues that result from this approach, some businesses are starting to move away from the typical two-layer architecture toward one that keeps virtual machines and storage running out of the same layer.”

By ensuring physical infrastructure keeps up with the rapid pace of virtualization, data center operators and their customers can reap the most benefits from the technology.
Edited by Rory J. Thompson
Data Center Power Resources
Featured White Papers
As the need to balance current and future IT requirements against resource consumption becomes more urgent, the data center industry increasingly views capacity planning as a critical component of planning a new build or retrofit. Data center capacity planning can be a complex undertaking with far-reaching strategic and operational implications. DCD Intelligence has therefore compiled this white paper to share industry insights and lessons on the practical steps needed to develop a successful power and capacity planning strategy.
Server Technology recently had the opportunity, along with other partner companies, to participate in discussions across the globe with data center IT and facility managers as part of a road show seminar: Data Center Energy and Operational Efficiency.
The demand for more power in the computer cabinet has led many data centers to upgrade to three-phase power distribution. Proper three-phase power distribution has traditionally meant dividing power into multiple branches within the rack PDU (Power Distribution Unit). In this paper we explore the advantages of a newer, less common approach to PDU design: alternating phases on a per-receptacle basis instead of a per-branch basis.
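The benefit of per-receptacle phase alternation is easiest to see with a small worked example (illustrative figures, not a vendor design): in a branch-based PDU, consecutive receptacles share a phase, so filling a rack top-down loads one phase first; alternating phases A, B, C, A, B, C down the strip keeps the three phases balanced even in a partially populated rack.

```python
# Sketch: per-branch vs per-receptacle phase assignment in a 12-outlet PDU.

def per_phase_load(loads_watts, assignment):
    """Sum receptacle loads by phase for a given phase assignment."""
    phases = {"A": 0.0, "B": 0.0, "C": 0.0}
    for load, phase in zip(loads_watts, assignment):
        phases[phase] += load
    return phases

loads = [300.0] * 12  # twelve identical 300 W servers, installed in order
branch = ["A"] * 4 + ["B"] * 4 + ["C"] * 4           # per-branch grouping
alternating = [("A", "B", "C")[i % 3] for i in range(12)]

# Partially populated rack: only the first six receptacles in use.
print(per_phase_load(loads[:6], branch[:6]))       # {'A': 1200.0, 'B': 600.0, 'C': 0.0}
print(per_phase_load(loads[:6], alternating[:6]))  # {'A': 600.0, 'B': 600.0, 'C': 600.0}
```

Balanced phases mean lower neutral currents and more usable capacity per feed, which is the argument the paper develops.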
Increasing power and cooling demands within the data center have been topics of choice for Data Center (DC) and Facility Managers for several years now. Increased power demands result from the need for more compute power, which has driven higher-density installations: stacks and stacks of servers and the trend toward blade servers within these server "farms." Cooling problems follow directly, since greater power draw increases the demand for cooling.