Data Center Power Featured Articles
Johns Hopkins University Achieves High Performance Virtualization Breakthrough
The Johns Hopkins University Applied Physics Laboratory Air and Missile Defense Department's Combat Systems Development Facility has achieved a major breakthrough in virtualization, according to GCN.
The facility used virtualization to streamline the complex simulations it runs.
The desire to use the facility's 1,500-node computing clusters more efficiently drove the move toward virtualization. Some nodes were fully loaded while others sat idle. A node that's not processing anything is one that's wasting money once the costs of hardware, power, and cooling are factored in.
The design of the simulations the lab was running was a major obstacle to virtualizing the high-performance computing system. The lab has two clusters, one based on Windows and the other on Linux; it maintains both platforms because it works with outside contractors, who have their own requirements.
"Some simulations take five seconds per task, and we run that same task up to a million times," Edmond DeMattia, a senior system engineer and virtualization architect told GCN. "While others may take 15 hours per task, but are only run 1,000 times."
These simulations use the “Monte Carlo method,” which means they use repeated random sampling to make calculations. All of those simulations require a lot of computing power.
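The article does not show the lab's simulation code, but the idea behind the Monte Carlo method is easy to sketch. The example below (an illustration, not the lab's actual workload) estimates pi by sampling random points in the unit square and counting how many land inside the quarter circle; the same pattern of running one cheap task many times is what makes these workloads easy to spread across cluster nodes.

```python
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Estimate pi via repeated random sampling: draw points in the
    unit square and count the fraction inside the quarter circle."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The quarter circle covers pi/4 of the square, so scale by 4.
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # converges toward 3.14159... as samples grow
```

Because each sample is independent, a million such tasks can be farmed out to whichever nodes happen to be idle, which is exactly the scheduling problem the lab's virtualized grid addresses.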
DeMattia used VMware's ESXi hypervisor to manage the virtual machines. The hypervisor monitors all of the virtual machines and diverts resources from underused machines to those running intensive tasks, using the cluster more efficiently than before.
With the overhead of virtualization, DeMattia expected a 6 to 8 percent performance loss. Instead, he was surprised to find a 2 percent increase. He then began moving the cluster over to the new virtualized grid.
“My team fundamentally redesigned how high-performance scientific computing is performed in the Air and Missile Defense Department by utilizing virtualization and distributed storage as the framework for pooling resources across multiple departments," he said.
The lab saved about $504,000 in hardware costs and over $40,000 in cooling costs by virtualizing its computing cluster.
Edited by Maurice Nagle
Data Center Power Resources
Featured White Papers
As the need to balance current and future IT requirements against resource consumption becomes more urgent, the data center industry increasingly views capacity planning as a critical component of planning a new build or retrofit. Data center capacity planning can be a complex undertaking with far-reaching strategic and operational implications. DCD Intelligence has therefore compiled this White Paper in order to share some industry insights and lessons on the practical steps needed to develop a successful power and capacity planning strategy.
Server Technology recently had the opportunity, along with other partner companies, to participate in discussions across the globe with data center IT and facility managers as part of a road show seminar: Data Center Energy and Operational Efficiency.
The demand for more power in the computer cabinet has led many data centers to upgrade to three-phase power distribution. Proper three-phase power distribution has traditionally meant dividing power into multiple branches within the rack PDU (Power Distribution Unit). In this paper we explore the advantages of a newer, less common approach to PDU design: alternating each phase on a per-receptacle basis instead of a per-branch basis.
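The benefit of per-receptacle phase alternation is easy to see in a toy model. The sketch below (a hypothetical illustration, not from the paper; the phase names and 24-receptacle count are assumptions) compares how the two layouts spread identical loads across the three phases when devices are plugged in from the top of the rack down.

```python
# Toy model: 24 receptacles, three phases, 1-unit load per device.
PHASES = ["A", "B", "C"]
RECEPTACLES = 24

def per_branch(i):
    # Traditional layout: three branches of 8 receptacles,
    # each branch wired to a single phase.
    return PHASES[i // (RECEPTACLES // 3)]

def per_receptacle(i):
    # Alternating layout: adjacent receptacles rotate through the phases.
    return PHASES[i % 3]

def phase_loads(assign, plugged_in):
    # Tally 1-unit loads per phase for the first `plugged_in` receptacles.
    loads = {p: 0 for p in PHASES}
    for i in range(plugged_in):
        loads[assign(i)] += 1
    return loads

# Plugging in 10 identical devices top-down:
print(phase_loads(per_branch, 10))      # one branch saturates first
print(phase_loads(per_receptacle, 10))  # load stays nearly balanced
```

In the per-branch layout the first phase carries 8 of the 10 loads, while the alternating layout keeps the split at 4/3/3, which is the phase-balance advantage the paper's approach targets.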
Increasing power and cooling demands within the data center have been the topics of choice for Data Center (DC) and Facility Managers for several years now. Increased power demands result from the need for more compute power and higher-density devices. These high-density installations include stacks and stacks of servers and the growing use of blade servers within these server "farms." Cooling problems are a direct result of the increased power demands, for the simple reason that more power increases the demand for cooling.