The Impact of Virtualization on a Data Center's Infrastructure

<span id="hs_cos_wrapper_name" class="hs_cos_wrapper hs_cos_wrapper_meta_field hs_cos_wrapper_type_text" style="" data-hs-cos-general-type="meta_field" data-hs-cos-type="text" >The Impact of Virtualization on a Data Center's Infrastructure </span>

Jun 09

Jun 09

Big Data

The_Impact_of_Virtualization_on_a_Data_Centers_Infrastructure_-1

Bringing virtual resources into the data center can have a wide-ranging effect on the network infrastructure. Most of the impact falls on the physical plant and on the enterprise's capacity to accommodate the new virtual machines. There are also power and cooling considerations, hardware configuration questions, and layout concerns to address before virtualization adversely affects network performance.

The advantages of virtualization are clear. By encapsulating and abstracting applications from the physical hardware, you create virtual machines (VMs) that are easier to manage, are portable, and can be implemented on physical hardware in seconds. VMs make better use of shared data center resources and give IT managers complete control of server functions through a software overlay. More importantly, virtualization provides the elasticity needed to scale the infrastructure up or down, adding more VMs and cloud resources as needed to meet changing demands.

However, by virtualizing server technology, you now have a new set of architectural challenges, including server configuration and systems proliferation.

More Server Horsepower

The first challenge is an explosion of virtual machines: with multiple servers each running 10 to 20 virtual applications, the number of elements that must be managed increases 10 to 20 times. Every server that hosts VMs also runs a virtual switch, which can push the count of managed network elements to 20 to 40 times what it was.

Virtualization and cloud computing call for more powerful x86 server hardware. Although one server can host multiple VMs, hosts need more powerful processors, more memory, higher IO, and more bandwidth to deliver the necessary throughput. Upgrading existing servers with more memory and CPUs is usually more cost-effective than buying new ones, but it may not always be enough. You can justify the investment in new, more powerful server hardware by calculating the number of VMs it can accommodate.
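
As a rough illustration of that sizing math, here is a minimal Python sketch; every figure in it (core counts, oversubscription ratio, per-VM demands) is a hypothetical placeholder, not a recommendation.

```python
# Back-of-the-envelope VM capacity estimate for a candidate host.
# All numbers are hypothetical; substitute your own measured workloads.

HOST_CORES = 32          # physical cores on the candidate server
HOST_RAM_GB = 512        # installed memory
VCPU_RATIO = 4           # vCPUs oversubscribed per physical core
RAM_HEADROOM = 0.85      # keep ~15% memory free for the hypervisor

PER_VM_VCPUS = 4
PER_VM_RAM_GB = 16

vcpu_limit = (HOST_CORES * VCPU_RATIO) // PER_VM_VCPUS
ram_limit = int(HOST_RAM_GB * RAM_HEADROOM) // PER_VM_RAM_GB

# The scarcest resource sets the ceiling.
max_vms = min(vcpu_limit, ram_limit)
print(f"vCPU-bound: {vcpu_limit} VMs, RAM-bound: {ram_limit} VMs")
print(f"Estimated capacity: {max_vms} VMs per host")
```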

Another side effect of server optimization to support VMs is power consumption. Host servers draw more power and, therefore, create hot spots that require additional cooling. This will affect both your cooling strategy and your data center layout.

Managing Enterprise Storage

Physical data storage consists of volumes, RAID arrays, and disks that are mapped into the virtual domain as data stores. The virtual storage infrastructure is complex, and each data store must map back to the underlying storage topology. Because workloads frequently move to different locations, the relationship between server and storage must be tracked to make sure the necessary data remains available to the applications running on VMs.
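
One way to keep that server-to-storage relationship visible is an explicit map from VM to data store to backing array. Below is a minimal sketch under assumed names; real inventories would come from your hypervisor and storage-management tooling, and none of the identifiers here refer to a real API.

```python
# Minimal sketch of tracking which physical storage backs each VM,
# so migrations don't silently break data-availability assumptions.
# All names are hypothetical placeholders.

datastore_to_array = {
    "ds-finance-01": "raid10-array-A",
    "ds-web-01": "raid5-array-B",
}

vm_to_datastore = {
    "vm-erp": "ds-finance-01",
    "vm-web-frontend": "ds-web-01",
}

def backing_array(vm: str) -> str:
    """Resolve a VM down to the physical array that holds its data."""
    return datastore_to_array[vm_to_datastore[vm]]

# After a live migration, update the map and re-verify the path.
vm_to_datastore["vm-web-frontend"] = "ds-finance-01"
print("vm-web-frontend now backed by", backing_array("vm-web-frontend"))
```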

The bigger storage challenge is enterprise IO demand. Running multiple VMs on the same server creates random IO overload: each VM writes to the underlying infrastructure with a different pattern, and the blended stream looks random to the storage. Aggregate IOPS tend to increase tenfold, putting more strain on the system. Many IT organizations deal with the problem by using flash drives, but that increases costs. A better approach is to adopt a converged infrastructure.
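
To see how quickly that demand adds up, here is a minimal sketch of the aggregate-IOPS arithmetic; the per-VM figures and headroom factor are hypothetical assumptions, not measurements.

```python
# Rough aggregate-IOPS estimate for a consolidated host.
# Per-VM figures are hypothetical; measure your own workloads.

vm_iops = {
    "vm-db": 1200,     # write-heavy database
    "vm-web-1": 150,
    "vm-web-2": 150,
    "vm-mail": 400,
}

# Consolidation blends many separate streams into one random stream,
# so size against the sum at a random-IO profile, plus headroom.
total = sum(vm_iops.values())
headroom = 1.3  # ~30% burst allowance (assumption)
print(f"Required backend capability: ~{int(total * headroom)} random IOPS")
```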

Server SAN is a new strategy described by Wikibon that pools multiple devices into a single storage resource that is directly attached to multiple servers. Server SAN makes the most of commodity hardware and simplifies deployment and scalability without an external storage array.

Handling Deduplication

Effective data deduplication not only eliminates redundant data, but it also reduces bandwidth and storage needs. However, deduplication can be CPU-intensive and slow performance.

Deduplication can be done inline, as data is being written to disk, or post process, meaning the data is stored, read back from disk, deduplicated, and rewritten. The deduplication strategy you adopt depends on your applications.
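
For illustration, here is a minimal sketch of the inline approach using fixed-size blocks and content hashing; production deduplication engines use variable-size chunking and persistent indexes, so treat this purely as a toy model.

```python
import hashlib

# Minimal sketch of inline deduplication with fixed-size blocks.
# Each unique block is stored once, keyed by its SHA-256 fingerprint.

BLOCK_SIZE = 4096
store = {}  # fingerprint -> block data (the single stored copy)

def write_stream(data: bytes) -> list:
    """Split data into blocks, store each unique block once,
    and return the list of fingerprints that reconstructs it."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # skip blocks already stored
        recipe.append(fp)
    return recipe

def read_stream(recipe: list) -> bytes:
    return b"".join(store[fp] for fp in recipe)

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated content
recipe = write_stream(data)
print(f"{len(recipe)} blocks written, {len(store)} stored uniquely")
assert read_stream(recipe) == data
```

Post-process deduplication would run the same fingerprinting as a background pass over already-written data, trading extra disk IO for lower write latency.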

Power and PUE

Even though individual virtualized servers tend to draw more power, fewer servers are required overall, so total power consumption for the data center is lower. However, power usage effectiveness (PUE) typically worsens as servers and storage are consolidated.

The data center has “fixed losses” in cooling and power: the power used for facility overhead such as cooling remains roughly constant regardless of how much the IT load consumes. Consolidating server hardware through virtualization shrinks the IT load, but the fixed losses remain, so the PUE ratio rises, which is worse.
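
A quick worked example makes the effect concrete. PUE is total facility power divided by IT equipment power; the figures below are hypothetical round numbers.

```python
# Why consolidation can worsen PUE even as total power drops.
# PUE = total facility power / IT equipment power.
# All figures are hypothetical, for illustration only.

fixed_overhead_kw = 200   # cooling, lighting, UPS losses (roughly constant)

it_load_before_kw = 400   # sprawl of lightly loaded servers
it_load_after_kw = 250    # consolidated, virtualized fleet

pue_before = (it_load_before_kw + fixed_overhead_kw) / it_load_before_kw
pue_after = (it_load_after_kw + fixed_overhead_kw) / it_load_after_kw

print(f"Before: PUE {pue_before:.2f} at {it_load_before_kw + fixed_overhead_kw} kW total")
print(f"After:  PUE {pue_after:.2f} at {it_load_after_kw + fixed_overhead_kw} kW total")
# Total power falls from 600 kW to 450 kW, yet PUE rises from 1.50 to 1.80.
```

The absolute energy bill still falls, which is the point of consolidation; the worsening ratio simply reflects the fixed overhead being spread over a smaller IT load.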

Lower Redundancy Requirements

Adopting VMs in the data center makes it easy to dynamically shift processing loads. A well-managed virtual infrastructure can provide a high level of fault tolerance for both VMs and virtual applications. Workloads, virtualized storage, and entire VMs can be automatically relocated when network problems arise. Using virtualization usually speeds recovery in the event of a disaster.

These are just a few of the issues that arise when re-architecting the data center for a virtual infrastructure. Although virtualization offers a number of benefits, you can’t treat virtual machines or virtualized servers as you would conventional servers. You must be prepared to change the data center infrastructure to provide the additional bandwidth, processing power, and resources required to get maximum value from virtualization.
