The Risks and Rewards of Virtualization

Virtualization is more than just an industry buzzword or IT trend. The technology enables multiple instances of an operating environment to run on a single piece of hardware. These virtual machines (VMs) run applications and services just like any physical server, eliminating the costs of purchasing and supporting additional servers. Virtualization delivers other benefits, too, such as faster provisioning of applications and resources. It can also increase IT productivity, efficiency, agility, and responsiveness, freeing IT staff to focus on other tasks and initiatives. However, virtualization has its risks.

How Did Virtualization Evolve?

To understand the business case for virtualization – as well as its potential risks – we need to look back to the time when mainframes ruled the computing world.

Mainframes were used by large organizations to manage their most critical applications and systems. Yet they could also act as servers, offering the ability to host multiple instances of operating systems at the same time. In doing so, they pioneered the concept of virtualization.

Many organizations were quick to see the potential. They began carving up workloads for different departments or users to give them dedicated compute resources for more capacity and better performance. This was the very beginning of the client-server model.

In most cases, one application ran on one server, which was accessed by many different PCs. Other advancements, such as the emergence of Intel’s x86 architecture, helped make client-server computing faster, cheaper, and more effective.

It all worked well until its popularity caught up with it. Eventually, it seemed like everyone in the company wanted a server to host their application. The result was too many servers – “server sprawl” – quickly filling even the largest data centers.

Space wasn’t the only concern. All these servers were expensive and required extensive services to support and maintain them. Overall IT costs surged, and many companies began looking for a new approach.

One solution: a virtualized approach for servers built on x86 technology. With virtualization, one physical server could host many VMs while providing the isolation and resources each application required.

A New Approach Leads to New Concerns

All of this worked well, except for a new risk: the virtualization layer – the hypervisor – could fail. Worse, a single failure in the virtualized environment could trigger a domino effect in which every virtualized application on that host would also fail, creating an unacceptable downtime risk. To avoid this scenario, many companies chose to virtualize only their non-production systems. That way, if a failure did occur, critical systems wouldn’t go down.

As the technology improved, organizations realized that hypervisors could deliver the performance and stability they required, and they began virtualizing all their applications, even production workloads.

On one hand, the effort wasn’t difficult and seemed to pave the way for significant benefits. On the other, it presented new risks related to hardware and availability. Consider, for example, a company running 20 business-critical VMs on a single server, only to have that server fail.

How long would it take to resolve the problem? How much would the downtime cost? What long-term implications would it have for customers, prospects, and the company’s reputation? All of these are reasonable questions, but they often don’t have satisfactory answers.

This scenario points to the need for the right hardware infrastructure and always-available systems as part of any successful virtualization strategy. We’ll explore these topics – and clear up some common misconceptions – in our next article. Stay tuned.

Can We Continue the Conversation?

Share your thoughts about virtualization risks with us on Twitter or LinkedIn.
