Virtualization: unraveling the mysterious buzzword
Sandeep Menon
Wednesday, November 5, 2008
Virtualization is a topic that is now on everybody's mind, and with good reason. It is a critical, sea-changing concept with wide-reaching implications for IT optimization and efficiency. The idea that you can make pools of dynamic resources with unlimited capacity available to users anywhere, at any time, is an extraordinary one that is now real and practical to implement.

However, the concept of virtualization is not as new as some people may think. Virtual machine technology for time-sharing on mainframes dates back to the 1960s. Proprietary Unix systems have been offering virtualization for over a decade now, and software-based virtualization technologies have been available on the x86 platform for years. So what has caused this technology to become such a hot topic today? At a broad level, it is the confluence of challenging business demands, inefficiently sprawled IT infrastructure and the emergence of virtualization technologies that span the entire stack that makes virtualization so attractive for today's organizations. Particularly important is the role played by the open source industry in commoditizing virtualization and bringing it to the mass market. The emergence of the Xen hypervisor as an efficient para-virtualization tool, and the decision of the major Linux vendors to bundle it freely in their distributions, changed the landscape of the market completely. All of a sudden, even the smallest customer, with the simplest of Intel or AMD boxes, could start experimenting with and implementing virtualization at no extra cost. This truly brought virtualization to the masses and made it a much more widely available and popular technology almost overnight, the results of which we are seeing today.

The long-term benefits of virtualization are also important to note. The most commonly cited benefit is cost reduction. While the savings can be significant, they are only part of the value that virtualization delivers. Beyond that, virtualization is a transformational technology that, effectively employed, helps companies create IT systems that are not only highly efficient and cost-effective, but also self-aware enough to adapt automatically and instantly to deliver the capabilities the business needs as conditions change. In a sense, it also lays the foundation for real utility computing.

What is Virtualization?

In layman's terms, virtualization essentially lets one computer do the job of multiple computers by sharing the resources of a single machine across multiple environments. Virtual servers and virtual desktops let you host multiple operating systems and multiple applications locally and in remote locations, freeing you from physical and geographical limitations, in addition to delivering energy savings and lower capital expenses through more efficient use of your hardware resources. When you build a virtual infrastructure, you get high availability of resources, better desktop management, increased security and improved disaster recovery processes. And virtual infrastructure is not limited to servers and desktops: storage virtualization, I/O virtualization and similar technologies all complement the overall landscape.

Today’s powerful x86 computer hardware was originally designed to run only a single operating system and a single application, but virtualization breaks that bond, making it possible to run multiple operating systems and multiple applications on the same computer at the same time, increasing the utilization and flexibility of hardware.

Multiple virtual machines share hardware resources without interfering with each other. A software layer transforms, or 'virtualizes', the hardware resources, including the CPU, RAM, hard disk and network controller, to create a fully functional virtual machine that can run its own operating system and applications just like a 'real' computer.
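
To make this concrete, the sketch below uses the libvirt Python bindings (a common way to manage Xen hosts) to list the guests running on a single physical machine and the CPU and memory each has been allocated. The connection URI and the assumption of a local, libvirt-managed Xen host are illustrative, not prescriptive.

    import libvirt

    # Connect read-only to the local Xen host (URI is illustrative;
    # a KVM host would use qemu:///system instead).
    conn = libvirt.openReadOnly("xen:///")

    # Every running guest shares the same physical CPU, RAM, disk and NIC.
    for dom_id in conn.listDomainsID():
        dom = conn.lookupByID(dom_id)
        state, max_mem_kb, mem_kb, vcpus, cpu_time_ns = dom.info()
        print("%-20s vCPUs=%d RAM=%d MB" % (dom.name(), vcpus, mem_kb // 1024))

    conn.close()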

What is Para Virtualization?

Traditional, or full, virtualization depends on the Virtual Machine Monitor to provide a software layer that handles I/O virtualization, coarse-grained memory management and CPU virtualization. The advantage of full virtualization is that guest operating systems do not need to be modified. However, performance is negatively impacted, because privileged operations must be trapped and handled by the Virtual Machine Monitor layer.
Para-virtualization, on the other hand, is a technique in which the Virtual Machine Monitor is supplemented by an Application Programming Interface that the guest can call directly. It replaces hard-to-virtualize processor instructions with procedure calls that provide the same functionality, and thus delivers higher performance than full virtualization. The downside is that the hardware-dependent portions of the guest operating system must be modified to become aware of the virtualization layer. This becomes particularly difficult with proprietary operating systems, unless the vendor agrees to provide that support.
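
As an illustration of what "making the guest aware of the hypervisor" looks like in practice, the sketch below defines a para-virtualized Xen guest through libvirt. The guest boots a Xen-aware kernel directly (OS type 'linux') rather than emulated firmware (OS type 'hvm', which full virtualization would use). The guest name, file paths and sizes are hypothetical.

    import libvirt

    # Illustrative domain definition for a para-virtualized (PV) Xen guest.
    # Note the Xen-aware kernel/initrd: the guest OS cooperates with the
    # hypervisor, which is exactly the modification para-virtualization needs.
    # A fully virtualized guest would instead declare <type>hvm</type> and
    # boot an unmodified operating system.
    pv_xml = """
    <domain type='xen'>
      <name>pv-guest-demo</name>
      <memory>524288</memory>  <!-- 512 MB, in KiB -->
      <vcpu>2</vcpu>
      <os>
        <type>linux</type>
        <kernel>/boot/vmlinuz-xen</kernel>
        <initrd>/boot/initrd-xen</initrd>
        <cmdline>root=/dev/xvda1</cmdline>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/xen/images/pv-guest-demo.img'/>
          <target dev='xvda' bus='xen'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("xen:///")
    dom = conn.defineXML(pv_xml)   # register the guest with the host
    dom.create()                   # boot it
    print("Started", dom.name())
    conn.close()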

How can customers use Microsoft Windows in a para-virtualized environment?

One of the landmark outcomes of the agreement between Novell and Microsoft has been the development and launch of the Microsoft Virtual Driver Pack, which allows Windows to be used alongside Linux in a para-virtualized environment. Using the driver pack, customers can run their Windows partitions highly efficiently, even though they are running on top of a Xen hypervisor layer. This opens up the interoperability window that is so important to customers running mixed environments.

Do we need Virtualization?

Whether we realize it or not, IT networks are intrinsically linked and mapped to larger movements within society. When the Internet began to change the way people communicate and conduct business, the network followed suit. Expansion to multiple servers was the norm in order to carry and house the increasing gigabytes of data created on a daily basis. The advent of the remote site reflected the trend toward a decentralized world, where workers migrated freely throughout the globe, and demanded the ability to take their work with them. The rise of laptops and mobile devices followed, further expanding the data center as users became dependent on the ability to be "always on" regardless of location.
In this "always on," mobile world, society has become increasingly aware of the need for security and control over access to information. The data center has mirrored this trend as well, bringing many of the servers that were spread through corporate outposts back under central control at company headquarters. Although this move brought a sense of stability to the network, it demanded an ever-increasing expansion of the data center footprint. Data growth has continued unabated, and the rapid deployment of new servers has been the easiest way to keep up with the pace of business. However, this strategy has created ever-increasing costs that businesses are just starting to grapple with. Workloads are increasingly difficult to manage, the time available for deploying new services is extremely short, and capacity and availability planning involves either living with risk or sitting on standby resources at great cost. Virtualization, combined with the right management and control tools, is the practical answer to many of these problems.

The relationship between Virtualization and Green IT

Virtualization offers great promise: flexibility and cost efficiency, while reducing the corporate burden on the environment. It can be a key component in addressing enterprise power consumption in the following ways:

* Rapid application validation and deployment
* Application portability
* Dynamic load balancing
* Failover with minimal service interruption
* Extended life for legacy operating systems and hardware
* Simplified physical infrastructure in a heterogeneous software environment

With new virtualization technologies rapidly emerging, we are also seeing new industry organizations, such as The Green Grid, that are committed to making data center energy efficiency an industry standard. With the backing of these technology consortia, clean virtualization strategies will create new usage models that can transform the entire IT organization.

"If It Sounds Too Good to be True…" - The Virtualization Management Challenges

The case for virtualization has been made; however, we would be remiss if we recommended rushing headlong into a virtualization strategy without considering the impact on the organization. Unfortunately, the benefits of virtualization come hand in hand with more complex server management issues. To achieve the lower costs and higher efficiencies that virtualization promises, one must also adopt automated virtualization management.

Imagine assembling the world's best orchestra. You scour every continent for the best violinists and trumpet players, the most skilled drummers and harpists. But you fail to hire a conductor, let alone an experienced one. Everyone plays at once, the tempo is off, and no one knows when to begin the next movement. It is hardly worth the time and effort of assembling the orchestra in the first place.

The same holds true for virtualization. There is a whole set of new management challenges to consider. Just as virtualization reduces physical server sprawl, it can inadvertently create virtual server sprawl, which in turn brings a whole host of unanticipated capacity and resource allocation issues. Understanding how to manage and allocate resources effectively is vital to getting the most from your new arsenal of virtual machines, and this is where automation tools come into play.

Let's consider a common IT procedure and how it might play out without automated virtualization management. IT regularly "brings down" a server for updating or servicing, a task usually performed manually in today's data center. With virtualization consolidating a number of tasks and applications onto one server, many parts of the business, not just isolated departments, are affected by these power-downs. And in the event of a server failure, you risk taking huge portions of the business completely offline. To avoid these situations, you will likely need to create multiple fallback or mirrored sites to hedge your bets.

Automated management can alleviate the heavy manual process of moving files and applications. Not only does this help avoid an offline situation, it enables effective server maintenance without risk: if a physical server fails, services can be redeployed automatically and rapidly for business continuity. You can plan maintenance schedules with confidence and less hassle while keeping the business running smoothly, all the while securing cooling and power savings and reducing the impact of your data center on the environment.
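
As a sketch of what such automation can look like, the snippet below uses the libvirt Python bindings to live-migrate every running guest off a host before planned maintenance. The host names and the SSH transport are illustrative assumptions; a real management tool would add error handling, scheduling and placement policy on top.

    import libvirt

    # Illustrative hosts: "host-a" is about to go down for maintenance,
    # "host-b" will temporarily absorb its guests.
    src = libvirt.open("xen+ssh://host-a/")
    dst = libvirt.open("xen+ssh://host-b/")

    for dom_id in src.listDomainsID():
        dom = src.lookupByID(dom_id)
        print("Live-migrating", dom.name(), "to host-b ...")
        # VIR_MIGRATE_LIVE keeps the guest running while its memory is
        # copied, so users see no outage during the move.
        dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()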

The author is Country Head, Novell India
