Each year brings a fresh whirl of tech buzzwords. Terms like "proactive," "360-degree view" and "information life cycle management" have all been in vogue in recent years. This year, the technology du jour is "virtualization," a topic that has more than just hype behind it and has broken through into the mainstream.
The concept behind server virtualization is not new, however. IBM has been creating virtual machines on its mainframes since the 1960s.
So what is it? Virtualization breaks the link between the hardware and the applications that run on it. This includes virtual storage, virtual networking, application virtualization and, the focus of this tutorial, server virtualization. It requires the installation of a software layer that allows more than one server to operate on the same piece of hardware. There are two basic approaches to this:
1. The one popularized by VMware (now part of EMC, of Hopkinton, Mass.) runs a virtualization layer (called a hypervisor) between the hardware and the operating system. With this method, several operating systems can run on the same set of hardware. The drawback is that each virtual server requires its own operating system instance, which adds to licensing costs and system overhead.
2. Sun Microsystems, of Santa Clara, Calif., takes the opposite approach. It installs its Solaris operating system directly on the hardware. Different applications run in isolated areas called "containers," but they all share the same operating system instance. Shares of the physical server resources are then assigned to each of the containers on a permanent or dynamic basis.
There are many reasons for adopting server virtualization. A popular one is better resource utilization. It is not uncommon to see servers running at 10 percent or less of their capacity at various points in the day. By letting several virtual servers share a single set of hardware, a much higher average utilization rate is achieved, and hardware and support costs are lowered.
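The arithmetic behind the utilization argument is straightforward. The sketch below uses invented numbers (eight servers averaging 10 percent CPU) to show the effect of consolidation; none of the figures come from a specific vendor:

```python
# Hypothetical consolidation math: eight underused servers become
# eight virtual machines on a single physical host.

standalone_pct = [10] * 8   # assumed: each server averages 10% CPU

# Each standalone server wastes 90% of its capacity.
avg_standalone = sum(standalone_pct) // len(standalone_pct)

# Packed onto one physical host, the same workloads add up.
consolidated_pct = sum(standalone_pct)

print(f"Average utilization, standalone: {avg_standalone}%")    # 10%
print(f"Utilization after consolidation: {consolidated_pct}%")  # 80%
```

In practice you would leave headroom for workload peaks rather than packing a host this tightly, which is one reason capacity-planning tools model each virtual machine as a separate workload.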
Virtualization also makes it easier to provision and reallocate servers. Instead of having to manually set up a server, the virtualization software can set up a server using a pre-existing template and shift server images from one physical server to another to balance workloads or improve efficiency. It can also automatically set up a new virtual server on a different machine when there is a hardware malfunction. Each application is isolated from the others, which provides greater security.
In practice, virtualization requires much more than just a simple hypervisor layer. In fact, virtualization has broken through as a technology largely because all the necessary components (i.e., processors, utilities and management tools) for a completely virtual ecosystem are now in place.
AMD and Intel have both included virtualization support in their chips. In AMD's case, this involves taking some of the operations that would normally be handled in software by the virtual machine monitor and building them into the chips' instruction sets. Similarly, Intel has released its Intel Virtualization Technology (IVT) for desktop, Xeon server and Itanium server CPUs.
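On a Linux host, you can tell whether a processor advertises these extensions by looking for the "vmx" (Intel VT) or "svm" (AMD-V) flag in /proc/cpuinfo. A minimal sketch, run here against a sample string rather than a live machine:

```python
# "vmx" in the flags line marks Intel VT; "svm" marks AMD-V.
# On a real host you would feed this the contents of /proc/cpuinfo.

def cpu_flags(cpuinfo_text: str) -> set:
    """Collect the flag tokens from /proc/cpuinfo-style text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags

# Sample in the format /proc/cpuinfo uses (flag list invented for the demo):
sample = "processor\t: 0\nflags\t\t: fpu vme de pse vmx"

print("Intel VT:", "vmx" in cpu_flags(sample))   # True
print("AMD-V:  ", "svm" in cpu_flags(sample))    # False
```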
Utilities, too, are starting to support virtual servers. Backing up virtual servers, for example, poses some unique challenges, many of them a side effect of consolidation itself. Sharing one set of hardware works out well when every server is running at a low level, but backup is a resource-intensive activity in terms of disk I/O, processor utilization and traffic through the network interface card, so backups that run simultaneously can swamp the shared host. Backup vendors have therefore developed a number of techniques for backing up virtual servers. CommVault Systems offers customers the option of backing up an entire virtual server into a single large file, and Syncsort's Backup Express can either back up each virtual server individually or back up the entire physical server.
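The contention these vendors are working around can be seen with a little hypothetical arithmetic: if every co-located virtual server starts its backup at once, aggregate demand easily exceeds what the shared host can deliver, while staggering the backup windows keeps peak demand within the host's budget. All throughput figures below are invented for illustration:

```python
# Invented figures: per-VM backup stream rate vs. shared host bandwidth.
VM_BACKUP_MBPS = 60    # assumed per-VM throughput while backing up
HOST_IO_MBPS = 200     # assumed usable I/O bandwidth of the physical host
NUM_VMS = 6

# Everything at once: demand outstrips the host.
simultaneous_demand = NUM_VMS * VM_BACKUP_MBPS       # 360 MB/s

# Stagger into windows so only a few VMs back up concurrently.
max_concurrent = HOST_IO_MBPS // VM_BACKUP_MBPS      # 3 VMs at a time
staggered_demand = max_concurrent * VM_BACKUP_MBPS   # 180 MB/s

print("Simultaneous saturates host:", simultaneous_demand > HOST_IO_MBPS)  # True
print("Staggered fits in budget:", staggered_demand <= HOST_IO_MBPS)       # True
```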
In addition, management tool vendors are including virtual server support in their products. TeamQuest Corporation of Clear Lake, Iowa, for example, supports virtualization throughout its product line. It has a set of five collection agents for VMware ESX Server 2.0, which gather information both on the performance of the virtual machines (e.g., CPU, disk, memory and NIC) and on the ESX service console (e.g., disk space, process-workload and system log messages). Reports and alarms can be set up on each of the individual virtual servers or on the physical server. TeamQuest's capacity planning software can model each virtual machine as an individual workload as well as determine the impact of running several virtual machines on a physical server.
How to Virtualize
The primary action in setting up a virtual server is selecting and installing the virtualization layer. Here are some of the more popular options.
- Xen 3.0: Xen is a lightweight open source hypervisor (less than 50,000 lines of code) that runs on Intel and AMD x86 and 64-bit processors, with or without hardware virtualization support. It supports up to 32-way SMP (symmetric multiprocessing). Xen's paravirtualization approach requires a modified guest operating system, which is why the base hypervisor runs Linux but not Windows guests. XenSource, the company behind the Xen project, has since released XenEnterprise, a version that supports Windows Server and Solaris guests as well.
- Microsoft Virtual Server 2005 R2: Microsoft initially charged for its virtualization technology, and it was limited to Windows guests. With Windows Server 2003 R2, customers can run up to four operating system instances on a physical server. On April 3, Microsoft announced it was making Virtual Server a free download, and it extended support to guests running nine versions of Red Hat and SUSE Linux.
- VMware Server: VMware (EMC) is by far the largest vendor of virtualization technology for x86 platforms. In early 2006, the company released VMware Server, a replacement for GSX Server, which is a single server virtualization platform for Linux and Windows. More than 100,000 downloads of this free product were made in the first week alone. VMware Server has all the features of the GSX Server, and adds support for virtual SMP, Intel Virtualization Technology and 64-bit guest operating systems.
- VMware ESX Server: Although its entry-level product is now free, VMware still charges for its enterprise-class ESX Server. ESX Server runs on x86-based servers and supports Linux (Red Hat and SUSE), Windows (Server and XP), Novell NetWare and FreeBSD 4.9 guests.
- Virtual Iron: Virtual Iron is another company offering Xen-based products. It has four products: two free single server versions, an enterprise version and one for clusters. In addition to the Xen hypervisor, Virtual Iron also includes management tools and an administrative interface.
- IBM Virtualization Engine Platform: This platform encompasses the entire line of IBM servers. As well as the usual hypervisor for server partitioning, it includes virtual I/O and virtual Ethernet, a workload manager and management console.
- SWsoft Virtuozzo: Virtuozzo takes an approach similar to that of Solaris: it runs above, rather than below, the operating system. It comes in two versions, one for Windows and another for Linux, and customers create virtual servers on top of these. One common application is running virtual private servers (VPSes) in a hosting facility. With Virtuozzo, a single physical server can run up to 5,000 VPSes.
By and large, these vendors make it easy to dabble in virtualization. To get your feet wet, simply download one of the free versions listed above and see how it works in your environment.
Although the basic concept of virtualization is likely to be around for quite a while, it is not clear whether virtualization software will always be a separate product. IBM, EMC, Microsoft, AMD, Intel and others are building more virtualization features into their product lines, and this trend will continue. Overall, though, virtualization appears to be moving toward being the default method of server operation rather than a niche technique. Eventually, this may make discrete hypervisors a thing of the past.