The previous two articles in this series on Windows 2003 Server based high availability solutions examined two VMware products, VMware ESX 3.0 and VMware Server, for setting up clustered configurations. Both can certainly be used not only in testing and development scenarios but also in production environments, especially as part of the enterprise-class VMware Infrastructure 3 platform, of which ESX 3.0 is a component. However, support for any operating-system-related issues might be problematic, partly because of Microsoft's restrictive support policy in this regard, and partly because of clustering issues on the Windows 2003 Server SP1 platform that VMware is currently working to resolve.
This situation is hardly surprising, considering Microsoft's own competing virtualization software, Virtual Server and Virtual PC, which provide functionality equivalent to VMware Server and VMware Workstation, albeit without an offering that would be capable of challenging the performance and reliability of VMware Infrastructure 3. Since Virtual PC is intended for desktop virtualization (primarily for development purposes), we will focus on its more powerful counterpart, Microsoft Virtual Server (currently in version 2005 R2).
Microsoft Virtual Server
Microsoft's venture into virtualization started with its acquisition of Connectix in 2003, which brought the Virtual PC and Virtual Server product lines into its portfolio. Since then, it has extended the original feature sets, adding (with the release of Virtual Server 2005 R2) support for SMP and x64 (but not IA-64) host operating systems, such as Windows 2003 Standard, Enterprise, and Datacenter editions, as well as Windows XP Professional x64. Guest operating systems, however, remain limited to single-processor, 32-bit installations. Virtual Server 2005 R2 also supports PXE-based virtual machine deployments, the ability to host Red Hat and SUSE Linux-based guests, and improved hyper-threading support.
Microsoft also developed and made available free of charge the Virtual Server 2005 Migration Toolkit, which contains tools and documentation to assist with copying standard Windows installations from physical servers into a virtual environment by leveraging Automated Deployment Services. (For more information on the latter, refer to our earlier article.) Note that some of the functionality missing from Virtual Server 2005 R2 is available in VMware Server (e.g., support for dual-processor and 64-bit Windows, Linux, and Solaris guest operating system installations).
While the Microsoft product, like VMware Server, is offered as a free download, in both cases you are required to purchase licenses for each actively running guest operating system. This restriction is significantly relaxed on Windows 2003 Server Enterprise Edition installations, which permit you to operate up to four virtual systems without corresponding licenses. This rule applies regardless of the type of virtualization product used, so it extends to VMware Server as well.
In general, three types of server clustering arrangements incorporate Virtual Server 2005 R2 in their configuration:
- Virtual machine guest clustering with a shared virtual SCSI controller implements virtual clusters, with up to two nodes set up as guest machines on the same physical host. This arrangement protects against the failure of one of the guest operating systems or of individual clustered applications (such a failure triggers failover to the other virtual node), but not against issues that affect the physical host. Virtual components emulating network adapters, SCSI controllers, and disks must be configured according to clustering requirements. A detailed description of this type of setup (including software and hardware requirements and a step-by-step installation procedure) is provided in the Microsoft TechNet article Using Microsoft Virtual Server 2005 to Create and Configure a Two-Node Microsoft Windows Server 2003 Cluster.
- Virtual Server host clustering involves physical clusters (which require appropriate hardware, as listed in the Microsoft Windows Server Catalog) running Virtual Server 2005 R2, with each virtual machine configured as a clustered resource group. The group consists of a Physical Disk resource, where the virtual disk files are stored, and a Generic Script resource that depends on it. Appendix B of the easy-to-follow Virtual Server Host Clustering Step-by-Step Guide for Virtual Server 2005 R2 presents an example of such a script.
This makes a virtual machine cluster-aware, allowing for automatic failover and failback. It provides high availability at both the virtual machine and the physical host level, which can be increased further by adding extra nodes to the cluster. The maximum number of nodes is determined by the version of the host operating system and the type of shared disks, up to eight nodes in the case of Windows Server 2003 Enterprise and Datacenter Editions with Fibre Channel or iSCSI-based storage. On the other hand, the cluster does not monitor applications that the virtual machine hosts, so their failure is not automatically detected and does not trigger a failover. This type of setup helps remediate virtual or physical system issues and addresses situations where you must maintain application uptime despite extended maintenance windows (e.g., those caused by frequent hardware or software updates).
This capability constitutes a significant advantage of Microsoft Virtual Server over its competitor, VMware Server, which does not support it out of the box. It is, however, possible to configure GSX Server (VMware Server's predecessor) running on Windows 2003 Server as a clustered resource by implementing Virtual Machine EX from VM6.
With Virtual Machine EX, you can migrate virtual machines across physical servers in a manner similar to VMware's VMotion. Furthermore, VMware VirtualCenter 2 (in combination with ESX 3.0) introduces Distributed Availability Services, which provides an alternative to server clustering by automatically moving virtual machines across hosts based on availability or load-balancing criteria.
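As a rough illustration of the host-clustering resource group described earlier, the following cluster.exe commands sketch how the Physical Disk and Generic Script resources might be created and linked. The group, resource, and disk names, along with the script path, are hypothetical; the authoritative procedure (including the control script itself) is in the Step-by-Step Guide.

```shell
:: Hypothetical sketch: building the Virtual Server resource group with
:: cluster.exe. Names ("VS1 Group", "Disk R:", R:\havm.vbs) are examples.
cluster group "VS1 Group" /create

:: Physical Disk resource holding the virtual machine's .vhd/.vmc files
cluster resource "Disk R:" /create /group:"VS1 Group" /type:"Physical Disk"

:: Generic Script resource pointing at the control script from the guide
cluster resource "VS1 Script" /create /group:"VS1 Group" /type:"Generic Script"
cluster resource "VS1 Script" /priv ScriptFilePath="R:\havm.vbs"

:: The script resource must depend on the disk that stores the VM files
cluster resource "VS1 Script" /adddep:"Disk R:"
cluster group "VS1 Group" /online
```

Creating a Physical Disk resource from the command line typically requires additional private properties (such as the disk signature), so in practice the Cluster Administrator GUI is often used for that step.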
- Virtual machine guest clustering with iSCSI consists of guest machines residing on separate physical, non-clustered hosts. (iSCSI can also function with hardware-based clusters, which fall into the host clustering category described above, as well as provide storage for the guest-clustering scenario described in the first section.) This semi-hybrid solution relies on the unique capabilities of Microsoft iSCSI Software Initiator 2.0, which enables guest operating systems to communicate directly with clustered storage; it is therefore not possible with SCSI- or Fibre Channel-attached disks.
As explained earlier in the series, this communication is handled by encapsulating SCSI commands, status signals, and data exchanged between storage devices and cluster nodes into IP packets, which are then transmitted via network cards, host bus adapters, or multifunction offload devices. This, in turn, makes it possible for virtual machines to emulate a hardware-based clustering configuration, with guests serving as nodes operating on distinct physical systems and external iSCSI storage hosting their quorum and clustered disk resources.
For this type of arrangement to work, ensure that shared iSCSI-based disks are mapped and mounted on the guest operating systems at boot time, so that they are available when the Cluster Service starts and attempts to activate them as clustered resources. To accomplish this, make sure that the "Automatically restore this connection when the system boots" option is selected for clustered disks on the Targets tab in the graphical interface of Microsoft iSCSI Software Initiator 2.0 (after logging on to the iSCSI storage device hosting the disks). Once this is completed, assign an arbitrary drive letter to each of the shared disks (via the Disk Management MMC snap-in from the same guest operating system) and add it to the list of disks on the Bound Volumes/Devices tab of the Initiator dialog box.
For more details, refer to the Using iSCSI with Virtual Server 2005 R2 TechNet article.
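The graphical steps above can also be scripted with iscsicli.exe, the Initiator's command-line interface. The sketch below is hypothetical: the target IQN and drive letter are placeholders, and the exact command set should be verified against the iscsicli help on your Initiator version.

```shell
:: Hypothetical sketch, run inside each guest cluster node.
:: Equivalent of "Automatically restore this connection when the system
:: boots": create a persistent login to the target (the wildcards accept
:: the initiator defaults; "cluster-disks" is a placeholder IQN suffix).
iscsicli PersistentLoginTarget iqn.2006-01.com.example:cluster-disks T * * * * * * * * * * * * * * * 0

:: Equivalent of adding the disk on the Bound Volumes/Devices tab, so the
:: service waits for the volume before the Cluster Service starts:
iscsicli AddPersistentDevice Q:\
iscsicli BindPersistentVolumes
```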
The Microsoft Knowledge Base article Support for server clustering between Windows Server 2003 and Virtual Server 2005 R2 lays out the fairly strict guidelines that must be satisfied in guest and host clustering configurations for Microsoft to support them. If you are using Virtual Server 2005, rather than its R2 edition, refer to Requirements for configuring clustering in Virtual Server 2005.
To optimize performance of a clustered installation, create virtual hard disks (represented by .vhd files) on physical disks separate from those that the host operating system uses. In the case of guest clustering, also avoid disks hosting the system directory or paging file. It is equally important to configure the virtual hard disks as fixed-size, rather than dynamically expanding, to limit fragmentation. This, however, eliminates the ability to compact them to recover unused space.
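Fixed-size disk creation can also be automated through the Virtual Server COM API rather than the Administration Website. The fragment below is only a sketch: the path and size are examples, and the method names should be checked against the Virtual Server 2005 R2 programming reference.

```shell
:: Hypothetical sketch: create a fixed-size .vhd on a dedicated physical
:: disk (E:) via the Virtual Server COM API. Verify method names against
:: the Virtual Server 2005 R2 programming reference.
>  makevhd.vbs echo Set vs = CreateObject("VirtualServer.Application")
:: CreateFixedVirtualHardDisk(path, size in MB) returns a task object
>> makevhd.vbs echo Set task = vs.CreateFixedVirtualHardDisk("E:\VHDs\quorum.vhd", 512)
>> makevhd.vbs echo task.WaitForCompletion(-1)
cscript //nologo makevhd.vbs
```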