The less versatile of the two main high availability technologies in Windows Server 2008 R2, Network Load Balancing nevertheless offers functionality that cannot be easily replicated through other native OS means. This article examines its features and how to deploy it.
There are two main high availability technologies incorporated into the Windows Server 2008 R2 platform. The first, known as Failover Clustering, tends to receive more attention thanks to the wider range of scenarios in which its benefits can be fully realized. Its somewhat neglected sibling, Network Load Balancing, is less versatile, but it deserves its share of accolades, since it offers functionality that cannot be easily delivered through other means native to the operating system. The purpose of this article is to give it the overdue credit it deserves by describing its features in a comprehensive manner, including the latest improvements introduced in Windows Server 2008 R2 Service Pack 1.
Let's start by pointing out a few unique characteristics of Network Load Balancing (NLB) that distinguish it from Failover Clustering. First and foremost, it is important to realize that its primary purpose is to provide high availability through multi-instancing rather than automatic failover, although there is a built-in, heartbeat-based mechanism that detects a server failure and redirects new requests to surviving nodes. This results directly from the underlying mechanism on which the technology is based. More specifically, an NLB cluster consists of up to 32 servers running in parallel, each hosting an identically configured instance (sharing at least one common IP address) of the service or application whose availability you want to ensure. This contrasts with the Failover Clustering design, where only one instance of a specific clustered resource is active at a time. This principle implies, in turn, that highly available applications or services either have to be stateless or their state data must reside on a separate system shared across all NLB nodes.
This effectively limits the applicability of this approach, although there are workarounds, described shortly, that help remediate the limitation. Since, from the client perspective, it is not relevant which node responds to an incoming request, requests can be load balanced according to a predefined algorithm, as long as its outcome is deterministic and consistent cluster-wide.
To comply with these requirements, the NLB component is implemented as a packet filter driver named nlb.sys, positioned below the IP stack and above the network adapter driver, running on all cluster nodes with the same configuration settings, which determine how incoming traffic targeting the shared IP addresses is handled. By applying the same selection algorithm (based on so-called port rules), only one node becomes responsible for processing any given incoming packet and responding to it; the others simply discard it. Note that the resulting selection is not affected in any way by performance or utilization levels; instead, it is a function of statically defined parameters. You can, however, designate individual nodes that should handle a larger percentage of traffic than their counterparts by assigning them higher host-level load weight values.
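The node-selection logic described above can be sketched in Python. This is a simplified model, not the actual (undocumented) nlb.sys hashing algorithm: it shows how every node, given identical configuration, can independently and deterministically compute the same owner for a packet, with higher load weights translating into a larger share of the traffic.

```python
import hashlib

def owner_node(flow_key: str, weights: dict) -> str:
    """Illustrative only: map a flow to exactly one node using a
    deterministic hash and per-host load weights. Every node runs the
    same computation on the same packet, so exactly one node (the
    "owner") responds and the rest discard the packet."""
    # Derive a stable 32-bit value from the flow identifier.
    h = int.from_bytes(hashlib.sha256(flow_key.encode()).digest()[:4], "big")
    total = sum(weights.values())
    bucket = h % total
    # Walk cumulative weight ranges; a higher weight covers a larger
    # share of the bucket space, hence a larger share of traffic.
    for node in sorted(weights):
        if bucket < weights[node]:
            return node
        bucket -= weights[node]
    raise AssertionError("unreachable: bucket is always < total")
```

Because the function depends only on the flow key and the statically defined weights, any two hosts with the same configuration agree on the owner without exchanging per-packet state, which is exactly why selection cannot react to actual node utilization.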
Each port rule consists of criteria that determine its applicability by taking into consideration source and target IP addresses, port range and transport layer protocol (TCP, UDP or both). If a match is found, the rule's configured filtering mode is applied. Depending on the mode selected, IP traffic is blocked, processed by a single node (the one with the highest priority, i.e., the lowest numerical host priority identifier defined as part of host configuration), or handled in a distributed fashion according to the value of the load weight parameter (another host-level setting, which, as mentioned above, determines the percentage of traffic processed by a given node) and one of three affinity settings -- None, Single or Network.
The first of these uses a combination of the source IP address and source port to decide which cluster node is responsible for processing an incoming request. The second disregards the port value and relies on the source IP address only, which means that traffic matching a specific rule and originating from the same IP address is processed consistently by the same node. Finally, the last of them also ignores the port but uses the class C subnet of the source address (rather than the individual IP address) to identify the target node. This addresses scenarios where traffic from the same client is delivered via different proxy servers. Note that in each of these cases, changes to cluster membership (adding or removing a node) trigger convergence, which redistributes load across the nodes, potentially leading to a situation where a different node starts processing requests from a given client.
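The practical difference between the three affinity settings comes down to which parts of the request identify a "client." A minimal sketch (function name and key shapes are assumptions for illustration, not nlb.sys internals):

```python
def affinity_key(src_ip: str, src_port: int, affinity: str) -> str:
    """Build the client-identifying key for each affinity mode.
    Requests producing the same key are handled by the same node."""
    if affinity == "None":
        # IP address plus port: two connections from one client can
        # land on different nodes.
        return "%s:%d" % (src_ip, src_port)
    if affinity == "Single":
        # Source IP only: all traffic from one address sticks to one node.
        return src_ip
    if affinity == "Network":
        # Class C subnet (first three octets): clients reaching the
        # cluster through different proxies in the same /24 still map
        # to the same node.
        return ".".join(src_ip.split(".")[:3])
    raise ValueError("unknown affinity mode: %s" % affinity)
```

Feeding this key into a deterministic node-selection function (such as the earlier sketch) yields consistent placement: Single keeps a client pinned regardless of port, while Network keeps an entire /24 pinned.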
Additional considerations concern the network infrastructure. Since NLB operates on the premise that all nodes of a cluster simultaneously receive the same incoming client traffic targeting the shared IP addresses, its most common implementation involves a switch (or rather a pair of redundant switches) to which each load-balanced network adapter is connected. At the data link layer, the shared IP addresses translate into NLB-generated MAC addresses, and your decision regarding their format has significant implications. In particular, you can choose between unicast and multicast cluster operation modes. The first results in a single unicast address that replaces the original MAC addresses burned into each load-balanced network adapter. In the second, each network interface retains its existing MAC address but is also assigned an additional, multicast one. Since unicast mode typically leads to switch flooding, you might want to consider the latter, as long as your network devices (including access switches and upstream routers) support multicast functionality. If that's the case, you can either create static entries in the switch's content addressable memory for the ports connected to each load-balanced network adapter or, if the switch is capable of Internet Group Management Protocol snooping, enable the IGMP multicast option.
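The generated MAC addresses follow documented prefix conventions: 02-BF for unicast and 03-BF for multicast (each followed by the four cluster IP octets), and 01-00-5E-7F plus the last two IP octets for IGMP multicast. The helper below is a hypothetical illustration of that derivation, useful when pre-creating static CAM entries:

```python
def nlb_mac(cluster_ip: str, mode: str) -> str:
    """Derive the cluster MAC address NLB generates for a given
    cluster IP, per the documented prefix conventions."""
    octets = [int(o) for o in cluster_ip.split(".")]
    if mode == "unicast":
        parts = [0x02, 0xBF] + octets          # replaces adapter MACs
    elif mode == "multicast":
        parts = [0x03, 0xBF] + octets          # added alongside adapter MACs
    elif mode == "igmp":
        # IGMP mode uses a MAC in the IANA multicast range, built from
        # the last two octets of the cluster IP.
        parts = [0x01, 0x00, 0x5E, 0x7F] + octets[2:]
    else:
        raise ValueError("unknown mode: %s" % mode)
    return "-".join("%02X" % p for p in parts)
```

For example, a cluster IP of 192.168.1.10 yields 02-BF-C0-A8-01-0A in unicast mode, which is the address you would pin to the relevant switch ports if you opt for static CAM entries.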
The most straightforward way of installing the necessary NLB software components relies on the Add Features Wizard. Alternatively, you can accomplish this by running servermanagercmd.exe with the -install nlb parameter. Once that completes, you can create a cluster by designating its first node using the Network Load Balancing Manager console. The New Cluster wizard prompts for the network interface that will accept load-balanced traffic, host priority, dedicated IP addresses, initial host state (Started, Stopped, or Suspended), cluster IP addresses (either IPv4 or IPv6) and full Internet name, operating mode (unicast, multicast, or IGMP multicast) and port rules.
The Windows Server 2008 R2 implementation of NLB introduced several new features. The most apparent one, at least from the administrative perspective, is PowerShell support, which simplifies automation of cluster management tasks and offers an alternative to the nlb.exe command line utility. In the area of monitoring, NLB Health Awareness, based on the System Center Operations Manager 2007 NLB Management Pack, makes NLB aware of IIS health status, facilitating scenarios where a Web server failure triggers the automatic removal of the host node from the cluster. NLB offers full support for IPv6 as well as integration with Forefront Threat Management Gateway (FTMG) and Forefront Unified Access Gateway (UAG) based DirectAccess and SSL VPN. Extended affinity remediates the shortcoming mentioned earlier, in which client connections could be redirected to a different node following an event or configuration change that triggers convergence. This is accomplished by applying the Timeout setting as part of a port rule definition, which can be done either from the Network Load Balancing Manager or with the Set-NlbClusterPortRule PowerShell cmdlet and its -NewTimeout parameter, effectively preserving the client-to-node association for an arbitrarily assigned period of time. This is a common requirement in shopping cart e-commerce scenarios or when running SSL VPN sessions to Unified Access Gateway arrays. It is also worth mentioning that it is possible to perform rolling upgrades of NLB clusters from Windows Server 2003 and Windows Server 2008. Keep in mind that you will not be able to take advantage of Windows Server 2008 R2-specific features until this process is completed.
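Conceptually, extended affinity amounts to a sticky client-to-node table whose entries survive convergence until the configured timeout elapses. The toy model below (class and method names are invented for illustration, not an NLB API) captures that behavior:

```python
import time

class ExtendedAffinityTable:
    """Toy model of extended affinity: remember which node served a
    client and honor that mapping for `timeout_seconds`, even if a
    convergence would otherwise reassign the client elsewhere."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self._entries = {}  # client_ip -> (node, expiry timestamp)

    def record(self, client_ip, node, now=None):
        # Refresh the expiry each time the client is served.
        now = time.monotonic() if now is None else now
        self._entries[client_ip] = (node, now + self.timeout)

    def lookup(self, client_ip, now=None):
        # Return the sticky node, or None once the timeout has passed
        # (at which point normal load balancing applies again).
        now = time.monotonic() if now is None else now
        entry = self._entries.get(client_ip)
        if entry is None:
            return None
        node, expires = entry
        if now > expires:
            del self._entries[client_ip]
            return None
        return node
```

This mirrors why extended affinity suits shopping carts and SSL VPN arrays: as long as the client returns within the timeout, its requests keep landing on the node that holds its session state, regardless of intervening convergence events.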
As documented on the Microsoft
TechNet Wiki, Windows Server 2008 R2 Service Pack 1 further extends scalability
and high availability of UAG-based DirectAccess by including support for
IPv6 transition technologies (6to4 and ISATAP) in its NLB implementations.
Note, however, that applying Service Pack 1 to NLB-based FTMG and UAG clusters
requires an additional restart of all nodes (for a total of two) once the
installation is completed.
This article was originally published on Thursday Mar 17th 2011