Cables, wires, spaghetti. No matter what you call them, virtualized data centers need a lot of them. But life can be made easier by virtualizing your server I/O with a device such as Xsigo's I/O Director, which takes one cable and makes it look like many as far as your hypervisors are concerned, simplifying the configuration of "north-south" communication between virtual machines, the network and storage.
But what about "east-west" communications -- the data that flows between the various virtual machines in your data center? How can that be simplified? Most of this traffic probably flows over your Layer 2 Ethernet networks, traversing multiple network layers to get from one server to another, which is inefficient. And maintaining connections between virtual machines when they move can be complex, involving manual reconfiguration of VLANs and switches -- a process that is error-prone and subject to IP address limitations. Setting up a simple connection between server A and server Z in a data center may well involve both your server staff and your networking team, and it can take many hours. Whatever happened to automation, scalability and agility?
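To get a sense of the manual plumbing involved, here is a rough sketch of the kind of per-host steps an admin might run on a Linux hypervisor to stretch a VLAN to a new host. This is illustrative only: the interface names, bridge name and VLAN ID are assumptions, and the commands are generic iproute2, not anything Xsigo-specific.

```shell
# Illustrative only: eth0, br100 and VLAN 100 are assumed names.
# 1. Create a VLAN sub-interface on the destination host's uplink NIC.
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up

# 2. Create a bridge for the VM's virtual NIC and attach the VLAN to it.
ip link add name br100 type bridge
ip link set br100 up
ip link set eth0.100 master br100

# 3. ...and VLAN 100 must also be trunked on every physical switch port
#    along the path -- a step that typically falls to the networking team.
```

Multiply those steps by every host and switch on the path, coordinated across two teams, and the "many hours" figure becomes easy to believe.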
As it happens, Xsigo is aiming to tackle the problem head-on with a software upgrade for I/O Director, a software plugin for its XMS management system and new drivers for customers' servers. Installing all this software -- collectively called SFS 1.0 Server Fabric Suite -- creates what the company is calling the Xsigo Server Fabric.
At its most basic level, Xsigo Server Fabric functionality means that any VM in any server connected to a Xsigo box (by a 20Gbps or 40Gbps InfiniBand connection) can communicate via Xsigo's Private Virtual Interconnect with any other VM running in any server connected to the same Xsigo box, or even to a different but connected Xsigo box. Up to four Xsigo installations can be connected together, linking up to 1,000 physical servers and up to 64,000 VMs. "That means I am not consuming VLANs or IP addresses just to communicate from one VM to another," said Jon Toor, Xsigo's marketing VP. "Why should I have to, just because they are on different physical servers?"
One immediate benefit of this type of virtualized networking infrastructure is the ease with which Private Virtual Interconnects can be made, said Toor. "There are enormous time savings to be had. The server guy can now connect server A to server Z in software, without getting the networking guy involved at all. What used to take four hours can now be achieved in less than a minute with a single drag and drop." Once two VMs are linked, the Server Fabric manages the Private Virtual Interconnect automatically, maintaining it even when VMs are vMotioned to new physical hosts within the Server Fabric.
But there's another, less obvious benefit: better performance. With each server connected to a Xsigo director over a 40Gbps InfiniBand connection, any physical server or VM can potentially communicate with any other at up to 40Gbps. "I/O-intensive applications can get a huge speed improvement. For databases or applications like backup or vMotion, this provides a huge performance advantage," said Toor.
Xsigo designed its Server Fabric to be open, so customers can continue to use their core Ethernet and Fibre Channel infrastructure from Cisco, Juniper, Brocade or other vendors. As part of this openness, the company also announced the "Server Fabric Alliance" of technology partners, including Blue Coat Systems, Hitachi Data Systems and Trend Micro, which aims to make it easier for customers to create and deploy fully virtualized cloud data centers, including virtualized appliances. Xsigo claims this approach results in 80 percent less hardware and 50 percent lower power consumption.
Xsigo's Server Fabric will work with InfiniBand-based Xsigo directors (not, initially, the newer Ethernet-based ones). The SFS 1.0 Server Fabric Suite will cost $1,000 per physical host server attached to the fabric. It certainly sounds good, especially as it doesn't lock customers into a single-vendor solution. The proof of the pudding, however, won't come until December 2011, when the software is released. But you can't help thinking that the highly automated, scalable, agile data center of tomorrow is getting closer by the day.
Paul Rubens is a journalist based in Marlow on Thames, England. He has been programming, tinkering and generally sitting in front of computer screens since his first encounter with a DEC PDP-11 in 1979.