"In the past, one problem with data centers was that they are stovepiped, but actually it is more of a hodgepodge," said Richard Villars, vice president storage systems for IDC in Framingham, Mass. "I had an application so I put a server there because the rack was available. Then, I needed to expand the application, so the other server is on another rack on the other side of the room, and the storage is somewhere else."
Several vendors are now moving in the direction of bringing all data center components into a single structure. Or perhaps "directions" would be more accurate, since each is taking a different approach to that problem.
Cisco Systems of Santa Clara, Calif., as one might expect, is taking a network-oriented approach. In 2007 it released the VFrame DC, an appliance running the Java-based VFrame Fabric Virtualization Software, which provisions and reuses infrastructure components including processing, storage and networking. The architecture calls for all the servers to be diskless, with the fabric assigning loads to the CPUs and memory as needed. Administrators establish policies ahead of time, and the appliance then follows those policies in reconfiguring servers or adding new applications on the fly. Krish Ramakrishnan, vice president and general manager of Cisco's Virtualization Unit, said this will work particularly well for organizations that have wide swings in the type of traffic they need to handle, since resources can be reprovisioned in about a minute.
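The policy-driven model described above can be pictured as a small control loop: the fabric holds a pool of diskless servers and shifts them between application environments whenever traffic crosses an administrator-set threshold. The following Python sketch is purely illustrative; the class and field names are hypothetical and do not reflect the actual VFrame software.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Policy:
    # Administrator-defined policy: how much load one server should carry.
    app: str
    max_load_per_server: float  # e.g., requests per second per server

@dataclass
class Fabric:
    # A pool of unassigned diskless servers, plus current assignments per app.
    free_pool: list = field(default_factory=list)
    assignments: dict = field(default_factory=dict)

    def apply_policy(self, policy, current_load):
        """Grow or shrink an app's server set to match its current load."""
        servers = self.assignments.setdefault(policy.app, [])
        needed = max(1, math.ceil(current_load / policy.max_load_per_server))
        while len(servers) < needed and self.free_pool:
            servers.append(self.free_pool.pop())   # reprovision from the pool
        while len(servers) > needed:
            self.free_pool.append(servers.pop())   # return an idle server
        return list(servers)

fabric = Fabric(free_pool=["blade1", "blade2", "blade3", "blade4"])
web = Policy(app="web", max_load_per_server=100.0)

print(fabric.apply_policy(web, current_load=250))  # traffic spike: three servers
print(fabric.apply_policy(web, current_load=80))   # traffic falls: scale back to one
```

The point of the sketch is the division of labor: administrators write the policy once, and the fabric does the minute-by-minute reassignment, which is what makes the roughly one-minute reprovisioning Ramakrishnan describes possible.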
"If an official announces a new policy or service, IT administrators can easily monitor traffic, coordinate with load balancers and storages, and move the appropriate number of servers into that environment," he said. "You can never anticipate how the traffic flow will happen, but [you] need to cope with it when it does."
Brocade Communications Systems' solution is called the Brocade Data Center Fabric (DCF) architecture. Rather than a single product like VFrame, it calls for adding a virtualization layer that encompasses data center and server connectivity, the storage fabric, file management, continuous data protection, data migration, and a centralized management framework.
Blade manufacturer Egenera of Marlboro, Mass., uses what it calls a Processing Area Network (PAN). The blades are diskless, booting from the SAN, and separate blades provide switching and control. These are then managed as a unit by the PAN Manager software. Hewlett-Packard of Palo Alto, Calif., offers management software as well: the HP Virtual Connect Enterprise Manager allows up to 1,600 blade servers to be managed from a single console.
"With these new data center fabric environments the physical assets are managed very consistently," said Villars. "What I am doing is just moving workloads between different compute or storage resources on the fly, without disrupting the end user."
HP and IBM are also developing software to manage the entire data center, including power, cooling and physical security, together with the computing assets. The HP Insight Dynamics VSE supports hypervisors from several other vendors, allowing managers to view all the physical and virtual assets as a pool. In July, IBM announced integration between its Tivoli management software and Johnson Controls' Metasys building management system. Metasys uses XML, SOAP, SNMP and DHCP and operates over an IP interface, making it easy to integrate with the Tivoli Monitoring for Energy Management software.
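The Tivoli-Metasys integration works because Metasys exposes building data in standard formats such as XML over an IP network. As a minimal illustration of what consuming such a feed looks like, the snippet below parses a power-usage report and flags racks drawing too much power. The XML schema here is invented for the example and is not the actual Metasys format.

```python
import xml.etree.ElementTree as ET

# Hypothetical power report, in the spirit of an XML feed from a
# building management system; not the real Metasys schema.
SAMPLE = """<report>
  <rack id="A1"><power-kw>4.2</power-kw></rack>
  <rack id="A2"><power-kw>7.9</power-kw></rack>
</report>"""

def racks_over_threshold(xml_text, limit_kw):
    """Return the IDs of racks whose power draw exceeds limit_kw."""
    root = ET.fromstring(xml_text)
    return [r.get("id") for r in root.findall("rack")
            if float(r.findtext("power-kw")) > limit_kw]

print(racks_over_threshold(SAMPLE, 5.0))  # ['A2']
```

Because the data arrives as plain XML over IP, a monitoring suite like Tivoli can fold facility readings into the same dashboards it uses for computing assets, which is the integration's selling point.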
With each of these changes, we are getting closer to being able to just issue a command to the computer, like on Star Trek, without having to specify which application, database, server or storage resource to use. When we do, however, we just have to make sure we don't name the system HAL or Skynet.