Hardware Today — Cutting Through the Infiniband Buzz

Monday Jun 28th 2004 by Ben Freeman

When the Infiniband Trade Association formed five years ago, the consortium set out to rock the networking world and win its combined specification and architecture top billing over Fibre Channel and Ethernet. We look at where it stands today.

Interconnect technology is at a crossroads. Ethernet is standards-based and omnipresent but lags in performance, while Fibre Channel has better performance but isn't standards based. Further muddying the waters is a newcomer that has been inching toward data center acceptance: Infiniband, a standards-based collaborative effort that delivers 10 Gbps performance.

Infiniband is both an I/O architecture and a specification for the transmission of data between processors and I/O devices, and it has been gradually replacing the PCI bus in high-end servers and PCs. Instead of sending data in parallel, as PCI does, Infiniband sends data serially and can carry multiple channels of data at the same time over a multiplexed signal.
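For a rough sense of where the headline numbers come from, here is a back-of-envelope sketch of the link arithmetic. It assumes the original single-data-rate spec: 2.5 Gbps serial lanes, 8b/10b encoding (so 80 percent of the signaling rate carries payload), and links that aggregate 1, 4, or 12 lanes. Purely illustrative:

    /* Back-of-envelope Infiniband link rates, assuming 2.5 Gbps lanes,
     * 8b/10b encoding, and 1X/4X/12X lane widths. Illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        const double lane_signal_gbps = 2.5;     /* per-lane signaling rate */
        const double encoding_efficiency = 0.8;  /* 8b/10b: 8 data bits per 10 line bits */
        const int widths[] = {1, 4, 12};

        for (int i = 0; i < 3; i++) {
            double signal = widths[i] * lane_signal_gbps;
            double data = signal * encoding_efficiency;
            printf("%2dX link: %5.1f Gbps signaling, %5.1f Gbps usable data\n",
                   widths[i], signal, data);
        }
        /* The 4X link is the "10 Gbps" Infiniband figure quoted throughout. */
        return 0;
    }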

Infiniband has been gaining traction with early adopters, as evidenced in the latest Top 500 Supercomputer list unveiled last week. According to an Infiniband Trade Association spokesperson, 11 of the machines on the list were built with Infiniband interconnects, up from six last year.

If the universities and research labs that comprise the early adopters using Infiniband for high-performance computing and clusters are any indication, the technology will keep making a dent in the increasingly Intel-based, cluster-centric Top 500 as time goes on. But right now, the technology has its sights set on a wider market: the data center.

Infiniband Selling Points

Kevin Deierling, vice president of product marketing for Mellanox, a company that manufactures Infiniband silicon and related hardware, elaborated on Infiniband's four major strengths: a standards-based protocol, 10 Gbps performance, Remote Direct Memory Access (RDMA), and transport offload.

Standards: The Infiniband Trade Association, a consortium of some 225 companies, choreographs the open standard. Founded in 1999, the organization is driven by an unlikely gaggle of steering members: Agilent, Dell, HP, IBM, InfiniSwitch, Intel, Mellanox, Network Appliance, and Sun Microsystems. More than 100 other member companies join them in what Deierling dubs "co-opetition" to develop and promote the Infiniband specification.

Speed: Infiniband's 10 gigabits per second soundly beats Fibre Channel's current top speed of 4 gigabits per second and Ethernet's 1 gigabit per second.
To avoid confusion or misleading sales figures, remember that Gbps (with a lowercase "b") stands for gigabits per second, while GBps (with a capital "B") stands for gigabytes per second, eight times as much data.
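The conversion is a straight division by eight. A quick sketch of the nominal signaling rates quoted above, expressed both ways:

    /* The speed comparison above in both units: bits vs. bytes per second.
     * Nominal signaling rates, not measured throughput. */
    #include <stdio.h>

    int main(void)
    {
        const char *fabrics[] = {"Gigabit Ethernet", "Fibre Channel (4G)", "Infiniband 4X"};
        const double gbps[]   = {1.0, 4.0, 10.0};

        for (int i = 0; i < 3; i++)
            printf("%-20s %5.1f Gbps = %5.3f GBps\n",
                   fabrics[i], gbps[i], gbps[i] / 8.0);  /* 8 bits per byte */
        return 0;
    }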

Memory: Infiniband-enabled servers use a Host Channel Adapter (HCA) to translate the protocol across a server's internal PCI-X or PCI Express bus. HCAs feature RDMA, which is sometimes dubbed Kernel Bypass. RDMA is considered perfect for clusters, as it enables servers to read and write regions of one another's memory directly via a virtual addressing scheme, without involving the operating system's kernel.
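On Linux, applications typically reach RDMA through a verbs interface such as the libibverbs library. The fragment below is a hypothetical sketch, not anything taken from Mellanox or the spec, showing the step that underpins kernel bypass: registering an ordinary buffer with the HCA so that a remote peer holding the returned key can read or write it directly, with no kernel involvement on the data path.

    /* Minimal sketch: register a buffer with an RDMA-capable HCA via the
     * Linux verbs API (libibverbs). Hypothetical illustration.
     * Build: gcc reg_mr.c -o reg_mr -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs || num_devices == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        /* Open the first HCA the OS reports. */
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) { perror("ibv_open_device"); return 1; }

        /* A protection domain scopes which registrations and queue pairs
         * may be used together. */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        if (!pd) { perror("ibv_alloc_pd"); return 1; }

        /* Register (pin) an ordinary buffer so the HCA can DMA to and from
         * it. The returned rkey is what a remote peer presents to perform
         * RDMA reads and writes that bypass this host's kernel entirely. */
        size_t len = 4096;
        void *buf = calloc(1, len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { perror("ibv_reg_mr"); return 1; }

        printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }

A complete transfer would also require creating queue pairs and exchanging the buffer address and rkey with the peer, steps omitted here for brevity.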

Transport Offload: RDMA is a close friend to transport offload, which moves data packet processing from the OS to the chip level, saving processing power for other functions. To process a 10 Gbps pipe in the OS would require an 80 GHz processor, Deierling estimates.
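The arithmetic behind such estimates depends on what is counted. A commonly cited rule of thumb puts host protocol processing at roughly 1 GHz of CPU per 1 Gbps of traffic; the sketch below applies only that rule, and vendor figures that also count memory copies and interrupt handling, as Deierling's presumably does, come out considerably higher.

    /* Rough illustration of why transport offload matters, using the
     * oft-quoted rule of thumb of ~1 GHz of host CPU per 1 Gbps of
     * protocol processing. Copies and interrupts push the real cost higher. */
    #include <stdio.h>

    int main(void)
    {
        const double link_gbps = 10.0;    /* one direction of an Infiniband 4X link */
        const double ghz_per_gbps = 1.0;  /* rule-of-thumb host processing cost */

        double one_way = link_gbps * ghz_per_gbps;
        double full_duplex = 2.0 * one_way;  /* line rate in both directions */

        printf("Host CPU consumed without offload: ~%.0f GHz one way, ~%.0f GHz full duplex\n",
               one_way, full_duplex);
        printf("With transport offload, that work stays on the adapter.\n");
        return 0;
    }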

Bait and Switches

Voltaire, a Massachusetts-based provider of Infiniband switches and other hardware, is one vendor dangling incentives to encourage enterprises to deploy Infiniband.

"[In] a cluster, you have many different fabrics to do different functions," Voltaire Vice President of Marketing Arun Jain said. Such an architecture requires gateways or routers to translate between disparate network protocols. "We're the only vendor that has these gateways integrated into our switch chassis," Jain said. This, he claims, saves customers both money and space, and results in a system that is "much more reliable and simpler to manage."

To that end, the vendor's flagship ISR9288 offering provides 288 non-blocking ports in its 14U chassis. Non-blocking ports allow each node to communicate with the others "at full bandwidth," Jain said. This cuts down on cabling requirements, which in turn cuts costs.
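To see why an integrated chassis trims cabling, consider a hypothetical alternative: building the same 288-port non-blocking fabric as a two-tier fat tree of standalone 24-port switches. The arithmetic below is purely illustrative and assumes nothing about how Voltaire builds the ISR9288 internally.

    /* Cabling a 288-port non-blocking fabric from standalone 24-port
     * switches (two-tier fat tree) vs. one integrated chassis.
     * Hypothetical illustration only. */
    #include <stdio.h>

    int main(void)
    {
        const int ports_per_switch = 24;
        const int nodes = 288;

        /* Each leaf splits its ports: half down to nodes, half up to spines. */
        int down_per_leaf = ports_per_switch / 2;                   /* 12 */
        int leaves = nodes / down_per_leaf;                         /* 24 leaf switches */
        int uplinks = leaves * (ports_per_switch - down_per_leaf);  /* 288 inter-switch cables */
        int spines = uplinks / ports_per_switch;                    /* 12 spine switches */

        printf("Discrete fat tree: %d leaves + %d spines, %d inter-switch cables\n",
               leaves, spines, uplinks);
        printf("Integrated 288-port chassis: no external inter-switch cables\n");
        return 0;
    }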

At press time, Voltaire had just announced virtualization-enabling companion IP and Fibre Channel routers as well as an Infiniband-optimized NAS solution.

The Downside of Infiniband

"The real problem for Infiniband is: How good is good enough?" Gartner Research Vice President James Opfer told ServerWatch, "There's a certain market available for clustering, and it's arguably true that Infiniband does that better, but Ethernet may do it good enough." Although Infiniband has generated increased interest in the past six to nine months, he adds, it "hasn't penetrated [the market] in a major way as of yet."

Gartner does not break out Infiniband-specific market data at this time but believes the technology has its work cut out for it if it is to push Fibre Channel out of the server closet. Opfer also anticipates Fibre Channel will be alive and kicking through 2010.

Further, Infiniband's performance edge isn't a sure bet down the road. "Infiniband doesn't seem to be a huge risk right at the moment, but it's not like Ethernet for long-term durability," Opfer said. And once 10 Gigabit Ethernet arrives, "if you're going over backplanes, 10 Gigabit Ethernet will be about the same as 10 Gigabit Infiniband."

In fact, slower components, PCI and PCI-X in particular, could keep a data center from ever realizing the 10 Gbps speed boost, since the host bus caps what the adapter can deliver. PCI Express, which was built specifically to keep up with emerging network protocols like Infiniband and provides 2.5 Gbps per lane across multiple lanes, addresses this.
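The nominal peak bandwidths make the bottleneck plain. The comparison below uses the usual theoretical figures for 64-bit PCI at 66 MHz, 64-bit PCI-X at 133 MHz, and an eight-lane PCI Express slot; real-world throughput is lower in every case.

    /* Nominal host-bus peaks vs. a 10 Gbps Infiniband 4X link.
     * Theoretical figures, not measured numbers. */
    #include <stdio.h>

    int main(void)
    {
        struct { const char *bus; double gbps; } buses[] = {
            {"PCI 64-bit/66 MHz",    64 * 66e6 / 1e9},   /* ~4.2 Gbps */
            {"PCI-X 64-bit/133 MHz", 64 * 133e6 / 1e9},  /* ~8.5 Gbps */
            {"PCI Express x8",       8 * 2.5 * 0.8},     /* 2.5 Gbps/lane, 8b/10b -> 16 Gbps */
        };
        const double ib_4x_gbps = 10.0;

        for (int i = 0; i < 3; i++)
            printf("%-22s %5.1f Gbps %s\n", buses[i].bus, buses[i].gbps,
                   buses[i].gbps >= ib_4x_gbps ? "(keeps up with a 4X link)"
                                               : "(bottlenecks a 4X link)");
        return 0;
    }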

Further marring the landscape have been the changes in Infiniband's development along the way. Early on, Infiniband was touted as a radical channel architecture to replace the PCI load and store architecture inside servers. Had it succeeded, it would have eliminated these bus bottlenecks altogether.

"They were going to go all the way to the processor with Infiniband," Opfer notes, "and that hasn't happened."

Once Infiniband resigned itself to external, inter-server communications conveyed over internal PCI architectures via an HCA, "the game was over. The revolution was over from that point on, it was just a different implementation of load and store architecture," Opfer eulogized.

Originally, a different funeral had been planned. "They were saying that PCI's dead, that this was going to be the follow-on to PCI, and it's not, it runs on [PCI or PCI Express], it doesn't sound like it's displaced PCI to me," he said.

Infiniband's advantage stems from features like RDMA, transport offload, and sheer speed: compelling attributes, though ones that amount more to reform than to revolution. In the data center, however, reform may be the more sensible choice.
