The International Supercomputing Conference will take place this week in Heidelberg, Germany, and the big news is ... cables. And petaflop computing.
If cabling is not the first thing to come to mind when talking about high-performance computing issues, you've never seen a cluster. The hundreds or thousands of processors used in a compute cluster need to be connected via high-speed cables, and those cables are thick and heavy.
Each cable can weigh around one kilogram, or roughly 2.2 pounds. That may not seem like a lot, but when you have 1,000 cables, 1,000 cables times one kilogram can be a lot of weight, said Tom Willis, general manager of Intel Connects Cables, on a conference call.
Cable bundles can pile up two feet thick, bend connector pins and force data centers to reinforce their floors, he said. They also impede airflow and can only carry data so far.
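Willis's weight figures are easy to check; a quick sketch using only the numbers quoted in the article (1,000 cables at one kilogram each) and the standard kilogram-to-pound conversion:

```python
# Verify the cable-weight claim: 1,000 one-kilogram cables in pounds.
KG_TO_LB = 2.20462  # standard conversion factor

cable_weight_kg = 1.0   # per-cable weight quoted by Intel
cable_count = 1000      # cable count quoted by Intel

total_lb = cable_count * cable_weight_kg * KG_TO_LB
print(f"{cable_count} one-kilogram cables weigh about {total_lb:,.0f} lb")
```

That works out to more than a metric ton of cabling, which is why floor reinforcement comes up at all.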
Intel's solution is Intel Cluster Ready and Intel Connects Cables. Cluster Ready is a program and technology to simplify the deployment, usage and management of clustered computer systems.
As part of this program, Intel introduced Connects Cables, which let InfiniBand and 10 Gigabit Ethernet links achieve a double data rate of 20Gb per second. Because the cables are fiber-optic, they are 84 percent lighter than copper cables, 83 percent smaller and have a 40 percent smaller bend radius.
The cables can stretch up to 100 meters, making a cluster of up to 10,000 computers quite possible.
The Sun Comes Out
Sun Microsystems, meanwhile, is also tackling the cabling menace as it pursues petascale computing.
The systems vendor will introduce the ultra-dense Magnum Switch, which supports 3,456 nodes in a single box, cutting the number of switch entities needed in a cluster by a factor of roughly 300, according to Andy Bechtolsheim, chief architect and senior vice president of Sun's Systems Group.
Using one switching element instead of nearly 300 in a supercomputer cluster also means a major reduction in cabling, cutting the number of cables from 6,912 to 1,152, a 6-to-1 reduction. The Magnum connectors have three times the density of an InfiniBand 4X connector, and Sun has its own new cables in the works that are half the size of standard InfiniBand cables.
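Sun's cable arithmetic holds up; a quick sketch verifying the ratio from the figures quoted above (the counts are Sun's, the script only checks the division):

```python
# Cable counts quoted by Sun for a 3,456-node cluster.
cables_conventional = 6912  # with ~300 discrete switch elements
cables_magnum = 1152        # with a single Magnum Switch

reduction = cables_conventional / cables_magnum
print(f"Reduction factor: {reduction:.0f} to 1")  # → 6 to 1
```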
"Keep in mind, cables are the least reliable part of a high-performance system," Bechtolsheim told a gathering of reporters at Sun's Menlo Park, Calif., headquarters. "People have more trouble with cables than anything else in the system."
True enough, said analyst Nathan Brookwood of Insight64.
"It's a huge deal, especially for these clusters with hundreds of thousands of cables, it's an enormous deal," he told internetnews.com. "No one is building a switch with the capacity of Magnum. That gets rid of a lot of cables. Cables are one of the easy points of failure."
Sun is making a big push for high-performance computing with its Open Petascale Architecture, although the market for it is admittedly small. "The market for clusters like these are in the dozens, but there are people doing smaller clusters and we think we can go for that," said Bechtolsheim.
He's referring to the first Sun Constellation System, a petascale system that's the result of a collaboration between Sun and the Texas Advanced Computing Center (TACC) at the University of Texas in Austin.
The TACC system will consist of 82 ultra-dense blades, 72 Sun X4500 storage servers and two of the ultra-dense Magnum switches. Sun expects the Constellation System to rank among the most powerful computing platforms in the world once it gets its CPUs.
Constellation will be based on Barcelona, AMD's long-hyped forthcoming Opteron chip. Sun is claiming 500 teraflops of performance, which would surpass the massive IBM Blue Gene/L supercomputer at Lawrence Livermore National Laboratory, but Bechtolsheim didn't offer hard performance numbers because Sun is still waiting for the chips.
Here Comes Big Blue
IBM, however, isn't sitting still.
The company is introducing Blue Gene/P, the second generation of its massive supercomputer. It delivers 2.5 times the performance of Blue Gene/L, the current leader of the Top 500 hot-rod list, while consuming only 20 percent more power.
You want fast? IBM estimates drug researchers could run simulated clinical trials on 27 million patients in one afternoon using just a sliver of the machine's full power.
Blue Gene/P is a modular design that integrates four PowerPC 450 processor cores on a single Blue Gene/P chip. IBM went with the PowerPC design over its newly released POWER6 processor because POWER6 is more of a general-purpose chip, while the modified PowerPC 450 is tailored to compute-intensive tasks, according to Herb Schultz, marketing manager in IBM's Deep Computing unit.
Each Blue Gene/P chip is capable of 13.6 billion operations per second. A two-foot-by-two-foot board containing 32 of these chips churns out 435 billion operations every second, making it more powerful than a typical 40-node cluster based on two-core commodity processors. One rack holds 4,096 processor cores.
The one-petaflop Blue Gene/P configuration is a 72-rack system, and IBM estimates it can reach three-petaflop performance, but it would take 216 racks. Just for comparison, the Livermore supercomputer, already the fastest on Earth, is 64 racks.
There are only 28 Blue Gene installations in the world, but IBM is optimistic about growth opportunities.
"There is a growing number of companies needing this performance," said Schultz. "In addition to being a heavy player on government labs, we're looking at Blue Gene/P to break into the industrial category, too."
Brookwood agreed that there isn't a big market, "but clearly this is a market where there's a lot of bragging rights, and to the extent that some of these systems may end up at the top of the supercomputing list, that's kind of prestigious," he said.
This article was originally published on Internetnews.com.