Hardware Today — Clustering Your Way to Supercomputing

Monday Aug 9th 2004 by Ben Freeman

As the climate for scaling out heats up, enterprise interest in clusters continues to grow, along with definitions and vendor claims. We look at clustering vs. grid vs. utility computing, and highlight offerings from IBM and SGI.


Clustering, at its simplest, is the joining of two or more computers to act like a single, more efficient supercomputer. Clusters are a staple of the high performance computing (HPC) space. They link systems, generally in the same geographic location, to function as a single, homogeneous unit performing specific tasks. Often they are successful, as is evident in the most recent Top 500 world's fastest supercomputer list, where 58.2 percent of the supercomputers are actually clusters.
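
The split-compute-combine pattern at the heart of clustering can be illustrated in miniature. The sketch below is a toy illustration, not tied to any vendor's software discussed here: a "head node" divides a job among worker "nodes," with local threads standing in for separate machines.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_chunk(chunk):
    # Work performed by one "node" on its slice of the data
    return sum(x * x for x in chunk)

def cluster_sum_of_squares(data, nodes=4):
    # Head node splits the job into roughly equal chunks by striding,
    # farms each chunk out to a worker, then combines the partial results.
    chunks = [data[i::nodes] for i in range(nodes)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        partials = pool.map(sum_chunk, chunks)
    return sum(partials)

print(cluster_sum_of_squares(list(range(1000))))  # -> 332833500
```

In a real cluster the chunks would be dispatched across a network interconnect (via MPI or a job scheduler) to physically separate machines rather than local threads, but the division of labor is the same.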

Those not familiar with clustering may be better acquainted with its close relatives, which go by the names of virtualization, on-demand, and grid computing.

Grid computing is clustering's more flexible cousin. In a grid setup, heterogeneous, disparate systems are linked, often across multiple locations and through varied network connections, to share processing power to perform complex tasks.

Virtualization creates several servers within one machine to solve the conundrum of often underutilized individual servers, turning what had once been unused overhead into useful virtual partitions. We covered x86 software virtualization in depth here. Mainframe technologies and new partitioning and chip-level hardware from vendors like IBM, HP, and Sun add hardware virtualization strategies to the mix.

Rounding out the family is on-demand computing, which is sometimes referred to as utility computing. It is a more enterprise-oriented method of flexibly reallocating resources to suit changing business tasks. One example of on-demand computing in the real world is the new Control Tower management software for RLX blades, which performs automatic on-demand reallocation based on a variety of changing business needs. On-demand computing often involves outsourcing resources, as is the case with IBM's Deep Computing center in Poughkeepsie.

Deep Blue Clusters

In May, ServerWatch reported on the second IBM Deep Computing Capacity On Demand Center, which launched in Montpellier, France. The Center, along with its Poughkeepsie sibling, enables organizations to purchase supercomputer access on an as-needed basis without having to build the clusters or purchase multiprocessor systems themselves.

"By far, the number-one benefit to this model is that it allows companies to be more responsive, to tackle projects that are larger than they would otherwise be able to consider," Mark Solomon, director of IBM Deep Computing Capacity On-Demand, told ServerWatch. "This is a real game changer for the SMB, as it helps level the playing field with larger competitors."

The Center provides technology normally out of reach for small and midsize businesses. It offers Intel, AMD, and POWER-based servers, as well as a host of storage technologies, such as SCSI and SAN storage, across network interconnects that include 10/100 and Gigabit Ethernet, Myrinet, and InfiniBand. These architectures can be used interchangeably, for example, a memory-heavy POWER-based system on the front end coupled with an IA-32 system on the back end, Solomon said.

The arrangement between the Deep Computing center and an enterprise is akin to a lease but with much more power and flexibility. It removes administrators "from the burden of managing the life cycle of technology," Solomon said. Financial benefits include not having to justify the capital purchase required for the typical three- to five-year hardware life.

As might be expected, placing this initiative within the above definitions is a blurry prospect. "What we've done is build an extremely large cluster that we slice up and create virtual clusters from," Solomon noted. Also, "It would be appropriate to view [the Center] as a private grid, because the resources are dedicated to an individual user for the period of time that they need it and because the system image is customized for that user's needs."

Will such a flexible commodity model prove irresistible enough to render in-house IT obsolete? Solomon cautions against selling your data center on eBay just yet but is optimistic about the model's potential. "I think we have a ways to go before we see people not [keeping machines in-house], yet I believe this is an emerging trend," he noted. "It's beginning with an interest in gaining external capacity to meet peak demand and will likely continue from there."


Itanium-2 Gets in the Act

While IBM's model includes nearly everything but the kitchen sink, it doesn't offer Itanium-2 processors, a space SGI's new cluster offerings cover well, and through more traditional channels.

The National Computational Science Alliance (NCSA), a nationwide partnership of more than 50 academic, government, and business organizations prototyping an advanced computational infrastructure, has used clusters of SGI Itanium-based systems since November 1999, when it demonstrated two firsts simultaneously: the first functional HPC Linux cluster and the first Itanium cluster.

For a brief time, the two parted ways, and NCSA relied on a multiprocessor IBM pSeries machine. In July, NCSA and SGI rejoined forces with NCSA's purchase of Cobalt, SGI's new 1,024-processor Itanium-2-based SGI Altix supercomputer. "We're delighted to re-engage with them," Jeff Greenwald, senior director of server product marketing at SGI, told ServerWatch.

Cobalt will help NCSA cosmologists simulate the evolution of the universe on a large scale. Closer to home, it will help atmospheric scientists respond to severe weather conditions in real time.

The trifecta of the SGI Altix's NUMAflex architecture, Itanium-2 processors, and the Linux operating system keeps NCSA coming back for more. SGI's NUMAflex technology, which allows the transparent sharing of memory between independent processors, enables each processor to flexibly allocate its needed portion of Cobalt's 3 TB of memory.

Customers looking at clustering generally view Linux as a plus, and SGI Advanced Linux on Cobalt is no exception for the typical SGI customer. "If you're a government, if you're a university, if you're a research institute, you really demand the benefit that open systems [like Linux] provide," Greenwald added.

As for Itanium-2, "It's fast, it's stable, it's mature, and it's a commodity processor that screams," is Greenwald's spin. "The fact that it's been out there for several years means that the compilers, the tools, the software, the microcode, the reliability, all that stuff is several years ahead of the competition."

Greenwald believes Intel Itanium clusters will do very well in the university HPC market. He cites a list of dozens of university customers relying on the technology. Backing up his claim is the HPC world at large: Intel processors drove 57.4 percent of the June 2004 Top 500 supercomputer list, with Itanium-2 driving a respectable 12.2 percent of that. Xeon, however, carried 44.6 percent, and other Intel processors filled in the remaining 1.6 percent.

The Skinny on a Wide Topic

Sometimes clustering isn't the panacea it's made out to be. Last week, Hardware Today profiled a Sun customer in the process of evaluating its HPC clusters against a prized scale-up server. After decades of pouring resources into scaling out, the organization opted to deploy a high-end, multiprocessor Sun Fire E4900 system, citing a lack of efficiency when compiling code on previously used clusters as the driver.

And when you're looking for a panacea, don't get caught up in the semantics. When it comes to supercomputing, the terms and situations aren't mutually exclusive. As shown above, on-demand computing might involve summoning the services of an outsourced grid or cluster. A cluster can feed a larger grid, or it may result in a supercomputer or a simple assemblage of a few systems in a basement. The options proliferate, and enterprises would be wise not to get caught up in any one vendor's hype claiming to offer the only true clustering or grid technology or processor. And with clusters filling a 58.2 percent majority of the slots on the Top 500 list, it's clear they're here for the long haul.
