As the role of high performance computing (HPC) shifts, high performance no longer necessarily equates with high end. But is it as simple as A-B-C?
Once confined to academic research, HPC is now broadening its user base and entering the fringes of the mainstream. High-end and high-cost machines are giving way to Intel/AMD commodity boxes. The success of Linux in this zone has even forced Microsoft to enter the fray.
"The HPC market is moving at a fast pace similar to the growth of the Internet in the mid-'90s and the vendor count is rapidly increasing," says Ken Farmer of LinuxHPC.org and WinHPC.org, two technical portals covering Linux and Windows HPC and clustering. "From software development kits by Absoft, to cluster management suites by Cluster Resources, to hardware vendors like PSSC Labs, to Microsoft's Compute Cluster Server, the HPC market is going through a rapid maturation phase."
Robert Gezelter, a Flushing, N.Y.-based software consultant with much experience in HPC, sees this as one facet of the centrality of IT in the modern business landscape: Enterprises these days seem to have an insatiable demand for compute power.
"The big trend in HPC, as indeed in all of business-related computing, has been the demand for ever increasing workloads and security around the clock, as access to computing becomes ever more mission-critical," says Gezelter. "The popularity of wireless access (WiFi and cellular) is another aspect of this enabling untethered users to continue to use systems, wherever they are."
As more and more organizations insist on HPC-like capabilities in their systems, it's hardly surprising that RISC has lost ground to Intel and AMD.
"The past year has seen a big shift away from proprietary chips," says Sharad Mehrotra, CEO of Fabric7 Systems, a Linux HPC server vendor based in Mountain View, Calif. "This development is being driven by advances in dual-core processors and the direct connect architecture of the Opteron platform."
"HPC is being used in practically all industries today. The last soft drink you consumed or the cell phone, car, and jogging shoes you use were most likely designed, modeled and analyzed using HPC." Bjorn Andersson, director of HPC and Grid Computing at Sun Microsystems
Dual-core offerings from both AMD and Intel have popularized the concept of multi-core processors. Users have quickly bought into the notion of accomplishing almost double the computing in the same footprint while generating far less heat per processor. Most vendors report brisk dual-core sales and diminishing interest in single-core products.
"We have noticed a rise in demand for dual-core processors from Intel, where AMD previously had a lock on that segment," says David Drake, a systems engineer at CDW Corporation. "We have also observed more orders for 64-bit servers, with the amount of memory soaring from the previous 32-bit limit of 4 GB to as high as 128 GB for some models."
This 64-bit HPC segment, though, has been an area rife with controversy. Dell, for example, decided to phase out the Itanium 64-bit server processor in its PowerEdge line, preferring the less ambitious but perhaps more reliable Xeon architecture. Others, however, have rallied under the IA64 banner. HP, Unisys, Hitachi, Bull, Fujitsu Siemens Computers, NEC, and SGI are part of an Itanium Solutions Alliance aimed at promoting the development of software applications for the Itanium platform.
And their work appears to be paying off. According to IDC, more than $3 billion worth of Itanium servers today run over 5,000 applications worldwide, and a good portion of these are deployed directly in technical computing environments. Further, the research firm estimates Itanium server shipments will increase by more than 65 percent annually. By 2009, the chip will account for about 10 percent of the overall market.
"The issue of EM64T/IA64 performance is overrated by many," says Gezelter. "I have used IA64 on OpenVMS, and apart from it being a different machine, there is essentially no difference to the application developer."
HPC for Everyone
Gezelter has noticed a definite shift in the buyers of HPC gear. He believes the need for greater performance is driving purchases of such systems ever further down the ladder of enterprise scale. This, he says, is most clearly manifested at any coffee shop, bookstore, or airport offering WiFi. As the volume of mobile devices continues to explode, HPC capabilities are required to provide the information feeds for these devices.
Farmer, too, has seen greater uptake of HPC systems and commodity clusters in the commercial space as a whole.
"Clusters are no longer just for the national labs, government organizations, and educational institutions," he says. "They are now being implemented as a far less expensive and equally capable solution in place of traditional supercomputers in a variety of industries, including aerospace, petroleum and biotech, to name a few."
Bjorn Andersson, director of HPC and Grid Computing at Sun Microsystems, sees a similar trend.
"HPC is being used in practically all industries today," says Andersson. "The last soft drink you consumed or the cell phone, car, and jogging shoes you use were most likely designed, modeled and analyzed using HPC."
The Sun Grid Rack System, for example, is a way to simplify clustering and offer more accessible HPC. The system building block is the rack rather than the individual node. It is assembled using Sun Fire x64 servers, which are powered by AMD Opteron processors. According to Andersson, these start at $745 with the Sun Fire X2100 server. When you populate a rack with, say, 32 of these servers and add in networking equipment, the list price rises to about $100,000, depending on the exact server node configurations and which interconnect is selected.
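Those figures imply that base server hardware is only a fraction of the rack's list price, with the remainder going to interconnect and upgraded node configurations. A back-of-the-envelope sketch, using the prices quoted above (the split itself is an illustrative assumption, not a Sun price list):

```python
# Rough cost breakdown for a 32-node rack, using the list prices
# quoted in the article. The nodes-vs-remainder split is an
# illustrative assumption, not an official Sun figure.
BASE_NODE_PRICE = 745        # Sun Fire X2100 entry price, per article
NODES_PER_RACK = 32
RACK_LIST_PRICE = 100_000    # approximate populated-rack list price

base_nodes_cost = BASE_NODE_PRICE * NODES_PER_RACK
remainder = RACK_LIST_PRICE - base_nodes_cost  # interconnect + upgrades

print(f"Base nodes: ${base_nodes_cost:,}")       # $23,840
print(f"Interconnect/upgrades: ~${remainder:,}") # ~$76,160
```

In other words, entry-level nodes account for well under a quarter of the quoted rack price, which is why the final figure "depends on the exact server node configurations."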
Linux, of course, has been a major driving force in the commercialization of HPC. In particular, Linux dominates the commodity cluster computing space. Linux clusters have been around since the Beowulf Project, which was developed by Donald Becker and Thomas Sterling more than a decade ago. The idea behind the Beowulf Project was to assemble clusters using commodity-based hardware as a cost-effective alternative to large supercomputers.
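The computational pattern behind a Beowulf cluster is simple: split a job into chunks, farm the chunks out to commodity nodes, then gather and combine the partial results, typically via MPI over a network. As a self-contained sketch, the same scatter/compute/gather shape can be shown with Python's multiprocessing standing in for cluster nodes (the workload here is an invented example):

```python
# Toy illustration of the Beowulf-style scatter/gather pattern:
# local processes stand in for cluster nodes. A real Beowulf
# cluster would use MPI across networked machines for this.
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    # Each "node" computes its share of the work.
    return sum(x * x for x in chunk)

def cluster_sum_of_squares(data, n_nodes=4):
    data = list(data)
    # Scatter: divide the data into one chunk per node.
    chunks = [data[i::n_nodes] for i in range(n_nodes)]
    # Compute in parallel, then gather and reduce the results.
    with Pool(n_nodes) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))

if __name__ == "__main__":
    print(cluster_sum_of_squares(range(1000)))
```

The appeal that made Beowulf popular is visible even in this toy: the coordinating logic is trivial, so capacity scales by adding cheap nodes rather than buying a bigger machine.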
Fabric7 is one recently launched enterprise server player that hopes to tempt more users away from Unix and onto dual-core x86 64-bit processors running Linux as an affordable high-performance alternative. It offers two servers, the Q160 and Q80. Both are based on dual-core Opteron processors. The Q160 is an up to 16-way SMP model that can be linked via an advanced I/O subsystem to any number of other servers to create a virtual fabric of computing power, as well as memory, network bandwidth, storage access, and offload engines.
"We've begun to see indications that there is increased interest in server I/O as a priority because HPC set-ups are becoming more and more integrated into commercial environments, like Business Intelligence applications," says Mehrotra. "The processing and delivery of equity trading data in financial services at extremely high speeds is an example of where this trend might emerge first in 2006."
But Linux may not have it all its own way. A sure sign that HPC has arrived on the map is Microsoft's entry into the scene. The Redmond-based giant made a push at SuperComputing 2005 to gain exposure in the marketplace, promoting the idea of "Personal Supercomputers". Further, it is developing an HPC solution based on Windows Compute Cluster Server (CCS).
Currently in beta and scheduled for release sometime in the first half of this year, CCS adds several tools on top of a Windows Server 2003 base. Hardware vendors ranging from Dell, HP, IBM, and NEC to Orion Multisystems, Tyan Computer, and Verari Systems have partnered with Microsoft in developing HPC systems. Further, it has invested millions in 10 HPC institutes, including Cornell University, Nizhni Novgorod State University in Russia, and Shanghai Jiao Tong University in China.
Clearly, Microsoft has the backing and resources to enable Windows to make up ground on Linux quickly in this sector.
"Look for Microsoft to start being an influence in this market as companies familiar with Windows products but unfamiliar with HPC begin deploying large clusters to handle data intensive applications," says Farmer. "Also look to see what has been a loosely grouped community start to form industry standards, as needs dictate, because of rapid growth."