There was a time, not all that long ago, when aggregating computers into a single system was not considered virtualization. It was known as clustering, grid computing, or even utility computing. The model has also long been used in supercomputing.
And speaking of supercomputing, it's almost time for the semi-annual Top 500 supercomputing list to make an appearance. It's no surprise, therefore, that supercomputing announcements are starting to come to the fore this week.
What is interesting, and somewhat surprising, is the arrival of cloud on the supercomputing scene.
On the one hand, the cloud model is complementary to supercomputing, and shifting compute power from the data center to the cloud isn't all that different from buying compute units from an OEM or MSP, as IBM and others have been doing for years now. With today's virtualization tools, grouping commodity hardware as one server to facilitate a task and ungrouping it when the resources are needed elsewhere seems hardly, well, rocket science.
On the other hand, despite being labeled a "hot" and "disruptive" technology, cloud computing is far from mature, and how prudent is it to entrust your supercomputing needs to unproven technology?
On Wednesday, ScaleMP added vSMP Foundation for Cloud to its vSMP Foundation family. The Cloud software is designed for environments already provisioned and set up for cloud computing. It is, as Internetnews notes, "for those moments when you might want to turn off your low-priority services for a few hours to run a massive processing job." To do this, admins create virtual machines on a per-job, per-project, or per-customer basis.
"The product is about increasing cloud utilization," Vice President of Marketing Benjamin Baer told ServerWatch. Founder and CEO Shai Fultheim took it a step further, noting that ScaleMP hopes to do exactly that and bring about the end of the "end-of-the-aisle" computing era.
The goal, Baer said, is to remove the "piece of the pie" and enable multiple systems to be hooked together with InfiniBand cables, thus creating a large virtual machine, and in the process saving 25 percent of the cost and delivering "performance improvements on the order of 40 percent to 500 percent" over systems on the market. The increases, he explained, depend heavily on the applications and data loads running.
To accomplish this, ScaleMP approaches the problem from the virtualization perspective, tackling the over-utilization that often occurs with virtualization endeavors by bringing it to the cloud and removing the constraints of the dedicated box.
This aggregation of power takes x86 computing up another level, putting it in the same league as HP's Superdome, SGI's Altix, and IBM's p590 and p575, Fultheim told ServerWatch. The software can pool up to 16 x86 servers to create a virtual SMP system with up to 128 cores and 4 TB of main memory using Intel Xeon 5500 series processors.
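Those totals are consistent with dual-socket, quad-core Xeon 5500 nodes carrying 256 GB of RAM apiece; a back-of-the-envelope sketch (the per-node figures here are inferred from ScaleMP's stated maximums, not confirmed by the company):

```python
# Back-of-the-envelope check of the vSMP aggregation maximums.
# Assumptions (inferred, not stated by ScaleMP): each node is a
# dual-socket, quad-core Xeon 5500 server with 256 GB of RAM.
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 4   # Xeon 5500 ("Nehalem-EP") parts are quad-core
GB_PER_NODE = 256
MAX_NODES = 16

total_cores = MAX_NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
total_memory_tb = MAX_NODES * GB_PER_NODE / 1024

print(f"{total_cores} cores, {total_memory_tb:.0f} TB of main memory")
# → 128 cores, 4 TB of main memory
```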
Because the company has found security to be the main issue for many organizations considering cloud, it is more focused on the private cloud at this time. It is looking at both internal and external clouds, though, and is working with several service providers.
ScaleMP wasn't the only company to announce plans to blend supercomputing with virtualization and cloud. 3Leaf on Tuesday revealed its vision of the Dynamic Data Center. It unveiled a server for the AMD Opteron family of processors and outlined plans to support the Intel QPI 1.1 interconnect, beginning with the Sandy Bridge processors.
3Leaf's systems are designed with the cloud in mind. According to the company, "the technologies enable enterprises to treat x86 servers as building blocks and coalesce them into contiguous pools of CPU, memory, and storage that can span across multiple physical machines and be allocated or de-allocated as needed."
The heart of 3Leaf Systems' technologies is the DDC-ASIC, which enables distributed cache coherency among all system cores while creating the basic control structure needed for high system agility. The DDC-ASIC chip is available now for AMD Opteron processors supporting the AMD HyperTransport interconnect, with up to 1 TB of main memory across up to 16 nodes, targeting the Shanghai, Istanbul, and Magny-Cours processors.
The DDC-Software has three components: DDC-Pool, which enables multiple x86 boxes or blades to function as a single larger contiguous system; DDC-Share, which facilitates the allocation of resources down to the core level; and DDC-Flex, which provides runtime reconfiguration of OS images across any portion of a compute cluster, such that an image can expand or shrink in terms of CPU, memory, and I/O resources without a reboot (DDC-Flex is expected to reach general availability in 2010).
Amy Newman is the managing editor of ServerWatch. She has been covering virtualization since 2001 and is the coauthor of Practical Virtualization Solutions.