Microservers: Up and Coming Solution or Cute Holiday Boutique Gift?

Monday Feb 11th 2013 by Jeffrey Layton

Microservers seem to be quite the rage in the server world these days, but are they really a serious solution that can be used as "real" servers?

Microservers seem to be quite the rage in the server world these days. But are they really a serious solution that can be used as "real" servers (define "real" however you like)? Or are they just a cool, buzzword-like technology that is the "Furby" of the server world?

Introduction

For a long time the under-utilization of server capabilities was the talk of the data center. Servers were using only 15% of their capacity on average, yet data centers were still full of servers, primarily because certain applications or workloads had to be kept separate from one another.

The result was a data center with woefully under-utilized servers. It made the data center a terribly inefficient desert of wasted power and cooling and, in the end, money. Justifying the purchase of new hardware to keep up with certain requirements was equivalent to going up against a firing squad with a target painted on your shirt. Knowing how to bob and weave was a very serious job skill.

As a result, the server world started heading toward virtualization. Virtualization allows IT to combine applications, each in its own independent "virtual server," on a single physical server, improving the overall utilization of the server.

Using this approach, the utilization of the server can be improved from maybe 15% to about 80-90%, which in turn allows the number of servers to be reduced. Ideally, this allows costs to be reduced along with power and cooling, while also reducing footprint. Of course, this can also have an impact on other server aspects such as memory capacity, networking and manageability.

When thinking about or architecting servers that are virtualized, you need to know how much memory is being used by the various applications and VMs (Virtual Machines) so you can be sure you have enough memory in the server. This is also true for networking — do you have enough throughput from the server for all of the VMs? Ensuring there's enough local storage capacity and storage throughput is also important to consider for applications.
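To make that concrete, here is a minimal sketch of the kind of back-of-the-envelope sizing I'm describing. The VM profiles, counts and headroom factor are purely illustrative numbers, not measurements from any real environment:

```python
# A minimal back-of-the-envelope sizing pass for a virtualized host.
# The VM profiles and headroom factor are illustrative numbers only.

vm_profiles = [
    # (name, count, memory_GB, network_Gbps, disk_GB)
    ("web", 40, 2, 0.2, 100),
    ("app", 20, 4, 0.5, 200),
    ("db",   4, 16, 1.0, 500),
]

headroom = 1.25  # leave roughly 25% spare capacity on the host

mem_GB   = headroom * sum(count * mem  for _, count, mem, _, _  in vm_profiles)
net_Gbps = headroom * sum(count * net  for _, count, _, net, _  in vm_profiles)
disk_GB  = headroom * sum(count * disk for _, count, _, _, disk in vm_profiles)

print(f"Host needs roughly {mem_GB:.0f} GB RAM, "
      f"{net_Gbps:.1f} Gbps of network and {disk_GB:.0f} GB of local disk")
```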

Taking all of these aspects into account can sometimes result in a rather beefy server with lots of memory, lots of network throughput and a great deal of local storage and storage performance. The result is a single very large server into which you have put all your eggs, requiring you to eliminate as many single points of failure as possible.

One of the beauties of virtualization, though, is that you can move VMs to different servers as needed. To achieve this, rather than putting all your eggs into one basket, you will need a second server for migrating or restarting VMs as needed (or spinning up new VMs as the load grows).

Even though most data centers naturally have more than one server, you still have to develop a migration/fail-over plan for your VMs. Plus you need to make sure there are no requirements that prevent certain VMs, and possibly data, from coexisting on the same server. All of this results in more cost, more power and cooling, etc., possibly diluting the savings that virtualization has gained for you.

In the end, you might start to question whether virtualization has saved you anything at all. I think the answer is yes, and there are case studies out there to show this is the case. But is it the only possible solution for reducing cost, power and footprint? Perhaps there are other solutions that should be considered.


Microservers Are a Go!

You may have heard of a microserver, but just in case you haven't, a microserver is a very small server with reduced capability and options relative to a full server (which seems pretty obvious given its name). While it typically has a slower processor, less memory per core and less disk per core, a microserver is nonetheless a complete server solution.

For the most part, current microservers have a single-core processor (sometimes dual-core or maybe even quad-core), some basic amount of memory, one or two NICs, and at least one hard drive of some size and capacity. What makes a microserver unique is the focus on low power. Instead of 95-130W per processor (16.25W per core for an 8-core part), a microserver can get as low as 5-20W for the entire server (processor, memory, network and storage). What this means is that you can get a complete microserver that uses less power than a single core of a larger server.
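For the curious, here is that power arithmetic spelled out, using the high end of the ranges quoted above (the exact wattages obviously vary by product):

```python
# The per-core vs. per-server power arithmetic quoted above, spelled out.
big_cpu_tdp_watts = 130      # high end of the 95-130W range for a server CPU
cores_per_cpu = 8
print(f"{big_cpu_tdp_watts / cores_per_cpu:.2f} W per core")   # 16.25 W

microserver_watts_range = (5, 20)   # entire microserver: CPU, memory, NIC, disk
print(f"{microserver_watts_range[0]}-{microserver_watts_range[1]} W "
      "for a whole microserver")
```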

As a result of the low power draw, microservers can be packed very, very densely. It is not unusual to be able to pack several thousand servers into a single 42U rack. A number of microserver solutions have been shown by various vendors; most are really "demo units," although some are available for commercial purchase.

Since ARM is somewhat new to the server world, the software eco-system is not quite up to speed. Many of the systems listed below are really units intended to help the software eco-system mature rapidly. Plus they can be used to understand how microservers might fit into a server farm or data center. The list below is a quick summary of systems based on recent press releases and various articles.

  • Dell Copper
    • Sled (like a blade) with four 32-bit Marvell Armada XP 78460 SoCs (System on a Chip), each with 4 ARM cores, so each sled has 4 servers and 16 cores. The processors run at 1.6 GHz based on Marvell specifications.
    • Up to 12 sleds fit into a 3U chassis that provides power for all of them. This is a total of 48 servers in 3U, or 192 cores.
    • One DDR3 UDIMM VLP slot per server running at 1333 MHz. Up to 8GB of memory per server (2GB per core).
    • Each server has a GigE link (4 servers per sled). The sled has 2 GigE ports on the front and an internal network connecting to a backplane, so there is some sort of switch on the sled.
    • One 2.5" drive attached to each server (4 drives per sled).
    • Total chassis power draw is 750W. Each server then draws about 15W.
    • Totals:
      • 48 quad-core servers per 3U (192 cores). This is the same as 16 servers per U, or 64 cores per U.
      • 15W per server or about 4W per core.
  • Dell Zinc
    • A Calxeda EnergyCore ECX-1000 card is the basic building block. Each card carries four quad-core 32-bit ARM (Cortex-A9) SoCs, so each card provides 4 servers. The processors come in a range of speeds (1.1 GHz to 1.4 GHz), but Dell does not specify which speed is used.
    • The sled (like a blade) has 3 rows of 6 slots, each of which holds either a Calxeda card or a storage card. Fully populated with Calxeda cards, a sled provides a total of 72 servers (4 servers per card, 6 cards per row and 3 rows). But those servers would have no drives attached, requiring some sort of network-based storage. Alternatively, you can put in cards that carry drives instead of Calxeda cards. The example Dell mentioned has 1 row of 6 Calxeda cards (a total of 24 servers) plus two rows of storage-only cards (2 drives per card), resulting in 24 2.5" drives, so you can map one drive to each server.
    • Single DIMM slot per server (presumably up to 8GB per server)
    • The network fabric is discussed here
    • Up to 5 sleds per 4U chassis (4 is more likely)
    • One 2.5" drive attached to each core as explained previously
    • Totals:
      • 288 nodes per 4U chassis (4 sleds but no drives - uses network storage)
      • 2,880 nodes per 42U rack
  • Dell Iron
    • No real details released; the data below is based on press reports.
    • X-Gene 64-bit ARM processors (Applied Micro)
    • C5000 chassis
    • 6 servers per board. 12 boards per chassis. 72 servers per 3U
    • Totals:
      • 1,008 nodes per 42U rack (estimate based on ArsTechnica article)
  • HP Moonshot
    • A Calxeda EnergyCore ECX-1000 card is the basic building block. It carries four quad-core 32-bit ARM (Cortex-A9) SoCs. The processors come in a range of speeds (1.1 GHz to 1.4 GHz), but HP does not specify which speed is used.
    • Each card has 4 servers (each server is quad-core).
    • Single DIMM slot per server (presumably up to 8GB per server).
    • Half-width, 2U chassis with 3 rows of 6 cards. 4 servers per card for a total of 72 servers per chassis.
      • Each chassis has 4 10GigE uplinks, which come off internal EnergyCore Fabric.
    • SL6500 accommodates 4 chassis for a total of 288 servers per 4U (each server is quad-core).
    • Totals:
      • 288 nodes per 4U chassis
      • 2,880 nodes per 42U rack
      • Half-rack of 1,600 servers, 9.9 kW (6.1875W per server). Costs $1.2M ($750 per server). (reference)
  • HP Moonshot - "Gemini" - Atom processors
    • Presumably a similar layout to "Moonshot" but undetermined at this time.
    • Centerton Atom processors (S1200):
      • ~10W
      • 64-bit
      • 2 cores. Between 1.6 and 2.13 GHz (6.1W to 10W). 512KB L2 cache
      • ECC memory (DDR3L SO-DIMM at 1067 MHz; DDR3 UDIMM and SO-DIMM at 1333 MHz)
      • Supports Hyperthreading
  • Boston Viridis
    • Uses Calxeda EnergyCore SoC
    • 48 nodes per 2U
    • 300W per chassis (6.25W per server). An article at InsideHPC points out that it used 8W per server when running STREAM (7.9W for Linpack).
    • Up to 24 connected SATA devices
    • Fully loaded 2U chassis, 192GB memory (4GB per server) and 24 disks costs $50,000 ($1,041.67 per server)
    • Totals:
      • 48 servers per 2U (up to 4-core per server)
      • 1,008 servers in 42U
  • Quanta S900-X31A
    • 48 Atom S1200 servers in 3U
      • Two Atom servers per sled. Three rows of 8 sleds. 24 total sleds or 48 total servers
    • Dual-core processors up to 1.6 GHz
    • Up to 8GB of memory per node (2 nodes per sled)
    • GigE port per node (2 nodes per sled)
    • One 2.5" drive per node (2 nodes per sled)
    • Totals:
      • 672 servers per 42U rack (48 servers per 3U, fourteen 3U chassis per rack)

While maybe not an enterprise solution, the epitome of a low-power server is the Raspberry Pi (RPi). This is a rather simple single-core server, but it has a GPU, sound, networking (Fast Ethernet), a slot for an SD card, two USB ports, an HDMI output and 512MB of memory. All of this can be had for about $30.

Sure, the processor isn't that fast (a 700 MHz ARM11), and there is only Fast Ethernet, but for around $30-$50 you get a complete server that costs many times less than a single processor or even a memory DIMM in a normal server. Plus it is positively tiny. A Raspberry Pi is about the size of a credit card. It's a little thicker to accommodate the HDMI and USB ports, but overall it's just about 1" thick.

Imagine traveling with your own server. It would be lots of fun to go through airport security and put one of these in a bin by itself. It might prompt a question or two from the TSA but would hopefully not require any further investigation.

Microservers Instead of Virtualization?

Microservers sound rather cute, don't they? It's a small server with some limited capability relative to a full server, but a microserver also uses a great deal less power. Given that many servers are only about 15% utilized, this brings up a question.

Could very low-power microservers be used instead of virtualizing a much larger server? Rather than taking under-utilized servers and turning each into a VM alongside lots of other VMs on a larger server, couldn't you just put the application(s) from each under-utilized server on its own microserver?

Let's do a quick comparison between a cluster of microservers and a virtualized server. Let's assume that a single core on a larger server is equivalent to maybe 8 microservers. In the case of a dual-socket Intel Xeon server, that means the equivalent of 128 microservers in terms of performance (8 * 16 cores = 128).

If we assume a microserver has 4GB of memory and a single hard drive (300GB, 10K, 2.5" drive) for local space, this means we need a virtualized server with 512GB of memory (128 * 4GB) and 128 drives (or equivalent). Moreover, we need a server with at least 12x 10GigE ports (128 * 1Gbps = 128 Gbps ~= 12x 10GigE ports). In summary, we need a virtualized server with the following characteristics:

  • 16 Xeon cores (2x 8-core processors)
  • 512GB memory
  • 128 drives
  • 12x 10GigE ports (128 Gbps)
  • 128 VMs

This is quite a bit of hardware in a single server.
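If you want to check the arithmetic, here is the same sizing exercise as a few lines of Python. The ratios (8 microservers per Xeon core, 4GB and one drive per microserver, 1 Gbps per microserver) are the assumptions stated above, not vendor numbers:

```python
# Reproducing the sizing arithmetic above: what a virtualized server needs in
# order to stand in for 128 microservers, under the stated assumptions.
microservers_per_core = 8
xeon_cores = 16                                     # dual-socket, 8 cores each
equiv_servers = microservers_per_core * xeon_cores  # 128 microserver equivalents

mem_per_microserver_gb = 4
total_mem_gb = equiv_servers * mem_per_microserver_gb   # 512 GB

drives = equiv_servers * 1                               # 128 drives

total_net_gbps = equiv_servers * 1                       # 128 Gbps
ten_gige_ports = total_net_gbps / 10   # ~12.8; the example config uses 12 ports

print(equiv_servers, total_mem_gb, drives, ten_gige_ports)
```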

To better flesh out the virtualized server, I used on-line configuration tools for pricing and power usage. In particular, I used Dell for the configuration and the power measurements. The configuration I used is the following:

  • Dell PowerEdge R720 with:
    • 2x Intel E5-2650 processors (8 cores each)
    • 24x 32GB 1333 MHz LRDIMMs (total of 768GB of memory)
    • PERC H710P RAID card in RAID-1
    • 2x 300GB, 10K SAS, 2.5" drives (holding OS and hypervisor)
    • Broadcom 57800 2x10Gb DA/SFP+ + 2x1Gb BT Network Daughter card
    • 5x Broadcom 57810 DP DA/SFP+ Adapter
    • PERC H810 external RAID card
  • Five Dell PowerVault MD1220 JBODs. Total of 120 drives
    • 24x 300GB, 10K SAS 2.5" drives per chassis

This configuration has a total of 16 cores (the equivalent of 128 microservers), 768GB of memory (about 6GB per microserver instead of 4GB), 120 disks (instead of 128), and 12x 10GigE ports total. From the Dell on-line pricing tool the price is $98,000, which I assume is the list price, and the hardware takes up about 12U of rack space.

Then I used the Dell Energy Smart Solution Advisor (ESSA) to estimate the power used for the entire configuration when fully loaded (which should be the case since we're virtualizing the server). The configuration used a total of 2,223.5W under load.

On the microserver side of the equation let's use the Dell Copper system for comparison. With Copper there are 48 servers per 3U chassis, and each server is a quad-core ARM processor with up to 8GB of memory (let's assume 4GB), a single 2.5" drive and a single GigE port. A total of three chassis will be needed, but the third one only needs 8 sleds (instead of the maximum of 12). Therefore this takes up 9U of space and uses a total of 2,000W almost exactly.

For pricing, let's use the list pricing for HP's Moonshot configuration, which was about $750/server. For 128 servers this is a total of $96,000.
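For completeness, here is the microserver side of the math as a short script. It simply re-derives the chassis count, rack space, power and price from the Dell Copper packaging and the $750/server stand-in price used above:

```python
# The microserver side of the comparison: Dell Copper packaging for 128
# servers, with the HP Moonshot list price used as a stand-in, as above.
servers_needed = 128
servers_per_sled = 4
sleds_per_chassis = 12
chassis_watts = 750        # fully populated 3U chassis
chassis_height_u = 3

sleds = servers_needed // servers_per_sled             # 32 sleds
full, leftover = divmod(sleds, sleds_per_chassis)      # 2 full chassis + 8 sleds
chassis = full + (1 if leftover else 0)                # 3 chassis

rack_units = chassis * chassis_height_u                # 9U
watts_per_server = chassis_watts / (sleds_per_chassis * servers_per_sled)
total_watts = servers_needed * watts_per_server        # 2,000 W
total_price = servers_needed * 750                     # $96,000 at ~$750/server

print(rack_units, total_watts, total_price)
```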

Table 1 below compares the virtualized large server to the equivalent number of microservers.

Metric | Virtualized Server | Microserver (ARM processor)
Total number of servers | 16 (128 VMs) | 128 equivalent
Memory per server/VM | ~4-5GB per VM | 4GB per server
Number of drives per VM/server | ~1 (300GB, 10K, 2.5") | 1 (300GB, 10K, 2.5")
Network throughput per VM/server | ~1 Gbps | 1 Gbps
Price per VM/server | $765 per VM ($98,000 / 128) | $750 per server
Power per VM/server | 17.37W per VM (2,223.5W / 128) | 15W per server
Total used rack space | 12U | ~9U (third chassis is only 67% full)
Total price | $98,000 | $96,000
Total power | 2,223.5 W | 2,000 W


The table is very interesting because the two options are so close in most areas. The microserver option is about 2% cheaper and uses about 10% less power, differences that are likely within the margin of error for this comparison. However, the microserver option is denser than the virtualized option (about a third denser in this specific case: 9U versus 12U).
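The per-VM and per-server figures in the table fall straight out of the totals; a quick script makes the deltas explicit (note that dividing the 2,000W total by 128 gives about 15.6W per microserver, slightly above the rounded 15W figure used earlier):

```python
# Per-VM / per-server figures derived from the totals in Table 1.
virt_price, virt_watts, virt_space_u = 98_000, 2_223.5, 12
micro_price, micro_watts, micro_space_u = 96_000, 2_000, 9
units = 128   # VMs on the virtualized server, or physical microservers

print(f"Virtualized: ${virt_price / units:,.0f} and {virt_watts / units:.1f} W per VM")
print(f"Microserver: ${micro_price / units:,.0f} and {micro_watts / units:.1f} W per server")
print(f"Price difference: {(virt_price - micro_price) / virt_price:.1%}")
print(f"Power difference: {(virt_watts - micro_watts) / virt_watts:.1%}")
print(f"Rack space ratio: {virt_space_u / micro_space_u:.2f}x (12U vs. 9U)")
```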

Comparing and Contrasting Virtualized Servers and Microservers

You may already be comparing and contrasting the two options: virtualized servers and microservers. I prefer to contrast rather than compare them, but it's only natural that people will start comparing the options, so let's help things a bit.

The virtualized solution means that only one OS (plus the hypervisor) is actually installed on the hardware, and VMs are created as needed. You still have to build the VM images ahead of time, which is somewhat like installing an OS, but you can use the same image for every VM (meaning you only have to do it once).

In the case of the microserver, you have to actually install the OS on every server and manage it. If you are thinking that this means extra administration work, because you are arguably dealing with 128 OS installations versus two for the virtualized server (one host OS and one VM image), you have a very valid point.

However, in the HPC world, administrators long ago came to grips with deploying a large number of servers from the same image and managing them, so it's not an impossible task. For example, TACC (Texas Advanced Computing Center) just deployed a cluster named Stampede with 6,400 nodes using a small handful of images for the entire system.
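As a toy illustration of the "one image, many nodes" idea (and emphatically not how TACC or any particular provisioning tool actually works), the per-node state can be reduced to a tiny generated config while every node boots the same shared image. The hostnames, addresses and config fields below are made up:

```python
# Toy illustration of "one image, many nodes": every node boots the same OS
# image, and only a small per-node config (hostname, IP) is generated from a
# template. All names, addresses and fields are hypothetical.
from ipaddress import IPv4Address

TEMPLATE = """hostname={name}
ip_address={ip}
image=microserver-base-v1
"""

def render_node_configs(count: int, base_ip: str = "10.0.0.10"):
    """Return a {hostname: config-text} mapping for `count` nodes."""
    start = IPv4Address(base_ip)
    return {
        f"node{i:03d}": TEMPLATE.format(name=f"node{i:03d}", ip=start + i)
        for i in range(count)
    }

if __name__ == "__main__":
    configs = render_node_configs(128)
    print(configs["node000"])
    print(f"{len(configs)} node configs generated from one template")
```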

The virtualized server gives you the flexibility of moving a VM to a different server if circumstances change. However, if you need to do maintenance on the server that forces a reboot, you will need a second system very similar to the first to move those VMs to.

Virtualization gives you this flexibility; the microserver solution does not, because there is no VM on the microserver, it's all bare metal. You could, of course, virtualize the microservers themselves if you wished to handle these situations.

If, for some reason, the virtualized server fails, you then lose all 128 VMs. In the case of the microserver, if you lose a server, you only lose that single server, or 0.78% of the cluster.

The software eco-system around ARM processors is not the most complete. It seems as though each ARM implementation has a different boot mechanism and some variations in configuration. The virtualized server option has an advantage in this case because the VM is based on the same hardware as the server itself. The microserver world, specifically the ARM world, needs to catch up rather quickly relative to the virtualized world in this area.

Microservers: Up and Coming Solution or Cute Holiday Boutique Gift?

This is the title of this article and the originating question I used when starting the research. Microservers are becoming the rage because they are very small, allowing for very dense systems, very power-efficient systems, and presumably more cost-effective systems. But are they a truly viable approach for providing server resources?

In the comparison of microservers to virtualized servers, I was truly surprised by how close the results are. The price and power of the two systems are close enough that one can argue they are on equal footing (within about 10%). However, the microserver option is roughly a third denser, illustrating one of the draws of microservers: density.

There are aspects to both options that are appealing depending upon your situation. At this point, I think it is difficult to say with any certainty that one is better than the other although the software eco-system around ARM servers is still immature (but gaining speed).

Given the very strong roadmaps for both ARM processors and Intel Atom processors that have been discussed in the open literature, I think it's pretty safe to say that microservers are an up and coming solution. Are they valid today? Perhaps in some circumstances, but for the general case, perhaps not. But I believe they could be once the software issues are ironed out.

At the same time I think microservers could also be a cute boutique gift. My Raspberry Pi is utterly cool for many reasons. I'm working on my portable RPi HPC cluster now so I can freak out the TSA. That said, I will be giving myself an extra 2 hours to get through security in case they want to ask more detailed questions ... or in case they happen to be RPi enthusiasts as well.
