10 Server Predictions for '11

Tuesday Dec 28th 2010 by Kenneth Hess

10 predictions based on server trends and server hardware evolution. How accurate will they be?

It's that time of year when everyone predicts what will happen in the coming year, usually about how enterprise businesses will spend money or how consumers will react to a product or event. Some predictions, though, are more concrete and measurable than others. These 10 predictions for 2011 will have you watching the latest developments from the big hardware manufacturers. Sure, some are wishful thinking, but others are clearly within reach for a 2011 delivery time frame.

1. SSDs

Solid state drives, or SSDs, started hitting production servers in stealth fashion in 2010, but expect their presence to grow in 2011. SSDs have no moving parts and a mean time between failures (MTBF) in the 1 million hour range, and their price is now within affordable reach for even the stingiest server builder.

2. On-Demand CPUs

With the move toward virtualization and cloud computing, CPU manufacturers will have to continue changing with the times and the demands of this new computing environment. On-demand CPU allocation is one way to do that. Here's how it works: a system has a certain number of physical CPUs, each capable of hosting a number of virtual CPUs. When demand on a physical CPU climbs past a preset threshold, the system fires up another virtual CPU. When demand rises to the point where bringing another physical CPU online would decrease the overall load on the system, that CPU fires up to meet the demand. Dynamic CPU allocation handled in hardware, bypassing the software layer entirely, would dramatically improve system responsiveness and efficiency.
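The threshold-driven scaling described above can be sketched in a few lines. Everything here is illustrative: the class name, the 80 percent trigger, and the scale-up policy are assumptions for the example, not any vendor's actual mechanism.

```python
# Hypothetical sketch of threshold-based on-demand CPU allocation.
# Names and the 0.80 trigger are illustrative, not a real vendor API.

class OnDemandCpuPool:
    def __init__(self, physical_cpus, vcpus_per_cpu, activate_at=0.80):
        self.physical_cpus = physical_cpus   # total physical CPUs in the box
        self.vcpus_per_cpu = vcpus_per_cpu   # virtual CPUs each physical CPU can host
        self.activate_at = activate_at       # load fraction that triggers scaling
        self.active_physical = 1             # start with one physical CPU online
        self.active_vcpus = 1

    def report_load(self, load):
        """Scale virtual CPUs first, then physical CPUs, as load crosses the threshold."""
        if load < self.activate_at:
            return
        if self.active_vcpus < self.active_physical * self.vcpus_per_cpu:
            self.active_vcpus += 1           # fire up another virtual CPU
        elif self.active_physical < self.physical_cpus:
            self.active_physical += 1        # vCPUs exhausted: bring a physical CPU online
            self.active_vcpus += 1
```

A caller would feed periodic load samples into `report_load()`; the pool adds virtual CPUs until the active physical CPUs are fully subscribed, then powers on the next physical CPU.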

3. Reduced Format

With data center floor space and rack space at a premium, it's time for server hardware manufacturers to respond. For reduced format server systems, don't think 2U, 1U or blades; think MacBook Air. That doesn't mean you should put MacBook Air units into your data centers, but rather that this type of "paper thin" format is the future. The use of SSDs and flash memory for housing operating systems and applications means server formats could use a major design overhaul.

4. Hardware-Based Watchdogs

How much do you spend annually on watchdog software programs to monitor system performance, to keep an eye on TCP ports, and to maintain patches and other system needs? Mainboard manufacturers will step up their game to provide these services on a chip. On-chip services alleviate problems with connectivity to a reporting server, since local logging will continue even if the network is offline.
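Two of the watchdog duties mentioned above, port checks and a local-log fallback, can be sketched in software. The host name, syslog-style port and log path below are placeholders; a real on-chip watchdog would do this beneath the OS.

```python
# Illustrative software sketch of watchdog duties: probe TCP ports and
# fall back to local logging when the reporting server is unreachable.
# monitor.example.com, port 514 and the log path are placeholder values.
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def report(message, log_path, reporting_server=("monitor.example.com", 514)):
    """Ship an event to the reporting server if reachable; otherwise log locally."""
    if port_open(*reporting_server):
        pass  # network delivery omitted in this sketch
    else:
        with open(log_path, "a") as log:  # local fallback keeps the record
            log.write(message + "\n")
```

The point of the fallback branch is exactly the article's claim: monitoring data survives a network outage because it lands in a local log instead of being lost.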

5. Wireless Connectivity

Servers will ship with wireless connectivity and with wireless turned on by default. Soon, mainboard manufacturers will drop integrated wired connections for the less costly wireless ones. High bandwidth wireless connectivity inside a data center is a beautiful thing. A more efficient data center without miles of cables, concentrators and racks of switches will be a significant shift indeed.

6. On-Board Hypervisor

What's better than hypervisor-enhanced CPUs? Hardware hypervisors. Think of the possibilities of having your hypervisor integrated into the hardware backbone of your systems. Exciting, isn't it? It is if you think of the companies capable of making it happen: VMware and Cisco.

7. Server-to-Server Communications

Servers that are "aware" of each other are another technology you'll soon use as part of your daily regimen of computing hardware. At first, manufacturer-specific protocols will emerge as the standard. In other words, your Dell servers will all know about each other and carry on private conversations about load, utilization and failures. This communication infrastructure will provide essential information for those involved in Tier 4 data center support.
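The "private conversations about load" idea reduces to each server keeping a table of its peers' last-reported state. This is a purely in-memory sketch with invented names; a real implementation would exchange these reports over the network.

```python
# Illustrative sketch of peer "awareness": each server tracks the load
# its peers last reported and can pick the least-loaded one. Class and
# method names are assumptions for the example.
class PeerAwareServer:
    def __init__(self, name):
        self.name = name
        self.peer_load = {}   # peer name -> last reported load fraction

    def hear(self, peer_name, load):
        """Record a load report received from a peer server."""
        self.peer_load[peer_name] = load

    def least_loaded_peer(self):
        """Return the peer best placed to take on more work, or None."""
        if not self.peer_load:
            return None
        return min(self.peer_load, key=self.peer_load.get)
```

With this table in hand, a server (or a Tier 4 support engineer reading its reports) can spot overloaded or silent peers at a glance.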

8. OS-Specific Enhancements

How would you like to order a server or group of servers that all have Linux-enhanced hardware in them? You'd have servers that deliver higher performance and require fewer tweaks and workarounds. With OS-specific enhancements, your systems will also withstand the OS upgrade tragedy that strikes your wallet every few years. The reason? The new operating system versions should experience less bloat if developers know that they can push off some of that bloat to on-chip components. If you don't believe it's possible, think back to your Motorola-based Mac.

9. Virtual Component Architecture

Like No. 6 (On-Board Hypervisor) above, you'll soon have the opportunity to purchase components, such as memory and network interface cards (NICs), with optional virtual component architecture (VCA). VCA allows you to purchase virtualization-aware hardware. The advantage of VCA hardware is that you can allocate bandwidth selectively to your hardware components. VCA opens a whole new world of possibilities for bandwidth and usage partitioning for virtual machines.
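The selective bandwidth allocation that VCA promises can be pictured as partitioning a NIC's capacity among virtual machines. The class below is a hypothetical model, not a real driver interface, and the bandwidth figures are invented for the example.

```python
# Hypothetical model of a virtualization-aware NIC that partitions its
# bandwidth among VMs and refuses oversubscription. All names and the
# Mbit/s figures are assumptions for illustration.
class VirtualAwareNic:
    def __init__(self, total_mbps):
        self.total_mbps = total_mbps
        self.shares = {}   # VM name -> allocated Mbit/s

    def allocate(self, vm, mbps):
        """Reserve bandwidth for a VM; replacing an old share frees it first."""
        already_used = sum(self.shares.values()) - self.shares.get(vm, 0)
        if already_used + mbps > self.total_mbps:
            raise ValueError("NIC bandwidth oversubscribed")
        self.shares[vm] = mbps
```

Rejecting an allocation that exceeds the card's capacity is the "usage partitioning" part of the prediction: each VM gets a guaranteed slice rather than contending blindly.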

10. Configurable Computing Bandwidth

What if you could pool all unused memory, CPU and network bandwidth from your systems for specific tasks or to use as available computing power? This isn't a far-fetched idea for anyone who has ever installed the SETI@home application or a peer-to-peer app on their computer that borrows spare cycles to solve a problem. Hardware that is not only aware of other similar network components but that can also automatically, or manually, donate or request bandwidth from the pool is a cloud computing dream come true.
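The donate-and-request cycle above can be sketched as a simple shared pool. This is a toy model in the spirit of SETI@home-style cycle scavenging; the class name and the arbitrary capacity units are assumptions for the example.

```python
# Illustrative sketch of a shared capacity pool: nodes donate spare
# capacity and consumers draw from it. Units are arbitrary; a real
# system would track memory, CPU and bandwidth separately.
class CapacityPool:
    def __init__(self):
        self.available = 0.0   # pooled spare capacity, arbitrary units

    def donate(self, node, amount):
        """A node contributes unused capacity to the pool."""
        self.available += amount

    def request(self, amount):
        """Grant up to `amount` from the pool; return what was actually granted."""
        granted = min(amount, self.available)
        self.available -= granted
        return granted
```

The interesting design question the prediction raises is who calls `donate()`: hardware that notices its own idle capacity and contributes automatically, or an administrator doing it manually.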

Ken Hess is a freelance writer who writes on a variety of open source topics including Linux, databases, and virtualization. He is also the coauthor of Practical Virtualization Solutions, which was published in October 2009. You may reach him through his web site at http://www.kenhess.com.
