The Software Defined Data Center Emerges (Slowly)

Monday Feb 3rd 2014 by Jeff Vance

A variety of IT trends are fueling the software defined data center, yet SDDC still lacks a central driver to propel it into the mainstream.

Due to the increasing importance of virtualization and cloud computing in the enterprise, the Software Defined Data Center (SDDC), or Virtual Data Center, is gaining momentum. Once applications are separated from underlying hardware, it makes sense to push the concept further. And once application assets reside in a nebulous cloud that IT may struggle to gain visibility into, let alone maintain and manage, the SDDC starts to make quite a bit of sense.

To date, the only Software-Defined market with any real traction is for Software Defined Networking products. A 2012 study by IDC predicted that SDN spending would reach $360 million in 2013, expanding to $3.7 billion by 2016.

Analyst forecasts aren’t the only way to measure the potential of SDN, though. In 2012, VMware acquired SDN startup Nicira for $1.26 billion. Brocade, Cisco, and Juniper followed suit by acquiring Vyatta, Cariden, and Contrail, respectively. Moreover, VCs are pouring serious money into this sector, heavily financing such startups as Affirmed Networks ($103 million raised to date), Big Switch Networks ($45 million), and Plexxi ($48 million).

From SDN to SDDCs

Building on SDN’s success, the SD concept has migrated to storage, security, and, now, the entire data center. At the most basic level, SDN separates a networking device’s control plane (the built-in management logic) from its data plane (the hardware that forwards network traffic to other devices).
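The split between the two planes can be sketched in a few lines. This is a toy model with hypothetical classes, not a real SDN framework: the point is that the switch holds only a lookup table, while all decision logic lives in one central controller.

```python
class Switch:
    """Data plane: forwards packets by looking up rules pushed from outside."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination -> output port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, packet):
        # No local decision logic -- just a table lookup.
        # A miss is punted to the controller, which decides what to do.
        return self.flow_table.get(packet["dst"], "controller")


class Controller:
    """Control plane: central logic that programs every switch in one place."""
    def __init__(self, switches):
        self.switches = switches

    def set_path(self, dst, hops):
        # hops: list of (switch, out_port) pairs along the chosen path
        for switch, port in hops:
            switch.install_rule(dst, port)


s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])
ctrl.set_path("10.0.0.2", [(s1, 2), (s2, 1)])
print(s1.forward({"dst": "10.0.0.2"}))  # 2
print(s2.forward({"dst": "10.0.0.9"}))  # "controller" (table miss)
```

Swapping the path for every switch is a single call on the controller, which is exactly the centralization the article describes.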

Unlike early server virtualization, where the focus was on consolidation, SDN’s separation of the planes makes it possible to program the entire network in a different way. This shift allows applications themselves to control networking and security features. Or control functions could eventually be centralized and unified into some sort of higher-level cloud control plane or management suite.

In contrast, the status quo is that a device from Cisco or Juniper or whomever ships with vendor-supplied firmware that handles control and invariably results in all sorts of vendor-lock issues. Thus, another benefit of SDN is the ability to exercise unified control in a heterogeneous environment.

Software Defined Storage builds on this concept, treating various storage devices as a single pool of storage that can be controlled centrally. The Software Defined Data Center, then, is an additional layer of abstraction above the other virtualization and SDx layers, providing centralized control of all of these assets, no matter where they are located.
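The storage-pooling idea can be illustrated with a minimal sketch. The class and placement policy below are invented for illustration; real Software Defined Storage products expose far richer policies, but the core abstraction is the same: heterogeneous devices in, one capacity pool out.

```python
class StoragePool:
    """Toy model: treat devices from different vendors as one pool."""
    def __init__(self):
        self.devices = {}  # device name -> free capacity in GB

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    def total_free(self):
        # Consumers see one aggregate number, not per-device capacities.
        return sum(self.devices.values())

    def allocate(self, size_gb):
        # Simple placement policy: use the device with the most free space.
        name = max(self.devices, key=self.devices.get)
        if self.devices[name] < size_gb:
            raise RuntimeError("pool exhausted")
        self.devices[name] -= size_gb
        return name


pool = StoragePool()
pool.add_device("vendor_a_array", 500)
pool.add_device("vendor_b_nas", 200)
print(pool.total_free())   # 700
print(pool.allocate(100))  # "vendor_a_array"
```

The caller asks the pool, not a particular array, for capacity; which vendor's hardware backs the volume becomes a policy decision made centrally.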

“While originally I was skeptical about any SDx assignation, the more I think about SDDCs the more the concept resonates,” David J. Cappuccio, Research VP at Gartner, noted. “The idea is that in a perfect world data center resources would be placed where it made the most economic sense, and then the allocation and use of those resources could be controlled by rules and analytics, allowing both workflows and workloads to be moved, or directed, where they best served the business at any particular point in time.”

Why are SDDCs important?

A number of factors are converging to accelerate the interest in SDx products. For starters, networks are getting bigger, faster, and far more complicated. Meanwhile, applications are breaking away from their siloes and being shifted to the cloud, mobile devices, connected home appliances, the M2M world, etc.

The rapid adoption of cloud computing means that traditional hardware-based networking just won’t keep up with the needs of both service providers and cloud consumers.

The same is true for storage. In theory, cloud computing makes automatic backups and disaster recovery practically table-stakes features for cloud services, but VM sprawl and network constraints hinder that vision.

Let’s also not forget the spread of WLANs in the enterprise. The various networking devices associated with WLANs (and serving Bring Your Own Device employees) are also sprawling practically out of control. As such, wouldn’t it make sense to centralize the control of the many WLAN switches and APs scattered through corporate campuses?

Finally, all of these new application consumption models introduce numerous new security risks. SDx could help security to evolve beyond its perimeter-protection roots into something that better matches today’s cloud and mobile environments. Moreover, IT security pros would be able to shift from a reactionary, firefighting mode, so they could actually spend time analyzing data and behaviors in order to proactively secure dynamic environments.

Add all of this up and the SDDC concept reaches beyond the data center to deliver services from the right place to the right end user in an efficient manner.

The Seven Properties of Network Virtualization

Before being acquired by VMware, the founders of Nicira laid the groundwork for SDDCs by defining “The Seven Properties of Network Virtualization”:

1. Independence from network hardware

2. Faithful reproduction of the physical network service model

3. Follow operational model of compute virtualization

4. Compatible with any hypervisor platform

5. Secure isolation between virtual networks, the physical network and the control plane

6. Cloud performance and scale

7. Programmatic network provisioning and control

Boiling down what all of this actually means, the properties essentially say:

1) Avoid outdated vendor-lock architectures

2) Be sure to factor in all of those legacy workloads that weren’t written for virtualized and cloud environments, but which won’t be phased out any time soon

3) Support the networking of VMs in the same way they were designed, i.e., don’t limit VM flexibility

4) Be hypervisor agnostic, which is obvious enough in the statement above, but this is another argument for openness and against vendor lock

5) Realize that multi-tenant environments introduce new security threats, so be sure to factor secure isolation in from the get-go, rather than as an afterthought

6) Figure out how to move beyond the physical limitations that still constrain networking, which, for instance, cap VLAN IDs at 4,096, not nearly enough for cloud scale

7) Move beyond the one-device-at-time programming that is the status quo
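The scale problem in point 6 comes down to simple arithmetic: the VLAN ID is a 12-bit field in the 802.1Q header, while overlay protocols such as VXLAN use a 24-bit segment identifier, which is why overlays are the usual answer to cloud-scale tenancy.

```python
# 802.1Q reserves 12 bits for the VLAN ID; VXLAN's VNI is 24 bits.
vlan_ids = 2 ** 12
vxlan_vnis = 2 ** 24
print(vlan_ids)    # 4096
print(vxlan_vnis)  # 16777216
```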

Failing to adhere to any one of those points could severely limit the potential of SDN and could throw up obstacles for the shift to SDDCs.

OpenFlow’s role in SDDC’s future

An important standard to keep an eye on as the SDDC concept gains traction is OpenFlow, an open, programmable network protocol that can be used to manage traffic among various networking devices from various vendors.
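An OpenFlow-style flow table pairs match fields with actions, with the highest-priority matching rule winning and a catch-all table-miss rule at the bottom. The sketch below models that idea in plain Python; the entries and field names are illustrative, not the actual OpenFlow wire format.

```python
# Hypothetical flow entries in the OpenFlow style: each rule pairs match
# fields with an action list, and the highest-priority match wins.
flow_entries = [
    {"priority": 200, "match": {"ip_dst": "10.0.0.5", "tcp_dst": 80},
     "actions": ["output:3"]},
    {"priority": 100, "match": {"ip_dst": "10.0.0.5"},
     "actions": ["output:2"]},
    {"priority": 0, "match": {}, "actions": ["drop"]},  # table-miss rule
]

def lookup(packet):
    # An empty match dict matches every packet, so the priority-0 rule
    # acts as the default.
    for entry in sorted(flow_entries, key=lambda e: -e["priority"]):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return []

print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # ['output:3']
print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 22}))  # ['output:2']
print(lookup({"ip_dst": "10.0.0.9"}))                 # ['drop']
```

Because the rules are just data, any vendor's switch that speaks the protocol can be programmed from the same controller, which is the multi-vendor promise the article describes.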

For a time, it seemed that OpenFlow and SDN were one and the same. It’s important here to note that SDN was initially seen as a Cisco killer. However, Cisco is now throwing its weight around and has its own SDN plans, which won’t necessarily include OpenFlow, and may instead focus on using network overlays to serve as the bridge from physical devices to virtual workloads.

VMware has issued similar statements de-emphasizing OpenFlow and prioritizing vCloud.

There are two ways to look at this. One is that two leading data center behemoths are pushing back against the many SDN startups challenging their turf. The other is to look at this as typical of open-source projects. Vendors still need to differentiate themselves and will create something somewhere that is proprietary and prevents their devices from becoming completely commoditized.

For now, SDN is immature enough that it’s best to reserve judgment.

Symantec and VMware partner for one of the first SDDCs

SDDC is still in its infancy, so there aren’t many real-world use cases to reference. However, Symantec recently announced that it will rely on VMware’s vCloud Suite as the foundation for its SDDC. This could have major implications for both companies moving forward, especially since they are already long-standing partners.

In the near term, Symantec is using vCloud for Global Symantec Labs. The company will be able to create flexible, virtualized test environments that recreate a customer’s setup for testing and troubleshooting, without having to repeat this process over and over for each customer. Instead, Symantec will build test kits that can be shared with various security pros via the cloud. The result is being referred to as the “Symantec Lab Cloud.”

According to VMware, “With its software-defined data center, Symantec can leverage vCloud Suite to virtualize its existing infrastructure, to abstract and pool hardware, networking and storage resources, and then to deploy and manage those resources via software. vCenter adds the monitoring and management controls necessary to support Symantec’s rapid growth, and control of their data center operations is automated by software in vCloud Suite.”

More examples could be on the way soon.

New SDDC research from Enterprise Management Associates found that business units are exerting pressure on IT departments to accelerate the shift to SDDCs. Currently, there are no central management technologies that are able to control and unify the entire data center and the public cloud. However, EMA argues that successfully implementing the SDDC starts with an IT operations mindset that focuses on reinventing the infrastructure provisioning and management process in a much more policy-driven manner.

“SDDC cannot be implemented in the form of a technology project, but rather constitutes a concept that describes guidelines that follow the multi-year vision of entirely closing the traditional gap between enterprise IT and the business," said Torsten Volk, EMA Research Director, Systems Management.

Are SDDCs inevitable or just more tech trend hype?

Let’s take a second to look at how we got to this point. Virtualization was a trend that many regarded as overhyped until VMware started taking over the data center. Virtualization’s benefits were too great and too obvious for skeptics to rail against for long.

The next obvious move after virtualization caught on was to decouple applications from specific data centers, transforming traditional on-premises applications into services delivered from the cloud. Remember, there were plenty of cloud computing skeptics panning this trend as well, with Larry Ellison being one of the most notable cloud deniers (until recently).

Critics have noted that until SD programmability is actually shipped in large volumes in real-world routers, switches and other infrastructure appliances, the SDDC is a mirage. Legacy infrastructure tends to have staying power, especially in organizations with tight IT budgets, so this isn’t a critique to dismiss out of hand.

Yet SDx seems to be following a path very similar to those of virtualization and the cloud before it. What’s missing, at this point, is the major vendor that pushes the trend from hype to inevitability. VMware did this for virtualization, becoming a major vendor in the process, and a cadre of vendors, including Google, Amazon, and even Microsoft, pushed the cloud into the mainstream.

Who will lead the SDx shift from promising technology to world-beater, however, is still an open question.


Originally published on Datamation.