Edge computing involves pushing data and computing power away from a centralized point to the logical extremes of a network. Find out why this new topology is growing in popularity and what changes may be in store for some servers.
By Carl Weinschenk
One could easily make the argument that no element of the network will be more impacted by potent edge computing topologies than servers. The pace and nature of these changes are beginning to crystallize.
Edge computing, as the name implies, involves pushing data and computing power away from a centralized point to the logical extremes of a network. A number of approaches making headlines today -- mesh computing, peer-to-peer computing, autonomic (self-healing) computing, and grid computing -- are part and parcel of the edge computing concept.
This has long been a goal of network designers, and it is happening. "We dreamed about this sophistication at the edge for years, and now it's being deployed," Tim O'Neill, a director of sales and marketing for AppDancer Networks, a company that makes network analysis tools, told ServerWatch. "We're seeing a lot more push, a lot more demand for specialization at the edge."
The advantages of edge computing are undeniable. Running applications at the edge cuts down on the amount of data that must be trafficked and the distance that what is sent must go. Both of these reduce transmission costs, shrink latency and, therefore, improve quality of service (QOS). By eliminating or de-emphasizing the core, edge computing limits or removes a major point of failure and a potential bottleneck. Security is inherently better as encrypted data moves further into the network. Since data coming toward the enterprise passes firewalls sooner, viruses and hackers can be caught earlier. Finally, the capability to "virtualize" -- i.e., logically group CPU capabilities on an as-needed, real-time basis -- extends scalability.
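The traffic and latency savings described above come largely from answering repeated requests at the edge rather than hauling each one back to the core. A minimal sketch of that idea (the names here are illustrative, not any vendor's API):

```python
# Minimal sketch of an edge cache: repeat requests are answered locally,
# so only cache misses travel back to the origin (core) server.

class EdgeCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin  # callable that hits the core
        self._store = {}                 # local copies of content
        self.origin_hits = 0             # how often we had to go upstream

    def get(self, key):
        if key not in self._store:       # miss: one trip to the origin
            self.origin_hits += 1
            self._store[key] = self._fetch(key)
        return self._store[key]          # hit: served from the edge


def origin(key):
    # Stand-in for the central server's (expensive, distant) response
    return f"content-for-{key}"

edge = EdgeCache(origin)
for _ in range(1000):                    # 1,000 identical client requests...
    edge.get("/index.html")
print(edge.origin_hits)                  # ...cost only one origin round trip
```

A thousand requests reach the edge, but the core sees exactly one — which is the whole economic argument for pushing content outward.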
Edge computing comes in many shapes and flavors, and the view of the edge differs depending on the customer. Sun Microsystems, for example, divides its edge computing efforts into three categories: the data center edge, the customer edge, and the network edge. The following table illustrates what Sun believes are the key elements and focus of each category.
Sun's Edge of Reality

| | The Network Edge | The Customer Edge | The Data Center Edge |
|---|---|---|---|
| Focus | Network infrastructure | Access to internal and external customers and users | Secure and reliable interface to the data center |
| Key Elements | CDNs, DNS services, global load balancing, encryption, QOS, and SLA enforcement | Intranet serving gear, application staging services, local caching, network management, access points, and firewall/VPN | Web serving, e-mail services, network management, and intrusion detection |

Source: Sun Microsystems
Activity at the edge is accelerating for a number of reasons. For years, network designers bought into the idea that "bandwidth is free." As that notion fades, efforts to cut transmission requirements grow. At the same time, running apps at the edge is increasingly practical. The cost of CPU cycles is falling. More condensed and self-contained applications are being written. Stable, robust, and secure transmission protocols are being deployed. Finally, penetration of Web services -- which are the kindred spirit of edge computing -- is growing.
The gating factor in moving to the edge is the specific task the server is performing, said Peter Salus, the chief knowledge officer for Matrix NetSystems. The move of full apps to the edge will be slower since they run on "stateful" application servers that remember clients between communications and perform very specific tasks. It is hard to buttress these servers by "virtualizing" server muscle from elsewhere. Stateless servers (those that retain no memory of previous interactions) are more easily aided by virtualized serving capacity brought from elsewhere. "An application server has to follow very complex business rules and broker requests," Salus said.
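Salus's distinction can be made concrete in a few lines. A stateless handler keeps nothing between requests, so any replica at the edge can answer any client; a stateful application server remembers the client between communications, so requests must return to the one instance holding that memory. A hedged, generic sketch (both handlers are hypothetical, not any vendor's product):

```python
import random

# Stateless: any replica can serve any request, so capacity can be
# "virtualized" simply by adding interchangeable copies at the edge.
def stateless_handler(request):
    return f"rendered page for {request}"

# Stateful: the server remembers the client between communications
# (e.g., a shopping cart), so the client is pinned to this instance.
class StatefulAppServer:
    def __init__(self):
        self.sessions = {}               # per-client memory

    def handle(self, client_id, item):
        cart = self.sessions.setdefault(client_id, [])
        cart.append(item)
        return cart

replicas = [stateless_handler] * 3       # three interchangeable edge copies
answers = {random.choice(replicas)("/home") for _ in range(10)}
print(answers)                           # one answer, whichever replica ran

app = StatefulAppServer()
app.handle("alice", "book")
print(app.handle("alice", "lamp"))       # only *this* instance knows the cart
```

Because the stateless replicas are interchangeable, borrowing "server muscle from elsewhere" is trivial; the stateful server cannot be backed up that way without first moving or sharing its session memory.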
Regardless, the impact on servers will be profound. Decentralization means that as a group servers will get smaller. Big, fully functional machines now in the middle of the network will be de-emphasized in favor of smaller, more flexible units better suited to remote operation. Blade configurations will become more common because they increase flexibility and reduce environmental and power requirements. "I think we will have a smaller number of large servers from Sun, Dell, or HP," Salus added.
Other changes will abound, experts say. While decentralization will take the emphasis away from the core, each server at the edge will become a bit more important. Thus, it is likely that server designs will accommodate increased clustering, redundancy, and connection to network-attached storage (NAS) capabilities. Hardware-based routing engines and secure socket layer (SSL) termination will be put into these machines. Although not a direct cause of the growth of edge computing, future servers will have to accommodate the growth of 802.11b wireless fidelity (Wi-Fi) protocols. Finally, decentralization will imply changes in the way in which servers are managed.
Bill Roth, group marketing manager for Sun's x86 server line, notes that the natural path will be to build as much into edge servers as possible. Eventually, he says, Sun edge servers will host entire e-mail, portal, and content management applications.
Building servers for the edge will involve respoking, not reinventing, the wheel. Servers for different classes of customers carry much the same functionality, although different elements are emphasized. Two such categories are service providers and end users, said Gordon Smith, vice president of marketing for Speedera Networks. "Being a service provider, the evolution of edge computing for us consists of customized deployment of the servers that already exist," Smith said.
Currently, customer edge gear is not hot, Sun's Roth said. "In many cases the emphasis has redoubled on the network edge and data center edge at the expense of the customer edge," Roth noted. "In economic hard times, people are willing to give up a certain amount of control. They need a Web presence and are willing to use a data center to do it."
The move to the edge suggests that in many cases execution of applications will be divided between the edge and core. Much of the evolution of the edge -- and the outfitting of the servers that populate it -- will depend on how that division of labor progresses. "The idea of coordination between the central site and the edge in a more structured way is in its early phases," said Joe Anthony, program director for IBM's WebSphere marketing.
The ability to execute logic at the edges -- not just cache information that is dependent on the central source -- is being built into WebSphere version 5, Anthony said. This evolution will accelerate because of the emergence of Web services (which by definition run in a more decentralized manner) and by Edge Side Includes (ESI), an emerging standard for defining Web page components and dynamic assembly at the network's edge.
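ESI's dynamic assembly works by letting an edge server stitch long-lived cached fragments together with fresh or personalized ones, rather than caching whole pages. A minimal fragment in the ESI 1.0 style (the URLs are purely illustrative):

```html
<html>
  <body>
    <!-- Static shell: cacheable at the edge for a long time -->
    <esi:include src="/fragments/header.html"/>

    <!-- Personalized fragment: fetched fresh or cached only briefly -->
    <esi:include src="/fragments/account-summary.html"/>

    <!-- ESI can also fall back to alternate content if the origin fails -->
    <esi:try>
      <esi:attempt>
        <esi:include src="/fragments/stock-ticker.html"/>
      </esi:attempt>
      <esi:except>
        <p>Ticker temporarily unavailable.</p>
      </esi:except>
    </esi:try>
  </body>
</html>
```

The edge server resolves each `esi:include` before the page reaches the client, so only the volatile fragments ever travel back to the core -- the same traffic-reduction logic that motivates edge computing generally.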
Dispersion of functionality doesn't happen in a uniform manner. Certain business applications will remain at the core for logistical reasons, even if it is technically possible to push them out, while other apps will get their tickets punched at the earliest possible moment. Which functionality remains in the core and which is shipped out will have ramifications throughout the network in terms of security, communications, and overall network management.
For this reason, the line between the core and the edge will become fuzzy. This line will continue to shift as technology, applications, and other elements evolve, leaving the world of servers and server technologies to continue in a state of permanent evolution. What is certain, however, is that servers will continue to sit on the precipice of significant and long-term change.
This article was originally published on Wednesday Dec 4th 2002