Here are some frequently heard criticisms of traditional networking – be it data center, enterprise campus, telecom, you name it:
- The network has become too complex. It is difficult and time-consuming to change and adapt the infrastructure to keep pace with today’s innovative and ever-changing business landscape.
- Implementing new technology features too often requires new hardware, which again takes time before the end user benefits can be realized.
- Innovating on new technologies and features takes too long because network intelligence is siloed away from applications and thus cannot benefit from the valuable data in those systems.
- Networking hardware is too often proprietary or closed, limiting the choices available for scaling or enhancing the data center in the future.
- Networking hardware is too static, meaning that to upgrade or innovate with a new application (e.g. a new type of database system, a new scale of web service, a move from Layer 3 to Layer 2 metro services, etc.) one either can’t, or must rebuild much of the data center around the new application.
- The network often holds other efforts back or slows them down, when in theory it could be doing the opposite and driving innovation forward.
Let’s look at two very different trends – each answering a different subset of those criticisms.
1. Converged infrastructure
This trend supposes that a layer of complexity is removed when technology providers abstract data center configuration by pre-assembling the components so businesses don’t have to deal with the pieces. With a focus on the data center, this typically means building tighter integrations as well as preconfiguring the storage, networks, and compute servers with different applications and uses in mind, so you’re left with a data-center-in-a-box. But which of the challenges listed above does this address? Outsourcing complexity to the provider certainly reduces the time to build and scale data centers, but at the cost of buying into a closed solution. The acquisition of VCE by EMC raises questions about the pros and cons of this model: some call it the most successful joint venture in IT, while others suspect that VCE hasn’t yet turned a profit (though VCE has kept its profitability numbers private). In any case, purchasing proven and tested systems streamlines installation and builds trust. I just don’t think a solution needs to be so closed in order to do that. Reference architectures such as EMC’s VSPEX, which deliver repeatable end-to-end solutions even if you don’t go with VCE, leave choices more open without sacrificing the efficiencies of a proven solution architecture.
2. Software Defined Networking (SDN)
Then there’s Software Defined Networking (SDN), which in most architectures disaggregates networking and IT intelligence into separate pieces in order to create more layers of abstraction and thus increase agility, control, and automation. Traditional routers and switches have software embedded within the hardware, and the two usually come as a package (you can’t pick and choose hardware and software independently). SDN says there should be the option to separate networking intelligence from networking hardware; the interface between the two is what SDN calls the “Southbound API.” Software is of course a big category – you can do a lot with software in IT beyond a network operating system or control plane, and in my opinion true SDN includes all of the above. For example, software can also specify how applications talk to the network – asking for more resources, giving the network a heads-up on user information, telling the network when an application uses new TCP ports, etc. (in SDN terms this happens through the “Northbound API”). The most extreme incarnation of this model has the potential to add complexity, but also to increase agility and choice, and thus addresses many of the challenges listed above.
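To make the southbound/northbound split a little more concrete, here is a minimal sketch in Python. Everything in it is illustrative: the field names, switch IDs, and request shapes are hypothetical and not tied to any real controller’s API – a real deployment would use something like OpenFlow southbound and a controller’s REST interface northbound.

```python
import json

def build_flow_rule(switch_id, match_port, out_port, priority=100):
    """Southbound sketch: an OpenFlow-style rule telling a switch to
    forward traffic arriving on one port out another port.
    (Field names are illustrative, not a real protocol encoding.)"""
    return {
        "switch": switch_id,            # which forwarding element to program
        "priority": priority,           # higher priority wins on overlap
        "match": {"in_port": match_port},
        "actions": [{"output": out_port}],
    }

def northbound_request(app_name, bandwidth_mbps):
    """Northbound sketch: an application asks the controller for
    resources instead of touching switches directly."""
    return {
        "app": app_name,
        "request": {"min_bandwidth_mbps": bandwidth_mbps},
    }

if __name__ == "__main__":
    # A controller sits between the two: it would translate the
    # northbound request into southbound flow rules pushed to hardware.
    rule = build_flow_rule("switch-01", match_port=1, out_port=2)
    req = northbound_request("video-analytics", 500)
    print(json.dumps(rule))
    print(json.dumps(req))
```

The point of the sketch is the direction of each interface: applications speak “down” to the controller in terms of intent (bandwidth, priorities), and the controller speaks “down” to hardware in terms of match/action rules.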
How much do these trends really conflict?
To the uninitiated, these may seem like opposing approaches: integrate and bring the pieces closer together, or disaggregate and have more moving pieces. But as we’ve seen, each trend has its own advantages. Those interested in getting things up and running quickly may go with the most converged infrastructure available, while those interested in open, dynamic networks that can leverage new innovations faster will look to SDN. As for the strategies of the vendors providing these solutions, one might argue that the more converged approaches extract value rather than add value for end users, but as Marc Andreessen says, there’s money to be made in bundling as well as unbundling. The good news is, networks don’t have to swing to one extreme or the other. After all, what good is SDN if the pieces don’t work well together? Many will seek SDN solutions that have been well tested, proven, and well integrated with the greater ecosystem. Thanks to the rise of open source initiatives like OpenDaylight, the Open Networking Lab (ON.Lab) and its ONOS project, the Open Compute Project, and more, open solutions are gaining momentum; and thanks to reference architectures like VSPEX and hardened OpenDaylight controllers and apps, the benefits of each approach (bundling, unbundling) can be mixed to suit each network – they’re thankfully not mutually exclusive. Realizing these benefits in unison requires some more detailed questions, but hopefully this has outlined some starting points.
Thoughts? Feel free to comment below or get me on Twitter: @JonathanMorin
Conflicting Trends Blog Series:
Part 2: (above)
Part 3: coming soon
Part 4: coming soon