There is no doubt that the IT industry is currently undergoing a dramatic shift, and networking is no exception – in fact, it may even be the catalyst – but what is that shift? At a high level, I’d argue that the driver is the need for agility as businesses grow or change focus or direction while responding to today’s rapidly changing markets. The technology they rely on must become more agile to support this pace of change. Agility, in my mind, means being able to quickly turn up services or applications: quickly adding them, removing them, or adjusting them. Which should be outsourced? Which should be insourced? Businesses will look at cost to answer these questions, but they also need to identify which parts of the technology infrastructure are strategic to their business, invest in those, and outsource the rest. The thing is, the strategic aspects will change more frequently as businesses evolve at faster rates. As a business leader, I want the ability to change my answers to those questions as my business changes.
OK, less controversy there. But what does that mean for technology? “Cloud” might be the first word that comes to mind. When a business doesn’t want to manage their own technology infrastructure, they now have an increasing number of options to outsource to “the cloud.” How easy this transition is, and how much OpEx and/or CapEx it saves in each case are the two big questions, and in some cases are highly debated, but there’s obviously value there. Yet, is it possible to outsource too much? Can the pendulum swing too far? If it does, and businesses seek to re-insource, how does that work?
Cloud outsourcing or not, how are new applications, users, or services added – or old ones removed or merged – within the organization’s own IT infrastructure? How much work needs to be done? While there are several efforts to change the answer, modifying applications and IT infrastructure still requires too much work. How can we change this so there’s less overhead each time? One key obstacle networks need to escape is the prevalence of closed and proprietary systems that block innovation, and thus we see an increasing focus not only on open standards but now on open source networking. But what kinds of technology will those “open” efforts define? Software Defined Networking (SDN), right? What does that mean? I’ve recently heard phrases like “that’s partially SDN” or “we’re talking about 100% SDN here” – does this refer to a fully featured SDN controller that programs all networking elements, each of which runs no Layer 2 or Layer 3 protocols? How do we rationalize the claims of SDN startups, who say that controllers will solve all problems, with those of some Wi-Fi vendors who would have you believe that controllers (centralized network intelligence) are of no use at all? Or is the more important point the amount of dynamic conversation the network can have with applications? Is it all of the above, or are there really “partial” and “100%” SDN approaches? (Note: I personally lean towards the latter.) Either way, is the disaggregation of intelligence good in all cases? To those who say yes, I’d point to the conflicting trend of growing interest in converged infrastructure. Cloud services – and, on another note, cloud-managed networks – should be making our lives simpler. If that’s so, how does the disaggregation of network intelligence simplify our job?
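To make the “100% SDN” idea concrete, here is a minimal toy sketch of that model: forwarding elements hold only match/action flow tables pushed down by a central controller and run no Layer 2 or Layer 3 protocols of their own. All class and method names here are hypothetical illustrations, not any real controller’s API (OpenFlow-style controllers expose far richer interfaces than this).

```python
class Switch:
    """A forwarding element with no local intelligence: it only looks up
    flow entries that the controller has installed."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination address -> output port

    def install_flow(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # No fallback routing protocol: in a real deployment an unmatched
        # packet would be punted to the controller; here we report a miss.
        return self.flow_table.get(dst, "miss")


class Controller:
    """Centralized intelligence: knows the topology and programs every
    switch's flow table; the switches make no forwarding decisions."""
    def __init__(self):
        self.switches = {}

    def add_switch(self, switch):
        self.switches[switch.name] = switch

    def program_path(self, dst, hops):
        # hops: list of (switch_name, out_port) pairs along the chosen path
        for name, port in hops:
            self.switches[name].install_flow(dst, port)


# Usage: the controller, not a distributed protocol, decides the path.
ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.add_switch(s1)
ctrl.add_switch(s2)
ctrl.program_path("10.0.0.5", [("s1", 2), ("s2", 1)])
print(s1.forward("10.0.0.5"))  # 2
print(s2.forward("10.0.0.9"))  # miss
```

The “partial SDN” approaches debated above sit between this extreme and fully distributed protocols: switches keep some local intelligence while a controller layers policy on top.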
Obviously there are conflicting trends here, and that’s OK. There are different needs and different approaches, all of which seek to achieve the same goal: greater business agility.
I have thrown around a lot of terms and discussion topics, but in succeeding blogs we’ll delve into each one. We’ll at least poke at some of these apparent ironies (more fun), but we may also find that they’re not ironic at all and that each conflicting trend has its own reason and purpose (more likely). Stay tuned.