Earlier this week, a blog post entitled Towards Machine Learning in Networking: Benefits Begin Now discussed machine learning in networking, based on a podcast on the same topic. This is an exciting time, as we approach the age of networks that “machine learn” (ML) in the fullest spirit of AI.
The three most important words in the title of that article are Towards, Benefits, and Now. Even the first steps you take towards collating and acting on network-provided information lead to operational improvements. I’ll explain why here, starting with the questions being asked of data center networks.
The Primacy of Collection
In order to understand the data center network and determine the best ways to improve its maintainability, you need to collect information. For instance, moving down the stack from the application to the hardware, you can gather information on traffic, topology, workloads, and devices (Table 1).
Table 1: Collectable Data to be Mined for Intelligent Automation

Function or Object:
- Control (Routing, Switching)
- Traffic (Data Plane)
- Device Inventory (Packet, Optical): chassis or fixed platforms, interface cards, optics, servers, etc.
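As a concrete sketch, the per-layer records in Table 1 might be modeled as simple data structures. The field names below are illustrative assumptions, not any product’s actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical records for each row of Table 1; field names are
# invented for illustration, not taken from a real product API.

@dataclass
class ControlState:          # Control (Routing, Switching)
    device: str
    routes: list = field(default_factory=list)  # e.g. prefixes learned via BGP/OSPF

@dataclass
class TrafficSample:         # Traffic (Data Plane)
    interface: str           # "device:port"
    rx_bps: int
    tx_bps: int

@dataclass
class InventoryItem:         # Device Inventory (Packet, Optical)
    chassis: str
    component: str           # interface card, optic, server, etc.
    serial: str

# Each list is one isolated "view" into the network -- useful alone,
# but far more useful once correlated.
control = [ControlState("leaf1", ["10.0.0.0/24"])]
traffic = [TrafficSample("leaf1:eth1", rx_bps=4_000_000, tx_bps=1_500_000)]
inventory = [InventoryItem("leaf1", "48x10G line card", "SN12345")]
```

Each list on its own mirrors one row of Table 1: a single, narrow view of the network.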
Collecting and storing this information, and being able to update it by executing a workflow, is valuable. But each of these rows represents an individual view into the network, and any one view by itself is quite limited.
This is not surprising because networks were not originally designed to provide actionable intelligence. The several decades of ping and traceroute being the main arrows in the network engineer’s quiver are a testament to this.
So we correlate the gathered information into an abstracted view of the network, in order to be able to automate with confidence.
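One minimal way to sketch that correlation is to join the separate views on a shared device name, producing a single abstracted model. The record schema and function here are hypothetical, not a real product interface:

```python
from collections import defaultdict

def correlate(control, traffic, inventory):
    """Merge per-layer records into one abstracted view, keyed by device.

    Assumes each record carries a device name (interfaces are named
    "device:port"); the schema is illustrative only.
    """
    model = defaultdict(lambda: {"routes": [], "traffic": [], "hardware": []})
    for rec in control:
        model[rec["device"]]["routes"].extend(rec["routes"])
    for rec in traffic:
        device = rec["interface"].split(":")[0]
        model[device]["traffic"].append(rec)
    for rec in inventory:
        model[rec["chassis"]]["hardware"].append(rec["component"])
    return dict(model)

view = correlate(
    [{"device": "leaf1", "routes": ["10.0.0.0/24"]}],
    [{"interface": "leaf1:eth1", "rx_bps": 4_000_000, "tx_bps": 1_500_000}],
    [{"chassis": "leaf1", "component": "48x10G line card"}],
)
# view["leaf1"] now combines routing, traffic, and hardware facts
```

The point of the merge is that an automation system can now ask one question ("what do we know about leaf1?") instead of three.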
The Network as Program; Automation as Compiler
The goal is to make things better by taking the network (the artifacts we have collected) and organizing those artifacts into a program. How could the entities in Table 1 comprise a program? They do not obviously resemble code.
However, it doesn’t take too much imagination to see how they might. And in fact, it’s a norm for public cloud providers to treat infrastructure as code.
The inputs and outputs to the “program” will be packets to and from every edge (Figure 1).
Figure 1: Inputs and Outputs to/from a “Network Program”
Packets (the green arrows in Figure 1) enter and leave the “program” at every edge.
If the network is a program, the automation system (such as Workflow Composer or Flow Optimizer) can be treated as a compiler: it processes the network as source code, then passes parameters into it to fix problems such as broken links or latency issues. Similarly, though with somewhat more difficulty, the “compiler” can find DDoS flows or identify threats.
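A toy sketch of the compiler analogy might look like the following, where the abstracted model is scanned for known problem patterns and remediation actions are emitted. The thresholds and action names are invented stand-ins for a real workflow engine:

```python
def compile_remediations(model):
    """Treat the abstracted network model as source code: scan it for
    known problem patterns and emit remediation actions.

    The 10 ms latency threshold and the action names are hypothetical.
    """
    actions = []
    for device, facts in model.items():
        for link in facts.get("links", []):
            if link["status"] == "down":
                actions.append(("failover_link", device, link["name"]))
            elif link.get("latency_ms", 0) > 10:
                actions.append(("reroute_traffic", device, link["name"]))
    return actions

model = {
    "leaf1": {"links": [{"name": "eth1", "status": "down"},
                        {"name": "eth2", "status": "up", "latency_ms": 25}]},
}
print(compile_remediations(model))
# [('failover_link', 'leaf1', 'eth1'), ('reroute_traffic', 'leaf1', 'eth2')]
```

The design point: remediation logic operates on the abstracted model, not on raw device output, so the same “compiler pass” works regardless of which platform produced the facts.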
And a very versatile “compiler” can also port outputs into systems in other domains, such as compute, storage, security, or applications.
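Porting outputs into other domains could be as simple as routing each emitted action to a domain-specific handler. The action names and domain mapping below are invented for illustration:

```python
# Hypothetical mapping from action type to the domain that handles it;
# a real deployment would call the compute, storage, or security
# systems' own APIs.
HANDLERS = {
    "failover_link": "network",
    "quarantine_host": "security",
    "rebalance_volumes": "storage",
}

def dispatch(actions):
    """Group remediation actions by the domain responsible for them."""
    by_domain = {}
    for action, *args in actions:
        domain = HANDLERS.get(action, "network")  # default to the network domain
        by_domain.setdefault(domain, []).append((action, *args))
    return by_domain

routed = dispatch([("failover_link", "leaf1", "eth1"),
                   ("quarantine_host", "10.0.0.7")])
# routed["network"] and routed["security"] now each hold their own work
```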
How Does Extreme’s Visibility Portfolio Help?
Extreme focuses on hardware and software optimized for agility across all layers of the data center stack. Workflow Composer automates the network lifecycle, relying in part on actionable analytics. Those analytics are fed by data collected through the SLX Insight Architecture, which provides pervasive network visibility and can be integrated with third-party analytics applications to improve SLAs.
Together, these capabilities deliver intelligent automation and dynamic remediation.
As we move towards more intelligent networks, and eventually networks that teach themselves to better provision and remediate themselves, we will see benefits at every step along the way.
To start now, you should look for solutions that provide intelligent, cross-domain automation, pervasive visibility, real-time analytics, and programmable platforms purpose-built for all places in the data center network.
Follow the links above for more insight into the importance of network visibility. And as always, contact your Extreme Sales or SE representative for more information on our solutions.