September 24, 2013

VMware NSX in the Enterprise Data Center?

There is certainly a lot of hype right now around overlay networking, driven by VMware’s big announcements around NSX – the technology that grew out of the Nicira acquisition. As I have already blogged on SDNCentral (specifically here), I see this more than anything else as a strategy from the systems “world” to overcome the challenges that system and network teams face today in many organizations: network teams often do not move as fast as the system teams would like, workflows between the teams are not properly established and automated, and orchestration tools are not deployed. Such tools do exist, however – if they were actually used, the level of agility and operational efficiency that the (enterprise) business is asking for could be achieved.

But the questions still remain: does it make sense for the enterprise to deploy an overlay network like NSX in its data center? Is there another way to solve the problem of automation and orchestration between compute/server and networking in the enterprise data center? In other words, is there another way to achieve the Software Defined Data Center?

I think for most enterprise scenarios there is – our Data Center Manager (DCM) solution has been successfully deployed at various customer sites to address exactly those challenges. None of those customers are facing the VLAN ID limitations, MAC address table size limits or flooding challenges that are often cited as the drivers towards an overlay solution.

That is probably because the main sweet spot targeted for overlay solutions is cloud deployments, with a large number of tenants (which leads to VLAN ID depletion) and servers (which leads to massive MAC address scale and flooding problems). The customers in that market can probably also afford the additional complexity that a typical enterprise might not be able to cope with: NSX Manager and its API, NSX Controller, NSX Gateways and probably also vSwitches (depending on the deployment), along with a series of new protocols like VXLAN, STT, OpenFlow and others.
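To put the VLAN ID depletion argument in perspective, here is a minimal sketch (not from the original post) comparing the 12-bit segment ID space of classic 802.1Q VLANs with the 24-bit VNI space that VXLAN provides; the segments-per-tenant figure is an illustrative assumption only.

    # Illustrative only: segment-ID space of 802.1Q VLANs vs. VXLAN VNIs.
    VLAN_ID_BITS = 12    # 802.1Q tag: IDs 1-4094 usable (0 and 4095 reserved)
    VXLAN_VNI_BITS = 24  # VXLAN Network Identifier field in the VXLAN header

    usable_vlans = 2**VLAN_ID_BITS - 2   # 4094 usable VLAN IDs
    usable_vnis = 2**VXLAN_VNI_BITS      # roughly 16.7 million VNIs

    # Hypothetical sizing: how many tenants fit if each needs a few segments?
    segments_per_tenant = 4              # assumption for illustration
    print(f"802.1Q: {usable_vlans} segments -> ~{usable_vlans // segments_per_tenant} tenants")
    print(f"VXLAN:  {usable_vnis} segments -> ~{usable_vnis // segments_per_tenant} tenants")

With only a handful of segments per tenant, the 802.1Q space is exhausted at around a thousand tenants – a real constraint for large multi-tenant clouds, but rarely for a single enterprise data center.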

Another argument for an overlay is VM mobility at scale without the need to expand or redesign Layer 2 domains. Again, that is a question of scale – today’s virtual switching and Shortest Path Bridging (SPB) solutions are mostly adequate to do that at Layer 2, and the upcoming VM mobility over Layer 3 solutions extend this into Layer 3. It all depends on what the customer is trying to achieve, as always. You can also find a comprehensive overview of the options here.

And one should keep in mind that we are talking about (IP) connectivity – if VPNs, firewalls (FW) and load balancers (ADC) are involved, they need to be integrated as well. Supporting the overlay alone won’t do it: new instances of VPN, FW and ADC still need to be provisioned per tenant.

Nevertheless, the coming years will show whether there are specific use cases in the enterprise data center as well that make those solutions necessary – cloud bursting comes to mind.

We at Enterasys are ready for this: our SDN architecture allows us to easily integrate with any provisioning solution out there (like the NSX Manager and its API), and our Data Center Manager (DCM) solution is a good proof point for this being done today with VMware vCenter, Citrix XenCenter and Microsoft SCVMM (Hyper-V). In the switching infrastructure, our CoreFlow2-based products are capable of supporting VXLAN and NVGRE via software upgrades in the future, should that be required.
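As a purely hypothetical illustration of the kind of northbound integration described above – the URL, fields and credentials below are invented for this sketch and are not a documented DCM, vCenter or NSX Manager API – such a provisioning workflow usually boils down to an authenticated REST call that maps a VM or tenant to the desired network policy at the switch edge:

    import requests

    # Hypothetical example only: endpoint and JSON fields are invented to
    # illustrate a generic northbound provisioning call, not a real API.
    MANAGER = "https://provisioning.example.com/api/v1"

    payload = {
        "vm_name": "web-01",           # VM reported by the hypervisor manager
        "tenant": "tenant-a",          # tenant / business unit
        "network_policy": "web-tier",  # policy profile to apply at the edge port
    }

    resp = requests.post(f"{MANAGER}/port-profiles", json=payload,
                         auth=("admin", "secret"), timeout=10)
    resp.raise_for_status()
    print("Provisioned:", resp.json())

The point of the pattern is that the orchestration layer, whether it is an overlay controller or a physical-network manager, exposes one call per provisioning event that the compute side can drive automatically.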

About The Contributor:
Extreme Marketing Team
