May 13, 2013

A simplified approach to data center interconnect (DCI)

After customers have deployed virtualization to consolidate servers and gain flexibility, the focus shifts to even higher agility and to cost and resource optimization within and between data centers. To achieve this, customers demand solutions that create a single elastic pool of network, compute and storage resources across all of their data centers – where technically possible – and even extend it toward cloud services outside of their own management domain. Data center interconnect (DCI) solutions are therefore increasingly required to provide the following:

  • Cloud Bursting

–          Create an elastic private cloud infrastructure that allows for optimized application delivery based on current and varying business demands

  • Disaster Recovery and Business Continuity

–          Effective, automated recovery without manual intervention to ensure business continuity

  • Workload (VM) and Data (Storage) Mobility

–          A private cloud infrastructure that is optimized at runtime for resource utilization, application performance and cost requires a borderless, single pool of compute, network and storage resources that can be allocated dynamically

Besides physical (and thus geographical) limitations such as latency and bandwidth between data centers, a DCI solution needs to provide a number of features within the network fabric to form that elastic pool. In particular, the movement of storage and Internet connectivity, along with all associated services, needs to be considered in the overall architecture. This will often lead to a different design than initially anticipated, but if all of those concerns can be addressed and the advantages of a single pool of resources are convincing, then the following topics need to be considered for the network fabric:
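To make the physical limitation concrete, here is a minimal back-of-the-envelope sketch (in Python) of round-trip propagation delay over fiber, checked against an assumed live-migration latency budget. The 10 ms budget is an illustrative assumption, not a vendor limit; real budgets depend on the hypervisor and storage replication mode.

```python
# Light in fiber travels at roughly 200,000 km/s, i.e. about
# 5 microseconds of one-way propagation delay per kilometer.
US_PER_KM_ONE_WAY = 5.0

def fiber_rtt_ms(fiber_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a given fiber length."""
    return 2 * fiber_km * US_PER_KM_ONE_WAY / 1000.0

# Assumed budget: many live-migration and synchronous-replication setups
# require an RTT on the order of 10 ms or less (an assumption here).
RTT_BUDGET_MS = 10.0

for km in (50, 200, 500, 1000):
    rtt = fiber_rtt_ms(km)
    verdict = "within" if rtt <= RTT_BUDGET_MS else "exceeds"
    print(f"{km:5d} km fiber -> {rtt:6.1f} ms RTT ({verdict} {RTT_BUDGET_MS} ms budget)")
```

Note that this counts propagation delay only; serialization, queuing and any intermediate provider equipment add to it, so the practical distance limit is shorter than the raw fiber math suggests.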

  • Some DC services require a Layer 2 interconnect

–          Firewall, server and storage clustering

–          VM mobility, HA/DRS and more

–          FCoE Storage

  • A DCI solution must work over any provider and transport technology, and support Layer 2 domain extension as an option

–          Dark fibre, CWDM, L3 service, L3 MPLS service, L2 MPLS, Ethernet, …

  • The DCI solution must provide optimized routing for east/west AND north/south traffic

–          Service location becomes much more important to optimize resource usage, minimize latency and maximize application performance

–          First Hop Router location

–          Ingress Traffic Optimization
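To illustrate why first-hop router location and ingress optimization matter, the following sketch (Python, with assumed example latencies) models the "traffic trombone" that occurs when a VM moves to a second data center while its default gateway and its ingress point remain in the first one:

```python
# Illustrative model of hairpinned traffic after a VM moves from DC1 to DC2.
# All latency figures are assumed example values in milliseconds, not measurements.

DCI_RTT_MS = 8.0      # assumed round trip across the data center interconnect
LOCAL_HOP_MS = 0.2    # assumed intra-DC forwarding delay (round trip)

def extra_rtt(gateway_local: bool, ingress_local: bool) -> float:
    """Extra RTT client traffic pays on top of the WAN path to reach the moved VM."""
    rtt = LOCAL_HOP_MS
    if not gateway_local:   # egress hairpins via the old first-hop router in DC1
        rtt += DCI_RTT_MS
    if not ingress_local:   # inbound traffic still enters the fabric via DC1
        rtt += DCI_RTT_MS
    return rtt

print(f"gateway and ingress in old DC : {extra_rtt(False, False):.1f} ms extra")
print(f"localized first-hop router    : {extra_rtt(True, False):.1f} ms extra")
print(f"plus ingress optimization     : {extra_rtt(True, True):.1f} ms extra")
```

With these assumed numbers, leaving both the first-hop router and the ingress point in the original data center doubles the DCI penalty on every round trip, which is why both east/west and north/south paths need to be localized after a move.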

In addition, a data center network fabric design must be capable of addressing the following challenges:

  • VM mobility introduces challenges on how to optimize east/west and north/south traffic at L2 and L3
  • Larger Layer 2 domains introduce topology challenges (e.g. how to handle more than two DCs and the degree of meshing with virtual switching), availability and resiliency challenges (unknown unicast and broadcast flooding) and scale challenges (the number of switches)

–          Therefore broadcast, multicast and unknown unicast flooding must be controlled throughout the whole fabric

  • Capacity constraints while workloads and data are migrated between and within data centers

–          Application traffic visibility and control (Quality of Service) is required within and between data centers
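The capacity constraint above can be quantified with a short sketch (Python, all figures assumed example values): how long does it take to evacuate a set of workloads over a DCI link on which QoS reserves part of the capacity for production traffic?

```python
# Back-of-the-envelope check of DCI capacity during workload migration.
# The link speed, utilization share and VM sizes are illustrative assumptions.

def transfer_seconds(data_gb: float, link_gbps: float, utilization: float = 0.7) -> float:
    """Seconds to move data_gb gigabytes over a link_gbps link, assuming only
    `utilization` of the link is available to migration traffic (the rest is
    reserved for production traffic via QoS)."""
    usable_gbps = link_gbps * utilization
    return data_gb * 8 / usable_gbps

# Example: evacuate 40 VMs of 16 GB memory each over a 10 Gbps DCI link
# on which migration traffic is allowed 70% of the capacity.
total_gb = 40 * 16
secs = transfer_seconds(total_gb, link_gbps=10)
print(f"{total_gb} GB over 10 Gbps at 70% utilization -> ~{secs / 60:.1f} minutes")
```

Even this optimistic model (it ignores dirty-page re-copies and storage replication) shows that bulk mobility events occupy the interconnect for minutes, which is why traffic visibility and QoS between data centers are listed as hard requirements.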

At Enterasys we focus on simplicity – this is a key architectural design goal of our OneFabric architecture. While competitors try to throw a slew of new protocols and standards at the problem – from OTV to LISP to proprietary VRRP extensions – we offer a solution that uses well-known, proven and mature standards and protocols to address these challenges. Watch for news on this topic in the next few months. Refinement can sometimes be superior to innovation – and to the advantage of all.

About The Contributor:
Markus Nispel, Vice President Solutions Architecture and Innovation

Markus Nispel is the Vice President Solutions Architecture and Innovation at Extreme Networks. Working closely with key customers, his focus is strategic solution development across all technologies provided by Extreme. In his previous role, as Chief Technology Strategist and VP Solutions Architecture, he was responsible for the Enterasys Networks solutions portfolio and strategy, namely NAC (Network Access Control), SDN (Software Defined Networking), DCM (Data Center Management), MDM (Mobile Device Management) integration, OneFabric, OneFabric Connect and OneFabric Data Center, as well as the network management strategy. This position built on his earlier roles at Enterasys as Director Technology Marketing and as a member of the Office of the CTO. In addition, he advises key accounts worldwide on strategic network decisions. Before his time at Enterasys, Markus Nispel worked as a Systems Engineer at Cabletron Systems. He studied at the university of applied sciences in Dieburg and graduated in 1996 as a Dipl.-Engineer for communications technology. He gained his first professional experience at E-Plus Mobile Communications in the network optimization group of their DCS cellular mobile network.
