The notion of a network fabric for the cloud has been gaining momentum. Driving it is the fact that VMs now migrate from server to server, increasing east-west traffic in the cloud, while traditional multi-tiered network architectures are not optimized for these traffic patterns. Those architectures were built primarily for north-south traffic, and even there the added latency and oversubscription of multiple tiers hurt application performance.
This is driving the move toward a "flatter," fabric-oriented architecture. A flatter architecture has two components. The first is a topology with fewer tiers, in which servers and VMs reach each other in one or two network hops. The second is that the network appears as a single forwarding domain, specifically a Layer 2 forwarding domain, since virtual machine mobility today is typically confined within a Layer 2 boundary. Both are key components of a cloud network fabric, which in effect provides optimized connectivity for east-west and north-south traffic with minimal latency and high performance.
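The hop-count benefit of the flatter design can be sketched with a toy model. The topology below is a hypothetical two-tier leaf-spine fabric (switch names, counts, and the hop accounting are illustrative assumptions, not taken from any particular product): because every leaf connects to every spine, any server-to-server path crosses at most one spine.

```python
from itertools import product

# Hypothetical two-tier leaf-spine fabric: every leaf switch has an
# uplink to every spine switch; servers attach only to leaves.
SPINES = ["spine1", "spine2"]
LEAVES = ["leaf1", "leaf2", "leaf3", "leaf4"]

def switch_hops(src_leaf: str, dst_leaf: str) -> int:
    """Number of switches traversed between two servers attached
    to the given leaf switches."""
    if src_leaf == dst_leaf:
        return 1          # same leaf: traffic turns around at one switch
    return 3              # leaf -> spine -> leaf

# Worst-case east-west path crosses three switches (two network hops
# past the first leaf), no matter which racks the VMs land in.
worst = max(switch_hops(a, b) for a, b in product(LEAVES, LEAVES))
print(worst)
```

Contrast this with a classic three-tier (access/aggregation/core) design, where traffic between servers in different pods may traverse five switches or more, adding latency at each oversubscribed tier.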
Additionally, network and storage traffic have traditionally run on separate fabrics, such as Fibre Channel and Ethernet. Even where iSCSI or NFS is used, many vendors deploy a separate Ethernet-based storage network for performance and predictability, which adds to deployment costs in additional NICs, cabling, and so on. With the availability of Data Center Bridging (DCB), true convergence onto a common Ethernet fabric becomes a reality. DCB separates traffic on a common Ethernet fabric into multiple traffic classes, each of which can be flow-controlled and assigned bandwidth parameters independently.
With DCB, storage traffic such as iSCSI can be placed in its own traffic class, given its own bandwidth guarantee, and flow-controlled independently of other traffic on the same Ethernet fabric. This allows newer-generation NICs, also called Converged Network Adapters (CNAs), and Ethernet switches to carry both data and storage traffic on a common Ethernet fabric, reducing cost and simplifying deployment. Additionally, with 10G CNAs becoming common, the performance gain from moving to a 10G infrastructure can be significant.
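The bandwidth-sharing behavior DCB enables (Enhanced Transmission Selection, IEEE 802.1Qaz) can be illustrated with a small sketch. The class names, percentages, and allocation routine below are illustrative assumptions for a 10G link, not a vendor configuration or the standard's scheduler: each class is guaranteed its configured share, and bandwidth a class does not use is redistributed to classes that still have demand.

```python
# Sketch of ETS-style bandwidth sharing among DCB traffic classes on a
# 10G link. Class names and shares are hypothetical examples.
LINK_GBPS = 10.0
SHARES = {"lan": 0.5, "iscsi": 0.4, "ipc": 0.1}   # shares sum to 1.0

def allocate(demand_gbps: dict[str, float]) -> dict[str, float]:
    """Work-conserving allocation: each class first gets up to its
    guaranteed share; leftover link capacity is then redistributed,
    in proportion to shares, to classes with unmet demand."""
    alloc = {c: min(demand_gbps.get(c, 0.0), SHARES[c] * LINK_GBPS)
             for c in SHARES}
    leftover = LINK_GBPS - sum(alloc.values())
    while leftover > 1e-9:
        hungry = {c: SHARES[c] for c in SHARES
                  if demand_gbps.get(c, 0.0) - alloc[c] > 1e-9}
        if not hungry:
            break  # all demand satisfied; remaining capacity stays idle
        total = sum(hungry.values())
        for c, w in hungry.items():
            grant = min(leftover * w / total,
                        demand_gbps[c] - alloc[c])
            alloc[c] += grant
        leftover = LINK_GBPS - sum(alloc.values())
    return alloc

# iSCSI keeps its 4 Gb/s guarantee even when LAN traffic wants the
# whole link; had iSCSI been idle, LAN could have used that capacity.
print(allocate({"lan": 10.0, "iscsi": 4.0, "ipc": 0.0}))
```

The key property this models is that convergence does not sacrifice storage predictability: the iSCSI class cannot be starved by bursty LAN traffic, yet the link remains fully usable when storage is quiet.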
Both of the above factors, the increase in east-west traffic and the move toward a common Ethernet fabric for the LAN and SAN to reduce cost and increase performance, are driving the industry toward a converged Ethernet fabric network architecture for the cloud.