What is Switching Fabric?
A data center fabric is a system of switches and servers, together with the interconnections between them, that can be represented as a woven fabric. Because of the tightly woven connections between nodes (all devices in a fabric are referred to as nodes), data center fabrics are often perceived as complex, but it is this very tightness of the weave that makes the technology inherently elegant. A data center fabric allows for a flattened architecture in which any server node can connect to any other server node, and any switch node can connect to any server node. This flattened architecture is key to the agility of fabrics.
What are the trends of Switching Fabric?
In earlier days, data center architecture was a three-tier design running Spanning Tree or Layer 3 routing across the switches. The biggest problem with these architectures was that only a single path was selected, and the rest of the bandwidth across the network was wasted. All data traffic takes the best path according to the routing table until that path becomes congested, at which point packets are dropped. This fabric could not handle existing traffic growth with predictability, and a shift was required.
Clos networks simplified the existing complex topology by introducing the names SPINE and LEAF into modern data center switching topologies. Data center networks are composed of top-of-rack switches and core switches. The top-of-rack (ToR) switches are the leaf switches, and they are attached to the core switches, which represent the spine. The leaf switches are not connected to each other, and spine switches connect only to the leaf switches (or an upstream core device). In this spine-leaf architecture, the number of uplinks from each leaf switch equals the number of spine switches. Similarly, the number of downlinks from each spine equals the number of leaf switches. The total number of connections is the number of leaf switches multiplied by the number of spine switches. If you have 4 spines and 8 leaves, you need 4 x 8 = 32 connections.
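The link arithmetic above can be sketched in a few lines of Python. The function and field names here are purely illustrative, not taken from any vendor tool:

```python
def fabric_links(spines: int, leaves: int) -> dict:
    """Link counts for a full-mesh spine-leaf topology."""
    return {
        "uplinks_per_leaf": spines,      # each leaf connects to every spine
        "downlinks_per_spine": leaves,   # each spine connects to every leaf
        "total_connections": spines * leaves,
    }

# 4 spines and 8 leaves -> 4 x 8 = 32 total connections
print(fabric_links(4, 8)["total_connections"])  # -> 32
```

Because every leaf reaches every spine, adding a spine increases the uplink count (and available bandwidth) of every leaf by exactly one link, which is why capacity scales so predictably in this design.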
How Is Latency Improved by Changing Data Center Switching?
We are all aware that Layer 2 switches are responsible for transporting data at the data link layer and perform error checking on each transmitted and received frame. The older generation of switches used in the data center perform store-and-forward switching. In store-and-forward switching, the entire frame has to be received first, and only then is it forwarded. The switch stores the entire frame and runs the CRC calculation before it forwards. If no CRC errors are present, the switch forwards the frame; otherwise, it drops it.
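A minimal sketch of the store-and-forward decision, assuming a simplified frame layout where the last 4 bytes are a CRC-32 frame check sequence (real Ethernet FCS details, such as byte order, are glossed over here):

```python
import zlib

def store_and_forward(frame: bytes):
    """Buffer the whole frame, verify its CRC, forward only if clean."""
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") == fcs:
        return payload   # CRC matches: forward the frame
    return None          # CRC error: drop the frame

payload = b"hello fabric"
frame = payload + zlib.crc32(payload).to_bytes(4, "big")
corrupted = b"X" + frame[1:]   # flip the first payload byte in transit

print(store_and_forward(frame) is not None)      # clean frame is forwarded
print(store_and_forward(corrupted) is None)      # damaged frame is dropped
```

The latency cost is visible in the first line of the function: nothing can be forwarded until `frame` is complete, so serialization delay grows with frame size.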
In the case of cut-through switching, when the switch receives a frame it looks only at the first 6 bytes, which contain the destination MAC address; the switch then determines the outgoing interface and begins forwarding the frame immediately. All error checking is done by the receiving device, in contrast to store-and-forward switching, where the transmitting switch performs it.
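The cut-through lookup can be sketched as follows. The MAC table is a plain dictionary and the interface names are hypothetical; a real switch would start transmitting while the rest of the frame is still arriving:

```python
def cut_through(frame: bytes, mac_table: dict):
    """Peek at the first 6 bytes (destination MAC) and pick the egress port."""
    dst_mac = frame[:6]              # only the destination MAC is examined
    return mac_table.get(dst_mac)    # outgoing interface, or None (unknown)

mac_table = {bytes.fromhex("aabbccddeeff"): "eth1"}
frame = bytes.fromhex("aabbccddeeff") + b"\x00" * 60  # rest of frame unread

print(cut_through(frame, mac_table))  # -> eth1
```

Note that no CRC is computed anywhere in this path, which is exactly why cut-through achieves lower latency: the forwarding decision depends on 6 bytes rather than the full frame, and any corrupted frames are detected and discarded by the receiver instead.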