Networks grow like untreated vegetation in the hidden corner of a backyard — it is not unusual in data centers to find many layers whose sole purpose is to support other network layers.
Consider the fact that half of the network ports in the three-tier networks we all came to know connect network devices to other network devices, while the other half connect servers to the network. There has to be a way to improve that ratio. IT has woken up to the need to reduce the cost of networking gear and, in turn, to spend less time (and money) running the network. Put simply, the way to get there is to flatten the data center network.
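To make that ratio concrete, here is a back-of-the-envelope sketch. All switch and port counts below are hypothetical figures chosen for illustration, not measurements from any real deployment:

```python
# Back-of-the-envelope port accounting: what fraction of ports actually
# face servers? All counts below are hypothetical, chosen only to
# illustrate the ratio, not taken from any real network.

def server_facing_ratio(server_ports, interconnect_ports):
    """Fraction of all consumed ports that connect servers."""
    return server_ports / (server_ports + interconnect_ports)

# Three-tier: 40 access switches with 48 x 1GbE server ports each,
# 24 x 1GbE uplinks per access switch, and 16 core uplinks on each of
# 8 aggregation switches. Every inter-switch link burns a port on BOTH ends.
server_ports = 40 * 48
three_tier_links = 40 * 24 + 8 * 16
three_tier_ratio = server_facing_ratio(server_ports, 2 * three_tier_links)

# Two-tier: the aggregation layer is gone; each access switch instead
# runs 4 x 10GbE uplinks straight to the core pair.
two_tier_links = 40 * 4
two_tier_ratio = server_facing_ratio(server_ports, 2 * two_tier_links)

print(f"three-tier server-facing ports: {three_tier_ratio:.0%}")
print(f"two-tier server-facing ports: {two_tier_ratio:.0%}")
```

With these assumed numbers, the three-tier build lands near the half-and-half split described above (about 47% of ports face servers), while the two-tier build pushes that to roughly 86%.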
In January 2007, I delivered an internal SE training at my former employer, and during the preparation I proposed the topic of switching from three-tier to two-tier networks. To this day, I can still remember the skepticism around the topic. Five years later, enterprises and clouds are trying to flatten even further by building single-tier networks.
Of course, eight years ago no one had even thought of this strategy, nor offered the technology to enable it. Technology advancements have since presented new network architecture and design opportunities to realize network flattening. Modern Ethernet switches are fast and robust, many offer a fully non-blocking architecture, and their software supports bundling multiple ports into uplinks along with a sophisticated feature set in any form factor. Today, interconnecting multiple L2/L3 access switches to a pair of L2/L3 core switches can give you a data center network with over 10,000 1GbE ports of non-blocking communication, at a price point that eight years ago could barely buy a single chassis-based switch.
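As a rough illustration of how such a two-tier fabric reaches a five-digit port count, the sketch below sizes a hypothetical build. The switch, port, and uplink figures are assumptions for the example, not any vendor's specifications:

```python
# Rough sizing of a hypothetical two-tier fabric. All hardware numbers
# (switch counts, port counts, link speeds) are assumed for illustration.

def nonblocking_server_ports(access_switches, server_ports_per_switch,
                             uplinks_per_switch, uplink_gbps, server_gbps=1):
    """Server ports the fabric supports without oversubscription.

    Non-blocking operation requires uplink bandwidth >= server-facing
    bandwidth on every access switch; if the uplinks fall short, they
    cap the usable server-port count instead.
    """
    uplink_capacity_gbps = uplinks_per_switch * uplink_gbps
    usable_per_switch = min(server_ports_per_switch,
                            uplink_capacity_gbps // server_gbps)
    return access_switches * usable_per_switch

# e.g. 224 access switches, 48 x 1GbE server ports each, and a bundle
# of 5 x 10GbE uplinks per switch toward the core pair:
ports = nonblocking_server_ports(access_switches=224,
                                 server_ports_per_switch=48,
                                 uplinks_per_switch=5, uplink_gbps=10)
print(ports)
```

With these assumed numbers, 50 Gbps of uplink covers the 48 Gbps of server-facing bandwidth per switch, so all 224 × 48 = 10,752 1GbE ports are non-blocking; cut the bundle to 2 × 10GbE and the uplinks become the cap instead.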
The availability of new technology with improved economics, triggered by the realization that the data center network needs to serve the applications, is what really brings us here. Today the network is seen more as a utility interconnecting application elements – servers and services. Ultimately, with the move to two-tier networks, enterprises have been saving on electrical bills, eliminating network choke points, improving their ability to troubleshoot network issues, improving overall performance, and gaining more flexibility in rolling out applications. Let’s look at these benefits more closely:
Savings: It takes fewer switches to build flat networks, and fewer switches consume less power and require less maintenance.
Efficiency: Fewer interconnects mean there are far fewer paths for network traffic to take when connecting servers and users, which decreases the points of failure along a given path.
Troubleshooting: With fewer devices and a simpler topology, troubleshooting communications-related issues and performing root cause analysis are easier.
Performance: Fewer network hops reduce the variability of end-to-end performance. The time it takes traffic to cross a single switch is much closer to the time of the longest path in the data center.
Flexibility: With a simpler, uniform topology that presents lower deviation in end-to-end performance, the physical placement of application elements matters less. Applications can be deployed in phases without up-front planning or restrictions imposed by network operations.
To summarize, the flattening of the network is really the outcome of advancements in technology. The intersection of the network technologies now available to enterprises with the realization of how much is demanded of the network has led network engineers to decide they want flatter networks. Flat networks, where viable, provide a large set of advantages.
In Part II of this blog, I will discuss which organizations this architectural change, fueled by technological progress, is aimed at. And in Part III, look for a discussion of how this impacts the world of network services in general and application delivery controllers in particular.