Driving a car is like riding a bike, as the old expression goes. Even after years away from the steering wheel, it is fairly easy to recall how to do it. Of course, the adage breaks down when the way cars are driven has changed. It can be disconcerting to go from an automatic to a manual transmission, or to drive on the right side of the road instead of the left.
In technology, we come across similar situations. The base technologies we use are familiar, but the environment and architecture change the way we apply them. IT networks have evolved to the point where they look nothing like the structures we built 20 years ago, yet we are still using the same tools to build them.
Some things stay the same
Network protocols like Spanning Tree Protocol (STP), Ethernet, OSPF, and BGP are core components of almost all network designs. On top of these protocols, we add network services such as firewalls, application gateways, proxies, server load balancers (SLBs), and other functions.
These protocols and functions have evolved and matured, but their core function and purpose have not changed much. OSPFv2 was defined in 1998, and OSPFv3, which added IPv6 support, was updated in 2008. Anyone who understood the protocol 10 years ago would have no problem understanding it today.
The consistency and stability of these technologies are critical for network architectures to mature and evolve. IT architectures maintain their reliability and availability when the foundations are built from proven and tested components. These strong foundations have enabled architects to evolve the network designs of 20+ years ago into those of today.
Evolving means relearning
Virtualization through public and private clouds, software-defined anything (SDx) architectures, and software-defined data centers (SDDC) present a big challenge that calls to mind another old adage: you cannot teach an old dog new tricks. Virtualization is changing the way networks are designed and how applications are delivered end-to-end.
Traditionally, most services, such as load balancing, security, content inspection, and compression, are applied at the server side of the connection. In virtualized architectures, the location and state of the application interface are often transient, and traffic rides across network infrastructure that is not owned or managed by the application provider. That lack of ownership and control makes it hard to insert these critical services into the path of the client-server connection.
Adjustments must be made to each service and how it is delivered so that the application delivery process stays reliable, secure, and optimized. Technology tools ("hacks" in today's vernacular) like DNS redirects, generic routing encapsulation (GRE) tunnels, and global server load balancing (GSLB) help control and predict the path of client-server communications, which in turn allows network functions to be applied to the content stream.
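The GSLB idea above can be sketched in a few lines: answer each DNS query with the address of a healthy data center, preferring one near the client. This is a minimal illustration, not a real GSLB implementation; the endpoint addresses, regions, and health/load values are invented for the example.

```python
# Sketch of a GSLB-style decision function: given the client's region and a
# set of candidate data centers, pick the virtual IP (VIP) that the DNS
# answer should steer the client toward. All endpoint data here is
# hypothetical, standing in for out-of-band health probes and load feeds.

from dataclasses import dataclass

@dataclass
class Endpoint:
    address: str   # VIP handed back in the DNS answer
    region: str    # where this data center lives
    healthy: bool  # result of a health probe
    load: float    # 0.0 (idle) .. 1.0 (saturated)

def pick_endpoint(client_region: str, endpoints: list[Endpoint]) -> str:
    """Prefer a healthy endpoint in the client's region; otherwise fall
    back to the least-loaded healthy endpoint anywhere."""
    healthy = [e for e in endpoints if e.healthy]
    if not healthy:
        raise RuntimeError("no healthy endpoints to answer with")
    local = [e for e in healthy if e.region == client_region]
    pool = local or healthy
    return min(pool, key=lambda e: e.load).address

endpoints = [
    Endpoint("192.0.2.10", "us-east", healthy=True, load=0.40),
    Endpoint("192.0.2.20", "eu-west", healthy=True, load=0.10),
    Endpoint("192.0.2.30", "us-east", healthy=False, load=0.05),
]

print(pick_endpoint("us-east", endpoints))   # local healthy VIP wins
print(pick_endpoint("ap-south", endpoints))  # no local pool, least-loaded wins
```

Production GSLB adds TTL tuning, persistence, and real health checks, but the core decision, steering the client by shaping the DNS answer, is exactly this small.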
One step at a time
Network architectures are constantly changing. The evolution toward virtualized designs has dramatically changed how applications are delivered, especially in how the chain of application and network services is applied to an application and its content.
This may be disconcerting if one compares the traditional networks of 1990 to the networks of today. But the changes were not tectonic, plate-shifting events; they were a series of smaller steps that built up the new models and kept the architectures evolving. The cars we drive today are very different from the ones we drove decades ago, but there is still a familiarity and comfort in interacting with the traditional, though repurposed, functions.