Optimizing network performance is a task that spans multiple domains, from architecting the network, with capacity and topology (segmentation) considerations, through redundancy, bandwidth management, and security. But today, I would like to highlight five additional ways to optimize overall network performance by making the best use of advanced Application Delivery Controller (ADC) capabilities that front-end your applications.
1. See What You Manage
Managing and optimizing network performance requires visibility into that performance. At the end of the day, the network is supposed to carry the application from the server to the client, so the best way to measure performance is to measure the end-user experience and then break down all of the parameters that affect it: application server response time, network response time, DNS response time, client-side rendering time, and even related cyber-attacks that could adversely affect performance. These parameters are key to understanding network performance.
Many organizations have the ability to gather some of this information, but it is often scattered across different management and monitoring systems. ADCs are best suited to gather this information because they front-end applications and oversee end-to-end performance. An ADC solution and a security solution managed and monitored through one centralized management system provide the best way to concentrate all relevant performance information. With everything correlated in one holistic performance monitoring system, one can answer:
Is there a performance problem in my network?
What’s causing it?
Is this a network problem, a cyber-security problem, or an application slowdown?
This information, concentrated in one place, enables the network, security, and application teams to work in collaboration.
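As a minimal sketch of the breakdown described above, the following aggregates the component timings into a total and identifies the dominant contributor. The metric names are illustrative assumptions, not tied to any particular ADC's monitoring API.

```python
def slowest_component(timings_ms: dict) -> tuple:
    """Return (component, share_of_total) for the dominant contributor."""
    total = sum(timings_ms.values())
    worst = max(timings_ms, key=timings_ms.get)
    return worst, timings_ms[worst] / total

# Hypothetical measurements for one end-user transaction:
sample = {
    "dns_response": 40,        # DNS resolution time
    "network_rtt": 90,         # network transit time
    "server_response": 420,    # application server processing time
    "client_rendering": 150,   # browser-side rendering time
}
component, share = slowest_component(sample)
# Here the application server dominates the end-user experience, so the
# problem is an application slowdown rather than a network or DNS issue.
```

In this example the answer to "is this a network problem or an application slowdown?" falls out directly from the correlated data.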
2. Increasing the Network Pipe’s Utilization
A lot has been said about the 30-year-old Transmission Control Protocol (TCP) and how it was designed for reliability, which costs efficiency and performance. On top of that challenge, there is also the chatty nature of the HTTP protocol to consider. Most applications today run over HTTP (which runs over TCP), and this imposes a heavier burden on network resources and results in very low bandwidth utilization efficiency.
HTTP 1.1 requires multiple TCP connections to increase its efficiency. But more TCP connections consume more router and server resources, and don't always result in better performance. One solution is to use HTTP/2 between the application server and the client, which results in fewer TCP connections, higher network pipe utilization, and better overall performance.
Adopting HTTP/2 on the server side is still a challenge that most application owners are not ready for yet. To help accelerate adoption, leading ADC solutions offer an HTTP/2 gateway, which lets the application side remain HTTP 1.1-only while the ADC communicates with clients over HTTP/2. When this capability is combined with TCP multiplexing and TCP optimization (another commonly available capability in most off-the-shelf ADCs), network performance can improve significantly.
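A back-of-envelope model shows why fewer TCP connections help. The numbers below (round-trip time, handshake costs, connection counts) are illustrative assumptions: a TCP handshake costs roughly one RTT, and a TLS 1.2 handshake adds roughly two more.

```python
def handshake_overhead_ms(connections: int, rtt_ms: float, tls: bool = True) -> float:
    """Estimated handshake cost: ~1 RTT for TCP, ~2 more for TLS 1.2."""
    per_conn_rtts = 3 if tls else 1
    return connections * rtt_ms * per_conn_rtts

rtt = 50.0
http11 = handshake_overhead_ms(6, rtt)   # typical 6 parallel connections per host
http2 = handshake_overhead_ms(1, rtt)    # one multiplexed connection
# HTTP 1.1 spends 900 ms of handshake work before any payload moves,
# versus 150 ms for a single HTTP/2 connection.
```

The real-world gap is larger still, since each extra connection also consumes server memory and must independently ramp up through TCP slow start.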
3. Make Sure You Use Your WAN Resources Wisely
As more and more users access applications on the move, it is important to make sure that WAN connectivity is used efficiently. One consideration to take into account: most organizations today use multiple WAN links to connect to the Internet, mostly for high availability purposes.
While some lines have symmetric upload and download bandwidth (and are thus more expensive), other lines may be asymmetric and less expensive. It is important to ensure that applications that require high upload capacity (such as video conferencing and collaboration tools) use the symmetric line, while other applications, such as standard web browsing or email, use the asymmetric (e.g., DSL) lines.
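The policy above can be sketched as a simple mapping from application profile to WAN link. The link names and application categories here are hypothetical examples.

```python
# Hypothetical link inventory: one symmetric line, one asymmetric line.
LINKS = {
    "fiber": {"symmetric": True},   # expensive, equal upload/download bandwidth
    "dsl":   {"symmetric": False},  # cheaper, limited upload
}

# Applications that need high upload capacity (illustrative set).
UPLOAD_HEAVY = {"video_conferencing", "collaboration"}

def choose_link(application: str) -> str:
    """Route upload-heavy traffic to the symmetric line, the rest to DSL."""
    want_symmetric = application in UPLOAD_HEAVY
    return next(name for name, link in LINKS.items()
                if link["symmetric"] == want_symmetric)
```

A production policy engine would of course key on more attributes (time of day, current link load), but the principle is the same: match the application's traffic profile to the link's characteristics.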
We all know the feeling: some days our Internet experience is noticeably worse than others. In many cases this is because the ISP's network has varying utilization levels, or because their DNS is under attack. Whatever the reason, load balancing between your Internet links and redirecting traffic through whichever WAN link is performing better at that moment helps ensure consistently optimized WAN performance.
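Performance-based link selection boils down to probing each link and routing new sessions through the currently best performer. The probe figures below are stand-in numbers for illustration.

```python
def best_link(probe_latency_ms: dict) -> str:
    """Pick the WAN link with the lowest measured round-trip time."""
    return min(probe_latency_ms, key=probe_latency_ms.get)

# Hypothetical probe results on a morning when the DSL provider's
# upstream happens to be congested:
measurements = {"fiber": 18.0, "dsl": 140.0, "lte": 65.0}
chosen = best_link(measurements)
```

Re-running the probes periodically lets the link choice track the ISPs' varying utilization levels automatically.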
As WAN bandwidth is often limited, a natural best practice is to reduce the payload that must be delivered over the WAN link. Applying compression to HTTP traffic is one way of achieving this. Another is to optimize caching on the browser side. The easiest way to achieve that is with web performance optimization solutions, which can automatically increase the application's resource caching to its maximum (and never send the same resource twice).
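To give a feel for the compression side, the snippet below gzips a repetitive text payload of the kind HTTP typically carries. HTML and JSON compress especially well precisely because their markup is so repetitive; the payload here is a made-up example.

```python
import gzip

# A made-up, highly repetitive HTML fragment standing in for a real page.
payload = b'<div class="row"><span class="cell">value</span></div>\n' * 200
compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
# Repetitive markup typically shrinks to a small fraction of its
# original size, directly reducing WAN payload.
```

Real pages compress less dramatically than this contrived example, but 60-80% savings on text content are common, which is why HTTP compression is one of the cheapest WAN optimizations available.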
4. Filter Attack Traffic BEFORE It Enters Your Network
Protecting the data center from cyber-attacks requires several technologies working together, such as DDoS protection, intrusion detection and prevention, and network and web application firewalls. It is important to block all types of attacks before they even enter the network, to ensure they don't have a chance to impact the network's performance.
While it might be tempting to integrate as many functions as possible, such as WAF, DDoS protection, and load balancing, into one ADC device (some ADC vendors offer that), it also means attack traffic can travel through your network, potentially over-utilizing resources and causing performance problems. The way to eliminate this risk is to block attack traffic at the perimeter of the network, or even in the cloud.
Placing a DDoS mitigation device at the edge of your data center's network makes a lot of sense, but how can you detect and block application-level attack traffic from entering your network? The answer is to separate the attack detection function from the mitigation/blocking function. A WAF that can detect application-specific attacks and communicate the attack traffic signature to the mitigation device already deployed at the network edge (for DDoS protection) ensures that no attack traffic penetrates your network and causes performance issues.
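The detect-then-propagate pattern can be sketched conceptually: the WAF detects an application-level attack and hands a traffic signature to the edge mitigation device, which then drops matching traffic before it enters the network. The class, method, and signature below are all illustrative, not any vendor's actual API.

```python
class EdgeMitigator:
    """Toy model of an edge device that drops traffic matching known signatures."""

    def __init__(self):
        self.block_signatures = set()

    def install_signature(self, signature: str):
        # Called by the WAF when it detects a new application-level attack.
        self.block_signatures.add(signature)

    def allow(self, request_path: str) -> bool:
        # Drop any request matching an installed signature.
        return not any(sig in request_path for sig in self.block_signatures)

edge = EdgeMitigator()
# The WAF detects, say, a SQL-injection pattern and pushes it to the edge:
edge.install_signature("' OR 1=1")
```

From that point on, matching requests are dropped at the perimeter and never consume data center resources.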
5. Optimize Each Communication Session Separately
Different applications have different communication characteristics. Some are chatty, with many back-and-forth messages between the server and the client (like messaging applications), and some transfer large chunks of data (like Business Intelligence reporting tools). The type of client also affects communication characteristics: mobile devices tend to suffer more from packet loss than desktop devices connected via landline links. Optimizing each session per application, per device, and per the link it is connected through can have a great impact on overall network performance.
ADCs are again the best-placed devices to gather knowledge about application characteristics and the type of client accessing them, because they have layer 7 visibility. ADCs can also dynamically apply all sorts of optimizations, be it TCP optimizations, compression, caching, or redirection to another server closer to the requesting client. However, to leverage all of that, the application and network administrators need to work together and apply the right policies in the ADC, so that it optimizes each session according to its specific characteristics.
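Such a per-session policy might be evaluated roughly as follows, keyed on application profile and client type. The profile names and optimization labels are hypothetical, standing in for whatever knobs a given ADC exposes.

```python
def session_optimizations(app_profile: str, client: str) -> set:
    """Choose optimizations per session based on app and client characteristics."""
    opts = set()
    if app_profile == "chatty":          # many small request/response pairs
        opts.add("tcp_nodelay")          # don't delay small packets
    if app_profile == "bulk_transfer":   # large payloads, e.g. BI reports
        opts.update({"compression", "large_tcp_window"})
    if client == "mobile":               # lossy radio links
        opts.update({"selective_ack", "aggressive_retransmit"})
    return opts
```

A messaging session from a phone and a BI report download from a desktop would thus each get a distinct, tailored optimization set from the same policy.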
While ADCs are often thought of as elements that increase application availability and scalability, they also have key capabilities that can deliver major improvements to network performance. Not all ADCs have the same set of tools: not all will provide performance visibility or communicate with your security solution. This is why selecting the right ADC can have a big impact on your network performance. And even then, one needs to activate and leverage the various ADC capabilities to gain the maximal performance improvement.