5 Ways to Improve Your Network Infrastructure’s Performance


While network performance challenges are often addressed by adding bandwidth, there are ways to get more “good-put” (good net payload throughput) out of the same network infrastructure. In this blog post, I’ll discuss five ways a good application delivery solution can help.

1. It’s in the protocol

The 30-year-old TCP (Transmission Control Protocol) is one of the most commonly used protocols between any two network devices today. While TCP has many advantages, such as reliability, its basic implementation delivers relatively low efficiency in terms of net payload throughput. This is mainly because it periodically pauses and waits for acknowledgements from the receiving side that all information has arrived correctly. If even a small portion has not been received, that information is resent while the sender recalibrates (slows down) the amount of traffic it may transmit before waiting for the next acknowledgment. This basic cycle of pausing and waiting for acknowledgement from the receiver is one of the causes of TCP’s low efficiency.
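To see why this matters, consider a rough back-of-the-envelope calculation (the window size and round-trip time below are illustrative assumptions, not measurements): because the sender must stop and wait once its window of unacknowledged data is full, a single connection’s throughput is capped at roughly the window size divided by the round-trip time, no matter how fast the underlying link is.

```python
# Rough illustration: a single TCP connection's throughput is bounded by
# window_size / round_trip_time. The numbers below are illustrative assumptions.
window_bytes = 64 * 1024   # 64 KB of data allowed "in flight" before waiting for an ACK
rtt_seconds = 0.05         # 50 ms round trip between sender and receiver

max_throughput_bps = (window_bytes * 8) / rtt_seconds
print("max throughput: %.1f Mbit/s" % (max_throughput_bps / 1e6))
# -> roughly 10.5 Mbit/s, even if the link itself is 10 Gbit/s
```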

Application Delivery Controllers (ADCs) often serve as proxies to server clusters and terminate/initiate the TCP connection with both the users and the servers. There are various TCP optimization algorithms (e.g. Hybla or Westwood) that ADCs can apply to improve the efficiency of the TCP protocol and thus yield higher throughput and lower response times.
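As a rough illustration of the idea (not how any particular ADC implements it), a Linux-based proxy can select a congestion-control algorithm on a per-socket basis. A minimal sketch, assuming a Linux kernel with the Westwood module loaded:

```python
import socket

# Minimal sketch (Linux only): choose a congestion-control algorithm per socket.
# "westwood" must be available in the kernel; otherwise we keep the system default.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"westwood")
except OSError:
    pass  # module not loaded -> fall back to the default (often "cubic")

algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("congestion control in use:", algo.rstrip(b"\x00").decode())
sock.close()
```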

2. Connection brokering

TCP requires a connection to be established (a three-way handshake) before any data can be exchanged, and in many applications a new connection is set up for each and every transaction. This setup adds delay and lowers network throughput.

Acting as a TCP proxy, the ADC can keep a small number of TCP connections open to the server on one side and multiplex users’ TCP connections onto them, reducing the delay caused by establishing a new connection with the server for every request. The ADC also offloads the server from maintaining separate connections for numerous users.
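To illustrate the principle from the client side (a generic sketch, not an ADC feature; example.com is just a placeholder host, and the comparison assumes the server keeps the connection alive), compare opening a fresh connection per request with reusing one persistent connection:

```python
import http.client
import time

HOST = "example.com"  # placeholder endpoint for illustration only

# One connection per request: pay the TCP handshake every time.
start = time.perf_counter()
for _ in range(5):
    conn = http.client.HTTPConnection(HOST, timeout=5)
    conn.request("HEAD", "/")
    conn.getresponse().read()
    conn.close()
print("new connection per request: %.3f s" % (time.perf_counter() - start))

# One persistent connection reused for all requests (assumes server keep-alive).
start = time.perf_counter()
conn = http.client.HTTPConnection(HOST, timeout=5)
for _ in range(5):
    conn.request("HEAD", "/")
    conn.getresponse().read()
conn.close()
print("single reused connection:   %.3f s" % (time.perf_counter() - start))
```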

3. Bandwidth Management

Assuming resources are limited, congestion is unavoidable. In itself, congestion is not necessarily a problem; the challenge is how to minimize its effects. Those effects include packet loss, varying delays, and even timeouts of application processes. This often results in a higher rate of retransmissions, which lowers the network’s “good-put” and, as a result, its efficiency.

One solution to this challenge is managing bandwidth utilization in the network. To do this effectively, however, the enforcement point must know what each flow is and how much bandwidth should be allocated to it in order to maximize network utilization. Since ADCs are designed to handle traffic based on information from layers 2-7, it makes sense to also implement a bandwidth management function there, one that can classify traffic with sufficient granularity and is smart enough to “understand” how much bandwidth to allocate per traffic flow, application, or user.
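One common building block for this kind of per-flow allocation is a token bucket. The sketch below is a simplified illustration of the mechanism (the rate and burst size are made-up examples), not a description of any specific ADC’s implementation:

```python
import time

class TokenBucket:
    """Simplified per-flow token-bucket shaper: each flow is granted tokens at its
    allocated rate; packets are forwarded only while tokens remain."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens according to the allocated rate, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # forward the packet
        return False      # queue or drop: the flow exceeded its allocation

# Example: a flow limited to 1 MB/s with a 64 KB burst allowance (illustrative numbers).
bucket = TokenBucket(rate_bytes_per_sec=1_000_000, burst_bytes=64_000)
print(bucket.allow(1500))  # a typical Ethernet-sized packet -> True
```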

4. Measuring and selecting the fastest path

Routing protocols provide the ability to choose the fastest path from point A to B. However, if a web application can serve an end user from multiple datacenters, the question is, which one will provide the fastest response time?

To illustrate this point, here’s an example from a real customer, a Video on Demand (VoD) provider. The challenge was to provide the VoD service to hundreds of thousands of users connected through six different ISPs. By replicating the VoD application and content six times and deploying a copy in each of those ISPs, the provider naturally gained high availability. But it still needed to route users to the correct datacenter, so that a user who connected to the Internet via ISP 1 would be routed to the VoD application server deployed in ISP 1’s datacenter, and so on.

ADCs that have a Global Server Load Balancing (GSLB) function can be used to do just that. When a user requests content from a website, the GSLB function measures which of the datacenters can deliver the requested content with the shortest path delay to that user. It then redirects the user to the closest datacenter in terms of delay and number of hops. This way, the user benefits from faster service and a better quality of experience. At the same time, the amount of traffic crossing between ISPs is minimized, yielding higher network efficiency, lower delays, and ultimately, lower costs for the VoD provider.
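A minimal sketch of the kind of measurement a GSLB function makes, assuming two hypothetical datacenter hostnames (a real decision would also weigh hop count, health, and load):

```python
import socket
import time

# Hypothetical placeholder hostnames for the per-ISP datacenters.
DATACENTERS = {
    "isp1-dc": "vod.isp1.example",
    "isp2-dc": "vod.isp2.example",
}

def connect_delay(host, port=443, timeout=2.0):
    """Return the TCP connect time to a datacenter, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return None

delays = {name: connect_delay(host) for name, host in DATACENTERS.items()}
reachable = {name: d for name, d in delays.items() if d is not None}
if reachable:
    best = min(reachable, key=reachable.get)
    print("redirect user to: %s (%.1f ms)" % (best, reachable[best] * 1000))
```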

5. Integrated Application Performance Monitoring

Up to this point we’ve discussed techniques and tools embedded in ADCs that allow you to get more “good-put” out of your networks. However, network performance is also affected by unexpected problems such as slow links, unmanaged congestion, slow-responding DNS, cyberattacks and much more. There are two ways to discover high delays in the network. The most common is to hear from dissatisfied users who complain about slow or broken connections. The other is to monitor the end-user experience. Although this can be costly and complicated, user experience monitoring is important because the problem is not always in the network; it can also lie within the user’s device.

Implementing an Application Performance Monitoring function in your ADC gives you a bird’s-eye view of the different elements that make up the user experience. It measures datacenter delay and the network delay between the datacenter and the user, and it collects information from the user’s side about the actual time it takes for the application page to start functioning.
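As a simplified illustration of that kind of decomposition (a generic sketch, not Radware’s APM; example.com is a placeholder), a single request’s delay can be split into DNS lookup, TCP connect, and server-plus-network time:

```python
import socket
import time

HOST, PORT = "example.com", 80  # placeholder endpoint for illustration only

t0 = time.perf_counter()
ip = socket.gethostbyname(HOST)               # DNS lookup
t_dns = time.perf_counter()

sock = socket.create_connection((ip, PORT), timeout=5)
t_connect = time.perf_counter()               # TCP handshake complete

sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n")
sock.recv(1)                                  # wait for the first byte of the response
t_first_byte = time.perf_counter()
sock.close()

print("DNS lookup:        %.1f ms" % ((t_dns - t0) * 1000))
print("TCP connect:       %.1f ms" % ((t_connect - t_dns) * 1000))
print("server + network:  %.1f ms" % ((t_first_byte - t_connect) * 1000))
```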

Combined with a good analytics tool, this information makes it possible to pinpoint, in real time, performance bottlenecks in the different segments between the user and the server, and to proactively detect and troubleshoot the network performance issues that often cause customer dissatisfaction.

In sum, while ADCs are often thought of as tools for increasing application availability and scalability, they also include key functions that can deliver major improvements in network performance. But it’s not enough that the ADC has those capabilities: the ADC administrator must be aware of them and enable them with the correct configuration to gain the maximum improvement. And the only way to gain that maximum improvement is by monitoring performance, which provides the necessary visibility and insight into all aspects of the application delivery process.

Yaron Azerual

Yaron Azerual is a senior product marketing manager at Radware, bringing 27 years of engineering, product management and product marketing experience from large corporations such as Lucent and Avaya, as well as from smaller companies and startups such as Alvarion and Wavion. Yaron brings a deep understanding of both the development aspects of communication and security products and the customer challenges those products should solve. He holds a bachelor’s degree in electrical engineering from Tel Aviv University.
