The History of the Cloud, Part 2 – Global Server Load Balancing


I recently had to replace my washing machine after many years of faithful service.  As a certified geek, I did my research and determined which brand and model I wanted.  Then I needed to find a store near me that had the machine in stock at a reasonable price.  Eventually, I found a store that was a reasonable distance from my home and could deliver the washing machine I wanted, when I wanted it.

Imagine the benefits we could gain by applying this type of solution to accessing content on the Internet (and intranet, of course).  This is exactly what global server load balancing (GSLB) was designed for.  After creating and implementing server load balancing (SLB), network designers determined that an added level of availability, scalability, and reliability was needed.  They foresaw that an application or its content could still become unavailable, even with SLB technologies, if an entire data center went offline.  SLB was simply not designed to steer traffic efficiently across data centers.

The phone book of the Internet

Traditionally, the Domain Name System (DNS) protocol was used to advertise multiple locations, or IP addresses, for any specific resource or application.  DNS used a rudimentary round-robin algorithm to deliver multiple addresses for any individual request.  The problem was that DNS had no mechanism to dynamically change the IP addresses it delivered based on their availability, relative location, or any other metric.  The DNS servers of the 1990s and early 2000s blindly handed out the IP addresses that were manually configured in their databases, with no sense of priority or availability.
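
To make this concrete, here is a minimal Python sketch of what a classic DNS lookup returns (example.com stands in for any name with multiple A records): every configured address comes back, with no awareness of which ones are healthy or close to the client.

```python
import socket

# Resolve a name that publishes multiple A records ("example.com" is a
# placeholder -- substitute any multi-homed hostname).  Classic DNS simply
# returns every configured address; it knows nothing about which addresses
# are healthy or which are closest to this client.
records = socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP)
for ip in sorted({info[4][0] for info in records}):
    print(ip)
```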

GSLB was designed to make DNS dynamic and aware of the state of the applications and content behind these IP addresses.  In the same way that SLB performed health checks and monitoring of the real servers, GSLB systems could monitor the availability of the applications and content behind the IP addresses.  If a GSLB health check failed, that IP address was removed from the pool of possible DNS responses.
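
As an illustration only, the sketch below shows the basic idea: a hypothetical pool of per-data-center VIPs (RFC 5737 documentation addresses used as placeholders) is probed with a simple TCP connect, and any VIP that fails the check is dropped from the set of candidate DNS answers.  The addresses and the check itself are assumptions for the example, not any vendor's actual implementation.

```python
import socket

# Hypothetical GSLB answer pool: one VIP per data center
# (documentation addresses, used purely as placeholders).
vip_pool = ["192.0.2.10", "198.51.100.10", "203.0.113.10"]

def tcp_healthcheck(ip: str, port: int = 80, timeout: float = 2.0) -> bool:
    """Probe a VIP with a plain TCP connect; real products layer
    application-level checks (HTTP GET, content matching) on top."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# A VIP that fails its health check is removed from the set of
# addresses the GSLB server is willing to hand out in DNS responses.
healthy_answers = [vip for vip in vip_pool if tcp_healthcheck(vip)]
print(healthy_answers)
```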

These IP addresses were often virtual IPs (VIPs) configured on SLB solutions, and load balancing vendors discovered that they could exchange information between the SLB components and the GSLB servers.  This information could be used to make more intelligent decisions about which IP addresses to hand out.  It included the available session capacity and load of the server pool behind each VIP, the response time for client connections, and even geolocation-based performance data derived from client IP addresses.
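
The sketch below extends the idea with hypothetical SLB-reported metrics per VIP.  The metric names and the scoring formula are invented for illustration; real GSLB algorithms weigh many more signals.

```python
# Hypothetical metrics reported by the SLB device behind each VIP:
# remaining session capacity and measured client response time.
site_metrics = {
    "192.0.2.10":    {"spare_sessions": 5000, "rtt_ms": 20},
    "198.51.100.10": {"spare_sessions": 1200, "rtt_ms": 35},
    "203.0.113.10":  {"spare_sessions": 8000, "rtt_ms": 90},
}

def pick_vip(metrics: dict) -> str:
    # Toy scoring invented for illustration: reward spare capacity,
    # penalize latency.  Real products also factor in geolocation,
    # configured site weights, flash-crowd detection, and more.
    return max(metrics,
               key=lambda vip: metrics[vip]["spare_sessions"] / metrics[vip]["rtt_ms"])

print(pick_vip(site_metrics))  # -> "192.0.2.10" with these sample numbers
```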

Emergence of the proto-cloud

GSLB became much more than a disaster recovery solution.  GSLB did for DNS and global site availability, scalability, and redundancy what SLB did for local server agility and elasticity.  If a site became overloaded or went offline, GSLB had the intelligence to automatically steer traffic to other locations that provided the same application and content.  As new sites and capacity were added to the application infrastructure, GSLB could automatically incorporate the new metrics into its algorithms.

Because client connections almost exclusively depended on DNS resolution to map names to IP addresses, GSLB was the perfect technology to extend the functionality of SLB into a global, multi-data center solution.

Together, SLB and GSLB became the core technologies behind the initial concept of the cloud.  Combined, they offer a global infrastructure with the intelligence and automation to deliver agility and elasticity, ensuring that applications and content remain available on the network short of the demise of the entire Internet (or intranet).

Next: The History of the Cloud, Part 3 – Automation and Orchestration

Frank Yue

Frank Yue is Director of Solution Marketing, Application Delivery for Radware. In this role, he is responsible for evangelizing Radware technologies and products before they come to market. He also writes blogs, produces white papers, and speaks at conferences and events related to application networking technologies. Mr. Yue has over 20 years of experience building large-scale networks and working with high performance application technologies including deep packet inspection, network security, and application delivery. Prior to joining Radware, Mr. Yue was at F5 Networks, covering their global service provider messaging. He has a degree in Biology from the University of Pennsylvania.
