
Application Delivery

Auto-Discover, -Scale and -License ADCs

June 5, 2019 — by Prakash Sinha


In a changing world of microservice applications, agile approaches, continuous delivery and integration, and the migration of applications and services to the cloud, ADCs (aka load balancers) are transforming as well.

ADCs still make applications and services available, locally or globally, within and across clouds and data centers, while providing redundancy to links and reducing latency for the consumers of application services. However, because of where ADCs sit in the network, they have taken on additional roles as a security choreographer and a single point of visibility across the front end, networks and applications.

Traditionally deployed as a redundant pair of physical devices, ADCs have increasingly been deployed as virtual appliances. Now, as applications move to the cloud, ADCs are available as a service in the cloud, or as a mix of virtual, cloud and physical devices, depending on the cost and performance characteristics desired.

Core Use Cases

Providing high availability (HA) is one of the core use cases for an ADC. HA addresses the need for an application to recover from failures within and between data centers. SSL offload is another core use case. As SSL/TLS becomes pervasive for securing and protecting web transactions, offloading these non-business functions from application and web servers reduces application latency while lowering the cost of the application footprint required to serve users.
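To make the offload concept concrete, here is a deliberately minimal Python sketch of TLS termination: the front end decrypts client traffic and forwards it in cleartext to an application server. The certificate paths, listening port and backend address are placeholder assumptions, and it naively handles a single request and response per connection; a real ADC adds full-duplex streaming, connection reuse and cipher policy.

import socket
import ssl

CERT_FILE, KEY_FILE = "server.crt", "server.key"  # assumed certificate files
BACKEND = ("10.0.0.10", 8080)                     # assumed plain-HTTP app server

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8443))  # 8443 so the sketch runs without root
listener.listen(5)

while True:
    client, _ = listener.accept()
    tls_client = context.wrap_socket(client, server_side=True)  # decrypt here
    upstream = socket.create_connection(BACKEND)  # cleartext to the backend
    upstream.sendall(tls_client.recv(65536))      # toy: one recv per direction
    tls_client.sendall(upstream.recv(65536))
    upstream.close()
    tls_client.close()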

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

One of the ways organizations use the cloud and automation to optimize the cost of their application infrastructure is by dynamically adjusting resource consumption to actual utilization levels. As the number of users connecting to a particular application service grows, new instances of the application service are brought online. Scaling in and out in an automated way is one of the primary reasons ADCs have built-in automation and integrations with orchestration systems. For example, Radware’s automation capabilities enhance and extend Microsoft Azure by taking advantage of Scale Sets to automatically grow and shrink the ADC cluster based on demand.
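As an illustration of the kind of decision such automation makes, below is a small, self-contained Python sketch of scale-in/scale-out logic; the thresholds and instance limits are illustrative assumptions, not Radware’s or Azure’s actual parameters.

import math

SCALE_OUT_THRESHOLD = 0.75   # grow when average utilization exceeds 75%
SCALE_IN_THRESHOLD = 0.30    # shrink when it drops below 30%
MIN_INSTANCES, MAX_INSTANCES = 2, 10

def desired_capacity(current_instances: int, avg_utilization: float) -> int:
    """Return the target instance count for the ADC cluster."""
    if avg_utilization > SCALE_OUT_THRESHOLD:
        # Scale out proportionally, aiming utilization back toward ~60%.
        target = math.ceil(current_instances * avg_utilization / 0.60)
    elif avg_utilization < SCALE_IN_THRESHOLD:
        target = current_instances - 1  # reclaim capacity gently
    else:
        target = current_instances
    return max(MIN_INSTANCES, min(MAX_INSTANCES, target))

print(desired_capacity(4, 0.85))  # -> 6: scale out under load
print(desired_capacity(4, 0.20))  # -> 3: scale in when idle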

Automating Operations

Auto-scale capability is important for organizations looking to automate operations: adding and removing services on demand without manual licensing intervention, and reclaiming capacity when it is no longer in use. This saves costs in both operations and training. As organizations move to the cloud, capacity planning and the associated licensing are common concerns. Elastic licensing is designed to cap license costs as organizations transition from physical or virtual deployments to the cloud.

[You may also like: Economics of Load Balancing When Transitioning to the Cloud]

Innovative elastic licensing benefits small and large enterprises alike, protecting them against load balancing pricing shocks as the number of users and associated SSL transactions grows, while simplifying capacity planning. End-to-end visibility and automation further enable self-service across various stakeholders and reduce errors.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Delivery, Cloud Computing

Economics of Load Balancing When Transitioning to the Cloud

May 22, 2019 — by Prakash Sinha


One of the concerns I hear often is that application delivery controller (ADC) licensing models do not support cloud transitions for the enterprise or address the business needs of cloud service providers that have a large number of tenants.

Of course, there are many models to choose from: perpetual pricing per instance, bring-your-own-license (BYOL), consumption and metered licensing models that license by CPU cores, per user or by throughput, and service provider licensing agreements (SPLA), to name a few. The biggest concern is the complexity of licensing ADC capacity. In a cloud environment, the performance profile for a particular instance may need to change to accommodate a traffic spike, and the licensing infrastructure and automation need to accommodate this characteristic.

Traditionally, load balancers were deployed as a redundant pair of physical devices supported by perpetual pricing: a non-expiring license to use an instance, whether hardware, virtualized or in the cloud. The customer has no obligation to pay for support or update services, although they are offered at an additional yearly cost. As virtualization took hold in data centers, ADCs began to be deployed as virtual appliances and started supporting a subscription licensing model: a renewable license, usually annual or monthly, that includes software support and updates during the subscription term and terminates unless renewed at the end of the term. Now, as applications move to the cloud, ADCs are being deployed as a service in the cloud, and consumption-based pricing is becoming common.

[You may also like: Keeping Pace in the Race for Flexibility]

Evaluating Choices: The Problem of Plenty

There are many licensing models to choose from (perpetual, subscription, consumption/metered), so how do you decide? The key is to understand what problem you’re trying to solve, identify the *MUST* have capabilities you’d expect for your applications, plan how much capacity you’d need, and then do an apples-to-apples comparison.

Understand the use case

Let us consider a cloud service provider (CSP) tenant onboarding as an example. The provider offers service to its tenants (medium and large enterprises), which consume their own homegrown applications and those offered and hosted by the CSP.

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

For example, a CSP whose tenants are hospitals and physician networks offers patient registration systems as a shared SaaS offering among multiple tenants. Each tenant has varying needs for a load balancer: small ones require public cloud-based ADCs, whereas mid-sized and large ones have both public and private cloud solutions. Some of the larger tenants also require their application services proxied by hardware ADCs due to low latency requirements. Self-service is a must for the CSP to reduce the cost of doing business, and so are the automation and integration needed to support tenants that administer their own environments.

Based on the use case, evaluate what functionality you’d need and what type of form factor support is required

CSPs are increasingly concerned about the rapid growth and expansion of Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform into their markets. Hosting providers that only provide commodity services, such as co-location and web hosting, have realized they are one service outage away from losing customers to larger cloud providers.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

In addition, many CSPs that provide managed services are struggling to grow because their current business is resource intensive and difficult to scale. In order to survive this competitive landscape, CSPs must have:

  • Cost predictability for the CSP (and tenants)
  • The ability to offer value-added advisory services, such as technical and consulting opportunities to differentiate
  • Self-service to reduce resources via the ability to automate and integrate with a customer’s existing systems
  • Solutions that span both private and public cloud infrastructure and include hardware

For the CSP onboarding use case above, the technical requirements break down to self-service, the ability to create ADC instances of various sizes, automated provisioning, and support for Ansible, vRO and Cisco ACI. From a business perspective, the CSP needs to offer a host of solutions for its tenants spanning cloud, private and hardware-based ADCs.
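To give a flavor of the self-service piece, here is a hedged sketch of a provisioning call against a generic ADC controller REST API. The endpoint, request fields and token are all hypothetical; a real deployment would use the vendor’s documented API or its Ansible modules.

import json
import urllib.request

CONTROLLER = "https://adc-controller.example.com"  # hypothetical controller
API_TOKEN = "REPLACE_ME"                           # hypothetical tenant token

def provision_adc(tenant: str, size: str, form_factor: str) -> dict:
    """Ask the controller to create an ADC instance for a tenant."""
    body = json.dumps({
        "tenant": tenant,
        "size": size,                # e.g. "small", "medium", "large"
        "form_factor": form_factor,  # "vm", "cloud" or "hardware-partition"
    }).encode()
    req = urllib.request.Request(
        f"{CONTROLLER}/api/v1/adc-instances",  # hypothetical endpoint
        data=body,
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# provision_adc("hospital-a", "small", "cloud")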

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

Plan Capacity

Once you understand the use case and have defined the functional, technical and business requirements, it’s time to review what kind of capacity you’ll need, now and in the future. You may use existing analytics dashboards and tools to gain visibility into what you consume today; the data may include HTTP and HTTPS traffic, UDP traffic, SSL certificates, throughput per application at peak, and connections and requests per second. Based on your growth projections, you can then define future needs.
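A back-of-the-envelope projection can be as simple as compounding today’s peak figures by an expected growth rate, as in the sketch below; the peak numbers and the 25% growth assumption are placeholders to be replaced with figures from your own dashboards.

current_peak = {
    "throughput_gbps": 4.0,
    "ssl_connections_per_sec": 8000,
    "requests_per_sec": 120000,
}
annual_growth = 0.25  # assumed 25% year-over-year growth
years = 3

for metric, value in current_peak.items():
    projected = value * (1 + annual_growth) ** years
    print(f"{metric}: {value:,.0f} today -> {projected:,.0f} in {years} years")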

Compare Available Options

The next step is to look at the various vendors on the performance metric that is most important to your applications. If you have a lot of SSL traffic, for example, compare that metric as a cost per unit across the vendors.

[You may also like: Are Your Applications Secure?]

Once you have narrowed the list to vendors that support the functionality your applications MUST have, it’s time to review pricing against your budget. It’s important to compare apples to apples, so based on your capacity and utilization profile, compare the vendors on your short list. The chart below shows one example of such a comparison on AWS: on-demand instances versus a Radware Global Elastic Licensing subscription as a yearly cost.
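The arithmetic behind such a comparison is simple; the sketch below uses purely illustrative prices (not actual AWS or Radware list prices) to show the apples-to-apples yearly calculation.

HOURS_PER_YEAR = 8760

on_demand_hourly = 1.20  # $/hour per ADC instance (placeholder)
instances = 4
on_demand_yearly = on_demand_hourly * HOURS_PER_YEAR * instances

elastic_license_yearly = 30000.0  # capacity-based subscription (placeholder)

print(f"On-demand instances: ${on_demand_yearly:,.0f}/year")
print(f"Elastic license:     ${elastic_license_yearly:,.0f}/year")
print(f"Difference:          ${on_demand_yearly - elastic_license_yearly:,.0f}/year")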

As enterprises and service providers embark on a cloud journey, they need a simpler, more flexible licensing model and infrastructure: one that eliminates planning risk, enables predictable costs, simplifies and automates licensing for provisioned capacity, and enables transferring capacity from existing physical deployments to the cloud to realize savings.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Delivery, Cloud Computing

Ensuring a Secure Cloud Journey in a World of Containers

December 11, 2018 — by Prakash Sinha


As organizations transition to the cloud, many are adopting microservice architecture to implement business applications as a collection of loosely coupled services, in order to enable isolation, scale, and continuous delivery for complex applications. However, you have to balance the complexity that comes with such a distributed architecture against the application’s security and scale requirements, as well as time-to-market constraints.

Many application architects choose containers as the tool of choice for implementing a microservices architecture. Among their many advantages, such as a small resource footprint, fast instantiation and better resource utilization, containers provide a lightweight run time and a consistent environment for the application, from development through testing to production deployment.

That said, adopting containers doesn’t remove the traditional security and application availability concerns; application vulnerabilities can still be exploited. Recent ransomware attacks highlight the need to secure against DDoS and application attacks.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Security AND availability should be top-of-mind concerns in the move to adopt containers.

Let Your Load Balancer Do the Heavy Lifting

For many years, application delivery controllers (ADCs), a.k.a. load balancers, have been integral to addressing service-level needs for applications, deployed on premise or in the cloud, to meet the availability and many of the security requirements of those applications.

Layered security is a MUST: In addition to using built-in tools for container security, traditional approaches to security are still relevant. Many container-deployed services are composed using Application Programming Interfaces (APIs). Since these services are accessible over the web, they are open to malicious attacks.

As hackers probe network and application vulnerabilities to gain access to sensitive data, the prevention of unauthorized access needs to be multi-pronged as well:

  • Preventing denial-of-service attacks
  • Running routine vulnerability assessment scans on container applications
  • Scanning application source code for vulnerabilities and fixing them
  • Preventing malicious access by validating users before they can access a container application
  • Preventing rogue application ports/applications from running
  • Securing data at rest and in motion

Since ADCs terminate user connections, scrubbing the data with a web application firewall (WAF) helps identify and prevent malicious attacks, while authenticating users against an identity management system prevents unauthorized access to a container service.
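As a minimal illustration of validating users before they reach a container service, the sketch below uses the PyJWT package (pip install pyjwt) with a shared HS256 secret; the secret and the claim check are assumptions, and a real deployment would integrate with your identity provider.

import jwt  # PyJWT

SECRET = "REPLACE_WITH_SHARED_SECRET"  # placeholder shared secret

def authorize(request_headers: dict) -> bool:
    """Return True only if the bearer token is valid and unexpired."""
    auth = request_headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    return claims.get("scope") == "container-app"  # example claim check

# The proxy forwards to the container service only when authorize(...) is True.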

Availability is not just a nice-to-have: A client interacting with a microservices-based application does not need to know about the instance that is serving it. This is precisely the isolation and decoupling a load balancer provides, ensuring availability if one of the instances becomes unavailable.

Allocating and managing resources manually is not an option: Although a container-based application has many benefits, it is a challenge to quickly roll out, troubleshoot and manage microservices. Manually allocating resources for applications and re-configuring the load balancer to incorporate newly instantiated services is inefficient and error-prone, and it becomes problematic at scale, especially with services that have short lifetimes. Automating the deployment of services quickly becomes a necessity. Automation tools transform the traditional manual approach into simpler automated scripts and tasks that do not require deep familiarity or expertise.
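The toy Python sketch below shows both ideas at once: automation registers newly instantiated instances with the balancer, and requests are routed only to healthy backends. The addresses and the health flag are stand-ins for real health checks.

import itertools

class LoadBalancer:
    def __init__(self):
        self.instances = {}  # address -> healthy?
        self._cycle = None

    def register(self, address: str):
        """Called by automation when a new service instance is spun up."""
        self.instances[address] = True
        self._cycle = itertools.cycle(list(self.instances))

    def mark_down(self, address: str):
        self.instances[address] = False  # a health check would set this

    def next_backend(self) -> str:
        for _ in range(len(self.instances)):
            addr = next(self._cycle)
            if self.instances[addr]:
                return addr
        raise RuntimeError("no healthy instances")

lb = LoadBalancer()
lb.register("10.0.0.11:8080")
lb.register("10.0.0.12:8080")
lb.mark_down("10.0.0.11:8080")
print(lb.next_backend())  # always 10.0.0.12:8080 while .11 is down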

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

If you don’t monitor, you won’t know: When deploying microservices that may affect many applications, proactive monitoring, analytics and troubleshooting become critical to catching issues before they become business disruptions. Monitoring may include information about each microservice such as latency, security issues, service uptime and access problems.
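A monitoring loop can start as simply as probing each service’s health endpoint and recording uptime and latency, as in the sketch below; the endpoint list is illustrative.

import time
import urllib.request

ENDPOINTS = ["http://10.0.0.11:8080/health"]  # placeholder service addresses

def probe(url: str, timeout: float = 2.0) -> dict:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            up = 200 <= resp.status < 300
    except OSError:
        up = False  # connection refused, timeout, DNS failure, HTTP error
    return {"url": url, "up": up,
            "latency_ms": (time.perf_counter() - start) * 1000}

for endpoint in ENDPOINTS:
    print(probe(endpoint))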

Businesses must support complex IT architectures for application delivery in a secure manner. Configuring, deploying and maintaining cross-domain microservices can be error-prone, costly and time-consuming. Organizations should ensure security with a layered approach to security controls. To simplify the configuration and management of these microservices, IT should adopt automation, visibility, analytics and orchestration best practices and tools that fit in with their agile and DevOps processes. The goal is to keep the secure, controlled environment mandated by IT without losing the development agility and automation that DevOps needs.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery, Cloud Computing, Cloud Security

Embarking on a Cloud Journey: Expect More from Your Load Balancer

November 13, 2018 — by Prakash Sinha


Many enterprises are in transition to the cloud, either building their own private cloud, managing a hybrid environment (both physical and virtualized) or deploying on a public cloud. In addition, there is a shift from infrastructure-centric environments to application-centric ones. In a fluid development environment of continuous integration and continuous delivery, where services are frequently added or updated, the new paradigm requires support for needs across multiple environments and many stakeholders.

When development teams choose unsupported cloud infrastructure without IT involvement, the network team loses visibility, security and cost control, yet remains accountable for the service level agreement (SLA) once the developed application goes live.

The world is changing. So should your application delivery controller.

Application delivery and load balancing technologies have been the strategic components providing availability, optimization, security and latency reduction for applications. To enable the seamless migration of business-critical applications to the cloud, the same load balancing and application delivery infrastructure must now address the needs of continuous delivery/integration and hybrid and multi-cloud deployments.

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

The objective here is not to block agile development and the use of innovative services, but to have a controlled environment that gives the organization the best of both DevOps and IT: that is, to keep a secure and controlled environment while enabling agility. The benefits speak for themselves:

Reduced shadow IT initiatives
To remain competitive, every business needs innovative technology consumable by the end user. Oftentimes, employees are driven to shadow IT services because going through approval processes is cumbersome, and the available approved technology is complex to learn and use. If users cannot get quick service from IT, they will go to a cloud service provider for what they need. Sometimes this yields short-term benefit, but it can hurt an organization’s security, cost controls and visibility in the long term. Automation and self-service address CI/CD demands and reduce the need for application teams to acquire and use their own unsupported ADCs.

Flexibility and investment protection at a predictable cost
Flexible licensing is one of the critical elements to consider. As you move application delivery services and instances to the cloud, you should be able to reuse existing licenses across a hybrid deployment. Many customers initially deploy on a public cloud, but cost unpredictability becomes an issue once the services scale with usage.

[You may also like: Load Balancers and Elastic Licensing]

Seamless integration with an SDDC ecosystem
As you move to private or public cloud, you should be able to reuse your investment in the orchestration system of your environment. Many developers are not used to networking or security nomenclature. Using self-service tools with which developers are familiar quickly becomes a requirement.

The journey from a physical data center to the cloud may require investments in new capabilities to enable migration to the new environment. If application delivery controller capacity is no longer required in the physical data center, that capacity can be automatically reassigned. Automation and self-service applications address the needs of the various stakeholders, as well as the flexible licensing and cost control aspects of this journey.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery, Security

Simple to Use Link Availability Solutions

November 1, 2018 — by Daniel Lakier


Previously, I discussed how rerouting data center host infrastructure should be part of next-generation DDoS solutions. In this blog, I will discuss how link availability solutions should also play a part. Traditional DDoS solutions offer a measure of protection against a number of things that can disrupt service to our applications or environment. This is good, but what do we do when our mitigation solutions are downstream from the problem? In other words, what do we do if our service provider goes down, either from a cyberattack or another event?

What if we have the capacity to clean the bandwidth provided by our service provider, but the service provider itself is down? How do we prepare for that eventuality? Admittedly, in first-world nations with modern infrastructure, this is a less likely scenario. In third-world nations with smaller carriers/ISPs and/or outdated infrastructure, it is more common. However, times are changing: the plethora of IoT devices being deployed throughout the world makes this scenario more likely. While there is no silver bullet, there are several strategies to help mitigate the risk.

[You may also like: Disaster Recovery: Data Center or Host Infrastructure Reroute]

Is Border Gateway Protocol the Right Solution?

Most companies that consider a secondary provider for internet services have been setting up Border Gateway Protocol (BGP) as the service mechanism. While this can work, it may not be the right choice. BGP is a rigid protocol that takes a reasonable skill level to configure and maintain, and it can introduce complexity and idiosyncrasies that cause their own problems. It also tends to be an either-or protocol: you cannot set all traffic to take the best route at all times. It works on thresholds and is not considered a load balancing protocol; all traffic configured to move along a certain route will move that way until certain thresholds are met, and will only switch back once those thresholds/parameters change again. BGP can also introduce its own problems, including flapping, table size limitations, or cost overruns when it is used to eliminate pay-per-usage links.

Any solution in this space needs to solve both the technical and economic issues associated with link availability. The technical issues break into two parts: people and technology. In other words, make it easy to use and configure; make it work for multiple use cases, both inbound and outbound; and, if possible, eliminate the risk factors associated with rigid solutions, such as link flapping and the downtime caused by re-convergence. The second problem is economic: allow people to leverage their investments fully. If they pay for bandwidth, they should be able to use it. Both links should be active (and load balanced, if the customer wants). A common problem with BGP is that one link is fully leveraged, and therefore hits its maximum threshold, while the other link sits idle due to the lack of flow control or load balancing.
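To show the flow-level balancing that plain BGP lacks, here is a small Python sketch that assigns new flows across two links in proportion to purchased capacity, so neither link saturates while the other sits idle; the link names and capacities are placeholders.

import random

LINKS = {"isp_a": 1000, "isp_b": 500}  # Mbps of purchased bandwidth (assumed)

def pick_link() -> str:
    """Weighted choice keeps long-run usage proportional to capacity."""
    return random.choices(list(LINKS), weights=LINKS.values(), k=1)[0]

counts = {name: 0 for name in LINKS}
for _ in range(10000):
    counts[pick_link()] += 1
print(counts)  # roughly 2:1, matching the 1000:500 capacity ratio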

For several years, organizations have looked for alternatives. Link load balancing and VXLAN solutions have both been popular, especially for branch edge redundancy. Most of these solutions have limitations with inbound network load balancing, which has curtailed adoption: in many data centers, especially cloud deployments, the usual flow of traffic is initiated by users outside the network, and most link load balancing and VXLAN solutions are only good at load balancing outbound traffic. The key reasons for adopting the technology have been two-fold: the ability to reduce cost with WAN/internet providers and the ability to reduce complexity.

The reduction in cost is focused on two main areas:

  • The ability to use less costly (and traditionally less reliable) bandwidth, because the lower reliability is compensated for by dynamically load balancing the links
  • The ability to use what we are paying for and to buy only the required bandwidth

The reduction in complexity comes from the ease of configuration and the simplicity of being able to buy link redundancy solutions as a service.

The unique value of this solution is that you can protect yourself from upstream service outages or upstream burst attacks that trip thresholds in your environment and cause the BGP environment to transition back and forth as failover parameters are met, essentially causing port flapping. The carrier may not experience an outage, but if someone can insert enough latency into the link on a regular basis, it can cause a continual outage. Purpose-built link protection and load balancing solutions serve not only an economic purpose but also protect your organization from upstream cyberattacks.

Read “Flexibility Is The Name of the Game” to learn more.


Application Delivery, Cloud Computing

Digital Transformation – Take Advantage of Application Delivery in Your Journey

October 31, 2018 — by Prakash Sinha


The adoption of new technologies is accelerating business transformation. In essence, the digital transformation of a business uses technology to drive significant improvements in process effectiveness.

Cloud computing is one of the core technologies for Digital Transformation

Increasing maturity of cloud-based infrastructure enables organizations to deploy business-critical applications in public and private clouds. According to a forecast from the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, total spending on IT infrastructure for deployment in cloud environments was expected to total $46.5 billion in 2017, with year-over-year growth of 20.9%. Public cloud data centers account for the majority of this spending (65.3%) and are growing at the fastest annual rate, 26.2%.

Many enterprises are in the midst of this transition to the cloud, whether moving to a public cloud, building their own private cloud or managing a hybrid deployment. In this fluid environment, where new services are being frequently added and old ones updated, the new paradigm requires support for needs across multiple environments and across many constituencies – an IT administrator, an application developer, DevOps and tenants.

[You might also like: Optimizing Multi-Cloud, Cross-DC Web Apps and Sites]

Nobody Said It Was Easy!

However, migrating applications to the cloud is not easy. The flexibility and cost benefits that drive the shift to the cloud also present many challenges: security, business continuity, application availability, latency reduction, issues with visibility, SLA guarantees and isolation of resources. Other aspects requiring thought include licensing, lock-in with a cloud service provider, architecture to address hybrid deployments, shadow IT, automation, user access, user privacy and compliance needs.

One of the main challenges for enterprises moving to a cloud infrastructure is how to guarantee a consistent quality of experience to consumers across multiple applications, many of which are business critical, developed using legacy technologies and still hosted on premise.

Along with quality of experience, organizations need to look at security policies. Sometimes policies require integration with a cloud service provider’s infrastructure, or new capabilities to complement the on-premise architecture, while addressing denial of service, application security and compliance for the new attack surface exposed by applications in the cloud.

Convenience and productivity improvements are often the initial drivers for adopting IT services in the cloud. One way to address security and availability concerns for an enterprise embarking on the cloud journey is to ensure that security and availability are also included as part of IT self-service, orchestration and automation systems, without requiring additional effort from those driving the adoption of cloud-based IT applications.

The World of Application Delivery Has Changed to Adapt!

Application delivery and load balancing technologies have been the strategic components providing availability, optimization, security and latency reduction for applications. To enable the seamless migration of business-critical applications to the cloud, the same load balancing and application delivery infrastructure has to evolve to address the needs of continuous delivery/integration and hybrid and multi-cloud deployments.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Acceleration & Optimization, Application Delivery

Optimizing Multi-Cloud, Cross-DC Web Apps and Sites

September 27, 2018 — by Prakash Sinha


If you are working from your organization’s office, the chances are good that you are enjoying the responsiveness of your corporate LAN, thereby guaranteeing speedy load times for websites and applications.

Yahoo! found that making pages just 400 milliseconds faster resulted in a 9% increase in traffic. The faster site also doubled the number of sessions from search engine marketing and cut the number of required servers in half.

Don’t Fly Blind – Did Someone Say Waterfall?

Waterfall charts let you visualize cumulative data sequentially across a process. Performance waterfalls for webpages, generated using webpagetest.org, let you see the series of actions that occur between a user and your application in order for that user to view a specific page of your site.

The webpagetest.org waterfall chart shows the connections view, with a breakdown of DNS lookup, TCP connection establishment, time to first byte (TTFB), rendering time and document complete.
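You can measure those same early phases with nothing more than Python’s standard library. The sketch below times DNS lookup, TCP connect and TTFB for a single plain-HTTP request; example.com and port 80 are stand-ins for your own site.

import socket
import time

HOST, PORT, PATH = "example.com", 80, "/"

t0 = time.perf_counter()
ip = socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)[0][4][0]
t_dns = time.perf_counter()

sock = socket.create_connection((ip, PORT))
t_conn = time.perf_counter()

request = f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
sock.sendall(request.encode())
sock.recv(1)  # block until the first response byte arrives
t_ttfb = time.perf_counter()
sock.close()

print(f"DNS lookup:  {(t_dns - t0) * 1000:.1f} ms")
print(f"TCP connect: {(t_conn - t_dns) * 1000:.1f} ms")
print(f"TTFB:        {(t_ttfb - t_conn) * 1000:.1f} ms")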

[You might also like: Considerations for Load Balancers When Migrating Applications to the Cloud]

Optimizing Web-Facing Apps That Span Cloud and/or Data Center Boundaries

The performance of a website correlates directly with that website’s success. The speed with which a web page renders in a user’s browser affects every conceivable business metric: page views, bounce rate, conversions, customer satisfaction, return visits and, of course, revenue.

Latency, payload, caching and rendering are the key measures when evaluating website performance. Each round trip is subject to the connection latency, and the time from when a user requests the webpage to when its resources are downloaded in the browser is directly related to the weight of the page and its resources. The larger the total content size, the more time it takes to download everything needed for the page to become functional for the user.

Using caching and default caching headers may reduce latency, since less content is downloaded and fewer round trips are needed to fetch resources, although some round trips may still occur to validate that cached content is not stale.

Browsers need to render the HTML page and the resources served to them. Client-side work may cause poor rendering in the browser and a degraded user experience; for example, blocking calls (say, third-party ads) or improper rendering of page resources can delay page load time and impact the user experience.

The low-hanging fruit is easy and obvious: reduce the number of connection setups by using keep-alive and pipelining, compress objects to shrink the payload received by the browser, and use caching to manage static objects and pre-fetch data where possible. A content delivery network (CDN) can serve static content closer to users to reduce latency. More involved optimizations include consolidating resources fetched from the server, compressing images sent to the browser depending on the device type, connection speed and user location, and reducing the size of requested objects through content minification. Additional techniques, such as delaying ads until after the page has become usable, may improve the perceived performance of web pages and applications.
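Two of those easy fixes, compression and cache headers, fit in a few lines of Python; the payload below is an artificial, highly repetitive document chosen to make the size reduction obvious.

import gzip

body = b"<html>" + b"<p>hello world</p>" * 500 + b"</html>"
compressed = gzip.compress(body)

headers = {
    "Content-Encoding": "gzip",                # browser decompresses on arrival
    "Cache-Control": "public, max-age=86400",  # cache the static object for a day
    "Content-Length": str(len(compressed)),
}

print(f"original: {len(body)} bytes, gzipped: {len(compressed)} bytes")
print(headers)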

Read “Just Group Achieves Web Page Acceleration” to learn more.


Application Delivery

Operational Visibility for Load Balanced Traffic in SDDC

March 13, 2018 — by Prakash Sinha


Management and monitoring in software-defined data centers (SDDC) benefit from automation principles: programmability, APIs and policy-driven provisioning of application environments through self-service templates. These best practices help application owners define, manage and monitor their own environments, while benefiting from the performance, security, business continuity and monitoring infrastructure of the IT teams. SDDC also changes the way IT designs and thinks about infrastructure; the goal is to adapt to the continuous delivery demands of application owners in a “cloudy” world.
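As a sketch of what such a self-service template might capture before IT’s automation consumes it, consider the Python snippet below; every field name here is an illustrative assumption rather than any product’s schema.

TEMPLATE = {
    "app": "patient-portal",
    "owner": "app-team-3",
    "virtual_service": {"vip_port": 443, "ssl_offload": True},
    "pool": {"members": 4, "health_check": "/health"},
    "monitoring": {"latency_slo_ms": 200, "alert_email": "noc@example.com"},
}

REQUIRED = {"app", "owner", "virtual_service", "pool"}

def validate(template: dict) -> None:
    missing = REQUIRED - template.keys()
    if missing:
        raise ValueError(f"template missing fields: {sorted(missing)}")

validate(TEMPLATE)  # passes; provisioning and monitoring consume it next
print("template accepted for provisioning")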

DDoS, SDN, Security, WAF

Orchestrating Flows for Cyber

January 24, 2018 — by Edward G. Amaroso


There is a great scene in the movie Victor, Victoria, where the character played by James Garner decides it’s time to mix things up a bit. So, he strolls into an old gritty bar wearing a tuxedo, walks up to the bartender, and orders milk. Within minutes, the other men in the bar decide they’ve had enough of this, and they start an intense bar fight. Garner is soon throwing and taking punches, getting tossed across the floor, and loving every minute of it.