
Application Delivery, Cloud Computing

Ensuring a Secure Cloud Journey in a World of Containers

December 11, 2018 — by Prakash Sinha


As organizations transition to the cloud, many are adopting microservice architecture to implement business applications as a collection of loosely coupled services, in order to enable isolation, scale, and continuous delivery for complex applications. However, you have to balance the complexity that comes with such a distributed architecture with the application security and scale requirements, as well as time-to-market constraints.

Many application architects choose application containers as the tool of choice to implement a microservices architecture. Among their many advantages, such as a smaller resource footprint, faster instantiation, and better resource utilization, containers provide a lightweight runtime and a consistent environment for the application, from development through testing to production deployment.

That said, adopting containers doesn’t remove the traditional security and application availability concerns; application vulnerabilities can still be exploited. Recent ransomware attacks highlight the need to secure against DDoS and application attacks.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Security AND availability should be top-of-mind concerns in the move to adopt containers.

Let Your Load Balancer Do the Heavy Lifting

For many years, application delivery controllers (ADCs), a.k.a. load balancers, have been integral to addressing the service-level needs of applications deployed on premise or in the cloud, meeting their availability and many of their security requirements.

Layered security is a MUST: In addition to using built-in tools for container security, traditional approaches to security are still relevant. Many container-deployed services are composed using Application Programming Interfaces (APIs). Since these services are accessible over the web, they are open to malicious attacks.

As hackers probe network and application vulnerabilities to gain access to sensitive data, the prevention of unauthorized access needs to be multi-pronged as well:

  • Preventing denial-of-service attacks
  • Running routine vulnerability assessment scans on container applications
  • Scanning application source code for vulnerabilities and fixing them
  • Preventing malicious access by validating users before they can access a container application
  • Preventing rogue application ports/applications from running
  • Securing data at rest and in motion

Since ADCs terminate user connections, scrubbing the data with a web application firewall (WAF) helps identify and prevent malicious attacks, while authenticating users against an identity management system prevents unauthorized access to a container service.
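As an illustration of that last point, here is a minimal Python sketch of validating users before they reach a container application: a tiny proxy checks a JWT against an identity provider's published keys before forwarding the request. The JWKS URL, audience, and upstream service name are placeholders, not a reference to any particular product.

```python
# Minimal sketch: validate a JWT issued by an identity provider before
# forwarding the request to a container-hosted service.
# The JWKS URL, audience and upstream address are illustrative placeholders.
import jwt                      # PyJWT
import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)
JWKS = jwt.PyJWKClient("https://idp.example.com/.well-known/jwks.json")
UPSTREAM = "http://orders-service:8080"   # container service behind the proxy

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def proxy(path):
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        abort(401)                        # no credentials, no access
    token = auth.split(" ", 1)[1]
    try:
        key = JWKS.get_signing_key_from_jwt(token).key
        jwt.decode(token, key, algorithms=["RS256"], audience="orders-api")
    except jwt.PyJWTError:
        abort(401)                        # invalid or expired token
    upstream = requests.get(f"{UPSTREAM}/{path}", params=request.args, timeout=5)
    return Response(upstream.content, status=upstream.status_code)
```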

Availability is not just a nice-to-have: A client interacting with a microservices-based application does not need to know about the instance that is serving it. This is precisely the isolation and decoupling that a load balancer provides, ensuring availability if one of the instances becomes unavailable.

Allocating and managing resources manually is not an option: Although there are many benefits to a container-based application, it is a challenge to quickly roll out, troubleshoot, and manage these microservices. Manually allocating resources for applications and re-configuring the load balancer to incorporate newly instantiated services is inefficient and error prone, and it becomes problematic at scale, especially with services that have short lifetimes. Automating the deployment of services quickly becomes a necessity. Automation tools transform the traditional manual approach into simpler automated scripts and tasks that do not require deep familiarity or expertise.
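To make that concrete, here is a minimal sketch of the automated alternative, assuming the Kubernetes Python client and a hypothetical ADC REST endpoint: it watches a service's endpoints and pushes membership changes to the load balancer pool instead of reconfiguring it by hand.

```python
# Minimal sketch: watch Kubernetes endpoints and push membership changes to a
# load balancer pool, replacing manual reconfiguration. The ADC pool URL and
# payload shape are hypothetical.
import requests
from kubernetes import client, config, watch

ADC_POOL_API = "https://adc.example.com/api/pools/web-app"   # hypothetical

def sync_pool(namespace="default", service="web-app"):
    config.load_kube_config()           # use load_incluster_config() in-cluster
    v1 = client.CoreV1Api()
    for event in watch.Watch().stream(v1.list_namespaced_endpoints,
                                      namespace=namespace):
        ep = event["object"]
        if ep.metadata.name != service:
            continue
        members = [addr.ip
                   for subset in (ep.subsets or [])
                   for addr in (subset.addresses or [])]
        # Replace the pool members with the pods that are currently ready.
        requests.put(ADC_POOL_API, json={"members": members}, timeout=5)

if __name__ == "__main__":
    sync_pool()
```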

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

If you don’t monitor, you won’t know: When deploying microservices that may affect many applications, proactive monitoring, analytics and troubleshooting become critical to catching issues before they become business disruptions. Monitoring may include information about a microservice such as latency, security issues, service uptime, and access problems.
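A minimal sketch of such a probe might look like the following; the service URLs are placeholders, and in practice the results would feed a monitoring and analytics stack rather than standard output.

```python
# Minimal sketch: periodically probe microservice endpoints and record
# latency and availability. Service URLs and the interval are placeholders.
import time
import requests

SERVICES = {
    "catalog":  "http://catalog.internal.example.com/healthz",
    "checkout": "http://checkout.internal.example.com/healthz",
}

def probe_once():
    for name, url in SERVICES.items():
        start = time.monotonic()
        try:
            up = requests.get(url, timeout=2).ok
            latency_ms = round((time.monotonic() - start) * 1000, 1)
        except requests.RequestException:
            up, latency_ms = False, None
        print(f"{name}: up={up} latency_ms={latency_ms}")

if __name__ == "__main__":
    while True:
        probe_once()
        time.sleep(30)
```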

Businesses must support complex IT architectures for application delivery in a secure manner. Configuring, deploying and maintaining cross-domain microservices can be error-prone, costly and time-consuming. Organizations should ensure security with a layered approach to security controls. To simplify configuration and management of these microservices, IT should adopt automation, visibility, analytics and orchestration best practices and tools that fit in with their agile and DevOps processes. The goal is to keep the secure and controlled environment mandated by IT without sacrificing the development agility and automation that DevOps needs.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Compliance, Security

Marriott: The Case for Cybersecurity Due Diligence During M&A

December 4, 2018 — by Mike O'Malley


If ever there was a perfectly packaged case study on data breaches, it’s Marriott’s recently disclosed megabreach. Last week, the hotel chain announced that its Starwood guest reservation system was hacked in 2014—two years before Marriott purchased Starwood properties, which include the St. Regis, Westin, Sheraton and W Hotels—potentially exposing the personal information of 500 million guests.

The consequences were almost immediate; on the day it announced the breach, Marriott’s stock was down 5% in early trading and two lawsuits seeking class-action status (one for $12.5 billion in damages) were filed. And the U.S. Senate started to discuss stiffer fines and regulations for security breaches. So far, this is all par for the course.

But what makes Marriott’s breach particularly noteworthy is the obvious lack of cybersecurity due diligence conducted during the M&A process.

Never Ever Skip a Step

In September 2016, Marriott International announced that it had completed the acquisition of Starwood Hotels & Resorts Worldwide, creating the largest hotel company in the world. In its press release, Marriott specifically touted the best-in-class loyalty program that the two brands, combined, could now offer members.

What Marriott International executives didn’t realize was that hackers had had unauthorized access to Starwood’s loyalty program since 2014, exposing guests’ private information including names, phone numbers, email addresses, passport numbers, dates of birth, credit card numbers and more.

However, if Marriott had done its homework, it might have avoided the mountain of legal fees and compliance fines it now faces. In today’s digital age, cybersecurity due diligence during any M&A process is, without question, imperative.

[You may also like: The Million-Dollar Question of Cyber-Risk: Invest Now or Pay Later?]

And it’s not just security evangelists like myself who emphasize this. The American Bar Association likewise asserts that “it is critical to understand the nature and significance of a target’s vulnerabilities, the potential scope of the damage that may occur (or that already has occurred) in the event of a breach, and the extent and effectiveness of the cyber defenses the target business has put in place to protect itself. An appropriate evaluation of these issues could, quite literally, have a major impact on the value the acquirer places on the target company and on the way it structures the deal.”

The cost of cyberattacks is simply too great to not succeed in mitigating every threat, every time. A successful cyberattack and resulting data breach obliterates trust and destroys brands.

The Only Way Forward

When one company acquires another, it doesn’t just acquire assets. It also assumes the target company’s risks. Put simply, their gaps become your gaps.

In addition, lack of cybersecurity due diligence can actually undermine the value drivers of the deal. In Marriott’s case, a big driver was retention of Starwood’s high-value travelers: the people who make up the loyalty program. Due to the pain these customers will now endure (changing credit card numbers, passports, etc.), this value driver has been irrevocably damaged.

It is critical that organizations incorporate cybersecurity into the very fabric of the business, from the C-level to IT. Securing digital assets can no longer be delegated solely to the IT department; it must be infused into product and service offerings, security, and perhaps most importantly, development plans and business initiatives. In the case of Marriott, its $13 billion acquisition of Starwood represented a strategic initiative that involved the board of directors, C-level executives and management, all of whom are now partially responsible for the erosion of Marriott’s brand affinity.

[You may also like: Why Cyber-Security Is Critical to The Loyalty of Your Most Valued Customers]

And as we’ve written before, when it comes to loyalty programs, security must transition from the domain of reactive disaster recovery and business continuity into the realm of proactive protection. If loyalty programs are designed to focus on your most valuable customers, why wouldn’t their security fall in line with the other mission-critical assets and infrastructure responsible for servicing these very clients?

Marriott’s Starwood breach is an unfortunate case study for why CEO and executive teams must lead the way in setting the tone when it comes to securing the customer experience. When cybersecurity is overlooked or treated as an afterthought, the potential damage goes far beyond dollars and cents. Your very reputation is at stake.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery, Cloud Computing, Cloud Security

Embarking on a Cloud Journey: Expect More from Your Load Balancer

November 13, 2018 — by Prakash Sinha


Many enterprises are in transition to the cloud, either building their own private cloud, managing a hybrid environment (both physical and virtualized), or deploying on a public cloud. In addition, there is a shift from infrastructure-centric environments to application-centric ones. In a fluid development environment of continuous integration and continuous delivery, where services are frequently added or updated, the new paradigm requires support for needs across multiple environments and across many stakeholders.

When development teams choose unsupported cloud infrastructure without IT involvement, the network team loses visibility and control over security and cost, yet remains accountable for the service level agreement (SLA) once the developed application goes live.

The world is changing. So should your application delivery controller.

Application delivery and load balancing technologies have been the strategic component providing availability, optimization, security and latency reduction for applications. In order to enable seamless migration of business critical applications to the cloud, the same load balancing and application delivery infrastructure must now address the needs of continuous delivery/integration, hybrid and multi-cloud deployments.

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

The objective here is not to block agile development and the use of innovative services, but to have a controlled environment that gives the organization the best of both DevOps and IT: a secure and controlled environment that still enables agility. The benefits speak for themselves:

Reduced shadow IT initiatives
To remain competitive, every business needs innovative technology consumable by the end user. Oftentimes, employees turn to shadow IT services because approval processes are cumbersome, and the available approved technology is complex to learn and use. If users cannot get quick service from IT, they will go to a cloud service provider for what they need. Sometimes this delivers short-term benefit, but it may cause issues with the organization's security, cost controls and visibility in the long term. Automation and self-service address CI/CD demands and reduce the need for application teams to acquire and use their own unsupported ADCs.

Flexibility and investment protection at a predictable cost
Flexible licensing is one of the critical elements to consider. As you move application delivery services and instances to the cloud when needed, you should be able to reuse existing licenses across a hybrid deployment. Many customers initially deploy on public cloud but cost unpredictability becomes an issue once the services scale with usage.

[You may also like: Load Balancers and Elastic Licensing]

Seamless integration with an SDDC ecosystem
As you move to private or public cloud, you should be able to reuse your investment in the orchestration system of your environment. Many developers are not used to networking or security nomenclature. Using self-service tools with which developers are familiar quickly becomes a requirement.

The journey from a physical data center to the cloud may sometimes require investments in new capabilities to enable migration to the new environment. If application delivery controller capacity is no longer required in the physical data center, it can be automatically reassigned. Automation and self-service applications address the needs of various stakeholders, as well as the flexible licensing and cost control aspects of this journey.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery, Security

Simple to Use Link Availability Solutions

November 1, 2018 — by Daniel Lakier


Previously, I discussed how rerouting data center host infrastructure should be part of next-generation DDoS solutions. In this blog, I will discuss how link availability solutions should also play a part. Traditional DDoS solutions offer us a measure of protection against a number of things that can disrupt service to our applications or environment. This is good, but what do we do when our mitigation solutions are downstream from the problem? In other words, what do we do if our service provider goes down either from a cyberattack or other event?

What if we had the capacity to clean the bandwidth provided by our service provider, but the service provider itself is down? How do we prepare for that eventuality? Admittedly, in first world nations with modern infrastructure, this is a less likely scenario. In third world nations with smaller carriers/ISPs and/or outdated infrastructure, it is more common. However, times are changing. The plethora of IoT devices being deployed throughout the world makes this scenario more likely. While there is no silver bullet, there are several strategies to help mitigate this risk.

[You may also like: Disaster Recovery: Data Center or Host Infrastructure Reroute]

Is Border Gateway Protocol the Right Solution?

Most companies that consider a secondary provider for internet services have been setting up Border Gateway Protocol (BGP) as the service mechanism. While this can work, it may not be the right choice. BGP is a rigid protocol that takes a reasonable skill level to configure and maintain. It can introduce complexity and idiosyncrasies that cause their own problems, and it tends to be an either-or protocol: you cannot set all traffic to take the best route at all times. It works on thresholds and is not considered a load balancing protocol. All traffic configured to move along a certain route will move that way until certain thresholds are met, and will only switch back once those thresholds/parameters change again. BGP can also introduce its own problems, including flapping, table size limitations, or cost overruns when it is used to eliminate pay-per-usage links.

Any solution in this space needs to solve both the technical and economic issues associated with link availability. The technical issues break into two parts: people and technology. In other words, make it easy to use and configure; make it work for multiple use cases, both inbound and outbound; and, if possible, eliminate the risk factors associated with rigid solutions, such as link flapping and the downtime caused by re-convergence. The second problem is economic. Allow people to leverage their investments fully: if they pay for bandwidth, they should be able to use it. Both links should be active (and load balanced if the customer wants). A common problem with BGP is that one link is fully leveraged, and therefore hits its maximum threshold, while the other link sits idle due to lack of flow control or load balancing.
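As a rough illustration of that flow-control idea (and not of any vendor's implementation), the sketch below probes two links and weights new outbound sessions toward the healthier one so that both links stay active; the probe URLs and weights are placeholders.

```python
# Minimal sketch: keep both ISP links active and steer new outbound sessions
# toward the healthier/faster link, instead of BGP-style all-or-nothing
# failover. Probe targets and base weights are placeholders.
import random
import time
import requests

LINKS = {
    "isp_a": {"probe": "http://probe.isp-a.example.net/ping", "weight": 1.0},
    "isp_b": {"probe": "http://probe.isp-b.example.net/ping", "weight": 1.0},
}

def latency_ms(link):
    """Return probe latency in milliseconds, or None if the link looks down."""
    start = time.monotonic()
    try:
        requests.get(LINKS[link]["probe"], timeout=1)
        return (time.monotonic() - start) * 1000
    except requests.RequestException:
        return None

def choose_link():
    measured = {name: latency_ms(name) for name in LINKS}
    healthy = {n: l for n, l in measured.items() if l is not None}
    if not healthy:
        raise RuntimeError("both links appear down")
    # Weight inversely to latency: both links carry traffic,
    # but the faster one carries proportionally more.
    names = list(healthy)
    weights = [LINKS[n]["weight"] / max(healthy[n], 1.0) for n in names]
    return random.choices(names, weights=weights, k=1)[0]

print("next session egresses via", choose_link())
```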

For several years, organizations have looked for alternatives. Link load balancing and VXLAN solutions have both been popular alternatives, especially for branch edge redundancy. Most of these solutions have limitations with inbound network load balancing, which has curtailed adoption: in many data centers, especially cloud deployments, the usual flow of traffic is initiated by users outside the network, and most link load balancing and VXLAN solutions are only very good at load balancing outbound traffic. The appeal of the technology has been two-fold: the ability to reduce cost with WAN/internet providers and the ability to reduce complexity.

The reduction in cost is focused on two main areas:

  • The ability to use less costly (and traditionally less reliable) bandwidth, because stability is compensated for by dynamically load balancing the links
  • The ability to use what we are paying for and buy only the required bandwidth

The reduction in complexity comes from the ease in configuration and simplicity of being able to buy link redundancy solutions as a service.

The unique value of this solution is that you can protect yourself from upstream service outages or upstream burst attacks that trip thresholds in your environment and cause the BGP environment to transition back and forth as failover parameters are met, essentially causing port flapping. The carrier may not experience an outage, but if someone can insert enough latency into the link on a regular basis it could cause a continual outage. Purpose-built link protection and load balancing solutions not only serve an economic purpose but also protect your organization from upstream cyberattacks.

Read “Flexibility Is The Name of the Game” to learn more.


Application Delivery, Cloud Computing

Digital Transformation – Take Advantage of Application Delivery in Your Journey

October 31, 2018 — by Prakash Sinha


The adoption of new technologies is accelerating business transformation. In its essence, the digital transformation of businesses uses technologies to drive significant improvement in process effectiveness.

Cloud computing is one of the core technologies for Digital Transformation

Increasing maturity of cloud-based infrastructure enables organizations to deploy business-critical applications in public and private cloud. According to a new forecast from the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, total spending on IT infrastructure for deployment in cloud environments is expected to total $46.5 billion in 2017 with year-over-year growth of 20.9%. Public cloud data centers will account for the majority of this spending, 65.3%, growing at the fastest annual rate of 26.2%.

Many enterprises are in the midst of this transition to the cloud, whether moving to a public cloud, building their own private cloud or managing a hybrid deployment. In this fluid environment, where new services are being frequently added and old ones updated, the new paradigm requires support for needs across multiple environments and across many constituencies – an IT administrator, an application developer, DevOps and tenants.

[You might also like: Optimizing Multi-Cloud, Cross-DC Web Apps and Sites]

Nobody Said It Was Easy!

However, migrating applications to the cloud is not easy. The flexibility and cost benefits that drive the shift to the cloud also present many challenges: security, business continuity, application availability, latency reduction, visibility, SLA guarantees and isolation of resources. Other aspects that require thought include licensing, lock-in with a cloud service provider, architectures for hybrid deployment, shadow IT, automation, user access, user privacy, and compliance needs.

One of the main challenges for enterprises moving to a cloud infrastructure is how to guarantee a consistent quality of experience to consumers across multiple applications, many of which are business critical, developed using legacy technologies and still hosted on premise.

Along with the quality of experience, organizations need to look at the security policies. Sometimes policies require integration with a cloud service provider’s infrastructure or require new capabilities to complement on-premises architecture while addressing denial of service, application security and compliance for new attack surface exposed by applications in the cloud.

Convenience and productivity improvements are often the initial drivers for adopting IT services in the cloud. One way to address security and availability concerns for the enterprise embarking on the cloud journey is to ensure that the security and availability are also included as part of IT self-service, orchestration and automation systems, without requiring additional effort from those driving adoptions of cloud-based IT applications.

The World of Application Delivery Has Changed to Adapt!

Application delivery and load balancing technologies have been the strategic components providing availability, optimization, security and latency reduction for applications. In order to enable seamless migration of business-critical applications to the cloud, the same load balancing and application delivery infrastructure has to evolve to address the needs of continuous delivery/integration, hybrid and multi-cloud deployments.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Acceleration & Optimization, Application Delivery, Security

DevSecOps Automation? The Roadmap

October 18, 2018 — by Benjamin Maze


In my previous blog post, I addressed the need and the process of creating applications faster and building an adaptive infrastructure that suits my real consumption. Today I will highlight how automation can help guarantee the security level, deliver an adaptive infrastructure, and manage traffic irregularities.

How Can I Guarantee My Security Level?

By using automation, we can also guarantee a level of security on any new application by automatically deploying security rules when a new app is published. There is no risk of human error or of forgetting something; when a new app is deployed, the security is attached automatically. This is very powerful, but it needs to be very “industrial”: exceptions are not the friend of automation, so it is very important to standardize applications for use with automation.

IoT is a prime source of DDoS threats because devices and apps are provisioned very fast, with little to no attention to security. A lot of botnets target IoT to gain access to many devices. There are several apps and vulnerabilities that hackers can exploit to gain access to these devices and build a very large botnet.

Radware can provide automated security services for anti-DDoS and WAF protection on top of ADC services (load balancing, SSL offload, reverse proxy, L7 modification, etc.)

How Can I Have an Adaptive Infrastructure?

With Google Kubernetes, it is very easy to add more containers (or pods) to an application in order to handle more client connections. Kubernetes has its own load balancing mechanisms to share the load between several containers. However, this service is very limited and does not give access to all the features that we need on a reverse proxy to expose the application to the rest of the world (NAT, SSL offload, L7 load balancing, etc.).

By using an intermediate orchestrator for L4-L7 services such as load balancing, DDoS protection and WAF, acting as an abstraction layer, the orchestrator can be notified of any change from Kubernetes and trigger an automation workflow to update the infrastructure accordingly:

  • Modify/create/scale up/scale down an ADC service to expose the app outside with full capabilities, including ADC (SSL, NAT, L7 modification, L7 load balancing, persistence, cache, TCP optimization)
  • Modify/create/scale up/scale down DDoS or WAF services to protect this new exposed application

How Can I Manage Exceptional Events That Temporarily Increase My Traffic?

Consider the example of a VOD service: it will be used differently depending on the time of day. It will experience huge peaks of traffic in the evening, when people are watching their TVs, but during the day traffic will dramatically decrease as most people are at work.

If you scale your application and infrastructure to manage your evening traffic peak, it will cost a lot, and that compute will sit unused during the day; this is not optimized.

With automation, we can do something smarter by provisioning compute resources in line with real needs. That means my application will run on a few servers during the day and on many servers during the evening. If I use the public cloud to host my application, I will pay only for what I consume, rather than for computing power during the day that I don't use.
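A minimal sketch of that idea, using the Kubernetes Python client with illustrative names and replica counts, could scale the VOD front end by time of day; a real deployment would more likely let an autoscaler or the intermediate orchestrator discussed below drive this.

```python
# Minimal sketch: scale a VOD front end up for the evening peak and back down
# during the day. Deployment name, namespace and replica counts are
# illustrative; an autoscaler or orchestrator would normally drive this.
from datetime import datetime
from kubernetes import client, config

def scale_for_time_of_day(name="vod-frontend", namespace="prod"):
    config.load_kube_config()
    apps = client.AppsV1Api()
    hour = datetime.now().hour
    replicas = 12 if 18 <= hour <= 23 else 3    # evening peak vs. daytime
    apps.patch_namespaced_deployment_scale(
        name, namespace, {"spec": {"replicas": replicas}})
    return replicas

if __name__ == "__main__":
    print("replicas set to", scale_for_time_of_day())
```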

Again, this agility should exist at the application layer but also at the infrastructure layer. My ADC, anti-DDoS or WAF services should not be statically sized for my evening peak traffic but should adapt to my real load.

Using an intermediate automation orchestrator can provide an intelligent workflow to follow this trend. In the evening, it can automatically provision new ADC, DDoS, or WAF services on new hosts to provide more computing power and handle a lot of client requests, then de-provision them when they are no longer needed.

It is important to also have a flexible license model with a license server that dynamically dispatches the license to the ADC, WAF, or DDoS services.

Conclusion

With an intermediate orchestrator, Radware technologies can be used in complex SDDC environments. It provides an abstraction layer based on a workflow that simplifies integration with external tools like Ansible, Cisco ACI, Juniper Contrail, OpenStack, and Google Kubernetes.

vDirect exposes a REST API that is used to trigger a workflow. For example, a workflow can “manage virtual service” with 3 actions:

  • Create a new virtual service (real server, server group, load balancing algorithm, health check, DDoS, WAF, etc.)
  • Modify an existing virtual service (add a real server, change DDoS rules, change load balancing algorithms, etc.)
  • Delete an existing virtual service (delete the ADC, DDoS, and WAF configuration)

From an external orchestrator, integration is very simple: a single REST call to the “manage virtual service” workflow, with all necessary parameters, lets vDirect handle all the automation on Radware devices such as the ADC, anti-DDoS, and WAF.
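The exact endpoint path and payload fields depend on the vDirect version and the workflow definition, so the sketch below only illustrates the single-call pattern described above, with hypothetical values rather than the actual vDirect API.

```python
# Minimal sketch of the single-REST-call pattern: an external orchestrator
# triggers a "manage virtual service" workflow with its parameters.
# The URL path, credentials and payload fields are hypothetical and will
# differ in a real vDirect deployment.
import requests

WORKFLOW_URL = ("https://vdirect.example.com:2189"
                "/api/workflow/manage_virtual_service/action/create")

payload = {
    "virtual_service": {"name": "shop-frontend", "vip": "10.0.0.50", "port": 443},
    "servers": ["10.0.1.11:8443", "10.0.1.12:8443"],
    "lb_algorithm": "least_connections",
    "health_check": "https",
    "waf_profile": "default-owasp",
    "ddos_profile": "baseline",
}

resp = requests.post(WORKFLOW_URL, json=payload,
                     auth=("orchestrator", "change-me"), timeout=30)
resp.raise_for_status()
print("workflow accepted:", resp.status_code)
```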

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Acceleration & Optimization, Application Virtualization, Security

DevOps: Application Automation? The Inescapable Path

October 17, 2018 — by Benjamin Maze


The world is changing. IoT is becoming more prevalent, and applications hold a prominent place in this new world. IT infrastructure carries a huge cost, and we need to find a way to optimize it.

  • How can I create apps faster?
  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Automation is the answer.

How Can I Create Apps Faster?

First, we need to understand the concepts below from the cloud world:

In the world of application development, developers have several tools that they can use to accelerate the development process. We all know server virtualization has been a good tool that allows us to quickly create new infrastructure to support new applications. This is the infrastructure-as-a-service level in the diagram above. But this virtualization is not fast enough: we need to provision a new OS for each virtual server, which takes a long time, and it is difficult to manage the large number of OS instances in the datacenter.

With the arrival of containers (like Docker), you get virtualization while keeping the same operating system. This is the platform-as-a-service level in the diagram above. As developers, we do not need to manage the OS, so new services can be created and removed very quickly.
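As a small illustration of that speed, the sketch below uses the Docker SDK for Python to start and dispose of a containerized web service in seconds, without provisioning a new OS; the image name, container name and port mapping are arbitrary.

```python
# Minimal sketch: create and dispose of a containerized service in seconds,
# sharing the host OS instead of provisioning a new one per VM.
# Image name, container name and port mapping are arbitrary.
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:alpine", detach=True, ports={"80/tcp": 8080}, name="demo-web")
print("started", container.short_id)

# ...run tests or serve traffic...

container.stop()
container.remove()      # tearing the service down is just as fast
```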

One application can run on several containers that need to talk to each other. Platforms like Google Kubernetes are used to orchestrate these containers, so you can build an application running on several containers that is completely automated. Kubernetes also introduces the ability to scale an application in or out in real time based on the traffic load. That means we can imagine a VOD service like Netflix running more or fewer containers depending on the time of day, so the application uses less computing power when there are fewer viewers, which has a direct impact on its cost.

We now understand why it is important to use automation at the application level, but an application does not exist only at the application level. When we publish our apps and make them available to external clients, traffic must travel through many devices, such as switches, routers, firewalls, and load balancers, in order for the application to function. These devices have to be configured for the application at the network level. Historically, those elements are still very manual, not automated, which results in slow exposure of new applications/services because we need human intervention on those devices to build the configuration.

In the DevOps/SecOps domain, we try to automate these network elements. Basically, we need a fully automated system that takes care of changes/additions/deletions at the application level and automatically provisions the configuration on network elements to support the application.

Software-Defined-Data-Center

That is what we call a Software-Defined Data Center (SDDC), which introduces a kind of “intelligence” in the infrastructure. In this way, it is possible to have a dynamic infrastructure that follows requests from the application layer down to the infrastructure layer:

  • Automation of application layer based on service virtualization (container)
  • Scale in / scale-out mechanism to provision / de-provision compute according to the exact needs
  • Expose an application automatically to the customer
  • Provision all network/security configuration that is required (switch, router, load balancer, reverse proxy, DDoS, etc.)

An intermediate orchestrator, acting as an abstraction layer, can be a very strong tool to integrate into this kind of SDDC infrastructure, with:

  • Auto-provisioning of ADC services (Alteon VA or vADC on physical Alteon)
  • Auto-provisioning of configuration triggered by an external event (new apps in Kubernetes, for example)
  • Dynamic scale in / scale out
  • Auto-provisioning of security services (DDoS, WAF)

In the next article, I will continue to answer the following questions using automation:

  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Acceleration & Optimization, Application Delivery

Optimizing Multi-Cloud, Cross-DC Web Apps and Sites

September 27, 2018 — by Prakash Sinha


If you are working from your organization’s office, the chances are good that you are enjoying the responsiveness of your corporate LAN, thereby guaranteeing speedy load times for websites and applications.

Yahoo! found that making pages just 400 milliseconds faster resulted in a 9% increase in traffic. The faster site also doubled the number of sessions from search engine marketing and cut the number of required servers in half.

Don’t Fly Blind – Did Someone Say Waterfall?

Waterfall charts let you visualize cumulative data sequentially across a process. Performance waterfalls for webpages, like the one shown below generated using webpagetest.org, let you see the series of actions that occur between a user and your application in order for that user to view a specific page of your site.

The webpagetest.org waterfall chart below shows the connections view, with a breakdown of DNS lookup, TCP connection establishment, Time to First Byte (TTFB), rendering time and document complete.
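If you want the same kind of breakdown from a script, here is a minimal sketch using pycurl, whose cumulative timing counters map onto the phases of a waterfall row; the URL is a placeholder.

```python
# Minimal sketch: reproduce one row of a waterfall chart for a single URL using
# pycurl's cumulative timing counters (each value is measured from the start of
# the request). The URL is a placeholder.
from io import BytesIO
import pycurl

def waterfall_row(url="https://www.example.com/"):
    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEDATA, buf)
    c.setopt(pycurl.FOLLOWLOCATION, True)
    c.perform()
    row = {
        "dns_done_ms":  c.getinfo(pycurl.NAMELOOKUP_TIME) * 1000,
        "tcp_done_ms":  c.getinfo(pycurl.CONNECT_TIME) * 1000,
        "tls_done_ms":  c.getinfo(pycurl.APPCONNECT_TIME) * 1000,
        "ttfb_ms":      c.getinfo(pycurl.STARTTRANSFER_TIME) * 1000,
        "total_ms":     c.getinfo(pycurl.TOTAL_TIME) * 1000,
        "bytes":        len(buf.getvalue()),
    }
    c.close()
    return row

print(waterfall_row())
```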

[You might also like: Considerations for Load Balancers When Migrating Applications to the Cloud]

Optimizing Web-Facing Apps That Span Cloud and/or Data Center Boundaries

The performance of a website correlates directly to that website's success. The speed with which a web page renders in a user's browser affects every conceivable business metric, such as page views, bounce rate, conversions, customer satisfaction, return visits, and of course revenue.

Latency, payload, caching and rendering are the key measures when evaluating website performance. Each round trip is subject to the connection latency, and the time from when the user requests a webpage to when its resources finish downloading in the browser is directly related to the weight of the page and its resources. The larger the total content size, the more time it will take to download everything needed for a page to become functional for the user.

Using caching and sensible caching headers may reduce latency, since less content is downloaded and fewer round trips are needed to fetch resources, although some round trips may still be required to validate that cached content is not stale.

Browsers need to render the HTML page and the resources served to them. Client-side work may cause poor rendering in the browser and a degraded user experience; for example, blocking calls (say, third-party ads) or improper rendering of page resources can delay page load time and hurt the user experience.

The low-hanging fruit for optimization is easy and obvious: reduce the number of connection setups by using keep-alive and pipelining. Another easy fix is to compress objects to reduce the payload delivered to the browser, and to use caching to manage static objects and pre-fetch data where possible. A content delivery network (CDN) may serve static content closer to users to reduce latency. More involved and advanced optimizations include consolidating resources fetched from the server, compressing images based on the type of device, the speed of the connection and the location of the user, and reducing the size of requested objects through content minification. Additional techniques, such as delaying ads until the page has become usable, may improve the perceived performance of web pages and applications.
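As a small illustration of two of these easy fixes at the application tier, the Flask sketch below adds long-lived caching headers for static objects and gzip compression for larger text responses; the path prefix, size threshold and max-age are arbitrary choices.

```python
# Minimal sketch: two easy optimizations at the application tier - long-lived
# caching headers for static objects and gzip compression for larger text
# responses. Path prefix, size threshold and max-age are arbitrary.
import gzip
from flask import Flask, request

app = Flask(__name__)

@app.after_request
def optimize(response):
    # Cache static objects aggressively; let dynamic pages revalidate.
    if request.path.startswith("/static/"):
        response.headers["Cache-Control"] = "public, max-age=86400"
    else:
        response.headers["Cache-Control"] = "no-cache"
    # Compress sizeable text payloads when the client accepts gzip.
    if ("gzip" in request.headers.get("Accept-Encoding", "")
            and not response.direct_passthrough
            and response.content_length and response.content_length > 1024
            and response.mimetype.startswith("text/")):
        response.set_data(gzip.compress(response.get_data()))
        response.headers["Content-Encoding"] = "gzip"
        response.headers["Vary"] = "Accept-Encoding"
    return response
```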

Read “Just Group Achieves Web Page Acceleration” to learn more.


Application Delivery, Security, SSL

Adopt TLS 1.3 – Kill Two Birds with One Stone

September 13, 2018 — by Prakash Sinha


Transport Layer Security (TLS) version 1.3 provides significant business benefits by making applications more secure, improving performance and reducing latency for the client. Changes to the design of the client-server handshake decrease site latency, and the use of Elliptic Curve (EC) based ciphers allows faster page load times. TLS 1.3 also enforces forward secrecy, so that recorded traffic cannot be decrypted if private session keys are later compromised.

Transport Level Security – A Quick Recap

Transport Layer Security (TLS) version 1.0, introduced in 1999, was the first standardized version of SSL and is based on SSL v3.0. TLS 1.0 is obsolete and vulnerable to various security issues, such as downgrade attacks. The Payment Card Industry (PCI) set a deadline of June 30, 2018 to migrate to TLS 1.1 or higher.

TLS 1.1, introduced in 2006, is more secure than TLS 1.0 and protects against certain types of Cipher Block Chaining (CBC) attacks such as BEAST, although some TLS 1.1 implementations are vulnerable to POODLE, a form of downgrade attack. TLS 1.1 also removed vulnerable, broken ciphers such as DES and RC2, and introduced support for forward secrecy, although it is performance intensive.

TLS 1.2, introduced in 2008, added SHA256 as a hash algorithm and replaced SHA-1, which is considered insecure. It also added support for Advanced Encryption Standard (AES) cipher suites, Elliptic Curve Cryptography (ECC), and Perfect Forward Secrecy (PFS) without a significant performance hit. TLS 1.2 also removed the ability to downgrade to SSL v2.0 (highly insecure and broken).

Why TLS 1.3?

TLS 1.3 is now an approved standard of the Internet Engineering Task Force (IETF). Sites utilizing TLS 1.3 can expect faster user connections than with earlier TLS standards, while the connections are made more secure by the elimination of obsolete and less secure ciphers, the server dictating the session security, and faster establishment of the handshake between client and server. TLS 1.3 streamlines the negotiation over which encryption to use: the client and server exchange key material in their initial messages, and the connection is established in a single round trip. However, if needed, TLS 1.3 provides a secure means to fall back to TLS 1.2 when TLS 1.3 is not supported by the endpoint.
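For example, a client built on Python's ssl module can prefer TLS 1.3 while still allowing a fallback to TLS 1.2 for endpoints that do not yet support it (Python 3.7+ with OpenSSL 1.1.1+); the hostname below is a placeholder.

```python
# Minimal sketch: prefer TLS 1.3 but allow fallback to TLS 1.2 for endpoints
# that do not support it yet. Requires Python 3.7+ built against OpenSSL 1.1.1+.
# The hostname is a placeholder.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2    # lowest version we accept
ctx.maximum_version = ssl.TLSVersion.TLSv1_3    # prefer 1.3 when available

host = "www.example.com"
with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("negotiated:", tls.version())      # e.g. 'TLSv1.3' or 'TLSv1.2'
        print("cipher:", tls.cipher()[0])
```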

[You might also like: High-Performance Visibility into SSL/TLS Traffic]

TLS 1.3 – Recommendations

Organizations face serious strategic challenges in achieving SSL/TLS acceleration and effectively addressing the growing number and complexity of encrypted web attacks. We recommend migrating to TLS 1.3 to take advantage of the significant business benefits and security that the newer standard provides. However, as with any transition to a new standard, be mindful of the adoption risks.

Evaluate the Risks and Plan Migration

The risks include incompatibility between client and server due to poor implementations and bugs. You may also need to carefully evaluate the impact on devices that inspect traffic based on static RSA keys, products that protect against data leaks, and products that implement out-of-path web application protection based on a copy of decrypted traffic.

  • Adopt a gradual deployment of TLS 1.3 – A crawl-walk-run approach of deploying in QA environments, test sites, and low traffic sites
  • Evaluate or query the “middle box” vendors for compatibility with TLS 1.3; currently, only active TLS 1.3 terminators can provide compatibility
  • Utilize Application Delivery Controllers (ADCs) to terminate TLS 1.3 and front-end servers that are not capable of supporting TLS 1.3

TLS 1.3 provides improved security, forward secrecy to protect data even if private keys are later compromised, improved latency and better performance.

Read “2017-2018 Global Application & Network Security Report” to learn more.


Application Delivery, Application Security, Security

DDoS Protection is the Foundation for Application, Site and Data Availability

September 11, 2018 — by Daniel Lakier


When we think of DDoS protection, we often think about how to keep our website up and running. While searching for a security solution, you’ll find several options that are similar on the surface. The main difference is whether your organization requires a cloud, on-premise or hybrid solution that combines the best of both worlds. Finding a DDoS mitigation/protection solution seems simple, but there are several things to consider.

[You might also like: Should Business Risk Mitigation Be A Factor When We Choose Our Suppliers and Manufacturers?]

It’s important to remember that DDoS attacks don’t just cause a website to go down. While the majority do cause a service disruption, 90 percent of the time that does not mean the website is completely unavailable; rather, there is performance degradation. As a result, organizations need to search for a DDoS solution that can optimize application performance and protect from DDoS attacks. The two functions are natural bedfellows.

The other thing we often forget is that most traditional DDoS solutions, whether they are on-premise or in the cloud, cannot protect us from an upstream event or a downstream event.

  1. If your carrier is hit with a DDoS attack upstream, your link may be fine but your ability to do anything would be limited. You would not receive any traffic from that pipe.
  2. If your infrastructure provider goes down due to a DDoS attack on its key infrastructure, your organization’s website will go down regardless of how well your DDoS solution is working.

Many DDoS providers will tell you these are not part of a DDoS strategy. I beg to differ.

Finding the Right DDoS Solution

DDoS protection was born out of the need to improve availability and guarantee performance. Today, this is critical. We have become an application-driven world where digital interactions dominate. A bad experience using an app is worse for customer satisfaction and loyalty than an outage. Most companies are moving into shared infrastructure environments, otherwise known as the “cloud”, where the performance of the underlying infrastructure is no longer controlled by the end user.

Keeping the aforementioned points in mind, here are three key features to consider when looking at modern enterprise DDoS solutions:

  1. Data center or host infrastructure rerouting capabilities give organizations the ability to reroute traffic to secondary data centers or application servers if there is a performance problem caused by something that the traditional DDoS prevention solution cannot negate. This may or may not be caused by a traditional DDoS attack, but either way, it’s important to understand how to mitigate the risk from a denial of service caused by infrastructure failure.
  2. Simple-to-use link or host availability solutions offer a unified interface for conducting WAN failover in the event that the upstream provider is compromised. Companies can use BGP, but BGP is complex and rigid. The future needs to be simple and flexible.
  3. Infrastructure and application performance optimization is critical. If we can limit the amount of compute-per-application transactions, we can reduce the likelihood that a capacity problem with the underlying architecture can cause an outage. Instead of thinking about just avoiding performance degradation, what if we actually improve the performance SLA while also limiting risk? It’s similar to making the decision to invest your money as opposed to burying it in the ground.

[You might also like: Marrying the Business Need with the Technology Drive: Recapping It All]

Today you can look at buying separate products to accomplish these needs, but you are then left with an age-old problem: a disparate collection of poorly integrated best-of-breed solutions that don’t work well together.

These products should work together as part of a holistic solution where each solution can compensate and enhance the performance of the other and ultimately help improve and ensure application availability, performance and reliability. The goal should be to create a resilient architecture to prevent or limit the impact of DoS and DDoS attacks of any kind.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.
