

DevSecOps Automation? The Roadmap

October 18, 2018 — by Benjamin Maze


In my previous blog post, I addressed the need for and the process of creating applications faster. Today I will highlight how automation can help guarantee the security level, build an adaptive infrastructure that suits real consumption, and manage exceptional events that temporarily increase traffic.

How Can I Guarantee My Security Level?

By using automation, we can also guarantee a level of security on any new application by automatically deploying security rules when a new app is published. There is no risk of human error or of forgetting something: when a new app is deployed, the security is attached automatically. This is very powerful, but it needs to be very “industrial.” Exceptions are not the friend of automation, so it is very important to standardize applications for use with automation.

IoT is the leading source of DDoS attacks because devices are provisioned very quickly but with nothing done at the security level. Many botnets target IoT to gain access to large numbers of devices: there are several apps and vulnerabilities that hackers can exploit to take control of these devices and build very large botnets.

Radware can provide automated security services for anti-DDoS and WAF protection on top of ADC services (load balancing, SSL offload, reverse proxy, L7 modification, etc.).

How Can I Have an Adaptive Infrastructure?

With Google Kubernetes, it is very easy to add more containers (or pods) to an application in order to handle more client connections. Kubernetes has its own load balancing mechanism to share the load between several containers. However, this service is very limited and cannot give access to all the features we need on a reverse proxy to expose the application to the rest of the world (NAT, SSL offload, L7 load balancing, etc.).

By using an intermediate orchestrator for L4-L7 services such as load balancing, DDoS protection, and WAF – acting as an abstraction layer – the orchestrator can be notified of any change in Kubernetes and trigger an automation workflow to update the infrastructure accordingly (a minimal sketch of this pattern follows the list below):

  • Modify/create/scale up/scale down an ADC service to expose the app externally with full ADC capabilities (SSL, NAT, L7 modification, L7 load balancing, persistence, caching, TCP optimization)
  • Modify/create/scale up/scale down DDoS or WAF services to protect this new exposed application
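To make this concrete, here is a minimal sketch of that pattern in Python: a watcher streams Kubernetes Service events and forwards them to the orchestrator’s REST API, which then drives the ADC/WAF/DDoS changes. The orchestrator URL and payload schema are assumptions for illustration; only the Kubernetes client calls are standard.

```python
# A minimal sketch: forward Kubernetes Service events to an L4-L7 orchestrator.
# ORCHESTRATOR_URL and the payload schema are hypothetical.
import requests
from kubernetes import client, config, watch

ORCHESTRATOR_URL = "https://orchestrator.example.com/api/services"  # hypothetical

def main():
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    # Stream Service add/modify/delete events and notify the abstraction layer,
    # which updates ADC, WAF, and DDoS configuration accordingly.
    for event in watch.Watch().stream(v1.list_service_for_all_namespaces):
        svc = event["object"]
        payload = {
            "action": event["type"],  # ADDED / MODIFIED / DELETED
            "name": svc.metadata.name,
            "namespace": svc.metadata.namespace,
            "ports": [p.port for p in (svc.spec.ports or [])],
        }
        requests.post(ORCHESTRATOR_URL, json=payload, timeout=10)

if __name__ == "__main__":
    main()
```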

How Can I Manage Exceptional Events That Temporarily Increase My Traffic?

Consider the example of a VOD service: we understand that this service will be used differently depending on the time of day. It will experience huge peaks of traffic in the evening when people are watching their TVs, but during the day the traffic will dramatically decrease as most people are at work.

If you scale your application and infrastructure to manage your evening peak of traffic, it will cost a lot, and that compute will sit unused during the day; this is not optimized.

With automation, we can do something smarter by provisioning compute resources in line with real needs. That means my application will run on a few servers during the day and on many servers during the evening. If I use the public cloud to host my application, I will pay only for my actual consumption and will not pay for computing power that I don’t use during the day.

Again, this agility is needed not only at the application layer but also at the infrastructure layer. My ADC, anti-DDoS, or WAF services should not be statically sized for my evening peak traffic but should adapt to my real load.

Using an intermediate automation orchestrator can provide an intelligent workflow to follow this trend. In the evening, it can automatically provision new ADC, DDoS, or WAF services on new hosts to provide more computing power and handle the surge of client requests, then de-provision them when they are no longer needed.

It is also important to have a flexible license model with a license server that dynamically dispatches licenses to the ADC, WAF, or DDoS services.
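As a sketch of how these two pieces fit together, the following snippet scales ADC capacity on a time-of-day schedule and checks licenses in and out of the license server as it goes. The orchestrator and license API endpoints are hypothetical; a real deployment would use the vendor’s actual API.

```python
# A minimal sketch of time-driven scaling with dynamic license dispatch.
# The /adc and /licenses endpoints below are hypothetical.
import datetime
import requests

ORCHESTRATOR = "https://orchestrator.example.com/api"  # hypothetical

EVENING_INSTANCES = 8   # sized for the evening VOD peak
DAYTIME_INSTANCES = 2   # sized for daytime traffic

def desired_instances(now: datetime.datetime) -> int:
    # Naive schedule: peak between 18:00 and 23:00.
    return EVENING_INSTANCES if 18 <= now.hour < 23 else DAYTIME_INSTANCES

def reconcile():
    target = desired_instances(datetime.datetime.now())
    current = requests.get(f"{ORCHESTRATOR}/adc/instances", timeout=10).json()
    delta = target - len(current)
    if delta > 0:
        # Scale out: check out licenses first, then provision ADC services.
        requests.post(f"{ORCHESTRATOR}/licenses/checkout", json={"count": delta}, timeout=10)
        requests.post(f"{ORCHESTRATOR}/adc/scale", json={"instances": target}, timeout=10)
    elif delta < 0:
        # Scale in: de-provision, then return licenses to the license server.
        requests.post(f"{ORCHESTRATOR}/adc/scale", json={"instances": target}, timeout=10)
        requests.post(f"{ORCHESTRATOR}/licenses/return", json={"count": -delta}, timeout=10)

if __name__ == "__main__":
    reconcile()  # typically run periodically, e.g. from cron
```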

Conclusion

With an intermediate orchestrator, Radware technologies can be used in complex SDDC environments. It provides an abstraction layer, based on workflows, that simplifies integration with external tools like Ansible, Cisco ACI, Juniper Contrail, OpenStack, and Google Kubernetes.

vDirect exposes a REST API that is used to trigger workflows. For example, a workflow can “manage virtual service” with three actions:

  • Create a new virtual service (real server, server group, load balancing algorithm, health check, DDoS, WAF, etc.)
  • Modify an existing virtual service (add a real server, change DDoS rules, change load balancing algorithms, etc.)
  • Delete an existing virtual service (delete the ADC, DDoS, and WAF configuration)

From an external orchestrator, integration is very simple: a single REST call to the “manage virtual service” workflow, with all the necessary parameters, lets vDirect perform all the automation on Radware devices such as ADC, anti-DDoS, and WAF.
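For illustration, a call like the following could trigger such a workflow. This is a hedged sketch: the endpoint path, port, credentials, and parameter names are placeholders, and the exact contract should be taken from the vDirect REST API documentation.

```python
# A minimal sketch of triggering a vDirect workflow from an external
# orchestrator. Endpoint path and parameter names are illustrative only.
import requests

VDIRECT = "https://vdirect.example.com:2189/api"  # hypothetical address

def create_virtual_service():
    params = {
        "action": "create",
        "virtualServiceName": "shop-frontend",       # hypothetical service
        "realServers": ["10.0.0.11", "10.0.0.12"],
        "lbAlgorithm": "round-robin",
        "healthCheck": "http",
        "ddosProfile": "default",
        "wafPolicy": "strict",
    }
    # One REST call on the "manage virtual service" workflow drives all the
    # underlying configuration on ADC, anti-DDoS, and WAF devices.
    resp = requests.post(
        f"{VDIRECT}/workflow/manage-virtual-service/run",
        json=params,
        auth=("vdirect_user", "secret"),  # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    create_virtual_service()
```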

Read “Radware’s 2018 Web Application Security Report” to learn more.



DevOps: Application Automation? The Inescapable Path

October 17, 2018 — by Benjamin Maze


The world is changing. IoT is becoming more prevalent, and applications hold a prominent place in this new world. IT infrastructure carries a huge cost, and we need to find a way to optimize it.

  • How can I create apps faster?
  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Automation is the answer.

How Can I Create Apps Faster?

First, we need to understand the concepts below from the cloud world:

In the world of application development, developers have several tools they can use to accelerate the development process. We all know server virtualization has been a good tool that allows us to quickly create new infrastructure to support new applications; this is the infrastructure-as-a-service (IaaS) layer. But this virtualization is not fast enough: we need to provision a new OS for each virtual server, which takes a long time, and it is difficult to manage the high number of OS instances in the datacenter.

With the arrival of containers (like Docker), you get virtualization while keeping the same operating system; this is the platform-as-a-service (PaaS) level. As developers, we do not need to manage the OS, so the creation and removal of new services can be done very quickly.

One application can run on several containers that need to talk to each other. Platforms like Google Kubernetes are used to orchestrate these containers, so you can build an application running on several containers that is completely automated. Kubernetes also introduces the ability to scale an application in or out in real time according to the traffic load. That means we can imagine a VOD service like Netflix running more or fewer containers depending on the time of day, so the application uses less computing power when there are fewer viewers, which has a direct impact on the cost of the application.
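As a minimal sketch of that idea, the snippet below scales a deployment with the Kubernetes Python client based on the hour of the day. The deployment name and replica counts are assumptions; in practice, a HorizontalPodAutoscaler reacting to live load would do this automatically.

```python
# A minimal sketch of scaling a deployment in or out, in the spirit of the
# VOD example above. Deployment name and replica counts are hypothetical.
import datetime
from kubernetes import client, config

def scale_vod_frontend():
    config.load_kube_config()
    apps = client.AppsV1Api()
    # More pods in the evening peak, fewer during the working day.
    replicas = 10 if 18 <= datetime.datetime.now().hour < 23 else 3
    apps.patch_namespaced_deployment_scale(
        name="vod-frontend",   # hypothetical deployment
        namespace="default",
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    scale_vod_frontend()
```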

We now understand why it is important to use automation at the application level, but an application does not exist only at the application level. When we publish our apps and make them available to external clients, traffic must travel through a lot of devices, such as switches, routers, firewalls, and load balancers, in order for the application to function. These devices have to be configured for this application so they know what to do at the network level. Historically, those elements are still managed manually rather than automated, which results in slow exposure of new applications/services because we need human intervention on those devices to build the configuration.

In the DevOps/SecOps domain, we try to bring automation to these network elements. Basically, we need a fully automated system that takes care of changes/additions/deletions at the application level and automatically provisions the configuration on network elements to support the application.

Software-Defined Data Center

That is what we call a Software-Defined Data Center (SDDC), which introduces a kind of “intelligence” into the infrastructure. In this way, it is possible to have a dynamic infrastructure that follows requests from the application layer down to the infrastructure layer:

  • Automation of the application layer based on service virtualization (containers)
  • Scale-in/scale-out mechanisms to provision/de-provision compute according to exact needs
  • Automatic exposure of an application to the customer
  • Provisioning of all required network/security configuration (switch, router, load balancer, reverse proxy, DDoS, etc.)

Using an intermediate orchestrator, acting as an abstraction layer, provides a very strong tool for integration into this kind of SDDC infrastructure, with:

  • Auto-provisioning of ADC services (Alteon VA or vADC on physical Alteon)
  • Auto-provisioning of configuration triggered by an external event (new apps in Kubernetes, for example)
  • Dynamic scale in / scale out
  • Auto-provisioning of security services (DDoS, WAF)

In the next article, I will continue to answer the following questions using automation:

  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Read “Radware’s 2018 Web Application Security Report” to learn more.



Optimizing Multi-Cloud, Cross-DC Web Apps and Sites

September 27, 2018 — by Prakash Sinha


If you are working from your organization’s office, the chances are good that you are enjoying the responsiveness of your corporate LAN, thereby guaranteeing speedy load times for websites and applications.

Yahoo! found that making pages just 400 milliseconds faster resulted in a 9% increase in traffic. The faster site also doubled the number of sessions from search engine marketing and cut the number of required servers in half.

Don’t Fly Blind – Did Someone Say Waterfall?

Waterfall charts let you visualize cumulative data sequentially across a process. Performance waterfalls for webpages, like the one shown below generated using webpagetest.org, let you see the series of actions that occur between a user and your application in order for that user to view a specific page of your site.

The webpagetest.org waterfall chart below shows the connections view, with a breakdown of DNS lookup, TCP connection establishment, Time to First Byte (TTFB), rendering time, and document complete.
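You can reproduce the same phase breakdown for a single request in code. Here is a minimal sketch using pycurl, which exposes per-phase timers matching the waterfall columns; the URL is illustrative.

```python
# A minimal sketch: per-phase timing breakdown for one request using pycurl.
import io
import pycurl

def timing_breakdown(url: str) -> dict:
    buf = io.BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEDATA, buf)
    c.setopt(pycurl.FOLLOWLOCATION, True)
    c.perform()
    timings = {
        "dns_lookup_s": c.getinfo(pycurl.NAMELOOKUP_TIME),
        "tcp_connect_s": c.getinfo(pycurl.CONNECT_TIME),
        "tls_handshake_s": c.getinfo(pycurl.APPCONNECT_TIME),
        "ttfb_s": c.getinfo(pycurl.STARTTRANSFER_TIME),
        "total_s": c.getinfo(pycurl.TOTAL_TIME),
    }
    c.close()
    return timings

if __name__ == "__main__":
    for phase, seconds in timing_breakdown("https://www.example.com/").items():
        print(f"{phase}: {seconds:.3f}")
```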

[You might also like: Considerations for Load Balancers When Migrating Applications to the Cloud]

Optimizing Web-Facing Apps That Span Cloud and/or Data Center Boundaries

The performance of a website correlates directly with that website’s success. The speed with which a web page renders in a user’s browser affects every conceivable business metric, such as page views, bounce rate, conversions, customer satisfaction, return visits, and of course revenue.

Latency, payload, caching, and rendering are the key measures when evaluating website performance. Each round trip is subject to the connection latency. The time from when a webpage is requested by the user to when the resources on that page are downloaded in the browser is directly related to the weight of the page and its resources: the larger the total content size, the more time it will take to download everything needed for the page to become functional for the user.

Using caching and default caching headers may reduce latency, since less content is downloaded and fewer round trips are needed to fetch resources, although some round trips may still be needed to validate that cached content is not stale.

Browsers need to render the HTML page and resources served to them. Client-side work may cause poor rendering in the browser and a degraded user experience. For example, blocking calls (say, third-party ads) or improper rendering of page resources can delay page load time and hurt the user experience.

The low-hanging fruit of optimization is easy and obvious: reduce the number of connection setups by using keep-alive and pipelining. Another easy fix is to compress objects to reduce the size of the payload received by the browser, and to use caching to manage static objects and pre-fetch data (if possible). A content delivery network (CDN) may serve static content closer to users to reduce latency. More involved and advanced optimizations include consolidating resources when fetching from the server; compressing images sent to the browser depending on the type of device, the speed of the connection, and the location of the user; and reducing the size of requested objects through content minification. Additional techniques, such as delaying ads until after the page has become usable, may improve the perceived performance of web pages and applications.
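As a small client-side sketch of the first two fixes, the snippet below reuses one connection across requests (keep-alive) and asks for compressed payloads; the URLs are illustrative.

```python
# A minimal sketch: keep-alive via a persistent session, plus compression.
import requests

session = requests.Session()  # reuses TCP/TLS connections across requests
session.headers.update({"Accept-Encoding": "gzip, deflate"})

# All three requests ride the same connection instead of paying the
# DNS + TCP + TLS setup cost each time; responses arrive compressed and
# are transparently decompressed by requests.
for path in ("/", "/styles.css", "/app.js"):
    resp = session.get(f"https://www.example.com{path}", timeout=10)
    print(path, resp.status_code, len(resp.content), "bytes")
```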

Read “Just Group Achieves Web Page Acceleration” to learn more.



Adopt TLS 1.3 – Kill Two Birds with One Stone

September 13, 2018 — by Prakash Sinha


Transport Layer Security (TLS) version 1.3 provides significant business benefits by making applications more secure, improving performance, and reducing latency for the client. Changes in how the handshake between client and server is designed have decreased site latency, and the use of Elliptic Curve (EC) based ciphers allows faster page load times. TLS 1.3 also enforces forward secrecy to prevent the replay of recorded data if private session keys are compromised.

Transport Layer Security – A Quick Recap

Transport Layer Security (TLS) version 1.0, introduced in 1999, was the first standardized version of SSL and is based on SSL v3.0. TLS 1.0 is obsolete and vulnerable to various security issues, such as downgrade attacks. The Payment Card Industry (PCI) set a deadline of June 30, 2018 to migrate to TLS 1.1 or higher.

TLS 1.1, introduced in 2006, is more secure than TLS 1.0 and protects against certain types of Cipher Block Chaining (CBC) attacks such as BEAST. Some TLS 1.1 implementations are vulnerable to POODLE, a form of downgrade attack. TLS 1.1 also removed certain ciphers, such as DES and RC2, which are vulnerable and broken, and introduced support for forward secrecy, although it is performance-intensive.

TLS 1.2, introduced in 2008, added SHA256 as a hash algorithm and replaced SHA-1, which is considered insecure. It also added support for Advanced Encryption Standard (AES) cipher suites, Elliptic Curve Cryptography (ECC), and Perfect Forward Secrecy (PFS) without a significant performance hit. TLS 1.2 also removed the ability to downgrade to SSL v2.0 (highly insecure and broken).

Why TLS 1.3?

TLS 1.3 is now an approved standard of the Internet Engineering Task Force (IETF). Sites utilizing TLS 1.3 can expect faster user connections than with earlier TLS standards, while the connections are made more secure by the elimination of obsolete and less secure ciphers, the server dictating the session security, and a faster handshake between client and server. TLS 1.3 eliminates the negotiation over which encryption to use: in the initial connection, the server provides an encryption key, the client provides a session key, and the connection is made. However, if needed, TLS 1.3 provides a secure means to fall back to TLS 1.2 when TLS 1.3 is not supported by the endpoint.
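As a minimal client-side sketch (assuming Python 3.7+ built against OpenSSL 1.1.1+, and an illustrative hostname), the snippet below enforces TLS 1.3 and reports the negotiated version. Relaxing minimum_version to TLSv1_2 would reproduce the standard’s secure fallback behavior.

```python
# A minimal sketch: enforce TLS 1.3 on the client and report the version.
import socket
import ssl

def check_tls13(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

if __name__ == "__main__":
    print(check_tls13("www.example.com"))
```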

[You might also like: High-Performance Visibility into SSL/TLS Traffic]

TLS 1.3 – Recommendations

Organizations face serious strategic challenges in achieving SSL/TLS acceleration while effectively addressing the growing number and complexity of encrypted web attacks. We recommend migrating to TLS 1.3 to take advantage of the significant business benefits and security that the newer standard provides. However, as with any transition to a new standard, be mindful of the adoption risks.

Evaluate the Risks and Plan Migration

The main risks are incompatibilities between client and server due to poor implementations and bugs. You may also need to carefully evaluate the impact on devices that perform inspection based on static RSA keys, and on products that protect against data leaks or implement out-of-path web application protection based on a copy of decrypted traffic.

  • Adopt a gradual deployment of TLS 1.3 – a crawl-walk-run approach of deploying in QA environments, test sites, and low-traffic sites
  • Evaluate or query “middle box” vendors for compatibility with TLS 1.3; currently, only active TLS 1.3 terminators can provide compatibility
  • Utilize Application Delivery Controllers (ADCs) to terminate TLS 1.3 in front of servers that are not capable of supporting TLS 1.3

TLS 1.3 provides improved security, forward secrecy to protect data even if private keys are compromised, improved latency, and better performance.

Read “2017-2018 Global Application & Network Security Report” to learn more.



DDoS Protection is the Foundation for Application, Site and Data Availability

September 11, 2018 — by Daniel Lakier


When we think of DDoS protection, we often think about how to keep our website up and running. While searching for a security solution, you’ll find several options that are similar on the surface. The main difference is whether your organization requires a cloud, on-premise or hybrid solution that combines the best of both worlds. Finding a DDoS mitigation/protection solution seems simple, but there are several things to consider.

[You might also like: Should Business Risk Mitigation Be A Factor When We Choose Our Suppliers and Manufacturers?]

It’s important to remember that DDoS attacks don’t just cause a website to go down. While the majority do cause a service disruption, 90 percent of the time the result is performance degradation rather than complete unavailability. As a result, organizations need to search for a DDoS solution that can optimize application performance and protect from DDoS attacks. The two functions are natural bedfellows.

The other thing we often forget is that most traditional DDoS solutions, whether they are on-premise or in the cloud, cannot protect us from an upstream event or a downstream event.

  1. If your carrier is hit with a DDoS attack upstream, your link may be fine but your ability to do anything would be limited. You would not receive any traffic from that pipe.
  2. If your infrastructure provider goes down due to a DDoS attack on its key infrastructure, your organization’s website will go down regardless of how well your DDoS solution is working.

Many DDoS providers will tell you these are not part of a DDoS strategy. I beg to differ.

Finding the Right DDoS Solution

DDoS protection was born out of the need to improve availability and guarantee performance.  Today, this is critical. We have become an application-driven world where digital interactions dominate. A bad experience using an app is worse for customer satisfaction and loyalty than an outage.  Most companies are moving into shared infrastructure environments—otherwise known as the “cloud”— where the performance of the underlying infrastructure is no longer controlled by the end user.

Keeping the aforementioned points in mind, here are three key features to consider when looking at modern enterprise DDoS solutions:

  1. Data center or host infrastructure rerouting capabilities give organizations the ability to reroute traffic to secondary data centers or application servers if there is a performance problem caused by something that the traditional DDoS prevention solution cannot negate (see the sketch after this list). This may or may not be caused by a traditional DDoS attack, but either way, it’s important to understand how to mitigate the risk of a denial of service caused by infrastructure failure.
  2. Simple-to-use link or host availability solutions offer a unified interface for conducting WAN failover in the event that the upstream provider is compromised. Companies can use BGP, but BGP is complex and rigid. The future needs to be simple and flexible.
  3. Infrastructure and application performance optimization is critical. If we can limit the compute consumed per application transaction, we can reduce the likelihood that a capacity problem with the underlying architecture causes an outage. Instead of thinking only about avoiding performance degradation, what if we actually improve the performance SLA while also limiting risk? It’s similar to making the decision to invest your money as opposed to burying it in the ground.
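As a minimal sketch of the first two capabilities, the snippet below probes a primary data center and reroutes to a secondary when the primary degrades. The health endpoints and the reroute hook are assumptions; in practice the reroute would update DNS/GSLB records or a BGP announcement.

```python
# A minimal sketch: health-probe-driven rerouting between data centers.
# PRIMARY, SECONDARY, and the reroute hook are hypothetical.
import requests

PRIMARY = "https://dc1.example.com/health"    # hypothetical
SECONDARY = "https://dc2.example.com/health"  # hypothetical
LATENCY_BUDGET_S = 1.0  # treat slow answers as degraded, not just outages

def healthy(url: str) -> bool:
    try:
        resp = requests.get(url, timeout=LATENCY_BUDGET_S)
        return resp.status_code == 200
    except requests.RequestException:
        return False

def reroute_to(dc: str) -> None:
    print(f"rerouting traffic to {dc}")  # placeholder for a DNS/GSLB update

if __name__ == "__main__":
    # DDoS-induced degradation often shows up as latency, not a hard outage,
    # so the probe fails on slowness too.
    if not healthy(PRIMARY) and healthy(SECONDARY):
        reroute_to("dc2")
```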

[You might also like: Marrying the Business Need with the Technology Drive: Recapping It All]

Today you can look at buying separate products to accomplish these needs, but you are then left with an age-old problem: a disparate collection of poorly integrated best-of-breed solutions that don’t work well together.

These products should work together as part of a holistic solution where each solution can compensate and enhance the performance of the other and ultimately help improve and ensure application availability, performance and reliability. The goal should be to create a resilient architecture to prevent or limit the impact of DoS and DDoS attacks of any kind.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.



Considerations for Load Balancers When Migrating Applications to the Cloud

July 31, 2018 — by Prakash Sinha


According to a new forecast from the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, total spending on IT infrastructure for deployment in cloud environments is expected to total $46.5 billion in 2017 with year-over-year growth of 20.9%. Public cloud data centers will account for the majority of this spending, 65.3%, growing at the fastest annual rate of 26.2%. Off-premises private cloud environments will represent 13% of cloud IT infrastructure spending, growing at 12.7% year over year. On-premises private clouds will account for 62.6% of spending on private cloud IT infrastructure and will grow 11.5% year-over-year in 2017.


Should Business Risk Mitigation Be A Factor When We Choose Our Suppliers And Manufacturers?

July 24, 2018 — by Daniel Lakier


This is something that I have struggled with for most of my working life. As a technology professional, it is my job to pick the best products and solutions, or to dig deeper and marry that technological decision with the one that’s best for my organization. Is it incumbent on me to consider my suppliers’ financials, or their country of origin, or perhaps their business practices?

This thought was thrust sharply into focus during the past few months. First, we were reminded that a sound business still needs to have sound financials. The second warning concerned the ramifications of a trade war.


Single Sign On (SSO) Use Cases

May 24, 2018 — by Prakash Sinha


SSO reduces password fatigue for users who must otherwise remember a password for each application. With SSO, a user logs into one application and is then able to sign into other applications automatically, regardless of the domain the user is in or the technology in use. SSO makes use of a federation service or login page that orchestrates the user’s credentials between multiple applications.


Maintaining Your Data Center’s Agility and Making the Most Out of Your Investment in ADC Capacity

April 25, 2018 — by Fabio Palozza


Deciding on an appropriate application delivery controller (ADC) and evaluating the need for supporting infrastructure is a complex and challenging job. Such challenges result from the fact that ADCs are increasingly used across diverse environments and as virtual, cloud, and physical appliances.