
Application Acceleration & Optimization, Application Delivery, Security

DevSecOps Automation? The Roadmap

October 18, 2018 — by Benjamin Maze


In my previous blog post, I addressed the need to create applications faster and to build an adaptive infrastructure that suits real consumption. Today I will highlight how automation can help guarantee your security level, keep the infrastructure adaptive, and manage traffic irregularities.

How Can I Guarantee My Security Level?

By using automation, we can also guarantee a level of security for any new application by automatically deploying security rules when a new app is published. There is no risk of human error or of forgetting something: when a new app is deployed, security is attached automatically. This is very powerful, but it needs to be very “industrial.” Exceptions are not the friend of automation, so it is very important to standardize applications for use with automation.

IoT is the leading source of DDoS attack traffic because devices are provisioned very quickly but with little to no security. Many botnets target IoT in order to gain access to large numbers of devices; hackers can exploit several applications and vulnerabilities to take control of these devices and build a very large botnet.

Radware can provide automated security services for anti-DDoS and WAF protection on top of ADC services (load balancing, SSL offload, reverse proxy, L7 modification, etc.).

How Can I Have an Adaptive Infrastructure?

With Google Kubernetes, it is very easy to add more containers (or pods) to an application in order to handle more client connections. Kubernetes has its own load-balancing mechanisms to share the load between several containers. However, this service is limited and does not provide all the features we need from a reverse proxy to expose the application to the rest of the world (NAT, SSL offload, L7 load balancing, etc.).

By using an intermediate orchestrator for L4-L7 services such as load balancing, DDoS protection, and WAF, acting as an abstraction layer, we can have the orchestrator notified of any change in Kubernetes and trigger an automation workflow to update the infrastructure accordingly:

  • Modify/create/scale up/scale down an ADC service to expose the app outside with full capabilities, including ADC (SSL, NAT, L7 modification, L7 load balancing, persistence, cache, TCP optimization)
  • Modify/create/scale up/scale down DDoS or WAF services to protect this new exposed application
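To make the two actions above concrete, here is a minimal Python sketch of an abstraction-layer orchestrator reacting to Kubernetes-style events. The class, method names, and event shapes are illustrative assumptions, not a real Radware or Kubernetes API:

```python
# Hypothetical sketch: an abstraction-layer orchestrator that adjusts
# ADC/WAF/DDoS services as applications appear, scale, and disappear.
# All names here are illustrative, not a documented API.

class L4L7Orchestrator:
    def __init__(self):
        # virtual services keyed by app name
        self.virtual_services = {}

    def on_kubernetes_event(self, event):
        """Handle a (simplified) Kubernetes event dict."""
        app, pods = event["app"], event["pods"]
        if event["type"] == "ADDED":
            # expose the new app with full ADC and security capabilities
            self.virtual_services[app] = {"backends": pods,
                                          "ssl_offload": True,
                                          "waf": True, "ddos": True}
        elif event["type"] == "MODIFIED":
            # scale the ADC service up or down with the pod count
            self.virtual_services[app]["backends"] = pods
        elif event["type"] == "DELETED":
            # de-provision ADC and security services
            self.virtual_services.pop(app, None)
        return self.virtual_services.get(app)

orch = L4L7Orchestrator()
orch.on_kubernetes_event({"type": "ADDED", "app": "vod", "pods": ["10.0.0.1"]})
svc = orch.on_kubernetes_event({"type": "MODIFIED", "app": "vod",
                                "pods": ["10.0.0.1", "10.0.0.2"]})
print(len(svc["backends"]))  # 2 backends after scale-out
```

In a real deployment, the event source would be a watch on the Kubernetes API rather than hand-built dicts.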

How Can I Manage Exceptional Events That Temporarily Increase My Traffic?

Considering the example of a VOD service, we understand that the service is used differently depending on the time of day. It experiences huge traffic peaks in the evening when people are watching TV, but during the day traffic decreases dramatically because most people are at work.

If you scale your application and infrastructure for your evening traffic peak, it will cost a lot, and that compute will sit unused during the day; this is not optimized.

With automation, we can do something smarter by provisioning compute resources according to real needs. That means my application runs on a few servers during the day and on several servers during the evening. If I host my application in the public cloud, I pay only for my actual consumption and do not pay for computing power during the day that I don’t use.

Again, this agility is needed not only at the application layer but also at the infrastructure layer. My ADC, anti-DDoS, or WAF services should not be statically sized for my evening traffic peak but should adapt to my real load.

Using an intermediate automation orchestrator can provide an intelligent workflow to follow this trend. In the evening, it can automatically provision new ADC, DDoS, or WAF services on new hosts to provide more computing power and handle the volume of client requests, then de-provision them when they are no longer needed.

It is important to also have a flexible license model with a license server that dynamically dispatches the license to the ADC, WAF, or DDoS services.
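The flexible licensing idea can be sketched as a shared pool of capacity units leased out as services are provisioned and returned when they are de-provisioned. This is an illustrative model only, not a real licensing API:

```python
# Illustrative sketch of a license server that dynamically dispatches a
# shared pool of license units to ADC/WAF/DDoS instances.

class LicenseServer:
    def __init__(self, total_units):
        self.free = total_units
        self.leases = {}          # instance id -> units held

    def checkout(self, instance_id, units):
        """Lease license units to a newly provisioned service instance."""
        if units > self.free:
            raise RuntimeError("license pool exhausted")
        self.free -= units
        self.leases[instance_id] = self.leases.get(instance_id, 0) + units

    def release(self, instance_id):
        """Return units to the pool when the instance is de-provisioned."""
        self.free += self.leases.pop(instance_id, 0)

srv = LicenseServer(total_units=10)
srv.checkout("adc-evening-1", 4)   # evening peak: extra ADC instance
srv.checkout("waf-evening-1", 3)
srv.release("adc-evening-1")       # daytime: scale back in
print(srv.free)  # 7 units back in the pool
```

The point is that licenses follow the workload: no instance holds capacity it is not using.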

Conclusion

With an intermediate orchestrator, Radware technologies can be used in complex SDDC environments. It provides an abstraction layer based on workflows that simplifies integration with external tools like Ansible, Cisco ACI, Juniper Contrail, OpenStack, and Google Kubernetes.

vDirect exposes a REST API that is used to trigger workflows. For example, a workflow can “manage virtual service” with three actions:

  • Create a new virtual service (real server, server group, load balancing algorithm, health check, DDoS, WAF, etc.)
  • Modify an existing virtual service (add a real server, change DDoS rules, change load balancing algorithms, etc.)
  • Delete an existing virtual service (delete the ADC, DDoS, and WAF configuration).

From an external orchestrator, the REST calls are very simple: a single call to the “manage virtual service” workflow. Given all the necessary parameters, vDirect can then perform all the automation on Radware devices such as the ADC, anti-DDoS, and WAF.
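As a rough illustration of that single call, the sketch below builds the URL and JSON body an external orchestrator might send. The URL path, action names, and parameter fields are assumptions for illustration; the actual vDirect API paths and payloads will differ:

```python
import json

# Hypothetical helper: build the one REST call to the
# "manage virtual service" workflow. Not the documented vDirect API.

def build_manage_vs_call(vdirect_host, action, params):
    """Return (url, body) for a workflow action: create/modify/delete."""
    url = (f"https://{vdirect_host}/api/workflow/"
           f"manage_virtual_service/action/{action}")
    body = json.dumps({"parameters": params})
    return url, body

url, body = build_manage_vs_call(
    "vdirect.example.com",
    "create",
    {"virtual_service": "vod-app",
     "real_servers": ["10.0.0.1", "10.0.0.2"],
     "lb_algorithm": "round-robin",
     "health_check": "http",
     "waf": True, "ddos": True})
print(url)
```

An HTTP client (e.g. `requests.post(url, data=body)`) would then send the request; everything else happens inside the workflow.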

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Acceleration & Optimization, Application Virtualization, Security

DevOps: Application Automation? The Inescapable Path

October 17, 2018 — by Benjamin Maze


The world is changing. IoT is becoming more prevalent, and applications hold a prominent place in this new world. Because IT infrastructure carries a huge cost, we need to find a way to optimize it.

  • How can I create apps faster?
  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Automation is the answer.

How Can I Create Apps Faster?

First, we need to understand a few concepts from the cloud world.

In the world of application development, developers have several tools they can use to accelerate the development process. We all know server virtualization has been a good tool, allowing us to quickly create new infrastructure to support new applications; this is infrastructure-as-a-service (IaaS). But this virtualization is not fast enough: we need to provision a new OS for each virtual server, which takes a long time, and it is difficult to manage the large number of OS instances in the datacenter.

With the arrival of containers (like Docker), you get virtualization while keeping the same operating system; this is the platform-as-a-service (PaaS) level. As developers, we do not need to manage the OS, so creating and deleting services can be done very quickly.

One application can run on several containers that need to talk to each other. Platforms like Google Kubernetes orchestrate these containers, so you can build a fully automated application running across several of them. Kubernetes also introduces the ability to scale an application in or out in real time based on traffic load. That means a VOD service like Netflix can run more or fewer containers depending on the time of day, so the application uses less computing power when there are fewer viewers, which has a direct impact on its cost.
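The scale-in/scale-out decision can be sketched very simply: derive a desired container count from the current request rate, the way an autoscaler such as the Kubernetes Horizontal Pod Autoscaler targets a per-replica load. The numbers and thresholds below are made up for illustration:

```python
import math

# Hedged sketch of traffic-driven scaling: more pods in the evening peak,
# fewer during the day, within fixed bounds.

def desired_replicas(current_rps, target_rps_per_pod, min_pods=2, max_pods=50):
    """Return how many pods are needed for the current request rate."""
    wanted = math.ceil(current_rps / target_rps_per_pod)
    return max(min_pods, min(max_pods, wanted))

print(desired_replicas(current_rps=12_000, target_rps_per_pod=500))  # 24
print(desired_replicas(current_rps=800, target_rps_per_pod=500))     # 2
```

The same calculation applied to ADC, WAF, or DDoS instances is what makes the infrastructure layer as elastic as the application layer.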

We now understand why it is important to use automation at the application level, but an application does not exist only at the application level. When we publish our apps and make them available to external clients, traffic must travel through many devices, such as switches, routers, firewalls, and load balancers, in order to function. These devices have to be configured for the application so they know what to do at the network level. Historically, these elements have been managed manually, not automatically, which slows the rollout of new applications and services because human intervention is needed to build the configuration on those devices.

In the DevOps/SecOps domain, we try to automate these network elements. Basically, we need a fully automated system that tracks changes, additions, and deletions at the application level and automatically provisions the corresponding configuration on the network elements that support the application.

Software-Defined-Data-Center

That is what we call a Software-Defined Data Center (SDDC), which introduces a kind of “intelligence” into the infrastructure. In this way, it is possible to have a dynamic infrastructure that follows requests from the application layer down to the infrastructure layer:

  • Automation of the application layer based on service virtualization (containers)
  • Scale-in/scale-out mechanisms to provision/de-provision compute according to exact needs
  • Automatic exposure of an application to the customer
  • Provisioning of all required network/security configuration (switch, router, load balancer, reverse proxy, DDoS, etc.)

An intermediate orchestrator, acting as an abstraction layer, can be a very strong tool for integrating into this kind of SDDC infrastructure, with:

  • Auto-provisioning of ADC services (Alteon VA or vADC on physical Alteon)
  • Auto-provisioning of configuration triggered by an external event (new apps in Kubernetes, for example)
  • Dynamic scale in / scale out
  • Auto-provisioning of security services (DDoS, WAF)

In the next article, I will continue to answer the following questions using automation:

  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Acceleration & Optimization, Application Delivery

Optimizing Multi-Cloud, Cross-DC Web Apps and Sites

September 27, 2018 — by Prakash Sinha


If you are working from your organization’s office, the chances are good that you are enjoying the responsiveness of your corporate LAN, thereby guaranteeing speedy load times for websites and applications.

Yahoo! found that making pages just 400 milliseconds faster resulted in a 9% increase in traffic. The faster site also doubled the number of sessions from search engine marketing and cut the number of required servers in half.

Don’t Fly Blind – Did Someone Say Waterfall?

Waterfall charts let you visualize cumulative data sequentially across a process. Performance waterfalls for webpages, generated using webpagetest.org, let you see the series of actions that occur between a user and your application in order for that user to view a specific page of your site.

The webpagetest.org waterfall chart shows a connections view with a breakdown of DNS lookup, TCP connection establishment, time to first byte (TTFB), rendering time, and document complete.
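A waterfall view is, at heart, just cumulative timing. Given the phase durations reported for one request, you can compute where each phase starts and when the response completes; the millisecond values below are made up for illustration:

```python
# Minimal sketch: turn sequential phase durations into waterfall rows.

def waterfall(phases):
    """Return (name, start_ms, end_ms) tuples for sequential phases."""
    rows, t = [], 0
    for name, duration_ms in phases:
        rows.append((name, t, t + duration_ms))
        t += duration_ms
    return rows

rows = waterfall([("DNS lookup", 40),
                  ("TCP connect", 30),
                  ("TTFB", 120),
                  ("Content download", 210)])
print(rows[-1])  # ('Content download', 190, 400)
```

Every bar that starts late in the chart is waiting on everything above it, which is why shaving a single early phase (DNS, connection setup) moves the whole page.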

[You might also like: Considerations for Load Balancers When Migrating Applications to the Cloud]

Optimizing Web-Facing Apps That Span Cloud and/or Data Center Boundaries

The performance of a website correlates directly with its success. The speed with which a web page renders in a user’s browser affects every conceivable business metric: page views, bounce rate, conversions, customer satisfaction, return visits, and of course revenue.

Latency, payload, caching, and rendering are the key measures when evaluating website performance. Each round trip is subject to connection latency. The time from when a user requests a webpage to when its resources finish downloading in the browser is directly related to the weight of the page and its resources: the larger the total content size, the longer it takes to download everything needed for the page to become functional.

Using caching and default caching headers may reduce latency, since less content is downloaded and fewer round trips may be needed to fetch resources, although some round trips may still occur to validate that cached content is not stale.
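The freshness-versus-revalidation trade-off above can be sketched in a few lines: a cached object is served without any round trip while it is fresh, but once its max-age expires a revalidation round trip is still needed to confirm it has not gone stale. This is a simplified model of HTTP `Cache-Control: max-age` behavior, not a full cache implementation:

```python
import time

# Simplified model of cache freshness (HTTP max-age semantics).

class CacheEntry:
    def __init__(self, body, max_age_s):
        self.body = body
        self.fetched_at = time.time()
        self.max_age_s = max_age_s

    def is_fresh(self):
        """Fresh entries are served with zero round trips."""
        return (time.time() - self.fetched_at) < self.max_age_s

entry = CacheEntry(b"<html>...</html>", max_age_s=60)
print(entry.is_fresh())  # True right after fetch
```

Once `is_fresh()` turns false, a real cache would issue a conditional request (e.g. `If-None-Match`) rather than re-download the whole object.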

Browsers need to render the HTML page and resources served to them. Client-side work may cause poor rendering at the browser and a degraded user experience, for example, some blocking calls (say 3rd party ads) or improper rendering of page resources can delay page load time and impact a user experience.

The low-hanging fruit is easy and obvious: reduce the number of connection setups using keep-alive and pipelining, compress objects to shrink the payload the browser receives, and use caching to manage static objects and pre-fetch data where possible. A content delivery network (CDN) can serve static content closer to users to reduce latency. More involved and advanced optimizations include consolidating resources fetched from the server, compressing images sent to the browser based on device type, connection speed, and user location, and reducing object sizes through content minification. Additional techniques, such as delaying ads until after the page has become usable, may improve the perceived performance of web pages and applications.
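The compression point is easy to demonstrate with the standard library: gzip-compressing a typical text response dramatically shrinks what the browser must download, especially for repetitive markup. The HTML here is synthetic:

```python
import gzip

# Quick illustration: compressing a text payload before it leaves the
# server reduces the bytes the browser has to download.
html = ("<html><body>" + "<p>product listing row</p>" * 500 +
        "</body></html>").encode()
compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"{len(html)} -> {len(compressed)} bytes ({ratio:.0%})")
```

In practice this is a one-line server or load-balancer setting (e.g. enabling gzip or Brotli for text content types) rather than application code.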

Read “Just Group Achieves Web Page Acceleration” to learn more.


Application Acceleration & Optimization, Application Delivery

Will Holiday Demand Make or Break Your E-Commerce Site?

October 27, 2015 — by Frank Yue

As the weather turns and the leaves reveal their polychromatic wonder, I enter this time of year knowing that the holiday season is upon us. Holidays mean shopping, and like any good technologist, I have transitioned to making most of my holiday purchases online.

As an online retailer (e-tailer), the holiday season is critical to the success of the business. Estimates suggest that, on average, over 23% of online sales are made during this period. The stability and availability of the online platform over the next couple of months can make or break the business. So, what do you do if your online store becomes too popular?

Application Acceleration & Optimization, Application Delivery, WPO

HTTP/2 Is Ready, But Are You Ready for It?

October 21, 2015 — by Shamus McGillicuddy

Shamus McGillicuddy is a Senior Analyst for EMA and is a featured guest blogger.

The Internet Engineering Task Force (IETF) published HTTP Version 2 (HTTP/2) as RFC 7540 in May 2015, and already several browsers (including Firefox and Chrome) support it. However, adoption of the new web application protocol probably won’t be particularly rapid. In fact, uptake of HTTP/2 might progress at a pace similar to that of IPv6. For many, there will be no compelling reason to implement the protocol, given the hassle involved.

Application Acceleration & Optimization, Application Delivery, WPO

Do You Still Need Optimization After Migrating to HTTP/2?

September 17, 2015 — by Matt Young

There’s a lot of talk about HTTP/2. Why? Possibly because it promises to help alleviate some of the bottlenecks that come along with the dynamic, rich webpages that people have come to expect.

The consumer market is driven by media consumption, be it high-definition videos, third-party plugins or animations, and these are bandwidth-hungry elements in an adversarial relationship with page load speeds.

Application Acceleration & Optimization, Application Delivery, WPO

Why Are Online Retailers Leaving Millions Of Dollars On The Table?

September 15, 2015 — by Matt Young

Online retailers are leaving millions of dollars, yes, millions, on the table. Why is this?

In the hyper-competitive world of online commerce sites, every second is absolutely critical in ensuring a user experience that will yield the maximum likelihood of conversion, meaning a site visitor follows through and makes a purchase.

Application Acceleration & Optimization, Application Delivery, Security

HTTP/2 Will Break Your Security – Here’s How to Fix it

September 10, 2015 — by Yaron Azerual

Now that HTTP/2 is here and widely adopted by client browsers, many of the performance challenges that existed with HTTP/1.1 are finally addressed and solved. But what about security?

While HTTP/2 provides a higher level of privacy by mandating (de-facto because of browser implementation) traffic encryption, security solutions such as Web Application Firewalls (WAFs) are not keeping pace with the HTTP/2 evolution.

Application Acceleration & Optimization, Application Delivery, WPO

REPORT: State of the Union for Ecommerce Page Speed & Web Performance (Summer 2015)

September 8, 2015 — by Matt Young

In the hyper-accelerated world of technology, the modern consumer is bombarded with near-daily news of technological breakthroughs, OS updates, device refreshes and breakneck broadband speeds. With this all comes a reinforcement of expectations for modern webpages to deliver dynamic, rich content on par with high-definition cable programming, delivered just as fast as a user would change a channel from one HD broadcast to another.

Application Acceleration & Optimization, Application Delivery, SSL

The Internet has Upgraded to HTTP/2, but One Key Feature will Slow You Down

August 26, 2015 — by Frank Yue

Imagine a world where smartphones were only upgraded every 15 years.  It is hard to imagine waiting that long for new hardware and new functionality to meet consumer expectations and demands.  It is even harder to imagine how the update will integrate all the changes in the way people utilize their smartphones.