
Application Acceleration & Optimization | Application Delivery | Security

DevSecOps Automation? The Roadmap

October 18, 2018 — by Benjamin Maze


In my previous blog post, I addressed the need to create applications faster and to build an adaptive infrastructure that matches real consumption. Today I will highlight how automation can help guarantee the security level, keep the infrastructure adaptive, and manage exceptional traffic events.

How Can I Guarantee My Security Level?

By using automation, we can also guarantee a level of security for any new application by deploying security rules automatically when the app is published. There is no risk of human error or of forgetting something: when a new app is deployed, its security is attached automatically. This is very powerful, but it needs to be industrialized. Exceptions are not the friend of automation, so it is very important to standardize applications before automating them.

IoT devices are a primary source of DDoS attacks because they are provisioned very quickly but with little or no attention to security. Many botnets target IoT to gain access to large numbers of devices: hackers exploit vulnerable apps and firmware to take control of these devices and build very large botnets.

Radware can provide automated security services for anti-DDoS and WAF protection on top of ADC services (load balancing, SSL offload, reverse proxy, L7 modification, etc.).

How Can I Have an Adaptive Infrastructure?

With Kubernetes, it is very easy to add more containers (or pods) to an application in order to handle more client connections. Kubernetes has its own load-balancing mechanisms to share the load between several containers. However, this service is very limited and does not provide all the features we need on a reverse proxy to expose the application to the rest of the world (NAT, SSL offload, L7 load balancing, etc.).

By using an intermediate orchestrator for L4-L7 services such as load balancing, DDoS protection, and WAF – acting as an abstraction layer – we can have this orchestrator notified of any change in Kubernetes and trigger an automation workflow to update the infrastructure accordingly:

  • Modify/create/scale up/scale down an ADC service to expose the app externally with full ADC capabilities (SSL offload, NAT, L7 modification, L7 load balancing, persistence, caching, TCP optimization)
  • Modify/create/scale up/scale down DDoS or WAF services to protect this new exposed application
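A minimal sketch of this event-driven update loop, assuming a Python-based orchestrator; the function and action names are illustrative, not a real Radware or Kubernetes API:

```python
# Hypothetical sketch: map a Kubernetes watch event for a service to the
# L4-L7 action the abstraction layer should trigger. Action names are
# illustrative, not a product API.

def plan_l4l7_action(event_type: str, service_name: str) -> dict:
    """Translate a Kubernetes event into an abstract ADC/WAF/DDoS workflow step."""
    actions = {
        "ADDED": "provision",      # new app: create ADC service + security rules
        "MODIFIED": "update",      # app changed/scaled: adjust the ADC service
        "DELETED": "deprovision",  # app removed: tear down ADC/WAF/DDoS config
    }
    if event_type not in actions:
        raise ValueError(f"unhandled event type: {event_type}")
    return {
        "workflow": "manage virtual service",
        "action": actions[event_type],
        "service": service_name,
    }
```

An orchestrator would run such a mapping inside a watch loop on Kubernetes services, so infrastructure changes always follow application changes without human intervention.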

How Can I Manage Exceptional Events That Temporarily Increase My Traffic?

Consider the example of a VOD service: it will be used differently depending on the time of day. It will experience huge peaks of traffic in the evening when people are watching TV, but during the day traffic will drop dramatically because most people are at work.

If you scale your application and infrastructure for the evening traffic peak, it will cost a lot, and that compute will sit unused during the day. This is not optimized.

With automation, we can do something smarter by provisioning compute resources according to real needs. That means my application will run on a few servers during the day and on many servers during the evening. If I host my application in the public cloud, I will pay only for my consumption, and not for a lot of computing power that I don’t use during the day.

Again, this agility should exist not only at the application layer but also at the infrastructure layer. My ADC, anti-DDoS, or WAF services should not be statically sized for my evening traffic peak but should adapt to my real load.

Using an intermediate automation orchestrator can provide an intelligent workflow that follows this trend. In the evening, it can automatically provision new ADC, DDoS, or WAF services on new hosts to provide more computing power and handle the many client requests, then de-provision them when they are no longer needed.
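The sizing decision can be sketched as a small function: pick the number of ADC/WAF/DDoS instances from the measured load instead of the evening peak. The capacity per instance below is an illustrative assumption, not a Radware figure:

```python
import math

def desired_adc_instances(current_rps: float,
                          rps_per_instance: int = 10_000,
                          min_instances: int = 1,
                          max_instances: int = 8) -> int:
    """Return how many ADC instances the orchestrator should keep provisioned.

    current_rps: measured client requests per second.
    rps_per_instance: assumed capacity of one instance (illustrative value).
    """
    needed = math.ceil(current_rps / rps_per_instance)
    return max(min_instances, min(max_instances, needed))

# Daytime load needs a single instance; the evening peak scales the tier out.
day = desired_adc_instances(4_000)       # -> 1
evening = desired_adc_instances(65_000)  # -> 7
```

Run periodically, the difference between the current and desired count tells the orchestrator how many services to provision or de-provision.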

It is also important to have a flexible licensing model, with a license server that dynamically dispatches licenses to the ADC, WAF, or DDoS services.

Conclusion

With an intermediate orchestrator, Radware technologies can be used in complex SDDC environments. It provides an abstraction layer, based on workflows, that simplifies integration with external tools such as Ansible, Cisco ACI, Juniper Contrail, OpenStack, and Kubernetes.

vDirect exposes a REST API that is used to trigger workflows. For example, a workflow can “manage virtual service” with three actions:

  • Create a new virtual service (real server, server group, load balancing algorithm, health check, DDoS, WAF, etc.)
  • Modify an existing virtual service (add a real server, change DDoS rules, change load balancing algorithms, etc.)
  • Delete an existing virtual service (delete the ADC, DDoS, and WAF configuration).

From an external orchestrator, integration is very simple: only one REST call to the “manage virtual service” workflow. Given all the necessary parameters, vDirect can perform all the automation on Radware devices such as ADC, anti-DDoS, and WAF.
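For illustration, here is how such a single call might be built from Python. The URL layout and parameter names are assumptions made for this sketch, not the documented vDirect API; consult the vDirect REST reference for the real endpoint:

```python
import json
import urllib.request

def build_manage_virtual_service_request(vdirect_host: str, action: str,
                                         parameters: dict) -> urllib.request.Request:
    """Build one POST to a 'manage virtual service' workflow action.

    The path below is a hypothetical layout for this sketch; the real
    vDirect endpoint may differ.
    """
    url = (f"https://{vdirect_host}/api/workflow/"
           f"manage%20virtual%20service/action/{action}")
    body = json.dumps({"parameters": parameters}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# One call carries everything needed to configure ADC, DDoS, and WAF
# (host name and parameter names are hypothetical).
req = build_manage_virtual_service_request(
    "vdirect.example.com", "create",
    {"service": "vod-frontend", "algorithm": "round-robin", "waf": True},
)
```

The point of the abstraction is visible here: the external orchestrator sends one request with the application parameters and never touches the individual devices.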

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Acceleration & Optimization | Application Virtualization | Security

DevOps: Application Automation? The Inescapable Path

October 17, 2018 — by Benjamin Maze


The world is changing: IoT is growing, and applications hold a prominent place in this new world. IT infrastructure carries a huge cost, and we need to find a way to optimize it.

  • How can I create apps faster?
  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Automation is the answer.

How Can I Create Apps Faster?

First, we need to understand a few concepts from the cloud world.

In the world of application development, developers have several tools they can use to accelerate the development process. We all know server virtualization has been a good tool that allows us to quickly create new infrastructure to support new applications; this is infrastructure-as-a-service (IaaS). But this virtualization is not fast enough: we need to provision a new OS for each virtual server, which takes a long time, and it is difficult to manage the large number of operating systems in the datacenter.

With the arrival of containers (like Docker), we get virtualization while keeping the same operating system; this is the platform-as-a-service (PaaS) level. As developers, we do not need to manage the OS, so new services can be created and deleted very quickly.

One application can run on several containers that need to talk to each other. Platforms like Kubernetes are used to orchestrate these containers, so you can build an application running on several containers that is completely automated. Kubernetes also introduces the capability to scale an application in or out in real time according to traffic load. That means we can imagine a VOD service like Netflix running more or fewer containers depending on the time of day, so the application uses less computing power when there are fewer viewers, which directly reduces its cost.
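This scale in/out is what the Kubernetes Horizontal Pod Autoscaler does; its documented sizing rule is a simple ratio, sketched here in Python:

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float) -> int:
    """Kubernetes HPA rule: desired = ceil(current * current_metric / target_metric).

    E.g. with CPU utilization as the metric: pods running at twice the
    target utilization double the replica count.
    """
    return math.ceil(current_replicas * (current_metric / target_metric))

peak = hpa_desired_replicas(4, current_metric=200, target_metric=100)   # -> 8
quiet = hpa_desired_replicas(8, current_metric=25, target_metric=100)   # -> 2
```

The metric values here are illustrative, but the formula itself is the one the HPA controller applies on each evaluation cycle.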

We now understand why it is important to use automation at the application level, but an application does not exist only at the application level. When we publish our apps and make them available to external clients, traffic must travel through many devices, such as switches, routers, firewalls, and load balancers. These devices have to be configured so they know what to do with this application at the network level. Historically, those elements are configured manually, not automatically, which results in slow exposure of new applications and services because human intervention is needed to build the configuration on those devices.

In the DevOps/SecOps domain, we try to automate these network elements. Basically, we need a fully automated system that tracks changes, additions, and deletions at the application level and automatically provisions configuration on the network elements to support the application.

Software-Defined Data Center

That is what we call a Software-Defined Data Center (SDDC), which introduces some kind of “intelligence” into the infrastructure. In this way, it is possible to have a dynamic infrastructure that propagates requests from the application layer down to the infrastructure layer:

  • Automation of the application layer based on service virtualization (containers)
  • Scale-in/scale-out mechanisms to provision/de-provision compute according to exact needs
  • Automatic exposure of applications to customers
  • Provisioning of all required network/security configuration (switch, router, load balancer, reverse proxy, DDoS, etc.)

An intermediate orchestrator, acting as an abstraction layer, can be a very strong tool to integrate into this kind of SDDC infrastructure, with:

  • Auto-provisioning of ADC services (Alteon VA or vADC on physical Alteon)
  • Auto-provisioning of configuration triggered by an external event (new apps in Kubernetes, for example)
  • Dynamic scale in / scale out
  • Auto-provisioning of security services (DDoS, WAF)

In the next article, I will continue to answer the following questions using automation:

  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Delivery | DDoS | SDN | Security | Service Provider | WPO

Your Favorite Posts of 2015

December 30, 2015 — by Radware

Over the past twelve months, our team of authors has offered advice, expertise, and analysis on a variety of topics facing the application delivery and security communities.  The articles below are the most read and shared ones we published this year.  Our goal was (and is) to share our experience and knowledge so you, our readers, can better prepare, implement, and gain insights that you can apply to your business.

Application Delivery | WPO

Web Performance Optimization: Treat Your Enterprise Apps Like E-Commerce Apps

October 7, 2015 — by Shamus McGillicuddy

Shamus McGillicuddy is a Senior Analyst for EMA and is a featured guest blogger.

Online retailers have understood the importance of web application performance for a long time, since back when Amazon was better known as a river than as an e-commerce giant. Enterprises have been a little slower to catch on. Sooner or later, though, all of them will realize that web performance optimization isn’t just for e-commerce apps anymore.

Application Acceleration & Optimization | Application Delivery | WPO

REPORT: State of the Union for Ecommerce Page Speed & Web Performance (Summer 2015)

September 8, 2015 — by Matt Young

In the hyper-accelerated world of technology, the modern consumer is bombarded with near-daily news of technological breakthroughs, OS updates, device refreshes and breakneck broadband speeds. With this all comes a reinforcement of expectations for modern webpages to deliver dynamic, rich content on par with high-definition cable programming, delivered just as fast as a user would change a channel from one HD broadcast to another.

WPO

New Findings: State of the Union for Ecommerce Page Speed and Web Performance [Spring 2015]

April 15, 2015 — by Kent Alstad

There are compelling arguments why companies – particularly online retailers – should care about serving faster pages to their users. Countless studies have found an irrefutable connection between load times and key performance indicators ranging from page views to revenue.

For every 1 second of improvement, Walmart.com experienced up to a 2% conversion increase. Firefox reduced average page load time by 2.2 seconds, which increased downloads by 15.4% — resulting in an estimated 10 million additional downloads per year. And when auto parts retailer AutoAnything.com cut load times in half, it experienced a 13% increase in sales.

Recently at Radware, we released our latest research into the performance and page speed of the world’s top online retailers. This research aims to answer the question: in a world where every second counts, are retailers helping or hurting their users’ experience – and ultimately their own bottom line?

Application Delivery

The Art of Efficient ADC Administration – Develop Once, Apply Multiple Times

March 7, 2013 — by Yaron Azerual

It’s no secret application delivery controller (ADC) services are often perceived as complex to master and administer. Although they may use the latest ADC device, many ADC deployments only use basic layer 4 load balancing. It can be challenging to find an ADC champion who can really take advantage of the most advanced capabilities of an application delivery solution and maximize its business benefits.