
Application Delivery, Cloud Computing

Ensuring a Secure Cloud Journey in a World of Containers

December 11, 2018 — by Prakash Sinha


As organizations transition to the cloud, many are adopting microservice architecture to implement business applications as a collection of loosely coupled services, in order to enable isolation, scale, and continuous delivery for complex applications. However, you have to balance the complexity that comes with such a distributed architecture with the application security and scale requirements, as well as time-to-market constraints.

Many application architects choose application containers as the tool of choice to implement a microservices architecture. Among their many advantages, such as a small resource footprint, fast instantiation, and better resource utilization, containers provide a lightweight runtime and a consistent environment for the application, from development through testing to production deployment.

That said, adopting containers doesn’t remove the traditional security and application availability concerns; application vulnerabilities can still be exploited. Recent ransomware attacks highlight the need to secure against DDoS and application attacks.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Security AND availability should be top-of-mind concerns in the move to adopt containers.

Let Your Load Balancer Do the Heavy Lifting

For many years, application delivery controllers (ADCs), a.k.a. load balancers, have been integral to addressing service-level needs for applications deployed on premises or in the cloud, meeting both the availability and many of the security requirements of those applications.

Layered security is a MUST: In addition to using built-in tools for container security, traditional approaches to security are still relevant. Many container-deployed services are composed using Application Programming Interfaces (APIs). Since these services are accessible over the web, they are open to malicious attacks.

As hackers probe network and application vulnerabilities to gain access to sensitive data, the prevention of unauthorized access needs to be multi-pronged as well:

  • Preventing denial-of-service attacks
  • Running routine vulnerability assessment scans on container applications
  • Scanning application source code for vulnerabilities and fixing them
  • Preventing malicious access by validating users before they can access a container application
  • Preventing rogue application ports/applications from running
  • Securing data at rest and in motion

Since ADCs terminate user connections, scrubbing the data with a web application firewall (WAF) helps identify and prevent malicious attacks, while authenticating users against an identity management system prevents unauthorized access to a container service.
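To make the idea concrete, here is a minimal sketch of user validation at the proxy tier: a small reverse proxy that checks a bearer token against an identity provider before forwarding the request to a container service. The identity-provider URL, backend service address, and policy are illustrative assumptions, not a specific product's API.

    # Hypothetical sketch: validate callers at the proxy before traffic
    # reaches the container application. Flask and requests are used only
    # as convenient stand-ins for the proxy and HTTP client.
    import requests
    from flask import Flask, Response, abort, request

    app = Flask(__name__)

    IDP_INTROSPECT_URL = "https://idp.example.com/oauth2/introspect"  # assumed endpoint
    BACKEND_SERVICE_URL = "http://orders-svc.internal:8080"           # assumed service

    def token_is_valid(token: str) -> bool:
        # Ask the identity provider whether the token is active (RFC 7662 style).
        resp = requests.post(IDP_INTROSPECT_URL, data={"token": token}, timeout=3)
        return resp.ok and resp.json().get("active", False)

    @app.route("/<path:path>", methods=["GET", "POST"])
    def proxy(path):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer ") or not token_is_valid(auth[len("Bearer "):]):
            abort(401)  # block unauthorized access before it reaches the container
        upstream = requests.request(
            request.method, f"{BACKEND_SERVICE_URL}/{path}",
            headers={k: v for k, v in request.headers if k.lower() != "host"},
            data=request.get_data(), timeout=10)
        return Response(upstream.content, status=upstream.status_code)

    if __name__ == "__main__":
        app.run(port=8443)

In practice a WAF inspects the payload at this same termination point; the sketch only shows the authentication half of the layered approach.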

Availability is not just a nice-to-have: A client interacting with a microservices-based application does not need to know which instance is serving it. This is precisely the isolation and decoupling that a load balancer provides, ensuring availability if one of the instances becomes unavailable.

Allocating and managing it manually is not an option: Although a container-based application has many benefits, it is a challenge to quickly roll out, troubleshoot, and manage these microservices. Manually allocating resources for applications and re-configuring the load balancer to incorporate newly instantiated services is inefficient and error-prone, and it becomes problematic at scale, especially for services with short lifetimes. Automating the deployment of services quickly becomes a necessity. Automation tools transform the traditional manual approach into simpler automated scripts and tasks that do not require deep familiarity or expertise.
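The sketch below shows the kind of task that replaces manual reconfiguration: reconcile the load balancer's backend pool with the service instances that actually exist, adding newly instantiated containers and removing ones that have gone away. The service registry and load balancer REST endpoints are hypothetical placeholders.

    # Assumed-API sketch: keep an ADC backend pool in sync with a service
    # registry instead of editing the pool by hand.
    import requests

    LB_API = "https://lb.example.com/api/pools/orders"        # assumed ADC endpoint
    DISCOVERY_API = "https://registry.example.com/instances"  # assumed registry

    def reconcile_backends():
        live = {i["address"] for i in
                requests.get(f"{DISCOVERY_API}?service=orders", timeout=5).json()}
        configured = {m["address"] for m in
                      requests.get(f"{LB_API}/members", timeout=5).json()}

        for addr in live - configured:   # newly instantiated containers
            requests.post(f"{LB_API}/members", json={"address": addr, "port": 8080}, timeout=5)
        for addr in configured - live:   # containers that have been torn down
            requests.delete(f"{LB_API}/members/{addr}", timeout=5)

    if __name__ == "__main__":
        reconcile_backends()

Run on a schedule or triggered by deployment events, this kind of script removes the error-prone manual step entirely.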

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

If you don’t monitor, you won’t know: When deploying microservices that may affect many applications, proactive monitoring, analytics, and troubleshooting become critical before problems become business disruptions. Monitoring may include information about a microservice such as latency, security issues, service uptime, and access problems.
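As a rough illustration of that monitoring, the sketch below probes a microservice health endpoint on a loop and flags unavailability or latency that exceeds a budget. The endpoint URL and threshold are assumptions for illustration.

    # Simple availability/latency probe; endpoint and budget are placeholders.
    import time
    import requests

    ENDPOINT = "https://api.example.com/orders/health"  # assumed health endpoint
    LATENCY_BUDGET_MS = 250

    def probe_once():
        start = time.monotonic()
        try:
            resp = requests.get(ENDPOINT, timeout=5)
            latency_ms = (time.monotonic() - start) * 1000
            up = resp.ok
        except requests.RequestException:
            latency_ms, up = None, False
        if not up:
            print(f"ALERT: {ENDPOINT} is unreachable")
        elif latency_ms > LATENCY_BUDGET_MS:
            print(f"WARN: latency {latency_ms:.0f} ms exceeds budget")
        else:
            print(f"OK: {latency_ms:.0f} ms")

    if __name__ == "__main__":
        while True:
            probe_once()
            time.sleep(30)  # probe every 30 seconds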

Businesses must support complex IT architectures for application delivery in a secure manner. Configuring, deploying, and maintaining cross-domain microservices can be error-prone, costly, and time-consuming. Organizations should ensure security with a layered approach to security controls. To simplify configuration and management of these microservices, IT should adopt automation, visibility, analytics, and orchestration best practices and tools that fit in with their agile and DevOps processes. The goal is to keep the secure and controlled environment mandated by IT without sacrificing the development agility and automation that DevOps requires.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery, Cloud Computing, Cloud Security

Embarking on a Cloud Journey: Expect More from Your Load Balancer

November 13, 2018 — by Prakash Sinha


Many enterprises are in transition to the cloud, whether building their own private cloud, managing a hybrid environment (both physical and virtualized), or deploying on a public cloud. In addition, there is a shift from infrastructure-centric environments to application-centric ones. In a fluid development environment of continuous integration and continuous delivery, where services are frequently added or updated, the new paradigm requires support for needs across multiple environments and across many stakeholders.

When development teams choose unsupported cloud infrastructure without IT involvement, the network team loses visibility and control over security and cost, yet remains accountable for the service level agreement (SLA) once the developed application goes live.

The world is changing. So should your application delivery controller.

Application delivery and load balancing technologies have been strategic components providing availability, optimization, security and latency reduction for applications. In order to enable seamless migration of business-critical applications to the cloud, the same load balancing and application delivery infrastructure must now address the needs of continuous delivery/integration, hybrid and multi-cloud deployments.

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

The objective here is not to block agile development and the use of innovative services, but to have a controlled environment that gives the organization the best of both DevOps and IT: a secure and controlled environment that still enables agility. The benefits speak for themselves:

Reduced shadow IT initiatives
To remain competitive, every business needs innovative technology that is consumable by the end user. Oftentimes, employees are driven to use shadow IT services because going through approval processes is cumbersome, and the available approved technology is complex to learn and use. If users cannot get quick service from IT, they will go to a cloud service provider for what they need. Sometimes this yields a short-term benefit, but it can cause issues with the organization’s security, cost controls and visibility in the long term. Automation and self-service address CI/CD demands and reduce the need for application teams to acquire and use their own unsupported ADCs.

Flexibility and investment protection at a predictable cost
Flexible licensing is one of the critical elements to consider. As you move application delivery services and instances to the cloud, you should be able to reuse existing licenses across a hybrid deployment. Many customers initially deploy on a public cloud, but cost unpredictability becomes an issue once the services scale with usage.

[You may also like: Load Balancers and Elastic Licensing]

Seamless integration with an SDDC ecosystem
As you move to a private or public cloud, you should be able to reuse your investment in the orchestration system of your environment. Many developers are not used to networking or security nomenclature, so using self-service tools with which developers are familiar quickly becomes a requirement.

The journey from a physical data center to the cloud may sometimes require investments in new capabilities to enable migration to the new environment. If application delivery controller capacity is no longer required in the physical data center, it can be automatically reassigned. Automation and self-service applications address the needs of various stakeholders, as well as the flexible licensing and cost-control aspects of this journey.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery, Cloud Computing

Digital Transformation – Take Advantage of Application Delivery in Your Journey

October 31, 2018 — by Prakash Sinha


The adoption of new technologies is accelerating business transformation. In its essence, the digital transformation of businesses uses technologies to drive significant improvement in process effectiveness.

Cloud computing is one of the core technologies for Digital Transformation

Increasing maturity of cloud-based infrastructure enables organizations to deploy business-critical applications in public and private clouds. According to a forecast from the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, total spending on IT infrastructure for deployment in cloud environments is expected to total $46.5 billion in 2017, with year-over-year growth of 20.9%. Public cloud data centers will account for the majority of this spending, 65.3%, growing at the fastest annual rate of 26.2%.

Many enterprises are in the midst of this transition to the cloud, whether moving to a public cloud, building their own private cloud or managing a hybrid deployment. In this fluid environment, where new services are frequently added and old ones updated, the new paradigm requires support for needs across multiple environments and across many constituencies: IT administrators, application developers, DevOps and tenants.

[You might also like: Optimizing Multi-Cloud, Cross-DC Web Apps and Sites]

Nobody Said It Was Easy!

However, the process of migrating applications to the cloud is not easy. The flexibility and cost benefits that drive the shift to the cloud also present many challenges: security, business continuity, application availability, latency reduction, visibility, SLA guarantees and isolation of resources. Other aspects that require thought include licensing, lock-in with a cloud service provider, architecture for hybrid deployments, shadow IT, automation, user access, user privacy and compliance needs.

One of the main challenges for enterprises moving to a cloud infrastructure is how to guarantee a consistent quality of experience to consumers across multiple applications, many of which are business critical, developed using legacy technologies and still hosted on premises.

Along with the quality of experience, organizations need to look at security policies. Sometimes policies require integration with a cloud service provider’s infrastructure, or new capabilities to complement the on-premises architecture, while addressing denial of service, application security and compliance for the new attack surface exposed by applications in the cloud.

Convenience and productivity improvements are often the initial drivers for adopting IT services in the cloud. One way to address security and availability concerns for an enterprise embarking on the cloud journey is to ensure that security and availability are also included as part of IT self-service, orchestration and automation systems, without requiring additional effort from those driving adoption of cloud-based IT applications.

The World of Application Delivery Has Changed to Adapt!

Application delivery and load balancing technologies have been the strategic components providing availability, optimization, security and latency reduction for applications. In order to enable seamless migration of business-critical applications to the cloud, the same load balancing and application delivery infrastructure has to evolve to address the needs of continuous delivery/integration, hybrid and multi-cloud deployments.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Acceleration & Optimization, Application Delivery, Security

DevSecOps Automation? The Roadmap

October 18, 2018 — by Benjamin Maze


In my previous blog post, I addressed the need for, and the process of, creating applications faster and building an adaptive infrastructure that suits real consumption. Today I will highlight how automation can help guarantee your security level, keep the infrastructure adaptive, and manage traffic irregularities.

How Can I Guarantee My Security Level?

By using automation, we can also guarantee a level of security on any new application by automatically deploying security rules when a new app is published. There is no risk of human error or of forgetting something: when a new app is deployed, the security is attached automatically. This is very powerful, but it needs to be very “industrial.” Exceptions are not the friend of automation, so it is important to standardize applications for use with automation.
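As a sketch of what “security attached automatically” can look like, the snippet below is a hypothetical post-deployment pipeline step that attaches a standardized WAF policy and DDoS profile to every newly published application. The security controller API, policy names, and payload fields are assumptions for illustration.

    # Illustrative CI/CD hook: attach a baseline security policy whenever a
    # new application is published, so nothing is left unprotected.
    import sys
    import requests

    SECURITY_API = "https://security-controller.example.com/api"  # assumed controller

    def attach_baseline_policy(app_name: str, vip: str):
        payload = {
            "application": app_name,
            "virtual_ip": vip,
            "waf_policy": "baseline-owasp",   # standardized policy, no exceptions
            "ddos_profile": "default",
        }
        resp = requests.post(f"{SECURITY_API}/protections", json=payload, timeout=10)
        resp.raise_for_status()
        print(f"Security attached to {app_name} at {vip}")

    if __name__ == "__main__":
        # e.g. called from the pipeline: attach_security.py orders 203.0.113.10
        attach_baseline_policy(sys.argv[1], sys.argv[2])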

IoT is a leading source of DDoS attacks because devices and apps are provisioned very quickly, with little or no attention paid to security. Many botnets target IoT to gain access to large numbers of devices; hackers exploit vulnerable apps and devices to take control of them and build very large botnets.

Radware can provide automated security services for anti-DDoS and WAF protection on top of ADC services (load balancing, SSL offload, reverse proxy, L7 modification, etc.).

How Can I Have an Adaptive Infrastructure?

With Google Kubernetes, it is very easy to add more containers (or pods) to an application in order to handle more client connections. Kubernetes has its own load balancing mechanisms to share the load between several containers. However, this service is very limited and does not provide all the features we need on a reverse proxy to expose the application to the rest of the world (NAT, SSL offload, L7 load balancing, etc.).

By using an intermediate orchestrator for L4-L7 services such as load balancing, DDoS protection and WAF, acting as an abstraction layer, the orchestrator can be notified of any changes from Kubernetes and trigger automation workflows to update the infrastructure accordingly (a rough sketch follows the list below):

  • Modify/create/scale up/scale down an ADC service to expose the app externally with full capabilities, including SSL, NAT, L7 modification, L7 load balancing, persistence, cache, and TCP optimization
  • Modify/create/scale up/scale down DDoS or WAF services to protect this newly exposed application
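Here is a minimal sketch of that abstraction-layer pattern, under assumptions: it watches Kubernetes Endpoints for a service using the official Python client and, on every change, calls an L4-L7 orchestrator so the exposed virtual service tracks the pods that actually exist. The orchestrator URL and payload are hypothetical; only the Kubernetes watch calls are real library APIs.

    # Watch Kubernetes for pod changes and push them to an assumed
    # L4-L7 orchestrator that reconfigures ADC/WAF/DDoS services.
    import requests
    from kubernetes import client, config, watch

    ORCHESTRATOR_API = "https://orchestrator.example.com/api/workflows/manage-virtual-service"  # assumed

    def watch_and_sync(namespace="default", service="vod-frontend"):
        config.load_kube_config()   # or config.load_incluster_config() inside the cluster
        v1 = client.CoreV1Api()
        for event in watch.Watch().stream(v1.list_namespaced_endpoints, namespace=namespace):
            ep = event["object"]
            if ep.metadata.name != service:
                continue
            addresses = [a.ip for s in (ep.subsets or []) for a in (s.addresses or [])]
            # Push the new pod list to the orchestrator, which updates the
            # exposed virtual service (NAT, SSL offload, L7 rules, WAF, DDoS).
            requests.post(ORCHESTRATOR_API,
                          json={"service": service, "real_servers": addresses},
                          timeout=10)

    if __name__ == "__main__":
        watch_and_sync()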

How Can I Manage Exceptional Events That Temporarily Increase My Traffic?

Consider the example of a VOD service: it will be used differently depending on the time of day. It will experience huge peaks of traffic in the evening when people are watching their TVs, but during the day the traffic will dramatically decrease as most people are at work.

If you size your application and infrastructure to handle your evening traffic peak, it will cost a lot, and that compute will sit unused during the day; this is not optimized.

With automation, we can do something smarter by provisioning compute resources in line with real needs. That means my application will run on a few servers during the day and on several servers during the evening. If I use the public cloud to host my application, I will pay only for my consumption and will not pay for a lot of computing power that I don’t use during the day.

Again, this agility should exist not only at the application layer but also at the infrastructure layer. My ADC, anti-DDoS, or WAF services should not be statically sized for my evening traffic peak but should adapt to my real load.

Using an intermediate automation orchestrator can provide an intelligent workflow to follow this trend. In the evening, it can automatically provision new ADC, DDoS, or WAF services on new hosts to provide more computing power and handle the larger number of client requests, then de-provision them when they are no longer needed.
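A toy sketch of that elasticity loop is shown below: it provisions extra ADC/WAF capacity when measured load crosses a threshold and releases it when load drops. The metrics endpoint, orchestrator endpoint, and thresholds are assumptions for illustration.

    # Assumed-API sketch of load-driven provisioning of ADC/WAF capacity.
    import time
    import requests

    METRICS_API = "https://metrics.example.com/api/requests-per-second"     # assumed
    ORCHESTRATOR_API = "https://orchestrator.example.com/api/adc-capacity"  # assumed
    SCALE_UP_RPS, SCALE_DOWN_RPS = 5000, 1000

    def autoscale_forever():
        extra_capacity = False
        while True:
            rps = requests.get(METRICS_API, timeout=5).json()["rps"]
            if rps > SCALE_UP_RPS and not extra_capacity:
                requests.post(ORCHESTRATOR_API,
                              json={"action": "provision", "instances": 2}, timeout=10)
                extra_capacity = True   # evening peak: add ADC/WAF instances
            elif rps < SCALE_DOWN_RPS and extra_capacity:
                requests.post(ORCHESTRATOR_API,
                              json={"action": "deprovision", "instances": 2}, timeout=10)
                extra_capacity = False  # daytime: release unused compute
            time.sleep(60)

    if __name__ == "__main__":
        autoscale_forever()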

It is also important to have a flexible licensing model, with a license server that dynamically dispatches licenses to the ADC, WAF, or DDoS services.

Conclusion

With an intermediate orchestrator, Radware technologies can be used in complex SDDC environments. It provides an abstraction layer, based on workflows, that simplifies integration with external tools like Ansible, Cisco ACI, Juniper Contrail, OpenStack, and Google Kubernetes.

vDirect exposes a REST API that is used to trigger a workflow. For example, a workflow can “manage virtual service” with three actions:

  • Create a new virtual service (real server, server group, load balancing algorithm, health check, DDoS, WAF, etc.)
  • Modify an existing virtual service (add a real server, change DDoS rules, change load balancing algorithms, etc.)
  • Delete an existing virtual service (delete the ADC, DDoS, and WAF configuration)

From an external orchestrator, the integration is very simple: a single REST call to the “manage virtual service” workflow. Given all the necessary parameters, vDirect can perform all the automation on Radware devices such as the ADC, anti-DDoS, and WAF.
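For illustration, a hedged example of that single-call pattern is shown below. The vDirect URL scheme, credentials, and payload fields are assumptions chosen for readability, not the documented API; the point is that one REST call carries everything the workflow needs.

    # Hypothetical single REST call to a "manage virtual service" workflow.
    import requests

    VDIRECT = "https://vdirect.example.com/api/workflow/manage-virtual-service"  # assumed URL

    payload = {
        "action": "create",
        "virtual_service": {"name": "vod-frontend", "vip": "203.0.113.10", "port": 443},
        "real_servers": ["10.0.0.11:8080", "10.0.0.12:8080"],
        "lb_algorithm": "least-connections",
        "health_check": "http-200",
        "waf_policy": "baseline-owasp",
        "ddos_profile": "default",
    }

    resp = requests.post(VDIRECT, json=payload, auth=("admin", "password"), timeout=30)
    resp.raise_for_status()
    print(resp.json())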

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Delivery, Security, SSL

Adopt TLS 1.3 – Kill Two Birds with One Stone

September 13, 2018 — by Prakash Sinha


Transport Layer Security (TLS) version 1.3 provides significant business benefits by making applications more secure, improving performance, and reducing latency for the client. Changes in how the handshake between client and server is designed have decreased site latency, thanks to a faster handshake and the use of Elliptic Curve (EC) based ciphers that allow faster page load times. TLS 1.3 also enforces forward secrecy, so previously recorded traffic cannot be decrypted if private keys are later compromised.

Transport Level Security – A Quick Recap

Transport Layer Security (TLS) version 1.0, the first standardized version of SSL, was introduced in 1999 and is based on SSL v3.0. TLS 1.0 is obsolete and vulnerable to various security issues, such as downgrade attacks. The Payment Card Industry (PCI) set a deadline of June 30, 2018 to migrate to TLS 1.1 or higher.

TLS 1.1, introduced in 2006, is more secure than TLS 1.0 and protects against certain types of Cipher Block Chaining (CBC) attacks such as BEAST. Some TLS 1.1 implementations are vulnerable to POODLE, a form of downgrade attack. TLS 1.1 also removed certain vulnerable and broken ciphers, such as DES and RC2, and introduced support for forward secrecy, although it is performance intensive.

TLS 1.2, introduced in 2008, added SHA-256 as a hash algorithm, replacing SHA-1, which is considered insecure. It also added support for Advanced Encryption Standard (AES) cipher suites, Elliptic Curve Cryptography (ECC), and Perfect Forward Secrecy (PFS) without a significant performance hit. TLS 1.2 also removed the ability to downgrade to SSL v2.0 (highly insecure and broken).

Why TLS 1.3?

TLS 1.3 is now an approved standard of the Internet Engineering Task Force (IETF). Sites utilizing TLS 1.3 can expect faster user connections than with earlier TLS standards, while making connections more secure thanks to the elimination of obsolete and less secure ciphers, the server dictating the session security, and faster handshake establishment between client and server. TLS 1.3 eliminates the negotiation over which encryption to use: in the initial connection the server provides an encryption key, the client provides a session key, and then the connection is made. However, TLS 1.3 provides a secure means to fall back to TLS 1.2 if the endpoint does not support TLS 1.3.
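As a minimal sketch of that fallback behavior on the server side, the snippet below (Python 3.7+ with OpenSSL 1.1.1 or later) builds a TLS context that negotiates TLS 1.3 with capable clients while still accepting TLS 1.2 as the floor. The certificate and key file names are placeholders.

    # Server-side TLS context: prefer TLS 1.3, allow fallback to TLS 1.2.
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # secure fallback floor
    context.maximum_version = ssl.TLSVersion.TLSv1_3   # prefer the newer handshake
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    # The context is then used to wrap the listening socket, e.g.
    # context.wrap_socket(server_socket, server_side=True).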

[You might also like: High-Performance Visibility into SSL/TLS Traffic]

TLS 1.3 – Recommendations

Organizations face serious strategic challenges in achieving SSL/TLS acceleration and effectively addressing the growing number and complexity of encrypted web attacks. We recommend migrating to TLS 1.3 to take advantage of the significant business benefits and security that the newer standard provides. However, as with any transition to a new standard, be mindful of the adoption risks.

Evaluate the Risks and Plan Migration

The risks include incompatibility between client and server due to poor implementations and bugs. You may also need to carefully evaluate the impact on devices that implement inspection based on static RSA keys, and on products that protect against data leaks or implement out-of-path web application protection based on a copy of decrypted traffic.

  • Adopt a gradual deployment of TLS 1.3 – a crawl-walk-run approach of deploying in QA environments, test sites, and low-traffic sites first (a quick client-side check like the sketch after this list can confirm what each site negotiates)
  • Evaluate or query “middle box” vendors for compatibility with TLS 1.3; currently, only active TLS 1.3 terminators can provide compatibility
  • Utilize Application Delivery Controllers (ADCs) to terminate TLS 1.3 and front servers that are not capable of supporting TLS 1.3
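The sketch below is a simple client-side check for that gradual rollout: connect to a site and report which TLS version was actually negotiated. The hostname is a placeholder.

    # Report the TLS version negotiated with a given host.
    import socket
    import ssl

    def negotiated_tls_version(host: str, port: int = 443) -> str:
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as ssock:
                return ssock.version()   # e.g. "TLSv1.3" or "TLSv1.2"

    if __name__ == "__main__":
        print(negotiated_tls_version("www.example.com"))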

TLS 1.3 provides improved security, forward secrecy to protect data even if private keys are compromised, improved latency, and better performance.

Read “2017-2018 Global Application & Network Security Report” to learn more.


Application Delivery

Considerations for Load Balancers When Migrating Applications to the Cloud

July 31, 2018 — by Prakash Sinha


According to a forecast from the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, total spending on IT infrastructure for deployment in cloud environments is expected to total $46.5 billion in 2017, with year-over-year growth of 20.9%. Public cloud data centers will account for the majority of this spending, 65.3%, growing at the fastest annual rate of 26.2%. Off-premises private cloud environments will represent 13% of cloud IT infrastructure spending, growing at 12.7% year over year. On-premises private clouds will account for 62.6% of spending on private cloud IT infrastructure and will grow 11.5% year over year in 2017.

Application Delivery

Application SLA – Knowing is Half the Battle

January 4, 2018 — by Frank Yue


In today’s world, digital transformation has changed how people interact with businesses and conduct their work. They interface with applications on the network. These applications need to be responsive and provide a quality of experience that lets people appreciate the business and the services it provides. When an application degrades in performance, it negatively affects the user’s experience, and that negative experience translates into lost revenue, brand value, and worker productivity.

Application Delivery

Marrying the Business Need With Technology Drive, Part One: Choosing Your Cloud

November 30, 2017 — by Daniel Lakier


Several years ago, the monolithic approach to application development fell out of vogue because time to market became the key success metric in our ever-changing world. Agile development started to become the norm, and the move to DevOps was born. At the same time as this change was taking place, there was another groundbreaking development: the advent of public clouds. Either change by itself was industry-impacting, but the two happening at the same time, each enabling the other, changed everything.

Application Delivery

The ADC Key Master Delegates Application Security Functions

August 3, 2017 — by Frank Yue


One of the responsibilities of the Key Master is to provide access to the sensitive and secure information hidden within the locked facilities. In my last post, I explained why the application delivery controller (ADC) is the Key Master for SSL/TLS communications on the internet. It is the responsibility of the ADC to manage and distribute access to the different essential security services.