
DDoS

How to Choose a Cloud DDoS Scrubbing Service

August 21, 2019 — by Eyal Arazi


Buying a cloud-based security solution is more than just buying a technology. When you buy a physical product, you care mostly about its immediate features and capabilities. A cloud-based service, by contrast, is more than just lines on a spec sheet; it is a combination of multiple elements, all of which must work in tandem to guarantee performance.

Cloud Service = Technology + Network + Support

There are three primary elements that determine the quality of a cloud security service: technology, network, and support.

Technology provides the underlying security and protection capabilities; the network provides the solid foundation on which the technology runs; and the operations & support component brings the two together and keeps them working.

[You may also like: Security Considerations for Cloud Hosted Services]

Take any one out, and the other two legs won’t be enough for the service to stand on.

This is particularly true when looking for a cloud-based DDoS scrubbing solution. Distributed Denial of Service (DDoS) attacks have distinct features that make them different from other types of cyber-attacks, so a cloud-based DDoS protection service must meet specific requirements across the full gamut of technology, network, and support.

Technology

As I explained earlier, technology is just one facet of what makes up a cloud security service. However, it is the building block on which everything else rests.

The quality of the underlying technology is the most important factor in determining the quality of protection. It is the technology that determines how quickly an attack will be detected; it is the technology that determines whether the service can tell the difference between a spike in legitimate traffic and a DDoS attack; and it is the technology that determines whether defenses can adapt to attack patterns in time to keep your application online.

[You may also like: Why You Still Need That DDoS Appliance]

To make sure that your protection is up to the task, there are a few core features you want your cloud service to provide:

  • Behavioral detection: It is often difficult to tell the difference between a legitimate surge in customer traffic – say, during peak shopping periods – and a surge caused by a DDoS attack. Rate-based detection cannot tell the difference, resulting in false positives. Therefore, behavioral detection, which looks not just at traffic rates but also at non-rate behavioral parameters, is a must-have capability (a minimal sketch follows this list).
  • Automatic signature creation: Attackers are relying more and more on multi-vector and ‘hit-and-run’ burst attacks, which frequently switch between different attack methods. Any defense mechanism based on manual configuration will fail because it cannot keep up with the changes. Only defenses that provide automatic, real-time signature creation can tailor protections to the specific characteristics of such attacks quickly enough.
  • SSL DDoS protection: As more and more internet traffic becomes encrypted – over 85% according to the latest estimates – protection against encrypted DDoS floods becomes ever more important. Attackers can leverage encryption to launch potent DDoS floods that quickly overwhelm server resources. Therefore, protection against SSL-based DDoS attacks is key.
  • Application-layer protection: As more and more services migrate online, application-layer (L7) DDoS attacks are increasingly used to take them down. Many traditional DDoS mitigation services look only at network-layer (L3/4) protocols, but up-to-date protection must include the application layer as well.
  • Zero-day protection: Finally, attackers are constantly finding new ways of bypassing traditional security mechanisms and hitting organizations with attack methods never seen before. Even by making small changes to attack signatures, hackers can craft attacks that are not recognized by manual signatures. That’s why zero-day protection features, which can adapt to new attack types, are an absolute must-have.
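To make the first point concrete, here is a minimal sketch of the difference between rate-based and behavioral detection. Every parameter name and threshold below is an illustrative assumption, not any vendor’s implementation:

```python
# Minimal sketch: rate-based vs. behavioral DDoS detection.
# All parameters and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrafficSample:
    requests_per_sec: float     # rate parameter
    unique_src_ips: int         # non-rate: dispersion of sources
    avg_url_entropy: float      # non-rate: randomness of requested URLs
    session_completion: float   # non-rate: share of clients finishing flows

def rate_based_alarm(sample: TrafficSample, rps_limit: float = 10_000) -> bool:
    # Fires on any spike -- a flash crowd looks identical to an attack.
    return sample.requests_per_sec > rps_limit

def behavioral_alarm(sample: TrafficSample, baseline: TrafficSample) -> bool:
    # A legitimate peak raises the rate but keeps client behavior normal;
    # an attack typically skews several non-rate parameters at once.
    suspicious = 0
    if sample.requests_per_sec > 3 * baseline.requests_per_sec:
        suspicious += 1
    if sample.unique_src_ips > 5 * baseline.unique_src_ips:
        suspicious += 1  # sudden source dispersion (botnet-like)
    if sample.avg_url_entropy > 1.5 * baseline.avg_url_entropy:
        suspicious += 1  # randomized URLs, typical of L7 floods
    if sample.session_completion < 0.5 * baseline.session_completion:
        suspicious += 1  # clients that never finish handshakes/flows
    return suspicious >= 2  # require corroborating signals, not rate alone
```

The key idea: the alarm fires only when several independent behavioral signals corroborate the rate spike, which is what lets a behavioral engine wave through a legitimate holiday rush that a pure rate threshold would flag.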

[You may also like: Modern Analytics and End-to-End Visibility]

Network

The next building block is the network. Whereas the technology stops the attack itself, it is the network that scales the service out and deploys it globally. Here, too, there are specific requirements that are uniquely important in the case of DDoS scrubbing networks:

  • Massive capacity: When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new peaks. That is why having large-scale, massive capacity at your disposal is an absolute requirement for stopping attacks.
  • Dedicated capacity: It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers use their CDN capacity, which is already widely utilized, for DDoS mitigation as well. It is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load balancing.
  • Global footprint: Fast response and low latency are crucial components of service performance, and a critical component of latency is the distance between the customer and the host. Therefore, in order to minimize latency, the scrubbing center should be as close as possible to the customer, which can only be achieved with a globally distributed network with a large footprint (see the sketch after this list).
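As a back-of-the-envelope illustration of the footprint point, this sketch picks the scrubbing center closest to a client by great-circle distance. The center names and coordinates are hypothetical, and real services typically steer traffic with anycast or DNS rather than explicit selection:

```python
# Illustrative sketch: why a larger global footprint lowers added latency.
# Scrubbing-center names and coordinates are made-up examples.
import math

SCRUBBING_CENTERS = {  # hypothetical PoPs: (latitude, longitude)
    "us-east": (39.0, -77.5),
    "eu-west": (50.1, 8.7),
    "ap-southeast": (1.3, 103.8),
}

def distance_km(a, b):
    # Haversine great-circle distance between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_center(client):
    # Divert traffic to the closest scrubbing center.
    return min(SCRUBBING_CENTERS,
               key=lambda c: distance_km(client, SCRUBBING_CENTERS[c]))

print(nearest_center((35.7, 139.7)))  # Tokyo client -> "ap-southeast"
```

With only three PoPs, even the nearest center can be thousands of kilometers (tens of milliseconds of round-trip time) away; every PoP added to the footprint shrinks that worst case.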

Support

The final piece of the ‘puzzle’ of providing a high-quality cloud security network is the human element; that is, maintenance, operation and support.

Beyond the cold figures of technical specifications and the bits and bytes of network capacity, it is the service element that ties together the technology and the network, and makes sure they keep working in tandem.

[You may also like: 5 Key Considerations in Choosing a DDoS Mitigation Network]

Here, too, there are a few key elements to look at when considering a cloud security network:

  • Global Team: Maintaining global operations of a cloud security service requires a team large enough to ensure 24x7x365 operations. Moreover, sophisticated security teams use a ‘follow-the-sun’ model, with team members distributed strategically around the world, to make sure that experts are always available regardless of time or location. Only teams that reach a certain size – and companies that reach a certain scale – can guarantee this.
  • Team Expertise: Apart from the sheer number of team members, it is also their expertise that matters. Cyber security is a discipline, and DDoS protection, in particular, is a specialization. Only a team with a distinguished, long track record of protecting specifically against DDoS attacks can ensure that you have the staff, skills, and experience required to be fully protected.
  • SLA: The final qualification is the set of service guarantees provided by your cloud security vendor. Many service providers make extensive guarantees but fall woefully short when it comes to backing them up. The Service Level Agreement (SLA) is your guarantee that your service provider is willing to put their money where their mouth is. A high-quality SLA must provide individual, measurable metrics for attack detection, diversion (if required), alerting, mitigation, and uptime (a hypothetical illustration follows this list). Falling short of those should call into question your vendor’s ability to deliver on their promises.
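As a hypothetical illustration of what individual, measurable metrics can look like, the sketch below encodes SLA commitments as data and checks measured values against them. All figures are placeholders, not any provider’s actual guarantees:

```python
# Hypothetical SLA commitments; every figure is a placeholder.
SLA_COMMITMENTS = {            # metric -> limit (seconds, or percent)
    "detection_seconds": 10,   # time to detect an attack
    "diversion_seconds": 60,   # time to divert traffic (if required)
    "alerting_seconds": 120,   # time to notify the customer
    "mitigation_seconds": 300, # time to full mitigation
    "uptime_percent": 99.999,  # minimum availability
}

def sla_breaches(measured: dict) -> list[str]:
    """Return the metrics on which the provider missed its commitment."""
    breaches = []
    for metric, limit in SLA_COMMITMENTS.items():
        value = measured.get(metric)
        if value is None:
            continue
        if metric == "uptime_percent":
            if value < limit:   # uptime is a floor, not a ceiling
                breaches.append(metric)
        elif value > limit:
            breaches.append(metric)
    return breaches

print(sla_breaches({"detection_seconds": 25, "uptime_percent": 99.999}))
# ['detection_seconds']
```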

A high-quality cloud security service is more than the sum of its parts. It is the technology, network, and service all working in tandem – and firing on all cylinders – to provide superior protection. Falling short on any one element can jeopardize the quality of the protection delivered to customers. Use the points outlined above to ask whether your cloud security vendor has all the right pieces to provide quality protection; if they don’t, perhaps it is time to consider alternatives.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.


Application Delivery

Modern Analytics and End-to-End Visibility

July 3, 2019 — by Prakash Sinha


Many Cloud Service Providers (CSPs) and large enterprises struggle to deliver a committed service level for an application. And without a proper Service Level Agreement (SLA), it is impossible for a tenant to manage an application for his or her own users.

Delivering an SLA without first gaining end-to-end visibility into the application, users and network is asking for trouble. This has long been an area of contention and finger-pointing between network and application teams. Solutions for monitoring application performance and SLAs are expensive, and the task is complex, requiring the insertion of hardware probes and/or the integration of software agents into every application server.

The Case for Application Analytics

Application analytics provides deep insights into application, user and network behavior and the root cause of an SLA breach by capturing, analyzing and visualizing application metrics.

[You may also like: Application SLA: Knowing Is Half the Battle]

When deploying applications, particular attention is required to see when things are slowing down, so proactive monitoring becomes critical. Proactive monitoring and troubleshooting through actionable insights not only helps in configuring the appropriate technical capability to address the issue at hand; this visibility into application performance also saves costs, for example by de-provisioning unused resources when they are not needed or by mitigating an attack in progress.

An SLA breach may be due to a device outage or configuration issues, problems of access from a particular geography, a specific device type, a particular data center, or something in between. Other causes may be SSL handshake issues or security attacks that impact application performance due to a lack of resources. It is important to know about these issues before they become a business disruption.

In a multi-tenant environment, if the environments are not segregated, tenants may start competing for shared resources during peak utilization. In an environment where tenants share resources, a potential spike in resource consumption or a wrong configuration change of a single tenant may affect all other tenants – severely impacting an application’s SLA and availability.
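A minimal sketch of that segregation idea: give each tenant a hard quota, so a single tenant’s spike is rejected at its own ceiling instead of draining the shared pool. The class and the limits below are illustrative assumptions, not a specific product mechanism:

```python
# Sketch of per-tenant resource segregation; names and limits are illustrative.
class TenantQuota:
    def __init__(self, limits: dict[str, int]):
        self.limits = limits                   # tenant -> max concurrent units
        self.in_use = {t: 0 for t in limits}

    def acquire(self, tenant: str) -> bool:
        # Reject (or queue) requests beyond the tenant's own quota instead
        # of letting them consume the shared pool.
        if self.in_use[tenant] >= self.limits[tenant]:
            return False
        self.in_use[tenant] += 1
        return True

    def release(self, tenant: str) -> None:
        self.in_use[tenant] = max(0, self.in_use[tenant] - 1)

quota = TenantQuota({"tenant-a": 100, "tenant-b": 100})
# tenant-a's spike hits its own ceiling; tenant-b is unaffected.
```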

End-to-End Visibility

Application Delivery Controllers (ADCs) sit at the intersection of the network and applications. ADCs act as sensors for changing user demands on the applications – for example, detecting increased user latency, a lack of available application resources, a throughput limit being reached, an outage of a specific service, or a security attack in progress.

[You may also like: 6 Must-Have Metrics in Your SLA]

In order to detect any application performance issues in real-time before your customers experience them, it is essential to have an end-to-end monitoring capability that provides actionable insights and alerts through visualization. The ADC can act upon this telemetry to trigger automation and orchestration systems to program the applications or the network elements as needed.
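A simple sketch of such a telemetry-driven loop is shown below. The metric names, thresholds, and the scale_out/alert callbacks are assumptions for illustration, not a specific ADC’s API:

```python
# Sketch of an ADC-style telemetry loop feeding an orchestration hook.
# Thresholds and callbacks are illustrative assumptions.
def evaluate_telemetry(sample: dict, scale_out, alert):
    # sample: {"p95_latency_ms": ..., "throughput_pct": ..., "healthy_servers": ...}
    if sample["p95_latency_ms"] > 500:
        alert("user latency breaching SLA")
        scale_out(extra_instances=1)      # act before customers notice
    if sample["throughput_pct"] > 85:
        alert("approaching throughput limit")
        scale_out(extra_instances=1)
    if sample["healthy_servers"] == 0:
        alert("service outage: no healthy backends")

evaluate_telemetry(
    {"p95_latency_ms": 620, "throughput_pct": 40, "healthy_servers": 3},
    scale_out=lambda extra_instances: print(f"provisioning {extra_instances} server(s)"),
    alert=print,
)
```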

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Delivery

Application SLA: Knowing Is Half the Battle

April 4, 2019 — by Radware


Applications have come to define the digital experience. They empower organizations to create new customer-friendly services, unlock data and content and deliver it to users at the time and on the device they desire, and provide a differentiator over the competition.

Fueling these applications is the “digital core,” a vast plumbing infrastructure that includes networks, data repositories, Internet of Things (IoT) devices and more. If applications are a cornerstone of the digital experience, then managing and optimizing the digital core is the key to delivering these apps to the digitized user. When applications aren’t delivered efficiently, users can suffer from a degraded quality of experience (QoE), resulting in a tarnished brand, negatively affecting customer loyalty and lost revenue.

Application delivery controllers (ADCs) are ideally situated to ensure QoE, regardless of the operational scenario, by allowing IT to actively monitor and enforce application SLAs. The key is to understand the role ADCs play and the capabilities required to ensure the digital experience across various operational scenarios.

Optimize Normal Operations

Under normal operational conditions, ADCs optimize application performance, control and allocate resources to those applications and provide early warnings of potential issues.

[You may also like: 6 Must-Have Metrics in Your SLA]

For starters, any ADC should deliver web performance optimization (WPO) capabilities to turbocharge the performance of web-based applications. It transforms front-end optimization from a lengthy and complex process into an automated, streamlined function. Caching, compression, SSL offloading and TCP optimization are all key capabilities and will enable faster communication between the client and server while offloading CPU intensive tasks from the application server.

Along those same lines, an ADC can serve as a “bridge” between the web browsers that deliver web-based applications and the backend servers that host them. For example, HTTP/2 is the new standard in network protocols. ADCs can serve as a gateway between web browsers that support HTTP/2 and backend servers that still don’t, optimizing performance to meet application SLAs.

Prevent Outages

Outages are few and far between, but when they occur, maintaining business continuity through server load balancing, cloud elasticity and disaster recovery is critical. ADCs play a critical role across all three, executing and automating these processes during a time of crisis.

[You may also like: Security Pros and Perils of Serverless Architecture]

If an application server fails, server load balancing should automatically redirect the client to another server. Likewise, in the event that an edge router or network connection to the data center fails, an ADC should automatically redirect to another data center, ensuring the web client can always access the application server even when there is a point of failure in the network infrastructure.
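The sketch below illustrates that failover logic under stated assumptions: endpoints are "host:port" strings, and a plain TCP connect stands in for the health check (real ADCs use richer L7 probes):

```python
# Sketch of server and data-center failover; endpoints are illustrative.
import socket

def is_healthy(endpoint: str, timeout: float = 1.0) -> bool:
    # Probe with a plain TCP connect; real ADCs use richer L7 health checks.
    host, port = endpoint.rsplit(":", 1)
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend(data_centers: dict[str, list[str]], primary_dc: str) -> str | None:
    # Prefer a healthy server in the primary data center...
    for server in data_centers[primary_dc]:
        if is_healthy(server):
            return server
    # ...otherwise redirect to a healthy server in another data center, so
    # the client reaches the application despite the point of failure.
    for dc, servers in data_centers.items():
        if dc != primary_dc:
            for server in servers:
                if is_healthy(server):
                    return server
    return None  # total outage: nothing healthy anywhere

# Example: two data centers, two servers each (hypothetical addresses).
backend = pick_backend(
    {"dc-east": ["10.0.0.1:443", "10.0.0.2:443"],
     "dc-west": ["10.1.0.1:443", "10.1.0.2:443"]},
    primary_dc="dc-east",
)
```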

Minimize Degradation

Application SLA issues are most often the result of network degradation. The ecommerce industry is a perfect example. A sudden increase in network traffic during the holiday season can result in SLA degradation.

Leveraging server load balancing, ADCs provide elasticity by provisioning resources on demand. Additional servers are added to the network infrastructure to maintain QoE and, after the spike has passed, returned to an idle state for use elsewhere. Virtualized ADCs provide an additional benefit: scalability and isolation between vADC instances at the fault, management and network levels.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Finally, cyberattacks are the silent killers of application performance, as they typically create degradation. ADCs play an integrative role in protecting applications to maintain SLAs at all times. They can prevent attack traffic from entering a network’s LAN and prevent volumetric attack traffic from saturating the Internet pipe.

The ADC should be equipped with security capabilities that allow it to be integrated into the security/DDoS mitigation framework. This includes the ability to inspect traffic and network health parameters so the ADC serves as an alarm system that signals attack information to a DDoS mitigation solution. Other interwoven safety features should include integration with web application firewalls (WAFs), the ability to decrypt/encrypt SSL traffic, and device/user fingerprinting.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery

Application SLA – Knowing is Half the Battle

January 4, 2018 — by Frank Yue


In today’s world, digital transformation has changed how people interact with businesses and conduct their work. They interface with applications on the network. These applications need to be responsive and provide a quality of experience that lets people appreciate the business and the services it provides. When an application’s performance degrades, it negatively affects the user’s experience, and that negative experience translates into lost revenue, brand damage, and reduced worker productivity.

Application Delivery

Using SLA Management to Save Time and Improve

December 13, 2017 — by Daniel Lakier


Today more than ever, the success or failure of our digital enterprise rests on whether our customer has a good user experience. No one wants to use something that is difficult to use or unreliable, and most of us don’t want to use something unless the user experience is consistent. All too often, organizations expend all their energy on making their tool/application look good, be easy to use or have great functionality. What they forget is that performance, especially consistent performance, can be just as important. All these things rolled into one are what I call the convenience factors. It’s not a new concept, and many brick-and-mortar companies have failed over the years because of it. If we go back a few years, we can see many examples of technologies or companies succumbing, from an original position of strength, because they never took this perceived convenience/quality factor into account. Three examples:

Application Delivery, WPO

Why Visibility and Automation Matter

February 24, 2016 — by Prakash Sinha


In today’s virtualized world, organizations are looking for a single pane of glass: visibility into user, application and network health, with real-time status and performance data that is relevant. Why is this important? And how does this tie into orchestration and automation?

When provisioning applications and network infrastructure on demand, particular attention is required when responses are slowing down, so proactive monitoring is critical. It’s important to know when an application is not meeting its SLA requirements or when security attacks may be impacting application performance. And it’s important to know about these issues before they become a business disruption.

Application Delivery, WPO

An Introductory Guide to Developing Fault-Tolerant Networks

February 10, 2016 — by Frank Yue


In Greek mythology, the Titan Prometheus was chained to a rock. Every day, an eagle flew down and ate part of his liver. The organ regenerated during the night, replenishing the food source. The liver is one of the few organs in the human body that can spontaneously regenerate. Even more impressive is the fact that while the liver is regenerating and fixing itself, it is still functional. The ancient Greeks knew of this capability and incorporated it into their mythology almost 3,000 years ago.

Application Delivery

Application Service Level Assurance is Like Car Diagnostics

December 1, 2015 — by Frank Yue

I own a car and drive it regularly. I keep it maintained according to a schedule and make sure it is running well. To ensure that vehicles are running properly, auto manufacturers introduced the on-board diagnostics (OBD) standard. Since 1996, the OBD-II standard has been required on all vehicles in the United States, and Europe has the EOBD equivalent.

Application Delivery, NFV, SDN, Virtualization

Why Every Network Needs a Doctor

October 20, 2015 — by Frank Yue

Every year, I go to my doctor for my annual physical. My doctor goes through a standard series of procedures every time. He asks me questions about my diet and my general physical well-being. He puts his cold stethoscope to my chest and listens to my heart and breathing. He checks my blood pressure and heart rate. His lab technician takes samples of blood for tests to collect information related to organs like my liver and pancreas. He may even request tests depending on what he has learned during the exam. Later, my doctor follows up to discuss the results.