DDoS

How to Choose a Cloud DDoS Scrubbing Service

August 21, 2019 — by Eyal Arazi

Buying a cloud-based security solution is more than just buying a technology. Whereas when you buy a physical product, you care mostly about its immediate features and capabilities, a cloud-based service is more than just lines on a spec sheet; rather, it is a combination of multiple elements, all of which must work in tandem, in order to guarantee performance.

Cloud Service = Technology + Network + Support

There are three primary elements that determine the quality of a cloud security service: technology, network, and support.

Technology is crucial for the underlying security and protection capabilities. The network provides the solid foundation on which the technology runs, and the operations & support component brings the two together and keeps them working.

[You may also like: Security Considerations for Cloud Hosted Services]

Take any one out, and the other two legs won’t be enough for the service to stand on.

This is particularly true when looking for a cloud-based DDoS scrubbing solution. Distributed Denial of Service (DDoS) attacks have distinct features that make them different from other types of cyber-attacks. Therefore, cloud-based DDoS protection services have specific requirements, covering the full gamut of technology, network, and support, that are particular to DDoS protection.

Technology

As I explained earlier, technology is just one facet of what makes up a cloud security service. However, it is the building block on which everything else rests.

The quality of the underlying technology is the most important factor in determining the quality of protection. It is the technology that determines how quickly an attack will be detected; it is the technology that determines whether it can tell the difference between a spike in legitimate traffic and a DDoS attack; and it is the technology that determines whether defenses can adapt to attack patterns in time to keep your application online.

[You may also like: Why You Still Need That DDoS Appliance]

In order to make sure that your protection is up to speed, there are a few key core features you want to make sure that your cloud service provides:

  • Behavioral detection: It is often difficult to tell the difference between a legitimate surge in customer traffic – say, during peak shopping periods – and a surge caused by a DDoS attack. Rate-based detection won’t be able to tell the difference, resulting in false positives. Therefore, behavioral detection, which looks not just at traffic rates but also at non-rate behavioral parameters, is a must-have capability (see the sketch following this list).
  • Automatic signature creation: Attackers are relying more and more on multi-vector and ‘hit-and-run’ burst attacks, which frequently switch between different attack methods. Any defense mechanism based on manual configuration will fail because it won’t be able to keep up with the changes. Only defenses that provide automatic, real-time signature creation can keep up with such attacks and tailor defenses to the specific characteristics of each attack.
  • SSL DDoS protection: As more and more internet traffic becomes encrypted – over 85% according to the latest estimates – protection against encrypted DDoS floods becomes ever more important. Attackers can leverage SSL/TLS encryption to launch potent DDoS attacks that quickly overwhelm server resources. Therefore, protection against SSL-based DDoS attacks is key.
  • Application-layer protection: As more and more services migrate online, application-layer (L7) DDoS attacks are increasingly used to take them down. Many traditional DDoS mitigation services look only at network-layer (L3/4) protocols, but up-to-date protection must include application-layer protection as well.
  • Zero-day protection: Finally, attackers are constantly finding new ways of bypassing traditional security mechanisms and hitting organizations with attack methods never seen before. Even by making small changes to attack signatures, hackers can craft attacks that are not recognized by manual signatures. That’s why including zero-day protection features, which can adapt to new attack types, is an absolute must-have.
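
To make the distinction between rate-based and behavioral detection concrete, here is a minimal, illustrative sketch in Python. All thresholds, field names, and scoring rules are hypothetical assumptions chosen for the example, not any vendor's actual detection logic.

```python
# Illustrative sketch only: contrasts naive rate-based detection with a
# simple "behavioral" check that also weighs non-rate parameters.
# Thresholds and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class TrafficWindow:
    requests_per_sec: float   # observed request rate
    new_source_ratio: float   # share of requests from never-seen-before IPs
    url_entropy: float        # diversity of requested URLs (0 = all identical)
    avg_session_depth: float  # pages viewed per session

def rate_based_alert(w: TrafficWindow, rps_threshold: float = 5000.0) -> bool:
    # Rate-based detection: anything above the threshold is "an attack",
    # so a legitimate flash crowd triggers a false positive.
    return w.requests_per_sec > rps_threshold

def behavioral_alert(w: TrafficWindow, baseline_rps: float = 1000.0) -> bool:
    # Behavioral detection: a traffic spike alone is not enough; it must be
    # accompanied by anomalous non-rate parameters (many brand-new sources,
    # everyone hammering the same URL, no real browsing behavior).
    spike = w.requests_per_sec > 3 * baseline_rps
    anomalous = (w.new_source_ratio > 0.8
                 and w.url_entropy < 0.2
                 and w.avg_session_depth < 1.5)
    return spike and anomalous

# A holiday flash crowd: high rate, but normal behavior -> no alert.
flash_crowd = TrafficWindow(8000, 0.3, 0.9, 4.2)
# An HTTP flood: high rate, new sources, one URL, no browsing -> alert.
flood = TrafficWindow(8000, 0.95, 0.05, 1.0)
print(rate_based_alert(flash_crowd), behavioral_alert(flash_crowd))  # True False
print(rate_based_alert(flood), behavioral_alert(flood))              # True True
```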

[You may also like: Modern Analytics and End-to-End Visibility]

Network

The next building block is the network. Whereas the technology stops the attack itself, it is the network that scales out the service and deploys it on a global scale. Here, too, there are specific requirements that are uniquely important in the case of DDoS scrubbing networks:

  • Massive capacity: When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new peaks. That is why having large-scale, massive capacity at your disposal is an absolute requirement for stopping attacks.
  • Dedicated capacity: It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers rely on their CDN capacity, which is already being widely utilized, for DDoS mitigation, as well. Therefore, it is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load-balancing.
  • Global footprint: Fast response and low latency are crucial components of service performance. A critical component of latency, however, is the distance between the customer and the host. Therefore, in order to minimize latency, it is important for the scrubbing center to be as close as possible to the customer, which can only be achieved with a globally distributed network with a large footprint.

Support

The final piece of the ‘puzzle’ of providing a high-quality cloud security network is the human element; that is, maintenance, operation and support.

Beyond the cold figures of technical specifications, and the bits-and-bytes of network capacity, it is the service element that ties together the technology and network, and makes sure that they keep working in tandem.

[You may also like: 5 Key Considerations in Choosing a DDoS Mitigation Network]

Here, too, there are a few key elements to look at when considering a cloud security network:

  • Global Team: Maintaining global operations of a cloud security service requires a team large enough to ensure 24x7x365 operations. Moreover, sophisticated security teams use a ‘follow-the-sun’ model, with team members distributed strategically around the world, to make sure that experts are always available, regardless of time or location. Only teams that reach a certain size – and companies that reach a certain scale – can guarantee this.
  • Team Expertise: Apart from the sheer number of team members, it is also their expertise that matters. Cyber security is a discipline, and DDoS protection, in particular, is a specialization. Only a team with a distinguished, long track record in protecting specifically against DDoS attacks can ensure that you have the staff, skills, and experience required to be fully protected.
  • SLA: The final qualification is the service guarantees provided by your cloud security vendor. Many service providers make extensive guarantees, but fall woefully short when it comes to backing them up. The Service Level Agreement (SLA) is your guarantee that your service provider is willing to put their money where their mouth is. A high-quality SLA must provide individual, measurable metrics for attack detection, diversion (if required), alerting, mitigation, and uptime. Falling short of those should call into question your vendor’s ability to deliver on their promises.

A high-quality cloud security service is more than the sum of its parts. It is the technology, network, and service all working in tandem – and hitting on all cylinders – in order to provide superior protection. Falling short on any one element can jeopardize the quality of protection delivered to customers. Use the points outlined above to ask whether your cloud security vendor has all the right pieces to provide quality protection, and if they don’t, perhaps it is time to consider alternatives.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

DDoS

Why You Still Need That DDoS Appliance

July 2, 2019 — by Eyal Arazi

More and more organizations are adopting cloud-based DDoS defenses and substituting them for their old, premise-based DDoS appliances. Nonetheless, there are still a number of reasons why you might want to keep that DDoS appliance around.

The Rise of Cloud Protection

More and more organizations are deploying cloud-based DDoS mitigation services. Indeed, Frost & Sullivan estimated that by 2021, cloud-based mitigation services will account for 70% of spending on DDoS protection.

The reasons for adopting cloud-based protections are numerous. First and foremost is capacity. As DDoS attacks keep getting bigger, high-volume DDoS attacks capable of saturating the inbound communication pipe are becoming more common. For that reason, having large-scale, cloud-based scrubbing capacity to absorb such attacks is indispensable.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Moreover, cloud-based DDoS defenses are purchased on a pay-as-you-go SaaS subscription model, so organizations can quickly scale up or down, and don’t need to allocate large amounts of capital expenditure (CAPEX) far in advance. In addition, cloud services usually provide easier management and lower overhead than on-prem equipment, and don’t require dedicated staff to manage.

It is no surprise, then, that more and more organizations are looking to the cloud for DDoS protection.

The benefits of the cloud notwithstanding, there are still several key reasons why organizations would still want to maintain their hardware appliances, alongside cloud-based services.

[You may also like: Managing Security Risks in the Cloud]

Two-Way Traffic Visibility

Cloud-based services, by definition, only provide visibility into ingress – or inbound – traffic into the organization. They inspect traffic as it flows through to the origin, and scrub out any malicious traffic they identify. While this is perfectly fine for most types of DDoS attacks, there are certain types of DDoS attacks that require visibility into both traffic channels in order to be detected and mitigated.

Examples of attacks that require visibility into egress traffic in order to be detected include:

  • Out-of-State Protocol Attacks: These attacks exploit weaknesses in the protocol communication process (such as TCP’s three-way handshake) to create “out-of-state” connection requests which exhaust server resources. Although some attacks of this type – such as SYN floods – can be mitigated with visibility into ingress traffic only, other types of out-of-state DDoS attacks – such as ACK floods – require visibility into the outbound channel as well. Visibility into the egress channel is required to detect that these ACK packets are not associated with a legitimate SYN/ACK response, so that they can be blocked.

[You may also like: 5 Key Considerations in Choosing a DDoS Mitigation Network]

  • Reflection/Amplification Attacks: These attacks take advantage of the asymmetric nature of some protocols or request types in order to launch attacks that will exhaust server resources or saturate the outbound communication channel. An example of such an attack is a large file download attack. In this case, visibility into the egress channel is required to detect the spike in outbound traffic flowing from the network.
  • Scanning attacks: Such attacks frequently bear the hallmarks of a DDoS attack, since they flood the network with large numbers of erroneous connection requests. Such scans frequently generate large numbers of error replies, which can clog up the outbound channel. Again, visibility into the outbound traffic is required to measure the error-response rate relative to legitimate inbound traffic, so that defenses can conclude that an attack is taking place (a minimal sketch of this check follows the list).
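
As an illustration of why egress visibility matters, here is a minimal sketch of the kind of error-ratio check described in the last bullet. The thresholds and the way error replies are counted are assumptions made for the example, not a description of any specific product.

```python
# Illustrative sketch only: compares the rate of outbound error replies
# (e.g. TCP RSTs, ICMP port-unreachable) to inbound connection attempts.
# Thresholds are hypothetical.

def scan_suspected(inbound_conn_attempts: int,
                   outbound_error_replies: int,
                   min_attempts: int = 1000,
                   error_ratio_threshold: float = 0.6) -> bool:
    """Flag a likely scan when most inbound attempts produce error replies."""
    if inbound_conn_attempts < min_attempts:
        return False                      # too little traffic to judge
    ratio = outbound_error_replies / inbound_conn_attempts
    return ratio > error_ratio_threshold  # only measurable with egress data

# Normal traffic: few errors relative to attempts.
print(scan_suspected(inbound_conn_attempts=50_000, outbound_error_replies=900))     # False
# Port scan: most probes hit closed ports and generate error replies.
print(scan_suspected(inbound_conn_attempts=50_000, outbound_error_replies=42_000))  # True
```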

Application-layer Protection

Similarly, relying on a premise-based appliance has certain advantages for application-layer (L7) DDoS protection and SSL handling.

Certain types of application-layer (L7) DDoS attacks exploit known protocol weaknesses in order to generate large numbers of forged application requests that exhaust server resources. Examples of such attacks are low-and-slow attacks and application-layer SYN floods, which draw out TCP and HTTP connections to continuously consume server resources.

[You may also like: Layer 7 Attack Mitigation]

Again, although some such attacks can be mitigated by a cloud scrubbing service, mitigating others requires application state-awareness that cloud-based mitigation services usually do not possess.

Using a premise-based DDoS mitigation appliance with application-layer DDoS protection capabilities gives organizations exactly this state-awareness.

SSL DDoS Protection

Moreover, SSL encryption adds another layer of complexity, as the encryption makes it difficult to inspect traffic contents for malicious payloads. In order to inspect traffic contents, cloud-based services must decrypt all traffic, inspect it, scrub out bad traffic, and re-encrypt it before forwarding it to the customer origin.

[You may also like: 5 Must-Have DDoS Protection Technologies]

As a result, most cloud-based DDoS mitigation services either provide no protection at all for SSL-based traffic, or use full-proxy SSL offloading, which requires that customers upload their certificates to the service provider’s cloud infrastructure.

However, performing full SSL offloading in the cloud is frequently a burdensome process which adds latency to customer communications and can violate user privacy. That is why many organizations are hesitant to share – or are unable to share – their SSL keys with third-party cloud service providers.

[You may also like: How to (Securely) Share Certificates with Your Cloud Security Provider]

Again, deploying a premise-based appliance allows organizations to protect against SSL DDoS floods while keeping SSL certificates in-house.

Layered Protection

Finally, using a premise-based hardware appliance in conjunction with a cloud service allows for layered protection in case attack traffic somehow gets through the cloud protection.

Using a premise-based appliance gives the organization direct control over device configuration and management. Although many organizations prefer that this be handled by cloud-based managed services, some organizations (and some security managers) prefer to have this deeper level of control.

[You may also like: DDoS Protection Requires Looking Both Ways]

This control also allows security policy granularity, so that security policies can be fine-tuned exactly to the needs of the organizations, and cover attack vectors that the cloud-layer does not – or cannot – cover.

Finally, this allows for security failover, so that if malicious traffic somehow gets through the cloud mitigation, the appliance will handle it.

The Best Practice: A Hybrid Approach

Ultimately, it is up to each organization to decide which deployment model – appliance, pure cloud, or hybrid – is the optimal solution for it.

Nonetheless, more and more enterprises are adopting a hybrid approach, combining the best of both worlds: the security granularity of hardware appliances and the capacity and resilience of cloud services.

In particular, an increasingly popular option is an always-on hybrid solution, which combines an always-on cloud service with a hardware DDoS mitigation appliance. Combining these defenses allows for constant, uninterrupted protection against volumetric attacks, while also protecting against application-layer and SSL DDoS attacks, reducing exposure of SSL keys, and improving handling of SSL traffic.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

DDoS, Security

Why Hybrid Always-On Protection Is Your Best Bet

June 19, 2019 — by Eyal Arazi

Users today want more. The ubiquity and convenience of online competition mean that customers want everything better, faster, and cheaper. One key component of the user experience is service availability. Customers expect applications and online services to be constantly available and responsive.

The problem, however, is that a new generation of larger and more sophisticated Distributed Denial of Service (DDoS) attacks is making DDoS protection a more challenging task than ever before. Massive IoT botnets are resulting in ever-larger volumetric DDoS attacks, while more sophisticated application-layer attacks find new ways of exhausting server resources. Above all, the ongoing shift to encrypted traffic is creating a new challenge with potent SSL DDoS floods.

Traditional DDoS defenses – either premise-based or cloud-based – provide incomplete solutions which require inherent trade-offs between high-capacity volumetric protection, protection against sophisticated application-layer DDoS attacks, and handling of SSL certificates. The solution, therefore, is adopting a new hybrid DDoS protection model which combines premise-based appliances with an always-on cloud service.

Full Protection Requires Looking Both Ways

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.

[You may also like: DDoS Protection Requires Looking Both Ways]

Attacks such as large-file DDoS attacks, ACK floods, scanning attacks, and others exploit the outbound communication channel for attacks that cannot be identified just by looking at ingress traffic. Such attacks are executed by sending small numbers of inbound requests, which have an asymmetric and disproportionate impact either on the outbound channel, or computing resources inside the network.

SSL is Creating New Challenges

On top of that, SSL/TLS traffic encryption is adding another layer of complexity. Within a short time, the majority of internet traffic has become encrypted. Traffic encryption helps secure customer data, and users now expect security to be part of the service experience. According to statistics published by the Let’s Encrypt project, nearly 80% of worldwide internet traffic is already encrypted, and the rate is constantly growing.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

Ironically, while SSL/TLS is critical for securing user data, it also creates significant management challenges, and exposes services to a new generation of powerful DDoS attacks:

  • Increased Potency of DDoS Attacks: SSL/TLS connections require up to 15 times more resources from the target server than from the requesting host. This means that hackers can launch devastating attacks using only a small number of connections, and quickly overwhelm server resources with SSL floods (see the back-of-the-envelope example after this list).
  • Masking of Data Payload: Moreover, encryption masks – by definition – the internal contents of traffic requests, preventing deep inspection of packets for malicious traffic. This limits the effectiveness of anti-DDoS defense layers, and the types of attacks they can detect. This is particularly true for application-layer (L7) DDoS attacks which hide under the cover of SSL encryption.
  • SSL Key Exposure: Many organizational, national, or industry regulations forbid SSL keys from being shared with third-party entities. This creates a unique challenge for organizations who must provide the most secure user experience while also protecting their SSL keys from exposure.
  • Latency and Privacy Concerns: Offloading of SSL traffic in the cloud is usually a complex and time-consuming task. Most cloud-based SSL DDoS solutions require full decryption of customer traffic by the cloud provider, thereby compromising user privacy and adding latency to customer communications.
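
The following back-of-the-envelope calculation illustrates the asymmetry described in the first bullet. The 15x factor is the estimate quoted above; the botnet size, per-bot handshake rate, and server capacity are arbitrary assumptions chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope illustration (figures are assumptions, not measurements):
# if completing a TLS handshake costs the server roughly 15x the work it costs
# the client, a modest botnet can consume a large server's handshake budget.

client_cost_units = 1.0
server_cost_units = 15.0 * client_cost_units   # assumed asymmetry factor

bots = 1_000
handshakes_per_bot_per_sec = 50                # modest per-bot rate
attack_load = bots * handshakes_per_bot_per_sec * server_cost_units

server_capacity = 200_000                      # handshake cost units/sec (assumed)
print(f"attack load: {attack_load:,.0f} units/s vs capacity {server_capacity:,} units/s")
# -> 750,000 units/s, several times the assumed capacity, from only 50,000 conn/s.
```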

Existing Solutions Provide Partial Coverage

The problem, however, is that existing anti-DDoS defenses are unable to provide high-capacity volumetric protection together with the bi-directional protection required against sophisticated types of attacks.

On-Premise Appliances provide a high level of protection against a wide variety of DDoS attacks, while providing very low latency and fast response. In addition, being on-premise, they allow companies to deal with SSL-based attacks without exposing their encryption keys to the outside world. Since they have visibility into both inbound and outbound traffic, they offer bi-directional protection against symmetric DDoS attacks. However, physical appliances can’t deal with the large-scale volumetric attacks which have become commonplace in the era of massive IoT botnets.

[You may also like: How to (Securely) Share Certificates with Your Cloud Security Provider]

Cloud-based DDoS protection services, on the other hand, possess the bandwidth to deal with large-scale volumetric attacks. However, they offer visibility only into the inbound communication channel. Thus, they have a hard time protecting against bi-directional DDoS attacks. Moreover, cloud-based SSL DDoS defenses – if the vendor has those at all – frequently require that the organization upload their SSL certificates online, increasing the risk of those keys being exposed.

The Optimal Solution: Hybrid Always-On Approach

For companies that place a high premium on the user experience, and wish to avoid even the slightest possible downtime as a result of DDoS attacks, the optimal solution is to deploy an always-on hybrid solution.

The hybrid approach to DDoS protection combines an on-premise hardware appliance with always-on cloud-based scrubbing capacity. This helps ensure that services are protected against any type of attack.

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

Hybrid Always-On DDoS Protection

Compared to the pure-cloud always-on deployment model, the hybrid always-on approach adds multi-layered protection against symmetric DDoS attacks which saturate the outbound pipe, and allows for maintaining SSL certificates on-premise.

Benefits of the Hybrid Always-On Model

  • Multi-Layered DDoS Protection: The combination of a premise-based hardware mitigation device coupled with cloud-based scrubbing capacity offers multi-layered protection at different levels. If an attack somehow gets through the cloud protection layer, it will be stopped by the on-premise appliance.
  • Constant, Uninterrupted Volumetric Protection: Since all traffic passes through a cloud-based scrubbing center at all times, the cloud-based service provides uninterrupted, ongoing protection against high-capacity volumetric DDoS attacks.
  • Bi-Directional DDoS Protection: While cloud-based DDoS protection services inspect only the inbound traffic channel, the addition of a premise-based appliance allows organizations to inspect the outbound channel, as well, thereby protecting themselves against two-way DDoS attacks which can saturate the outbound pipe, or otherwise require visibility to return traffic in order to identify attack patterns.
  • Reduced SSL Key Exposure: Many national or industry regulations require that encryption keys not be shared with anyone else. The inclusion of a premise-based hardware appliance allows organizations to protect themselves against encrypted DDoS attacks while keeping their SSL keys in-house.
  • Decreased Latency for Encrypted Traffic: SSL offloading in the cloud is frequently a complex and time-consuming affair, which adds much latency to user communications. Since inspection of SSL traffic in the hybrid always-on model is done primarily by the on-premise hardware appliance, users enjoy faster response times and lower latency.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Guaranteeing service availability while simultaneously ensuring the quality of the customer experience is a multi-faceted and complex proposition. Organizations are challenged by growth in the size of DDoS attacks, the increase in sophistication of application-layer DDoS attacks, and the challenges brought about by the shift to SSL encryption.

Deploying a hybrid always-on solution provides both inbound and outbound visibility into traffic and enhanced protection for application-layer and encrypted traffic, and allows SSL keys to be kept in-house, without exposing them to the outside.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Cloud Computing

Eliminating Excessive Permissions

June 11, 2019 — by Eyal Arazi

Excessive permissions are the #1 threat to workloads hosted on the public cloud. As organizations migrate their computing resources to public cloud environments, they lose visibility and control over their assets. In order to accelerate the speed of business, extensive permissions are frequently granted to users who shouldn’t have them, which creates a major security risk should any of these users ever become compromised by hackers.

Watch the video below to learn more about the importance of eliminating excessive permissions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

DDoS, DDoS Attacks

5 Key Considerations in Choosing a DDoS Mitigation Network

May 21, 2019 — by Eyal Arazi

A DDoS mitigation service is more than just the technology or the service guarantees. The quality and resilience of the underlying network is a critical component in your armor, and one which must be carefully evaluated to determine how well it can protect you against sophisticated DDoS attacks.

Below are five key considerations in evaluating a DDoS scrubbing network.

Massive Capacity

When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new heights (and scales) of attacks.

To date, the largest-ever verified DDoS attack was a memcached-based attack against GitHub. This attack reached a peak of approximately 1.3 terabits per second (Tbps) and 126 million packets per second (PPS).

In order to withstand such an attack, scrubbing networks must have not just enough capacity to ‘cover’ the attack, but also ample overflow capacity to accommodate other customers on the network and other attacks that might be taking place at the same time. A good rule of thumb is to look for mitigation networks with at least 2-3 times the capacity of the largest attacks observed to date.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Dedicated Capacity

It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers – particularly those who take an ‘edge’ security approach – rely on their Content Distribution Network (CDN) capacity for DDoS mitigation, as well.

The problem, however, is that the majority of this capacity is already being utilized on a routine basis. CDN providers don’t like to pay for unused capacity, so CDN bandwidth utilization rates routinely reach 60-70%, and can frequently climb to 80% or more. This leaves very little room for ‘overflow’ traffic that can result from a large-scale volumetric DDoS attack.

[You may also like: DDoS Protection Requires Looking Both Ways]

Therefore, it is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load-balancing.

Global Footprint

Organizations deploy DDoS mitigation solutions in order to ensure the availability of their services. An increasingly important aspect of availability is speed of response. That is, the question is not only whether the service is available, but also how quickly it can respond.

Cloud-based DDoS protection services operate by routing customer traffic through the service providers’ scrubbing centers, removing any malicious traffic, and then forwarding clean traffic to the customer’s servers. As a result, this process inevitably adds a certain amount of latency to user communications.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

One of the key factors affecting latency is distance from the host. Therefore, in order to minimize latency, it is important for the scrubbing center to be as close as possible to the customer. This can only be achieved with a globally-distributed network, with a large number of scrubbing centers deployed at strategic communication hubs, where there is large-scale access to high-speed fiber connections.

As a result, when examining a DDoS protection network, it is important not just to look at capacity figures, but also at the number of scrubbing centers and their distribution.

Anycast Routing

A key component impacting response time is the quality of the network itself, and its back-end routing mechanisms. In order to ensure maximal speed and resilience, modern security networks are based on anycast-based routing.

Anycast-based routing establishes a one-to-many relationship between IP addresses and network nodes (i.e., there are multiple network nodes with the same IP address). When a request is sent to the network, the routing mechanism applies principles of least-cost-routing to determine which network node is the optimal destination.

Routing paths can be selected based on the number of hops, distance, latency, or path cost considerations. As a result, traffic from any given point will usually be routed to the nearest and fastest node.
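
As a toy illustration of the least-cost principle (real anycast is implemented in the routing layer, typically with BGP, not in application code), the following sketch simply picks the node with the lowest path cost. The node names and cost values are made up for the example.

```python
# Simplified illustration of least-cost selection behind anycast routing:
# several scrubbing nodes advertise the same IP, and the routing layer
# steers each request to the node with the lowest path cost.

from typing import Dict

def pick_node(path_costs: Dict[str, float]) -> str:
    """Return the node with the lowest path cost (hops/latency/metric)."""
    return min(path_costs, key=path_costs.get)

# Path costs as seen from a client in Frankfurt (hypothetical metrics).
costs_from_frankfurt = {"frankfurt": 2.0, "london": 5.5, "new-york": 42.0}
print(pick_node(costs_from_frankfurt))   # -> "frankfurt": nearest node wins
```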

[You may also like: The Costs of Cyberattacks Are Real]

Anycast helps improve the speed and efficiency of traffic routing within the network. DDoS scrubbing networks based on anycast routing enjoy these benefits, which ultimately results in faster response and lower latency for end-users.

Multiple Redundancy

Finally, when selecting a DDoS scrubbing network, it is important to always have a backup. The whole point of a DDoS protection service is to ensure service availability. Therefore, you cannot have it – or any component in it – be a single point-of-failure. This means that every component within the security network must be backed up with multiple redundancy.

This includes not just multiple scrubbing centers and overflow capacity, but also requires redundancy in ISP links, routers, switches, load balancers, mitigation devices, and more.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Only a network with full multiple redundancy for all components can ensure full service availability at all times, and guarantee that your DDoS mitigation service does not become a single point-of-failure of its own.

Ask the Questions

Alongside technology and service, the underlying network forms a critical part of a cloud security network. The five considerations above outline the key metrics by which you should evaluate the network powering potential DDoS protection services.

Ask your service provider – or any service provider that you are evaluating – about their capabilities with regards to each of these metrics, and if you don’t like the answer, then you should consider looking for alternatives.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Mitigation, DDoS, Security

DDoS Protection Requires Looking Both Ways

March 26, 2019 — by Eyal Arazi

Service availability is a key component of the user experience. Customers expect services to be constantly available and fast-responding, and any downtime can result in disappointed users, abandoned shopping carts, and lost customers.

Consequently, DDoS attacks are increasing in complexity, size and duration. Radware’s 2018 Global Application and Network Security Report found that over the course of a year, sophisticated DDoS attacks, such as burst attacks, increased by 15%, HTTPS floods grew by 20%, and over 64% of customers were hit by application-layer (L7) DDoS attacks.

Some Attacks are a Two-Way Street

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.

Some examples of such attacks include:

Out of State Protocol Attacks: Some DDoS attacks exploit weaknesses in protocol communication processes, such as TCP’s three-way handshake sequence, to create ‘out-of-state’ connection requests that exhaust server resources. While some attacks of this type, such as a SYN flood, can be stopped by examining the inbound channel only, others require visibility into the outbound channel as well.

An example of this is an ACK flood, whereby attackers continuously send forged TCP ACK packets towards the victim host. The target host then tries to associate each ACK with an existing TCP connection, and if none exists, it drops the packet. However, this process consumes server resources, and large numbers of such requests can deplete system resources. In order to correctly identify and mitigate such attacks, defenses need visibility into both inbound SYNs and outbound SYN/ACK replies, so that they can verify whether an ACK packet is associated with any legitimate connection request.
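
A minimal sketch of that state check follows, assuming a simple connection table keyed on the TCP 4-tuple. It is illustrative only; the IPs and tuples are example values, and a real mitigation device would also handle timeouts, sequence numbers, and table size.

```python
# A connection is considered legitimate only after the ingress SYN *and* the
# egress SYN/ACK have both been observed, which is exactly why outbound
# visibility is needed.

half_open = set()    # ingress SYN seen, waiting for the server's SYN/ACK
confirmed = set()    # egress SYN/ACK seen: ACKs for these are legitimate

def on_inbound_syn(conn):              # conn = (src_ip, src_port, dst_ip, dst_port)
    half_open.add(conn)

def on_outbound_synack(conn):          # requires visibility into egress traffic
    if conn in half_open:
        half_open.discard(conn)
        confirmed.add(conn)

def inbound_ack_is_legit(conn) -> bool:
    return conn in confirmed           # forged ACK-flood packets fail this check

c = ("198.51.100.7", 40000, "203.0.113.10", 443)
on_inbound_syn(c)
on_outbound_synack(c)
print(inbound_ack_is_legit(c))                                          # True
print(inbound_ack_is_legit(("192.0.2.99", 1234, "203.0.113.10", 443)))  # False
```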

[You may also like: An Overview of the TCP Optimization Process]

Reflection/Amplification Attacks: Such attacks exploit asymmetric responses between the connection requests and replies of certain protocols or applications. Again, some types of such attacks require visibility into both the inbound and outbound traffic channels.

An example of such an attack is a large-file outbound pipe saturation attack. In such attacks, the attackers identify a very large file on the target network and send connection requests to fetch it. The connection request itself can be only a few bytes in size, but the ensuing reply can be extremely large. Large numbers of such requests can clog up the outbound pipe.

Another example is memcached amplification attacks. Although such attacks are most frequently used to overwhelm a third-party target via reflection, they can also be used to saturate the outbound channel of the targeted network.

[You may also like: 2018 In Review: Memcache and Drupalgeddon]

Scanning Attacks: Large-scale network scanning attempts are not just a security risk, but also frequently bear the hallmarks of a DDoS attack, flooding the network with malicious traffic. Such scan attempts are based on sending large numbers of connection requests to host ports and seeing which ports answer back (thereby indicating that they are open). However, this also leads to high volumes of error responses from closed ports. Mitigating such attacks requires visibility into return traffic in order to measure the error-response rate relative to actual traffic, so that defenses can conclude that an attack is taking place.

Server Cracking: Similar to scanning attacks, server cracking attacks involve sending large amounts of requests in order to brute-force system passwords. Similarly, this leads to a high error reply rate, which requires visibility into both the inbound and outbound channels in order to identify the attack.

Stateful Application-Layer DDoS Attacks: Certain types of application-layer (L7) DDoS attacks exploit known protocol weaknesses in order to create large amounts of spoofed requests which exhaust server resources. Mitigating such attacks requires state-aware, bi-directional visibility in order to identify attack patterns, so that the relevant attack signature can be applied to block them. Examples of such attacks are low-and-slow attacks and application-layer (L7) SYN floods, which draw out HTTP and TCP connections in order to continuously consume server resources.

[You may also like: Layer 7 Attack Mitigation]

Two-Way Attacks Require Bi-Directional Defenses

As online service availability becomes ever more important, hackers are coming up with more sophisticated attacks than ever in order to overwhelm defenses. Many such attack vectors – frequently the more sophisticated and potent ones – either target or take advantage of the outbound communication channel.

Therefore, in order for organizations to fully protect themselves, they must deploy protections that allow bi-directional inspection of traffic in order to identify and neutralize such threats.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Cloud Security

Are Your DevOps Your Biggest Security Risks?

March 13, 2019 — by Eyal Arazi

We have all heard the horror tales: a negligent (or uninformed) developer inadvertently exposes AWS API keys online, only for hackers to find those keys, penetrate the account, and cause massive damage.

But how common, in practice, are these breaches? Are they a legitimate threat, or just an urban legend for sleep-deprived IT staff? And what, if anything, can be done against such exposure?

The Problem of API Access Key Exposure

The problem of AWS API access key exposure refers to incidents in which developers’ API access keys to AWS accounts and cloud resources are inadvertently exposed and found by hackers.

AWS – like most other infrastructure-as-a-service (IaaS) providers – provides direct access to tools and services via Application Programming Interfaces (APIs). Developers leverage such APIs to write automated scripts that help them configure cloud-based resources. This saves developers and DevOps teams much time in configuring cloud-hosted resources and automating the roll-out of new features and services.

[You may also like: Ensuring Data Privacy in Public Clouds]

In order to make sure that only authorized developers are able to access those resources and execute commands on them, API access keys are used to authenticate access. Only code containing authorized credentials will be able to connect and execute.

This Exposure Happens All the Time

The problem, however, is that such access keys are sometimes left in scripts or configuration files uploaded to third-party resources, such as GitHub. Hackers are fully aware of this, and run automated scans on such repositories, in order to discover unsecured keys. Once they locate such keys, hackers gain direct access to the exposed cloud environment, which they use for data theft, account takeover, and resource exploitation.
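
Defenders can run the same kind of pattern scan on their own code before it ever reaches a public repository. Below is an illustrative pre-commit style check (not an official AWS or GitHub tool) that flags strings shaped like AWS access key IDs; the file paths are passed as arguments, and the exit-code convention is an assumption of the example.

```python
# Illustrative sketch: scan files for strings shaped like AWS access key IDs
# before they are pushed to a public repository. The pattern matches the
# documented "AKIA..." key-ID format.

import re
import sys

AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(paths):
    hits = []
    for path in paths:
        try:
            text = open(path, "r", errors="ignore").read()
        except OSError:
            continue                      # skip unreadable files
        for match in AWS_KEY_ID.finditer(text):
            hits.append((path, match.group()))
    return hits

if __name__ == "__main__":
    findings = find_exposed_keys(sys.argv[1:])
    for path, key in findings:
        print(f"possible AWS access key in {path}: {key[:8]}...")
    sys.exit(1 if findings else 0)        # non-zero exit blocks the commit/push
```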

A very common use case is for hackers to access an unsuspecting cloud account and spin-up multiple computing instances in order to run crypto-mining activities. The hackers then pocket the mined cryptocurrency, while leaving the owner of the cloud account to foot the bill for the usage of computing resources.

[You may also like: The Rise in Cryptomining]

Examples, sadly, are abundant:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resource for crypto-mining.
  • WordPress developer Ryan Heller uploaded code to GitHub which accidentally contained a backup copy of the wp-config.php file, containing his AWS access keys. Within hours, this file was discovered by hackers, who spun up several hundred computing instances to mine cryptocurrency, resulting in $6,000 of AWS usage fees overnight.
  • A student taking a Ruby on Rails course on Udemy opened up an AWS S3 storage bucket as part of the course, and uploaded his code to GitHub as part of the course requirements. However, his code contained his AWS access keys, leading to over $3,000 of AWS charges within a day.
  • The founder of an internet startup uploaded code to GitHub containing API access keys. He realized his mistake within 5 minutes and removed those keys. However, that was enough time for automated bots to find his keys, access his account, spin up computing resources for crypto-mining and result in a $2,300 bill.
  • js published an npm code package in their code release containing access keys to their S3 storage buckets.

And the list goes on and on…

The problem is so widespread that Amazon even has a dedicated support page to tell developers what to do if they inadvertently expose their access keys.

How You Can Protect Yourself

One of the main drivers of cloud migration is the agility and flexibility that it offers organizations to speed-up roll-out of new services and reduce time-to-market. However, this agility and flexibility frequently comes at a cost to security. In the name of expediency and consumer demand, developers and DevOps may sometimes not take the necessary precautions to secure their environments or access credentials.

Such exposure can happen in a multitude of ways, including accidental exposure of scripts (such as uploading to GitHub), misconfiguration of cloud resources which contain such keys, compromise of third-party partners who have such credentials, exposure through client-side code which contains keys, targeted spear-phishing attacks against DevOps staff, and more.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Nonetheless, there are a number of key steps you can take to secure your cloud environment against such breaches:

Assume your credentials are exposed. There’s no way around this: securing your credentials, as much as possible, is paramount. However, since credentials can leak in a number of ways, and from a multitude of sources, you should assume your credentials are already exposed, or can become exposed in the future. Adopting this mindset will help you channel your efforts not just into limiting that exposure to begin with, but into limiting the damage caused to your organization should exposure occur.

Limit Permissions. As I pointed out earlier, one of the key benefits of migrating to the cloud is the agility and flexibility that cloud environments provide when it comes to deploying computing resources. However, this agility and flexibility frequently comes at a cost to security. One such example is granting promiscuous permissions to users who shouldn’t have them. In the name of expediency, administrators frequently grant blanket permissions to users, so as to remove any hindrance to operations.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

The problem, however, is that most users never use most of the permissions they have been granted, and probably don’t need them in the first place. This leaves a gaping security hole: if any one of those users (or their access keys) becomes compromised, attackers will be able to exploit those permissions to do significant damage. Therefore, limiting those permissions according to the principle of least privilege will greatly help to limit the potential damage if (and when) such exposure occurs.

Early Detection is Critical. The final step is to implement measures which actively monitor user activity for potentially malicious behavior. Such behavior can include first-time API usage, access from unusual locations, access at unusual times, suspicious communication patterns, exposure of private assets to the world, and more. Implementing detection measures which look for such indicators, correlate them, and alert on potentially malicious activity will help ensure that hackers are discovered promptly, before they can do significant damage.
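
To make the correlation idea concrete, here is a simplified sketch that scores an access event against a per-user baseline. The indicators, weights, threshold, and field names are illustrative assumptions, not any vendor's actual detection logic.

```python
# Simplified sketch of correlating behavioral indicators into an alert.

def suspicion_score(event, baseline):
    score = 0
    if event["api_call"] not in baseline["seen_api_calls"]:
        score += 2                                # first-time API usage
    if event["source_country"] not in baseline["usual_countries"]:
        score += 2                                # access from an unusual location
    if event["hour_utc"] not in baseline["active_hours"]:
        score += 1                                # access at an unusual time
    if event.get("made_resource_public"):
        score += 3                                # exposure of a private asset
    return score

baseline = {"seen_api_calls": {"ListBuckets", "GetObject"},
            "usual_countries": {"US"},
            "active_hours": set(range(8, 19))}

event = {"api_call": "RunInstances", "source_country": "RO",
         "hour_utc": 3, "made_resource_public": False}

ALERT_THRESHOLD = 4
print(suspicion_score(event, baseline) >= ALERT_THRESHOLD)   # True -> raise an alert
```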

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud Computing, Cloud Security

Excessive Permissions are Your #1 Cloud Threat

February 20, 2019 — by Eyal Arazi

Migrating workloads to public cloud environments opens organizations up to a slate of new, cloud-native attack vectors which did not exist in the world of premise-based data centers. In this new environment, workload security is defined by which users have access to your cloud environment, and what permissions they have. As a result, protecting against excessive permissions, and quickly responding when those permissions are abused, becomes the #1 priority for security administrators.

The Old Insider is the New Outsider

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protection, through mechanisms such as firewalls, IPS/IDS, WAF and DDoS protection, secure gateways, etc.

However, moving workloads to the cloud has led organizations (and IT administrators) to lose direct physical control over their workloads, and to relinquish many aspects of security through the Shared Responsibility Model. As a result, the insider of the old, premise-based world is suddenly an outsider in the new world of publicly hosted cloud workloads.

[You may also like: Ensuring Data Privacy in Public Clouds]

IT administrators and hackers now have identical access to publicly-hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat.

Workload security, therefore, is defined by the people who can access those workloads, and the permissions they have.

Your Permissions = Your Attack Surface

One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them, and what permissions they actually use.

All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have too many permissions, which they never use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes.

As a result, cloud workloads are vulnerable to data breaches (i.e., theft of data from cloud accounts), service violation (i.e., completely taking over cloud resources), and resource exploitation (such as cryptomining). Such promiscuous permissions are frequently mis-characterized as ‘misconfigurations’, but are actually the result of permission misuse or abuse by people who shouldn’t have them.

[You may also like: Protecting Applications in a Serverless Architecture]

Therefore, protecting against those promiscuous permissions becomes the #1 priority for protecting publicly-hosted cloud workloads.

Traditional Protections Provide Piecemeal Solutions

The problem, however, is that existing solutions provide incomplete protection against the threat of excessive permissions.

  • The built-in mechanisms of public clouds usually provide fairly basic protection, focused mostly on the overall computing environment; they are blind to activity within individual workloads. Moreover, since many companies run multi-cloud and hybrid-cloud environments, the built-in protections offered by cloud vendors will not protect assets outside of their network.
  • Compliance and governance tools usually use static lists of best practices to analyze permissions usage. However, they will not detect (and alert to) excessive permissions, and are usually blind to activity within workloads themselves.
  • Agent-based solutions require deploying (and managing) agents on cloud-based servers, and will protect only servers on which they are installed. However, they are blind to overall cloud user activity and account context, and usually cannot protect non-server resources such as services, containers, serverless functions, etc.
  • Cloud Access Security Brokers (CASB) tools focus on protecting software-as-a-service (SaaS) applications, but do not protect infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) environments.

[You may also like: The Hybrid Cloud Habit You Need to Break]

A New Approach for Protection

Modern protection of publicly-hosted cloud environments requires a new approach.

  • Assume your credentials are compromised: Hackers acquire stolen credentials in a plethora of ways, and even the largest companies are not immune to credential theft, phishing, accidental exposure, or other threats. Therefore, defenses cannot rely solely on protection of passwords and credentials.
  • Detect excessive permissions: Since excessive permissions are so frequently exploited for malicious purposes, identifying and alerting on such permissions becomes paramount. This cannot be done just by measuring against static lists of best practices; it must be based on analyzing the gap between the permissions a user has been granted and the permissions they actually use (a minimal gap-analysis sketch follows this list).
  • Harden security posture: The best way of stopping a data breach is preventing it before it ever occurs. Therefore, hardening your cloud security posture and eliminating excessive permissions and misconfigurations guarantees that even if a user’s credentials become compromised, then attackers will not be able to do much with those permissions.
  • Look for anomalous activities: A data breach is not one thing going wrong, but a whole list of things going wrong. Most data breaches follow a typical progression, which can be detected and stopped in time – if you know what you’re looking for. Monitoring for suspicious activity in your cloud account (for example, anomalous usage of permissions) will help identify malicious activity in time and stop it before user data is exposed.
  • Automate response: Time is money, and even more so when it comes to preventing exposure of sensitive user data. Automated response mechanisms allow you to respond faster to security incidents, and block-off attacks within seconds of detection.
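
Here is a minimal sketch of the granted-versus-used gap analysis mentioned above. In practice the granted set would come from IAM policy definitions and the used set from audit logs such as CloudTrail; both sets below are hard-coded example data.

```python
# Minimal sketch of "granted vs. used" permission gap analysis.

granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteBucket",
           "ec2:RunInstances", "ec2:TerminateInstances", "iam:CreateUser"}

used_last_90_days = {"s3:GetObject", "s3:PutObject"}

unused = sorted(granted - used_last_90_days)
print("Excessive permissions to review/remove:")
for action in unused:
    print(" -", action)
# -> ec2:RunInstances, ec2:TerminateInstances, iam:CreateUser, s3:DeleteBucket
```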

[You may also like: Automating Cyber-Defense]

Radware’s Cloud Workload Protection Service

Radware is extending its line of cloud-based security services to provide an agentless, cloud-native solution for comprehensive protection of workloads hosted on AWS. Radware’s solution protects both the overall security posture of your AWS cloud account, as well as individual cloud workloads, protecting against cloud-native attack vectors.

Radware’s solution addresses the core problem of cloud-native excessive permissions by analyzing the gap between granted and used permissions, and by providing smart recommendations to harden configurations. Radware uses advanced machine-learning algorithms to identify malicious activities within your cloud account, as well as automated response mechanisms to block such attacks. This helps customers prevent data theft, protect sensitive customer data, and meet compliance requirements.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Cloud Security, Security, Web Application Firewall

Using Application Analytics to Achieve Security at Scale

October 16, 2018 — by Eyal Arazi

Are you overwhelmed by the number of security events per day? If so, you are not alone.

Alert Fatigue is Leaving You Exposed

It is not uncommon for security administrators to receive tens of thousands of security alerts per day, leading to alert fatigue – and worse – security events going unattended.

Tellingly, a study conducted by the Cloud Security Alliance (CSA) found that over 40% of security professionals think alerts lack actionable intelligence that can help them resolve security events. More than 30% of security professionals ignore alerts altogether because so many of them are false positives. Similarly, a study by Fidelis Cybersecurity found that almost two-thirds of organizations review less than 25% of their alerts every day, and only 6% triage 75% or more of the alerts they receive.

As a result of this alert flood, many organizations leave the majority of their security alerts unchecked. This is particularly a problem in the world of application security, as customer-facing applications frequently generate massive amounts of security events based on user activity. Although many of these events are benign, some are not – and it only takes one missed alert to open the door to a devastating security event, like a data breach.

Not examining these events in detail leaves applications (and the data they store) exposed to security vulnerabilities, false positives, and sub-optimal security policies, which go unnoticed.

Many Events, but Few Activities

The irony of this alert flood is that when examined in detail, many alerts are, in fact, recurring events with discernible patterns. Examples of such recurring patterns are access to a specific resource, multiple scanning attempts from the same origin IP, or execution of a known attack vector.

Traditionally, web application firewall (WAF) systems log each individual event without taking into consideration the overall context of the alert. For example, a legitimate attempt by a large group of users to access a common resource (such as a specific file or page), and (clearly illegitimate) repeated scanning attempts by the same source IP address, would all be logged the same way: each individual event would be logged once, and not cross-linked to similar events.

How to Achieve Security at Scale

Achieving security at scale requires being able to separate the wheat from the chaff when it comes to security events. That is, distinguishing between large amounts of routine user actions which have little implication for application security, and high-priority alerts which are indicative of malicious hacking attempts or may otherwise suggest a problem with security policy configuration (for example, a legitimate request being blocked).

In order to be able to make this separation, there are a number of questions that security administrators need to ask themselves:

  1. How frequently does this event occur? That is, does this behavior occur often, or is it a one-off event?
  2. What is the trend for this event? How does this type of behavior evolve over time? Does it occur at a constant rate, or is there a sudden massive spike?
  3. What is the relevant header request data? What are the relevant request methods, destination URL, resource types, and source/destination details?
  4. Is this type of activity indicative of a known attack? Is there a legitimate explanation for this event, or does it usually signify an attempted attack?

Each of these questions can go either way in terms of explaining security events. However, administrators will do well to have all of this information readily available, in order to reach an informed assessment based on the overall context.

Having such tools – and taking the overall context into consideration – gives security professionals a number of significant benefits:

  • Increased visibility of security events, to better understand application behavior and focus on high-priority alerts.
  • More intelligent decision making on which events should be blocked or allowed.
  • A more effective response in order to secure applications against attacks as much as possible, while also making sure that legitimate users are not impacted.

Radware’s Application Analytics

Radware developed Application Analytics – the latest feature in Radware’s Cloud WAF Service to address these customer needs.

Radware’s Cloud WAF Application Analytics works via a process of analysis based on machine-learning algorithms, which identify patterns and group similar application events into recurring user activities:

  1. Data mapping of the log data set, to identify all potential event types
  2. Cluster analysis using machine learning algorithms to identify similar events with common characteristics
  3. Activity grouping of recurring user activities with common identifiers
  4. Data enrichment with supplemental details to provide further context on each activity
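
As a rough illustration only – Radware’s actual clustering algorithms are proprietary and not described here – the following Python sketch shows the general idea behind these steps: condensing many similar log events into a few recurring activities by grouping them on shared characteristics. All field names and events are hypothetical.

```python
from collections import defaultdict

# Hypothetical log events; the real Cloud WAF feature uses machine-learning
# clustering -- this sketch only illustrates the idea of condensing many
# similar events into a handful of activities.
raw_events = [
    {"src_ip": "198.51.100.4", "url": "/wp-login.php", "method": "POST", "rule": "brute_force"},
    {"src_ip": "198.51.100.4", "url": "/wp-login.php", "method": "POST", "rule": "brute_force"},
    {"src_ip": "203.0.113.9",  "url": "/search",       "method": "GET",  "rule": "sql_injection"},
    # ... thousands more ...
]

# Steps 1-2: map each event to a coarse signature (a stand-in for data mapping
# and cluster analysis on common characteristics).
def signature(event):
    return (event["src_ip"], event["url"], event["method"], event["rule"])

# Step 3: group recurring events that share a signature into one activity.
activities = defaultdict(list)
for event in raw_events:
    activities[signature(event)].append(event)

# Step 4: enrich each activity with supplemental context (here, just a count).
for (src_ip, url, method, rule), items in activities.items():
    print(f"Activity: {method} {url} from {src_ip} ({rule}) -- {len(items)} recurring events")
```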

Radware’s Cloud WAF Application Analytics takes large numbers of recurring log events and condenses them into a small number of recurring activities.

In several customer trials, this capability allowed Radware to reduce the number of Cloud WAF alerts from several thousand (or even tens of thousands) to a single-digit (or double-digit) number of activities. This allows administrators to focus on the alerts that matter.

For example, one customer reduced over 8,000 log events on one of their applications into 12 activities, while another customer reduced more than 3,500 security events into 13 activities.

The benefits for security administrators are easy to see: rather than drowning in massive amounts of log events with little (or no) context to explain them, administrators now have a tool that reduces log overload into a manageable number of activities they can actually analyze and act on.

Ultimately, there is no silver bullet when it comes to WAF and application security management: administrators will always need to balance being as secure as possible (and protecting private user data) with being as accessible as possible to those same users. Cloud WAF Application Analytics is Radware’s attempt to untangle this challenge.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

DDoS Attacks, HTTP Flood Attacks, Security

Rate Limiting – A Cure Worse Than the Disease?

September 5, 2018 — by Eyal Arazi

Rate limiting is a commonly-used tool to defend against application-layer (L7) DDoS attacks. However, the shortcomings of this approach raise the question of whether the cure is worse than the disease.

As more applications transition to web and cloud-based environments, application-layer (L7) DDoS attacks are becoming increasingly common and potent.

In fact, Radware research found that application-layer attacks have overtaken network-layer DDoS attacks, and HTTP floods are now the single most common attack across all vectors. This is mirrored by new generations of attack tools such as the Mirai botnet, which make application-layer floods even more accessible and easier to launch.

It is, therefore, no surprise that more security vendors claim to provide protection against such attacks. The problem, however, is that the approach chosen by many vendors is rate limiting.

A More Challenging Form of DDoS Attack

What is it that makes application-layer DDoS attacks so difficult to defend against?

Application-layer DDoS attacks such as HTTP GET or HTTP POST floods are particularly difficult to protect against because they require analysis of the application-layer traffic in order to determine whether or not it is behaving legitimately.

For example, when a shopping website sees a spike in incoming HTTP traffic, is that because a DDoS attack is taking place, or because there is a flash crowd of shoppers looking for the latest hot item?

Looking at network-layer traffic volumes alone will not help us. The only option would be to look at application data directly and try to discern whether or not it is legitimate based on its behavior.

However, several vendors who claim to offer protection against application-layer DDoS attacks don’t have the capabilities to actually analyze application traffic and work out whether an attack is taking place. This leads many of them to rely on brute-force mechanisms such as HTTP rate limiting.

[You might also like: 8 Questions to Ask in DDoS Protection]

A Remedy (Almost) as Bad as the Disease

Explaining rate limiting is simple enough: when traffic goes over a certain threshold, rate limits are applied to throttle the amount of traffic to a level that the hosting server (or network pipe) can handle.
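
For illustration, here is a minimal token-bucket rate limiter in Python. It is a generic sketch, not any particular vendor’s implementation, but it shows the mechanics under discussion: once the threshold is exceeded, requests are throttled indiscriminately.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows `rate` requests per second
    with bursts up to `capacity`. Note that it throttles *all* traffic above
    the threshold, good and bad alike."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # request throttled, regardless of whether it was legitimate

bucket = TokenBucket(rate=100, capacity=200)   # e.g. 100 req/s for the whole application
if not bucket.allow():
    pass  # e.g. return HTTP 429 or drop the request
```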

While this sounds simple enough, it also creates several problems:

  • Rate limiting does not distinguish between good and bad traffic: It has no mechanism for determining whether a connection is legitimate or not. It is an equal-opportunity blocker of traffic.
  • Rate limiting does not actually clean traffic: An important point to emphasize regarding rate limiting is that it does not actually block any bad traffic. Bad traffic will still reach the origin server, albeit at a slower rate.
  • Rate limiting blocks legitimate users: Because it cannot distinguish between good and malicious requests, rate limiting results in a high degree of false positives, which means legitimate users are blocked from reaching the application.

Some vendors have more granular rate limiting controls which allow limiting connections not just per application, but also per user. However, sophisticated attackers get around this by spreading attacks over a large number of attack hosts. Moreover, modern web applications (and browsers) frequently use multiple concurrent connections, so limiting concurrent connections per user will likely impact legitimate users.
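
Per-user throttling is usually the same mechanism keyed by client identity. The hypothetical Python sketch below (a plain fixed-window, per-IP counter; the limit and IP ranges are made up) shows why a botnet spread across many source IPs stays comfortably under each individual limit:

```python
from collections import defaultdict

WINDOW_LIMIT = 50   # hypothetical per-IP requests allowed per time window

def allowed(counters, client_ip):
    """Fixed-window per-IP limiter: each source IP gets its own counter."""
    counters[client_ip] += 1
    return counters[client_ip] <= WINDOW_LIMIT

# 10,000 requests from a single aggressive client: all but the first 50 are throttled.
counters = defaultdict(int)
single = sum(allowed(counters, "198.51.100.4") for _ in range(10_000))

# The same 10,000 requests spread across 1,000 bot IPs (10 requests each):
# every request is allowed, because no individual IP exceeds its own limit.
counters = defaultdict(int)
distributed = sum(
    allowed(counters, f"10.0.{bot // 256}.{bot % 256}")
    for bot in range(1_000)
    for _ in range(10)
)

print(single, distributed)   # 50 allowed vs. all 10,000 allowed
```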

Considering that the aim of a DDoS attack is usually to disrupt the availability of web applications and prevent legitimate users from reaching them, we can see that rate limiting does not actually mitigate the problem: bad traffic will still reach the application, and legitimate users will be blocked.

In other words – rate limiting administers the pains of the medication, without providing the benefit of a remedy.

This is not to say that rate limiting cannot be a useful discipline in mitigating application-layer attacks, but it should be used as a last line of defense, when all else fails, and not as a first response.

A Better Approach: Behavioral Detection

An alternative approach to rate limiting – which would deliver better results – is to use a positive security model based on behavioral analysis.

Most defense mechanisms – including rate limiting – subscribe to a ‘negative’ security model. In a nutshell, it means that all traffic will be allowed through, except what is explicitly known to be malicious. This is how the majority of signature-based and volume-based DDoS and WAF solutions work.

A ‘positive’ security model, on the other hand, works the other way around: it uses behavioral-based learning processes to learn what constitutes legitimate user behavior and establishes a baseline of legitimate traffic patterns. It will then block any request that does not conform to this traffic pattern.

Such an approach is particularly useful when it comes to application-layer DDoS attacks since it can look at application-layer behavior, and determine whether this behavior adheres to recognized legitimate patterns. One such example would be to determine whether a spike in traffic is legitimate behavior or the result of a DDoS attack.
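
As a toy illustration of the positive model – not Radware’s actual detection logic, which also weighs many non-rate behavioral parameters – the following Python sketch learns a per-URL baseline of requests per minute and flags intervals that deviate far from it. All numbers and thresholds are assumptions for the example.

```python
import statistics

class BehavioralBaseline:
    """Toy positive-security sketch: learn a baseline of requests-per-minute
    per URL during a learning window, then flag intervals that deviate far
    from it. Real behavioral engines use many more (non-rate) parameters."""

    def __init__(self, threshold_sigmas=4.0):
        self.history = {}            # url -> list of requests-per-minute samples
        self.threshold = threshold_sigmas

    def learn(self, url, requests_per_minute):
        self.history.setdefault(url, []).append(requests_per_minute)

    def is_anomalous(self, url, requests_per_minute):
        samples = self.history.get(url, [])
        if len(samples) < 10:
            return False             # not enough data to judge yet
        mean = statistics.mean(samples)
        stdev = statistics.pstdev(samples) or 1.0
        return (requests_per_minute - mean) / stdev > self.threshold

baseline = BehavioralBaseline()
for rpm in [120, 130, 110, 125, 118, 122, 135, 128, 119, 124]:
    baseline.learn("/checkout", rpm)

print(baseline.is_anomalous("/checkout", 140))    # modest spike -> likely flash crowd
print(baseline.is_anomalous("/checkout", 5000))   # extreme spike -> likely attack
```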

[You might also like: 5 Must-Have DDoS Protection Technologies]

The advantages of behavioral-based detections are numerous:

  • Blocks bad traffic: Unlike rate limiting, behavioral-based detection actually ‘scrubs’ bad traffic out, leaving only legitimate traffic to reach the application.
  • Reduces false positives: One of the key problems of rate limiting is the high number of false positives. A positive security approach greatly reduces this problem.
  • Does not block legitimate users: Most importantly, behavioral traffic analysis results in fewer (or no) blocked users, meaning that you don’t lose customers, reputation, or revenue.

That’s Great, but How Do I Know If I Have It?

The best way to find out what protections you have is to be informed. Here are a few questions to ask your security vendor:

  1. Do you provide application-layer (L7) DDoS protection as part of your DDoS solution, or does it require an add-on WAF component?
  2. Do you use behavioral learning algorithms to establish ‘legitimate’ traffic patterns?
  3. How do you distinguish between good and bad traffic?
  4. Do you have application-layer DDoS protection that goes beyond rate limiting?

If your vendor has these capabilities, make sure they’re turned on and enabled. If not, the increase in application-layer DDoS attacks means that it might be time to look for alternatives.

Read “2017-2018 Global Application & Network Security Report” to learn more.

Download Now