
Cloud Computing

Eliminating Excessive Permissions

June 11, 2019 — by Eyal Arazi


Excessive permissions are the #1 threat to workloads hosted on the public cloud. As organizations migrate their computing resources to public cloud environments, they lose visibility and control over their assets. In order to accelerate the speed of business, extensive permissions are frequently granted to users who shouldn’t have them, which creates a major security risk should any of these users ever become compromised by hackers.

Watch the video below to learn more about the importance of eliminating excessive permissions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


DDoS, DDoS Attacks

5 Key Considerations in Choosing a DDoS Mitigation Network

May 21, 2019 — by Eyal Arazi


A DDoS mitigation service is more than just the technology or the service guarantees. The quality and resilience of the underlying network are a critical part of your armor, and must be carefully evaluated to determine how well they can protect you against sophisticated DDoS attacks.

Below are five key considerations in evaluating a DDoS scrubbing network.

Massive Capacity

When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new heights (and scales) of attacks.

To date, the largest-ever verified DDoS attack was a memcached-based attack against GitHub. This attack reached a peak of approximately 1.3 terabits per second (Tbps) and 126 million packets per second (PPS).

In order to withstand such an attack, scrubbing networks must have not just enough capacity to ‘cover’ the attack, but also ample overflow capacity to accommodate other customers on the network and other attacks that might be going on at the same time. A good rule of thumb is to look for mitigation networks with at least 2-3 times the capacity of the largest attacks observed to date.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Dedicated Capacity

It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers – particularly those who take an ‘edge’ security approach – rely on their Content Distribution Network (CDN) capacity for DDoS mitigation, as well.

The problem, however, is that the majority of this capacity is already being utilized on a routine basis. CDN providers don’t like to pay for unused capacity, and therefore CDN bandwidth utilization rates routinely reach 60-70%, and can frequently reach 80% or more. This leaves very little room for ‘overflow’ traffic that can result from a large-scale volumetric DDoS attack.

[You may also like: DDoS Protection Requires Looking Both Ways]

Therefore, it is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load-balancing.

Global Footprint

Organizations deploy DDoS mitigation solutions in order to ensure the availability of their services. An increasingly important aspect of availability is speed of response. That is, the question is not only whether the service is available, but also how quickly it can respond.

Cloud-based DDoS protection services operate by routing customer traffic through the service providers’ scrubbing centers, removing any malicious traffic, and then forwarding clean traffic to the customer’s servers. This process inevitably adds a certain amount of latency to user communications.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

One of the key factors affecting latency is distance from the host. Therefore, in order to minimize latency, it is important for the scrubbing center to be as close as possible to the customer. This can only be achieved with a globally-distributed network, with a large number of scrubbing centers deployed at strategic communication hubs, where there is large-scale access to high-speed fiber connections.

As a result, when examining a DDoS protection network, it is important not just to look at capacity figures, but also at the number of scrubbing centers and their distribution.

Anycast Routing

A key component impacting response time is the quality of the network itself and its back-end routing mechanisms. In order to ensure maximal speed and resilience, modern security networks are built on anycast-based routing.

Anycast-based routing establishes a one-to-many relationship between IP addresses and network nodes (i.e., there are multiple network nodes with the same IP address). When a request is sent to the network, the routing mechanism applies principles of least-cost-routing to determine which network node is the optimal destination.

Routing paths can be selected based on the number of hops, distance, latency, or path cost considerations. As a result, traffic from any given point will usually be routed to the nearest and fastest node.
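To make the least-cost selection concrete, here is a minimal Python sketch (illustrative only; node names, metrics, and cost weights are hypothetical, not any provider’s actual routing logic):

```python
# Minimal sketch of least-cost node selection, the principle behind anycast routing.
# Node names, metrics and weights below are hypothetical.

def select_node(nodes):
    """Return the scrubbing node with the lowest combined routing cost."""
    def cost(node):
        # Weighted combination of hop count, latency and path cost.
        return node["hops"] + 0.5 * node["latency_ms"] + 0.2 * node["path_cost"]
    return min(nodes, key=cost)

scrubbing_nodes = [
    {"name": "frankfurt", "hops": 4, "latency_ms": 12, "path_cost": 10},
    {"name": "ashburn", "hops": 9, "latency_ms": 85, "path_cost": 14},
    {"name": "hong-kong", "hops": 12, "latency_ms": 140, "path_cost": 20},
]

print(select_node(scrubbing_nodes)["name"])  # -> frankfurt (nearest, fastest node)
```

In a real anycast deployment this selection happens in the routing layer (BGP announcements of the same IP prefix from multiple locations), not in application code; the sketch only illustrates the least-cost principle described above.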

[You may also like: The Costs of Cyberattacks Are Real]

Anycast helps improve the speed and efficiency of traffic routing within the network. DDoS scrubbing networks based on anycast routing enjoy these benefits, which ultimately results in faster response and lower latency for end-users.

Multiple Redundancy

Finally, when selecting a DDoS scrubbing network, it is important to always have a backup. The whole point of a DDoS protection service is to ensure service availability. Therefore, you cannot have it – or any component in it – be a single point-of-failure. This means that every component within the security network must be backed up with multiple redundancy.

This includes not just multiple scrubbing centers and overflow capacity, but also requires redundancy in ISP links, routers, switches, load balancers, mitigation devices, and more.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Only a network with full multiple redundancy for all components can ensure full service availability at all times, and guarantee that your DDoS mitigation service does not become a single point-of-failure of its own.

Ask the Questions

Alongside technology and service, the underlying network forms a critical part of any cloud security service. The five considerations above outline the key metrics by which you should evaluate the network powering potential DDoS protection services.

Ask your service provider – or any service provider that you are evaluating – about their capabilities with regard to each of these metrics, and if you don’t like the answers, consider looking for alternatives.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Attack Mitigation, DDoS, Security

DDoS Protection Requires Looking Both Ways

March 26, 2019 — by Eyal Arazi


Service availability is a key component of the user experience. Customers expect services to be constantly available and fast-responding, and any downtime can result in disappointed users, abandoned shopping carts, and lost customers.

Consequently, DDoS attacks are increasing in complexity, size and duration. Radware’s 2018 Global Application and Network Security Report found that over the course of a year, sophisticated DDoS attacks, such as burst attacks, increased by 15%, HTTPS floods grew by 20%, and over 64% of customers were hit by application-layer (L7) DDoS attacks.

Some Attacks are a Two-Way Street

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.

Some examples of such attacks include:

Out of State Protocol Attacks: Some DDoS attacks exploit weaknesses in protocol communication processes, such as TCP’s three-way handshake sequence, to create ‘out-of-state’ connection requests that draw out connections in order to exhaust server resources. While some attacks of this type, such as a SYN flood, can be stopped by examining the inbound channel only, others require visibility into the outbound channel as well.

An example of this is an ACK flood, whereby attackers continuously send forged TCP ACK packets towards the victim host. The target host then tries to associate each ACK with an existing TCP connection, and if no such connection exists, it drops the packet. However, this process consumes server resources, and large numbers of such requests can deplete system resources. In order to correctly identify and mitigate such attacks, defenses need visibility into both inbound SYN and outbound SYN/ACK replies, so that they can verify whether the ACK packet is associated with any legitimate connection request.
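As a rough illustration of why bi-directional visibility matters here (simplified packet records, not a production mitigation engine), the following Python sketch tracks outbound SYN/ACK replies and flags inbound ACKs that match no known half-open connection:

```python
# Sketch: flag ACK packets that do not correspond to any half-open connection.
# Packet records are simplified dicts; a real mitigation device inspects live traffic.

pending = set()  # connections for which we saw an outbound SYN/ACK

def on_outbound(pkt):
    if pkt["flags"] == "SYN/ACK":
        pending.add((pkt["dst_ip"], pkt["dst_port"], pkt["src_port"]))

def on_inbound_ack(pkt):
    key = (pkt["src_ip"], pkt["src_port"], pkt["dst_port"])
    if key in pending:
        pending.discard(key)   # legitimate completion of the handshake
        return "accept"
    return "suspect"           # orphan ACK -- possible ACK flood traffic

on_outbound({"flags": "SYN/ACK", "dst_ip": "198.51.100.7", "dst_port": 51000, "src_port": 443})
print(on_inbound_ack({"src_ip": "198.51.100.7", "src_port": 51000, "dst_port": 443}))  # accept
print(on_inbound_ack({"src_ip": "203.0.113.9", "src_port": 40000, "dst_port": 443}))   # suspect
```

Without visibility into the outbound SYN/ACK replies, there is no way to build the `pending` state that distinguishes legitimate ACKs from forged ones.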

[You may also like: An Overview of the TCP Optimization Process]

Reflection/Amplification Attacks: Such attacks exploit asymmetric responses between the connection requests and replies of certain protocols or applications. Again, some types of such attacks require visibility into both the inbound and outbound traffic channels.

An example of such an attack is a large-file outbound pipe saturation attack. In such attacks, the attackers identify a very large file on the target network and send a connection request to fetch it. The connection request itself can be only a few bytes in size, but the ensuing reply could be extremely large. Large numbers of such requests can clog up the outbound pipe.

Another example is memcached amplification attacks. Although such attacks are most frequently used to overwhelm a third-party target via reflection, they can also be used to saturate the outbound channel of the targeted network.

[You may also like: 2018 In Review: Memcache and Drupalgeddon]

Scanning Attacks: Large-scale network scanning attempts are not just a security risk, but also frequently bear the hallmark of a DDoS attack, flooding the network with malicious traffic. Such scan attempts are based on sending large numbers of connection requests to host ports and seeing which ports answer back (thereby indicating that they are open). However, this also leads to high volumes of error responses from closed ports. Mitigating such attacks requires visibility into return traffic in order to identify the error-response rate relative to actual traffic, so that defenses can conclude that an attack is taking place.
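A minimal sketch of this error-ratio logic (counters, thresholds, and the event format are illustrative assumptions, not a real detection engine):

```python
# Sketch: flag sources whose outbound error-response ratio suggests a port scan.
# Thresholds and the event format are illustrative assumptions.
from collections import defaultdict

stats = defaultdict(lambda: {"ok": 0, "errors": 0})  # per-source response counters

def record_response(src_ip, is_error):
    """Count each outbound response (e.g., RST or ICMP port-unreachable counts as an error)."""
    stats[src_ip]["errors" if is_error else "ok"] += 1

def suspected_scanners(min_responses=100, max_error_ratio=0.5):
    """Return sources whose error ratio is abnormally high relative to real traffic."""
    suspects = []
    for ip, counts in stats.items():
        total = counts["ok"] + counts["errors"]
        if total >= min_responses and counts["errors"] / total > max_error_ratio:
            suspects.append(ip)
    return suspects
```

The same ratio check only works if the defense can see the return traffic; an inbound-only view never observes the error replies at all.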

Server Cracking: Similar to scanning attacks, server cracking attacks involve sending large numbers of requests in order to brute-force system passwords. Again, this leads to a high error-reply rate, which requires visibility into both the inbound and outbound channels in order to identify the attack.

Stateful Application-Layer DDoS Attacks: Certain types of application-layer (L7) DDoS attacks exploit known protocol weaknesses in order to create large numbers of spoofed requests which exhaust server resources. Mitigating such attacks requires state-aware, bi-directional visibility in order to identify attack patterns, so that the relevant attack signature can be applied to block them. Examples of such attacks are low-and-slow attacks and application-layer (L7) SYN floods, which draw out HTTP and TCP connections in order to continuously consume server resources.

[You may also like: Layer 7 Attack Mitigation]

Two-Way Attacks Require Bi-Directional Defenses

As online service availability becomes ever more important, hackers are coming up with more sophisticated attacks than ever in order to overwhelm defenses. Many such attack vectors – frequently the more sophisticated and potent ones – either target or take advantage of the outbound communication channel.

Therefore, in order for organizations to fully protect themselves, they must deploy protections that allow bi-directional inspection of traffic in order to identify and neutralize such threats.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Cloud Security

Are Your DevOps Your Biggest Security Risks?

March 13, 2019 — by Eyal Arazi


We have all heard the horror tales: a negligent (or uninformed) developer inadvertently exposes AWS API keys online, only for hackers to find those keys, penetrate the account and cause massive damage.

But how common, in practice, are these breaches? Are they a legitimate threat, or just an urban legend for sleep-deprived IT staff? And what, if anything, can be done against such exposure?

The Problem of API Access Key Exposure

The problem of AWS API access key exposure refers to incidents in which developers’ API access keys to AWS accounts and cloud resources are inadvertently exposed and found by hackers.

AWS – like most other infrastructure-as-a-service (IaaS) providers – provides direct access to tools and services via Application Programming Interfaces (APIs). Developers leverage such APIs to write automated scripts that help them configure cloud-based resources. This saves developers and DevOps considerable time in configuring cloud-hosted resources and automating the roll-out of new features and services.

[You may also like: Ensuring Data Privacy in Public Clouds]

In order to make sure that only authorized developers are able to access those resources and execute commands on them, API access keys are used to authenticate access. Only code containing authorized credentials will be able to connect and execute.
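For illustration, a typical automation script calls the AWS API through the boto3 SDK; the call only succeeds if the credentials resolved by the SDK (environment variables, an IAM role, a shared credentials file) are authorized. A minimal sketch, assuming the boto3 library and an account with EC2 instances:

```python
# Sketch: an automation script that lists EC2 instances via the AWS API.
# Credentials are resolved by boto3's default credential chain (environment
# variables, IAM role, credentials file) -- never hard-coded into the script.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instances()
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```

The important point is that the keys live outside the script, so the code can be shared or committed without carrying credentials along with it.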

This Exposure Happens All the Time

The problem, however, is that such access keys are sometimes left in scripts or configuration files uploaded to third-party resources, such as GitHub. Hackers are fully aware of this, and run automated scans on such repositories, in order to discover unsecured keys. Once they locate such keys, hackers gain direct access to the exposed cloud environment, which they use for data theft, account takeover, and resource exploitation.
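Since attackers scan public repositories automatically, it is worth scanning your own code for key-shaped strings before pushing it anywhere. A rough sketch (the regex matches the well-known `AKIA` access key ID prefix and is a heuristic, not an exhaustive detector):

```python
# Sketch: scan local files for strings that look like AWS access key IDs before
# committing or pushing. The pattern below is a common heuristic, not exhaustive.
import re
import sys
from pathlib import Path

KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def scan(path="."):
    hits = []
    for file in Path(path).rglob("*"):
        if file.is_file() and file.suffix in {".py", ".js", ".php", ".yml", ".json", ".cfg"}:
            try:
                text = file.read_text(errors="ignore")
            except OSError:
                continue
            for match in KEY_PATTERN.finditer(text):
                hits.append((str(file), match.group()))
    return hits

if __name__ == "__main__":
    for filename, key in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"Possible AWS access key in {filename}: {key[:8]}...")
```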

A very common use case is for hackers to access an unsuspecting cloud account and spin-up multiple computing instances in order to run crypto-mining activities. The hackers then pocket the mined cryptocurrency, while leaving the owner of the cloud account to foot the bill for the usage of computing resources.

[You may also like: The Rise in Cryptomining]

Examples, sadly, are abundant:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resources for crypto-mining.
  • WordPress developer Ryan Heller uploaded code to GitHub which accidentally contained a backup copy of the wp-config.php file, containing his AWS access keys. Within hours, this file was discovered by hackers, who spun up several hundred computing instances to mine cryptocurrency, resulting in $6,000 of AWS usage fees overnight.
  • A student taking a Ruby on Rails course on Udemy opened up an AWS S3 storage bucket as part of the course, and uploaded his code to GitHub as part of the course requirements. However, his code contained his AWS access keys, leading to over $3,000 of AWS charges within a day.
  • The founder of an internet startup uploaded code to GitHub containing API access keys. He realized his mistake within 5 minutes and removed those keys. However, that was enough time for automated bots to find his keys, access his account, and spin up computing resources for crypto-mining, resulting in a $2,300 bill.
  • js published an npm code package in their code release containing access keys to their S3 storage buckets.

And the list goes on and on…

The problem is so widespread that Amazon even has a dedicated support page to tell developers what to do if they inadvertently expose their access keys.

How You Can Protect Yourself

One of the main drivers of cloud migration is the agility and flexibility that it offers organizations to speed-up roll-out of new services and reduce time-to-market. However, this agility and flexibility frequently comes at a cost to security. In the name of expediency and consumer demand, developers and DevOps may sometimes not take the necessary precautions to secure their environments or access credentials.

Such exposure can happen in a multitude of ways, including accidental exposure of scripts (such as uploading to GitHub), misconfiguration of cloud resources which contain such keys, compromise of third-party partners who have such credentials, exposure through client-side code which contains keys, targeted spear-phishing attacks against DevOps staff, and more.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Nonetheless, there are a number of key steps you can take to secure your cloud environment against such breaches:

Assume your credentials are exposed. There’s no way around this: securing your credentials, as much as possible, is paramount. However, since credentials can leak in a number of ways, and from a multitude of sources, you should assume your credentials are already exposed, or can become exposed in the future. Adopting this mindset will help you channel your efforts not just into limiting this exposure to begin with, but into limiting the damage to your organization should such exposure occur.

Limit Permissions. As I pointed out earlier, one of the key benefits of migrating to the cloud is the agility and flexibility that cloud environments provide when it comes to deploying computing resources. However, this agility and flexibility frequently comes at a cost to security. One such example is granting promiscuous permissions to users who shouldn’t have them. In the name of expediency, administrators frequently grant blanket permissions to users, so as to remove any hindrance to operations.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

The problem, however, is that most users never use most of the permissions they have been granted, and probably don’t need them in the first place. This leads to a gaping security hole, since if any one of those users (or their access keys) should become compromised, attackers will be able to exploit those permissions to do significant damage. Therefore, limiting those permissions according to the principle of least privilege will greatly help to limit potential damage if (and when) such exposure occurs.
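To illustrate the principle of least privilege (policy name, bucket name, and allowed actions below are hypothetical), an IAM policy can be scoped to only the actions and resources a user actually needs, rather than blanket wildcard permissions:

```python
# Sketch: create a narrowly-scoped IAM policy instead of granting blanket permissions.
# Policy name, bucket name and allowed actions are illustrative assumptions.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],        # only what the job needs
            "Resource": "arn:aws:s3:::example-app-assets/*",   # only one bucket
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="example-app-s3-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
```

If this user’s keys ever leak, the attacker is confined to one bucket rather than the whole account.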

Early Detection is Critical. The final step is to implement measures which actively monitor user activity for any potentially malicious behavior. Such behavior can include first-time API usage, access from unusual locations, access at unusual times, suspicious communication patterns, exposure of private assets to the world, and more. Implementing detection measures which look for such malicious behavior indicators, correlate them, and alert on potentially malicious activity will help ensure that hackers are discovered promptly, before they can do any significant damage.
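A toy sketch of this kind of correlation (event fields and thresholds are assumptions, not a real detection product): flag first-time API usage and access from previously unseen locations, and alert when several indicators coincide:

```python
# Sketch: correlate simple behavior indicators from account activity events.
# Event structure and alert threshold are illustrative assumptions.
from collections import defaultdict

seen_actions = defaultdict(set)     # user -> API actions used before
seen_locations = defaultdict(set)   # user -> source countries seen before

def score_event(event):
    user, action, country = event["user"], event["action"], event["country"]
    indicators = []
    if action not in seen_actions[user]:
        indicators.append("first-time API usage")
    if country not in seen_locations[user]:
        indicators.append("unusual location")
    seen_actions[user].add(action)
    seen_locations[user].add(country)
    return indicators

def handle(event):
    indicators = score_event(event)
    if len(indicators) >= 2:        # several indicators coinciding -> raise an alert
        print(f"ALERT for {event['user']}: {', '.join(indicators)}")
```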

Read “Radware’s 2018 Web Application Security Report” to learn more.


Cloud Computing, Cloud Security

Excessive Permissions are Your #1 Cloud Threat

February 20, 2019 — by Eyal Arazi


Migrating workloads to public cloud environments opens organizations up to a slate of new, cloud-native attack vectors which did not exist in the world of premise-based data centers. In this new environment, workload security is defined by which users have access to your cloud environment, and what permissions they have. As a result, protecting against excessive permissions, and quickly responding when those permissions are abused, becomes the #1 priority for security administrators.

The Old Insider is the New Outsider

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protection, through mechanisms such as firewalls, IPS/IDS, WAF and DDoS protection, secure gateways, etc.

However, moving workloads to the cloud has led organizations (and IT administrators) to lose direct physical control over their workloads, and to relinquish many aspects of security through the Shared Responsibility Model. As a result, the insider of the old, premise-based world is suddenly an outsider in the new world of publicly hosted cloud workloads.

[You may also like: Ensuring Data Privacy in Public Clouds]

IT administrators and hackers now have identical access to publicly-hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat.

Workload security, therefore, is defined by the people who can access those workloads, and the permissions they have.

Your Permissions = Your Attack Surface

One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them, and what permissions they actually use.

All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have too many permissions, which they never use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes.

As a result, cloud workloads are vulnerable to data breaches (i.e., theft of data from cloud accounts), service violation (i.e., completely taking over cloud resources), and resource exploitation (such as cryptomining). Such promiscuous permissions are frequently mis-characterized as ‘misconfigurations’, but are actually the result of permission misuse or abuse by people who shouldn’t have them.

[You may also like: Protecting Applications in a Serverless Architecture]

Therefore, protecting against those promiscuous permissions becomes the #1 priority for protecting publicly-hosted cloud workloads.

Traditional Protections Provide Piecemeal Solutions

The problem, however, is that existing solutions provide incomplete protection against the threat of excessive permissions.

  • The built-in mechanisms of public clouds usually provide fairly basic protection, focused mostly on the overall computing environment; they are blind to activity within individual workloads. Moreover, since many companies run multi-cloud and hybrid-cloud environments, the built-in protections offered by cloud vendors will not protect assets outside of their network.
  • Compliance and governance tools usually use static lists of best practices to analyze permissions usage. However, they will not detect (and alert to) excessive permissions, and are usually blind to activity within workloads themselves.
  • Agent-based solutions require deploying (and managing) agents on cloud-based servers, and will protect only servers on which they are installed. However, they are blind to overall cloud user activity and account context, and usually cannot protect non-server resources such as services, containers, serverless functions, etc.
  • Cloud Access Security Brokers (CASB) tools focus on protecting software-as-a-service (SaaS) applications, but do not protect infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) environments.

[You may also like: The Hybrid Cloud Habit You Need to Break]

A New Approach for Protection

Modern protection of publicly-hosted cloud environments requires a new approach.

  • Assume your credentials are compromised: Hackers acquire stolen credentials in a plethora of ways, and even the largest companies are not immune to credential theft, phishing, accidental exposure, or other threats. Therefore, defenses cannot rely solely on protection of passwords and credentials.
  • Detect excessive permissions: Since excessive permissions are so frequently exploited for malicious purposes, identifying and alerting on such permissions becomes paramount. This cannot be done just by measuring against static lists of best practices, but must be based on analyzing the gap between the permissions a user has been granted and the permissions they actually use (a minimal sketch of this gap analysis follows this list).
  • Harden security posture: The best way of stopping a data breach is preventing it before it ever occurs. Therefore, hardening your cloud security posture and eliminating excessive permissions and misconfigurations ensures that even if a user’s credentials become compromised, attackers will not be able to do much with those permissions.
  • Look for anomalous activities: A data breach is not one thing going wrong, but a whole chain of things going wrong. Most data breaches follow a typical progression, which can be detected and stopped in time – if you know what you’re looking for. Monitoring for suspicious activity in your cloud account (for example, anomalous usage of permissions) will help identify malicious activity in time and stop it before user data is exposed.
  • Automate response: Time is money, and even more so when it comes to preventing exposure of sensitive user data. Automated response mechanisms allow you to respond faster to security incidents, and block off attacks within seconds of detection.
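As promised above, a minimal sketch of the granted-versus-used gap analysis (permission data is hypothetical; in practice it would come from IAM policies and activity logs such as CloudTrail):

```python
# Sketch: compute the gap between granted and actually-used permissions per user.
# Permission data below is hypothetical; in practice it would come from IAM
# policies and activity logs (e.g., CloudTrail).

granted = {
    "build-bot": {"s3:GetObject", "s3:PutObject", "ec2:RunInstances", "iam:CreateUser"},
}
used = {
    "build-bot": {"s3:GetObject", "s3:PutObject"},
}

for user, perms in granted.items():
    excessive = perms - used.get(user, set())   # permissions never exercised
    if excessive:
        print(f"{user}: consider revoking {sorted(excessive)}")
```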

[You may also like: Automating Cyber-Defense]

Radware’s Cloud Workload Protection Service

Radware is extending its line of cloud-based security services to provide an agentless, cloud-native solution for comprehensive protection of workloads hosted on AWS. Radware’s solution protects both the overall security posture of your AWS cloud account, as well as individual cloud workloads, protecting against cloud-native attack vectors.

Radware’s solution addresses the core problem of cloud-native excessive permissions by analyzing the gap between granted and used permissions, and providing smart recommendations to harden configurations. Radware uses advanced machine-learning algorithms to identify malicious activities within your cloud account, as well as automated response mechanisms to block such attacks. This helps customers prevent data theft, protect sensitive customer data, and meet compliance requirements.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Cloud Security, Security, Web Application Firewall

Using Application Analytics to Achieve Security at Scale

October 16, 2018 — by Eyal Arazi


Are you overwhelmed by the number of security events per day? If so, you are not alone.

Alert Fatigue is Leaving You Exposed

It is not uncommon for security administrators to receive tens of thousands of security alerts per day, leading to alert fatigue and, worse, security events going unattended.

Tellingly, a study conducted by the Cloud Security Alliance (CSA) found that over 40% of security professionals think alerts lack actionable intelligence that can help them resolve security events. More than 30% of security professionals ignore alerts altogether because so many of them are false positives. Similarly, a study by Fidelis Cybersecurity found that almost two-thirds of organizations review less than 25% of their alerts every day, and only 6% triage 75% or more of the alerts they receive.

As a result of this alert flood, many organizations leave the majority of their security alerts unchecked. This is particularly a problem in the world of application security, as customer-facing applications frequently generate massive amounts of security events based on user activity. Although many of these events are benign, some are not—and it only takes one unattended alert to open the door to a devastating security event, like a data breach.

Not examining these events in detail leaves applications (and the data they store) exposed to security vulnerabilities, false positives, and sub-optimal security policies, which go unnoticed.

Many Events, but Few Activities

The irony of this alert flood is that when examined in detail, many alerts are, in fact, recurring events with discernible patterns. Examples of such recurring patterns are repeated access attempts to a specific resource, multiple scanning attempts from the same origin IP, or execution of a known attack vector.

Traditionally, web application firewall (WAF) systems log each individual event, without taking into consideration the overall context of the alert. For example, a legitimate attempt by a large group of users to access a common resource (such as a specific file or page) and (clearly illegitimate) repeated scanning attempts by the same source IP address would all be logged the same way: each individual event would be logged once, and not cross-linked to similar events.

How to Achieve Security at Scale

Achieving security at scale requires being able to separate the wheat from the chaff when it comes to security events. That is, distinguishing between large amounts of routine user actions, which have little implication for application security, and high-priority alerts which are indicative of malicious hacking attempts or may otherwise suggest a problem with security policy configuration (for example, a legitimate request being blocked).

In order to be able to make this separation, there are a number of questions that security administrators need to ask themselves:

  1. How frequently does this event occur? That is, does this behavior occur often, or is it a one-off event?
  2. What is the trend for this event? How does this type of behavior develop over time? Does it occur at a constant rate, or is there a sudden massive spike?
  3. What is the relevant request header data? What are the relevant request methods, destination URLs, resource types, and source/destination details?
  4. Is this type of activity indicative of a known attack? Is there a legitimate explanation for this event, or does it usually signify an attempted attack?

Each of these questions can go either way in terms of explaining security events. However, administrators will do well to have all of this information readily available, in order to reach an informed assessment based on the overall context.

Having such tools – and taking the overall context into consideration – gives security professionals a number of significant benefits:

  • Increased visibility of security events, to better understand application behavior and focus on high-priority alerts.
  • More intelligent decision making on which events should be blocked or allowed.
  • A more effective response in order to secure applications against attacks as much as possible, while also making sure that legitimate users are not impacted.

Radware’s Application Analytics

Radware developed Application Analytics – the latest feature in Radware’s Cloud WAF Service – to address these customer needs.

Radware’s Cloud WAF Application Analytics works via a process of analysis based on machine-learning algorithms, which identify patterns and group similar application events into recurring user activities:

  1. Data mapping of the log data set, to identify all potential event types
  2. Cluster analysis using machine learning algorithms to identify similar events with common characteristics
  3. Activity grouping of recurring user activities with common identifiers
  4. Data enrichment with supplemental details to provide further context on activities

Radware’s Cloud WAF Application Analytics takes large numbers of recurring log events and condenses them into a small number of recurring activities.
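To make the condensation concrete (a simplified sketch with made-up log fields, not Radware’s actual machine-learning pipeline), even naive grouping by shared identifiers collapses thousands of similar log lines into a handful of activities:

```python
# Sketch: condense recurring WAF log events into activities by shared identifiers.
# Log fields and grouping keys are simplified assumptions; the real feature uses
# machine-learning clustering and data enrichment.
from collections import Counter

events = [
    {"src_ip": "203.0.113.5", "method": "GET", "url": "/wp-login.php", "action": "blocked"},
    {"src_ip": "203.0.113.5", "method": "GET", "url": "/wp-login.php", "action": "blocked"},
    {"src_ip": "198.51.100.2", "method": "POST", "url": "/api/cart", "action": "allowed"},
    # ... thousands more in practice
]

activities = Counter(
    (e["src_ip"], e["method"], e["url"], e["action"]) for e in events
)

for (src, method, url, action), count in activities.most_common():
    print(f"{count:>5}x {action:<8} {method} {url} from {src}")
```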

In several customer trials, this capability allowed Radware to reduce the number of Cloud WAF alerts from several thousand (or even tens of thousands) to a single-digit (or double digit) number of activities. This allows administrators to focus on the alerts that matter.

For example, one customer reduced over 8,000 log events on one of their applications into 12 activities, whereas another customer reduced more than 3,500 security events into 13 activities.

The benefits for security administrators are easy to see: rather than drowning in massive amounts of log events with little (or no) context to explain them, administrators now have a tool that reduces log overload into a manageable number of activities they can actually analyze and handle.

Ultimately, there is no silver bullet when it comes to WAF and application security management: administrators will always need to balance being as secure as possible (and protecting private user data) with being as accessible as possible to those same users. Cloud WAF Application Analytics is Radware’s attempt to disentangle this challenge.

Read “Radware’s 2018 Web Application Security Report” to learn more.


DDoS Attacks, HTTP Flood Attacks, Security

Rate Limiting: A Cure Worse Than the Disease?

September 5, 2018 — by Eyal Arazi

Rate limiting is a commonly-used tool to defend against application-layer (L7) DDoS attacks. However, the shortcomings of this approach raise the question of whether the cure is worse than the disease.

As more applications transition to web and cloud-based environments, application-layer (L7) DDoS attacks are becoming increasingly common and potent.

In fact, Radware research found that application-layer attacks have overtaken network-layer DDoS attacks, and HTTP floods are now the most common attack across all vectors. This is mirrored by new generations of attack tools such as the Mirai botnet, which make application-layer floods even more accessible and easier to launch.

It is, therefore, no surprise that more security vendors claim to provide protection against such attacks. The problem, however, is that the approach chosen by many vendors is rate limiting.

A More Challenging Form of DDoS Attack

What is it that makes application-layer DDoS attacks so difficult to defend against?

Application-layer DDoS attacks such as HTTP GET or HTTP POST floods are particularly difficult to protect against because they require analysis of the application-layer traffic in order to determine whether or not it is behaving legitimately.

For example, when a shopping website sees a spike in incoming HTTP traffic, is that because a DDoS attack is taking place, or because there is a flash crowd of shoppers looking for the latest hot item?

Looking at network-layer traffic volumes alone will not help us. The only option would be to look at application data directly and try to discern whether or not it is legitimate based on its behavior.

However, several vendors who claim to offer protection against application-layer DDoS attacks don’t have the capabilities to actually analyze application traffic and work out whether an attack is taking place. This leads many of them to rely on brute-force mechanisms such as HTTP rate limiting.

[You might also like: 8 Questions to Ask in DDoS Protection]

A Remedy (Almost) as Bad as the Disease

Explaining rate limiting is simple enough: when traffic goes over a certain threshold, rate limits are applied to throttle the amount of traffic to a level that the hosting server (or network pipe) can handle.
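A minimal sketch of such a threshold mechanism (illustrative, not any vendor’s implementation) makes the shortcoming obvious: once the threshold is crossed, requests are throttled regardless of whether they are legitimate or malicious:

```python
# Sketch: naive fixed-window rate limiter. Note that it never asks whether a
# request is legitimate -- above the threshold, everything is throttled alike.
import time

WINDOW_SECONDS = 1
MAX_REQUESTS_PER_WINDOW = 1000

window_start = time.monotonic()
request_count = 0

def allow_request():
    global window_start, request_count
    now = time.monotonic()
    if now - window_start >= WINDOW_SECONDS:
        window_start, request_count = now, 0   # start a new window
    request_count += 1
    return request_count <= MAX_REQUESTS_PER_WINDOW  # good and bad traffic treated the same
```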

While this sounds simple enough, it also creates several problems:

  • Rate limiting does not distinguish between good and bad traffic: It has no mechanism for determining whether a connection is legitimate or not. It is an equal-opportunity blocker of traffic.
  • Rate limiting does not actually clean traffic: An important point to emphasize is that rate limiting does not actually block any bad traffic. Bad traffic will still reach the origin server, albeit at a slower rate.
  • Rate limiting blocks legitimate users: Because it does not distinguish between good and malicious requests, rate limiting results in a high degree of false positives. This leads to legitimate users being blocked from reaching the application.

Some vendors have more granular rate limiting controls which allow limiting connections not just per application, but also per user. However, sophisticated attackers get around this by spreading attacks over a large number of attack hosts. Moreover, modern web applications (and browsers) frequently use multiple concurrent connections, so limiting concurrent connections per user will likely impact legitimate users.

Considering that the aim of a DDoS attack is usually to disrupt the availability of web applications and prevent legitimate users from reaching them, we can see that rate limiting does not actually mitigate the problem: bad traffic will still reach the application, and legitimate users will be blocked.

In other words – rate limiting administers the pains of the medication, without providing the benefit of a remedy.

This is not to say that rate limiting cannot be a useful discipline in mitigating application-layer attacks, but it should be used as a last line of defense, when all else fails, and not as a first response.

A Better Approach with Behavioral Detection

An alternative approach to rate limiting – which would deliver better results – is to use a positive security model based on behavioral analysis.

Most defense mechanisms – including rate limiting – subscribe to a ‘negative’ security model. In a nutshell, it means that all traffic will be allowed through, except what is explicitly known to be malicious. This is how the majority of signature-based and volume-based DDoS and WAF solutions work.

A ‘positive’ security model, on the other hand, works the other way around: it uses behavioral-based learning processes to learn what constitutes legitimate user behavior and establishes a baseline of legitimate traffic patterns. It will then block any request that does not conform to this traffic pattern.

Such an approach is particularly useful when it comes to application-layer DDoS attacks since it can look at application-layer behavior, and determine whether this behavior adheres to recognized legitimate patterns. One such example would be to determine whether a spike in traffic is legitimate behavior or the result of a DDoS attack.
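A highly simplified sketch of the baseline idea (hypothetical numbers; real behavioral engines model many traffic parameters, not just request rate): learn a baseline from historical traffic and flag only strong deviations from it:

```python
# Sketch: learn a baseline request rate and flag strong deviations from it.
# A real behavioral engine models many traffic parameters, not just volume.
from statistics import mean, stdev

historical_rps = [120, 135, 118, 140, 131, 125, 129, 138]  # requests/sec samples (hypothetical)

baseline = mean(historical_rps)
spread = stdev(historical_rps)

def is_anomalous(current_rps, sigmas=4):
    """Flag traffic only when it deviates far beyond the learned baseline."""
    return current_rps > baseline + sigmas * spread

print(is_anomalous(150))   # False: a moderate increase, still within the learned range
print(is_anomalous(5000))  # True: far outside learned behavior -> likely attack
```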

[You might also like: 5 Must-Have DDoS Protection Technologies]

The advantages of behavioral-based detection are numerous:

  • Blocks bad traffic: Unlike rate limiting, behavioral-based detection actually ‘scrubs’ bad traffic out, leaving only legitimate traffic to reach the application.
  • Reduces false positives: One of the key problems of rate limiting is the high number of false positives. A positive security approach greatly reduces this problem.
  • Does not block legitimate users: Most importantly, behavioral traffic analysis results in fewer (or no) blocked users, meaning that you don’t lose customers, reputation, or revenue.

That’s Great, but How Do I Know If I Have It?

The best way to find out what protections you have is to be informed. Here are a few questions to ask your security vendor:

  1. Do you provide application-layer (L7) DDoS protection as part of your DDoS solution, or does it require an add-on WAF component?
  2. Do you use behavioral learning algorithms to establish ‘legitimate’ traffic patterns?
  3. How do you distinguish between good and bad traffic?
  4. Do you have application-layer DDoS protection that goes beyond rate limiting?

If your vendor has these capabilities, make sure they’re turned on and enabled. If not, the increase in application-layer DDoS attacks means that it might be time to look for other alternatives.

Read “2017-2018 Global Application & Network Security Report” to learn more.


DDoS, Security

8 Questions to Ask in DDoS Protection

June 7, 2018 — by Eyal Arazi


As DDoS attacks grow more frequent, more powerful, and more sophisticated, many organizations turn to DDoS mitigation providers to protect themselves against attack.

Before evaluating DDoS protection solutions, it is important to assess the needs, objectives, and constraints of the organization, network and applications. These factors will define the criteria for selecting the optimal solution.

Attack Types & Vectors, DDoS, Security, SSL

5 Must-Have DDoS Protection Technologies

May 30, 2018 — by Eyal Arazi


Distributed Denial of Service (DDoS) attacks have entered the 1 Tbps era. However, Radware research shows that DDoS attacks are not just getting bigger; they’re also getting more sophisticated. Hackers are constantly coming up with new and innovative ways of bypassing traditional DDoS defenses and compromising organizations’ service availability.