
Using Application Analytics to Achieve Security at Scale

October 16, 2018 — by Eyal Arazi


Are you overwhelmed by the number of security events per day? If so, you are not alone.

Alert Fatigue is Leaving You Exposed

It is not uncommon for security administrators to receive tens of thousands of security alerts per day, leading to alert fatigue – and worse – security events going unattended.

Tellingly, a study conducted by the Cloud Security Alliance (CSA) found that over 40% of security professionals think alerts lack actionable intelligence that can help them resolve security events. More than 30% of security professionals ignore alerts altogether because so many of them are false positives. Similarly, a study by Fidelis Cybersecurity found that almost two-thirds of organizations review fewer than 25% of their alerts each day, and only 6% triage 75% or more of the alerts they receive.

As a result of this alert flood, many organizations leave the majority of their security alerts unchecked. This is a particular problem in the world of application security, as customer-facing applications frequently generate massive amounts of security events based on user activity. Although many of these events are benign, some are not, and it only takes one overlooked alert to open the door to a devastating security event, such as a data breach.

Not examining these events in detail leaves applications (and the data they store) exposed to security vulnerabilities, false positives, and sub-optimal security policies that go unnoticed.

Many Events, but Few Activities

The irony of this alert flood is that, when examined in detail, many alerts are in fact recurring events with discernible patterns. Examples of such recurring patterns include repeated access to a specific resource, multiple scanning attempts from the same origin IP, or execution of a known attack vector.

Traditionally, web application firewall (WAF) systems log each individual event without taking into consideration the overall context of the alert. For example, a legitimate attempt by a large group of users to access a common resource (such as a specific file or page) and clearly illegitimate repeated scanning attempts by the same source IP address would both be logged the same way: each individual event is logged once and is not cross-linked to similar events.

How to Achieve Security at Scale

Achieving security at scale requires being able to separate the wheat from the chaff when it comes to security events. That is, distinguishing between large amounts of routine user actions that have little implication for application security, and high-priority alerts that indicate malicious hacking attempts or otherwise suggest a problem with the security policy configuration (for example, a legitimate request being blocked).

To make this separation, security administrators need to ask themselves a number of questions:

  1. How frequently does this event occur? That is, does this behavior occur often, or is it a one-off event?
  2. What is the trend for this event? How does this behavior develop over time? Does it occur at a constant rate, or is there a sudden massive spike?
  3. What is the relevant request header data? What are the request methods, destination URLs, resource types, and source/destination details involved?
  4. Is this type of activity indicative of a known attack? Is there a legitimate explanation for this event, or does it usually signify an attempted attack?

Each of these questions can cut either way in explaining a security event. However, administrators will do well to have all of this information readily available in order to reach an informed assessment based on the overall context.
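At least the first two of these questions can often be answered directly from raw log data with a simple aggregation. The sketch below illustrates the idea in Python, assuming a hypothetical log format with timestamp, event-type, and source-IP fields (real WAF exports will use different field names and structures):

```python
from collections import Counter
from datetime import datetime

# Hypothetical WAF log records; field names here are illustrative only.
events = [
    {"timestamp": datetime(2018, 10, 15, 9, 0), "event_type": "path_traversal", "source_ip": "203.0.113.7"},
    {"timestamp": datetime(2018, 10, 15, 9, 1), "event_type": "path_traversal", "source_ip": "203.0.113.7"},
    {"timestamp": datetime(2018, 10, 15, 14, 30), "event_type": "sql_injection", "source_ip": "198.51.100.2"},
]

# Question 1: how frequently does each event type occur?
frequency = Counter(e["event_type"] for e in events)

# Question 2: what is the trend? Bucket events per hour to spot sudden spikes.
def hourly_trend(events, event_type):
    buckets = Counter()
    for e in events:
        if e["event_type"] == event_type:
            buckets[e["timestamp"].replace(minute=0, second=0)] += 1
    return sorted(buckets.items())

print(frequency)                               # overall frequency per event type
print(hourly_trend(events, "path_traversal"))  # hourly counts reveal the trend
```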

Having such tools – and taking the overall context into consideration – provides security professionals with a number of significant benefits:

  • Increased visibility of security events, to better understand application behavior and focus on high-priority alerts.
  • More intelligent decision making on which events should be blocked or allowed.
  • A more effective response in order to secure applications against attacks as much as possible, while also making sure that legitimate users are not impacted.

Radware’s Application Analytics

Radware developed Application Analytics, the latest feature in Radware’s Cloud WAF Service, to address these customer needs.

Radware’s Cloud WAF Application Analytics applies machine-learning algorithms that identify patterns and group similar application events into recurring user activities:

  1. Data mapping of the log data set, to identify all potential event types
  2. Cluster analysis using machine learning algorithms to identify similar events with common characteristics
  3. Activity grouping of recurring user activities with common identifiers
  4. Data enrichment with supplemental details to provide further context on each activity
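Radware has not published the internals of this pipeline, so the following is only a rough sketch of how the four steps above could be approximated with general-purpose tooling. The event fields are hypothetical, and scikit-learn’s DBSCAN stands in for whatever clustering the actual product uses:

```python
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction import DictVectorizer

# Step 1 - data mapping: reduce each log event to a dict of categorical features.
# These field names are illustrative stand-ins for real WAF log attributes.
events = [
    {"method": "GET", "uri": "/login", "rule": "brute_force", "src": "203.0.113.7"},
    {"method": "GET", "uri": "/login", "rule": "brute_force", "src": "203.0.113.7"},
    {"method": "POST", "uri": "/search", "rule": "sql_injection", "src": "198.51.100.2"},
]

# Step 2 - cluster analysis: one-hot encode the features and group similar events.
vectors = DictVectorizer(sparse=False).fit_transform(events)
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(vectors)

# Step 3 - activity grouping: events sharing a cluster label form one "activity"
# (DBSCAN labels outliers -1, so rare one-off events remain visible separately).
activities = {}
for event, label in zip(events, labels):
    activities.setdefault(label, []).append(event)

# Step 4 - data enrichment: attach supplemental context to each activity (here
# just a count; a real system might add geo-IP, reputation, or rule metadata).
for label, members in activities.items():
    print(f"activity {label}: {len(members)} events, e.g. {members[0]}")
```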

Radware’s Cloud WAF Application Analytics takes large numbers of recurring log events and condenses them into a small number of recurring activities.

In several customer trials, this capability allowed Radware to reduce the number of Cloud WAF alerts from several thousand (or even tens of thousands) to a single-digit (or double-digit) number of activities. This allows administrators to focus on the alerts that matter.

For example, one customer reduced over 8,000 log events on one of their applications into 12 activities, while another customer condensed more than 3,500 security events into 13 activities.

The benefits for security administrators are easy to see: rather than drowning in massive amounts of log events with little (or no) context to explain them, they now have a tool that reduces log overload into a manageable number of activities to analyze.

Ultimately, there is no silver bullet when it comes to WAF and application security management: administrators will always need to balance being as secure as possible (and protecting private user data) with being as accessible as possible to those same users. Cloud WAF Application Analytics is Radware’s attempt to untangle this challenge.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now


Rate Limiting: A Cure Worse Than the Disease?

September 5, 2018 — by Eyal Arazi

Rate limiting is a commonly used tool to defend against application-layer (L7) DDoS attacks. However, the shortcomings of this approach raise the question of whether the cure is worse than the disease.

As more applications transition to web and cloud-based environments, application-layer (L7) DDoS attacks are becoming increasingly common and potent.

In fact, Radware research found that application-layer attacks have overtaken network-layer DDoS attacks, and HTTP floods are now the single most common attack across all vectors. This is mirrored by new generations of attack tools such as the Mirai botnet, which make application-layer floods even more accessible and easier to launch.

It is, therefore, no surprise that more security vendors claim to provide protection against such attacks. The problem, however, is that the approach chosen by many vendors is rate limiting.

A More Challenging Form of DDoS Attack

What is it that makes application-layer DDoS attacks so difficult to defend against?

Application-layer DDoS attacks such as HTTP GET or HTTP POST floods are particularly difficult to protect against because they require analysis of the application-layer traffic in order to determine whether or not it is behaving legitimately.

For example, when a shopping website sees a spike in incoming HTTP traffic, is that because a DDoS attack is taking place, or because there is a flash crowd of shoppers looking for the latest hot item?

Looking at network-layer traffic volumes alone will not help us. The only option would be to look at application data directly and try to discern whether or not it is legitimate based on its behavior.

However, several vendors who claim to offer protection against application-layer DDoS attacks don’t have the capabilities to actually analyze application traffic and work out whether an attack is taking place. This leads many of them to rely on brute-force mechanisms such as HTTP rate limiting.

[You might also like: 8 Questions to Ask in DDoS Protection]

A Remedy (Almost) as Bad as the Disease

Explaining rate limiting is simple enough: when traffic goes over a certain threshold, rate limits are applied to throttle the amount of traffic to a level that the hosting server (or network pipe) can handle.
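To make the mechanism concrete, here is a minimal sketch of one common implementation of such throttling, a token bucket. This is a generic illustration, not any particular vendor’s code:

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request passes through
        return False      # request is throttled - whether legitimate or not

bucket = TokenBucket(rate=100, capacity=200)  # 100 req/s, bursts of up to 200
if not bucket.allow():
    print("429 Too Many Requests")
```

Note that `allow()` sees nothing but timing; the decision to throttle carries no information about who sent the request or why.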

While this sounds straightforward, it also creates several problems:

  • Rate limiting does not distinguish between good and bad traffic: It has no mechanism for determining whether a connection is legitimate or not. It is an equal-opportunity blocker of traffic.
  • Rate limiting does not actually clean traffic: An important point to emphasize is that rate limiting does not actually block any bad traffic. Bad traffic will still reach the origin server, albeit at a slower rate.
  • Rate limiting blocks legitimate users: Because rate limiting does not distinguish between good and malicious requests, it results in a high degree of false positives, which leads to legitimate users being blocked from reaching the application.

Some vendors have more granular rate limiting controls which allow limiting connections not just per application, but also per user. However, sophisticated attackers get around this by spreading attacks over a large number of attack hosts. Moreover, modern web applications (and browsers) frequently use multiple concurrent connections, so limiting concurrent connections per user will likely impact legitimate users.

Considering that the aim of a DDoS attack is usually to disrupt the availability of web applications and prevent legitimate users from reaching them, we can see that rate limiting does not actually mitigate the problem: bad traffic will still reach the application, and legitimate users will be blocked.

In other words – rate limiting administers the pain of the medication without providing the benefit of a remedy.

This is not to say that rate limiting cannot be useful in mitigating application-layer attacks, but it should be used as a last line of defense when all else fails, not as a first response.

A Better Approach: Behavioral Detection

An alternative approach to rate limiting – which would deliver better results – is to use a positive security model based on behavioral analysis.

Most defense mechanisms – including rate limiting – subscribe to a ‘negative’ security model. In a nutshell, it means that all traffic will be allowed through, except what is explicitly known to be malicious. This is how the majority of signature-based and volume-based DDoS and WAF solutions work.

A ‘positive’ security model, on the other hand, works the other way around: it uses behavioral-based learning processes to learn what constitutes legitimate user behavior and establishes a baseline of legitimate traffic patterns. It will then block any request that does not conform to this traffic pattern.

Such an approach is particularly useful when it comes to application-layer DDoS attacks since it can look at application-layer behavior, and determine whether this behavior adheres to recognized legitimate patterns. One such example would be to determine whether a spike in traffic is legitimate behavior or the result of a DDoS attack.
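As a highly simplified sketch of what such a baseline might look like, the example below learns the mean and standard deviation of per-interval request counts and flags large deviations. Real positive-security engines model many more traffic dimensions than this single rate, which is assumed here purely for brevity:

```python
import statistics

class RateBaseline:
    """Learn a per-interval request-rate baseline and flag anomalous spikes."""

    def __init__(self, threshold_sigmas: float = 3.0):
        self.history = []              # request counts from past intervals
        self.threshold = threshold_sigmas

    def observe(self, count: int):
        self.history.append(count)     # extend the learned baseline

    def is_anomalous(self, count: int) -> bool:
        if len(self.history) < 30:     # need enough data to form a baseline
            return False
        mean = statistics.mean(self.history)
        stdev = statistics.pstdev(self.history) or 1.0
        # Flag counts far above the learned norm; a real system would add
        # secondary checks (e.g. source diversity) to spare flash crowds.
        return (count - mean) / stdev > self.threshold

baseline = RateBaseline()
for count in [95, 102, 98, 110, 105] * 6:  # 30 "normal" intervals
    baseline.observe(count)
print(baseline.is_anomalous(108))  # False - within normal variation
print(baseline.is_anomalous(900))  # True - likely attack traffic
```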

[You might also like: 5 Must-Have DDoS Protection Technologies]

The advantages of behavioral-based detection are numerous:

  • Blocks bad traffic: Unlike rate limiting, behavioral-based detection actually ‘scrubs’ bad traffic out, leaving only legitimate traffic to reach the application.
  • Reduces false positives: One of the key problems of rate limiting is the high number of false positives. A positive security approach greatly reduces this problem.
  • Does not block legitimate users: Most importantly, behavioral traffic analysis results in fewer (or no) blocked users, meaning that you don’t lose customers, reputation, and revenue.

That’s Great, but How Do I Know If I Have It?

The best way to find out what protections you have is to be informed. Here are a few questions to ask your security vendor:

  1. Do you provide application-layer (L7) DDoS protection as part of your DDoS solution, or does it require an add-on WAF component?
  2. Do you use behavioral learning algorithms to establish ‘legitimate’ traffic patterns?
  3. How do you distinguish between good and bad traffic?
  4. Do you have application-layer DDoS protection that goes beyond rate limiting?

If your vendor has these capabilities, make sure they are turned on. If not, the increase in application-layer DDoS attacks means that it might be time to look for alternatives.

Read “2017-2018 Global Application & Network Security Report” to learn more.

Download Now


8 Questions to Ask in DDoS Protection

June 7, 2018 — by Eyal Arazi


As DDoS attacks grow more frequent, more powerful, and more sophisticated, many organizations turn to DDoS mitigation providers to protect themselves against attack.

Before evaluating DDoS protection solutions, it is important to assess the needs, objectives, and constraints of the organization, network and applications. These factors will define the criteria for selecting the optimal solution.


5 Must-Have DDoS Protection Technologies

May 30, 2018 — by Eyal Arazi


Distributed Denial of Service (DDoS) attacks have entered the 1 Tbps era. However, Radware research shows that DDoS attacks are not just getting bigger; they’re also getting more sophisticated. Hackers are constantly coming up with new and innovative ways of bypassing traditional DDoS defenses and compromising organizations’ service availability.


Choosing the Right DDoS Solution – Part IV: Hybrid Protection

April 24, 2018 — by Eyal Arazi


This is the last part of the blog series exploring the various alternatives for protection against DDoS attacks, and how to choose the optimal solution for you. The first part of this series covered premise-based hardware solutions, the second part discussed on-demand cloud solutions, and the third part covered always-on cloud solutions. This final piece will explore hybrid DDoS solutions, which combine both hardware and cloud-based components.


Choosing the Right DDoS Solution – Part III: Always-On Cloud Service

April 4, 2018 — by Eyal Arazi


This blog series dives into the different DDoS protection models in order to help customers choose the optimal protection for their particular use case. The first parts of this series covered premise-based appliances and on-demand cloud services. This installment will cover always-on cloud DDoS protection deployments, their advantages and drawbacks, and which use cases they suit best. The final part of this series will focus on hybrid deployments, which combine premise-based and cloud-based protections.


Choosing the Right DDoS Solution – Part II: On-Demand Cloud Service

March 29, 2018 — by Eyal Arazi


This blog series explores the various options for DDoS protection and helps organizations choose the optimal solution for themselves. The first part of this series covered the premise-based DDoS mitigation appliance. This installment will provide an overview of on-demand cloud-based solutions. Subsequent chapters will also cover always-on and hybrid solutions.


Choosing the Right DDoS Solution – Part I: On-Prem Appliance

March 14, 2018 — by Eyal Arazi


As DDoS attacks grow more frequent, more powerful, and more sophisticated, many organizations turn to DDoS mitigation providers to protect themselves against attacks.

However, DDoS protection is not a one-size-fits-all fixed menu; rather, it is an a-la-carte buffet of multiple choices. Each option has its unique advantages and drawbacks, and it is up to the customer to select the optimal solution that best fits their needs, threats, and budget.

This blog series explores the various options for DDoS protection deployments and discusses the considerations, advantages and drawbacks of each approach, and who it is usually best suited for.


You Need a New Approach to Stop Evasive Malware

February 14, 2018 — by Eyal Arazi


Evasive malware has become a key threat to businesses’ sensitive data. Stealing and selling sensitive data on the Darknet is a lucrative business for hackers, who increasingly rely on evasive malware to penetrate corporate networks.

A study by Verizon found that over 50% of data breaches involve the usage of malware in some capacity. Indeed, some of the largest and best-known data breaches on record, such as Target, Anthem Health, The Home Depot and the U.S. Federal Office of Personnel Management (OPM) were the result of evasive malware running undetected in the network over long periods. These organizations all have large security teams, massive IT budgets and multi-layered anti-malware protections. And yet, in each of these cases these defenses were all circumvented by evasive malware.