

Why Hybrid Always-On Protection Is Your Best Bet

June 19, 2019 — by Eyal Arazi


Users today want more. The ubiquity and convenience of online competition mean that customers want everything better, faster, and cheaper. One key component of the user experience is service availability. Customers expect applications and online services to be constantly available and responsive.

The problem, however, is that a new generation of larger and more sophisticated Distributed Denial of Service (DDoS) attacks is making DDoS protection a more challenging task than ever before. Massive IoT botnets are resulting in ever-larger volumetric DDoS attacks, while more sophisticated application-layer attacks find new ways of exhausting server resources. Above all, the ongoing shift to encrypted traffic is creating a new challenge with potent SSL DDoS floods.

Traditional DDoS defenses – whether premise-based or cloud-based – provide incomplete solutions that force inherent trade-offs between high-capacity volumetric protection, protection against sophisticated application-layer DDoS attacks, and the handling of SSL certificates. The solution, therefore, is to adopt a new hybrid DDoS protection model that combines premise-based appliances with an always-on cloud service.

Full Protection Requires Looking Both Ways

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.

[You may also like: DDoS Protection Requires Looking Both Ways]

Attacks such as large-file DDoS attacks, ACK floods, scanning attacks, and others exploit the outbound communication channel for attacks that cannot be identified just by looking at ingress traffic. Such attacks are executed by sending small numbers of inbound requests, which have an asymmetric and disproportionate impact either on the outbound channel, or computing resources inside the network.
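
To make that asymmetry concrete, here is a minimal sketch (hypothetical flow records and thresholds, not any vendor's actual detection logic) that flags sources whose outbound-to-inbound byte ratio is disproportionate – the telltale signature of a small request triggering an outsized response:

```python
from collections import defaultdict

# Hypothetical flow records: (source_ip, bytes_in, bytes_out)
# bytes_in  = bytes received from the client (inbound channel)
# bytes_out = bytes sent back to the client (outbound channel)
flows = [
    ("203.0.113.5", 400, 1_200),       # normal browsing
    ("203.0.113.9", 350, 52_000_000),  # tiny request, huge response
    ("198.51.100.7", 500, 2_800),      # normal browsing
]

RATIO_THRESHOLD = 1_000    # assumed: out/in ratios above this are suspicious
MIN_OUT_BYTES = 1_000_000  # ignore small flows to limit false positives

totals = defaultdict(lambda: [0, 0])
for src, b_in, b_out in flows:
    totals[src][0] += b_in
    totals[src][1] += b_out

for src, (b_in, b_out) in totals.items():
    ratio = b_out / max(b_in, 1)
    if b_out >= MIN_OUT_BYTES and ratio >= RATIO_THRESHOLD:
        print(f"ALERT: {src} out/in ratio {ratio:.0f}:1 - possible asymmetric attack")
```

An inspection point that sees only ingress traffic can never compute this ratio, which is precisely why such attacks slip past ingress-only defenses.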

SSL is Creating New Challenges

On top of that, SSL/TLS traffic encryption is adding another layer of complexity. Within a short time, the majority of internet traffic has become encrypted. Traffic encryption helps secure customer data, and users now expect security to be part of the service experience. According to the Let’s Encrypt project, nearly 80% of worldwide internet traffic is already encrypted, and the rate is constantly growing.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

Ironically, while SSL/TLS is critical for securing user data, it also creates significant management challenges, and exposes services to a new generation of powerful DDoS attacks:

  • Increased Potency of DDoS Attacks: SSL/TLS connections require up to 15 times more resources from the target server than from the requesting host. This means that hackers can launch devastating attacks using only a small number of connections, and quickly overwhelm server resources with SSL floods (a detection sketch follows this list).
  • Masking of Data Payload: Encryption masks – by definition – the internal contents of traffic requests, preventing deep inspection of packets for malicious content. This limits the effectiveness of anti-DDoS defense layers and the types of attacks they can detect. This is particularly true for application-layer (L7) DDoS attacks, which hide under the cover of SSL encryption.
  • SSL Key Exposure: Many organizational, national, and industry regulations forbid SSL keys from being shared with third-party entities. This creates a unique challenge for organizations that must provide the most secure user experience while also protecting their SSL keys from exposure.
  • Latency and Privacy Concerns: Offloading SSL traffic in the cloud is usually a complex and time-consuming task. Most cloud-based SSL DDoS solutions require full decryption of customer traffic by the cloud provider, compromising user privacy and adding latency to customer communications.
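
As noted in the first bullet, handshake floods can be spotted without touching the payload. The sketch below (simplified TLS record parsing and an invented per-window threshold) counts ClientHello records per source – enough to flag an SSL handshake flood, though not the L7 attacks hidden inside the encrypted session:

```python
from collections import Counter

def is_client_hello(payload: bytes) -> bool:
    """True if a TCP payload starts with a TLS ClientHello record.

    TLS record header: content type 0x16 (handshake), 2-byte version,
    2-byte length; the first handshake message byte 0x01 is ClientHello.
    (Simplified: ignores records split across TCP segments.)
    """
    return len(payload) >= 6 and payload[0] == 0x16 and payload[5] == 0x01

# Hypothetical capture: (source_ip, tcp_payload)
packets = [
    ("198.51.100.7", bytes.fromhex("160301") + b"\x00\x2e\x01" + b"\x00" * 46),
    ("203.0.113.9",  bytes.fromhex("160301") + b"\x00\x2e\x01" + b"\x00" * 46),
    ("203.0.113.9",  bytes.fromhex("160301") + b"\x00\x2e\x01" + b"\x00" * 46),
]

HANDSHAKES_PER_WINDOW = 2  # assumed per-source threshold for one time window

hello_counts = Counter(src for src, p in packets if is_client_hello(p))
for src, n in hello_counts.items():
    if n >= HANDSHAKES_PER_WINDOW:
        print(f"ALERT: {src} opened {n} TLS handshakes this window - possible SSL flood")
```

This is metadata-level detection only; mitigating encrypted application-layer attacks still requires a device that holds the keys, which is the crux of the trade-offs above.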

Existing Solutions Provide Partial Coverage

The problem, however, is that existing anti-DDoS defenses cannot combine high-capacity volumetric protection with the bi-directional protection that sophisticated attacks demand.

On-Premise Appliances provide a high level of protection against a wide variety of DDoS attacks, with very low latency and fast response. In addition, being on-premise, they allow companies to deal with SSL-based attacks without exposing their encryption keys to the outside world. Since they have visibility into both inbound and outbound traffic, they offer bi-directional protection against attacks that exploit the outbound channel. However, physical appliances can’t deal with the large-scale volumetric attacks which have become commonplace in the era of massive IoT botnets.

[You may also like: How to (Securely) Share Certificates with Your Cloud Security Provider]

Cloud-based DDoS protection services, on the other hand, possess the bandwidth to deal with large-scale volumetric attacks. However, they offer visibility only into the inbound communication channel, so they have a hard time protecting against bi-directional DDoS attacks. Moreover, cloud-based SSL DDoS defenses – if the vendor has them at all – frequently require that the organization upload its SSL certificates and keys, increasing the risk of those keys being exposed.

The Optimal Solution: Hybrid Always-On Approach

For companies that place a high premium on the user experience, and wish to avoid even the slightest possible downtime as a result of DDoS attacks, the optimal solution is to deploy an always-on hybrid solution.

The hybrid approach to DDoS protection combines an on-premise hardware appliance with always-on cloud-based scrubbing capacity. This helps ensure that services are protected against any type of attack.

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

Hybrid Always-On DDoS Protection

Compared to the pure-cloud always-on deployment model, the hybrid always-on approach adds multi-layered protection against DDoS attacks that saturate the outbound pipe, and allows SSL certificates to be kept on-premise.

Benefits of the Hybrid Always-On Model

  • Multi-Layered DDoS Protection: The combination of a premise-based hardware mitigation device coupled with cloud-based scrubbing capacity offers multi-layered protection at different levels. If an attack somehow gets through the cloud protection layer, it will be stopped by the on-premise appliance.
  • Constant, Uninterrupted Volumetric Protection: Since all traffic passes through a cloud-based scrubbing center at all times, the cloud-based service provides uninterrupted, ongoing protection against high-capacity volumetric DDoS attacks.
  • Bi-Directional DDoS Protection: While cloud-based DDoS protection services inspect only the inbound traffic channel, the addition of a premise-based appliance allows organizations to inspect the outbound channel, as well, thereby protecting themselves against two-way DDoS attacks which can saturate the outbound pipe, or otherwise require visibility to return traffic in order to identify attack patterns.
  • Reduced SSL Key Exposure: Many national or industry regulations require that encryption keys not be shared with anyone else. The inclusion of a premise-based hardware appliance allows organizations to protect themselves against encrypted DDoS attacks while keeping their SSL keys in-house.
  • Decreased Latency for Encrypted Traffic: SSL offloading in the cloud is frequently a complex and time-consuming affair, which adds much latency to user communications. Since inspection of SSL traffic in the hybrid always-on model is done primarily by the on-premise hardware appliance, users enjoy faster response times and lower latency.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Guaranteeing service availability while simultaneously ensuring the quality of the customer experience is a multi-faceted and complex proposition. Organizations are challenged by growth in the size of DDoS attacks, the increase in sophistication of application-layer DDoS attacks, and the challenges brought about by the shift to SSL encryption.

Deploying a hybrid always-on solution provides visibility into both inbound and outbound traffic, enhanced protection against application-layer and encrypted attacks, and keeps SSL keys in-house, without exposing them to the outside.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now


What Do Banks and Cybersecurity Have in Common? Everything.

February 7, 2019 — by Radware


New cyber-security threats require new solutions. New solutions require a project to implement them. The problems and solutions seem infinite while budgets remain bounded. Therefore, the challenge becomes how to identify the priority threats, select the solutions that deliver the best ROI and stretch dollars to maximize your organization’s protection. Consultants and industry analysts can help, but they too can be costly options that don’t always provide the correct advice.

So how best to simplify the decision-making process? Use an analogy. Consider that every cybersecurity solution has a counterpart in the physical world. To illustrate this point, consider the security measures at banks. They make a perfect analogy, because banks are just like applications or computing environments; both contain valuables that criminals are eager to steal.

The first line of defense at a bank is the front door, which is designed to allow people to enter and leave while providing a first layer of defense against thieves. Network firewalls fulfill the same role within the realm of cyber security. They allow specific types of traffic to enter an organization’s network but block mischievous visitors from entering. While firewalls are an effective first line of defense, they’re not impervious. Just like surreptitious robbers such as Billy the Kid or John Dillinger, SSL/TLS-based encrypted attacks or nefarious malware can sneak through this digital “front door” via a standard port.

Past the entrance there is often a security guard, the equivalent of an IPS or anti-malware device. This “security guard” – typically an anti-malware and/or heuristic-based IPS function – seeks to identify unusual behavior or other indicators that trouble has entered the bank, such as somebody wearing a ski mask or perhaps carrying a concealed weapon.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Once the hacker gets past these perimeter security measures, they find themselves at the presentation layer of the application – or, in the case of a bank, the teller. There is security here as well: first, authentication (do you have an account?) and second, two-factor authentication (an ATM card and security PIN). IPS and anti-malware devices work in concert with SIEM management solutions to serve as security cameras, performing additional security checks. Just like a bank leveraging the FBI’s Most Wanted List, these solutions leverage crowdsourcing and big-data analytics to analyze data from a massive global community and identify bank-robbing malware in advance.

A robber will often demand access to the bank’s vault. In the realm of IT, this is the database, where valuable information such as passwords, credit card or financial transaction information or healthcare data is stored. There are several ways of protecting this data, or at the very least, monitoring it. Encryption and database application monitoring solutions are the most common.

Adapting for the Future: DDoS Mitigation

To understand how and why cyber-security models will have to adapt to meet future threats, let’s outline three obstacles they’ll have to overcome in the near future: advanced DDoS mitigation, encrypted cyber-attacks, and DevOps and agile software development.

[You may also like: Agile, DevOps and Load Balancers: Evolution of Network Operations]

A DDoS attack is any cyber-attack that overwhelms a company’s website or network and impairs the organization’s ability to conduct business. Take an e-commerce business for example. If somebody wanted to prevent the organization from conducting business, it’s not necessary to hack the website but simply to make it difficult for visitors to access it.

Leveraging the bank analogy, this is why banks and financial institutions leverage multiple layers of security: it provides an integrated, redundant defense designed to meet a multitude of potential situations in the unlikely event a bank is robbed. This also includes the ability to quickly and effectively communicate with law enforcement. In the world of cyber security, multi-layered defense is also essential. Why? Because preparing for “common” DDoS attacks is no longer enough. With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever. This is why hybrid protection, which combines both on-premise and cloud-based mitigation services, is critical.

[You may also like: 8 Questions to Ask in DDoS Protection]

Why two systems when it comes to cyber security? Because the combination offers the best of both worlds. When a DDoS solution is deployed on-premise, organizations benefit from immediate and automatic attack detection and mitigation. Within a few seconds of the start of a cyber-assault, the online services are protected and the attack is mitigated. However, on-premise DDoS solutions cannot handle volumetric network floods that saturate the Internet pipe. These attacks must be mitigated from the cloud.

Hybrid DDoS protections aspire to offer best-of-breed attack mitigation by combining on-premise and cloud mitigation into a single, integrated solution. The hybrid solution chooses the right mitigation location and technique based on attack characteristics. In the hybrid solution, attack detection and mitigation starts immediately and automatically using the on-premise attack mitigation device. This stops various attacks from diminishing the availability of the online services. All attacks are mitigated on-premise, unless they threaten to block the Internet pipe of the organization. In case of pipe saturation, the hybrid solution activates cloud mitigation and the traffic is diverted to the cloud, where it is scrubbed before being sent back to the enterprise.
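
The diversion logic described above can be sketched in a few lines. The example below uses an invented pipe size and thresholds, with a hypothetical trigger_cloud_diversion() hook standing in for the real BGP or DNS diversion mechanics:

```python
PIPE_CAPACITY_MBPS = 1_000   # assumed size of the organization's Internet pipe
DIVERSION_THRESHOLD = 0.8    # divert when utilization exceeds 80%
RETURN_THRESHOLD = 0.3       # return traffic on-premise once it calms down

diverted = False

def trigger_cloud_diversion(active: bool) -> None:
    """Hypothetical hook: announce/withdraw routes toward the scrubbing center."""
    print("diverting to cloud scrubbing" if active else "returning traffic on-premise")

def on_utilization_sample(mbps: float) -> None:
    """Called periodically with the measured inbound rate on the pipe."""
    global diverted
    utilization = mbps / PIPE_CAPACITY_MBPS
    if not diverted and utilization >= DIVERSION_THRESHOLD:
        diverted = True
        trigger_cloud_diversion(True)   # pipe nearing saturation: scrub in the cloud
    elif diverted and utilization <= RETURN_THRESHOLD:
        diverted = False
        trigger_cloud_diversion(False)  # attack subsided: mitigate on-premise again

for sample in (200, 650, 900, 950, 400, 250):  # simulated samples, in Mbps
    on_utilization_sample(sample)
```

The two thresholds add hysteresis, so the controller does not flap between on-premise and cloud mitigation on every fluctuation in traffic.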

[You may also like: Choosing the Right DDoS Solution – Part IV: Hybrid Protection]

An ideal hybrid solution also shares essential information about the attack between on-premise mitigation devices and cloud devices to accelerate and enhance the mitigation of the attack once it reaches the cloud.

Inspecting Encrypted Data

Companies have been encrypting data for well over 20 years. Today, over 50% of Internet traffic is encrypted. SSL/TLS encryption is still the most effective way to protect data, as it ties the encryption to both the source and destination. This is a double-edged sword, however. Hackers are now leveraging encryption to create new, stealthy attack vectors for malware infection and data exfiltration. In essence, they’re a wolf in sheep’s clothing. To stop hackers from leveraging SSL/TLS-based cyber-attacks, organizations need significant computing resources to inspect communications and ensure they’re not carrying malware. These growing resource requirements make it challenging for anything but purpose-built hardware to conduct inspection.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

The equivalent in the banking world is twofold. If somebody were to enter wearing a ski mask, that person probably wouldn’t be allowed to conduct a transaction, or secondly, there can be additional security checks when somebody enters a bank and requests a large or unique withdrawal.

Dealing with DevOps and Agile Software Development

Lastly, how do we ensure that, as applications become more complex, they don’t become increasingly vulnerable either from coding errors or from newly deployed functionality associated with DevOps or agile development practices? The problem is most cyber-security solutions focus on stopping existing threats. To use our bank analogy again, existing security solutions mean that (ideally), a career criminal can’t enter a bank, someone carrying a concealed weapon is stopped or somebody acting suspiciously is blocked from making a transaction. However, nothing stops somebody with no criminal background or conducting no suspicious activity from entering the bank. The bank’s security systems must be updated to look for other “indicators” that this person could represent a threat.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

In the world of cyber-security, the key is implementing a web application firewall that adapts to evolving threats and applications. A WAF accomplishes this by automatically detecting and protecting new web applications as they are added to the network via automatic policy generation. It should also minimize both false positives and false negatives. Why? Because just like a bank, web applications are accessed both by legitimate users and by undesired attackers (malignant users whose goal is to harm the application and/or steal data). One of the biggest challenges in protecting web applications is the ability to accurately differentiate between the two, identifying and blocking security threats without disturbing legitimate traffic.

Adaptability is the Name of the Game

The world we live in can be a dangerous place, both physically and digitally. Threats are constantly changing, forcing both financial institutions and organizations to adapt their security solutions and processes. When contemplating the next steps, consider the following:

  • Use common sense and logic. The marketplace is saturated with offerings. Understand how a cybersecurity solution will fit into your existing infrastructure and the business value it will bring by keeping your organization up and running and your customers’ data secure.
  • Understand the long-term TCO of any cyber security solution you purchase.
  • The world is changing. Ensure that any cyber security solution you implement is designed to adapt to the constantly evolving threat landscape and your organization’s operational needs.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now


Protecting Sensitive Data: A Black Swan Never Truly Sits Still

October 10, 2018 — by Mike O'Malley


The black swan – a rare and unpredictable event notorious for its ability to completely change the tides of a situation.

For cybersecurity, these nightmares can take the form of disabled critical services such as municipal electrical grids and other connected infrastructure networks, data breaches, application failures, and DDoS attacks. They can range from the level of Equifax’s 2018 breach penalty fines (estimated at close to $1.5 billion), to the bankruptcy of Code Spaces following its DDoS attack and breach (one of the 61% of SMBs that faced bankruptcy, per service provider Verizon’s investigations), to a government-wide shutdown of web access on public servants’ computers in response to a string of cyberattacks.

Litigation and regulation can only do so much to reduce the impact of black swans, but it is up to companies to prepare and defend themselves from cyberattacks that can lead to rippling effects across industries.

[You might also like: What a Breach Means to Your Business]

If It’s So Rare, Why Should My Company Care?

Companies should concern themselves with black swans to understand the depth of the potential long-term financial and reputational damage. Radware’s research on C-Suite Perspectives regarding the relationship between cybersecurity and customer experience shows that these executives prioritize customer loss (41%), brand reputation (34%), and productivity/operational loss (34%). Yet a majority of these same executives have not yet integrated security practices into wider parts of the business, such as their application DevOps teams.

The long-term damage to a company’s finances is noteworthy enough. IT provider CGI found that technology and financial companies alone can lose 5-8.5% of their enterprise value from a breach. What often goes unreported, however, is the increased customer onboarding cost needed to combat large-scale customer churn following a breach.

For the financial sector, global accounting firm KPMG found that consumers not only expect institutions to act quickly and take responsibility, but 48% are willing to switch banks due to lack of responsibility and preparation for future attacks, and untimely notification of the breaches. News publication The Financial Brand found that banking customers have an average churn rate of 20-40% in 12 months, while a potential onboarding cost per customer can be within the $300-$20,000 range. Network hardware manufacturer Cisco estimates as high as 20% of customers and opportunities could be lost.

Just imagine the customer churn rate for a recently-attacked company.

How does that affect me personally as a business leader within my company?

When data breaches occur, the first person that typically takes the blame is the CISO or CSO. A common misconception, however, is that everyone else will be spared accountability. The damage is not limited to security leadership: given the wide array of impacts that result from a cyberattack, nearly all C-level executives are at risk; examples include, but are not limited to, Equifax CEO Richard Smith, Target CEO Gregg Steinhafel and CIO Beth Jacob. The result is a sudden vacuum at the C-Suite level – a lack of leadership and direction that creates its own internal instability.

Today’s business leaders need to understand that a data breach no longer affects just the company’s reputation, but also the welfare of its customers. The mere event of a data breach can shatter the trust between the two. CEOs are now expected to be involved in managing the black swan’s consequences; in such hardships, they are particularly expected to remain the voice of the company and to provide direction and assurance to vulnerable customers.

A business leader can be ousted from the company for not having taken cybersecurity seriously enough and/or not understanding the true costs of a cyberattack – that is, if the company hasn’t filed for bankruptcy yet.

Isn’t this something that my company’s Public Relations department should be handling?

One of the biggest contributors to the chaos following a black swan is poor or absent communication from the public relations team. By not disclosing a data breach in a timely manner, companies incur the wrath of the consumer and suffer an even bigger loss in customer loyalty because of the delay. A timely announcement is expected as soon as the company discovers the incident – or, under the GDPR, within 72 hours of the discovery.

A company and its CEO should not depend solely on the public relations department to handle a black swan nightmare. Equifax revealed its data breach six weeks after the incident and still hadn’t directly contacted those affected, instead creating a website for customer inquiries. Equifax continues to suffer from customer distrust because of the lack of guidance from the company’s leadership during those critical days in 2017. At a time of confusion and mayhem, a company’s leader must remain forthcoming, reassuring and credible through the black swan’s tide-changing effects.

Following a cybersecurity black swan, the vast majority of consumers must also be convinced that all the security issues have been addressed and rectified, and that the company has a plan in place for any future incidents. Companies that fail to do so risk losing at least 1 in 10 customers – showing how far a black swan’s impact can reach within a company, well beyond the financial aspects.

How Do You Prepare for When the Black Swan Strikes?

When it comes to the black swan, the strategic method isn’t limited to being proactive or reactive; it must be preemptive, according to news publication ComputerWeekly. The black swan is feared primarily for its unpredictability. The key advantage of being preemptive is the level of detail that goes into planning: instead of reacting in real time during the chaos, or relying on a universal one-size-fits-all strategy, companies should develop multiple procedures for multiple worst-case scenarios.

Companies cannot afford to be sitting ducks waiting for the black swan to strike; they must have mitigation plans prepared for the likelihood. Extreme cyber threats and emerging cyberattack tactics pose a dual threat to the company, and how damaging they are depends on the level of cybersecurity preparation a company possesses. By implementing a strong cybersecurity architecture (internal or third-party), companies can adapt and evolve with the constantly-changing threat landscape, thereby minimizing the opportunities for hackers to take advantage.

In addition to having a well-built security system, precautions should be taken to strengthen it further, including WAF protection, SSL inspection, DDoS protection, bot protection, and more. Traditional risk management is flawed because, by nature, it emphasizes internal risks only. What’s been missing is accounting for the possibility of industry-wide black swans, such as the 2013 Target data breach, which was followed by similar breaches at Home Depot and other retailers.

It’s Time To Protect Sensitive Data

In the end, the potential impact of a black swan on a company comes down to its business leaders. Cybersecurity is no longer limited to a CISO or CSO’s decision; it is the CEO’s. As the symbol and leader of a company, CEOs need to ask themselves whether they know how their security model works. Is it easily penetrated? Can it defend against massive cyberattacks? What IP and customer data is it protecting? What would happen to the business if that data were breached?

Does it protect sensitive data?

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now


Defending Against the Mirai Botnet

September 12, 2018 — by Ron Winward


When attacks from the Mirai botnet hit the network in 2016, we all knew something was different. You could feel it. In a 31-day span, the internet suffered three record-breaking attacks: Brian Krebs’ website at 620 Gbps, OVH at 1.2 Tbps, and the widespread outages caused by the attack on Dyn DNS. Also within that window, the source code for Mirai was released to the world.

Mirai no longer holds the record for the largest volumetric attack on the Internet; that honor goes to the Memcached reflection attacks on GitHub. Notably, once the code was released, the landscape went from a few botnets with many enslaved members to many botnets with fewer members each. More botnets were fighting to enslave the same pool of devices.

[You might also like: The Dyn Attack – One Year Later]

Attackers Get Creative

Attackers, as they always do, got creative. By modifying the Mirai code, attackers could discover new devices by leveraging other known exploits. While many attackers were fighting for telnet access to IoT devices with traditional Mirai, new variants were developed to find additional methods of exploitation and infection. Examples include TR-064 exploits that were quickly added to the code (and used to infect the endpoints of service providers), a 0-day exploit on Huawei routers in several botnets, and the Reaper botnet, which includes 10 previously disclosed CVEs.

One thing that has remained the same, however, is the attack vectors that are included in the modern botnets. They’re largely all based on Mirai, and even if their infection methods differ, the attacks don’t change much.

For example, Masuta and DaddysMirai include the original Mirai vectors but removed the HTTP attack. Orion is an exact copy of the original Mirai attack table (and just like Mirai, has abandoned the PROXY attack). Owari added two new vectors, STD and XMAS.
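
On that last vector: a Christmas Tree (XMAS) packet is one with an improbable combination of TCP flags lit up at once, classically FIN, PSH, and URG. Here is a minimal sketch of recognizing one from a raw TCP header (pure Python, hypothetical input):

```python
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20
XMAS_FLAGS = FIN | PSH | URG  # the classic "lit up like a Christmas tree" combination

def is_xmas_packet(tcp_header: bytes) -> bool:
    """True if the TCP flags byte (offset 13) has FIN, PSH, and URG all set."""
    if len(tcp_header) < 14:
        return False
    return tcp_header[13] & XMAS_FLAGS == XMAS_FLAGS

# Hypothetical 20-byte TCP header with FIN+PSH+URG set
header = bytearray(20)
header[13] = FIN | PSH | URG
print(is_xmas_packet(bytes(header)))  # True
```

A well-behaved TCP stack never emits this combination, so matching on it is cheap and carries little risk of false positives.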

Understanding IoT Attacks

My background in network engineering naturally made me curious about the impact of these attacks on the network. What do they look like in flight? How is each one different? Is one more of a threat than another? I have been studying the attack vectors since they were released in 2016, but with the observation that new variants largely included the same attacks (and some twists), it was clearly worth revisiting.

[You might also like: IoT Threats: Whose problem is it?]

Today we launch a new publication, IoT Attack Handbook – A Field Guide to Understanding IoT Attacks from the Mirai Botnet and its Modern Variants. This is a collection of research on the attack vectors themselves and what they look like on the wire. You will see that they’re not much different from each other, with the only truly interesting change being the introduction of a Christmas Tree attack in Owari. But that too had some interesting challenges. You’ll have to read the guide to find out why.

It’s important to understand the capabilities of Mirai and other IoT botnets so that your organization can truly comprehend the threat. Manually reacting to these attacks is not viable, especially in a prolonged campaign. In many cases, it is possible to block some of these attacks on infrastructure devices such as core routers or upstream transit links, but in many cases, it’s not.

Effectively fighting these attacks requires specialized solutions, including behavioral technologies that can identify the threats posed by Mirai and other IoT botnets. It also requires a true understanding of how to successfully mitigate the largest attacks ever seen. Hopefully, this handbook provides the guidance and insight needed for each vector if your organization ever needs to take emergency measures.

Read the “IoT Attack Handbook – A Field Guide to Understanding IoT Attacks from the Mirai Botnet and its Modern Variants” to learn more.

Download Now


DDoS Protection is the Foundation for Application, Site and Data Availability

September 11, 2018 — by Daniel Lakier


When we think of DDoS protection, we often think about how to keep our website up and running. While searching for a security solution, you’ll find several options that are similar on the surface. The main difference is whether your organization requires a cloud, on-premise or hybrid solution that combines the best of both worlds. Finding a DDoS mitigation/protection solution seems simple, but there are several things to consider.

[You might also like: Should Business Risk Mitigation Be A Factor When We Choose Our Suppliers and Manufacturers?]

It’s important to remember that DDoS attacks don’t just cause a website to go down. While the majority do cause a service disruption, 90 percent of the time that does not mean the website is completely unavailable – rather, it suffers performance degradation. As a result, organizations need to search for a DDoS solution that can both optimize application performance and protect against DDoS attacks. The two functions are natural bedfellows.

The other thing we often forget is that most traditional DDoS solutions, whether they are on-premise or in the cloud, cannot protect us from an upstream event or a downstream event.

  1. If your carrier is hit with a DDoS attack upstream, your link may be fine but your ability to do anything would be limited. You would not receive any traffic from that pipe.
  2. If your infrastructure provider goes down due to a DDoS attack on its key infrastructure, your organization’s website will go down regardless of how well your DDoS solution is working.

Many DDoS providers will tell you these are not part of a DDoS strategy. I beg to differ.

Finding the Right DDoS Solution

DDoS protection was born out of the need to improve availability and guarantee performance.  Today, this is critical. We have become an application-driven world where digital interactions dominate. A bad experience using an app is worse for customer satisfaction and loyalty than an outage.  Most companies are moving into shared infrastructure environments—otherwise known as the “cloud”— where the performance of the underlying infrastructure is no longer controlled by the end user.

Keeping the aforementioned points in mind, here are three key features to consider when looking at modern enterprise DDoS solutions:

  1. Data center or host infrastructure rerouting capabilities give organizations the ability to reroute traffic to secondary data centers or application servers if there is a performance problem caused by something the traditional DDoS prevention solution cannot negate (a minimal health-check sketch follows this list). This may or may not be caused by a traditional DDoS attack, but either way, it’s important to understand how to mitigate the risk of a denial of service caused by infrastructure failure.
  2. Simple-to-use link or host availability solutions offer a unified interface for conducting WAN failover in the event that the upstream provider is compromised. Companies can use BGP, but BGP is complex and rigid. The future needs to be simple and flexible.
  3. Infrastructure and application performance optimization is critical. If we can limit the amount of compute-per-application transactions, we can reduce the likelihood that a capacity problem with the underlying architecture can cause an outage. Instead of thinking about just avoiding performance degradation, what if we actually improve the performance SLA while also limiting risk? It’s similar to making the decision to invest your money as opposed to burying it in the ground.
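
Here is the health-check sketch promised in point 1 – hypothetical endpoints and a deliberately naive HTTP probe, offered as an illustration rather than a substitute for a real failover or GSLB product:

```python
import urllib.request
from typing import Optional

# Hypothetical data center health endpoints, in order of preference
DATACENTERS = [
    "https://dc1.example.com/health",
    "https://dc2.example.com/health",
]

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Naive probe: healthy means the endpoint answers HTTP 200 in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_datacenter() -> Optional[str]:
    """Return the first healthy data center, falling back down the list."""
    for url in DATACENTERS:
        if is_healthy(url):
            return url
    return None  # total outage: nothing left to route to

print("routing traffic to:", pick_datacenter())
```

A real deployment would probe continuously, dampen flapping, and steer traffic via DNS or routing updates rather than a print statement – but the decision logic is the same.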

[You might also like: Marrying the Business Need with the Technology Drive: Recapping It All]

Today you can look at buying separate products to accomplish these needs, but you are then left with an age-old problem: a disparate collection of poorly integrated best-of-breed solutions that don’t work well together.

These products should work together as part of a holistic solution where each solution can compensate and enhance the performance of the other and ultimately help improve and ensure application availability, performance and reliability. The goal should be to create a resilient architecture to prevent or limit the impact of DoS and DDoS attacks of any kind.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now


Rate Limiting – A Cure Worse Than the Disease?

September 5, 2018 — by Eyal Arazi

Rate limiting is a commonly-used tool to defend against application-layer (L7) DDoS attacks. However, the shortcomings of this approach raise the question of whether the cure is worse than the disease.

As more applications transition to web and cloud-based environments, application-layer (L7) DDoS attacks are becoming increasingly common and potent.

In fact, Radware research found that application-layer attacks have overtaken network-layer DDoS attacks, and HTTP floods are now the single most common attack vector. This is mirrored by new generations of attack tools such as the Mirai botnet, which make application-layer floods even more accessible and easier to launch.

It is, therefore, no surprise that more security vendors claim to provide protection against such attacks. The problem, however, is that the approach many vendors choose is rate limiting.

A More Challenging Form of DDoS Attack

What is it that makes application-layer DDoS attacks so difficult to defend against?

Application-layer DDoS attacks such as HTTP GET or HTTP POST floods are particularly difficult to protect against because they require analysis of the application-layer traffic in order to determine whether or not it is behaving legitimately.

For example, when a shopping website sees a spike in incoming HTTP traffic, is that because a DDoS attack is taking place, or because there is a flash crowd of shoppers looking for the latest hot item?

Looking at network-layer traffic volumes alone will not help us. The only option would be to look at application data directly and try to discern whether or not it is legitimate based on its behavior.

However, several vendors who claim to offer protection against application-layer DDoS attacks don’t have the capabilities to actually analyze application traffic and work out whether an attack is taking place. This leads many of them to rely on brute-force mechanisms such as HTTP rate limiting.

[You might also like: 8 Questions to Ask in DDoS Protection]

A Remedy (Almost) as Bad as the Disease

Explaining rate limiting is simple enough: when traffic goes over a certain threshold, rate limits are applied to throttle the amount of traffic to a level that the hosting server (or network pipe) can handle.
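
The classic implementation of such a threshold is a token bucket: each request spends a token, tokens refill at a fixed rate, and a request that finds the bucket empty is throttled. A minimal sketch with invented rates (per-source keying omitted for brevity):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens/second, burst of `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the threshold: request is throttled

bucket = TokenBucket(rate=100, capacity=200)  # assumed: 100 req/s, burst of 200
print(bucket.allow())  # True until the bucket drains
```

Note what the bucket does not know: whether the request it just dropped came from a bot or from a paying customer. That blind spot is exactly the first problem below.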

While this sounds simple enough, it also creates several problems:

  • Rate limiting does not distinguish between good and bad traffic: It has no mechanism for determining whether a connection is legitimate or not. It is an equal-opportunity blocker of traffic.
  • Rate limiting does not actually clean traffic: An important point to emphasize is that rate limiting does not actually block any bad traffic. Bad traffic will still reach the origin server, albeit at a slower rate.
  • Rate limiting blocks legitimate users: Since it does not distinguish between good and malicious requests, rate limiting results in a high degree of false positives, leading to legitimate users being blocked from reaching the application.

Some vendors have more granular rate limiting controls which allow limiting connections not just per application, but also per user. However, sophisticated attackers get around this by spreading attacks over a large number of attack hosts. Moreover, modern web applications (and browsers) frequently use multiple concurrent connections, so limiting concurrent connections per user will likely impact legitimate users.

Considering that the aim of a DDoS attack is usually to disrupt the availability of web applications and prevent legitimate users from reaching them, we can see that rate limiting does not actually mitigate the problem: bad traffic will still reach the application, and legitimate users will be blocked.

In other words – rate limiting administers the pains of the medication, without providing the benefit of a remedy.

This is not to say that rate limiting cannot be a useful discipline in mitigating application-layer attacks, but it should be used as a last line of defense, when all else fails, and not as a first response.

A Better Approach with Behavioral Detection

An alternative approach to rate limiting – which would deliver better results – is to use a positive security model based on behavioral analysis.

Most defense mechanisms – including rate limiting – subscribe to a ‘negative’ security model. In a nutshell, it means that all traffic will be allowed through, except what is explicitly known to be malicious. This is how the majority of signature-based and volume-based DDoS and WAF solutions work.

A ‘positive’ security model, on the other hand, works the other way around: it uses behavioral-based learning processes to learn what constitutes legitimate user behavior and establishes a baseline of legitimate traffic patterns. It will then block any request that does not conform to this traffic pattern.

Such an approach is particularly useful when it comes to application-layer DDoS attacks since it can look at application-layer behavior, and determine whether this behavior adheres to recognized legitimate patterns. One such example would be to determine whether a spike in traffic is legitimate behavior or the result of a DDoS attack.
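
In its most stripped-down form, the idea looks like this sketch – a single rolling baseline over invented request-rate samples, whereas real behavioral engines learn far richer, multi-dimensional models:

```python
from statistics import mean, stdev

# Hypothetical requests-per-minute samples observed during normal operation
baseline_samples = [480, 510, 495, 530, 505, 490, 520, 500]

mu = mean(baseline_samples)
sigma = stdev(baseline_samples)
K = 4  # assumed sensitivity: flag anything beyond 4 standard deviations

def is_anomalous(requests_per_minute: float) -> bool:
    """Flag traffic that deviates too far from the learned baseline."""
    return abs(requests_per_minute - mu) > K * sigma

print(is_anomalous(540))    # False: within normal variation
print(is_anomalous(9_000))  # True: far outside anything the baseline has seen
```

A single rate dimension cannot by itself separate a flash crowd from an attack; production engines baseline many parameters at once (URL distribution, header patterns, session behavior). The principle, though – learn what is normal, flag deviation, and scrub only what deviates – is the same.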

[You might also like: 5 Must-Have DDoS Protection Technologies]

The advantages of behavioral-based detections are numerous:

  • Blocks bad traffic: Unlike rate limiting, behavioral-based detection actually ‘scrubs’ bad traffic out, leaving only legitimate traffic to reach the application.
  • Reduces false positives: One of the key problems of rate limiting is the high number of false positives. A positive security approach greatly reduces this problem.
  • Does not block legitimate users: Most importantly, behavioral traffic analysis results in few (or no) blocked users, meaning that you don’t lose customers, reputation, or revenue.

That’s Great, but How Do I Know If I Have It?

The best way to find out what protections you have is to be informed. Here are a few questions to ask your security vendor:

  1. Do you provide application-layer (L7) DDoS protection as part of your DDoS solution, or does it require an add-on WAF component?
  2. Do you use behavioral learning algorithms to establish ‘legitimate’ traffic patterns?
  3. How do you distinguish between good and bad traffic?
  4. Do you have application-layer DDoS protection that goes beyond rate limiting?

If your vendor has these capabilities, make sure they’re turned on and enabled. If not, the increase in application-layer DDoS attacks means it might be time to look for alternatives.

Read “2017-2018 Global Application & Network Security Report” to learn more.

Download Now


8 Questions to Ask in DDoS Protection

June 7, 2018 — by Eyal Arazi


As DDoS attacks grow more frequent, more powerful, and more sophisticated, many organizations turn to DDoS mitigation providers to protect themselves against attack.

Before evaluating DDoS protection solutions, it is important to assess the needs, objectives, and constraints of the organization, network and applications. These factors will define the criteria for selecting the optimal solution.


Choosing the Right DDoS Solution – Part III: Always-On Cloud Service

April 4, 2018 — by Eyal Arazi


This blog series dives into the different DDoS protection models, in order to help customers choose the optimal protection for their particular use-case. The first parts of this series covered premise-based appliances and on-demand cloud services. This installment will cover always-on cloud DDoS protection deployments, its advantages and drawbacks, and what use-cases are best for it. The final part of this series will focus on hybrid deployments, which combine premise-based and cloud-based protections.


Choosing the Right DDoS Solution – Part II: On-Demand Cloud Service

March 29, 2018 — by Eyal Arazi


This blog series explores the various options for DDoS protection and helps organizations choose the optimal solution for themselves. The first part of this series covered the premise-based DDoS mitigation appliance. This installment provides an overview of on-demand cloud-based solutions. Subsequent chapters will also cover always-on and hybrid solutions.


Choosing the Right DDoS Solution – Part I: On-Prem Appliance

March 14, 2018 — by Eyal Arazi


As DDoS attacks grow more frequent, more powerful, and more sophisticated, many organizations turn to DDoS mitigation providers to protect themselves against attacks.

However, DDoS protection is not a one-size-fits-all fixed menu; rather, it is an a-la-carte buffet of multiple choices. Each option has its unique advantages and drawbacks, and it is up to the customer to select the optimal solution that best fits their needs, threats, and budget.

This blog series explores the various options for DDoS protection deployments and discusses the considerations, advantages and drawbacks of each approach, and who it is usually best suited for.