
Attack Types & Vectors

Empowering the Infosec Community

September 19, 2019 — by Ben Zilberman


Despite the technological advancements, innovation, and experience the knights of the cyber order have acquired over the past 25 years or so, the “bad guys” are still a step ahead. Why? In large part, because of the power of community.

While information security vendors live in a competitive market and must protect their intellectual property, hackers communicate, share information and contribute to each other’s immediate success and long-term skill set.

The Infosec Community

In recent years, we’ve seen more partnerships and collaborations between infosec vendors. For example, the Cyber Threat Alliance (of which Radware is a member) enables cybersecurity practitioners to share credible cyber threat information. Each vendor collects and shares security incidents detected by their security solutions, honeypots and research teams worldwide in order to disrupt malicious actors and protect end-users.

Similarly, several vendors offer live threat maps, which, as the name suggests, help detect live attacks as they’re launched.

[You may also like: Executives Are Turning Infosec into a Competitive Advantage]

Radware’s Live Threat Map, which is open to the public, presents near real-time information on cyberattacks–from scanners to intruders to DDoS and web application hacks–as they occur. It is based on our global threat deception network (comprised of distributed honeypots that collect information about active threat actors) and event information from our cloud systems, which transmit a variety of anonymized and sampled network and application attacks to our Threat Research Center to be shared with the community.

More specifically, our machine learning algorithms profile the attackers and their intent, the attack vector and target – be it a network, a server, an IoT device or an application. Various validation mechanisms assure high fidelity and minimize false positives, making our map robust and reliable.

Visibility Is Key

Detecting live attacks despite all evasion mechanisms is just the first step. The “good guys” must also translate these massive data lakes into guidance for those who wish to gain a better understanding of what, exactly, we’re monitoring and how they can improve their own security posture.

[You may also like: Here’s How You Can Better Mitigate a Cyberattack]

Visibility is key to achieving this. The fact is, the market is overwhelmed with security technologies that constantly generate alerts; but to fight attackers and fend off future cyber attacks, businesses need more than notifications. They need guidance and advanced analytics.

For example, the ability to dig into data related to their own protected objects, while enjoying a unified view of all application and network security events with near real-time alerts via customizable dashboards (like Radware provides) will go a long way towards improving security posture — not just for individual companies, but the infosec community as a whole.

Download Radware’s “Hackers Almanac” to learn more.

Download Now

Security

Past GDPR Predictions: Have They Come To Fruition?

September 17, 2019 — by David Hobbs


In July 2017, I wrote about GDPR and HITECH and asked if the past could predict the future. At the time, GDPR had not yet gone into effect. Now that it has been active for over a year, let’s take stock of what has occurred.

First, a quick refresher: GDPR implements a two-tiered approach to categorizing violations and related fines. The most significant breaches can result in a fine of up to 4 percent of a company’s annual global revenue, or €20 million (whichever is greater).

These higher-tier violations include failing to obtain the necessary level of customer consent to process data, failing to permit data subjects to exercise their rights including as to data erasure and portability, and transferring personal data outside the EU without appropriate safeguards.

[You may also like: The Impact of GDPR One Year In]

For less serious violations, which include failing to maintain records of customer consent or failing to notify the relevant parties when a data breach has occurred, the maximum fine is limited to 2 percent of annual global revenue, or €10 million (whichever is greater).
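
As a quick illustration of the two-tier scheme, the “whichever is greater” rule can be expressed in a few lines (the percentages and caps are the statutory figures; the function name is ours):

```python
def max_gdpr_fine(annual_global_revenue_eur: float, higher_tier: bool) -> float:
    """Illustrative ceiling on a GDPR fine under the two-tier scheme.

    Higher tier: up to 4% of annual global revenue or EUR 20M, whichever is greater.
    Lower tier:  up to 2% of annual global revenue or EUR 10M, whichever is greater.
    """
    if higher_tier:
        return max(0.04 * annual_global_revenue_eur, 20_000_000)
    return max(0.02 * annual_global_revenue_eur, 10_000_000)

# A company with EUR 1B in annual revenue faces up to EUR 40M on the higher tier:
print(max_gdpr_fine(1_000_000_000, higher_tier=True))   # 40000000.0
# A smaller company hits the EUR 10M floor on the lower tier:
print(max_gdpr_fine(100_000_000, higher_tier=False))    # 10000000.0
```

Note how the flat floor dominates for smaller companies, while the percentage dominates for large ones.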

Rising Complaints & Notifications

The first year’s snapshot from May 2019 of the Data Protection Commission (DPC) demonstrates that GDPR has given rise to a significant increase in contacts with the DPC over the past 12 months:

  • 6,624 complaints were received.
  • 5,818 valid data security breaches were notified.
  • Over 48,000 contacts were received through the DPC’s Information and Assessment Unit.
  • 54 investigations were opened.
  • 1,206 Data Protection Officer notifications were received.

[You may also like: WAF and DDoS Help You on the Road to GDPR Compliancy]

In my first article, I discussed Memorial Healthcare System’s breach and resulting settlement of $5.5 Million USD. Now, let’s look at the first round of investigations under GDPR.

High-Profile Breaches: 2018-19 Edition

Marriott. In December 2018, news of Marriott’s massive breach hit. Upon receiving Marriott’s breach report, the Information Commissioner’s Office (ICO) — the UK’s GDPR supervisory authority — launched an investigation.

When a data breach results in the exposure of EU citizens’ data, it must be reported to the ICO within 72 hours of discovery. The ICO investigates data breaches to determine whether GDPR rules were violated, as well as complaints from consumers about GDPR violations.

In July 2019, the ICO announced that it plans to fine the hotel chain $123 million USD. Marriott said it plans to appeal the decision.

[You may also like: Marriott: The Case for Cybersecurity Due Diligence During M&A]

Bergen, Norway. One file in the wrong place landed the municipality of Bergen in Norway in trouble. Computer files containing login credentials for 35,000 students and employees were insufficiently secured and accessed.

Per the European Data Protection Board, “the lack of security measures in the system made it possible for anyone to log in to the school’s various information systems, and thereby to access various categories of personal data relating to the pupils and employees of the schools.” As a result, the Norwegian Data Protection Authority fined the municipality of Bergen €170,000.

British Airways. This is the largest fine to date, with an overwhelming price tag of £183.4m, or $223.4M USD. After an extensive investigation, the ICO concluded that information was compromised by “poor security arrangements” at British Airways, relating to security around log-in, payment card and travel booking details, as well as name and address information.

Sergic. France’s data protection agency, CNIL, found that real estate company Sergic had known of a vulnerability in its website for many months and did not protect user data, which included identity cards, tax notices, account statements and other personal details. The fallout? A €400,000 fine (roughly $445,000 USD).

[You may also like: The Million-Dollar Question of Cyber-Risk: Invest Now or Pay Later?]

Haga Hospital. Turning to healthcare, Haga Hospital in the Netherlands was hit with a €460,000 fine ($510,000 USD) for breaching data confidentiality. The investigation followed reports that dozens of hospital staff had unnecessarily checked the medical records of a well-known Dutch person.

In my previous article, I wrote, “other industries you may not think about, such as airlines, car rentals and hotels which allow booking from the internet may be impacted. Will the HITECH Act fines become the harbinger of much larger fines to come?”

We can see that this prediction was spot on. Some of the largest fines to date target airlines, hotels and the travel industry. I predict that over the next year, the various EU agencies will continue to ramp up fines, including cross-border and international ones.

CCPA is Almost Upon Us

Now, for the U.S.: California’s new Consumer Privacy Act (CCPA) goes into effect in January 2020. Will the state start rolling fines out like those imposed under GDPR?

If you’re an international company with any U.S.-based customers, it’s pretty likely that you’ll have Californians in your database. The CCPA focuses almost entirely on data collection and privacy, giving Californians the right to access their personal information, ask whether it’s being collected or sold, refuse that collection or sale, and still receive the same service or price if they do refuse.

[You may also like: Why Cyber-Security Is Critical to The Loyalty of Your Most Valued Customers]

Come January 2020, you’ll either have to meticulously segment your database by state to create separate procedures for Californian citizens (and EU ones for that matter), or you’ll have to implement different data collection and privacy procedures for all your customers going forward.

With new privacy rules on the way and GDPR fines already starting to hit, what will you do to comply with privacy laws around the world and keep your customers safe?

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Attack Types & Vectors

Defacements: The Digital Graffiti of the Internet

September 12, 2019 — by Radware


A defacement typically refers to a remote code execution attack or SQL injection that allows the hacker to manipulate the visual appearance of the website by breaking into a web server and replacing the current website content with the hacker’s own.

Defacements are considered digital graffiti and typically contain some type of political or rivalry statement from the hacker. Hacktivist groups often leverage defacements.

These groups are typically unskilled, using basic software to automate their attacks. When major websites are defaced, it is typically due to network operator negligence. Web application firewalls are the best way to prevent these attacks, but updating content management systems or web services is also effective.

If you think that you are the target of a defacement campaign, update and patch your system immediately and alert network administrators to look for malicious activity, as a hacker will typically add a page to your domain. You can also monitor for such attacks retroactively via social media.

Security

Meet the Four Generations of Bots

September 11, 2019 — by Radware


With the escalating race between bot developers and security experts — along with the increasing use of JavaScript and HTML5 web technologies — bots have evolved significantly from their origins as simple scripting tools that used command line interfaces.

Bots now leverage full-fledged browsers and are programmed to mimic human behavior in the way they traverse a website or application, move the mouse, tap and swipe on mobile devices and generally try to simulate real visitors to evade security systems.

First Generation

First-generation bots are built with basic scripting tools and make cURL-like requests to websites from a small number of IP addresses (often just one or two). They cannot store cookies or execute JavaScript, so they do not possess the capabilities of a real web browser.

[You may also like: 5 Simple Bot Management Techniques]

Impact: These bots are generally used to carry out scraping, carding and form spam.

Mitigation: These simple bots generally originate from data centers and use proxy IP addresses and inconsistent user agents (UAs). They often make thousands of hits from just one or two IP addresses and operate through scraping tools such as ScreamingFrog and DeepCrawl. They are the easiest to detect: they cannot maintain cookies, which most websites use, and they fail JavaScript challenges because they cannot execute them. First-generation bots can be blocked by blacklisting their IP addresses and UAs, as well as combinations of the two.
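
A minimal sketch of this kind of first-generation filtering, with made-up blacklists and thresholds, might look like:

```python
# Sketch of first-generation bot filtering: block blacklisted IPs/UAs and
# sources that exceed a simple per-IP hit threshold. All values are examples.
from collections import Counter

IP_BLACKLIST = {"203.0.113.7"}                       # documentation-range IP
UA_BLACKLIST = {"curl/7.68.0", "python-requests/2.25"}
MAX_HITS_PER_WINDOW = 1000                           # "thousands of hits" heuristic

hits = Counter()

def allow_request(ip: str, user_agent: str) -> bool:
    if ip in IP_BLACKLIST or user_agent in UA_BLACKLIST:
        return False
    hits[ip] += 1
    return hits[ip] <= MAX_HITS_PER_WINDOW

print(allow_request("198.51.100.2", "Mozilla/5.0"))  # True  (looks legitimate)
print(allow_request("203.0.113.7", "Mozilla/5.0"))   # False (blacklisted IP)
print(allow_request("198.51.100.2", "curl/7.68.0"))  # False (blacklisted UA)
```

In production this logic would live in a WAF or reverse proxy and the counter would be windowed, but the blacklist-plus-threshold idea is the same.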

Second Generation

These bots operate through website development and testing tools known as “headless” browsers (examples: PhantomJS and SimpleBrowser), as well as later versions of Chrome and Firefox, which allow for operation in headless mode. Unlike first-generation bots, they can maintain cookies and execute JavaScript. Botmasters began using headless browsers in response to the growing use of JavaScript challenges in websites and applications.

[You may also like: Good Bots Vs. Bad Bots: What’s The Impact On Your Business?]

Impact: These bots are used for application DDoS attacks, scraping, form spam, skewed analytics and ad fraud.

Mitigation: These bots can be identified through their browser and device characteristics, including the presence of specific JavaScript variables, iframe tampering, sessions and cookies. Once the bot is identified, it can be blocked based on its fingerprints. Another method of detecting these bots is to analyze metrics and typical user journeys and then look for large discrepancies in the traffic across different sections of a website. Those discrepancies can provide telltale signs of bots intending to carry out different types of attacks, such as account takeover and scraping.
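
A toy version of the traffic-discrepancy check described above, with illustrative section names and a made-up deviation ratio:

```python
def flag_discrepancies(baseline: dict, observed: dict, ratio: float = 3.0) -> list:
    """Flag site sections whose share of total traffic exceeds the
    historical baseline share by more than `ratio` (values illustrative)."""
    base_total = sum(baseline.values())
    obs_total = sum(observed.values())
    flagged = []
    for section, base_hits in baseline.items():
        base_share = base_hits / base_total
        obs_share = observed.get(section, 0) / obs_total
        if obs_share > ratio * base_share:
            flagged.append(section)
    return flagged

baseline = {"/home": 5000, "/product": 3000, "/login": 200}
observed = {"/home": 5200, "/product": 3100, "/login": 4000}  # login spike
print(flag_discrepancies(baseline, observed))  # ['/login']
```

A sudden spike on the login endpoint relative to the rest of the site is exactly the kind of telltale sign of account takeover bots mentioned above.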

Third Generation

These bots use full-fledged browsers — dedicated or hijacked by malware — for their operation. They can simulate basic human-like interactions, such as simple mouse movements and keystrokes. However, they may fail to demonstrate human-like randomness in their behavior.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

Impact: Third-generation bots are used for account takeover, application DDoS, API abuse, carding and ad fraud, among other purposes.

Mitigation: Third-generation bots are difficult to detect based on device and browser characteristics. Interaction-based user behavioral analysis is required to detect such bots, which generally follow a programmatic sequence of URL traversals.
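
One simple proxy for a “programmatic sequence of URL traversals” is counting sessions that replay an identical path; this sketch (illustrative names and thresholds) flags them:

```python
from collections import Counter

def find_programmatic_sequences(sessions: list, min_repeats: int = 3) -> list:
    """Flag URL sequences replayed identically across many sessions.
    Real users rarely traverse a site in exactly the same order, so
    heavy repetition suggests a scripted (third-generation) bot."""
    seq_counts = Counter(tuple(s) for s in sessions)
    return [list(seq) for seq, n in seq_counts.items() if n >= min_repeats]

sessions = [
    ["/login", "/account", "/transfer"],
    ["/home", "/product/42", "/cart"],
    ["/login", "/account", "/transfer"],
    ["/login", "/account", "/transfer"],
]
print(find_programmatic_sequences(sessions))
# [['/login', '/account', '/transfer']]
```

Real behavioral analysis also weighs timing, mouse and keystroke events; exact-sequence matching is only the simplest signal in that family.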

Fourth Generation

The latest generation of bots has advanced human-like interaction characteristics — including moving the mouse pointer in a random, human-like pattern instead of in straight lines. These bots can also change their UAs while rotating through thousands of IP addresses. There is growing evidence that points to bot developers carrying out “behavior hijacking” — recording the way in which real users touch and swipe on hijacked mobile apps to more closely mimic human behavior on a website or app. Behavior hijacking makes them much harder to detect, as their activities cannot easily be differentiated from those of real users. What’s more, their wide distribution is attributable to the large number of users whose browsers and devices have been hijacked.

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Impact: Fourth-generation bots are used for account takeover, application DDoS, API abuse, carding and ad fraud.

Mitigation: These bots are massively distributed across tens of thousands of IP addresses, often carrying out “low and slow” attacks to slip past security measures. Detecting these bots based on shallow interaction characteristics, such as mouse movement patterns, will result in a high number of false positives. Prevailing techniques are therefore inadequate for mitigating such bots. Machine learning-based technologies, such as intent-based deep behavioral analysis (IDBA) — which uses semi-supervised machine learning models to identify the intent of bots with high precision — are required to accurately detect fourth-generation bots while keeping false positives to a minimum.

Such analysis spans the visitor’s journey through the entire web property — with a focus on interaction patterns, such as mouse movements, scrolling and taps, along with the sequence of URLs traversed, the referrers used and the time spent at each page. This analysis should also capture additional parameters related to the browser stack, IP reputation, fingerprints and other characteristics.

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now

DDoS

5 Steps to Prepare for a DDoS Attack

September 10, 2019 — by Eyal Arazi


It’s almost as inevitable as death and taxes: somewhere, at some point, you will come under a DDoS attack.

The reasons for DDoS attacks can vary from cyber crime to hacktivism to simple bad luck, but eventually someone will be out there to try and take you down.

The good news, however, is that there is plenty to be done about it. Below are five key steps you can begin taking today so that you are prepared when the attack comes.

Step 1: Map Vulnerable Assets

The ancient Greeks said that to know thyself is the beginning of wisdom.

It is no surprise, therefore, that the first step to securing your assets against a DDoS attack is to know what assets there are to be secured.

[You may also like: DDoS Protection Requires Looking Both Ways]

Begin by listing all external-facing assets that might potentially be attacked. This list should include both physical and virtual assets:

  • Physical locations & offices
  • Data centers
  • Servers
  • Applications
  • IP addresses and subnets
  • Domains, sub-domains and specific FQDNs

Mapping out all externally-facing assets will help you draw your threat surface and identify your point of vulnerability.
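
A lightweight way to start such an inventory is a simple structured list that can be filtered down to the externally facing threat surface (all entries below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    kind: str        # e.g. "data center", "server", "application", "domain"
    external: bool   # reachable from the internet?

# Hypothetical inventory; real entries come from your own environment.
inventory = [
    Asset("www.example.com", "domain", external=True),
    Asset("api.example.com", "sub-domain", external=True),
    Asset("203.0.113.0/24", "subnet", external=True),
    Asset("hr-intranet", "application", external=False),
]

# The externally reachable subset is your DDoS threat surface.
threat_surface = [a for a in inventory if a.external]
print([a.name for a in threat_surface])
# ['www.example.com', 'api.example.com', '203.0.113.0/24']
```

Even a spreadsheet works for this step; the point is that the external/internal distinction is recorded explicitly so nothing is overlooked.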

Step 2: Assess Potential Damages

After listing all potentially vulnerable assets, figure out how much they are worth to you.

This is a key question, as the answer will help determine how much you should spend in protecting these properties.

[You may also like: The Costs of Cyberattacks Are Real]

Keep in mind that some damages are direct, while others may be indirect. Some of the potential damages from a DDoS attack include:

  • Direct loss of revenue – If your website or application is generating revenue directly on a regular basis, then any loss of availability will cause direct, immediate losses in revenue. For example, if your website generates $1m a day, every hour of downtime, on average, will cause over $40,000 in damages.
  • Loss in productivity – For organizations that rely on online services, such as email, scheduling, storage, CRM or databases, any loss of availability to any of these services will directly result in loss of productivity and lost workdays.
  • SLA obligations – For applications and services that are bound by service commitments, any downtime can lead to breach of SLA, resulting in refunding customers for lost services, granting service credits, and even potentially facing lawsuits.
  • Damage to brand – In a world that is becoming ever-more connected, being available is increasingly tied to a company’s brand and identity. Any loss of availability as a result of a cyber-attack, therefore, can directly impact a company’s brand and reputation. In fact, Radware’s 2018 Application and Network Security Report showed that 43% of companies had experienced reputation loss as a result of a cyber-attack.
  • Loss of customers – One of the biggest potential damages of a successful DDoS attack is loss of customers. This can be either direct loss (i.e., a customer chooses to abandon you as a result of a cyber-attack) or indirect (i.e., potential customers who are unable to reach you and lost business opportunities). Either way, this is a key concern.
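
The direct-revenue arithmetic from the first bullet ($1M a day works out to over $40,000 per hour) is straightforward to encode:

```python
def hourly_downtime_cost(daily_revenue: float) -> float:
    """Average direct revenue lost per hour of downtime."""
    return daily_revenue / 24

# The $1M/day example from the text:
print(round(hourly_downtime_cost(1_000_000), 2))  # 41666.67 per hour
```

Running the same calculation per asset gives you a first-order ranking for the prioritization step below.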

[You may also like: How Cyberattacks Directly Impact Your Brand]

When evaluating potential damages of a DDoS attack, assess each vulnerable asset individually. A DDoS attack against a customer-facing e-commerce site, for example, will result in very different damages than an attack against a remote field office.

After you assess the risk to each asset, prioritize them according to risk and potential damages. This will not only help you assess which assets need protection, but also the type of protection they require.

Step 3: Assign Responsibility

Once you create an inventory of potentially vulnerable assets and assign a dollar figure (or any other currency) to how much they are worth to you, the next step is to decide who is responsible for protecting them.

DDoS attacks are a unique type of cyber attack, as they affect different levels of IT infrastructure and can therefore potentially fall under the responsibility of different stakeholders:

  • Is DDoS the responsibility of the network administrator, since it affects network performance?
  • Is it the responsibility of the application owner, since it impacts application availability?
  • Is it the responsibility of the business manager, since it affects revenue?
  • Is it the responsibility of the CISO, since it is a type of cyber attack?

A surprising number of organizations don’t have properly defined areas of responsibility with regard to DDoS protection. This can result in DDoS defense “falling between the cracks,” leaving assets potentially exposed.

[You may also like: 5 Key Considerations in Choosing a DDoS Mitigation Network]

Step 4: Set Up Detection Mechanisms

Now that you’ve evaluated which assets you must protect and who’s responsible for protecting them, the next step is to set up measures that will alert you to when you come under attack.

After all, you don’t want your customers – or worse, your boss – to be the ones to tell you that your services and applications are offline.

Detection measures can be deployed either at the network level or at the application level.

Make sure these measures are configured so that they don’t just detect attacks, but also alert you when something bad happens.
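
A bare-bones sketch of a rate-based detector that alerts rather than merely logging; the threshold and the alert hook are placeholders for your own monitoring stack:

```python
def send_alert(message: str) -> None:
    # Stand-in for an email/SMS/pager integration.
    print("ALERT:", message)

def check_and_alert(current_rps: float, baseline_rps: float,
                    threshold: float = 5.0) -> bool:
    """Fire an alert when the request rate exceeds `threshold` x baseline.
    The 5x multiplier is illustrative; tune it to your own traffic."""
    if current_rps > threshold * baseline_rps:
        send_alert(f"Possible DDoS: {current_rps:.0f} rps vs baseline {baseline_rps:.0f} rps")
        return True
    return False

check_and_alert(12_000, 800)   # fires: 12000 > 5 * 800
check_and_alert(900, 800)      # quiet: within normal variation
```

Network-level detection would watch packets or bandwidth instead of requests per second, but the alert-on-deviation structure is the same.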

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Step 5: Deploy a DDoS Protection Solution

Finally, after you’ve assessed your vulnerabilities and costs, and set up attack detection mechanisms, now is the time to deploy actual protection.

This step is best done before you get attacked, and not when you are already under one.

DDoS protection is not a one-size-fits-all proposition, and there are many types of protection options, depending on the characteristics, risk and value of each individual asset.

On-demand cloud mitigation services are activated only once an attack is detected. They require the lowest overhead and are the lowest cost solution, but require traffic diversion for protection to kick-in. As a result, they are best suited for cost-sensitive customers, services which are not mission-critical, and customers who have never been (or are infrequently) attacked, but want a basic form of backup.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

Always-on cloud services route all traffic through a cloud scrubbing center at all times. No diversion is required, but there is minor added latency to requests. This type of protection is best for mission-critical applications which cannot afford any downtime, and organizations that are frequently attacked.

Hardware-based appliances provide the advanced capabilities and fast response of premises-based equipment. However, an appliance on its own is limited in capacity. Appliances are therefore best used by service providers who are building their own scrubbing capabilities, or in combination with a cloud service.

Finally, hybrid DDoS protection combines the massive capacity of cloud services with the advanced capabilities and fast response of a hardware appliance. Hybrid protection is best for mission-critical and latency-sensitive services, and organizations who encrypt their user traffic, but don’t want to put their SSL keys in the cloud.

Ultimately, you can’t control if-and-when you are attacked, but following these steps will help you be prepared when DDoS attackers come knocking at your door.

Download Radware’s “Hackers Almanac” to learn more.

Download Now

Attack Mitigation

5 Simple Bot Management Techniques

September 5, 2019 — by Radware


When it comes to detection and mitigation, security and medical treatment have more in common than you may think. Both require careful evaluation of the risks, trade-offs and implications of false positives and false negatives.

In both disciplines, it’s critical to use the right treatment or tool for the problem at hand. Taking antibiotics when you have a viral infection can introduce unwanted side effects and does nothing to resolve your illness. Similarly, using CAPTCHA isn’t a cure-all for every bot attack. It simply won’t work for some bot types, and if you deploy it broadly, it’s sure to cause negative customer experience “side effects.”

[You may also like: Navigating the Bot Ecosystem]

And in both medicine and security, treatment is rarely a one-size-fits-all exercise. Treating or mitigating a problem is an entirely different exercise from diagnosing or detecting it. Figuring out the “disease” at hand may be long and complex, but effective mitigation can be surprisingly simple. It depends on several variables — and requires expert knowledge, skills and judgment.

Block or Manage?

Blocking bots may seem like the obvious approach to mitigation; however, mitigation isn’t always about eradicating bots. Instead, you can focus on managing them. What follows is a roundup of mitigation techniques worth considering.

[You may also like: A Buyer’s Guide to Bot Management]

Feed fake data to the bot. Keep the bot active and allow it to continue attempting to attack your app, but rather than replying with real content, reply with fake data, such as modified pricing values. In this way, you manipulate the bot into receiving the values you want rather than the real ones. Another option is to redirect the bot to a similar fake app, where content is reduced and simplified and the bot is unable to access your original content.
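
A sketch of the fake-pricing idea, assuming you already have a bot-detection verdict for each request (the SKUs, prices and markup factor below are invented):

```python
REAL_PRICES = {"sku-123": 49.99, "sku-456": 129.00}

def price_response(sku: str, is_bot: bool) -> float:
    """Serve real pricing to humans and deliberately wrong (but
    plausible-looking) pricing to detected scraper bots."""
    if is_bot:
        return round(REAL_PRICES[sku] * 1.37, 2)  # fake, inflated price
    return REAL_PRICES[sku]

print(price_response("sku-123", is_bot=False))  # 49.99
print(price_response("sku-123", is_bot=True))   # 68.49
```

The payoff is that a competitor’s price-scraping bot quietly ingests poisoned data instead of learning it has been detected and adapting.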

Challenge the bot with a visible CAPTCHA. CAPTCHA can function as an effective mitigation tool in some scenarios, but you must use it carefully. If detection is not effective and accurate, the use of CAPTCHA could have a significant usability impact. Since CAPTCHA is a challenge by nature, it may also help improve the quality of detection. After all, clients who resolve a CAPTCHA are more than likely not bots. On the other hand, sophisticated bots may be able to resolve CAPTCHA. Consequently, it is not a bulletproof solution.

[You may also like: Good Bots Vs. Bad Bots: What’s The Impact On Your Business?]

Use throttling. When an attack source is persistently attacking your apps, a throttling approach may be effective while still allowing legitimate sources to access the application in the event of a false positive.
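
Throttling is commonly implemented as a per-source token bucket; a minimal sketch (the rate and burst figures are illustrative):

```python
import time

class TokenBucket:
    """Simple throttle: each source gets `rate` requests/second with a
    burst allowance of `capacity`. Excess requests are slowed or dropped
    rather than hard-blocked, so a false positive still gets through."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)      # 5 req/s, burst of 10
results = [bucket.allow() for _ in range(15)]  # 15 back-to-back requests
print(results.count(True))                     # burst is served, the rest throttled
```

In practice you would keep one bucket per source IP (or per session) and tune the rate to the legitimate traffic profile of the application.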

Implement an invisible challenge. Invisible challenges can involve an expectation to move the mouse or type data in mandatory form fields — actions that a bot would be unable to complete.

Block the source. When a source is being blocked, there’s no need to process its traffic, no need to apply protection rules and no logs to store. Considering that bots can generate more than 90% of traffic for highly attacked targets and applications, this cost savings may be significant. Thus, this approach may appear to be the most effective and cost-efficient approach. The bad news? A persistent attack source that updates its bot code frequently may find this mitigation easy to identify and overcome. It will simply update the bot code immediately, and in this way, a simple first-generation bot can evolve into a more sophisticated bot that will be challenging to detect and block in future attack phases.

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now

Security

A Buyer’s Guide to Bot Management

September 4, 2019 — by Ron Winward


Web-based bots are a critical part of your business’s digital presence. They help collect content, index it, and even promote it to your customers. A bot may even be responsible for getting you here. These are good bots and they help your business grow, providing consumers with individualized, interesting content.

Not all bots have good intentions, though. In fact, about one quarter of the traffic on the internet can be bad bots, which carry out automated account takeover, inventory manipulation, content or price scraping, and analytics skewing.

Unwanted Bot Traffic

In a recent customer engagement, we found that over 85% of their traffic was unwanted bot traffic. Not only does this create an undesirable situation for the content managers and the security team, but the company also had to overbuild their infrastructure in order to support that unwanted traffic. Without a way to distinguish between good bots and bad bots, they needed to support it all.

[You may also like: Good Bots Vs. Bad Bots: What’s The Impact On Your Business?]

Allowing the good bots to access your website while blocking the bad ones is nearly impossible if you don’t have the right tools. Understanding the bot landscape can be a daunting task and choosing the right solution can be a challenge if you don’t know what to look for.

The bots themselves are changing too. The rise of highly sophisticated human-like bots in recent years requires more advanced techniques in detection and response than in the past.

Choosing The Right Solution

When choosing the right solution for your environment, you need to understand how a vendor’s solution identifies bots and their intent. If a threat is detected, how will the solution manage it? Do you want to block the bot, or maybe feed it fake data as a countermeasure?

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

Having flexible deployment options is also critical because every environment is different. Look for a bot management solution that provides easy, seamless deployment without infrastructure changes or the need to reroute traffic if you don’t want to.

Radware’s Ultimate Guide to Bot Management is a foundational resource for understanding bots, their benefits, the challenges they create, and what you should consider when deciding on a solution for bot management. At the end of this e-book, you’ll find a buyer’s checklist that will help you understand what criteria to evaluate when selecting the right bot management solution for your environment.

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Businesses need to manage bot traffic and associated risks, whether for security, brand protection, revenue protection or infrastructure protection. And because most organizations can’t tell the difference between good bots and bad bots in their network, you may not even be sure if you have a bot problem (pro tip: Radware’s Bad Bot Analyzer will help you assess the true bot activity in your environment for free).

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now

DDoS

The Emergence of Denial-of-Service Groups

August 27, 2019 — by Radware


Denial-of-Service (DoS) attacks are cyberattacks designed to render a computer or network service unavailable to its users. A standard DoS attack is when an attacker utilizes a single machine to launch an attack to exhaust the resources of another machine. A DDoS attack uses multiple machines to exhaust the resources of a single machine.

DoS attacks have been around for some time, but only recently has there been an emergence of denial-of-service groups that have constructed large botnets to target massive organizations for profit or fame. These groups often utilize their own stresser services and amplification methods to launch massive volumetric attacks, but they have also been known to make botnets available for rent via the darknet.

If a denial-of-service group is targeting your organization, ensure that your network is prepared to face an array of attack vectors ranging from saturation floods to Burst attacks designed to overwhelm mitigation devices.

Hybrid DDoS mitigation capabilities that combine on-premise and cloud-based volumetric protection for real-time DDoS mitigation are recommended. This requires the ability to efficiently identify and block anomalies that strike your network while not adversely affecting legitimate traffic. An emergency response plan is also required.

Download Radware’s “Hackers Almanac” to learn more.

Download Now