
DDoS | DDoS Attacks

5 Key Considerations in Choosing a DDoS Mitigation Network

May 21, 2019 — by Eyal Arazi


A DDoS mitigation service is more than just the technology or the service guarantees. The quality and resilience of the underlying network is a critical component in your armor, and one which must be carefully evaluated to determine how well it can protect you against sophisticated DDoS attacks.

Below are five key considerations in evaluating a DDoS scrubbing network.

Massive Capacity

When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new heights (and scales) of attacks.

To date, the largest-ever verified DDoS attack was a memcached-based attack against GitHub. This attack reached a peak of approximately 1.3 terabits per second (Tbps) and 126 million packets per second (pps).

In order to withstand such an attack, scrubbing networks must have not just enough capacity to ‘cover’ the attack itself, but also ample overflow capacity to accommodate other customers on the network and other attacks that might be going on at the same time. A good rule of thumb is to look for mitigation networks with at least 2-3 times the capacity of the largest attacks observed to date.
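The rule of thumb above is simple arithmetic; a minimal sketch (the 2-3x multiplier is from this article, the helper function name is invented):

```python
# Rule-of-thumb check: how much scrubbing capacity a mitigation network
# should have, given the largest attack observed to date.

def required_capacity_tbps(largest_attack_tbps: float, multiplier: float = 2.5) -> float:
    """Recommended network capacity: a multiple of the largest observed attack."""
    return largest_attack_tbps * multiplier

# Largest verified attack to date (GitHub, memcached): ~1.3 Tbps
print(required_capacity_tbps(1.3, 2.0))  # lower bound of the 2-3x rule
print(required_capacity_tbps(1.3, 3.0))  # upper bound of the 2-3x rule
```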

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Dedicated Capacity

It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers – particularly those who take an ‘edge’ security approach – rely on their Content Distribution Network (CDN) capacity for DDoS mitigation, as well.

The problem, however, is that the majority of this capacity is already utilized on a routine basis. CDN providers don’t like to pay for unused capacity, so CDN bandwidth utilization rates routinely reach 60-70% and can frequently reach 80% or more. This leaves very little room for ‘overflow’ traffic resulting from a large-scale volumetric DDoS attack.
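The headroom problem is easy to quantify. A minimal sketch, with hypothetical capacity figures (the utilization rates are the ones cited above):

```python
# Sketch: how much overflow headroom a shared CDN network actually leaves
# for DDoS scrubbing once routine customer traffic is accounted for.

def overflow_headroom_tbps(total_capacity_tbps: float, utilization: float) -> float:
    """Capacity left for attack traffic after subtracting routine CDN traffic."""
    return total_capacity_tbps * (1.0 - utilization)

# A 10 Tbps CDN running at 70% routine utilization leaves only 3 Tbps...
print(overflow_headroom_tbps(10.0, 0.70))
# ...and at 80%, barely 2 Tbps of overflow room network-wide.
print(overflow_headroom_tbps(10.0, 0.80))
```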

[You may also like: DDoS Protection Requires Looking Both Ways]

Therefore, it is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load-balancing.

Global Footprint

Organizations deploy DDoS mitigation solutions in order to ensure the availability of their services. An increasingly important aspect of availability is speed of response: the question is not only whether the service is available, but how quickly it can respond.

Cloud-based DDoS protection services operate by routing customer traffic through the service provider’s scrubbing centers, removing any malicious traffic, and then forwarding clean traffic to the customer’s servers. This process inevitably adds a certain amount of latency to user communications.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

One of the key factors affecting latency is distance from the host. Therefore, in order to minimize latency, it is important for the scrubbing center to be as close as possible to the customer. This can only be achieved with a globally-distributed network, with a large number of scrubbing centers deployed at strategic communication hubs, where there is large-scale access to high-speed fiber connections.

As a result, when examining a DDoS protection network, it is important not just to look at capacity figures, but also at the number of scrubbing centers and their distribution.

Anycast Routing

A key component impacting response time is the quality of the network itself, and its back-end routing mechanisms. In order to ensure maximal speed and resilience, modern security networks are based on anycast-based routing.

Anycast-based routing establishes a one-to-many relationship between IP addresses and network nodes (i.e., there are multiple network nodes with the same IP address). When a request is sent to the network, the routing mechanism applies principles of least-cost-routing to determine which network node is the optimal destination.

Routing paths can be selected based on the number of hops, distance, latency, or path cost considerations. As a result, traffic from any given point will usually be routed to the nearest and fastest node.
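The "one IP, many nodes, pick the cheapest path" principle can be illustrated with a minimal sketch. Real anycast selection happens in BGP route propagation, not in application code, and the node names and costs below are invented for the example:

```python
# Toy illustration of least-cost anycast node selection: multiple scrubbing
# centers advertise the same IP, and routing prefers the lowest-cost path
# (a metric combining hops, distance, latency, or path cost).

def pick_anycast_node(path_costs: dict) -> str:
    """Return the node with the lowest path-cost metric."""
    return min(path_costs, key=path_costs.get)

# Hypothetical path costs from one client toward three scrubbing centers:
costs = {"frankfurt": 12, "ashburn": 48, "singapore": 95}
print(pick_anycast_node(costs))  # frankfurt: the nearest, fastest node
```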

[You may also like: The Costs of Cyberattacks Are Real]

Anycast helps improve the speed and efficiency of traffic routing within the network. DDoS scrubbing networks based on anycast routing enjoy these benefits, which ultimately results in faster response and lower latency for end-users.

Multiple Redundancy

Finally, when selecting a DDoS scrubbing network, it is important to always have a backup. The whole point of a DDoS protection service is to ensure service availability. Therefore, you cannot have it – or any component in it – be a single point-of-failure. This means that every component within the security network must be backed up with multiple redundancy.

This includes not just multiple scrubbing centers and overflow capacity, but also requires redundancy in ISP links, routers, switches, load balancers, mitigation devices, and more.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Only a network with full multiple redundancy for all components can ensure full service availability at all times, and guarantee that your DDoS mitigation service does not become a single point-of-failure of its own.

Ask the Questions

Alongside technology and service, the underlying network forms a critical part of a cloud security network. The five considerations above outline the key metrics by which you should evaluate the network powering potential DDoS protection services.

Ask your service provider – or any service provider that you are evaluating – about their capabilities with regards to each of these metrics, and if you don’t like the answer, then you should consider looking for alternatives.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

DDoS | DDoS Attacks

Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service

May 2, 2019 — by Dileep Mishra


Internet pipes have gotten fatter in the last decade. We have gone from expensive 1 Mbps links to 1 Gbps links, which are available at a relatively low cost. Most enterprises have at least a 1 Gbps ISP link to their data center, many have multiple 1 Gbps links at each data center. In the past, QoS, packet shaping, application prioritization, etc., used to be a big deal, but now we just throw more capacity to solve any potential performance problems.

However, when it comes to protecting your infrastructure from DDoS attacks, 1 Gbps, 10 Gbps, or even 40 Gbps is not enough capacity. This is because in 2019, even relatively small DDoS attacks are a few Gbps in size, and the larger ones are greater than 1 Tbps.

For this reason, when security professionals design a DDoS mitigation solution, one of the key considerations is the capacity of the DDoS mitigation service. That said, it isn’t easy to figure out which DDoS mitigation service actually has the capacity to withstand the largest DDoS attacks. This is because there are a range of DDoS mitigation solutions to pick from, and capacity is a parameter most vendors can spin to make their solution appear to be flush with capacity.

Let us examine some of the solutions available and understand the difference between their announced capacity and their real ability to block a large bandwidth DDoS attack.

On-premises DDoS Mitigation Appliances 

First of all, be wary of any router, switch, or network firewall that is also being positioned as a DDoS mitigation appliance. Chances are it does NOT have the ability to withstand a multi-Gbps DDoS attack.

There are a handful of companies that make purpose-built DDoS mitigation appliances. These devices are usually deployed at the edge of your network, as close as possible to the ISP link. Many of these devices can mitigate attacks in the tens of Gbps; however, the advertised mitigation capacity is usually based on one particular attack vector with all attack packets being of a specific size.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

Irrespective of the vendor, don’t buy into 20/40/60 Gbps of mitigation capacity without quizzing the vendor on the device’s ability to withstand a multi-vector attack, its real-world performance, and its ability to pass clean traffic at a given throughput while also mitigating a large attack. Don’t forget, pps is sometimes more important than bps, and many devices will hit their pps limit first. Also be sure to delve into the internals of the attack mitigation appliance, in particular whether the same CPU is used to mitigate an attack while passing normal traffic. The most effective devices have the attack “plane” segregated from the clean traffic “plane,” thus ensuring attack mitigation without affecting normal traffic.
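Why pps can bite before bps: small packets exhaust a device's packet budget long before they fill its advertised bandwidth. A quick sketch of the arithmetic (packet sizes are illustrative):

```python
# Packets per second implied by a line rate, for a given packet size.
# The same bit rate in small packets means vastly more packets to process.

def pps_for_rate(rate_gbps: float, packet_bytes: int) -> float:
    """Convert a bit rate to packets per second at a fixed packet size."""
    return rate_gbps * 1e9 / (packet_bytes * 8)

# A 10 Gbps flood of 1,500-byte packets is ~833 Kpps...
print(f"{pps_for_rate(10, 1500):,.0f}")
# ...but 10 Gbps of 64-byte packets is ~19.5 Mpps, a far harder job
# for a device whose pps ceiling is the real bottleneck.
print(f"{pps_for_rate(10, 64):,.0f}")
```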

Finally, please keep in mind that if your ISP link capacity is 1 Gbps and you have a DDoS mitigation appliance capable of 10 Gbps of mitigation, you are NOT protected against a 10 Gbps attack. This is because the attack will fill your pipe even before the on-premises device gets a chance to “scrub” the attack traffic.
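The on-premises blind spot reduces to a single comparison; a minimal sketch of the reasoning above:

```python
# If the attack volume meets or exceeds the ISP link capacity, the pipe
# saturates upstream of the appliance, regardless of appliance capacity.

def pipe_saturated(attack_gbps: float, isp_link_gbps: float) -> bool:
    """True when the attack alone fills the ISP link before scrubbing can occur."""
    return attack_gbps >= isp_link_gbps

# A 1 Gbps link with a 10 Gbps appliance is still defenseless
# against a 10 Gbps flood:
print(pipe_saturated(attack_gbps=10, isp_link_gbps=1))  # True
```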

Cloud-based Scrubbing Centers

The second type of DDoS mitigation solution that is widely deployed is a cloud-based scrubbing solution. Here, you don’t install a DDoS mitigation device at your data center. Rather, you use a DDoS mitigation service deployed in the cloud. With this type of solution, you send telemetry to the cloud service from your data center on a continuous basis, and when there is a spike that corresponds to a DDoS attack, you “divert” your traffic to the cloud service.
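The telemetry-then-divert model described above can be sketched in a few lines. The spike threshold, sample window, and function name are all invented for illustration; real services use far richer detection logic:

```python
# Rough sketch of on-demand cloud diversion: the data center streams traffic
# telemetry to the cloud service, and traffic is diverted only when a
# sustained spike crosses a multiple of the normal baseline.

def should_divert(samples_gbps: list, baseline_gbps: float, spike_factor: float = 3.0) -> bool:
    """Divert when the last few telemetry samples sustain a spike above baseline."""
    recent = samples_gbps[-3:]  # last few telemetry readings
    return all(s >= baseline_gbps * spike_factor for s in recent)

telemetry = [0.8, 0.9, 0.7, 4.2, 5.1, 6.0]          # Gbps readings over time
print(should_divert(telemetry, baseline_gbps=1.0))  # True: divert to scrubbing
```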

[You may also like: DDoS Protection Requires Looking Both Ways]

There are a few vendors who provide this type of solution but again, when it comes to the capacity of the cloud DDoS service, the devil is in the details. Some vendors simply add the “net” capacity of all the ISP links they have at all their data centers. This is misleading because they may be adding the normal daily clean traffic to the advertised capacity — so ask about the available attack mitigation capacity, excluding the normal clean traffic.

Also, chances are the provider has different capacities at different scrubbing centers, so the net capacity across all of them may not be a good reflection of the attack mitigation capacity available in the geography of interest (where your data center is located).

Another item to inquire about is Anycast capabilities, because this gives the provider the ability to mitigate the attack close to the source. In other words, if a 100 Gbps attack is coming from China, it will be mitigated at the scrubbing center in APAC.

[You may also like: 8 Questions to Ask in DDoS Protection]

Finally, it is important that the DDoS mitigation provider has a completely separate data path for clean traffic and does not mix clean customer traffic with attack traffic.

Content Distribution Networks

A third type of DDoS mitigation architecture is based upon leveraging a content distribution network (CDN) to diffuse large DDoS attacks. When it comes to the DDoS mitigation capacity of a CDN however, again, the situation is blurry.

Most CDNs have tens, hundreds, or even thousands of PoPs geographically distributed across the globe. Many simply count the net aggregate capacity across all of these PoPs and advertise that as the total attack mitigation capacity. This has two major flaws. First, it is quite likely that a real-world DDoS attack is sourced from a limited number of geographical locations, in which case the capacity that really matters is the local CDN PoP capacity, not the global capacity across all the PoPs.

[You may also like: 5 Must-Have DDoS Protection Technologies]

Second, most CDNs pass a significant amount of normal customer traffic through all of the CDN nodes, so if a CDN service claims its attack mitigation capacity is 40 Tbps, it may be counting 30 Tbps of normal traffic. The question to ask is what the total unused capacity is, both at the net aggregate level and within a geographical region.
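The question to ask, expressed as arithmetic. All figures are hypothetical, chosen to mirror the 40 Tbps example above:

```python
# Unused (truly available) attack mitigation capacity: advertised capacity
# minus the routine clean traffic already flowing through the network.

def unused_capacity_tbps(advertised_tbps: float, clean_traffic_tbps: float) -> float:
    """Capacity actually available for attacks once normal traffic is excluded."""
    return max(advertised_tbps - clean_traffic_tbps, 0.0)

# A "40 Tbps" global CDN carrying 30 Tbps of customer traffic:
print(unused_capacity_tbps(40.0, 30.0))  # only 10 Tbps truly available
# And in the one region the attack actually hits, the numbers shrink further:
print(unused_capacity_tbps(4.0, 3.2))    # a fraction of a Tbps locally
```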

ISP Provider-based DDoS Mitigation

Many ISPs offer DDoS mitigation as an add-on to the ISP pipe. It sounds like a natural choice: the ISP sees all traffic destined for your data center before it reaches your infrastructure, so it is best to block the attack within the ISP’s infrastructure – right?

Unfortunately, most ISPs have only semi-adequate DDoS mitigation deployed within their own infrastructure and are likely to pass the attack traffic along to your data center. In fact, in some scenarios, an ISP may actually black-hole your traffic when you are under attack in order to protect other customers sharing a portion of its infrastructure. The questions to ask your ISP are what happens if it sees a 500 Gbps attack coming toward your infrastructure, and whether there is any cap on the maximum attack traffic it will handle.

[You may also like: ISP DDoS Protection May Not Cover All of Bases]

All of the DDoS mitigation solutions discussed above are effective and widely deployed. We don’t endorse or recommend one over the other. However, one should take any advertised attack mitigation capacity from any provider with a grain of salt. Quiz your provider on local capacity, differentiation between clean and attack traffic, any caps on attack size, and any SLAs. Also, carefully examine vendor proposals for any exclusions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Mitigation | DDoS | DDoS Attacks

Is It Legal to Evaluate a DDoS Mitigation Service?

March 27, 2019 — by Dileep Mishra


A couple of months ago, I was on a call with a company that was in the process of evaluating DDoS mitigation services to protect its data centers. This company runs mission-critical applications and was looking for comprehensive coverage against various types of attacks, including volumetric, low-and-slow, encrypted floods, and application-layer attacks.

During the discussion, our team asked a series of technical questions related to their ISP links, types of applications, physical connectivity, and more. And we provided an attack demo using our sandbox lab in Mahwah.

Everything was moving along just fine until the customer asked us for a Proof of Concept (PoC), what most would consider a natural next step in the vendor evaluation process.

About That Proof of Concept…

How would you do a DDoS PoC? You rack and stack the DDoS mitigation appliance (or enable the service if it is cloud-based), set up some type of management IP address, configure the protection policies, and off you go!

Well, when we spoke to this company, they said they would be happy to do all of that – at their disaster recovery data center, located within a large carrier facility on the East Coast. This sent my antenna up, and I immediately asked a couple of questions that would turn out to be extremely important for all of us: Do you have attack tools to launch DDoS attacks? Do you take responsibility for running the attacks? The customer answered “yes” to both.

[You may also like: DDoS Protection Requires Looking Both Ways]

Being a trained SE, I then asked why they needed to run the PoC in their lab and whether there was a way we could demonstrate that our DDoS mitigation appliance can mitigate a wide range of attacks using our PoC script. As it turned out, the prospect was evaluating other vendors and, to compare apples to apples (thereby giving all vendors a fair chance), was already conducting the PoC in its data center with each vendor’s appliance.

We shipped the PoC unit quickly and the prospect, true to their word, got the unit racked, stacked, and cabled up, ready to go. We configured the device, then gave them the green light to launch attacks. And then the prospect told us to launch the attacks; they didn’t have any attack tools.

A Bad Idea

Well, most of us in this industry do have DDoS testing tools, so what’s the big deal? As vendors who provide cybersecurity solutions, we shouldn’t have any problems launching attacks over the Internet to test out a DDoS mitigation service…right?

[You may also like: 8 Questions to Ask in DDoS Protection]

WRONG! Here’s why that’s a bad idea:

  • Launching attacks over the Internet is ILLEGAL. You need written permission from the entity being attacked to launch a DDoS attack. You can try your luck if you want, but this is akin to running a red light. You may get away with it, but if you are caught the repercussions are damaging and expensive.
  • Your ISP might block your IP address. Many ISPs have DDoS defenses within their infrastructure and if they see someone launching a malicious attack, they might block your access. Good luck sorting that one out with your ISP!
  • Your attacks may not reach the desired testing destination. Well, even if your ISP doesn’t block you and the FBI doesn’t come knocking, there might be one or more DDoS mitigation devices between you and the customer data center where the destination IP being tested resides. These devices could very well mitigate the attack you launch preventing you from doing the testing.

Those are three big reasons why doing DDoS testing in a production data center is, simply put, a bad idea. Especially if you don’t have a legal, easy way to generate attacks.

[You may also like: 5 Must-Have DDoS Protection Technologies]

A Better Way

So what are the alternatives? How should you do DDoS testing?

  • With DDoS testing, the focus should be on evaluating the mitigation features: can the service detect attacks quickly, can it mitigate immediately, can it adapt to attacks that are morphing, can it report accurately on the attack it is seeing and what is being mitigated, and how accurate is the mitigation (what about false positives)? If you run a DDoS PoC in a production environment, you will spend most of your resources and time testing connectivity and spinning your wheels on operational aspects (e.g., LAN cabling, console cabling, change control procedures, paperwork, etc.). This is not what you want to test; you want to test DDoS mitigation! It’s like trying to test how fast a sports car can go on a very busy street. You will end up testing the brakes, but you won’t get very far with any speed testing.
  • Test things out in your lab. Even better, let the vendor test it in their lab for you. This will let both parties focus on the security features rather than get caught up with the headaches of logistics involved with shipping, change control, physical cabling, connectivity, routing etc.
  • It is perfectly legal to use attack tools such as those bundled with Kali Linux or BackTrack within a lab environment. Launch attacks to your heart’s content, morph the attacks, and see how the DDoS service responds.
  • If you don’t have the time or expertise to launch attacks yourself, hire a DDoS testing service. Companies like activereach, Redwolf security or MazeBolt security do this for a living, and they can help you test the DDoS mitigation service with a wide array of customized attacks. This will cost you some money, but if you are serious about the deployment, you will be doing yourself a favor and saving future work.
  • Finally, evaluate multiple vendors in parallel. You can never do this in a production data center. However, in a lab you can keep the attacks and the victim applications constant, while just swapping in the DDoS mitigation service. This will give you an apples-to-apples comparison of the actual capabilities of each vendor and will also shorten your evaluation cycle.
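One way to keep the "attacks and victim applications constant" across vendors is to enumerate the test plan up front. A hypothetical helper for a lab-only evaluation; the actual attack execution (via Kali tools or a commercial testing service) is deliberately left out, and all parameter names are invented:

```python
# Generate a repeatable matrix of attack profiles to replay against each
# vendor under test, so every vendor faces the identical attack set.

from itertools import product

def attack_matrix(vectors, rates_gbps, packet_sizes):
    """Cartesian product of attack parameters for an apples-to-apples lab run."""
    return [
        {"vector": v, "rate_gbps": r, "packet_bytes": p}
        for v, r, p in product(vectors, rates_gbps, packet_sizes)
    ]

plan = attack_matrix(["udp-flood", "syn-flood", "http-get"], [1, 5], [64, 1500])
print(len(plan))  # 12 distinct attack runs per vendor
```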

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Mitigation | DDoS | DDoS Attacks

What Do Banks and Cybersecurity Have in Common? Everything.

February 7, 2019 — by Radware


New cyber-security threats require new solutions. New solutions require a project to implement them. The problems and solutions seem infinite while budgets remain bounded. Therefore, the challenge becomes how to identify the priority threats, select the solutions that deliver the best ROI and stretch dollars to maximize your organization’s protection. Consultants and industry analysts can help, but they too can be costly options that don’t always provide the correct advice.

So how best to simplify the decision-making process? Use an analogy. Consider that every cybersecurity solution has a counterpart in the physical world. To illustrate this point, consider the security measures at banks. They make a perfect analogy, because banks are just like applications or computing environments; both contain valuables that criminals are eager to steal.

The first line of defense at a bank is the front door, which is designed to allow people to enter and leave while providing a first layer of defense against thieves. Network firewalls fulfill the same role within the realm of cyber security. They allow specific types of traffic to enter an organization’s network but block mischievous visitors from entering. While firewalls are an effective first line of defense, they’re not impervious. Just like surreptitious robbers such as Billy the Kid or John Dillinger, SSL/TLS-based encrypted attacks or nefarious malware can sneak through this digital “front door” via a standard port.

Past the entrance there is often a security guard, the equivalent of an IPS or anti-malware device. This “security guard,” typically an anti-malware and/or heuristic-based IPS function, seeks to identify unusual behavior or other indicators that trouble has entered the bank, such as somebody wearing a ski mask or perhaps carrying a concealed weapon.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Once the hacker gets past these perimeter security measures, they find themselves at the presentation layer of the application, or in the case of a bank, the teller. There is security here as well: first, authentication (do you have an account?) and second, two-factor authentication (an ATM card and security PIN). IPS and anti-malware devices work in concert with SIEM management solutions to serve as security cameras, performing additional security checks. Just like a bank leveraging the FBI’s Most Wanted List, these solutions leverage crowdsourcing and big-data analytics to analyze data from a massive global community and identify bank-robbing malware in advance.

A robber will often demand access to the bank’s vault. In the realm of IT, this is the database, where valuable information such as passwords, credit card or financial transaction information, or healthcare data is stored. There are several ways of protecting this data, or at the very least, monitoring it. Encryption and database application monitoring solutions are the most common.

Adapting for the Future: DDoS Mitigation

To understand how and why cyber-security models will have to adapt to meet future threats, let’s outline three obstacles they’ll have to overcome in the near future: advanced DDoS mitigation, encrypted cyber-attacks, and DevOps and agile software development.

[You may also like: Agile, DevOps and Load Balancers: Evolution of Network Operations]

A DDoS attack is any cyber-attack that compromises a company’s website or network and impairs the organization’s ability to conduct business. Take an e-commerce business for example. If somebody wanted to prevent the organization from conducting business, it’s not necessary to hack the website but simply to make it difficult for visitors to access it.

Leveraging the bank analogy, this is why banks and financial institutions leverage multiple layers of security: it provides an integrated, redundant defense designed to meet a multitude of potential situations in the unlikely event a bank is robbed. This also includes the ability to quickly and effectively communicate with law enforcement. In the world of cyber security, multi-layered defense is also essential. Why? Because preparing for “common” DDoS attacks is no longer enough. With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever. This is why hybrid protection, which combines both on-premise and cloud-based mitigation services, is critical.

[You may also like: 8 Questions to Ask in DDoS Protection]

Why are there two systems when it comes to cyber security? Because it offers the best of both worlds. When a DDoS solution is deployed on-premise, organizations benefit from immediate and automatic attack detection and mitigation. Within a few seconds of the initiation of a cyber-assault, the online services are well protected and the attack is mitigated. However, on-premise DDoS solutions cannot handle volumetric network floods that saturate the Internet pipe. These attacks must be mitigated from the cloud.

Hybrid DDoS protections aspire to offer best-of-breed attack mitigation by combining on-premise and cloud mitigation into a single, integrated solution. The hybrid solution chooses the right mitigation location and technique based on attack characteristics. In the hybrid solution, attack detection and mitigation starts immediately and automatically using the on-premise attack mitigation device. This stops various attacks from diminishing the availability of the online services. All attacks are mitigated on-premise, unless they threaten to block the Internet pipe of the organization. In case of pipe saturation, the hybrid solution activates cloud mitigation and the traffic is diverted to the cloud, where it is scrubbed before being sent back to the enterprise.
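The hybrid escalation logic described above can be reduced to a sketch: mitigate on-premise by default, and divert to cloud scrubbing only when the attack threatens to saturate the Internet pipe. The safety margin and function name are illustrative, not any vendor's actual policy:

```python
# Hybrid mitigation location choice: stay on-premise for fast, low-latency
# mitigation; escalate to cloud scrubbing when pipe saturation threatens.

def mitigation_location(attack_gbps: float, pipe_gbps: float, safety_margin: float = 0.8) -> str:
    """Choose where to mitigate, based on how close the attack comes to pipe capacity."""
    if attack_gbps >= pipe_gbps * safety_margin:
        return "cloud"      # pipe saturation risk: divert and scrub upstream
    return "on-premise"     # handled locally, no diversion latency

print(mitigation_location(attack_gbps=0.5, pipe_gbps=1.0))  # on-premise
print(mitigation_location(attack_gbps=3.0, pipe_gbps=1.0))  # cloud
```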

[You may also like: Choosing the Right DDoS Solution – Part IV: Hybrid Protection]

An ideal hybrid solution also shares essential information about the attack between on-premise mitigation devices and cloud devices to accelerate and enhance the mitigation of the attack once it reaches the cloud.

Inspecting Encrypted Data

Companies have been encrypting data for well over 20 years. Today, over 50% of Internet traffic is encrypted. SSL/TLS encryption is still the most effective way to protect data, as it ties the encryption to both the source and destination. This is a double-edged sword, however. Hackers are now leveraging encryption to create new, stealthy attack vectors for malware infection and data exfiltration. In essence, they’re a wolf in sheep’s clothing. To stop hackers from leveraging SSL/TLS-based cyber-attacks, organizations require computing resources to inspect communications and ensure they’re not carrying malicious payloads. These increasing resource requirements make it challenging for anything but purpose-built hardware to conduct such inspection.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

The equivalent in the banking world is twofold. If somebody were to enter wearing a ski mask, that person probably wouldn’t be allowed to conduct a transaction, or secondly, there can be additional security checks when somebody enters a bank and requests a large or unique withdrawal.

Dealing with DevOps and Agile Software Development

Lastly, how do we ensure that, as applications become more complex, they don’t become increasingly vulnerable either from coding errors or from newly deployed functionality associated with DevOps or agile development practices? The problem is most cyber-security solutions focus on stopping existing threats. To use our bank analogy again, existing security solutions mean that (ideally), a career criminal can’t enter a bank, someone carrying a concealed weapon is stopped or somebody acting suspiciously is blocked from making a transaction. However, nothing stops somebody with no criminal background or conducting no suspicious activity from entering the bank. The bank’s security systems must be updated to look for other “indicators” that this person could represent a threat.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

In the world of cyber-security, the key is implementing a web application firewall that adapts to evolving threats and applications. A WAF accomplishes this by automatically detecting and protecting new web applications as they are added to the network via automatic policy generation. It should also minimize both false positives and false negatives. Why? Because just like a bank, web applications are accessed both by desired, legitimate users and by undesired attackers (malignant users whose goal is to harm the application and/or steal data). One of the biggest challenges in protecting web applications is the ability to accurately differentiate between the two and to identify and block security threats without disturbing legitimate traffic.

Adaptability is the Name of the Game

The world we live in can be a dangerous place, both physically and digitally. Threats are constantly changing, forcing both financial institutions and organizations to adapt their security solutions and processes. When contemplating the next steps, consider the following:

  • Use common sense and logic. The marketplace is saturated with offerings. Understand how a cybersecurity solution will fit into your existing infrastructure and the business value it will bring by keeping your organization up and running and your customers’ data secure.
  • Understand the long-term TCO of any cyber security solution you purchase.
  • The world is changing. Ensure that any cyber security solution you implement is designed to adapt to the constantly evolving threat landscape and your organization’s operational needs.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

DDoS | Security | Web Application Firewall

Security Risks: How ‘Similar-Solution’ Information Sharing Reduces Risk at the Network Perimeter

August 23, 2018 — by Thomas Gobet


We live in a connected world where we have access to several tools to assist in finding any information we need. If we choose to do something risky, there is often some type of notification that warns us of the risk.

The same holds true in IT departments. When a problem occurs, we search for answers that allow us to make decisions and take action. What problem created the outage? Do I need to increase the bandwidth or choose a CDN offering? Do I need to replace my devices or add a new instance to a cluster?

Connected Security

We all know that connected IT can help us make critical decisions. In the past, we have depended on standalone, best-of-breed security solutions that detect and mitigate locally but do not share data with other mitigation solutions across the network.

[You might also like: Web Application in a Digitally Connected World]

Even when information is shared, it’s typically between identical solutions deployed across various sites within a company. While this represents a good first step, there is still plenty of room for improvement. Let us consider the physical security solutions found at a bank as an analogy for cybersecurity solutions.

A robber enters a bank. The cameras don’t flag the intruder, who is wearing casual clothes and nothing identifying him or her as a criminal. The intruder goes to the first teller and asks for money. The teller closes the window. Next, the robber moves to a second window, demanding money, and that teller closes the window. The robber moves to the third window, and so on, until all available windows are closed.

Is this the most effective security strategy? Wouldn’t it make more sense if the bank had a unified solution that shared information and shut down all of the windows after the first attempt? What if this robber was a hacker who is trying to penetrate your system? Would you allow the hacker to try and break into more than one network silo after the first attempt?

Comprehensive Security Via An Enterprise-Grade Suite Solution

As we've seen in the example above, mitigation solutions that share attack information allow an organization to block a new "signature" as soon as the request is seen. But this only applies once the traffic reaches the solution. How could the bank better protect itself from the robber?

  • Should they do active verification at the entrance?
    • No, it would be time-consuming for customers, who might decide not to come back.
  • Should they keep a list of allowed customers?
    • No, that would turn away new customers.
  • Should they signal the risk to the other desks and to entrance security?
    • Yes; that way all windows would close simultaneously, and security guards could catch the intruder and block any future attempts to enter.

Imagine these windows are your different sites, and the security guard placed at the entrance is the security solution at the perimeter of your network. Distinguishing abnormal behavior from normal behavior requires analyzing network traffic, and the more advanced the analysis, the closer the solution sits to the backend application. That way, only traffic already permitted by the earlier security barriers gets through. Being close to the application means the analyzed traffic has already passed through routers, firewalls, switches, IPS, anti-virus, DLP and many other solutions (in classic architectures).

Organizations require fully integrated WAF and DDoS mitigation appliances that communicate effectively, allowing WAF solutions (deployed close to the application) to warn anti-DDoS systems (deployed at the perimeter) that an attacker is trying to penetrate the network.

In the blog "Accessing Application With A Driving License," Radware recommends blocking any request coming from a client with abnormal behavior. That mechanism applied only to the WAF; with this added communication, it goes one step further and blocks bad requests and/or bad clients trying to access your network.

[You might also like: Accessing Application With a Driving License]

With a fully integrated WAF and DDoS detection and mitigation solution whose components communicate with one another, these devices save time and processing power, and they are more effective at blocking intrusions into your network.
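The signaling described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the WAF, sitting close to the application, flags an abnormal client and publishes its IP to a shared blocklist that the perimeter device consults before doing any deeper inspection. All class names and the toy anomaly rule are assumptions made for the example.

```python
import time

class SharedBlocklist:
    """In-memory blocklist with per-entry expiry; a stand-in for the
    real signaling channel between WAF and perimeter devices."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._entries = {}  # ip -> expiry timestamp

    def block(self, ip):
        self._entries[ip] = time.time() + self.ttl

    def is_blocked(self, ip):
        expiry = self._entries.get(ip)
        if expiry is None:
            return False
        if time.time() > expiry:      # entry expired; forget it
            del self._entries[ip]
            return False
        return True

class WAF:
    """Deep inspection close to the application; reports bad clients."""
    def __init__(self, blocklist):
        self.blocklist = blocklist

    def inspect(self, ip, request):
        if "' OR 1=1" in request:     # toy anomaly rule for illustration
            self.blocklist.block(ip)  # signal the perimeter device
            return False              # drop this request
        return True

class PerimeterFilter:
    """Cheap check at the network edge using the shared blocklist."""
    def __init__(self, blocklist):
        self.blocklist = blocklist

    def allow(self, ip):
        return not self.blocklist.is_blocked(ip)

blocklist = SharedBlocklist()
waf, edge = WAF(blocklist), PerimeterFilter(blocklist)

edge.allow("203.0.113.7")                  # first visit passes the edge
waf.inspect("203.0.113.7", "q=' OR 1=1")   # WAF flags it, signals the edge
edge.allow("203.0.113.7")                  # now dropped at the perimeter
```

The point of the design is that the expensive analysis runs once, near the application, while subsequent requests from the same client are rejected cheaply at the edge; in a real deployment the blocklist would be distributed across sites rather than held in local memory.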

Download “Web Application Security in a Digitally Connected World” to learn more.


Attack Types & Vectors, DDoS, Security

Battling Cyber Risks with Intelligent Automation

June 26, 2018 — by Louis Scialabba


Organizations are losing the cybersecurity race.

Cyber threats are evolving faster than security teams can adapt. The proliferation of data from dozens of security products is outpacing security teams' ability to process it, and budget and talent shortfalls limit their ability to expand rapidly.

The question is: how does a network security team improve its ability to scale and minimize data breaches, all while dealing with increasingly complex attack vectors?

The answer is automation.

DDoS, Security

8 Questions to Ask in DDoS Protection

June 7, 2018 — by Eyal Arazi


As DDoS attacks grow more frequent, more powerful, and more sophisticated, many organizations turn to DDoS mitigation providers to protect themselves against attack.

Before evaluating DDoS protection solutions, it is important to assess the needs, objectives, and constraints of the organization, its network, and its applications. These factors will define the criteria for selecting the optimal solution.

DDoS, Security

Choosing the Right DDoS Solution – Part III: Always-On Cloud Service

April 4, 2018 — by Eyal Arazi


This blog series dives into the different DDoS protection models to help customers choose the optimal protection for their particular use case. The first parts of this series covered premise-based appliances and on-demand cloud services. This installment covers always-on cloud DDoS protection deployments, their advantages and drawbacks, and the use cases they best fit. The final part of the series will focus on hybrid deployments, which combine premise-based and cloud-based protections.

Attack Types & Vectors, DDoS, Security

Layer 7 Attack Mitigation

November 8, 2016 — by Lior Rozen


The DDoS world has hit new records lately: the attacks on KrebsOnSecurity.com, and later on OVH and Dyn, reached bandwidths of more than 1 Tbps. While the bandwidth numbers are impressive, they were expected; DDoS security experts had anticipated that the previous record (about 450 Gbps) would soon be broken, and the 1 Tbps mark will probably be surpassed again by the end of this year or early next. The remarkable part of the latest attack was that it was not the reflective attack the DDoS world has grown used to, in which large internet servers amplify the attacker's requests. This time, the attack consisted of many semi-legitimate HTTP GET requests. Such layer 7 attacks, aimed at the internet pipe as well as the application server behind it, are much harder to block than layer 3 and layer 4 attacks. They are also much harder to conduct.