

Botnets: DDoS and Beyond

June 20, 2019 — by Daniel Smith


Traditionally, DDoS is an avenue of profit for botherders. But today’s botnets have evolved to include several attack vectors other than DDoS that are more profitable. And just as any business-oriented person would do, attackers follow the money.

As a result, botherders are targeting enterprise and network software, since residential devices have become oversaturated. The days of simple credentials-based attacks are long behind us. Attackers are now looking for enterprise devices that will help expand their offerings and assist in developing additional avenues of profit.

A few years ago, when IoT botnets became all the rage, they mainly targeted residential devices with simple credential attacks (something the DDoS industry does not prevent from happening; instead, we take the position of mitigating attacks coming from infected residential devices).

[You may also like: IoT Botnets on the Rise]

From Personal to Enterprise

But now that attackers are targeting enterprise devices, the industry must reevaluate the growing threat behind today’s botnets.

We now have to focus not only on protecting the network from external attacks, but also on keeping the devices and servers found in a typical enterprise network from being infected by botnet malware and leveraged to launch attacks.

In a blog post on MIT's Technology Review titled "Inside the business model for botnets," C.G.J. Putman and colleagues from the University of Twente in the Netherlands detail the economics of a botnet. The article sheds some light on the declining share of DDoS attacks and the growth of other attack vectors generated from botnets.

In their report, the team states that DDoS attacks from a botnet with 30,000 infected devices could generate around $26,000 a month. While that might seem like a lot, it’s actually a drop in the bucket compared to other attack vectors that can be produced from a botnet.

For example, Putman and his colleagues reported that a spamming botnet with 10,000 infected devices can generate $300,000 a month. The most profitable? Click fraud, which can generate over $20 million per month in profit.

[You may also like: Ad Fraud 101: How Cybercriminals Profit from Clicks]

To put that in perspective, AppleJ4ck and P1st from Lizard Squad made close to $600,000 over two years operating a stresser service called vDoS.

So let me ask this: If you are a botherder risking your freedom for profit, are you going to construct a botnet strictly for DDoS attacks or will you construct a botnet with more architecturally diverse devices to support additional vectors of profit?

Exactly. Botherders will continue to maximize their efforts and profitability by targeting enterprise devices.

Read the “IoT Attack Handbook – A Field Guide to Understanding IoT Attacks from the Mirai Botnet and its Modern Variants” to learn more.



Why Hybrid Always-On Protection Is Your Best Bet

June 19, 2019 — by Eyal Arazi


Users today want more. The ubiquity and convenience of online competition mean that customers want everything better, faster, and cheaper. One key component of the user experience is service availability. Customers expect applications and online services to be constantly available and responsive.

The problem, however, is that a new generation of larger and more sophisticated Distributed Denial of Service (DDoS) attacks is making DDoS protection a more challenging task than ever before. Massive IoT botnets are resulting in ever-larger volumetric DDoS attacks, while more sophisticated application-layer attacks find new ways of exhausting server resources. Above all, the ongoing shift to encrypted traffic is creating a new challenge with potent SSL DDoS floods.

Traditional DDoS defenses – whether premise-based or cloud-based – provide incomplete solutions that force inherent trade-offs between high-capacity volumetric protection, protection against sophisticated application-layer DDoS attacks, and handling of SSL certificates. The solution, therefore, is to adopt a new hybrid DDoS protection model that combines premise-based appliances with an always-on cloud service.

Full Protection Requires Looking Both Ways

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate them. However, guaranteeing complete protection against many types of attacks, particularly the more sophisticated ones, requires visibility into both inbound and outbound channels.

[You may also like: DDoS Protection Requires Looking Both Ways]

Attack vectors such as large-file DDoS attacks, ACK floods, and scanning attacks exploit the outbound communication channel and cannot be identified just by looking at ingress traffic. Such attacks are executed by sending small numbers of inbound requests that have an asymmetric, disproportionate impact on either the outbound channel or computing resources inside the network.

SSL is Creating New Challenges

On top of that, SSL/TLS traffic encryption is adding another layer of complexity. Within a short time, the majority of internet traffic has become encrypted. Traffic encryption helps secure customer data, and users now expect security to be part of the service experience. According to statistics published by the Let's Encrypt project, nearly 80% of worldwide web traffic is already encrypted, and the rate is constantly growing.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

Ironically, while SSL/TLS is critical for securing user data, it also creates significant management challenges, and exposes services to a new generation of powerful DDoS attacks:

  • Increased Potency of DDoS Attacks: SSL/TLS connections require up to 15 times more resources from the target server than from the requesting host. This means that hackers can launch devastating attacks using only a small number of connections, and quickly overwhelm server resources using SSL floods (see the sketch after this list).
  • Masking of the Data Payload: Moreover, encryption masks – by definition – the internal contents of traffic requests, preventing deep inspection of packets for malicious traffic. This limits the effectiveness of anti-DDoS defense layers and the types of attacks they can detect. This is particularly true for application-layer (L7) DDoS attacks, which hide under the cover of SSL encryption.
  • SSL Key Exposure: Many organizational, national, and industry regulations forbid SSL keys from being shared with third-party entities. This creates a unique challenge for organizations that must provide the most secure user experience while also protecting their SSL keys from exposure.
  • Latency and Privacy Concerns: Offloading SSL traffic in the cloud is usually a complex and time-consuming task. Most cloud-based SSL DDoS solutions require full decryption of customer traffic by the cloud provider, thereby compromising user privacy and adding latency to customer communications.
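To see why that first point matters, here is a quick back-of-the-envelope sketch in Python. The 15x asymmetry figure comes from the list above; the connection rates are invented for illustration:

    # Rough illustration of SSL/TLS handshake asymmetry (rates are assumptions).
    ASYMMETRY = 15                        # server does ~15x the client's work per handshake
    attacker_handshakes_per_sec = 2_000   # modest attacker rate (assumed)
    server_budget_per_sec = 20_000        # server handshake capacity (assumed)

    effective_load = attacker_handshakes_per_sec * ASYMMETRY
    print(f"Effective server load: {effective_load} handshake-equivalents/s")
    print("Server saturated:", effective_load >= server_budget_per_sec)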

Existing Solutions Provide Partial Coverage

The problem, however, is that existing anti-DDoS defenses cannot deliver high-capacity volumetric protection together with the bi-directional protection that sophisticated types of attacks require.

On-premise appliances provide a high level of protection against a wide variety of DDoS attacks, with very low latency and fast response. In addition, being on-premise, they allow companies to deal with SSL-based attacks without exposing their encryption keys to the outside world. Since they have visibility into both inbound and outbound traffic, they offer bi-directional protection against symmetric DDoS attacks. However, physical appliances can't deal with the large-scale volumetric attacks that have become commonplace in the era of massive IoT botnets.

[You may also like: How to (Securely) Share Certificates with Your Cloud Security Provider]

Cloud-based DDoS protection services, on the other hand, possess the bandwidth to deal with large-scale volumetric attacks. However, they offer visibility only into the inbound communication channel, so they have a hard time protecting against bi-directional DDoS attacks. Moreover, cloud-based SSL DDoS defenses – if the vendor has them at all – frequently require that the organization upload its SSL certificates, increasing the risk of those keys being exposed.

The Optimal Solution: Hybrid Always-On Approach

For companies that place a high premium on the user experience, and wish to avoid even the slightest possible downtime as a result of DDoS attacks, the optimal solution is to deploy an always-on hybrid solution.

The hybrid approach to DDoS protection combines an on-premise hardware appliance with always-on cloud-based scrubbing capacity. This helps ensure that services are protected against any type of attack.

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

Hybrid Always-On DDoS Protection

Compared to the pure-cloud always-on deployment model, the hybrid always-on approach adds multi-layered protection against symmetric DDoS attacks which saturate the outbound pipe, and allows for maintaining SSL certificates on-premise.

Benefits of the Hybrid Always-On Model

  • Multi-Layered DDoS Protection: The combination of a premise-based hardware mitigation device coupled with cloud-based scrubbing capacity offers multi-layered protection at different levels. If an attack somehow gets through the cloud protection layer, it will be stopped by the on-premise appliance.
  • Constant, Uninterrupted Volumetric Protection: Since all traffic passes through a cloud-based scrubbing center at all times, the cloud-based service provides uninterrupted, ongoing protection against high-capacity volumetric DDoS attacks.
  • Bi-Directional DDoS Protection: While cloud-based DDoS protection services inspect only the inbound traffic channel, the addition of a premise-based appliance allows organizations to inspect the outbound channel, as well, thereby protecting themselves against two-way DDoS attacks which can saturate the outbound pipe, or otherwise require visibility to return traffic in order to identify attack patterns.
  • Reduced SSL Key Exposure: Many national or industry regulations require that encryption keys not be shared with anyone else. The inclusion of a premise-based hardware appliance allows organizations to protect themselves against encrypted DDoS attacks while keeping their SSL keys in-house.
  • Decreased Latency for Encrypted Traffic: SSL offloading in the cloud is frequently a complex and time-consuming affair, which adds much latency to user communications. Since inspection of SSL traffic in the hybrid always-on model is done primarily by the on-premise hardware appliance, users enjoy faster response times and lower latency.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Guaranteeing service availability while simultaneously ensuring the quality of the customer experience is a multi-faceted and complex proposition. Organizations are challenged by growth in the size of DDoS attacks, the increase in sophistication of application-layer DDoS attacks, and the challenges brought about by the shift to SSL encryption.

Deploying a hybrid always-on solution provides visibility into both inbound and outbound traffic, enhances protection for application-layer and encrypted traffic, and allows SSL keys to be kept in-house, without exposing them to the outside world.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



5 Key Considerations in Choosing a DDoS Mitigation Network

May 21, 2019 — by Eyal Arazi


A DDoS mitigation service is more than just the technology or the service guarantees. The quality and resilience of the underlying network are a critical component of your armor, one which must be carefully evaluated to determine how well it can protect you against sophisticated DDoS attacks.

Below are five key considerations in evaluating a DDoS scrubbing network.

Massive Capacity

When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new heights (and scales) of attacks.

To date, the largest-ever verified DDoS attack was a memcached-based attack against GitHub. This attack reached a peak of approximately 1.3 terabits per second (Tbps) and 126 million packets per second (PPS).

In order to withstand such an attack, scrubbing networks must have not just enough capacity to 'cover' the attack itself, but also ample overflow capacity to accommodate other customers on the network and other attacks that might be going on at the same time. A good rule of thumb is to look for mitigation networks with at least 2-3 times the capacity of the largest attacks observed to date.
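Applying that rule of thumb to the GitHub attack above gives a rough floor for the scrubbing capacity you should look for; a minimal sketch, using only the figures already cited:

    largest_observed_tbps = 1.3        # the GitHub memcached attack, per the text
    for multiplier in (2, 3):
        floor = largest_observed_tbps * multiplier
        print(f"{multiplier}x headroom -> at least {floor:.1f} Tbps of scrubbing capacity")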

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Dedicated Capacity

It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers – particularly those who take an ‘edge’ security approach – rely on their Content Distribution Network (CDN) capacity for DDoS mitigation, as well.

The problem, however, is that the majority of this capacity is already being utilized on a routine basis. CDN providers don't like to pay for unused capacity, so CDN bandwidth utilization rates routinely reach 60-70%, and can frequently reach 80% or more. This leaves very little room for the 'overflow' traffic that can result from a large-scale volumetric DDoS attack.
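The arithmetic behind that concern is simple. The utilization rates below are the ones cited above; the 10 Tbps aggregate capacity is an assumption for illustration:

    cdn_capacity_tbps = 10.0    # hypothetical aggregate CDN capacity
    for utilization in (0.6, 0.7, 0.8):
        headroom = cdn_capacity_tbps * (1 - utilization)
        print(f"At {utilization:.0%} utilization: only {headroom:.1f} Tbps left for attack overflow")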

[You may also like: DDoS Protection Requires Looking Both Ways]

Therefore, it is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load-balancing.

Global Footprint

Organizations deploy DDoS mitigation solutions in order to ensure the availability of their services. An increasingly important aspect of availability is speed of response: the question is not only whether the service is available, but also how quickly it can respond.

Cloud-based DDoS protection services operate by routing customer traffic through the service provider's scrubbing centers, removing any malicious traffic, and then forwarding clean traffic to the customer's servers. This process inevitably adds a certain amount of latency to user communications.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

One of the key factors affecting latency is distance from the host. Therefore, in order to minimize latency, it is important for the scrubbing center to be as close as possible to the customer. This can only be achieved with a globally-distributed network, with a large number of scrubbing centers deployed at strategic communication hubs, where there is large-scale access to high-speed fiber connections.
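A rough sense of the numbers: light in fiber propagates at roughly 200 km per millisecond, so round-trip latency grows quickly with the distance to the scrubbing center. A minimal sketch (distances are illustrative, and real-world routing adds further delay):

    FIBER_KM_PER_MS = 200    # approximate speed of light in fiber

    for distance_km in (100, 1_000, 5_000):
        rtt_ms = 2 * distance_km / FIBER_KM_PER_MS
        print(f"{distance_km:>5} km to scrubbing center -> ~{rtt_ms:4.1f} ms added RTT (propagation only)")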

As a result, when examining a DDoS protection network, it is important not just to look at capacity figures, but also at the number of scrubbing centers and their distribution.

Anycast Routing

A key component impacting response time is the quality of the network itself and its back-end routing mechanisms. In order to ensure maximal speed and resilience, modern security networks rely on anycast-based routing.

Anycast-based routing establishes a one-to-many relationship between IP addresses and network nodes (i.e., there are multiple network nodes with the same IP address). When a request is sent to the network, the routing mechanism applies principles of least-cost-routing to determine which network node is the optimal destination.

Routing paths can be selected based on the number of hops, distance, latency, or path cost considerations. As a result, traffic from any given point will usually be routed to the nearest and fastest node.
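Conceptually, the selection step is a least-cost function evaluated over candidate nodes that share the same IP. Here is a minimal toy model; node names, metrics, and weights are invented, and in practice this decision is made by BGP rather than application code:

    # Toy model of anycast node selection: all nodes advertise the same IP,
    # and the routing layer picks the lowest-cost path. Metrics are illustrative.
    nodes = {
        "scrubbing-eu": {"hops": 4, "latency_ms": 12},
        "scrubbing-us": {"hops": 7, "latency_ms": 48},
        "scrubbing-ap": {"hops": 9, "latency_ms": 95},
    }

    def path_cost(metrics, hop_weight=1.0, latency_weight=0.5):
        # Simple least-cost function combining hop count and latency.
        return hop_weight * metrics["hops"] + latency_weight * metrics["latency_ms"]

    best = min(nodes, key=lambda name: path_cost(nodes[name]))
    print("Traffic is routed to:", best)   # -> scrubbing-eu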

[You may also like: The Costs of Cyberattacks Are Real]

Anycast helps improve the speed and efficiency of traffic routing within the network. DDoS scrubbing networks based on anycast routing enjoy these benefits, which ultimately results in faster response and lower latency for end-users.

Multiple Redundancy

Finally, when selecting a DDoS scrubbing network, it is important to always have a backup. The whole point of a DDoS protection service is to ensure service availability. Therefore, you cannot have it – or any component in it – be a single point-of-failure. This means that every component within the security network must be backed up with multiple redundancy.

This includes not just multiple scrubbing centers and overflow capacity, but also requires redundancy in ISP links, routers, switches, load balancers, mitigation devices, and more.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Only a network with full multiple redundancy for all components can ensure full service availability at all times, and guarantee that your DDoS mitigation service does not become a single point-of-failure of its own.

Ask the Questions

Alongside technology and service, the underlying network forms a critical part of a cloud security network. The five considerations above outline the key metrics by which you should evaluate the network powering potential DDoS protection services.

Ask your service provider – or any service provider that you are evaluating – about their capabilities with regards to each of these metrics, and if you don’t like the answer, then you should consider looking for alternatives.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service

May 2, 2019 — by Dileep Mishra


Internet pipes have gotten fatter in the last decade. We have gone from expensive 1 Mbps links to 1 Gbps links, which are available at a relatively low cost. Most enterprises have at least a 1 Gbps ISP link to their data center; many have multiple 1 Gbps links at each data center. In the past, QoS, packet shaping, application prioritization, and the like were a big deal, but now we simply throw more capacity at any potential performance problem.

However, when it comes to protecting your infrastructure from DDoS attacks, 1 Gbps, 10 Gbps, or even 40 Gbps is not enough capacity. In 2019, even relatively small DDoS attacks are a few Gbps in size, and the larger ones exceed 1 Tbps.

For this reason, when security professionals design a DDoS mitigation solution, one of the key considerations is the capacity of the DDoS mitigation service. That said, it isn't easy to figure out which DDoS mitigation service actually has the capacity to withstand the largest DDoS attacks, because there is a range of DDoS mitigation solutions to pick from, and capacity is a parameter most vendors can spin to make their solutions appear generously provisioned.

Let us examine some of the available solutions and understand the difference between their advertised capacity and their real ability to block a large-bandwidth DDoS attack.

On-premises DDoS Mitigation Appliances 

First of all, be wary of any router, switch, or network firewall that is also being positioned as a DDoS mitigation appliance. Chances are it does NOT have the ability to withstand a multi-Gbps DDoS attack.

There are a handful of companies that make purpose-built DDoS mitigation appliances. These devices are usually deployed at the edge of your network, as close as possible to the ISP link. Many of these devices can mitigate attacks in the tens of Gbps; however, the advertised mitigation capacity is usually based on one particular attack vector with all attack packets being of a specific size.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

Irrespective of the vendor, don't buy into 20/40/60 Gbps of mitigation capacity without quizzing the device's ability to withstand a multi-vector attack, its real-world performance, and its ability to pass clean traffic at a given throughput while also mitigating a large attack. Don't forget: pps is sometimes more important than bps, and many devices will hit their pps limit first. Also be sure to delve into the internals of the attack mitigation appliance, in particular whether the same CPU is used to mitigate an attack while passing normal traffic. The most effective devices segregate the attack "plane" from the clean-traffic "plane," ensuring attack mitigation without affecting normal traffic.
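The pps point is worth quantifying. At a fixed bit rate, smaller packets mean dramatically higher packet rates: at 10 Gbps, minimum-size 64-byte Ethernet frames (plus 20 bytes of preamble and inter-frame gap on the wire) work out to roughly 14.9 million packets per second, which is where many appliances hit their limit first. A quick check:

    LINK_BPS = 10e9    # a 10 Gbps link

    for frame_bytes in (64, 512, 1500):
        wire_bytes = frame_bytes + 20    # preamble + inter-frame gap on Ethernet
        pps = LINK_BPS / (wire_bytes * 8)
        print(f"{frame_bytes:>4}-byte frames -> {pps / 1e6:6.2f} Mpps at 10 Gbps")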

Finally, please keep in mind that if your ISP link capacity is 1 Gbps and you have a DDoS mitigation appliance capable of 10 Gbps of mitigation, you are NOT protected against a 10 Gbps attack: the attack will fill your pipe before the on-premises device ever gets a chance to "scrub" the attack traffic.

Cloud-based Scrubbing Centers

The second type of DDoS mitigation solution that is widely deployed is a cloud-based scrubbing solution. Here, you don’t install a DDoS mitigation device at your data center. Rather, you use a DDoS mitigation service deployed in the cloud. With this type of solution, you send telemetry to the cloud service from your data center on a continuous basis, and when there is a spike that corresponds to a DDoS attack, you “divert” your traffic to the cloud service.
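A minimal sketch of the detection side of that telemetry loop, assuming one traffic-rate sample per second and a divert decision when traffic exceeds a multiple of the rolling baseline. The window size, threshold, and divert hook are all assumptions; real services divert via BGP announcements or DNS changes:

    from collections import deque

    WINDOW = 60           # rolling baseline of the last 60 one-second samples (assumed)
    SPIKE_FACTOR = 5.0    # divert when traffic exceeds 5x the baseline (assumed)

    baseline = deque(maxlen=WINDOW)

    def divert_to_cloud():
        # Placeholder: in practice this triggers a BGP or DNS diversion.
        print("Diverting traffic to the cloud scrubbing service")

    def on_sample(mbps):
        if len(baseline) == WINDOW:
            avg = sum(baseline) / len(baseline)
            if mbps > SPIKE_FACTOR * avg:
                divert_to_cloud()
                return                 # don't pollute the baseline with attack traffic
        baseline.append(mbps)

    for sample in [100] * 60 + [900]:  # steady 100 Mbps, then a sudden spike
        on_sample(sample)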

[You may also like: DDoS Protection Requires Looking Both Ways]

There are a few vendors who provide this type of solution, but again, when it comes to the capacity of the cloud DDoS service, the devil is in the details. Some vendors simply add up the "net" capacity of all the ISP links across all their data centers. This is misleading because they may be counting normal daily clean traffic toward the advertised capacity — so ask about the attack mitigation capacity actually available, excluding the normal clean traffic.

Also, chances are the provider has different capacities in different scrubbing centers, and the net capacity across all of them may not reflect the attack mitigation capacity in the geography you care about (where your data center is located).

Another item to inquire about is Anycast capability, because it gives the provider the ability to mitigate an attack close to its source. In other words, if a 100 Gbps attack is coming from China, it will be mitigated at the scrubbing center in APAC.

[You may also like: 8 Questions to Ask in DDoS Protection]

Finally, it is important that the DDoS mitigation provider has a completely separate data path for clean traffic and does not mix clean customer traffic with attack traffic.

Content Distribution Networks

A third type of DDoS mitigation architecture is based upon leveraging a content distribution network (CDN) to diffuse large DDoS attacks. When it comes to the DDoS mitigation capacity of a CDN however, again, the situation is blurry.

Most CDNs have tens, hundreds, or thousands of PoPs geographically distributed across the globe. Many simply count the net aggregate capacity across all of these PoPs and advertise that as the total attack mitigation capacity. This has two major flaws. First, it is quite likely that a real-world DDoS attack will be sourced from a limited number of geographical locations, in which case the capacity that really matters is the local CDN PoP capacity, not the global capacity across all the PoPs.

[You may also like: 5 Must-Have DDoS Protection Technologies]

Second, most CDNs carry a significant amount of normal customer traffic on all of their nodes, so if a CDN service claims 40 Tbps of attack mitigation capacity, it may be counting 30 Tbps of normal traffic. The question to ask is what the total unused capacity is, both at the net aggregate level and within your geographical region.

ISP Provider-based DDoS Mitigation

Many ISPs offer DDoS mitigation as an add-on to the ISP pipe. It sounds like a natural choice: they see all traffic coming into your data center even before it reaches your infrastructure, so it is best to block the attack within the ISP's infrastructure – right?

Unfortunately, most ISPs have only semi-adequate DDoS mitigation deployed within their own infrastructure and are likely to pass the attack traffic along to your data center. In fact, in some scenarios, an ISP may actually black-hole your traffic when you are under attack in order to protect other customers using a shared portion of its infrastructure. The questions to ask your ISP are what happens if it sees a 500 Gbps attack heading toward your infrastructure, and whether there is any cap on the maximum attack traffic it will handle.

[You may also like: ISP DDoS Protection May Not Cover All of Bases]

All of the DDoS mitigation solutions discussed above are effective and widely deployed; we don't endorse or recommend one over the other. However, take any advertised attack mitigation capacity from any provider with a grain of salt. Quiz your provider on local capacity, the separation of clean and attack traffic, any caps on attack size, and any SLAs. Also, carefully examine vendor proposals for exclusions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Is It Legal to Evaluate a DDoS Mitigation Service?

March 27, 2019 — by Dileep Mishra


A couple of months ago, I was on a call with a company that was in the process of evaluating DDoS mitigation services to protect its data centers. This company runs mission-critical applications and was looking for comprehensive coverage against various types of attacks, including volumetric, low-and-slow, encrypted floods, and application-layer attacks.

During the discussion, our team asked a series of technical questions related to their ISP links, types of applications, physical connectivity, and more. And we provided an attack demo using our sandbox lab in Mahwah.

Everything was moving along just fine until the customer asked us for a Proof of Concept (PoC), what most would consider a natural next step in the vendor evaluation process.

About That Proof of Concept…

How would you do a DDoS POC? You rack and stack the DDoS mitigation appliance (or enable the service if it is cloud based), set up some type of management IP address, configure the protection policies, and off you go!

Well, when we spoke to this company, they said they would be happy to do all of that, at their disaster recovery data center located within a large carrier facility on the East Coast. This sent my antenna up, and I immediately asked a couple of questions that would turn out to be extremely important for all of us: Do you have attack tools to launch DDoS attacks? Do you take responsibility for running the attacks? The customer answered "yes" to both.

[You may also like: DDoS Protection Requires Looking Both Ways]

Being a trained SE, I then asked why they needed to run the PoC in their lab, and whether we could instead demonstrate that our DDoS mitigation appliance mitigates a wide range of attacks using our PoC script. As it turned out, the prospect was evaluating other vendors and, to compare apples to apples (thereby giving all vendors a fair chance), was already conducting a PoC in their data center with the other vendors' appliances.

We shipped the PoC unit quickly, and the prospect, true to their word, got the unit racked, stacked, and cabled up, ready to go. We configured the device and gave them the green light to launch attacks. And then the prospect told us to launch the attacks; they didn't have any attack tools after all.

A Bad Idea

Well, most of us in this industry do have DDoS testing tools, so what’s the big deal? As vendors who provide cybersecurity solutions, we shouldn’t have any problems launching attacks over the Internet to test out a DDoS mitigation service…right?

[You may also like: 8 Questions to Ask in DDoS Protection]

WRONG! Here’s why that’s a bad idea:

  • Launching attacks over the Internet is ILLEGAL. You need written permission from the entity being attacked to launch a DDoS attack. You can try your luck if you want, but this is akin to running a red light: you may get away with it, but if you are caught, the repercussions are damaging and expensive.
  • Your ISP might block your IP address. Many ISPs have DDoS defenses within their infrastructure, and if they see someone launching a malicious attack, they might block your access. Good luck sorting that one out with your ISP!
  • Your attacks may not reach the desired testing destination. Even if your ISP doesn't block you and the FBI doesn't come knocking, there might be one or more DDoS mitigation devices between you and the customer data center where the destination IP being tested resides. These devices could very well mitigate the attack you launch, preventing you from doing the testing.

Those are three big reasons why doing DDoS testing in a production data center is, simply put, a bad idea, especially if you don't have a legal, easy way to generate attacks.

[You may also like: 5 Must-Have DDoS Protection Technologies]

A Better Way

So what are the alternatives? How should you do DDoS testing?

  • With DDoS testing, the focus should be on evaluating the mitigation features: can the service detect attacks quickly, mitigate immediately, adapt to attacks that are morphing, and report accurately on what it is seeing and mitigating? How accurate is the mitigation (what about false positives)? A measurement sketch follows this list. If you run a DDoS PoC in a production environment, you will spend most of your resources and time testing connectivity and spinning your wheels on operational aspects (LAN cabling, console cabling, change control procedures, paperwork, etc.). This is not what you want to test; you want to test DDoS mitigation! It's like trying to test how fast a sports car can go on a very busy street: you will end up testing the brakes, but you won't get very far with any speed testing.
  • Test things out in your lab. Even better, let the vendor test it in their lab for you. This will let both parties focus on the security features rather than get caught up in the headaches of logistics involved with shipping, change control, physical cabling, connectivity, routing, etc.
  • It is perfectly legal to use testing toolkits like Kali Linux or BackTrack within a lab environment. Launch attacks to your heart's content, morph the attacks, and see how the DDoS service responds.
  • If you don't have the time or expertise to launch attacks yourself, hire a DDoS testing service. Companies like activereach, RedWolf Security, or MazeBolt Security do this for a living, and they can help you test the DDoS mitigation service with a wide array of customized attacks. This will cost you some money, but if you are serious about the deployment, you will be doing yourself a favor and saving future work.
  • Finally, evaluate multiple vendors in parallel. You can never do this in a production data center, but in a lab you can keep the attacks and the victim applications constant while swapping in each DDoS mitigation service. This gives you an apples-to-apples comparison of the actual capabilities of each vendor and also shortens your evaluation cycle.
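When you do run lab tests, it helps to quantify the criteria in the first bullet above. Here is a minimal sketch that probes the protected service during an attack window and records an approximate time-to-mitigate; the endpoint URL, probe interval, and test duration are all assumptions:

    import time
    import urllib.request

    TARGET = "http://lab-victim.example/health"   # hypothetical lab endpoint
    INTERVAL_S = 1.0

    def probe_ok(url, timeout=2.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except Exception:
            return False

    def measure_time_to_mitigate(duration_s=120):
        # Probe once per second; report the gap between first outage and recovery.
        outage_at = None
        start = time.monotonic()
        while time.monotonic() - start < duration_s:
            now = time.monotonic() - start
            ok = probe_ok(TARGET)
            if not ok and outage_at is None:
                outage_at = now
            elif ok and outage_at is not None:
                print(f"Time to mitigate: ~{now - outage_at:.0f}s")
                return
            time.sleep(INTERVAL_S)
        print("No outage/recovery cycle observed during the test window")

    measure_time_to_mitigate()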

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



DDoS Protection Requires Looking Both Ways

March 26, 2019 — by Eyal Arazi


Service availability is a key component of the user experience. Customers expect services to be constantly available and fast-responding, and any downtime can result in disappointed users, abandoned shopping carts, and lost customers.

Consequently, DDoS attacks are increasing in complexity, size and duration. Radware’s 2018 Global Application and Network Security Report found that over the course of a year, sophisticated DDoS attacks, such as burst attacks, increased by 15%, HTTPS floods grew by 20%, and over 64% of customers were hit by application-layer (L7) DDoS attacks.

Some Attacks are a Two-Way Street

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate them. However, guaranteeing complete protection against many types of attacks, particularly the more sophisticated ones, requires visibility into both inbound and outbound channels.

Some examples of such attacks include:

Out-of-State Protocol Attacks: Some DDoS attacks exploit weaknesses in protocol communication processes, such as TCP's three-way handshake, to create 'out-of-state' connection requests that draw out connections in order to exhaust server resources. While some attacks of this type, such as a SYN flood, can be stopped by examining the inbound channel only, others require visibility into the outbound channel as well.

An example of this is an ACK flood, whereby attackers continuously send forged TCP ACK packets towards the victim host. The target host tries to associate each ACK with an existing TCP connection and, if no such connection exists, drops the packet. However, this process consumes server resources, and large numbers of such requests can deplete system resources. In order to correctly identify and mitigate such attacks, defenses need visibility into both inbound SYNs and outbound SYN/ACK replies, so they can verify whether an ACK packet is associated with any legitimate connection request.
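A minimal sketch of that bookkeeping, with made-up addresses: track per-flow handshake state from both directions, and drop ACKs that match no handshake. Recording the outbound SYN/ACK is exactly what requires visibility into the return channel:

    # Per-flow TCP handshake state; outbound visibility lets us record the SYN/ACK.
    flow_state = {}

    def inbound_syn(client, server):
        flow_state[(client, server)] = "SYN_SEEN"

    def outbound_synack(server, client):
        if flow_state.get((client, server)) == "SYN_SEEN":
            flow_state[(client, server)] = "SYNACK_SENT"

    def inbound_ack(client, server):
        if flow_state.get((client, server)) == "SYNACK_SENT":
            flow_state[(client, server)] = "ESTABLISHED"
            return "pass"
        return "drop: unsolicited ACK (possible ACK flood)"

    inbound_syn("198.51.100.7", "203.0.113.10")
    outbound_synack("203.0.113.10", "198.51.100.7")
    print(inbound_ack("198.51.100.7", "203.0.113.10"))  # pass
    print(inbound_ack("192.0.2.99", "203.0.113.10"))    # drop: unsolicited ACK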

[You may also like: An Overview of the TCP Optimization Process]

Reflection/Amplification Attacks: Such attacks exploit asymmetric responses between the connection requests and replies of certain protocols or applications. Again, some types of such attacks require visibility into both the inbound and outbound traffic channels.

An example of such an attack is a large-file outbound-pipe saturation attack. The attackers identify a very large file on the target network and send connection requests to fetch it. Each request can be only a few bytes in size, but the ensuing reply can be extremely large. Large numbers of such requests can clog up the outbound pipe.

Another example is memcached amplification attacks. Although such attacks are most frequently used to overwhelm a third-party target via reflection, they can also be used to saturate the outbound channel of the targeted network.

[You may also like: 2018 In Review: Memcache and Drupalgeddon]

Scanning Attacks: Large-scale network scans are not just a security risk; they also frequently bear the hallmarks of a DDoS attack, flooding the network with malicious traffic. Such scans send large numbers of connection requests to host ports and see which ports answer back (thereby indicating that they are open). This also generates high volumes of error responses from closed ports. Mitigating such attacks requires visibility into return traffic, so that defenses can measure the error response rate relative to actual traffic and conclude that an attack is taking place.

Server Cracking: Similar to scanning attacks, server cracking involves sending large numbers of requests in order to brute-force system passwords. This, too, leads to a high error-reply rate, which requires visibility into both the inbound and outbound channels in order to identify the attack.
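Both scanning and cracking reduce to the same signal: an abnormally high share of error replies (TCP resets or ICMP unreachables for scans, authentication failures for cracking) in the outbound channel. A minimal sketch, with an arbitrary threshold chosen for illustration:

    def error_rate_alert(responses, threshold=0.5):
        # responses: outbound reply types observed in the return channel,
        # e.g. "ok" or "error" (RST/unreachable for scans, auth failures for cracking).
        if not responses:
            return False
        errors = sum(1 for r in responses if r == "error")
        return errors / len(responses) > threshold

    normal_traffic = ["ok"] * 95 + ["error"] * 5
    port_scan      = ["error"] * 80 + ["ok"] * 20
    print(error_rate_alert(normal_traffic))  # False
    print(error_rate_alert(port_scan))       # True: likely scan or brute-force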

Stateful Application-Layer DDoS Attacks: Certain types of application-layer (L7) DDoS attacks exploit known protocol weaknesses in order to create large numbers of spoofed requests that exhaust server resources. Mitigating such attacks requires state-aware, bi-directional visibility to identify attack patterns, so that the relevant attack signature can be applied to block them. Examples include low-and-slow attacks and application-layer (L7) SYN floods, which draw out HTTP and TCP connections in order to continuously consume server resources.

[You may also like: Layer 7 Attack Mitigation]

Two-Way Attacks Require Bi-Directional Defenses

As online service availability becomes ever more important, hackers are devising more sophisticated attacks than ever in order to overwhelm defenses. Many such attack vectors, frequently the more sophisticated and potent ones, either target or take advantage of the outbound communication channel.

Therefore, in order for organizations to fully protect themselves, they must deploy protections that allow bi-directional inspection of traffic in order to identify and neutralize such threats.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



What Do Banks and Cybersecurity Have in Common? Everything.

February 7, 2019 — by Radware


New cyber-security threats require new solutions. New solutions require a project to implement them. The problems and solutions seem infinite while budgets remain bounded. Therefore, the challenge becomes how to identify the priority threats, select the solutions that deliver the best ROI and stretch dollars to maximize your organization’s protection. Consultants and industry analysts can help, but they too can be costly options that don’t always provide the correct advice.

So how best to simplify the decision-making process? Use an analogy. Consider that every cybersecurity solution has a counterpart in the physical world. To illustrate this point, consider the security measures at banks. They make a perfect analogy, because banks are just like applications or computing environments; both contain valuables that criminals are eager to steal.

The first line of defense at a bank is the front door, which is designed to allow people to enter and leave while providing a first layer of defense against thieves. Network firewalls fulfill the same role within the realm of cyber security. They allow specific types of traffic to enter an organization’s network but block mischievous visitors from entering. While firewalls are an effective first line of defense, they’re not impervious. Just like surreptitious robbers such as Billy the Kid or John Dillinger, SSL/TLS-based encrypted attacks or nefarious malware can sneak through this digital “front door” via a standard port.

Past the entrance there is often a security guard, the equivalent of an IPS or anti-malware device. This "security guard," typically an anti-malware and/or heuristic-based IPS function, seeks to identify unusual behavior or other indicators that trouble has entered the bank, such as somebody wearing a ski mask or carrying a concealed weapon.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Once the hacker gets past these perimeter security measures, they find themselves at the presentation layer of the application, or in the case of a bank, the teller. There is security here as well: first, authentication (do you have an account?) and second, two-factor authentication (an ATM card and security PIN). IPS and anti-malware devices work in concert with SIEM management solutions to serve as security cameras, performing additional security checks. Just like a bank leveraging the FBI's Most Wanted List, these solutions leverage crowdsourcing and big-data analytics to analyze data from a massive global community and identify bank-robbing malware in advance.

A robber will often demand access to the bank's vault. In the realm of IT, this is the database, where valuable information such as passwords, credit card and financial transaction information, or healthcare data is stored. There are several ways of protecting this data, or at the very least monitoring it; encryption and database application monitoring solutions are the most common.

Adapting for the Future: DDoS Mitigation

To understand how and why cyber-security models will have to adapt to meet future threats, let’s outline three obstacles they’ll have to overcome in the near future: advanced DDoS mitigation, encrypted cyber-attacks, and DevOps and agile software development.

[You may also like: Agile, DevOps and Load Balancers: Evolution of Network Operations]

A DDoS attack is any cyber-attack that compromises a company's website or network and impairs the organization's ability to conduct business. Take an e-commerce business, for example: if somebody wanted to prevent the organization from conducting business, it's not necessary to hack the website; it's enough to make it difficult for visitors to access it.

Leveraging the bank analogy, this is why banks and financial institutions employ multiple layers of security: it provides an integrated, redundant defense designed to meet a multitude of potential situations in the unlikely event a bank is robbed. This also includes the ability to quickly and effectively communicate with law enforcement. In the world of cyber security, multi-layered defense is also essential. Why? Because preparing for "common" DDoS attacks is no longer enough. With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever. This is why hybrid protection, which combines both on-premise and cloud-based mitigation services, is critical.

[You may also like: 8 Questions to Ask in DDoS Protection]

Why are there two systems when it comes to cyber security? Because it offers the best of both worlds. When a DDoS solution is deployed on-premise, organizations benefit from immediate and automatic attack detection and mitigation. Within a few seconds of the initiation of a cyber-assault, the online services are well protected and the attack is mitigated. However, on-premise DDoS solutions cannot handle volumetric network floods that saturate the Internet pipe; these attacks must be mitigated from the cloud.

Hybrid DDoS protections aspire to offer best-of-breed attack mitigation by combining on-premise and cloud mitigation into a single, integrated solution. The hybrid solution chooses the right mitigation location and technique based on attack characteristics. Attack detection and mitigation start immediately and automatically using the on-premise attack mitigation device, stopping various attacks from diminishing the availability of the online services. All attacks are mitigated on-premise unless they threaten to saturate the organization's Internet pipe. In case of pipe saturation, the hybrid solution activates cloud mitigation, and traffic is diverted to the cloud, where it is scrubbed before being sent back to the enterprise.
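The core decision in that flow fits in a few lines. A minimal sketch, where the pipe size, safety margin, and attack rates are all assumptions:

    INTERNET_PIPE_GBPS = 1.0    # the organization's pipe capacity (assumed)
    SATURATION_MARGIN = 0.8     # divert before the pipe actually fills (assumed)

    def choose_mitigation(attack_gbps):
        if attack_gbps >= SATURATION_MARGIN * INTERNET_PIPE_GBPS:
            return "divert to cloud scrubbing (pipe saturation risk)"
        return "mitigate on-premise (immediate, low latency)"

    for rate in (0.2, 0.5, 3.0):
        print(f"{rate:>4} Gbps attack -> {choose_mitigation(rate)}")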

[You may also like: Choosing the Right DDoS Solution – Part IV: Hybrid Protection]

An ideal hybrid solution also shares essential information about the attack between on-premise mitigation devices and cloud devices to accelerate and enhance the mitigation of the attack once it reaches the cloud.

Inspecting Encrypted Data

Companies have been encrypting data for well over 20 years. Today, over 50% of Internet traffic is encrypted. SSL/TLS encryption is still the most effective way to protect data, as it ties the encryption to both the source and destination. This is a double-edged sword, however: hackers are now leveraging encryption to create new, stealthy attack vectors for malware infection and data exfiltration. In essence, they're a wolf in sheep's clothing. To stop hackers from leveraging SSL/TLS-based cyber-attacks, organizations require computing resources to inspect communications and ensure they're not carrying malware. These increasing resource requirements make it challenging for anything but purpose-built hardware to conduct such inspection.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

The equivalent in the banking world is twofold. If somebody were to enter wearing a ski mask, that person probably wouldn’t be allowed to conduct a transaction, or secondly, there can be additional security checks when somebody enters a bank and requests a large or unique withdrawal.

Dealing with DevOps and Agile Software Development

Lastly, how do we ensure that, as applications become more complex, they don't become increasingly vulnerable, whether from coding errors or from newly deployed functionality associated with DevOps or agile development practices? The problem is that most cyber-security solutions focus on stopping existing threats. To use our bank analogy again: existing security solutions mean that (ideally) a career criminal can't enter the bank, someone carrying a concealed weapon is stopped, and somebody acting suspiciously is blocked from making a transaction. However, nothing stops somebody with no criminal background and no suspicious behavior from entering the bank. The bank's security systems must be updated to look for other "indicators" that this person could represent a threat.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

In the world of cyber-security, the key is implementing a web application firewall that adapts to evolving threats and applications. A WAF accomplishes this by automatically detecting and protecting new web applications as they are added to the network via automatic policy generation. It should also minimize both false positives and false negatives. Why? Because, just like a bank, web applications are accessed both by legitimate users and by attackers (malignant users whose goal is to harm the application and/or steal data). One of the biggest challenges in protecting web applications is the ability to accurately differentiate between the two, identifying and blocking security threats while not disturbing legitimate traffic.

Adaptability is the Name of the Game

The world we live in can be a dangerous place, both physically and digitally. Threats are constantly changing, forcing both financial institutions and organizations to adapt their security solutions and processes. When contemplating the next steps, consider the following:

  • Use common sense and logic. The marketplace is saturated with offerings. Understand how a cybersecurity solution will fit into your existing infrastructure and the business value it will bring by keeping your organization up and running and your customers' data secure.
  • Understand the long-term TCO of any cyber security solution you purchase.
  • The world is changing. Ensure that any cyber security solution you implement is designed to adapt to the constantly evolving threat landscape and your organization’s operational needs.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Top 3 Cyberattacks Targeting Proxy Servers

January 16, 2019 — by Daniel Smith


Many organizations are now realizing that DDoS defense is critical to maintaining an exceptional customer experience. Why? Because nothing diminishes load times or impacts the end user's experience more than a cyberattack.

As facilitators of access to content and networks, proxy servers have become a focal point for those seeking to cause grief to organizations via cyberattacks, due to the fallout a successful assault can have.

Attacking the CDN Proxy

New vulnerabilities in content delivery networks (CDNs) have left many wondering if the networks themselves are vulnerable to a wide variety of cyberattacks. Here are five cyber “blind spots” that are often attacked – and how to mitigate the risks:

Increase in dynamic content attacks. Attackers have discovered that the treatment of dynamic content requests is a major blind spot in CDNs. Since dynamic content is not stored on CDN servers, all requests for dynamic content are sent to the origin's servers. Attackers are taking advantage of this behavior to generate attack traffic containing random parameters in HTTP GET requests. CDN servers immediately forward this attack traffic to the origin, expecting the origin's server to handle the requests. However, in many cases the origin's servers do not have the capacity to handle all those attack requests, and they fail to provide online services to legitimate users. That creates a denial-of-service situation. Many CDNs can rate-limit the number of dynamic requests to the server under attack, but they cannot distinguish attackers from legitimate users, so the rate limit will result in legitimate users being blocked.
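The mechanics of the blind spot are easy to demonstrate: a request whose query string contains parameters the application has never used will bypass the cache and land on the origin. A minimal sketch, where the known-parameter whitelist is an assumption and real CDN logic is far more involved:

    from urllib.parse import urlparse, parse_qs

    KNOWN_PARAMS = {"page", "lang", "id"}   # parameters the app actually uses (assumed)

    def is_cache_busting(url):
        # Unknown random parameters force a cache miss, so the request hits the origin.
        params = parse_qs(urlparse(url).query)
        return any(p not in KNOWN_PARAMS for p in params)

    print(is_cache_busting("https://example.com/item?id=42&lang=en"))  # False: cacheable
    print(is_cache_busting("https://example.com/item?zx81r=qpmf0a"))   # True: goes to origin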

SSL-based DDoS attacks. SSL-based DDoS attacks leverage this cryptographic protocol to target the victim’s online services. These attacks are easy to launch and difficult to mitigate, making them a hacker favorite. To detect and mitigate SSL-based attacks, CDN servers must first decrypt the traffic using the customer’s SSL keys. If the customer is not willing to provide the SSL keys to its CDN provider, then the SSL attack traffic is redirected to the customer’s origin. That leaves the customer vulnerable to SSL attacks. Such attacks that hit the customer’s origin can easily take down the secured online service.

[You may also like: SSL Attacks – When Hackers Use Security Against You]

During DDoS attacks, when web application firewall (WAF) technologies are involved, CDNs also have a significant scalability weakness in terms of how many SSL connections per second they can handle. Serious latency issues can arise. PCI and other security compliance issues are also a problem because they limit the data centers that can be used to service the customer. This can increase latency and cause audit issues.

Keep in mind these problems are exacerbated by the massive migration from RSA algorithms to ECC- and DH-based algorithms.

Attacks on non-CDN services. CDN services are often offered only for HTTP/S and DNS applications. Other online services and applications in the customer's data center, such as VoIP, mail, FTP, and proprietary protocols, are not served by the CDN, so traffic to those applications is not routed through it. Attackers take advantage of this blind spot and launch attacks on such applications, hitting the customer's origin with large-scale attacks that threaten to saturate the customer's Internet pipe. Once the pipe is saturated, all the applications at the origin become unavailable to legitimate users, including the ones served by the CDN.

[You may also like: CDN Security is NOT Enough for Today]

Direct IP attacks. Even applications that are served by a CDN can be attacked once attackers launch a direct hit on the IP address of the web servers at the customer’s data center. These can be network-based flood attacks such as UDP floods or ICMP floods that will not be routed through CDN services and will directly hit the customer’s servers. Such volumetric network attacks can saturate the Internet pipe. That results in degradation to application and online services, including those served by the CDN.

Web application attacks. CDN protection from threats is limited, exposing the customer's web applications to data leakage, theft, and other threats common to web applications. Most CDN-based WAF capabilities are minimal, covering only a basic set of predefined signatures and rules. Many CDN-based WAFs do not learn HTTP parameters and do not create positive security rules; therefore, they cannot protect against zero-day attacks and known threats. For companies that do provide tuning for the web applications in their WAF, the cost of that level of protection is extremely high. In addition to these significant blind spots, most CDN security services are simply not responsive enough, with security configurations that take hours to deploy manually. Security services rely on technologies (e.g., rate limiting) that have proven inefficient in recent years and lack capabilities such as network behavioral analysis, challenge-response mechanisms, and more.
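To make the "positive security" idea concrete, here is a minimal sketch of the approach the paragraph above says most CDN-based WAFs lack: learn what each HTTP parameter normally looks like from known-good traffic, then reject anything outside that model. The patterns and sample values are invented for illustration:

    import re

    learned = {}   # per-parameter set of learned value patterns

    def learn(param, value):
        # Crude learning: digits-only values vs. simple word-like values.
        pattern = r"^\d+$" if value.isdigit() else r"^[\w\-]+$"
        learned.setdefault(param, set()).add(pattern)

    def allowed(param, value):
        # Positive model: reject anything that matches no learned pattern.
        patterns = learned.get(param)
        return bool(patterns) and any(re.match(p, value) for p in patterns)

    for good_value in ("101", "202", "17"):    # known-good training traffic (assumed)
        learn("id", good_value)

    print(allowed("id", "404"))                         # True: fits the learned model
    print(allowed("id", "1 OR 1=1"))                    # False: outside the model
    print(allowed("redirect", "http://evil.example"))   # False: unknown parameter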

[You may also like: Are Your Applications Secure?]

Finding the Watering Holes

Watering hole attack vectors are all about finding the weakest link in a technology chain. These attacks target often forgotten, overlooked, or unattended automated processes, and they can lead to unbelievable devastation. What follows is a list of sample watering hole targets:

  • App stores
  • Security update services
  • Domain name services
  • Public code repositories to build websites
  • Web analytics platforms
  • Identity and access single sign-on platforms
  • Open source code commonly used by vendors
  • Third-party vendors that participate in the website

The DDoS attack on Dyn in 2016 remains the best example of the watering hole technique to date. However, we believe this vector will gain momentum heading into 2018 and 2019 as automation begins to pervade every aspect of our lives.

Attacking from the Side

In many ways, side channels are the most obscure and obfuscated attack vectors. This technique attacks the integrity of a company’s site through a variety of tactics:

  • DDoS the company’s analytics provider
  • Brute-force attack against all users or against all of the site’s third-party companies
  • Port the admin’s phone and steal login information
  • Massive load on “page dotting”
  • Large botnets to “learn” ins and outs of a site

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.



2018 In Review: Memcache and Drupalgeddon

December 20, 2018 — by Daniel Smith


Attackers don’t just utilize old, unpatched vulnerabilities, they also exploit recent disclosures at impressive rates. This year we witnessed two worldwide events that highlight the evolution and speed with which attackers will weaponize a vulnerability: Memcache and Druppalgeddon.

Memcached DDoS Attacks

In late February, Radware's Threat Detection Network signaled an increase in activity on UDP port 11211. At the same time, several organizations began alerting to the same trend: attackers abusing Memcached servers for amplified attacks. A Memcached amplification DDoS attack makes use of legitimate third-party Memcached servers to send spoofed attack traffic to a targeted victim. Memcached, like other UDP-based services (SSDP, DNS, and NTP), lacks native authentication and can therefore be hijacked to launch amplified attacks against victims. The Memcached protocol was never intended to be exposed to the Internet and thus did not have sufficient security controls in place. Because of this exposure, attackers are able to abuse Memcached's UDP port 11211 for reflective, volumetric DDoS attacks.

On February 27, Memcached version 1.5.6 was released, which noted that UDP port 11211 was exposed and fixed the issue by disabling the UDP protocol by default. The following day, before the update could be widely applied, attackers leveraged this new attack vector to launch the world's largest DDoS attack, a title previously held by the Mirai botnet.

There were two main concerns regarding the Memcached vulnerability. The first centered on the number of exposed Memcached servers. With just under 100,000 exposed servers and only a few thousand required to launch a 1 Tbps attack, the cause for concern is great. Most organizations at this point are likely unaware that they have vulnerable Memcached servers exposed to the Internet, and it takes time to block or filter this service. Memcached servers will remain vulnerable for some time, allowing attackers to generate volumetric DDoS attacks with few resources.
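The leverage comes from the amplification factor: Memcached replies can be tens of thousands of times larger than the spoofed requests that trigger them (figures of up to roughly 50,000x were widely reported). A rough calculation showing why "a few thousand" servers suffice, with the per-server send rate assumed for illustration:

    AMPLIFICATION = 50_000        # widely reported upper bound, approximate
    request_bits = 15 * 8         # a ~15-byte spoofed request (assumed)
    requests_per_server = 100     # spoofed requests/sec sent to each reflector (assumed)

    per_server_bps = request_bits * requests_per_server * AMPLIFICATION
    servers_needed = 1e12 / per_server_bps    # reflectors needed for a 1 Tbps attack
    print(f"~{per_server_bps / 1e9:.1f} Gbps reflected per server")
    print(f"~{servers_needed:.0f} reflectors for a 1 Tbps attack")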

[You may also like: Entering into the 1Tbps Era]

The second concern is how quickly attackers began exploiting this vulnerability. The spike in activity was known for several days prior to the patch and publication of the Memcached vulnerability. Within 24 hours of publication, an attacker was able to build an amplification list of vulnerable Memcached servers and launch the massive attack.

Adding to this threat, Defcon.pro, a notorious stresser service, quickly incorporated Memcached attacks into its premium offerings after the disclosure. Stresser services are normally quick to adopt the newest attack vectors, for several reasons. The first is publicity: attackers looking to purchase DDoS-as-a-service will search for a platform offering the latest vectors, and including them shows the service is keeping up with demand. In addition, an operator might include Memcached attacks to provide users with more power. A stresser service offering Memcached attacks will likely attract more customers looking for volume, which once again plays into marketing and availability.

[You may also like: The Rise of Booter and Stresser Services]

DDoS-as-a-service operators are running a business and are evolving at a rapid rate to keep up with demand. Much like extortionists, these operators often capitalize on the public attention created by news coverage. Similarly, ransom denial-of-service (RDoS) operators are quick to threaten the use of new tools because of the risks they pose. DDoS-as-a-service operators do the same, but once a threat is mitigated by security experts, cybercriminals look for newer vectors to incorporate into their toolkits and offerings.

This leads into the next example, the Drupalgeddon campaign, and how quickly hacktivists incorporated this attack vector into their toolkit for the purpose of spreading messages via defacements.

Drupalgeddon

In early 2018, Radware’s Emergency Response Team (ERT) was following AnonPlus Italia, an Anonymous-affiliated group that was engaged in digital protests throughout April and May. The group–involved in political hacktivism as they targeted the Italian government–executed numerous web defacements to protest war, religion, politics and financial power while spreading a message about their social network by abusing the content management systems (CMS).

On April 20, 2018, AnonPlus Italia began a new campaign and defaced two websites to advertise its website and IRC channel. Over the next six days, AnonPlus Italia claimed responsibility for defacing 21 websites, 20 of which ran the popular open-source CMS Drupal.

[You may also like: Hacking Democracy: Vulnerable Voting Infrastructure and the Future of Election Security]

Prior to these attacks, on March 29, 2018, the Drupal security team released a patch for a critical remote code execution (RCE) vulnerability in Drupal that allowed attackers to execute arbitrary code on unpatched servers, the result of an issue affecting multiple subsystems with default or common module configurations. Exploits for CVE-2018-7600 were posted to GitHub and Exploit-DB under the guise of educational purposes. The first PoC was posted to Exploit-DB on April 13, 2018. On April 14, Legion B0mb3r, a member of the Bangladesh-based hacking group Err0r Squad, posted a video to YouTube demonstrating how to use CVE-2018-7600 to deface an unpatched version of Drupal. A few days later, on April 17, a Metasploit module was released to the public.
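
For defenders, the immediate question was whether a given Drupal instance predated the patch. One hypothetical self-check, sketched below for a site you administer, is to read the CHANGELOG.txt that stock Drupal installs expose, which names the core version; hardened sites frequently remove or block this file, so a missing changelog proves nothing. The URL is a placeholder, and the headline patched releases for CVE-2018-7600 were Drupal 7.58 and 8.5.1.

    import re
    import urllib.request

    # Placeholder URL: point this at a Drupal site you administer.
    URL = "https://example.com/CHANGELOG.txt"

    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            text = resp.read(2048).decode("utf-8", errors="replace")
        # A stock changelog opens with a line like "Drupal 7.57, 2018-02-21".
        match = re.search(r"Drupal (\d+\.\d+)", text)
        if match:
            print(f"Core version advertised: {match.group(1)}")
            print("Compare against the patched releases (7.58 / 8.5.1).")
        else:
            print("CHANGELOG.txt present but no version string found.")
    except Exception as exc:
        print(f"Could not fetch changelog (often removed on hardened sites): {exc}")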

In May, AnonPlus Italia executed 27 more defacements, 19 of which were Drupal sites.

Content management systems like WordPress and Joomla are commonly abused by Anonymous-affiliated hacktivists to target other web servers. In this string of defacements, AnonPlus Italia abused misconfigured or unpatched CMS instances with remote code exploits, allowing the group to upload shells and deface unmaintained websites for headline attention.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

DDoSDDoS AttacksSecurityWAF

What Can We Learn About Cybersecurity from the Challenger Disaster? Everything.

December 5, 2018 — by Radware

AdobeStock_115308434-960x640.jpg

Understanding the potential threats that your organization faces is an essential part of risk management in modern times. It involves forecasting and evaluating all the factors that impact risk. Processes, procedures and investments can all increase, minimize or even eliminate risk.

Another factor is the human element. Oftentimes a culture exists within an organization in which reams of historical data tell one story but management believes something entirely different. This “cognitive dissonance” can lead to an overreliance on near-term data and experiences and a discounting of long-term statistical analysis.

Perhaps no better example of this exists than the space shuttle Challenger disaster in 1986, which now serves as a case study in improperly managed risk. In January of that year, the Challenger disintegrated 73 seconds after launch due to the failure of a gasket (called an O-ring) in one of the rocket boosters. While the physical cause of the disaster was the failure of the O-ring, the Rogers Commission that investigated the accident found that NASA had failed to correctly identify “flaws in management procedures and technical design that, if corrected, might have prevented the Challenger tragedy.”

Despite strong evidence dating back to 1977 that the O-ring was a flawed design that could fail under certain conditions and temperatures, neither NASA management nor the rocket manufacturer, Morton Thiokol, responded adequately to the danger posed by the deficient joint design. Rather than redesigning the joint, they came to define the problem as an “acceptable flight risk.” Over the course of the 24 preceding successful space shuttle flights, a “safety culture” took hold within NASA management that downplayed the technical risks of flying the shuttle, despite mountains of data and warnings about the O-ring from research and development (R&D) engineers.

As American physicist Richard Feynman said regarding the disaster, “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

Nowhere do those words apply more directly than in cybersecurity. C-suite executives need to stop evaluating and implementing cybersecurity strategies and solutions that merely meet minimal compliance and establish a culture of “acceptable risk,” and start managing to real-world risks, risks that are supported by hard data.

Risk Management and Cybersecurity

The threat of a cyberattack on your organization is no longer a question of if, but when, and C-suite executives know it. According to C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts, 96% of executives were concerned about network vulnerabilities and security risks resulting from hybrid computing environments. Managing risk requires organizations to plan for and swiftly respond to risks and potential risks as they arise. Cybersecurity is no exception. For any organization, risks can be classified into four basic categories:

  • Strategic risk
  • Reputation risk
  • Product risk
  • Governance risk

The Challenger disaster underscores all four of these risk categories. Take strategic risk as an example. Engineers from Morton Thiokol expressed concerns and presented data regarding the performance of the O-rings, both in the years prior and days leading up to the launch, and stated the launch should be delayed. NASA, under pressure to launch the already delayed mission and emboldened by the 24 preceding successful shuttle flights that led them to discount the reality of failure, pressured Morton Thiokol to supply a different recommendation. Morton Thiokol management decided to place organizational goals ahead of safety concerns that were supported by hard data. The recommendation for the launch was given, resulting in one of the most catastrophic incidents in manned space exploration. Both Morton Thiokol and NASA made strategic decisions that placed the advancements of their respective organizations over the risks that were presented.

[You may also like: The Million-Dollar Question of Cyber-Risk: Invest Now or Pay Later?]

This example of strategic risk serves as a perfect analogy for organizations implementing cybersecurity strategies and solutions. There are countless examples of high-profile cyberattacks and data breaches in which upper management was warned in advance of network vulnerabilities, yet no actions were taken to prevent an impending disaster. The infamous 2018 Panera Bread data breach is one such example. Facebook is yet another. Its platform operations manager between 2011 and 2012 warned management at the social tech giant to implement audits or enforce other mechanisms to ensure user data extracted from the social network was not misused by third-party developers and/or systems. These warnings were apparently ignored.

So why does this continually occur? The implementation of DDoS and WAF mitigation solutions often involves three key components within an organization: management, the security team/SOC and compliance. Despite reams of hard data from the security team showing that the organization is currently vulnerable or unprepared for the newest generation of attack vectors, management will often overemphasize near-term security results and experiences; they feel secure because the organization has never yet been the victim of a successful cyberattack. The aforementioned Facebook story is a perfect example: the company allowed history to override hard data presented by a platform manager regarding new security risks.

Underscoring this “cognitive dissonance” is the compliance team, which often seeks to evaluate DDoS mitigation solutions based solely on checkbox functionality that fulfills minimal compliance standards. This strategy also drives a cost-savings approach that yields short-term financial savings within an organization that oftentimes views cybersecurity as an afterthought vis-à-vis other strategic programs, such as mobility, IoT and cloud computing.

The end result? Organizations aren’t managing real-world risks, but rather are managing “yesterday’s” risks, thereby leaving themselves vulnerable to new attack vectors, IoT botnet vulnerabilities, cybercriminals and other threats that didn’t exist weeks or even days ago.

The True Cost of a Cyberattack

Understanding just how detrimental this can be to the long-term success of an organization requires grasping the true cost of a cyberattack. Sadly, these data points are often as poorly understood, or as readily dismissed, as the aforementioned statistics regarding vulnerability. The cost of a cyberattack can be mapped to the four risk categories (a simple way to put such figures to work is sketched after the list):

  • Strategic Risk: Forty percent of executives estimate that a cyberattack costs more than one million USD/EUR; five percent put the figure above 25 million USD/EUR.
  • Reputation Risk: Customer attrition rates can increase by as much as 30% following a cyberattack. Moreover, organizations that lose over four percent of their customers following a data breach suffer an average total cost of $5.1 million. In addition, 41% of executives reported that customers have taken legal action against their companies following a data breach. The Yahoo and Equifax data breach lawsuits are two high-profile examples.
  • Product Risk: The IP Commission estimated that counterfeit goods, pirated software and stolen trade secrets cost the U.S. economy $600 billion annually.
  • Governance Risk: “Hidden” costs associated with a data breach include increased insurance premiums, lower credit ratings and devaluation of trade names. Equifax was devalued by $4 billion by Wall Street following the announcement of its data breach.
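
One way to make figures like these actionable, sketched below, is the standard annualized loss expectancy (ALE) calculation from risk management: multiply the expected cost of a single incident by how often you expect it per year, then compare the result against the cost of mitigation. The inputs here are illustrative placeholders, not data from the cited report.

    # Illustrative annualized loss expectancy (ALE) sketch; all inputs
    # are placeholder assumptions, not figures from the cited report.
    single_loss_expectancy = 1_000_000   # cost of one breach (USD)
    annual_rate_of_occurrence = 0.3      # expected incidents per year

    ale = single_loss_expectancy * annual_rate_of_occurrence
    mitigation_cost = 150_000            # yearly cost of a DDoS/WAF program

    print(f"Annualized loss expectancy: ${ale:,.0f}")
    if mitigation_cost < ale:
        print("Mitigation costs less than expected annual loss: invest.")
    else:
        print("Mitigation exceeds expected loss: revisit scope or pricing.")

This is the invest-now-or-pay-later trade-off reduced to arithmetic: when expected annual loss exceeds the cost of defending against it, the data argues for the investment.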

[You may also like: Understanding the Real Cost of a Cyber-Attack and Building a Cyber-Resilient Business]

Secure the Customer Experience, Manage Risk

It’s only by identifying the new risks that an organization faces each and every day and having a plan in place to minimize them that enables its executives to build a foundation upon which their company will succeed. In the case of the space shuttle program, mounds of data that clearly demonstrated an unacceptable flight risk were pushed aside by the need to meet operational goals. What lessons can be learned from that fateful day in January of 1986 and applied to cybersecurity? To start, the disaster highlights the five key steps of managing risks.

In the case of cybersecurity, this means that the executive leadership must weigh the opinions of its network security team, compliance team and upper management and use data to identify vulnerabilities and the requirements to successfully mitigate them. In the digital age, cybersecurity must be viewed as an ongoing strategic initiative and cannot be delegated solely to compliance. Leadership must fully weigh the potential cost of a cyberattack/data breach on the organization versus the resources required to implement the right security strategies and solutions. Lastly, when properly understood, risk can actually be turned into a competitive advantage. In the case of cybersecurity, it can be used as a competitive differentiator with consumers that demand fast network performance, responsive applications and a secure customer experience. This enables companies to target and retain customers by supplying a forward-looking security solution that seamlessly protects users today and into the future.

So how are executives expected to accomplish this while facing new security threats, tight budgets, a shortfall in cybersecurity professionals and the need to safeguard increasingly diversified infrastructures? The key is creating a secure climate for the business and its customers.

To create this climate, research shows that executives must be willing to accept new technologies, be open-minded to new ideologies and embrace change, according to C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts. Executives committed to staying on top of this ever-evolving threat must break down the silos that exist in the organization to assess the dimensions of the risks across the enterprise and address these exposures holistically. Next is balancing the aforementioned investment-versus-risk equation. All executives face tough choices about where to invest resources to propel their companies forward. C-suite executives must leverage the aforementioned data points and carefully evaluate the risks associated with security vulnerabilities against the costs of implementing effective security solutions to avoid becoming the next high-profile data breach.

According to the same report, four in 10 respondents identified increasing infrastructure complexity, digital transformation plans, integration of artificial intelligence and migration to the cloud as events that put pressure on security planning and budget allocation.

The stakes are high. Security threats can seriously impact a company’s brand reputation, resulting in customer loss, reduced operational productivity and lawsuits. C-suite executives must heed the lessons of the space shuttle Challenger disaster: Stop evaluating and implementing cybersecurity strategies and solutions that meet minimal compliance and start managing to real-world risks by trusting data, pushing aside near-term experiences/“gut instincts” and understanding the true cost of a cyberattack. Those executives who are willing to embrace technology and change and prioritize cybersecurity will be the ones to win the trust and loyalty of the 21st-century consumer.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now