

5 Key Considerations in Choosing a DDoS Mitigation Network

May 21, 2019 — by Eyal Arazi


A DDoS mitigation service is more than just the technology or the service guarantees. The quality and resilience of the underlying network is a critical component in your armor, and one which must be carefully evaluated to determine how well it can protect you against sophisticated DDoS attacks.

Below are five key considerations in evaluating a DDoS scrubbing network.

Massive Capacity

When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new heights (and scales) of attacks.

To date, the largest-ever verified DDoS attack was a memcached-based attack against GitHub. This attack reached a peak of approximately 1.3 terabits per second (Tbps) and 126 million packets per second (PPS).

In order to withstand such an attack, scrubbing networks must have not just enough capacity to ‘cover’ the attack, but also ample overflow capacity to accommodate other customers on the network and other attacks that might be going on at the same time. A good rule of thumb is to look for mitigation networks with at least two to three times the capacity of the largest attacks observed to date.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Dedicated Capacity

It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers – particularly those who take an ‘edge’ security approach – rely on their Content Distribution Network (CDN) capacity for DDoS mitigation, as well.

The problem, however, is that the majority of this capacity is already utilized on a routine basis. CDN providers don’t like to pay for unused capacity, and therefore CDN bandwidth utilization rates routinely reach 60-70% and can frequently reach 80% or more. This leaves very little room for the ‘overflow’ traffic that results from a large-scale volumetric DDoS attack.
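
To put those utilization figures in perspective, here is a minimal Python sketch of the overflow headroom left on a shared network; the 40 Tbps total is an assumption for the example, not vendor data.

    # Overflow headroom: the capacity actually left for absorbing attack
    # traffic. The 40 Tbps figure is invented for the illustration.

    def overflow_headroom_tbps(total_capacity_tbps, utilization):
        """Capacity left over for absorbing attack traffic (Tbps)."""
        return total_capacity_tbps * (1.0 - utilization)

    cdn_total = 40.0  # advertised aggregate capacity (Tbps)
    for util in (0.60, 0.70, 0.80):
        headroom = overflow_headroom_tbps(cdn_total, util)
        print(f"at {util:.0%} routine utilization: {headroom:.0f} Tbps left for attacks")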

[You may also like: DDoS Protection Requires Looking Both Ways]

Therefore, it is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load-balancing.

Global Footprint

Organizations deploy DDoS mitigation solutions in order to ensure the availability of their services. An increasingly important aspect of availability is speed of response: the question is not only whether the service is available, but also how quickly it can respond.

Cloud-based DDoS protection services operate by routing customer traffic through the service providers’ scrubbing centers, removing any malicious traffic, and then forwarding clean traffic to the customer’s servers. As a result, this process inevitably adds a certain amount of latency to user communications.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

One of the key factors affecting latency is distance from the host. Therefore, in order to minimize latency, it is important for the scrubbing center to be as close as possible to the customer. This can only be achieved with a globally-distributed network, with a large number of scrubbing centers deployed at strategic communication hubs, where there is large-scale access to high-speed fiber connections.
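
For a rough feel of why distance matters, the sketch below estimates the extra round-trip time added by detouring traffic through a scrubbing center. It assumes signal propagation at roughly 200,000 km/s in fiber (about two-thirds the speed of light) and ignores queuing and processing delay; the distances are examples.

    # Extra round-trip latency from detouring through a scrubbing center.

    FIBER_KM_PER_MS = 200.0  # ~200,000 km/s, expressed per millisecond

    def added_rtt_ms(detour_km):
        """Extra round-trip time (ms) for a detour of detour_km each way."""
        return 2 * detour_km / FIBER_KM_PER_MS

    for detour in (100, 1000, 5000):
        print(f"{detour:>5} km detour -> ~{added_rtt_ms(detour):.0f} ms extra RTT")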

As a result, when examining a DDoS protection network, it is important not just to look at capacity figures, but also at the number of scrubbing centers and their distribution.

Anycast Routing

A key component impacting response time is the quality of the network itself and its back-end routing mechanisms. To ensure maximal speed and resilience, modern security networks use anycast-based routing.

Anycast-based routing establishes a one-to-many relationship between IP addresses and network nodes (i.e., there are multiple network nodes with the same IP address). When a request is sent to the network, the routing mechanism applies principles of least-cost-routing to determine which network node is the optimal destination.

Routing paths can be selected based on the number of hops, distance, latency, or path cost considerations. As a result, traffic from any given point will usually be routed to the nearest and fastest node.
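
As a toy illustration of that selection, the sketch below models several scrubbing nodes advertising the same anycast IP and picks the lowest-cost one. The node names, metrics and cost formula are invented for the example; real BGP path selection is far richer.

    # Toy model of anycast's one-to-many routing: several scrubbing nodes
    # "advertise" the same IP, and the lowest-cost path wins.

    scrubbing_nodes = {
        "frankfurt": {"hops": 4,  "latency_ms": 9},
        "ashburn":   {"hops": 9,  "latency_ms": 85},
        "singapore": {"hops": 12, "latency_ms": 160},
    }

    def path_cost(metrics):
        # Simple composite of hop count and latency, for illustration only.
        return metrics["hops"] + metrics["latency_ms"] / 10

    best = min(scrubbing_nodes, key=lambda n: path_cost(scrubbing_nodes[n]))
    print(f"Traffic from this vantage point is drawn to '{best}'")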

[You may also like: The Costs of Cyberattacks Are Real]

Anycast helps improve the speed and efficiency of traffic routing within the network. DDoS scrubbing networks based on anycast routing enjoy these benefits, which ultimately results in faster response and lower latency for end-users.

Multiple Redundancy

Finally, when selecting a DDoS scrubbing network, it is important to always have a backup. The whole point of a DDoS protection service is to ensure service availability. Therefore, you cannot have it – or any component in it – be a single point-of-failure. This means that every component within the security network must be backed up with multiple redundancy.

This means not just multiple scrubbing centers and overflow capacity, but also redundancy in ISP links, routers, switches, load balancers, mitigation devices, and more.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Only a network with full multiple redundancy for all components can ensure full service availability at all times, and guarantee that your DDoS mitigation service does not become a single point-of-failure of its own.

Ask the Questions

Alongside technology and service guarantees, the underlying network forms a critical part of a cloud security service. The five considerations above outline the key metrics by which you should evaluate the network powering potential DDoS protection services.

Ask your service provider – or any service provider you are evaluating – about their capabilities with regard to each of these metrics, and if you don’t like the answers, consider looking for alternatives.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service

May 2, 2019 — by Dileep Mishra


Internet pipes have gotten fatter in the last decade. We have gone from expensive 1 Mbps links to 1 Gbps links, which are available at relatively low cost. Most enterprises have at least a 1 Gbps ISP link to their data center; many have multiple 1 Gbps links at each data center. In the past, QoS, packet shaping, application prioritization and the like were a big deal, but now we just throw more capacity at any potential performance problem.

However, when it comes to protecting your infrastructure from DDoS attacks, 1 Gbps, 10 Gbps or even 40 Gbps is not enough capacity. This is because in 2019, even relatively small DDoS attacks are a few Gbps in size, and the larger ones exceed 1 Tbps.

For this reason, when security professionals design a DDoS mitigation solution, one of the key considerations is the capacity of the DDoS mitigation service. That said, it isn’t easy to figure out which DDoS mitigation service actually has the capacity to withstand the largest DDoS attacks. This is because there is a range of DDoS mitigation solutions to pick from, and capacity is a parameter most vendors can spin to make their solution appear flush with capacity.

Let us examine some of the solutions available and understand the difference between their announced capacity and their real ability to block a large bandwidth DDoS attack.

On-premises DDoS Mitigation Appliances 

First of all, be wary of any router, switch or network firewall that is also being positioned as a DDoS mitigation appliance. Chances are it does NOT have the ability to withstand a multi-Gbps DDoS attack.

There are a handful of companies that make purpose-built DDoS mitigation appliances. These devices are usually deployed at the edge of your network, as close as possible to the ISP link. Many of these devices can mitigate attacks in the tens of Gbps; however, the advertised mitigation capacity is usually based on one particular attack vector with all attack packets being of a specific size.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

Irrespective of the vendor, don’t buy into 20/40/60 Gbps of mitigation capacity without quizzing the device’s ability to withstand a multi-vector attack, its real-world performance, and its ability to pass clean traffic at a given throughput while also mitigating a large attack. Don’t forget: pps is sometimes more important than bps, and many devices will hit their pps limit first. Also be sure to delve into the internals of the attack mitigation appliance, in particular whether the same CPU is used to mitigate an attack while passing normal traffic. The most effective devices have the attack “plane” segregated from the clean traffic “plane,” thus ensuring attack mitigation without affecting normal traffic.
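
To see why pps limits often bite first, here is a quick back-of-the-envelope calculation: at the same bit rate, small packets mean far more packets per second. The extra 20 bytes per frame account for the Ethernet preamble and inter-frame gap on the wire.

    # Packets per second at a given line rate, per frame size. A 64-byte
    # minimum-size Ethernet frame costs 84 bytes on the wire.

    def packets_per_second(link_gbps, frame_bytes):
        wire_bytes = frame_bytes + 20  # preamble + inter-frame gap
        return link_gbps * 1e9 / (wire_bytes * 8)

    for size in (64, 512, 1500):
        mpps = packets_per_second(10, size) / 1e6
        print(f"10 Gbps of {size:>4}-byte frames = {mpps:5.2f} Mpps")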

Finally, keep in mind that if your ISP link capacity is 1 Gbps and you have a DDoS mitigation appliance capable of 10 Gbps of mitigation, you are NOT protected against a 10 Gbps attack. This is because the attack will fill your pipe even before the on-premises device gets a chance to “scrub” the attack traffic.

Cloud-based Scrubbing Centers

The second type of DDoS mitigation solution that is widely deployed is a cloud-based scrubbing solution. Here, you don’t install a DDoS mitigation device at your data center. Rather, you use a DDoS mitigation service deployed in the cloud. With this type of solution, you send telemetry to the cloud service from your data center on a continuous basis, and when there is a spike that corresponds to a DDoS attack, you “divert” your traffic to the cloud service.

[You may also like: DDoS Protection Requires Looking Both Ways]

There are a few vendors who provide this type of solution, but again, when it comes to the capacity of the cloud DDoS service, the devil is in the details. Some vendors simply add up the “net” capacity of all the ISP links they have at all their data centers. This is misleading because they may be adding the normal daily clean traffic to the advertised capacity — so ask about the available attack mitigation capacity, excluding the normal clean traffic.

Also, chances are the provider has different capacities in different scrubbing centers, and the net capacity across all the scrubbing centers may not be a good reflection of the attack mitigation capacity available in the geography of interest to you (where your data center is located).

Another item to inquire about is anycast capability, because this gives the provider the ability to mitigate an attack close to its source. In other words, if a 100 Gbps attack is coming from China, it will be mitigated at the scrubbing center in APAC.

[You may also like: 8 Questions to Ask in DDoS Protection]

Finally, it is important that the DDoS mitigation provider has a completely separate data path for clean traffic and does not mix clean customer traffic with attack traffic.

Content Distribution Networks

A third type of DDoS mitigation architecture is based upon leveraging a content distribution network (CDN) to diffuse large DDoS attacks. When it comes to the DDoS mitigation capacity of a CDN, however, the situation is again blurry.

Most CDNs have tens, hundreds or even thousands of PoPs geographically distributed across the globe. Many simply count the net aggregate capacity across all of these PoPs and advertise that as the total attack mitigation capacity. This has two major flaws. First, it is quite likely that a real-world DDoS attack is sourced from a limited number of geographical locations, in which case the capacity that really matters is the local CDN PoP capacity, not the global capacity across all the PoPs.

[You may also like: 5 Must-Have DDoS Protection Technologies]

Second, most CDNs pass a significant amount of normal customer traffic across all of their CDN nodes, so if a CDN service claims its attack mitigation capacity is 40 Tbps, it may be counting 30 Tbps of normal traffic. The question to ask is what the total unused capacity is, both at a net aggregate level and within your geographical region.
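
The aggregate-versus-regional distinction is easy to quantify. The sketch below uses invented regional splits to show how a comfortable-looking aggregate headroom shrinks once you only count the region in front of your data center.

    # Unused capacity, aggregate vs. regional. All numbers are invented.

    regions = {
        "NA":   {"capacity_tbps": 15, "clean_tbps": 11},
        "EU":   {"capacity_tbps": 15, "clean_tbps": 12},
        "APAC": {"capacity_tbps": 10, "clean_tbps": 7},
    }

    total_unused = sum(r["capacity_tbps"] - r["clean_tbps"] for r in regions.values())
    print(f"Aggregate unused capacity: {total_unused} Tbps")  # looks ample

    for name, r in regions.items():
        unused = r["capacity_tbps"] - r["clean_tbps"]
        print(f"{name}: {unused} Tbps local headroom")  # much less comfortable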

ISP Provider-based DDoS Mitigation

Many ISPs offer DDoS mitigation as an add-on to the ISP pipe. It sounds like a natural choice: they see all traffic destined for your data center even before it reaches your infrastructure, so it is best to block the attack within the ISP’s infrastructure – right?

Unfortunately, most ISPs have semi-adequate DDoS mitigation deployed within their own infrastructure and are likely to pass the attack traffic along to your data center. In fact, in some scenarios, some ISPs could actually black-hole your traffic when you are under attack to protect their other customers who might be using a shared portion of their infrastructure. The questions to ask your ISP are what happens if they see a 500 Gbps attack coming toward your infrastructure, and whether there is any cap on the maximum attack traffic they will handle.

[You may also like: ISP DDoS Protection May Not Cover All of Bases]

All of the DDoS mitigation solutions discussed above are effective and widely deployed. We don’t endorse or recommend one over another. However, one should take any advertised attack mitigation capacity from any provider with a grain of salt. Quiz your provider on local capacity, differentiation between clean and attack traffic, any caps on attack traffic, and any SLAs. Also, carefully examine vendor proposals for exclusions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Is It Legal to Evaluate a DDoS Mitigation Service?

March 27, 2019 — by Dileep Mishra


A couple of months ago, I was on a call with a company that was in the process of evaluating DDoS mitigation services to protect its data centers. This company runs mission-critical applications and was looking for comprehensive coverage from various types of attacks, including volumetric, low-and-slow, encrypted floods, and application-layer attacks.

During the discussion, our team asked a series of technical questions related to their ISP links, types of applications, physical connectivity, and more. And we provided an attack demo using our sandbox lab in Mahwah.

Everything was moving along just fine until the customer asked us for a Proof of Concept (PoC), what most would consider a natural next step in the vendor evaluation process.

About That Proof of Concept…

How would you do a DDoS PoC? You rack and stack the DDoS mitigation appliance (or enable the service if it is cloud-based), set up some type of management IP address, configure the protection policies, and off you go!

Well, when we spoke to this company, they said they would be happy to do all of that–at their disaster recovery data center, located within a large carrier facility on the East Coast. This sent my antenna up, and I immediately asked a couple of questions that would turn out to be extremely important for all of us: Do you have attack tools to launch DDoS attacks? Do you take responsibility for running the attacks? The customer answered “yes” to both.

[You may also like: DDoS Protection Requires Looking Both Ways]

Being a trained SE, I then asked why they needed to run the PoC in their lab and whether there was a way we could demonstrate that our DDoS mitigation appliance could mitigate a wide range of attacks using our PoC script. As it turned out, the prospect was evaluating other vendors and, to compare apples to apples (thereby giving all vendors a fair chance), was already conducting PoCs in its data center with each vendor’s appliance.

We shipped the PoC unit quickly, and the prospect, true to their word, got the unit racked, stacked and cabled up, ready to go. We configured the device and gave them the green light to launch attacks. And then the prospect told us to launch the attacks; they didn’t have any attack tools after all.

A Bad Idea

Well, most of us in this industry do have DDoS testing tools, so what’s the big deal? As vendors who provide cybersecurity solutions, we shouldn’t have any problems launching attacks over the Internet to test out a DDoS mitigation service…right?

[You may also like: 8 Questions to Ask in DDoS Protection]

WRONG! Here’s why that’s a bad idea:

  • Launching attacks over the Internet is ILLEGAL. You need written permission from the entity being attacked to launch a DDoS attack. You can try your luck if you want, but this is akin to running a red light. You may get away with it, but if you are caught the repercussions are damaging and expensive.
  • Your ISP might block your IP address. Many ISPs have DDoS defenses within their infrastructure and if they see someone launching a malicious attack, they might block your access. Good luck sorting that one out with your ISP!
  • Your attacks may not reach the desired testing destination. Even if your ISP doesn’t block you and the FBI doesn’t come knocking, there might be one or more DDoS mitigation devices between you and the customer data center where the destination IP being tested resides. These devices could very well mitigate the attack you launch, preventing you from doing the testing.

Those are three big reasons why doing DDoS testing in a production data center is, simply put, a bad idea. Especially if you don’t have a legal, easy way to generate attacks.

[You may also like: 5 Must-Have DDoS Protection Technologies]

A Better Way

So what are the alternatives? How should you do DDoS testing?

  • With DDoS testing, the focus should be on evaluating the mitigation features: can the service detect attacks quickly, can it mitigate immediately, can it adapt to attacks that are morphing, can it report accurately on the attack it is seeing and what is being mitigated, and how accurate is the mitigation (what about false positives)? If you run a DDoS PoC in a production environment, you will spend most of your resources and time testing connectivity and spinning your wheels on operational aspects (e.g., LAN cabling, console cabling, change control procedures, paperwork, etc.). This is not what you want to test; you want to test DDoS mitigation! It’s like trying to test how fast a sports car can go on a very busy street. You will end up testing the brakes, but you won’t get very far with any speed testing.
  • Test things out in your lab. Even better, let the vendor test it in their lab for you. This will let both parties focus on the security features rather than get caught up with the headaches of logistics involved with shipping, change control, physical cabling, connectivity, routing etc.
  • It is perfectly legal to use test tools like Kali Linux, BackTrack, etc. within a lab environment. Launch attacks to your heart’s content, morph the attacks, and see how the DDoS service responds.
  • If you don’t have the time or expertise to launch attacks yourself, hire a DDoS testing service. Companies like activereach, Redwolf security or MazeBolt security do this for a living, and they can help you test the DDoS mitigation service with a wide array of customized attacks. This will cost you some money, but if you are serious about the deployment, you will be doing yourself a favor and saving future work.
  • Finally, evaluate multiple vendors in parallel. You can never do this in a production data center. However, in a lab you can keep the attacks and the victim applications constant, while just swapping in the DDoS mitigation service. This will give you an apples-to-apples comparison of the actual capabilities of each vendor and will also shorten your evaluation cycle.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Understanding the Darknet and Its Impact on Cybersecurity

February 19, 2019 — by Radware


The darknet is a very real concern for today’s businesses. In recent years, it has redefined the art of hacking and, in the process, dramatically expanded the threat landscape that organizations now face. So, what exactly is the darknet and why should you care?

WHAT IS THE DARKNET?

Not to be confused with the deep web, the dark web/darknet is a collection of thousands of websites that can’t be accessed via normal means and aren’t indexed by search engines like Google or Yahoo.

Simply put, the darknet is an overlay of networks that requires specific tools and software in order to gain access. The history of the darknet predates the 1980s, and the term was originally used to describe computers on ARPANET that were hidden and programmed to receive messages but which did not respond to or acknowledge anything, thus remaining invisible, or in the dark. Since then, “darknet” has evolved into an umbrella term that describes the portions of the internet purposefully not open to public view or hidden networks whose architecture is superimposed on that of the internet.

[You may also like: Darknet: Attacker’s Operations Room]

Ironically, the darknet’s evolution can be traced somewhat to the U.S. military. The most common way to access the darknet is through tools such as the Tor network. The network routing capabilities that the Tor network uses were developed in the mid-1990s by mathematicians and computer scientists at the U.S. Naval Research Laboratory with the purpose of protecting U.S. intelligence communications online.

USE AND ACCESS

Uses of the darknet are nearly as wide and as diverse as the internet: everything from email and social media to hosting and sharing files, news websites and e-commerce. Accessing it requires specific software, configurations or authorization, often using nonstandard communication protocols and ports. Currently, two of the most popular ways to access the darknet are via two overlay networks. The first is the aforementioned Tor; the second is called I2P.

Tor, which stands for “The Onion Router,” is designed primarily to keep users anonymous. Just like the layers of an onion, data is stored within multiple layers of encryption. Each layer reveals the next relay until the final layer sends the data to its destination. Information is sent bidirectionally, so data is sent back and forth via the same tunnel. On any given day, over one million users are active on the Tor network.
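
As a minimal illustration of that layering, the sketch below wraps a payload in three layers of encryption and peels one off per relay. It uses Fernet from the third-party ‘cryptography’ package purely for demonstration; Tor’s actual circuit cryptography works differently.

    # Onion-style layered encryption: each relay holds one key and peels
    # exactly one layer. Demonstration only; not Tor's real protocol.

    from cryptography.fernet import Fernet

    relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit

    payload = b"request for example.onion"
    for key in reversed(relay_keys):      # wrap so the entry layer is outermost
        payload = Fernet(key).encrypt(payload)

    for i, key in enumerate(relay_keys):  # each relay peels a single layer
        payload = Fernet(key).decrypt(payload)
        print(f"relay {i} peeled its layer")

    print(payload)                        # b'request for example.onion'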

I2P, which stands for the Invisible Internet Project, is designed for user-to-user file sharing. It takes data and encapsulates it within multiple layers. Just like cloves in a bulb of garlic, information is bunched together with other people’s information to prevent unpacking and inspection, and the data is sent via unidirectional tunnels.

WHAT’S OUT THERE?

As mentioned previously, the darknet provides news, e-commerce sites, and email and hosting services. While many of the services are innocent and are simply alternatives to what can be found on the internet, a portion of the darknet is highly nefarious and tied to illicit activities due to its surreptitious nature. As a result, since the 1990s, cybercriminals have found a “digital home” on the darknet as a way to communicate, coordinate and, most recently, monetize the art of cyberattacks to a wide range of non-technical novices.

[You may also like: Darknet: A One-Stop Shop for Would-Be Criminals]

Among the most popular services are email services, which have seen a dramatic increase in recent years that parallels the rising popularity of ransomware. Cyberattackers will often use these email services to execute their campaigns while remaining hidden from authorities.

Hosting services are yet another. Similar to the cloud computing environments that enterprises might use as part of their IT infrastructure, darknet hosting services are leveraged by cybercriminals and hackers to host websites or e-commerce marketplaces that sell distributed denial-of-service (DDoS) tools and services. These hosting services are typically very unstable as they can be “taken down” by law enforcement or vigilante hackers for political, ideological or moral reasons.

Forums also exist to allow hackers and criminals to hold independent discussions for the purpose of knowledge exchange, including organizing and coordinating DDoS campaigns (such as those planned by Anonymous) and/or exchanging cyberattack best practices. These forums come in a variety of technical options and languages and can be associated with particular threat actors/groups, hacktivists, attack vectors, etc.

Lastly, just like the regular internet, darknet search engines, such as Candle and Torch, exist to allow users to easily locate and navigate the various forums, sites and e-commerce stores.

A DIGITAL STORE

Perhaps more than any other service usage, e-commerce sites on the darknet have exploded in popularity in recent years due to the rise of DDoS as a service and stresser services, resulting in huge profit margins for entrepreneurial hackers. Everything from DDoS attack tools and botnet rentals to “contracting” the services of a hacker are now available on the darknet.

[You may also like: The Cost of a DDoS Attack on the Darknet]

The result? These e-commerce sites and their products have commoditized cyberattacks, in addition to making them available to a wide range of non-technical users. Oftentimes, these services come with intuitive, GUI-based interfaces that make setting up and launching attacks quick and simple.

Examples abound, but one notable DDoS-as-a-service offering is PutinStresser, which illustrates the ease of access these services have reached and provides potential buyers with various payment options, discovery tools, a variety of attack vectors and even chat-based customer support. Botnet rental services are also available — their growth paralleling the growth and use of botnets since 2016. A perfect example of a botnet service available on the darknet is the JenX botnet, which was discovered in 2018.

Prices for these tools are as diverse as the attack vectors that buyers can purchase and range from as low as $100 to several thousand dollars. Prices are typically based on various factors, such as the number of attack vectors included within the service, the size of the attack (Gbps/Tbps) and the demand.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Malware and ransomware are equally popular. The notorious WannaCry global ransomware campaign had its command-and-control (C2) servers hosted on the darknet. In addition, just like their botnet and DDoS brethren, malware and ransomware have their own “pay for play” services, which dramatically simplify the process of launching a ransomware campaign. Numerous ransomware services exist that allow a user to simply specify the ransom amount and add notes/letters; the user is then provided a simple executable to send to victims.

Lastly, an array of services is available allowing nearly anyone with access to the darknet (and the ability to convert money to bitcoin for payment) to contract hackers for their work. Services include hacking emails, hacking social media accounts and designing malicious software.

Many of these services revolve around the education vertical. As educational institutions move their teaching tools and testing online, a new generation of students has emerged that is willing to purchase the services of hackers to change grades and launch DDoS attacks on schools’ networks to postpone tests.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



What Do Banks and Cybersecurity Have in Common? Everything.

February 7, 2019 — by Radware


New cyber-security threats require new solutions. New solutions require a project to implement them. The problems and solutions seem infinite while budgets remain bounded. Therefore, the challenge becomes how to identify the priority threats, select the solutions that deliver the best ROI and stretch dollars to maximize your organization’s protection. Consultants and industry analysts can help, but they too can be costly options that don’t always provide the correct advice.

So how best to simplify the decision-making process? Use an analogy. Consider that every cybersecurity solution has a counterpart in the physical world. To illustrate this point, consider the security measures at banks. They make a perfect analogy, because banks are just like applications or computing environments; both contain valuables that criminals are eager to steal.

The first line of defense at a bank is the front door, which is designed to allow people to enter and leave while providing a first layer of defense against thieves. Network firewalls fulfill the same role within the realm of cyber security. They allow specific types of traffic to enter an organization’s network but block mischievous visitors from entering. While firewalls are an effective first line of defense, they’re not impervious. Just like surreptitious robbers such as Billy the Kid or John Dillinger, SSL/TLS-based encrypted attacks or nefarious malware can sneak through this digital “front door” via a standard port.

Past the entrance there is often a security guard; in cybersecurity terms, an IPS or anti-malware device. This “security guard,” typically an anti-malware and/or heuristic-based IPS function, seeks to identify unusual behavior or other indicators that trouble has entered the bank, such as somebody wearing a ski mask or perhaps carrying a concealed weapon.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Once the hacker gets past these perimeter security measures, they find themselves at the presentation layer of the application, or in the case of a bank, the teller. There is security here as well: first, authentication (do you have an account?) and second, two-factor authentication (an ATM card and security PIN). IPS and anti-malware devices work in concert with SIEM management solutions to serve as security cameras, performing additional security checks. Just like a bank leveraging the FBI’s Most Wanted List, these solutions leverage crowdsourcing and big-data analytics to analyze data from a massive global community and identify bank-robbing malware in advance.

A robber will often demand access to the bank’s vault. In the realm of IT, this is the database, where valuable information such as passwords, credit card or financial transaction information or healthcare data is stored. There are several ways of protecting this data, or at the very least monitoring it. Encryption and database application monitoring solutions are the most common.

Adapting for the Future: DDoS Mitigation

To understand how and why cyber-security models will have to adapt to meet future threats, let’s outline three obstacles they’ll have to overcome in the near future: advanced DDoS mitigation, encrypted cyber-attacks, and DevOps and agile software development.

[You may also like: Agile, DevOps and Load Balancers: Evolution of Network Operations]

A DDoS attack is any cyber-attack that compromises the availability of a company’s website or network and impairs the organization’s ability to conduct business. Take an e-commerce business, for example. If somebody wanted to prevent the organization from conducting business, it’s not necessary to hack the website, but simply to make it difficult for visitors to access it.

Returning to the bank analogy, this is why banks and financial institutions employ multiple layers of security: it provides an integrated, redundant defense designed to meet a multitude of potential situations in the unlikely event a bank is robbed. This also includes the ability to quickly and effectively communicate with law enforcement. In the world of cyber security, multi-layered defense is also essential. Why? Because preparing for “common” DDoS attacks is no longer enough. With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever. This is why hybrid protection, which combines both on-premise and cloud-based mitigation services, is critical.

[You may also like: 8 Questions to Ask in DDoS Protection]

Why two systems when it comes to cyber security? Because it offers the best of both worlds. When a DDoS solution is deployed on-premise, organizations benefit from immediate and automatic attack detection and mitigation. Within a few seconds of the initiation of a cyber-assault, the online services are well protected and the attack is mitigated. However, an on-premise DDoS solution cannot handle volumetric network floods that saturate the Internet pipe. These attacks must be mitigated from the cloud.

Hybrid DDoS protections aspire to offer best-of-breed attack mitigation by combining on-premise and cloud mitigation into a single, integrated solution. The hybrid solution chooses the right mitigation location and technique based on attack characteristics. In the hybrid solution, attack detection and mitigation starts immediately and automatically using the on-premise attack mitigation device. This stops various attacks from diminishing the availability of the online services. All attacks are mitigated on-premise, unless they threaten to block the Internet pipe of the organization. In case of pipe saturation, the hybrid solution activates cloud mitigation and the traffic is diverted to the cloud, where it is scrubbed before being sent back to the enterprise.
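
In pseudocode, the diversion decision described above reduces to a threshold check against the Internet pipe. The sketch below is a simplified Python rendering with illustrative numbers; real products weigh many more attack characteristics.

    # Simplified hybrid mitigation decision: on-premise by default, divert
    # to the cloud before the attack saturates the Internet pipe.

    INTERNET_PIPE_GBPS = 10.0
    DIVERSION_THRESHOLD = 0.8 * INTERNET_PIPE_GBPS  # divert before saturation

    def choose_mitigation(attack_gbps):
        if attack_gbps < DIVERSION_THRESHOLD:
            return "on-premise"  # immediate, automatic mitigation
        # Saturation imminent: swing traffic to the cloud scrubbing center
        # (in practice via a routing change), clean it there, and return
        # clean traffic to the enterprise.
        return "cloud"

    for attack in (2.0, 6.0, 12.0):
        print(f"{attack:>5.1f} Gbps attack -> mitigate {choose_mitigation(attack)}")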

[You may also like: Choosing the Right DDoS Solution – Part IV: Hybrid Protection]

An ideal hybrid solution also shares essential information about the attack between on-premise mitigation devices and cloud devices to accelerate and enhance the mitigation of the attack once it reaches the cloud.

Inspecting Encrypted Data

Companies have been encrypting data for well over 20 years. Today, over 50% of Internet traffic is encrypted. SSL/TLS encryption is still the most effective way to protect data, as it ties the encryption to both the source and destination. This is a double-edged sword, however. Hackers are now leveraging encryption to create new, stealthy attack vectors for malware infection and data exfiltration. In essence, they’re a wolf in sheep’s clothing. To stop hackers from leveraging SSL/TLS-based cyber-attacks, organizations require computing resources to inspect communications and ensure they’re not carrying malicious payloads. These increasing resource requirements make it challenging for anything but purpose-built hardware to conduct inspection.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

The equivalent in the banking world is twofold. If somebody were to enter wearing a ski mask, that person probably wouldn’t be allowed to conduct a transaction, or secondly, there can be additional security checks when somebody enters a bank and requests a large or unique withdrawal.

Dealing with DevOps and Agile Software Development

Lastly, how do we ensure that, as applications become more complex, they don’t become increasingly vulnerable either from coding errors or from newly deployed functionality associated with DevOps or agile development practices? The problem is most cyber-security solutions focus on stopping existing threats. To use our bank analogy again, existing security solutions mean that (ideally), a career criminal can’t enter a bank, someone carrying a concealed weapon is stopped or somebody acting suspiciously is blocked from making a transaction. However, nothing stops somebody with no criminal background or conducting no suspicious activity from entering the bank. The bank’s security systems must be updated to look for other “indicators” that this person could represent a threat.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

In the world of cyber-security, the key is implementing a web application firewall that adapts to evolving threats and applications. A WAF accomplishes this by automatically detecting and protecting new web applications as they are added to the network via automatic policy generation. It should also differentiate between false positives and false negatives. Why? Because just like a bank, web applications are being accessed both by desired legitimate users and undesired attackers (malignant users whose goal is to harm the application and/or steal data). One of the biggest challenges in protecting web applications is the ability to accurately differentiate between the two and identify and block security threats while not disturbing legitimate traffic.

Adaptability is the Name of the Game

The world we live in can be a dangerous place, both physically and digitally. Threats are constantly changing, forcing both financial institutions and organizations to adapt their security solutions and processes. When contemplating the next steps, consider the following:

  • Use common sense and logic. The marketplace is saturated with offerings. Understand how a cybersecurity solution will fit into your existing infrastructure and the business value it will bring by keeping your organization up and running and your customers’ data secure.
  • Understand the long-term TCO of any cyber security solution you purchase.
  • The world is changing. Ensure that any cyber security solution you implement is designed to adapt to the constantly evolving threat landscape and your organization’s operational needs.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Top 3 Cyberattacks Targeting Proxy Servers

January 16, 2019 — by Daniel Smith


Many organizations are now realizing that DDoS defense is critical to maintaining an exceptional customer experience. Why? Because nothing diminishes load times or impacts the end user’s experience more than a cyberattack.

As a facilitator of access to content and networks, proxy servers have become a focal point for those seeking to cause grief to organizations via cyberattacks due to the fallout a successful assault can have.

Attacking the CDN Proxy

New vulnerabilities in content delivery networks (CDNs) have left many wondering if the networks themselves are vulnerable to a wide variety of cyberattacks. Here are five cyber “blind spots” that are often attacked – and how to mitigate the risks:

Increase in dynamic content attacks. Attackers have discovered that treatment of dynamic content requests is a major blind spot in CDNs. Since the dynamic content is not stored on CDN servers, all requests for dynamic content are sent to the origin’s servers. Attackers are taking advantage of this behavior to generate attack traffic that contains random parameters in HTTP GET requests. CDN servers immediately redirect this attack traffic to the origin—expecting the origin’s server to handle the requests. However, in many cases the origin’s servers do not have the capacity to handle all those attack requests and fail to provide online services to legitimate users. That creates a denial-of-service situation. Many CDNs can limit the number of dynamic requests to the server under attack. This means they cannot distinguish attackers from legitimate users and the rate limit will result in legitimate users being blocked.
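
One defensive signal for this pattern: legitimate clients reuse a small set of query strings, while cache-busting attack traffic mints a new one per request. The sketch below counts distinct query strings per client; the log entries and threshold are invented for the example.

    # Spotting random-parameter cache busting in access logs (toy data).

    from collections import defaultdict
    from urllib.parse import urlsplit

    requests_log = [
        ("10.1.1.1",    "/search?q=shoes"),
        ("10.1.1.1",    "/search?q=shoes"),
        ("203.0.113.7", "/search?q=a9f3k2"),
        ("203.0.113.7", "/search?q=zq81mm"),
        ("203.0.113.7", "/search?q=p0o2ll"),
    ]

    unique_queries = defaultdict(set)
    for ip, url in requests_log:
        unique_queries[ip].add(urlsplit(url).query)

    for ip, queries in unique_queries.items():
        if len(queries) >= 3:  # illustrative threshold
            print(f"{ip}: {len(queries)} distinct query strings -- possible cache busting")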

SSL-based DDoS attacks. SSL-based DDoS attacks leverage this cryptographic protocol to target the victim’s online services. These attacks are easy to launch and difficult to mitigate, making them a hacker favorite. To detect and mitigate SSL-based attacks, CDN servers must first decrypt the traffic using the customer’s SSL keys. If the customer is not willing to provide the SSL keys to its CDN provider, then the SSL attack traffic is redirected to the customer’s origin. That leaves the customer vulnerable to SSL attacks. Such attacks that hit the customer’s origin can easily take down the secured online service.

[You may also like: SSL Attacks – When Hackers Use Security Against You]

During DDoS attacks, when web application firewall (WAF) technologies are involved, CDNs also have a significant scalability weakness in terms of how many SSL connections per second they can handle. Serious latency issues can arise. PCI and other security compliance issues are also a problem because they limit the data centers that can be used to service the customer. This can increase latency and cause audit issues.

Keep in mind these problems are exacerbated with the massive migration from RSA algorithms to ECC and DH-based algorithms.

Attacks on non-CDN services. CDN services are often offered only for HTTP/S and DNS applications. Other online services and applications in the customer’s data center, such as VoIP, mail, FTP and proprietary protocols, are not served by the CDN, so traffic to those applications is not routed through it. Attackers are taking advantage of this blind spot and launching attacks on such applications, hitting the customer’s origin with large-scale attacks that threaten to saturate the customer’s Internet pipe. Once the Internet pipe is saturated, all the applications at the origin become unavailable to legitimate users, including the ones served by the CDN.

[You may also like: CDN Security is NOT Enough for Today]

Direct IP attacks. Even applications that are served by a CDN can be attacked once attackers launch a direct hit on the IP address of the web servers at the customer’s data center. These can be network-based flood attacks such as UDP floods or ICMP floods that will not be routed through CDN services and will directly hit the customer’s servers. Such volumetric network attacks can saturate the Internet pipe. That results in degradation to application and online services, including those served by the CDN.

Web application attacks. CDN protection from threats is limited, exposing the customer’s web applications to data leakage, theft and other threats common to web applications. Most CDN-based WAF capabilities are minimal, covering only a basic set of predefined signatures and rules. Many CDN-based WAFs do not learn HTTP parameters and do not create positive security rules; as a result, they cannot protect from zero-day attacks and known threats. For companies that do provide tuning of web application policies in their WAF, the cost of that level of protection is extremely high. In addition to these significant blind spots, most CDN security services are simply not responsive enough, resulting in security configurations that take hours to deploy manually. Security services also rely on technologies (e.g., rate limiting) that have proven inefficient in recent years and lack capabilities such as network behavioral analysis, challenge-response mechanisms and more.

[You may also like: Are Your Applications Secure?]

Finding the Watering Holes

Watering hole attack vectors are all about finding the weakest link in a technology chain. These attacks target automated processes that are often forgotten, overlooked or not intellectually attended to. They can lead to unbelievable devastation. What follows is a list of sample watering hole targets:

  • App stores
  • Security update services
  • Domain name services
  • Public code repositories to build websites
  • Web analytics platforms
  • Identity and access single sign-on platforms
  • Open source code commonly used by vendors
  • Third-party vendors that participate in the website

The DDoS attack on Dyn in 2016 has been the best example of the watering hole technique to date. However, we believe this vector will gain momentum heading into 2018 and 2019 as automation begins to pervade every aspect of our lives.

Attacking from the Side

In many ways, side channels are the most obscure and obfuscated attack vectors. This technique attacks the integrity of a company’s site through a variety of tactics:

  • DDoS the company’s analytics provider
  • Brute-force attack against all users or against all of the site’s third-party companies
  • Port the admin’s phone and steal login information
  • Massive load on “page dotting”
  • Large botnets to “learn” ins and outs of a site

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.



2018 In Review: Memcache and Drupalgeddon

December 20, 2018 — by Daniel Smith


Attackers don’t just utilize old, unpatched vulnerabilities; they also exploit recent disclosures at impressive rates. This year we witnessed two worldwide events that highlight the evolution and speed with which attackers weaponize a vulnerability: Memcached and Drupalgeddon.

Memcached DDoS Attacks

In late February, Radware’s Threat Detection Network signaled an increase in activity on UDP port 11211. At the same time, several organizations began alerting to the same trend of attackers abusing Memcached servers for amplified attacks. A Memcached amplified DDoS attack makes use of legitimate third-party Memcached servers: the attacker sends requests with the victim’s spoofed source IP, and the servers reflect far larger responses to the victim. Memcached, like other UDP-based services (SSDP, DNS and NTP), lacks native authentication and can therefore be hijacked to launch amplified attacks against victims. The Memcached protocol was never intended to be exposed to the Internet and thus did not have sufficient security controls in place. Because of this exposure, attackers are able to abuse Memcached UDP port 11211 for reflective, volumetric DDoS attacks.
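
The potency of this vector comes down to its bandwidth amplification factor: the size of the reflected response divided by the size of the spoofed request. The figures below are the commonly cited worst case for Memcached, not a measurement.

    # Memcached bandwidth amplification, back of the envelope.

    request_bytes = 15        # tiny spoofed request to UDP port 11211
    response_bytes = 750_000  # large cached value reflected to the victim

    amplification = response_bytes / request_bytes
    print(f"Amplification factor: ~{amplification:,.0f}x")

    # In theory (real servers and links cap this far lower), a modest
    # 200 Mbps of spoofed requests could reflect:
    attacker_mbps = 200
    print(f"~{attacker_mbps * amplification / 1e6:.0f} Tbps at the victim")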

On February 27, Memcached version 1.5.6 was released, which noted that UDP port 11211 was exposed and fixed the issue by disabling the UDP protocol by default. The following day, before the update could be widely applied, attackers leveraged this new attack vector to launch the world’s largest DDoS attack to date, a title previously held by the Mirai botnet.

There were two main concerns with regard to the Memcached vulnerability. The first centered on the number of exposed Memcached servers. With just under 100,000 servers exposed and only a few thousand required to launch a 1 Tbps attack, the cause for concern is great. Most organizations at this point are likely unaware that they have vulnerable Memcached servers exposed to the Internet, and it takes time to block or filter this service. Memcached servers will be vulnerable for some time, allowing attackers to generate volumetric DDoS attacks with few resources.

[You may also like: Entering into the 1Tbps Era]

The second concern is how quickly attackers began exploiting this vulnerability. The spike in activity was known for several days prior to the patch and publication of the Memcached vulnerability. Within 24 hours of publication, an attacker was able to build an amplification list of vulnerable Memcached servers and launch the massive attack.

Adding to this threat, Defcon.pro, a notorious stresser service, quickly incorporated Memcached into its premium offerings after the disclosure. Stresser services are normally quick to adopt the newest attack vector, for several reasons. The first is publicity: attackers looking to purchase DDoS-as-a-service will seek out platforms offering the latest vectors, so listing them signals that a service is current. In addition, an operator might include a Memcached attack option to provide users with more power; a stresser service offering it will likely attract more customers looking for volume, which once again plays into marketing and availability.

[You may also like: The Rise of Booter and Stresser Services]

DDoS-as-a-service operators are running a business and are currently evolving at rapid rates to keep up with demand. Oftentimes these operators exploit the public attention created by news coverage, much as extortionists do. Similarly, ransom denial-of-service (RDoS) operators are quick to threaten the use of new tools due to the risks they pose. DDoS-as-a-service operators will do the same, but once the threat is mitigated by security experts, cyber criminals will look for newer vectors to incorporate into their latest toolkits or offerings.

This leads into the next example, the Drupalgeddon campaign, and how quickly hacktivists incorporated the attack vector into their toolkit for the purpose of spreading messages via defacements.

Drupalgeddon

In early 2018, Radware’s Emergency Response Team (ERT) was following AnonPlus Italia, an Anonymous-affiliated group that was engaged in digital protests throughout April and May. The group–involved in political hacktivism as they targeted the Italian government–executed numerous web defacements to protest war, religion, politics and financial power while spreading a message about their social network by abusing the content management systems (CMS).

On April 20, 2018 AnonPlus Italia began a new campaign and defaced two websites to advertise their website and IRC channel. Over the next six days, AnonPlus Italia would claim responsibility for defacing 21 websites, 20 of which used the popular open-source CMS Drupal.

[You may also like: Hacking Democracy: Vulnerable Voting Infrastructure and the Future of Election Security]

Prior to these attacks, on March 29, 2018, the Drupal security team released a patch for a critical remote code execution (RCE) vulnerability in Drupal that allowed attackers to execute arbitrary code on unpatched servers, the result of an issue affecting multiple subsystems with default or common module configurations. Exploits for CVE-2018-7600 were posted to GitHub and Exploit-DB under the guise of educational purposes only. The first PoC was posted to Exploit-DB on April 13, 2018. On April 14, Legion B0mb3r, a member of the Bangladesh-based hacking group Err0r Squad, posted a video to YouTube demonstrating how to use CVE-2018-7600 to deface an unpatched version of Drupal. A few days later, on April 17, a Metasploit module was also released to the public.

In May, AnonPlus Italia executed 27 more defacements, of which 19 were Drupal.

Content management systems like WordPress and Joomla are normally abused by Anonymous hacktivists to target other web servers. In this recent string of defacements, the group AnonPlus Italia is abusing misconfigured or unpatched CMS instances with remote code exploits, allowing them to upload shells and deface unmaintained websites for headline attention.

Read “Radware’s 2018 Web Application Security Report” to learn more.



Top 6 Threat Discoveries of 2018

December 18, 2018 — by Radware


Over the course of 2018, Radware’s Emergency Response Team (ERT) identified several cyberattacks and security threats across the globe. Below is a round-up of our top discoveries from the past year. For more detailed information on each attack, please visit DDoS Warriors.

DemonBot

Radware’s Threat Research Center has been monitoring and tracking a malicious agent that is leveraging an unauthenticated remote command execution vulnerability in Hadoop YARN (Yet-Another-Resource-Negotiator) to infect Hadoop clusters with an unsophisticated new bot that identifies itself as DemonBot.

After a spike in requests for /ws/v1/cluster/apps/new-application appeared in our Threat Deception Network, DemonBot was identified. We have since been tracking over 70 active exploit servers that are spreading DemonBot and exploiting servers at an aggregated rate of over 1 million exploit attempts per day.
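
As a hedged sketch of a defensive check for the exposure DemonBot abuses, the snippet below asks a host’s YARN ResourceManager REST API for cluster info without credentials; an answer means the API is open. The host is a hypothetical example (8088 is YARN’s default ResourceManager web port), and it should only be run against systems you are authorized to test.

    # Check whether a Hadoop YARN ResourceManager answers unauthenticated
    # REST requests. Read-only; for hosts you are authorized to test.

    import requests

    def yarn_rest_api_exposed(host, port=8088):
        try:
            r = requests.get(f"http://{host}:{port}/ws/v1/cluster/info", timeout=5)
            return r.status_code == 200 and "clusterInfo" in r.text
        except requests.RequestException:
            return False

    if yarn_rest_api_exposed("10.0.0.42"):  # hypothetical internal host
        print("YARN REST API reachable without auth -- restrict it")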

[You may also like: IoT Botnets on the Rise]

Credential Stuffing Campaign

In October, Radware began tracking a credential stuffing campaign—a subset of brute force attacks—targeting the financial industry in the United States and Europe.

This particular campaign is motivated by fraud. Criminals are using credentials from prior data breaches to gain access to users’ bank accounts. When significant breaches occur, the compromised emails and passwords are quickly leveraged by cybercriminals. Armed with tens of millions of credentials from recently breached websites, attackers use them, along with scripts and proxies, to distribute their attacks against financial institutions and take over banking accounts. These login attempts can happen in such volumes that they resemble a distributed denial-of-service (DDoS) attack.
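
One coarse detection signal, sketched below with invented log data: a surge of failed logins spread across many distinct usernames and rotating source IPs, which per-account lockouts alone will not catch. The threshold is illustrative.

    # Toy credential-stuffing signal: failure surge across many accounts.

    failed_logins = [  # (source_ip, username) per failed attempt
        ("203.0.113.5", "alice"),
        ("203.0.113.9", "bob"),
        ("198.51.100.2", "carol"),
        ("203.0.113.5", "dave"),
        # ...thousands more in a real detection window
    ]

    WINDOW_FAILURE_LIMIT = 3  # tune against your normal baseline

    users = {u for _, u in failed_logins}
    sources = {ip for ip, _ in failed_logins}

    # Many distinct usernames relative to source IPs suggests stuffing
    # rather than one user mistyping a password.
    if len(failed_logins) > WINDOW_FAILURE_LIMIT and len(users) >= len(sources):
        print("Failure surge across many accounts: possible credential stuffing")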

DNS Hijacking Targets Brazilian Banks

This summer, Radware’s Threat Research Center identified a hijacking campaign aimed at Brazilian bank customers through their IoT devices, attempting to steal their bank credentials.

The research center had been tracking malicious activity targeting D-Link DSL modem routers in Brazil since early June. Through known old exploits dating from 2015, a malicious agent is attempting to modify the DNS server settings in the routers of Brazilian residents, redirecting all their DNS requests through a malicious DNS server. That server hijacks requests for the hostname of Banco do Brasil (www.bb.com.br) and redirects victims to a fake, cloned website hosted on the same malicious server, which has no connection whatsoever to the legitimate Banco do Brasil website.
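
A quick way to check for this kind of router-level hijack is to compare your configured resolver’s answer for a sensitive hostname against a well-known public resolver, as sketched below with the third-party dnspython package. A mismatch is only a hint, since CDNs legitimately return differing records.

    # Compare the local (router-supplied) resolver's answer with a known
    # public resolver. Requires the third-party 'dnspython' package.

    import dns.resolver

    def resolve_a(hostname, nameserver=""):
        res = dns.resolver.Resolver()
        if nameserver:
            res.nameservers = [nameserver]
        return {rr.address for rr in res.resolve(hostname, "A")}

    host = "www.bb.com.br"
    local_answer = resolve_a(host)               # whatever the router handed out
    trusted_answer = resolve_a(host, "1.1.1.1")  # known public resolver

    if local_answer.isdisjoint(trusted_answer):
        print("Resolvers disagree entirely -- investigate possible DNS hijack")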

[You may also like: Financial Institutions Must Protect the Data Like They Protect the Money]

Nigelthorn Malware

In May, Radware’s cloud malware protection service detected a zero-day malware threat at one of its customers, a global manufacturing firm, by using machine-learning algorithms. This malware campaign is propagating via socially-engineered links on Facebook and is infecting users by abusing a Google Chrome extension (the ‘Nigelify’ application) that performs credential theft, cryptomining, click fraud and more.

Further investigation by Radware’s Threat Research group revealed that this group has been active since at least March 2018 and has already infected more than 100,000 users in over 100 countries.

[You may also like: The Origin of Ransomware and Its Impact on Businesses]

Stresspaint Malware Campaign

On April 12, 2018, Radware’s Threat Research group detected malicious activity via internal feeds: a group collecting user credentials and payment information from Facebook users across the globe. The group manipulates victims via phishing emails into downloading a painting application called ‘Relieve Stress Paint.’ While benign in appearance, it runs malware dubbed ‘Stresspaint’ in the background. Within a few days, the group had infected over 40,000 users, stealing tens of thousands of Facebook user credentials and cookies.

DarkSky Botnet

In early 2018, Radware’s Threat Research group discovered a new botnet, dubbed DarkSky. DarkSky features several evasion mechanisms, a malware downloader and a variety of network- and application-layer DDoS attack vectors. This bot is now available for sale for less than $20 over the Darknet.

As published by its authors, this malware is capable of running under Windows XP/7/8/10, in both 32-bit and 64-bit versions, and has anti-virtual-machine capabilities to evade security controls such as sandboxes, allowing it to infect only ‘real’ machines.

Read the “IoT Attack Handbook – A Field Guide to Understanding IoT Attacks from the Mirai Botnet and its Modern Variants” to learn more.



2018 In Review: Healthcare Under Attack

December 12, 2018 — by Daniel Smith


Radware’s ERT and Threat Research Center monitored an immense number of events over the last year, giving us a chance to review and analyze attack patterns and gain further insight into today’s trends and changes in the attack landscape. Here are some of our observations.

Healthcare Under Attack

Over the last decade there has been a dramatic digital transformation within healthcare; more facilities are relying on electronic forms and online processes to help improve and streamline the patient experience. As a result, the medical industry has new responsibilities and priorities for keeping client data secure and available–which unfortunately aren’t always met.

This year, the healthcare industry dominated news with an ever-growing list of breaches and attacks. Aetna, CarePlus, Partners Healthcare, BJC Healthcare, St. Peter’s Surgery and Endoscopy Center, ATI Physical Therapy, Inogen, UnityPoint Health, Nuance Communication, LifeBridge Health, Aultman Health Foundation, Med Associates and more recently Nashville Metro Public Health, UMC Physicians, and LabCorp Diagnostics have all disclosed or settled major breaches.

[You may also like: 2019 Predictions: Will Cyber Serenity Soon Be a Thing of the Past?]

Generally speaking, the risk of falling prey to data breaches is high, due to password sharing, outdated and unpatched software, or exposed and vulnerable servers. Medical facilities in particular carry additional risks, such as the number of hospital employees who have full or partial access to your health records during your stay. The possibility of a malicious insider or abuse of access is also high, as is the risk of third-party breaches. For example, it was recently disclosed that NHS patient records may have been exposed when passwords were stolen from Embrace Learning, a training business used by healthcare workers to learn about data protection.

Profiting From Medical Data

These recent cyberattacks targeting the healthcare industry underscore the growing threat to hospitals, medical institutions and insurance companies around the world. So, what’s driving the trend? Profit. Personal data, specifically healthcare records, are in demand and quite valuable on today’s black market, often fetching more money per record than financial records, and are a crucial part of the ‘Fullz’ packages (complete identity kits) sold by cybercriminals.

Not only are criminals exfiltrating patient data and selling it for a profit, but others have opted to encrypt medical records with ransomware or hold the data hostage until their extortion demands are met. Hospitals are often quick to pay an extortionist because backups are non-existent or restoring services would simply take too long. Because of this, cybercriminals continue to focus on the industry.

[You may also like: How Secure is Your Medical Data?]

Most of the attacks targeting the medical industry are ransomware attacks, often delivered via phishing campaigns. There have also been cases where ransomware and malware were delivered via drive-by downloads and compromised third-party vendors. We have also seen criminals use SQL injections to steal data from medical applications, as well as flood those networks with DDoS attacks. More recently, we have seen large-scale scanning and exploitation of internet-connected devices for the purpose of cryptomining, some of them located inside medical networks. In addition to causing outages and encrypting data, these attacks have resulted in canceled elective cases, diverted incoming patients and rescheduled surgeries.
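
To illustrate the SQL injection vector mentioned above, the hypothetical Python snippet below (using the standard sqlite3 module and a made-up patients table) shows how concatenating user input into a query lets an attacker dump every row, while a parameterized query neutralizes the same input:

```python
# Hypothetical example of the SQL injection risk described above,
# using Python's built-in sqlite3 module and a made-up 'patients' table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Alice'), (2, 'Bob')")

user_input = "x' OR '1'='1"  # attacker-controlled search string

# VULNERABLE: string concatenation lets the input rewrite the query,
# so this returns every row in the table instead of no rows.
rows = conn.execute(
    "SELECT * FROM patients WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query leaked:", rows)

# SAFE: a parameterized query treats the input as data, not SQL,
# so the injection attempt matches nothing.
rows = conn.execute(
    "SELECT * FROM patients WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)
```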

For-profit hackers launch a variety of attacks against medical networks designed to steal your personal information from vulnerable or exposed databases. They look for a complete or partial set of information such as name, date of birth, Social Security numbers, diagnosis or treatment information, Medicare or Medicaid identification number, medical record number, billing/claims information, health insurance information, disability code, birth or marriage certificate information, Employer Identification Number, driver’s license numbers, passport information, banking or financial account numbers, and usernames and passwords so they can resell that information for a profit.

[You may also like: Fraud on the Darknet: How to Own Over 1 Million Usernames and Passwords]

Sometimes the data obtained by the criminal is incomplete, but even partial data can be leveraged as a stepping stone to gather more. Criminals can use partial information to create a spear-phishing kit designed to gain your trust by citing a piece of personal information as bait, and they will move very quickly once they gain access to PHI or payment information. Criminals will normally sell the information obtained, even if incomplete, in bulk or in packages on private forums to other criminals who can complete the Fullz package or quickly cash out the accounts. Stolen data will also find its way to public auctions and marketplaces on the darknet, where sellers try to get the highest price possible for the data or gain attention and notoriety for the hack.

Don’t let healthcare data slip through the cracks; be prepared.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

DDoS, DDoS Attacks, Security, WAF

What Can We Learn About Cybersecurity from the Challenger Disaster? Everything.

December 5, 2018 — by Radware


Understanding the potential threats that your organization faces is an essential part of risk management in modern times. It involves forecasting and evaluating all the factors that impact risk. Processes, procedures and investments can all increase, minimize or even eliminate risk.

Another factor is the human element. Oftentimes a culture exists within an organization in which reams of historical data tell one story, but management believes something entirely different. This “cognitive dissonance” can lead to an overreliance on near-term data and experiences and a discounting of long-term statistical analysis.

Perhaps no better example of this exists than the space shuttle Challenger disaster in 1986, which now serves as a case study in improperly managing risk. In January of that year, the Challenger disintegrated 73 seconds after launch due to the failure of a gasket (called an O-ring) in one of the rocket boosters. While the physical cause of the disaster was the failure of the O-ring, the Rogers Commission that investigated the accident found that NASA had failed to correctly identify “flaws in management procedures and technical design that, if corrected, might have prevented the Challenger tragedy.”

Despite strong evidence dating back to 1977 that the O-ring was a flawed design that could fail under certain conditions and temperatures, neither NASA management nor the rocket manufacturer, Morton Thiokol, responded adequately to the danger posed by the deficient joint design. Rather than redesigning the joint, they came to define the problem as an “acceptable flight risk.” Over the course of the 24 preceding successful space shuttle flights, a “safety culture” took hold within NASA management that downplayed the technical risks of flying the shuttle, despite mountains of data and warnings about the O-ring provided by research and development (R&D) engineers.

As American physicist Richard Feynman said regarding the disaster, “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

Those words ring especially true when they pertain to cybersecurity. C-suite executives need to stop evaluating and implementing cybersecurity strategies and solutions that merely meet minimal compliance requirements and establish a culture of “acceptable risk.” They must instead start managing to real-world risks: risks that are supported by hard data.

Risk Management and Cybersecurity

The threat of a cyberattack on your organization is no longer a question of if, but when, and C-suite executives know it. According to C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts, 96% of executives were concerned about network vulnerabilities and security risks resulting from hybrid computing environments. Managing risk requires organizations to plan for and swiftly respond to risks and potential risks as they arise. Cybersecurity is no exception. For any organization, risks can be classified into four basic categories: strategic, reputation, product and governance risk.

The Challenger disaster underscores all four of these risk categories. Take strategic risk as an example. Engineers from Morton Thiokol expressed concerns and presented data regarding the performance of the O-rings, both in the years prior and days leading up to the launch, and stated the launch should be delayed. NASA, under pressure to launch the already delayed mission and emboldened by the 24 preceding successful shuttle flights that led them to discount the reality of failure, pressured Morton Thiokol to supply a different recommendation. Morton Thiokol management decided to place organizational goals ahead of safety concerns that were supported by hard data. The recommendation for the launch was given, resulting in one of the most catastrophic incidents in manned space exploration. Both Morton Thiokol and NASA made strategic decisions that placed the advancements of their respective organizations over the risks that were presented.

[You may also like: The Million-Dollar Question of Cyber-Risk: Invest Now or Pay Later?]

This example of strategic risk serves as a perfect analogy for organizations implementing cybersecurity strategies and solutions. There are countless examples of high-profile cyberattacks and data breaches in which upper management was warned in advance of network vulnerabilities, yet no actions were taken to prevent an impending disaster. The infamous 2018 Panera Bread data breach is one such example. Facebook is yet another. Its platform operations manager between 2011 and 2012 warned management at the social tech giant to implement audits or enforce other mechanisms to ensure user data extracted from the social network was not misused by third-party developers and/or systems. These warnings were apparently ignored.

So why does this continually occur? The implementation of DDoS and WAF mitigation solutions often involves three key components within an organization: management, the security team/SOC and compliance. Despite reams of hard data provided by a security team that an organization is either currently vulnerable or not prepared for the newest generation of attack vectors, management will often place overemphasis on near-term security results/experiences; they feel secure in the fact that the organization has never been the victim of a successful cyberattack to date. The aforementioned Facebook story is a perfect example: They allowed history to override hard data presented by a platform manager regarding new security risks.

Underscoring this “cognitive dissonance” is the compliance team, which often seeks to evaluate DDoS mitigation solutions based solely on checkbox functionality that fulfills minimal compliance standards. This strategy also drives a cost-savings approach that yields short-term financial savings within an organization that oftentimes views cybersecurity as an afterthought vis-à-vis other strategic programs, such as mobility, IoT and cloud computing.

The end result? Organizations aren’t managing real-world risks, but rather are managing “yesterday’s” risks, thereby leaving themselves vulnerable to new attack vectors, IoT botnet vulnerabilities, cybercriminals and other threats that didn’t exist weeks or even days ago.

The True Cost of a Cyberattack

To understand just how detrimental this can be to the long-term success of an organization requires grasping the true cost of a cyberattack. Sadly, these data points are often as poorly understood, or dismissed, as the aforementioned statistics regarding vulnerability. The cost of a cyberattack can be mapped by the four risk categories:

  • Strategic Risk: Cyberattacks, on average, cost more than one million USD/EUR, according to 40% of executives. Five percent estimated this cost to be more than 25 million USD/EUR.
  • Reputation Risk: Customer attrition rates can increase by as much as 30% following a cyberattack. Moreover, organizations that lose over four percent of their customers following a data breach suffer an average total cost of $5.1 million. In addition, 41% of executives reported that customers have taken legal action against their companies following a data breach. The Yahoo and Equifax data breach lawsuits are two high-profile examples.
  • Product Risk: The IP Commission estimated that counterfeit goods, pirated software and stolen trade secrets cost the U.S. economy $600 billion annually.
  • Governance Risk: “Hidden” costs associated with a data breach include increased insurance premiums, lower credit ratings and devaluation of trade names. Equifax was devalued by $4 billion by Wall Street following the announcement of its data breach.

[You may also like: Understanding the Real Cost of a Cyber-Attack and Building a Cyber-Resilient Business]

Secure the Customer Experience, Manage Risk

Only by identifying the new risks an organization faces each and every day, and having a plan in place to minimize them, can executives build a foundation upon which their company will succeed. In the case of the space shuttle program, mounds of data that clearly demonstrated an unacceptable flight risk were pushed aside by the need to meet operational goals. What lessons can be learned from that fateful day in January of 1986 and applied to cybersecurity? To start, the disaster highlights the five key steps of managing risks.

In the case of cybersecurity, this means that executive leadership must weigh the opinions of its network security team, compliance team and upper management and use data to identify vulnerabilities and the requirements to successfully mitigate them. In the digital age, cybersecurity must be viewed as an ongoing strategic initiative and cannot be delegated solely to compliance. Leadership must fully weigh the potential cost of a cyberattack or data breach against the resources required to implement the right security strategies and solutions. Lastly, when properly understood, risk can actually be turned into a competitive advantage. In the case of cybersecurity, it can be used as a competitive differentiator with consumers who demand fast network performance, responsive applications and a secure customer experience. This enables companies to target and retain customers by supplying a forward-looking security solution that seamlessly protects users today and into the future.

So how are executives expected to accomplish this while facing new security threats, tight budgets, a shortfall in cybersecurity professionals and the need to safeguard increasingly diversified infrastructures? The key is creating a secure climate for the business and its customers.

To create this climate, executives must be willing to accept new technologies, be open-minded to new ideologies and embrace change, according to C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts. Executives committed to staying on top of this ever-evolving threat must break down the silos that exist within the organization to assess the dimensions of the risks across the enterprise and address these exposures holistically. Next is balancing the aforementioned investment-versus-risk equation. All executives face tough choices when deciding where to invest resources to propel their companies forward. C-suite executives must leverage the aforementioned data points and carefully evaluate the risks associated with security vulnerabilities against the costs of implementing effective security solutions to avoid becoming the next high-profile data breach.

According to the same report, four in 10 respondents identified increasing infrastructure complexity, digital transformation plans, integration of artificial intelligence and migration to the cloud as events that put pressure on security planning and budget allocation.

The stakes are high. Security threats can seriously impact a company’s brand reputation, resulting in customer loss, reduced operational productivity and lawsuits. C-suite executives must heed the lessons of the space shuttle Challenger disaster: Stop evaluating and implementing cybersecurity strategies and solutions that meet minimal compliance and start managing to real-world risks by trusting data, pushing aside near-term experiences/“gut instincts” and understanding the true cost of a cyberattack. Those executives who are willing to embrace technology and change and prioritize cybersecurity will be the ones to win the trust and loyalty of the 21st-century consumer.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now