A DDoS attack can carry significant costs and consequences. However, these key steps can help minimize the impact and get you on your way to recovery.
The healthcare industry is a prime target of hackers. According to Radware’s 2018-2019 Global Application and Network Security Report, healthcare was the second-most attacked industry after the government sector in 2018. In fact, about 39 percent of healthcare organizations were hit daily or weekly by hackers and only 6 percent said they’d never experienced a cyber attack.
Increased digitization in healthcare is a contributor to the industry’s enlarged attack surface. And it’s accelerated by a number of factors: the broad adoption of Electronic Health Records Systems (EHRS), integration of IoT technology in medical devices (software-based medical equipment like MRIs, EKGs, infusion pumps), and a migration to cloud services.
Case in point: 96% of non-federal acute care hospitals have an EHRS. This is up from 8% in 2008.
Accenture estimates that the loss of data and related failures will cost healthcare companies nearly $6 trillion in damages in 2020, compared to $3 trillion in 2017. Cyber crime can have a devastating financial impact on the healthcare sector in the next four to five years.
According to the aforementioned Radware report, healthcare organizations saw a significant increase in malware or bot attacks, with socially engineered threats and DDoS steadily growing, as well. While overall ransomware attacks have decreased, hackers continue to hit the healthcare industry the hardest with these attacks. And they will continue to refine ransomware attacks and likely hijack IoT devices to hold tech hostage.
Indeed, the increasing use of medical IoT devices makes healthcare organizations more vulnerable to DDoS attacks; attackers use infected IoT devices in botnets to launch coordinated attacks.
Additionally, cryptomining is on the rise, with 44 percent of organizations experiencing a cryptomining or ransomware attack. Another 14 percent experienced both. What’s worse is that these health providers don’t feel prepared for these attacks. The report found healthcare “is still intimidated by ransomware.”
The Office of Civil Rights (OCR) has warned about the dangers of DDoS attacks on healthcare organizations; in one incident, a DDoS attack overloaded a hospital network and computers, disrupting operations and causing hundreds of thousands of dollars in losses and damages.
The healthcare industry is targeted for a variety of reasons. For one thing, money. By 2026, healthcare spending will consume 20% of the GDP, making the industry an attractive financial target for cyber criminals. And per Radware’s report, the value of medical records on the darknet is higher than that of passwords and credit cards.
And as my colleague Daniel Smith previously wrote, “not only are criminals exfiltrating patient data and selling it for a profit, but others have opted to encrypt medical records with ransomware or hold the data hostage until their extortion demand is met. Often hospitals are quick to pay an extortionist because backups are non-existent, or it may take too long to restore services.”
Regardless of motivation, one thing is certain: Ransomware and DDoS attacks pose a dangerous threat to patients and those dealing with health issues. Many ailments are increasingly treated with cloud-based monitoring services, IoT-embedded devices and self or automated administration of prescription medicines. Cyber attacks could establish a foothold in the delivery of health services and put people’s lives and well-being at risk.
Securing digital assets can no longer be delegated solely to the IT department. Security planning needs to be infused into new product and service offerings, development plans and new business initiatives, not just for enterprises, but for hospitals and healthcare providers alike.
To prevent or mitigate DDoS attacks, US-Computer Emergency Readiness Team (US-CERT) recommends that organizations consider the following measures:
- Continuously monitoring and scanning for vulnerable and compromised IoT devices on their networks and following proper remediation actions
- Creating and implementing password management policies and procedures for devices and their users; ensuring all default passwords are changed to strong passwords
- Installing and maintaining anti-virus software and security patches; updating IoT devices as soon as security patches become available is critical
- Installing a firewall and configuring it to restrict traffic coming into and leaving the network and IT systems
- Segmenting networks where appropriate and applying security controls for access to network segments
- Disabling universal plug and play on routers unless absolutely necessary
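The password-management recommendation above lends itself to simple automation. Below is a minimal Python sketch that flags inventoried devices still using well-known factory credentials; the credential table, device list, and field names are illustrative assumptions, not part of the US-CERT guidance:

```python
# Hypothetical default-credential audit for an IoT device inventory.
# The DEFAULT_CREDS table and the inventory records are illustrative only.
DEFAULT_CREDS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "1234"),
}

def audit_devices(inventory):
    """Return the hosts whose (username, password) pair is a known factory default."""
    flagged = []
    for device in inventory:
        if (device["username"], device["password"]) in DEFAULT_CREDS:
            flagged.append(device["host"])
    return flagged

if __name__ == "__main__":
    inventory = [
        {"host": "10.0.0.12", "username": "admin", "password": "admin"},
        {"host": "10.0.0.13", "username": "admin", "password": "S7r0ng!pass"},
    ]
    print(audit_devices(inventory))  # hosts still on factory defaults
```

A real audit would pull the inventory from an asset database and test credentials against the devices themselves rather than a stored list, but the principle is the same: no device should pass with a default pair.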
What is an HTTPS flood attack? Why is everybody talking about it these days? And is it really such a big threat?
HTTPS flood attack is a generic name for DDoS attacks that exploit SSL/TLS protocols over HTTP communications. Lately, we’ve been hearing much about this specific type of DDoS attack and other SSL/TLS attack vectors; according to our 2018-2019 Global Application & Network Security report, encrypted web attacks were the most commonly reported form of application layer attack in 2018.
And with regards to the last question, there is a simple answer: YES.
The Benefits of Encryption
We all know that encryption is being used almost everywhere today, with more than 70% of web pages worldwide loaded over HTTPS. Encryption lets us enjoy many benefits while being connected: We can securely send our private credentials to our bank, shop easily on Amazon without worrying whether our credit card details will be intercepted, and we can text safely and transfer files with peace of mind.
Basically, by using encryption, or SSL/TLS in more technical jargon, we enjoy authenticity (meaning, we know the source of the traffic), integrity (meaning, we know that no one tampered with the data between the two end-points), and of course, confidentiality (encryption turns data into ciphertext using symmetric and asymmetric key exchanges).
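The integrity and authenticity properties can be illustrated with a few lines of Python using the standard library's `hmac` module. This is only a sketch of the idea: in real TLS, the shared key below would be negotiated during the handshake, and record protection is handled by an AEAD cipher rather than a bare HMAC:

```python
import hashlib
import hmac

# Integrity/authenticity sketch: both endpoints hold a shared symmetric key
# (in TLS, this key material is negotiated during the handshake's key exchange).
key = b"shared-session-key"
message = b"transfer $100 to account 42"

# Sender attaches an authentication tag computed over the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """Receiver recomputes the tag; any tampering with the data changes it."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))          # untampered message verifies
print(verify(key, message + b"0", tag))   # a single modified byte fails
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through timing differences during comparison.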
It sounds so good, shut up and take my money!
A Fly in the Ointment
Indeed, data encryption gives us tremendous power over data transfer, but there is a fly in the ointment. All of these incredible capabilities require many system resources, and thus attract hackers and cyber criminals who wish to wreak havoc.
When it comes to the destination server, or an organization's server, the SSL/TLS connection requires even greater amounts of allocated resources: roughly 15 times more than the requesting host, to be exact.
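That asymmetry is exactly what makes SSL floods attractive to attackers. A back-of-envelope sketch in Python, using the ~15x ratio cited above and purely illustrative botnet numbers:

```python
# Back-of-envelope amplification: if completing a TLS handshake costs the
# server roughly 15x the resources the client spends, a small botnet can
# exhaust a much larger server. Botnet figures here are illustrative only.
ASYMMETRY = 15                 # approximate server-to-client cost ratio
bot_handshakes_per_sec = 200   # hypothetical per-bot handshake rate
bots = 1_000                   # hypothetical botnet size

client_side_load = bots * bot_handshakes_per_sec
server_side_load = client_side_load * ASYMMETRY
print(f"{server_side_load:,} handshake cost-units/sec imposed on the server")
```

Even a modest botnet forces the target to spend an order of magnitude more resources than the attackers do, which is the essence of an asymmetric attack.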
In other words, if a group knows how to manipulate these protocols and the vulnerabilities inherent in them, it can cause significant damage by running powerful encrypted DDoS attacks.
Now, there is only one option for organizations that wish to protect against HTTPS DDoS attacks: They must protect their network and infrastructure with dedicated, sophisticated devices that can detect and mitigate HTTPS DDoS attacks.
An Evolving Solution
Traditional protection devices require a copy of the SSL certificates (or keys) in order to decrypt the packets that are being transmitted through the device. However, while doing so, they damage user privacy (especially in the era of GDPR and other worldwide privacy regulations) and add latency. And needless to say, if not handled properly, the process can create additional security risks. What’s more, traditional devices are stateful and thus themselves vulnerable to DDoS attacks.
For service providers and carriers, whose security policies prevent them from holding their network tenants’ decryption keys, this is problematic. Without their network tenants’ keys, traditional off-the-shelf solutions are ineffective.
So, how can service providers properly protect their tenants from cyber attacks?
Keyless protection against HTTPS flood attacks based on stateless architecture is ideal for service providers and carriers. Such a solution not only eliminates the operational complexity that comes with managing decryption keys, but also protects against SSL-based HTTP DDoS attacks at scale without adding latency or compromising user privacy.
In June, I traveled to Israel to attend BsidesTLV and Cyber Week. Both of these events included incredible presentations, workshops, and networking opportunities. They also provided many unique opportunities to discuss research, privacy, and policy on many different levels with industry leaders and government officials from around the world.
Some of my preferred events during Cyber Week included Exploring The Grey Zone of Cyber Defense, Cyber Attacks Against Nations, and Academic Perspectives on Cybersecurity Challenges.
One of the expert lectures during the Academic Perspectives event struck a chord with me. The speech was titled ‘Normalization as an Approach to Norms,’ and was presented by Prof. Martin Libicki of the U.S. Naval Academy.
At a high level, the talk was about the use of normalization as an approach to determining what cyber behaviors, carried out by governments, could be considered social norms in the cyber domain, and who gets to set this gold standard. (If you would like to watch it for yourself, it can be found on YouTube.)
The part that resonated with me is when Prof. Libicki started talking about who might set the gold standard and what is considered normal cyber behaviors from different countries. For example, North Korea is known for robbing banks, and Russia is known for election interference and targeting the energy sector. Are these activities we want to accept as normal behavior? Of course not.
What about China’s behaviors, which include launching DDoS attacks on dissidents? Are we, the security industry that sets the gold standard, comfortable with allowing others to use denial of service attacks as a way to silence people?
This lecture was focused on nation-state attacks and real cyber warfare, but it left me connecting dots and wondering, hasn’t the security industry already accepted denial of service attacks as normalized behavior?
Are Denial of Service Attacks a Social Norm?
In my opinion, yes: denial of service attacks, and the behaviors that enable them, are now accepted and expected on all levels. But why has this happened? Why have denial of service attacks become tolerated? The sad truth is that we, the security and tech industry, allowed this to happen by accepting specific actions within the community and not speaking up about others.
One of the main reasons why denial of service attacks became a social norm is their popularity, and the attention paid to them earlier in the decade among hacktivists and gamers. With this came the ability for anyone to freely access the source code, tools, and resources needed to conduct an attack of their own.
In general, no one prevents the source code and tools from being publicly accessible. In fact, criminals AND researchers do their fair share in propagating the tools and scripts used to launch denial of service attacks by hosting them on code repository sites.
Another reason why denial of service attacks became a social norm is that legitimate companies like hosting providers and social media outlets allowed the activity for one reason or another. For example, social media platforms enable criminals to not only post operational details but also to advertise their malicious services publicly. At the same time, the hosting providers turn a blind eye for profit and allow criminals to host and mask their infrastructure with their services.
Also, at this point, you could almost say manufacturers and some ISPs are co-conspirators. Manufacturers are building and shipping vulnerable IoT devices with no intention of patching or providing software updates for known exploits, thus contributing to the number of devices that could be leveraged by a botherder for a denial of service attack. You also have ISPs that know they are significant offenders and the main source of malicious traffic, yet do very little to mitigate the activity, let alone respond to abuse reports.
So, are we comfortable allowing others to use denial of service attacks as a way to silence people? From my perspective, it seems like we do a lot to support the activity.
Acceptance is a Slippery Slope
To be clear, in no way am I saying that a denial of service attack is nothing to worry about now that it has become a norm. But I believe most of us have grown to accept denial of service attacks, specifically temporary network outages, as a regular occurrence, or have written them off as the cost of doing business in the digital era, and that acceptance has led down this path of normalization.
At any rate, if China’s use of denial of service attacks against foreign platforms used by Chinese dissidents is acceptable, or something we allow to happen without any action, then the average denial of service attack against your corporate network is considered normal behavior as well.
Under this current environment of acceptance, it becomes harder to look at the average botherder and say their behavior is not normal or acceptable, while simultaneously taking a passive approach on nation-states that use the same attack vector.
If we want to reduce the number of denial of service attacks by non-government actors, then we have to lead by example as the gold standard. We have to make sure people know that nation-states’ use of denial of service attacks is unacceptable. We also have to do more to prevent malicious actors from gaining access to the tools used to launch these attacks.
Hosting attack services and code should not be acceptable behavior from the security community.
How Much More Will We Tolerate?
This is a question I don’t have an answer for. At the moment, we tolerate a lot. At this rate, almost every teenager will, at some point, be involved in launching a DDoS attack or know someone who is. And while some will write it off as child’s play, just knocking a friend offline, we all know they likely got the code from one of our public repositories or used services that some of us manage to mask their origin.
Remember, we as the security industry set the gold standard, and when we tolerate specific behavior for long enough, it becomes socially acceptable.
A DDoS mitigation service is more than just the technology or the service guarantees. The quality and resilience of the underlying network is a critical component in your armor, and one which must be carefully evaluated to determine how well it can protect you against sophisticated DDoS attacks.
Below are five key considerations in evaluating a DDoS scrubbing network.
When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new heights (and scales) of attacks.
To date, the largest-ever verified DDoS attack was a memcached-based attack against GitHub. This attack reached a peak of approximately 1.3 terabits per second (Tbps) and 126 million packets per second (pps).
In order to withstand such an attack, scrubbing networks must have not just enough capacity to ‘cover’ the attack, but also ample overflow capacity to accommodate other customers on the network and other attacks that might be going on at the same time. A good rule of thumb is to look for mitigation networks with at least 2-3 times the capacity of the largest attacks observed to date.
It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers – particularly those who take an ‘edge’ security approach – rely on their Content Distribution Network (CDN) capacity for DDoS mitigation, as well.
The problem, however, is that the majority of this capacity is already being utilized on a routine basis. CDN providers don’t like to pay for unused capacity, and therefore CDN bandwidth utilization rates routinely reach 60-70%, and can frequently reach up to 80% or more. This leaves very little room for ‘overflow’ traffic that can result from a large-scale volumetric DDoS attack.
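The headroom arithmetic here is straightforward. A quick Python sketch with a hypothetical 10 Tbps CDN shows how quickly routine utilization eats into the capacity left for absorbing an attack:

```python
# Why high routine utilization leaves little DDoS headroom.
# The 10 Tbps CDN capacity is a hypothetical figure for illustration.
def headroom_tbps(total_capacity_tbps, utilization):
    """Unused capacity available to absorb attack overflow."""
    return total_capacity_tbps * (1 - utilization)

cdn_capacity = 10.0  # Tbps
for utilization in (0.6, 0.7, 0.8):
    print(f"{utilization:.0%} busy -> "
          f"{headroom_tbps(cdn_capacity, utilization):.1f} Tbps of headroom")
```

At 80% routine utilization, a nominally 10 Tbps network has only about 2 Tbps of real attack-absorbing capacity, barely above the largest attacks already observed.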
Organizations deploy DDoS mitigation solutions in order to ensure the availability of their services. An increasingly important aspect of availability is speed of response: the question is not only whether the service is available, but also how quickly it can respond.
Cloud-based DDoS protection services operate by routing customer traffic through the service providers’ scrubbing centers, removing any malicious traffic, and then forwarding clean traffic to the customer’s servers. As a result, this process inevitably adds a certain amount of latency to user communications.
One of the key factors affecting latency is distance from the host. Therefore, in order to minimize latency, it is important for the scrubbing center to be as close as possible to the customer. This can only be achieved with a globally-distributed network, with a large number of scrubbing centers deployed at strategic communication hubs, where there is large-scale access to high-speed fiber connections.
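The distance effect can be approximated with simple physics. Assuming light propagates through fiber at roughly 200,000 km/s (about two-thirds of its speed in a vacuum) and ignoring queuing and processing delays, a rough Python estimate of the round-trip latency added by a scrubbing detour:

```python
# Rough added round-trip latency from detouring traffic through a scrubbing
# center. Assumes ~200,000 km/s propagation speed in fiber and ignores
# queuing, serialization, and processing delays.
def added_rtt_ms(detour_km, fiber_speed_km_s=200_000):
    """Extra round-trip time, in milliseconds, for a given detour distance."""
    return 2 * detour_km / fiber_speed_km_s * 1000

print(f"{added_rtt_ms(100):.1f} ms")    # scrubbing center ~100 km away
print(f"{added_rtt_ms(5000):.1f} ms")   # scrubbing center on another continent
```

A nearby scrubbing center adds on the order of a millisecond; one an ocean away adds tens of milliseconds on every round trip, which is why the geographic distribution of scrubbing centers matters.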
As a result, when examining a DDoS protection network, it is important not just to look at capacity figures, but also at the number of scrubbing centers and their distribution.
A key component impacting response time is the quality of the network itself, and its back-end routing mechanisms. In order to ensure maximal speed and resilience, modern security networks are based on anycast-based routing.
Anycast-based routing establishes a one-to-many relationship between IP addresses and network nodes (i.e., there are multiple network nodes with the same IP address). When a request is sent to the network, the routing mechanism applies principles of least-cost-routing to determine which network node is the optimal destination.
Routing paths can be selected based on the number of hops, distance, latency, or path cost considerations. As a result, traffic from any given point will usually be routed to the nearest and fastest node.
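Conceptually, the selection step reduces to picking the minimum-cost node among all nodes advertising the same address. A toy Python sketch (node names and costs are made up for illustration; real anycast selection happens in BGP, not application code):

```python
# Minimal sketch of anycast-style least-cost selection: multiple nodes share
# one service IP, and the routing layer steers each request to the node with
# the lowest path cost. Node names and costs are illustrative only.
def pick_node(costs):
    """costs: mapping of node -> path cost (hops, distance, latency, etc.)."""
    return min(costs, key=costs.get)

# Path costs as seen from one particular client's vantage point.
costs_from_client = {"nyc": 3, "fra": 7, "sgp": 11}
print(pick_node(costs_from_client))  # the nearest/cheapest node wins
```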
Anycast helps improve the speed and efficiency of traffic routing within the network. DDoS scrubbing networks based on anycast routing enjoy these benefits, which ultimately results in faster response and lower latency for end-users.
Finally, when selecting a DDoS scrubbing network, it is important to always have a backup. The whole point of a DDoS protection service is to ensure service availability. Therefore, you cannot have it – or any component in it – be a single point-of-failure. This means that every component within the security network must be backed up with multiple redundancy.
This includes not just multiple scrubbing centers and overflow capacity, but also requires redundancy in ISP links, routers, switches, load balancers, mitigation devices, and more.
Only a network with full multiple redundancy for all components can ensure full service availability at all times, and guarantee that your DDoS mitigation service does not become a single point-of-failure of its own.
Ask the Questions
Alongside technology and service, the underlying network forms a critical part of a cloud security network. The five considerations above outline the key metrics by which you should evaluate the network powering potential DDoS protection services.
Ask your service provider – or any service provider that you are evaluating – about their capabilities with regards to each of these metrics, and if you don’t like the answer, then you should consider looking for alternatives.
Internet pipes have gotten fatter in the last decade. We have gone from expensive 1 Mbps links to 1 Gbps links, which are available at a relatively low cost. Most enterprises have at least a 1 Gbps ISP link to their data center; many have multiple 1 Gbps links at each data center. In the past, QoS, packet shaping, application prioritization, etc., used to be a big deal, but now we just throw more capacity at any potential performance problem.
However, when it comes to protecting your infrastructure from DDoS attacks, 1 Gbps, 10 Gbps or even 40 Gbps is not enough capacity. This is because in 2019, even relatively small DDoS attacks are a few Gbps in size, and the larger ones are greater than 1 Tbps.
For this reason, when security professionals design a DDoS mitigation solution, one of the key considerations is the capacity of the DDoS mitigation service. That said, it isn’t easy to figure out which DDoS mitigation service actually has the capacity to withstand the largest DDoS attacks. This is because there are a range of DDoS mitigation solutions to pick from, and capacity is a parameter most vendors can spin to make their solution appear to be flush with capacity.
Let us examine some of the solutions available and understand the difference between their announced capacity and their real ability to block a large bandwidth DDoS attack.
On-premises DDoS Mitigation Appliances
First of all, be wary of any router, switch, or network firewall that is also being positioned as a DDoS mitigation appliance. Chances are it does NOT have the ability to withstand a multi-Gbps DDoS attack.
There are a handful of companies that make purpose-built DDoS mitigation appliances. These devices are usually deployed at the edge of your network, as close as possible to the ISP link. Many of these devices can mitigate attacks in the 10s of Gbps; however, the advertised mitigation capacity is usually based on one particular attack vector with all attack packets being of a specific size.
Irrespective of the vendor, don’t buy into 20/40/60 Gbps of mitigation capacity without quizzing the device’s ability to withstand a multi-vector attack, the real-world performance and its ability to pass clean traffic at a given throughput while also mitigating a large attack. Don’t forget, pps is sometimes more important than bps, and many devices will hit their pps limit first. Also be sure to delve into the internals of the attack mitigation appliance, in particular if the same CPU is used to mitigate an attack while passing normal traffic. The most effective devices have the attack “plane” segregated from the clean traffic “plane,” thus ensuring attack mitigation without affecting normal traffic.
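The pps-versus-bps point is easy to quantify. The sketch below uses hypothetical appliance ratings (10 Gbps and 5 Mpps) to show that a flood of small packets exhausts the packet budget long before the bit budget:

```python
# Why pps can bind before bps: an appliance rated for 10 Gbps and 5 Mpps
# (hypothetical figures) hits its packet budget first under small packets.
def pps_for_line_rate(bps, packet_bytes):
    """Packets per second needed to fill a given bit rate at a given packet size."""
    return bps / (packet_bytes * 8)

line_rate = 10e9   # 10 Gbps rated throughput
pps_limit = 5e6    # 5 Mpps rated packet throughput

for size in (64, 512, 1500):
    needed = pps_for_line_rate(line_rate, size)
    bound = "pps-bound" if needed > pps_limit else "bps-bound"
    print(f"{size:>4}-byte packets: {needed / 1e6:5.1f} Mpps needed -> {bound}")
```

With minimum-size 64-byte packets, filling the 10 Gbps pipe requires nearly 20 Mpps, four times this appliance's packet budget, so the pps rating, not the headline Gbps figure, is what actually limits it against a small-packet flood.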
Finally, please keep in mind that if your ISP link capacity is 1 Gbps and you have a DDoS mitigation appliance capable of 10 Gbps of mitigation, you are NOT protected against a 10 Gbps attack. This is because the attack will fill your pipe even before the on-premises device gets a chance to “scrub” the attack traffic.
Cloud-based Scrubbing Centers
The second type of DDoS mitigation solution that is widely deployed is a cloud-based scrubbing solution. Here, you don’t install a DDoS mitigation device at your data center. Rather, you use a DDoS mitigation service deployed in the cloud. With this type of solution, you send telemetry to the cloud service from your data center on a continuous basis, and when there is a spike that corresponds to a DDoS attack, you “divert” your traffic to the cloud service.
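The "divert on spike" logic can be sketched as a simple baseline comparison. Real services use far more sophisticated behavioral detection; the window size, threshold factor, and traffic samples below are illustrative only:

```python
from collections import deque

# Toy spike detector: compare current traffic against a moving baseline and
# signal diversion when it exceeds a multiple of that baseline. The window,
# factor, and sample values are illustrative assumptions.
class SpikeDetector:
    def __init__(self, window=5, factor=3.0):
        self.samples = deque(maxlen=window)  # recent traffic samples (Mbps)
        self.factor = factor                 # how far above baseline = spike

    def observe(self, mbps):
        """Return True when traffic should be diverted to the scrubbing cloud."""
        baseline = sum(self.samples) / len(self.samples) if self.samples else mbps
        self.samples.append(mbps)
        return mbps > self.factor * baseline

detector = SpikeDetector()
for sample in (100, 110, 95, 105, 2400):  # final sample is an attack spike
    if detector.observe(sample):
        print(f"divert to scrubbing center: {sample} Mbps")
```

Normal fluctuation stays under the threshold; the final sample, well over three times the recent baseline, is the one that triggers diversion.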
There are a few vendors who provide this type of solution but again, when it comes to the capacity of the cloud DDoS service, the devil is in the details. Some vendors simply add the “net” capacity of all the ISP links they have at all their data centers. This is misleading because they may be adding the normal daily clean traffic to the advertised capacity — so ask about the available attack mitigation capacity, excluding the normal clean traffic.
Also, chances are the provider has different capacities in different scrubbing centers and the net capacity across all the scrubbing centers may not be a good reflection of the scrubbing center attack mitigation capacity in the geography of your interest (where your data center is located).
Another item to inquire about is Anycast capabilities, because this gives the provider the ability to mitigate the attack close to the source. In other words, if a 100 Gbps attack is coming from China, it will be mitigated at the scrubbing center in APAC.
Finally, it is important that the DDoS mitigation provider has a completely separate data path for clean traffic and does not mix clean customer traffic with attack traffic.
Content Distribution Networks
A third type of DDoS mitigation architecture is based upon leveraging a content distribution network (CDN) to diffuse large DDoS attacks. When it comes to the DDoS mitigation capacity of a CDN however, again, the situation is blurry.
Most CDNs have 10s, 100s, or 1000s of PoPs geographically distributed across the globe. Many simply count the net aggregate capacity across all of these PoPs and advertise that as the total attack mitigation capacity. This has two major flaws. First, it is quite likely that a real-world DDoS attack is sourced from a limited number of geographical locations, in which case the capacity that really matters is the local CDN PoP capacity, not the global capacity at all the PoPs.
Second, most CDNs pass a significant amount of normal customer traffic on all of the CDN nodes, so if a CDN service claims its attack mitigation capacity is 40 Tbps, it may be counting 30 Tbps of normal traffic in that figure. The question to ask is what is the total unused capacity, both on a net aggregate level as well as within a geographical region.
ISP Provider-based DDoS Mitigation
Many ISPs offer DDoS mitigation as an add-on to the ISP pipe. It sounds like a natural choice: they see all traffic coming into your data center even before it reaches your infrastructure, so it is best to block the attack within the ISP’s infrastructure – right?
Unfortunately, most ISPs have semi-adequate DDoS mitigation deployed within their own infrastructure and are likely to pass along the attack traffic to your data center. In fact, in some scenarios, some ISPs could actually black hole your traffic when you are under attack to protect their other customers who might be using a shared portion of their infrastructure. The questions to ask your ISP are what happens if they see a 500 Gbps attack coming towards your infrastructure, and whether there is any cap on the maximum attack traffic.
All of the DDoS mitigation solutions discussed above are effective and are widely deployed. We don’t endorse or recommend one over the other. However, one should take any advertised attack mitigation capacity from any provider with a grain of salt. Quiz your provider on local capacity, differentiation between clean and attack traffic, any caps on attack, and any SLAs. Also, carefully examine vendor proposals for any exclusions.
A couple of months ago, I was on a call with a company that was in the process of evaluating DDoS mitigation services to protect its data centers. This company runs mission-critical applications and was looking for comprehensive coverage from various types of attacks, including volumetric, low and slow, encrypted floods, and application-layer attacks.
During the discussion, our team asked a series of technical questions related to their ISP links, types of applications, physical connectivity, and more. And we provided an attack demo using our sandbox lab in Mahwah.
Everything was moving along just fine until the customer asked us for a Proof of Concept (PoC), what most would consider a natural next step in the vendor evaluation process.
About That Proof of Concept…
How would you do a DDoS PoC? You rack and stack the DDoS mitigation appliance (or enable the service if it is cloud-based), set up some type of management IP address, configure the protection policies, and off you go!
Well, when we spoke to this company, they said they would be happy to do all of that–at their disaster recovery data center located within a large carrier facility on the east coast. This sent my antenna up and I immediately asked a couple of questions that would turn out to be extremely important for all of us: Do you have attack tools to launch DDoS attacks? Do you take the responsibility to run the attacks? Well, the customer answered “yes” to both.
Being a trained SE, I then asked why they needed to run the PoC in their lab and whether there was a way we could demonstrate that our DDoS mitigation appliance can mitigate a wide range of attacks using our PoC script. As it turned out, the prospect was evaluating other vendors and, to compare apples to apples (thereby giving all vendors a fair chance), was already conducting a PoC in its data center with each vendor’s appliance.
We shipped the PoC unit quickly and the prospect, true to their word, got the unit racked, stacked and cabled up, ready to go. We configured the device and gave them the green light to launch attacks. And then the prospect told us to launch the attacks; it turned out they didn’t have any attack tools after all.
A Bad Idea
Well, most of us in this industry do have DDoS testing tools, so what’s the big deal? As vendors who provide cybersecurity solutions, we shouldn’t have any problems launching attacks over the Internet to test out a DDoS mitigation service…right?
WRONG! Here’s why that’s a bad idea:
- Launching attacks over the Internet is ILLEGAL. You need written permission from the entity being attacked to launch a DDoS attack. You can try your luck if you want, but this is akin to running a red light. You may get away with it, but if you are caught the repercussions are damaging and expensive.
- Your ISP might block your IP address. Many ISPs have DDoS defenses within their infrastructure and if they see someone launching a malicious attack, they might block your access. Good luck sorting that one out with your ISP!
- Your attacks may not reach the desired testing destination. Even if your ISP doesn’t block you and the FBI doesn’t come knocking, there might be one or more DDoS mitigation devices between you and the customer data center where the destination IP being tested resides. These devices could very well mitigate the attack you launch, preventing you from doing the testing.
Those are three big reasons why doing DDoS testing in a production data center is, simply put, a bad idea. Especially if you don’t have a legal, easy way to generate attacks.
A Better Way
So what are the alternatives? How should you do DDoS testing?
- With DDoS testing, the focus should be on evaluating the mitigation features – e.g. can the service detect attacks quickly, can it mitigate immediately, can it adapt to attacks that are morphing, can it report accurately on what it is seeing and what is being mitigated, and how accurate is the mitigation (what about false positives)? If you run a DDoS PoC in a production environment, you will spend most of your resources and time testing connectivity and spinning your wheels on operational aspects (e.g. LAN cabling, console cabling, change control procedures, paperwork, etc.). This is not what you want to test; you want to test DDoS mitigation! It’s like trying to test how fast a sports car can go on a very busy street. You will end up testing the brakes, but you won’t get very far with any speed testing.
- Test things out in your lab. Even better, let the vendor test it in their lab for you. This will let both parties focus on the security features rather than get caught up with the headaches of logistics involved with shipping, change control, physical cabling, connectivity, routing etc.
- It is perfectly legal to use test tools like Kali Linux, BackTrack, etc. within a lab environment. Launch attacks to your heart’s content, morph the attacks, and see how the DDoS service responds.
- If you don’t have the time or expertise to launch attacks yourself, hire a DDoS testing service. Companies like activereach, Redwolf security or MazeBolt security do this for a living, and they can help you test the DDoS mitigation service with a wide array of customized attacks. This will cost you some money, but if you are serious about the deployment, you will be doing yourself a favor and saving future work.
- Finally, evaluate multiple vendors in parallel. You can never do this in a production data center. However, in a lab you can keep the attacks and the victim applications constant, while just swapping in the DDoS mitigation service. This will give you an apples-to-apples comparison of the actual capabilities of each vendor and will also shorten your evaluation cycle.
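To make the idea of "morphing" an attack during lab testing concrete, here is a minimal, hedged sketch of a test harness that generates batches of raw HTTP GET requests whose shape keeps changing. The hostname, parameter name and helper functions are hypothetical, and the requests are only built and printed, not sent; a real lab test would feed them to a traffic generator against an isolated lab target.

```python
import random
import string

def random_token(n=8):
    """Random alphanumeric string used to vary each request."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))

def build_attack_requests(host, path="/", count=100, morph=False):
    """Build a batch of raw HTTP GET request lines for lab testing.

    When morph=True, each request carries a random query parameter so the
    traffic keeps changing shape, exercising the mitigation service's
    ability to adapt to morphing attacks.
    """
    requests = []
    for _ in range(count):
        target = f"{path}?cb={random_token()}" if morph else path
        requests.append(f"GET {target} HTTP/1.1\r\nHost: {host}\r\n\r\n")
    return requests

# Generate a morphing batch against a hypothetical lab-only target.
batch = build_attack_requests("victim.lab.example", morph=True, count=5)
for req in batch:
    print(req.splitlines()[0])
```

Keeping the victim application and the request generator constant while swapping in each vendor's mitigation service is what makes the apples-to-apples comparison above possible.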
The darknet is a very real concern for today’s businesses. In recent years, it has redefined the art of hacking and, in the process, dramatically expanded the threat landscape that organizations now face. So, what exactly is the darknet and why should you care?
WHAT IS THE DARKNET?
Not to be confused with the deep web, the dark web/darknet is a collection of thousands of websites that can’t be accessed via normal means and aren’t indexed by search engines like Google or Yahoo.
Simply put, the darknet is an overlay of networks that requires specific tools and software in order to gain access. The history of the darknet predates the 1980s, and the term was originally used to describe computers on ARPANET that were hidden and programmed to receive messages but which did not respond to or acknowledge anything, thus remaining invisible, or in the dark. Since then, “darknet” has evolved into an umbrella term that describes the portions of the internet purposefully not open to public view or hidden networks whose architecture is superimposed on that of the internet.
Ironically, the darknet’s evolution can be traced somewhat to the U.S. military. The most common way to access the darknet is through tools such as the Tor network. The network routing capabilities that the Tor network uses were developed in the mid-1990s by mathematicians and computer scientists at the U.S. Naval Research Laboratory with the purpose of protecting U.S. intelligence communications online.
USE AND ACCESS
Uses of the darknet are nearly as wide and as diverse as the internet: everything from email and social media to hosting and sharing files, news websites and e-commerce. Accessing it requires specific software, configurations or authorization, often using nonstandard communication protocols and ports. Currently, two of the most popular ways to access the darknet are via two overlay networks. The first is the aforementioned Tor; the second is called I2P.
Tor, short for “The Onion Router,” is designed primarily to keep users anonymous. Just like the layers of an onion, data is wrapped within multiple layers of encryption. Each layer reveals only the next relay, until the final layer delivers the data to its destination. Information is sent bidirectionally, so data travels back and forth through the same circuit. On any given day, over one million users are active on the Tor network.
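The layered-encryption idea can be illustrated with a toy sketch. The XOR "cipher" below is NOT real cryptography (Tor uses proper symmetric ciphers negotiated per relay), and the relay keys are invented for the example; the point is only to show how each relay peels exactly one layer.

```python
from itertools import cycle

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'encryption' layer (XOR); NOT real cryptography."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def onion_wrap(message: bytes, relay_keys):
    """Wrap the message in one layer per relay, innermost layer last applied.

    The entry relay peels the outermost layer, the exit relay the final one,
    mirroring how each Tor relay learns only the next hop.
    """
    data = message
    for key in reversed(relay_keys):
        data = xor_layer(data, key)
    return data

def onion_unwrap(data, relay_keys):
    """Each relay in order peels its own layer until the payload emerges."""
    for key in relay_keys:
        data = xor_layer(data, key)
    return data

keys = [b"entry", b"middle", b"exit"]   # hypothetical per-relay keys
wrapped = onion_wrap(b"hello", keys)
assert onion_unwrap(wrapped, keys) == b"hello"
```

Peeling only the entry relay's layer still leaves the payload unreadable, which is why no single relay can see both the sender and the plaintext.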
I2P, which stands for the Invisible Internet Project, is designed for user-to-user file sharing. It takes data and encapsulates it within multiple layers. Just like cloves in a head of garlic, information is bundled together with other users’ information to prevent unpacking and inspection, and it is sent via unidirectional tunnels.
WHAT’S OUT THERE?
As mentioned previously, the darknet provides news, e-commerce sites, and email and hosting services. While many of the services are innocent and are simply alternatives to what can be found on the internet, a portion of the darknet is highly nefarious and tied to illicit activities due to its surreptitious nature. As a result, since the 1990s, cybercriminals have found a “digital home” on the darknet as a way to communicate, coordinate and, most recently, monetize the art of cyberattacks to a wide range of non-technical novices.
One of the most popular services is email, which has seen a dramatic increase in recent years that parallels the rising popularity of ransomware. Cyberattackers often use these email services to execute their campaigns while remaining hidden from authorities.
Hosting services are yet another. Similar to the cloud computing environments that enterprises might use as part of their IT infrastructure, darknet hosting services are leveraged by cybercriminals and hackers to host websites or e-commerce marketplaces that sell distributed denial-of-service (DDoS) tools and services. These hosting services are typically very unstable as they can be “taken down” by law enforcement or vigilante hackers for political, ideological or moral reasons.
Forums also exist that allow hackers and criminals to hold independent discussions for the purpose of knowledge exchange, including organizing and coordinating DDoS campaigns (such as those planned by Anonymous) and/or sharing cyberattack best practices. These forums come in a variety of technical levels and languages and can be associated with particular threat actors/groups, hacktivists, attack vectors, etc.
Lastly, just like the real internet, darknet search engines, like Candle and Torch, exist to allow users to easily locate and navigate these various forums, sites and e-commerce stores.
A DIGITAL STORE
Perhaps more than any other service usage, e-commerce sites on the darknet have exploded in popularity in recent years due to the rise of DDoS as a service and stresser services, resulting in huge profit margins for entrepreneurial hackers. Everything from DDoS attack tools and botnet rentals to “contracting” the services of a hacker are now available on the darknet.
The result? These e-commerce sites and their products have commoditized cyberattacks and made them available to a wide range of non-technical users. Oftentimes, these services come with intuitive, GUI-based interfaces that make setting up and launching attacks quick and simple.
Examples abound, but one example of DDoS as a service is PutinStresser, which illustrates the ease of access these services have achieved, providing potential buyers with various payment options, discovery tools, a variety of attack vectors and even chat-based customer support. Botnet rental services are also available, their growth paralleling the growth and use of botnets since 2016. A perfect example of a botnet service available on the darknet is the JenX botnet, which was discovered in 2018.
Prices for these tools are as diverse as the attack vectors that buyers can purchase and range from as low as $100 to several thousand dollars. Prices are typically based on various factors, such as the number of attack vectors included within the service, the size of the attack (Gbps/Tbps) and the demand.
Malware and ransomware are equally popular. The notorious WannaCry global ransomware campaign had its command-and-control (C2) servers hosted on the darknet. In addition, just like their botnet and DDoS brethren, malware and ransomware have their own “pay for play” services, which dramatically simplify the process of launching a ransomware campaign. Numerous ransomware services exist that allow a user to simply specify the ransom amount and add notes/letters; the user is then provided a simple executable to send to victims.
Lastly, an array of services is available allowing nearly anyone with access to the darknet (and the ability to convert money to bitcoin for payment) to contract hackers for their work. Services include hacking emails, hacking social media accounts and designing malicious software.
Many of these services revolve around the education vertical. As educational institutions move their teaching tools and testing online, a new generation of students has emerged that is willing to purchase the services of hackers to change grades or launch DDoS attacks on schools’ networks to postpone tests.
New cyber-security threats require new solutions. New solutions require a project to implement them. The problems and solutions seem infinite while budgets remain bounded. Therefore, the challenge becomes how to identify the priority threats, select the solutions that deliver the best ROI and stretch dollars to maximize your organization’s protection. Consultants and industry analysts can help, but they too can be costly options that don’t always provide the correct advice.
So how best to simplify the decision-making process? Use an analogy. Consider that every cybersecurity solution has a counterpart in the physical world. To illustrate this point, consider the security measures at banks. They make a perfect analogy, because banks are just like applications or computing environments; both contain valuables that criminals are eager to steal.
The first line of defense at a bank is the front door, which is designed to allow people to enter and leave while providing a first layer of defense against thieves. Network firewalls fulfill the same role within the realm of cyber security. They allow specific types of traffic to enter an organization’s network but block mischievous visitors from entering. While firewalls are an effective first line of defense, they’re not impervious. Just like surreptitious robbers such as Billy the Kid or John Dillinger, SSL/TLS-based encrypted attacks or nefarious malware can sneak through this digital “front door” via a standard port.
Past the entrance there is often a security guard, the equivalent of an IPS or anti-malware device. This “security guard,” typically an anti-malware and/or heuristics-based IPS function, seeks to identify unusual behavior or other indicators that trouble has entered the bank, such as somebody wearing a ski mask or carrying a concealed weapon.
Once the hacker gets past these perimeter security measures, they find themselves at the presentation layer of the application, or in the case of a bank, the teller. There is security here as well: first, authentication (do you have an account?) and second, two-factor authentication (an ATM card and security PIN). IPS and anti-malware devices work in concert with SIEM management solutions to serve as security cameras, performing additional security checks. Just like a bank leveraging the FBI’s Most Wanted List, these solutions leverage crowdsourcing and big-data analytics to analyze data from a massive global community and identify bank-robbing malware in advance.
A robber will often demand access to the bank’s vault. In the realm of IT, this is the database, where valuable information such as passwords, credit card or financial transaction information or healthcare data is stored. There are several ways of protecting this data, or at the very least, monitoring it. Encryption and database application monitoring solutions are the most common.
Adapting for the Future: DDoS Mitigation
To understand how and why cyber-security models will have to adapt to meet future threats, let’s outline three obstacles they’ll have to overcome in the near future: advanced DDoS mitigation, encrypted cyber-attacks, and DevOps and agile software development.
A DDoS attack is any cyber-attack that overwhelms a company’s website or network and impairs the organization’s ability to conduct business. Take an e-commerce business, for example. If somebody wanted to prevent the organization from conducting business, it’s not necessary to hack the website, but simply to make it difficult for visitors to access it.
Leveraging the bank analogy, this is why banks and financial institutions leverage multiple layers of security: it provides an integrated, redundant defense designed to meet a multitude of potential situations in the unlikely event a bank is robbed. This also includes the ability to quickly and effectively communicate with law enforcement. In the world of cyber security, multi-layered defense is also essential. Why? Because preparing for “common” DDoS attacks is no longer enough. With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever. This is why hybrid protection, which combines both on-premise and cloud-based mitigation services, is critical.
Why two systems? Because it offers the best of both worlds. When a DDoS solution is deployed on-premise, organizations benefit from immediate and automatic attack detection and mitigation. Within seconds of the initiation of a cyber-assault, the online services are protected and the attack is mitigated. However, an on-premise DDoS solution cannot handle volumetric network floods that saturate the Internet pipe. These attacks must be mitigated from the cloud.
Hybrid DDoS protections aspire to offer best-of-breed attack mitigation by combining on-premise and cloud mitigation into a single, integrated solution. The hybrid solution chooses the right mitigation location and technique based on attack characteristics. In the hybrid solution, attack detection and mitigation starts immediately and automatically using the on-premise attack mitigation device. This stops various attacks from diminishing the availability of the online services. All attacks are mitigated on-premise, unless they threaten to block the Internet pipe of the organization. In case of pipe saturation, the hybrid solution activates cloud mitigation and the traffic is diverted to the cloud, where it is scrubbed before being sent back to the enterprise.
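The diversion logic described above can be sketched in a few lines. This is a simplified illustration, not any vendor's actual algorithm; the 80% saturation threshold and the function name are assumptions chosen for the example.

```python
def choose_mitigation(inbound_mbps: float, pipe_capacity_mbps: float,
                      divert_threshold: float = 0.8) -> str:
    """Pick the mitigation location for the current attack.

    Mitigate on-premise by default; divert to the cloud scrubbing center
    once attack traffic threatens to saturate the Internet pipe (here,
    a hypothetical threshold of 80% of pipe capacity).
    """
    if inbound_mbps >= divert_threshold * pipe_capacity_mbps:
        return "cloud"        # pipe saturation imminent: divert and scrub
    return "on-premise"       # immediate, automatic local mitigation

# A 2 Gbps flood against a 10 Gbps pipe stays on-premise...
print(choose_mitigation(2_000, 10_000))   # on-premise
# ...while a 9 Gbps flood triggers cloud diversion.
print(choose_mitigation(9_000, 10_000))   # cloud
```

In a real deployment this decision is made continuously by the on-premise device, which also shares attack signatures with the cloud so scrubbing can begin with context rather than from scratch.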
An ideal hybrid solution also shares essential information about the attack between on-premise mitigation devices and cloud devices to accelerate and enhance the mitigation of the attack once it reaches the cloud.
Inspecting Encrypted Data
Companies have been encrypting data for well over 20 years. Today, over 50% of Internet traffic is encrypted. SSL/TLS encryption is still the most effective way to protect data, as it ties the encryption to both the source and destination. This is a double-edged sword, however. Hackers are now leveraging encryption to create new, stealthy attack vectors for malware infection and data exfiltration. In essence, they’re a wolf in sheep’s clothing. To stop hackers from leveraging SSL/TLS-based cyber-attacks, organizations need significant computing resources to decrypt and inspect communications and ensure they’re not carrying malicious payloads. These growing resource requirements make it challenging for anything but purpose-built hardware to conduct inspection.
The equivalent in the banking world is twofold. If somebody were to enter wearing a ski mask, that person probably wouldn’t be allowed to conduct a transaction, or secondly, there can be additional security checks when somebody enters a bank and requests a large or unique withdrawal.
Dealing with DevOps and Agile Software Development
Lastly, how do we ensure that, as applications become more complex, they don’t become increasingly vulnerable either from coding errors or from newly deployed functionality associated with DevOps or agile development practices? The problem is most cyber-security solutions focus on stopping existing threats. To use our bank analogy again, existing security solutions mean that (ideally), a career criminal can’t enter a bank, someone carrying a concealed weapon is stopped or somebody acting suspiciously is blocked from making a transaction. However, nothing stops somebody with no criminal background or conducting no suspicious activity from entering the bank. The bank’s security systems must be updated to look for other “indicators” that this person could represent a threat.
In the world of cyber-security, the key is implementing a web application firewall that adapts to evolving threats and applications. A WAF accomplishes this by automatically detecting and protecting new web applications as they are added to the network via automatic policy generation. It should also minimize both false positives and false negatives. Why? Because just like a bank, web applications are accessed both by legitimate users and by undesired attackers (malignant users whose goal is to harm the application and/or steal data). One of the biggest challenges in protecting web applications is accurately differentiating between the two, identifying and blocking security threats without disturbing legitimate traffic.
Adaptability is the Name of the Game
The world we live in can be a dangerous place, both physically and digitally. Threats are constantly changing, forcing both financial institutions and organizations to adapt their security solutions and processes. When contemplating the next steps, consider the following:
- Use common sense and logic. The marketplace is saturated with offerings. Understand how a cybersecurity solution will fit into your existing infrastructure and the business value it will bring by keeping your organization up and running and your customers’ data secure.
- Understand the long-term TCO of any cyber security solution you purchase.
- The world is changing. Ensure that any cyber security solution you implement is designed to adapt to the constantly evolving threat landscape and your organization’s operational needs.
Today, many organizations are now realizing that DDoS defense is critical to maintaining an exceptional customer experience. Why? Because nothing diminishes load times or impacts the end user’s experience more than a cyberattack.
As a facilitator of access to content and networks, proxy servers have become a focal point for those seeking to cause grief to organizations via cyberattacks due to the fallout a successful assault can have.
Attacking the CDN Proxy
New vulnerabilities in content delivery networks (CDNs) have left many wondering if the networks themselves are vulnerable to a wide variety of cyberattacks. Here are five cyber “blind spots” that are often attacked – and how to mitigate the risks:
Increase in dynamic content attacks. Attackers have discovered that the treatment of dynamic content requests is a major blind spot in CDNs. Since dynamic content is not stored on CDN servers, all requests for dynamic content are sent to the origin’s servers. Attackers take advantage of this behavior to generate attack traffic containing random parameters in HTTP GET requests. CDN servers immediately forward this attack traffic to the origin, expecting the origin’s servers to handle the requests. In many cases, however, the origin’s servers do not have the capacity to handle all those attack requests and fail to provide online services to legitimate users, creating a denial-of-service situation. Many CDNs can rate-limit the number of dynamic requests to the server under attack, but because they cannot distinguish attackers from legitimate users, the rate limit ends up blocking legitimate users as well.
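One way origin-side protections can do better than a blunt rate limit is to flag requests whose query parameters fall outside the set the application actually uses, since random cache-busting parameters rarely match that baseline. The sketch below is a simplified illustration under that assumption; the parameter baseline and function name are invented for the example, and a production system would learn the baseline rather than hard-code it.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical baseline of parameters the application legitimately uses.
KNOWN_PARAMS = {"page", "sort", "lang"}

def is_suspicious(url: str) -> bool:
    """Flag dynamic-content requests carrying parameters outside the
    learned baseline, a common sign of random-parameter cache-busting
    floods aimed at the origin."""
    params = parse_qs(urlparse(url).query)
    return any(name not in KNOWN_PARAMS for name in params)

assert not is_suspicious("/search?page=2&sort=asc")
assert is_suspicious("/search?x9f3k=0a8b")   # random cache-busting parameter
```

Scoring requests this way lets the mitigation drop only the anomalous traffic instead of rate-limiting legitimate users along with the attackers.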
SSL-based DDoS attacks. SSL-based DDoS attacks leverage this cryptographic protocol to target the victim’s online services. These attacks are easy to launch and difficult to mitigate, making them a hacker favorite. To detect and mitigate SSL-based attacks, CDN servers must first decrypt the traffic using the customer’s SSL keys. If the customer is not willing to provide the SSL keys to its CDN provider, then the SSL attack traffic is redirected to the customer’s origin. That leaves the customer vulnerable to SSL attacks. Such attacks that hit the customer’s origin can easily take down the secured online service.
During DDoS attacks, when web application firewall (WAF) technologies are involved, CDNs also have a significant scalability weakness in terms of how many SSL connections per second they can handle. Serious latency issues can arise. PCI and other security compliance issues are also a problem because they limit the data centers that can be used to service the customer. This can increase latency and cause audit issues.
Keep in mind these problems are exacerbated with the massive migration from RSA algorithms to ECC and DH-based algorithms.
Attacks on non-CDN services. CDN services are often offered only for HTTP/S and DNS applications. Other online services and applications in the customer’s data center, such as VoIP, mail, FTP and proprietary protocols, are not served by the CDN. Therefore, traffic to those applications is not routed through the CDN. Attackers are taking advantage of this blind spot and launching attacks on such applications. They are hitting the customer’s origin with large-scale attacks that threaten to saturate the Internet pipe of the customer. All the applications at the customer’s origin become unavailable to legitimate users once the internet pipe is saturated, including ones served by the CDN.
Direct IP attacks. Even applications that are served by a CDN can be attacked once attackers launch a direct hit on the IP address of the web servers at the customer’s data center. These can be network-based ﬂood attacks such as UDP ﬂoods or ICMP ﬂoods that will not be routed through CDN services and will directly hit the customer’s servers. Such volumetric network attacks can saturate the Internet pipe. That results in degradation to application and online services, including those served by the CDN.
Web application attacks. CDN protection from threats is limited, exposing customers’ web applications to data leakage, theft and other threats common to web applications. Most CDN-based WAF capabilities are minimal, covering only a basic set of predefined signatures and rules. Many CDN-based WAFs do not learn HTTP parameters and do not create positive security rules, so they cannot protect against zero-day attacks and known threats. For companies that do provide tuning of the web applications in their WAF, the cost of this level of protection is extremely high. In addition to these significant blind spots, most CDN security services are simply not responsive enough, resulting in security configurations that take hours to deploy manually. Many rely on technologies (e.g., rate limiting) that have proven inefficient in recent years and lack capabilities such as network behavioral analysis, challenge-response mechanisms and more.
Finding the Watering Holes
Watering hole attack vectors are all about finding the weakest link in a technology chain. These attacks target automated processes that are often forgotten, overlooked or poorly attended to. They can lead to unbelievable devastation. What follows is a list of sample watering hole targets:
- App stores
- Security update services
- Domain name services
- Public code repositories to build websites
- Web analytics platforms
- Identity and access single sign-on platforms
- Open source code commonly used by vendors
- Third-party vendors that participate in the website
The DDoS attack on Dyn in 2016 has been the best example of the watering hole vector technique to date. However, we believe this vector will gain momentum heading into 2018 and 2019 as automation begins to pervade every aspect of our lives.
Attacking from the Side
In many ways, side channels are the most obscure and obfuscated attack vectors. This technique attacks the integrity of a company’s site through a variety of tactics:
- DDoS the company’s analytics provider
- Brute-force attack against all users or against all of the site’s third-party companies
- Port the admin’s phone and steal login information
- Massive load on “page dotting”
- Large botnets to “learn” ins and outs of a site