
Botnets, DDoS

Botnets: DDoS and Beyond

June 20, 2019 — by Daniel Smith


Traditionally, DDoS is an avenue of profit for botherders. But today’s botnets have evolved to include several attack vectors other than DDoS that are more profitable. And just as any business-oriented person would do, attackers follow the money.

As a result, botherders are targeting enterprise and network software, since residential devices have become oversaturated. The days of simple credentials-based attacks are long behind us. Attackers are now looking for enterprise devices that will help expand their offerings and assist in developing additional avenues of profit.

A few years ago, when IoT botnets became all the rage, they mainly targeted residential devices with simple credential attacks. (This is something the DDoS industry does not prevent from happening; instead, we take the position of mitigating attacks coming from infected residential devices.)

[You may also like: IoT Botnets on the Rise]

From Personal to Enterprise

But now that attackers are targeting enterprise devices, the industry must reevaluate the growing threat behind today’s botnets.

We now have to focus not only on protecting the network from external attacks, but also on preventing the devices and servers found in a typical enterprise network from being infected by botnet malware and leveraged to launch attacks.

In a blog posted on MIT Technology Review titled Inside the business model for botnets, C.G.J. Putman and colleagues from the University of Twente in the Netherlands detail the economics of a botnet. The article sheds some light on why DDoS is fading as a botnet revenue stream while other attack vectors grow.

In their report, the team states that DDoS attacks from a botnet with 30,000 infected devices could generate around $26,000 a month. While that might seem like a lot, it’s actually a drop in the bucket compared to the revenue other attack vectors can generate from a botnet.

For example, the same researchers report that a spamming botnet with 10,000 infected devices can generate $300,000 a month. The most profitable? Click fraud, which can generate over $20 million per month in profit.
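To make the gap concrete, here is the back-of-the-envelope math per infected device, using only the figures cited above (click fraud is omitted because no botnet size is given for it):

```python
# Rough per-device monthly revenue, using only the figures cited above.
# Purely illustrative arithmetic, not data from the Putman paper itself.

vectors = {
    "DDoS-for-hire": {"revenue": 26_000,  "bots": 30_000},
    "Spam":          {"revenue": 300_000, "bots": 10_000},
}

for name, v in vectors.items():
    per_bot = v["revenue"] / v["bots"]
    print(f"{name:15s} ~${per_bot:,.2f} per infected device per month")

# DDoS-for-hire   ~$0.87 per infected device per month
# Spam            ~$30.00 per infected device per month
```

Roughly $0.87 versus $30.00 per device per month: the same infected machine is worth far more to its herder doing something other than DDoS.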

[You may also like: Ad Fraud 101: How Cybercriminals Profit from Clicks]

To put that in perspective, AppleJ4ck and P1st from Lizard Squad made close to $600,000 over two years operating a stresser service called vDoS.

So let me ask this: If you are a botherder risking your freedom for profit, are you going to construct a botnet strictly for DDoS attacks or will you construct a botnet with more architecturally diverse devices to support additional vectors of profit?

Exactly. Botherders will continue to maximize their efforts and profitability by targeting enterprise devices.

Read the “IoT Attack Handbook – A Field Guide to Understanding IoT Attacks from the Mirai Botnet and its Modern Variants” to learn more.

Download Now

DDoS, Security

Why Hybrid Always-On Protection Is Your Best Bet

June 19, 2019 — by Eyal Arazi


Users today want more. With online competitors only a click away, customers expect everything to be better, faster, and cheaper. One key component of the user experience is service availability. Customers expect applications and online services to be constantly available and responsive.

The problem, however, is that a new generation of larger and more sophisticated Distributed Denial of Service (DDoS) attacks is making DDoS protection a more challenging task than ever before. Massive IoT botnets are resulting in ever-larger volumetric DDoS attacks, while more sophisticated application-layer attacks find new ways of exhausting server resources. Above all, the ongoing shift to encrypted traffic is creating a new challenge with potent SSL DDoS floods.

Traditional DDoS defenses – whether premise-based or cloud-based – provide incomplete solutions that force inherent trade-offs between high-capacity volumetric protection, protection against sophisticated application-layer DDoS attacks, and handling of SSL certificates. The solution, therefore, is to adopt a new hybrid DDoS protection model that combines premise-based appliances with an always-on cloud service.

Full Protection Requires Looking Both Ways

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.

[You may also like: DDoS Protection Requires Looking Both Ways]

Attacks such as large-file DDoS attacks, ACK floods, and scanning attacks exploit the outbound communication channel and cannot be identified by looking at ingress traffic alone. Such attacks send a small number of inbound requests that have an asymmetric, disproportionate impact on either the outbound channel or computing resources inside the network.
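As a rough illustration of what outbound visibility buys you, a defender can watch for clients whose response traffic dwarfs their request traffic. The counters and thresholds below are illustrative assumptions, not settings from any particular product:

```python
# Minimal sketch: flag clients whose outbound/inbound byte ratio is wildly
# asymmetric, a rough signal for attacks (e.g., large-file floods) that are
# invisible if you only inspect ingress traffic. The counter feed and the
# thresholds are assumptions made for this sketch.

def asymmetric_clients(counters, ratio_threshold=100.0, min_outbound=10_000_000):
    """counters: {client_ip: (inbound_bytes, outbound_bytes)}"""
    flagged = []
    for ip, (inbound, outbound) in counters.items():
        if outbound >= min_outbound and outbound / max(inbound, 1) >= ratio_threshold:
            flagged.append(ip)
    return flagged

sample = {
    "203.0.113.7": (12_000, 9_800_000_000),   # tiny requests, huge responses
    "198.51.100.4": (5_000_000, 6_000_000),   # ordinary browsing profile
}
print(asymmetric_clients(sample))  # ['203.0.113.7']
```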

SSL is Creating New Challenges

On top of that, SSL/TLS traffic encryption is adding another layer of complexity. Within a short time, the majority of internet traffic has become encrypted. Traffic encryption helps secure customer data, and users now expect security to be part of the service experience. According to statistics published by the Let’s Encrypt project, nearly 80% of worldwide internet traffic is already encrypted, and the rate is constantly growing.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

Ironically, while SSL/TLS is critical for securing user data, it also creates significant management challenges and exposes services to a new generation of powerful DDoS attacks (a brief detection sketch follows the list):

  • Increased Potency of DDoS Attacks: SSL/TLS connections require up to 15 times more resources from the target server than from the requesting host. This means that hackers can launch devastating attacks using only a small number of connections and quickly overwhelm server resources with SSL floods.
  • Masking of Data Payload: Moreover, encryption masks – by definition – the internal contents of traffic requests, preventing deep inspection of packets against malicious traffic. This limits the effectiveness of anti-DDoS defense layers, and the types of attacks they can detect. This is particularly true for application-layer (L7) DDoS attacks which hide under the coverage of SSL encryption.
  • SSL Key Exposure: Many organizational, national, or industry regulations forbid SSL keys from being shared with third-party entities. This creates a unique challenge for organizations that must provide the most secure user experience while also protecting their SSL keys from exposure.
  • Latency and Privacy Concerns: Offloading of SSL traffic in the cloud is usually a complex and time-consuming task. Most cloud-based SSL DDoS solutions require full decryption of customer traffic by the cloud provider, thereby compromising user privacy and adding latency to customer communications.
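Because encrypted payloads resist deep inspection, one of the few signals a defender retains is handshake behavior itself. Here is a deliberately simplified sketch of per-source handshake rate-limiting; the event feed and the limit are assumptions made purely for illustration:

```python
# Toy illustration: count TLS handshakes per source, since encrypted payloads
# can't be deep-inspected. The event feed and the per-minute limit are
# assumptions for the sketch, not a recommended production setting.
from collections import defaultdict

HANDSHAKES_PER_MINUTE_LIMIT = 120

def suspicious_sources(handshake_events):
    """handshake_events: iterable of (minute_bucket, source_ip) tuples."""
    counts = defaultdict(int)
    for minute, ip in handshake_events:
        counts[(minute, ip)] += 1
    return {ip for (minute, ip), n in counts.items()
            if n > HANDSHAKES_PER_MINUTE_LIMIT}

events = [(0, "203.0.113.9")] * 500 + [(0, "198.51.100.2")] * 30
print(suspicious_sources(events))  # {'203.0.113.9'}
```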

Existing Solutions Provide Partial Coverage

The problem, however, is that existing anti-DDoS defenses cannot deliver high-capacity volumetric protection together with the bi-directional protection that sophisticated attacks require.

On-Premise Appliances provide a high level of protection against a wide variety of DDoS attacks, with very low latency and fast response. In addition, being on-premise, they allow companies to deal with SSL-based attacks without exposing their encryption keys to the outside world. Since they have visibility into both inbound and outbound traffic, they offer bi-directional protection against symmetric DDoS attacks. However, physical appliances can’t deal with the large-scale volumetric attacks that have become commonplace in the era of massive IoT botnets.

[You may also like: How to (Securely) Share Certificates with Your Cloud Security Provider]

Cloud-based DDoS protection services, on the other hand, possess the bandwidth to deal with large-scale volumetric attacks. However, they offer visibility only into the inbound communication channel. Thus, they have a hard time protecting against bi-directional DDoS attacks. Moreover, cloud-based SSL DDoS defenses – if the vendor has those at all – frequently require that the organization upload their SSL certificates online, increasing the risk of those keys being exposed.

The Optimal Solution: Hybrid Always-On Approach

For companies that place a high premium on the user experience, and wish to avoid even the slightest possible downtime as a result of DDoS attacks, the optimal solution is to deploy an always-on hybrid solution.

The hybrid approach to DDoS protection combines an on-premise hardware appliance with always-on cloud-based scrubbing capacity. This helps ensure that services are protected against any type of attack.

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

Hybrid Always-On DDoS Protection

Compared to the pure-cloud always-on deployment model, the hybrid always-on approach adds multi-layered protection against symmetric DDoS attacks which saturate the outbound pipe, and allows for maintaining SSL certificates on-premise.

Benefits of the Hybrid Always-On Model

  • Multi-Layered DDoS Protection: The combination of a premise-based hardware mitigation device coupled with cloud-based scrubbing capacity offers multi-layered protection at different levels. If an attack somehow gets through the cloud protection layer, it will be stopped by the on-premise appliance.
  • Constant, Uninterrupted Volumetric Protection: Since all traffic passes through a cloud-based scrubbing center at all times, the cloud-based service provides uninterrupted, ongoing protection against high-capacity volumetric DDoS attacks.
  • Bi-Directional DDoS Protection: While cloud-based DDoS protection services inspect only the inbound traffic channel, the addition of a premise-based appliance allows organizations to inspect the outbound channel as well, thereby protecting themselves against two-way DDoS attacks that can saturate the outbound pipe or otherwise require visibility into return traffic in order to identify attack patterns.
  • Reduced SSL Key Exposure: Many national or industry regulations require that encryption keys not be shared with anyone else. The inclusion of a premise-based hardware appliance allows organizations to protect themselves against encrypted DDoS attacks while keeping their SSL keys in-house.
  • Decreased Latency for Encrypted Traffic: SSL offloading in the cloud is frequently a complex and time-consuming affair, which adds much latency to user communications. Since inspection of SSL traffic in the hybrid always-on model is done primarily by the on-premise hardware appliance, users enjoy faster response times and lower latency.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Guaranteeing service availability while simultaneously ensuring the quality of the customer experience is a multi-faceted and complex proposition. Organizations are challenged by growth in the size of DDoS attacks, the increase in sophistication of application-layer DDoS attacks, and the challenges brought about by the shift to SSL encryption.

Deploying a hybrid always-on solution allows for both inbound and outbound visibility into traffic, enhanced protections for application-layer and encrypted traffic, and allows for SSL keys to be kept in-house, without exposing them to the outside.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Security

Executives Are Turning Infosec into a Competitive Advantage

June 18, 2019 — by Anna Convery-Pelletier


Companies are more connected to their customers now than ever before.  After spending billions to digitally transform themselves, organizations have exponentially increased the number of touchpoints as well as the frequency of communication they have with their customer base. 

Thanks to digital transformation, organizations are more agile, flexible, efficient, and customer-centric. However, with greater access to customers comes an equal measure of increased vulnerability. We have all seen the havoc that a data breach can wreak upon a brand; hackers are the modern-day David to the Goliaths of the Fortune 1000 world. As a result, we have experienced a fundamental shift in management philosophy around the role that information security plays across organizations. The savviest leaders have shifted from a defensive to offensive position and are turning information security into a competitive market advantage.

Each year, Radware surveys C-Suite executives to measure leadership sentiment around information security, its costs and business impacts.  This year, we studied the views and insights from 263 senior leaders at organizations primarily with revenue in excess of 1 billion USD/EUR/GBP around the world. Respondents represented 30% financial services, 21% retail/hospitality, 21% telecom/service provider, 7% manufacturing/distribution, 7% computer products/services, 6% business services/consulting, and 9% other.

This year’s report shines a spotlight on increased sophistication of management philosophy for information security and security strategy. While responsibility for cybersecurity continues to be spearheaded by the CIO and CISO, it is also being shared throughout the entire C-Suite.

[You may also like: How Cyberattacks Directly Impact Your Brand]

In fact, 72% of executives responding to our survey claimed that it’s a topic discussed in every board meeting. 82% of responding CEOs reported high levels of knowledge around information security, as did 72% of non-technical C-Suite titles – an all-time high! Security issues now influence brand reputation, brand trust, and consumer trust, which forces organizations to infuse information security into core business functions such as customer experience, marketing and business operations.

All with good reason. The average cost of a cyberattack is now roughly $4.6M, and the number of organizations that claim attacks cost them more than $10M has doubled from 2018 to 2019.

Customers are quite aware of the onslaught of data breaches that have affected nearly every industry, from banking to online dating, throughout the past ten years. Even though many governments have passed laws to protect consumers against the misuse of their personally identifiable information (PII), such as GDPR, CASL and HIPAA, companies still can’t keep up with the regulations.

[You may also like: The Costs of Cyberattacks Are Real]

Case in point: 74% of European executives report they have experienced a data breach in the past 12 months, compared to 53% in America and 44% in APAC. Half (52%) of executives in Europe have experienced a self-reported incident under GDPR in the past year.  

Consumer confidence is at an all-time low. These same customers want to understand what companies have done to secure their products and services, and they are willing to take their business elsewhere if that brand promise is broken. Customers are increasingly taking action following a breach.

[You may also like: How Do Marketers Add Security into Their Messaging?]

Reputation management is a critical component of organizational management. Savvy leaders recognize the connection between information security and reputation management and have subsequently adopted information security as a market advantage.

So How Do Companies Start to Earn Back Trust?

These leaders recognize that security must become part of the brand promise. Our research shows that 75% of executives claim security is a key part of their product marketing messages. 50% of companies surveyed offer dedicated security products and services to their customers. Additionally, 41% offer security features as add-ons within their products and services, and another 7% are considering building security services into their products.

Balancing Security Concerns with Deployment of Private and Public Clouds

Digital transformation drove a mass migration into public and private cloud environments.  Organizations were wooed by the promise of flexibility, streamlined business operations, improved efficiency, lower operational costs, and greater business agility. Rightfully so, as cloud environments have largely fulfilled their promises.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

However, along with these incredible benefits comes far greater risk than most organizations anticipated. While 54% of respondents report that improving information security is one of their top three reasons for initiating digital transformation processes, 73% of executives indicate they have experienced unauthorized access to their public cloud assets. What is more alarming is how these unauthorized access incidents have occurred.

The technical sophistication of the modern business world has eroded the trust between brands and their customers, opening the door for a new conversation around security. 

Leading organizations have already begun to weave security into the very fabric of their culture – and it’s evidenced by going to market with security-centered marketing messages (as Apple’s new ad campaigns demonstrate), sharing responsibility for information security across the entire leadership team, creating privacy-centric business policies and processes, making information security and customer data privacy part of an organization’s core values, and more. The biggest challenge organizations still face is how best to execute, but that is a topic for another blog…

To learn more about the insights and perspectives on information security from the C-Suite, please download the report.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Security

IDBA: A Patented Bot Detection Technology

June 13, 2019 — by Radware


Over half of all internet traffic is generated by bots — some legitimate, some malicious. Competitors and adversaries alike deploy “bad” bots that leverage different methods to achieve nefarious objectives. This includes account takeover, scraping data, denying available inventory and launching denial-of-service attacks with the intent of stealing data or causing service disruptions.

These attacks often go undetected by conventional mitigation systems and strategies because bots have evolved from basic scripts into large-scale distributed bots with human-like interaction capabilities that evade detection mechanisms. Staying ahead of the threat landscape requires more sophisticated, advanced capabilities to accurately detect and mitigate these threats. One of the key technical capabilities required to stop today’s most advanced bots is intent-based deep behavioral analysis (IDBA).

What Exactly is IDBA?

IDBA is a major step forward in bot detection technology because it performs behavioral analysis at the higher abstraction level of intent, unlike the commonly used, shallow interaction-based behavioral analysis. For example, account takeover is an intent, while a mouse pointer moving in a straight line is an interaction.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

Capturing intent enables IDBA to detect advanced bots with significantly higher accuracy. IDBA is designed to leverage the latest developments in deep learning.

More specifically, IDBA uses semi-supervised learning models to overcome the challenges of inaccurately labeled data, bot mutation and the anomalous behavior of human users. And it leverages intent encoding, intent analysis and adaptive-learning techniques to accurately detect large-scale distributed bots with sophisticated human-like interaction capabilities.

[You may also like: Bot Managers Are a Cash-Back Program For Your Company]

3 Stages of IDBA

A visitor’s journey through a web property needs to be analyzed in addition to interaction-level characteristics such as mouse movements. Using this richer behavioral information, an incoming visitor can be classified as a human or a bot in three stages (a toy sketch of the pipeline follows the list):

  • Intent encoding: The visitor’s journey through a web property is captured through signals such as mouse or keystroke interactions, URL and referrer traversals, and time stamps. These signals are encoded using a proprietary, deep neural network architecture into an intent encoding-based, fixed-length representation. The encoding network jointly achieves two objectives: to be able to represent the anomalous characteristics of completely new categories of bots and to provide greater weight to behavioral characteristics that differ between humans and bots.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

  • Intent analysis: Here, the intent encoding of the user is analyzed using multiple machine learning modules in parallel. A combination of supervised and unsupervised learning-based modules is used to detect both known and unknown patterns.
  • Adaptive learning: The adaptive-learning module collects the predictions made by the different models and takes action on bots based on these predictions. In many cases, the action involves presenting a challenge to the visitor, like a CAPTCHA or an SMS OTP, that provides a feedback mechanism (i.e., CAPTCHA solved). This feedback is incorporated to improve the decision-making process. Decisions can be broadly categorized into two types of tasks.
    • Determining thresholds: The thresholds to be chosen for anomaly scores and classification probabilities are determined through adaptive threshold control techniques.
    • Identifying bot clusters: Selective incremental blacklisting is performed on suspicious clusters. The suspicion scores associated with the clusters (obtained from the collusion detector module) are used to set prior bias.
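To make the three stages easier to picture, here is a toy pipeline in the same shape. To be clear, this is not Radware’s patented IDBA implementation: every feature, score and threshold below is invented purely for illustration.

```python
# Schematic skeleton of the three stages described above. NOT the patented
# IDBA algorithm -- just a toy pipeline showing how an intent encoding,
# parallel analysis modules, and an adaptive decision step could fit together.
# All features, weights, and thresholds are invented for this sketch.

def encode_intent(session):
    """Stage 1: collapse journey signals into a fixed-length vector."""
    return [
        session["pages_visited"],
        session["avg_seconds_per_page"],
        session["mouse_events"],
        session["straight_line_moves"],
    ]

def analyze_intent(vector):
    """Stage 2: run (here: trivial) detection modules in parallel."""
    pages, dwell, mouse, straight = vector
    anomaly_score = 1.0 if (mouse and straight / mouse > 0.9) else 0.1
    classifier_prob = 0.95 if dwell < 1.0 else 0.2
    return anomaly_score, classifier_prob

def adaptive_decision(scores, threshold=0.5):
    """Stage 3: act; real systems feed challenge outcomes back into thresholds."""
    return "challenge" if max(scores) > threshold else "allow"

session = {"pages_visited": 40, "avg_seconds_per_page": 0.3,
           "mouse_events": 100, "straight_line_moves": 98}
print(adaptive_decision(analyze_intent(encode_intent(session))))  # challenge
```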

[You may also like: The Big, Bad Bot Problem]

IDBA or Bust!

Current bot detection and classification methodologies are ineffective in countering the threats posed by rapidly evolving and mutating sophisticated bots.

Bot detection techniques that use interaction-based behavioral analysis can identify Level 3 bots but fail to detect advanced Level 4 bots that have human-like interaction capabilities. The unavailability of correctly labeled data for Level 4 bots, bot mutations and the anomalous behavior of human visitors from disparate industry domains all require the development of semi-supervised models that work at the higher abstraction level of intent rather than interaction alone.

IDBA leverages a combination of intent encoding, intent analysis and adaptive-learning techniques to identify the intent behind attacks perpetrated by massively distributed human-like bots.

Read “How to Evaluate Bot Management Solutions” to learn more.

Download Now

Botnets

What You Need to Know About Botnets

June 12, 2019 — by Radware


Botnets comprised of vulnerable IoT devices, combined with widely available DDoS-as-a-Service tools and anonymous payment mechanisms, have pushed denial-of-service attacks to record-breaking volumes.

A single attack can result in downtime, lost business and significant financial damages. Understanding the current tactics, techniques and procedures used by today’s cyber criminals is key to defending your network.

Watch this latest video from Radware’s Hacker’s Almanac to learn more about Botnets and how you can help protect your business from this type of sabotage.

Download “Hackers Almanac” to learn more.

Download Now

Application Security, WAF, Web Application Firewall

Bot Manager vs. WAF: Why You Actually Need Both

June 6, 2019 — by Ben Zilberman


Over 50% of web traffic is composed of bots, and 89% of organizations have suffered attacks against web applications. Websites and mobile apps are two of the biggest revenue drivers for businesses and help solidify a company’s reputation with tech-savvy consumers. However, these digital engagement tools are coming under increasing threat from an array of sophisticated cyberattacks, including bots.

While a percentage of bots are used to automate business processes and tasks, others are designed for malicious purposes, including account takeover, content scraping, payment fraud and denial-of-service attacks. Often, these attacks are carried out by competitors looking to undermine a company’s competitive advantage, steal information or drive up its online marketing costs.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

When Will You Need a Bot Detection Solution?

Sophisticated, next-generation bots can evade traditional security controls and go undetected by application owners. However, their impact can be noticed, and there are several indicators that can alert a company to malicious bot activity.

Why a WAF Isn’t an Effective Bot Detection Tool

WAFs are primarily created to safeguard websites against application vulnerability exploitations like SQL injection, cross-site scripting (XSS), cross-site request forgery, session hijacking and other web attacks. WAFs typically feature basic bot mitigation capabilities and can block bots based on IP addresses or device fingerprinting.
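For a sense of how shallow such checks are, here is a minimal sketch of the fingerprint-based blocking described above; the header fields and the blocklist entry are hypothetical, invented for this example:

```python
# Illustration of a basic WAF-style bot check: a static device fingerprint
# hashed from request headers, compared against a blocklist. The header set
# and blocklist entry are hypothetical assumptions for this sketch.
import hashlib

BLOCKED_FINGERPRINTS = {"9f86d081884c7d65"}  # hypothetical known-bad entry

def device_fingerprint(headers):
    raw = "|".join([
        headers.get("User-Agent", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
    ])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def is_blocked(headers):
    return device_fingerprint(headers) in BLOCKED_FINGERPRINTS

# A next-generation bot defeats this by simply randomizing its headers,
# producing a fresh, unlisted fingerprint on every request.
```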

However, WAFs fall short when facing more advanced, automated threats. Moreover, next-generation bots use sophisticated techniques to remain undetected, such as mimicking human behavior, abusing open-source tools or generating multiple violations in different sessions.

[You may also like: The Big, Bad Bot Problem]

Against these sophisticated threats, WAFs won’t get the job done.

The Benefits of Synergy

As the complexity of multi-vector cyberattacks increases, security systems must work in concert to mitigate these threats. In the case of application security, a combination of behavioral analytics to detect malicious bot activity and a WAF to protect against vulnerability exploitations and guard sensitive data is critical.

Moreover, many threats can be blocked at the network level before reaching the application servers. This not only reduces risk, but also reduces the processing loads on the network infrastructure by filtering malicious bot traffic.

Read “How to Evaluate Bot Management Solutions” to learn more.

Download Now

Security

5 Things to Consider When Choosing a Bot Management Solution

June 4, 2019 — by Radware


For organizations both large and small, securing the digital experience necessitates a dedicated bot management solution. Regardless of the size of your organization, the escalating intensity of global bot traffic and the increasing severity of its overall impact mean that bot management solutions are crucial to ensuring business continuity and success.

The rise in malicious bot traffic, and more specifically in bots that mimic human-like behavior and require advanced machine learning to mitigate, demands the ability to distinguish the wolf in sheep’s clothing.

We previously covered the basics in bot management evaluation, and urge you to likewise consider the following factors when choosing a solution.

Extensibility and Flexibility

True bot management goes beyond just the website. An enterprise-grade solution should protect all online assets, including your website, mobile apps and APIs. Protecting APIs and mobile apps is equally crucial, as is interoperability with systems belonging to your business partners and vital third-party APIs.

[You may also like: Key Considerations In Bot Management Evaluation]

Flexible Deployment Options

Bot mitigation solutions should be easy to deploy and operate with the existing infrastructure, such as CDNs and WAFs, as well as various technology stacks and application servers. Look for solutions that have a range of integration options, including web servers/CDNs/CMS plugins, SDKs for Java, PHP, .NET, Python, ColdFusion, Node.js, etc., as well as via JavaScript tags and virtual appliances.

A solution with non-intrusive API-based integration capability is key to ensuring minimal impact on your web assets.

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Finally, any solution provider should ideally have multiple globally distributed points of presence to maximize system availability, minimize latency and overcome any internet congestion issues.

Is It a Fully Managed and Self-Reliant Service?

Webpage requests can number in the millions per minute for popular websites, and data processing for bot detection needs to be accomplished in real time. This makes manual intervention impossible — even adding suspected IP address ranges is useless in countering bots that cycle through vast numbers of addresses to evade detection. As a result, a key question that needs to be answered is: does the solution require a specialized team to manage it, or does it operate autonomously after initial setup?

[You may also like: The Big, Bad Bot Problem]

Bot mitigation engines equipped with advanced technologies such as machine learning automate much of their own management, significantly reducing the time and workforce needed to manage bots. Automated responses to threats and a system that does not require manual intervention considerably reduce the total cost of ownership.

Building vs. Buying

Large organizations have resources to develop their own in-house bot management solutions, but most companies do not have the time, resources or money to accomplish that. Building an adaptive and sophisticated bot mitigation solution, which can counter constantly evolving bots, can take years of specialized development.

Financially, it makes business sense to minimize capex and purchase cloud-based bot mitigation solutions on a subscription basis. This can help companies realize the value of bot management without making a large upfront investment.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Data Security, Privacy and Compliance Factors

A solution should ensure that traffic does not leave your network — or, if it does, that the data is in an encrypted and hashed format to maximize privacy and compliance. Ensuring that the bot mitigation solution complies with GDPR requirements pertaining to data at rest and data in transit will help avoid personal data breaches and the risk of financial and legal penalties.
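As a simple illustration of the hashing idea, identifiers can be pseudonymized before leaving the network, so a bot management service can correlate visitors without receiving raw personal data. This is a minimal sketch; real deployments need managed key storage and a proper data-protection review:

```python
# Minimal sketch of the "encrypted and hashed" idea: keyed-hash identifiers
# before they leave your network. Salt handling is deliberately simplified
# here for illustration.
import hashlib, hmac, os

SECRET_SALT = os.urandom(32)  # in practice: a managed, persistent secret

def pseudonymize(identifier: str) -> str:
    # HMAC-SHA256 produces a stable, non-reversible token for a given salt.
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("user@example.com"))
```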

Read “How to Evaluate Bot Management Solutions” to learn more.

Download Now

Security

Are Darknet Take-Downs Effective?

May 29, 2019 — by Daniel Smith


Raids and take-downs have become standard on the Darknet as agents across the world continue to step up enforcement. While these take-downs are generally digital perp walks meant to remind the public that agents are doing their job, we have to ask, are they actually solving the problem?

Moreover, does the Darknet, specifically Tor, really matter in the grand scheme of things? No. Darknet marketplaces only provide a layer of protection. In fact, most of the items you find listed on any given Darknet marketplace can also be found on normal Clearnet markets and forums. In reality, Darknet take-downs have only a temporary impact and do not prevent illicit activity overall.

For example, when you look at the sale of stolen data online you will find several major vendors that have sold databases throughout a variety of darknet marketplaces over the years. But databases containing PII and credentials are also sold on well-known Clearnet sites like Exploit, which is indexed by major search engines and has not been taken down to this day.

[You may also like: Understanding the Darknet and Its Impact on Cybersecurity]

DDoS-as-a-Service

When you look at attack services such as DDoS-as-a-Service, you will find that it was never a major player in Darknet marketplaces, although during the rise of Mirai a few vendors were found offering attack services with the newly publicized botnet. While vendors never fully adopted the use of hidden services, a few still sell overpriced DDoS services on Darknet marketplaces today. This is because most of the bot herders own and operate stresser services on Clearnet websites.

While Operation Power Off, a series of take-downs targeting the DDoS-as-a-Service industry, has been a major success in limiting the number of DDoS attacks, the powerful and customizable source code for IoT botnets like Mirai remains highly available. Because of this, the DDoS-as-a-Service market has become so oversaturated that you can find entry-level vendors selling botnet spots with low bot counts on Instagram.

User advertises Mana Botnet on Instagram

More users with source code, more problems, no matter how many stresser services are taken down.

A Growing Criminal Landscape

In all, digital marketplaces, on both the Clearnet and the Darknet, have allowed the criminal landscape to grow beyond street dealers with limited options, opening several new ways to profit without ever touching the products or services offered.

At the beginning of May, DeepDotWeb, a Clearnet site that listed current Darknet marketplaces and covered news related to the Darknet, was raided and seized by law enforcement for referral linking. Most recently, news broke that BestMixer, a multi-million-dollar cryptocurrency tumbler used to launder cryptocurrency, was also raided.

As the tactics and techniques change, new avenues of profit will always open up.

At this point, it’s clear the landscape has changed dramatically over the last decade, and law enforcement is targeting the new ecosystem — but with limited success, in my opinion. Like low-level hackers, law enforcement is going for the low-hanging fruit, and while that provides great headlines and temporary impact, it doesn’t truly solve anything and only creates more problems down range.

[You may also like: Darknet: Attacker’s Operations Room]

Stay Vigilant

I’ll leave you with an article titled Libertas Market is Available Via I2P.

The use of hidden services (Tor) is only the beginning of the digital underground marketplace. Admins and vendors will continue to seek different methods to avoid law enforcement as long as demand and profits are high.

In other words, don’t fall into a false sense of security; the Darknet isn’t going anywhere anytime soon.

Download “Hackers Almanac” to learn more.

Download Now

Cloud Security

How to (Securely) Share Certificates with Your Cloud Security Provider

May 23, 2019 — by Ben Zilberman


Businesses today know they must handle sensitive data with extra care. But evolving cyber threats combined with regulatory demands can lead executives to hold their proverbial security cards close to their chest. For example, they may be reluctant to share encryption keys and certificates with a third party (i.e., cloud service providers), fearing data theft, MITM attacks or violations of local privacy regulations.

In turn, this can cause conflicts when integrating with cloud security services.

So how can businesses securely share this information as they transition to the cloud?

Encryption Basics

Today, nearly all web applications use HTTPS (encrypted traffic sent to and from the user). Any website with HTTPS service requires a signed SSL certificate. In order to communicate securely via encrypted traffic and complete the SSL handshake, the server requires three components: a private key, a public key (certificate) and a certificate chain.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

These are essential to accomplish the following objectives (a minimal server-side sketch follows the list):

  • Authentication – The client authenticates the server identity.
  • Encryption – A symmetric session key is created by the client and server for session encryption.
  • Private keys stay private – The private key never leaves the server side and is not used as the session key by the client.
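In code, the three components come together when the server builds its TLS context. A minimal sketch using Python’s standard ssl module, with placeholder file paths:

```python
# Minimal server-side TLS setup with Python's standard library. The three
# components from the list above map onto load_cert_chain(): the signed
# certificate, the intermediate chain, and the private key (which never
# leaves this machine). File paths are placeholders.
import socket, ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(
    certfile="fullchain.pem",   # server certificate + intermediate chain
    keyfile="privkey.pem",      # private key, kept on the server side only
)

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with context.wrap_socket(sock, server_side=True) as tls_sock:
        conn, addr = tls_sock.accept()  # the TLS handshake happens here
        conn.close()
```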

Hardware Security Module (HSM)

A Hardware Security Module is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. HSMs are particularly useful for those industries that require high security, businesses with cloud-native applications and global organizations. More specifically, common use cases include:

[You may also like: SSL Attacks – When Hackers Use Security Against You]

  • Federal Information Processing Standards (FIPS) compliance – For example, finance, healthcare and government applications that traditionally require FIPS-level security.
  • Native cloud applications – Cloud applications designed with security in mind might use managed HSM (or KMS) for critical workloads such as password management.
  • Centralized management – Global organizations with global applications need to secure and manage their keys in one place.

Managing the cryptographic key lifecycle necessitates a few fundamentals (a brief sketch follows the list):

  • Using a random number generator to create/renew keys
  • Processing crypto-operations (encrypt/decrypt)
  • Ensuring keys never leave the HSM
  • Establishing secure access control (intrusion-resistant, tamper-evident, audit-logged, FIPS-validated appliances)
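Outside an HSM, the first two fundamentals look roughly like this using the widely adopted cryptography package. The crucial difference is that a real HSM generates and holds the key internally, so it could never be handled in application code like this:

```python
# Sketch of the first two fundamentals with the `cryptography` package:
# generating a key from a secure random source and performing a crypto
# operation with it. In a real HSM, the private key is created inside the
# device and is never extractable into application memory like this.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

signature = private_key.sign(
    b"example payload",
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print(len(signature))  # 256 bytes for a 2048-bit key
```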

The Challenge with Cloud Security Services…

One of the main challenges with cloud security services is the fact that reverse proxies need SSL keys. Managed security services, such as a cloud WAF service, force enterprises to hand over their private keys for terminating SSL connections. However, some businesses can’t (FIPS-compliant businesses, for example) or simply don’t want to (due to trust and liability concerns, or to multi-tenancy between multiple customers). This is usually where the business relationship gets stuck.

[You may also like: Managing Security Risks in the Cloud]

…And the Solution!

Simply put, the solution is a HSM Cloud Service.

Wait, what?

Yes, integrating a cloud WAF service with an external HSM hosted by a public cloud provider (like AWS CloudHSM) is the answer. It can easily be set up over a VPN among a cluster sharing the HSM credentials, per application or at large.

Indeed, CloudHSM is a popular solution, being both FIPS and PCI DSS compliant, and is trusted by customers in the finance sector. By moving the last on-prem component to the cloud to reduce data center maintenance costs, organizations are actually shifting toward consuming HSM as a service.

Such an integration supports any type of certificate (single domain, wildcard or SAN) and ensures minimal latency, as public cloud providers have PoPs all around the globe. The external HSM is used only once, and there is no limit on the number of certificates hosted on the service.

This is the recommended approach to help businesses overcome the concern of sharing private keys. Learn more about Radware Cloud WAF service here.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

DDoS, DDoS Attacks

5 Key Considerations in Choosing a DDoS Mitigation Network

May 21, 2019 — by Eyal Arazi


A DDoS mitigation service is more than just the technology or the service guarantees. The quality and resilience of the underlying network is a critical component in your armor, and one which must be carefully evaluated to determine how well it can protect you against sophisticated DDoS attacks.

Below are five key considerations in evaluating a DDoS scrubbing network.

Massive Capacity

When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new heights (and scales) of attacks.

To date, the largest-ever verified DDoS attack was a memcached-based attack against GitHub. This attack reached a peak of approximately 1.3 terabits per second (Tbps) and 126 million packets per second (PPS).

In order to withstand such an attack, scrubbing networks must have not just enough capacity to ‘cover’ the attack, but also ample overflow capacity to accommodate other customers on the network and other attacks that might be going on at the same time. A good rule of thumb is to look for mitigation networks with at least 2-3 times the capacity of the largest attacks observed to date.
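Applying that rule of thumb to the largest attack cited above gives a rough sizing target:

```python
# Back-of-the-envelope sizing using the 2-3x rule of thumb and the largest
# verified attack cited above (1.3 Tbps). Purely illustrative arithmetic.
largest_attack_tbps = 1.3

for multiple in (2, 3):
    print(f"{multiple}x headroom -> {largest_attack_tbps * multiple:.1f} Tbps "
          "of dedicated scrubbing capacity")

# 2x headroom -> 2.6 Tbps of dedicated scrubbing capacity
# 3x headroom -> 3.9 Tbps of dedicated scrubbing capacity
```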

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Dedicated Capacity

It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers – particularly those who take an ‘edge’ security approach – rely on their Content Distribution Network (CDN) capacity for DDoS mitigation, as well.

The problem, however, is that the majority of this capacity is already utilized on a routine basis. CDN providers don’t like to pay for unused capacity, so CDN bandwidth utilization rates routinely reach 60-70% and can frequently climb to 80% or more. This leaves very little room for the ‘overflow’ traffic that results from a large-scale volumetric DDoS attack.

[You may also like: DDoS Protection Requires Looking Both Ways]

Therefore, it is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load-balancing.

Global Footprint

Organizations deploy DDoS mitigation solutions in order to ensure the availability of their services. An increasingly important aspect of availability is speed of response. That is, the question is not only whether the service is available, but also how quickly it can respond.

Cloud-based DDoS protection services operate by routing customer traffic through the service providers’ scrubbing centers, removing any malicious traffic, and then forwarding clean traffic to the customer’s servers. As a result, this process inevitably adds a certain amount of latency to user communications.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

One of the key factors affecting latency is distance from the host. Therefore, in order to minimize latency, it is important for the scrubbing center to be as close as possible to the customer. This can only be achieved with a globally-distributed network, with a large number of scrubbing centers deployed at strategic communication hubs, where there is large-scale access to high-speed fiber connections.
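The distance effect is easy to estimate: signals in fiber travel at roughly two-thirds the speed of light, about 200 km per millisecond, and every kilometer of detour counts twice (out and back). A quick estimate, ignoring queuing and processing delays:

```python
# Rough added round-trip latency from detouring through a scrubbing center.
# Assumes ~200,000 km/s signal speed in fiber and ignores queuing/processing
# delays, so real-world figures will be somewhat higher.
FIBER_KM_PER_MS = 200  # roughly 2/3 of the speed of light in a vacuum

def added_rtt_ms(detour_km: float) -> float:
    return 2 * detour_km / FIBER_KM_PER_MS  # out and back

for detour in (100, 1_000, 5_000):
    print(f"{detour:>5} km detour -> +{added_rtt_ms(detour):.0f} ms RTT")
# 100 km -> +1 ms, 1000 km -> +10 ms, 5000 km -> +50 ms
```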

As a result, when examining a DDoS protection network, it is important not just to look at capacity figures, but also at the number of scrubbing centers and their distribution.

Anycast Routing

A key component impacting response time is the quality of the network itself, and its back-end routing mechanisms. In order to ensure maximal speed and resilience, modern security networks are based on anycast-based routing.

Anycast-based routing establishes a one-to-many relationship between IP addresses and network nodes (i.e., there are multiple network nodes with the same IP address). When a request is sent to the network, the routing mechanism applies principles of least-cost-routing to determine which network node is the optimal destination.

Routing paths can be selected based on the number of hops, distance, latency, or path cost considerations. As a result, traffic from any given point will usually be routed to the nearest and fastest node.
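A toy model of that selection logic is below; the node costs and weights are invented for illustration, and in reality BGP, not application code, makes this choice:

```python
# Toy model of anycast-style node selection: several scrubbing centers
# advertise the same IP, and the routing layer picks the lowest-cost path.
# Costs and weights are invented assumptions for this sketch.
NODES = {
    "frankfurt": {"hops": 4,  "latency_ms": 12},
    "ashburn":   {"hops": 9,  "latency_ms": 85},
    "singapore": {"hops": 12, "latency_ms": 160},
}

def best_node(nodes, weight_hops=1.0, weight_latency=0.5):
    cost = lambda n: weight_hops * n["hops"] + weight_latency * n["latency_ms"]
    return min(nodes, key=lambda name: cost(nodes[name]))

print(best_node(NODES))  # frankfurt -- the nearest, fastest node wins
```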

[You may also like: The Costs of Cyberattacks Are Real]

Anycast helps improve the speed and efficiency of traffic routing within the network. DDoS scrubbing networks based on anycast routing enjoy these benefits, which ultimately results in faster response and lower latency for end-users.

Multiple Redundancy

Finally, when selecting a DDoS scrubbing network, it is important to always have a backup. The whole point of a DDoS protection service is to ensure service availability. Therefore, you cannot have it – or any component in it – be a single point-of-failure. This means that every component within the security network must be backed up with multiple redundancy.

This includes not just multiple scrubbing centers and overflow capacity, but also requires redundancy in ISP links, routers, switches, load balancers, mitigation devices, and more.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Only a network with full multiple redundancy for all components can ensure full service availability at all times, and guarantee that your DDoS mitigation service does not become a single point-of-failure of its own.

Ask the Questions

Alongside technology and service guarantees, the underlying network forms a critical part of a cloud security service. The five considerations above outline the key metrics by which you should evaluate the network powering potential DDoS protection services.

Ask your service provider – or any service provider that you are evaluating – about their capabilities with regards to each of these metrics, and if you don’t like the answer, then you should consider looking for alternatives.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now