
Security

IDBA: A Patented Bot Detection Technology

June 13, 2019 — by Radware


Over half of all internet traffic is generated by bots — some legitimate, some malicious. Competitors and adversaries alike deploy “bad” bots that leverage different methods to achieve nefarious objectives. This includes account takeover, scraping data, denying available inventory and launching denial-of-service attacks with the intent of stealing data or causing service disruptions.

These attacks often go undetected by conventional mitigation systems and strategies because bots have evolved from basic scripts into large-scale distributed bots with human-like interaction capabilities that evade detection mechanisms. Staying ahead of the threat landscape requires more sophisticated capabilities to accurately detect and mitigate these threats. One of the key technical capabilities required to stop today’s most advanced bots is intent-based deep behavioral analysis (IDBA).

What Exactly is IDBA?

IDBA is a major step forward in bot detection technology because it performs behavioral analysis at the higher abstraction level of intent, unlike the commonly used, shallow interaction-based behavioral analysis. For example, account takeover is an intent, while “mouse pointer moving in a straight line” is an interaction.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

Capturing intent enables IDBA to provide significantly higher levels of accuracy to detect advanced bots. IDBA is designed to leverage the latest developments in deep learning.

More specifically, IDBA uses semi-supervised learning models to overcome the challenges of inaccurately labeled data, bot mutation and the anomalous behavior of human users. And it leverages intent encoding, intent analysis and adaptive-learning techniques to accurately detect large-scale distributed bots with sophisticated human-like interaction capabilities.

[You may also like: Bot Managers Are a Cash-Back Program For Your Company]

3 Stages of IDBA

A visitor’s journey through a web property needs to be analyzed in addition to the interaction-level characteristics, such as mouse movements. Using richer behavioral information, an incoming visitor can be classified as a human or bot in three stages:

  • Intent encoding: The visitor’s journey through a web property is captured through signals such as mouse or keystroke interactions, URL and referrer traversals, and time stamps. These signals are encoded using a proprietary, deep neural network architecture into an intent encoding-based, fixed-length representation. The encoding network jointly achieves two objectives: to be able to represent the anomalous characteristics of completely new categories of bots and to provide greater weight to behavioral characteristics that differ between humans and bots.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

  • Intent analysis: Here, the intent encoding of the user is analyzed using multiple machine learning modules in parallel. A combination of supervised and unsupervised learning-based modules are used to detect both known and unknown patterns.
  • Adaptive learning: The adaptive-learning module collects the predictions made by the different models and takes action on bots based on these predictions. In many cases, the action involves presenting the visitor with a challenge, such as a CAPTCHA or an SMS OTP, that provides a feedback mechanism (i.e., CAPTCHA solved). This feedback is incorporated to improve the decision-making process. Decisions can be broadly categorized into two types of tasks.
    • Determining thresholds: The thresholds to be chosen for anomaly scores and classification probabilities are determined through adaptive threshold control techniques.
    • Identifying bot clusters: Selective incremental blacklisting is performed on suspicious clusters. The suspicion scores associated with the clusters (obtained from the collusion detector module) are used to set prior bias.
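The three stages can be illustrated with a deliberately simplified sketch. Everything below is a stand-in for illustration only: the real intent encoder is a proprietary deep neural network and the analysis stage runs many models in parallel, while here a fixed signal list, a distance score and invented constants take their place.

```python
import statistics

SIGNALS = ("mouse_move", "keystroke", "url", "referrer", "dwell_time")

def encode_intent(events):
    """Toy stand-in for the intent-encoding network: map a variable-length
    journey (list of (signal, magnitude) events) to a fixed-length vector."""
    vec = [0.0] * len(SIGNALS)
    for kind, value in events:
        if kind in SIGNALS:
            vec[SIGNALS.index(kind)] += float(value)
    total = sum(vec) or 1.0
    return [v / total for v in vec]  # fixed-length, normalized encoding

def intent_analysis(encoding, human_centroid):
    """Many parallel models reduced to one anomaly score: distance of this
    visitor's encoding from the centroid of known-human encodings."""
    return sum((a - b) ** 2 for a, b in zip(encoding, human_centroid)) ** 0.5

def adaptive_threshold(recent_scores, k=3.0):
    """Adaptive-learning stand-in: derive the block threshold from the mean
    and spread of recent scores instead of hard-coding a constant."""
    return statistics.mean(recent_scores) + k * statistics.pstdev(recent_scores)
```

A visitor whose `intent_analysis` score exceeds the adaptive threshold would be challenged, and the challenge outcome fed back into the next round of scoring.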

[You may also like: The Big, Bad Bot Problem]

IDBA or Bust!

Current bot detection and classification methodologies are ineffective in countering the threats posed by rapidly evolving and mutating sophisticated bots.

Bot detection techniques that use interaction-based behavioral analysis can identify Level 3 bots but fail to detect the advanced Level 4 bots that have human-like interaction capabilities. The unavailability of correctly labeled data for Level 4 bots, bot mutations and the anomalous behavior of human visitors from disparate industry domains require the development of semi-supervised models that work at the higher abstraction level of intent rather than relying on interaction-based behavioral analysis alone.

IDBA leverages a combination of intent encoding, intent analysis and adaptive-learning techniques to identify the intent behind attacks perpetrated by massively distributed human-like bots.

Read “How to Evaluate Bot Management Solutions” to learn more.

Download Now

Application Security, WAF, Web Application Firewall

Bot Manager vs. WAF: Why You Actually Need Both

June 6, 2019 — by Ben Zilberman


Over 50% of web traffic is comprised of bots, and 89% of organizations have suffered attacks against web applications. Websites and mobile apps are two of the biggest revenue drivers for businesses and help solidify a company’s reputation with tech-savvy consumers. However, these digital engagement tools are coming under increasing threats from an array of sophisticated cyberattacks, including bots.

While a percentage of bots are used to automate business processes and tasks, others are designed for mischievous purposes, including account takeover, content scraping, payment fraud and denial-of-service attacks. Often, these attacks are carried out by competitors looking to undermine a company’s competitive advantage, steal information or increase your online marketing costs.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

When Will You Need a Bot Detection Solution?

Sophisticated, next-generation bots can evade traditional security controls and go undetected by application owners. However, their impact can be noticed, and several indicators can alert a company to malicious bot activity.

Why a WAF Isn’t an Effective Bot Detection Tool

WAFs are primarily created to safeguard websites against application vulnerability exploitations like SQL injections, cross-site scripting (XSS), cross-site request forgery, session hijacking and other web attacks. WAFs typically feature basic bot mitigation capabilities and can block bots based on IPs or device fingerprinting.

However, WAFs fall short when facing more advanced, automated threats. Moreover, next-generation bots use sophisticated techniques to remain undetected, such as mimicking human behavior, abusing open-source tools or generating multiple violations in different sessions.
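The static blocking a typical WAF offers can be reduced to a sketch like the following; the header list, hash truncation and blocklist are illustrative, not any vendor's implementation:

```python
import hashlib

def device_fingerprint(headers):
    """Hash a few request headers into a device fingerprint, roughly the
    static signal that basic WAF bot mitigation keys on."""
    keys = ("User-Agent", "Accept-Language", "Accept-Encoding")
    raw = "|".join(headers.get(k, "") for k in keys)
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

BLOCKED_FINGERPRINTS = set()  # populated from previously observed attacks

def is_blocked(headers):
    """Static check: block only if this exact fingerprint was seen before."""
    return device_fingerprint(headers) in BLOCKED_FINGERPRINTS
```

A next-generation bot that randomizes even one header per session never repeats a fingerprint, so the blocklist never matches; that is precisely the gap behavioral analysis fills.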

[You may also like: The Big, Bad Bot Problem]

Against these sophisticated threats, WAFs won’t get the job done.

The Benefits of Synergy

As the complexity of multi-vector cyberattacks increases, security systems must work in concert to mitigate these threats. In the case of application security, a combination of behavioral analytics to detect malicious bot activity and a WAF to protect against vulnerability exploitations and guard sensitive data is critical.

Moreover, many threats can be blocked at the network level before reaching the application servers. This not only reduces risk, but also reduces the processing loads on the network infrastructure by filtering malicious bot traffic.

Read “How to Evaluate Bot Management Solutions” to learn more.

Download Now

Security

5 Things to Consider When Choosing a Bot Management Solution

June 4, 2019 — by Radware


For organizations both large and small, securing the digital experience necessitates the need for a dedicated bot management solution. Regardless of the size of your organization, the escalating intensity of global bot traffic and the increasing severity of its overall impact mean that bot management solutions are crucial to ensuring business continuity and success.

The rise in malicious bot traffic, specifically bots that mimic human behavior and require advanced machine learning to mitigate, requires the ability to distinguish the wolf in sheep’s clothing.

We previously covered the basics in bot management evaluation, and urge you to likewise consider the following factors when choosing a solution.

Extensibility and Flexibility

True bot management goes beyond just the website. An enterprise-grade solution should protect all online assets, including your website, mobile apps and APIs. Protecting APIs and mobile apps is equally crucial, as is interoperability with systems belonging to your business partners and vital third-party APIs.

[You may also like: Key Considerations In Bot Management Evaluation]

Flexible Deployment Options

Bot mitigation solutions should be easy to deploy and operate with the existing infrastructure, such as CDNs and WAFs, as well as various technology stacks and application servers. Look for solutions that have a range of integration options, including web servers/CDNs/CMS plugins, SDKs for Java, PHP, .NET, Python, ColdFusion, Node.js, etc., as well as via JavaScript tags and virtual appliances.

A solution with non-intrusive API-based integration capability is key to ensuring minimal impact on your web assets.

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Finally, any solution provider should ideally have multiple globally distributed points of presence to maximize system availability, minimize latency and overcome any internet congestion issues.

Is It a Fully Managed and Self-Reliant Service?

Webpage requests can number in the millions per minute for popular websites, and data processing for bot detection needs to be accomplished in real time. This makes manual intervention impossible — even adding suspected IP address ranges is useless in countering bots that cycle through vast numbers of addresses to evade detection. As a result, a key question that needs to be answered is does the solution require a specialized team to manage it, or does it operate autonomously after initial setup?
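A toy simulation makes the point about IP-based blocking concrete (pool sizes and addresses are invented): a reactive blacklist only ever blocks an address after it has already been used, so a bot with a large enough pool loses almost nothing.

```python
def simulate_rotation(pool_size, requests):
    """A bot cycles through `pool_size` addresses; the defender blacklists
    each address only after seeing it once. Returns requests that got through."""
    blacklist = set()
    passed = 0
    for i in range(requests):
        ip = "10.0.{}.{}".format(i % pool_size // 256, i % pool_size % 256)
        if ip not in blacklist:
            passed += 1          # served before the address was known bad
            blacklist.add(ip)    # reactive: blocked only from now on
    return passed
```

With a pool as large as the request volume, every single request evades the list; real botnets cycling through residential proxies achieve much the same effect.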

[You may also like: The Big, Bad Bot Problem]

Bot mitigation engines equipped with advanced technologies, such as machine learning, help with automating their management capabilities to significantly reduce the time and workforce needed to manage bots. Automated responses to threats and a system that does not require manual intervention considerably reduce the total cost of ownership.

Building vs. Buying

Large organizations have resources to develop their own in-house bot management solutions, but most companies do not have the time, resources or money to accomplish that. Building an adaptive and sophisticated bot mitigation solution, which can counter constantly evolving bots, can take years of specialized development.

Financially, it makes business sense to minimize capex and purchase cloud-based bot mitigation solutions on a subscription basis. This can help companies realize the value of bot management without making a large upfront investment.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Data Security, Privacy and Compliance Factors

A solution should ensure that traffic does not leave a network — or, if it does, that data is encrypted and hashed to maximize privacy and compliance. Ensuring that the bot mitigation solution complies with GDPR regulations pertaining to data at rest and data in transit will help avoid personal data breaches and the risk of financial and legal penalties.

Read “How to Evaluate Bot Management Solutions” to learn more.

Download Now

Security

Key Considerations In Bot Management Evaluation

May 9, 2019 — by Radware


The escalating intensity of global bot traffic and the increasing severity of its overall impact mean that dedicated bot management solutions are crucial to ensuring business continuity and success. This is particularly true since more sophisticated bad bots can now mimic human behavior and easily deceive conventional cybersecurity solutions/bot management systems.

Addressing highly sophisticated and automated bot-based cyberthreats requires deep analysis of bots’ tactics and intentions. According to Forrester Research’s The Forrester New Wave™: Bot Management, Q3 2018 report, “Attack detection, attack response and threat research are the biggest differentiators. Bot management tools differ greatly in their detection methods; many have very limited — if any — automated response capabilities. Bot management tools must determine the intent of automated traffic in real time to distinguish between good bots and bad bots.”

When selecting a bot mitigation solution, companies must evaluate the following criteria to determine which best fit their unique needs.

Basic Bot Management Features

Organizations should evaluate the range of possible response actions — such as blocking, limiting, the ability to outwit competitors by serving fake data and the ability to take custom actions based on bot signatures and types.

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Any solution should have the flexibility to take different mitigation approaches on various sections and subdomains of a website, and the ability to act on only a certain subset of pages of that website — for example, a “monitor mode” with no impact on web traffic that gives users insight into the solution’s capabilities during a trial, before real-time active blocking is enabled.

Additionally, any enterprise-grade solution should be able to be integrated with popular analytics dashboards such as Adobe or Google Analytics to provide reports on nonhuman traffic.

Capability to Detect Large-Scale Distributed Humanlike Bots

When selecting a bot mitigation solution, businesses should try to understand the underlying technique used to identify and manage sophisticated attacks such as large-scale distributed botnet attacks and “low and slow” attacks, which attempt to evade security countermeasures.

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Traditional defenses fall short of necessary detection features to counter such attacks. Dynamic IP attacks render IP-based mitigation useless. A rate-limiting system without any behavioral learning means dropping real customers when attacks happen. Some WAFs and rate-limiting systems that are often bundled or sold along with content delivery networks (CDNs) are incapable of detecting sophisticated bots that mimic human behavior.

The rise of highly sophisticated humanlike bots in recent years requires more advanced techniques in detection and response. Selection and evaluation criteria should focus on the various methodologies that any vendor’s solution uses to detect bots, e.g., device and browser fingerprinting, intent and behavioral analyses, collective bot intelligence and threat research, as well as other foundational techniques.

A Bot Detection Engine That Continuously Adapts to Beat Scammers and Outsmart Competitors

  • How advanced is the solution’s bot detection technology?
  • Does it use unique device and browser fingerprinting?
  • Does it leverage intent analysis in addition to user behavioral analysis?
  • How deep and effective are the fingerprinting and user behavioral modeling?
  • Do they leverage collective threat intelligence?

[You may also like: The Big, Bad Bot Problem]

Any bot management system should accomplish all of this in addition to collecting hundreds of parameters from users’ browsers and devices to uniquely identify them and analyze their behavior. It should also match the deception capabilities of sophisticated bots. Ask for examples of sophisticated attacks that the solution was able to detect and block.

Impact on User Experience — Latency, Accuracy and Scalability

Website and application latency creates a poor user experience. Any bot mitigation solution shouldn’t add to that latency, but rather should identify issues that help resolve it.

Accuracy of bot detection is critical. Any solution must not only distinguish good bots from malicious ones but also allow authorized bots from search engines and partners, preserving the user experience. Maintaining a consistent user experience on sites such as B2C e-commerce portals can be difficult during peak hours, so the solution should scale to handle spikes in traffic.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Keeping false positives to a minimal level to ensure that user experience is not impacted is equally important. Real users should never have to solve a CAPTCHA or prove that they’re not a bot. An enterprise-grade bot detection engine should have deep-learning and self-optimizing capabilities to identify and block constantly evolving bots that alter their characteristics to evade detection by basic security systems.
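Concretely, keeping real users away from CAPTCHAs is a threshold-calibration problem: choose the block threshold from the score distribution of known-human sessions so that no more than a target fraction would ever be challenged. A minimal sketch, assuming scores where higher means more bot-like:

```python
def threshold_for_fpr(human_scores, max_fpr=0.001):
    """Pick the lowest block threshold such that at most `max_fpr` of
    known-human sessions would score above it (and so be challenged)."""
    ranked = sorted(human_scores)
    # Index of the (1 - max_fpr) quantile of the human score distribution.
    idx = min(len(ranked) - 1, int(len(ranked) * (1 - max_fpr)))
    return ranked[idx]
```

A self-optimizing engine would recompute this continuously as the score distribution drifts, rather than setting it once.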

Read “How to Evaluate Bot Management Solutions” to learn more.

Download Now

Application Security

4 Emerging Challenges in Securing Modern Applications

May 1, 2019 — by Radware


Modern applications are difficult to secure. Whether they are web or mobile, custom developed or SaaS-based, applications are now scattered across different platforms and frameworks. To accelerate service development and business operations, applications rely on third-party resources that they interact with via APIs, well orchestrated by state-of-the-art automation and synchronization tools. As a result, the attack surface grows and blind spots multiply, increasing exposure to risk.

Applications, as well as APIs, must be protected against an expanding variety of attack methods and sources and must be able to make educated decisions in real time to mitigate automated attacks. Moreover, applications constantly change, and security policies must adapt just as fast. Otherwise, businesses face increased manual labor and operational costs, in addition to a weaker security posture.

The WAF Ten Commandments

The OWASP Top 10 list serves as an industry benchmark for the application security community, and provides a starting point for ensuring protection from the most common and virulent threats, application misconfigurations that can lead to vulnerabilities, and detection tactics and mitigations. It also defines the basic capabilities required from a Web Application Firewall in order to protect against common attacks targeting web applications like injections, cross-site scripting, CSRF, session hijacking, etc. There are numerous ways to exploit these vulnerabilities, and WAFs must be tested for security effectiveness.

However, vulnerability protection is just the basics. Advanced threats force application security solutions to do more.

Challenge 1: Bot Management

52% of internet traffic is bot generated, half of which is attributed to “bad” bots. Unfortunately, 79% of organizations can’t make a clear distinction between good and bad bots. The impact is felt across all business arms as bad bots take over user accounts and payment information, scrape confidential data, hold up inventory and skew marketing metrics, thus leading to wrong decisions. Sophisticated bots mimic human behavior and easily bypass CAPTCHA or other challenges. Distributed bots render IP-based and even device fingerprinting based protection ineffective. Defenders must level up the game.

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Challenge 2: Securing APIs

Machine-to-machine communications, integrated IoTs, event-driven functions and many other use cases leverage APIs as the glue for agility. Many applications gather information and data from services with which they interact via APIs. Threats to API vulnerabilities include injections, protocol attacks, parameter manipulations, invalidated redirects and bot attacks. Businesses tend to grant APIs access to sensitive data without inspecting or protecting them against cyberattacks. Don’t be one of them.

[You may also like: How to Prevent Real-Time API Abuse]

Challenge 3: Denial of Service

Different forms of application-layer DoS attacks are still very effective at bringing application services down. This includes HTTP/S floods, low and slow attacks (Slowloris, LOIC, Torshammer), dynamic IP attacks, buffer overflow, Brute Force attacks and more. Driven by IoT botnets, application-layer attacks have become the preferred DDoS attack vector. Even the greatest application protection is worthless if the service itself can be knocked down.
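A first line of defense against HTTP/S floods is a per-client sliding-window rate check. The sketch below is a minimal illustration with invented limits; note that it catches volumetric floods but, as the attack list above shows, low-and-slow attacks deliberately stay under any such cap:

```python
from collections import deque

class FloodDetector:
    """Flag any client whose request count in a sliding window exceeds a cap."""

    def __init__(self, window_seconds=10, max_requests=100):
        self.window = window_seconds
        self.cap = max_requests
        self.hits = {}  # client id -> deque of request timestamps

    def is_flooding(self, client, now):
        q = self.hits.setdefault(client, deque())
        q.append(now)
        while q and now - q[0] > self.window:  # drop timestamps outside window
            q.popleft()
        return len(q) > self.cap
```

A Slowloris-style client making one request a minute never trips this check, which is why rate limiting alone is not application-layer DoS protection.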

[You may also like: DDoS Protection Requires Looking Both Ways]

Challenge 4: Continuous Security

For modern DevOps, agility is valued at the expense of security. Development and roll-out methodologies, such as continuous delivery, mean applications are continuously modified. It is extremely difficult to maintain a valid security policy to safeguard sensitive data in dynamic conditions without creating a high number of false positives. This task has gone way beyond humans, as the error rate and additional costs they impose are enormous. Organizations need machine-learning based solutions that map application resources, analyze possible threats, create and optimize security policies in real time.

[You may also like: Are Your DevOps Your Biggest Security Risks?]

Protecting All Applications

It’s critical that your solution protects applications on all platforms, against all attacks, through all the channels and at all times. Here’s how:

  • Application security solutions must encompass web and mobile apps, as well as APIs.
  • Bot Management solutions need to overcome the most sophisticated bot attacks.
  • Mitigating DDoS attacks is an essential and integrated part of application security solutions.
  • A future-proof solution must protect containerized applications, serverless functions, and integrate with automation, provisioning and orchestration tools.
  • To keep up with continuous application delivery, security protections must adapt in real time.
  • A fully managed service should be considered to remove complexity and minimize resources.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Security, Service Provider

Bot Management: A Business Opportunity for Service Providers

April 30, 2019 — by Radware


Over half of all internet traffic is generated by bots — some legitimate, some malicious. These “bad” bots are often deployed with various capabilities to achieve their nefarious objectives, which can include account takeover, scraping data, denying available inventory and launching denial-of-service attacks with the intent of stealing data or causing service disruptions. Sophisticated, large-scale attacks often go undetected by conventional mitigation systems and strategies.

Bots represent a clear and present danger to service providers. The inability to accurately distinguish malicious bots from legitimate traffic/users can leave a service provider exposed and at risk to suffer customer loss, lost profits and irreparable brand damage.

In an age where securing the digital experience is a competitive differentiator, telecommunication companies, multiple-system operators (MSOs) and internet service providers (ISPs) must transform their infrastructures into service-aware architectures that deliver scalability and security to customers, all the while differentiating themselves and creating revenue by selling security services.

[You may also like: Bot Managers Are a Cash-Back Program For Your Company]

Bot Traffic in the Service Provider Network

Bot attacks often go undetected by conventional mitigation systems and strategies because they have evolved from basic scripts into large-scale distributed bots with human-like interaction capabilities. Bots have undergone a transformation, or evolution, over the years. Generally speaking, they can be classified into four categories, or levels, based on their degree of sophistication.

The four categories of malicious bots.

In addition to the aforementioned direct impact that these bots have, there is the added cost associated with increased traffic loads imposed on service providers’ networks. In an age of increased competition and the growth of multimedia consumption, it is critical that service providers accurately eliminate “bad” bots from their networks.

[You may also like: The Big, Bad Bot Problem]

Staying ahead of the evolving threat landscape requires more sophisticated, advanced capabilities to accurately detect and mitigate these threats. These include combining behavioral modeling, collective bot intelligence and capabilities such as device fingerprinting and intent-based deep behavioral analysis (IDBA) for precise bot management across all channels.

Protecting Core Applications from Bot Access

Bots attack web and mobile applications as well as application programming interfaces (APIs). Bot-based application DoS attacks degrade web applications by exhausting system resources, third-party APIs, inventory databases and other critical resources.

[You may also like: How to Prevent Real-Time API Abuse]

IDBA is now one of the critical capabilities needed to mitigate advanced bots. It performs behavioral analysis at a higher level of abstraction of “intent,” unlike commonly used, shallow “interaction”-based behavior analysis. IDBA is a critical next-generation capability to mitigate account takeovers executed by more advanced Generation 3 and 4 bots, as it leverages the latest developments in deep learning and behavioral analysis to decode the true intention of bots. IDBA goes beyond analyzing mouse movements and keystrokes to detect human-like bots, so “bad” bots can be parsed from legitimate traffic to ensure a seamless online experience for consumers.

API Exposure

APIs are increasingly used to exchange data or to integrate with partners, and attackers understand this. It is essential to accurately distinguish between “good” API calls and “bad” API calls for online businesses. Attackers reverse engineer mobile and web applications to hijack API calls and program bots to invade these APIs. By doing so, they can take over accounts, scrape critical data and perform application DDoS attacks by deluging API servers with unwanted requests.

Account Takeover

This category encompasses ways in which bots are programmed to use false identities to obtain access to data or goods. Their methods for account takeover can vary. They can hijack existing accounts by cracking a password via Brute Force attacks or by using known credentials that have been leaked via credential stuffing. Lastly, they can be programmed to create new accounts to carry out their nefarious intentions.

[You may also like: Will We Ever See the End of Account Theft?]

These attacks focus on cracking credentials, tokens or verification codes/numbers to create or hijack account access to data or products. Examples include account creation, token cracking and credential cracking/stuffing. Nearly all of these attacks target login pages.

The impact of account takeover? Fraudulent transactions, abuse of reward programs, and damage to brand reputation.
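Credential stuffing leaves a recognizable trace on login pages: many attempts from one source, almost all against distinct usernames, almost all failing. A simplified per-source heuristic, with thresholds invented for illustration:

```python
def looks_like_stuffing(attempts, min_attempts=20, max_success_rate=0.05):
    """`attempts`: list of (username, succeeded) pairs from a single source.
    Flag sources that hammer many distinct accounts and almost always fail."""
    if len(attempts) < min_attempts:
        return False
    distinct_users = {user for user, _ in attempts}
    successes = sum(1 for _, ok in attempts if ok)
    return (len(distinct_users) / len(attempts) > 0.8
            and successes / len(attempts) <= max_success_rate)
```

The same signal distinguishes stuffing from a legitimate user mistyping a password: one person retrying a single account fails the distinct-username test.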

Advertising Traffic Fraud

Malicious bots create false impressions and generate illegitimate clicks on publishing sites and their mobile apps. In addition, website metrics, such as visits and conversions, are vulnerable to skewing. Bots pollute metrics, disrupt funnel analysis and inhibit key performance indicator (KPI) tracking. Automated traffic on your website also affects product metrics, campaign data and traffic analytics. Skewed analytics are a major hindrance to marketers who need reliable data for their decision-making processes.
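The skew is easy to quantify. With invented numbers, compare the conversion rate computed over raw traffic with the rate over sessions a bot manager has filtered:

```python
def conversion_rate(sessions):
    """`sessions`: list of (is_bot, converted) pairs."""
    return sum(1 for _, converted in sessions if converted) / len(sessions)

def clean_conversion_rate(sessions):
    """The same metric after a bot manager has flagged out the bot sessions."""
    humans = [s for s in sessions if not s[0]]
    return conversion_rate(humans)
```

In the test case below, 400 bot sessions that never convert drag a true 10% conversion rate down to an apparent 6%, exactly the kind of skew that misleads funnel analysis and KPI tracking.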

[You may also like: Ad Fraud 101: How Cybercriminals Profit from Clicks]

The Business Opportunity for Service Providers

Regardless of the type of attack, service providers are typically held to high expectations when it comes to keeping customer data secure and maintaining service availability. With each attack, service providers risk customer loss, damage to brand reputation, lost profits and, at worst, costly governmental involvement and the resulting investigations and lawsuits.

These same business expectations apply to service providers’ customers, many of whom require security services. Although large organizations can attempt to develop their own in-house bot management solutions, these companies do not necessarily have the time, money and expertise to build and maintain them.

Building an adaptive bot mitigation solution can take years of specialized development. Financially, it makes sense to minimize capex and purchase a cloud-based bot mitigation solution on a subscription basis. This can help companies realize the value of bot management without making a large upfront investment.

Lastly, this allows service providers to protect their core infrastructure and their own customers from bot-based cyberattacks and provides the opportunity to extend any bot management solution as part of a cloud security services offering to generate a new revenue stream.

2018 Mobile Carrier Ebook

Read “Creating a Secure Climate for your Customers” today.

Download Now

Security

Bot Managers Are a Cash-Back Program For Your Company

April 17, 2019 — by Ben Zilberman


In my previous blog, I briefly discussed what bot managers are and why they are needed. Today, we will conduct a short ROI exercise (perhaps the toughest task in information security!).

To recap: Bots generate a little over half of today’s internet traffic. Roughly half of that half (i.e. a quarter, for rusty ones like myself…) is generated by bad bots, a.k.a. automated programs targeting applications with the intent to steal information or disrupt service. Over the years, they have gotten so sophisticated, they can easily mimic human behavior, perform allegedly uncorrelated violation actions and essentially fool most application security solutions out there.


These bots affect each and every arm of your business. If you are in the e-commerce or travel industries, no need to tell you that… if you aren’t, go to your next C-level executive meeting and look for those who scratch their heads the most. Why? Because they can’t understand where the money goes, and why the predicted performance didn’t materialize as expected.

Let’s go talk to these C-Suite executives, shall we?

Chief Revenue Officer

Imagine you are selling products online–whether that’s tickets, hotel rooms or even 30-pound dog food bags–and this is your principal channel for revenue generation. Now, imagine that bots act as faux buyers and hold the inventory “hostage” so genuine customers cannot access it.

[You may also like: Will We Ever See the End of Account Theft?]

Sure, you can expire the checkout process every 10 minutes, but as this is an automated program, it will re-initiate the process in a split second. And what about CAPTCHA? Don’t assume CAPTCHA will weed out all bots; some bots activate after a human has solved it. How would you know when you are communicating with a bot or a human? (Hint: you’d know if you had a bot management solution).

Wondering why the movie hall is empty half the time even though it’s a hot release? Does everybody go to the theater across the street? No. Bots are to blame. And they cause direct, immediate and painful revenue loss.

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Chief Marketing Officer

Digital marketing tools, end-to-end automation of the customer journey, lead generation and content syndication are great tools that help CMOs measure ROI and plan budgets. But what if the analysis they provide is false? What if half the clicks you are paying for are fictitious because you were subjected to a click-fraud campaign by bots? What if a competitor uses a bot to scrape registrants' data from your landing pages? Unfortunately, bots often skew the analysis and can lead you to make wrong decisions that result in poor performance. Without bot management, you're simply wasting money.

Chief Operations Officer/Chief Information Officer

Does your team complain that your network resources are in the "red zone," close to maximum capacity, but your customer base isn't growing at the same pace?

Blame bots.

[You may also like: Disaster Recovery: The Big, Bad Bot Problem]

Obviously some bots are “good,” like automated services that help accelerate and streamline your business, analyze data quickly and help you to make better decisions. However, bad bots (26% of the total traffic you are processing) put a load on your infrastructure and make your IT staff cry for more capacity. So you invest $200-500K in bigger firewalls, ADCs, and broader internet pipes, and upgrade your servers.

Next thing you know, a large DDoS attack from IoT botnets knocks everything down. If only you had invested $50k upfront to filter out the bad traffic from the get-go… That could’ve translated to $300k cash back!

Chief Information Security Officer

Every hour, a new security vendor knocks on your door with another solution for a 0.0001%-probability what-if scenario. Your budget is spread all over the place, across multiple protections and a complex architecture, as you try to take an actionable snapshot of what's going on at every moment. At the end of the day, your task is to protect your company's information assets, and there are so many ways to get a hold of those precious secrets!

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Bad bots are your enemy. They can scrape content, files, pricing, and intellectual property from your website. They can take over user accounts by cracking their passwords or launch a credential stuffing attack (and then retrieve their payment info). And they can take down service with DDoS attacks and hold up inventory, as I previously mentioned.

You could significantly reduce these risks if you could distinguish human from bot traffic (remember, sophisticated bots today can mimic human behavior and bypass all sorts of challenges, not only CAPTCHA) and, beyond that, determine which bots are legitimate and which are malicious.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Bot management equals less risk, better posture, stable business, no budget increases or unexpected expenses. Cash back!

Chief Financial Officer

Your management peers could have made better investments, but now you have to clean up their mess. This can include paying legal fees and compensation to customers whose data was compromised, paying regulatory fines for compliance shortfalls, hiring a crisis-management consulting firm, and absorbing the costs associated with held-up inventory and downed services.

If you only had a bot management solution in place… so much cash back.

The Bottom Line

Run–do not walk–to your CEO and request a much-needed bot management solution. Not only does s/he have nothing to lose, s/he has a lot to gain.

* This week, Radware integrates its bot management service with its cloud WAF for a complete, fully managed application security suite.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Attack Types & Vectors

Can You Crack the Hack?

April 11, 2019 — by Daniel Smith1

credential_stuffing-960x640.jpg

Let’s play a game. Below are clues describing a specific type of cyberattack; can you guess what it is?

  • This cyberattack is an automated bot-based attack
  • It uses automation tools such as cURL and PhantomJS
  • It leverages breached usernames and passwords
  • Its primary goal is to hijack accounts to access sensitive data, but denial of service is another consequence
  • The financial services industry has been the primary target

Struggling? We understand, it’s tricky! Here are two more clues:

  • Hackers will often route login requests through proxy servers to avoid blacklisting their IP addresses
  • It is a subset of Brute Force attacks, but different from credential cracking 

And the Answer Is….

Credential stuffing! If you didn’t guess correctly, don’t worry. You certainly aren’t alone. At this year’s RSA Conference, Radware invited attendees to participate in a #HackerChallenge. Participants were given clues and asked to diagnose threats. While most were able to surmise two other cyber threats, credential stuffing stumped the majority.

[You may also like: Credential Stuffing Campaign Targets Financial Services]

Understandably so. For one, events are happening at a breakneck pace. In the last few months alone, there have been several high-profile attacks leveraging different password attacks, from credential stuffing to credential spraying. It’s entirely possible that people are conflating the terms and thus the attack vectors. Likewise, they may also confuse credential stuffing with credential cracking.

Stuffing vs. Cracking vs. Spraying

As we’ve previously written, credential stuffing is a subset of brute force attacks but is different from credential cracking. Credential stuffing campaigns do not involve brute forcing password combinations. Rather, they leverage leaked usernames and passwords in an automated fashion against numerous websites to take over users’ accounts, exploiting credential reuse.

Conversely, credential cracking is an automated web attack wherein criminals attempt to crack users’ passwords or PINs by working through all possible combinations of characters in sequence. These attacks are only possible when applications do not have a lockout policy for failed login attempts. Cracking software will attempt to guess the user’s password by mutating or brute-forcing values until the attacker is successfully authenticated.
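As noted above, credential cracking is only feasible when there is no lockout policy for failed logins. A minimal sketch of such a policy (the threshold, window, and in-memory storage are illustrative assumptions; a real deployment would persist this state and pair it with other controls):

```python
import time
from collections import defaultdict

MAX_FAILURES = 5         # failed attempts allowed before lockout (illustrative)
LOCKOUT_SECONDS = 900    # how long failures count against the account

_failures = defaultdict(list)  # username -> timestamps of recent failed logins

def record_failure(username, now=None):
    """Log a failed login attempt for the account."""
    _failures[username].append(now if now is not None else time.time())

def is_locked_out(username, now=None):
    """True once the account exceeds the failure threshold within the window."""
    now = now if now is not None else time.time()
    recent = [t for t in _failures[username] if now - t < LOCKOUT_SECONDS]
    _failures[username] = recent
    return len(recent) >= MAX_FAILURES
```

With a policy like this in place, sequential guessing stalls after a handful of attempts, which is exactly why attackers shifted toward stuffing and spraying instead.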

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

As for credential (or password) spraying, this technique involves using a limited set of company-specific passwords in attempted logins for known usernames. When conducting these types of attacks, advanced cybercriminals will typically scan your infrastructure for external facing apps and network services such as webmail, SSO and VPN gateways. Usually, these interfaces have strict timeout features. Actors will use password spraying vs. brute force attacks to avoid being timed out and possibly alerting admins.
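The three techniques leave different footprints in a failed-login log, which suggests a simple heuristic for telling them apart. The sketch below is purely conceptual (the thresholds are arbitrary, and real systems would compare salted hashes rather than handling plaintext passwords):

```python
from collections import Counter

def classify_login_attempts(attempts):
    """Heuristically label a batch of failed logins from one source.

    attempts: list of (username, password_fingerprint) pairs.
    Thresholds are illustrative, not tuned values.
    """
    users = Counter(u for u, _ in attempts)
    passwords = Counter(p for _, p in attempts)
    if len(passwords) <= 3 and len(users) > 20:
        return "password spraying"    # few passwords tried across many accounts
    if max(users.values()) > 20:
        return "credential cracking"  # many guesses against a single account
    if len(users) > 20 and len(passwords) > 20:
        return "credential stuffing"  # many unique leaked username/password pairs
    return "inconclusive"
```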

So What Can You Do?

A dedicated bot management solution that is tightly integrated into your Web Application Firewall (WAF) is critical. Techniques such as device fingerprinting, CAPTCHA, IP rate-based detection, in-session detection and termination, and JavaScript challenges are also important.
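Of these techniques, IP rate-based detection is the easiest to illustrate. Here is a minimal sliding-window sketch (the window size and threshold are arbitrary illustrative values; production systems track far more signals than the source IP):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10.0   # sliding window length (illustrative)
MAX_REQUESTS = 50       # requests allowed per window per IP (illustrative)

_hits = defaultdict(deque)  # ip -> timestamps of recent requests

def is_rate_limited(ip, now=None):
    """Record a request and report whether the IP exceeds the window limit."""
    now = now if now is not None else time.time()
    q = _hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop requests that fell out of the window
    return len(q) > MAX_REQUESTS
```

Note that rate limiting alone cannot stop distributed bots that stay under the per-IP threshold, which is why it belongs alongside fingerprinting and in-session detection rather than as a standalone defense.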

In addition to these steps, network operators should apply two-factor authentication wherever possible and monitor credential dumps for potential leaks or threats.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Attack Types & Vectors, Security

CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats

March 21, 2019 — by Abhinaw Kumar0

BADBOTS-960x305.jpg

According to a study by the Ponemon Institute in December 2018, bots comprised over 52% of all Internet traffic. While ‘good’ bots discreetly index websites, fetch information and content, and perform useful tasks for consumers and businesses, ‘bad’ bots have become a primary and growing concern to CISOs, webmasters, and security professionals today. They carry out a range of malicious activities, such as account takeover, content scraping, carding, form spam, and much more. The negative impacts resulting from these activities include loss of revenue and harm to brand reputation, theft of content and personal information, lowered search engine rankings, and distorted web analytics, to mention a few.

For these reasons, researchers at Forrester recommend that “the first step in protecting your company from bad bots is to understand what kinds of bots are attacking your firm.” So let us briefly look at the main bad bot threats CISOs have to face, and then delve into their industry-wise prevalence.

Bad Bot Attacks That Worry CISOs The Most

The impact of bad bots results from the specific activities they’re programmed to execute. Many of them aim to defraud businesses and/or their customers for monetary gain, while others involve business competitors and nefarious parties who scrape content (including articles, reviews, and prices) to gain business intelligence.

[You may also like: The Big, Bad Bot Problem]

  • Account Takeover attacks use credential stuffing and brute force techniques to gain unauthorized access to customer accounts.
  • Application DDoS attacks slow down web applications by exhausting system resources, 3rd-party APIs, inventory databases, and other critical resources.
  • API Abuse results from nefarious entities exploiting API vulnerabilities to steal sensitive data (such as personal information and business-critical data), take over user accounts, and execute denial-of-service attacks.
  • Ad Fraud is the generation of false impressions and illegitimate clicks on ads shown on publishing sites and their mobile apps. A related form of attack is affiliate marketing fraud (also known as affiliate ad fraud) which is the use of automated traffic by fraudsters to generate commissions from an affiliate marketing program.
  • Carding attacks use bad bots to make multiple payment authorization attempts to verify the validity of payment card data, expiry dates, and security codes for stolen payment card data (by trying different values). These attacks also target gift cards, coupons and voucher codes.
  • Scraping is a strategy often used by competitors who deploy bad bots on your website to steal business-critical content, product details, and pricing information.
  • Skewed Analytics is a result of bot traffic on your web property, which skews site and app metrics and misleads decision making.
  • Form Spam refers to the posting of spam leads and comments, as well as fake registrations on marketplaces and community forums.
  • Denial of Inventory is used by competitors/fraudsters to deplete goods or services in inventory without ever purchasing the goods or completing the transaction.
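Some of these attacks leave recognizable traces in transaction logs. As a purely illustrative sketch (the thresholds and log fields are assumptions, not any product's actual logic), carding can be flagged when a single session submits many distinct cards with a high decline rate:

```python
def looks_like_carding(attempts, min_cards=10, decline_ratio=0.8):
    """Flag a session whose payment attempts resemble card validation.

    attempts: list of (card_token, approved) pairs for a single session.
    Thresholds are illustrative.
    """
    if len(attempts) < min_cards:
        return False  # too little data to judge
    distinct_cards = len({card for card, _ in attempts})
    declines = sum(1 for _, approved in attempts if not approved)
    return distinct_cards >= min_cards and declines / len(attempts) >= decline_ratio
```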

Industry-wise Impact of Bot Traffic

To illustrate the impact of bad bots, we aggregated all the bad bot traffic that was blocked by our Bot Manager during Q2 and Q3 of 2018 across four industries selected from our diverse customer base: E-commerce, Real Estate, Classifieds & Online Marketplaces, and Media & Publishing. While the prevalence of bad bots can vary considerably over time and even within the same industry, our data shows that specific types of bot attacks tend to target certain industries more than others.

[You may also like: Adapting Application Security to the New World of Bots]

E-Commerce

Intent-wise distribution of bad bot traffic on E-commerce sites (in %)

Bad bots target e-commerce sites to carry out a range of attacks — such as scraping, account takeovers, carding, scalping, and denial of inventory. However, the most prevalent bad bot threat encountered by our e-commerce customers during our study was attempted affiliate fraud. Bad bot traffic made up roughly 55% of the overall traffic on pages that contain links to affiliates. Content scraping and carding were the most prevalent bad bot threats to e-commerce portals two to five years ago, but the latest data indicates that attempts at affiliate fraud and account takeover are growing rapidly.

Real Estate

Intent-wise distribution of bad bot traffic on Real Estate sites (in %)

Bad bots often target real estate portals to scrape listings and the contact details of realtors and property owners. We are also seeing growing volumes of form spam and fake registrations, which have historically been the biggest bot-driven problems on these portals. Bad bots comprised 42% of total traffic on pages with forms in the real estate sector. These malicious activities anger advertisers, reduce marketing ROI and conversions, and produce skewed analytics that hinder decision making. Bad bot traffic also strains web infrastructure, degrades the user experience, and increases operational expenses.

Classifieds & Online Marketplaces

Intent-wise distribution of bad bot traffic on Classifieds sites (in %)

Along with real estate businesses, classifieds sites and online marketplaces are among the biggest targets for content and price scrapers. Their competitors use bad bots not only to scrape their exclusive ads and product prices to illegally gain a competitive advantage, but also to post fake ads and spam web forms to access advertisers’ contact details. In addition, bad bot traffic strains servers, third-party APIs, inventory databases and other critical resources, creates application DDoS-like situations, and distorts web analytics. Bad bot traffic accounted for over 27% of all traffic on product pages from where prices could be scraped, and nearly 23% on pages with valuable content such as product reviews, descriptions, and images.

Media & Publishing

Intent-wise distribution of bad bot traffic on Media & Publishing sites (in %)

More than ever, digital media and publishing houses are scrambling to deal with bad bots that carry out automated attacks such as scraping of proprietary content and ad fraud. The industry is beset with high levels of ad fraud, which hurts advertisers and publishers alike. Comment spam often derails discussions and results in negative user experiences. Bot traffic also inflates traffic metrics and prevents marketers from gaining accurate insights. Over the six-month period that we analyzed, bad bots accounted for 18% of overall traffic on pages with high-value content, 10% on ads, and nearly 13% on pages with forms.

As we can see, security chiefs across a range of industries are facing increasing volumes and types of bad bot attacks. What can they do to mitigate malicious bots that are rapidly evolving in ways that make them significantly harder to detect? Conventional security systems that rely on rate-limiting and signature-matching approaches were never designed to detect human-like bad bots that rapidly mutate and operate in widely-distributed botnets using ‘low and slow’ attack strategies and a multitude of (often hijacked) IP addresses.

The core challenge for any bot management solution, then, is to detect every visitor’s intent to help differentiate between human and malicious non-human traffic. As more bad bot developers incorporate artificial intelligence (AI) to make human-like bots that can sneak past security systems, any effective countermeasures must also leverage AI and machine learning (ML) techniques to accurately detect the most advanced bad bots.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Attack Mitigation, Security

The Big, Bad Bot Problem

March 5, 2019 — by Ben Zilberman0

AdobeStock_103497099-960x559.jpeg

Roughly half of today’s internet traffic is non-human (i.e., generated by bots). While some are good—like those that crawl websites for web indexing, content aggregation, and market or pricing intelligence—others are “bad.” These bad bots (roughly 26% of internet traffic) disrupt service, steal data and perform fraudulent activities. And they target all channels, including websites, APIs and mobile applications.

Bad Bots = Bad Business

Bots represent a problem for businesses, regardless of industry (though travel and e-commerce see the highest percentage of “bad” bot traffic). Nonetheless, many organizations, especially large enterprises, are focused on conventional cyber threats and solutions, and underestimate the impact bots can have on their business, which is quite broad and goes beyond security alone.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Indeed, the far-ranging business impacts of bots means “bad” bot attacks aren’t just a problem for IT managers, but for C-level executives as well. For example, consider the following scenarios:

  • Your CISO is exposed to account takeover, Web scraping, DoS, fraud and inventory hold-ups;
  • Your CRO is concerned when bots act as faux buyers, holding inventory for hours or days, representing a direct loss of revenue;
  • Your COO invests more in capacity to accommodate the growing load of faux traffic;
  • Your CFO must compensate customers who were victims of fraud via account takeovers and/or stolen payment information, as well as any data privacy regulatory fines and/or legal fees, depending on scale;
  • Your CMO relies on analytics tools and affiliate services whose data is skewed by malicious bot activity, leading to biased decisions.

The Evolution of Bots

For those organizations that do focus on bots, the overwhelming majority (79%, according to Radware’s research) can’t definitively distinguish between good and bad bots, and sophisticated, large-scale attacks often go undetected by conventional mitigation systems and strategies.

[You may also like: Are Your Applications Secure?]

To complicate matters, bots evolve rapidly. They are now in their 4th generation of sophistication, with evasion techniques so advanced they require the most powerful technology to combat them.

  • Generation 1 – Basic scripts making cURL-like requests from a small number of IP addresses. These bots can’t store cookies or execute JavaScript, and can be easily detected and mitigated by blacklisting their IP address and User-Agent combination.
  • Generation 2 – These bots leverage headless browsers such as PhantomJS and can store cookies and execute JavaScript. They require a more sophisticated, IP-agnostic approach such as device fingerprinting, which collects their unique combination of browser and device characteristics — such as the OS, JavaScript variables, session and cookie info, etc.
  • Generation 3 – These bots use full-fledged browsers and can simulate basic human-like patterns during interactions, like simple mouse movements and keystrokes. This behavior makes them difficult to detect; these bots normally bypass traditional security solutions, requiring a more sophisticated approach than blacklisting or fingerprinting.
  • Generation 4 – These bots are the most sophisticated. They use more advanced human-like interaction characteristics (so shallow-interaction based detection yields False Positives) and are distributed across tens of thousands of IP addresses. And they can carry out various violations from various sources at various (random) times, requiring a high level of intelligence, correlation and contextual analysis.
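The device fingerprinting mentioned for Generation 2 can be pictured as hashing a canonicalized set of client-side attributes into a stable, IP-agnostic identifier. The attribute names below are illustrative; real fingerprinting collects far more signals:

```python
import hashlib

def device_fingerprint(attrs):
    """Hash a dict of browser/device characteristics into one identifier."""
    # Sort keys so the same attributes always produce the same string
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Because the identifier depends only on the collected attributes, the same bot reappearing from a new IP address still produces the same fingerprint, which is what makes the approach IP-agnostic.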

[You may also like: Attackers Are Leveraging Automation]

It’s All About Intent

Organizations must make an accurate distinction between human and bot-based traffic and, even further, distinguish between “good” and “bad” bots. Why? Because sophisticated bots that mimic human behavior bypass CAPTCHA and other challenges, dynamic IP attacks render IP-based protection ineffective, and third- and fourth-generation bots force the use of behavioral analysis capabilities. The challenge is detection at high precision, so that genuine users aren’t affected.

To ensure precision in detecting and classifying bots, the solution must identify the intent of the attack. Yesterday, Radware announced its Bot Manager solution, the result of its January 2019 acquisition of ShieldSquare, which does just that. By leveraging patented Intent-based Deep Behavior Analysis, Radware Bot Manager detects the intent behind attacks and provides accurate classifications of genuine users, good bots and bad bots—including those pesky fourth generation bots. Learn more about it here.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now