
Attack Mitigation · Security

The Big, Bad Bot Problem

March 5, 2019 — by Ben Zilberman

Roughly half of today’s internet traffic is non-human (i.e., generated by bots). While some are good—like those that crawl websites for web indexing, content aggregation, and market or pricing intelligence—others are “bad.” These bad bots (roughly 26% of internet traffic) disrupt service, steal data and perform fraudulent activities. And they target all channels, including websites, APIs and mobile applications.

Bad Bots = Bad Business

Bots represent a problem for businesses, regardless of industry (though travel and e-commerce see the highest percentage of “bad” bot traffic). Nonetheless, many organizations, especially large enterprises, focus on conventional cyber threats and solutions and underestimate the impact bots can have on their business—an impact that is quite broad and extends well beyond security.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Indeed, the far-ranging business impacts of bots mean “bad” bot attacks aren’t just a problem for IT managers, but for C-level executives as well. For example, consider the following scenarios:

  • Your CISO must defend against account takeover, web scraping, DoS attacks, fraud and inventory hold-ups;
  • Your CRO is concerned when bots act as faux buyers, holding inventory for hours or days, representing a direct loss of revenue;
  • Your COO invests more in capacity to accommodate the growing load from faux traffic;
  • Your CFO must compensate customers who were victims of fraud via account takeovers and/or stolen payment information, as well as any data privacy regulatory fines and/or legal fees, depending on scale;
  • Your CMO relies on analytics tools and affiliate services skewed by malicious bot activity, leading to biased decisions.

The Evolution of Bots

For those organizations that do focus on bots, the overwhelming majority (79%, according to Radware’s research) can’t definitively distinguish between good and bad bots, and sophisticated, large-scale attacks often go undetected by conventional mitigation systems and strategies.

[You may also like: Are Your Applications Secure?]

To complicate matters, bots evolve rapidly. They are now in their fourth generation of sophistication, with evasion techniques so advanced that only the most capable technology can combat them.

  • Generation 1 – Basic scripts making cURL-like requests from a small number of IP addresses. These bots can’t store cookies or execute JavaScript and can be easily detected and mitigated by blacklisting their IP address and User-Agent combinations.
  • Generation 2 – These bots leverage headless browsers such as PhantomJS and can store cookies and execute JavaScript. They require a more sophisticated, IP-agnostic approach such as device fingerprinting: collecting each client’s unique combination of browser and device characteristics—OS, JavaScript variables, session and cookie information, and so on.
  • Generation 3 – These bots use full-fledged browsers and can simulate basic human-like patterns during interactions, like simple mouse movements and keystrokes. This behavior makes them difficult to detect; they normally bypass traditional security solutions, requiring a more sophisticated approach than blacklisting or fingerprinting.
  • Generation 4 – These bots are the most sophisticated. They exhibit advanced human-like interaction characteristics (so shallow interaction-based detection yields false positives) and are distributed across tens of thousands of IP addresses. They can carry out various violations from various sources at various (random) times, requiring a high level of intelligence, correlation and contextual analysis to detect.

[You may also like: Attackers Are Leveraging Automation]

It’s All About Intent

Organizations must make an accurate distinction between human and bot-based traffic, and further, distinguish between “good” and “bad” bots. Why? Because sophisticated bots that mimic human behavior bypass CAPTCHA and other challenges, dynamic IP attacks render IP-based protection ineffective, and third- and fourth-generation bots necessitate behavioral analysis capabilities. The challenge is detecting bots with high precision, so that genuine users aren’t affected.

To ensure precision in detecting and classifying bots, the solution must identify the intent of the attack. Yesterday, Radware announced its Bot Manager solution, the result of its January 2019 acquisition of ShieldSquare, which does just that. By leveraging patented Intent-based Deep Behavior Analysis, Radware Bot Manager detects the intent behind attacks and provides accurate classifications of genuine users, good bots and bad bots—including those pesky fourth generation bots. Learn more about it here.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Ben Zilberman

Ben Zilberman is a product-marketing manager in Radware’s security team. In this role, Ben specializes in application security and threat intelligence, working closely with Radware’s Emergency Response and research teams to raise awareness of high-profile and impending attacks. Ben has diverse experience in network security, including firewalls, threat prevention, web security and DDoS technologies. Prior to joining Radware, Ben served as a trusted advisor at Check Point Software Technologies, where he led partnerships, collaborations, and campaigns with system integrators, service providers, and cloud providers. Ben holds a BA in Economics and an MBA from Tel Aviv University.
