APIs are often highly vulnerable, making them frequent attack targets. They also play a major role, and fuel major risks, when it comes to bot management.
Bots now leverage full-fledged browsers and are programmed to mimic human behavior in the way they traverse a website or application, move the mouse, and tap and swipe on mobile devices, generally trying to simulate real visitors in order to evade security systems.
Impact: These bots are generally used to carry out scraping, carding and form spam.
These bots use full-fledged browsers — dedicated or hijacked by malware — for their operation. They can simulate basic human-like interactions, such as simple mouse movements and keystrokes. However, they may fail to demonstrate human-like randomness in their behavior.
Impact: Third-generation bots are used for account takeover, application DDoS, API abuse, carding and ad fraud, among other purposes.
Mitigation: Third-generation bots are difficult to detect based on device and browser characteristics. Interaction-based user behavioral analysis is required to detect such bots, which generally follow a programmatic sequence of URL traversals.
The latest generation of bots has advanced human-like interaction characteristics — including moving the mouse pointer in a random, human-like pattern instead of in straight lines. These bots can also change their user agents (UAs) while rotating through thousands of IP addresses. There is growing evidence that bot developers are carrying out “behavior hijacking” — recording the way real users touch and swipe on hijacked mobile apps to more closely mimic human behavior on a website or app. Behavior hijacking makes these bots much harder to detect, as their activities cannot easily be differentiated from those of real users. What’s more, their wide distribution is attributable to the large number of users whose browsers and devices have been hijacked.
Impact: Fourth-generation bots are used for account takeover, application DDoS, API abuse, carding and ad fraud.
Mitigation: These bots are massively distributed across tens of thousands of IP addresses, often carrying out “low and slow” attacks to slip past security measures. Detecting these bots based on shallow interaction characteristics, such as mouse movement patterns, will result in a high number of false positives, so prevailing techniques are inadequate for mitigating them. Machine learning-based technologies such as intent-based deep behavioral analysis (IDBA) — which uses semi-supervised machine learning models to identify the intent of bots with the highest precision — are required to accurately detect fourth-generation bots with zero false positives.
Such analysis spans the visitor’s journey through the entire web property — with a focus on interaction patterns, such as mouse movements, scrolling and taps, along with the sequence of URLs traversed, the referrers used and the time spent at each page. This analysis should also capture additional parameters related to the browser stack, IP reputation, fingerprints and other characteristics.
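To make the idea concrete, journey-level feature extraction along these lines might be sketched as follows. This is a simplified, stdlib-only Python illustration; the event format and the specific features are assumptions for the sake of the example, not any vendor's actual model:

```python
# Sketch: turning a visitor's session events into journey-level features
# for behavioral analysis. Event fields and feature choices are illustrative.
from statistics import mean, pstdev

def journey_features(events):
    """events: list of dicts like
    {"t": epoch_secs, "type": "pageview"|"mouse"|"scroll", "url": ..., "referrer": ...}
    Returns a flat feature dict combining interaction and navigation signals."""
    pages = [e for e in events if e["type"] == "pageview"]
    moves = [e for e in events if e["type"] == "mouse"]

    # Time spent per page, approximated by gaps between successive pageviews
    dwell = [b["t"] - a["t"] for a, b in zip(pages, pages[1:])]

    return {
        "pages_visited": len(pages),
        "distinct_urls": len({e["url"] for e in pages}),
        "entry_has_referrer": any(e.get("referrer") for e in pages[:1]),
        "mean_dwell_secs": mean(dwell) if dwell else 0.0,
        "dwell_stdev": pstdev(dwell) if len(dwell) > 1 else 0.0,  # bots pace uniformly
        "mouse_events": len(moves),
    }

session = [
    {"t": 0, "type": "pageview", "url": "/", "referrer": "https://search.example"},
    {"t": 2, "type": "mouse"},
    {"t": 8, "type": "pageview", "url": "/pricing", "referrer": "/"},
    {"t": 9, "type": "pageview", "url": "/pricing?page=2", "referrer": "/pricing"},
]
print(journey_features(session))
```

The point of working at this level is that each signal alone (mouse activity, dwell time, URL sequence) is easy for a bot to fake, while a consistent human-looking combination across a whole journey is much harder.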
When it comes to detection and mitigation, security and medical treatment have more in common than you may think. Both require careful evaluation of the risks, trade-offs and implications of false positives and false negatives.
In both disciplines, it’s critical to use the right treatment or tool for the problem at hand. Taking antibiotics when you have a viral infection can introduce unwanted side effects and does nothing to resolve your illness. Similarly, using CAPTCHA isn’t a cure-all for every bot attack. It simply won’t work for some bot types, and if you deploy it broadly, it’s sure to cause negative customer experience “side effects.”
And in both medicine and security, treatment is rarely a one-size-fits-all exercise. Treating or mitigating a problem is an entirely different exercise from diagnosing or detecting it. Figuring out the “disease” at hand may be long and complex, but effective mitigation can be surprisingly simple. It depends on several variables — and requires expert knowledge, skills and judgment.
Block or Manage?
Blocking bots may seem like the obvious approach to mitigation; however, mitigation isn’t always about eradicating bots. Instead, you can focus on managing them. What follows is a rundown of mitigation techniques worth considering.
Feed fake data to the bot. Keep the bot active and allow it to continue attempting to attack your app. But rather than replying with real content, reply with fake data. You could reply with modified faked values (that is, wrong pricing values). In this way, you manipulate the bot to receive the value you want rather than the real price. Another option is to redirect the bot to a similar fake app, where content is reduced and simplified and the bot is unable to access your original content.
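As a rough sketch of the fake-data approach, a pricing endpoint might consult the bot-detection verdict and poison the response. The function names, the detection stub and the price-skew range here are illustrative assumptions, not a real implementation:

```python
# Sketch: responding to a detected scraper with plausible but fake pricing,
# instead of blocking it outright. Detection stub and skew range are placeholders.
import random

REAL_PRICES = {"sku-100": 19.99, "sku-200": 34.50}

def is_flagged_bot(session_id, flagged):
    # Placeholder: in practice this verdict comes from your bot-detection layer.
    return session_id in flagged

def price_response(sku, session_id, flagged):
    real = REAL_PRICES[sku]
    if is_flagged_bot(session_id, flagged):
        # Serve a stable-looking but wrong price, so the bot's harvested
        # data is poisoned without tipping it off that it was detected.
        rng = random.Random(hash((sku, session_id)))
        return round(real * rng.uniform(0.7, 1.4), 2)
    return real

flagged = {"bot-abc"}
print(price_response("sku-100", "human-1", flagged))  # real price
print(price_response("sku-100", "bot-abc", flagged))  # skewed price
```

Seeding the generator per session keeps the fake price consistent across requests within a run, which makes the poisoning harder for the scraper to spot than pure random noise.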
Challenge the bot with a visible CAPTCHA. CAPTCHA can function as an effective mitigation tool in some scenarios, but you must use it carefully. If detection is not effective and accurate, the use of CAPTCHA could have a significant usability impact. Since CAPTCHA is a challenge by nature, it may also help improve the quality of detection. After all, clients who resolve a CAPTCHA are more than likely not bots. On the other hand, sophisticated bots may be able to resolve CAPTCHA. Consequently, it is not a bulletproof solution.
Use throttling. When an attack source is persistently attacking your apps, a throttling approach may be effective; in the event of a false positive, it still allows legitimate sources some access to the application.
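A common way to implement this is a token bucket per source, which slows a persistent attacker to a trickle while occasional legitimate requests (including false positives) still get through. A minimal sketch, with illustrative rates:

```python
# Sketch: token-bucket throttling keyed by client source IP.
# Rate and burst values are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_per_sec, burst):
        self.rate, self.burst = rate_per_sec, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}

def throttled(source_ip, rate=2.0, burst=5):
    bucket = buckets.setdefault(source_ip, TokenBucket(rate, burst))
    return not bucket.allow()

# A burst of 20 rapid requests from one source: roughly the first 5 pass,
# the rest are throttled until tokens refill.
results = [throttled("203.0.113.7") for _ in range(20)]
print(results.count(False), "allowed,", results.count(True), "throttled")
```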
Implement an invisible challenge. Invisible challenges can involve expecting the visitor to move the mouse or type data into mandatory form fields — actions that a simple bot would be unable to complete.
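One widely used form of invisible challenge pairs a honeypot form field with an interaction flag. A minimal server-side sketch, with illustrative field names:

```python
# Sketch: an "invisible challenge" using a honeypot field plus an interaction
# flag. Field names are illustrative. A human never sees (or fills) the
# CSS-hidden honeypot, and client-side script records that the mouse moved
# before submit; naive bots fail one or both checks.

def passes_invisible_challenge(form):
    # 1. Honeypot: the hidden field must stay empty; bots auto-fill every input.
    if form.get("company_website", "") != "":
        return False
    # 2. Interaction: a script sets this flag on the first mouse/touch event.
    if form.get("interaction_seen") != "1":
        return False
    return True

human = {"email": "a@example.com", "company_website": "", "interaction_seen": "1"}
bot = {"email": "x@spam.example", "company_website": "http://spam.example",
       "interaction_seen": ""}
print(passes_invisible_challenge(human), passes_invisible_challenge(bot))  # True False
```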
Block the source. When a source is blocked, there’s no need to process its traffic, no need to apply protection rules and no logs to store. Considering that bots can generate more than 90% of traffic for highly attacked targets and applications, this cost savings may be significant, making blocking appear to be the most effective and cost-efficient approach. The bad news? A persistent attack source that updates its bot code frequently may find this mitigation easy to identify and overcome. By simply updating its code, a simple first-generation bot can evolve into a more sophisticated bot that will be challenging to detect and block in future attack phases.
Web-based bots are a critical part of your business’s digital presence. They help collect content, index it, and even promote it to your customers. A bot may even be responsible for getting you here. These are good bots and they help your business grow, providing consumers with individualized, interesting content.
Not all bots have good intentions, though. In fact, roughly one quarter of the traffic on the internet comes from bad bots. Bad bots can carry out automated account takeover, inventory manipulation and content or price scraping, and can skew your analytics.
Unwanted Bot Traffic
In a recent customer engagement, we found that over 85% of their traffic was unwanted bot traffic. Not only did this create an undesirable situation for the content managers and the security team, but the company also had to overbuild its infrastructure to support that unwanted traffic. Without a way to distinguish between good bots and bad bots, it needed to support it all.
Allowing the good bots to access your website while blocking the bad ones is nearly impossible if you don’t have the right tools. Understanding the bot landscape can be a daunting task and choosing the right solution can be a challenge if you don’t know what to look for.
The bots themselves are changing too. The rise of highly sophisticated human-like bots in recent years requires more advanced techniques in detection and response than in the past.
Choosing The Right Solution
When choosing the right solution for your environment, you need to understand how a vendor’s solution identifies bots and their intent. If a threat is detected, how will the solution manage it? Do you want to block the bot, or maybe feed it fake data as a countermeasure?
Having flexible deployment options is also critical because every environment is different. Look for a bot management solution that provides easy, seamless deployment without infrastructure changes or the need to reroute traffic if you don’t want to.
Radware’s Ultimate Guide to Bot Management is a foundational resource for understanding bots, their benefits, the challenges they create, and what you should consider when deciding on a solution for bot management. At the end of this e-book, you’ll find a buyer’s checklist that will help you understand what criteria to evaluate when selecting the right bot management solution for your environment.
Businesses need to manage bot traffic and associated risks, whether for security, brand protection, revenue protection or infrastructure protection. And because most organizations can’t tell the difference between good bots and bad bots in their networks, you may not even be sure if you have a bot problem (pro tip: Radware’s Bad Bot Analyzer will help you assess the true bot activity in your environment for free).
While working on Radware’s Ultimate Guide to Bot Management, I began wondering what it would take to build a botnet.
Would I have to dive into the Darknet and find criminal hackers and marketplaces to obtain the tools to make one? How much effort would it take to build a complicated system that would avoid detection and mitigation, and what level of expertise is required to make a scraping/credential stuffing and website abuse botnet?
At Your Fingertips
What I discovered was amazing. I didn’t even need to dive into the Darknet; everything anyone would need was readily available on the public internet.
My learning didn’t end there. During this exploration, I noticed that many organizations use botnets in one form or another against their competitors or to gain a competitive advantage. Of course, I knew hackers leverage botnets for profit, but the availability of botnet-building tools makes it easy for anyone to construct botnets that can access web interfaces and APIs while disguising their location and user agents.
The use cases advertised for these toolsets range from data harvesting to account creation and account takeover, inventory manipulation, advertising fraud and a variety of ways to monetize and automate integrations into well-known IT systems.
Mobile Phone Farms
These tool designers and service providers clearly know there is a market for cyber criminality, and some are shameless about promoting it.
For example, per a recent Vice article examining mobile phone farms, companies are incentivizing traffic to their apps and content by paying users. Indeed, it appears that people can make anywhere from $100 to $300 a month per mobile phone on apps like Perk TV, Fusion TV and MyPoints, or even by categorizing shows for Netflix. They merely have to take surveys, watch television shows, categorize content or check into establishments.
More specifically, people are building mobile phone farms with cheap Android devices and used phones, and scaling up their operations to the point where they can make a couple of thousand dollars (or more!) per month. These farms can be rented out to conduct more nefarious activities, like price scraping, data harvesting, ticket purchasing, account takeover, fake article writing and social media development, hacking, launching DDoS attacks and more. To complicate matters, thanks to proxy servers and VPN tools, it has become nearly impossible to detect if a phone farm is being used against a site.
It’s not a far leap to assume that incentivized engagement may very well invite people to build botnets. How long until somebody develops an app to “rent your phone’s spare cycles” to scrape data, or watch content, write reviews, etc. (in other words, things that aren’t completely against the law) for money? Would people sign up to make extra beer money in exchange for allowing botnet operators to click on ads and look at websites for data harvesting?
I think it’s just a matter of time before this idea takes flight. Are you prepared today to protect against the sophisticated botnets? Do you have a dedicated bot management solution? When the botnets evolve into the next generation, will you be ready?
Bots touch virtually every part of our digital lives — and now account for over half of all web traffic.
This represents both a problem and a paradox. Bots can be good, and bots can be bad; removing good bots is bad and leaving bad bots can be even worse.
Having said that, few businesses, application owners, users, designers, security practitioners or network engineers can distinguish between good bots and bad bots in their operating environments.
As the speed of business continues to accelerate and automate, the ability to instantaneously distinguish legitimate automated communications from illegitimate ones will be among the most crucial security controls we can onboard.
Differentiating Between Good & Bad Bots
Indeed, the volume of automated communication over the internet has dramatically increased; according to Radware’s research, bots now generate a majority (52%) of internet traffic. But how much of that traffic is “good” vs. “bad”?
Some help populate our news feeds, tell the weather, provide stock quotes and control search rankings. We use bots to book travel, access online customer support, even to turn our lights on and off and unlock our doors.
But other bots are designed for more mischievous purposes — including account takeover, content scraping, payment fraud and denial-of-service (DoS) attacks. These bots account for as much as 26% of total internet traffic, and their attacks are often carried out by competitors looking to undermine your competitive advantage, steal your information or increase your online marketing costs.
These “bad bots” represent one of the fastest growing and gravest threats to websites, mobile applications and application programming interfaces (APIs). And they’re fueling a rise in automated attacks against businesses, driving the need for bot management.
In the early days, the use of bots was limited to small scraping attempts or spamming. Today, things are vastly different. Bots are used to take over user accounts, perform DDoS attacks, abuse APIs, scrape unique content and pricing information, increase costs of competitors, deny inventory turnover and more. It’s no surprise, then, that Gartner mentioned bot management at the peak of inflated expectations under the high benefit category in its Hype Cycle for Application Security 2018.
The ULTIMATE Guide to Bot Management
Recognizing the inescapable reality of today’s evolving bots, we have released the Ultimate Guide to Bot Management. This e-book provides an overview of evolving bot threats, outlines options for detection and mitigation, and offers a concise buyer guide to help evaluate potential bot management solutions.
From the generational leaps forward in bot design and use, to the techniques leveraged to outsmart and cloak themselves from detection, we’ve got you covered. The guide also dives into the bot problems across web, API and SDK / Mobile applications, and the most effective architectural strategies in pursuing solutions.
We hope you enjoy this tool as it becomes a must-have reference manual and provides you with the necessary map to navigate the murky waters and mayhem of bot management!
Roughly half of today’s internet traffic is non-human (i.e., generated by bots). While some are good—like those that crawl websites for web indexing, content aggregation, and market or pricing intelligence—others are “bad.”
These bad bots (roughly 26% of internet traffic) disrupt service, steal data and perform fraudulent activities. And they target all channels, including websites, APIs and mobile applications.
Watch this webcast sponsored by Radware to learn all about bots, including malicious bot traffic and what you can do to protect your organization from such threats.
Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.
This year, 82% of Radware’s C-Suite Perspectives survey respondents reported a focus on automation compared to 71% who indicated the same response in 2018. What’s driving the need for increased automation in cybersecurity solutions?
The increasing threat posed by next-generation malicious bots that mimic human behavior.
Almost half of all executives believed that their websites were extremely or very likely prone to attacks. More than one-quarter of the respondents reported that their mobile applications were attacked on a daily or more frequent basis.
Websites and mobile apps are the digital tools that customers use to interact with companies. About half of the respondents indicated that the impacts of attacks on their company’s website included stolen accounts, unauthorized access or content scraping. Two in five said that the attacks were launched by both humans and bots, while one-third attributed the attacks to humans alone.
Executives in AMER were more likely than those in other regions to say that their sites were extremely prone to attacks.
The Impacts of Bots on Business
Most respondents said that they have discussed the impact of bots on business operations at the executive level. Rankings of how frequently items regarding bots were discussed at the executive level vary by vertical.
Half of the executives acknowledged that bot attacks were a risk but were confident that their staff was managing the threat. Despite this confidence, the market for bot management solutions is still small and emerging, and is expected to experience a compound annual growth rate of 36.7% from 2017 to 2022, according to Frost & Sullivan.
Two in five said that they relied on bots to accelerate business processes and information sharing. An equal number of respondents complained about how bots influence the metrics of their business unit. AMER executives were more likely than those in APAC to say that bots are cost-effective.
Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.
Over half of all internet traffic is generated by bots — some legitimate, some malicious. Competitors and adversaries alike deploy “bad” bots that leverage different methods to achieve nefarious objectives. This includes account takeover, scraping data, denying available inventory and launching denial-of-service attacks with the intent of stealing data or causing service disruptions.
These attacks often go undetected by conventional mitigation systems and strategies because bots have evolved from basic scripts into large-scale distributed bots with human-like interaction capabilities that evade detection mechanisms. Staying ahead of this threat landscape requires more sophisticated, advanced capabilities to accurately detect and mitigate these threats. One of the key technical capabilities required to stop today’s most advanced bots is intent-based deep behavioral analysis (IDBA).
What Exactly is IDBA?
IDBA is a major step forward in bot detection technology because it performs behavioral analysis at a higher level of abstraction of intent, unlike the commonly used, shallow interaction-based behavioral analysis. For example, account takeover is an example of an intent, while “mouse pointer moving in a straight line” is an example of an interaction.
Capturing intent enables IDBA to provide significantly higher levels of accuracy to detect advanced bots. IDBA is designed to leverage the latest developments in deep learning.
More specifically, IDBA uses semi-supervised learning models to overcome the challenges of inaccurately labeled data, bot mutation and the anomalous behavior of human users. And it leverages intent encoding, intent analysis and adaptive-learning techniques to accurately detect large-scale distributed bots with sophisticated human-like interaction capabilities.
3 Stages of IDBA
A visitor’s journey through a web property needs to be analyzed in addition to the interaction-level characteristics, such as mouse movements. Using richer behavioral information, an incoming visitor can be classified as a human or bot in three stages:
- Intent encoding: The visitor’s journey through a web property is captured through signals such as mouse or keystroke interactions, URL and referrer traversals, and time stamps. These signals are encoded using a proprietary, deep neural network architecture into an intent encoding-based, fixed-length representation. The encoding network jointly achieves two objectives: to be able to represent the anomalous characteristics of completely new categories of bots and to provide greater weight to behavioral characteristics that differ between humans and bots.
- Intent analysis: Here, the intent encoding of the user is analyzed using multiple machine learning modules in parallel. A combination of supervised and unsupervised learning-based modules are used to detect both known and unknown patterns.
- Adaptive learning: The adaptive-learning module collects the predictions made by the different models and takes action on bots based on these predictions. In many cases, the action involves presenting a challenge to the visitor, like a CAPTCHA or an SMS OTP, that provides a feedback mechanism (i.e., CAPTCHA solved). This feedback is incorporated to improve the decision-making process. Decisions can be broadly categorized into two types of tasks:
- Determining thresholds: The thresholds to be chosen for anomaly scores and classification probabilities are determined through adaptive threshold control techniques.
- Identifying bot clusters: Selective incremental blacklisting is performed on suspicious clusters. The suspicion scores associated with the clusters (obtained from the collusion detector module) are used to set prior bias.
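The three stages above can be caricatured in a few lines of code. This is a deliberately toy sketch: the encoder, the scoring rules and the threshold are illustrative placeholders, not the actual IDBA models, which use deep neural encoders and semi-supervised learning.

```python
# Toy sketch of the three IDBA stages: intent encoding, intent analysis,
# and adaptive decision-making. All models and values are placeholders.
from statistics import mean

def encode_intent(session):
    """Stage 1 (intent encoding): collapse journey signals into a
    fixed-length vector. Real systems use a deep neural encoder."""
    dwell = session["dwell_times"]
    return (
        len(session["urls"]),            # breadth of traversal
        mean(dwell) if dwell else 0.0,   # pacing
        session["mouse_events"],         # interaction volume
    )

def analyze_intent(vec, labeled_bots):
    """Stage 2 (intent analysis): combine a supervised check against known
    bot encodings with an unsupervised anomaly score, run in parallel."""
    known = min(sum((a - b) ** 2 for a, b in zip(vec, kb)) for kb in labeled_bots)
    anomaly = 1.0 if vec[2] == 0 else 1.0 / (1 + vec[2])  # no interaction is anomalous
    return {"known_bot_distance": known, "anomaly_score": anomaly}

def decide(scores, threshold=0.5):
    """Stage 3 (adaptive learning): act on the combined predictions. A real
    system tunes the threshold from challenge feedback (e.g. solved CAPTCHAs)."""
    if scores["known_bot_distance"] < 1.0 or scores["anomaly_score"] > threshold:
        return "challenge"  # e.g. CAPTCHA / SMS OTP, whose outcome feeds back
    return "allow"

labeled_bots = [(40, 0.2, 0)]  # encoding of a previously confirmed scraper
bot_like = {"urls": ["/p?page=%d" % i for i in range(40)],
            "dwell_times": [0.2] * 39, "mouse_events": 0}
human_like = {"urls": ["/", "/pricing"], "dwell_times": [12.0], "mouse_events": 57}

print(decide(analyze_intent(encode_intent(bot_like), labeled_bots)))    # challenge
print(decide(analyze_intent(encode_intent(human_like), labeled_bots)))  # allow
```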
IDBA or Bust!
Current bot detection and classification methodologies are ineffective in countering the threats posed by rapidly evolving and mutating sophisticated bots.
Bot detection techniques that use interaction-based behavioral analysis can identify Level 3 bots but fail to detect the advanced Level 4 bots that have human-like interaction capabilities. The unavailability of correctly labeled data for Level 4 bots, bot mutations and the anomalous behavior of human visitors from disparate industry domains require the development of semi-supervised models that work at a higher level of abstraction of intent, unlike only interaction-based behavioral analysis.
IDBA leverages a combination of intent encoding, intent analysis and adaptive-learning techniques to identify the intent behind attacks perpetrated by massively distributed human-like bots.
Read “How to Evaluate Bot Management Solutions” to learn more.
Over 50% of web traffic consists of bots, and 89% of organizations have suffered attacks against web applications. Websites and mobile apps are two of the biggest revenue drivers for businesses and help solidify a company’s reputation with tech-savvy consumers. However, these digital engagement tools are coming under increasing threat from an array of sophisticated cyberattacks, including bots.
While a percentage of bots are used to automate business processes and tasks, others are designed for mischievous purposes, including account takeover, content scraping, payment fraud and denial-of-service attacks. Often, these attacks are carried out by competitors looking to undermine a company’s competitive advantage, steal its information or increase its online marketing costs.
When Will You Need a Bot Detection Solution?
Sophisticated, next-generation bots can evade traditional security controls and go undetected by application owners. However, their impact can be noticed, and several indicators can alert a company to malicious bot activity.
Why a WAF Isn’t an Effective Bot Detection Tool
WAFs are primarily designed to safeguard websites against application vulnerability exploitations like SQL injection, cross-site scripting (XSS), cross-site request forgery, session hijacking and other web attacks. WAFs typically feature basic bot mitigation capabilities and can block bots based on IP addresses or device fingerprinting.
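That kind of basic, signature-style blocking can be sketched as follows (the blocklist values are illustrative):

```python
# Sketch: the basic bot mitigation a WAF layer typically offers — a static
# IP blocklist plus a device-fingerprint denylist. Values are illustrative.
import ipaddress

BLOCKED_NETS = [ipaddress.ip_network("198.51.100.0/24")]
BLOCKED_FINGERPRINTS = {"d41d8cd98f00b204"}  # hash of a known bot's browser stack

def waf_allows(client_ip, fingerprint):
    ip = ipaddress.ip_address(client_ip)
    if any(ip in net for net in BLOCKED_NETS):
        return False
    if fingerprint in BLOCKED_FINGERPRINTS:
        return False
    return True

print(waf_allows("198.51.100.7", "aaaa"))             # False: blocked subnet
print(waf_allows("203.0.113.9", "d41d8cd98f00b204"))  # False: known fingerprint
print(waf_allows("203.0.113.9", "bbbb"))              # True
```

Because the lists are static, a bot that rotates IP addresses and randomizes its fingerprint passes straight through, which is exactly the gap described below.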
However, WAFs fall short when facing more advanced, automated threats. Moreover, next-generation bots use sophisticated techniques to remain undetected, such as mimicking human behavior, abusing open-source tools or generating multiple violations in different sessions.
Against these sophisticated threats, WAFs won’t get the job done.
The Benefits of Synergy
As the complexity of multi-vector cyberattacks increases, security systems must work in concert to mitigate these threats. In the case of application security, a combination of behavioral analytics to detect malicious bot activity and a WAF to protect against vulnerability exploitations and guard sensitive data is critical.
Moreover, many threats can be blocked at the network level before reaching the application servers. This not only reduces risk, but also reduces the processing loads on the network infrastructure by filtering malicious bot traffic.