The need for bot management is fueled by the rise in automated attacks. In the early days, bots were limited to small-scale scraping attempts or spamming. Today, things are vastly different: bots are used to take over user accounts, perform DDoS attacks, abuse APIs, scrape unique content and pricing information, and more. In its “Hype Cycle for Application Security, 2018,” Gartner placed bot management at the Peak of Inflated Expectations, under the high-benefit category.
Despite these serious threats, are enterprise businesses adopting bot management solutions? The answer is no. Many are still in denial, trying to restrain bots with in-house resources and putting user security at risk. In a recent study, “Development of In-house Bot Management Solutions and Their Pitfalls,” security researchers from ShieldSquare found that managing bots with in-house resources does more harm than good.
Against an actual bad bot traffic share of 22.39%, advanced in-house bot management solutions detected only 11.54%. Not only did these solutions miss most of the bad bots, but nearly half of the 11.54% they did flag were false positives.
So why do in-house bot management solutions fail? Before diving into the reasons, let’s look at a few critical factors.
More Than Half of Bad Bots Originate From the U.S.
As figure 2 shows (see below), 56.4% of bad bots originated from the U.S. in Q1 2019. Bot herders know the U.S. is the epicenter of business, and routing their traffic through U.S.-based IPs helps them evade geography-based traffic filtering.
For example, many organizations that leverage in-house resources to restrain bots block the countries where they don’t do business. Or they block countries such as Russia, suspecting that’s where most bad bots originate. In fact, the opposite is true: only 2.6% of bad bots originated from Russia in Q1 2019.
Cyber attackers now leverage advanced technologies to cycle through thousands of IPs and evade geography-based traffic filtering. When bots emanate from diverse geographic locations, solutions based on IP or geography filtering heuristics become useless. Detection requires understanding the intent of your visitors in order to flag the suspicious ones.
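To make that blind spot concrete, here is a minimal sketch of the kind of geography-based filter described above (the denylist is hypothetical; the point is structural): a country filter cannot see intent, so any bot routed through a U.S. proxy passes untouched.

```python
# Naive geography-based filtering of the kind many in-house solutions rely on.
# The denylist below is a hypothetical example.
BLOCKED_COUNTRIES = {"RU", "CN", "VN"}

def allow_request(ip_country: str) -> bool:
    """Allow any request whose source country is not on the denylist."""
    return ip_country not in BLOCKED_COUNTRIES
```

A bad bot exiting through a U.S. residential proxy calls in as `allow_request("US")` and is waved through — exactly the gap the Q1 2019 numbers expose.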
One-Third of Bad Bots Can Mimic Human Behavior
In Q1 2019 alone, 37% of bad bots were human-like. These bots can mimic human behavior (such as mouse movements and keystrokes) to evade existing security systems (Generation 3 and Generation 4 bad bots, as shown in figure 3).
Sophisticated bots are distributed over thousands of IP addresses or device IDs and can connect through random IPs to evade detection. Their evasion doesn’t stop there: these bots are programmed to anticipate the countermeasures you might take. Beyond random IP addresses and spoofed geographic locations, they also rotate through different combinations of user agents to slip past in-house security measures.
In-house solutions don’t have visibility into different types of bots, and that’s where they fail. These solutions work based on the data collected from internal resources and lack global threat intelligence. Bot management is a niche space and requires a comprehensive understanding and continuous research to keep up with notorious cybercriminals.
Organizations across industries deploy in-house measures as their first mitigation step against bad bots. To their dismay, these measures often fail to recognize sophisticated bot patterns.
Deploy Challenge-Response Authentication: Challenge-response authentication helps you filter first-generation bots. There are different types of challenge-response authentication; CAPTCHAs are the most widely used. However, challenge-response authentication can only filter out outdated user agents/browsers and basic automated scripts; it can’t stop sophisticated bots that mimic human behavior.
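As an illustration of why this only stops basic scripts, here is a minimal proof-of-work-style challenge-response sketch (the difficulty value and function names are hypothetical): a real browser can compute the answer in client-side script, while a bare HTTP bot that never executes the page’s code cannot return a valid nonce. A bot driving a full browser engine, however, solves it just as easily — which is the generational limit described above.

```python
import hashlib
import os

DIFFICULTY = 2  # required leading zero hex digits; hypothetical tuning knob

def _digest(challenge: str, nonce: int) -> str:
    return hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()

def issue_challenge() -> str:
    """Server side: hand the client a random challenge string."""
    return os.urandom(8).hex()

def solve(challenge: str) -> int:
    """Client side (normally browser JS): brute-force a nonce so that
    sha256(challenge + nonce) starts with DIFFICULTY zero digits."""
    nonce = 0
    while not _digest(challenge, nonce).startswith("0" * DIFFICULTY):
        nonce += 1
    return nonce

def verify(challenge: str, nonce: int) -> bool:
    """Server side: one cheap hash confirms the client did the work."""
    return _digest(challenge, nonce).startswith("0" * DIFFICULTY)
```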
Implement Strict Authentication Mechanism on APIs: With the widespread adoption of APIs, bot attacks on poorly protected APIs are increasing. APIs typically only verify the authentication status, but not the authenticity of the user. Attackers exploit these flaws in various ways (including session hijacking and account aggregation) to imitate genuine API calls. Implementing strict authentication mechanisms on APIs can help to prevent security breaches.
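One way to check the authenticity of the caller rather than mere authentication status is to require each API client to sign requests with a per-client secret. The sketch below (function names and the 300-second skew window are hypothetical choices) rejects both tampered and replayed calls:

```python
import hashlib
import hmac
import time

def sign_request(method: str, path: str, timestamp: int, secret: bytes) -> str:
    """Client side: bind the signature to the exact request and its time."""
    message = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def authenticate(method: str, path: str, timestamp: int,
                 signature: str, secret: bytes, max_skew: int = 300) -> bool:
    """Server side: reject stale timestamps (replay) and bad signatures."""
    if abs(time.time() - timestamp) > max_skew:
        return False
    expected = sign_request(method, path, timestamp, secret)
    return hmac.compare_digest(expected, signature)  # constant-time compare
```

Because the signature covers method, path and timestamp, a hijacked session token alone is no longer enough to forge a genuine-looking API call.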
Monitor Failed Login Attempts and Sudden Spikes in Traffic: Cyber attackers deploy bad bots to perform credential stuffing and credential cracking attacks on login pages. Because these attacks try many combinations of user IDs and passwords, they drive up the number of failed login attempts. The presence of bad bots on your website also causes sudden traffic spikes. Monitoring failed login attempts and sudden spikes in traffic can help you take pre-emptive measures before bad bots penetrate your web applications.
Deploy a Dedicated Bot Management Solution: In-house measures such as the practices mentioned above provide basic protection but do not ensure the safety of your business-critical content, user accounts and other sensitive data. Sophisticated third- and fourth-generation bots, which now account for 37% of bad bot traffic, can be distributed over thousands of IP addresses and can attack your business in multiple ways, executing low-and-slow attacks or mounting large-scale distributed attacks that result in downtime. A dedicated bot management solution facilitates real-time detection and mitigation of such sophisticated, automated activities.
Note: This piece originally appeared in Security Boulevard.