The need for bot management is fueled by the rise in automated attacks. In the early days, the use of bots was limited to small-scale scraping attempts or spamming. Today, things are vastly different: bots are used to take over user accounts, perform DDoS attacks, abuse APIs, and scrape unique content and pricing information, among other things.
Despite these serious threats, are enterprise businesses adopting bot management solutions? The answer is no; many are still in denial. These businesses try to restrain bots with in-house solutions, putting user security at risk. In the study Development of In-house Bot Management Solutions and their Pitfalls, security researchers from Radware’s Innovation Center found that managing bots through in-house resources does more harm than good.
While bad bots accounted for 22.39% of actual traffic, advanced in-house bot management solutions detected only 11.54% of it as bad. Not only did these solutions miss most bad bots, but nearly 50% of the 11.54% they did flag were false positives.
So why do in-house bot management solutions fail? Before diving into the reasons, let’s look at a few critical factors.
More Than Half of Bad Bots Originate from Smaller Countries
When comparing countries with the highest percentage of bot traffic as part of the total outbound traffic, many of the nations are very small. For example, Andorra is a tiny principality in Europe, known as a tax shelter. Because Andorra isn’t part of the European Union (EU), it has no obligation to share the data it stores. Thus, attackers utilize servers located in Andorra to launch bot attacks because data is sheltered.
Cyber attackers now leverage advanced technologies to cycle through thousands of IPs and evade geography-based traffic filtering. When bots emanate from diverse geographical locations, solutions built on IP- or geography-based filtering heuristics become useless. Detection instead requires understanding the intent of your visitors in order to identify the suspect ones.
One-Third of Bad Bots Can Mimic Human Behavior
Bot management is complex and requires dedicated technology backed by experts with deep knowledge of good and bad bot behaviors. Sophisticated bad bots can mimic human behavior (such as mouse movements and keystrokes) to evade existing security systems.
Sophisticated bots are distributed over thousands of IP addresses or device IDs and can connect through random IPs to evade detection. Their evasion doesn’t stop there: these bots are programmed to anticipate the measures you can take to stop them. Beyond rotating random IP addresses and spoofing geographical locations, they cycle through different combinations of user agents to evade in-house security measures.
In-house solutions don’t have visibility into different types of bots, and that’s where they fail. These solutions work based on the data collected from internal resources and lack global threat intelligence. Bot management is a niche space and requires a comprehensive understanding and continuous research to keep up with notorious cybercriminals.
Organizations across various industries deploy in-house measures as their first mitigation step when facing bad bots. To their dismay, in-house solutions often fail to recognize sophisticated bot patterns. The following measures offer a starting point:
Deploy Challenge-Response Authentication: Challenge-response authentication helps you filter first-generation bots. There are different types of challenge-response authentications, CAPTCHAs being the most widely used. However, challenge-response authentication can only help in filtering outdated user agents/browsers and basic automated scripts and can’t stop sophisticated bots that can mimic human behavior.
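To make the idea concrete, one simple form of challenge-response is a proof-of-work puzzle: the server issues a nonce, and only a client that actually executes the served challenge script (e.g., JavaScript in a real browser engine) computes the answer. The sketch below is a minimal, hypothetical illustration in Python, not a production CAPTCHA; the function names and the `DIFFICULTY` value are assumptions.

```python
import hashlib
import os

# Assumed difficulty: required hex-zero prefix on the hash.
# Higher values cost legitimate clients more CPU time per challenge.
DIFFICULTY = 3

def issue_challenge() -> str:
    """Server side: hand the client a random nonce."""
    return os.urandom(16).hex()

def solve(nonce: str) -> int:
    """Client side: what the served script would brute-force in the browser.
    Basic automated scripts that never execute the challenge fail here."""
    counter = 0
    while not hashlib.sha256(f"{nonce}:{counter}".encode()) \
            .hexdigest().startswith("0" * DIFFICULTY):
        counter += 1
    return counter

def verify(nonce: str, counter: int) -> bool:
    """Server side: a single cheap hash checks the client's answer."""
    digest = hashlib.sha256(f"{nonce}:{counter}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)
```

As the post notes, a challenge like this only filters clients that cannot or will not run the challenge; a sophisticated bot driving a real browser solves it as easily as a human does.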
Implement Strict Authentication Mechanism on APIs: With the widespread adoption of APIs, bot attacks on poorly protected APIs are increasing. APIs typically only verify the authentication status, but not the authenticity of the user. Attackers exploit these flaws in various ways (including session hijacking and account aggregation) to imitate genuine API calls. Implementing strict authentication mechanisms on APIs can help to prevent security breaches.
Monitor Failed Login Attempts and Sudden Spikes in Traffic: Cyber attackers deploy bad bots to perform credential stuffing and credential cracking attacks on login pages. Because such attacks try many combinations of user IDs and passwords, they drive up the number of failed login attempts; the presence of bad bots on your website also produces sudden traffic spikes. Monitoring both signals can help you take pre-emptive measures before bad bots penetrate your web applications.
Deploy a Dedicated Bot Management Solution: In-house measures, such as the practices mentioned above, provide basic protection but do not ensure the safety of your business-critical content, user accounts and other sensitive data. Sophisticated third- and fourth-generation bots, which now account for 37% of bad-bot traffic, can be distributed over thousands of IP addresses and can attack your business in multiple ways. They can execute low and slow attacks or make large-scale distributed attacks that can result in downtime. A dedicated bot management solution facilitates real-time detection and mitigation of such sophisticated, automated activities.