When it comes to detection and mitigation, security and medical treatment have more in common than you may think. Both require careful evaluation of the risks, trade-offs and implications of false positives and false negatives.
In both disciplines, it’s critical to use the right treatment or tool for the problem at hand. Taking antibiotics when you have a viral infection can introduce unwanted side effects and does nothing to resolve your illness. Similarly, using CAPTCHA isn’t a cure-all for every bot attack. It simply won’t work for some bot types, and if you deploy it broadly, it’s sure to cause negative customer experience “side effects.”
And in both medicine and security, treatment is rarely a one-size-fits-all exercise. Treating or mitigating a problem is an entirely different exercise from diagnosing or detecting it. Figuring out the “disease” at hand may be long and complex, but effective mitigation can be surprisingly simple. It depends on several variables, and it requires expert knowledge, skills and judgment.
Block or Manage?
Blocking bots may seem like the obvious approach to mitigation; however, mitigation isn’t always about eradicating bots. Instead, you can focus on managing them. What follows is a roundup of mitigation techniques worth considering.
Feed fake data to the bot. Keep the bot active and allow it to continue attempting to attack your app, but rather than replying with real content, reply with fake data. For example, you could respond with modified values, such as incorrect prices, so the bot receives the value you want it to see rather than the real one. Another option is to redirect the bot to a similar fake app, where content is reduced and simplified and the bot is unable to access your original content.
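The fake-data tactic can be sketched as a simple response handler. This is a minimal illustration, not a production implementation: the `is_bot` flag is assumed to come from an upstream detection layer, and the SKUs and prices are invented for the example.

```python
# Sketch: serve fake pricing data to clients classified as bots.
# The is_bot flag is assumed to come from an upstream detection layer;
# SKUs and prices here are illustrative.
import random

REAL_PRICES = {"sku-100": 49.99, "sku-200": 129.00}

def price_response(sku: str, is_bot: bool) -> dict:
    """Return the real price for humans, a plausible fake for bots."""
    real = REAL_PRICES[sku]
    if not is_bot:
        return {"sku": sku, "price": real}
    # Feed the scraper a believable but wrong value so it keeps
    # consuming this endpoint instead of adapting its behavior.
    fake = round(real * random.uniform(0.7, 1.3), 2)
    return {"sku": sku, "price": fake}
```

The key design choice is that the fake value stays plausible; a wildly wrong price would tip off the bot operator that the response is manipulated.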
Challenge the bot with a visible CAPTCHA. CAPTCHA can function as an effective mitigation tool in some scenarios, but you must use it carefully. If detection is not effective and accurate, the use of CAPTCHA could have a significant usability impact. Since CAPTCHA is a challenge by nature, it may also help improve the quality of detection. After all, clients who resolve a CAPTCHA are more than likely not bots. On the other hand, sophisticated bots may be able to resolve CAPTCHA. Consequently, it is not a bulletproof solution.
Use throttling. When an attack source is persistently attacking your apps, throttling may be an effective approach: it slows the attacker down while still allowing the source access to the application in case it turns out to be a false positive.
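A common way to implement per-source throttling is a token bucket, sketched below. The rate and capacity are illustrative; in a real deployment the buckets would typically be keyed by source and kept in shared storage rather than in process memory.

```python
# Sketch: per-source throttling with a token bucket.
# rate/capacity values are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request is delayed or dropped, not hard-blocked
```

Unlike a hard block, a throttled source that was misclassified still gets through, just at a reduced rate.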
Implement an invisible challenge. Invisible challenges can involve expecting the client to move the mouse or type data into mandatory form fields, actions that a simple bot would be unable to complete.
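A classic server-side form of invisible challenge is a honeypot form field: a field hidden from humans with CSS that naive bots, which fill in every field they find, complete anyway. The field name `website_url` below is an assumption for illustration.

```python
# Sketch: honeypot field as an invisible challenge. The field name
# "website_url" is an illustrative assumption; it would be hidden
# via CSS so humans never fill it, while naive bots do.
def looks_like_bot(form: dict) -> bool:
    """Flag submissions that filled the hidden honeypot field."""
    return bool(form.get("website_url", "").strip())
```

This catches only unsophisticated bots; like the other techniques here, it works best layered with behavioral signals such as mouse movement and typing cadence.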
Block the source. When a source is blocked, there’s no need to process its traffic, no need to apply protection rules and no logs to store. Considering that bots can generate more than 90% of traffic for highly attacked targets and applications, the cost savings can be significant, making this appear to be the most effective and cost-efficient option. The bad news? A persistent attacker that updates its bot code frequently may find this mitigation easy to identify and overcome. In this way, a simple first-generation bot can evolve into a more sophisticated bot that is challenging to detect and block in future attack phases.
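Blocking at the source typically means an early check before any rule processing or logging, which is where the cost savings come from. The sketch below uses Python's standard `ipaddress` module; the blocklist entries are illustrative.

```python
# Sketch: drop blocked sources before any rule processing or logging.
# Blocklist entries are illustrative; real lists are usually managed
# dynamically and pushed to the network edge.
import ipaddress

BLOCKED_NETS = [ipaddress.ip_network(n)
                for n in ("203.0.113.0/24", "198.51.100.7/32")]

def should_drop(source_ip: str) -> bool:
    """Return True if traffic from this source should be discarded early."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in BLOCKED_NETS)
```

The trade-off the article notes applies directly: a blocked source gets a clear, immediate signal that it was detected, which is exactly the feedback a persistent operator needs to rotate IPs or rewrite the bot.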