Here’s How Human-like Bots Perform Online Fraud


In 2019, login pages were the prime target of fraudsters across verticals. Fraudsters use bad bots to carry out two types of online fraud: (1) account takeover, to steal PII and payment card details, and (2) fake account creation, to validate stolen payment cards (carding attacks) or cash them out.

For online businesses and their customers, the growing threat of online fraud is a real concern. Under stringent data and privacy regulations such as GDPR and CCPA, online fraud is not just a business issue but a legal challenge as well. For many organizations, a data breach can be an existential threat because of the massive fines these regulations impose.

The real challenge is not weak data or payment security on the organization's part. With every measure online merchants take to tighten security and thwart malicious activity, cybercriminals up their game and find new ways around it. Online businesses today face a tireless legion of bots that can bypass security defenses to commit fraud.

The bad bots that perform online fraud are highly sophisticated and can mimic human behavior. According to the Big Bad Bot Problem 2020 report, 62.7% of bad bots on login pages can mimic human behavior, which means they can take over user accounts or create fake accounts to perform carding or cash-out attacks. Similarly, 57.5% of bad bots on checkout pages can simulate human behavior while performing carding attacks.

Figure 1: Behavior of Bad Bots by Generation 

Online Fraud Attacks and Symptoms to Identify If You’re Under Attack 

Per the OWASP Automated Threats to Web Applications project, symptoms to watch for include a spike in failed login attempts, an unusual rate of account lockouts, and an increase in customer complaints about account hijacking.

Online Fraud During the Coronavirus Pandemic 

While the world struggles to find a cure for the coronavirus, even healthcare organizations are under cyberattack. We observed a spike in bot activity against e-commerce, entertainment, and banking, financial services, and insurance (BFSI) targets in March. During this pandemic, cybercriminals are hitting e-commerce and financial services institutions with account takeover attacks.

Figure 2: Traffic Distribution by Industry – March 2020 

A Real-World Case Study of Online Fraud  

When it comes to fraud, financial services institutions are a prime target. The following case study shows how a US credit union was targeted with large-scale, distributed account takeover attacks. Though Radware Bot Manager averted the attacks, the case illustrates just how sophisticated and distributed online fraud attacks can be.


Industry: BFSI 

Function: A credit union  

Duration of Study: 30 days  

Problem: Large-scale, distributed account takeover attacks 

Attack Surface: Login Page of web applications, mobile apps, and Authentication API 

Result: Averted by Radware Bot Manager 


Business Problem: The organization faced relentless account takeover attacks. Millions of bad bots bombarded the credit union's login page with large-scale, sophisticated credential stuffing attacks.

The Intensity of Attacks – Example A: A variety of bots with different signatures attacked the credit union's login page and authentication API during the study period. The attackers primarily used three hit patterns:

  • Attacks at constant intervals 
  • Low and slow  
  • Continuous  

Low-and-slow attacks are the most sophisticated of the three and can bypass security defenses if dedicated countermeasures are not in place.

Figure 3: Different types of bot signatures 
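As an illustrative sketch, the three hit patterns above can often be told apart from request timing alone. The heuristic below is hypothetical (it is not Radware's detection logic), and the `slow_gap` and `burst_gap` thresholds are assumptions chosen only for the example:

```python
from statistics import mean, pstdev

def classify_hit_pattern(timestamps, slow_gap=60.0, burst_gap=1.0):
    """Classify a client's request-timing pattern (illustrative heuristic only).

    timestamps: sorted request times, in seconds, for one client.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not gaps:
        return "insufficient data"
    avg = mean(gaps)
    spread = pstdev(gaps)
    if avg >= slow_gap:
        return "low and slow"        # long pauses between attempts evade rate limits
    if spread < 0.1 * avg:
        return "constant intervals"  # metronome-like timing suggests automation
    if avg <= burst_gap:
        return "continuous"          # sustained rapid-fire requests
    return "irregular"
```

A real system would combine timing with many other signals (device fingerprints, navigation behavior, IP reputation); timing alone cannot separate a patient human from a patient bot.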

The Intensity of Attacks – Example B: In this instance, a subnet of IPs (marked in blue) originating from the same internet service provider (ISP), with rotating user agents (labeled in red), was used to target the login page (authentication API). It is a case of a large-scale distributed attack in which the attackers used a single ISP to hide among genuine users and avoid being blocked by IP address.

Figure 4: Distributed bad bot pattern 
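One simple way to surface this pattern is to group login requests by subnet and count distinct user agents: a subnet sending heavy traffic under many rotating user agents stands out from normal users. The sketch below is illustrative only; the function name and thresholds are assumptions, not part of any product:

```python
from collections import defaultdict

def flag_rotating_subnets(requests, ua_threshold=20, hit_threshold=100):
    """Flag /24 subnets whose traffic rotates user agents (illustrative only).

    requests: iterable of (ip_address, user_agent) pairs from the login endpoint.
    """
    hits = defaultdict(int)
    agents = defaultdict(set)
    for ip, ua in requests:
        subnet = ".".join(ip.split(".")[:3]) + ".0/24"  # collapse IP to /24
        hits[subnet] += 1
        agents[subnet].add(ua)
    # Heavy traffic plus many distinct user agents from one subnet is suspicious.
    return [s for s in hits
            if hits[s] >= hit_threshold and len(agents[s]) >= ua_threshold]
```

Attackers who spread across many ISPs defeat this kind of grouping, which is why the conclusion below recommends intent-based detection across sessions and sources rather than source-based thresholds alone.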

Classification of Bad Bots: Cybercriminals leveraged human-like and distributed human-like bad bots. On the login page of the credit union’s platform, 63.9% of bad bots could mimic human behavior. 

Figure 5: Types of bad bots targeting the credit union 

Conclusion 

Online businesses need to adopt various measures to avert online fraud. Conventional security measures identify and block bots using thresholds on traffic from recognized attack sources, such as known botnet herders' IPs. Such approaches are ineffective against bots that simulate human behavior and cycle through thousands of IPs to commit fraud.

We recommend the following action plan to spot and prevent online fraud: 

  • Constantly monitor traffic sources and restrict login attempts per session/user/IP address/device. 
  • Develop competencies to detect automated behavioral patterns of users and deploy systems that can detect the intent of automated traffic distributed across multiple sessions and sources. 
  • Building an accurate bot detection engine is a tightrope act. If you try to eliminate false negatives, you end up with more false positives, and vice versa. A lack of historical labeled data is one of the major obstacles to an accurate detection system. The best approach for an organization building an ML-powered automated bot management solution is to create a closed-loop feedback system that dynamically improves the machine-learning models based on signals collected directly from end users. 
  • Monitor and restrict social media logins. Ensure that users have unique passwords, and educate them about password reuse to prevent credential stuffing and credential cracking attempts. 
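The first recommendation, restricting login attempts per session, user, IP, or device, can be sketched as a sliding-window rate limiter. This is a minimal illustration under stated assumptions: the class and parameter names are hypothetical, and a production deployment would keep the counters in a shared store (for example, Redis) and combine several keys:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Sliding-window cap on login attempts per key (IP, session, or device).

    Minimal in-process sketch; not a production implementation.
    """

    def __init__(self, max_attempts=5, window_seconds=300):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # key -> timestamps of recent attempts

    def allow(self, key, now=None):
        """Return True if this attempt is within the limit, else False."""
        now = time.monotonic() if now is None else now
        q = self.attempts[key]
        while q and now - q[0] > self.window:
            q.popleft()             # drop attempts that fell out of the window
        if len(q) >= self.max_attempts:
            return False            # over the limit: challenge or block instead
        q.append(now)
        return True
```

Keying on a single dimension (IP alone, say) is exactly what the distributed attacks above evade, so limits like this are one layer, not a complete defense.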

Read Radware’s “The Big Bad Bot Report” to learn more.

