The e-commerce industry is growing fast. Lucrative shopping deals are claimed and transactions completed in a matter of seconds. If an organization’s IT infrastructure is not up to the task of protecting the applications that enable this easy shopping, sophisticated automated attacks can happen in the blink of an eye.
The sophistication of bad bots is increasing across industries. Fourth-generation bad bots not only mimic human behavior, but can also be distributed across thousands of IP addresses and automatically mutate to carry out cyberattacks, making them a major concern for e-commerce firms and their applications.
E-commerce businesses rely on ‘good bots’ to promote their business; these bots provide visibility in the virtual space through digital advertising, search engines, social networks, and affiliate programs. They therefore play a key role in online shopping and should be let through. However, because ‘bad’ bots carry out cyberattacks, precise classification is crucial and has an immediate business impact (ROI).
So how can online retailers manage bots — both good and bad?
All large e-commerce platforms see sophisticated bot activity on their websites, mobile apps, and APIs that can expose them to account takeover, content scraping, and loss of Gross Merchandise Value (GMV). E-tailers must be diligent in their approach to finding and mitigating malicious sources of bot activity.
Build capabilities to identify automated activity in seemingly legitimate user behavior. Sophisticated bots simulate mouse movements, perform random clicks, and navigate pages in a human-like manner. Preventing such attacks requires deep behavioral models, device/browser fingerprinting, and closed-loop feedback systems to ensure that you don’t block genuine users. Purpose-built bot mitigation solutions detect such sophisticated automated activity and help you take preemptive action. In comparison, traditional security solutions, such as firewalls and WAFs, are limited to tracking spoofed cookies, user agents, and IP reputation.
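One simple behavioral signal these models can use is timing regularity: real users produce noisy gaps between clicks and mouse events, while basic scripted clients often fire events on a fixed timer. The sketch below illustrates the idea with an invented `looks_automated` check and an illustrative jitter threshold; production systems combine many such features, not just one.

```python
import statistics

def looks_automated(event_times_ms, min_jitter_ms=5.0):
    """Flag a session whose inter-event timing is suspiciously regular.

    event_times_ms: timestamps (ms) of user interaction events.
    min_jitter_ms: illustrative threshold; real systems tune this
    against labeled traffic and combine it with other signals.
    """
    if len(event_times_ms) < 3:
        return False  # not enough signal to decide
    gaps = [b - a for a, b in zip(event_times_ms, event_times_ms[1:])]
    return statistics.stdev(gaps) < min_jitter_ms

# A scripted client clicking every 100 ms exactly:
bot_session = [0, 100, 200, 300, 400]
# A human-like session with irregular gaps:
human_session = [0, 180, 310, 720, 905]

print(looks_automated(bot_session))    # True: perfectly regular timing
print(looks_automated(human_session))  # False: natural jitter
```

A single feature like this is easy for a fourth-generation bot to defeat (they add randomized delays), which is why the article stresses deep behavioral models and closed-loop feedback rather than any one heuristic.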
Deploy challenge-response authentication. Challenge-response authentication is a basic security measure that can help you filter out bad bots. It comes in several forms, CAPTCHAs being the most widely used. Challenge-response authentication can filter out outdated user agents/browsers and basic automated scripts, but it won’t block sophisticated bots that mimic human behavior and can solve CAPTCHAs. It also requires a risk-scoring mechanism, because showing multiple CAPTCHAs to legitimate users disrupts the customer experience.
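The risk-scoring idea can be sketched as a simple gate: compute a score from a few session signals and serve a CAPTCHA only when the score crosses a threshold. The signals, weights, and threshold below are all illustrative assumptions, not a real scoring model.

```python
def risk_score(session):
    """Toy risk score in [0.0, 1.0] from a few illustrative signals."""
    score = 0.0
    if session.get("failed_logins", 0) >= 3:
        score += 0.4  # repeated login failures
    if session.get("datacenter_ip", False):
        score += 0.3  # traffic originates from a data center
    if session.get("headless_browser", False):
        score += 0.3  # automation-friendly client detected
    return min(score, 1.0)

def challenge_decision(session, threshold=0.5):
    """Serve a CAPTCHA only to risky sessions; let the rest through."""
    return "captcha" if risk_score(session) >= threshold else "allow"

print(challenge_decision({"failed_logins": 5, "datacenter_ip": True}))  # captcha
print(challenge_decision({"failed_logins": 0}))                         # allow
```

Gating challenges this way keeps the friction on suspicious sessions, so ordinary shoppers rarely see a CAPTCHA at all.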
Block bad bots harbored in public clouds and data centers. Data centers and public cloud services often harbor bad bots, and organizations can block traffic from suspect data centers, cloud services, and ISPs. However, blocking all traffic from data centers or ISPs without considering user behavior causes false positives. For example, a significant number of users on digital publishing sites come from commercial organizations that use secure web gateways (SWGs) located in data centers to filter user-initiated traffic; blocking data center traffic wholesale would cut off those legitimate readers.
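Combining the IP-origin check with a behavior signal can be sketched as below. The CIDR ranges are documentation stand-ins (RFC 5737 TEST-NET blocks), not a real data-center list; in practice you would source ranges from an IP-intelligence feed, and the behavior flag would come from the kind of behavioral models discussed earlier.

```python
import ipaddress

# Illustrative stand-in ranges; replace with a real data-center IP feed.
DATACENTER_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3 (placeholder)
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2 (placeholder)
]

def from_datacenter(ip_str):
    """Check whether an IP falls inside a known data-center range."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in DATACENTER_RANGES)

def should_block(ip_str, behaves_like_human):
    """Block only when source is a data center AND behavior looks automated."""
    return from_datacenter(ip_str) and not behaves_like_human

print(should_block("203.0.113.10", behaves_like_human=False))  # True
print(should_block("203.0.113.10", behaves_like_human=True))   # False: e.g. SWG users
print(should_block("192.0.2.55", behaves_like_human=False))    # False: not a listed range
```

Requiring both conditions is what protects the SWG case in the example above: data-center origin alone never triggers a block.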
Spot highly active new or existing user accounts that don’t buy. E-commerce portals must track old or newly created accounts that are highly active on the platform but haven’t made a purchase in a long time. Such accounts may be operated by bots that mimic genuine user behavior to scrape product details and pricing information.
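A batch query for such accounts can be sketched as follows. The field names (`page_views_30d`, `last_purchase`) and the thresholds are hypothetical; any real implementation would pull these from your own analytics schema and tune the cutoffs.

```python
from datetime import datetime, timedelta

def flag_scraper_accounts(accounts, min_page_views=500,
                          purchase_window_days=90, now=None):
    """Flag accounts with heavy browsing but no recent purchase.

    accounts: list of dicts with hypothetical fields
      user_id, page_views_30d, last_purchase (datetime or None).
    Thresholds are illustrative, not recommended defaults.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=purchase_window_days)
    flagged = []
    for acct in accounts:
        active = acct["page_views_30d"] >= min_page_views
        stale = acct["last_purchase"] is None or acct["last_purchase"] < cutoff
        if active and stale:
            flagged.append(acct["user_id"])
    return flagged

accounts = [
    {"user_id": "u1", "page_views_30d": 2400, "last_purchase": None},
    {"user_id": "u2", "page_views_30d": 80, "last_purchase": None},
    {"user_id": "u3", "page_views_30d": 900,
     "last_purchase": datetime(2023, 12, 20)},
]
print(flag_scraper_accounts(accounts, now=datetime(2024, 1, 1)))  # ['u1']
```

Flagged accounts would then feed into review or rate-limiting, not automatic bans, since some genuine users browse heavily without buying.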
Don’t overlook unusual traffic on selected product pages. E-tailers should monitor unusual spikes in page views of certain products. These spikes can be periodic in nature. A sudden surge in engagement on selected product pages can be a symptom of non-human activity on your website.
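A minimal way to surface such spikes is a z-score check of today's page views against a recent baseline. The threshold and sample numbers below are illustrative; periodic scraping patterns would additionally need seasonality-aware baselines.

```python
import statistics

def view_spike(history, today, z_threshold=3.0):
    """Flag today's page views if far above the recent daily baseline.

    history: list of recent daily view counts for one product page.
    z_threshold: illustrative cutoff in standard deviations.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today > mean  # flat history: any increase is anomalous
    return (today - mean) / stdev > z_threshold

normal_week = [1200, 1100, 1300, 1250, 1180, 1220, 1260]
print(view_spike(normal_week, 1240))  # False: within the usual range
print(view_spike(normal_week, 9800))  # True: likely non-human activity
```

Spikes flagged this way on a handful of product pages, rather than site-wide, are the signature the article describes: scrapers hammering selected listings rather than browsing broadly.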
Watch out for competitive price tracking and monitoring. Many e-commerce firms deploy bots or hire professionals to scrape product details and pricing information from their rival portals. You must regularly track competitors for signs of price and product catalog matching.
Monitor failed login attempts and sudden traffic spikes for AuthBot attacks. Cyber attackers deploy bad bots such as AuthBots to perform credential stuffing and credential cracking attacks on login pages. Because such attacks try many different combinations of user IDs and passwords, they drive up the number of failed login attempts. The presence of bad bots on your website, whether for scraping, account takeover, or any other type of automated activity, also produces sudden increases in traffic. Monitoring failed login attempts and sudden traffic spikes can help organizations take preemptive measures before bad bots cause any damage.
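The failed-login monitoring described above can be sketched as a sliding-window counter that raises an alert when failures in the last minute exceed a threshold. The window size and threshold are illustrative assumptions; a real deployment would also segment by IP, account, and endpoint.

```python
from collections import deque

class FailedLoginMonitor:
    """Sliding-window alarm for bursts of failed logins (thresholds illustrative)."""

    def __init__(self, window_seconds=60, max_failures=20):
        self.window = window_seconds
        self.max_failures = max_failures
        self.events = deque()  # timestamps of recent failures

    def record_failure(self, timestamp):
        """Record one failed login; return True if the burst looks like an attack."""
        self.events.append(timestamp)
        # Evict failures that have fallen out of the window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_failures

monitor = FailedLoginMonitor(window_seconds=60, max_failures=20)
# 25 failures within a few seconds, as a credential-stuffing bot would produce:
alerts = [monitor.record_failure(t * 0.1) for t in range(25)]
print(alerts[0])   # False: first failure alone is not suspicious
print(alerts[-1])  # True: burst exceeded the window threshold
```

An alert from this kind of monitor is a trigger for the preemptive measures the article mentions, such as raising the session's risk score or serving a challenge on the login endpoint.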