Ransomware is a type of malware that restricts access to data by encrypting an infected computer’s files and demanding payment for the decryption key. The attacker often distributes a large-scale phishing campaign in the hope that someone will open the malicious attachment or link. Once infected, the device is unusable, and the victim is faced with the decision of whether or not to pay the extortionist to recover the decryption key.
Only in certain cases have keys been recovered. Over the years, Radware researchers have also followed the ransomware-as-a-service (RaaS) industry, which offers novice users the ability to launch their own campaigns for an established price or percentage of the profit. Ransomware has existed for over two decades but has only recently gained popularity among for-profit criminals. This trend has tapered off because ransomware campaigns generate a great deal of attention, notifying potential victims and thereby discouraging them from paying. Campaigns that attract less attention are typically more profitable.
Ransomware campaigns follow a standard pattern of increased activity in the beginning before settling down. Ransomware, once incredibly popular, has fallen out of favor with attackers, who now prefer cryptojacking campaigns. Because of the amount of attention that ransomware campaigns generate, most groups target a wide range of industries, including manufacturing, retail and shipping, in the hope of finding some success.
If you think that your organization could be a target of a ransomware campaign, shoring up your network is critical. Ransomware can be delivered in various ways, most commonly via spam/phishing emails containing a malicious document. Other forms of infection include exploit kits, Trojans and the use of exploits to gain unauthorized access to an infected device.
Download Radware’s “Hackers Almanac” to learn more.
A Trojan horse is a malicious computer program masquerading as a useful or otherwise non-malicious, legitimate piece of software. Generally spread via social engineering and web attacks, Trojan horses often install a backdoor that grants the attacker remote, unauthorized access to the infected machine.
An attacker can perform various criminal tasks, including, but not limited to, “zombifying” the machine within a botnet or DDoS attack, data theft, downloading or installing additional malware, file modification or deletion, keylogging, monitoring the user’s screen, crashing the computer and anonymous internet viewing.
If you think that you are a target of this attack vector, secure both your corporate network and user devices. Proper education and user hygiene help prevent an employee from infecting your network. Often an employee opens a malicious document via phishing or is infected via a drive-by download, allowing the Trojan to download additional malicious payloads.
Learn more about this cyberthreat by watching our security researcher Daniel Smith outline the risks it presents to organizations:
Download Radware’s “Hackers Almanac” to learn more.
Exploit kits are prepackaged tool kits containing specific exploits and payloads used to drop malware onto a victim’s machine. Once a popular avenue for attacks, they are now barely used due to the popularity of other attack vectors, such as cryptomining. However, they are still utilized to deploy ransomware and mining malware.
These tools can target nearly everyone. Organizations should consider themselves a daily target for possible exploit kits designed to deliver malicious payloads onto their network.
To prevent this, update network devices and ensure that all employee devices are also updated. Oftentimes, these attacks are browser based and exploit vulnerabilities once an employee visits the malicious landing page.
Training and preparation start with user education. Humans are the weakest link, and authors of exploit kits target the masses in the hope that someone will fall for their landing pages.
Watch our video with security researcher Daniel Smith to learn more:
Struggling? We understand, it’s tricky! Here are two more clues:
Hackers will often route login requests through proxy servers to avoid blacklisting their IP addresses.
It is a subset of brute force attacks, but different from credential cracking.
And the Answer Is…
Credential stuffing! If you didn’t guess correctly, don’t worry. You certainly aren’t alone. At this year’s RSA Conference, Radware invited attendees to participate in a #HackerChallenge. Participants were given clues and asked to diagnose threats. While most were able to surmise two other cyber threats, credential stuffing stumped the majority.
Understandably so. For one, events are happening at a breakneck pace. In the last few months alone, there have been several high-profile attacks leveraging different password attacks, from credential stuffing to credential spraying. It’s entirely possible that people are conflating the terms and thus the attack vectors. Likewise, they may also confuse credential stuffing with credential cracking.
Stuffing vs. Cracking vs. Spraying
As we’ve previously written, credential stuffing is a subset of brute force attacks but is different from credential cracking. Credential stuffing campaigns do not involve the process of brute forcing password combinations. Rather, they leverage leaked usernames and passwords in an automated fashion against numerous websites to take over users’ accounts due to credential reuse.
Conversely, credential cracking is an automated web attack wherein criminals attempt to crack users’ passwords or PINs by cycling through all possible combinations of characters in sequence. These attacks are only possible when applications do not have a lockout policy for failed login attempts. Software for this attack will attempt to crack the user’s password by mutating or brute forcing values until the attacker is successfully authenticated.
As for credential (or password) spraying, this technique involves using a limited set of company-specific passwords in attempted logins for known usernames. When conducting these types of attacks, advanced cybercriminals will typically scan your infrastructure for external-facing apps and network services such as webmail, SSO and VPN gateways. Usually, these interfaces have strict timeout features. Actors use password spraying rather than brute force attacks to avoid being timed out and possibly alerting admins.
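The three attack patterns above leave different fingerprints in authentication logs. As a rough sketch (the thresholds and heuristics are illustrative assumptions, not a production detector), a batch of failed logins can be triaged like this:

```python
from collections import Counter

def classify_password_attack(attempts):
    """Triage a batch of failed logins into a likely brute-force pattern.

    `attempts` is a list of (username, password) tuples observed in a
    short window. Thresholds are illustrative, not production-tuned.
    """
    users = Counter(u for u, _ in attempts)
    passwords = Counter(p for _, p in attempts)

    # Credential cracking: many password guesses against one account.
    if len(users) == 1 and len(passwords) > 1:
        return "cracking"
    # Password spraying: a handful of passwords reused across many accounts.
    if len(users) > 1 and len(passwords) <= max(2, len(attempts) // 10):
        return "spraying"
    # Credential stuffing: unique leaked username/password pairs, tried once each.
    return "stuffing"
```

For example, many logins for one username with different passwords classify as cracking, while one password tried across many usernames classifies as spraying.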
So What Can You Do?
A dedicated bot management solution that is tightly integrated into your Web Application Firewall (WAF) is critical. Device fingerprinting, CAPTCHA challenges and IP rate-based limiting are among the key mitigation capabilities. In addition to these steps, network operators should apply two-factor authentication where eligible and monitor dumped credentials for potential leaks or threats.
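One of the simplest building blocks here, and the lockout policy whose absence makes credential cracking possible, is a per-source failed-login limiter. This sketch uses illustrative thresholds:

```python
from collections import defaultdict, deque
import time

class LoginRateLimiter:
    """Per-source lockout: block a client after too many failed logins
    inside a sliding time window. Thresholds are illustrative."""

    def __init__(self, max_failures=5, window_seconds=300):
        self.max_failures = max_failures
        self.window = window_seconds
        self._failures = defaultdict(deque)  # source -> failure timestamps

    def _trim(self, q, now):
        # Drop failures that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()

    def record_failure(self, source, now=None):
        now = time.time() if now is None else now
        q = self._failures[source]
        q.append(now)
        self._trim(q, now)

    def is_blocked(self, source, now=None):
        now = time.time() if now is None else now
        q = self._failures[source]
        self._trim(q, now)
        return len(q) >= self.max_failures
```

Note that attackers rotating through proxy IPs can sidestep purely source-based limits, which is why fingerprinting and CAPTCHA challenges complement it.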
Read “Radware’s 2018 Web Application Security Report” to learn more.
Migrating computing resources to cloud environments opens up new attack surfaces previously unknown in the world of premise-based data centers. As a result, cloud-native data breaches frequently have different characteristics and follow a different progression than physical data breaches. Here is a real-life example of a cloud-native data breach, how it evolved and how it possibly could have been avoided.
Target Profile: A Social Media/Mobile App Company
The company is a photo-sharing social media application, with over 20 million users. It stores over 1PB of user data within Amazon Web Services (AWS), and in 2018, it was the victim of a massive data breach that exposed nearly 20 million user records. This is how it happened.
Step 1: Compromising a legitimate user. Frequently, the first step in a data breach is that an attacker compromises the credentials of a legitimate user. In this incident, an attacker used a spear-phishing attack to obtain an administrative user’s credentials to the company’s environment.
Step 2: Fortifying access. After compromising a legitimate user, a hacker frequently takes steps to fortify access to the environment, independent of the compromised user. In this case, the attacker connected to the company’s cloud environment through an IP address registered in a foreign country and created API access keys with full administrative access.
Step 3: Reconnaissance. Once inside, an attacker then needs to map out what permissions are granted and what actions this role allows.
Step 4: Exploitation. Once the available permissions in the account have been determined, the attacker can proceed to exploit them. Among other activities, the attacker duplicated the master user database and exposed it to the outside world with public permissions.
Step 5: Exfiltration. Finally, with customer information at hand, the attacker copied the data outside of the network, gaining access to over 20 million user records that contained personal user information.
Your Permissions Equal Your Threat Surface: Leveraging public cloud environments means that resources that used to be hosted inside your organization’s perimeter are now outside, where they are no longer under the control of system administrators and can be accessed from anywhere in the world. Workload security, therefore, is defined by the people who can access those workloads and the permissions they have. In effect, your permissions equal your attack surface.
Excessive Permissions Are the No. 1 Threat: Cloud environments make it very easy to spin up new resources and grant wide-ranging permissions but very difficult to keep track of who has them. Such excessive permissions are frequently mischaracterized as misconfigurations but are actually the result of permission misuse or abuse. Therefore, protecting against those excessive permissions becomes the No. 1 priority for securing publicly hosted cloud workloads.
Cloud Attacks Follow Typical Progression: Although each data breach incident may develop differently, a cloud-native attack breach frequently follows a typical progression of a legitimate user account compromise, account reconnaissance, privilege escalation, resource exploitation and data exfiltration.
What Could Have Been Done Differently?
Protect Your Access Credentials: Your next data breach is a password away. Securing your cloud account credentials — as much as possible — is critical to ensuring that they don’t fall into the wrong hands.
Limit Permissions: Frequently, cloud user accounts are granted many permissions that they don’t need or never use. Exploiting the gap between granted permissions and used permissions is a common move by hackers. In the aforementioned example, the attacker used the account’s permissions to create new administrative-access API keys, spin up new databases, reset the database master password and expose it to the outside world. Limiting permissions to only what the user needs helps ensure that, even if the account is compromised, the damage an attacker can do is limited.
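The granted-versus-used gap described above can be computed directly once both lists are in hand, for example from IAM policies and audit logs. The data shape below is an illustrative assumption:

```python
def permission_gap_report(principals):
    """For each principal, list granted permissions that were never used.

    `principals` maps a name to {"granted": [...], "used": [...]};
    in practice "used" would be derived from audit logs. The field
    names here are illustrative assumptions.
    """
    report = {}
    for name, perms in principals.items():
        gap = sorted(set(perms["granted"]) - set(perms["used"]))
        if gap:  # only report principals holding excessive permissions
            report[name] = gap
    return report
```

Any permission appearing in the report is a candidate for revocation, since removing it cannot break observed behavior.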
Alert of Suspicious Activities: Since cloud-native data breaches frequently have a common progression, there are certain account activities — such as port scanning, invoking previously used APIs and granting public permissions — which can be identified. Alerting against such malicious behavior indicators (MBIs) can help prevent a data breach before it occurs.
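A minimal MBI matcher might scan audit-log events for names that rarely appear in normal operations. The indicator list and event schema below are illustrative assumptions, not a real audit-log format:

```python
# Hypothetical malicious behavior indicators (MBIs): event names that
# rarely appear in this account's normal operations.
MBIS = {
    "CreateAccessKey": "new API credentials created",
    "PutBucketAcl": "storage permissions changed (possible public exposure)",
    "AuthorizeSecurityGroupIngress": "firewall opened to new sources",
}

def scan_events(events):
    """Return an alert for every audit-log event matching an MBI.

    Each event is a dict with at least an "eventName" key; the schema
    is a simplified assumption for illustration.
    """
    alerts = []
    for event in events:
        if event.get("eventName") in MBIS:
            alerts.append({
                "event": event["eventName"],
                "source": event.get("sourceIPAddress", "unknown"),
                "reason": MBIS[event["eventName"]],
            })
    return alerts
```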
Automate Response Procedures: Finally, once malicious activity has been identified, fast response is paramount. Automating response mechanisms can help block malicious activity the moment it is detected and stop the breach from reaching its end goal.
Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.
Humans aren’t the only ones consumed with connected devices these days. Cows have joined our ranks.
Believe it or not, farmers are increasingly relying on IoT devices to keep their cattle connected. No, not so that they can moo-nitor (see what I did there?) Instagram, but to improve efficiency and productivity. For example, in the case of dairy farms, robots feed, milk and monitor cows’ health, collecting data along the way that help farmers adjust techniques and processes to increase milk production, and thereby profitability.
The implications are massive. As the Financial Times pointed out, “Creating a system where a cow’s birth, life, produce and death are not only controlled but entirely predictable could have a dramatic impact on the efficiency of the dairy industry.”
From Dairy Farm to Data Center
So, how do connected cows factor into cybersecurity? By the simple fact that the IoT devices tasked with milking, feeding and monitoring them are turning dairy farms into data centers, which has major security implications. Because let’s face it: farmers know cows, not cybersecurity.
Indeed, the data collected are stored in data centers and/or a cloud environment, which opens farmers up to potentially costly cyberattacks. Think about it: The average U.S. dairy farm is a $1 million operation, and the average cow produces $4,000 in revenue per year. That’s a lot at stake—roughly $19,000 per week, given the average dairy farm’s herd—if a farm is struck by a ransomware attack.
It would literally be better for an individual farm to pay a weekly $2,850 ransom to keep the IoT network up. And if hackers were sophisticated enough to launch an industry-wide attack, the dairy industry would be better off paying $46 million per week in ransom rather than lose revenue.
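A quick sanity check of those figures, assuming the herd of roughly 250 cows implied by the article’s numbers:

```python
# Back-of-the-envelope check of the figures above, assuming a herd of
# roughly 250 cows (the size implied by the article's numbers).
REVENUE_PER_COW_PER_YEAR = 4_000  # USD, per the article
HERD_SIZE = 250                   # assumed average herd
WEEKS_PER_YEAR = 52

weekly_revenue = REVENUE_PER_COW_PER_YEAR * HERD_SIZE / WEEKS_PER_YEAR
print(round(weekly_revenue))  # prints 19231 -- roughly the $19,000/week at stake
```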
Admittedly, connected cows aren’t new; IoT devices have been assisting farmers for several years now. And it’s a booming business. Per the FT, “Investment in precision ‘agtech’ systems reached $3.2bn globally in 2016 (including $363m in farm management and sensor technology)…and is set to grow further as dairy farms become a test bed for the wider IoT strategy of big…”
But what is new is the rollout of 5G networks, which promise faster speeds, low latency and increased flexibility, seemingly ideal for managing IoT devices. But, as we’ve previously discussed, with new benefits come new risks. As network architectures evolve to support 5G, security vulnerabilities will abound if cybersecurity isn’t prioritized and integrated into a 5G deployment from the outset.
In the new world of 5G, cyberattacks can become much more potent, as a single hacker can easily multiply into an army through botnet deployment. Indeed, 5G opens the door to a complex world of interconnected devices that hackers will be able to exploit via a single point of access in a cloud application to quickly expand an attack radius to other connected devices and applications. Just imagine the impact of a botnet deployment on the dairy industry.
Zero-day attacks are the latest, never-before-seen generation of attacks. They are not volumetric or detectable from a known application signature. Security systems and experts must react instantly to solve the new issues, that is, they have zero days to react. Advanced application-level attacks typically fit into this category.
Two Distinct Phases
Probe and Learn: Hackers assess network defenses and probe for vulnerabilities, looking for different weaknesses and identifying the type of attacks that will potentially be effective. It’s like an archer who picks the best arrows to put in his quiver before battle. For example, a hacker may determine that a combination of encrypted attacks, attacks from a rotating IP address source, new low and slow attacks and headless browser attacks will be most effective.
Optimize, Morph and Attack: Hackers launch the attack and then vary the attack vectors (or arrows from the quiver). In this case, hackers often understand that legacy DDoS mitigators need manual intervention to troubleshoot and mitigate a zero-day attack. So they attack the weakness of the legacy mitigator (multiple manual troubleshooting cycles to stop an attack) in addition to attacking the application vulnerabilities.
Who Are the Attackers?
Richard Clarke, former special cybersecurity advisor to the U.S. president, devised an acronym — C.H.E.W. — to categorize and explain the origin of cyberattacks (that specifically target carriers and enterprises):
Cybercrime — the notion that someone is going to attack you with the primary motive being financial gain from the endeavor.
Hacktivism — attacks motivated by ideological differences. The primary focus of these attacks is not financial gain but rather persuading or dissuading certain actions or “voices.”
Espionage — straightforward motive of gaining information on another organization in pursuit of political, financial, capitalistic, market share or some other form of leverage.
War (Cyber) — the notion of a nation-state or transnational threat to an adversary’s centers of power via a cyberattack. Attacks could focus on nonmilitary critical infrastructure.
The attackers can range from a tech-savvy teenager to a highly organized group that taps into huge server farms in places like Russia and Ukraine to facilitate attacks.
The types of hackers are as varied as the methods they employ and include advanced persistent threat (APT) agents, corporate spies, cybercriminals, cyberwarriors, hacktivists, rogue hackers, spammers and malware spreaders.
Read “Radware’s 2018 Web Application Security Report” to learn more.
According to a study by the Ponemon Institute in December 2018, bots comprised over 52% of all Internet traffic. While ‘good’ bots discreetly index websites, fetch information and content, and perform useful tasks for consumers and businesses, ‘bad’ bots have become a primary and growing concern to CISOs, webmasters, and security professionals today. They carry out a range of malicious activities, such as account takeover, content scraping, carding, form spam, and much more. The negative impacts resulting from these activities include loss of revenue and harm to brand reputation, theft of content and personal information, lowered search engine rankings, and distorted web analytics, to mention a few.
For these reasons, researchers at Forrester recommend that, “The first step in protecting your company from bad bots is to understand what kinds of bots are attacking your firm.” So let us briefly look at the main bad bot threats CISOs have to face, and then delve into their industry-wise prevalence.
Bad Bot Attacks That Worry CISOs The Most
The impact of bad bots results from the specific activities they’re programmed to execute. Many of them aim to defraud businesses and/or their customers for monetary gain, while others involve business competitors and nefarious parties who scrape content (including articles, reviews, and prices) to gain business intelligence.
Account Takeover attacks use credential stuffing and brute force techniques to gain unauthorized access to customer accounts.
Application DDoS attacks slow down web applications by exhausting system resources, 3rd-party APIs, inventory databases, and other critical resources.
API Abuse results from nefarious entities exploiting API vulnerabilities to steal sensitive data (such as personal information and business-critical data), take over user accounts, and execute denial-of-service attacks.
Ad Fraud is the generation of false impressions and illegitimate clicks on ads shown on publishing sites and their mobile apps. A related form of attack is affiliate marketing fraud (also known as affiliate ad fraud) which is the use of automated traffic by fraudsters to generate commissions from an affiliate marketing program.
Carding attacks use bad bots to make multiple payment authorization attempts to verify the validity of payment card data, expiry dates, and security codes for stolen payment card data (by trying different values). These attacks also target gift cards, coupons and voucher codes.
Scraping is a strategy often used by competitors who deploy bad bots on your website to steal business-critical content, product details, and pricing information.
Skewed Analytics is a result of bot traffic on your web property, which skews site and app metrics and misleads decision making.
Form Spam refers to the posting of spam leads and comments, as well as fake registrations on marketplaces and community forums.
Denial of Inventory is used by competitors/fraudsters to deplete goods or services in inventory without ever purchasing the goods or completing the transaction.
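Several of these threats can be spotted from simple traffic invariants. Carding, for example, tends to show up as one session cycling through many distinct card numbers. A toy detector (field names and threshold are illustrative) might look like:

```python
from collections import defaultdict

def flag_carding_sessions(auth_attempts, max_distinct_cards=3):
    """Flag sessions that try payment authorization with many distinct
    card numbers -- a common carding signature. Field names and the
    threshold are illustrative assumptions."""
    cards_by_session = defaultdict(set)
    for attempt in auth_attempts:
        cards_by_session[attempt["session_id"]].add(attempt["card_fingerprint"])
    return [session for session, cards in cards_by_session.items()
            if len(cards) > max_distinct_cards]
```

Real carders distribute attempts across sessions and IPs, so production detection correlates additional signals, but the per-session invariant is the starting point.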
Industry-wise Impact of Bot Traffic
To illustrate the impact of bad bots, we aggregated all the bad bot traffic that was blocked by our Bot Manager during Q2 and Q3 of 2018 across four industries selected from our diverse customer base: E-commerce, Real Estate, Classifieds & Online Marketplaces, and Media & Publishing. While the prevalence of bad bots can vary considerably over time and even within the same industry, our data shows that specific types of bot attacks tend to target certain industries more than others.
Bad bots target e-commerce sites to carry out a range of attacks — such as scraping, account takeovers, carding, scalping, and denial of inventory. However, the most prevalent bad bot threat encountered by our e-commerce customers during our study was attempted affiliate fraud. Bad bot traffic made up roughly 55% of the overall traffic on pages that contain links to affiliates. Content scraping and carding were the most prevalent bad bot threats to e-commerce portals two to five years ago, but the latest data indicates that attempts at affiliate fraud and account takeover are growing rapidly compared with earlier years.
Bad bots often target real estate portals to scrape listings and the contact details of realtors and property owners. However, we are seeing growing volumes of form spam and fake registrations, which have historically been the biggest problems caused by bots on these portals. Bad bots comprised 42% of total traffic on pages with forms in the real estate sector. These malicious activities anger advertisers, reduce marketing ROI and conversions, and produce skewed analytics that hinder decision making. Bad bot traffic also strains web infrastructure, affects the user experience, and increases operational expenses.
Classifieds & Online Marketplaces
Along with real estate businesses, classifieds sites and online marketplaces are among the biggest targets for content and price scrapers. Their competitors use bad bots not only to scrape their exclusive ads and product prices to illegally gain a competitive advantage, but also to post fake ads and spam web forms to access advertisers’ contact details. In addition, bad bot traffic strains servers, third-party APIs, inventory databases and other critical resources, creates application DDoS-like situations, and distorts web analytics. Bad bot traffic accounted for over 27% of all traffic on product pages from where prices could be scraped, and nearly 23% on pages with valuable content such as product reviews, descriptions, and images.
Media & Publishing
More than ever, digital media and publishing houses are scrambling to deal with bad bot attacks that perform automated attacks such as scraping of proprietary content, and ad fraud. The industry is beset with high levels of ad fraud, which hurts advertisers and publishers alike. Comment spam often derails discussions and results in negative user experiences. Bot traffic also inflates traffic metrics and prevents marketers from gaining accurate insights. Over the six-month period that we analyzed, bad bots accounted for 18% of overall traffic on pages with high-value content, 10% on ads, and nearly 13% on pages with forms.
As we can see, security chiefs across a range of industries are facing increasing volumes and types of bad bot attacks. What can they do to mitigate malicious bots that are rapidly evolving in ways that make them significantly harder to detect? Conventional security systems that rely on rate-limiting and signature-matching approaches were never designed to detect human-like bad bots that rapidly mutate and operate in widely-distributed botnets using ‘low and slow’ attack strategies and a multitude of (often hijacked) IP addresses.
The core challenge for any bot management solution, then, is to detect every visitor’s intent to help differentiate between human and malicious non-human traffic. As more bad bot developers incorporate artificial intelligence (AI) to make human-like bots that can sneak past security systems, any effective countermeasures must also leverage AI and machine learning (ML) techniques to accurately detect the most advanced bad bots.
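As a contrast to the ML-driven approach described above, even a crude heuristic captures part of what “detecting intent” means: machine-regular request timing combined with missing browser telemetry is suspicious. This toy scorer is an illustration, not a real detector:

```python
import statistics

def bot_likelihood(timestamps, has_js_signals):
    """Score how bot-like a client's request pattern is (0.0 to 1.0).

    Two illustrative signals: machine-regular request timing, and the
    absence of browser-side telemetry (mouse/keystroke events).
    """
    if len(timestamps) < 3:
        return 0.0  # not enough requests to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    score = 0.0
    # Near-zero variance in inter-request gaps suggests automation.
    if statistics.pstdev(intervals) < 0.05:
        score += 0.6
    # Headless clients often emit no mouse/keystroke telemetry.
    if not has_js_signals:
        score += 0.4
    return score
```

Advanced bots deliberately randomize timing and fake telemetry, which is exactly why signature- and rate-based rules fail and ML models over many behavioral features are needed.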
Read “Radware’s 2018 Web Application Security Report” to learn more.
In our industry, the term bot applies to software applications designed to perform automated tasks at a high rate of speed. Typically, I use bots at Radware to aggregate data for intelligence feeds or to automate repetitive tasks. I also spend the vast majority of my time researching and tracking emerging bots that were designed and deployed in the wild with bad intentions.
As I’ve previously discussed, there are generally two different types of bots, good and bad. Some of the good bots include Search Bots, Crawlers and Feed Fetchers that are designed to locate and index your website appropriately so it can become visible online. Without the aid of these bots, most small and medium-sized businesses wouldn’t be able to establish an authority online and attract visitors to their site.
On the dark side, criminals use the same technology to create bots for illicit and profitable activities, such as scraping content from one website and selling it to another. These malicious bots can also be leveraged to take over accounts and generate fake reviews, as well as commit ad fraud and stress your web applications. Malicious bots have even been used to create fake social media accounts and influence elections.
With close to half of all internet traffic today being non-human, bad bots represent a significant risk for businesses, regardless of industry or channel.
As the saying goes, this is why we can’t have nice things.
If a malicious bot targets an online business, the business will be impacted in one way or another when it comes to website performance, sales conversions, competitive advantage, analytics or user experience. The good news is that organizations can take action against bot activity in real time, but first, they need to understand their own risk before considering a solution.
E-Commerce – The e-commerce industry faces bot attacks that include account takeovers, scraping, inventory exhaustion, scalping, carding, skewed analytics, application DoS, Ad fraud, and account creation.
Media – Digital publishers are vulnerable to automated attacks such as Ad fraud, scraping, skewed analytics, and form spam.
Travel – The travel industries mainly deal with scraping attacks but can suffer from inventory exhaustion, carding and application DoS as well.
Social Networks – Social platforms deal with automated bots attacks such as account takeovers, account creation, and application DoS.
Ad Networks – Bots that create Sophisticated Invalid Traffic (SIVT) target ad networks for ad fraud activity such as fraudulent clicks and impressions.
Financial Institutions – Banking, financial and insurance industries are all high-value targets for bots that leverage account takeovers, application DoS or content scraping.
Types of Application Attacks
It’s becoming increasingly difficult for conventional security solutions to track and report on sophisticated bots that are continuously changing their behavior, obfuscating their identity and utilizing different attack vectors for various industries. Once you begin to understand the risk posed by malicious automated bots, you can then start to focus on the attack vectors you may face as a result of that activity.
Account takeover – Account takeovers include credential stuffing, password spraying, and brute force attacks that are used to gain unauthorized access to a targeted account. Credential stuffing and password spraying are two popular techniques used today. Once hackers gain access to an account, they can begin additional stages of infection, data exfiltration or fraud.
Scraping – Scraping is the process of extracting data or information from a website and publishing it elsewhere. Content, price and inventory scraping is also used to gain a competitive advantage. These scraper bots crawl your web pages for specific information about your products. Typically, scrapers steal the entire content from websites or mobile applications and publish it to gain traffic.
Inventory exhaustion – Inventory exhaustion is when a bot is used to add hundreds of items to a cart and later, abandon them to prevent real shoppers from buying the products.
Inventory scalping – Hackers deploy retail bots to gain an advantage to buy goods and tickets during a flash sale, and then resell them later at a much higher price.
Carding – Carders deploy bots on checkout pages to validate stolen card details and to crack gift cards.
Skewed analytics – Automated invalid traffic directed at your e-commerce portal can skew metrics and mislead decision making when applied to advertisement budgets and other business decisions. Bots pollute metrics, disrupt funnel analysis, and inhibit KPI tracking.
Application DoS – Application DoS attacks slow down e-commerce portals by exhausting web server resources, 3rd-party APIs, inventory databases and other critical resources to the point that they are unavailable for legitimate users.
Ad fraud – Bad bots are used to generate invalid traffic designed to create false impressions and generate illegitimate clicks on websites and mobile apps.
Account creation – Bots are used to create fake accounts on a massive scale for content spamming, SEO and skewing analytics.
Common warning signs of malicious bot activity include:
Consecutive login attempts with different credentials from the same HTTP client
Unusual request activity for selected application content and data
Unexpected changes in website performance and metrics
A sudden increase in account creation rate
Elevated traffic for certain limited-availability goods or services
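The first indicator above, consecutive login attempts with different credentials from the same HTTP client, is straightforward to check against a login log. Field names and the threshold are illustrative:

```python
from collections import defaultdict

def detect_credential_rotation(login_events, threshold=5):
    """Return clients that attempted logins with many different
    usernames -- a hallmark of credential stuffing. Field names
    and the threshold are illustrative assumptions."""
    users_by_client = defaultdict(set)
    for event in login_events:
        users_by_client[event["client_id"]].add(event["username"])
    return {client for client, users in users_by_client.items()
            if len(users) >= threshold}
```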
Intelligence is the Solution
Finding a solution that arms partners and service providers with the latest information related to potential attacks is critical. In my opinion, a Bot Intelligence Feed is one of the best ways to gain insight into the threats you face while identifying malicious bots in real time.
A Bot Intelligence Feed provides the latest data on newly detected IPs for various bot categories, such as data center bots, bad user agents, advanced persistent bots, backlink checkers, monitoring bots, aggregators, social network bots and spam bots, as well as third-party fraud intelligence directories and services used to keep track of externally flagged IPs. This ultimately gives organizations the best chance to proactively close security holes and take action against emerging threat vectors.
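Consuming such a feed can be as simple as checking each request IP against the flagged categories. The feed structure below is an assumption for illustration, not a real Radware API:

```python
# Assumed feed structure: category -> set of flagged IPs. A real feed
# would be fetched and refreshed periodically; this one is static for brevity.
FEED = {
    "data_center": {"203.0.113.7", "203.0.113.8"},
    "spam_bots": {"198.51.100.23"},
}

def check_ip(ip, feed=FEED):
    """Return the feed categories an IP is flagged under (empty if clean)."""
    return sorted(category for category, ips in feed.items() if ip in ips)
```

A flagged IP might then be blocked outright or, for lower-risk categories, served a challenge instead.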
Read “Radware’s 2018 Web Application Security Report” to learn more.
In 2018, organizations reported a 10% increase in malware and bot attacks. Considering the pervasiveness (70%) of these types of attacks reported in 2017, this uptick is likely having a big impact on organizations globally. Compounding the issue is the fact that the majority of bots are actually leveraged for good intentions, not malicious ones. As a result, it is becoming increasingly difficult for organizations to identify the difference between the two, according to Radware’s “Web Application Security in a Digitally Connected World” report.
Bots are automated programs that run independently to perform a series of specific tasks, for example, collecting data. Sophisticated bots can handle complicated interactive situations. More advanced programs feature self-learning capabilities that can address automated threats against traditional security models.
Positive Impact: Business Acceleration
Automated software applications can streamline processes and positively impact overall business performance. They replace tedious human tasks and speed up processes that depend on large volumes of information, thus contributing to overall business efficiency and agility.
Good bots include:
Crawlers — are used by search engines and contribute to SEO and SEM efforts
Chatbots — automate and extend customer service and first response
Fetchers — collect data from multiple locations (for instance, live sporting events)
Pricers — compare pricing information from different services
Traders — are used in commercial systems to find the best quote or rate for a transaction
The Open Web Application Security Project (OWASP) lists 21 automated threats to applications that can be grouped together by business impacts:
Scraping and Data Theft — Bots try to access restricted areas of web applications to get hold of sensitive data such as access credentials, payment information and intellectual property. One method of collecting such information is called web scraping. A common example of a web-scraping attack targets e-commerce sites, where bots quickly hold or even fully clear out the inventory.
Performance — Bots can impact the availability of a website, bringing it to a complete or partial denial-of-service state. The consumption of resources such as bandwidth or server CPU quickly degrades the customer experience, lowers conversions and damages the brand's image. Attacks can be large and volumetric (DDoS) or small and stealthy (low and slow, buffer overflow).
Poisoning Analytics — When a significant portion of a website's visitors are fictitious, expect skewed metrics and artifacts such as fraudulent links. Compounding this issue is the fact that third-party tools designed to monitor website traffic often have difficulty filtering out bot traffic.
Fraud and Account Takeover — With access to leaked credential databases, such as those from the Yahoo and LinkedIn breaches, hackers use bots to run through username and password combinations to take over accounts. Once inside, they can access restricted files, inject scripts or make unauthorized transactions.
Spammers and Malware Downloaders — Malicious bots constantly target mobile and web applications. Using sophisticated techniques such as spoofing their IPs, mimicking user behavior (keystrokes, mouse movements) and abusing open-source tools (PhantomJS) and headless browsers, bots bypass CAPTCHA challenges and other security heuristics.
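The account-takeover pattern above is often caught by velocity checks: a legitimate user mistypes a password a few times, while a credential-stuffing bot generates many failures in a short window. The sketch below assumes an illustrative threshold and window; real systems tune these per application and combine them with other signals:

```python
# Sketch: flagging credential-stuffing behavior by counting failed logins
# per source within a sliding time window. Threshold and window size are
# illustrative assumptions, not recommended production values.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_FAILURES = 5

failures = defaultdict(deque)  # source id -> timestamps of recent failed logins

def record_failed_login(source_id, now):
    """Record a failure; return True when the source exceeds the threshold."""
    q = failures[source_id]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:  # drop events outside the window
        q.popleft()
    return len(q) > MAX_FAILURES

# A bot running through a leaked password list: 10 failures in 10 seconds.
flagged = False
for t in range(10):
    flagged = record_failed_login("bot-1", t)
print(flagged)  # True
```

A human who fails twice and succeeds never crosses the threshold, so the check stays invisible to legitimate users.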
Crude bot attacks against websites are easy to block with IP- and reputation-based signatures and rules. However, because of the increasing sophistication and frequency of attacks, it is important to be able to uniquely identify the attacking machine, a process referred to as device fingerprinting. The fingerprint should be IP agnostic yet unique enough to be acted upon with confidence. Because resourceful attackers may actively try to manipulate the fingerprint extracted by the web tool, it should also be resistant to client-side manipulation.
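The core idea of an IP-agnostic fingerprint can be sketched as hashing a canonical, ordered view of client-side attributes into a single identifier. The attribute set below is an illustrative assumption; real fingerprinting collects many more signals and adds anti-tampering checks:

```python
# Sketch: building an IP-agnostic device fingerprint by hashing client-side
# attributes. The attributes shown are a small illustrative subset.
import hashlib

def device_fingerprint(attrs):
    """Hash a stable, sorted key=value view of client attributes."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

client = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "fonts": "Arial,Helvetica,Times",
    "canvas_hash": "a3f9c1",
}

fp = device_fingerprint(client)
# The same attributes always yield the same fingerprint, regardless of which
# IP the request arrives from:
assert fp == device_fingerprint(dict(client))
```

Sorting the keys before hashing matters: it makes the fingerprint independent of the order in which attributes were collected.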
Web client fingerprint technology introduces significant value in the context of automated attacks, such as web scraping; Brute Force and advanced availability threats, such as HTTP Dynamic Flood; and low and slow attacks, where the correlation across multiple sessions is essential for proper detection and mitigation.
For each fingerprint-based, uniquely identified source, a historical track record is stored with all security violations, activity records and application session flows. Each abnormal behavior is registered and scored. Violation examples include SQL injection, suspicious session flow and high page access rate. Once a threshold is reached, the source with the marked fingerprint will not be allowed to access the secured application.
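The score-and-threshold logic described above can be sketched as a small accumulator keyed by fingerprint. The violation weights and blocking threshold here are illustrative assumptions:

```python
# Sketch: per-fingerprint violation scoring with a blocking threshold.
# Weights and threshold are illustrative, not real product values.
from collections import defaultdict

VIOLATION_SCORES = {
    "sql_injection": 50,
    "suspicious_session_flow": 20,
    "high_page_access_rate": 10,
}
BLOCK_THRESHOLD = 60

scores = defaultdict(int)
history = defaultdict(list)  # historical track record per fingerprint

def register_violation(fingerprint, violation):
    """Record and score a violation; report whether the source is now blocked."""
    history[fingerprint].append(violation)
    scores[fingerprint] += VIOLATION_SCORES[violation]
    return scores[fingerprint] >= BLOCK_THRESHOLD

print(register_violation("fp-123", "high_page_access_rate"))  # False (score 10)
print(register_violation("fp-123", "sql_injection"))          # True (score 60)
```

Keeping the full violation history alongside the score supports the audit trail: when a fingerprint is blocked, analysts can see exactly which behaviors triggered it.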
Ultimately, understanding and managing bots isn’t about crafting a strategy driven by a perceived negative attitude toward bots because, as we’ve explained, bots serve many useful purposes for propelling the business forward. Rather, it’s about equipping your organization to act as a digital detective to mitigate malicious traffic without adversely impacting legitimate traffic.
Organizations need to embrace technological advancements that yield better business performance while integrating the necessary security measures to guard their customer data and experience.
Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.