Security

Executives’ Changing Views on Cybersecurity

July 9, 2019 — by Radware

What does the shift in how cybersecurity is viewed by senior executives within organizations mean? To find out, Radware surveyed more than 260 executives worldwide and discovered that cybersecurity has moved well beyond the domain of the IT department and is now the direct responsibility of senior executives.

Security as a Business Driver

The protection of public and private cloud networks and digital assets is a business driver that needs to be researched and evaluated just like other crucial issues that affect the health of organizations.

Just because the topic is being elevated to the boardroom doesn’t necessarily mean that progress is being made. Executives’ preference for managing cybersecurity internally (45% overall, and 55% in the AMER region) was slightly higher than in 2018. Yet the share of respondents who said that hackers can penetrate their networks held at 67%, unchanged from last year’s C-suite perspectives report.

[You may also like: Executives Are Turning Infosec into a Competitive Advantage]

Staying Current on Attack Vectors

As in the past two years’ surveys, two in five executives reported relying on their security vendors to stay current and keep their security products up to date. A similar share reported doing daily research or subscribing to third-party research centers.

At the same time, the estimated cost of an attack jumped 53%, from 3 million USD/EUR/GBP in 2018 to 4.6 million USD/EUR/GBP in 2019.

Looking Forward

The respondents ranked improvement of information security (54%) and business efficiency (38%) as the top two business transformation goals of integrating new technologies. In last year’s survey, the same two goals earned the top two spots, but the emphasis on information security rose sharply this year, up from 38% in 2018, while business efficiency held steady (37% in 2018).

Although the intent to enhance cybersecurity is growing, actions do not necessarily follow. Often, the work of deploying new technologies to streamline processes, lower operating costs, offer more customer touchpoints and react with more agility to market changes proceeds faster than the implementation of security measures.

Every new touchpoint added to networks, both public and private, increases organizations’ exposure and vulnerability to cyberattacks. If organizations are to truly benefit from advances in technology, they must back those advances with the right level of budgetary investment in security.

The true cost of a cyberattack or data breach only becomes known once the attack succeeds. Senior executives who spend the time now to figure out what cybersecurity infrastructure makes sense for their organizations reduce the risk of incurring those costs. The investment can also be leveraged to build market advantage if organizations let their customers and suppliers know that cybersecurity is part of their culture of doing business. Prevention, not remediation, should be the focus.

[You may also like: How Cyberattacks Directly Impact Your Brand]

Securing digital assets can no longer be delegated solely to the IT department. Rather, security planning needs to be infused into new product and service offerings, development plans and new business initiatives. The C-suite must lead the way.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Security

The Impact of GDPR One Year In

June 27, 2019 — by Radware

Data breaches are expensive, and the costs are only going up.

Those reporting attacks that cost 10 million USD/EUR/GBP or more almost doubled from last year — from 7% in 2018 to 13% in 2019. Half of Radware’s C-Suite Perspectives survey respondents estimated that an attack cost somewhere between 500,001 and 9.9 million USD/EUR/GBP.

One Year In

Arguably, the General Data Protection Regulation (GDPR), which has been in effect in the European Union since May 2018, contributes to these rising costs.

Every EU state has a data protection authority (DPA) that is authorized to impose administrative fines for improper handling of data. Fines can go up to 4% of a company’s worldwide revenues for more serious violations. Article 83 of the GDPR requires that fines be “effective, proportionate and dissuasive.”

More than half of Radware’s 2019 C-Suite Perspectives survey respondents in EMEA reported experiencing an incident requiring self-reporting under the GDPR in the past 12 months.

In the largest fine to date, France’s data protection authority fined Google €50 million for lacking valid user consent for personalized advertising. Germany fined Knuddels €20,000 for insufficiently securing user data, which enabled hackers to steal user passwords. And a sports betting café in Austria received a €5,000 fine for unlawful video surveillance.

So far, DPAs have received almost 150,000 complaints about data handling. Most are about video surveillance and advertising calls or mailings, according to the EU Commission. While fines have not yet been imposed in many cases, the potential for significant penalties is there.

The takeaway? C-suite executives in all regions should not let the leniency of the first year of GDPR enforcement lull them into complacency. The threat of GDPR fines is just one risk facing organizations that experience a data breach.

The danger is very real.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Cloud Security

Transforming Into a Multicloud Environment

June 26, 2019 — by Radware

While C-suite executives are taking on larger roles in proactively discussing cybersecurity issues, they are also evaluating how to leverage advances in technology to improve business agility. But as network architectures get more complex, there is added pressure to secure new points of attack vulnerability.

Organizations continue to host applications and data in the public cloud, typically spread across multiple cloud providers. This multicloud approach enables enterprises to be nimbler with network operations, improve the customer experience and reduce costs.

[You may also like: Executives Are Turning Infosec into a Competitive Advantage]

Public Cloud Challenges

Every public cloud provider utilizes different hardware and software security policies, methods and mechanisms. This creates a challenge for enterprises to maintain standard policies and configurations across all infrastructures.

Furthermore, public cloud providers generally meet only basic security standards for their platforms, and application security for workloads running on public clouds is not included in the public cloud offering.

Even with concerns about the security of public clouds (almost three in five respondents expressed concern about vulnerabilities within their companies’ public cloud networks), organizations are moving applications and data to cloud service providers.

The Human Side of the Cloud

Sometimes the biggest threat to an organization’s digital assets is the very people hired to protect them. Whether on purpose or through carelessness, people can compromise the permissions designed to create a security barrier.

[You may also like: Eliminating Excessive Permissions]

Among the almost three-fourths of respondents who indicated that they had experienced unauthorized access to their public cloud assets, the most common causes were:

  • An employee neglected credentials in a development forum (41%);
  • A hacker made it through the provider’s security (37%) or the company’s security (31%); or
  • An insider left a way in (21%).

An insider?! Yes, indeed. Organizations may run into malicious insiders (legitimate users who exploit their privileges to cause harm) and/or negligent insiders (also legitimate users, such as Dev/DevOps engineers who make configuration mistakes, or other employees with access who practice low security hygiene and leave ways for hackers to get in).

[You may also like: Are Your DevOps Your Biggest Security Risks?]

To limit the human factor, senior-level executives should make sure that continuous hardening checks are applied to configurations in order to validate permissions and limit the possibility of attacks as much as possible.

The goals? To avoid public exposure of data from the cloud and to reduce overly permissive access to resources by making sure that communication between entities within a cloud, as well as access to assets and APIs, is allowed only for valid reasons.
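
What might such a hardening check look like in practice? Below is a minimal, illustrative sketch, assuming an AWS environment and the boto3 SDK, that flags S3 buckets whose ACLs grant access to public groups. A real program would also cover IAM policies, security groups and API permissions; this is an example of the idea, not a complete audit.

```python
# Illustrative hardening check (assumes AWS credentials are configured):
# flag S3 buckets whose ACLs grant access to "AllUsers" or "AuthenticatedUsers".
import boto3

PUBLIC_GROUPS = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

def find_public_buckets():
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            # Grants to a public group URI mean the bucket is exposed beyond the account
            if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
                flagged.append((bucket["Name"], grant["Permission"]))
    return flagged

if __name__ == "__main__":
    for name, permission in find_public_buckets():
        print(f"Bucket {name} grants {permission} to a public group")
```

Run on a schedule (or in a CI pipeline), checks like this turn permission validation into a continuous process rather than a one-time review.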

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Security

IDBA: A Patented Bot Detection Technology

June 13, 2019 — by Radware

Over half of all internet traffic is generated by bots — some legitimate, some malicious. Competitors and adversaries alike deploy “bad” bots that leverage different methods to achieve nefarious objectives. This includes account takeover, scraping data, denying available inventory and launching denial-of-service attacks with the intent of stealing data or causing service disruptions.

These attacks often go undetected by conventional mitigation systems and strategies because bots have evolved from basic scripts into large-scale, distributed bots with human-like interaction capabilities that evade detection mechanisms. Staying ahead of the threat landscape requires more sophisticated, advanced capabilities to accurately detect and mitigate these threats. One of the key technical capabilities required to stop today’s most advanced bots is intent-based deep behavioral analysis (IDBA).

What Exactly is IDBA?

IDBA is a major step forward in bot detection technology because it performs behavioral analysis at a higher level of abstraction (intent), unlike the commonly used, shallow interaction-based behavioral analysis. Account takeover is an example of an intent, while “mouse pointer moving in a straight line” is an example of an interaction.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

Capturing intent enables IDBA to provide significantly higher levels of accuracy to detect advanced bots. IDBA is designed to leverage the latest developments in deep learning.

More specifically, IDBA uses semi-supervised learning models to overcome the challenges of inaccurately labeled data, bot mutation and the anomalous behavior of human users. And it leverages intent encoding, intent analysis and adaptive-learning techniques to accurately detect large-scale distributed bots with sophisticated human-like interaction capabilities.

[You may also like: Bot Managers Are a Cash-Back Program For Your Company]

3 Stages of IDBA

A visitor’s journey through a web property needs to be analyzed in addition to the interaction-level characteristics, such as mouse movements. Using richer behavioral information, an incoming visitor can be classified as a human or bot in three stages:

  • Intent encoding: The visitor’s journey through a web property is captured through signals such as mouse or keystroke interactions, URL and referrer traversals, and time stamps. These signals are encoded using a proprietary, deep neural network architecture into an intent encoding-based, fixed-length representation. The encoding network jointly achieves two objectives: to be able to represent the anomalous characteristics of completely new categories of bots and to provide greater weight to behavioral characteristics that differ between humans and bots.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

  • Intent analysis: Here, the intent encoding of the user is analyzed using multiple machine learning modules in parallel. A combination of supervised and unsupervised learning-based modules are used to detect both known and unknown patterns.
  • Adaptive learning: The adaptive-learning module collects the predictions made by the different models and takes actions on bots based on these predictions. In many cases, the action involves presenting a challenge to the visitor, such as a CAPTCHA or an SMS OTP, that provides a feedback mechanism (i.e., CAPTCHA solved). This feedback is incorporated to improve the decision-making process (a simplified sketch of the three stages follows the list below). Decisions can be broadly categorized into two types of tasks.
    • Determining thresholds: The thresholds to be chosen for anomaly scores and classification probabilities are determined through adaptive threshold control techniques.
    • Identifying bot clusters: Selective incremental blacklisting is performed on suspicious clusters. The suspicion scores associated with the clusters (obtained from the collusion detector module) are used to set prior bias.
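
To make the three stages more concrete, here is a minimal sketch, assuming PyTorch, of how session signals could be encoded into a fixed-length vector and then scored by supervised and unsupervised modules with a simple threshold-based decision. It illustrates the general shape of the pipeline only; the architecture, feature layout and thresholds are stand-ins, not Radware’s implementation.

```python
# Toy illustration of the three stages: intent encoding, intent analysis,
# and a (greatly simplified) adaptive decision step.
import torch
import torch.nn as nn

class IntentEncoder(nn.Module):
    """Encode a variable-length sequence of interaction signals
    (mouse/keystroke events, URL transitions, timestamps) into a fixed-length vector."""
    def __init__(self, feature_dim=8, encoding_dim=32):
        super().__init__()
        self.rnn = nn.LSTM(feature_dim, encoding_dim, batch_first=True)

    def forward(self, session):                 # session: (batch, steps, feature_dim)
        _, (hidden, _) = self.rnn(session)
        return hidden[-1]                        # fixed-length encoding: (batch, encoding_dim)

class IntentClassifier(nn.Module):
    """Supervised head: probability that an encoded session is a bot."""
    def __init__(self, encoding_dim=32):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(encoding_dim, 16), nn.ReLU(),
                                  nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, encoding):
        return self.head(encoding).squeeze(-1)

def anomaly_score(encoding, human_centroid):
    """Unsupervised signal: distance from the centroid of known-human encodings."""
    return torch.norm(encoding - human_centroid, dim=-1)

def decide(bot_prob, score, prob_threshold=0.8, score_threshold=3.0):
    """Simplified adaptive step: thresholds would normally be tuned from
    challenge feedback such as solved CAPTCHAs."""
    return "challenge" if bot_prob > prob_threshold or score > score_threshold else "allow"

# Example: score a batch of two sessions of 50 interaction steps each
encoder, clf = IntentEncoder(), IntentClassifier()
sessions = torch.randn(2, 50, 8)                 # placeholder interaction features
enc = encoder(sessions)
centroid = torch.zeros(32)                       # placeholder "human" centroid
for p, s in zip(clf(enc), anomaly_score(enc, centroid)):
    print(decide(p.item(), s.item()))
```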

[You may also like: The Big, Bad Bot Problem]

IDBA or Bust!

Current bot detection and classification methodologies are ineffective in countering the threats posed by rapidly evolving and mutating sophisticated bots.

Bot detection techniques that use interaction-based behavioral analysis can identify Level 3 bots but fail to detect the advanced Level 4 bots that have human-like interaction capabilities. The unavailability of correctly labeled data for Level 4 bots, bot mutations and the anomalous behavior of human visitors from disparate industry domains require the development of semi-supervised models that work at a higher level of abstraction of intent, unlike only interaction-based behavioral analysis.

IDBA leverages a combination of intent encoding, intent analysis and adaptive-learning techniques to identify the intent behind attacks perpetrated by massively distributed human-like bots.

Read “How to Evaluate Bot Management Solutions” to learn more.

Botnets

What You Need to Know About Botnets

June 12, 2019 — by Radware

Botnets comprised of vulnerable IoT devices, combined with widely available DDoS-as-a-Service tools and anonymous payment mechanisms, have pushed denial-of-service attacks to record-breaking volumes.

A single attack can result in downtime, lost business and significant financial damages. Understanding the current tactics, techniques and procedures used by today’s cyber criminals is key to defending your network.

Watch this latest video from Radware’s Hacker’s Almanac to learn more about botnets and how you can help protect your business from this type of sabotage.

Download the “Hacker’s Almanac” to learn more.

Security

5 Things to Consider When Choosing a Bot Management Solution

June 4, 2019 — by Radware

For organizations both large and small, securing the digital experience necessitates a dedicated bot management solution. The escalating intensity of global bot traffic and the increasing severity of its overall impact mean that bot management solutions are crucial to ensuring business continuity and success.

The rise in malicious bot traffic, and more specifically in bots that mimic human behavior and require advanced machine learning to mitigate, requires the ability to spot the wolf in sheep’s clothing.

We previously covered the basics of bot management evaluation and urge you to also consider the following factors when choosing a solution.

Extensibility and Flexibility

True bot management goes beyond just the website. An enterprise-grade solution should protect all online assets, including your website, mobile apps and APIs, and should interoperate with your business partners’ systems and vital third-party APIs.

[You may also like: Key Considerations In Bot Management Evaluation]

Flexible Deployment Options

Bot mitigation solutions should be easy to deploy and operate with the existing infrastructure, such as CDNs and WAFs, as well as various technology stacks and application servers. Look for solutions that have a range of integration options, including web servers/CDNs/CMS plugins, SDKs for Java, PHP, .NET, Python, ColdFusion, Node.js, etc., as well as via JavaScript tags and virtual appliances.

A solution with non-intrusive API-based integration capability is key to ensuring minimal impact on your web assets.
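
As one example of what non-intrusive, API-based integration can look like, here is a minimal sketch assuming a Flask application and a generic bot-detection API. The endpoint URL and response fields are hypothetical placeholders, not a specific vendor interface; an actual deployment would use the vendor’s SDK or plugin.

```python
# Illustrative server-side hook: forward request metadata to a (hypothetical)
# bot-detection API and act on its verdict before serving the page.
import requests
from flask import Flask, request, abort

app = Flask(__name__)
BOT_API = "https://bot-detector.example.com/v1/classify"   # hypothetical endpoint

@app.before_request
def screen_for_bots():
    try:
        verdict = requests.post(BOT_API, json={
            "ip": request.remote_addr,
            "user_agent": request.headers.get("User-Agent", ""),
            "path": request.path,
            "method": request.method,
        }, timeout=0.1).json()            # tight timeout keeps added latency negligible
    except (requests.RequestException, ValueError):
        return None                        # fail open so the site stays available
    if verdict.get("action") == "block":
        abort(403)

@app.route("/")
def index():
    return "Hello, human!"
```

The key design point is that the hook is additive: it inspects requests out of band and can be removed or set to monitor-only without touching application code paths.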

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Finally, any solution provider should ideally have multiple globally distributed points of presence to maximize system availability, minimize latency and overcome any internet congestion issues.

Is It a Fully Managed and Self-Reliant Service?

Webpage requests can number in the millions per minute for popular websites, and data processing for bot detection needs to be accomplished in real time. This makes manual intervention impossible — even adding suspected IP address ranges is useless in countering bots that cycle through vast numbers of addresses to evade detection. As a result, a key question that needs to be answered is: does the solution require a specialized team to manage it, or does it operate autonomously after initial setup?

[You may also like: The Big, Bad Bot Problem]

Bot mitigation engines equipped with advanced technologies such as machine learning automate much of their own management, significantly reducing the time and workforce needed to manage bots. Automated responses to threats and a system that does not require manual intervention considerably reduce the total cost of ownership.

Building vs. Buying

Large organizations have resources to develop their own in-house bot management solutions, but most companies do not have the time, resources or money to accomplish that. Building an adaptive and sophisticated bot mitigation solution, which can counter constantly evolving bots, can take years of specialized development.

Financially, it makes business sense to minimize capex and purchase cloud-based bot mitigation solutions on a subscription basis. This can help companies realize the value of bot management without making a large upfront investment.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Data Security, Privacy and Compliance Factors

A solution should ensure that traffic does not leave your network — or, if it does, that the data is encrypted and hashed to maximize privacy and compliance. Ensuring that the bot mitigation solution complies with GDPR requirements for data at rest and data in transit will help avoid personal data breaches and the risk of financial and legal penalties.

Read “How to Evaluate Bot Management Solutions” to learn more.

Security

Key Considerations In Bot Management Evaluation

May 9, 2019 — by Radware

The escalating intensity of global bot traffic and the increasing severity of its overall impact mean that dedicated bot management solutions are crucial to ensuring business continuity and success. This is particularly true since more sophisticated bad bots can now mimic human behavior and easily deceive conventional cybersecurity solutions/bot management systems.

Addressing highly sophisticated and automated bot-based cyberthreats requires deep analysis of bots’ tactics and intentions. According to Forrester Research’s The Forrester New Wave™: Bot Management, Q3 2018 report, “Attack detection, attack response and threat research are the biggest differentiators. Bot management tools differ greatly in their detection methods; many have very limited — if any — automated response capabilities. Bot management tools must determine the intent of automated traffic in real time to distinguish between good bots and bad bots.”

When selecting a bot mitigation solution, companies must evaluate the following criteria to determine which one best fits their unique needs.

Basic Bot Management Features

Organizations should evaluate the range of possible response actions — such as blocking, limiting, the ability to outwit competitors by serving fake data and the ability to take custom actions based on bot signatures and types.

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Any solution should have the flexibility to take different mitigation approaches on various sections and subdomains of a website, as well as the ability to integrate with only a certain subset of pages of that website. It should also offer a “monitor mode” with no impact on web traffic to give users insight into the solution’s capabilities during a trial before activating real-time blocking.
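
To illustrate what such per-section flexibility might look like, here is a small, hypothetical sketch of a policy table mapping site sections and bot types to response actions, including a monitor-only mode. The paths, bot categories and action names are invented for demonstration and do not represent any vendor’s configuration format.

```python
# Hypothetical per-section response policies: block, rate-limit, serve fake
# data, challenge, or only monitor, depending on site section and bot type.
POLICIES = {
    "/checkout": {"scraper": "block", "scalper": "block", "default": "captcha"},
    "/catalog":  {"scraper": "serve_fake_data", "default": "rate_limit"},
    "/blog":     {"default": "monitor"},
}

def choose_action(path: str, bot_type: str) -> str:
    for prefix, policy in POLICIES.items():
        if path.startswith(prefix):
            return policy.get(bot_type, policy["default"])
    return "monitor"            # unknown sections start in monitor-only mode

print(choose_action("/checkout/pay", "scalper"))   # -> block
print(choose_action("/blog/post-42", "scraper"))   # -> monitor
```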

Additionally, any enterprise-grade solution should be able to be integrated with popular analytics dashboards such as Adobe or Google Analytics to provide reports on nonhuman traffic.

Capability to Detect Large-Scale Distributed Humanlike Bots

When selecting a bot mitigation solution, businesses should try to understand the underlying technique used to identify and manage sophisticated attacks such as large-scale distributed botnet attacks and “low and slow” attacks, which attempt to evade security countermeasures.

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Traditional defenses fall short of necessary detection features to counter such attacks. Dynamic IP attacks render IP-based mitigation useless. A rate-limiting system without any behavioral learning means dropping real customers when attacks happen. Some WAFs and rate-limiting systems that are often bundled or sold along with content delivery networks (CDNs) are incapable of detecting sophisticated bots that mimic human behavior.

The rise of highly sophisticated humanlike bots in recent years requires more advanced techniques in detection and response. Selection and evaluation criteria should focus on the various methodologies that any vendor’s solution uses to detect bots, e.g., device and browser fingerprinting, intent and behavioral analyses, collective bot intelligence and threat research, as well as other foundational techniques.

A Bot Detection Engine That Continuously Adapts to Beat Scammers and Outsmart Competitors

  • How advanced is the solution’s bot detection technology?
  • Does it use unique device and browser fingerprinting?
  • Does it leverage intent analysis in addition to user behavioral analysis?
  • How deep and effective are the fingerprinting and user behavioral modeling?
  • Does it leverage collective threat intelligence?

[You may also like: The Big, Bad Bot Problem]

Any bot management system should accomplish all of this in addition to collecting hundreds of parameters from users’ browsers and devices to uniquely identify them and analyze their behavior. It should also match the deception capabilities of sophisticated bots. Ask for examples of sophisticated attacks that the solution was able to detect and block.
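
As a toy illustration of the fingerprinting idea, the sketch below hashes a handful of client-reported parameters into a stable identifier. Production engines collect hundreds of signals, guard against spoofing and combine fingerprints with behavioral and intent analysis; the fields here are examples only.

```python
# Toy device/browser fingerprint: hash a canonicalized set of client
# parameters so the same device yields the same identifier across requests.
import hashlib
import json

def fingerprint(params: dict) -> str:
    canonical = json.dumps(params, sort_keys=True)        # stable ordering
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "accept_language": "en-US,en;q=0.9",
    "screen": "1920x1080x24",
    "timezone_offset": -300,
    "webgl_renderer": "ANGLE (NVIDIA GeForce GTX 1060)",
}
print(fingerprint(visitor))    # identical parameters -> identical fingerprint
```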

Impact on User Experience — Latency, Accuracy and Scalability

Website and application latency creates a poor user experience. Any bot mitigation solution shouldn’t add to that latency, but rather should identify issues that help resolve it.

Accuracy of bot detection is critical. Any solution must not only distinguish good bots from malicious ones but must also preserve the user experience and allow authorized bots from search engines and partners through. Maintaining a consistent user experience on sites such as B2C e-commerce portals can be difficult during peak hours. The solution should be scalable to handle spikes in traffic.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Keeping false positives to a minimum to ensure that the user experience is not impacted is equally important. Real users should never have to solve a CAPTCHA or prove that they’re not a bot. An enterprise-grade bot detection engine should have deep-learning and self-optimizing capabilities to identify and block constantly evolving bots that alter their characteristics to evade detection by basic security systems.

Read “How to Evaluate Bot Management Solutions” to learn more.

Application Security

4 Emerging Challenges in Securing Modern Applications

May 1, 2019 — by Radware

Modern applications are difficult to secure. Whether they are web or mobile, custom developed or SaaS-based, applications are now scattered across different platforms and frameworks. To accelerate service development and business operations, applications rely on third-party resources that they interact with via APIs, well orchestrated by state-of-the-art automation and synchronization tools. As a result, the attack surface grows, blind spots multiply and exposure to risk increases.

Applications, as well as APIs, must be protected against an expanding variety of attack methods and sources and must be able to make educated decisions in real time to mitigate automated attacks. Moreover, applications constantly change, and security policies must adapt just as fast. Otherwise, businesses face increased manual labor and operational costs, in addition to a weaker security posture.

The WAF Ten Commandments

The OWASP Top 10 list serves as an industry benchmark for the application security community, and provides a starting point for ensuring protection from the most common and virulent threats, application misconfigurations that can lead to vulnerabilities, and detection tactics and mitigations. It also defines the basic capabilities required from a Web Application Firewall in order to protect against common attacks targeting web applications like injections, cross-site scripting, CSRF, session hijacking, etc. There are numerous ways to exploit these vulnerabilities, and WAFs must be tested for security effectiveness.

However, vulnerability protection is just the basics. Advanced threats force application security solutions to do more.

Challenge 1: Bot Management

52% of internet traffic is bot generated, half of which is attributed to “bad” bots. Unfortunately, 79% of organizations can’t make a clear distinction between good and bad bots. The impact is felt across all business arms as bad bots take over user accounts and payment information, scrape confidential data, hold up inventory and skew marketing metrics, thus leading to wrong decisions. Sophisticated bots mimic human behavior and easily bypass CAPTCHA or other challenges. Distributed bots render IP-based and even device-fingerprinting-based protection ineffective. Defenders must level up the game.

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Challenge 2: Securing APIs

Machine-to-machine communications, integrated IoTs, event-driven functions and many other use cases leverage APIs as the glue for agility. Many applications gather information and data from services with which they interact via APIs. Threats to API vulnerabilities include injections, protocol attacks, parameter manipulations, invalidated redirects and bot attacks. Businesses tend to grant access to sensitive data without inspecting or protecting APIs against cyberattacks. Don’t be one of them.
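
One basic building block of API protection is positive-model (allow-list) input validation. The sketch below, assuming Python and the jsonschema library, rejects requests whose parameters fall outside a declared schema; the endpoint and schema are illustrative only and complement, rather than replace, a full API security solution.

```python
# Illustrative allow-list validation of an API payload with JSON Schema:
# anything outside the declared shape (extra fields, wrong types, out-of-range
# values, injected strings) is rejected before business logic runs.
from jsonschema import validate, ValidationError

TRANSFER_SCHEMA = {
    "type": "object",
    "properties": {
        "account_id": {"type": "string", "pattern": "^[A-Z0-9]{10}$"},
        "amount": {"type": "number", "exclusiveMinimum": 0, "maximum": 10000},
        "currency": {"enum": ["USD", "EUR", "GBP"]},
    },
    "required": ["account_id", "amount", "currency"],
    "additionalProperties": False,       # reject unexpected parameters
}

def handle_transfer(payload: dict) -> dict:
    try:
        validate(instance=payload, schema=TRANSFER_SCHEMA)
    except ValidationError as err:
        return {"status": 400, "error": err.message}
    return {"status": 200}

print(handle_transfer({"account_id": "AB12345678", "amount": 50, "currency": "USD"}))
print(handle_transfer({"account_id": "1 OR 1=1", "amount": -5, "currency": "USD"}))
```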

[You may also like: How to Prevent Real-Time API Abuse]

Challenge 3: Denial of Service

Different forms of application-layer DoS attacks are still very effective at bringing application services down. This includes HTTP/S floods, low and slow attacks (Slowloris, LOIC, Torshammer), dynamic IP attacks, buffer overflow, Brute Force attacks and more. Driven by IoT botnets, application-layer attacks have become the preferred DDoS attack vector. Even the greatest application protection is worthless if the service itself can be knocked down.

[You may also like: DDoS Protection Requires Looking Both Ways]

Challenge 4: Continuous Security

For modern DevOps, agility is valued at the expense of security. Development and roll-out methodologies, such as continuous delivery, mean applications are continuously modified. It is extremely difficult to maintain a valid security policy to safeguard sensitive data in dynamic conditions without creating a high number of false positives. This task has grown beyond what humans can manage; the error rate and additional costs of manual policy maintenance are enormous. Organizations need machine-learning-based solutions that map application resources, analyze possible threats, and create and optimize security policies in real time.

[You may also like: Are Your DevOps Your Biggest Security Risks?]

Protecting All Applications

It’s critical that your solution protects applications on all platforms, against all attacks, through all the channels and at all times. Here’s how:

  • Application security solutions must encompass web and mobile apps, as well as APIs.
  • Bot Management solutions need to overcome the most sophisticated bot attacks.
  • Mitigating DDoS attacks is an essential and integrated part of application security solutions.
  • A future-proof solution must protect containerized applications, serverless functions, and integrate with automation, provisioning and orchestration tools.
  • To keep up with continuous application delivery, security protections must adapt in real time.
  • A fully managed service should be considered to remove complexity and minimize resources.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Security, Service Provider

Bot Management: A Business Opportunity for Service Providers

April 30, 2019 — by Radware

Over half of all internet traffic is generated by bots — some legitimate, some malicious. These “bad” bots are often deployed with various capabilities to achieve their nefarious objectives, which can include account takeover, scraping data, denying available inventory and launching denial-of-service attacks with the intent of stealing data or causing service disruptions. Sophisticated, large-scale attacks often go undetected by conventional mitigation systems and strategies.

Bots represent a clear and present danger to service providers. The inability to accurately distinguish malicious bots from legitimate traffic and users can leave a service provider exposed and at risk of customer loss, lost profits and irreparable brand damage.

In an age where securing the digital experience is a competitive differentiator, telecommunication companies, multiple-system operators (MSOs) and internet service providers (ISPs) must transform their infrastructures into service-aware architectures that deliver scalability and security to customers, all the while differentiating themselves and creating revenue by selling security services.

[You may also like: Bot Managers Are a Cash-Back Program For Your Company]

Bot Traffic in the Service Provider Network

Bot attacks often go undetected by conventional mitigation systems and strategies because they have evolved from basic scripts into large-scale distributed bots with human-like interaction capabilities. Bots have undergone a transformation, or evolution, over the years. Generally speaking, they can be classified into four categories, or levels, based on their degree of sophistication.

The four categories of malicious bots.

In addition to the aforementioned direct impact that these bots have, there is the added cost associated with increased traffic loads imposed on service providers’ networks. In an age of increased competition and the growth of multimedia consumption, it is critical that service providers accurately eliminate “bad” bots from their networks.

[You may also like: The Big, Bad Bot Problem]

Staying ahead of the evolving threat landscape requires more sophisticated, advanced capabilities to accurately detect and mitigate these threats. These include combining behavioral modeling, collective bot intelligence and capabilities such as device fingerprinting and intent-based deep behavioral analysis (IDBA) for precise bot management across all channels.

Protecting Core Application from Bot Access

Bots attack web and mobile applications as well as application programming interfaces (APIs). Bot-based application DoS attacks degrade web applications by exhausting system resources, third-party APIs, inventory databases and other critical resources.

[You may also like: How to Prevent Real-Time API Abuse]

IDBA is now one of the critical capabilities needed to mitigate advanced bots. It performs behavioral analysis at a higher level of abstraction of “intent,” unlike commonly used, shallow “interaction”-based behavior analysis. IDBA is a critical next-generation capability to mitigate account takeovers executed by more advanced Generation 3 and 4 bots, as it leverages the latest developments in deep learning and behavioral analysis to decode the true intention of bots. IDBA goes beyond analyzing mouse movements and keystrokes to detect human-like bots, so “bad” bots can be parsed from legitimate traffic to ensure a seamless online experience for consumers.

API Exposure

APIs are increasingly used to exchange data or to integrate with partners, and attackers understand this. For online businesses, it is essential to accurately distinguish between “good” and “bad” API calls. Attackers reverse engineer mobile and web applications to hijack API calls and program bots to invade these APIs. By doing so, they can take over accounts, scrape critical data and perform application DDoS attacks by deluging API servers with unwanted requests.

Account Takeover

This category encompasses ways in which bots are programmed to use false identities to obtain access to data or goods. Their methods for account takeover can vary. They can hijack existing accounts by cracking a password via brute-force attacks or by using known credentials that have been leaked via credential stuffing. They can also be programmed to create new accounts to carry out their nefarious intentions.

[You may also like: Will We Ever See the End of Account Theft?]

As its name suggests, this category encompasses an array of attacks focused on cracking credentials, tokens or verification codes/numbers with the goal of creating or cracking account access to data or products. Examples include account creation, token cracking and credential cracking/stuffing. Nearly all of these attacks primarily target login pages.

The impact of account takeover? Fraudulent transactions, abuse of reward programs, and damage to brand reputation.
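
As a simplified illustration of one detection signal for these attacks, the sketch below tracks failed-login volume per source IP over a sliding window and flags likely credential-stuffing activity. The window size and threshold are arbitrary example values, and real detection combines many more signals (device fingerprints, behavioral analysis, leaked-credential intelligence).

```python
# Toy credential-stuffing signal: flag an IP whose failed logins exceed a
# threshold within a sliding time window.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300        # sliding window length (example value)
MAX_FAILURES = 20           # failed logins per IP per window before flagging (example value)

_failures = defaultdict(deque)   # source IP -> timestamps of failed logins

def record_failed_login(source_ip, now=None):
    """Record a failed login; return True if the IP looks like a stuffing source."""
    now = time.time() if now is None else now
    window = _failures[source_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:   # drop events outside the window
        window.popleft()
    return len(window) > MAX_FAILURES

# Simulate a burst of failures from one address
for i in range(25):
    flagged = record_failed_login("203.0.113.7", now=1000.0 + i)
print("flagged:", flagged)       # True once failures exceed the threshold
```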

Advertising Traffic Fraud

Malicious bots create false impressions and generate illegitimate clicks on publishing sites and their mobile apps. In addition, website metrics, such as visits and conversions, are vulnerable to skewing. Bots pollute metrics, disrupt funnel analysis and inhibit key performance indicator (KPI) tracking. Automated traffic on your website also affects product metrics, campaign data and traffic analytics. Skewed analytics are a major hindrance to marketers who need reliable data for their decision-making processes.

[You may also like: Ad Fraud 101: How Cybercriminals Profit from Clicks]

The Business Opportunity for Service Providers

Regardless of the type of attack, service providers are typically held to high expectations when it comes to keeping customer data secure and maintaining service availability. With each attack, service providers risk customer loss, damage to brand reputation, lost profits and, at worst, costly governmental involvement with the resulting investigations and lawsuits.

These same business expectations apply to service providers’ customers, many of whom require security services. Although large organizations can attempt to develop their own in-house bot management solutions, these companies do not necessarily have the time, money and expertise to build and maintain them.

Building an adaptive bot mitigation solution can take years of specialized development. Financially, it makes sense to minimize capex and purchase a cloud-based bot mitigation solution on a subscription basis. This can help companies realize the value of bot management without making a large upfront investment.

Lastly, this allows service providers to protect their core infrastructure and their own customers from bot-based cyberattacks and provides the opportunity to extend any bot management solution as part of a cloud security services offering to generate a new revenue stream.

Read the 2018 Mobile Carrier Ebook, “Creating a Secure Climate for your Customers,” to learn more.

Application Security, Security

How to Prevent Real-Time API Abuse

April 18, 2019 — by Radware

The widespread adoption of mobile and IoT devices, and increased use of cloud systems are driving a major change in modern application architecture. Application Programming Interfaces (APIs) have emerged as the bridge to facilitate communication between different application architectures. However, with the widespread deployment of APIs, automated attacks on poorly protected APIs are mounting. Personally Identifiable Information (PII), payment card details, and business-critical services are at risk due to automated attacks on APIs.

So what are key API vulnerabilities, and how can you protect against API abuse?

Authentication Flaws

Many APIs check only authentication status, but not whether the request is coming from a genuine user. Attackers exploit such flaws in various ways (including session hijacking and account aggregation) to imitate genuine API calls. Attackers also target APIs by reverse-engineering mobile apps to discover how they call the API. If API keys are embedded in the app, this can result in an API breach. API keys should not be used alone for user authentication.
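
One common mitigation is to pair each client with a secret and sign every request, so a key lifted from a decompiled app is not enough to forge calls. The sketch below shows a generic HMAC-based signing and verification flow; the message layout and scheme are illustrative assumptions rather than a particular standard, and in practice secrets should be provisioned per user or device and rotated.

```python
# Illustrative request signing: the client signs method, path, body and a
# timestamp with a shared secret; the server recomputes and compares.
import hashlib
import hmac
import time

SECRET = b"per-client-secret-stored-server-side"   # example secret

def sign(method: str, path: str, body: str, timestamp: str) -> str:
    message = f"{method}\n{path}\n{body}\n{timestamp}".encode()
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(method, path, body, timestamp, signature, max_skew=300) -> bool:
    if abs(time.time() - float(timestamp)) > max_skew:   # reject stale/replayed requests
        return False
    expected = sign(method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)       # constant-time comparison

ts = str(int(time.time()))
sig = sign("POST", "/v1/transfer", '{"amount": 100}', ts)
print(verify("POST", "/v1/transfer", '{"amount": 100}', ts, sig))   # True
```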

[You may also like: Are Your DevOps Your Biggest Security Risks?]

Lack of Robust Encryption

Many APIs lack robust encryption between the API client and the API server. Attackers exploit such vulnerabilities through man-in-the-middle attacks. Attackers also intercept unencrypted or poorly protected API transactions between the API client and server to steal sensitive information or alter transaction data.

What’s more, the ubiquitous use of mobile devices, cloud systems and microservice design patterns has further complicated API security, as multiple gateways are now involved in facilitating interoperability among diverse web applications. The encryption of data flowing through all these channels is paramount.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

Business Logic Vulnerability

APIs are vulnerable to business logic abuse. Attackers make repeated, large-scale API calls to an application server, or send slow POST requests, to cause denial of service. A DDoS attack on an API can result in massive disruption to a front-end web application.

Poor Endpoint Security

Most IoT devices and microservice tools are programmed to communicate with their servers through API channels. These devices authenticate themselves on API servers using client certificates. Hackers attempt to gain control of an API from the IoT endpoint, and if they succeed, they can easily re-sequence API calls, which can result in a data breach.

[You may also like: The Evolution of IoT Attacks]

How You Can Prevent API Abuse

A bot management solution that defends APIs against automated attacks and ensures that only genuine users have the ability to access APIs is paramount. When evaluating such a solution, consider whether it offers broad attack detection and coverage, comprehensive reporting and analytics, and flexible deployment options.

Other steps you can (and should) take include:

  • Monitor and manage API calls coming from automated scripts (bots)
  • Drop primitive authentication mechanisms
  • Implement measures to prevent API access by sophisticated human-like bots
  • Use robust encryption between API clients and servers
  • Deploy token-based rate limiting with features to limit API access based on the number of IPs, sessions and tokens (a minimal sketch follows this list)
  • Implement robust security on endpoints
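
As an illustration of token-based rate limiting, here is a minimal token-bucket sketch keyed by API token (the same structure works per IP or per session). The capacity and refill rate are arbitrary example values, not recommendations, and a production limiter would live in shared storage rather than process memory.

```python
# Toy token-bucket rate limiter keyed by API token.
import time
from collections import defaultdict

CAPACITY = 20          # maximum burst size (example value)
REFILL_RATE = 5.0      # tokens added per second (example value)

_buckets = defaultdict(lambda: {"tokens": CAPACITY, "last": time.monotonic()})

def allow_request(api_token: str) -> bool:
    bucket = _buckets[api_token]
    now = time.monotonic()
    # Refill proportionally to elapsed time, capped at the bucket capacity
    bucket["tokens"] = min(CAPACITY, bucket["tokens"] + (now - bucket["last"]) * REFILL_RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False       # over the limit: throttle, challenge or drop the call

for i in range(25):
    if not allow_request("client-123"):
        print(f"request {i} throttled")
```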

Read “Radware’s 2018 Web Application Security Report” to learn more.
