
Security

IDBA: A Patented Bot Detection Technology

June 13, 2019 — by Radware


Over half of all internet traffic is generated by bots — some legitimate, some malicious. Competitors and adversaries alike deploy “bad” bots that leverage different methods to achieve nefarious objectives. This includes account takeover, scraping data, denying available inventory and launching denial-of-service attacks with the intent of stealing data or causing service disruptions.

These attacks often go undetected by conventional mitigation systems and strategies because bots have evolved from basic scripts into large-scale distributed bots with human-like interaction capabilities that evade detection mechanisms. Staying ahead of this threat landscape requires more sophisticated capabilities to accurately detect and mitigate these threats. One of the key technical capabilities required to stop today’s most advanced bots is intent-based deep behavioral analysis (IDBA).

What Exactly is IDBA?

IDBA is a major step forward in bot detection technology because it performs behavioral analysis at the higher abstraction level of intent, unlike the commonly used, shallow interaction-based behavioral analysis. Account takeover, for example, is an intent; “mouse pointer moving in a straight line” is an interaction.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

Capturing intent enables IDBA to provide significantly higher levels of accuracy to detect advanced bots. IDBA is designed to leverage the latest developments in deep learning.

More specifically, IDBA uses semi-supervised learning models to overcome the challenges of inaccurately labeled data, bot mutation and the anomalous behavior of human users. And it leverages intent encoding, intent analysis and adaptive-learning techniques to accurately detect large-scale distributed bots with sophisticated human-like interaction capabilities.

[You may also like: Bot Managers Are a Cash-Back Program For Your Company]

3 Stages of IDBA

A visitor’s journey through a web property needs to be analyzed in addition to interaction-level characteristics, such as mouse movements. Using this richer behavioral information, an incoming visitor can be classified as a human or a bot in three stages (a minimal code sketch of the full pipeline follows the list):

  • Intent encoding: The visitor’s journey through a web property is captured through signals such as mouse or keystroke interactions, URL and referrer traversals, and time stamps. These signals are encoded using a proprietary, deep neural network architecture into an intent encoding-based, fixed-length representation. The encoding network jointly achieves two objectives: to be able to represent the anomalous characteristics of completely new categories of bots and to provide greater weight to behavioral characteristics that differ between humans and bots.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

  • Intent analysis: Here, the intent encoding of the user is analyzed using multiple machine learning modules in parallel. A combination of supervised and unsupervised learning-based modules are used to detect both known and unknown patterns.
  • Adaptive learning: The adaptive-learning module collects the predictions made by the different models and takes actions on bots based on these predictions. In many cases, the action involves presenting a challenge to the visitor, such as a CAPTCHA or an SMS OTP, that provides a feedback mechanism (i.e., CAPTCHA solved). This feedback is incorporated to improve the decision-making process. Decisions can be broadly categorized into two types of tasks:
    • Determining thresholds: The thresholds to be chosen for anomaly scores and classification probabilities are determined through adaptive threshold control techniques.
    • Identifying bot clusters: Selective incremental blacklisting is performed on suspicious clusters. The suspicion scores associated with the clusters (obtained from the collusion detector module) are used to set prior bias.
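
To make the three stages concrete, here is a minimal sketch of the pipeline in Python. Everything in it is an illustrative assumption: the toy featurizer stands in for the proprietary deep neural network encoder, and the classifier, anomaly scorer and thresholds are placeholders.

```python
# Minimal sketch of the three IDBA stages; all names, features and
# thresholds are illustrative assumptions, not the proprietary models.
import numpy as np

def encode_intent(journey):
    """Stage 1 (intent encoding): collapse a variable-length visitor journey
    (mouse/keystroke events, URL traversals, timestamps) into a fixed-length
    vector. A production system would use a learned sequence encoder."""
    times = [e["ts"] for e in journey]
    dwell = np.diff(times) if len(times) > 1 else np.array([0.0])
    mouse_share = sum(e["type"] == "mouse" for e in journey) / max(len(journey), 1)
    return np.array([len(journey), dwell.mean(), dwell.std(), mouse_share])

def intent_analysis(vec, classifier, anomaly_scorer):
    """Stage 2 (intent analysis): run supervised and unsupervised modules in
    parallel so both known and unknown bot patterns are covered."""
    return {"bot_prob": classifier(vec), "anomaly": anomaly_scorer(vec)}

class AdaptiveLearner:
    """Stage 3 (adaptive learning): challenge suspicious visitors and fold
    the challenge outcome back into the decision threshold."""
    def __init__(self, threshold=0.8, step=0.01):
        self.threshold, self.step = threshold, step

    def decide(self, scores):
        worst = max(scores["bot_prob"], scores["anomaly"])
        return "challenge" if worst > self.threshold else "allow"

    def feedback(self, challenge_solved):
        # Solved challenge: a human was likely flagged, so relax the threshold.
        # Failed challenge: a bot was confirmed, so tighten it.
        delta = self.step if challenge_solved else -self.step
        self.threshold = min(max(self.threshold + delta, 0.5), 0.99)
```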

[You may also like: The Big, Bad Bot Problem]

IDBA or Bust!

Current bot detection and classification methodologies are ineffective in countering the threats posed by rapidly evolving and mutating sophisticated bots.

Bot detection techniques that use interaction-based behavioral analysis can identify Level 3 bots but fail to detect the advanced Level 4 bots that have human-like interaction capabilities. The unavailability of correctly labeled data for Level 4 bots, ongoing bot mutation and the anomalous behavior of human visitors from disparate industry domains all require the development of semi-supervised models that work at the higher abstraction level of intent rather than relying on interaction-based behavioral analysis alone.

IDBA leverages a combination of intent encoding, intent analysis and adaptive-learning techniques to identify the intent behind attacks perpetrated by massively distributed human-like bots.

Read “How to Evaluate Bot Management Solutions” to learn more.


Botnets

What You Need to Know About Botnets

June 12, 2019 — by Radware


Botnets comprised of vulnerable IoT devices, combined with widely available DDoS-as-a-Service tools and anonymous payment mechanisms, have pushed denial-of-service attacks to record-breaking volumes.

A single attack can result in downtime, lost business and significant financial damages. Understanding the current tactics, techniques and procedures used by today’s cyber criminals is key to defending your network.

Watch this latest video from Radware’s Hacker’s Almanac to learn more about Botnets and how you can help protect your business from this type of sabotage.

Download the “Hacker’s Almanac” to learn more.


Security

5 Things to Consider When Choosing a Bot Management Solution

June 4, 2019 — by Radware


For organizations both large and small, securing the digital experience necessitates a dedicated bot management solution. Regardless of the size of your organization, the escalating intensity of global bot traffic and the increasing severity of its overall impact mean that bot management solutions are crucial to ensuring business continuity and success.

The rise in malicious bot traffic, and specifically in bots that mimic human behavior and require advanced machine learning to mitigate, demands the ability to distinguish the wolf in sheep’s clothing.

We previously covered the basics in bot management evaluation, and urge you to likewise consider the following factors when choosing a solution.

Extensibility and Flexibility

True bot management goes beyond just the website. An enterprise-grade solution should protect all online assets: your website, mobile apps and APIs. Interoperability with systems belonging to your business partners and with vital third-party APIs is equally crucial.

[You may also like: Key Considerations In Bot Management Evaluation]

Flexible Deployment Options

Bot mitigation solutions should be easy to deploy and operate with the existing infrastructure, such as CDNs and WAFs, as well as various technology stacks and application servers. Look for solutions that have a range of integration options, including web servers/CDNs/CMS plugins, SDKs for Java, PHP, .NET, Python, ColdFusion, Node.js, etc., as well as via JavaScript tags and virtual appliances.

A solution with non-intrusive API-based integration capability is key to ensuring minimal impact on your web assets.
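
As a sketch of what such non-intrusive, API-based integration can look like, the snippet below forwards request metadata to a hypothetical bot-detection endpoint; the URL, fields and verdict format are assumptions, not any vendor’s actual interface.

```python
# Illustrative middleware-style check against a hypothetical bot-detection
# API. Note the short, fail-open timeout so the check never adds
# meaningful latency to the protected application.
import requests

BOT_API = "https://bot-detector.example.com/v1/classify"  # hypothetical endpoint

def is_bad_bot(request_meta, timeout=0.15):
    try:
        resp = requests.post(BOT_API, json={
            "ip": request_meta["ip"],
            "user_agent": request_meta["user_agent"],
            "path": request_meta["path"],
        }, timeout=timeout)
        return resp.json().get("verdict") == "bad_bot"
    except (requests.RequestException, ValueError):
        return False  # fail open: never block real users if the detector is unreachable
```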

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Finally, any solution provider should ideally have multiple globally distributed points of presence to maximize system availability, minimize latency and overcome any internet congestion issues.

Is It a Fully Managed and Self-Reliant Service?

Webpage requests can number in the millions per minute for popular websites, and data processing for bot detection needs to be accomplished in real time. This makes manual intervention impossible — even adding suspected IP address ranges is useless in countering bots that cycle through vast numbers of addresses to evade detection. As a result, a key question to answer is: Does the solution require a specialized team to manage it, or does it operate autonomously after initial setup?

[You may also like: The Big, Bad Bot Problem]

Bot mitigation engines equipped with advanced technologies, such as machine learning, help with automating their management capabilities to significantly reduce the time and workforce needed to manage bots. Automated responses to threats and a system that does not require manual intervention considerably reduce the total cost of ownership.

Building vs. Buying

Large organizations have resources to develop their own in-house bot management solutions, but most companies do not have the time, resources or money to accomplish that. Building an adaptive and sophisticated bot mitigation solution, which can counter constantly evolving bots, can take years of specialized development.

Financially, it makes business sense to minimize capex and purchase cloud-based bot mitigation solutions on a subscription basis. This can help companies realize the value of bot management without making a large upfront investment.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Data Security, Privacy and Compliance Factors

A solution should ensure that traffic does not leave a network — or, in case it does, data should be in an encrypted and hashed format to maximize privacy and compliance. Ensuring that the bot mitigation solution complies with GDPR requirements pertaining to data at rest and data in transit will help avoid personal data breaches and the risk of financial and legal penalties.
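
As one illustration of the “encrypted and hashed” requirement, the sketch below pseudonymizes an identifier with a keyed SHA-256 digest before it leaves the network; the key handling and field choice are assumptions to adapt to your own compliance assessment.

```python
# Keyed (salted) hashing keeps identifiers pseudonymous while still letting
# the bot-management service correlate repeat visitors. In practice the
# secret key would live in a secrets manager, not an environment default.
import hashlib, hmac, os

SECRET_KEY = os.environ.get("HASH_KEY", "change-me").encode()  # assumption

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: correlatable, but the raw identifier
    cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

event = {"user_id": pseudonymize("jane@example.com"), "path": "/checkout"}
```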

Read “How to Evaluate Bot Management Solutions” to learn more.


Security

Key Considerations In Bot Management Evaluation

May 9, 2019 — by Radware


The escalating intensity of global bot traffic and the increasing severity of its overall impact mean that dedicated bot management solutions are crucial to ensuring business continuity and success. This is particularly true since more sophisticated bad bots can now mimic human behavior and easily deceive conventional cybersecurity solutions/bot management systems.

Addressing highly sophisticated and automated bot-based cyberthreats requires deep analysis of bots’ tactics and intentions. According to Forrester Research’s The Forrester New Wave™: Bot Management, Q3 2018 report, “Attack detection, attack response and threat research are the biggest differentiators. Bot management tools differ greatly in their detection methods; many have very limited — if any — automated response capabilities. Bot management tools must determine the intent of automated traffic in real time to distinguish between good bots and bad bots.”

When selecting a bot mitigation solution, companies must evaluate the following criteria to determine which best fit their unique needs.

Basic Bot Management Features

Organizations should evaluate the range of possible response actions, such as blocking, rate limiting, serving fake data to outwit competitors, and taking custom actions based on bot signatures and types.

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Any solution should have the flexibility to take different mitigation approaches on various sections and subdomains of a website, and the ability to act on only a certain subset of its pages — for example, a “monitor mode” with no impact on web traffic to provide users with insight into the solution’s capabilities during a trial, before activating real-time active blocking mode.
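
A per-path policy of this kind might be expressed as simply as the following sketch; the action names mirror common bot-management options but are illustrative assumptions.

```python
# Per-section mitigation policy: different actions for different parts of
# the site, with a log-only monitor mode as the default. All paths and
# action names are illustrative.
POLICIES = [
    {"prefix": "/checkout", "action": "block"},       # zero tolerance
    {"prefix": "/search",   "action": "rate_limit"},  # throttle scrapers
    {"prefix": "/blog",     "action": "monitor"},     # log only, no traffic impact
]

def action_for(path: str, default: str = "monitor") -> str:
    for rule in POLICIES:
        if path.startswith(rule["prefix"]):
            return rule["action"]
    return default
```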

Additionally, any enterprise-grade solution should be able to be integrated with popular analytics dashboards such as Adobe or Google Analytics to provide reports on nonhuman traffic.

Capability to Detect Large-Scale Distributed Humanlike Bots

When selecting a bot mitigation solution, businesses should try to understand the underlying technique used to identify and manage sophisticated attacks such as large-scale distributed botnet attacks and “low and slow” attacks, which attempt to evade security countermeasures.

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Traditional defenses fall short of necessary detection features to counter such attacks. Dynamic IP attacks render IP-based mitigation useless. A rate-limiting system without any behavioral learning means dropping real customers when attacks happen. Some WAFs and rate-limiting systems that are often bundled or sold along with content delivery networks (CDNs) are incapable of detecting sophisticated bots that mimic human behavior.

The rise of highly sophisticated humanlike bots in recent years requires more advanced techniques in detection and response. Selection and evaluation criteria should focus on the various methodologies that any vendor’s solution uses to detect bots, e.g., device and browser fingerprinting, intent and behavioral analyses, collective bot intelligence and threat research, as well as other foundational techniques.

A Bot Detection Engine That Continuously Adapts to Beat Scammers and Outsmart Competitors

  • How advanced is the solution’s bot detection technology?
  • Does it use unique device and browser fingerprinting?
  • Does it leverage intent analysis in addition to user behavioral analysis?
  • How deep and effective are the fingerprinting and user behavioral modeling?
  • Does it leverage collective threat intelligence?

[You may also like: The Big, Bad Bot Problem]

A bot management system should address all of these questions. It should collect hundreds of parameters from users’ browsers and devices to uniquely identify them and analyze their behavior, and it should match the deception capabilities of sophisticated bots. Ask for examples of sophisticated attacks that the solution was able to detect and block.

Impact on User Experience — Latency, Accuracy and Scalability

Website and application latency creates a poor user experience. Any bot mitigation solution shouldn’t add to that latency; ideally, it should identify issues that help resolve it.

Accuracy of bot detection is critical. Any solution must not only distinguish good bots from malicious ones but must also enhance the user experience by allowing authorized bots from search engines and partners. Maintaining a consistent user experience on sites such as B2C e-commerce portals can be difficult during peak hours. The solution should be scalable to handle spikes in traffic.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Keeping false positives to a minimum to ensure that user experience is not impacted is equally important. Real users should never have to solve a CAPTCHA or prove that they’re not a bot. An enterprise-grade bot detection engine should have deep-learning and self-optimizing capabilities to identify and block constantly evolving bots that alter their characteristics to evade detection by basic security systems.

Read “How to Evaluate Bot Management Solutions” to learn more.


Application Security

4 Emerging Challenges in Securing Modern Applications

May 1, 2019 — by Radware


Modern applications are difficult to secure. Whether they are web or mobile, custom developed or SaaS-based, applications are now scattered across different platforms and frameworks. To accelerate service development and business operations, applications rely on third-party resources that they interact with via APIs, well-orchestrated by state-of-the-art automation and synchronization tools. As a result, the attack surface grows: more blind spots mean higher exposure to risk.

Applications, as well as APIs, must be protected against an expanding variety of attack methods and sources and must be able to make educated decisions in real time to mitigate automated attacks. Moreover, applications constantly change, and security policies must adapt just as fast. Otherwise, businesses face increased manual labor and operational costs, in addition to a weaker security posture.

The WAF Ten Commandments

The OWASP Top 10 list serves as an industry benchmark for the application security community, and provides a starting point for ensuring protection from the most common and virulent threats, application misconfigurations that can lead to vulnerabilities, and detection tactics and mitigations. It also defines the basic capabilities required from a Web Application Firewall in order to protect against common attacks targeting web applications like injections, cross-site scripting, CSRF, session hijacking, etc. There are numerous ways to exploit these vulnerabilities, and WAFs must be tested for security effectiveness.
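
While a WAF enforces these protections at the perimeter, the same injection class is closed inside application code with parameterized queries. A minimal illustration using Python’s standard sqlite3 driver:

```python
# String-built SQL is what a WAF has to guess about; parameterized queries
# remove the guesswork by binding user input as data, never as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern (commented out): the payload rewrites the query.
# rows = conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe pattern: the driver binds the payload as a literal value.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload matched nothing
```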

However, vulnerability protection is just the basics. Advanced threats force application security solutions to do more.

Challenge 1: Bot Management

52% of internet traffic is bot-generated, half of which is attributed to “bad” bots. Unfortunately, 79% of organizations can’t make a clear distinction between good and bad bots. The impact is felt across all business arms as bad bots take over user accounts and payment information, scrape confidential data, hold up inventory and skew marketing metrics, leading to wrong decisions. Sophisticated bots mimic human behavior and easily bypass CAPTCHA and other challenges. Distributed bots render IP-based and even device-fingerprinting-based protection ineffective. Defenders must level up the game.

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Challenge 2: Securing APIs

Machine-to-machine communications, integrated IoT devices, event-driven functions and many other use cases leverage APIs as the glue for agility. Many applications gather information and data from services with which they interact via APIs. Threats to API vulnerabilities include injections, protocol attacks, parameter manipulations, invalidated redirects and bot attacks. Businesses tend to grant access to sensitive data without inspecting or protecting their APIs against cyberattacks. Don’t be one of them.

[You may also like: How to Prevent Real-Time API Abuse]

Challenge 3: Denial of Service

Different forms of application-layer DoS attacks are still very effective at bringing application services down. This includes HTTP/S floods, low and slow attacks (Slowloris, LOIC, Torshammer), dynamic IP attacks, buffer overflow, Brute Force attacks and more. Driven by IoT botnets, application-layer attacks have become the preferred DDoS attack vector. Even the greatest application protection is worthless if the service itself can be knocked down.

[You may also like: DDoS Protection Requires Looking Both Ways]

Challenge 4: Continuous Security

For modern DevOps, agility is valued at the expense of security. Development and roll-out methodologies, such as continuous delivery, mean applications are continuously modified. It is extremely difficult to maintain a valid security policy to safeguard sensitive data in dynamic conditions without creating a high number of false positives. This task has grown beyond manual human effort, as the error rate and additional costs it imposes are enormous. Organizations need machine-learning-based solutions that map application resources, analyze possible threats, and create and optimize security policies in real time.

[You may also like: Are Your DevOps Your Biggest Security Risks?]

Protecting All Applications

It’s critical that your solution protects applications on all platforms, against all attacks, through all the channels and at all times. Here’s how:

  • Application security solutions must encompass web and mobile apps, as well as APIs.
  • Bot Management solutions need to overcome the most sophisticated bot attacks.
  • Mitigating DDoS attacks is an essential and integrated part of application security solutions.
  • A future-proof solution must protect containerized applications, serverless functions, and integrate with automation, provisioning and orchestration tools.
  • To keep up with continuous application delivery, security protections must adapt in real time.
  • A fully managed service should be considered to remove complexity and minimize resources.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Security, Service Provider

Bot Management: A Business Opportunity for Service Providers

April 30, 2019 — by Radware


Over half of all internet traffic is generated by bots — some legitimate, some malicious. These “bad” bots are often deployed with various capabilities to achieve their nefarious objectives, which can include account takeover, scraping data, denying available inventory and launching denial-of-service attacks with the intent of stealing data or causing service disruptions. Sophisticated, large-scale attacks often go undetected by conventional mitigation systems and strategies.

Bots represent a clear and present danger to service providers. The inability to accurately distinguish malicious bots from legitimate traffic and users can leave a service provider exposed to customer loss, lost profits and irreparable brand damage.

In an age where securing the digital experience is a competitive differentiator, telecommunication companies, management services organizations (MSOs) and internet service providers (ISPs) must transform their infrastructures into service-aware architectures that deliver scalability and security to customers, all the while differentiating themselves and creating revenue by selling security services.

[You may also like: Bot Managers Are a Cash-Back Program For Your Company]

Bot Traffic in the Service Provider Network

Bot attacks often go undetected by conventional mitigation systems and strategies because they have evolved from basic scripts into large-scale distributed bots with human-like interaction capabilities. Bots have undergone a transformation, or evolution, over the years. Generally speaking, they can be classified into four categories, or levels, based on their degree of sophistication.

[Figure: The four categories of malicious bots.]

In addition to the aforementioned direct impact that these bots have, there is the added cost associated with increased traffic loads imposed on service providers’ networks. In an age of increased competition and the growth of multimedia consumption, it is critical that service providers accurately eliminate “bad” bots from their networks.

[You may also like: The Big, Bad Bot Problem]

Staying ahead of the evolving threat landscape requires more sophisticated, advanced capabilities to accurately detect and mitigate these threats. These include combining behavioral modeling, collective bot intelligence and capabilities such as device fingerprinting and intent-based deep behavioral analysis (IDBA) for precise bot management across all channels.

Protecting Core Applications from Bot Access

Bots attack web and mobile applications as well as application programming interfaces (APIs). Bot-based application DoS attacks degrade web applications by exhausting system resources, third-party APIs, inventory databases and other critical resources.

[You may also like: How to Prevent Real-Time API Abuse]

IDBA is now one of the critical capabilities needed to mitigate advanced bots. It performs behavioral analysis at the higher abstraction level of “intent,” unlike commonly used, shallow “interaction”-based behavioral analysis. IDBA is a critical next-generation capability to mitigate account takeovers executed by more advanced Generation 3 and 4 bots, as it leverages the latest developments in deep learning and behavioral analysis to decode the true intention of bots. IDBA goes beyond analyzing mouse movements and keystrokes to detect human-like bots, so “bad” bots can be parsed from legitimate traffic to ensure a seamless online experience for consumers.

API Exposure

APIs are increasingly used to exchange data or to integrate with partners, and attackers understand this. It is essential to accurately distinguish between “good” API calls and “bad” API calls for online businesses. Attackers reverse engineer mobile and web applications to hijack API calls and program bots to invade these APIs. By doing so, they can take over accounts, scrape critical data and perform application DDoS attacks by deluging API servers with unwanted requests.

Account Takeover

This category encompasses ways in which bots are programmed to use false identities to obtain access to data or goods. Their methods for account takeover can vary. They can hijack existing accounts by cracking a password via Brute Force attacks or by using known credentials that have been leaked (credential stuffing). They can also be programmed to create new accounts to carry out their nefarious intentions.

[You may also like: Will We Ever See the End of Account Theft?]

These attacks focus on cracking credentials, tokens or verification codes/numbers with the goal of creating or cracking account access to data or products. Examples include account creation, token cracking and credential cracking/stuffing. Nearly all of these attacks primarily target login pages.

The impact of account takeover? Fraudulent transactions, abuse of reward programs, and damage to brand reputation.
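
As a toy illustration of spotting the credential-stuffing pattern just described, the sketch below flags sources that try many distinct usernames with a high failure rate. The thresholds are assumptions, and a real system would key on device fingerprints rather than IPs, since distributed bots rotate addresses.

```python
# Flag login sources showing the stuffing signature: many distinct
# usernames, almost all failing. Thresholds are illustrative.
from collections import defaultdict

def stuffing_suspects(login_events, max_users_per_source=5, min_failure_rate=0.9):
    """login_events: iterable of (source_id, username, success) tuples,
    where source_id could be an IP or, better, a device fingerprint."""
    stats = defaultdict(lambda: {"users": set(), "fail": 0, "total": 0})
    for source, user, success in login_events:
        rec = stats[source]
        rec["users"].add(user)
        rec["total"] += 1
        rec["fail"] += (not success)
    return [
        source for source, rec in stats.items()
        if len(rec["users"]) > max_users_per_source
        and rec["fail"] / rec["total"] >= min_failure_rate
    ]
```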

Advertising Traffic Fraud

Malicious bots create false impressions and generate illegitimate clicks on publishing sites and their mobile apps. In addition, website metrics, such as visits and conversions, are vulnerable to skewing. Bots pollute metrics, disrupt funnel analysis and inhibit key performance indicator (KPI) tracking. Automated traffic on your website also affects product metrics, campaign data and traffic analytics. Skewed analytics are a major hindrance to marketers who need reliable data for their decision-making processes.

[You may also like: Ad Fraud 101: How Cybercriminals Profit from Clicks]

The Business Opportunity for Service Providers

Regardless of the type of attack, service providers are typically held to high expectations when it comes to keeping customer data secure and maintaining service availability. With each attack, service providers risk customer loss, damage to brand reputation, lost profits and, at worst, costly governmental involvement and the resulting investigations and lawsuits.

These same business expectations apply to service providers’ customers, many of whom require security services. Although large organizations can attempt to develop their own in-house bot management solutions, these companies do not necessarily have the time, money and expertise to build and maintain them.

Building an adaptive bot mitigation solution can take years of specialized development. Financially, it makes sense to minimize capex and purchase a cloud-based bot mitigation solution on a subscription basis. This can help companies realize the value of bot management without making a large upfront investment.

Lastly, this allows service providers to protect their core infrastructure and their own customers from bot-based cyberattacks and provides the opportunity to extend any bot management solution as part of a cloud security services offering to generate a new revenue stream.

Read Radware’s 2018 Mobile Carrier Ebook, “Creating a Secure Climate for your Customers,” today.

Application Security, Security

How to Prevent Real-Time API Abuse

April 18, 2019 — by Radware


The widespread adoption of mobile and IoT devices, and increased use of cloud systems are driving a major change in modern application architecture. Application Programming Interfaces (APIs) have emerged as the bridge to facilitate communication between different application architectures. However, with the widespread deployment of APIs, automated attacks on poorly protected APIs are mounting. Personally Identifiable Information (PII), payment card details, and business-critical services are at risk due to automated attacks on APIs.


So what are key API vulnerabilities, and how can you protect against API abuse?

Authentication Flaws

Many APIs check only authentication status, but not whether the request comes from a genuine user. Attackers exploit such flaws in various ways (including session hijacking and account aggregation) to imitate genuine API calls. Attackers also target APIs by reverse-engineering mobile apps to discover how they call the API. If API keys are embedded into the app, this can result in an API breach. API keys should not be used alone for user authentication.
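
One common hardening step behind “API keys should not be used alone” is per-request signing with a client secret, so a key scraped from an app binary is useless on its own. The scheme below is a generic sketch, not any particular vendor’s protocol:

```python
# HMAC request signing: the client signs method, path, timestamp and body
# with a shared secret; the server recomputes and compares in constant time.
import hashlib, hmac, time

def sign_request(method: str, path: str, body: bytes, secret: bytes) -> dict:
    ts = str(int(time.time()))
    msg = b"\n".join([method.encode(), path.encode(), ts.encode(), body])
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return {"X-Timestamp": ts, "X-Signature": sig}

def verify(method: str, path: str, body: bytes, secret: bytes,
           headers: dict, max_skew: int = 300) -> bool:
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False  # stale or replayed request
    msg = b"\n".join([method.encode(), path.encode(),
                      headers["X-Timestamp"].encode(), body])
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```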

[You may also like: Are Your DevOps Your Biggest Security Risks?]

Lack of Robust Encryption

Many APIs lack robust encryption between API client and API server. Attackers exploit such vulnerabilities through man-in-the-middle attacks. Attackers also intercept unencrypted or poorly protected API transactions between API client and API server to steal sensitive information or alter transaction data.

What’s more, the ubiquitous use of mobile devices, cloud systems and microservice design patterns has further complicated API security, as multiple gateways are now involved in facilitating interoperability among diverse web applications. The encryption of data flowing through all these channels is paramount.
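
On the client side, enforcing that encryption can be as simple as refusing plaintext endpoints and legacy protocol versions. A standard-library sketch (the plaintext-rejection policy is an assumption to adapt):

```python
# Enforce verified HTTPS with a modern TLS floor using only the stdlib.
import ssl, urllib.request

ctx = ssl.create_default_context()            # verifies certs and hostname by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject downgrades to legacy TLS/SSL

def fetch(url: str) -> bytes:
    if not url.startswith("https://"):
        raise ValueError("refusing to send API traffic over plaintext HTTP")
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()
```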

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

Business Logic Vulnerability

APIs are vulnerable to business logic abuse. Attackers overwhelm an application server with repeated, large-scale API calls or with slow POST requests that result in denial of service. A DDoS attack on an API can cause massive disruption to a front-end web application.

Poor Endpoint Security

Most IoT devices and microservice tools are programmed to communicate with their servers through API channels. These devices authenticate themselves on API servers using client certificates. Hackers attempt to gain control over an API from the IoT endpoint, and if they succeed, they can easily re-sequence API calls, which can result in a data breach.

[You may also like: The Evolution of IoT Attacks]

How You Can Prevent API Abuse

A bot management solution that defends APIs against automated attacks and ensures that only genuine users have the ability to access APIs is paramount. When evaluating such a solution, consider whether it offers broad attack detection and coverage, comprehensive reporting and analytics, and flexible deployment options.

Other steps you can (and should) take include:

  • Monitor and manage API calls coming from automated scripts (bots)
  • Drop primitive authentication methods
  • Implement measures to prevent API access by sophisticated human-like bots
  • Use robust encryption; it is a must-have
  • Deploy token-based rate limiting equipped with features to limit API access based on the number of IPs, sessions and tokens (see the sketch after this list)
  • Implement robust security on endpoints
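
A minimal sketch of the token-based rate limiting recommended above; keying buckets by API token or session rather than by IP alone is what keeps it useful against distributed bots, and the rates are illustrative.

```python
# Token-bucket limiter: each API token gets a bucket that refills at a
# steady rate and allows short bursts up to its capacity.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity   # tokens/sec, burst size
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # one bucket per API token or session, not per IP

def allow_request(api_token: str) -> bool:
    bucket = buckets.setdefault(api_token, TokenBucket(rate=5, capacity=20))
    return bucket.allow()
```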

Read “Radware’s 2018 Web Application Security Report” to learn more.


Attack Types & Vectors, Cloud Security, Security

Anatomy of a Cloud-Native Data Breach

April 10, 2019 — by Radware


Migrating computing resources to cloud environments opens up new attack surfaces previously unknown in the world of premise-based data centers. As a result, cloud-native data breaches frequently have different characteristics and follow a different progression than physical data breaches. Here is a real-life example of a cloud-native data breach, how it evolved and how it possibly could have been avoided.

Target Profile: A Social Media/Mobile App Company

The company is a photo-sharing social media application, with over 20 million users. It stores over 1PB of user data within Amazon Web Services (AWS), and in 2018, it was the victim of a massive data breach that exposed nearly 20 million user records. This is how it happened.

[You may also like: Ensuring Data Privacy in Public Clouds]

Step 1: Compromising a legitimate user. Frequently, the first step in a data breach is that an attacker compromises the credentials of a legitimate user. In this incident, an attacker used a spear-phishing attack to obtain an administrative user’s credentials to the company’s environment.

Step 2: Fortifying access. After compromising a legitimate user, a hacker frequently takes steps to fortify access to the environment, independent of the compromised user. In this case, the attacker connected to the company’s cloud environment through an IP address registered in a foreign country and created API access keys with full administrative access.

Step 3: Reconnaissance. Once inside, an attacker then needs to map out what permissions are granted and what actions this role allows.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Step 4: Exploitation. Once the available permissions in the account have been determined, the attacker can proceed to exploit them. Among other activities, the attacker duplicated the master user database and exposed it to the outside world with public permissions.

Step 5: Exfiltration. Finally, with customer information at hand, the attacker copied the data outside of the network, gaining access to over 20 million user records that contained personal user information.

Lessons Learned

Your Permissions Equal Your Threat Surface: Leveraging public cloud environments means that resources that used to be hosted inside your organization’s perimeter are now outside it, where they are no longer under the control of system administrators and can be accessed from anywhere in the world. Workload security, therefore, is defined by the people who can access those workloads and the permissions they have. In effect, your permissions equal your attack surface.

Excessive Permissions Are the No. 1 Threat: Cloud environments make it very easy to spin up new resources and grant wide-ranging permissions but very difficult to keep track of who has them. Such excessive permissions are frequently mischaracterized as misconfigurations but are actually the result of permission misuse or abuse. Therefore, protecting against those excessive permissions becomes the No. 1 priority for securing publicly hosted cloud workloads.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Cloud Attacks Follow a Typical Progression: Although each data breach incident may develop differently, a cloud-native breach frequently follows a typical progression of legitimate user account compromise, account reconnaissance, privilege escalation, resource exploitation and data exfiltration.

What Could Have Been Done Differently?

Protect Your Access Credentials: Your next data breach is a password away. Securing your cloud account credentials — as much as possible — is critical to ensuring that they don’t fall into the wrong hands.

Limit Permissions: Frequently, cloud user accounts are granted many permissions that they don’t need or never use. Exploiting the gap between granted permissions and used permissions is a common move by hackers. In the aforementioned example, the attacker used the accounts’ permissions to create new administrative-access API keys, spin up new databases, reset the database master password and expose it to the outside world. Limiting permissions to only what the user needs helps ensure that, even if the account is compromised, the damage an attacker can do is limited.
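
A least-privilege review can start as small as the sketch below, which uses boto3 (with read-only IAM permissions) to flag users carrying the broad AdministratorAccess managed policy; what else counts as “excessive” is an assumption to refine per environment.

```python
# Flag IAM users with the AdministratorAccess managed policy attached so
# their grants can be reviewed against what they actually use.
import boto3

ADMIN_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

def users_with_admin():
    iam = boto3.client("iam")
    flagged = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            attached = iam.list_attached_user_policies(UserName=user["UserName"])
            if any(p["PolicyArn"] == ADMIN_ARN for p in attached["AttachedPolicies"]):
                flagged.append(user["UserName"])
    return flagged

if __name__ == "__main__":
    print("Review these over-privileged users:", users_with_admin())
```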

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Alert of Suspicious Activities: Since cloud-native data breaches frequently have a common progression, there are certain account activities — such as port scanning, invoking previously unused APIs and granting public permissions — which can be identified. Alerting against such malicious behavior indicators (MBIs) can help prevent a data breach before it occurs.
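
For the AWS case in this story, one concrete MBI check is scanning recent CloudTrail events for the kinds of actions the attacker took; the event names below are real CloudTrail names, while the selection and alert handling are assumptions:

```python
# Look up recent CloudTrail events matching a shortlist of suspicious
# actions (creating access keys, opening bucket ACLs, widening ingress).
import boto3
from datetime import datetime, timedelta, timezone

SUSPICIOUS = ["CreateAccessKey", "PutBucketAcl", "AuthorizeSecurityGroupIngress"]

def recent_mbis(hours: int = 24):
    ct = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    hits = []
    for name in SUSPICIOUS:
        resp = ct.lookup_events(
            LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": name}],
            StartTime=start,
        )
        hits.extend(resp["Events"])
    return hits

for event in recent_mbis():
    print("ALERT:", event["EventName"], event.get("Username"), event["EventTime"])
```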

Automate Response Procedures: Finally, once malicious activity has been identified, fast response is paramount. Automating response mechanisms can help block malicious activity the moment it is detected and stop the breach from reaching its end goal.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Security, Service Provider

Out of the Shadows, Into the Network

April 9, 2019 — by Radware


Network security is a priority for every carrier worldwide. Investments in human resources and technology solutions to combat attacks are a significant part of carriers’ network operating budgets.

The goal is to protect their networks by staying a few steps ahead of hackers. Currently, carriers may be confident that their network security solution is detecting and mitigating DDoS attacks.

All the reports generated by the solution show the number and severity of attacks as well as how they were thwarted. Unfortunately, we know it’s a false sense of well-being because dirty traffic in the form of sophisticated application attacks is getting through security filters. No major outages or data breaches have been attributed to application attacks yet, so why should carriers care?

Maintaining a Sunny Reputation

The impact of application attacks on carriers and their customers takes many forms:

  • Service degradation
  • Network outages
  • Data exposure
  • Consumption of bandwidth resources
  • Consumption of system resources

[You may also like: How Cyberattacks Directly Impact Your Brand]

A large segment of carriers’ high-value customers has zero tolerance for service interruption. There is a direct correlation between service outages and user churn.

Application attacks put carriers’ reputations at risk. For customers, a small slowdown in services may not be a big deal initially. But as the number and severity of application attacks increase, clogged pipes and slow services are not going to be acceptable. Carriers sell services based on speed and reliability. Bad press about service outages and data compromises has long-lasting negative effects. Then add the compounding power of social networking to quickly spread the word about service issues, and you have a recipe for reputation disaster.

[You may also like: Securing the Customer Experience for 5G and IoT]

Always Under Attack

It’s safe for carriers to assume that their networks are always under attack. DDoS attack volume is escalating as hackers develop new and more technologically sophisticated ways to target carriers and their customers. In 2018, attack campaigns were primarily composed of multiple attack vectors, according to the Radware 2018–2019 Global Application & Network Security Report.

The report finds that “a bigger picture is likely to emerge about the need to deploy security solutions that not only adapt to changing attack vectors to mitigate evolving threats but also maintain service availability at the same time.”

[You may also like: Here’s How Carriers Can Differentiate Their 5G Offerings]

Attack vectors include:

  • SYN Flood
  • UDP Flood
  • DNS Flood
  • HTTP Application Flood
  • SSL Flood
  • Burst Attacks
  • Bot Attacks

Attackers prefer to keep a target busy by launching one or a few attacks at a time rather than firing the entire arsenal all at once. Carriers may be successful at blocking four or five attack vectors, but it only takes one failure for the damage to be done.

Read Radware’s 2018 Mobile Carrier Ebook, “Creating a Secure Climate for your Customers,” today.

Application Delivery

Application SLA: Knowing Is Half the Battle

April 4, 2019 — by Radware


Applications have come to define the digital experience. They empower organizations to create new customer-friendly services, unlock data and content and deliver it to users at the time and on the device they desire, and provide a differentiator from the competition.

Fueling these applications is the “digital core,” a vast plumbing infrastructure that includes networks, data repositories, Internet of Things (IoT) devices and more. If applications are a cornerstone of the digital experience, then managing and optimizing the digital core is the key to delivering these apps to the digitized user. When applications aren’t delivered efficiently, users can suffer from a degraded quality of experience (QoE), resulting in a tarnished brand, negatively affecting customer loyalty and lost revenue.

Application delivery controllers (ADCs) are ideally situated to ensure QoE, regardless of the operational scenario, by allowing IT to actively monitor and enforce application SLAs. The key is to understand the role ADCs play and the capabilities required to ensure the digital experience across various operational scenarios.

Optimize Normal Operations

Under normal operational conditions, ADCs optimize application performance, control and allocate resources to those applications and provide early warnings of potential issues.

[You may also like: 6 Must-Have Metrics in Your SLA]

For starters, any ADC should deliver web performance optimization (WPO) capabilities to turbocharge the performance of web-based applications. WPO transforms front-end optimization from a lengthy and complex process into an automated, streamlined function. Caching, compression, SSL offloading and TCP optimization are all key capabilities that enable faster communication between client and server while offloading CPU-intensive tasks from the application server.

Along those same lines, an ADC can serve as a “bridge” between the web browsers that deliver web-based applications and the backend servers that host them. For example, HTTP/2 is the new standard in web protocols. ADCs can serve as a gateway between web browsers that support HTTP/2 and backend servers that still don’t, optimizing performance to meet application SLAs.

Prevent Outages

Outages are few and far between, but when they occur, maintaining business continuity through server load balancing, cloud elasticity and disaster recovery is critical. ADCs play a critical role across all three and execute and automate these processes during a time of crisis.

[You may also like: Security Pros and Perils of Serverless Architecture]

If an application server fails, server load balancing should automatically redirect the client to another server. Likewise, in the event that an edge router or network connection to the data center fails, an ADC should automatically redirect to another data center, ensuring the web client can always access the application server even when there is a point of failure in the network infrastructure.
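
In code, the failover logic described above reduces to “skip unhealthy backends.” The toy pool below uses a TCP connect as its health probe, whereas real ADCs use richer active health checks; the hostnames are hypothetical.

```python
# Round-robin server pool that skips backends failing a basic health probe,
# escalating to disaster recovery when no backend responds.
import itertools, socket

SERVERS = [("app1.internal", 8080), ("app2.internal", 8080)]  # hypothetical hosts

def healthy(host: str, port: int, timeout: float = 0.5) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

_pool = itertools.cycle(SERVERS)

def pick_server():
    for _ in range(len(SERVERS)):
        host, port = next(_pool)
        if healthy(host, port):
            return host, port
    raise RuntimeError("no healthy backends: trigger disaster-recovery redirect")
```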

Minimize Degradation

Application SLA issues are most often the result of network degradation. The ecommerce industry is a perfect example. A sudden increase in network traffic during the holiday season can result in SLA degradation.

Leveraging server load balancing, ADCs provide elasticity by provisioning resources on demand. Additional servers are added to the network infrastructure to maintain QoE and, after the spike has passed, returned to an idle state for use elsewhere. Virtualized ADCs provide an additional benefit: scalability and isolation between vADC instances at the fault, management and network levels.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Finally, cyberattacks are the silent killers of application performance, as they typically create service degradation. ADCs play an integrative role in protecting applications to maintain SLAs at all times. They can prevent attack traffic from entering a network’s LAN and prevent volumetric attack traffic from saturating the internet pipe.

The ADC should be equipped with security capabilities that allow it to be integrated into the security/DDoS mitigation framework. This includes the ability to inspect traffic and network health parameters so that the ADC serves as an alarm system, signaling attack information to a DDoS mitigation solution. Other interwoven safety features should include integration with web application firewalls (WAFs), the ability to decrypt/encrypt SSL traffic and device/user fingerprinting.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.
