
Application Security, WAF, Web Application Firewall

Bot Manager vs. WAF: Why You Actually Need Both

June 6, 2019 — by Ben Zilberman


Over 50% of web traffic consists of bots, and 89% of organizations have suffered attacks against web applications. Websites and mobile apps are two of the biggest revenue drivers for businesses and help solidify a company’s reputation with tech-savvy consumers. However, these digital engagement tools are coming under increasing threat from an array of sophisticated cyberattacks, including bots.

While a percentage of bots are used to automate business processes and tasks, others are designed for malicious purposes, including account takeover, content scraping, payment fraud and denial-of-service attacks. Often, these attacks are carried out by competitors looking to undermine a company’s competitive advantage, steal information or drive up its online marketing costs.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

When Will You Need a Bot Detection Solution?

Sophisticated, next-generation bots can evade traditional security controls and go undetected by application owners. However, their impact can be noticed, and several indicators can alert a company to malicious bot activity.

Why a WAF Isn’t an Effective Bot Detection Tool

WAFs are primarily created to safeguard websites against application vulnerability exploitations like SQL injection, cross-site scripting (XSS), cross-site request forgery, session hijacking and other web attacks. WAFs typically feature basic bot mitigation capabilities and can block bots based on IPs or device fingerprinting.

However, WAFs fall short when facing more advanced, automated threats. Moreover, next-generation bots use sophisticated techniques to remain undetected, such as mimicking human behavior, abusing open-source tools or generating multiple violations in different sessions.
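To make that gap concrete, here is a minimal sketch (thresholds, addresses and names are hypothetical) of the per-source blocking a basic WAF applies. A botnet that spreads its requests across thousands of IP addresses, each pacing itself like a human, never trips either check.

```python
import time
from collections import defaultdict, deque

BLACKLISTED_IPS = {"203.0.113.7"}     # static blocklist (illustrative only)
MAX_REQUESTS_PER_MINUTE = 120         # hypothetical per-source threshold

request_log = defaultdict(deque)      # source IP -> recent request timestamps

def allow_request(source_ip, now=None):
    """WAF-style check: static IP blacklist plus a per-IP rate limit.

    A single aggressive bot trips this easily; a botnet spreading the same
    load across thousands of IPs stays under the per-source threshold.
    """
    now = now or time.time()
    if source_ip in BLACKLISTED_IPS:
        return False

    window = request_log[source_ip]
    while window and now - window[0] > 60:   # keep only the last 60 seconds
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False

    window.append(now)
    return True
```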

[You may also like: The Big, Bad Bot Problem]

Against these sophisticated threats, WAFs won’t get the job done.

The Benefits of Synergy

As the complexity of multi-vector cyberattacks increases, security systems must work in concert to mitigate these threats. In the case of application security, a combination of behavioral analytics to detect malicious bot activity and a WAF to protect against vulnerability exploitations and guard sensitive data is critical.

Moreover, many threats can be blocked at the network level before reaching the application servers. This not only reduces risk, but also reduces the processing loads on the network infrastructure by filtering malicious bot traffic.

Read “How to Evaluate Bot Management Solutions” to learn more.


Cloud Security

How to (Securely) Share Certificates with Your Cloud Security Provider

May 23, 2019 — by Ben Zilberman


Businesses today know they must handle sensitive data with extra care. But evolving cyber threats combined with regulatory demands can lead executives to hold their proverbial security cards close to their chest. For example, they may be reluctant to share encryption keys and certificates with a third party (i.e., cloud service providers), fearing data theft, MITM attacks or violations of local privacy regulations.

In turn, this can cause conflicts when integrating with cloud security services.

So how can businesses securely share this information as they transition to the cloud?

Encryption Basics

Today, nearly all web applications use HTTPS (encrypted traffic sent to and from the user). Any website with HTTPS service requires a signed SSL certificate. In order to communicate securely via encrypted traffic and complete the SSL handshake, the server requires three components: a private key, a public key (certificate) and a certificate chain.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

These are essential to accomplish the following objectives:

  • Authentication – The client authenticates the server identity.
  • Encryption – A symmetric session key is created by the client and server for session encryption.
  • Private keys stay private – The private key never leaves the server side and is not used as the session key by the client.
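As an illustration, here is a minimal Python sketch of a server loading those three components before accepting HTTPS connections. The file names are placeholders; the certificate file is assumed to bundle the server certificate with its chain, and the private key never leaves the server.

```python
import ssl

CERT_CHAIN_FILE = "server_cert_with_chain.pem"   # public certificate + chain (placeholder)
PRIVATE_KEY_FILE = "server_private_key.pem"      # stays on the server side (placeholder)

# Build a server-side TLS context from the certificate chain and private key.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=CERT_CHAIN_FILE, keyfile=PRIVATE_KEY_FILE)

# Wrapping a listening socket with this context lets the client authenticate
# the server via the chain, after which both sides derive a symmetric session
# key to encrypt the rest of the session.
```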

Hardware Security Module (HSM)

A Hardware Security Module is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. HSMs are particularly useful for those industries that require high security, businesses with cloud-native applications and global organizations. More specifically, common use cases include:

[You may also like: SSL Attacks – When Hackers Use Security Against You]

  • Federal Information Processing Standards (FIPS) compliance – For example, finance, healthcare and government applications that traditionally require FIPS-level security.
  • Native cloud applications – Cloud applications designed with security in mind might use managed HSM (or KMS) for critical workloads such as password management.
  • Centralized management – Global organizations with global applications need to secure and manage their keys in one place.

Managing the cryptographic key lifecycle requires a few fundamentals:

  • Using a random number generator to create/renew keys
  • Processing crypto-operations (encrypt/decrypt)
  • Ensuring keys never leave the HSM
  • Establishing secure access control (intrusion-resistant, tamper-evident, audit-logged, FIPS-validated appliances)
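A managed HSM or KMS service exposes those fundamentals as API calls, so the key material itself never leaves the module. The sketch below uses AWS KMS through boto3 purely as an illustration; the key alias is hypothetical and credentials are assumed to be configured in the environment.

```python
import boto3

kms = boto3.client("kms")
KEY_ID = "alias/my-app-key"   # hypothetical key alias; the key lives inside KMS/HSM

def encrypt_secret(plaintext: bytes) -> bytes:
    # The crypto-operation runs inside the service; only ciphertext is returned.
    response = kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext)
    return response["CiphertextBlob"]

def decrypt_secret(ciphertext: bytes) -> bytes:
    # KMS resolves the key from metadata embedded in the ciphertext blob.
    response = kms.decrypt(CiphertextBlob=ciphertext)
    return response["Plaintext"]
```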

The Challenge with Cloud Security Services…

One of the main challenges with cloud security services is the fact that reverse proxies need SSL keys. Managed security services, such as a cloud WAF service, force enterprises to hand over their private keys for terminating SSL connections. However, some can’t (FIPS-compliant businesses, for example) or simply don’t want to (because of trust and liability concerns, or due to multi-tenancy across customers). This is usually where the business relationship gets stuck.

[You may also like: Managing Security Risks in the Cloud]

…And the Solution!

Simply put, the solution is an HSM cloud service.

Wait, what?

Yes, integrating a cloud WAF service with an external HSM from a public cloud provider (such as AWS CloudHSM) is the answer. It can easily be set up over a VPN, with a cluster sharing the HSM credentials, per application or at large.

Indeed, CloudHSM is a popular solution, being both FIPS- and PCI DSS-compliant, and it is trusted by customers in the finance sector. By moving the last on-premises component to the cloud to reduce data center maintenance costs, organizations are effectively shifting toward consuming HSM as a service.

Such an integration supports any type of certificate (single domain, wildcard or SAN) and ensures minimal latency, as public cloud providers have PoPs all around the globe. The external HSM is used only once, and there is no limit on the number of certificates hosted on the service.

This is the recommended approach to help businesses overcome the concern of sharing private keys. Learn more about Radware Cloud WAF service here.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Security

Bot Managers Are a Cash-Back Program For Your Company

April 17, 2019 — by Ben Zilberman


In my previous blog, I briefly discussed what bot managers are and why they are needed. Today, we will conduct a short ROI exercise (perhaps the toughest task in information security!).

To recap: Bots generate a little over half of today’s internet traffic. Roughly half of that half (i.e., a quarter, for rusty ones like myself…) is generated by bad bots, a.k.a. automated programs targeting applications with the intent to steal information or disrupt service. Over the years, they have become so sophisticated that they can easily mimic human behavior, perform seemingly uncorrelated violations and essentially fool most application security solutions out there.


These bots affect each and every arm of your business. If you are in the e-commerce or travel industries, no need to tell you that… if you aren’t, go to your next C-level executive meeting and look for those who scratch their heads the most. Why? Because they can’t understand where the money goes, and why the predicted performance didn’t materialize as expected.

Let’s go talk to these C-Suite executives, shall we?

Chief Revenue Officer

Imagine you are selling products online–whether that’s tickets, hotel rooms or even 30-pound dog food bags–and this is your principal channel for revenue generation. Now, imagine that bots act as faux buyers and hold the inventory “hostage” so genuine customers cannot access it.

[You may also like: Will We Ever See the End of Account Theft?]

Sure, you can expire the checkout process every 10 minutes, but as this is an automated program, it will re-initiate the process in a split second. And what about CAPTCHA? Don’t assume CAPTCHA will weed out all bots; some bots activate after a human has solved it. How would you know whether you are communicating with a bot or a human? (Hint: you’d know if you had a bot management solution.)

Wondering why the movie theater is empty half the time even though it’s a hot release? Does everybody go to the theater across the street? No. Bots are to blame. And they cause direct, immediate and painful revenue loss.

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Chief Marketing Officer

Digital marketing tools, end-to-end automation of the customer journey, lead generation, and content syndication are great tools that help CMOs measure ROI and plan budgets. But what if the analyses they provide are false? What if half the clicks you are paying for are fictitious, and you were subject to a click-fraud campaign by bots? What if a competitor uses a bot to scrape registrant data from your landing pages? Unfortunately, bots often skew the analysis and can lead you to make wrong decisions that result in poor performance. Without bot management, you’re wasting money.

Chief Operations Officer/Chief Information Officer

Does your team complain that your network resources are in the “red zone,” close to maximum capacity, but your customer base isn’t growing at the same pace?

Blame bots.

[You may also like: Disaster Recovery: The Big, Bad Bot Problem]

Obviously some bots are “good,” like automated services that help accelerate and streamline your business, analyze data quickly and help you to make better decisions. However, bad bots (26% of the total traffic you are processing) put a load on your infrastructure and make your IT staff cry for more capacity. So you invest $200-500K in bigger firewalls, ADCs, and broader internet pipes, and upgrade your servers.

Next thing you know, a large DDoS attack from IoT botnets knocks everything down. If only you had invested $50k upfront to filter out the bad traffic from the get-go… That could’ve translated to $300k cash back!

Chief Information Security Officer

Every hour, a new security vendor knocks on your door with another solution for a 0.0001%-probability what-if scenario. Your budget is all over the place, spent on multiple protections and a complex architecture that tries to give you an actionable snapshot of what’s going on at every moment. At the end of the day, your task is to protect your company’s information assets. And there are so many ways to get a hold of those precious secrets!

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Bad bots are your enemy. They can scrape content, files, pricing, and intellectual property from your website. They can take over user accounts by cracking their passwords or launch a credential stuffing attack (and then retrieve their payment info). And they can take down service with DDoS attacks and hold up inventory, as I previously mentioned.

You could significantly reduce these risks if you could distinguish human from bot traffic (remember, sophisticated bots today can mimic human behavior and bypass all sorts of challenges, not only CAPTCHA) and, more than that, determine which bots are legitimate and which are malicious.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Bot management equals less risk, better posture, stable business, no budget increases or unexpected expenses. Cash back!

Chief Financial Officer

Your management peers could have made better investments, but now you have to clean up their mess. This can include paying legal fees and compensation to customers whose data was compromised, paying regulatory fines for coming up short in compliance, shelling out for a crisis management consultant firm, and absorbing costs associated with inventory hold up and downed service.

If you only had a bot management solution in place… so much cash back.

The Bottom Line

Run–do not walk–to your CEO and request a much-needed bot management solution. Not only does s/he have nothing to lose, s/he has a lot to gain.

* This week, Radware is integrating its bot management service with its cloud WAF for a complete, fully managed application security suite.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Attack Mitigation, Security

The Big, Bad Bot Problem

March 5, 2019 — by Ben Zilberman


Roughly half of today’s internet traffic is non-human (i.e., generated by bots). While some are good—like those that crawl websites for web indexing, content aggregation, and market or pricing intelligence—others are “bad.” These bad bots (roughly 26% of internet traffic) disrupt service, steal data and perform fraudulent activities. And they target all channels, including websites, APIs and mobile applications.

Bad Bots = Bad Business

Bots represent a problem for businesses, regardless of industry (though travel and e-commerce have the highest percentage of “bad” bot traffic). Nonetheless, many organizations, especially large enterprises, are focused on conventional cyber threats and solutions, and do not fully appreciate the impact bots can have on their business, which is quite broad and goes beyond security alone.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Indeed, the far-ranging business impacts of bots means “bad” bot attacks aren’t just a problem for IT managers, but for C-level executives as well. For example, consider the following scenarios:

  • Your CISO is exposed to account takeover, Web scraping, DoS, fraud and inventory hold-ups;
  • Your CRO is concerned when bots act as faux buyers, holding inventory for hours or days, representing a direct loss of revenue;
  • Your COO invests more in capacity to accommodate this growing demand of faux traffic;
  • Your CFO must compensate customers who were victims of fraud via account takeovers and/or stolen payment information, as well as any data privacy regulatory fines and/or legal fees, depending on scale;
  • Your CMO is dazzled by analytic tools and affiliate services skewed by malicious bot activity, leading to biased decisions.

The Evolution of Bots

For those organizations that do focus on bots, the overwhelming majority (79%, according to Radware’s research) can’t definitively distinguish between good and bad bots, and sophisticated, large-scale attacks often go undetected by conventional mitigation systems and strategies.

[You may also like: Are Your Applications Secure?]

To complicate matters, bots evolve rapidly. They are now in their 4th generation of sophistication, with evasion techniques so advanced they require the most powerful technology to combat them.

  • Generation 1 – Basic scripts making cURL-like requests from a small number of IP addresses. These bots can’t store cookies or execute JavaScript and can be easily detected and mitigated by blacklisting their IP address and User-Agent combination.
  • Generation 2 – These bots leverage headless browsers such as PhantomJS and can store cookies and execute JavaScript. They require a more sophisticated, IP-agnostic approach such as device fingerprinting, which collects their unique combination of browser and device characteristics, such as the OS, JavaScript variables, and session and cookie information (a minimal fingerprinting sketch follows this list).
  • Generation 3 – These bots use full-fledged browsers and can simulate basic human-like patterns during interactions, like simple mouse movements and keystrokes. This behavior makes them difficult to detect; these bots normally bypass traditional security solutions, requiring a more sophisticated approach than blacklisting or fingerprinting.
  • Generation 4 – These bots are the most sophisticated. They use more advanced human-like interaction characteristics (so detection based on shallow interaction signals yields false positives) and are distributed across tens of thousands of IP addresses. They can carry out various violations from various sources at various (random) times, requiring a high level of intelligence, correlation and contextual analysis.
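To illustrate the step up from generation 1 to generation 2, here is a minimal fingerprinting sketch; the attribute names are hypothetical stand-ins for the signals a real solution collects. Instead of keying on the source IP, it hashes a combination of client characteristics, so a bot that rotates IPs but reuses the same headless-browser profile still maps to a single fingerprint. Generations 3 and 4 defeat this technique on its own, which is why behavioral analysis is needed on top.

```python
import hashlib

def device_fingerprint(request_headers, js_attributes):
    """Hash a combination of browser/device characteristics into one identifier.

    `js_attributes` stands in for values collected client-side by an injected
    script (screen size, timezone, canvas hash, etc.) -- illustrative only.
    """
    signals = [
        request_headers.get("User-Agent", ""),
        request_headers.get("Accept-Language", ""),
        str(js_attributes.get("screen_resolution", "")),
        str(js_attributes.get("timezone_offset", "")),
        str(js_attributes.get("canvas_hash", "")),
    ]
    return hashlib.sha256("|".join(signals).encode()).hexdigest()

# Requests from rotating IPs that share one fingerprint can then be rate-limited
# or challenged together, which IP/User-Agent blacklisting alone cannot do.
```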

[You may also like: Attackers Are Leveraging Automation]

It’s All About Intent

Organizations must make an accurate distinction between human and bot-based traffic, and even further, distinguish between “good” and “bad” bots. Why? Because sophisticated bots that mimic human behavior bypass CAPTCHA and other challenges, dynamic IP attacks render IP-based protection ineffective, and third- and fourth-generation bots demand behavioral analysis capabilities. The challenge is detection at high precision, so that genuine users aren’t affected.

To ensure precision in detecting and classifying bots, the solution must identify the intent of the attack. Yesterday, Radware announced its Bot Manager solution, the result of its January 2019 acquisition of ShieldSquare, which does just that. By leveraging patented Intent-based Deep Behavior Analysis, Radware Bot Manager detects the intent behind attacks and provides accurate classifications of genuine users, good bots and bad bots—including those pesky fourth generation bots. Learn more about it here.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Security

HTTPS: The Myth of Secure Encrypted Traffic Exposed

February 5, 2019 — by Ben Zilberman


The S in HTTPS is supposed to mean that encrypted traffic is secure. For attackers, it just means a larger attack surface from which to launch assaults on applications and exploit their security vulnerabilities. How should organizations respond?

Most web traffic is encrypted to provide better privacy and security. As of 2018, over 70% of webpages were loaded over HTTPS, and Radware expects this trend to continue until nearly all web traffic is encrypted. The major drivers pushing adoption rates are the availability of free SSL certificates and the perception that clear-text traffic is insecure.

While encrypting traffic is a vital practice for organizations, cyber criminals are not necessarily deterred by it. They are looking for ways to take advantage of encrypted traffic as a platform from which to launch attacks that can be difficult to detect and mitigate, especially at the application layer. As encrypted applications grow more complex, the potential attack surface grows larger. Organizations need to incorporate protection of the application layer as part of their overall network security strategies. Results from the global industry survey revealed a 10% increase in encrypted attacks on organizations in 2018.

Encrypted Application Layers

When planning protection for encrypted applications, it is important to consider all of the layers that are involved in delivering an application. It is not uncommon for application owners to focus on protecting the encrypted application layer while overlooking the lower layers in the stack which might be vulnerable. In many cases, protection selected for the application layer may itself be vulnerable to transport-layer attacks.

To ensure applications are protected, organizations need to analyze the following Open Systems Interconnection (OSI) layers:

  • Transport — In most encrypted applications, the underlying transport is TCP. TCP attacks come in many forms and volumes, so protection must be resilient enough to safeguard applications from attacks on the TCP layer. Some applications now use QUIC, which uses UDP as the underlying layer and adds reflection and amplification risks to the mix.
  • Session — The SSL/TLS layer itself is vulnerable. Once an SSL/TLS session is created, the server invests about 15 times more compute power than the client, which makes the session layer particularly vulnerable and attractive to attackers.
  • Application — Application attacks are the most complex type of attack, and encryption only makes it harder for security solutions to detect and mitigate them. Attackers often select specific areas in applications to generate a high request-to-load ratio, may attack several resources simultaneously to make detection harder, or may mimic legitimate user behavior in various ways to bypass common application security solutions. The size of an attack surface is determined by the application design. For example, in a login attack, botnets perform multiple login attempts from different sources to try to stress the application. The application login is always encrypted and requires resources on the application side such as a database, authentication gateway or identity service invocation. The attack does not require a high volume of traffic to affect the application, making it very hard to detect.

[You may also like: SSL Attacks – When Hackers Use Security Against You]

Environmental Aspects

Organizations also need to consider the overall environment and application structure because it greatly affects the selection of the ideal security design based on a vulnerability assessment.

  • Content Delivery Network — Applications using a content delivery network (CDN) create a challenge for security controls deployed at the origin. Technologies that use the source IP for analyzing client application behavior only see the source IP of the CDN. There is a risk that the solutions will either over-mitigate and disrupt legitimate users or become ineffective. High rates of false positives show that protection based on source IP addresses alone is pointless. Instead, when using a CDN, the selected security technology should have the right measures to analyze attacks that originate behind it, including device fingerprinting or extraction of the original source from the application headers (see the sketch after this list).
  • Application Programming Interface — Application programming interface (API) usage is common in all applications. According to Radware’s The State of Web Application Security report, a third of attacks against APIs aim to create a denial-of-service state. The security challenge here comes from the legitimate client side. Many solutions rely on various active user validation techniques to distinguish legitimate users from attackers. These techniques require that a real browser reside at the client. In the case of an API, there is often no real browser at the client side, so the behavior and the legitimate responses to various validation challenges are different.
  • Mobile Applications — Like APIs, the client side is not a browser for a mobile application and cannot be expected to behave and respond like one. Mobile applications pose a challenge because they rely on different operating systems and use different browsers. Many security solutions were created based on former standards and common tools and have not yet fully adapted. The fact that mobile apps process a high amount of encrypted traffic increases the capacity and security challenges.
  • Directionality — Many security solutions only inspect inbound traffic to protect against availability threats. Directionality of traffic has significant implications on the protection efficiency because attacks usually target the egress path of the application. In such cases, there might not be an observed change in the incoming traffic profile, but the application might still become unavailable. An effective security solution must process both directions of traffic to protect against sophisticated application attacks.
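For the CDN case above, one common (simplified) measure is to recover the original client address from the X-Forwarded-For header appended by the CDN rather than trusting the connection's source IP. The trusted-proxy addresses below are hypothetical, and the header must only be honored when the connection really comes from a trusted edge.

```python
TRUSTED_PROXIES = {"198.51.100.10", "198.51.100.11"}   # hypothetical CDN/proxy IPs

def original_client_ip(connection_ip, xff_header):
    """Walk X-Forwarded-For right to left, skipping known proxy addresses."""
    if connection_ip not in TRUSTED_PROXIES or not xff_header:
        # The header is attacker-controlled unless a trusted proxy sent it.
        return connection_ip
    hops = [ip.strip() for ip in xff_header.split(",")]
    for ip in reversed(hops):
        if ip not in TRUSTED_PROXIES:
            return ip
    return connection_ip

# Example: original_client_ip("198.51.100.10", "203.0.113.5, 198.51.100.11")
# returns "203.0.113.5", the real client behind the CDN.
```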

[You may also like: Are Your Applications Secure?]

Regulatory Limitations

A major selection criterion for security solutions is regulatory compliance. In the case of encrypted attacks, compliance requirements examine whether traffic is decrypted, what parts of the traffic are decrypted and where the decryption happens. The governing paradigm has always been that the more intrusive the solution, the more effective the security, but that is not necessarily the case here. Solutions show different levels of effectiveness for the same intrusiveness.

Encryption Protocols

The encryption protocol in use has implications for how security can be applied and what types of vulnerabilities it presents. Specifically, TLS 1.3 enhances security from the data privacy perspective but is expected to create challenges for security solutions that rely on eavesdropping on the encrypted connection. Users planning to upgrade to TLS 1.3 should consider the future resiliency of their solutions.

[You may also like: Adopt TLS 1.3 – Kill Two Birds with One Stone]

Attack Patterns

Determining attack patterns is the most important undertaking that organizations must master. Because there are so many layers that are vulnerable, attackers can easily change their tactics mid-attack. The motivation is normally twofold: first, inflicting maximum impact with minimal cost; second, making detection and mitigation difficult.

  • Distribution — The level of attack distribution is very important to the attacker. It impacts the variety of vectors that can be used and makes the job harder for the security controls. Most importantly, the more distributed the attack, the less traffic each attacking source has to generate. That way, behavior can better resemble legitimate users. Gaining control of a large botnet used to be difficult to do and extremely costly. With the growth in the IoT and corresponding IoT botnets, it is common to come across botnets consisting of hundreds of thousands of bots.
  • Overall Attack Rates — The overall attack traffic rate varies from one vector to another. Normally, the lower the layer, the higher the rate. At the application layer, attackers are able to generate low-rate attacks, which still generate significant impact. Security solutions should be able to handle both high- and low-rate attacks, without compromising user experience and SLA.
  • Rate per Attacker — Many security solutions in the availability space rely on the rate per source to detect attackers. This method is not always effective as highly distributed attacks proliferate.
  • Connection Rates — Available attack tools today can be divided into two major classes based on their connection behavior. The first class includes tools that open a single connection and generate many requests on it. The second includes tools that open many connections with only a single request or very few requests on each connection. Security tools that can analyze connection behavior are more effective in discerning legitimate users from attackers (see the sketch after this list).
  • Session Rates — SSL/TLS session behavior has various distinct behavioral characteristics in legitimate users and browsers. The major target is to optimize performance and user experience. Attack traffic does not usually fully adhere to those norms, so its SSL session behavior is different. The ability to analyze encryption session behavior contributes to protecting both the encryption layer and the underlying application layer.
  • Application Rates — Because the application is the most complex part to attack, attackers have the most degree of freedom when it comes to application behavior. Attack patterns vary greatly from one attack to another in terms of how they appear on application behavior analyses. At the same time, the rate of change in the application itself is very high, such that it cannot be followed manually. Security tools that can automatically analyze a large variety of application aspects and, at the same time, adapt to changes quickly are expected to be more effective in protecting from encrypted application attacks.
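As a simplified illustration of the connection-behavior point above, a detector can bucket clients by how many requests they issue per connection; attack tools tend to sit at the extremes, while real browsers reuse a connection for a moderate number of requests. The thresholds that make this actionable are deployment-specific and not shown here.

```python
from collections import defaultdict

def avg_requests_per_connection(events):
    """`events` is an iterable of (client_id, connection_id) pairs, one per request."""
    per_connection = defaultdict(int)
    for client_id, connection_id in events:
        per_connection[(client_id, connection_id)] += 1

    per_client = defaultdict(list)
    for (client_id, _), count in per_connection.items():
        per_client[client_id].append(count)

    # A client averaging ~1 request per connection (many connections, few requests)
    # or hundreds of requests on a single connection looks more like a tool
    # than a browser.
    return {client: sum(counts) / len(counts) for client, counts in per_client.items()}
```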

End-to-End Protection

Protection from encrypted availability attacks is becoming a mandatory requirement for organizations. At the same time, it is one of the more complex tasks to thoroughly perform without leaving blind spots. When considering a protection strategy, it is important to take into account various aspects of the risk and to make sure that, with all good intentions, the side door is not left open.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Application Security, Attack Mitigation, Attack Types & Vectors

How Cyberattacks Directly Impact Your Brand: New Radware Report

January 15, 2019 — by Ben Zilberman


Whether you’re an executive or practitioner, brimming with business acumen or tech savviness, your job is to preserve and grow your company’s brand. Brand equity relies heavily on customer trust, which can take years to build and only moments to demolish. 2018’s cyber threat landscape demonstrates this clearly; the delicate relationship between organizations and their customers is in hackers’ cross hairs and suffers during a successful cyberattack. Make no mistake: Leaders who undervalue customer trust–who do not secure an optimized customer experience or adequately safeguard sensitive data–will feel the sting in their balance sheet, brand reputation and even their job security.

Radware’s 2018-2019 Global Application and Network Security report builds upon a worldwide industry survey encompassing 790 business and security executives and professionals from different countries, industries and company sizes. It also features original Radware threat research, including an analysis of emerging trends in both defensive and offensive technologies. Here, I discuss key takeaways.

Repercussions of Compromising Customer Trust

Without question, cyberattacks are a viable threat to operating expenditures (OPEX). This past year alone, the average estimated cost of an attack grew by 52% and now exceeds $1 million (the number of estimations above $1 million increased 60%). For those organizations that formalized a real calculation process rather than merely estimate the cost, that number is even higher, averaging $1.67 million.

Despite these mounting costs, three in four have no formalized procedure to assess the business impact of a cyberattack against their organization. This becomes particularly troubling when you consider that most organizations have experienced some type of attack within the course of a year (only 7% of respondents claim not to have experienced an attack at all), with 21% reporting daily attacks, a significant rise from 13% last year.

There is quite a range in cost evaluation across different verticals. Those who report the highest damage are retail and high-tech, while education stands out with its extremely low financial impact estimation.

Repercussions can vary: 43% report a negative customer experience, 37% suffered brand reputation loss and one in four lost customers. The most common consequence was loss of productivity, reported by 54% of survey respondents. For small-to-medium sized businesses, the outcome can be particularly severe, as these organizations typically lack sufficient protection measures and know-how.

It would behoove all businesses, regardless of size, to consider the following:

  • Direct costs: Extended labor, investigations, audits, software patches development, etc.
  • Indirect costs: Crisis management, fines, customer compensation, legal expenses, share value
  • Prevention: Emergency response and disaster recovery plans, hardening endpoints, servers and cloud workloads

Risk Exposure Grows with Multi-Dimensional Complexity

As the cost of cyberattacks grows, so does the complexity. Information networks today are amorphous. In public clouds, they undergo a constant metamorphosis, where instances of software entities and components are created, run and disappear. We are marching toward the no-visibility era, and as complexity grows it will become harder for business executives to analyze potential risks.

The increase in complexity immediately translates to a larger attack surface, or in other words, greater risk exposure. DevOps organizations benefit from advanced automation tools that set up environments in seconds, allocate the necessary resources, and provision and integrate with each other through REST APIs, providing a faster time to market for application services with minimal human intervention. However, these tools process sensitive data and cannot defend themselves from attacks.

Protect your Customer Experience

The report found that the primary goal of cyber-attacks is service disruption, followed by data theft. Cyber criminals understand that service disruptions result in a negative customer experience, and to this end, they utilize a broad set of techniques. Common methods include bursts of high traffic volume, usage of encrypted traffic to overwhelm security solutions’ resource consumption, and crypto-jacking that reduces the productivity of servers and endpoints by enslaving their CPUs for the sake of mining cryptocurrencies. Indeed, 44% of organizations surveyed suffered either ransom attacks or crypto-mining by cyber criminals looking for easy profits.

What’s more, attack tools became more effective in the past year; the number of outages grew by 15% and more than half saw slowdowns in productivity. Application layer attacks—which cause the most harm—continue to be the preferred vector for DDoSers over the network layer. It naturally follows, then, that 34% view application vulnerabilities as the biggest threat in 2019.

Essential Protection Strategies

Businesses understand the seriousness of the changing threat landscape and are taking steps to protect their digital assets. However, some tasks – such as protecting a growing number of cloud workloads, or discerning a malicious bot from a legitimate one – require leveling the defense up. Security solutions must support and enable the business processes, and as such, should be dynamic, elastic and automated.

Analyzing the 2018 threat landscape, Radware recommends the following essential security solution capabilities:

  1. Machine Learning: As hackers leverage advanced tools, organizations must minimize false positive calls in order to optimize the customer experience. This can be achieved by machine-learning capabilities that analyze big data samples for maximum accuracy (nearly half of survey respondents point at security as the driver to explore machine-learning based technologies).
  2. Automation: When so many processes are automated, the protected objects constantly change, and attackers quickly change lanes trying different vectors every time. As such, a security solution must be able to immediately detect and mitigate a threat. Solutions based on machine learning should be able to auto tune security policies.
  3. Real-Time Intelligence: Cyber delinquents can disguise themselves in many forms. Compromised devices sometimes make legitimate requests, while other times they are malicious. Machines coming from behind a CDN or NAT cannot be blocked based on IP reputation, and generally, static heuristics are becoming useless. Instead, actionable, accurate real-time information can reveal malicious activity as it emerges and protect businesses and their customers – especially when relying on analysis and qualification of events from multiple sources.
  4. Security Experts: Keep human supervision for the moments when the pain is real. Human intervention is required in advanced attacks or when the learning process requires tuning. Because not every organization can maintain the know-how in-house at all times, having an expert from a trusted partner or a security vendor on-call is a good idea.

It is critical for organizations to incorporate cybersecurity into their long-term growth plans. Securing digital assets can no longer be delegated solely to the IT department. Rather, security planning needs to be infused into new product and service offerings, development plans and new business initiatives. CEOs and executive teams must lead the way in setting the tone and investing in securing their customers’ experience and trust.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Application Security, Attack Mitigation, DDoS Attacks, Security, WAF

Protecting Applications in a Serverless Architecture

November 8, 2018 — by Ben Zilberman


Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. Until recently, information security architecture was relatively simple; you built a fortress around a server containing sensitive data, and deployed security solutions to control the flow of users accessing and leaving that server.

But how do you secure a server-less environment?

The Basics of Serverless Architecture

Serverless architecture is an emerging trend in cloud-hosted environments and refers to applications that significantly depend on third-party services (known as Backend-as-a-Service or “BaaS”) or on custom code that’s run in ephemeral containers (known as Function-as-a-Service or “FaaS”). And it is significantly more cost effective than buying or renting servers.

The rapid adoption of micro-efficiency-based pricing models (a.k.a. PPU, or pay-per-use) pushes public cloud providers to introduce a business model that meets this requirement. Serverless computing helps providers optimize that model by dynamically managing the allocation of machine resources. As a result, organizations pay based on the actual amount of resources their applications consume, rather than ponying up for pre-purchased units of workload capacity (which is usually more than they actually use).

What’s more, going serverless also frees developers and operators from the burdens of provisioning the cloud workload and infrastructure. There is no need to deploy operating systems and patch them, no need to install and configure web servers, and no need to set up or tune auto-scaling policies and systems.

[You may also like: Application Delivery and Application Security Should be Combined]

Security Implications of Going Serverless

The new serverless model forces a complete change in architecture – nano-services composed of many software ‘particles.’ The operational unit is a set of function containers that execute REST API functions, which are invoked upon a relevant client-side event (a minimal handler sketch follows the list below). These function instances are created, run and then terminated. During their run time, they receive, modify and send information that organizations want to monitor and protect. The protection should be dynamic and swift:

  • There is no perimeter or OS to secure
  • Agents and a persistent footprint become redundant.
  • To optimize the business model, the solution must be scalable and ephemeral; automation is the key to success
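As a minimal illustration of that operational unit, here is a hypothetical Lambda-style function handler: it is spun up for a single REST API event, holds no state between invocations, and leaves no persistent host on which to install an agent.

```python
import json

def handler(event, context):
    """Hypothetical FaaS entry point invoked per REST API event.

    There is no server, OS or agent to manage here; anything the function
    needs (storage, secrets) is reached through other managed services.
    """
    body = json.loads(event.get("body") or "{}")
    item_id = body.get("item_id")

    # ... process the request, typically reading/writing external storage ...

    return {
        "statusCode": 200,
        "body": json.dumps({"item_id": item_id, "status": "processed"}),
    }
```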

If we break down our application into components that run in a serverless model, the server that runs the APIs uses different layers of code to parse the requests, essentially enlarging the attack surface. However, this isn’t an enterprise problem anymore; it’s the cloud provider’s. Unfortunately, even cloud providers sometimes lag in patch management and hardening workloads. Will your DevOps team read all of the cloud provider documentation in detail? Most likely, they’ll go with generic permissions. If you want to do something right, you better do it yourself.

Serverless computing doesn’t eradicate all traditional security concerns. Application-level vulnerabilities can still be exploited—with attacks carried out by human hackers or bots—whether they are inherent in the FaaS infrastructure or in the developer function code.

When using a FaaS model, the lack of local persistent storage encourages data transfer between the function and the different persistent storage services (e.g., S3 and DynamoDB by AWS) instead. Additionally, each function eventually processes data received from storage, the client application or from a different function. Every time it’s moved, it becomes vulnerable to leakage or tampering.
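One way to reduce that exposure, sketched below with hypothetical bucket and key names, is envelope encryption: the function requests a one-time data key from a managed KMS, encrypts the payload locally, and stores only ciphertext plus the wrapped key. This is an illustrative pattern, not a complete solution.

```python
import base64

import boto3
from cryptography.fernet import Fernet   # third-party package: cryptography

kms = boto3.client("kms")
s3 = boto3.client("s3")
BUCKET = "my-app-data"            # hypothetical bucket
KMS_KEY_ID = "alias/my-app-key"   # hypothetical master key alias

def put_encrypted(object_key, payload: bytes):
    # Ask KMS for a one-time data key; the plaintext copy lives only in memory.
    data_key = kms.generate_data_key(KeyId=KMS_KEY_ID, KeySpec="AES_256")
    fernet = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
    s3.put_object(
        Bucket=BUCKET,
        Key=object_key,
        Body=fernet.encrypt(payload),
        # Store the wrapped key alongside the object so it can be decrypted later.
        Metadata={"wrapped-key": base64.b64encode(data_key["CiphertextBlob"]).decode()},
    )
```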

In such an environment, it is impossible to track all potential and actual security events. One can’t follow each function’s operation to prevent it from accessing the wrong resources. Visibility and forensics must be automated and perform real-time contextual analysis. But the question is not whether serverless is more or less secure; rather, the question is how to secure it when your organization goes there.

[You may also like: Web Application Security in a Digitally Connected World]

A New Approach

Simply put, going serverless requires a completely different security approach—one that is dynamic, elastic, and real-time. The security components must be able to move around at the same pace as the applications, functions and data they protect.

First things first: To help avoid code exploitation (which is what attacks boil down to), use encryption and monitor the function’s activity and data access so it has, by default, minimum permissions. Abnormal function behavior, such as unexpected access to data or unreasonable traffic flow, must be analyzed.

Next, consider additional measures, like a web application firewall (WAF), to secure your APIs. While an API gateway can manage authentication and enforce JSON and XML validity checks, not all API gateways support schema and structure validation, nor do they provide full coverage of OWASP Top 10 vulnerabilities like a WAF does. WAFs apply dozens of protection measures on both inbound and outbound traffic, which is parsed to detect protocol manipulations. Client-side inputs are validated and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
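To show what schema and structure validation means in practice, here is a small sketch using the jsonschema package; the payment schema is hypothetical. A WAF or gateway performing this kind of check rejects extra fields, wrong types and oversized values before they reach the function.

```python
from jsonschema import ValidationError, validate   # third-party package: jsonschema

# Hypothetical schema for a payment function's input.
PAYMENT_SCHEMA = {
    "type": "object",
    "properties": {
        "account_id": {"type": "string", "maxLength": 36},
        "amount": {"type": "number", "minimum": 0},
        "currency": {"type": "string", "enum": ["USD", "EUR", "GBP"]},
    },
    "required": ["account_id", "amount", "currency"],
    "additionalProperties": False,
}

def is_valid_payment_request(payload):
    try:
        validate(instance=payload, schema=PAYMENT_SCHEMA)
        return True
    except ValidationError:
        # Anything outside the declared structure is dropped before processing.
        return False
```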

[You may also like: Taking Stock of Application-Layer Security Threats]

In addition to detecting known attacks, for the purposes of zero-day attack protection and comprehensive application security, a high-end WAF allows strict policy enforcement where each function can have its own parameters whitelisted—the recommended approach when deploying a function processing sensitive data or mission-critical business logic.

And—this is critical—continue to mitigate for DDoS attacks. Going serverless does not eliminate the potential for falling susceptible to these attacks, which have changed dramatically over the past few years. Make no mistake: With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Security, Attack Mitigation, Security, Web Application Firewall

Are Your Applications Secure?

October 3, 2018 — by Ben Zilberman


Executives express mixed feelings and a surprisingly high level of confidence in Radware’s 2018 Web Application Security Report. 

As we close out a year of headline-grabbing data breaches (British Airways, Under Armour, Panera Bread), the introduction of GDPR and the emergence of new application development architectures and frameworks, Radware examined the state of application security in its latest report. This global survey among executives and IT professionals yielded insights about threats, concerns and application security strategies.

The common trend across a variety of application security challenges, including data breaches, bot management, DDoS mitigation, API security and DevSecOps, was the high level of confidence reported by those surveyed. 90% of all respondents across regions reported confidence that their security model is effective at mitigating web application attacks.

Attacks against applications are at a record high and sensitive data is shared more than ever. So how can execs and IT pros have such confidence in the security of their applications?

To get a better understanding, we researched the current threat landscape and application protection strategies organizations currently take. Contradicting evidence stood out immediately:

  • 90% suffered attacks against their applications
  • One in three shared sensitive data with third parties
  • 33% allowed third parties to create/modify/delete data via APIs
  • 67% believed a hacker can penetrate their network
  • 89% saw web-scraping as a significant threat to their IP
  • 83% run bug bounty programs to find vulnerabilities they miss

There were quite a few threats to application services that were not properly addressed, challenging traditional security approaches. In parallel, the adoption of emerging frameworks and architectures, which rely on numerous integrations with multiple services, adds more complexity and increases the attack surface.

Current Threat Landscape

Last November, OWASP released a new list of top 10 vulnerabilities in web applications. Hackers continue to use injections, XSS, and a few old techniques such as CSRF, RFI/LFI and session hijacking to exploit these vulnerabilities and gain unauthorized access to sensitive information. Protection is becoming more complex as attacks come through trusted sources such as a CDN, encrypted traffic, or APIs of systems and services we integrate with. Bots behave like real users and bypass challenges such as CAPTCHA, IP-based detection and others, making it even harder to secure and optimize the user experience.
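To make the injection case concrete, here is a minimal sketch (table and field names are hypothetical) contrasting a string-built SQL query with a parameterized one; this is the class of flaw that both secure coding and WAF rules target.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")

def find_user_vulnerable(name):
    # Attacker-controlled input becomes part of the SQL statement itself:
    # passing "x' OR '1'='1" returns every row in the table.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameter binding keeps the input as data, never as SQL syntax.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```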

[You might also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

Web application security solutions must be smarter and address a broad spectrum of vulnerability exploitation scenarios. On top of protecting the application from these common vulnerabilities, they have to protect APIs and mitigate DoS attacks, manage bot traffic and distinguish between legitimate bots (search engines, for instance) and bad ones like botnets, web scrapers and more.

DDoS Attacks

63% suffered a denial-of-service attack against their application. DoS attacks render applications inoperable by exhausting the application resources. Buffer overflow and HTTP floods were the most common types of DoS attacks, and this form of attack is more common in APAC. 36% find HTTP/Layer-7 DDoS the most difficult attack to mitigate. Half of the organizations take rate-based approaches (such as limiting the number of requests from a certain source or simply buying a rate-based DDoS protection solution), which are ineffective once the threshold is exceeded and real users can’t connect.

API Attacks

APIs simplify the architecture and delivery of application services and make digital interactions possible. Unfortunately, they also introduce a wide range of risks and vulnerabilities, serving as a backdoor for hackers to break into networks. Through APIs, data is exchanged over HTTP, where both parties receive, process and share information. A third party is theoretically able to insert, modify, delete and retrieve content from applications. This is nothing but an invitation to attack:

  • 62% of respondents did not encrypt data sent via API
  • 70% of respondents did not require authentication
  • 33% allowed third parties to perform actions (GET/ POST / PUT/ DELETE)

Attacks against APIs:

  • 39% Access violations
  • 32% Brute-force
  • 29% Irregular JSON/XML expressions
  • 38% Protocol attacks
  • 31% Denial of service
  • 29% Injections

Bot Attacks

The amount of both good and bad bot traffic is growing. Organizations are forced to increase network capacity and need to be able to precisely tell friend from foe so that both customer experience and security are maintained. Surprisingly, 98% claimed they can make such a distinction. However, a similar share sees web scraping as a significant threat, and 87% were impacted by such an attack over the past 12 months, despite the variety of methods companies use to overcome the challenge – CAPTCHA, in-session termination, IP-based detection or even buying a dedicated anti-bot solution.

Impact of Web-scraping:

  • 50% gathered pricing information
  • 43% copied website
  • 42% theft of intellectual property
  • 37% inventory queued/being held by bots
  • 34% inventory held
  • 26% inventory bought out

Data Breaches

Multinational organizations keep close tabs on what kinds of data they collect and share. However, almost every other business (46%) reports having suffered a breach. On average an organization suffers 16.5 breach attempts every year. Most (85%) take between hours and days to discover. Data breaches are the most difficult attack to detect, as well as mitigate, in the eyes of our survey respondents.

How do organizations discover data breaches?

  • 69% Anomaly detection tools/SIEM
  • 51% Darknet monitoring service
  • 45% Information was leaked publicly
  • 27% Ransom demand

IMPACT OF ATTACKS

Negative consequences such as loss of reputation, customer compensation, legal action (more common in EMEA), churn (more common in APAC), stock price drops (more common in AMER) and executives who lose their jobs are quick to follow a successful attack, while the process of repairing the damage to a company’s reputation is long and not always successful. About half admitted having encountered such consequences.

Securing Emerging Application Development Frameworks

The rapidly growing number of applications and their distribution across multiple environments require adjustments that lead to variations whenever a change to the application is needed. It is nearly impossible to deploy and maintain the same security policy efficiently across all environments. Our research shows that ~60% of all applications undergo changes on a weekly basis. How can the security team keep up?

While 93% of organizations use a web application firewall (WAF), only three in ten use a WAF that combines both positive and negative security models for effective application protection.

Technologies Used By DevOps

  • 63% – DevOps and Automation Tools
  • 48% – Containers (3 in 5 use Orchestration)
  • 44% – Serverless / FaaS
  • 37% – Microservices

Among the respondents that used micro-services, one-half rated data protection as the biggest challenge, followed by availability assurance, policy enforcement, authentication, and visibility.

Summary

Is there a notion that organizations are confident? Yes. Is that a false sense of security? Yes. Attacks are constantly evolving and security measures are not foolproof. Having application security tools and processes in place may provide a sense of control but they are likely to be breached or bypassed sooner or later. Another question we are left with is whether senior management is fully aware of the day to day incidents. Rightfully so, they look to their internal teams tasked with application security to manage the issue, but there seems to be a disconnect between their perceptions of the effectiveness of their organizations’ application security strategies and the actual exposure to risk.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Security, WAF

WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10

July 12, 2018 — by Ben Zilberman


Looking in the rearview mirror

The application threat landscape has rapidly evolved. For years, users consumed applications over the internet using a common tool: the web browser. At any point in time, there were two to five web browsers to support, and the variety of application development and testing frameworks was relatively limited. For instance, almost all databases were built using the SQL language. Unfortunately, it was not long before hackers began to abuse applications in order to steal, delete and modify data. They could take advantage of applications in different ways, primarily by tricking the application user, injecting or remotely executing code. Shortly after, commercial solutions known as web application firewalls (WAFs) emerged, and the community responded by creating the Open Web Application Security Project (OWASP) to set and maintain standards and methodologies for secure applications.