
Application Security

HTTPS: The Myth of Secure Encrypted Traffic Exposed

February 5, 2019 — by Ben Zilberman


The S in HTTPS is supposed to mean that encrypted traffic is secure. For attackers, it just means a larger attack surface from which to launch assaults on applications and exploit their security vulnerabilities. How should organizations respond?

Most web traffic is encrypted to provide better privacy and security. By 2018, over 70% of webpages were loaded over HTTPS. Radware expects this trend to continue until nearly all web traffic is encrypted. The major drivers pushing adoption rates are the availability of free SSL certificates and the perception that clear traffic is insecure.

While encrypting traffic is a vital practice for organizations, cyber criminals are not necessarily deterred by it. They look for ways to take advantage of encrypted traffic as a platform from which to launch attacks that can be difficult to detect and mitigate, especially at the application layer. As encrypted applications grow more complex, the potential attack surface grows larger. Organizations need to incorporate protection of the application layer into their overall network security strategies. Results from Radware’s global industry survey revealed a 10% increase in encrypted attacks on organizations in 2018.

Encrypted Application Layers

When planning protection for encrypted applications, it is important to consider all of the layers that are involved in delivering an application. It is not uncommon for application owners to focus on protecting the encrypted application layer while overlooking the lower layers in the stack which might be vulnerable. In many cases, protection selected for the application layer may itself be vulnerable to transport-layer attacks.

To ensure applications are protected, organizations need to analyze the following Open Systems Interconnection (OSI) layers:

  • Transport — In most encrypted applications, the underlying transport is TCP. TCP attacks come in many forms and volumes, so protection must be resilient enough to shield applications from attacks on the TCP layer. Some applications now use QUIC, which runs over UDP and adds reflection and amplification risks to the mix.
  • Session — The SSL/TLS layer itself is vulnerable. Once an SSL/TLS session is created, the server invests about 15 times more compute power than the client, which makes the session layer particularly vulnerable and attractive to attackers.
  • Application — Application attacks are the most complex type of attack, and encryption only makes it harder for security solutions to detect and mitigate them. Attackers often select specific areas in applications to generate a high request-to-load ratio, may attack several resources simultaneously to make detection harder, or may mimic legitimate user behavior in various ways to bypass common application security solutions. The size of an attack surface is determined by the application design. For example, in a login attack, botnets perform multiple login attempts from different sources to try to stress the application. The application login is always encrypted and requires resources on the application side such as a database, authentication gateway or identity-service invocation. The attack does not require a high volume of traffic to affect the application, making it very hard to detect (a minimal detection sketch follows this list).
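
To make the login example concrete, here is a minimal detection sketch. It describes no particular product: it simply tracks login attempts per account in a sliding window, because a distributed credential attack concentrates on accounts even when each source IP stays below any per-IP threshold. All names and thresholds are illustrative only.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60            # illustrative sliding window
MAX_ATTEMPTS_PER_ACCOUNT = 10  # illustrative threshold, not a tuned value

_attempts = defaultdict(deque)  # account -> timestamps of recent attempts

def record_login_attempt(account, now=None):
    """Record one attempt; return True if the account looks under attack.

    Keyed by account, not source IP: a botnet spreads attempts across
    many sources, so a per-IP rate limit never fires, but the per-account
    attempt rate still spikes.
    """
    now = time.time() if now is None else now
    window = _attempts[account]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop attempts outside the sliding window
    return len(window) > MAX_ATTEMPTS_PER_ACCOUNT
```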

[You may also like: SSL Attacks – When Hackers Use Security Against You]

Environmental Aspects

Organizations also need to consider the overall environment and application structure, because they greatly affect the selection of the ideal security design based on a vulnerability assessment.

  • Content Delivery Network — Applications using a content delivery network (CDN) create a challenge for security controls deployed at the origin. Technologies that use the source IP to analyze client application behavior see only the source IP of the CDN. The risk is that such solutions will either over-mitigate and disrupt legitimate users or become ineffective. High rates of false positives show that protection based on source IP addresses alone is pointless. Instead, when using a CDN, the selected security technology should have the right measures to analyze attacks that originate behind it, including device fingerprinting or extraction of the original source from the application headers (see the sketch after this list).
  • Application Programming Interface — Application programming interface (API) usage is common in all applications. According to Radware’s The State of Web Application Security report, a third of attacks against APIs intend to yield a denial-of-service state. The security challenge here comes from the legitimate client side. Many solutions rely on active user validation techniques to distinguish legitimate users from attackers, and these techniques require a real browser at the client. In the case of an API, a legitimate browser is often not present at the client side, so the behavior, and the legitimate response to various validation challenges, is different.
  • Mobile Applications — As with APIs, the client side of a mobile application is not a browser and cannot be expected to behave and respond like one. Mobile applications pose a challenge because they run on different operating systems and use different client software. Many security solutions were built around former standards and common tools and have not yet fully adapted. The fact that mobile apps process a large amount of encrypted traffic increases the capacity and security challenges.
  • Directionality — Many security solutions inspect only inbound traffic to protect against availability threats. The directionality of traffic has significant implications for protection efficiency because attacks often target the egress path of the application. In such cases, there might be no observed change in the incoming traffic profile, yet the application might still become unavailable. An effective security solution must process both directions of traffic to protect against sophisticated application attacks.
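
As a concrete illustration of recovering the original source behind a CDN, here is a minimal sketch based on the de facto X-Forwarded-For header. It assumes a single trusted CDN hop in front of the origin; the header name and the number of trusted hops vary by deployment, and a client-spoofable header must never be trusted beyond the hops you control.

```python
TRUSTED_PROXY_HOPS = 1  # assumption: one CDN tier directly in front of the origin

def original_client_ip(headers, peer_ip):
    """Extract the client IP from X-Forwarded-For behind a trusted CDN.

    Each proxy appends the address it saw, so with one trusted hop the
    last entry was appended by the CDN and is the real client address.
    Entries further left are client-controlled and unsafe to trust.
    """
    xff = headers.get("X-Forwarded-For", "")
    hops = [h.strip() for h in xff.split(",") if h.strip()]
    if len(hops) >= TRUSTED_PROXY_HOPS:
        return hops[-TRUSTED_PROXY_HOPS]
    return peer_ip  # no usable header: fall back to the TCP peer address
```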

[You may also like: Are Your Applications Secure?]

Regulatory Limitations

A major selection criterion for security solutions is regulatory compliance. In the case of encrypted attacks, compliance requirements examine whether traffic is decrypted, which parts of the traffic are decrypted and where the decryption happens. The governing paradigm has always been that the more intrusive the solution, the more effective the security, but that is not necessarily the case here: solutions show different levels of effectiveness at the same level of intrusiveness.

Encryption Protocols

The encryption protocol in use has implications for how security can be applied and for the types of vulnerabilities it introduces. Specifically, TLS 1.3 enhances security from the data privacy perspective but is expected to create challenges for security solutions that rely on eavesdropping on the encrypted connection. Users planning to upgrade to TLS 1.3 should consider the future resiliency of their solutions.

[You may also like: Adopt TLS 1.3 – Kill Two Birds with One Stone]

Attack Patterns

Determining attack patterns is the most important undertaking that organizations must master. Because so many layers are vulnerable, attackers can easily change their tactics mid-attack. The motivation is normally twofold: first, inflicting maximum impact at minimal cost; second, making detection and mitigation difficult.

  • Distribution — The level of attack distribution is very important to the attacker. It affects the variety of vectors that can be used and makes the job harder for security controls. Most importantly, the more distributed the attack, the less traffic each attacking source has to generate, so its behavior can better resemble that of legitimate users. Gaining control of a large botnet used to be difficult and extremely costly; with the growth of IoT and the corresponding IoT botnets, it is now common to come across botnets consisting of hundreds of thousands of bots.
  • Overall Attack Rates — The overall attack traffic rate varies from one vector to another. Normally, the lower the layer, the higher the rate. At the application layer, attackers are able to generate low-rate attacks, which still generate significant impact. Security solutions should be able to handle both high- and low-rate attacks, without compromising user experience and SLA.
  • Rate per Attacker — Many security solutions in the availability space rely on the rate per source to detect attackers. This method is not always effective as highly distributed attacks proliferate.
  • Connection Rates — Available attack tools can be divided into two major classes based on their connection behavior. The first class includes tools that open a single connection and generate many requests over it. The second includes tools that open many connections and send only a single request, or very few requests, on each one. Security tools that can analyze connection behavior are more effective at discerning legitimate users from attackers (see the sketch after this list).
  • Session Rates — SSL/TLS session behavior has distinct characteristics in legitimate users and browsers, whose main aim is to optimize performance and user experience. Attack traffic does not usually adhere to those norms, so its SSL session behavior is different. The ability to analyze encryption-session behavior contributes to protecting both the encryption layer and the underlying application layer.
  • Application Rates — Because the application is the most complex part to attack, attackers have the greatest degree of freedom when it comes to application behavior. Attack patterns vary greatly from one attack to another in how they appear in application behavior analyses. At the same time, the rate of change in the application itself is so high that it cannot be tracked manually. Security tools that automatically analyze a wide variety of application aspects and, at the same time, adapt quickly to changes are expected to be more effective in protecting against encrypted application attacks.
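
To ground the connection-rate distinction above, here is a toy classifier for the two tool classes. The thresholds are invented for illustration; a real solution would learn them from traffic rather than hard-code them.

```python
def classify_source(requests_per_connection):
    """Classify one source by its connection behavior.

    requests_per_connection: list with one entry per connection opened by
    the source, each entry being the number of requests on that connection.
    """
    if not requests_per_connection:
        return "unremarkable"
    conns = len(requests_per_connection)
    avg = sum(requests_per_connection) / conns
    if conns <= 2 and avg > 100:
        return "single-connection flood"       # many requests, one connection
    if conns > 100 and avg <= 2:
        return "connection-per-request flood"  # many connections, ~one request each
    return "unremarkable"                      # browsers usually land in between
```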

End-to-End Protection

Protection from encrypted availability attacks is becoming a mandatory requirement for organizations. At the same time, it is one of the more complex tasks to thoroughly perform without leaving blind spots. When considering a protection strategy, it is important to take into account various aspects of the risk and to make sure that, with all good intentions, the side door is not left open.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



How Cyberattacks Directly Impact Your Brand: New Radware Report

January 15, 2019 — by Ben Zilberman


Whether you’re an executive or practitioner, brimming with business acumen or tech savviness, your job is to preserve and grow your company’s brand. Brand equity relies heavily on customer trust, which can take years to build and only moments to demolish. 2018’s cyber threat landscape demonstrates this clearly; the delicate relationship between organizations and their customers is in hackers’ crosshairs and suffers during a successful cyberattack. Make no mistake: leaders who undervalue customer trust, who do not secure an optimized customer experience or adequately safeguard sensitive data, will feel the sting in their balance sheets, brand reputations and even their job security.

Radware’s 2018-2019 Global Application and Network Security report builds upon a worldwide industry survey encompassing 790 business and security executives and professionals from different countries, industries and company sizes. It also features original Radware threat research, including an analysis of emerging trends in both defensive and offensive technologies. Here, I discuss key takeaways.

Repercussions of Compromising Customer Trust

Without question, cyberattacks are a viable threat to operating expenditures (OPEX). This past year alone, the average estimated cost of an attack grew by 52% and now exceeds $1 million (the number of estimates above $1 million increased by 60%). For those organizations that formalized a real calculation process rather than merely estimating the cost, that number is even higher, averaging $1.67 million.

Despite these mounting costs, three in four organizations have no formalized procedure to assess the business impact of a cyberattack. This becomes particularly troubling when you consider that most organizations have experienced some type of attack within the course of a year (only 7% of respondents claim not to have experienced an attack at all), with 21% reporting daily attacks, a significant rise from 13% last year.

There is quite a range in cost evaluation across different verticals. Those who report the highest damage are retail and high-tech, while education stands out with its extremely low financial-impact estimate.

Repercussions vary: 43% reported a negative customer experience, 37% suffered brand reputation loss and one in four lost customers. The most common consequence was loss of productivity, reported by 54% of survey respondents. For small-to-medium-sized businesses, the outcome can be particularly severe, as these organizations typically lack sufficient protection measures and know-how.

It would behoove all businesses, regardless of size, to consider the following:

  • Direct costs: Extended labor, investigations, audits, software patches development, etc.
  • Indirect costs: Crisis management, fines, customer compensation, legal expenses, share value
  • Prevention: Emergency response and disaster recovery plans, hardening endpoints, servers and cloud workloads

Risk Exposure Grows with Multi-Dimensional Complexity

As the cost of cyberattacks grows, so does the complexity. Information networks today are amorphous. In public clouds, they undergo constant metamorphosis, where instances of software entities and components are created, run and disappear. We are marching toward a no-visibility era, and as complexity grows, it will become harder for business executives to analyze potential risks.

The increase in complexity immediately translates to a larger attack surface, or in other words, greater risk exposure. DevOps organizations benefit from advanced automation tools that set up environments in seconds, allocate the necessary resources, and provision and integrate with each other through REST APIs, providing faster time to market for application services with minimal human intervention. However, these tools process sensitive data and cannot defend themselves from attacks.

Protect your Customer Experience

The report found that the primary goal of cyber-attacks is service disruption, followed by data theft. Cyber criminals understand that service disruptions result in a negative customer experience, and to this end, they utilize a broad set of techniques. Common methods include bursts of high traffic volume, usage of encrypted traffic to overwhelm security solutions’ resource consumption, and crypto-jacking that reduces the productivity of servers and endpoints by enslaving their CPUs for the sake of mining cryptocurrencies. Indeed, 44% of organizations surveyed suffered either ransom attacks or crypto-mining by cyber criminals looking for easy profits.

What’s more, attack tools became more effective in the past year; the number of outages grew by 15% and more than half saw slowdowns in productivity. Application layer attacks—which cause the most harm—continue to be the preferred vector for DDoSers over the network layer. It naturally follows, then, that 34% view application vulnerabilities as the biggest threat in 2019.

Essential Protection Strategies

Businesses understand the seriousness of the changing threat landscape and are taking steps to protect their digital assets. However, some tasks, such as protecting a growing number of cloud workloads or discerning a malicious bot from a legitimate one, require leveling up the defense. Security solutions must support and enable business processes, and as such should be dynamic, elastic and automated.

Analyzing the 2018 threat landscape, Radware recommends the following essential security solution capabilities:

  1. Machine Learning: As hackers leverage advanced tools, organizations must minimize false positives in order to optimize the customer experience. This can be achieved with machine-learning capabilities that analyze big data samples for maximum accuracy (nearly half of survey respondents point to security as the driver for exploring machine-learning-based technologies). A minimal sketch of the idea follows this list.
  2. Automation: When so many processes are automated, the protected objects constantly change, and attackers quickly change lanes, trying different vectors every time. As such, a security solution must be able to immediately detect and mitigate a threat. Solutions based on machine learning should be able to auto-tune security policies.
  3. Real-Time Intelligence: Cyber delinquents can disguise themselves in many forms. Compromised devices sometimes make legitimate requests, while at other times they are malicious. Machines operating behind a CDN or NAT cannot be blocked based on IP reputation, and in general, static heuristics are becoming useless. Instead, actionable, accurate real-time information can reveal malicious activity as it emerges and protect businesses and their customers, especially when it relies on analysis and qualification of events from multiple sources.
  4. Security Experts: Keep human supervision for the moments when the pain is real. Human intervention is required in advanced attacks or when the learning process requires tuning. Because not every organization can maintain the know-how in-house at all times, having an expert from a trusted partner or a security vendor on call is a good idea.
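
The report does not prescribe an algorithm, but as a minimal stand-in for the machine-learning capability in item 1, the sketch below learns a statistical baseline of per-minute request rates and flags sharp deviations, illustrating auto-tuned, data-driven thresholds instead of static ones. Real products use far richer models.

```python
import statistics

class RateBaseline:
    """Toy anomaly detector: baseline = rolling mean/stdev of request rates."""

    def __init__(self, history_size=1440, sigmas=4.0):
        self.history = []             # recent per-minute request rates
        self.history_size = history_size
        self.sigmas = sigmas          # illustrative sensitivity, not a tuned value

    def observe(self, rate):
        """Feed one per-minute rate; return True if it looks anomalous."""
        if len(self.history) >= 30:   # wait for some history before judging
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            if rate > mean + self.sigmas * stdev:
                return True           # anomaly: keep it out of the baseline
        self.history.append(rate)
        self.history = self.history[-self.history_size:]
        return False
```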

It is critical for organizations to incorporate cybersecurity into their long-term growth plans. Securing digital assets can no longer be delegated solely to the IT department. Rather, security planning needs to be infused into new product and service offerings, development plans and new business initiatives. CEOs and executive teams must lead the way in setting the tone and investing in securing their customers’ experience and trust.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Protecting Applications in a Serverless Architecture

November 8, 2018 — by Ben Zilberman


Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. Until recently, information security architecture was relatively simple; you built a fortress around a server containing sensitive data, and deployed security solutions to control the flow of users accessing and leaving that server.

But how do you secure a server-less environment?

The Basics of Serverless Architecture

Serverless architecture is an emerging trend in cloud-hosted environments and refers to applications that significantly depend on third-party services (known as Backend-as-a-Service or “BaaS”) or on custom code that’s run in ephemeral containers (known as Function-as-a-Service or “FaaS”). And it is significantly more cost effective than buying or renting servers.

The rapid adoption of micro-efficiency-based pricing models (a.k.a. PPU, or pay-per-use) pushes public cloud providers to introduce business models that meet this demand. Serverless computing helps providers optimize that model by dynamically managing the allocation of machine resources. As a result, organizations pay based on the actual amount of resources their applications consume, rather than ponying up for pre-purchased units of workload capacity (which is usually higher than what they actually utilize).

What’s more, going serverless also frees developers and operators from the burdens of provisioning the cloud workload and infrastructure. There is no need to deploy operating systems and patch them, no need to install and configure web servers, and no need to set up or tune auto-scaling policies and systems.

[You may also like: Application Delivery and Application Security Should be Combined]

Security Implications of Going Serverless

The new serverless model forces a complete change in architecture: nanoservices made of many small software ‘particles.’ The operational unit is a set of function containers that execute REST API functions, which are invoked upon a relevant client-side event (a minimal function example follows the list below). These function instances are created, run and then terminated. During their runtime, they receive, modify and send information that organizations want to monitor and protect. The protection should be dynamic and swift:

  • There is no perimeter or OS to secure.
  • Agents and a persistent footprint become redundant.
  • To optimize the business model, the solution must be scalable and ephemeral; automation is the key to success.
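
For readers who have not seen one, here is what such a function instance can look like, written against the AWS Lambda handler convention as one common example. The event shape and resource names are placeholders, not references to any real system.

```python
import json

def handler(event, context):
    """Hypothetical FaaS function: created, run per event, then terminated.

    There is no server or OS for the application owner to manage; the
    function body itself is the security perimeter, so it validates its
    own inputs.
    """
    body = json.loads(event.get("body") or "{}")
    item_id = body.get("item_id")
    if not isinstance(item_id, str) or not item_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "item_id required"})}
    # ...fetch the item from a persistent store such as DynamoDB or S3...
    return {"statusCode": 200, "body": json.dumps({"item_id": item_id})}
```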

If we break down our application into components that run in a serverless model, the server that runs the APIs uses different layers of code to parse the requests, essentially enlarging the attack surface. However, this isn’t an enterprise problem anymore; it’s the cloud provider’s. Unfortunately, even cloud providers sometimes lag in patch management and workload hardening. Will your DevOps team read all of the cloud provider’s documentation in detail? Most likely, they’ll go with generic permissions. If you want something done right, you had better do it yourself.

Serverless computing doesn’t eradicate all traditional security concerns. Application-level vulnerabilities can still be exploited—with attacks carried out by human hackers or bots—whether they are inherent in the FaaS infrastructure or in the developer function code.

When using a FaaS model, the lack of local persistent storage encourages data transfer between the function and different persistent storage services (e.g., S3 and DynamoDB by AWS) instead. Additionally, each function eventually processes data received from storage, from the client application or from a different function. Every time data moves, it becomes vulnerable to leakage or tampering (one illustrative safeguard is sketched below).
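
One illustrative safeguard against tampering in transit is to sign each record before writing it to external storage and verify the signature after reading it back; a minimal sketch follows. Key management is deliberately elided: in practice the key would come from a secrets manager, not an environment variable.

```python
import hashlib
import hmac
import os

# Assumption: the key is provisioned out of band; an environment variable
# is used here only to keep the sketch self-contained.
KEY = os.environ.get("RECORD_MAC_KEY", "dev-only-key").encode()

def sign(record: bytes) -> bytes:
    """Return an HMAC-SHA256 tag to store alongside the record."""
    return hmac.new(KEY, record, hashlib.sha256).digest()

def verify(record: bytes, tag: bytes) -> bool:
    """Reject records whose content no longer matches their tag."""
    return hmac.compare_digest(sign(record), tag)
```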

In such an environment, it is impossible to track all potential and actual security events. One can’t follow each function’s operation to prevent it from accessing the wrong resources. Visibility and forensics must be automated and must perform real-time contextual analysis. But the question is not whether serverless is more or less secure, and therefore whether to use it. Rather, the question is how to secure it when your organization goes there.

[You may also like: Web Application Security in a Digitally Connected World]

A New Approach

Simply put, going serverless requires a completely different security approach—one that is dynamic, elastic, and real-time. The security components must be able to move around at the same pace as the applications, functions and data they protect.

First things first: To help avoid code exploitation (which is what attacks boil down to), use encryption and monitor the function’s activity and data access, granting it minimum permissions by default. Abnormal function behavior, such as unexpected access to data or unreasonable traffic flow, must be analyzed.

Next, consider additional measures, like a web application firewall (WAF), to secure your APIs. While an API gateway can manage authentication and enforce JSON and XML validity checks, not all API gateways support schema and structure validation, nor do they provide full coverage of the OWASP Top 10 vulnerabilities the way a WAF does. WAFs apply dozens of protection measures on both inbound and outbound traffic, which is parsed to detect protocol manipulations. Client-side inputs are validated, and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
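
The sketch below is a grossly simplified picture of that validation work, showing only its shape: enforce structural validity first, then screen inputs against known-bad patterns. A real WAF applies thousands of rules; the single regex here is illustrative, not a usable signature set.

```python
import json
import re

# Toy deny-list covering a few injection/XSS/path-traversal markers.
INJECTION_PATTERN = re.compile(r"('|--|;|<script|\.\./)", re.IGNORECASE)

def validate_request(raw_body):
    """Return True only if the body is well-formed JSON with clean values."""
    try:
        body = json.loads(raw_body)   # structural validity check
    except ValueError:
        return False
    if not isinstance(body, dict):
        return False
    return not any(
        isinstance(value, str) and INJECTION_PATTERN.search(value)
        for value in body.values()
    )
```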

[You may also like: Taking Stock of Application-Layer Security Threats]

In addition to detecting known attacks, for the purposes of zero-day attack protection and comprehensive application security, a high-end WAF allows strict policy enforcement in which each function can have its own parameters whitelisted—the recommended approach when deploying a function that processes sensitive data or mission-critical business logic.
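
A minimal sketch of that per-function whitelisting follows; the function and parameter names are hypothetical. Unlike the deny-list shown earlier, this positive model rejects anything not explicitly permitted, which is why it also catches inputs no signature yet describes.

```python
# Hypothetical positive security policy: each function declares exactly
# which parameters it accepts, and everything else is rejected.
ALLOWED_PARAMS = {
    "get_invoice": {"invoice_id", "format"},
    "update_profile": {"user_id", "display_name"},
}

def enforce_whitelist(function_name, params):
    """Allow a call only if every supplied parameter is whitelisted."""
    allowed = ALLOWED_PARAMS.get(function_name, set())
    return set(params) <= allowed
```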

And—this is critical—continue to mitigate for DDoS attacks. Going serverless does not eliminate the potential for falling susceptible to these attacks, which have changed dramatically over the past few years. Make no mistake: With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever.

Read “Radware’s 2018 Web Application Security Report” to learn more.



Are Your Applications Secure?

October 3, 2018 — by Ben Zilberman


Executives express mixed feelings and a surprisingly high level of confidence in Radware’s 2018 Web Application Security Report. 

As we close out a year of headline-grabbing data breaches (British Airways, Under Armour, Panera Bread), the introduction of GDPR and the emergence of new application development architectures and frameworks, Radware examined the state of application security in its latest report. This global survey among executives and IT professionals yielded insights about threats, concerns and application security strategies.

The common trend across a variety of application security challenges, including data breaches, bot management, DDoS mitigation, API security and DevSecOps, was the high level of confidence reported by those surveyed. 90% of all respondents across regions reported confidence that their security model is effective at mitigating web application attacks.

Attacks against applications are at a record high and sensitive data is shared more than ever. So how can execs and IT pros have such confidence in the security of their applications?

To get a better understanding, we researched the current threat landscape and application protection strategies organizations currently take. Contradicting evidence stood out immediately:

  • 90% suffered attacks against their applications
  • One in three shared sensitive data with third parties
  • 33% allowed third parties to create/modify/delete data via APIs
  • 67% believed a hacker can penetrate their network
  • 89% saw web-scraping as a significant threat to their IP
  • 83% run bug bounty programs to find vulnerabilities they miss

There were quite a few threats to application services that were not properly addressed, challenging traditional security approaches. In parallel, the adoption of emerging frameworks and architectures, which rely on numerous integrations with multiple services, adds more complexity and increases the attack surface.

Current Threat Landscape

Last November, OWASP released a new list of top 10 vulnerabilities in web applications. Hackers continue to use injections, XSS, and a few old techniques such as CSRF, RFI/LFI and session hijacking to exploit these vulnerabilities and gain unauthorized access to sensitive information. Protection is becoming more complex as attacks come through trusted sources such as a CDN, encrypted traffic, or APIs of systems and services we integrate with. Bots behave like real users and bypass challenges such as CAPTCHA, IP-based detection and others, making it even harder to secure and optimize the user experience.
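
As a reminder of how the oldest item on that list is defused, here is the canonical defense against injection: pass user input as bound parameters instead of splicing it into the query string. The snippet uses sqlite3 purely because it ships with Python.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern (never do this):
#   conn.execute(f"SELECT role FROM users WHERE name = '{user_input}'")
# Parameterized pattern: the driver treats the input as a value, not SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matched nothing instead of everything
```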

[You might also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

Web application security solutions must be smarter and address a broad spectrum of vulnerability exploitation scenarios. On top of protecting the application from these common vulnerabilities, they have to protect APIs, mitigate DoS attacks, manage bot traffic and distinguish between legitimate bots (search engines, for instance) and bad ones like botnets, web scrapers and more.

DDoS Attacks

63% of respondents suffered a denial-of-service attack against their applications. DoS attacks render applications inoperable by exhausting application resources. Buffer overflows and HTTP floods were the most common types of DoS attack, and this form of attack is more common in APAC. 36% find HTTP/Layer-7 DDoS the most difficult attack to mitigate. Half of organizations take rate-based approaches (such as limiting the number of requests from a given source, or simply buying a rate-based DDoS protection solution), which are ineffective once the threshold is exceeded and real users can’t connect (see the sketch below).
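
The sketch below shows the per-source, rate-based approach the respondents describe; its blind spots follow directly from the code. A botnet that keeps each source under the limit passes untouched, and once a flood pushes shared sources (a NAT, a CDN edge) over the threshold, legitimate users behind them are dropped too. Limits are illustrative only.

```python
import time

LIMIT_PER_WINDOW = 100  # illustrative per-source budget
WINDOW_SECONDS = 60

_buckets = {}  # source_ip -> (window_start, request_count)

def allow(source_ip, now=None):
    """Fixed-window, per-source rate limit: the approach described above."""
    now = time.time() if now is None else now
    start, count = _buckets.get(source_ip, (now, 0))
    if now - start > WINDOW_SECONDS:
        start, count = now, 0  # start a fresh window
    _buckets[source_ip] = (start, count + 1)
    return count + 1 <= LIMIT_PER_WINDOW
```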

API Attacks

APIs simplify the architecture and delivery of application services and make digital interactions possible. Unfortunately, they also introduce a wide range of risks and vulnerabilities, serving as a backdoor for hackers to break into networks. Through APIs, data is exchanged over HTTP, with both parties receiving, processing and sharing information. A third party is theoretically able to insert, modify, delete and retrieve content from applications. This is nothing but an invitation to attack:

  • 62% of respondents did not encrypt data sent via API
  • 70% of respondents did not require authentication
  • 33% allowed third parties to perform actions (GET/POST/PUT/DELETE)

Attacks against APIs:

  • 39% Access violations
  • 32% Brute-force
  • 29% Irregular JSON/XML expressions
  • 38% Protocol attacks
  • 31% Denial of service
  • 29% Injections

Bot Attacks

The amount of both good and bad bot traffic is growing. Organizations are forced to increase network capacity and must be able to precisely tell friend from foe so that both customer experience and security are maintained. Surprisingly, 98% claimed they can make such a distinction. However, a similar share sees web-scraping as a significant threat; 87% were impacted by such an attack over the past 12 months, despite the variety of methods companies use to overcome the challenge: CAPTCHA, in-session termination, IP-based detection or even buying a dedicated anti-bot solution.

Impact of Web-scraping:

  • 50% gathered pricing information
  • 43% copied website
  • 42% theft of intellectual property
  • 37% inventory queued/being held by bots
  • 34% inventory held
  • 26% inventory bought out

Data Breaches

Multinational organizations keep close tabs on what kinds of data they collect and share. However, almost every other business (46%) reports having suffered a breach. On average an organization suffers 16.5 breach attempts every year. Most (85%) take between hours and days to discover. Data breaches are the most difficult attack to detect, as well as mitigate, in the eyes of our survey respondents.

How do organizations discover data breaches?

  • 69% Anomaly detection tools/SIEM
  • 51% Darknet monitoring service
  • 45% Information was leaked publicly
  • 27% Ransom demand

Impact of Attacks

Negative consequences such as loss of reputation, customer compensation, legal action (more common in EMEA), churn (more common in APAC), stock price drops (more common in AMER) and executives who lose their jobs are quick to follow a successful attack, while the process of repairing the damage to a company’s reputation is long and not always successful. About half admitted having encountered such consequences.

Securing Emerging Application Development Frameworks

The rapidly growing number of applications, and their distribution across multiple environments, requires constant adjustments that lead to policy variations whenever an application changes. It is nearly impossible to deploy and maintain the same security policy efficiently across all environments. Our research shows that roughly 60% of all applications undergo changes on a weekly basis. How can the security team keep up?

While 93% of organizations use a web application firewall (WAF), only three in ten use a WAF that combines both positive and negative security models for effective application protection.

Technologies Used By DevOps

  • 63% – DevOps and Automation Tools
  • 48% – Containers (3 in 5 use Orchestration)
  • 44% – Serverless / FaaS
  • 37% – Microservices

Among the respondents that used micro-services, one-half rated data protection as the biggest challenge, followed by availability assurance, policy enforcement, authentication, and visibility.

Summary

Is there a notion that organizations are confident? Yes. Is that a false sense of security? Yes. Attacks are constantly evolving, and security measures are not foolproof. Having application security tools and processes in place may provide a sense of control, but they are likely to be breached or bypassed sooner or later. Another question we are left with is whether senior management is fully aware of the day-to-day incidents. Rightfully, they look to their internal teams tasked with application security to manage the issue, but there seems to be a disconnect between their perception of the effectiveness of their organizations’ application security strategies and the actual exposure to risk.

Read “Radware’s 2018 Web Application Security Report” to learn more.



WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10

July 12, 2018 — by Ben Zilberman


Looking in the rearview mirror

The application threat landscape has rapidly evolved. For years, users consumed applications over the internet using a common tool: the web browser. At any point in time, there were two to five web browsers to support, and the variety of application development and testing frameworks was relatively limited. For instance, almost all databases were built using the SQL language. Unfortunately, it was not long before hackers began to abuse applications in order to steal, delete and modify data. They could take advantage of applications in different ways, primarily by tricking the application user, injecting or remotely executing code. Shortly after, commercial solutions named web application firewalls (WAFs) emerged, and the community responded by creating the Open Web Application Security Project (OWASP) to set and maintain standards and methodologies for secure applications.


Can Security Be Efficient Without Expertise or Intelligence?

April 12, 2018 — by Ben Zilberman


Threats evolve fast; don’t lag behind!

I recently returned from a business trip to an exotic destination, which is also a massive emerging market, depending on how you look at it. The folks I met do not seem to face challenges different from those in mature markets, but I could easily relate to their sheer interest in learning, adapting and acting quickly. They were keen to acquire knowledge and use it, knowing that without it they might fall behind.

Suffering

In today’s threat landscape, if you aren’t able to react quickly enough, you will suffer.


CAPTCHA Limitations of Bot Mitigation

March 15, 2018 — by Ben Zilberman


An essential part of the technological evolution is creating systems, machines and applications that autonomously and independently create, collect and communicate data. This automation frees information technology folks to focus on other tasks. Currently, such bots generate more than half of internet traffic, but unfortunately every evolution brings with it some form of abuse. Various ‘bad’ bots aim to achieve different goals, among them web scraping, web application DDoS and clickjacking. While simple script-based bots are not much of a challenge to detect and block, advanced bots dramatically complicate mitigation by using techniques such as mimicking user behavior, using dynamic IP addresses, and operating behind anonymous proxies and CDNs.

CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.”


Has Cyber Security Reached Its Limits?

January 16, 2018 — by Ben Zilberman


Thoughts from Radware’s Global Application and Network Security Report

  • Rise of cryptocurrency trade and value boosts attacks;
  • Notorious attacks of the year point at the human factor to blame;
  • Machine-learning technologies are not fully mature nor broadly adopted;
  • Despite a notion of tolerance, in one of four cases customers will take action against a targeted organization;
  • IoT devices power more effective DDoS attacks, but nobody takes responsibility to patch the known holes;
  • Data Leakage is the number one concern of organizations today.

These are just a handful of insights from Radware’s 2017-2018 Global Application and Network Security Report, which provides a comprehensive view of industry trends and evolutions. 2017 was an eventful year, with global cyber-attack campaigns that grabbed headlines in mainstream media and affected the lives of many, in particular the WannaCry, NotPetya and BadRabbit ransom sprees, as well as the Equifax and Forever 21 data leaks. Let’s take a closer look at 2017 trends and 2018 predictions: