HTTPS: The Myth of Secure Encrypted Traffic Exposed


The S in HTTPS is supposed to mean that encrypted traffic is secure. For attackers, it simply means a larger attack surface from which to launch assaults on applications and exploit their security vulnerabilities. How should organizations respond?

Most web traffic is encrypted to provide better privacy and security. By 2018, over 70% of webpages were loaded over HTTPS, and Radware expects this trend to continue until nearly all web traffic is encrypted. The major drivers pushing adoption are the availability of free SSL certificates and the perception that cleartext traffic is insecure.

While encrypting traffic is a vital practice for organizations, cyber criminals are not necessarily deterred by it. They look for ways to use encrypted traffic as a platform from which to launch attacks that are difficult to detect and mitigate, especially at the application layer. As encrypted applications grow more complex, the potential attack surface grows with them. Organizations need to incorporate protection of the application layer into their overall network security strategies. Results from the global industry survey revealed a 10% increase in encrypted attacks on organizations in 2018.

Encrypted Application Layers

When planning protection for encrypted applications, it is important to consider all of the layers involved in delivering the application. It is not uncommon for application owners to focus on protecting the encrypted application layer while overlooking lower layers in the stack that might be vulnerable. In many cases, the protection selected for the application layer is itself vulnerable to transport-layer attacks.

To ensure applications are protected, organizations need to analyze the following Open Systems Interconnection (OSI) layers:

  • Transport — In most encrypted applications, the underlying transport is TCP. TCP attacks come in many forms and volumes, so protection must be resilient enough to shield applications from attacks on the TCP layer. Some applications now use QUIC, which runs over UDP and adds reflection and amplification risks to the mix.
  • Session — The SSL/TLS layer itself is vulnerable. Once an SSL/TLS session is established, the server invests roughly 15 times more compute power than the client, which makes the session layer particularly vulnerable and attractive to attackers.
  • Application — Application attacks are the most complex type of attack, and encryption only makes them harder for security solutions to detect and mitigate. Attackers often select specific areas of an application where each request generates a disproportionately high load, may attack several resources simultaneously to make detection harder, or may mimic legitimate user behavior in various ways to bypass common application security solutions. The size of the attack surface is determined by the application design. For example, in a login attack, a botnet performs login attempts from many different sources to stress the application. The login flow is always encrypted and consumes resources on the application side, such as a database lookup, an authentication gateway or an identity service invocation. The attack does not require a high volume of traffic to affect the application, making it very hard to detect (a minimal sketch of spotting this pattern follows this list).
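
To make the login example concrete, here is a minimal sketch of how a detector might combine the aggregate load on an expensive endpoint with the per-source distribution. The log shape, endpoint path and thresholds are assumptions for illustration, not a description of any particular product.

```python
from collections import Counter

# Hypothetical sketch: flag a distributed login attack in which every source
# stays below a naive per-IP threshold, yet the aggregate load on the
# expensive /login resource is far above its learned baseline. The endpoint
# path, window size and thresholds are illustrative assumptions.

BASELINE_LOGINS_PER_MIN = 50   # assumed baseline learned from normal traffic
PER_SOURCE_CEILING = 5         # attempts/min that look harmless in isolation

def analyze_window(requests):
    """requests: iterable of (source_ip, path) pairs seen in a one-minute window."""
    login_sources = [ip for ip, path in requests if path == "/login"]
    per_source = Counter(login_sources)
    total = len(login_sources)

    if total <= BASELINE_LOGINS_PER_MIN:
        return {"verdict": "normal"}

    # Each source looks quiet on its own, but the aggregate is well above
    # baseline: the signature of a distributed, low-rate login attack.
    if all(count <= PER_SOURCE_CEILING for count in per_source.values()):
        return {
            "verdict": "suspected distributed login attack",
            "total_logins": total,
            "distinct_sources": len(per_source),
        }
    return {"verdict": "high-rate sources present", "top": per_source.most_common(3)}
```

The point is that neither signal alone is enough: a raised aggregate on an expensive resource combined with a flat per-source distribution is what gives a distributed, low-rate attack away.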

[You may also like: SSL Attacks – When Hackers Use Security Against You]

Environmental Aspects

Organizations also need to consider the overall environment and application structure, because both greatly affect which security design a vulnerability assessment will identify as ideal.

  • Content Delivery Network — Applications that use a content delivery network (CDN) pose a challenge for security controls deployed at the origin. Technologies that rely on the source IP to analyze client application behavior only see the source IP of the CDN. As a result, such solutions either over-mitigate and disrupt legitimate users or become ineffective, and their high false-positive rates show that protection based solely on source IP addresses is of little value. Instead, when a CDN is in use, the selected security technology should have the means to analyze attacks originating behind it, such as device fingerprinting or extraction of the original source address from the application headers (see the sketch after this list).
  • Application Programming Interface — Application programming interface (API) usage is common in all applications. According to Radware’s The State of Web Application Security report, a third of attacks against APIs intend to create a denial-of-service state. The security challenge here comes from the legitimate client side. Many solutions rely on active user-validation techniques to distinguish legitimate users from attackers, and these techniques require a real browser on the client. In the case of an API, there is often no browser on the client side, so both the behavior and the legitimate responses to validation challenges differ.
  • Mobile Applications — Like APIs, the client side of a mobile application is not a browser and cannot be expected to behave or respond like one. Mobile applications also pose a challenge because they run on different operating systems and use different browsers. Many security solutions were built around earlier standards and common tools and have not yet fully adapted. The high volume of encrypted traffic that mobile apps process further increases the capacity and security challenges.
  • Directionality — Many security solutions only inspect inbound traffic to protect against availability threats. Directionality of traffic has significant implications on the protection efficiency because attacks usually target the egress path of the application. In such cases, there might not be an observed change in the incoming traffic profile, but the application might still become unavailable. An effective security solution must process both directions of traffic to protect against sophisticated application attacks.
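
To illustrate the CDN point above, the following sketch recovers the original client address from a forwarding header instead of relying on the CDN's source IP. The trusted-proxy list and the use of X-Forwarded-For are assumptions for illustration; which header to trust, and how far along the chain, depends on the specific CDN deployment.

```python
# Hypothetical example: the CDN egress addresses below are placeholders, and
# X-Forwarded-For is assumed to be the header your CDN appends. Substitute the
# values your deployment actually uses, and never trust hops you do not control.

TRUSTED_PROXIES = {"203.0.113.10", "203.0.113.11"}   # placeholder CDN egress IPs

def original_client_ip(peer_ip: str, headers: dict) -> str:
    # If the connection does not come from the CDN, the peer address is
    # already the real client.
    if peer_ip not in TRUSTED_PROXIES:
        return peer_ip

    forwarded = headers.get("X-Forwarded-For", "")
    if not forwarded:
        return peer_ip

    # X-Forwarded-For is a comma-separated chain, client first. Walk it from
    # the right and return the first hop that is not one of our own proxies.
    for hop in reversed([h.strip() for h in forwarded.split(",")]):
        if hop not in TRUSTED_PROXIES:
            return hop
    return peer_ip
```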

[You may also like: Are Your Applications Secure?]

Regulatory Limitations

A major selection criterion for security solutions is regulatory compliance. For encrypted attacks, compliance requirements examine whether traffic is decrypted, which parts of the traffic are decrypted and where the decryption happens. The governing paradigm has always been that the more intrusive the solution, the more effective the security, but that is not necessarily the case here: solutions show different levels of effectiveness for the same degree of intrusiveness.

Encryption Protocols

The encryption protocol in use has implications for how security can be applied and what types of vulnerabilities it presents. Specifically, TLS 1.3 improves security from the data-privacy perspective but is expected to create challenges for security solutions that rely on eavesdropping on the encrypted connection. Users planning to upgrade to TLS 1.3 should consider the future resiliency of their solutions.
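
For teams planning that upgrade, here is a minimal sketch of pinning a server to TLS 1.3 with Python's standard ssl module (Python 3.7+ built against OpenSSL 1.1.1 or later); the certificate file names are placeholders.

```python
import ssl

# Server-side context that refuses anything older than TLS 1.3.
# Because TLS 1.3 key exchange is ephemeral, traffic negotiated by this
# context cannot be decrypted passively with a copy of the server key,
# which is exactly the inspection challenge noted above.
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_3
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholders
```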

[You may also like: Adopt TLS 1.3 – Kill Two Birds with One Stone]

Attack Patterns

Understanding attack patterns is the most important discipline for organizations to master. Because so many layers are vulnerable, attackers can easily change their tactics mid-attack. Their motivation is normally twofold: first, inflicting maximum impact at minimal cost; second, making detection and mitigation difficult.

  • Distribution — The level of attack distribution matters greatly to the attacker. It affects the variety of vectors that can be used and makes the job of security controls harder. Most importantly, the more distributed the attack, the less traffic each attacking source has to generate, so its behavior can better resemble that of legitimate users. Gaining control of a large botnet used to be difficult and extremely costly; with the growth of IoT and the corresponding IoT botnets, it is now common to come across botnets consisting of hundreds of thousands of bots.
  • Overall Attack Rates — The overall attack traffic rate varies from one vector to another. Normally, the lower the layer, the higher the rate. At the application layer, attackers are able to generate low-rate attacks, which still generate significant impact. Security solutions should be able to handle both high- and low-rate attacks, without compromising user experience and SLA.
  • Rate per Attacker — Many security solutions in the availability space rely on the rate per source to detect attackers. This method is not always effective as highly distributed attacks proliferate.
  • Connection Rates — Today’s attack tools can be divided into two major classes based on their connection behavior. The first class opens a single connection (or very few) and sends many requests over it; the second opens many connections with only one or a few requests on each. Security tools that can analyze connection behavior are more effective at discerning legitimate users from attackers (see the sketch after this list).
  • Session Rates — SSL/TLS session behavior has distinct characteristics for legitimate users and browsers, whose main goal is to optimize performance and user experience. Attack traffic does not usually adhere to those norms, so its SSL session behavior differs. The ability to analyze encryption session behavior contributes to protecting both the encryption layer and the underlying application layer.
  • Application Rates — Because the application is the most complex part to attack, attackers have the greatest degree of freedom in application behavior. Attack patterns vary greatly from one attack to another in how they appear in application behavior analyses. At the same time, the application itself changes so quickly that it cannot be tracked manually. Security tools that can automatically analyze a large variety of application aspects and, at the same time, adapt quickly to changes are expected to be more effective in protecting against encrypted application attacks.
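
To make the connection-rate distinction concrete, here is a minimal Python sketch that separates the two tool classes by how many requests each connection carries. The thresholds are illustrative assumptions; a real solution would learn per-application baselines.

```python
from statistics import mean

def classify_client(requests_per_connection):
    """requests_per_connection: one request count per connection opened by a
    single client during the observation window (hypothetical log shape)."""
    if not requests_per_connection:
        return "no data"

    conns = len(requests_per_connection)
    avg_requests = mean(requests_per_connection)

    # Class 1: one or very few connections, each carrying a flood of requests.
    if conns <= 2 and avg_requests > 100:
        return "single-connection flood suspect"

    # Class 2: many connections, each carrying only one or two requests.
    if conns > 100 and avg_requests <= 2:
        return "connection-churn suspect"

    # Legitimate browsers usually sit between the extremes: a handful of
    # persistent connections, each reused for a moderate number of requests.
    return "browser-like"
```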

End-to-End Protection

Protection from encrypted availability attacks is becoming a mandatory requirement for organizations. At the same time, it is one of the more complex tasks to perform thoroughly without leaving blind spots. When devising a protection strategy, it is important to take into account the various aspects of the risk and to make sure that, with all good intentions, the side door is not left open.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Ben Zilberman

Ben Zilberman is a director of product marketing, covering application security at Radware. In this role, Ben specializes in web application and API protection, as well as bot management solutions. In parallel, Ben drives some of Radware’s thought leadership and research programs. Ben has over 10 years of diverse experience in the industry, leading marketing programs for network and application security solutions, including firewalls, threat prevention, web security and DDoS protection technologies. Prior to joining Radware, Ben served as a trusted advisor at Check Point Software Technologies, where he led channel partnerships and sales operations. Ben holds a BA in Economics and an MBA from Tel Aviv University.
