
Cyber Security Predictions

December 12, 2017 — by Carl Herberger

2016 was the Year of DDoS. 2017 was the Year of Ransom. Can we assess leading indicators of new attack techniques and motivations to predict what 2018 will bring? The answer is a resounding “yes.” We believe 2018 will be the Year of Automation—or, more precisely, big, bad attacks on automated technology processes. Here are four reasons why.

Prediction 1: Artificial Intelligence (AI) Is Weaponized

Elon Musk recently made headlines for suggesting we should be more worried about AI than North Korea. Musk’s comment speaks to the risk of robots playing games and beating humans. It also reinforces fears that the human brain can’t outperform or keep pace with certain kinds of automation. The truth is that no one yet knows exactly what AI can do for humankind. What happens if AI falls into the wrong hands?

There is evidence that 2018 could be the year it happens. We are already facing a barrage of bad bots fighting good ones. The black market for off-the-shelf attacks is maturing. Anyone responsible for network or application security will experience firsthand just how automated cyber-attacks have become. It will become apparent that humans simply can’t process information quickly enough to beat the bots.

The only hope will be to fight AI with AI. Most cyber-security applications already use some form of AI to detect attack patterns and other anomalies. Such capabilities are used in various domains—from host-based security (malware) to network security (intrusion/DDoS). What all share is the ability to find and exploit meaningful information in massive collections of data.
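As a simple illustration of the kind of anomaly detection such tools perform, here is a minimal sketch that flags traffic intervals deviating sharply from a rolling baseline. The class name, window size and threshold are all illustrative assumptions, not any vendor's implementation:

```python
# Minimal sketch: flag traffic-rate anomalies with a rolling z-score.
# All names and thresholds here are illustrative, not from any product.
from collections import deque
from statistics import mean, stdev

class RateAnomalyDetector:
    """Keeps a sliding window of per-interval request counts and flags
    intervals that deviate strongly from the recent baseline."""

    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)  # recent per-interval counts
        self.threshold = threshold          # z-score cutoff

    def observe(self, count):
        """Return True if `count` is anomalous vs. the current baseline."""
        if len(self.window) >= 10:  # need a minimal baseline first
            mu = mean(self.window)
            sigma = stdev(self.window) or 1.0
            anomalous = (count - mu) / sigma > self.threshold
        else:
            anomalous = False
        self.window.append(count)
        return anomalous

detector = RateAnomalyDetector()
baseline = [100, 103, 98, 101, 99, 102, 97, 100, 104, 99, 101, 100]
flags = [detector.observe(c) for c in baseline]
print(any(flags))            # steady baseline traffic: no anomaly
print(detector.observe(900)) # sudden flood: flagged
```

Real products combine many such signals across hosts and networks, but the core idea is the same: learn a baseline automatically and react faster than a human operator could.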

[You might also like: 2017’s 5 Most Dangerous DDoS Attacks & How to Mitigate Them (Part 1)]

White and black hats alike are continually hunting for vulnerabilities and zero-day attack concepts. Both can use machine learning/deep learning to collect information and either fix the problem or, in the case of unethical hackers, create one. A prime example is finding vulnerabilities in source code, reversed code or binary code and identifying suspect pieces of code that might lead to the discovery of new zero-day concepts. These are activities that can be easily automated—as illustrated by the discovery of the Reaper botnet in late 2017.

It now feels like a race. Who will find the vulnerabilities first?

Sometimes organizations make it too easy for unethical hackers to win. How often have we seen attacks on vulnerabilities disclosed a few weeks or even several months before? WannaCry, for example, exploited the reality that people fail to upgrade in a timely manner. Hackers were able to launch massive, untargeted attack campaigns without the need to perform any research. The same was true with the Equifax breach, which exploited a recently discovered vulnerability. These opportunities were simply handed to attackers on a plate.

Other hackers—particularly those tasked with state-sponsored attacks—are more ambitious. For them, research is paramount. Consider that Vladimir Putin is on record stating that the nation that achieves an AI breakthrough will be the nation that achieves world domination.

Will AI be used to jam communication links, plunge cities into darkness, set oil rigs on fire or destroy emergency services? Those may be worst-case scenarios, but they point to the need for every enterprise to consider how AI could both damage and protect it.

Prediction 2: APIs Come Under Attack

APIs are a double-edged sword for modern applications such as mobile apps, IoT apps and third-party services embedded into existing applications. They simplify architecture and delivery but introduce a wide range of risks and vulnerabilities. Unfortunately, API vulnerabilities still do not get the required visibility. All of the risks that affect web applications also affect web services, and yet traditional application security assessment tools such as Dynamic Application Security Testing (DAST) and Static Application Security Testing (SAST) either don’t work well with APIs or are simply irrelevant to them.

APIs will be at the heart of many AI capabilities. Radware believes that protecting them may be the biggest problem of the future of the Internet. Here’s just a brief example of the areas of concern for APIs—many of which will be attacked in 2018:

  • TLS is required to secure the communications between the client and APIs for transport confidentiality and integrity of data in transit.
  • TCP termination to detect network evasion attacks that rely on IP fragmentation.
  • HTTP protocol parsing and enforcement of the HTTP RFC protect against various HTTP attacks such as NULL-byte injection, encoded attacks, HTTP request smuggling (HRS), content-type mismatch, etc.
  • Traffic normalization to detect evasion attacks; without it, encoded attacks can easily bypass security solutions.
  • Message size policy enforcement on HTTP message, body, headers and JSON/XML element sizes secures the application against buffer overflow attacks, resource exhaustion and other availability attacks on API infrastructure.
  • Access control policy management with:
    • IP-based and geolocation restrictions where relevant
    • Access restrictions to particular APIs; for example, some APIs should be exposed for public access while others are for internal use only.
    • Access restrictions to specific HTTP methods, where operations allowed for certain users are prohibited for other users or sources. (For example, a user can generate a license but cannot delete the license once generated.)
  • Strong typing and a positive security model provide tight protection for the API infrastructure. Most attacks become impossible to generate if, for instance, the only allowed value type in a JSON element is an integer in the range 1–100.
  • XML/JSON validity check and schema validation is an extremely important security protection. Types, value ranges, sizes and order of XML elements must be configurable.
  • Rate-based protection per application or per API is an important protection against service abuse (for informational APIs), brute-force attacks and DoS attacks.
  • XSS protection should be based on rules and signatures of known attack patterns.
  • SQL and NoSQL injection protections can be achieved by sanitizing and validating user inputs and via rule-based attack detection.
  • Session management can be used to protect the API key, which is posted as a body argument or in the cookie.
  • Data leak protection is essential to ensuring that error messages and sensitive information do not leak to a potential attacker.
  • DDoS protection is key to preventing and mitigating a wide variety of DDoS attack techniques that may exploit API vulnerabilities.
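To make the strong-typing and message-size points above concrete, here is a minimal sketch of a positive security model for a single JSON field. The field name, size limit and value range are illustrative assumptions, not a prescribed API design:

```python
# Sketch of a positive (whitelist) security model for a JSON API payload:
# only one field is allowed, and it must be an integer in the range 1-100.
# Field name and limits are illustrative assumptions.
import json

MAX_BODY_BYTES = 1024  # message-size policy: reject oversized payloads

def validate_quantity_request(raw_body: bytes):
    """Return the parsed payload if it passes the whitelist checks,
    otherwise raise ValueError with the reason."""
    if len(raw_body) > MAX_BODY_BYTES:
        raise ValueError("body exceeds size policy")
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        raise ValueError("body is not valid JSON")
    if not isinstance(payload, dict) or set(payload) != {"quantity"}:
        raise ValueError("unexpected or missing fields")
    qty = payload["quantity"]
    # Strong typing: bool is a subclass of int in Python, so exclude it.
    if not isinstance(qty, int) or isinstance(qty, bool):
        raise ValueError("quantity must be an integer")
    if not 1 <= qty <= 100:
        raise ValueError("quantity out of allowed range")
    return payload

print(validate_quantity_request(b'{"quantity": 42}'))  # accepted
```

Because everything not explicitly allowed is rejected, entire attack classes (oversized bodies, injected strings, unexpected fields) fail before they reach application logic.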

Prediction 3: Proxies Fall Prey to Three Types of Attacks

Radware predicts three proxy-based attack vectors worth noting: attacks against the CDN proxy, watering hole attacks and side channel attacks.

[You might also like: 2017’s 5 Most Dangerous DDoS Attacks & How to Mitigate Them (Part 2)]

Attacking the CDN Proxy

New vulnerabilities in content delivery networks (CDNs) have left many wondering if the networks themselves are vulnerable to a wide variety of cyber-attacks. Here are five cyber “blind spots” that will be attacked in 2018—and how to mitigate the risks:

  1. Increase in dynamic content attacks. Attackers have discovered that treatment of dynamic content requests is a major blind spot in CDNs. Since dynamic content is not stored on CDN servers, all requests for dynamic content are sent to the origin’s servers. Attackers take advantage of this behavior to generate attack traffic containing random parameters in HTTP GET requests. CDN servers immediately forward this attack traffic to the origin, expecting the origin’s servers to handle the requests. In many cases, however, the origin’s servers lack the capacity to handle all those attack requests, and they fail to provide online services to legitimate users, creating a denial-of-service situation. Many CDNs can limit the number of dynamic requests to the server under attack, but because they cannot distinguish attackers from legitimate users, the rate limit ends up blocking legitimate users as well.
  2. SSL-based DDoS attacks. SSL-based DDoS attacks leverage this cryptographic protocol to target the victim’s online services. These attacks are easy to launch and difficult to mitigate, making them a hacker favorite. To detect and mitigate SSL-based attacks, CDN servers must first decrypt the traffic using the customer’s SSL keys. If the customer is not willing to provide the SSL keys to its CDN provider, then the SSL attack traffic is redirected to the customer’s origin. That leaves the customer vulnerable to SSL attacks. Such attacks that hit the customer’s origin can easily take down the secured online service.

During DDoS attacks, when web application firewall (WAF) technologies are involved, CDNs also have a significant scalability weakness in terms of how many SSL connections per second they can handle. Serious latency issues can arise. PCI and other security compliance issues are also a problem because they limit the data centers that can be used to service the customer. This can increase latency and cause audit issues.

Keep in mind that these problems are exacerbated by the massive migration from RSA to ECC- and DH-based algorithms.

  3. Attacks on non-CDN services. CDN services are often offered only for HTTP/S and DNS applications. Other online services and applications in the customer’s data center, such as VoIP, mail, FTP and proprietary protocols, are not served by the CDN, so traffic to those applications is not routed through it. Attackers take advantage of this blind spot and launch large-scale attacks on such applications, hitting the customer origin directly and threatening to saturate the customer’s Internet pipe. Once the pipe is saturated, all applications at the customer’s origin become unavailable to legitimate users, including those served by the CDN.
  4. Direct IP attacks. Even applications that are served by a CDN can be attacked once attackers launch a direct hit on the IP address of the web servers at the customer’s data center. These can be network-based flood attacks, such as UDP or ICMP floods, that are not routed through CDN services and hit the customer’s servers directly. Such volumetric network attacks can saturate the Internet pipe, degrading applications and online services, including those served by the CDN.
  5. Web application attacks. CDN threat protection is limited, leaving the customer’s web applications exposed to data leakage, theft and other threats common to web applications. Most CDN-based WAF capabilities are minimal, covering only a basic set of predefined signatures and rules. Many CDN-based WAFs do not learn HTTP parameters and do not create positive security rules, so they cannot protect against zero-day attacks and many known threats. For companies that DO provide tuning for the web applications in their WAF, the cost of this level of protection is extremely high. In addition to these significant blind spots, most CDN security services are simply not responsive enough, resulting in security configurations that take hours to deploy manually. Many rely on technologies (e.g., rate limiting) that have proven inefficient in recent years and lack capabilities such as network behavioral analysis and challenge-response mechanisms.
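One common mitigation for the dynamic-content floods described above is per-client rate limiting of origin-bound (cache-miss) requests, so cached traffic flows freely while each client's ability to trigger origin fetches is capped. The sketch below uses a token bucket; all parameter values are illustrative assumptions, not recommended settings:

```python
# Illustrative per-client token bucket for dynamic-content requests.
# A CDN edge cannot absorb random-parameter GET floods by caching, so one
# mitigation is limiting how fast each client may trigger origin fetches.
# Rates and burst sizes below are assumptions for the sketch.
import time

class TokenBucket:
    def __init__(self, rate=5.0, burst=10.0):
        self.rate = rate      # tokens refilled per second
        self.burst = burst    # maximum bucket size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets = {}  # client IP -> bucket

def allow_dynamic_request(client_ip, is_cache_miss):
    """Cached responses pass freely; origin-bound requests are throttled."""
    if not is_cache_miss:
        return True
    bucket = buckets.setdefault(client_ip, TokenBucket())
    return bucket.allow()

# A rapid burst of 30 origin-bound requests from one client: only the
# burst allowance (about 10) is admitted before throttling kicks in.
admitted = sum(allow_dynamic_request("203.0.113.7", True) for _ in range(30))
print(admitted)
```

As the article notes, this does not distinguish attackers from legitimate users behind the same address; it only bounds the damage any single source can do to the origin.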

[You might also like: Pandora’s Box: Auditing for DDoS Vulnerabilities, Part I]

Finding the Watering Holes

Watering hole attack vectors are all about finding the weakest link in a technology chain. These attacks target automated processes that are often forgotten, overlooked or poorly understood, and they can lead to staggering devastation. What follows is a list of sample watering hole targets:

  • App stores
  • Security update services
  • Domain name services
  • Public code repositories used to build websites
  • Web analytics platforms
  • Identity and access single sign-on platforms
  • Open source code commonly used by vendors
  • Third-party vendors that participate in the website

The 2016 DDoS attack on Dyn remains the best example of the watering hole technique to date. However, we believe this vector will gain momentum heading into 2018 as automation begins to pervade every aspect of our lives.

Attacking from the Side

In many ways side channels are the most obscure and obfuscated attack vectors. This technique attacks the integrity of a company’s site through a variety of tactics:

  • DDoS the enterprise’s analytics company
  • Brute-force attack against all users or against all of the site’s third-party companies
  • Port the admin’s phone and steal login information
  • Massive load on “page dotting”
  • Large botnets to “learn” ins and outs of a site

Prediction 4: Social Engineering Gets Automated

Social engineering is the use of deceptive techniques to trick individuals into providing information or access to systems. Often the techniques take advantage of normal human impulses, such as the desire to be helpful and kind. One of the most common examples is attackers posing as helpdesk representatives and calling employees to request their login credentials. Social engineering has long been a challenge to security. What’s changing now is the risks of automation transforming human behavior into vulnerabilities. Automated social engineering makes it possible to do two things:

  • Exploit human inputs into automated processes, causing those processes to work against us or on behalf of the perpetrator.
  • Accelerate the speed and effectiveness of longstanding social engineering methods such as phone calls, emails, texts and even conversations.

These realities have already emerged as large automation issues. Dropbox, Amazon Web Services and Google have all announced major outages caused by human error interacting with automated networking or application-change processes. Can 2018 exploits of such human-error vectors be far behind?

[You might also like: Pandora’s Box: Auditing for DDoS Vulnerabilities, Part II]

Striving for Cyber Serenity: Is the Best Behind Us?

2017 was a monumental year. The discovery of BrickerBot marked the first time a software-based botnet would render a physical (IoT) device permanently unusable. It also foreshadowed a new genre of botnets and attack techniques that automate dastardly deeds. The WannaCry and NotPetya ransom attacks that followed each demonstrated crude forms of automation.

The conclusion we can draw is this: If growth of attack surface, techniques and means continues into 2018 through various attacks on automated technologies, the best years of security of our systems may be behind us. As we move into 2018, Radware offers up two key questions: How will the rise of automation fuel corresponding rises in new vectors for exploits? And, given the threat landscape, how can we develop tools and techniques today to protect ourselves from these technical, somewhat arcane threat vectors so that we may all live securely and peacefully?

Internet-connected devices are being deployed in virtually every aspect of our lives. Yet they are largely implemented in an insecure manner, often decaying into insecure architectures or configurations. The result is an environment in which automated attacks can and will thrive. Let us hope that 2018 will be the year when our collective societies learn how to transform the threat equation into a reasonable problem and abate the ominous signs before us all.

Until then, we urge you to pay special attention to weaponized AI, large API attacks, proxy attacks and automated social engineering. As they target the hidden attack surface of automation, they will no doubt be very problematic.

Read “Cyber-Security Perceptions and Realities: A View from the C-Suite” to learn more.


Carl Herberger

Carl is an IT security expert and responsible for Radware’s global security practice. With over a decade of experience, he began his career working at the Pentagon evaluating computer security events affecting daily Air Force operations. Carl also managed critical operational intelligence for computer network attack programs to aid the National Security Council and Secretary of the Air Force with policy and budgetary defense. Carl writes about network security strategy, trends, and implementation.
