
Attack Mitigation | DDoS | DDoS Attacks

What Do Banks and Cybersecurity Have in Common? Everything.

February 7, 2019 — by Radware


New cyber-security threats require new solutions. New solutions require a project to implement them. The problems and solutions seem infinite while budgets remain bounded. Therefore, the challenge becomes how to identify the priority threats, select the solutions that deliver the best ROI and stretch dollars to maximize your organization’s protection. Consultants and industry analysts can help, but they too can be costly options that don’t always provide the correct advice.

So how best to simplify the decision-making process? Use an analogy. Consider that every cybersecurity solution has a counterpart in the physical world. To illustrate this point, consider the security measures at banks. They make a perfect analogy, because banks are just like applications or computing environments; both contain valuables that criminals are eager to steal.

The first line of defense at a bank is the front door, which is designed to allow people to enter and leave while providing a first layer of defense against thieves. Network firewalls fulfill the same role within the realm of cyber security. They allow specific types of traffic to enter an organization’s network but block mischievous visitors from entering. While firewalls are an effective first line of defense, they’re not impervious. Just like surreptitious robbers such as Billy the Kid or John Dillinger, SSL/TLS-based encrypted attacks or nefarious malware can sneak through this digital “front door” via a standard port.

Past the entrance there is often a security guard; in cybersecurity terms, an IPS or anti-malware device. This “security guard,” typically an anti-malware and/or heuristic-based IPS function, seeks to identify unusual behavior or other indicators that trouble has entered the bank, such as somebody wearing a ski mask or carrying a concealed weapon.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Once the hacker gets past these perimeter security measures, they find themselves at the presentation layer of the application, or in the case of a bank, the teller. There is security here as well: first, authentication (do you have an account?), and second, two-factor authentication (an ATM card and security PIN). IPS and anti-malware devices work in concert with SIEM management solutions, which serve as security cameras, performing additional security checks. Just like a bank leveraging the FBI’s Most Wanted List, these solutions leverage crowdsourcing and big-data analytics to analyze data from a massive global community and identify bank-robbing malware in advance.

A robber will often demand access to the bank’s vault. In the realm of IT, this is the database, where valuable information such as passwords, credit card or financial transaction information or healthcare data is stored. There are several ways of protecting this data, or at the very least, monitoring it. Encryption and database application monitoring solutions are the most common.

Adapting for the Future: DDoS Mitigation

To understand how and why cyber-security models will have to adapt to meet future threats, let’s outline three obstacles they’ll have to overcome in the near future: advanced DDoS mitigation, encrypted cyber-attacks, and DevOps and agile software development.

[You may also like: Agile, DevOps and Load Balancers: Evolution of Network Operations]

A DDoS attack is any cyber-attack that compromises the availability of a company’s website or network and impairs the organization’s ability to conduct business. Take an e-commerce business, for example. If somebody wants to prevent the organization from conducting business, it isn’t necessary to hack the website; it’s enough to make it difficult for visitors to access it.

Returning to the bank analogy: banks and financial institutions employ multiple layers of security because it provides an integrated, redundant defense designed to meet a multitude of potential situations in the unlikely event a bank is robbed. This also includes the ability to quickly and effectively communicate with law enforcement. In the world of cyber security, multi-layered defense is also essential. Why? Because preparing for “common” DDoS attacks is no longer enough. With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever. This is why hybrid protection, which combines both on-premise and cloud-based mitigation services, is critical.

[You may also like: 8 Questions to Ask in DDoS Protection]

Why deploy two systems for DDoS protection? Because it offers the best of both worlds. When a DDoS solution is deployed on-premise, organizations benefit from immediate and automatic attack detection and mitigation. Within a few seconds of the start of a cyber-assault, the online services are protected and the attack is mitigated. However, an on-premise DDoS solution cannot handle volumetric network floods that saturate the Internet pipe. These attacks must be mitigated from the cloud.

Hybrid DDoS protections aspire to offer best-of-breed attack mitigation by combining on-premise and cloud mitigation into a single, integrated solution. The hybrid solution chooses the right mitigation location and technique based on attack characteristics. In the hybrid solution, attack detection and mitigation starts immediately and automatically using the on-premise attack mitigation device. This stops various attacks from diminishing the availability of the online services. All attacks are mitigated on-premise, unless they threaten to block the Internet pipe of the organization. In case of pipe saturation, the hybrid solution activates cloud mitigation and the traffic is diverted to the cloud, where it is scrubbed before being sent back to the enterprise.
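Conceptually, the diversion decision reduces to comparing inbound volume against the capacity of the Internet pipe. The sketch below illustrates that decision in Python; the capacity figure, threshold and function names are hypothetical, and real appliances signal diversion through vendor-specific mechanisms (e.g., BGP route injection) rather than application code.

```python
# Illustrative sketch of the hybrid mitigation decision described above.
# Capacity, threshold and names are hypothetical.

PIPE_CAPACITY_BPS = 1_000_000_000                 # assume a 1 Gbps Internet pipe
DIVERSION_THRESHOLD = 0.8 * PIPE_CAPACITY_BPS     # divert before full saturation

def choose_mitigation(inbound_bps: float, attack_detected: bool) -> str:
    """Pick a mitigation location based on pipe utilization."""
    if not attack_detected:
        return "none"
    if inbound_bps >= DIVERSION_THRESHOLD:
        # A volumetric flood threatens the pipe: divert traffic to the cloud
        # scrubbing center and return only clean traffic to the enterprise.
        return "cloud-scrubbing"
    # Anything below the threshold is handled by the on-premise appliance,
    # which detects and mitigates within seconds.
    return "on-premise"
```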

[You may also like: Choosing the Right DDoS Solution – Part IV: Hybrid Protection]

An ideal hybrid solution also shares essential information about the attack between on-premise mitigation devices and cloud devices to accelerate and enhance the mitigation of the attack once it reaches the cloud.

Inspecting Encrypted Data

Companies have been encrypting data for well over 20 years. Today, over 50% of Internet traffic is encrypted. SSL/TLS encryption is still the most effective way to protect data, as it ties the encryption to both the source and destination. This is a double-edged sword, however. Hackers are now leveraging encryption to create new, stealthy attack vectors for malware infection and data exfiltration. In essence, they’re a wolf in sheep’s clothing. To stop hackers from leveraging SSL/TLS-based cyber-attacks, organizations require computing resources to inspect communications and ensure they’re not carrying malware. These growing resource requirements make it challenging for anything but purpose-built hardware to conduct inspection.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

The equivalent in the banking world is twofold. First, if somebody were to enter wearing a ski mask, that person probably wouldn’t be allowed to conduct a transaction. Second, additional security checks can be applied when somebody enters a bank and requests a large or unusual withdrawal.

Dealing with DevOps and Agile Software Development

Lastly, how do we ensure that, as applications become more complex, they don’t become increasingly vulnerable, whether from coding errors or from newly deployed functionality associated with DevOps or agile development practices? The problem is that most cyber-security solutions focus on stopping existing threats. To use our bank analogy again, existing security solutions mean that, ideally, a career criminal can’t enter a bank, someone carrying a concealed weapon is stopped, and somebody acting suspiciously is blocked from making a transaction. However, nothing stops somebody with no criminal background and no suspicious behavior from entering the bank. The bank’s security systems must be updated to look for other “indicators” that this person could represent a threat.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

In the world of cyber-security, the key is implementing a web application firewall that adapts to evolving threats and applications. A WAF accomplishes this by automatically detecting and protecting new web applications as they are added to the network via automatic policy generation. It should also minimize both false positives and false negatives. Why? Because just like a bank, web applications are accessed both by desired legitimate users and by undesired attackers (malicious users whose goal is to harm the application and/or steal data). One of the biggest challenges in protecting web applications is accurately differentiating between the two, identifying and blocking security threats while not disturbing legitimate traffic.

Adaptability is the Name of the Game

The world we live in can be a dangerous place, both physically and digitally. Threats are constantly changing, forcing both financial institutions and organizations to adapt their security solutions and processes. When contemplating the next steps, consider the following:

  • Use common sense and logic. The marketplace is saturated with offerings. Understand how a cybersecurity solution will fit into your existing infrastructure and the business value it will bring by keeping your organization up and running and your customers’ data secure.
  • Understand the long-term total cost of ownership (TCO) of any cybersecurity solution you purchase.
  • The world is changing. Ensure that any cyber security solution you implement is designed to adapt to the constantly evolving threat landscape and your organization’s operational needs.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Types & Vectors | DDoS | DDoS Attacks

Top 3 Cyberattacks Targeting Proxy Servers

January 16, 2019 — by Daniel Smith


Many organizations are now realizing that DDoS defense is critical to maintaining an exceptional customer experience. Why? Because nothing diminishes load times or impacts the end user’s experience more than a cyberattack.

As a facilitator of access to content and networks, proxy servers have become a focal point for those seeking to cause grief to organizations via cyberattacks due to the fallout a successful assault can have.

Attacking the CDN Proxy

New vulnerabilities in content delivery networks (CDNs) have left many wondering if the networks themselves are vulnerable to a wide variety of cyberattacks. Here are five cyber “blind spots” that are often attacked – and how to mitigate the risks:

Increase in dynamic content attacks. Attackers have discovered that the treatment of dynamic content requests is a major blind spot in CDNs. Since dynamic content is not stored on CDN servers, all requests for dynamic content are sent to the origin’s servers. Attackers are taking advantage of this behavior to generate attack traffic that contains random parameters in HTTP GET requests. CDN servers immediately forward this attack traffic to the origin, expecting the origin’s server to handle the requests. However, in many cases the origin’s servers do not have the capacity to handle all those attack requests and fail to provide online services to legitimate users, creating a denial-of-service situation. Many CDNs can limit the number of dynamic requests to the server under attack, but they cannot distinguish attackers from legitimate users, so the rate limit ends up blocking legitimate users as well.
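One mitigation approach, sketched below, is to normalize the cache key so that randomized query strings collapse onto the parameters the application actually uses; the allowed parameter names here are hypothetical.

```python
# Sketch: build the cache key from a whitelist of known parameters so that
# randomized query strings in HTTP GET floods collapse onto cached entries.
# The allowed parameter names are hypothetical.
from urllib.parse import urlparse, parse_qsl, urlencode

ALLOWED_PARAMS = {"page", "lang", "q"}   # parameters the application actually uses

def cache_key(url: str) -> str:
    parsed = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parsed.query) if k in ALLOWED_PARAMS]
    kept.sort()                          # make the key order-independent
    return f"{parsed.path}?{urlencode(kept)}"

# Two randomized attack variants now map to the same cache key:
assert cache_key("/news?page=1&zx=83jd") == cache_key("/news?zx=99aa&page=1")
```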

SSL-based DDoS attacks. SSL-based DDoS attacks leverage this cryptographic protocol to target the victim’s online services. These attacks are easy to launch and difficult to mitigate, making them a hacker favorite. To detect and mitigate SSL-based attacks, CDN servers must first decrypt the traffic using the customer’s SSL keys. If the customer is not willing to provide the SSL keys to its CDN provider, then the SSL attack traffic is redirected to the customer’s origin. That leaves the customer vulnerable to SSL attacks. Such attacks that hit the customer’s origin can easily take down the secured online service.

[You may also like: SSL Attacks – When Hackers Use Security Against You]

During DDoS attacks, when web application firewall (WAF) technologies are involved, CDNs also have a significant scalability weakness in terms of how many SSL connections per second they can handle. Serious latency issues can arise. PCI and other security compliance issues are also a problem because they limit the data centers that can be used to service the customer. This can increase latency and cause audit issues.

Keep in mind these problems are exacerbated with the massive migration from RSA algorithms to ECC and DH-based algorithms.

Attacks on non-CDN services. CDN services are often offered only for HTTP/S and DNS applications.  Other online services and applications in the customer’s data center, such as VoIP, mail, FTP and proprietary protocols, are not served by the CDN. Therefore, traffic to those applications is not routed through the CDN. Attackers are taking advantage of this blind spot and launching attacks on such applications. They are hitting the customer’s origin with large-scale attacks that threaten to saturate the Internet pipe of the customer. All the applications at the customer’s origin become unavailable to legitimate users once the internet pipe is saturated, including ones served by the CDN.

[You may also like: CDN Security is NOT Enough for Today]

Direct IP attacks. Even applications that are served by a CDN can be attacked once attackers launch a direct hit on the IP address of the web servers at the customer’s data center. These can be network-based flood attacks such as UDP floods or ICMP floods that will not be routed through CDN services and will directly hit the customer’s servers. Such volumetric network attacks can saturate the Internet pipe. That results in degradation to application and online services, including those served by the CDN.

Web application attacks. CDN protection from threats is limited and exposes the customer’s web applications to data leakage, theft and other threats that are common with web applications. Most CDN-based WAF capabilities are minimal, covering only a basic set of predefined signatures and rules. Many CDN-based WAFs do not learn HTTP parameters and do not create positive security rules; therefore, they cannot protect against zero-day attacks and many known threats. For companies that do provide tuning for the web applications in their WAF, the cost of this level of protection is extremely high. In addition to the significant blind spots identified, most CDN security services are simply not responsive enough, resulting in security configurations that take hours to deploy manually. Security services rely on technologies (e.g., rate limiting) that have proven inefficient in recent years and lack capabilities such as network behavioral analysis, challenge-response mechanisms and more.

[You may also like: Are Your Applications Secure?]

Finding the Watering Holes

Watering hole attack vectors are all about finding the weakest link in a technology chain. These attacks target automated processes that are often forgotten, overlooked or poorly understood, and they can lead to devastating results. What follows is a list of sample watering hole targets:

  • App stores
  • Security update services
  • Domain name services
  • Public code repositories to build websites
  • Web analytics platforms
  • Identity and access single sign-on platforms
  • Open source code commonly used by vendors
  • Third-party vendors that participate in the website

The 2016 DDoS attack on Dyn remains the best example of the watering hole technique to date. However, we believe this vector will gain momentum heading into 2018 and 2019 as automation begins to pervade every aspect of our lives.

Attacking from the Side

In many ways, side channels are the most obscure and obfuscated attack vectors. This technique attacks the integrity of a company’s site through a variety of tactics:

  • DDoS the company’s analytics provider
  • Brute-force attack against all users or against all of the site’s third-party companies
  • Port the admin’s phone and steal login information
  • Massive load on “page dotting”
  • Large botnets to “learn” ins and outs of a site

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Attack Types & Vectors | Cloud Security | DDoS Attacks | Security

2019 Predictions: Will Cyber Serenity Soon Be a Thing of the Past?

November 29, 2018 — by Daniel Smith


In 2018 the threat landscape evolved at a breakneck pace, from predominantly DDoS and ransom attacks (in 2016 and 2017, respectively) to automated attacks. We saw sensational attacks on APIs, the weaponization of Artificial Intelligence, and growth in side-channel and proxy-based attacks.

And by the looks of it, 2019 will be an extension of the proverbial game of whack-a-mole, with categorical alterations to the current tactics, techniques and procedures (TTPs). While nobody knows exactly what the future holds, strong indicators today enable us to forecast trends in the coming year.

The public cloud will experience a massive security attack

The worldwide public cloud services market is projected to grow 17.3 percent in 2019 to total $206.2 billion, up from $175.8 billion in 2018, according to Gartner, Inc. This means organizations are rapidly shifting content to the cloud, and with that shift come new vulnerabilities and threats. While cloud adoption is touted as faster, better and easier, security is often traded away for performance and overall cost. Organizations trust and expect their cloud providers to adequately secure information for them, but perception is not always reality when it comes to current cloud security, and 2019 will demonstrate this.

[You may also like: Cloud vs DDoS, the Seven Layers of Complexity]

Ransom techniques will surge

Ransom techniques, including ransomware and ransom denial-of-service (RDoS), will give way to hijacking new embedded technologies, along with holding healthcare systems and smart cities hostage as 5G networks and devices launch. What does this look like? The prospects are distressing:

  • Hijacking the availability of a service—like stock trading, streaming video or music, or even 911—and demanding a ransom for the digital return of the devices or network.
  • Hijacking a device. Not only are smart home devices like thermostats and refrigerators susceptible to security lapses, but so are larger devices, like automobiles.
  • Healthcare ransom attacks pose a particularly terrifying threat. As healthcare becomes increasingly interwoven with cloud-based monitoring, the services and IoT embedded devices responsible for administering health management (think prescriptions/urgent medications, health records, etc.) are vulnerable, putting those seeking medical care in jeopardy of having the healthcare devices they depend on targeted by malware, or the networks supporting those devices hijacked.

[You may also like: The Origin of Ransomware and Its Impact on Businesses]

Nation state attacks will increase

As trade and other types of soft-power conflicts increase in number and severity, nation states and other groups will seek new ways of causing widespread disruption, including Internet outages at the local or regional level, service outages, supply chain attacks and application blacklisting by governments in attempted power grabs. Contractors and government organizations are likely to be targeted, and other industries stand to lose millions of dollars as indirect victims if communications systems fail and trade grinds to a halt.

More destructive DDoS attacks are on the way

Over the past several years, we’ve witnessed the development and deployment of massive IoT-based botnets, such as Mirai, BrickerBot, Reaper and Hajime, whose systems are built around thousands of compromised IoT devices. Most of these weaponized botnets have been used in cyberattacks to knock out critical devices or services in a relatively straightforward manner.

Recently there has been a change in devices targeted by bot herders. Based on developments we are seeing in the wild, attackers are not only infiltrating resource-constrained IoT devices, they are also targeting powerful cloud-based servers. When targeted, only a handful of compromised instances are needed to create a serious threat. Since IoT malware is cross-compiled for many platforms, including x86_64, we expect to see attackers consistently altering and updating Mirai/Qbot scanners to include more cloud-based exploits going into 2019.

[You may also like: IoT Botnets on the Rise]

Cyber serenity may be a thing of the past

If the attack landscape continues to evolve into 2019 through various chained attacks and alterations of current TTPs to include automated features, the best years of cybersecurity may be behind us. Let’s hope that 2019 will be the year we collectively begin to really share intelligence and aid one another in knowledge transfer; it’s critical in order to address the threat equation and come up with reasonable and achievable solutions that will abate the ominous signs before us all.

Until then, pay special attention to weaponized AI, large API attacks, proxy attacks and automated social engineering. As they target the hidden attack surface of automation, they will no doubt become very problematic moving forward.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Application Security | Attack Mitigation | DDoS Attacks | Security | WAF

Protecting Applications in a Serverless Architecture

November 8, 2018 — by Ben Zilberman


Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. Until recently, information security architecture was relatively simple; you built a fortress around a server containing sensitive data, and deployed security solutions to control the flow of users accessing and leaving that server.

But how do you secure a server-less environment?

The Basics of Serverless Architecture

Serverless architecture is an emerging trend in cloud-hosted environments and refers to applications that significantly depend on third-party services (known as Backend-as-a-Service or “BaaS”) or on custom code that’s run in ephemeral containers (known as Function-as-a-Service or “FaaS”). And it is significantly more cost effective than buying or renting servers.

The rapid adoption of pay-per-use (PPU) pricing models pushes public cloud providers to introduce business models that meet this demand. Serverless computing helps providers optimize that model by dynamically managing the allocation of machine resources. As a result, organizations pay based on the actual amount of resources their applications consume, rather than ponying up for pre-purchased units of workload capacity (which is usually higher than what they actually utilize).

What’s more, going serverless also frees developers and operators from the burdens of provisioning the cloud workload and infrastructure. There is no need to deploy operating systems and patch them, no need to install and configure web servers, and no need to set up or tune auto-scaling policies and systems.

[You may also like: Application Delivery and Application Security Should be Combined]

Security Implications of Going Serverless

The serverless model forces a complete change in architecture: nano-services composed of many small software ‘particles.’ The operational unit is a set of function containers that execute REST API functions, which are invoked upon a relevant client-side event. These function instances are created, run and then terminated. During their run time, they receive, modify and send information that organizations want to monitor and protect. The protection should be dynamic and swift (a minimal handler sketch follows the list below):

  • There is no perimeter or OS to secure.
  • Agents and a persistent footprint become redundant.
  • To optimize the business model, the solution must be scalable and ephemeral; automation is the key to success.
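To make the operational unit concrete, here is a minimal sketch of such a function, assuming AWS Lambda and boto3; the bucket name, key layout and event fields are hypothetical, and a real deployment would pair this with least-privilege IAM permissions scoped to exactly this bucket.

```python
# Minimal sketch of a FaaS operational unit, assuming AWS Lambda and boto3.
# The bucket name, key layout and event fields are hypothetical.
import json
import boto3

s3 = boto3.client("s3")   # created outside the handler so warm starts reuse it

def handler(event, context):
    # Validate client-side input before touching any resource.
    try:
        body = json.loads(event.get("body") or "{}")
        order_id = str(body["order_id"])
    except (ValueError, KeyError, TypeError):
        return {"statusCode": 400, "body": "invalid request"}

    # The function keeps no local state; it reads from persistent storage,
    # responds and terminates.
    obj = s3.get_object(Bucket="example-orders", Key=f"orders/{order_id}.json")
    return {"statusCode": 200, "body": obj["Body"].read().decode("utf-8")}
```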

If we break down our application into components that run in a serverless model, the server that runs the APIs uses different layers of code to parse the requests, essentially enlarging the attack surface. However, this isn’t an enterprise problem anymore; it’s the cloud provider’s. Unfortunately, even providers sometimes lag in patch management and workload hardening. Will your DevOps team read all of the cloud provider’s documentation in detail? Most likely, they’ll go with generic permissions. If you want something done right, you’d better do it yourself.

Serverless computing doesn’t eradicate all traditional security concerns. Application-level vulnerabilities can still be exploited—with attacks carried out by human hackers or bots—whether they are inherent in the FaaS infrastructure or in the developer function code.

When using a FaaS model, the lack of local persistent storage encourages data transfer between the function and various persistent storage services (e.g., AWS S3 and DynamoDB). Additionally, each function eventually processes data received from storage, from the client application or from a different function. Every time data moves, it becomes vulnerable to leakage or tampering.

In such an environment, it is impossible to track all potential and actual security events. One can’t follow each function’s operation to prevent it from accessing the wrong resources. Visibility and forensics must be automated and perform real-time contextual analysis. But the question is not whether serverless is more or less secure; rather, the question is how to secure it when your organization goes there.

[You may also like: Web Application Security in a Digitally Connected World]

A New Approach

Simply put, going serverless requires a completely different security approach—one that is dynamic, elastic, and real-time. The security components must be able to move around at the same pace as the applications, functions and data they protect.

First things first: To help avoid code exploitation (which is what attacks boil down to), use encryption and monitor the function’s activity and data access so it has, by default, minimum permissions. Abnormal function behavior, such as unexpected access to data or unreasonable traffic flow, must be analyzed.

Next, consider additional measures, like a web application firewall (WAF), to secure your APIs. While an API gateway can manage authentication and enforce JSON and XML validity checks, not all API gateways support schema and structure validation, nor do they provide full coverage of the OWASP Top 10 vulnerabilities the way a WAF does. WAFs apply dozens of protection measures on both inbound and outbound traffic, which is parsed to detect protocol manipulations. Client-side inputs are validated, and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
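As a toy illustration of the negative-security side of this (signature rules applied to client-side input), consider the sketch below; the patterns are deliberately simplified and trivially bypassable, whereas a real WAF parses the full protocol and applies thousands of contextual rules.

```python
# Toy negative-security check, for illustration only: real WAFs parse the full
# protocol and apply thousands of contextual rules, not a handful of regexes.
import re

SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # crude SQL injection pattern
    re.compile(r"(?i)<script\b"),               # crude reflected-XSS pattern
    re.compile(r"\.\./"),                       # path traversal / file inclusion
]

def looks_malicious(value: str) -> bool:
    return any(sig.search(value) for sig in SIGNATURES)

assert looks_malicious("id=1 UNION SELECT password FROM users")
assert not looks_malicious("page=2&lang=en")
```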

[You may also like: Taking Stock of Application-Layer Security Threats]

In addition to detecting known attacks, for the purposes of zero-day attack protection and comprehensive application security, a high-end WAF allows strict policy enforcement where each function can have its own parameters whitelisted—the recommended approach when deploying a function that processes sensitive data or mission-critical business logic.

And—this is critical—continue to mitigate for DDoS attacks. Going serverless does not eliminate the potential for falling susceptible to these attacks, which have changed dramatically over the past few years. Make no mistake: With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Delivery | Security

Simple to Use Link Availability Solutions

November 1, 2018 — by Daniel Lakier


Previously, I discussed how rerouting data center host infrastructure should be part of next-generation DDoS solutions. In this blog, I will discuss how link availability solutions should also play a part. Traditional DDoS solutions offer us a measure of protection against a number of things that can disrupt service to our applications or environment. This is good, but what do we do when our mitigation solutions are downstream from the problem? In other words, what do we do if our service provider goes down either from a cyberattack or other event?

What if we had the capacity to clean the bandwidth provided by our service provider, but the service provider itself is down? How do we prepare for that eventuality? Admittedly, in first-world nations with modern infrastructure, this is a less likely scenario. In third-world nations with smaller carriers/ISPs and/or outdated infrastructure, it is more common. However, times are changing. The plethora of IoT devices being deployed throughout the world makes this scenario more likely. While there is no silver bullet, there are several strategies to help mitigate this risk.

[You may also like: Disaster Recovery: Data Center or Host Infrastructure Reroute]

Is Border Gateway Protocol the Right Solution?

Most companies that consider a secondary provider for internet services have been setting up Border Gateway Protocol (BGP) as the service mechanism. While this can work, it may not be the right choice. BGP is a rigid protocol that takes a reasonable skill level to configure and maintain. It can often introduce complexity and idiosyncrasies that cause their own problems—not to mention it tends to be an either-or protocol. You cannot set all traffic to take the best route at all times. It has thresholds and is not considered a load balancing protocol. All traffic configured to move along a certain route will move that way until certain thresholds are met, and will only switch back once those thresholds/parameters change again. It can also introduce its own problems, including flapping, table size limitations, or cost overruns when it has been used to eliminate pay-per-usage links.

Any solution in this space needs to solve both the technical and economic issues associated with link availability. The technical issues break into two parts: people and technology. In other words, make it easy to use and configure; make it work for multiple use cases, both inbound and outbound; and, if possible, eliminate the risk factors associated with rigid solutions, like link flapping and the downtime caused by re-convergence. The second problem is economic. Allow people to leverage their investments fully: if they pay for bandwidth, they should be able to use it. Both links should be active (and load balanced if the customer wants). A common problem with BGP is that one link is fully leveraged, and therefore hits its maximum threshold, while the other link sits idle due to lack of flow control or load balancing.
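To illustrate the contrast with BGP’s all-or-nothing threshold behavior, here is a rough sketch of latency-weighted link selection that keeps both links active; the gateway addresses are hypothetical, the ping flags assume Linux, and a production link load balancer makes this decision per flow in the data path rather than in a script.

```python
# Sketch: probe both uplinks and spread new sessions in proportion to observed
# latency, instead of BGP's all-or-nothing threshold flip. Gateway addresses
# are hypothetical; the ping flags assume Linux.
import random
import subprocess

LINKS = {"isp_a": "203.0.113.1", "isp_b": "198.51.100.1"}

def probe_ms(gateway: str) -> float:
    """Return round-trip time to a gateway, or a large penalty if unreachable."""
    try:
        out = subprocess.run(["ping", "-c", "1", "-W", "1", gateway],
                             capture_output=True, text=True, timeout=3)
        if out.returncode == 0:
            return float(out.stdout.split("time=")[1].split()[0])
    except (IndexError, ValueError, FileNotFoundError, subprocess.TimeoutExpired):
        pass
    return 10_000.0  # treat an unreachable link as very slow rather than "down"

def pick_link() -> str:
    """Weight links by inverse latency so both stay active (no idle link)."""
    weights = {name: 1.0 / probe_ms(gw) for name, gw in LINKS.items()}
    names, w = zip(*weights.items())
    return random.choices(names, weights=w, k=1)[0]
```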

For several years, organizations have looked for alternatives. The link load balancing and VXLAN marketplaces have both been popular alternatives, especially for branch edge redundancy solutions. Most of these solutions have limitations with inbound network load balancing, resulting in curtailed adoption. In many data centers, especially cloud deployments, the usual traffic flow involves out-of-network users initiating the connection from the outside. Most link load balancing and VXLAN solutions are very good at load balancing outbound traffic. The key reason for the technology’s adoption has been twofold: the ability to reduce cost with WAN/internet providers and the ability to reduce complexity.

The reduction in cost is focused on two main areas:

  • The ability to use less costly (and traditionally less reliable) bandwidth, because stability was compensated for by dynamically load balancing the links
  • The ability to use what we were paying for and buy only the required bandwidth

The reduction in complexity comes from the ease of configuration and the simplicity of being able to buy link redundancy solutions as a service.

The unique value of this solution is that you can protect yourself from upstream service outages or upstream burst attacks that trip thresholds in your environment and cause the BGP environment to transition back and forth as failover parameters are met, essentially causing port flapping. The carrier may not experience an outage, but if someone can insert enough latency into the link on a regular basis it could cause a continual outage. Purpose-built link protection and load balancing solutions not only serve an economic purpose but also protect your organization from upstream cyberattacks.

Read “Flexibility Is The Name of the Game” to learn more.

Download Now

Application Security | Security | Web Application Firewall

Credential Stuffing Campaign Targets Financial Services

October 23, 2018 — by Daniel Smith


Over the last few weeks, Radware has been tracking a significant Credential Stuffing Campaign targeting the financial industry in the United States and Europe.

Background

Credential Stuffing is an emerging threat in 2018 that continues to accelerate as more breaches occur. Today, a breach doesn’t just impact the compromised organization and its users, but it also affects every other website that the users may use.

Additionally, resetting passwords for a compromised application will only solve the problem locally while criminals are still able to leverage those credentials externally against other applications due to poor user credential hygiene.

Credential Stuffing is a subset of brute force attacks but is different from Credential Cracking. Credential Stuffing campaigns do not involve brute forcing password combinations. Instead, they leverage leaked usernames and passwords in an automated fashion against numerous websites in an attempt to take over user accounts, exploiting credential reuse.

Criminals, like researchers, collect and data-mine leaked databases and breached accounts for several reasons. Typically, cybercriminals will keep this information for future targeted attacks, sell it for profit or exploit it in fraudulent ways.

The motivations behind the current campaign that Radware is seeing are strictly fraud related. Criminals are using credentials from prior data breaches in an attempt to gain access to and take over users’ bank accounts. These attackers have been seen targeting financial organizations in both the United States and Europe. When significant breaches occur, the compromised email addresses and passwords are quickly leveraged by cybercriminals. Armed with tens of millions of credentials from a recently breached website, attackers will use these credentials along with scripts and proxies to distribute their attack in an automated fashion against the financial institution in an attempt to take over banking accounts. These login attempts can happen in such volumes that they resemble a Distributed Denial of Service (DDoS) attack.

Attack Methods

Credential Stuffing is one of the most commonly used attack vectors by cybercriminals today. It’s an automated web injection attack where criminals use a list of breached credentials in an attempt to gain access and take over accounts across different platforms, exploiting poor credential hygiene. Attackers route their login requests through proxy servers to avoid blacklisting their IP addresses.

Attackers automate the logins of millions of previously discovered credentials with automation tools like cURL and PhantomJS or tools designed specifically for the attack like Sentry MBA and SNIPR.

This threat is dangerous to both consumers and organizations due to the ripple effect caused by data breaches. When a company is breached, the compromised credentials will either be used by the attacker or sold to other cybercriminals. Once credentials reach their final destination, a for-profit criminal will use that data, or credentials obtained from a leak site, in an attempt to take over user accounts on multiple websites like social media, banking and marketplaces. In addition to the threat of fraud and identity theft to the consumer, organizations have to mitigate credential stuffing campaigns that generate high volumes of login requests, eating up resources and bandwidth in the process.
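A common detection heuristic follows directly from this traffic pattern: a single source cycling through many distinct usernames with a high failure rate looks like credential stuffing rather than a forgotten password. The sketch below illustrates the idea; the window and thresholds are illustrative, not recommendations.

```python
# Heuristic sketch: many distinct usernames plus a high failure rate from one
# source in a short window suggests credential stuffing. Thresholds are
# illustrative only.
import time
from collections import defaultdict, deque

WINDOW_SEC = 300
MAX_DISTINCT_USERS = 10
_attempts = defaultdict(deque)  # source_ip -> deque of (timestamp, username, success)

def record_login(source_ip: str, username: str, success: bool) -> bool:
    """Record an attempt; return True if the source should be challenged."""
    now = time.time()
    q = _attempts[source_ip]
    q.append((now, username, success))
    while q and now - q[0][0] > WINDOW_SEC:   # drop attempts outside the window
        q.popleft()
    distinct_users = {user for _, user, _ in q}
    failures = sum(1 for _, _, ok in q if not ok)
    return len(distinct_users) > MAX_DISTINCT_USERS and failures / len(q) > 0.9
```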

Credential Cracking

Credential Cracking attacks are automated web attacks where criminals attempt to crack users’ passwords or PINs by working through all possible combinations of characters in sequence. These attacks are only possible when applications do not have a lockout policy for failed login attempts.

Attackers will use a list of common words or recently leaked passwords in an automated fashion in an attempt to take over a specific account. Software for this attack will attempt to crack the user’s password by mutating and brute-forcing values until the attacker is successfully authenticated.
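Since such attacks only work in the absence of a lockout policy, a minimal sketch of one is shown below; the limits are illustrative, and a production system would persist this state and prefer step-up challenges over hard locks, which attackers can abuse to lock out legitimate users.

```python
# Minimal lockout-policy sketch; limits are illustrative. Production systems
# would persist this state and prefer step-up challenges over hard locks.
import time

FAILED_LIMIT = 5        # failures allowed before locking
LOCKOUT_SEC = 900       # lock duration in seconds
_failures = {}          # username -> (failure_count, first_failure_timestamp)

def allow_attempt(username: str) -> bool:
    count, since = _failures.get(username, (0, 0.0))
    if count >= FAILED_LIMIT and time.time() - since < LOCKOUT_SEC:
        return False    # account temporarily locked
    return True

def record_failure(username: str) -> None:
    count, since = _failures.get(username, (0, time.time()))
    _failures[username] = (count + 1, since)

def record_success(username: str) -> None:
    _failures.pop(username, None)   # reset the counter on successful login
```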

Targets

In recent campaigns, Radware has seen financial institutions targeted in both the United States and Europe by Credential Stuffing campaigns.

Crimeware

Sentry MBA is one of the most popular Credential Stuffing toolkits used by cybercriminals today. This tool is hosted on the Sentry MBA crackers forum. The tool simplifies and automates the process of checking credentials across multiple websites and allows the attackers to configure a proxy list so they can anonymize their login requests.

SNIPR – Credential Stuffing Toolkit

SNIPR is a popular Credential Stuffing toolkit used by cybercriminals and is found hosted on the SNIPR crackers forums. SNIPR comes with over 100 config files preloaded and the ability to upload personal config files to the public repository.

Reasons for Concern

Recent breaches over the last few years have exposed hundreds of millions of user credentials. One of the main concerns with a Credential Stuffing campaign is the impact it has on users. Users who reuse credentials across multiple websites expose themselves to an increased risk of fraud and identity theft.

The second concern is for organizations that have to mitigate high volumes of fraudulent login attempts that can saturate a network. This saturation is a cause for concern, as it will appear to be a DDoS attack originating from random IP addresses and a variety of sources, including proxies. These requests will look like legitimate attempts since the attacker is not running a brute force attack: if the user:pass combination for an account does not exist or fails to authenticate on the targeted application, the program simply moves on to the next set of credentials.

Mitigation

In order to defend against a Credential Stuffing campaign, organizations need to deploy a WAF that can properly fingerprint and identify malicious bot traffic as well as automated login attacks directed at their web applications. Radware’s AppWall addresses the multiple challenges posed by Credential Stuffing campaigns by introducing additional layers of mitigation, including activity tracking and source blocking.

Radware’s AppWall is a Web Application Firewall (WAF) capable of securing Web applications as well as enabling PCI compliance by mitigating web application security threats and vulnerabilities. Radware’s WAF prevents data from leaking or being manipulated, which is critically important in regard to sensitive corporate data and/or information about customers.

The AppWall security filter also detects such attempts to hack into the system by checking the replies sent from the Web server for Bad/OK replies in a specific timeframe. In the event of a Brute Force attack, the number of Bad replies from the Web server (due to a bad username, incorrect password, etc.) triggers the BruteForce security filter to monitor and take action against that specific attacker. This blocking method prevents a hacker from using automated tools to carry out an attack against the Web application’s login page.
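Generically, a reply-side filter of this kind can be pictured as counting failed-authentication responses per source inside a sliding window, as in the sketch below. This illustrates the general technique, not AppWall’s actual implementation; the status codes and thresholds are illustrative.

```python
# Generic reply-side threshold filter (an illustration of the technique, not
# AppWall's implementation): count failed-authentication replies per source
# in a sliding window and flag the source once a threshold is exceeded.
import time
from collections import defaultdict, deque

BAD_CODES = {401, 403}   # replies counted as "Bad" (illustrative)
THRESHOLD = 20           # bad replies tolerated per window (illustrative)
WINDOW_SEC = 60
_bad_replies = defaultdict(deque)

def inspect_reply(source_ip: str, status_code: int) -> bool:
    """Return True if the source should be blocked."""
    if status_code not in BAD_CODES:
        return False
    now = time.time()
    q = _bad_replies[source_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SEC:
        q.popleft()
    return len(q) > THRESHOLD
```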

In addition to these steps, network operators should apply two-factor authentication where possible and monitor credential dumps for potential leaks or threats.

Effective Web Application Security Essentials

  • Full OWASP Top-10 coverage against defacements, injections, etc.
  • Low false positive rate – using negative and positive security models for maximum accuracy
  • Auto policy generation capabilities for the widest coverage with the lowest operational effort
  • Bot protection and device fingerprinting capabilities to overcome dynamic IP attacks and achieve improved bot detection and blocking
  • Securing APIs by filtering paths, understanding XML and JSON schemas for enforcement, and activity tracking mechanisms to trace bots and guard internal resources
  • Flexible deployment options – on-premise, out-of-path, virtual or cloud-based

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

DDoS | Security

Disaster Recovery: Data Center or Host Infrastructure Reroute

October 11, 2018 — by Daniel Lakier


Companies, even large ones, haven’t considered disaster recovery plans outside of their primary cloud provider’s own infrastructure as regularly as they should. In March of this year, Amazon Web Services (AWS) had a massive failure which directly impacted some of the world’s largest brands, taking them offline for several hours. In this case, it was not a malicious attack, but the end result was the same—an outage.

When the organization’s leadership questioned their IT departments on how this outage could happen, most received an answer that seemed acceptable: It was AWS. Amazon failed, not us. However, that answer should not be acceptable.

AWS implies it is invulnerable, but the people running IT departments are there for a reason. They are meant to be skeptics, and it is their job to build redundancies that protect the system against any single point of failure. Some of those companies use AWS disaster recovery services, but if the data center and all the technology required to turn those fail-safes on crashes, then you’re down. This is why we need to treat the problem with the same logic that we use for any other system. Today it is easier than ever to create a resilient, DoS-resistant architecture that takes not only traditional malicious activity into account but also critical business failures. The solution isn’t purely technical either; it needs to be based upon sound business principles using readily available technology.

[You might also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

In the past, enterprise disaster recovery architecture revolved around having a fully operational secondary location. If we wanted true resiliency, that was the only option. Today, although that can still be one of the foundation pillars of your approach, it doesn’t have to be the only answer. You need to be more circumspect about what your requirements are and choose the right solution for each environment/problem. For example:

  • A) You can still build it either in your own data center or in a cloud (match the performance requirements to a business value equation).
  • B) Several ‘Backup-as-a-Service’ offerings provide more than just storage in the cloud; they offer resources for rent (servers to run your corporate environments in case of an outage). If your business can sustain an environment going down just long enough to turn it back on (several hours), this can be a very cost-effective solution.
  • C) For non-critical items, rely on the cloud provider you currently use to provide near-time failure protection.

The Bottom Line

Regardless of which approach you take, even if everything works flawlessly, you still need to address the ‘brownout’ phenomenon, or the time it takes for services to be restored at the primary or a secondary location. It is even more important to automatically send people to a different location if performance is impaired. Many people have heard of global server load balancing (GSLB), and while many use it today, it is not part of their comprehensive DoS approach. But it should be. If your goal with your DDoS mitigation solution is to ensure uninterrupted service in addition to meeting your approved performance SLA, then dynamic GSLB, or infrastructure-based performance load balancing, has to be an integral part of any design.

We can deploy this technology purely defensively, as we have traditionally done with all DoS investments, or we can change the paradigm and deploy the technology to help us exceed expectations. This allows us to give each individual user the best experience possible. Radware’s dynamic performance-based route optimization solution (GSLB) allows us to offer a unique customer experience to each and every user regardless of where they are coming from, how they access the environment or what they are trying to do. This same technology allows us to reroute users in the event of a DoS event that takes down an entire site, be it from malicious behavior, hardware failure or simple human error. This functionality can be procured as a product or a service, as it is environment/cloud agnostic and relatively simple to deploy. It is not labor intensive and may be the least expensive part of an enterprise DoS architecture.
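The routing decision itself is simple to picture: answer each client with the fastest healthy site and fall back automatically when a site disappears. Below is a rough Python sketch of that decision; the site names, addresses and latency figures are hypothetical, and a real GSLB feeds the decision from continuous health and performance probes.

```python
# Sketch of a performance-based GSLB decision (site names, addresses and
# metrics are hypothetical): answer each resolver with the fastest healthy
# site, falling back automatically when a site is down or degraded.
SITES = {
    "us-east": {"ip": "192.0.2.10", "healthy": True, "latency_ms": 40},
    "eu-west": {"ip": "192.0.2.20", "healthy": True, "latency_ms": 85},
    "dr-site": {"ip": "192.0.2.30", "healthy": True, "latency_ms": 120},
}

def resolve(client_latencies: dict) -> str:
    """Return the IP of the fastest healthy site for this client.

    client_latencies maps site name -> latency measured from the client's
    resolver (fed by real probes in a production GSLB deployment).
    """
    candidates = {
        name: client_latencies.get(name, site["latency_ms"])
        for name, site in SITES.items() if site["healthy"]
    }
    if not candidates:
        raise RuntimeError("no healthy sites: trigger DR escalation")
    best = min(candidates, key=candidates.get)
    return SITES[best]["ip"]

# A client near Europe gets eu-west; if eu-west fails, it falls back:
print(resolve({"eu-west": 20, "us-east": 90}))
```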

What we can conclude is that any company that blames the cloud provider for a down site in the future should be asked the hard questions because solving this problem is easier today than ever before.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Security | Attack Mitigation | Security | Web Application Firewall

Are Your Applications Secure?

October 3, 2018 — by Ben Zilberman


Executives express mixed feelings and a surprisingly high level of confidence in Radware’s 2018 Web Application Security Report. 

As we close out a year of headline-grabbing data breaches (British Airways, Under Armour, Panera Bread), the introduction of GDPR and the emergence of new application development architectures and frameworks, Radware examined the state of application security in its latest report. This global survey among executives and IT professionals yielded insights about threats, concerns and application security strategies.

The common trend among a variety of application security challenges, including data breaches, bot management, DDoS mitigation, API security and DevSecOps, was the high level of confidence reported by those surveyed: 90% of all respondents across regions reported confidence that their security model is effective at mitigating web application attacks.

Attacks against applications are at a record high and sensitive data is shared more than ever. So how can execs and IT pros have such confidence in the security of their applications?

To get a better understanding, we researched the current threat landscape and application protection strategies organizations currently take. Contradicting evidence stood out immediately:

  • 90% suffered attacks against their applications
  • One in three shared sensitive data with third parties
  • 33% allowed third parties to create/modify/delete data via APIs
  • 67% believed a hacker can penetrate their network
  • 89% saw web-scraping as a significant threat to their IP
  • 83% run bug bounty programs to find vulnerabilities they miss

There were quite a few threats to application services that were not properly addressed, challenging traditional security approaches. In parallel, the adoption of emerging frameworks and architectures, which rely on numerous integrations with multiple services, adds more complexity and increases the attack surface.

Current Threat Landscape

Last November, OWASP released a new list of top 10 vulnerabilities in web applications. Hackers continue to use injections, XSS, and a few old techniques such as CSRF, RFI/LFI and session hijacking to exploit these vulnerabilities and gain unauthorized access to sensitive information. Protection is becoming more complex as attacks come through trusted sources such as a CDN, encrypted traffic, or APIs of systems and services we integrate with. Bots behave like real users and bypass challenges such as CAPTCHA, IP-based detection and others, making it even harder to secure and optimize the user experience.

[You might also like: WAFs Should Do A  Lot More Against Current Threats Than Covering OWASP Top 10]

Web application security solutions must be smarter and address a broad spectrum of vulnerability exploitation scenarios. On top of protecting the application from these common vulnerabilities, it has to protect APIs and mitigate DoS attacks, manage bot traffic and make a distinction between legitimate bots (search engines for instance) and bad ones like botnets, web-scrapers and more.

DDoS Attacks

63% suffered a denial-of-service attack against their application. DoS attacks render applications inoperable by exhausting the application’s resources. Buffer overflows and HTTP floods were the most common types of DoS attack, and this form of attack is more common in APAC. 36% find HTTP/Layer-7 DDoS the most difficult attack to mitigate. Half of the organizations take rate-based approaches (such as limiting the number of requests from a certain source or simply buying a rate-based DDoS protection solution), which are ineffective once the threshold is exceeded and real users can’t connect.
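To see why rate-based protection breaks down, consider the classic token-bucket limiter sketched below (parameters illustrative). It caps aggregate request volume, but once a flood drains the bucket, a legitimate request is rejected exactly like an attack request, which is the failure mode described above.

```python
# Classic token-bucket limiter of the kind rate-based defenses use; the
# parameters are illustrative. Note the failure mode: once a flood drains
# the bucket, legitimate requests are rejected too.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec          # refill rate (requests/second)
        self.capacity = burst             # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over threshold: attacker and real user look identical

bucket = TokenBucket(rate_per_sec=100, burst=200)  # shared across all sources
```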

API Attacks

APIs simplify the architecture and delivery of application services and make digital interactions possible. Unfortunately, they also introduce a wide range of risks and vulnerabilities as a backdoor for hackers to break into networks. Through APIs, data is exchanged in HTTP where both parties receive, process and share information. A third party is theoretically able to insert, modify, delete and retrieve content from applications. This is nothing but an invitation to attack:

  • 62% of respondents did not encrypt data sent via API
  • 70% of respondents did not require authentication
  • 33% allowed third parties to perform actions (GET/POST/PUT/DELETE)

Attacks against APIs:

  • 39% Access violations
  • 32% Brute-force
  • 29% Irregular JSON/XML expressions
  • 38% Protocol attacks
  • 31% Denial of service
  • 29% Injections

Bot Attacks

The amount of both good and bad bot traffic is growing. Organizations are forced to increase network capacity and need to be able to precisely tell a friend from a foe so both customer experience and security are maintained. Surprisingly, 98% claimed they can make such a distinction. However, a similar amount sees web-scraping as a significant threat. 87% were impacted by such an attack over the past 12 months, despite a variety of methods companies use to overcome the challenge – CAPTCHA, in-session termination, IP-based detection or even buying a dedicated anti-bot solution.

Impact of Web-scraping:

  • 50% gathered pricing information
  • 43% copied website
  • 42% theft of intellectual property
  • 37% inventory queued/being held by bots
  • 34% inventory held
  • 26% inventory bought out

Data Breaches

Multinational organizations keep close tabs on what kinds of data they collect and share. However, almost every other business (46%) reports having suffered a breach. On average an organization suffers 16.5 breach attempts every year. Most (85%) take between hours and days to discover. Data breaches are the most difficult attack to detect, as well as mitigate, in the eyes of our survey respondents.

How do organizations discover data breaches?

  • 69% Anomaly detection tools/SIEM
  • 51% Darknet monitoring service
  • 45% Information was leaked publicly
  • 27% Ransom demand

IMPACT OF ATTACKS

Negative consequences such as loss of reputation, customer compensation, legal action (more common in EMEA), churn (more common in APAC), stock price drops (more common in AMER) and executives who lose their jobs are quick to follow a successful attack, while the process of repairing the damage of a company’s reputation is long and not always successful. About half admitted having encountered such consequences.

Securing Emerging Application Development Frameworks

The rapidly growing amount of applications and their distribution across multiple environments requires adjustments that lead to variations once a change to the application is needed. It is nearly impossible to deploy and maintain the same security policy efficiently across all environments. Our research shows that ~60% of all applications undergo changes on a weekly basis. How can the security team keep up?

While 93% of organizations use a web application firewall (WAF), only three in ten use a WAF that combines both positive and negative security models for effective application protection.

Technologies Used By DevOps

  • 63% – DevOps and Automation Tools
  • 48% – Containers (3 in 5 use Orchestration)
  • 44% – Serverless / FaaS
  • 37% – Microservices

Among the respondents that used micro-services, one-half rated data protection as the biggest challenge, followed by availability assurance, policy enforcement, authentication, and visibility.

Summary

Is there a notion that organizations are confident? Yes. Is that a false sense of security? Yes. Attacks are constantly evolving and security measures are not foolproof. Having application security tools and processes in place may provide a sense of control, but they are likely to be breached or bypassed sooner or later. Another question we are left with is whether senior management is fully aware of the day-to-day incidents. Rightfully so, they look to their internal teams tasked with application security to manage the issue, but there seems to be a disconnect between their perceptions of the effectiveness of their organizations’ application security strategies and the actual exposure to risk.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Security | Cloud Security | DDoS Attacks | Security | WAF

Protecting Sensitive Data: The Death of an SMB

September 26, 2018 — by Mike O'Malley


True or False?

90% of small businesses lack any type of data protection for their company and customer information.

The answer?

Unfortunately true.

Due to this lack of care, 61% of data breach victims are specifically small businesses, according to service provider Verizon’s 2018 Data Breach Investigations Report.

Although large corporations garner the most attention in mainstream headlines, small and mid-sized businesses (SMBs) are increasingly attractive to hackers because of the combination of valuable records and a lack of security protections. The high priority placed on sensitive data protection should not be limited to large companies but should apply to organizations of all sizes.

While large corporations house large amounts of data, they are also capable of supporting their data centers with the necessary protections. The combination of lacking security resources while maintaining sensitive personal information is what makes smaller businesses perfect targets for attackers. Hackers aren’t simply looking at how much information they can gather, but at the ease of access to that data – an area where SMBs are largely deficient.

The bad publicity and dark connotation that data breaches hold create a survive-or-die situation for SMBs, but there are ways SMBs can mitigate the threat despite limited resources – and they exist in the cloud.

The Struggle to Survive

Because of their smaller stature, most SMBs struggle with the ability to manage cybersecurity protections and mitigation of attacks – especially data breaches. In fact, financial services company UPS Capital found that 60% of smaller businesses go out of business within six months of a cyberattack. Unlike business giants, SMBs cannot afford the financial hit of a data breach.

Security and privacy of sensitive data are a hot topic in today’s society and an increasingly strong influence on customers’ purchasing decisions. Customers are willing to pay more for security protections. Audit giant KPMG reports that among mobile service providers alone, consumers would not hesitate to switch to a carrier offering better security, as long as pricing is competitive – or even at a moderate premium.

[You might also like: Protecting Sensitive Data: What a Breach Means to Your Business]

One Person Just Isn’t Enough

Many SMBs prioritize their business over cybersecurity because of the false belief that attackers go after large companies first. Research center Ponemon Institute reports that 51% of its survey respondents say their company believes it is too small to be targeted. Businesses that do invest in cybersecurity often focus narrowly on anti-virus solutions and neglect other types of attacks – such as DDoS, malware, and system exploits – that intrusion detection systems can protect against.

Auto dealerships, for example, are typically family-owned and operated businesses valued at around $4 million USD, with an average of 15-20 employees. At that size, there is usually only one employee managing IT responsibilities. Dealerships attempt to cover their security needs with this single employee, who may hold relevant certifications and experience and is equipped to handle day-to-day tasks, but not to fend off high-level attacks and threats. Ponemon Institute research reports that 73% of respondents believe they are unable to achieve fully effective IT security because of insufficient personnel.

A study conducted by news publication Automotive News found that 33% of consumers lack confidence in how dealerships protect sensitive data. The seriousness of cybersecurity protection, however, should correlate not with the number of employees but with the amount and value of the sensitive data collected. The common error dealerships make isn’t carelessness in handling sensitive data, but underestimating their likelihood of being attacked.

Dealerships collect valuable consumer information, both personal and financial – ranging from driver’s license information and social security numbers to bank account information and even past vehicle records. Insufficient budgets and poor management of IT security make auto dealerships a prime target. In fact, in 2016 software company MacKeeper revealed a massive data breach of 120+ U.S. dealership systems exposed on Shodan – a search engine for connected but unsecured databases and devices. The breach originated from individual dealership systems being backed up to the vendor’s common central systems without any cybersecurity protections in place.

The Answer is in the Clouds

Cybersecurity is often placed on the back burner of company priorities, perceived as an unnecessary expenditure because of a flawed perception and an underestimated likelihood of being attacked. However, today’s consumers place a high value on the protection of personal data – enough for it to be the deciding factor in which OS or mobile app/site they frequent, and likely which SMB they patronize.

Witnessing the growing trend of data breaches and the rapid advancement of cyberattacks, SMBs are taking note and beginning to increase spending. It is crucial for organizations not only to increase their security budgets but to spend them effectively and efficiently. Security vendor Cyren and research firm Osterman Research found that 63% of SMBs are increasing their security spending, yet still experience breaches.

Internal security systems may seem more secure to smaller business owners, but SMBs lack the security architecture and expertise needed to safeguard the data they house. Cloud solutions offer what these businesses need: data storage with stronger security protection services. Even so, in the same Cyren and Osterman Research report, only 29% of IT managers said they are open to utilizing cloud services. By adopting cloud-based security, small- and medium-sized businesses no longer have to depend on one-person IT departments and can focus on growing their business. Cloud-based security solutions provide enterprise-grade protection alongside the flexibility and agility that smaller organizations typically lack compared to their large-scale brethren.

Managed security vendors offer a range of fully managed cloud security solutions, from WAF to DDoS protection, and are capable of providing more accurate real-time protection and coverage. Although the security is provided by an outside firm, reports and audits can be supplied for deeper analysis of both the attacks and the company’s defenses. Outsourcing this type of security service to experts enables SMBs to keep prioritizing and achieving their business goals while protecting their own and their customers’ data.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Attack Types & Vectors, Security

Free DNS Resolver Services and Data Mining

August 22, 2018 — by Lior Rozen


Why would companies offer free recursive DNS servers? DNS data is extremely valuable for threat intelligence. If a company runs a recursive DNS service for consumers, it can collect data on new domains as they “pop up,” analyze trends, build baselines of domain resolution behavior, and enrich its threat intelligence overall (machine learning and big data are often applied here). Companies can also sell this data to advertisers to measure site ratings and build user profiles.
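As a toy illustration of the “new domain” baseline idea (all names and thresholds here are hypothetical), a resolver operator could track when each domain is first observed and flag recent arrivals for reputation scoring:

```python
import time

first_seen = {}                 # domain -> timestamp of first observation
NEW_DOMAIN_WINDOW = 24 * 3600   # treat domains first seen within 24h as "new"

def observe(domain, now=None):
    """Record a resolved domain; return True while it is still considered new."""
    now = time.time() if now is None else now
    if domain not in first_seen:
        first_seen[domain] = now
        return True
    return (now - first_seen[domain]) < NEW_DOMAIN_WINDOW

# Freshly appearing domains are a classic threat-intelligence signal,
# e.g. for spotting malware command-and-control infrastructure early.
if observe("newly-popped-up-domain.example"):
    print("new domain observed - candidate for reputation scoring")
```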

The consumer DNS resolver market is dominated by ISPs, along with well-known public servers from Google (8.8.8.8) and Level3 (CenturyLink). Since Cisco bought OpenDNS in August 2015, it has also become a major player, offering DNS services for individuals and organizations through its cloud security platform, Umbrella. Cisco OpenDNS focuses on malware prevention, as well as parental controls for consumers. Akamai is also in the market, offering both recursive DNS for enterprises (a rather new service, based on its 2015 acquisition of Xerocole) and authoritative DNS services for its CDN clients. In several publications, Akamai claims to see more than 30% of internet traffic, and it uses this data as an add-on feed to its KONA service.

[You might also like: DNS and DNS Attacks]

In the fall of 2017, IBM announced its new quad 9 (9.9.9.9) DNS service. This security-focused DNS uses IBM’s threat intelligence to prevent the resolution of known malicious domains (and thereby protect against malware), with approximately 70 servers worldwide. It claims to offer decent speed, and IBM has promised not to store any personally identifiable information (PII). On April 1, 2018, Cloudflare came out with a new quad 1 resolver – 1.1.1.1 – that focuses on speed. With more than 1,000 servers, it promises to be the fastest resolver from any location. Additionally, Cloudflare promises never to sell resolver user data and to delete the resolver logs every 24 hours. Several independent measurements have confirmed Cloudflare’s speed claims; it is typically the fastest resolver after the ISP’s own. The one downside of such a large server fleet is propagation time: quad 1 takes significantly longer than other DNS providers to pick up changed DNS records.
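The speed comparisons referenced above are easy to approximate yourself. Here is a rough sketch using the dnspython library; a single query is subject to caching and network jitter, so treat this as illustrative rather than a rigorous benchmark:

```python
import time
import dns.resolver  # pip install dnspython

RESOLVERS = {"Google": "8.8.8.8", "Quad9": "9.9.9.9", "Cloudflare": "1.1.1.1"}

def time_lookup(server, name="example.com"):
    resolver = dns.resolver.Resolver(configure=False)  # ignore the OS resolver config
    resolver.nameservers = [server]
    start = time.perf_counter()
    resolver.resolve(name, "A")  # dnspython >= 2.0; older versions use .query()
    return (time.perf_counter() - start) * 1000  # milliseconds

for label, ip in RESOLVERS.items():
    print("%s (%s): %.1f ms" % (label, ip, time_lookup(ip)))
```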

Another DNS initiative is DoH – DNS over HTTPS. This is a new standard proposal that can be viewed as the encrypted version of DNS (what HTTPS is to HTTP). The focus here is on both privacy and security: DNS requests are sent over HTTPS to prevent interception. Even when a user configures a different DNS resolver, the ISP can still track the clear-text DNS requests, log them, or override them to use its own resolver; the DoH protocol prevents this. Two major cloud DNS recursive services support the protocol – Cloudflare’s recent quad 1 and Google’s DNS – as well as some smaller ones. Mozilla recently ran a proof of concept with native Firefox support for DoH, which was covered by Ars Technica.
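Because DoH rides on ordinary HTTPS, a lookup needs nothing more than an HTTP client. Below is a minimal sketch against Cloudflare’s JSON-flavored DoH endpoint; note that the DoH wire format proper uses the application/dns-message content type, while the JSON variant shown here is a convenience API offered by Cloudflare and Google:

```python
import requests  # pip install requests

# Resolve example.com via Cloudflare's DNS-over-HTTPS JSON endpoint.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    # Each answer carries the record name, TTL, and resolved data (an IP for A records).
    print(answer["name"], answer["data"])
```

To an on-path observer (including the ISP), this exchange looks like any other HTTPS request to cloudflare-dns.com, which is exactly the privacy property described above.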

[You might also like: DNS Reflective Attacks]

As we’ve shown, DNS continues to evolve, both as a specification and as a service. Companies continue to invest heavily in collecting DNS data because they see the value in it. While each company provides a slightly different service, most are looking to mine the data for their own purposes – and to get that data, they are happy to provide DNS service for free and compete in this saturated market.

Read “Radware’s 2017-2018 Global Application & Network Security Report” to learn more.

Download Now