
Security

Bot Managers Are a Cash-Back Program For Your Company

April 17, 2019 — by Ben Zilberman


In my previous blog, I briefly discussed what bot managers are and why they are needed. Today, we will conduct a short ROI exercise (perhaps the toughest task in information security!).

To recap: Bots generate a little over half of today’s internet traffic. Roughly half of that half (i.e., a quarter, for rusty ones like myself…) is generated by bad bots, a.k.a. automated programs targeting applications with the intent to steal information or disrupt service. Over the years, they have become so sophisticated that they can easily mimic human behavior, perform seemingly uncorrelated violations, and fool most application security solutions out there.


These bots affect each and every arm of your business. If you are in the e-commerce or travel industries, no need to tell you that… if you aren’t, go to your next C-level executive meeting and look for those who scratch their heads the most. Why? Because they can’t understand where the money goes, and why the predicted performance didn’t materialize as expected.

Let’s go talk to these C-Suite executives, shall we?

Chief Revenue Officer

Imagine you are selling products online–whether that’s tickets, hotel rooms or even 30-pound dog food bags–and this is your principal channel for revenue generation. Now, imagine that bots act as faux buyers and hold the inventory “hostage” so genuine customers cannot access it.

[You may also like: Will We Ever See the End of Account Theft?]

Sure, you can release held inventory every 10 minutes, but because the buyer is an automated program, it will re-initiate the process in a split second. And what about CAPTCHA? Don’t assume CAPTCHA will weed out all bots; some bots activate only after a human has solved it. How would you know whether you are communicating with a bot or a human? (Hint: you’d know if you had a bot management solution.)

Wondering why the movie theater is half empty even though it’s a hot release? Did everybody go to the theater across the street? No. Bots are to blame, and they cause direct, immediate and painful revenue loss.

[You may also like: Bots 101: This is Why We Can’t Have Nice Things]

Chief Marketing Officer

Digital marketing tools, end-to-end automation of the customer journey, lead generation, and content syndication all help CMOs measure ROI and plan budgets. But what if the analyses they provide are false? What if half the clicks you are paying for are fictitious, and you were the target of a click-fraud campaign run by bots? What if a competitor uses a bot to scrape registrant data from your landing pages? Unfortunately, bots often skew these analyses and can lead you to make the wrong decisions, resulting in poor performance. Without bot management, you’re simply wasting money.

Chief Operations Officer/Chief Information Officer

Does your team complain that your network resources are in the “red zone,” close to maximum performance, but your customer base isn’t growing at the same pace?

Blame bots.

[You may also like: Disaster Recovery: The Big, Bad Bot Problem]

Obviously, some bots are “good,” like automated services that help accelerate and streamline your business, analyze data quickly and help you make better decisions. However, bad bots (26% of the total traffic you are processing) put a load on your infrastructure and make your IT staff cry out for more capacity. So you invest $200-500K in bigger firewalls, ADCs and broader internet pipes, and you upgrade your servers.

Next thing you know, a large DDoS attack from IoT botnets knocks everything down. If only you had invested $50k upfront to filter out the bad traffic from the get-go… That could’ve translated to $300k cash back!
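The back-of-the-envelope arithmetic here can be made explicit (the dollar figures are the illustrative ones from this post, with an assumed midpoint for the capacity spend, not measured benchmarks):

```python
# Illustrative ROI calculation using the figures cited above.
avoided_upgrade_cost = 350_000   # assumed midpoint of the $200-500K capacity spend
bot_mgmt_cost = 50_000           # upfront bot management investment

# Filtering bad-bot traffic up front avoids the capacity upgrade
# the junk load would otherwise force.
cash_back = avoided_upgrade_cost - bot_mgmt_cost
print(f"Cash back: ${cash_back:,}")  # Cash back: $300,000
```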

Chief Information Security Officer

Every hour, a new security vendor knocks on your door with another solution for a 0.0001%-probability what-if scenario… your budget is spread all over the place, spent on multiple protections and a complex architecture that tries to take an actionable snapshot of what’s going on at every moment. At the end of the day, your task is to protect your company’s information assets. And there are so many ways to get a hold of those precious secrets!

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Bad bots are your enemy. They can scrape content, files, pricing, and intellectual property from your website. They can take over user accounts by cracking their passwords or launch a credential stuffing attack (and then retrieve their payment info). And they can take down service with DDoS attacks and hold up inventory, as I previously mentioned.

You could significantly reduce these risks if you could distinguish human from bot traffic (remember, sophisticated bots today can mimic human behavior and bypass all sorts of challenges, not only CAPTCHA) and, more than that, determine which bots are legitimate and which are malicious.
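As a very rough illustration of this kind of signal-based classification, here is a toy scoring sketch; every signal name and threshold is invented for illustration, and real bot managers rely on far richer behavioral and device fingerprinting:

```python
def classify_traffic(signals: dict) -> str:
    """Toy scoring of per-session signals (all names and thresholds
    are invented for illustration, not a real bot manager's logic)."""
    if signals.get("known_good_bot_ua", False):       # e.g. a verified crawler
        return "legitimate bot"
    score = 0
    if signals.get("solved_captcha") is False:        # failed a challenge
        score += 1
    if signals.get("requests_per_minute", 0) > 120:   # inhuman click rate
        score += 2
    if not signals.get("mouse_movement", True):       # headless-browser hint
        score += 2
    return "likely bot" if score >= 3 else "likely human"
```

For example, a session firing hundreds of requests per minute with no mouse movement scores as a likely bot, while a verified crawler user agent is classified as a legitimate bot rather than blocked.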

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Bot management means less risk, a better security posture, a stable business, and no budget increases or unexpected expenses. Cash back!

Chief Financial Officer

Your management peers could have made better investments, but now you have to clean up their mess. This can include paying legal fees and compensation to customers whose data was compromised, paying regulatory fines for coming up short in compliance, shelling out for a crisis management consultant firm, and absorbing costs associated with inventory hold up and downed service.

If you only had a bot management solution in place… so much cash back.

The Bottom Line

Run–do not walk–to your CEO and request a much-needed bot management solution. Not only does s/he have nothing to lose, s/he has a lot to gain.

* This week, Radware integrates its bot management service with its cloud WAF for a complete, fully managed application security suite.


Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Attack Mitigation, DDoS, Security

DDoS Protection Requires Looking Both Ways

March 26, 2019 — by Eyal Arazi


Service availability is a key component of the user experience. Customers expect services to be constantly available and fast-responding, and any downtime can result in disappointed users, abandoned shopping carts, and lost customers.

Consequently, DDoS attacks are increasing in complexity, size and duration. Radware’s 2018 Global Application and Network Security Report found that over the course of a year, sophisticated DDoS attacks, such as burst attacks, increased by 15%, HTTPS floods grew by 20%, and over 64% of customers were hit by application-layer (L7) DDoS attacks.

Some Attacks are a Two-Way Street

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.

Some examples of such attacks include:

Out of State Protocol Attacks: Some DDoS attacks exploit weaknesses in protocol communication processes, such as TCP’s three-way handshake sequence, to create ‘out-of-state’ connection requests that draw out the connection process in order to exhaust server resources. While some attacks of this type, such as a SYN flood, can be stopped by examining the inbound channel only, others require visibility into the outbound channel as well.

An example of this is an ACK flood, whereby attackers continuously send forged TCP ACK packets towards the victim host. The target host then tries to associate each ACK with an existing TCP connection, and if none exists, it drops the packet. However, this process consumes server resources, and large numbers of such requests can deplete system resources. In order to correctly identify and mitigate such attacks, defenses need visibility into both inbound SYNs and outbound SYN/ACK replies, so that they can verify whether the ACK packet is associated with any legitimate connection request.
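The bi-directional check described above can be sketched as a toy connection table: inbound SYNs and outbound SYN/ACKs establish state, and ACKs matching no entry are counted as suspect (a simplified model of what a stateful mitigation device tracks, not a real implementation):

```python
class AckFloodDetector:
    """Toy stateful check: an inbound ACK is considered legitimate only
    if the SYN / SYN-ACK exchange for that flow was seen earlier."""

    def __init__(self):
        self.half_open = set()    # flows where an inbound SYN arrived
        self.handshaking = set()  # flows where our SYN/ACK went out
        self.orphan_acks = 0      # ACKs matching no known handshake

    def inbound_syn(self, flow):      # client -> server SYN
        self.half_open.add(flow)

    def outbound_synack(self, flow):  # server -> client SYN/ACK
        if flow in self.half_open:
            self.handshaking.add(flow)

    def inbound_ack(self, flow):      # client -> server ACK
        if flow in self.handshaking:
            return True               # completes a real handshake
        self.orphan_acks += 1         # spoofed / out-of-state ACK
        return False
```

An orphan-ACK count climbing while the handshake tables stay flat is the signature an ACK flood leaves, and it is only visible when both traffic directions are inspected.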

[You may also like: An Overview of the TCP Optimization Process]

Reflection/Amplification Attacks: Such attacks exploit asymmetric responses between the connection requests and replies of certain protocols or applications. Again, some types of such attacks require visibility into both the inbound and outbound traffic channels.

An example of such an attack is a large-file outbound pipe saturation attack. In such attacks, the attackers identify a very large file on the target network and send a connection request to fetch it. The connection request itself can be only a few bytes in size, but the ensuing reply can be extremely large. Large numbers of such requests can clog up the outbound pipe.

Another example is memcached amplification attacks. Although such attacks are most frequently used to overwhelm a third-party target via reflection, they can also be used to saturate the outbound channel of the targeted network.
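The asymmetry these attacks exploit is often expressed as a bandwidth amplification factor, response size divided by request size; a minimal calculation (the byte counts are illustrative, not measured values):

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor: bytes of reply produced per
    byte of (possibly spoofed) request."""
    return response_bytes / request_bytes

# Illustrative: a ~15-byte request triggering a 1 MB stored reply.
print(round(amplification_factor(15, 1_048_576)))  # 69905
```

Even modest request volumes multiplied by factors of this magnitude explain how a handful of abused servers can saturate a pipe in either direction.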

[You may also like: 2018 In Review: Memcache and Drupalgeddon]

Scanning Attacks: Large-scale network scanning attempts are not just a security risk, but also frequently bear the hallmark of a DDoS attack, flooding the network with malicious traffic. Such scans are based on sending large numbers of connection requests to host ports and seeing which ports answer back (thereby indicating that they are open). However, this also generates high volumes of error responses from closed ports. Mitigating such attacks requires visibility into return traffic to identify the error-response rate relative to actual traffic, so that defenses can conclude that an attack is taking place.
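A minimal sketch of this error-rate heuristic, assuming we can count a source's requests and the error replies (RSTs, ICMP port-unreachable) they trigger on the outbound channel; the threshold is invented for illustration:

```python
def scan_suspected(total_requests: int, error_replies: int,
                   threshold: float = 0.5) -> bool:
    """Flag a source when its share of error replies relative to total
    requests crosses a threshold. The 50% default is illustrative only;
    real systems would baseline normal error rates per service."""
    if total_requests == 0:
        return False
    return error_replies / total_requests > threshold
```

A legitimate client hitting real services produces few errors; a port scanner probing mostly closed ports produces a tellingly high ratio, which is only measurable with visibility into the return traffic.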

Server Cracking: Like scanning attacks, server cracking involves sending large numbers of requests in order to brute-force system passwords. This, too, leads to a high error-reply rate, which requires visibility into both the inbound and outbound channels in order to identify the attack.

Stateful Application-Layer DDoS Attacks: Certain types of application-layer (L7) DDoS attacks exploit known protocol weaknesses in order to create large numbers of spoofed requests that exhaust server resources. Mitigating such attacks requires state-aware, bi-directional visibility to identify attack patterns, so that the relevant attack signature can be applied to block them. Examples of such attacks are low-and-slow and application-layer (L7) SYN floods, which draw out HTTP and TCP connections in order to continuously consume server resources.
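As a toy illustration of spotting the low-and-slow pattern, the heuristic below flags connections that stay open a long time while trickling almost no data; both thresholds are assumptions, not tuned values:

```python
def is_low_and_slow(duration_s: float, bytes_sent: int,
                    min_rate_bps: float = 100.0,
                    min_age_s: float = 30.0) -> bool:
    """Toy heuristic: a connection held open well past min_age_s while
    sending data below min_rate_bps matches the low-and-slow pattern.
    Both thresholds are illustrative assumptions."""
    if duration_s < min_age_s:
        return False          # too young to judge
    rate_bps = (bytes_sent * 8) / duration_s
    return rate_bps < min_rate_bps
```

A 2-minute connection that has sent only a few hundred bytes trips the check, while a normal download at megabit rates does not; tracking this per-connection state is exactly what "state-aware" visibility means here.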

[You may also like: Layer 7 Attack Mitigation]

Two-Way Attacks Require Bi-Directional Defenses

As online service availability becomes ever more important, hackers are coming up with more sophisticated attacks than ever in order to overwhelm defenses. Many such attack vectors – frequently the more sophisticated and potent ones – either target or take advantage of the outbound communication channel.

Therefore, in order for organizations to fully protect themselves, they must deploy protections that allow bi-directional inspection of traffic in order to identify and neutralize such threats.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Mitigation, DDoS, DDoS Attacks

What Do Banks and Cybersecurity Have in Common? Everything.

February 7, 2019 — by Radware


New cyber-security threats require new solutions. New solutions require a project to implement them. The problems and solutions seem infinite while budgets remain bounded. Therefore, the challenge becomes how to identify the priority threats, select the solutions that deliver the best ROI and stretch dollars to maximize your organization’s protection. Consultants and industry analysts can help, but they too can be costly options that don’t always provide the correct advice.

So how best to simplify the decision-making process? Use an analogy. Consider that every cybersecurity solution has a counterpart in the physical world. To illustrate this point, consider the security measures at banks. They make a perfect analogy, because banks are just like applications or computing environments; both contain valuables that criminals are eager to steal.

The first line of defense at a bank is the front door, which is designed to allow people to enter and leave while providing a first layer of defense against thieves. Network firewalls fulfill the same role within the realm of cyber security. They allow specific types of traffic to enter an organization’s network but block mischievous visitors from entering. While firewalls are an effective first line of defense, they’re not impervious. Just like surreptitious robbers such as Billy the Kid or John Dillinger, SSL/TLS-based encrypted attacks or nefarious malware can sneak through this digital “front door” via a standard port.

Past the entrance there is often a security guard, the physical counterpart of an IPS or anti-malware device. This “security guard,” typically an anti-malware and/or heuristic-based IPS function, seeks to identify unusual behavior or other indicators that trouble has entered the bank, such as somebody wearing a ski mask or perhaps carrying a concealed weapon.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Once the hacker gets past these perimeter security measures, they find themselves at the presentation layer of the application, or in the case of a bank, the teller. There is security here as well: first, authentication (do you have an account?), and second, two-factor authentication (an ATM card and security PIN). IPS and anti-malware devices work in concert with SIEM management solutions to serve as security cameras, performing additional security checks. Just like a bank leveraging the FBI’s Most Wanted List, these solutions leverage crowdsourcing and big-data analytics to analyze data from a massive global community and identify bank-robbing malware in advance.

A robber will often demand access to the bank’s vault. In the realm of IT, this is the database, where valuable information such as passwords, credit card or financial transaction information or healthcare data is stored. There are several ways of protecting this data, or at the very least, monitoring it. Encryption and database application monitoring solutions are the most common.

Adapting for the Future: DDoS Mitigation

To understand how and why cyber-security models will have to adapt to meet future threats, let’s outline three obstacles they’ll have to overcome in the near future: advanced DDoS mitigation, encrypted cyber-attacks, and DevOps and agile software development.

[You may also like: Agile, DevOps and Load Balancers: Evolution of Network Operations]

A DDoS attack is any cyber-attack that compromises a company’s website or network and impairs the organization’s ability to conduct business. Take an e-commerce business for example. If somebody wanted to prevent the organization from conducting business, it’s not necessary to hack the website but simply to make it difficult for visitors to access it.

Leveraging the bank analogy, this is why banks and financial institutions leverage multiple layers of security: it provides an integrated, redundant defense designed to meet a multitude of potential situations in the unlikely event a bank is robbed. This also includes the ability to quickly and effectively communicate with law enforcement. In the world of cyber security, multi-layered defense is also essential. Why? Because preparing for “common” DDoS attacks is no longer enough. With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever. This is why hybrid protection, which combines both on-premise and cloud-based mitigation services, is critical.

[You may also like: 8 Questions to Ask in DDoS Protection]

Why two systems when it comes to cyber security? Because it offers the best of both worlds. When a DDoS solution is deployed on-premise, organizations benefit from immediate and automatic attack detection and mitigation. Within a few seconds of the initiation of a cyber-assault, the online services are well protected and the attack is mitigated. However, an on-premise DDoS solution cannot handle volumetric network floods that saturate the Internet pipe. These attacks must be mitigated from the cloud.

Hybrid DDoS protections aspire to offer best-of-breed attack mitigation by combining on-premise and cloud mitigation into a single, integrated solution. The hybrid solution chooses the right mitigation location and technique based on attack characteristics. In the hybrid solution, attack detection and mitigation starts immediately and automatically using the on-premise attack mitigation device. This stops various attacks from diminishing the availability of the online services. All attacks are mitigated on-premise, unless they threaten to block the Internet pipe of the organization. In case of pipe saturation, the hybrid solution activates cloud mitigation and the traffic is diverted to the cloud, where it is scrubbed before being sent back to the enterprise.
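The diversion decision at the heart of the hybrid model can be sketched as a simple capacity check; the 80% trigger point below is an assumed example, not a vendor default:

```python
def mitigation_location(inbound_bps: float, pipe_capacity_bps: float,
                        divert_ratio: float = 0.8) -> str:
    """Toy version of the hybrid decision: mitigate on-premise by
    default, divert to cloud scrubbing when inbound traffic threatens
    to saturate the Internet pipe. The 80% trigger is an assumption."""
    if inbound_bps >= divert_ratio * pipe_capacity_bps:
        return "divert-to-cloud"
    return "on-premise"

# A 9 Gbps flood against a 10 Gbps pipe crosses the assumed trigger.
print(mitigation_location(9e9, 10e9))  # divert-to-cloud
```

Everything below the trigger stays on-premise for the fastest response; only pipe-threatening volumes pay the latency cost of diversion and scrubbing.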

[You may also like: Choosing the Right DDoS Solution – Part IV: Hybrid Protection]

An ideal hybrid solution also shares essential information about the attack between on-premise mitigation devices and cloud devices to accelerate and enhance the mitigation of the attack once it reaches the cloud.

Inspecting Encrypted Data

Companies have been encrypting data for well over 20 years. Today, over 50% of Internet traffic is encrypted. SSL/TLS encryption is still the most effective way to protect data, as it ties the encryption to both the source and destination. This is a double-edged sword, however. Hackers are now leveraging encryption to create new, stealthy attack vectors for malware infection and data exfiltration. In essence, they’re a wolf in sheep’s clothing. To stop hackers from leveraging SSL/TLS-based cyber-attacks, organizations require computing resources to inspect communications and ensure they’re not carrying malicious payloads. These growing resource requirements make it challenging for anything but purpose-built hardware to conduct inspection.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

The equivalent in the banking world is twofold. If somebody were to enter wearing a ski mask, that person probably wouldn’t be allowed to conduct a transaction, or secondly, there can be additional security checks when somebody enters a bank and requests a large or unique withdrawal.

Dealing with DevOps and Agile Software Development

Lastly, how do we ensure that, as applications become more complex, they don’t become increasingly vulnerable either from coding errors or from newly deployed functionality associated with DevOps or agile development practices? The problem is most cyber-security solutions focus on stopping existing threats. To use our bank analogy again, existing security solutions mean that (ideally), a career criminal can’t enter a bank, someone carrying a concealed weapon is stopped or somebody acting suspiciously is blocked from making a transaction. However, nothing stops somebody with no criminal background or conducting no suspicious activity from entering the bank. The bank’s security systems must be updated to look for other “indicators” that this person could represent a threat.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

In the world of cyber-security, the key is implementing a web application firewall that adapts to evolving threats and applications. A WAF accomplishes this by automatically detecting and protecting new web applications as they are added to the network via automatic policy generation. It should also differentiate between false positives and false negatives. Why? Because just like a bank, web applications are being accessed both by desired legitimate users and undesired attackers (malignant users whose goal is to harm the application and/or steal data). One of the biggest challenges in protecting web applications is the ability to accurately differentiate between the two and identify and block security threats while not disturbing legitimate traffic.

Adaptability is the Name of the Game

The world we live in can be a dangerous place, both physically and digitally. Threats are constantly changing, forcing both financial institutions and organizations to adapt their security solutions and processes. When contemplating the next steps, consider the following:

  • Use common sense and logic. The marketplace is saturated with offerings. Understand how a cybersecurity solution will fit into your existing infrastructure and the business value it will bring by keeping your organization up and running and your customers’ data secure.
  • Understand the long-term TCO of any cyber security solution you purchase.
  • The world is changing. Ensure that any cyber security solution you implement is designed to adapt to the constantly evolving threat landscape and your organization’s operational needs.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Types & Vectors, DDoS, DDoS Attacks

Top 3 Cyberattacks Targeting Proxy Servers

January 16, 2019 — by Daniel Smith


Many organizations are now realizing that DDoS defense is critical to maintaining an exceptional customer experience. Why? Because nothing diminishes load times or impacts the end user’s experience more than a cyberattack.

As facilitators of access to content and networks, proxy servers have become a focal point for attackers because of the fallout a successful assault can cause.

Attacking the CDN Proxy

New vulnerabilities in content delivery networks (CDNs) have left many wondering if the networks themselves are vulnerable to a wide variety of cyberattacks. Here are five cyber “blind spots” that are often attacked – and how to mitigate the risks:

Increase in dynamic content attacks. Attackers have discovered that the treatment of dynamic content requests is a major blind spot in CDNs. Since dynamic content is not stored on CDN servers, all requests for dynamic content are sent to the origin’s servers. Attackers take advantage of this behavior to generate attack traffic that contains random parameters in HTTP GET requests. CDN servers immediately redirect this attack traffic to the origin, expecting the origin’s servers to handle the requests. However, in many cases the origin’s servers do not have the capacity to handle all those attack requests and fail to provide online services to legitimate users, creating a denial-of-service situation. Many CDNs can rate-limit the number of dynamic requests to the server under attack, but they cannot distinguish attackers from legitimate users, so the rate limit ends up blocking legitimate users as well.
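One simple way to surface this cache-busting pattern is to count distinct query strings per path, since a path suddenly seeing thousands of unique parameter combinations is likely being used to bypass the cache; a toy sketch with an invented threshold:

```python
from collections import defaultdict

class CacheBustDetector:
    """Toy detector for the random-query-string pattern: track the
    number of distinct query strings seen per path. The threshold is
    illustrative; real systems would baseline per-path cardinality."""

    def __init__(self, max_unique_queries: int = 1000):
        self.max_unique = max_unique_queries
        self.seen = defaultdict(set)  # path -> set of query strings

    def observe(self, path: str, query: str) -> bool:
        """Record one request; return True once the path exceeds the
        allowed number of unique query strings (suspected attack)."""
        self.seen[path].add(query)
        return len(self.seen[path]) > self.max_unique
```

A product page legitimately sees a bounded set of parameters; a flood of never-repeating `rnd=`-style values blows past the bound almost immediately, which is a signal the CDN itself typically does not act on.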

SSL-based DDoS attacks. SSL-based DDoS attacks leverage this cryptographic protocol to target the victim’s online services. These attacks are easy to launch and difficult to mitigate, making them a hacker favorite. To detect and mitigate SSL-based attacks, CDN servers must first decrypt the traffic using the customer’s SSL keys. If the customer is not willing to provide the SSL keys to its CDN provider, then the SSL attack traffic is redirected to the customer’s origin. That leaves the customer vulnerable to SSL attacks. Such attacks that hit the customer’s origin can easily take down the secured online service.

[You may also like: SSL Attacks – When Hackers Use Security Against You]

During DDoS attacks, when web application firewall (WAF) technologies are involved, CDNs also have a significant scalability weakness in terms of how many SSL connections per second they can handle. Serious latency issues can arise. PCI and other security compliance issues are also a problem because they limit the data centers that can be used to service the customer. This can increase latency and cause audit issues.

Keep in mind these problems are exacerbated by the massive migration from RSA algorithms to ECC- and DH-based algorithms.

Attacks on non-CDN services. CDN services are often offered only for HTTP/S and DNS applications. Other online services and applications in the customer’s data center, such as VoIP, mail, FTP and proprietary protocols, are not served by the CDN. Therefore, traffic to those applications is not routed through the CDN. Attackers are taking advantage of this blind spot and launching attacks on such applications. They are hitting the customer’s origin with large-scale attacks that threaten to saturate the customer’s Internet pipe. Once the pipe is saturated, all the applications at the customer’s origin become unavailable to legitimate users, including the ones served by the CDN.

[You may also like: CDN Security is NOT Enough for Today]

Direct IP attacks. Even applications that are served by a CDN can be attacked once attackers launch a direct hit on the IP address of the web servers at the customer’s data center. These can be network-based flood attacks such as UDP floods or ICMP floods that will not be routed through CDN services and will directly hit the customer’s servers. Such volumetric network attacks can saturate the Internet pipe. That results in degradation to application and online services, including those served by the CDN.

Web application attacks. CDN protection from application-layer threats is limited, exposing the customer’s web applications to data leakage, theft and other threats common to web applications. Most CDN-based WAF capabilities are minimal, covering only a basic set of predefined signatures and rules. Many CDN-based WAFs do not learn HTTP parameters and do not create positive security rules; therefore, they cannot protect against zero-day attacks and emerging threats. For companies that do provide tuning for the web applications in their WAF, the cost of that level of protection is extremely high. In addition to these significant blind spots, most CDN security services are simply not responsive enough, resulting in security configurations that take hours to deploy manually. Many rely on technologies (e.g., rate limiting) that have proven inefficient in recent years and lack capabilities such as network behavioral analysis, challenge-response mechanisms and more.
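To illustrate what "learning HTTP parameters and creating positive security rules" means in practice, here is a minimal positive-security sketch; real WAFs also learn value types, lengths and character sets, so this simplified model is an assumption for illustration only:

```python
class PositiveSecurityModel:
    """Toy positive-security rule set: during a learning phase, record
    which parameters each endpoint legitimately uses; afterwards,
    reject requests carrying parameters that were never learned."""

    def __init__(self):
        self.allowed = {}  # path -> set of learned parameter names

    def learn(self, path: str, params: dict):
        """Learning phase: record observed legitimate parameters."""
        self.allowed.setdefault(path, set()).update(params)

    def permitted(self, path: str, params: dict) -> bool:
        """Enforcement phase: allow only learned parameter names."""
        learned = self.allowed.get(path)
        if learned is None:
            return False   # unknown endpoint: block until learned
        return set(params) <= learned
```

Unlike a signature list, this model blocks anything outside the learned baseline, which is why it can stop attacks no signature yet describes.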

[You may also like: Are Your Applications Secure?]

Finding the Watering Holes

Watering hole attack vectors are all about finding the weakest link in a technology chain. These attacks target often forgotten or overlooked automated processes, and they can lead to devastating results. What follows is a list of sample watering hole targets:

  • App stores
  • Security update services
  • Domain name services
  • Public code repositories to build websites
  • Web analytics platforms
  • Identity and access single sign-on platforms
  • Open source code commonly used by vendors
  • Third-party vendors that participate in the website

The DDoS attack on Dyn in 2016 remains the best example of the watering hole technique to date. However, we believe this vector will gain momentum heading into 2018 and 2019 as automation begins to pervade every aspect of our lives.

Attacking from the Side

In many ways, side channels are the most obscure and obfuscated attack vectors. This technique attacks the integrity of a company’s site through a variety of tactics:

  • DDoS the company’s analytics provider
  • Brute-force attack against all users or against all of the site’s third-party companies
  • Port the admin’s phone and steal login information
  • Massive load on “page dotting”
  • Large botnets to “learn” ins and outs of a site

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Attack Types & Vectors, DDoS, DDoS Attacks

2018 In Review: Memcache and Drupalgeddon

December 20, 2018 — by Daniel Smith


Attackers don’t just utilize old, unpatched vulnerabilities; they also exploit recent disclosures at impressive rates. This year we witnessed two worldwide events that highlight the evolution and speed with which attackers weaponize a vulnerability: Memcached and Drupalgeddon.

Memcached DDoS Attacks

In late February, Radware’s Threat Detection Network signaled an increase in activity on UDP port 11211. At the same time, several organizations began alerting to the same trend of attackers abusing Memcached servers for amplified attacks. A Memcached amplified DDoS attack makes use of legitimate third-party Memcached servers to send spoofed attack traffic to a targeted victim. Memcached, like other UDP-based services (SSDP, DNS and NTP), lacks native authentication and is therefore hijacked to launch amplified attacks against victims. The Memcached protocol was never intended to be exposed to the Internet and thus did not have sufficient security controls in place. Because of this exposure, attackers are able to abuse UDP port 11211 for reflective, volumetric DDoS attacks.

On February 27, Memcached version 1.5.6 was released, noting that UDP port 11211 was exposed and fixing the issue by disabling the UDP protocol by default. The following day, before the update could be widely applied, attackers leveraged this new attack vector to launch the world’s largest DDoS attack, a title previously held by the Mirai botnet.

There were two main concerns with regard to the Memcached vulnerability. The first centered on the number of exposed Memcached servers. With just under 100,000 servers exposed and only a few thousand required to launch a 1Tbps attack, the cause for concern is great. Most organizations at this point are likely unaware that they have vulnerable Memcached servers exposed to the Internet, and it takes time to block or filter this service. Memcached servers will remain vulnerable for some time, allowing attackers to generate volumetric DDoS attacks with few resources.
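Since 1.5.6 is the release that disabled UDP by default, a quick audit can compare deployed versions against it; a minimal sketch (a pure version check only; a running server may have UDP re-enabled or disabled regardless of the shipped default):

```python
def udp_exposed_by_default(version: str) -> bool:
    """Memcached versions before 1.5.6 shipped with UDP port 11211
    enabled by default, the behavior abused for amplification.
    This checks the default only, not the server's actual config."""
    parts = tuple(int(p) for p in version.split("."))
    return parts < (1, 5, 6)
```

Pairing this with an inventory of listening services gives a fast first pass at finding servers that should be patched, firewalled off UDP 11211, or taken off the public Internet entirely.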

[You may also like: Entering into the 1Tbps Era]

The second concern is the speed with which attackers began exploiting this vulnerability. The spike in activity was known for several days prior to the patch and publication of the Memcached vulnerability. Within 24 hours of publication, an attacker was able to build an amplification list of vulnerable Memcached servers and launch the massive attack.

Adding to this threat, Defcon.pro, a notorious stresser service, quickly incorporated Memcached into its premium offerings after the disclosure. Stresser services are normally quick to adopt the newest attack vector, for several reasons. The first is publicity: attackers looking to purchase DDoS-as-a-service will search for a platform offering the latest vectors, and including them signals that the service keeps up with demand. In addition, an operator might include a Memcached attack option to provide users with more power. A stresser service offering Memcached attacks will likely attract more customers looking for volume, which once again plays into marketing and availability.

[You may also like: The Rise of Booter and Stresser Services]

DDoS-as-a-service operators are running a business and are evolving at rapid rates to keep up with demand. Oftentimes these operators capitalize on the public attention created by news coverage, much like extortionists do. Similarly, ransom denial-of-service (RDoS) operators are quick to threaten the use of new tools due to the risks they pose. DDoS-as-a-service operators will do the same, but once the threat is mitigated by security experts, cyber criminals will look for newer vectors to incorporate into their latest toolkits or offerings.

This leads into the next example, the Drupalgeddon campaign, and how quickly hacktivists incorporated this attack vector into their toolkit for the purpose of spreading messages via defacements.

Drupalgeddon

In early 2018, Radware’s Emergency Response Team (ERT) was following AnonPlus Italia, an Anonymous-affiliated group that was engaged in digital protests throughout April and May. The group, involved in political hacktivism targeting the Italian government, executed numerous web defacements to protest war, religion, politics and financial power while spreading a message about their social network by abusing content management systems (CMS).

On April 20, 2018, AnonPlus Italia began a new campaign and defaced two websites to advertise their website and IRC channel. Over the next six days, AnonPlus Italia would claim responsibility for defacing 21 websites, 20 of which used the popular open-source CMS Drupal.

[You may also like: Hacking Democracy: Vulnerable Voting Infrastructure and the Future of Election Security]

Prior to these attacks, on March 29, 2018, the Drupal security team released a patch for a critical remote code execution (RCE) vulnerability in Drupal that allowed attackers to execute arbitrary code on unpatched servers, the result of an issue affecting multiple subsystems with default or common module configurations. Exploits for CVE-2018-7600 were posted to GitHub and Exploit-DB under the guise of being for educational purposes only. The first PoC was posted to Exploit-DB on April 13, 2018. On April 14, Legion B0mb3r, a member of the Bangladesh-based hacking group Err0r Squad, posted a video to YouTube demonstrating how to use CVE-2018-7600 to deface an unpatched version of Drupal. A few days later, on April 17, a Metasploit module was released to the public.
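The exploit has a distinctive fingerprint: CVE-2018-7600 payloads inject Drupal Form API properties (names beginning with `#`) into request parameters. The log-filter sketch below uses a heuristic pattern of my own devising, not a published IDS rule, to illustrate how such requests can be surfaced:

```python
import re
from urllib.parse import unquote

# Heuristic: flag '#' (or its URL-encoded form %23) introducing a
# parameter segment, as in element_parents=account/mail/%23value
SUSPECT = re.compile(r"[\[/=](%23|#)", re.IGNORECASE)

def looks_like_drupalgeddon(request_line: str) -> bool:
    # Check both the raw line and its URL-decoded form
    return bool(SUSPECT.search(request_line) or
                SUSPECT.search(unquote(request_line)))

logs = [
    "POST /user/register?element_parents=account/mail/%23value HTTP/1.1",
    "GET /node/1 HTTP/1.1",
]
hits = [line for line in logs if looks_like_drupalgeddon(line)]
```

A pattern this broad will produce false positives on legitimate fragments; it is a triage aid, not a substitute for patching.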

In May, AnonPlus Italia executed 27 more defacements, 19 of which targeted Drupal sites.

Content management systems like WordPress and Joomla are normally abused by Anonymous hacktivists to target other web servers. In this recent string of defacements, the group AnonPlus Italia is abusing misconfigured or unpatched CMS instances with remote code exploits, allowing them to upload shells and deface unmaintained websites for headline attention.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Botnets, Brute Force Attacks, DDoS Attacks, Phishing

Top 6 Threat Discoveries of 2018

December 18, 2018 — by Radware


Over the course of 2018, Radware’s Emergency Response Team (ERT) identified several cyberattacks and security threats across the globe. Below is a round-up of our top discoveries from the past year. For more detailed information on each attack, please visit DDoS Warriors.

DemonBot

Radware’s Threat Research Center has been monitoring and tracking a malicious agent that is leveraging a Hadoop YARN (Yet-Another-Resource-Negotiator) unauthenticated remote command execution to infect Hadoop clusters with an unsophisticated new bot that identifies itself as DemonBot.

After a spike in requests for /ws/v1/cluster/apps/new-application appeared in our Threat Deception Network, DemonBot was identified. We have since been tracking over 70 active exploit servers that are spreading DemonBot and exploiting servers at an aggregated rate of over one million exploits per day.
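As a defensive illustration, hits on that endpoint are easy to surface in web or honeypot logs. A minimal sketch, assuming common-log-format lines with the source IP first; the threshold is an arbitrary example, not a recommendation:

```python
YARN_ENDPOINT = "/ws/v1/cluster/apps/new-application"

def suspicious_sources(log_lines, threshold=3):
    """Return source IPs that requested the exposed YARN
    ResourceManager endpoint at least `threshold` times."""
    counts = {}
    for line in log_lines:
        if YARN_ENDPOINT in line:
            ip = line.split()[0]  # assumes the source IP leads each line
            counts[ip] = counts.get(ip, 0) + 1
    return {ip for ip, n in counts.items() if n >= threshold}
```

Any hit on this endpoint from the open internet is worth investigating; a patched or firewalled cluster should see none.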

[You may also like: IoT Botnets on the Rise]

Credential Stuffing Campaign

In October, Radware began tracking a credential stuffing campaign—a subset of brute force attacks—targeting the financial industry in the United States and Europe.

This particular campaign is motivated by fraud. Criminals are using credentials from prior data breaches to gain access to users’ bank accounts. When significant breaches occur, the compromised emails and passwords are quickly leveraged by cybercriminals. Armed with tens of millions of credentials from recently breached websites, attackers use these credentials, along with scripts and proxies, to distribute their attacks against financial institutions and take over banking accounts. These login attempts can happen in such volumes that they resemble a distributed denial-of-service (DDoS) attack.
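One tell-tale of stuffing, as opposed to a legitimate user retrying a mistyped password, is a single source cycling through many distinct usernames. A hedged detection sketch; the threshold and data shape are illustrative assumptions, not vendor guidance:

```python
from collections import defaultdict

def stuffing_suspects(login_attempts, max_distinct_users=5):
    """login_attempts: iterable of (source_ip, username) pairs.
    Flags sources that tried more than `max_distinct_users` accounts."""
    users_per_ip = defaultdict(set)
    for ip, user in login_attempts:
        users_per_ip[ip].add(user)
    return {ip for ip, users in users_per_ip.items()
            if len(users) > max_distinct_users}

attempts = [("203.0.113.5", f"user{i}") for i in range(8)]   # stuffing pattern
attempts += [("198.51.100.2", "alice")] * 4                  # one user retrying
```

In practice attackers spread attempts across proxies, so per-IP counting is only a first-line signal; it must be combined with device fingerprinting or behavioral analysis.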

DNS Hijacking Targets Brazilian Banks

This summer, Radware’s Threat Research Center identified a hijacking campaign aimed at Brazilian bank customers through their IoT devices, attempting to gain their bank credentials.

The research center had been tracking malicious activity targeting D-Link DSL modem routers in Brazil since early June. Using old exploits dating from 2015, a malicious agent is attempting to modify the DNS server settings in the routers of Brazilian residents, redirecting all of their DNS requests through a malicious DNS server. That server hijacks requests for the hostname of Banco do Brasil (www.bb.com.br) and redirects them to a fake, cloned website hosted on the same malicious DNS server, which has no connection whatsoever to the legitimate Banco do Brasil website.
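Detecting this class of hijack can be as simple as comparing the answers returned by the device's configured resolver with those from a trusted resolver. A minimal sketch; the IP addresses are documentation-range placeholders, not the real addresses involved in the campaign:

```python
def hijacked_hostnames(configured: dict, trusted: dict):
    """configured/trusted map hostname -> set of resolved IPs.
    Hostnames whose answer sets are completely disjoint are flagged."""
    return sorted(host for host in configured
                  if host in trusted and not configured[host] & trusted[host])

configured = {"www.bb.com.br": {"203.0.113.66"}}   # answer via the router's DNS
trusted    = {"www.bb.com.br": {"198.51.100.10"}}  # answer via a known resolver
flagged = hijacked_hostnames(configured, trusted)
```

CDNs and geo-load-balancing mean legitimate answers can differ too, so disjoint answer sets are a signal to verify (e.g. against the site's TLS certificate), not proof of hijacking on their own.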

[You may also like: Financial Institutions Must Protect the Data Like They Protect the Money]

Nigelthorn Malware

In May, Radware’s cloud malware protection service detected a zero-day malware threat at one of its customers, a global manufacturing firm, by using machine-learning algorithms. This malware campaign is propagating via socially-engineered links on Facebook and is infecting users by abusing a Google Chrome extension (the ‘Nigelify’ application) that performs credential theft, cryptomining, click fraud and more.

Further investigation by Radware’s Threat Research group revealed that this group has been active since at least March 2018 and has already infected more than 100,000 users in over 100 countries.

[You may also like: The Origin of Ransomware and Its Impact on Businesses]

Stresspaint Malware Campaign

On April 12, 2018, Radware’s Threat Research group detected malicious activity via internal feeds of a group collecting user credentials and payment methods from Facebook users across the globe. The group manipulates victims via phishing emails to download a painting application called ‘Relieve Stress Paint.’ While benign in appearance, it runs a malware dubbed ‘Stresspaint’ in the background. Within a few days, the group had infected over 40,000 users, stealing tens of thousands of Facebook user credentials/cookies.

DarkSky Botnet

In early 2018, Radware’s Threat Research group discovered a new botnet, dubbed DarkSky. DarkSky features several evasion mechanisms, a malware downloader and a variety of network- and application-layer DDoS attack vectors. This bot is now available for sale for less than $20 over the Darknet.

As published by its authors, this malware is capable of running under Windows XP/7/8/10, both x32 and x64 versions, and has anti-virtual-machine capabilities to evade security controls such as sandboxes, thereby allowing it to infect only ‘real’ machines.

Read the “IoT Attack Handbook – A Field Guide to Understanding IoT Attacks from the Mirai Botnet and its Modern Variants” to learn more.

Download Now

Application Security, Attack Mitigation, DDoS Attacks, Security

2018 In Review: Healthcare Under Attack

December 12, 2018 — by Daniel Smith


Radware’s ERT and Threat Research Center monitored an immense number of events over the last year, giving us a chance to review and analyze attack patterns to gain further insight into today’s trends and changes in the attack landscape. Here are some insights into what we have observed over the last year.

Healthcare Under Attack

Over the last decade there has been a dramatic digital transformation within healthcare; more facilities are relying on electronic forms and online processes to help improve and streamline the patient experience. As a result, the medical industry has new responsibilities and priorities for keeping client data secure and available, responsibilities that unfortunately aren’t always met.

This year, the healthcare industry dominated news with an ever-growing list of breaches and attacks. Aetna, CarePlus, Partners Healthcare, BJC Healthcare, St. Peter’s Surgery and Endoscopy Center, ATI Physical Therapy, Inogen, UnityPoint Health, Nuance Communication, LifeBridge Health, Aultman Health Foundation, Med Associates and more recently Nashville Metro Public Health, UMC Physicians, and LabCorp Diagnostics have all disclosed or settled major breaches.

[You may also like: 2019 Predictions: Will Cyber Serenity Soon Be a Thing of the Past?]

Generally speaking, the risk of falling prey to data breaches is high, due to password sharing, outdated and unpatched software, or exposed and vulnerable servers. When you look at medical facilities in particular, other risks begin to appear, like those surrounding the number of hospital employees who have full or partial access to your health records during your stay there. The possibility of a malicious insider or abuse of access is also very high, as is the risk of third-party breaches. For example, it was recently disclosed that NHS patient records may have been exposed when passwords were stolen from Embrace Learning, a training business used by healthcare workers to learn about data protection.

Profiting From Medical Data

These recent cyber-attacks targeting the healthcare industry underscore the growing threat to hospitals, medical institutions and insurance companies around the world. So, what’s driving the trend? Profit. Personal data, specifically healthcare records, are in demand and quite valuable on today’s black market, often fetching more money per record than your financial records, and are a crucial part of today’s Fullz packages sold by cyber criminals.

Not only are criminals exfiltrating patient data and selling it for a profit, but others have opted to encrypt medical records with ransomware or hold the data hostage until their extortion demands are met. Hospitals are often quick to pay an extortionist because backups are non-existent, or because it would take too long to restore services. Because of this, cybercriminals focus on this industry.

[You may also like: How Secure is Your Medical Data?]

Most of the attacks targeting the medical industry are ransomware attacks, often delivered via phishing campaigns. There have also been cases where ransomware and malware have been delivered via drive-by downloads and compromised third-party vendors. We have also seen criminals use SQL injections to steal data from medical applications, as well as flood those networks with DDoS attacks. More recently, we have seen large-scale scanning and exploitation of internet-connected devices for the purpose of crypto mining, some of which have been located inside medical networks. In addition to causing outages and encrypting data, these attacks have resulted in canceled elective cases, diverted incoming patients and rescheduled surgeries.

For-profit hackers will target and launch a number of different attacks against medical networks designed to obtain and steal your personal information from vulnerable or exposed databases. They are looking for a complete or partial set of information such as name, date of birth, Social Security numbers, diagnosis or treatment information, Medicare or Medicaid identification number, medical record number, billing/claims information, health insurance information, disability code, birth or marriage certificate information, Employer Identification Number, driver’s license numbers, passport information, banking or financial account numbers, and usernames and passwords so they can resell that information for a profit.

[You may also like: Fraud on the Darknet: How to Own Over 1 Million Usernames and Passwords]

Sometimes the data obtained by the criminal is incomplete, but that data can be leveraged as a stepping stone to gather additional information. Criminals can use partial information to create a spear-phishing kit designed to gain your trust by citing a piece of personal information as bait. And they’ll move very quickly once they gain access to PHI or payment information. Criminals will normally sell the information obtained, even if incomplete, in bulk or in packages on private forums to other criminals who have the ability to complete the Fullz package or quickly cash the accounts out. Stolen data will also find its way to public auctions and marketplaces on the dark net, where sellers try to get the highest price possible for data or gain attention and notoriety for the hack.

Don’t let healthcare data slip through the cracks; be prepared.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Delivery, Cloud Computing

Ensuring a Secure Cloud Journey in a World of Containers

December 11, 2018 — by Prakash Sinha


As organizations transition to the cloud, many are adopting microservice architecture to implement business applications as a collection of loosely coupled services, in order to enable isolation, scale, and continuous delivery for complex applications. However, you have to balance the complexity that comes with such a distributed architecture with the application security and scale requirements, as well as time-to-market constraints.

Many application architects choose application containers as a tool of choice to implement the microservices architecture. Among its many advantages, such as resource footprint, instantiation time, and better resource utilization, containers provide a lightweight run time and a consistent environment for the application—from development to testing to a production deployment.

That said, adopting containers doesn’t remove the traditional security and application availability concerns; application vulnerabilities can still be exploited. Recent ransomware attacks highlight the need to secure against DDoS and application attacks.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Security AND availability should be top-of-mind concerns in the move to adopt containers.

Let Your Load Balancer Do the Heavy Lifting

For many years, application delivery controllers (ADCs), a.k.a. load balancers, have been integral to addressing the service-level needs of applications deployed on premises or in the cloud, meeting both the availability and many of the security requirements of those applications.

Layered security is a MUST: In addition to using built-in tools for container security, traditional approaches to security are still relevant. Many container-deployed services are composed using Application Programming Interfaces (APIs). Since these services are accessible over the web, they are open to malicious attacks.

As hackers probe network and application vulnerabilities to gain access to sensitive data, the prevention of unauthorized access needs to be multi-pronged as well:

  • Preventing denial-of-service attacks
  • Running routine vulnerability assessment scans on container applications
  • Scanning application source code for vulnerabilities and fixing them
  • Preventing malicious access by validating users before they can access a container application
  • Preventing rogue application ports/applications from running
  • Securing data at rest and in motion
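The user-validation point, for example, can sit entirely in front of the container. Below is a minimal sketch of a bearer-token check as a reverse proxy or ADC might apply it before forwarding a request; `VALID_TOKENS` is a stand-in for a real identity-management lookup and is an assumption of this example:

```python
import hmac

VALID_TOKENS = {"s3cr3t-demo-token"}  # placeholder for an IdM lookup

def is_authorized(headers: dict) -> bool:
    """Check an 'Authorization: Bearer <token>' header before the
    request ever reaches the container service."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # constant-time comparison avoids leaking token prefixes via timing
    return any(hmac.compare_digest(token, t) for t in VALID_TOKENS)
```

A production deployment would validate signed tokens (e.g. JWTs) against an identity provider rather than a static set, but the placement is the point: rejection happens at the edge, not in the container.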

Since ADCs terminate user connections, scrubbing the data with a web application firewall (WAF) will help identify and prevent malicious attacks, while authenticating users against an identity management system to prevent unauthorized access to a container service.

Availability is not just a nice-to-have: A client interacting with a microservices-based application does not need to know about the instance that is serving it. This is precisely the isolation and decoupling that a load balancer provides, thus ensuring availability in case one of the instances becomes unavailable.

Allocating and managing it manually is not an option: Although there are many benefits to a container-based application, it is a challenge to quickly roll out, troubleshoot, and manage these microservices. Manually allocating resources for applications and re-configuring the load balancer to incorporate newly instantiated services is inefficient and error-prone. It becomes problematic at scale, especially with services that have short lifetimes. Automating the deployment of services quickly becomes a necessity. Automation tools transform the traditional manual approach into simpler automated scripts and tasks that do not require deep familiarity or expertise.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

If you don’t monitor, you won’t know: When deploying microservices that may affect many applications, proactive monitoring, analytics and troubleshooting become critical before issues become business disruptions. Monitoring may include information about a microservice such as latency, security issues, service uptime and access problems.

Businesses must support complex IT architectures for their application delivery in a secure manner. Configuring, deploying and maintaining cross-domain microservices can be error-prone, costly and time-consuming. Organizations should be concerned with ensuring security with a layered approach to security controls. To simplify configuration and management of these microservices, IT should adopt automation, visibility, analytics and orchestration best practices and tools that fit in with their agile and DevOps processes. The goal is to keep a secure and controlled environment mandated by IT without losing development agility and automation needs of the DevOps.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

DDoS, DDoS Attacks, Security, WAF

What Can We Learn About Cybersecurity from the Challenger Disaster? Everything.

December 5, 2018 — by Radware


Understanding the potential threats that your organization faces is an essential part of risk management in modern times. It involves forecasting and evaluating all the factors that impact risk. Processes, procedures and investments can all increase, minimize or even eliminate risk.

Another factor is the human element. Oftentimes, within an organization, a culture exists in which reams of historical data tell one story, but management believes something entirely different. This “cognitive dissonance” can lead to an overemphasis and reliance on near-term data and/or experiences and a discounting of long-term statistical analysis.

Perhaps no better example of this exists than the space shuttle Challenger disaster in 1986, which now serves as a case study in improperly managing risk. In January of that year, the Challenger disintegrated 73 seconds after launch due to the failure of a gasket (called an O-ring) in one of the rocket boosters. While the physical cause of the disaster was the failure of the O-ring, the resulting Rogers Commission that investigated the accident found that NASA had failed to correctly identify “flaws in management procedures and technical design that, if corrected, might have prevented the Challenger tragedy.”

Despite strong evidence dating back to 1977 that the O-ring was a flawed design that could fail under certain conditions/temperatures, neither NASA management nor the rocket manufacturer, Morton Thiokol, responded adequately to the danger posed by the deficient joint design. Rather than redesigning the joint, they came to define the problem as an “acceptable flight risk.” Over the course of 24 preceding successful space shuttle flights, a “safety culture” was established within NASA management that downplayed the technical risks associated with flying the space shuttle despite mountains of data, and warnings about the O-ring, provided by research and development (R & D) engineers.

As American physicist Richard Feynman said regarding the disaster, “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

Truer words have never been spoken when they pertain to cybersecurity. C-suite executives need to stop evaluating and implementing cybersecurity strategies and solutions that meet minimal compliance and establish a culture of “acceptable risk” and start managing to real-world risks — risks that are supported by hard data.

Risk Management and Cybersecurity

The threat of a cyberattack on your organization is no longer a question of if, but when, and C-suite executives know it. According to C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts, 96% of executives were concerned about network vulnerabilities and security risks resulting from hybrid computing environments. Managing risk requires organizations to plan for and swiftly respond to risks and potential risks as they arise. Cybersecurity is no exception. For any organization, risks can be classified into four basic categories: strategic, reputation, product and governance.

The Challenger disaster underscores all four of these risk categories. Take strategic risk as an example. Engineers from Morton Thiokol expressed concerns and presented data regarding the performance of the O-rings, both in the years prior and days leading up to the launch, and stated the launch should be delayed. NASA, under pressure to launch the already delayed mission and emboldened by the 24 preceding successful shuttle flights that led them to discount the reality of failure, pressured Morton Thiokol to supply a different recommendation. Morton Thiokol management decided to place organizational goals ahead of safety concerns that were supported by hard data. The recommendation for the launch was given, resulting in one of the most catastrophic incidents in manned space exploration. Both Morton Thiokol and NASA made strategic decisions that placed the advancements of their respective organizations over the risks that were presented.

[You may also like: The Million-Dollar Question of Cyber-Risk: Invest Now or Pay Later?]

This example of strategic risk serves as a perfect analogy for organizations implementing cybersecurity strategies and solutions. There are countless examples of high-profile cyberattacks and data breaches in which upper management was warned in advance of network vulnerabilities, yet no actions were taken to prevent an impending disaster. The infamous 2018 Panera Bread data breach is one such example. Facebook is yet another. Its platform operations manager between 2011 and 2012 warned management at the social tech giant to implement audits or enforce other mechanisms to ensure user data extracted from the social network was not misused by third-party developers and/or systems. These warnings were apparently ignored.

So why does this continually occur? The implementation of DDoS and WAF mitigation solutions often involves three key components within an organization: management, the security team/SOC and compliance. Despite reams of hard data provided by a security team that an organization is either currently vulnerable or not prepared for the newest generation of attack vectors, management will often place overemphasis on near-term security results/experiences; they feel secure in the fact that the organization has never been the victim of a successful cyberattack to date. The aforementioned Facebook story is a perfect example: They allowed history to override hard data presented by a platform manager regarding new security risks.

Underscoring this “cognitive dissonance” is the compliance team, which often seeks to evaluate DDoS mitigation solutions based solely on checkbox functionality that fulfills minimal compliance standards. This strategy also drives a cost-savings approach that yields short-term financial savings within an organization that oftentimes views cybersecurity as an afterthought vis-à-vis other strategic programs, such as mobility, IoT and cloud computing.

The end result? Organizations aren’t managing real-world risks, but rather are managing “yesterday’s” risks, thereby leaving themselves vulnerable to new attack vectors, IoT botnet vulnerabilities, cybercriminals and other threats that didn’t exist weeks or even days ago.

The True Cost of a Cyberattack

To understand just how detrimental this can be to the long-term success of an organization requires grasping the true cost of a cyberattack. Sadly, these data points are often as poorly understood, or dismissed, as the aforementioned statistics regarding vulnerability. The cost of a cyberattack can be mapped by the four risk categories:

  • Strategic Risk: Cyberattacks, on average, cost more than one million USD/EUR, according to 40% of executives. Five percent estimated this cost to be more than 25 million USD/EUR.
  • Reputation Risk: Customer attrition rates can increase by as much as 30% following a cyberattack. Moreover, organizations that lose over four percent of their customers following a data breach suffer an average total cost of $5.1 million. In addition, 41% of executives reported that customers have taken legal action against their companies following a data breach. The Yahoo and Equifax data breach lawsuits are two high-profile examples.
  • Product Risk: The IP Commission estimated that counterfeit goods, pirated software and stolen trade secrets cost the U.S. economy $600 billion annually.
  • Governance Risk: “Hidden” costs associated with a data breach include increased insurance premiums, lower credit ratings and devaluation of trade names. Equifax was devalued by $4 billion by Wall Street following the announcement of its data breach.
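Figures like these feed directly into a standard risk calculation: annualized loss expectancy, the cost of a single incident multiplied by its expected yearly frequency. The numbers below are illustrative only, not drawn from the report:

```python
def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    # Classic risk formula: ALE = SLE (single loss expectancy)
    #                           x ARO (annualized rate of occurrence)
    return single_loss * annual_rate

# Example: a $1M incident expected once every two years
ale = annualized_loss_expectancy(1_000_000, 0.5)
```

Weighed against the yearly cost of mitigation, this kind of estimate turns the invest-now-or-pay-later question into simple arithmetic.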

[You may also like: Understanding the Real Cost of a Cyber-Attack and Building a Cyber-Resilient Business]

Secure the Customer Experience, Manage Risk

It’s only by identifying the new risks that an organization faces each and every day, and by having a plan in place to minimize them, that executives can build a foundation upon which their company will succeed. In the case of the space shuttle program, mounds of data that clearly demonstrated an unacceptable flight risk were pushed aside by the need to meet operational goals. What lessons can be learned from that fateful day in January of 1986 and applied to cybersecurity? To start, the disaster highlights the five key steps of managing risks.

In the case of cybersecurity, this means that the executive leadership must weigh the opinions of its network security team, compliance team and upper management and use data to identify vulnerabilities and the requirements to successfully mitigate them. In the digital age, cybersecurity must be viewed as an ongoing strategic initiative and cannot be delegated solely to compliance. Leadership must fully weigh the potential cost of a cyberattack/data breach on the organization versus the resources required to implement the right security strategies and solutions. Lastly, when properly understood, risk can actually be turned into a competitive advantage. In the case of cybersecurity, it can be used as a competitive differentiator with consumers that demand fast network performance, responsive applications and a secure customer experience. This enables companies to target and retain customers by supplying a forward-looking security solution that seamlessly protects users today and into the future.

So how are executives expected to accomplish this while facing new security threats, tight budgets, a shortfall in cybersecurity professionals and the need to safeguard increasingly diversified infrastructures? The key is creating a secure climate for the business and its customers.

To create this climate, executives must be willing to accept new technologies, be open-minded about new ideologies and embrace change, according to C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts. Executives committed to staying on top of this ever-evolving threat must break down the silos that exist in the organization to assess the dimensions of the risks across the enterprise and address these exposures holistically. Next is balancing the aforementioned investment versus risk equation. All executives will face tough choices when deciding where to invest resources to propel their companies forward. C-suite executives must leverage the aforementioned data points and carefully evaluate the risks associated with security vulnerabilities and the costs of implementing effective security solutions to avoid becoming the next high-profile data breach.

According to the same report, four in 10 respondents identified increasing infrastructure complexity, digital transformation plans, integration of artificial intelligence and migration to the cloud as events that put pressure on security planning and budget allocation.

The stakes are high. Security threats can seriously impact a company’s brand reputation, resulting in customer loss, reduced operational productivity and lawsuits. C-suite executives must heed the lessons of the space shuttle Challenger disaster: Stop evaluating and implementing cybersecurity strategies and solutions that meet minimal compliance and start managing to real-world risks by trusting data, pushing aside near-term experiences/“gut instincts” and understanding the true cost of a cyberattack. Those executives who are willing to embrace technology and change and prioritize cybersecurity will be the ones to win the trust and loyalty of the 21st-century consumer.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Mobile Security, Security

Cybersecurity for the Business Traveler: A Tale of Two Internets

November 27, 2018 — by David Hobbs


Many of us travel for work, and there are several factors we take into consideration when we do. Finding flights, hotels and transportation that fit within compliance guidelines is the first set of hurdles, but the second can be a bit trickier: trusting your selected location. Most hotels do not advertise their physical security details, let alone any cybersecurity efforts.

I recently visited New Delhi, India, where I stayed at a hotel in the Diplomatic Enclave. Being extremely security conscious, I did a test on the connection from the hotel and found there was little-to-no protection on the wi-fi network. This hotel touts its appeal to elite guests, including diplomats and businessmen on official business. But if it doesn’t offer robust security on its network, how can it protect our records and personal data?  What kind of protection could I expect if a hacking group decided to target guests?

[You may also like: Protecting Sensitive Data: A Black Swan Never Truly Sits Still]

If I had to guess, most hotel guests—whether they’re traveling for business or pleasure—don’t spend much time or energy considering the security implications of their new, temporary wi-fi access. But they should.

More and more, we are seeing hacking groups target high-profile travelers. For example, the Fin7 group stole over $1 billion with aggressive hacking techniques aimed at hotels and their guests. And in 2017, an espionage group known as APT28 sought to steal password credentials from Western government and business travelers using hotel wi-fi networks.

A Tale of Two Internets

To address cybersecurity concerns—while also setting themselves apart with a competitive advantage—conference centers, hotels and other watering holes for business travelers could easily offer two connectivity options for guests:

  • Secure Internet: With this option, the hotel would provide basic levels of security monitoring, from flagging connections to known command-and-control infrastructure to looking for rogue attackers on the network, and could alert guests to potential attacks when they log on, on a best-effort basis.
  • Wide Open Internet: In this tier, guests could access high speed internet to do as they please, without rigorous security checks in place. This is the way most hotels, convention centers and other public wi-fi networks work today.

A two-tiered approach is a win-win for both guests and hotels. If hotels offer multiple rates for wi-fi packages, business travelers may pay more to ensure their sensitive company data is protected, thereby helping to cover cybersecurity-related expenses. And guests would have the choice to decide which package best suits their security needs—a natural byproduct of which is consumer education, albeit brief, on the existence of network vulnerabilities and the need for cybersecurity. After all, guests may not have even considered the possibility of security breaches in a hotel’s wi-fi, but evaluating different Internet options would, by default, change that.

[You may also like: Protecting Sensitive Data: The Death of an SMB]

Once your average traveler is aware of the potential for security breaches during hotel stays, the sky’s the limit. Imagine a cultural shift in which hotels are encouraged to promote their cybersecurity initiatives and guests can rate them online in travel site reviews. Secure hotel wi-fi could become a standard amenity and a selling point for travelers.

I, for one, would gladly select a wi-fi option that offered malware alerts, stopped DDoS attacks and proactively looked for known attacks and vulnerabilities (while still using a VPN, of course). Wouldn’t it be better if we could surf a network more secure than the wide open Internet?

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now