
Cloud Computing, Cloud Security, Security

Mitigating Cloud Attacks With Configuration Hardening

February 26, 2019 — by Radware


Attackers can exploit misconfigurations in the public cloud in a number of ways. Typical attack scenarios involve several kill chain steps, such as reconnaissance, lateral movement, privilege escalation, data acquisition, persistence and data exfiltration. An attacker may carry out these steps fully or partially over many days until the ultimate objective is achieved and the valuable data is reached.

Removing the Mis from Misconfigurations

To prevent attacks, enterprises must harden configurations to address promiscuous permissions, applying continuous hardening checks to limit the attack surface as much as possible. The goals are to avoid public exposure of data from the cloud and to reduce overly permissive access to resources by making sure that communication between entities within a cloud, as well as access to assets and APIs, is allowed only for valid reasons.

For example, the private data of six million Verizon users was exposed when maintenance work changed a configuration and made an S3 bucket public. Only smart configuration hardening that applies the principle of least privilege enables enterprises to meet those goals.
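As an illustration of what such a hardening check can look like, here is a minimal sketch in Python using boto3. It is an assumption-laden example rather than any vendor's implementation: it only inspects bucket ACLs (not bucket policies or account-level Block Public Access settings) and simply prints buckets that grant access to all users.

    import boto3

    s3 = boto3.client("s3")

    # Grantee URI that marks an ACL entry as "anyone on the internet".
    PUBLIC_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

    def find_public_buckets():
        """Return names of buckets whose ACL grants access to AllUsers."""
        public = []
        for bucket in s3.list_buckets()["Buckets"]:
            acl = s3.get_bucket_acl(Bucket=bucket["Name"])
            if any(g["Grantee"].get("URI") == PUBLIC_URI for g in acl["Grants"]):
                public.append(bucket["Name"])
        return public

    if __name__ == "__main__":
        for name in find_public_buckets():
            print(f"WARNING: bucket '{name}' is publicly accessible")

Running a check like this on a schedule, and alerting on any new hit, is the kind of continuous hardening described above.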

[You may also like: Ensuring Data Privacy in Public Clouds]

The process requires applying behavioral analytics over time, including regular reviews of permissions and continuous analysis of each entity's normal behavior, to ensure users have access only to what they need and nothing more. By reducing the attack surface, enterprises make it harder for hackers to move laterally in the cloud.

The process is complex and is often best managed with the assistance of an outside security partner with deep expertise and a system that combines multiple algorithms measuring activity across the network to detect anomalies and determine whether malicious intent is probable. Attackers will often carry out their kill chain over several days or months.

Taking Responsibility

It is tempting for enterprises to assume that cloud providers are completely responsible for network and application security and for ensuring the privacy of data. In practice, cloud providers supply tools that enterprises can use to secure hosted assets. While cloud providers must be vigilant in how they protect their data centers, responsibility for securing access to apps, services, data repositories and databases falls on the enterprises.


[You may also like: Excessive Permissions are Your #1 Cloud Threat]

A hardened network and meticulous application security can be a competitive advantage for companies to build trust with their customers and business partners. Now is a critical time for enterprises to understand their role in protecting public cloud workloads as they transition more applications and data away from on-premise networks.

The responsibility to protect the public cloud is a relatively new task for most enterprises. But everything in the cloud is externally accessible if it is not properly protected with the right level of permissions. Going forward, enterprises must quickly incorporate smart configuration hardening into their network security strategies to address this growing threat.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Mobile Security, Service Provider

Securing the Customer Experience for 5G and IoT

February 21, 2019 — by Louis Scialabba


5G is set to bring fast speeds, low latency and more data to the customer experience for today’s digitized consumer. Driven by global demand for 24×7 high-speed internet access, the business landscape will only increase in competitiveness as service providers jockey to deliver improved network capabilities.

Although the mass roll-out of the cutting-edge technology is expected around 2020, the race to 5G deployment has already begun. In addition to serving as the foundation for the aforementioned digital transformation, 5G networks will also deliver the integral infrastructure required for increased agility and flexibility.


But with new benefits come new risks. As network architectures evolve to support 5G, security vulnerabilities will emerge unless cybersecurity is prioritized and integrated into a 5G deployment from the get-go to provide a secure environment that safeguards customers’ data and devices.

Cybersecurity for 5G shouldn’t be viewed as an additional operational cost, but rather as a business opportunity/competitive differentiator that is integrated throughout the overall architecture. Just as personal data has become a commodity in today’s world, carriers will need the right security solution to keep data secure while improving the customer experience via a mix of availability and security.

For more insight into how service providers can mitigate the business risks of 5G deployment, please read our white paper.


Read “Creating a Secure Climate for your Customers” today.


Cloud Computing, Cloud Security

Excessive Permissions are Your #1 Cloud Threat

February 20, 2019 — by Eyal Arazi


Migrating workloads to public cloud environments opens organizations up to a slate of new, cloud-native attack vectors that did not exist in the world of premise-based data centers. In this new environment, workload security is defined by which users have access to your cloud environment and what permissions they have. As a result, protecting against excessive permissions, and quickly responding when those permissions are abused, becomes the #1 priority for security administrators.

The Old Insider is the New Outsider

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protection, through mechanisms such as firewalls, IPS/IDS, WAF and DDoS protection, secure gateways, etc.

However, moving workloads to the cloud has led organizations (and IT administrators) to lose direct physical control over their workloads and to relinquish many aspects of security through the Shared Responsibility Model. As a result, the insider of the old, premise-based world is suddenly an outsider in the new world of publicly hosted cloud workloads.

[You may also like: Ensuring Data Privacy in Public Clouds]

IT administrators and hackers now have identical access to publicly-hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat.

Workload security, therefore, is defined by the people who can access those workloads, and the permissions they have.

Your Permissions = Your Attack Surface

One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them, and what permissions they actually use.

All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have too many permissions, which they never use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes.

As a result, cloud workloads are vulnerable to data breaches (i.e., theft of data from cloud accounts), service violations (i.e., completely taking over cloud resources) and resource exploitation (such as cryptomining). Such promiscuous permissions are frequently mischaracterized as ‘misconfigurations’ but are actually the result of permission misuse or abuse by people who shouldn’t have them.
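One way to surface that gap on AWS is the IAM access advisor data, which records when each granted service was last used. The sketch below is a simplified Python/boto3 example (the user ARN is hypothetical, and a real tool would also cover roles, groups and action-level detail); it lists services a principal is allowed to call but has never actually used.

    import time
    import boto3

    iam = boto3.client("iam")

    def unused_services(principal_arn):
        """Services the principal can access but has never authenticated to."""
        job_id = iam.generate_service_last_accessed_details(Arn=principal_arn)["JobId"]
        while True:
            report = iam.get_service_last_accessed_details(JobId=job_id)
            if report["JobStatus"] != "IN_PROGRESS":
                break
            time.sleep(2)  # wait for AWS to finish building the report
        if report["JobStatus"] != "COMPLETED":
            return []
        return [
            svc["ServiceName"]
            for svc in report["ServicesLastAccessed"]
            if "LastAuthenticated" not in svc  # granted but never used
        ]

    if __name__ == "__main__":
        arn = "arn:aws:iam::123456789012:user/example-devops-user"  # hypothetical ARN
        for service in unused_services(arn):
            print(f"Granted but never used: {service}")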

[You may also like: Protecting Applications in a Serverless Architecture]

Therefore, protecting against those promiscuous permissions becomes the #1 priority for protecting publicly-hosted cloud workloads.

Traditional Protections Provide Piecemeal Solutions

The problem, however, is that existing solutions provide incomplete protection against the threat of excessive permissions.

  • The built-in mechanisms of public clouds usually provide fairly basic protection focused on the overall computing environment; they are blind to activity within individual workloads. Moreover, since many companies run multi-cloud and hybrid-cloud environments, the built-in protections offered by cloud vendors will not protect assets outside of their network.
  • Compliance and governance tools usually use static lists of best practices to analyze permissions usage. However, they will not detect (and alert to) excessive permissions, and are usually blind to activity within workloads themselves.
  • Agent-based solutions require deploying (and managing) agents on cloud-based servers, and will protect only servers on which they are installed. However, they are blind to overall cloud user activity and account context, and usually cannot protect non-server resources such as services, containers, serverless functions, etc.
  • Cloud Access Security Brokers (CASB) tools focus on protecting software-as-a-service (SaaS) applications, but do not protect infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) environments.

[You may also like: The Hybrid Cloud Habit You Need to Break]

A New Approach for Protection

Modern protection of publicly-hosted cloud environments requires a new approach.

  • Assume your credentials are compromised: Hackers acquire stolen credentials in a plethora of ways, and even the largest companies are not immune to credential theft, phishing, accidental exposure, or other threats. Therefore, defenses cannot rely solely on protection of passwords and credentials.
  • Detect excessive permissions: Since excessive permissions are so frequently exploited for malicious purposes, identifying and alerting on such permissions becomes paramount. This cannot be done just by measuring against static lists of best practices; it must be based on analyzing the gap between the permissions a user has been granted and the permissions they actually use.
  • Harden security posture: The best way to stop a data breach is to prevent it before it ever occurs. Hardening your cloud security posture and eliminating excessive permissions and misconfigurations ensures that even if a user’s credentials are compromised, attackers will not be able to do much with those permissions.
  • Look for anomalous activities: A data breach is not one thing going wrong, but a whole list of things going wrong. Most data breaches follow a typical progression, which can be detected and stopped in time, if you know what you’re looking for. Monitoring for suspicious activity in your cloud account (for example, anomalous usage of permissions) will help identify malicious activity in time and stop it before user data is exposed (see the sketch after this list).
  • Automate response: Time is money, and even more so when it comes to preventing exposure of sensitive user data. Automated response mechanisms allow you to respond faster to security incidents, and block-off attacks within seconds of detection.
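To make the anomalous-activity point concrete, here is a minimal sketch that scans CloudTrail with Python/boto3 and flags API calls a user has not made before. The 30-day baseline and the "any first-time call is suspicious" rule are deliberate simplifications for illustration, not a production detection model.

    from collections import defaultdict
    from datetime import datetime, timedelta, timezone
    import boto3

    cloudtrail = boto3.client("cloudtrail")

    def first_time_api_calls(baseline_days=30, recent_hours=1):
        """Flag recent API calls that a user never made during the baseline window."""
        now = datetime.now(timezone.utc)
        cutoff = now - timedelta(hours=recent_hours)
        baseline = defaultdict(set)   # user -> API calls seen historically
        recent = []                   # (user, API call) seen in the recent window

        paginator = cloudtrail.get_paginator("lookup_events")
        for page in paginator.paginate(StartTime=now - timedelta(days=baseline_days),
                                       EndTime=now):
            for event in page["Events"]:
                user = event.get("Username", "unknown")
                if event["EventTime"] < cutoff:
                    baseline[user].add(event["EventName"])
                else:
                    recent.append((user, event["EventName"]))

        return [(u, call) for u, call in recent if call not in baseline[u]]

    if __name__ == "__main__":
        for user, call in first_time_api_calls():
            print(f"Anomaly candidate: {user} called {call} for the first time")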

[You may also like: Automating Cyber-Defense]

Radware’s Cloud Workload Protection Service

Radware is extending its line of cloud-based security services to provide an agentless, cloud-native solution for comprehensive protection of workloads hosted on AWS. Radware’s solution protects both the overall security posture of your AWS cloud account and individual cloud workloads, defending against cloud-native attack vectors.

Radware’s solution addresses the core problem of cloud-native excessive permissions by analyzing the gap between granted and used permissions and providing smart hardening recommendations. Radware uses advanced machine-learning algorithms to identify malicious activities within your cloud account, as well as automated response mechanisms to block such attacks automatically. This helps customers prevent data theft, protect sensitive customer data, and meet compliance requirements.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Hacks, Security

How Hackable Is Your Dating App?

February 14, 2019 — by Mike O'Malley


If you’re looking to find a date in 2019, you’re in luck. Dozens of apps and sites exist for this sole purpose – Bumble, Tinder, OKCupid, Match, to name a few. Your next partner could be just a swipe away! But that’s not all; your personal data is likewise a swipe or click away from falling into the hands of cyber criminals (or other creeps).

Online dating, while certainly more popular and acceptable now than it was a decade ago, can be risky. There are top-of-mind risks—does s/he look like their photo? Could this person be a predator?—as well as less prominent (albeit equally important) concerns surrounding data privacy. What, if anything, do your dating apps and sites do to protect your personal data? How hackable are these apps, is there an API where 3rd parties (or hackers) can access your information, and what does that mean for your safety?

Privacy? What Privacy?

A cursory glance at popular dating apps’ privacy policies isn’t exactly comforting. For example, Tinder states, “you should not expect that your personal information, chats, or other communications will always remain secure.” Bumble isn’t much better (“We cannot guarantee the security of your personal data while it is being transmitted to our site and any transmission is at your own risk”) and neither is OKCupid (“As with all technology companies, although we take steps to secure your information, we do not promise, and you should not expect, that your personal information will always remain secure”).

Granted, these are just a few examples, but they paint a concerning picture. These apps and sites house massive amounts of sensitive data—names, locations, birth dates, email addresses, personal interests, and even health statuses—and don’t accept liability for security breaches.

If you’re thinking, “these types of hacks or lapses in privacy aren’t common, there’s no need to panic,” you’re sadly mistaken.

[You may also like: Are Your Applications Secure?]

Hacking Love

The fact is, dating sites and apps have a history of being hacked. In 2015, Ashley Madison, a site for “affairs and discreet married dating,” was notoriously hacked and nearly 37 million customers’ private data was published by hackers.

The following year, BeautifulPeople.com was hacked and the responsible cyber criminals sold the data of 1.1 million users, including personal habits, weight, height, eye color, job, education and more, online. Then there’s the AdultFriendFinder hack, Tinder profile scraping, Jack’d data exposure, and now the very shady practice of data brokers selling online data profiles by the millions.

In other words, between the apparent lack of protection and cyber criminals vying to get a hold of such personal data—whether to sell it for profit, publicly embarrass users, steal identities or build a profile on individuals for compromise—the opportunity and motivation to hack dating apps are high.

[You may also like: Here’s Why Foreign Intelligence Agencies Want Your Data]

Protect Yourself

Dating is hard enough as it is, without the threat of data breaches. So how can you best protect yourself?

First things first: Before you sign up for an app, conduct your due diligence. Does your app use SSL-encrypted data transfers? Does it share your data with third parties? Does it authorize through Facebook (which lacks certificate verification)? Does the company accept any liability to protect your data?
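If you want to check the transport-security question yourself, a quick and rough test is whether the service even answers over HTTPS with a certificate that validates. The sketch below uses Python with the requests library; the hostname is a placeholder, and passing this check says nothing about how the service stores or shares your data.

    import requests

    def serves_verified_https(hostname):
        """True if the host answers over HTTPS with a certificate that validates."""
        try:
            # verify=True (the default) makes requests validate the certificate chain.
            requests.get(f"https://{hostname}", timeout=10, verify=True)
            return True
        except requests.exceptions.SSLError:
            return False   # expired, self-signed or mismatched certificate
        except requests.exceptions.RequestException:
            return False   # no HTTPS listener, DNS failure, timeout, etc.

    if __name__ == "__main__":
        host = "api.example-dating-app.com"  # placeholder hostname
        status = "verified HTTPS" if serves_verified_https(host) else "TLS check failed"
        print(f"{host}: {status}")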

[You may also like: Ensuring Data Privacy in Public Clouds]

Once you’ve joined a dating app or site, beware of what personal information you share. Oversharing details (education level, job, social media handles, contact information, religion, hobbies, information about your kids, etc.), especially when combined with geo-matching, allows creepy would-be daters to build a playbook on how to target or blackmail you. And if that data is breached and sold or otherwise publicly released, your reputation and safety could be at risk.

Likewise, switch up your profile photos. Because so many apps are connected via Facebook, using the same picture across social platforms lets potential criminals connect the dots and identify you, even if you use an anonymous handle.

Finally, you should use a VPN and ensure your mobile device is up-to-date with security features so that you mitigate cyber risks while you’re swiping left or right.

It’s always better to be safe and secure than sorry.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Attack Mitigation, Security

The Costs of Cyberattacks Are Real

February 13, 2019 — by Radware


Customers put their trust in companies to deliver on promises of security. Think about how quickly most people tick the boxes on required privacy agreements, likely without reading them. They want to believe the companies they choose to associate with have their best interests at heart and expect them to implement the necessary safeguards. The quickest way to lose customers is to betray that confidence, especially when it comes to their personal information.

Hackers understand that, too. They quickly adapt tools and techniques to disrupt that delicate balance. Executives from every business unit need to understand how cybersecurity affects the overall success of their businesses.

Long Lasting Impacts

In our digital world, businesses feel added pressure to maintain this social contract as the prevalence and severity of cyberattacks increase. Respondents to Radware’s global industry survey were definitely feeling the pain: ninety-three percent of organizations worldwide indicated that they suffered some kind of negative impact to their relationships with customers as a result of cyberattacks.

Data breaches have real and long-lasting business impacts. Quantifiable monetary losses can be directly tied to the aftermath of cyberattacks in lost revenue, unexpected budget expenditures and drops in stock values. Protracted repercussions are most likely to emerge as a result of negative customer experiences, damage to brand reputation and loss of customers.

[You may also like: How Cyberattacks Directly Impact Your Brand: New Radware Report]

Indeed, expenditures related to cyberattacks are often realized over the course of several years. Recent massive data breaches could have been avoided with careful security hygiene and attention to publicly reported system exploits.

The bottom line? Management boards and directorates should understand the impact of cyberattacks on their businesses. They should also prioritize how much liability they can absorb and what is considered a major risk to business continuity.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Attack Mitigation, DDoS, DDoS Attacks

What Do Banks and Cybersecurity Have in Common? Everything.

February 7, 2019 — by Radware


New cyber-security threats require new solutions. New solutions require a project to implement them. The problems and solutions seem infinite while budgets remain bounded. Therefore, the challenge becomes how to identify the priority threats, select the solutions that deliver the best ROI and stretch dollars to maximize your organization’s protection. Consultants and industry analysts can help, but they too can be costly options that don’t always provide the correct advice.

So how best to simplify the decision-making process? Use an analogy. Consider that every cybersecurity solution has a counterpart in the physical world. To illustrate this point, consider the security measures at banks. They make a perfect analogy, because banks are just like applications or computing environments; both contain valuables that criminals are eager to steal.

The first line of defense at a bank is the front door, which is designed to allow people to enter and leave while providing a first layer of defense against thieves. Network firewalls fulfill the same role within the realm of cyber security. They allow specific types of traffic to enter an organization’s network but block mischievous visitors from entering. While firewalls are an effective first line of defense, they’re not impervious. Just like surreptitious robbers such as Billy the Kid or John Dillinger, SSL/TLS-based encrypted attacks or nefarious malware can sneak through this digital “front door” via a standard port.

Past the entrance there is often a security guard, the equivalent of an IPS or anti-malware device. This “security guard,” typically an anti-malware and/or heuristic-based IPS function, seeks to identify unusual behavior or other indicators that trouble has entered the bank, such as somebody wearing a ski mask or perhaps carrying a concealed weapon.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Once the hacker gets past these perimeter security measures, they find themselves at the presentation layer of the application, or in the case of a bank, the teller. There is security here as well: first, authentication (do you have an account?) and second, two-factor authentication (an ATM card/security PIN). IPS and anti-malware devices work in concert with SIEM management solutions to serve as security cameras, performing additional security checks. Just like a bank leveraging the FBI’s Most Wanted List, these solutions leverage crowdsourcing and big-data analytics to analyze data from a massive global community and identify bank-robbing malware in advance.

A robber will often demand access to the bank’s vault. In the realm of IT, this is the database, where valuable information such as passwords, credit card or financial transaction information or healthcare data is stored. There are several ways of protecting this data, or at the very least, monitoring it. Encryption and database application monitoring solutions are the most common.

Adapting for the Future: DDoS Mitigation

To understand how and why cyber-security models will have to adapt to meet future threats, let’s outline three obstacles they’ll have to overcome in the near future: advanced DDoS mitigation, encrypted cyber-attacks, and DevOps and agile software development.

[You may also like: Agile, DevOps and Load Balancers: Evolution of Network Operations]

A DDoS attack is any cyber-attack that compromises a company’s website or network and impairs the organization’s ability to conduct business. Take an e-commerce business for example. If somebody wanted to prevent the organization from conducting business, it’s not necessary to hack the website but simply to make it difficult for visitors to access it.

Returning to the bank analogy, this is why banks and financial institutions employ multiple layers of security: it provides an integrated, redundant defense designed to meet a multitude of potential situations in the unlikely event a bank is robbed. This also includes the ability to quickly and effectively communicate with law enforcement. In the world of cyber security, multi-layered defense is also essential. Why? Because preparing for “common” DDoS attacks is no longer enough. With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever. This is why hybrid protection, which combines both on-premise and cloud-based mitigation services, is critical.

[You may also like: 8 Questions to Ask in DDoS Protection]

Why are there two systems when it comes to cyber security? Because it offers the best of both worlds. When a DDoS solution is deployed on-premise, organizations benefit from immediate and automatic attack detection and mitigation. Within a few seconds of the initiation of a cyber-assault, the online services are well protected and the attack is mitigated. However, an on-premise DDoS solution cannot handle volumetric network floods that saturate the Internet pipe. These attacks must be mitigated from the cloud.

Hybrid DDoS protections aspire to offer best-of-breed attack mitigation by combining on-premise and cloud mitigation into a single, integrated solution. The hybrid solution chooses the right mitigation location and technique based on attack characteristics. In the hybrid solution, attack detection and mitigation starts immediately and automatically using the on-premise attack mitigation device. This stops various attacks from diminishing the availability of the online services. All attacks are mitigated on-premise, unless they threaten to block the Internet pipe of the organization. In case of pipe saturation, the hybrid solution activates cloud mitigation and the traffic is diverted to the cloud, where it is scrubbed before being sent back to the enterprise.
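Stripped down to its core, the diversion decision described above is a capacity check against the organization's Internet pipe. The Python sketch below illustrates that logic only; the pipe size, threshold and labels are hypothetical, and real hybrid solutions also weigh attack characteristics rather than volume alone.

    PIPE_CAPACITY_GBPS = 10        # hypothetical Internet pipe size
    DIVERSION_THRESHOLD = 0.8      # divert before the pipe actually saturates

    def choose_mitigation(inbound_gbps, attack_detected):
        """Pick a mitigation location based on detected attack volume."""
        if not attack_detected:
            return "monitor"
        if inbound_gbps < PIPE_CAPACITY_GBPS * DIVERSION_THRESHOLD:
            return "mitigate-on-premise"        # appliance handles it locally
        # A volumetric flood threatens to saturate the pipe: divert traffic to
        # cloud scrubbing, passing along the attack details learned on-premise.
        return "divert-to-cloud-scrubbing"

    print(choose_mitigation(2, True))   # application-level attack stays on-premise
    print(choose_mitigation(9, True))   # 9 Gbps flood is diverted to the cloud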

[You may also like: Choosing the Right DDoS Solution – Part IV: Hybrid Protection]

An ideal hybrid solution also shares essential information about the attack between on-premise mitigation devices and cloud devices to accelerate and enhance the mitigation of the attack once it reaches the cloud.

Inspecting Encrypted Data

Companies have been encrypting data for well over 20 years. Today, over 50% of Internet traffic is encrypted. SSL/TLS encryption is still the most effective way to protect data, as it ties the encryption to both the source and destination. This is a double-edged sword, however. Hackers are now leveraging encryption to create new, stealthy attack vectors for malware infection and data exfiltration. In essence, they’re a wolf in sheep’s clothing. To stop hackers from leveraging SSL/TLS-based cyber-attacks, organizations require computing resources to inspect communications and ensure they’re not carrying malware. These increasing resource requirements make it challenging for anything but purpose-built hardware to conduct inspection.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

The equivalent in the banking world is twofold. If somebody were to enter wearing a ski mask, that person probably wouldn’t be allowed to conduct a transaction, or secondly, there can be additional security checks when somebody enters a bank and requests a large or unique withdrawal.

Dealing with DevOps and Agile Software Development

Lastly, how do we ensure that, as applications become more complex, they don’t become increasingly vulnerable, either from coding errors or from newly deployed functionality associated with DevOps or agile development practices? The problem is that most cyber-security solutions focus on stopping existing threats. To use our bank analogy again, existing security solutions mean that (ideally) a career criminal can’t enter a bank, someone carrying a concealed weapon is stopped, or somebody acting suspiciously is blocked from making a transaction. However, nothing stops somebody with no criminal background and no suspicious behavior from entering the bank. The bank’s security systems must be updated to look for other “indicators” that this person could represent a threat.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

In the world of cyber-security, the key is implementing a web application firewall that adapts to evolving threats and applications. A WAF accomplishes this by automatically detecting and protecting new web applications as they are added to the network via automatic policy generation. It should also minimize both false positives and false negatives. Why? Because, just like a bank, web applications are accessed both by desired legitimate users and by undesired attackers (malicious users whose goal is to harm the application and/or steal data). One of the biggest challenges in protecting web applications is the ability to accurately differentiate between the two and to identify and block security threats without disturbing legitimate traffic.

Adaptability is the Name of the Game

The world we live in can be a dangerous place, both physically and digitally. Threats are constantly changing, forcing both financial institutions and organizations to adapt their security solutions and processes. When contemplating the next steps, consider the following:

  • Use common sense and logic. The marketplace is saturated with offerings. Understand how a cybersecurity solution will fit into your existing infrastructure and the business value it will bring by keeping your organization up and running and your customers’ data secure.
  • Understand the long-term TCO of any cyber security solution you purchase.
  • The world is changing. Ensure that any cyber security solution you implement is designed to adapt to the constantly evolving threat landscape and your organization’s operational needs.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Application Security

HTTPS: The Myth of Secure Encrypted Traffic Exposed

February 5, 2019 — by Ben Zilberman


The S in HTTPS is supposed to mean that encrypted traffic is secure. For attackers, it just means a larger attack surface from which to launch assaults that exploit application vulnerabilities. How should organizations respond?

Most web traffic is encrypted to provide better privacy and security. As of 2018, over 70% of webpages are loaded over HTTPS. Radware expects this trend to continue until nearly all web traffic is encrypted. The major drivers pushing adoption are the availability of free SSL certificates and the perception that clear traffic is insecure.

While encrypting traffic is a vital practice for organizations, cyber criminals are not necessarily deterred by it. They are looking for ways to use encrypted traffic as a platform from which to launch attacks that can be difficult to detect and mitigate, especially at the application layer. As encrypted applications grow more complex, the potential attack surface grows with them. Organizations need to incorporate protection of the application layer as part of their overall network security strategies. Results from the global industry survey revealed a 10% increase in encrypted attacks on organizations in 2018.

Encrypted Application Layers

When planning protection for encrypted applications, it is important to consider all of the layers that are involved in delivering an application. It is not uncommon for application owners to focus on protecting the encrypted application layer while overlooking the lower layers in the stack which might be vulnerable. In many cases, protection selected for the application layer may itself be vulnerable to transport-layer attacks.

To ensure applications are protected, organizations need to analyze the following Open Systems Interconnection (OSI) layers:

  • Transport — In most encrypted applications, the underlying transport is TCP. TCP attacks come in many forms and volumes, so protection must be resilient enough to shield applications from attacks at the TCP layer. Some applications now use QUIC, which uses UDP as the underlying layer and adds reflection and amplification risks to the mix.
  • Session — The SSL itself is vulnerable. Once an SSL/TLS session is created, the server invests about 15 times more compute power than the client, which makes the session layer particularly vulnerable and attractive to attackers.
  • Application — Application attacks are the most complex type of attack, and encryption only makes it harder for security solutions to detect and mitigate them. Attackers often select specific areas in applications to generate a high request-to-load ratio, may attack several resources simultaneously to make detection harder, or may mimic legitimate user behavior in various ways to bypass common application security solutions. The size of an attack surface is determined by the application design. For example, in a login attack, botnets perform multiple login attempts from different sources to try to stress the application. The application login is always encrypted and requires resources on the application side such as a database, authentication gateway or identity service invocation. The attack does not require a high volume of traffic to affect the application, making it very hard to detect.

[You may also like: SSL Attacks – When Hackers Use Security Against You]

Environmental Aspects

Organizations also need to consider the overall environment and application structure because it greatly affects the selection of the ideal security design based on a vulnerability assessment.

  • Content Delivery Network — Applications using a content delivery network (CDN) create a challenge for security controls deployed at the origin. Technologies that use the source IP to analyze client application behavior only see the source IP of the CDN. There is a risk that the solutions will either over-mitigate and disrupt legitimate users or become ineffective. High rates of false positives show that protection based on source IP addresses alone is pointless. Instead, when using a CDN, the selected security technology should have the right measures to analyze attacks that originate behind it, including device fingerprinting or extraction of the original source from the application headers (see the sketch after this list).
  • Application Programming Interface — Application programming interface (API) usage is common in all applications. According to Radware’s The State of Web Application Security report, a third of attacks against APIs intend to cause a denial-of-service state. The security challenge here comes from the legitimate client side. Many solutions rely on various active user validation techniques to distinguish legitimate users from attackers. These techniques require that a real browser reside at the client. In the case of an API, there is often no real browser on the client side, so the behavior and the legitimate response to various validation challenges are different.
  • Mobile Applications — Like APIs, the client side is not a browser for a mobile application and cannot be expected to behave and respond like one. Mobile applications pose a challenge because they rely on different operating systems and use different browsers. Many security solutions were created based on former standards and common tools and have not yet fully adapted. The fact that mobile apps process a high amount of encrypted traffic increases the capacity and security challenges.
  • Directionality — Many security solutions only inspect inbound traffic to protect against availability threats. Directionality of traffic has significant implications on the protection efficiency because attacks usually target the egress path of the application. In such cases, there might not be an observed change in the incoming traffic profile, but the application might still become unavailable. An effective security solution must process both directions of traffic to protect against sophisticated application attacks.
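For the CDN point above, here is a minimal sketch of what "extraction of the original source from the application headers" can look like. It assumes the common X-Forwarded-For convention and a hypothetical list of trusted CDN edge addresses; because clients can forge this header, it is only meaningful when the connection really arrives from a trusted proxy.

    # Hypothetical set of CDN edge addresses trusted to append X-Forwarded-For.
    TRUSTED_CDN_PROXIES = {"203.0.113.10", "203.0.113.11"}

    def original_client_ip(remote_addr, headers):
        """Best-effort recovery of the real client IP behind a CDN."""
        if remote_addr not in TRUSTED_CDN_PROXIES:
            # The connection did not come through the CDN; the socket address is the client.
            return remote_addr
        forwarded = headers.get("X-Forwarded-For", "")
        # X-Forwarded-For is "client, proxy1, proxy2": the left-most entry is the
        # original client as seen by the first proxy in the chain.
        chain = [ip.strip() for ip in forwarded.split(",") if ip.strip()]
        return chain[0] if chain else remote_addr

    print(original_client_ip("203.0.113.10",
                             {"X-Forwarded-For": "198.51.100.7, 203.0.113.10"}))
    # -> 198.51.100.7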

[You may also like: Are Your Applications Secure?]

Regulatory Limitations

A major selection criterion for security solutions is regulatory compliance. In the case of encrypted attacks, compliance requirements examine whether traffic is decrypted, what parts of traffic are decrypted and where the decryption happens. The governing paradigm has always been that the more intrusive the solution, the more effective the security, but that is not necessarily the case here. Solutions show different levels of effectiveness for the same intrusiveness.

Encryption Protocols

The encryption protocol in use has implications for how security can be applied and what types of vulnerabilities it introduces. Specifically, TLS 1.3 improves data privacy but is expected to create challenges for security solutions that rely on eavesdropping on the encrypted connection. Users planning to upgrade to TLS 1.3 should consider the future resiliency of their solutions.

[You may also like: Adopt TLS 1.3 – Kill Two Birds with One Stone]

Attack Patterns

Determining attack patterns is the most important undertaking that organizations must master. Because there are so many layers that are vulnerable, attackers can easily change their tactics mid-attack. The motivation is normally twofold: first, inflicting maximum impact with minimal cost; second, making detection and mitigation difficult.

  • Distribution — The level of attack distribution is very important to the attacker. It impacts the variety of vectors that can be used and makes the job harder for the security controls. Most importantly, the more distributed the attack, the less traffic each attacking source has to generate. That way, behavior can better resemble legitimate users. Gaining control of a large botnet used to be difficult to do and extremely costly. With the growth in the IoT and corresponding IoT botnets, it is common to come across botnets consisting of hundreds of thousands of bots.
  • Overall Attack Rates — The overall attack traffic rate varies from one vector to another. Normally, the lower the layer, the higher the rate. At the application layer, attackers are able to generate low-rate attacks, which still generate significant impact. Security solutions should be able to handle both high- and low-rate attacks, without compromising user experience and SLA.
  • Rate per Attacker — Many security solutions in the availability space rely on the rate per source to detect attackers. This method is not always effective as highly distributed attacks proliferate.
  • Connection Rates — Available attack tools today can be divided into two major classes based on their connection behavior. The first class includes tools that open a single connection and generate many requests over it. The second includes tools that generate many connections with only a single request or very few requests on each connection. Security tools that can analyze connection behavior are more effective in discerning legitimate users from attackers (see the sketch after this list).
  • Session Rates — SSL/TLS session behavior has various distinct behavioral characteristics in legitimate users and browsers. The major target is to optimize performance and user experience. Attack traffic does not usually fully adhere to those norms, so its SSL session behavior is different. The ability to analyze encryption session behavior contributes to protecting both the encryption layer and the underlying application layer.
  • Application Rates — Because the application is the most complex part to attack, attackers have the greatest degree of freedom when it comes to application behavior. Attack patterns vary greatly from one attack to another in terms of how they appear in application behavior analyses. At the same time, the rate of change in the application itself is so high that it cannot be followed manually. Security tools that can automatically analyze a large variety of application aspects and, at the same time, adapt to changes quickly are expected to be more effective in protecting against encrypted application attacks.
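As a toy illustration of the connection-rate point above, the sketch below groups sources by their requests-per-connection ratio. The thresholds and sample numbers are arbitrary illustrations, not recommendations or measured values.

    def classify_source(connections, requests):
        """Toy classification of a source by its requests-per-connection ratio."""
        if connections == 0:
            return "no traffic"
        ratio = requests / connections
        if ratio > 100:
            return "suspicious: few connections, very many requests"
        if ratio < 1.5 and connections > 500:
            return "suspicious: many connections, almost no requests"
        return "looks legitimate"

    # Hypothetical per-source observations over a one-minute window.
    observed = {
        "198.51.100.20": (2, 4000),     # single connection hammering requests
        "198.51.100.21": (1200, 1300),  # connection churn, ~1 request each
        "198.51.100.22": (6, 40),       # typical browser behavior
    }
    for ip, (conns, reqs) in observed.items():
        print(ip, "->", classify_source(conns, reqs))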

End-to-End Protection

Protection from encrypted availability attacks is becoming a mandatory requirement for organizations. At the same time, it is one of the more complex tasks to thoroughly perform without leaving blind spots. When considering a protection strategy, it is important to take into account various aspects of the risk and to make sure that, with all good intentions, the side door is not left open.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Attack Types & Vectors, Botnets

Attackers Are Leveraging Automation

January 31, 2019 — by Radware


Cybercriminals are weaponizing automation and machine learning to create increasingly evasive attack vectors, and the internet of things (IoT) has proven to be the catalyst driving this trend. IoT is the birthplace of many of the new types of automated bots and malware.

At the forefront are botnets, which are increasingly sophisticated, lethal and highly automated digitized armies running amok on corporate networks. For example, hackers now leverage botnets to conduct early exploitation and network reconnaissance prior to unleashing an attack.

The Mirai botnet, which was made famous by its use in the 2016 attack on DNS provider Dyn, along with its subsequent variants, embodies many of these characteristics. It leverages a network-scanning and attack architecture capable of identifying “competing” malware and removing it from the IoT device to block remote administrative control. In addition, it leverages the infamous Water Torture attack to generate randomized domain names on a DNS infrastructure. Follow-up variants use automation to allow the malware to craft malicious queries in real time.
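One common way to spot Water Torture-style floods is to measure how random the generated subdomain labels look. The sketch below scores labels by Shannon entropy; the length and entropy thresholds are illustrative assumptions, and real detectors combine this with query rates and NXDOMAIN ratios.

    import math
    from collections import Counter

    def shannon_entropy(label):
        """Bits of entropy per character in a DNS label."""
        counts = Counter(label)
        total = len(label)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def looks_randomized(fqdn, zone, min_len=8, threshold=3.5):
        """Flag subdomains of `zone` whose left-most label looks machine-generated."""
        if not fqdn.endswith("." + zone):
            return False
        label = fqdn[: -(len(zone) + 1)].split(".")[0]
        return len(label) >= min_len and shannon_entropy(label) >= threshold

    queries = ["www.example.com", "mail.example.com",
               "xk2r9qvz7t1m.example.com", "a8f3kz01pqld.example.com"]
    for q in queries:
        print(f"{q}: randomized={looks_randomized(q, 'example.com')}")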

[You may also like: A Quick History of IoT Botnets]

Modern-day malware is an equally sophisticated multi-vector cyberattack weapon designed to elude detection using an array of evasion tools and camouflage techniques. Hackers now leverage machine learning to create custom malware that defeats anti-malware defenses. One example is Generative Adversarial Network algorithms that can bypass black-box machine-learning models. In another example, a cybersecurity company adapted Elon Musk’s OpenAI framework to create forms of malware that mitigation solutions couldn’t detect.

Automation for Detection and Mitigation

So how does a network security team improve its ability to deal with these increasingly multifarious cyberattacks? Fight fire with fire. Automated cybersecurity solutions provide the data-processing muscle to mitigate these advanced threats.

Executives clearly understand this and are ready to take advantage of automation. According to Radware’s C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts report, the vast majority of executives (71%) report shifting more of their network security budget into technologies that employ machine learning and automation. The need to protect increasingly heterogeneous infrastructures, a shortage in cybersecurity talent and increasingly dangerous cyberthreats were indicated as the primary drivers of this fiscal shift.

In addition, the trust factor is increasing. Four in 10 executives trust automated systems more than humans to protect their organization against cyberattacks.

[You may also like: Looking Past the Hype to Discover the Real Potential of AI]

Traditional DDoS solutions use rate limiting and manual signature creation to mitigate attacks. Rate limiting can be effective but can also result in a high number of false positives. As a result, manual signatures are then used to block offending traffic to reduce the number of false positives. Moreover, manual signatures take time to create because identifying offending traffic is only possible AFTER the attack starts. With machine-learning botnets now breaching defenses in less than 20 seconds, this hands-on strategy does not suffice.

Automation and, more specifically, machine learning overcome the drawbacks of manual signature creation and rate-limiting protection by automatically creating signatures and adapting protections to changing attack vectors. Machine learning leverages advanced mathematical models and algorithms to look at baseline network parameters, assess network behavior, automatically create attack signatures and adapt security configurations and/or policies to mitigate attacks. Machine learning transitions an organization’s DDoS protection strategy from manual, ratio- and rate-based protection to behavioral-based detection and mitigation.
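As a rough illustration of what baselining and behavioral detection mean in practice, the sketch below keeps an exponentially weighted baseline of a single traffic metric and flags large deviations. The smoothing factor, warm-up period and 3-sigma rule are textbook defaults used for illustration, not any vendor's actual model.

    class BehavioralBaseline:
        """Exponentially weighted baseline of one traffic metric (e.g., requests/sec)."""

        def __init__(self, alpha=0.05, sigmas=3.0, warmup=20):
            self.alpha = alpha      # smoothing factor for the running statistics
            self.sigmas = sigmas    # deviations required to call a sample anomalous
            self.warmup = warmup    # samples to observe before alerting
            self.count = 0
            self.mean = None
            self.var = 0.0

        def observe(self, value):
            """Update the baseline and return True if `value` looks anomalous."""
            self.count += 1
            if self.mean is None:
                self.mean = float(value)
                return False
            deviation = value - self.mean
            threshold = self.sigmas * (self.var ** 0.5)
            anomalous = self.count > self.warmup and abs(deviation) > threshold
            # Update the running mean and variance after scoring the sample.
            self.mean += self.alpha * deviation
            self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
            return anomalous

    baseline = BehavioralBaseline()
    for rps in [100, 104, 98, 101, 99, 103, 97, 102] * 10:   # normal traffic
        baseline.observe(rps)
    print(baseline.observe(105))   # ordinary fluctuation -> False
    print(baseline.observe(900))   # sudden flood         -> True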

The Final Step: Self-Learning

A market-leading DDoS protection solution combines machine-learning capabilities with negative and positive security protection models to mitigate automated attack vectors, such as the aforementioned DNS Water Torture attacks made notorious by Mirai. By employing machine learning and ingress-only positive protection models, this sort of attack vector is eliminated, regardless of whether the protected DNS infrastructure is an authoritative or a recursive DNS.

The final step of automated cybersecurity is automated self-learning. DDoS mitigation solutions should leverage a deep neural network (DNN) that conducts post-analysis of all the generated data, isolates known attack information and feeds those data points back into the machine learning algorithms. DNNs require massive amounts of storage and computing power and can be prohibitively expensive to house and manage within a privately hosted data center.

[You may also like: Are Application Testing Tools Still Relevant with Self Learning WAFs?]

As a result, a DNN is ideally housed and maintained by your organization’s DDoS mitigation vendor, which leverages its network of cloud-based scrubbing centers (and the massive volumes of threat intelligence data that it collects) to process this information via big data analytics and automatically feed it back into your organization’s DDoS mitigation solution via a real-time threat intelligence feed. This turns the input of thousands of malicious IPs and new attack signatures into an automated process that no SOC team could ever hope to accomplish manually.

The result is a DDoS mitigation system that automatically collects data from multiple sources and leverages machine learning to conduct zero-day characterization. Attack signatures and security policies are updated automatically, without relying on a SOC engineer, who is then free to conduct higher-level analysis, system management and threat research.

Automation is the future of cybersecurity. As cybercriminals become more savvy and increasingly rely on automation to achieve their mischievous goals, automation and machine learning will become the cornerstone of cybersecurity solutions to effectively combat the onslaught from the next generation of attacks. It will allow organizations to improve the ability to scale network security teams, minimize human errors and safeguard digital assets to ensure brand reputation and the customer experience.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Attack Types & Vectors, Security

The Rise in Cryptomining

January 29, 2019 — by Radware


There are four primary motivations for cyberattacks: crime, hacktivism, espionage and war. Setting aside nation-state sponsored groups, the largest faction of attackers are cybercriminals, individuals or well-established organizations looking to turn a profit.

For the last several years, ransom-based cyberattacks and ransomware had been the financial modus operandi for hackers, but 2018 flipped the coin to unveil a new attack vector: cryptomining.

Always Crypto

Radware’s Malware Threat Research Group monitored this phenomenon throughout the year and identified two recurring trends. Some groups use cryptomining to score a quick, easy profit by infecting machines and mining cryptocurrencies. Other groups use cryptomining as an ongoing source of income, simply by reselling installations on infected machines or selling harvested data.

While there is no definitive reason why cryptomining has become popular, what is clear are some of the advantages it has over older attack methods:

  • It’s easy – There’s no need to develop a cryptomining tool or even buy one. An attacker can simply download a free tool onto the victim’s machine and run it with a simple configuration that instructs it to mine to the attacker’s pool.
  • CPU – While Bitcoin requires a graphics processing unit (GPU) to perform effective mining, other cryptocurrencies, such as Monero, require only a CPU to mine effectively. Since every machine has a CPU, including web cameras, smartphones, smart TVs and computers, there are many potential targets.
  • Minimal footprint — Other attack types require the hackers to market their “goods” or to actively use the information they acquired for malicious purposes. In cryptomining, the money moves directly to the attacker.
  • Value — The value of cryptocurrencies skyrocketed in late 2017 and early 2018, and the outbreak quickly followed. More recently, as monetary value has declined, so has the number of incidents.
  • Multipurpose hack — After successfully infecting a machine, hackers can leverage the installation of the malware program for multiple activities. Stealing credentials from machines? Why not use those machines to cryptomine as well (and vice versa)? Selling data mining installations on machines to other people? Add a cryptomining tool to run at the same time.

[You may also like: Top Cryptomining Malware. Top Ransomware.]

The Malware Ecosystem

There are a few popular ways for cybercriminals to launch cryptomining attacks:

  • Information stealing — By distributing data-harvesting malware, attackers steal access credentials or files (photos, documents, etc.), and even identities found on an infected machine, its browser or inside the network. The cybercriminals then generally monetize the stolen data. In the case of bank credentials, the hackers use the information to steal money from accounts. They may also sell the stolen data through an underground market on the dark web to other hackers. Credit cards, social security numbers and medical records go for just a few dollars. Social media accounts and identities are popular as well. Facebook and Instagram accounts have been hijacked and used for propagation.
  • Downloaders — Malware is distributed with simple capabilities to download additional malware and install it on other systems. The motivation is to infect as many machines as possible. The next step is to sell malware installations on those machines. Apparently, even infected machines command brand premiums: machines from a Fortune 500 company cost a lot more.
  • Ransomware — Machines are infected with a malware that encrypts files, which are usually valuable to the victim, such as photos, Microsoft files (.xlsx,.docx) and Adobe Acrobat files. Victims are then asked to pay a significant amount of money in order to get a tool to decrypt their files. This attack was first introduced against individuals but grew exponentially when hackers figured out that organizations can pay a higher premium.
  • DDoS for ransom (RDoS) — Attackers send targets a letter that threatens a DDoS attack on a certain day and time unless the organization makes a payment, usually via Bitcoin. Often hackers know the IP address of the targeted server or network and launch a small-scale attack as a preview of what could follow.

[You may also like: Malicious Cryptocurrency Mining: The Road Ahead]

Social Propagation

Malware protection is a mature market with many competitors. It is a challenge for hackers to create a one-size-fits-all zero-day attack that will run on as many operating systems, servers and endpoints as possible, as well as bypass most, if not all, security solutions. So in addition to seeking ways to penetrate protection engines, hackers are also looking for ways to bypass them.

During the past year, Radware noticed several campaigns in which malware was created to hijack social network credentials. That enabled hackers to spread across the social network while accessing legitimate files and private information on the machine (or computing resources, in the context of cryptomining).

[You may also like: 5 Ways Modern Malware Defeats Cyber Defenses & What You Can Do About It]

Here are a few examples:

  • Nigelthorn – Radware first detected this campaign, which involved a malicious Chrome extension, in a customer’s network. The hackers bypassed Google Chrome’s native security mechanisms to disguise the malware as a legitimate extension. The group managed to infect more than 100,000 machines. The purpose of the extension was cryptomining Monero currency on the host machine, as well as stealing the credentials of the victim’s Facebook and/or Instagram accounts. The credentials were abused to propagate the attack through the Facebook user’s contact network. It is also possible that the credentials were later sold on the black market.
  • Stresspaint — In this spree, hackers used a benign-looking drawing application to hijack Facebook users’ cookies. They deceived victims by using what appeared to be a legitimate AOL.net URL but was actually a Unicode look-alike; the true address is “xn--80a2a18a.net” (see the sketch after this list). The attackers were building a database of users with their contact networks, business pages and payment details. Radware suspects that the ultimate goal was to use this information to fund public opinion influence campaigns on the social network.
  • CodeFork — This campaign was also detected in some of Radware’s customers’ networks when the infected machines tried to communicate with their C&C servers. Radware intercepted the communication and determined that this group was infecting machines in order to sell their installations. The group has been active for several years, during which time we have seen them distributing different malware to the infected machines. The 2018 attack included an enhancement that distributes cryptomining malware.
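The look-alike URL trick in the Stresspaint example can be demonstrated in a few lines of Python: converting a hostname to its ASCII-compatible ("xn--") form exposes internationalized look-alikes that render like familiar brands. The hostnames below are illustrative only (the second begins with a Cyrillic letter); the point is the technique, not the specific strings or the actual domain used in the campaign.

    def ace_form(hostname):
        """ASCII-compatible (punycode) form of a possibly internationalized hostname."""
        return hostname.encode("idna").decode("ascii")

    def is_lookalike(hostname):
        """Any non-ASCII label becomes an xn-- label when encoded, revealing look-alikes."""
        return "xn--" in ace_form(hostname)

    # The second hostname begins with Cyrillic U+0430, which renders like Latin 'a'.
    for host in ["aol.net", "аol.example"]:
        print(f"{host!r} -> {ace_form(host)!r}  lookalike={is_lookalike(host)}")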

Moving Forward

Radware believes that the cryptomining trend will persist in 2019. The motivation of financial gain will continue, pushing attackers to try to profit from malware. In addition, hackers of all types can potentially add cryptomining capabilities to the infected machines that they already control. Our concern is that during the next phase, hackers will invest their profits in machine-learning capabilities to find ways to access and exploit resources in networks and applications.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Cloud Computing, Cloud Security

Ensuring Data Privacy in Public Clouds

January 24, 2019 — by Radware


Most enterprises spread data and applications across multiple cloud providers, typically referred to as a multicloud approach. While it is in the best interest of public cloud providers to offer network security as part of their service offerings, every public cloud provider utilizes different hardware and software security policies, methods and mechanisms, creating a challenge for the enterprise to maintain the exact same policy and configuration across all infrastructures. Public cloud providers typically meet basic security standards in an effort to standardize how they monitor and mitigate threats across their entire customer base. Seventy percent of organizations reported using public cloud providers with varied approaches to security management. Moreover, enterprises typically prefer neutral security vendors instead of over-relying on public cloud vendors to protect their workloads. As the multicloud approach expands, it is important to centralize all security aspects.

When Your Inside Is Out, Your Outside Is In

Moving workloads to publicly hosted environments leads to new threats, previously unknown in the world of premise-based computing. Computing resources hosted inside an organization’s perimeter are more easily controlled. Administrators have immediate physical access, and the workload’s surface exposure to insider threats is limited. When those same resources are moved to the public cloud, they are no longer under the direct control of the organization. Administrators no longer have physical access to their workloads. Even the most sensitive configurations must be done from afar via remote connections. Putting internal resources in the outside world results in a far larger attack surface with long, undefined boundaries of the security perimeter.

In other words, when your inside is out, then your outside is in.

[You may also like: Ensuring a Secure Cloud Journey in a World of Containers]

External threats that could previously be easily contained can now strike directly at the heart of an organization’s workloads. Hackers can have identical access to workloads as do the administrators managing them. In effect, the whole world is now an insider threat.

In such circumstances, restricting the permissions to access an organization’s workloads and hardening its security configuration are key aspects of workload security.

Poor Security Hygiene Leaves You Exposed

Cloud environments make it very easy to grant access permissions and very difficult to keep track of who has them. With customer demands constantly increasing and development teams put under pressure to quickly roll out new enhancements, many organizations spin up new resources and grant excessive permissions on a routine basis. This is particularly true in many DevOps environments where speed and agility are highly valued and security concerns are often secondary.

Over time, the gap between the permissions that users have and the permissions that they actually need (and use) becomes a significant crack in the organization’s security posture. Promiscuous permissions leave workloads vulnerable to data theft and resource exploitation should any of the users who have access permissions to them become compromised. As a result, misconfiguration of access permissions (that is, giving permissions to too many people and/or granting permissions that are overly generous) becomes the most urgent security threat that organizations need to address in public cloud environments.

[You may also like: Considerations for Load Balancers When Migrating Applications to the Cloud]

The Glaring Issue of Misconfiguration

Public cloud providers offer identity access management tools for enterprises to control access to applications, services and databases based on permission policies. It is the responsibility of enterprises to deploy security policies that determine what entities are allowed to connect with other entities or resources in the network. These policies are usually a set of static definitions and rules that control what entities are valid to, for example, run an API or access data.

One of the biggest threats to the public cloud is misconfiguration. If permission policies are not managed properly by an enterprise with the tools offered by the public cloud provider, excessive permissions will expand the attack surface, enabling hackers to exploit one entry point to gain access to the entire network.

Moreover, common misconfiguration scenarios result from a DevOps engineer who uses predefined permission templates, called managed permission policies, in which the granted standardized policy may contain wider permissions than needed. The result is excessive permissions that are never used. Misconfigurations can cause accidental exposure of data, services or machines to the internet, as well as leave doors wide open for attackers.

[You may also like: The Hybrid Cloud Habit You Need to Break]

For example, an attacker can steal data by using the security credentials of a DevOps engineer gathered in a phishing attack. The attacker leverages the privileged role to take a snapshot of elastic block storage (EBS) to steal data, then shares the EBS snapshot and data with an account in another public network without installing anything. The attacker is able to leverage a role with excessive permissions to create a new machine at the beginning of the attack, infiltrate deeper into the network to share AMI and RDS snapshots (Amazon Machine Images and Relational Database Service, respectively), and then unshare the resources.
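A hardening check aimed at exactly this scenario is to audit who your EBS snapshots are shared with. The sketch below is a minimal Python/boto3 example, limited to the current region and to listing only (no remediation); it flags snapshots whose create-volume permission is public or shared with another account.

    import boto3

    ec2 = boto3.client("ec2")

    def shared_snapshots():
        """Yield (snapshot_id, shared_with) for snapshots shared outside this account."""
        paginator = ec2.get_paginator("describe_snapshots")
        for page in paginator.paginate(OwnerIds=["self"]):
            for snap in page["Snapshots"]:
                attr = ec2.describe_snapshot_attribute(
                    SnapshotId=snap["SnapshotId"],
                    Attribute="createVolumePermission",
                )
                for perm in attr["CreateVolumePermissions"]:
                    # 'Group': 'all' means the snapshot is public; a 'UserId' entry
                    # means it is shared with another AWS account.
                    yield snap["SnapshotId"], perm.get("Group") or perm.get("UserId")

    if __name__ == "__main__":
        for snapshot_id, shared_with in shared_snapshots():
            print(f"Snapshot {snapshot_id} is shared with: {shared_with}")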

Year over year in Radware’s global industry survey, the most frequently mentioned security challenges encountered when migrating applications to the cloud are governance issues, followed by skill shortage and the complexity of managing security policies. All contribute to the high rate of excessive permissions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.
