
Cloud Security

Security Considerations for Cloud Hosted Services

July 31, 2019 — by Prakash Sinha


A multi-cloud approach enables organizations to move application services among public cloud providers depending on the desired service level and price point. Most organizations use multiple cloud providers, some in addition to their private cloud and on-premises deployments.

Multi-cloud subsumes and is an evolution of hybrid cloud. According to IDC, enterprise adoption of multi-cloud services has moved into the mainstream; 85% of organizations are using services from multiple cloud providers while 93% of organizations will use services from multiple cloud providers within the next 12 months.

C-level executives are pushing a “cloud first” policy for their IT service capabilities.

Multi vs. Hybrid Cloud Deployment

Multi-cloud may include any combination of public clouds (e.g., Microsoft Azure, AWS), SaaS applications (e.g., Office 365, Salesforce), and private clouds (e.g., OpenStack, VMware, KVM).

[You may also like: Transforming Into a Multicloud Environment]

Hybrid cloud deployments may be permanent, serving a business need such as keeping proprietary data or intellectual property within the organization's control, while public clouds may be maintained to stage an eventual transition to the cloud, whether public or private.

Sometimes organizations adopt multi-cloud deployments to let DevOps test their services and reduce shadow IT, or to enhance disaster recovery and scalability in times of need.

Security Considerations

As organizations transition to the cloud, availability, management and security should all be top-of-mind concerns, particularly in the move to adopt containers. This concern is evident in a survey conducted by IDC in April 2017.

In addition to using built-in tools for container security, traditional approaches to security still apply to services delivered through the cloud.

Many container application services are composed using application programming interfaces (APIs) and are accessible over the web, leaving them open to malicious attacks. Such deployments also increase the attack surface, parts of which may be beyond an organization's direct control.

[You may also like: How to Prevent Real-Time API Abuse]

Multi-Pronged Prevention

As hackers probe network and application vulnerability in multiple ways to gain access to sensitive data, the prevention strategy for unauthorized access needs to be multi-pronged as well:

  • Routinely applying security patches
  • Preventing denial of service attacks
  • Preventing rogue ports/applications from running in the enterprise or in hosted container applications in the cloud
  • Routine vulnerability assessment scans on container applications
  • Preventing bots from targeting applications and systems while being able to differentiate between good bots and bad bots
  • Scanning application source code for vulnerabilities and fixing them, or using preventive measures such as deploying web application firewalls
  • Encrypting the data at rest and in motion
  • Preventing malicious access by validating users before they can access an application
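As a minimal illustration of the rogue-port check in the list above, a script might compare the ports a host is actually listening on against an approved baseline. The port numbers and data source here are hypothetical; in practice the observed list would come from a port scan or an inventory tool:

```python
# Hypothetical sketch: flag listening ports that are not on an approved baseline.
# In practice the observed set would come from `ss -tln`, netstat, or a CMDB export.

APPROVED_PORTS = {22, 80, 443}  # assumption: the org's sanctioned services

def find_rogue_ports(listening_ports):
    """Return ports that are open but not explicitly approved, sorted."""
    return sorted(set(listening_ports) - APPROVED_PORTS)

if __name__ == "__main__":
    observed = [22, 80, 443, 6379, 31337]  # e.g. parsed from a scan result
    rogue = find_rogue_ports(observed)
    if rogue:
        print(f"ALERT: unapproved ports open: {rogue}")
```

Running such a check routinely (alongside the patching and vulnerability scans above) turns the bullet list into an automated control rather than a manual audit.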

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

It is important to protect business-critical assets by adapting your security posture to a varying threat landscape. You can do that by gaining visibility into multiple threat vectors using SIEM tools and analytics, and by adopting security solutions such as SSL inspection and threat mitigation, intrusion detection and prevention, network firewalls, DDoS prevention, identity and access management (IAM), data leak prevention (DLP), and web application firewalls.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Cloud Security

Have Crypto-Miners Infiltrated Your Public Cloud?

July 16, 2019 — by Haim Zelikovsky


How do you know if bad actors are siphoning off your power and racking up huge bills for your organization? These silent malware scripts could be infecting your public cloud infrastructure right now, but would you even know it? 

Crypto Basics

The concept of crypto-jacking is fairly simple. A script (or malware) infects a host computer and silently steals CPU power for the purpose of mining crypto-currency. 

It first gained popularity in 2017 with ThePirateBay website, which infected its visitors with the malware. Its popularity surged in 2018 with companies such as Coinhive, which promised ad-free web browsing in exchange for CPU power. Coinhive and others like it lowered the barrier to entry for criminals and were quickly exploited by ‘crypto-gangs’ that leveraged the scripts to infect multitudes of popular websites across the globe and mine Monero.

[You may also like: The Rise in Cryptomining]

Most individuals could not detect the malware unless it over-taxed the device. Common symptoms of crypto-jacking malware include device performance degradation, overheating batteries, device malfunction, and increased power consumption. During its peak in December 2017, Symantec claimed to have blocked more than 8 million cryptojacking events across its customer base.

Shifting Targets

Then, market conditions changed. Many endpoint security solutions learned to identify and blacklist crypto-mining malware running on individual endpoints. Coinhive (and many copycat companies) shut down. The price of crypto-currency crashed, making smaller mining operations unprofitable. So hackers looking for a larger payoff continued to develop more sophisticated crypto-jacking malware and went hunting for bigger fish.

It is no surprise that crypto-miners have shifted the targets of their malware from individuals to enterprises. Public cloud infrastructure is an incredibly attractive target. Even an army of infected personal devices can’t deliver the kind of concentrated, seemingly unlimited CPU power of a large enterprise’s public cloud infrastructure. In the eyes of a miner, it’s like looking at a mountain of gold, and often that gold is under-protected.

Digital transformation has pushed the migration of most enterprise networks into some form of public cloud infrastructure. In doing so, these companies have inadvertently increased their attack surface and handed access to their developers and DevOps teams.

Essentially, due to the dynamic nature of public cloud environments (which makes it harder to keep a stable and hardened environment over time), as well as the ease with which permissions are granted to developers and DevOps, the attack surface is dramatically increased.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Hackers of all types have identified and exploited these security weaknesses, and crypto-jackers are no exception. Today, there are thousands of different crypto-jacking malware variants in the wild, and crypto-jacking still generates huge profits for hackers.

In fact, crypto-jackers exploited vulnerable Jenkins servers to mine more than $3M worth of Monero. Jenkins, an open-source continuous-integration and automation server written in Java, is widely used across the globe, with a growing community of more than 1 million users; that install base translates into massive numbers of potentially unpatched servers, which made it a desirable target for crypto-jackers. Tesla also suffered losses and public embarrassment when crypto-jackers targeted an unprotected Kubernetes console and used it to spin up large numbers of containers to run their own mining operation on Tesla’s cloud account.

[You may also like: Managing Security Risks in the Cloud]

Protect Yourself

So what can organizations do to keep the crypto-miners out of their public clouds? 

  • Secure the public cloud credentials. If your public cloud credentials are breached, attackers can leverage them to launch numerous types of attacks against your public cloud assets, of which crypto-jacking is one.  
  • Be extra careful about public exposure of hosts. Hackers can exploit unpatched, vulnerable hosts exposed to the internet to install crypto-mining clients and harness cloud-native infrastructure for mining activity.
  • Limit excessive permissions. In the public cloud, your permissions are the attack surface. Always employ the principle of least privilege, and continuously limit excessive permissions.
  • Install public cloud workload protection services that can detect and block crypto-mining activity. These should include automatic detection of anomalous activity in your public cloud, correlation of events indicative of malicious activity, and scripts that automatically block such activity upon detection.
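As a rough sketch of the anomaly-detection idea in the last bullet, a monitor might flag cloud instances whose CPU utilization stays near saturation for an extended window, a classic crypto-mining signature. The thresholds and data shapes below are illustrative assumptions, not values from any specific product:

```python
# Illustrative sketch: flag instances whose CPU stays pegged for a full window,
# a typical crypto-mining symptom. Thresholds are assumptions, not vendor defaults.

CPU_THRESHOLD = 90.0  # percent utilization considered "pegged"
WINDOW = 6            # consecutive samples (e.g. six 5-minute samples)

def sustained_high_cpu(samples, threshold=CPU_THRESHOLD, window=WINDOW):
    """True if any `window` consecutive samples all exceed `threshold`."""
    run = 0
    for s in samples:
        run = run + 1 if s > threshold else 0
        if run >= window:
            return True
    return False

def flag_miners(metrics):
    """metrics: {instance_id: [cpu_percent, ...]} -> suspicious instance ids."""
    return [iid for iid, series in metrics.items() if sustained_high_cpu(series)]
```

A real workload protection service would correlate this signal with others (new outbound connections to mining pools, unusual API calls) before blocking, but the windowed-threshold idea is the core of it.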

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

DDoS, Security

Why Hybrid Always-On Protection Is Your Best Bet

June 19, 2019 — by Eyal Arazi


Users today want more. The ubiquity and convenience of online competition means that customers want everything better, faster, and cheaper. One key component of the user experience is service availability. Customers expect applications and online services to be constantly available and responsive.

The problem, however, is that a new generation of larger and more sophisticated Distributed Denial of Service (DDoS) attacks is making DDoS protection a more challenging task than ever before. Massive IoT botnets are resulting in ever-larger volumetric DDoS attacks, while more sophisticated application-layer attacks find new ways of exhausting server resources. Above all, the ongoing shift to encrypted traffic is creating a new challenge with potent SSL DDoS floods.

Traditional DDoS defenses – whether premise-based or cloud-based – provide incomplete solutions that force inherent trade-offs between high-capacity volumetric protection, protection against sophisticated application-layer DDoS attacks, and handling of SSL certificates. The solution, therefore, is to adopt a new hybrid DDoS protection model that combines premise-based appliances with an always-on cloud service.

Full Protection Requires Looking Both Ways

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.

[You may also like: DDoS Protection Requires Looking Both Ways]

Attacks such as large-file DDoS attacks, ACK floods, scanning attacks, and others exploit the outbound communication channel for attacks that cannot be identified just by looking at ingress traffic. Such attacks are executed by sending small numbers of inbound requests, which have an asymmetric and disproportionate impact either on the outbound channel, or computing resources inside the network.
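To make the asymmetry concrete, a bidirectional monitor might compare per-client outbound-to-inbound byte ratios and flag sources whose small requests trigger disproportionately large responses. The ratio threshold and data shape here are assumptions for illustration:

```python
# Sketch: detect asymmetric (e.g. large-file) DDoS patterns by comparing
# per-client inbound vs. outbound byte counts. The 100x ratio is illustrative.

ASYMMETRY_RATIO = 100.0  # assumption: responses 100x larger than requests

def asymmetric_sources(flows, ratio=ASYMMETRY_RATIO):
    """flows: {src_ip: (bytes_in, bytes_out)} -> ips whose responses dwarf requests."""
    suspects = []
    for ip, (bytes_in, bytes_out) in flows.items():
        if bytes_in > 0 and bytes_out / bytes_in >= ratio:
            suspects.append(ip)
    return suspects
```

This is exactly the kind of check an inbound-only defense cannot perform: without visibility into the outbound leg, the tiny requests look harmless.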

SSL is Creating New Challenges

On top of that, SSL/TLS traffic encryption is adding another layer of complexity. Within a short time, the majority of internet traffic has become encrypted. Traffic encryption helps secure customer data, and users now expect security to be part of the service experience. According to the Mozilla Foundation’s Let’s Encrypt project, nearly 80% of worldwide internet traffic is already encrypted, and the rate is constantly growing.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

Ironically, while SSL/TLS is critical for securing user data, it also creates significant management challenges, and exposes services to a new generation of powerful DDoS attacks:

  • Increased Potency of DDoS Attacks: SSL/TLS connections require up to 15 times more resources from the target server than from the requesting host. This means that hackers can launch devastating attacks using only a small number of connections, and quickly overwhelm server resources with SSL floods.
  • Masking of Data Payload: Moreover, encryption masks – by definition – the internal contents of traffic requests, preventing deep inspection of packets against malicious traffic. This limits the effectiveness of anti-DDoS defense layers, and the types of attacks they can detect. This is particularly true for application-layer (L7) DDoS attacks which hide under the coverage of SSL encryption.
  • SSL Key Exposure: Many organizational, national, or industry regulations forbid SSL keys from being shared with third-party entities. This creates a unique challenge for organizations that must provide the most secure user experience while also protecting their SSL keys from exposure.
  • Latency and Privacy Concerns: Offloading of SSL traffic in the cloud is usually a complex and time-consuming task. Most cloud-based SSL DDoS solutions require full decryption of customer traffic by the cloud provider, thereby compromising user privacy and adding latency to customer communications.
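One simple way to approximate detection of the SSL flood behavior described above is to count TLS handshakes per source over a sliding window, since the handshake is where the asymmetric server-side cost is incurred, and no payload decryption is needed. The window and limit values below are assumptions for illustration:

```python
# Sketch: flag sources initiating an abnormal number of TLS handshakes in a
# sliding window, without decrypting any payload. Limits are illustrative.

from collections import defaultdict

WINDOW_SECONDS = 10
MAX_HANDSHAKES = 50  # assumption: far above normal client behavior

def flood_sources(handshakes, window=WINDOW_SECONDS, limit=MAX_HANDSHAKES):
    """handshakes: list of (timestamp, src_ip) ClientHello events -> suspect ips."""
    per_ip = defaultdict(list)
    for ts, ip in handshakes:
        per_ip[ip].append(ts)
    suspects = set()
    for ip, times in per_ip.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # shrink window from the left until it spans <= `window` seconds
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 > limit:
                suspects.add(ip)
                break
    return suspects
```

Because it only looks at handshake metadata, this approach sidesteps the key-exposure and latency concerns in the list above, at the cost of missing attacks hidden inside established sessions.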

Existing Solutions Provide Partial Coverage

The problem, however, is that existing anti-DDoS defenses cannot simultaneously deliver high-capacity volumetric protection and the bi-directional protection required against sophisticated types of attacks.

On-premise appliances provide a high level of protection against a wide variety of DDoS attacks, with very low latency and fast response. In addition, being on-premise, they allow companies to deal with SSL-based attacks without exposing their encryption keys to the outside world. Since they have visibility into both inbound and outbound traffic, they offer bi-directional protection against symmetric DDoS attacks. However, physical appliances can’t deal with the large-scale volumetric attacks that have become commonplace in the era of massive IoT botnets.

[You may also like: How to (Securely) Share Certificates with Your Cloud Security Provider]

Cloud-based DDoS protection services, on the other hand, possess the bandwidth to deal with large-scale volumetric attacks. However, they offer visibility only into the inbound communication channel, so they have a hard time protecting against bi-directional DDoS attacks. Moreover, cloud-based SSL DDoS defenses – if the vendor has them at all – frequently require that organizations upload their SSL certificates, increasing the risk of those keys being exposed.

The Optimal Solution: Hybrid Always-On Approach

For companies that place a high premium on the user experience, and wish to avoid even the slightest possible downtime as a result of DDoS attacks, the optimal solution is to deploy an always-on hybrid solution.

The hybrid approach to DDoS protection combines an on-premise hardware appliance with always-on cloud-based scrubbing capacity. This helps ensure that services are protected against any type of attack.

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

Hybrid Always-On DDoS Protection

Compared to the pure-cloud always-on deployment model, the hybrid always-on approach adds multi-layered protection against symmetric DDoS attacks which saturate the outbound pipe, and allows for maintaining SSL certificates on-premise.

Benefits of the Hybrid Always-On Model

  • Multi-Layered DDoS Protection: The combination of a premise-based hardware mitigation device coupled with cloud-based scrubbing capacity offers multi-layered protection at different levels. If an attack somehow gets through the cloud protection layer, it will be stopped by the on-premise appliance.
  • Constant, Uninterrupted Volumetric Protection: Since all traffic passes through a cloud-based scrubbing center at all times, the cloud-based service provides uninterrupted, ongoing protection against high-capacity volumetric DDoS attacks.
  • Bi-Directional DDoS Protection: While cloud-based DDoS protection services inspect only the inbound traffic channel, the addition of a premise-based appliance allows organizations to inspect the outbound channel, as well, thereby protecting themselves against two-way DDoS attacks which can saturate the outbound pipe, or otherwise require visibility to return traffic in order to identify attack patterns.
  • Reduced SSL Key Exposure: Many national or industry regulations require that encryption keys not be shared with anyone else. The inclusion of a premise-based hardware appliance allows organizations to protect themselves against encrypted DDoS attacks while keeping their SSL keys in-house.
  • Decreased Latency for Encrypted Traffic: SSL offloading in the cloud is frequently a complex and time-consuming affair, which adds much latency to user communications. Since inspection of SSL traffic in the hybrid always-on model is done primarily by the on-premise hardware appliance, users enjoy faster response times and lower latency.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Guaranteeing service availability while simultaneously ensuring the quality of the customer experience is a multi-faceted and complex proposition. Organizations are challenged by growth in the size of DDoS attacks, the increase in sophistication of application-layer DDoS attacks, and the challenges brought about by the shift to SSL encryption.

Deploying a hybrid always-on solution allows for both inbound and outbound visibility into traffic, enhanced protections for application-layer and encrypted traffic, and allows for SSL keys to be kept in-house, without exposing them to the outside.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Security

Executives Are Turning Infosec into a Competitive Advantage

June 18, 2019 — by Anna Convery-Pelletier


Companies are more connected to their customers now than ever before.  After spending billions to digitally transform themselves, organizations have exponentially increased the number of touchpoints as well as the frequency of communication they have with their customer base. 

Thanks to digital transformation, organizations are more agile, flexible, efficient, and customer-centric. However, with greater access to customers comes an equal measure of increased vulnerability. We have all seen the havoc that a data breach can wreak upon a brand; hackers are the modern-day David to the Goliaths of the Fortune 1000 world. As a result, we have experienced a fundamental shift in management philosophy around the role that information security plays across organizations. The savviest leaders have shifted from a defensive to offensive position and are turning information security into a competitive market advantage.

Each year, Radware surveys C-Suite executives to measure leadership sentiment around information security, its costs and business impacts.  This year, we studied the views and insights from 263 senior leaders at organizations primarily with revenue in excess of 1 billion USD/EUR/GBP around the world. Respondents represented 30% financial services, 21% retail/hospitality, 21% telecom/service provider, 7% manufacturing/distribution, 7% computer products/services, 6% business services/consulting, and 9% other.

This year’s report shines a spotlight on increased sophistication of management philosophy for information security and security strategy. While responsibility for cybersecurity continues to be spearheaded by the CIO and CISO, it is also being shared throughout the entire C-Suite.

[You may also like: How Cyberattacks Directly Impact Your Brand]

In fact, 72% of executives responding to our survey claimed that it’s a topic discussed in every board meeting. 82% of responding CEOs reported high levels of knowledge around information security, as did 72% of non-technical C-Suite titles – an all-time high! Security issues now influence brand reputation, brand trust, and consumer trust, which forces organizations to infuse information security into core business functions such as customer experience, marketing and business operations.

All with good reason. The average cost of a cyberattack is now roughly $4.6M, and the number of organizations that claim attacks cost them more than $10M has doubled from 2018 to 2019.

Customers are quite aware of the onslaught of data breaches that have affected nearly every industry, from banking to online dating, throughout the past ten years. Even though governments have passed many laws to protect consumers against misuse of their data, such as GDPR, CASL and HIPAA, along with rules governing personally identifiable information (PII), companies still can’t keep up with the regulations.

[You may also like: The Costs of Cyberattacks Are Real]

Case in point: 74% of European executives report they have experienced a data breach in the past 12 months, compared to 53% in America and 44% in APAC. Half (52%) of executives in Europe have experienced a self-reported incident under GDPR in the past year.  

Consumer confidence is at an all-time low. These same customers want to understand what companies have done to secure their products and services and they are willing to take their business elsewhere if that brand promise is broken. Customers are increasingly taking action following a breach. 

[You may also like: How Do Marketers Add Security into Their Messaging?]

Reputation management is a critical component of organizational management. Savvy leaders recognize the connection between information security and reputation management and have subsequently adopted information security as a market advantage.

So How Do Companies Start to Earn Back Trust?

These leaders recognize that security must become part of the brand promise. Our research shows that 75% of executives claim security is a key part of their product marketing messages. 50% of companies surveyed offer dedicated security products and services to their customers. Additionally, 41% offer security features as add-ons within their products and services, and another 7% are considering building security services into their products.

Balancing Security Concerns with Deployment of Private and Public Clouds

Digital transformation drove a mass migration into public and private cloud environments.  Organizations were wooed by the promise of flexibility, streamlined business operations, improved efficiency, lower operational costs, and greater business agility. Rightfully so, as cloud environments have largely fulfilled their promises.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

However, along with these incredible benefits comes far greater risk than most organizations anticipated. While 54% of respondents report that improving information security is one of their top three reasons for initiating digital transformation processes, 73% of executives indicate they have experienced unauthorized access to their public cloud assets. What is more alarming is how these unauthorized access incidents have occurred.

The technical sophistication of the modern business world has eroded the trust between brands and their customers, opening the door for a new conversation around security. 

Leading organizations have already begun to weave security into the very fabric of their culture – and it’s evidenced by going to market with secure marketing messages (as Apple’s new ad campaigns demonstrate), sharing responsibility for information security across the entire leadership team, creating privacy-centric business policies and processes, making information security and customer data-privacy part of an organization’s core values, etc.  The biggest challenges organizations still face is in how best to execute it, but that is a topic for another blog…

To learn more about the insights and perspectives on information security from the C-Suite, please download the report.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Cloud Security

Managing Security Risks in the Cloud

May 8, 2019 — by Daniel Smith


Often, I find that only a handful of organizations have a complete understanding of where they stand in today’s threat landscape. That’s a problem. If your organization does not have the ability to identify its assets, threats, and vulnerabilities accurately, you’re going to have a bad time.

A lack of visibility prevents both IT and security administrators from accurately determining their actual exposure and limits their ability to address their most significant on-premises risks. Moving computing workloads to a publicly hosted cloud service, however, exposes organizations to new risk: they lose direct physical control over their workloads and relinquish many aspects of security through the shared responsibility model.

Cloud-y With a Chance of Risk

Don’t get me wrong; cloud environments make it very easy for companies to scale quickly by allowing them to spin up new resources for their user base instantly. While this helps organizations decrease their overall time to market and streamline business processes, it also makes it very difficult to track user permissions and manage resources.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

As many companies have discovered over the years, migrating workloads to a cloud-native solution presents new challenges when it comes to risks and threats.

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protections via mechanisms such as firewalls, intrusion prevention/detection systems (IPS/IDS), web application firewall (WAF) and distributed denial-of-service (DDoS) protection, secure web gateways (SWGs), etc.

However, moving workloads to the cloud has introduced new risks for organizations. Typically, public clouds provide only basic protections, focused mainly on securing the overall computing environment, leaving individual organizations’ workloads vulnerable. Because of this, deployed cloud environments are at risk not only of account compromise and data breaches, but also of resource exploitation due to misconfigurations, lack of visibility, or user error.

[You may also like: Ensuring Data Privacy in Public Clouds]

The Details

The typical attack profile includes:

  • Spear phishing employees
  • Compromised credentials
  • Misconfigurations and excessive permissions
  • Privilege escalation
  • Data exfiltration

The complexity and growing risk of cloud environments are placing more responsibility for writing and testing secure apps on developers as well. While most are not cloud-oriented security experts, there are many things we can do to help them and contribute to a better security posture.

[You may also like: Anatomy of a Cloud-Native Data Breach]

Recent examples of attacks include:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resource for crypto-mining.
  • js published an npm code package in their code release containing access keys to their S3 storage buckets.
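Leaks like these can often be caught before code is published by a simple secret scan. A minimal sketch that searches source text for the well-known AWS access key ID shape (the `AKIA` prefix followed by 16 uppercase alphanumerics); real scanners such as git-secrets cover many more credential patterns:

```python
# Minimal secret scan: find strings shaped like AWS access key IDs in source
# text before it is committed. Real tools (e.g. git-secrets) use many patterns.

import re

AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_aws_key_ids(text):
    """Return all AWS-access-key-shaped tokens found in `text`."""
    return AWS_KEY_ID.findall(text)

if __name__ == "__main__":
    # Uses AWS's documented example key, not a real credential.
    sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
    print(find_aws_key_ids(sample))
```

Wiring a check like this into a pre-commit hook or CI pipeline would have flagged both incidents above before the keys ever reached a public repository.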

Mitigating Risk

The good news is that most of these attacks can be prevented by addressing software vulnerabilities, finding misconfigurations and deploying identity access management through a workload protection service.

With this in mind, choose a cloud workload protection solution that addresses these risks.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

There are many blind spots involved in today’s large-scale cloud environments. The right cloud workload protection reduces the attack surface, detects data theft activity and provides comprehensive protection in a cloud-native solution.

As the trend of cybercriminals targeting operational technologies continues, it’s critical to reduce organizational risk by rigorously enforcing protection policies, detecting malicious activity, and improving response capabilities, while providing assurance to developers.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Application Delivery, Cloud Computing

Application Delivery Use Cases for Cloud and On-Premise Applications

April 23, 2019 — by Prakash Sinha


Most of us use web applications in our daily lives, whether at work or for personal reasons. These applications include sites offering banking and financial services, payroll, utilities, online training, just to name a few. Users get frustrated, sometimes annoyed, if the applications – such as bank account access, loading of a statement, emails, or bills – are slow to respond. Heaven help us if we lose these services right in the middle of a payment!


If you look at these applications from a service provider’s perspective, especially for providers with web-facing applications, this customer frustration or loss of interest is expensive and translates into a real loss of revenue: almost $8,900 per minute of downtime, in addition to lost customer satisfaction and reputation. And if your services are in the cloud and you don’t have a fallback? Good luck…

Traditional Use of ADCs for Applications

This is where application delivery controllers (ADCs), commonly referred to as load balancers, come in. ADCs help applications in a few key ways. They make it appear to the user that the services being accessed are always up, and in doing so reduce the latency a user perceives when accessing the application. ADCs also help secure and scale applications across millions of users.

[You may also like: Ensuring a Secure Cloud Journey in a World of Containers]

Traditionally, these load balancers were deployed as a redundant pair of physical devices. Then, as virtualization took hold in data centers, ADCs began to be deployed as virtual appliances. Now, as applications move to cloud environments, ADCs are deployed as a service in the cloud, or as a mix of virtual, cloud, and physical devices (depending on cost and desired performance characteristics, as well as the familiarity and expertise of the administrators of these services: DevOps, NetOps, or SecOps).

The ADC World is Changing

The world of ADCs is changing rapidly. With micro-services, agile approaches, and continuous delivery and integration reshaping applications, there are many changes afoot for ADCs.

ADCs still have the traditional job of making applications available locally in a data center or globally across data centers, and providing redundancy to links in a data center. In addition to providing availability to applications, these devices are still used for latency reduction – using caching, compressions and web performance optimizations – but due to where they sit in the network, they’ve taken on additional roles of a security choreographer and a single point of visibility across a variety of different applications.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

We are beginning to see additional use cases, such as web application firewalls for application protection, SSL inspection for preventing leaks of sensitive information, and single sign on across many applications and services. The deployment topology of the ADC is also changing – either run within a container for load balancing and scaling micro-services and embedded ADCs, or be able to provide additional value-add capabilities to the embedded ADCs or micro-services within a container.

Providing high availability is one of the core use cases for an ADC. HA addresses the need for an application to recover from failures within and between data centers. SSL offload is also considered a core use case. As SSL and TLS become pervasive in securing and protecting web transactions, offloading these non-business functions from application and web servers – so that the servers can be dedicated to business processing – is needed not only to reduce application latency but also to lower the cost of the application footprint needed to serve users.

As users connecting to a particular application service grow, new instances of the application service are brought online in order to scale the application. Scaling in and scaling out in an automated way is one of the primary reasons why ADCs have built-in automation and integrations with orchestration systems. Advanced automation allows ADCs to discover new application instances and add them to, or remove them from, the load balancing pool without manual intervention. This not only reduces manual errors and lowers administrative costs, but also removes the requirement that every user of an ADC be an expert.
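The pool-membership logic that this automation manages can be sketched in a few lines. This is a toy model – the instance names and the simple round-robin policy are illustrative, not how any particular ADC implements scaling:

```python
class LoadBalancerPool:
    """Toy model of the pool-membership logic an ADC automates.

    A real ADC discovers instances via orchestration-system
    integrations; here scale_out/scale_in stand in for that.
    """

    def __init__(self):
        self.members = []

    def scale_out(self, instance):
        # A new application instance is discovered and added to the pool.
        if instance not in self.members:
            self.members.append(instance)

    def scale_in(self, instance):
        # An instance is drained and removed without manual intervention.
        if instance in self.members:
            self.members.remove(instance)

    def next_backend(self):
        # Simple round-robin selection across current members.
        if not self.members:
            raise RuntimeError("no backends available")
        backend = self.members.pop(0)
        self.members.append(backend)
        return backend
```

As instances come and go, traffic distribution adjusts automatically – which is the property that removes the need for manual pool maintenance.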

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

As we move to the cloud, other use cases are emerging and quickly becoming necessities. Elastic licensing, for example, is designed to cap the cost of licenses as organizations transition from physical hardware or virtual deployments to the cloud. Another use case is analytics and end-to-end visibility, designed to pinpoint the root cause of an issue quickly, without finger-pointing between networking and application teams.

ADCs at the Intersection of Networking and Applications

Since ADCs occupy an important place between applications and networks, it is logical to see them take on additional responsibilities as they serve users. Application delivery and load balancing technologies have been the strategic components providing availability, optimization, security and latency reduction for applications. To enable seamless migration of business-critical applications to the cloud, the same load balancing and application delivery infrastructure has evolved to address the needs of continuous delivery/integration and of hybrid and multi-cloud deployments.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Attack Types & Vectors | Cloud Security | Security

Anatomy of a Cloud-Native Data Breach

April 10, 2019 — by Radware


Migrating computing resources to cloud environments opens up new attack surfaces previously unknown in the world of premise-based data centers. As a result, cloud-native data breaches frequently have different characteristics and follow a different progression than physical data breaches. Here is a real-life example of a cloud-native data breach, how it evolved and how it possibly could have been avoided.

Target Profile: A Social Media/Mobile App Company

The company is a photo-sharing social media application, with over 20 million users. It stores over 1PB of user data within Amazon Web Services (AWS), and in 2018, it was the victim of a massive data breach that exposed nearly 20 million user records. This is how it happened.

[You may also like: Ensuring Data Privacy in Public Clouds]

Step 1: Compromising a legitimate user. Frequently, the first step in a data breach is that an attacker compromises the credentials of a legitimate user. In this incident, an attacker used a spear-phishing attack to obtain an administrative user’s credentials to the company’s environment.

Step 2: Fortifying access. After compromising a legitimate user, a hacker frequently takes steps to fortify access to the environment, independent of the compromised user. In this case, the attacker connected to the company’s cloud environment through an IP address registered in a foreign country and created API access keys with full administrative access.

Step 3: Reconnaissance. Once inside, an attacker then needs to map out what permissions are granted and what actions this role allows.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Step 4: Exploitation. Once the available permissions in the account have been determined, the attacker can proceed to exploit them. Among other activities, the attacker duplicated the master user database and exposed it to the outside world with public permissions.

Step 5: Exfiltration. Finally, with customer information at hand, the attacker copied the data outside of the network, gaining access to over 20 million user records that contained personal user information.

Lessons Learned

Your Permissions Equal Your Threat Surface: Leveraging public cloud environments means that resources that used to be hosted inside your organization’s perimeter are now outside where they are no longer under the control of system administrators and can be accessed from anywhere in the world. Workload security, therefore, is defined by the people who can access those workloads and the permissions they have. In effect, your permissions equal your attack surface.

Excessive Permissions Are the No. 1 Threat: Cloud environments make it very easy to spin up new resources and grant wide-ranging permissions but very difficult to keep track of who has them. Such excessive permissions are frequently mischaracterized as misconfigurations but are actually the result of permission misuse or abuse. Therefore, protecting against those excessive permissions becomes the No. 1 priority for securing publicly hosted cloud workloads.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Cloud Attacks Follow Typical Progression: Although each data breach incident may develop differently, a cloud-native attack breach frequently follows a typical progression of a legitimate user account compromise, account reconnaissance, privilege escalation, resource exploitation and data exfiltration.

What Could Have Been Done Differently?

Protect Your Access Credentials: Your next data breach is a password away. Securing your cloud account credentials — as much as possible — is critical to ensuring that they don’t fall into the wrong hands.

Limit Permissions: Frequently, cloud user accounts are granted many permissions that they don't need or never use. Exploiting the gap between granted permissions and used permissions is a common move by hackers. In the aforementioned example, the attacker used the account's permissions to create new administrative-access API keys, spin up new databases, reset the database master password and expose the data to the outside world. Limiting permissions to only what the user needs helps ensure that, even if the account is compromised, the damage an attacker can do is limited.
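As a sketch of what "limiting permissions to only what the user needs" looks like in practice, here is a hypothetical IAM-style policy that grants read-only access to a single, made-up user table rather than blanket administrative rights (the table name and actions are illustrative assumptions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyUserTable",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:*:*:table/users"
    }
  ]
}
```

A principal holding only this policy could not create access keys, spin up databases or change permissions – exactly the actions the attacker relied on in the breach described above.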

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Alert on Suspicious Activities: Since cloud-native data breaches frequently follow a common progression, there are certain account activities — such as port scanning, invoking previously unused APIs and granting public permissions — which can be identified. Alerting on such malicious behavior indicators (MBIs) can help prevent a data breach before it occurs.
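The alerting idea can be illustrated with a small sketch. The event shapes and indicator names below are illustrative assumptions, not any vendor's actual detection logic:

```python
# Toy indicator check over CloudTrail-style event records.
# Field names and indicator names are illustrative.

MALICIOUS_BEHAVIOR_INDICATORS = {
    "port_scanning",
    "first_use_of_dormant_api",
    "public_permission_grant",
}

def classify_event(event):
    """Return the set of indicators an event record triggers."""
    triggered = set()
    # Granting public read access to a storage bucket is a classic MBI.
    if event.get("event_name") == "PutBucketAcl" and \
            event.get("acl") == "public-read":
        triggered.add("public_permission_grant")
    # Invoking an API the account has never used before is suspicious.
    if event.get("api_never_used_before"):
        triggered.add("first_use_of_dormant_api")
    return triggered

def should_alert(events):
    # Alert when any event triggers a known indicator.
    return any(classify_event(e) & MALICIOUS_BEHAVIOR_INDICATORS
               for e in events)
```

Real systems correlate many such indicators over time before raising an alert, rather than firing on a single event.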

Automate Response Procedures: Finally, once malicious activity has been identified, fast response is paramount. Automating response mechanisms can help block malicious activity the moment it is detected and stop the breach from reaching its end goal.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Cloud Computing | Cloud Security | Security

Security Pros and Perils of Serverless Architecture

March 14, 2019 — by Radware


Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. This cloud computing model can drive cost-efficiencies, increase agility and enable organizations to focus on the essential aspects of software development. While serverless architecture offers some security advantages, trusting that a cloud provider has security fully covered can be risky.

That’s why it’s critical to understand what serverless architectures mean for cyber security.

What Serverless Means for Security

Many assume that serverless is more secure than traditional architectures. This is partly true. As the name implies, serverless architecture does not require server provisioning. Deep under the hood, however, these REST API functions are still running on a server, which in turn runs on an operating system and uses different layers of code to parse the API requests. As a result, the total attack surface becomes significantly larger.

When exploring whether and to what extent to use serverless architecture, consider the security implications.

[You may also like: Protecting Applications in a Serverless Architecture]

Security: The Pros

The good news is that responsibility for the operating system, web server and other software components and programs shifts from the application owner to the cloud provider, who should apply patch management policies across the different software components and implement hardening policies. Most common vulnerabilities should be addressed via enforcement of such security best practices. However, what is the answer to a zero-day vulnerability in these software components? Consider Shellshock, which allowed attackers to gain unauthorized access to computer systems.

Meanwhile, denial-of-service attacks designed to take down a server become a fool's errand. FaaS servers are only provisioned on demand and then discarded, creating a fast-moving target. Does that mean you no longer need to think about DDoS? Not so fast. While DDoS attacks may not cause a server to go down, they can drive up an organization's bill through an onslaught of requests. Additionally, function scale is limited and execution time is capped, so launching a massive DDoS attack may have an unpredictable impact.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Finally, the very nature of FaaS makes it more challenging for attackers to exploit a server and wait until they can access more data or do more damage. There is no persistent local storage that may be accessed by the functions. Counting on storing attack data in the server is more difficult but still possible. With the “ground” beneath them continually shifting—and containers re-generated—there are fewer opportunities to perform deeper attacks.

Security: The Perils

Now, the bad news: serverless computing doesn’t eradicate all traditional security concerns. Code is still being executed and will always be potentially vulnerable. Application-level vulnerabilities can still be exploited whether they are inherent in the FaaS infrastructure or in the developer function code.

Whether delivered as FaaS or simply built on a web infrastructure, REST API functions are even more challenging to secure than a standard web application, and they introduce security concerns of their own. API vulnerabilities are hard to monitor and do not stand out, and traditional application security assessment tools either do not work well with APIs or are simply irrelevant in this case.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

When planning for API security infrastructure, authentication and authorization must be taken into account. Yet these are often not addressed properly in many API security solutions. Beyond that, REST APIs are vulnerable to many attacks and threats against web applications: POSTed JSONs and XMLs injections, insecure direct object references, access violations and abuse of APIs, buffer overflow and XML bombs, scraping and data harvesting, among others.

The Way Forward

Serverless architectures are being adopted at a record pace. As organizations welcome dramatically improved speed, agility and cost-efficiency, they must also think through how they will adapt their security. Consider the following:

  • API gateway: Functions are processing REST API calls from client-side applications accessing your code with unpredicted inputs. An API Gateway can enforce JSON and XML validity checks. However, not all API Gateways support schema and structure validation, especially when it has to do with JSON. Each function deployed must be properly secured. Additionally, API Gateways can serve as the authentication tier which is critically important when it comes to REST APIs.
  • Function permissions: The function is essentially the execution unit. Restrict functions’ permissions to the minimum required and do not use generic permissions.
  • Abstraction through logical tiers: When a function calls another function—each applying its own data manipulation—the attack becomes more challenging.
  • Encryption: Data at rest is still accessible. FaaS becomes irrelevant when an attacker gains access to a database. Data needs to be adequately protected and encryption remains one of the recommended approaches regardless of the architecture it is housed in.
  • Web application firewall: Enterprise-grade WAFs apply dozens of protection measures on both ingress and egress traffic. Traffic is parsed to detect protocol manipulations, which may result in unexpected function behavior. Client-side inputs are validated and thousands of rules are applied to detect various injections attacks, XSS attacks, remote file inclusion, direct object references and many more.
  • IoT botnet protection: To avoid the significant cost implications a DDoS attack may have on a serverless architecture and the data harvesting risks involved with scraping activity, consider behavioral analysis tools and IoT botnet solutions.
  • Monitoring function activity and data access: Abnormal function behavior, expected access to data, non-reasonable traffic flow and other abnormal scenarios must be tracked and analyzed.
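To illustrate the input-validation point from the API gateway bullet above, here is a minimal sketch of validating a JSON request body inside function code. The field names are hypothetical; in practice, schema enforcement at the gateway should complement, not replace, checks like these:

```python
import json

# Minimal input validation for a FaaS handler.
# REQUIRED_FIELDS is an illustrative schema, not a real API contract.
REQUIRED_FIELDS = {"user_id": int, "action": str}

def validate_payload(raw_body):
    """Parse and validate a JSON request body; raise ValueError on bad input."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        raise ValueError("body is not valid JSON")
    if not isinstance(payload, dict):
        raise ValueError("body must be a JSON object")
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(payload.get(field), expected_type):
            raise ValueError(f"missing or invalid field: {field}")
    return payload
```

Rejecting malformed input before it reaches business logic narrows the unpredicted-input problem the API gateway bullet describes.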

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud Security

Are Your DevOps Your Biggest Security Risks?

March 13, 2019 — by Eyal Arazi


We have all heard the horror tales: a negligent (or uninformed) developer inadvertently exposes AWS API keys online, only for hackers to find those keys, penetrate the account and cause massive damage.

But how common, in practice, are these breaches? Are they a legitimate threat, or just an urban legend for sleep-deprived IT staff? And what, if anything, can be done against such exposure?

The Problem of API Access Key Exposure

The problem of AWS API access key exposure refers to incidents in which developers' API access keys to AWS accounts and cloud resources are inadvertently exposed and found by hackers.

AWS – and most other infrastructure-as-a-service (IaaS) providers – provides direct access to tools and services via Application Programming Interfaces (APIs). Developers leverage such APIs to write automation scripts that help them configure cloud-based resources. This saves developers and DevOps considerable time in configuring cloud-hosted resources and automating the roll-out of new features and services.

[You may also like: Ensuring Data Privacy in Public Clouds]

In order to make sure that only authorized developers are able to access those resources and execute commands on them, API access keys are used to authenticate access. Only code containing authorized credentials will be able to connect and execute.

This Exposure Happens All the Time

The problem, however, is that such access keys are sometimes left in scripts or configuration files uploaded to third-party resources, such as GitHub. Hackers are fully aware of this, and run automated scans on such repositories, in order to discover unsecured keys. Once they locate such keys, hackers gain direct access to the exposed cloud environment, which they use for data theft, account takeover, and resource exploitation.
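Defenders can run the same kind of scan before code is ever pushed. The sketch below matches the documented "AKIA" prefix pattern of standard IAM access key IDs; real secret-scanning tools cover many more credential formats:

```python
import re

# Heuristic scan for AWS access key IDs accidentally left in code.
# Standard IAM access key IDs begin with "AKIA" followed by 16
# uppercase alphanumeric characters; this is a simplified sketch
# of what automated secret scanners do at scale.
ACCESS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(text):
    """Return any strings in `text` that look like AWS access key IDs."""
    return ACCESS_KEY_PATTERN.findall(text)
```

Wiring a check like this into a pre-commit hook or CI pipeline catches keys before they reach a public repository, where attackers' bots would find them within minutes.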

A very common use case is for hackers to access an unsuspecting cloud account and spin-up multiple computing instances in order to run crypto-mining activities. The hackers then pocket the mined cryptocurrency, while leaving the owner of the cloud account to foot the bill for the usage of computing resources.

[You may also like: The Rise in Cryptomining]

Examples, sadly, are abundant:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resource for crypto-mining.
  • WordPress developer Ryan Heller uploaded code to GitHub which accidentally contained a backup copy of the wp-config.php file, containing his AWS access keys. Within hours, this file was discovered by hackers, who spun up several hundred computing instances to mine cryptocurrency, resulting in $6,000 of AWS usage fees overnight.
  • A student taking a Ruby on Rails course on Udemy opened up an AWS S3 storage bucket as part of the course, and uploaded his code to GitHub as part of the course requirements. However, his code contained his AWS access keys, leading to over $3,000 of AWS charges within a day.
  • The founder of an internet startup uploaded code to GitHub containing API access keys. He realized his mistake within 5 minutes and removed those keys. However, that was enough time for automated bots to find his keys, access his account, spin up computing resources for crypto-mining and result in a $2,300 bill.
  • js published an npm code package in their code release containing access keys to their S3 storage buckets.

And the list goes on and on…

The problem is so widespread that Amazon even has a dedicated support page to tell developers what to do if they inadvertently expose their access keys.

How You Can Protect Yourself

One of the main drivers of cloud migration is the agility and flexibility that it offers organizations to speed-up roll-out of new services and reduce time-to-market. However, this agility and flexibility frequently comes at a cost to security. In the name of expediency and consumer demand, developers and DevOps may sometimes not take the necessary precautions to secure their environments or access credentials.

Such exposure can happen in a multitude of ways, including accidental exposure of scripts (such as uploading to GitHub), misconfiguration of cloud resources which contain such keys, compromise of third-party partners who hold such credentials, exposure through client-side code which contains keys, targeted spear-phishing attacks against DevOps staff, and more.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Nonetheless, there are a number of key steps you can take to secure your cloud environment against such breaches:

Assume your credentials are exposed. There's no way around this: securing your credentials, as much as possible, is paramount. However, since credentials can leak in a number of ways and from a multitude of sources, you should assume your credentials are already exposed, or can become exposed in the future. Adopting this mindset will help you channel your efforts not just into limiting that exposure in the first place, but into limiting the damage to your organization should exposure occur.

Limit Permissions. As I pointed out earlier, one of the key benefits of migrating to the cloud is the agility and flexibility that cloud environments provide when it comes to deploying computing resources. However, this agility and flexibility frequently comes at a cost to security. One such example is granting promiscuous permissions to users who shouldn't have them. In the name of expediency, administrators frequently grant blanket permissions to users, so as to remove any hindrance to operations.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

The problem, however, is that most users never use most of the permissions they have granted, and probably don’t need them in the first place. This leads to a gaping security hole, since if any one of those users (or their access keys) should become compromised, attackers will be able to exploit those permissions to do significant damage. Therefore, limiting those permissions, according to the principle of least privileges, will greatly help to limit potential damage if (and when) such exposure occurs.
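The gap between granted and used permissions can be made concrete with a trivial sketch (the permission names are illustrative; real tooling would derive the "used" set from activity logs):

```python
# Sketch of measuring the gap between granted and actually
# exercised permissions - the attacker's opportunity space.

def permission_gap(granted, used):
    """Return permissions that were granted but never exercised."""
    return set(granted) - set(used)
```

Every permission in that gap is one an attacker could exploit but the legitimate user would never miss, which is why pruning it shrinks the blast radius of a compromised key at near-zero operational cost.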

Early Detection is Critical. The final step is to implement measures which actively monitor user activity for any potentially malicious behavior. Such malicious behavior can be first-time API usage, access from unusual locations, access at unusual times, suspicious communication patterns, exposure of private assets to the world, and more. Implementing detection measures which look for such malicious behavior indicators, correlate them, and alert against potentially malicious activity will help ensure that hackers are discovered promptly, before they can do any significant damage.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud Computing | Cloud Security | Security

Mitigating Cloud Attacks With Configuration Hardening

February 26, 2019 — by Radware


For attackers, misconfigurations in the public cloud can be exploited for a number of reasons. Typical attack scenarios include several kill chain steps, such as reconnaissance, lateral movement, privilege escalation, data acquisition, persistence and data exfiltration. These steps might be fully or partially utilized by an attacker over dozens of days until the ultimate objective is achieved and the attacker reaches the valuable data.

Removing the Mis from Misconfigurations

To prevent attacks, enterprises must harden configurations to address promiscuous permissions by applying continuous hardening checks to limit the attack surface as much as possible. The goals are to avoid public exposure of data from the cloud and reduce overly permissive access to resources by making sure communication between entities within a cloud, as well as access to assets and APIs, are only allowed for valid reasons.

For example, the private data of six million Verizon users was exposed when maintenance work changed a configuration and made an S3 bucket public. Only smart configuration hardening that applies the approach of “least privilege” enables enterprises to meet those goals.
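A hardening check for this class of misconfiguration can be sketched as follows: flag any bucket policy statement that allows access to everyone. The policy shape follows the standard IAM policy document format, but this is a simplified illustration, not a complete public-access check:

```python
import json

# Sketch of a continuous hardening check: detect bucket policy
# statements that grant access to the anonymous principal "*".

def allows_public_access(policy_json):
    """Return True if any Allow statement grants access to Principal '*'."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):
        # A single statement may appear as an object rather than a list.
        statements = [statements]
    for stmt in statements:
        principal = stmt.get("Principal")
        if stmt.get("Effect") == "Allow" and (
                principal == "*" or
                (isinstance(principal, dict) and
                 principal.get("AWS") == "*")):
            return True
    return False
```

Running a check like this on every configuration change would have flagged the Verizon bucket the moment the maintenance work made it public.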

[You may also like: Ensuring Data Privacy in Public Clouds]

The process requires applying behavior analytics methods over time, including regular reviews of permissions and a continuous analysis of usual behavior of each entity, just to ensure users only have access to what they need, nothing more. By reducing the attack surface, enterprises make it harder for hackers to move laterally in the cloud.

The process is complex and is often best managed with the assistance of an outside security partner with deep expertise and a system that combines multiple algorithms to measure activity across the network, detect anomalies and determine whether malicious intent is probable. Attackers will often stretch a kill chain attack over several days or months.

Taking Responsibility

It is tempting for enterprises to assume that cloud providers are completely responsible for network and application security and for ensuring the privacy of data. In practice, cloud providers supply tools that enterprises can use to secure hosted assets. While cloud providers must be vigilant in how they protect their data centers, responsibility for securing access to apps, services, data repositories and databases falls on the enterprises.


[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Hardened network and meticulous application security can be a competitive advantage for companies to build trust with their customers and business partners. Now is a critical time for enterprises to understand their role in protecting public cloud workloads as they transition more applications and data away from on-premise networks.

The responsibility to protect the public cloud is a relatively new task for most enterprises. But, everything in the cloud is external and accessible if it is not properly protected with the right level of permissions. Going forward, enterprises must quickly incorporate smart configuration hardening into their network security strategies to address this growing threat.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now