
Cloud Security

Security Considerations for Cloud Hosted Services

July 31, 2019 — by Prakash Sinha


A multi-cloud approach enables organizations to move application services among public cloud providers depending on the desired service level and price point. Most organizations use multiple cloud providers, some in addition to their private cloud and on-premises deployments.

Multi-cloud subsumes and is an evolution of hybrid cloud. According to IDC, enterprise adoption of multi-cloud services has moved into the mainstream; 85% of organizations are using services from multiple cloud providers while 93% of organizations will use services from multiple cloud providers within the next 12 months.

C-level executives are pushing a “cloud first” policy for their IT service capabilities.

Multi vs. Hybrid Cloud Deployment

Multi-cloud may include any combination of public cloud (e.g., Microsoft Azure, AWS), SaaS applications (e.g., Office 365, Salesforce), and private clouds (e.g., OpenStack, VMware, KVM).

[You may also like: Transforming Into a Multicloud Environment]

Hybrid cloud deployments might be permanent, serving a business need such as keeping proprietary data or intellectual property within the organization's control, or transitional, with public clouds maintained to stage an eventual full move to the cloud, whether public or private.

Sometimes organizations adopt multi-cloud deployments to enable DevOps to test their services and reduce shadow IT, or to enhance disaster recovery or scalability in times of need.

Security Considerations

As organizations transition to the cloud, availability, management and security should be top-of-mind concerns, particularly in the move to adopt containers. This concern is evident in a survey conducted by IDC in April 2017.

In addition to using built-in tools for container security, traditional approaches to security still apply to services delivered through the cloud.

Many containerized application services composed using Application Programming Interfaces (APIs) are accessible over the web, leaving them open to malicious attacks. Such deployments also increase the attack surface, parts of which may be beyond an organization's direct control.

[You may also like: How to Prevent Real-Time API Abuse]

Multi-Pronged Prevention

As hackers probe network and application vulnerabilities in multiple ways to gain access to sensitive data, the prevention strategy for unauthorized access needs to be multi-pronged as well:

  • Routinely applying security patches
  • Preventing denial-of-service attacks
  • Preventing rogue ports or applications from running in the enterprise or in hosted container applications in the cloud
  • Running routine vulnerability assessment scans on container applications
  • Preventing bots from targeting applications and systems, while differentiating between good bots and bad bots
  • Scanning application source code for vulnerabilities and fixing them, or deploying preventive measures such as application firewalls
  • Encrypting data at rest and in motion
  • Validating users before they can access an application, to prevent malicious access
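The rogue-port item above can be turned into a repeatable check. A minimal sketch, where the allowlist and the scanned ports are hypothetical examples, not recommendations:

```python
# Hypothetical allowlist check: flag listening ports that are not approved.
# The approved set and the scan results below are illustrative only.
APPROVED_PORTS = {22, 80, 443}

def find_rogue_ports(listening_ports, approved=APPROVED_PORTS):
    """Return the listening ports that are not explicitly approved."""
    return set(listening_ports) - set(approved)

# A port such as 8333 showing up unexpectedly would be flagged for review.
rogue = find_rogue_ports([22, 443, 8333])
```

Feeding this the output of a periodic port scan of each container host makes the bullet point auditable rather than aspirational.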

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

It is important to protect business-critical assets by adapting your security posture to a varying threat landscape. You can do that by gaining visibility into multiple threat vectors with SIEM tools and analytics, and by adopting security solutions such as SSL inspection and threat mitigation, intrusion detection and prevention, network firewalls, DDoS prevention, identity and access management (IAM), data leak prevention (DLP), and application firewalls.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.



Have Crypto-Miners Infiltrated Your Public Cloud?

July 16, 2019 — by Haim Zelikovsky


How do you know if bad actors are siphoning off your power and racking up huge bills for your organization? These silent malware scripts could be infecting your public cloud infrastructure right now, but would you even know it? 

Crypto Basics

The concept of crypto-jacking is fairly simple. A script (or malware) infects a host computer and silently steals CPU power for the purpose of mining crypto-currency. 

It first gained popularity in 2017 with the ThePirateBay website, which infected its visitors with the malware. Its popularity surged in 2018 with services like Coinhive, which promised ad-free web browsing in exchange for CPU power. Coinhive and others like it lowered the barrier to entry for criminals and were quickly exploited by 'crypto-gangs' that leveraged the scripts to infect multitudes of popular websites across the globe and mine Monero.

[You may also like: The Rise in Cryptomining]

Most individuals could not detect the malware unless it over-taxed the device. Common symptoms of crypto-jacking malware include device performance degradation, overheating batteries, device malfunction, and increased power consumption. During its peak in December 2017, Symantec claimed to have blocked more than 8 million cryptojacking events across its customer base.
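The performance symptoms described above lend themselves to simple heuristics. A sketch of one such indicator, where the threshold and window are illustrative assumptions rather than vendor guidance:

```python
def sustained_high_cpu(samples, threshold=90.0, min_run=5):
    """Return True if CPU usage stays above `threshold` percent for at
    least `min_run` consecutive samples -- a crude crypto-mining indicator.

    The threshold and window here are illustrative; real endpoint tools
    combine many signals (process names, network destinations, etc.)."""
    run = 0
    for pct in samples:
        run = run + 1 if pct > threshold else 0
        if run >= min_run:
            return True
    return False
```

A legitimate workload spikes and subsides; a miner pins the CPU continuously, which is exactly what a consecutive-run check catches.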

Shifting Targets

Then, market conditions changed. Many endpoint security solutions learned to identify and blacklist crypto-mining malware running on individual endpoints. Coinhive (and many copycat services) shut down. The price of cryptocurrency crashed, which made smaller mining operations unprofitable. So hackers looking for a larger payoff continued to develop more sophisticated crypto-jacking malware and went hunting for bigger fish.

It is no surprise that crypto-miners have shifted the targets of their malware from individuals to enterprises. Public cloud infrastructure is an incredibly attractive target. Even an army of infected personal devices can't deliver the kind of concentrated and virtually unlimited CPU power of a large enterprise's public cloud infrastructure. In the eyes of a miner, it's like looking at a mountain of gold, and often that gold is under-protected.

Digital transformation has pushed the migration of most enterprise networks into some form of public cloud infrastructure. In doing so, these companies have inadvertently increased their attack surface and handed access to their developers and DevOps teams.

Essentially, due to the dynamic nature of public cloud environments (which makes it harder to keep a stable and hardened environment over time), as well as the ease with which permissions are granted to developers and DevOps, the attack surface is dramatically increased.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Hackers of all types have identified and exploited these security weaknesses, and crypto-jackers are no exception. Today, there are thousands of different crypto-jacking malware variants in the wild. Crypto-jacking still generates huge profits for hackers.

In fact, crypto-jackers exploited vulnerable Jenkins servers to mine more than $3M worth of Monero. Jenkins is an open-source automation server written in Java, widely used across the globe by a community of more than 1 million users; that install base translates into a massive number of potentially unpatched servers, which made it a desirable target for crypto-jackers. Tesla also suffered losses and public embarrassment when crypto-jackers targeted an unprotected Kubernetes console and used it to spin up large numbers of containers to run their own mining operation on Tesla's cloud account.

[You may also like: Managing Security Risks in the Cloud]

Protect Yourself

So what can organizations do to keep the crypto-miners out of their public clouds? 

  • Secure your public cloud credentials. If they are breached, attackers can leverage them to launch numerous types of attacks against your public cloud assets, of which crypto-jacking is just one.
  • Be extra careful about public exposure of hosts. Hackers can exploit unpatched, vulnerable hosts exposed to the internet to install crypto-mining clients and harness cloud-native infrastructure for mining activity.
  • Limit excessive permissions. In the public cloud, your permissions are your attack surface. Always employ the principle of least privilege, and continuously trim excessive permissions.
  • Install public cloud workload protection services that can detect and block crypto-mining activity. These should automatically detect anomalous activities in your public cloud, correlate events that together indicate malicious activity, and automatically block such activity upon detection.
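The correlation step in the last bullet can be sketched very naively. The event types and threshold below are illustrative assumptions; a real service derives such indicators from provider audit logs:

```python
from collections import defaultdict

# Illustrative event types treated as suspicious; real tooling derives
# these from cloud audit trails (CloudTrail-style activity logs).
SUSPICIOUS = {"port_scan", "new_api_key_created", "public_acl_granted"}

def correlate(events, min_hits=2):
    """Flag principals that trigger at least `min_hits` distinct suspicious
    event types -- a naive stand-in for the correlation described above."""
    seen = defaultdict(set)
    for principal, event_type in events:
        if event_type in SUSPICIOUS:
            seen[principal].add(event_type)
    return {p for p, kinds in seen.items() if len(kinds) >= min_hits}
```

A single odd event is noise; the same principal scanning ports and then minting new API keys is a pattern worth blocking.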

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Transforming Into a Multicloud Environment

June 26, 2019 — by Radware


While C-suite executives are taking on larger roles in proactively discussing cybersecurity issues, they are also evaluating how to leverage advances in technology to improve business agility. But as network architectures get more complex, there is added pressure to secure new points of attack vulnerability.

Organizations continue to host applications and data in the public cloud, typically spread across multiple cloud providers. This multicloud approach enables enterprises to be nimbler with network operations, improve the customer experience and reduce costs.

[You may also like: Executives Are Turning Infosec into a Competitive Advantage]

Public Cloud Challenges

Every public cloud provider utilizes different hardware and software security policies, methods and mechanisms. This creates a challenge for enterprises to maintain standard policies and configurations across all infrastructures.

Furthermore, public cloud providers generally only meet basic security standards for their platform. And application security of workloads on public clouds is not included in the public cloud offering.

Even with concerns about the security of public clouds – almost three in five respondents expressed concern about vulnerabilities within their companies’ public cloud networks – organizations are moving applications and data to cloud service providers.

The Human Side of the Cloud

Sometimes the biggest threat to an organization’s digital assets is the people who are hired to protect them. Whether on purpose or through carelessness, people can compromise the permissions designed to create a security barrier.

[You may also like: Eliminating Excessive Permissions]

Of the almost three-fourths who indicated that they have experienced unauthorized access to their public cloud assets, the most common reasons were:

  • An employee neglected credentials in a development forum (41%);
  • A hacker made it through the provider’s security (37%) or the company’s security (31%); or
  • An insider left a way in (21%).

An insider?! Yes, indeed. Organizations may run into malicious insiders (legitimate users who exploit their privileges to cause harm) and/or negligent insiders (also legitimate users, such as Dev/DevOps engineers who make configuration mistakes, or other employees with access who practice low security hygiene and leave ways for hackers to get in).

[You may also like: Are Your DevOps Your Biggest Security Risks?]

To limit the human factor, senior-level executives should make sure that continuous hardening checks are applied to configurations in order to validate permissions and limit the possibility of attacks as much as possible.

The goals? To avoid public exposure of data from the cloud and to reduce overly permissive access to resources by making sure that communication between entities within a cloud, as well as access to assets and APIs, is only allowed for valid reasons.
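Such continuous hardening checks can start very simply. A sketch that flags publicly exposed resources, assuming a made-up configuration format for illustration:

```python
# Illustrative hardening check: flag resources whose (assumed) configuration
# grants public access. The field names here are invented for the sketch;
# a real check would read provider APIs or infrastructure-as-code state.
def audit_exposure(resources):
    findings = []
    for res in resources:
        world_open = "0.0.0.0/0" in res.get("allowed_cidrs", [])
        if res.get("public_read") or world_open:
            findings.append(res["name"])
    return findings
```

Run on a schedule, a check like this catches the configuration drift that negligent insiders introduce between audits.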

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.



How to (Securely) Share Certificates with Your Cloud Security Provider

May 23, 2019 — by Ben Zilberman


Businesses today know they must handle sensitive data with extra care. But evolving cyber threats combined with regulatory demands can lead executives to hold their proverbial security cards close to their chest. For example, they may be reluctant to share encryption keys and certificates with a third party (i.e., cloud service providers), fearing data theft, MITM attacks or violations of local privacy regulations.

In turn, this can cause conflicts when integrating with cloud security services.

So how can businesses securely share this information as they transition to the cloud?

Encryption Basics

Today, nearly all web applications use HTTPS (encrypted traffic sent to and from the user). Any website with HTTPS service requires a signed SSL certificate. In order to communicate securely via encrypted traffic and complete the SSL handshake, the server requires three components: a private key, a public key (certificate) and a certificate chain.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

These are essential to accomplish the following objectives:

  • Authentication – The client authenticates the server identity.
  • Encryption – A symmetric session key is created by the client and server for session encryption.
  • Private keys stay private – The private key never leaves the server side and is not used as a session key by the client.

Hardware Security Module (HSM)

A Hardware Security Module is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. HSMs are particularly useful for those industries that require high security, businesses with cloud-native applications and global organizations. More specifically, common use cases include:

[You may also like: SSL Attacks – When Hackers Use Security Against You]

  • Federal Information Processing Standards (FIPS) compliance – For example, finance, healthcare and government applications that traditionally require FIPS-level security.
  • Native cloud applications – Cloud applications designed with security in mind might use managed HSM (or KMS) for critical workloads such as password management.
  • Centralized management – Global organizations with global applications need to secure and manage their keys in one place.

Managing the cryptographic key lifecycle requires a few fundamentals:

  • Using a random number generator to create and renew keys
  • Processing crypto-operations (encrypt/decrypt)
  • Ensuring keys never leave the HSM
  • Establishing secure access control (intrusion-resistant, tamper-evident, audit-logged, FIPS-validated appliances)
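The lifecycle steps above can be mirrored in software, with the important caveat that a real HSM performs them inside tamper-resistant hardware and never exposes raw key material. This toy sketch only mirrors the workflow:

```python
import hashlib
import hmac
import secrets
from datetime import datetime, timedelta, timezone

class SoftKeyStore:
    """Toy, in-memory illustration of the key lifecycle fundamentals.

    A real HSM performs these operations inside tamper-resistant hardware;
    this sketch only mirrors the workflow: random generation, performing
    crypto-operations internally, and rotation without ever exporting keys."""

    def __init__(self, rotate_after=timedelta(days=90)):
        self._key = secrets.token_bytes(32)       # CSPRNG-backed key creation
        self._created = datetime.now(timezone.utc)
        self._rotate_after = rotate_after

    def sign(self, data: bytes) -> bytes:
        # The crypto-operation happens "inside" the store; the raw key
        # is never returned to the caller.
        return hmac.new(self._key, data, hashlib.sha256).digest()

    def needs_rotation(self, now=None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now - self._created >= self._rotate_after

    def rotate(self) -> None:
        self._key = secrets.token_bytes(32)
        self._created = datetime.now(timezone.utc)
```

The design point is the interface: callers submit data and receive results, but the key itself has no accessor, which is the software analogue of "keys never leave the HSM."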

The Challenge with Cloud Security Services…

One of the main challenges with cloud security services is the fact that reverse proxies need SSL keys. Managed security services, such as a cloud WAF service, force enterprises to hand over their private keys for terminating SSL connections. However, some can't (FIPS-compliant businesses, for example) or simply don't want to (over trust and liability concerns, or due to multi-tenancy across multiple customers). This is usually where the business relationship gets stuck.

[You may also like: Managing Security Risks in the Cloud]

…And the Solution!

Simply put, the solution is a HSM Cloud Service.

Wait, what?

Yes, integrating a cloud WAF service with an external HSM service from a public cloud provider (like AWS CloudHSM) is the answer. It can easily be set up over a VPN, with a cluster sharing the HSM credentials, per application or at large.

Indeed, CloudHSM is a popular solution – being both FIPS and PCI DSS compliant – trusted by customers in the finance sector. By moving this last on-premises component to the cloud to reduce data center maintenance costs, organizations are effectively shifting toward consuming HSM as a service.

Such an integration supports any type of certificate (single domain, wildcard or SAN) and ensures minimal latency, as public cloud providers have PoPs all around the globe. The external HSM is only used once, and there is no limit on the number of certificates hosted on the service.

This is the recommended approach to help businesses overcome the concern of sharing private keys. Learn more about Radware Cloud WAF service here.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Managing Security Risks in the Cloud

May 8, 2019 — by Daniel Smith


Often, I find that only a handful of organizations have a complete understanding of where they stand in today’s threat landscape. That’s a problem. If your organization does not have the ability to identify its assets, threats, and vulnerabilities accurately, you’re going to have a bad time.

A lack of visibility prevents both IT and security administrators from accurately determining their actual exposure and limits their ability to address their most significant risks on premises. Moving computing workloads to a publicly hosted cloud service, however, exposes organizations to new risks: they lose direct physical control over their workloads and relinquish many aspects of security under the shared responsibility model.

Cloud-y With a Chance of Risk

Don’t get me wrong; cloud environments make it very easy for companies to scale quickly by allowing them to spin up new resources for their user base instantly. While this helps organizations decrease their overall time to market and streamline business processes, it also makes it very difficult to track user permissions and manage resources.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

As many companies have discovered over the years, migrating workloads to a cloud-native solution presents new challenges around risks and threats.

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protections via mechanisms such as firewalls, intrusion prevention/detection systems (IPS/IDS), web application firewall (WAF) and distributed denial-of-service (DDoS) protection, secure web gateways (SWGs), etc.

However, moving workloads to the cloud has presented new risks for organizations. Typically, public clouds provide only basic protections, mainly focused on securing the overall computing environment, leaving individual organizations' workloads vulnerable. Because of this, deployed cloud environments are at risk not only of account compromises and data breaches, but also of resource exploitation due to misconfigurations, lack of visibility or user error.

[You may also like: Ensuring Data Privacy in Public Clouds]

The Details

The typical attack profile includes:

  • Spear phishing employees
  • Compromised credentials
  • Misconfigurations and excessive permissions
  • Privilege escalation
  • Data exfiltration

The complexity and growing risk of cloud environments are placing more responsibility for writing and testing secure apps on developers as well. While most are not cloud-oriented security experts, there are many things we can do to help them and contribute to a better security posture.

[You may also like: Anatomy of a Cloud-Native Data Breach]

Recent examples of attacks include:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resources for crypto-mining.
  • js published an npm code package in their code release containing access keys to their S3 storage buckets.
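Leaks like these can often be caught before code is ever pushed. A sketch that scans text for AWS-style access key IDs; the pattern covers only the common `AKIA` prefix, while a real secret scanner would also cover secret keys, session tokens and other providers:

```python
import re

# AWS access key IDs commonly start with "AKIA" followed by 16 uppercase
# alphanumeric characters. This is a sketch of a pre-commit scan, not a
# complete secret scanner.
ACCESS_KEY_ID_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_key_ids(text):
    """Return any AWS-style access key IDs found in the given text."""
    return ACCESS_KEY_ID_RE.findall(text)
```

Wired into a pre-commit hook or CI step, a check like this would have stopped both incidents above at the developer's keyboard.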

Mitigating Risk

The good news is that most of these attacks can be prevented by addressing software vulnerabilities, finding misconfigurations and deploying identity access management through a workload protection service.

With this in mind, choose your cloud workload protection solution carefully.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

There are many blind spots involved in today’s large-scale cloud environments. The right cloud workload protection reduces the attack surface, detects data theft activity and provides comprehensive protection in a cloud-native solution.

As the trend of cybercriminals targeting operational technologies continues, it’s critical to reduce organizational risk by rigorously enforcing protection policies, detecting malicious activity and improving response capabilities, while providing assurance to developers.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Anatomy of a Cloud-Native Data Breach

April 10, 2019 — by Radware


Migrating computing resources to cloud environments opens up new attack surfaces previously unknown in the world of premise-based data centers. As a result, cloud-native data breaches frequently have different characteristics and follow a different progression than physical data breaches. Here is a real-life example of a cloud-native data breach, how it evolved and how it possibly could have been avoided.

Target Profile: A Social Media/Mobile App Company

The company is a photo-sharing social media application, with over 20 million users. It stores over 1PB of user data within Amazon Web Services (AWS), and in 2018, it was the victim of a massive data breach that exposed nearly 20 million user records. This is how it happened.

[You may also like: Ensuring Data Privacy in Public Clouds]

Step 1: Compromising a legitimate user. Frequently, the first step in a data breach is that an attacker compromises the credentials of a legitimate user. In this incident, an attacker used a spear-phishing attack to obtain an administrative user’s credentials to the company’s environment.

Step 2: Fortifying access. After compromising a legitimate user, a hacker frequently takes steps to fortify access to the environment, independent of the compromised user. In this case, the attacker connected to the company’s cloud environment through an IP address registered in a foreign country and created API access keys with full administrative access.

Step 3: Reconnaissance. Once inside, an attacker then needs to map out what permissions are granted and what actions this role allows.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Step 4: Exploitation. Once the available permissions in the account have been determined, the attacker can proceed to exploit them. Among other activities, the attacker duplicated the master user database and exposed it to the outside world with public permissions.

Step 5: Exfiltration. Finally, with customer information at hand, the attacker copied the data outside of the network, gaining access to over 20 million user records that contained personal user information.

Lessons Learned

Your Permissions Equal Your Threat Surface: Leveraging public cloud environments means that resources that used to be hosted inside your organization’s perimeter are now outside where they are no longer under the control of system administrators and can be accessed from anywhere in the world. Workload security, therefore, is defined by the people who can access those workloads and the permissions they have. In effect, your permissions equal your attack surface.

Excessive Permissions Are the No. 1 Threat: Cloud environments make it very easy to spin up new resources and grant wide-ranging permissions but very difficult to keep track of who has them. Such excessive permissions are frequently mischaracterized as misconfigurations but are actually the result of permission misuse or abuse. Therefore, protecting against those excessive permissions becomes the No. 1 priority for securing publicly hosted cloud workloads.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Cloud Attacks Follow a Typical Progression: Although each incident may develop differently, a cloud-native data breach frequently follows a typical progression: compromise of a legitimate user account, account reconnaissance, privilege escalation, resource exploitation and data exfiltration.

What Could Have Been Done Differently?

Protect Your Access Credentials: Your next data breach is a password away. Securing your cloud account credentials — as much as possible — is critical to ensuring that they don’t fall into the wrong hands.

Limit Permissions: Frequently, cloud user accounts are granted many permissions that they don’t need or never use. Exploiting the gap between granted permissions and used permissions is a common move by hackers. In the aforementioned example, the attacker used the accounts’ permissions to create new administrative-access API keys, spin up new databases, reset the database master password and expose it to the outside world. Limiting permissions to only what the user needs helps ensure that, even if the account is compromised, the damage an attacker can do is limited.
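The granted-versus-used gap described above can be computed directly once both sets are known (cloud providers expose last-accessed data for this). A sketch with illustrative IAM-style action names:

```python
def excessive_permissions(granted, used):
    """Permissions granted to a principal but never exercised -- the gap
    attackers exploit. The action names are illustrative IAM-style strings;
    real inputs would come from policy documents and access logs."""
    return sorted(set(granted) - set(used))

# Everything in the result is a candidate for revocation.
unused = excessive_permissions(
    granted=["s3:GetObject", "s3:PutObject", "iam:CreateAccessKey"],
    used=["s3:GetObject"],
)
```

In the breach above, it is precisely permissions like `iam:CreateAccessKey` sitting unused that let the attacker mint new administrative keys.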

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Alert of Suspicious Activities: Since cloud-native data breaches frequently have a common progression, there are certain account activities — such as port scanning, invoking previously used APIs and granting public permissions — which can be identified. Alerting against such malicious behavior indicators (MBIs) can help prevent a data breach before it occurs.

Automate Response Procedures: Finally, once malicious activity has been identified, fast response is paramount. Automating response mechanisms can help block malicious activity the moment it is detected and stop the breach from reaching its end goal.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Security Pros and Perils of Serverless Architecture

March 14, 2019 — by Radware


Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. This cloud computing model can drive cost-efficiencies, increase agility and enable organizations to focus on the essential aspects of software development. While serverless architecture offers some security advantages, trusting that a cloud provider has security fully covered can be risky.

That’s why it’s critical to understand what serverless architectures mean for cyber security.

What Serverless Means for Security

Many assume that serverless is more secure than traditional architectures. This is partly true. As the name implies, serverless architecture does not require server provisioning. Deep under the hood, however, these REST API functions are still running on a server, which in turn runs on an operating system and uses different layers of code to parse the API requests. As a result, the total attack surface becomes significantly larger.

When exploring whether and to what extent to use serverless architecture, consider the security implications.

[You may also like: Protecting Applications in a Serverless Architecture]

Security: The Pros

The good news is that responsibility for the operating system, web server and other software components and programs shifts from the application owner to the cloud provider, who should apply patch management policies across the different software components and implement hardening policies. Most common vulnerabilities should be addressed via enforcement of such security best practices. However, what would be the answer for a zero-day vulnerability in these software components? Consider Shellshock, which allowed an attacker to gain unauthorized access to a computer system.

Meanwhile, denial-of-service attacks designed to take down a server become a fool's errand. FaaS servers are provisioned on demand and then discarded, creating a fast-moving target. Does that mean you no longer need to think about DDoS? Not so fast. While DDoS attacks may not take a server down, they can drive up an organization's tab through an onslaught of requests. Additionally, function scale is limited and execution is time-limited, so a massive DDoS attack may have unpredictable impact.
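The cost exposure of a request flood is easy to estimate. A back-of-the-envelope sketch; the default prices are assumed placeholders in the ballpark of published per-request and per-GB-second FaaS rates, so substitute your provider's current pricing:

```python
def flood_cost(requests, avg_duration_s, mem_gb,
               price_per_million_req=0.20, price_per_gb_s=0.0000166667):
    """Rough bill for a request flood against a FaaS backend.

    The default prices are assumed placeholders roughly in line with
    published per-request and per-GB-second rates; check your provider's
    current pricing before drawing conclusions."""
    request_charge = (requests / 1_000_000) * price_per_million_req
    compute_charge = requests * avg_duration_s * mem_gb * price_per_gb_s
    return request_charge + compute_charge

# 100M junk requests at 0.5s / 512MB each adds up to hundreds of dollars.
bill = flood_cost(100_000_000, avg_duration_s=0.5, mem_gb=0.5)
```

This is the "denial of wallet" variant of DDoS: the service stays up, but the invoice becomes the attack's payload.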

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Finally, the very nature of FaaS makes it more challenging for attackers to exploit a server and wait until they can access more data or do more damage. There is no persistent local storage that may be accessed by the functions. Counting on storing attack data on the server is more difficult, but still possible. With the "ground" beneath them continually shifting, and containers regenerated, there are fewer opportunities to perform deeper attacks.

Security: The Perils

Now, the bad news: serverless computing doesn’t eradicate all traditional security concerns. Code is still being executed and will always be potentially vulnerable. Application-level vulnerabilities can still be exploited whether they are inherent in the FaaS infrastructure or in the developer function code.

Whether delivered as FaaS or simply built on a web infrastructure, REST API functions are even more challenging to secure than a standard web application. They introduce security concerns of their own. API vulnerabilities are hard to monitor and do not stand out. Traditional application security assessment tools do not work well with APIs or are simply irrelevant in this case.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

When planning for API security infrastructure, authentication and authorization must be taken into account. Yet these are often not addressed properly in many API security solutions. Beyond that, REST APIs are vulnerable to many attacks and threats against web applications: POSTed JSONs and XMLs injections, insecure direct object references, access violations and abuse of APIs, buffer overflow and XML bombs, scraping and data harvesting, among others.

The Way Forward

Serverless architectures are being adopted at a record pace. As organizations welcome dramatically improved speed, agility and cost-efficiency, they must also think through how they will adapt their security. Consider the following:

  • API gateway: Functions process REST API calls from client-side applications, accessing your code with unpredicted inputs. An API gateway can enforce JSON and XML validity checks. However, not all API gateways support schema and structure validation, especially for JSON. Each function deployed must be properly secured. Additionally, API gateways can serve as the authentication tier, which is critically important when it comes to REST APIs.
  • Function permissions: The function is essentially the execution unit. Restrict functions’ permissions to the minimum required and do not use generic permissions.
  • Abstraction through logical tiers: When a function calls another function—each applying its own data manipulation—the attack becomes more challenging.
  • Encryption: Data at rest is still accessible. FaaS becomes irrelevant when an attacker gains access to a database. Data needs to be adequately protected and encryption remains one of the recommended approaches regardless of the architecture it is housed in.
  • Web application firewall: Enterprise-grade WAFs apply dozens of protection measures on both ingress and egress traffic. Traffic is parsed to detect protocol manipulations, which may result in unexpected function behavior. Client-side inputs are validated and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
  • IoT botnet protection: To avoid the significant cost implications a DDoS attack may have on a serverless architecture and the data harvesting risks involved with scraping activity, consider behavioral analysis tools and IoT botnet solutions.
  • Monitoring function activity and data access: Abnormal function behavior, unexpected access to data, unreasonable traffic flows and other anomalous scenarios must be tracked and analyzed.
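The schema-validation step an API gateway performs on incoming payloads can be approximated in a few lines. A minimal sketch using only the standard library, assuming a hypothetical order payload with a `user_id` string and an `amount` number (the field names and schema shape are illustrative, not any particular gateway's API):

```python
import json

# Hypothetical schema: required field name -> accepted Python type(s).
ORDER_SCHEMA = {"user_id": str, "amount": (int, float)}

def validate_payload(raw_body, schema):
    """Reject malformed JSON and payloads with missing, extra, or mistyped fields."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return False
    if not isinstance(payload, dict):
        return False
    # Reject unknown fields too -- unexpected input should never reach the function.
    if set(payload) != set(schema):
        return False
    return all(isinstance(payload[key], types) for key, types in schema.items())
```

Rejecting unknown fields, not just validating known ones, is the stricter posture: it keeps attacker-controlled extras from flowing into downstream code.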

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud Security

Are Your DevOps Your Biggest Security Risks?

March 13, 2019 — by Eyal Arazi


We have all heard the horror tales: a negligent (or uninformed) developer inadvertently exposes AWS API keys online, only for hackers to find those keys, penetrate the account and cause massive damage.

But how common, in practice, are these breaches? Are they a legitimate threat, or just an urban legend for sleep-deprived IT staff? And what, if anything, can be done against such exposure?

The Problem of API Access Key Exposure

The problem of AWS API access key exposure refers to incidents in which developers' API access keys to AWS accounts and cloud resources are inadvertently exposed and then discovered by hackers.

AWS – like most other infrastructure-as-a-service (IaaS) providers – provides direct access to its tools and services via application programming interfaces (APIs). Developers leverage these APIs in scripts that automatically configure cloud-based resources, saving developers and DevOps teams significant time when configuring cloud-hosted resources and rolling out new features and services.

[You may also like: Ensuring Data Privacy in Public Clouds]

To make sure that only authorized developers can access those resources and execute commands on them, API access keys are used to authenticate access. Only code containing authorized credentials can connect and execute.

This Exposure Happens All the Time

The problem, however, is that such access keys are sometimes left in scripts or configuration files uploaded to third-party resources, such as GitHub. Hackers are fully aware of this, and run automated scans on such repositories, in order to discover unsecured keys. Once they locate such keys, hackers gain direct access to the exposed cloud environment, which they use for data theft, account takeover, and resource exploitation.
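The automated scans hackers run against public repositories are not sophisticated: AWS access key IDs follow a documented, easily matched format. A minimal sketch of that kind of scan (the key shown is AWS's documented example ID, not a real credential):

```python
import re

# Long-lived AWS access key IDs are 20 uppercase alphanumeric characters
# beginning with "AKIA"; temporary credentials begin with "ASIA".
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_exposed_keys(text):
    """Return every substring of `text` that matches the AWS access key ID format."""
    return AWS_KEY_RE.findall(text)
```

Running the same scan over your own repositories before pushing is a cheap defensive mirror of what attackers already do at scale.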

A very common scenario is for hackers to access an unsuspecting cloud account and spin up multiple computing instances to run crypto-mining activities. The hackers pocket the mined cryptocurrency, while the owner of the cloud account is left to foot the bill for the computing resources.

[You may also like: The Rise in Cryptomining]

Examples, sadly, are abundant:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla's AWS account and use Tesla's resources for crypto-mining.
  • WordPress developer Ryan Heller uploaded code to GitHub which accidentally contained a backup copy of the wp-config.php file, containing his AWS access keys. Within hours, this file was discovered by hackers, who spun up several hundred computing instances to mine cryptocurrency, resulting in $6,000 of AWS usage fees overnight.
  • A student taking a Ruby on Rails course on Udemy opened up an AWS S3 storage bucket as part of the course, and uploaded his code to GitHub as part of the course requirements. However, his code contained his AWS access keys, leading to over $3,000 of AWS charges within a day.
  • The founder of an internet startup uploaded code to GitHub containing API access keys. He realized his mistake within five minutes and removed the keys. However, that was enough time for automated bots to find them, access his account, and spin up crypto-mining resources, resulting in a $2,300 bill.
  • An npm package was published with access keys to its maintainers' S3 storage buckets embedded in the code release.

And the list goes on and on…

The problem is so widespread that Amazon even has a dedicated support page to tell developers what to do if they inadvertently expose their access keys.

How You Can Protect Yourself

One of the main drivers of cloud migration is the agility and flexibility it offers organizations to speed up the roll-out of new services and reduce time-to-market. However, this agility and flexibility frequently come at a cost to security. In the name of expediency and consumer demand, developers and DevOps teams may not take the necessary precautions to secure their environments or access credentials.

Such exposure can happen in a multitude of ways, including accidental upload of scripts (such as to GitHub), misconfiguration of cloud resources that contain such keys, compromise of third-party partners who hold such credentials, exposure through client-side code that contains keys, targeted spear-phishing attacks against DevOps staff, and more.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Nonetheless, there are a number of key steps you can take to secure your cloud environment against such breaches:

Assume your credentials are exposed. There's no way around this: securing your credentials as much as possible is paramount. However, since credentials can leak in a number of ways and from a multitude of sources, you should assume they are already exposed, or can become exposed in the future. Adopting this mindset will help you channel your efforts not just into limiting exposure in the first place, but into limiting the damage to your organization should exposure occur.

Limit permissions. As pointed out earlier, one of the key benefits of migrating to the cloud is the agility and flexibility cloud environments provide for deploying computing resources. However, this agility and flexibility frequently come at a cost to security. One such example is granting promiscuous permissions to users who shouldn't have them. In the name of expediency, administrators frequently grant blanket permissions to users so as to remove any hindrance to operations.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

The problem, however, is that most users never use most of the permissions they have been granted, and probably don't need them in the first place. This creates a gaping security hole: if any one of those users (or their access keys) is compromised, attackers can exploit those permissions to do significant damage. Limiting those permissions according to the principle of least privilege greatly limits the potential damage if (and when) such exposure occurs.
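One concrete hardening check in this direction is flagging policies that grant wildcard actions or apply to all resources. A minimal sketch over an IAM-style policy document (the policy shown is hypothetical, and the check covers only the simple wildcard forms, not IAM's full matching semantics):

```python
def find_wildcard_statements(policy):
    """Return statements granting wildcard actions or applying to all resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM allows either a single string or a list in both fields.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged

# A hypothetical policy mixing a tightly scoped grant with a blanket one.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::example-bucket/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ],
}
```

The second statement is exactly the kind of blanket permission granted "in the name of expediency" that least privilege argues against.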

Early detection is critical. The final step is to implement measures that actively monitor user activity for potentially malicious behavior: first-time API usage, access from unusual locations or at unusual times, suspicious communication patterns, exposure of private assets to the world, and more. Detection measures that look for such indicators, correlate them, and alert on potentially malicious activity help ensure that hackers are discovered promptly, before they can do significant damage.
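The first indicator above, first-time API usage, can be sketched as a simple per-user baseline check; real detection systems correlate many such signals, but the core idea fits in a few lines (the class and its interface are illustrative):

```python
from collections import defaultdict

class FirstUseDetector:
    """Flag API actions a user has never performed before -- a common breach indicator."""

    def __init__(self):
        self.seen = defaultdict(set)  # user -> set of actions already observed

    def observe(self, user, action):
        """Record the call; return True if this is the user's first use of the action."""
        first_time = action not in self.seen[user]
        self.seen[user].add(action)
        return first_time
```

A stolen key typically triggers a burst of such first-time events (new regions, new services, new actions), which is what makes correlating them effective.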

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud ComputingCloud SecuritySecurity

Mitigating Cloud Attacks With Configuration Hardening

February 26, 2019 — by Radware


For attackers, misconfigurations in the public cloud can be exploited for a number of reasons. Typical attack scenarios include several kill chain steps, such as reconnaissance, lateral movement, privilege escalation, data acquisition, persistence and data exfiltration. These steps might be fully or partially utilized by an attacker over dozens of days until the ultimate objective is achieved and the attacker reaches the valuable data.

Removing the Mis from Misconfigurations

To prevent attacks, enterprises must harden configurations to address promiscuous permissions by applying continuous hardening checks to limit the attack surface as much as possible. The goals are to avoid public exposure of data from the cloud and reduce overly permissive access to resources by making sure communication between entities within a cloud, as well as access to assets and APIs, are only allowed for valid reasons.

For example, the private data of six million Verizon users was exposed when maintenance work changed a configuration and made an S3 bucket public. Only smart configuration hardening that applies the approach of “least privilege” enables enterprises to meet those goals.
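A hardening check for that kind of exposure can be as simple as scanning a bucket's ACL grants for the public "AllUsers" group. A minimal sketch over ACL data already fetched and parsed into dicts (the grant structure mirrors, in simplified form, what S3 returns):

```python
# The URI S3 uses to denote "anyone on the internet".
PUBLIC_GROUP = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_exposed(acl_grants):
    """Return the permissions (e.g. READ, WRITE) granted to the public group."""
    return [grant["Permission"] for grant in acl_grants
            if grant.get("Grantee", {}).get("URI") == PUBLIC_GROUP]

# Hypothetical ACL: an owner grant plus an accidental public-read grant.
grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group", "URI": PUBLIC_GROUP},
     "Permission": "READ"},
]
```

Running a check like this continuously, rather than once, is what catches the maintenance-work scenario described above, where a bucket that was private yesterday is public today.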

[You may also like: Ensuring Data Privacy in Public Clouds]

The process requires applying behavioral analytics methods over time, including regular reviews of permissions and continuous analysis of each entity's usual behavior, to ensure users only have access to what they need and nothing more. By reducing the attack surface, enterprises make it harder for hackers to move laterally in the cloud.

The process is complex and is often best managed with the assistance of an outside security partner with deep expertise and a system that combines algorithms measuring activity across the network to detect anomalies and determine whether malicious intent is probable. Attackers often execute their kill chain over several days or months.

Taking Responsibility

It is tempting for enterprises to assume that cloud providers are completely responsible for network and application security and for ensuring the privacy of data. In practice, cloud providers supply tools that enterprises can use to secure hosted assets. While cloud providers must be vigilant in how they protect their data centers, responsibility for securing access to apps, services, data repositories and databases falls on the enterprises.


[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Hardened network and meticulous application security can be a competitive advantage for companies to build trust with their customers and business partners. Now is a critical time for enterprises to understand their role in protecting public cloud workloads as they transition more applications and data away from on-premise networks.

The responsibility to protect the public cloud is a relatively new task for most enterprises. But everything in the cloud is external and accessible if it is not properly protected with the right level of permissions. Going forward, enterprises must quickly incorporate smart configuration hardening into their network security strategies to address this growing threat.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Cloud ComputingCloud Security

Excessive Permissions are Your #1 Cloud Threat

February 20, 2019 — by Eyal Arazi


Migrating workloads to public cloud environments opens organizations up to a slate of new, cloud-native attack vectors that did not exist in the world of premise-based data centers.

In this new environment, workload security is defined by which users have access to your cloud environment, and what permissions they have. As a result, protecting against excessive permissions, and quickly responding when those permissions are abused, becomes the #1 priority for security administrators.

The Old Insider is the New Outsider

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protection, through mechanisms such as firewalls, IPS/IDS, WAF and DDoS protection, secure gateways, etc.

However, moving workloads to the cloud has led organizations (and IT administrators) to lose direct physical control over their workloads and to relinquish many aspects of security through the shared responsibility model. As a result, the insider of the old, premise-based world is suddenly an outsider in the new world of publicly hosted cloud workloads.

[You may also like: Ensuring Data Privacy in Public Clouds]

IT administrators and hackers now have identical access to publicly-hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat.

Workload security, therefore, is defined by the people who can access those workloads, and the permissions they have.

Your Permissions = Your Attack Surface

One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them and which permissions they actually use.

All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have too many permissions, which they never use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes.

As a result, cloud workloads are vulnerable to data breaches (i.e., theft of data from cloud accounts), service violation (i.e., completely taking over cloud resources), and resource exploitation (such as cryptomining). Such promiscuous permissions are frequently mis-characterized as ‘misconfigurations’, but are actually the result of permission misuse or abuse by people who shouldn’t have them.
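The gap between granted and used permissions described above is, at bottom, a set difference. A minimal sketch, assuming granted permissions have been extracted from policy documents and used permissions from activity logs (the permission names are illustrative):

```python
def excessive_permissions(granted, used):
    """Permissions a user holds but has never exercised -- candidates for removal."""
    return sorted(set(granted) - set(used))

# Hypothetical inputs: a broad grant versus a narrow actual usage pattern.
granted = {"s3:GetObject", "s3:PutObject", "ec2:RunInstances", "iam:CreateUser"}
used = {"s3:GetObject"}
```

The hard part in practice is not this computation but gathering reliable `used` data over a long enough window that rarely exercised but legitimate permissions are not revoked.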

[You may also like: Protecting Applications in a Serverless Architecture]

Therefore, protecting against those promiscuous permissions becomes the #1 priority for protecting publicly-hosted cloud workloads.

Traditional Protections Provide Piecemeal Solutions

The problem, however, is that existing solutions provide incomplete protection against the threat of excessive permissions.

  • The built-in mechanisms of public clouds usually provide fairly basic protection, focused mostly on the overall computing environment; they are blind to activity within individual workloads. Moreover, since many companies run multi-cloud and hybrid-cloud environments, the built-in protections offered by cloud vendors will not protect assets outside their networks.
  • Compliance and governance tools usually measure permissions usage against static lists of best practices. However, they will not detect (and alert on) excessive permissions, and are usually blind to activity within the workloads themselves.
  • Agent-based solutions require deploying (and managing) agents on cloud-based servers, and will protect only servers on which they are installed. However, they are blind to overall cloud user activity and account context, and usually cannot protect non-server resources such as services, containers, serverless functions, etc.
  • Cloud Access Security Brokers (CASB) tools focus on protecting software-as-a-service (SaaS) applications, but do not protect infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) environments.

[You may also like: The Hybrid Cloud Habit You Need to Break]

A New Approach for Protection

Modern protection of publicly-hosted cloud environments requires a new approach.

  • Assume your credentials are compromised: Hackers acquire stolen credentials in a plethora of ways, and even the largest companies are not immune to credential theft, phishing, accidental exposure, or other threats. Therefore, defenses cannot rely solely on protection of passwords and credentials.
  • Detect excessive permissions: Since excessive permissions are so frequently exploited for malicious purposes, identifying and alerting on such permissions becomes paramount. This cannot be done just by measuring against static lists of best practices; it must be based on analyzing the gap between the permissions a user has been granted and the permissions they actually use.
  • Harden security posture: The best way to stop a data breach is to prevent it before it ever occurs. Hardening your cloud security posture and eliminating excessive permissions and misconfigurations helps ensure that even if a user's credentials are compromised, attackers will not be able to do much with those permissions.
  • Look for anomalous activities: A data breach is not one thing going wrong, but a whole chain of things going wrong. Most data breaches follow a typical progression, which can be detected and stopped in time – if you know what you're looking for. Monitoring for suspicious activity in your cloud account (for example, anomalous usage of permissions) will help identify malicious activity in time and stop it before user data is exposed.
  • Automate response: Time is money, and even more so when it comes to preventing exposure of sensitive user data. Automated response mechanisms allow you to respond faster to security incidents and block attacks within seconds of detection.
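The automated-response step above can be sketched as a policy table mapping detection types to blocking actions, with a human-alert fallback for anything unrecognized (the detection names and actions here are entirely hypothetical):

```python
# Hypothetical mapping from detection type to automated response action.
RESPONSE_POLICY = {
    "stolen_credentials": "revoke_access_keys",
    "crypto_mining": "stop_instances",
    "public_exposure": "block_public_access",
}

def respond(detection, default="alert_only"):
    """Select the automated action for a detection, falling back to a human alert."""
    return RESPONSE_POLICY.get(detection, default)
```

Keeping the fallback conservative (alert, don't block) matters: an automated response that misfires on a novel but benign event can cause its own outage.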

[You may also like: Automating Cyber-Defense]

Radware’s Cloud Workload Protection Service

Radware is extending its line of cloud-based security services to provide an agentless, cloud-native solution for comprehensive protection of workloads hosted on AWS. Radware's solution protects both the overall security posture of your AWS cloud account and individual cloud workloads, guarding against cloud-native attack vectors.

Radware’s solutions addresses the core-problem of cloud-native excessive permissions by analyzing the gap between granted and used permissions, and providing smart hardening recommendations to harden configurations. Radware uses advanced machine-learning algorithms to identify malicious activities within your cloud account, as well as automated response mechanisms to automatically block such attacks. This helps customers prevent data theft, protect sensitive customer data, and meet compliance requirements.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now