

Anatomy of a Cloud-Native Data Breach

April 10, 2019 — by Radware


Migrating computing resources to cloud environments opens up new attack surfaces previously unknown in the world of premise-based data centers. As a result, cloud-native data breaches frequently have different characteristics and follow a different progression than physical data breaches. Here is a real-life example of a cloud-native data breach, how it evolved and how it possibly could have been avoided.

Target Profile: A Social Media/Mobile App Company

The company is a photo-sharing social media application, with over 20 million users. It stores over 1PB of user data within Amazon Web Services (AWS), and in 2018, it was the victim of a massive data breach that exposed nearly 20 million user records. This is how it happened.

[You may also like: Ensuring Data Privacy in Public Clouds]

Step 1: Compromising a legitimate user. Frequently, the first step in a data breach is that an attacker compromises the credentials of a legitimate user. In this incident, an attacker used a spear-phishing attack to obtain an administrative user’s credentials to the company’s environment.

Step 2: Fortifying access. After compromising a legitimate user, a hacker frequently takes steps to fortify access to the environment, independent of the compromised user. In this case, the attacker connected to the company’s cloud environment through an IP address registered in a foreign country and created API access keys with full administrative access.

Step 3: Reconnaissance. Once inside, an attacker then needs to map out what permissions are granted and what actions this role allows.
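To make the reconnaissance step concrete, here is a minimal sketch using boto3, the AWS SDK for Python. It assumes the compromised identity is an IAM user (a role would use list_attached_role_policies instead); the same read-only calls serve an attacker mapping stolen credentials and an auditor reviewing their own account:

```python
import boto3

sts = boto3.client("sts")
iam = boto3.client("iam")

# Who am I? GetCallerIdentity requires no IAM permissions at all,
# which makes it a favorite first call for reconnaissance.
identity = sts.get_caller_identity()
print("Acting as:", identity["Arn"])

# Assuming an IAM user: list each attached managed policy and dump
# the allowed actions from its default version.
user_name = identity["Arn"].split("/")[-1]
for policy in iam.list_attached_user_policies(UserName=user_name)["AttachedPolicies"]:
    meta = iam.get_policy(PolicyArn=policy["PolicyArn"])["Policy"]
    doc = iam.get_policy_version(
        PolicyArn=policy["PolicyArn"],
        VersionId=meta["DefaultVersionId"],
    )["PolicyVersion"]["Document"]
    print(policy["PolicyName"], "->", doc.get("Statement"))
```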

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Step 4: Exploitation. Once the available permissions in the account have been determined, the attacker can proceed to exploit them. Among other activities, the attacker duplicated the master user database and exposed it to the outside world with public permissions.
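The same step can be audited from the defender's side. The hedged, read-only boto3 sketch below flags RDS instances reachable from the internet and manual snapshots whose restore attribute has been opened to everyone, which is what "public permissions" on a duplicated database look like in AWS:

```python
import boto3

rds = boto3.client("rds")

# Database instances reachable from the public internet.
for db in rds.describe_db_instances()["DBInstances"]:
    if db.get("PubliclyAccessible"):
        print("Public DB instance:", db["DBInstanceIdentifier"])

# Manual snapshots whose 'restore' attribute includes 'all' are public:
# anyone with an AWS account can copy and restore them.
for snap in rds.describe_db_snapshots(SnapshotType="manual")["DBSnapshots"]:
    attrs = rds.describe_db_snapshot_attributes(
        DBSnapshotIdentifier=snap["DBSnapshotIdentifier"]
    )["DBSnapshotAttributesResult"]["DBSnapshotAttributes"]
    for attr in attrs:
        if attr["AttributeName"] == "restore" and "all" in attr["AttributeValues"]:
            print("Public snapshot:", snap["DBSnapshotIdentifier"])
```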

Step 5: Exfiltration. Finally, with customer information at hand, the attacker copied the data out of the network, exfiltrating nearly 20 million user records containing personal user information.

Lessons Learned

Your Permissions Equal Your Threat Surface: Leveraging public cloud environments means that resources that used to be hosted inside your organization’s perimeter are now outside of it, where they are no longer under the control of system administrators and can be accessed from anywhere in the world. Workload security, therefore, is defined by the people who can access those workloads and the permissions they have. In effect, your permissions equal your attack surface.

Excessive Permissions Are the No. 1 Threat: Cloud environments make it very easy to spin up new resources and grant wide-ranging permissions but very difficult to keep track of who has them. Such excessive permissions are frequently mischaracterized as misconfigurations but are actually the result of permission misuse or abuse. Therefore, protecting against those excessive permissions becomes the No. 1 priority for securing publicly hosted cloud workloads.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Cloud Attacks Follow a Typical Progression: Although each incident may develop differently, a cloud-native data breach frequently follows a typical progression: compromise of a legitimate user account, account reconnaissance, privilege escalation, resource exploitation and data exfiltration.

What Could Have Been Done Differently?

Protect Your Access Credentials: Your next data breach is a password away. Securing your cloud account credentials — as much as possible — is critical to ensuring that they don’t fall into the wrong hands.

Limit Permissions: Frequently, cloud user accounts are granted many permissions that they don’t need or never use. Exploiting the gap between granted permissions and used permissions is a common move by hackers. In the aforementioned example, the attacker used the account’s permissions to create new administrative-access API keys, spin up new databases, reset the database master password and expose the database to the outside world. Limiting permissions to only what the user needs helps ensure that, even if the account is compromised, the damage an attacker can do is limited.
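As an illustration of the principle, the sketch below grants a hypothetical application user one narrowly scoped permission instead of blanket administrative access. The user name, policy name and bucket are invented for the example:

```python
import json

import boto3

# Least privilege: this user can read objects from one bucket, nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-photo-bucket/*",  # hypothetical bucket
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="photo-app-reader",           # hypothetical user
    PolicyName="least-privilege-s3-read",  # inline policy name
    PolicyDocument=json.dumps(policy),
)
```

Even if this user’s keys leak, an attacker can read one bucket; they cannot create access keys, spin up databases or reset master passwords.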

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Alert on Suspicious Activities: Since cloud-native data breaches frequently share a common progression, certain account activities — such as port scanning, invoking previously unused APIs and granting public permissions — can be identified as red flags. Alerting on such malicious behavior indicators (MBIs) can help stop a data breach before it unfolds.
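One way to operationalize such alerting is to poll CloudTrail for a short list of high-risk event names. The event names below are real CloudTrail events, but the one-hour window and print-based “alert” are simplifications of what a production system would stream and correlate:

```python
import json
from datetime import datetime, timedelta

import boto3

# A small, illustrative set of malicious behavior indicators.
SUSPICIOUS_EVENTS = ["CreateAccessKey", "PutBucketAcl", "AuthorizeSecurityGroupIngress"]

ct = boto3.client("cloudtrail")
start = datetime.utcnow() - timedelta(hours=1)

for event_name in SUSPICIOUS_EVENTS:
    events = ct.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
    )["Events"]
    for e in events:
        detail = json.loads(e["CloudTrailEvent"])
        # Surface who acted and from where; feed this into a SIEM or pager.
        print(f"MBI: {event_name} by {e.get('Username')} "
              f"from {detail.get('sourceIPAddress')}")
```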

Automate Response Procedures: Finally, once malicious activity has been identified, fast response is paramount. Automating response mechanisms can help block malicious activity the moment it is detected and stop the breach from reaching its end goal.
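For example, a minimal automated response to a flagged access key is to deactivate it the moment detection fires. The user name is a placeholder and the key ID is AWS’s documentation example; in practice this function would be triggered by the alerting pipeline above:

```python
import boto3

def quarantine_access_key(user_name: str, access_key_id: str) -> None:
    """Immediately deactivate a compromised access key."""
    iam = boto3.client("iam")
    # Deactivating (rather than deleting) cuts the attacker off instantly
    # while preserving the key for forensics.
    iam.update_access_key(
        UserName=user_name,
        AccessKeyId=access_key_id,
        Status="Inactive",
    )
    print(f"Deactivated {access_key_id} for {user_name}")

# Hypothetical trigger:
# quarantine_access_key("photo-app-admin", "AKIAIOSFODNN7EXAMPLE")
```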

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Are Your DevOps Your Biggest Security Risks?

March 13, 2019 — by Eyal Arazi


We have all heard the horror tales: a negligent (or uninformed) developer inadvertently exposes AWS API keys online, only for hackers to find those keys, penetrate the account and cause massive damage.

But how common, in practice, are these breaches? Are they a legitimate threat, or just an urban legend for sleep-deprived IT staff? And what, if anything, can be done against such exposure?

The Problem of API Access Key Exposure

The problem of AWS API access key exposure refers to incidents in which developers’ API access keys to AWS accounts and cloud resources are inadvertently exposed and then discovered by hackers.

AWS – and most other infrastructure-as-a-service (IaaS) providers – provides direct access to tools and services via Application Programming Interfaces (APIs). Developers leverage such APIs to write scripts that automatically configure cloud-based resources, saving developers and DevOps considerable time in configuring cloud-hosted resources and automating the roll-out of new features and services.

[You may also like: Ensuring Data Privacy in Public Clouds]

In order to make sure that only authorized developers are able to access those resources and execute commands on them, API access keys are used to authenticate access. Only code containing authorized credentials is able to connect and execute.
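In practice, that authentication can look like the boto3 sketch below. The key values are AWS’s published documentation placeholders, not real credentials; keys hard-coded this way are exactly what ends up leaking to public repositories:

```python
import boto3

# Explicit credentials: every API call made through this session is
# signed with this key pair (AWS's documentation placeholders).
session = boto3.Session(
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
)
s3 = session.client("s3")
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])

# Safer: pass no keys at all and let boto3 resolve credentials from the
# environment, the shared credentials file, or an attached IAM role.
s3 = boto3.client("s3")
```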

This Exposure Happens All the Time

The problem, however, is that such access keys are sometimes left in scripts or configuration files uploaded to third-party resources, such as GitHub. Hackers are fully aware of this, and run automated scans on such repositories, in order to discover unsecured keys. Once they locate such keys, hackers gain direct access to the exposed cloud environment, which they use for data theft, account takeover, and resource exploitation.
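The scans hackers run against such repositories are not sophisticated. A few lines of Python approximate the idea; purpose-built tools such as git-secrets and truffleHog do the same job far more thoroughly:

```python
import re
import sys
from pathlib import Path

# AWS access key IDs are 20 characters, starting with "AKIA" (long-term
# keys) or "ASIA" (temporary credentials).
KEY_PATTERN = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def scan(root: str) -> None:
    """Walk a directory tree and report anything that looks like an AWS key."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in KEY_PATTERN.finditer(text):
            print(f"{path}: possible AWS key {match.group(0)}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```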

A very common use case is for hackers to access an unsuspecting cloud account and spin up multiple computing instances in order to run crypto-mining activities. The hackers then pocket the mined cryptocurrency, while leaving the owner of the cloud account to foot the bill for the usage of computing resources.

[You may also like: The Rise in Cryptomining]

Examples, sadly, are abundant:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resources for crypto-mining.
  • WordPress developer Ryan Heller uploaded code to GitHub which accidentally contained a backup copy of the wp-config.php file, containing his AWS access keys. Within hours, the file was discovered by hackers, who spun up several hundred computing instances to mine cryptocurrency, resulting in $6,000 of AWS usage fees overnight.
  • A student taking a Ruby on Rails course on Udemy opened up an AWS S3 storage bucket and uploaded his code to GitHub as part of the course requirements. However, his code contained his AWS access keys, leading to over $3,000 of AWS charges within a day.
  • The founder of an internet startup uploaded code to GitHub containing API access keys. He realized his mistake within five minutes and removed the keys. However, that was enough time for automated bots to find them, access his account and spin up computing resources for crypto-mining, resulting in a $2,300 bill.
  • One npm package maintainer published a code release containing access keys to their S3 storage buckets.

And the list goes on and on…

The problem is so widespread that Amazon even has a dedicated support page to tell developers what to do if they inadvertently expose their access keys.

How You Can Protect Yourself

One of the main drivers of cloud migration is the agility and flexibility it offers organizations to speed up the roll-out of new services and reduce time-to-market. However, this agility and flexibility frequently comes at a cost to security. In the name of expediency and consumer demand, developers and DevOps may not always take the necessary precautions to secure their environments or access credentials.

Such exposure can happen in a multitude of ways, including accidental exposure of scripts (such as uploading to GitHub), misconfiguration of cloud resources which contain such keys, compromise of third-party partners who hold such credentials, exposure through client-side code which contains keys, targeted spear-phishing attacks against DevOps staff, and more.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Nonetheless, there are a number of key steps you can take to secure your cloud environment against such breaches:

Assume your credentials are exposed. There’s no way around this: Securing your credentials, as much as possible, is paramount. However, since credentials can leak in a number of ways, and from a multitude of sources, you should assume your credentials are already exposed, or can become exposed in the future. Adopting this mindset will help you channel your efforts not just toward limiting that exposure to begin with, but toward limiting the damage to your organization should exposure occur.

Limit Permissions. As I pointed out earlier, one of the key benefits of migrating to the cloud is the agility and flexibility that cloud environments provide when it comes to deploying computing resources. However, this agility and flexibility frequently comes at a cost to security. One such example is granting promiscuous permissions to users who shouldn’t have them. In the name of expediency, administrators frequently grant blanket permissions to users, so as to remove any hindrance to operations.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

The problem, however, is that most users never use most of the permissions they have been granted, and probably don’t need them in the first place. This creates a gaping security hole: should any one of those users (or their access keys) become compromised, attackers will be able to exploit those permissions to do significant damage. Therefore, limiting those permissions according to the principle of least privilege will greatly help to limit potential damage if (and when) such exposure occurs.
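AWS itself exposes enough data to measure that gap. The sketch below uses IAM’s access-advisor APIs to list services an identity is allowed to call but has never actually used; the user ARN is a placeholder:

```python
import time

import boto3

iam = boto3.client("iam")
user_arn = "arn:aws:iam::123456789012:user/photo-app-admin"  # placeholder

# The report is generated asynchronously; request it, then poll.
job_id = iam.generate_service_last_accessed_details(Arn=user_arn)["JobId"]
while True:
    report = iam.get_service_last_accessed_details(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

for svc in report["ServicesLastAccessed"]:
    if "LastAuthenticated" not in svc:
        # Granted but never used: a candidate for removal.
        print("Never used:", svc["ServiceName"])
```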

Early Detection is Critical. The final step is to implement measures which actively monitor user activity for potentially malicious behavior. Such behavior can include first-time API usage, access from unusual locations, access at unusual times, suspicious communication patterns, exposure of private assets to the world, and more. Implementing detection measures which look for such malicious behavior indicators, correlate them, and alert on potentially malicious activity will help ensure that hackers are discovered promptly, before they can do significant damage.
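As a toy illustration of one such indicator, the sketch below flags first-time API usage per identity. A real system would persist baselines and correlate many weak signals rather than alert on a single one:

```python
from collections import defaultdict

# In-memory behavioral baseline: which API calls each identity has made.
seen_apis: dict[str, set[str]] = defaultdict(set)

def check_event(user: str, api_call: str, source_ip: str) -> None:
    if api_call not in seen_apis[user]:
        seen_apis[user].add(api_call)
        # First-time usage alone is weak evidence; correlate it with other
        # signals (new geography, odd hours) before paging anyone.
        print(f"First-time API: {user} called {api_call} from {source_ip}")

check_event("photo-app-admin", "rds:CreateDBSnapshot", "203.0.113.7")  # alerts
check_event("photo-app-admin", "rds:CreateDBSnapshot", "203.0.113.7")  # silent
```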

Read “Radware’s 2018 Web Application Security Report” to learn more.



Keeping Pace in the Race for Flexibility

February 27, 2019 — by Radware


Flexibility and elasticity. Both rank high on the corporate agenda in the age of digital transformation and IT is no exception. From the perspective of IT, virtualization and cloud computing have become the de facto standard for deployment models. They provide the infrastructure elasticity to make business more agile and higher performing and are the reason why the majority of organizations today are operating within a hybrid infrastructure, one that combines on-premise with cloud-based and/or virtualized assets.

But to deliver the elasticity promised by these hybrid infrastructures requires data center solutions that deliver flexibility. As a cornerstone for optimizing applications, application delivery controllers (ADCs) have to keep pace in the race for flexibility. The key is to ensure that your organization’s ADC fulfills key criteria to improve infrastructure planning, flexibility and operational expenses.

One License to Rule Them All

Organizations should enjoy complete agility in every aspect of the ADC service deployment. Not just in terms of capabilities, but in terms of licensing. Partner with an ADC vendor that provides an elastic, global licensing model.

Organizations often struggle with planning ADC deployments when those deployments span hybrid infrastructures and can be strapped with excess expenses by vendors when pre-deployment calculations result in over-provisioning. A global licensing model allows organizations to pay only for capacity used, be able to allocate resources as needed and add virtual ADCs at a moment’s notice to match specific business initiatives, environments and network demands.

[You may also like: Maintaining Your Data Center’s Agility and Making the Most Out of Your Investment in ADC Capacity]

The result? Dramatically simplified ADC deployment planning and a streamlined transition to the cloud.

An ADC When and Where You Need It

This licensing mantra extends to deployment options and customizations as well. Leading vendors provide the ability to deploy ADCs across on-premise and cloud-based infrastructures, allowing customers to transfer ADC capacity from physical to cloud-based data centers. Ensure you can deploy an ADC wherever and whenever it is required, at the click of a button, at no extra cost and with no purchasing complexity.

Add-on services and capabilities that go hand-in-hand with ADCs are no exception either. Web application firewalls (WAF), web performance optimization (WPO), application performance monitoring…companies should enjoy the freedom to consume only required ADC services rather than overspending on bells and whistles that will sit idle collecting dust.

Stay Ahead of the Curve

New standards for communications and cryptographic protocols can leave data center teams scrambling to keep IT infrastructure updated. They can also severely inhibit application delivery.

Take SSL/TLS. These evolving standards ensure faster encrypted communication between client and server, improved security and application resource allocation without over-provisioning, allowing IT to optimize application performance and costs during large-scale deployments.

[You may also like: The ADC is the Key Master for All Things SSL/TLS]

Combining the flexibility of an ADC that supports the latest standards with an elastic licensing model is a winning combination, as it provides the most cost-effective alternative for consuming ADC services for any application.

Contain the Madness

The goal of any ADC is to ensure each application is performing at its best while optimizing costs and resource consumption. This is accomplished by ensuring that resource utilization is always tuned to actual business needs.

Leading ADC vendors allow ADC micro-services to be added to individual ADC instances without increasing the bill. Support for container orchestration engines such as Kubernetes lets the organization adapt ADC capacity to the application. It also simplifies the addition of services such as SSL or WAF to individual instances or micro-services.

[You may also like: Simple to Use Link Availability Solutions]

Finding an ADC vendor that addresses all these considerations requires expanding the search beyond mainstream vendors. Driving flexibility via IT elasticity means considering all the key ADC capabilities and licensing nuances critical to managing and optimizing today’s diversified IT infrastructure. Remember these three keys when evaluating ADC vendors:

  • An ADC licensing model should be a catalyst for cutting infrastructure expenditures, not increasing them.
  • An ADC licensing model should provide complete agility in every aspect of your ADC deployment.
  • An ADC license should allow IT to simplify and automate IT operational processes.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.



Mitigating Cloud Attacks With Configuration Hardening

February 26, 2019 — by Radware


For attackers, misconfigurations in the public cloud can be exploited for a number of reasons. Typical attack scenarios include several kill chain steps, such as reconnaissance, lateral movement, privilege escalation, data acquisition, persistence and data exfiltration. These steps might be fully or partially utilized by an attacker over dozens of days until the ultimate objective is achieved and the attacker reaches the valuable data.

Removing the Mis from Misconfigurations

To prevent attacks, enterprises must harden configurations to address promiscuous permissions by applying continuous hardening checks to limit the attack surface as much as possible. The goals are to avoid public exposure of data from the cloud and reduce overly permissive access to resources by making sure communication between entities within a cloud, as well as access to assets and APIs, are only allowed for valid reasons.

For example, the private data of six million Verizon users was exposed when maintenance work changed a configuration and made an S3 bucket public. Only smart configuration hardening that applies the approach of “least privilege” enables enterprises to meet those goals.
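A continuous hardening check can be as simple as enumerating buckets and flagging ACL grants to the global groups, as in this read-only boto3 sketch (account-level S3 Block Public Access is the blunter, stronger control):

```python
import boto3

# ACL grantees that mean "everyone" and "any authenticated AWS user".
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
            print(f"PUBLIC: {bucket['Name']} grants {grant['Permission']} to everyone")
```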

[You may also like: Ensuring Data Privacy in Public Clouds]

The process requires applying behavior analytics methods over time, including regular reviews of permissions and continuous analysis of each entity’s usual behavior, to ensure users only have access to what they need, nothing more. By reducing the attack surface, enterprises make it harder for hackers to move laterally in the cloud.

The process is complex and is often best managed with the assistance of an outside security partner with deep expertise and a system that combines multiple algorithms measuring activity across the network to detect anomalies and determine whether malicious intent is probable. Attackers will often stretch their kill chain over several days or months.

Taking Responsibility

It is tempting for enterprises to assume that cloud providers are completely responsible for network and application security to ensure the privacy of data. In practice, cloud providers provide tools that enterprises can use to secure hosted assets. While cloud providers must be vigilant in how they protect their data centers, responsibility for securing access to apps, services, data repositories and databases falls on the enterprises.


[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Hardened network and meticulous application security can be a competitive advantage for companies to build trust with their customers and business partners. Now is a critical time for enterprises to understand their role in protecting public cloud workloads as they transition more applications and data away from on-premise networks.

The responsibility to protect the public cloud is a relatively new task for most enterprises. But, everything in the cloud is external and accessible if it is not properly protected with the right level of permissions. Going forward, enterprises must quickly incorporate smart configuration hardening into their network security strategies to address this growing threat.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Excessive Permissions are Your #1 Cloud Threat

February 20, 2019 — by Eyal Arazi


Migrating workloads to public cloud environments opens up organizations to a slate of new, cloud-native attack vectors which did not exist in the world of premise-based data centers. In this new environment, workload security is defined by which users have access to your cloud environment, and what permissions they have. As a result, protecting against excessive permissions, and quickly responding when those permissions are abused, becomes the #1 priority for security administrators.

The Old Insider is the New Outsider

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protection, through mechanisms such as firewalls, IPS/IDS, WAF and DDoS protection, secure gateways, etc.

However, moving workloads to the cloud has led organizations (and IT administrators) to lose direct physical control over their workloads and to relinquish many aspects of security through the Shared Responsibility Model. As a result, the insider of the old, premise-based world is suddenly an outsider in the new world of publicly hosted cloud workloads.

[You may also like: Ensuring Data Privacy in Public Clouds]

IT administrators and hackers now have identical access to publicly-hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat.

Workload security, therefore, is defined by the people who can access those workloads, and the permissions they have.

Your Permissions = Your Attack Surface

One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them, and what permissions they actually use.

All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have too many permissions, which they never use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes.

As a result, cloud workloads are vulnerable to data breaches (i.e., theft of data from cloud accounts), service violation (i.e., completely taking over cloud resources), and resource exploitation (such as cryptomining). Such promiscuous permissions are frequently mischaracterized as ‘misconfigurations’, but are actually the result of permission misuse or abuse by people who shouldn’t have them.

[You may also like: Protecting Applications in a Serverless Architecture]

Therefore, protecting against those promiscuous permissions becomes the #1 priority for protecting publicly-hosted cloud workloads.

Traditional Protections Provide Piecemeal Solutions

The problem, however, is that existing solutions provide incomplete protection against the threat of excessive permissions.

  • The built-in mechanisms of public clouds usually provide fairly basic protection and focus mostly on the overall computing environment; they are blind to activity within individual workloads. Moreover, since many companies run multi-cloud and hybrid-cloud environments, the built-in protections offered by cloud vendors will not protect assets outside of their network.
  • Compliance and governance tools usually use static lists of best practices to analyze permissions usage. However, they will not detect (and alert to) excessive permissions, and are usually blind to activity within workloads themselves.
  • Agent-based solutions require deploying (and managing) agents on cloud-based servers, and will protect only servers on which they are installed. However, they are blind to overall cloud user activity and account context, and usually cannot protect non-server resources such as services, containers, serverless functions, etc.
  • Cloud Access Security Brokers (CASB) tools focus on protecting software-as-a-service (SaaS) applications, but do not protect infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) environments.

[You may also like: The Hybrid Cloud Habit You Need to Break]

A New Approach for Protection

Modern protection of publicly-hosted cloud environments requires a new approach.

  • Assume your credentials are compromised: Hackers acquire stolen credentials in a plethora of ways, and even the largest companies are not immune to credential theft, phishing, accidental exposure, or other threats. Therefore, defenses cannot rely solely on protection of passwords and credentials.
  • Detect excessive permissions: Since excessive permissions are so frequently exploited for malicious purposes, identifying and alerting against such permissions becomes paramount. This cannot be done just by measuring against static lists of best practices, but must be based on analyzing the gap between the permissions a user has been granted and the permissions they actually use.
  • Harden security posture: The best way of stopping a data breach is preventing it before it ever occurs. Hardening your cloud security posture and eliminating excessive permissions and misconfigurations helps ensure that even if a user’s credentials become compromised, attackers will not be able to do much with those permissions.
  • Look for anomalous activities: A data breach is not one thing going wrong, but a whole list of things going wrong. Most data breaches follow a typical progression, which can be detected and stopped in time – if you know what you’re looking for. Monitoring for suspicious activity in your cloud account (for example, anomalous usage of permissions) will help identify malicious activity in time and stop it before user data is exposed.
  • Automate response: Time is money, and even more so when it comes to preventing exposure of sensitive user data. Automated response mechanisms allow you to respond faster to security incidents, and block-off attacks within seconds of detection.

[You may also like: Automating Cyber-Defense]

Radware’s Cloud Workload Protection Service

Radware is extending its line of cloud-based security services to provide an agentless, cloud-native solution for comprehensive protection of workloads hosted on AWS. Radware’s solution protects both the overall security posture of your AWS cloud account, as well as individual cloud workloads, protecting against cloud-native attack vectors.

Radware’s solution addresses the core problem of cloud-native excessive permissions by analyzing the gap between granted and used permissions, and providing smart recommendations to harden configurations. Radware uses advanced machine-learning algorithms to identify malicious activities within your cloud account, as well as automated response mechanisms to automatically block such attacks. This helps customers prevent data theft, protect sensitive customer data, and meet compliance requirements.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Ensuring Data Privacy in Public Clouds

January 24, 2019 — by Radware


Most enterprises spread data and applications across multiple cloud providers, typically referred to as a multicloud approach. While it is in the best interest of public cloud providers to offer network security as part of their service offerings, every public cloud provider utilizes different hardware and software security policies, methods and mechanisms, creating a challenge for the enterprise to maintain the exact same policy and configuration across all infrastructures. Public cloud providers typically meet basic security standards in an effort to standardize how they monitor and mitigate threats across their entire customer base. Seventy percent of organizations reported using public cloud providers with varied approaches to security management. Moreover, enterprises typically prefer neutral security vendors instead of over-relying on public cloud vendors to protect their workloads. As the multicloud approach expands, it is important to centralize all security aspects.

When Your Inside Is Out, Your Outside Is In

Moving workloads to publicly hosted environments leads to new threats, previously unknown in the world of premise-based computing. Computing resources hosted inside an organization’s perimeter are more easily controlled. Administrators have immediate physical access, and the workload’s surface exposure to insider threats is limited. When those same resources are moved to the public cloud, they are no longer under the direct control of the organization. Administrators no longer have physical access to their workloads. Even the most sensitive configurations must be done from afar via remote connections. Putting internal resources in the outside world results in a far larger attack surface with long, undefined boundaries of the security perimeter.

In other words, when your inside is out, then your outside is in.

[You may also like: Ensuring a Secure Cloud Journey in a World of Containers]

External threats that could previously be easily contained can now strike directly at the heart of an organization’s workloads. Hackers can have the same access to workloads as the administrators managing them. In effect, the whole world is now an insider threat.

In such circumstances, restricting the permissions to access an organization’s workloads and hardening its security configuration are key aspects of workload security.

Poor Security Hygiene Leaves You Exposed

Cloud environments make it very easy to grant access permissions and very difficult to keep track of who has them. With customer demands constantly increasing and development teams put under pressure to quickly roll out new enhancements, many organizations spin up new resources and grant excessive permissions on a routine basis. This is particularly true in many DevOps environments where speed and agility are highly valued and security concerns are often secondary.

Over time, the gap between the permissions that users have and the permissions that they actually need (and use) becomes a significant crack in the organization’s security posture. Promiscuous permissions leave workloads vulnerable to data theft and resource exploitation should any of the users who have access permissions to them become compromised. As a result, misconfiguration of access permissions (that is, giving permissions to too many people and/or granting permissions that are overly generous) becomes the most urgent security threat that organizations need to address in public cloud environments.

[You may also like: Considerations for Load Balancers When Migrating Applications to the Cloud]

The Glaring Issue of Misconfiguration

Public cloud providers offer identity access management tools for enterprises to control access to applications, services and databases based on permission policies. It is the responsibility of enterprises to deploy security policies that determine which entities are allowed to connect with other entities or resources in the network. These policies are usually a set of static definitions and rules that control which entities are allowed to, for example, run an API or access data.

One of the biggest threats to the public cloud is misconfiguration. If permission policies are not managed properly by an enterprise with the tools offered by the public cloud provider, excessive permissions will expand the attack surface, enabling hackers to exploit one entry point to gain access to the entire network.

Moreover, a common misconfiguration scenario results from a DevOps engineer using predefined permission templates, called managed permission policies, in which the standardized policy may grant wider permissions than needed. The result is excessive permissions that are never used. Misconfigurations can cause accidental exposure of data, services or machines to the internet, as well as leave doors wide open for attackers.

[You may also like: The Hybrid Cloud Habit You Need to Break]

For example, an attacker can steal data by using the security credentials of a DevOps engineer gathered in a phishing attack. The attacker leverages the privileged role to take a snapshot of elastic block storage (EBS) to steal data, then shares the EBS snapshot and data with an account on another public network without installing anything. The attacker can also leverage a role with excessive permissions to create a new machine at the beginning of the attack, infiltrate deeper into the network to share AMI and RDS snapshots (Amazon Machine Images and Relational Database Service, respectively), and then unshare the resources.
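This pattern leaves a detectable trail. The read-only sketch below lists EBS snapshots in your own account that have been shared publicly or with another AWS account, which is exactly the sharing step described above:

```python
import boto3

sts = boto3.client("sts")
ec2 = boto3.client("ec2")
my_account = sts.get_caller_identity()["Account"]

for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    perms = ec2.describe_snapshot_attribute(
        SnapshotId=snap["SnapshotId"],
        Attribute="createVolumePermission",
    )["CreateVolumePermissions"]
    for perm in perms:
        if perm.get("Group") == "all":
            print("PUBLIC snapshot:", snap["SnapshotId"])
        elif "UserId" in perm and perm["UserId"] != my_account:
            print(f"Snapshot {snap['SnapshotId']} shared with account {perm['UserId']}")
```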

Year over year in Radware’s global industry survey, the most frequently mentioned security challenges encountered with migrating applications to the cloud are governance issues followed by skill shortage and complexity of managing security policies. All contribute to the high rate of excessive permissions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Now or Never: Financial Services and the Cloud

January 9, 2019 — by Sandy Toplis


I will get straight to the point: The time is right for the financial services (FS) industry to leverage the power of the cloud. It dovetails quite nicely with retail banking’s competitive moves to provide users with more flexible choices, banking simplification and an improved, positive customer experience. Indeed, I am encouraged that roughly 70% of my financial services customers are looking to move more services to the cloud, and approximately 50% have a cloud-first strategy.

This is a departure from the FS industry’s history with the public cloud. Historically, it has shied away from cloud adoption—not because it’s against embracing new technologies for business improvement, but because it is one of the most heavily regulated and frequently scrutinized industries in terms of data privacy and security. Concerns regarding the risk of change and impact to business continuity, customer satisfaction, a perceived lack of control, data security, and costs have played a large role in the industry’s hesitation to transition to the cloud.

[You may also like: Credential Stuffing Campaign Targets Financial Services]

Embracing Change

More and more, banks are moving applications to the cloud to take advantage of the scalability, lower capital costs, ease of operations and resilience offered by cloud solutions. Due to differing data-residency requirements from jurisdiction to jurisdiction, banks need to choose solutions that allow them to have exacting control over transient and permanent data flows. Solutions that are flexible enough to be deployed in hybrid mode, on public cloud infrastructure as well as private infrastructure, are key to allowing banks the flexibility to leverage existing investments as well as meet these strict regulatory requirements.

[You may also like: The Hybrid Cloud Habit You Need to Break]

Although the rate of cloud adoption within the financial services industry still has much room for growth, the industry is addressing many of its concerns and putting to bed the myths surrounding cloud-based security. Indeed, multi-cloud adoption is proliferating, and it’s becoming clear that banks are increasingly turning to the cloud and to new (FinTech) technology. In some cases, banks are already using cloud services for non-core and non-critical uses such as HR, email, customer analytics, customer relationship management (CRM), and for development and testing purposes.

Interestingly, smaller banks have more readily made the transition by moving entire core services (treasury, payments, retail banking, enterprise data) to the cloud. As these and larger banks embrace new FinTech, their service offerings will stand out in the competitive landscape, helping to propel the digital transformation race.

What’s Driving the Change?

There are several key drivers for the adoption of multi (public) cloud-based services for the FS industry, including:

  • Risk mitigation in cloud migration. Many companies operate a hybrid security model, so the cloud environment works adjacent to existing infrastructure. Organisations are also embracing the hybrid model to deploy cloud-based innovation sandboxes to rapidly validate consumers’ acceptance of new services without disrupting their existing business. The cloud can help to lower risks associated with traditional infrastructure technology where capacity, redundancy and resiliency are operational concerns.  From a regulatory perspective, the scalability of the cloud means that banks can scan potentially thousands of transactions per second, which dramatically improves the industry’s ability to combat financial crime, such as fraud and money laundering.
  • Security. Rightly so, information security remains the number one concern for CISOs. When correctly deployed, cloud applications are no less secure than traditional in-house deployments. What’s more, the flexibility to scale in a cloud environment can empower banks with more control over security issues.
  • Agile innovation and competitive edge. Accessing the cloud can increase a bank’s ability to innovate by enhancing agility, efficiency and productivity. Gaining agility with faster onboarding of services (from the traditional two-to-three weeks to implement a service to almost instantly in the cloud) gives banks a competitive edge: they can launch new services to the market quicker and with security confidence. Additionally, the scaling up (or down) of services is fast and reliable, which can help banks to reallocate resources away from the administration of IT infrastructure, and towards innovation and fast delivery of products and services to markets.
  • Cost benefits. As FS customers move from on-prem to cloud environments, costs shift from capex to opex. The cost savings of public cloud solutions are significant, especially given the reduction in initial capex requirements for traditional IT infrastructure. During periods of volumetric traffic, the cloud can allow banks to manage computing capacity more efficiently. And when the cloud is adopted for risk mitigation and innovation purposes, cost benefits arise from the resultant improvements in business efficiency. According to KPMG, shifting back-office functions to the cloud allows banks to achieve savings of between 30 and 40 percent.

[You may also like: The Executive Guide to Demystify Cybersecurity]

A Fundamental Movement

Cloud innovation is fast becoming a fundamental driver in global digital disruption and is increasingly gaining more prominence and cogency with banks. In fact, Gartner predicts that by 2020, a corporate no-cloud policy will become as rare as a no-internet policy is today.

Regardless of the size of your business—be it Retail Banking, Investment Banking, Insurance, Forex, Building Societies, etc.—protecting your business from cybercriminals and their ever-changing means of “getting in” is essential.  The bottom line: Whatever cloud deployment best suits your business is considerably more scalable and elastic than hosting in-house, and therefore suits any organisation.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.



Bot or Not? Distinguishing Between the Good, the Bad & the Ugly

January 8, 2019 — by Anna Convery-Pelletier


Bots touch virtually every part of our digital lives. They help populate our news feeds, tell us the weather, provide stock quotes, control our search rankings, and help us comparison shop. We use bots to book travel, for online customer support, and even to turn our lights on and off and unlock our doors.

Yet, for every ‘good’ bot, there is a nefarious one designed to disrupt, steal or manipulate. Indeed, at least one third of all internet traffic consists of a spectrum of ‘bad’ bots. On one end are the manipulative bots, like those designed to buy out retailers’ inventory to resell high-demand goods at markup (think limited-edition sneakers or ticket scalping) or simulate advertiser click counts. On the other, more extreme end, malicious bots take over accounts, conduct API abuse and enslave our IoT devices to launch massive DDoS attacks.

Equally troubling is the speed at which the bot ecosystem is evolving. Like most criminal elements, threat actors are singularly focused in their goals: They constantly update, mutate, and modify their tool sets to work around the various protections companies put in place.

[You may also like: The Evolution of IoT Attacks]

In other words, what protected your organization against bots last year may not work today. Research from Radware’s 2018 State of Web Application Security Report shows that most organizations rely on tools like Captcha to detect their bot traffic, but modern, sophisticated bots can easily bypass those tools, making it difficult to even detect bot traffic, let alone identify the bot’s intentions.

Organizations need to look for bot management solutions that not only effectively detect and mitigate bot attacks but can also distinguish between ‘good’ and ‘bad’ bots in real-time.

Yesterday, Radware announced its intent to acquire ShieldSquare, a pioneer in the bot mitigation industry and one of three solution leaders recognized by Forrester, with strong differentiation in the Attack Detection, Threat Research, Reporting, and Analytics categories.

The strong technology synergy between the two companies around advanced machine learning and the opportunity to extend Radware’s existing cloud security services bring a tremendous advantage to our customers and partners.

[You may also like: 9 Ways to Ensure Cloud Security]

This acquisition allows Radware to expand our portfolio with more robust bot management solutions that can stand alone as product offerings as well as integrate into our suite of attack mitigation solutions. Radware will offer ShieldSquare’s bot management and mitigation product under the new Radware Bot Management product line. It extends Radware’s advanced anti-bot capabilities from multi-protocol IoT DDoS attacks to more crafted e-commerce attacks, addressing several emerging problems:

  • Data Harvesting and Scraping Attacks
  • Account Creation and Account Takeover Attacks
  • Denial of Inventory
  • Application DDoS and Brute-Force Attacks
  • Brand Image/Reputation Attacks

It also provides ShieldSquare’s customers with access to the full suite of Radware security and availability solutions both on-prem and in the cloud, including our Cloud WAF services for comprehensive protection of applications.

We look forward to welcoming the ShieldSquare team into the Radware family and joining forces to offer some of the world’s best bot management solutions.

Read “Radware’s 2018 Web Application Security Report” to learn more.



2019 Predictions: Will Cyber Serenity Soon Be a Thing of the Past?

November 29, 2018 — by Daniel Smith


In 2018 the threat landscape evolved at a breakneck pace, from predominantly DDoS and ransom attacks (in 2016 and 2017, respectively), to automated attacks. We saw sensational attacks on APIs, the ability to leverage weaponized Artificial Intelligence, and growth in side-channel and proxy-based attacks.

And by the looks of it, 2019 will be an extension of the proverbial game of whack-a-mole, with categorical alterations to the current tactics, techniques and procedures (TTPs). While nobody knows exactly what the future holds, strong indicators today enable us to forecast trends in the coming year.

The public cloud will experience a massive security attack

The worldwide public cloud services market is projected to grow 17.3 percent in 2019 to total $206.2 billion, up from $175.8 billion in 2018, according to Gartner, Inc. This means organizations are rapidly shifting content to the cloud, and with that shift come new vulnerabilities and threats. While cloud adoption is touted as faster, better, and easier, security is often overlooked in favor of performance and overall cost. Organizations trust and expect their cloud providers to adequately secure information for them, but perception is not always reality when it comes to current cloud security, and 2019 will demonstrate this.

[You may also like: Cloud vs DDoS, the Seven Layers of Complexity]

Ransom techniques will surge

Ransom techniques, including ransomware and ransom denial-of-service (RDoS), will give way to hijacking new embedded technologies, along with holding healthcare systems and smart cities hostage with the launch of 5G networks and devices. What does this look like? The prospects are distressing:

  • Hijacking the availability of a service—like stock trading, streaming video or music, or even 911—and demanding a ransom for the digital return of the devices or network.
  • Hijacking a device. Not only are smart home devices like thermostats and refrigerators susceptible to security lapses, but so are larger devices, like automobiles.
  • Healthcare ransom attacks pose a particularly terrifying threat. As healthcare is increasingly interwoven with cloud-based monitoring, the services and embedded IoT devices responsible for administering health management (think prescriptions/urgent medications, health records, etc.) are vulnerable, putting those seeking medical care in jeopardy of having the devices they depend on, or the networks supporting those devices, hijacked or targeted by malware.

[You may also like: The Origin of Ransomware and Its Impact on Businesses]

Nation state attacks will increase

As trade and other types of “soft” power conflicts increase in number and severity, nation states and other groups will seek new ways of causing widespread disruption, including internet outages at the local or regional level, service outages, supply chain attacks and application blacklisting by governments in attempted power grabs. Contractors and government organizations are likely to be targeted, and other industries will stand to lose millions of dollars as indirect victims if communications systems fail and trade grinds to a halt.

More destructive DDoS attacks are on the way

Over the past several years, we’ve witnessed the development and deployment of massive IoT-based botnets, such as Mirai, BrickerBot, Reaper and Hajime, whose systems are built around thousands of compromised IoT devices. Most of these weaponized botnets have been used in cyberattacks to knock out critical devices or services in a relatively straightforward manner.

Recently there has been a change in devices targeted by bot herders. Based on developments we are seeing in the wild, attackers are not only infiltrating resource-constrained IoT devices, they are also targeting powerful cloud-based servers. When targeted, only a handful of compromised instances are needed to create a serious threat. Since IoT malware is cross-compiled for many platforms, including x86_64, we expect to see attackers consistently altering and updating Mirai/Qbot scanners to include more cloud-based exploits going into 2019.

[You may also like: IoT Botnets on the Rise]

Cyber serenity may be a thing of the past

If the attack landscape continues to evolve into 2019 through various chained attacks and alterations of current TTPs to include automated features, the best years of cybersecurity may be behind us. Let’s hope that 2019 will be the year we collectively begin to really share intelligence and aid one another in knowledge transfer; it’s critical in order to address the threat equation and come up with reasonable, achievable solutions that will abate the ominous signs before us all.

Until then, pay special attention to weaponized AI, large API attacks, proxy attacks and automated social engineering. As they target the hidden attack surface of automation, they will no doubt become very problematic moving forward.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.



Evolving Cyberthreats: Enhance Your IT Security Mechanisms

November 28, 2018 — by Fabio Palozza


For years, cybersecurity professionals across the globe have been highly alarmed by threats appearing in the form of malware, including Trojans, viruses and worms, as well as spear-phishing attacks. And this year was no different. 2018 witnessed its fair share of attacks, including some new trends: credential theft emerged as a major concern, and although ransomware remains a major player in the cyberthreat landscape, we have observed a sharp decline in insider threats.

This especially holds true for the UK and Germany, which are now under the jurisdiction of the General Data Protection Regulation (GDPR). However, in the U.S., insider threats are on the rise, from 72% in 2017 to an alarming 80% in 2018.

The Value of Data Backups

When WannaCry was launched in May 2017, it caused damages estimated in the billions of dollars, affecting 300,000 computers in 150 nations within just a few days. According to a CyberEdge Group report, 55% of organizations around the world were victimized by ransomware in 2017; of those, nearly 87% chose not to pay the ransom and were able to retrieve their data thanks to offline data-backup systems. Among the organizations that had no option but to pay the ransom, only half could retrieve their data.

What does this teach us? That offline data backups are a practical solution to safeguard businesses against ransomware attacks. Luckily, highly efficient and practical cloud-based backup solutions have been introduced in the market, which can help businesses adopt appropriate proactive measures to maintain data security.
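One pattern worth noting is write-once backup storage. The hedged sketch below uses S3 Object Lock in COMPLIANCE mode, under which a backup object cannot be deleted or overwritten until its retention date passes, even with stolen administrator credentials. The bucket name and file are hypothetical; Object Lock must be enabled when the bucket is created, and the create_bucket call as written assumes the default us-east-1 region:

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "example-immutable-backups"  # hypothetical

# Object Lock can only be turned on at bucket creation time.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

with open("db-dump.sql.gz", "rb") as backup:
    s3.put_object(
        Bucket=BUCKET,
        Key="backups/2018-11-28/db-dump.sql.gz",
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        # Immutable for 30 days: ransomware that steals these credentials
        # still cannot destroy the backup.
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```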

[You may also like: SMB Vulnerabilities – WannaCry, Adylkuzz and SambaCry]

Security Concerns Give Way to Opportunities

However, there are concerns with regard to cloud security, as well as data privacy and data confidentiality. For instance, apprehensions regarding access control, constant and efficient threat monitoring, risk assessment, and maintenance of regulatory compliance inhibit the holistic implementation of cloud solutions.

But while these concerns act as impediments for companies, they also serve as opportunities for security vendors to step into the scene and develop richer and more effective solutions.

And, make no mistake, there is a definite need for better solutions. According to Verizon’s 2015 Data Breach Investigations Report, 99.9% of exploited vulnerabilities were compromised more than a year after the CVE was published, despite the availability of patches.

Why? Despite IT security experts’ insistence on regularly monitoring and patching vulnerabilities in a timely manner, doing so has its challenges; patching involves taking systems offline, which, in turn, affects employee productivity and company revenue. Some organizations even fail to implement patching due to lack of qualified staff. Indeed, more than 83% of companies report experiencing patching challenges.

[You may also like: The Evolving Network Security Environment – Can You Protect Your Customers in a 5G Universe?]

This is all to say, today’s dearth of effective patch and vulnerability management platforms provides opportunities for vendors to explore these fields and deliver cutting-edge solutions. And with IT security budgets healthier than ever, there’s a glimmer of hope that businesses will indeed invest in these solutions.

Let’s see what 2019 brings.

Read “Radware’s 2018 Web Application Security Report” to learn more.
