
Cloud Security

Transforming Into a Multicloud Environment

June 26, 2019 — by Radware


While C-suite executives are taking on larger roles in proactively discussing cybersecurity issues, they are also evaluating how to leverage advances in technology to improve business agility. But as network architectures get more complex, there is added pressure to secure new points of attack vulnerability.

Organizations continue to host applications and data in the public cloud, typically spread across multiple cloud providers. This multicloud approach enables enterprises to be nimbler with network operations, improve the customer experience and reduce costs.

[You may also like: Executives Are Turning Infosec into a Competitive Advantage]

Public Cloud Challenges

Every public cloud provider utilizes different hardware and software security policies, methods and mechanisms. This creates a challenge for enterprises to maintain standard policies and configurations across all infrastructures.

Furthermore, public cloud providers generally only meet basic security standards for their platform. And application security of workloads on public clouds is not included in the public cloud offering.

Even with concerns about the security of public clouds (almost three in five respondents expressed concern about vulnerabilities within their companies’ public cloud networks), organizations are moving applications and data to cloud service providers.

The Human Side of the Cloud

Sometimes the biggest threat to an organization’s digital assets is the people who are hired to protect them. Whether on purpose or through carelessness, people can compromise the permissions designed to create a security barrier.

[You may also like: Eliminating Excessive Permissions]

Of the almost three-fourths who indicated that they have experienced unauthorized access to their public cloud assets, the most common reasons were:

  • An employee neglected credentials in a development forum (41%);
  • A hacker made it through the provider’s security (37%) or the company’s security (31%); or
  • An insider left a way in (21%).

An insider?! Yes, indeed. Organizations may run into malicious insiders (legitimate users who exploit their privileges to cause harm) and/or negligent insiders (also legitimate users, such as Dev/DevOps engineers who make configuration mistakes, or other employees with access who practice low security hygiene and leave ways for hackers to get in).

[You may also like: Are Your DevOps Your Biggest Security Risks?]

To limit the human factor, senior-level executives should make sure that continuous hardening checks are applied to configurations in order to validate permissions and limit the possibility of attacks as much as possible.

The goals? To avoid public exposure of data from the cloud and reduce overly permissive access to resources by making sure that communication between entities within a cloud, as well as access to assets and APIs, are only allowed for valid reasons.
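
To make those continuous hardening checks concrete, the sketch below shows one such check on AWS, using the boto3 SDK to flag security group rules that are open to the entire internet. It is a minimal illustration under assumed credentials and region, not a complete hardening program.

```python
# Minimal sketch of a continuous hardening check (assumes AWS, boto3 and
# credentials allowed to call ec2:DescribeSecurityGroups; extend to other
# resource types as needed).
import boto3

def find_open_security_groups(region="us-east-1"):
    """Flag inbound rules that allow traffic from anywhere (0.0.0.0/0)."""
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((sg["GroupId"], rule.get("FromPort"), rule.get("ToPort")))
    return findings

if __name__ == "__main__":
    for group_id, from_port, to_port in find_open_security_groups():
        print(f"Overly permissive rule in {group_id}: ports {from_port}-{to_port} open to 0.0.0.0/0")
```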

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Cloud Computing

Eliminating Excessive Permissions

June 11, 2019 — by Eyal Arazi


Excessive permissions are the #1 threat to workloads hosted on the public cloud. As organizations migrate their computing resources to public cloud environments, they lose visibility and control over their assets. In order to accelerate the speed of business, extensive permissions are frequently granted to users who shouldn’t have them, which creates a major security risk should any of these users ever become compromised by hackers.

Watch the video below to learn more about the importance of eliminating excessive permissions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Application Delivery, Cloud Computing

Economics of Load Balancing When Transitioning to the Cloud

May 22, 2019 — by Prakash Sinha


One of the concerns I hear often is that application delivery controller (ADC) licensing models do not support cloud transitions for the enterprise or address the business needs of cloud service providers that have a large number of tenants.

Of course, there are many models to choose from: perpetual pricing per instance, bring-your-own-license (BYOL), consumption and metered licensing (by CPU cores, per user or by throughput), and service provider licensing agreements (SPLA), to name a few. The biggest concern is the complexity of licensing ADC capacity. In a cloud environment, the performance profile of a particular instance may need to change to accommodate a traffic spike, and the licensing infrastructure and automation need to accommodate this characteristic.

Traditionally, load balancers were deployed as redundant pairs of physical devices under perpetual pricing: a non-expiring license to use an instance, whether hardware, virtualized or in the cloud. The customer has no obligation to pay for support or update services, although these are offered at an additional yearly cost. As virtualization took hold in data centers, ADCs began to be deployed as virtual appliances and started supporting a subscription licensing model: a renewable license, usually annual or monthly, that includes software support and updates during the subscription term and is automatically terminated unless renewed at the end of the term. Now, as applications move to the cloud, ADCs are being deployed as a service in the cloud, and consumption-based pricing is becoming common.

[You may also like: Keeping Pace in the Race for Flexibility]

Evaluating Choices: The Problem of Plenty

There are many licensing models to choose from (perpetual, subscription, consumption/metered), so how do you decide? The key is to understand what problem you’re trying to solve, identify the MUST-have capabilities for your applications, plan how much capacity you’ll need, and then do an apples-to-apples comparison.

Understand the use case

Let us consider a cloud service provider (CSP) tenant onboarding as an example. The provider offers service to its tenants (medium and large enterprises), which consume their own homegrown applications and those offered and hosted by the CSP.

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

For example, a CSP whose tenants are hospitals and physician networks offers patient registration systems as a shared SaaS offering among multiple tenants. Each tenant has varying needs for a load balancer: small tenants require public cloud-based ADCs, whereas mid-sized and large ones need both public and private cloud solutions. Some of the larger tenants also require their application services to be proxied by hardware ADCs due to low-latency requirements. Self-service is a must for the CSP to reduce the cost of doing business, and so are automation and integration to support the tenants that administer their own environments.

Based on the use case, evaluate what functionality you’d need and what type of form factor support is required

CSPs are increasingly concerned about the rapid growth and expansion of Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform into their markets. Hosting providers that only provide commodity services, such as co-location and web hosting, have realized they are one service outage away from losing customers to larger cloud providers.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

In addition, many CSPs that provide managed services are struggling to grow because their current business is resource intensive and difficult to scale. In order to survive this competitive landscape, CSPs must have:

  • Cost predictability for the CSP (and tenants)
  • The ability to offer value-added advisory services, such as technical and consulting opportunities to differentiate
  • Self-service to reduce resources via the ability to automate and integrate with a customer’s existing systems
  • Solutions that span both private and public cloud infrastructure and include hardware

For the CSP onboarding use case above, the technical requirements break down to self-service, the ability to create ADC instances of various sizes, automated provisioning, and support for Ansible, vRO and Cisco ACI. From a business perspective, the CSP needs to offer its tenants a host of solutions spanning public cloud, private cloud and hardware-based ADCs.

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

Plan Capacity

Once you understand the use case and have defined the functional, technical and business requirements, it’s time to review what kind of capacity you’ll need, now and in the future. Use existing analytics dashboards and tools to gain visibility into what you consume today: HTTP, HTTPS and UDP traffic, SSL certificates, throughput per application at peak, and connections and requests per second. Then, based on your growth projections, define future needs.
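
As a rough illustration of that projection step, the short sketch below compounds today’s peak figures by an assumed growth rate and adds headroom. All of the numbers are placeholders; substitute your own measurements and projections.

```python
# Back-of-the-envelope capacity projection; the peaks, growth rate and
# headroom below are illustrative placeholders, not real measurements.
current_peak = {
    "throughput_gbps": 4.0,            # peak throughput across applications
    "ssl_handshakes_per_sec": 12_000,  # new SSL/TLS handshakes per second
    "requests_per_sec": 90_000,        # HTTP/S requests per second
}
annual_growth = 0.30          # assumed 30% year-over-year growth
planning_horizon_years = 3
headroom = 1.25               # 25% buffer for unexpected spikes

for metric, value in current_peak.items():
    projected = value * ((1 + annual_growth) ** planning_horizon_years) * headroom
    print(f"{metric}: plan for ~{projected:,.0f}")
```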

Compare Available Options

The next step is to look at the various vendors for the performance metric that’s important to your applications. If you have a lot of SSL traffic, then look at that metric as a cost/unit across various vendors.

[You may also like: Are Your Applications Secure?]

Once you have narrowed down the list of vendors to those that support the functionality your applications MUST have, it’s time to review the pricing against your budget. It’s important to compare apples-to-apples, so based on your capacity and utilization profile, compare the vendors on your short list. One example is comparing the yearly cost of AWS on-demand instances against a Radware Global Elastic Licensing subscription.
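
Below is a hedged sketch of what such an apples-to-apples comparison can look like. The rates are invented placeholders rather than actual AWS or Radware pricing; the point is simply to normalize both options to a yearly cost for the same capacity and utilization profile.

```python
# Yearly cost comparison sketch. All rates are made-up placeholders;
# substitute real quotes and your own utilization profile.
def on_demand_yearly(instances, hourly_rate, hours_per_day=24):
    """Cost of running ADC instances on demand for a year."""
    return instances * hourly_rate * hours_per_day * 365

def pooled_license_yearly(licensed_capacity_gbps, price_per_gbps_year):
    """Cost of a pooled/elastic capacity license for a year."""
    return licensed_capacity_gbps * price_per_gbps_year

on_demand = on_demand_yearly(instances=6, hourly_rate=1.10)                            # placeholder rate
pooled = pooled_license_yearly(licensed_capacity_gbps=10, price_per_gbps_year=4_000)   # placeholder rate

print(f"On-demand instances:    ${on_demand:,.0f}/year")
print(f"Pooled/elastic license: ${pooled:,.0f}/year")
```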

As enterprises and service providers embark on a cloud journey, they need a simpler, more flexible licensing model and infrastructure: one that eliminates planning risk, enables predictable costs, simplifies and automates licensing for provisioned capacity, and allows capacity to be transferred from existing physical deployments to the cloud to realize savings.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud Security

Managing Security Risks in the Cloud

May 8, 2019 — by Daniel Smith


Often, I find that only a handful of organizations have a complete understanding of where they stand in today’s threat landscape. That’s a problem. If your organization does not have the ability to identify its assets, threats, and vulnerabilities accurately, you’re going to have a bad time.

A lack of visibility prevents both IT and security administrators from accurately determining their actual exposure and limits their ability to address their most significant on-premise risks. Moving computing workloads to a publicly hosted cloud service exposes organizations to new risks as well: they lose direct physical control over their workloads and relinquish many aspects of security through the shared responsibility model.

Cloud-y With a Chance of Risk

Don’t get me wrong; cloud environments make it very easy for companies to quickly scale by allowing them to spin up new resources for their user base instantly. While this helps organizations decrease their overall time to market and streamline business processes, it also makes it very difficult to track user permissions and manage resources.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

As many companies have discovered over the years, migrating workloads to a cloud-native environment presents new challenges when it comes to risks and threats.

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protections via mechanisms such as firewalls, intrusion prevention/detection systems (IPS/IDS), web application firewall (WAF) and distributed denial-of-service (DDoS) protection, secure web gateways (SWGs), etc.

However, moving workloads to the cloud has presented new risks for organizations. Typically, public clouds provide only basic protections and are mainly focused on securing the overall computing environment, leaving individual organizations’ workloads vulnerable. Because of this, deployed cloud environments are at risk not only of account compromises and data breaches but also of resource exploitation due to misconfigurations, lack of visibility or user error.

[You may also like: Ensuring Data Privacy in Public Clouds]

The Details

The typical attack profile includes:

  • Spear phishing employees
  • Compromised credentials
  • Misconfigurations and excessive permissions
  • Privilege escalation
  • Data exfiltration

The complexity and growing risk of cloud environments are placing more responsibility for writing and testing secure apps on developers as well. While most are not cloud-oriented security experts, there are many things we can do to help them and contribute to a better security posture.

[You may also like: Anatomy of a Cloud-Native Data Breach]

Recent examples of attacks include:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resource for crypto-mining.
  • js published an npm code package in their code release containing access keys to their S3 storage buckets.

Mitigating Risk

The good news is that most of these attacks can be prevented by addressing software vulnerabilities, finding misconfigurations and deploying identity access management through a workload protection service.

With this in mind, consider what your cloud workload protection solution must deliver against each stage of this attack profile.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

There are many blind spots involved in today’s large-scale cloud environments. The right cloud workload protection reduces the attack surface, detects data theft activity and provides comprehensive protection in a cloud-native solution.

As the trend of cybercriminals targeting operational technologies continues, it’s critical to reduce organizational risk by rigorously enforcing protection policies, detecting malicious activity and improving response capabilities, while providing assurance to developers.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

DDoS, DDoS Attacks

Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service

May 2, 2019 — by Dileep Mishra


Internet pipes have gotten fatter in the last decade. We have gone from expensive 1 Mbps links to 1 Gbps links, which are available at a relatively low cost. Most enterprises have at least a 1 Gbps ISP link to their data center, and many have multiple 1 Gbps links at each data center. In the past, QoS, packet shaping, application prioritization and the like used to be a big deal, but now we just throw more capacity at any potential performance problem.

However, when it comes to protecting your infrastructure from DDoS attacks, 1 Gbps, 10 Gbps or even 40 Gbps is not enough capacity. This is because in 2019, even relatively small DDoS attacks are a few Gbps in size, and the larger ones are greater than 1 Tbps.

For this reason, when security professionals design a DDoS mitigation solution, one of the key considerations is the capacity of the DDoS mitigation service. That said, it isn’t easy to figure out which DDoS mitigation service actually has the capacity to withstand the largest DDoS attacks. This is because there are a range of DDoS mitigation solutions to pick from, and capacity is a parameter most vendors can spin to make their solution appear to be flush with capacity.

Let us examine some of the solutions available and understand the difference between their announced capacity and their real ability to block a large bandwidth DDoS attack.

On-premises DDoS Mitigation Appliances 

First of all, be wary of any router, switch or network firewall that is also being positioned as a DDoS mitigation appliance. Chances are it does NOT have the ability to withstand a multi-Gbps DDoS attack.

There are a handful of companies that make purpose-built DDoS mitigation appliances. These devices are usually deployed at the edge of your network, as close as possible to the ISP link. Many of them can mitigate attacks in the tens of Gbps; however, the advertised mitigation capacity is usually based on one particular attack vector, with all attack packets being of a specific size.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

Irrespective of the vendor, don’t buy into 20/40/60 Gbps of mitigation capacity without quizzing the device’s ability to withstand a multi-vector attack, the real-world performance and its ability to pass clean traffic at a given throughput while also mitigating a large attack. Don’t forget, pps is sometimes more important than bps, and many devices will hit their pps limit first. Also be sure to delve into the internals of the attack mitigation appliance, in particular if the same CPU is used to mitigate an attack while passing normal traffic. The most effective devices have the attack “plane” segregated from the clean traffic “plane,” thus ensuring attack mitigation without affecting normal traffic.
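
To see why pps can bite before bps, the short calculation below converts a 10 Gbps flood into packets per second at different packet sizes (a standard line-rate calculation that includes per-packet Ethernet overhead).

```python
# A 10 Gbps flood of small packets generates far more packets per second
# than the same bandwidth of large packets, so a device can hit its pps
# ceiling long before its advertised bps ceiling.
def packets_per_second(bandwidth_gbps, packet_size_bytes, overhead_bytes=20):
    # ~20 bytes of preamble + inter-frame gap per Ethernet frame on the wire
    bits_per_packet = (packet_size_bytes + overhead_bytes) * 8
    return bandwidth_gbps * 1e9 / bits_per_packet

for size in (64, 512, 1500):
    print(f"10 Gbps of {size}-byte packets ≈ {packets_per_second(10, size) / 1e6:.2f} Mpps")
```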

Finally, please keep in mind that if your ISP link capacity is 1 Gbps and you have a DDoS mitigation appliance capable of 10Gbps of mitigation, you are NOT protected against a 10Gbps attack. This is because the attack will fill your pipe even before the on-premises device gets a chance to “scrub” the attack traffic.

Cloud-based Scrubbing Centers

The second type of DDoS mitigation solution that is widely deployed is a cloud-based scrubbing solution. Here, you don’t install a DDoS mitigation device at your data center. Rather, you use a DDoS mitigation service deployed in the cloud. With this type of solution, you send telemetry to the cloud service from your data center on a continuous basis, and when there is a spike that corresponds to a DDoS attack, you “divert” your traffic to the cloud service.

[You may also like: DDoS Protection Requires Looking Both Ways]

There are a few vendors who provide this type of solution but again, when it comes to the capacity of the cloud DDoS service, the devil is in the details. Some vendors simply add the “net” capacity of all the ISP links they have at all their data centers. This is misleading because they may be adding the normal daily clean traffic to the advertised capacity — so ask about the available attack mitigation capacity, excluding the normal clean traffic.

Also, chances are the provider has different capacities in different scrubbing centers, and the net capacity across all the scrubbing centers may not be a good reflection of the attack mitigation capacity available in the geography of interest (where your data center is located).

Another item to inquire about is Anycast capabilities, because this gives the provider the ability to mitigate the attack close to the source. In other words, if a 100 Gbps attack is coming from China, it will be mitigated at the scrubbing center in APAC.

[You may also like: 8 Questions to Ask in DDoS Protection]

Finally, it is important that the DDoS mitigation provider has a completely separate data path for clean traffic and does not mix clean customer traffic with attack traffic.

Content Distribution Networks

A third type of DDoS mitigation architecture is based upon leveraging a content distribution network (CDN) to diffuse large DDoS attacks. When it comes to the DDoS mitigation capacity of a CDN, however, the situation is again blurry.

Most CDNs have tens, hundreds or thousands of PoPs geographically distributed across the globe. Many simply count the net aggregate capacity across all of these PoPs and advertise that as the total attack mitigation capacity. This has two major flaws. First, it is quite likely that a real-world DDoS attack is sourced from a limited number of geographical locations, in which case the capacity that really matters is the local CDN PoP capacity, not the global capacity across all the PoPs.

[You may also like: 5 Must-Have DDoS Protection Technologies]

Second, most CDNs pass a significant amount of normal customer traffic through all of their nodes, so if a CDN service claims its attack mitigation capacity is 40 Tbps, it may be counting 30 Tbps of normal traffic in that figure. The question to ask is what the total unused capacity is, both at a net aggregate level and within a geographical region.

ISP Provider-based DDoS Mitigation

Many ISP providers offer DDoS mitigation as an add-on to the ISP pipe. It sounds like a natural choice, as they see all traffic coming into your data center even before it comes to your infrastructure, so it is best to block the attack within the ISP’s infrastructure – right?

Unfortunately, most ISPs have semi-adequate DDoS mitigation deployed within their own infrastructure and are likely to pass along the attack traffic to your data center. In fact, in some scenarios, some ISPs could actually black-hole your traffic when you are under attack to protect other customers who might be using a shared portion of their infrastructure. The questions to ask your ISP are what happens if they see a 500 Gbps attack coming towards your infrastructure, and whether there is any cap on the maximum attack traffic.

[You may also like: ISP DDoS Protection May Not Cover All of Bases]

All of the DDoS mitigation solutions discussed above are effective and are widely deployed. We don’t endorse or recommend one over the other. However, one should take any advertised attack mitigation capacity from any provider with a grain of salt. Quiz your provider on local capacity, differentiation between clean and attack traffic, any caps on attack traffic, and any SLAs. Also, carefully examine vendor proposals for any exclusions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Types & Vectors, Cloud Security, Security

Anatomy of a Cloud-Native Data Breach

April 10, 2019 — by Radware


Migrating computing resources to cloud environments opens up new attack surfaces previously unknown in the world of premise-based data centers. As a result, cloud-native data breaches frequently have different characteristics and follow a different progression than physical data breaches. Here is a real-life example of a cloud-native data breach, how it evolved and how it possibly could have been avoided.

Target Profile: A Social Media/Mobile App Company

The company is a photo-sharing social media application, with over 20 million users. It stores over 1PB of user data within Amazon Web Services (AWS), and in 2018, it was the victim of a massive data breach that exposed nearly 20 million user records. This is how it happened.

[You may also like: Ensuring Data Privacy in Public Clouds]

Step 1: Compromising a legitimate user. Frequently, the first step in a data breach is that an attacker compromises the credentials of a legitimate user. In this incident, an attacker used a spear-phishing attack to obtain an administrative user’s credentials to the company’s environment.

Step 2: Fortifying access. After compromising a legitimate user, a hacker frequently takes steps to fortify access to the environment, independent of the compromised user. In this case, the attacker connected to the company’s cloud environment through an IP address registered in a foreign country and created API access keys with full administrative access.

Step 3: Reconnaissance. Once inside, an attacker then needs to map out what permissions are granted and what actions this role allows.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Step 4: Exploitation. Once the available permissions in the account have been determined, the attacker can proceed to exploit them. Among other activities, the attacker duplicated the master user database and exposed it to the outside world with public permissions.

Step 5: Exfiltration. Finally, with customer information at hand, the attacker copied the data outside of the network, gaining access to over 20 million user records that contained personal user information.
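
The access-key creation in step 2 is exactly the kind of event that shows up in audit logs. The sketch below is a minimal example of hunting for it with boto3 and AWS CloudTrail; the expected source-IP prefix is a placeholder standing in for your own corporate egress ranges.

```python
# Minimal sketch: flag CreateAccessKey events whose source IP falls outside
# an expected allow-list (the prefix below is a placeholder, not a real range).
import json
import boto3

EXPECTED_SOURCE_PREFIXES = ("203.0.113.",)

def suspicious_key_creations(region="us-east-1"):
    cloudtrail = boto3.client("cloudtrail", region_name=region)
    findings = []
    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "CreateAccessKey"}]
    )
    for page in pages:
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            source_ip = detail.get("sourceIPAddress", "")
            if not source_ip.startswith(EXPECTED_SOURCE_PREFIXES):
                findings.append((event["EventTime"], detail.get("userIdentity", {}).get("arn"), source_ip))
    return findings
```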

Lessons Learned

Your Permissions Equal Your Threat Surface: Leveraging public cloud environments means that resources that used to be hosted inside your organization’s perimeter are now outside where they are no longer under the control of system administrators and can be accessed from anywhere in the world. Workload security, therefore, is defined by the people who can access those workloads and the permissions they have. In effect, your permissions equal your attack surface.

Excessive Permissions Are the No. 1 Threat: Cloud environments make it very easy to spin up new resources and grant wide-ranging permissions but very difficult to keep track of who has them. Such excessive permissions are frequently mischaracterized as misconfigurations but are actually the result of permission misuse or abuse. Therefore, protecting against those excessive permissions becomes the No. 1 priority for securing publicly hosted cloud workloads.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Cloud Attacks Follow Typical Progression: Although each data breach incident may develop differently, a cloud-native attack breach frequently follows a typical progression of a legitimate user account compromise, account reconnaissance, privilege escalation, resource exploitation and data exfiltration.

What Could Have Been Done Differently?

Protect Your Access Credentials: Your next data breach is a password away. Securing your cloud account credentials — as much as possible — is critical to ensuring that they don’t fall into the wrong hands.

Limit Permissions: Frequently, cloud user accounts are granted many permissions that they don’t need or never use. Exploiting the gap between granted permissions and used permissions is a common move by hackers. In the aforementioned example, the attacker used the accounts’ permissions to create new administrative-access API keys, spin up new databases, reset the database master password and expose it to the outside world. Limiting permissions to only what the user needs helps ensure that, even if the account is compromised, the damage an attacker can do is limited.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Alert on Suspicious Activities: Since cloud-native data breaches frequently have a common progression, there are certain account activities (such as port scanning, invoking previously unused APIs and granting public permissions) that can be identified. Alerting on such malicious behavior indicators (MBIs) can help prevent a data breach before it occurs.

Automate Response Procedures: Finally, once malicious activity has been identified, fast response is paramount. Automating response mechanisms can help block malicious activity the moment it is detected and stop the breach from reaching its end goal.
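
As a small example of what an automated response hook might do, the sketch below deactivates an AWS access key the moment monitoring flags it as compromised. The user and key names are hypothetical.

```python
# Sketch of an automated response step: deactivate a flagged access key.
import boto3

def quarantine_access_key(user_name: str, access_key_id: str):
    iam = boto3.client("iam")
    # Deactivate rather than delete so the key's history remains available for forensics.
    iam.update_access_key(UserName=user_name, AccessKeyId=access_key_id, Status="Inactive")
    print(f"Deactivated access key {access_key_id} for user {user_name}")

# Hypothetical trigger from a monitoring pipeline:
# quarantine_access_key("build-bot", "AKIAEXAMPLEKEY12345")
```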

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Cloud Security

Are Your DevOps Your Biggest Security Risks?

March 13, 2019 — by Eyal Arazi


We have all heard the horror tales: a negligent (or uninformed) developer inadvertently exposes AWS API keys online, only for hackers to find those keys, penetrate the account and cause massive damage.

But how common, in practice, are these breaches? Are they a legitimate threat, or just an urban legend for sleep-deprived IT staff? And what, if anything, can be done against such exposure?

The Problem of API Access Key Exposure

The problem of AWS API access key exposure refers to incidents in which developers’ API access keys to AWS accounts and cloud resources are inadvertently exposed and found by hackers.

AWS – and most other infrastructure-as-a-service (IaaS) providers – provides direct access to tools and services via Application Programming Interfaces (APIs). Developers leverage such APIs to write automatic scripts that help them configure cloud-based resources. This helps developers and DevOps save considerable time in configuring cloud-hosted resources and automating the roll-out of new features and services.

[You may also like: Ensuring Data Privacy in Public Clouds]

In order to make sure that only authorized developers are able to access those resources and execute commands on them, API access keys are used to authenticate access. Only code containing authorized credentials will be able to connect and execute.

This Exposure Happens All the Time

The problem, however, is that such access keys are sometimes left in scripts or configuration files uploaded to third-party resources, such as GitHub. Hackers are fully aware of this, and run automated scans on such repositories, in order to discover unsecured keys. Once they locate such keys, hackers gain direct access to the exposed cloud environment, which they use for data theft, account takeover, and resource exploitation.
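
The scans attackers run are not sophisticated. The sketch below shows the same idea in miniature: walk a repository and flag strings that match the well-known AWS access key formats. It is a heuristic illustration, similar in spirit to tools like git-secrets, not a replacement for them.

```python
# Toy secret scan: flag files containing strings that look like AWS credentials.
import os
import re

ACCESS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")                       # access key ID format
SECRET_KEY = re.compile(r"(?i)aws.{0,20}['\"][0-9A-Za-z/+]{40}['\"]")     # rough heuristic

def scan_repo(root="."):
    for dirpath, _, filenames in os.walk(root):
        if ".git" in dirpath.split(os.sep):
            continue
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    text = handle.read()
            except OSError:
                continue
            if ACCESS_KEY_ID.search(text) or SECRET_KEY.search(text):
                print(f"Possible AWS credential in {path}")

if __name__ == "__main__":
    scan_repo()
```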

A very common use case is for hackers to access an unsuspecting cloud account and spin-up multiple computing instances in order to run crypto-mining activities. The hackers then pocket the mined cryptocurrency, while leaving the owner of the cloud account to foot the bill for the usage of computing resources.

[You may also like: The Rise in Cryptomining]

Examples, sadly, are abundant:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resource for crypto-mining.
  • WordPress developer Ryan Heller uploaded code to GitHub which accidentally contained a backup copy of the wp-config.php file, containing his AWS access keys. Within hours, this file was discovered by hackers, who spun up several hundred computing instances to mine cryptocurrency, resulting in $6,000 of AWS usage fees overnight.
  • A student taking a Ruby on Rails course on Udemy opened up an AWS S3 storage bucket as part of the course and uploaded his code to GitHub as required. However, his code contained his AWS access keys, leading to over $3,000 of AWS charges within a day.
  • The founder of an internet startup uploaded code to GitHub containing API access keys. He realized his mistake within 5 minutes and removed those keys. However, that was enough time for automated bots to find his keys, access his account, spin up computing resources for crypto-mining and result in a $2,300 bill.
  • js published an npm code package in their code release containing access keys to their S3 storage buckets.

And the list goes on and on…

The problem is so widespread that Amazon even has a dedicated support page to tell developers what to do if they inadvertently expose their access keys.

How You Can Protect Yourself

One of the main drivers of cloud migration is the agility and flexibility that it offers organizations to speed-up roll-out of new services and reduce time-to-market. However, this agility and flexibility frequently comes at a cost to security. In the name of expediency and consumer demand, developers and DevOps may sometimes not take the necessary precautions to secure their environments or access credentials.

Such exposure can happen in a multitude of ways, including accidental exposure of scripts (such as uploading to GitHub), misconfiguration of cloud resources which contain such keys, compromise of third-party partners who have such credentials, exposure through client-side code which contains keys, targeted spear-phishing attacks against DevOps staff, and more.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Nonetheless, there are a number of key steps you can take to secure your cloud environment against such breaches:

Assume your credentials are exposed. There’s no way around this: Securing your credentials, as much as possible, is paramount. However, since credentials can leak in a number of ways, and from a multitude of sources, you should assume your credentials are already exposed or can become exposed in the future. Adopting this mindset will help you channel your efforts not just into limiting this exposure to begin with, but into limiting the damage to your organization should such exposure occur.

Limit Permissions. As I pointed out earlier, one of the key benefits of migrating to the cloud is the agility and flexibility that cloud environments provide when it comes to deploying computing resources. However, this agility and flexibility frequently come at a cost to security. One such example is granting promiscuous permissions to users who shouldn’t have them. In the name of expediency, administrators frequently grant blanket permissions to users so as to remove any hindrance to operations.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

The problem, however, is that most users never use most of the permissions they have been granted, and probably don’t need them in the first place. This leads to a gaping security hole: if any one of those users (or their access keys) becomes compromised, attackers will be able to exploit those permissions to do significant damage. Therefore, limiting those permissions, according to the principle of least privilege, will greatly help to limit potential damage if (and when) such exposure occurs.
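
On AWS, one way to measure that gap is IAM’s Access Advisor data, which records when each granted service was last used. The sketch below is a minimal, service-level example; action-level analysis requires additional tooling.

```python
# Sketch: list services a user is allowed to call but has not used recently,
# using IAM Access Advisor (service-level "last accessed" data).
import time
from datetime import datetime, timedelta, timezone
import boto3

def stale_service_permissions(user_arn: str, days_threshold: int = 90):
    iam = boto3.client("iam")
    job_id = iam.generate_service_last_accessed_details(Arn=user_arn)["JobId"]
    while True:
        result = iam.get_service_last_accessed_details(JobId=job_id)
        if result["JobStatus"] != "IN_PROGRESS":
            break
        time.sleep(2)
    cutoff = datetime.now(timezone.utc) - timedelta(days=days_threshold)
    stale = []
    for service in result["ServicesLastAccessed"]:
        last_used = service.get("LastAuthenticated")  # absent if the service was never used
        if last_used is None or last_used < cutoff:
            stale.append(service["ServiceName"])
    return stale
```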

Early Detection is Critical. The final step is to implement measures that actively monitor user activity for any potentially malicious behavior. Such malicious behavior can include first-time API usage, access from unusual locations, access at unusual times, suspicious communication patterns, exposure of private assets to the world, and more. Implementing detection measures that look for such malicious behavior indicators, correlate them and alert on potentially malicious activity will help ensure that hackers are discovered promptly, before they can do any significant damage.
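
Detecting one such indicator, first-time API usage, can be as simple as keeping a per-user baseline of calls already observed, as in the toy sketch below (the event source and persistent storage are deliberately left abstract).

```python
# Toy detector for a single malicious-behavior indicator: an API call that
# a user has never been observed making before.
from collections import defaultdict

class FirstTimeApiDetector:
    def __init__(self):
        self.seen = defaultdict(set)   # user -> API calls already observed

    def observe(self, user: str, api_call: str) -> bool:
        """Return True the first time `user` is seen invoking `api_call`."""
        is_first = api_call not in self.seen[user]
        self.seen[user].add(api_call)
        return is_first

detector = FirstTimeApiDetector()
for user, call in [("alice", "s3:GetObject"), ("alice", "s3:GetObject"), ("alice", "iam:CreateAccessKey")]:
    if detector.observe(user, call):
        print(f"First-time API usage by {user}: {call}")  # would feed an alerting/correlation pipeline
```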

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Delivery

Keeping Pace in the Race for Flexibility

February 27, 2019 — by Radware


Flexibility and elasticity. Both rank high on the corporate agenda in the age of digital transformation and IT is no exception. From the perspective of IT, virtualization and cloud computing have become the de facto standard for deployment models. They provide the infrastructure elasticity to make business more agile and higher performing and are the reason why the majority of organizations today are operating within a hybrid infrastructure, one that combines on-premise with cloud-based and/or virtualized assets.

But to deliver the elasticity promised by these hybrid infrastructures requires data center solutions that deliver flexibility. As a cornerstone for optimizing applications, application delivery controllers (ADCs) have to keep pace in the race for flexibility. The key is to ensure that your organization’s ADC fulfills key criteria to improve infrastructure planning, flexibility and operational expenses.

One License to Rule Them All

Organizations should enjoy complete agility in every aspect of the ADC service deployment. Not just in terms of capabilities, but in terms of licensing. Partner with an ADC vendor that provides an elastic, global licensing model.

Organizations often struggle with planning ADC deployments when those deployments span hybrid infrastructures and can be strapped with excess expenses by vendors when pre-deployment calculations result in over-provisioning. A global licensing model allows organizations to pay only for capacity used, be able to allocate resources as needed and add virtual ADCs at a moment’s notice to match specific business initiatives, environments and network demands.

[You may also like: Maintaining Your Data Center’s Agility and Making the Most Out of Your Investment in ADC Capacity]

The result? Dramatically simplified ADC deployment planning and a streamlined transition to the cloud.

An ADC When and Where You Need It

This licensing mantra extends to deployment options and customizations as well. Leading vendors provide the ability to deploy ADCs across on-premise and cloud-based infrastructures, allowing customers to transfer ADC capacity from physical to cloud-based data centers. Ensure you can deploy an ADC wherever and whenever it is required, at the click of a button, at no extra cost and with no purchasing complexity.

Add-on services and capabilities that go hand-in-hand with ADCs are no exception either. Web application firewalls (WAF), web performance optimization (WPO), application performance monitoring…companies should enjoy the freedom to consume only required ADC services rather than overspending on bells and whistles that will sit idle collecting dust.

Stay Ahead of the Curve

New standards for communications and cryptographic protocols can leave data center teams running amok attempting to keep IT infrastructure updated. They can also severely inhibit application delivery.

Take SSL/TLS protocols. Both are evolving standards that ensure faster encrypted communications between client and server, improved security and application resource allocation without over-provisioning. They allow IT to optimize application performance and control costs during large-scale deployments.

[You may also like: The ADC is the Key Master for All Things SSL/TLS]

Combining the flexibility of an ADC that supports the latest standards with an elastic licensing model is a winning combination, as it provides the most cost-effective alternative for consuming ADC services for any application.

Contain the Madness

The goal of any ADC is to ensure each application is performing at its best while optimizing costs and resource consumption. This is accomplished by ensuring that resource utilization is always tuned to actual business needs.

Leading ADC vendors allow ADC micro-services to be added to individual ADC instances without increasing the bill. Support for container orchestration engines such as Kubernetes allows the organization to adapt its ADC to application capacity. This also simplifies the addition of services such as SSL or WAF to individual instances or micro-services.

[You may also like: Simple to Use Link Availability Solutions]

Finding an ADC vendor that addresses all these considerations requires expanding the search beyond mainstream vendors. To drive flexibility via IT elasticity means considering all the key ADC capabilities and licensing nuances critical to managing and optimizing today’s diversified IT infrastructure. Remember these three keys when evaluating ADC vendors:

  • An ADC licensing model should be a catalyst for cutting infrastructure expenditures, not increasing them.
  • An ADC licensing model should provide complete agility in every aspect of your ADC deployment.
  • An ADC license should allow IT to simplify and automate IT operational processes.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Cloud Computing, Cloud Security, Security

Mitigating Cloud Attacks With Configuration Hardening

February 26, 2019 — by Radware


For attackers, misconfigurations in the public cloud can be exploited for a number of reasons. Typical attack scenarios include several kill chain steps, such as reconnaissance, lateral movement, privilege escalation, data acquisition, persistence and data exfiltration. These steps might be fully or partially utilized by an attacker over dozens of days until the ultimate objective is achieved and the attacker reaches the valuable data.

Removing the Mis from Misconfigurations

To prevent attacks, enterprises must harden configurations to address promiscuous permissions by applying continuous hardening checks to limit the attack surface as much as possible. The goals are to avoid public exposure of data from the cloud and reduce overly permissive access to resources by making sure communication between entities within a cloud, as well as access to assets and APIs, are only allowed for valid reasons.

For example, the private data of six million Verizon users was exposed when maintenance work changed a configuration and made an S3 bucket public. Only smart configuration hardening that applies the approach of “least privilege” enables enterprises to meet those goals.
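
A check for that particular failure mode, a storage bucket left publicly readable, is straightforward to automate. The sketch below uses boto3 to list buckets whose ACL grants access to AllUsers; it assumes credentials that can read bucket ACLs and covers only one of many possible exposure paths.

```python
# Sketch: flag S3 buckets whose ACL grants read access to everyone.
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_readable_buckets():
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        if any(grant.get("Grantee", {}).get("URI") == ALL_USERS_URI for grant in acl["Grants"]):
            public.append(bucket["Name"])
    return public

if __name__ == "__main__":
    for name in publicly_readable_buckets():
        print(f"Bucket open to AllUsers: {name}")
```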

[You may also like: Ensuring Data Privacy in Public Clouds]

The process requires applying behavior analytics methods over time, including regular reviews of permissions and a continuous analysis of each entity’s usual behavior, to ensure users have access only to what they need, nothing more. By reducing the attack surface, enterprises make it harder for hackers to move laterally in the cloud.

The process is complex and is often best managed with the assistance of an outside security partner with deep expertise and a system that combines multiple algorithms measuring activity across the network to detect anomalies and determine whether malicious intent is probable. Attackers will often stretch a kill chain attack over several days or months.

Taking Responsibility

It is tempting for enterprises to assume that cloud providers are completely responsible for network and application security to ensure the privacy of data. In practice, cloud providers provide tools that enterprises can use to secure hosted assets. While cloud providers must be vigilant in how they protect their data centers, responsibility for securing access to apps, services, data repositories and databases falls on the enterprises.


[You may also like: Excessive Permissions are Your #1 Cloud Threat]

A hardened network and meticulous application security can be a competitive advantage, helping companies build trust with their customers and business partners. Now is a critical time for enterprises to understand their role in protecting public cloud workloads as they transition more applications and data away from on-premise networks.

The responsibility to protect the public cloud is a relatively new task for most enterprises. But, everything in the cloud is external and accessible if it is not properly protected with the right level of permissions. Going forward, enterprises must quickly incorporate smart configuration hardening into their network security strategies to address this growing threat.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Cloud Computing, Cloud Security

Excessive Permissions are Your #1 Cloud Threat

February 20, 2019 — by Eyal Arazi


Migrating workloads to public cloud environments opens up organizations to a slate of new, cloud-native attack vectors that did not exist in the world of premise-based data centers. In this new environment, workload security is defined by which users have access to your cloud environment and what permissions they have. As a result, protecting against excessive permissions, and quickly responding when those permissions are abused, becomes the #1 priority for security administrators.

The Old Insider is the New Outsider

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protection, through mechanisms such as firewalls, IPS/IDS, WAF and DDoS protection, secure gateways, etc.

However, moving workloads to the cloud has led organizations (and IT administrators) to lose direct physical control over their workloads and relinquish many aspects of security through the Shared Responsibility Model. As a result, the insider of the old, premise-based world is suddenly an outsider in the new world of publicly hosted cloud workloads.

[You may also like: Ensuring Data Privacy in Public Clouds]

IT administrators and hackers now have identical access to publicly-hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat.

Workload security, therefore, is defined by the people who can access those workloads, and the permissions they have.

Your Permissions = Your Attack Surface

One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them, and what permissions they actually use.

All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have too many permissions, which they never use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes.

As a result, cloud workloads are vulnerable to data breaches (i.e., theft of data from cloud accounts), service violation (i.e., completely taking over cloud resources), and resource exploitation (such as cryptomining). Such promiscuous permissions are frequently mis-characterized as ‘misconfigurations’, but are actually the result of permission misuse or abuse by people who shouldn’t have them.

[You may also like: Protecting Applications in a Serverless Architecture]

Therefore, protecting against those promiscuous permissions becomes the #1 priority for protecting publicly-hosted cloud workloads.

Traditional Protections Provide Piecemeal Solutions

The problem, however, is that existing solutions provide incomplete protection against the threat of excessive permissions.

  • The built-in mechanisms of public clouds usually provide fairly basic protection and focus mostly on securing the overall computing environment; they are blind to activity within individual workloads. Moreover, since many companies run multi-cloud and hybrid-cloud environments, the built-in protections offered by cloud vendors will not protect assets outside of their networks.
  • Compliance and governance tools usually use static lists of best practices to analyze permissions usage. However, they will not detect (and alert to) excessive permissions, and are usually blind to activity within workloads themselves.
  • Agent-based solutions require deploying (and managing) agents on cloud-based servers, and will protect only servers on which they are installed. However, they are blind to overall cloud user activity and account context, and usually cannot protect non-server resources such as services, containers, serverless functions, etc.
  • Cloud Access Security Brokers (CASB) tools focus on protecting software-as-a-service (SaaS) applications, but do not protect infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) environments.

[You may also like: The Hybrid Cloud Habit You Need to Break]

A New Approach for Protection

Modern protection of publicly-hosted cloud environments requires a new approach.

  • Assume your credentials are compromised: Hackers acquire stolen credentials in a plethora of ways, and even the largest companies are not immune to credential theft, phishing, accidental exposure, or other threats. Therefore, defenses cannot rely solely on protection of passwords and credentials.
  • Detect excessive permissions: Since excessive permissions are so frequently exploited for malicious purposes, identifying and alerting on such permissions becomes paramount. This cannot be done just by measuring against static lists of best practices; it must be based on analyzing the gap between the permissions a user has been granted and the permissions they actually use.
  • Harden security posture: The best way of stopping a data breach is preventing it before it ever occurs. Therefore, hardening your cloud security posture and eliminating excessive permissions and misconfigurations guarantees that even if a user’s credentials become compromised, then attackers will not be able to do much with those permissions.
  • Look for anomalous activities: A data breach is not one thing going wrong, but a whole list of things going wrong. Most data breaches follow a typical progression, which can be detected and stopped in time – if you know what you’re looking for. Monitoring for suspicious activity in your cloud account (for example, anomalous usage of permissions) will help identify malicious activity in time and stop it before user data is exposed.
  • Automate response: Time is money, and even more so when it comes to preventing exposure of sensitive user data. Automated response mechanisms allow you to respond faster to security incidents, and block-off attacks within seconds of detection.

[You may also like: Automating Cyber-Defense]

Radware’s Cloud Workload Protection Service

Radware is extending its line of cloud-based security services to provide an agentless, cloud-native solution for comprehensive protection of workloads hosted on AWS. Radware’s solution protects both the overall security posture of your AWS cloud account, as well as individual cloud workloads, protecting against cloud-native attack vectors.

Radware’s solution addresses the core problem of cloud-native excessive permissions by analyzing the gap between granted and used permissions and providing smart hardening recommendations to harden configurations. Radware uses advanced machine-learning algorithms to identify malicious activities within your cloud account, as well as automated response mechanisms to block such attacks. This helps customers prevent data theft, protect sensitive customer data, and meet compliance requirements.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now