
Cloud Computing | Cloud Security

Ensuring Data Privacy in Public Clouds

January 24, 2019 — by Radware


Most enterprises spread data and applications across multiple cloud providers, typically referred to as a multicloud approach. While it is in the best interest of public cloud providers to offer network security as part of their service offerings, every public cloud provider utilizes different hardware and software security policies, methods and mechanisms, creating a challenge for the enterprise to maintain the exact same policy and configuration across all infrastructures. Public cloud providers typically meet basic security standards in an effort to standardize how they monitor and mitigate threats across their entire customer base. Seventy percent of organizations reported using public cloud providers with varied approaches to security management. Moreover, enterprises typically prefer neutral security vendors instead of over-relying on public cloud vendors to protect their workloads. As the multicloud approach expands, it is important to centralize all security aspects.

When Your Inside Is Out, Your Outside Is In

Moving workloads to publicly hosted environments leads to new threats, previously unknown in the world of premise-based computing. Computing resources hosted inside an organization’s perimeter are more easily controlled. Administrators have immediate physical access, and the workload’s surface exposure to insider threats is limited. When those same resources are moved to the public cloud, they are no longer under the direct control of the organization. Administrators no longer have physical access to their workloads. Even the most sensitive configurations must be done from afar via remote connections. Putting internal resources in the outside world results in a far larger attack surface with long, undefined boundaries of the security perimeter.

In other words, when your inside is out, then your outside is in.

[You may also like: Ensuring a Secure Cloud Journey in a World of Containers]

External threats that could previously be easily contained can now strike directly at the heart of an organization’s workloads. Hackers can have identical access to workloads as do the administrators managing them. In effect, the whole world is now an insider threat.

In such circumstances, restricting the permissions to access an organization’s workloads and hardening its security configuration are key aspects of workload security.

Poor Security Hygiene Leaves You Exposed

Cloud environments make it very easy to grant access permissions and very difficult to keep track of who has them. With customer demands constantly increasing and development teams put under pressure to quickly roll out new enhancements, many organizations spin up new resources and grant excessive permissions on a routine basis. This is particularly true in many DevOps environments where speed and agility are highly valued and security concerns are often secondary.

Over time, the gap between the permissions that users have and the permissions that they actually need (and use) becomes a significant crack in the organization’s security posture. Promiscuous permissions leave workloads vulnerable to data theft and resource exploitation should any of the users who have access permissions to them become compromised. As a result, misconfiguration of access permissions (that is, giving permissions to too many people and/or granting permissions that are overly generous) becomes the most urgent security threat that organizations need to address in public cloud environments.
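The permissions gap can be made concrete with a few lines of code. Below is a minimal sketch, with hypothetical role and permission strings, of the comparison an audit would perform between permissions a role has been granted and permissions it has actually exercised:

```python
def permission_gap(granted, used):
    """Return granted-but-unused permissions: the 'crack' described
    above, and the first candidates for revocation."""
    return set(granted) - set(used)

# Hypothetical deploy role that accumulated broad rights over time.
granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteBucket",
           "ec2:RunInstances", "iam:PassRole"}
used = {"s3:GetObject", "s3:PutObject"}

excessive = permission_gap(granted, used)
print(sorted(excessive))
```

In practice the "used" set would come from access logs or a cloud provider's last-accessed reports, but the principle is the same: anything in the gap should be reviewed and, usually, revoked.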

[You may also like: Considerations for Load Balancers When Migrating Applications to the Cloud]

The Glaring Issue of Misconfiguration

Public cloud providers offer identity access management tools for enterprises to control access to applications, services and databases based on permission policies. It is the responsibility of enterprises to deploy security policies that determine what entities are allowed to connect with other entities or resources in the network. These policies are usually a set of static definitions and rules that control what entities are valid to, for example, run an API or access data.
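To illustrate what "static definitions and rules" look like in practice, here is a sketch of a checker that scans an IAM-style policy document for statements granting wildcard actions or resources. The policy shown is a simplified, hypothetical example; real policy evaluation semantics are considerably richer.

```python
import json

def risky_statements(policy_json):
    """Flag Allow statements whose action or resource is a wildcard,
    a common source of excessive permissions. Illustrative only."""
    policy = json.loads(policy_json)
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if any("*" in a for a in actions) or "*" in resources:
            findings.append(stmt.get("Sid", "<no-sid>"))
    return findings

# Hypothetical policy: one scoped statement, one dangerously broad one.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AppRead", "Effect": "Allow",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-bucket/*"},
        {"Sid": "AdminAll", "Effect": "Allow",
         "Action": "*", "Resource": "*"},
    ],
})
print(risky_statements(policy))
```

Running a check like this against every policy before deployment is one way to catch the misconfigurations discussed next before they reach production.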

One of the biggest threats to the public cloud is misconfiguration. If permission policies are not managed properly by an enterprise with the tools offered by the public cloud provider, excessive permissions will expand the attack surface, enabling hackers to exploit a single entry point to gain access to the entire network.

Moreover, a common misconfiguration scenario arises when a DevOps engineer uses predefined permission templates, called managed permission policies, where the standardized policy being granted may contain wider permissions than needed. The result is excessive permissions that are never used. Misconfigurations can cause accidental exposure of data, services or machines to the internet, as well as leave doors wide open for attackers.

[You may also like: The Hybrid Cloud Habit You Need to Break]

For example, an attacker can steal data by using the security credentials of a DevOps engineer gathered in a phishing attack. The attacker leverages the privileged role to take a snapshot of elastic block storage (EBS) to steal data, then shares the EBS snapshot and data on an account in another public network without installing anything. The attacker is able to leverage a role with excessive permissions to create a new machine at the beginning of the attack and then infiltrate deeper into the network to share AMI and RDS snapshots (Amazon Machine Images and Relational Database Service, respectively), and then unshare resources.
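One way to catch this attack pattern is to watch audit logs for snapshots being shared outside the organization. The sketch below scans simplified CloudTrail-style event records; the event shape and account IDs are deliberately simplified and hypothetical, so treat this as an illustration of the detection logic rather than a production parser.

```python
TRUSTED_ACCOUNTS = {"111111111111"}  # hypothetical org-owned account IDs

def external_snapshot_shares(events):
    """Return (snapshot_id, account) pairs for EBS snapshots shared
    with accounts outside the organization -- the exfiltration step
    in the attack described above."""
    alerts = []
    for e in events:
        if e.get("eventName") != "ModifySnapshotAttribute":
            continue
        params = e.get("requestParameters", {})
        if params.get("attributeType") != "CREATE_VOLUME_PERMISSION":
            continue
        for account in params.get("addAccounts", []):
            if account not in TRUSTED_ACCOUNTS:
                alerts.append((params.get("snapshotId"), account))
    return alerts

# Simplified sample events: one benign read, one suspicious share.
events = [
    {"eventName": "DescribeSnapshots", "requestParameters": {}},
    {"eventName": "ModifySnapshotAttribute",
     "requestParameters": {"snapshotId": "snap-0abc",
                           "attributeType": "CREATE_VOLUME_PERMISSION",
                           "addAccounts": ["999999999999"]}},
]
print(external_snapshot_shares(events))
```

A share to an account outside the trusted set is exactly the signal that should trigger an investigation, since legitimate workflows rarely share snapshots cross-account without change control.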

Year over year in Radware’s global industry survey, the most frequently mentioned security challenges encountered when migrating applications to the cloud are governance issues, followed by a skills shortage and the complexity of managing security policies. All contribute to the high rate of excessive permissions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Cloud Computing | Cloud Security

Now or Never: Financial Services and the Cloud

January 9, 2019 — by Sandy Toplis


I will get straight to the point: The time is right for the financial services (FS) industry to leverage the power of the cloud. It dovetails quite nicely with retail banking’s competitive moves to provide users with more flexible choices, banking simplification and an improved, positive customer experience. Indeed, I am encouraged that roughly 70% of my financial services customers are looking to move more services to the cloud, and approximately 50% have a cloud-first strategy.

This is a departure from the FS industry’s history with the public cloud. Historically, it has shied away from cloud adoption—not because it’s against embracing new technologies for business improvement, but because it is one of the most heavily regulated and frequently scrutinized industries in terms of data privacy and security. Concerns regarding the risk of change and impact to business continuity, customer satisfaction, a perceived lack of control, data security, and costs have played a large role in the industry’s hesitation to transition to the cloud.

[You may also like: Credential Stuffing Campaign Targets Financial Services]

Embracing Change

More and more, banks are moving applications to the cloud to take advantage of the scalability, lower capital costs, ease of operations and resilience offered by cloud solutions. Because data residency requirements differ from jurisdiction to jurisdiction, banks need to choose solutions that allow them exacting control over transient and permanent data flows. Solutions flexible enough to be deployed in a hybrid mode, on public cloud infrastructure as well as private infrastructure, are key to allowing banks to leverage existing investments while meeting these strict regulatory requirements.

[You may also like: The Hybrid Cloud Habit You Need to Break]

Although the rate of cloud adoption within the financial services industry still has much room for growth, the industry is addressing many of its concerns and putting to bed the myths surrounding cloud-based security. Indeed, multi-cloud adoption is proliferating, and it’s becoming clear that banks are increasingly turning to the cloud and to new financial technology (FinTech). In some cases, banks are already using cloud services for non-core and non-critical uses such as HR, email, customer analytics, customer relationship management (CRM), and development and testing.

Interestingly, smaller banks have more readily made the transition, moving entire core services (treasury, payments, retail banking, enterprise data) to the cloud. As these smaller institutions and larger banks alike embrace new FinTech, their service offerings will stand out in the competitive landscape, helping to propel the digital transformation race.

What’s Driving the Change?

There are several key drivers for the adoption of multi (public) cloud-based services for the FS industry, including:

  • Risk mitigation in cloud migration. Many companies operate a hybrid security model, so the cloud environment works adjacent to existing infrastructure. Organisations are also embracing the hybrid model to deploy cloud-based innovation sandboxes to rapidly validate consumers’ acceptance of new services without disrupting their existing business. The cloud can help to lower risks associated with traditional infrastructure technology where capacity, redundancy and resiliency are operational concerns.  From a regulatory perspective, the scalability of the cloud means that banks can scan potentially thousands of transactions per second, which dramatically improves the industry’s ability to combat financial crime, such as fraud and money laundering.
  • Security. Rightly so, information security remains the number one concern for CISOs. When correctly deployed, cloud applications are no less secure than traditional in-house deployments. What’s more, the flexibility to scale in a cloud environment can empower banks with more control over security issues.
  • Agile innovation and competitive edge. Accessing the cloud can increase a bank’s ability to innovate by enhancing agility, efficiency and productivity. Gaining agility with faster onboarding of services (from the traditional two-to-three weeks to implement a service to almost instantly in the cloud) gives banks a competitive edge: they can launch new services to the market quicker and with security confidence. Additionally, the scaling up (or down) of services is fast and reliable, which can help banks to reallocate resources away from the administration of IT infrastructure, and towards innovation and fast delivery of products and services to markets.
  • Cost benefits. As FS customers move from on-prem to cloud environments, costs shift from capex to opex. The cost savings of public cloud solutions are significant, especially given the reduction in initial capex requirements for traditional IT infrastructure. During periods of volumetric traffic, the cloud can allow banks to manage computing capacity more efficiently. And when the cloud is adopted for risk mitigation and innovation purposes, cost benefits arise from the resultant improvements in business efficiency. According to KPMG, shifting back-office functions to the cloud allows banks to achieve savings of between 30 and 40 percent.

[You may also like: The Executive Guide to Demystify Cybersecurity]

A Fundamental Movement

Cloud innovation is fast becoming a fundamental driver in global digital disruption and is increasingly gaining more prominence and cogency with banks. In fact, Gartner predicts that by 2020, a corporate no-cloud policy will become as rare as a no-internet policy is today.

Regardless of the size of your business—be it Retail Banking, Investment Banking, Insurance, Forex, Building Societies, etc.—protecting your business from cybercriminals and their ever-changing means of “getting in” is essential. The bottom line: whichever cloud deployment best suits your business, it will be considerably more scalable and elastic than hosting in-house.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery | Cloud Computing

Ensuring a Secure Cloud Journey in a World of Containers

December 11, 2018 — by Prakash Sinha


As organizations transition to the cloud, many are adopting microservice architecture to implement business applications as a collection of loosely coupled services, in order to enable isolation, scale, and continuous delivery for complex applications. However, you have to balance the complexity that comes with such a distributed architecture with the application security and scale requirements, as well as time-to-market constraints.

Many application architects choose application containers as the tool of choice to implement the microservices architecture. Among their many advantages, such as a small resource footprint, fast instantiation and better resource utilization, containers provide a lightweight run time and a consistent environment for the application, from development to testing to production deployment.

That said, adopting containers doesn’t remove the traditional security and application availability concerns; application vulnerabilities can still be exploited. Recent ransomware attacks highlight the need to secure against DDoS and application attacks.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Security AND availability should be top-of-mind concerns in the move to adopt containers.

Let Your Load Balancer Do the Heavy Lifting

For many years, application delivery controllers (ADCs), a.k.a. load balancers, have been integral to meeting the availability and many of the security requirements of applications, whether deployed on premises or in the cloud.

Layered security is a MUST: In addition to using built-in tools for container security, traditional approaches to security are still relevant. Many container-deployed services are composed using Application Programming Interfaces (APIs). Since these services are accessible over the web, they are open to malicious attacks.

As hackers probe network and application vulnerabilities to gain access to sensitive data, the prevention of unauthorized access needs to be multi-pronged as well:

  • Preventing denial-of-service attacks
  • Running routine vulnerability assessment scans on container applications
  • Scanning application source code for vulnerabilities and fixing them
  • Preventing malicious access by validating users before they can access a container application
  • Preventing rogue application ports/applications from running
  • Securing data at rest and in motion
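The user-validation prong of the list above can be sketched in a few lines. This is a minimal illustration of checking a signed token before admitting a request to a container service; the secret and user IDs are hypothetical, and a real deployment would use an identity provider and a proper secret store rather than a hard-coded key.

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # illustrative only; never hard-code real secrets

def sign(user_id: str) -> str:
    """Issue an HMAC token for a user (what a login service would do)."""
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def is_valid(user_id: str, token: str) -> bool:
    """Validate the presented token before granting access to the
    container application. compare_digest avoids timing leaks."""
    return hmac.compare_digest(sign(user_id), token)
```

A gateway or sidecar would call `is_valid` on every request, rejecting anything unauthenticated before it ever reaches the service.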

Since ADCs terminate user connections, scrubbing the data with a web application firewall (WAF) will help identify and prevent malicious attacks, while authenticating users against an identity management system to prevent unauthorized access to a container service.

Availability is not just a nice-to-have: A client interacting with a microservices-based application does not need to know about the instance that is serving it. This is precisely the isolation and decoupling that a load balancer provides, ensuring availability if one of the instances becomes unavailable.

Allocating and managing resources manually is not an option: Although a container-based application has many benefits, it is a challenge to quickly roll out, troubleshoot and manage these microservices. Manually allocating resources for applications and re-configuring the load balancer to incorporate newly instantiated services is inefficient and error-prone, and it becomes problematic at scale, especially for services with short lifetimes. Automating the deployment of services quickly becomes a necessity. Automation tools transform the traditional manual approach into simpler automated scripts and tasks that do not require deep familiarity or expertise.
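The register/deregister automation described above can be sketched as a small backend pool that services join as they start and leave on shutdown, so the load balancer configuration never has to be edited by hand. All addresses are hypothetical; a real implementation would add health checks and concurrency control.

```python
class BackendPool:
    """Minimal round-robin pool that containers register with at
    startup and deregister from at shutdown."""

    def __init__(self):
        self._backends = []
        self._next = 0

    def register(self, addr):
        if addr not in self._backends:
            self._backends.append(addr)

    def deregister(self, addr):
        if addr in self._backends:
            self._backends.remove(addr)
            self._next = 0  # reset rotation after membership change

    def pick(self):
        """Choose the next backend in round-robin order."""
        if not self._backends:
            raise RuntimeError("no healthy backends")
        addr = self._backends[self._next % len(self._backends)]
        self._next += 1
        return addr

pool = BackendPool()
pool.register("10.0.0.1:8080")  # a service instance announces itself
pool.register("10.0.0.2:8080")
```

In practice this bookkeeping lives in a service registry or the orchestrator's API, but the contract is the same: instances announce themselves, and traffic distribution adjusts automatically.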

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

If you don’t monitor, you won’t know: When deploying microservices that may affect many applications, proactive monitoring, analytics and troubleshooting are critical to catching problems before they become business disruptions. Monitoring may include information about each microservice such as latency, security issues, service uptime and access problems.
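Two of the monitoring signals mentioned above, latency and uptime, reduce to simple arithmetic over raw samples. A sketch (the sample values are illustrative, and the percentile uses the nearest-rank method):

```python
import math

def p95(samples_ms):
    """95th-percentile latency (nearest-rank) from raw request timings."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

def uptime(checks):
    """Fraction of health checks that succeeded."""
    return sum(checks) / len(checks)
```

Tracking p95 rather than the average matters because tail latency is what users actually feel, and a service can look healthy on average while a slow 5% quietly breaches the SLA.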

Businesses must support complex IT architectures for application delivery in a secure manner. Configuring, deploying and maintaining cross-domain microservices can be error-prone, costly and time-consuming. Organizations should ensure security through a layered approach to security controls. To simplify the configuration and management of these microservices, IT should adopt automation, visibility, analytics and orchestration best practices and tools that fit their agile and DevOps processes. The goal is to keep the secure, controlled environment mandated by IT without losing the development agility and automation that DevOps requires.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery | Cloud Computing | Cloud Security

Embarking on a Cloud Journey: Expect More from Your Load Balancer

November 13, 2018 — by Prakash Sinha


Many enterprises are in transition to the cloud: building their own private cloud, managing a hybrid environment (both physical and virtualized), or deploying on a public cloud. In addition, there is a shift from infrastructure-centric environments to application-centric ones. In a fluid development environment of continuous integration and continuous delivery, where services are frequently added or updated, the new paradigm requires support for needs across multiple environments and across many stakeholders.

When development teams choose unsupported cloud infrastructure without IT involvement, the network team loses visibility and control over security and costs, yet remains accountable for the service level agreement (SLA) once the developed application goes live.

The world is changing. So should your application delivery controller.

Application delivery and load balancing technologies have been the strategic component providing availability, optimization, security and latency reduction for applications. In order to enable seamless migration of business critical applications to the cloud, the same load balancing and application delivery infrastructure must now address the needs of continuous delivery/integration, hybrid and multi-cloud deployments.

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

The objective here is not to block agile development or the use of innovative services, but to have a controlled environment that gives the organization the best of both DevOps and IT: a secure, controlled environment that still enables agility. The benefits speak for themselves:

Reduced shadow IT initiatives
To remain competitive, every business needs innovative technology consumable by the end‐user. Oftentimes, employees are driven to use shadow IT services because going through approval processes is cumbersome, and using available approved technology is complex to learn and use. If users cannot get quick service from IT, they will go to a cloud service provider for what they need. Sometimes this results in short‐term benefit, but may cause issues with organizations’ security, cost controls and visibility in the long-term. Automation and self-service address CI/CD demands and reduce the need for applications teams to acquire and use their own unsupported ADCs.

Flexibility and investment protection at a predictable cost
Flexible licensing is one of the critical elements to consider. As you move application delivery services and instances to the cloud when needed, you should be able to reuse existing licenses across a hybrid deployment. Many customers initially deploy on public cloud but cost unpredictability becomes an issue once the services scale with usage.

[You may also like: Load Balancers and Elastic Licensing]

Seamless integration with an SDDC ecosystem
As you move to private or public cloud, you should be able to reuse your investment in the orchestration system of your environment. Many developers are not used to networking or security nomenclature. Using self-service tools with which developers are familiar quickly becomes a requirement.

The journey from a physical data center to the cloud may sometimes require investments in new capabilities to enable migration to the new environment. If application delivery controller capacity is no longer required in the physical data center, it can be automatically reassigned. Automation and self-service applications address the needs of various stakeholders, as well as the flexible licensing and cost control aspects of this journey.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery | Cloud Computing

Digital Transformation – Take Advantage of Application Delivery in Your Journey

October 31, 2018 — by Prakash Sinha


The adoption of new technologies is accelerating business transformation. In its essence, the digital transformation of businesses uses technologies to drive significant improvement in process effectiveness.

Cloud computing is one of the core technologies for Digital Transformation

Increasing maturity of cloud-based infrastructure enables organizations to deploy business-critical applications in public and private clouds. According to a new forecast from the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, total spending on IT infrastructure for deployment in cloud environments is expected to total $46.5 billion in 2017, with year-over-year growth of 20.9%. Public cloud data centers will account for the majority of this spending, 65.3%, growing at the fastest annual rate of 26.2%.

Many enterprises are in the midst of this transition to the cloud, whether moving to a public cloud, building their own private cloud or managing a hybrid deployment. In this fluid environment, where new services are being frequently added and old ones updated, the new paradigm requires support for needs across multiple environments and across many constituencies – an IT administrator, an application developer, DevOps and tenants.

[You might also like: Optimizing Multi-Cloud, Cross-DC Web Apps and Sites]

Nobody Said It Was Easy!

However, migrating applications to the cloud is not easy. The flexibility and cost benefits that drive the shift to the cloud also present many challenges: security, business continuity, application availability, latency reduction, visibility, SLA guarantees and isolation of resources. Other aspects that require thought include licensing, lock-in with a cloud service provider, architecture for hybrid deployment, shadow IT, automation, user access, user privacy and compliance needs.

One of the main challenges for enterprises moving to a cloud infrastructure is how to guarantee a consistent quality of experience to consumers across multiple applications, many of which are business critical, developed using legacy technologies and still hosted on-premises.

Along with quality of experience, organizations need to look at security policies. Sometimes policies require integration with a cloud service provider’s infrastructure, or require new capabilities to complement on-premises architecture while addressing denial of service, application security and compliance for the new attack surface exposed by applications in the cloud.

Convenience and productivity improvements are often the initial drivers for adopting IT services in the cloud. One way to address security and availability concerns for the enterprise embarking on the cloud journey is to ensure that the security and availability are also included as part of IT self-service, orchestration and automation systems, without requiring additional effort from those driving adoptions of cloud-based IT applications.

The World of Application Delivery Has Changed to Adapt!

Application delivery and load balancing technologies have been the strategic components providing availability, optimization, security and latency reduction for applications. In order to enable seamless migration of business-critical applications to the cloud, the same load balancing and application delivery infrastructure has to evolve to address the needs of continuous delivery/integration, hybrid and multi-cloud deployments.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery | Cloud Computing | SSL

LBaaS: Load Balancing for the Age of Cloud

February 9, 2015 — by Jim Frey

Jim Frey is Vice President of Research, Network Management for Enterprise Management Associates (EMA) and is a featured guest blogger.

As organizations embrace the cloud, whether as enterprises looking to transform IT or providers expanding and enriching service offerings, traditional IT/networking technologies must be adapted to fit the form of the new reality. This is certainly true of load balancing (LB) as well as application delivery control (ADC), which has undergone virtualization and software programmability/orchestration evolutions.

Application Delivery | Cloud Computing

Load balancing and application delivery in VMware vCloud. How to best serve your applications?

January 21, 2014 — by Lior Cohen

Whether you are building a private cloud or deploying applications over a public cloud infrastructure, you will be using application delivery functionality such as load balancing to scale out application workloads. The level of required application delivery functionality differs between environments. While cutting-edge web applications are often designed using REST-oriented architectures that require the load balancer only to distribute traffic evenly, typical applications rely heavily on application delivery functionality such as SSL offloading, L7 policies and session stickiness to operate properly. Furthermore, given the strategic placement of ADCs in the application stack, in front of the application, security and performance optimization are natural extensions to effectively optimize application delivery.
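The session-stickiness feature mentioned above can be sketched as hashing the client address so the same client consistently lands on the same backend. The addresses below are hypothetical, and real ADCs typically also use cookies and consistent hashing so the mapping survives pool changes more gracefully.

```python
import hashlib

def sticky_backend(client_ip, backends):
    """Deterministically map a client to one backend so its session
    state stays on a single server. Stable only while the backend
    list is unchanged -- a simplification of what real ADCs do."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(backends)
    return backends[index]

backends = ["10.0.0.1:443", "10.0.0.2:443", "10.0.0.3:443"]
chosen = sticky_backend("203.0.113.7", backends)
```

The design choice here is the trade-off the paragraph alludes to: pure round-robin spreads load most evenly, while stickiness sacrifices some balance so applications that keep per-session state server-side still work.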

Application Delivery | Cloud Computing | Service Provider

The “As-a-Service” Revolution: How Cloud-based Services are Reshaping the Enterprise

July 15, 2013 — by Travis Volk

The growth of cloud-based services is challenging customers to second-guess their brick and mortar investments. From price and performance to state of the art technology, enterprises are ready to return the complex task of managing advanced services to cloud providers. This shift not only complements the mobile aspect of employee, consumer and partner interaction in today’s marketplace, but also allows enterprises to focus more on their core business, enabling further growth.

Application Delivery | Cloud Computing | Data Center | SDN

SDN Focus Shifts to Network Services

March 4, 2013 — by Brad Casemore

Brad Casemore is Research Director for IDC’s Datacenter Networks practice and is a featured guest blogger.

Cloud computing requires a datacenter network and corresponding network services that are more flexible and responsive than traditional datacenter network architectures. Responding to that need, software-defined networking (SDN) has emerged as a means of realizing network virtualization and network programmability, while also ensuring that the network can deliver the virtualized network and security services that are essential to the provision of scalable multi-tenant services.