
Application Delivery

Application SLA: Knowing Is Half the Battle

April 4, 2019 — by Radware


Applications have come to define the digital experience. They empower organizations to create new customer-friendly services, unlock data and content and deliver it to users at the time and on the device they desire, and provide a competitive differentiator.

Fueling these applications is the “digital core,” a vast plumbing infrastructure that includes networks, data repositories, Internet of Things (IoT) devices and more. If applications are a cornerstone of the digital experience, then managing and optimizing the digital core is the key to delivering these apps to the digitized user. When applications aren’t delivered efficiently, users suffer a degraded quality of experience (QoE), resulting in a tarnished brand, diminished customer loyalty and lost revenue.

Application delivery controllers (ADCs) are ideally situated to ensure QoE, regardless of the operational scenario, by allowing IT to actively monitor and enforce application SLAs. The key is to understand the role ADCs play and the capabilities required to ensure the digital experience across various operational scenarios.

Optimize Normal Operations

Under normal operational conditions, ADCs optimize application performance, control and allocate resources to those applications and provide early warnings of potential issues.

[You may also like: 6 Must-Have Metrics in Your SLA]

For starters, any ADC should deliver web performance optimization (WPO) capabilities to turbocharge the performance of web-based applications, transforming front-end optimization from a lengthy and complex process into an automated, streamlined function. Caching, compression, SSL offloading and TCP optimization are all key capabilities that enable faster communication between client and server while offloading CPU-intensive tasks from the application server.
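
To make the caching and compression pieces concrete, here is a minimal Python sketch of an edge proxy that caches upstream responses and gzips them for clients that accept it. The upstream address, port and cache TTL are illustrative placeholders, not a description of any particular ADC's implementation.

```python
# Minimal sketch of two WPO functions performed in front of an application
# server: response caching and gzip compression. Upstream address is assumed.
import gzip
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM = "http://127.0.0.1:8080"   # assumed backend application server
CACHE_TTL = 30                       # seconds to serve a cached copy
_cache = {}                          # path -> (timestamp, body)

class EdgeProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        now = time.time()
        hit = _cache.get(self.path)
        if hit and now - hit[0] < CACHE_TTL:
            body = hit[1]                      # cache hit: skip the origin
        else:
            with urllib.request.urlopen(UPSTREAM + self.path) as resp:
                body = resp.read()
            _cache[self.path] = (now, body)    # refresh the cache

        self.send_response(200)
        # Compress only if the client advertises gzip support.
        if "gzip" in self.headers.get("Accept-Encoding", ""):
            body = gzip.compress(body)
            self.send_header("Content-Encoding", "gzip")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8000), EdgeProxy).serve_forever()
```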

Along those same lines, an ADC can serve as a “bridge” between the web browsers that render web-based applications and the backend servers that host them. For example, HTTP/2 is now a standard network protocol, yet many backend servers still don’t support it; an ADC can act as a gateway between HTTP/2-capable browsers and those servers, optimizing performance to meet application SLAs.

Prevent Outages

Outages are few and far between, but when they occur, maintaining business continuity through server load balancing, cloud elasticity and disaster recovery is critical. ADCs play a central role across all three, executing and automating these processes during a crisis.

[You may also like: Security Pros and Perils of Serverless Architecture]

If an application server fails, server load balancing should automatically redirect the client to another server. Likewise, if an edge router or the network connection to the data center fails, an ADC should automatically redirect traffic to another data center, ensuring the web client can always reach the application server even when there is a point of failure in the network infrastructure.
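
As an illustration of the health-check logic behind this failover behavior, the following Python sketch probes a list of backends and routes new connections to the first one that answers. The addresses are placeholders, and a real ADC layers application-level checks, session persistence and global server load balancing on top of this.

```python
# Minimal sketch of health-check-driven failover: probe each backend and
# steer traffic away from servers (or data centers) that stop responding.
# Backend addresses below are illustrative placeholders.
import socket

BACKENDS = [
    ("10.0.1.10", 443),   # primary application server (assumed)
    ("10.0.2.10", 443),   # standby server in a second data center (assumed)
]

def is_healthy(host: str, port: int, timeout: float = 1.0) -> bool:
    """A TCP connect probe; real ADCs also run HTTP/application-level checks."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend():
    """Return the first healthy backend, mimicking active/standby failover."""
    for host, port in BACKENDS:
        if is_healthy(host, port):
            return host, port
    raise RuntimeError("no healthy backend available")

if __name__ == "__main__":
    print("routing new connections to", pick_backend())
```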

Minimize Degradation

Application SLA issues are most often the result of network degradation. The ecommerce industry is a perfect example. A sudden increase in network traffic during the holiday season can result in SLA degradation.

Leveraging server load balancing, ADCs provide elasticity by provisioning resources on demand. Additional servers are added to the network infrastructure to maintain QoE and, after the spike has passed, returned to an idle state for use elsewhere. Virtualized ADCs provide an additional benefit: scalability and isolation between vADC instances at the fault, management and network levels.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Finally, cyberattacks are the silent killers of application performance, as they typically create degradation. ADCs play an integral role in protecting applications to maintain SLAs at all times. They can prevent attack traffic from entering a network’s LAN and keep volumetric attack traffic from saturating the Internet pipe.

The ADC should be equipped with security capabilities that allow it to be integrated into the security/DDoS mitigation framework. This includes the ability to inspect traffic and network health parameters so the ADC serves as an alarm system, signaling attack information to a DDoS mitigation solution. Other integrated security capabilities should include web application firewall (WAF) integration, the ability to decrypt/encrypt SSL traffic and device/user fingerprinting.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Cloud Computing, Cloud Security, Security

Security Pros and Perils of Serverless Architecture

March 14, 2019 — by Radware


Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. This cloud computing model can drive cost-efficiencies, increase agility and enable organizations to focus on the essential aspects of software development. While serverless architecture offers some security advantages, trusting that a cloud provider has security fully covered can be risky.

That’s why it’s critical to understand what serverless architectures mean for cyber security.

What Serverless Means for Security

Many assume that serverless is more secure than traditional architectures. This is partly true. As the name implies, serverless architecture does not require server provisioning. Deep under the hood, however, these REST API functions still run on a server, which in turn runs an operating system and uses different layers of code to parse API requests. As a result, the total attack surface becomes significantly larger.

When exploring whether and to what extent to use serverless architecture, consider the security implications.

[You may also like: Protecting Applications in a Serverless Architecture]

Security: The Pros

The good news is that responsibility for the operating system, web server and other software components and programs shifts from the application owner to the cloud provider, who should apply patch management policies across the different software components and implement hardening policies. Most common vulnerabilities should be addressed via enforcement of such security best practices. However, what would be the answer for a zero-day vulnerability in these software components? Consider Shellshock, which allowed an attacker to gain unauthorized access to a computer system.

Meanwhile, denial-of-service attacks designed to take down a server become a fool’s errand. FaaS servers are only provisioned on demand and then discarded, thereby creating a fast-moving target. Does that mean you no longer need to think about DDoS? Not so fast. While DDoS attacks may not cause a server to go down, they can drive up an organization’s tab due to an onslaught of requests. Additionally, function scale is limited and execution time is capped, so launching a massive DDoS attack can have unpredictable impact.
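
To see why this “denial of wallet” effect matters, here is a rough Python sketch of the cost arithmetic. The unit prices are assumptions used purely for illustration and should be replaced with your provider's actual FaaS pricing.

```python
# Back-of-the-envelope sketch: a request flood against FaaS doesn't take a
# server down, it runs up the bill. Unit prices below are assumed values
# for illustration, not a quote of any provider's pricing.
PRICE_PER_MILLION_REQUESTS = 0.20      # assumed USD per 1M invocations
PRICE_PER_GB_SECOND = 0.0000166667     # assumed USD per GB-second of compute

def flood_cost(requests: int, mem_gb: float, avg_seconds: float) -> float:
    """Estimated cost of an invocation flood at the assumed unit prices."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * mem_gb * avg_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# 100 million junk invocations of a 512 MB function running ~200 ms each:
print(f"${flood_cost(100_000_000, 0.5, 0.2):,.2f}")   # roughly $187 at these rates
```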

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Finally, the very nature of FaaS makes it more challenging for attackers to exploit a server and wait until they can access more data or do more damage. There is no persistent local storage that may be accessed by the functions. Counting on storing attack data in the server is more difficult but still possible. With the “ground” beneath them continually shifting—and containers re-generated—there are fewer opportunities to perform deeper attacks.

Security: The Perils

Now, the bad news: serverless computing doesn’t eradicate all traditional security concerns. Code is still being executed and will always be potentially vulnerable. Application-level vulnerabilities can still be exploited whether they are inherent in the FaaS infrastructure or in the developer function code.

Whether delivered as FaaS or simply built on a web infrastructure, REST API functions are more challenging to secure than a standard web application and introduce security concerns of their own. API vulnerabilities are hard to monitor and do not stand out, and traditional application security assessment tools either do not work well with APIs or are simply irrelevant in this case.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

When planning API security infrastructure, authentication and authorization must be taken into account, yet these are often not addressed properly in many API security solutions. Beyond that, REST APIs are vulnerable to many of the attacks and threats that target web applications: injections via POSTed JSON and XML, insecure direct object references, access violations and abuse of APIs, buffer overflows and XML bombs, and scraping and data harvesting, among others.

The Way Forward

Serverless architectures are being adopted at a record pace. As organizations welcome dramatically improved speed, agility and cost-efficiency, they must also think through how they will adapt their security. Consider the following:

  • API gateway: Functions process REST API calls from client-side applications, accessing your code with unpredictable inputs. An API gateway can enforce JSON and XML validity checks; however, not all API gateways support schema and structure validation, especially for JSON, so each deployed function must be properly secured (a minimal validation sketch follows this list). Additionally, API gateways can serve as the authentication tier, which is critically important for REST APIs.
  • Function permissions: The function is essentially the execution unit. Restrict functions’ permissions to the minimum required and do not use generic permissions.
  • Abstraction through logical tiers: When a function calls another function—each applying its own data manipulation—the attack becomes more challenging.
  • Encryption: Data at rest is still accessible. FaaS becomes irrelevant when an attacker gains access to a database. Data needs to be adequately protected and encryption remains one of the recommended approaches regardless of the architecture it is housed in.
  • Web application firewall: Enterprise-grade WAFs apply dozens of protection measures on both ingress and egress traffic. Traffic is parsed to detect protocol manipulations, which may result in unexpected function behavior. Client-side inputs are validated and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
  • IoT botnet protection: To avoid the significant cost implications a DDoS attack may have on a serverless architecture and the data harvesting risks involved with scraping activity, consider behavioral analysis tools and IoT botnet solutions.
  • Monitoring function activity and data access: Abnormal function behavior, unexpected access to data, unreasonable traffic flow and other anomalous scenarios must be tracked and analyzed.
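
As referenced in the API gateway item above, here is a minimal Python sketch of schema-based payload validation using the open-source jsonschema package. The schema, field names and payloads are purely illustrative.

```python
# Minimal sketch of the JSON validity/structure check discussed in the
# API gateway bullet. Schema and payloads are illustrative examples.
from jsonschema import ValidationError, validate

ORDER_SCHEMA = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string", "maxLength": 36},
        "quantity": {"type": "integer", "minimum": 1, "maximum": 1000},
    },
    "required": ["order_id", "quantity"],
    "additionalProperties": False,   # reject unexpected fields outright
}

def accept(payload: dict) -> bool:
    """Return True only for payloads that match the declared schema."""
    try:
        validate(instance=payload, schema=ORDER_SCHEMA)
        return True
    except ValidationError:
        return False

print(accept({"order_id": "A-17", "quantity": 2}))                 # True
print(accept({"order_id": "A-17", "quantity": "2; DROP TABLE"}))   # False
```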

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Delivery

Keeping Pace in the Race for Flexibility

February 27, 2019 — by Radware


Flexibility and elasticity. Both rank high on the corporate agenda in the age of digital transformation and IT is no exception. From the perspective of IT, virtualization and cloud computing have become the de facto standard for deployment models. They provide the infrastructure elasticity to make business more agile and higher performing and are the reason why the majority of organizations today are operating within a hybrid infrastructure, one that combines on-premise with cloud-based and/or virtualized assets.

But to deliver the elasticity promised by these hybrid infrastructures requires data center solutions that deliver flexibility. As a cornerstone for optimizing applications, application delivery controllers (ADCs) have to keep pace in the race for flexibility. The key is to ensure that your organization’s ADC fulfills key criteria to improve infrastructure planning, flexibility and operational expenses.

One License to Rule Them All

Organizations should enjoy complete agility in every aspect of the ADC service deployment. Not just in terms of capabilities, but in terms of licensing. Partner with an ADC vendor that provides an elastic, global licensing model.

Organizations often struggle with planning ADC deployments when those deployments span hybrid infrastructures and can be strapped with excess expenses by vendors when pre-deployment calculations result in over-provisioning. A global licensing model allows organizations to pay only for capacity used, be able to allocate resources as needed and add virtual ADCs at a moment’s notice to match specific business initiatives, environments and network demands.

[You may also like: Maintaining Your Data Center’s Agility and Making the Most Out of Your Investment in ADC Capacity]

The result? Dramatically simplified ADC deployment planning and a streamlined transition to the cloud.

An ADC When and Where You Need It

This licensing mantra extends to deployment options and customizations as well. Leading vendors provide the ability to deploy ADCs across on-premise and cloud-based infrastructures, allowing customers to transfer ADC capacity from physical to cloud-based data centers. Ensure you can deploy an ADC wherever and whenever it is required, at the click of a button, at no extra cost and with no purchasing complexity.

Add-on services and capabilities that go hand-in-hand with ADCs are no exception either. Web application firewalls (WAF), web performance optimization (WPO), application performance monitoring…companies should enjoy the freedom to consume only required ADC services rather than overspending on bells and whistles that will sit idle collecting dust.

Stay Ahead of the Curve

New standards for communications and cryptographic protocols can leave data center teams scrambling to keep IT infrastructure updated. They can also severely inhibit application delivery.

Take SSL/TLS. These evolving standards ensure faster encrypted communication between client and server, improved security and application resource allocation without over-provisioning, allowing IT to optimize application performance and control costs during large-scale deployments.
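
As a small illustration of terminating the latest standards at the delivery tier, the following Python sketch builds a server-side TLS context that refuses anything older than TLS 1.3, the kind of policy an ADC can enforce so backend servers don't have to. The certificate file names and port are placeholders.

```python
# Minimal sketch of TLS termination with a modern-protocols-only policy.
# Certificate/key paths and the listening port are placeholders.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3          # refuse legacy protocol versions
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder files

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()            # TLS handshake happens here
        print("negotiated", conn.version(), "with", addr)
        conn.close()
```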

[You may also like: The ADC is the Key Master for All Things SSL/TLS]

Combining the flexibility of an ADC that supports the latest standards with an elastic licensing model is a winning combination, as it provides the most cost-effective alternative for consuming ADC services for any application.

Contain the Madness

The goal of any ADC is to ensure each application is performing at its best while optimizing costs and resource consumption. This is accomplished by ensuring that resource utilization is always tuned to actual business needs.

Leading ADC vendors allow ADC micro-services to be added to individual ADC instances without increasing the bill. Support for container orchestration engines such as Kubernetes lets the organization adapt its ADC to the application’s capacity, and also simplifies the addition of services such as SSL or WAF to individual instances or micro-services.
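
For example, an ADC tier integrated with Kubernetes can rebuild its backend pool from the cluster's own service endpoints as pods come and go. The sketch below uses the official Kubernetes Python client; the service name and namespace are placeholders.

```python
# Sketch: derive an ADC backend pool from a Kubernetes Service's endpoints.
# Service name and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()        # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

endpoints = v1.read_namespaced_endpoints(name="web-app", namespace="default")
backend_pool = [
    (address.ip, port.port)
    for subset in (endpoints.subsets or [])
    for address in (subset.addresses or [])
    for port in (subset.ports or [])
]
print("current backends:", backend_pool)   # feed this into the ADC configuration
```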

[You may also like: Simple to Use Link Availability Solutions]

Finding an ADC vendor that addresses all these considerations requires expanding the search beyond mainstream vendors. Driving flexibility via IT elasticity means considering all the key ADC capabilities and licensing nuances critical to managing and optimizing today’s diversified IT infrastructure. Remember these three keys when evaluating ADC vendors:

  • An ADC licensing model should be a catalyst for cutting infrastructure expenditures, not increasing them.
  • An ADC licensing model should provide complete agility in every aspect of your ADC deployment.
  • An ADC license should allow IT to simplify and automate IT operational processes.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Cloud Computing, Cloud Security, Security

Mitigating Cloud Attacks With Configuration Hardening

February 26, 2019 — by Radware


For attackers, misconfigurations in the public cloud can be exploited for a number of reasons. Typical attack scenarios include several kill chain steps, such as reconnaissance, lateral movement, privilege escalation, data acquisition, persistence and data exfiltration. These steps might be fully or partially utilized by an attacker over dozens of days until the ultimate objective is achieved and the attacker reaches the valuable data.

Removing the Mis from Misconfigurations

To prevent attacks, enterprises must harden configurations to address promiscuous permissions by applying continuous hardening checks to limit the attack surface as much as possible. The goals are to avoid public exposure of data from the cloud and reduce overly permissive access to resources by making sure communication between entities within a cloud, as well as access to assets and APIs, are only allowed for valid reasons.

For example, the private data of six million Verizon users was exposed when maintenance work changed a configuration and made an S3 bucket public. Only smart configuration hardening that applies the approach of “least privilege” enables enterprises to meet those goals.

[You may also like: Ensuring Data Privacy in Public Clouds]

The process requires applying behavioral analytics methods over time, including regular reviews of permissions and continuous analysis of each entity’s usual behavior, to ensure users have access only to what they need, nothing more. By reducing the attack surface, enterprises make it harder for hackers to move laterally in the cloud.

The process is complex and is often best managed with the assistance of an outside security partner with deep expertise and a system that combines multiple algorithms measuring activity across the network to detect anomalies and determine whether malicious intent is probable. Attackers will often stretch a kill chain over several days or months.

Taking Responsibility

It is tempting for enterprises to assume that cloud providers are completely responsible for network and application security to ensure the privacy of data. In practice, cloud providers provide tools that enterprises can use to secure hosted assets. While cloud providers must be vigilant in how they protect their data centers, responsibility for securing access to apps, services, data repositories and databases falls on the enterprises.


[You may also like: Excessive Permissions are Your #1 Cloud Threat]

A hardened network and meticulous application security can be a competitive advantage, helping companies build trust with their customers and business partners. Now is a critical time for enterprises to understand their role in protecting public cloud workloads as they transition more applications and data away from on-premise networks.

The responsibility to protect the public cloud is a relatively new task for most enterprises. But, everything in the cloud is external and accessible if it is not properly protected with the right level of permissions. Going forward, enterprises must quickly incorporate smart configuration hardening into their network security strategies to address this growing threat.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Cloud Computing, Cloud Security

Excessive Permissions are Your #1 Cloud Threat

February 20, 2019 — by Eyal Arazi


Migrating workloads to public cloud environments opens organizations up to a slate of new, cloud-native attack vectors that did not exist in the world of premise-based data centers. In this new environment, workload security is defined by which users have access to your cloud environment and what permissions they have. As a result, protecting against excessive permissions, and quickly responding when those permissions are abused, becomes the #1 priority for security administrators.

The Old Insider is the New Outsider

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protection, through mechanisms such as firewalls, IPS/IDS, WAF and DDoS protection, secure gateways, etc.

However, moving workloads to the cloud has led organizations (and IT administrators) to lose direct physical control over their workloads and to relinquish many aspects of security through the Shared Responsibility Model. As a result, the insider of the old, premise-based world is suddenly an outsider in the new world of publicly hosted cloud workloads.

[You may also like: Ensuring Data Privacy in Public Clouds]

IT administrators and hackers now have identical access to publicly-hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat.

Workload security, therefore, is defined by the people who can access those workloads, and the permissions they have.

Your Permissions = Your Attack Surface

One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them, and what permissions they actually use.

All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have too many permissions, which they never use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes.
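
One way to measure that gap is with the cloud provider's own “last accessed” data. The Python sketch below uses boto3 and the AWS IAM access-advisor APIs to list services a user is allowed to call but has never touched. The user ARN is a placeholder, and a real review would iterate over every user and role in the account.

```python
# Sketch: surface the granted-vs-used gap for one IAM principal using
# IAM "service last accessed" data. The ARN below is a placeholder.
import time
import boto3

iam = boto3.client("iam")
USER_ARN = "arn:aws:iam::123456789012:user/example-user"   # placeholder

job = iam.generate_service_last_accessed_details(Arn=USER_ARN)
while True:
    details = iam.get_service_last_accessed_details(JobId=job["JobId"])
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# Services the principal may call but has never authenticated to are
# candidates for permission removal.
for svc in details.get("ServicesLastAccessed", []):
    if "LastAuthenticated" not in svc:
        print("granted but unused:", svc["ServiceNamespace"])
```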

As a result, cloud workloads are vulnerable to data breaches (i.e., theft of data from cloud accounts), service violation (i.e., completely taking over cloud resources), and resource exploitation (such as cryptomining). Such promiscuous permissions are frequently mis-characterized as ‘misconfigurations’, but are actually the result of permission misuse or abuse by people who shouldn’t have them.

[You may also like: Protecting Applications in a Serverless Architecture]

Therefore, protecting against those promiscuous permissions becomes the #1 priority for protecting publicly-hosted cloud workloads.

Traditional Protections Provide Piecemeal Solutions

The problem, however, is that existing solutions provide incomplete protection against the threat of excessive permissions.

  • The built-in mechanisms of public clouds usually provide fairly basic protection, focused mostly on the overall computing environment; they are blind to activity within individual workloads. Moreover, since many companies run multi-cloud and hybrid-cloud environments, the built-in protections offered by cloud vendors will not protect assets outside of their network.
  • Compliance and governance tools usually use static lists of best practices to analyze permissions usage. However, they will not detect (and alert to) excessive permissions, and are usually blind to activity within workloads themselves.
  • Agent-based solutions require deploying (and managing) agents on cloud-based servers, and will protect only servers on which they are installed. However, they are blind to overall cloud user activity and account context, and usually cannot protect non-server resources such as services, containers, serverless functions, etc.
  • Cloud Access Security Brokers (CASB) tools focus on protecting software-as-a-service (SaaS) applications, but do not protect infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) environments.

[You may also like: The Hybrid Cloud Habit You Need to Break]

A New Approach for Protection

Modern protection of publicly-hosted cloud environments requires a new approach.

  • Assume your credentials are compromised: Hackers acquire stolen credentials in a plethora of ways, and even the largest companies are not immune to credential theft, phishing, accidental exposure, or other threats. Therefore, defenses cannot rely solely on protection of passwords and credentials.
  • Detect excessive permissions: Since excessive permissions are so frequently exploited for malicious purposes, identifying and alerting on such permissions becomes paramount. This cannot be done just by measuring against static lists of best practices, but must be based on analyzing the gap between the permissions a user has been granted and the permissions they actually use.
  • Harden security posture: The best way of stopping a data breach is preventing it before it ever occurs. Hardening your cloud security posture and eliminating excessive permissions and misconfigurations ensures that even if a user’s credentials become compromised, attackers will not be able to do much with those permissions.
  • Look for anomalous activities: A data breach is not one thing going wrong, but a whole list of things going wrong. Most data breaches follow a typical progression, which can be detected and stopped in time – if you know what you’re looking for. Monitoring for suspicious activity in your cloud account (for example, anomalous usage of permissions) will help identify malicious activity in time and stop it before user data is exposed (see the sketch after this list).
  • Automate response: Time is money, and even more so when it comes to preventing exposure of sensitive user data. Automated response mechanisms allow you to respond faster to security incidents, and block-off attacks within seconds of detection.
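
As mentioned in the “look for anomalous activities” item, one deliberately simplified approach is to compare a user's recent API calls against that same user's historical baseline. The boto3 sketch below does this with CloudTrail lookup events; the username is a placeholder, and real detection would weigh far richer signals (source IP, time of day, call sequences).

```python
# Sketch: flag API calls a user made in the last day that never appear in
# that user's own 30-day CloudTrail baseline. Username is a placeholder.
from datetime import datetime, timedelta, timezone
import boto3

ct = boto3.client("cloudtrail")
USER = "devops-engineer"   # placeholder principal name
now = datetime.now(timezone.utc)

def event_names(start, end):
    """Set of API call names the user made in the given time window."""
    names = set()
    for page in ct.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": USER}],
        StartTime=start, EndTime=end,
    ):
        names.update(e["EventName"] for e in page["Events"])
    return names

baseline = event_names(now - timedelta(days=30), now - timedelta(days=1))
recent = event_names(now - timedelta(days=1), now)
for name in sorted(recent - baseline):
    print(f"never-before-seen API call for {USER}: {name}")
```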

[You may also like: Automating Cyber-Defense]

Radware’s Cloud Workload Protection Service

Radware is extending its line of cloud-based security services to provide an agentless, cloud-native solution for comprehensive protection of workloads hosted on AWS. Radware’s solution protects both the overall security posture of your AWS cloud account, as well as individual cloud workloads, protecting against cloud-native attack vectors.

Radware’s solution addresses the core problem of cloud-native excessive permissions by analyzing the gap between granted and used permissions and providing smart hardening recommendations. Radware uses advanced machine-learning algorithms to identify malicious activity within your cloud account, as well as automated response mechanisms to block such attacks. This helps customers prevent data theft, protect sensitive customer data and meet compliance requirements.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Cloud Computing, Cloud Security

Ensuring Data Privacy in Public Clouds

January 24, 2019 — by Radware


Most enterprises spread data and applications across multiple cloud providers, typically referred to as a multicloud approach. While it is in the best interest of public cloud providers to offer network security as part of their service offerings, every public cloud provider utilizes different hardware and software security policies, methods and mechanisms, creating a challenge for the enterprise to maintain the exact same policy and configuration across all infrastructures. Public cloud providers typically meet basic security standards in an effort to standardize how they monitor and mitigate threats across their entire customer base. Seventy percent of organizations reported using public cloud providers with varied approaches to security management. Moreover, enterprises typically prefer neutral security vendors instead of over-relying on public cloud vendors to protect their workloads. As the multicloud approach expands, it is important to centralize all security aspects.

When Your Inside Is Out, Your Outside Is In

Moving workloads to publicly hosted environments leads to new threats, previously unknown in the world of premise-based computing. Computing resources hosted inside an organization’s perimeter are more easily controlled. Administrators have immediate physical access, and the workload’s surface exposure to insider threats is limited. When those same resources are moved to the public cloud, they are no longer under the direct control of the organization. Administrators no longer have physical access to their workloads. Even the most sensitive configurations must be done from afar via remote connections. Putting internal resources in the outside world results in a far larger attack surface with long, undefined boundaries of the security perimeter.

In other words, when your inside is out, then your outside is in.

[You may also like: Ensuring a Secure Cloud Journey in a World of Containers]

External threats that could previously be easily contained can now strike directly at the heart of an organization’s workloads. Hackers can have identical access to workloads as do the administrators managing them. In effect, the whole world is now an insider threat.

In such circumstances, restricting the permissions to access an organization’s workloads and hardening its security configuration are key aspects of workload security.

Poor Security Hygiene Leaves You Exposed

Cloud environments make it very easy to grant access permissions and very difficult to keep track of who has them. With customer demands constantly increasing and development teams put under pressure to quickly roll out new enhancements, many organizations spin up new resources and grant excessive permissions on a routine basis. This is particularly true in many DevOps environments where speed and agility are highly valued and security concerns are often secondary.

Over time, the gap between the permissions that users have and the permissions that they actually need (and use) becomes a significant crack in the organization’s security posture. Promiscuous permissions leave workloads vulnerable to data theft and resource exploitation should any of the users who have access permissions to them become compromised. As a result, misconfiguration of access permissions (that is, giving permissions to too many people and/or granting permissions that are overly generous) becomes the most urgent security threat that organizations need to address in public cloud environments.

[You may also like: Considerations for Load Balancers When Migrating Applications to the Cloud]

The Glaring Issue of Misconfiguration

Public cloud providers offer identity access management tools for enterprises to control access to applications, services and databases based on permission policies. It is the responsibility of enterprises to deploy security policies that determine what entities are allowed to connect with other entities or resources in the network. These policies are usually a set of static definitions and rules that control what entities are valid to, for example, run an API or access data.

One of the biggest threats to the public cloud is misconfiguration. If permission policies are not managed properly by an enterprise with the tools offered by the public cloud provider, excessive permissions expand the attack surface, enabling hackers to exploit one entry point to gain access to the entire network.

Moreover, common misconfiguration scenarios result from DevOps engineers using predefined permission templates, called managed permission policies, in which the standardized policy may grant wider permissions than needed. The result is excessive permissions that are never used. Misconfigurations can cause accidental exposure of data, services or machines to the internet, as well as leave doors wide open for attackers.
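
To illustrate the difference, the sketch below (Python with boto3) contrasts a broad managed-template-style policy with one scoped to a single bucket and two actions. The account, bucket and user names are placeholders, not a recommendation of specific settings for your environment.

```python
# Sketch: a broad template-style policy vs. a least-privilege alternative.
# Bucket and user names are placeholders; only the scoped policy is attached.
import json
import boto3

# What a generic managed template often grants: every S3 action, everywhere.
broad_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}

# Least-privilege alternative: only the calls and bucket the job requires.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-build-artifacts/*",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="devops-engineer",                 # placeholder principal
    PolicyName="build-artifacts-rw",
    PolicyDocument=json.dumps(scoped_policy),   # attach the narrow policy, not broad_policy
)
```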

[You may also like: The Hybrid Cloud Habit You Need to Break]

For example, an attacker can steal data by using the security credentials of a DevOps engineer gathered in a phishing attack. The attacker leverages the privileged role to take a snapshot of elastic block storage (EBS) to steal data, then shares the EBS snapshot and its data with an account on another public network without installing anything. The attacker can also leverage a role with excessive permissions to create a new machine at the beginning of the attack, infiltrate deeper into the network to share AMI and RDS snapshots (Amazon Machine Images and Relational Database Service, respectively), and then unshare the resources.
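
A defender can hunt for exactly this kind of exposure. The boto3 sketch below lists the account's own EBS snapshots and flags any whose createVolumePermission has been opened to the public or shared with another account; the region is illustrative.

```python
# Sketch: audit EBS snapshot sharing to catch the exfiltration path above.
# Region is illustrative; run per region in a real audit.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        attr = ec2.describe_snapshot_attribute(
            SnapshotId=snap["SnapshotId"],
            Attribute="createVolumePermission",
        )
        for perm in attr["CreateVolumePermissions"]:
            if perm.get("Group") == "all":
                print(snap["SnapshotId"], "is shared publicly")
            elif "UserId" in perm:
                print(snap["SnapshotId"], "is shared with account", perm["UserId"])
```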

Year over year in Radware’s global industry survey, the most frequently mentioned security challenges encountered with migrating applications to the cloud are governance issues followed by skill shortage and complexity of managing security policies. All contribute to the high rate of excessive permissions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Cloud Computing, Cloud Security

Now or Never: Financial Services and the Cloud

January 9, 2019 — by Sandy Toplis


I will get straight to the point: The time is right for the financial services (FS) industry to leverage the power of the cloud. It dovetails quite nicely with retail banking’s competitive moves to provide users with more flexible choices, banking simplification and an improved, positive customer experience. Indeed, I am encouraged that roughly 70% of my financial services customers are looking to move more services to the cloud, and approximately 50% have a cloud-first strategy.

This is a departure from the FS industry’s history with the public cloud. Historically, it has shied away from cloud adoption—not because it’s against embracing new technologies for business improvement, but because it is one of the most heavily regulated and frequently scrutinized industries in terms of data privacy and security. Concerns regarding the risk of change and impact to business continuity, customer satisfaction, a perceived lack of control, data security, and costs have played a large role in the industry’s hesitation to transition to the cloud.

[You may also like: Credential Stuffing Campaign Targets Financial Services]

Embracing Change

More and more, banks are moving applications on the cloud to take advantage of scalability, lower capital costs, ease of operations and resilience offered by cloud solutions. Due to the differing requirements on data residency from jurisdiction-to-jurisdiction, banks need to choose solutions that allow them to have exacting control over transient and permanent data flows. Solutions that are flexible enough to be deployed in a hybrid mode, on a public cloud infrastructure as well as private infrastructure, are key to allowing banks to have the flexibility of leveraging existing investments, as well as meeting these strict regulatory requirements.

[You may also like: The Hybrid Cloud Habit You Need to Break]

Although the rate of cloud adoption within the financial services industry still has much room for growth, the industry is addressing many of its concerns and putting to bed the myths surrounding cloud-based security. Indeed, multi-cloud adoption is proliferating, and it’s becoming clear that banks are increasingly turning to the cloud and to new (FinTech) technologies. In some cases, banks are already using cloud services for non-core and non-critical uses such as HR, email, customer analytics, customer relationship management (CRM), and development and testing.

Interestingly, smaller banks have more readily made the transition by moving entire core services (treasury, payments, retail banking, enterprise data) to the cloud. As they and larger banks embrace new FinTech, their service offerings will stand out in the competitive landscape, helping to propel the digital transformation race.

What’s Driving the Change?

There are several key drivers for the adoption of multi (public) cloud-based services for the FS industry, including:

  • Risk mitigation in cloud migration. Many companies operate a hybrid security model, so the cloud environment works adjacent to existing infrastructure. Organisations are also embracing the hybrid model to deploy cloud-based innovation sandboxes to rapidly validate consumers’ acceptance of new services without disrupting their existing business. The cloud can help to lower risks associated with traditional infrastructure technology where capacity, redundancy and resiliency are operational concerns.  From a regulatory perspective, the scalability of the cloud means that banks can scan potentially thousands of transactions per second, which dramatically improves the industry’s ability to combat financial crime, such as fraud and money laundering.
  • Security. Rightly so, information security remains the number one concern for CISOs. When correctly deployed, cloud applications are no less secure than traditional in-house deployments. What’s more, the flexibility to scale in a cloud environment can empower banks with more control over security issues.
  • Agile innovation and competitive edge. Accessing the cloud can increase a bank’s ability to innovate by enhancing agility, efficiency and productivity. Gaining agility with faster onboarding of services (from the traditional two-to-three weeks to implement a service to almost instantly in the cloud) gives banks a competitive edge: they can launch new services to the market quicker and with security confidence. Additionally, the scaling up (or down) of services is fast and reliable, which can help banks to reallocate resources away from the administration of IT infrastructure, and towards innovation and fast delivery of products and services to markets.
  • Cost benefits. As FS customers move from on-prem to cloud environments, costs shift from capex to opex. The cost savings of public cloud solutions are significant, especially given the reduction in initial capex requirements for traditional IT infrastructure. During periods of volumetric traffic, the cloud can allow banks to manage computing capacity more efficiently. And when the cloud is adopted for risk mitigation and innovation purposes, cost benefits arise from the resultant improvements in business efficiency. According to KPMG, shifting back-office functions to the cloud allows banks to achieve savings of between 30 and 40 percent.

[You may also like: The Executive Guide to Demystify Cybersecurity]

A Fundamental Movement

Cloud innovation is fast becoming a fundamental driver in global digital disruption and is increasingly gaining more prominence and cogency with banks. In fact, Gartner predicts that by 2020, a corporate no-cloud policy will become as rare as a no-internet policy is today.

Regardless of the size of your business—be it Retail Banking, Investment Banking, Insurance, Forex, Building Societies, etc.—protecting your business from cybercriminals and their ever-changing means of “getting in” is essential. The bottom line: whatever cloud deployment best suits your business, it is considerably more scalable and elastic than hosting in-house, and therefore suits any organisation.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery, Cloud Computing

Ensuring a Secure Cloud Journey in a World of Containers

December 11, 2018 — by Prakash Sinha


As organizations transition to the cloud, many are adopting microservice architecture to implement business applications as a collection of loosely coupled services, in order to enable isolation, scale, and continuous delivery for complex applications. However, you have to balance the complexity that comes with such a distributed architecture with the application security and scale requirements, as well as time-to-market constraints.

Many application architects choose application containers as the tool of choice to implement a microservices architecture. Among their many advantages, such as a smaller resource footprint, faster instantiation and better resource utilization, containers provide a lightweight run time and a consistent environment for the application, from development to testing to production deployment.

That said, adopting containers doesn’t remove the traditional security and application availability concerns; application vulnerabilities can still be exploited. Recent ransomware attacks highlight the need to secure against DDoS and application attacks.

[You may also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

Security AND availability should be top-of-mind concerns in the move to adopt containers.

Let Your Load Balancer Do the Heavy Lifting

For many years, application delivery controllers (ADCs), a.k.a. load balancers, have been integral to addressing the service-level needs of applications, deployed on premise or in the cloud, meeting availability and many of the security requirements of those applications.

Layered security is a MUST: In addition to using built-in tools for container security, traditional approaches to security are still relevant. Many container-deployed services are composed using Application Programming Interfaces (APIs). Since these services are accessible over the web, they are open to malicious attacks.

As hackers probe network and application vulnerability to gain access to sensitive data, the prevention of unauthorized access needs to be multi-pronged as well:

  • Preventing denial-of-service attacks
  • Running routine vulnerability assessment scans on container applications
  • Scanning application source code for vulnerabilities and fixing them
  • Preventing malicious access by validating users before they can access a container application
  • Preventing rogue application ports/applications from running
  • Securing data at rest and in motion

Since ADCs terminate user connections, scrubbing the data with a web application firewall (WAF) will help identify and prevent malicious attacks, while authenticating users against an identity management system to prevent unauthorized access to a container service.
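
As one illustrative way to validate users before they reach a container service, the Python sketch below checks a signed JWT with the PyJWT package. The shared secret and claims are placeholders; production deployments typically verify tokens issued by an identity provider rather than a static key.

```python
# Sketch: reject requests at the edge unless they carry a valid, unexpired JWT.
# Shared secret and claims are placeholders.
import time
import jwt   # third-party PyJWT package

SHARED_SECRET = "replace-me"   # placeholder signing key shared with the token issuer

def authorize(token: str):
    """Return the token's claims if it is valid and unexpired, else None."""
    try:
        return jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return None

# Mint a short-lived token, then check it before proxying to the container app.
token = jwt.encode({"sub": "user-42", "exp": int(time.time()) + 300},
                   SHARED_SECRET, algorithm="HS256")
print(authorize(token))          # claims dict -> forward the request
print(authorize(token + "x"))    # None -> reject at the edge
```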

Availability is not just a nice-to-have: A client interacting with a microservices-based application does not need to know which instance is serving it. This is precisely the isolation and decoupling that a load balancer provides, ensuring availability if one of the instances becomes unavailable.

Allocating and managing resources manually is not an option: Although there are many benefits to a container-based application, it is a challenge to quickly roll out, troubleshoot and manage these microservices. Manually allocating resources for applications and re-configuring the load balancer to incorporate newly instantiated services is inefficient and error prone, and it becomes problematic at scale, especially with services that have short lifetimes. Automating the deployment of services quickly becomes a necessity. Automation tools transform the traditional manual approach into simpler automated scripts and tasks that do not require deep familiarity or expertise.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

If you don’t monitor, you won’t know: When deploying microservices that may affect many applications, proactive monitoring, analytics and troubleshooting become critical before they become business disruptions. Monitoring may include information about a microservice such as latency, security issues, service uptime, and problems of access.

Businesses must support complex IT architectures for their application delivery in a secure manner. Configuring, deploying and maintaining cross-domain microservices can be error-prone, costly and time-consuming. Organizations should ensure security with a layered approach to security controls. To simplify configuration and management of these microservices, IT should adopt automation, visibility, analytics and orchestration best practices and tools that fit in with their agile and DevOps processes. The goal is to keep the secure and controlled environment mandated by IT without losing the development agility and automation that DevOps needs.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Compliance, Security

Marriott: The Case for Cybersecurity Due Diligence During M&A

December 4, 2018 — by Mike O'Malley


If ever there was a perfectly packaged case study on data breaches, it’s Marriott’s recently disclosed megabreach. Last week, the hotel chain announced that its Starwood guest reservation system was hacked in 2014—two years before Marriott purchased Starwood properties, which include the St. Regis, Westin, Sheraton and W Hotels—potentially exposing the personal information of 500 million guests.

The consequences were almost immediate; on the day it announced the breach, Marriott’s stock was down 5% in early trading and two lawsuits seeking class-action status (one for $12.5 billion in damages) were filed. And the U.S. Senate started to discuss stiffer fines and regulations for security breaches. So far, this is all par for the course.

But what makes Marriott’s breach particularly noteworthy is the obvious lack of cybersecurity due diligence conducted during the M&A process.

Never Ever Skip a Step

In September 2016, Marriott International announced that it had completed the acquisition of Starwood Resorts & Hotels Worldwide, creating the largest hotel company in the world. In its press release, Marriott specifically touted the best-in-class loyalty program that the two brands, combined, could now offer members.

What Marriott International executives didn’t realize was that hackers had gained unauthorized access to Starwood’s loyalty program since 2014, exposing guests’ private information including names, phone numbers, email addresses, passport numbers, dates of birth, credit card numbers and more.

However, if Marriott had done its homework, it might have avoided the mountain of legal fees and compliance fines it now faces. In today’s digital age, cybersecurity due diligence during any M&A process is, without question, imperative.

[You may also like: The Million-Dollar Question of Cyber-Risk: Invest Now or Pay Later?]

And it’s not just security evangelists like myself who emphasize this. The American Bar Association likewise asserts that “it is critical to understand the nature and significance of a target’s vulnerabilities, the potential scope of the damage that may occur (or that already has occurred) in the event of a breach, and the extent and effectiveness of the cyber defenses the target business has put in place to protect itself. An appropriate evaluation of these issues could, quite literally, have a major impact on the value the acquirer places on the target company and on the way it structures the deal.”

The cost of cyberattacks is simply too great not to mitigate every threat, every time. A successful cyberattack and the resulting data breach obliterate trust and destroy brands.

The Only Way Forward

When one company acquires another, it doesn’t just acquire assets. It also assumes the target company’s risks. Put simply, their gaps become your gaps.

In addition, a lack of cybersecurity due diligence can actually undermine the value drivers of the deal. In Marriott’s case, a big driver was retention of Starwood’s high-value travelers: the people who make up the loyalty program. Due to the pain these customers will now endure—changing credit card numbers, passports, etc.—this value driver has been irrevocably damaged.

It is critical that organizations incorporate cybersecurity into the very fabric of the business, from the C-level to IT. Securing digital assets can no longer be delegated solely to the IT department; it must be infused into product and service offerings, security, and perhaps most importantly, development plans and business initiatives. In the case of Marriott, its $13 billion acquisition of Starwood represented a strategic initiative that involved the board of directors, C-level executives and management—all of whom are now partially responsible for the erosion of Marriott’s brand affinity.

[You may also like: Why Cyber-Security Is Critical to The Loyalty of Your Most Valued Customers]

And as we’ve written before, when it comes to loyalty programs, security must transition from the domain of reactive disaster recovery and business continuity into the realm of proactive protection. If loyalty programs are designed to focus on your most valuable customers, why wouldn’t its security fall in line with the other mission-critical assets and infrastructure responsible for servicing these very clients?

Marriott’s Starwood breach is an unfortunate case study for why CEO and executive teams must lead the way in setting the tone when it comes to securing the customer experience. When cybersecurity is overlooked or treated as an afterthought, the potential damage goes far beyond dollars and cents. Your very reputation is at stake.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.


Application Delivery, Cloud Computing, Cloud Security

Embarking on a Cloud Journey: Expect More from Your Load Balancer

November 13, 2018 — by Prakash Sinha


Many enterprises are in transition to the cloud: building their own private cloud, managing a hybrid environment (both physical and virtualized) or deploying on a public cloud. In addition, there is a shift from infrastructure-centric environments to application-centric ones. In a fluid development environment of continuous integration and continuous delivery, where services are frequently added or updated, the new paradigm requires support across multiple environments and many stakeholders.

When development teams choose unsupported cloud infrastructure without IT involvement, the network team loses visibility and control over security and costs, yet remains accountable for the service level agreement (SLA) once the developed application goes live.

The world is changing. So should your application delivery controller.

Application delivery and load balancing technologies have been the strategic component providing availability, optimization, security and latency reduction for applications. In order to enable seamless migration of business critical applications to the cloud, the same load balancing and application delivery infrastructure must now address the needs of continuous delivery/integration, hybrid and multi-cloud deployments.

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

The objective here is not to block agile development and the use of innovative services, but to give the organization the best of both DevOps and IT: a secure, controlled environment that still enables agility. The benefits speak for themselves:

Reduced shadow IT initiatives
To remain competitive, every business needs innovative technology consumable by the end‐user. Oftentimes, employees are driven to use shadow IT services because going through approval processes is cumbersome, and using available approved technology is complex to learn and use. If users cannot get quick service from IT, they will go to a cloud service provider for what they need. Sometimes this results in short‐term benefit, but may cause issues with organizations’ security, cost controls and visibility in the long-term. Automation and self-service address CI/CD demands and reduce the need for applications teams to acquire and use their own unsupported ADCs.

Flexibility and investment protection at a predictable cost
Flexible licensing is one of the critical elements to consider. As you move application delivery services and instances to the cloud when needed, you should be able to reuse existing licenses across a hybrid deployment. Many customers initially deploy on public cloud but cost unpredictability becomes an issue once the services scale with usage.

[You may also like: Load Balancers and Elastic Licensing]

Seamless integration with an SDDC ecosystem
As you move to private or public cloud, you should be able to reuse your investment in the orchestration system of your environment. Many developers are not used to networking or security nomenclature. Using self-service tools with which developers are familiar quickly becomes a requirement.

The journey from a physical data center to the cloud may require investments in new capabilities to enable migration to the new environment. If application delivery controller capacity is no longer required in the physical data center, that capacity can be automatically reassigned. Automation and self-service applications address the needs of the various stakeholders, as well as the flexible licensing and cost control aspects of this journey.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.
