
Application Delivery

Modern Analytics and End-to-End Visibility

July 3, 2019 — by Prakash Sinha

Many Cloud Service Providers (CSPs) and large enterprises struggle to deliver a committed service level for an application. Without a proper Service Level Agreement (SLA), it is impossible for a tenant to manage an application for its own users.

Delivering an SLA without first gaining end-to-end visibility into the application, users and network is asking for trouble. This has long been an area of contention and finger pointing between network and application teams. Solutions for monitoring application performance and SLA are expensive, and the task is complex, requiring the insertion of hardware probes and/or the integration of software agents into every application server.

The Case for Application Analytics

Application analytics provides deep insights into application, user and network behavior and the root cause of an SLA breach by capturing, analyzing and visualizing application metrics.

[You may also like: Application SLA: Knowing Is Half the Battle]

When deploying applications, particular attention is required to see when things are slowing down, so proactive monitoring becomes critical. Not only are proactive monitoring and troubleshooting through actionable insights helpful in configuring the appropriate technical capability to address the issue at hand; this visibility into application performance is also important for cost savings, for example to de-provision unused resources when they are not needed or to mitigate an attack in progress.

An SLA breach may be due to a device outage or configuration issue, problems with access from a particular geography, a specific device type, a particular data center, or something in between. Other causes may be SSL handshake issues or security attacks that impact application performance due to a lack of resources. It is important to know about these issues before they become a business disruption.

In a multi-tenant environment, if the environments are not segregated, tenants may start competing for shared resources during peak utilization. In an environment where tenants share resources, a potential spike in resource consumption or a wrong configuration change of a single tenant may affect all other tenants – severely impacting an application’s SLA and availability.

End-to-End Visibility

Application Delivery Controllers (ADCs) are at the intersection of the network and applications. ADCs act as sensors to changing user demands on applications – for example, detecting increased user latency, a lack of available application resources, a throughput limit being reached, an outage of a specific service or a security attack in progress.

[You may also like: 6 Must-Have Metrics in Your SLA]

In order to detect any application performance issues in real-time before your customers experience them, it is essential to have an end-to-end monitoring capability that provides actionable insights and alerts through visualization. The ADC can act upon this telemetry to trigger automation and orchestration systems to program the applications or the network elements as needed.
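To make this concrete, here is a minimal sketch of the kind of telemetry-driven trigger described above, assuming a hypothetical ADC metrics endpoint and orchestration webhook (neither is a specific product API):

```python
import statistics

import requests  # third-party; pip install requests

# Hypothetical endpoints: replace with your ADC's telemetry API and
# your orchestration system's webhook.
ADC_METRICS_URL = "https://adc.example.com/api/metrics/latency"
ORCHESTRATOR_WEBHOOK = "https://orchestrator.example.com/hooks/scale-out"
SLA_P95_MS = 500  # example SLA target: 95th-percentile latency under 500 ms

def check_sla_and_react() -> None:
    # Pull recent per-request latency samples from the ADC.
    samples = requests.get(ADC_METRICS_URL, timeout=5).json()["latency_ms"]
    p95 = statistics.quantiles(samples, n=20)[18]  # 19 cut points; index 18 is P95
    if p95 > SLA_P95_MS:
        # Actionable insight: alert the orchestrator before users notice.
        requests.post(
            ORCHESTRATOR_WEBHOOK,
            json={"reason": "sla_breach", "p95_ms": p95},
            timeout=5,
        )

if __name__ == "__main__":
    check_sla_and_react()
```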

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud Computing

Eliminating Excessive Permissions

June 11, 2019 — by Eyal Arazi

Excessive permissions are the #1 threat to workloads hosted on the public cloud. As organizations migrate their computing resources to public cloud environments, they lose visibility and control over their assets. In order to accelerate the speed of business, extensive permissions are frequently granted to users who shouldn’t have them, which creates a major security risk should any of these users ever become compromised by hackers.

Watch the video below to learn more about the importance of eliminating excessive permissions.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Application Delivery

Auto-Discover, -Scale and -License ADCs

June 5, 2019 — by Prakash Sinha

In the changing world of micro-service applications, agile approaches, continuous delivery and integration, and the migration of applications and services to the cloud, ADCs (aka load balancers) are likewise transforming.

ADCs still make applications and services available – locally or globally, within and across cloud and data centers – while providing redundancy to links and reducing latency for the consumers of application services. However, due to where ADCs sit in the network, they have taken on additional roles of a security choreographer and a single point of visibility across the front-end, networks and applications.

Traditionally deployed as a redundant pair of physical devices, ADCs have begun to be deployed as virtual appliances. Now, as applications move to the cloud, ADCs are available as a service in the cloud or as a mix of virtual, cloud and physical devices depending on cost and performance characteristics desired.

Core Use Cases

Providing high availability (HA) is one of the core use cases for an ADC. HA addresses the need for an application to recover from failures within and between data centers. SSL offload is also a core use case. As SSL/TLS become pervasive to secure and protect web transactions, offloading non-business functions from application and web servers is needed to reduce application latency while lowering the cost of application footprint required to serve users.

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

One of the ways organizations use cloud and automation to optimize the cost of their application infrastructure is by dynamically adjusting resource consumption to their actual utilization levels. As the number of users connecting to a particular application service grows, new instances of application services are brought online. Scaling-in and scaling-out in an automated way is one of the primary reasons why ADCs have built-in automation and integrations with orchestration systems. For example, Radware’s automation capabilities enhance and extend Microsoft Azure by taking advantage of Scale Sets to automatically grow and shrink the ADC cluster based on demand.
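As an illustration of the scale-in/scale-out decision, here is a minimal sketch in Python; the thresholds and bounds are illustrative assumptions, not Radware's or Azure's actual logic:

```python
# A minimal sketch of the sizing decision an ADC's orchestration
# integration might make. All numbers are illustrative assumptions.

MAX_CONNS_PER_INSTANCE = 10_000
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def desired_instance_count(current_connections: int) -> int:
    # Size the pool to current demand, within configured bounds.
    needed = -(-current_connections // MAX_CONNS_PER_INSTANCE)  # ceiling division
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

# Example: 43,000 concurrent connections -> 5 instances.
print(desired_instance_count(43_000))
```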

Automating Operations

Auto-scale capability is important for organizations looking to automate operations – that is, to add and remove services on demand without manual intervention for licensing, and to reclaim capacity when it is no longer in use. This saves costs, in operations as well as in training. As organizations move to the cloud, capacity planning and the associated licensing are common concerns. Elastic licensing is designed to cap the cost of licenses as organizations transition from physical hardware or virtual deployments to the cloud.

[You may also like: Economics of Load Balancing When Transitioning to the Cloud]

Innovative elastic licensing benefits small and large enterprises, and enables them to protect against load balancing pricing shocks as the number of users and associated SSL transactions grows, while simplifying capacity planning. End-to-end visibility and automation further enable self-service across various stakeholders and reduce errors.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Delivery, Cloud Computing

Economics of Load Balancing When Transitioning to the Cloud

May 22, 2019 — by Prakash Sinha

One of the concerns I hear often is that application delivery controller (ADC) licensing models do not support cloud transitions for the enterprise or address the business needs of cloud service providers that have a large number of tenants.

Of course, there are many models to choose from – perpetual pricing per instance, bring-your-own-license (BYOL), consumption and metered licensing by CPU cores, per-user or by throughput, and service provider licensing agreements (SPLA), to name a few. The biggest concern is the complexity of licensing ADC capacity. In a cloud environment, the performance profile for a particular instance may need to change to accommodate a traffic spike. The licensing infrastructure and automation need to accommodate this characteristic.

Traditionally, load balancers were deployed as a redundant pair of physical devices supported by perpetual pricing – a non-expiring license to use an instance, whether hardware, virtualized or in the cloud. The customer has no obligation to pay for support or update services, although they are offered at an additional yearly cost. As virtualization took hold in data centers, ADCs began to be deployed as virtual appliances and started supporting a subscription licensing model – a renewable license, usually annual or monthly, that includes software support and updates during the subscription term. The license automatically terminates unless it is renewed at the end of the term. Now, as applications move to the cloud, ADCs are being deployed as a service in the cloud, and consumption-based pricing is becoming common.

[You may also like: Keeping Pace in the Race for Flexibility]

Evaluating Choices: The Problem of Plenty

There are many licensing models to choose from – perpetual, subscription, consumption/metered – so how do you decide? The key is to understand what problem you're trying to solve, identify the *MUST* have capabilities you'd expect for your applications, plan how much capacity you'd need, and then do an apples-to-apples comparison.
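As a toy illustration of such an apples-to-apples comparison, the sketch below totals three common models over a planning horizon; all prices are made-up placeholders to be replaced with real quotes:

```python
# Compare licensing models on total cost over the same horizon.
# Every figure here is an illustrative placeholder.

YEARS = 3

def perpetual(license_price: float, annual_support: float) -> float:
    # One-time license plus optional yearly support/updates.
    return license_price + annual_support * YEARS

def subscription(annual_fee: float) -> float:
    # Renewable license; support and updates included.
    return annual_fee * YEARS

def consumption(hourly_rate: float, hours_per_year: float) -> float:
    # Pay only for capacity actually consumed.
    return hourly_rate * hours_per_year * YEARS

print("perpetual   :", perpetual(20_000, 3_000))
print("subscription:", subscription(9_000))
print("consumption :", consumption(1.50, 6_000))  # ~16 hours/day utilization
```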

Understand the use case

Let us consider a cloud service provider (CSP) tenant onboarding as an example. The provider offers service to its tenants (medium and large enterprises), which consume their own homegrown applications and those offered and hosted by the CSP.

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

For example, a CSP whose tenants are hospitals and physician networks offers patient registration systems as a shared SaaS offering among multiple tenants. Each tenant has varying needs for a load balancer – small ones require public cloud-based ADCs, whereas mid-sized and large ones have both public and private cloud solutions. Some of the larger tenants of the CSP also require their application services proxied by hardware ADCs due to low latency requirements. Self-service is a must for the CSP to reduce the cost of doing business, and so are automation and integration to support the tenants that administer their own environments.

Based on the use case, evaluate what functionality you’d need and what type of form factor support is required

CSPs are increasingly concerned about the rapid growth and expansion of Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform into their markets. Hosting providers that only provide commodity services, such as co-location and web hosting, have realized they are one service outage away from losing customers to larger cloud providers.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

In addition, many CSPs that provide managed services are struggling to grow because their current business is resource intensive and difficult to scale. In order to survive this competitive landscape, CSPs must have:

  • Cost predictability for the CSP (and tenants)
  • The ability to offer value-added advisory services, such as technical and consulting opportunities to differentiate
  • Self-service to reduce resources via the ability to automate and integrate with a customer’s existing systems
  • Solutions that span both private and public cloud infrastructure and includes hardware

For the CSP onboarding use case above, the technical requirements break down to: self-service, the ability to create ADC instances of various sizes, automated provisioning, and support for Ansible, vRO and Cisco ACI. From a business perspective, the CSP needs to offer a host of solutions for its tenants that span cloud, private and hardware-based ADCs.

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

Plan Capacity

Once you understand the use case and have defined functional, technical and business requirements, it's time to review what kind of capacity you'll need – now and in the future. You may use existing analytics dashboards and tools to gain visibility into what you consume today. The data may include your HTTP, HTTP/S and UDP traffic, SSL certificates, throughput per application at peak, and connections and requests per second. Based on your growth projections, you may define future needs.
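A minimal sketch of such a projection, with illustrative numbers standing in for your own measured peaks and growth estimate:

```python
# Project future capacity needs from measured peaks and a growth rate.
# All values are illustrative assumptions.

current_peak = {
    "throughput_gbps": 4.0,
    "ssl_cps": 8_000,          # SSL connections per second at peak
    "requests_per_sec": 120_000,
}
annual_growth = 0.30  # 30% projected yearly growth
years = 2

projected = {metric: value * (1 + annual_growth) ** years
             for metric, value in current_peak.items()}
print(projected)  # what you should plan to license, plus headroom
```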

Compare Available Options

The next step is to look at the various vendors for the performance metric that’s important to your applications. If you have a lot of SSL traffic, then look at that metric as a cost/unit across various vendors.
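For example, a quick way to normalize quotes is cost per unit of the metric that matters most to you; the vendor names and prices below are placeholders for your own short list:

```python
# Normalize vendor quotes to cost per unit of a chosen metric,
# here SSL transactions per second. Figures are placeholders.

quotes = {
    "vendor_a": {"yearly_cost": 30_000, "ssl_tps": 10_000},
    "vendor_b": {"yearly_cost": 42_000, "ssl_tps": 18_000},
}

for vendor, q in quotes.items():
    per_unit = q["yearly_cost"] / q["ssl_tps"]
    print(f"{vendor} => ${per_unit:.2f} per SSL TPS per year")
```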

[You may also like: Are Your Applications Secure?]

Once you have narrowed down the list of vendors to those that support the functionality your applications MUST have, it's time to review the pricing against your budget. It's important to compare apples-to-apples, so based on your capacity and utilization profile, compare the vendors on your short list. The chart below shows one example of a comparison on AWS using on-demand instances versus a Radware Global Elastic Licensing subscription as a yearly cost.

As enterprises and service providers embark on a cloud journey, there is a need for a simpler and more flexible licensing model and infrastructure that eliminates planning risk, enables predictable costs, simplifies and automates licensing for provisioned capacity, and enables the transfer of capacity from existing physical deployments to the cloud to realize savings.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Delivery, Cloud Computing

Application Delivery Use Cases for Cloud and On-Premise Applications

April 23, 2019 — by Prakash Sinha

Most of us use web applications in our daily lives, whether at work or for personal reasons. These applications include sites offering banking and financial services, payroll, utilities, online training, just to name a few. Users get frustrated, sometimes annoyed, if the applications – such as bank account access, loading of a statement, emails, or bills – are slow to respond. Heaven help us if we lose these services right in the middle of a payment!


If you look at these applications from a service provider perspective, especially providers with web-facing applications, this loss of customer interest or frustration is expensive and translates into real loss of revenue – almost $8,900 per minute of downtime – in addition to loss of customer satisfaction and reputation. And if your services are in the cloud and you don't have a fallback? Good luck…

Traditional Use of ADCs for Applications

This is where application delivery controllers (ADCs), commonly referred to as load balancers, come in. ADCs focus on a few aspects to help applications. ADCs make it seem to the user that the services being accessed are always up, and in doing so, reduce the latency that a user perceives when accessing the application. ADCs also help in securing and scaling the applications across millions of users.

[You may also like: Ensuring a Secure Cloud Journey in a World of Containers]

Traditionally, these load balancers were deployed as a redundant pair of physical devices; then, as virtualization took hold in the data centers, ADCs began to be deployed as virtual appliances. Now, as applications move to cloud environments, ADCs are being deployed as a service in the cloud, or as a mix of virtual, cloud and physical devices (depending on cost and desired performance characteristics, as well as the familiarity and expertise of the administrator of these services – DevOps, NetOps or SecOps).

The ADC World is Changing

The world of ADCs is changing rapidly. Driven by the fast-changing world of applications – micro-services, agile approaches, continuous delivery and integration – there are many changes afoot.

ADCs still have the traditional job of making applications available locally in a data center or globally across data centers, and providing redundancy to links in a data center. In addition to providing availability to applications, these devices are still used for latency reduction – using caching, compressions and web performance optimizations – but due to where they sit in the network, they’ve taken on additional roles of a security choreographer and a single point of visibility across a variety of different applications.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

We are beginning to see additional use cases, such as web application firewalls for application protection, SSL inspection for preventing leaks of sensitive information, and single sign-on across many applications and services. The deployment topology of the ADC is also changing: it may run within a container to load balance and scale micro-services, or provide additional value-add capabilities to embedded ADCs or micro-services within a container.

Providing high availability is one of the core use cases for an ADC. HA addresses the need for an application to recover from failures within and between data centers themselves. SSL Offload is also considered a core use case. As SSL and TLS become pervasive to secure and protect web transactions, offloading non-business functions from application and web servers so that they may be dedicated to business processing is needed not only to reduce application latency but also to lower the cost of application footprint needed to serve users.

As the number of users connecting to a particular application service grows, new instances of application services are brought online in order to scale applications. Scaling in and out in an automated way is one of the primary reasons why ADCs have built-in automation and integrations with orchestration systems. Advanced automation allows ADCs to discover and add or remove application instances to and from the load balancing pool without manual intervention. This not only reduces manual errors and lowers administrative costs, but also removes the requirement for all users of an ADC to be experts.
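A minimal sketch of such a discovery-and-reconcile loop, assuming hypothetical orchestrator and ADC pool APIs (not a specific product's interface):

```python
# Poll the orchestrator for live application instances and reconcile
# the load balancing pool, with no manual intervention. Both endpoints
# are illustrative stand-ins.
import requests

ORCHESTRATOR_URL = "https://orchestrator.example.com/api/app/web/instances"
ADC_POOL_URL = "https://adc.example.com/api/pools/web/members"

def reconcile_pool() -> None:
    discovered = set(requests.get(ORCHESTRATOR_URL, timeout=5).json()["addresses"])
    in_pool = set(requests.get(ADC_POOL_URL, timeout=5).json()["addresses"])
    for addr in discovered - in_pool:   # newly scaled-out instances: add
        requests.post(ADC_POOL_URL, json={"address": addr}, timeout=5)
    for addr in in_pool - discovered:   # retired instances: remove
        requests.delete(f"{ADC_POOL_URL}/{addr}", timeout=5)

reconcile_pool()
```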

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

As we move to the cloud, other use cases are emerging and quickly becoming a necessity. Elastic licensing, for example, is designed to cap the cost of licenses as organizations transition from physical hardware or virtual deployments to the cloud. Another use case is analytics and end-to-end visibility, designed to pinpoint the root cause of an issue quickly without finger-pointing between networking and application teams.

ADCs at the Intersection of Networking and Applications

Since ADCs occupy an important place between applications and networks, it's quite logical to see ADCs take on additional responsibilities as applications serve users. Application delivery and load balancing technologies have been the strategic components providing availability, optimization, security and latency reduction for applications. In order to enable seamless migration of business-critical applications to the cloud, the same load balancing and application delivery infrastructure has evolved to address the needs of continuous delivery/integration and hybrid and multi-cloud deployments.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Application Delivery

Application SLA: Knowing Is Half the Battle

April 4, 2019 — by Radware

Applications have come to define the digital experience. They empower organizations to create new customer-friendly services, unlock data and content and deliver it to users at the time and on the device they desire, and provide a competitive differentiator.

Fueling these applications is the “digital core,” a vast plumbing infrastructure that includes networks, data repositories, Internet of Things (IoT) devices and more. If applications are a cornerstone of the digital experience, then managing and optimizing the digital core is the key to delivering these apps to the digitized user. When applications aren’t delivered efficiently, users can suffer from a degraded quality of experience (QoE), resulting in a tarnished brand, negatively affecting customer loyalty and lost revenue.

Application delivery controllers (ADCs) are ideally situated to ensure QoE, regardless of the operational scenario, by allowing IT to actively monitor and enforce application SLAs. The key is to understand the role ADCs play and the capabilities required to ensure the digital experience across various operational scenarios.

Optimize Normal Operations

Under normal operational conditions, ADCs optimize application performance, control and allocate resources to those applications and provide early warnings of potential issues.

[You may also like: 6 Must-Have Metrics in Your SLA]

For starters, any ADC should deliver web performance optimization (WPO) capabilities to turbocharge the performance of web-based applications. It transforms front-end optimization from a lengthy and complex process into an automated, streamlined function. Caching, compression, SSL offloading and TCP optimization are all key capabilities and will enable faster communication between the client and server while offloading CPU intensive tasks from the application server.

Along those same lines, an ADC can serve as a “bridge” between the web browsers that deliver web-based applications and the backend servers that host the applications. For example, HTTP/2 is the new standard in network protocols. ADCs can serve as a gateway between web browsers that support HTTP/2 and backend servers that still don’t, optimizing performance to meet application SLAs.
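One way to observe this bridging from the client side is to check which protocol version was negotiated; the sketch below uses the httpx library (installed with its http2 extra) against a placeholder URL:

```python
# If the ADC terminates HTTP/2, a client negotiating HTTP/2 sees it
# even when the backend servers behind the ADC still speak HTTP/1.1.
# pip install 'httpx[http2]'
import httpx

with httpx.Client(http2=True) as client:
    resp = client.get("https://app.example.com/")  # placeholder host
    print(resp.http_version)  # "HTTP/2" if the ADC negotiated it via ALPN
```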

Prevent Outages

Outages are few and far between, but when they occur, maintaining business continuity is critical via server load balancing, leveraging cloud elasticity and disaster recovery. ADCs play a critical role across all three and execute and automate these processes during a time of crisis.

[You may also like: Security Pros and Perils of Serverless Architecture]

If an application server fails, server load balancing should automatically redirect the client to another server. Likewise, in the event that an edge router or network connection to the data center fails, an ADC should automatically redirect to another data center, ensuring the web client can always access the application server even when there is a point of failure in the network infrastructure.
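As a toy illustration of the failover logic the ADC automates, consider this health-check sketch with placeholder hostnames; in practice the ADC or GSLB performs this transparently for the client:

```python
# Try the primary data center first; fall back to the secondary when
# the health check fails. Hostnames are illustrative placeholders.
import requests

DATA_CENTERS = [
    "https://dc1.example.com/health",
    "https://dc2.example.com/health",
]

def first_healthy() -> str:
    for url in DATA_CENTERS:
        try:
            if requests.get(url, timeout=2).status_code == 200:
                return url
        except requests.RequestException:
            continue  # this data center is unreachable; try the next
    raise RuntimeError("no healthy data center")

print(first_healthy())
```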

Minimize Degradation

Application SLA issues are most often the result of network degradation. The ecommerce industry is a perfect example. A sudden increase in network traffic during the holiday season can result in SLA degradation.

Leveraging server load balancing, ADCs provide elasticity by provisioning resources on demand. Additional servers are added to the network infrastructure to maintain QoE and, after the spike has passed, returned to an idle state for use elsewhere. In addition, virtualized ADCs provide scalability and isolation between vADC instances at the fault, management and network levels.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Finally, cyberattacks are the silent killers of application performance, as they typically create degradation. ADCs play an integrative role in protecting applications to maintain SLAs at all times. They can prevent attack traffic from entering a network’s LAN and prevent volumetric attack traffic from saturating the Internet pipe.

The ADC should be equipped with security capabilities that allow it to be integrated into the security/DDoS mitigation framework. This includes the ability to inspect traffic and network health parameters so the ADC serves as an alarm system to signal attack information to a DDoS mitigation solution. Other interwoven safety features should include integration with web application firewalls (WAFs), the ability to decrypt/encrypt SSL traffic and device/user fingerprinting.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Cloud Computing, Cloud Security, Security

Security Pros and Perils of Serverless Architecture

March 14, 2019 — by Radware

Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. This cloud computing model can drive cost-efficiencies, increase agility and enable organizations to focus on the essential aspects of software development. While serverless architecture offers some security advantages, trusting that a cloud provider has security fully covered can be risky.

That’s why it’s critical to understand what serverless architectures mean for cyber security.

What Serverless Means for Security

Many assume that serverless is more secure than traditional architectures. This is partly true. As the name implies, serverless architecture does not require server provisioning. Deep under the hood, however, these REST API functions are still running on a server, which in turn runs on an operating system and uses different layers of code to parse the API requests. As a result, the total attack surface becomes significantly larger.

When exploring whether and to what extent to use serverless architecture, consider the security implications.

[You may also like: Protecting Applications in a Serverless Architecture]

Security: The Pros

The good news is that responsibility for the operating system, web server and other software components and programs shifts from the application owner to the cloud provider, who should apply patch management policies across the different software components and implement hardening policies. Most common vulnerabilities should be addressed via enforcement of such security best practices. However, what would be the answer for a zero-day vulnerability in these software components? Consider Shellshock, which allowed an attacker to gain unauthorized access to a computer system.

Meanwhile, denial-of-service attacks designed to take down a server become a fool’s errand. FaaS servers are only provisioned on demand and then discarded, thereby creating a fast-moving target. Does that mean you no longer need to think about DDoS? Not so fast. While DDoS attacks may not cause a server to go down, they can drive up an organization’s tab due to an onslaught of requests. Additionally, functions’ scale is limited and execution is time-limited, so launching a massive DDoS attack may have an unpredictable impact.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Finally, the very nature of FaaS makes it more challenging for attackers to exploit a server and wait until they can access more data or do more damage. There is no persistent local storage that may be accessed by the functions. Counting on storing attack data in the server is more difficult but still possible. With the “ground” beneath them continually shifting—and containers re-generated—there are fewer opportunities to perform deeper attacks.

Security: The Perils

Now, the bad news: serverless computing doesn’t eradicate all traditional security concerns. Code is still being executed and will always be potentially vulnerable. Application-level vulnerabilities can still be exploited whether they are inherent in the FaaS infrastructure or in the developer function code.

Whether delivered as FaaS or just based on a web infrastructure, REST API functions are even more challenging to secure than a standard web application. They introduce security concerns of their own. API vulnerabilities are hard to monitor and do not stand out. Traditional application security assessment tools do not work well with APIs or are simply irrelevant in this case.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

When planning for API security infrastructure, authentication and authorization must be taken into account. Yet these are often not addressed properly in many API security solutions. Beyond that, REST APIs are vulnerable to many attacks and threats against web applications: POSTed JSONs and XMLs injections, insecure direct object references, access violations and abuse of APIs, buffer overflow and XML bombs, scraping and data harvesting, among others.

The Way Forward

Serverless architectures are being adopted at a record pace. As organizations welcome dramatically improved speed, agility and cost-efficiency, they must also think through how they will adapt their security. Consider the following:

  • API gateway: Functions are processing REST API calls from client-side applications accessing your code with unpredicted inputs. An API Gateway can enforce JSON and XML validity checks. However, not all API Gateways support schema and structure validation, especially when it has to do with JSON. Each function deployed must be properly secured. Additionally, API Gateways can serve as the authentication tier which is critically important when it comes to REST APIs.
  • Function permissions: The function is essentially the execution unit. Restrict functions’ permissions to the minimum required and do not use generic permissions (see the sketch after this list).
  • Abstraction through logical tiers: When a function calls another function—each applying its own data manipulation—the attack becomes more challenging.
  • Encryption: Data at rest is still accessible. FaaS becomes irrelevant when an attacker gains access to a database. Data needs to be adequately protected and encryption remains one of the recommended approaches regardless of the architecture it is housed in.
  • Web application firewall: Enterprise-grade WAFs apply dozens of protection measures on both ingress and egress traffic. Traffic is parsed to detect protocol manipulations, which may result in unexpected function behavior. Client-side inputs are validated, and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
  • IoT botnet protection: To avoid the significant cost implications a DDoS attack may have on a serverless architecture and the data harvesting risks involved with scraping activity, consider behavioral analysis tools and IoT botnet solutions.
  • Monitoring function activity and data access: Abnormal function behavior, expected access to data, non-reasonable traffic flow and other abnormal scenarios must be tracked and analyzed.
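To illustrate the function-permissions point above, here is a minimal least-privilege policy expressed as a Python dict in IAM style; the action, account ID and table name are hypothetical:

```python
# An IAM-style policy scoped to exactly one action on one resource.
# ARN, account and table name are hypothetical examples.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem"],  # only what this function uses
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    }],
}
# Contrast with generic permissions such as "Action": "*" on
# "Resource": "*", which turn one compromised function into
# account-wide access.
```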

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Delivery

Keeping Pace in the Race for Flexibility

February 27, 2019 — by Radware

Flexibility and elasticity. Both rank high on the corporate agenda in the age of digital transformation and IT is no exception. From the perspective of IT, virtualization and cloud computing have become the de facto standard for deployment models. They provide the infrastructure elasticity to make business more agile and higher performing and are the reason why the majority of organizations today are operating within a hybrid infrastructure, one that combines on-premise with cloud-based and/or virtualized assets.

But to deliver the elasticity promised by these hybrid infrastructures requires data center solutions that deliver flexibility. As a cornerstone for optimizing applications, application delivery controllers (ADCs) have to keep pace in the race for flexibility. The key is to ensure that your organization’s ADC fulfills key criteria to improve infrastructure planning, flexibility and operational expenses.

One License to Rule Them All

Organizations should enjoy complete agility in every aspect of the ADC service deployment. Not just in terms of capabilities, but in terms of licensing. Partner with an ADC vendor that provides an elastic, global licensing model.

Organizations often struggle with planning ADC deployments when those deployments span hybrid infrastructures and can be strapped with excess expenses by vendors when pre-deployment calculations result in over-provisioning. A global licensing model allows organizations to pay only for capacity used, be able to allocate resources as needed and add virtual ADCs at a moment’s notice to match specific business initiatives, environments and network demands.

[You may also like: Maintaining Your Data Center’s Agility and Making the Most Out of Your Investment in ADC Capacity]

The result? Dramatically simplified ADC deployment planning and a streamlined transition to the cloud.

An ADC When and Where You Need It

This licensing mantra extends to deployment options and customizations as well. Leading vendors provide the ability to deploy ADCs across on-premise and cloud-based infrastructures, allowing customers to transfer ADC capacity from physical to cloud-based data centers. Ensure you can deploy an ADC wherever and whenever it is required, at the click of a button, at no extra cost and with no purchasing complexity.

Add-on services and capabilities that go hand-in-hand with ADCs are no exception either. Web application firewalls (WAF), web performance optimization (WPO), application performance monitoring…companies should enjoy the freedom to consume only required ADC services rather than overspending on bells and whistles that will sit idle collecting dust.

Stay Ahead of the Curve

New standards for communications and cryptographic protocols can leave data center teams running amok attempting to keep IT infrastructure updated. They can also severely inhibit application delivery.

Take SSL/TLS protocols. Both are evolving standards that ensure faster encrypted communications between client and server, improved security and application resource allocation without over-provisioning. They allow IT to optimize the performance of applications and optimize costs during large-scale deployments.

[You may also like: The ADC is the Key Master for All Things SSL/TLS]

Combining the flexibility of an ADC that supports the latest standards with an elastic licensing model is a winning combination, as it provides the most cost-effective alternative for consuming ADC services for any application.

Contain the Madness

The goal of any ADC is to ensure each application is performing at its best while optimizing costs and resource consumption. This is accomplished by ensuring that resource utilization is always tuned to actual business needs.

Leading ADC vendors allow ADC micro-services to be added to individual ADC instances without increasing the bill. Supporting container orchestration engines such as Kubernetes allows organizations to adapt their ADC to the application capacity. This also simplifies the addition of services such as SSL or WAF to individual instances or micro-services.

[You may also like: Simple to Use Link Availability Solutions]

Finding an ADC vendor that addresses all these considerations requires expanding the search beyond mainstream vendors. Driving flexibility via IT elasticity means considering all the key ADC capabilities and licensing nuances critical to managing and optimizing today’s diversified IT infrastructure. Remember these three keys when evaluating ADC vendors:

  • An ADC licensing model should be a catalyst for cutting infrastructure expenditures, not increasing them.
  • An ADC licensing model should provide complete agility in every aspect of your ADC deployment.
  • An ADC license should allow IT to simplify and automate IT operational processes.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Cloud Computing, Cloud Security, Security

Mitigating Cloud Attacks With Configuration Hardening

February 26, 2019 — by Radware

For attackers, misconfigurations in the public cloud can be exploited for a number of reasons. Typical attack scenarios include several kill chain steps, such as reconnaissance, lateral movement, privilege escalation, data acquisition, persistence and data exfiltration. These steps might be fully or partially utilized by an attacker over dozens of days until the ultimate objective is achieved and the attacker reaches the valuable data.

Removing the Mis from Misconfigurations

To prevent attacks, enterprises must harden configurations to address promiscuous permissions by applying continuous hardening checks to limit the attack surface as much as possible. The goals are to avoid public exposure of data from the cloud and reduce overly permissive access to resources by making sure communication between entities within a cloud, as well as access to assets and APIs, are only allowed for valid reasons.

For example, the private data of six million Verizon users was exposed when maintenance work changed a configuration and made an S3 bucket public. Only smart configuration hardening that applies the approach of “least privilege” enables enterprises to meet those goals.
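As one example of such a continuous hardening check, the sketch below uses the boto3 library to flag S3 buckets whose ACL grants access to all users; credentials and region are assumed to come from your environment:

```python
# Flag S3 buckets whose ACL grants access to everyone, in the spirit
# of the public-bucket incident described above.
# pip install boto3; AWS credentials/region from the environment.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant.get("Grantee", {}).get("URI") == ALL_USERS:
            print("PUBLIC:", bucket["Name"], "-", grant["Permission"])
```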

[You may also like: Ensuring Data Privacy in Public Clouds]

The process requires applying behavior analytics methods over time, including regular reviews of permissions and a continuous analysis of usual behavior of each entity, just to ensure users only have access to what they need, nothing more. By reducing the attack surface, enterprises make it harder for hackers to move laterally in the cloud.

The process is complex and is often best managed with the assistance of an outside security partner with deep expertise and a system that combines multiple algorithms measuring activity across the network to detect anomalies and determine whether malicious intent is probable. Often attackers will carry out kill chain attacks over several days or months.

Taking Responsibility

It is tempting for enterprises to assume that cloud providers are completely responsible for network and application security to ensure the privacy of data. In practice, cloud providers offer tools that enterprises can use to secure hosted assets. While cloud providers must be vigilant in how they protect their data centers, responsibility for securing access to apps, services, data repositories and databases falls on the enterprises.


[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Hardened network and meticulous application security can be a competitive advantage for companies to build trust with their customers and business partners. Now is a critical time for enterprises to understand their role in protecting public cloud workloads as they transition more applications and data away from on-premise networks.

The responsibility to protect the public cloud is a relatively new task for most enterprises. But, everything in the cloud is external and accessible if it is not properly protected with the right level of permissions. Going forward, enterprises must quickly incorporate smart configuration hardening into their network security strategies to address this growing threat.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Cloud Computing, Cloud Security

Excessive Permissions are Your #1 Cloud Threat

February 20, 2019 — by Eyal Arazi

Migrating workloads to a public cloud environment opens organizations up to a slate of new, cloud-native attack vectors that did not exist in the world of premise-based data centers. In this new environment, workload security is defined by which users have access to your cloud environment and what permissions they have. As a result, protecting against excessive permissions, and quickly responding when those permissions are abused, becomes the #1 priority for security administrators.

The Old Insider is the New Outsider

Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protection, through mechanisms such as firewalls, IPS/IDS, WAF and DDoS protection, secure gateways, etc.

However, moving workloads to the cloud has led organizations (and IT administrators) to lose direct physical control over their workloads and to relinquish many aspects of security through the Shared Responsibility Model. As a result, the insider of the old, premise-based world is suddenly an outsider in the new world of publicly hosted cloud workloads.

[You may also like: Ensuring Data Privacy in Public Clouds]

IT administrators and hackers now have identical access to publicly-hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat.

Workload security, therefore, is defined by the people who can access those workloads, and the permissions they have.

Your Permissions = Your Attack Surface

One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them, and what permissions they actually use.

All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have too many permissions, which they never use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes.

As a result, cloud workloads are vulnerable to data breaches (i.e., theft of data from cloud accounts), service violation (i.e., completely taking over cloud resources), and resource exploitation (such as cryptomining). Such promiscuous permissions are frequently mis-characterized as ‘misconfigurations’, but are actually the result of permission misuse or abuse by people who shouldn’t have them.

[You may also like: Protecting Applications in a Serverless Architecture]

Therefore, protecting against those promiscuous permissions becomes the #1 priority for protecting publicly-hosted cloud workloads.

Traditional Protections Provide Piecemeal Solutions

The problem, however, is that existing solutions provide incomplete protection against the threat of excessive permissions.

  • The built-in mechanisms of public clouds usually provide fairly basic protection, mostly focused on the overall computing environment; they are blind to activity within individual workloads. Moreover, since many companies run multi-cloud and hybrid-cloud environments, the built-in protections offered by cloud vendors will not protect assets outside of their network.
  • Compliance and governance tools usually use static lists of best practices to analyze permissions usage. However, they will not detect (and alert to) excessive permissions, and are usually blind to activity within workloads themselves.
  • Agent-based solutions require deploying (and managing) agents on cloud-based servers, and will protect only servers on which they are installed. However, they are blind to overall cloud user activity and account context, and usually cannot protect non-server resources such as services, containers, serverless functions, etc.
  • Cloud Access Security Brokers (CASB) tools focus on protecting software-as-a-service (SaaS) applications, but do not protect infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) environments.

[You may also like: The Hybrid Cloud Habit You Need to Break]

A New Approach for Protection

Modern protection of publicly-hosted cloud environments requires a new approach.

  • Assume your credentials are compromised: Hackers acquire stolen credentials in a plethora of ways, and even the largest companies are not immune to credential theft, phishing, accidental exposure, or other threats. Therefore, defenses cannot rely solely on protection of passwords and credentials.
  • Detect excessive permissions: Since excessive permissions are so frequently exploited for malicious purposes, identifying and alerting on such permissions becomes paramount. This cannot be done just by measuring against static lists of best practices; it must be based on analyzing the gap between the permissions a user has been granted and the permissions they actually use (see the sketch after this list).
  • Harden security posture: The best way of stopping a data breach is preventing it before it ever occurs. Therefore, hardening your cloud security posture and eliminating excessive permissions and misconfigurations guarantees that even if a user’s credentials become compromised, then attackers will not be able to do much with those permissions.
  • Look for anomalous activities: A data breach is not one thing going wrong, but a whole list of things going wrong. Most data breaches follow a typical progression, which can be detected and stopped in time – if you know what you’re looking for. Monitoring for suspicious activity in your cloud account (for example, anomalous usage of permissions) will help identify malicious activity in time and stop it before user data is exposed.
  • Automate response: Time is money, and even more so when it comes to preventing exposure of sensitive user data. Automated response mechanisms allow you to respond faster to security incidents, and block-off attacks within seconds of detection.
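A minimal sketch of the granted-versus-used gap analysis referenced above, with illustrative data standing in for IAM policies and audit logs:

```python
# Compare the permissions a user was granted with those actually used
# (e.g., derived from audit logs such as CloudTrail). Data is illustrative.
granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject",
           "ec2:RunInstances", "iam:CreateUser"}
used_last_90_days = {"s3:GetObject", "s3:PutObject"}

excessive = granted - used_last_90_days
print("candidates for removal:", sorted(excessive))
# Every permission in `excessive` widens the attack surface if this
# user's credentials are ever compromised, so alert on it and harden.
```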

[You may also like: Automating Cyber-Defense]

Radware’s Cloud Workload Protection Service

Radware is extending its line of cloud-based security services to provide an agentless, cloud-native solution for comprehensive protection of workloads hosted on AWS. Radware’s solution protects both the overall security posture of your AWS cloud account, as well as individual cloud workloads, protecting against cloud-native attack vectors.

Radware’s solution addresses the core problem of cloud-native excessive permissions by analyzing the gap between granted and used permissions, and providing smart hardening recommendations. Radware uses advanced machine-learning algorithms to identify malicious activities within your cloud account, as well as automated response mechanisms to block such attacks. This helps customers prevent data theft, protect sensitive customer data, and meet compliance requirements.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now