4 Approaches to Securing Containerized Applications


As more enterprises adopt containers running microservices, many are still trying to figure out the right way to secure these ecosystems. Radware’s Web Application Security Report suggests that no single practice has emerged yet; however, several technologies claim to do the job. Here, we will compare these approaches.

Security Implications of Microservices

Designing a microservices environment requires paying attention to additional information security aspects. A typical application infrastructure must be scalable and normally includes an orchestrator for deployment and resource allocation (such as Kubernetes, OpenShift, or Mesos), as well as a reverse-proxy ingress controller (the most popular are NGINX, HAProxy, and Envoy).

None of these tools have built-in application security. Moreover, these environments are normally designed by DevOps, whose objective isn’t security but rather automation and synchronization for agility. Hence, if security hasn’t been taken into account during design, it must be addressed by the security staff retrospectively.

[You may also like: Can DevSecOps Cover Holes Created by Digital Transformation?]

  1. External threats – The first one isn’t new: external users from the internet transact with the application. This typical client-to-server traffic is also known as North-South traffic.
  2. Lateral threats – In microservices, the focus shifts to the transfer of data packets from server to server, or microservice to microservice, within a data center or VPC. This internal communication is also known as East-West traffic. Securing East-West traffic is crucial to reducing the attack surface available for malicious activity.
  3. API security – APIs are the main vehicle for East-West data exchange between microservices, using different protocols: REST, gRPC, GraphQL, or others. The threats to APIs vary and include unauthorized access, protocol manipulation, denial of service, and a wide range of bot attacks.
  4. Open source – There are many great tools, modules, and functions available off the shelf; however, there’s no guarantee that they are tested or patched for security.
  5. End-to-end encryption – Enterprises today are less tolerant of any form of cleartext communication and require SSL/TLS termination at the host level. This way, they avoid maintaining multiple certificates dispersed across multiple locations.
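To make the East-West point concrete, here is a toy sketch of positive-security authorization between microservices: each service-to-service call is checked against an explicit allowlist, and anything not listed is denied. All service and endpoint names here are invented for illustration; a real deployment would source this policy from the orchestrator or a service mesh, not a hard-coded table.

```python
# Hypothetical allowlist of permitted East-West calls:
# (caller service, callee service, operation).
ALLOWED_CALLS = {
    ("checkout", "payments", "POST /charges"),
    ("checkout", "inventory", "GET /stock"),
}

def is_call_allowed(caller: str, callee: str, operation: str) -> bool:
    """Return True only if this service-to-service call is explicitly
    permitted; everything else is denied (positive security model)."""
    return (caller, callee, operation) in ALLOWED_CALLS
```

The key design point is the default-deny posture: internal traffic is not trusted simply because it never left the data center.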

[You may also like: Is Security for Containers and Microservices the Same?]

East-West and API traffic are perceived as secure and therefore trusted, despite being the greatest blind spot in microservices security.

Strategies

Enterprises are adopting different strategies to overcome these challenges. New technologies emerge to create a more secure CI/CD pipeline. Let’s take a closer look at them:

External WAF – Either as a virtual machine at the perimeter or running on the ingress controller (for example, a WAF running on NGINX), an external WAF can block known attacks based on IP reputation or signatures. However, such a deployment, by definition, cannot provide granular learning and accurate security, leaving the application vulnerable to zero-day attacks. An attempt to apply positive security and auto-learning will result in a high rate of false positives, since the learning covers all traffic to all microservices and isn’t as fine-grained as needed. The same applies to cloud security services. Besides, diverting traffic outside the ecosystem and back in adds unwanted latency and runs against the rationale of end-to-end integration of the CI/CD pipeline.
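The limitation described above can be illustrated with a toy signature check. Real WAF products ship thousands of curated signatures; the two patterns below are invented for illustration, and the point is structural: a payload matching no known signature sails through, which is exactly the zero-day gap.

```python
import re

# Toy signature list in the spirit of a perimeter WAF rule set
# (illustrative only, not a real product's rules).
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # classic SQL injection pattern
    re.compile(r"(?i)<script\b"),       # reflected XSS pattern
]

def matches_known_attack(payload: str) -> bool:
    """Signature matching: catches known patterns, misses novel ones."""
    return any(sig.search(payload) for sig in SIGNATURES)
```

Compare this negative-security model with the per-service positive-security baseline discussed later: signatures enumerate the bad, while a baseline enumerates the good.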

Container security solutions – These are emerging solutions – some very successful – that provide different levels of security to the containers themselves. They are designed to protect the container as a host or endpoint, examining container images and the known vulnerabilities in them rather than data transactions and HTTP traffic. Thus, they provide minimal application security and will not protect against access violations, injections, brute force, XSS, and other exploits. In addition, most are still alert-only and do not enforce security on data transactions. That said, they can co-exist with application security for a rather robust posture.

Runtime Application Self-Protection (RASP) – As the name suggests, the app protects itself during runtime. This is a lightweight, economical concept that can auto-scale with the microservices. But does it really work?

First, it introduces no delays due to security controls in the data path; instead, there is a performance penalty from the additional library/plug-in carrying all the WAF-like rules. Second, RASP accommodates the end-to-end encryption requirement, with SSL/TLS terminated at the app. It also provides active protection in real time – however, some attacks simply cannot be blocked at runtime and must be mitigated before they reach the app itself; denial of service, for instance. The ability to learn traffic baselines is also limited, since the required overhead is not bearable inside the app process.

And last, RASP does not auto-fix code vulnerabilities, which may make its protection Swiss cheese for some apps. Because it rewrites code and may stop a function while running, it poses a high risk to app execution and, eventually, the SLA. Related or not, the RASP approach has not enjoyed much commercial success. To summarize: RASP will not deliver by itself.
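To show what "the app protects itself" means in practice, here is a toy RASP-style sketch: a decorator inspects a function's string arguments inside the application process and refuses to execute on suspicious input. The token list and function names are invented for illustration; real RASP products instrument the runtime far more deeply, and the per-call inspection is exactly the in-process overhead mentioned above.

```python
import functools

# Illustrative "dangerous input" tokens, not a real rule set.
DANGEROUS_TOKENS = ("../", ";--", "<script")

def rasp_guard(func):
    """Toy runtime self-protection wrapper: checks every string
    argument before letting the wrapped function run."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and any(
                token in value.lower() for token in DANGEROUS_TOKENS
            ):
                raise ValueError("blocked by runtime self-protection")
        return func(*args, **kwargs)
    return wrapper

@rasp_guard
def read_profile(user_id: str) -> str:
    # Hypothetical application function protected at runtime.
    return f"profile:{user_id}"
```

Note that nothing here helps against a volumetric denial-of-service attack: by the time the wrapper runs, the request has already consumed the app's resources.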

[You may also like: Application Security in the Microservices Era]

MicroWAF – This is a dedicated application security enforcement tool that integrates into the system, ideally managed by an orchestration tool such as Kubernetes, and sits as a sidecar in front of each container. This approach has two major advantages:

1. It is Kubernetes-controlled and can be deployed, provisioned, and scaled automatically.
2. Since an instance sits in front of each container, auto-learning of traffic to that microservice can serve as a baseline for positive security.
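The second advantage can be sketched as follows. Because each sidecar instance sees only one service's traffic, its learned baseline stays narrow and accurate. The class below is a deliberately minimal illustration (endpoint and parameter names are invented): it records which parameters each endpoint receives during a learning phase, then rejects requests carrying parameters it has never seen.

```python
from collections import defaultdict

class SidecarBaseline:
    """Toy per-microservice positive-security learner. One instance
    per container means the baseline covers only that service."""

    def __init__(self):
        self.learning = True
        self.seen = defaultdict(set)  # endpoint -> observed parameter names

    def observe(self, endpoint: str, params: dict) -> None:
        """During the learning phase, record the parameters seen."""
        if self.learning:
            self.seen[endpoint].update(params)

    def enforce(self) -> None:
        """Freeze the baseline and switch to enforcement."""
        self.learning = False

    def is_allowed(self, endpoint: str, params: dict) -> bool:
        """Reject any parameter never observed for this endpoint."""
        return set(params) <= self.seen[endpoint]
```

Contrast this with the external WAF, where a single learner ingests traffic for every microservice at once and the per-endpoint signal is drowned out, driving up false positives.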

In this approach, the management, analytics, and rule-engine reside separately from the enforcer, which continuously shares the necessary information to optimize security.
Another key advantage of this approach is that it is DevOps-friendly: no slowdowns or interruptions, complete visibility, and the more it integrates with provisioning, analytics, and automation tools in the Kubernetes-controlled ecosystem, the better. Here is an illustration:

We hope this is good food for thought when deciding how to tackle application and data security in microservices. This checklist includes 10 tips for evaluating a microservices security solution.

Read “Radware’s 2019 Web Application Security Report” to learn more.


Ben Zilberman is a product-marketing manager in Radware’s security team. In this role, Ben specializes in application security and threat intelligence, working closely with Radware’s Emergency Response and research teams to raise awareness of high-profile and impending attacks. Ben has diverse experience in network security, including firewalls, threat prevention, web security, and DDoS technologies. Prior to joining Radware, Ben served as a trusted advisor at Check Point Software Technologies, where he led partnerships, collaborations, and campaigns with system integrators, service providers, and cloud providers. Ben holds a BA in Economics and an MBA from Tel Aviv University.
