Last month, I wrote about some inherent challenges with the increasingly popular cloud-only model for cyber-attack mitigation. As a reminder, I am not a cloud hate-monger… the cloud is often an acceptable (and sometimes the only) place to mitigate certain attacks. But I do get concerned by the apparent over-adjustment toward cloud-based solutions, and I pick up here on the second of four specific shortcomings of cloud-only attack mitigation.
Take the Long Way Home
If you’re like me, the paths you take from Point A to Point B can be habit forming. Maybe I’m risk averse or lack a sense of adventure, but when I find a route that works, I tend to stick with it – sometimes to a fault. When I first moved to my current neighborhood, the main road into and out of town was undergoing construction, forcing most of us onto a small two-lane road. And the realities of road construction being what they are, this work took seemingly twice as long as necessary… a good four months, I would say. By the time the work was done, I had become so accustomed to the detour route that I mindlessly kept using it. That is, until one afternoon when an accident on the road caused a massive delay that I didn’t see coming until I was too far down the path to turn around. Seeing the error of my ways, I started using the more logical main roads to come and go.
I’m often reminded of this dreadful drive home when I consider the decision some organizations make in routing all of their Internet traffic through cloud-based security providers even during peacetime. This model, while perhaps providing some sense of relief for its “always on” nature, introduces several significant challenges for efficient, reliable service delivery.
The Creation of Unnecessary Latency
The latency this routing model creates can be hard to predict, and it can introduce non-trivial delay into application performance even during peacetime.
Many applications require the minimal latency that comes from full management of the security and application infrastructure. What constitutes an acceptable level of latency varies from company to company, but any networking team worth its salt should always be on the lookout for sources of unneeded latency. When projected across high-throughput services and applications, this latency also translates into transactional processing capacity wasted by the overall slowdown of traffic.
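To make that cost concrete, here is a back-of-the-envelope sketch of the extra round-trip time a peacetime detour through a scrubbing center can add. The distances are hypothetical examples of my own, and only propagation delay in fiber is counted (queuing, serialization, and the scrubbing center’s own processing would add more):

```python
# Illustrative estimate of the extra round-trip latency introduced when
# peacetime traffic hairpins through a cloud scrubbing center.
# The distances below are hypothetical, not measurements.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # light in fiber travels ~200 km per ms

def rtt_ms(path_km: float) -> float:
    """Round-trip propagation delay for a one-way path length.

    Ignores queuing, serialization, and processing delays, so this is
    a lower bound on the real round-trip time.
    """
    return 2 * path_km / SPEED_IN_FIBER_KM_PER_MS

direct_km = 400                 # hypothetical: client to application, direct
via_scrubbing_km = 400 + 1200   # hypothetical: detour adds 1200 km of fiber

added_ms = rtt_ms(via_scrubbing_km) - rtt_ms(direct_km)
print(f"direct RTT    = {rtt_ms(direct_km):.1f} ms")
print(f"detour RTT    = {rtt_ms(via_scrubbing_km):.1f} ms")
print(f"added latency = {added_ms:.1f} ms per round trip")
```

Multiply that per-round-trip penalty by the number of round trips in a transaction (TCP handshake, TLS handshake, request/response) and the slowdown compounds quickly.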
How Redirecting Traffic Affects Redundancy
One of the most significant architectural characteristics that allowed the Internet to thrive, especially in the early days before notions of high availability and ultra-redundant systems, is its self-healing routing. In an always-on model leveraging cloud-based security resources, all of your traffic must transit a finite set of scrubbing centers, typically forcing it down just one or two paths and forfeiting the redundancy of native Internet routing.
Today’s attacks increasingly include encrypted traffic flows that complicate detection and consume additional computing power to mitigate. About 25% of the attacks Radware mitigates include SSL floods and other forms of encrypted attack. These attacks present a real problem for cloud-only services: unless the customer is willing to hand its private keys to a third party, the provider cannot inspect this traffic to detect an attack. Most simply pass it along to the intended target.
The work-arounds that have emerged from some cloud security providers are troubling. Particularly dubious to me are the “flexible” SSL models that (I’m sorry, but this is how I see it) dupe users into believing they are in a secure communication with a particular third party. In fact, they are in a secure communication with an unknown organization, which then passes the communication along to its intended recipient in an unsecured fashion.
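The two-hop pattern is easy to model. The sketch below is purely illustrative (the names and the `Leg` structure are my own invention, not any provider’s API); it simply captures the claim that only the first leg of a “flexible” SSL connection is encrypted:

```python
# Toy model of the "flexible" SSL pattern described above.
# All names here are illustrative, not any provider's actual API.

from dataclasses import dataclass

@dataclass
class Leg:
    """One hop of the request path and whether it travels encrypted."""
    src: str
    dst: str
    encrypted: bool

def flexible_ssl_path(client: str, edge: str, origin: str) -> list:
    """Return the two legs of a 'flexible' SSL connection."""
    return [
        Leg(client, edge, encrypted=True),    # what the padlock icon reflects
        Leg(edge, origin, encrypted=False),   # what the user never sees
    ]

path = flexible_ssl_path("browser", "cloud-edge", "origin-server")
# The browser shows "secure", yet the second hop travels in the clear.
insecure_hops = [leg for leg in path if not leg.encrypted]
print(f"unencrypted hops: {[(l.src, l.dst) for l in insecure_hops]}")
```

The padlock in the browser only ever attests to the first leg; nothing in the model (or in the real pattern it mimics) protects the edge-to-origin hop.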
Beware the Company You Keep
Another challenge surrounding the unnecessary routing of traffic during peacetime is the often overlooked issue of what can best be called “collateral damage” risk. By forcing traffic through a set of scrubbing centers, you have introduced a perpetual new point of failure, one that is often difficult to assess in terms of operational performance and maintenance. Additionally, the risk becomes something of a “company you keep” issue: the reliability of a critical hop for your traffic is now subject to the impact of attacks aimed at an unknown set of other customers supported by the same cloud security provider. For providers delivering an always-on option only in the cloud, you can quickly see how a collection of attacks targeting a growing customer base creates a threat against all customers.
Before jumping to the conclusion that sending all of your traffic through a cloud-based security provider is your best option, take the time to consider these downsides of peacetime traffic redirection. Organizations that leverage cloud resources in conjunction with on-premises components incur none of this risk during peacetime, and they are generally better able to detect attacks that can be mitigated without having to swing traffic at all. I’m all for the benefits of the always-on model, but there are better ways to achieve it than an always-on swing of traffic.