How to Prepare for the Biggest Change in IT Security in 10 Years: The Availability Threat


Availability, the big "A," is often the overlooked corner of the CIA triad. Perhaps a contributing factor is the common belief among security professionals that if data is not available, it is secure. Corporate executives see it differently: downtime carries a hefty price tag. While today's corporate risk assessments certainly consider availability, they focus on redundancy, not security. Penetration tests, which flow from those risk assessments, also fail to test availability. In fact, pen testing and vulnerability scanning contracts specifically exclude any tests that might degrade service, often leaving these vulnerabilities unknown until it's too late.

Availability is commonly handed off to network engineering, which designs and builds resilient networks. Common risk mitigations in this arena include redundant power, internet links, routers, firewalls, web farms, and storage, and even geographic diversity through hot, warm, and cold data centers. You get the picture: a great deal of money is invested in network infrastructure to meet corporate availability requirements.

While these investments in infrastructure are meaningful, they are not impervious to attack. In fact, attacks are often complicated, and even exacerbated, by the network's inherently resilient design. For example, let's consider a few common myths:

Myth 1: DDoS attacks consume lots of bandwidth and are noisy. We will add additional bandwidth if we come under DDoS attack or we already have enough bandwidth to absorb any attack.

A recent study by Radware's ERT, echoed by other vendors, reports a significant shift in the threat landscape: from noisy volumetric floods (TCP SYN and UDP floods) to application-layer attacks (HTTP GET and DNS query floods) and low-and-slow attacks. A volumetric attack can consume any amount of bandwidth you can afford. Admittedly, additional bandwidth may delay the outcome, and if it is significant (100Gbps+), it might even deter an attacker. However, in a targeted attack the attacker is after your business, not your competitor or the guy down the street. They will shift gears to find a weakness in your defense, more than likely at the application layer.
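To see why extra bandwidth is no defense against low-and-slow attacks, consider some back-of-the-envelope arithmetic (the connection counts and byte sizes below are illustrative assumptions, not figures from the study). A Slowloris-style client trickles a few bytes per connection to keep it open, yet each held connection ties up a server worker slot:

```python
# Illustrative arithmetic (assumed figures): a low-and-slow attack exhausts
# server connection slots while consuming almost no bandwidth, so it never
# trips a bandwidth-based alarm.

def attack_bandwidth_bps(connections, bytes_per_tick, tick_seconds):
    """Aggregate bandwidth of a low-and-slow attack, in bits per second."""
    return connections * bytes_per_tick * 8 / tick_seconds

# 5,000 held connections, each sending ~20 bytes of partial headers every
# 10 seconds -- enough to starve a typical web server's worker pool.
bps = attack_bandwidth_bps(5_000, 20, 10)
print(f"{bps / 1000:.0f} kbps")  # prints "80 kbps"
```

Eighty kilobits per second is invisible on a multi-gigabit circuit, which is exactly why bandwidth headroom alone cannot detect or absorb this class of attack.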


Myth 2: If we come under attack, we can route traffic through a secondary data center or split the load across data centers.

Let's take a DNS flood targeting a DNS server as an example. Some folks I've chatted with would consider failing over to a redundant data center if the primary is flooded, or even operating the secondary from a different IP address range. However, let's not underestimate our enemies: unfortunately, the attack would follow in either case. Why, or how, would the attack migrate? For one, if you reroute your traffic, the attack simply follows the target. Changing or using different IP address ranges might buy you some time, but security through obscurity is proven not to prevail. In addition, many attacks today target domain names, so the attack would simply follow the DNS change.
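The domain-targeting point can be sketched in a few lines (the zone data and addresses below are hypothetical, using RFC 5737 documentation ranges). Because the attack tooling resolves the victim's hostname on each cycle rather than hard-coding an IP, a DNS failover redirects the flood along with the legitimate traffic:

```python
# Minimal sketch (hypothetical names and RFC 5737 example addresses):
# an attacker that keys on a domain name follows any DNS failover.

dns_zone = {"shop.example.com": "203.0.113.10"}  # primary data center

def fail_over_to_secondary():
    """Repoint the record at the secondary data center."""
    dns_zone["shop.example.com"] = "198.51.100.20"

def attacker_next_target():
    # Attack tools re-resolve the hostname, not a cached IP address.
    return dns_zone["shop.example.com"]

print(attacker_next_target())  # 203.0.113.10 -- flood hits the primary
fail_over_to_secondary()
print(attacker_next_target())  # 198.51.100.20 -- flood follows the change
```

Only an attacker lazy enough to hard-code the original IP would be shaken off, and only until the next resolution.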

Myth 3: We are safe from this threat because we use a CDN or cloud DDoS scrubbing solutions.

While CDNs and cloud scrubbing centers serve their purpose and can be used as mitigation techniques, they do not constitute a complete attack mitigation system. By its nature, a CDN can be bypassed by several tools (HULK, LOIC, HOIC, and more) that were purpose-built to defeat such protections by using dynamically changing URLs. In fact, the CDN will not only leave one wanting for anti-DoS protection; it is known to amplify attacks, as described in recent blogs by my colleague David Hobbs, CDN as a Weapon and You Can't Hide Behind the Clouds. In addition, neither a CDN nor a cloud scrubber can protect against encrypted attacks, unless of course you are willing to share your encryption keys with outsiders. Hmm, that doesn't sound like a good idea… does it?

Another disadvantage of the cloud scrubbing center is that it is usually an on-demand service, which requires rerouting traffic once you are under attack. Cloud scrubbing typically involves several steps: detection, alerting, route change, attack mitigation, verification of legitimate traffic, and finally detecting that the attack is over so traffic can return to its normal path. Detection alone is no simple feat, even for a network flood where an administrator may be alerted via email about bandwidth usage on a circuit. What if they don't get the email because the email server is being DoS'd? What if the attack happens over the weekend and no one notices, except the customers trying to purchase equipment from your ecommerce site?
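The on-demand scrubbing cycle described above can be sketched as a sequence of steps, each with nonzero duration (the durations below are illustrative assumptions, not vendor figures). The point is that the origin absorbs the attack unmitigated until the route change completes:

```python
# Hedged sketch of the on-demand cloud-scrubbing cycle: the site is
# exposed for the sum of all steps that precede actual mitigation.

from enum import Enum, auto

class Step(Enum):
    DETECT = auto()        # notice abnormal traffic
    ALERT = auto()         # reach a human (email, page)
    ROUTE_CHANGE = auto()  # BGP/DNS swing toward the scrubbing center
    MITIGATE = auto()      # scrubbing takes effect
    VERIFY = auto()        # confirm legitimate traffic passes
    RESTORE = auto()       # return traffic to the normal path

# Assumed, illustrative durations in minutes for each step.
DURATION_MIN = {
    Step.DETECT: 5, Step.ALERT: 10, Step.ROUTE_CHANGE: 15,
    Step.MITIGATE: 1, Step.VERIFY: 5, Step.RESTORE: 10,
}

# Unmitigated exposure: everything before mitigation takes effect.
exposure = sum(DURATION_MIN[s]
               for s in (Step.DETECT, Step.ALERT, Step.ROUTE_CHANGE))
print(f"~{exposure} minutes of unmitigated attack")  # prints "~30 minutes ..."
```

If the ALERT step stalls, because the email never arrives or no one is on call over the weekend, the exposure window grows without bound.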

Targeted attacks are driven by humans using sophisticated tools, C&C infrastructure, and botnets, so they require attack mitigation tools that can adapt to the threat and mitigate based on abnormal behavior. In conjunction with tactical tools, a sound perimeter counter-defense strategy also includes a human factor. As with all layers of security, availability security is a game of chess with the enemy, requiring both strategy and tactics. Use available tools to learn the normal behavior of your environment, then detect, alert on, and mitigate anomalies, known attacks, and zero-day intrusions in real time. Properly tuned tools will keep the enemy at bay for a limited time; however, it is inevitable that attackers will adjust their campaign and find weaknesses in your defense. Radware's Emergency Response Team (ERT) provides the human factor: knowing the enemy, reverse engineering attack tools, and anticipating the opponent's next move.

Read “Cyber-Security Perceptions and Realities: A View from the C-Suite” to learn more.

Download Now

