IoT is being leveraged to monitor and protect species that are integral to our global ecosystems, from rhinos to dairy cows to honeybees.
I’ve long maintained that security can and should be leveraged as a competitive advantage, regardless of industry. But I’d like to expound upon this mantra: it holds particularly true for the financial services industry.
With consumers increasingly relying on web and mobile apps to conduct financial transactions, customer data has emerged as the new “oil” that powers financial institutions; it can be mined to upsell additional services over the long haul.
But if banks want to increase customer lifetime value, they first must protect this treasure trove of data. Why? Because privacy and security are top of mind for consumers.
This isn’t just an educated guess on my part; Radware recently conducted a survey of nearly 1,200 U.S. consumers to better understand how they view financial security, how they’d react if their data was compromised, and what it all means for financial institutions today.
Spoiler alert: Financial institutions stand to lose business if they don’t prioritize security.
The fact is, data breaches and cyberattacks increase customer churn in an age when virtual transparency, relentless competition and frictionless account transitions have made it easy for consumers to switch financial institutions. And make no mistake — they will abandon their banks if their privacy and security are better prioritized elsewhere.
I encourage you to read the full survey results report. And deeply consider how security isn’t an expenditure, but an investment in your customer lifetime value.
Read “Consumer Sentiments: Cybersecurity’s Role in the Future of Financial Institutions” today.
Today’s infrastructure threats will have major impacts on tomorrow’s 5G commercial networks. 5G network slicing, virtualization and disaggregation introduce new levels of complexity to network security, requiring a high level of automation in security on-boarding, scale-out and attack mitigation.
5G security must be thought through on Day 1 of a network build and woven into the network architecture. Otherwise, re-architecting the network afterward becomes an immense, cost-prohibitive exercise.
Service providers face the necessary burden of managing security threats in the 5G network.
Your ‘Typical’ Security Solution
A typical network security solution will include several security elements, such as firewalls, DDoS protection devices, IPS/IDS, etc. Each system may require its own domain expertise when it comes to proper configuration and tuning. When a carrier-grade network slice is under attack, dedicated expertise is required for handling changes and setting the proper mitigation actions. With the new paradigm of 5G network slicing coming onto the scene in a highly distributed network, carrier security teams will be challenged.
Service providers are already in a precarious position when it comes to maintaining healthy profit margins amid the onslaught of over-the-top data and video traversing their networks. New revenue streams are tough to come by, so the other lever available to influence margins is cost control. However, the cost economics do not scale well when contemplating an increase in security staff to prepare for 5G. The new attack vectors are simply too complex and too high in volume to address with a bloated Security Operations Center (SOC) relying on human oversight and management alone.
What makes more sense is adopting a comprehensive security solution across all network slices, to benefit from unified management and shared SOC skill sets.
Vendor technology designed around self-learning threat detection, rather than heavy dependence on pre-configured rules, is the ideal toolkit for service providers. Minimal setup and configuration lower the overall effort the carrier security team spends operating the system. Instead of manual provisioning and troubleshooting, the SOC specialist can look at a dashboard to see what the system detected and what mitigation actions it took to defend against malicious threats. This yields strong visibility into network security threats across all network functions and slices.
In the new 5G security play, the various security functions are on-boarded per slice, in alignment with the required network capabilities and desired distribution. The total investment in security computing resources and licenses is aligned with the network slice investment, giving carriers better control over the risks and costs associated with each specific network slice.
Automated attack mitigation capabilities give the security team peace of mind that all ‘war time’ actions are handled automatically, with no manual intervention by security administrators.
So although 5G carries with it very challenging security issues, service providers can be proactive in creating a security posture that gives them the best chance to keep costs in check while keeping the network safe.
Read “Creating a Secure Climate for your Customers” today.
The escalating intensity of global bot traffic and the increasing severity of its overall impact mean that dedicated bot management solutions are crucial to ensuring business continuity and success. This is particularly true since more sophisticated bad bots can now mimic human behavior and easily deceive conventional cybersecurity solutions/bot management systems.
Addressing highly sophisticated and automated bot-based cyberthreats requires deep analysis of bots’ tactics and intentions. According to Forrester Research’s The Forrester New Wave™: Bot Management, Q3 2018 report, “Attack detection, attack response and threat research are the biggest differentiators. Bot management tools differ greatly in their detection methods; many have very limited — if any — automated response capabilities. Bot management tools must determine the intent of automated traffic in real time to distinguish between good bots and bad bots.”
When selecting a bot mitigation solution, companies must evaluate the following criteria to determine which best fits their unique needs.
Basic Bot Management Features
Organizations should evaluate the range of possible response actions — such as blocking, limiting, the ability to outwit competitors by serving fake data and the ability to take custom actions based on bot signatures and types.
Any solution should have the flexibility to take different mitigation approaches on various sections and subdomains of a website, as well as the ability to deploy on only a certain subset of pages of that website. For example, a “monitor mode” with no impact on web traffic can give users insight into the solution’s capabilities during a trial, before activating real-time blocking mode.
Additionally, any enterprise-grade solution should be able to be integrated with popular analytics dashboards such as Adobe or Google Analytics to provide reports on nonhuman traffic.
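To make those response options concrete, here’s a minimal sketch of what a per-section response policy might look like. Everything here (the path prefixes, action names and the `decide` helper) is hypothetical and illustrative, not any vendor’s actual API:

```python
# Hypothetical per-section bot response policy, assuming an upstream
# classifier labels each request "good_bot", "bad_bot" or "unknown".
RESPONSE_POLICY = {
    "/checkout": {"bad_bot": "block", "unknown": "challenge"},
    "/pricing":  {"bad_bot": "serve_fake_data"},
    "/":         {"bad_bot": "rate_limit"},
}

MONITOR_MODE = True  # trial mode: log the verdict, never touch live traffic

def decide(path, classification):
    """Pick the action from the most specific policy section matching the path."""
    for prefix in sorted(RESPONSE_POLICY, key=len, reverse=True):
        if path.startswith(prefix):
            action = RESPONSE_POLICY[prefix].get(classification, "allow")
            break
    else:
        action = "allow"
    if MONITOR_MODE and action != "allow":
        return "monitor(would_%s)" % action  # report what blocking mode would do
    return action
```

Flipping `MONITOR_MODE` off is the moment you move from the trial’s insight phase to real-time active blocking.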
Capability to Detect Large-Scale Distributed Humanlike Bots
When selecting a bot mitigation solution, businesses should try to understand the underlying technique used to identify and manage sophisticated attacks such as large-scale distributed botnet attacks and “low and slow” attacks, which attempt to evade security countermeasures.
Traditional defenses fall short of necessary detection features to counter such attacks. Dynamic IP attacks render IP-based mitigation useless. A rate-limiting system without any behavioral learning means dropping real customers when attacks happen. Some WAFs and rate-limiting systems that are often bundled or sold along with content delivery networks (CDNs) are incapable of detecting sophisticated bots that mimic human behavior.
The rise of highly sophisticated humanlike bots in recent years requires more advanced techniques in detection and response. Selection and evaluation criteria should focus on the various methodologies that any vendor’s solution uses to detect bots, e.g., device and browser fingerprinting, intent and behavioral analyses, collective bot intelligence and threat research, as well as other foundational techniques.
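As a rough illustration of two of those foundational techniques, here is a toy sketch pairing a device/browser fingerprint (hashing stable attributes) with a naive behavioral rate check. Real detection engines combine far more signals; the attribute names and the 5-requests-per-second threshold below are assumptions for illustration only:

```python
import hashlib

def fingerprint(attrs):
    """Hash stable browser/device attributes into a short device ID."""
    canonical = "|".join("%s=%s" % (k, attrs[k]) for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

class BehaviorTracker:
    """Flag fingerprints whose request rate exceeds a human-plausible ceiling."""
    def __init__(self, max_rps=5):
        self.max_rps = max_rps
        self.history = {}  # device_id -> recent request timestamps

    def observe(self, device_id, ts):
        """Record a request at time `ts`; True means the last second looks bot-like."""
        window = [t for t in self.history.get(device_id, []) if ts - t < 1.0]
        window.append(ts)
        self.history[device_id] = window
        return len(window) > self.max_rps
```

Note that the fingerprint is stable regardless of attribute ordering, which is what lets behavior accumulate against one identity even as the bot rotates IPs.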
A Bot Detection Engine That Continuously Adapts to Beat Scammers and Outsmart Competitors
- How advanced is the solution’s bot detection technology?
- Does it use unique device and browser fingerprinting?
- Does it leverage intent analysis in addition to user behavioral analysis?
- How deep and effective are the fingerprinting and user behavioral modeling?
- Do they leverage collective threat intelligence?
Any bot management system should accomplish all of this in addition to collecting hundreds of parameters from users’ browsers and devices to uniquely identify them and analyze their behavior. It should also match the deception capabilities of sophisticated bots. Ask for examples of sophisticated attacks that the solution was able to detect and block.
Impact on User Experience — Latency, Accuracy and Scalability
Website and application latency creates a poor user experience. A bot mitigation solution shouldn’t add to that latency; ideally, it should help identify and resolve the issues causing it.
Accuracy of bot detection is critical. Any solution must not only distinguish good bots from malicious ones but also enhance the user experience by allowing authorized bots from search engines and partners. Maintaining a consistent user experience on sites such as B2C e-commerce portals can be difficult during peak hours, so the solution should also scale to handle spikes in traffic.
Keeping false positives to a minimum, so that the user experience is not impacted, is equally important. Real users should never have to solve a CAPTCHA or prove that they’re not a bot. An enterprise-grade bot detection engine should have deep-learning and self-optimizing capabilities to identify and block constantly evolving bots that alter their characteristics to evade detection by basic security systems.
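On the point of allowing authorized bots: one documented approach (Google publishes it for verifying Googlebot) is a reverse-DNS lookup on the claimed crawler’s IP, followed by a forward lookup to confirm the hostname resolves back to the same IP. A sketch, with the resolver functions injectable so the logic can be exercised without live DNS:

```python
import socket

# Suffixes of hostnames we treat as authorized crawlers (illustrative list).
GOOD_BOT_SUFFIXES = (".googlebot.com", ".google.com")

def is_verified_search_bot(ip, reverse=socket.gethostbyaddr,
                           forward=socket.gethostbyname):
    """Reverse-resolve the IP, check the domain, then forward-confirm it."""
    try:
        host = reverse(ip)[0]
    except OSError:
        return False
    if not host.endswith(GOOD_BOT_SUFFIXES):
        return False
    try:
        return forward(host) == ip  # forward lookup must match the original IP
    except OSError:
        return False
```

The forward confirmation matters because a scraper can freely spoof its User-Agent and even set a fake reverse-DNS record, but it can’t make Google’s forward DNS point back at its own IP.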
Read “How to Evaluate Bot Management Solutions” to learn more.
Often, I find that only a handful of organizations have a complete understanding of where they stand in today’s threat landscape. That’s a problem. If your organization does not have the ability to identify its assets, threats, and vulnerabilities accurately, you’re going to have a bad time.
A lack of visibility prevents both IT and security administrators from accurately determining their actual exposure and limits their ability to address their most significant risks on premises. Moving computing workloads to a publicly hosted cloud service exposes organizations to new risks: they lose direct physical control over their workloads and relinquish many aspects of security under the shared responsibility model.
Cloud-y With a Chance of Risk
Don’t get me wrong; cloud environments make it very easy for companies to quickly scale by allowing them to spin up new resources for their user base instantly. While this helps organizations decrease their overall time to market and streamline business processes, it also makes it very difficult to track user permissions and manage resources.
As many companies have discovered over the years, migrating workloads to a cloud-native solution presents new challenges when it comes to risks and threats in the cloud environment.
Traditionally, computing workloads resided within the organization’s data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protections via mechanisms such as firewalls, intrusion prevention/detection systems (IPS/IDS), web application firewall (WAF) and distributed denial-of-service (DDoS) protection, secure web gateways (SWGs), etc.
However, moving workloads to the cloud has introduced new risks for organizations. Public clouds typically provide only basic protections, focused mainly on securing their overall computing environments, leaving individual organizations’ workloads vulnerable. Because of this, deployed cloud environments are at risk not only of account compromise and data breaches, but also of resource exploitation due to misconfigurations, lack of visibility or user error.
The typical attack profile includes:
- Spear phishing employees
- Compromised credentials
- Misconfigurations and excessive permissions
- Privilege escalation
- Data exfiltration
The complexity and growing risk of cloud environments are placing more responsibility for writing and testing secure apps on developers as well. While most are not cloud-oriented security experts, there are many things we can do to help them and contribute to a better security posture.
Recent examples of attacks include:
- A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resources for crypto-mining.
- An npm code package was published in a code release containing access keys to the maintainer’s S3 storage buckets.
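Both incidents above come down to secrets committed in plain text, which even a simple pattern scan can catch before code ships. A minimal sketch, using the published AWS access-key-ID format (a 4-letter prefix such as AKIA for long-term keys or ASIA for temporary ones, plus 16 uppercase alphanumerics):

```python
import re

# AWS access key IDs follow a documented shape: AKIA/ASIA + 16 chars.
AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_text(text):
    """Return anything in `text` that looks like an AWS access key ID."""
    return AWS_KEY_ID.findall(text)
```

Running a check like this in a pre-commit hook or CI pipeline is a cheap layer of the developer-side help the paragraph above calls for; dedicated secret scanners cover many more credential formats.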
The good news is that most of these attacks can be prevented by addressing software vulnerabilities, finding misconfigurations and deploying identity access management through a workload protection service.
With this in mind, your cloud workload protection solution should:
- Detect publicly exposed assets
- Identify excessive and unused permissions
- Harden security configurations
- Secure APIs
- Uncover data theft attempts
- Automate cloud security functions
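The “excessive and unused permissions” check, at its core, is a diff between what an identity is granted and what audit logs (e.g., CloudTrail) show it actually using. A stripped-down sketch, with illustrative action names:

```python
def unused_permissions(granted, observed):
    """Permissions granted to an identity but never seen in audit logs."""
    return set(granted) - set(observed)

# Example: a role granted four actions, but logs show only two in use.
granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteBucket", "ec2:RunInstances"}
observed = {"s3:GetObject", "s3:PutObject"}
print(sorted(unused_permissions(granted, observed)))
# ['ec2:RunInstances', 's3:DeleteBucket']
```

Everything left over is attack surface: permissions an attacker could abuse after a compromise but that the business never needed, making them safe candidates for revocation.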
There are many blind spots involved in today’s large-scale cloud environments. The right cloud workload protection reduces the attack surface, detects data theft activity and provides comprehensive protection in a cloud-native solution.
As the trend of cybercriminals targeting operational technologies continues, it’s critical to reduce organizational risk by rigorously enforcing protection policies, detecting malicious activity and improving response capabilities, while giving developers assurance that security is covered.
These days, data breaches are an everyday occurrence. Companies collect volumes of data about their customers, from basic contact information to detailed financial history, demographics, buying patterns, and even lifestyle choices. Effectively this builds a very private digital footprint for each customer. When this footprint is leaked, it not only erodes the trust between consumers and the affected brand, but also erodes trust for all brands.
The latest marketing buzzwords call this a ‘post-breach era’ but I’d call it a post-trust era. We have watched the slow erosion of consumer trust for several years now. Forrester predicted that 2018 would mark the tipping point, calling it “a year of reckoning,” but here we are in 2019 and trust only continues to decline. The Edelman Trust Barometer claims that in the U.S., we saw the sharpest drop in consumer trust in history, bringing it to an all-time low.
Why is Consumer Trust Falling at Such a Rapid Rate?
Organizations have spent billions of dollars digitally transforming themselves to create faster, easier and more numerous access points for their customers to interact with their brand. And it’s worked. Consumers engage much more often with more personal data with brands today than ever before. For marketers, it’s a dream come true: More access equals more insights and more customer data recorded, enabling more personalized and customized customer experiences.
However, each touchpoint comes with increased security risks and vulnerabilities. Prior to the digital transformation revolution, brands interacted much less frequently with their customers (for the sake of argument, let’s say once a month). But now, brands communicate daily (sometimes multiple times per day!) across multiple touchpoints and channels, collecting exponentially more data. This increases not only the opportunities for breaches but also the possibility of negative customer interactions, given how much more private information is known about each individual. An overabundance of those marvelous personalized interactions can feel invasive, leaving consumers uncomfortable about the risk to their digital footprint.
Trust is necessary to offset any negativity.
Brands have a tremendous responsibility to protect all the data they collect from their customers. Historically, a lack of vigilance on security has been at the root of many data breaches. For many years, the C-suite has treated information security as an expense to meet the basics of regulatory compliance, not as an investment.
Today that organizational behavior no longer suffices. The stakes are much higher now; the size, frequency and resulting consequences of recent data breaches have created a huge backlash in consumer sentiment. We feel the impact of this trust erosion in new legislation across the globe (GDPR, CCPA, etc.) designed to give consumers some power back with regard to their data. We also feel the impact in customer churn, brand abandonment and poor Customer Lifetime Value (CLV) after a security breach. The ripple effects of data breaches signal the value of investing in security upfront: invest in the right cybersecurity infrastructure now or risk paying far more later.
It forces us as marketers to change the type of conversations we have with our customers.
What’s a Brand to Do?
How important is data security to your customers and your brand promise? If asked, surely every one of your customers would tell you it’s important. Most marketers are afraid to make security promises for fear of future data breaches. However, there’s a compelling argument that if you don’t address the issue up front, you are missing a critical conversation with your customers that could cost you their loyalty.
- Don’t fear the security conversation; embrace it. Brands like Apple are once again leading the privacy conversation. Apple’s new ad campaign addresses privacy issues head on. Your executives may not take the exact same stance as Apple, but as a marketer, you can identify the right tone and timing for a security conversation with your audience.
- Ask your customers about their security concerns and listen to their answers! Our digitally transformed world empowers us to engage in a two-way dialogue with our audiences. Talk to them. Ask their opinions on security and, more importantly, listen to their answers. Take their suggestions back to your product and development teams and incorporate them into your company’s DNA.
- Develop features and services that empower your customers to protect their own privacy. Today, banks offer credit monitoring, credit locking, fraud alerts, subscriptions to services that monitor the dark web for an entire family, etc. IoT devices have enabled people to see who is ringing the doorbell even when they are not home. Those doorbell recordings can now be shared through neighborhood watch sites to warn the community of incidents when they occur. These are all examples of innovation and evolution around security as a feature.
- Highlight all the different ways your company is protecting its customers’ data and privacy. Don’t assume your customers know that you take their privacy concerns seriously. Show them you care about their security concerns. Tell them about, and educate them on, all the steps you are taking to protect them.
- Don’t whitewash security concerns. Be a champion for injecting security into the DNA of your organization – from product development to responsible data collection and storage, to the customer experience.
Regardless of your industry, from finance to retail to consumer goods to healthcare and beyond, there is a security discussion to be had with your customers. If you are not embracing the conversation, your competitors will, and you will be left behind.
Read “Consumer Sentiments: Cybersecurity, Personal Data and The Impact on Customer Loyalty” to learn more.
By this point, we know that state-sponsored cyber attacks are a thing. Time and again, we see headlines to this effect, whether it’s election hacking, IP theft, or mega-breaches. For your average consumer, it’s troubling. But for executives at organizations that are targeted, it’s a nightmare.
The accompanying PR headaches, customer churn, and operational and reputation losses are bad enough; but when big companies think they’re protected by cyber insurance only to find out they aren’t, things go from bad to worse.
Are You Really Covered?
Indeed, per the New York Times, “Many insurance companies sell cyber coverage, but the policies are often written narrowly to cover costs related to the loss of customer data, such as helping a company provide credit checks or cover legal bills.” In other words, many organizations think that because they’ve purchased cyber insurance, they are protected and will be reimbursed for any expenses related to suffering and mitigating a cyberattack.
But that’s not necessarily the case. Insurers are increasingly citing a “war exclusion” clause —which “protects insurers from being saddled with costs related to damage from war”— to avoid reimbursing losses associated with cyberattacks.
Huh? How can that be? We’ve seen the U.S. Department of Justice identify APT-10 as a Chinese state-sponsored corporate hacking group that attacked both Hewlett Packard Enterprise and IBM.
In addition, there’s the now-infamous NotPetya attack (for which the U.S. assigned responsibility to Russia in 2018), where affected companies were considered collateral damage in a cyberwar. This is the nightmare scenario that played out for both Mondelez and Merck in 2017, after the two organizations suffered hundreds of millions of dollars’ worth of damages resulting from the NotPetya attack. Unsurprisingly, both Mondelez and Merck are fighting back in court. But these cases will likely take years (and an astounding amount of legal fees) to resolve. Which raises the question: What are companies to do in the meantime, when cyber insurance fails to protect the business?
Protecting Your Business
Well, first things first: Prioritize security. Don’t treat it as an add-on or wait until you’ve been hit with an attack to beef it up. Build it into the very fabric of your company’s foundation. As I wrote last year, doing so enables an organization to scale and focus on security innovation, rather than scrambling to mitigate new threats as they evolve. Besides, baking security into your products and/or services can be leveraged as a competitive differentiator (and therefore help produce new revenue streams).
Additionally, there are several other steps you can take to help protect your organization against large-scale cyberattacks:
- Install comprehensive DDoS and application security protection. Such solutions will optimize business operations, minimize service degradation and help prevent downtime.
- Educate employees. This can’t be emphasized enough; employers should educate their employees about common cyberattack methods (like phishing campaigns) and teach them to be wary of links and downloads from unknown sources. This may sound simplistic, but it’s often overlooked.
- Manage permissions. This holds particularly true for organizations operating in or migrating to a public cloud environment; excessive permissions are the number one threat to your cloud-based data.
- Use multi-factor authentication. Again, this is low-hanging fruit, but it bears repeating. Requiring multi-factor authentication may seem like a pain, but it’s well worth the effort to safeguard your network.
And, as always, let the (security) experts handle the (cybercriminal) experts. Don’t hesitate to engage third-party experts in your quest to provide a secure customer experience.
Humans aren’t the only ones consumed with connected devices these days. Cows have joined our ranks.
Believe it or not, farmers are increasingly relying on IoT devices to keep their cattle connected. No, not so that they can moo-nitor (see what I did there?) Instagram, but to improve efficiency and productivity. For example, in the case of dairy farms, robots feed, milk and monitor cows’ health, collecting data along the way that help farmers adjust techniques and processes to increase milk production, and thereby profitability.
The implications are massive. As the Financial Times pointed out, “Creating a system where a cow’s birth, life, produce and death are not only controlled but entirely predictable could have a dramatic impact on the efficiency of the dairy industry.”
From Dairy Farm to Data Center
So, how do connected cows factor into cybersecurity? By the simple fact that the IoT devices tasked with milking, feeding and monitoring them are turning dairy farms into data centers – which has major security implications. Because let’s face it, farmers know cows, not cybersecurity.
Indeed, the data collected are stored in data centers and/or a cloud environment, which opens farmers up to potentially costly cyberattacks. Think about it: The average U.S. dairy farm is a $1 million operation, and the average cow produces $4,000 in revenue per year. That’s a lot at stake—roughly $19,000 per week, given the average dairy farm’s herd—if a farm is struck by a ransomware attack.
It could literally be cheaper for an individual farm to pay a weekly $2,850 ransom to keep the IoT network up. And if hackers were sophisticated enough to launch an industry-wide attack, the dairy industry would be better off paying $46 million per week in ransom than losing the revenue.
Admittedly, connected cows aren’t new; IoT devices have been assisting farmers for several years now. And it’s a booming business. Per the FT, “Investment in precision ‘agtech’ systems reached $3.2bn globally in 2016 (including $363m in farm management and sensor technology)…and is set to grow further as dairy farms become a test bed for the wider IoT strategy of big technology companies.”
But what is new is the rollout of 5G networks, which promise faster speeds, low latency and increased flexibility—seemingly ideal for managing IoT devices. But, as we’ve previously discussed, with new benefits come new risks. As network architectures evolve to support 5G, security vulnerabilities will abound if cybersecurity isn’t prioritized and integrated into a 5G deployment from the get-go.
In the new world of 5G, cyberattacks can become much more potent, as a single hacker can easily multiply into an army through botnet deployment. Indeed, 5G opens the door to a complex world of interconnected devices that hackers will be able to exploit via a single point of access in a cloud application to quickly expand an attack radius to other connected devices and applications. Just imagine the impact of a botnet deployment on the dairy industry.
I don’t know about you, but I like my milk and cheeses. Here’s to hoping dairy farmers turn to the experts to properly manage their security before the industry is hit with devastating cyberattacks.
Read “Creating a Secure Climate for your Customers” today.
A couple of months ago, I was on a call with a company that was in the process of evaluating DDoS mitigation services to protect its data centers. This company runs mission-critical applications and was looking for comprehensive coverage against various types of attacks, including volumetric, low and slow, encrypted floods and application-layer attacks.
During the discussion, our team asked a series of technical questions related to their ISP links, types of applications, physical connectivity, and more. And we provided an attack demo using our sandbox lab in Mahwah.
Everything was moving along just fine until the customer asked us for a Proof of Concept (PoC), what most would consider a natural next step in the vendor evaluation process.
About That Proof of Concept…
How would you run a DDoS PoC? You rack and stack the DDoS mitigation appliance (or enable the service if it is cloud-based), set up some type of management IP address, configure the protection policies, and off you go!
Well, when we spoke to this company, they said they would be happy to do all of that – at their disaster recovery data center, located within a large carrier facility on the East Coast. This raised my antenna, and I immediately asked a couple of questions that would turn out to be extremely important for all of us: Do you have attack tools to launch DDoS attacks? Do you take responsibility for running the attacks? Well, the customer answered “yes” to both.
Being a trained SE, I then asked why they needed to run the PoC in their lab, and whether we could instead demonstrate that our DDoS mitigation appliance can mitigate a wide range of attacks using our PoC script. As it turned out, the prospect was evaluating other vendors and, to compare apples to apples (thereby giving all vendors a fair chance), was already conducting a PoC in their data center with another vendor’s appliance.
We shipped the PoC unit quickly and the prospect, true to their word, got the unit racked, stacked and cabled up, ready to go. We configured the device, then gave them the green light to launch attacks. And then the prospect asked us to launch the attacks; despite their earlier assurances, they didn’t have any attack tools.
A Bad Idea
Well, most of us in this industry do have DDoS testing tools, so what’s the big deal? As vendors who provide cybersecurity solutions, we shouldn’t have any problems launching attacks over the Internet to test out a DDoS mitigation service…right?
WRONG! Here’s why that’s a bad idea:
- Launching attacks over the Internet is ILLEGAL. You need written permission from the entity being attacked to launch a DDoS attack. You can try your luck if you want, but this is akin to running a red light. You may get away with it, but if you are caught, the repercussions are damaging and expensive.
- Your ISP might block your IP address. Many ISPs have DDoS defenses within their infrastructure and if they see someone launching a malicious attack, they might block your access. Good luck sorting that one out with your ISP!
- Your attacks may not reach the desired testing destination. Even if your ISP doesn’t block you and the FBI doesn’t come knocking, there might be one or more DDoS mitigation devices between you and the customer data center where the destination IP being tested resides. These devices could very well mitigate the attack you launch, preventing you from doing the testing.
Those are three big reasons why doing DDoS testing in a production data center is, simply put, a bad idea. Especially if you don’t have a legal, easy way to generate attacks.
A Better Way
So what are the alternatives? How should you do DDoS testing?
- With DDoS testing, the focus should be on evaluating the mitigation features: Can the service detect attacks quickly? Can it mitigate immediately? Can it adapt to attacks that are morphing? Can it report accurately on the attack it is seeing and what is being mitigated? How accurate is the mitigation (what about false positives)? If you run a DDoS PoC in a production environment, you will spend most of your resources and time testing connectivity and spinning your wheels on operational aspects (e.g., LAN cabling, console cabling, change control procedures, paperwork, etc.). This is not what you want to test; you want to test DDoS mitigation! It’s like trying to test how fast a sports car can go on a very busy street. You will end up testing the brakes, but you won’t get very far with any speed testing.
- Test things out in your lab. Even better, let the vendor test it in their lab for you. This will let both parties focus on the security features rather than get caught up with the headaches of logistics involved with shipping, change control, physical cabling, connectivity, routing etc.
- It is perfectly legal to use test tools like Kali Linux, BackTrack, etc. within a lab environment. Launch attacks to your heart’s content, morph the attacks and see how the DDoS service responds.
- If you don’t have the time or expertise to launch attacks yourself, hire a DDoS testing service. Companies like activereach, Redwolf security or MazeBolt security do this for a living, and they can help you test the DDoS mitigation service with a wide array of customized attacks. This will cost you some money, but if you are serious about the deployment, you will be doing yourself a favor and saving future work.
- Finally, evaluate multiple vendors in parallel. You can never do this in a production data center. However, in a lab you can keep the attacks and the victim applications constant, while just swapping in the DDoS mitigation service. This will give you an apples-to-apples comparison of the actual capabilities of each vendor and will also shorten your evaluation cycle.
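For lab testing of the kind described above, even a very simple load generator can exercise a mitigation service’s detection and reporting. The sketch below is a minimal, lab-only HTTP flood loop in Python; the target URL, worker count and request count are hypothetical parameters you would point at a victim application in your own lab, never at production infrastructure.

```python
import threading
import urllib.request

def flood(url, requests_per_worker, workers=4):
    """Fire concurrent HTTP GETs at a lab target and count completions.
    Lab use only -- never point this at infrastructure you don't own."""
    completed = []
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_worker):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    resp.read()
                with lock:
                    completed.append(1)
            except OSError:
                pass  # the target (or a mitigation device) dropped us

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # completed vs. attempted tells you how much traffic got through
    return len(completed)
```

Because the victim application and the attack loop stay constant, swapping only the mitigation service in front of the target gives you exactly the apples-to-apples comparison described above.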
Service availability is a key component of the user experience. Customers expect services to be constantly available and fast-responding, and any downtime can result in disappointed users, abandoned shopping carts, and lost customers.
Consequently, DDoS attacks are increasing in complexity, size and duration. Radware’s 2018 Global Application and Network Security Report found that over the course of a year, sophisticated DDoS attacks, such as burst attacks, increased by 15%, HTTPS floods grew by 20%, and over 64% of customers were hit by application-layer (L7) DDoS attacks.
Some Attacks are a Two-Way Street
As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.
Some examples of such attacks include:
Out-of-State Protocol Attacks: Some DDoS attacks exploit weaknesses in protocol communication processes, such as TCP’s three-way handshake, to create ‘out-of-state’ connection requests that draw out connections in order to exhaust server resources. While some attacks of this type, such as a SYN flood, can be stopped by examining the inbound channel only, others require visibility into the outbound channel as well.
An example of this is an ACK flood, whereby attackers continuously send forged TCP ACK packets towards the victim host. The target host tries to associate each ACK with an existing TCP connection, and if no such connection exists, it drops the packet. This process consumes server resources, however, and large numbers of such requests can deplete system resources. In order to correctly identify and mitigate such attacks, defenses need visibility into both inbound SYNs and outbound SYN/ACK replies, so that they can verify whether an ACK packet is associated with any legitimate connection request.
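The state-tracking logic this requires can be sketched in a few lines. The toy detector below models packets as simple (flags, src, dst) tuples rather than parsing real traffic; the point is that it only works because it sees the server’s *outbound* SYN/ACK replies, exactly as the paragraph above describes.

```python
# Minimal sketch of out-of-state ACK detection. Packets are modeled as
# (flags, src, dst) tuples; a real mitigation device would parse live
# traffic on both the inbound and outbound channels.

def detect_out_of_state_acks(packets):
    """Return ACK packets that match no flow the server ever
    acknowledged with an outbound SYN/ACK."""
    acknowledged = set()   # flows the server replied to (SYN/ACK seen)
    orphans = []
    for flags, src, dst in packets:
        if flags == "SYN/ACK":
            # Outbound reply: record the flow from the client's view.
            acknowledged.add((dst, src))
        elif flags == "ACK" and (src, dst) not in acknowledged:
            # No matching handshake anywhere: likely a forged ACK.
            orphans.append((src, dst))
    return orphans
```

With inbound-only visibility, the `acknowledged` set could never be populated, and every ACK, legitimate or forged, would look identical.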
Reflection/Amplification Attacks: Such attacks exploit asymmetric responses between the connection requests and replies of certain protocols or applications. Again, some types of such attacks require visibility into both the inbound and outbound traffic channels.
An example of such an attack is a large-file outbound pipe saturation attack. In such attacks, the attackers identify a very large file on the target network and send connection requests to fetch it. Each request can be only a few bytes in size, but the ensuing reply can be extremely large. Large numbers of such requests can clog up the outbound pipe.
Another example is memcached amplification attacks. Although such attacks are most frequently used to overwhelm a third-party target via reflection, they can also be used to saturate the outbound channel of the targeted network.
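The asymmetry both examples rely on can be quantified as an amplification factor: outbound reply bytes divided by inbound request bytes. The sketch below flags flows whose ratio crosses a hypothetical threshold; the flow names and the threshold of 50 are illustrative assumptions, not values from any product.

```python
def amplification_factor(request_bytes, response_bytes):
    """Ratio of outbound reply size to inbound request size."""
    return response_bytes / request_bytes

def outbound_saturation_suspects(flows, factor_threshold=50):
    """Flag flows whose tiny requests trigger disproportionately large
    replies -- the signature of an outbound pipe-saturation attempt.
    `flows` maps a flow id to (bytes_in, bytes_out)."""
    return [fid for fid, (b_in, b_out) in flows.items()
            if amplification_factor(b_in, b_out) >= factor_threshold]
```

Note that computing this ratio at all requires counting bytes on both channels: a defense watching only inbound traffic sees a handful of harmless-looking small requests.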
Scanning Attacks: Large-scale network scanning attempts are not just a security risk, but also frequently bear the hallmark of a DDoS attack, flooding the network with malicious traffic. Such scans send large numbers of connection requests to host ports and observe which ports answer back (thereby indicating that they are open). However, this also generates high volumes of error responses from closed ports. Mitigating such attacks requires visibility into return traffic in order to measure the error-response rate relative to actual traffic and conclude that an attack is taking place.
Server Cracking: Similar to scanning attacks, server cracking involves sending large numbers of requests in order to brute-force system passwords. This, too, produces a high error-reply rate, which requires visibility into both the inbound and outbound channels in order to identify the attack.
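Both scanning and brute-forcing share the same outbound tell: a high proportion of error replies. A minimal sketch of that check is below; the set of “error” reply types and the 50% threshold are illustrative assumptions (a real device would classify replies from parsed packets and tune thresholds per protocol).

```python
# Reply types treated as errors in this sketch: TCP resets and ICMP
# unreachables from closed ports, HTTP 401/403 from failed logins.
ERROR_REPLIES = {"RST", "ICMP-unreachable", "401", "403"}

def error_reply_rate(outbound_replies):
    """Fraction of outbound replies that are error responses."""
    if not outbound_replies:
        return 0.0
    errors = sum(1 for r in outbound_replies if r in ERROR_REPLIES)
    return errors / len(outbound_replies)

def looks_like_scan_or_bruteforce(outbound_replies, threshold=0.5):
    """Flag traffic whose error-reply rate exceeds the threshold."""
    return error_reply_rate(outbound_replies) > threshold
```

The input here is entirely *outbound* traffic, which is precisely why inbound-only defenses cannot run this check.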
Stateful Application-Layer DDoS Attacks: Certain types of application-layer (L7) DDoS attacks exploit known protocol weaknesses in order to create large numbers of spoofed requests which exhaust server resources. Mitigating such attacks requires state-aware bi-directional visibility in order to identify attack patterns, so that the relevant attack signature can be applied to block them. Examples of such attacks are low-and-slow and application-layer (L7) SYN floods, which draw out HTTP and TCP connections in order to continuously consume server resources.
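For the low-and-slow case specifically, one simple stateful heuristic is throughput per connection: a legitimate client sends its request quickly, while a low-and-slow client trickles bytes to keep the connection pinned open. The sketch below assumes hypothetical per-connection byte and duration counters, which only a state-tracking, bi-directional vantage point can maintain.

```python
def slow_connection_suspects(connections, min_bytes_per_sec=10):
    """Flag connections whose request arrives at a trickle -- the
    low-and-slow pattern that holds server resources open.
    `connections` maps a connection id to (bytes_received, open_seconds);
    the 10 B/s floor is an illustrative threshold, not a product value."""
    return [cid for cid, (nbytes, secs) in connections.items()
            if secs > 0 and nbytes / secs < min_bytes_per_sec]
```

A stateless, inbound-only filter sees nothing anomalous in any single packet of such an attack; only tracking the connection over time exposes the pattern.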
Two-Way Attacks Require Bi-Directional Defenses
As online service availability becomes ever more important, hackers are devising ever more sophisticated attacks in order to overwhelm defenses. Many such attack vectors – frequently the more sophisticated and potent ones – either target or take advantage of the outbound communication channel.
Therefore, in order for organizations to fully protect themselves, they must deploy protections that allow bi-directional inspection of traffic in order to identify and neutralize such threats.