I’ve long maintained that security can and should be leveraged as a competitive advantage, regardless of industry. But I’d like to expound upon this mantra: it holds particularly true for the financial services industry.
With consumers increasingly relying on web and mobile apps to conduct financial transactions, customer data has emerged as the new “oil” that powers financial institutions; it can be mined to upsell additional services over the long haul.
But if banks want to increase customer lifetime value, they first must protect this treasure trove of data. Why? Because privacy and security are top of mind for consumers.
This isn’t just an educated guess on my part; Radware recently conducted a survey of nearly 1,200 U.S. consumers to better understand how they view financial security, how they’d react if their data was compromised, and what it all means for financial institutions today.
Spoiler alert: Financial institutions stand to lose business if they don’t prioritize security.
The fact is, data breaches and cyberattacks increase customer churn in an age when virtual transparency, relentless competition and frictionless account transitions have made it easy for consumers to switch financial institutions. And make no mistake — they will abandon their banks if their privacy and security are better prioritized elsewhere.
Today’s infrastructure threats will have major impacts on tomorrow’s 5G commercial networks. 5G network slicing, virtualization and disaggregation introduce new levels of complexity to network security, requiring a high-level of automation in security on-boarding, scale-out and attack mitigation.
5G security must be considered from Day 1 of a network build and woven into the network architecture. Otherwise, re-architecting the network after the fact becomes a cost-prohibitive exercise.
Service providers face the unavoidable burden of managing security threats in the 5G network.
Your ‘Typical’ Security Solution
A typical network security solution will include several security elements, such as firewalls, DDoS protection devices, IPS/IDS, etc. Each system may require its own domain expertise when it comes to proper configuration and tuning. When a carrier-grade network slice is under attack, dedicated expertise is required for handling changes and setting the proper mitigation actions. With the new paradigm of 5G network slicing coming onto the scene in a highly distributed network, carrier security teams will be challenged.
Service providers are already in a precarious position of creating healthy profit margins with the onslaught of over-the-top data and video traversing their networks. New revenue streams are tough to come by, and so the other lever available to influence margins is cost control. However, the cost economics do not scale well when contemplating an increase in security staff to prepare for 5G. The new attack vectors are just too complex and too high in volume to adequately address with a bloated Security Operations Center (SOC) of just human oversight and management.
What makes more sense is adopting a comprehensive security solution across all network slices, benefiting from unified management and shared SOC skill sets.
Vendor technology designed around self-learning threat detection, rather than heavy reliance on pre-configured rules, is the ideal toolkit for service providers. Minimal setup and configuration lower the overall effort the carrier security team spends on system operation. Now, instead of manual provisioning and troubleshooting, the SOC specialist can look at a dashboard to see what the system detected and what mitigation actions took place to defend against malicious threats. This yields strong visibility into network security threats across all network functions and slices.
In the new 5G security play, the various security functions are on-boarded per slice, in alignment with the required network capabilities and desired distribution. Aligning the investment in security computing resources and licenses with each network slice’s investment gives carriers better control over the risks and costs associated with a specific network slice.
Automated mitigation capabilities provide the security team with ‘peace of mind’ that all ‘war time’ actions are taken care of in an automated manner, with no manual intervention by security administrators.
So although 5G carries with it very challenging security issues, service providers can be proactive in creating a security posture that gives them the best chance to keep costs in check while keeping the network safe.
Read “Creating a Secure Climate for your Customers” today.
The escalating intensity of global bot traffic and the increasing severity of its overall impact mean that dedicated bot management solutions are crucial to ensuring business continuity and success. This is particularly true since more sophisticated bad bots can now mimic human behavior and easily deceive conventional cybersecurity solutions/bot management systems.
Addressing highly sophisticated and automated bot-based cyberthreats requires deep analysis of bots’ tactics and intentions. According to Forrester Research’s The Forrester New Wave™: Bot Management, Q3 2018 report, “Attack detection, attack response and threat research are the biggest differentiators. Bot management tools differ greatly in their detection methods; many have very limited — if any — automated response capabilities. Bot management tools must determine the intent of automated traffic in real time to distinguish between good bots and bad bots.”
When selecting a bot mitigation solution, companies must evaluate the following criteria to determine which solution best fits their unique needs.
Basic Bot Management Features
Organizations should evaluate the range of possible response actions — such as blocking, limiting, the ability to outwit competitors by serving fake data and the ability to take custom actions based on bot signatures and types.
Any solution should have the flexibility to take different mitigation approaches on various sections and subdomains of a website, and the ability to integrate with only a certain subset of that website’s pages. For example, a “monitor mode” with no impact on web traffic can provide users with insight into the solution’s capabilities during a trial, before real-time active blocking mode is enabled.
Additionally, any enterprise-grade solution should be able to be integrated with popular analytics dashboards such as Adobe or Google Analytics to provide reports on nonhuman traffic.
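To make the flexibility described above concrete, here is a minimal sketch of per-section mitigation policies with a trial-style monitor mode. The policy names, path prefixes and structure are purely illustrative assumptions, not any vendor’s actual configuration schema.

```python
# Hypothetical per-section bot mitigation policies. The most specific
# (longest) matching path prefix wins; "" is the site-wide default.
POLICIES = {
    "/checkout": {"action": "block"},       # strictest: block bad bots outright
    "/search":   {"action": "rate_limit"},  # throttle scraping of search pages
    "/blog":     {"action": "serve_fake"},  # feed fake data to content scrapers
    "":          {"action": "allow"},       # default for everything else
}

def resolve_action(path: str, monitor_mode: bool = False) -> str:
    """Pick the most specific policy for a path; in monitor mode, only log."""
    prefix = max((p for p in POLICIES if path.startswith(p)), key=len)
    action = POLICIES[prefix]["action"]
    if monitor_mode and action != "allow":
        return f"log_only({action})"  # no impact on live traffic during a trial
    return action

print(resolve_action("/checkout/cart"))                      # block
print(resolve_action("/checkout/cart", monitor_mode=True))   # log_only(block)
print(resolve_action("/about"))                              # allow
```

The point of the longest-prefix lookup is that a single deployment can be strict on checkout flows, lenient on marketing pages, and fully passive during an evaluation.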
Capability to Detect Large-Scale Distributed Humanlike Bots
When selecting a bot mitigation solution, businesses should try to understand the underlying technique used to identify and manage sophisticated attacks such as large-scale distributed botnet attacks and “low and slow” attacks, which attempt to evade security countermeasures.
Traditional defenses fall short of necessary detection features to counter such attacks. Dynamic IP attacks render IP-based mitigation useless. A rate-limiting system without any behavioral learning means dropping real customers when attacks happen. Some WAFs and rate-limiting systems that are often bundled or sold along with content delivery networks (CDNs) are incapable of detecting sophisticated bots that mimic human behavior.
The rise of highly sophisticated humanlike bots in recent years requires more advanced techniques in detection and response. Selection and evaluation criteria should focus on the various methodologies that any vendor’s solution uses to detect bots, e.g., device and browser fingerprinting, intent and behavioral analyses, collective bot intelligence and threat research, as well as other foundational techniques.
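To illustrate why pure rate limiting fails against distributed attacks, here is a toy fixed-window, per-IP rate limiter (an assumed simplification, not any product’s algorithm). The same request volume that gets one aggressive IP blocked passes untouched when spread across a botnet, since every node stays under the per-IP limit.

```python
from collections import Counter

LIMIT_PER_WINDOW = 100  # requests allowed per source IP per time window

def blocked_ips(requests):
    """Return the set of source IPs exceeding the per-window limit."""
    counts = Counter(requests)
    return {ip for ip, n in counts.items() if n > LIMIT_PER_WINDOW}

# One aggressive scraper from a single IP: caught.
single_source = ["203.0.113.7"] * 5000
print(blocked_ips(single_source))   # {'203.0.113.7'}

# The same 5,000 requests spread across a 1,000-node botnet (5 requests
# each): every IP stays under the limit, so the attack sails through.
botnet = [f"10.0.{i // 256}.{i % 256}" for i in range(1000) for _ in range(5)]
print(blocked_ips(botnet))          # set()
```

This is exactly the gap that behavioral and intent analysis aim to close: the aggregate behavior is anomalous even when no individual source is.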
A Bot Detection Engine That Continuously Adapts to Beat Scammers and Outsmart Competitors
How advanced is the solution’s bot detection technology?
Does it use unique device and browser fingerprinting?
Does it leverage intent analysis in addition to user behavioral analysis?
How deep and effective are the fingerprinting and user behavioral modeling?
Any bot management system should accomplish all of this in addition to collecting hundreds of parameters from users’ browsers and devices to uniquely identify them and analyze their behavior. It should also match the deception capabilities of sophisticated bots. Ask for examples of sophisticated attacks that the solution was able to detect and block.
Impact on User Experience — Latency, Accuracy and Scalability
Website and application latency creates a poor user experience, so any bot mitigation solution shouldn’t add to that latency; ideally, it should help identify and resolve the issues causing it.
Accuracy of bot detection is critical. Any solution must not only distinguish good bots from malicious ones but also enhance the user experience by allowing authorized bots from search engines and partners. Maintaining a consistent user experience on sites such as B2C e-commerce portals can be difficult during peak hours, so the solution should also scale to handle spikes in traffic.
Keeping false positives to a minimal level to ensure that user experience is not impacted is equally important. Real users should never have to solve a CAPTCHA or prove that they’re not a bot. An enterprise-grade bot detection engine should have deep-learning and self-optimizing capabilities to identify and block constantly evolving bots that alter their characteristics to evade detection by basic security systems.
Read “How to Evaluate Bot Management Solutions” to learn more.
Often, I find that only a handful of organizations have a complete understanding of where they stand in today’s threat landscape. That’s a problem. If your organization does not have the ability to identify its assets, threats, and vulnerabilities accurately, you’re going to have a bad time.
A lack of visibility prevents both IT and security administrators from accurately determining their actual exposure and limits their ability to address their most significant risks on premises. However, moving computing workloads to a publicly hosted cloud service exposes organizations to new risks: they lose direct physical control over their workloads and relinquish many aspects of security under the shared responsibility model.
Cloud-y With a Chance of Risk
Don’t get me wrong; cloud environments make it very easy for companies to scale quickly by allowing them to spin up new resources for their user base instantly. While this helps organizations decrease their overall time to market and streamline business processes, it also makes it very difficult to track user permissions and manage resources.
However, moving workloads to the cloud presents new risks for organizations. Typically, public clouds provide only basic protections and are mainly focused on securing their overall computing environments, leaving individual organizations’ workloads vulnerable. Because of this, deployed cloud environments are at risk not only of account compromises and data breaches, but also of resource exploitation due to misconfigurations, lack of visibility or user error.
The complexity and growing risk of cloud environments are placing more responsibility for writing and testing secure apps on developers as well. While most are not cloud-oriented security experts, there are many things we can do to help them and contribute to a better security posture.
Consider two incidents. A Tesla developer uploaded code to GitHub that contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resources for crypto-mining. In another case, a development team published an npm code package in their code release containing access keys to their S3 storage buckets.
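Leaks like these are often catchable before code is ever pushed. Below is a hedged sketch of scanning source text for strings shaped like AWS access key IDs (the well-known `AKIA…` pattern); real secret scanners such as git-secrets or GitHub secret scanning use far richer rule sets and entropy checks, so treat this only as an illustration of the idea.

```python
import re

# AWS access key IDs are 20 characters: a 4-character prefix (AKIA for
# long-term keys, ASIA for temporary ones) plus 16 uppercase base36 chars.
AWS_ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_suspected_keys(text: str) -> list[str]:
    """Return candidate access key IDs found in the given source text."""
    return [m.group(0) for m in AWS_ACCESS_KEY_RE.finditer(text)]

# AWS's documented example key, the kind of literal that should never
# survive a pre-commit hook:
snippet = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # oops: committed in plain text'
print(find_suspected_keys(snippet))   # ['AKIAIOSFODNN7EXAMPLE']
```

Wiring a check like this into a pre-commit hook or CI step is cheap insurance against exactly the kind of exposure described above.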
The good news is that most of these attacks can be prevented by addressing software vulnerabilities, finding misconfigurations and deploying identity access management through a workload protection service.
With this in mind, choose a cloud workload protection solution that closes these gaps. There are many blind spots in today’s large-scale cloud environments; the right cloud workload protection reduces the attack surface, detects data theft activity and provides comprehensive protection in a cloud-native solution.
As cybercriminals continue to target operational technologies, it’s critical to reduce organizational risk by rigorously enforcing protection policies, detecting malicious activity and improving response capabilities, while giving developers assurance that their workloads are protected.
Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.
These days, data breaches are an everyday occurrence. Companies collect volumes of data about their customers, from basic contact information to detailed financial history, demographics, buying patterns, and even lifestyle choices. Effectively this builds a very private digital footprint for each customer. When this footprint is leaked, it not only erodes the trust between consumers and the affected brand, but also erodes trust for all brands.
The latest marketing buzzwords call this a ‘post-breach era’ but I’d call it a post-trust era. We have watched the slow erosion of consumer trust for several years now. Forrester predicted that 2018 would mark the tipping point, calling it “a year of reckoning,” but here we are in 2019 and trust only continues to decline. The Edelman Trust Barometer claims that in the U.S., we saw the sharpest drop in consumer trust in history, bringing it to an all-time low.
Why is Consumer Trust Falling at Such a Rapid Rate?
Organizations have spent billions of dollars digitally transforming themselves to create faster, easier and more numerous access points for their customers to interact with their brand. And it’s worked. Consumers engage much more often with more personal data with brands today than ever before. For marketers, it’s a dream come true: More access equals more insights and more customer data recorded, enabling more personalized and customized customer experiences.
However, each touch-point comes with increased security risks and vulnerabilities. Prior to the digital transformation revolution, brands interacted much less frequently with their customers (for the sake of argument, let’s say once a month). But now, brands communicate daily (sometimes multiple times per day!) across multiple touch-points and multiple channels, collecting exponential amounts of data. This increases not only the opportunities for breaches but also the potential for negative customer interactions, given how much more private information is known about each individual. An overabundance of those marvelous personalized interactions can leave consumers feeling surveilled and uncomfortable about the exposure of their digital footprint.
Brands have a tremendous responsibility to protect all the data they collect from their customers. Historically, a lack of vigilance on security has been at the root of many data breaches. For many years, the C-suite treated information security as an expense to meet the basics of a regulatory compliance standard, not as an investment.
Today that organizational behavior just does not suffice. The stakes are much higher now; the size, frequency, and resulting consequences of recent data breaches have created a huge backlash in consumer sentiment. We feel the impact of this trust erosion in new legislation across the globe (GDPR, Castle Laws, etc.) designed to give consumers some power back with regard to their data. We also feel it in customer churn, brand abandonment and poor Customer Lifetime Value (CLV) after a security breach. The ripple effects of data breaches underscore the value of investing in security upfront: invest in the right cybersecurity infrastructure now or risk paying far more later.
It forces us as marketers to change the type of conversations we have with our customers.
What’s a Brand to Do?
How important is data security to your customers and your brand promise? If asked, surely every one of your customers would tell you it’s important. Most marketers are afraid to make security promises for fear of future data breaches. However, there’s a compelling argument that if you don’t address the issue up front, you are missing a critical conversation with your customers that could cost you their loyalty.
Don’t fear the security conversation; embrace it. Brands like Apple are once again leading the privacy conversation. Apple’s new ad campaign addresses privacy issues head on. Executives may not need to take the exact stance Apple does, but as a marketer, you can identify the right tone and timing for a security conversation with your audience.
Ask your customers about their security concerns and listen to their answers! Our digitally transformed world empowers us to engage in a two-way dialog with our audiences. Talk to them. Ask their opinions on security, and more importantly, listen to their answers. Take their suggestions back to your product and development teams and incorporate them into your company’s DNA.
Develop features and services that empower your customers to protect their own privacy. Today, banks offer credit monitoring, credit locking, fraud alerts, subscriptions to services that monitor the dark web for an entire family, etc. IoT devices have enabled people to see who is ringing the doorbell even when they are not home. Those doorbell recordings can now be shared through neighborhood watch sites to warn the community of incidents when they occur. These are all examples of innovation and evolution around security as a feature.
Highlight all the different ways your company is protecting its customers’ data and privacy. Don’t assume your customers know that you take their privacy concerns seriously. Show them you care about their security concerns. Tell them about, and educate them on, all the steps you are taking to protect them.
Don’t whitewash security concerns. Be a champion for injecting security into the DNA of your organization – from product development to responsible data collection and storage, to the customer experience.
Regardless of your industry, from finance to retail to consumer goods to healthcare and beyond, there is a security discussion to be had with your customers. If you are not embracing the conversation, your competitors will, and you will be left behind.
Read “Consumer Sentiments: Cybersecurity, Personal Data and The Impact on Customer Loyalty” to learn more.
By this point, we know that state-sponsored cyber attacks are a thing. Time and again, we see headlines to this effect, whether it’s election hacking, IP theft, or mega-breaches. For your average consumer, it’s troubling. But for executives at organizations that are targeted, it’s a nightmare.
The accompanying PR headaches, customer churn, and operational and reputation losses are bad enough; but when big companies think they’re protected by cyber insurance only to find out they aren’t, things go from bad to worse.
Are You Really Covered?
According to the New York Times, “Many insurance companies sell cyber coverage, but the policies are often written narrowly to cover costs related to the loss of customer data, such as helping a company provide credit checks or cover legal bills.” In other words, many organizations think that because they’ve purchased cyber insurance, they are protected and will be reimbursed for any expenses related to suffering and mitigating a cyberattack.
But that’s not necessarily the case. Insurers are increasingly citing a “war exclusion” clause, which “protects insurers from being saddled with costs related to damage from war,” to avoid reimbursing losses associated with state-sponsored cyberattacks.

Huh? How can that be? We’ve seen the U.S. Department of Justice identify APT-10 as a Chinese state-sponsored corporate hacking group that attacked both Hewlett Packard Enterprise and IBM.
In addition, there’s the now infamous NotPetya attack (for which the U.S. assigned responsibility to Russia in 2018), in which affected companies were considered collateral damage in a cyberwar. This is the nightmare scenario that played out for both Mondelez and Merck in 2017, after both organizations suffered hundreds of millions of dollars’ worth of damages resulting from the NotPetya attack. Unsurprisingly, both Mondelez and Merck are fighting back in court. But these cases will likely take years (and an astounding amount of legal fees) to resolve. Which begs the question: what are companies to do in the meantime, when cyber insurance fails to protect the business?
Protecting Your Business
Well, first things first: prioritize security. Don’t treat it as an add-on or wait until you’ve been hit with an attack to beef it up. Build it into the very fabric of your company’s foundation. As I wrote last year, doing so enables an organization to scale and focus on security innovation, rather than scrambling to mitigate new threats as they evolve. Besides, baking security into your products and/or services can be leveraged as a competitive differentiator (and therefore help produce new revenue streams).
Additionally, there are several other steps to take to help protect your organization against large-scale cyberattacks:
Educate employees. This can’t be emphasized enough; employers should educate their employees about common cyberattack methods (like phishing campaigns), and to be wary of links and downloads from unknown sources. This may sound simplistic, but it’s often overlooked.
Manage permissions. This holds particularly true for organizations operating in or migrating to a public cloud environment; excessive permissions are the number one threat to your cloud-based data.
Use multi-factor authentication. Again, this is low-hanging fruit, but it bears repeating. Requiring multi-factor authentication may seem like a pain, but it’s well worth the effort to safeguard your network.
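To show how little machinery that second factor actually needs, here is a minimal TOTP generator following RFC 6238 (SHA-1, 30-second steps), the algorithm behind most authenticator apps. This is a teaching sketch; production systems should use a vetted library and base32-encoded shared secrets.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """One-time code for the given Unix timestamp (RFC 6238, SHA-1)."""
    counter = struct.pack(">Q", timestamp // step)   # 8-byte big-endian step count
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test vectors for the ASCII secret "12345678901234567890":
print(totp(b"12345678901234567890", 59))             # 287082
print(totp(b"12345678901234567890", int(time.time())))  # current code
```

Both client and server derive the same code from the shared secret and the clock, so an intercepted password alone is no longer enough to log in.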
And, as always, let the (security) experts handle the (cybercriminal) experts. Don’t hesitate to engage third-party experts in your quest to provide a secure customer experience.
Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.
Humans aren’t the only ones consumed with connected devices these days. Cows have joined our ranks.

Believe it or not, farmers are increasingly relying on IoT devices to keep their cattle connected. No, not so that they can moo-nitor (see what I did there?) Instagram, but to improve efficiency and productivity. For example, in the case of dairy farms, robots feed, milk and monitor cows’ health, collecting data along the way that helps farmers adjust techniques and processes to increase milk production, and thereby profitability.
The implications are massive. As the Financial Times pointed out, “Creating a system where a cow’s birth, life, produce and death are not only controlled but entirely predictable could have a dramatic impact on the efficiency of the dairy industry.”
From Dairy Farm to Data Center
So, how do connected cows factor into cybersecurity? By the simple fact that the IoT devices tasked with milking, feeding and monitoring them are turning dairy farms into data centers, which has major security implications. Because let’s face it: farmers know cows, not cybersecurity.
Indeed, the data collected are stored in data centers and/or a cloud environment, which opens farmers up to potentially costly cyberattacks. Think about it: The average U.S. dairy farm is a $1 million operation, and the average cow produces $4,000 in revenue per year. That’s a lot at stake—roughly $19,000 per week, given the average dairy farm’s herd—if a farm is struck by a ransomware attack.
It would literally be better for an individual farm to pay a weekly $2,850 ransom to keep the IoT network up. And if hackers were sophisticated enough to launch an industry-wide attack, the dairy industry would be better off paying $46 million per week in ransom rather than lose revenue.
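A quick back-of-the-envelope check of those figures, using only the averages stated above (a $1 million-per-year farm and $4,000 of revenue per cow per year), shows why the ransom math works in the attacker’s favor:

```python
# Assumed averages from the text, not industry statistics of my own.
FARM_REVENUE_PER_YEAR = 1_000_000   # average U.S. dairy farm operation
REVENUE_PER_COW = 4_000             # per cow, per year

herd_size = FARM_REVENUE_PER_YEAR // REVENUE_PER_COW   # implied herd: 250 cows
weekly_revenue = FARM_REVENUE_PER_YEAR / 52            # revenue at risk per week

print(f"herd: {herd_size} cows")                       # herd: 250 cows
print(f"weekly revenue at risk: ${weekly_revenue:,.0f}")  # ~$19,231

# A $2,850 weekly ransom is well under a week of lost revenue, which is
# exactly the economics ransomware operators count on.
assert 2_850 < weekly_revenue
```

The asymmetry between the ransom demand and the downtime cost is what makes paying look "rational" in the moment, and prevention cheaper than either.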
Admittedly, connected cows aren’t new; IoT devices have been assisting farmers for several years now. And it’s a booming business. Per the FT, “Investment in precision ‘agtech’ systems reached $3.2bn globally in 2016 (including $363m in farm management and sensor technology)…and is set to grow further as dairy farms become a test bed for the wider IoT strategy of big technology companies.”
But what is new is the rollout of 5G networks, which promise faster speeds, low latency and increased flexibility, seemingly ideal for managing IoT devices. But, as we’ve previously discussed, with new benefits come new risks. As network architectures evolve to support 5G, security vulnerabilities will abound if cybersecurity isn’t prioritized and integrated into a 5G deployment from the outset.
In the new world of 5G, cyberattacks can become much more potent, as a single hacker can easily multiply into an army through botnet deployment. Indeed, 5G opens the door to a complex world of interconnected devices that hackers will be able to exploit via a single point of access in a cloud application to quickly expand an attack radius to other connected devices and applications. Just imagine the impact of a botnet deployment on the dairy industry.
A couple of months ago, I was on a call with a company that was in the process of evaluating DDoS mitigation services to protect its data centers. This company runs mission-critical applications and was looking for comprehensive coverage from various types of attacks, including volumetric, low and slow, encrypted floods, and application-layer attacks.
During the discussion, our team asked a series of technical questions related to their ISP links, types of applications, physical connectivity, and more. And we provided an attack demo using our sandbox lab in Mahwah.
Everything was moving along just fine until the customer asked us for a Proof of Concept (PoC), what most would consider a natural next step in the vendor evaluation process.
About That Proof of Concept…
How would you do a DDoS PoC? You rack and stack the DDoS mitigation appliance (or enable the service if it is cloud-based), set up some type of management IP address, configure the protection policies, and off you go!
Well, when we spoke to this company, they said they would be happy to do all of that–at their disaster recovery data center located within a large carrier facility on the east coast. This sent my antenna up and I immediately asked a couple of questions that would turn out to be extremely important for all of us: Do you have attack tools to launch DDoS attacks? Do you take the responsibility to run the attacks? Well, the customer answered “yes” to both.
Being a trained SE, I then asked why they needed to run the PoC in their lab and if there was a way we could demonstrate that our DDoS mitigation appliance can mitigate a wide range of attacks using our PoC script. As it turned out, the prospect was evaluating other vendors and, to compare apples to apples (thereby giving all vendors a fair chance), were already conducting a PoC in their data center with their appliance.
We shipped the PoC unit quickly, and the prospect, true to their word, got the unit racked, stacked and cabled up, ready to go. We configured the device, then gave them the green light to launch attacks. And then the prospect told us to launch the attacks; they didn’t have any attack tools after all.
A Bad Idea
Well, most of us in this industry do have DDoS testing tools, so what’s the big deal? As vendors who provide cybersecurity solutions, we shouldn’t have any problems launching attacks over the Internet to test out a DDoS mitigation service…right?
Launching attacks over the Internet is ILLEGAL. You need written permission from the entity being attacked to launch a DDoS attack. You can try your luck if you want, but this is akin to running a red light. You may get away with it, but if you are caught the repercussions are damaging and expensive.

Your ISP might block your IP address. Many ISPs have DDoS defenses within their infrastructure, and if they see someone launching a malicious attack, they might block your access. Good luck sorting that one out with your ISP!

Your attacks may not reach the desired testing destination. Even if your ISP doesn’t block you and the FBI doesn’t come knocking, there might be one or more DDoS mitigation devices between you and the customer data center where the destination IP being tested resides. These devices could very well mitigate the attack you launch, preventing you from doing the testing.
Those are three big reasons why doing DDoS testing in a production data center is, simply put, a bad idea. Especially if you don’t have a legal, easy way to generate attacks.
So what are the alternatives? How should you do DDoS testing?
With DDoS testing, the focus should be on evaluating the mitigation features: Can the service detect attacks quickly? Can it mitigate immediately? Can it adapt to attacks that are morphing? Can it report accurately on the attack it is seeing and what is being mitigated? How accurate is the mitigation (what about false positives)? If you run a DDoS PoC in a production environment, you will spend most of your resources and time testing connectivity and spinning your wheels on operational aspects (LAN cabling, console cabling, change control procedures, paperwork, etc.). This is not what you want to test; you want to test DDoS mitigation! It’s like trying to test how fast a sports car can go on a very busy street. You will end up testing the brakes, but you won’t get very far with any speed testing.
Test things out in your lab. Even better, let the vendor test it in their lab for you. This will let both parties focus on the security features rather than get caught up with the headaches of logistics involved with shipping, change control, physical cabling, connectivity, routing etc.
It is perfectly legal to use test tools like Kali Linux, Backtrack etc. within a lab environment. Launch attacks to your heart’s content, morph the attacks, see how the DDoS service responds.
If you don’t have the time or expertise to launch attacks yourself, hire a DDoS testing service. Companies like activereach, Redwolf security or MazeBolt security do this for a living, and they can help you test the DDoS mitigation service with a wide array of customized attacks. This will cost you some money, but if you are serious about the deployment, you will be doing yourself a favor and saving future work.
Finally, evaluate multiple vendors in parallel. You can never do this in a production data center. However, in a lab you can keep the attacks and the victim applications constant, while just swapping in the DDoS mitigation service. This will give you an apples-to-apples comparison of the actual capabilities of each vendor and will also shorten your evaluation cycle.
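The lab-based, apples-to-apples approach above can be sketched as a tiny evaluation harness: the attack scenarios and the victim stay fixed while the mitigation service under test is swapped in. The two "vendor" detectors below are stand-in stubs with made-up behavior, purely to show the harness shape; only the structure, not the numbers, is the point.

```python
# Fixed attack scenarios, run identically against every candidate service.
SCENARIOS = [
    {"name": "SYN flood",   "pps": 500_000, "morphing": False},
    {"name": "HTTPS flood", "pps": 40_000,  "morphing": False},
    {"name": "burst attack","pps": 900_000, "morphing": True},
]

def vendor_a(scenario):
    # Stub: static signatures, struggles once the attack starts morphing.
    return {"detected": not scenario["morphing"], "seconds_to_mitigate": 18}

def vendor_b(scenario):
    # Stub: behavioral detection, adapts to morphing attacks.
    return {"detected": True, "seconds_to_mitigate": 10}

def run_eval(mitigator):
    """Run every scenario through one mitigation service and tally results."""
    results = [mitigator(s) for s in SCENARIOS]
    return {
        "detection_rate": sum(r["detected"] for r in results) / len(results),
        "avg_seconds": sum(r["seconds_to_mitigate"] for r in results) / len(results),
    }

for name, fn in [("vendor A", vendor_a), ("vendor B", vendor_b)]:
    print(name, run_eval(fn))
```

Because the scenario list never changes between runs, differences in detection rate and time-to-mitigate are attributable to the service itself, which is exactly what a production PoC cannot guarantee.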
Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.
Service availability is a key component of the user experience. Customers expect services to be constantly available and fast-responding, and any downtime can result in disappointed users, abandoned shopping carts, and lost customers.
Consequently, DDoS attacks are increasing in complexity, size and duration. Radware’s 2018 Global Application and Network Security Report found that over the course of a year, sophisticated DDoS attacks, such as burst attacks, increased by 15%, HTTPS floods grew by 20%, and over 64% of customers were hit by application-layer (L7) DDoS attacks.
Some Attacks are a Two-Way Street
As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.
Some examples of such attacks include:
Out of State Protocol Attacks: Some DDoS attacks exploit weaknesses in protocol communication processes, such as TCP’s three-way handshake sequence, to create ‘out-of-state’ connection requests that draw out connections and exhaust server resources. While some attacks of this type, such as a SYN flood, can be stopped by examining the inbound channel only, others require visibility into the outbound channel as well.
An example of this is an ACK flood, whereby attackers continuously send forged TCP ACK packets toward the victim host. The target host tries to associate each ACK with an existing TCP connection and, if no matching connection exists, drops the packet. This process consumes server resources, however, and large numbers of such requests can deplete them entirely. To correctly identify and mitigate such attacks, defenses need visibility into both the inbound SYNs and the outbound SYN/ACK replies, so they can verify whether an ACK packet is associated with any legitimate connection request.
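The bidirectional logic described above can be sketched as a small state table: record each outbound SYN/ACK the server sends, and treat any inbound ACK with no matching entry as an orphan. This is a minimal illustration only; the class name, method names, and threshold are invented for the example, not a real product API.

```python
class AckFloodDetector:
    """Toy sketch: validates inbound ACKs against outbound SYN/ACKs,
    which requires visibility into BOTH traffic directions."""

    def __init__(self, drop_threshold=1000):
        self.pending = set()      # (client_ip, client_port) awaiting final ACK
        self.orphan_acks = 0      # inbound ACKs with no matching handshake
        self.drop_threshold = drop_threshold

    def on_outbound_synack(self, dst_ip, dst_port):
        # The server accepted a SYN; expect a final ACK from this peer.
        self.pending.add((dst_ip, dst_port))

    def on_inbound_ack(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key in self.pending:
            self.pending.discard(key)   # legitimate handshake completion
            return True
        self.orphan_acks += 1           # forged ACK: we never sent a SYN/ACK
        return False

    def under_attack(self):
        return self.orphan_acks > self.drop_threshold
```

With inbound-only visibility, the `pending` set could never be populated, and every ACK would look equally plausible; that is the point the example is meant to make.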
Reflection/Amplification Attacks: Such attacks exploit asymmetric responses between the connection requests and replies of certain protocols or applications. Again, some types of such attacks require visibility into both the inbound and outbound traffic channels.
An example of such an attack is a large-file outbound pipe saturation attack. Here, attackers identify a very large file on the target network and send connection requests to fetch it. Each request may be only a few bytes in size, but the ensuing reply can be extremely large, and large numbers of such requests can clog up the outbound pipe.
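One hedged way to surface this asymmetry is to track the ratio of outbound to inbound bytes per client and flag clients whose replies dwarf their requests. The class name and thresholds below are illustrative, not recommendations.

```python
from collections import defaultdict

class AsymmetryMonitor:
    """Toy sketch: flags clients whose outbound byte volume vastly
    exceeds their inbound volume (pipe-saturation pattern)."""

    def __init__(self, ratio_limit=100, min_bytes_out=1_000_000):
        self.bytes_in = defaultdict(int)
        self.bytes_out = defaultdict(int)
        self.ratio_limit = ratio_limit        # out/in ratio that looks abusive
        self.min_bytes_out = min_bytes_out    # ignore low-volume clients

    def record(self, client_ip, inbound_bytes, outbound_bytes):
        self.bytes_in[client_ip] += inbound_bytes
        self.bytes_out[client_ip] += outbound_bytes

    def suspicious_clients(self):
        flagged = []
        for ip, out in self.bytes_out.items():
            inb = max(self.bytes_in[ip], 1)   # avoid division by zero
            if out >= self.min_bytes_out and out / inb >= self.ratio_limit:
                flagged.append(ip)
        return flagged
```

Again, the signal only exists if the defense can see the outbound channel; the inbound requests alone look tiny and harmless.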
Another example is the memcached amplification attack. Although such attacks are most frequently used to overwhelm a third-party target via reflection, they can also be used to saturate the outbound channel of the targeted network.
Scanning Attacks: Large-scale network scans are not just a security risk; they frequently bear the hallmarks of a DDoS attack, flooding the network with malicious traffic. Such scans send large numbers of connection requests to host ports to see which ports answer back (indicating that they are open), which also generates high volumes of error responses from closed ports. Mitigating these attacks requires visibility into return traffic in order to measure the error-response rate relative to legitimate traffic, allowing defenses to conclude that an attack is taking place.
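The error-response-rate idea can be sketched in a few lines: count, per source, how many observed replies on the outbound channel were errors (RSTs, ICMP port-unreachable) versus normal responses. Function name and thresholds are invented for illustration.

```python
from collections import defaultdict

def scanning_sources(events, error_rate_threshold=0.7, min_probes=50):
    """Toy sketch of scan detection from return traffic.

    events: iterable of (src_ip, was_error_reply) pairs observed on the
    outbound channel. Returns the set of sources whose error-reply rate
    looks like a port scan rather than legitimate traffic."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for src, was_error in events:
        totals[src] += 1
        if was_error:
            errors[src] += 1
    return {
        src for src, n in totals.items()
        if n >= min_probes and errors[src] / n >= error_rate_threshold
    }
```

A legitimate client mostly hits open ports, so its error rate stays low; a scanner probing sequential ports hits mostly closed ones, and the ratio gives it away.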
Server Cracking: Similar to scanning attacks, server cracking attacks involve sending large amounts of requests in order to brute-force system passwords. Similarly, this leads to a high error reply rate, which requires visibility into both the inbound and outbound channels in order to identify the attack.
Stateful Application-Layer DDoS Attacks: Certain types of application-layer (L7) DDoS attacks exploit known protocol weaknesses in order to create large amounts of spoofed requests that exhaust server resources. Mitigating such attacks requires state-aware, bi-directional visibility to identify attack patterns, so that the relevant attack signature can be applied to block them. Examples include low-and-slow attacks and application-layer (L7) SYN floods, which draw out HTTP and TCP connections in order to continuously consume server resources.
As online service availability becomes ever more important, hackers are devising more sophisticated attacks than ever to overwhelm defenses. Many such attack vectors – frequently the more sophisticated and potent ones – either target or take advantage of the outbound communication channel.
Therefore, in order for organizations to fully protect themselves, they must deploy protections that allow bi-directional inspection of traffic in order to identify and neutralize such threats.
According to a study by the Ponemon Institute in December 2018, bots comprised over 52% of all Internet traffic. While ‘good’ bots discreetly index websites, fetch information and content, and perform useful tasks for consumers and businesses, ‘bad’ bots have become a primary and growing concern to CISOs, webmasters, and security professionals today. They carry out a range of malicious activities, such as account takeover, content scraping, carding, form spam, and much more. The negative impacts resulting from these activities include loss of revenue and harm to brand reputation, theft of content and personal information, lowered search engine rankings, and distorted web analytics, to mention a few.
For these reasons, researchers at Forrester recommend that, “The first step in protecting your company from bad bots is to understand what kinds of bots are attacking your firm.” So let us briefly look at the main bad bot threats CISOs have to face, and then delve into their industry-wise prevalence.
Bad Bot Attacks That Worry CISOs The Most
The impact of bad bots results from the specific activities they’re programmed to execute. Many of them aim to defraud businesses and/or their customers for monetary gain, while others involve business competitors and nefarious parties who scrape content (including articles, reviews, and prices) to gain business intelligence.
Account Takeover attacks use credential stuffing and brute force techniques to gain unauthorized access to customer accounts.
Application DDoS attacks slow down web applications by exhausting system resources, 3rd-party APIs, inventory databases, and other critical resources.
API Abuse results from nefarious entities exploiting API vulnerabilities to steal sensitive data (such as personal information and business-critical data), take over user accounts, and execute denial-of-service attacks.
Ad Fraud is the generation of false impressions and illegitimate clicks on ads shown on publishing sites and their mobile apps. A related form of attack is affiliate marketing fraud (also known as affiliate ad fraud) which is the use of automated traffic by fraudsters to generate commissions from an affiliate marketing program.
Carding attacks use bad bots to make multiple payment authorization attempts to verify the validity of payment card data, expiry dates, and security codes for stolen payment card data (by trying different values). These attacks also target gift cards, coupons and voucher codes.
Scraping is a strategy often used by competitors who deploy bad bots on your website to steal business-critical content, product details, and pricing information.
Skewed Analytics is a result of bot traffic on your web property, which skews site and app metrics and misleads decision making.
Form Spam refers to the posting of spam leads and comments, as well as fake registrations on marketplaces and community forums.
Denial of Inventory is used by competitors/fraudsters to deplete goods or services in inventory without ever purchasing the goods or completing the transaction.
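Several of the attacks listed above (carding, account takeover, form spam) share a telltale signal: bursts of failed attempts from the same client in a short window. A minimal sliding-window failure counter can sketch the idea; the class name, window, and threshold are all invented for the example and would be tuned per application in practice.

```python
import time
from collections import defaultdict, deque

class CardingGuard:
    """Toy sketch: flags carding-style behavior by counting failed
    payment authorizations per client within a sliding time window."""

    def __init__(self, window_seconds=60, max_failures=5):
        self.window = window_seconds
        self.max_failures = max_failures
        self.failures = defaultdict(deque)   # ip -> timestamps of failed auths

    def record_auth(self, ip, succeeded, now=None):
        """Returns True when the client should be blocked or challenged."""
        now = time.time() if now is None else now
        if succeeded:
            return False
        q = self.failures[ip]
        q.append(now)
        while q and now - q[0] > self.window:
            q.popleft()                      # drop failures outside the window
        return len(q) > self.max_failures
```

A human mistyping a card number fails once or twice; a bot iterating over stolen card data fails dozens of times a minute, which is the pattern this counter keys on.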
Industry-wise Impact of Bot Traffic
To illustrate the impact of bad bots, we aggregated all the bad bot traffic that was blocked by our Bot Manager during Q2 and Q3 of 2018 across four industries selected from our diverse customer base: E-commerce, Real Estate, Classifieds & Online Marketplaces, and Media & Publishing. While the prevalence of bad bots can vary considerably over time and even within the same industry, our data shows that specific types of bot attacks tend to target certain industries more than others.
E-commerce
Bad bots target e-commerce sites to carry out a range of attacks, such as scraping, account takeovers, carding, scalping, and denial of inventory. However, the most prevalent bad bot threat encountered by our e-commerce customers during our study was affiliate fraud: bad bot traffic made up roughly 55% of the overall traffic on pages that contain links to affiliates. Content scraping and carding were the most prevalent bad bot threats to e-commerce portals two to five years ago, but the latest data indicates that attempts at affiliate fraud and account takeover are growing rapidly.
Real Estate
Bad bots often target real estate portals to scrape listings and the contact details of realtors and property owners, which has historically been the biggest problem caused by bots on these portals. However, we are now also seeing growing volumes of form spam and fake registrations. Bad bots comprised 42% of total traffic on pages with forms in the real estate sector. These malicious activities anger advertisers, reduce marketing ROI and conversions, and produce skewed analytics that hinder decision making. Bad bot traffic also strains web infrastructure, degrades the user experience, and increases operational expenses.
Classifieds & Online Marketplaces
Along with real estate businesses, classifieds sites and online marketplaces are among the biggest targets for content and price scrapers. Their competitors use bad bots not only to scrape their exclusive ads and product prices to illegally gain a competitive advantage, but also to post fake ads and spam web forms to access advertisers’ contact details. In addition, bad bot traffic strains servers, third-party APIs, inventory databases and other critical resources, creates application DDoS-like situations, and distorts web analytics. Bad bot traffic accounted for over 27% of all traffic on product pages from which prices could be scraped, and nearly 23% on pages with valuable content such as product reviews, descriptions, and images.
Media & Publishing
More than ever, digital media and publishing houses are scrambling to deal with bad bots that carry out automated attacks such as scraping of proprietary content and ad fraud. The industry is beset with high levels of ad fraud, which hurts advertisers and publishers alike. Comment spam often derails discussions and results in negative user experiences, and bot traffic inflates traffic metrics, preventing marketers from gaining accurate insights. Over the six-month period we analyzed, bad bots accounted for 18% of overall traffic on pages with high-value content, 10% on ads, and nearly 13% on pages with forms.
As we can see, security chiefs across a range of industries are facing increasing volumes and types of bad bot attacks. What can they do to mitigate malicious bots that are rapidly evolving in ways that make them significantly harder to detect? Conventional security systems that rely on rate-limiting and signature-matching approaches were never designed to detect human-like bad bots that rapidly mutate and operate in widely-distributed botnets using ‘low and slow’ attack strategies and a multitude of (often hijacked) IP addresses.
The core challenge for any bot management solution, then, is to detect every visitor’s intent to help differentiate between human and malicious non-human traffic. As more bad bot developers incorporate artificial intelligence (AI) to make human-like bots that can sneak past security systems, any effective countermeasures must also leverage AI and machine learning (ML) techniques to accurately detect the most advanced bad bots.
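To make the notion of “detecting intent” concrete, here is a deliberately naive, hand-weighted score over behavioral features of a session. A real bot manager would learn these weights with ML over far richer signals; every feature name and weight below is an invented illustration of combining signals, not Radware’s method.

```python
def bot_score(session):
    """Toy intent score for one visitor session (dict of features).
    Higher score = more bot-like. Weights are illustrative only."""
    score = 0.0
    if session.get("requests_per_minute", 0) > 120:
        score += 0.4     # inhumanly fast paging
    if not session.get("executed_js", True):
        score += 0.3     # headless clients often skip JavaScript
    if session.get("mouse_events", 0) == 0:
        score += 0.2     # no pointer activity at all
    if session.get("distinct_urls", 0) > 500:
        score += 0.3     # exhaustive crawl pattern
    return min(score, 1.0)

def classify(session, threshold=0.5):
    return "bot" if bot_score(session) >= threshold else "human"
```

The limitation is exactly the one the text describes: sophisticated bots mimic human values for each individual feature, which is why fixed rules like these give way to ML models trained on many correlated signals.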
Read “Radware’s 2018 Web Application Security Report” to learn more.