Over 50% of web traffic consists of bots, and 89% of organizations have suffered attacks against web applications. Websites and mobile apps are two of the biggest revenue drivers for businesses and help solidify a company’s reputation with tech-savvy consumers. However, these digital engagement tools are coming under increasing threat from an array of sophisticated cyberattacks, including bots.
While a percentage of bots are used to automate business processes and tasks, others are designed for malicious purposes, including account takeover, content scraping, payment fraud and denial-of-service attacks. Often, these attacks are carried out by competitors looking to undermine a company’s competitive advantage, steal information or drive up its online marketing costs.
Sophisticated, next-generation bots can evade traditional security controls and go undetected by application owners. However, their impact can be noticed, and several indicators can alert a company to malicious bot activity.
Why a WAF Isn’t an Effective Bot Detection Tool
WAFs are primarily created to safeguard websites against application vulnerability exploitations such as SQL injection, cross-site scripting (XSS), cross-site request forgery, session hijacking and other web attacks. WAFs typically feature basic bot mitigation capabilities and can block bots based on IPs or device fingerprinting.
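To make that concrete, the list-based blocking a typical WAF applies can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the addresses and User-Agent substrings below are made up:

```python
# Basic, list-based bot blocking of the kind a WAF typically offers.
BLOCKED_IPS = {"203.0.113.7", "198.51.100.23"}            # example addresses
BLOCKED_UA_SUBSTRINGS = ("python-requests", "curl", "scrapy")

def basic_waf_bot_check(source_ip: str, user_agent: str) -> bool:
    """Return True if the request should be blocked.

    This catches only naive bots: a next-generation bot simply rotates
    IPs and spoofs a browser User-Agent to walk straight past it.
    """
    if source_ip in BLOCKED_IPS:
        return True
    ua = user_agent.lower()
    return any(fragment in ua for fragment in BLOCKED_UA_SUBSTRINGS)
```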
However, WAFs fall short when facing more advanced, automated threats. Moreover, next-generation bots use sophisticated techniques to remain undetected, such as mimicking human behavior, abusing open-source tools or generating multiple violations in different sessions.
Against these sophisticated threats, WAFs won’t get the job done.
The Benefits of Synergy
As the complexity of multi-vector cyberattacks increases, security systems must work in concert to mitigate these threats. In the case of application security, a combination of behavioral analytics to detect malicious bot activity and a WAF to protect against vulnerability exploitations and guard sensitive data is critical.
Moreover, many threats can be blocked at the network level before reaching the application servers. This not only reduces risk, but also reduces the processing loads on the network infrastructure by filtering malicious bot traffic.
Read “How to Evaluate Bot Management Solutions” to learn more.
Modern applications are difficult to secure. Whether they are web or mobile, custom developed or SaaS-based, applications are now scattered across different platforms and frameworks. To accelerate service development and business operations, applications rely on third-party resources that they interact with via APIs, well-orchestrated by state-of-the-art automation and synchronization tools. As a result, the attack surface becomes greater as there are more blind spots – higher exposure to risk.
Applications, as well as APIs, must be protected against an expanding variety of attack methods and sources and must be able to make educated decisions in real time to mitigate automated attacks. Moreover, applications constantly change, and security policies must adapt just as fast. Otherwise, businesses face increased manual labor and operational costs, in addition to a weaker security posture.
The WAF Ten Commandments
The OWASP Top 10 list serves as an industry benchmark for the application security community, and provides a starting point for ensuring protection from the most common and virulent threats, application misconfigurations that can lead to vulnerabilities, and detection tactics and mitigations. It also defines the basic capabilities required from a Web Application Firewall in order to protect against common attacks targeting web applications like injections, cross-site scripting, CSRF, session hijacking, etc. There are numerous ways to exploit these vulnerabilities, and WAFs must be tested for security effectiveness.
52% of internet traffic is bot generated, half of which is attributed to “bad” bots. Unfortunately, 79% of organizations can’t make a clear distinction between good and bad bots. The impact is felt across all business arms as bad bots take over user accounts and payment information, scrape confidential data, hold up inventory and skew marketing metrics, thus leading to wrong decisions. Sophisticated bots mimic human behavior and easily bypass CAPTCHA or other challenges. Distributed bots render IP-based and even device fingerprinting based protection ineffective. Defenders must level up the game.
Machine-to-machine communications, integrated IoT devices, event-driven functions and many other use cases leverage APIs as the glue for agility. Many applications gather information and data from services with which they interact via APIs. Threats to API vulnerabilities include injections, protocol attacks, parameter manipulations, invalidated redirects and bot attacks. Businesses tend to grant APIs access to sensitive data without inspecting or protecting them against cyberattacks. Don’t be one of them.
Different forms of application-layer DoS attacks are still very effective at bringing application services down. This includes HTTP/S floods, low and slow attacks (Slowloris, LOIC, Torshammer), dynamic IP attacks, buffer overflow, Brute Force attacks and more. Driven by IoT botnets, application-layer attacks have become the preferred DDoS attack vector. Even the greatest application protection is worthless if the service itself can be knocked down.
For modern DevOps, agility is valued at the expense of security. Development and roll-out methodologies, such as continuous delivery, mean applications are continuously modified. It is extremely difficult to maintain a valid security policy that safeguards sensitive data under such dynamic conditions without creating a high number of false positives. This task has grown beyond human capacity: the error rate and the additional costs of manual policy maintenance are enormous. Organizations need machine-learning-based solutions that map application resources, analyze possible threats, and create and optimize security policies in real time.
The widespread adoption of mobile and IoT devices, and increased use of cloud systems are driving a major change in modern application architecture. Application Programming Interfaces (APIs) have emerged as the bridge to facilitate communication between different application architectures. However, with the widespread deployment of APIs, automated attacks on poorly protected APIs are mounting. Personally Identifiable Information (PII), payment card details, and business-critical services are at risk due to automated attacks on APIs.
So what are key API vulnerabilities, and how can you protect against API abuse?
Many APIs only check authentication status, but not whether the request comes from a genuine user. Attackers exploit such flaws in various ways (including session hijacking and account aggregation) to imitate genuine API calls. Attackers also target APIs by reverse-engineering mobile apps to discover how the app calls the API. If API keys are embedded in the app, this can result in an API breach. API keys should not be used alone for user authentication.
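One common mitigation is to sign each request with a per-user secret instead of sending a static key. The sketch below assumes a shared HMAC secret; the header names and message layout are illustrative, not a standard:

```python
import hashlib
import hmac
import time

def sign_request(secret: bytes, method: str, path: str, body: bytes) -> dict:
    """Client side: bind a request to a per-user secret.

    Unlike a static API key, the signature changes with every request,
    so extracting one signed request from a mobile app does not let an
    attacker forge arbitrary API calls.
    """
    timestamp = str(int(time.time()))
    message = b"\n".join([method.encode(), path.encode(), timestamp.encode(), body])
    signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(secret: bytes, method: str, path: str, body: bytes,
                   headers: dict, max_skew: int = 300) -> bool:
    """Server side: recompute the signature and reject stale requests."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False  # replayed or badly delayed request
    message = b"\n".join([method.encode(), path.encode(),
                          headers["X-Timestamp"].encode(), body])
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```

Tampering with the method, path, timestamp or body invalidates the signature, and the timestamp check limits replay of captured requests.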
Many APIs lack robust encryption between the API client and the API server. Attackers exploit such vulnerabilities through man-in-the-middle attacks. They also intercept unencrypted or poorly protected API transactions between client and server to steal sensitive information or alter transaction data.
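On the client side, much of this risk comes down to disabled or weakened certificate validation. A minimal sketch of a strict TLS setup using Python's standard library (the helper names are ours, not from any framework):

```python
import socket
import ssl

def strict_client_context() -> ssl.SSLContext:
    """A client TLS context with certificate validation, hostname
    checking and modern protocol versions enforced."""
    context = ssl.create_default_context()  # CERT_REQUIRED + check_hostname
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return context

def open_verified_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Connect to an API server; raises ssl.SSLCertVerificationError when a
    man-in-the-middle presents an untrusted or mismatched certificate."""
    sock = socket.create_connection((host, port), timeout=10)
    return strict_client_context().wrap_socket(sock, server_hostname=host)
```

The anti-pattern to avoid is setting `verify_mode = ssl.CERT_NONE` or disabling hostname checks "to make it work," which is exactly what man-in-the-middle attacks rely on.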
What’s more, the ubiquitous use of mobile devices, cloud systems, and microservice design patterns has further complicated API security, as multiple gateways are now involved in facilitating interoperability among diverse web applications. The encryption of data flowing through all these channels is paramount.
APIs are vulnerable to business logic abuse. Attackers make repeated, large-scale API calls against an application server, or send slow POST requests, to exhaust resources and cause a denial of service. A DDoS attack on an API can result in massive disruption to a front-end web application.
Poor Endpoint Security
Most IoT devices and micro-service tools are programmed to communicate with their server through API channels. These devices authenticate themselves on API servers using client certificates. Hackers attempt to get control over an API from the IoT endpoint and if they succeed, they can easily re-sequence API order that can result in a data breach.
A bot management solution that defends APIs against automated attacks and ensures that only genuine users have the ability to access APIs is paramount. When evaluating such a solution, consider whether it offers broad attack detection and coverage, comprehensive reporting and analytics, and flexible deployment options.
Other steps you can (and should) take include:
Monitor and manage API calls coming from automated scripts (bots)
Drop primitive authentication
Implement measures to prevent API access by sophisticated human-like bots
Robust encryption is a must-have
Deploy token-based rate limiting equipped with features to limit API access based on the number of IPs, sessions, and tokens
Implement robust security on endpoints
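The token-based rate limiting mentioned above is often implemented as a token bucket per client identifier (IP, session or token). A minimal sketch, with illustrative parameters:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: each API call spends one token, and
    tokens refill at `rate` per second up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: float(capacity))
        self.updated = defaultdict(time.monotonic)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.updated[client_id]
        self.updated[client_id] = now
        # Refill proportionally to the time since the last request.
        self.tokens[client_id] = min(self.capacity,
                                     self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False  # bucket empty: throttle this client
```

The same structure works whether `client_id` is an IP address, a session ID or an API token, which is why limits are usually applied across all three.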
Read “Radware’s 2018 Web Application Security Report” to learn more.
Humans aren’t the only ones consumed with connected devices these days. Cows have joined our ranks.
Believe it or not, farmers are increasingly relying on IoT devices to keep their cattle connected. No, not so that they can moo-nitor (see what I did there?) Instagram, but to improve efficiency and productivity. For example, in the case of dairy farms, robots feed, milk and monitor cows’ health, collecting data along the way that helps farmers adjust techniques and processes to increase milk production, and thereby profitability.
The implications are massive. As the Financial Times pointed out, “Creating a system where a cow’s birth, life, produce and death are not only controlled but entirely predictable could have a dramatic impact on the efficiency of the dairy industry.”
From Dairy Farm to Data Center
So, how do connected cows factor into cybersecurity? By the simple fact that the IoT devices tasked with milking, feeding and monitoring them are turning dairy farms into data centers, which has major security implications. Because let’s face it, farmers know cows, not cybersecurity.
Indeed, the data collected are stored in data centers and/or a cloud environment, which opens farmers up to potentially costly cyberattacks. Think about it: The average U.S. dairy farm is a $1 million operation, and the average cow produces $4,000 in revenue per year. That’s a lot at stake—roughly $19,000 per week, given the average dairy farm’s herd—if a farm is struck by a ransomware attack.
It could actually be cheaper for an individual farm to pay a weekly $2,850 ransom to keep the IoT network up. And if hackers were sophisticated enough to launch an industry-wide attack, the dairy industry would be better off paying $46 million per week in ransom rather than lose that revenue.
Admittedly, connected cows aren’t new; IoT devices have been assisting farmers for several years now. And it’s a booming business. Per the FT, “Investment in precision ‘agtech’ systems reached $3.2bn globally in 2016 (including $363m in farm management and sensor technology)…and is set to grow further as dairy farms become a test bed for the wider IoT strategy of big…”
But what is new is the rollout of 5G networks, which promise faster speeds, low latency and increased flexibility, seemingly ideal for managing IoT devices. But, as we’ve previously discussed, with new benefits come new risks. As network architectures evolve to support 5G, security vulnerabilities will abound if cybersecurity isn’t prioritized and integrated into a 5G deployment from the start.
In the new world of 5G, cyberattacks can become much more potent, as a single hacker can easily multiply into an army through botnet deployment. Indeed, 5G opens the door to a complex world of interconnected devices that hackers will be able to exploit via a single point of access in a cloud application to quickly expand an attack radius to other connected devices and applications. Just imagine the impact of a botnet deployment on the dairy industry.
In our industry, the term bot applies to software applications designed to perform automated tasks at a high rate of speed. Typically, I use bots at Radware to aggregate data for intelligence feeds or to automate a repetitive task. I also spend a vast majority of my time researching and tracking emerging bots that were designed and deployed in the wild with bad intentions.
As I’ve previously discussed, there are generally two different types of bots, good and bad. Some of the good bots include Search Bots, Crawlers and Feed Fetchers that are designed to locate and index your website appropriately so it can become visible online. Without the aid of these bots, most small and medium-sized businesses wouldn’t be able to establish an authority online and attract visitors to their site.
On the dark side, criminals use the same technology to create bots for illicit and profitable activities such as scraping content from one website and selling it to another. These malicious bots can also be leveraged to take over accounts and generate fake reviews, as well as commit ad fraud and stress your web applications. Malicious bots have even been used to create fake social media accounts and influence elections.
With close to half of all internet traffic today being non-human, bad bots represent a significant risk for businesses, regardless of industry or channel.
As the saying goes, this is why we can’t have nice things.
If a malicious bot targets an online business, it will be impacted in one way or another, whether in website performance, sales conversions, competitive advantage, analytics or user experience. The good news is that organizations can take action against bot activity in real time, but first, they need to understand their own risk before considering a solution.
E-Commerce – The e-commerce industry faces bot attacks that include account takeovers, scraping, inventory exhaustion, scalping, carding, skewed analytics, application DoS, Ad fraud, and account creation.
Media – Digital publishers are vulnerable to automated attacks such as Ad fraud, scraping, skewed analytics, and form spam.
Travel – The travel industries mainly deal with scraping attacks but can suffer from inventory exhaustion, carding and application DoS as well.
Social Networks – Social platforms deal with automated bots attacks such as account takeovers, account creation, and application DoS.
Ad Networks – Bots that create Sophisticated Invalid Traffic (SIVT) target ad networks for Ad fraud activity such as fraudulent clicks and impression performance.
Financial Institutions – Banking, financial and insurance industries are all high-value target for bots that leverage account takeovers, application DoS or content scraping.
Types of Application Attacks
It’s becoming increasingly difficult for conventional security solutions to track and report on sophisticated bots that are continuously changing their behavior, obfuscating their identity and utilizing different attack vectors for various industries. Once you begin to understand the risk posed by malicious automated bots, you can then start to focus on the attack vectors you may face as a result of their activity.
Account takeover – Account takeovers include credential stuffing, password spraying, and brute force attacks that are used to gain unauthorized access to a targeted account. Credential stuffing and password spraying are two popular techniques used today. Once hackers gain access to an account, they can begin additional stages of infection, data exfiltration or fraud.
Scraping – Scraping is the process of extracting data or information from a website and publishing it elsewhere. Content, price and inventory scraping is also used to gain a competitive advantage. These scraper bots crawl your web pages for specific information about your products. Typically, scrapers steal the entire content of websites or mobile applications and publish it to gain traffic.
Inventory exhaustion – Inventory exhaustion is when a bot is used to add hundreds of items to a cart and later, abandon them to prevent real shoppers from buying the products.
Inventory scalping – Hackers deploy retail bots to gain an advantage to buy goods and tickets during a flash sale, and then resell them later at a much higher price.
Carding – Carders deploy bots on checkout pages to validate stolen-card-details, and to crack gift cards.
Skewed analytics – Automated invalid traffic directed at your e-commerce portal can skew metrics and mislead decision making when applied to advertising budgets and other business decisions. Bots pollute metrics, disrupt funnel analysis, and inhibit KPI tracking.
Application DoS – Application DoS attacks slow down e-commerce portals by exhausting web server resources, third-party APIs, inventory databases and other critical resources to the point that they are unavailable to legitimate users.
Ad fraud – Bad bots are used to generate invalid traffic designed to create false impressions and generate illegitimate clicks on websites and mobile apps.
Account creation – Bots are used to create fake accounts on a massive scale for content spamming, SEO and skewing analytics.
Several indicators can alert a company to malicious bot activity:
Consecutive login attempts with different credentials from the same HTTP client
Unusual request activity for selected application content and data
Unexpected changes in website performance and metrics
A sudden increase in account creation rate
Elevated traffic for certain limited-availability goods or services
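The first indicator on this list lends itself to a simple sketch: track the usernames each HTTP client has recently tried, and flag clients that cycle through many distinct credentials. The thresholds below are illustrative:

```python
from collections import defaultdict, deque

class CredentialStuffingDetector:
    """Flags an HTTP client that attempts logins with many *different*
    usernames within its most recent attempts."""

    def __init__(self, window: int = 20, max_distinct_users: int = 5):
        self.max_distinct = max_distinct_users
        # Ring buffer of the last `window` usernames tried per client.
        self.recent = defaultdict(lambda: deque(maxlen=window))

    def record_login_attempt(self, client_id: str, username: str) -> bool:
        """Record one attempt; return True when the client looks automated."""
        attempts = self.recent[client_id]
        attempts.append(username)
        # A human retyping their password produces one username;
        # a credential-stuffing bot produces many distinct ones.
        return len(set(attempts)) > self.max_distinct
```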
Intelligence is the Solution
Finding a solution that arms partners and service providers with the latest information on potential attacks is critical. In my opinion, a Bot Intelligence Feed is one of the best ways to gain insight into the threats you face while identifying malicious bots in real time.
A Bot Intelligence Feed provides the latest data on newly detected IPs across bot categories such as data center bots, bad user agents, advanced persistent bots, backlink checkers, monitoring bots, aggregators, social network bots and spam bots. It also draws on third-party fraud intelligence directories and services that track externally flagged IPs. Together, this gives organizations the best chance to proactively close security holes and take action against emerging threat vectors.
Read “Radware’s 2018 Web Application Security Report” to learn more.
There’s an 87-gigabyte file containing 773 million unique email addresses and passwords being sold on online forums today, called “Collection #1.” We know that many users reuse the same passwords all over the internet; even after all the years of data breaches, account takeovers and thefts, user behavior stays the same. Most people want the least complex means possible to use a website.
So, what does this mean for businesses?
Anywhere you have applications guarded with username / password mechanisms, there’s going to be credential stuffing attacks, courtesy of botnets. A modern botnet is a distributed network of computers around the globe that can perform sophisticated tasks and is often comprised of compromised computers belonging to other people. Essentially, these botnets are looking to steal the sand from the beach, one grain at a time, and they are never going to stop. If anything, the levels of sophistication of the exploitation methods have grown exponentially.
Today, a Web Application Firewall (WAF) alone is not enough to fight botnets. WAFs can do some of the job, but today’s botnets are very sophisticated and can mimic real human behaviors. Many companies relied on CAPTCHA as their first line of defense, but it’s no longer sufficient to stop bots. In fact, there are now browser plugins to break CAPTCHA.
Case in point: In 2016 at BlackHat Asia, some presenters shared that they were 98% successful at breaking these mechanisms. 98%! We, as humans, are probably nowhere near that success rate. Personally, I’m likely at 70-80%, depending on what words (and backwards letters!) CAPTCHA presents while I’m rushing to get my work done. Even with picture CAPTCHA, I pass maybe 80% of my initial attempts; I can’t ever get those “select the edges of street signs” traps! So, what if bots are successful 98% of the time and humans only average 70%?
CAPTCHA Alone Won’t Save You
If your strategy to stop bots is flawed and you rely on CAPTCHA alone, what are some of the repercussions you may encounter? First, your web analytics will be severely skewed, impacting your ability to accurately gauge the real usage of your site. Secondly, advertising fraud from affiliate sites can run up your bill. Third, CAPTCHA-solving botnets will still be able to conduct other nefarious deeds, like manipulating inventory, scraping data, and launching attacks on your site.
Identification of good bots and bad bots requires a dedicated solution. Some of the largest websites in the world have admitted that this is an ongoing war for them. Machine learning and deep learning technologies are the only way to stay ahead in today’s world. If you do not have a dedicated anti-bot platform, you may be ready to start evaluating one today.
Read “Radware’s 2018 Web Application Security Report” to learn more.
In 2018, organizations reported a 10% increase in malware and bot attacks. Considering the pervasiveness (70%) of these types of attacks reported in 2017, this uptick is likely having a big impact on organizations globally. Compounding the issue is the fact that the majority of bots are actually leveraged for good intentions, not malicious ones. As a result, it is becoming increasingly difficult for organizations to identify the difference between the two, according to Radware’s Web Application Security in a Digitally Connected World report.
Bots are automated programs that run independently to perform a series of specific tasks, for example, collecting data. Sophisticated bots can handle complicated interactive situations. More advanced programs feature self-learning capabilities that can address automated threats against traditional security models.
Positive Impact: Business Acceleration
Automated software applications can streamline processes and positively impact overall business performance. They replace tedious human tasks and speed up processes that depend on large volumes of information, thus contributing to overall business efficiency and agility.
Good bots include:
Crawlers — are used by search engines and contribute to SEO and SEM efforts
Chatbots — automate and extend customer service and first response
Fetchers — collect data from multiple locations (for instance, live sporting events)
Pricers — compare pricing information from different services
Traders — are used in commercial systems to find the best quote or rate for a transaction
The Open Web Application Security Project (OWASP) lists 21 automated threats to applications that can be grouped together by business impacts:
Scraping and Data Theft — Bots try to access restricted areas of web applications to get hold of sensitive data such as access credentials, payment information and intellectual property. One method of collecting such information is called web scraping. A common example of a bot attack in this category targets e-commerce sites, where bots quickly hold or even fully clear out the inventory.
Performance — Bots can impact the availability of a website, bringing it to a complete or partial denial-of-service state. The consumption of resources such as bandwidth or server CPU immediately leads to a deterioration in the customer experience, lower conversions and a bad image. Attacks can be large and volumetric (DDoS) or not (low and slow, buffer overflow).
Poisoning Analytics — When a significant portion of a website’s visitors are fictitious, expect biased metrics, such as traffic attributed to fraudulent links. Compounding this issue is the fact that third-party tools designed to monitor website traffic often have difficulty filtering bot traffic.
Fraud and Account Takeover — With access to leaked databases such as Yahoo and LinkedIn, hackers use bots to run through usernames and passwords to gain access to accounts. Then they can access restricted files, inject scripts or make unauthorized transactions.
Spammers and Malware Downloaders — Malicious bots constantly target mobile and web applications. Using sophisticated techniques like spoofing their IPs, mimicking user behavior (keystrokes, mouse movements), abusing open-source tools (PhantomJS) and headless browsers, bots bypass CAPTCHA, challenges and other security heuristics.
Crude bot attacks against websites are easy to block with IP- and reputation-based signatures and rules. However, because of the increasing sophistication and frequency of attacks, it is important to be able to uniquely identify the attacking machine. This process is referred to as device fingerprinting. The process should be IP agnostic, yet unique enough to be acted upon with confidence. At times, resourceful attacking sources may actively try to manipulate the fingerprint extracted by the web tool, so it should also be resistant to client-side manipulation.
Web client fingerprint technology introduces significant value in the context of automated attacks, such as web scraping; Brute Force and advanced availability threats, such as HTTP Dynamic Flood; and low and slow attacks, where the correlation across multiple sessions is essential for proper detection and mitigation.
For each fingerprint-based, uniquely identified source, a historical track record is stored with all security violations, activity records and application session flows. Each abnormal behavior is registered and scored. Violation examples include SQL injection, suspicious session flow and high page access rate. Once a threshold is reached, the source with the marked fingerprint will not be allowed to access the secured application.
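A minimal sketch of that scoring loop, with illustrative weights and threshold (a real deployment would tune both per application):

```python
from collections import defaultdict

# Illustrative weights for the violation types mentioned above.
VIOLATION_SCORES = {
    "sql_injection": 50,
    "suspicious_session_flow": 20,
    "high_page_access_rate": 10,
}
BLOCK_THRESHOLD = 100

class FingerprintTracker:
    """Keeps a per-fingerprint record of violations and blocks the
    source once its cumulative score crosses the threshold."""

    def __init__(self):
        self.scores = defaultdict(int)
        self.history = defaultdict(list)  # full violation track record

    def register_violation(self, fingerprint: str, violation: str) -> None:
        self.history[fingerprint].append(violation)
        # Unknown violation types get a small default score.
        self.scores[fingerprint] += VIOLATION_SCORES.get(violation, 5)

    def is_blocked(self, fingerprint: str) -> bool:
        return self.scores[fingerprint] >= BLOCK_THRESHOLD
```

Because the key is a device fingerprint rather than an IP address, the track record follows the attacking machine even when it rotates addresses.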
Ultimately, understanding and managing bots isn’t about crafting a strategy driven by a perceived negative attitude toward bots because, as we’ve explained, bots serve many useful purposes for propelling the business forward. Rather, it’s about equipping your organization to act as a digital detective to mitigate malicious traffic without adversely impacting legitimate traffic.
Organizations need to embrace technological advancements that yield better business performance while integrating the necessary security measures to guard their customer data and experience.
Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.
Let me cut to the chase: The financial services industry is rapidly changing to satisfy its new best friend, millennials. There’s no getting around it; their sheer numbers necessitate attention. Millennials represent one in three Americans in the workforce, 25 percent of the global population (fun fact: there are more millennials in China than people in the United States!), and have $200 billion in buying power. They are the largest single generation in the workforce today. And, most importantly for financial services, they are 43 percent of all mobile banking and finance usage.
Digital Trumps Traditional
Indeed, millennials don’t value traditional banking like previous generations. Born into a digitally-connected era, they heavily rely on the Internet and smartphones to conduct their business, including managing their finances. According to research from Gemalto, more than one in four (27%) millennials have never even visited a bank branch. Comparatively, 77 percent use online services every month and many consider mobile banking “essential,” with nearly 40 percent reporting that financial apps help them control their finances. This becomes critically important in maintaining trust. Since they’ve never been to a branch, there are no people, no relationships to build loyalty. All trust, loyalty and affinity for the brand comes 100% from experience on the web and via mobile apps. Any breach here, and trust is broken…forever.
And, it’s worth noting, millennials want financial help. Millennials grew up during the global financial crisis, so managing debt responsibly and avoiding risk is very important to them. A TD Bank survey designed to understand these young adults’ banking behaviors found that “while 59 percent of millennials reported that they are ‘extremely’ or ‘very’ knowledgeable about their day-to-day banking products like checking accounts, they still want advice on personal finance topics,” including savings, credit cards and creating a budget.
In other words, millennials value tools and advice that give them control over debt and credit alike—which helps explain their reliance on fintech over traditional banks for financial advice and things like debt consolidation loans. In fact, millennials are driving a surge in personal loans, 36 percent of which are from fintech lenders.
All these statistics converge to make one key point: While there is a huge opportunity for fintech providers to capture market share and growth, there is also sizable risk. Why? Because data security is top of mind for these so-called “digital natives.” They understand the liabilities of trusting organizations, like financial institutions, with their online data and expect that it will be well guarded 24/7 with no lapses.
If it isn’t? Say goodbye to your millennial customer base; millennials are 2.5 times more likely than their older counterparts to change banks if they aren’t pleased. And one surefire way to keep them happy is with a secure mobile and/or online customer experience. After all, the number one tool millennials want is better mobile security for financial transactions.
Don’t risk losing the most connected, powerful consumer demographic because of lax security. The guaranteed fallout, including customer attrition and reputation loss, simply isn’t worth the risk. Proactively delivering a secure customer experience is paramount to maintaining a competitive advantage and capturing the trust of your most important customers.
Read “The Millennial View on Data Security” today.
The S in HTTPS is supposed to mean that encrypted traffic is secure. For attackers, it just means that they have a larger attack surface from which to launch assaults on the applications to exploit the security vulnerabilities. How should organizations respond?
Most web traffic is encrypted to provide better privacy and security. As of 2018, over 70% of webpages are loaded over HTTPS. Radware expects this trend to continue until nearly all web traffic is encrypted. The major drivers pushing adoption rates are the availability of free SSL certificates and the perception that clear-text traffic is insecure.
While encrypting traffic is a vital practice for organizations, cyber criminals are not necessarily deterred by the practice. They are looking for ways to take advantage of encrypted traffic as a platform from which to launch attacks that can be difficult to detect and mitigate, especially at the application layer. As encrypted applications grow more complex, the potential attack surface grows larger. Organizations need to incorporate protection of the application layer as part of their overall network security strategies. Results from a global industry survey revealed a 10% increase in encrypted attacks on organizations in 2018.
Encrypted Application Layers
When planning protection for encrypted applications, it is important to consider all of the layers that are involved in delivering an application. It is not uncommon for application owners to focus on protecting the encrypted application layer while overlooking the lower layers in the stack which might be vulnerable. In many cases, protection selected for the application layer may itself be vulnerable to transport-layer attacks.
To ensure applications are protected, organizations need to analyze the following Open Systems Interconnection (OSI) layers:
Transport — In most encrypted applications, the underlying transport is TCP. TCP attacks come in many forms and volumes, so protection must be resilient enough to shield applications from attacks on the TCP layer. Some applications now use QUIC, which runs over UDP and adds reflection and amplification risks to the mix.
Session — The SSL/TLS layer itself is vulnerable. Once an SSL/TLS session is created, the server invests about 15 times more compute power than the client, which makes the session layer particularly vulnerable and attractive to attackers.
Application — Application attacks are the most complex type of attack, and encryption only makes them harder for security solutions to detect and mitigate. Attackers often select specific areas in applications to generate a high request-to-load ratio, may attack several resources simultaneously to make detection harder, or may mimic legitimate user behavior in various ways to bypass common application security solutions. The size of the attack surface is determined by the application design. For example, in a login attack, botnets perform multiple login attempts from different sources to try to stress the application. The login is always encrypted and requires resources on the application side, such as a database, authentication gateway or identity-service invocation. The attack does not require a high volume of traffic to affect the application, making it very hard to detect.
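The login-attack asymmetry described above can be sketched with a toy cost model. All numbers here are illustrative assumptions, not measurements: the point is only that when each request is cheap for the client but expensive for the server, a low-rate botnet can exhaust capacity.

```python
# Toy model: per-request server cost dominates in a login attack, so even a
# low-rate botnet exhausts capacity. All numbers are illustrative assumptions.

SERVER_MS_PER_LOGIN = 50    # assumed cost: DB lookup + password-hash check
SERVER_CAPACITY_MS = 1000   # assumed compute budget per second (one core)

def max_logins_per_second() -> int:
    """How many login attempts per second the server can absorb."""
    return SERVER_CAPACITY_MS // SERVER_MS_PER_LOGIN

bots, rate_per_bot = 100, 1          # 100 bots, one attempt per second each
attack_rate = bots * rate_per_bot    # only 100 requests/sec in total

print(max_logins_per_second())                 # capacity: 20 logins/sec
print(attack_rate > max_logins_per_second())   # True: low volume, big impact
```

Each bot stays at one request per second, a rate no per-source threshold would flag, yet the aggregate comfortably exceeds what the server can process.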
Organizations also need to consider the overall environment and application structure because it greatly affects the selection of the ideal security design based on a vulnerability assessment.
Content Delivery Network — Applications using a content delivery network (CDN) create a challenge for security controls deployed at the origin. Technologies that use the source IP to analyze client application behavior see only the source IP of the CDN, so they risk either over-mitigating and disrupting legitimate users or becoming ineffective. High rates of false positives show that protection based on source IP addresses alone is ineffective. Instead, when using a CDN, the selected security technology should have the right measures to analyze attacks that originate behind it, such as device fingerprinting or extraction of the original source address from the application headers.
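As a minimal sketch of the header-extraction approach mentioned above, the snippet below walks an X-Forwarded-For chain from right to left, skipping hops that belong to the CDN. The trusted-proxy range and addresses are hypothetical example values; a real deployment would use the CDN vendor's published egress ranges.

```python
# Sketch: recovering the original client IP behind a CDN.
# Assumes the CDN appends the real client address to X-Forwarded-For and
# that TRUSTED_PROXIES lists the CDN's egress ranges (hypothetical values).
import ipaddress

TRUSTED_PROXIES = [ipaddress.ip_network("203.0.113.0/24")]  # example CDN range

def client_ip(xff_header: str, peer_ip: str) -> str:
    """Walk X-Forwarded-For right to left, skipping trusted proxy hops."""
    hops = [h.strip() for h in xff_header.split(",")] + [peer_ip]
    for hop in reversed(hops):
        addr = ipaddress.ip_address(hop)
        if not any(addr in net for net in TRUSTED_PROXIES):
            return hop  # first untrusted hop is the real client
    return hops[0]  # every hop was a trusted proxy

print(client_ip("198.51.100.7, 203.0.113.10", "203.0.113.11"))
# → 198.51.100.7
```

Walking right to left matters because the leftmost entries are client-supplied and trivially spoofable; only the hops appended by infrastructure you trust are reliable.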
Application Programming Interface — Application programming interface (API) usage is common in all applications. According to Radware’s The State of Web Application Security report, a third of attacks against APIs aim to create a denial-of-service state. The security challenge here comes from the legitimate client side. Many solutions rely on various active user-validation techniques to distinguish legitimate users from attackers, and these techniques require that a real browser reside at the client. In the case of an API, a legitimate browser is often not present on the client side, so the behavior, and the legitimate response to various validation challenges, is different.
Mobile Applications — As with APIs, the client side of a mobile application is not a browser and cannot be expected to behave and respond like one. Mobile applications pose a challenge because they run on different operating systems and use different clients. Many security solutions were built around earlier standards and common tools and have not yet fully adapted. The fact that mobile apps handle a large share of encrypted traffic compounds the capacity and security challenges.
Directionality — Many security solutions only inspect inbound traffic to protect against availability threats. Directionality of traffic has significant implications on the protection efficiency because attacks usually target the egress path of the application. In such cases, there might not be an observed change in the incoming traffic profile, but the application might still become unavailable. An effective security solution must process both directions of traffic to protect against sophisticated application attacks.
A major selection criterion for security solutions is regulatory compliance. In the case of encrypted attacks, compliance requirements dictate whether traffic is decrypted, which parts of the traffic are decrypted and where the decryption happens. The governing paradigm has always been that the more intrusive the solution, the more effective the security, but that is not necessarily the case here: solutions show different levels of effectiveness at the same level of intrusiveness.
The encryption protocol in use has implications for how security can be applied and what types of vulnerabilities it introduces. Specifically, TLS 1.3 enhances security from the data-privacy perspective but is expected to create challenges for security solutions that rely on eavesdropping on the encrypted connection. Users planning to upgrade to TLS 1.3 should consider the future resiliency of their solutions.
Determining attack patterns is the most important undertaking that organizations must master. Because there are so many layers that are vulnerable, attackers can easily change their tactics mid-attack. The motivation is normally twofold: first, inflicting maximum impact with minimal cost; second, making detection and mitigation difficult.
Distribution — The level of attack distribution is very important to the attacker. It impacts the variety of vectors that can be used and makes the job harder for the security controls. Most importantly, the more distributed the attack, the less traffic each attacking source has to generate. That way, behavior can better resemble legitimate users. Gaining control of a large botnet used to be difficult to do and extremely costly. With the growth in the IoT and corresponding IoT botnets, it is common to come across botnets consisting of hundreds of thousands of bots.
Overall Attack Rates — The overall attack traffic rate varies from one vector to another. Normally, the lower the layer, the higher the rate. At the application layer, attackers can launch low-rate attacks that still have significant impact. Security solutions should be able to handle both high- and low-rate attacks without compromising user experience or SLAs.
Rate per Attacker — Many security solutions in the availability space rely on the rate per source to detect attackers. This method is not always effective as highly distributed attacks proliferate.
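The weakness of per-source rate thresholds can be shown in a few lines. The threshold and traffic figures below are illustrative assumptions, not real tuning values: the same 500 requests that instantly expose a single attacker become invisible once they are spread across a 100-bot botnet.

```python
# Toy sketch: why per-source rate thresholds miss distributed attacks.
# The threshold and request counts are illustrative assumptions.
from collections import Counter

THRESHOLD = 100  # requests per interval per source before flagging

def flagged_sources(requests):
    """Return the set of source IPs whose request count exceeds THRESHOLD."""
    counts = Counter(requests)
    return {src for src, n in counts.items() if n > THRESHOLD}

# One noisy attacker sending 500 requests: easy to catch.
single = ["10.0.0.1"] * 500
print(flagged_sources(single))   # the lone attacker is flagged

# The same 500 requests spread across 100 bots, 5 each: nobody crosses the bar.
botnet = [f"10.0.{i}.1" for i in range(100) for _ in range(5)]
print(flagged_sources(botnet))   # empty set: the attack goes undetected
```

This is why behavioral analysis across sources, rather than per-source counting alone, becomes necessary as botnets grow.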
Connection Rates — Available attack tools today can be divided into two major classes based on their connection behavior. The first class includes tools that open a single connection and send many requests over it. The second includes tools that open many connections with only a single request, or very few requests, on each connection. Security tools that can analyze connection behavior are more effective at discerning legitimate users from attackers.
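The two tool classes above can be separated with a simple heuristic on requests-per-connection. The cutoff values here are illustrative assumptions; real detectors would learn baselines per application rather than hard-code them.

```python
# Sketch: classifying clients by connection behavior, per the two tool
# classes described above. Cutoff values are illustrative assumptions.

def classify(requests_per_connection: list[int]) -> str:
    """Label a client from the request counts of each of its connections."""
    avg = sum(requests_per_connection) / len(requests_per_connection)
    if len(requests_per_connection) == 1 and avg > 50:
        return "single-connection flood"   # one connection, many requests
    if len(requests_per_connection) > 50 and avg <= 2:
        return "connection flood"          # many connections, few requests each
    return "browser-like"

print(classify([500]))           # one connection carrying 500 requests
print(classify([1] * 200))       # 200 connections of one request each
print(classify([12, 8, 15, 4]))  # a plausible browser session
```

Legitimate browsers tend to fall between the two extremes, reusing a handful of keep-alive connections for a moderate number of requests each, which is what makes this dimension discriminative.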
Session Rates — SSL/TLS session behavior has various distinct behavioral characteristics in legitimate users and browsers. The major target is to optimize performance and user experience. Attack traffic does not usually fully adhere to those norms, so its SSL session behavior is different. The ability to analyze encryption session behavior contributes to protecting both the encryption layer and the underlying application layer.
Application Rates — Because the application is the most complex part to attack, attackers have the greatest degree of freedom when it comes to application behavior. Attack patterns vary greatly from one attack to another in terms of how they appear in application behavior analyses. At the same time, the rate of change in the application itself is so high that it cannot be tracked manually. Security tools that can automatically analyze a large variety of application aspects, and at the same time adapt to changes quickly, are expected to be more effective in protecting against encrypted application attacks.
Protection from encrypted availability attacks is becoming a mandatory requirement for organizations. At the same time, it is one of the more complex tasks to thoroughly perform without leaving blind spots. When considering a protection strategy, it is important to take into account various aspects of the risk and to make sure that, with all good intentions, the side door is not left open.
Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.
Over the last few years I have traveled around the world, researching and watching stadiums digitally evolve from the structures I once knew as a kid. I grew up watching the San Diego Chargers play in what was then called Jack Murphy Stadium and now find myself looking at stadiums from a totally different perspective.
As Super Bowl 53 approaches, my attention, along with Radware’s ERT, turns to the crowds and the target-rich environments created by high-profile sporting events. This Super Bowl, like years before, will once again bring large crowds that demand connectivity and are expected to consume record-breaking volumes of data this year. Extreme Networks reported that last year’s attendees at Super Bowl 52 in Minnesota transferred 16.32 terabytes of data with a peak rate of 7.867 Gbps! This is an enormous demand for connectivity, and the technology involved could pose a security risk for event organizers, partners, sponsors and attendees as their activities in the stadium produce ever more digital oil: data.
A Seamless Digital Game Day Experience
There are few sporting events in the world as large as the Super Bowl. Last year the game drew an estimated 103 million viewers. The Super Bowl generates a lot of excitement from media, fans and the public. Beyond the hype of the game itself, there is a variety of multimedia technology available to fans, providing a more immersive and interactive experience. These experiences include Super Bowl Live, a six-day series of concerts and events in Centennial Olympic Park in downtown Atlanta, and the Super Bowl Experience, an eight-day event full of exhibits and interactive games inside the Georgia World Congress Center. Other events include the Verizon Experience, which will showcase how 5G wireless technology will change the fan experience in stadiums going forward (something I’m personally looking forward to seeing).
To ensure Super Bowl attendees have a seamless digital experience, the NFL, Georgia World Congress Center, AMB Sports and Entertainment Group, and leading wireless carriers have made major investments in the construction and deployment of the networks surrounding the stadium in order to maintain a high quality of service for the attendees and vendors at the Super Bowl. The stadium provides 15,000 Ethernet ports, 1,800 access points and a Distributed Antenna System (DAS) for enhanced cellular coverage. The DAS is owned by the stadium and rented out to the four major U.S. cellular carriers for additional coverage. The stadium’s Wi-Fi is provided by AT&T and consists of two redundant 40 Gbps connections. The stadium also contains 2,000 IPTVs for delivering game content provided by AT&T’s DirecTV. These features and networks help ensure fans can watch, eat, share, download and communicate their game day experience with others.
When it comes to planning for the future, the stadium has pulled its fiber optics as close to the access points as possible, terminating in mini intermediate distribution frames (IDFs) throughout the stadium. The network gear is from Aruba and Hewlett Packard Enterprise, while others involved with the network include IBM, Corning and ThinkAmp. Recently, IBM and Corning built one of the most technologically advanced stadiums, with a blazing-fast network, for Texas A&M.
What’s more, Mercedes-Benz Stadium also promotes a mobile app. While this app is not as cutting edge as the one for Levi’s Stadium, for example, it does include information about the stadium, news and scores, as well as options for viewing, buying and transferring tickets and parking.
Assessing The Risks
There is always a potential risk at large sporting events like the Super Bowl. Even the smallest network outage could leave attendees unable to use their digital tickets to enter the game. Organizations such as the NFL, Patriots, Rams, Georgia World Congress Center, AMB Sports and Entertainment Group, wireless carriers, IBM Cloud, AT&T network or media outlets, as well as those considered partners, sponsors or supporters of Super Bowl 53, should take extra precautions and have an emergency plan in place.
For the Super Bowl, most cybercriminals will be focused on identity and financial theft in the days leading up to the game. These attacks will often be baited with promotions for Super Bowl tickets or a trip giveaway to Atlanta.
One of the other concerns at the Super Bowl will surround protecting critical applications and networks that support the events, hosted both locally and in the cloud. Broadcast networks, industrial control systems, civil-service networks and other related systems are all at risk as well. While there hasn’t been a recent attack of scale reported against the Super Bowl, last year we did witness a piece of malware named Olympic Destroyer that targeted and disrupted the opening ceremonies and entry into the 2018 Winter Olympics.
Indeed, major sporting events create a platform for cybercrime, though recently most cybercriminals have focused on identity theft, spreading malicious software designed to harvest and steal personal information. Today’s high-density stadiums, theaters, arenas and amphitheaters require small cells, Wi-Fi and DAS deployments to serve their demanding environments. Often, the technologies designed to enhance the spectators’ experience, such as Wi-Fi, Bluetooth and other digital services, are easily exploited to harvest information from attendees.
Technology can provide a more immersive and rewarding experience for fans, but it also creates problems and security risks for those managing the event. Here are a few tips to consider if you’ll be joining me in the chaos next weekend in Atlanta for Super Bowl 53.
Charge your phone; you’re going to need that power to capture the experience
Ensure your phone is updated with the latest operating system
Disable Bluetooth when not in use
Disable Wi-Fi when not in use
Use only the official event Wi-Fi, ‘attwifi’, when your device needs a connection (there will be no portal or advertisements; just join to connect)
Always use a VPN when using public Wi-Fi
Be careful when using ATMs – Understand how to spot and avoid card skimmers gathering card data.
Exercise caution when presented with pop-ups while browsing