
Application Security, Attack Types & Vectors, Botnets, Security

Are Connected Cows a Hacker’s Dream?

April 3, 2019 — by Mike O'Malley


Humans aren’t the only ones consumed with connected devices these days. Cows have joined our ranks.

Believe it or not, farmers are increasingly relying on IoT devices to keep their cattle connected. No, not so that they can moo-nitor (see what I did there?) Instagram, but to improve efficiency and productivity. For example, in the case of dairy farms, robots feed, milk and monitor cows’ health, collecting data along the way that help farmers adjust techniques and processes to increase milk production, and thereby profitability.

The implications are massive. As the Financial Times pointed out, “Creating a system where a cow’s birth, life, produce and death are not only controlled but entirely predictable could have a dramatic impact on the efficiency of the dairy industry.”

From Dairy Farm to Data Center

So, how do connected cows factor into cybersecurity? By the simple fact that the IoT devices tasked with milking, feeding and monitoring them are turning dairy farms into data centers – which has major security implications. Because let’s face it, farmers know cows, not cybersecurity.

Indeed, the data collected are stored in data centers and/or a cloud environment, which opens farmers up to potentially costly cyberattacks. Think about it: The average U.S. dairy farm is a $1 million operation, and the average cow produces $4,000 in revenue per year. That’s a lot at stake—roughly $19,000 per week, given the average dairy farm’s herd—if a farm is struck by a ransomware attack.
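
To make the arithmetic behind those figures explicit, here is a small worked example; the herd size is an assumption chosen to be consistent with the revenue numbers quoted above, not a statistic from the article.

    # Rough, illustrative math behind the "$19,000 per week" figure.
    # The herd size is an assumption consistent with the quoted numbers,
    # not a figure from the article.
    revenue_per_cow_per_year = 4_000      # USD, from the article
    assumed_herd_size = 250               # assumption: ~250 cows ~= a $1M operation
    annual_revenue = revenue_per_cow_per_year * assumed_herd_size
    weekly_revenue_at_risk = annual_revenue / 52
    print(f"Annual revenue: ${annual_revenue:,.0f}")                   # ~ $1,000,000
    print(f"Weekly revenue at risk: ${weekly_revenue_at_risk:,.0f}")   # ~ $19,231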

[You may also like: IoT Expands the Botnet Universe]

It would be more economical for an individual farm to pay a weekly $2,850 ransom than to lose that revenue while its IoT network is down. And if hackers were sophisticated enough to launch an industry-wide attack, the dairy industry would be better off paying $46 million per week in ransom than losing the revenue.

5G Cows

Admittedly, connected cows aren’t new; IoT devices have been assisting farmers for several years now. And it’s a booming business. Per the FT, “Investment in precision ‘agtech’ systems reached $3.2bn globally in 2016 (including $363m in farm management and sensor technology)…and is set to grow further as dairy farms become a test bed for the wider IoT strategy of big technology companies.”

[You may also like: Securing the Customer Experience for 5G and IoT]

But what is new is the rollout of 5G networks, which promise faster speeds, low latency and increased flexibility—seemingly ideal for managing IoT devices. But, as we’ve previously discussed, with new benefits come new risks. As network architectures evolve to support 5G, security vulnerabilities will abound if cybersecurity isn’t prioritized and integrated into a 5G deployment from the get-go.

In the new world of 5G, cyberattacks can become much more potent, as a single hacker can easily multiply into an army through botnet deployment. Indeed, 5G opens the door to a complex world of interconnected devices that hackers will be able to exploit via a single point of access in a cloud application to quickly expand an attack radius to other connected devices and applications. Just imagine the impact of a botnet deployment on the dairy industry.

[You may also like: IoT, 5G Networks and Cybersecurity: A New Atmosphere for Mobile Network Attacks]

I don’t know about you, but I like my milk and cheeses. Here’s hoping dairy farmers turn to the experts to properly manage their security before the industry is hit with devastating cyberattacks.

2018 Mobile Carrier Ebook

Read “Creating a Secure Climate for your Customers” today.


Attack Mitigation, DDoS, DDoS Attacks

Is It Legal to Evaluate a DDoS Mitigation Service?

March 27, 2019 — by Dileep Mishra


A couple of months ago, I was on a call with a company that was in the process of evaluating DDoS mitigation services to protect its data centers. This company runs mission-critical applications and was looking for comprehensive coverage from various types of attacks, including volumetric, low-and-slow, encrypted floods, and application-layer attacks.

During the discussion, our team asked a series of technical questions related to their ISP links, types of applications, physical connectivity, and more. And we provided an attack demo using our sandbox lab in Mahwah.

Everything was moving along just fine until the customer asked us for a Proof of Concept (PoC), what most would consider a natural next step in the vendor evaluation process.

About That Proof of Concept…

How would you run a DDoS PoC? You rack and stack the DDoS mitigation appliance (or enable the service if it’s cloud-based), set up some type of management IP address, configure the protection policies, and off you go!

Well, when we spoke to this company, they said they would be happy to do all of that, but at their disaster recovery data center, located within a large carrier facility on the east coast. This sent my antenna up, and I immediately asked a couple of questions that would turn out to be extremely important for all of us: Do you have attack tools to launch DDoS attacks? Do you take responsibility for running the attacks? Well, the customer answered “yes” to both.

[You may also like: DDoS Protection Requires Looking Both Ways]

Being a trained SE, I then asked why they needed to run the PoC in their lab, and whether we could instead demonstrate that our DDoS mitigation appliance can mitigate a wide range of attacks using our PoC script. As it turned out, the prospect was evaluating other vendors and, to compare apples to apples (thereby giving all vendors a fair chance), was already conducting a PoC in its data center with another vendor’s appliance.

We shipped the PoC unit quickly and the prospect, true to their word, got the unit racked, stacked, and cabled up, ready to go. We configured the device and gave them the green light to launch attacks. And then the prospect asked us to launch the attacks; they didn’t have any attack tools after all.

A Bad Idea

Well, most of us in this industry do have DDoS testing tools, so what’s the big deal? As vendors who provide cybersecurity solutions, we shouldn’t have any problems launching attacks over the Internet to test out a DDoS mitigation service…right?

[You may also like: 8 Questions to Ask in DDoS Protection]

WRONG! Here’s why that’s a bad idea:

  • Launching attacks over the Internet is ILLEGAL. You need written permission from the entity being attacked to launch a DDoS attack. You can try your luck if you want, but this is akin to running a red light. You may get away with it, but if you are caught the repercussions are damaging and expensive.
  • Your ISP might block your IP address. Many ISPs have DDoS defenses within their infrastructure and if they see someone launching a malicious attack, they might block your access. Good luck sorting that one out with your ISP!
  • Your attacks may not reach the desired testing destination. Even if your ISP doesn’t block you and the FBI doesn’t come knocking, there might be one or more DDoS mitigation devices between you and the customer data center where the destination IP being tested resides. These devices could very well mitigate the attack you launch, preventing you from completing the test.

Those are three big reasons why doing DDoS testing in a production data center is, simply put, a bad idea. Especially if you don’t have a legal, easy way to generate attacks.

[You may also like: 5 Must-Have DDoS Protection Technologies]

A Better Way

So what are the alternatives? How should you do DDoS testing?

  • With DDoS testing, the focus should be on evaluating the mitigation features: Can the service detect attacks quickly? Can it mitigate immediately? Can it adapt to morphing attacks? Can it report accurately on the attack it is seeing and what is being mitigated? How accurate is the mitigation, and how does it handle false positives? If you run a DDoS PoC in a production environment, you will spend most of your resources and time testing connectivity and spinning the wheels on operational aspects (LAN cabling, console cabling, change control procedures, paperwork, etc.). This is not what you want to test; you want to test DDoS mitigation! It’s like trying to test how fast a sports car can go on a very busy street: you will end up testing the brakes, but you won’t get very far with any speed testing.
  • Test things out in your lab. Even better, let the vendor test it in their lab for you. This lets both parties focus on the security features rather than get caught up in the logistics of shipping, change control, physical cabling, connectivity, routing, and so on.
  • It is perfectly legal to use test tools like Kali Linux, Backtrack, etc. within a lab environment. Launch attacks to your heart’s content, morph the attacks, and see how the DDoS service responds.
  • If you don’t have the time or expertise to launch attacks yourself, hire a DDoS testing service. Companies like activereach, Redwolf security or MazeBolt security do this for a living, and they can help you test the DDoS mitigation service with a wide array of customized attacks. This will cost you some money, but if you are serious about the deployment, you will be doing yourself a favor and saving future work.
  • Finally, evaluate multiple vendors in parallel. You can never do this in a production data center. However, in a lab you can keep the attacks and the victim applications constant while swapping in each DDoS mitigation service. This gives you an apples-to-apples comparison of the actual capabilities of each vendor and also shortens your evaluation cycle (a sketch of a simple comparison harness follows this list).
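
For that last point, it helps to keep the measurement itself identical across vendors. Below is a minimal, lab-only sketch of such a harness: it polls a protected test application's health endpoint during each attack window and records how long the service stayed degraded. The target URL, thresholds, and vendor names are placeholder assumptions, not part of any vendor's tooling.

    import time
    import requests  # pip install requests

    LAB_TARGET = "http://10.0.0.10/health"   # placeholder: protected lab app, not a real host
    DEGRADED_MS = 500                        # assumption: responses slower than 500 ms count as degraded

    def measure_window(vendor: str, duration_s: int = 120, interval_s: int = 2) -> dict:
        """Probe the lab target during an attack window and summarize availability."""
        degraded = failed = total = 0
        end = time.time() + duration_s
        while time.time() < end:
            total += 1
            start = time.time()
            try:
                r = requests.get(LAB_TARGET, timeout=3)
                latency_ms = (time.time() - start) * 1000
                if r.status_code != 200 or latency_ms > DEGRADED_MS:
                    degraded += 1
            except requests.RequestException:
                failed += 1
            time.sleep(interval_s)
        return {"vendor": vendor, "samples": total, "degraded": degraded, "failed": failed}

    # Run the same attack scenarios with each vendor's device in-line and compare the summaries.
    for vendor in ["vendor_a", "vendor_b"]:
        print(measure_window(vendor))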

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Attack Mitigation, DDoS, Security

DDoS Protection Requires Looking Both Ways

March 26, 2019 — by Eyal Arazi


Service availability is a key component of the user experience. Customers expect services to be constantly available and fast-responding, and any downtime can result in disappointed users, abandoned shopping carts, and lost customers.

Consequently, DDoS attacks are increasing in complexity, size and duration. Radware’s 2018 Global Application and Network Security Report found that over the course of a year, sophisticated DDoS attacks, such as burst attacks, increased by 15%, HTTPS floods grew by 20%, and over 64% of customers were hit by application-layer (L7) DDoS attacks.

Some Attacks are a Two-Way Street

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.

Some examples of such attacks include:

Out-of-State Protocol Attacks: Some DDoS attacks exploit weaknesses in protocol communication processes, such as TCP’s three-way handshake, to create ‘out-of-state’ connection requests that draw out connections and exhaust server resources. While some attacks of this type, such as a SYN flood, can be stopped by examining the inbound channel only, others require visibility into the outbound channel as well.

An example of this is an ACK flood, whereby attackers continuously send forged TCP ACK packets towards the victim host. The target host tries to associate each ACK with an existing TCP connection and, if no such connection exists, drops the packet. This process consumes server resources, and large numbers of such packets can deplete them. To correctly identify and mitigate such attacks, defenses need visibility into both inbound SYNs and outbound SYN/ACK replies, so they can verify whether an ACK packet is associated with any legitimate connection request.
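
A toy illustration of why the outbound channel matters here: the sketch below keeps a table of connections for which a SYN was seen inbound and a SYN/ACK was seen outbound, and only then treats an inbound ACK as legitimate. It is a conceptual model with made-up packet fields, not production mitigation code.

    # Conceptual model of bi-directional state tracking for ACK-flood detection.
    # Packets are represented as simple dicts; a real implementation would tap live traffic.
    half_open = set()      # (client, server, client_port) seen as an inbound SYN
    established = set()    # confirmed by an outbound SYN/ACK

    def observe(pkt: dict) -> str:
        key = (pkt["src"], pkt["dst"], pkt["sport"])
        if pkt["dir"] == "in" and pkt["flags"] == "SYN":
            half_open.add(key)
            return "half-open"
        if pkt["dir"] == "out" and pkt["flags"] == "SYN/ACK":
            # outbound reply, re-keyed by the original client tuple
            rkey = (pkt["dst"], pkt["src"], pkt["dport"])
            if rkey in half_open:
                established.add(rkey)
                return "established"
        if pkt["dir"] == "in" and pkt["flags"] == "ACK":
            return "legitimate ACK" if key in established else "orphan ACK (possible flood)"
        return "ignored"

    # An ACK with no matching SYN / SYN-ACK history is flagged:
    print(observe({"dir": "in", "src": "1.2.3.4", "dst": "10.0.0.5", "sport": 4444, "flags": "ACK"}))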

[You may also like: An Overview of the TCP Optimization Process]

Reflection/Amplification Attacks: Such attacks exploit asymmetric responses between the connection requests and replies of certain protocols or applications. Again, some types of such attacks require visibility into both the inbound and outbound traffic channels.

An example of such an attack is a large-file outbound pipe saturation attack. The attackers identify a very large file on the target network and send a connection request to fetch it. The request itself can be only a few bytes in size, but the ensuing reply can be extremely large. Large numbers of such requests can clog up the outbound pipe.

Another example is the memcached amplification attack. Although such attacks are most frequently used to overwhelm a third-party target via reflection, they can also be used to saturate the outbound channel of the targeted network.

[You may also like: 2018 In Review: Memcache and Drupalgeddon]

Scanning Attacks: Large-scale network scanning attempts are not just a security risk; they also frequently bear the hallmark of a DDoS attack, flooding the network with malicious traffic. Such scans send large numbers of connection requests to host ports to see which ports answer back (indicating that they are open), which also generates high volumes of error responses from closed ports. Mitigating such attacks requires visibility into return traffic so defenses can compare the error response rate to legitimate traffic and conclude that an attack is taking place.

Server Cracking: Similar to scanning attacks, server cracking involves sending large numbers of requests in order to brute-force system passwords. This, too, leads to a high error reply rate, which requires visibility into both the inbound and outbound channels in order to identify the attack.
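
Both of these cases reduce to the same signal: the ratio of outbound error replies to legitimate responses. A minimal sketch of that idea follows; the response encoding and the threshold are arbitrary illustrations, not recommended values.

    from collections import Counter

    def error_ratio(responses: list[int]) -> float:
        """Fraction of outbound replies that are errors (here, TCP resets are modeled
        as 0 and HTTP 4xx/5xx as their status codes) relative to all observed replies."""
        counts = Counter("error" if code == 0 or code >= 400 else "ok" for code in responses)
        total = sum(counts.values())
        return counts["error"] / total if total else 0.0

    # Illustrative traffic samples: mostly-normal vs. scan/brute-force-like behavior.
    normal = [200] * 95 + [404] * 5
    scanlike = [0] * 70 + [200] * 10 + [401] * 20

    THRESHOLD = 0.5  # assumption, for illustration only
    for name, sample in [("normal", normal), ("scan-like", scanlike)]:
        ratio = error_ratio(sample)
        print(f"{name}: error ratio {ratio:.2f}, attack suspected: {ratio > THRESHOLD}")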

Stateful Application-Layer DDoS Attacks: Certain types of application-layer (L7) DDoS attacks exploit known protocol weaknesses in order to create large amounts of spoofed requests that exhaust server resources. Mitigating such attacks requires state-aware, bi-directional visibility to identify attack patterns so that the relevant attack signature can be applied to block them. Examples include low-and-slow attacks and application-layer (L7) SYN floods, which draw out HTTP and TCP connections in order to continuously consume server resources.

[You may also like: Layer 7 Attack Mitigation]

Two-Way Attacks Require Bi-Directional Defenses

As online service availability becomes ever more important, hackers are coming up with more sophisticated attacks than ever to overwhelm defenses. Many such attack vectors – frequently the more sophisticated and potent ones – either target or take advantage of the outbound communication channel.

Therefore, in order for organizations to fully protect themselves, they must deploy protections that allow bi-directional inspection of traffic in order to identify and neutralize such threats.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Attack Types & Vectors, Security

CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats

March 21, 2019 — by Abhinaw Kumar


According to a study by the Ponemon Institute in December 2018, bots comprised over 52% of all Internet traffic. While ‘good’ bots discreetly index websites, fetch information and content, and perform useful tasks for consumers and businesses, ‘bad’ bots have become a primary and growing concern to CISOs, webmasters, and security professionals today. They carry out a range of malicious activities, such as account takeover, content scraping, carding, form spam, and much more. The negative impacts resulting from these activities include loss of revenue and harm to brand reputation, theft of content and personal information, lowered search engine rankings, and distorted web analytics, to mention a few.

For these reasons, researchers at Forrester recommend that, “The first step in protecting your company from bad bots is to understand what kinds of bots are attacking your firm.” So let us briefly look at the main bad bot threats CISOs have to face, and then delve into their industry-wise prevalence.

Bad Bot Attacks That Worry CISOs The Most

The impact of bad bots results from the specific activities they’re programmed to execute. Many of them aim to defraud businesses and/or their customers for monetary gain, while others involve business competitors and nefarious parties who scrape content (including articles, reviews, and prices) to gain business intelligence.

[You may also like: The Big, Bad Bot Problem]

  • Account Takeover attacks use credential stuffing and brute force techniques to gain unauthorized access to customer accounts.
  • Application DDoS attacks slow down web applications by exhausting system resources, 3rd-party APIs, inventory databases, and other critical resources.
  • API Abuse results from nefarious entities exploiting API vulnerabilities to steal sensitive data (such as personal information and business-critical data), take over user accounts, and execute denial-of-service attacks.
  • Ad Fraud is the generation of false impressions and illegitimate clicks on ads shown on publishing sites and their mobile apps. A related form of attack is affiliate marketing fraud (also known as affiliate ad fraud) which is the use of automated traffic by fraudsters to generate commissions from an affiliate marketing program.
  • Carding attacks use bad bots to make multiple payment authorization attempts to verify the validity of payment card data, expiry dates, and security codes for stolen payment card data (by trying different values). These attacks also target gift cards, coupons and voucher codes.
  • Scraping is a strategy often used by competitors who deploy bad bots on your website to steal business-critical content, product details, and pricing information.
  • Skewed Analytics is a result of bot traffic on your web property, which skews site and app metrics and misleads decision making.
  • Form Spam refers to the posting of spam leads and comments, as well as fake registrations on marketplaces and community forums.
  • Denial of Inventory is used by competitors/fraudsters to deplete goods or services in inventory without ever purchasing the goods or completing the transaction.

Industry-wise Impact of Bot Traffic

To illustrate the impact of bad bots, we aggregated all the bad bot traffic that was blocked by our Bot Manager during Q2 and Q3 of 2018 across four industries selected from our diverse customer base: E-commerce, Real Estate, Classifieds & Online Marketplaces, and Media & Publishing. While the prevalence of bad bots can vary considerably over time and even within the same industry, our data shows that specific types of bot attacks tend to target certain industries more than others.

[You may also like: Adapting Application Security to the New World of Bots]

E-Commerce

[Chart: Intent-wise distribution of bad bot traffic on E-commerce sites, in %]

Bad bots target e-commerce sites to carry out a range of attacks, such as scraping, account takeovers, carding, scalping, and denial of inventory. However, the most prevalent bad bot threat encountered by our e-commerce customers during our study was affiliate fraud: bad bot traffic made up roughly 55% of the overall traffic on pages that contain links to affiliates. Content scraping and carding were the most prevalent bad bot threats to e-commerce portals two to five years ago, but the latest data indicates that attempts at affiliate fraud and account takeover are growing rapidly compared to earlier years.

Real Estate

[Chart: Intent-wise distribution of bad bot traffic on Real Estate sites, in %]

Bad bots often target real estate portals to scrape listings and the contact details of realtors and property owners. However, we are seeing growing volumes of form spam and fake registrations, which have historically been the biggest problems caused by bots on these portals. Bad bots comprised 42% of total traffic on pages with forms in the real estate sector. These malicious activities anger advertisers, reduce marketing ROI and conversions, and produce skewed analytics that hinder decision making. Bad bot traffic also strains web infrastructure, affects the user experience, and increases operational expenses.

Classifieds & Online Marketplaces

[Chart: Intent-wise distribution of bad bot traffic on Classifieds sites, in %]

Along with real estate businesses, classifieds sites and online marketplaces are among the biggest targets for content and price scrapers. Their competitors use bad bots not only to scrape their exclusive ads and product prices to illegally gain a competitive advantage, but also to post fake ads and spam web forms to access advertisers’ contact details. In addition, bad bot traffic strains servers, third-party APIs, inventory databases and other critical resources, creates application DDoS-like situations, and distorts web analytics. Bad bot traffic accounted for over 27% of all traffic on product pages from where prices could be scraped, and nearly 23% on pages with valuable content such as product reviews, descriptions, and images.

Media & Publishing

[Chart: Intent-wise distribution of bad bot traffic on Media & Publishing sites, in %]

More than ever, digital media and publishing houses are scrambling to deal with automated bad bot attacks such as scraping of proprietary content and ad fraud. The industry is beset with high levels of ad fraud, which hurts advertisers and publishers alike. Comment spam often derails discussions and results in negative user experiences. Bot traffic also inflates traffic metrics and prevents marketers from gaining accurate insights. Over the six-month period we analyzed, bad bots accounted for 18% of overall traffic on pages with high-value content, 10% on ads, and nearly 13% on pages with forms.

As we can see, security chiefs across a range of industries are facing increasing volumes and types of bad bot attacks. What can they do to mitigate malicious bots that are rapidly evolving in ways that make them significantly harder to detect? Conventional security systems that rely on rate-limiting and signature-matching approaches were never designed to detect human-like bad bots that rapidly mutate and operate in widely-distributed botnets using ‘low and slow’ attack strategies and a multitude of (often hijacked) IP addresses.

The core challenge for any bot management solution, then, is to detect every visitor’s intent to help differentiate between human and malicious non-human traffic. As more bad bot developers incorporate artificial intelligence (AI) to make human-like bots that can sneak past security systems, any effective countermeasures must also leverage AI and machine learning (ML) techniques to accurately detect the most advanced bad bots.
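
As a toy illustration of that last point, the sketch below trains a classifier on a few made-up behavioral features (request rate, interaction signals, session variety). The features and data are invented for illustration; a real bot manager uses far richer signals and vastly more data.

    # Illustrative only: synthetic features, not real traffic data.
    from sklearn.ensemble import RandomForestClassifier

    # Features per session: [requests_per_minute, mouse_events_per_page, distinct_pages_visited]
    X = [
        [12,  35,   6],   # human-like
        [ 8,  50,   4],   # human-like
        [300,  0, 120],   # bot-like
        [450,  1, 200],   # bot-like
        [15,  20,   9],   # human-like
        [600,  0,  80],   # bot-like
    ]
    y = [0, 0, 1, 1, 0, 1]  # 0 = human, 1 = bot

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Score a new session; in practice the output would feed a challenge/block decision.
    new_session = [[250, 0, 90]]
    print("bot probability:", model.predict_proba(new_session)[0][1])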

Read “Radware’s 2018 Web Application Security Report” to learn more.


Security

The Intersections between Cybersecurity and Diversity

March 20, 2019 — by Kevin Harris


Cybersecurity and diversity are high-value topics that are most often discussed in isolation. Both topics resonate with individuals and organizations alike.

However, the intersections between cybersecurity and diversity are often overlooked. As nations and organizations seek to protect their critical infrastructures, it’s important to cultivate relationships between the two areas. Diversity is no longer only a social awareness and morality initiative; it is a core element of defending critical infrastructures.

Communities Need to Play a Greater Role in Cybersecurity

Technology careers typically pay more than other careers, providing a pathway to a quality lifestyle. With multiple entry points into the technology field — including degrees, apprenticeships and industry certifications — there are ways that varying communities can take part in technology careers, especially in cybersecurity. For instance, communities can improve cybersecurity education for women, minorities and home users.

Workforce Gaps Involving Women and Minorities Weaken Cybersecurity Defenses

Limited awareness and exposure to cybersecurity education often creates an opportunity gap for minorities and women. Failing to incorporate underserved populations limits the talent and size of our cybersecurity workforce. Without an all-inclusive cyber workforce, our critical infrastructure will have a talent gap, introducing additional system vulnerabilities.

To rectify this problem, communities must implement permanent efforts to ensure that children attending schools in underserved districts have access to technology and courses. That will better prepare them to become cyber workers.

[You may also like: Battling Cyber Risks with Intelligent Automation]

This infusion of technology talent helps to protect our nation’s vital digital assets. Organizations must make their recruitment and retention practices more inclusive. Ideally, they should provide opportunities to individuals who are either trained or are willing to undergo training to have a pathway to a successful career.

Additionally, higher education institutions should find ways to ensure that minorities and women have the support they need as they progress through their technology degrees. Universities and colleges can also offer cybersecurity faculty and mentors who can help these groups prepare for meaningful careers.

Cybersecurity Training Must Be Improved for Home Users

Another intersection of cybersecurity and diversity is at the user level. Most cybersecurity discussions center on the protection of government or corporate systems. Organizations spend significant portions of their budgets to prepare for and protect against cyberattacks.

Unfortunately, home users are often left out of such conversations; they are not considered part of any holistic cyber defense plan. With the large number of home users with multiple devices, the vulnerabilities of home systems provide hackers with easy attack opportunities.

[You may also like: The Costs of Cyberattacks Are Real]

Consequently, attackers access and compromise home devices, which allows them to attack other systems. In addition, these hackers can mask their true location and increase their computing power. They can then carry out their attacks more efficiently.

Compromising an individual’s personal device presents additional opportunities for attackers to access that person’s credentials as well as other sensitive workplace data. However, strong organizational policies should dictate what information can be accessed remotely.

To increase home users’ threat awareness level, organizations should develop training programs as a part of community involvement initiatives. Vendors should strengthen default security settings for home users and ensure that home security protections are affordable and not difficult to configure.

[You may also like: Personal Security Hygiene]

Organizational Cultures Need to Emphasize that All Employees are Cyber Defenders

Diversity and cybersecurity also intersect at the organizational culture level. Regardless of whether or not organizations have an information systems security department, companies must foster the right type of security-minded workplace culture. All employees should be aware that they are integral components in protecting the organization’s critical digital assets.

Educational institutions can support this effort by incorporating cyber awareness training across disciplines. This will give all graduates — regardless of their degrees — some exposure to cyber risks and their role in protecting digital assets.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Cybersecurity and Diversity Should Work Together, Not in Silos

Cybersecurity and diversity will continue to be important topics. The focus, however, should be on discussing the importance of their mutual support, rather than functioning in two separate silos. Improving our cyber defenses requires the best of all segments of our society, which includes minorities, women and home users.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Application Security, Attack Types & Vectors, Security

Bots 101: This is Why We Can’t Have Nice Things

March 19, 2019 — by Daniel Smith


In our industry, the term bot applies to software applications designed to perform automated tasks at a high rate of speed. Typically, I use bots at Radware to aggregate data for intelligence feeds or to automate repetitive tasks. I also spend the vast majority of my time researching and tracking emerging bots that were designed and deployed in the wild with bad intentions.

As I’ve previously discussed, there are generally two different types of bots, good and bad. Some of the good bots include Search Bots, Crawlers and Feed Fetchers that are designed to locate and index your website appropriately so it can become visible online. Without the aid of these bots, most small and medium-sized businesses wouldn’t be able to establish an authority online and attract visitors to their site.

[You may also like: The Big, Bad Bot Problem]

On the dark side, criminals use the same technology to create bots for illicit and profitable activities, such as scraping content from one website and selling it to another. These malicious bots can also be leveraged to take over accounts and generate fake reviews, as well as commit ad fraud and stress your web applications. Malicious bots have even been used to create fake social media accounts and influence elections.

With close to half of all internet traffic today being non-human, bad bots represent a significant risk for businesses, regardless of industry or channel.

As the saying goes, this is why we can’t have nice things.

Targeted Industries

If malicious bots target an online business, the business will be impacted in one way or another, whether in website performance, sales conversions, competitive advantage, analytics or user experience. The good news is that organizations can take action against bot activity in real time, but first they need to understand their own risk before considering a solution.

[You may also like: Credential Stuffing Campaign Targets Financial Services]

  • E-Commerce – The e-commerce industry faces bot attacks that include account takeovers, scraping, inventory exhaustion, scalping, carding, skewed analytics, application DoS, Ad fraud, and account creation.
  • Media – Digital publishers are vulnerable to automated attacks such as Ad fraud, scraping, skewed analytics, and form spam.
  • Travel – The travel industry mainly deals with scraping attacks but can suffer from inventory exhaustion, carding and application DoS as well.
  • Social Networks – Social platforms deal with automated bot attacks such as account takeovers, account creation, and application DoS.
  • Ad Networks – Bots that create Sophisticated Invalid Traffic (SIVT) target ad networks for ad fraud activity such as fraudulent clicks and impressions.
  • Financial Institutions – Banking, financial and insurance industries are all high-value targets for bots that leverage account takeovers, application DoS or content scraping.

Types of Application Attacks

It’s becoming increasingly difficult for conventional security solutions to track and report on sophisticated bots that are continuously changing their behavior, obfuscating their identity and utilizing different attack vectors across industries. Once you understand the risk posed by malicious automated bots, you can start to focus on the attack vectors you may face as a result of their activity.

[You may also like: Adapting Application Security to the New World of Bots]

  • Account takeover – Account takeovers include credential stuffing, password spraying, and brute force attacks that are used to gain unauthorized access to a targeted account. Credential stuffing and password spraying are two popular techniques used today. Once hackers gain access to an account, they can begin additional stages of infection, data exfiltration or fraud.
  • Scraping – Scraping is the process of extracting data or information from a website and publishing it elsewhere. Content, price and inventory scraping is also used to gain a competitive advantage. These scrape bots crawl your web pages for specific information about your products. Typically, scrapers steal the entire content from websites or mobile applications and publish it to gain traffic.
  • Inventory exhaustion – Inventory exhaustion is when a bot is used to add hundreds of items to a cart and later, abandon them to prevent real shoppers from buying the products.
  • Inventory scalping – Hackers deploy retail bots to gain an advantage to buy goods and tickets during a flash sale, and then resell them later at a much higher price.
  • Carding – Carders deploy bots on checkout pages to validate stolen card details and to crack gift cards.
  • Skewed analytics – Automated invalid traffic directed at your e-commerce portal skews metrics and misleads decision making when applied to advertisement budgets and other business decisions. Bots pollute metrics, disrupt funnel analysis, and inhibit KPI tracking.
  • Application DoS – Application DoS attacks slow down e-commerce portals by exhausting web server resources, third-party APIs, inventory databases and other critical resources to the point that they are unavailable for legitimate users.
  • Ad fraud – Bad bots are used to generate invalid traffic designed to create false impressions and generate illegitimate clicks on websites and mobile apps.
  • Account creation – Bots are used to create fake accounts on a massive scale for content spamming, SEO and skewing analytics.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Symptoms of a Bot Attack

  • A high number of failed login attempts
  • Increased chargebacks and transaction disputes
  • Consecutive login attempts with different credentials from the same HTTP client
  • Unusual request activity for selected application content and data
  • Unexpected changes in website performance and metrics
  • A sudden increase in account creation rate
  • Elevated traffic for certain limited-availability goods or services
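
A few of these symptoms, such as the spike in failed logins and repeated credential attempts from one client, are easy to watch for even before a dedicated solution is in place. Here is a minimal sketch over a hypothetical login-event stream; the field names and thresholds are assumptions for illustration.

    from collections import defaultdict

    # Hypothetical login events: (client_id, username, success)
    events = [
        ("10.1.1.1", "alice", False),
        ("10.1.1.1", "bob",   False),
        ("10.1.1.1", "carol", False),
        ("10.1.1.1", "dave",  False),
        ("192.0.2.7", "erin", True),
    ]

    FAILED_THRESHOLD = 3          # assumption: tune to your baseline
    DISTINCT_USER_THRESHOLD = 3   # many usernames from one client suggests credential stuffing

    failures = defaultdict(int)
    usernames = defaultdict(set)
    for client, user, success in events:
        if not success:
            failures[client] += 1
            usernames[client].add(user)

    for client in failures:
        if failures[client] >= FAILED_THRESHOLD and len(usernames[client]) >= DISTINCT_USER_THRESHOLD:
            print(f"{client}: {failures[client]} failed logins across "
                  f"{len(usernames[client])} usernames - possible bot activity")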

Intelligence is the Solution

Finding a solution that arms partners and service providers with the latest information related to potential attacks is critical. In my opinion, a Bot Intelligence Feed is one of the best ways to gain insight into the threats you face while identifying malicious bots in real time.

A Bot Intelligence Feed provides the latest data on newly detected IPs across various bot categories (data center bots, bad user agents, advanced persistent bots, backlink checkers, monitoring bots, aggregators, social network bots and spam bots), as well as third-party fraud intelligence directories and services that track externally flagged IPs. This ultimately gives organizations the best chance to proactively close security gaps and take action against emerging threat vectors.
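
Operationally, consuming such a feed can be as simple as periodically pulling the list of flagged IPs and checking clients against it. The sketch below assumes a hypothetical JSON feed endpoint and field names; it does not show Radware's actual feed format.

    import requests  # pip install requests

    # Hypothetical feed URL and schema, for illustration only.
    FEED_URL = "https://example.com/bot-intelligence/feed.json"

    def load_blocklist(url: str = FEED_URL) -> set[str]:
        """Fetch the feed and return the set of flagged IPs."""
        entries = requests.get(url, timeout=10).json()
        # Assumed schema: [{"ip": "...", "category": "data_center_bot"}, ...]
        return {e["ip"] for e in entries}

    def is_flagged(client_ip: str, blocklist: set[str]) -> bool:
        return client_ip in blocklist

    if __name__ == "__main__":
        blocklist = load_blocklist()
        print(is_flagged("203.0.113.9", blocklist))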

Read “Radware’s 2018 Web Application Security Report” to learn more.


Cloud Computing, Cloud Security, Security

Security Pros and Perils of Serverless Architecture

March 14, 2019 — by Radware


Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. This cloud computing model can drive cost-efficiencies, increase agility and enable organizations to focus on the essential aspects of software development. While serverless architecture offers some security advantages, trusting that a cloud provider has security fully covered can be risky.

That’s why it’s critical to understand what serverless architectures mean for cyber security.

What Serverless Means for Security

Many assume that serverless is more secure than traditional architectures. This is partly true. As the name implies, serverless architecture does not require server provisioning. Deep under the hood, however, these REST API functions are still running on a server, which in turn runs on an operating system and uses different layers of code to parse the API requests. As a result, the total attack surface becomes significantly larger.

When exploring whether and to what extent to use serverless architecture, consider the security implications.

[You may also like: Protecting Applications in a Serverless Architecture]

Security: The Pros

The good news is that responsibility for the operating system, web server and other software components and programs shifts from the application owner to the cloud provider, who should apply patch management policies across the different software components and implement hardening policies. Most common vulnerabilities should be addressed via enforcement of such security best practices. But what is the answer for a zero-day vulnerability in these software components? Consider Shellshock, which allowed an attacker to gain unauthorized access to a computer system.

Meanwhile, denial-of-service attacks designed to take down a server become a fool’s errand. FaaS servers are only provisioned on demand and then discarded, creating a fast-moving target. Does that mean you no longer need to think about DDoS? Not so fast. While DDoS attacks may not cause a server to go down, they can drive up an organization’s tab due to an onslaught of requests. Additionally, functions scale within limits and their execution time is capped, so launching a massive DDoS attack may still have unpredictable impact.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Finally, the very nature of FaaS makes it more challenging for attackers to exploit a server and wait until they can access more data or do more damage. There is no persistent local storage that may be accessed by the functions. Counting on storing attack data in the server is more difficult but still possible. With the “ground” beneath them continually shifting—and containers re-generated—there are fewer opportunities to perform deeper attacks.

Security: The Perils

Now, the bad news: serverless computing doesn’t eradicate all traditional security concerns. Code is still being executed and will always be potentially vulnerable. Application-level vulnerabilities can still be exploited whether they are inherent in the FaaS infrastructure or in the developer function code.

Whether delivered as FaaS or just built on a web infrastructure, REST API functions are even more challenging to secure than a standard web application, and they introduce security concerns of their own. API vulnerabilities are hard to monitor and do not stand out. Traditional application security assessment tools do not work well with APIs or are simply irrelevant in this case.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

When planning API security infrastructure, authentication and authorization must be taken into account, yet they are often not addressed properly in many API security solutions. Beyond that, REST APIs are vulnerable to many of the attacks and threats that target web applications: POSTed JSON and XML injections, insecure direct object references, access violations and abuse of APIs, buffer overflows and XML bombs, scraping and data harvesting, among others.

The Way Forward

Serverless architectures are being adopted at a record pace. As organizations welcome dramatically improved speed, agility and cost-efficiency, they must also think through how they will adapt their security. Consider the following:

  • API gateway: Functions process REST API calls from client-side applications, accessing your code with unpredicted inputs. An API gateway can enforce JSON and XML validity checks. However, not all API gateways support schema and structure validation, especially when it comes to JSON. Each function deployed must be properly secured. Additionally, API gateways can serve as the authentication tier, which is critically important when it comes to REST APIs (see the schema-validation sketch after this list).
  • Function permissions: The function is essentially the execution unit. Restrict functions’ permissions to the minimum required and do not use generic permissions.
  • Abstraction through logical tiers: When a function calls another function—each applying its own data manipulation—the attack becomes more challenging.
  • Encryption: Data at rest is still accessible. FaaS becomes irrelevant when an attacker gains access to a database. Data needs to be adequately protected and encryption remains one of the recommended approaches regardless of the architecture it is housed in.
  • Web application firewall: Enterprise-grade WAFs apply dozens of protection measures on both ingress and egress traffic. Traffic is parsed to detect protocol manipulations, which may result in unexpected function behavior. Client-side inputs are validated, and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
  • IoT botnet protection: To avoid the significant cost implications a DDoS attack may have on a serverless architecture and the data harvesting risks involved with scraping activity, consider behavioral analysis tools and IoT botnet solutions.
  • Monitoring function activity and data access: Abnormal function behavior, expected access to data, non-reasonable traffic flow and other abnormal scenarios must be tracked and analyzed.
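
As referenced in the API gateway item above, here is a minimal illustration, using Python and the jsonschema library, of the kind of JSON structure check a gateway (or the function itself, if the gateway cannot do it) should enforce before a request reaches business logic. The schema, field names and handler are made-up examples, not any particular provider's API.

    from jsonschema import validate, ValidationError  # pip install jsonschema

    # Example schema for a hypothetical "create order" function input.
    ORDER_SCHEMA = {
        "type": "object",
        "properties": {
            "sku": {"type": "string", "maxLength": 64},
            "quantity": {"type": "integer", "minimum": 1, "maximum": 100},
        },
        "required": ["sku", "quantity"],
        "additionalProperties": False,   # reject unexpected fields outright
    }

    def handler(event: dict) -> dict:
        try:
            validate(instance=event, schema=ORDER_SCHEMA)
        except ValidationError as err:
            return {"statusCode": 400, "body": f"invalid input: {err.message}"}
        return {"statusCode": 200, "body": "order accepted"}

    print(handler({"sku": "ABC-123", "quantity": 2}))               # accepted
    print(handler({"sku": "ABC-123", "quantity": "lots", "x": 1}))  # rejected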

Read “Radware’s 2018 Web Application Security Report” to learn more.


Cloud Security

Are Your DevOps Your Biggest Security Risks?

March 13, 2019 — by Eyal Arazi


We have all heard the horror tales: a negligent (or uninformed) developer inadvertently exposes AWS API keys online, only for hackers to find those keys, penetrate the account and cause massive damage.

But how common, in practice, are these breaches? Are they a legitimate threat, or just an urban legend for sleep-deprived IT staff? And what, if anything, can be done against such exposure?

The Problem of API Access Key Exposure

The problem of AWS API access key exposure refers to incidents in which developers’ API access keys to AWS accounts and cloud resources are inadvertently exposed and found by hackers.

AWS – and most other infrastructure-as-a-service (IaaS) providers – provides direct access to tools and services via Application Programming Interfaces (APIs). Developers leverage such APIs to write automated scripts that help them configure cloud-based resources. This saves developers and DevOps significant time in configuring cloud-hosted resources and automating the roll-out of new features and services.

[You may also like: Ensuring Data Privacy in Public Clouds]

In order to make sure that only authorized developers are able to access those resources and execute commands on them, API access keys are used to authenticate access. Only code containing authorized credentials will be able to connect and execute.

This Exposure Happens All the Time

The problem, however, is that such access keys are sometimes left in scripts or configuration files uploaded to third-party resources, such as GitHub. Hackers are fully aware of this, and run automated scans on such repositories, in order to discover unsecured keys. Once they locate such keys, hackers gain direct access to the exposed cloud environment, which they use for data theft, account takeover, and resource exploitation.

A very common use case is for hackers to access an unsuspecting cloud account and spin-up multiple computing instances in order to run crypto-mining activities. The hackers then pocket the mined cryptocurrency, while leaving the owner of the cloud account to foot the bill for the usage of computing resources.

[You may also like: The Rise in Cryptomining]

Examples, sadly, are abundant:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resource for crypto-mining.
  • WordPress developer Ryan Heller uploaded code to GitHub which accidentally contained a backup copy of the wp-config.php file, containing his AWS access keys. Within hours, this file was discovered by hackers, who spun up several hundred computing instances to mine cryptocurrency, resulting in $6,000 of AWS usage fees overnight.
  • A student taking a Ruby on Rails course on Udemy opened up a AWS S3 storage bucket as part of the course, and uploaded his code to GitHub as part of the course requirements. However, his code contained his AWS access keys, leading to over $3,000 of AWS charges within a day.
  • The founder of an internet startup uploaded code to GitHub containing API access keys. He realized his mistake within 5 minutes and removed those keys. However, that was enough time for automated bots to find his keys, access his account, spin up computing resources for crypto-mining and run up a $2,300 bill.
  • One npm package maintainer published a code release containing access keys to their S3 storage buckets.

And the list goes on and on…

The problem is so widespread that Amazon even has a dedicated support page to tell developers what to do if they inadvertently expose their access keys.

How You Can Protect Yourself

One of the main drivers of cloud migration is the agility and flexibility that it offers organizations to speed-up roll-out of new services and reduce time-to-market. However, this agility and flexibility frequently comes at a cost to security. In the name of expediency and consumer demand, developers and DevOps may sometimes not take the necessary precautions to secure their environments or access credentials.

Such exposure can happen in a multitude of ways, including accidental exposure of scripts (such as uploading to GitHub), misconfiguration of cloud resources which contain such keys, compromise of third-party partners who have such credentials, exposure through client-side code which contains keys, targeted spear-phishing attacks against DevOps staff, and more.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Nonetheless, there are a number of key steps you can take to secure your cloud environment against such breaches:

Assume your credentials are exposed. There’s no way around this: securing your credentials, as much as possible, is paramount. However, since credentials can leak in a number of ways and from a multitude of sources, you should assume your credentials are already exposed or can become exposed in the future. Adopting this mindset will help you channel your efforts not just into limiting that exposure to begin with, but into limiting the damage caused to your organization should exposure occur.
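
One practical way to reduce accidental exposure in the first place is to scan code for key-like strings before it ever leaves a developer's machine, for example in a pre-commit hook. The sketch below matches the well-known AKIA-prefixed AWS access key ID pattern; it is a simple illustration, not a replacement for a proper secret-scanning tool.

    import re
    import sys
    from pathlib import Path

    # AWS access key IDs start with "AKIA" followed by 16 uppercase alphanumerics.
    ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

    def scan(paths: list[str]) -> int:
        hits = 0
        for p in paths:
            text = Path(p).read_text(errors="ignore")
            for n, line in enumerate(text.splitlines(), start=1):
                if ACCESS_KEY_RE.search(line):
                    print(f"{p}:{n}: possible AWS access key ID")
                    hits += 1
        return hits

    if __name__ == "__main__":
        # e.g. wire into a pre-commit hook: python scan_keys.py $(git diff --cached --name-only)
        sys.exit(1 if scan(sys.argv[1:]) else 0)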

Limit Permissions. As I pointed out earlier, one of the key benefits of migrating to the cloud is the agility and flexibility that cloud environments provide when it comes to deploying computing resources. However, this agility and flexibility frequently come at a cost to security. One such example is granting promiscuous permissions to users who shouldn’t have them. In the name of expediency, administrators frequently grant blanket permissions to users so as to remove any hindrance to operations.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

The problem, however, is that most users never use most of the permissions they have been granted and probably don’t need them in the first place. This leads to a gaping security hole: if any one of those users (or their access keys) becomes compromised, attackers will be able to exploit those permissions to do significant damage. Limiting those permissions according to the principle of least privilege will greatly help to limit the potential damage if (and when) such exposure occurs.

Early Detection is Critical. The final step is to implement measures that actively monitor user activity for potentially malicious behavior. Such behavior can include first-time API usage, access from unusual locations, access at unusual times, suspicious communication patterns, exposure of private assets to the world, and more. Implementing detection measures that look for these indicators, correlate them, and alert on potentially malicious activity will help ensure that hackers are discovered promptly, before they can do significant damage.
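
As a simplified illustration of that idea, the sketch below flags first-time (user, API action) pairs and access from locations a user has never appeared from before, over a generic list of activity events. The field names and events are assumptions; in practice these signals would come from your cloud provider's audit logs.

    from collections import defaultdict

    # Hypothetical audit events: already-parsed dicts, oldest first.
    events = [
        {"user": "dev1", "action": "DescribeInstances", "country": "US"},
        {"user": "dev1", "action": "DescribeInstances", "country": "US"},
        {"user": "dev1", "action": "RunInstances",      "country": "RO"},  # new action, new location
    ]

    seen_actions = defaultdict(set)
    seen_countries = defaultdict(set)

    for ev in events:
        user, action, country = ev["user"], ev["action"], ev["country"]
        alerts = []
        if action not in seen_actions[user]:
            alerts.append(f"first-time API usage: {action}")
        if country not in seen_countries[user]:
            alerts.append(f"new location: {country}")
        seen_actions[user].add(action)
        seen_countries[user].add(country)
        if alerts and len(seen_actions[user]) > 1:  # skip the very first event per user
            print(f"{user}: " + "; ".join(alerts))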

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Security, Botnets

Will We Ever See the End of Account Theft?

March 12, 2019 — by David Hobbs


An 87-gigabyte file called “Collection #1,” containing 773 million unique email addresses and passwords, is being sold on online forums today. We know that many users reuse the same passwords all over the internet; even after all the years of data breaches, account takeovers and thefts, user behavior stays the same. Most people want the least complex way possible to use a website.

So, what does this mean for businesses?

Anywhere you have applications guarded by username/password mechanisms, there are going to be credential stuffing attacks, courtesy of botnets. A modern botnet is a distributed network of computers around the globe that can perform sophisticated tasks and is often comprised of compromised computers belonging to other people. Essentially, these botnets are looking to steal the sand from the beach, one grain at a time, and they are never going to stop. If anything, the sophistication of their exploitation methods has grown exponentially.

Today, a Web Application Firewall (WAF) alone is not enough to fight botnets. WAFs can do some of the job, but today’s botnets are very sophisticated and can mimic real human behaviors. Many companies relied on CAPTCHA as their first line of defense, but it’s no longer sufficient to stop bots. In fact, there are now browser plugins to break CAPTCHA.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

Case in point: In 2016 at BlackHat Asia, some presenters shared that they were 98% successful at breaking these mechanisms. 98%! We, as humans, are probably nowhere near that success rate.  Personally, I’m likely at 70-80%, depending on what words (and backwards letters!) CAPTCHA presents while I’m rushing to get my work done. Even with picture CAPTCHA, I pass maybe 80% of my initial attempts; I can’t ever get those “select the edges of street signs” traps! So, what if bots are successful 98% of the time and humans only average 70%?

CAPTCHA Alone Won’t Save You

If your strategy to stop bots is flawed and you rely on CAPTCHA alone, what are some of the repercussions you may encounter? First, your web analytics will be severely flawed, impacting your ability to accurately gauge the real usage of your site. Second, advertising fraud from affiliate sites can run up your bill. Third, CAPTCHA-solving botnets will still be able to conduct other nefarious deeds, like manipulating inventory, scraping data, and launching attacks on your site.

[You may also like: The Big, Bad Bot Problem]

Identification of good bots and bad bots requires a dedicated solution. Some of the largest websites in the world have admitted that this is an ongoing war for them. Machine learning and deep learning technologies are the only way to stay ahead in today’s world.  If you do not have a dedicated anti-bot platform, you may be ready to start evaluating one today.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Application Security, Attack Types & Vectors, Security

Adapting Application Security to the New World of Bots

March 7, 2019 — by Radware


In 2018, organizations reported a 10% increase in malware and bot attacks. Considering the pervasiveness (70%) of these types of attacks reported in 2017, this uptick is likely having a big impact on organizations globally. Compounding the issue is the fact that the majority of bots are actually leveraged for good intentions, not malicious ones. As a result, it is becoming increasingly difficult for organizations to identify the difference between the two, according to Radware’s Web Application Security in a Digitally Connected World report.

Bots are automated programs that run independently to perform a series of specific tasks, for example, collecting data. Sophisticated bots can handle complicated interactive situations. More advanced programs feature self-learning capabilities that can address automated threats against traditional security models.

Positive Impact: Business Acceleration

Automated software applications can streamline processes and positively impact overall business performance. They replace tedious human tasks and speed up processes that depend on large volumes of information, thus contributing to overall business efficiency and agility.

Good bots include:

  • Crawlers — are used by search engines and contribute to SEO and SEM efforts
  • Chatbots — automate and extend customer service and first response
  • Fetchers — collect data from multiple locations (for instance, live sporting events)
  • Pricers — compare pricing information from different services
  • Traders — are used in commercial systems to find the best quote or rate for a transaction

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Negative Impact: Security Risks

The Open Web Application Security Project (OWASP) lists 21 automated threats to applications that can be grouped together by business impacts:

  • Scraping and Data Theft — Bots try to access restricted areas in web applications to get a hold of sensitive data such as access credentials, payment information and intellectual property. One method of collecting such information is called web scraping. A common example of a web-scraping attack targets e-commerce sites, where bots quickly hold or even fully clear out the inventory.
  • Performance — Bots can impact the availability of a website, bringing it to a complete or partial denial-of-service state. The consumption of resources such as bandwidth or server CPU immediately leads to a deterioration in the customer experience, lower conversions and a bad image. Attacks can be large and volumetric (DDoS) or not (low and slow, buffer overflow).
  • Poisoning Analytics — When a significant portion of a website’s visitors are fictitious, expect biased figures such as fraudulent links. Compounding this issue is the fact that third-party tools designed to monitor website traffic often have difficulty filtering bot traffic.
  • Fraud and Account Takeover — With access to leaked databases such as Yahoo and LinkedIn, hackers use bots to run through usernames and passwords to gain access to accounts. Then they can access restricted files, inject scripts or make unauthorized transactions.
  • Spammers and Malware Downloaders — Malicious bots constantly target mobile and web applications. Using sophisticated techniques like spoofing their IPs, mimicking user behavior (keystrokes, mouse movements), abusing open-source tools (PhantomJS) and headless browsers, bots bypass CAPTCHA, challenges and other security heuristics.

[You may also like: The Big, Bad Bot Problem]

Blocking Automated Threats

Gawky bot attacks against websites are easy to block with IP- and reputation-based signatures and rules. However, because of the increase in sophistication and frequency of attacks, it is important to be able to uniquely identify the attacking machine, a process referred to as device fingerprinting. The process should be IP-agnostic yet unique enough to be acted upon with confidence. At times, resourceful attacking sources may actively try to manipulate the fingerprint extracted by the web tool, so it should also be resistant to client-side manipulation.

 

Web client fingerprint technology introduces significant value in the context of automated attacks, such as web scraping; Brute Force and advanced availability threats, such as HTTP Dynamic Flood; and low and slow attacks, where the correlation across multiple sessions is essential for proper detection and mitigation.

For each fingerprint-based, uniquely identified source, a historical track record is stored with all security violations, activity records and application session flows. Each abnormal behavior is registered and scored. Violation examples include SQL injection, suspicious session flow and high page access rate. Once a threshold is reached, the source with the marked fingerprint will not be allowed to access the secured application.
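
In pseudocode-like form, the scoring logic described above might look like the following sketch; the violation weights and the blocking threshold are invented for illustration only.

    from collections import defaultdict

    # Assumed violation weights and threshold, for illustration only.
    WEIGHTS = {"sql_injection": 10, "suspicious_session_flow": 4, "high_page_rate": 3}
    BLOCK_THRESHOLD = 15

    scores = defaultdict(int)          # fingerprint -> accumulated score
    blocked = set()

    def register_violation(fingerprint: str, violation: str) -> None:
        scores[fingerprint] += WEIGHTS.get(violation, 1)
        if scores[fingerprint] >= BLOCK_THRESHOLD:
            blocked.add(fingerprint)

    def is_allowed(fingerprint: str) -> bool:
        return fingerprint not in blocked

    # Example: repeated violations from one fingerprinted source eventually block it.
    for v in ["high_page_rate", "suspicious_session_flow", "sql_injection"]:
        register_violation("fp-8c1a", v)
    print(is_allowed("fp-8c1a"))   # False once the threshold is crossed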

[You may also like: IoT Expands the Botnet Universe]

Taking the Good with the Bad

Ultimately, understanding and managing bots isn’t about crafting a strategy driven by a perceived negative attitude toward bots because, as we’ve explained, bots serve many useful purposes for propelling the business forward. Rather, it’s about equipping your organization to act as a digital detective to mitigate malicious traffic without adversely impacting legitimate traffic.

Organizations need to embrace technological advancements that yield better business performance while integrating the necessary security measures to guard their customer data and experience.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.
