
How Do Marketers Add Security into Their Messaging?

May 7, 2019 — by Anna Convery-Pelletier


These days, data breaches are an everyday occurrence. Companies collect volumes of data about their customers, from basic contact information to detailed financial history, demographics, buying patterns, and even lifestyle choices. Effectively, this builds a very private digital footprint for each customer. When that footprint is leaked, it erodes trust not only in the affected brand, but in all brands.

The latest marketing buzzwords call this a ‘post-breach era’ but I’d call it a post-trust era. We have watched the slow erosion of consumer trust for several years now. Forrester predicted that 2018 would mark the tipping point, calling it “a year of reckoning,” but here we are in 2019 and trust only continues to decline. The Edelman Trust Barometer claims that in the U.S., we saw the sharpest drop in consumer trust in history, bringing it to an all-time low.

Why is Consumer Trust Falling at Such a Rapid Rate?

Organizations have spent billions of dollars digitally transforming themselves to create faster, easier and more numerous access points for their customers to interact with their brand. And it’s worked. Consumers engage with brands more often, and share more personal data, than ever before. For marketers, it’s a dream come true: more access equals more insights and more customer data recorded, enabling more personalized and customized customer experiences.

[You may also like: The Costs of Cyberattacks Are Real]

However, each touch-point comes with increased security risks and vulnerabilities. Prior to the digital transformation revolution, brands interacted much less frequently with their customers (for the sake of argument, let’s say once a month). But now, brands communicate daily (sometimes multiple times per day!) across multiple touch-points and multiple channels, collecting exponential amounts of data. This increases not only the opportunities for breaches, but also the possibility of negative customer interactions, given how much more private information is known about each individual. An overabundance of those marvelous personalized interactions can feel invasive, leaving consumers uncomfortable about the size of their digital footprint.

Trust is necessary to offset any negativity.  

[You may also like: Cybersecurity as a Selling Point: Retailers Take Note]

Brands have a tremendous responsibility to protect all the data they collect from their customers. Historically, a lack of vigilance on security has been at the root of many data breaches. For many years, the C-suite treated information security as an expense that covered the bare minimum of regulatory compliance, not as an investment.

Today that organizational behavior no longer suffices. The stakes are much higher now; the size, frequency, and resulting consequences of recent data breaches have created a huge backlash in consumer sentiment. We feel the impact of this trust erosion in new legislation across the globe (GDPR, Castle Laws, etc.) designed to give consumers some power back with regards to their data. We also feel the impact in customer churn, brand abandonment, and poor Customer Lifetime Value (CLV) after a security breach. The ripple effects of data breaches signal the value of investing in security upfront: invest in the right cybersecurity infrastructure now or risk paying far more later.

It forces us as marketers to change the type of conversations we have with our customers.

What’s a Brand to Do?

How important is data security to your customers and your brand promise?  If asked, surely every one of your customers would tell you it’s important.  Most marketers are afraid to make security promises for fear of future data breaches. However, there’s a compelling argument that if you don’t address the issue up front, you are missing a critical conversation with your customers that could cost you their loyalty.

[You may also like: Consumer Sentiments About Cybersecurity and What It Means for Your Organization]

  • Don’t fear the security conversation, embrace it.  Brands like Apple are once again leading the privacy conversation.  Apple’s new ad campaign addresses privacy issues head-on.  Your executives may not take the exact same stance as Apple, but as a marketer, you can identify the right tone and timing for a security conversation with your audience.
  • Ask your customers about their security concerns and listen to their answers! Our digitally transformed world empowers us to engage in a two-way dialog with our audiences.  Talk to them. Ask them their opinions on security – and more importantly, listen to their answers. Take their suggestions back to your product and development teams and incorporate them into your company’s DNA.
  • Develop features and services that empower your customers to protect their own privacy. Today, banks offer credit monitoring, credit locking, fraud alerts, subscriptions to services that monitor the dark web for an entire family, etc.  IoT devices have enabled people to see who is ringing the doorbell even when they are not home. Those doorbell recordings can now be shared through neighborhood watch sites to warn the community of incidents when they occur.  These are all examples of innovation and evolution around security as a feature. 
  • Highlight all the different ways your company is protecting its customers’ data and privacy.  Don’t assume your customers know that you take their privacy concerns seriously.  Show them you care about their security concerns. Tell them and educate them about all the steps you are taking to protect them.
  • Don’t whitewash security concerns. Be a champion for injecting security into the DNA of your organization – from product development to responsible data collection and storage, to the customer experience. 

Regardless of your industry—from finance to retail to consumer goods to healthcare and beyond—there is a security discussion to be had with your customers. If you are not embracing the conversation, your competitors will, and you will be left behind.

Read “Consumer Sentiments: Cybersecurity, Personal Data and The Impact on Customer Loyalty” to learn more.


Is It Legal to Evaluate a DDoS Mitigation Service?

March 27, 2019 — by Dileep Mishra


A couple of months ago, I was on a call with a company that was in the process of evaluating DDoS mitigation services to protect its data centers. This company runs mission-critical applications and was looking for comprehensive coverage from various types of attacks, including volumetric, low and slow, encrypted floods, and application-layer attacks.

During the discussion, our team asked a series of technical questions related to their ISP links, types of applications, physical connectivity, and more. And we provided an attack demo using our sandbox lab in Mahwah.

Everything was moving along just fine until the customer asked us for a Proof of Concept (PoC), what most would consider a natural next step in the vendor evaluation process.

About That Proof of Concept…

How would you do a DDoS POC? You rack and stack the DDoS mitigation appliance (or enable the service if it is cloud based), set up some type of management IP address, configure the protection policies, and off you go!

Well, when we spoke to this company, they said they would be happy to do all of that–at their disaster recovery data center located within a large carrier facility on the east coast. This sent my antenna up and I immediately asked a couple of questions that would turn out to be extremely important for all of us: Do you have attack tools to launch DDoS attacks? Do you take the responsibility to run the attacks?  Well, the customer answered “yes” to both.

[You may also like: DDoS Protection Requires Looking Both Ways]

Being a trained SE, I then asked why they needed to run the PoC in their lab and if there was a way we could demonstrate that our DDoS mitigation appliance can mitigate a wide range of attacks using our PoC script. As it turned out, the prospect was evaluating other vendors and, to compare apples to apples (thereby giving all vendors a fair chance), were already conducting a PoC in their data center with their appliance.

We shipped the PoC unit quickly and the prospect, true to their word, got the unit racked, stacked, and cabled up, ready to go. We configured the device, then gave them the green light to launch attacks. And then the prospect told us to launch the attacks; they didn’t have any attack tools after all.

A Bad Idea

Well, most of us in this industry do have DDoS testing tools, so what’s the big deal? As vendors who provide cybersecurity solutions, we shouldn’t have any problems launching attacks over the Internet to test out a DDoS mitigation service…right?

[You may also like: 8 Questions to Ask in DDoS Protection]

WRONG! Here’s why that’s a bad idea:

  • Launching attacks over the Internet is ILLEGAL. You need written permission from the entity being attacked to launch a DDoS attack. You can try your luck if you want, but this is akin to running a red light. You may get away with it, but if you are caught the repercussions are damaging and expensive.
  • Your ISP might block your IP address. Many ISPs have DDoS defenses within their infrastructure and if they see someone launching a malicious attack, they might block your access. Good luck sorting that one out with your ISP!
  • Your attacks may not reach the desired testing destination. Well, even if your ISP doesn’t block you and the FBI doesn’t come knocking, there might be one or more DDoS mitigation devices between you and the customer data center where the destination IP being tested resides. These devices could very well mitigate the attack you launch, preventing you from doing the testing.

Those are three big reasons why doing DDoS testing in a production data center is, simply put, a bad idea. Especially if you don’t have a legal, easy way to generate attacks.

[You may also like: 5 Must-Have DDoS Protection Technologies]

A Better Way

So what are the alternatives? How should you do DDoS testing?

  • With DDoS testing, the focus should be on evaluating the mitigation features: Can the service detect attacks quickly? Can it mitigate immediately? Can it adapt to attacks that are morphing? Can it report accurately on the attack it is seeing and what is being mitigated? How accurate is the mitigation (what about false positives)? If you run a DDoS PoC in a production environment, you will spend most of your resources and time testing connectivity and spinning your wheels on operational aspects (e.g. LAN cabling, console cabling, change control procedures, paperwork, etc.). This is not what you want to test; you want to test DDoS mitigation! It’s like trying to test how fast a sports car can go on a very busy street. You will end up testing the brakes, but you won’t get very far with any speed testing.
  • Test things out in your lab. Even better, let the vendor test it in their lab for you. This will let both parties focus on the security features rather than get caught up with the headaches of logistics involved with shipping, change control, physical cabling, connectivity, routing etc.
  • It is perfectly legal to use attack tools from distributions like Kali Linux or BackTrack within a lab environment. Launch attacks to your heart’s content, morph the attacks, and see how the DDoS service responds.
  • If you don’t have the time or expertise to launch attacks yourself, hire a DDoS testing service. Companies like activereach, Redwolf security or MazeBolt security do this for a living, and they can help you test the DDoS mitigation service with a wide array of customized attacks. This will cost you some money, but if you are serious about the deployment, you will be doing yourself a favor and saving future work.
  • Finally, evaluate multiple vendors in parallel. You can never do this in a production data center. However, in a lab you can keep the attacks and the victim applications constant, while just swapping in the DDoS mitigation service. This will give you an apples-to-apples comparison of the actual capabilities of each vendor and will also shorten your evaluation cycle.
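
The apples-to-apples approach in the last point can be sketched as a small test matrix: keep the attack set and victim constant, and swap only the mitigation service under test. This is a minimal, illustrative harness; `run_attack`, the vendor names, and the attack names are all hypothetical stand-ins for your lab tooling:

```python
# Sketch of an apples-to-apples lab comparison: the attack set and victim stay
# constant while only the mitigation service under test is swapped.
# `run_attack` is a hypothetical stand-in for real lab attack tooling.

def compare_vendors(vendors, attacks, run_attack):
    """Run every attack against every vendor and tabulate mitigation results."""
    results = {}
    for vendor in vendors:
        results[vendor] = {
            attack: run_attack(vendor, attack)  # True if the attack was mitigated
            for attack in attacks
        }
    return results

# Example with a stubbed-out lab harness:
def fake_run_attack(vendor, attack):
    # In a real lab this would drive attack tools and check whether
    # the victim application stayed responsive during the attack.
    return (vendor, attack) != ("vendor_b", "http_flood")

scores = compare_vendors(
    ["vendor_a", "vendor_b"],
    ["syn_flood", "http_flood"],
    fake_run_attack,
)
```

Because everything except the mitigation service is held constant, any difference in `scores` reflects the vendors’ actual capabilities rather than lab logistics.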

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


DDoS Protection Requires Looking Both Ways

March 26, 2019 — by Eyal Arazi


Service availability is a key component of the user experience. Customers expect services to be constantly available and fast-responding, and any downtime can result in disappointed users, abandoned shopping carts, and lost customers.

Consequently, DDoS attacks are increasing in complexity, size and duration. Radware’s 2018 Global Application and Network Security Report found that over the course of a year, sophisticated DDoS attacks, such as burst attacks, increased by 15%, HTTPS floods grew by 20%, and over 64% of customers were hit by application-layer (L7) DDoS attacks.

Some Attacks are a Two-Way Street

As DDoS attacks become more complex, organizations require more elaborate protections to mitigate such attacks. However, in order to guarantee complete protection, many types of attacks – particularly the more sophisticated ones – require visibility into both inbound and outbound channels.

Some examples of such attacks include:

Out of State Protocol Attacks: Some DDoS attacks exploit weaknesses in protocol communication processes, such as TCP’s three-way handshake sequence, to create ‘out-of-state’ connection requests, drawing out connection requests in order to exhaust server resources. While some attacks of this type, such as a SYN flood, can be stopped by examining the inbound channel only, others require visibility into the outbound channel as well.

An example of this is an ACK flood, whereby attackers continuously send forged TCP ACK packets towards the victim host. The target host then tries to associate each ACK with an existing TCP connection, and if no such connection exists, it drops the packet. However, this process consumes server resources, and large numbers of such requests can deplete system resources. In order to correctly identify and mitigate such attacks, defenses need visibility into both inbound SYNs and outbound SYN/ACK replies, so that they can verify whether an ACK packet is associated with a legitimate connection request.
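
The bi-directional check described above can be sketched as a minimal, illustrative filter. The class name, flow-tuple keys, and IP addresses are hypothetical, and a real mitigation device would also age out entries and track full connection state:

```python
# Simplified sketch of why ACK-flood mitigation needs both directions:
# the defense records outbound SYN/ACKs and only admits inbound ACKs that
# match a connection the server actually offered.

class AckFloodFilter:
    def __init__(self):
        self.pending = set()  # (client_ip, client_port) pairs with an outbound SYN/ACK

    def saw_outbound_synack(self, client_ip, client_port):
        # Outbound-channel visibility: remember which handshakes the server offered.
        self.pending.add((client_ip, client_port))

    def inbound_ack_is_legit(self, client_ip, client_port):
        # A forged ACK with no matching SYN/ACK is dropped at the edge,
        # before it can consume server-side connection-lookup resources.
        return (client_ip, client_port) in self.pending

f = AckFloodFilter()
f.saw_outbound_synack("203.0.113.7", 51514)
f.inbound_ack_is_legit("203.0.113.7", 51514)   # legitimate handshake completion
f.inbound_ack_is_legit("198.51.100.9", 40000)  # forged ACK with no prior SYN/ACK
```

With only inbound visibility, the filter would have no `pending` set to consult, which is exactly the gap the article describes.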

[You may also like: An Overview of the TCP Optimization Process]

Reflection/Amplification Attacks: Such attacks exploit asymmetric responses between the connection requests and replies of certain protocols or applications. Again, some types of such attacks require visibility into both the inbound and outbound traffic channels.

An example of such an attack is a large-file outbound pipe saturation attack. In such attacks, the attackers identify a very large file on the target network and send connection requests to fetch it. The connection request itself can be only a few bytes in size, but the ensuing reply can be extremely large. Large numbers of such requests can clog up the outbound pipe.
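
A back-of-the-envelope calculation makes the asymmetry concrete. The request and file sizes below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope sketch of outbound pipe saturation: a tiny inbound
# request fetching a large file yields a huge outbound-to-inbound byte ratio.

request_bytes = 300            # small HTTP GET asking for a large file
response_bytes = 50_000_000    # ~50 MB file served in reply

# Each request is amplified by this factor on the outbound channel:
amplification = response_bytes / request_bytes          # ~166,667x

# A modest inbound request rate translates into enormous outbound load:
requests_per_second = 100
outbound_mbps = response_bytes * requests_per_second * 8 / 1_000_000  # 40,000 Mbps
```

Under these assumptions, just 100 small requests per second demand roughly 40 Gbps of outbound capacity, while the inbound channel sees almost nothing unusual, which is why inbound-only monitoring misses this attack.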

Another example is a memcached amplification attack. Although such attacks are most frequently used to overwhelm a third-party target via reflection, they can also be used to saturate the outbound channel of the targeted network.

[You may also like: 2018 In Review: Memcache and Drupalgeddon]

Scanning Attacks: Large-scale network scanning attempts are not just a security risk, but also frequently bear the hallmark of a DDoS attack, flooding the network with malicious traffic. Such scans send large numbers of connection requests to host ports and see which ports answer back (thereby indicating that they are open). However, this also generates high volumes of error responses from closed ports. Mitigating such attacks requires visibility into return traffic, so that defenses can measure the error response rate relative to actual traffic and conclude that an attack is taking place.
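
The error-rate heuristic described above can be sketched as follows. The 0.5 threshold and the function name are illustrative assumptions, not values from any product:

```python
# Minimal sketch of error-rate-based scan detection: compare the rate of error
# replies on the outbound channel (e.g. TCP RSTs or ICMP port-unreachable)
# against the total inbound requests from the same source.

def looks_like_scan(inbound_requests, outbound_errors, threshold=0.5):
    """Flag a source whose requests mostly draw error responses."""
    if inbound_requests == 0:
        return False  # nothing to judge
    return outbound_errors / inbound_requests > threshold

looks_like_scan(1000, 900)  # scanner: most probes hit closed ports
looks_like_scan(1000, 20)   # normal client: only a few errors
```

Note that `outbound_errors` can only be counted with visibility into the return channel, which is the article’s point: an inbound-only view sees ordinary-looking connection requests.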

Server Cracking: Similar to scanning attacks, server cracking attacks involve sending large amounts of requests in order to brute-force system passwords. Similarly, this leads to a high error reply rate, which requires visibility into both the inbound and outbound channels in order to identify the attack.

Stateful Application-Layer DDoS Attacks: Certain types of application-layer (L7) DDoS attacks exploit known protocol weaknesses in order to create large amounts of spoofed requests which exhaust server resources. Mitigating such attacks requires state-aware, bi-directional visibility in order to identify attack patterns, so that the relevant attack signature can be applied to block them. Examples of such attacks are low-and-slow attacks and application-layer (L7) SYN floods, which draw out HTTP and TCP connections in order to continuously consume server resources.

[You may also like: Layer 7 Attack Mitigation]

Two-Way Attacks Require Bi-Directional Defenses

As online service availability becomes ever more important, hackers are coming up with more sophisticated attacks than ever in order to overwhelm defenses. Many such attack vectors – frequently the more sophisticated and potent ones – either target or take advantage of the outbound communication channel.

Therefore, in order for organizations to fully protect themselves, they must deploy protections that allow bi-directional inspection of traffic in order to identify and neutralize such threats.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats

March 21, 2019 — by Abhinaw Kumar


According to a study by the Ponemon Institute in December 2018, bots comprised over 52% of all Internet traffic. While ‘good’ bots discreetly index websites, fetch information and content, and perform useful tasks for consumers and businesses, ‘bad’ bots have become a primary and growing concern to CISOs, webmasters, and security professionals today. They carry out a range of malicious activities, such as account takeover, content scraping, carding, form spam, and much more. The negative impacts resulting from these activities include loss of revenue and harm to brand reputation, theft of content and personal information, lowered search engine rankings, and distorted web analytics, to mention a few.

For these reasons, researchers at Forrester recommend that, “The first step in protecting your company from bad bots is to understand what kinds of bots are attacking your firm.” So let us briefly look at the main bad bot threats CISOs have to face, and then delve into their industry-wise prevalence.

Bad Bot Attacks That Worry CISOs The Most

The impact of bad bots results from the specific activities they’re programmed to execute. Many of them aim to defraud businesses and/or their customers for monetary gain, while others involve business competitors and nefarious parties who scrape content (including articles, reviews, and prices) to gain business intelligence.

[You may also like: The Big, Bad Bot Problem]

  • Account Takeover attacks use credential stuffing and brute force techniques to gain unauthorized access to customer accounts.
  • Application DDoS attacks slow down web applications by exhausting system resources, 3rd-party APIs, inventory databases, and other critical resources.
  • API Abuse results from nefarious entities exploiting API vulnerabilities to steal sensitive data (such as personal information and business-critical data), take over user accounts, and execute denial-of-service attacks.
  • Ad Fraud is the generation of false impressions and illegitimate clicks on ads shown on publishing sites and their mobile apps. A related form of attack is affiliate marketing fraud (also known as affiliate ad fraud) which is the use of automated traffic by fraudsters to generate commissions from an affiliate marketing program.
  • Carding attacks use bad bots to make multiple payment authorization attempts to verify the validity of payment card data, expiry dates, and security codes for stolen payment card data (by trying different values). These attacks also target gift cards, coupons and voucher codes.
  • Scraping is a strategy often used by competitors who deploy bad bots on your website to steal business-critical content, product details, and pricing information.
  • Skewed Analytics is a result of bot traffic on your web property, which skews site and app metrics and misleads decision making.
  • Form Spam refers to the posting of spam leads and comments, as well as fake registrations on marketplaces and community forums.
  • Denial of Inventory is used by competitors/fraudsters to deplete goods or services in inventory without ever purchasing the goods or completing the transaction.
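
As one illustration of the first item above, credential stuffing tends to produce failed logins across many distinct usernames from a single source in a short window. Here is a minimal, hypothetical detection sketch; the class, method names, and threshold are assumptions, not any vendor’s actual logic:

```python
# Illustrative account-takeover signal: one source IP failing logins against
# many different accounts is characteristic of credential stuffing, whereas a
# legitimate user fails repeatedly against only their own account.

from collections import defaultdict

class StuffingDetector:
    def __init__(self, max_distinct_users=5):
        self.failed_users = defaultdict(set)  # source_ip -> usernames that failed
        self.max_distinct_users = max_distinct_users

    def record_failed_login(self, source_ip, username):
        self.failed_users[source_ip].add(username)

    def is_suspicious(self, source_ip):
        # Flag sources that fail against more distinct accounts than the threshold.
        return len(self.failed_users[source_ip]) > self.max_distinct_users

d = StuffingDetector()
for i in range(20):
    d.record_failed_login("198.51.100.7", f"user{i}")  # bot cycling stolen credentials
d.record_failed_login("203.0.113.5", "alice")          # a human mistyping a password
```

Real detection combines many such signals (device fingerprints, timing, behavior), but the distinct-username count captures the basic intuition.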

Industry-wise Impact of Bot Traffic

To illustrate the impact of bad bots, we aggregated all the bad bot traffic that was blocked by our Bot Manager during Q2 and Q3 of 2018 across four industries selected from our diverse customer base: E-commerce, Real Estate, Classifieds & Online Marketplaces, and Media & Publishing. While the prevalence of bad bots can vary considerably over time and even within the same industry, our data shows that specific types of bot attacks tend to target certain industries more than others.

[You may also like: Adapting Application Security to the New World of Bots]

E-Commerce

Intent-wise distribution of bad bot traffic on E-commerce sites (in %)

Bad bots target e-commerce sites to carry out a range of attacks — such as scraping, account takeovers, carding, scalping, and denial of inventory. However, the most prevalent bad bot threat encountered by our e-commerce customers during our study was attempted affiliate fraud. Bad bot traffic made up roughly 55% of the overall traffic on pages that contain links to affiliates. Content scraping and carding were the most prevalent bad bot threats to e-commerce portals two to five years ago, but the latest data indicates that attempts at affiliate fraud and account takeover are growing rapidly compared to earlier years.

Real Estate

Intent-wise distribution of bad bot traffic on Real Estate sites (in %)

Bad bots often target real estate portals to scrape listings and the contact details of realtors and property owners. However, we are seeing growing volumes of form spam and fake registrations, which have historically been the biggest problems caused by bots on these portals. Bad bots comprised 42% of total traffic on pages with forms in the real estate sector. These malicious activities anger advertisers, reduce marketing ROI and conversions, and produce skewed analytics that hinder decision making. Bad bot traffic also strains web infrastructure, affects the user experience, and increases operational expenses.

Classifieds & Online Marketplaces

Intent-wise distribution of bad bot traffic on Classifieds sites (in %)

Along with real estate businesses, classifieds sites and online marketplaces are among the biggest targets for content and price scrapers. Their competitors use bad bots not only to scrape their exclusive ads and product prices to illegally gain a competitive advantage, but also to post fake ads and spam web forms to access advertisers’ contact details. In addition, bad bot traffic strains servers, third-party APIs, inventory databases and other critical resources, creates application DDoS-like situations, and distorts web analytics. Bad bot traffic accounted for over 27% of all traffic on product pages from where prices could be scraped, and nearly 23% on pages with valuable content such as product reviews, descriptions, and images.

Media & Publishing

Intent-wise distribution of bad bot traffic on Media & Publishing sites (in %)

More than ever, digital media and publishing houses are scrambling to deal with automated bad bot attacks such as scraping of proprietary content and ad fraud. The industry is beset with high levels of ad fraud, which hurts advertisers and publishers alike. Comment spam often derails discussions and results in negative user experiences. Bot traffic also inflates traffic metrics and prevents marketers from gaining accurate insights. Over the six-month period that we analyzed, bad bots accounted for 18% of overall traffic on pages with high-value content, 10% on ads, and nearly 13% on pages with forms.

As we can see, security chiefs across a range of industries are facing increasing volumes and types of bad bot attacks. What can they do to mitigate malicious bots that are rapidly evolving in ways that make them significantly harder to detect? Conventional security systems that rely on rate-limiting and signature-matching approaches were never designed to detect human-like bad bots that rapidly mutate and operate in widely-distributed botnets using ‘low and slow’ attack strategies and a multitude of (often hijacked) IP addresses.

The core challenge for any bot management solution, then, is to detect every visitor’s intent to help differentiate between human and malicious non-human traffic. As more bad bot developers incorporate artificial intelligence (AI) to make human-like bots that can sneak past security systems, any effective countermeasures must also leverage AI and machine learning (ML) techniques to accurately detect the most advanced bad bots.
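
A toy example of the limitation described above: a naive per-IP rate limit catches one noisy bot but misses the same request volume spread low-and-slow across a distributed botnet. All names, addresses, and numbers here are illustrative:

```python
# Why rate-limiting alone fails against distributed bots: the same 1,000
# requests trip a per-IP limit when sent from one address, but stay invisible
# when spread across 1,000 hijacked addresses sending one request each.

from collections import Counter

def blocked_ips(requests, per_ip_limit=100):
    """Return the set of source IPs exceeding a naive per-IP request limit."""
    counts = Counter(requests)
    return {ip for ip, n in counts.items() if n > per_ip_limit}

# One aggressive bot: 1,000 requests from a single IP -> caught.
single_bot = ["192.0.2.1"] * 1000

# The same load from 1,000 distinct (hijacked) IPs -> nothing exceeds the limit.
botnet = [f"10.{i // 250}.{i % 250}.1" for i in range(1000)]
```

This is why the article argues for intent- and behavior-based detection: the distributed traffic is only distinguishable from humans by *how* each session behaves, not by per-IP volume.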

Read “Radware’s 2018 Web Application Security Report” to learn more.


The Intersections between Cybersecurity and Diversity

March 20, 2019 — by Kevin Harris


Cybersecurity and diversity are high-value topics that are most often discussed in isolation. Both topics resonate with individuals and organizations alike.

However, the intersections between cybersecurity and diversity are often overlooked. As nations and organizations seek to protect their critical infrastructures, it’s important to cultivate relationships between the two areas. Diversity is no longer only a social awareness and morality initiative; it is a core element of defending critical infrastructures.

Communities Need to Play a Greater Role in Cybersecurity

Technology careers typically pay more than other careers, providing a pathway to a quality lifestyle. With multiple entry points into the technology field — including degrees, apprenticeships and industry certifications — there are ways that varying communities can take part in technology careers, especially in cybersecurity. For instance, communities can improve cybersecurity education for women, minorities and home users.

Workforce Gaps Involving Women and Minorities Weaken Cybersecurity Defenses

Limited awareness and exposure to cybersecurity education often creates an opportunity gap for minorities and women. Failing to incorporate underserved populations limits the talent and size of our cybersecurity workforce. Without an all-inclusive cyber workforce, our critical infrastructure will have a talent gap, introducing additional system vulnerabilities.

To rectify this problem, communities must implement permanent efforts to ensure that children attending schools in underserved districts have access to technology and courses. That will better prepare them to become cyber workers.

[You may also like: Battling Cyber Risks with Intelligent Automation]

This infusion of technology talent helps to protect our nation’s vital digital assets. Organizations must make their recruitment and retention practices more inclusive. Ideally, they should provide opportunities to individuals who are either trained or are willing to undergo training to have a pathway to a successful career.

Additionally, higher education institutions should find ways to ensure that minorities and women have the support they need as they progress through their technology degrees. Universities and colleges can also offer cybersecurity faculty and mentors who can help these groups prepare for meaningful careers.

Cybersecurity Training Must Be Improved for Home Users

Another intersection of cybersecurity and diversity is at the user level. Most cybersecurity discussions center on the protection of government or corporate systems. Organizations spend significant portions of their budgets to prepare for and protect against cyberattacks.

Unfortunately, home users are often left out of such conversations; they are not considered part of any holistic cyber defense plan. With the large number of home users with multiple devices, the vulnerabilities of home systems provide hackers with easy attack opportunities.

[You may also like: The Costs of Cyberattacks Are Real]

Consequently, attackers access and compromise home devices, which allows them to attack other systems. In addition, these hackers can mask their true location and increase their computing power. They can then carry out their attacks more efficiently.

Compromising an individual’s personal device presents additional opportunities for attackers to access that person’s credentials as well as other sensitive workplace data. However, strong organizational policies should dictate what information can be accessed remotely.

To increase home users’ threat awareness level, organizations should develop training programs as a part of community involvement initiatives. Vendors should strengthen default security settings for home users and ensure that home security protections are affordable and not difficult to configure.

[You may also like: Personal Security Hygiene]

Organizational Cultures Need to Emphasize that All Employees are Cyber Defenders

Diversity and cybersecurity also intersect at the organizational culture level. Regardless of whether or not organizations have an information systems security department, companies must foster the right type of security-minded workplace culture. All employees should be aware that they are intricate components in protecting the organization’s critical digital assets.

Educational institutions can support this effort by incorporating cyber awareness training across disciplines. This will give all graduates — regardless of their degrees — some exposure to cyber risks and their role in protecting digital assets.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Cybersecurity and Diversity Should Work Together, Not in Silos

Cybersecurity and diversity will continue to be important topics. The focus, however, should be on discussing the importance of their mutual support, rather than functioning in two separate silos. Improving our cyber defenses requires the best of all segments of our society, which includes minorities, women and home users.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.


Bots 101: This is Why We Can’t Have Nice Things

March 19, 2019 — by Daniel Smith


In our industry, the term bot applies to software applications designed to perform automated tasks at a high rate of speed. Typically, I use bots at Radware to aggregate data for intelligence feeds or to automate repetitive tasks. I also spend the vast majority of my time researching and tracking emerging bots that were designed and deployed in the wild with bad intentions.

As I’ve previously discussed, there are generally two different types of bots, good and bad. Some of the good bots include Search Bots, Crawlers and Feed Fetchers that are designed to locate and index your website appropriately so it can become visible online. Without the aid of these bots, most small and medium-sized businesses wouldn’t be able to establish an authority online and attract visitors to their site.

[You may also like: The Big, Bad Bot Problem]

On the dark side, criminals use the same technology to create bots for illicit and profitable activities, such as scraping content from one website and selling it to another. These malicious bots can also be leveraged to take over accounts, generate fake reviews, commit ad fraud and stress your web applications. Malicious bots have even been used to create fake social media accounts and influence elections.

With close to half of all internet traffic today being non-human, bad bots represent a significant risk for businesses, regardless of industry or channel.

As the saying goes, this is why we can’t have nice things.

Targeted Industries

If malicious bots target an online business, it will be impacted in one way or another, whether in website performance, sales conversions, competitive advantage, analytics or user experience. The good news is that organizations can take action against bot activity in real time, but first they need to understand their own risk before considering a solution.

[You may also like: Credential Stuffing Campaign Targets Financial Services]

  • E-Commerce – The e-commerce industry faces bot attacks that include account takeovers, scraping, inventory exhaustion, scalping, carding, skewed analytics, application DoS, ad fraud, and account creation.
  • Media – Digital publishers are vulnerable to automated attacks such as ad fraud, scraping, skewed analytics, and form spam.
  • Travel – The travel industry mainly deals with scraping attacks but can suffer from inventory exhaustion, carding and application DoS as well.
  • Social Networks – Social platforms deal with automated bot attacks such as account takeovers, account creation, and application DoS.
  • Ad Networks – Bots that create Sophisticated Invalid Traffic (SIVT) target ad networks for ad fraud activity such as fraudulent clicks and impressions.
  • Financial Institutions – Banking, financial services and insurance industries are all high-value targets for bots that leverage account takeovers, application DoS or content scraping.

Types of Application Attacks

It’s becoming increasingly difficult for conventional security solutions to track and report on sophisticated bots that are continuously changing their behavior, obfuscating their identity and utilizing different attack vectors for various industries. Once you begin to understand the risk posed by malicious automated bots, you can start to focus on the attack vectors you may face as a result of that activity.

[You may also like: Adapting Application Security to the New World of Bots]

  • Account takeover – Account takeovers include credential stuffing, password spraying, and brute force attacks that are used to gain unauthorized access to a targeted account. Credential stuffing and password spraying are two popular techniques used today. Once hackers gain access to an account, they can begin additional stages of infection, data exfiltration or fraud.
  • Scraping – Scraping is the process of extracting data or information from a website and publishing it elsewhere. Content, price and inventory scraping is also used to gain a competitive advantage. These scraper bots crawl your web pages for specific information about your products. Typically, scrapers steal the entire content from websites or mobile applications and publish it to gain traffic.
  • Inventory exhaustion – Inventory exhaustion is when a bot is used to add hundreds of items to a cart and later abandon them to prevent real shoppers from buying the products.
  • Inventory scalping – Hackers deploy retail bots to gain an advantage to buy goods and tickets during a flash sale, and then resell them later at a much higher price.
  • Carding – Carders deploy bots on checkout pages to validate stolen card details and to crack gift cards.
  • Skewed analytics – Automated invalid traffic directed at your e-commerce portal can skew metrics and mislead decision making when applied to advertising budgets and other business decisions. Bots pollute metrics, disrupt funnel analysis, and inhibit KPI tracking.
  • Application DoS – Application DoS attacks slow down e-commerce portals by exhausting web server resources, 3rd party APIs, inventory databases and other critical resources to the point that they are unavailable for legitimate users.
  • Ad fraud – Bad bots are used to generate invalid traffic designed to create false impressions and generate illegitimate clicks on websites and mobile apps.
  • Account creation – Bots are used to create fake accounts on a massive scale for content spamming, SEO and skewing analytics.
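
The account-takeover techniques above leave different statistical footprints in failed-login logs, which is one practical way to tell them apart. A minimal sketch, where the count thresholds are illustrative assumptions rather than production values:

```python
def classify_login_attack(attempts):
    """Classify a batch of failed logins from a single client.

    attempts: list of (username, password) tuples. The thresholds
    below are illustrative, not production values.
    """
    users = {u for u, _ in attempts}
    passwords = {p for _, p in attempts}
    if len(attempts) < 20:
        return "normal"
    if len(users) > 10 and len(passwords) > 10:
        return "credential stuffing"  # leaked user/password pairs replayed
    if len(users) > 10 and len(passwords) <= 3:
        return "password spraying"    # a few common passwords across many accounts
    if len(users) <= 3 and len(passwords) > 10:
        return "brute force"          # many guesses against a few accounts
    return "suspicious"
```

Real detection would also factor in timing, source distribution and device signals, but the volume-and-variety heuristic captures the basic distinction.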

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Symptoms of a Bot Attack

  • A high number of failed login attempts
  • Increased chargebacks and transaction disputes
  • Consecutive login attempts with different credentials from the same HTTP client
  • Unusual request activity for selected application content and data
  • Unexpected changes in website performance and metrics
  • A sudden increase in account creation rate
  • Elevated traffic for certain limited-availability goods or services
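
Several of these symptoms can be watched for programmatically by comparing current counters to a rolling baseline. A minimal sketch, where the field names and the 3x multiplier are illustrative assumptions:

```python
def bot_attack_symptoms(metrics, baseline):
    """Return the symptom flags that fire when a current metric
    exceeds three times its rolling baseline. Field names and the
    3x multiplier are illustrative assumptions."""
    checks = {
        "failed_logins": "elevated failed logins",
        "new_accounts": "account creation spike",
        "chargebacks": "chargeback spike",
        "content_requests": "unusual content request activity",
    }
    return [label for key, label in checks.items()
            if metrics.get(key, 0) > 3 * baseline.get(key, 0)]
```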

Intelligence is the Solution

Finding a solution that arms partners and service providers with the latest information related to potential attacks is critical. In my opinion, a Bot Intelligence Feed is one of the best ways to gain insight into the threats you face while identifying malicious bots in real time.

A Bot Intelligence Feed provides the latest data on newly detected IPs across bot categories such as data center bots, bad user agents, advanced persistent bots, backlink checkers, monitoring bots, aggregators, social network bots and spam bots, as well as third-party fraud intelligence directories and services that track externally flagged IPs. This ultimately gives organizations the best chance to proactively close security holes and act against emerging threat vectors.
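
How such a feed is consumed varies by vendor, but conceptually it is an indexed lookup mapping IPs to bot categories, with a policy decision per category. A sketch under an assumed line-delimited JSON feed format (the format and the block policy are hypothetical; the category names come from the list above):

```python
import json

# Hypothetical policy: which feed categories warrant outright blocking.
BLOCK_CATEGORIES = {"spam bots", "bad user-agent", "advanced persistent bots"}

def load_feed(lines):
    """Index a (hypothetical) line-delimited JSON feed by IP."""
    index = {}
    for line in lines:
        entry = json.loads(line)
        index[entry["ip"]] = entry["category"]
    return index

def action_for(ip, index):
    """Decide what to do with a client IP based on the feed."""
    category = index.get(ip)
    if category is None:
        return "allow"
    return "block" if category in BLOCK_CATEGORIES else "monitor"
```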

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud ComputingCloud SecuritySecurity

Security Pros and Perils of Serverless Architecture

March 14, 2019 — by Radware2

serverless-960x544.jpg

Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. This cloud computing model can drive cost-efficiencies, increase agility and enable organizations to focus on the essential aspects of software development. While serverless architecture offers some security advantages, trusting that a cloud provider has security fully covered can be risky.

That’s why it’s critical to understand what serverless architectures mean for cyber security.

What Serverless Means for Security

Many assume that serverless is more secure than traditional architectures. This is partly true. As the name implies, serverless architecture does not require server provisioning. Deep under the hood, however, these REST API functions are still running on a server, which in turn runs on an operating system and uses different layers of code to parse the API requests. As a result, the total attack surface becomes significantly larger.

When exploring whether and to what extent to use serverless architecture, consider the security implications.

[You may also like: Protecting Applications in a Serverless Architecture]

Security: The Pros

The good news is that responsibility for the operating system, web server and other software components and programs shifts from the application owner to the cloud provider, who should apply patch management policies across the different software components and implement hardening policies. Most common vulnerabilities should be addressed via enforcement of such security best practices. However, what would be the answer for a zero-day vulnerability in these software components? Consider Shellshock, which allowed an attacker to gain unauthorized access to a computer system.

Meanwhile, denial-of-service attacks designed to take down a server become a fool’s errand. FaaS servers are only provisioned on demand and then discarded, thereby creating a fast-moving target. Does that mean you no longer need to think about DDoS? Not so fast. While DDoS attacks may not cause a server to go down, they can drive up an organization’s tab due to an onslaught of requests. Additionally, function scale is limited and execution is time-limited, so launching a massive DDoS attack may have unpredictable impact.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Finally, the very nature of FaaS makes it more challenging for attackers to exploit a server and wait until they can access more data or do more damage. There is no persistent local storage that may be accessed by the functions. Counting on storing attack data in the server is more difficult but still possible. With the “ground” beneath them continually shifting—and containers re-generated—there are fewer opportunities to perform deeper attacks.

Security: The Perils

Now, the bad news: serverless computing doesn’t eradicate all traditional security concerns. Code is still being executed and will always be potentially vulnerable. Application-level vulnerabilities can still be exploited whether they are inherent in the FaaS infrastructure or in the developer function code.

Whether delivered as FaaS or simply built on a web infrastructure, REST API functions are even more challenging to secure than a standard web application, and they introduce security concerns of their own. API vulnerabilities are hard to monitor and do not stand out. Traditional application security assessment tools do not work well with APIs or are simply irrelevant in this case.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

When planning API security infrastructure, authentication and authorization must be taken into account. Yet these are often not addressed properly in many API security solutions. Beyond that, REST APIs are vulnerable to many of the attacks and threats that target web applications: injections via POSTed JSON and XML, insecure direct object references, access violations and abuse of APIs, buffer overflows and XML bombs, and scraping and data harvesting, among others.

The Way Forward

Serverless architectures are being adopted at a record pace. As organizations welcome dramatically improved speed, agility and cost-efficiency, they must also think through how they will adapt their security. Consider the following:

  • API gateway: Functions process REST API calls from client-side applications, accessing your code with unpredictable inputs. An API gateway can enforce JSON and XML validity checks. However, not all API gateways support schema and structure validation, especially when it comes to JSON. Each function deployed must be properly secured. Additionally, API gateways can serve as the authentication tier, which is critically important when it comes to REST APIs.
  • Function permissions: The function is essentially the execution unit. Restrict functions’ permissions to the minimum required and do not use generic permissions.
  • Abstraction through logical tiers: When a function calls another function—each applying its own data manipulation—the attack becomes more challenging.
  • Encryption: Data at rest is still accessible. FaaS becomes irrelevant when an attacker gains access to a database. Data needs to be adequately protected and encryption remains one of the recommended approaches regardless of the architecture it is housed in.
  • Web application firewall: Enterprise-grade WAFs apply dozens of protection measures on both ingress and egress traffic. Traffic is parsed to detect protocol manipulations, which may result in unexpected function behavior. Client-side inputs are validated and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
  • IoT botnet protection: To avoid the significant cost implications a DDoS attack may have on a serverless architecture and the data harvesting risks involved with scraping activity, consider behavioral analysis tools and IoT botnet solutions.
  • Monitoring function activity and data access: Abnormal function behavior, unexpected access to data, unreasonable traffic flow and other abnormal scenarios must be tracked and analyzed.
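
To make the first recommendation concrete, here is a minimal sketch of the JSON validity and structure checks an API gateway can enforce before a request ever reaches a function. The schema representation is a simplified stand-in for a real gateway's schema language, not any particular product's syntax:

```python
import json

# Simplified stand-in for a gateway schema: required field -> expected type.
SCHEMA = {"user_id": int, "action": str}

def validate_request(raw_body, schema=SCHEMA):
    """Reject malformed JSON, missing fields, wrong types and
    unexpected fields before the payload reaches function code."""
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        return False, "malformed JSON"
    if not isinstance(body, dict):
        return False, "expected JSON object"
    for field, ftype in schema.items():
        if field not in body:
            return False, f"missing field: {field}"
        if not isinstance(body[field], ftype):
            return False, f"wrong type for {field}"
    if set(body) - set(schema):
        return False, "unexpected fields"  # reject unvalidated input
    return True, "ok"
```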

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application SecurityBotnets

Will We Ever See the End of Account Theft?

March 12, 2019 — by David Hobbs0

captcha-960x640.jpg

There’s an 87-gigabyte file containing 773 million unique email addresses and passwords, called “Collection #1,” being sold on online forums today. We know that many website users reuse the same passwords all over the internet; even after all these years of data breaches, account takeovers and thefts, user behavior stays the same. Most people want the least complex means possible to use a website.

So, what does this mean for businesses?

Anywhere you have applications guarded with username / password mechanisms, there’s going to be credential stuffing attacks, courtesy of botnets.  A modern botnet is a distributed network of computers around the globe that can perform sophisticated tasks and is often comprised of compromised computers belonging to other people. Essentially, these botnets are looking to steal the sand from the beach, one grain at a time, and they are never going to stop. If anything, the levels of sophistication of the exploitation methods have grown exponentially.

Today, a Web Application Firewall (WAF) alone is not enough to fight botnets. WAFs can do some of the job, but today’s botnets are very sophisticated and can mimic real human behaviors. Many companies relied on CAPTCHA as their first line of defense, but it’s no longer sufficient to stop bots. In fact, there are now browser plugins to break CAPTCHA.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

Case in point: In 2016 at BlackHat Asia, some presenters shared that they were 98% successful at breaking these mechanisms. 98%! We, as humans, are probably nowhere near that success rate.  Personally, I’m likely at 70-80%, depending on what words (and backwards letters!) CAPTCHA presents while I’m rushing to get my work done. Even with picture CAPTCHA, I pass maybe 80% of my initial attempts; I can’t ever get those “select the edges of street signs” traps! So, what if bots are successful 98% of the time and humans only average 70%?

CAPTCHA Alone Won’t Save You

If your strategy to stop bots is flawed and you rely on CAPTCHA alone, what are some of the repercussions you may encounter? First, your web analytics will be severely flawed, impacting your ability to accurately gauge the real usage of your site. Second, advertising fraud from affiliate sites can run up your bill. Third, CAPTCHA-solving botnets will still be able to conduct other nefarious deeds, like manipulating inventory, scraping data, and launching attacks on your site.

[You may also like: The Big, Bad Bot Problem]

Identification of good bots and bad bots requires a dedicated solution. Some of the largest websites in the world have admitted that this is an ongoing war for them. Machine learning and deep learning technologies are the only way to stay ahead in today’s world.  If you do not have a dedicated anti-bot platform, you may be ready to start evaluating one today.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application SecurityAttack Types & VectorsSecurity

Adapting Application Security to the New World of Bots

March 7, 2019 — by Radware0

web-app-bots-960x709.jpg

In 2018, organizations reported a 10% increase in malware and bot attacks. Considering the pervasiveness (70%) of these types of attacks reported in 2017, this uptick is likely having a big impact on organizations globally. Compounding the issue is the fact that the majority of bots are actually leveraged for good intentions, not malicious ones. As a result, it is becoming increasingly difficult for organizations to identify the difference between the two, according to Radware’s Web Application Security in a Digitally Connected World report.

Bots are automated programs that run independently to perform a series of specific tasks, for example, collecting data. Sophisticated bots can handle complicated interactive situations. More advanced programs feature self-learning capabilities that can address automated threats against traditional security models.

Positive Impact: Business Acceleration

Automated software applications can streamline processes and positively impact overall business performance. They replace tedious human tasks and speed up processes that depend on large volumes of information, thus contributing to overall business efficiency and agility.

Good bots include:

  • Crawlers — are used by search engines and contribute to SEO and SEM efforts
  • Chatbots — automate and extend customer service and first response
  • Fetchers — collect data from multiple locations (for instance, live sporting events)
  • Pricers — compare pricing information from different services
  • Traders — are used in commercial systems to find the best quote or rate for a transaction
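
Because bad bots routinely impersonate the good ones by spoofing their user-agent string, a common way to confirm a crawler's identity is a reverse DNS lookup on the client IP, a check that the hostname belongs to the engine's domain, then a forward lookup that must map back to the same IP. A sketch with resolver functions injected so it runs offline; the Googlebot domain suffixes reflect Google's documented verification scheme:

```python
def is_verified_crawler(ip, reverse_dns, forward_dns,
                        allowed_suffixes=(".googlebot.com", ".google.com")):
    """Verify a claimed search crawler via reverse-then-forward DNS.

    reverse_dns / forward_dns are injected lookup callables so the
    sketch is testable without network access.
    """
    host = reverse_dns(ip)          # e.g. "crawl-66-249-66-1.googlebot.com"
    if not host or not host.endswith(allowed_suffixes):
        return False
    return forward_dns(host) == ip  # forward-confirm to defeat PTR spoofing
```

In production the resolvers would wrap real DNS lookups (and cache results), but the reverse-then-forward confirmation is the essential step.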

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Negative Impact: Security Risks

The Open Web Application Security Project (OWASP) lists 21 automated threats to applications that can be grouped together by business impacts:

  • Scraping and Data Theft — Bots try to access restricted areas in web applications to get a hold of sensitive data such as access credentials, payment information and intellectual property. One method of collecting such information is called web scraping. A common example of a web-scraping attack is against e-commerce sites, where bots quickly hold or even fully clear the inventory.
  • Performance — Bots can impact the availability of a website, bringing it to a complete or partial denial-of-service state. The consumption of resources such as bandwidth or server CPU immediately leads to a deterioration in the customer experience, lower conversions and a bad image. Attacks can be large and volumetric (DDoS) or not (low and slow, buffer overflow).
  • Poisoning Analytics — When a significant portion of a website’s visitors are fictitious, expect biased figures such as fraudulent links. Compounding this issue is the fact that third-party tools designed to monitor website traffic often have difficulty filtering bot traffic.
  • Fraud and Account Takeover — With access to leaked databases such as Yahoo and LinkedIn, hackers use bots to run through usernames and passwords to gain access to accounts. Then they can access restricted files, inject scripts or make unauthorized transactions.
  • Spammers and Malware Downloaders — Malicious bots constantly target mobile and web applications. Using sophisticated techniques like spoofing their IPs, mimicking user behavior (keystrokes, mouse movements), abusing open-source tools (PhantomJS) and headless browsers, bots bypass CAPTCHA, challenges and other security heuristics.

[You may also like: The Big, Bad Bot Problem]

Blocking Automated Threats

Gawky bot attacks against websites are easy to block with IP- and reputation-based signatures and rules. However, because of the increase in sophistication and frequency of attacks, it is important to be able to uniquely identify the attacking machine. This process is referred to as device fingerprinting. The process should be IP agnostic yet unique enough to be acted upon with confidence. At times, resourceful attacking sources may actively try to manipulate the fingerprint extracted by the web tool, so it should also be resistant to client-side manipulation.
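
Conceptually, a device fingerprint hashes many client-side attributes into one stable, IP-agnostic identifier. A toy sketch (the attribute list is a small illustrative subset; a real implementation collects far more signals and must defend them against the client-side tampering noted above):

```python
import hashlib

def device_fingerprint(attrs):
    """Combine client-side attributes into a stable, IP-agnostic
    identifier. The attribute set is an illustrative subset."""
    keys = ("user_agent", "screen", "timezone", "fonts", "canvas_hash")
    material = "|".join(str(attrs.get(k, "")) for k in keys)
    return hashlib.sha256(material.encode()).hexdigest()[:16]
```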

 

Web client fingerprint technology introduces significant value in the context of automated attacks, such as web scraping; Brute Force and advanced availability threats, such as HTTP Dynamic Flood; and low and slow attacks, where the correlation across multiple sessions is essential for proper detection and mitigation.

For each fingerprint-based, uniquely identified source, a historical track record is stored with all security violations, activity records and application session flows. Each abnormal behavior is registered and scored. Violation examples include SQL injection, suspicious session flow and high page access rate. Once a threshold is reached, the source with the marked fingerprint will not be allowed to access the secured application.
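
The scoring-and-threshold logic described here can be sketched as a running score per fingerprint with a weight per violation type. The weights and threshold below are invented for illustration, not taken from any product:

```python
from collections import defaultdict

# Illustrative violation weights and block threshold (assumptions).
WEIGHTS = {"sql_injection": 50, "suspicious_session_flow": 20,
           "high_page_access_rate": 10}
BLOCK_THRESHOLD = 60

class SourceTracker:
    """Accumulate a violation score per device fingerprint and block
    the source once its score crosses the threshold."""
    def __init__(self):
        self.scores = defaultdict(int)

    def register(self, fingerprint, violation):
        self.scores[fingerprint] += WEIGHTS.get(violation, 5)

    def is_blocked(self, fingerprint):
        return self.scores[fingerprint] >= BLOCK_THRESHOLD
```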

[You may also like: IoT Expands the Botnet Universe]

Taking the Good with the Bad

Ultimately, understanding and managing bots isn’t about crafting a strategy driven by a perceived negative attitude toward bots because, as we’ve explained, bots serve many useful purposes for propelling the business forward. Rather, it’s about equipping your organization to act as a digital detective to mitigate malicious traffic without adversely impacting legitimate traffic.

Organizations need to embrace technological advancements that yield better business performance while integrating the necessary security measures to guard their customer data and experience.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Types & VectorsBotnetsSecurity

IoT Expands the Botnet Universe

March 6, 2019 — by Radware0

AdobeStock_175553664-960x607.jpg

In 2018, we witnessed the dramatic growth of IoT devices and a corresponding increase in the number of botnets and cyberattacks. Because IoT devices are always-on, rarely monitored and generally use off-the-shelf default passwords, they are low-hanging fruit for hackers looking for easy ways to build an army of malicious attackers. Every IoT device added to the network grows the hacker’s tool set.

Botnets comprised of vulnerable IoT devices, combined with widely available DDoS-as-a-Service tools and anonymous payment mechanisms, have pushed denial-of-service attacks to record-breaking volumes. At the same time, new domains such as cryptomining and credentials theft offer more opportunities for hacktivism.

Let’s look at some of the botnets and threats discovered and identified by Radware’s deception network in 2018.

JenX

A new botnet tried to deliver its dangerous payload to Radware’s newly deployed IoT honeypots. The honeypots registered multiple exploit attempts from distinct servers, all located in popular cloud hosting providers based in Europe. The botnet creators intended to sell 290Gbps DDoS attacks for only $20. Further investigation showed that the new bot used an atypical central scanning method through a handful of Linux virtual private servers (VPS) used to scan, exploit and load malware onto unsuspecting IoT victims. At the same time, the deception network also detected SYN scans originating from each of the exploited servers, indicating that they were first performing a mass scan before attempting to exploit the IoT devices, ensuring that ports 52869 and 37215 were open.

[You may also like: IoT Botnets on the Rise]

ADB Miner

A new piece of malware that takes advantage of Android-based devices exposing debug capabilities to the internet. It leverages scanning code from Mirai. When a remote host exposes its Android Debug Bridge (ADB) control port, any Android emulator on the internet has full install, start, reboot and root shell access without authentication.

Part of the malware includes Monero cryptocurrency miners (xmrig binaries), which are executing on the infected devices. Radware’s automated trend analysis algorithms detected a significant increase in activity against port 5555, both in the number of hits and in the number of distinct IPs. Port 5555 is one of the known ports used by TR069/064 exploits, such as those witnessed during the Mirai-based attack targeting Deutsche Telekom routers in November 2016. In this case, the payload delivered to the port was not SOAP/HTTP, but rather the ADB remote debugging protocol.

Satori.Dasan

Less than a week after ADB Miner, a third new botnet variant triggered a trend alert due to a significant increase in malicious activity over port 8080. Radware detected a jump in the infecting IPs from around 200 unique IPs per day to over 2,000 malicious unique IPs per day. Further investigation by the research team uncovered a new variant of the Satori botnet capable of aggressive scanning and exploitation of CVE-2017-18046 — Dasan Unauthenticated Remote Code Execution.

[You may also like: New Satori Botnet Variant Enslaves Thousands of Dasan WiFi Routers]

The rapidly growing botnet referred to as “Satori.Dasan” utilizes a highly effective wormlike scanning mechanism, where every infected host looks for more hosts to infect by performing aggressive scanning of random IP addresses and exclusively targeting port 8080. Once a suitable target is located, the infected bot notifies a C2 server, which immediately attempts to infect the new victim.

Memcached DDoS Attacks

A few weeks later, Radware’s system provided an alert on yet another new trend — an increase in activity on UDP port 11211. This trend notification correlated with several organizations publicly disclosing a trend in UDP-amplified DDoS attacks utilizing Memcached servers configured to accommodate UDP (in addition to the default TCP) without limitation. After the attack, CVE-2018-1000115 was published to patch this vulnerability.

Memcached is by design an internal service that allows unauthenticated access and requires no verification of source or identity. A Memcached-amplified DDoS attack makes use of legitimate third-party Memcached servers to send attack traffic to a targeted victim by spoofing the request packet’s source IP with that of the victim’s IP. Memcached provided record-breaking amplification ratios of up to 52,000x.
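
The danger of those amplification ratios is easiest to see with back-of-the-envelope arithmetic. The request and response sizes below are illustrative, not measured values:

```python
def amplification_factor(request_bytes, response_bytes):
    """Response size divided by the spoofed request size."""
    return response_bytes / request_bytes

# A ~15-byte spoofed UDP request that triggers a 750 KB cached response:
factor = amplification_factor(15, 750_000)
# At that ratio, a modest 100 Mbps attacker uplink could in theory
# direct 100 Mbps * factor = 5,000 Gbps of reflected traffic at the victim.
victim_gbps = 100 * factor / 1000
```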

[You may also like: Entering into the 1Tbps Era]

Hajime Expands to MikroTik RouterOS

Radware’s alert algorithms detected a huge spike in activity for TCP port 8291. After near-zero activity on that port for months, the deception network registered over 10,000 unique IPs hitting port 8291 in a single day. Port 8291 is related to a then-new botnet that exploits vulnerabilities in the MikroTik RouterOS operating system, allowing attackers to remotely execute code on the device.

The spreading mechanism went beyond port 8291, which is used almost exclusively by MikroTik, and rapidly infected other devices such as AirOS/Ubiquiti via ports 80, 81, 82, 8080, 8081, 8082, 8089, 8181 and 8880, utilizing known exploits and password-cracking attempts to speed up the propagation.

Satori IoT Botnet Worm Variant

Another interesting trend alert occurred on Saturday, June 15. Radware’s automated algorithms alerted to an upsurge of malicious activity scanning and infecting a variety of IoT devices by taking advantage of recently discovered exploits. The previously unseen payload was delivered by the infamous Satori botnet. The number of attack sources increased exponentially, spread all over the world, exceeding 2,500 attackers in a 24-hour period.

[You may also like: A Quick History of IoT Botnets]

Hakai

Radware’s automation algorithm monitored the rise of Hakai, which was first recorded in July. Hakai is a botnet discovered by NewSky Security that, after lying dormant for a while, started to infect D-Link, Huawei and Realtek routers. In addition to exploiting known vulnerabilities to infect the routers, it used a Telnet scanner to enslave Telnet-enabled devices with default credentials.

DemonBot

A new stray QBot variant going by the name of DemonBot joined the worldwide hunt for yellow elephant — Hadoop cluster — with the intention of conscripting them into an active DDoS botnet. Hadoop clusters are typically very capable, stable platforms that can individually account for much larger volumes of DDoS traffic compared to IoT devices. DemonBot extends the traditional abuse of IoT platforms for DDoS by adding very capable big data cloud servers. The DDoS attack vectors supported by DemonBot are STD, UDP and TCP floods.

Using a Hadoop YARN (Yet-Another-Resource-Negotiator) unauthenticated remote command execution, DemonBot spreads only via central servers and does not expose the wormlike behavior exhibited by Mirai-based bots. By the end of October, Radware tracked over 70 active exploit servers that were spreading malware and exploiting YARN servers at an aggregated rate of over one million exploits per day.

[You may also like: Hadoop YARN: An Assessment of the Attack Surface and Its Exploits]

YARN allows multiple data processing engines to handle data stored in a single Hadoop platform. DemonBot took advantage of YARN’s REST API publicly exposed by over 1,000 cloud servers worldwide. DemonBot effectively harnesses the Hadoop clusters in order to generate a DDoS botnet powered by cloud infrastructure.

Always on the Hunt

In 2018, Radware’s deception network launched its first automated trend-detection steps and proved its ability to identify emerging threats early on and to distribute valuable data to the Radware mitigation devices, enabling them to effectively mitigate infections, scanners and attackers. One of the most difficult aspects in automated anomaly detection is to filter out the massive noise and identify the trends that indicate real issues.
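
One simple way to separate real trends from that noise is a z-score test on the daily count of unique source IPs per port, with a floor that suppresses alerts on near-zero activity. A conceptual sketch of the idea (the thresholds are illustrative and this is not Radware's actual algorithm):

```python
import statistics

def is_trend_anomaly(history, today, z_threshold=3.0, floor=100):
    """Flag a spike in daily unique source IPs for a port.

    history: recent daily counts; today: the current day's count.
    `floor` suppresses alerts on near-zero noise; thresholds are
    illustrative assumptions.
    """
    if len(history) < 7 or today < floor:
        return False
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    return (today - mean) / stdev > z_threshold
```

For example, a port that normally sees around 200 unique IPs per day jumping to 2,000 (as with the Satori.Dasan activity on port 8080) would trip the alert, while routine fluctuation would not.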

In 2019, the deception network will continue to evolve and learn and expand its horizons, taking the next steps in real-time automated detection and mitigation.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now