
Security

The Intersections between Cybersecurity and Diversity

March 20, 2019 — by Kevin Harris


Cybersecurity and diversity are high-value topics that are most often discussed in isolation. Both topics resonate with individuals and organizations alike.

However, the intersections between cybersecurity and diversity are often overlooked. As nations and organizations seek to protect their critical infrastructures, it’s important to cultivate relationships between the two areas. Diversity is no longer only a social awareness and morality initiative; it is a core element of defending critical infrastructures.

Communities Need to Play a Greater Role in Cybersecurity

Technology careers typically pay more than other careers, providing a pathway to a quality lifestyle. With multiple entry points into the technology field — including degrees, apprenticeships and industry certifications — there are ways that varying communities can take part in technology careers, especially in cybersecurity. For instance, communities can improve cybersecurity education for women, minorities and home users.

Workforce Gaps Involving Women and Minorities Weaken Cybersecurity Defenses

Limited awareness and exposure to cybersecurity education often creates an opportunity gap for minorities and women. Failing to incorporate underserved populations limits the talent and size of our cybersecurity workforce. Without an all-inclusive cyber workforce, our critical infrastructure will have a talent gap, introducing additional system vulnerabilities.

To rectify this problem, communities must implement permanent efforts to ensure that children attending schools in underserved districts have access to technology and courses. That will better prepare them to become cyber workers.

[You may also like: Battling Cyber Risks with Intelligent Automation]

This infusion of technology talent helps to protect our nation’s vital digital assets. Organizations must make their recruitment and retention practices more inclusive. Ideally, they should provide opportunities to individuals who are either already trained or willing to undergo training, giving them a pathway to a successful career.

Additionally, higher education institutions should find ways to ensure that minorities and women have the support they need as they progress through their technology degrees. Universities and colleges can also offer cybersecurity faculty and mentors who can help these groups prepare for meaningful careers.

Cybersecurity Training Must Be Improved for Home Users

Another intersection of cybersecurity and diversity is at the user level. Most cybersecurity discussions center on the protection of government or corporate systems. Organizations spend significant portions of their budgets to prepare for and protect against cyberattacks.

Unfortunately, home users are often left out of such conversations; they are not considered part of any holistic cyber defense plan. With the large number of home users with multiple devices, the vulnerabilities of home systems provide hackers with easy attack opportunities.

[You may also like: The Costs of Cyberattacks Are Real]

Consequently, attackers access and compromise home devices, which allows them to attack other systems. In addition, these hackers can mask their true location and increase their computing power. They can then carry out their attacks more efficiently.

Compromising an individual’s personal device presents additional opportunities for attackers to access that person’s credentials as well as other sensitive workplace data. Strong organizational policies should therefore dictate what information can be accessed remotely.

To increase home users’ threat awareness level, organizations should develop training programs as a part of community involvement initiatives. Vendors should strengthen default security settings for home users and ensure that home security protections are affordable and not difficult to configure.

[You may also like: Personal Security Hygiene]

Organizational Cultures Need to Emphasize that All Employees are Cyber Defenders

Diversity and cybersecurity also intersect at the organizational culture level. Regardless of whether or not organizations have an information systems security department, companies must foster the right type of security-minded workplace culture. All employees should be aware that they are integral components in protecting the organization’s critical digital assets.

Educational institutions can support this effort by incorporating cyber awareness training across disciplines. This will give all graduates — regardless of their degrees — some exposure to cyber risks and their role in protecting digital assets.

[You may also like: 5 Ways Malware Defeats Cyber Defenses & What You Can Do About It]

Cybersecurity and Diversity Should Work Together, Not in Silos

Cybersecurity and diversity will continue to be important topics. The focus, however, should be on discussing the importance of their mutual support, rather than functioning in two separate silos. Improving our cyber defenses requires the best of all segments of our society, which includes minorities, women and home users.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Application Security, Attack Types & Vectors, Security

Bots 101: This is Why We Can’t Have Nice Things

March 19, 2019 — by Daniel Smith


In our industry, the term bot applies to software applications designed to perform an automated task at a high rate of speed. Typically, I use bots at Radware to aggregate data for intelligence feeds or to automate repetitive tasks. I also spend a vast majority of my time researching and tracking emerging bots that were designed and deployed in the wild with bad intentions.

As I’ve previously discussed, there are generally two different types of bots, good and bad. Some of the good bots include Search Bots, Crawlers and Feed Fetchers that are designed to locate and index your website appropriately so it can become visible online. Without the aid of these bots, most small and medium-sized businesses wouldn’t be able to establish an authority online and attract visitors to their site.

[You may also like: The Big, Bad Bot Problem]

On the dark side, criminals use the same technology to create bots for illicit and profitable activities, such as scraping content from one website and selling it to another. These malicious bots can also be leveraged to take over accounts, generate fake reviews, commit ad fraud and stress your web applications. Malicious bots have even been used to create fake social media accounts and influence elections.

With close to half of all internet traffic today being non-human, bad bots represent a significant risk for businesses, regardless of industry or channel.

As the saying goes, this is why we can’t have nice things.

Targeted Industries

If a malicious bot targets an online business, it will be impacted in one way or another, whether in website performance, sales conversions, competitive advantage, analytics or user experience. The good news is that organizations can take action against bot activity in real time, but first they need to understand their own risk before considering a solution.

[You may also like: Credential Stuffing Campaign Targets Financial Services]

  • E-Commerce – The e-commerce industry faces bot attacks that include account takeovers, scraping, inventory exhaustion, scalping, carding, skewed analytics, application DoS, Ad fraud, and account creation.
  • Media – Digital publishers are vulnerable to automated attacks such as Ad fraud, scraping, skewed analytics, and form spam.
  • Travel – The travel industry mainly deals with scraping attacks but can suffer from inventory exhaustion, carding and application DoS as well.
  • Social Networks – Social platforms deal with automated bot attacks such as account takeovers, account creation, and application DoS.
  • Ad Networks – Bots that create Sophisticated Invalid Traffic (SIVT) target ad networks for Ad fraud activity such as fraudulent clicks and impression performance.
  • Financial Institutions – Banking, financial and insurance industries are all high-value targets for bots that leverage account takeovers, application DoS or content scraping.

Types of Application Attacks

It’s becoming increasingly difficult for conventional security solutions to track and report on sophisticated bots that continuously change their behavior, obfuscate their identity and utilize different attack vectors across industries. Once you begin to understand the risk posed by malicious automated bots, you can start to focus on the attack vectors you may face as a result of their activity.

[You may also like: Adapting Application Security to the New World of Bots]

  • Account takeover – Account takeovers include credential stuffing, password spraying, and brute force attacks that are used to gain unauthorized access to a targeted account. Credential stuffing and password spraying are two popular techniques used today. Once hackers gain access to an account, they can begin additional stages of infection, data exfiltration or fraud.
  • Scraping – Scraping is the process of extracting data or information from a website and publishing it elsewhere. Content, price and inventory scraping is also used to gain a competitive advantage. These scraper bots crawl your web pages for specific information about your products. Typically, scrapers steal the entire content of websites or mobile applications and publish it to gain traffic.
  • Inventory exhaustion – Inventory exhaustion is when a bot is used to add hundreds of items to a cart and later, abandon them to prevent real shoppers from buying the products.
  • Inventory scalping – Hackers deploy retail bots to gain an advantage to buy goods and tickets during a flash sale, and then resell them later at a much higher price.
  • Carding – Carders deploy bots on checkout pages to validate stolen card details and to crack gift cards.
  • Skewed analytics – Automated invalid traffic directed at your e-commerce portal can skew metrics and mislead decision making when applied to advertisement budgets and other business decisions. Bots pollute metrics, disrupt funnel analysis, and inhibit KPI tracking.
  • Application DoS – Application DoS attacks slow down e-commerce portals by exhausting web server resources, 3rd party APIs, inventory databases and other critical resources to the point that they are unavailable for legitimate users.
  • Ad fraud – Bad bots are used to generate invalid traffic designed to create false impressions and generate illegitimate clicks on websites and mobile apps.
  • Account creation – Bots are used to create fake accounts on a massive scale for content spamming, SEO and skewing analytics.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Symptoms of a Bot Attack

  • A high number of failed login attempts
  • Increased chargebacks and transaction disputes
  • Consecutive login attempts with different credentials from the same HTTP client
  • Unusual request activity for selected application content and data
  • Unexpected changes in website performance and metrics
  • A sudden increase in account creation rate
  • Elevated traffic for certain limited-availability goods or services
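Several of these symptoms can be surfaced directly from access logs. As a minimal sketch (the log format here is an invented tuple, not any specific product's), flagging the first symptom, a spike in failed login attempts per client, might look like:

```python
from collections import Counter

def flag_failed_login_spikes(events, threshold=20):
    """Return client IPs whose failed-login count meets the threshold.

    `events` is an iterable of (client_ip, outcome) tuples.
    """
    fails = Counter(ip for ip, outcome in events if outcome == "fail")
    return {ip: count for ip, count in fails.items() if count >= threshold}

events = [("203.0.113.5", "fail")] * 25 + [("198.51.100.7", "ok")] * 5
print(flag_failed_login_spikes(events))  # {'203.0.113.5': 25}
```

In practice the same counting approach extends to the other symptoms: account creation rate, chargeback volume, or requests for limited-availability goods.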

Intelligence is the Solution

Finding a solution that arms partners and service providers with the latest information related to potential attacks is critical. In my opinion, a Bot Intelligence Feed is one of the best ways to gain insight into the threats you face while identifying malicious bots in real time.

A Bot Intelligence Feed provides the latest data on newly detected IPs for various bot categories, such as data center bots, bad user agents, advanced persistent bots, backlink checkers, monitoring bots, aggregators, social network bots and spam bots. It can also include 3rd party fraud intelligence directories and services that keep track of externally flagged IPs. This ultimately gives organizations the best chance to proactively close security holes and take action against emerging threat vectors.
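As an illustration only (the feed schema and category names below are invented; real intelligence feeds define their own formats), consuming such a feed into a blocklist could look like:

```python
def build_blocklist(feed_entries, categories=frozenset({"spam_bots", "bad_user_agent"})):
    """Collect the IPs of feed entries whose category we have chosen to block."""
    return {entry["ip"] for entry in feed_entries if entry["category"] in categories}

feed = [
    {"ip": "192.0.2.10", "category": "spam_bots"},
    {"ip": "192.0.2.11", "category": "monitoring_bots"},  # benign, not blocked
]
blocklist = build_blocklist(feed)
print("192.0.2.10" in blocklist)  # True
```

The point of the category filter is that not every feed entry should be blocked; monitoring bots and aggregators may be traffic you want to keep.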

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud Computing, Cloud Security, Security

Security Pros and Perils of Serverless Architecture

March 14, 2019 — by Radware


Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. This cloud computing model can drive cost-efficiencies, increase agility and enable organizations to focus on the essential aspects of software development. While serverless architecture offers some security advantages, trusting that a cloud provider has security fully covered can be risky.

That’s why it’s critical to understand what serverless architectures mean for cyber security.

What Serverless Means for Security

Many assume that serverless is more secure than traditional architectures. This is partly true. As the name implies, serverless architecture does not require server provisioning. Deep under the hood, however, these REST API functions are still running on a server, which in turn runs on an operating system and uses different layers of code to parse the API requests. As a result, the total attack surface becomes significantly larger.

When exploring whether and to what extent to use serverless architecture, consider the security implications.

[You may also like: Protecting Applications in a Serverless Architecture]

Security: The Pros

The good news is that responsibility for the operating system, web server and other software components and programs shifts from the application owner to the cloud provider, who should apply patch management policies across the different software components and implement hardening policies. Most common vulnerabilities should be addressed via enforcement of such security best practices. However, what would be the answer for a zero-day vulnerability in these software components? Consider Shellshock, which allowed an attacker to gain unauthorized access to a computer system.

Meanwhile, denial-of-service attacks designed to take down a server become a fool’s errand. FaaS servers are only provisioned on demand and then discarded, creating a fast-moving target. Does that mean you no longer need to think about DDoS? Not so fast. While DDoS attacks may not cause a server to go down, they can drive up an organization’s tab due to an onslaught of requests. Additionally, functions’ scale is limited and their execution time is capped, so launching a massive DDoS attack may have unpredictable impact.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Finally, the very nature of FaaS makes it more challenging for attackers to exploit a server and wait until they can access more data or do more damage. There is no persistent local storage that may be accessed by the functions. Counting on storing attack data in the server is more difficult but still possible. With the “ground” beneath them continually shifting—and containers re-generated—there are fewer opportunities to perform deeper attacks.

Security: The Perils

Now, the bad news: serverless computing doesn’t eradicate all traditional security concerns. Code is still being executed and will always be potentially vulnerable. Application-level vulnerabilities can still be exploited whether they are inherent in the FaaS infrastructure or in the developer function code.

Whether delivered as FaaS or just built on a web infrastructure, REST API functions are even more challenging to secure than a standard web application, and they introduce security concerns of their own. API vulnerabilities are hard to monitor and do not stand out. Traditional application security assessment tools do not work well with APIs or are simply irrelevant in this case.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

When planning for API security infrastructure, authentication and authorization must be taken into account. Yet these are often not addressed properly in many API security solutions. Beyond that, REST APIs are vulnerable to many attacks and threats against web applications: POSTed JSONs and XMLs injections, insecure direct object references, access violations and abuse of APIs, buffer overflow and XML bombs, scraping and data harvesting, among others.

The Way Forward

Serverless architectures are being adopted at a record pace. As organizations welcome dramatically improved speed, agility and cost-efficiency, they must also think through how they will adapt their security. Consider the following:

  • API gateway: Functions are processing REST API calls from client-side applications accessing your code with unpredicted inputs. An API Gateway can enforce JSON and XML validity checks. However, not all API Gateways support schema and structure validation, especially when it has to do with JSON. Each function deployed must be properly secured. Additionally, API Gateways can serve as the authentication tier which is critically important when it comes to REST APIs.
  • Function permissions: The function is essentially the execution unit. Restrict functions’ permissions to the minimum required and do not use generic permissions.
  • Abstraction through logical tiers: When a function calls another function—each applying its own data manipulation—the attack becomes more challenging.
  • Encryption: Data at rest is still accessible. FaaS becomes irrelevant when an attacker gains access to a database. Data needs to be adequately protected and encryption remains one of the recommended approaches regardless of the architecture it is housed in.
  • Web application firewall: Enterprise-grade WAFs apply dozens of protection measures on both ingress and egress traffic. Traffic is parsed to detect protocol manipulations, which may result in unexpected function behavior. Client-side inputs are validated, and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
  • IoT botnet protection: To avoid the significant cost implications a DDoS attack may have on a serverless architecture and the data harvesting risks involved with scraping activity, consider behavioral analysis tools and IoT botnet solutions.
  • Monitoring function activity and data access: Abnormal function behavior, expected access to data, non-reasonable traffic flow and other abnormal scenarios must be tracked and analyzed.
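To make the first bullet concrete, here is a stdlib-only sketch of the kind of JSON validity and structure check an API gateway might enforce before a function runs (the field names and expected types are purely illustrative):

```python
import json

# Illustrative schema: the fields a hypothetical function expects
EXPECTED_FIELDS = {"user_id": int, "action": str}

def validate_payload(raw):
    """Reject malformed JSON and payloads with missing, extra or mistyped fields."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (isinstance(data, dict)
            and set(data) == set(EXPECTED_FIELDS)
            and all(isinstance(data[k], t) for k, t in EXPECTED_FIELDS.items()))

print(validate_payload('{"user_id": 7, "action": "read"}'))   # True
print(validate_payload('{"user_id": "7; DROP TABLE users"}')) # False
```

Rejecting unexpected structure at the gateway means malformed or hostile inputs never reach the function code at all.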

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Cloud Security

Are Your DevOps Your Biggest Security Risks?

March 13, 2019 — by Eyal Arazi


We have all heard the horror tales: a negligent (or uninformed) developer inadvertently exposes AWS API keys online, only for hackers to find those keys, penetrate the account and cause massive damage.

But how common, in practice, are these breaches? Are they a legitimate threat, or just an urban legend for sleep-deprived IT staff? And what, if anything, can be done against such exposure?

The Problem of API Access Key Exposure

The problem of AWS API access key exposure refers to incidents in which developers’ API access keys to AWS accounts and cloud resources are inadvertently exposed and found by hackers.

AWS – and most other infrastructure-as-a-service (IaaS) providers – provides direct access to tools and services via application programming interfaces (APIs). Developers leverage such APIs to write automated scripts that help them configure cloud-based resources. This helps developers and DevOps save much time in configuring cloud-hosted resources and automating the roll-out of new features and services.

[You may also like: Ensuring Data Privacy in Public Clouds]

In order to make sure that only authorized developers are able to access those resources and execute commands on them, API access keys are used to authenticate access. Only code containing authorized credentials will be able to connect and execute.

This Exposure Happens All the Time

The problem, however, is that such access keys are sometimes left in scripts or configuration files uploaded to third-party resources, such as GitHub. Hackers are fully aware of this, and run automated scans on such repositories, in order to discover unsecured keys. Once they locate such keys, hackers gain direct access to the exposed cloud environment, which they use for data theft, account takeover, and resource exploitation.
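These automated scans are not sophisticated. A simplified sketch of one (matching the publicly documented `AKIA` access-key-ID prefix; real scanners also hunt for the accompanying secret key and many other credential formats) could be:

```python
import re

# AWS access key IDs are 20 characters: the "AKIA" prefix plus 16 more
KEY_ID_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_key_ids(text):
    """Return anything in `text` that looks like an AWS access key ID."""
    return KEY_ID_PATTERN.findall(text)

# AWS's documented example key ID, not a real credential
snippet = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_exposed_key_ids(snippet))  # ['AKIAIOSFODNN7EXAMPLE']
```

Running a check like this over your own repositories before pushing is a cheap way to catch the same mistakes attackers are scanning for.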

A very common use case is for hackers to access an unsuspecting cloud account and spin up multiple computing instances in order to run crypto-mining activities. The hackers then pocket the mined cryptocurrency, while leaving the owner of the cloud account to foot the bill for the computing resources used.

[You may also like: The Rise in Cryptomining]

Examples, sadly, are abundant:

  • A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resources for crypto-mining.
  • WordPress developer Ryan Heller uploaded code to GitHub which accidentally contained a backup copy of the wp-config.php file, containing his AWS access keys. Within hours, this file was discovered by hackers, who spun up several hundred computing instances to mine cryptocurrency, resulting in $6,000 of AWS usage fees overnight.
  • A student taking a Ruby on Rails course on Udemy opened up an AWS S3 storage bucket as part of the course, and uploaded his code to GitHub as part of the course requirements. However, his code contained his AWS access keys, leading to over $3,000 of AWS charges within a day.
  • The founder of an internet startup uploaded code to GitHub containing API access keys. He realized his mistake within 5 minutes and removed those keys. However, that was enough time for automated bots to find his keys, access his account, spin up computing resources for crypto-mining and result in a $2,300 bill.
  • An npm code package was published in a code release containing access keys to the project’s S3 storage buckets.

And the list goes on and on…

The problem is so widespread that Amazon even has a dedicated support page to tell developers what to do if they inadvertently expose their access keys.

How You Can Protect Yourself

One of the main drivers of cloud migration is the agility and flexibility that it offers organizations to speed-up roll-out of new services and reduce time-to-market. However, this agility and flexibility frequently comes at a cost to security. In the name of expediency and consumer demand, developers and DevOps may sometimes not take the necessary precautions to secure their environments or access credentials.

Such exposure can happen in a multitude of ways, including accidental exposure of scripts (such as uploading to GitHub), misconfiguration of cloud resources which contain such keys, compromise of 3rd party partners who have such credentials, exposure through client-side code which contains keys, targeted spear-phishing attacks against DevOps staff, and more.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Nonetheless, there are a number of key steps you can take to secure your cloud environment against such breaches:

Assume your credentials are exposed. There’s no way around this: Securing your credentials, as much as possible, is paramount. However, since credentials can leak in a number of ways, and from a multitude of sources, you should therefore assume your credentials are already exposed, or can become exposed in the future. Adopting this mindset will help you channel your efforts not (just) to limiting this exposure to begin with, but to how to limit the damage caused to your organization should this exposure occur.

Limit Permissions. As I pointed out earlier, one of the key benefits of migrating to the cloud is the agility and flexibility that cloud environments provide when it comes to deploying computing resources. However, this agility and flexibility frequently comes at a cost to security. One such example is granting promiscuous permissions to users who shouldn’t have them. In the name of expediency, administrators frequently grant blanket permissions to users, so as to remove any hindrance to operations.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

The problem, however, is that most users never use most of the permissions they have been granted, and probably don’t need them in the first place. This leads to a gaping security hole: if any one of those users (or their access keys) should become compromised, attackers will be able to exploit those permissions to do significant damage. Therefore, limiting those permissions according to the principle of least privilege will greatly help to limit potential damage if (and when) such exposure occurs.

Early Detection is Critical. The final step is to implement measures which actively monitor user activity for any potentially malicious behavior. Such behavior can include first-time API usage, access from unusual locations, access at unusual times, suspicious communication patterns, exposure of private assets to the world, and more. Implementing detection measures which look for such malicious behavior indicators, correlate them, and alert on potentially malicious activity will help ensure that hackers are discovered promptly, before they can do any significant damage.
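The detection idea can be sketched in a few lines. This toy example (the event fields and user profiles are invented for illustration, not any vendor's format) flags first-time API usage and access from unusual locations per user:

```python
def flag_anomalies(events, known_profiles):
    """Flag events whose API action or source country is new for that user.

    `known_profiles` maps user -> {"actions": set, "countries": set}.
    """
    alerts = []
    for event in events:
        profile = known_profiles.get(event["user"],
                                     {"actions": set(), "countries": set()})
        if event["action"] not in profile["actions"]:
            alerts.append((event["user"], "first-time API usage: " + event["action"]))
        if event["country"] not in profile["countries"]:
            alerts.append((event["user"], "unusual location: " + event["country"]))
    return alerts

profiles = {"dev1": {"actions": {"ListBuckets"}, "countries": {"US"}}}
events = [{"user": "dev1", "action": "RunInstances", "country": "US"}]
print(flag_anomalies(events, profiles))  # [('dev1', 'first-time API usage: RunInstances')]
```

A real system would correlate multiple such indicators over time rather than alerting on each one in isolation, but the baseline-versus-observed comparison is the core of the approach.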

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Security, Botnets

Will We Ever See the End of Account Theft?

March 12, 2019 — by David Hobbs


There’s an 87-gigabyte file called “Collection #1,” containing 773 million unique email addresses and passwords, being sold on online forums today. We know that many users of websites reuse the same passwords all over the internet; even after all the years of data breaches, account takeovers and thefts, user behavior stays the same. Most people want the least complex means possible to use a website.

So, what does this mean for businesses?

Anywhere you have applications guarded by username/password mechanisms, there are going to be credential stuffing attacks, courtesy of botnets. A modern botnet is a distributed network of computers around the globe that can perform sophisticated tasks and is often comprised of compromised computers belonging to other people. Essentially, these botnets are looking to steal the sand from the beach, one grain at a time, and they are never going to stop. If anything, the sophistication of their exploitation methods has grown exponentially.

Today, a Web Application Firewall (WAF) alone is not enough to fight botnets. WAFs can do some of the job, but today’s botnets are very sophisticated and can mimic real human behaviors. Many companies relied on CAPTCHA as their first line of defense, but it’s no longer sufficient to stop bots. In fact, there are now browser plugins to break CAPTCHA.
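One basic layer that helps regardless of CAPTCHA is rate limiting the login endpoint. A minimal sliding-window limiter sketch (the thresholds are illustrative, not a recommendation, and real deployments key on more than just an IP) might look like:

```python
from collections import deque
import time

class LoginRateLimiter:
    """Allow at most `limit` login attempts per client within `window` seconds."""

    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.attempts = {}  # client id -> deque of attempt timestamps

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts.setdefault(client, deque())
        while q and now - q[0] > self.window:  # drop attempts outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

limiter = LoginRateLimiter(limit=3, window=60)
results = [limiter.allow("bot-ip", now=t) for t in range(5)]
print(results)  # [True, True, True, False, False]
```

Sophisticated botnets rotate source IPs precisely to evade this kind of control, which is why it is a complement to, not a substitute for, dedicated bot detection.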

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

Case in point: In 2016 at BlackHat Asia, some presenters shared that they were 98% successful at breaking these mechanisms. 98%! We, as humans, are probably nowhere near that success rate.  Personally, I’m likely at 70-80%, depending on what words (and backwards letters!) CAPTCHA presents while I’m rushing to get my work done. Even with picture CAPTCHA, I pass maybe 80% of my initial attempts; I can’t ever get those “select the edges of street signs” traps! So, what if bots are successful 98% of the time and humans only average 70%?

CAPTCHA Alone Won’t Save You

If your strategy to stop bots is flawed and you rely on CAPTCHA alone, what are some of the repercussions you may encounter? First, your web analytics will be severely flawed, impacting your ability to accurately gauge the real usage of your site. Secondly, advertising fraud can run your bill up from affiliate sites. Third, the CAPTCHA-solving botnets will still be able to conduct other nefarious deeds, like manipulate inventory, scrape data, and launch attacks on your site.

[You may also like: The Big, Bad Bot Problem]

Identification of good bots and bad bots requires a dedicated solution. Some of the largest websites in the world have admitted that this is an ongoing war for them. Machine learning and deep learning technologies are the only way to stay ahead in today’s world.  If you do not have a dedicated anti-bot platform, you may be ready to start evaluating one today.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Security, Attack Types & Vectors, Security

Adapting Application Security to the New World of Bots

March 7, 2019 — by Radware


In 2018, organizations reported a 10% increase in malware and bot attacks. Considering the pervasiveness (70%) of these types of attacks reported in 2017, this uptick is likely having a big impact on organizations globally. Compounding the issue is the fact that the majority of bots are actually leveraged for good intentions, not malicious ones. As a result, it is becoming increasingly difficult for organizations to identify the difference between the two, according to Radware’s Web Application Security in a Digitally Connected World report.

Bots are automated programs that run independently to perform a series of specific tasks, for example, collecting data. Sophisticated bots can handle complicated interactive situations. More advanced programs feature self-learning capabilities that can address automated threats against traditional security models.

Positive Impact: Business Acceleration

Automated software applications can streamline processes and positively impact overall business performance. They replace tedious human tasks and speed up processes that depend on large volumes of information, thus contributing to overall business efficiency and agility.

Good bots include:

  • Crawlers — are used by search engines and contribute to SEO and SEM efforts
  • Chatbots — automate and extend customer service and first response
  • Fetchers — collect data from multiple locations (for instance, live sporting events)
  • Pricers — compare pricing information from different services
  • Traders — are used in commercial systems to find the best quote or rate for a transaction

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Negative Impact: Security Risks

The Open Web Application Security Project (OWASP) lists 21 automated threats to applications that can be grouped together by business impacts:

  • Scraping and Data Theft — Bots try to access restricted areas in web applications to get hold of sensitive data such as access credentials, payment information and intellectual property. One method of collecting such information is called web scraping. A common example of a web-scraping attack is against e-commerce sites, where bots quickly hold or even fully clear out the inventory.
  • Performance — Bots can impact the availability of a website, bringing it to a complete or partial denial-of-service state. The consumption of resources such as bandwidth or server CPU immediately leads to a deterioration in the customer experience, lower conversions and a bad image. Attacks can be large and volumetric (DDoS) or not (low and slow, buffer overflow).
  • Poisoning Analytics — When a significant portion of a website’s visitors are fictitious, expect biased figures such as fraudulent links. Compounding this issue is the fact that third-party tools designed to monitor website traffic often have difficulty filtering bot traffic.
  • Fraud and Account Takeover — With access to leaked databases such as Yahoo and LinkedIn, hackers use bots to run through usernames and passwords to gain access to accounts. Then they can access restricted files, inject scripts or make unauthorized transactions.
  • Spammers and Malware Downloaders — Malicious bots constantly target mobile and web applications. Using sophisticated techniques like spoofing their IPs, mimicking user behavior (keystrokes, mouse movements), abusing open-source tools (PhantomJS) and headless browsers, bots bypass CAPTCHA, challenges and other security heuristics.
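
Several of these threats, account takeover in particular, surface as bursts of failed logins from a single source. Below is a minimal sketch of a sliding-window detector; the window size, threshold and function names are illustrative, not tied to any product.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # sliding window length
MAX_FAILURES = 20      # illustrative threshold

_failures = defaultdict(deque)   # source -> timestamps of failed logins

def record_failed_login(source: str, now: float) -> bool:
    """Record a failed login and return True if the source exceeds the
    failure threshold inside the sliding window (likely credential stuffing)."""
    q = _failures[source]
    q.append(now)
    # Drop timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES

# A bot replaying a leaked credential list trips the threshold within seconds.
assert not record_failed_login("demo", 0.0)
hits = [record_failed_login("demo", t * 0.5) for t in range(1, 40)]
assert hits[-1] and not hits[0]
```

In practice the source key would be the device fingerprint rather than the IP, for the reasons discussed below.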

[You may also like: The Big, Bad Bot Problem]

Blocking Automated Threats

Crude bot attacks against websites are easy to block with IP- and reputation-based signatures and rules. However, because of the increase in the sophistication and frequency of attacks, it is important to be able to uniquely identify the attacking machine. This process is referred to as device fingerprinting. The process should be IP agnostic, yet unique enough to be acted upon with confidence. At times, resourceful attackers may actively try to manipulate the fingerprint extracted by the web tool, so it should also be resistant to client-side manipulation.
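
The server-side half of such fingerprinting can be sketched as a stable hash over client-reported attributes. The attribute names below are illustrative; a real implementation would collect many more signals (canvas rendering, installed fonts, plugins) and harden them against tampering.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Derive a stable, IP-agnostic fingerprint from client-reported attributes.

    Sorting keys makes the hash independent of collection order; dropping the
    IP keeps the fingerprint stable when the client rotates source addresses.
    """
    signals = {k: v for k, v in attributes.items() if k != "ip"}
    canonical = json.dumps(signals, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The same device seen from two rotating IPs maps to a single track record.
fp_a = device_fingerprint({"ip": "203.0.113.7", "ua": "Mozilla/5.0", "tz": "UTC+2"})
fp_b = device_fingerprint({"ip": "198.51.100.9", "ua": "Mozilla/5.0", "tz": "UTC+2"})
assert fp_a == fp_b
```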


Web client fingerprint technology introduces significant value in the context of automated attacks, such as web scraping; Brute Force and advanced availability threats, such as HTTP Dynamic Flood; and low and slow attacks, where the correlation across multiple sessions is essential for proper detection and mitigation.

For each fingerprint-based, uniquely identified source, a historical track record is stored with all security violations, activity records and application session flows. Each abnormal behavior is registered and scored. Violation examples include SQL injection, suspicious session flow and high page access rate. Once a threshold is reached, the source with the marked fingerprint will not be allowed to access the secured application.
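
The scoring described above can be sketched as follows. The violation weights and threshold are illustrative placeholders, not any vendor’s actual values.

```python
from collections import defaultdict

# Illustrative weights and threshold; production values would be tuned.
VIOLATION_SCORES = {"sql_injection": 10, "suspicious_session_flow": 4,
                    "high_page_access_rate": 2}
BLOCK_THRESHOLD = 12

class SourceTracker:
    """Per-fingerprint track record of violations, blocking on a score threshold."""

    def __init__(self):
        self.history = defaultdict(list)   # fingerprint -> violations seen

    def record(self, fingerprint: str, violation: str) -> None:
        self.history[fingerprint].append(violation)

    def is_blocked(self, fingerprint: str) -> bool:
        # Unknown violation types still score 1 so they are never free.
        score = sum(VIOLATION_SCORES.get(v, 1) for v in self.history[fingerprint])
        return score >= BLOCK_THRESHOLD
```

Once `is_blocked` returns True for a fingerprint, requests from that source would be denied access to the secured application regardless of which IP they arrive from.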

[You may also like: IoT Expands the Botnet Universe]

Taking the Good with the Bad

Ultimately, understanding and managing bots isn’t about crafting a strategy driven by a perceived negative attitude toward bots because, as we’ve explained, bots serve many useful purposes for propelling the business forward. Rather, it’s about equipping your organization to act as a digital detective to mitigate malicious traffic without adversely impacting legitimate traffic.

Organizations need to embrace technological advancements that yield better business performance while integrating the necessary security measures to guard their customer data and experience.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Types & Vectors, Botnets, Security

IoT Expands the Botnet Universe

March 6, 2019 — by Radware

AdobeStock_175553664-960x607.jpg

In 2018, we witnessed the dramatic growth of IoT devices and a corresponding increase in the number of botnets and cyberattacks. Because IoT devices are always-on, rarely monitored and generally use off-the-shelf default passwords, they are low-hanging fruit for hackers looking for easy ways to build an army of malicious attackers. Every IoT device added to the network grows the hacker’s tool set.

Botnets comprised of vulnerable IoT devices, combined with widely available DDoS-as-a-Service tools and anonymous payment mechanisms, have pushed denial-of-service attacks to record-breaking volumes. At the same time, new domains such as cryptomining and credentials theft offer more opportunities for hacktivism.

Let’s look at some of the botnets and threats discovered and identified by Radware’s deception network in 2018.

JenX

A new botnet tried to deliver its dangerous payload to Radware’s newly deployed IoT honeypots. The honeypots registered multiple exploit attempts from distinct servers, all located in popular cloud hosting providers based in Europe. The botnet creators intended to sell 290Gbps DDoS attacks for only $20. Further investigation showed that the new bot used an atypical central scanning method: a handful of Linux virtual private servers (VPS) used to scan, exploit and load malware onto unsuspecting IoT victims. At the same time, the deception network also detected SYN scans originating from each of the exploited servers, indicating that they were first performing a mass scan before attempting to exploit the IoT devices, ensuring that ports 52869 and 37215 were open.

[You may also like: IoT Botnets on the Rise]

ADB Miner

ADB Miner is a new piece of malware that takes advantage of Android-based devices that expose debug capabilities to the internet. It leverages scanning code from Mirai. When a remote host exposes its Android Debug Bridge (ADB) control port, any Android emulator on the internet has full install, start, reboot and root shell access without authentication.

Part of the malware includes Monero cryptocurrency miners (xmrig binaries), which are executing on the infected devices. Radware’s automated trend analysis algorithms detected a significant increase in activity against port 5555, both in the number of hits and in the number of distinct IPs. Port 5555 is one of the known ports used by TR069/064 exploits, such as those witnessed during the Mirai-based attack targeting Deutsche Telekom routers in November 2016. In this case, the payload delivered to the port was not SOAP/HTTP, but rather the ADB remote debugging protocol.
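
The kind of automated trend analysis described here can be sketched as a baseline comparison on distinct source IPs per port per day. The window size, spike factor and floor below are illustrative parameters, not Radware’s actual algorithm.

```python
def port_trend_alert(daily_distinct_ips, window=7, factor=5.0, min_ips=100):
    """Flag the latest day if the count of distinct attacking IPs on a port
    jumps well above the trailing baseline (e.g., ~200/day -> 2,000+/day)."""
    if len(daily_distinct_ips) <= window:
        return False  # not enough history to form a baseline
    *history, today = daily_distinct_ips[-(window + 1):]
    baseline = sum(history) / len(history)
    return today >= min_ips and today > factor * baseline

# Roughly the ADB Miner pattern: steady background noise, then a spike on port 5555.
assert not port_trend_alert([40, 35, 50, 45, 38, 42, 41, 44])
assert port_trend_alert([40, 35, 50, 45, 38, 42, 41, 900])
```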

Satori.Dasan

Less than a week after ADB Miner, a third new botnet variant triggered a trend alert due to a significant increase in malicious activity over port 8080. Radware detected a jump in the infecting IPs from around 200 unique IPs per day to over 2,000 malicious unique IPs per day. Further investigation by the research team uncovered a new variant of the Satori botnet capable of aggressive scanning and exploitation of CVE-2017-18046 — Dasan Unauthenticated Remote Code Execution.

[You may also like: New Satori Botnet Variant Enslaves Thousands of Dasan WiFi Routers]

The rapidly growing botnet referred to as “Satori.Dasan” utilizes a highly effective wormlike scanning mechanism, where every infected host looks for more hosts to infect by performing aggressive scanning of random IP addresses and exclusively targeting port 8080. Once a suitable target is located, the infected bot notifies a C2 server, which immediately attempts to infect the new victim.

Memcached DDoS Attacks

A few weeks later, Radware’s system provided an alert on yet another new trend — an increase in activity on UDP port 11211. This trend notification correlated with several organizations publicly disclosing a trend in UDP-amplified DDoS attacks utilizing Memcached servers configured to accommodate UDP (in addition to the default TCP) without limitation. After the attacks, CVE-2018-1000115 was assigned to this vulnerability.

Memcached is by design an internal service that allows unauthenticated access and requires no verification of source or identity. A Memcached-amplified DDoS attack abuses legitimate third-party Memcached servers to send attack traffic to a targeted victim by spoofing the request packet’s source IP with the victim’s IP. Memcached provided record-breaking amplification ratios of up to 52,000x.
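
The arithmetic behind amplification is straightforward: the victim receives the attacker’s spoofed request bandwidth multiplied by the amplification ratio. A quick illustration (the 25 Mbps attacker figure is hypothetical; the 52,000x ratio is the reported maximum):

```python
def amplified_bandwidth_gbps(attacker_bps: float, ratio: float) -> float:
    """Traffic arriving at the victim, given the attacker's spoofed-request
    bandwidth (bits/sec) and the protocol's amplification ratio."""
    return attacker_bps * ratio / 1e9

# At a 52,000x ratio, ~25 Mbps of spoofed requests reflects more than
# a terabit per second onto the victim.
assert amplified_bandwidth_gbps(25e6, 52_000) == 1300.0   # 1.3 Tbps
```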

[You may also like: Entering into the 1Tbps Era]

Hajime Expands to MikroTik RouterOS

Radware’s alert algorithms detected a huge spike in activity for TCP port 8291. After near-zero activity on that port for months, the deception network registered over 10,000 unique IPs hitting port 8291 in a single day. Port 8291 is related to a then-new botnet that exploits vulnerabilities in the MikroTik RouterOS operating system, allowing attackers to remotely execute code on the device.

The spreading mechanism went beyond port 8291, which is used almost exclusively by MikroTik, and rapidly infected other devices such as AirOS/Ubiquiti via ports 80, 81, 82, 8080, 8081, 8082, 8089, 8181 and 8880, utilizing known exploits and password-cracking attempts to speed up propagation.

Satori IoT Botnet Worm Variant

Another interesting trend alert occurred on Saturday, June 15. Radware’s automated algorithms alerted on an upsurge in malicious scanning and infection activity targeting a variety of IoT devices through recently discovered exploits. The previously unseen payload was delivered by the infamous Satori botnet. The number of attack sources grew exponentially and spread all over the world, exceeding 2,500 attackers in a 24-hour period.

[You may also like: A Quick History of IoT Botnets]

Hakai

Radware’s automation algorithm monitored the rise of Hakai, a new botnet discovered by NewSky Security and first recorded in July. After lying dormant for a while, it started to infect D-Link, Huawei and Realtek routers. In addition to exploiting known vulnerabilities to infect the routers, it used a Telnet scanner to enslave Telnet-enabled devices with default credentials.

DemonBot

A new stray QBot variant going by the name of DemonBot joined the worldwide hunt for the yellow elephant — Hadoop clusters — with the intention of conscripting them into an active DDoS botnet. Hadoop clusters are typically very capable, stable platforms, and each can individually account for much larger volumes of DDoS traffic than IoT devices. DemonBot extends the traditional abuse of IoT platforms for DDoS by adding very capable big-data cloud servers. The DDoS attack vectors supported by DemonBot are STD, UDP and TCP floods.

Using a Hadoop YARN (Yet-Another-Resource-Negotiator) unauthenticated remote command execution, DemonBot spreads only via central servers and does not exhibit the wormlike behavior of Mirai-based bots. By the end of October, Radware tracked over 70 active exploit servers spreading malware and exploiting YARN servers at an aggregated rate of over one million exploits per day.

[You may also like: Hadoop YARN: An Assessment of the Attack Surface and Its Exploits]

YARN allows multiple data processing engines to handle data stored in a single Hadoop platform. DemonBot took advantage of YARN’s REST API publicly exposed by over 1,000 cloud servers worldwide. DemonBot effectively harnesses the Hadoop clusters in order to generate a DDoS botnet powered by cloud infrastructure.

Always on the Hunt

In 2018, Radware’s deception network took its first automated trend-detection steps and proved its ability to identify emerging threats early on and distribute valuable data to Radware mitigation devices, enabling them to effectively mitigate infections, scanners and attackers. One of the most difficult aspects of automated anomaly detection is filtering out the massive noise to identify the trends that indicate real issues.

In 2019, the deception network will continue to evolve and learn and expand its horizons, taking the next steps in real-time automated detection and mitigation.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Mobile Security, Service Provider

Here’s How Carriers Can Differentiate Their 5G Offerings

February 28, 2019 — by Mike O'Malley

5g-960x636.jpg

Much of the buzz surrounding this year’s Mobile World Congress has focused on “cool” tech innovations. There are self-driving cars, IoT-enhanced bee hives, smart textiles that monitor your health, realistic human chatbots, AI robots, and so forth. But one piece of news that has flown relatively under the radar is the pending collaboration between carriers on 5G implementation.

A Team Effort

As Bloomberg reported, carriers such as Vodafone Group Plc, Telecom Italia SpA and Telefonica SA are willing to call “a partial truce” and help each other build 5G infrastructure in an attempt “to avoid duplication and make scarce resources go further.”

Sounds great (who doesn’t love a solid team effort?!)…except for one thing: the pesky issue of competing for revenue streams in an industry fraught with financial challenges. As the Bloomberg article pointed out, “by creating more interdependent and overlapping networks, the risk is that each will find it harder to differentiate their offering.”

[You may also like: Securing the Customer Experience for 5G and IoT]

While this is certainly a valid concern, there is an obvious solution: If carriers are looking for differentiation in a collaborative environment, they need to leverage security as a competitive advantage.

Security as a Selling Point

As MWC19 is showing us in no uncertain terms, IoT devices—from diabetic smart socks to dairy milking monitors—are the way of the future. And they will largely be powered by 5G networks, beginning as early as this year.

Smart boot and sock monitor blood sugar, pulse rate, temperature and more for diabetics.

Which is all to say, although carriers are nervous about setting themselves apart while they work in partnership to build 5G infrastructure, there’s a huge opportunity to differentiate themselves by claiming ownership of IoT device security.

[You may also like: Don’t Be A “Dumb” Carrier]

As I recently wrote, IoT devices are especially vulnerable because of manufacturers’ priority to maintain low costs, rather than spending more on additional security features. If mobile service providers create a secure environment, they can establish a competitive advantage and reap financial rewards.

Indeed, best-of-breed security opens the possibility for capturing new revenue streams; mobile IoT businesses will pay an additional service premium for the peace of mind that their devices will be secure and can maintain 100% availability. And if a competing carrier suffers a data breach, for example, you can expect their customer attrition to become your win.

My words of advice: Collaborate. But do so while holding an ace—security—in your back pocket.

2018 Mobile Carrier Ebook

Read “Creating a Secure Climate for your Customers” today.

Download Now

Application Delivery

Keeping Pace in the Race for Flexibility

February 27, 2019 — by Radware

AdobeStock_195521238-960x537.jpg

Flexibility and elasticity. Both rank high on the corporate agenda in the age of digital transformation and IT is no exception. From the perspective of IT, virtualization and cloud computing have become the de facto standard for deployment models. They provide the infrastructure elasticity to make business more agile and higher performing and are the reason why the majority of organizations today are operating within a hybrid infrastructure, one that combines on-premise with cloud-based and/or virtualized assets.

But to deliver the elasticity promised by these hybrid infrastructures requires data center solutions that deliver flexibility. As a cornerstone for optimizing applications, application delivery controllers (ADCs) have to keep pace in the race for flexibility. The key is to ensure that your organization’s ADC fulfills key criteria to improve infrastructure planning, flexibility and operational expenses.

One License to Rule Them All

Organizations should enjoy complete agility in every aspect of the ADC service deployment, not just in terms of capabilities but also in terms of licensing. Partner with an ADC vendor that provides an elastic, global licensing model.

Organizations often struggle with planning ADC deployments when those deployments span hybrid infrastructures and can be strapped with excess expenses by vendors when pre-deployment calculations result in over-provisioning. A global licensing model allows organizations to pay only for capacity used, be able to allocate resources as needed and add virtual ADCs at a moment’s notice to match specific business initiatives, environments and network demands.

[You may also like: Maintaining Your Data Center’s Agility and Making the Most Out of Your Investment in ADC Capacity]

The result? Dramatically simplified ADC deployment planning and a streamlined transition to the cloud.

An ADC When and Where You Need It

This licensing mantra extends to deployment options and customizations as well. Leading vendors provide the ability to deploy ADCs across on-premise and cloud-based infrastructures, allowing customers to transfer ADC capacity from physical to cloud-based data centers. Ensure you can deploy an ADC wherever and whenever it is required, at the click of a button, at no extra cost and with no purchasing complexity.

Add-on services and capabilities that go hand-in-hand with ADCs are no exception either. Web application firewalls (WAF), web performance optimization (WPO), application performance monitoring…companies should enjoy the freedom to consume only required ADC services rather than overspending on bells and whistles that will sit idle collecting dust.

Stay Ahead of the Curve

New standards for communications and cryptographic protocols can leave data center teams running amok attempting to keep IT infrastructure updated. They can also severely inhibit application delivery.

Take SSL/TLS protocols. Both are evolving standards that ensure faster encrypted communications between client and server, improved security and application resource allocation without over-provisioning. Supporting the latest versions allows IT to optimize application performance and control costs during large-scale deployments.
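
As a small illustration of keeping protocol support current, Python’s standard ssl module can pin a context to modern TLS versions only, analogous to the policy an up-to-date ADC enforces at scale when terminating encrypted traffic:

```python
import ssl

# Pin a context to modern TLS versions; legacy TLS 1.0/1.1 handshakes
# are refused outright.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_3
```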

[You may also like: The ADC is the Key Master for All Things SSL/TLS]

Combining the flexibility of an ADC that supports the latest standards with an elastic licensing model is a winning combination, as it provides the most cost-effective alternative for consuming ADC services for any application.

Contain the Madness

The goal of any ADC is to ensure each application is performing at its best while optimizing costs and resource consumption. This is accomplished by ensuring that resource utilization is always tuned to actual business needs.

Leading ADC vendors allow ADC micro-services to be added to individual ADC instances without increasing the bill. Support for container orchestration engines such as Kubernetes lets an organization adapt its ADC to application capacity. This also simplifies the addition of services such as SSL or WAF to individual instances or micro-services.

[You may also like: Simple to Use Link Availability Solutions]

Finding an ADC vendor that addresses all these considerations requires expanding the search beyond mainstream vendors. Driving flexibility via IT elasticity means considering all the key ADC capabilities and licensing nuances critical to managing and optimizing today’s diversified IT infrastructure. Remember these three keys when evaluating ADC vendors:

  • An ADC licensing model should be a catalyst for cutting infrastructure expenditures, not increasing them.
  • An ADC licensing model should provide complete agility in every aspect of your ADC deployment.
  • An ADC license should allow IT to simplify and automate IT operational processes.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Cloud Computing, Cloud Security, Security

Mitigating Cloud Attacks With Configuration Hardening

February 26, 2019 — by Radware

cloud-attacks-960x540.jpg

For attackers, misconfigurations in the public cloud can be exploited for a number of reasons. Typical attack scenarios include several kill chain steps, such as reconnaissance, lateral movement, privilege escalation, data acquisition, persistence and data exfiltration. These steps might be fully or partially utilized by an attacker over dozens of days until the ultimate objective is achieved and the attacker reaches the valuable data.

Removing the Mis from Misconfigurations

To prevent attacks, enterprises must harden configurations to address promiscuous permissions by applying continuous hardening checks to limit the attack surface as much as possible. The goals are to avoid public exposure of data from the cloud and reduce overly permissive access to resources by making sure communication between entities within a cloud, as well as access to assets and APIs, are only allowed for valid reasons.

For example, the private data of six million Verizon users was exposed when maintenance work changed a configuration and made an S3 bucket public. Only smart configuration hardening that applies the approach of “least privilege” enables enterprises to meet those goals.

[You may also like: Ensuring Data Privacy in Public Clouds]

The process requires applying behavior analytics methods over time, including regular reviews of permissions and a continuous analysis of usual behavior of each entity, just to ensure users only have access to what they need, nothing more. By reducing the attack surface, enterprises make it harder for hackers to move laterally in the cloud.
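
A permission review of this kind can start with something as simple as flagging wildcard grants. The statement shape below is a loose, illustrative IAM-style schema, not any one cloud provider’s exact format:

```python
def overly_permissive(statements):
    """Return the allow-statements that grant wildcard actions or resources,
    i.e., the promiscuous permissions that hardening should remove."""
    flagged = []
    for stmt in statements:
        if stmt.get("effect") != "allow":
            continue
        actions = stmt.get("actions", [])
        resources = stmt.get("resources", [])
        # Flag full wildcards and service-wide action grants like "s3:*".
        if "*" in actions or any(a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged

policy = [
    {"effect": "allow", "actions": ["s3:GetObject"], "resources": ["arn:aws:s3:::logs/*"]},
    {"effect": "allow", "actions": ["s3:*"], "resources": ["*"]},
]
assert overly_permissive(policy) == [policy[1]]
```

Running such checks continuously, rather than as a one-off audit, is what turns least privilege from a policy statement into an enforced property of the environment.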

The process is complex and often best managed with the assistance of an outside security partner with deep expertise and a system that combines multiple algorithms measuring activity across the network to detect anomalies and determine whether malicious intent is probable. Often attackers will carry out their kill chain over several days or months.

Taking Responsibility

It is tempting for enterprises to assume that cloud providers are completely responsible for network and application security to ensure the privacy of data. In practice, cloud providers provide tools that enterprises can use to secure hosted assets. While cloud providers must be vigilant in how they protect their data centers, responsibility for securing access to apps, services, data repositories and databases falls on the enterprises.

Future security threats to the cloud environment.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Hardened network and meticulous application security can be a competitive advantage for companies to build trust with their customers and business partners. Now is a critical time for enterprises to understand their role in protecting public cloud workloads as they transition more applications and data away from on-premise networks.

The responsibility to protect the public cloud is a relatively new task for most enterprises. But, everything in the cloud is external and accessible if it is not properly protected with the right level of permissions. Going forward, enterprises must quickly incorporate smart configuration hardening into their network security strategies to address this growing threat.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now