

How to Prevent Real-Time API Abuse

April 18, 2019 — by Radware


The widespread adoption of mobile and IoT devices, and increased use of cloud systems are driving a major change in modern application architecture. Application Programming Interfaces (APIs) have emerged as the bridge to facilitate communication between different application architectures. However, with the widespread deployment of APIs, automated attacks on poorly protected APIs are mounting. Personally Identifiable Information (PII), payment card details, and business-critical services are at risk due to automated attacks on APIs.


So what are key API vulnerabilities, and how can you protect against API abuse?

Authentication Flaws

Many APIs check only authentication status, not whether the request comes from a genuine user. Attackers exploit such flaws in various ways (including session hijacking and account aggregation) to imitate genuine API calls. Attackers also target APIs by reverse-engineering mobile apps to discover how they call the API. If API keys are embedded in the app, this can result in an API breach. API keys should not be used alone for user authentication.
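One common hardening step is to sign each API request with a per-client secret instead of sending a bare API key, so the credential never travels with the request and tampering is detectable. The sketch below illustrates the idea with HMAC-SHA256 plus a timestamp to narrow the replay window; the secret, path and field names are hypothetical.

```python
import hashlib
import hmac
import time

SECRET = b"per-client-secret-issued-out-of-band"  # hypothetical shared secret

def sign_request(method: str, path: str, body: str, timestamp: int) -> str:
    """Sign the canonical request with HMAC-SHA256 instead of a raw API key."""
    message = f"{method}\n{path}\n{body}\n{timestamp}".encode()
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: str,
                   timestamp: int, signature: str, max_skew: int = 300) -> bool:
    """Server side: recompute the signature and reject stale or tampered requests."""
    if abs(time.time() - timestamp) > max_skew:  # limit the replay window
        return False
    expected = sign_request(method, path, body, timestamp)
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature)
```

Any change to the method, path, body or timestamp invalidates the signature, so a stolen request cannot be replayed later or altered in transit.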

[You may also like: Are Your DevOps Your Biggest Security Risks?]

Lack of Robust Encryption

Many APIs lack robust encryption between the API client and the API server. Attackers exploit such vulnerabilities through man-in-the-middle attacks, intercepting unencrypted or poorly protected API transactions to steal sensitive information or alter transaction data.

What’s more, the ubiquitous use of mobile devices, cloud systems and microservice design patterns has further complicated API security, as multiple gateways are now involved in facilitating interoperability among diverse web applications. Encrypting the data flowing through all of these channels is paramount.
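On the client side, enforcing certificate-chain and hostname verification is the baseline defense against man-in-the-middle interception. A minimal sketch using Python's standard library (the helper function name is ours):

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Build a TLS client context that verifies the server certificate chain
    and hostname, and refuses legacy protocol versions."""
    context = ssl.create_default_context()  # CERT_REQUIRED + hostname checks
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject old TLS/SSL
    return context
```

The same idea applies in any language: never disable certificate verification in API clients, even for "internal" traffic, since those are exactly the hops an attacker positioned in the middle will target.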

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

Business Logic Vulnerability

APIs are vulnerable to business logic abuse. Attackers make repeated, large-scale API calls to an application server or send slow POST requests, resulting in denial of service. A DDoS attack on an API can cause massive disruption to the front-end web application.

Poor Endpoint Security

Most IoT devices and microservice tools are programmed to communicate with their servers through API channels. These devices authenticate themselves to API servers using client certificates. Hackers attempt to gain control of an API from the IoT endpoint; if they succeed, they can easily re-sequence the API call order, which can result in a data breach.

[You may also like: The Evolution of IoT Attacks]

How You Can Prevent API Abuse

A bot management solution that defends APIs against automated attacks and ensures that only genuine users can access them is paramount. When evaluating such a solution, consider whether it offers broad attack detection and coverage, comprehensive reporting and analytics, and flexible deployment options.

Other steps you can (and should) take include:

  • Monitor and manage API calls coming from automated scripts (bots)
  • Drop primitive authentication
  • Implement measures to prevent API access by sophisticated human-like bots
  • Implement robust encryption
  • Deploy token-based rate limiting equipped with features to limit API access based on the number of IPs, sessions, and tokens
  • Implement robust security on endpoints
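The token-based rate limiting mentioned above can be sketched as a token bucket keyed by API token. This is a minimal illustration only; a real deployment would also limit by IP and session, persist state across instances, and tune capacity and refill rates.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-API-token rate limiter: each token gets `capacity` requests,
    refilled at `rate` requests per second (a sketch, not production code)."""

    def __init__(self, capacity: float = 10, rate: float = 1.0):
        self.capacity = capacity
        self.rate = rate
        self.buckets = defaultdict(
            lambda: {"tokens": capacity, "last": time.monotonic()}
        )

    def allow(self, api_token: str) -> bool:
        bucket = self.buckets[api_token]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        bucket["tokens"] = min(
            self.capacity, bucket["tokens"] + (now - bucket["last"]) * self.rate
        )
        bucket["last"] = now
        if bucket["tokens"] >= 1:
            bucket["tokens"] -= 1
            return True
        return False  # over the limit: reject or queue the call
```

Because each API token gets its own bucket, one abusive client exhausts only its own allowance rather than degrading service for everyone.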

Read “Radware’s 2018 Web Application Security Report” to learn more.



Anatomy of a Cloud-Native Data Breach

April 10, 2019 — by Radware


Migrating computing resources to cloud environments opens up new attack surfaces previously unknown in the world of premise-based data centers. As a result, cloud-native data breaches frequently have different characteristics and follow a different progression than physical data breaches. Here is a real-life example of a cloud-native data breach, how it evolved and how it possibly could have been avoided.

Target Profile: A Social Media/Mobile App Company

The company is a photo-sharing social media application, with over 20 million users. It stores over 1PB of user data within Amazon Web Services (AWS), and in 2018, it was the victim of a massive data breach that exposed nearly 20 million user records. This is how it happened.

[You may also like: Ensuring Data Privacy in Public Clouds]

Step 1: Compromising a legitimate user. Frequently, the first step in a data breach is that an attacker compromises the credentials of a legitimate user. In this incident, an attacker used a spear-phishing attack to obtain an administrative user’s credentials to the company’s environment.

Step 2: Fortifying access. After compromising a legitimate user, a hacker frequently takes steps to fortify access to the environment, independent of the compromised user. In this case, the attacker connected to the company’s cloud environment through an IP address registered in a foreign country and created API access keys with full administrative access.

Step 3: Reconnaissance. Once inside, an attacker then needs to map out what permissions are granted and what actions this role allows.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Step 4: Exploitation. Once the available permissions in the account have been determined, the attacker can proceed to exploit them. Among other activities, the attacker duplicated the master user database and exposed it to the outside world with public permissions.

Step 5: Exfiltration. Finally, with customer information at hand, the attacker copied the data outside of the network, gaining access to over 20 million user records that contained personal user information.

Lessons Learned

Your Permissions Equal Your Threat Surface: Leveraging public cloud environments means that resources that used to be hosted inside your organization’s perimeter are now outside where they are no longer under the control of system administrators and can be accessed from anywhere in the world. Workload security, therefore, is defined by the people who can access those workloads and the permissions they have. In effect, your permissions equal your attack surface.

Excessive Permissions Are the No. 1 Threat: Cloud environments make it very easy to spin up new resources and grant wide-ranging permissions but very difficult to keep track of who has them. Such excessive permissions are frequently mischaracterized as misconfigurations but are actually the result of permission misuse or abuse. Therefore, protecting against those excessive permissions becomes the No. 1 priority for securing publicly hosted cloud workloads.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Cloud Attacks Follow a Typical Progression: Although each data breach incident may develop differently, a cloud-native data breach frequently follows a typical progression: legitimate user account compromise, account reconnaissance, privilege escalation, resource exploitation and data exfiltration.

What Could Have Been Done Differently?

Protect Your Access Credentials: Your next data breach is a password away. Securing your cloud account credentials — as much as possible — is critical to ensuring that they don’t fall into the wrong hands.

Limit Permissions: Frequently, cloud user accounts are granted many permissions that they don’t need or never use. Exploiting the gap between granted permissions and used permissions is a common move by hackers. In the aforementioned example, the attacker used the accounts’ permissions to create new administrative-access API keys, spin up new databases, reset the database master password and expose it to the outside world. Limiting permissions to only what the user needs helps ensure that, even if the account is compromised, the damage an attacker can do is limited.

[You may also like: Mitigating Cloud Attacks With Configuration Hardening]

Alert of Suspicious Activities: Since cloud-native data breaches frequently have a common progression, there are certain account activities — such as port scanning, invoking previously used APIs and granting public permissions — which can be identified. Alerting against such malicious behavior indicators (MBIs) can help prevent a data breach before it occurs.
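Alerting on such malicious behavior indicators can start as a simple rule pass over audit events. The sketch below flags a hypothetical list of suspicious actions and any API call a user has never invoked before; the event shape and action names are illustrative, not a real cloud provider's API.

```python
# Hypothetical audit-event shape: {"user": ..., "action": ..., "source_ip": ...}
SUSPICIOUS_ACTIONS = {
    "PutBucketPublicAccess",  # granting public permissions (illustrative name)
    "CreateAccessKey",        # minting new long-lived credentials
}

def detect_mbis(events, known_actions_per_user):
    """Flag malicious behavior indicators (MBIs): inherently suspicious
    actions, plus API calls a user has never invoked before."""
    alerts = []
    for event in events:
        if event["action"] in SUSPICIOUS_ACTIONS:
            alerts.append(("suspicious-action", event))
        elif event["action"] not in known_actions_per_user.get(event["user"], set()):
            alerts.append(("first-seen-api", event))
    return alerts
```

In the breach described above, both indicators would have fired early: the attacker created new access keys and invoked administrative APIs the compromised account had never used.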

Automate Response Procedures: Finally, once malicious activity has been identified, fast response is paramount. Automating response mechanisms can help block malicious activity the moment it is detected and stop the breach from reaching its end goal.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.



Out of the Shadows, Into the Network

April 9, 2019 — by Radware


Network security is a priority for every carrier worldwide. Investments in human resources and technology solutions to combat attacks are a significant part of carriers’ network operating budgets.

The goal is to protect their networks by staying a few steps ahead of hackers. Currently, carriers may be confident that their network security solution is detecting and mitigating DDoS attacks.

All the reports generated by the solution show the number and severity of attacks as well as how they were thwarted. Unfortunately, we know it’s a false sense of well-being because dirty traffic in the form of sophisticated application attacks is getting through security filters. No major outages or data breaches have been attributed to application attacks yet, so why should carriers care?

Maintaining a Sunny Reputation

The impact of application attacks on carriers and their customers takes many forms:

  • Service degradation
  • Network outages
  • Data exposure
  • Consumption of bandwidth resources
  • Consumption of system resources

[You may also like: How Cyberattacks Directly Impact Your Brand]

A large segment of carriers’ high-value customers have zero tolerance for service interruption. There is a direct correlation between service outages and user churn.

Application attacks put carriers’ reputations at risk. For customers, a small slowdown in services may not be a big deal initially. But as the number and severity of application attacks increase, clogged pipes and slow services are not going to be acceptable. Carriers sell services based on speed and reliability. Bad press about service outages and data compromises has long-lasting negative effects. Then add the compounding power of social networking to quickly spread the word about service issues, and you have a recipe for reputation disaster.

[You may also like: Securing the Customer Experience for 5G and IoT]

Always Under Attack

It’s safe for carriers to assume that their networks are always under attack. DDoS attack volume is escalating as hackers develop new and more technologically sophisticated ways to target carriers and their customers. In 2018, attack campaigns were primarily composed of multiple attack vectors, according to the Radware 2018–2019 Global Application & Network Security Report.

The report finds that “a bigger picture is likely to emerge about the need to deploy security solutions that not only adapt to changing attack vectors to mitigate evolving threats but also maintain service availability at the same time.”

[You may also like: Here’s How Carriers Can Differentiate Their 5G Offerings]

Attack vectors include:

  • SYN Flood
  • UDP Flood
  • DNS Flood
  • HTTP Application Flood
  • SSL Flood
  • Burst Attacks
  • Bot Attacks

Attackers prefer to keep a target busy by launching one or a few attacks at a time rather than firing the entire arsenal all at once. Carriers may be successful at blocking four or five attack vectors, but it only takes one failure for the damage to be done.


Read “Creating a Secure Climate for your Customers” today.



Application SLA: Knowing Is Half the Battle

April 4, 2019 — by Radware


Applications have come to define the digital experience. They empower organizations to create new customer-friendly services, unlock data and content and deliver them to users at the time and on the device they desire, and provide a competitive differentiator.

Fueling these applications is the “digital core,” a vast plumbing infrastructure that includes networks, data repositories, Internet of Things (IoT) devices and more. If applications are a cornerstone of the digital experience, then managing and optimizing the digital core is the key to delivering these apps to the digitized user. When applications aren’t delivered efficiently, users can suffer a degraded quality of experience (QoE), resulting in a tarnished brand, reduced customer loyalty and lost revenue.

Application delivery controllers (ADCs) are ideally situated to ensure QoE, regardless of the operational scenario, by allowing IT to actively monitor and enforce application SLAs. The key is to understand the role ADCs play and the capabilities required to ensure the digital experience across various operational scenarios.

Optimize Normal Operations

Under normal operational conditions, ADCs optimize application performance, control and allocate resources to those applications and provide early warnings of potential issues.

[You may also like: 6 Must-Have Metrics in Your SLA]

For starters, any ADC should deliver web performance optimization (WPO) capabilities to turbocharge the performance of web-based applications. It transforms front-end optimization from a lengthy and complex process into an automated, streamlined function. Caching, compression, SSL offloading and TCP optimization are all key capabilities and will enable faster communication between the client and server while offloading CPU intensive tasks from the application server.

Along those same lines, an ADC can serve as a “bridge” between the web browsers that deliver web-based applications and the backend servers that host the applications. For example, HTTP/2 is the new standard in network protocols. ADCs can serve as a gateway between the web browsers that support HTTP/2 and backend servers that still don’t, optimizing performance to meet application SLAs.

Prevent Outages

Outages are few and far between, but when they occur, maintaining business continuity via server load balancing, cloud elasticity and disaster recovery is critical. ADCs play a central role across all three, executing and automating these processes during a crisis.

[You may also like: Security Pros and Perils of Serverless Architecture]

If an application server fails, server load balancing should automatically redirect the client to another server. Likewise, in the event that an edge router or network connection to the data center fails, an ADC should automatically redirect to another data center, ensuring the web client can always access the application server even when there is a point of failure in the network infrastructure.
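The failover logic described above can be sketched as a health-check pass over an ordered list of pools: try the primary data center first, then fall back to disaster recovery. Server names and the health-check callback below are placeholders.

```python
def pick_backend(primary_pool, dr_pool, is_healthy):
    """Return a healthy server, preferring the primary data center and
    falling back to the disaster-recovery pool (simplified sketch: a real
    ADC would also balance load among the healthy servers)."""
    for pool in (primary_pool, dr_pool):
        healthy = [server for server in pool if is_healthy(server)]
        if healthy:
            return healthy[0]
    raise RuntimeError("no healthy backend available in any data center")
```

The same structure generalizes: the health check might probe a TCP port or an application URL, and "pools" might be entire sites behind a global server load balancer.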

Minimize Degradation

Application SLA issues are most often the result of network degradation. The ecommerce industry is a perfect example. A sudden increase in network traffic during the holiday season can result in SLA degradation.

Leveraging server load balancing, ADCs provide elasticity by provisioning resources on demand. Additional servers are added to the network infrastructure to maintain QoE and, after the spike has passed, returned to an idle state for use elsewhere. Virtualized ADCs provide an additional benefit: scalability and isolation between vADC instances at the fault, management and network levels.

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

Finally, cyberattacks are the silent killers of application performance, as they typically create degradation. ADCs play an integrative role in protecting applications to maintain SLAs at all times. They can prevent attack traffic from entering a network’s LAN and prevent volumetric attack traffic from saturating the Internet pipe.

The ADC should be equipped with security capabilities that allow it to be integrated into the security/DDoS mitigation framework. This includes the ability to inspect traffic and network health parameters so the ADC serves as an alarm system, signaling attack information to a DDoS mitigation solution. Other interwoven safety features should include integration with web application firewalls (WAFs), the ability to decrypt/encrypt SSL traffic and device/user fingerprinting.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.



What is a Zero-Day Attack?

April 2, 2019 — by Radware


Zero-day attacks are the latest, never-before-seen generation of attacks. They are not volumetric or detectable from a known application signature. Security systems and experts must react instantly to solve the new issues, that is, they have zero days to react. Advanced application-level attacks typically fit into this category.

Two Distinct Phases

Probe and Learn: Hackers assess network defenses and probe for vulnerabilities, looking for different weaknesses and identifying the type of attacks that will potentially be effective. It’s like an archer who picks the best arrows to put in his quiver before battle. For example, a hacker may determine that a combination of encrypted attacks, attacks from a rotating IP address source, new low and slow attacks and headless browser attacks will be most effective.

[You may also like: Protecting Applications in a Serverless Architecture]

Optimize, Morph and Attack: Hackers launch the attack and then vary the attack vectors (or arrows from the quiver). In this case, hackers often understand that legacy DDoS mitigators need manual intervention to troubleshoot and mitigate a zero-day attack. So they attack the weakness of the legacy mitigator (multiple manual troubleshooting cycles to stop an attack) in addition to attacking the application vulnerabilities.

Who Are the Attackers?

Richard Clarke, former special cybersecurity advisor to the U.S. president, devised an acronym — C.H.E.W. — to categorize and explain the origin of cyberattacks (that specifically target carriers and enterprises):

  • Cybercrime — the notion that someone is going to attack you with the primary motive being financial gain from the endeavor.
  • Hacktivism — attacks motivated by ideological differences. The primary focus of these attacks is not financial gain but rather persuading or dissuading certain actions or “voices.”
  • Espionage — straightforward motive of gaining information on another organization in pursuit of political, financial, capitalistic, market share or some other form of leverage.
  • War (Cyber) — the notion of a nation-state or transnational threat to an adversary’s centers of power via a cyberattack. Attacks could focus on nonmilitary critical infrastructure.

[You may also like: How Cyberattacks Directly Impact Your Brand]

The attackers can range from a tech-savvy teenager to a highly organized group that taps into huge server farms in places like Russia and Ukraine to facilitate attacks.

The types of hackers are as varied as the methods they employ and include advanced persistent threat (APT) agents, corporate spies, cybercriminals, cyberwarriors, hacktivists, rogue hackers, spammers and malware spreaders.

Read “Radware’s 2018 Web Application Security Report” to learn more.



Security Pros and Perils of Serverless Architecture

March 14, 2019 — by Radware


Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. This cloud computing model can drive cost-efficiencies, increase agility and enable organizations to focus on the essential aspects of software development. While serverless architecture offers some security advantages, trusting that a cloud provider has security fully covered can be risky.

That’s why it’s critical to understand what serverless architectures mean for cyber security.

What Serverless Means for Security

Many assume that serverless is more secure than traditional architectures. This is partly true. As the name implies, serverless architecture does not require server provisioning. Deep under the hood, however, these REST API functions are still running on a server, which in turn runs on an operating system and uses different layers of code to parse the API requests. As a result, the total attack surface becomes significantly larger.

When exploring whether and to what extent to use serverless architecture, consider the security implications.

[You may also like: Protecting Applications in a Serverless Architecture]

Security: The Pros

The good news is that responsibility for the operating system, web server and other software components shifts from the application owner to the cloud provider, who should apply patch management and hardening policies across the different software components. Most common vulnerabilities should be addressed through enforcement of such security best practices. But what happens when a zero-day vulnerability hits these software components? Consider Shellshock, which allowed an attacker to gain unauthorized access to a computer system.

Meanwhile, denial-of-service attacks designed to take down a server become a fool’s errand. FaaS servers are only provisioned on demand and then discarded, thereby creating a fast-moving target. Does that mean you no longer need to think about DDoS? Not so fast. While DDoS attacks may not cause a server to go down, they can drive up an organization’s tab due to an onslaught of requests. Additionally, functions’ scale is limited while execution is time limited. Launching a massive DDoS attack may have unpredictable impact.

[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Finally, the very nature of FaaS makes it more challenging for attackers to exploit a server and wait until they can access more data or do more damage. There is no persistent local storage that may be accessed by the functions. Counting on storing attack data in the server is more difficult but still possible. With the “ground” beneath them continually shifting—and containers re-generated—there are fewer opportunities to perform deeper attacks.

Security: The Perils

Now, the bad news: serverless computing doesn’t eradicate all traditional security concerns. Code is still being executed and will always be potentially vulnerable. Application-level vulnerabilities can still be exploited whether they are inherent in the FaaS infrastructure or in the developer function code.

Whether delivered as FaaS or simply built on a web infrastructure, REST API functions are even more challenging to secure than standard web applications, and they introduce security concerns of their own. API vulnerabilities are hard to monitor and do not stand out. Traditional application security assessment tools do not work well with APIs or are simply irrelevant in this case.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

When planning API security infrastructure, authentication and authorization must be taken into account, yet they are often not addressed properly in many API security solutions. Beyond that, REST APIs are vulnerable to many of the attacks and threats that target web applications: injections via POSTed JSON and XML payloads, insecure direct object references, access violations and abuse of APIs, buffer overflows and XML bombs, and scraping and data harvesting, among others.

The Way Forward

Serverless architectures are being adopted at a record pace. As organizations welcome dramatically improved speed, agility and cost-efficiency, they must also think through how they will adapt their security. Consider the following:

  • API gateway: Functions are processing REST API calls from client-side applications accessing your code with unpredicted inputs. An API Gateway can enforce JSON and XML validity checks. However, not all API Gateways support schema and structure validation, especially when it has to do with JSON. Each function deployed must be properly secured. Additionally, API Gateways can serve as the authentication tier which is critically important when it comes to REST APIs.
  • Function permissions: The function is essentially the execution unit. Restrict functions’ permissions to the minimum required and do not use generic permissions.
  • Abstraction through logical tiers: When a function calls another function—each applying its own data manipulation—the attack becomes more challenging.
  • Encryption: Data at rest is still accessible. FaaS becomes irrelevant when an attacker gains access to a database. Data needs to be adequately protected and encryption remains one of the recommended approaches regardless of the architecture it is housed in.
  • Web application firewall: Enterprise-grade WAFs apply dozens of protection measures on both ingress and egress traffic. Traffic is parsed to detect protocol manipulations, which may result in unexpected function behavior. Client-side inputs are validated and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
  • IoT botnet protection: To avoid the significant cost implications a DDoS attack may have on a serverless architecture and the data harvesting risks involved with scraping activity, consider behavioral analysis tools and IoT botnet solutions.
  • Monitoring function activity and data access: Abnormal function behavior, expected access to data, non-reasonable traffic flow and other abnormal scenarios must be tracked and analyzed.
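As an illustration of the API gateway point above, a function can validate JSON payload structure before any business logic runs. This hand-rolled check is only a sketch with a hypothetical schema; a real deployment would use a proper schema validator at the gateway tier.

```python
import json

# Hypothetical expected shape for an incoming API call
SCHEMA = {"user_id": int, "action": str}

def validate_payload(raw: str, schema=SCHEMA):
    """Return the parsed payload only if it is a JSON object whose field
    names and types match the expected schema; otherwise return None."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # not valid JSON at all
    if not isinstance(data, dict) or set(data) != set(schema):
        return None  # missing or unexpected fields
    if any(not isinstance(data[key], typ) for key, typ in schema.items()):
        return None  # wrong field types
    return data
```

Rejecting malformed input at the edge keeps unpredicted payloads (oversized structures, wrong types, injected fields) from ever reaching the function body.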

Read “Radware’s 2018 Web Application Security Report” to learn more.



Adapting Application Security to the New World of Bots

March 7, 2019 — by Radware


In 2018, organizations reported a 10% increase in malware and bot attacks. Considering the pervasiveness (70%) of these types of attacks reported in 2017, this uptick is likely having a big impact on organizations globally. Compounding the issue is the fact that the majority of bots are actually leveraged for good intentions, not malicious ones. As a result, it is becoming increasingly difficult for organizations to identify the difference between the two, according to Radware’s Web Application Security in a Digitally Connected World report.

Bots are automated programs that run independently to perform a series of specific tasks, for example, collecting data. Sophisticated bots can handle complicated interactive situations. More advanced programs feature self-learning capabilities that can address automated threats against traditional security models.

Positive Impact: Business Acceleration

Automated software applications can streamline processes and positively impact overall business performance. They replace tedious human tasks and speed up processes that depend on large volumes of information, thus contributing to overall business efficiency and agility.

Good bots include:

  • Crawlers — are used by search engines and contribute to SEO and SEM efforts
  • Chatbots — automate and extend customer service and first response
  • Fetchers — collect data from multiple locations (for instance, live sporting events)
  • Pricers — compare pricing information from different services
  • Traders — are used in commercial systems to find the best quote or rate for a transaction

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Negative Impact: Security Risks

The Open Web Application Security Project (OWASP) lists 21 automated threats to applications that can be grouped together by business impacts:

  • Scraping and Data Theft — Bots try to access restricted areas in web applications to get a hold of sensitive data such as access credentials, payment information and intellectual property. One method of collecting such information is called web scraping. A common example for a web-scraping attack is against e-commerce sites where bots quickly hold or even fully clear the inventory.
  • Performance — Bots can impact the availability of a website, bringing it to a complete or partial denial-of-service state. The consumption of resources such as bandwidth or server CPU immediately leads to a deterioration in the customer experience, lower conversions and a bad image. Attacks can be large and volumetric (DDoS) or not (low and slow, buffer overflow).
  • Poisoning Analytics — When a significant portion of a website’s visitors are fictitious, expect biased analytics figures and fraudulent referral links. Compounding this issue is the fact that third-party tools designed to monitor website traffic often have difficulty filtering out bot traffic.
  • Fraud and Account Takeover — With access to leaked databases such as Yahoo and LinkedIn, hackers use bots to run through usernames and passwords to gain access to accounts. Then they can access restricted files, inject scripts or make unauthorized transactions.
  • Spammers and Malware Downloaders — Malicious bots constantly target mobile and web applications. Using sophisticated techniques like spoofing their IPs, mimicking user behavior (keystrokes, mouse movements), abusing open-source tools (PhantomJS) and headless browsers, bots bypass CAPTCHA, challenges and other security heuristics.

[You may also like: The Big, Bad Bot Problem]

Blocking Automated Threats

Crude bot attacks against websites are easy to block with IP- and reputation-based signatures and rules. However, because of the increasing sophistication and frequency of attacks, it is important to be able to uniquely identify the attacking machine, a process referred to as device fingerprinting. The process should be IP agnostic yet unique enough to be acted upon with confidence. Because resourceful attackers may actively try to manipulate the fingerprint extracted by the web tool, it should also be resistant to client-side manipulation.


Web client fingerprint technology introduces significant value in the context of automated attacks such as web scraping; brute force and advanced availability threats such as HTTP dynamic floods; and low-and-slow attacks, where correlation across multiple sessions is essential for proper detection and mitigation.

For each fingerprint-based, uniquely identified source, a historical track record is stored with all security violations, activity records and application session flows. Each abnormal behavior is registered and scored. Violation examples include SQL injection, suspicious session flow and high page access rate. Once a threshold is reached, the source with the marked fingerprint will not be allowed to access the secured application.
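The scoring flow described above can be sketched as a per-fingerprint tally that blocks a source once its cumulative violation score crosses a threshold. The violation names, scores and threshold below are illustrative; a production system would tune them and decay scores over time.

```python
from collections import defaultdict

# Illustrative per-violation scores and blocking threshold
VIOLATION_SCORES = {
    "sql_injection": 10,
    "suspicious_session_flow": 4,
    "high_page_rate": 2,
}
BLOCK_THRESHOLD = 10

class FingerprintTracker:
    """Track a cumulative violation score per device fingerprint and
    block the source once it crosses the threshold (simplified sketch)."""

    def __init__(self):
        self.scores = defaultdict(int)

    def record(self, fingerprint: str, violation: str) -> None:
        # Unknown violations still count, with a minimal score of 1
        self.scores[fingerprint] += VIOLATION_SCORES.get(violation, 1)

    def is_blocked(self, fingerprint: str) -> bool:
        return self.scores[fingerprint] >= BLOCK_THRESHOLD
```

Keying on the fingerprint rather than the IP means the track record survives IP rotation, which is exactly the evasion tactic sophisticated bots rely on.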

[You may also like: IoT Expands the Botnet Universe]

Taking the Good with the Bad

Ultimately, understanding and managing bots isn’t about crafting a strategy driven by a perceived negative attitude toward bots because, as we’ve explained, bots serve many useful purposes for propelling the business forward. Rather, it’s about equipping your organization to act as a digital detective to mitigate malicious traffic without adversely impacting legitimate traffic.

Organizations need to embrace technological advancements that yield better business performance while integrating the necessary security measures to guard their customer data and experience.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Attack Types & Vectors | Botnets | Security

IoT Expands the Botnet Universe

March 6, 2019 — by Radware


In 2018, we witnessed the dramatic growth of IoT devices and a corresponding increase in the number of botnets and cyberattacks. Because IoT devices are always-on, rarely monitored and generally use off-the-shelf default passwords, they are low-hanging fruit for hackers looking for easy ways to build an army of malicious attackers. Every IoT device added to the network grows the hacker’s tool set.

Botnets comprised of vulnerable IoT devices, combined with widely available DDoS-as-a-Service tools and anonymous payment mechanisms, have pushed denial-of-service attacks to record-breaking volumes. At the same time, new domains such as cryptomining and credentials theft offer more opportunities for hacktivism.

Let’s look at some of the botnets and threats discovered and identified by Radware’s deception network in 2018.

JenX

A new botnet tried to deliver its dangerous payload to Radware's newly deployed IoT honeypots. The honeypots registered multiple exploit attempts from distinct servers, all hosted by popular cloud providers in Europe. The botnet's creators intended to sell 290Gbps DDoS attacks for only $20. Further investigation showed that the new bot used an atypical centralized scanning method: a handful of Linux virtual private servers (VPS) were used to scan, exploit and load malware onto unsuspecting IoT victims. The deception network also detected SYN scans originating from each of the exploited servers, indicating that they first performed a mass scan to confirm that ports 52869 and 37215 were open before attempting to exploit the IoT devices.

[You may also like: IoT Botnets on the Rise]

ADB Miner

ADB Miner is a piece of malware that takes advantage of Android-based devices that expose debug capabilities to the internet, and it leverages scanning code from Mirai. When a remote host exposes its Android Debug Bridge (ADB) control port, any Android emulator on the internet has full install, start, reboot and root shell access without authentication.

The malware includes Monero cryptocurrency miners (xmrig binaries) that execute on the infected devices. Radware's automated trend-analysis algorithms detected a significant increase in activity against port 5555, both in the number of hits and in the number of distinct IPs. Port 5555 is one of the known ports used by TR-069/064 exploits, such as those witnessed during the Mirai-based attack targeting Deutsche Telekom routers in November 2016. In this case, however, the payload delivered to the port was not SOAP/HTTP but the ADB remote debugging protocol.
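This style of trend alert, flagging a port whose count of distinct source IPs jumps well above its daily baseline, can be sketched as follows. The 5x spike factor and the sample traffic are illustrative assumptions:

```python
SPIKE_FACTOR = 5  # assumed: alert when distinct IPs reach 5x the baseline


def detect_port_spikes(daily_hits, baseline):
    """daily_hits: {port: set of distinct source IPs seen today};
    baseline: {port: typical number of distinct daily IPs}.
    Returns (port, distinct_ip_count) pairs that exceed the spike factor."""
    alerts = []
    for port, ips in daily_hits.items():
        typical = baseline.get(port, 1)
        if len(ips) >= SPIKE_FACTOR * typical:
            alerts.append((port, len(ips)))
    return alerts


baseline = {5555: 40, 8080: 200}
today = {
    5555: {f"198.51.100.{i}" for i in range(250)},  # sudden scanning wave
    8080: {f"203.0.113.{i}" for i in range(180)},   # within normal range
}
print(detect_port_spikes(today, baseline))  # [(5555, 250)]
```

A production system would also model seasonality and hit volume, not just distinct-IP counts, to filter out noise.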

Satori.Dasan

Less than a week after ADB Miner, a third new botnet variant triggered a trend alert due to a significant increase in malicious activity over port 8080. Radware detected a jump in the infecting IPs from around 200 unique IPs per day to over 2,000 malicious unique IPs per day. Further investigation by the research team uncovered a new variant of the Satori botnet capable of aggressive scanning and exploitation of CVE-2017-18046 — Dasan Unauthenticated Remote Code Execution.

[You may also like: New Satori Botnet Variant Enslaves Thousands of Dasan WiFi Routers]

The rapidly growing botnet referred to as “Satori.Dasan” utilizes a highly effective wormlike scanning mechanism, where every infected host looks for more hosts to infect by performing aggressive scanning of random IP addresses and exclusively targeting port 8080. Once a suitable target is located, the infected bot notifies a C2 server, which immediately attempts to infect the new victim.

Memcached DDoS Attacks

A few weeks later, Radware's system provided an alert on yet another new trend: an increase in activity on UDP port 11211. This notification correlated with several organizations publicly disclosing a wave of UDP-amplified DDoS attacks utilizing Memcached servers configured to serve UDP (in addition to the default TCP) without limitation. Following the attacks, CVE-2018-1000115 was assigned to this vulnerability.

Memcached is by design an internal service that allows unauthenticated access and requires no verification of source or identity. A Memcached-amplified DDoS attack abuses legitimate third-party Memcached servers to send attack traffic to a targeted victim by spoofing the request packet's source IP with the victim's IP. Memcached provided record-breaking amplification ratios of up to 52,000x.
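The amplification ratio is simply the size of the response reflected toward the victim divided by the size of the spoofed request. The byte counts below are illustrative, anchored to the roughly 52,000x worst case noted above:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification: bytes reflected toward the victim
    per byte the attacker sends with a spoofed source IP."""
    return response_bytes / request_bytes


# A tiny UDP "get" request can trigger a huge response when attackers
# first stuff large values into the cache (illustrative numbers).
request = 15
response = 15 * 52_000  # the ~52,000x worst case reported for Memcached
print(round(amplification_factor(request, response)))  # 52000
```

This is why even a modest attacker uplink, multiplied through exposed Memcached servers, can produce terabit-scale floods.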

[You may also like: Entering into the 1Tbps Era]

Hajime Expands to MikroTik RouterOS

Radware's alert algorithms detected a huge spike in activity on TCP port 8291. After months of near-zero activity on that port, the deception network registered over 10,000 unique IPs hitting it in a single day. Port 8291 is associated with a then-new botnet that exploits vulnerabilities in the MikroTik RouterOS operating system, allowing attackers to remotely execute code on the device.

The spreading mechanism went beyond port 8291, which is used almost exclusively by MikroTik, and rapidly infected other devices such as AirOS/Ubiquiti via ports 80, 81, 82, 8080, 8081, 8082, 8089, 8181 and 8880, utilizing known exploits and password-cracking attempts to speed up propagation.

Satori IoT Botnet Worm Variant

Another interesting trend alert occurred on Saturday, June 15. Radware's automated algorithms flagged an upsurge in malicious scanning and infection activity against a variety of IoT devices, taking advantage of recently discovered exploits. The previously unseen payload was delivered by the infamous Satori botnet. The number of attack sources grew exponentially and spread all over the world, exceeding 2,500 attackers in a 24-hour period.

[You may also like: A Quick History of IoT Botnets]

Hakai

Radware's automation algorithms monitored the rise of Hakai, a botnet first recorded in July and discovered by NewSky Security. After lying dormant for a while, it started to infect D-Link, Huawei and Realtek routers. In addition to exploiting known vulnerabilities in the routers, it used a Telnet scanner to enslave Telnet-enabled devices with default credentials.

DemonBot

A new stray QBot variant going by the name of DemonBot joined the worldwide hunt for the yellow elephant, Hadoop clusters, with the intention of conscripting them into an active DDoS botnet. Hadoop clusters are typically very capable, stable platforms, and each can individually account for much larger volumes of DDoS traffic than an IoT device. DemonBot thus extends the traditional abuse of IoT platforms for DDoS with very capable big-data cloud servers. The DDoS attack vectors supported by DemonBot are STD, UDP and TCP floods.

Using an unauthenticated remote command execution in Hadoop YARN (Yet Another Resource Negotiator), DemonBot spreads only via central servers and does not exhibit the wormlike behavior of Mirai-based bots. By the end of October, Radware was tracking over 70 active exploit servers spreading the malware and exploiting YARN servers at an aggregated rate of over one million exploits per day.

[You may also like: Hadoop YARN: An Assessment of the Attack Surface and Its Exploits]

YARN allows multiple data-processing engines to handle data stored in a single Hadoop platform. DemonBot took advantage of YARN's REST API, publicly exposed by over 1,000 cloud servers worldwide, effectively harnessing Hadoop clusters to generate a DDoS botnet powered by cloud infrastructure.

Always on the Hunt

In 2018, Radware's deception network launched its first automated trend-detection steps and proved its ability to identify emerging threats early on and to distribute valuable data to Radware mitigation devices, enabling them to effectively mitigate infections, scanners and attackers. One of the most difficult aspects of automated anomaly detection is filtering out the massive noise and identifying the trends that indicate real issues.

In 2019, the deception network will continue to evolve and learn and expand its horizons, taking the next steps in real-time automated detection and mitigation.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Application Delivery

Keeping Pace in the Race for Flexibility

February 27, 2019 — by Radware


Flexibility and elasticity. Both rank high on the corporate agenda in the age of digital transformation and IT is no exception. From the perspective of IT, virtualization and cloud computing have become the de facto standard for deployment models. They provide the infrastructure elasticity to make business more agile and higher performing and are the reason why the majority of organizations today are operating within a hybrid infrastructure, one that combines on-premise with cloud-based and/or virtualized assets.

But to deliver the elasticity promised by these hybrid infrastructures requires data center solutions that deliver flexibility. As a cornerstone for optimizing applications, application delivery controllers (ADCs) have to keep pace in the race for flexibility. The key is to ensure that your organization’s ADC fulfills key criteria to improve infrastructure planning, flexibility and operational expenses.

One License to Rule Them All

Organizations should enjoy complete agility in every aspect of the ADC service deployment, not just in terms of capabilities but in terms of licensing. Partner with an ADC vendor that provides an elastic, global licensing model.

Organizations often struggle with planning ADC deployments when those deployments span hybrid infrastructures and can be strapped with excess expenses by vendors when pre-deployment calculations result in over-provisioning. A global licensing model allows organizations to pay only for capacity used, be able to allocate resources as needed and add virtual ADCs at a moment’s notice to match specific business initiatives, environments and network demands.
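The pay-for-capacity idea can be sketched as a simple pooled allocator: one global license, with throughput drawn by virtual ADC instances on demand and returned when an instance is retired. The class, instance names and numbers below are illustrative assumptions, not any vendor's actual model:

```python
class CapacityPool:
    """Global license pool: throughput (e.g., in Gbps) is drawn by virtual
    ADC instances on demand and returned when an instance is retired."""

    def __init__(self, total_gbps: float):
        self.total = total_gbps
        self.allocations = {}

    def allocate(self, instance: str, gbps: float) -> bool:
        if gbps > self.available():
            return False  # pool exhausted: buy more capacity, not more licenses
        self.allocations[instance] = self.allocations.get(instance, 0) + gbps
        return True

    def release(self, instance: str) -> None:
        self.allocations.pop(instance, None)

    def available(self) -> float:
        return self.total - sum(self.allocations.values())


pool = CapacityPool(100)
pool.allocate("adc-onprem-1", 40)
pool.allocate("adc-cloud-1", 30)
pool.release("adc-onprem-1")   # shift capacity from the data center...
pool.allocate("adc-cloud-2", 60)  # ...to the cloud, under the same license
print(pool.available())  # 10
```

The point of the sketch is that capacity moves freely between on-premise and cloud instances without new purchasing events.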

[You may also like: Maintaining Your Data Center’s Agility and Making the Most Out of Your Investment in ADC Capacity]

The result? Dramatically simplified ADC deployment planning and a streamlined transition to the cloud.

An ADC When and Where You Need It

This licensing mantra extends to deployment options and customizations as well. Leading vendors provide the ability to deploy ADCs across on-premise and cloud-based infrastructures, allowing customers to transfer ADC capacity from physical to cloud-based data centers. Ensure you can deploy an ADC wherever and whenever it is required, at the click of a button, at no extra cost and with no purchasing complexity.

Add-on services and capabilities that go hand-in-hand with ADCs are no exception either. Web application firewalls (WAF), web performance optimization (WPO), application performance monitoring…companies should enjoy the freedom to consume only required ADC services rather than overspending on bells and whistles that will sit idle collecting dust.

Stay Ahead of the Curve

New standards for communications and cryptographic protocols can leave data center teams running amok attempting to keep IT infrastructure updated. They can also severely inhibit application delivery.

Take SSL/TLS. These evolving standards ensure faster encrypted communications between client and server, improved security, and application resource allocation without over-provisioning, allowing IT to optimize application performance and control costs during large-scale deployments.

[You may also like: The ADC is the Key Master for All Things SSL/TLS]

Combining the flexibility of an ADC that supports the latest standards with an elastic licensing model is a winning combination, as it provides the most cost-effective alternative for consuming ADC services for any application.

Contain the Madness

The goal of any ADC is to ensure each application is performing at its best while optimizing costs and resource consumption. This is accomplished by ensuring that resource utilization is always tuned to actual business needs.

Leading ADC vendors allow ADC micro-services to be added to individual ADC instances without increasing the bill. Support for container orchestration engines such as Kubernetes lets the organization adapt its ADC to the application's capacity. This also simplifies the addition of services such as SSL or WAF to individual instances or micro-services.

[You may also like: Simple to Use Link Availability Solutions]

Finding an ADC vendor that addresses all these considerations requires looking beyond the mainstream vendors. Driving flexibility via IT elasticity means considering all the key ADC capabilities and licensing nuances critical to managing and optimizing today's diversified IT infrastructure. Remember these three keys when evaluating ADC vendors:

  • An ADC licensing model should be a catalyst for cutting infrastructure expenditures, not increasing them.
  • An ADC licensing model should provide complete agility in every aspect of your ADC deployment.
  • An ADC license should allow IT to simplify and automate IT operational processes.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Cloud Computing | Cloud Security | Security

Mitigating Cloud Attacks With Configuration Hardening

February 26, 2019 — by Radware


For attackers, misconfigurations in the public cloud can be exploited for a number of reasons. Typical attack scenarios include several kill chain steps, such as reconnaissance, lateral movement, privilege escalation, data acquisition, persistence and data exfiltration. These steps might be fully or partially utilized by an attacker over dozens of days until the ultimate objective is achieved and the attacker reaches the valuable data.

Removing the Mis from Misconfigurations

To prevent attacks, enterprises must harden configurations to address promiscuous permissions, applying continuous hardening checks to limit the attack surface as much as possible. The goals are to avoid public exposure of data from the cloud and to reduce overly permissive access to resources, ensuring that communication between entities within a cloud, as well as access to assets and APIs, is allowed only for valid reasons.

For example, the private data of six million Verizon users was exposed when maintenance work changed a configuration and made an S3 bucket public. Only smart configuration hardening that applies the approach of “least privilege” enables enterprises to meet those goals.
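A minimal hardening check in this spirit flags access-control entries that expose a storage bucket beyond its owner. The sketch assumes ACL documents shaped like those returned by S3's GetBucketAcl API; the bucket data is fabricated for illustration:

```python
# Grant URIs AWS uses to mark anonymous and any-authenticated-user access.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}


def public_grants(acl: dict) -> list:
    """Return (grantee URI, permission) pairs that expose a bucket
    publicly; an empty list means the ACL itself grants no public access."""
    exposed = []
    for grant in acl.get("Grants", []):
        uri = grant.get("Grantee", {}).get("URI")
        if uri in PUBLIC_GRANTEES:
            exposed.append((uri, grant.get("Permission")))
    return exposed


# Fabricated ACL mirroring the structure of an S3 GetBucketAcl response.
acl = {"Grants": [
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
     "Permission": "FULL_CONTROL"},
]}
print(public_grants(acl))
```

A complete audit would also inspect bucket policies and account-level public-access blocks, since an ACL is only one of several ways a bucket can become public.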

[You may also like: Ensuring Data Privacy in Public Clouds]

The process requires applying behavioral analytics methods over time, including regular reviews of permissions and continuous analysis of each entity's usual behavior, to ensure users have access only to what they need and nothing more. By reducing the attack surface, enterprises make it harder for hackers to move laterally in the cloud.

The process is complex and often best managed with the assistance of an outside security partner with deep expertise and a system that combines many algorithms measuring activity across the network to detect anomalies and determine whether malicious intent is probable. Attackers will often execute the kill chain over several days or months.

Taking Responsibility

It is tempting for enterprises to assume that cloud providers are completely responsible for network and application security to ensure the privacy of data. In practice, cloud providers provide tools that enterprises can use to secure hosted assets. While cloud providers must be vigilant in how they protect their data centers, responsibility for securing access to apps, services, data repositories and databases falls on the enterprises.


[You may also like: Excessive Permissions are Your #1 Cloud Threat]

Hardened network and meticulous application security can be a competitive advantage for companies to build trust with their customers and business partners. Now is a critical time for enterprises to understand their role in protecting public cloud workloads as they transition more applications and data away from on-premise networks.

The responsibility to protect the public cloud is a relatively new task for most enterprises. But, everything in the cloud is external and accessible if it is not properly protected with the right level of permissions. Going forward, enterprises must quickly incorporate smart configuration hardening into their network security strategies to address this growing threat.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now