DDoS Attacks Targeting Payment Services of Global Financial Institutions

A threat actor or group is actively targeting the online services of branches of global financial institutions with their headquarters located in Europe. Radware Cloud DDoS Protection Services prevented multiple attacks from disrupting online web banking, payment validation services and remote access services of branches of financial institutions in several countries across the globe. Over two weeks, Radware observed increased sophistication and refinements to evade detection and mitigation as the attacks progressed. While the targeted financial institutions are headquartered in Europe, Radware witnessed attacks on branches in other continents.

November until January is the global shopping season. It is not uncommon to see an increase in attack activity targeting ecommerce, financial and payment systems—a period where disruption of payment services or web and mobile banking services will not go unnoticed.

Many branches of leading global banks have local services hosted in country. These services include remote access for employees and partners, credit and debit card validation services and online or mobile web banking. In the second half of November, we observed increased activity targeting those services from several global financial institutions in multiple countries. The attacks were initially targeting specific services with volumetric attacks. Attacks subsequently changed to subnet floods, randomizing the destination IP addresses to evade detections. The threat actors persisted and eventually changed their tactics to more challenging and sophisticated encrypted and application-level attacks. Attack durations varied from multivector, hour-long attacks to shorter waves lasting only a few minutes attempting to impact productivity and user experience.

As far as we could verify, the targeted organizations received no ransom letters. While the objective behind the attacks is unclear at this time, it could be one of several common intents:

  • gaining a competitive advantage
  • hacktivism
  • an angry customer
  • e-fame and trolling
  • advertising attack capabilities for sale

It is unlikely that hacktivism is the motivation behind the attacks. If it were, the attacks would have been claimed and publicized by the hacktivists. An angry customer is also reasonably unlikely, given that multiple financial institutions and branches in several countries were involved.

Kids looking for e-fame or trolling out of boredom are typically not as persistent and accurate when researching and discovering the targets with the most impact on productivity and reputation. They mostly go for the low-hanging fruit: something very noticeable and easy to measure in terms of success, such as a website, but not so much remote access services or machine-to-machine (M2M) APIs.

Skilled hackers demonstrating their capabilities for sale typically advertise in hacking forums, and as far as we could verify, the attacks have not been claimed. That said, the attacks were not successful as far as our customers were concerned, so there is not much to claim unless they had success with other victims we are not yet aware of.

Consider the Objective, Not the Technique

When record-level attacks make headlines, people tend to lose sight of the objective and merely consider the tactic. Volumetric DDoS attacks reaching beyond 2Tbps get a lot of attention, but these attacks were successfully mitigated, and defending against them mostly depends on one's ability to consume vast amounts of traffic. None of those reported massive attacks disrupted services or caused prolonged grievances for their victims.

Recently, several attack campaigns made the news as they impacted services over more extended periods, having their victims believe attacks had subsided just to catch them off guard a couple of hours or days later. These impactful attacks were nowhere close to terabit-per-second levels and did not require millions of requests per second to degrade or disrupt the services of their victims.

The objective of a DDoS attack is to degrade or completely disrupt a service in an attempt to impact productivity, reputation or revenue streams of an organization. Behind the objective is a motivation that will differ depending on the threat actor. An angry customer will seek revenge. A competitor would likely want to impact the revenue and productivity to get a competitive advantage and take market share. Organized cybercrime and professional cybercriminals are in it for the money, while skids are looking for e-fame or trolling for fun out of boredom.

With the objective in mind, tools need to be acquired to get there. This will require investments from the threat actor, either by building or renting an infrastructure, hiring a skilled attacker, or taking a subscription with one of the many DDoS-as-a-Service sites.

A threat actor should not bring a cannon to a knife fight to be effective. He will want to adapt his capacity and capabilities just enough to overcome any defenses that prevent him from reaching his objective. Consider a remote access service left unprotected on a 4Gbps internet connection: a simple 5Gbps arbitrary UDP Flood will be adequate to completely disrupt connectivity and boot remote workers off the corporate network.

If, however, that same threat actor notices his 5Gbps flood has little or no impact on the reachability of the service, he will adapt his techniques and attempt to evade defenses. A first step might be randomizing the destination IP addresses within the target’s subnet, also known as Carpet Bombing. Other techniques require more research, such as discovering exposed services and targeting them with what resembles legitimate traffic, sometimes adding further evasions such as encryption to make detection at the network level nearly impossible for most solutions. Ultimately, the attacker will target the service with custom application-level requests. The more attack traffic looks like legitimate requests, the more challenging detection and mitigation will be. Typically, such attacks combine multiple techniques, such as hiding behind vast amounts of anonymizing proxies so that application-level requests resemble traffic originating from legitimate users, further confusing detection and helping reach the objective of degrading or disrupting the service.
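Carpet bombing defeats per-destination thresholds because no single IP address receives enough traffic to trip an alarm; a common countermeasure is to aggregate counters at the subnet level instead. The sketch below illustrates the idea only: the threshold, flow format and addresses are all hypothetical, not details of any real mitigation product.

```python
import ipaddress
from collections import Counter

def detect_carpet_bombing(flows, threshold_mbps=1000):
    """Aggregate per-destination traffic to the /24 subnet level.

    Per-IP detection misses an attack spread across a subnet; summing
    traffic per /24 recovers the aggregate signal.
    flows: iterable of (dst_ip, megabits_per_second) tuples.
    """
    per_subnet = Counter()
    for dst_ip, mbps in flows:
        subnet = ipaddress.ip_network(f"{dst_ip}/24", strict=False)
        per_subnet[subnet] += mbps
    return [str(s) for s, total in per_subnet.items() if total >= threshold_mbps]

# Hypothetical example: 25 Mbps to each of 240 hosts in one /24.
# No single destination looks anomalous, but the subnet total (6 Gbps) does.
flows = [(f"198.51.100.{i}", 25) for i in range(1, 241)]
print(detect_carpet_bombing(flows))  # ['198.51.100.0/24']
```

Real detectors track many aggregation levels at once (per host, per /24, per customer prefix), but the principle is the same: widen the counter until the spread-out attack becomes visible again.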

Hitting Where It Hurts

When outages happen or a service is under prolonged attack, customers are (or at least should be) informed, and as the issue gets resolved, people regain confidence in the service. If failures happen randomly and temporary outages keep recurring, however, customers will start to distrust and question the service.

The pandemic made society more hygiene-aware and seems to have affected people’s preferences for payment by card [1]. The volume of cash used in the UK dropped by up to 60% in 2020. In the US, 28% of people stopped using cash altogether. Online shopping made up 28% of sales in the UK in 2020. Mobile banking, online payments and credit or debit card validation services have become a commodity. Consumers are adopting digital payments at record rates, not realizing the complex infrastructure behind those routine payments. That is, until a payment fails because a service is offline. We then break out in a cold sweat for the next few payments, hoping the service will be available this time. November through January are arguably the months during which most people rely on the performance of online payment services.

Work from home is the new normal, and remote access has proven adequate to keep a business productive while office spaces are left deserted. But what if employees’ applications freeze several times a day because connectivity is degraded or lost? Trust in the system, and productivity, will suffer. Connectivity and VPNs have become the lifeline of many organizations, and disrupting or impacting the remote experience of home workers will directly impact the productivity of an organization, not to mention the happiness of the organization’s employees.

Evolution of Methods and Techniques in Sophisticated Attack Campaigns

Comparing techniques leveraged in the last two weeks during multiple attacks we believe are related to the same threat actors or group (see “DDoS Attack Attribution is Hard”), we observed a stepped increase in sophistication and refinement as campaigns progressed and attacks kept being successfully mitigated. The attacks were randomly distributed across countries, branches and organizations, but techniques seemed to morph in unison.

Please note that all attacks discussed below were successfully mitigated with minimal impact on the targeted services.

In the second half of November, a branch of a global financial institution was targeted by several volumetric DDoS attacks. The attacks consisted of multiple 3-minute waves spread over a period of three hours, followed by one more wave later in the day. The attack traffic across all waves consisted primarily of UDP Fragment Floods and DNS amplification. The amplification attacks originated from close to 5,000 unique source IP addresses and peaked at 17Gbps and 1.6Mpps (million packets per second). The floods targeted a specific IP address of a server hosting online debit and credit card validation services.
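As a quick sanity check, the two peak figures above imply near-MTU-sized packets, which fits large DNS amplification responses and UDP fragments rather than small-packet floods. A back-of-the-envelope calculation using only the figures reported above:

```python
# Average packet size implied by the reported attack peaks
# of 17 Gbps and 1.6 Mpps (figures from the attack described above).
bits_per_second = 17e9
packets_per_second = 1.6e6

avg_packet_bytes = bits_per_second / packets_per_second / 8
print(f"average packet size: {avg_packet_bytes:.0f} bytes")  # ~1328 bytes
# Near-MTU-sized packets: consistent with large amplification responses
# and UDP fragments, not with small-packet resource-starvation floods
# (e.g. SYN floods, which typically run at 40-60 bytes per packet).
```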

This attack came only two days after a short, 5-minute attack of 12.5Gbps that targeted a branch in a different country. The attack vectors consisted of UDP Fragment, DNS amplification and several types of HTTPS TCP floods and HTTPS connection attempts and originated from more than 6,000 unique source IPs.

In the meantime, we started seeing more similar attack patterns across other financial institutions and countries. Below is an attack consisting of three subsequent traffic bursts. The first two bursts and part of the third burst targeted a credit and debit card validation server, while the most significant part of the third wave was directed at 240 IP addresses within the /24 subnet of the branch.

The traffic directed at the online card validation services consisted primarily of UDP Fragment and DNS Amplification Floods to random destination ports. The traffic directed at the subnet leveraged DNS amplification to 240 different IP addresses with a fixed destination port 80. Traffic across the three waves originated from 4,000 unique source IP addresses, the same addresses observed between attack waves.

One day after the previous attack, another branch was targeted by an attack that lasted one hour during office hours. The attack consisted of several widely spread bursts of 2 to 3 minutes each. All bursts leveraged attack vectors consisting primarily of volumetric attacks leveraging DNS Amplification Floods destined to port 80 and UDP Fragment Floods. Attacks were targeted at a subnet rather than a specific server or service.

The peak attack sizes were not exceptionally large, topping out below 6Gbps. However, remember the expression I used earlier: do not bring a cannon to a knife fight. Not every organization or branch is equipped with multi-gigabit-per-second internet connections. If the objective is to temporarily perturb the user experience on connections to local branches, which had to accommodate increasing numbers of remote workers, only a fraction of that capacity is required to saturate the line or slow down communications.

A few days later, the same assets were targeted by a 9-minute attack peaking just above 10Gbps. The attack traffic, this time, consisted of UDP Fragments, NTP and LDAP amplifications, and ICMP Floods.

This brings me to another related attack that showed the actors targeting remote access services. One of the financial institutions’ branches had three specific servers targeted by a very distinctive traffic pattern, almost a textbook example of testing attack services, except that the target was a live production network of an organization rather than one of the many (illegal) DStat services, such as dstat.cc, that attackers leverage to evaluate their DDoS capabilities.

The attack lasted 48 minutes in total and consisted of four 12-minute waves with equal parts of attack traffic divided between TCP, UDP and ICMP Floods. The first wave combined the three attack vectors, while subsequent waves ran single attack vectors.

Attack traffic was also equally spread over the three targeted IP addresses. One of the IP addresses resolved to a hostname that started with ‘api’, another with ’vpn’, and the third with ’partner’. As the hostnames revealed the function of each service, the threat actors revealed some of their objectives: targeting APIs, VPNs and online applications, a partner portal in this particular case.

The source IP addresses of the attack traffic consisted of 1,138 different addresses, all taken from a single autonomous system belonging to an internet service provider in Poland.

It’s fair to assume that the attacker spoofed the source IP addresses. These attacks most certainly originated from a server infrastructure: it is impossible to keep bandwidths and packets per second this consistent and evenly spread across the targeted servers while leveraging amplification and reflection assaults. ICMP Floods, in particular, have varying success when reflecting from third-party servers. While this attack overlapped in several respects with the other attacks discussed, I must admit that it is only loosely related. But it came as such a surprise that I could not refrain from sharing it.
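Such a concentration of sources can be checked by testing whether every attacking address falls inside the prefixes announced by a single autonomous system. A minimal sketch of that check, where the documentation prefixes below merely stand in for the real AS announcements (which would come from BGP or WHOIS data):

```python
import ipaddress

def all_within_prefixes(source_ips, announced_prefixes):
    """Return True if every source IP falls inside one of the given prefixes."""
    nets = [ipaddress.ip_network(p) for p in announced_prefixes]
    return all(
        any(ipaddress.ip_address(ip) in net for net in nets)
        for ip in source_ips
    )

# Hypothetical stand-ins for the ISP's announced prefixes and the
# observed attack sources; real data would come from flow logs and BGP.
prefixes = ["203.0.113.0/24", "198.51.100.0/24"]
sources = ["203.0.113.17", "198.51.100.200", "203.0.113.250"]
print(all_within_prefixes(sources, prefixes))  # True
```

If all 1,138 sources pass such a check against one AS, the traffic either genuinely came from that network or was spoofed deliberately within its ranges; either way, the uniformity itself is a strong fingerprint.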

The most recent observed attack, in December, targeted card validation services of a branch outside of Europe of yet another global financial institution headquartered in Europe. The attack lasted about five minutes and peaked at 14Gbps and 3Mpps. The attack traffic combined volumetric and resource starvation attack vectors. The volumetric vectors leveraged UDP Fragments, DNS Amplification and LDAP Amplification. The resource starvation vectors consisted primarily of 1.6Mpps of HTTPS TCP Floods and connection attempts originating from over 12,000 unique source IP addresses, and a 100Kpps ICMP BlackNurse flood [2].

This is when attacks get sophisticated and distinguishing legitimate from attack traffic becomes more difficult. Even though application-level requests require a TCP session to be established, forcing the attacker’s hand at revealing their infrastructure’s IP addresses, attackers still have the option, besides using botnets, to leverage large networks of (illegal) SOCKS proxies and hide their source IP addresses behind multiple proxies [3]. SOCKS proxy lists can be bought through underground forums, and some leverage infected routers, allowing criminals to hide attacks behind legitimate users’ IP addresses while conducting application-level DDoS attacks.

DDoS Attack Attribution is Hard

Attack attribution is hard, even more so for DDoS attacks. Many DDoS attack techniques leverage spoofing and reflection, making it impossible to trace the origin of the traffic. To make matters harder, amplification and reflection resources for DNS, NTP, LDAP, ARMS, SSDP, etc. are readily available and abused by many, making correlation of attacks and attribution to common threat actors or groups less reliable. Only when threat actors brag about their exploits on social networks, or when DDoS-as-a-service providers see their attack and billing logs leaked, can one close in on attribution of DDoS attacks.
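The appeal of these reflection vectors lies in their bandwidth amplification factor (BAF): a small spoofed request elicits a much larger response toward the victim. The figures below are commonly cited approximations (e.g. from US-CERT amplification advisories), not measurements from the attacks described here:

```python
# Approximate bandwidth amplification factors for common reflection
# vectors. These are commonly cited rough figures (see e.g. US-CERT
# alert TA14-017A); real-world factors vary widely per reflector.
BAF = {
    "DNS": 28,    # up to ~54x with large ANY responses
    "NTP": 556,   # via the monlist command
    "LDAP": 46,   # CLDAP, up to ~55x
    "SSDP": 30,
}

def reflected_gbps(request_gbps, vector):
    """Traffic volume arriving at the victim for a given spoofed request rate."""
    return request_gbps * BAF[vector]

# A modest 0.5 Gbps of spoofed DNS queries can reflect roughly 14 Gbps
# toward the victim, in the range of the attack peaks described above.
print(f"{reflected_gbps(0.5, 'DNS'):.0f} Gbps")  # 14 Gbps
```

This asymmetry is also why reflection destroys attribution: the victim only ever sees the reflectors’ addresses, never the machines that sent the spoofed requests.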

In the attacks on global financial institutions, we made the assumption that the attacks were performed by one and the same threat actor or group. We observe a lot of random attack activity, but the attacks referred to in this blog all correlated through similar targeted services across multiple organizations within the same vertical, and through recurring attacks targeting the exact same IP addresses with attack characteristics that were identical within margins of error.

Your Defense Strategy Should Depend on the Exposed Assets and Services

In most cases, I’m not worried about the most visible parts of attack traffic graphs. Large and consistent floods are easily detected and mitigated. More worrying are the potentially malicious traffic patterns at the bottom of the barrel. Application-layer attacks require much less bandwidth than volumetric attacks to be effective, and they are just as easy to perform given access to illegal proxy lists. Publicly available attack tools that leverage proxy services can send large amounts of HTTP GET requests or perform HTTP POST requests with randomized information, requiring application-side logic and validation to detect the requests’ legitimacy. That is why protections need to be adapted to the exposed assets and services being protected.
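One common building block of such adapted protection is tracking request rates per source over a sliding window, so that sources issuing requests faster than any plausible user get flagged for deeper inspection. A simplified sketch of the idea; the window length and threshold are illustrative, not recommendations:

```python
from collections import defaultdict, deque

class RateTracker:
    """Flag sources exceeding a request-rate threshold in a sliding window.

    A simplified building block only: real application-layer protection
    also inspects request content, session behavior and bot signatures,
    since proxied attacks spread load across many source addresses.
    """
    def __init__(self, window_seconds=10, max_requests=50):
        self.window = window_seconds
        self.max_requests = max_requests
        self.history = defaultdict(deque)  # source -> recent timestamps

    def record(self, source_ip, timestamp):
        """Record a request; return True if the source exceeds the threshold."""
        q = self.history[source_ip]
        q.append(timestamp)
        # Drop timestamps that have slid out of the window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.max_requests

tracker = RateTracker(window_seconds=10, max_requests=50)
# Hypothetical flood: 60 requests from one source within one second.
flagged = any(tracker.record("203.0.113.5", t / 60) for t in range(60))
print(flagged)  # True
```

Note the limitation the text points out: with thousands of proxied sources each staying under the per-source threshold, this check alone is blind, which is why content-level validation is still needed.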

Protecting a network against saturation-level volumetric DDoS attacks requires cloud protection solutions that have enough capacity to consume even the largest of known attacks and clean the traffic before it gets routed back to the protected network. Stateless, network-layer protection solutions are adequate to protect network devices and servers against resource starvation attacks on-premise.

When exposing online applications that users interact with, both the network and the application layer should be protected. Such a solution should be able to distinguish between humans and machines (bots) and between good and bad bots. Protecting online APIs or single-page and mobile applications is different from protecting web applications. Consumers of APIs are mostly devices or machines, so distinguishing between humans and machines is not relevant, and more advanced detection mechanisms will be required.

References

[1] Credit card statistics 2021 (https://blog.spendesk.com/en/credit-card-statistics)

[2] BlackNurse (https://blacknurse.dk/)

[3] Mēris botnet, climbing to the record – Qrator (https://blog.qrator.net/en/meris-botnet-climbing-to-the-record_142/)

As the Director, Threat Intelligence for Radware, Pascal helps execute the company's thought leadership on today’s security threat landscape. Pascal brings over two decades of experience in many aspects of Information Technology and holds a degree in Civil Engineering from the Free University of Brussels. As part of the Radware Security Research team Pascal develops and maintains the IoT honeypots and actively researches IoT malware. Pascal discovered and reported on BrickerBot, did extensive research on Hajime and follows closely new developments of threats in the IoT space and the applications of AI in cyber security and hacking. Prior to Radware, Pascal was a consulting engineer for Juniper working with the largest EMEA cloud and service providers on their SDN/NFV and data center automation strategies. As an independent consultant, Pascal got skilled in several programming languages and designed industrial sensor networks, automated and developed PLC systems, and lead security infrastructure and software auditing projects. At the start of his career, he was a support engineer for IBM's Parallel System Support Program on AIX and a regular teacher and presenter at global IBM conferences on the topics of AIX kernel development and Perl scripting.
