It’s not always obvious to a network or system administrator that their company’s infrastructure is under attack. In fact, an attack usually starts slowly, and it’s only as the attack progresses that someone takes notice. But what does a DDoS attack look like from the inside? What are the early warning signs? Who are the principal players? What steps are taken to mitigate an attack? What tensions and emotional responses does an attack produce at the various levels of an enterprise? In the following post, a system administrator at a bank provides an hour-by-hour breakdown of the early stages of a DDoS attack as experienced in real time.
I am awakened by the sound of an incoming SMS message on my phone. The message, an automatic notification sent by our new server health monitoring tool reads: “Warning, Mainapp server at 30% maximum load.” Mainapp is the principal online banking application web-server that handles customer requests.
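Monitoring tools like this typically fire one alert per threshold crossed, rather than re-alerting on every sample. A minimal sketch of that logic — purely illustrative, since the post never names the actual product or its internals — might look like:

```python
# Hypothetical sketch of threshold-based health alerting: fire one
# warning per threshold the first time the load crosses it.
THRESHOLDS = [30, 50, 70]  # percent of maximum server load

def check_load(load_percent, already_alerted):
    """Return the alert messages for every threshold newly crossed.

    `already_alerted` is a set of thresholds that have fired before,
    so repeated samples above the same level stay quiet.
    """
    alerts = []
    for t in THRESHOLDS:
        if load_percent >= t and t not in already_alerted:
            alerts.append(f"Warning, Mainapp server at {t}% maximum load.")
            already_alerted.add(t)
    return alerts
```

In a real deployment each returned message would be handed to an SMS gateway; that part is omitted here.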
Since our CEO has strategically decided to promote online banking, the bank has invested a great deal of money to ensure that the Mainapp banking application web-server is robust, scalable, and highly available. So far, it’s had enough processing power and memory to handle current traffic levels. With last month’s statistics indicating a server load of no more than 15%, receiving a message that the server load is at 30% is worrisome, but not serious.
At this point, I think it’s possible that the monitoring tool’s alert threshold parameters were set incorrectly, but I can wait to check that when I get to the office later.
Only a half hour later another SMS message arrives. This one reads “Warning, Mainapp server at 50% maximum load.” Something is definitely wrong. But since I didn’t configure remote access to the health-monitoring tool, I can’t look at its logs until I get to the office.
While rushing to get to the office to investigate, I run through the possible causes of such high server load. I try to assure myself that it’s probably a simple configuration error, but I’m beginning to worry.
The customer support manager on duty calls me to report that many customers are complaining that the online banking website is significantly slower than normal. He says that one of the customers is furious because he was unable to perform a time-sensitive money transfer as quickly as usual.
Finally I arrive at the office and rush to a server terminal screen. Mainapp’s load has reached 70% — nearly maximum capacity.
Upon a quick check of the health monitoring tool logs, I find out that the alert thresholds are set correctly. However, online banking traffic still appears to be abnormally high. Thousands of connections have been opened to the server, requesting different pages on the online banking website.
A few beads of sweat drip down my forehead as I try not to panic. Such a massive amount of network traffic must be originating from a malicious source, but why? Who is behind it? I suddenly remember last week’s newspaper headlines detailing the wave of cyber attacks on financial service institutions. I begin to see similarities between what our server is experiencing and what I read about in the papers. I fear that our server is being targeted by a denial-of-service (DoS) attack.
Assuming the worst, I try to identify the nature and source of the malicious traffic. First, I check where the connections are originating from and try to isolate the attackers’ IP addresses in order to differentiate legitimate from malicious traffic. Meanwhile, my phone does not stop ringing.
The CIO calls wanting to know what’s going on. I tell him that I’m trying to solve the problem, but that we might be experiencing a denial-of-service attack that’s exhausting our server’s resources. His response: the problem needs to be solved quickly, before the CEO gets involved.
I have no clue how to stop the attack, and I’m not even sure that it’s actually a DoS attack. What I do know is that I’ve never seen anything like this in my entire career. My only knowledge on the subject comes from some reading I did on the Internet after attending last month’s security seminar.
Looking at the IP trace, it seems like the malicious connections are coming from various sources. Each IP is repeatedly sending HTTP GET requests for various online banking pages. This action is hogging all of Mainapp’s resources, making the online banking pages slow for legitimate users.
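The triage described here — tallying GET requests per source IP to surface the heaviest talkers — can be sketched in a few lines. The log layout and function name below are assumptions for illustration, not the bank’s actual tooling:

```python
from collections import Counter

def top_talkers(log_lines, n=3):
    """Tally GET requests per source IP and return the n busiest sources.

    Assumes common-log-format-style lines that begin with the client IP
    (a hypothetical layout, not the bank's actual log format).
    """
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        # the method appears as the opening of the quoted request string
        if parts and '"GET' in parts:
            counts[parts[0]] += 1
    return counts.most_common(n)
```

Against an ordinary flash crowd, a ranking like this quickly exposes a handful of abusive sources; against a distributed attack, as the rest of the story shows, the list just keeps growing.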
With some idea of what is going on, I decide on a short-term plan of action and call an emergency team meeting.
The situation has not gotten any better. The pace of the attack has been constant, but now Mainapp hardly responds to any kind of request. The customer support manager at my office is upset because his entire staff is overwhelmed by support calls. Customers are unhappy and angry, but what can he instruct his staff to say? I tell him that I think we are under attack by one or more hackers, that we shouldn’t expect to regain normal service any time soon, and that we may release a formal statement in the near future regarding our downtime.
The situation is now catastrophic. Word has spread and the entire staff is in a state of panic. The emergency meeting I called convenes. It consists of the CIO, CTO, network administrators, security manager, application manager and system administrators. Tension aside, we all understand the importance of issuing an official message to the customers and decide on a plan of action to deal with the attack.
I show everyone the logs and after a few minutes the security manager notices that some of the malicious requests are coming from Russia. Quickly, I define a rule on the Mainapp web server to reject all requests originating from Russia thinking it may slow down the attack. Unfortunately, it doesn’t help. After activating my new filter, I see no decrease in the amount of malicious traffic. Following a brief period with no new connections, additional connections begin to originate from a dozen different countries, including ours!
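A geographic block like the one I defined boils down to checking each source address against a list of network prefixes attributed to a country. The sketch below uses Python’s standard `ipaddress` module with documentation-range placeholder prefixes — a real geo-IP feed maps a country to many such blocks, and the prefixes here are not real Russian allocations:

```python
import ipaddress

# Placeholder "blocked country" prefixes (documentation ranges, not a
# real geo-IP feed -- an actual filter would load many prefixes from one).
BLOCKED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(src_ip):
    """Reject a request if its source address falls in a blocked prefix."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in BLOCKED_PREFIXES)
```

As the narrative shows, a static geographic filter like this is easy for a botnet to sidestep: the attacker simply draws on bots in other countries, including your own.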
The server is still under heavy load. Blocking IPs based on geographic region did not help so we have to look for another solution. Since we were not prepared to handle such an attack, it has become necessary to gain a better understanding of how to prevent and mitigate a DoS attack.
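One common mitigation building block — not something the team had in place, and on its own not a complete answer to a distributed attack — is per-source rate limiting. A token bucket is the classic shape: each source may burst up to a capacity, then is throttled to a steady refill rate. A generic sketch, with a pluggable clock for testability:

```python
import time

class TokenBucket:
    """Per-source rate limiter: allow up to `rate` requests per second,
    with bursts up to `capacity` (a generic sketch, not the bank's setup)."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.now = now
        self.last = now()

    def allow(self):
        """Spend one token if available; refuse the request otherwise."""
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice a mitigation device keeps one bucket per source IP, which caps any single abusive source but still struggles when thousands of distributed sources each stay under the limit — exactly the problem the team is about to run into.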
The Mainapp web-server is completely flooded and the online banking site is offline. Upon hearing this news, the CEO decides to get involved. She emphasizes how bad it is for the bank’s reputation to announce such an attack and wonders how much it will cost the bank in revenue loss as well as customer dissatisfaction. She’s worried that if the details of this attack leak to the press it could cause panic among the bank’s customers and reiterates that the attack must be mitigated quickly – by whatever means necessary.
It’s now clear to me that we’re facing a well-coordinated distributed denial-of-service (DDoS) attack that our current security tools cannot mitigate. Although DDoS is a rising security threat, we don’t have the right expertise within the organization to deal with something of this magnitude.
Does this scenario sound familiar? If so, share your DDoS experiences and let us know how your organizations are adjusting their security infrastructure to deal with availability-based attacks.
The above scenario was originally printed in our “DDoS Survival Handbook,” which can be downloaded for free from our online resource and provides a comprehensive analysis of denial-of-service (DoS) and distributed denial-of-service (DDoS) attack tools, trends, and threats.
Ronen manages the global marketing strategy for Radware’s security products. His responsibilities include the planning, positioning, and go-to-market strategy for all security product activities worldwide. An industry expert, Ronen has more than 14 years’ experience in managing R&D and marketing products in the networking infrastructure, security, and application delivery sectors. Ronen writes about security threats and solutions, application delivery, and cloud computing.