
DDoS, Security

Disaster Recovery: Data Center or Host Infrastructure Reroute

October 11, 2018 — by Daniel Lakier


Companies, even large ones, haven’t considered disaster recovery plans outside of their primary cloud provider’s own infrastructure as often as they should. In March of this year, Amazon Web Services (AWS) had a massive failure that directly impacted some of the world’s largest brands, taking them offline for several hours. In this case it was not a malicious attack, but the end result was the same: an outage.

When organizational leadership questioned their IT departments about how this outage could happen, most received an answer that was treated as somehow acceptable: it was AWS; Amazon failed, not us. That answer should not be acceptable.

AWS may imply that it is invulnerable, but the people running IT departments are there for a reason. They are meant to be skeptics, and it is their job to build redundancies that protect the system against any single point of failure. Some of those companies use AWS disaster recovery services, but if the data center and all of the technology required to activate those fail-safes crashes, then you’re down. This is why we need to treat the problem with the same logic we apply to any other system. Today it is easier than ever to create a resilient, DoS-resistant architecture that accounts not only for traditional malicious activity but also for critical business failures. The solution isn’t purely technical, either; it needs to be based on sound business principles using readily available technology.

[You might also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

In the past, enterprise disaster recovery architecture revolved around having a fully operational secondary location. If we wanted true resiliency, that was the only option. Today, although that can still be one of the foundation pillars of your approach, it doesn’t have to be the only answer. You need to be more circumspect about what your requirements are and choose the right solution for each environment and problem. For example:

  • A) You can still build a fully operational secondary site, either in your own data center or in a cloud (match the performance requirements to a business-value equation).
  • B) Several ‘Backup-as-a-Service’ providers offer more than just storage in the cloud; they also rent out compute resources (servers to run your corporate environments in case of an outage). If your business can sustain an environment being down just long enough to turn it back on (several hours), this can be a very cost-effective solution (a minimal sketch follows this list).
  • C) For non-critical items, rely on the cloud provider you currently use to provide near-time failure protection.
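
To make option B concrete, here is a minimal Python sketch of the “tolerate a short outage, then activate rented standby capacity” pattern. The health-check URL, the outage tolerance, and the activate_standby() hook are illustrative assumptions, not any specific provider’s API.

```python
# Hypothetical sketch of option B: tolerate a short outage, then activate
# rented standby capacity. The endpoint URL and activate_standby() are
# illustrative placeholders, not a vendor API.
import time
import urllib.request

PRIMARY_HEALTH_URL = "https://primary.example.com/health"  # assumed endpoint
MAX_OUTAGE_SECONDS = 3 * 60 * 60   # business decision: a few hours is acceptable
CHECK_INTERVAL = 60                # probe once per minute


def primary_is_up() -> bool:
    """Return True if the primary environment answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def activate_standby() -> None:
    """Placeholder: start the rented servers at the backup provider."""
    print("Outage exceeded tolerance; activating standby environment")


def watch_primary() -> None:
    outage_started = None
    while True:
        if primary_is_up():
            outage_started = None          # primary recovered on its own
        else:
            outage_started = outage_started or time.time()
            if time.time() - outage_started > MAX_OUTAGE_SECONDS:
                activate_standby()
                return
        time.sleep(CHECK_INTERVAL)
```

The design choice here is purely a business one: the only tunable that matters is how long you are willing to stay down before paying to light up the standby environment.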

The Bottom Line

Regardless of which approach you take, even if everything works flawlessly, you still need to address the ‘brownout’ phenomenon: the time it takes for services to be restored at the primary site or failed over to a secondary location. It is even more important to automatically send people to a different location if performance is impaired. Many people have heard of global server load balancing (GSLB), and while many use it today, it is not part of their comprehensive DoS approach. But it should be. If the goal of your DDoS mitigation solution is to ensure uninterrupted service in addition to meeting your approved performance SLA, then dynamic GSLB, or infrastructure-based performance load balancing, has to be an integral part of any design.
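
As a rough illustration of what “dynamic GSLB” means in practice, the sketch below answers each lookup with whichever healthy site currently responds fastest. The site names, health-check URLs and probing logic are assumptions for illustration; this is not Radware’s implementation.

```python
# A minimal sketch of performance-based GSLB: return the healthy site with
# the lowest measured round-trip time. Sites and URLs are made up.
import time
import urllib.request
from typing import Optional

SITES = {
    "us-east": "https://us-east.example.com/health",
    "eu-west": "https://eu-west.example.com/health",
}


def response_time(url: str) -> float:
    """Measure one health-check round trip; return +inf if the site is down."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            if resp.status != 200:
                return float("inf")
    except OSError:
        return float("inf")
    return time.monotonic() - start


def best_site() -> Optional[str]:
    """Pick the site to hand back in the DNS answer, or None if all are down."""
    timings = {name: response_time(url) for name, url in SITES.items()}
    name, rtt = min(timings.items(), key=lambda item: item[1])
    return name if rtt != float("inf") else None
```

A real GSLB service would measure from the client’s vantage point (or from distributed probes) rather than from a single script, but the decision logic is the same: steer each user to the location that is performing best right now.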

We can deploy this technology purely defensively, as we have traditionally done with all DoS investments, or we can change the paradigm and deploy it to help us exceed expectations. This allows us to give each individual user the best experience possible. Radware’s dynamic performance-based route optimization solution (GSLB) allows us to offer a unique customer experience to each and every user, regardless of where they are coming from, how they access the environment or what they are trying to do. This same technology allows us to reroute users when a DoS event takes down an entire site, whether from malicious behavior, hardware failure or simple human error. This functionality can be procured as a product or a service, as it is environment- and cloud-agnostic and relatively simple to deploy. It is not labor intensive and may be the least expensive part of an enterprise DoS architecture.
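
The failover half of the same idea can be sketched as a simple site pool: a site is pulled after a few consecutive failed probes (an attack, a hardware failure and a fat-fingered change all look the same to the probe) and re-admitted once it answers reliably again. The thresholds and the pool structure below are assumptions, not product behavior.

```python
# Hedged sketch of automatic failover: track probe results per site and keep
# an "active" set that the GSLB answer is drawn from. Thresholds are made up.
FAIL_THRESHOLD = 3      # consecutive failures before a site is pulled
RECOVER_THRESHOLD = 5   # consecutive successes before it is re-admitted


class SitePool:
    def __init__(self, sites):
        self.active = set(sites)
        self.failures = {site: 0 for site in sites}
        self.successes = {site: 0 for site in sites}

    def record_probe(self, site: str, healthy: bool) -> None:
        """Update counters for one probe result and adjust the active set."""
        if healthy:
            self.failures[site] = 0
            self.successes[site] += 1
            if self.successes[site] >= RECOVER_THRESHOLD:
                self.active.add(site)
        else:
            self.successes[site] = 0
            self.failures[site] += 1
            if self.failures[site] >= FAIL_THRESHOLD:
                self.active.discard(site)
```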

The conclusion is that any company that blames its cloud provider for a down site in the future should be asked the hard questions, because solving this problem is easier today than ever before.

Read “Radware’s 2018 Web Application Security Report” to learn more.


Cloud Security, Security

Shadow IT – Security and DR concerns?

September 6, 2016 — by Prakash Sinha


According to Gartner, on average, 28 percent of IT spend today occurs outside the IT department. IT behind IT’s back, commonly called shadow IT, is primarily driven by easily available cloud services. Mobile growth and work-shifting practices enable shadow IT further, as employees want to work from anywhere. Shadow IT typically consists of services and applications that an organization’s IT department has had no role in selecting or vetting; IT may not even be aware that these services and applications are being used within the network.

Convenience and productivity are often the drivers for adopting shadow IT. Employees deploy solutions that are not approved by their IT departments, and many times the reasoning is that going through the traditional approval route is too complicated or time-consuming.

Application Delivery

Networks Can Fail – Here’s What You Can Do About It

February 17, 2016 — by Frank Yue


Recently, my computer failed and I had to get it fixed and my data restored. It took several days to identify the specific problem (in this case, catastrophic hard drive failure) and restore access to my applications and data. In the meantime, my productivity dropped dramatically, and it was hard to work using the alternative tools available to me.

Application Delivery

Application Availability is NOT Like Traveling in a Nor’Easter

January 29, 2016 — by Frank Yue


Recently, I was trapped in the New York area by a large snowstorm that we like to call a nor’easter. From midnight Thursday, January 21 until the evening of Sunday, January 24, a period of 68 hours, I had five or six flight cancellations, stayed in three different hotels, and ultimately went from LaGuardia to JFK to Newark to Philadelphia International Airport by car, taxi and minivan.

Attack Mitigation, DDoS Attacks, Security

6 Types of DDoS Protection for Your Business

July 14, 2014 — by David Monahan

David Monahan is Research Director for Enterprise Management Associates (EMA) and is a featured guest blogger.

DDoS attacks have become commonplace these days. The offending attackers may be hacktivists, cyber-criminals, nation-states or just about anyone else with an Internet grudge and a PayPal or Bitcoin account. The attacks themselves often require no technical skill: someone with a bone to pick can simply rent any number of nodes on one or more botnets for an hourly fee (long-term rate discounts available), use a graphical user interface (GUI) to organize the attack, and then launch it.

Application Delivery

The Art of Efficient ADC Administration – Develop Once, Apply Multiple Times

March 7, 2013 — by Yaron Azerual

It’s no secret that application delivery controller (ADC) services are often perceived as complex to master and administer. Even when they run the latest ADC devices, many deployments use only basic Layer 4 load balancing. It can be challenging to find an ADC champion who can really take advantage of the most advanced capabilities of an application delivery solution and maximize its business benefits.