

What Can We Learn About Cybersecurity from the Challenger Disaster? Everything.

December 5, 2018 — by Radware


Understanding the potential threats that your organization faces is an essential part of risk management in modern times. It involves forecasting and evaluating all the factors that impact risk. Processes, procedures and investments can all increase, minimize or even eliminate risk.

Another factor is the human element. Often, a culture exists within an organization in which reams of historical data tell one story, but management believes something entirely different. This “cognitive dissonance” can lead to an overreliance on near-term data and experiences and a discounting of long-term statistical analysis.

Perhaps no better example of this exists than the space shuttle Challenger disaster in 1986, which now serves as a case study in improperly managing risk. In January of that year, the Challenger disintegrated 73 seconds after launch due to the failure of a gasket (called an O-ring) in one of the rocket boosters. While the physical cause of the disaster was the failure of the O-ring, the Rogers Commission that investigated the accident found that NASA had failed to correctly identify “flaws in management procedures and technical design that, if corrected, might have prevented the Challenger tragedy.”

Despite strong evidence dating back to 1977 that the O-ring was a flawed design that could fail under certain conditions/temperatures, neither NASA management nor the rocket manufacturer, Morton Thiokol, responded adequately to the danger posed by the deficient joint design. Rather than redesigning the joint, they came to define the problem as an “acceptable flight risk.” Over the course of the 24 preceding successful space shuttle flights, a “safety culture” was established within NASA management that downplayed the technical risks of flying the space shuttle, despite mountains of data and warnings about the O-ring provided by research and development (R&D) engineers.

As American physicist Richard Feynman said regarding the disaster, “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

Truer words were never spoken, and they apply directly to cybersecurity. C-suite executives need to stop evaluating and implementing cybersecurity strategies and solutions that merely meet minimal compliance and establish a culture of “acceptable risk,” and start managing to real-world risks — risks that are supported by hard data.

Risk Management and Cybersecurity

The threat of a cyberattack on your organization is no longer a question of if, but when, and C-suite executives know it. According to C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts, 96% of executives were concerned about network vulnerabilities and security risks resulting from hybrid computing environments. Managing risk requires organizations to plan for and swiftly respond to risks and potential risks as they arise. Cybersecurity is no exception. For any organization, risks can be classified into four basic categories:

  • Strategic risk
  • Reputation risk
  • Product risk
  • Governance risk

The Challenger disaster underscores all four of these risk categories. Take strategic risk as an example. Engineers from Morton Thiokol expressed concerns and presented data regarding the performance of the O-rings, both in the years prior and days leading up to the launch, and stated the launch should be delayed. NASA, under pressure to launch the already delayed mission and emboldened by the 24 preceding successful shuttle flights that led them to discount the reality of failure, pressured Morton Thiokol to supply a different recommendation. Morton Thiokol management decided to place organizational goals ahead of safety concerns that were supported by hard data. The recommendation for the launch was given, resulting in one of the most catastrophic incidents in manned space exploration. Both Morton Thiokol and NASA made strategic decisions that placed the advancements of their respective organizations over the risks that were presented.

[You may also like: The Million-Dollar Question of Cyber-Risk: Invest Now or Pay Later?]

This example of strategic risk serves as a perfect analogy for organizations implementing cybersecurity strategies and solutions. There are countless examples of high-profile cyberattacks and data breaches in which upper management was warned in advance of network vulnerabilities, yet no action was taken to prevent an impending disaster. The infamous 2018 Panera Bread data breach is one such example. Facebook is another: its platform operations manager from 2011 to 2012 warned management at the social tech giant to implement audits or enforce other mechanisms to ensure that user data extracted from the social network was not misused by third-party developers and/or systems. These warnings were apparently ignored.

So why does this continually occur? The implementation of DDoS and WAF mitigation solutions often involves three key components within an organization: management, the security team/SOC and compliance. Despite reams of hard data from the security team showing that the organization is currently vulnerable or unprepared for the newest generation of attack vectors, management will often overemphasize near-term security results and experiences, feeling secure in the fact that the organization has never yet been the victim of a successful cyberattack. The aforementioned Facebook story is a perfect example: the company allowed history to override hard data presented by a platform manager regarding new security risks.

Underscoring this “cognitive dissonance” is the compliance team, which often seeks to evaluate DDoS mitigation solutions based solely on checkbox functionality that fulfills minimal compliance standards. This strategy also drives a cost-savings approach that yields short-term financial savings within an organization that oftentimes views cybersecurity as an afterthought vis-à-vis other strategic programs, such as mobility, IoT and cloud computing.

The end result? Organizations aren’t managing real-world risks, but rather are managing “yesterday’s” risks, thereby leaving themselves vulnerable to new attack vectors, IoT botnet vulnerabilities, cybercriminals and other threats that didn’t exist weeks or even days ago.

The True Cost of a Cyberattack

To understand just how detrimental this can be to the long-term success of an organization requires grasping the true cost of a cyberattack. Sadly, these data points are often as poorly understood, or dismissed, as the aforementioned statistics regarding vulnerability. The cost of a cyberattack can be mapped by the four risk categories:

  • Strategic Risk: Cyberattacks, on average, cost more than one million USD/EUR, according to 40% of executives. Five percent estimated this cost to be more than 25 million USD/EUR.
  • Reputation Risk: Customer attrition rates can increase by as much as 30% following a cyberattack. Moreover, organizations that lose over four percent of their customers following a data breach suffer an average total cost of $5.1 million. In addition, 41% of executives reported that customers have taken legal action against their companies following a data breach. The Yahoo and Equifax data breach lawsuits are two high-profile examples.
  • Product Risk: The IP Commission estimated that counterfeit goods, pirated software and stolen trade secrets cost the U.S. economy $600 billion annually.
  • Governance Risk: “Hidden” costs associated with a data breach include increased insurance premiums, lower credit ratings and devaluation of trade names. Equifax was devalued by $4 billion by Wall Street following the announcement of its data breach.

[You may also like: Understanding the Real Cost of a Cyber-Attack and Building a Cyber-Resilient Business]

Secure the Customer Experience, Manage Risk

Only by identifying the new risks an organization faces each and every day, and having a plan in place to minimize them, can executives build a foundation upon which their company will succeed. In the case of the space shuttle program, mounds of data that clearly demonstrated an unacceptable flight risk were pushed aside by the need to meet operational goals. What lessons can be learned from that fateful day in January of 1986 and applied to cybersecurity? To start, the disaster highlights the five key steps of managing risks.

In the case of cybersecurity, this means that the executive leadership must weigh the opinions of its network security team, compliance team and upper management and use data to identify vulnerabilities and the requirements to successfully mitigate them. In the digital age, cybersecurity must be viewed as an ongoing strategic initiative and cannot be delegated solely to compliance. Leadership must fully weigh the potential cost of a cyberattack/data breach on the organization versus the resources required to implement the right security strategies and solutions. Lastly, when properly understood, risk can actually be turned into a competitive advantage. In the case of cybersecurity, it can be used as a competitive differentiator with consumers that demand fast network performance, responsive applications and a secure customer experience. This enables companies to target and retain customers by supplying a forward-looking security solution that seamlessly protects users today and into the future.

So how are executives expected to accomplish this while facing new security threats, tight budgets, a shortfall in cybersecurity professionals and the need to safeguard increasingly diversified infrastructures? The key is creating a secure climate for the business and its customers.

To create this climate, research shows that executives must be willing to accept new technologies, be open-minded to new ideologies and embrace change, according to C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts. Executives committed to staying on top of this ever-evolving threat must break down the silos that exist in the organization to assess the dimensions of the risks across the enterprise and address these exposures holistically. Next is balancing the aforementioned investment-versus-risk equation. All executives will face tough choices when deciding where to invest resources to propel their companies forward. C-suite executives must leverage the aforementioned data points and carefully weigh the risks associated with security vulnerabilities against the costs of implementing effective security solutions to avoid becoming the next high-profile data breach.

According to the same report, four in 10 respondents identified increasing infrastructure complexity, digital transformation plans, integration of artificial intelligence and migration to the cloud as events that put pressure on security planning and budget allocation.

The stakes are high. Security threats can seriously impact a company’s brand reputation, resulting in customer loss, reduced operational productivity and lawsuits. C-suite executives must heed the lessons of the space shuttle Challenger disaster: Stop evaluating and implementing cybersecurity strategies and solutions that meet minimal compliance and start managing to real-world risks by trusting data, pushing aside near-term experiences/“gut instincts” and understanding the true cost of a cyberattack. Those executives who are willing to embrace technology and change and prioritize cybersecurity will be the ones to win the trust and loyalty of the 21st-century consumer.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.



Disaster Recovery: Data Center or Host Infrastructure Reroute

October 11, 2018 — by Daniel Lakier


Companies, even large ones, haven’t considered disaster recovery plans outside of their primary cloud provider’s own infrastructure as regularly as they should. In March of this year, Amazon Web Services (AWS) had a massive failure that directly impacted some of the world’s largest brands, taking them offline for several hours. In this case it was not a malicious attack, but the end result was the same — an outage.

When each organization’s leadership questioned its IT department on how this outage could happen, most received an answer that was somehow deemed acceptable: It was AWS. Amazon failed, not us. However, that answer should not be acceptable.

AWS may imply that it is invulnerable, but the people running IT departments are there for a reason. They are meant to be skeptics, and it is their job to build redundancies that protect the system against any single point of failure. Some of those companies use AWS disaster recovery services, but if the data center, and all the technology required to activate those fail-safes, crashes, then you’re down. This is why we need to treat the problem with the same logic that we use for any other system. Today it is easier than ever to create a resilient, DoS-resistant architecture that takes into account not only traditional malicious activity but also critical business failures. The solution isn’t purely technical, either; it needs to be based upon sound business principles using readily available technology.

[You might also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

In the past, enterprise disaster recovery architecture revolved around having a fully operational secondary location. If we wanted true resiliency, that was the only option. Today, although that can still be one of the foundational pillars of your approach, it doesn’t have to be the only answer. You need to be more circumspect about what your requirements are and choose the right solution for each environment/problem. For example:

  • A) You can still build it either in your own data center or in a cloud (match the performance requirements to a business value equation).
  • B) Several ‘Backup-as-a-Service’ providers offer more than just storage in the cloud. They offer resources for rent (servers to run your corporate environments in case of an outage). If your business can sustain an environment going down just long enough to turn it back on (several hours), this can be a very cost-effective solution.
  • C) For non-critical items, rely on the cloud provider you currently use to provide near-time failure protection.

The Bottom Line

Regardless of which approach you take, even if everything works flawlessly, you still need to address the ‘brownout’ phenomenon: the time it takes for services to be restored at the primary or at a secondary location. It is even more important to automatically send people to a different location if performance is impaired. Many people have heard of global server load balancing (GSLB), and while many use it today, it is not part of their comprehensive DoS approach. But it should be. If the goal of your DDoS mitigation solution is to ensure uninterrupted service in addition to meeting your approved performance SLA, then dynamic GSLB, or infrastructure-based performance load balancing, has to be an integral part of any design.
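The core idea behind dynamic GSLB is simple: probe each site’s health and latency, then steer users toward the best-performing site and away from any that are down. The Python sketch below illustrates that decision logic only; the site names and health-check URLs are hypothetical placeholders, not part of any specific product.

```python
import time
import urllib.request

# Hypothetical health-check endpoints -- substitute your own site URLs.
SITES = {
    "us-east": "https://us-east.example.com/health",
    "eu-west": "https://eu-west.example.com/health",
}

def probe(url, timeout=2.0):
    """Return response time in seconds, or None if the site is down or too slow."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return None
        return time.monotonic() - start
    except OSError:
        return None

def pick_site(timings):
    """Given {site: latency or None}, pick the healthy site with the lowest latency."""
    healthy = {name: t for name, t in timings.items() if t is not None}
    if not healthy:
        raise RuntimeError("no healthy site -- fail over to disaster recovery")
    return min(healthy, key=healthy.get)

# Route the user to the best-performing site. A 'brownout' site (up but slow)
# simply loses on latency rather than having to fail outright:
# best = pick_site({name: probe(url) for name, url in SITES.items()})
```

Note the design choice: a degraded site is never special-cased; it just stops winning the latency comparison, which handles brownouts and hard failures with the same code path.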

We can deploy this technology purely defensively, as we have traditionally done with all DoS investments, or we can change the paradigm and deploy it to help us exceed expectations. This allows us to give each individual user the best experience possible. Radware’s dynamic performance-based route optimization solution (GSLB) allows us to offer a unique customer experience to each and every user, regardless of where they are coming from, how they access the environment or what they are trying to do. This same technology allows us to reroute users in the event of a DoS event that takes down an entire site, be it from malicious behavior, hardware failure or simple human error. This functionality can be procured as a product or a service, as it is environment/cloud agnostic and relatively simple to deploy. It is not labor intensive and may be the least expensive part of an enterprise DoS architecture.

What we can conclude is that any company that blames the cloud provider for a down site in the future should be asked the hard questions because solving this problem is easier today than ever before.

Read “Radware’s 2018 Web Application Security Report” to learn more.



Protecting Sensitive Data: A Black Swan Never Truly Sits Still

October 10, 2018 — by Mike O'Malley


The black swan – a rare and unpredictable event notorious for its ability to completely change the tides of a situation.

For cybersecurity, these nightmares can take the form of disabled critical services such as municipal electrical grids and other connected infrastructure networks, data breaches, application failures, and DDoS attacks. They can range from the level of Equifax’s 2018 breach penalty fines (estimated at close to $1.5 billion), to the bankruptcy of Code Spaces following its DDoS attack and breach (one of the 61% of SMBs that faced bankruptcy, per service provider Verizon’s investigations), to a government-wide shutdown of web access on public servants’ computers in response to a string of cyberattacks.

Litigation and regulation can only do so much to reduce the impact of black swans, but it is up to companies to prepare and defend themselves from cyberattacks that can lead to rippling effects across industries.

[You might also like: What a Breach Means to Your Business]

If It’s So Rare, Why Should My Company Care?

Companies should concern themselves with black swans to understand the depth of the potential long-term financial and reputational damage. Radware’s research on C-suite perspectives regarding the relationship between cybersecurity and customer experience shows that these executives prioritize customer loss (41%), brand reputation (34%) and productivity/operational loss (34%). Yet a majority of these same executives have not yet integrated security practices into their company’s infrastructure, such as their application DevOps teams.

The long-term damage to a company’s finances alone is noteworthy. IT provider CGI found that technology and financial companies can lose 5-8.5% of enterprise value from a breach. What often goes unreported, however, is the increased customer onboarding cost required to combat large-scale customer churn following breaches.

For the financial sector, global accounting firm KPMG found that consumers not only expect institutions to act quickly and take responsibility, but 48% are willing to switch banks due to lack of responsibility and preparation for future attacks, and untimely notification of the breaches. News publication The Financial Brand found that banking customers have an average churn rate of 20-40% in 12 months, while a potential onboarding cost per customer can be within the $300-$20,000 range. Network hardware manufacturer Cisco estimates as high as 20% of customers and opportunities could be lost.
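Combining those figures gives a sense of scale. The sketch below is a back-of-envelope calculation only: the customer count is an illustrative assumption, and the churn rate and onboarding cost are simply the low ends of the ranges cited above.

```python
def breach_churn_cost(customers, churn_rate, onboarding_cost):
    """Cost of replacing customers lost to post-breach churn."""
    lost_customers = customers * churn_rate
    return lost_customers * onboarding_cost

# A hypothetical bank with 100,000 customers, at the low-end 20% churn rate
# and the low-end $300 onboarding cost, already faces a $6M replacement bill:
# breach_churn_cost(100_000, 0.20, 300) -> 6,000,000
```

At the high ends of both ranges the same arithmetic runs into the hundreds of millions, which is why onboarding cost is the hidden multiplier behind churn statistics.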

Just imagine the customer churn rate for a recently-attacked company.

How does that affect me personally as a business leader within my company?

When data breaches occur, the first person who typically takes the blame is the CISO or CSO. A common misconception, however, is that everyone else will be spared accountability. The damage is not limited to security leadership: given the wide array of impacts that result from a cyberattack, nearly all C-level executives are at risk; examples include Equifax CEO Richard Smith, Target CEO Gregg Steinhafel and Target CIO Beth Jacob. The result is a sudden vacuum at the C-suite level, a lack of leadership and direction that creates its own internal instability.

Today’s business leaders need to understand that the fallout from a data breach is no longer limited to the company’s reputation; it extends to the welfare of its customers. The mere occurrence of a data breach can shatter the trust between the two. CEOs are now expected to be involved in managing the black swan’s consequences; in these times of hardship, they are particularly expected to continue being the voice of the company and to provide direction and assurance to vulnerable customers.

A business leader can be ousted from the company for not having taken cybersecurity seriously enough and/or not understanding the true costs of a cyberattack – that is, if the company hasn’t filed for bankruptcy yet.

Isn’t this something that my company’s Public Relations department should be handling?

One of the biggest contributors to the chaotic aftermath of a black swan is poor or absent communication from the public relations team. By not disclosing a data breach in a timely manner, companies incur the wrath of consumers and suffer an even bigger loss in customer loyalty because of the delay. A timely announcement is expected as soon as the company discovers the incident or, under the GDPR, within 72 hours of discovery.

A company and its CEO should not depend solely on the public relations department to handle a black swan nightmare. Equifax revealed its data breach six weeks after the incident and still had not directly contacted those affected, instead creating a website for customer inquiries. Equifax continues to suffer from customer distrust because of the lack of guidance from the company’s leadership during those critical days in 2017. At a time of confusion and mayhem, a company’s leader must remain forthcoming, reassuring and credible through the black swan’s tide-changing effects.

Following a cybersecurity black swan, the vast majority of consumers must also be convinced that all the security issues have been addressed and rectified, and that the company has a plan in place for any future incidents. Those that fail to do so risk losing at least 1 in 10 customers, which shows how far a black swan’s impact can reach within a company, beyond the purely financial.

How Do You Prepare for When the Black Swan Strikes?

When it comes to the black swan, the strategic method isn’t limited to being proactive or reactive; it must be preemptive, according to news publication ComputerWeekly. The black swan is feared primarily for its unpredictability. The key advantage of being preemptive is the level of detail that goes into planning; instead of reacting in real time during the chaos or relying on a universal, one-size-fits-all strategy, companies should do their best to develop multiple procedures for multiple worst-case scenarios.

Companies cannot afford to be sitting ducks waiting for the black swan to strike; they must have mitigation plans prepared for when it does. The ability to mitigate extreme cyber threats and emerging cyberattack tactics depends on the level of cybersecurity preparation a company possesses. By implementing a strong cybersecurity architecture (internal or third-party), companies can adapt and evolve with the constantly changing security threat landscape, thereby minimizing the opportunities for hackers to take advantage.

In addition to having a well-built security system, precautions should be taken to further strengthen it, including WAF protection, SSL inspection, DDoS protection, bot protection and more. Traditional risk management is flawed because it emphasizes internal risks only; companies must do more to account for the possibility of industry-wide black swans, such as the Target data breach in 2013 that later extended to Home Depot and other retailers.

It’s Time To Protect Sensitive Data

In the end, the potential impact of a black swan on a company comes down to its business owners. Cybersecurity is no longer limited to the CISO or CSO; it is now a CEO-level decision. As the symbol and leader of a company, CEOs need to ask themselves whether they know how their security model works. Is it easily penetrated? Can it defend against massive cyberattacks? What IP and customer data am I protecting? What would happen to the business if that data were breached?

Does it protect sensitive data?

Read “Radware’s 2018 Web Application Security Report” to learn more.



The Evolution of IoT Attacks

August 30, 2018 — by Daniel Smith


What is the Internet of Things (IoT)? IoT is the ever-growing network of physical devices with embedded technologies that connect and exchange data over the internet. If the cloud is considered someone else’s computer, IoT devices can be considered the things you connect to the internet beyond a server or a PC/Laptop. These are items such as cameras, doorbells, light bulbs, routers, DVRs, wearables, wireless sensors, automated devices and just about anything else.

IoT devices are nothing new, but the attacks against them are. They are evolving at a rapid rate as growth in connected devices continues to rise and shows no sign of letting up. One of the reasons IoT devices have become so popular in recent years is the evolution of cloud and data processing, which provides manufacturers cheaper solutions to create even more ‘things.’ Before this evolution, there weren’t many options for manufacturers to cost-effectively store and process data from devices in a cloud or data center; older IoT devices would have to store and process data locally in some situations. Today, there are solutions for everyone, and we continue to see more items that are always on and do not have to store or process data locally.

[You might also like: The 7 Craziest IoT Device Hacks]

Cloud and Data Processing: Good or Bad?

This evolution in cloud and data processing has led to an expansion of IoT devices, but is this a good or a bad thing? Those who profit from this expansion would say it is positive because of the increase in computing devices that can assist, benefit or improve the user’s quality of life. But those in security would be quick to say that this rapid rise in connected devices has also expanded the attack landscape, as there is a lack of oversight and regulation of these devices. As users become more dependent on IoT devices for daily activities, the risk also rises. Not only are they relying more on certain devices, but they are also creating a much larger digital footprint that could expose personal or sensitive data.

In addition to the evolution of IoT devices, there has been an evolution in the way attackers think and operate. The evolution of network capabilities and large-scale data tools in the cloud has helped foster the expansion of the IoT revolution. The growth of cloud and always-on availability to process IoT data has been widely adopted among manufacturing facilities, power plants, energy companies, smart buildings and other automated technologies, such as those found in the automotive industry. But this has also increased the attack surface for those that have adopted and implemented an army of possibly vulnerable or already exploitable devices. Attackers are beginning to notice the growing field of vulnerabilities that contain valuable data.

In a way, the evolution of IoT attacks continues to catch many off guard, particularly the explosive campaigns of IoT-based attacks. For years, experts have warned about the pending problems of a connected future, with IoT botnets as a key indicator, but very little was done to prepare. Now, organizations are rushing to distinguish good traffic from malicious traffic and are having trouble blocking these attacks since they come from legitimate sources.

As attackers evolve, organizations are still playing catch-up. Soon after the world’s largest DDoS attack, and following the publication of the Mirai source code, a large battle began among criminal hackers for devices to infect. The more bots in your botnet, the larger the attack can be. From the construction of a botnet to the actual launch of an attack, there are several warning signs of an attack or pending attack.

As the industry began monitoring and tracking IoT based botnets and threats, several non-DDoS based botnets began appearing. Criminals and operators suddenly shifted focus and began infecting IoT devices to mine for cryptocurrencies or to steal user data. Compared to ransomware and large-scale DoS campaigns that stem from thousands of infected devices, these are silent attacks.

Uncharted Territory

In addition to these evolving problems, modern research lacks standardization, which makes analyzing, detecting and reporting complicated. The industry is new, and the landscape keeps evolving at a rapid rate, causing fatigue in some situations. For instance, researchers are sometimes siloed, and research is kept for internal use only, which can be problematic for the researcher who wants to warn of a vulnerability or advise on how to stop an attack. Reporting is also scattered between tweets, white papers and conference presentations. To reiterate how young this specialty is, my favorite and one of the most respected conferences dedicated to botnets, Botconf, has met only six times.

End of life (EOL) is also going to become a problem when devices are still functional but no longer supported or updated. Today there are a large number of connected systems found in homes, cities and medical devices that at some point will no longer be supported by their manufacturers yet will still be functional. As these devices linger unprotected on the internet, they will provide criminal hackers a point of entry into unsecured networks. Once these devices pass EOL and are found online by criminals, they could become very dangerous for users, depending on their function.

In a more recent case, Radware’s Threat Research Center identified criminals targeting D-Link DSL routers in Brazil back in June. These criminals were found to be using outdated exploits from 2015, which they were still able to leverage against vulnerable and unpatched routers years later. The malicious actors attempted to modify the DNS server settings in the routers of Brazilian residents, redirecting their DNS requests through a malicious DNS server operated by the hackers. This effectively allowed the criminals to conduct a man-in-the-middle attack, redirecting users to phishing domains for local banks so they could harvest credentials from unsuspecting users.
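One cheap defense-in-depth check against this kind of DNS hijacking is to compare what the local resolver returns for a sensitive domain against a known-good allow-list of addresses. The Python sketch below illustrates the principle only; the domain and IP addresses are hypothetical placeholders, and a real deployment would have to account for legitimate address changes such as CDN rotation.

```python
import socket

def looks_hijacked(resolved_ips, expected_ips):
    """True if none of the locally resolved addresses appears on the allow-list."""
    return not (set(resolved_ips) & set(expected_ips))

def resolve(domain, port=443):
    """Resolve a domain through whatever resolver the local network hands us."""
    try:
        return {info[4][0]
                for info in socket.getaddrinfo(domain, port,
                                               proto=socket.IPPROTO_TCP)}
    except socket.gaierror:
        return set()  # resolution failure is a different problem than hijacking

# Hypothetical allow-list for a sensitive domain (addresses are illustrative):
# looks_hijacked(resolve("bank.example.com"), {"203.0.113.10", "203.0.113.11"})
```

Keeping the comparison separate from the lookup makes the hijack test trivially unit-testable, since the network-dependent part is isolated in `resolve`.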

[You might also like: IoT Hackers Trick Brazilian Bank Customers into Providing Sensitive Information]

Attackers are not only utilizing old and unpatched vulnerabilities, but they are also exploiting recent disclosures. Back in May, vpnMentor published details about two critical vulnerabilities impacting millions of GPON gateways. The two vulnerabilities allowed the attackers to bypass authentication and execute code remotely on the targeted devices. The more notable event from this campaign was the speed at which malicious actors incorporated these vulnerabilities. Today, actors are actively exploiting vulnerabilities within 48 hours of the disclosure.

What Does the Future Hold?

The attack surface has grown to include systems using multiple technologies and communication protocols in embedded devices. This growth has also led attackers to target devices for a number of different reasons as the expansion continues. At first, hackers, mainly DDoS’ers, would target IoT devices such as routers over desktops, laptops and servers because they are always on. But as devices have become more connected and integrated into everyone’s lives, attackers have begun exploring their vulnerabilities for other malicious activity, such as click fraud and crypto mining. It’s only going to get worse as authors and operators continue to look toward the evolution of IoT devices and the connected future.

If anything is an indication of things to come, it is the shift from ransomware to cryptomining. IoT devices will be a primary target for the foreseeable future, and attackers will be looking for quieter ways to profit from their vulnerabilities. As an industry, we need to come together and pressure manufacturers to produce secure devices and to demonstrate how firmware and timely updates will be maintained. We also need to ensure users are aware not only of the threat IoT devices present today, but also of the impact these devices will have as they approach end of life. Acceptance, knowledge and readiness will help us keep the networks of tomorrow secured today.

Download “When the Bots Come Marching In, a Closer Look at Evolving Threats from Botnets, Web Scraping & IoT Zombies” to learn more.

Download Now


Security Risks: How ‘Similar-Solution’ Information Sharing Reduces Risk at the Network Perimeter

August 23, 2018 — by Thomas Gobet


We live in a connected world where we have access to several tools to assist in finding any information we need. If we choose to do something risky, there is often some type of notification that warns us of the risk.

The same holds true in IT departments. When a problem occurs, we search for answers that allow us to make decisions and take action. What problem created the outage? Do I need to increase the bandwidth or choose a CDN offering? Do I need to replace my devices or add a new instance to a cluster?

Connected Security

We all know that connected IT can help us make critical decisions. In the past, we have depended on standalone, best-of-breed security solutions that detect and mitigate locally but do not share data with other mitigation solutions across the network.

[You might also like: Web Application in a Digitally Connected World]

Even when information is shared, it’s typically between identical solutions deployed across various sites within a company. While this represents a good first step, there is still plenty of room for improvement. Let us consider the physical security solutions found at a bank as an analogy for cybersecurity solutions.

A robber enters a bank. The cameras don't flag him: he wears casual clothes and nothing identifying him as a criminal. The intruder goes to the first teller and demands money; the teller closes the window. The robber moves to a second window and demands money, and that teller closes the window too. The robber moves to the third window, and so on, until all available windows are closed.

Is this the most effective security strategy? Wouldn’t it make more sense if the bank had a unified solution that shared information and shut down all of the windows after the first attempt? What if this robber was a hacker who is trying to penetrate your system? Would you allow the hacker to try and break into more than one network silo after the first attempt?

Comprehensive Security Via An Enterprise-Grade Suite Solution

As we've seen in the example above, mitigation solutions that share attack information allow an organization to block a new "signature" as soon as one of them sees the request. But this only helps once the traffic reaches the solution. How could the bank better protect itself from the robber?

  • Should they actively verify everyone at the entrance?
    • No; it would be time-consuming for customers, who may decide not to come back.
  • Should they keep a list of allowed customers?
    • No; that would turn away new customers.
  • Should they signal the risk to the other desks and to entrance security?
    • Yes; that way all windows close simultaneously, and the security guards can stop the intruder as well as any future attempt to enter.
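The "signal the risk to every desk at once" option can be sketched as mitigation points that consult one shared blocklist instead of each discovering the attacker independently. The class and method names below are illustrative, not a real product API:

```python
class SharedBlocklist:
    """Attack information shared across every mitigation point."""
    def __init__(self):
        self._blocked = set()

    def report(self, client_id):
        # One detection point flags the client; all others see it instantly.
        self._blocked.add(client_id)

    def is_blocked(self, client_id):
        return client_id in self._blocked

class Teller:
    """Stands in for any site or mitigation point in the network."""
    def __init__(self, blocklist):
        self.blocklist = blocklist

    def handle(self, client_id, suspicious):
        if self.blocklist.is_blocked(client_id):
            return "rejected"                     # shut before any interaction
        if suspicious:
            self.blocklist.report(client_id)      # share the new "signature"
            return "rejected"
        return "served"

blocklist = SharedBlocklist()
tellers = [Teller(blocklist) for _ in range(3)]

# The robber gets exactly one attempt; every other window is already shut.
results = [t.handle("robber", suspicious=True) for t in tellers]
print(results)  # all three attempts rejected
```

Without the shared blocklist, each teller would have to detect the robber on its own, which is exactly the window-by-window scenario described above.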

Imagine these windows are your different sites, and the security guard at the entrance is the security solution at the perimeter of your network. Separating abnormal behavior from normal behavior requires analysis of network traffic, and the more advanced the analysis, the closer to the backend application the solution typically sits. That placement ensures it only inspects traffic already allowed through the earlier security barriers. Being close to the application means the analyzed traffic has already passed through routers, firewalls, switches, IPS, anti-virus, DLP and many other solutions (in classic architectures).

Organizations require a fully integrated WAF and DDoS mitigation appliance that can communicate effectively to allow WAF solutions (deployed close to the application) to warn anti-DDoS systems (deployed at the perimeter) that an attacker is trying to penetrate the perimeter.

In the blog "Accessing Application With A Driving License," Radware recommends blocking any request coming from clients with abnormal behavior. That mechanism applied only to the WAF; with this added communication, the protection goes one step further, blocking bad requests and/or bad clients before they reach your network.

[You might also like: Accessing Application With a Driving License]

With a fully integrated WAF and DDoS detection and mitigation solution whose components communicate with one another, these devices will save you time and processing power, and they will be more effective at blocking intrusions into your network.

Download “Web Application Security in a Digitally Connected World” to learn more.

Download Now


DNS: Strengthening the Weakest Link

August 2, 2018 — by Radware


One in three organizations hit by DDoS attacks experienced an attack against their DNS server. Why is DNS such an attractive target? What are the challenges associated with keeping it secure? Which attack vectors represent the worst of the worst when it comes to DNS assaults? Based on research from Radware's 2017-2018 Global Application & Network Security Report, this piece answers all those questions and many more.


Be Certain and Specific when Fighting DDoS Attacks

July 19, 2018 — by Ray Tamasovich


I was visiting a prospect last week, and at the very beginning of the meeting he asked directly, "Why would I consider your products and services over the many others that claim to do the exact same thing?" I immediately said, "That's easy! Certainty and specificity." He looked at me, expecting more than a five-word answer. When I did not provide one, he asked me to please explain. I told him that any number of products or services on the market are capable of keeping your circuits from being overrun by a volumetric DDoS attack. But if he wanted to be certain he was not blocking legitimate business users or customers, and if he wanted to be specific about the traffic being scrubbed, he would need to consider my solution.


Building Tier 1 IP Transit – What’s Involved and Why Do It?

July 11, 2018 — by Richard Cohen


Not all internet connectivity is created equal. Many Tier 2 and Tier 3 ISPs, cloud service providers and data integrators consume IP Transit sourced from Tier 1 wholesale ISPs (those ISPs that build and operate their own fabric from Layer 1 services up). In doing so, their ability to offer customers internet services customised to particular requirements is limited by the choices available to them, and many aspects of the services they consume may not be optimal.


It only takes 6,000 smart phones to take down our Public Emergency Response System?

June 28, 2018 — by Carl Herberger


Few scenarios better illustrate an evildoer's heart than those designed for mass carnage.

We are all familiar with the false alarm (a human mistake) of the Public Emergency Broadcast system in Hawaii earlier this year, which wreaked havoc throughout the archipelago. But do we realize how fragile our nation's emergency communications are, and how vulnerable they are to cyber-attacks?


Battling Cyber Risks with Intelligent Automation

June 26, 2018 — by Louis Scialabba


Organizations are losing the cybersecurity race.

Cyber threats are evolving faster than security teams can adapt. The proliferation of data from dozens of security products is outpacing security teams' ability to process it, and budget and talent shortfalls limit how rapidly those teams can expand.

The question is: how does a network security team improve its ability to scale and minimize data breaches, all while dealing with increasingly complex attack vectors?

The answer is automation.