
Attack Types & Vectors

Empowering the Infosec Community

September 19, 2019 — by Ben Zilberman


Despite the technological advancements, innovation, and experience the knights of the cyber order have acquired over the past 25 years or so, the “bad guys” are still a step ahead. Why? In large part, because of the power of community.

While information security vendors live in a competitive market and must protect their intellectual property, hackers communicate, share information and contribute to each other’s immediate success and long-term skill set.

The Infosec Community

In recent years, we’ve seen more partnerships and collaborations between infosec vendors. For example, the Cyber Threat Alliance (of which Radware is a member) enables cybersecurity practitioners to share credible cyber threat information. Each vendor collects and shares security incidents detected by their security solutions, honeypots and research teams worldwide in order to disrupt malicious actors and protect end-users.

Similarly, several vendors offer live threat maps, which, as the name suggests, help detect live attacks as they’re launched.

[You may also like: Executives Are Turning Infosec into a Competitive Advantage]

Radware’s Live Threat Map, which is open to the public, presents near real-time information on cyberattacks (from scanners to intruders to DDoS and web application hacks) as they occur, based on our global threat deception network (comprised of distributed honeypots that collect information about active threat actors) and event information from our cloud systems. These systems transmit a variety of anonymized and sampled network and application attacks to our Threat Research Center, and the data is shared with the community.

More specifically, our machine learning algorithms profile the attackers and their intent, the attack vector and the target, be it a network, a server, an IoT device or an application. Various validation mechanisms assure high fidelity and minimize false positives. This makes our map sturdy and essentially flawless, if I do say so myself.

Visibility Is Key

Detecting live attacks despite all evasion mechanisms is just the first step. The “good guys” must also translate these massive data lakes into guidance for those who wish to gain a better understanding of what, exactly, we’re monitoring and how they can improve their own security posture.

[You may also like: Here’s How You Can Better Mitigate a Cyberattack]

Visibility is key to achieving this. The fact is, the market is overwhelmed with security technologies that constantly generate alerts; but to fight attackers and fend off future cyber attacks, businesses need more than notifications. They need guidance and advanced analytics.

For example, the ability to dig into data related to their own protected objects, while enjoying a unified view of all application and network security events with near real-time alerts via customizable dashboards (like Radware provides) will go a long way towards improving security posture — not just for individual companies, but the infosec community as a whole.

Download Radware’s “Hackers Almanac” to learn more.

Download Now

Security

Past GDPR Predictions: Have They Come To Fruition?

September 17, 2019 — by David Hobbs


In July 2017, I wrote about GDPR and HITECH and asked if the past could predict the future. At the time, GDPR had not yet gone into effect. Now that it has been active for over a year, let’s take stock of what’s occurred.

First, a quick refresher: GDPR implements a two-tiered approach to categorizing violations and related fines. The most significant breaches can result in a fine of up to 4 percent of a company’s annual global revenue, or €20 million (whichever is greater).

These higher-tier violations include failing to obtain the necessary level of customer consent to process data, failing to permit data subjects to exercise their rights including as to data erasure and portability, and transferring personal data outside the EU without appropriate safeguards.

[You may also like: The Impact of GDPR One Year In]

For less serious violations, which include failing to maintain records of customer consent or failing to notify the relevant parties when a data breach has occurred, the maximum fine is limited to 2 percent of annual global revenue, or €10 million (whichever is greater).

Rising Complaints & Notifications

The Data Protection Commission’s (DPC) first-year snapshot from May 2019 shows that GDPR has driven a significant increase in contacts with the DPC over the past 12 months:

  • 6,624 complaints were received.
  • 5,818 valid data security breaches were notified.
  • Over 48,000 contacts were received through the DPC’s Information and Assessment Unit.
  • 54 investigations were opened.
  • 1,206 Data Protection Officer notifications were received.

[You may also like: WAF and DDoS Help You on the Road to GDPR Compliancy]

In my first article, I discussed Memorial Healthcare System’s breach and the resulting $5.5 million USD settlement. Now, let’s look at the first round of investigations under GDPR.

High-Profile Breaches: 2018-19 Edition

Marriott. In December 2018, news of Marriott’s massive breach hit. Upon receiving Marriott’s breach report in September 2018, the Information Commissioner’s Office (ICO) — the UK’s GDPR supervisory authority — launched an investigation.

When a data breach results in the exposure of EU citizens’ data, it must be reported to the ICO within 72 hours of discovery. The ICO investigates data breaches to determine whether GDPR rules were violated, as well as complaints about GDPR violations from consumers.

In July 2019, the ICO announced that it plans to fine the hotel chain $123 million USD. Marriott said it plans to appeal the decision.

[You may also like: Marriott: The Case for Cybersecurity Due Diligence During M&A]

Bergen, Norway. One file in the wrong place landed the municipality of Bergen in Norway in trouble. Computer files containing login credentials for 35,000 students and employees were insufficiently secured and subsequently accessed.

Per the European Data Protection Board, “the lack of security measures in the system made it possible for anyone to log in to the school’s various information systems, and thereby to access various categories of personal data relating to the pupils and employees of the schools.” As a result, the Norwegian Data Protection Authority fined the municipality of Bergen €170,000.

British Airways. This is the largest fine to date, with an overwhelming price tag of £183.4m, or $223.4M USD. After an extensive investigation, the ICO concluded that information was compromised by “poor security arrangements” at British Airways. This relates to security around login, payment card, and travel booking details, as well as name and address information.

Sergic. France’s data protection agency, CNIL, found that real estate company Sergic knew of a vulnerability in its website for many months and did not protect user data. This data contained identity cards, tax notices, account statements and other personal details. The fallout? A €400,000 fine (roughly $445,000 USD).

[You may also like: The Million-Dollar Question of Cyber-Risk: Invest Now or Pay Later?]

Haga Hospital. Turning to healthcare, Haga Hospital in the Netherlands was hit with a €460,000 fine ($510,000 USD) for breaching data confidentiality. The investigation followed indications that dozens of hospital staff had unnecessarily accessed the medical records of a well-known Dutch person.

In my previous article, I wrote, “other industries you may not think about, such as airlines, car rentals and hotels which allow booking from the internet may be impacted. Will the HITECH Act fines become the harbinger of much larger fines to come?”

That prediction was spot on. Some of the largest fines to date target airlines, hotels and the broader travel industry. I predict that over the next year, the various EU supervisory authorities will continue to ramp up fines, including cross-border and international ones.

CCPA is Almost Upon Us

Now, for the U.S.: California’s new Consumer Privacy Act (CCPA) goes into effect in January 2020. Will the state start rolling fines out like those imposed under GDPR?

If you’re an international company with any U.S.-based customers, it’s pretty likely that you’ll have Californians in your database. The CCPA focuses almost entirely on data collection and privacy, giving Californians the right to access their personal information, ask whether it’s being collected or sold, say no to it being collected or sold, and still receive the same service or price even if they do say no.

[You may also like: Why Cyber-Security Is Critical to The Loyalty of Your Most Valued Customers]

Come January 2020, you’ll either have to meticulously segment your database by state to create separate procedures for California residents (and EU ones, for that matter), or you’ll have to implement different data collection and privacy procedures for all your customers going forward.

With all of the new privacy rules coming, and GDPR fines already starting to hit, what will you do to comply with privacy laws around the world and keep your customers safe?

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Attack Types & Vectors

Defacements: The Digital Graffiti of the Internet

September 12, 2019 — by Radware


A defacement typically refers to a remote code execution attack or SQL injection that allows the hacker to manipulate the visual appearance of the website by breaking into a web server and replacing the current website content with the hacker’s own.

Defacements are considered digital graffiti and typically contain some type of political or rivalry statement from the hacker. Hacktivist groups often leverage defacements.

These groups are typically unskilled, using basic software to automate their attacks. When major websites are defaced, it is typically due to network operator negligence. Web application firewalls are the best way to prevent these attacks, but keeping content management systems and web services up to date is also effective.

If you think that you are the target of a defacement campaign, update and patch your system immediately and alert network administrators to look for malicious activity, as a hacker will typically add a page to your domain. You can also monitor for such attacks retroactively via social media.
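
To complement that retroactive monitoring, a simple proactive check can catch a defacement within minutes. Below is a minimal sketch in Python (using the requests library) that periodically hashes a page and alerts when its content changes; the URL, interval and alert mechanism are placeholders, and a dynamic page would need a normalized or static portion hashed instead.

import hashlib
import time

import requests

# Hypothetical page to watch; replace with your own URL.
WATCHED_URL = "https://www.example.com/"
CHECK_INTERVAL = 300  # seconds between checks

def page_fingerprint(url: str) -> str:
    """Fetch the page and return a SHA-256 hash of its body."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return hashlib.sha256(response.content).hexdigest()

def watch(url: str) -> None:
    baseline = page_fingerprint(url)
    while True:
        time.sleep(CHECK_INTERVAL)
        current = page_fingerprint(url)
        if current != baseline:
            # In practice, send an email, Slack or pager alert here.
            print(f"ALERT: content of {url} changed -- possible defacement")
            baseline = current

if __name__ == "__main__":
    watch(WATCHED_URL)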

Security

Meet the Four Generations of Bots

September 11, 2019 — by Radware


With the escalating race between bot developers and security experts — along with the increasing use of JavaScript and HTML5 web technologies — bots have evolved significantly from their origins as simple scripting tools that used command line interfaces.

Bots now leverage full-fledged browsers and are programmed to mimic human behavior in the way they traverse a website or application, move the mouse, tap and swipe on mobile devices and generally try to simulate real visitors to evade security systems.

First Generation

First-generation bots are built with basic scripting tools and make cURL-like requests to websites from a small number of IP addresses (often just one or two). They cannot store cookies or execute JavaScript, so they do not possess the capabilities of a real web browser.

[You may also like: 5 Simple Bot Management Techniques]

Impact: These bots are generally used to carry out scraping, carding and form spam.

Mitigation: These simple bots generally originate from data centers and use proxy IP addresses and inconsistent user agents (UAs). They often make thousands of hits from just one or two IP addresses. They also operate through scraping tools, such as ScreamingFrog and DeepCrawl. They are the easiest to detect since they cannot maintain cookies, which most websites use. In addition, they fail JavaScript challenges because they cannot execute them. First-generation bots can be blocked by blacklisting their IP addresses and UAs, as well as combinations of IPs and UAs.
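
To illustrate the blacklisting approach described above, here is a minimal Python sketch of request filtering based on IP addresses, user agents and per-IP hit counts. The blacklist entries and rate threshold are illustrative values only, not recommendations.

from collections import Counter

# Illustrative blacklists; in production these would be fed by your
# detection pipeline or threat intelligence.
BLOCKED_IPS = {"203.0.113.7", "203.0.113.8"}
BLOCKED_UAS = {"python-requests/2.31.0", "curl/8.0.1"}
HITS_PER_IP = Counter()
RATE_LIMIT = 1000  # hits allowed per IP per window (illustrative)

def should_block(ip: str, user_agent: str) -> bool:
    """Return True if a request matches first-generation bot patterns."""
    HITS_PER_IP[ip] += 1
    if ip in BLOCKED_IPS or user_agent in BLOCKED_UAS:
        return True
    # Thousands of hits from one or two IPs is a telltale first-gen signature.
    if HITS_PER_IP[ip] > RATE_LIMIT:
        return True
    return False

# Example: blacklisted IP and blacklisted UA are both rejected.
print(should_block("203.0.113.7", "Mozilla/5.0"))   # True: blacklisted IP
print(should_block("198.51.100.5", "curl/8.0.1"))   # True: blacklisted UA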

Second Generation

These bots operate through website development and testing tools known as “headless” browsers (examples: PhantomJS and SimpleBrowser), as well as later versions of Chrome and Firefox, which allow for operation in headless mode. Unlike first-generation bots, they can maintain cookies and execute JavaScript. Botmasters began using headless browsers in response to the growing use of JavaScript challenges in websites and applications.

[You may also like: Good Bots Vs. Bad Bots: What’s The Impact On Your Business?]

Impact: These bots are used for application DDoS attacks, scraping, form spam, skewed analytics and ad fraud.

Mitigation: These bots can be identified through their browser and device characteristics, including the presence of specific JavaScript variables, iframe tampering, sessions and cookies. Once the bot is identified, it can be blocked based on its fingerprints. Another method of detecting these bots is to analyze metrics and typical user journeys and then look for large discrepancies in the traffic across different sections of a website. Those discrepancies can provide telltale signs of bots intending to carry out different types of attacks, such as account takeover and scraping.
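
As a rough illustration of the second method above (comparing traffic across site sections against a typical baseline), the following sketch flags sections whose share of traffic deviates sharply from the norm. The baseline figures and threshold are invented for the example.

# Illustrative baseline: the share of traffic each section normally receives.
BASELINE_SHARE = {"/product": 0.45, "/login": 0.05, "/checkout": 0.10, "/search": 0.40}
DEVIATION_THRESHOLD = 3.0  # flag sections receiving 3x their normal share

def flag_discrepancies(hits_by_section: dict) -> list:
    """Return sections whose traffic share is far above the baseline."""
    total = sum(hits_by_section.values()) or 1
    flagged = []
    for section, hits in hits_by_section.items():
        share = hits / total
        expected = BASELINE_SHARE.get(section, 0.01)
        if share / expected > DEVIATION_THRESHOLD:
            flagged.append(section)
    return flagged

# A login-heavy hour can indicate credential stuffing / account takeover bots.
print(flag_discrepancies({"/product": 2000, "/login": 5000, "/checkout": 300, "/search": 1500}))
# -> ['/login']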

Third Generation

These bots use full-fledged browsers — dedicated or hijacked by malware — for their operation. They can simulate basic human-like interactions, such as simple mouse movements and keystrokes. However, they may fail to demonstrate human-like randomness in their behavior.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

Impact: Third-generation bots are used for account takeover, application DDoS, API abuse, carding and ad fraud, among other purposes.

Mitigation: Third-generation bots are difficult to detect based on device and browser characteristics. Interaction-based user behavioral analysis is required to detect such bots, which generally follow a programmatic sequence of URL traversals.

Fourth Generation

The latest generation of bots has advanced human-like interaction characteristics — including moving the mouse pointer in a random, human-like pattern instead of in straight lines. These bots can also change their UAs while rotating through thousands of IP addresses. There is growing evidence that bot developers are carrying out “behavior hijacking” — recording the way real users touch and swipe on hijacked mobile apps to more closely mimic human behavior on a website or app. Behavior hijacking makes them much harder to detect, as their activities cannot easily be differentiated from those of real users. What’s more, their wide distribution is attributable to the large number of users whose browsers and devices have been hijacked.

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Impact: Fourth-generation bots are used for account takeover, application DDoS, API abuse, carding and ad fraud.

Mitigation: These bots are massively distributed across tens of thousands of IP addresses, often carrying out “low and slow” attacks to slip past security measures. Detecting these bots based on shallow interaction characteristics, such as mouse movement patterns, will result in a high number of false positives, and prevailing techniques are therefore inadequate for mitigating them. Machine learning-based technologies, such as intent-based deep behavioral analysis (IDBA) — which uses semi-supervised machine learning models to identify the intent of bots with the highest precision — are required to accurately detect fourth-generation bots with zero false positives.

Such analysis spans the visitor’s journey through the entire web property — with a focus on interaction patterns, such as mouse movements, scrolling and taps, along with the sequence of URLs traversed, the referrers used and the time spent at each page. This analysis should also capture additional parameters related to the browser stack, IP reputation, fingerprints and other characteristics.

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now

DDoS

5 Steps to Prepare for a DDoS Attack

September 10, 2019 — by Eyal Arazi


It’s almost as inevitable as death and taxes: somewhere, at some point, you will come under a DDoS attack.

The reasons for DDoS attacks can vary from cyber crime to hacktivism to simple bad luck, but eventually someone will be out there to try and take you down.

The good news, however, is that there is plenty to be done about it. Below are five key steps you can begin taking today so that you are prepared when the attack comes.

Step 1: Map Vulnerable Assets

The ancient Greeks said that knowing thyself is the beginning of wisdom.

It is no surprise, therefore, that the first step to securing your assets against a DDoS attack is to know what assets there are to be secured.

[You may also like: DDoS Protection Requires Looking Both Ways]

Begin by listing all external-facing assets that might potentially be attacked. This list should include both physical and virtual assets:

  • Physical locations & offices
  • Data centers
  • Servers
  • Applications
  • IP addresses and subnets
  • Domains, sub-domains and specific FQDNs

Mapping out all externally-facing assets will help you draw your threat surface and identify your point of vulnerability.

Step 2: Assess Potential Damages

After listing all potentially vulnerable assets, figure out how much they are worth to you.

This is a key question, as the answer will help determine how much you should spend in protecting these properties.

[You may also like: The Costs of Cyberattacks Are Real]

Keep in mind that some damages are direct, while others may be indirect. Some of the potential damages from a DDoS attack include:

  • Direct loss of revenue – If your website or application generates revenue directly on a regular basis, then any loss of availability will cause direct, immediate losses in revenue. For example, if your website generates $1m a day, every hour of downtime will, on average, cause over $40,000 in damages (see the quick estimate after this list).
  • Loss in productivity – For organizations that rely on online services, such as email, scheduling, storage, CRM or databases, any loss of availability to any of these services will directly result in loss of productivity and lost workdays.
  • SLA obligations – For applications and services that are bound by service commitments, any downtime can lead to breach of SLA, resulting in refunding customers for lost services, granting service credits, and even potentially facing lawsuits.
  • Damage to brand – In a world that is becoming ever-more connected, being available is increasingly tied to a company’s brand and identity. Any loss of availability as a result of a cyber-attack, therefore, can directly impact a company’s brand and reputation. In fact, Radware’s 2018 Application and Network Security Report showed that 43% of companies had experienced reputation loss as a result of a cyber-attack.
  • Loss of customers – One of the biggest potential damages of a successful DDoS attack is loss of customers. This can be either direct loss (i.e., a customer chooses to abandon you as a result of a cyber-attack) or indirect (i.e., potential customers who are unable to reach you and lost business opportunities). Either way, this is a key concern.
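
To make the revenue math above concrete, here is a quick back-of-the-envelope estimate in Python. The daily revenue figure comes from the example in the list; the SLA credit rate is purely illustrative.

def downtime_cost(daily_revenue: float, hours_down: float, sla_credit_rate: float = 0.0) -> float:
    """Estimate direct revenue loss plus (optional) SLA credits for an outage."""
    revenue_loss = daily_revenue / 24 * hours_down
    sla_credits = daily_revenue * sla_credit_rate  # hypothetical flat credit
    return revenue_loss + sla_credits

# The article's example: $1M/day means each hour of downtime costs ~$41,667.
print(round(downtime_cost(1_000_000, 1)))          # 41667
print(round(downtime_cost(1_000_000, 3, 0.05)))    # 3 hours down plus a 5% SLA credit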

[You may also like: How Cyberattacks Directly Impact Your Brand]

When evaluating potential damages of a DDoS attack, assess each vulnerable asset individually. A DDoS attack against a customer-facing e-commerce site, for example, will result in very different damages than an attack against a remote field office.

After you assess the risk to each asset, prioritize them according to risk and potential damages. This will not only help you assess which assets need protection, but also the type of protection they require.

Step 3: Assign Responsibility

Once you create an inventory of potentially vulnerable assets, and then assign a dollar figure (or any other currency…) to how much they are worth to you, the next step is to decide who is responsible for protecting them.

DDoS attacks are a unique type of cyber attack, as they affect different levels of IT infrastructure and can therefore potentially fall under the responsibility of different stakeholders:

  • Is DDoS the responsibility of the network administrator, since it affects network performance?
  • Is it the responsibility of the application owner, since it impacts application availability?
  • Is it the responsibility of the business manager, since it affects revenue?
  • Is it the responsibility of the CISO, since it is a type of cyber attack?

A surprising number of organizations don’t have properly defined areas of responsibility with regards to DDoS protection. This can result in DDoS defense “falling between the cracks,” leaving assets potentially exposed.

[You may also like: 5 Key Considerations in Choosing a DDoS Mitigation Network]

Step 4: Set Up Detection Mechanisms

Now that you’ve evaluated which assets you must protect and who’s responsible for protecting them, the next step is to set up measures that will alert you to when you come under attack.

After all, you don’t want your customers – or worse, your boss – to be the ones to tell you that your services and applications are offline.

Detection measures can be deployed either at the network level or at the application level.

Make sure these measures are configured so that they don’t just detect attacks, but also alert you when something bad happens.

[You may also like: Does Size Matter? Capacity Considerations When Selecting a DDoS Mitigation Service]

Step 5: Deploy a DDoS Protection Solution

Finally, after you’ve assessed your vulnerabilities and costs, and set up attack detection mechanisms, now is the time to deploy actual protection.

This step is best done before you get attacked, and not when you are already under one.

DDoS protection is not a one-size-fits-all proposition, and there are many types of protection options, depending on the characteristics, risk and value of each individual asset.

On-demand cloud mitigation services are activated only once an attack is detected. They require the lowest overhead and are the lowest-cost solution, but require traffic diversion for protection to kick in. As a result, they are best suited for cost-sensitive customers, services which are not mission-critical, and customers who have never been (or are infrequently) attacked but want a basic form of backup.

[You may also like: Is It Legal to Evaluate a DDoS Mitigation Service?]

Always-on cloud services route all traffic through a cloud scrubbing center at all times. No diversion is required, but there is minor added latency to requests. This type of protection is best for mission-critical applications which cannot afford any downtime, and organizations that are frequently attacked.

Hardware-based appliances provide the advanced capabilities and fast response of premises-based equipment. However, an appliance on its own is limited in its capacity. Therefore, appliances are best suited for service providers who are building their own scrubbing capabilities, or used in combination with a cloud service.

Finally, hybrid DDoS protection combines the massive capacity of cloud services with the advanced capabilities and fast response of a hardware appliance. Hybrid protection is best for mission-critical and latency-sensitive services, and organizations who encrypt their user traffic, but don’t want to put their SSL keys in the cloud.

Ultimately, you can’t control if and when you are attacked, but following these steps will help you be prepared when DDoS attackers come knocking at your door.

Download Radware’s “Hackers Almanac” to learn more.

Download Now

DDoS

The Emergence of Denial-of-Service Groups

August 27, 2019 — by Radware


Denial-of-Service (DoS) attacks are cyberattacks designed to render a computer or network service unavailable to its users. A standard DoS attack is when an attacker utilizes a single machine to launch an attack to exhaust the resources of another machine. A DDoS attack uses multiple machines to exhaust the resources of a single machine.

DoS attacks have been around for some time, but only recently has there been an emergence of denial-of-service groups that have constructed large botnets to target massive organizations for profit or fame. These groups often utilize their own stresser services and amplification methods to launch massive volumetric attacks, but they have also been known to make botnets available for rent via the darknet.

If a denial-of-service group is targeting your organization, ensure that your network is prepared to face an array of attack vectors ranging from saturation floods to Burst attacks designed to overwhelm mitigation devices.

Hybrid DDoS mitigation capabilities that combine on-premise and cloud-based volumetric protection for real-time DDoS mitigation are recommended. This requires the ability to efficiently identify and block anomalies that strike your network while not adversely affecting legitimate traffic. An emergency response plan is also required.


Download Radware’s “Hackers Almanac” to learn more.

Download Now

DDoS

How to Choose a Cloud DDoS Scrubbing Service

August 21, 2019 — by Eyal Arazi


Buying a cloud-based security solution is more than just buying a technology. Whereas when you buy a physical product, you care mostly about its immediate features and capabilities, a cloud-based service is more than just lines on a spec sheet; rather, it is a combination of multiple elements, all of which must work in tandem, in order to guarantee performance.

Cloud Service = Technology + Network + Support

There are three primary elements that determine the quality of a cloud security service: technology, network, and support.

Technology is crucial for the underlying security and protection capabilities. The network provides the solid foundation on which the technology runs, and the operations and support component brings them together and keeps them working.

[You may also like: Security Considerations for Cloud Hosted Services]

Take any one out, and the other two legs won’t be enough for the service to stand on.

This is particularly true when looking for a cloud-based DDoS scrubbing solution. Distributed Denial of Service (DDoS) attacks have distinct features that make them different from other types of cyber-attacks. Therefore, there are specific requirements for a cloud-based DDoS protection service that cover the full gamut of technology, network, and support particular to DDoS protection.

Technology

As I explained earlier, technology is just one facet of what makes up a cloud security service. However, it is the building block on which everything else is built.

The quality of the underlying technology is the most important factor in determining the quality of protection. It is the technology that determines how quickly an attack will be detected; it is the quality of the technology that determines whether it can tell the difference between a spike in legitimate traffic and a DDoS attack; and it is the technology that determines whether it can adapt to attack patterns in time to keep your application online.

[You may also like: Why You Still Need That DDoS Appliance]

In order to make sure that your protection is up to speed, there are a few key core features you want to make sure that your cloud service provides:

  • Behavioral detection: It is often difficult to tell the difference between a legitimate surge in customer traffic – say, during peak shopping periods – and a surge caused by a DDoS attack. Rate-based detection won’t be able to tell the difference, resulting in false positives. Therefore, behavioral detection, which looks not just at traffic rates but also at non-rate behavioral parameters, is a must-have capability (a minimal sketch follows this list).
  • Automatic signature creation: Attackers are relying more and more on multi-vector and ‘hit-and-run’ burst attacks, which frequently switch between different attack methods. Any defense mechanism based on manual configurations will fail because it won’t be able to keep up with the changes. Only defenses which provide automatic, real-time signature creation can keep up with such attacks and tailor defenses to the specific characteristics of the attack.
  • SSL DDoS protection: As more and more internet traffic becomes encrypted – over 85% according to the latest estimates – protection against encrypted DDoS floods becomes ever more important. Attackers can leverage SSL/TLS to launch potent DDoS attacks which can quickly overwhelm server resources. Therefore, protection against SSL-based DDoS attacks is key.
  • Application-layer protection: As more and more services migrate online, application-layer (L7) DDoS attacks are increasingly used to take them down. Many traditional DDoS mitigation services look only at network-layer (L3/4) protocols, but up-to-date protection must include application-layer protection as well.
  • Zero-day protection: Finally, attackers are constantly finding new ways of bypassing traditional security mechanisms and hitting organizations with attack methods never seen before. Even by making small changes to attack signatures, hackers can craft attacks that manual signatures will not recognize. That’s why zero-day protection features, which can adapt to new attack types, are an absolute must-have.
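
As a minimal sketch of why non-rate behavioral parameters matter (see the first bullet above), the Python snippet below flags an attack only when a traffic spike coincides with a behavioral anomaly, in this case the share of requests coming from previously unseen source IPs. The thresholds are illustrative, not tuned values.

def is_probable_attack(requests_per_sec: float,
                       baseline_rps: float,
                       new_ip_ratio: float) -> bool:
    """A rate spike alone can be a flash crowd; combine it with a behavioral signal."""
    rate_anomaly = requests_per_sec > 5 * baseline_rps
    behavior_anomaly = new_ip_ratio > 0.8   # most traffic from never-seen sources
    return rate_anomaly and behavior_anomaly

# Peak shopping traffic: big spike, but mostly returning visitors -> not flagged.
print(is_probable_attack(requests_per_sec=9000, baseline_rps=1500, new_ip_ratio=0.2))   # False
# Volumetric flood: big spike AND almost all traffic from new sources -> flagged.
print(is_probable_attack(requests_per_sec=9000, baseline_rps=1500, new_ip_ratio=0.95))  # True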

[You may also like: Modern Analytics and End-to-End Visibility]

Network

The next building block is the network. Whereas the technology stops the attack itself, it is the network that scales-out the service and deploys it on a global scale. Here, too, there are specific requirements that are uniquely important in the case of DDoS scrubbing networks:

  • Massive capacity: When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new peaks. That is why having large-scale, massive capacity at your disposal is an absolute requirement for stopping attacks.
  • Dedicated capacity: It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers rely on their CDN capacity, which is already being widely utilized, for DDoS mitigation as well. Therefore, it is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load balancing.
  • Global footprint: Fast response and low latency are crucial components of service performance. A critical factor in latency, however, is the distance between the customer and the host. Therefore, in order to minimize latency, it is important for the scrubbing center to be as close as possible to the customer, which can only be achieved with a globally distributed network with a large footprint.

Support

The final piece of the ‘puzzle’ of providing a high-quality cloud security network is the human element; that is, maintenance, operation and support.

Beyond the cold figures of technical specifications, and the bits-and-bytes of network capacity, it is the service element that ties together the technology and network, and makes sure that they keep working in tandem.

[You may also like: 5 Key Considerations in Choosing a DDoS Mitigation Network]

Here, too, there are a few key elements to look at when considering a cloud security network:

  • Global Team: Maintaining global operations of a cloud security service requires a team large enough to ensure 24x7x365 operations. Moreover, sophisticated security teams use a ‘follow-the-sun’ model, with team members distributed strategically around the world, to make sure that experts are always available, regardless of time or location. Only teams that reach a certain size – and companies that reach a certain scale – can guarantee this.
  • Team Expertise: Apart from the sheer number of team members, it is also their expertise that matters. Cyber security is a discipline, and DDoS protection, in particular, is a specialization. Only a team with a distinguished, long track record in protecting specifically against DDoS attacks can ensure that you have the staff, skills, and experience required to be fully protected.
  • SLA: The final qualification is the set of service guarantees provided by your cloud security vendor. Many service providers make extensive guarantees but fall woefully short when it comes to backing them up. The Service Level Agreement (SLA) is your guarantee that your service provider is willing to put their money where their mouth is. A high-quality SLA must provide individual, measurable metrics for attack detection, diversion (if required), alerting, mitigation, and uptime. Falling short of those should call into question your vendor’s ability to deliver on their promises.

A high-quality cloud security service is more than the sum of its parts. It is the technology, network, and service all working in tandem – and hitting on all cylinders – in order to provide superior protection. Falling short on any one element can potentially jeopardize quality of the protection delivered to customers. Use the points outlined above to ask yourself whether your cloud security vendor has all the right pieces to provide quality protection, and if they don’t – perhaps it is time for you to consider alternatives.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Application Security

Threats on APIs and Mobile Applications

August 20, 2019 — by Pascal Geenens


Web Application Programming Interfaces, or Web APIs, are essential building blocks of our digital lives. They provide the tools and protocols that enable web and mobile applications to provide dynamic content and up to date, personalized information.

Our cars, bikes, and fitness trackers rely on Web APIs to track and guide us toward our personal goals. In our homes, digital personal assistants help us manage our schedules, control our home, play our music, and much more, and they do so by interacting with an API provided as a service in the cloud. Google Pay, Apple Pay, PayPal, and many others enable businesses around the globe to process customer payments at the press of a button or the swipe of a phone. Their APIs provide easy integration and increased security for online commercial businesses. Smart cities and Industry 4.0 are taking over the manufacturing world, enabling new interconnected and automated manufacturing technologies and processes.

Cyber-physical systems monitor physical processes and make decentralized decisions based on a virtual model of the real world. Industrial Internet of Things (IoT) communicate and cooperate in real-time with users and across organizations.

These are only a few examples of the digital world we live in today and which relies on one very essential building block: the Web API.

What Are Web APIs?

A Web API is a set of tools and protocols that provide a predefined interface for a request and response messaging system between two programs. It exposes reliable content and provides operation negotiation through a common defined language. REST, short for REpresentational State Transfer, and the Simple Object Access Protocol (SOAP) are the most common protocol styles for cloud service architectures, with REST by far the most common one.

[You may also like: How to Prevent Real-Time API Abuse]

SOAP used to be the go-to messaging protocol that almost every web service used; it is a standardized protocol that allows the exchange of messages using underlying protocols such as HTTP, SMTP, TCP, UDP, and others. SOAP builds on a large number of frameworks using XML to format the messages. The standard includes a Web Services Description Language (WSDL) which defines the structure of the data in the message. SOAP is an official web standard with specifications maintained and developed by the World Wide Web Consortium (W3C).

As opposed to SOAP, REST is much less of a protocol and more of an architectural style. REST only provides a set of guidelines and allows much more flexibility in how developers implement it. As such, the REST architecture gained much popularity and better fit the agile and continuously evolving specs and requirements of modern-day web services.

The percentages of API Architectural Styles for profiles in the ProgrammableWeb API directory [source: https://www.programmableweb.com/news/which-api-types-and-architectural-styles-are-most-used/research/2017/11/26]

REST is used to build web services that are lightweight, scalable, and easy to maintain. Services built on the REST architecture are called RESTful services. The protocol underlying REST is HTTP, the most common and standardized web protocol, supported by almost every system and device on the internet. Any program that can talk HTTP is a potential REST client; any system that can process HTTP requests can expose RESTful services. Talking the talk is not enough, though: there needs to be an agreement between consumer and service for them to exchange actionable and useful information, hence the use of a common language such as XML or JSON.

[You may also like: Adapting Application Security to the New World of Bots]

REST requests and JSON structures are straightforward concepts. A request is very much like a URL with some arguments:

https://my.restservice.local/get_user?id=1

The response a web service located at that URL might return could be a JSON-formatted message. JSON is a human- and machine-readable format, making it easy for both humans and machines to find structure in and derive meaning from the data:

// JSON Object
{
  "user": {
    "id": 1,
    "name": "admin",
    "groupid": 1,
    "password": "123456"
  }
}
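
To show how a machine consumes such an API, here is a minimal Python client for the hypothetical endpoint above, using the requests library; the URL and response structure come from the example, not a real service.

import requests

# Hypothetical endpoint from the example above.
API_URL = "https://my.restservice.local/get_user"

def get_user(user_id: int) -> dict:
    """Call the REST endpoint and return the parsed JSON body."""
    response = requests.get(API_URL, params={"id": user_id}, timeout=5)
    response.raise_for_status()          # surface HTTP errors (4xx/5xx)
    return response.json()

payload = get_user(1)
print(payload["user"]["name"])           # -> "admin" in the example response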

The API Economy

To create new online applications within acceptable time frames, one should try to use existing and proven components for repeating and basic tasks. Focusing on the development of the innovative and differentiating parts of the application and not wasting cycles on the commodity is how in-house development stays productive and commercially viable. The why of in-house development is mainly the innovation and differentiation of the application, or what makes it stand out in the crowd.

[You may also like: 5G Security in an API-Driven Economy]

Most online applications rely on third-party and open source components, some of which could be Web APIs hosted in different clouds. Using third-party hosted Web APIs, developers can instantly add support for repeated and complex processes and do so in just a few lines of code. Using and consuming commercial-grade third-party APIs will typically not be free but is generally billed based on a subscription and number of (API) calls model, which embodies the ‘economy’ part of ‘the API economy.’

Credit card processing APIs are probably the most dominant component used by all commercial websites. It is more efficient and more secure to rely on a proven and certified third party to process customer payments. The security and trustworthiness of, say, PayPal result in much less resistance from visitors than asking them to provide and store their credit card details on your website. Failing to provide an extensive list of payment options will negatively impact the success of your site. Think about how many more sales you could realize if your mobile app integrated with Apple and Google Pay and all your potential customers had to do was swipe from left to right to buy your products and services. No information or personal details to input, no additional authentication steps; all that is needed is a big smile for their phone to authorize the transaction and complete the purchase.

The Radware ShieldSquare Bot Manager relies on this very same Web API concept. Radware Bot Manager exposes a cloud-based service into which on-premise reverse proxies and web applications make API calls to differentiate legitimate users and good bots from bad bot requests. The service is provided to our customers as a subscription, with pricing based on tiers of maximum API calls per month.

[You may also like: Navigating the Bot Ecosystem]

APIs, Built For and Consumed By Machines

APIs are by definition interfaces between machines. They are supposed to be consumed by devices and applications. Devices are machines, and their communication with the API is from and to machines (M2M). Mobile applications, dynamic web pages, or native user interface clients provide a graphical representation through which humans interact with the API. The graphical interface translates the interactions of the user into API requests while the data received in the API’s response message is rendered into a visual representation that makes more sense to the user.

Machines are good at processing structured data but have a harder time crunching through visual representations of that same data. Think about a paragraph in your document processor versus a scanned image of that same text. The visual representation of the text can be translated back to its original data representation, text in this case, but not without using complex tooling such as Optical Character Recognition (OCR) and only with a certain degree of success, most often not without introducing errors.

Do the exercise: this image provides three representations of the same data. Which would you prefer to interact with, and which do you think a machine would prefer? [data from https://opensource.adobe.com/Spry/samples/dataregion/JSONDataSetSample.html and formatted using http://json2table.com]

Now put yourself in the shoes of an attacker that wants to scrape the product list from an online commercial website. Would you go at it by scraping and translating HTML pages and following links to discover and encode the full catalog? Or, would you first try to discover if an API feeds the dynamic content that gets rendered in the web browser? If you went for the latter, consider yourself a step closer to being a real hacker 😉

Securing Web APIs

The online nature of web APIs makes their communications subject to snooping, man-in-the-middle and replay attacks. As everywhere else on the internet where privacy matters, all communication should be encrypted and origins verified. Since REST relies on HTTP, SSL/TLS with certificates is the bare essential.

Unless your Web API can verify the requesting client’s origin through a certificate, and as such leverages mutual TLS (mTLS), there is still no guarantee that the other side of the communication is a legitimate program with good intentions. Web APIs are built on the same stateless paradigm and protocols as web applications. While web applications are made stateful by introducing (hidden) session keys that get posted on each subsequent request after an initial login, Web API calls are by definition not stateful, but they can leverage the same ideas and concepts.
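
For the sake of illustration, here is what mutual TLS looks like from the client’s side in Python with the requests library. The endpoint and certificate paths are placeholders, and the server must of course be configured to require and verify client certificates.

import requests

# Placeholder paths: a client certificate/key issued to this consumer,
# and the CA bundle used to verify the API server's certificate.
CLIENT_CERT = ("/etc/myapp/client.crt", "/etc/myapp/client.key")
CA_BUNDLE = "/etc/myapp/api-ca.pem"

response = requests.get(
    "https://api.example.local/v1/orders",  # hypothetical mTLS-protected endpoint
    cert=CLIENT_CERT,    # proves the client's identity to the server
    verify=CA_BUNDLE,    # verifies the server's identity to the client
    timeout=5,
)
response.raise_for_status()
print(response.json())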

[You may also like: Good Bots Vs. Bad Bots: What’s The Impact On Your Business?]

JSON Web Token (JWT), for example, is an open standard (RFC 7519) that defines a self-contained way for securely transmitting information between parties as a JSON object. The token is signed using a secret or a public/private key pair and as such, can be verified and trusted by a receiving service. Because of the self-contained nature of the token, authorization can be performed based on just the token, no need for resolving keys or tokens into actual identity.
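
As an illustration of that self-contained property, here is a minimal sketch using the PyJWT library with a shared HMAC secret; the secret, claims and token lifetime are placeholders.

import datetime

import jwt  # PyJWT

SECRET = "change-me"  # placeholder shared secret; use proper key management in practice

# Issue a token: the claims travel inside the token itself.
claims = {
    "sub": "client-app-42",
    "scope": "orders:read",
    "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=15),
}
token = jwt.encode(claims, SECRET, algorithm="HS256")

# Verify a token: the signature and expiry are checked without any database lookup.
decoded = jwt.decode(token, SECRET, algorithms=["HS256"])
print(decoded["sub"], decoded["scope"])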

For machine-to-machine communications, however, there is no interactive authentication step after which a session key or token gets generated. A consumer of a Web API is typically authorized by using some kind of shared secret that was agreed upon upfront and that is passed to the API as one of the call’s request arguments. That secret would have been obtained through a separate authentication and authorization step. Many third-party Web API providers require the author of an application to register and request access to the API, at which point he or she is provided with a personal access or API token. The API token identifies the client program and allows it to consume the API, while the providing service can authorize and track requests and utilization.

[You may also like: Application SLA: Knowing Is Half the Battle]

There are convenient APIs that provide authentication for third-party applications and can be leveraged by consumers and providers alike. Ever used your Google or Facebook account to get access to a service you never previously registered for? Applications and API providers can rely on a third party such as Google or Facebook, using them as a trusted middle-man to authenticate the consumer of the service. The consumer of the service decides to trust that the middle-man secures their private information and shares only what is agreed to and required for authorization with the provider’s service. The convenience brought to the consumer of the application is single sign-on (SSO), meaning that the user needs to register and log in only once with Google and can then access and consume all the services and applications that rely on that middle-man. An example of such a standardized protocol is OAuth, also used by Google and Facebook in its 2.0 incarnation.

So I Secured My API. I’m Safe, Right?!

Not quite there yet, keep reading! Requests to Web APIs can be authenticated, authorized, and their contents protected by encryption.

However, what if you host a commercial website? Your definition of an authenticated user is a user who previously, in some cases just seconds ago, registered for access to your website. Automated programs, commonly referred to as bots, are very much able to create email aliases, register as fictitious persons, process the email validation requests and get the same unlimited access to your website as legitimate persons do. A single request performed by a bot does not look any different than a request originating from a real human. Only by chaining multiple requests into an intended behavior can one reveal the legitimate or malicious nature of the other party.

[You may also like: 4 Emerging Challenges in Securing Modern Applications]

Some applications have the luxury of only servicing a limited number of consumers that can be vetted and certified through some clearance process – B2B applications typically. Even then, tokens can be compromised and unauthorized use of the API is still very much a possibility. Even if tokens are not directly compromised, client-side Cross-Site Request Forging (CSRF) and Server-Side Request Forging (SSRF) could allow malicious actors to abuse the API. Even when you have strict control on your API or host internal private APIs that are used only by your front-end servers, they are still at risk.

Mobile Apps, API Consumers With a Twist

Mobile applications are nothing more than fancy consumers of Web APIs, at least those applications that provide on-demand, data-driven user experiences. Candy Crush is probably not the most appropriate example, though it is a great user experience (no pun intended).

API requests are machine to machine and consequently do not immediately reveal the presence of a natural person or the execution environment of the application. A web application’s environment can be challenged and identified using JavaScript injected into the application’s web pages. The content (the application, in this case) returned by a web server is dynamic and can be adapted on the fly or redirected if the need arises.

[You may also like: Web Application Security in a Digitally Connected World]

A mobile application, however, is static once delivered and installed and relies on API calls to only update that portion of the representation that contains dynamic information. Unless the mobile application includes functionality that allows it to identify human behavior through motion sensors or click and swipe patterns, and it can certify it is running on a real device and not in an emulated environment, the back end APIs cannot verify the actual state of the application.

By nature, mobile applications are publicly accessible and can easily be reversed to reveal their inner working. Reversing mobile applications uncovers the specific API calls directly following user actions such as clicks (or presses), as well as any embedded static tokens or certificates which provide the keys to the API kingdom.

Furthermore, easy access to device emulation software such as QEMU allows anyone to run the application in thousands of virtual instances and perform automated actions such as advertisement clicks which can cost you dearly.

Conclusions

Securing your web APIs and ensuring legitimate use of them requires more than authentication and authorization. Even if you are sure that your application is coded with best security practices, your infrastructure is top-notch secured and audited, and the application contains no vulnerabilities, there is still the threat of automated attacks that leverage legitimate requests to build a chain of behavior that results in malicious activity. Each individual request is legitimate, but the end game of the thousands of bots disguised as legitimate users could be a depleted stock or information being processed and leveraged competitively against you.

Building web APIs for B2B, providing customers with Mobile Apps, etc. increases customer loyalty, prevents customer churn, and increases revenue and competitiveness. However, these same APIs and mobile applications can be turned into a weapon against your business, and in a very insidious way, without immediate indication something or someone malicious is at work. A bot management solution should be considered when exposing APIs that are directly or indirectly connected with your business and revenue.

For those aware that applications without vulnerabilities are RBUs, consider the added layer of protection provided by a Web Application Firewall, which will prevent abuse of vulnerable code and infrastructure and will even protect you from Cross-Site and Server-Side Request Forgery.

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now

Application Security

Automation for NetOps and DevOps

August 14, 2019 — by Prakash Sinha


Many organizations use public cloud service providers, some in addition to their private cloud and on-premise deployments. The right product mix not only reduces vendor lock-in and shadow IT, but is also an enabler for constituents that include IT administrators, network and security operations, as well as DevOps.

Maintaining application security and configurations across multiple environments is complex AND error-prone, and it increases the attack surface. Careful testing is required to protect business-critical applications from hacking attempts, which may include denial of service, network and application attacks, malware and bots, and impersonation.

A successful implementation will not only include the right cloud provider, the correct security, licensing and cost model, but also the appropriate automation tools to help secure the technology and security landscape consistently as applications are rolled out in a continuous integration and continuous delivery (CI/CD) process.

When Does Automation Become a Pressing Issue?

The reasons to automate may be due to resource constraints, configuration management, compliance or monitoring. For example, an organization may have very few people managing a large set of configurations, or the required skill set spans networking AND security products, or perhaps the network operation team does not have the operational knowledge of all the devices they are managing.

Below are a few benefits that automation provides:

  • Time savings and fewer errors for repetitive tasks
  • Cost reduction for complex tasks that require specialized skills
  • Ability to react quickly to events (see the sketch after this list), for example,
    • Automatically commission new services at 80% utilization and decommission at 20%
    • Automatically adjust security policies to optimally address peace-time and attack traffic
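
As a minimal sketch of that kind of event-driven reaction, the Python function below turns the 80%/20% utilization thresholds above into a scaling decision. In a real deployment the decision would be handed to your orchestration tooling rather than printed.

SCALE_UP_THRESHOLD = 0.80    # commission new capacity above 80% utilization
SCALE_DOWN_THRESHOLD = 0.20  # decommission below 20% utilization

def scaling_decision(utilization: float) -> str:
    """Map current utilization to an action for the orchestration layer."""
    if utilization >= SCALE_UP_THRESHOLD:
        return "scale_up"
    if utilization <= SCALE_DOWN_THRESHOLD:
        return "scale_down"
    return "hold"

for sample in (0.15, 0.55, 0.85):
    print(sample, scaling_decision(sample))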

[You may also like: How to Move Security Up the DevOps Priority List]

Automate the Complexity Away?

Let us consider a scenario where a development engineer has an application ready and needs to test application scalability, business continuity and security using a load balancer, prior to rolling it out through IT.

The developer may not have the time to wait for a long provisioning timeline, or the expertise and familiarity with the networking and security configurations. The traditional way would be to open a ticket, have an administrator reach out, understand the use case and then create a custom load balancer for the developer to test. This is certainly expensive to do, and it hinders CI/CD processes.

[You may also like: Economics of Load Balancing When Transitioning to the Cloud]

The objective here would be to enable self-service, in a way that the developer can relate to and work with, to test against the load balancer without networking and security intricacies getting in the way. A common way is to create a workflow that automates tasks using templates, and, if the workflow spans multiple systems, to hide the complexity from the developer by orchestrating them.

Successful end-to-end automation consists of several incremental steps that build upon each other. For example, identify all the procedures administrators follow that are prone to introducing configuration errors. Then script them – say, using CLI or Python scripts. Now you’re at a point where you’re ready to automate.
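
Here is a minimal sketch of that “script it first” step: render a configuration from a template per application and push it to a hypothetical device management API. The endpoint, payload fields and token are placeholders; a real integration would go through your vendor’s API or an orchestration tool such as Ansible.

import requests

# Placeholder management endpoint and credentials.
MGMT_API = "https://mgmt.example.local/api/v1/virtual-services"
API_TOKEN = "REPLACE_ME"

# A simple template for the repetitive part of the task.
SERVICE_TEMPLATE = {
    "protocol": "https",
    "port": 443,
    "health_check": "/healthz",
    "waf_policy": "default-strict",
}

def provision_service(app_name: str, backend_ips: list) -> None:
    """Create a templated virtual service for one application."""
    payload = dict(SERVICE_TEMPLATE, name=app_name, members=backend_ips)
    response = requests.post(
        MGMT_API,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    print(f"provisioned {app_name}: {response.status_code}")

# The same script handles every application the same way, reducing manual errors.
provision_service("billing-app", ["10.0.1.10", "10.0.1.11"])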

You’ll have to pick automation and orchestration tools that’ll help you simplify the tasks, remove the complexity and make it consumable to your audience. Most vendors provide integrations for commonly used automation and orchestration systems – Ansible, Chef, Puppet, Cisco ACI and VMware vRealize – just to name a few.

Before you embark on the automation journey, identify the drivers, tie it to business needs and spend some time planning the transition by identifying the use cases and tools in use. Script the processes manually and test before automating using tools of your choice.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Botnets

How Hard Is It to Build a Botnet?

August 13, 2019 — by David Hobbs


While working on Radware’s Ultimate Guide to Bot Management, I began wondering what it would take to build a botnet.

Would I have to dive into the Darknet and find criminal hackers and marketplaces to obtain the tools to make one? How much effort would it take to build a complicated system that would avoid detection and mitigation, and what level of expertise is required to make a scraping/credential stuffing and website abuse botnet?

At Your Fingertips

What I discovered was amazing. I didn’t even need to dive into the Darknet; everything anyone would need was readily available on the public internet. 

[You may also like: What You Need to Know About Botnets]

My learning didn’t end there. During this exploration, I noticed that many organizations use botnets in one form or another against their competitors or to gain a competitive advantage. Of course, I knew hackers leverage botnets for profit; but the availability of botnet building tools makes it easy for anyone to construct botnets that can access web interfaces and APIs while disguising their location and user agents. 

The use cases advertised for these toolsets range from data harvesting, to account creation and account takeover, to inventory manipulation, advertising fraud and a variety of ways to monetize and automate integrations into well-known IT systems.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

Mobile Phone Farms

These tool designers and services clearly know there is a market for cyber criminality, and some are shameless about promoting it.

For example, per a recent Vice article examining mobile phone farms, companies are incentivizing traffic to their apps and content by paying users. Indeed, it appears that people can make anywhere from $100-300 a month per mobile phone on apps like perk TV, Fusion TV, MyPoints or even categorizing shows for Netflix. They merely have to take surveys, watch television shows, categorize content or check into establishments.

[You may also like: Botnets: DDoS and Beyond]

More specifically, people are building mobile phone farms with cheap Android devices and used phones, and scaling up their operations to a point where they can make a couple of thousand dollars (or more!) per month. These farms can be rented out to conduct more nefarious activities, like price scraping, data harvesting, ticket purchasing, account takeover, fake article writing and social media development, hacking, launching DDoS attacks and more. To complicate matters, thanks to proxy servers and VPN tools, it has become nearly impossible to detect if a phone farm is being used against a site.

What’s Next?

It’s not a far leap to assume that incentivized engagement may very well invite people to build botnets. How long until somebody develops an app to “rent your phone’s spare cycles” to scrape data, or watch content, write reviews, etc. (in other words, things that aren’t completely against the law) for money? Would people sign up to make extra beer money in exchange for allowing botnet operators to click on ads and look at websites for data harvesting?

I think it’s just a matter of time before this idea takes flight. Are you prepared today to protect against the sophisticated botnets? Do you have a dedicated bot management solution? When the botnets evolve into the next generation, will you be ready?

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now