
Application Security

Threats on APIs and Mobile Applications

August 20, 2019 — by Pascal Geenens


Web Application Programming Interfaces, or Web APIs, are essential building blocks of our digital lives. They provide the tools and protocols that enable web and mobile applications to deliver dynamic content and up-to-date, personalized information.

Our cars, bikes, and fitness trackers rely on Web APIs to track and guide us to our personal goals. In our homes, digital personal assistants help us manage our schedules, control our home, play our music, and much more, and they do so by interacting with an API provided as a service in the cloud. Google Pay, Apple Pay, PayPal, and many others enable businesses around the globe to process customer payments with the press of a button or the swipe of a phone. Their APIs provide easy integration and increased security for online commercial businesses. Smart cities and Industry 4.0 are transforming the manufacturing world, enabling new interconnected and automated manufacturing technologies and processes.

Cyber-physical systems monitor physical processes and make decentralized decisions based on a virtual model of the real world. Industrial Internet of Things (IoT) devices communicate and cooperate in real time with users and across organizations.

These are only a few examples of the digital world we live in today, a world that relies on one very essential building block: the Web API.

What Are Web APIs?

A Web API is a set of tools and protocols that provide a predefined interface for a request and response messaging system between two programs. It exposes reliable content and provides operation negotiation through a commonly defined language. REST, short for REpresentational State Transfer, and the Simple Object Access Protocol (SOAP) are the most common protocol styles for cloud service architectures, with REST by far the most common.

[You may also like: How to Prevent Real-Time API Abuse]

SOAP used to be the go-to messaging protocol that almost every web service used; it is a standardized protocol that allows the exchange of messages over underlying protocols such as HTTP, SMTP, TCP, UDP, and others. SOAP is supported by a large number of frameworks and uses XML to format its messages. The standard includes the Web Services Description Language (WSDL), which defines the structure of the data in the message. SOAP is an official web standard with specifications maintained and developed by the World Wide Web Consortium (W3C).

As opposed to SOAP, REST is much less a protocol and more an architectural style. REST provides only a set of guidelines and leaves developers much more flexibility in how they implement it. As such, the REST architecture gained popularity and better fits the agile, continuously evolving specs and requirements of modern-day web services.

[Figure: the percentages of API architectural styles for profiles in the ProgrammableWeb API directory. Source: https://www.programmableweb.com/news/which-api-types-and-architectural-styles-are-most-used/research/2017/11/26]

REST is used to build web services that are lightweight, scalable, and easy to maintain. Services built on the REST architecture are called RESTful services. The protocol underlying REST is HTTP, the most common and standardized web protocol, supported by almost every system and device on the internet. Any program that can talk HTTP is a potential REST client; any system that can process HTTP requests can expose RESTful services. Talking the talk is only half the story, though: consumer and service need to agree on a common language before they can exchange actionable and useful information, hence the use of a standard format such as XML or JSON.

[You may also like: Adapting Application Security to the New World of Bots]

REST requests and JSON structures are straightforward concepts. A request is very much like a URL with some arguments:

https://my.restservice.local/get_user?id=1

The response a web service located at that URL returns might be a JSON-formatted message. JSON is a human- and machine-readable format, making it easy for both humans and machines to find structure in, and derive meaning from, the data:

// JSON Object
{
  "user": {
    "id": 1,
    "name": "admin",
    "groupid": 1,
    "password": "123456"
  }
}
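
Putting the two together, a consumer is just a few lines of code away from this data. Below is a minimal sketch in Python using the popular requests library; it assumes the illustrative my.restservice.local endpoint above actually existed:

# Minimal sketch of consuming the example REST service above (the
# my.restservice.local endpoint is illustrative and will not resolve).
import requests

response = requests.get(
    "https://my.restservice.local/get_user",
    params={"id": 1},            # becomes ?id=1 on the request URL
    timeout=5,
)
response.raise_for_status()      # fail loudly on HTTP errors

user = response.json()["user"]   # parse the JSON body into a dict
print(user["name"])              # -> "admin"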

The API Economy

To create new online applications within acceptable time frames, one should try to use existing and proven components for repetitive and basic tasks. Focusing on the development of the innovative and differentiating parts of the application, and not wasting cycles on the commodity, is how in-house development stays productive and commercially viable. The 'why' of in-house development is mainly the innovation and differentiation of the application, or what makes it stand out in the crowd.

[You may also like: 5G Security in an API-Driven Economy]

Most online applications rely on third-party and open source components, some of which could be Web APIs hosted in different clouds. Using third-party hosted Web APIs, developers can instantly add support for repetitive and complex processes, and do so in just a few lines of code. Using and consuming commercial-grade third-party APIs will typically not be free but is generally billed on a subscription and number-of-calls model, which embodies the 'economy' part of 'the API economy.'

Credit card processing APIs are probably the most dominant component used by commercial websites. It is more efficient and more secure to rely on a proven and certified third party to process customer payments. The security and trustworthiness of, say, PayPal meets far less resistance from visitors than asking them to provide and store their credit card details on your website. Failing to provide an extensive list of payment options will negatively impact the success of your site. Think about how many more sales you could realize if your mobile app integrated with Apple and Google Pay and all your potential customer had to do was swipe from left to right to buy your products and services. No information or personal details to input, no additional authentication steps; all that is needed is a big smile for their phone to authorize the transaction and complete the purchase.

The Radware ShieldSquare Bot Manager relies on this very same Web API concept. Radware Bot Manager exposes a cloud-based service into which on-premise reverse proxies and web applications make API calls to differentiate legitimate users and good bots from bad bot requests. The service is provided to our customers as a subscription, with pricing based on tiers of maximum API calls per month.

[You may also like: Navigating the Bot Ecosystem]

APIs, Built For and Consumed By Machines

APIs are by definition interfaces between machines, meant to be consumed by devices and applications. Devices are machines, so their communication with the API is machine to machine (M2M). Mobile applications, dynamic web pages, or native user interface clients provide a graphical representation through which humans interact with the API. The graphical interface translates the interactions of the user into API requests, while the data received in the API's response message is rendered into a visual representation that makes more sense to the user.

Machines are good at processing structured data but have a harder time crunching through visual representations of that same data. Think about a paragraph in your document processor versus a scanned image of that same text. The visual representation can be translated back to its original data representation, text in this case, but not without complex tooling such as Optical Character Recognition (OCR), and only with a certain degree of success, most often not without introducing errors.

Do the exercise: this image provides three representations of the same data. Which would you prefer to interact with, and which do you think a machine would prefer? [data from https://opensource.adobe.com/Spry/samples/dataregion/JSONDataSetSample.html and formatted using http://json2table.com]

Now put yourself in the shoes of an attacker that wants to scrape the product list from an online commercial website. Would you go at it by scraping and translating HTML pages and following links to discover and encode the full catalog? Or, would you first try to discover if an API feeds the dynamic content that gets rendered in the web browser? If you went for the latter, consider yourself a step closer to being a real hacker 😉

Securing Web APIs

The online nature of Web APIs makes their communications subject to snooping, man-in-the-middle and replay attacks. As anywhere else on the internet where privacy is a concern, all communication should be encrypted and origins verified. As REST relies on HTTP, SSL and TLS with certificates are the bare essentials.

Unless your Web API can verify the requesting client’s origin through a certificate, and as such leverages mutual TLS (mTLS), there is still no guarantee that the other side of the communication is a legitimate program with good intentions. Web APIs build on the same stateless paradigm and protocols used by web applications. While web applications are made stateful by introducing (hidden) session keys that get posted on each subsequent request after an initial login, Web API calls are by definition not stateful, but they can leverage the same ideas and concepts.
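
To illustrate, here is a minimal mTLS sketch from the client side in Python, assuming the illustrative endpoint from earlier and PEM files for the client certificate, client key and the CA that signed the server's certificate:

# Minimal mTLS client sketch; the file names and endpoint are illustrative.
import requests

response = requests.get(
    "https://my.restservice.local/get_user",
    params={"id": 1},
    cert=("client.crt", "client.key"),  # certificate the server verifies us by
    verify="ca.pem",                    # CA bundle we verify the server by
    timeout=5,
)
print(response.status_code)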

[You may also like: Good Bots Vs. Bad Bots: What’s The Impact On Your Business?]

JSON Web Token (JWT), for example, is an open standard (RFC 7519) that defines a self-contained way of securely transmitting information between parties as a JSON object. The token is signed using a secret or a public/private key pair and, as such, can be verified and trusted by a receiving service. Because of the self-contained nature of the token, authorization can be performed based on the token alone, with no need to resolve keys or tokens into an actual identity.
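
As a minimal sketch of that idea, using the third-party PyJWT library for Python (the secret and claims here are made up for illustration):

# Issue and verify a self-contained, signed token (PyJWT: pip install PyJWT).
import jwt

SECRET = "change-me"  # illustrative shared secret

# The issuing service signs the claims after authenticating the client.
token = jwt.encode({"sub": "client-42", "scope": "read"}, SECRET, algorithm="HS256")

# Any service holding the secret can verify and trust the claims without a
# lookup; a tampered token raises jwt.InvalidTokenError instead.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])  # -> "client-42"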

For machine-to-machine communications, however, there is no interactive authentication step after which a session key or token gets generated. Instead, a consumer of a Web API is authorized using some kind of shared secret that was agreed upon up front and that is passed to the API with each call. That secret would have been obtained through a separate authentication and authorization step. Many third-party Web API providers require the author of an application to register and request access to the API, at which point he or she will be provided with a personal access or API token. The API token identifies the client program and allows it to consume the API, while the providing service can authorize and track requests and utilization.
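
In practice that token typically travels in a request header. A minimal sketch, with a made-up header value and the same illustrative endpoint as before:

# Present a pre-shared API token on every call; the token is illustrative.
import requests

API_TOKEN = "token-issued-at-registration"

response = requests.get(
    "https://my.restservice.local/get_user",
    params={"id": 1},
    headers={"Authorization": f"Bearer {API_TOKEN}"},  # identifies the client
    timeout=5,
)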

[You may also like: Application SLA: Knowing Is Half the Battle]

There are convenient APIs that provide authentication for third-party applications and can be leveraged by consumers and providers alike. Ever used your Google or Facebook account to get access to a service you never previously registered for? Applications and API providers can rely on a third party such as Google or Facebook, using them as a trusted middleman to authenticate the consumer of the service. The consumer decides to trust that the middleman secures their private information and shares with the provider's service only what is agreed to and required for authorization. The convenience brought to the consumer is single sign-on (SSO), meaning that the user needs to register and log in only once, with Google for example, and can then access and consume all the services and applications that rely on that middleman. An example of such a standardized protocol is OAuth, used by Google and Facebook in its 2.0 incarnation.

So I Secured My API. I’m Safe, Right?!

Not quite there yet, keep reading! Requests to Web APIs can be authenticated, authorized, and their contents protected by encryption.

However, what if you host a commercial website? Your definition of an authenticated user is a user that previously, in some cases just seconds ago, registered for access to your website. Automated programs, commonly referred to as bots, are very much able to create email aliases, register as fictitious persons, process the email validation requests and get the same unlimited access to your website as legitimate persons do. A single request performed by a bot does not look any different from a request originating from a real human. Only by chaining multiple requests into an intended behavior can one reveal the legitimate or malicious nature of the other party.

[You may also like: 4 Emerging Challenges in Securing Modern Applications]

Some applications have the luxury of only servicing a limited number of consumers that can be vetted and certified through some clearance process – typically B2B applications. Even then, tokens can be compromised, and unauthorized use of the API is still very much a possibility. Even if tokens are not directly compromised, client-side Cross-Site Request Forgery (CSRF) and Server-Side Request Forgery (SSRF) could allow malicious actors to abuse the API. Even when you have strict control over your API or host internal private APIs that are used only by your front-end servers, they are still at risk.

Mobile Apps, API Consumers With a Twist

Mobile applications are nothing more than fancy consumers of Web APIs, at least those applications that provide on-demand, data-driven user experiences. Candy Crush is probably not the most appropriate example, though it is a great user experience – no pun intended.

API requests are machine to machine and by consequence do not immediately reveal the presence of a natural person or the execution environment of the application. A web application's environment can be challenged and identified using JavaScript injected into the application's web pages. The content, the application in this case, returned by a web server is dynamic and can be adapted on the fly or redirected if the need arises.

[You may also like: Web Application Security in a Digitally Connected World]

A mobile application, however, is static once delivered and installed and relies on API calls to only update that portion of the representation that contains dynamic information. Unless the mobile application includes functionality that allows it to identify human behavior through motion sensors or click and swipe patterns, and it can certify it is running on a real device and not in an emulated environment, the back end APIs cannot verify the actual state of the application.

By nature, mobile applications are publicly accessible and can easily be reverse engineered to reveal their inner workings. Reversing mobile applications uncovers the specific API calls directly following user actions such as clicks (or presses), as well as any embedded static tokens or certificates which provide the keys to the API kingdom.

Furthermore, easy access to device emulation software such as QEMU allows anyone to run the application in thousands of virtual instances and perform automated actions such as advertisement clicks which can cost you dearly.

Conclusions

Securing your Web APIs and ensuring legitimate use of them requires more than authentication and authorization. Even if you are sure that your application is coded with best security practices, your infrastructure is top-notch, secured and audited, and the application contains no vulnerabilities, there is still the threat of automated attacks that leverage legitimate requests to build a chain of behavior that results in malicious activity. Each individual request is legitimate, but the end game of the thousands of bots disguised as legitimate users could be depleted stock or information being processed and leveraged competitively against you.

Building Web APIs for B2B, providing customers with mobile apps, etc. increases customer loyalty, prevents customer churn, and increases revenue and competitiveness. However, these same APIs and mobile applications can be turned into a weapon against your business, and in a very insidious way, without immediate indication that something or someone malicious is at work. A bot management solution should be considered when exposing APIs that are directly or indirectly connected with your business and revenue.

For those aware that applications without vulnerabilities are RBUs, consider the added layer of protection provided by a Web Application Firewall, which will prevent abuse of vulnerable code and infrastructure and will even protect you from Cross-Site and Server-Side Request Forgery.

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now

Application Security

Automation for NetOps and DevOps

August 14, 2019 — by Prakash Sinha


Many organizations use public cloud service providers, some in addition to their private cloud and on-premise deployments. The right product mix not only reduces vendor lock-in and shadow IT, but is also an enabler for constituents that include IT administrators, network and security operations, as well as DevOps.

Maintaining application security and configurations across multiple environments is complex AND error-prone, and it increases the attack surface. Careful testing is required to protect business-critical applications from hacking attempts, which may include denial of service, network and application attacks, malware and bots, and impersonation.

A successful implementation will not only include the right cloud provider, the correct security, licensing and cost model, but also the appropriate automation tools to help secure the technology and security landscape consistently as applications are rolled out in a continuous integration and continuous delivery (CI/CD) process.

When Does Automation Become a Pressing Issue?

The reasons to automate may include resource constraints, configuration management, compliance or monitoring. For example, an organization may have very few people managing a large set of configurations, or the required skill set spans networking AND security products, or perhaps the network operations team does not have operational knowledge of all the devices it is managing.

Below are a few benefits that automation provides:

  • Time savings and fewer errors for repetitive tasks
  • Cost reduction for complex tasks that require specialized skills
  • Ability to react quickly to events, for example,
    • Automatically commission new services at 80% utilization and decommission at 20% (sketched below)
    • Automatically adjust security policies to optimally address peace-time and attack traffic
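
As a minimal sketch of that commission/decommission rule in Python: the 80%/20% thresholds come from the example above, while the function itself and the orchestration hooks it stands in for are illustrative:

# Illustrative commission/decommission rule from the example above.
SCALE_UP_AT = 0.80    # commission a new instance at 80% utilization
SCALE_DOWN_AT = 0.20  # decommission one at 20% utilization

def autoscale(utilization: float, instances: int) -> int:
    """Return the desired instance count for the observed utilization."""
    if utilization >= SCALE_UP_AT:
        return instances + 1   # orchestration layer provisions one more
    if utilization <= SCALE_DOWN_AT and instances > 1:
        return instances - 1   # orchestration layer tears one down
    return instances

print(autoscale(0.85, 4))  # -> 5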

[You may also like: How to Move Security Up the DevOps Priority List]

Automate the Complexity Away?

Let us consider a scenario where a development engineer has an application ready and needs to test application scalability, business continuity and security using a load balancer, prior to rolling it out through IT.

The developer may not have the time to wait for a long provisioning timeline, or the expertise and familiarity with the networking and security configurations. The traditional way would be to open a ticket, have an administrator reach out, understand the use case and then create a custom load balancer for the developer to test. This is certainly expensive to do, and it hinders CI/CD processes.

[You may also like: Economics of Load Balancing When Transitioning to the Cloud]

The objective here would be to enable self-service, in a way that the developer can relate to and work with, to test against the load balancer without networking and security intricacies getting in the way. A common way is to create a workflow that automates tasks using templates and, if the workflow spans multiple systems, hides the complexity from the developer by orchestrating them.

Successful end-to-end automation consists of several incremental steps that build upon each other. For example, identify all the actions administrators take that are prone to introducing configuration errors. Then script them – say, using CLI or Python scripts, as in the sketch below. Now you're at a point where you're ready to automate.
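
A minimal sketch of such a script, rendering a per-developer load-balancer configuration from a template; the configuration syntax, field names and validation here are all illustrative:

# Script one error-prone manual step: rendering a service config from a
# template with basic input validation (everything here is illustrative).
from string import Template

VIP_TEMPLATE = Template(
    "virtual-server $name\n"
    "  ip $vip port 443\n"
    "  pool $pool\n"
)

def render_config(name: str, vip: str, pool: str) -> str:
    """Render one load-balancer service entry from validated inputs."""
    if vip.count(".") != 3:
        raise ValueError(f"{vip!r} does not look like an IPv4 address")
    return VIP_TEMPLATE.substitute(name=name, vip=vip, pool=pool)

print(render_config("dev-app1", "10.0.0.10", "dev-app1-pool"))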

You’ll have to pick automation and orchestration tools that’ll help you simplify the tasks, remove the complexity and make it consumable to your audience. Most vendors provide integrations for commonly used automation and orchestration systems – Ansible, Chef, Puppet, Cisco ACI and VMware vRealize, just to name a few.

Before you embark on the automation journey, identify the drivers, tie them to business needs and spend some time planning the transition by identifying the use cases and tools in use. Script the processes manually and test them before automating using tools of your choice.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Application Security

How to Move Security Up the DevOps Priority List

July 17, 2019 — by Ben Zilberman


If you are in the information security business like me, you have probably improved your frequent flyer status recently. Indeed, May-June are when most industry events occur. Like birds, we fly when spring arrives.

In this blog, I’ll share some thoughts based on conversations I had during my own journeys, including those at the global OWASP conference in Tel Aviv, Israel.

The audience was mostly split between developers and researchers, and then me, supposedly the only marketing guy within a mile radius. Since the event was held in Tel Aviv – an information security innovation hub – the vendor/customer ratio was higher than usual.

DevOps’ Least Favorite Word is “Security”

According to Radware’s C-Suite survey, 75% of organizations have turned information security into a marketing message. Meaning, executives understand that consumers are looking for secure products and services, and actively sell to that notion.

But do developers share the same insight, or accountability?

By nature, information security is the enemy of the agile world. In an age where software development has shifted from 80% code writing and 20% integration to 20% code writing and 80% integration, all DevOps have to do is assemble the right puzzle of scalable infrastructure, available open source modules and their end-to-end automation and orchestration tools for provisioning, run-time management and even security testing.

[You may also like: Are Your DevOps Your Biggest Security Risks?]

In other words, there’s no need to start from scratch today. Being familiar with more tools and knowing how to efficiently navigate GitHub (and other open-source communities) can yield more success than coding skills. Moreover, it yields faster time-to-market, which seems to be in everybody’s interest.

Agility is the Name of the Game

As I mentioned, the global OWASP event attracted many vendors. However, will pitching ‘best of breed security’ do the trick? If you are the only one that can block rare attacks that only sophisticated hackers can carry out, is there a real business opportunity for your start-up to grow?

Well, DevOps says no!

And they are right. Running applications in the public cloud is all about efficiency and scale. Serverless and microservices architectures fragment monolithic applications into components that are created, run and vanish without any supervision or visibility from the developer. It is done via end-to-end automation, where the main orchestration tool is Kubernetes.

[You may also like: DevOps: Application Automation? The Inescapable Path]

This is agility.

Building Secure Products and Services

Both efficiency and agility are legitimate business objectives. Why would security, with its list of ‘what ifs,’ be allowed to interfere?

Ironically, success doesn’t depend on how well an application security solution detects and mitigates attacks. It correlates better with how well the solution integrates into the SDLC (software development lifecycle), which essentially means it can interoperate with these orchestration and automation tools.

Before building security features, vendors should think of hands-off implementation, auto-scale, zero to minimal day-to-day management and APIs to exchange data with other tools in the customer environment.

[You may also like: How to Prevent Real-Time API Abuse]

Once all that is in place, it’s time to proceed to security and start building the algorithmics of the detection engines and mitigation manners.

Keep in mind security can’t be static anymore; it must be dynamic and evolving. Solutions must be able to learn and profile the behavior of traffic to the application and create policies automatically, adjusting the rules over time as changes are introduced on the dev side. This is key for CI/CD, because the last thing developers want to hear is that they must go back to the code to reassess and test its logic: every wrong decision translates into either a customer locked out (false positive) or an attacker let in (false negative).

Self-sufficient algorithmics reduces TCO significantly by reducing the required management labor – a plague in old application security solutions.

To auto-policy generation, DevOps says yes, allowing the executives to market secure products and services.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Application Security, WAF, Web Application Firewall

Bot Manager vs. WAF: Why You Actually Need Both

June 6, 2019 — by Ben Zilberman


Over 50% of web traffic consists of bots, and 89% of organizations have suffered attacks against web applications. Websites and mobile apps are two of the biggest revenue drivers for businesses and help solidify a company’s reputation with tech-savvy consumers. However, these digital engagement tools are coming under increasing threats from an array of sophisticated cyberattacks, including bots.

While a percentage of bots are used to automate business processes and tasks, others are designed for mischievous purposes, including account takeover, content scraping, payment fraud and denial-of-service attacks. Often, these attacks are carried out by competitors looking to undermine a company’s competitive advantage, steal information or increase its online marketing costs.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

When Will You Need a Bot Detection Solution?

Sophisticated, next-generation bots can evade traditional security controls and go undetected by application owners. However, their impact can be noticed; several indicators can alert a company to malicious bot activity.

Why a WAF Isn’t an Effective Bot Detection Tool

WAFs are primarily created to safeguard websites against application vulnerability exploitations like SQL injections, cross-site scripting (XSS), cross-site request forgery, session hijacking and other web attacks. WAFs typically feature basic bot mitigation capabilities and can block bots based on IPs or device fingerprinting.

However, WAFs fall short when facing more advanced, automated threats. Moreover, next-generation bots use sophisticated techniques to remain undetected, such as mimicking human behavior, abusing open-source tools or generating multiple violations in different sessions.

[You may also like: The Big, Bad Bot Problem]

Against these sophisticated threats, WAFs won’t get the job done.

The Benefits of Synergy

As the complexity of multi-vector cyberattacks increases, security systems must work in concert to mitigate these threats. In the case of application security, a combination of behavioral analytics to detect malicious bot activity and a WAF to protect against vulnerability exploitations and guard sensitive data is critical.

Moreover, many threats can be blocked at the network level before reaching the application servers. This not only reduces risk, but also reduces the processing loads on the network infrastructure by filtering malicious bot traffic.

Read “How to Evaluate Bot Management Solutions” to learn more.

Download Now

Application Security

4 Emerging Challenges in Securing Modern Applications

May 1, 2019 — by Radware0

appsecurity-960x474.jpg

Modern applications are difficult to secure. Whether they are web or mobile, custom developed or SaaS-based, applications are now scattered across different platforms and frameworks. To accelerate service development and business operations, applications rely on third-party resources that they interact with via APIs, well-orchestrated by state-of-the-art automation and synchronization tools. As a result, the attack surface becomes greater as there are more blind spots – higher exposure to risk.

Applications, as well as APIs, must be protected against an expanding variety of attack methods and sources and must be able to make educated decisions in real time to mitigate automated attacks. Moreover, applications constantly change, and security policies must adapt just as fast. Otherwise, businesses face increased manual labor and operational costs, in addition to a weaker security posture.

The WAF Ten Commandments

The OWASP Top 10 list serves as an industry benchmark for the application security community and provides a starting point for ensuring protection from the most common and virulent threats and the application misconfigurations that can lead to vulnerabilities, along with detection tactics and mitigations. It also defines the basic capabilities required from a Web Application Firewall in order to protect against common attacks targeting web applications like injections, cross-site scripting, CSRF, session hijacking, etc. There are numerous ways to exploit these vulnerabilities, and WAFs must be tested for security effectiveness.

However, vulnerability protection is just the basics. Advanced threats force application security solutions to do more.

Challenge 1: Bot Management

52% of internet traffic is bot generated, half of which is attributed to “bad” bots. Unfortunately, 79% of organizations can’t make a clear distinction between good and bad bots. The impact is felt across all business arms as bad bots take over user accounts and payment information, scrape confidential data, hold up inventory and skew marketing metrics, leading to wrong decisions. Sophisticated bots mimic human behavior and easily bypass CAPTCHA and other challenges. Distributed bots render IP-based and even device-fingerprinting-based protection ineffective. Defenders must level up their game.

[You may also like: CISOs, Know Your Enemy: An Industry-Wise Look At Major Bot Threats]

Challenge 2: Securing APIs

Machine-to-machine communications, integrated IoT devices, event-driven functions and many other use cases leverage APIs as the glue for agility. Many applications gather information and data from services with which they interact via APIs. Threats to API vulnerabilities include injections, protocol attacks, parameter manipulations, invalidated redirects and bot attacks. Businesses tend to grant access to sensitive data without inspecting or protecting their APIs to detect cyberattacks. Don’t be one of them.

[You may also like: How to Prevent Real-Time API Abuse]

Challenge 3: Denial of Service

Different forms of application-layer DoS attacks are still very effective at bringing application services down. This includes HTTP/S floods, low and slow attacks (Slowloris, LOIC, Torshammer), dynamic IP attacks, buffer overflow, Brute Force attacks and more. Driven by IoT botnets, application-layer attacks have become the preferred DDoS attack vector. Even the greatest application protection is worthless if the service itself can be knocked down.

[You may also like: DDoS Protection Requires Looking Both Ways]

Challenge 4: Continuous Security

For modern DevOps, agility is valued at the expense of security. Development and roll-out methodologies, such as continuous delivery, mean applications are continuously modified. It is extremely difficult to maintain a valid security policy to safeguard sensitive data in dynamic conditions without creating a high number of false positives. This task has grown beyond human capacity, as the error rate and additional costs humans impose are enormous. Organizations need machine-learning-based solutions that map application resources, analyze possible threats, and create and optimize security policies in real time.

[You may also like: Are Your DevOps Your Biggest Security Risks?]

Protecting All Applications

It’s critical that your solution protects applications on all platforms, against all attacks, through all the channels and at all times. Here’s how:

  • Application security solutions must encompass web and mobile apps, as well as APIs.
  • Bot Management solutions need to overcome the most sophisticated bot attacks.
  • Mitigating DDoS attacks is an essential and integrated part of application security solutions.
  • A future-proof solution must protect containerized applications, serverless functions, and integrate with automation, provisioning and orchestration tools.
  • To keep up with continuous application delivery, security protections must adapt in real time.
  • A fully managed service should be considered to remove complexity and minimize resources.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Security, Security

How to Prevent Real-Time API Abuse

April 18, 2019 — by Radware


The widespread adoption of mobile and IoT devices, and increased use of cloud systems are driving a major change in modern application architecture. Application Programming Interfaces (APIs) have emerged as the bridge to facilitate communication between different application architectures. However, with the widespread deployment of APIs, automated attacks on poorly protected APIs are mounting. Personally Identifiable Information (PII), payment card details, and business-critical services are at risk due to automated attacks on APIs.


So what are key API vulnerabilities, and how can you protect against API abuse?

Authentication Flaws

Many APIs only check authentication status, but not whether the request is coming from a genuine user. Attackers exploit such flaws in various ways (including session hijacking and account aggregation) to imitate genuine API calls. Attackers also target APIs by reverse engineering mobile apps to discover how they call the API. If API keys are embedded in the app, this can result in an API breach. API keys should not be used alone for user authentication.

[You may also like: Are Your DevOps Your Biggest Security Risks?]

Lack of Robust Encryption

Many APIs lack robust encryption between API client and API server. Attackers exploit such vulnerabilities through man-in-the-middle attacks. Attackers also intercept unencrypted or poorly protected API transactions between API client and API server to steal sensitive information or alter transaction data.

What’s more, the ubiquitous use of mobile devices, cloud systems, and microservice design patterns has further complicated API security, as multiple gateways are now involved in facilitating interoperability among diverse web applications. The encryption of data flowing through all these channels is paramount.

[You may also like: HTTPS: The Myth of Secure Encrypted Traffic Exposed]

Business Logic Vulnerability

APIs are vulnerable to business logic abuse. Attackers make repeated, large-scale API calls against an application server, or send slow POST requests, resulting in denial of service. A DDoS attack on an API can result in massive disruption on a front-end web application.

Poor Endpoint Security

Most IoT devices and microservice tools are programmed to communicate with their server through API channels. These devices authenticate themselves on API servers using client certificates. Hackers attempt to gain control over an API from the IoT endpoint and, if they succeed, they can easily re-sequence API calls, which can result in a data breach.

[You may also like: The Evolution of IoT Attacks]

How You Can Prevent API Abuse

A bot management solution that defends APIs against automated attacks and ensures that only genuine users have the ability to access APIs is paramount. When evaluating such a solution, consider whether it offers broad attack detection and coverage, comprehensive reporting and analytics, and flexible deployment options.

Other steps you can (and should) take include:

  • Monitor and manage API calls coming from automated scripts (bots)
  • Drop primitive authentication
  • Implement measures to prevent API access by sophisticated human-like bots
  • Robust encryption is a must-have
  • Deploy token-based rate limiting equipped with features to limit API access based on the number of IPs, sessions, and tokens (a minimal sketch follows this list)
  • Implement robust security on endpoints
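
To make the rate-limiting idea concrete, here is a minimal token-bucket sketch in Python; keying buckets by API token and the chosen rates are illustrative, and a real deployment would persist buckets centrally rather than in one process:

# Minimal token-bucket rate limiter, one bucket per API token (illustrative).
import time

class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate = rate               # tokens replenished per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.updated
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                   # reject: allowance exceeded

buckets = {}                           # one bucket per API token

def allow_request(api_token: str) -> bool:
    bucket = buckets.setdefault(api_token, TokenBucket(rate=5, burst=10))
    return bucket.allow()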

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Security, Attack Types & Vectors, Botnets, Security

Are Connected Cows a Hacker’s Dream?

April 3, 2019 — by Mike O'Malley


Humans aren’t the only ones consumed with connected devices these days. Cows have joined our ranks.

Believe it or not, farmers are increasingly relying on IoT devices to keep their cattle connected. No, not so that they can moo-nitor (see what I did there?) Instagram, but to improve efficiency and productivity. For example, in the case of dairy farms, robots feed, milk and monitor cows’ health, collecting data along the way that helps farmers adjust techniques and processes to increase milk production, and thereby profitability.

The implications are massive. As the Financial Times pointed out, “Creating a system where a cow’s birth, life, produce and death are not only controlled but entirely predictable could have a dramatic impact on the efficiency of the dairy industry.”

From Dairy Farm to Data Center

So, how do connected cows factor into cybersecurity? By the simple fact that the IoT devices tasked with milking, feeding and monitoring them are turning dairy farms into data centers – which has major security implications. Because let’s face it, farmers know cows, not cybersecurity.

Indeed, the data collected are stored in data centers and/or a cloud environment, which opens farmers up to potentially costly cyberattacks. Think about it: The average U.S. dairy farm is a $1 million operation, and the average cow produces $4,000 in revenue per year. That’s a lot at stake—roughly $19,000 per week, given the average dairy farm’s herd—if a farm is struck by a ransomware attack.

[You may also like: IoT Expands the Botnet Universe]

It would literally be better for an individual farm to pay a weekly $2,850 ransom to keep the IoT network up. And if hackers were sophisticated enough to launch an industry-wide attack, the dairy industry would be better off paying $46 million per week in ransom rather than lose revenue.

5G Cows

Admittedly, connected cows aren’t new; IoT devices have been assisting farmers for several years now. And it’s a booming business. Per the FT, “Investment in precision ‘agtech’ systems reached $3.2bn globally in 2016 (including $363m in farm management and sensor technology) and is set to grow further as dairy farms become a test bed for the wider IoT strategy of big technology companies.”

[You may also like: Securing the Customer Experience for 5G and IoT]

But what is new is the rollout of 5G networks, which promise faster speeds, low latency and increased flexibility—seemingly ideal for managing IoT devices. But, as we’ve previously discussed, with new benefits come new risks. As network architectures evolve to support 5G, security vulnerabilities will abound if cybersecurity isn’t prioritized and integrated into a 5G deployment from the get-go.

In the new world of 5G, cyberattacks can become much more potent, as a single hacker can easily multiply into an army through botnet deployment. Indeed, 5G opens the door to a complex world of interconnected devices that hackers will be able to exploit via a single point of access in a cloud application to quickly expand an attack radius to other connected devices and applications. Just imagine the impact of a botnet deployment on the dairy industry.

[You may also like: IoT, 5G Networks and Cybersecurity: A New Atmosphere for Mobile Network Attacks]

I don’t know about you, but I like my milk and cheeses. Here’s to hoping dairy farmers turn to the experts to properly manage their security before the industry is hit with devastating cyberattacks.

2018 Mobile Carrier Ebook

Read “Creating a Secure Climate for your Customers” today.

Download Now

Application Security, Attack Types & Vectors, Security

Bots 101: This is Why We Can’t Have Nice Things

March 19, 2019 — by Daniel Smith


In our industry, the term bot applies to software applications designed to perform automated tasks at a high rate of speed. Typically, I use bots at Radware to aggregate data for intelligence feeds or to automate repetitive tasks. I also spend a vast majority of my time researching and tracking emerging bots that were designed and deployed in the wild with bad intentions.

As I’ve previously discussed, there are generally two different types of bots, good and bad. Some of the good bots include Search Bots, Crawlers and Feed Fetchers that are designed to locate and index your website appropriately so it can become visible online. Without the aid of these bots, most small and medium-sized businesses wouldn’t be able to establish an authority online and attract visitors to their site.

[You may also like: The Big, Bad Bot Problem]

On the dark side, criminals use the same technology to create bots for illicit and profitable activities, such as scraping content from one website and selling it to another. These malicious bots can also be leveraged to take over accounts and generate fake reviews, as well as commit ad fraud and stress your web applications. Malicious bots have even been used to create fake social media accounts and influence elections.

With close to half of all internet traffic today being non-human, bad bots represent a significant risk for businesses, regardless of industry or channel.

As the saying goes, this is why we can’t have nice things.

Targeted Industries

If a malicious bot targets an online business, it will be impacted in one way or another, whether in website performance, sales conversions, competitive advantage, analytics or user experience. The good news is organizations can take action against bot activity in real time, but first, they need to understand their own risk before considering a solution.

[You may also like: Credential Stuffing Campaign Targets Financial Services]

  • E-Commerce – The e-commerce industry faces bot attacks that include account takeovers, scraping, inventory exhaustion, scalping, carding, skewed analytics, application DoS, ad fraud, and account creation.
  • Media – Digital publishers are vulnerable to automated attacks such as ad fraud, scraping, skewed analytics, and form spam.
  • Travel – The travel industry mainly deals with scraping attacks but can suffer from inventory exhaustion, carding and application DoS as well.
  • Social Networks – Social platforms deal with automated bot attacks such as account takeovers, account creation, and application DoS.
  • Ad Networks – Bots that create Sophisticated Invalid Traffic (SIVT) target ad networks for ad fraud activity such as fraudulent clicks and impression performance.
  • Financial Institutions – Banking, financial and insurance industries are all high-value targets for bots that leverage account takeovers, application DoS or content scraping.

Types of Application Attacks

It’s becoming increasingly difficult for conventional security solutions to track and report on sophisticated bots that continuously change their behavior, obfuscate their identity and utilize different attack vectors for various industries. Once you begin to understand the risk posed by malicious automated bots, you can start to focus on the attack vectors you may face as a result of their activity.

[You may also like: Adapting Application Security to the New World of Bots]

  • Account takeover – Account takeovers include credential stuffing, password spraying, and brute force attacks that are used to gain unauthorized access to a targeted account. Credential stuffing and password spraying are two popular techniques used today. Once hackers gain access to an account, they can begin additional stages of infection, data exfiltration or fraud.
  • Scraping – Scraping is the process of extracting data or information from a website and publishing it elsewhere. Content, price and inventory scraping is also used to gain a competitive advantage. These scrape bots crawl your web pages for specific information about your products. Typically, scrapers steal the entire content from websites or mobile applications and publish it to gain traffic.
  • Inventory exhaustion – Inventory exhaustion is when a bot is used to add hundreds of items to a cart and later abandon them to prevent real shoppers from buying the products.
  • Inventory scalping – Hackers deploy retail bots to gain an advantage in buying goods and tickets during a flash sale, and then resell them later at a much higher price.
  • Carding – Carders deploy bots on checkout pages to validate stolen card details and to crack gift cards.
  • Skewed analytics – Automated invalid traffic directed at your e-commerce portal skews metrics and misleads decision-making when applied to advertisement budgets and other business decisions. Bots pollute metrics, disrupt funnel analysis, and inhibit KPI tracking.
  • Application DoS – Application DoS attacks slow down e-commerce portals by exhausting web server resources, third-party APIs, inventory databases and other critical resources to the point that they are unavailable for legitimate users.
  • Ad fraud – Bad bots are used to generate invalid traffic designed to create false impressions and generate illegitimate clicks on websites and mobile apps.
  • Account creation – Bots are used to create fake accounts on a massive scale for content spamming, SEO and skewing analytics.

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Symptoms of a Bot Attack

  • A high number of failed login attempts
  • Increased chargebacks and transaction disputes
  • Consecutive login attempts with different credentials from the same HTTP client
  • Unusual request activity for selected application content and data
  • Unexpected changes in website performance and metrics
  • A sudden increase in account creation rate
  • Elevated traffic for certain limited-availability goods or services

Intelligence is the Solution

Finding a solution that arms partners and service providers with the latest information related to potential attacks is critical. In my opinion, a Bot Intelligence Feed is one of the best ways to gain insight into the threats you face while identifying malicious bots in real time.

A Bot Intelligence Feed provides the latest data on newly detected IPs for various bot categories – data center bots, bad user agents, advanced persistent bots, backlink checkers, monitoring bots, aggregators, social network bots and spam bots – as well as third-party fraud intelligence directories and services used to keep track of externally flagged IPs. This ultimately gives organizations the best chance to proactively close security holes and take action against emerging threat vectors.
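
As a minimal sketch of how such a feed is typically consumed (the feed URL and its one-IP-per-line format are assumptions here, not any specific vendor's API):

# Load a plain-text feed of flagged IPs and check clients against it.
import requests

def load_feed(url: str) -> set:
    """Fetch a one-IP-per-line feed into a fast lookup set."""
    lines = requests.get(url, timeout=10).text.splitlines()
    return {line.strip() for line in lines if line.strip()}

flagged = load_feed("https://feeds.example.com/bad-bot-ips.txt")  # illustrative URL

def is_flagged(client_ip: str) -> bool:
    return client_ip in flagged   # block or challenge when True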

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Security, Botnets

Will We Ever See the End of Account Theft?

March 12, 2019 — by David Hobbs


There’s an 87-gigabyte file containing 773 million unique email addresses and passwords being sold on online forums today, called “Collection #1.” We know that many users reuse the same passwords all over the internet; even after all the years of data breaches, account takeovers and thefts, user behavior stays the same. Most people want the least complex means possible to use a website.

So, what does this mean for businesses?

Anywhere you have applications guarded with username/password mechanisms, there are going to be credential stuffing attacks, courtesy of botnets. A modern botnet is a distributed network of computers around the globe that can perform sophisticated tasks and is often comprised of compromised computers belonging to other people. Essentially, these botnets are looking to steal the sand from the beach, one grain at a time, and they are never going to stop. If anything, the levels of sophistication of the exploitation methods have grown exponentially.

Today, a Web Application Firewall (WAF) alone is not enough to fight botnets. WAFs can do some of the job, but today’s botnets are very sophisticated and can mimic real human behaviors. Many companies relied on CAPTCHA as their first line of defense, but it’s no longer sufficient to stop bots. In fact, there are now browser plugins to break CAPTCHA.

[You may also like: WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10]

Case in point: In 2016 at Black Hat Asia, some presenters shared that they were 98% successful at breaking these mechanisms. 98%! We, as humans, are probably nowhere near that success rate. Personally, I’m likely at 70-80%, depending on what words (and backwards letters!) CAPTCHA presents while I’m rushing to get my work done. Even with picture CAPTCHA, I pass maybe 80% of my initial attempts; I can’t ever get those “select the edges of street signs” traps! So, what if bots are successful 98% of the time and humans only average 70%?

CAPTCHA Alone Won’t Save You

If your strategy to stop bots is flawed and you rely on CAPTCHA alone, what are some of the repercussions you may encounter? First, your web analytics will be severely flawed, impacting your ability to accurately gauge the real usage of your site. Second, advertising fraud from affiliate sites can run up your bill. Third, CAPTCHA-solving botnets will still be able to conduct other nefarious deeds, like manipulating inventory, scraping data, and launching attacks on your site.

[You may also like: The Big, Bad Bot Problem]

Identification of good bots and bad bots requires a dedicated solution. Some of the largest websites in the world have admitted that this is an ongoing war for them. Machine learning and deep learning technologies are the only way to stay ahead in today’s world.  If you do not have a dedicated anti-bot platform, you may be ready to start evaluating one today.

Read “Radware’s 2018 Web Application Security Report” to learn more.

Download Now

Application Security, Attack Types & Vectors, Security

Adapting Application Security to the New World of Bots

March 7, 2019 — by Radware


In 2018, organizations reported a 10% increase in malware and bot attacks. Considering the pervasiveness (70%) of these types of attacks reported in 2017, this uptick is likely having a big impact on organizations globally. Compounding the issue is the fact that the majority of bots are actually leveraged for good intentions, not malicious ones. As a result, it is becoming increasingly difficult for organizations to identify the difference between the two, according to Radware’s Web Application Security in a Digitally Connected World report.

Bots are automated programs that run independently to perform a series of specific tasks, for example, collecting data. Sophisticated bots can handle complicated interactive situations. More advanced programs feature self-learning capabilities that can address automated threats against traditional security models.

Positive Impact: Business Acceleration

Automated software applications can streamline processes and positively impact overall business performance. They replace tedious human tasks and speed up processes that depend on large volumes of information, thus contributing to overall business efficiency and agility.

Good bots include:

  • Crawlers — are used by search engines and contribute to SEO and SEM efforts
  • Chatbots — automate and extend customer service and first response
  • Fetchers — collect data from multiple locations (for instance, live sporting events)
  • Pricers — compare pricing information from different services
  • Traders — are used in commercial systems to find the best quote or rate for a transaction

[You may also like: Bot or Not? Distinguishing Between the Good, the Bad & the Ugly]

Negative Impact: Security Risks

The Open Web Application Security Project (OWASP) lists 21 automated threats to applications that can be grouped together by business impacts:

  • Scraping and Data Theft — Bots try to access restricted areas in web applications to get hold of sensitive data such as access credentials, payment information and intellectual property. One method of collecting such information is called web scraping. A common example of a web-scraping attack is against e-commerce sites, where bots quickly hold or even fully clear out the inventory.
  • Performance — Bots can impact the availability of a website, bringing it to a complete or partial denial-of-service state. The consumption of resources such as bandwidth or server CPU immediately leads to a deterioration in the customer experience, lower conversions and a bad image. Attacks can be large and volumetric (DDoS) or not (low and slow, buffer overflow).
  • Poisoning Analytics — When a significant portion of a website’s visitors are fictitious, expect biased figures such as fraudulent links. Compounding this issue is the fact that third-party tools designed to monitor website traffic often have difficulty filtering bot traffic.
  • Fraud and Account Takeover — With access to leaked databases such as those from Yahoo and LinkedIn, hackers use bots to run through usernames and passwords to gain access to accounts. Then they can access restricted files, inject scripts or make unauthorized transactions.
  • Spammers and Malware Downloaders — Malicious bots constantly target mobile and web applications. Using sophisticated techniques like spoofing their IPs, mimicking user behavior (keystrokes, mouse movements), abusing open-source tools (PhantomJS) and headless browsers, bots bypass CAPTCHA, challenges and other security heuristics.

[You may also like: The Big, Bad Bot Problem]

Blocking Automated Threats

Gawky bot attacks against websites are easy to block with IP- and reputation-based signatures and rules. However, because of the increase in sophistication and frequency of attacks, it is important to be able to uniquely identify the attacking machine. This process is referred to as device fingerprinting. The process should be IP agnostic, yet unique enough to be confidently acted upon. At times, resourceful attacking sources may actively try to manipulate the fingerprint extracted by the web tool, so it should also be proof against client-side manipulation.


Web client fingerprint technology introduces significant value in the context of automated attacks, such as web scraping; Brute Force and advanced availability threats, such as HTTP Dynamic Flood; and low and slow attacks, where the correlation across multiple sessions is essential for proper detection and mitigation.

For each fingerprint-based, uniquely identified source, a historical track record is stored with all security violations, activity records and application session flows. Each abnormal behavior is registered and scored. Violation examples include SQL injection, suspicious session flow and high page access rate. Once a threshold is reached, the source with the marked fingerprint will not be allowed to access the secured application.
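
A minimal sketch of that scoring logic in Python; the violation names, weights and threshold here are illustrative:

# Track violation scores per device fingerprint and block past a threshold.
from collections import defaultdict

WEIGHTS = {"sql_injection": 10, "suspicious_session_flow": 5, "high_page_rate": 3}
BLOCK_THRESHOLD = 20          # illustrative cutoff

scores = defaultdict(int)     # keyed by device fingerprint

def register_violation(fingerprint: str, violation: str) -> bool:
    """Record a violation; return True once the source should be blocked."""
    scores[fingerprint] += WEIGHTS.get(violation, 1)
    return scores[fingerprint] >= BLOCK_THRESHOLD

register_violation("fp-7a1c", "sql_injection")          # score: 10
print(register_violation("fp-7a1c", "sql_injection"))   # 20 >= 20 -> True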

[You may also like: IoT Expands the Botnet Universe]

Taking the Good with the Bad

Ultimately, understanding and managing bots isn’t about crafting a strategy driven by a perceived negative attitude toward bots because, as we’ve explained, bots serve many useful purposes for propelling the business forward. Rather, it’s about equipping your organization to act as a digital detective to mitigate malicious traffic without adversely impacting legitimate traffic.

Organizations need to embrace technological advancements that yield better business performance while integrating the necessary security measures to guard their customer data and experience.

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now