
Attack Types & Vectors

Ransomware: To Pay or Not To Pay?

August 22, 2019 — by Radware


Ransomware is a type of malware that restricts access to user data by encrypting an infected computer’s files and demanding payment in exchange for the decryption key. The attacker often distributes a large-scale phishing campaign in the hope that someone will open the malicious attachment or link. Once infected, the device is unusable, and the victim is faced with the decision of whether or not to pay the extortionist to recover the decryption key.

Only in certain cases have keys been recovered. Over the years, Radware researchers have also followed the ransomware-as-a-service (RaaS) industry, which offers novice users the ability to launch their own campaigns for an established price or a percentage of the profit. Ransomware has existed for over two decades but only recently gained popularity among for-profit criminals. That popularity has since tapered off because ransomware campaigns generate a great deal of attention, alerting potential victims and thereby discouraging them from paying; campaigns that attract less attention are typically more profitable.

Ransomware campaigns follow a standard pattern of increased activity in the beginning before settling down. Ransomware, once incredibly popular, has fallen out of favor with attackers, who now prefer cryptojacking campaigns. Because of the amount of attention that ransomware campaigns generate, most groups target a wide range of industries, including manufacturing, retail and shipping, in the hope of finding some success.

[You may also like: The Origin of Ransomware and Its Impact on Businesses]

If you think that your organization could be a target of a ransomware campaign, shoring up your network is critical. Ransomware can be delivered in various ways, most commonly via spam/phishing emails containing a malicious document. Other forms of infection include exploit kits, Trojans and the use of exploits to gain unauthorized access to an infected device.

Learn more:

Download Radware’s “Hackers Almanac” to learn more.

Download Now

DDoS

How to Choose a Cloud DDoS Scrubbing Service

August 21, 2019 — by Eyal Arazi


Buying a cloud-based security solution is more than just buying a technology. When you buy a physical product, you care mostly about its immediate features and capabilities. A cloud-based service, however, is more than just lines on a spec sheet; it is a combination of multiple elements, all of which must work in tandem to guarantee performance.

Cloud Service = Technology + Network + Support

There are three primary elements that determine the quality of a cloud security service: technology, network, and support.

Technology is crucial for the underlying security and protection capabilities. The network provides the solid foundation on which the technology runs, and the operation and support component brings them together and keeps them working.

[You may also like: Security Considerations for Cloud Hosted Services]

Take any one out, and the other two legs won’t be enough for the service to stand on.

This is particularly true when looking for a cloud-based DDoS scrubbing solution. Distributed Denial of Service (DDoS) attacks have distinct features that make them different from other types of cyber-attacks, so a cloud-based DDoS protection service has specific requirements covering the full gamut of technology, network, and support.

Technology

As I explained earlier, technology is just one facet of what makes up a cloud security service. However, it is the building block on which everything else is built.

The quality of the underlying technology is the most important factor in determining the quality of protection. It is the technology that determines how quickly an attack will be detected; it is the quality of the technology that determines whether it can tell the difference between a spike in legitimate traffic and a DDoS attack; and it is the technology that determines whether it can adapt to attack patterns in time to keep your application online.

[You may also like: Why You Still Need That DDoS Appliance]

To make sure that your protection is up to speed, there are a few core features you want to verify that your cloud service provides:

  • Behavioral detection: It is often difficult to tell the difference between a legitimate surge in customer traffic – say, during peak shopping periods – and a surge caused by a DDoS attack. Rate-based detection can’t tell the difference, resulting in false positives. Therefore, behavioral detection, which looks not just at traffic rates but also at non-rate behavioral parameters, is a must-have capability (a simplified illustration follows this list).
  • Automatic signature creation: Attackers rely more and more on multi-vector and ‘hit-and-run’ burst attacks, which frequently switch between different attack methods. Any defense mechanism based on manual configurations will fail because it can’t keep up with the changes. Only defenses that provide automatic, real-time signature creation can keep up with such attacks and tailor defenses to the specific characteristics of the attack.
  • SSL DDoS protection: As more and more internet traffic becomes encrypted – over 85% according to the latest estimates – protection against encrypted DDoS floods becomes ever more important. Attackers can leverage encrypted traffic to launch potent DDoS attacks that quickly overwhelm server resources. Therefore, protection against SSL-based DDoS attacks is key.
  • Application-layer protection: As more and more services migrate online, application-layer (L7) DDoS attacks are increasingly used to take them down. Many traditional DDoS mitigation services look only at network-layer (L3/4) protocols, but up-to-date protection must include the application layer as well.
  • Zero-day protection: Finally, attackers are constantly finding new ways of bypassing traditional security mechanisms and hitting organizations with attack methods never seen before. Even small changes to attack signatures let hackers craft attacks that manual signatures do not recognize. That’s why zero-day protection features, which can adapt to new attack types, are an absolute must-have.
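To make the rate-based vs. behavioral distinction concrete, here is a deliberately simplified Python sketch. It is not Radware’s algorithm – real behavioral engines weigh many more parameters – but it shows why a check that also considers a non-rate parameter (here, the diversity of source IPs) can pass a flash crowd that a pure rate threshold would flag:

# Toy illustration of rate-based vs. behavioral DDoS detection.
# NOT a production algorithm; thresholds and parameters are made up.
from collections import Counter
import math

def rate_alert(requests_per_sec, threshold=10_000):
    # Rate-based: fires on any spike, legitimate flash crowd or attack alike.
    return requests_per_sec > threshold

def behavioral_alert(requests_per_sec, src_ips, threshold=10_000):
    # Behavioral: a spike alone is not enough; also check whether traffic
    # concentrates on few sources (low entropy), unlike a genuine flash crowd.
    if not src_ips:
        return False
    counts = Counter(src_ips)
    total = len(src_ips)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return requests_per_sec > threshold and entropy < 4.0

# A flash sale: 12,000 req/s from many distinct shoppers trips rate_alert
# but not behavioral_alert, avoiding the false positive.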

[You may also like: Modern Analytics and End-to-End Visibility]

Network

The next building block is the network. Whereas the technology stops the attack itself, it is the network that scales out the service and deploys it on a global scale. Here, too, there are specific requirements that are uniquely important in the case of DDoS scrubbing networks:

  • Massive capacity: When it comes to protection against volumetric DDoS attacks, size matters. DDoS attack volumes have been steadily increasing over the past decade, with each year reaching new peaks. That is why having large-scale, massive capacity at your disposal is an absolute requirement to stop attacks.
  • Dedicated capacity: It’s not enough, however, to just have a lot of capacity. It is also crucial that this capacity be dedicated to DDoS scrubbing. Many security providers rely for DDoS mitigation on their CDN capacity, which is already being widely utilized. It is much more prudent to focus on networks whose capacity is dedicated to DDoS scrubbing and segregated from other services such as CDN, WAF, or load balancing.
  • Global footprint: Fast response and low latency are crucial components of service performance. A critical component of latency, however, is the distance between the customer and the host. Therefore, to minimize latency, it is important for the scrubbing center to be as close as possible to the customer, which can only be achieved with a globally distributed network with a large footprint.

Support

The final piece of the ‘puzzle’ of providing a high-quality cloud security network is the human element; that is, maintenance, operation and support.

Beyond the cold figures of technical specifications, and the bits-and-bytes of network capacity, it is the service element that ties together the technology and network, and makes sure that they keep working in tandem.

[You may also like: 5 Key Considerations in Choosing a DDoS Mitigation Network]

Here, too, there are a few key elements to look at when considering a cloud security network:

  • Global Team: Maintaining global operations of a cloud security service requires a team large enough to ensure 24x7x365 operations. Moreover, sophisticated security teams use a ‘follow-the-sun’ model, with team members distributed strategically around the world, to make sure that experts are always available, regardless of time or location. Only teams that reach a certain size – and companies that reach a certain scale – can guarantee this.
  • Team Expertise: Apart from the sheer number of team members, it is also their expertise that matters. Cyber security is a discipline, and DDoS protection, in particular, is a specialization. Only a team with a distinguished, long track record in protecting specifically against DDoS attacks can ensure that you have the staff, skills, and experience required to be fully protected.
  • SLA: The final qualification is the service guarantees provided by your cloud security vendor. Many service providers make extensive guarantees but fall woefully short when it comes to backing them up. The Service Level Agreement (SLA) is your guarantee that your service provider is willing to put their money where their mouth is. A high-quality SLA must provide individual, measurable metrics for attack detection, diversion (if required), alerting, mitigation, and uptime. Falling short of those should call into question your vendor’s ability to deliver on their promises.

A high-quality cloud security service is more than the sum of its parts. It is the technology, network, and service all working in tandem – and firing on all cylinders – to provide superior protection. Falling short on any one element can jeopardize the quality of protection delivered to customers. Use the points outlined above to ask whether your cloud security vendor has all the right pieces to provide quality protection – and if they don’t, perhaps it is time to consider alternatives.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Application Security

Threats on APIs and Mobile Applications

August 20, 2019 — by Pascal Geenens


Web Application Programming Interfaces, or Web APIs, are essential building blocks of our digital lives. They provide the tools and protocols that enable web and mobile applications to provide dynamic content and up to date, personalized information.

Our cars, bikes, and fitness trackers rely on Web APIs to track and guide us toward our personal goals. In our homes, digital personal assistants help us manage our schedules, control our home, play our music, and much more, and they do so by interacting with an API provided as a service in the cloud. Google Pay, Apple Pay, PayPal, and many others enable businesses around the globe to process customer payments at the press of a button or the swipe of a phone. Their APIs provide easy integration and increased security for online commercial businesses. Smart cities and Industry 4.0 are taking over the manufacturing world, enabling new interconnected and automated manufacturing technologies and processes.

Cyber-physical systems monitor physical processes and make decentralized decisions based on a virtual model of the real world. Industrial Internet of Things (IoT) devices communicate and cooperate in real time with users and across organizations.

These are only a few examples of the digital world we live in today, and all of them rely on one very essential building block: the Web API.

What Are Web APIs?

A Web API is a set of tools and protocols that provide a predefined interface for a request and response messaging system between two programs. It exposes reliable content and provides operation negotiation through a common defined language. REST, short for REpresentational State Transfer, and the Simple Object Access Protocol (SOAP) are the most common protocol styles for cloud service architectures, with REST by far the most common one.

[You may also like: How to Prevent Real-Time API Abuse]

SOAP used to be the go-to messaging protocol that almost every web service used; it is a standardized protocol that allows the exchange of messages using underlying protocols such as HTTP, SMTP, TCP, UDP, and others. SOAP builds on a large number of frameworks using XML to format the messages. The standard includes a Web Services Description Language (WSDL) which defines the structure of the data in the message. SOAP is an official web standard with specifications maintained and developed by the World Wide Web Consortium (W3C).

As opposed to SOAP, REST is much less of a protocol and more of an architectural style. REST only provides a set of guidelines and allows much more flexibility in the implementation of the protocol by its developers. As such, the REST architecture gained much popularity and better fit the agile, continuously evolving specs and requirements of modern-day web services.

[Figure: The percentages of API architectural styles for profiles in the ProgrammableWeb API directory. Source: https://www.programmableweb.com/news/which-api-types-and-architectural-styles-are-most-used/research/2017/11/26]

REST is used to build web services that are lightweight, scalable, and easy to maintain. Services built on the REST architecture are called RESTful services. The protocol underlying REST is HTTP, the most common and standardized web protocol, supported by almost every system and device on the internet. Any program that can talk HTTP is a potential REST client; any system that can process HTTP requests can expose RESTful services. Talking the talk is not enough, though: there needs to be an agreement between consumer and service for them to exchange actionable and useful information, hence the use of a common language such as XML or JSON.

[You may also like: Adapting Application Security to the New World of Bots]

REST requests and JSON structures are straightforward concepts. A request is very much like a URL with some arguments:

https://my.restservice.local/get_user?id=1

The response a web service located at that URL returns might be a JSON-formatted message. JSON is a human- and machine-readable format that makes it convenient for both humans and machines to find structure in, and derive meaning from, the data:

// JSON Object
{
  "user": {
    "id": 1,
    "name": "admin",
    "groupid": 1,
    "password": "123456"
  }
}
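Because any HTTP-capable program is a potential REST client, consuming this (hypothetical) service takes only a few lines. A sketch in Python, using the widely used requests library against the example URL above:

import requests

# Call the hypothetical service from the example above.
resp = requests.get("https://my.restservice.local/get_user", params={"id": 1})
resp.raise_for_status()

user = resp.json()["user"]             # parse the JSON response body
print(user["name"], user["groupid"])   # structured data, trivially machine-readable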

The API Economy

To create new online applications within acceptable time frames, one should use existing, proven components for basic, repetitive tasks. Focusing development effort on the innovative and differentiating parts of the application, and not wasting cycles on commodity functionality, is how in-house development stays productive and commercially viable. The ‘why’ of in-house development is mainly the innovation and differentiation of the application – what makes it stand out in the crowd.

[You may also like: 5G Security in an API-Driven Economy]

Most online applications rely on third-party and open source components, some of which could be Web APIs hosted in different clouds. Using third-party hosted Web APIs, developers can instantly add support for repetitive or complex processes in just a few lines of code. Consuming commercial-grade third-party APIs will typically not be free; it is generally billed on a subscription and per-call model, which embodies the ‘economy’ part of ‘the API economy.’

Credit card processing APIs are probably the most dominant component used by commercial websites. It is more efficient and more secure to rely on a proven and certified third party to process customer payments. The security and trustworthiness of, say, PayPal results in much less resistance from visitors than asking them to provide and store their credit card details on your website. Failing to provide an extensive list of payment options will negatively impact the success of your site. Think about how many more sales you could realize if your mobile app integrated with Apple and Google Pay and all your potential customers had to do was swipe from left to right to buy your products and services. No information or personal details to input, no additional authentication steps; all that is needed is a big smile for their phone to authorize the transaction and complete the purchase.

The Radware ShieldSquare Bot Manager relies on this very same Web API concept. Radware Bot Manager exposes a cloud-based service into which on-premise reverse proxies and web applications make API calls to differentiate legitimate users and good bots from bad bot requests. The service is provided to our customers as a subscription, with pricing based on tiers of maximum API calls per month.

[You may also like: Navigating the Bot Ecosystem]

APIs, Built For and Consumed By Machines

APIs are by definition interfaces between machines. They are supposed to be consumed by devices and applications. Devices are machines, and their communication with the API is from and to machines (M2M). Mobile applications, dynamic web pages, or native user interface clients provide a graphical representation through which humans interact with the API. The graphical interface translates the interactions of the user into API requests while the data received in the API’s response message is rendered into a visual representation that makes more sense to the user.

Machines are good at processing structured data but have a harder time crunching through visual representations of that same data. Think about a paragraph in your document processor versus a scanned image of that same text. The visual representation can be translated back to its original data representation – text, in this case – but not without complex tooling such as Optical Character Recognition (OCR), and even then only with a certain degree of success, often introducing errors.

[Figure – do the exercise: this image provides three representations of the same data. Which would you prefer to interact with, and which do you think a machine would prefer? Data from https://opensource.adobe.com/Spry/samples/dataregion/JSONDataSetSample.html, formatted using http://json2table.com]

Now put yourself in the shoes of an attacker that wants to scrape the product list from an online commercial website. Would you go at it by scraping and translating HTML pages and following links to discover and encode the full catalog? Or, would you first try to discover if an API feeds the dynamic content that gets rendered in the web browser? If you went for the latter, consider yourself a step closer to being a real hacker 😉

Securing Web APIs

The online nature of Web APIs makes their communications subject to snooping, man-in-the-middle and replay attacks. As with the rest of the privacy-conscious internet, all communication should be encrypted and origins verified. As REST relies on HTTP, SSL and TLS with certificates are the bare essentials.

Unless your Web API can verify the requesting client’s origin through a certificate, and as such leverages mutual TLS (mTLS), there is still no guarantee that the other side of the communication is a legitimate program with good intentions. Web APIs are built on the same stateless paradigm and protocols used by web applications. While web applications are made stateful by introducing (hidden) session keys that get posted on each subsequent request after an initial login, Web API calls are by definition not stateful, but they can leverage the same ideas and concepts.
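As a sketch of what client-side verification looks like in practice, Python’s requests library can present a client certificate for mTLS while also validating the server against a CA bundle. The endpoint and file names below are placeholders:

import requests

# mTLS: the client proves its identity with a certificate, and verifies
# the server against a trusted CA bundle. Endpoint and paths are placeholders.
resp = requests.get(
    "https://api.example.com/v1/status",   # hypothetical endpoint
    cert=("client.crt", "client.key"),     # client certificate + private key
    verify="ca-bundle.pem",                # CA bundle for server verification
)
resp.raise_for_status()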

[You may also like: Good Bots Vs. Bad Bots: What’s The Impact On Your Business?]

JSON Web Token (JWT), for example, is an open standard (RFC 7519) that defines a self-contained way for securely transmitting information between parties as a JSON object. The token is signed using a secret or a public/private key pair and as such, can be verified and trusted by a receiving service. Because of the self-contained nature of the token, authorization can be performed based on just the token, no need for resolving keys or tokens into actual identity.
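For illustration, signing and verifying such a token takes a few lines with the PyJWT library; the secret and claims below are made up:

import jwt  # PyJWT: pip install pyjwt

SECRET = "change-me"  # shared secret; a public/private key pair also works

# Issuer side: sign a self-contained set of claims.
token = jwt.encode({"sub": "client-42", "scope": "read:users"}, SECRET,
                   algorithm="HS256")

# Receiving service: verify the signature and authorize from the claims alone.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
assert claims["scope"] == "read:users"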

For machine-to-machine communications, however, there is no interactive authentication step after which a session key or token gets generated. The way a consumer of a Web API can be authorized is by using some kind of shared secret that was agreed upon upfront and that is shared with the API as one of the call request arguments. That secret would have been obtained through a separate authentication and authorization step. Many third-party Web API providers require the author of an application to register and request access to the API, at which point he or she is provided with a personal access or API token. The API token identifies the client program and allows it to consume the API, while the providing service can authorize and track requests and utilization.
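In practice, the token obtained at registration travels with every call, either as a request argument or, commonly, in an Authorization header. A minimal sketch, with a hypothetical endpoint and token:

import requests

API_TOKEN = "0123-example-token"  # issued once, at registration time

resp = requests.get(
    "https://api.example.com/v1/orders",               # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_TOKEN}"},  # identifies the client
)
# The provider can now authorize, track, and bill requests per token.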

[You may also like: Application SLA: Knowing Is Half the Battle]

There are convenient APIs that provide authentication for third-party applications and can be leveraged by consumers and providers alike. Ever used your Google or Facebook account to get access to a service you never previously registered for? Applications and API providers can rely on a third party such as Google or Facebook, using them as a trusted middle-man to authenticate the consumer of the service. The consumer can decide to trust that the middle-man secures their private information and shares only what is agreed to and required for authorization with the provider’s service. The convenience brought to the consumer is single sign-on (SSO): the user registers and logs in only once with Google and can then access and consume all the services and applications that rely on that middle-man. An example of such a standardized protocol is OAuth, used by Google and Facebook in its 2.0 incarnation.

So I Secured My API. I’m Safe, Right?!

Not quite there yet, keep reading! Requests to Web APIs can be authenticated, authorized, and their contents protected by encryption.

However, what if you host a commercial website? Your definition of an authenticated user is one who previously – in some cases just seconds ago – registered for access to your website. Automated programs, commonly referred to as bots, are very much able to create email aliases, register as fictitious persons, process the email validation requests and get unlimited access to your website, just as legitimate persons do. A single request performed by a bot does not look any different from a request originating from a real human. Only by chaining multiple requests into an intended behavior can one reveal the legitimate or malicious nature of the other party (see the sketch below).
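As a simplified illustration of chaining, the heuristic below scores a session rather than a single request; the paths and thresholds are purely hypothetical:

from statistics import pstdev

def looks_automated(timestamps, pages):
    # Each request on its own is indistinguishable from a human's.
    # Machine-like regularity across the chain gives the bot away:
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    too_regular = len(gaps) >= 5 and pstdev(gaps) < 0.05  # near-constant pacing

    # ...as does a scripted register -> verify -> scrape sequence.
    scripted_path = pages[:3] == ["/register", "/verify-email", "/catalog?page=1"]
    return too_regular or scripted_path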

[You may also like: 4 Emerging Challenges in Securing Modern Applications]

Some applications have the luxury of only servicing a limited number of consumers that can be vetted and certified through some clearance process – B2B applications, typically. Even then, tokens can be compromised and unauthorized use of the API is still very much a possibility. Even if tokens are not directly compromised, client-side Cross-Site Request Forgery (CSRF) and Server-Side Request Forgery (SSRF) could allow malicious actors to abuse the API. Even when you have strict control over your API or host internal private APIs that are used only by your front-end servers, they are still at risk.

Mobile Apps, API Consumers With a Twist

Mobile applications are nothing more than fancy consumers of Web APIs – at least those applications that provide on-demand, data-driven user experiences. Candy Crush is probably not the most appropriate example, though it is a great user experience – no pun intended.

API requests are machine to machine and by consequence do not immediately reveal the presence of a natural person or the execution environment of the application. A web application’s environment can be challenged and identified using JavaScript injected into the application’s web pages. The content – the application, in this case – returned by a web server is dynamic and can be adapted on the fly or redirected if the need arises.

[You may also like: Web Application Security in a Digitally Connected World]

A mobile application, however, is static once delivered and installed and relies on API calls to only update that portion of the representation that contains dynamic information. Unless the mobile application includes functionality that allows it to identify human behavior through motion sensors or click and swipe patterns, and it can certify it is running on a real device and not in an emulated environment, the back end APIs cannot verify the actual state of the application.

By nature, mobile applications are publicly accessible and can easily be reverse engineered to reveal their inner workings. Reversing mobile applications uncovers the specific API calls that directly follow user actions such as clicks (or presses), as well as any embedded static tokens or certificates which provide the keys to the API kingdom.

Furthermore, easy access to device emulation software such as QEMU allows anyone to run the application in thousands of virtual instances and perform automated actions such as advertisement clicks which can cost you dearly.

Conclusions

Securing your web APIs and ensuring legitimate use of them requires more than authentication and authorization. Even if you are sure that your application is coded with best security practices, your infrastructure is top-notch secured and audited, and the application contains no vulnerabilities, there is still the threat of automated attacks that leverage legitimate requests to build a chain of behavior that results in malicious activity. Each individual request is legitimate, but the end game of the thousands of bots disguised as legitimate users could be a depleted stock or information being processed and leveraged competitively against you.

Building web APIs for B2B, providing customers with mobile apps, and so on increases customer loyalty, prevents customer churn, and increases revenue and competitiveness. However, these same APIs and mobile applications can be turned into a weapon against your business, and in a very insidious way, without immediate indication that something or someone malicious is at work. A bot management solution should be considered when exposing APIs that are directly or indirectly connected with your business and revenue.

For those aware that applications without vulnerabilities are RBUs, consider the added layer of protection provided by a Web Application Firewall, which will prevent abuse of vulnerable code and infrastructure and even protect you from Cross-Site and Server-Side Request Forgery.

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now

Attack Types & Vectors

Behind the Disguise of Trojans

August 15, 2019 — by Radware


A Trojan horse is a malicious computer program masquerading as a useful or otherwise non-malicious, legitimate piece of software. Generally spread via social engineering and web attacks, Trojan horses often install a backdoor for remote, unauthorized access to the infected machine.

An attacker can perform various criminal tasks, including, but not limited to, “zombifying” the machine for use in a botnet or DDoS attack, data theft, downloading or installing additional malware, file modification or deletion, keylogging, monitoring the user’s screen, crashing the computer and anonymous internet viewing.

[You may also like: Here’s How You Can Better Mitigate a Cyberattack]

If you think that you are a target of this attack vector, secure both your corporate network and user devices. Proper education and user hygiene help prevent an employee from infecting your network. Often an employee opens a malicious document via phishing or is infected via a drive-by download, allowing the Trojan to download malicious payloads.

Learn more about this cyberthreat by watching our security researcher Daniel Smith outline the risks it presents to organizations:

Download Radware’s “Hackers Almanac” to learn more.

Download Now

Application Security

Automation for NetOps and DevOps

August 14, 2019 — by Prakash Sinha


Many organizations use public cloud service providers, some in addition to their private cloud and on-premise deployments. The right product mix not only reduces vendor lock-in and shadow IT, but is also an enabler for constituents that include IT administrators, network and security operations, as well as DevOps.

Maintaining application security and configurations across multiple environments is complex AND error prone and increases the attack surface. Careful testing is required to protect business-critical applications from hacking attempts, which may include denial of service, network and application attacks, malware and bots and impersonation.

A successful implementation will not only include the right cloud provider, the correct security, licensing and cost model, but also the appropriate automation tools to help secure the technology and security landscape consistently as applications are rolled out in a continuous integration and continuous delivery (CI/CD) process.

When Does Automation Become a Pressing Issue?

The reasons to automate may be due to resource constraints, configuration management, compliance or monitoring. For example, an organization may have very few people managing a large set of configurations, or the required skill set spans networking AND security products, or perhaps the network operation team does not have the operational knowledge of all the devices they are managing.

Below are a few benefits that automation provides:

  • Time savings and fewer errors for repetitive tasks
  • Cost reduction for complex tasks that require specialized skills
  • Ability to react quickly to events, for example,
    • Automatically commission new services at 80% utilization and decommission at 20%
    • Automatically adjust security policies to optimally address peace-time and attack traffic

[You may also like: How to Move Security Up the DevOps Priority List]

Automate the Complexity Away?

Let us consider a scenario where a development engineer has an application ready and needs to test application scalability, business continuity and security using a load balancer, prior to rolling it out through IT.

The developer may not have the time to wait for a long provisioning timeline, or the expertise and familiarity with the networking and security configurations. The traditional way would be to open a ticket, have an administrator reach out, understand the use case and then create a custom load balancer for the developer to test. This is certainly expensive to do, and it hinders CI/CD processes.

[You may also like: Economics of Load Balancing When Transitioning to the Cloud]

The objective here would be to enable self-service, in a way that the developer can relate to and work with, to test against the load balancer without networking and security intricacies getting in the way. A common way is to create a workflow that automates tasks using templates and, if the workflow spans multiple systems, hides the complexity from the developer by orchestrating them.

Successful end-to-end automation consists of several incremental steps that build upon each other. For example, identify all the actions administrators take that are prone to introducing configuration errors. Then script them – say, using CLI or Python scripts, as in the sketch below. Now you’re at a point where you’re ready to automate.
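As a sketch of that scripting step, a repetitive task such as spinning up a test load balancer might first be captured as a small Python function against the ADC’s management API. The endpoint and payload below are hypothetical:

import requests

ADC_API = "https://adc.example.local/api/v1"  # hypothetical management API

def create_test_lb(app_name, members):
    # One repeatable, error-free provisioning step, ready to be wrapped
    # by an orchestration tool (Ansible, vRealize, ...) later on.
    payload = {
        "name": f"{app_name}-test-lb",
        "algorithm": "round-robin",
        "members": members,
    }
    resp = requests.post(f"{ADC_API}/virtual-servers", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

# create_test_lb("inventory-app", ["10.0.0.11:8080", "10.0.0.12:8080"])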

You’ll have to pick automation and orchestration tools that will help you simplify the tasks, remove the complexity and make it consumable for your audience. Most vendors provide integrations for commonly used automation and orchestration systems – Ansible, Chef, Puppet, Cisco ACI and VMware vRealize, just to name a few.

Before you embark on the automation journey, identify the drivers, tie it to business needs and spend some time planning the transition by identifying the use cases and tools in use. Script the processes manually and test before automating using tools of your choice.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

Botnets

How Hard Is It to Build a Botnet?

August 13, 2019 — by David Hobbs


While working on Radware’s Ultimate Guide to Bot Management, I began wondering what it would take to build a botnet.

Would I have to dive into the Darknet and find criminal hackers and marketplaces to obtain the tools to make one? How much effort would it take to build a complicated system that would avoid detection and mitigation, and what level of expertise is required to make a scraping/credential stuffing and website abuse botnet?

At Your Fingertips

What I discovered was amazing. I didn’t even need to dive into the Darknet; everything anyone would need was readily available on the public internet. 

[You may also like: What You Need to Know About Botnets]

My learning didn’t end there. During this exploration, I noticed that many organizations use botnets in one form or another against their competitors or to gain a competitive advantage. Of course, I knew hackers leverage botnets for profit; but the availability of botnet building tools makes it easy for anyone to construct botnets that can access web interfaces and APIs while disguising their location and user agents. 

The use cases advertised by these toolsets range from data harvesting to account creation and account takeover, inventory manipulation, advertising fraud and a variety of ways to monetize and automate integrations into well-known IT systems.

[You may also like: 5 Things to Consider When Choosing a Bot Management Solution]

Mobile Phone Farms

These tool designers and services clearly know there is a market for cyber criminality, and some are shameless about promoting it.

For example, per a recent Vice article examining mobile phone farms, companies are incentivizing traffic to their apps and content by paying users. Indeed, it appears that people can make anywhere from $100-300 a month per mobile phone on apps like perk TV, Fusion TV, MyPoints or even categorizing shows for Netflix. They merely have to take surveys, watch television shows, categorize content or check into establishments.

[You may also like: Botnets: DDoS and Beyond]

More specifically, people are building mobile phone farms with cheap Android devices and used phones, scaling up their operations to the point where they can make a couple of thousand dollars (or more!) per month. These farms can be rented out to conduct more nefarious activities, like price scraping, data harvesting, ticket purchasing, account takeover, fake article writing and social media development, hacking, launching DDoS attacks and more. To complicate matters, thanks to proxy servers and VPN tools, it has become nearly impossible to detect if a phone farm is being used against a site.

What’s Next?

It’s not a far leap to assume that incentivized engagement may very well invite people to build botnets. How long until somebody develops an app to “rent your phone’s spare cycles” to scrape data, or watch content, write reviews, etc. (in other words, things that aren’t completely against the law) for money? Would people sign up to make extra beer money in exchange for allowing botnet operators to click on ads and look at websites for data harvesting?

I think it’s just a matter of time before this idea takes flight. Are you prepared today to protect against sophisticated botnets? Do you have a dedicated bot management solution? When botnets evolve into the next generation, will you be ready?

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now

Security

Navigating the Bot Ecosystem

August 8, 2019 — by Carl Herberger


Bots touch virtually every part of our digital lives — and now account for over half of all web traffic.

This represents both a problem and a paradox. Bots can be good, and bots can be bad; removing good bots is bad and leaving bad bots can be even worse.

Having said that, few businesses, application owners, users, designers, security practitioners, or network engineers can tell the difference between good bots and bad bots in their operating environments.

As the speed of business continues to accelerate and automate, the instantaneous ability to distinguish legitimate automated communications from illegitimate ones will be among the most crucial security controls we can onboard.

Differentiating Between Good & Bad Bots

Indeed, the volume of automated communication over the internet has dramatically increased; according to Radware’s research, bot traffic now represents a majority (52%) of internet traffic. But how much of that traffic is “good” vs. “bad”?

[You may also like: Good Bots Vs. Bad Bots: What’s The Impact On Your Business?]

Some help populate our news feeds, tell the weather, provide stock quotes and control search rankings. We use bots to book travel, access online customer support, even to turn our lights on and off and unlock our doors.

But other bots are designed for more mischievous purposes — including account takeover, content scraping, payment fraud and denial-of-service (DoS) attacks. These bots account for as much as 26% of total internet traffic, and their attacks are often carried out by competitors looking to undermine your competitive advantage, steal your information or increase your online marketing costs.

These “bad bots” represent one of the fastest growing and gravest threats to websites, mobile applications and application programming interfaces (APIs). And they’re fueling a rise in automated attacks against businesses, driving the need for bot management.

[You may also like: Key Considerations In Bot Management Evaluation]

In the early days, the use of bots was limited to small scraping attempts or spamming. Today, things are vastly different. Bots are used to take over user accounts, perform DDoS attacks, abuse APIs, scrape unique content and pricing information, increase costs of competitors, deny inventory turnover and more. It’s no surprise, then, that Gartner mentioned bot management at the peak of inflated expectations, under the high-benefit category, in its Hype Cycle for Application Security, 2018.

The ULTIMATE Guide to Bot Management

Recognizing the inescapable reality of today’s evolving bots, we have released the Ultimate Guide to Bot Management. This e-book provides an overview of evolving bot threats, outlines options for detection and mitigation, and offers a concise buyer guide to help evaluate potential bot management solutions.

From the generational leaps forward in bot design and use, to the techniques leveraged to outsmart and cloak themselves from detection, we’ve got you covered. The guide also dives into the bot problems across web, API and SDK / Mobile applications, and the most effective architectural strategies in pursuing solutions.

We hope you enjoy this tool as it becomes a must-have reference manual and provides you with the necessary map to navigate the murky waters and mayhem of bot management!

Read “The Ultimate Guide to Bot Management” to learn more.

Download Now

Security

Good Bots Vs. Bad Bots: What’s The Impact On Your Business?

August 7, 2019 — by Radware


Roughly half of today’s internet traffic is non-human (i.e., generated by bots). While some are good—like those that crawl websites for web indexing, content aggregation, and market or pricing intelligence—others are “bad.”

These bad bots (roughly 26% of internet traffic) disrupt service, steal data and perform fraudulent activities. And they target all channels, including websites, APIs and mobile applications.

[You may also like: Bots in the Boardroom]

Watch this webcast sponsored by Radware to discover all about bots, including malicious bot traffic and what you can do to protect your organization from such threats.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now

DDoS Attacks

Healthcare is in Cybercriminals’ Crosshairs

August 6, 2019 — by Mark Taylor


The healthcare industry is a prime target of hackers. According to Radware’s 2018-2019 Global Application and Network Security Report, healthcare was the second-most attacked industry after the government sector in 2018. In fact, about 39 percent of healthcare organizations were hit daily or weekly by hackers and only 6 percent said they’d never experienced a cyber attack.

Increased digitization in healthcare is a contributor to the industry’s enlarged attack surface. And it’s accelerated by a number of factors: the broad adoption of Electronic Health Records Systems (EHRS), integration of IoT technology in medical devices (software-based medical equipment like MRIs, EKGs, infusion pumps), and a migration to cloud services.

Case in point: 96% of non-federal acute care hospitals have an EHRS. This is up from 8% in 2008.  

Accenture estimates that the loss of data and related failures will cost healthcare companies nearly $6 trillion in damages in 2020, compared to $3 trillion in 2017. Cyber crime can have a devastating financial impact on the healthcare sector in the next four to five years.

The Vulnerabilities

According to the aforementioned Radware report, healthcare organizations saw a significant increase in malware or bot attacks, with socially engineered threats and DDoS steadily growing, as well. While overall ransomware attacks have decreased, hackers continue to hit the healthcare industry the hardest with these attacks. And they will continue to refine ransomware attacks and likely hijack IoT devices to hold tech hostage.

[You may also like: How Cyberattacks Directly Impact Your Brand]

Indeed, the increasing use of medical IoT devices makes healthcare organizations more vulnerable to DDoS attacks; attackers use infected IoT devices in botnets to launch coordinated attacks.

Additionally, cryptomining is on the rise, with 44 percent of organizations experiencing a cryptomining or ransomware attack. Another 14 percent experienced both. What’s worse is that these health providers don’t feel prepared for these attacks. The report found healthcare “is still intimidated by ransomware.”

The Office of Civil Rights (OCR) has warned about the dangers of DDoS attacks on healthcare organizations; in one incident, a DDoS attack overloaded a hospital network and computers, disrupting operations and causing hundreds of thousands of dollars in losses and damages.

[You may also like: 2018 In Review: Healthcare Under Attack]

Why Healthcare?

The healthcare industry is targeted for a variety of reasons. For one thing, money. By 2026, healthcare spending will consume 20% of the GDP, making the industry an attractive financial target for cyber criminals. And per Radware’s report, the value of medical records on the darknet is higher than that of passwords and credit cards.

And as my colleague Daniel Smith previously wrote, “not only are criminals exfiltrating patient data and selling it for a profit, but others have opted to encrypt medical records with ransomware or hold the data hostage until their extortion demand is met. Often hospitals are quick to pay an extortionist because backups are non-existent, or it may take too long to restore services.”

[You may also like: How Secure is Your Medical Data?]

Regardless of motivation, one thing is certain: Ransomware and DDoS attacks pose a dangerous threat to patients and those dealing with health issues. Many ailments are increasingly treated with cloud-based monitoring services, IoT-embedded devices and self or automated administration of prescription medicines. Cyber attacks could establish a foothold in the delivery of health services and put people’s lives and well-being at risk.

Recommendations

Securing digital assets can no longer be delegated solely to the IT department. Security planning needs to be infused into new product and service offerings, security and development plans, and new business initiatives – not just for enterprises, but for hospitals and healthcare providers alike.

To prevent or mitigate DDoS attacks, the U.S. Computer Emergency Readiness Team (US-CERT) recommends that organizations consider the following measures:

  • Continuously monitoring and scanning for vulnerable and compromised IoT devices on their networks and following proper remediation actions (a minimal scanning sketch follows this list)
  • Creating and implementing password management policies and procedures for devices and their users; ensuring all default passwords are changed to strong passwords
  • Installing and maintaining anti-virus software and security patches; updating IoT devices with security patches as soon as patches become available is critical.
  • Installing a firewall and configuring it to restrict traffic coming into and leaving the network and IT systems
  • Segmenting networks where appropriate and applying security controls for access to network segments
  • Disabling universal plug and play on routers unless absolutely necessary
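As a minimal illustration of the first recommendation, the sketch below flags LAN hosts answering on telnet (port 23), a service commonly abused by IoT botnets. The subnet is a placeholder; adjust it to your own network and scan only networks you are authorized to test:

import socket

def telnet_open(host, timeout=0.5):
    # Try a TCP connect to port 23; an open telnet port on an IoT
    # device is a classic indicator of a weakly secured endpoint.
    try:
        with socket.create_connection((host, 23), timeout=timeout):
            return True
    except OSError:
        return False

suspects = [f"192.168.1.{i}" for i in range(1, 255)
            if telnet_open(f"192.168.1.{i}")]
print("Hosts with telnet exposed:", suspects)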

Read “The Trust Factor: Cybersecurity’s Role in Sustaining Business Momentum” to learn more.

Download Now

Cloud Security

Security Considerations for Cloud Hosted Services

July 31, 2019 — by Prakash Sinha


A multi-cloud approach enables organizations to move application services to various public cloud providers depending on the desired service level and price points. Most organizations will use multiple cloud providers; some in addition to their private cloud and on premise deployments. 

Multi-cloud subsumes and is an evolution of hybrid cloud. According to IDC, enterprise adoption of multi-cloud services has moved into the mainstream; 85% of organizations are using services from multiple cloud providers while 93% of organizations will use services from multiple cloud providers within the next 12 months.

C-level executives are pushing a “cloud first” policy for their IT service capabilities.

Multi vs. Hybrid Cloud Deployment

Multi-cloud may include any combination of public cloud (e.g., Microsoft Azure, AWS), SaaS applications (e.g., Office 365, Salesforce), and private clouds (e.g., OpenStack, VMware, KVM).

[You may also like: Transforming Into a Multicloud Environment]

Hybrid cloud deployments might be permanent, to serve a business need such as maintaining proprietary data or intellectual property within an organization’s control, while public clouds may be maintained to initiate an eventual transition to cloud – whether public or private.

Sometimes organizations adopt multi-cloud deployments in order to enable DevOps to test their services and reduce shadow IT or to enhance disaster recovery or scalability in times of need.

Security Considerations

As organizations transition to the cloud, availability, management AND security should be top-of-mind concerns in the move to adopt containers. This concern is evident in the survey conducted by IDC in April 2017.

In addition to using built-in tools for container security, traditional approaches to security still apply to services delivered through the cloud.

Many container application services are composed using Application Programming Interfaces (APIs) and accessible over the web, leaving them open to malicious attacks. Such deployments also increase the attack surface, some of which may be beyond an organization’s direct control.

[You may also like: How to Prevent Real-Time API Abuse]

Multi-Pronged Prevention

As hackers probe network and application vulnerability in multiple ways to gain access to sensitive data, the prevention strategy for unauthorized access needs to be multi-pronged as well:

  • Routinely applying security patches
  • Preventing denial of service attacks
  • Preventing rogue application ports/applications from running in the enterprise or on their hosted container applications in the cloud
  • Routine vulnerability assessment scans on container applications
  • Preventing bots from targeting applications and systems while being able to differentiate between good bots and bad bots
  • Scanning application source code for vulnerabilities and fixing them, or using preventive measures such as deploying application firewalls
  • Encrypting the data at rest and in motion
  • Preventing malicious access by validating users before they can access an application

[You may also like: Application Delivery Use Cases for Cloud and On-Premise Applications]

It is important to protect business-critical assets by enhancing your security posture to varying threat landscapes. You can do that by gaining visibility into multiple threat vectors using SIEM tools and analytics, and adopting security solutions such as SSL inspection, intrusion detection and prevention, network firewalls, DDoS prevention, Identity and Access Management (IAM), data leak prevention (DLP), SSL threat mitigation, application firewalls, and identity management.

Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.

Download Now