Generative AI – A Disruptive Force in the Hands of Cyber Attackers


The world is rapidly changing. With the introduction of publicly available Generative AI tools toward the end of 2022, we are now in the midst of one of the biggest technological revolutions in human history. Some claim it is as big as, or even bigger than, the introduction of the internet, cell phones, smartphones and social media, because the adoption and development rate of these new Generative AI technologies and tools is like nothing we have seen before, and it is about to change the world as we know it faster than we realize.

There are many implications of this AI revolution, but today I will focus on the cyber security world.

Image generated using the AI text-to-image tool Mage.

AI as a vulnerability finder

Generative AI tools are designed to be the best co-pilots. When it comes to ethical hackers, or white hats, many already admit to relying on AI to automate tasks, analyze data, identify vulnerabilities and more. Since we cannot really survey the black hats, we can only assume they are using it as well: to find vulnerabilities in applications and platforms, to quickly run reconnaissance operations in search of zero-day vulnerabilities to exploit, and to analyze the data they collect. As these Generative AI chatbots grow their databases exponentially by digesting every available piece of data, they become more accurate. With that accuracy, they can also be manipulated to expose vulnerabilities in applications, platforms, different types of software, and even security tools and mechanisms, and to write code that bypasses application security layers.

Phishing attacks

Using AI to generate authentic-looking emails, landing pages, URLs and text messages brings more malicious actors into the game. For instance, many non-English speakers can now easily generate quality phishing attacks on a global scale. Unfortunately, not only will there be more attacks of this kind, they will also be harder to catch: where we could once tell something was off from the wording, phrasing or look of a landing page, email or text message, it is now much harder to spot the difference between legitimate content and AI-generated fakes.

Distribution of malicious code libraries

Without giving any instructions on how it is done, as we do not want to promote or teach that kind of behavior, I will simply warn you: if you use AI chat tools to find and download code libraries for your applications, be very cautious. Bad actors are flooding the data these AI tools draw on with libraries containing malicious code, trying to spread them into developers' environments.

So, my suggestion is to carefully vet libraries by checking their creation date and download count, and, more importantly, to simply avoid downloading code libraries and packages through AI tools if you do not have to. It is not worth the risk.
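To illustrate the kind of vetting described above, here is a minimal sketch in Python. It assumes the public PyPI JSON API (pypi.org/pypi/&lt;name&gt;/json) and the pypistats.org download statistics API; the thresholds are arbitrary placeholders for illustration, not recommendations from this article.

```python
import requests
from datetime import datetime, timezone

# Illustrative sketch only: sanity-check a Python package before installing it,
# using the public PyPI JSON API and the pypistats.org API.
# The thresholds below are arbitrary examples, not hard rules.
MIN_AGE_DAYS = 180            # assumption: very new packages deserve extra scrutiny
MIN_MONTHLY_DOWNLOADS = 1000  # assumption: rarely downloaded packages deserve extra scrutiny

def vet_package(name: str) -> None:
    # Package metadata, including upload timestamps for every released file
    meta = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    meta.raise_for_status()
    releases = meta.json()["releases"]

    # The earliest upload time across all releases approximates the creation date
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if not upload_times:
        print(f"{name}: no released files found - do not install")
        return
    age_days = (datetime.now(timezone.utc) - min(upload_times)).days

    # Recent download counts from pypistats.org
    stats = requests.get(f"https://pypistats.org/api/packages/{name.lower()}/recent", timeout=10)
    stats.raise_for_status()
    last_month = stats.json()["data"]["last_month"]

    print(f"{name}: first published {age_days} days ago, {last_month} downloads last month")
    if age_days < MIN_AGE_DAYS or last_month < MIN_MONTHLY_DOWNLOADS:
        print(f"{name}: looks new or obscure - review it manually before installing")

if __name__ == "__main__":
    vet_package("requests")
```

The same idea applies to other ecosystems such as npm or crates.io, whose registries expose similar metadata about package age and popularity.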

More new Bots, many more…

Ill-intentioned actors can now manipulate AI chats to produce new, advanced bot scripts quite easily – zero-day bots. And if that is not enough, there are new AI chat tools designed specifically for hackers and fraudsters, with which they can generate automated scripts for all sorts of purposes. This is just the beginning, and it is about to get worse. Our starting point today is not great to begin with: roughly 30% of internet traffic is already bad bots. I believe that share will grow even faster, and standard bot protection tools will not be able to handle the volume and variety of these new AI-generated bot scripts. CAPTCHA may also see its demise, as more and more AI-generated bots can easily pass CAPTCHA challenges. A new form of detection is needed here, be it unique custom challenges, blockchain-based crypto challenges, new attestation and identity-based user validation services, or even AI-generated challenges for bot mitigation.
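As a purely illustrative example of behavior-based detection, and not a description of any specific product, the Python sketch below flags clients whose request timing is suspiciously regular, a pattern typical of simple scripted bots. The window size and jitter threshold are invented for illustration.

```python
import time
from collections import defaultdict, deque
from statistics import pstdev

# Toy heuristic for illustration only: scripted bots often send requests at
# near-constant intervals, while human traffic has far more timing jitter.
# All thresholds here are invented examples.
WINDOW = 20                # recent requests to remember per client
MIN_SAMPLES = 10           # minimum requests before judging a client
MAX_JITTER_SECONDS = 0.05  # assumption: interval std-dev below this looks automated

_history = defaultdict(lambda: deque(maxlen=WINDOW))

def looks_automated(client_id: str, now: float = None) -> bool:
    """Record a request from client_id and return True if its timing looks bot-like."""
    timestamps = _history[client_id]
    timestamps.append(time.monotonic() if now is None else now)
    if len(timestamps) < MIN_SAMPLES:
        return False
    points = list(timestamps)
    intervals = [b - a for a, b in zip(points, points[1:])]
    return pstdev(intervals) < MAX_JITTER_SECONDS

# Example: a scripted client hitting an endpoint exactly once per second
for second in range(15):
    flagged = looks_automated("203.0.113.7", now=float(second))
print("flagged:", flagged)  # True, because the intervals have no jitter
</code>
```

Real bot mitigation combines many such signals, such as device fingerprints, navigation patterns and challenge responses, rather than relying on a single timing check.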

Application Security in the new AI era

With an AI co-pilot, hackers become 10-fold smarter and faster. They can cut the time it takes to discover a vulnerability by 90% and come up with a new one every time an existing vulnerability is patched.

Generative AI tools in the wrong hands are a serious threat, and these tools and their use must be properly regulated. Unfortunately, regulation always lags behind technology. In the face of surging zero-day attacks, it is therefore up to organizations' security managers and CISOs to make sure they employ advanced application protection solutions that use behavioral algorithms to automatically detect and block zero-day attacks in real time, before they materialize.

Uri Dorot

Uri Dorot is a senior product marketing manager at Radware, specializing in application protection solutions, services and trends. With a deep understanding of the cyber threat landscape, Uri helps companies bridge the gap between complex cybersecurity concepts and real-world outcomes.
