How can organizations cut through the hype around AI to understand the most important issues they should be addressing? How can they incorporate AI into their security strategies now to take advantage of the technology’s ability to detect and mitigate attacks that incorporate the same capabilities? Pascal Geenens, Radware’s EMEA security evangelist, weighs in.
What is the threat landscape, and how disruptive is it likely to be?
In the near term, cybercriminals will mainly use AI to automate attacks, improve evasion of detection systems, and increase the scale and reach of their threats. Expect AI to be used to automatically breach defenses and to generate more sophisticated phishing attacks from information scraped from publicly accessible web sources. The scale of attacks will quickly escalate to volumes we have never experienced before.
On the evasive side, machine-learning systems such as generative adversarial networks (GANs) can automatically create malware that is harder to detect and block. Researchers have already demonstrated this technique: the MalGAN project proposed a GAN that generates evasive malware capable of bypassing modern anti-malware systems, including those based on deep learning.
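The adversarial dynamic behind systems like MalGAN can be illustrated in the abstract. The sketch below is not a GAN (a real one would pair two neural networks); it substitutes a simple hill-climbing "generator" working against a linear "detector", and every weight, feature and number is an invented, illustrative assumption rather than anything from the MalGAN research:

```python
import random

def detector_score(weights, features):
    """Linear detector: higher score means 'more likely malicious'."""
    return sum(w * f for w, f in zip(weights, features))

def evade(weights, features, budget=200):
    """Generator stand-in: hill-climb small random perturbations that
    lower the detector's score, making the sample look more benign."""
    best = list(features)
    for _ in range(budget):
        candidate = [max(0.0, f + random.uniform(-0.2, 0.2)) for f in best]
        if detector_score(weights, candidate) < detector_score(weights, best):
            best = candidate
    return best

random.seed(0)
weights = [0.8, 0.5, 0.3]   # hypothetical detector weights
sample = [1.0, 1.0, 1.0]    # hypothetical "malicious" feature vector
adversarial = evade(weights, sample)
```

The point of the sketch is the feedback loop: the attacker side only needs a score from the defender side to iteratively craft samples that slip under the detection threshold.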
In the first phase, AI will be used to improve current attack tools to make them more harmful and difficult to detect.
Machine learning and automation can be leveraged to find new vulnerabilities, especially in large public clouds where cloud native systems are being built based on widely reused open-source software frameworks. Platforms running this software will become primary targets for vulnerability scanning.
Given that open-source code is readable and accessible to both criminals and security researchers, open source may become the next battlefield, with an associated “arms race” to discover, abuse or fix vulnerabilities. Deep learning will provide an advantage in discovering new vulnerabilities from code. While open source is the easier target, even closed-source software will not escape automated attacks that learn as they probe.
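Blind fuzzing is the simplest form of the automated vulnerability discovery described above; a learning-based system would prioritize promising inputs rather than sample at random, but the loop is the same. The toy below plants a deliberate bug in an invented parser and lets a dumb fuzzer find it; all names and the bug condition are illustrative assumptions:

```python
import random

def fragile_parse(data: bytes) -> int:
    """Toy target with a planted bug: raises on inputs starting 0x42."""
    if data and data[0] == 0x42:
        raise ValueError("parser crash")
    return len(data)

def fuzz(target, trials=20000, seed=1):
    """Blind random fuzzer: feed random byte strings to the target and
    return the first input that triggers an unhandled exception."""
    rng = random.Random(seed)
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except Exception:
            return data
    return None

crash_input = fuzz(fragile_parse)
```

Replacing the random input generator with a model that learns which inputs get deeper into the target is exactly where deep learning gives attackers (and defenders) the advantage the text describes.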
Looking further ahead, I can imagine large cybercrime organizations or nation-states using AI. Where machine learning was previously used mainly to automate attacks, AI systems such as genetic algorithms and reinforcement learning will be used to automatically generate new attack vectors and breach all kinds of systems, whether cloud, IoT or ICS. Combine this capability with the automation of the first stage, and we will face a fully automated, continuously evolving attack ecosystem that hacks, cracks and improves itself over time, with no limits on scale or endurance.
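To make "genetic algorithms generating attack vectors" less abstract, here is a minimal, entirely hypothetical sketch: candidate "attack configurations" are reduced to bitstrings, and a black-box fitness oracle (invented for illustration) rewards candidates that come closer to a hidden working configuration. Selection, crossover and mutation do the rest:

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # hidden config the oracle rewards

def fitness(candidate):
    """Black-box score: how many positions match the hidden target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(pop_size=30, generations=60, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))  # single-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(len(TARGET))] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Nothing here attacks anything; the sketch only shows why such a loop needs no human in it once a scoring signal exists, which is the scenario described above.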
Cybercriminals could move from being the actual hackers, performing the real attack and penetrating defenses, to becoming maintainers and developers of the automated AI hacking machine. Machines will do the hacking; humans will focus on improving efficiency of the machines.
What vulnerabilities will make targets more attractive to criminals once AI is incorporated in their tools? How will it affect corporate espionage?
Ultimately every organization will be digitally transformed and become a primary target for automated attacks. Which targets are chosen will be solely dependent on the objective of the attack. For ransom and extortion, every organization is a good candidate target. For corporate espionage, it depends how much organizations are willing to pay to secure intellectual property in certain areas. It’s fair to say that, by definition, every organization can — and, at some point, will — be a target.
What about politically motivated cyberattacks initiated at the national level?
We’ve already witnessed attacks meant to influence public opinion and the political landscape. Such attacks are likely to grow and become more difficult to identify early in the process and to protect against once attackers leverage deep learning and broader AI technologies. Attackers have already produced automatically generated messages and discussions, as well as “deep fake” videos that are created by AI algorithms.
Influencing which topics are considered important and manipulating opinions are becoming weapons of choice for nation-states. Social platform providers need to take a stance and keep their platforms as clean as possible by dedicating their own AI-assisted automated detection systems to staying ahead of cybercriminals and others who build and improve AI-assisted systems for fake content creation.
From a defense perspective, what types of AI-based products will be used to combat more technologically savvy cybercriminals?
There’s a saying in our industry: “you cannot stop what you cannot detect.” Cybersecurity has become automated in order to detect new, increasingly complex and continuously adapting threats, and deep learning is improving that capability. AI, in the broad sense of the term, will probably come into play in the near-term future rather than immediately. The current state of AI in the defense discussion is confined to traditional machine learning; while deep learning shows a lot of promise, it is not yet reliable enough for automated mitigation. More intelligent and self-adaptive systems, the true domain of AI, are still further out when it comes to automating our cyberdefenses.
Will the use of AI-based attacks by cybercriminals drive adoption of AI-based mitigation solutions by enterprises, organizations and institutions?
Yes, but not necessarily at the same pace. There are three factors to consider — the attack vector, its speed and its evasion technique:
- For example, using AI for phishing does not change the attack vector from the victim’s perspective, but it does increase the scale and number of targets, compelling every organization to improve its protection. That protection might include AI-based systems, but not necessarily.
- On the other hand, as attacks get more automated, organizations will have to automate their security to ensure that they keep on top of the rising number and accelerated speed of attacks.
- When new evasion techniques based on AI are leveraged by cybercriminals, it will ultimately lead to the use of better detection systems that are based on AI.
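The detection systems mentioned in the last point ultimately rest on baselining normal behavior and flagging deviations. As a purely illustrative sketch (the metric, numbers and threshold are all invented assumptions, not any vendor's method), here is a toy z-score anomaly detector over request rates:

```python
from statistics import mean, stdev

def is_anomalous(baseline, value, z_threshold=3.0):
    """Flag values deviating more than z_threshold standard deviations
    from the learned baseline of normal behavior."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) > z_threshold * sigma

# Hypothetical baseline: requests per minute under normal operation.
normal_rates = [95, 102, 98, 101, 99, 103, 97, 100]

in_range = is_anomalous(normal_rates, 100)   # ordinary rate
spike = is_anomalous(normal_rates, 450)      # volumetric surge
```

AI-based detection replaces the single hand-picked statistic with learned models over many signals, but the defender's core question is the same: does this observation fit the baseline?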
Recognized cybersecurity and emerging-technology thought leader with 20+ years of experience in information technology. As the EMEA Cyber Security Evangelist for Radware, Pascal helps execute the company’s thought leadership on today’s security threat landscape. He holds a degree in Civil Engineering from the Free University of Brussels. As part of the Radware Security Research team, Pascal develops and maintains the IoT honeypots and actively researches IoT malware. He discovered and reported on BrickerBot, did extensive research on Hajime, and closely follows new threat developments in the IoT space as well as the applications of AI in cybersecurity and hacking. Prior to Radware, Pascal was a consulting engineer for Juniper, working with the largest EMEA cloud and service providers on their SDN/NFV and data-center automation strategies. As an independent consultant, he worked in several programming languages, designed industrial sensor networks, automated and developed PLC systems, and led security infrastructure and software-auditing projects. At the start of his career, he was a support engineer for IBM’s Parallel System Support Program on AIX and a regular teacher and presenter at global IBM conferences on AIX kernel development and Perl scripting.