The Quest To Automate The Business Of Fake News

With several elections around the corner, all eyes are on whether fair play will prevail as fabricated information that mimics news headlines spreads.

We’ve seen the tactics of fake news used in all aspects of life, from false headlines used as bait to generate ad impressions and clicks on a site, to more extreme propaganda for psychological warfare, sometimes referred to as PSYWAR or PSYOPS.

In this latter case, various techniques are aimed at influencing a target audience’s values and beliefs. They can be used to induce confessions and even to destroy the morale of enemies through tactics designed to depress troops’ psychological states. It’s full on and sticks a knife in the heart of human morals.

Fake News

The public knows little of these ‘black op’ style uses, but fake news is certainly common language. And perhaps one of the most famous episodes of fake news happened in 2016, when the now-defunct WTOE5news.com ran the story that Pope Francis had ‘shocked’ the world by endorsing Donald Trump for president.

It improbably quoted the pope as saying that the FBI’s inability to prosecute Hillary Clinton for her emails led him to endorse Trump. Comments, shares and reactions rocketed.

[You may also like: 2020 Predictions: AI, Cloud Breaches & Quantum Computing]

Not long after, a newly registered site published the exact same story with a twist: This time the pope endorsed Hillary Clinton. What was going on?

Fake news perpetrators tend to be rather unimaginative and lazy, using the same story but changing a few of the details to generate advertising revenue from invented clickbait “news”.

An investigation by BuzzFeed illustrated this perfectly, finding that WTOE5 was part of one of the world’s most ambitious fake news operations: a network of at least 43 websites, with domains registered right up until days before the election, that had published over 750 news articles.

[You may also like: The State of AI in Cybersecurity Today]

Though some are now offline, nearly all of the sites in this fake news network contained the same Google AdSense ID in their source code. This means the money earned each month from ads placed on the sites goes into a single account. Clever and wicked at the same time.
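That kind of fingerprinting lends itself to automation: once you can pull a publisher ID out of page source, grouping domains by it is trivial. Below is a minimal sketch of the idea, not BuzzFeed's actual method; the domain names and publisher IDs are hypothetical, and the pages' HTML is assumed to have been fetched already.

```python
import re
from collections import defaultdict

# AdSense publisher IDs appear in page source with a "ca-pub-" prefix
# followed by a long run of digits.
ADSENSE_RE = re.compile(r"ca-pub-\d{10,16}")

def group_sites_by_adsense_id(pages):
    """Group domains by the AdSense publisher IDs found in their HTML.

    `pages` maps a domain name to its raw HTML source. Domains that
    share a publisher ID are monetised into the same account.
    """
    groups = defaultdict(set)
    for domain, html in pages.items():
        for pub_id in set(ADSENSE_RE.findall(html)):
            groups[pub_id].add(domain)
    return groups

# Hypothetical page sources: two "news" domains sharing one publisher ID.
pages = {
    "example-news-a.com": '<script data-ad-client="ca-pub-1234567890123456">',
    "example-news-b.com": '<ins data-ad-client="ca-pub-1234567890123456"></ins>',
    "unrelated-site.com": '<script data-ad-client="ca-pub-9876543210987654">',
}
groups = group_sites_by_adsense_id(pages)
```

Any publisher ID mapping to more than one domain is a candidate network worth a closer look.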

Data Manipulation

Yet there are those who argue it’s not as deceitful as the story uncovered by the Guardian involving Cambridge Analytica, in which data harvested from millions of people’s Facebook accounts was used without consent.

Cambridge Analytica was dedicated to big data, ostensibly for sales strategy: creating massive campaigns that approach users in a personal manner.

Even if that was the objective, the data ended up being used to build a huge profiling machine, one that reached the point of destabilising and disrupting countries thanks to the complicity of enterprises such as Facebook.

Big Data and AI Applied to Misinformation

Fake headlines are one thing, but people are much more confident about the legitimacy of facts when they can see them. With Photoshop, even that premise can’t be trusted.

Video was, until recently, the best way to validate information. But even this is in question as apps that let you swap people’s faces have gone mainstream.

DeepFaceLab has automated the Hollywood visual-effects process for the man on the street.

Armed with a video and a handful of images, one can create realistic fake footage with little to no investment. It’s this trickery that companies and organisations need to be more concerned with in the future: how could misuse affect the brand?

[You may also like: FaceApp and the Friction Between Entertainment and Data Privacy]

But above all, we have to be aware of the way in which data is being used. Data is the new oil, and it can be collected and scraped from the richest resource on earth: the internet.

People freely take part in surveys that reveal what kind of person they are: their political affinity, their interests and dislikes. And while they are concerned about their privacy when browsing the internet, for some reason they all trust social platforms.

The apps and surveys give their creators access to very valuable information that can be used to profile the kind of person you are.

This is the foundation on which Cambridge Analytica based its individual targeting campaigns.

[You may also like: Automation for NetOps and DevOps]

That insight can sway your mind. Influencing how you will vote is a prime example, and it comes very close to propaganda.

So Where Does This Leave Us? Automation.

As we’ve said, scraping information from the internet is easy with automation. Finding things out and applying them is no longer time-consuming, because a machine can be taught to do it for us, providing scale and reach.
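The scale-and-reach point is worth making concrete. A sketch of the pattern, under the assumption that pages have already been fetched (a real scraper would pull the HTML over the network instead); the URLs, page contents and helper names here are hypothetical:

```python
import re
from concurrent.futures import ThreadPoolExecutor

# A crude extractor: pull the <title> text out of a page's HTML.
TITLE_RE = re.compile(r"<title>(.*?)</title>", re.IGNORECASE | re.DOTALL)

def extract_title(html):
    """Return the page title, or None if the page has no <title> tag."""
    match = TITLE_RE.search(html)
    return match.group(1).strip() if match else None

def scrape_titles(documents, workers=8):
    """Apply the extractor to many documents in parallel.

    `documents` maps an identifier (e.g. a URL) to raw HTML. A worker
    pool is what turns a one-off lookup into something with scale.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        titles = pool.map(extract_title, documents.values())
        return dict(zip(documents.keys(), titles))

# Hypothetical corpus standing in for fetched pages.
corpus = {
    "https://example.com/a": "<html><title>Story A</title></html>",
    "https://example.com/b": "<html><title>Story B</title></html>",
}
titles = scrape_titles(corpus)
```

Swap the extractor for anything else — headlines, ad IDs, survey links — and the same loop harvests it across thousands of pages without a human in sight.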

This means, first, that companies are under pressure to look at their own marketing practices and assess the ethical and responsible use of data.

Secondly, companies need to help their employees and customers understand the risks. Knowing what to trust, who to trust and how to make an assessment is vital to ensure fake stories aren’t taken as gospel.

Finally, companies need to look at partnerships they have in place and evaluate how automation is being used to fight automation.

[You may also like: Attackers Are Leveraging Automation]

When it comes to marketing practices, for example, Facebook, Google, YouTube and many other platforms all have a responsibility and can make a difference in the fight against dis- and misinformation. Companies using those services share a responsibility to ensure fair play prevails.

Social media giants are embracing AI technology too, to detect and eliminate manipulated media and blunt the effects of weaponised AI as much as possible. Though even that needs people: look at Facebook’s policy of using third-party fact-checkers to find the fakes.

Keeping these platforms on the straight and narrow is now essential, and it will take a coordinated effort to reverse the trend. That will be hard, which is why AI is valuable in keeping a moral compass and saving the good actors from the bad.

Note: This piece originally published on Data Economy.

Read “Radware’s 2019 Web Application Security Report” to learn more.

Download Now

As the Director, Threat Intelligence for Radware, Pascal helps execute the company's thought leadership on today’s security threat landscape. Pascal brings over two decades of experience in many aspects of Information Technology and holds a degree in Civil Engineering from the Free University of Brussels. As part of the Radware Security Research team Pascal develops and maintains the IoT honeypots and actively researches IoT malware. Pascal discovered and reported on BrickerBot, did extensive research on Hajime and follows closely new developments of threats in the IoT space and the applications of AI in cyber security and hacking. Prior to Radware, Pascal was a consulting engineer for Juniper working with the largest EMEA cloud and service providers on their SDN/NFV and data center automation strategies. As an independent consultant, Pascal got skilled in several programming languages and designed industrial sensor networks, automated and developed PLC systems, and led security infrastructure and software auditing projects. At the start of his career, he was a support engineer for IBM's Parallel System Support Program on AIX and a regular teacher and presenter at global IBM conferences on the topics of AIX kernel development and Perl scripting.
