What The Fakes!
Has former US President Donald Trump been arrested? Yes, if widely-shared images are to be believed. Did Pope Francis wear a puffer jacket? The images that went viral looked great, but it wasn’t him.
Imagine being blackmailed with a compromising photo of you or your family members and being asked to pay up to prevent others from seeing it. Now imagine one bad actor being able to do this to thousands of individuals daily.
Even if a small percentage of victims pay up, there is enough financial incentive for the bad actor to mount such an attack.
As the western world is waking up to this new reality, so is India. After morphed photos of wrestlers Vinesh Phogat and Sangeeta Phogat surfaced on social media recently, experts who have been working to combat online misinformation say a new crop of AI tools can quickly generate compelling images and written material in response to user prompts, which is worrisome.
Can regulation prevent this? No! Then what will?
Renowned computer scientist Srijan Kumar says creating defence tools for AI, equivalent to anti-malware or anti-virus, is key.
“Just like anti-virus and anti-malware software deployed on end-user devices protect us from even the most sophisticated virus threats, we need software to protect against AI scams or deepfakes too,” says Kumar, Professor at Georgia Tech, who was honoured with the Forbes 30 Under 30 award.
Large Language Models (LLMs)
Srijan, who has done some path-breaking work on misinformation and fakes online, recently came out with a research paper which shows how LLMs can counter misinformation.
His is the first such work that uses LLMs to generate counter-misinformation responses, drawing on data from social media and crowdsourcing.
“Data is key, so we first create two novel datasets of misinformation and counter-misinformation responses from social media and crowdsourcing. While LLMs have seen many commercial applications, their use in societal good tasks has largely been under-explored,” informs Srijan, whose previous research has been used by Flipkart and even influenced Twitter’s Birdwatch platform.
A reinforcement-learning-based text generation model is then created, which rewards the generator for increasing politeness, refutation attitude, and factuality while retaining text fluency and relevancy. “Through experiments, we show that the model outperforms baselines in generating high-quality responses, demonstrating the potential of generative text models for social good. We envision such methods will democratize LLMs for various societally beneficial tasks,” explains Srijan, whose paper has received a Best Paper award nomination.
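The composite reward described above can be illustrated with a minimal sketch. The paper's actual reward models and weights are not given in the article, so every scorer below is a hypothetical placeholder standing in for a trained classifier; only the idea of combining politeness, refutation, factuality, and fluency into one scalar reward comes from the text.

```python
# Hypothetical sketch of a composite reward for counter-misinformation
# generation. Real systems would replace each placeholder with a trained
# model (politeness classifier, stance detector, entailment model, LM
# perplexity); the weights here are illustrative, not from the paper.

def politeness_score(text: str) -> float:
    """Placeholder: penalise obviously rude wording."""
    rude_markers = ("stupid", "idiot", "liar")
    return 0.0 if any(w in text.lower() for w in rude_markers) else 1.0

def refutation_score(text: str) -> float:
    """Placeholder: reward explicit refutation language."""
    cues = ("this is false", "no evidence", "fact-check", "incorrect")
    return 1.0 if any(c in text.lower() for c in cues) else 0.0

def factuality_score(text: str, evidence: str) -> float:
    """Placeholder: crude word overlap with supporting evidence.
    A real system would use an entailment or fact-checking model."""
    t = set(text.lower().split())
    e = set(evidence.lower().split())
    return len(t & e) / max(len(e), 1)

def fluency_score(text: str) -> float:
    """Placeholder: trivial length sanity check instead of LM perplexity."""
    n = len(text.split())
    return 1.0 if 5 <= n <= 60 else 0.5

def composite_reward(text: str, evidence: str,
                     weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted sum of the four component scores, as a single RL reward."""
    scores = (politeness_score(text), refutation_score(text),
              factuality_score(text, evidence), fluency_score(text))
    return sum(w * s for w, s in zip(weights, scores))
```

In an RL fine-tuning loop, a generator would produce candidate replies to a misinformation post and be updated (e.g. via policy gradient) to maximise this reward, so that polite, well-supported refutations score higher than hostile or off-topic text.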
This work has been hailed across the world, particularly for its timeliness, as AI pioneers including Geoffrey Hinton and Eric Horvitz have raised alarms regarding the misuse of AI in spreading mis/disinformation.
People, beware!
Educating people regarding cybersecurity and AI, and how they should be ultra careful to vet who they are really talking to, is another way. “Bad actors use urgency, fear, and other psychological tricks to lure people into sharing info or sending money. Educating people on how not to fall victim is important,” says Srijan, who recalls two more recent incidents that highlight the potential of AI being misused by bad actors: a fake photo of a bomb blast near the Pentagon spread on Twitter, which caused mild panic but was debunked pretty quickly; and a 60 Minutes reporter cloned her own voice and tricked a colleague into sharing passport information.
Economics of it
Contrary to the general perception, the economics behind AI-powered fakes is completely different from using Photoshop.
1) Volume: AI digital tools allow manipulation and blackmail at a scale that wasn't possible before.
“It's a numbers game - the more people a bad actor can target, the higher their chance of success and ransom. Yes, bad actors created fake images before such tools existed, but now they can directly target an incredibly high number of potential victims,” says the expert.
2) Speed: The time taken to create such fakes is shrinking. With Photoshop, it would take a bad actor hours to create a blackmail photo. Now it takes minutes, and it can even be completely automated.
3) Access: Photoshop requires some level of skill. With these AI tools, and more coming, anyone can do it.
Techpreneur Sagar Honnungar, Co-founder of Hakimo, a California-based company that acts as an AI assistant for GSOC (Global Security Operations Centre) operators in big enterprises, says that whenever you read an article or share a video, pause for a while to consider whether it is from a trusted source or not.
SIFT
Citing the SIFT method (Stop; Investigate the source; Find better coverage; Trace claims, quotes and media to the original context) developed by researcher Mike Caulfield for combating misinformation and fake AI-generated content, Sagar says there have been some classic tells that people have used to identify AI-generated images, such as fingers or eyeglasses not being rendered properly.
“For example, in the image of the Pope wearing a puffer jacket, the fingers aren’t very well generated and don’t seem to be holding the cup properly. However, these tells are quickly becoming irrelevant as the models improve day by day, and the need and urgency for better and novel methods to combat misinformation is increasing as well,” feels Sagar, whose firm uses cutting-edge AI to eliminate most nuisance alarms and enables big companies to prioritise high-severity alerts.
While AI advances will provide many benefits to society, researchers and practitioners have to be proactive in preventing any harm that may arise from their usage and develop tools that can identify and mitigate potential misuses early, he feels.
“Use reverse image search, fact-checking sites or other trusted news sources to verify the content. Click through on the cited sources to see if they were used in the same context as mentioned in the article. By following these four simple steps, people are less likely to fall into the trap of believing fake content circulating on the internet,” he suggests.