Faked!
Bollywood is watching the horror unfold helplessly. As netas are busy electioneering, two deepfake videos of Bollywood A-listers Aamir Khan and Ranveer Singh have been making the rounds, in which the actors appear to urge people to vote for a particular political party. The 30-second video of Aamir Khan and the 41-second clip of Ranveer Singh show the actors criticising a top leader and exhorting people to vote for the rival party. Both videos have been viewed on social media more than half a million times.
Even as the mammoth Indian election process is underway, Artificial Intelligence-generated deepfakes are playing spoilsport. The videos are unquestionably fake, but they mark a new trend: the film industry being used, or rather misused, to target politicians and elections, something that has not happened before.
Nip it in the bud
“Such AI-generated visuals can easily evade all the tracking systems in cyberspace. We need to create robust defence tools, like anti-virus and anti-malware software, and deploy them on end-user devices,” says computer scientist Srijan Kumar. Featured on the Forbes ‘30 Under 30’ list for his work on social media safety and integrity, Srijan says new AI tools such as diffusion models pose a challenge the world is still grappling with: they allow tech-savvy predators to lift real-life photographs from the Internet, including shots posted on social media sites and personal blogs, and re-create them into almost anything. “It’s almost impossible to determine whether an image or video developed using diffusion models is real or created,” he says.
Diffusion models, he explains, are trained to generate new images by learning how to de-noise, or reconstruct, corrupted ones. “Based on the prompt, this model can end up creating imaginative videos and pictures drawn from the statistical properties of its training data. It just takes a few seconds,” he says, adding that not much technical knowledge is required to create such deepfakes.
Alarmingly, AI-generated deepfakes are increasingly being used in elections elsewhere in the world too. “Once new detectors are developed, generative technology evolves to incorporate methods to evade detection from those detectors. This is the real challenge,” points out Srijan. One such model, Stable Diffusion, is being widely used to create deepfakes. “It is completely open source and among the most adaptable image generators. In western countries, it is believed Stable Diffusion is being heavily relied upon to create deepfakes,” says US-based Srijan, who is currently working on a startup aimed at enhancing security in the world of AI.
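Srijan's point about the low barrier to entry can be seen in how compactly such a generator is driven. The sketch below is a hypothetical illustration using the open-source Hugging Face diffusers library; the checkpoint name and the prompt are assumptions for illustration only, not anything tied to the videos in question. A handful of lines of Python is enough to turn a text prompt into a photorealistic image.

    # Minimal sketch (assumptions: the diffusers library is installed and a GPU is available;
    # the checkpoint identifier and prompt are illustrative only).
    import torch
    from diffusers import StableDiffusionPipeline

    # Load an open-source Stable Diffusion checkpoint and move it to the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # example checkpoint; any compatible model works
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # A single text prompt is enough; the model de-noises random noise into a matching image.
    prompt = "a photorealistic portrait of a person speaking at a rally"
    image = pipe(prompt).images[0]
    image.save("generated.png")

Detection tools, by contrast, have to keep pace with every new checkpoint released this way, which is the cat-and-mouse dynamic Srijan describes.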
Research suggests that deepfakes act on both the conscious and the subconscious mind, which means the videos purporting to show Aamir Khan and Ranveer Singh have the potential to shape the perceptions of people at large.
Control the spread
Rajesh Shukla, Chief Strategist, National Intellectual Advisory (NIA), says the need of the hour is to control the spread of such videos. “The authorities need to act swiftly to limit further sharing of the videos. This might involve requesting platforms to remove or restrict access to the content,” says Rajesh, a mentor for Venture Studio Capital, Jagoo Nari, and Padhega Bharat. He stresses that there is also an urgent need to review and learn. “The authorities must use this as an opportunity to review content creation and approval processes to avoid similar issues in the future,” he says.
Both Ranveer Singh and Aamir Khan have lodged complaints with the Cyber Crime Cell of Mumbai. “Aamir Khan faced a similar situation when an AI-generated video from his earlier show, Satyameva Jayate, surfaced, purportedly promoting a political party,” recalls Rajesh, adding that the Election Commission and political parties alike need to act swiftly.
He feels both actors should have held press conferences, which would have made a stronger impact. “They should urge the government to take decisive action against such videos. The police, IT cells, and cybercrime branches must collaborate to identify the culprits. Removal of the videos from circulation is crucial to prevent further harm,” says Rajesh.
“AI is making it difficult for us to distinguish between the real and the fake. This type of forgery can have serious consequences.” — Nirali Bhatia, Cyberpsychologist
“People have to be educated about this menace, and there have to be some foolproof solutions for cybercrimes. Not only India, but the entire world is facing this problem. IT graduates and people with advanced digital knowledge should be a part of our security agencies now. We need to have a strong will to fight this menace.” — Ashoke Pandit, Filmmaker