Threat of AI-Generated Deepfakes Remains Deep Rooted
Hyderabad: Deepfakes are no longer just a tool to target celebrities and politicians; they are fast emerging as a serious threat to the general public as well. In a digital world flooded with visuals, it takes just one high-resolution photo for cybercriminals to create a deepfake. With it, they can produce highly convincing fake visuals and voices, blurring the line between reality and deception.
Deepfakes are highly realistic images, videos or audio recordings that are created, edited, or generated using artificial intelligence techniques and tools that alter or replace an individual's identity within the content. They can also produce entirely original material where a person is depicted as saying or doing something they never actually did. Various methods are employed in the creation of deepfakes, including face-swapping, voice cloning, lip-syncing and emotional manipulation. Additionally, audio deepfakes utilise techniques such as speech synthesis, parody, mimicry and sound effects manipulation to enhance the realism of the altered media.
On how to identify or detect a deepfake, Dr Shruti Mantri, associate director of the ISB Institute of Data Science (IIDS), said, "There is no definitive way to identify a fake when it comes to AI-generated content. What we can look out for is facial transformation.
"Pay attention to the cheeks and forehead: does the skin appear excessively smooth or overly wrinkled, and is the ageing of the skin consistent with that of the hair and eyes? Observe the eyes and eyebrows for where the shadows fall, as deepfakes may not accurately capture natural physics. Is the shadow even there? How intense is it? Does it change angle with movement? Deepfakes often struggle with realistic lighting. Also note how well lip movements align with speech, as many deepfakes rely on lip-syncing."
How do cybercriminals generate deepfakes?
Deepfakes are essentially AI-generated content that uses deep learning algorithms to produce fake audio, video or images that appear realistic.
"They are created using generative adversarial networks (GANs). They involve two neural networks: the generator, responsible for crafting fake content, and the discriminator, which evaluates the content's authenticity. While deepfakes often involve face-swapping or voice cloning, they can also manipulate body movements or fabricate entire scenarios to misrepresent someone's identity. This is why it is called 'deepfake', blending 'deep learning' and 'fake'," explained Jaspreet Bindra, founder of AI & Beyond.
Bindra added that deepfakes pose significant threats to society. In the political realm, they can be weaponised to create false speeches or actions attributed to leaders, potentially inciting unrest or influencing election outcomes. Criminally, deepfakes are used in scams, extortion or revenge porn, which can cause severe harm to victims.
Fostering distrust
On a broader scale, deepfakes erode public trust in media and individuals, blurring the distinction between reality and fabrication. This growing distrust creates an environment where truth becomes uncertain, with serious implications for democracy, governance and social stability.
In Hyderabad, deepfakes are commonly used for phishing scams: fraudsters create fake voices or automated calls to lure victims. Fraud cases involving trading apps and stock markets, built on small-scale schemes, are rampant.
According to Rupesh Mittal, cybercrime investigator and founder of Cyber Jagrithi, these scams often start with generating content on topics like income tax or elections. Fraudsters plant small, believable messages to gain trust, gradually leading victims to invest in trading platforms.
"They use trading as an end-platform. They convince users to invest small amounts like Rs 5,000 or Rs 10,000, which they raise over time. By the time victims realise that it's a scam, days or weeks have passed, and their losses accumulate in lakhs and crores," he said.
The most vulnerable individuals are in the age group 20 to 35—those who are earning well and are willing to explore new opportunities. The scams particularly target unsuspecting, innocent individuals rather than those who are more cautious or experienced.
Mittal said that despite huge losses, people are hesitant to file FIRs.
"Recently, a victim lost Rs 5 lakh in a stock market scam. Despite realising the fraud, she hesitated to file an FIR for fear of social stigma. Such scams rarely involve a one-time transaction. Instead, they occur over multiple payments, which also makes recovery challenging. On average, only 20 per cent of the lost amount is recoverable due to delays in reporting and the fragmented nature of the transactions," Mittal said.
Beyond extortion scams, deepfakes have also been used for impersonation and harassment. For instance, women’s social media accounts are targeted, compromised, and used to post fake endorsements. In Hyderabad, however, the use of deepfakes to create pornographic content is rare. "In those cases, the motive is revenge or to create controversies," he stated.
How do cyber experts detect deepfakes?
Deepfake technology operates by training GANs on extensive datasets of videos, images, and audio to mimic human expressions, movements and vocal nuances with remarkable accuracy. As the technology evolves, the fakes become increasingly difficult to distinguish from authentic content.
Cyber experts combat deepfakes using advanced detection tools, blockchain-based solutions for verifying content authenticity, and digital watermarks to certify originality.
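The blockchain and watermarking approaches reduce, at their core, to recording a tamper-evident fingerprint of a file when it is published. A bare-bones sketch of that idea in Python, using only the standard library (the published-hash lookup is hypothetical; real provenance systems, such as the C2PA standard, are far more elaborate):

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 fingerprint of a media file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_unaltered(path: str, published_hash: str) -> bool:
    """Compare a file against the fingerprint recorded at publication time
    (e.g., on a public ledger). Any edit to the file changes the hash."""
    return fingerprint(path) == published_hash
```

This only proves that a file matches what was originally published; it cannot say whether the original itself was genuine, which is why experts pair it with direct detection methods.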
"However, the battle between creators of deepfakes and those seeking to detect them is ongoing, with both sides continuously advancing their respective methods," Bindra said.
Mittal noted that many open-source tools and applications, freely downloadable from the Play Store, help fraudsters create fake content, which makes controlling the use of AI all but impossible.
One way of differentiating fake content from the original involves contextual analysis, where experts assess the video's context, including its source and timing, and cross-reference it with known facts or legitimate recordings.
"Cyber experts combine their analytical skills and experience with technical tools to evaluate the authenticity of suspicious media. A variety of technical tools have been developed to aid in detection, leveraging AI to analyse media files for signs of manipulation, including irregularities in pixel data and audio waveforms," said Dr. Shruti Mantri.
How to combat deepfakes?
India's IT Act includes sections (66B, 66C, 66D and 66F) that address various forms of cybercrime, such as impersonation, cheating and privacy violations. However, the lack of specific provisions for AI-related crimes poses challenges: while existing laws can be interpreted to cover such offences, updates are needed to explicitly address AI and deepfake misuse. "As individuals, we should avoid sharing high-resolution photos publicly; verify the authenticity of online content before reacting; be cautious with unfamiliar links or requests for personal details; and rely on trusted sources for news and information," Mittal said.
How to detect deepfakes:
- Look for inconsistencies in facial expressions, unnatural blinking patterns or mismatched lighting that doesn't align with the environment.
- Pay attention to jerky or odd body movements that seem inconsistent with normal human motion.
- Check if lip movements match the spoken words accurately. Misaligned audio and visuals can be a giveaway.
- Be alert for unusual voice modulation, robotic tones or inconsistencies in the audio quality that indicate possible tampering.
- In images, watch for blurred edges, distorted backgrounds or strange visual glitches around the face or body.
- Watch for unrealistically smooth skin, perfectly symmetrical faces or exaggerated features, which can indicate manipulation (a toy automated version of this check follows the list).
- Evaluate the context of the content for inconsistencies, such as an unlikely setting or implausible actions for the person being portrayed.
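As a taste of how such checklist items can be automated, here is a sketch of the over-smooth-skin check using OpenCV's stock face detector (the library choice and the 0.5 ratio threshold are illustrative assumptions, not a validated detector):

```python
import cv2

# OpenCV's bundled Haar cascade for frontal faces.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_is_suspiciously_smooth(path: str, ratio: float = 0.5) -> bool:
    """Flag images whose face region is much smoother (lower Laplacian
    variance) than the frame as a whole, a common deepfake artefact."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False  # no face found, nothing to compare
    x, y, w, h = faces[0]
    face_sharpness = cv2.Laplacian(gray[y:y + h, x:x + w], cv2.CV_64F).var()
    frame_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return face_sharpness < ratio * frame_sharpness
```

A single heuristic like this will throw up false positives; practical detection pipelines combine dozens of such signals with human judgement.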