
AI And Deepfake Misuse A Real Threat To Election Outcomes - Shamla Naidoo


Ms. Shamla Naidoo is currently Head of Cloud Strategy & Innovation at Netskope, serves as a non-executive director for multiple domestic and international companies, and is an adjunct professor of law at the University of Illinois, Chicago. She has successfully led digital strategy in technology leadership roles, including as CISO and Head of Information Technology Risk at IBM. Drawing on her experience in the healthcare, finance, hospitality, energy, and manufacturing sectors, Ms. Naidoo advises governments and industry on how to embrace innovation while managing risk.

In view of the impending threat of AI technology being misused to spread disinformation and influence electoral choices and outcomes, Deccan Chronicle had an exclusive interaction with Ms. Naidoo to bring to our readers the scope of the impact these activities can have, and how one can avoid falling prey to the mala fide intent of perpetrators.

  • You have highlighted the possible misuse of technology to manipulate voters. How real and potent is this threat?

This is a very real threat that is likely to keep growing. Technology has always been weaponised for political disinformation, but with the emergence of generative AI technology, which is the technology behind ChatGPT and other similar AI assistants, threat actors are employing more sophisticated and efficient techniques to influence the masses, especially with deepfakes.

Until just a few years ago, creating fake pictures, audio samples or videos still required some technical skill and a lot of money, so their authors focused on content likely to influence the maximum number of voters to maximise the “return on investment”. Now that creating deepfakes is easier, threat actors can spread more of them, and also narrow their targeting down to smaller communities and sections of the population, or even specific individuals.

India is a prime target because it is the largest democracy on the planet, and there will always be those who are interested in influencing the electoral process. Another issue is that a large part of the population has become digitally connected only in recent years and thus may not be as well equipped to interpret and question online disinformation as populations that are more seasoned internet users. It is perhaps no surprise that the World Economic Forum ranked India highest for risk of disinformation and misinformation earlier this year.

  • Is there any evidence to indicate that such misuse has happened in the past anywhere in the world during elections?

There is no shortage of evidence that technologies have been used to influence past elections. Most took the form of information or disinformation campaigns on social media, or cyber operations designed to disrupt electoral online operations, especially where citizens can cast their votes online. Today, deepfakes are the newest weapon in threat actors’ arsenals to supercharge disinformation with content that looks increasingly realistic but is fake, and we don’t need to look far to find examples targeting these elections.

With that said, early research tends to show that the actual influence of fake news and disinformation campaigns over voters’ final decisions is limited. And at the same time, it is a difficult thing to measure since the goal of fake news is to convince people it is real, and we therefore all think that we are making decisions based on real information. We could tackle this with a neutral party identifying fake news from real information, but the big question is: who is legitimate and credible enough to be this neutral party?

In any case, it would be interesting to see whether such analysis can be carried out for India specifically in the months after the election, to gauge whether deepfakes had a measurable influence. But there is no doubt that some voters will make their decisions based on incorrect information, and for the sake of democracy, this is something that should not happen.

  • What are the key steps to either eliminate the possibility of such events happening or neutralise their impact?

We don’t have the technical or human means to avoid this problem entirely, so we need to focus on neutralising its impact. We can do that by warning and educating citizens about this threat, and encouraging them to develop healthy reflexes to question what they are seeing online, and ensure what they are reading or seeing is true. I think the “Verify before you Amplify” motto from the Electoral Commission is spot on in terms of the mindset we want the population to be in.

Whoever wins the election is also going to have to reflect on developing bespoke laws around disinformation, which should also penalise the spread of AI-generated content like deepfakes for such ends. Regulations would also be the opportunity to define the roles and accountability of all involved parties in this battle, including social media platforms or political parties.

  • What is the potential for deepfake and AI misuse in identity theft scenarios? What fail-safes are built in, as with other technologies, to prevent misuse?

In an era of social media, and with improvements in AI technologies, threat actors can easily steal identities and do whatever they want under their victims’ names. All the source material they need to clone voices or create fake pictures or videos is usually available on people’s social media accounts. In corporate environments, we are seeing more and more scenarios of impersonation of company executives, and threat actors can go as far as creating deepfakes of their victims’ colleagues on live video calls to deceive them.

Limiting our online personal identity, and ensuring that our privacy parameters are set to a maximum on our social media accounts is probably the best way to shield ourselves from identity theft. And exercising suspicion, going as far as confirming someone’s identity when interacting online, is the best way to avoid falling victim to a fake.

To mitigate the damage of identity theft, organisations operating in sensitive industries, such as banks or government agencies, are rolling out increasingly powerful authentication technologies, often powered by AI as well. But these are not foolproof, and at the end of the day it falls on us to be cautious and protect our identities.

  • What are some of the key challenges facing the world in the realm of cybersecurity? How can we tackle these?

Two key cyber challenges we are seeing the world over are AI security and cloud security.

AI security is a multi-pronged issue. The advances in AI are allowing threat actors to increase the power and scale of their traditional operations and attacks, by automating a number of steps. Some risks are emerging from the advent of Generative AI tools like ChatGPT as well, where employees using them may leak sensitive data owned by their organisation. Many organisations are also rushing to embed generative AI into their products or digital services, sometimes without the right cybersecurity standards in mind, and this innovation wave is creating a whole new ecosystem and supply chain that cybercriminals are already targeting.

The continued adoption of cloud technology among organisations is also posing risks, with cybercriminals abusing cloud services and business applications to deliver malware to employees and potentially gain access to their systems, where they can steal data, launch an attack and/or demand a ransom. Some of our data show that India is heavily affected by this.

For the stability and security of our societies, it is important that organisations - from private companies to government agencies - have cybersecurity standards that can counter these modern threats. Different integrated security models have been developed with this objective in mind, such as Security Service Edge platforms.

  • What does the future of cybersecurity look like with almost everything getting digitized?

At this stage, it almost feels like we are going to be stuck in an eternal arms race between cybercriminals and cybersecurity professionals. The more digitised the world becomes, the more exposed it is to cyber threats. Governments and organisations are cognizant of this, and are developing and adopting best-of-breed cybersecurity solutions to keep up with ever-growing cyber threats.

They are also cognizant that they will not be able to keep up without sharing their resources, and we are seeing an increasing number of initiatives designed to increase collaboration and intelligence sharing between countries or private companies operating in the same industries. Cybercriminals are efficient because they are working as a team, and thus it is important that those working to protect the digital world from threats also work in unison.

  • Please tell us about your journey from South Africa to becoming a global expert in the domain of cybersecurity and digital transformation.

I entered the cybersecurity space in 1999. I started my journey as an entry level technologist in South Africa, and as part of my engineering job, I started to learn and apply security controls. By the time cybersecurity became its own function, I was skilled and ready to hold responsibilities in this space. There weren't as many cybersecurity specialists at the time as there are today, and I was approached for an opportunity to work in the tech space in the USA.


(Source: Deccan Chronicle)