How We Were Deepfaked

A recent article discussed warnings about the impact of generative artificial intelligence (AI) on global elections in 2024. The warnings predicted that AI would supercharge political disinformation, leaving voters unable to distinguish fact from fiction in a sea of realistic, personalized lies.

Disinformation in elections

The article highlighted the spread of AI-generated content, such as fake audio clips of political leaders and realistic videos of non-existent individuals making racist jokes. However, experts now believe that there is little evidence that AI disinformation was as widespread or impactful as predicted.

Results from around the world

A study by the Alan Turing Institute found only 27 viral pieces of AI-generated content during the UK, French, and EU elections combined. A separate study discovered that only around one in 20 British people recognized any of the most widely shared political deepfakes during the election. In the US, the News Literacy Project catalogued almost 1,000 examples of misinformation about the presidential election, with only 6% involving generative AI.

Impact of AI on disinformation

Research suggests that most AI-generated content was not designed to deceive, but rather to create emotional appeals or support political narratives. In some cases, AI-generated images were used for satirical or entertainment purposes.

Conclusion

The threat of deepfakes and AI-generated content is real and should not be taken lightly, and both the technology and its social effects are advancing rapidly. But rather than focusing narrowly on the potential for AI-generated disinformation, we should tackle the reasons why people are willing to believe and share falsehoods in the first place, such as political polarization and TikTok-fuelled media diets.

FAQs

Q: What is generative AI and how does it relate to elections?
A: Generative AI refers to artificial intelligence that can create realistic, personalized content, such as fake audio clips, videos, and images. The concern is that this technology could be used to spread political disinformation during elections.

Q: How widespread is AI-generated disinformation in elections?
A: According to experts, there is little evidence that AI disinformation was as widespread or impactful as predicted. Studies have found that AI-generated content made up a small percentage of election-related misinformation.

Q: Are people susceptible to AI-generated disinformation?
A: While there is a risk of people falling for AI-generated disinformation, research suggests that most people can apply healthy scepticism to such claims. However, it is essential to continue educating the public on the dangers of misinformation.

Q: How can we protect ourselves from AI-generated disinformation?
A: To protect yourself from AI-generated disinformation, it is essential to think critically and not take information at face value. Verify claims through reputable sources and be cautious of emotional appeals and sensational headlines.

Q: Are there any signs that AI-generated disinformation is being used for malicious purposes?
A: Yes, there are signs that AI-generated disinformation is being used for malicious purposes, such as in impersonation scams or pornographic harassment and extortion. However, the real challenge remains in tackling the underlying reasons why people are willing to believe and share falsehoods in the first place.
