
Seeing is No Longer Believing: Deepfakes Usher in an Age of Political Misinformation

A controversial deepfake video of UK Labour party leader Sir Keir Starmer has caused an uproar in recent days. The video, which appeared to show Starmer struggling to answer questions about Brexit policy during an interview, rapidly spread across social media earlier this week. However, it was soon revealed that the video had been digitally altered using deepfake technology.

Deepfakes are synthetic media generated by artificial intelligence, made to look authentic but depicting events or speech that never actually occurred. They are created by feeding large volumes of images and videos of a person into a machine learning algorithm, which can then realistically generate new video or audio that mimics that person.
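For readers curious about the mechanics, the widely reported "face-swap" family of deepfakes pairs a single shared encoder with a separate decoder trained for each person; swapping decoders at inference time renders one person's expressions onto another's face. The PyTorch sketch below is purely illustrative rather than any particular tool's implementation (the layer sizes, the 64x64 frames and names such as decoder_a are assumptions), and it shows only the swap step, not training.

```python
# Minimal, illustrative sketch of the face-swap deepfake idea: one shared
# encoder learns a common facial representation, and a separate decoder is
# trained for each person. Decoding person A's frame with person B's decoder
# produces the swapped (fake) frame. Sizes and names are hypothetical.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_a = Decoder()   # would be trained to reconstruct person A's face
decoder_b = Decoder()   # would be trained to reconstruct person B's face

# The swap: encode a frame of person A, decode it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a real video frame
with torch.no_grad():
    fake_frame = decoder_b(encoder(frame_of_a))
print(fake_frame.shape)  # torch.Size([1, 3, 64, 64])
```

In a real system the two decoders would first be trained to reconstruct many thousands of frames of each person through the shared encoder, which is why deepfakes of public figures, for whom abundant footage exists, are comparatively easy to produce.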

The Keir Starmer deepfake appears to show him stuttering and unable to respond to questions during a BBC interview. The original video, however, shows Starmer answering the questions eloquently and confidently; the deepfake splices in footage of him looking uncomfortable and hesitant, lifted from a different context.

The video was originally posted to social media by a pro-Brexit account on X (formerly Twitter) with the caption "Is this deepfake accurate?" Many viewers, however, did not realise it had been manipulated and took it to be genuine footage of Starmer struggling. Within hours, the bogus video had spread across X and Facebook.

Starmer and Labour figures immediately criticised the video as malicious disinformation. A Labour spokesperson said it represents a "completely unacceptable" use of technology to spread falsehoods. However, the creator of the video defended it as satire used to make a political point.

The incident has sparked renewed concern about the societal impact of deepfakes and their potential to spread misinformation and erode public trust. As the technology advances, videos like this will become increasingly difficult to distinguish from real content.

Some analysts argue the Starmer deepfake represents a dangerous escalation in the use of AI-generated synthetic media for political ends. If sophisticated fakes spread widely during election campaigns, they could deceive and sway voters, undermining democracy itself.

Others have cautioned that mislabelling or overreacting to satirical deepfakes also poses risks. Banning manipulated media could set concerning precedents for online censorship. Critics argue the focus should be on improving public awareness around deepfakes and developing better detection tools.

The episode has also revealed gaps in current UK law around deepfakes. While deliberately misleading uses are widely condemned, satirical deepfakes currently sit in a legal grey area, and questions remain about what policies or regulations could guard against harmful, untruthful synthetic videos.

Examples of Political Deepfakes

The emergence of the Keir Starmer deepfake may serve as a wake-up call. But it is far from the first time synthetic media has been used for political purposes.

In 2019, a doctored video of US House Speaker Nancy Pelosi spread on social media. The footage had been slowed down to make Pelosi appear confused and slurring her words, in an apparent attempt to undermine her credibility.
