Definition

Deepfake Disinformation refers to the use of AI-generated synthetic media (video, audio, or imagery) to create false impressions of real people saying or doing things they never said or did, for purposes of political manipulation, financial fraud, character assassination, or electoral interference. Unlike traditional disinformation, deepfakes exploit the psychological authority of audiovisual evidence.

Why It Matters

The deepfake threat to elections and democratic discourse is not a hypothetical future problem — it was documented in 38 countries between July 2023 and July 2024. The key finding from Recorded Future’s Insikt Group analysis: deepfakes don’t need to be high-quality to cause harm. A sufficiently believable fake, distributed fast enough during an election cycle, can change outcomes or suppress turnout before debunking occurs. Regulatory frameworks have not kept pace.

Evidence & Examples

  • 82 deepfakes documented targeting political figures in 38 countries, July 2023–July 2024 (2024 Deepfakes and Election Disinformation Report)
  • Slovakia: deepfake audio spread claims of electoral fraud days before the 2023 parliamentary elections
  • Turkey: presidential candidate withdrew after alleged deepfake sex tape; Erdoğan used deepfake linking opposition to terrorism
  • UK: fake audio of Labour leader Keir Starmer criticizing his own party
  • US: fake Biden audio robocall urged New Hampshire voters to skip the primary
  • Spoofed media branding: France24 and BBC logos used on deepfake videos to add credibility
  • Foreign leader repurposing: Trump and Xi deepfakes used in domestic elections in Taiwan and South Africa
  • Non-consensual pornography: women in politics disproportionately targeted; structural deterrent to political participation
  • Platforms cannot detect deepfakes reliably at speed; regulation focuses on takedown (post-harm) rather than prevention

Tensions & Counterarguments

  • Detection technology is racing against generation technology — arms race dynamic with no clear winner
  • Low-quality deepfakes can cause harm: the quality threshold for belief is lower than assumed, which simplifies the attacker’s threat model — cheap attacks work
  • Legal remedies (DMCA, defamation) are slow relative to viral spread; harm often occurs before takedown
  • Countermeasures like “familiarity campaigns” (getting people familiar with real likenesses) work for high-profile figures but not for local candidates or journalists
  • Foreign interference deepfakes operate across jurisdictions where no single legal framework applies

Key Sources