Summary

Recorded Future’s Insikt Group analyzed 82 deepfakes targeting public figures across 38 countries between July 2023 and July 2024. The report finds that deepfakes are being weaponized for financial scams, election manipulation, character assassination, and non-consensual pornography — and that speed of response, not just detection, is the critical mitigation variable.

Key Points

  • 82 deepfakes identified across 38 countries; 30 of those countries were holding or approaching elections
  • Primary uses: scams (26.8%), false statements (25.6%), electioneering (15.8%), character assassination (10.9%), non-consensual pornography (10.9%)
  • Emerging tactics: fake whistleblower deepfakes, audio-only deepfakes, spoofed media branding (France24, BBC logos), foreign leader impersonation repurposed for domestic elections
  • Slovakia: deepfake audio released just before elections claiming electoral fraud
  • Turkey: presidential candidate withdrew after release of alleged deepfake sex tape
  • Turkey’s Erdoğan used a deepfake to link an opposition leader to terrorist groups
  • Key finding: deepfakes don’t need to be high-quality to cause harm — speed of distribution matters more than polish
  • Countermeasures: rapid release of authentic content, familiarity campaigns, DMCA takedowns, AI detection tools, collaboration between platforms and fact-checkers

Newsletter Angles

  • The “good enough” deepfake: the report’s finding that low-quality deepfakes still cause harm reframes the threat. The danger isn’t hyperrealistic AI fakes — it’s cheap, fast, good-enough fakes deployed at election speed.
  • Non-consensual pornography disproportionately targeting women in politics is a structural barrier to political participation — an underreported political power story.
  • The regulatory gap: deepfakes move faster than any existing legal framework. The report highlights how foreign interference deepfakes are especially hard to address.

Entities Mentioned

Recorded Future / Insikt Group, France24, BBC, Recep Tayyip Erdoğan, Slovakia, Turkey

Concepts Mentioned

Deepfakes, audio-only deepfakes, non-consensual pornography, election manipulation, character assassination, spoofed media branding, fake whistleblowers, DMCA takedowns, AI detection tools

Quotes

“Research suggests that deepfakes, beyond a certain quality, don’t necessarily need to be highly sophisticated to cause harm.”

Notes

Source is a threat intelligence firm (Recorded Future) with commercial interest in AI detection tools. Dataset is 82 cases — notable but not exhaustive. Methodology for identifying deepfakes not fully disclosed. The report skews toward high-profile political targets; everyday disinformation may look different.