Summary
Reality Defender (deepfake detection company) explainer on what deepfakes are, how they are created, and the categories of harm they enable. Covers image, video, and audio deepfakes. Notes that the overwhelming majority of existing deepfake content is non-consensual pornography, with election interference and fraud as secondary uses.
Key Points
- A deepfake is synthetic media (image, video, audio, text) created using deep learning — distinct from traditional CGI or Photoshop.
- Creation requires training neural networks on large datasets; newer tools can generate convincing voice deepfakes from 30 seconds of audio.
- Categories: deepfake images (news manipulation, defamation), deepfake video (most widely consumed, election interference, impersonation), deepfake audio (voice cloning, banking fraud, account takeover).
- “An overwhelming majority of current deepfake online content is deepfake pornography” — framed as a violation of consent and propagation of abuse against women.
- Election interference: deepfakes deployed to “manipulate free elections, impersonate public figures, and change public opinions on crucial issues.”
- Security threat: voice cloning attacks on banking biometric verification; cybercriminals using deepfakes to bypass security at scale.
Newsletter Angles
- Definitional foundation for Deepfake Disinformation — clarifies technical mechanism and harm taxonomy for readers unfamiliar with the technology.
- The pornography-first reality is often absent from policy discussions focused on election interference — honest coverage should surface this.
- Voice cloning as banking attack vector connects Deepfake Disinformation to Data Privacy Weaponization and financial infrastructure risk.
- “30 seconds of audio” threshold is a key data point: anyone with a public record (politician, executive, journalist) is now vulnerable to voice cloning.
Entities Mentioned
- Reality Defender — publisher of the explainer (commercial deepfake detection company); no other specific entities mentioned.
Concepts Mentioned
- Deepfake Disinformation — core explainer for this concept
- Data Privacy Weaponization — voice cloning as attack on biometric security
- Algorithmic Radicalization — deepfakes as amplified disinformation in algorithmic feeds
Quotes
- “An overwhelming majority of current deepfake online content is deepfake pornography, a heinous abuse of generative AI technology that violates basic principles of consent and propagates violence and abuse against women.”
- “The newest tools available for public use can create convincing speech deepfakes with only 30 seconds of audio.”
Notes
Published by Reality Defender, a commercial deepfake detection company, which has an inherent commercial interest in emphasizing deepfake threats. The pornography statistic is widely reported across independent sources and is not disputed. The 30-second voice cloning claim is consistent with 2024 research on text-to-speech (TTS) models.