Spotting Misinformation & Propaganda
Narrative warfare, bot networks, deepfakes, and the prebunking techniques that counter them.
Types
Wardle's taxonomy
Claire Wardle (First Draft) distinguishes seven kinds of mis- and disinformation, worth knowing because each type calls for a different response.
Satire / parody
No intent to harm, but can still deceive; e.g., stories from The Onion mistaken for real news.
False connection
Headlines/visuals don't support the content. Clickbait.
Misleading content
Selective use of facts to frame an issue.
False context
Genuine content shared with false context, e.g., a real photo re-captioned for a different event.
Imposter content
Genuine sources impersonated, e.g., fake stories mimicking the BBC's layout.
Manipulated content
Genuine content altered — edited video, doctored photo.
Fabricated content
Entirely new content designed to deceive. The hardest to catch.
Actors
State actors
Key Points
- Russia's Internet Research Agency (Prigozhin-linked): 2016 US election interference documented in the Mueller Report.
- China's 'wolf warrior' diplomacy + coordinated social media campaigns.
- Iran's Endless Mayfly network (targeted Middle East audiences).
- Many state-sponsored campaigns target diaspora communities or election cycles.
Commercial / engagement actors
The Macedonian 'fake news' towns (notably Veles, 2016) monetized US political outrage through ad revenue. Profit, not politics, motivated them.
Domestic political actors
Not foreign actors: domestic partisans are often the biggest source of misinformation in a country's political discourse.
AI & Deepfakes
What deepfakes can and can't do (2026)
Key Points
- Face swaps: excellent in static images; temporal flicker is still detectable in video.
- Voice clones: 3-second samples yield convincing clones with ElevenLabs and similar tools.
- Text-to-video (Sora, Veo): coherent over 10-30 second clips.
- Full synthetic identities: plausible but usually have subtle artifacts in eyes, hair boundaries, hand geometry.
Detection techniques
Key Points
- Provenance: C2PA content credentials (Adobe, Microsoft, Sony, BBC).
- Manual inspection: check that eye reflections match between both eyes and that lighting is consistent across the scene.
- Context verification: where did this first appear? Has the claimed speaker otherwise said this?
- Forensic tools: Microsoft Video Authenticator, Intel FakeCatcher — not definitive.
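Context verification often starts with reverse image search, which typically rests on perceptual hashing: visually similar images hash to nearby bit strings, so a re-captioned photo can be traced to its first appearance. Below is a minimal sketch of one such hash, the classic "average hash" (aHash), over an already-downscaled 8x8 grayscale grid; the pixel grids are synthetic stand-ins for real images, and production systems use more robust variants (pHash, dHash):

```python
def average_hash(pixels):
    """Average hash of an 8x8 grayscale grid (values 0-255):
    each bit records whether a pixel is brighter than the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Bit distance between two hashes; small distance = likely the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Synthetic stand-ins: a "photo", a slightly brightened re-upload of it,
# and an unrelated image.
original = [[(x * y * 5) % 256 for x in range(8)] for y in range(8)]
reuploaded = [[p + 3 for p in row] for row in original]
unrelated = [[(x + y * 37) % 256 for x in range(8)] for y in range(8)]

print(hamming(average_hash(original), average_hash(reuploaded)))  # 0: same image despite re-encoding
print(hamming(average_hash(original), average_hash(unrelated)))   # large: different image
```

Because each bit depends only on brightness relative to the mean, the hash survives uniform brightness shifts and mild compression, which is exactly what makes recycled "false context" photos findable years after their first use.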
Prebunking
Inoculation theory
Psychological inoculation: expose people to weakened doses of manipulation techniques before they encounter real propaganda. Van der Linden and Roozenbeek's Cambridge experiments show durable effects.
Key Points
- Teach the technique (scapegoating, emotional manipulation) rather than individual falsehoods.
- Works across the political spectrum, not just for one side.
- BBC iReporter and the Bad News game are gamified applications.
Community + platform response
Key Points
- Platform labels: modest but measurable effects. X's Community Notes is the most-studied model.
- Digital literacy curricula: integrated into Finland's K-12 schooling since 2016; often cited as why Finland tops the Open Society Institute's media literacy index.
- Journalism: dedicated misinformation beats (Reuters, ProPublica, BBC Verify).
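Community Notes ranking is publicly documented as matrix factorization over user ratings: each rating is modeled as a baseline plus user and note biases plus a product of learned factor vectors. The factor terms absorb viewpoint-aligned agreement, so a note's bias ("helpfulness") stays high only when raters across viewpoints agree. Here is a toy sketch of that bridging idea, not X's actual implementation:

```python
import random

def fit_note_bias(ratings, n_users, n_notes, dim=1, lr=0.05, reg=0.1, steps=2000):
    """Fit rating ~ mu + user_bias + note_bias + user_vec . note_vec by SGD.
    Returns the learned note biases (the 'bridged helpfulness' scores)."""
    rng = random.Random(0)
    mu = 0.0
    bu = [0.0] * n_users
    bn = [0.0] * n_notes
    fu = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_users)]
    fn = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_notes)]
    for _ in range(steps):
        for u, n, r in ratings:
            pred = mu + bu[u] + bn[n] + sum(a * b for a, b in zip(fu[u], fn[n]))
            err = r - pred
            mu += lr * err
            bu[u] += lr * (err - reg * bu[u])
            bn[n] += lr * (err - reg * bn[n])
            for d in range(dim):
                fu_d, fn_d = fu[u][d], fn[n][d]
                fu[u][d] += lr * (err * fn_d - reg * fu_d)
                fn[n][d] += lr * (err * fu_d - reg * fn_d)
    return bn

# Hypothetical data: users 0-3 lean one way, users 4-7 the other.
# Note 0 is rated helpful (1) by both camps; note 1 only by the first camp.
ratings = [(u, 0, 1) for u in range(8)]
ratings += [(u, 1, 1) for u in range(4)] + [(u, 1, 0) for u in range(4, 8)]
bias = fit_note_bias(ratings, n_users=8, n_notes=2)
print("bridging note: %.2f, polarized note: %.2f" % (bias[0], bias[1]))
```

The bridging note ends up with the higher bias: the polarized note's one-sided support is largely explained away by the user-note factor product, which is why a note cheered by only one camp does not get shown.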
FAQ
Is content moderation censorship?
Private platforms moderating their own spaces isn't censorship in the First Amendment sense, but it does concentrate power and can itself be abused. Transparency reports, appeals processes, and regulatory frameworks (the EU's Digital Services Act) attempt to balance these concerns.
Continue learning
Explore related MUN guides to deepen your skills.