In the battle for public opinion, truth has become a special effect.
By James R Martin
Some films and videos label themselves as “documentaries,” but they are propaganda in disguise, mounting personal attacks or serving as tools for character assassination. These videos do not aim to explore reality or highlight issues; instead, they spin conspiracy theories, make baseless claims, and tell outright lies to discredit a person, an institution, or a policy. Their goal is not investigation but harm.
Attack documentaries (“attackumentaries”) are generally political and backed by groups with explicit ideologies and agendas. They go beyond standard propaganda by centering on personal attacks and emotional appeals. While they borrow the visual style of nonfiction storytelling—interviews, archival footage, a historical tone—they do not rely on the objective, fact-based narratives of conventional documentaries.
Two decades ago, “Stolen Honor: Wounds That Never Heal” (2004) established the pattern for modern attack films. Released during John Kerry’s presidential campaign, it used testimonial editing, emotional music, and selective interviews to build a narrative of betrayal—classic propaganda dressed up as patriotism. The film was an early warning of how political storytelling would develop once everyone gained access to the tools of manipulation.
Tech Evolution
Today, the same persuasive techniques are compressed into seconds on TikTok, X, and YouTube: jump cuts, captions, and AI voiceovers replace the documentary’s narration. A swipe now achieves what once required a production budget and television airtime, turning reputational battles into an interactive game. Both “Stolen Honor” and today’s viral “call-out” videos rely on the same emotional structure—an accusation framed as a revelation. In “Stolen Honor,” serious interviews and a steady documentary pace establish credibility; online, quick edits, dramatic soundtracks, and on-screen text create urgency and moral certainty to the same end. The scale and authorship differ: the earlier film depended on coordinated distribution and broadcast authority, while today’s creators inflict similar reputational harm through repetition, remixing, and algorithmic amplification.
Doxxing and Harassment Videos
A well-known outgrowth of this trend is the rise of doxxing and harassment videos, in which personal information is shared under the pretense of “exposing” misconduct. During the so-called “Pizzagate” and “GamerGate” episodes, for example, anonymous users gathered private details and spread them through viral videos that led to real-world harassment. These attacks mix spectacle with intimidation—propaganda adapted for the algorithm-driven era.
Stitched, looped, and AI-filtered
This lineage of attack media—stretching from “Stolen Honor” to the stitched, looped, and AI-filtered clips of today—has given rise to a new ecosystem of persuasion. Political operatives, influencers, and anonymous users all draw from the same toolbox to shape narratives that blur the lines between fact, fiction, and entertainment. Users create deepfake voice notes, selectively edit “gotcha” reels, and post montages of “receipts” at midnight, revealing how propaganda adapts to the attention economy. By examining these modern case studies—across politics, celebrity feuds, and social movements—we see not only how they attack reputations but also how they train audiences to participate in the spectacle.
Tools of Persuasion
Propaganda, once the work of governments, has been crowdsourced. Persuasion techniques once reserved for states and the media are now accessible to anyone with a smartphone. The result is a new form of influence—part documentary, part performance art—in which the line between truth and fiction is deliberately blurred. Producing credible nonfiction that argues for or against an issue is clearly different from crafting a fictional, harmful story about an individual or a cultural institution.
Common Labels and Slang for Attack Media
Deepfake, deep-fake, synthetic media — AI-generated or AI-modified video or audio that convincingly puts words in a target’s mouth or places their face or body into fabricated scenes.
Doctored video, edited clip, selective editing (contextomy) — Footage manipulated to remove context or rearrange events to alter meaning—classic ‘out-of-context’ attacks.
Smear video, hit piece, hatchet job, attack ad — Produced materials explicitly designed to discredit someone, often political figures or journalists.
Kompromat, compromising material — From Russian: video or images used to blackmail, discredit, or exert leverage.
Character assassination — Campaigns aimed at destroying a target’s reputation through repeated attacks, innuendo, or falsehoods.
Gaslighting (online) — Psychological manipulation designed to cause a target and their audience to doubt their memory or sanity.
Receipts, receipts dump — Posting ‘evidence’—screenshots, clips, files—to show wrongdoing; can be real or fake.
Call-out videos, cancel compilations, ratios, dogpiling, and pile-ons — Short videos or threads that call out behavior; ‘dogpiling’ describes the mass amplification of attacks.
Dox video, doxxing (video format) — Sharing private information in video form, often with threats or instructions to harass.
Revenge porn, honeytrap, honeypot video — Sexual or intimate content used to shame, blackmail, or silence individuals.
Sockpuppet, troll-farm videos, coordinated inauthentic behavior (CIB) — Videos created or boosted by fake accounts or coordinated networks to simulate grassroots outrage.
Gotcha clip, gotcha video, gotcha moment — Journalists’ shorthand for short, framed edits designed to catch someone saying or doing something embarrassing.
Voice clone, audio deepfake — Synthetic audio used to imitate someone’s voice and create false statements or calls.
Weaponized montage, misleading montage — Quick cuts of images, captions, and audio arranged to evoke emotions rather than present facts.
Black PR, negative ops, dirty ops, kompromat dump — Industry or intelligence slang for leaking damaging material indirectly to avoid traceability.
Sources and Further Reading
• Wikipedia — Entries on Deepfake, Attack ad, Kompromat, Revenge porn, and related terminology. https://en.wikipedia.org
• Center for Media and Social Impact (CMSI) — Guidance on ethics and manipulation in nonfiction editing and “contextomy” (selective editing). https://cmsimpact.org
• Liberties.eu — Articles on Character Assassination and information manipulation in digital media. https://www.liberties.eu
• Cleveland Clinic — Psychological explanation of Gaslighting and emotional manipulation. https://my.clevelandclinic.org
• Online Harassment Field Manual – PEN America — Definitions and examples of doxxing, online abuse, and coordinated harassment. https://onlineharassmentfieldmanual.pen.org
• Internet Matters — Discussions of troll farms, sockpuppet accounts, and coordinated inauthentic behavior (CIB) in social networks. https://www.internetmatters.org
• PMC / PubMed Central — Academic and media-industry discussions of Voice cloning and audio deepfakes. https://www.ncbi.nlm.nih.gov/pmc
• MIT Sloan / DHS — Academic and policy briefs on synthetic media, misinformation, and detection technologies. https://mitsloan.mit.edu, https://www.dhs.gov/science-and-technology
This post is a copyrighted excerpt from the forthcoming book “Documentary Reflections” by James R Martin.

