Facts & Myths
May 5, 2026

Myth

Viral social media videos showing Iranian forces destroying Israeli military bases and command centers during Operation Roaring Lion are authentic combat footage proving Iran inflicted devastating, irreversible damage on Israel's military.

Fact

BBC Verify and open-source analysts have confirmed these videos are entirely AI-generated fabrications — not genuine combat footage — representing the first time generative AI has been deployed at scale as a weapon of disinformation during an active military conflict.

The claim that viral videos document authentic Iranian military victories over Israeli bases and command centers during Operation Roaring Lion is comprehensively false. BBC Verify, working alongside analyst groups including Get Real and Alethea, conducted a systematic review of the most widely shared clips and found no authenticated footage whatsoever of Iranian forces destroying Israeli military infrastructure. What circulated online was, in fact, a coordinated wave of AI-generated propaganda designed to manufacture the appearance of an Iranian battlefield triumph that did not occur. The three most-viewed fabricated videos identified by BBC Verify collectively accumulated over 100 million views across multiple platforms before debunkers could contain their spread.

The Facts: What BBC Verify and Open-Source Analysts Actually Found

Emmanuelle Saliba, Chief Investigative Officer at the analyst group Get Real, told BBC Verify this represented "the first time we've seen generative AI be used at scale during a conflict." The fabricated videos disproportionately depicted night-time attacks, a deliberate production choice, since low-light scenes are significantly harder to authenticate. Claims of destroyed Israeli F-35 fighter jets featured heavily in the disinformation campaign: Lisa Kaplan, CEO of the Alethea analyst group, calculated that if the accumulated "shootdown" clips were genuine, Iran would have destroyed 15% of Israel's entire F-35 fleet, a militarily catastrophic and immediately verifiable event that simply did not happen. One clip with over 21 million views on TikTok, purporting to show an Israeli F-35 downed by Iranian air defenses, was traced by BBC Verify to footage from a flight simulator video game. TikTok removed the video only after being directly approached by the verification team.

  • BBC Verify authenticated zero pieces of footage showing Iranian forces destroying Israeli military bases or command centers.
  • Geoconfirmed, the open-source imagery analysis group, described the volume of disinformation as "astonishing," noting the presence of game clips, recycled footage from the October 2024 strikes, and AI-generated content all presented as live combat evidence.
  • The pro-Iranian X account "Daily Iran Military" grew from approximately 700,000 to 1.4 million followers in under a week — a 100% surge driven almost entirely by spreading fabricated content, illustrating the rapid network amplification these operations exploit.
  • One widely shared still image purportedly showing an Iranian missile strike on Tel Aviv, which accumulated 27 million views, was identified as AI-generated; civilians in the image were rendered the same size as nearby vehicles, a telltale artifact of generative AI image synthesis.
  • CNN fact-checker Daniel Dale independently reviewed and debunked a tranche of AI-fabricated war videos, reinforcing the BBC Verify findings across multiple verification outlets.

Historical Context: Iran's Documented Tradition of Military Fabrication

The AI-powered disinformation surge did not emerge in a vacuum. Iran has a decades-long, well-documented record of fabricating or grossly exaggerating its military capabilities and battlefield results. The Washington Institute for Near East Policy has catalogued a series of debunked Iranian military propaganda operations: Iran announced the production of the Qaher-313 stealth aircraft in 2013, a project that never materialized; digitally manipulated imagery of a 2008 Shahab-3 ballistic missile test was initially accepted by foreign media before being exposed; and Iran relabeled an aging American-era F-5 as a domestically produced new model before the deception was exposed. These are not isolated incidents but reflect a structured state strategy — what analysts call "soft deterrence" — in which Iran inflates the credibility of its military power to deter adversaries and boost the morale of regional proxies without requiring actual battlefield victories.

The Institute for National Security Studies (INSS) documented how, during the post-October 7 Swords of Iron war, Iran's influence operations evolved to incorporate deepfake video technology and AI-generated placards. The INSS noted that while AI products were "not yet at a perfect level of production, their quality is relatively high, and they can easily deceive citizens, the media, and elected officials." The operation under analysis — deploying AI video fabrications to claim destruction of Israeli military assets — follows this same operational template, now turbocharged by generative AI tools accessible to state-linked actors at minimal cost and with plausible deniability. Russian-linked influence networks have also been identified by Alethea as playing a role in amplifying the fake F-35 shootdown content specifically, exploiting the conflict to sow doubt about Western weapons systems.

Conclusion: Why This Disinformation Is Dangerous and Must Be Rejected

Fabricated "victory footage" of this kind serves multiple hostile objectives simultaneously: it attempts to demoralize Israeli and Western publics, rehabilitate Iran's military image after a damaging campaign, recruit sympathizers in the Global South, and erode general trust in authentic visual evidence. The ADL has documented how generative AI disinformation tied to the Israel conflict "can cause some to doubt the validity of actual war images, creating unnecessary suspicion at a time of deeply polarized public opinion" — a secondary propaganda dividend that benefits Iran even after the specific fake videos are debunked. The appropriate response to this disinformation is precisely what BBC Verify, Geoconfirmed, CNN, and Alethea have modeled: systematic, source-specific debunking with named experts, technical artifact analysis, and platform accountability. Accepting fabricated AI footage as evidence of Iranian battlefield dominance would represent a victory for a state-directed disinformation apparatus designed to substitute propaganda for facts — and to let a campaign of pixels stand in for the military achievements Iran could not secure in reality.

#iran-disinformation #ai-generated-propaganda #operation-roaring-lion #deepfake-video #israel-iran-conflict #bbc-verify #open-source-intelligence #military-fabrication