A sweeping wave of digitally fabricated content — not combat footage — has been deliberately seeded across X, TikTok, and Telegram to manufacture a false narrative of Iranian military triumph over Israel. Independent open-source intelligence groups, digital forensics experts, and journalism verification units have systematically debunked every major video in circulation purporting to show Iranian missiles striking Tel Aviv or Israeli F-35 stealth fighters being destroyed. These are not blurry, ambiguous clips open to interpretation — they are synthetic creations produced using generative artificial intelligence tools that can conjure photorealistic destruction in minutes, with no real-world event behind them. The campaign is unprecedented in scale and technical sophistication, representing, in the words of leading analysts, the first time generative AI has been weaponized at this magnitude during an active armed conflict.
The Facts: What Verification Experts Actually Found
BBC Verify, one of the most rigorous open-source verification operations in international journalism, reviewed dozens of the most widely shared videos and confirmed that the three most-viewed fake videos alone collectively amassed over 100 million views across multiple platforms before being flagged. One AI-generated image depicting dozens of missiles raining down on Tel Aviv accumulated 27 million views on its own. Analysts at Get Real, a specialist disinformation watchdog, told BBC Verify that this conflict marks "the first time we've seen generative AI be used at scale during a conflict." The barrier to creating convincing synthetic conflict footage, as digital media expert Timothy Graham of Queensland University of Technology noted, "has essentially collapsed."
- No authenticated footage of a single Israeli F-35 being shot down exists. Lisa Kaplan, CEO of the Alethea analyst group, calculated that if the F-35 losses depicted in the circulating fake clips were real, Iran would have destroyed 15% of Israel's entire F-35 fleet, an event that would be impossible to conceal from satellite imagery, aviation tracking data, and allied intelligence agencies.
- The open-source verification group Geoconfirmed documented that the fake footage includes "unrelated footage from Pakistan, recycled videos from the October 2024 strikes, game clips, and AI-generated content" all being falsely presented as live combat evidence from the current conflict.
- Platform X announced it would temporarily suspend creators from its monetization program for posting AI-generated conflict videos without labels, an acknowledgment that synthetic war content was being financially incentivized at scale.
- Pro-Iranian accounts with no verified ties to the Iranian government, such as "Daily Iran Military," doubled their follower count on X from roughly 700,000 to 1.4 million in under one week by serving as high-volume disinformation amplifiers.
- BBC Verify also observed X's own AI chatbot, Grok, incorrectly identifying AI-generated videos as authentic when users queried it for verification, demonstrating how the disinformation ecosystem exploits platform AI tools to launder false content.
Historical Context: Iran's Documented Tradition of Digital Warfare Against Israel
This disinformation campaign does not exist in a vacuum. Iran has maintained a sustained, well-resourced foreign information manipulation infrastructure targeting Israel since at least the outbreak of the October 7, 2023 war. The Institute for National Security Studies (INSS) in Tel Aviv documented in detail how Iranian influence operations during the "Swords of Iron" conflict deployed deepfake videos — including fabricated footage of Israeli Prime Minister Netanyahu — across X, Facebook, Instagram, YouTube, WhatsApp, and Telegram simultaneously. These operations were characterized by sophisticated branding, coordinated bot networks, and a deliberate strategy of investing in a smaller number of high-quality deceptive assets rather than flooding platforms with crude fakes.
The US National Security Agency warned as early as September 2023 that the ease and accessibility of synthetic media creation posed a significant challenge to national security and could drive the spread of military disinformation. What the NSA anticipated has now materialized at scale. Iran's use of AI-generated content is not the organic expression of a victorious military — it is the information warfare strategy of a regime that understands it cannot win the battle of verified facts and so manufactures its own. The pattern is consistent: fabricate a military victory in the digital space, amplify it through coordinated accounts, and allow the volume of views to create a false impression of credibility.
Conclusion: Why This Disinformation Campaign Is Dangerous and Must Be Rejected
The harm caused by this category of AI-generated war disinformation extends far beyond mere factual error. When fabricated footage of Tel Aviv under missile bombardment accumulates hundreds of millions of views, it distorts public understanding of the actual military and humanitarian situation, undermines legitimate reporting, erodes trust in verified information, and, most critically, serves as a force-multiplier for Iran's psychological warfare objectives against Israel and Western audiences. It is designed to demoralize Israeli society, delegitimize Israel's defensive operations, and manufacture international sympathy for a regime that sponsors terrorism across the Middle East.
The antidote is rigorous source verification: no video circulating on social media should be accepted as combat evidence without corroboration from recognized open-source intelligence analysts, official military statements, or established journalistic verification units. The authentic record shows that Israel's air defenses and F-35 fleet remain operational, that no verified footage of Iranian battlefield victories over Israeli forces exists, and that the disinformation campaign itself is evidence of Iran's failure to achieve the military dominance it fabricates in pixels. Sharing or amplifying these videos — even skeptically — feeds the engagement algorithms that reward their creators and extends Iran's information warfare reach.