Facts & Myths
March 31, 2026

Myth

AI-authenticated video footage circulating on X and TikTok confirms that Iranian missile strikes destroyed multiple Israeli Air Force bases during Operation Roaring Lion, with Grok itself verifying the footage as genuine.

Fact

Operation Roaring Lion was a joint US-Israeli offensive campaign targeting Iranian military infrastructure launched on February 28, 2026 — not an Iranian strike on Israel. No credible evidence exists of multiple Israeli Air Force bases being destroyed, and Grok, a large language model chatbot, has no technical capability to authenticate video footage.

This claim is false on every substantive level and represents a textbook example of the AI-amplified wartime disinformation that has flooded social media platforms since the outbreak of the US-Israeli campaign against Iran. Operation Roaring Lion — Israel's designated name for its portion of the broader joint operation known as "Epic Fury" — was an offensive military campaign by Israel and the United States against Iran, beginning on February 28, 2026. It targeted Iranian missile infrastructure, Revolutionary Guard facilities, intelligence headquarters, and senior leadership assets across at least 14 Iranian cities. The claim inverts the operational reality entirely: Israel was the attacking force, not the target.

The Facts on Operation Roaring Lion

According to contemporaneous reporting from multiple outlets across the political spectrum — including The Guardian, Epoch Times, Fox News, and Newsmax — Operation Roaring Lion commenced with coordinated US and Israeli strikes on Iranian military sites. Israeli officials stated at the outset that the campaign was aimed at "degrading the regime's capabilities," and operations were confirmed to have targeted ballistic missile production sites, nuclear facilities, naval assets, and IRGC command structures. No verified report from any credible military, governmental, or journalistic source documents the destruction of a single Israeli Air Force base during this operation. Iran did conduct limited retaliatory strikes — including coordinated cluster bomb attacks with Hezbollah reported on March 11, 2026 — but these caused no documented destruction of Israeli air bases.

  • Israel's air force continued active offensive operations throughout the conflict, including sustained strikes deep inside Iran, which would have been impossible had multiple air bases been destroyed.
  • US forces consumed more than 150 THAAD interceptors — roughly a quarter of America's total stockpile — defending against Iranian ballistic missiles during earlier phases of the Israel-Iran conflict, demonstrating the robustness of the joint air defense architecture protecting Israeli territory.
  • Satellite imagery analyzed during and after Operation Roaring Lion showed heavy damage to Iranian government and military zones — not to Israeli air bases.

How AI Disinformation Exploited the Conflict

The claim that "Grok itself verified the footage as genuine" is a deliberate and dangerous misrepresentation of what AI chatbots can do. Grok is a large language model developed by xAI; it is a text-generation system, not a forensic video authentication tool. It has no ability to independently verify whether a piece of video footage depicts real events, when it was recorded, or whether it has been synthetically generated or manipulated. During the earlier June 2025 Israel-Iran conflict, the BBC's disinformation unit documented extensively how users on X were turning to Grok to "establish posts' veracity" — and found that in multiple cases Grok incorrectly affirmed AI-generated videos as genuine, a failure that propagandists then exploited by screenshotting those responses and recirculating them as "proof."

This exploitation of Grok's limitations is a deliberate disinformation tactic, not evidence of authentic verification. Propagandists manufacture the very "confirmation" they then cite. The Guardian's March 13, 2026 investigation into AI-generated imagery from the Iran war explicitly warned that fake footage — including fabricated scenes of US soldiers at gunpoint and destroyed military assets — was "popping up faster than they can be debunked," with AI tools being weaponized to generate, spread, and falsely legitimize the content simultaneously.

The Iranian Disinformation Playbook

Iran and its proxies have a well-documented history of fabricating or misrepresenting battlefield footage to project military strength and demoralize adversaries. This practice — amplified in the social media era — exploits the speed of virality against the slower pace of fact-checking. The regime and aligned networks have consistently circulated footage from unrelated conflicts, video game simulations, old Syrian war footage, and outright AI-generated content as supposed evidence of Iranian military successes against Israel and the United States. Claiming the destruction of Israeli Air Force bases serves a clear strategic purpose: to undermine Israeli deterrence, sow fear among the Israeli public, attract recruits, and manufacture a counter-narrative to the demonstrably successful US-Israeli campaign against Iran's military infrastructure.

The invocation of "AI authentication" as a legitimizing device is the newest layer of this playbook. By claiming that an AI — particularly one hosted on the same platform where the content circulates — has "verified" the footage, bad actors lend false institutional credibility to manufactured content. It is a form of epistemic laundering: using the language of technological authority to bypass critical scrutiny.

Why This Myth Is Dangerous

Fabricated claims of Israeli Air Force base destruction serve several harmful ends simultaneously. They demoralize Israeli and Western publics, spread strategically useful falsehoods about Iranian military capability, and erode trust in verified information at a moment of genuine regional crisis. When disinformation successfully impersonates authenticated intelligence, it distorts public understanding of the actual military balance, potentially influencing policy debates and public pressure on democratic governments. Accepting unverified social media footage — regardless of any chatbot's response — as evidence of battlefield outcomes is not just analytically negligent; it makes the public a vector for adversarial information warfare. The standard for assessing military claims is verified satellite imagery, official governmental or military statements, and reporting by credentialed journalists with direct access — not viral clips on TikTok endorsed by a language model.

#operation roaring lion #iran disinformation #ai-generated footage #grok misuse #israeli air force #information warfare #iran-israel war #social media propaganda #carlos