Beyond traditional misinformation, the current crisis leverages sophisticated AI to overwhelm crowdsourced moderation. Analysis of Community Notes data from X reveals that content flagged as AI-generated has reached its highest level on record, with the Iranian regime deploying deepfakes to offset its conventional military disadvantage by targeting the American public’s “America First” sensibilities.
On February 28, 2026, a joint U.S.-Israeli military campaign struck Iranian nuclear facilities, military infrastructure, and leadership targets in what was officially dubbed Operation Epic Fury. Social media was quickly flooded with false footage of the conflict, including massive explosions in Tel Aviv, successful Iranian missile strikes on U.S. warships, and satellite imagery purporting to show damage to American military bases in the Gulf.
Some of this footage was recycled from unrelated conflicts, including the war in Ukraine, and even from video games. Yet some of it was fabricated outright with now-ubiquitous generative artificial intelligence (AI) tools that can produce increasingly realistic content at scale. Several observers emphasized the unprecedented volume of AI-generated content and its growing sophistication.
While much has been written about the potential for AI-generated imagery, videos, and audio to flood the information ecosystem and make it increasingly difficult to parse what is true, AI content had previously made up only a small portion of the misleading content circulating across the web. During 2024, which was deemed “the year of the elections,” AI-generated content—while present—did not derail electoral processes around the world. And in the early days of the Israel-Hamas war, AI content was again present, but it represented just a small fraction of the overall misleading claims and recycled imagery circulating online. Does the ongoing conflict in Iran truly represent a significant leap in AI-generated imagery? And if so, what might explain such a meaningful shift?
What remains the same
The surge in false, misleading, and decontextualized content during a time of crisis is not new. In 2023, at the start of the Israel-Hamas war, false claims flooded the web due to a supply-and-demand gap for credible information, the financial incentives for virality, the consequences of platform policy changes, and the fragmentation of the spaces journalists and researchers once relied upon for on-the-ground insight.
Many of the dynamics that complicated the information ecosystem then remain present now. The lag between the demand for credible information about the conflict and its supply persists, and in this void, false content circulates. The financial incentives for going viral have also not changed. Revenue-sharing programs, including on the social platform X, pay users based on impressions generated by their posts, creating a direct monetary incentive to produce sensational content that goes viral. During the first two weeks of the Iran conflict, AI-generated videos depicting fabricated attacks garnered millions of views.1
Both X and Meta’s platforms have also leaned further into crowdsourced content moderation, which, in theory, is not a bad thing but becomes challenging in times of crisis. Unlike in-house teams, whose resources can be diverted or reallocated to address a surge at a particular moment, the crowd may grow in times of crisis but may not do so in a way that leads to more timely moderation. The algorithm that decides whether a note is helpful remains the same regardless of context. And the crowd may not have access to the specialized detection tools needed to reliably evaluate AI-generated content—if reliable evaluation is even possible at this point.
How big is the AI-generated content problem?
To explore the scale of generative AI usage in the Iran conflict, I evaluated Community Notes data from X, which continues to be a highly visible tool for moderating content across the platform.2 In this program, which couples crowdsourced context with community consensus, X users are responsible for providing additional information about posts, including flagging posts that may be misleading, decontextualized, false, or—importantly—AI generated. Participants in the program can then rate this added context, and if enough participants from “diverse perspectives”—defined based on an algorithm that infers viewpoints based on how contributors have rated other notes—agree that the addition is helpful, it is appended to the post. Users can add context to any post, but due to the consensus-based nature of the initiative, only a small fraction of those notes will become publicly visible. Importantly, all Community Notes data is made public, which provides a window into the scope of contested content across the platform.
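To make that consensus mechanism concrete, the sketch below illustrates the matrix-factorization idea described in X’s open-source scoring documentation: each rating is modeled as a shared baseline plus user and note intercepts plus a latent “viewpoint” term, and a note only earns a helpful status if its intercept stays high after the viewpoint factor absorbs one-sided agreement. Everything here is synthetic and illustrative: the data, the hyperparameters, and the threshold are stand-ins, not X’s production values.

```python
# Stylized sketch of "diverse perspectives" scoring via matrix factorization.
# Synthetic data and illustrative hyperparameters; not X's production system.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_notes, k = 40, 8, 1

# Simulate two loose ideological camps, note slant, and genuine note quality.
true_user_view = rng.choice([-1.0, 1.0], size=n_users)
true_note_lean = rng.normal(size=n_notes)
true_quality = rng.normal(size=n_notes)
logits = true_quality[None, :] + true_user_view[:, None] * true_note_lean[None, :]
noisy = logits + rng.normal(scale=0.3, size=logits.shape)
# 1 = rated helpful, 0 = rated not helpful, NaN = unrated (~50% observed).
ratings = np.where(rng.random((n_users, n_notes)) < 0.5,
                   (noisy > 0).astype(float), np.nan)

# Fit rating[u, n] ~ mu + b_u + b_n + f_u . g_n by gradient descent.
mu = np.nanmean(ratings)
b_u, b_n = np.zeros(n_users), np.zeros(n_notes)
f_u = 0.1 * rng.normal(size=(n_users, k))
g_n = 0.1 * rng.normal(size=(n_notes, k))
obs = ~np.isnan(ratings)
lr, reg = 0.02, 0.03
for _ in range(3000):
    pred = mu + b_u[:, None] + b_n[None, :] + f_u @ g_n.T
    err = np.where(obs, ratings - pred, 0.0)
    b_u += lr * (err.sum(axis=1) - reg * b_u)
    b_n += lr * (err.sum(axis=0) - reg * b_n)
    f_u += lr * (err @ g_n - reg * f_u)
    g_n += lr * (err.T @ f_u - reg * g_n)

# A note reaches consensus only if quality shared across viewpoints (the
# intercept) clears a threshold; the factor term soaks up one-sided praise.
HELPFUL_THRESHOLD = 0.4  # illustrative cutoff
for n in range(n_notes):
    verdict = "HELPFUL" if b_n[n] >= HELPFUL_THRESHOLD else "NEEDS MORE RATINGS"
    print(f"note {n}: intercept={b_n[n]:+.2f}, lean={g_n[n, 0]:+.2f} -> {verdict}")
```

In this toy setup, a note praised by only one camp ends up with a large lean and a small intercept, so it stalls at “needs more ratings.” That dynamic is exactly why, as noted above, only a small fraction of notes ever become publicly visible.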
Drawing on this public data, I find that, in recent weeks, contested content flagged on X as leveraging generative AI has surged to its highest point since generative AI tools became widely available, with more than 5,000 notes referencing AI-generated content since the start of the conflict (Figure 1).3 This follows a steady rise over the past two months in notes that reference AI-related terms (including surges in the aftermath of the Bondi Beach terrorist attack and U.S. Immigration and Customs Enforcement-related violence in Minnesota). Importantly, this still represents a small percentage of the overall contested information ecosystem on X, and a note referencing AI does not mean that AI was in fact used. It is also likely that the Liar’s Dividend, where true content is dismissed as AI-generated because it is uncomfortable or inconvenient, is in play. Nevertheless, the data demonstrate the increased prevalence of AI as a complicating factor in an already muddled information ecosystem—a trend that is unlikely to recede as these tools become easier to use and more embedded in day-to-day life.
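For readers who want to replicate this kind of count, the sketch below shows one plausible way to tally AI-referencing notes from the public Community Notes export. It assumes a notes TSV file (here “notes-00000.tsv”) with createdAtMillis and summary columns; the file name and keyword list are illustrative assumptions, not the exact methodology behind Figure 1.

```python
# Hedged sketch: weekly counts of Community Notes whose text references AI.
# File name, column names, and keyword list are assumptions for illustration.
import pandas as pd

AI_TERMS = ["ai-generated", "ai generated", "deepfake", "midjourney",
            "sora", "synthetic media", "generated by ai"]  # illustrative list

notes = pd.read_csv("notes-00000.tsv", sep="\t",
                    usecols=["noteId", "createdAtMillis", "summary"])
notes["created"] = pd.to_datetime(notes["createdAtMillis"], unit="ms")

# Flag notes whose free-text summary mentions any AI-related term.
text = notes["summary"].fillna("").str.lower()
notes["mentions_ai"] = text.str.contains("|".join(AI_TERMS), regex=True)

# Weekly count and share of AI-referencing notes.
weekly = notes.set_index("created").resample("W")["mentions_ai"].agg(["sum", "mean"])
weekly.columns = ["ai_notes", "ai_share"]
print(weekly.tail(12))
```

Plotted over time, the weekly ai_notes series would approximate the kind of trend summarized in Figure 1, with the caveat that keyword matching flags notes about AI, not confirmed AI usage.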
The information layering problem
Several developments distinguish the current moment from previous information crises and may contribute to the growing prevalence of generative AI in the contested information ecosystem moving forward. Perhaps most noticeably, the technology itself has improved dramatically, and it is now complemented by AI-powered chatbots embedded into search results and platforms like X. These chatbots have become new layers in the information ecosystem. Despite improvements, they may still struggle to keep up with real-time developments, yet they are increasingly turned to as a first source of information. In the current conflict, Grok, X’s chatbot, has flagged videos of Israeli Prime Minister Benjamin Netanyahu as deepfakes, a determination that has generated its own wave of confusion and fueled rumors about his whereabouts.
To add to the complexity, AI detection is itself becoming an increasingly fraught space. Detecting AI-generated content relies on tools of variable quality that often demand technical sophistication to interpret. “The crowd” may not be equipped to use them with the skepticism required to extract useful insights. In the context of the ongoing conflict in Iran, several researchers have documented how open-source investigators have leveraged purported AI detection capabilities to dismiss authentic images as fake, further increasing uncertainty about what is true.
This is particularly troubling in a context where platforms are increasingly leaning on the community to evaluate and flag misleading content. At the start of the Israel-Hamas war in late 2023, X was the primary platform to leverage consensus-based community moderation. Since then, Meta has increasingly adopted this approach across Facebook, Instagram, and Threads as it has simultaneously scaled back its investment in fact-checking. While community-driven moderation can offer a vital pathway toward fostering trust, it also faces significant challenges, particularly in times of crisis when there is surging demand for information. With more in-house capabilities, a company might divert resources to where the need is particularly acute. However, if a company relies on the crowd, these levers become increasingly difficult to pull.
Figure 2 shows that the percentage of notes the community ultimately rated “helpful” has declined over time, even as the percentage of notes referencing AI-generated content has grown significantly since the 2024 U.S. election, when it represented just 1.5% of all contested content. For the past two months, it has hovered around 7%-8%, meaning roughly one in every 12 notes written on X references AI generation. Against this backdrop, the system’s ability to attach notes to posts as additional context remains constrained by the size and engagement of its volunteer contributor base (and its ability to reach meaningful consensus) rather than by the volume of content that needs review. On X, and likely on other platforms that have newly adopted community-based contributions, moderation may be unable to scale with the amount of misleading content during a crisis because the platform cannot control who shows up to write and rate notes on any given day.
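Both series in Figure 2 can be approximated from the same public export, joined against the note status file. The sketch below assumes a “noteStatusHistory-00000.tsv” file with a currentStatus column whose helpful value is “CURRENTLY_RATED_HELPFUL”; treat the file layout and column names as assumptions rather than a guaranteed schema.

```python
# Hedged sketch of the two Figure 2 series: monthly share of AI-referencing
# notes vs. share of notes ultimately rated helpful. Schema is assumed.
import pandas as pd

notes = pd.read_csv("notes-00000.tsv", sep="\t",
                    usecols=["noteId", "createdAtMillis", "summary"])
notes["created"] = pd.to_datetime(notes["createdAtMillis"], unit="ms")
notes["mentions_ai"] = notes["summary"].fillna("").str.lower().str.contains(
    "ai-generated|ai generated|deepfake|generated by ai")  # illustrative terms

status = pd.read_csv("noteStatusHistory-00000.tsv", sep="\t",
                     usecols=["noteId", "currentStatus"])
merged = notes.merge(status, on="noteId", how="left")
merged["helpful"] = merged["currentStatus"].eq("CURRENTLY_RATED_HELPFUL")

# Monthly means of the two boolean flags give the two percentage series.
monthly = merged.set_index("created").resample("MS")[["mentions_ai", "helpful"]].mean()
print((monthly * 100).round(1).tail(12))
```

If the schema assumptions hold, the mentions_ai column is where the rise from roughly 1.5% to 7%-8% would appear, while the helpful column traces the declining consensus rate.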
Deepfakes as a weapon of war
Although the rise of AI-generated content across X is part of a broader trend over the past few months, the current conflict is notable in one respect: AI-generated content has emerged as an extension of the war effort itself. For years, researchers have recognized the potential for AI-generated content to be wielded as part of wartime efforts to stoke confusion, discredit leaders, undermine popular support, and polarize society, among other applications.
In the ongoing conflict, the multidirectional nature of AI-enabled information warfare has complicated the information environment for all parties involved. However, the use of AI-generated imagery and videos—both overt and covert—may be particularly important for the Iranian regime, which is at a conventional military disadvantage against the combined forces of the United States and Israel. Stoking chaos and undermining confidence in the United States’ military objectives may be fundamental to the regime’s survival. It is unsurprising, then, that Iran has turned to generative AI to accelerate its existing information warfare playbook. One recent report traced a coordinated deepfake campaign, marked by identical videos and captions, synchronized posting windows, and hashtag clusters, to the Iranian regime. This follows the regime’s past experiments with generative AI to sway public opinion, including around U.S. elections.
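The coordination signals described in that report (identical captions and synchronized posting windows) can be illustrated with a simple heuristic. The sketch below assumes a hypothetical posts.csv with account, caption, and posted_at columns; the thresholds are illustrative, and the cited report’s actual methodology is not reproduced here.

```python
# Hedged sketch of two common coordination signals. Dataset, column names,
# and cutoffs are hypothetical stand-ins for illustration only.
import pandas as pd

posts = pd.read_csv("posts.csv", parse_dates=["posted_at"])

# Signal 1: a verbatim caption shared by many distinct accounts.
accounts_per_caption = posts.groupby("caption")["account"].nunique()
copied = accounts_per_caption[accounts_per_caption >= 10]  # illustrative cutoff

# Signal 2: those shared captions published within a tight time window.
for caption in copied.index:
    times = posts.loc[posts["caption"] == caption, "posted_at"].sort_values()
    window = times.max() - times.min()
    if window <= pd.Timedelta(minutes=30):  # illustrative burst window
        print(f"{copied[caption]} accounts posted one caption within {window}")
```

Real attribution requires far more than these two signals, but they illustrate why identical captions and tight posting windows are treated as red flags for coordinated campaigns.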
In the current conflict, the online disorder, which makes heavy use of AI content but is not entirely driven by it, may be designed to help the Iranian government outlast the Trump administration’s willingness to fight. Part of President Donald Trump’s appeal during the 2024 presidential campaign stemmed from his promise to put “America First” and end the “forever wars” of the past. Many of his supporters framed the election as a choice between World War III and cheaper food and gas. Although his base remains behind the war effort for now, there are already signs of fracturing. More broadly, the conflict is extremely polarizing, and a prolonged military campaign would run contrary to the President’s campaign promises.
In this context, much of the content pushed in support of Iran—and at times directly by the Iranian regime—is designed to project military strength Iran does not have and to exaggerate the damage inflicted on the United States and Israel. By flooding the information ecosystem, Tehran hopes to sow confusion and, perhaps most importantly, to accelerate public discontent and erode already teetering support for the U.S. military campaign. AI makes this cheaper, faster, and potentially more compelling than it was in the past.
Looking forward
The convergence of improved AI capabilities, a layered information ecosystem, further retrenchment in content moderation, and a uniquely motivated state actor has created conditions that challenge informed public discourse during armed conflict. Against this backdrop, the politicization of content moderation is colliding with an acute need for it as AI-generated content becomes an increasingly prevalent extension of armed conflict.
The Iran conflict demonstrates the concrete value of content moderation. At a minimum, such moderation should be structurally suited to flagging uncertainty and source reliability in crisis conditions, which arise precisely when public opinion is most malleable and most actively targeted by a foreign adversary. The conflict also underscores the importance of investing in reliable detection capabilities, particularly as models continue to improve, and of paying greater attention to authentication processes for digital content. The data provided by X’s Community Notes platform offers a glimpse into the scale of the challenge, which remains exceedingly difficult to quantify, but it also reveals the gap between what current systems can do and what the moment demands.

