The war in Iran has added to concerns about the risks of using AI to select targets during armed conflict.
The US-Israeli war with Iran has amplified long-standing concerns over the adoption of AI-supported targeting in warfare.
These concerns came to the fore in the aftermath of the 28 February strike on Shajareh Tayyebeh girls’ school in Minab, southern Iran, which Iran says killed at least 168 people, most of whom were schoolchildren.
The Trump administration initially blamed Iran for the strike, though it did not provide any evidence. The US says it is now investigating the bombing. The Washington Post has reported that the school was on a US target list.
US Senate Democrats have written to Secretary of War Pete Hegseth seeking information about the attack, including clarification on any use of AI in target selection. So far there has been no confirmation of whether AI was used in planning or executing the strike on the school.
Admiral Brad Cooper, the US commander leading the war in Iran, has confirmed the use of ‘a variety of advanced AI tools’ to sift through large amounts of data in the conflict, without naming any tools in particular. He said these tools allowed leaders to make ‘smarter decisions faster than the enemy can react’ and cut processes that once took hours or days down to seconds. Admiral Cooper also stated: ‘Humans will always make final decisions on what to shoot and what not to shoot, and when to shoot.’
The war in Iran is not the first conflict to incorporate AI systems, but it signals that AI-supported targeting is becoming the norm in warfare. While militaries may embrace the potential for increased efficiency, significant risks remain.
Increasing military adoption
AI allows for the rapid processing and analysis of information from a variety of sources, along with customizable data access. Its adoption across the military domain has the potential to increase situational awareness, facilitate real-time information sharing and enable more informed decision-making in military operations.
A 2024 US Department of War release outlined how the AI-enabled Maven Smart System helps frontline soldiers identify and strike military targets, and assists chain-of-command approval for strikes. NATO also acquired a version of the Maven Smart System from Palantir in 2025. The US military is now reportedly using its own version of Maven to help provide targeting information for its military operations in Iran. But it is unclear exactly how and to what extent Maven and other AI tools are being used in Iran.
In the war in Ukraine, both sides are using AI for data processing and target selection. Ukraine’s deputy defence minister said last year that AI analyses more than 50,000 video streams from the front line each month, which helps to ‘quickly process this massive data, identify targets, and put them on a map.’
The New York Times has reported that Israel used AI as part of its process of identifying potential targets for air strikes targeting Hamas in Gaza. The IDF has said that ‘information systems are merely tools for analysts in the target identification process.’
Additional uses for AI technology across the military domain include training military personnel through virtual simulations, automating the scheduling of logistical supplies and identifying equipment maintenance needs via image-recognition systems. These are just some of the technology’s potential applications.
Risks
Many countries will want to invest in tools that give them an advantage over adversaries, in line with the search for asymmetry that has been a constant throughout the history of warfare. But the use of AI in complex, high-stakes environments such as armed conflict also comes with serious risks.
Part of the concern relates to the development of AI technology itself and how it could affect a system’s performance. For example, an AI model could be trained on faulty data, or on material that differs from what it encounters when deployed in the real world. This could lead it to generate inaccurate information or malfunction when used outside of the training environment.
Large language models work by predicting a sequence of words based on statistical probability: they will likely get it right most of the time, but they won’t get it right all of the time.
In practical terms, this means that basing decisions on AI-generated information carries an inherent element of risk and inaccuracy.
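To make this concrete, below is a minimal sketch, using an invented corpus and a hypothetical predict function, of how frequency-based next-word prediction returns the statistically most likely answer rather than a guaranteed-correct one. It is a toy stand-in for the vastly more complex models in question, not a depiction of any military system.

```python
# Toy illustration only: a bigram next-word predictor built from word
# frequencies in an invented training corpus.
from collections import Counter, defaultdict

corpus = ("the convoy moved north the convoy moved north "
          "the convoy moved south").split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word: str) -> str:
    """Return the statistically most likely next word."""
    return following[prev_word].most_common(1)[0][0]

# 'north' was more frequent in training, so the model predicts it even
# in a situation where 'south' happens to be the correct answer.
print(predict("moved"))  # -> north
```

The toy model is right whenever reality matches its training frequencies and confidently wrong whenever it does not; the same statistical logic, at vastly greater scale, underlies the risks described above.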
AI-supported targeting decisions are a high-risk case in point. If AI tools are being employed extensively to generate targets with minimal human oversight, it’s not difficult to imagine how errors could occur.
One core issue highlighted by the use of AI in war is the gap between what AI-enabled systems can do and the procedures or rules governing how humans use those systems.
What could be done better?
The Iran war suggests that AI tools are set to be increasingly used in armed conflict. While the laws of war apply to all conflicts, there is a growing debate about whether AI is introducing a new dimension that requires additional rules. For example, concerns have been raised over how AI reduces the space for human judgement required for international humanitarian law determinations.
A binding international framework is unlikely in the short term. Nevertheless, it is in militaries’ own interest to develop rules for using AI. This would help them mitigate the risk of over-relying on AI-supported targeting, reducing errors that lead to the wrong targets being hit and civilians being killed.
Forming these rules could involve building a shared understanding between states, militaries, technology companies, academia and civil society on how to enhance human-machine teaming – or how humans and AI systems can interact so that each delivers what it does best. Consideration should also be given to how to monitor AI malfunctions and foster system improvements in complex environments.
Passed in December 2025, the UN resolution on ‘Artificial intelligence in the military domain and its implications for international peace and security’ is a step in the right direction. The resolution encourages multilateral and multi-stakeholder discussions; a three-day multi-stakeholder meeting set for June 2026 should provide an opportunity for informal exchange and consultation on the use of AI in armed conflict.
These discussions provide avenues for confidence-building and could pave the way for shared best practice. Without this, the potential risks of using AI in conflict are grave.

