How to Prevent the “Guernica of AI”
cepInput

Artificial Intelligence


Dr. Anselm Küsters, LL.M.
  • AI models are currently being deployed in Gaza, Iran and Ukraine with deadly consequences – in some cases without effective oversight.
  • Time pressure, automation bias, a lack of transparency and insufficient experience make human control an illusion.
  • The cep calls for binding standards and a European mandate for international agreements.

Artificial intelligence is no longer a vision of the future in modern warfare. Large language models (LLMs) and other AI systems are already being actively used in military decision-making in the Gaza Strip, Iran and Ukraine. Errors can have incalculable consequences. The Centre for European Policy (cep) warns of a dangerous loss of control.


The problem lies not only in the unpredictable technology itself, but also in how humans interact with it. Whilst humans are formally involved in decision-making processes in line with the ‘human-in-the-loop’ doctrine, in practice they are under immense time pressure, must review a large number of target suggestions and increasingly rely on the systems’ assessments. Automation bias, a lack of transparency and unclear responsibilities undermine control. “In many cases, operators have very little time to review an AI suggestion – often without being able to understand how the system arrived at its assessment or what unintended consequences it might have. Under these conditions, control quickly turns into dependence,” says cep AI expert Dr Anselm Küsters, who authored the study together with ethics expert Dr Niël Henk Conradie from RWTH Aachen University. The crucial factor, he says, is whether human control actually works under operational conditions. This requires binding, reliable and verifiable procedures.

The recent dispute between the US Department of Defence and the AI company Anthropic highlights the political dimension of the issue: safety safeguards are criticised as an obstacle to military efficiency. For the cep, this framing falls short. Conradie emphasises that the normative argument for binding standards and the strategic argument reinforce one another: “Avoiding AI-induced targeting errors is not only ethically imperative, but also operationally advantageous, as it conserves resources, maintains international credibility and reduces the risk of escalating errors”. Safeguards are therefore not an operational constraint; they also make military sense. To prevent a geopolitical ‘race to the bottom’, credible and binding mechanisms are needed, including behaviour-based disclosure requirements, compute limitations and reporting obligations for AI-related incidents. An EU or NATO standard for military AI that operationalises these requirements could serve as a multilateral negotiating offer before the window for orderly international agreements closes due to growing AI capabilities.

How to Prevent the “Guernica of AI” (cepInput, published 28 April 2026, PDF, 819 KB)