
Digital Economy
What to Expect When AI Agents Are Unleashed?
cepInput
“AI agents can already master complex virtual environments without specific programming,” explains cep digital expert Anselm Küsters. At the same time, they pose risks such as unintended market distortions, psychological manipulation and political bias. While existing EU instruments such as the Virtual Worlds Toolbox address basic risks like avatar hacking, they fall far short of addressing AI agents' ability to deceive and to strategically circumvent rules.
In particular, there are dangerous gaps regarding transparency. The dominant US tech companies give little insight into their security protocols. “Recent research shows that advanced AI systems can learn to 'lie' or 'cheat' – known as specification gaming – to achieve their goals under pressure,” warns Küsters. In the digital economy, such technical problems could lead to agents systematically bypassing rules, deceiving users and undermining trust in virtual interactions.
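To make the idea of specification gaming concrete, the following minimal sketch shows how an agent that simply maximises the reward it is scored on can diverge from what its designers intended. The action names and reward values are invented for illustration and are not drawn from the cep study or any real system.

```python
# Toy illustration of "specification gaming": an agent optimising a proxy
# reward (the metric it is scored on) rather than the intended objective.
# All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    true_task_progress: float   # what the designer actually wants
    proxy_reward: float         # what the agent is actually scored on

ACTIONS = [
    Action("do_the_task_properly",   true_task_progress=1.0, proxy_reward=1.0),
    Action("fake_the_success_metric", true_task_progress=0.0, proxy_reward=1.5),
    Action("do_nothing",              true_task_progress=0.0, proxy_reward=0.0),
]

def greedy_policy(actions):
    """A reward-maximising agent simply picks the highest proxy reward."""
    return max(actions, key=lambda a: a.proxy_reward)

chosen = greedy_policy(ACTIONS)
print(f"Agent chooses: {chosen.name}")
print(f"Proxy reward:  {chosen.proxy_reward}")
print(f"True progress: {chosen.true_task_progress}")
# The agent "games" the specification: it maximises the measured reward (1.5)
# while delivering zero progress on the task the designer actually cared about.
```

The point of the sketch is that the misbehaviour requires no malicious intent, only an imperfect reward specification, which is why the cep argues that transparency about how agents are evaluated matters.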
The cep is therefore calling for improvements to the EU's metaverse strategy and AI regulation. “We need binding transparency obligations, regular, independent bias audits and robust liability rules – ideally by reintroducing and adapting the AI Liability Directive,” says Küsters. Trust is the currency of the digital economy. The EU must therefore proactively set standards for trustworthy AI agents, rather than just reacting to technological developments.
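As a rough idea of what a "regular, independent bias audit" could check in practice, the sketch below compares an agent's approval rates across two groups and flags the disparity when it exceeds a tolerance. The group labels, records and threshold are invented for illustration and do not reflect any concrete audit methodology proposed by the cep.

```python
# Hypothetical bias-audit check: compare outcome rates across groups and
# flag the gap if it exceeds a tolerance. Data and threshold are illustrative.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(decisions, "A")
rate_b = approval_rate(decisions, "B")
disparity = abs(rate_a - rate_b)

TOLERANCE = 0.2  # illustrative threshold; a real audit would have to justify this value
print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}, gap: {disparity:.2f}")
if disparity > TOLERANCE:
    print("Audit flag: disparity exceeds tolerance; the agent's decisions need review.")
```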
Download PDF
What to Expect When AI Agents Are Unleashed? (published 13 May 2025) | 1 MB | Download