AI as Systemic Risk in a Polycrisis (cepAdhoc)


Whether protecting against credit card fraud, building climate models or deploying police forces: Artificial Intelligence (AI) is penetrating everyday life ever more deeply. The data it requires mostly stems from periods of relative stability and cannot readily be transferred to times of crisis. The Centre for European Policy (cep) sees this as an underestimated systemic risk and calls for rules.


"The use of AI can be very useful in crises. However, algorithms optimised with normal data can then unconsciously lead to wrong decisions. There is therefore a need for risk-sensitive rules for AI in crises, especially in increasingly automated environments," says cep digital expert Anselm Küsters, who has studied the Commission's latest regulatory approach.


Küsters cites algorithms for calculating the risk of credit card fraud during the COVID-19 pandemic as one example. "Suddenly, many people were shopping only online. AI-based tools were overwhelmed by this and unnecessarily blocked many transactions." A risk-based approach, as envisaged in the EU's AI Act, may therefore not be sufficient, he argues, because the dynamic risk of a system in crisis cannot be known in advance, especially in rapidly changing environments. "However, if one accepts the risk-based approach of the current draft, the dangers arising in times of polycrisis could be taken into account by classifying a higher proportion of AI-driven systems as crisis-sensitive and thus high-risk," the cep expert recommends. In addition, he stresses that AI audits must be carried out regularly, with sufficient staff and technical resources, without overburdening start-ups.
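What Küsters describes is, in technical terms, a distribution shift: a detector calibrated on pre-crisis behaviour treats the new, legitimate normal as an anomaly. The following Python sketch illustrates the effect with synthetic data and a simple z-score rule; the Poisson rates, the 3-sigma threshold and the flag helper are illustrative assumptions, not a description of any real fraud system.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Normal times" baseline: a customer's daily online purchases.
    # All figures are illustrative assumptions, not real data.
    history = rng.poisson(lam=1.0, size=10_000)   # roughly 1 online purchase per day
    mu, sigma = history.mean(), history.std()

    def flag(counts, z_threshold=3.0):
        """Flag days whose online activity is a statistical outlier
        relative to the pre-crisis baseline."""
        return np.abs(counts - mu) / sigma > z_threshold

    # Crisis: lockdowns shift legitimate shopping online (~5 purchases per day).
    crisis = rng.poisson(lam=5.0, size=10_000)

    print(f"flagged in normal times: {flag(history).mean():.1%}")   # well under 1%
    print(f"flagged during crisis:   {flag(crisis).mean():.1%}")    # a large share of days

Even though every crisis-era day in this toy model is legitimate, the rule blocks a large share of them, which is precisely the kind of over-blocking Küsters observed during the pandemic.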


According to Küsters, politicians, entrepreneurs and journalists who are enthusiastic about AI need to take better account of its potential for harm in a polycrisis.