European Artificial Intelligence Act (cepPolicyBrief COM2021 206)
Whether in healthcare, work, consumption or media: artificial intelligence (AI) will change the lives of many people in various ways. The Commission therefore wants to create rules to protect the health, safety and fundamental rights of AI users. It wants to ban particularly dangerous AI systems. Other AI systems will be subject to obligations depending on their risk, or to voluntary codes of conduct. In some cases, there should be no obligations at all.
The Centrum für Europäische Politik (cep) has assessed the Brussels draft law. The Freiburg-based think tank sees the proposals as largely positive and considers them historically unique by global comparison. "The particularly strict obligations for high-risk AI systems are right and important, as these systems pose a greater risk. In addition, risk-dependent transparency obligations increase acceptance among the population," says cep economist Matthias Kullas, who wrote the study with cep lawyer Lukas Harta.
Kullas and Harta are critical of the low requirements for AI systems that categorise people by age, ethnic origin, or religious or sexual orientation. "Stricter requirements than a mere duty to inform must apply to these AI systems," the Freiburg researchers emphasise. In addition, they say, the regulation falls short in terms of data protection.
Kullas and Harta call on the EU to ban social scoring not only for public authorities but also for private providers. In China, AI systems already reward particularly loyal citizens with points, while others have points deducted and are thereby prevented from obtaining favourable loans, attending cultural events or advancing socially.