Artificial Intelligence for Europe – Pillar 3: Legal and ethical rules for AI
cepPolicyBrief



Dr. Anja Hoffmann, LL.M. Eur.

The EU promotes the development and use of Artificial Intelligence (AI). In its AI strategy, the EU also addresses the associated challenges and risks and demands that AI be "trustworthy". AI should therefore be subject to appropriate legal norms and follow ethical rules.



From cep's point of view, creating trust in Artificial Intelligence among users and affected persons can promote its acceptance. However, a general duty to inform users how AI decisions can be corrected by a person goes too far. Since three AI-specific problems in the implementation of the GDPR are already foreseeable today, it is appropriate for the Commission to "pursue" how the GDPR is implemented in AI applications. The demand that AI should be "transparent" is, in the view of cep's experts, too vague. The ethical guidelines developed by an "expert committee" on behalf of the Commission can only be the starting point for a broad public ethical debate on AI in which all concerned parties are to be involved.

This cepPolicyBrief deals with the third main objective of the EU's AI strategy: to ensure an appropriate legal framework and ethical rules for AI. Two other cepPolicyBriefs relating to the first pillar (Investment in AI, cf. cepPolicyBrief 2019-10) and the second pillar (Adapting education and social systems, cf. cepPolicyBrief 2019-12) have already been published.
