Artificial Intelligence for Europe – Pillar 3: Legal and ethical rules for AI


The EU promotes the development and use of Artificial Intelligence (AI). In its AI strategy, the EU also addresses the associated challenges and risks and demands that AI be "trustworthy". AI should therefore be subject to appropriate legal norms and follow ethical rules.


From cep's point of view, building trust in Artificial Intelligence among users and affected persons can promote its acceptance. However, a general duty to inform users how AI decisions can be corrected by a person goes too far. Since three AI-specific problems in the implementation of the GDPR are already foreseeable today, it is appropriate for the Commission to "pursue" how the GDPR is implemented in AI applications. The demand that AI should be "transparent" is too vague in the view of cep's experts. The ethical guidelines, which were developed by an "expert committee" on behalf of the Commission, can only be the starting point for a broad public ethical debate on AI involving all affected parties.

This cepPolicyBrief deals with the third main objective of the EU's AI strategy, which is to ensure an appropriate legal framework and ethical rules for AI. Two other cepPolicyBriefs relating to the first pillar (Investment in AI, cf. cepPolicyBrief 2019-10) and the second pillar (Adapting education and social systems, cf. cepPolicyBrief 2019-12) have already been published.