11.12.23

EU AI Act: A Milestone Met, But Key Challenges Remain in Standardisation and Competition

The European Union (EU)’s landmark agreement on Artificial Intelligence (AI) rules, reached after intense discussions on 9 December, marks a significant step towards shaping the emerging global governance of AI. This provisional agreement, emerging from marathon talks among the EU’s governing bodies, underscores the EU’s commitment to ensuring that AI systems used within its borders are safe, respect fundamental rights, and align with EU values. Originating from a proposal in 2021, the recent controversies and additions to the rulebook illustrate the challenge of legally addressing the dynamic field of AI. Now that the provisions have finally been agreed, there is justified hope that the EU can enjoy a first-mover advantage and set the global standard with the world’s first horizontal regulation on AI.


Central to the EU AI Act is a risk-based framework distinguishing between high-risk applications, such as AI in critical infrastructure that could put citizens’ lives at risk, and lower-risk ones, such as AI-enabled spam filters. Banned applications include manipulative AI techniques, e.g. in children’s toys, and social scoring. This nuanced European approach makes it possible, at least in theory, to reflect AI’s varied implications appropriately across different contexts and sectors. The most contentious issue during the Trilogue negotiations was the regulation of AI foundation models, such as those of Google and OpenAI: models pretrained to perform a wide range of tasks, from writing text in a specific tone to generating images and videos. These models are now subject to specific transparency obligations and to a stricter regime for high-impact models. This decision, influenced by the rapid proliferation of generative AI products such as ChatGPT, is to be applauded as an agile response by European policymakers to technological advancements.
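
The tiered logic can be made concrete with a minimal Python sketch; the tier names, duties, and use-case mapping below are illustrative simplifications drawn from the examples above, not the Act’s legal definitions:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified tiers mirroring the Act's risk-based framework (illustrative)."""
    PROHIBITED = "banned outright"
    HIGH_RISK = "strict conformity duties"
    MINIMAL_RISK = "largely unregulated"

# Hypothetical mapping for illustration only; the Act's actual classification
# rests on detailed legal criteria, not on use-case labels like these.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskTier.PROHIBITED,
    "manipulative techniques in children's toys": RiskTier.PROHIBITED,
    "AI in critical infrastructure": RiskTier.HIGH_RISK,
    "AI-enabled spam filter": RiskTier.MINIMAL_RISK,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```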


Importantly, the agreement recognises the potential systemic risks posed by high-impact general-purpose AI models. This is crucial in a landscape where technology giants have been rapidly developing AI products, often prioritising speed of innovation over risk assessment. Following ChatGPT’s success, Meta open-sourced Llama-2, Microsoft entered into an exclusive partnership with OpenAI and its GPT products, and Google released Bard in a hurry. In this context, “mandatory self-regulation through codes of conduct” – as envisaged by Germany, France, and Italy in their recent demands that had blocked the Trilogue negotiations – would not have been enough to ensure safe models, given the business pressure on these few dominant firms. The Act’s stricter regime for foundation models, together with its important carve-outs for open-source initiatives, promises a balanced approach that promotes innovation while safeguarding against the concentration of power. This is particularly welcome from an ordoliberal perspective, as liability is placed where it is effective: had the rules for developers of foundation models been removed, as recently demanded by industry, European SMEs and other downstream users of foundation models would implicitly have been expected to blindly vouch for the quality of the black boxes provided to them. Moreover, the Act’s exemption for AI systems used solely for research, innovation, and non-professional purposes, along with the establishment of AI regulatory sandboxes, illustrates the EU’s commitment to fostering innovation. Still, defining “high impact” foundation models by the amount of computation used for training could pose practical problems in the future, given the rapid evolution of AI technologies towards smaller, more efficient models, not least due to novel AI architectures and cheaper hardware that are already becoming available.
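
As a rough illustration of why a fixed quantitative threshold may age poorly, consider a sketch using the widely reported compute criterion from the provisional agreement (training compute above 10^25 floating-point operations); the constant, function name, and example figures are assumptions for illustration only:

```python
# Reported threshold from the provisional agreement: models trained with
# more than 1e25 floating-point operations (FLOPs) are presumed "high impact".
# The actual legal criteria are more detailed; this is an illustration only.
HIGH_IMPACT_FLOPS_THRESHOLD = 1e25

def is_presumed_high_impact(training_flops: float) -> bool:
    """Naive check of a model's training compute against the threshold."""
    return training_flops > HIGH_IMPACT_FLOPS_THRESHOLD

# A frontier-scale training run is caught by the threshold ...
print(is_presumed_high_impact(2e25))  # True
# ... while a smaller, more efficient model with comparable capabilities
# slips under it - the practical problem flagged above.
print(is_presumed_high_impact(2e24))  # False
```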


While the EU AI Act commendably focuses on product regulation and the correct use of AI, it does not directly address the emerging monopoly-like structure of the AI market. In addition to the mergers and partnerships between Big Tech firms and leading AI start-ups, the “market for foundation models trends towards consolidation”. To mitigate the problems stemming from this concentration, promoting open source is not sufficient; EU policymakers should also leverage the continent’s robust competition law to check new network effects and prevent anti-competitive practices by AI oligopolists. Moreover, the agreement foresees the creation of a novel AI Office within the Commission, tasked with overseeing advanced AI models and fostering standards and testing practices. Its staff should collaborate with competition policy experts in order to maintain a balanced and competitive AI ecosystem across the member states. Given the novel technologies addressed by the AI Act and the ambiguous legal terminology it employs, “standardisation is arguably where the real rule-making […] will occur”. Europe must therefore recognise the geopolitical potential of standards and ensure that its interests are well represented in the appropriate standard-setting organisations.


Looking ahead, the AI Act is set to come into effect two years after its formal adoption (shortened to six months for the bans). The fine structure, modelled on the EU’s other digital regulations such as the Digital Markets Act, is designed to be proportionate: fines amount to a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. This means €35 million or 7% for violations involving banned AI applications, €15 million or 3% for violations of the AI Act’s other obligations, and €7.5 million or 1.5% for the supply of incorrect information. Importantly, more proportionate caps apply to SMEs and start-ups. Despite inevitable trade-offs and abstraction, the EU AI Act is a bold and necessary step in the right direction. It showcases the EU’s ability to adapt to rapidly evolving technologies, balancing the need for innovation with the imperative to protect citizens and maintain fair markets. Still, key challenges remain in standardisation and competition.
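
A worked example of the “whichever is higher” rule makes the proportionality mechanism concrete; the percentage and fixed-amount figures come from the agreement as described above, while the company turnover figures are hypothetical:

```python
def eu_ai_act_fine(global_turnover_eur: float, pct: float, floor_eur: float) -> float:
    """Fine = the higher of a share of prior-year global turnover and a fixed amount."""
    return max(pct * global_turnover_eur, floor_eur)

# Hypothetical large firm with EUR 500bn global annual turnover:
turnover = 500e9
print(eu_ai_act_fine(turnover, 0.07, 35e6))    # banned applications: EUR 35bn (7% dominates)
print(eu_ai_act_fine(turnover, 0.03, 15e6))    # other obligations: EUR 15bn
print(eu_ai_act_fine(turnover, 0.015, 7.5e6))  # incorrect information: EUR 7.5bn

# Hypothetical SME with EUR 2m turnover: the fixed amount dominates instead,
# which is why the Act foresees more proportionate caps for SMEs and start-ups.
print(eu_ai_act_fine(2e6, 0.07, 35e6))         # EUR 35m under the general rule
```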