This is Part 1 in our series focused on tracking the development of the EU’s AI Act. The series will provide updates and commentary as the AI Act moves through the EU’s legislative process.
To set the scene, the EU Commission first adopted its Proposal for a Regulation laying down harmonised rules on artificial intelligence (the Proposed Act) back in April 2021. In many ways a groundbreaking piece of draft legislation, it was one of the first attempts to implement a regulatory framework specific to the use of artificial intelligence (AI) technology. This article aims to summarise what the proposed regulatory framework would look like and how thinking on certain aspects of the ultimate regulation is developing.
For context, the world’s leading AI companies are rapidly expanding their operations in the EU. Even in the short period since the Proposed Act’s publication, there have been considerable developments, both in how society engages with AI technology and in the EU’s thinking on how best to regulate the sector. The EU is somewhat ahead of the curve compared with jurisdictions like the UK or US, which are only beginning to seriously consider the appropriate legal mechanisms for regulating AI (although last week’s Bletchley Declaration in the UK and the recent US presidential Executive Order on AI are clear signals that AI is a legislative concern across the globe).
However, lawmakers are generally playing catch-up with the fast-paced development of innovations in the sector – a common issue in the regulation of new technologies.
Development of the Regulation to Date
Following the adoption of the Proposed Act by the Commission, the Council adopted its position in December 2022, proposing a number of amendments (the Council Draft).
Discussions on the Proposed Act in the European Parliament (EP) were particularly protracted, so it wasn’t until June 2023 that the EP adopted its position on the Proposed Act, proposing a separate set of amendments (the EP Draft).
The EP, the Council and the Commission are currently engaging in a series of closed-door, informal trilogue negotiations, in which the amendments proposed in each draft will be considered, with the aim of producing a final text of the regulation by the end of this year or early next year. Reporting suggests that negotiations have been difficult, as the institutions try to strike a balance between adequately protecting the fundamental rights which could be impacted by AI and allowing sufficient space for innovation and development of the technology, maximising its commercial and social benefits.
System of Regulation
The Proposed Act seeks to regulate AI systems through a risk-based approach. It establishes a tiered scheme of obligations based on the perceived level of risk attached to an AI process – the higher the perceived risk, the more intensive the obligations, with certain uses of AI systems being prohibited entirely. The Annexes to the Proposed Act set out the AI practices which will be considered “high risk”.
The relevant obligations extend to a broad range of aspects of the provision of AI systems, including requirements concerning:
- risk management systems
- data and data governance
- technical documentation and record keeping
- transparency and provision of information to users
- human oversight
- accuracy and robustness
The Proposed Act will apply to a number of actors along the chain of production and implementation of AI systems, including developers, users, end-product manufacturers, importers and distributors.
Compliance with the regulatory framework will be assessed by accredited third-party “conformity assessment bodies”, which will in turn be designated and monitored by Member State authorities. The Commission will be responsible for ensuring appropriate coordination and cooperation between the conformity assessment bodies. A new European Artificial Intelligence Board will also be established, responsible for issuing general recommendations and opinions.
The maximum fine for the most serious failures of compliance provided under the Proposed Act is 6% of the relevant entity’s global annual turnover, with a sliding scale depending on the size of the entity and the type of infraction.
Ongoing Points of Negotiation
As already noted, the provisions of the Proposed Act are still being negotiated and have been the subject of significant proposed amendments by both the EP and the Council. We set out the key ongoing points of negotiation below.
1. The Scope of “AI”
In its original draft, the Proposed Act focused on AI systems with a specific purpose. These were the AI systems at the forefront of public discourse at the time of drafting – systems like AI driving technology or AI medical diagnosis techniques.
Of course, the most significant development in this space since the Commission’s original draft has been the widespread adoption of “general purpose”, generative AI technologies trained on massive datasets, such as ChatGPT. This technological and social development, and its increasing adoption, has been a significant cause of the delays in the legislative process, as the institutions responded to greater public awareness of the potential of AI technology and concern about its capacity for harm and misuse.
Both the Council Draft and the EP Draft have expanded the proposed definition of “AI” to explicitly reference generative AI.
The EP Draft, building on the Council position, creates a new category of AI technology, calling it “foundation models”. A “foundation model” is defined as meaning:
“an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”.
Under the EP Draft, providers of foundation models would be subject to certain new obligations specific to that categorisation, similar to the “high risk” obligations, including obligations to:
- demonstrate the identification, reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law
- train the foundation model only on datasets which are subject to appropriate data governance measures (in particular for suitability and to mitigate biases)
- register the foundation model in an EU database
“Generative AI” models are defined as a subcategory of foundation models, to which an even stricter set of new obligations will apply.
The EP Draft also provides a definition of a “general purpose AI system” describing it as:
“an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”.
The amended recitals explain the approach of the EP Draft: a general purpose AI system can be an implementation, or reuse, of a foundation model. More stringent obligations would apply to the development of foundation models because of the central and fundamental role they play, as a foundation on which further “downstream” uses can be built.
In Annex I of the Proposed Act, the text explicitly noted that the AI covered by the Act included not only machine learning techniques, but also more basic technologies such as logic- and knowledge-based deductive systems, statistical approaches and Bayesian estimation (e.g. basic chatbots providing responses based on simple decision trees). Interestingly, both the EP Draft and the Council Draft propose deleting this Annex. Both do, however, include references to such simpler technologies in proposed amendments to the Recitals – with the EP Draft noting that “simpler techniques such as knowledge based approaches, Bayesian estimation or decision-trees may also lead to legal gaps that need to be addressed by this Regulation, in particular when they are used in combination with machine learning approaches in hybrid systems”.
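To make the distinction concrete, the “simpler” end of that spectrum could be as basic as the following hypothetical decision-tree chatbot – a minimal sketch, with entirely invented questions and answers, that involves no machine learning at all and yet is arguably the kind of system the recitals contemplate:

```python
# Hypothetical illustration only: a "chatbot" driven by a fixed decision
# tree, with no machine learning involved. All questions, answers and
# names below are invented for the example.

DECISION_TREE = {
    "question": "Is your query about billing?",
    "yes": {"answer": "Please contact the billing team."},
    "no": {
        "question": "Is your query about a technical fault?",
        "yes": {"answer": "Please restart the device and try again."},
        "no": {"answer": "Please hold for a human agent."},
    },
}


def run_chatbot(node: dict) -> None:
    """Walk the tree, asking yes/no questions until an answer leaf is reached."""
    while "answer" not in node:
        reply = input(node["question"] + " (yes/no) ").strip().lower()
        # Treat anything other than an explicit "yes" as "no".
        node = node["yes"] if reply == "yes" else node["no"]
    print(node["answer"])


if __name__ == "__main__":
    run_chatbot(DECISION_TREE)
```

Whether systems of this simplicity fall within the final definition of “AI” is precisely one of the points the deletion of Annex I, and the amended recitals, leave to be settled.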
So it would seem that while the EU institutions still consider that the regulation needs to extend to these “simpler” forms of AI, there is a clear shift in focus towards primarily regulating complex, deep learning technologies.
2. Prohibited AI systems
Article 5 of the Proposed Act sets out the AI practices which are intended to be completely prohibited. This list includes certain practices concerned with:
- use of subliminal techniques to distort a person’s behaviour;
- exploiting vulnerabilities of specific groups due to protected characteristics like age or disability;
- certain uses by public authorities to evaluate trustworthiness of people; and
- certain uses of real-time remote biometric identification systems by law enforcement
The Council Draft proposes some limited amendments to this Article, mainly tweaking the rules concerning when law enforcement may avail of real-time remote biometric identification systems.
The EP Draft, however, proposes substantial amendments to Article 5. The EP Draft proposes extending the prohibition on real-time remote biometric identification systems to all uses in public spaces. It also proposes adding five additional prohibitions. These are:
- a prohibition on biometric categorisation systems that categorise natural persons according to sensitive or protected characteristics
- a prohibition on AI systems that predict and/or assess the risks of individuals (re)committing criminal offences
- a prohibition on AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
- a prohibition on AI systems that infer emotions of natural persons in the areas of law enforcement, border management, in the workplace and in educational institutions
- a prohibition on AI systems that analyse recorded footage of publicly accessible spaces through ‘post’ remote biometric identification systems, unless they are subject to a pre-judicial authorisation.
The use of biometric identification systems is likely to be a serious bone of contention in negotiations as some Member States see it as an important tool for the maintenance of safety and security, while critics worry about its potential impact on privacy rights.
3. High Risk AI systems
As noted above, the Annexes to the Proposed Act provide a list of AI practices that are to be considered “high risk” and accordingly are subject to greater obligations. Annex II provides a list of related EU legislation, generally concerning certain product safety standards (e.g. Directive 2009/48/EC on the safety of toys, Regulation (EU) 2016/426 on appliances burning gaseous fuels, Regulation (EU) 2017/745 on medical devices). Any AI system which is a product covered by such legislation, or which is intended to be used as a safety component of such a product, will be considered “high risk”.
Annex III provides a list of critical areas and use cases to be considered “high risk”, e.g. the use of AI systems for employee recruitment, the management of critical infrastructure, or the evaluation of access to public or private services.
The Proposed Act envisaged that inclusion in this list automatically rendered an AI system high risk. However, the EP Draft proposes adding an additional test: an AI system will only be considered high risk where:
- it falls under one or more of the listed categories; and
- it poses “a significant risk of harm” to the health, safety or fundamental rights of individuals or, in relation to AI systems for the management and operation of critical infrastructure like energy grids or water management systems, to the environment.
The EP Draft further proposes adding a number of new use cases to the high-risk category – including AI systems intended to be used to influence the outcome of an election, and AI systems used within the recommender systems of social media platforms designated as very large online platforms (VLOPs) under the DSA.
Next Steps
The negotiations on the final text of the ultimate regulation are ongoing. Reports suggest that progress has been slow, although this is perhaps not surprising given the difficult balance of interests which must be struck.
However, the regulation remains a priority for the EU and is currently expected to be fully adopted by the end of 2023. So stay tuned for Part 2 of this series, once further updates and developments are made public.