EU AI Act update

The EU’s proposed AI Act, published in 2021, sets out new rules governing the use of AI systems. Certain types of AI, such as systems that can cause harm through the use of subliminal techniques, would be banned outright. High-risk AI systems, such as biometric identification systems, would be permitted subject to a number of requirements (risk management, transparency and so on). AI systems posing limited risk would be subject to certain transparency requirements; these include AI systems that could be used to generate “deep fakes”.

The AI Act is making its way through the EU legislative process. The three EU legislative bodies appear finally to have agreed on a definition of AI (“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments”) but, otherwise, the terms of the AI Act remain very much under discussion. According to recent reports, the Spanish presidency of the Council of Ministers has published discussion documents in preparation for the upcoming negotiation meetings with the Parliament and the Commission (the “trilogues”).

A key aspect of the discussions concerns the concepts of general-purpose AI (“GPAI”), generative AI and foundation models, none of which appeared in the Commission’s original proposal. The Council has proposed that GPAI (AI without a specific intended purpose), when integrated into high-risk AI, would be subject to some of the high-risk obligations, depending on certain factors. The Parliament proposed instead that foundation models (AI models trained on broad data at scale, designed for generality of output and adaptable to a wide range of tasks) would be subject to several new obligations, including risk-handling systems, the use of appropriate datasets, and quality and energy-efficiency standards. “Generative AI” (foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio or video) would be subject to further obligations, including a requirement to publicly document the use of training data protected by copyright.

According to reports, the presidency is now suggesting that foundation models (AI models capable of competently performing a wide range of distinctive tasks) should be subject to transparency requirements (e.g. documenting the modelling and training process) before launch, and that providers would have to supply certain types of information to downstream economic operators (presumably meaning deployers, distributors and importers) after launch. A new category of “very capable foundation models” would be subject to additional obligations, such as pre-launch risk-mitigation systems and external vetting and compliance controls by independent auditors. The suggestion is to establish, via implementing acts, benchmarks for assessing whether an AI system falls within these definitions; in the case of very capable foundation models, the Council suggests the benchmark could be based on the amount of compute or data used in training, or on the model’s impact on users. The third proposed category is GPAI systems built on foundation models and used “at scale” (this would include use by Very Large Online Platforms or Search Engines, defined under the Digital Services Act as having over 10,000 business users or 45 million end users in the EU). These would be subject to obligations relating to risk-mitigation systems and external vetting to uncover vulnerabilities.

On the issue that copyright may subsist in much of the material used to train AI models, the presidency proposes that foundation model providers must demonstrate that they have taken adequate measures to ensure their systems are trained in accordance with EU copyright law, including the right of copyright holders to opt out of certain exceptions to copyright (such as the text and data mining exception).

Other areas of the AI Act currently under discussion include real-time biometric identification systems used in publicly accessible spaces. The Commission proposed a ban subject to limited exceptions (e.g. searching for victims of abduction or preventing imminent terrorist threats), whereas the Parliament proposed a complete ban. The presidency has opted for the Commission’s approach while narrowing the exceptions.

The Commission proposed to permit emotion recognition (identifying or inferring emotions or intentions based on biometric data) and biometric categorisation (assigning natural persons to specific categories, such as sex, age, hair or eye colour, ethnic origin or sexual or political orientation, based on biometric data) subject to transparency requirements. The Parliament wanted to ban emotion recognition in law enforcement, border management, workplaces and educational institutions; the presidency has agreed to the ban in workplaces and educational institutions. The Parliament also wanted to ban biometric categorisation based on protected data such as religious beliefs, but the presidency has suggested a carve-out for law enforcement. Where not banned, the presidency proposes to classify emotion recognition and biometric categorisation as high-risk AI.

On the Parliament’s proposal that high-risk AI be subject to a fundamental rights impact assessment, the presidency has suggested that this be limited to use by public bodies. The presidency also proposes to move the energy consumption considerations that the Parliament had added to the definition of high-risk AI into the provisions dealing with technical standards, which AI providers may adopt voluntarily.

Reports suggest that, on the above topics, the Commission agrees with the presidency’s proposals. However, there is still a long way to go before the final wording of the law is agreed, and several technical meetings are already scheduled for the coming weeks.