EU AI Act formally adopted by European Parliament

On 13 March 2024, the European Parliament formally adopted the text of the AI Act that had been provisionally agreed with the Council of the EU in December. Key aspects of the text, which was originally proposed by the Commission in 2021 and remains subject to final legal and linguistic revision, are highlighted below.

Definition of AI system

The Act regulates “AI systems”, defined as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. The Recitals to the Act clarify that rules-based systems are excluded. Certain exclusions apply for providers of AI systems that make them available for research or non-professional activities and for free/open-source AI.

Prohibited AI

Prohibited AI systems include: AI using subliminal/manipulative techniques, or that exploits vulnerabilities (e.g. due to age or disability), which materially distorts a person’s behaviour, causing (or being likely to cause) significant harm; biometric categorisation systems inferring sensitive information (e.g. race, political opinions, trade union membership etc.) (with an exception for law enforcement); social scoring resulting in detrimental/unfavourable treatment; assessing the risk of an individual committing a criminal offence based solely on profiling or personal traits/characteristics (save where there is concrete evidence against the relevant person); the creation of facial recognition databases through untargeted scraping of the internet or CCTV; and emotion recognition systems used in the workplace or educational institutions (unless used for safety reasons).

The use of real-time remote biometric identification systems in public places for law enforcement is prohibited save where it is strictly necessary to search for “specific” victims of abduction, human trafficking and sexual exploitation, or missing persons, in cases of an imminent threat to life or of a terrorist attack, or to search for criminal suspects, in each case subject to specific safeguards.

High-risk AI

There are two categories of high-risk AI systems: those intended to be used as a safety component in, or taking the form of, the regulated products listed in Annex II (including machinery, toys and lifts), and AI systems used in the areas listed in Annex III (including non-banned biometric identification, categorisation and emotion recognition, critical infrastructure, education/training, employment and law enforcement). Exceptions exist for Annex III AI systems that do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Presumably in recognition of the difficulties in classifying AI, the Commission must provide further guidelines on the classification of AI systems as high-risk no later than 18 months from the Act coming into force.

High-risk AI systems are subject to numerous obligations relating to transparency, risk management, data and data governance (including the detection and mitigation of possible biases), technical documentation, record keeping (including event logging), human oversight, accuracy, robustness, cybersecurity and the reporting of serious incidents. There is also an obligation to conduct a fundamental rights impact assessment in certain cases. Conformity assessment procedures will apply to Annex II AI systems, including certification and CE marking.

Transparency for certain AI systems

Importantly, providers of AI systems intended for direct interaction with natural persons, and of permitted emotion recognition and biometric categorisation systems, must ensure that users are informed they are interacting with an AI system unless this is obvious. Further disclosure requirements apply to AI systems generating synthetic audio, image, video or text content (i.e. generative AI), to AI-generated text relating to matters of public interest, and to deployers of AI systems that generate or manipulate image, audio or video content constituting a “deep fake” (AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful). However, where the deep fake forms part of an evidently artistic work or programme, the obligation is limited to disclosing the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

General-purpose AI

A general-purpose AI system (“GPAI”) is defined as an “AI system which is based on a general-purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems”. A general-purpose AI model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks…”. GPAI is subject to obligations relating to the provision of technical documentation (e.g. in relation to training and testing), information for downstream AI providers who want to integrate the GPAI into their own AI systems, and information about training data. GPAI with systemic risks (essentially, potentially high-impact negative effects on public health, safety etc.) is subject to further obligations, such as in relation to model evaluations, risk assessment and mitigation, incident response and reporting procedures, and cybersecurity.

The Recitals make it clear that the use of copyright-protected content requires the consent of the rightsholder unless an exception applies. Article 4 of the 2019 Copyright Directive requires Member States to provide an exception permitting the use of lawfully accessible copyright-protected content for text and data mining, unless rightsholders have expressly reserved their rights in an appropriate manner (e.g. in machine-readable form for online content). Where rights have been reserved, anyone wishing to copy or extract the relevant works will therefore need a licence. The Act provides that a copyright policy must be put in place to ensure that an opt-out from the text and data mining exception can be identified and respected, and there is an obligation to publish a detailed summary of the content used to train any GPAI.

Territorial Scope, Penalties, Timing

The Act extends to providers (including importers and distributors) in countries outside the EU that place AI systems or GPAI on the EU market or put AI systems into service in the EU, and to providers or deployers of AI systems outside the EU where the output produced by the system is used within the EU.

Member States may set penalties for breach, including fines capped at amounts ranging from €7.5m to €35m or 1% to 7% of worldwide annual turnover, whichever is higher. Individuals can lodge complaints with national competent authorities, but the Act does not provide for damages to be awarded. Individuals subject to certain decisions of high-risk AI systems can request a clear and meaningful explanation from the deployer.
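To illustrate the “whichever is higher” mechanic of the fine caps, the following is a minimal sketch; the function name and the example turnover figure are ours, and only the cap amounts and percentages come from the Act.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, worldwide_turnover_eur: float) -> float:
    """Illustrative only: the applicable cap is the fixed amount or the
    percentage of worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# Top tier (e.g. prohibited-AI breaches): up to €35m or 7% of turnover.
# For a hypothetical €2bn turnover, the 7% limb (€140m) exceeds €35m.
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```

For smaller undertakings the fixed amount will usually be the higher limb, so the cap does not shrink with turnover.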

Although the Act still needs to be formally endorsed by the Council, formal adoption by Parliament signals that the EU is on the verge of achieving what many have described as the world’s first comprehensive law on AI. Once it comes into force (currently anticipated in May), the provisions in respect of prohibited AI systems will apply within six months, the provisions on GPAI governance will apply within 12 months, and the remaining provisions will apply within 24 months, save in respect of high-risk AI under Annex II, which will apply within 36 months. High-risk AI already on the market when the Act comes into force will only be regulated if it undergoes significant changes, and GPAI models already on the market will be given an additional two years to comply.

The European Commission can issue several delegated acts (secondary legislation that supplements or amends certain non-essential elements of a parent EU act) under the Act such as the thresholds for GPAI models with systemic risk.
