European Data Protection Supervisor (“EDPS”) publishes Opinion on proposed Directive on AI liability rules (“AI Liability Act”)

The European Commission’s proposed AI Liability Act, published last year and currently moving through the legislative process, seeks to provide recourse for those harmed by AI by creating new rules for non-contractual, fault-based civil claims involving AI. It is intended to complement the proposal to extend the current EU Product Liability Directive, which governs no-fault claims for defective products, to cover software and services involving AI.

The EU’s proposed AI Act, also moving through the legislative process, sets out rules concerning the use of AI systems, defined to include software developed with machine learning, logic- and knowledge-based, or statistical approaches, which can generate outputs such as content, predictions and recommendations. Certain types of AI, such as those that can cause harm through the use of subliminal techniques, are banned. High-risk AI systems, such as those used for biometric identification, are permitted subject to strict requirements. Lower-risk AI systems are subject only to certain transparency requirements. The terms of the AI Act are still very much under discussion.

The AI Act seeks to ensure safety and protect fundamental rights in the use of AI, whereas the AI Liability Act establishes certain rules governing the claims that may be brought where damage nevertheless arises.

The AI Liability Act will apply where damage arises through someone’s fault from the output of an AI system (as defined under the AI Act), or from its failure to produce an output (for example, discrimination in a recruitment process involving AI). The fault in question could be that of a user, as well as of the provider, of an AI system. Member States must ensure that operators of high-risk AI systems (as defined under the AI Act) are required to disclose relevant evidence, on a claimant’s request, where such a system is suspected of having caused damage. This provision aims to make evidence-gathering easier for claimants.

It also proposes that Member State courts shall presume, for the purposes of applying liability rules to a claim for damages, a causal link between the fault of the defendant (the AI system provider or user) and the output of the AI system (or its failure to produce an output) where the claimant can show that: (1) the defendant failed to comply with a legal duty of care; (2) it is reasonably likely that the fault influenced the output (or output failure); and (3) the output (or output failure) caused the damage. In the case of a claim against a provider of high-risk AI, the condition at (1) can only be met by showing that the provider failed to comply with certain specific obligations under the AI Act, such as those relating to transparency and human oversight.

The EDPS, an independent body which advises on, and ensures that EU institutions respect, data protection rules, has published an own-initiative Opinion on the AI Liability Act. Amongst other things, it recommends that the obligations under the Act should not be limited to high-risk AI systems, since the damage caused by other types of AI system could be significant (e.g. a system used to make decisions on eligibility for home or liability insurance) and victims may face similar difficulties in obtaining evidence to substantiate their claims. It also recommends a provision that, where a court orders the disclosure of evidence, the information disclosed be provided in an intelligible and generally understandable form; if it is too technical, claimants may not be able to understand it. The Opinion further recommends that additional measures be considered to alleviate the burden of proof on victims of damage caused by AI systems: although safeguards are included, claimants must still prove the fault or negligence of the AI provider or user, which may be particularly difficult in the context of AI systems.