December 9, 2019
At the request of the Commission, the Expert Group has examined, and published a report on, the different liability regimes across the EU Member States and considered whether they are effective in relation to emerging technologies, including AI and the Internet of Things (IoT). As these technologies are rolled out further, sufficient safeguards are needed to minimise the risk of harm they may cause and, where harm is caused, adequate liability regimes need to be in place to compensate victims.
The report notes that only the strict liability of producers of defective goods is harmonised at EU level. Nearly all other liability regimes are national and particular to each Member State.
The report concludes that the liability regimes currently in force do ensure at least basic protection of victims of damage caused by the operation of these new technologies. However, such regimes may not function as well as they should due to the characteristics of these technologies and their applications, such as their complexity, the fact that they are constantly modified through updates, their limited predictability, and their vulnerability to cybersecurity threats. Therefore, victims may find they cannot claim compensation even where it would be appropriate, and the allocation of liability may be unfair or inefficient. To rectify this, the report states, certain adjustments need to be made to EU and national liability regimes.
The report goes on to make recommendations as to how liability regimes should be designed and/or changed to cope with the challenges arising from emerging digital technologies. For example:
- a person operating a permissible technology that nevertheless carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation. Existing defences and statutory exceptions from strict liability may have to be reconsidered, in particular if they are tailored primarily to traditional notions of control by humans;
- where the service provider provides the necessary technical framework and has a higher degree of control than the owner or user of an actual product or service equipped with AI, this should be taken into account in determining who primarily operates the technology;
- a person using a technology that does not pose an increased risk of harm to others should still be required to abide by duties to properly select, operate, monitor and maintain the technology and should be liable for breach of such duties if at fault;
- a person using a technology that has a certain degree of autonomy should not be less accountable for any resulting harm than if the harm had been caused by a human auxiliary;
- manufacturers of products or digital content incorporating an emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product after it had been placed on the market, provided they are still in control of updates to the technology. A development risk defence should not apply;
- in situations where third parties are exposed to an increased risk of harm, compulsory liability insurance could give victims better access to compensation and protect potential tortfeasors against the risk of liability;
- as a general rule, the victim should continue to be required to prove damage. However, where a particular technology increases the difficulties of proving the existence of an element of liability beyond what can be reasonably expected, victims should be entitled to an easier standard of proof;
- emerging digital technologies should come with logging features, where appropriate in the circumstances, and failure to log, or to provide reasonable access to logged data, should result in a reversal of the burden of proof to avoid any detriment to the victim;
- the destruction of the victim’s data should be regarded as damage to be compensated under specific conditions; and
- it is not necessary to give devices or autonomous systems legal personality, as the harm these may cause can and should be attributed to existing persons or bodies.
The report notes that, given the diversity of the technologies and the range of risks they may pose, no single solution can cover the entire risk spectrum when considering how liability regimes should be adapted. It suggests that comparable risks should be addressed by similar liability regimes and that existing differences should ideally be eliminated. To read the report in full, click here.