A European push toward safer use of artificial intelligence


– Written by Aurora Agostini and Marco Stefanini

On September 28, 2022, the European Commission proposed a directive (COM(2022) 496) aimed at adapting the rules provided by EU member states on non-contractual liability for damage caused by artificial intelligence systems (hereinafter, the “Directive”).

The Directive, following up on the European Parliament’s Resolution 2020/2014, is part of a broader effort to promote artificial intelligence and reduce the risks associated with certain uses of this technology.

Objectives of the proposal

Artificial intelligence (AI) is a rapidly evolving set of technologies able to integrate data, algorithms and processing power. Certain features of AI, such as complexity, autonomy and opacity (so-called black box effect), can make it difficult for those who suffer damage from the use of AI systems to identify the responsible person.

The Directive aims to introduce mechanisms that simplify, from a procedural standpoint, the burden of proof borne by parties injured by AI systems. In the Directive’s provisions, this simplification is achieved through the introduction of rebuttable presumptions (praesumptio iuris tantum), as well as disclosure mechanisms intended to rebalance the unfavorable position of the injured party in these circumstances. For this reason, the Directive does not affect the rules governing fundamental concepts of civil liability peculiar to individual national systems, such as damage and fault.

The Directive also aims to promote the harmonization of national regulations, with the intent of reducing legal fragmentation and uncertainty over how courts will interpret and apply the liability rules in force to cases involving AI. In this regard, the European legislature’s choice of a directive over other regulatory instruments appears consistent with this goal of harmonization and with the wish to give Member States the flexibility to integrate the measures into their respective liability regimes without excessive friction.

Scope and subject matter

The Directive applies to claims for compensation for damage caused by an AI system in the context of civil actions for non-contractual liability, where such actions are brought under fault-based liability regimes. The Directive therefore does not cover the rules applicable when the damage is of a different nature (e.g., pre-contractual, contractual, or arising from “social contact”, a doctrine recognized in some national systems).

The Directive introduces two simplification mechanisms: the first consists of a new right of access to evidence, giving the judicial authority the power to order the defendant, under certain conditions, to disclose evidence relating to a given AI system; the second consists of a rebuttable presumption (praesumptio iuris tantum) concerning the causal link between the defendant’s culpable conduct and the damage suffered by the potential plaintiff.

In this regard, it should be noted that, under the Directive, claims may be filed not only by the person injured by the AI system but also by a person subrogated to the rights of the injured party, such as an insurance company.

The disclosure order

Regarding the first of these mechanisms, Article 3 of the Directive provides that the court may order, at the plaintiff’s request, the disclosure of relevant evidence about specific high-risk AI systems suspected of having caused damage. A condition for the exercise of this power is that the plaintiff must first have requested such disclosure from certain persons expressly identified in the Directive, such as the provider of an AI system, a person subject to the provider’s obligations under Article 24 or Article 28(1) of the AI Act, or a user within the meaning of the AI Act, and have had that request refused.

It is also necessary that the plaintiff seeking such a disclosure order provide the court with facts and evidence sufficient to support the plausibility of the claim for damages (fumus boni iuris).

The disclosure order may be granted by the judicial authority only to the extent necessary to support the claim, and disclosure may also be requested by the defendant.

If the defendant fails to comply with such an order, the judicial authority may presume the defendant’s non-compliance with a relevant duty of care.

The rebuttable presumption of a causal link

As for the second mechanism, Article 4 introduces a rebuttable presumption (praesumptio iuris tantum) by which the judicial authority may presume the existence of a causal link between the defendant’s failure to comply with certain obligations, recalled below, and the output produced by the AI system, or the AI system’s failure to produce an output, that caused the damage.

For this presumption to operate, it is necessary that:

  • the plaintiff has proved the defendant’s fault under the rules of applicable national law; such fault consists, more specifically, in the non-compliance of the defendant’s conduct with a duty of care laid down by Union or national law and directly intended to protect against the damage that occurred;
  • based on the circumstances of the case, it can be considered reasonably likely that the defendant’s fault influenced the output produced by the AI system or the system’s failure to produce an output;
  • the plaintiff has demonstrated that said output (or its non-production) gave rise to the damage.

The Directive also distinguishes between high-risk and non-high-risk AI systems. In the case of high-risk AI systems, as defined by the AI Act, Article 4(4) of the Directive excludes the presumption of causation where the defendant demonstrates that the plaintiff can reasonably access sufficient evidence and expertise to prove the causal link. In the case of AI systems that are not high-risk, by contrast, Article 4(5) of the Directive makes the presumption of causation conditional on the court’s finding that, in view of the circumstances of the case, it would be unduly difficult for the plaintiff to prove the existence of the causal link between the defendant’s fault and the output of the AI system.

Adoption of the Directive and transposition by member states

The proposed EU legislation is intended to introduce an ad hoc set of rules that, while featuring elements suited to AI systems, remains rooted in the traditional liability systems of the member states.

In any case, the Directive remains subject to review and approval by the Council of the European Union and the European Parliament and, once adopted, must be transposed into national law by member states within two years.
