Beyond the AI Act: The AI Liability Directive & the Product Liability Directive

Introduction

Following the political agreement reached on the terms of the EU’s AI Act in December, the EU seems set to lead the way in adopting a novel framework to regulate the use and development of artificial intelligence (AI). Whilst the spotlight has largely focused on the AI Act, the EU’s AI regulatory framework will extend beyond that piece of legislation alone.

Back in September 2022, the EU Commission published a proposed package of additional regulatory measures to support the rollout of AI within Europe. This package comprised:

  1. the proposed AI Liability Directive (AILD) and 
  2. the proposed revisions to the Product Liability Directive (PLD). 1

These twin directives form part of the Commission’s regulatory framework on AI, in conjunction with the AI Act. 

Political agreement was reached on the PLD in December. Progress on the AILD has slowed in comparison, but given the current legislative and societal focus on AI technologies, it is likely to remain a legislative priority.

Regulatory Framework for AI

The AI Act is not a standalone piece of legislation, but should be seen in the wider context of the Commission’s approach to ensuring effective regulation of the emerging technology. The AI Act is, in part, a form of product safety legislation. As with all product safety legislation, it seeks to prevent and reduce risks by setting safety standards. One of its primary purposes is to minimise risks in relation to AI systems before they are placed on the market or deployed.

Product safety legislation is generally complemented by liability legislation. Not all risks arising from the use of a product can be prevented and when they materialise, liability legislation seeks to ensure that those harmed can be appropriately compensated. 

The Commission has identified a potential gap in our current framework of liability legislation where damage may arise through the use of AI systems and those suffering damage may not be effectively compensated. These difficulties arise due to the complex and opaque nature of the technology which does not fit neatly within our existing rules. 

To address this gap, the Commission proposed the AILD and the PLD. The goal of these new and revised measures is to ensure that individuals who suffer damage caused by the use of AI systems can be appropriately compensated. This is an important issue, as it will give end-users of AI systems greater confidence in using the technology, safe in the knowledge that they are entitled to certain baseline protections should harm occur. It also provides greater certainty for companies in the business of supplying or deploying AI systems by clarifying what liability risks they face.

Where a person’s injury is caused by a product, a number of different legal avenues may be available to them for claiming compensation. These range from contractual claims to claims for non-contractual civil liability (in Ireland, this would mean a tort law claim). Alternatively, the injured party may have recourse under strict liability regimes, such as the defective product liability regime. In developing the AILD and the PLD, the Commission has focused on (i) strict liability for defective products; and (ii) non-contractual civil liability claims.

Revised Products Liability Directive

The PLD was originally adopted in 1985 and implemented a strict liability regime for material damage suffered due to the use of defective products. Following an evaluation of the directive in 2018, the Commission found that whilst it was overall an effective instrument, it had several shortcomings, including the need to update it for the digital age. Following informal negotiations in December, the EU institutions arrived at a political agreement on a compromise text. The compromise text was recently approved by Parliament committees and is expected to be put before the Parliament for a vote in the next few weeks. If approved, it will then pass to the Council for approval and adoption.

The proposed reform of this directive is far reaching, extending beyond a mere update to account for AI technology. These changes include a general widening in scope of application, including a wider definition of product and liable parties. 

From an AI perspective, there are a few key proposed reforms to focus on.

The first is an explicit acknowledgement that AI systems fall within the scope of the PLD, by virtue of the inclusion of “software” within the definition of “product”. Under this widened scope, AI system providers will potentially be liable for any defective AI systems that are placed on the market. It also covers any AI systems that are integrated into products, blurring the traditional distinction between tangible and intangible products. Whilst the PLD will not apply to free and open-source software supplied outside the course of a commercial activity, manufacturers that integrate any free and open-source software into their products will be potentially liable for any defects that result.

The PLD provides for recovery of any material losses that result from a product defect, whilst compensation for non-material losses falls to the laws of each Member State. The concept of “damage” in the PLD is widened in several ways. Loss or corruption of data is now recoverable, although not if the data is used for professional purposes. The compromise text clarifies in the recitals that destruction or corruption of data does not automatically result in material loss, if an injured party is still able to retrieve the data at no cost. Damage to property used exclusively for professional purposes continues to be non-recoverable.

In the context of AI systems, this widened scope of damage will increase the potential liability of AI system providers. It also clarifies that certain types of damage, such as privacy infringements, will not in and of themselves trigger liability under the PLD. Interestingly, the addition of a new recital in the compromise text seems to widen the availability of the compensation right to indirect victims, who suffer damage as a result of the direct victim’s damage.

The concept of defectiveness itself under the PLD is also updated to account for AI systems. This can be seen by the addition of the self-learning nature of the product as a factor for courts to consider when assessing defectiveness. Products can also be considered “defective” on account of cybersecurity vulnerabilities, which will be particularly relevant in the context of the use and deployment of AI systems.

An updated recital in the compromise text outlines that manufacturers who design products with the ability to develop unexpected behaviour remain responsible for behaviour that causes damage. In the context of AI systems, this would indicate that the ability of a system to act unexpectedly will not be enough for a developer to excuse itself from liability. The actions or omissions of third parties will also not excuse AI system providers from liability where a defect exists in the product. This could be the case where a third party exploits a cybersecurity vulnerability in an AI system that results in damage suffered. Conversely, the liability of AI providers can be reduced or disallowed where the injured party themselves contributed to the damage, for example by failing to install updates or upgrades to the AI system.

The traditional defence under the PLD that the defect in the product arose after being placed on the market has a new carve-out to account for the fact that products can still be within the control of a manufacturer after being placed on the market. This means that AI system providers will not be able to rely on this defence if the defectiveness of the AI system is due to a particular aspect of the product which remained within their control, including the provision of software updates or upgrades, or any substantial modifications. 

The compromise text sets out a lead time of 24 months for Member States to implement the directive in national law, which means businesses and individuals should expect to have local laws implemented in 2026 (if the PLD enters into force this year). This lead time would align with that of the AI Act, also set at 24 months following its entry into force. The alignment means that those preparing for compliance with the AI Act should also consider how the PLD will interact alongside it, bearing in mind that the PLD will need to be transposed into the local laws of each Member State whereas the AI Act, as an EU Regulation, will apply directly.

AI Liability Directive

Under this proposal, the Commission has addressed an injured party’s ability to bring non-contractual, fault-based claims in Member States. In Ireland, this will impact tortious claims.

The Commission saw a need for setting minimum standards in relation to these rules, as there was potential for fragmentation between Member States which would create uncertainty for claimants. The net effect of the AILD is that it will be easier for parties to bring non-contractual fault-based claims where they have been injured by AI systems.

The AILD is short in length but significant in overall impact. It is limited to claims taken by parties where damage was either (i) caused by an AI system or (ii) caused by the failure of an AI system to produce a specific output. Looking to ensure consistency, its definitions are aligned with the AI Act. The AILD comprises two key procedural devices. 

The first relates to the ability of injured parties to access relevant evidence. Claimants are able to request national courts to order the disclosure or preservation of evidence from relevant parties, in the case of high-risk AI systems suspected of causing damage. Parties that do not comply with such an order are subject to a rebuttable presumption of non-compliance with a relevant duty of care, which makes things procedurally easier for the claimant and encourages relevant parties to comply with the orders.

The second procedural change which the AILD introduces will make it easier to prove a causal link between a relevant party’s fault and the output of an AI system, through the introduction of a series of rebuttable presumptions. 

The effect of these rules gives injured parties a procedural advantage in proving their case. However, they will still have to prove all the relevant substantive elements of their claim under the laws of each Member State.

Whilst the AILD was published as part of a package proposal alongside the PLD, the PLD has outpaced it in its development. Given the upcoming change in the EU’s legislative term, it seems unlikely the AILD will be agreed before the end of the current term this summer. It will be interesting to see whether, and to what extent, this legislative proposal will progress, in light of the agreed positions of the AI Act and the PLD. For now, businesses should focus on preparing to comply with the AI Act and understanding the heightened risk that will accompany providing AI systems under the PLD.

1.  References to the “PLD” in this article are references to the compromise text as linked.
