Technical Dossier

Published in Edition 16

Product liability in the digital world

Have you noticed how many colourful futuristic fantasies are materializing through technological development?

Digital transformation is sometimes described as the fourth industrial revolution. There have been many breakthroughs around data analytics, cloud services, connectivity of everything, ever faster wireless networks, and artificial intelligence (AI). The digitalization of factories, surgical procedures and even toasters is also raising questions around liability.

Who is liable if an e-toaster burns down a home? What happens if the AI operating on a patient makes an erroneous decision?

Products we use every day aren’t like they used to be. Everything is now labeled smart, able to analyze data and adjust its functions accordingly. These appliances are connected through the internet of things (IoT), which is itself a smart system that processes all the data and produces – well, we don’t always know exactly what surprising changes IoT or AI may produce in our appliances in return. Your toaster is probably not getting too smart soon, but the development will not stop here.

Europeans are used to being protected by national product liability laws based on the 1985 European Product Liability Directive (PLD). Under these regulations the producer of a product is liable for any bodily injury or property damage caused to consumers by a safety defect of a product put into circulation. The directive has worked surprisingly well, with only one change over the years, especially considering that the products used today are often full of new digital technology. The producer of a physical product, such as an electric drill, is not the only one in the value chain potentially adding risk. With new technologies, liability could also fall on providers of software, networks or IoT-based connected systems, on users of the new technologies or, indeed, on the creators of the AI technology used in those systems.

There are many definitions of AI. However, in essence these all reflect the ability of a program, system, device or robot to mimic human behaviour. AI does this through the utilization of diverse technologies and is also capable of learning and developing its skills independently, while performing tasks that are traditionally undertaken by human beings.

The EU has been working on a major overhaul of the PLD for a number of years, and a proposal for a new PLD - and a new Artificial Intelligence Civil Liability Directive (AILD) - was put forward in September 2022. In addition to responding to the development of new technology and network-based products, the proposals would also change the present requirement that the consumer has to prove both that a product was defective and that the defect caused the damage. Of course, liability only results in compensation after something has occurred. From a risk management perspective, the Commission is also proposing to revise product safety legislation, such as the Machinery Directive and the General Product Safety Directive, to take new technology into account.

The Commission’s proposal introduces provisions ensuring that there should always be a business or legal representative based in the EU that can be held liable for defective products bought directly from manufacturers outside of the EU.

Some of the main new features of the PLD are:

  • AI systems, software and AI-enabled goods are deemed “products”.
  • Data loss, often not considered property damage, will be compensable.
  • Products upgraded or modified after the original sale (in the circular economy) enjoy the same protection as the original products.
  • An EU representative is required so that compensation can be claimed for products obtained globally.
  • Manufacturers must disclose evidence in order to ease the burden of proof on consumers.
  • Failure to disclose evidence would lead to a presumption that the product is defective.
  • A 15-year limitation period for latent injuries instead of 10 years.
  • The state-of-the-art defence will, however, be the rule: member states may no longer derogate from the exemption afforded to manufacturers for scientifically and technically undiscoverable defects.

Preparation for the AILD started as a separate process within the EC because it was seen as dealing with different liabilities from those in the PLD. AI systems were regarded as technology with a strong influence on society through the control and tracking of citizens, such as ethnic screening, the analysis of big data and automatic decision-making processes, and were therefore thought to require specific regulation to protect civil liberties. However, during the preparation of the two directives it became clear that there was substantial overlap in terms of liability for compensation and the use of AI in products. The initiatives are closely linked and form a single package in a holistic approach to AI.

AILD features:

  • Presumption of causality with stricter rules for high-risk AI systems (high risk to health, safety or fundamental rights).
  • Courts may order disclosure of relevant evidence related to the high-risk AI system.
  • AILD proposal to be added to the list set in Annex I of the EU’s Collective Redress Directive as an area where consumer representative actions must be implemented by member states.

Establishing a foundation for how liability will work with autonomous technology is a challenge. If a technology has, for example, been accepted for implementation into vehicles and, in addition, been approved to be used in a city centre, then who is ultimately liable in the event of an accident?

An insurer’s perspective


AI risks are present in many different insurance lines, and digitalized products and services bring several distinct exposures.

Product liability insurance is the obvious line to cover the liability risks, but other insurance lines may also be exposed directly or indirectly. Accident and health policies could be triggered. Homeowners’ policies are also the first line to cover any damage caused by consumer products. Is damage caused by AI accidental, as required by All Risks policies? The functionality of statutory Motor Third Party Liability insurance schemes will also be tested in the case of autonomous vehicles.

The complicated causal links, and the possibility of several parties contributing to one claim, add complexity from an insurance perspective and may make any one insurer reluctant to take the lead on such risks. An accident may be caused by a combination of different products, such as those driving the connecting technology, the cloud services and others.

This complexity is compounded by the fact that these are new risks, with insufficient statistical data available to fully meet the requirements for insurable risk. If new rules for allocating liability between producers, digital service providers and the owners and users of equipment are implemented, adjusting a claim may prove messy and expensive for the liability insurer of any one potentially liable party.

In business-to-business relationships product liability is based on contracts and sale-of-goods legislation and is as such not dependent on EU product liability legislation. However, in the case of bodily injury, product liability legislation applies regardless of whether the product is used in industry or in a domestic setting.



Next steps


The proposed directives have not yet been finalized. However, it seems clear that these reforms will materialize.

During the process, comments from different sectors, including Orgalim (representing Europe’s technology industries), the Federation of European Risk Management Associations (FERMA) and Insurance Europe, have been negative. These stakeholders have all reminded policymakers that existing liability regimes already apply to any kind of injury or property damage and that the new rules, being unclear, would only hamper new product development. They have also strongly rejected proposals for compulsory liability insurance requirements for AI.

Of course, consumer representatives such as The European Consumer Organization (BEUC) believe that there is a need to update EU product liability laws and to extend them to digital content, products and services.



Matti Sjögren


Casualty risk management specialist - If P&C Insurance

Matti Sjögren is a casualty risk management specialist at If P&C Insurance. He has over 36 years’ experience in commercial insurance and is an expert in claims, liability underwriting and liability risk management. Matti also holds a master’s in law from Helsinki University.