INCEPTION IMPACT ASSESSMENT

Inception Impact Assessments aim to inform citizens and stakeholders about the Commission's plans in order to allow them to provide feedback on the intended initiative and to participate effectively in future consultation activities. Citizens and stakeholders are in particular invited to provide views on the Commission's understanding of the problem and possible solutions and to make available any relevant information that they may have, including on possible impacts of the different options.

Title of the initiative

Adapting liability rules to the digital age and circular economy

Lead DG (responsible unit)

GROW H2 (product liability) – JUST A2 (liability for AI)

Likely Type of initiative

Legislative, ordinary legislative procedure

Indicative Planning

Q4, 2021 – Q1, 2022

Additional Information

https://ec.europa.eu/growth/single-market/goods/free-movement-sectors/liability-defective-products_en

https://eur-lex.europa.eu/legal-content/en/TXT/?qid=1593079180383&uri=CELEX:52020DC0064

The Inception Impact Assessment is provided for information purposes only. It does not prejudge the final decision of the Commission on whether this initiative will be pursued or on its final content. All elements of the initiative described by the Inception Impact Assessment, including its timing, are subject to change.

A. Context, Problem Definition and Subsidiarity Check

Context

The transformation to the digital economy and society is changing the economic reality of the single market. Artificial Intelligence (AI) in particular is already benefiting our society and economy but also presents potential risks. Equally, the transition to a circular economy, in which it is increasingly possible to extend the life of materials and to upgrade and repair products and components, will benefit the environment. At the same time, it raises questions about liability for any subsequent damage.

While the overall objective of the EU safety framework is to ensure that all products and services, including those integrating emerging digital technologies like robotics, Internet of things (IoT) and AI, operate safely, reliably and consistently, the liability framework provides for remedies if damage nevertheless occurs. The objectives of the liability framework are to (i) provide legal certainty to industry about the risk they take in the course of their business, (ii) encourage the prevention of damage and (iii) ensure injured parties are compensated. Liability rules must strike a delicate balance between these objectives and promoting innovation.

The existing liability framework consists of the Product Liability Directive 85/374/EEC (the Directive) and national liability rules.

-National liability regimes regulate various liability claims for damages caused by products and services. Many claims are based on a liable person’s conduct (‘fault-based liability’), for example, a producer, service provider or individual user of a product. For various other claims, a person identified by law (usually the operator, user or owner, who benefits from an activity that exposes the public to a risk) is held liable independently of fault (‘strict liability’).

-The Directive harmonises one group of claims at EU level: claims against the producer for damage caused to a consumer due to the defectiveness of a product. The producer is strictly liable for damage caused by a defect in their product, provided that the injured party proves the damage, the defect and the causal link between the two. The Directive lays down one set of rules for a vast range of products, from raw materials to complex AI-driven devices. The Directive’s implementation is dependent on national rules, for example, rules on evidence and causation.

This initiative addresses shortcomings identified in the evaluation of the Directive (1) and the challenges that the circular economy and new emerging technologies present for the application of liability rules (see the White Paper on AI (2) and the accompanying Report on Liability for AI, IoT and Robotics (3)). With respect to AI in particular, this initiative is part of the Commission’s staged approach to developing an ecosystem of trust for AI and will complement the proposed regulation on a European approach for AI (Artificial Intelligence Act or AI Act) and revised safety legislation such as the Machinery Regulation and General Product Safety Directive.

Problem the initiative aims to tackle

1. Liability rules not fit for the digital age and circular economy:

a) Digital technologies

Certain features of digital technologies such as the intangibility of digital products, their dependence on data, their complexity, and connectivity, present challenges in applying liability rules. So do features specific to AI, such as autonomous behaviour, continuous adaptation, limited predictability and opacity. This creates legal uncertainty for businesses and may make it difficult for consumers and other injured parties to get compensation for damage caused by products and services that use these technologies.

I) The 2018 Evaluation of the Directive identified several shortcomings in relation to digital technologies in general.

-Intangibility of digital products: Digital content, software and data play a crucial role in the safe functioning of many products but it is not clear to what extent such intangible elements can be classified as products under the Directive. It is therefore unclear whether injured parties can always be compensated for damage caused by software, including software updates, and who will be liable for such damage. 

-Connectivity and cybersecurity: New technologies bring with them new risks, such as openness to data inputs that affect safety, cybersecurity risks, risks of damage to digital assets or privacy infringements. But the Directive provides for compensation only for physical or material damage and it is unclear if the notion of defect covers cyber vulnerabilities.

-Complexity: The complexity of digital technologies (e.g. within IoT systems) could make it very challenging for injured parties to identify the producer responsible.

II) Under the Directive, importers are treated as producers for product liability purposes. However, the digital age has brought changes to value chains too. The rise of online marketplaces has enabled consumers to buy products from outside the EU without there being an importer, leaving consumers with no liable person from whom to seek compensation under the Directive in the event of damage.

III) As regards AI specifically, the obligations laid down in the proposed AI Act on human oversight, transparency and information to users should make AI products and services safer, but the specific characteristics of AI nevertheless make it difficult to (i) get compensation for damage under the Directive and national civil liability rules and (ii) know one’s liability with sufficient certainty (see section 2 below). It is also uncertain whether and to what extent certain national ‘strict liability’ rules would be applied.

b) Circular economy

-Circular business models in which products are repaired, recycled, refurbished or upgraded are increasingly common and central to the EU’s efforts to achieve sustainability and waste-reduction goals. However, under the Directive, liability hinges on whether a product was defective at the moment it was put into circulation. The Evaluation found the Directive to be unclear about who should be liable for defects resulting from changes made to products after they are put into circulation. Further analysis of the extent of this problem is needed.

-Products can also cause environmental damage, yet this cannot be compensated under the Directive at present.

2. Significant obstacles to getting compensation and obstacles in the internal market

-The 2018 Evaluation of the Directive found that the complexity of certain products, for example, pharmaceuticals and products that use emerging digital technologies, makes it very difficult for injured parties to prove that a defect caused the damage they suffered and therefore to get compensation.

-Several specific characteristics of AI (autonomous behaviour, continuous adaptation, limited predictability and opacity) also make it difficult for injured parties to get compensation, both under the Directive and under national civil liability rules. AI systems can perform tasks with ever less human intervention and are in certain cases able to learn autonomously while in use. With certain opaque AI systems, how a given output is produced can be understood only to a limited extent. These features could make it very difficult and costly for injured parties to identify and prove the fault of a potentially liable person or a defect and the causal link between that fault/defect and the damage suffered. This heavy burden of proof is compounded by the fact that injured parties may not have sufficient technical information about AI products and services, placing them at a considerable disadvantage. National courts may apply diverging ad hoc solutions, e.g. by alleviating the burden of proof or developing extensive interpretations of strict liability regimes under national law. If Member States attempt to address the resulting legal uncertainty at national level, this could lead to further fragmentation of liability rules across the EU for damage caused by AI. Taking into account the specific features and the economic importance of AI as a crucial enabling technology, a lack of harmonised rules in this area could therefore lead to obstacles in the internal market (see further below).

-In respect of AI-equipped products that continuously learn and adapt while in operation, it is not clear whether unpredictable outcomes that lead to damage can be treated as ‘defects’ under the Directive. Even if they can be, the ‘development risk defence’ exempts producers from liability for defects that were undiscoverable when the product was put into circulation, which could make it difficult for injured parties to get compensation.

-Besides the obstacles to getting compensation once a claim has been submitted, injured parties seeking compensation under the Directive face limitations on bringing claims in the first place, owing to the time limits and the EUR 500 minimum threshold for property damage imposed by the Directive. These restrictions may excessively limit claims.

These problems have consequences for both businesses and consumers/injured parties.

Businesses: The legal uncertainty created by outdated and unclear EU and national liability rules, and by divergent national approaches, could leave producers, service providers and operators unable to assess the extent of their liability for products and services. This could create extra costs, stifle innovation and discourage investment, with a disproportionate impact on small and medium-sized enterprises (SMEs). If innovation in the circular economy were held back due to legal uncertainty, this could have an impact on the EU’s sustainability and waste-reduction goals.

Consumers/injured parties: Victims of harm caused by certain product types are experiencing difficulties getting compensation today and victims of harm caused by digital technologies, including AI, are likely to also experience unreasonable difficulties in the future. These victims would therefore have less protection compared to those who suffered damage caused by traditional technologies, which could undermine societal trust in and uptake of emerging technologies. The need to prevent possible liability gaps due to the specific challenges of AI is particularly urgent as regards products and services that expose the general public to the risk of harm to legal interests of high value, e.g. life, health and property.

Basis for EU intervention (legal basis and subsidiarity check)

This intervention is based on Art. 114 (TFEU) on the approximation of laws to ensure the internal market functions properly. This initiative aims to improve the functioning of the internal market while providing for a high level of consumer protection. It complements EU product safety legislation as well as the proposed AI Act. Member States acting individually would not be able to ensure the same objectives in a consistent manner.

EU liability rules harmonise only what is necessary and will continue to rely on national legal systems for many aspects of their functioning, such as rules on establishing proof. Any further harmonisation of national liability rules on AI would be carried out in a targeted manner, focusing only on those aspects that are challenged by the AI’s characteristics of autonomous behaviour, limited predictability, continuous adaptation and opacity. The measures would therefore respect the principle of subsidiarity.

B. Objectives and Policy options

Objectives of the initiative: The overall objective is to ensure the proper functioning of the internal market and a high level of consumer/victim protection. More specifically, Objective 1 is to modernise liability rules to take account of the characteristics and risks of new technologies and of new digital and circular business models, including AI-equipped products and services. This will give businesses the legal clarity they need to plan their investment, assess their liability exposure, insure themselves and place safe, innovative and sustainable products on the market. Objective 2 is to reduce obstacles to getting compensation for damage in order to (i) ensure that injured parties are equally protected throughout the EU and (ii) create trust in innovative products and services, and in justice systems, while promoting consumer uptake of innovative technologies, including AI. This initiative will take into account the proposed AI Act, including, in particular, (i) the definition of AI, (ii) the requirements of the AI Act and (iii) risk-related considerations, as well as other EU safety legislation to ensure consistency and complementarity, while providing effective and proportionate solutions for liability.

Baseline scenario: No changes to current liability rules. Legal uncertainty persists for businesses in respect of how liability rules will be applied in the digital age, including in respect of AI, and how they will be applied to the circular economy. Difficulties for injured parties to get compensation persist. Risk of fragmentation of the internal market because of emergence of diverging national rules.

Policy options: The preferred policy option will be a combination of options and sub-options under the two headings to address all identified problems.

1 – Options to adapt strict liability rules to the digital age and circular economy (4)

1.a – Revise the Directive to extend strict liability rules to cover intangible products (e.g. digital content/software) that cause physical/material damage, and to address (i) defects resulting from changes to products after they have been put into circulation (e.g. software updates or circular economy activities like product refurbishments), (ii) defects resulting from interactions with other products and services (e.g. IoT) and (iii) connectivity and cybersecurity risks. In addition, extend strict liability to online marketplaces where they fail to identify the producer.

1.b – As Option 1a, but extend the range of damages for which compensation can be claimed under the Directive to non-material damages (e.g. data loss, privacy infringements or environmental damage).

1.c – Harmonise the existing strict liability schemes that apply to operators/users of AI-equipped products and to providers of AI-based services (where injured parties only have to prove that the damage emanated from the sphere of the operator of the AI system). Following existing national models, the operator could be defined as a person, other than the producer, who is able to exercise a degree of control over the risks associated with the operation (such as owners and service providers). Alternatively, the strictly liable person could be identified by reference to the ‘user’ as defined in the proposed AI Act. The extent of harmonisation of strict liability can vary:

(i) Recommendation to Member States of a targeted and risk-based harmonisation of the strict liability of operators/users of AI-systems that enable products and services with a specific risk profile (such as those endangering the lives, health and property of members of the public), possibly coupled with an insurance obligation.

(ii) Targeted and risk-based harmonising legislative measure covering the same elements as 1.c (i). For those AI systems with a specific risk profile, this option would create a liability framework at EU level similar to the one that exists in almost all Member States’ legal systems for motor vehicle liability: strict liability of the producer for defects under the Directive as well as strict liability of the owner/operator.

(iii) Risk-based, but broader harmonisation of the operator’s/user’s strict liability, similar to 1.c (ii) but including additional aspects such as statutory limitation periods for lodging a claim and rules for joint liability, as envisaged by European Parliament (EP) Resolution 2020/2014(INL).

(iv) Strict liability of the operators/users of AI-systems in general (irrespective of their risk profile).

2 – Other options to address proof-related and procedural obstacles to getting compensation (5)

2.1. Options to reduce obstacles to getting compensation under the Directive

2.1.a – Alleviate the burden of proof by (i) obliging the producer to disclose technical information to the injured party and (ii) allowing courts to infer that a product is defective or caused the damage under certain circumstances, e.g. when other products in the same production series have already been proven to be defective or when a product clearly malfunctions.

2.1.b – Reverse the burden of proof. In the event of damage, the producer would have to prove the product was not defective.

2.1.c – In addition to option 2.1.a or 2.1.b, adapt the notion of ‘defect’ and the alleviation/reversal of burden of proof to the specific case of AI and remove the ‘development risk defence’ to ensure producers of products that continuously learn and adapt while in operation remain strictly liable for damage.

2.1.d – In combination with option 2.1.a, 2.1.b or 2.1.c, ease the conditions for making claims (time limits and EUR 500 minimum threshold for damage to property).

2.2. Options to address proof-related challenges posed by AI to national liability rules 

2.2.a - Recommendation to Member States of targeted adaptations to the burden of proof.

2.2.b - Legislative measure providing for a harmonised reversal of or other ways of alleviating burden of proof linked to non-compliance with AI-specific obligations in EU safety legislation (e.g. documentation or human oversight obligations under the proposed AI Act), in order to better enforce these obligations through civil liability claims and further promote compliance.

2.2.c - Legislative measure adapting the burden of proof where the claimant would otherwise be required to demonstrate how an opaque AI system produced a certain output that caused the damage.

2.2.d - Harmonisation of claims involving fault of the operator of AI systems without a specific risk profile, by introducing a reversed burden of proof regarding fault, as well as harmonisation of additional aspects such as the types of compensable harm, limitation periods and joint liability, as envisaged by EP resolution 2020/2014(INL).

C. Preliminary Assessment of Expected Impacts

Likely economic impacts

Providing clear rules on liability, by adapting the existing liability framework to new technologies and the circular economy, should give producers, service providers and operators certainty to assess and adequately insure their liabilities. This should create the investment stability needed to market innovative products and services. Providing harmonised rules should improve the way the internal market functions and bring cost savings, especially for SMEs and start-ups, as the need for businesses to assess liability risks separately for each targeted market, and the related legal uncertainty and costs, would be reduced.

Extending strict liability and alleviating the burden of proof would lead to more successful compensation claims and therefore shift the costs of the damage from the injured party to the responsible operator, borne either as insurance premiums or as compensation paid to the injured party. Insurance coverage would nevertheless allow the liable party to limit their costs to their annual premium. The mix of policy options will be chosen to avoid excessive liability rules that might lead to higher prices for consumers if these costs are passed on, or that might hold back innovation.

In the absence of adapted liability rules, injured parties would be left with the damage costs. Providing fair rules for consumers and victims of damage caused by new technologies, including AI products/services, and by products in the circular economy should stimulate trust and higher take-up of such products and services.

Likely social impacts

Clear liability rules encourage compliance with safety requirements and prevent excessive risk-taking by businesses. Appropriate liability rules allow costs to be efficiently allocated, therefore improving competitiveness. Liability rules adapted to new technologies and the circular economy build trust by providing effective redress to injured parties.

Likely environmental impacts

Clearer liability rules for those involved in the circular economy would provide legal certainty for business models based on repaired, recycled, updated and upgraded products and for sustainable services like mobility services. This would help the EU achieve its sustainability and waste-reduction goals. Clear EU liability rules would allow faster and wider AI roll-out, so the expected environmental benefits in various sectors would be achieved more quickly, e.g. more efficient AI-based energy provision matched to real demand, or AI-based shared transport services.

Likely impacts on fundamental rights

Clear liability rules for damage caused by new technologies, including AI, will reinforce the right to an effective remedy and to equal treatment between injured parties, indirectly protecting people’s lives, health and property by promoting compliance with safety and cybersecurity requirements and with fundamental rights, including compliance with the proposed AI Act. The revision of the Directive, especially of the burden of proof, the EUR 500 threshold and the time limit releasing the producer from liability 10 years after the product was put into circulation, should facilitate the right to an effective remedy.

Likely impacts on simplification and/or administrative burden

The initiative aims to prevent regulatory costs from increasing due to legal fragmentation across the EU. Liability rules would be made easier to apply by national courts, increasing the overall efficiency of justice (lower costs and faster dispute resolution). The evaluation of the Directive found the current administrative burden to be very low, with no need for simplification. Adapting liability rules to the digital age and circular economy will not result in new information requirements or create administrative costs for businesses or consumers. The initiative is generally not expected to result in new direct adjustment costs for businesses. In certain cases, the possible harmonisation of insurance requirements under option 1.c may oblige previously uninsured AI operators to take out insurance, or may increase insurance costs compared to existing policies.

D. Evidence Base, Data collection and Better Regulation Instruments

Impact assessment

The Commission will take decisions after assessing the impacts of the policy options, taking into account also the findings of the 2018 evaluation of the Directive and various studies and consultations.

Evidence base and data collection

In line with the Better Regulation Guidelines, the evidence and data collected include:

-the Commission Report on AI liability and the report of the Expert group on Liability and New Technologies; 

-the fifth report COM(2018)246 on the application of the Directive and the evaluation of the Directive (SWD(2018)157);

-input from the Product Liability Formation of the Expert Group on Liability and New Technologies;

-impact assessment study on the possible revision of the Directive (work in progress);

-comparative law study on civil liability and AI, an economic study and a behavioural study on civil liability and AI, launched to support the IA process (to be published soon); 

-position papers and other documents drawn up by relevant stakeholders;

-data from public consultations, including on the AI White Paper, as well as targeted consultations and interviews;

-relevant studies published by the European Parliament Research Service, in particular the ‘European added value assessment on a Civil liability regime for artificial intelligence’.

Consultation of citizens and stakeholders

In the consultation that followed the White Paper on AI and the Report on Liability for AI (6), 60.7% of respondents supported a revision of the Directive, while 63% of respondents favoured adapting national liability rules, for all (47%) or specific AI applications (16%). In parallel, input was gathered in (i) 12 online webinars, (ii) bilateral webinars with European/national business and consumer umbrella associations, (iii) bilateral discussions with major companies, (iv) meetings with most Member States and (v) discussions in the multi-stakeholder forum of the AI Alliance Assembly.

Further input will be collected through:

-this inception impact assessment over a four‑week period;

-a 12-week public consultation;

-targeted consultations in the context of the IA-related studies;

-consultations with Member States, stakeholders and experts on AI liability and on the revision of the Directive.

In all of these activities, particular consideration will be given to (i) SMEs, consumers and other individuals likely to be affected, (ii) the relevant European organisations representing both businesses and consumers and (iii) national civil society organisations active in the justice field.

Will an implementation plan be established?

If legislative options are selected, implementation plans will help Member States apply the new legislation consistently and effectively, in particular during the transposition period. These plans could include specific information activities, networks for the exchange of information and best practice on transposition, or bilateral/multilateral meetings with Member States.

(1) COM(2018) 246 final.
(2) European Commission, White Paper on Artificial Intelligence - A European approach to excellence and trust, COM(2020) 65 final, 2020.
(3) European Commission, Report on the safety and liability implications of AI, the Internet of Things and Robotics, COM(2020) 64 final, 2020.
(4) Policy option 1.a or 1.b could be combined with one sub-option under 1.c.
(5) One sub-option under heading 2.1 and one or several of the sub-options under 2.2 could be combined with the chosen options under heading 1.
(6) See footnotes 2 and 3.