
    Brussels, 28.9.2022

    SWD(2022) 320 final

    COMMISSION STAFF WORKING DOCUMENT

    EXECUTIVE SUMMARY OF THE IMPACT ASSESSMENT REPORT

    Accompanying the document

    Proposal for a Directive of the European Parliament and of the Council

    on adapting non-contractual civil liability rules to artificial intelligence
    (AI Liability Directive)

    {COM(2022) 496 final} - {SEC(2022) 344 final} - {SWD(2022) 318 final} - {SWD(2022) 319 final}


    Executive Summary Sheet

    Impact assessment on the initiative on civil liability for damage caused by AI

    A. Need for action

    What is the problem and why is it a problem at EU level?

    The roll-out of AI is both a Commission objective and an expected trend. Although AI-enabled products/services are expected to be safer than traditional ones, accidents will still occur.

    Current liability rules, in particular national rules based on fault, are not adapted to handle compensation claims for harm caused by AI-enabled products/services. Under such rules, victims need to prove a wrongful action/omission of a person that caused the damage. The specific characteristics of AI, including autonomy and opacity (the so-called “black box” effect), make it difficult or prohibitively expensive to identify the liable person and prove the requirements for a successful liability claim.

    The Commission wants to avoid a situation in which victims of harm caused by AI (e.g. citizens and businesses) are less protected than victims of traditional technologies. Such a lack of compensation could undermine their trust in AI and ultimately the uptake of AI-enabled products/services.

    It is uncertain how national liability rules can be applied to the specificities of AI. In addition, when the outcome would be unjust for the victim, courts may apply existing rules on an ad hoc basis to reach a just result. This creates legal uncertainty. As a result, businesses will find it difficult to predict how existing liability rules would be applied if damage occurs, and thus to assess and insure their liability exposure. This impact is magnified for businesses active across borders, as the uncertainty spans different jurisdictions. It particularly affects SMEs, which cannot rely on in-house legal expertise or capital reserves.

    It is also expected that, if the EU does not act, Member States will adapt their national liability rules to the challenges of AI. This would result in further fragmentation and increased costs for businesses active across borders.

    What should be achieved?

    The initiative delivers on the Commission’s priority for the digital transition. The overarching objective is to promote the roll-out of trustworthy AI so as to reap its full benefits. To that end, the AI White Paper aims to create an ecosystem of trust that promotes the uptake of AI. The liability initiative is the necessary corollary of safety rules adapted to AI and thus complements the AI Act.

    The initiative will:

    -Ensure that victims of harm caused by AI-enabled products/services are as well protected as victims of traditional technologies.

    -Reduce legal uncertainty regarding the liability exposure of businesses developing or using AI.

    -Prevent the emergence of fragmented AI-specific adaptations of national civil liability rules.

    What is the value added of action at the EU level (subsidiarity)? 

    Promoting the roll-out of AI in Europe implies the need to open the EU single market to those economic operators who want to develop or adopt AI in their businesses.

    The prerequisites are to reduce legal uncertainty and to prevent the fragmentation that would arise if Member States started adapting their national rules on their own in diverging ways.

    Conservative estimates suggest that EU-level action on liability for AI would increase the production value of the relevant cross-border trade by 5-7% compared to the baseline scenario.

    B. Solutions

    What are the various options to achieve the objectives? Is there a preferred option or not? If not, why?

    Policy option 1: three measures to alleviate the victims’ burden of proving their liability claim:

    a) Harmonising how the information recorded/documented according to the product safety rules of the AI Act can be disclosed in court proceedings so that the victim can identify and prove which action/omission led to the damage suffered.

    b) If the victim shows that the liable person did not comply with AI Act safety requirements designed to prevent damage, the courts could presume that this non-compliance caused the damage. The potentially liable person would have the opportunity to rebut such a presumption, e.g. by proving that another cause led to the damage.

    c) Where the only way for the victim to prove the liability claim is to demonstrate what happened inside the AI system, this burden on the victim would be alleviated. The potentially liable person would have the opportunity to prove that they did not act negligently.

    Policy option 2: the measures under Option 1 + harmonising a strict liability regime for AI use cases with a particular risk profile. Strict liability means that a person who exposes the public to a risk (often to legal interests of high value, such as life, health or property) and draws a benefit from it is liable if that risk materialises, as with the liability of a car owner. In such cases, the victim only needs to prove that the damage that materialised stems from the risk sphere of the liable person. This can be coupled with mandatory insurance.

    Policy option 3: a staged approach (preferred policy option), consisting of:

    - First stage: the measures under Option 1, and

    - Second stage: a review mechanism to re-assess the need for harmonising strict liability for AI use cases with a particular risk profile (possibly coupled with mandatory insurance).

    What are different stakeholders' views? Who supports which option?

    Overall, the majority of stakeholders agreed with the problems identified and supported action at EU level.

    EU citizens, consumer organisations and academic institutions overwhelmingly confirmed the need for EU action to ease victims’ difficulties with the burden of proof. Businesses, while recognising the negative effects of the uncertainty around the application of liability rules, were more cautious and asked for a targeted intervention to avoid limiting innovation.

    A similar picture appeared regarding the policy options. EU citizens, consumer organisations and academic institutions strongly supported, as a minimum, the measures on the burden of proof. They also advocated the strongest measure, harmonising strict liability coupled with mandatory insurance.

    Businesses were more divided, partly depending on their size. Strict liability was considered disproportionate. Harmonising alleviations of the burden of proof gained more support, in particular among SMEs. However, businesses cautioned against a complete shift of the burden of proof.

    C. Impacts of the preferred option

    What are the benefits of the preferred option (if any, otherwise of main ones)?

    The preferred policy option would ensure that victims of AI-enabled products and services (natural persons, businesses and any other public or private entities) are not less protected than victims of traditional technologies. It would increase the level of trust in AI and promote its uptake.

    Furthermore, the initiative would reduce legal uncertainty and prevent fragmentation, thus helping companies, and most of all SMEs, that want to realise the full potential of the EU single market by rolling out AI-enabled products and services cross-border.

    The initiative would also improve the conditions for insurers to offer coverage of AI-related activities, which is crucial in particular for SMEs to manage their risks.

    In terms of environmental benefits, the initiative is expected to generate efficiencies and contribute to the innovation of environmentally friendly technologies.

    The cutting-edge products and services that this initiative aims to promote are for the most part not yet on the market. The proposed measures are ahead of the curve as they adapt the legal framework to the specific needs and challenges of AI, so as to create an ecosystem of trust and legal certainty.

    Due to this forward-looking policy approach, sufficient data is not available for quantifying the impacts of the preferred policy option. These impacts were therefore mainly assessed qualitatively, taking into account all available data, expert estimates and stakeholder input. Based on reasoned assumptions, some quantification approaches were nevertheless pursued.

    In particular, it is estimated that the preferred policy option would generate an increase in AI market value in the EU-27 of between ca. EUR 500 million and ca. EUR 1.1 billion in 2025. Moreover, a micro-economic analysis based on market data for robotic vacuum cleaners suggests that the initiative would generate an increase in total welfare of EUR 30.11-53.74 million for this product category alone in the EU-27.

    What are the costs of the preferred option (if any, otherwise of main ones)?

    The preferred policy option prevents liability gaps caused by the specific characteristics of AI. It would ensure that, in cases where the specific characteristics of AI would otherwise prevent the victim from proving the necessary facts, the cost is borne by the person responsible for causing the damage rather than by the victim.

    This is in line with one of the fundamental purposes of liability law, i.e. to ensure that a person who unlawfully harms another compensates the victim for the harm caused. It is also inherent in the Commission’s policy objective to ensure that victims of damage caused with the involvement of AI systems have the same level of protection as victims of damage caused by traditional technologies. It leads to a more efficient allocation of costs to the person who actually caused the damage and is best placed to prevent damage from occurring.

    Potentially liable persons (in particular businesses active in the AI market) are highly likely to be covered by insurance. Insurance solutions make it possible to spread the liability burden across the community of the insured, and thus limit liable persons’ costs to their annual insurance premiums. Insured liable persons would hence perceive the cost of compensating the victim only as a marginal increase in their insurance premiums.

    A highly robust and precise cost quantification was not possible because the cutting-edge products and services promoted by this initiative are for the most part not yet on the market. Based on available data, expert analysis and reasoned assumptions, it was estimated that the preferred policy option may lead to an increase in the overall amount of general liability insurance premiums paid annually in the EU of EUR 5.35 million to EUR 16.1 million.

    What are the impacts on SMEs and competitiveness?

    By improving the conditions for the functioning of the internal market for AI-enabled products and services, the initiative would have a positive impact on the competitiveness of companies active in the European AI market. These companies would become more competitive on a global scale, which would strengthen the EU’s position vis-à-vis its competitors in the global AI race (primarily the US and China). As AI is a cross-cutting enabling technology, these benefits would not be limited to certain specific sectors but would apply – albeit to varying degrees – in all sectors where AI is developed or used.

    SMEs would benefit even more than other stakeholders from reduced legal uncertainty and fragmentation, because they are more affected by these problems under the current liability rules. This initiative would improve the conditions in particular for SMEs wishing to roll out AI-enabled products or services in other Member States. This is crucial because the EU’s AI market is driven to a large extent by SMEs developing, deploying or using AI technologies.

    SMEs would also benefit from this initiative as victims of damage caused by AI, as they could rely on the alleviations of the burden of proof to claim compensation.

    Will there be significant impacts on national budgets and administrations? 

    No significant impacts on national budgets and administrations are expected.

    The envisaged measures easing victims’ burden of proof could be integrated without friction into Member States’ existing procedural and civil liability frameworks.

    Member States will have to report on the implementation of the initiative and provide certain information for the Commission’s targeted review. However, these reporting requirements will be limited to information available through Member States’ existing databases and information reported under other legal instruments (e.g. the AI Act or the Motor Insurance Directive), which will achieve synergies and ensure the coherence of future policy measures across various policy areas.

    Will there be other significant impacts? 

    Fundamental rights: The initiative will contribute to supporting the effective private enforcement of fundamental rights and preserve the right to an effective remedy where the fundamental rights risks of AI (e.g. discrimination) have materialised.

    International dimension: By putting forward a balanced approach regarding liability for damage caused by AI, the EU has the opportunity to set a global benchmark and promote its approach as a global solution, which would ultimately generate a competitive advantage for ‘AI made in Europe’.

    Proportionality? 

    The preferred option is designed to prepare the ground for the development and use of AI, while reaching the main objective of promoting its roll-out in the EU.

    However, this option will not go beyond what is necessary. Firstly, the EU intervention is targeted because it will only alleviate the victims’ burden of proof. It will harmonise only those elements of liability that are challenged by AI, leaving other elements, such as the determination of fault and causality, to existing national laws.

    Secondly, the preferred option postpones the assessment of the need to harmonise strict liability to a later stage, when more information about AI and its uses can be gathered (see further below).

    Thirdly, the preferred option proposes a minimum harmonisation approach. While minimum harmonisation does not create an entirely level playing field, it ensures that the new rules can be integrated without friction into the existing civil liability framework within each Member State.

    Thus, Member States will be able to integrate the targeted EU interventions of the preferred option into their national law. Across the EU, the initiative will increase legal certainty, prevent further legal fragmentation and ensure effective protection of victims, comparable with the level of protection for other types of damage.

    D. Follow up

    When will the policy be reviewed?

    The preferred policy option consists of a staged approach: first introducing measures to alleviate the victim’s burden of proof and then, on the basis of a review clause, reassessing the situation five years later. This process will allow the Commission to assess, in light of the development of the technology and its uses, whether harmonisation of strict liability and mandatory insurance is needed in addition to the measures alleviating the burden of proof.
