
    Brussels, 28.9.2022

    SWD(2022) 319 final

    COMMISSION STAFF WORKING DOCUMENT

    IMPACT ASSESSMENT REPORT

    Accompanying the document

    Proposal for a Directive of the European Parliament and of the Council

    on adapting non-contractual civil liability rules to artificial intelligence

    {COM(2022) 496 final} - {SEC(2022) 344 final} - {SWD(2022) 318 final} - {SWD(2022) 320 final}


    Table of contents

    1. Introduction: Political and legal context

    2. Problem definition

    2.1. Introduction and overview

    2.2. Problem driver: the specific characteristics of AI make it difficult to meet the burden of proof

    2.3. Problem 1: Legal uncertainty leading to internal market obstacles

    2.4. Problem 2: Legal fragmentation leading to internal market obstacles

    2.5. Problem 3: Lack of compensation for victims leading to a lack of societal trust and hampering uptake of AI-enabled products and services

    2.6. The consequences: internal market obstacles and affected stakeholders

    2.7. Likely evolution of the problems without Union intervention

    3. Why should the EU act?

    3.1. Legal basis

    3.2. Subsidiarity: Necessity and added value of EU action

    4. Objectives: What is to be achieved?

    4.1. General objectives

    4.2. Specific objectives

    5. What are the available policy options?

    5.1. What is the baseline from which options are assessed?

    5.2. Description of the policy options

    5.3. Measures discarded at an early stage

    6. What are the impacts of the policy options?

    6.1. PO1: Easing the burden of proof for AI-related claims

    6.2. PO2: PO1 + strict liability for AI use-cases with a specific risk-profile, possibly coupled with mandatory insurance

    6.3. PO3: PO1 + targeted review regarding the strict liability and mandatory insurance elements of PO2 (staged approach)

    7. How do the options compare?

    8. Preferred option

    8.1. Rationale and main impacts of the preferred PO

    8.2. The interplay with the PLD review: consistency and complementarity of the package of measures for the compensation of damage caused by AI

    8.3. One-In-One-Out

    9. How will actual impacts be monitored and evaluated?

    Annex 1: Procedural information

    Annex 2: Stakeholder consultation

    Annex 3: Who is affected and how?

    Annex 4: Analytical methods and key findings of supporting studies / Explanations on how the European Parliament’s legislative own-initiative resolution on a civil liability regime for AI was taken into account

    Annex 5: Detailed explanations on how the specific characteristics of AI are challenging existing liability rules

    Annex 6: Detailed description of the legal context and interplay between existing / proposed legal rules and the AI liability initiative

    Annex 7: Interplay with the AI Act

    Annex 8: AI-specific fundamental rights concerns – Overall Commission policy approach and the role of the AI liability proposal

    Annex 9: The types of compensable (in particular immaterial) harm and the admissibility of contractual exclusions/limitations of liability – Member States’ legal approaches/traditions and reasons for not harmonising these aspects specifically for AI

    Annex 10: Detailed explanations and results regarding the assessment and comparison of policy options (multi-criteria analysis)

    Annex 11: The economic impact of adapting civil liability rules to the specific challenges of AI: The example of the European market for vacuum cleaners

    Annex 12: Monitoring and evaluation

    Annex 13: Illustration of AI-specific difficulties in claiming compensation based on case scenarios


    1. Introduction: Political and legal context

    1.1. Introduction and political context

    AI is a set of enabling technologies with the potential to transform our economy and society so profoundly that it is often compared to electricity. As a key driver of future economic development, AI has strategic importance 1 . While AI can bring about economic and societal progress, it can also put European values at risk. In particular, when AI systems cause harm, the victims’ legal interests in life, health and property are at stake. In addition, the right to effective access to justice can be called into question, because the specific characteristics of certain types of AI can make it prohibitively difficult for victims to prove their claim under existing liability rules.

    For these reasons, Commission President Ursula von der Leyen announced in her political Guidelines 2 a coordinated European approach on AI. In its White Paper on AI 3 , the Commission undertook to promote the uptake of AI and to address the risks associated with certain of its uses by building an ecosystem of excellence and trust. One of the White Paper’s political objectives for liability, and for this impact assessment, is that persons having suffered harm caused by AI systems enjoy the same level of protection as persons having suffered harm caused by other technologies. 4 In the Report on AI Liability 5 accompanying the White Paper, the Commission identified the specific challenges posed by AI to existing liability rules. The Commission Work Programme 2020 envisaged a follow-up to the White Paper in the form of new legislative initiatives, including on liability. 6

    In its conclusions on shaping Europe’s digital future of 9 June 2020, the Council welcomed the consultation on the policy proposals in the White Paper on AI and called upon the Commission to put forward concrete proposals. On 20 October 2020, the European Parliament (EP) adopted a legislative own-initiative resolution requesting the Commission to adopt a proposal for a civil liability regime for artificial intelligence based on Article 114 TFEU. 7

    The Coordinated Plan on AI in the Annex to the Communication ‘Fostering a European approach to Artificial Intelligence’ of 21 April 2021 8 specifies that the Commission will propose in 2022 measures adapting the liability framework to the challenges of AI.

    AI systems could also contribute to the accomplishment of several targets across the Sustainable Development Goals included in the 2030 UN Agenda for Sustainable Development. 9 The use of AI could support environmentally beneficial outcomes, for instance by improving prediction as well as optimising operations and resource allocation.

    Liability-related issues linked to AI are the subject of debate in various EU trading partners. As a consensual regulatory solution has not yet emerged, the EU has the opportunity to influence the global discussion with a practical and constructive approach which has a chance to become a global model.

    In terms of strategic foresight, the AI liability initiative takes into account the megatrend of Accelerating technological change and hyper-connectivity. This initiative is a crucial part of the EU’s efforts to shape the AI-driven transformation in a human-centric, yet innovation-friendly way. The potential of AI underlines the importance of creating the right conditions for its rollout through legal certainty and consistent rules across the internal market. The ubiquitous and fast evolving nature of AI also means that the risks linked to the deployment of AI systems can potentially affect a very broad range of stakeholders and legal interests. By adapting the traditional liability rules to the challenges of AI, the initiative anticipates the novel difficulties of proof faced by victims of harm, and aims to ensure that societal trust in AI is not diminished due to a lack of compensation in the future. Due to this forward-looking perspective of the AI liability initiative, a strategic foresight approach underlies the problem definition, policy objectives and policy options.

    1.2. Scope of the impact assessment: AI systems and their characteristics

    In order to ensure consistency, it is appropriate to use the same general concept of AI as in the AI Act 10  also as the basis for the present initiative which is complementary to the AI Act.

    AI-specific issues regarding liability are linked to certain characteristics of AI systems (opacity/lack of transparency and explainability, autonomous behaviour, complexity, continuous adaptation and lack of predictability) 11  which make the application of existing liability rules uncertain or more difficult (see the following section on legal context). AI systems not having such characteristics can be dealt with under the existing liability rules, similarly to other types of software. For example, rule-based algorithms are designed to automatically execute rules encoded by their programmers, following the classical ‘if-then’ logic like other types of software. Such algorithms do not put victims of harm in a fundamentally more difficult situation than other types of software do. For example, robotic process automation in a factory setting may rely on such rule-based systems. In contrast, the highly autonomous operation of e.g. mobile robots in more unpredictable environments (such as delivery robots) – often involving interaction with humans – will typically require more complex AI systems, in particular those developed through machine learning. Such AI systems are expected to challenge existing liability rules and are therefore concerned by the present initiative. 

    Their operation can potentially affect legally protected interests (for example AI used for recruitment, in autonomous vehicles, delivery robots, or drones). Certain AI-enabled products and services with the potential to cause harm are already at an advanced development stage or even starting to be deployed, such as certain mobile robots or AI systems used in the context of recruitment. However, AI systems with the specific characteristics challenging liability rules are mostly not yet on the market, among other reasons because they are not yet approved. Therefore, there are no judicial cases available yet in which the problems to be addressed by this initiative have materialised. The objective of this proposal is to create the conditions of legal certainty and to promote the societal trust needed for such AI systems to be brought to market in the EU.

    1.3. Legal context 12

    Liability rules determine how damage, caused by human activities and goods for which humans are considered liable by law, can be compensated.

    This initiative concerns ‘extra-contractual’ civil liability rules, i.e. rules providing a compensation claim irrespective of a contractual link between the victim and the liable person. Those rules will be referred to as ‘liability’ in the following. Criminal liability is not concerned.

    A victim currently relies on national liability rules and in certain cases on the Product Liability Directive 13 (PLD) to claim compensation for damage arising from AI-enabled products and services. 

    Depending on the case, a victim has several avenues available for claiming compensation. Various persons could be held liable in parallel, under different conditions and for different reasons. None of these rules contain specific provisions for AI.

    Example: A traffic accident victim may have in parallel a fault-based liability claim against the driver whose fault caused the accident, a ‘PLD’ claim against the producer if the vehicle had a defect, and a strict liability claim against the vehicle’s owner.

    Firstly, the victim could have a fault-based liability claim based on another person’s wrongful behaviour, i.e. fault. 14 A fault-based claim usually requires proving the existence of:

    -damage,

    -a fault of the liable person, and

    -a causal link between that fault and the damage.

    As a matter of principle, it is the victim that needs to prove these three elements. 15  

    Such claims apply irrespective of the technology/activity involved in causing harm and can be addressed against any type of wrongdoer (e.g. businesses, including service providers, or private persons) by any type of victim (e.g. business, private individual, public entity). They can compensate any type of harm protected by national law, e.g. physical injury, property damage, damage to or loss of data, economic loss like lost profit, non-economic loss like pain and suffering, or harm caused by discrimination. 16  

    Secondly, in addition to fault-based claims, victims could rely on liability independent of fault - so-called strict liability, with different scopes and conditions set by national laws. 17  

    Strict liability rules assign liability for the relevant risk to a person, irrespective of fault. This is usually justified by the fact that this person draws a benefit from exposing the public to that risk 18 . Normally, for that person to be found liable, the victim has only to prove that the risk stemming from the sphere of the liable person materialised.

    Thirdly, victims may have, under the PLD, a claim against the producer of a defective product for a defect present when the product was put on the market. The victim must prove that the product was defective and the causal link between that defect and the damage.

    Under the PLD, the victim can claim compensation for personal injury or damage to consumer property. The PLD does not cover damage caused to property intended for professional use, damage caused during the provision of a service, damage to victims other than natural persons or claims based on the wrongful use of a product. In all these cases, victims can only get compensation according to national liability rules.

    Types of civil liability (overview)

    Scope:

    -National fault-based liability: any human actions or omissions, including manufacturing, providing services and using products.

    -National strict liability regimes: operating specific technologies (e.g. operation of motor vehicles, aircraft, nuclear power plants), the production/use of ‘dangerous products/activities’ or, more generally, the use of objects.

    -PLD: producing defective products.

    Liable person:

    -National fault-based liability: any wrongdoer, e.g. producer, service provider, user (both businesses and private persons), owner, cyber attackers, etc.

    -National strict liability regimes: owner or operator of technologies within scope.

    -PLD: producer (manufacturer, importer) and, in some cases, supplier/seller.

    Type of victim protected:

    -National fault-based liability: any victim, e.g. private persons, businesses, public entities.

    -National strict liability regimes: any victim, e.g. private persons, businesses, public entities.

    -PLD: natural persons.

    Type of damage:

    -National fault-based liability: any damage, e.g. death, personal injury, damage to property (for private or professional use), pure economic loss, discrimination, non-economic loss like pain and suffering, depending on national law.

    -National strict liability regimes: varies, e.g. harm to life, health and property.

    -PLD: death, personal injury, damage to consumer property worth more than EUR 500.

    What the victim needs to prove:

    -National fault-based liability: fault of a person + causal link between fault and damage.

    -National strict liability regimes: that the risk subject to strict liability materialised (e.g. a motor vehicle was involved in an accident).

    -PLD: a defect in the product (irrespective of fault).

    Period of liability:

    -National fault-based liability: varies.

    -National strict liability regimes: varies.

    -PLD: 10 years after putting the product into circulation.

    A holistic approach to liability: Looking at all liability pillars

    The liability follow-up to the AI White Paper looks holistically at the three avenues to compensation currently available to victims. The purpose is to ensure that, under all three avenues, the specific challenges of AI do not leave victims harmed by AI systems with a lower level of protection than victims harmed by traditional technologies.

    For the purposes of presenting the problems and solutions in the form of impact assessments, a distinction is drawn between, on the one hand, the route to compensation provided by the PLD, and on the other hand, the routes to compensation provided by non-harmonised national fault-based and strict liability rules. It is necessary to cover these complementary pillars of liability in separate, but closely coordinated impact assessments, because:

    -The PLD and national liability rules provide liability claims based on different grounds, against different liable persons and for different types of victims and damages. This means that different markets and different possible addressees of liability claims have to be considered as regards the PLD and national liability rules. The PLD covers only part of the harm which can be caused by AI systems. It covers damage done by defective products, while other liability rules also compensate harm caused, for instance, by services or by any use of products. It covers the producer as liable person, while other liability rules cover harm done by other actors like operators/users of AI systems. It covers certain types of damage, while other liability rules also compensate other harm suffered by victims, such as economic and non-economic loss.

    -The PLD evaluation found that it was difficult to apply its rules to digital technologies in general, including AI-enabled products. It also found that the rules made it difficult to prove liability in the case of complex products, including AI-enabled products but also others such as pharmaceuticals. These problems call for technology-neutral measures. Those measures will provide victims of damage, including damage caused by AI-enabled products, with a more effective compensation claim against the producer, to ensure the same level of protection as for damage caused by other technologies or types of products.

    -Digital technologies in general have not been identified as making it difficult to apply national liability rules. It is the specific features of certain types of AI, especially its autonomy and opacity, which challenge the application of national liability rules. This needs to be addressed through targeted, AI-specific measures.

    -Due to this difference between the problems, different objectives and policy options had to be developed for each. In light of the AI White Paper objective of ensuring that the specific challenges of AI do not leave victims harmed by AI systems with a lower level of protection than victims harmed by traditional technologies, this is an expression of the Commission’s holistic approach of examining all three pillars of liability. The PLD review is about adapting it to the digital age, preserving its technology-neutral nature and coverage. It also helps victims of damage caused by AI-enabled products to have a more effective compensation claim against the producer. The AI liability proposal tackles the AI-specific problems regarding other liability rules. Together, they will contribute to guaranteeing the effective functioning of the internal market in AI-enabled products/services and ensure that victims of harm caused by AI have the same level of protection as victims of harm caused by other technologies.

    1.4. Related policy initiatives

    The Commission is aiming to support the roll-out of AI in Europe by developing an ecosystem of trust. In accordance with the White Paper, the Commission proposes a forward-looking, comprehensive package of measures to address problems caused by the introduction and use of AI, comprising three complementary work streams:

    -a horizontal framework addressing fundamental rights and safety risks specific to AI systems (AI Act); 19  

    -a revision of sectoral and horizontal product safety rules;

    -EU rules to address liability issues related to AI systems.

    Interaction with already adopted proposals: The Commission has already proposed rules in the AI Act reducing risks for safety and fundamental rights. While they will reduce risks, they are not intended to prohibit the placing on the market of AI-systems posing a residual risk to safety and fundamental rights. 20  Harm will therefore still occur. When this happens, the liability rules subject to this impact assessment determine which party is liable for harm and under which conditions a victim can be compensated. 

    The AI Act also provides for definitions and specific documentation/logging requirements for high-risk AI systems. The information from these documents and logs could be useful in liability claims but the AI Act does not foresee a right of victims to access that information. 21  

    The Commission has also put forward updates of general 22  and sectoral product safety rules applicable to AI-enabled machinery products 23 and radio equipment. 24  These initiatives do not contain provisions on liability or access to information for the purposes of damages claims. Their requirements (e.g. general safety requirement, risk assessment or regarding human oversight, testing, data quality) can be relevant for determining what constitutes fault of producers or what is a defective product in relation to AI – points which the AI liability proposal will not regulate. 

    Safety and fundamental rights requirements are meant to prevent, monitor and address risks and thus address societal concerns 25 . While they employ tools that address a potential wrongdoer, such as authorisations, controls, monitoring, administrative sanctions, they do not compensate the victim for the harm suffered.

    Therefore, safety and liability are two sides of the same coin: they apply at different moments and reinforce each other. Safety rules intend to reduce the risk of harm. Liability rules come in to ensure compensation of the victim when harm has nevertheless occurred. Effective liability rules also provide an economic incentive to comply with safety rules and contribute therefore to preventing the occurrence of harm. 26

    As AI is only gradually being rolled out, there are not yet many documented cases of harm, as the relevant products/services are not yet on the market. However, as part of the Commission’s forward-looking and enabling approach for the uptake of this technology, action is needed to adapt the applicable liability rules in advance in order to increase trust and create the conditions for the roll-out of these products and services.

    Interaction with the pending revision of the PLD: The Commission takes a holistic approach in its AI policy to liability by considering adaptations to the producer’s liability for defective products under the PLD as well as a targeted harmonisation of specific aspects of the other liability rules.

    These two policy initiatives are closely linked and form a package, as the claims within their scope deal with different types of liability. They complement one another to form an overall effective civil liability system. For AI, they achieve the AI White Paper objective that persons having suffered harm caused by AI systems enjoy the same level of protection as persons having suffered harm caused by other technologies.

    Example: The victim may decide to pursue a liability claim against the one who caused the damage by operating the product (e.g. a service provider), based on strict or fault-based liability.

    The victim may also rely on product liability to claim compensation from the producer of the AI product for losses resulting from material harm. In order to get compensation from the producer for damage not linked to material harm (such as pure economic loss or damage caused by discrimination), the same victim would need to invoke national liability rules outside the scope of the PLD, in particular fault-based liability.

    Together they will deliver the ex-post trust element in AI (and other digital technologies), by ensuring that victims have an effective chance of compensation if damage occurs despite the preventive requirements of AI Act and the other safety rules.

    2. Problem definition

    2.1. Introduction and overview

    Liability ranks amongst the top three barriers to the use of AI by European companies in a representative survey of 2020. It is the most relevant external obstacle (43%) for companies that are planning to, but have not yet adopted AI. 27  In the White Paper, the Commission counted liability-related issues amongst the main risks related to the use of AI, because victims may not have effective access to the evidence necessary to build a case in court and economic operators face uncertainty as regards the allocation of responsibilities.

    The accompanying Report identified those AI-specific characteristics that challenge liability rules, (see section 2.2.). Economic 28 , behavioural 29 and legal 30  studies commissioned for this impact assessment, as well as comprehensive stakeholder consultations 31 , have shown three AI-specific problems: legal uncertainty (see 2.3.), increased legal fragmentation (see 2.4.) and a lack of compensation for victims contributing to the lack of societal trust in AI use (see 2.5.), which in their combination cause internal market obstacles (see 2.6.).

    Delineation from the problems covered by the parallel impact assessment on the PLD review: Under the PLD, the burden of proof is less onerous on victims than under national fault-based liability rules, because the victim does not have to prove fault. Nevertheless, proving that a product is defective (i.e. that it does not provide the level of safety a person is entitled to expect) is challenging for complex products, of which AI-enabled products are an example, even if the victim does not have to prove how the product became unsafe.

    As the PLD regulates producer liability for various different types of defective complex products, of which AI-enabled products are only a part, this problem is covered in the parallel impact assessment on the PLD review. It will be addressed by amendments of the PLD regulating in a horizontal manner all those products.

    2.2. Problem driver: the specific characteristics of AI make it difficult to meet the burden of proof

    The three problems (uncertainty, fragmentation and lack of compensation) are driven by the fact that, under existing liability rules, victims normally have to prove the necessary conditions for a successful claim. Under current rules, victims generally have to identify the possibly liable person(s), the action/omission of that person which could be characterised as fault, and the causal link between that action/omission and the damage. There are neither targeted provisions for AI to ease the victim’s burden of proof under fault-based claims, nor AI-specific strict liability regimes that would exempt the victim from making this proof.

    However, certain AI-systems have characteristics that make it prohibitively difficult or even impossible for victims to provide that proof. 32  The specific characteristics that challenge current liability rules are: increasingly autonomous behaviour, opacity/lack of transparency and explainability, complexity, continuous adaptation and lack of predictability. 33 These characteristics make it very difficult to attribute directly the damaging output of the AI to an action/omission of a liable person, i.e. to identify:

    -the person who could possibly have done something wrong,

    -the action or omission of that person which might not comply with the relevant standard of care, and

    -the causal link between that action or omission and the victim’s damage.

    This is linked to two situations that can occur due to the use of AI systems:

    -between the human actor(s) related to the AI system and the damage caused by this AI system to a victim, the AI-system takes an autonomous action which is not pre-determined by a human and/or

    -the way the AI action is performed cannot be understood (mainly due to the opacity and complexity of certain forms of AI).

    Example: 34 A company deploys a fleet of autonomous cleaning robots to provide cleaning services throughout a city. It tasks one of its employees with the remote supervision of the fleet. One of the cleaning robots fails to recognise a colourful baby stroller, which is parked in front of a similarly patterned advertising banner. Because of the collision, the baby is injured and the stroller is damaged. The father witnessing the accident suffers psychological trauma. The accident could be due to a variety of possible causes, e.g. an image segmentation error of the robot’s AI-based perception system, a failure by the provider of the AI vision component to provide an available software update or a failure by the user (the cleaning company) to install it, a failure by the human remote operator to appropriately monitor the operation of the fleet (possibly due to a malfunction of the human-robot interface) or also a deliberate attack on the robot’s sensors by a third party (jamming, spoofing, sabotage through adversarial machine learning etc.).

    Fault: Given the cleaning robot’s highly autonomous mode of operation, as well as the opacity and complexity of the different AI components, it is highly uncertain that the victim could ascertain relevant actions or omissions of, for instance, the employee charged with providing the cleaning instructions or monitoring the fleet of robots remotely. This would however be necessary to make a successful claim against the company.

    Causality: Assuming a wrongdoer’s fault can be established, it is uncertain whether and how the victim can prove the causal link between such faulty behaviour and the damage. If an expert has access to logged information on inputs, outputs and internal states of the AI subsystems, she may be able to discard certain causes of the accident (e.g. jamming or spoofing of sensors) and to suppose certain correlations between, for instance, a detection failure and a control decision to move forward until colliding with the stroller. But due to the high degree of autonomy, complexity and the lack of explainability of the AI systems involved (not only the perception module but also e.g. the trajectory planning system and low-level controllers), it will likely be impossible to infer a clear causal link between any specific input and the harmful output. In view of the specificities of AI, it is highly uncertain whether the necessary degree of likelihood can be proven. 35

    PLD: When claiming compensation under the PLD, the victim does not have to prove the producer’s fault, but that the cleaning robot was defective (i.e. it failed to provide the level of safety the public at large is entitled to expect) and the causal link between the defect and the harm. The victim would not have to prove how the cleaning robot became defective; whether it was a mechanical or software flaw is irrelevant. However, the PLD has shortcomings when it comes to digital products: producers are not liable for defects that emerge after the product was put into circulation (e.g. if the defect was due to a subsequently downloaded software module) and software producers themselves, including AI-system providers, cannot be pursued. These issues are dealt with in the PLD impact assessment, as they concern not only AI technologies.

    The PLD does not help the victim for any claims against other parties than the producer (e.g. the cleaning company), claims based on other grounds than defect (e.g. a failure to appropriately supervise the robot fleet), and claims for the compensation of damage not covered by the PLD (e.g. the psychological harm suffered by the baby’s father).

    The results of the public consultation confirmed the existence of this problem driver. Two out of three overall respondents agreed (67% or 153 out of 227, of which 46 % or 105 strongly) that it could be difficult to link damage caused by highly autonomous AI to a liable person (only 20% or 47 out of 227 disagreed). The agreement among citizens was even larger (84% or 78 out of 93) and almost unanimous among consumer organisations, national ministries and academic/research associations (94% or 31 out of 33, with only one disagreeing). Among business stakeholders a relative majority did not agree with the difficulties to link the damage to a liable person (45% or 38 out of 85) while a strong minority (36% or 30 out of 85) agreed.

    Three out of four overall respondents agreed (74% or 167 out of 227, of which 76% or 127 strongly) that it could be difficult for victims to prove that the conditions of liability (fault, defect or causation) are fulfilled in the case of opaque and complex AI, while only 12% disagreed (26 out of 227). The agreement among responding EU citizens was overwhelming (91% or 85 out of 93) and unanimous among responding consumer organisations, national ministries and research institutions. Even a relative majority among business stakeholders agreed with the difficulties of proof (41%, or 35 out of 85, while 28% or 24 out of 85 disagreed).

    61% (or 139 out of 228) of overall respondents agreed (41.8% or 95 out of 228 strongly) that this means less victim protection. The majorities among consumer organisations, national ministries, NGOs and research institutions (84% or 32 out of 38, with only one disagreeing) and EU citizens (79% or 74 out of 94) were even larger. Even a majority of business stakeholders agreed that victims are less protected (53.6%, while 26.2% disagreed).

    However, in situations where the liable person acted on advice or recommendations given by an AI system, this AI-specific problem driver does not apply. In such cases, the human acting on the advice or recommendation will be responsible as there is a human action/omission, which can be a) identified, b) characterised as not complying with the relevant standard of care, and which would c) be the cause for a specific damage. This is for example the case with AI systems providing medical analysis or even suggestions for diagnosis and treatment, which are feeding into a decision on diagnosis and treatment, but that decision is ultimately taken by a human physician. 36  

    As AI-enabled products and services with the specific characteristics challenging liability rules are for the most part not yet on the market, it is not possible to quantify the scale of the problem or of its relevance for different market segments. The economic study did however provide some indications. 37 The market share affected by legal uncertainty and legal fragmentation regarding AI liability will likely differ substantially per sector and is potentially larger where AI technologies directly interact with humans and where – in the case of a failure or damage – large-scale negative effects can be expected. Potential bodily and psychological harm, property damage, economic loss and damage caused by discrimination present recurring risks linked to the use of AI across various sectors. 38 The economic study found 39 that the share of AI-enabled products and services potentially affected by legal uncertainty and legal fragmentation of civil liability rules ranges between ca. 20% in sectors such as real estate or oil and gas and around 40% in sectors such as human health and transport.

    2.3. Problem 1: Legal uncertainty 

    As there are currently no rules addressing the AI-specific difficulties of proof, the general burden of proof conditions apply. This has several consequences, creating legal uncertainty.

    The victim and the possible liable person wanting to check liability risks, as well as the judges having to decide a liability claim, will be confronted with cases where they would need to interpret general rules which were not designed with AI in mind. They would have to apply them to AI-technologies that are qualitatively different from previous technologies, because of their specific characteristics (see 2.2.). It is quite uncertain how existing liability rules would be interpreted in light of the AI-specific difficulties of meeting the burden of proving fault and causation. 40  

    Such difficulties would lead in all likelihood to the result that the victim would not be able to meet this burden of proof. Victims may therefore decide to not even claim compensation. As an economic consequence, they would have to bear the burden of the damage.

    If the case goes to court nevertheless, national courts may decide - on an ad-hoc basis - to stick to the traditional allocation of the burden of proof on the victim. This would lead in all likelihood to the result that the victim would lose the case, not obtain compensation and bear the economic burden of the damage. If such a result is not considered as equitable, national courts may decide - again on an ad-hoc basis - to adapt to the specificities of AI and alleviate the burden of proof of the victim, in order to achieve a fair outcome in each single case.

    These uncertainties surrounding AI liability are especially relevant in a cross-border context, because MS’ existing burden of proof rules are already very diverse. 41  The economic impact of this legal uncertainty is illustrated by the estimates provided by legal experts: when asked to estimate the costs of claiming compensation based on national liability rules for damage caused by AI, they submitted wide ranges to account for the prevailing uncertainty. 42

    The public consultation confirmed this problem: a very large majority of responding consumer organisations, NGOs and academic/research associations (84% or 27 out of 32, with none disagreeing) and EU citizens (85% or 80 out of 94, with only one citizen disagreeing) agreed that it is uncertain how national liability rules will apply. The same respondents agreed equally broadly that it is uncertain how national courts will address the challenges of AI (91% or 84 out of 92 of EU citizens agreed, with none disagreeing; 81% or 25 out of 31 of the aforementioned organisations agreed, with only one disagreeing). A slight majority of business stakeholders disagreed (52% or 44 out of 85) with the uncertainty as to how national liability rules will apply, while still almost a third confirmed it (32% or 27 out of 85). Again, a third of business respondents agreed with the uncertainty as to how national courts will react to the challenges of AI (34% or 29 out of 85), while a larger share (43%, or 36 out of 85) disagreed.

    In the PLD context, unclear rules, including for AI-enabled products, have equally been identified as a problem driver. This mainly concerns the problem that the PLD is ill-adapted to products in the digital age in general. It imposes neither liability on software providers, including AI-system providers, nor liability for defects emerging after a product is placed on the market, such as through software updates, or ancillary digital services. Legal uncertainty applies horizontally to defective intangibles like software, digital content and ancillary services. For producers’ liability for defective products, including with respect to AI-enabled products, this problem and the measures needed to address it are discussed in the parallel impact assessment on the PLD revision.

    2.4. Problem 2: Legal fragmentation

    In a cross-border context, the law applicable to a liability claim is by default the law of the country in which the damage occurs. 43  This means that different liability regimes and burden of proof rules could be applied to the same kind of AI-enabled product/service deployed in several MS which causes the same kind of damage. 44  Consequently, a company disseminating/operating the same AI-enabled product/service in various MS could face different liability exposures if that product or service causes, for the same reason, the same type of damage in several MS. 45  Aside from the general burden of proof rules, there are also different rules about information disclosures in liability claims. This means that victims face greater information asymmetries in some MS. 46

    In addition, national AI strategies show that several MS are considering, or even concretely planning, legislative action on civil liability for AI. 47 The fact that in the public consultation on the AI White Paper, more than 63 % of respondents were in favour of adapting national liability rules for all or for specific AI applications suggests that MS are likely to face pressure to address the issue of liability for damage caused by AI. 48

    While the fragmentation of MS’ existing liability and burden of proof rules is as such not limited to cases involving AI, the likely adoption of AI-specific liability rules by at least some MS exacerbates the problem of fragmentation with respect to AI-enabled products and services in particular. Outside the scope of the PLD, tort law is currently not harmonised and MS’ rules diverge substantially. This may create internal market obstacles also for traditional products and services. Nevertheless, it is crucial to address this issue specifically for AI, for the following reasons:

    -The Commission aims to create the right conditions specifically for the roll-out of AI-equipped products and services.  According to the Commission’s ‘staged approach’, this involves two complementary and synergetic initiatives: in a first step, the AI Act proposal to ensure the effectiveness of safety and fundamental rights protection, and in a second step, the AI liability proposal to ensure effective protection when AI nevertheless causes harm.

    -Stakeholders active in the field of AI will be especially affected by the market barriers associated with legal fragmentation (see 2.6.).

    -As a key enabling technology, the importance of which can be compared with the introduction of electricity, AI bears a significant potential for economic growth and societal usefulness. It is therefore essential to address the obstacles entailed by legal fragmentation in order to reap the full potential of AI within the internal market and for the digital transition.

    -Given the entrenched and longstanding differences between national legal traditions, any attempt to address the problem of legal fragmentation by way of a broad stroke harmonisation of tort law would not be promising. Focussing on AI specific problems and addressing them with targeted adaptations to specific aspects of national laws provides an opportunity to approach the problem of fragmented civil liability rules in an effective and future-oriented manner.  

    The existence of diverging approaches to easing the burden of proof in cases involving complex products also forms part of the problems affecting the PLD, as it leads to differing levels of consumer protection (section 2.2.2. in the PLD impact assessment). Since the PLD problems concern not only AI-enabled products but also other complex products, they are dealt with in the impact assessment on the PLD revision.

    2.5. Problem 3: Lack of compensation for victims leading to a lack of societal trust and hampering uptake of AI-enabled products and services

    As already explained (see 2.2.), some characteristics of certain AI systems challenge the application of liability rules and may make it prohibitively difficult or even impossible for victims to get compensation. Victims are likely to incur significantly higher up-front costs and face significantly longer legal proceedings, when claiming compensation for damage caused by AI-enabled products or services, compared to cases not involving AI applications. 49  Substantial burdens on victims are anticipated, due to the specific challenges of AI, for almost all national legal systems covered by the analysis commissioned for this impact assessment, often going beyond a 100 % increase of fees paid for legal and technical expertise. 50  This could deter victims from claiming compensation. Ultimately victims may be left under- or even uncompensated. AI-specific barriers are therefore likely to impinge on the right to an effective judicial remedy. 51 As a consequence, the compensation function of liability law would fail. This would undermine the fundamental concept of justice as fairness (‘distributive justice’) that underpins our legal order and citizens’ trust in the justice system.

    If victims are left without compensation when AI is involved, they would find themselves at a disadvantage compared to a situation where traditional technology would have caused the damage. Such an AI-induced lack of compensation would be particularly problematic if future autonomous technologies would create a significant risk of harm to life, health and significant property values, due to the important legal interests at stake. The need to ensure effective compensation is all the more pressing where unwitting third parties are exposed to such risks, as such victims cannot choose to accept or avoid the risk of damage.

    A behavioural study commissioned for this impact assessment has shown that the expected lower likelihood of receiving compensation if AI applications cause damage is amongst the reasons for low levels of consumer trust in such applications. 52  Conversely, the availability of effective remedies and compensation is one of the factors increasing the level of societal acceptance of, and consumer trust in, AI-enabled products and services. 53 Another recent survey confirmed that consumers have clear concerns regarding the allocation of responsibility and liability if something goes wrong. It concluded that these concerns must be properly addressed to ensure strong protection for consumers and for them to trust this technology. 54 A statistically representative Eurobarometer of 2019 points in the same direction: EU citizens’ most widespread concern about the use of AI was that it could cause situations where it is unclear who is responsible. 55  

    The aforementioned behavioural research demonstrated that a lower level of trust in AI applications and the perception that persons suffering damage caused by AI-enabled products and services are less likely to obtain compensation correlate with a lower willingness to take up such products and services. 56 Businesses also consider that the lack of citizens’ trust is one of the most relevant obstacles to the take-up of AI-technologies in Europe. 57 Other factors, such as the perceived risk of accidents, were also shown to have a significant effect on the level of societal trust and consumer uptake. These findings are consistent with the complementary role of the AI Act / safety legislation and liability rules in creating an ecosystem of trust for AI.

    In the public consultation, a clear majority of overall respondents confirmed that the lack of adaptation of existing liability rules to AI may negatively affect trust in AI (60% or 135 out of 227) and the uptake of AI-enabled products and services (56% or 126 out of 227). In particular, responding NGOs, consumer organisations, academic/research institutions and EU citizens overwhelmingly confirmed these problems: 81% (or 77 out of 95) of citizens and 89% (or 16 out of 18) of these organisations anticipate a negative effect on trust in AI; 79% (or 74 out of 94) of citizens and 78% (or 14 out of 18) of these organisations expect a negative effect on AI uptake. By contrast, 58% (or 49 out of 85) of responding business stakeholders (business associations and companies/business organisations) did not confirm these problems, while still about a quarter of them did (27% or 23 out of 85).

    2.6. The consequences: internal market obstacles and affected stakeholders 

    (a)     Directly affected stakeholders

    Potentially liable companies: The representative 2020 IPSOS survey of European companies showed that liability for potential damages is the most recurrent barrier to the adoption of AI technologies. 58  The economic study found that a majority of businesses active in the market for AI-enabled products and services perceive their liability exposure outside the scope of the PLD as uncertain to varying extents. 59  

    Where companies are considering a cross-border rollout of AI-enabled products or services, the costs and burden linked to legal uncertainty are amplified by fragmentation between MS’ liability rules. This means that such companies have to carry out complex assessments and quantifications of liability risks for each relevant MS. A significant share of businesses active in the AI market does or will incur costs due to legal fragmentation and uncertainty regarding the application of national liability rules to damage caused by AI: 60 legal information/representation and compliance costs, internal risk management costs, opportunity costs in the form of foregone revenue due to companies’ hesitation to explore new markets. 61  

    Moreover, liability rules may sometimes have a bearing on which technological solution is favoured by developers of novel products and services 62  and may influence how products and services are designed. 63 Going cross-border may therefore involve costly technological adaptations that thwart economies of scale. Companies deterred from investing in AI-technologies miss out on benefits in terms of increased profits or efficiency gains. 64  

    The economic study estimated that the total value of the EU27 AI market affected by legal uncertainty and legal fragmentation ranges from EUR 1.739 billion to EUR 4.973 billion in 2021. 65  These shares of the total AI market in the EU27 were obtained by multiplying the value of the AI market per sector by the respective shares affected by legal uncertainty and fragmentation. To ensure consistency, the estimated AI market sizes assumed for the impact assessment accompanying the proposed AI Act were taken as a starting point, representing overall investments in AI technologies. 66  For the sake of robustness and in order to account for uncertainty, these estimates allow for a wide range of market definitions and possible future developments. 67 The shares affected by legal uncertainty and fragmentation were determined based on a sector-by-sector analysis, taking into account the ability of the AI applications deployed in the respective sector (now or in the foreseeable future) to cause harm and thus give rise to liability claims. 68    
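    Schematically, the estimation method described above amounts to a weighted sum over sectors. The following is an illustrative rendering in generic notation, not the economic study’s own formula:

    \[ V_{\mathrm{affected}} \;=\; \sum_{s \in \mathrm{sectors}} v_s \cdot a_s \]

    where \(v_s\) denotes the estimated value of the AI market in sector \(s\) and \(a_s\) the share of that sector’s AI-enabled products and services affected by legal uncertainty and fragmentation (for instance, roughly 0.2 for sectors such as real estate or oil and gas and roughly 0.4 for sectors such as human health and transport, as indicated by the sector-by-sector analysis referred to in section 2.2.).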

    The EP concluded in 2020 that fragmentation and uncertainty relating to the liability provisions applicable across the EU, which may emerge in the absence of a common EU approach, can create negative incentives for innovation and the diffusion of AI systems, while also contributing to excessive costs for consumers. 69  These internal market obstacles will likely contribute to slowing down the innovation that drives the overall digital transition. Legal uncertainty and fragmentation regarding civil liability present significant barriers across all sectors 70  and for both products and services. 71

    The results of the public consultation confirm these consequences. A clear majority of overall respondents expect that legal fragmentation and/or the lack of adaptation with respect to civil liability for AI will, for instance, entail at least to some extent additional costs for companies (e.g. legal information costs, insurance costs) (75% or 153 out of 205) and the need to adapt AI technologies, distribution models and cost-management models (70% or 146 out of 205).

    SMEs in particular: Start-ups and other SMEs are likely to be significantly more affected by legal uncertainty and fragmentation than large companies: they cannot rely on comparable in-house legal expertise and an established cross-border network. As smaller companies often lack sufficient external funding, the additional costs incurred due to legal uncertainty may be prohibitive for start-ups and other SMEs, or at least reduce their competitiveness in relation to large companies. 72  These effects are particularly weighty because the EU market is driven by SMEs developing, deploying or using AI technologies. It is estimated that SMEs account for more than 95 % of companies active in this market, with micro-enterprises representing over 80 % of all firms involved in AI research and software development. 73  Yet it is estimated that only 19 % of micro-enterprises active in the internal AI market are involved in cross-border activities (as opposed to 68 % of large corporations). 74  This points to a significant unused internal market potential. 75 As most start-ups and other SMEs are not using the potential of the single market to scale up, the fragmentation of the AI landscape, notably in terms of regulations, impedes the creation of a regional ecosystem with bigger scale. 76

    In the public consultation, business associations representing SMEs confirmed the relevant internal market obstacles to a larger extent than other business stakeholders. The responding individual SMEs also agreed that the lack of adaptation and fragmented AI-specific liability rules at national level will increase companies’ costs and insurance premiums, entail higher prices of AI-enabled products and services, and cause companies to refrain from using AI.

    Insurance companies: Legal uncertainty regarding the liability exposure of insurance holders can make it more difficult to calculate premiums and offer coverage, in particular for cross-border activities and for situations covered by fault-based liability. 77 Such difficulties affect the insurance companies themselves. But they are also of concern to the companies facing unpredictable – and potentially very high – liability exposure when unable to procure desired insurance coverage. Finally, they also affect the victims bearing the liable party’s insolvency risk. Moreover, liability insurance premiums depend, to some extent, on insurers’ prospect of successful redress claims allowing them to claim back the amounts they paid out. 78 Outside the scope of the PLD, such redress claims rely to a large extent on fault-based liability, which means that insurance companies will face significant uncertainty in estimating their own chances of success on the redress level for damage caused by AI-enabled products and services.

    Victims of damage caused by AI-technologies may be (i) private persons, businesses or public entities owning AI-enabled products or using AI-enabled products and services and (ii) natural persons, businesses or public entities without any link to those products and services (third-party damage). See 2.5. for detailed explanations on the difficulties faced by victims.

    (b)     Indirectly affected stakeholders

    Consumers: As explained under 2.5., a lack of compensation for damage caused by AI is likely to contribute to low levels of consumer trust in, and demand for, AI-enabled products and services. 79 If restrained consumer demand, coupled with the legal uncertainty and fragmentation faced by businesses, delays the roll-out of novel AI technologies in the internal market, consumers would, to some extent, be deprived of the benefits promised by AI. They could miss out on benefits like faster and more personalised services, innovative and performant products, as well as advances in the fields of health, safety, security, mobility, sustainability, media, etc.

    Companies active in AI: The trust deficit and the consequential lower uptake of AI-enabled products and services is likely to affect all companies with a stake in the rollout of AI in the form of lost market opportunities. This includes also those entities in the AI value chain, such as researchers, developers and providers of related technologies or equipment (e.g. cloud service providers), who are likely to face reduced demand.

    General economic consequences: If victims are left to bear the burden of damage caused by AI, the overall cost allocation of introducing this technology is inefficient, as this cost is borne by those who can neither control the risk nor benefit from it. Burdening victims disproportionately and allowing for an externalisation of the costs linked to the roll-out of AI would create inefficient incentives. It would also undermine the innovation potential linked to the development of safe AI-enabled products and services. 80  

    In an even broader sense, the existence of obstacles to the functioning of the internal market in AI-enabled products and services affects the European economy and society as a whole. The cross-sector relevance of AI was confirmed by a representative survey of 2020, showing that at least 35 % of EU companies from each of the sectors covered have adopted AI technologies. 81  The identified liability-related obstacles are likely to have a negative impact on the overall growth and innovation capacity of the European market. The fact that EU companies cannot take full advantage of the internal market prevents them from becoming more competitive on a global scale, and may thus further weaken the EU’s position vis-à-vis its competitors in the global AI race (primarily the US and China), as the EU already lags behind them 82 . 

    By putting forward a balanced approach regarding liability for damage caused by AI, the EU has the opportunity to set a global benchmark and ultimately generate a competitive advantage for ‘AI made in Europe’.

    While the longer term impact of Covid-19 on the situation of the internal market for AI-enabled products and services is still somewhat uncertain, the pandemic has expedited the trend towards increased digitalisation, and economic growth picked up strongly in 2021 and 2022. 83  The war in Ukraine is likely to have a significant negative effect on the EU economy, and forecasts will have to be reviewed taking this factor into account. However, at the time of submitting this impact assessment no updated estimates were available yet.

    General societal consequences: A liability framework which is significantly more difficult to apply when AI is involved will not sufficiently motivate risk takers (producers, service providers, operators, users, etc.) to make efforts to avoid harm to others. In particular, inefficient liability rules will not sufficiently motivate compliance with safety rules or with the ex-ante requirements introduced by the AI Act.

    Compensation gaps could call into question citizens’ trust not only in AI, but also in the ability of the legal and judicial system to ensure fair and equitable results.

    2.7. Likely evolution of the problems without Union intervention

    Legal uncertainty regarding the application of national liability rules might to a certain extent diminish in the long term as MS adopt AI-specific liability rules or national courts clarify the interpretation of existing rules. However, the adoption of AI-specific liability rules at the national level would in all likelihood exacerbate the problem of fragmentation since such rules would be different and not benefit from a harmonised interpretation by the European Court of Justice.

    Judicial solutions adopted by national courts are by nature limited to the particular case at issue and in most cases do not bind other courts, which in turn can adopt different solutions. Therefore, legal uncertainty for business and victims’ difficulties in claiming compensation will persist, to a large extent, while fragmentation in the internal market will increase.

    The AI Act and various safety-related rules 84  aim to improve the availability of relevant information on, and the transparency and explainability 85 of, high-risk AI systems, the safety of those systems and the effective protection of fundamental rights. The requirements of the AI Act, and the future standards detailing those requirements, can help to determine the duties of care for fault-based claims. The information to be documented and the records logged by the system pursuant to the AI Act can be useful for establishing liability, provided the victim has access to them. However, these proposals do not contain provisions designed to facilitate the victim’s claim, e.g. on the burden of proof or access to information. They will therefore not remove the AI-specific problem drivers in the context of liability. For this reason, the impact assessment for the AI Act underlined the importance of combining that initiative with specific liability-related measures.

    It is to be expected that the problem drivers of the difficulty of claiming compensation linked to e.g. the autonomy, opacity and complexity of certain AI systems will become more relevant in the future, as increasingly independent AI-systems will carry out ever more advanced tasks (see 5.1.).

    The negative effect of a perceived lack of compensation on societal trust is likely to increase further if the risks posed by AI-enabled technologies materialise in practice and the public becomes increasingly aware of difficulties in claiming compensation. Accidents involving AI-enabled products often receive a very large amount of public attention. 86  Behavioural research has shown that the level of consumers’ anxiety towards the use of AI applications and their likelihood to take up AI-enabled products/services is correlated with their exposure to media coverage of accidents involving such products/services. 87  This effect could already be observed, e.g. in the change in public perception of self-driving vehicles following high-profile crashes. 88  It is very likely to become more relevant with the increased rollout of AI applications capable of causing harm. 

    As the market for AI-enabled products and services is expected to grow substantially, and AI can be implemented in more and more situations and sectors, the economic and societal relevance of the identified problems is also likely to increase in absolute terms. 89  The economic study concluded that the total AI market share affected by liability issues will increase at a compound annual growth rate of 44% – 56% until 2025. 90  
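    For illustration only, the short calculation below (a back-of-envelope sketch, not taken from the economic study; the base year and time horizon are assumptions) shows how a compound annual growth rate in this range translates into an overall growth multiple.

def growth_multiple(cagr: float, years: int) -> float:
    """Multiple implied by a constant compound annual growth rate over 'years' years."""
    return (1.0 + cagr) ** years

# Illustrative horizons only; the study's own absolute figures depend on its
# base year, base values and market definitions (see 5.1.(b)).
for years in (4, 5):
    low, high = growth_multiple(0.44, years), growth_multiple(0.56, years)
    print(f"{years} years at 44%-56% CAGR: growth of x{low:.1f} to x{high:.1f}")
# Roughly x4.3 to x5.9 over four years, and x6.2 to x9.2 over five years.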

    3.Why should the EU act?

    3.1. Legal basis

    The initiative constitutes a core part of the EU AI strategy. The problems identified in section 2 (legal uncertainty, legal fragmentation, lack of compensation diminishing consumer trust) hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services. 91  

    On the one hand, these obstacles stem from the fact that businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes would apply to damage caused by AI. This uncertainty concerns not only their own MS but particularly the other MS to which they export or in which they operate their products and services. This assessment was recently confirmed by the comparative law study. 92  Such certainty would be necessary for these businesses to know the relevant liability risks and to be able to insure themselves against them.

    On the other hand, there is concrete evidence showing that a number of MS would take unilateral legislative measures to address the specific challenges posed by AI with respect to liability. For example, AI strategies adopted in Italy 93 , Malta 94 , the Czech Republic 95 , Poland 96 and Portugal 97 mention initiatives to clarify liability. Given the already large differences between MS’ existing civil liability rules, it is very likely that any national AI-specific measures on liability would follow existing different national approaches and therefore increase such differences.  

    In a cross-border context the law applicable to a non-contractual obligation arising out of a tort/delict is by default the law of the country in which the damage occurs. 98  Thus, adaptations of liability rules taken on a purely national basis would increase the barriers to the roll-out of AI-enabled products and services across the internal market and contribute further to fragmentation. 

    In light of the existing situation and the content and aim of the planned initiative, the appropriate legal basis is Article 114 TFEU. This initiative addresses the abovementioned obstacles by harmonising targeted aspects of MS’ existing civil liability rules applicable to AI-systems, in order to improve the conditions for the functioning of the internal market in AI-enabled products and services. The choice of that legal basis is also supported by the EP, which has twice called upon the Commission to use it for a legislative proposal. 99

    3.2. Subsidiarity: Necessity and added value of EU action

    (a) Necessity of action at EU level

    The identified obstacles have a significant impact on cross-border trade in AI-enabled products and services. The negative effects of legal uncertainty – additional legal information/representation and risk management costs as well as foregone revenue – were found to be worse for companies trading cross-border. 100 Likewise, fragmentation between MS regarding the conditions under which businesses would face compensation claims for damage caused by AI would increase transaction costs especially regarding cross-border trade, entailing significant internal market barriers. 101  

    The significance of these obstacles is reinforced by the fact that legal uncertainty and fragmentation disproportionately affect start-ups and other SMEs, which account for the large majority of companies and the major share of investments in the relevant markets. 102 Smaller businesses lack the necessary financial resources as well as the specialised legal and technical expertise to cope with the identified challenges. Thus, they are more likely to be deterred from rolling out their products or services across borders 103 , thereby foregoing revenue. This makes the EU market overall less robust and innovative, and European companies less competitive.

    MS could to some extent address the problem of legal uncertainty and improve the effectiveness of liability claims by adapting their respective rules. However, they would necessarily do so by building on the approaches of their own, already very different liability regimes. This would lead to further fragmentation, with the negative effects outlined under 2.4. and 3.1. The relevant problems can therefore be addressed effectively only at EU level.

    (b) Added value of EU action

    The conditions for the roll-out and development of AI-technologies in the internal market can be significantly improved by preventing fragmentation and increasing legal certainty through harmonised measures at EU level, compared to the baseline scenario involving possible adaptations of liability rules at national level. 104 The economic study concluded – as a conservative estimate – that targeted harmonisation measures on civil liability for AI would have a positive impact of 5 to 7 % on the production value of relevant cross-border trade as compared to the baseline scenario. 105 This added value would be generated notably through reduced fragmentation and increased legal certainty regarding stakeholders’ liability exposure. This would lower stakeholders’ legal information/representation, internal risk management and compliance costs, facilitate financial planning as well as risk estimates for insurance purposes, and enable companies – in particular SMEs – to explore new markets across borders. 106 Moreover, it is only through EU action that the desired effect of promoting consumer trust in AI-enabled products and services by preventing liability gaps linked to the specific characteristics of AI can be achieved consistently across the internal market. Harmonised adaptations of existing liability rules are needed to prevent AI-induced liability gaps across all MS, ensuring a consistent (minimum) level of protection for all victims (citizens and companies) and consistent incentives to prevent harm and ensure accountability.

    Given the long-standing and entrenched divergences between national tort law traditions, any attempt at a general, broad-brush harmonisation of tort law would meet with strong subsidiarity concerns from MS. The same is not the case for a targeted initiative addressing AI-specific challenges at EU level. The pivotal role of AI as a key enabling technology distinguishes it from other products and services. This is why the Commission and MS have jointly committed to align their AI policy to remove fragmentation and address challenges globally 107 , recognising the need for coordinated measures to create the right conditions for the rollout of AI in the internal market. By putting forward targeted and harmonised adaptations of certain aspects of existing national liability rules, this initiative presents a clear added value, without putting into question MS’ traditional civil liability frameworks.

    (c) Stakeholder views

    A public consultation conducted in 2017 by the EP showed that 90 % of individual stakeholders consider it necessary to regulate developments in both robotics and AI, and 96 % of those favouring a regulatory approach preferred action at EU or international level rather than action at MS level. 108  A statistically representative Eurobarometer of 2019 confirmed that half of EU citizens (50 %) agreed that public policy intervention is needed, whereas only 7 % considered that no specific action is needed, and 16 % considered that industry providers of AI can deal with these issues themselves. 109  

    These results were clearly confirmed by the outcome of the public consultation on the AI White Paper in 2020. Of the respondents, 63 % were in favour of adapting national liability rules for all or for specific AI applications to better ensure proper compensation and a fair allocation of liability. Citizens were particularly supportive, with 72% of respondents calling for some adaptations. While overall support among companies was 45%, it was higher among SMEs, at 60%. This confirms that, as explained before, the identified legal uncertainty and fragmentation affect SMEs disproportionately.

    The need for EU action was again confirmed by the results of the public consultation conducted in the context of this impact assessment. When asked to rank the POs by preference, the baseline option ‘No EU action on liability for AI’ clearly had the lowest average preference score of all responses. Moreover, ca. 60 % of respondents think that EU action to ease the victim’s burden of proof is necessary and justified, while only 25 % disagree.

    The public consultation showed particularly strong support for EU action amongst EU citizens, consumer organisations and NGOs; almost all of them chose ‘No EU action’ as their least preferred PO. Although business stakeholders tend to be more sceptical about EU action on liability for AI, there was support also from the business side: the share of responding companies/business organisations in favour of EU action to ease the victim’s burden of proof was the same as the share opposing it (ca. 40 % each). SMEs are slightly more favourable towards EU action than other types of business stakeholders: 4 of the responding associations representing SME interests supported EU action to ease victims’ burden of proof, while 3 did not; all responding individual SMEs were in favour of EU action to address difficulties of proof linked with AI.

    The comparatively less pronounced support from businesses, relative to all other categories of stakeholders, likely has two main reasons.

    Firstly, experience suggests that business stakeholders, when contributing to a Commission public consultation on a contemplated legislative intervention, tend to respond from the perspective of a future subject of obligations imposed by EU law, focusing on potential burdens, rather than from the perspective of a potential beneficiary. In the public consultation on AI liability, businesses may accordingly have seen themselves mainly as potentially liable persons/entities, and not as potential victims of damage. The experience gathered in the context of the economic study, where business stakeholders provided input on the policy options from the perspective of potentially liable parties and not of potential victims of harm, supports this reading. In line with the above-mentioned differences between the views of businesses overall and of SMEs specifically, the latter are however likely to find it easier to see themselves in the victim’s position. This may be one major reason why respondents of this category had somewhat different views than large companies or business associations.

    Secondly, businesses may tend to focus mostly on how the legislation would apply to them directly, rather than considering the broader effects of policy measures from which they would benefit indirectly. Businesses might tend to disregard the macroeconomic effects of building trust in a technology, while being more aware of the effects of its absence. Therefore, most businesses replying to the consultation seemed to compare a situation of no intervention to a situation of regulation at EU level:

    When considering no legislative intervention, significant shares of business stakeholders acknowledged, on the one hand, a link between a lack of adaptation of national liability rules applicable to AI and legal uncertainty and legal fragmentation. They also expect to incur costs due to these problems, especially when engaging in cross-border trade. On the other hand, many businesses may associate the lack of adaptation of national liability rules with a high probability that, even if companies are responsible for harm caused by AI-enabled products and services, they would not have to compensate that harm at all, because potential victims would not be able to prove their otherwise justified compensation claims.

    By contrast, when considering a situation of regulation at EU level, a large share of business stakeholders may focus more on the fact that harmonised measures to prevent AI-specific compensation gaps would increase the chances of success of liability claims against them. These considerations might be given more weight than the fact that the AI liability initiative also aims to help businesses that have suffered harm to claim compensation, or than the potential benefits in terms of increased legal certainty and reduced fragmentation.

    Authorities from only 5 MS participated in the public consultation, expressing split views on the AI liability initiative. A dedicated workshop was subsequently organised to get a broader picture of Member States’ opinions. However, this aim was not achieved: the very large majority of Member States were not in a position to confirm either clear support or opposition, and seemed instead to adopt a ‘wait-and-see’ stance. A reason may be that they expect the Commission to put forward a legislative proposal and did not wish to pre-empt their position in the legislative negotiations.

    4.Objectives: What is to be achieved?

    [Figure: overview of the objectives of the initiative]

    Specific objectives:

    -Increase legal certainty about the liability risk exposure of business activities involving AI;

    -prevent the emergence of fragmented AI-specific rules across the internal market;

    -prevent a lack of compensation (ensure the same level of protection in cases involving AI).

    General objectives:

    -Functioning of the internal market (reduce obstacles and prevent the emergence of new obstacles);

    -ecosystem of trust → promote the uptake of AI-enabled products and services;

    -promote the roll-out of trustworthy AI in Europe (economic, social, environmental benefits of AI).

    4.1. General objectives

    In line with the overall Commission strategy, the primary general objective of this initiative is to contribute to the roll-out of AI in order to reap its benefits as a set of enabling technologies for the growth of the digital economy and its societal advantages. The contribution of this initiative to this objective is twofold.

    Firstly, it would improve the conditions for the functioning of the internal market: reducing existing obstacles and preventing the emergence of new obstacles to the cross-border trade in AI-enabled products and services. This would have beneficial effects for the European economy and the competitiveness of the European AI-sector.

    Secondly, in line with the Commission human-centric approach and the White Paper on AI, this initiative contributes to the ‘ecosystem of trust’ in AI by ensuring that persons having suffered harm caused with the involvement of AI systems enjoy the same level of protection as persons having suffered harm caused by other technologies and that efficient redress mechanisms fairly distribute loss. 110  Increased trust will lead to increased uptake of AI.

    4.2. Specific objectives

    (a) SO1: Increase legal certainty about the liability risk exposure of business activities involving AI

    - Description: This initiative aims at reducing legal uncertainty regarding companies’ liability risk exposure. 111  Legal certainty is a key concern for investors. 112  

    By clarifying the conditions for assessing and quantifying liability risks, the initiative seeks to create investment stability. By reducing legal information/representation, risk management and litigation costs, companies should have stable and favourable conditions to produce, market and operate AI-enabled products and services. This initiative thus promotes the roll-out of AI-enabled products and services.

    This SO applies to all AI-related business but is particularly relevant for cross-border operations, where the negative effects of legal uncertainty are multiplied by the number of MS into which AI-enabled products and services are exported or in which they are operated.

    - Stakeholders targeted: Legal certainty is meant to benefit all companies active in AI. Increasing legal certainty would be beneficial to a higher degree for start-ups and other SMEs, which perceive uncertainty to a greater extent and are disproportionately affected by the associated costs and burden. 113  

    By reducing legal uncertainty, the initiative aims also to promote a wide range of offers in the relevant insurance market and enable insurance companies to reduce risk premiums. Those taking up AI will be able to insure their liability risk, with the effect of limiting liability exposure to the insurance premiums. Such insurance decreases possible liability costs very significantly. It also makes them predictable and therefore manageable. This benefits in particular start-ups with lower capital cover, because a liability risk is more likely to be an existential threat for such companies than for large companies. 114  Insurance coverage is also advantageous for victims, because they benefit from faster and easier compensation, even in the event that the liable party is insolvent.

    (b) SO2: Prevent the emergence of fragmented AI-specific rules across the internal market

    - Description: The initiative aims to prevent the emergence of different national rules adapting national liability regimes in different ways to the specificities of AI. Such rules would exacerbate the already existing legal heterogeneity and create more fragmentation. They would also slow down the roll-out of AI as a set of enabling technologies. Legal fragmentation adds to the burden linked to legal uncertainty when operating across national borders; the second SO is thus intertwined with the first. Preventing legal fragmentation in the form of diverging adaptations of liability rules at national level alleviates those types of burden that apply specifically when operating across borders. For instance, the initiative would reduce the need for companies to adapt their business models and the technological parameters of AI-enabled products and services in light of diverging proof-related liability rules.

    - Stakeholders targeted: Attaining this SO would primarily benefit companies whose activities are linked to the development or use of AI and that are already operating, or considering expanding, across borders. As with the first SO, start-ups and other SMEs are affected to a markedly higher degree than large companies, given that the former are much more likely to be deterred by legal fragmentation from exploring new geographical markets. This SO also covers insurance companies. More consistent extra-contractual liability rules would make it easier for them to offer competitive coverage of AI-related activities in several MS, notably by enabling consistent risk assessment and insurance product design.

    (c) SO3: Prevent a lack of compensation in cases involving AI to promote societal trust and the uptake of AI-enabled products and services

    - Description: By preventing AI-specific liability gaps, the initiative promotes societal acceptance and consumer trust, both in AI as a new technology and in the justice system. This SO forms part of the EU’s human-centric approach to create an ‘ecosystem of trust’.

    - Stakeholders targeted: This initiative aims at increasing societal trust in AI-technologies 115 and access to an effective justice system. It ensures that civil liability rules function well, are adapted to the specificities of AI, and allow justified claims for compensation of harm to succeed. It will reach this aim by facilitating claims where the specific characteristics of AI (see 2.2.) would otherwise make it difficult or prohibitively costly for victims to meet the burden of proof.

    Increasing societal trust would benefit all companies in the AI-value chain, because strengthening citizens’ trust will contribute to a faster uptake of AI, and create a competitive advantage for European AI companies. Due to the incentive effect of liability rules, preventing liability gaps would also indirectly benefit all citizens through an increased level of protection of health and safety (Article 114(3) TFEU) and the obviation of sources of health risks (Article 168(1) TFEU).

    (d) Consistency with other Commission proposals concerning AI and the PLD revision 116

    This proposal is not only an important element of the overall Commission strategy to promote the roll-out of AI and of its human-centric approach. It will also incentivise compliance with the safety requirements and the requirements designed to safeguard fundamental rights that apply to AI-enabled products and services. 117  Besides its other main objective of achieving fair compensation, liability law acts as a deterrent to unwanted behaviours, in this case non-compliance with safety and fundamental rights requirements (see 1.3.). Potentially liable persons will be more diligent in observing rules meant to prevent harm by AI systems, knowing that, in addition to possible administrative sanctions (e.g. fines, withdrawal of products and suspension of licences), they face the risk of having to pay compensation for damage caused.

    The general and specific objectives of this initiative are also consistent with those of the general PLD revision (PLD impact assessment section 4). The two initiatives are designed to complement one another. In combination, they will create a clear and consistent legal framework ensuring effective protection of victims of AI. Given the PLD’s horizontal, technology-neutral approach, the PLD revision is not aimed at AI in particular but covers all kinds of products. This initiative addresses AI-specific challenges. 118 The PLD revision also pursues additional objectives, such as making the PLD fit for the circular economy.

    (e) Contribution to achieving the Sustainable Development Goals

    By promoting the development and rollout of AI, this initiative indirectly supports several targets across all the Sustainable Development Goals (SDGs). A more direct contribution of this initiative to the SDGs is towards Goal 16, especially target 16.3. This initiative will ensure that victims of harm caused by AI-enabled products/services have effective access to the evidence necessary to build a case in court, so as to enjoy the same chances of a successful claim as victims of harm caused by other technologies. By ensuring to the former the same level of protection as to the latter, this initiative promotes the goal of ‘equal access to justice for all’.

    5.What are the available policy options?

    5.1. What is the baseline from which options are assessed?

    (a)    Description

    The baseline builds on the existing legal framework outlined under 1.2., which consists primarily of MS’ fault-based and strict liability rules as well as the PLD. It is also assumed that the proposed AI Act 119 , the Machinery Products Regulation (‘MPR’) and the General Product Safety Regulation will be adopted. The expected adaptation of national liability rules by a number of MS, likely leading to further fragmentation, is taken into consideration when assessing the impacts of the identified problems under the baseline. A time horizon of 10 years is envisaged for the baseline and for the assessment of the POs against that baseline. 120  

    Interplay with PLD review: The same baseline is used in the impact assessment on the non-AI-specific measures of the PLD review. In particular, the Commission is considering clarifying liability under the PLD in respect of:

    -software/digital elements (including AI) necessary for tangible products to operate,

    -who qualifies as producer for defective updates and upgrades,

    -liability for the failure to provide a software update to keep the product safe and

    -how to ease the burden of proof in the case of complex products. 121  

    These measures could allow, to a greater extent than before, compensation under the PLD for harm to life, health and consumer property caused by defective AI systems related to products.

    The measures for AI-induced damage looked at in the present impact assessment would complement the PLD for those liability claims not covered by the PLD, in particular for:

    -claims based on the fault of a person that led the AI to cause harm (not on product defect);

    -claims against other entities than producers (e.g. users, operators, service providers, cyber attackers);

    -claims for damage caused by AI-enabled services; 

    -claims to compensate damage suffered by businesses;

    -claims to compensate types of damage not covered under the PLD (e.g. immaterial damage not resulting from personal injury or damage to consumer property, damage caused by discrimination, etc.). 

    Beyond this, the objective of the AI White Paper is that persons having suffered harm caused by AI systems have the same level of protection as persons having suffered harm caused by other technologies. Both impact assessments together look at all three pillars of liability (fault-based liability, PLD, strict liability, see 1.2.) in a holistic way. As a result, under all three pillars of liability, victims should not have fewer routes to compensation for harm caused by AI than for harm caused by other technologies.

    For those two reasons, the measures examined in the impact assessment on the PLD review and the POs defined in this impact assessment form a synergetic whole for damage caused by AI-systems.

     (b) Impacts of problems under the baseline scenario

    - Market barriers linked to legal uncertainty and fragmentation (problems 1 and 2): 122  It is likely that at least some MS will adapt their respective liability rules under the baseline scenario. Depending on their respective approaches, this may increase legal certainty to a varying extent. However, legal uncertainty would not be addressed in a systematic and coherent manner across the internal market. In those MS that will not adopt general legislative measures, only ad hoc judicial solutions will emerge, based on a case-by-case interpretation of existing liability rules. Affected businesses, in particular SMEs, are therefore likely to continue facing most of the costs of legal uncertainty described under 2.6. In addition, the national legal or case-law solutions reacting to AI specific challenges would increase the level of legal fragmentation, which would lead to higher costs for companies operating cross-border.

    With the expected roll-out of increasingly autonomous, opaque and complex AI-systems over the coming years, AI will be implemented in ever more sectors of the economy and society. It is therefore highly likely that damages and ensuing liability claims involving AI-systems will become more frequent, and the economic relevance of the liability-related problems will increase under the baseline scenario. It was estimated that the total value of the EU27 market shares affected by those problems will increase almost seven- to ten-fold between 2021 and 2025 (from between EUR 1.739 billion and EUR 4.973 billion to between EUR 10.204 billion and EUR 21.342 billion). 123 The market shares affected by legal uncertainty and fragmentation were determined by a sector-by-sector analysis in the economic study, taking into account the current and forecast take-up rate of AI-enabled technologies, the percentage of companies perceiving liability as a barrier to AI adoption and the ability of the AI applications deployed in the respective sector (now or in the foreseeable future) to cause harm and thus give rise to liability claims. 124 For all of these factors, the study made reasoned assumptions based on data from the aforementioned representative company survey on AI, relevant individual stakeholder feedback as well as a detailed market analysis.

    The size of the market shares affected by legal uncertainty and fragmentation provides an indication of the order of magnitude of the economic relevance of the problems to be addressed by this initiative. This indicator is not intended to provide a precise quantification of the economic impact of the legal uncertainty and fragmentation. Such quantification proved impossible due to a lack of data, as the AI-enabled products and services affected by the initiative are in the early stages of rollout or not yet on the market. It is also acknowledged that the factors and assumptions supporting these estimates are to a certain extent uncertain, as they reflect uncertain projections on future technological and market development. In particular, the assumption that the size of the market shares affected by legal uncertainty and fragmentation will increase proportionally to the increase in the overall size of the AI market depends on a number of as yet unknown factors, such as the extent to which revised safety rules will decrease the instances of harm caused by AI and possible advances in the field of explainable AI. Taking into account that the increasing degree of complexity and behavioural autonomy of AI systems is likely to exacerbate the problems to be addressed by the AI liability initiative, and that MS will over time become increasingly likely to adapt their liability rules unilaterally in the absence of EU action, the assumption of a proportional increase of the market shares affected by legal uncertainty and fragmentation appears conservative. In any case, a maximum of robustness is ensured by allowing for a wide bracket between the high and low estimates, which can accommodate differing market definitions and future developments.

    - Lack of compensation for victims leading to a lack of societal trust and hampering take-up by consumers (problem 3): In those MS that adapt their liability frameworks to the challenges of AI, claims for compensation could become similarly effective as in cases where no AI-technologies are involved. However, in those MS which refrain from adapting their liability rules to AI-specific challenges over the coming years, it will often be prohibitively difficult or even impossible for victims to claim compensation.

    Over the course of the baseline period, the increasing prevalence of AI-systems in more and more sectors of the economy and society will mean more accidents and more liability claims that are likely to attract significant media attention. This will drive consumers’ awareness of liability-related issues. As confirmed by dedicated behavioural research, persisting – and increasingly mediatised – safety concerns will in turn also contribute to a lack of societal trust in AI. 125 Thus, liability-specific issues, if not addressed consistently at EU level, are expected to have a significant negative effect on societal trust and the uptake of AI. 126  That is because a lack of trust means less willingness to engage with the technology. Delayed or reduced uptake of AI-technologies in the internal market will affect all companies in the AI-value chain. It will also lead to reduced or delayed benefits of AI for society.

    The practical consequence of this situation is both unfair and economically inefficient. On the one hand, certain risk-takers that benefit from integrating AI into their activities will be able to externalise to the victims the costs linked with the use of AI. On the other hand, victims – be they natural persons or companies – will be burdened with costs that they have no means to prevent. The liability framework would thus fail to fulfil its functions of preventing unwanted behaviours and ensuring an efficient cost allocation of risks.

    Persistent liability gaps are also likely to frustrate, to some extent, the White Paper objective of ensuring the safety of AI for the benefit of all EU citizens, as liability rules will not incentivise potentially liable persons to prevent accidents.

    (c) The relevance of liability-specific problems given the AI Act, MPR and General Product Safety Regulation

    The legislative proposals (in particular the AI Act and MPR) were also adopted as the Commission’s follow-up to the AI White Paper to address specific problems of AI, and they pursue the same overall objective as the present initiative. 127  

    These proposals introduce requirements to be met before AI systems are placed on the market, to reduce safety risks and breaches of fundamental rights. They also provide for monitoring of how these systems operate in practice and for the reporting of incidents. These provisions reflect that there is no absolute safety, only a level of accepted risk. The impact assessments accompanying the proposed AI Act, MPR and General Product Safety Regulation indeed conclude that these initiatives will reduce accident rates, although quantified estimates of the expected safety gains regarding AI-enabled products or services in particular are not available. It is therefore assumed, albeit only in qualitative terms, that safety levels will increase during the baseline period, an assumption that has been incorporated in this impact assessment.

    Hence, it is certain that these proposals, while reducing the risk of accidents, will not exclude harm caused by AI systems. The AI liability initiative will deal only with those accidents that happen once the AI Act and other relevant safety legislation are adopted and applied.

     

    Once adopted, the requirements that market players (e.g. professional users) have under the AI Act could inform the standard of care that courts will have to assess in liability claims against them. Similarly, the information to be documented/logged pursuant to the AI Act could be used in liability claims, to the extent victims could have access to such evidence under various national laws. However, in line with the staged approach of the Commission’s AI policy, and the complementary nature of safety and liability rules, the provisions of the AI Act and the MPR are not designed to address the problem drivers and problems tackled by this initiative. Their provisions are not designed to clarify how liability rules should apply to AI and to facilitate the individual compensation of victims of harm. This means that, under the baseline scenario, the liability situation will remain subject to the same challenges as explained above. The Commission aims to address these challenges and achieve synergies between these proposals and the envisaged civil liability related measures (see s.1.3. and 2.8 and Annex 7).

    5.2. Description of the policy options

    Legal instrument for implementing the PO: Each of the POs could be implemented either by a binding legislative instrument or by a non-binding recommendation addressed to MS to implement the preferred measures in their national law. The approach for selecting the suitable instrument is set out in Chapter 6. 128

    (a)    PO1: Easing the burden of proof for AI-related claims

    Description: The problems are primarily linked to the difficulty for the victim to prove the necessary facts due to the specific characteristics of certain AI systems. This PO would help the victim to meet their burden of proof so that justified liability claims can succeed.

    PO1 would involve three complementary measures. Two of these – (i) and (ii) – are closely aligned with the approach and duties set by the AI Act. Definitions of the AI Act (AI-system, provider, user) would be taken over to ensure consistency. Neither of these two measures would involve the introduction of any new risk categories; the risk-categorisation of the AI Act would be mirrored. They would be fully compatible with the confidentiality safeguards included in the AI Act. Measure (iii) would not involve any differentiation according to the risk profile of AI systems. None of the measures of PO1 would provide for any substantive obligations regarding the development or use of AI systems.

    PO1 would not define what constitutes ‘fault’ or ‘causality’ in national law. MS would be able to fit the provisions into their national liability regimes without disrupting their legal traditions.

    (I) The disclosure, for liability claims, of information that the user or provider of a high-risk AI system has to record or document pursuant to the AI Act.

    Obtaining information recorded or documented pursuant to the AI Act 129 could help the victim to make a successful liability claim. 130  For instance, it could help to provide evidence for demonstrating fault as one of the conditions of liability to be proven by the victim. This could be done for example by proving that the liable person did not comply with their obligations under the AI Act.

    Example: A highly autonomous cleaning robot causes harm (cf. example under 2.2. and Annex 12). The victim claims compensation from the company operating the cleaning robot. For the purpose of the AI Act, this company is the ‘user’ of the robot’s AI systems. Access to relevant parts of the technical documentation (Annex IV of the AI Act), e.g. the instructions of use, would show the victim what the user was required to do to prevent such accidents. For example, the user may be required to ensure human oversight. The user might also be required to ensure that the AI systems are only used in certain operational environments (e.g. geographical and meteorological conditions, etc.). This information can help the victim to establish that the liable person has not complied with such requirements and therefore committed a fault.

    According to this measure, the victim could request from the possibly liable party the disclosure of such information concerning the victim’s claim. The competent national court could order such disclosure, subject to stringent safeguards to ensure proportionality and to protect the legitimate interests of all parties concerned, for instance confidential information, intellectual property rights and trade secrets.

    If the possibly liable person does not disclose the information, the victim would benefit from a presumption that the facts which the withheld information might have proven did actually occur. Such a presumption would ease the victim’s burden of proof.

    The liable person would have the opportunity to rebut the presumption, i.e. to prove that the presumed facts did not in reality occur. That person could also avoid liability by refuting any of the liability conditions set by the applicable liability rules, for instance by demonstrating that the relevant AI system was carefully selected and used, taking into account the requirements of the AI Act.

    Example: In the cleaning robot example, the victim requests access to the data recorded during the operation of the cleaning robot in order to prove that the robot’s sensors were not properly cleaned at the time of the accident. The competent court can order necessary safeguard measures to ensure that all legitimate confidentiality interests are protected. If the company responsible for operating the cleaning robot (= the possibly liable party) nevertheless does not disclose this information, the victim could claim and the court would presume for the purposes of the victim’s liability claim that the sensors were not properly cleaned. This may constitute a fault of the liable party. However, the liable party could rebut the presumption by showing that the sensors were actually clean. It could also avoid liability in other ways, e.g. by showing that there is no causal link between the presumed failure to clean the sensors and the damage.

    This measure would not only help the victim. Providers and users of AI systems would be given an additional incentive to comply with the documentation/logging requirements of the AI Act. If they had complied, they could disclose the information (subject to necessary confidentiality safeguards) and would be able to demonstrate that they acted in accordance with legal requirements. This additional incentive would support the compliance with the AI Act. Thereby this measure would achieve synergies with the AI Act.

    (II) If the victim shows that the liable person did not comply with AI Act requirements designed to prevent damage, the second measure could establish a rebuttable presumption that this non-compliance caused the damage. 

    This measure would further ease the victim’s burden of proof as regards the causal link between fault and damage. 131  The victim could demonstrate that the liable person did not comply with AI Act requirements. In that case, the victim could claim, and the court would presume for the purposes of the victim’s liability claim, that such non-compliance caused the harm. Again, such a presumption would be rebuttable. This means that the possibly liable person could avoid liability by proving that their non-compliance with the AI Act requirements did not cause the damage.

    Example: Under the AI Act the user of a high-risk AI system, like the company operating an AI-enabled highly autonomous cleaning robot, is obliged to use such systems in accordance with the instructions of use accompanying the system. These instructions of use could specify, for instance:

    -the type of input data the system should be exposed to, e.g. meteorological or lighting conditions perceived through the cleaning robot’s sensors;

    -circumstances which may lead to risks to health, safety or fundamental rights, e.g. the type of terrain or surface the cleaning robot is capable of moving on safely without causing harm.

    The victim may be able to prove that the company operating the AI-enabled highly autonomous cleaning robot (= the possibly liable party) did not comply with such instructions, e.g. that the robot’s AI-enabled perception system was used in an environment where it was exposed to inadequate input data (e.g. poor lighting conditions) or under circumstances which may lead to health risks (e.g. on uneven terrain frequented by many human passers-by). Access to information in accordance with measure (i) could help the victim to prove such facts. The competent court would then presume that it was the liable party’s non-compliance with its obligations that caused the damage.

    The possibly liable party could rebut this presumption by demonstrating that there is no causal link between the non-compliance and the damage, e.g. by showing that someone pushed the cleaning robot, causing the accident. That party could also avoid liability by showing that they applied all the care that could reasonably be expected of them under the relevant circumstances, which means that they were not at fault.

    In line with the AI Act, this measure would be primarily relevant for claims against providers and users of high-risk AI-systems. It would give them an additional incentive to comply with the AI Act, because in case of non-compliance it would be presumed that their behaviour caused the damage.

    However, this measure is not sufficient to ensure that victims always have the same level of protection when suffering harm caused by AI:

    -Firstly, the information needed by the victim to prove non-compliance may for instance not be recorded. In such cases, the victim may not be able to establish the liable party’s non-compliance with the AI Act (and thus to trigger the presumption). For example, the victim may not be able to know what kind of input data the AI system was exposed to or what circumstances of use the AI system was designed for.

    -Secondly, the presumption can apply only if a ‘high risk’ AI-system as defined in the AI Act is at stake. It would therefore not apply to AI systems, like for instance autonomous drones, which are not covered by the AI Act requirements, but can cause harm.

    -Thirdly, the presumption would apply only if the requirement that the liable party did not comply with was intended to prevent harm of the type suffered by the victim. For example, if the victim seeks compensation for damage caused by discrimination, but can only establish non-compliance with a requirement meant to prevent physical harm, this would not trigger the presumption.

    (III) Targeted alleviation of the exercise of the burden of proof, so that the victim does not have to explain the inner workings of the AI. 

    As explained under 2.2., the specific characteristics of the AI-system could make the victim’s burden of proof prohibitively difficult or even impossible to meet. There will be cases when the victim will not be able to identify or explain how something that the liable person did (or failed to do) was ‘translated’ by the AI into the output that caused the damage.

    Example: In the cleaning robot example (2.2.), several potentially liable persons are responsible for different input parameters. Such alternative causes for the harm caused to the victim could lie, for example, in a failure by a human supervisor to react to a distress signal, flawed training or testing data, exposure to input data not in conformity with the instructions of use, failure to provide or install a necessary update or an external attack (e.g. jamming, spoofing, adversarial machine learning) 132 . Each of these inputs could be the reason of the damage caused by the highly autonomous AI system. However, the opacity of that system can make it prohibitively difficult or even impossible for the victim to prove which of these persons effectively caused the damage.

    The victim may be able to establish facts such as that the liable party used the AI system in circumstances or conditions that were inappropriate or failed to take instructions of use into account. If the victim can prove such facts, the court can relieve the victim from the obligation to prove that these facts led the AI system to arrive at the specific output that caused the damage.

    As with the other measures, the potentially liable persons could prove an alternative cause for the damage or show that they acted diligently and thus avoid liability.

    The existing legal framework offers a limited set of alternative tools to help victims overcome these proof-related difficulties:

    -Rebuttable presumptions: If the victim meets a reduced burden of proof, e.g. by demonstrating the plausibility or likelihood of certain facts, it is presumed that those facts occurred. In order to avoid having to pay compensation, it is then for the liable party to demonstrate that these facts did in reality not occur or that other facts, for which the liable party is not responsible, occurred. This tool leaves the basic distribution of the burden of proof intact, but makes it easier for the victim to discharge that burden.

    -Shifts/reversals of the burden of proof: This tool departs from the general principle that the party invoking a legal provision has to prove that the conditions for the application of that provision are fulfilled. The victim hence no longer bears the burden of proof; instead, it is for the liable person to prove that one or all conditions of liability are not fulfilled. Such a solution was proposed by the European Parliament for fault-based liability. According to that resolution, the operator of an AI system would by default be liable for harm or damage caused by that system, unless they can prove that the harm or damage was caused without their fault.

    -Irrebuttable presumptions: This tool goes even further than a reversal of the burden of proof, as it posits certain conditions or facts as given, without allowing for the possibility of proving that the presumption is wrong.

    All of these possible approaches have been analysed when designing measure (iii), giving particular consideration to the suggestions put forward by the European Parliament. For the following reasons, only the tool of a rebuttable presumption has been retained in the PO for detailed assessment:

    -MS’ civil laws are based on the principle that the victim has to prove all the facts necessary to support their claim. A rebuttable presumption would leave this fundamental principle in place. This approach would make it easier for MS to integrate the harmonised AI-specific adaptations of liability rules into their respective legal systems without friction. It also preserves a maximum of consistency between the rules applicable in cases involving AI and the general rules applying in other cases.

    -The AI liability initiative seeks to help victims meet their burden in the most targeted, i.e. proportionate manner possible. A rebuttable presumption is the least interfering way of alleviating the burden of proof.

    -A reversal of the burden of proof regarding fault and causation, as suggested by the European Parliament, or, to an even greater extent, an irrebuttable presumption represent a more far-reaching exception from MS’ general civil liability rules, which puts into doubt their political feasibility. 

    -Such approaches would create a situation where victims of damage caused with the involvement of AI would systematically be in a more favourable situation than victims of damage caused by other technologies. They would thus overshoot the Commission’s policy objective defined in the AI White Paper, and risk dis-incentivising the uptake of AI.

    -The public consultation showed strong opposition from business stakeholders against a shift of the burden of proof: 63 % of these respondents (54 out of 86) disagreed with such an option, and only 24.4 % (21 out of 86) agreed. Taking into account the initiative’s objective to promote the rollout of AI, this result militates in favour of alleviating the burden of proof through less interfering measures, that is to say in favour of a rebuttable presumption instead of a reversal of the burden of proof or an irrebuttable presumption.

    The rebuttable presumption implementing measure (iii) is a procedural tool, designed to help in court proceedings, that would intervene only in the context of existing fault-based liability claims. 133  If the victim makes it sufficiently plausible that an action or omission of the potentially liable party led the AI system to arrive at the damaging output, the court could presume that this was indeed the case. The potentially liable party would still have the possibility to rebut such a presumption. The measure would not involve the creation of new risk categories, substantive obligations or liabilities.
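    Purely for illustration, the sketch below expresses the decision logic of the rebuttable presumptions described under measures (i), (ii) and (iii) in simplified code form. It is not part of the proposal and does not reflect any legal drafting; the names and conditions used are hypothetical simplifications of the mechanisms described above.

from dataclasses import dataclass

@dataclass
class Claim:
    # Measure (i): disclosure of information recorded/documented under the AI Act
    disclosure_requested: bool
    disclosure_provided: bool
    # Measure (ii): proven breach of an AI Act requirement designed to prevent
    # the type of harm suffered, in relation to a high-risk AI system
    non_compliance_proven: bool
    requirement_protects_harm_type: bool
    high_risk_system: bool
    # Measure (iii): the defendant's act or omission plausibly led the AI system
    # to produce the damaging output
    causal_link_plausible: bool
    # Any presumption can be rebutted by the potentially liable party
    rebutted_by_defendant: bool

def facts_presumed(c: Claim) -> bool:
    """Measure (i): if requested information is withheld, the facts it might have
    proven are presumed, unless the potentially liable party rebuts the presumption."""
    return c.disclosure_requested and not c.disclosure_provided and not c.rebutted_by_defendant

def causation_presumed_from_non_compliance(c: Claim) -> bool:
    """Measure (ii): proven non-compliance with a protective AI Act requirement
    (high-risk systems) leads to a rebuttable presumption of causation."""
    return (c.non_compliance_proven
            and c.requirement_protects_harm_type
            and c.high_risk_system
            and not c.rebutted_by_defendant)

def causal_link_through_ai_presumed(c: Claim) -> bool:
    """Measure (iii): the victim need not explain the AI system's inner workings
    if the defendant's act or omission plausibly led to the damaging output."""
    return c.causal_link_plausible and not c.rebutted_by_defendant

# Example: the operator of the cleaning robot withholds the recorded sensor data.
claim = Claim(disclosure_requested=True, disclosure_provided=False,
              non_compliance_proven=False, requirement_protects_harm_type=False,
              high_risk_system=True, causal_link_plausible=True,
              rebutted_by_defendant=False)
print(facts_presumed(claim))                    # True: the withheld facts are presumed
print(causal_link_through_ai_presumed(claim))   # True, unless the operator rebuts it

    In each case, the presumption merely shifts the practical burden of producing counter-evidence to the potentially liable party; the substantive liability conditions of national law remain untouched.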

    This measure would not be relevant where AI systems are used to provide advice or information to human decision-makers (e.g. medical analysis AI informing the diagnosis and treatment decisions of human physicians). In such cases the AI system is not interposed in the causal chain between the relevant human action and the damage. It will thus not be necessary for the victim to establish what triggered a specific output of the AI system.

    Scope of PO1: Which AI systems would be concerned?

    Measures (i), (ii) and (iii) would apply where an AI system as defined in the AI Act has caused harm. While the specific challenges of AI to liability rules are driven by the peculiar characteristics of certain AI systems, it is appropriate to define AI systems by reference to the AI Act’s broad concept of AI rather than limiting the application of the measures to a subset of AI systems. As measures (i) and (ii) are directly linked to the AI Act, their scope must in any case be aligned with it, so they would apply to high-risk AI systems as defined by the AI Act. In addition, victims of harm should not have to demonstrate that an AI system has specific characteristics or belongs to a specific category in order to benefit from the envisaged measures. The application of PO1, including measure (iii), should therefore not hinge on such criteria. Otherwise the objectives of preventing a lack of compensation and ensuring legal certainty would be in doubt.

    However, the impact of the measures will vary depending on the specific characteristics of AI systems. Deterministic, transparent and relatively simple AI systems entail fewer difficulties of proof, so for those systems the possibility of rebutting the presumption will be particularly relevant in practice. Conversely, the beneficial effect of the rebuttable presumption alleviating the victim’s burden of proof will be particularly relevant for those AI systems which show the specific characteristics of opacity, autonomous behaviour, complexity or limited predictability.

    Addressing legal uncertainty and legal fragmentation and related internal market obstacles (2.2., 2.3., 2.6.): PO1 would provide uniform rules clarifying how national burden of proof rules apply to the specific challenges of AI. This would avoid national courts having to employ ad-hoc solutions in each case before them in order to achieve equitable outcomes and adequate victim compensation. It would also prevent MS from seeing a need to adapt the relevant burden of proof rules, each based on their own different approaches, in reaction to the characteristics of AI that challenge the application of those rules.

    PO1 will provide the legal certainty needed by businesses – in particular SMEs – to foresee possible liability risks and insure themselves against them. Accordingly, they will have the investment stability to roll out AI-enabled products and services in the internal market. Clear conditions for meeting the burden of proof would also facilitate ex-ante risk assessments by insurers offering coverage of civil liability for damage caused by AI.

    Addressing the possible lack of compensation for victims leading to a lack of societal trust and hampering take-up by consumers (2.5., 2.6.): Measures designed to alleviate the victim’s burden of proof allow victims to bring successful liability claims and obtain compensation in cases of damage caused by AI. By ensuring that victims are not less protected where AI is involved than victims of traditional technologies, PO1 contributes to the creation of an ecosystem of trust for AI. It thereby promotes the uptake of AI-enabled products and services in the EU 134 . All companies in the AI value chain would benefit from increased demand.

    Incentive effects: Finally, PO1 would have the ancillary purpose of incentivising compliance with the AI Act and damage prevention. It will thus increase the level of protection for all persons affected by the use of AI.

    In the public consultation, responding EU citizens, consumer organisations, academic/research institutions and NGOs in their very large majority supported the measures envisaged under PO1, namely:

    -measures regarding the disclosure of information, support by 87% among EU citizens (80 out of 92) and support by 94% among those organisations (30 out of 32), with none of the latter disagreeing;

    - inferring facts from the refusal to disclose information, support by 84% among EU citizens (77 out of 92) while only 13% disagreed (12 out of 92), and support by 68% amongst those organisations (21 out of 31);

    - presuming causality in the case of non-compliance by AI providers with their safety obligations, support by 90% among EU citizens (82 out of 91) and support by 81% among those organisations (26 out of 32) with only one disagreeing;

    - presuming causality in the case of non-compliance by AI users with their safety obligations, support by 59% among EU citizens (54 out of 92) while only 9% disagreed (8 out of 92) and support by 65% among those organisations (20 out of 31) while 23% (7 out of 31) disagreed;

    - an alleviation of the burden of proof regarding the functioning of AI systems, support by 88% among EU citizens (81 out of 92) and support by 91% among those organisations (29 out of 32) with only one disagreeing.

    Among business stakeholders the responses on the disclosure of information were relatively evenly split (33% or 28 out of 86 agreed, while 31% or 27 out of 86 disagreed).

    A majority (or, in the case of presuming causality, a plurality) was against inferring facts from the refusal to disclose information (63% or 54 out of 86, while 22% or 19 out of 86 supported such a presumption), presuming causality (41% or 36 out of 86, while 30% or 26 out of 86 supported such a presumption) or shifting the burden of proof (63% or 54 out of 86, while 24% or 21 out of 86 supported such a shift). The fact that 63 % of business stakeholders disagreed was one of the reasons why the policy options do not include a reversal of the burden of proof, but only targeted alleviations in the form of rebuttable presumptions. While 29 % (25 out of 86) of business stakeholders disagreed, 35% (30 out of 86) agreed to apply such a presumption vis-à-vis the user of the AI-system.

    Responding individual SMEs approved of those measures. The views of business associations representing (primarily) SMEs were in most cases roughly evenly split.

    However, the interest of potentially liable parties in preserving compensation gaps, to the detriment of victims who would then have to bear the cost of the damage they have suffered, should not define public policy. This would be against the principles of justice and fair compensation that underlie all national liability laws. It would also be against the policy objectives of the AI White Paper to ensure an equal – and technology-agnostic – level of protection for victims of harm.

    How does this PO interact with the measures envisaged for the PLD revision?

    In line with the AI White Paper objective, all PO need to ensure that victims having suffered harm caused by AI systems enjoy the same level of protection as those having suffered harm caused by other technologies. PO1 will therefore introduce measures to ensure that victims of AI can bring effective claims based on fault, despite the specific characteristics of AI that make it very difficult to meet the burden of proof.

    In claims against the producer, the PLD revision will ease the burden of proof for damage caused by defects in complex products, including AI-enabled products, through rules on when producers are obliged to disclose technical information and through rebuttable presumptions. Moreover, the development risk defence, which exempts producers from liability when a product’s defective nature was not scientifically discoverable at the moment it was put into circulation, would be adapted to take account of the dynamic nature of products with digital elements, such as AI-enabled robots. If the defective nature becomes scientifically discoverable while the producer retains control of the product’s safety features (e.g. through software updates), the defence would not be available.

    PO1 uses very similar tools to ensure that a fault-based claim for compensation for harm induced by AI is as effective as for harm induced by other technologies. Compared to the PLD, PO1 covers additional forms of damage, additional victims and additional liable persons as well as different sources of damage. The alleviations of the burden of proof under the two initiatives are thus complementary, as the following examples illustrate:

    -the producer’s obligation to give the victim access to information for claims under the PLD is complemented by the disclosure of information, under this initiative on liability for AI, in claims based on the user’s fault or in claims against the producer for damage suffered by businesses, 

    -presumptions of defectiveness and causality under the PLD are complemented by a targeted presumption of causality, for the purposes of other civil liability claims under this initiative, when the liable person does not comply with the AI Act.

    In combination, these measures will ensure that victims can effectively use all available routes to compensation. They will thereby enjoy the same level of protection as victims of damage caused by other technologies. The measures will also provide the right incentives for all entities involved in the ‘life cycle’ of AI-enabled products and services to comply with obligations meant to prevent harm from occurring. In this way, liability risks are distributed over the AI value chain to the appropriate liable person.

    (b)    PO2: PO1 + strict liability for AI use-cases with a special risk-profile

    PO2 combines the alleviations of the burden of proof envisaged under PO1 with a harmonised risk-based strict liability regime applicable to professional operators/users of certain highly autonomous AI-enabled products or services with a special risk profile.

    In order to ensure, in line with the AI White Paper objective, that victims have equally effective compensation paths for damage caused by AI and by other technologies, all pillars of liability need to be considered. All MS currently have, in addition to fault-based liability and the PLD regime, strict liability regimes. While these vary in scope and conditions, they usually cover risks of harm to high-ranking legal interests such as life, health and property. These regimes assign liability to the person benefitting from the source of risk (usually owners, users or operators). In strict liability claims, the victim only has to prove that the risk ‘materialised’. As a consequence, the liable person determined by law has to compensate the damage. Strict liability is thus an easier path to compensation for victims.  135

    Following a targeted and risk-based approach, PO2 is based on the consideration that the operator/user is the one who decides on, and benefits from, the use of AI-enabled products or the provision of AI-enabled services in practice. 136 The operation of certain AI-enabled products and services creates a risk for the general public and high-ranking legal interests. Innocent bystanders, for instance, who do not choose to expose themselves to such risks and are mostly not even aware of them, should in such cases have an uncomplicated way of obtaining compensation for the harm suffered. Strict liability could provide that path.

    Such a harmonising measure would be limited to the strict liability of persons operating AI-enabled products and services as part of their professional activity. This personal scope is justified by the fact that professionals are more likely to be able to control the operation of such products and services as part of their business activities, while consumers typically have little or no influence on how highly autonomous AI systems function and therefore no means to control the specific risk presented by such AI systems. Moreover, this scope is consistent with the approach of the AI Act and existing and future safety-relevant legislation, which impose obligations on professionals only.

    Limiting the personal scope to professional users/operators is further justified by the fact that consumers typically have no influence on how highly autonomous AI systems function and therefore no means to control the specific risk presented by such AI systems. For this reason, it is doubtful whether the operation of AI-enabled products with a potential to cause significant harm to legal interests of high value will be entrusted to consumers at all in the future, in particular in scenarios where such products could cause harm to unwitting third parties. Indeed, the AI Act shows that professional users have a key role in controlling the risks linked to the operation of AI systems. Should cases emerge where consumers play the role of a user/operator of AI technologies with a ‘strict liability profile’, these cases are highly likely to be already covered by strict liability in almost all MS, as exemplified by the use-case of autonomous vehicles. The risk that the harmonisation of strict liability for professional users/operators would create situations where the level of protection varies arbitrarily, from the perspective of the victim, compared to cases where the same type of AI system was used by a consumer, is therefore very small.

    In line with a common approach in MS 137 , this limited strict liability regime could be coupled with an obligation for liable persons to insure themselves. This corresponds, for example, to the present situation for car accidents, as harmonised by the Motor Insurance Directive. In addition to the measures outlined for PO1 (measures i-iii), PO2 would thus comprise:

    (I)A harmonised strict liability regime applicable to the person exercising control over the operation of the relevant highly autonomous AI technology (operator/user) in the course of their professional activity. The professional in whose name the relevant product is registered or who is registered as service provider could be presumed to be the operator. Following the approach of the AI Act, the relevant products and services could be defined by a list in a technical Annex, complemented by the possibility for the Commission to update that list by delegated acts in line with technological and market developments. The AI use-cases included in the scope of the strict liability regime would be identified on the basis of criteria consistent with MS’ legal traditions, such as the ability to cause frequent or severe harm, in particular to high-ranking legal interests like life, health or property, and the likelihood that the public at large or at least unwitting third parties are exposed to such risks. These criteria could, for instance, be met by technologies like autonomous drones or vehicles. In contrast, strict liability would likely not be appropriate, for example, for merely stationary robots operating exclusively in confined environments and presenting risks to only a narrow, pre-defined range of people.

    In addition, only the grounds for excluding or reducing strict liability would be harmonised (e.g. force majeure and own fault), while other modalities would be left to national law.

    (II)As a cumulative sub-option, an insurance obligation for the strictly liable persons, combined with a right for the victim to make a direct claim against the liable person’s insurance undertaking. An insurer who has paid compensation would be entitled to claim it back from any other party that could be liable for the damage under other compensation paths (e.g. PLD or fault-based).

    Because of the high interests at stake and the victim’s particularly vulnerable position, strict liability ensures that compensation is promptly and easily granted to the victim. However, this does not imply that somebody else might not be at fault for the damage or, in the case of products, that the AI-enabled product was not defective. It is therefore relevant for the strictly liable person and that person’s insurer to have effective liability claims for recovering the compensation paid to the victim from the person ultimately responsible for the damage.

    The envisaged harmonised measures would not create additional layers of regulation. Where MS already provide for strict liability in the cases covered by the harmonising measure, mainly because that type of activity or those goods are already subject to strict liability (e.g. cars in almost all MS), those MS would in practice already have the measures envisaged under PO2 in place. Where EU law already provides for corresponding insurance requirements (e.g. mandatory insurance for motor vehicles or for drones above 20 kg), those requirements would continue to apply. They would neither be replaced by, nor have superimposed on them, an additional insurance requirement for certain AI systems.

    Approach to addressing legal uncertainty and legal fragmentation and related internal market obstacles (2.3. + 2.4.): By specifying at EU level the AI-enabled technologies subject to strict operator liability, PO2 aims to increase legal certainty and the predictability of liability risks for potentially liable parties. It also prevents the emergence of divergent AI-specific strict liability regimes at national level. PO2 relies on a similar logic as PO1, namely a targeted and risk-based approach. However, it reaches further into MS’ liability systems and into the risk distribution between stakeholders, because of the special risk profile of the AI technologies subject to strict liability. The possibility for operators to cover their liability risk with (mandatory or voluntary) insurance, thus reducing their risk exposure to the annual premium, is an important component of this PO.

    Approach to addressing the possible lack of compensation for victims leading to a lack of societal trust and hampering take-up by consumers (2.5.): Strict liability of the operator/user addresses the proof-related challenges posed by AI. The victim only has to prove that a certain risk materialised in the sphere of the liable person, instead of having to establish misconduct. 138  The sub-option of mandatory insurance would relieve the victim of the liable party’s insolvency risk. These features of the PO would increase societal trust in AI, even where it has a specific risk profile. It would thereby promote its roll-out in the internal market.

    In the open public consultation NGOs, academic/research institutions, consumer organisations and EU citizens supported either full or minimum harmonisation of strict liability. In particular, 77%, or 68 out of 88, of EU citizens favoured a ‘full harmonisation’ approach to strict liability, while the 7 responding consumer organisations supported only a minimum harmonisation approach. Business stakeholders tend to oppose the harmonisation of strict liability. Opposition was stronger regarding a minimum harmonisation approach (70%, or 59 out of 84, disagreed, while only 14%, or 12 out of 84, supported) than a full harmonisation approach (42%, or 35 out of 84, disagreed, while 30%, or 28 out of 84, supported). Interestingly, almost all responding individual SMEs (9) approved of this PO, and business associations representing (primarily) SMEs were evenly split regarding (both the minimum and full) harmonisation of strict liability.

    How does this PO interact with the measures under the PLD revision? The existing liability system offers victims different avenues to seek compensation from various liable persons, under different instruments and for different types of harm (defect-based liability of producers under the PLD; national fault-based liability of any ‘wrongdoer’; national strict liability for certain specified risks). It is therefore necessary to work on all the existing avenues of compensation to ensure the same level of victim protection. These avenues attach liability to different grounds and the proposed measures will ensure coherence.

    The preferred option for the PLD revision would extend strict liability to the person who produces software (including AI) or provides digital services necessary to make a product work. PO2 would harmonise strict liability of the person who operates products equipped with AI or provides services equipped with AI under particular circumstances of risk.

    It can, of course, happen that the same person is linked to a damage in different capacities, and therefore fulfils the conditions of liability under different instruments. Given that this is a well-known and accepted situation under the existing national liability systems and an expression of the complementary nature of the pillars of liability, it is not expected to negatively affect innovation or uptake of AI-enabled products compared to the baseline scenario. For example, a producer of autonomous vehicles providing transport services with a fleet of such vehicles would be liable under the PLD for defects of the vehicles in their capacity as producer and under the harmonised strict liability regime for AI, in their capacity as user/operator.

    (c)    PO3: PO1 + targeted review regarding strict liability and mandatory insurance measures

    PO3 follows a staged approach, consisting in the implementation of PO1 in a first step, followed in a second step by a targeted review and re-assessment of the need for harmonising strict liability of the operator/user and mandatory insurance elements of PO2. Going beyond an evaluation of the already adopted measures, the targeted review would involve a dedicated mechanism designed to inform the future policy decisions on those additional elements. While the usual, backward-looking evaluation of the effectiveness, efficiency and relevance of already enacted EU rules would be done also under PO1 in line with general better regulation rules, the targeted review under PO3 takes a different, forward-looking perspective and provides for additional substantial steps to be taken by the Commission. Although PO3 would not (yet) implement all elements suggested by the European Parliament’s legislative own-initiative resolution, an EU instrument including the targeted review mechanism would nevertheless reflect the approach suggested by the Parliament, that is to say a combination of measures to ease the burden of proof for fault-based claims with a harmonised strict liability regime for certain specific AI applications. It would also demonstrate that the Commission is ready to act on the opinions expressed by non-business stakeholders in the public consultation, who showed clear support for strict liability. The targeted review provides a framework and prepares the ground for the Commission to take a policy decision specifically on the harmonisation of strict liability and mandatory insurance, at a later point in time when all the information for that decision will be available.

    The review would notably cover the following points:

    -the market developments regarding the rollout of products and services enabled by AI-systems with the characteristics that challenge existing (primarily fault-based) liability rules: a high degree of behavioural autonomy, opacity (complexity + lack of transparency), continuous adaptation and limited predictability;

    -the risk-profile and operating environment of the products and services, in particular whether they can cause harm to high-ranking legal interests of the public at large;

    -the incidence rate of accidents caused by AI systems, in particular those involving harm to high-ranking legal interests of unwitting third parties in the course of the operation of the relevant products or provision of the relevant services;

    -any AI-specific liability gaps regarding damage caused by the identified products and services, despite the alleviations of the burden of proof implemented under PO1.

    Aside from assessing to what extent PO1, in combination with the regulatory measures taken to prevent highly autonomous AI-enabled technologies from causing harm (e.g. the AI Act, the MPR, the General Product Safety Regulation, future measures under the ‘old approach’ safety legislation), achieves the Commission’s policy objectives effectively, the review will allow the Commission to conclude whether the strict liability and mandatory insurance elements of PO2 are needed as well.

    The staged approach would also allow the European insurance industry to acquire sufficient actuarial data on the realisation of AI-specific liability risks, a point consistently raised by insurers in the consultation activities. PO3 would enable the Commission to take into account how the insurance market for AI has evolved when re-assessing the need for AI-specific mandatory insurance during the targeted review. Future regulatory developments at EU or national level could also be taken into consideration, in particular regarding AI-enabled products and services with a relevant risk profile (e.g. autonomous vehicles and unmanned aircraft). For measures to ensure that the targeted review will be based on a sufficient evidentiary basis, see Section 9 and Annex 12 on monitoring and evaluation.

    How does this PO interact with the PLD revision? At the first stage, the interplay with the PLD revision is the same as under PO1. In case of the introduction of strict liability as the outcome of the targeted review, the interplay would be as described under PO2.

    Minimum harmonisation approach: All of the PO would follow a minimum harmonisation approach. This means that MS could maintain or – outside of the scope of the PLD – introduce rules that are more favourable for victims. For example, MS could have more general reversals of the burden of proof regarding fault or causation. They could also maintain general strict liability regimes (e.g. for ‘dangerous things or activities’) applicable also to damage caused with the involvement of AI systems.

    In contrast, full harmonisation could lead to a lower level of protection in situations where Member States already provide for more general strict liability regimes in their national law which could also apply to AI technology. In the light of the long-standing and entrenched divergences between national tort law traditions, an attempt at full harmonisation that blocked the maintenance of existing further-reaching national rules would meet with strong opposition from MS. It would also be opposed by certain stakeholders, as the position of consumer associations participating in the public consultation shows. The same is not the case for an initiative based on minimum harmonisation. Such an approach would ensure that the new rules can be integrated without friction into the existing civil liability framework within each MS.

    This approach based on a realistic assessment of political feasibility is also reflected in the draft regulation proposed by the EP, which, after political discussion on this subject, ultimately opted for a minimum harmonisation approach.

    It is acknowledged that minimum harmonisation, while protecting victims, does not create an entirely level playing field. This is reflected in the reactions of business stakeholders in the public consultation. While business stakeholders in their majority resist strict liability overall, their opposition to minimum harmonisation of strict liability is stronger than to full harmonisation.

    However, for considerations of political feasibility within both branches of the European legislator, subsidiarity and proportionality, the measures on AI liability should interfere with the – very heterogeneous – national tort law traditions only to the extent unavoidable.

    5.3. Measures discarded at an early stage

    (a)     Harmonisation of risk-based liability for damage caused by all AI-enabled products or services, irrespective of their risk-profile, possibly coupled with mandatory insurance

    A horizontal strict liability regime applicable to users/operators of AI-applications irrespective of the risk linked to their activities would be neither proportionate nor suitable to achieve the objectives of this initiative. While such an approach may increase legal certainty and prevent fragmentation between possible future AI-specific national liability rules to the highest extent, possible benefits for companies seeking to invest in AI would be negated by the far-reaching shift of liability risks and associated compliance costs. 139 This discarded PO would entail high adaptation costs for companies. Moreover, it is uncertain whether the insurance industry would be able to cover the new strict liability risks across the broad range of AI-enabled technologies. 140  

    The combined effects would likely dis-incentivise investments in such technologies, stifling innovation and slowing down the take-up of AI by European businesses, 141 having an overall negative effect on cross-border trade in AI-enabled products and services. 142  

    This discarded PO would go beyond the objective to ensure that persons harmed by AI enjoy a similar level of protection as persons having suffered harm caused by other technologies. Prescribing strict liability whenever AI-systems are involved would, in many cases, treat victims of AI-induced damage better than victims of traditional technologies. 143  

    (b)     Harmonisation of the types of – in particular immaterial – harm giving rise to civil liability claims when caused by AI

    In its own-initiative resolution, the EP requested the Commission to analyse in depth the legal traditions in all MS and their existing national laws that grant compensation for immaterial harm, in order to evaluate if the inclusion of immaterial harm in AI-specific legislative acts is necessary and if it contradicts the existing Union legal framework or undermines the national law of the MS. The Commission’s analysis has shown that the legal situation in MS regarding the compensability of non-material harm, as well as pure economic loss, is fragmented. 144 The matter is marked by differing long-standing legal traditions. It is not an AI-specific question but rather a horizontal one.

    Creating a situation whereby victims might be able to claim compensation for a certain type of harm, for instance immaterial harm, only if it was caused by AI would not be in line with the Parliament’s and the Commission’s common objective to ensure that victims have the same level of protection. Such an approach would give victims of harm caused by AI-systems a better protection compared to harm caused by other technologies.

    It is therefore appropriate not to include a harmonisation of the compensable harm in the envisaged AI-specific instrument on civil liability. By ensuring the effectiveness of existing liability rules, the envisaged policy measures will nevertheless help victims to claim compensation also for other types of harm such as immaterial harm or pure economic loss, to the extent that such harm is compensable under national liability rules.

    6.What are the impacts of the policy options?

    The impacts of the policy options are, for the most part, economic in nature. However, the initiative will also have relevant social impacts, mainly related to victim compensation and incentivising the prevention of harm. A deeper assessment of impacts was focused on the most significant ones, identified based on stakeholders’ views.  145

    The quantification of data was to a large extent not possible, due to the novelty of the technology and the scarcity of products and cases of damage or compensation. Mitigation actions were taken, leading to use-case modelling of insurance costs, macro-economic approaches to estimate the impact of legal fragmentation, as well as a micro-economic approach for a specific use case. The available quantified estimates (production value affected by the obstacles of legal uncertainty and fragmentation of liability rules for the use-cases, overall market value affected by these obstacles, costs of claiming compensation) were then used to extrapolate quantified estimates of the impacts of the policy options on the costs linked to the burden of proof, on the EU AI market value and on the incremental change in insurance premiums. In light of the scarcity of quantitative evidence, the costs and benefits of the policy options were largely assessed using qualitative scales, while taking into account the available quantified estimates.

    For the cost quantification, administrative costs are less significant under this initiative compared with other sectoral legislation because there are no specific obligations or information requirements imposed on economic operators. There will be some adjustment costs for potentially liable parties, related to liability insurance premiums. 146

    6.1. PO1: Easing the burden of proof for AI-related claims

    (a)Effectiveness

    - Degree to which SO1 and 2 147 would be achieved: Targeted, risk-based alleviations of the burden of proof aligned with the AI Act would address the major sources of legal uncertainty identified in the problem analysis. They would thereby achieve the more general objective of improving conditions for cross-border business activities involving AI.

    - Degree to which SO3 would be achieved: The envisaged alleviations of the burden of proof would effectively prevent AI-induced compensation gaps, and thus be suitable to ensure that victims suffering harm caused by AI – whether they are consumers or businesses – have the same level of protection as victims harmed by other technologies. Victims would be relieved from having to overcome the characteristic opacity of certain AI-systems to prove their claims. 148 Consequently, they would have to spend less on technical expertise and have better prospects of making a successful claim. 149  

    The perceived low likelihood of compensation and the difficulty of determining who is liable count amongst the most relevant reasons for low levels of consumer trust in, and societal acceptance of, AI. 150 Consumers who perceive liability rules as appropriate to protect victims of harm are significantly more willing to take up such products and services. 151 A liability regime where the burden of proof has been adapted in favour of the victim ranks higher in the perception of consumers than a regime where the victim bears the full burden of proof. 152  PO1 is therefore expected to contribute – together with the already proposed adaptations of safety rules – to increasing the level of societal trust in AI-enabled products and services and consumers’ willingness to take up such products and services.

    - Indirect social impacts: By preventing liability deficits, PO1 would provide an effective incentive to prevent harm and thus drive safety-enhancing innovation. It would thus contribute indirectly to people’s overall level of safety. 153 This mechanism would apply firstly to businesses subject to specific safety requirements – in particular the user and provider under the AI Act. Secondly, by ensuring the effectiveness of general liability rules under national law, the incentive effect of PO1 could extend to any stakeholders whose actions or omissions may have contributed to the causation of damage, such as e.g. providers of labelled training or testing data. Moreover, behavioural research has shown that adapting the burden of proof in favour of the injured party makes people more likely to consider that victims receive just compensation and that the legal framework is reasonable, predictable and transparent. 154 By promoting effective access to justice, PO1 is hence likely to maintain or increase societal trust in the justice system.

    (b)Efficiency (costs and benefits for affected stakeholders)

    - Impacts on potentially liable parties (businesses and natural persons): The reduction of legal uncertainty and fragmentation would generate direct regulatory benefits for businesses, namely through lower legal information/representation, internal risk management and other compliance-related costs, as well as additional cross-border revenue. 155 By clarifying the kind of information and evidence potentially liable parties may be required to submit in civil proceedings, PO1 would help them to choose more efficiently between different technological options 156 , namely by favouring more transparent and explainable solutions. As start-ups and other SMEs are significantly more affected by the internal market barriers created by legal uncertainty and fragmentation (see 2.6.), this stakeholder group would also benefit to a higher degree. 157  The expected positive impacts of PO1 on societal trust in AI and consumers’ willingness to take up AI-enabled products and services, as well as the improved competitiveness of the European AI sector would directly or indirectly benefit all companies in the AI value chain. 158  

    These benefits, which have been approximated in terms of increases in the AI market size by EUR 500mln to EUR 1.1bln 159 , are likely to outweigh the following adaptation (substantive compliance) costs and redistribution effects linked to PO1. The business-as-usual costs under the baseline scenario, related to the uncertainty in assessing which liability rule would apply to AI and which burden-of-proof rule a court would apply in a concrete case, are higher than any potential adjustment costs borne by potentially liable parties. It will be easier for companies to estimate liability risks and related costs. While under the baseline scenario courts might apply alleviations of the burden of proof on an ad-hoc basis to remedy what they consider an inequitable result, clear and harmonised alleviations of the burden of proof will help liable parties to know what to expect where AI is involved, both domestically and cross-border. Companies operating cross-border would benefit from reduced compliance costs compared to the very fragmented baseline scenario. Such clarity might also help companies obtain appropriately priced liability insurance coverage.

    In cases where the specific characteristics of AI would not have allowed the victim to prove the necessary facts under the baseline scenario, PO1 would shift the cost of compensating the relevant damage from the victim to the liable person, increasing the latter’s liability exposure. This result is not regarded as an undesirable impact or undue burden. It is in line with the policy objective to ensure that victims of damage caused with the involvement of AI systems have the same level of protection as victims of damage caused by other technologies and in general with the purpose of liability law. It also achieves a more efficient cost-allocation to the person best placed to prevent damage from occurring. For the impact analysis, it is taken into account as a re-distribution effect. These effects are inherent in the Commission’s objective of avoiding that victims are less protected due to the use of AI. In practice, the added burden on potentially liable parties is likely not to be substantial because:

    -a major share of them is likely to have professional liability insurance 160 , which also covers risks linked to business activities involving AI 161 . This limits costs to the annual insurance premium, making them even more predictable and easier to manage;

    -PO1 does not involve a general reversal of the burden of proof, but only targeted adjustments to counter-balance the specific challenges of AI. The negative economic impacts for potentially liable parties are likely to be very marginal 162 ;

    -PO1 is likely to achieve an overall more efficient allocation of the burden of proof, as the difficulty to establish how or why an AI system arrived at a harmful output is typically less burdensome for potentially liable parties having influenced the operation of that AI-system (e.g. developers, users) than for victims;

    -as PO1 does not introduce new grounds of liability and keeps the basic allocation of the burden of proof intact, it is not expected to lead to a major increase in the number of civil actions – or an associated increase of insurance premiums – compared to the baseline, nor to significant familiarisation and implementation costs for businesses.

    The instrumental role of insurance in distributing and managing the impacts of the envisaged policy measures needs to be underlined in particular. Ultimately, insurance coverage will allow potentially liable parties to spread liability costs across the community of all insured. This mechanism limits the economic burden on each individual policyholder to the premium, preventing a possible deterrent effect of liability risks and keeping market entry barriers low, which facilitates the roll-out of AI in particular by start-ups and other SMEs. 163 A large portion of the potentially liable parties concerned will likely procure insurance coverage voluntarily to benefit from this cost-limiting effect. 164

    In this context, the regulatory framework established by the AI Act for the development and use of high-risk AI systems is likely to improve the conditions for AI risk assessment by insurers over the coming years. In addition, the proposed Data Act will promote access to data generated by a user’s product and thus facilitate the provision of services that depend on or can be improved by such data, including insurance and data analytics. 165 Moreover, the Commission services will respond to the EP’s call to work closely with the insurance market to develop innovative insurance products 166 and facilitate a dialogue between the insurance industry and companies active in the AI market (in particular SMEs).

    Consequently, while a lack of actuarial data, both currently and at the initial stage of implementation, is likely to influence the premiums of AI-specific insurance products to some extent for a transitional period and make these premiums more volatile, this effect is expected to dissipate quickly, as the data generated during the operation of these technologies will allow risk estimations to converge towards the optimum faster than in the case of ‘traditional’ technologies.

    Hence, the development of a competitive insurance market for AI-related liability risks may safely be expected; such a development will provide the necessary conditions for effective coverage at moderate prices. 167 This expectation is supported by the fact that the insurance industry is vigorously pursuing the development of innovative AI-specific insurance products, in order to explore the substantial new opportunities linked to this growth market. 168 The first insurance policies designed specifically to cover AI-enabled technologies have already been rolled out. 169 AI liability insurance may also be incorporated as an additional feature into existing general policies. 170

    The results of the public consultation confirmed that insurance solutions could ensure that the victim receives compensation (64% agreement v. only 5 % disagreement) and limit the costs of potential damage for the liable person to the insurance premiums (49 % agreement v. only 19 % disagreement). Even the share of business stakeholders (business associations + companies/business organisations) who confirmed the latter effect (38 %) was more than three times as large as the share of those not agreeing with it (12 %).

    All business stakeholders that agreed with the harmonisation of strict liability also agreed with the benefits of insurance (limiting cost, ensuring compensation). Specifically, they agreed that mandatory insurance coverage ultimately spreads – from a moderate to a very large extent – the liability costs over everyone taking out insurance, avoiding too high and burdensome one-off costs for the liable party, and that, by limiting such costs to the insurance premiums, it facilitates business planning and lowers market entry barriers, especially across borders. In this respect, Insurance Europe submitted that “liability insurance plays a vital role by transferring liability risks from companies and consumers to insurers and thereby, protecting the insureds’ economic position as well as ensuring that injured persons are compensated for loss or damage.”

    Along the same lines, 33% of business stakeholders expect that insurers will increase risk premiums (at least to a small extent) due to a lack of predictability if the current liability framework is not adapted to the characteristics of AI, while 36% of business stakeholders do not expect this at all. At the same time, 50% of them expect an increase of insurance premiums, compared to only 14% that do not expect this at all, if MS adopt fragmented liability rules for AI at national level. In other words, business stakeholders acknowledge that while EU action might lead to a transitional, moderate increase of insurance premiums, the absence of such action would have a more negative impact on the market. In addition, approximately 31% of business respondents agreed that in cases where possible facilitations of the burden of proof would apply, the potentially liable party would likely be covered by (voluntary or mandatory) liability insurance, as opposed to only 11% who disagreed. Given that this statement, interestingly enough, also received a share of 31% ‘neutral’ responses, i.e. almost equal to those in favour of the statement, these results point clearly to PO3 – adopting both such facilitations and a staged approach towards insurance – as the option that is most representative of the views of the market and thus the preferred one, as compared to the other two options.

    Quantification: Given the future-oriented nature of this initiative and the lack of quantified data, the re-distribution effect of preventing compensation gaps was, with regard to liable parties, approximated on the basis of reasoned assumptions on the insurance cost linked to AI liability risks and the incremental change PO1 might entail in the level of claims and insurance premiums 171 . The absolute overall amount of annual liability insurance premiums in the EU is estimated to increase moderately by EUR 5.35mln-16.1mln due to this PO. 172

    This estimate was based on available information on annual premiums paid for general liability insurance (EUR 42bn in 2019 173 ), estimates of the shares of these premiums linked to AI-related economic activities (between EUR 10.57mln and EUR 32.21mln, based on the low and high estimates of AI market sizes), input from insurance stakeholders to the effect that AI-related liability risks can largely be covered by existing general liability insurance policies 174 , and estimates of the extent to which PO1 could shift the burden of compensating damage caused by AI from the victim to the party responsible for that damage. The quantified estimates also take into account that, during an initial transitional period, the scarcity of relevant actuarial data on AI liability risks will make it more difficult for insurers to calculate premiums compared to insured activities not involving AI. It was notably estimated that alleviations of the burden of proof as per PO1 would cause an increase by 15 % of the general liability insurance premiums attributable to AI liability risks.
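    To make the structure of this estimate easier to follow, the sketch below reproduces, under simplifying assumptions, only the final step described above: applying the assumed 15 % uplift to the premiums attributable to AI liability risks. The input figure is a hypothetical placeholder, not a value taken from the supporting study, and the study itself combines several further inputs and adjustments (e.g. for the transitional scarcity of actuarial data).

    # Illustrative sketch only (Python): structure of the premium-impact estimate.
    # The input below is a hypothetical placeholder, not a figure from the supporting study.

    def premium_increase(ai_attributable_premiums_eur_mln: float,
                         assumed_uplift: float = 0.15) -> float:
        """Annual premium increase attributable to PO1, given the share of general
        liability insurance premiums linked to AI liability risks and the assumed
        15 % uplift referred to in the text."""
        return ai_attributable_premiums_eur_mln * assumed_uplift

    # Hypothetical example: EUR 50 mln of premiums attributable to AI liability risks
    print(f"Estimated increase: ca. EUR {premium_increase(50.0):.1f} mln per year")  # ca. EUR 7.5 mln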

    With respect to high-risk AI-systems, PO1 could indirectly entail some minor administrative burden, namely to the extent that it prompts the disclosure, in particular in the framework of civil proceedings, of information documented pursuant to the AI Act. However, PO1 relates only to information that in any case has to be logged, documented and stored for possible disclosure to supervisory authorities pursuant to the AI Act. Firstly, this measure would thus not affect private individuals or companies not falling within the scope of the AI Act. In addition, providing the same information also in the context of civil proceedings is not expected to entail a significant added burden. Moreover, the harmonised disclosure rules would require national courts to ensure proportionality, i.e. to avoid any excessive administrative burden.

    - Impacts on victims of damage caused by AI (natural persons and businesses): PO1 would relieve the victims of the burden of bearing the damage, to the extent that their claims for compensation would have failed under the baseline due to the specific challenges of AI. This burden would be re-distributed to the person responsible for causing the damage. This applies not only in respect of material damage, but also pure economic loss and non-material harm (such as psychological harm and damage caused by discrimination) to the extent that these types of harm are compensable under existing rules. PO1 would also reduce victims’ costs linked to the burden of proof (e.g. for expert analysis), by ensuring access to relevant information and ensuring that victims do not have to prove how or why an AI-system arrived at a certain output.

    Quantification: Based on conservative assumptions regarding the costs to be advanced by victims to meet the burden of proof under existing liability rules, and the effect of PO1 on this cost factor, it is estimated that this PO would reduce this cost by ca. EUR 2000 on average per case in which the targeted alleviations of the burden of proof apply. 175  

    This quantification builds on the following estimates and assumptions (explained in detail in Annex 10):

    - Based on expert estimates provided in the framework of the economic study 176 , the average costs to be advanced by victims for technical expertise in cases where AI was involved in causing damage were assumed to be EUR 4149 higher than the same average in cases not involving AI. These costs for technical expertise are used as a proxy for the costs linked to the burden of proof.

    - It was further assumed that the targeted alleviations of the burden of proof envisaged under PO1 would reduce the costs to be advanced by victims due to the AI-specific difficulty of meeting the burden of proof by at least 50 % (i.e. by ca. EUR 2000), as illustrated in the sketch below.
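    A minimal sketch of this two-step estimate, using only the figures stated above:

    # Minimal sketch (Python) of the per-case estimate described in the two assumptions above.
    ai_specific_expertise_cost_eur = 4149   # assumed extra cost of technical expertise where AI is involved
    assumed_reduction_share = 0.50          # PO1 assumed to remove at least half of that AI-specific cost

    saving_per_case_eur = ai_specific_expertise_cost_eur * assumed_reduction_share
    print(f"Estimated reduction per case: ca. EUR {saving_per_case_eur:.0f}")  # ca. EUR 2075, i.e. ca. EUR 2000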

    It is important to stress that the estimated cost reduction for victims should not be misconstrued as a quantification of the AI-specific difficulty of meeting the burden of proof, because it does not take into account cases in which liability claims would not be pursued at all based on current liability rules (because the victim either cannot identify the liable party or considers the prospect of a successful claim insufficient to justify legal action). The preferred PO will help victims also in the latter cases, by overcoming the compensation gaps induced by the specific characteristics of AI.

    - Consumers: A faster roll-out of AI-technologies under PO1 would benefit consumers, e.g. in the form of faster and more personalised services, innovative and performant products as well as advances in the fields of health, safety, security, mobility, sustainability, circular economy, media, etc. Given the overall positive economic impacts also on businesses, it is not expected that the envisaged measures would lead to costs being passed on through increased consumer prices.

    - Insurance companies: PO1 may marginally increase the take-up of insurance by potentially liable parties – provided the insurance coverage is not already included in existing all-risks-policies 177 . Increased legal certainty and reduced fragmentation create more favourable conditions for offering insurance coverage and awareness of liability risks may slightly rise due to this initiative. An increased coverage rate would benefit victims of damage as insurance claims provide an easier path to compensation and relieve victims of the liable party’s insolvency risk.

    - Indirect economic impacts and impacts on the competitiveness of the internal market: Through the avoidance of liability gaps, PO1 would contribute to an efficient cost allocation. Its combined impacts are expected to have a positive effect on cross-border trade in AI-enabled products and services and the development of the European AI-sector as a whole. 178 The economic study estimated that a combination of alleviations of the burden of proof (as per PO1) and measures to harmonise strict liability for certain AI-enabled products and services (cf. PO2 and 3) would increase the cross-border trade in the AI-enabled goods and services falling under the six use-cases analysed in depth for that study by about 5 %. While PO1 does not include all of the assumptions made for that estimation, it is nevertheless relevant because the decisive drivers of the expected economic benefits – increased legal certainty, reduced fragmentation and increased consumer trust – are likely to materialise under PO1. 179

    Quantification:

    (1) Based on the overall value of the EU AI market affected by the liability-related problems addressed by this initiative, and reasoned assumptions regarding the incremental impact of PO1 on this market, it is estimated that the increased legal certainty, reduced fragmentation and increased level of consumer uptake will generate an additional market value of between ca. EUR 500mln and ca. EUR 1.1bln. 180  These values are obtained by multiplying the estimated shares of the AI market affected by legal uncertainty and fragmentation regarding civil liability in 2025 under the baseline scenario (low and high scenarios assumed by the economic study) by the estimated impact of the policy options (+5%). This percentage was determined conservatively, taking into account the estimated impact generated by a combination of measures to ease the burden of proof with a harmonisation of strict liability limited to certain AI applications 181 .
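    As a rough illustration of this multiplication, the sketch below back-calculates the affected-market values implied by the stated results (EUR 500mln-1.1bln at +5%); these base values are inferred here for illustration only and are not quoted from the economic study.

    # Illustrative back-of-the-envelope version (Python) of the calculation described above.
    # The affected-market values are inferred from the stated results, not quoted from the study.
    impact_of_policy_options = 0.05  # +5 % estimated impact on the affected AI market

    affected_market_2025_eur_bln = {"low scenario": 10.0, "high scenario": 22.0}  # inferred baseline values

    for scenario, market_bln in affected_market_2025_eur_bln.items():
        additional_value_mln = market_bln * 1000 * impact_of_policy_options
        print(f"{scenario}: ca. EUR {additional_value_mln:.0f} mln additional market value")
    # -> ca. EUR 500 mln (low) and ca. EUR 1100 mln (high)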

    (2) The Joint Research Centre has provided a complementary micro-economic quantification of the impacts of PO1, based on the use-case of robotic vacuum cleaners. This analysis reaches the conclusion that PO1 would generate an increase in consumer welfare of EUR 11.5-19.12mln and in total welfare 182 of EUR 30.11-53.74mln for this product category alone in the EU-27. 183  

    (3) Previous attempts by the EP to quantify the benefits of a clear and coherent EU civil liability regime for AI remained inconclusive. 184 Nevertheless, its preliminary analysis suggests that the added value of EU action on liability could generate EUR 54.8 billion by 2030 for the EU economy in terms of acceleration of the level of research and development in AI, and in the range of EUR 498.8 billion if other impacts, including reductions of accidents, health and environmental impacts and user impacts are also taken into consideration. 185 As these numbers were not linked to a clearly defined set of PO, they cannot be readily applied to the PO described in this impact assessment. However, they provide a reasoned view on the order of magnitude of potential economic benefits linked to a clear and consistent civil liability framework for AI.

    - Only small incremental impacts on enforcement, adjudication and litigation costs, borne by MS and parties to the proceedings respectively, are expected under PO1. The envisaged alleviations of the burden of proof are not likely to entail a substantial increase in the number of civil actions, as they are designed to apply only in the confined cases where the specific characteristics of an AI system make it unduly difficult to meet the default burden of proof. Moreover, the burden of proof will be distributed more efficiently overall, as potentially liable parties must by definition be capable of influencing the operation of AI-systems. They are therefore typically in a position to more easily discharge the burden of proof, with respect to how or why such systems arrived at a certain harmful output. This has a cost-cutting effect on overall enforcement, adjudication and litigation costs.

    Quantification: Where the measures envisaged under PO1 apply, they could lead to an increase of between ca. EUR 200 and ca. EUR 1600 in the costs to be advanced by potentially liable businesses to defend against claims for the compensation of damage caused by AI. 186 This estimate takes as a starting point the average costs that victims would no longer have to advance due to PO1 (EUR 2000, see above). These are litigation costs that arise in the framework of pending litigation, if the defendant needs to provide or procure technical expertise in order to defend against claims for damages.

    Further reasoned assumptions were made regarding the extent to which each policy option would lead to a transfer of the cost linked to the burden of proof to the defendant (i.e. the allegedly liable party). It is assumed that a fraction ranging from 10 % to 80 % of the amount saved by victims due to the alleviations of the burden of proof under PO1 will have to be advanced by potentially liable businesses. This broad assumed range, illustrated by the calculation after this list, is based on the following considerations:

    - In certain cases, for instance where the defendant is a provider of a high-risk AI system falling under the AI Act, they will have optimal information on and understanding of the functioning of the relevant AI system. They will thus not need to procure any external technical expertise to discharge the burden of proof.

    - On the other end of the spectrum of conceivable cases, the defendant may not have any advanced understanding of the functioning of the relevant AI system, nor easy access to detailed information on that system. This may for instance be the case where the defendant is an SME using an AI system not falling under the transparency and documentation requirements of the AI Act. As this type of defendant is nevertheless in a better position than the victim for establishing the trigger of the damage, it is assumed that even in such cases the amount saved by victims would not be re-distributed to the defendant in its entirety (but only to a large extent, e.g. by 80%).
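    A minimal sketch of the resulting range, using only the figures stated above:

    # Minimal sketch (Python) of the range stated above: a 10-80 % share of the ca. EUR 2000
    # saved by victims is assumed to be advanced instead by the potentially liable business.
    victim_saving_per_case_eur = 2000
    transfer_share_low, transfer_share_high = 0.10, 0.80

    extra_defence_cost_low = victim_saving_per_case_eur * transfer_share_low    # EUR 200
    extra_defence_cost_high = victim_saving_per_case_eur * transfer_share_high  # EUR 1600
    print(f"Additional defence cost per case: ca. EUR {extra_defence_cost_low:.0f} "
          f"to ca. EUR {extra_defence_cost_high:.0f}")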

    The estimates and assumptions underpinning this quantification are based on expert assessment, stakeholder feedback and legal-economic analysis. They are hence necessarily subject to a degree of uncertainty which is reflected by the broad range of estimates.

    (c)     Coherence – in particular with the PLD review

    As explained under 5.2.(a), PO1 would be coherent with the – not AI-specific – measures envisaged in the framework of the PLD review. These instruments are complementary as they address challenges posed by emerging digital technologies with respect to claims based on different grounds, directed against different liable persons and covering compensation for different victims and types of harm. The respective instruments use a consistent approach and similar tools (access to information, alleviations of the burden of proof) to ensure in their combination that products or services using AI or other digital technologies do not make it more difficult under any of the existing pillars of liability to get compensation compared to traditional products. The envisaged measures will together contribute to creating a more consistent and adequate liability framework, without upsetting the balance established by the existing rules.

    PO1 would also be coherent with the proposals already adopted as part of the follow-up to the AI White Paper, in particular the AI Act. In particular, it would take over the definitions of key concepts such as ‘AI system’, ‘provider’ and ‘user’, although additional criteria would be necessary to ensure that the envisaged alleviations of the burden of proof apply only where the specific characteristics of AI effectively challenge existing liability rules. The provisions on disclosure of information and presumptions of causality would build specifically on the requirements of the AI Act. PO1 would thus complement that act by providing an additional incentive for ensuring the safety of AI-enabled products and services as well as the respect of fundamental rights.

    Finally, MS could fit the envisaged alleviations of the burden of proof into their national liability regimes without disrupting their respective legal traditions. This PO allows for sufficient flexibility and is based on tools that are already well-known in MS’ civil liability systems.

    (d)     Proportionality

    PO1 is limited to the measures strictly necessary to address the AI-specific problems identified. In particular, it would not touch upon the substantive conditions of liability like fault or causality, but would focus on targeted proof-related measures ensuring that victims have the same level of protection as in cases not involving AI.

    (e)     Considerations regarding the choice of instrument (binding v. non-binding)

    A directive would be the most suitable instrument, as it could provide the desired harmonisation effect and legal certainty, while giving MS the flexibility to fit the harmonised measures into their national liability regimes without friction.

    A mandatory instrument would prevent protection gaps stemming from partial or no implementation. It would thus likely ensure higher and more harmonised protection of victims compared to the baseline scenario. 187 While a non-binding instrument presents a less intrusive approach, it is unlikely to address the identified problems in an effective manner. The implementation rate of non-binding instruments is difficult to predict and there is no sufficient indication that the persuasive effect of a recommendation would be strong enough to produce consistent adaptations of national laws. This effect is even more unlikely for measures concerning private law, of which extra-contractual liability rules form part. This area is characterised by long-standing legal traditions, which makes MS reluctant to pursue harmonised reform unless driven by the clear prospect of internal market benefits under a binding EU instrument 188  or the need to adapt to new technologies in the digital economy. Moreover, the significant existing divergences between MS’ liability frameworks (see 2.4.), are another reason why a recommendation is unlikely to be implemented in a consistent manner.

    Divergences and reduced legal certainty in cross-border cases would thus persist even to the – likely very limited – extent that MS choose to implement a recommendation. A selective and inconsistent implementation would solve neither the problem of legal uncertainty nor that of impending fragmentation and it would not ensure consistent victim protection throughout the EU. The economic study confirmed that a non-binding initiative would not address the identified internal market obstacles effectively, as the underpinning problems will likely be perpetuated to a substantial extent. 189  It concluded that a non-binding instrument would not achieve any increase in cross-border trade. 190 Due to this likely lack of effectiveness, choosing a non-binding instrument would not be in line with the principle of proportionality, which requires choosing the least burdensome PO that is still suitable to achieve the policy purpose.

    While the above considerations clearly militate in favour of implementing PO1 by way of a binding legal instrument, that instrument has to allow Member States to integrate the harmonised measure into their traditional legal systems without creating frictions. A directive would therefore be the most appropriate tool to achieve the policy objectives while maintaining the necessary flexibility to preserve the various established national approaches and legal traditions in the politically sensitive field of civil law.

    6.2. PO2: PO1 + strict liability for AI use-cases with a specific risk-profile, possibly coupled with mandatory insurance

    PO2 differs from PO1 in that it adds a strict liability regime applicable to users of AI technologies with a special risk profile, possibly coupled with a mandatory insurance regime. The following impact analysis therefore focuses on these elements.

    (a)     Effectiveness

    - Degree to which SO1 and 2 191 would be achieved: A harmonised strict liability regime, possibly coupled with mandatory insurance, is a suitable instrument to ensure legal certainty and prevent fragmentation, as MS would have to implement a consistent minimum strict liability standard with respect to the covered activities. Provided that the AI-enabled technologies covered by that regime can be specified with precision, the companies operating/using those technologies could have an even clearer and more consistent basis for assessing their liability risk. However, the time horizon for the roll-out of technologies with a relevant risk profile and degree of autonomy, as well as the features that would allow them to be defined in a legal act with sufficient certainty, are not yet known. At this point in time, it would therefore be difficult to define the risk profile of those technologies with the desirable degree of precision, and to specify them in a legislative instrument in a way that ensures the legal certainty necessary for delimiting the scope of the harmonised strict liability regime. The same consideration applies to a possible mandatory insurance regime covering strict liability. The obligation to take out insurance coverage is a market entry requirement, so it is crucial to enable market participants to assess with a high degree of certainty whether they fall under this requirement or not.

    Moreover, because of the uncertainty related to the risk profiles explained above, it cannot easily be assessed at this stage whether the alleviations of the burden of proof (PO1) would already be sufficient to address the identified problems. This also reduces the effectiveness of the strict liability element of PO2.

    For these reasons, an instrument including strict liability for the use/operation of certain types of AI-enabled technologies would, at this point in time, likely be less effective than PO1 in achieving specific objectives 1 and 2.

    - Degree to which SO3 would be achieved: A harmonised strict liability and possible mandatory insurance regime, as the distinguishing features of PO2, could prevent a lack of compensation even more effectively than the alleviations of the burden of proof common to PO1 and 2. The expected effects on societal trust follow a similar pattern as under PO1, as strict liability represents simply another – potentially even more effective – way of ensuring an effective compensation of victims. As with the alleviations of the burden of proof, this is likely to have a positive effect on consumers’ perception of the appropriateness of liability rules, as well as their trust in AI applications. 192 This in turn is likely to increase their willingness to take up AI-enabled products and services. 193 At the same time, it would need to be acknowledged that pending the market maturity of relevant AI-enabled products and services, the specific modalities of their operation and the degree of control exercised respectively by the relevant stakeholders over the risks linked to the operation are at this point in time not known yet.

    - Indirect social impacts: The ability of strict liability rules to incentivise safety efforts depends to a large extent on whether the strictly liable person has cost-efficient means to prevent damage. 194 The control criterion envisaged for assigning strict liability to professional users/operators is therefore conducive to the desired incentive effects.

    Moreover, behavioural research has shown that a strict liability framework for AI is more likely to be perceived as predictable and transparent than fault-based liability. 195 The strict liability element is thus likely to increase societal trust in the justice system.

    (b)     Efficiency

    - Impacts on businesses (in particular as potentially liable parties): Provided that the AI-enabled technologies covered by the harmonised strict liability regime can be determined with a sufficient degree of precision, this element of PO2 would be suitable to increase legal certainty and reduce fragmentation by reducing legal information/representation, internal risk management, and other compliance-related costs, and to generate additional cross-border revenue. 196 Moreover, the expected positive impacts of PO2 on consumers’ willingness to take up AI-enabled products and services (see below) would directly or indirectly benefit all companies in the AI value chain. Start-ups and other SMEs would benefit to a higher degree from the overall positive economic impacts of PO2. 197

    In certain cases, beyond merely harmonising existing strict liability rules, PO2 would entail the application of strict liability and possibly an insurance obligation to users of AI technologies that would otherwise be subject only to fault-based liability under national law. In such cases, PO2 could act as a disincentive for businesses choosing between AI-enabled technologies and functionally equivalent alternatives. However, this effect is likely to be offset by the cost reduction and internal market opportunities generated through harmonised liability rules. The economic study found that the moderate compliance costs linked to PO2 would be outweighed by cost savings thanks to higher legal certainty, saved resources on compliance, and higher revenue enabled by a clearer and less fragmented legal framework. 198  

    Moreover, the role of insurance solutions, as described under PO1, is instrumental in this respect, as it limits potentially liable parties’ costs to the insurance premiums, keeping market entry barriers low. 199   In principle, this mechanism holds true whether the relevant risk is covered by voluntary (market-driven) insurance or a mandatory insurance regime (either harmonised or regulated at national level). 200 In the public consultation, insurance stakeholders pointed to a lack of statistical data on accidents and damage, which could initially drive risk margins and thus insurance premiums. Some SME stakeholders also raised concerns about high insurance premiums due to the difficulties in assessing the covered liability risks. On the other hand, the fact that the insurance industry is already proactively developing new insurance products for AI risks 201 and that this field is considered the next big growth market 202  confirms the expectation that coverage will be offered at competitive prices. An additional important factor is the parallel initiative adapting the PLD to the digital age. Based on a (statutory or contractual) subrogation of the victim’s claim to the insurer, it will make it easier for insurers to take recourse against producers where defective AI software caused or contributed to the insured damage. This would allow insurers that have compensated the victim’s damage to claim that compensation back from the producer, thus also contributing to keeping premiums low. 203 In any event, the Commission services will respond to the Parliament’s call to work closely with the insurance market. 204 The Commission will notably facilitate a dialogue between the insurance industry and companies active in the AI market (in particular SMEs).

    Quantification: Where strict liability applies, the effect of redistributing the burden of damage from the victim to the responsible party is stronger compared to mere alleviations of the burden of proof under fault-based liability rules. This effect is approximated by the expected impact on insurance premiums. If PO2 were limited to a combination of alleviations of the burden of proof under fault-based liability rules (= PO1) with a limited strict liability regime, it is estimated to entail an increase in the overall absolute amount of annual liability insurance premiums in the EU by 25 %, that is to say by EUR 8.89mln - 26.82mln. In this scenario, the voluntary or mandatory nature of insurance regimes would be the same as under the baseline scenario, mandatory insurance regimes existing under EU or national law would continue to apply, but PO2 would not introduce any additional mandatory insurance requirements. If PO2 included, in addition, an insurance obligation covering strict liability, the increase in the overall amount of annual liability insurance premiums paid in the EU is estimated at 35 %, i.e. ranging between EUR 12.44mln and EUR 37.57mln. 205  These assumptions are based on the following considerations:

    - Where strict liability applies, the effect of re-distributing the burden of bearing the damage is significantly stronger, as the liable party cannot avoid liability even if not at fault. However, this regime would apply only with respect to a small number of AI-enabled products and AI-enabled services, and would therefore change the estimated impact on insurance premiums only to a limited extent.

    - Coupling strict liability with an insurance obligation would entail an incrementally bigger increase of insurance premiums, as PO2 would preclude to some extent the possibility for insurers to manage risks through contractual exclusions and limitations of coverage. Mandatory insurance in particular can have a premium-driving effect for a transitional period marked by scarce actuarial data, but this effect is expected to dissipate gradually as data become available. Even during the initial stage, the harmonised safety framework provided by the AI Act and sectoral safety legislation proposed at EU level will help insurers to assess the risks linked to the operation of relevant AI systems. In addition, the proposed Data Act will promote access to data generated by a user’s product and insurance provision will be facilitated by such data.
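
    Purely as an illustrative cross-check (not part of the supporting studies), the following sketch back-derives the baseline volume of AI-relevant liability insurance premiums implied by the stated percentage increases and amounts; the baseline itself is not restated in this section and is shown here only as an arithmetic implication of the figures above:

        # Illustrative consistency check of the stated premium-increase estimates for PO2.
        # Stated in the text: +25 % corresponds to EUR 8.89-26.82 mln/year (strict liability
        # without an additional insurance obligation); +35 % corresponds to EUR 12.44-37.57
        # mln/year (with such an obligation). The baseline premium volume is back-derived here.
        increase_25 = (8.89, 26.82)   # EUR mln per year
        increase_35 = (12.44, 37.57)  # EUR mln per year

        implied_baseline = [amount / 0.25 for amount in increase_25]
        print([round(x, 1) for x in implied_baseline])         # -> [35.6, 107.3] (EUR mln, implied)
        print([round(x * 0.35, 2) for x in implied_baseline])  # -> [12.45, 37.55], close to the stated 35 % figures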

    - Victims of damage caused by AI applications with a special risk profile (natural persons and businesses): A harmonised strict liability regime would facilitate victims’ access to compensation to an even greater extent than alleviations of the burden of proof. A possible mandatory insurance regime would relieve victims of the liable parties’ insolvency risk and provide them with an even cheaper, faster and easier path to compensation. 206  

    Quantification: Based on conservative assumptions regarding the costs to be advanced by victims to meet the burden of proof under existing liability rules, and the effect of PO2 on this cost factor, it is estimated that this PO would reduce this litigation cost by at least 60 %, that is to say ca. EUR 2 500, on average per case in which the envisaged measures apply. 207 This assumption is based on the consideration that victims would not bear any significant costs linked to the burden of proof in cases where they can invoke strict liability. The added benefit for victims compared to PO1 concerns however only the relatively limited number of cases involving certain specific AI-enabled technologies, where strict liability would apply.

    - Consumers: The expected benefits in terms of a faster roll-out of AI-enabled products and services are likely to be similar under PO2 and PO1, because their common element – alleviations of the burden of proof under fault-based liability rules – applies to the major part of AI-enabled products and services affected by the initiative. The strict liability and mandatory insurance elements of PO2 are much more limited in scope. In respect of these elements, the difficulty of specifying, already at the present point in time, the AI-enabled technologies with a ‘strict liability profile’ could lead to a certain degree of legal uncertainty, which could potentially disincentivise AI roll-out in certain cases.

    - Insurance companies: The harmonisation of strict liability under PO2 is likely to entail some additional demand for insurance, in particular when coupled with a compulsory insurance regime, which generates new market opportunities for insurance companies.

    - Indirect economic impacts and impacts on the competitiveness of the internal market: While the envisaged strict liability regime may be the most certain and easiest way to ensure that the victim does not bear the cost of the damage, it may not in all cases allocate that cost to the party at the origin of the damage. However, if it is coupled with mandatory insurance, such a cost allocation will be achieved through subrogated recourse claims. 208  Where the insurance company of the strictly liable person compensates the victim, a damage claim of the victim based on fault or product liability would be re-assigned to the insurance company. On the basis of this claim, the insurance company could take recourse against the person at the origin of the damage, for instance against the producer if the product was defective. Thereby the most efficient cost allocation would be achieved. This scenario could be extended along the value chain: for example, if the producer has a liability claim against the developer, the producer’s insurer would have a recourse possibility against the developer on the basis of that contractually subrogated claim. In the end, the developer’s insurance would pay. At the same time, insurance coverage of liable persons would limit their economic costs to the annual insurance premium and keep market entry barriers for AI producers and operators low, while ensuring that the victim’s harm is compensated smoothly.
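
    To make the chain of recourse described above more tangible, the following sketch is purely illustrative: all party names, the damage amount and the particular chain of claims are hypothetical assumptions, not figures or cases from this impact assessment:

        # Purely illustrative sketch of the subrogated recourse chain described above.
        damage = 50_000  # EUR, hypothetical damage suffered by the victim

        recourse_chain = [
            ("operator's insurer", "victim", "strict liability of the operator + insurance"),
            ("producer", "operator's insurer", "subrogated product liability claim (defective AI product)"),
            ("developer", "producer", "contractual liability claim, subrogated to the producer's insurer"),
        ]

        # Each step shifts the same loss one link further up the value chain,
        # so the party at the origin of the damage ultimately bears the cost.
        for payer, payee, basis in recourse_chain:
            print(f"{payer} pays EUR {damage:,} to {payee} ({basis})")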

    - For similar reasons as under PO1, only small incremental impacts on enforcement, adjudication and litigation costs are expected under PO2.

    Quantification: Where the measures envisaged under PO2 apply, they could lead to an increase of between ca. EUR 100 and ca. EUR 1 500 in the litigation costs to be advanced by potentially liable businesses to defend against claims for the compensation of damage caused by AI. 209 This quantified estimate is based on the assumption that a fraction ranging from 5 % to 60 % of the amount saved by victims due to the eased burden of proof under PO2 will be re-distributed to the defendants. The following factors were taken into consideration in this respect:

    - The cost savings that victims achieve because they do not have to prove fault would not be shifted to the defendant, because the latter cannot avoid liability by establishing – possibly with the help of costly technical expertise – that the damage was not caused by their fault.

    - By dispensing with the need to establish fault and a causal link between fault and damage, strict liability considerably reduces the overall evidentiary complexity and need for costly technical expertise.
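
    As a minimal illustrative sketch (assuming, on the basis of the figures stated above, average victim savings of roughly EUR 2 000-2 500 per case, with the low redistribution share paired with the lower saving and the high share with the higher saving), the stated 5 %-60 % range translates into the quoted cost range as follows:

        # Illustrative reproduction of the PO2 estimate for defendants' additional litigation costs.
        victim_saving_low, victim_saving_high = 2000, 2500   # EUR per case (assumed band, see text)
        share_low, share_high = 0.05, 0.60                   # assumed redistribution share under PO2

        print(f"EUR {share_low * victim_saving_low:.0f} - EUR {share_high * victim_saving_high:.0f}")
        # -> EUR 100 - EUR 1500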

    To the extent that risks within the scope of the strict liability element of PO2 are not already covered by a national strict liability regime under the baseline, the easier and more predictable path to compensation afforded by this measure may lead to a slight increase in the number of claims made. Any such possible increase is likely to be marginal. As strict liability would apply only in cases involving significant risks to important legal interests (life, health, property), victims would in such cases be likely to seek compensation also under the baseline, despite the challenges and costs linked to fault-based claims. 210

    (c)    Coherence – in particular with the PLD review

    PO2 would complement the measures envisaged in the PLD review. In particular, the liable persons defined by law are different and their liability is based on a different relationship they have to the risk. The law would attribute liability to the operator/user under the harmonised strict liability regime because the operation of the AI system would realise the risk for the victim. At the same time, the operator draws a benefit from the operation. The law also attributes liability to the producer under the (revised) PLD, despite the fact that the producer is a step more removed from the contact with the victim. The producer is made liable because of producing a defective product (including the AI that is necessary for it to work).

    In cases where the producer is nevertheless at the same time also the user/operator, the fact that they would be subject to the revised PLD and the harmonised strict liability regime for AI would not lead to inconsistent results because these liabilities would intervene in different capacities. Such scenarios already exist in the national laws where national strict liability schemes and product liability apply in parallel.

    As regards national law, the coherence of PO2 with the existing legal framework could be questioned in cases where the use of AI-enabled technologies would be subject to the harmonised strict liability regime while the use of functionally equivalent non-AI technologies is not subject to strict liability under national law.

    (d)     Proportionality

    At this point in time, it is not clear whether, despite the alleviations of the burden of proof envisaged under PO1, there is a liability gap which would need to be filled with a harmonised strict liability regime for AI operators. It is possible that the less intrusive measures under PO1 will be sufficient to achieve the policy objectives. Moreover, there is still some uncertainty regarding the precise characteristics, risk profile and conditions of deployment of AI-enabled technologies with a potential ‘strict liability profile’. Therefore, the proportionality of PO2 cannot be affirmed with certainty at this point in time.

    (e) Considerations regarding the choice of instrument (binding v. non-binding)

    The considerations made in the context of PO1 apply to an even larger extent: As a harmonised strict liability and mandatory insurance regime involves more far-reaching adaptations of MS’ liability rules, a non-binding instrument is not suitable to ensure a consistent implementation of PO2. For the same reasons as set out under PO1, a Directive seems to be the most appropriate legislative tool.

    6.3. PO3: PO1 + targeted review regarding the strict liability and mandatory insurance elements of PO2 (staged approach) 

    (a) Effectiveness

    During the first stage, PO3 would be as effective as PO1. The second stage, i.e. the targeted review mechanism, would systematically re-assess the need for the strict liability and mandatory insurance elements of PO2. PO3 would ensure that the necessary information for this re-assessment is available. The review would assess whether technological and market developments as well as empirical evidence on AI liability cases confirm the need for such measures. This mechanism does not pre-empt the necessary future policy decision on the harmonisation of strict liability and mandatory insurance. But if there were a policy decision to that effect, the targeted review mechanism would have prepared the groundwork for realising the potentially even greater effectiveness described under PO2. By the second stage, more certainty will also have been gained about the technological and regulatory context defining the risk profile and conditions of deployment of the AI-enabled technologies with a potential strict liability profile. This would allow the material and personal scope of a possible strict liability regime to be specified with greater precision. It would therefore be conducive to ensuring legal certainty and uniform implementation (SO1 and 2).

    (b) Efficiency

    During the 1st stage, PO3 would have the same impacts as PO1. As MS will in any event have to report on the implementation of the initiative in line with better regulation requirements on evaluation and monitoring, targeted reporting requirements supporting the review mechanism under PO3 would not entail a significant additional burden for them.

    Quantification: The decision on a possible harmonisation of strict liability and mandatory insurance for certain AI-enabled products and services with a specific risk profile would be taken only at the stage of the targeted review. Thus, the same macro-economic benefits (added AI market value in the EU of ca. EUR 500mln to ca. EUR 1.1bln) and the same incremental increase of insurance premiums (an additional EUR 5.35mln-EUR 16.1mln liability insurance premiums paid in the EU per year) are assumed as for PO1. Likewise, the same litigation cost savings for victims linked to the alleviated burden of proof (ca. EUR 2 000 per case on average) and the same incremental cost increase linked to that burden for potentially liable businesses (ca. EUR 200 to EUR 1 600 per case) are expected.

    The potential benefits of a harmonised strict liability regime would materialise later than under PO2. If there is a liability gap which would have needed to be filled with a harmonised strict liability regime for AI operators, the disadvantages of such a gap would persist for a period corresponding to the time needed for data collection and for assessing the need for strict liability. However, the following factors will improve the efficiency of the measures potentially taken at the second stage:

    -By ensuring that the assessment of a possible strict liability regime for the use of AI-enabled technologies can rely on a more developed factual basis, the staged approach minimises the risk of creating an uneven playing field for AI-enabled technologies.

    -The staged approach allows the development of tailored market-driven insurance solutions, which can be taken into account at the second stage when assessing the need for and effects of a mandatory insurance regime. Moreover, AI technologies potentially subject to such a regime will be rolled out over the coming years and statistical accident data will be accumulated. Therefore the lack of actuarial data, as the main potential cost-driver of AI-specific liability insurance, will have become considerably less relevant by the time of the targeted review under PO3. 

    -As relevant safety standards for the AI-enabled technologies will be available by the time of the targeted review, the conditions for assessing liability risks will have improved for both insurers and liable users/operators.

    (c)    Coherence

    PO3 would insert itself without friction into the existing liability system and be consistent with the other strands of the Commission’s AI policy. The staged approach would allow the Commission to take stock of the practical effect of the planned adaptations to the PLD, in particular as regards providers of safety-relevant AI-systems. The targeted review mechanism would allow the Commission also to ensure coherence with future AI-related policy measures beyond the proposed AI Act. This may for instance concern future safety rules tailored to AI-systems with a ‘strict liability profile’ like autonomous vehicles and highly autonomous AI-enabled drones.

    (d)     Proportionality

    PO3 would ensure that future technological, regulatory and jurisprudential developments will be systematically taken into account to verify the need to harmonise strict liability for certain uses of AI. The staged approach is thus strictly based on the principle of proportionality.

    (e)     Considerations regarding the choice of instrument (binding v. non-binding)

    For the same reasons as set out under PO1 and PO2, a binding legal instrument – a Directive – would be the most appropriate tool to implement PO3.

    6.4. Evolution of policy options

    It is important to note that the policy options retained for detailed assessment have evolved due to the conclusions of the economic study, as well as due to the results of the public consultation and following discussions with stakeholders.

    The economic study assumed two different combinations of alleviations of the burden of proof under fault-based liability rules with a harmonised strict liability regime and, as a consequence, did not explicitly assess the economic impacts of alleviations of the burden of proof, as per PO1, taken in isolation.

    However, it proved important to consider alleviations of the burden of proof on their own (PO1), as an alternative to introducing these alleviations together with strict liability (PO2). We consider that the assessments made by the economic study are still largely relevant because:

    - According to the study, the economic benefits of an EU initiative on AI liability are primarily attached to the expected gains of legal certainty and reduced legal fragmentation. These effects are expected to materialise also with respect to the alleviations of the burden of proof taken in isolation (PO1 and first stage of PO3), which will clarify in a harmonised manner how the burden of proof is to be handled in cases involving AI.

    - The measures that PO1 shares with two of the policy options assumed for the purposes of the economic study are relevant for the major share of AI-enabled products and services, and thus decisive for the economic impacts on most stakeholders. This is because only a small set of AI-enabled technologies would have a risk profile warranting the application of strict liability.

    6.5. UN Sustainable Development Goals and indirect environmental impacts

    The policy options are expected to contribute to the roll-out of AI and thus to achieving the related SDGs and targets 211 . They would also have a positive impact by contributing to the enforcement of the AI Act, because effective legislation on transparency, accountability and fundamental rights helps direct AI’s potential towards benefits for individuals and society, including the achievement of the SDGs.

    As regards indirect environmental impacts, all policy options are expected to contribute – albeit to a non-quantifiable extent – to the uptake of AI applications that are beneficial for the environment. For instance, AI systems used in process optimisation make processes less wasteful (e.g. by reducing the amount of fertilisers and pesticides needed, or by decreasing water consumption at equal output). There is, however, no sufficient basis for quantifying this environmental impact, since such an assessment could only relate to AI-enabled products/services that are actually on the market, whereas this initiative will above all improve the roll-out conditions of AI-enabled products/services that are not yet marketed.

    In addition, AI systems supporting improved vehicle automation and traffic management contribute to the shift towards cooperative, connected and automated mobility, which in turn can support more efficient and multi-modal transport, lowering energy use and related emissions. On the other hand, possible unintended effects cannot be excluded. For instance, an increase in traffic may be possible that could partly offset the lower energy use and emissions achieved through more efficient and multi-modal transport.

    7.How do the options compare?

    The POs were compared by way of a multi-criteria analysis (MCA) taking into account their effectiveness, efficiency, coherence and proportionality. 212  The following impact matrix compares the scores of the options for all of the main impact assessment criteria, using simple aggregation and assuming an equal weight of each individual criterion. The respective scores flow from the expected impacts as presented under heading 6 above. PO2 ranks lower primarily because it includes a strict liability element which, for the reasons explained in section 6.2(a), lowers the effectiveness of this option at this point in time. PO2 therefore scores slightly lower in terms of its effectiveness in achieving specific objective 1 (increase legal certainty regarding liability for AI), as well as in terms of coherence and proportionality, leading to a marginally lower overall score.

    MCA (simple aggregation) – scores net of the baseline (scale of -5 to +5)

    Criterion                              Option 1    Option 2    Option 3
    Effectiveness
      Specific objective 1                 4           3           4
      Specific objective 2                 4           4           4
      Specific objective 3                 4           4           4
    Efficiency                             4           4           4
    Coherence                              4           3           4
    Proportionality                        4           3           4
    → Simple sum                           24          21          24
    → Ranking based on simple aggregation  1           3           1

    As three out of these six individual impact assessment criteria come under the umbrella of effectiveness, effectiveness is de facto given more importance than the other IA criteria. Therefore, a sensitivity analysis was carried out. The results of the MCA and sensitivity analysis are summarised in the following table 213 , which shows that PO1 and PO3 rank highest. This result is consistent with the fact that they involve the same harmonising provisions at the present stage. The feature distinguishing these options, the targeted review mechanism, does not lead to different impacts of the measures to be implemented at this first stage.

    Policy Option    Simple sum of scores    Score based on equal weight of IA criteria    Score based on individually weighted IA criteria
    PO1              24 (1st)                4 (1st)                                       4 (1st)
    PO2              21 (3rd)                3,42 (3rd)                                    3,3 (3rd)
    PO3              24 (1st)                4 (1st)                                       4 (1st)

    Ranking of POs: 1. PO1 and PO3; 3. PO2
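
    To illustrate how the aggregation in the preceding tables works, the following sketch recomputes the simple sum and the equal-weight score (in which the three effectiveness sub-scores are first averaged into a single effectiveness criterion). The individually weighted score in the last column depends on weights that are not reproduced in this section and is therefore not recomputed here:

        # Illustrative recomputation of the MCA aggregation, using the scores from the impact matrix above.
        scores = {
            "PO1": {"SO1": 4, "SO2": 4, "SO3": 4, "efficiency": 4, "coherence": 4, "proportionality": 4},
            "PO2": {"SO1": 3, "SO2": 4, "SO3": 4, "efficiency": 4, "coherence": 3, "proportionality": 3},
            "PO3": {"SO1": 4, "SO2": 4, "SO3": 4, "efficiency": 4, "coherence": 4, "proportionality": 4},
        }

        for po, s in scores.items():
            simple_sum = sum(s.values())
            effectiveness = (s["SO1"] + s["SO2"] + s["SO3"]) / 3   # average the three SO sub-scores
            equal_weight = (effectiveness + s["efficiency"] + s["coherence"] + s["proportionality"]) / 4
            print(po, simple_sum, round(equal_weight, 2))
        # -> PO1 24 4.0 / PO2 21 3.42 / PO3 24 4.0, matching the summary table above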

    8.Preferred option

    8.1. Selection of the preferred option: PO3 (staged approach)

    While liability rules are instrumental in controlling the risk associated with the development and use of AI, the findings of the research and the results of the public consultation informing this impact assessment have shown that the specific features of AI will challenge their application. They also point towards a staged approach as the best way to serve the objectives of this initiative, accommodate the concerns of the public and the industry, and at the same time allow Member States to integrate the adaptations into their existing traditional systems without friction.

    Specifically, a clear majority of business stakeholders opposed strict liability in the public consultation. In addition, the relevant insurance market is still under development due to the currently lower predictability of AI-related risks linked to the lack of sufficient data.

    In this respect, the multi-criteria analysis provides a clear ranking between PO1/PO3 and PO2, confirming that it is preferable not to lay down a harmonised strict liability regime for certain types of AI-enabled products and services at the present time, whether or not coupled with mandatory insurance.

    In addition, and in order to inform the necessary political decision between PO1 and PO3, it is also important to consider how well these options take into account the suggestions of the European Parliament as well as stakeholder opinions. In this respect, firstly, only the staged approach (PO3) incorporates both main elements suggested by the Parliament – a facilitated burden of proof under fault-based liability rules and a limited strict liability regime for certain AI-enabled technologies – explicitly into the proposed legislative instrument. While the strict liability element is not yet implemented at the present stage, the targeted review mechanism provides a dedicated framework preparing the ground for the future policy decision on this element. Secondly, by explicitly acknowledging the possible need for a harmonised strict liability regime in the proposed legislative provisions, PO3 also better reflects the opinions expressed by non-business stakeholders, a large majority of whom supported the harmonisation of strict liability.

    If, at the stage of the review, a political decision is taken to propose a harmonised strict liability regime, this measure could be designed in a way to meet concerns expressed by business stakeholders, which are at present sceptical towards it.

    In light of these considerations, the staged approach (PO3) is the most balanced, politically feasible, proportionate and yet effective option. It is most suitable to deliver the desired economic benefits in terms of roll-out of AI-enabled products and services in the internal market, and to increase citizens’ trust in AI by ensuring that victims who suffered harm caused with the involvement of AI systems enjoy the same level of protection as victims who suffered harm caused by other technologies. It is also most adapted to the political context of the AI liability initiative, including the Parliament’s legislative own-initiative resolution, and stakeholder feedback.

    8.2. Rationale and main impacts of the preferred PO 

    PO3 would achieve the SOs of reducing legal uncertainty, preventing fragmentation and protecting victims by ensuring an effective path to compensation. The measures included in PO3 would also incentivise compliance with the safety requirements and the requirements designed to safeguard fundamental rights applicable to AI-enabled products and services, and thus reinforce the ecosystem of trust in AI.

    The main impacts concern potentially liable persons and potential victims. On the one hand, the internal market obstacles identified will be reduced, leading to reduced costs for businesses that would outweigh any possible adaptation (substantive compliance) costs. On the other hand, PO3 will reduce victims’ costs and re-distribute the burden of bearing the damage to the person responsible for causing it.

    The choice of PO3 as the preferred option is a result of the application of the proportionality principle. This PO will allow the need for harmonisation to be reassessed at a later stage, covering, in addition to aspects related to the burden of proof, also situations with a specific risk profile in which the use of AI would warrant strict liability. At that point in time, the price of insurance will likely be more stable because more data will be available.

    8.3. The interplay with the PLD review: consistency and complementarity of the package of measures for the compensation of damage caused by AI

    As explained above, the measures from the preferred POs of the two impact assessments address challenges posed by emerging digital technologies with respect to claims based on different grounds, covering different types of damage and victims and directed against different liable persons. Both preferred POs have a consistent approach and use similar tools (access to information, adaptations of the burden of proof) to ensure, in their combination, that victims of damage caused by AI systems have the same level of protection as victims of damage caused by traditional products, no matter the liability path taken.

    8.4. One-In-One-Out

    The preferred PO involves small incremental adjustment costs for liable parties. These costs are linked to a transfer of costs of compensation from victims. The envisaged alleviations of the burden of proof will allow victims to claim compensation successfully in cases where they would have been unable to prove a justified claim under the baseline scenario due to the AI-specific difficulties of proof. This transfer is fully in line with the policy objectives and the spirit and purpose of liability rules. Its effect on liable parties has been approximated in terms of possible increases of general liability insurance premiums for a transitional period in which data is lacking. Also, potentially liable entities are in many cases likely to be covered by voluntary insurance – coverage they would in any case take out in their own interest – so that their exposure would again be limited to the premiums. Overall, it is expected that the annual volume of premiums paid for liability insurance will increase by EUR 5.35mln - 16.1mln due to the preferred option. This incremental increase does not represent a significant burden for stakeholders active in the AI market, given that the overall amount of these premiums is over EUR 42bln per year: it represents an increase of only 0,013 % - 0,04 %.
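
    Purely for illustration, the following sketch cross-checks the relative size of that increase against the stated overall premium volume (the figures are those quoted above; treating ‘over EUR 42bln’ as exactly EUR 42bln is an assumption made only for this check):

        # Illustrative check of the relative size of the expected premium increase under the preferred option.
        premium_increase = (5.35, 16.1)   # EUR mln per year, expected additional liability insurance premiums
        total_premiums = 42_000           # EUR mln per year, i.e. the 'over EUR 42bln' overall volume stated above

        for amount in premium_increase:
            print(f"{amount / total_premiums:.3%}")
        # -> 0.013 % and 0.038 %, i.e. within the 0,013 % - 0,04 % range stated above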

    This cost transfer will be mostly relevant for businesses as potentially liable parties rather than for natural persons, because the AI-specific liability gaps addressed by the preferred policy option are more likely to affect the liability exposure of actors with an active influence on the functioning of the relevant AI systems. The benefits of the policy option for companies in terms of increased legal certainty and reduced legal fragmentation outweigh these marginal adjustment costs, because they are lower than the business-as-usual costs that companies would incur under the baseline scenario due to the uncertainty about which liability rules and which burden-of-proof rule a court would apply to AI in a concrete case.

    The preferred PO will not introduce any administrative requirements (e.g. reporting, registration, monitoring) for any of the entities within its scope, i.e. potentially liable parties or victims. A possible incremental increase in judicial claims and the related costs borne by the parties or the courts are not included in the ‘administrative costs’ for the purposes of the One-In-One-Out approach, as they are not directly linked to the compliance with a law.

    The preferred policy option will not entail costs incurred in related markets or experienced by stakeholders that are not directly targeted by the initiative. In particular, as the initiative is expected to generate net cost savings for businesses active in AI, it is not expected to lead to increased consumer prices.

    9.How will actual impacts be monitored and evaluated?

    In order to ensure that the targeted review mechanism under the preferred PO (staged approach) can rely on a sufficient evidentiary basis, this mechanism could:

    -provide for reporting and information sharing by MS regarding the application of the measures under PO 1 in national judicial or out-of-court settlement procedures;

    -use information collected by the Commission or market surveillance authorities under the AI Act (in particular Article 62) or other relevant instruments;

    -use information and analyses supporting the evaluation of the AI Act and the reports to be prepared by the Commission on the implementation of that act;

    -take into account any information and analyses supporting the assessment of relevant future policy measures under the ‘old approach’ safety legislation 214 ;

    -rely on the information and analyses supporting the Commission’s report on the application of the Motor Insurance Directive with regard to technological developments (in particular autonomous and semi-autonomous vehicles) pursuant to its Article 28c(2)(a).

    To evaluate the effectiveness of the preferred PO, success criteria and data sources have been defined on a provisional basis for each of the SO. 215  For instance:

    -for SO1, effectiveness will be evaluated based on the level of legal certainty as perceived by business stakeholders, compared to the results of the public consultation;

    -for SO2, the adoption of any diverging measures on AI liability, as reported by MS, will be taken into account to evaluate the initiative’s effectiveness;

    -for SO3, the degree to which compensation gaps have been prevented and consumer trust has increased will be assessed based on expert and behavioural analysis, and MS reporting.

    Efficiency would be evaluated against specific operational objectives, for example:

    -With respect to companies, cost reductions due to increased legal certainty and reduced fragmentation could be used as benchmark, based on stakeholder feedback.

    -Regarding victims, the level of difficulty encountered when claiming compensation for damage caused by AI could be compared to the experts’ estimates in the Economic Study.

    -For insurers, the liability-related conditions for offering insurance coverage could be assessed, based on feedback from a stakeholder survey and/or a dedicated workshop.

    In addition, criteria for evaluating the proportionality, coherence and continued relevance as well as the EU added value of the policy measures have been developed. For instance, the emergence of any more effective or efficient means to achieve the policy objectives would be assessed by Commission services taking into account stakeholder feedback and information reported by MS on relevant legal cases. Moreover, the synergetic interplay between this initiative and the AI Act as well as the revised PLD would be assessed.

    Annex 1: Procedural information

    1.Lead DG, Decide Planning/CWP references

    DG Justice and Consumers, PLAN/2020/9848 - Adapting liability rules to the digital age and Artificial Intelligence, CWP 2020

    2.Organisation and timing

    Roadmap consultation period – 30 June to 28 July 2021

    Open public consultation period – 18 October 2021 – 10 January 2022

    ISSG meeting on the Impact Assessment – 18 February – participants: SG, LS, JUST, GROW, CNECT, JRC, SANTE, EMPL, COMP, FISMA, AGRI, ECFIN, ENV

    3.Glossary

    Term or acronym – Meaning or definition

    AI – Artificial intelligence

    AI Act – Artificial Intelligence Act: Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (COM/2021/206 final)

    IPR – Intellectual Property Right(s)

    MID – Motor Insurance Directive: Directive 2009/103/EC of the European Parliament and of the Council of 16 September 2009 relating to insurance against civil liability in respect of the use of motor vehicles, and the enforcement of the obligation to insure against such liability

    MPR – Machinery Products Regulation: Proposal for a Regulation of the European Parliament and of the Council on machinery products (COM(2021) 202 final)

    MS – Member State(s) of the European Union

    PLD – Product Liability Directive: Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products

    PO – Policy Option(s)

    SO – Specific Objective(s)

    4.Consultation of the RSB

    4.1. Upstream meeting with the RSB – 30 March 2020

    The guidance and advice provided by the RSB was implemented in this impact assessment, in particular:

    -The interplay of this planned initiative with other relevant policy measures (in particular the PLD review and the AI Act) were addressed in detail in close cooperation with other Commission services, to ensure that related initiatives fit into a coherent overall intervention logic, serve consistent objectives and achieve synergies. Explanations on these aspects are provided throughout the impact assessment as well as in Annexes 6, 7 and 8.

    -The measures suggested in the European Parliament’s resolution on a civil liability regime for AI were to a large extent incorporated into the assessed policy options. For detailed explanations regarding the degree of alignment with the Parliament’s resolution, see Annex 4, section 4.

    -The lack of quantified data (mainly due to the fact that the relevant AI-enabled products and services are for the most part not yet on the market) is acknowledged. The measures undertaken to remedy the scarcity of data and to nevertheless generate quantified estimates are described. See in particular Annexes 4 and 10.

    -The specific challenges of AI regarding existing civil liability rules are explained in detail in the impact assessment and Annex 5, and illustrated with examples. Concrete use-case scenarios demonstrating these challenges have been elaborated together with AI experts from the Joint Research Centre (Annex 13). In this respect, the commonalities and differences between the Product Liability Directive and national civil liability rules are explained.

    -The role of insurance solutions has been emphasised and addressed in detail in the assessment of policy options, taking into account detailed discussions with insurance experts and stakeholders, as well as the feedback received from these stakeholders during the public consultation and other consultation activities.

    -The preferred option was conceived specifically to enable an effective monitoring and evaluation of the envisaged measures, and to ensure that policy decisions on the more far-reaching measures considered (strict liability, possibly coupled with mandatory insurance) will be taken, at the stage of the targeted review, on an even more comprehensive evidence base.

    4.2. Opinion of the RSB and responses

    The Impact Assessment report was reviewed by the Regulatory Scrutiny Board. It received a positive opinion with reservations. The Impact Assessment has been revised to take into account the Board’s comments:

    Comments by the RSB and description of how and where the comments were addressed:

    (1) The report should explain clearly, why the initiative cites fragmentation of national rules as the main justification for the proposed single market legal base, yet limits the scope of the initiative to AI alone, given the highly fragmented state of tort law covering other products and services between different Member States. It should better justify how, in the specific case of AI, the variety of national rules on burden of proof differs from other types of products or services. The subsidiarity assessment should be strengthened, given the initiative’s aim to create harmonised AI liability rules in deeply embedded and diverse national liability systems. The initiative should also present evidence on the perception and level of support from businesses, Member States and the European Parliament.

    Response: The factors distinguishing AI from other technologies and the reasons for proposing AI-specific measures regarding civil liability have been clarified in the problem definition and the subsidiarity analysis. In particular, the pivotal importance of AI for the internal market as a set of crucial enabling technologies and the role of the liability initiative as a key component of the Commission’s overall AI policy have been explained in this context. Additional explanations on the views of business stakeholders as well as MS and the EP have been included in the revised version, acknowledging transparently that MS were for the most part not expressing firm positions on the AI liability initiative.

    (2) The likely evolution of the problem and the baseline should better incorporate the likely positive effects of the proposed EU legislation on AI, as it should reduce the risks for damage from AI and the need for liability compensation. Given their timing, it is not clear whether or to what extent the supporting studies and consultations incorporate the expected positive effects of the proposed AI legislation.

    Response: Explanations were inserted in the description of the baseline to clarify that the preventive effect of the AI Act and other relevant safety legislation was taken into account, including by the supporting studies. It is however acknowledged that this effect could be expressed in qualitative terms only, as no quantitative estimates for AI-enabled products and services are available.

    (3) The report should analyse a more complete set of options. The report needs to discuss the reasons why it does not consider as an option the European Parliament’s Article 225 Resolution for a complete reversal of the burden of proof. If it considers that this option is not realistic or feasible, it should demonstrate this clearly in the discarded options section. In addition, the report should be more specific on the exact content of some of the measures, such as the ‘targeted alleviation of the burden of proof’ or the ‘harmonised strict liability regime’. It should consider whether there are possibly alternative solutions and should analyse these as sub-options if policy choices need to be made. Again, if some of them are not feasible or realistic, the report should discuss this in the discarded options section.

    Response: Further detailed explanations were inserted in the description of the policy options to address this comment. In particular, the possible alternative approaches for alleviating the burden of proof (in particular the reversal of the burden of proof suggested by the EP) are described more concretely. Moreover, the reasons for retaining (only) the tool of a rebuttable presumption for detailed assessment are further developed, taking into account stakeholder feedback as well as considerations of proportionality and coherence with national legal systems.

    Similarly, with respect to policy option 2, the design and content of the strict liability regime is presented in more detail, mentioning the content of a harmonised strict liability rule and the approach to determine the scope of this rule with a list of AI use cases in a technical Annex, which could be updated by way of Commission delegated acts.

    (4) The structure of the policy options should be presented in a coherent manner. The report should present genuine and credible alternatives that can tackle the identified problems. The report should bring out much more clearly the differences between options 1 and 3, which, in terms of substance and of expected impacts, appear to be identical given that both options can be reviewed once a more robust evidence base (that could justify more ambitious action) is in place. Given that the two options, on substance, seem identical, the report should consider the continued practical relevance of option 3, and, if retained, it should be adjusted to make it substantively different from option 1 both in terms of measures included and expected impacts. Such differences would need to be substantiated by credible and robust evidence.

    Response: The description of the policy options was revised to clarify the differences between options 1 and 3. In particular, it has been further explained how the ‘targeted review’ under option 3 differs from a standard evaluation as required by general better regulation rules. It is now mentioned that this mechanism also fulfills the function to integrate the approach suggested by the European Parliament – a combination of measures to ease the burden of proof for fault-based claims with a harmonised strict liability regime for certain specific AI applications – into the AI liability instrument, while allowing the Commission to take the necessary policy decision on strict liability at a point in time when all the necessary information and evidence will be available.

    (5) At the minimum, the report should ensure that options 1 and 3 score equally in terms of effectiveness, efficiency and coherence since any other scoring lacks credibility given their inherent similarity. The assessment of impacts for option 2 needs to be revisited and clarified, in a manner that justifies why it is not the preferred option, considering the objectives to be reached, assuming it is maintained as a realistic and feasible option at this point in time. The sub-option on insurance should be explicitly analysed.

    Response: The scoring was adapted in line with the RSB comments. Options 1 and 3 now receive equal scores and additional clarifications are provided justifying why option 2 is not the preferred option. Option 3 remains the preferred option because it strikes the best overall balance. In particular, it takes into account all elements suggested by the European Parliament’s legislative own-initiative resolution, including a harmonised strict liability regime for certain AI technologies, as well as the broad support from non-business stakeholders for such a measure. At the same time, it is strictly aligned with the principle of proportionality and the constraints imposed by political feasibility, given that the AI use cases with a potential strict liability profile are not yet on the market, and business stakeholders were strongly opposed to harmonising strict liability for AI at the present time. The revised version of the impact assessment elaborates on these considerations to substantiate the choice of option 3 as the preferred option.

    In addition, the revised version contains additional explanations pertaining specifically to the sub-option of combining strict liability with a mandatory insurance regime.

    (6) The report should be more transparent about the credibility and relevance of the quantitative impact estimates. As the economic support study did not model the impacts of the options as described in the report, it should clearly explain the limitations of its results. It should better justify the conclusion that the (non-quantified) benefits for businesses resulting from increased legal certainty and reduced legal fragmentation outweigh their (quantified) costs, not least given this appears to contradict businesses’ views.

    The uncertainty linked to the fact that the policy options were adjusted compared to the economic study is explicitly acknowledged in the revised impact assessment. The reasons for nevertheless considering the findings of that study relevant for the assessment of the policy options are explained. These explanations are namely provided in the context of PO1, because the economic study did not explicitly assess alleviations of the burden of proof taken in isolation, as per PO1.

    (7) The report should add a separate subsection on the application of the ‘one in, one out’ approach. It should explain why it has been concluded that the preferred option will not entail significant administrative costs. As indirect administrative costs are in scope of the ‘one in, one out’ approach, they should also be discussed.

    A subsection on the application of the OIOO approach has been inserted in section 8 (‘Preferred option’), explaining that the initiative will not entail administrative costs, but only small incremental adjustment costs for liable parties.

    (8) The report should explain the reasons behind divergent stakeholder views on the policy options and, if possible, differentiate the views of various businesses segments (e.g. producers, service providers, distributors versus users etc). It should explore and discuss the reasons cited by stakeholders opposed to EU-level action. The report needs to explain particularly why business stakeholders are less positive about the initiative than other stakeholders are. It should also explain whether and how such less positive views have been taken into consideration in the impact analysis and the comparison of the options. The report should be upfront about the absence of Member States views and the reasons for their decision not to engage in the tailored consultations.

    The revised version of the impact assessment elaborates on the reasons for varying stakeholder positions, with a particular focus on the views submitted by business stakeholders.

    In the revised version, it is also acknowledged and explained that MS have not yet expressed any clear views on the AI liability initiative, reserving their positions for the negotiations on the legislative proposal.


    5.Evidence, sources and quality

    The discussion on liability for AI started with the work undertaken by the Expert Group on Liability and New Technologies 216 , which held a series of meetings in 2018 and 2019 and was organised in two formations: Product Liability and New Technologies. The New Technologies formation of the Expert Group adopted a Report in 2019 on the topic of liability for artificial intelligence and other emerging digital technologies. 217 This report informed the Commission services in the preparation of the Commission Report on AI Liability 218 accompanying the White Paper on AI.

    In view of preparing this Impact Assessment, the Commission also contracted legal experts for further comparative legal analysis 219 , an additional study to provide economic analysis 220 and a third study focusing on behavioural analysis. 221

    The specific details of all these studies, their scope and methodology are described in Annex 4.

    In addition, the Commission services also considered the ‘Comparative study on national rules concerning non-contractual liability, including with regard to AI’ which accompanied the European Added Value Assessment on a Civil liability regime for AI 222 commissioned by the European Parliament.

    The Impact Assessment was further based on the results of the public consultation, the input provided by stakeholders on the roadmap, online webinars, a survey of over 8,000 citizens conducted in the context of the behavioural economics study, and numerous bilateral contacts with stakeholders and meetings with public authorities. Annex 2 provides more details about these sources.



    Annex 2: Stakeholder consultation

    A. Outline of the consultation strategy/process 

    In line with the Commission’s Better Regulation requirements, an extensive consultation strategy has been implemented to ensure a wide participation throughout the policy cycle of this initiative. The consultation strategy was based on both public and targeted consultations. The Commission has sought a wide and balanced range of views on this issue by giving the opportunity to all relevant stakeholders to express their views. In particular, the stakeholders addressed by the strategy were:

    -EU and national consumer associations and civil society organisations active in the justice field, victims associations and individual victims of damage caused by products

    -Industry associations at EU, national or sectoral level

    -Economic operators, including SMEs (e.g. manufacturers of products and components, software developers, AI programmers, medical devices manufacturers, manufacturers of pharmaceutical products, retailers (offline and online), online marketplaces, repair and refurbishment providers, providers of services, operators and users of various products)

    -Insurance associations and individual insurers

    -Legal firms and lawyers

    -Academic experts and research bodies

    -Citizens

    -National authorities.

    The extensive consultation process included various activities, for which the content and results are described in greater detail in the subsequent sections of this Annex, along with explanations about how the received feedback was taken into account for the development of policy options covered by this IA:

    -public consultation on the White Paper on AI and the Commission report on safety and liability;

    -online interactive webinars with a broad range of stakeholders, focused on specific questions regarding AI liability;

    -gathering of feedback on the published inception impact assessment;

    -surveys and targeted interviews carried out in the framework of two supporting studies (general economic and behavioural analysis);

    -a dedicated 12-week public consultation on Adapting Civil Liability Rules to the Digital Age and AI, which covered both the general review of the Product Liability Directive and the AI-specific issues addressed by this IA;

    -a workshop with Member State representatives organised to gather feedback on the AI liability initiative and the review of the PLD.

    The results of the dedicated public consultation are summarised first (Section B), and other consultation activities subsequently (Section C).

    B.    Public consultation on Adapting Civil Liability Rules to the Digital Age and AI

    1.    Introduction

    The public consultation on Adapting Civil Liability Rules to the Digital Age and AI covered both the general review of the Product Liability Directive and the AI-specific issues addressed in this IA. Section I of the public consultation aimed to gather stakeholders’ views on how to improve the Product Liability Directive (PLD); this section received a total of 291 responses. Section II concerned problems linked to certain types of AI that may make it difficult to identify the potentially liable person, to prove that person’s fault and/or the defect of a product and the causal link with the damage. A filter question asked respondents to choose whether they wanted to continue with Section II or finish the survey after Section I, so only 233 responses were provided in Section II. A large number of business associations (63), representing a wide variety of interests and company sizes, contributed, as well as individual companies (29), including SMEs (9). Associations representing consumers also contributed (7), including BEUC. The consultation also includes input from individual citizens (95) and NGOs (10), as well as research institutions (14) and national ministries (5) 223 . Some questions received fewer answers, since not all participants answered all the questions. The following summary focuses on the results concerning the AI-related questions in particular (Section II of the public consultation survey).

    In terms of geographical representation, the consultation includes contributions from 21 Member States, as well as from third countries. The geographical coverage, however, is broader because some associations, which indicated Belgium as their country of origin, also represent stakeholders from Member States not directly mentioned in the responses. The majority of contributions come from Germany (91), followed by Belgium (39) and France (20). No campaigns were identified.

    There were also 70 position papers submitted separately in the public consultation.

    2.    Short summary of key results

    2.1.    Problems and problem drivers

    (a) The specific characteristics of AI make it difficult to meet the burden of proof

    The results of the public consultation confirm the existence of this problem driver:

    Amongst responding consumer organisations, NGOs, national ministries, academic/research associations and EU citizens, an overwhelming majority agreed that:

    (I) it could be difficult to link damage caused by highly autonomous AI to the actions and/or omissions of a human actor (91%, or 85 out of 93, agreement amongst responding citizens; unanimous agreement amongst responding consumer organisations, NGOs, national ministries and academic/research associations),

    (II) in the case of opaque and complex AI, it could be difficult for victims to prove that the conditions of liability are fulfilled (84%, or 78 out of 93, agreement amongst responding citizens; unanimous agreement amongst responding consumer organisations, national ministries, NGOs and research institutions), and

    (III) because of AI’s specific characteristics, victims may in certain cases be less protected than victims of damage that did not involve AI (79%, or 74 out of 94, agreement amongst responding EU citizens; 84% agreement amongst responding consumer organisations, national ministries, NGOs and research institutions, with only one disagreeing).

    Responding business stakeholders (business associations and companies/business organisations) also agreed broadly with this problem driver, but were less unequivocal:

    -An absolute majority agreed that because of AI’s specific characteristics, victims may in certain cases be less protected than victims of damage that did not involve AI (54 %, or 45 out of 85, v. 26 %, or 22 out of 85, who disagreed).

    -A relative majority agreed that in the case of opaque and complex AI, it could be difficult for victims to prove that the conditions of liability are fulfilled (41%, or 35 out of 85, v. 28%, or 24 out of 85, who disagreed).

    -Only with respect to the statement that it could be difficult to link damage caused by highly autonomous AI to the actions and/or omissions of a human actor, more business stakeholders disagreed (45% or 38 out of 85) than agreed (36% or 30 out of 85).

    (b) Lack of compensation leads to a lack of trust in AI

    A clear majority of overall respondents confirmed that the lack of adaptation of existing liability rules to AI may negatively affect trust in AI (60 %, or 135 out of 227, agreement v. 25%, or 58 out of 227, who disagreed) and the uptake of AI-enabled products and services (56%, or 126 out of 227, agreement v. 27%, or 60 out of 227, disagreement). In particular, responding NGOs, consumer organisations, academic/research institutions and EU citizens overwhelmingly confirmed these problems (92%). By contrast, close to 60 % (or 49 out of 85) of responding business stakeholders (business associations and companies/business organisations) did not confirm them, while around 27 % (or 23 out of 85) did. 

    (c) Legal uncertainty

    The public consultation confirmed legal uncertainty leading to internal market obstacles: Amongst responding consumer organisations, NGOs, academic/research associations and EU citizens, an overwhelming majority agreed that:

    (I)it is uncertain whether and how liability rules under national law apply to damage caused by AI (84%, or 27 out of 32, agreement amongst consumer organisations, NGOs and academic/research associations; 85%, or 80 out of 94, amongst EU citizens),

    (II)it is uncertain how national courts will address possible difficulties of proof and liability gaps in relation to AI (81%, or 25 out of 31, agreement amongst consumer organisations, NGOs and academic/research associations; 91%, or 84 out of 92, amongst EU citizens).

    Responding business stakeholders were more divided: A bigger share disagreed with statement (I) (52%, or 44 out of 85), while a still sizeable share of 32 % (or 27 out of 85) confirmed uncertainty as to how national liability rules will apply. A slightly larger share of business respondents agreed with the uncertainty as to how national courts will react to the challenges of AI: 34% (29 out of 85) confirmed such uncertainty, while 43% (36 out of 85) did not.

    Substantial shares of business stakeholders (although not the overall majority) think that the lack of adaptation of the current liability rules to AI will entail a number of negative economic consequences. They expect for instance that this lack of adaptation of liability rules will entail additional costs (e.g. legal information costs, insurance costs) for companies (27% or 23 out of 85), and that companies may defer or abandon certain investments in AI technologies if the current liability framework is not adapted (20% or 17 out of 85).

    A slight majority of business associations representing primarily SMEs think that, if the liability framework is not adapted:

    -companies will face additional costs (e.g. legal information costs, increased insurance costs) (4 agree while 3 disagree),

    -insurers will increase risk-premiums due to a lack of predictability of liability exposures (3 agree and 2 disagree) and

    -there will be a negative impact on the roll-out of AI technologies in the internal market (3 agree and 2 disagree).

    (d) Legal fragmentation

    A majority of overall respondents expect that legal fragmentation with respect to civil liability for AI and/or the lack of adaptation of the existing liability framework will, for instance, entail at least to some extent additional costs for companies (e.g. legal information costs, insurance costs) (75 % or 153 out of 205) and the need to adapt AI technologies, distribution models and cost-management models (70 % or 146 out of 205). They will also cause at least some companies to limit their cross-border activities related to the production, distribution or use of AI-enabled products or services (67 % or 135 out of 203).

    Regarding business stakeholders in particular, more than half of these respondents (54%, or 46 out of 85) expect that, if Member States adapt their liability rules to AI in a divergent way, this will entail at least some additional costs for companies; 36% (or 31 out of 85) expect that it may reduce cross-border activities involving AI; and 39% (or 33 out of 85) expect a negative impact on the roll-out of AI technologies, while 29% (or 25 out of 85) do not expect such an impact.

    2.2. Policy options

    (a) Policy Option 1

    EU citizens, consumer organisations, academic/research institutions and NGOs overwhelmingly supported the measures envisaged under Option 1.

    Replies from business stakeholders (business associations and companies/business organisations) were more nuanced: Their agreement and disagreement are roughly evenly distributed as regards rules on the disclosure of information (33%, or 28 out of 86, agreement v. 31%, or 27 out of 86, disagreement). A majority of business stakeholders is against inferring facts from the refusal to disclose information (63%, or 54 out of 86, disagreement) and shifting the burden of proof (63%, or 54 out of 86, disagreement), and a relative majority is against presuming causality if the provider of an AI system did not comply with their obligations under the AI Act (41%, or 36 out of 86, disagreement v. 30%, or 26 out of 86, agreement). By contrast, business stakeholders’ views were roughly evenly split about applying such a presumption vis-à-vis the user of the AI-system: while ca. 35 % (or 30 out of 86) of them disagreed with that approach, close to 30 % (or 25 out of 86) supported it.

    SME stakeholders were more supportive of measures to ease the burden of proof: The responding individual SMEs approved of those measures. The views of business associations representing (primarily) SMEs were in most cases roughly evenly split.

    (b) Policy Option 2

    The results of the open public consultation showed support amongst EU citizens, NGOs, academic/research institutions and consumer organisations for either full or minimum harmonisation of strict liability for the operation of certain AI-enabled products and the provision of certain AI-enabled services. In particular, EU citizens favoured a ‘full harmonisation’ approach to strict liability (77 %, or 68 out of 88, v. 54 %, or 47 out of 88, supporting minimum harmonisation 224 ). By contrast, more responding consumer organisations supported a minimum harmonisation approach, allowing MS to maintain broader and/or more far-reaching national strict liability schemes (4 out of 7 agreed with a minimum harmonisation of strict liability). None of the responding consumer organisations supported full harmonisation of strict liability.

    Business stakeholders tend to oppose the harmonisation of strict liability. Opposition was stronger regarding a minimum harmonisation approach (70 %, or 59 out of 84, v. 14%, or 12 out of 84, support) than a full harmonisation of strict liability (42 %, or 35 out of 84, disagreement v. 30 %, or 28 out of 84, agreement). Interestingly, SME stakeholders were distinctly more supportive of a harmonised strict liability regime: Almost all of responding individual SMEs (9) approved of this policy option, and business associations representing (primarily) SMEs were evenly split regarding both the minimum and full harmonisation of strict liability. Business stakeholders consistently argued against the harmonisation of strict liability in their position papers.

    (c) Policy Option 3

    The ‘staged approach’ (= preferred option) was not presented as a separate policy option in the public consultation, because it is a combination of elements included under the other policy options. Therefore, there are no results linked specifically to this PO. The staged approach was developed and refined in light of feedback received from stakeholders throughout the impact assessment process. It strikes a balance between the needs expressed and concerns raised by all relevant stakeholder groups.

    3. Detailed summary of results

    The following summary explains for each of the topics covered by the public consultation the respective results as regards the problem analysis as well as the development and assessment of policy options and how they were taken into account in the IA.

    3.1.    Problems to be addressed: Lack of compensation and consumer trust, legal uncertainty and fragmentation

    (a)    Problem for victims of damage caused by AI: Lack of compensation

    The problem of lack of compensation due to the specific characteristics of AI was largely confirmed.

    -74% (or 167 out of 227) of overall respondents (56%, or 127, strongly) agreed that in the case of AI that lacks transparency and explainability, it could be difficult for injured parties to prove that the conditions of liability (fault or causation) are fulfilled. Only 12 % (or 26 out of 227) of respondents disagreed.

    Looking at specific stakeholder groups, all responding consumer organisations and national ministries as well as research institutions confirmed this problem. An overwhelming majority of EU citizens (91% or 85 out of 93) also agreed. A relative majority of businesses, including business associations and individual companies, also agreed with this statement (41% or 35 out of 85), while 28% (or 24 out of 85) disagreed.

    -On the question whether it could be difficult to link damage caused by AI to the actions or omissions of a human actor when AI operates with a high degree of autonomy, approximately 67% of respondents agreed (153 out of 227, of which 105, or ca. 46%, strongly); 20% (or 47 out of 227) disagreed. In this respect, high agreement was reached amongst responding citizens (84% or 78 out of 93) and almost unanimous agreement (94%, or 31 out of 33, with only one disagreeing) amongst responding consumer organisations, national ministries and academic/research associations. Businesses were split on this question: while 36% (or 30 out of 85) agreed with this statement, 45% (or 38 out of 85) disagreed.

    -61 % (or 139 out of 232) of respondents also agreed (42%, or 95 out of 232, strongly) that because of specific AI characteristics, victims of damage caused by AI may in certain cases be less protected than victims of damage that did not involve AI; only 23% (or 52 out of 232) disagreed. While a majority of businesses disagreed with such statement (54% or 45 out of 85), 26% (or 22 out of 85) did think that victims will be less protected due to the specific characteristics of AI. Respondent consumer organisations, national ministries and EU citizens overwhelmingly confirmed this problem.

    The positions submitted in writing point to a similar trend: Representatives of academia, NGOs and consumer associations agree that AI characteristics make it more difficult for victims to claim compensation and some agree with the legal uncertainty of applying existing rules. Many business associations and companies cast doubt on the specific challenges linked to AI, although some admit that victims might have difficulties claiming compensation.

    → These results of the public consultation were analysed and taken into consideration to fine-tune and confirm the problem definition. In particular, it was further clarified in the IA that the AI-specific difficulties in claiming compensation do not affect only private individuals harmed by AI; companies – in particular SMEs – may face similar problems. The link between AI-specific challenges to current liability rules and the identified internal market barriers was also further clarified.

    (b)    Problem of lack of trust in AI – reduced uptake

    For a majority of respondents the lack of adaptation of the current liability framework to AI may negatively affect trust in AI (60%, or 135 out of 227, agreed; 25%, or 58 out of 227, disagreed) and the uptake of AI-enabled products and services (56%, or 126 out of 227, agreement v. 27%, or 60 out of 227, disagreement).

    NGOs and consumer associations overwhelmingly confirmed these problems (89%, or 16 out of 18, expect a lack of trust and 78%, or 14 out of 18, reduced uptake). Similarly, a large majority of EU citizens consider that the lack of adaptation of the current liability framework may negatively affect trust in AI (81% or 77 out of 95) and the uptake of AI-enabled products and services (79% or 74 out of 94). Likewise, the vast majority of research institutions share this opinion. Among responding national ministries, 1 confirmed the problem and 1 denied it. While the majority of business stakeholders (business associations and individual companies) (58% or 49 out of 85) did not confirm these particular problems, significant shares of these stakeholders did consider that current liability rules will lead to lower consumer trust (27% or 23 out of 85) and uptake (25 % or 21 out of 85) if not adapted. Moreover, amongst those business stakeholders who considered that AI can make it difficult to claim compensation:

    -64% agreed that the lack of adaptation of the current liability framework to AI may negatively affect trust in AI, and

    -59% considered that the lack of adaptation of the current liability framework to AI may negatively affect the uptake of AI-enabled products and services.

    → These results were taken into account in particular to complement the findings of the behavioural economics study regarding the link between the perception of liability rules and consumer trust in AI-enabled products and services (see Annex 4 for details on that study).

    (c)    Legal uncertainty

    Almost 63% (or 144 out of 227) of respondents agreed (36%, or 81 out of 227, strongly) that there is uncertainty as to whether and how liability rules under national law apply to damage caused by AI; only 22% (or 50 out of 227) disagreed (only 5%, or 11 out of 227, strongly).

    Consumer associations unanimously confirmed their perception of legal uncertainty, and responding EU citizens, NGOs and research institutions agreed with a very large majority. In particular, 85% of EU citizens (or 80 out of 94, with only one citizen disagreeing) and 84% of responding NGOs, consumer organisations and academic/research institutions (or 27 out of 32, with none disagreeing) agreed that it is uncertain whether and how liability rules under national law apply to damage caused by AI. Responding national ministries were split. While about half of business stakeholders (52% or 44 out of 85) do not perceive legal uncertainty specifically with respect to liability for AI, about one third (32% or 27 out of 85) of these respondents did confirm that problem as well.

    This picture is mirrored by the responses to the question whether it is uncertain how national courts will address possible difficulties of proof and liability gaps in relation to AI. An overwhelming majority of responding NGOs + consumer organisations (77%, or 13 out of 17, agreement; 0% disagreement) and EU citizens (91%, or 84 out of 92, agreement) agreed that the approach of national judges is uncertain (while some of these respondents did not have an opinion on this, none disagreed). 2 out of 5 responding national ministries also confirmed the existence of such uncertainty, while one disagreed. Respondents from the business side were again divided on this: 34% (or 29 out of 85) confirmed the existence of such uncertainty while 43% (36 out of 85) did not.

    Business stakeholders in particular were asked to provide feedback on the expected consequences of a lack of adaptation of the existing liability rules to AI. In this respect:

    -Substantial shares of business stakeholders (although not the overall majority) think that the lack of adaptation of the current liability rules to AI will entail a number of negative economic consequences. They expect for instance:

    othat this lack of adaptation of liability rules will entail additional costs (e.g. legal information costs, insurance costs) for companies (27% or 23 out of 85),

    othat companies may defer or abandon certain investments in AI technologies if the current liability framework is not adapted (20% or 17 out of 85) and refrain from using AI when automating certain processes (21% or 18 out of 85).

    -Interestingly, when looking specifically at business associations representing primarily SMEs, opinions are even more balanced. In particular, a slight majority of these respondents agree with the statements that companies will face additional costs (e.g. legal information costs, increased insurance costs) (4 agree while 3 disagree), that insurers will increase risk-premiums due to a lack of predictability of liability exposures (3 agree and 2 disagree) and that, if the liability framework is not adapted, there will be a negative impact on the roll-out of AI technologies in the internal market (3 agree and 2 disagree). A slight majority disagrees with the statements that, if the framework is not adapted, companies may refrain from using AI when automating certain processes (4 against 3) and that there will be higher prices of AI-enabled products and services (4 against 3).

    -Most of the few responding individual SMEs expect additional costs, reduced investments and cross-border business activities, higher prices and insurance premiums as well as a negative impact on the roll-out of AI as a result of the lack of adaptation of current liability rules.

    -When focusing specifically on those business stakeholders who agreed that there is legal uncertainty regarding the application of national liability rules to damage caused by AI, strong majorities expect that, if the current liability framework is not adapted:

    ocompanies will, at least to a moderate extent, face additional costs (82%);

    ocompanies may refrain, to a moderate, large or very large extent, from using AI when automating certain processes (66 %);

    ocompanies may limit their cross-border activities related to the production, distribution or use of AI-enabled products or services (62%);

    oinsurers will increase risk premiums, at least to a moderate extent, due to a lack of predictability of liability exposures (67%).

    → The results regarding legal uncertainty and its impacts were incorporated into the problem analysis in this IA. In particular, significant shares (although relative minorities) of business stakeholders confirmed the perception of legal uncertainty, and amongst those who did, strong majorities linked the lack of adaptation of existing liability rules to AI to negative consequences such as additional costs and reduced cross-border activities in AI. The fact that a clear overall majority of respondents confirmed the existence of legal uncertainty regarding the application of national liability rules to AI also fed into the problem definition.

    (b)    Legal fragmentation

    If Member States adapt liability rules for AI in a divergent way, or national courts follow diverging interpretations of existing liability rules in the case of damage caused by AI, 75% (or 153 out of 205) of respondents expect that this will entail at least some additional costs for companies active in AI (e.g. legal information costs, increased insurance costs). 70% (or 146 out of 205) also expect legal fragmentation with respect to liability for AI to entail a need to adapt AI technologies, distribution models and cost management models at least to some extent.

    Business stakeholders in particular were asked to provide feedback on the expected impacts of AI-specific legal fragmentation. Amongst these respondents:

    -A majority (54% or 46 out of 85) expects that legal fragmentation with respect to civil liability for AI will entail at least some additional costs for companies (e.g. legal information costs, increased insurance costs). Only 15% (or 13 out of 85) of businesses would not expect such costs at all.

    -A relative majority (46% or 39 out of 85) also expects that legal fragmentation with respect to liability for AI entails a need to adapt AI technologies, distribution models and cost management models, while only 18% (or 15 out of 85) disagree. 59% (or 49 out of 83) of responding business stakeholders anticipate that fragmented AI-specific liability rules would entail at least some (including small) need for technological adaptations when providing AI-based cross-border services (v. 27% or 22 out of 83 who do not anticipate such a need for adaptation at all).

    -50% (or 42 out of 84) expect that insurers will increase premiums due to more divergent liability exposures (whereas 14% or 12 out of 84 do not).

    -36% (or 31 out of 85) expect that companies may limit their cross-border activities related to the production, distribution or use of AI-enabled products or services, while only 28% (or 24 out of 85) would not.

    -Similarly, 38% (or 32 out of 85) think that legal fragmentation regarding liability for AI will lead to higher prices for AI-enabled products and services (while only 24%, or 21 out of 85, disagree).

    -39% (or 33 out of 85) expect that increased legal fragmentation regarding liability for AI would have a negative impact on the roll-out of AI technologies, whereas only 29% (or 25 out of 85) dispute this impact.

     

    Responding business associations representing in particular SMEs tend to agree even more strongly that legal fragmentation would lead to additional costs, reduced investments and cross-border activities, higher prices and insurance premiums, as well as a negative impact on the roll-out of AI. For instance, 7 of these respondents agreed that legal fragmentation can entail additional costs for companies (e.g. legal information costs, increased insurance costs) when producing, distributing or using AI-enabled products or services, while only 1 of them disputed this. The same trend was confirmed by the responding individual SMEs.

    → The results regarding legal fragmentation and its impacts confirmed the economic analysis commissioned for this impact assessment (see Annex 4 for details), and were taken into account in the problem definition in this IA.

    3.2.    Need for EU action

    The public consultation results showed clear overall support for EU action on AI liability. When asked to rank policy options by order of preference from 1 (most preferred) to 8 (least preferred), respondents gave by far the least favourable average ranking to the option ‘No EU action’ (baseline). 225 There was particularly strong support for EU action amongst consumer organisations and NGOs, almost all of which chose ‘No EU action’ as their least preferred option. Likewise, EU citizens clearly support EU action. 226

    The following chart shows that:

    -only 14% of respondents supported No EU Action;

    -only 11% expressed a preference for addressing AI-specific issues exclusively by easing the burden of proof under the PLD;

    -75% of respondents preferred one of the policy options involving harmonised measures to adapt also national liability rules to AI.

    While support for EU action on AI liability was consistently strong across responding EU citizens, consumer organisations, NGOs and research institutes, it was somewhat less pronounced amongst business stakeholders. As shown by the following graph, while a preference for ‘No EU Action’ was considerably more widespread amongst these respondents (32 %), the largest share nevertheless expressed a preference for one of the policy options involving harmonised measures to adapt also national liability rules to AI.

    → The results of the dedicated public consultation are aligned with the trend indicated already by the public consultation on the AI White Paper. This confirmation was taken into account in particular in the context of the proportionality and subsidiarity assessment in this IA.

    3.3. Policy options

    (a) AI-specific measures to ease the burden of proof under national civil liability rules

    The public consultation showed overall support for targeted EU measures to ease the victim’s burden of proof under existing national liability rules:

    -More than two thirds of respondents (67%) agreed that the defendant (e.g. producer, user, service provider, operator) should be obliged to disclose necessary technical information (e.g. log data) to the injured party to enable the latter to prove the conditions of the claim (v. only 14% who disagreed with this policy option). Consumer associations and research institutions unanimously supported this option and EU citizens were also strongly in favour of it. Amongst business stakeholders, a slightly bigger share (33%, or 28 out of 86) agreed with this policy approach than disagreed (31% or 27 out of 86).

    -In connection with the previous question, 57% of respondents agreed that if the defendant refuses to disclose relevant information, courts should infer that the conditions to be proven by that information are fulfilled (v. 33% who disagreed with this). This option received strong support from responding NGOs + consumer organisations (61% support v. 17% disagreement) and EU citizens (84%, or 77 out of 92, support v. 13%, or 12 out of 92, disagreement). By contrast, business stakeholders were more sceptical in this regard: close to two thirds (63% or 54 out of 86) of these respondents were opposed to such a rule.

    -51% (or 114 out of 225) of respondents consider that if the provider of an AI system failed to comply with their safety or other legal obligations to prevent harm (e.g. those under the proposed AI Act), courts should infer that the damage was caused due to that person’s fault. While all the other stakeholder categories support this approach, with consumer associations supporting it unanimously, businesses were more divided: 30%, or 26 out of 86, agreed with this option and 41%, or 36 out of 86, disagreed.

    -The option to apply a similar mechanism (presuming causality in the case of non-compliance with AI Act) to users of AI systems, received more support from businesses (35%, or 30 out of 86, agreed, while 29%, or 25 out of 86, disagreed) and was supported also by a clear majority of the other stakeholder categories.

    -63 % (or 142 out of 226) of overall respondents consider that if, in a given case, it is necessary to establish how a complex and/or opaque AI system (i.e. an AI system with limited transparency and explainability) operates in order to substantiate a claim, the burden of proof should be shifted from the victim to the defendant in that respect (v. 27% (or 61 out of 226) who disagreed with this option). While about a quarter of business stakeholders also supported this approach, a clear majority (63% or 54 out of 86) disagreed with it.

    The position papers submitted in the consultation revealed similar trends: representatives of consumer organisations, citizens and NGOs agree that victims should not bear the burden of proof and that there is a need for an alleviation, while opinions vary about how this could be done (for example, some consider that access to information is not helpful as the victim would not be able to understand and use it). Business organisations and companies usually do not favour a shift of the burden of proof or state that sufficient alleviations can be granted by courts under the various national laws. In addition, some of these stakeholders caution against an obligation to make technical information available to the victim, because of intellectual property considerations. Academia recognises the difficulties in finding the right balance and suggests, for example, focusing on further defining duties of care or introducing rebuttable presumptions for breaches of such duties of care.

    → The overall support for measures to ease the burden of proof showed that the envisaged options go in the right direction. In particular, responses from business stakeholders reinforced the Commission’s approach to focus not only on producers but also on other responsible actors in the AI value chain (in particular users of AI systems). The feedback received during the public consultation also allowed for a fine-tuning of the policy options covered by this IA. For instance, the intellectual property concerns raised by business stakeholders will be taken into consideration when conceiving provisions on the disclosure of information to be recorded/logged pursuant to the AI Act. In addition, the results showed that businesses considered liability-related questions primarily from the perspective of a potentially liable party, which is in line with the PLD’s approach to exclude property damage suffered by businesses or economic loss from its scope. However, the AI-specific issues covered by this IA can also affect businesses suffering AI-induced harm, in particular SMEs. The explanations in the problem definition and the policy options have been strengthened to clarify this angle for business stakeholders.

    (b) Strict liability and mandatory insurance

    EU citizens, NGOs and research institutions were strongly in favour of harmonising strict liability for operating AI-enabled products and providing AI-enabled services, limited to cases where these activities pose serious injury risks to the public:

    -EU citizens: 54% (or 47 out of 88) agreement with minimum harmonisation of strict liability; 77% (or 68 out of 88) agreement with full harmonisation;

    -NGOs + research institutions: 73% (19 out of 26) agreement with minimum harmonisation of strict liability; 69% (18 out of 26) agreement with full harmonisation.

    They also tended to agree with laying down a harmonised insurance obligation, where it does not exist yet, covering such a strict liability regime.

    Consumer organisations supported strict liability only in the form of minimum harmonisation, allowing Member States to maintain broader and/or more far-reaching national strict liability schemes applicable to other AI-enabled products and services (4 agreed with minimum harmonisation of strict liability v. 3 that disagreed; 6 disagreed with full harmonisation whereas none agreed with this approach).

    Written contributions submitted by citizens, consumer organisations, NGOs and research institutions were not very clear on the strict liability option; the ones that supported it did not provide much input or referred to the need to exempt consumers from strict liability, to take risks and circumstances of use into account or to allow national law to have in place more favourable regimes.

    The survey results showed that business stakeholders tend to disagree with the harmonisation of strict liability and mandatory insurance, but their disagreement was stronger as regards a minimum harmonisation approach to strict liability (70% opposition (59 out of 84) v. 14% support (12 out of 84)) than a full harmonisation approach (42% disagreement (35 out of 84) v. 30% agreement (28 out of 84)). In the position papers, business associations and companies almost universally opposed a harmonisation of strict liability, for various reasons: some consider that AI does not warrant strict liability in general, as it is not more dangerous than other technologies; others consider that it is too soon, as uses of AI that would pose serious risks are not yet here and harmonising strict liability would deter innovation; while others believe that existing national strict liability regimes are sufficient. The few representatives of this category that supported strict liability did so on the condition that it would be clearly limited to a few high-risk cases, that the heads of claim would be restricted or that clear exceptions would also be harmonised.

    → Stakeholders’ diverging views on the harmonisation of strict liability and mandatory insurance for AI fed into the design and assessment of the policy options covered by this IA. The results namely confirmed the importance of achieving a balanced distribution of risks by following a targeted and strictly proportionate approach, focused only on AI systems with the most critical risk profile. In addition, business stakeholders’ reservations regarding the need for harmonised strict liability rules and the technological scope of such a regime were aligned with the policy option involving a ‘staged approach’, whereby an EU initiative would be initially limited to targeted measures easing the burden of proof under fault-based liability rules, and the need for a harmonised strict liability regime would be re-assessed at a later stage, when more evidence on possible liability gaps and the relevant AI-enabled products and services is available.

    (c) Types of compensable harm

    Stakeholders were asked whether an EU instrument on AI liability should harmonise the types of harm that give rise to a compensation claim when caused by AI. Here again, views varied widely between different stakeholder categories.

    Amongst EU citizens, consumer associations, NGOs and research institutions there was clear support for EU rules prescribing the compensability of pure economic loss (e.g. loss of profit), loss of or damage to data (not covered by the GDPR) and immaterial harm (like pain and suffering, reputational damage or psychological harm).

    By contrast, business stakeholders were mainly opposed to such rules. Their opposition was strongest regarding the compensability of immaterial harm (63% (or 52 out of 83) disagreement v. 23% (or 19 out of 83) support) and of data loss/damage not resulting in a verifiable economic loss (75% (or 63 out of 84) opposition v. 12% (or 10 out of 84) support). This picture is also confirmed by written contributions submitted by business stakeholders. They namely emphasised the very diverse approaches under existing national rules and the concern that harmonisation for AI-related damage would create inconsistencies compared to cases not involving AI.

    → Stakeholders’ opinions contributed to shaping the preferred policy option as presented in this IA. The widespread calls for ensuring the compensation of various types of harm support the Commission’s holistic approach of looking not only at the existing EU instrument (the PLD) – which covers only damage caused by physical harm or damage to individual private property – but also at national civil liability rules. Under existing national civil liability rules, victims can claim compensation also for other types of harm (including e.g. immaterial harm, damage to business property, etc.). This is why it is crucial to ensure that victims have the same level of protection also under those national rules. At the same time, the Commission took into consideration the concerns raised by (primarily business) stakeholders, who pointed out that a harmonisation of the types of compensable harm only with respect to cases where AI causes damage could lead to an internal fragmentation within national legal systems, and inconsistent outcomes.

    → In light of these results, it was determined that the most coherent and proportionate approach is to ensure that victims are not less protected under existing national liability rules when AI causes damage, but to abstain from harmonising the types of compensable harm in an AI-specific instrument.

    C.    Other consultation activities

    1. Public consultation launched by the White Paper on AI and the Commission report on safety and liability

    An open 16-week web-based public consultation ran from 19 February to 14 June 2020. The objective of the consultation was to collect views/opinions on the White Paper on Artificial Intelligence - A European Approach to excellence and trust, COM(2020) 65 and on the Commission Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, COM(2020) 64. The public consultation resulted in more than 1200 responses from all categories of stakeholders from across the EU.

    63 % of respondents to the open public consultation on the AI White Paper in 2020 were in favour of adapting national liability rules for all or for specific AI applications to better ensure proper compensation and a fair allocation of liability. In that context, citizens were particularly supportive, with 72% of respondents calling for some adaptations. While the overall support among companies was 45%, it was higher among SMEs, at 60% of respondents.

    → These results confirmed the need to specifically examine possible EU policy measures on AI liability, which was taken up in the further evidence gathering and impact assessment activities.

    2. Online interactive webinars

    Twelve webinars were organised from 8 to 12 June 2020 for a total of 18 hours of interactive discussions with stakeholders. The webinars had the objective to inform the Commission as part of the consultation launched by the AI White Paper and to encourage stakeholders to participate in the public consultation.

    211 stakeholders registered and 170 participated. Many stakeholder categories were represented: the majority were from the tech industry, including medical tech companies, producers and software developers. Organisations representing consumers were also well represented, as were academia and the legal professions. Interested citizens and non-European stakeholders also participated. Particular efforts were made in the communication strategy to ensure that the most important organisations representing SMEs and start-ups participated in the discussions.

    The webinars covered three main topics related to liability: how to define high-risk AI applications, who should be held strictly liable and possible measures to help victims in case of fault-based rules.

    2.1. Webinar 1: Risk-based approach: How should high-risk and non-high-risk AI applications be determined?

    Stakeholders provided feedback on the risk-based approach presented in the AI White Paper and on the possible criteria to identify the AI applications with a specific risk profile. Several stakeholders asked for consistency on what is high risk in the different dimensions presented in the WP (fundamental rights, safety and liability). However, a slight differentiation could be justified with respect to strict liability, which should have a narrower scope.

    → These insights were taken into account when developing the policy options. In particular, the need for consistency with the other strands of the Commission’s AI policy is reflected by the fact that the policy options on easing victims’ burden of proof are closely linked with the AI Act. Moreover, the call for a narrow scope of a possible strict liability regime was taken into account when designing the strict liability element of Policy Option 2.

    A European organisation representing consumers stressed that consumers should be entitled to an effective redress mechanism and access to justice irrespective of the risk level. Another consumer organisation stated its preference for a precautionary approach taking into account how new an application is and how little it is known about the risks it can raise. Interests of minorities should also be considered.

    → This feedback was consistent with the Commission’s objective to ensure that, whichever route to compensation victims of harm caused by AI choose (fault-based, strict or PLD), they enjoy the same level of protection as persons having suffered harm caused by other technologies.

    A number of participants emphasised that a possible liability instrument would need to fit into the current framework. The need for mandatory insurance should also be carefully considered. An organisation representing start-ups added that strict liability for high-risk AI applications could stifle innovation. The same representative also stated that any legislation put forward needs to be easy for companies to understand, as compliance is difficult for many start-ups. Other participants emphasised that any policy measures should be based on empirically established issues to address real gaps. Some stakeholders representing different interests argued that the level of autonomy should be considered among the criteria to define AI applications with a particular risk profile: a greater degree of autonomy leads to a higher risk. Another element to consider is whether the AI system merely supports the human decision or replaces it. While participants largely agreed that the AI currently on the market is merely assisting human decisions, it was highlighted that some challenges might arise in those cases where AI is autonomous and continues to develop once it is put on the market. The AI could for example learn to act outside previously awarded safety certificates.

    → This input was taken into account in developing the policy options. In particular, the measures were designed in a flexible manner to fit seamlessly into MS’ existing liability frameworks. Moreover, the staged approach was developed to support an empirical and evidence-based approach to AI liability, in particular when it comes to the more far-reaching measures such as strict liability and mandatory insurance.

    2.2. Webinar 2: Strict liability of the operator of high-risk AI applications: Who is the operator in that case?

    The majority of the speakers agreed in principle that the liable person could be defined based on the criterion of control. Some stakeholders representing tech companies mentioned that the criterion of “economic benefit” should also be considered, particularly in the B2B context. In addition, it was stated that the person who is closest to the operations would probably be the best addressee for liability. Some stakeholders were also in favour of joint and several responsibility, with the possibility to take redress along the supply chain. A number of stakeholders representing different interests were of the opinion that in the case of strict liability there is a need for mandatory insurance and caps are important, otherwise it would be difficult for the insurers to calculate the coverage. Medtech representatives supported caps in the medical industry. Mandatory insurance should be without prejudice to developers’ claim against the players in the upstream value chain. Stakeholders representing the automobile industry voiced scepticism regarding the idea of regulating liability caps at EU level because Member States do not calculate damage the same way. A lawyers’ association brought the counter-argument that different caps in different MS could result in forum shopping. Representatives from industry and legal professions took the view, with some variations, that exceptions/defences are important, namely:

    -Force majeure - Several speakers were of the opinion that an exception from liability in unforeseen circumstances should be applicable

    -Contributory damage – most speakers were also in favour of such a defence (e.g. in cases where the victim does not carry out a necessary update);

    -Cybersecurity attacks/3rd party interference – several speakers were in favour of such an exception. However, some industry representatives argued against an absolute exception, but in favour of a defence allowing the liable person to demonstrate that he or she has complied with cyber standards;

    -Open source AI – Some representatives from academia suggested that the liability of the developer of open source AI should be limited;

    -A few stakeholders mentioned the misuse of AI by a user.

    By contrast, representatives of consumer interests spoke against any kind of defence, caps, threshold or other exceptions, because digital goods expose consumers to particular difficulties when it comes to claiming compensation.

    → All of this feedback was taken into consideration when developing and assessing the policy options. In particular, participants’ input regarding the detailed elements of a possible strict liability and mandatory insurance regime fed into the design of policy option 2. Moreover, the importance of taking into account redress possibilities for liable persons was reflected in the IA.

    2.3. Webinar 3: Burden of proof for AI applications other than those with a specific risk profile: How can it be made easier for victims of damage caused by such applications to prove their claim?

    Representatives of consumer interests emphasised the crucial importance of the burden of proof for all AI products, stressing that it is almost impossible for the consumer to prove fault and even harder to prove causality. Consumers should have to prove only the existence of damage, because the prevailing information asymmetry between consumers and professionals justifies a reversal of the burden of proof with respect to all other liability conditions. These arguments were seconded by some representatives from academia, who stated that the difficulty of proving fault is particularly relevant in discrimination cases. An organisation representing businesses, in particular SMEs, added that a reversal of the burden of proof was necessary not only for consumers but also for companies that suffer harm.

    By contrast, several representatives of the tech and software industry argued against a reversal of the burden of proof for AI, questioning the need to deviate from the general rules. One of these discussants stressed that providers of AI-services or products should not be required to prove a negative fact (i.e. that they did not commit a faulty act). Large software companies argued that a reversal of the burden of proof was a very sharp tool and represented a significant burden for the defendant. Therefore, such a tool should only be considered on a case-by-case basis if it is prohibitively difficult for the victim to prove a claim. Representatives of tech companies submitted that the claimant should in any case be required to provide at least prima facie evidence that the defendant did something wrong. Representatives of manufacturing companies stated that a reversal of the burden of proof should apply only under the condition that it is impossible to ascertain the facts of the case, for instance in the case of traffic accidents. Legislative modifications of the burden of proof should be limited in scope based on the risk and the need for protection.

    → These diverging views were taken into consideration to develop targeted measures to ease the burden of proof (only) to the extent necessary to ensure that victims of harm caused by AI enjoy the same level of protection as persons having suffered harm caused by other technologies, in strict alignment with the principle of proportionality. For instance, the policy options do not envisage a general reversal of the burden of proof, which was criticised as unjustified by a number of stakeholders, but a targeted alleviation only with respect to the question of how or why an AI system arrived at a certain output.

    3. Feedback on inception impact assessment

    Thirty-four stakeholders submitted feedback during the feedback period (30 June 2021 – 28 July 2021). Many stakeholder categories were represented. The majority of the stakeholders came from the tech industry, including medical tech companies and software developers. Several NGOs submitted feedback, as well as some organisations representing consumers. EU citizens, public authorities and academic institutions also participated.

    → Stakeholders from different sectors drew attention to the need for consistency with current and upcoming legislation (e.g. the PLD and its possible revisions, the AI Act). This feedback was taken into account, firstly, by aligning the timing of the planned review of the PLD with the proposal on AI liability to ensure consistency and complementarity. While the PLD review aims at ensuring the effectiveness of its horizontal and technology-neutral rules for claims against producers based on defects, it has to be combined with the measures assessed in this IA to ensure that, whichever route to compensation victims of harm caused by AI choose (fault-based, strict or PLD), they enjoy the same level of protection as persons having suffered harm caused by other technologies. Secondly, the AI-specific measures to ease the burden of proof are closely linked to the AI Act, and the definitions of the AI Act are taken over to ensure consistency.

    Some stakeholders supported a minimum harmonisation approach or an approach based on non-binding measures.

    → These points were reflected in the further policy development, namely by assessing possible non-binding measures (e.g. a recommendation) and by designing the envisaged measures as a minimum harmonisation instrument.

    A number of stakeholders called for a risk-based approach, stressing that AI applications ought to be treated according to the specific risk each of them poses.

    → This feedback is aligned with the Commission’s risk-based approach, which the AI liability initiative notably implements by referring to the risk-based measures of the AI Act.

    In order to provide legal certainty and avoid fragmentation among MS, several stakeholders from different sectors favoured a broader harmonisation of rules concerning extra-contractual liability. Some industry stakeholders argued that the producer should not be burdened with strict liability for damage caused by AI, as it would be impossible for him/her to anticipate every conceivable use of the product or service. This is because AI applications can be employed in unlimited scenarios and can even be misused or used in ways not specified by or foreseeable for the producer.

    → These views confirmed the Commission’s holistic approach of looking at all routes to compensation currently available to ensure that, whichever route victims of harm caused by AI choose (fault-based, strict or PLD), they enjoy the same level of protection as persons having suffered harm caused by other technologies. The call to focus not only on the producer supported in particular the Commission’s approach of looking also at the liability of AI users.

    Many stakeholders from NGOs and consumers’ organisations expressed support for an alleviation of the burden of proof, in order to overcome challenges typical of AI which make it difficult for injured parties to obtain compensation (such as autonomous behaviour, continuous adaptation, limited predictability and opacity). Another aspect that makes the burden of proof particularly heavy for injured parties is their lack of technical information. Within the tech industry, while some stakeholders are generally in favour of promoting the exchange of technical information throughout the value chain to ensure that AI systems remain predictable, others cautioned that no obligation to disclose technical information should be imposed where it would infringe the intellectual property or legitimate confidentiality interests of the producer. An EU citizen expressed support for the introduction of an obligation to disclose technical information, whereas an NGO argued that such an obligation, although generally desirable, would not be sufficient to inform consumers whether or not the AI system is defective in their case.

    Several stakeholders supported a reversal of the burden of proof, whereas the majority of industry representatives opposed it, arguing that it would deter innovation. However, an NGO counter-argued that such a reversal would incentivise producers to invest more in preventing defects, which would in turn increase customers’ willingness to pay higher prices for products that they reasonably trust to be safe. Finally, some companies cautioned against a blanket reversal of the burden of proof, arguing that such a tool should only be applied in very specific cases as a measure of last resort.

    → These diverging views were taken into consideration to develop targeted measures to ease the burden of proof (only) to the extent necessary to ensure that victims of harm caused by AI enjoy the same level of protection as persons having suffered harm caused by other technologies, in strict alignment with the principle of proportionality. For instance, the policy options do not envisage a general reversal of the burden of proof, which was criticised as unjustified by a number of stakeholders, but a targeted alleviation only with respect to the question of how or why an AI system arrived at a certain output.

    Finally, medtech companies maintained that, whenever AI is employed in the healthcare system, it is only used as a supporting tool, while the final decision rests with the healthcare professional who should then ultimately be responsible for his/her choices.

    → This feedback confirmed the Commission’s analysis according to which the specific challenges of AI are relevant (only) in situations where autonomous AI systems are interposed in the causal chain between a human action/omission and the damage. The envisaged policy options are therefore designed to help victims in such situations rather than in cases (like medtech) where AI systems only advise a human decision-maker.

    4. Surveys and targeted interviews carried out in the framework of two supporting studies (general economic and behavioural analysis)

    Two external studies commissioned to inform this IA contributed to the consultation of stakeholders:

    -an economic analysis of AI-specific problems affecting current civil liability rules 227 ;

    -a behavioural economics study investigating the link between liability-related problems and policy measures and societal trust in / consumer uptake of AI-enabled products and services 228 .

    The methodology implemented by these studies is described in more detail in Annex 4. The following summary is limited to the tasks relevant as part of the consultation activities for this IA.

    4.1. Economic study

    -Consultation activities to collect data on the costs and time required to claim compensation under current liability rules, as a means of approximating victims’ difficulties in claiming compensation due to the specific characteristics of AI: The study team consulted legal experts in 13 Member States and the UK, with long-standing experience in civil liability cases related to information and communication technologies and novel technologies. The legal experts estimated the costs and time required by lawyers and technical experts in “traditional” cases of damage, as well as in cases involving AI applications. The data collection was complemented by desk research. The results confirmed that AI can make it substantially more difficult for victims to claim compensation (see Annex 4 for more detailed explanations).

    -Consultation activities to collect data on the impact of liability-related legal uncertainty and fragmentation on the roll-out of AI: The study team principally used two methods to collect data for this task: (i) a survey targeted at European trade associations and companies with activities linked to AI, in particular the operation, production and development of AI systems in relation to the six selected use cases; (ii) interviews with trade associations, companies and experts. The interviews were conducted in a semi-structured manner, with an overall list of questions shared in advance and the possibility to address also different topics during the interviews. The findings supported the consultant’s overall conclusion that legal uncertainty and fragmentation regarding civil liability for AI entail significant internal market barriers.

    -Consultation activities to assess the impact of preliminary policy options: Here as well, the study team used a combination of survey and interviews. In order to receive targeted data across different relevant stakeholder groups, the survey was designed in three different versions: (i) for operators of AI-enabled products and providers of AI-enabled services (including companies that are at the same time manufacturers/developers of AI-systems); (ii) for the legal experts consulted also on the cost of claiming compensation under the current liability rules, to ascertain whether the preliminary policy options were suitable to improve victims’ situation; (iii) for insurance companies. The results supported the consultant’s conclusion that policy measures to ease the burden of proof, in combination with a limited harmonised strict liability regime for AI, would significantly increase legal certainty, prevent fragmentation, and thus have a significant positive effect on the functioning of the internal market in AI-enabled products and services.

    -The study team organised and hosted a webinar during the month of October 2020. The webinar enabled over 40 participants from various industrial sectors to provide views on the preliminary findings of the study and to debate the major research questions (impact of legal uncertainty, future legal fragmentation, and preliminary policy options). The webinar allowed the study team to confirm and adjust the findings and provided useful input to orientate the subsequent data collection activities.

    4.2. Behavioural economics study

    In the framework of the behavioural economics study, the consultant sought stakeholder feedback through the following activities:

    -In-depth interviews with consumers, which helped to deepen the understanding of the behavioural dynamics that link, on the one hand, societal acceptance of AI applications, consumers’ trust and their willingness to take up AI applications and, on the other hand, civil liability rules. The insights derived allowed the consultant to fine-tune the design of the survey and of the survey-based experiment, while also partly complementing their results. The interviews were carried out individually with 18 consumers from three countries.

    -An online survey carried out on representative samples of the adult population from eight countries (Denmark, Netherlands, Ireland, France, Germany, Italy, Poland, and Romania), with a sample size of at least 1,000 respondents per country, amounting to a total of 8,079 respondents across the eight countries. The sample for each country was quota-based, with additional demographic weights computed and appended to the data in order to correct any deviations of the sample’s characteristics from those of the population from which it was drawn.
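    The demographic weighting described in the last bullet can be illustrated with a minimal post-stratification sketch in Python. All column names, demographic cells and target shares below are hypothetical illustrations; the study’s actual weighting scheme is not reproduced here.

        import pandas as pd

        # Hypothetical respondents from one country's quota-based sample.
        respondents = pd.DataFrame({
            "age_group": ["18-34", "18-34", "35-54", "55+", "55+", "35-54"],
            "gender":    ["f", "m", "f", "m", "f", "m"],
        })

        # Hypothetical population shares for the same demographic cells (e.g. from census data).
        population_shares = {
            ("18-34", "f"): 0.14, ("18-34", "m"): 0.15,
            ("35-54", "f"): 0.18, ("35-54", "m"): 0.18,
            ("55+",   "f"): 0.19, ("55+",   "m"): 0.16,
        }

        # Observed share of each demographic cell in the sample.
        sample_shares = respondents.groupby(["age_group", "gender"]).size() / len(respondents)

        # Weight = population share / sample share, appended to the data so that
        # weighted statistics reflect the population structure rather than the sample's.
        respondents["weight"] = respondents.apply(
            lambda r: population_shares[(r["age_group"], r["gender"])]
            / sample_shares[(r["age_group"], r["gender"])],
            axis=1,
        )

        print(respondents)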

    The results of the behavioural survey and experiments showed that the perception of liability rules is one of the factors shaping societal trust and consumers’ willingness to take up AI-enabled products and services (see Annex 4 for a more detailed summary of findings).

    5. Workshop with Member States

    A workshop with representatives of MS’ governments was organised on 1 February 2022, with a view to gathering MS’ views and input on the problem definition and policy options before finalising this IA. Most MS had not yet expressed a clear position, but some conveyed preliminary observations.

    -Need for EU action: Some MS formulated reservations regarding the need for harmonisation of liability for AI, considering notably that existing national liability rules are sufficiently flexible to deal with the specific challenges of AI.

    → This feedback was taken into account by re-assessing and strengthening the problem analysis in this IA. Moreover, the policy options were designed specifically to strike a delicate balance between, on the one hand, addressing the legal uncertainty and internal market obstacles linked precisely to the far-reaching flexibility in applying existing national liability rules and, on the other hand, MS’ legitimate interest in preserving the consistency of their national private law systems. The Commission’s targeted minimum harmonisation approach strikes this balance, and the staged approach was designed to ensure that comprehensive empirical evidence is available before deciding on the more far-reaching measures (strict liability and mandatory insurance).

    -Types of AI to be covered: Several MS favoured a narrow definition of AI, e.g. focused on highly autonomous and/or opaque AI.

    → These views are aligned with the Commission approach of addressing only the particular challenges linked to the specific characteristics of certain types of AI systems (e.g. highly autonomous functioning and opacity) through targeted measures.

    -Fault-based liability: A few MS were sceptical about harmonised measures to ease the burden of proof under fault-based liability rules, or at least encouraged a cautious approach. However, other MS were open to the adaptation of rules on access to information to help victims make a successful claim, as well as to the targeted adaptation of the burden of proof.

    → MS’ cautious stance vis-à-vis measures adapting national rules on the burden of proof was taken into consideration when fine-tuning the design of the envisaged measures.

    -Some MS expressed the view that the types of damage compensable for harm caused by AI ought to be determined at national level or that the EU proposal should only cover economic loss, whereas the rest should be left to national law. Some MS opposed the harmonisation of rules regarding contractual limitations or exclusions of liability, while one MS would welcome such intervention at EU level.

    → These opinions correspond to the Commission’s assessment that the types of compensable harm and the admissibility of contractual limitations/exclusions of liability, not being AI-specific questions, do not need to be harmonised in an instrument targeting specifically the challenges linked to the peculiar characteristics of AI.

    -Strict liability: One MS deemed it advisable to reassess the need for strict liability after the roll-out of AI-enabled products and services. According to another MS, if strict liability is considered, then a risk-based approach should be followed, although the definition of high risk is yet to be determined. Two MS opposed a harmonisation of strict liability in their responses to the public consultation. Some MS stated at the workshop that the initiative should not lower the level of protection provided by national law and that B2B, C2C and B2C relationships ought to be treated equally.

    → Member States’ cautious stance vis-à-vis the harmonisation of strict liability for damage caused by AI was taken into consideration when designing the policy options covered by this IA. In particular, the staged approach is aligned with MS’ concern that there might currently not yet be a sufficiently robust evidence base for introducing far-reaching measures such as strict liability coupled with mandatory insurance.

    -Insurance: The MS that provided input were of the opinion that, should strict liability be introduced, mandatory insurance should also be considered. However, it should first be evaluated whether the insurance sector offers such coverage on the market. A few additional points were raised: such an obligation ought to be enshrined within the relevant national legal acts; mandatory insurance might cause an increase in premiums; and it should be technology-neutral to ensure proportionality.

    → These points raised by Member States were taken into account in developing and assessing the policy options. In particular, the staged approach is designed specifically to ensure that the concrete market conditions (including available insurance coverage) defining the products and services potentially subject to strict liability are empirically established before deciding on the harmonisation of strict liability and mandatory insurance.



    Annex 3: Who is affected and how?

    1.Practical implications of the initiative

    1.1.Implications for the most relevant stakeholder groups

    The preferred option has two elements: targeted alleviations of the burden of proof and a targeted review mechanism to re-assess the need for more far-reaching measures, including strict liability and possibly mandatory insurance for the use of certain AI technologies. Given that the review mechanism would not have practical implications for stakeholders, as it could only lead to future changes of the proposed Directive, this analysis focuses on the implications of the targeted alleviations of the burden of proof.

    (a) Impacts on businesses (in particular as potentially liable parties):

    The implications for businesses would mainly consist in increased legal certainty and reduced fragmentation. It will be easier for companies to estimate liability risks and the related costs. While under the baseline scenario courts might apply alleviations of the burden of proof on an ad-hoc basis to remedy what they consider an inequitable result, clear and harmonised alleviations of the burden of proof will help companies know what to expect where AI is involved, both domestically and cross-border. Although Member States can, as regards cross-border relations, impose more stringent measures (for instance strict liability), companies would still benefit from reduced compliance costs compared to the very fragmented baseline scenario. Such clarity might also help companies obtain more appropriately priced liability insurance coverage. The combined economic gains linked to these impacts have been approximated in terms of increases in the AI market size. For further explanations on this quantification of indirect economic benefits, see heading 1.2.(a) below and the cost-benefit table at the end of this Annex.

    These benefits outweigh the adjustment costs, since the business-as-usual costs under the baseline scenario (linked to the uncertainty as to which liability rule would apply to AI and which burden-of-proof rule a court would apply in a concrete case) are higher. These benefits also outweigh the redistribution effects linked to the alleviations of the burden of proof: an improved liability regime would shift the costs of the damage borne by the victim to the wrongdoer. However, this shift cannot be considered an undesirable impact or undue burden. Firstly, it is in line with one of the fundamental justice-related purposes of liability law, i.e. to ensure that a person who unlawfully harms another person will compensate the harm caused to the victim. Secondly, it is in line with the Commission policy objective to ensure that victims of damage caused with the involvement of AI systems have the same level of protection as victims of damage caused by other technologies. Finally, it also achieves a more efficient cost allocation, i.e. the cost is borne by the person who caused the damage and is best placed to prevent it from occurring, rather than by the victim.

    (b) Impacts on victims of damage caused by AI (natural persons and businesses) 

    The targeted alleviations of the burden of proof would allow victims of damage caused by AI to have the same level of protection as victims of damage caused by other technologies. This means that victims of AI-induced damage would not be left without compensation; instead, the economic cost of compensating the harm would be allocated to the person responsible for causing the damage. In addition, victims’ costs linked to the burden of proof would be reduced, including the costs of expert analysis, by ensuring access to relevant information and alleviating the victim’s burden of establishing how or why an AI system arrived at a certain output.

    (c) Consumers

    The practical benefits for businesses would also have impacts on consumers because of the faster rollout of AI technologies. In particular, consumers will benefit from faster and more personalised services, innovative and better-performing products, as well as advances in the fields of health, safety, security, mobility, sustainability, circular economy, media, etc.

    (d) Insurance companies

    Targeted harmonisation and adaptation of the rules on the burden of proof may lead to a slight increase in the take-up of insurance policies by liable parties due to increased legal certainty and reduced fragmentation. These aspects increase awareness of liability risks and create more favourable conditions to offer insurance policies.

    1.2. Broader economic and societal implications

    (a) Implications for the European economy

    The targeted and harmonised alleviations of the burden of proof would contribute to an efficient cost allocation. By increasing legal certainty and reducing fragmentation, the harmonised rules would incentivise the rollout of AI and consumer trust. These combined impacts are expected to have a positive effect on cross-border trade in AI-enabled products and services and the development of the European AI sector as a whole. 229 The economic study commissioned for this impact assessment estimated that a combination of alleviations of the burden of proof and measures to harmonise strict liability for certain AI-enabled products and services would increase cross-border trade in the AI-enabled goods and services falling under the six use-cases analysed in depth for that study by about 5 %. While the preferred option does not include all of the assumptions made for that estimation (the need for strict liability would be assessed at a later stage), the estimate is nevertheless relevant because the decisive drivers of the expected economic benefits – increased legal certainty, reduced fragmentation and increased consumer trust – are likely to materialise also under targeted and harmonised alleviations of the burden of proof. Under the same assumptions, the expected increase in cross-border trade could translate into an increase of around EUR 150 million in the production value of cross-border trade in those products and services alone. 230 While these figures give a conservative indication of the direction of economic impacts, they do not provide an estimate of their overall magnitude. As they do not take into account trade in products and services within individual MS and only refer to the six specific use-cases, the overall economic impact is likely to be significantly larger, because the overall affected market size is much larger: the preferred policy option is estimated to generate an overall increase in the AI market size of ca. EUR 500 mln (low estimated value) to EUR 1.1 bln (high estimated value) 231 .
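    The relationship between the figures quoted above can be illustrated with a back-of-envelope calculation, shown here as a short Python snippet. The implied baseline derived below is a simple inference from the two quoted figures, not a value reported by the supporting study.

        # Figures quoted above for the six use-cases analysed by the economic study.
        trade_increase_share = 0.05        # ~5 % increase in cross-border trade
        trade_increase_eur = 150e6         # ~EUR 150 million additional production value

        # Implied baseline production value of cross-border trade in those use-cases
        # (inferred here as 150 million / 5 %; not a figure reported by the study).
        implied_baseline_eur = trade_increase_eur / trade_increase_share
        print(f"Implied baseline: EUR {implied_baseline_eur / 1e9:.1f} bn")   # ~EUR 3.0 bn

        # Estimated overall increase in the EU AI market size under the preferred option,
        # which also covers domestic trade and sectors beyond the six use-cases.
        market_increase_low_eur, market_increase_high_eur = 500e6, 1.1e9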

    (b) Environmental impacts

    On a general level, it is expected that AI solutions can generate efficiencies and contribute to the innovation of environmentally friendly technologies. By contributing to an increased roll-out of AI-enabled products and services, the preferred policy option might hence yield certain indirect environmental benefits. However, it is acknowledged that there is no sufficient basis for assessing such effects even approximately, because they are too far removed from the envisaged adaptations of liability rules.

    (c) Impacts relating to the UN sustainable development goals

    By incentivising the development and rollout of AI, the preferred option would support several targets 232 across all the Sustainable Development Goals (SDGs). The Covid pandemic has already shown the potential of AI to contribute to Goal 3 on good health and well-being, as it can contribute to finding effective vaccines, detecting diseases via pattern recognition using medical imagery, calculating probabilities of infection, or providing emergency response with robots replacing humans for high-exposure tasks in hospitals 233 . AI can also be an enabler by supporting the provision of food, health, water and energy services to the population. It can also support low-carbon systems by supporting the creation of circular economies and smart cities that use their resources efficiently 234 . Looking at SDGs related to the environment, especially Goal 13 on climate action, there is evidence that AI can support the understanding of climate change and the modelling of its possible impacts. For instance, AI techniques can help to identify desertification trends over large areas, information that is relevant for environmental planning, decision-making and management to avoid further desertification, or help reverse trends by identifying the major drivers 235 .

    Since the present initiative aims specifically to minimise the risks related to the use of AI systems as regards access to justice, it is expected to contribute to Goal 16, target 16.3, which seeks to promote the rule of law at national and international levels and, especially, to ‘ensure equal access to justice for all’. In particular, this initiative addresses the potential risk of ineffective remedies for victims of harm caused by AI-enabled products and services by helping them meet their burden of proof on the requirements that are difficult to prove where AI is concerned, namely fault, defect and causality. The preferred policy option is expected to contribute to the rollout of AI and thus to achieving the related SDGs and targets. It would also impact positively on them by contributing to the enforcement of the AI Act, because effective legislation on transparency, accountability and fundamental rights is essential to direct AI’s potential towards the highest benefit for individuals, society and the environment, as well as towards achieving the SDGs. The policy option also contributes to raising awareness of the risks and SDG trade-offs associated with possible failures of AI systems.

    As regards indirect environmental impacts, all policy options are expected to contribute, albeit to a non-quantifiable extent, to the uptake of AI applications that are beneficial for the environment. For instance, AI systems used in process optimisation make processes less wasteful (e.g. by reducing the amount of fertilisers and pesticides needed, or by decreasing water consumption at equal output). AI systems supporting improved vehicle automation and traffic management contribute to the shift towards cooperative, connected and automated mobility, which in turn can support more efficient and multi-modal transport, lowering energy use and related emissions. However, unintended effects may occur: for instance, an increase in traffic could partly offset the lower energy use and emissions achieved through more efficient and multi-modal transport.

    2.Summary of costs and benefits

    The costs and benefits of the preferred policy option are summarised in the following tables.

    I. Overview of Benefits (total for all provisions) – Preferred Option

    Direct benefits

    -Reduced AI-induced compensation gap
    Amount: No quantified estimates available. 236 The targeted alleviations of the burden of proof are expected to effectively ensure that victims of damage caused with the involvement of AI enjoy the same level of protection as persons having suffered harm caused by other technologies.
    Comments: Citizens and businesses as potential victims

    -Reduced costs
    Amount: For citizens and businesses as potential victims, the alleviations of the burden of proof are expected to reduce litigation and enforcement costs linked to meeting the burden of proof under current liability rules by ca. EUR 2 000 per case in which those alleviations apply. 237 This estimate should not be misconstrued as a quantification of the AI-specific difficulty of meeting the burden of proof, because it does not take into account the cases in which liability claims would not be pursued in the first place under current liability rules, because the victim either cannot identify the liable party or considers the prospect of a successful claim insufficient to justify legal action. The preferred policy option will help victims also in the latter cases, by overcoming the compensation gaps induced by the specific characteristics of AI. This benefit is reflected in the previous row (‘reduced AI-induced compensation gap’). The burden of proof will be distributed more efficiently overall, as potentially liable parties must by definition be capable of influencing, to some extent, the operation of AI systems. They are therefore typically in a position to discharge more easily the burden of proof with respect to how or why such systems arrived at a certain harmful output. This has a cost-cutting effect on overall litigation costs.
    Comments: Citizens and businesses as potential victims

    Indirect benefits

    -Increased AI market value in the EU, due to reduced costs and increased revenues achieved through increased legal certainty, reduced legal fragmentation and increased consumer uptake
    Amount: From ca. EUR 500 mln (low estimated value) to ca. EUR 1.1 bln (high estimated value) 238
    Comments: Businesses active in AI

    -Safer AI systems
    Comments: Citizens and businesses as potential victims

    -Administrative cost savings related to the ‘one in, one out’ approach* (direct/indirect)
    Amount: n/a

    II. Overview of costs – Preferred option

    Columns: Citizens/Consumers (one-off / recurrent), Businesses (one-off / recurrent), Administrations (one-off / recurrent)

    Targeted and harmonised alleviation of the burden of proof

    Direct adjustment costs

    EUR 5.35 mln (based on the lower estimate of the AI market size) to EUR 16.1 mln (based on the higher estimate of the AI market size). 239 This estimate represents the possible increase, due to the preferred policy option, of the overall amount of general liability insurance premiums paid annually in the EU. This cost factor is likely to be mostly relevant for businesses as potentially liable parties rather than for natural persons. This is because the AI-specific liability gaps addressed by the preferred policy option are more likely to affect the liability exposure of actors with an active influence on the functioning of the relevant AI systems.

    Direct administrative costs

    The preferred policy option does not involve administrative obligations that would entail direct administrative costs.

    Direct enforcement costs

    n/a

    In particular, the preferred policy option is not expected to entail additional litigation costs for private persons (as potentially liable parties). 240 These stakeholders are likely to defend themselves against liability claims using the same type of arguments and evidence as under the existing burden of proof rules. For example, they might seek to avoid liability by demonstrating that they acted diligently and in accordance with the instructions of use accompanying an AI-enabled product. Contrary to potentially liable businesses, which may have special knowledge and be subject to certain requirements regarding the functioning and ‘inner workings’ of an AI system (in particular under the AI Act), private persons would not have to base their defence on an analysis of the functioning of such a system. The envisaged alleviation of victims’ burden of proof regarding the ‘inner workings’ of AI systems is therefore not expected to prompt potentially liable private persons to commission technical expertise.

    Between ca. EUR 200 and ca. EUR 1 600 to be advanced by businesses as potentially liable parties, per case in which the measures to alleviate the burden of proof apply. 241

    Indirect costs

    The preferred policy option would not entail costs incurred in related markets or experienced by stakeholders that are not directly targeted by the initiative. In particular, as the initiative is expected to generate net cost savings for businesses active in AI (see benefits), it is not expected to lead to increased consumer prices.

    Costs related to the ‘one in, one out’ approach

    Total

    -Direct adjustment costs: EUR 5.35 mln to EUR 16.1 mln per year (remaining cells: n/a)

    -Indirect adjustment costs: n/a

    -Administrative costs (for offsetting): n/a

    3.Relevant sustainable development goals

    III. Overview of relevant Sustainable Development Goals – Preferred Option(s)

    -SDG no. 3 – healthy lives and well-being
    Expected progress towards the Goal: Reduce the mortality rate caused by existing or future diseases by supporting doctors in the early detection of symptoms (including through telemedicine), finding appropriate cures (including vaccines) and supporting hospitals in case of tasks dangerous for humans or shortage of staff.
    Comments: Citizens as potential victims

    -SDG no. 9 – foster innovation
    Expected progress towards the Goal: Promote the development and rollout of innovative AI-enabled products and services, in particular safe AI solutions.
    Comments: Citizens and businesses active in AI

    -SDG no. 13 – climate action
    Expected progress towards the Goal: Help strengthen resilience and adaptive capacity to climate-related hazards and natural disasters in all countries, since AI systems can be used to improve environmental prediction as well as to optimise operations.
    Comments: Citizens and businesses as potential victims

    -SDG no. 16 – access to justice for all
    Expected progress towards the Goal: Ensure effective access to the evidence necessary to build a case in court for victims of harm caused by AI-enabled products and services, in order for them to enjoy the same chances of a successful claim as victims of harm caused by other technologies.
    Comments: Citizens and businesses as potential victims



    Annex 4

    Analytical methods and key findings of supporting studies / Explanations on how the European Parliament’s legislative own-initiative resolution on a civil liability regime for AI was taken into account

    This Annex presents the analytical techniques applied in the framework of the studies supporting this IA and explains how the European Parliament’s legislative own-initiative resolution on a civil liability regime for AI (2020/2014(INL)) was taken into account. Detailed explanations on the methodology and criteria used for the comparison of policy options (multi-criteria analysis), as well as the results of that comparison, are presented separately in Annex 10. That Annex also contains detailed explanations, in the context of the efficiency assessment, on the available quantified estimates and the assumptions and methodology underlying the quantification approaches. In order to avoid duplication, reference is made to the parallel IA on the PLD review for a presentation of the analytical methods used by the study supporting that IA.

    1. Legal comparative analysis

    Building on the report from the Expert Group on Liability and New Technologies (New Technologies Formation) 242 , the challenges posed by the specific characteristics of AI with regard to various national legal systems were further investigated by legal experts using the method of comparative legal analysis. 243

    1.1. European tort laws

    Based on input provided by experts covering representative legal systems, the comparative legal analysis of European tort laws focused on key AI-related issues, such as:

    -the standard of proof;

    -the burden of proving fault and causality, respectively;

    -causal uncertainty and procedural alleviations of the burden of proof available in national legal systems;

    -the varieties of strict liability in Europe.

    The comparative problem analysis was illustrated based on relevant use-cases, namely autonomous vehicles, autonomous lawnmowers and combine harvesters, and autonomous drones.

    Regarding the application of European tort laws to damage caused by AI, the study came to the following key conclusions 244 :

    -It is to be feared that at least some victims will not be indemnified at all or at least remain undercompensated if harmed by the operation of AI technology. The outcome of such cases in the Member States will often not be the same due to peculiar features of these legal systems that may play a decisive role especially in cases involving AI.

    -Identifying the cause of damage and convincing the court of its impact on the turn of events is challenging if it is an AI system that is suspected to have at least contributed to causing harm to the victim. This is due to the very nature of AI systems and their peculiar features such as complexity, opacity, limited predictability, and openness. Identifying harmful conduct will be more difficult the more independently the AI system is designed to behave, or – figuratively speaking – the blacker the box is. After all, while human conduct is literally visible and can be witnessed, identifying the processes within an AI system and persuading the court thereof seems much more challenging, even if the defendant’s conduct relating to the AI system can be traced back with convincing evidence.

    -Those charged with the burden of proof can sometimes benefit from certain alleviations, for example if the courts are satisfied with prima facie evidence. It is hard to foresee whether and to what extent this will also be available in cases of harm caused by AI systems. At least initially, it would be difficult to apply, considering that this requires as a starting point a firmly established body of experience about a typical sequence of events, which at first will be missing for novel technologies.

    1.2. US law

    Having regard to the pre-eminent position of the US in the field of AI technologies, information on US legal regimes relevant to AI was added, focusing on federal law and the laws of relevant states. The analysis yielded the following key insights, showing that problems similar to the ones addressed by the AI liability proposal arise under US tort law, and that measures to address those problems are also being contemplated by US regulators:

    -In the US, a patchwork of federal and state laws expressly address various issues involving artificial intelligence (AI). Federal legislators and regulators have paid the most attention to autonomous vehicles. In Congress, there has been considerable bipartisan support for federal legislation that would create a comprehensive framework for regulating autonomous vehicles. This legislation, however, has stalled because of the pandemic and the 2020 presidential election.

    -In addition to bodily injury or property damage, AI can cause other types of harms governed by existing US laws. In some cases, for example, plaintiffs have alleged that the defendant trained the AI with datasets that violate federal or state privacy laws. Other cases involve algorithms that allegedly violate anti-discrimination laws or those governing fair trade practices.

    -In the absence of federal law that resolves an issue of liability or insurance, state law will govern these issues. As of June 2020, seven states have adopted statutes that expressly regulate the use of AI. For example, two states have adopted statutes permitting optometrists to use AI in eye assessments. They must maintain liability insurance in an amount adequate to cover claims made by individuals examined, diagnosed, or treated with the AI.

    -The state governments’ primary emphasis with respect to the safety performance of AI has involved autonomous vehicles. As of June 1, 2020, thirty-five states and the District of Columbia have enacted statutes expressly regulating autonomous vehicles. Three states have adopted statutes that directly address liability in the event of a crash, and one of them (Tennessee) has comprehensively addressed the liability and insurance questions.

    -In the absence of either federal or state statutes that dictate otherwise, state tort law will govern cases in which AI technologies cause injury. In general, someone who has been physically harmed by AI will have four potential tort claims:

    o If the operator negligently deployed the AI, then the victim can recover based on proof of fault, causation, and damages.

    o If the AI determines the physical performance of a product, the victim can recover from the manufacturer under strict products liability by proving that the product contained a defect that caused the injury. Alternatively, if the AI only provides a service that was otherwise reasonably used by the operator, then the victim must prove that the AI performed in an unreasonably dangerous manner because of negligence by the manufacturer or other suppliers of the AI.

    o The owner can be subject to negligence liability for negligent entrustment of the AI to the party who negligently caused the victim’s injury, or under limited conditions, the owner can be vicariously liable for the user’s negligence.

    o Finally, the victim can recover from the operator and potentially the owner under the rule of strict liability that would apply only if the AI technology is abnormally dangerous despite the exercise of reasonable care.

    -Plaintiffs in all states bear the burden of proving negligence, which will be hard to do in many cases of AI-caused injury. There might be uncertainty about how AI should be reasonably used before customary practices have been established.

    1.3. Methodological constraints

    Given budgetary and time constraints, the comparative law study could not provide a comprehensive overview of all existing tort law regimes with regard to AI, but focused on selected problem constellations that serve to highlight key aspects of civil liability. The report was therefore limited to a core outline of the key features of applicable liability rules that distinguish the tort laws of Europe. The findings are based on illustrative examples from selected European legal systems representing different legal families, with a starting point in Germanic jurisdictions due to the authors’ background.

    Furthermore, the comparative analysis was of a descriptive nature, explaining the diversity of tort laws in the EU as evidenced in particular by cases involving AI systems, and neither attempted to demonstrate potential solutions to overcome such differences nor made policy recommendations.

    In order to complement the comparative law study and take into account the most complete possible information on the existing legal landscape, other sources were used to inform this IA. For example, the ‘Comparative study on national rules concerning non-contractual liability, including with regard to AI’, which accompanied the European Added Value Assessment on a Civil liability regime for AI 245 , was taken into consideration.

    2. Behavioural analysis

    2.1. Scope and objectives

    The link between the identified legal challenges of claiming compensation under existing national liability rules and societal/consumer trust in AI-enabled products and services was researched by an external consultant, based on the methods of behavioural analysis. 246 The study examined the following two dimensions:

    -As regards the societal acceptance of AI-enabled products and services, the study assessed the current level of acceptance of AI applications, the factors shaping it, as well as the awareness of potential challenges in obtaining compensation for damage caused by these applications and its effect on societal acceptance.

    -With respect to consumers’ trust and willingness to take up AI-enabled products and services, the study generated insights on the potential impact of targeted adaptations of the liability regime on consumers’ trust and consumers’ willingness to take up such products and services, and on the causal mechanisms underlying this impact.

    2.2. Methodology

    The study followed a mixed-method approach, consisting of the following components:

    -Desk research and review of literature, through which the consultant took stock of the most up-to-date policy studies and empirical research on the topics of liability, consumer behaviour, and attitudes towards technology and AI.

    -In-depth interviews with consumers, which helped to deepen the understanding of the behavioural dynamics that link, on the one hand, societal acceptance of AI applications, consumers’ trust and their willingness to take up AI applications and, on the other hand, civil liability rules. The insights derived allowed the consultant to fine-tune the design of the survey and of the survey-based experiment, while also partly complementing their results. The interviews were carried out individually with 18 consumers from three countries.

    -An online survey carried out on representative samples of the adult population from eight countries (Denmark, Netherlands, Ireland, France, Germany, Italy, Poland, and Romania), with a sample size of at least 1,000 respondents per country, amounting to a total of 8,079 respondents across the eight countries. The sample for each country was quota-based, with additional demographic weights computed and appended to the data in order to correct any deviations of the sample’s characteristics from those of the population from which it was drawn.

    -A survey-based online experiment, through which, following a between and within subject design, the consultant tested the effect of alternative liability regimes on consumer behaviour towards AI applications. The experiment design was built around three AI applications (autonomous lawnmower, grocery-carrying robot and a smart irrigation system). These AI applications were selected based on their relevance in terms of consumer use and acceptance. According to dedicated market research, these types of applications have already been rolled out or are likely to be on the market in five years from now.

    -In order for the behavioural experiment to cover the full range of situations in which liability can play a role for consumers, separate scenarios were envisaged covering liability for ‘damage suffered by the owner of an AI application’ or ‘damage suffered by a third party’. Within each of these scenarios, three alternative liability regimes were posited, including one (fault-based liability with the burden of proof on the injured party) that generally corresponds to most Member States’ existing fault-based liability rules. 247

    -For the online experiment, the participants comprised in the sample were randomly allocated, within the sample for each country, to one of the three groups defined by the AI applications of reference. Within each of the three groups, they were further randomly allocated to one of two groups, as defined by the two scenarios encompassing either damage to the owner of the AI application or to a third party.

    -The participants were shown a visual image of the AI application of reference and were asked to respond to a series of questions within that context. After having answered the questions, they were asked to read a mock-up article presenting an accident caused by the AI application of reference. After having read the article, the treatments comprising the alternative liability regimes consisted of a series of mock-up newspaper interviews with a legal expert providing explanations on the assumed liability regime and the likelihood of compensation. Each participant was required to read three such interviews in a random order and was asked after each one to answer a series of questions through which the effect of exposure to the information contained in the interview was measured.
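    The two-level allocation described above (between-subject assignment to one AI application and one damage scenario, combined with within-subject exposure to the three liability regimes in a random order) can be illustrated with a minimal Python sketch. The regime labels below are simplified placeholders and do not reproduce the exact treatments used in the study; the sketch also uses simple random assignment rather than the study’s country-level quota structure.

        import random

        APPLICATIONS = ["autonomous lawnmower", "grocery-carrying robot", "smart irrigation system"]
        SCENARIOS = ["damage suffered by the owner", "damage suffered by a third party"]
        # Simplified placeholder labels for the three liability-regime treatments.
        REGIMES = [
            "fault-based liability, burden of proof on the injured party",
            "fault-based liability with an eased burden of proof",
            "strict liability of another party",
        ]

        def allocate(participant_id: int, rng: random.Random) -> dict:
            """Between-subject allocation to one application and one scenario,
            plus a within-subject random order of the three regime treatments."""
            return {
                "participant": participant_id,
                "application": rng.choice(APPLICATIONS),
                "scenario": rng.choice(SCENARIOS),
                "treatment_order": rng.sample(REGIMES, k=len(REGIMES)),
            }

        rng = random.Random(42)
        for participant_id in range(5):
            print(allocate(participant_id, rng))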

    2.3. Key findings 248

    The behavioural analysis has yielded the following key insights:

    -Liability regimes that place the burden of proof on the injured party contribute to lower levels of societal acceptance, consumer trust and willingness to take up AI-enabled products and services. Regulatory alternatives addressing the expected lack of effectiveness of national liability rules are likely to have a positive impact on these indicators. The study also showed that other factors, such as the perceived level of safety/risk of accidents associated with AI-enabled products and services, also play a significant role in shaping consumer trust.

    -The expected likelihood of receiving compensation if AI-enabled products or services cause damage is one of the factors shaping the level of consumer trust in such products and services: if consumers expect that they are less likely to get compensation for damage compared to traditional devices, levels of trust are lower. For most consumers, a low level of trust is linked, among other factors, to the same AI characteristics that are also expected to make it more difficult to claim compensation for damage, namely

    o the ability to function without human intervention or supervision and

    o the opacity of certain types of AI applications.

    -When asked to specify the reasons for their lack of trust in AI applications, consumers identified the following as the strongest reasons:

    o the difficulty of determining who is responsible in the case of damage caused by AI applications,

    o the perceived low likelihood of compensation in case of such damage, and

    o the perception of AI systems as ‘black boxes’.

    -The behavioural experiment also showed that, overall, liability regimes that increase the likelihood of compensation and shift the liability to a party other than the consumer (for instance a provider) lead to more positive attitudes towards AI-enabled products and services. Willingness to buy is lower under the assumption that the consumer claiming compensation for own damage bears the difficult burden of proving fault, as is the case under most Member States’ tort laws. Willingness to buy increases if damage suffered by consumers caused by their own AI application is covered by the strict liability of another party (which could be the party that produced, designed, monitored or updated the AI application remotely) or if the burden of proving the other party’s fault is adapted in the consumers’ favour.

    -When provided with information about the likelihood that a third party harmed by an AI application will get compensation, consumers’ willingness to buy is higher if their own liability exposure is low because it is either difficult for the injured third party to meet the burden of proof or that party has easy access to compensation from another liable party. Consumers’ willingness to buy is higher under the latter scenario, which indicates that they also take into account the interests of the injured third party. Consumers’ willingness to use an AI application – for instance when it is made available as a service or for rent – follows the same pattern as in the case of their willingness to buy.

    2.4. Methodological constraints

    Due to the general methodological limitations relating to the design of consumer surveys and behavioural experiments (in particular the need to avoid over-complexity), the specificities of certain sectors (such as AI in education or healthcare) could not be reflected in the behavioural analysis. Some sectors may be partly characterised by different consumer concerns than the sectors represented by the AI applications used as examples.

    Moreover, the behavioural analysis did not investigate specifically the effect of safety legislation on consumer trust, in line with this IA’s focus on issues related to liability rules. However, the study did analyse the perceived risk of accidents associated with AI applications as a possible factor shaping consumer trust, and acknowledged the complementary nature of safety and liability rules, namely in light of the recent Commission initiatives designed to ensure the effectiveness of safety rules in the face of AI-specific challenges (AI Act, Machinery Products Regulation).

    3. Economic analysis

    3.1. Objectives

    An additional study was commissioned to provide economic analysis of the following aspects 249 :

    -the market for AI applications capable of causing damage;

    -an assessment of whether and to what extent, under the current liability framework, victims of damage caused by AI applications would be in a comparatively more difficult situation than victims of damage caused by non-AI devices when trying to obtain compensation for their loss;

    -an assessment of whether the challenges raised by AI for the national civil liability frameworks are likely to slow down the uptake of AI in the EU. In particular, the study investigates the impact of:

    o legal uncertainty: whether and to what extent businesses are uncertain about the application of current liability rules to their operations with AI, and whether the impact of legal uncertainty can hamper investments in AI;

    o future legal fragmentation: if Member States individually adjust their – already fragmented – liability laws to the challenges posed by AI, whether this would reduce the effectiveness of the internal market for AI applications and services.

    -an assessment of whether and to what extent harmonising certain aspects of national civil liability via EU legislation would reduce these problems and facilitate the overall uptake of AI technology by EU companies.

    3.2. Analytical methodology

    (a) Risk-based approach and use cases

    Similar to the approach set out in the White Paper on AI and the accompanying Report on Safety and Liability Implications, the economic study addressed the research questions with a risk-based approach to AI applications. This means that, in order to identify relevant industry stakeholders and subsequently assess the impact of policy options, the study team selected a number of relevant AI-enabled products and services and categorised them as follows based on their respective risk profile:

    -Motor vehicles – risk profile: specific; type of use case: AI-based; product/service: AI-enabled autonomous vehicles; type of damage: injury or death; damage to property (e.g. other vehicles, pedestrians, static objects such as infrastructure)

    -Drones/autonomous delivery robots – risk profile: specific; type of use case: AI-based; product/service: autonomous drones/delivery robots; type of damage: injury (in rarer cases death); damage to property (e.g. objects on the ground)

    -Road traffic management systems – risk profile: specific; type of use case: AI-based; product/service: AI- and IoT-enabled road traffic management system; type of damage: injury or death; damage to property (e.g. road vehicles involved in accidents)

    -AI-enabled manufacturing appliances – risk profile: other; type of use case: AI-based; product/service: AI-enabled warehouse robots; type of damage: injury or death; damage to property (e.g. goods being processed in a facility)

    -Medical devices – risk profile: other; type of use case: AI-based; product/service: AI-enabled medical diagnosis services; type of damage: injury, illness, or death

    -Lawnmowers – risk profile: other; type of use case: AI-based; product/service: automated lawnmowers/vacuum cleaners; type of damage: minor or serious injury; damage to property (e.g. property standing in gardens)

    The selected use cases represent examples intended to make the analysis more concrete and facilitate stakeholder input. They were the main reference for the data collection and analysis. However, the identified AI-specific problems concerning liability rules, as well as the Commission’s policy options on AI liability, are not limited to these use-cases. Therefore, both the impacts of legal uncertainty and fragmentation under the baseline scenario and the impacts of the preliminary policy options were also analysed on a broader basis, taking into account all relevant sectors of the economy. The use-case-based approach and the broader economic analysis complement one another to inform the overall findings.

    (b) Data collection and analysis

    - Market data collection: The study team reviewed market research reports and analyses on the current and expected economic significance of AI applications in the EU space. This data collection allowed the study team to identify market trends and to estimate the development of the market until 2030. The study team ensured the use of comparable sources that applied the same methodology to calculate market value for the different use cases. For the analysis of the structure of the EU market for AI, the study team relied on recognised databases consolidating data on companies manufacturing and/or operating AI applications in the EU.

    - Collection of data on the costs and time required to claim compensation under current liability rules, as a means of approximating victims’ difficulties in claiming compensation due to the specific characteristics of AI: The study team consulted legal experts in 13 Member States and the UK with long-standing experience in civil liability cases related to information and communication technologies and novel technologies. The legal experts estimated the costs and time required by lawyers and technical experts in “traditional” cases of damage, as well as in cases involving AI applications. The data collection was complemented by desk research.

    - Data collection on the impact of liability-related legal uncertainty and fragmentation on the roll-out of AI: The study team principally used two methods to collect data for this task: (i) a survey targeted at European trade associations and companies with activities linked to AI, in particular the operation, production and development of AI systems in relation to the six selected use cases; (ii) interviews with trade associations, companies and experts. The interviews were conducted in a semi-structured manner, with an overall list of questions shared in advance and the possibility of addressing additional topics during the interviews.

    - Data collection on the impact of preliminary policy options: Here as well, the study team used a combination of survey and interviews. In order to receive targeted data across different relevant stakeholder groups, the survey was designed in three different versions: (i) for operators of AI-enabled products and providers of AI-enabled services (including companies that are at the same time manufacturers/developers of AI-systems); (ii) for the legal experts consulted also on the cost of claiming compensation under the current liability rules, to ascertain whether the preliminary policy options were suitable to improve victims’ situation; (iii) for insurance companies.

    The analysis of the data collected relied on traditional triangulation techniques in all tasks of the study. For the market analysis, data collected from databases and market research reports was aggregated to the extent necessary to provide a sound representation of market trends until 2030. The study team organised and hosted a webinar during the month of October 2020. The webinar enabled over 40 participants from various industrial sectors to provide views on the preliminary findings of the study and to debate the major research questions (impact of legal uncertainty, future legal fragmentation, and preliminary policy options). The webinar allowed the study team to confirm and adjust the findings and provided useful input to orientate the subsequent data collection activities.

    (c) Methodology to assess the challenges of AI for victims of damage, based on the costs of claiming compensation

    As a basis for gathering cost estimates as well as input on specific challenges associated with claiming compensation for damage caused by AI, the study team presented the respondents with hypothetical liability scenarios. Those scenarios involve, in a first step, the causation of damage by ‘traditional’ technologies (that is to say, technologies without autonomous AI-enabled functions) for which the AI-enabled use-cases presented under Section 3.1 above are, or are expected to become, available as functionally equivalent alternatives. Respondents were asked for data on the average/range of fees required by them and by technical experts to work on traditional liability cases, the average/range of hours needed to deal with this type of case, and the likelihood that the victim obtains a favourable judgment (i.e. obtains a fair and reasonable amount of compensation).

    In a second step, the responses gathered on those ‘traditional’ liability cases were juxtaposed with respondents’ estimates and input regarding the same liability scenarios, but this time involving the AI-enabled use-cases. This approach enables a targeted comparison with a view to pinpointing the effects of AI-specific challenges under existing liability rules.
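    Purely for illustration, the comparison logic described above can be expressed as a simple calculation: the estimates gathered for a ‘traditional’ scenario are set against those gathered for the corresponding AI-enabled scenario, yielding ratios for costs, time and the likelihood of a favourable judgment. The following minimal sketch assumes hypothetical placeholder figures and variable names; none of the numbers stem from the study.

        # Minimal sketch of the 'traditional vs AI-enabled' comparison (hypothetical placeholder figures).
        traditional = {"lawyer_hours": 40, "expert_fees_eur": 5_000, "likelihood_of_success": 0.70}
        ai_enabled = {"lawyer_hours": 90, "expert_fees_eur": 18_000, "likelihood_of_success": 0.45}

        # Juxtapose the two scenarios and compute the relative change for each estimate.
        for item, baseline in traditional.items():
            ratio = ai_enabled[item] / baseline
            print(f"{item}: traditional={baseline}, AI-enabled={ai_enabled[item]}, ratio={ratio:.2f}")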

    (d) Methodology to assess the impact of liability-related legal uncertainty and fragmentation on the roll-out of AI

    - Economic impact analysis based on six use cases: The methodology was designed to estimate the size of the economy in the six use cases (measured in terms of production value and/or gross value added) concerned or affected by legal fragmentation and/or uncertainty regarding liability for AI. The analysis is limited to cross-border trade in this regard, on the assumption that, in a worst case, these economic activities would be reduced if transaction costs due to uncertainty and fragmentation appear too high. The methodology is based on an economic analytical approach involving the following steps (source: Deloitte):

    The market values within the EU27 for each of the six use cases were taken as a starting point for this evaluation. In a second step, extra-EU27 exports are added and extra-EU27 imports are subtracted in order to obtain the production values of the six use cases. As a next step, these production values were multiplied by two shares to determine, firstly, the production value which crosses borders within the EU27, and secondly, the proportion for which uncertainty and fragmentation regarding Member States’ existing liability rules might be relevant. The latter proportion was approximated by applying the share of companies that see liability rules as a challenge to their businesses (based on the Ipsos ‘European enterprise survey on the use of technologies based on artificial intelligence’ of 2020). The study team thus obtained the EU27 intra-trade production (for the six use-cases) potentially affected by liability issues regarding AI.
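    Schematically, and purely for illustration, these steps amount to the following calculation; every figure below is a hypothetical placeholder and does not reproduce the study’s data.

        # Schematic sketch of the step-wise calculation described above (hypothetical placeholder figures).
        eu27_market_value = 10_000.0      # EU27 market value of one use case, EUR million (placeholder)
        extra_eu27_exports = 1_500.0      # exports to outside the EU27 (placeholder)
        extra_eu27_imports = 2_000.0      # imports from outside the EU27 (placeholder)

        # Step 1: derive the EU27 production value.
        production_value = eu27_market_value + extra_eu27_exports - extra_eu27_imports

        # Step 2: share of that production which crosses borders within the EU27 (placeholder).
        intra_eu_trade_share = 0.25

        # Step 3: share of companies that see liability rules as a challenge to their businesses,
        # approximated in the study by the Ipsos (2020) enterprise survey (placeholder value here).
        liability_concern_share = 0.33

        affected_intra_eu_production = production_value * intra_eu_trade_share * liability_concern_share
        print(f"Intra-EU27 production potentially affected: {affected_intra_eu_production:.0f} EUR million")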

    It should be noted that, even though cross-border business might be more affected by legal uncertainty – e.g. for companies entering regional markets outside their home country – legal uncertainty regarding the existing national liability rules can also be relevant within individual Member States. However, to estimate the share of the economy affected by liability-related impacts – whether legal uncertainty or fragmentation – in a consolidated approach, this analysis was limited to cross-border related production in the internal market. Therefore, the approach likely leads to a certain under-estimation of the size of the economy affected (measured in terms of the production value).

    - Economic analysis of the implications of liability-related legal uncertainty and fragmentation on the roll-out of AI across all relevant market sectors: As the identified AI-specific problems concerning liability rules are relevant beyond the six use cases across various sectors of the EU economy, the above methodology was complemented by a broader analysis of the economic impacts of legal uncertainty and fragmentation with respect to liability for AI. For this purpose, firstly, all sectors which include relevant AI-enabled applications were identified based on existing research and other relevant analytical findings. The Ipsos European enterprise survey of 2020 was again taken into account in this framework.

    In a second step, the percentage of AI-enabled applications in each sector was determined as a share of the total AI market size in the EU. 250 As a third step, the share of these AI-enabled applications that is affected by liability-related issues (legal uncertainty and legal fragmentation) was determined on a sectoral level. For this purpose, the ability to cause physical harm, property damage or other types of damage that may be compensable under Member States’ liability rules (such as immaterial harm and pure economic loss) was taken into account based on existing research:

    Figure: Share of AI-enabled applications potentially affected by liability-related issues per sector (Source: Deloitte estimation based on Ipsos (2020) and analysis of existing research)

    Having performed these three steps, general conclusions regarding the impact of liability-related issues on the wider roll-out of AI per sector were drawn, and tendencies for future development outlined.
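    On the same purely illustrative basis, the sector-level reasoning can be sketched as follows; the sector names, shares and total market size are hypothetical placeholders rather than the study’s figures.

        # Hypothetical per-sector sketch of the three-step reasoning described above.
        total_eu_ai_market = 50_000.0  # total EU AI market size, EUR million (placeholder)

        # Step 2: share of AI-enabled applications per sector, as a share of the total AI market (placeholders).
        sector_share_of_ai_market = {"manufacturing": 0.20, "healthcare": 0.15, "transport": 0.10}

        # Step 3: share of those applications potentially affected by liability-related issues (placeholders).
        share_affected_by_liability = {"manufacturing": 0.60, "healthcare": 0.80, "transport": 0.70}

        for sector, market_share in sector_share_of_ai_market.items():
            affected = total_eu_ai_market * market_share * share_affected_by_liability[sector]
            print(f"{sector}: AI market potentially affected by liability issues = {affected:.0f} EUR million")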

    The implications for SMEs were analysed specifically, taking into account the share of start-ups and other SMEs within the various sectors.

    (e) Methodology to assess the impact of policy measures

    - For the purposes of the economic analysis, preliminary policy options were determined in coordination with the Commission’s steering committee.

    These preliminary policy options comprised the same tools and elements as the policy options assessed in this IA, but the structure and logic underpinning the policy options evolved in certain respects following the conclusion of the study. In particular, the consultant did not examine specifically the ‘staged approach’, involving a harmonisation of proof-related aspects in a first stage and a targeted review of the need to harmonise strict liability for AI in a second stage. Moreover, measures to ease the burden of proof were analysed in combination with strict liability and mandatory insurance as part of one policy option, whereas these elements were also considered as separate approaches for the purposes of this IA. The results of the economic analysis of policy options are nevertheless useful to inform this IA, as they cover the same tools and approaches, albeit in a partly different composition.

    - The costs and benefits of the preliminary policy options were analysed against the baseline (no EU action) for four separate stakeholder groups: (i) victims of damage caused by AI applications; (ii) potentially liable parties, in particular operators or potential operators of AI applications; (iii) insurance companies; and (iv) wider society and economy.

    3.3. Key findings 251

    (a) Challenges faced by victims of damage caused by AI-based products or services

    Under Member States’ current non-harmonised civil liability rules, because of the higher autonomy, opacity and complexity of AI, victims of damage caused by AI are likely to be in a comparatively more difficult situation than victims of damage caused by non-AI applications. The involvement of AI will substantially increase the burden on victims, leading even to a multiplication of costs in some Member States (in particular due to drastically increased costs for expert analyses). The AI-related burden is expected to increase significantly more when fault-based liability applies, especially if the burden of proof lies with the claimant. This is mainly due to the increased difficulty in identifying the liable party, proving fault and establishing the required causal link between a faulty behaviour and the damage. The increasing degree of AI autonomy presents a significant challenge because it fundamentally affects the viability of today’s civil liability regimes, which normally require the victim to prove that the damage was caused by human fault. Furthermore, with respect to AI’s characteristic of opacity, the consulted experts submitted that the need to have a thorough understanding of the algorithms and the data flows behind the functioning of AI in order to correctly assess liability claims calls for specialised expertise. The study team’s analysis has confirmed that the estimated cost and time required to make a liability claim are relevant proxies that can be used to measure the challenges faced by victims in AI-related cases.

    (b) The effect of present and probable future challenges of AI regarding national civil liability rules on the functioning of the internal market in AI-based products and services

    Significant growth is expected for AI in all sectors over the next 10 years. Almost all AI technologies can be considered as cross-sector technologies, the adoption being especially high in equipment optimisation, anomaly detection and process automation technologies. Despite this, the roll-out of AI-technologies in the internal market could be impacted by a number of challenges that businesses will face in the coming years. Aside from the obstacles directly linked with legal uncertainty and fragmentation under existing liability rules, one challenge relates to data-sharing rules, as a significant amount of data is necessary for software developers to train the technology. The roll-out of AI-based applications could also be hampered by a lack of trust from consumers. For the cross-border trade of AI, there is hence significant unused internal market potential, with no region emerging as a leader in terms of cross-border trade. This finding applies to a particular degree with respect to micro-enterprises (start-ups) and other SMEs, which have a strong presence in the AI market (around 90-95% of all companies) but are also more affected by barriers to cross-border trade.

    Legal certainty is key for businesses to invest and market their products, because it enables them to understand their rights and obligations, to seek redress for violations of such rights, and to foresee the risk of their economic activity. Legal certainty is especially relevant for the roll-out of emerging technologies, such as AI. The existing research confirms that liability law plays a key role as an instrument to control the risk associated with the development and use of AI. However, based on existing research findings and consultation of stakeholders, there is widespread uncertainty within the business world as to the application of national civil liability rules to damage caused by AI. Business stakeholders consulted for this study overall indicated uncertainty about the application of liability rules to the operation of AI applications and the differences in their interpretation in the Member States, making it difficult to understand their liability risk. They submitted that this would impact their operations and possibly their uptake of AI-products and services. These views are not representative as they come from a limited set of respondents, but they confirm the assessment, based on available research, of how the peculiar features of AI may challenge the application of traditional liability rules. In principle, businesses tend to favour legal harmonisation over fragmentation, because the more fragmented and diverse a legal framework is, the more costly compliance and the more complex risk assessments become. Accordingly, amongst the limited sample of business stakeholders contributing to this study, a majority feared that national liability rules, once adjusted to the AI context, may be (more) fragmented in the coming years, and that, if this scenario materialised, it would constitute a challenge for their operations.

    More particularly, both stakeholder input and research findings suggest that a significant share of businesses active in the AI field does or will incur various costs to cope with legal uncertainty and fragmentation of national liability rules, namely information and compliance costs, investment and economic transactions costs, insurance costs and opportunity costs in the form of forgone revenue due to hesitation in exploring new AI markets. A key finding of this study is that such challenges are perceived more strongly by SMEs compared to large corporations: this is due to the limited resources available to the former to cope with greatly diverging legal frameworks within the internal market (both in terms of compliance costs, and costs of compensation in case of damage).

    While the impact of legal uncertainty and fragmentation is not yet quantifiable, it is likely that the related costs for businesses would amount to significant obstacles to the functioning of the internal market, affecting start-ups and other SMEs disproportionately. Legal uncertainty and fragmentation of liability rules may hinder the roll-out of AI applications in the EU market. If no action is taken at EU level to mitigate legal uncertainty and fragmentation regarding liability for damage caused by AI, those additional costs would persist, posing barriers to the entry of new firms in the internal market and to the expansion of activities across borders. For micro and small enterprises, the additional cost incurred due to legal uncertainty and fragmentation may be prohibitive or at least reduce their competitiveness in relation to larger companies.

    (c) Impact of possible regulatory solutions at EU level

    The study team concluded that policy options 2 and 3 (see overview above) are the most suitable to improve the current situation for all stakeholder groups, as well as to facilitate the realisation of the societal benefits of AI. While there is at the present stage still a lack of robust evidence and data allowing for precise estimations of impacts, these conclusions rely on a triangulation of stakeholder input, existing research and economic theory.

    Regarding potentially liable parties, in particular operators of AI-applications, Options 2 and 3 are expected to be the most beneficial. Despite higher early compliance costs for the limited number of stakeholders falling under the envisaged insurance obligation who would not otherwise have taken out insurance voluntarily, these options are expected to greatly reduce legal uncertainty, prevent fragmentation, and facilitate consumer acceptance and uptake of AI. It is hence likely that the internal market obstacles resulting from legal uncertainty and fragmentation with respect to liability for damage caused by AI would be effectively addressed through those options, leading to a reduction of legal information and compliance, transaction, internal risk management and opportunity costs. Specifically, associations representing SMEs indicated that the increased certainty enabled by Option 2 would allow their members to save substantial resources currently devoted to compliance and legal representation. The higher risk of exposure to liability under a strict liability regime would be offset by higher legal certainty and lower costs, and also by the prospect of higher revenue stemming from a higher acceptance of AI-systems by reason of a more reliable compensatory regime.

    Insurance solutions would allow limiting liable parties’ financial burden to the annual premium, thus making liability costs predictable and manageable. This would be particularly beneficial for SMEs, including micro-enterprises, which are disproportionately affected by legal uncertainty and fragmentation, and for which a liability event is likely to be more disruptive than for large companies. These effects of insurance would be direct impacts of Options 2 and 3 to the extent that those options envisage a mandatory insurance regime for the operation of AI-applications having a specific risk profile. Such a regime is, however, expected to impose an initial burden on those operators who would not have taken out insurance voluntarily. Mainly due to a lack of historic data regarding the liability risk of AI, but also due to other costs to be recouped by insurers, insurance premiums are likely to be higher than warranted by the actual liability exposure during an initial period. Yet, that initial burden is likely to be mitigated by the safety benefits of AI-systems, which are expected to outweigh premium-driving factors: thanks to higher safety of AI-systems, operators’ insurers are likely to be less exposed to costs of compensation than in the status quo, which would have in turn a positive impact on the level of premiums. Moreover, the proposed Artificial Intelligence Act is expected (notably through documentation requirements for the development and use of AI-systems) to enable a more informed risk-assessment by insurers, who are also likely to manage the initially lower predictability of AI-related risks by relying on re-insurance and tools such as real-life testing of AI-systems, advanced analytics, and catastrophe models adapted to the AI context. Although it is too early to predict the level of competition in the insurance market for AI, as actuarial data becomes available in the medium term and is analysed with the help of advanced analytics, the insurance and re-insurance market for AI-specific liability risks could also become ever more competitive, which would lead to lower premiums. Liability insurance is expected to play an important role also in attenuating possible shifts of liability-related costs connected to the envisaged adaptations of the burden of proof: As potentially liable entities are in many cases likely to be covered by voluntary insurance, their exposure would again be limited to the premiums.

    Options 2 and 3 would also improve significantly the situation of victims of damage caused by AI; these options would address the challenges highlighted with regard to the current legal framework through the harmonisation of strict liability for the operation of AI-applications with a specific risk-profile, and targeted adaptations to the burden of proof (irrespective of the risk-profile) for other AI-applications, whose opacity and high degree of autonomy challenge liability rules and may hinder the path to compensation for victims.

    3.4. Methodological constraints and measures taken to remedy them

    (a) Scope

    The study focused on national rules on non-contractual liability other than direct transpositions of the PLD. This approach was chosen with a view to complementing the information gathered by the evaluation of the PLD 252 and the separate impact assessment study on the review of the PLD, to enable a holistic assessment of the need for policy measures regarding liability for AI. The study report underlined that any EU-level regulatory solution needs to rely on a coordinated assessment of the potential gaps of the PLD and national liability rules. Such a coordinated assessment is delivered by this IA, which explains how the AI-specific problems and policy measures are coherent and complementary with the technology-neutral measures regarding the PLD.

    (b) Low response rate to data gathering surveys and mitigating actions taken

    Despite the efforts by the study team, only a small percentage of the stakeholders contacted – both stakeholders across the various use cases and insurance companies – provided specific input:

    -Some of the stakeholders indicated that it is too early for them to provide opinions on the research questions of the study. This is because, despite planning to use AI in the future (or, in the case of insurance companies, planning to insure AI activities), such stakeholders have not yet had the chance to define concrete business plans and reflect thoroughly on the question of liability.

    -Several trade associations indicated that only a few of their member companies had an informed opinion on this topic, and since the majority of them was not ready yet, they preferred not to participate.

    -Other stakeholders said that given the absence of case law or practical examples of problems arising out of damage caused by AI, it is difficult to make reliable estimates on the impact of uncertainty and fragmentation at the present time. This feedback points in general to some stakeholders’ perception that, since the market for AI is still in its early stages and they have not yet experienced liability-related issues, they are not in a position to assess the legal framework. In turn, they are also not in a position to assess the policy options.

    -Several stakeholders, especially SMEs and associations, although interested in the topic, were forced to decline participation due to their resources being allocated to dealing with the COVID-19 crisis.

    Mitigating measures taken: The surveys were kept open longer than initially planned to allow more stakeholders to show their interest and prepare their replies. Contacts at backup associations and companies were also mobilised to make up for the lack of replies. The study team also proposed alternative semi-structured interviews, based on the same questions included in the survey. This allowed the study team to increase the number of respondents considerably. Given the nevertheless low overall response rate to the surveys and other consultation activities conducted for this study, the conclusions drawn from those surveys and activities should not be considered as statistically representative. However, additional desk research (on other studies, literature, and economic theory) was performed by the study team in order to complement stakeholder input. On this basis, relevant overall conclusions could be drawn despite the limited amount of feedback received.

    (c) Quantification challenges and mitigating actions taken 253

    The study team had difficulties collecting quantitative data – even in the form of orders of magnitude or ranges – from respondents to the surveys and from interviewees. Only a minority of respondents were able to identify the cost categories affected by legal uncertainty and future legal fragmentation, and almost no respondent was able to estimate the amount of costs and burdens due to these challenges. There are two major reasons for this:

    -Some respondents indicated that, even though they do see legal uncertainty and future fragmentation as a challenge, the impact of these challenges on their cost structure is too indirect and widespread to be quantified. Respondents indicated that they do not track their costs at the level of detail needed to isolate the impact of uncertainty and fragmentation.

    -Other respondents pointed to a reason similar to that highlighted above on the low response rate, i.e. that uncertainty and fragmentation are likely to lead to additional costs, but it is too early to measure their impact. This is especially true for legal fragmentation, as the research question concerns potential legal developments.

    Mitigating measures taken: In order to deliver results that are as meaningful as possible despite the lack of quantified data, the study team reinforced the qualitative analysis regarding the impact of legal uncertainty, fragmentation, and of the policy options. For the policy options, the study team also had recourse to literature, market reports and case studies per use case in order to extract numbers to be used as a general benchmark in the cost-benefit analysis.

    4.Explanations on how the European Parliament’s legislative own-initiative resolution on a civil liability regime for AI was taken into account

    4.1.    Context

    On 20 October 2020, the EP adopted a legislative own-initiative resolution requesting the Commission to adopt a proposal for a civil liability regime for artificial intelligence based on Article 114 TFEU. 254 The EP requested the Commission, on the basis of Article 225 TFEU, to submit a proposal for a Regulation on liability for AI and provided a fully-fledged text for the proposed Regulation.

    The EP’s Resolution sees no need for a complete revision of liability regimes, but considers that AI requires specific adjustments to liability regimes to prevent potential victims from ending up without compensation. It regards liability for AI as one of the key aspects within the regulatory framework for AI.

    The Resolution focuses on claims against the operator of AI because the operator is controlling the risk associated with the AI-system and is in many cases the first visible contact point for a potential victim. It uses a broad definition of operator, which covers both the frontend operator (the person who is in control of the operation of the AI system) and the backend operator (the person who continuously defines the features of the AI system and provides data and essential backend support services). The proposed Regulation covers damage to life, health, physical integrity, property and significant immaterial harm resulting in a verifiable substantial economic loss. In this respect, the EP requested the Commission to analyse the legal traditions in all MS and the existing national laws that grant compensation for immaterial harm, in order to evaluate if the inclusion of immaterial harm is necessary and if it contradicts the existing Union or national legal framework. It also requested the Commission to evaluate the need for provisions to prevent contractual non-liability clauses. 255

    Following the suggestions made by the Commission White Paper/Report on AI liability, the EP resolution uses a risk-based approach consisting of strict liability for a limited number of high-risk AI and fault-based liability for all other AI.

    ‘High-risk’ is defined as a significant potential to cause harm or damage in a manner that is random and goes beyond what can reasonably be expected; the interplay between the severity of possible harm or damage, the degree of autonomy of decision-making, the likelihood that the risk materialises and the manner and the context in which the AI-system is being used is taken into account. In order to achieve legal certainty, all AI-systems that fall in the high-risk category would be exhaustively listed in an annex to the Regulation. This annex, i.e. the specification of what is high-risk AI, is left to the requested Commission proposal. The Commission would be given the power to amend the annex through delegated acts.

    According to the EP’s recommendations, both the frontend and the backend operator should be obliged to take out insurance coverage for their harmonised strict liability. To the extent compulsory insurance regimes under EU or national law already cover the operation of the AI-system, the insurance obligation under the draft Regulation would be deemed fulfilled. In the event that the insurer of the operator compensates the victim, any civil liability claim of the victim against another person for the same damage would be subrogated to the insurer.

    For AI-systems other than high-risk ones, the EP’s proposal provides for fault-based liability, unless stricter national laws are in force. In this respect, it follows a minimum harmonisation approach. The fault-based liability is coupled with a presumption of fault of the operator, who can exculpate itself by proving compliance with its duties of care. The draft Regulation defines certain circumstances under which the operator shall not be liable and mentions aspects which shall not allow the operator to escape liability.

    4.2.    How does PO1 take into account the European Parliament’s recommendations?

    PO1 takes up the EP’s recommendations to ease the burden of proof on victims under fault-based liability. It is aligned with the EP’s conclusion that only specific harmonised adjustments to existing liability regimes are necessary to avoid AI-induced compensation gaps. Moreover, like the EP’s proposed provisions, it is without prejudice to national liability and burden of proof rules that are more favourable to the victim (minimum harmonisation approach).

    However, PO1 follows a slightly more targeted approach than that proposed by the EP, in order to intervene only with respect to those specific issues that are linked to the particular challenges of AI. For instance, PO1 would not harmonise the substantive conditions of liability, e.g. the standard of fault (which is included to a certain extent in the EP’s resolution in the form of defences available to the operator). Moreover, PO1 would not go so far as introducing a general reversal of the burden of proof for the use of AI-enabled products or provision of AI-enabled services. Likewise, it would not harmonise elements such as recourse between several liable parties, the types of compensable harm or the calculation of damages in specific cases (such as death). It would instead be limited to targeted alleviations of the exercise of existing national proof-related rules.

    In another respect, PO1 is designed in a more general way than the EP’s proposal: the envisaged alleviations of the burden of proof are not limited to claims against the ‘operator’, but can help the victim to substantiate its claims also vis-à-vis any parties potentially liable for damage caused by AI under national fault-based liability rules.

    These differences are due, on the one hand, to the Commission’s approach of limiting its intervention to measures strictly warranted by the specific challenges posed (only) by the peculiar characteristics of certain AI systems. On the other hand, they follow from the Commission’s objective to ensure that victims who suffer harm caused with the involvement of AI have the same level of protection as victims of harm caused by other technologies. This objective applies across all existing routes of compensation, including fault-based claims against any ‘wrongdoer’, not only vis-à-vis the ‘operator’.

    In addition, PO1 (like the other policy options) would most likely be implemented through a Directive, in order to allow Member States the necessary flexibility to insert the harmonised rules into their civil and procedural legal orders. Legislating by way of a directly applicable Regulation, as proposed by the EP, would achieve more far-reaching harmonisation. However, it would foreseeably be difficult to reach a political compromise on such an instrument, as any rules in the area of civil and procedural law have to fit into Member States’ traditional national systems without creating frictions or inconsistencies. For reasons of subsidiarity and coherence with the existing legal systems, a Directive could be preferable.

    Finally, PO1 does not reflect the EP’s recommendation to harmonise strict liability for certain AI technologies with a special risk-profile. This element, which involves a more far-reaching harmonised intervention into existing liability rules, is taken into consideration by the other policy options.

    4.3.    How does PO2 take into account the European Parliament’s recommendations?

    As regards the envisaged measures to ease the victim’s burden of proof under fault-based liability rules, the explanations regarding PO1 apply also in the context of PO2.

    PO2 would also take up the EP’s recommendations regarding the harmonisation of a strict liability regime for the operation of AI applications with a special risk profile, coupled with mandatory insurance. In order to ensure legal certainty and a clear delineation of the scope of the possible strict liability and mandatory insurance regimes, PO2 could be implemented through a list of specific AI use cases in an Annex, as proposed by the EP. Moreover, the Commission could follow the EP’s recommendation to allow for the amendments of this list, by way of delegated acts, in order to keep the legislative instrument updated in light of fast technological developments in the field of AI.

    By contrast, in line with the Commission’s targeted approach, those elements from the EP’s resolution that are not strictly linked to addressing the specific (proof-related) challenges of AI would not be harmonised under PO2 (e.g. statutory limitation, calculation of damages, recourse between jointly liable parties).

    4.4.    How does PO3 (the preferred policy option) take into account the European Parliament’s recommendations?

    Like PO2, PO3 reflects both elements recommended by the EP, albeit as regards strict liability only in a staged approach. The staged approach takes into account the EP’s call for strictly evidence-based and proportionate, yet future-proof measures.



    Annex 5

    Detailed explanations on how the specific characteristics of AI are challenging existing liability rules

    In the absence of targeted measures to ease the burden of proof or AI-specific strict liability regimes, the specific characteristics of certain AI-systems may make it unduly difficult for victims to meet the burden of proof. 256 This Annex explains in more detail which specific characteristics of AI challenge existing liability rules, and how. For illustrative use-cases elaborated in cooperation with AI experts from the JRC, see Annex 13.

    1. Autonomous behaviour 

    AI systems can increasingly perform tasks with less, or entirely without, direct human intervention. 257 The level of autonomy of AI systems is a continuum, ranging from fully supervised and controlled systems to more independent ones that combine environmental feedback with an analysis of their current situation and can perform tasks without direct human intervention, even when operating in a complex environment. While the high-level objectives of AI systems are always defined by humans, for a number of AI systems their outputs and the mechanisms to reach these objectives are not concretely specified, enabling automated decision-making, within pre-set boundaries, without the involvement of a human operator. An increasing degree of autonomy of the system is one of the key aspects sought by the users of certain AI systems. While it is in some cases technically necessary for the automation of a certain task, and commercially desirable, this characteristic makes it difficult to prove the link between a damaging output of the AI system and the action or omission of a potentially liable person, as required under fault-based liability rules. It is very difficult to attribute the damaging output directly to an action or omission of a potentially liable human if, between the human act of deploying the AI system and the damage caused to the victim, the AI system takes a highly autonomous action that is not pre-determined by a human. Proving such a link is, however, a necessary condition for a successful liability claim.

    2. Opacity/lack of transparency and explainability

    Some AI systems lack transparency because the rules followed, which lead from input to output, are not fully pre-determined by a human. Due to the opacity of certain AI systems, it may not be possible to explain how the output is exactly derived from its input in a given context. In particular, an AI system can be opaque with respect to how exactly it functions as a whole, how the algorithm was realised in code and how the programme actually runs in a particular case, including the hardware and input data. As a result, some AI systems are opaque in a way (‘black box effect’) that other digital systems are not. Where damage happens, the opacity of an AI system can make it very challenging to establish which input data led to a specific damaging output, and how. In the absence of a targeted alleviation of the burden of proof, and targeted rules on the disclosure of information on AI-systems in the context of civil proceedings, this can make it very hard or even impossible for victims to prove the link between the harmful AI output and a human action or omission. These difficulties are compounded by the fact that any human behaviour in developing or using an AI system will be accompanied and followed by built-in processes of the AI system that in turn depend upon a variety of internal and external factors independent from such conduct, for instance input data collected by sensors, which makes it again more difficult to link a harmful output to a human action or omission. 258

    3. Complexity

    Advanced AI models frequently have more than a billion parameters and process very large amounts of data. In addition, AI systems are usually combined together in complex systems in real-world scenarios (e.g. in a robot many AI systems for perception, navigation, control, can be integrated). Due to this multiplicity of elements constituting such AI systems, they may not be understandable in practice for humans. While some AI systems can be comprehensible from an ex-post perspective despite their complexity (e.g. complex rule-based systems), complexity can contribute to a lack of explainability of outputs in other cases. In the absence of a targeted alleviation of the burden of proof, e.g. in the form of targeted presumptions easing the burden of proof, this can make it difficult to establish that a certain wrongful human behaviour triggered the AI output that caused the relevant damage.

    4. Continuous adaptation and lack of predictability

    Some AI systems can ‘learn’ while in use. In these cases, the rules being followed by the system adapt based on the input it receives while in use. Such continuous adaptation will render an AI-system more unpredictable. Machine learning based on continuous adaptation may therefore make it impossible to prove the link between the damaging AI output and a specific cause, in particular the fault of a person. But limited predictability concerns not only AI systems that continuously adapt while in use. Learning-based/data-driven AI systems in general have a probabilistic behaviour as they are based on data that usually does not represent all possible scenarios. Outputs of these systems are highly sensitive to their inputs, and even a small change in the inputs can lead to unpredictable behaviour of the system. Moreover, while recurrent systems that depend on the internal state of an AI system are not necessarily designed for continuous adaptation, they nevertheless produce different outputs for the same inputs at different times. The challenges regarding the attribution of AI output for the purposes of liability claims hence apply more broadly than in the context of continuous adaptation.

    The results of the public consultation overall confirm the assessment that the specific characteristics of AI can make it prohibitively difficult or even impossible for victims to obtain compensation under existing civil liability rules: Amongst responding consumer organisations, NGOs, public authorities, academic/research associations and EU citizens, overwhelming majorities agreed that

    (i) it could be difficult to link damage caused by highly autonomous AI to the actions and/or omissions of a human actor,

    (ii) in the case of opaque and complex AI, it could be difficult for injured parties to prove that the conditions of liability are fulfilled, and

    (iii) because of AI’s specific characteristics, victims may in certain cases be less protected than victims of damage that did not involve AI.

    While responding business stakeholders (business associations and companies/business organisations) were more divided about these points, a relative majority nevertheless agreed with the second statement (41,2 % v. 28,3 % who disagreed), and even an absolute majority with the third (53,6 % v. 26,2 % who disagreed). Only with respect to statement (i), more business stakeholders disagreed (44,7 %) than agreed (35,7 %).

    Annex 6

    Detailed description of the legal context and interplay between existing / proposed legal rules and the AI liability initiative

    Introduction and scope of this Annex

    This Annex contains information on the relevant existing legal instruments of EU and international law, and explains how the proposal on AI liability would complement and interact with the relevant rules.

    Only legislation that is closely linked with the presented proposal is described. While a broad range of EU level rules, for instance under sector-specific safety legislation, can be indirectly relevant for establishing the duties of care for individual liability claims, not all of these rules are described here, as they do not have an immediate impact on the policy objectives and measures covered by this IA.

    Moreover, the existing EU law instruments aimed at addressing fundamental rights risks linked to AI (in particular discrimination risks, data protection risks, and risks linked to the use of AI by large online platforms) and the interplay of the AI liability initiative with the AI Act are covered in detail by dedicated Annexes (Annex 7 and 8 respectively). Therefore, these aspects are not repeated in this Annex.

    Finally, an overview of existing national civil liability rules, characterised by various differences between Member States, is provided in the main part of this IA and therefore not covered in this Annex. For details, reference is made to the Comparative Law Study on Civil Liability for AI, commissioned for this IA. 259

    As the transport field in particular is characterised by various relevant legal instruments at EU and international level, this field is covered in the dedicated section B. Relevant EU law rules from other fields are discussed in section A.

    A. EU law context (except transport-specific instruments)

    1. Product Liability Directive

    1.1. Description

    The Product Liability Directive (PLD) 260 establishes a liability regime for producers where defective products cause personal injuries, death or damage to property of natural persons. It is without prejudice to national provisions relating to non-material damage 261 .

    The PLD covers all types of tangible products, ranging from raw materials to complex industrial products, including emerging digital technology products. It applies only to defective products. According to Article 6 PLD, a product is “defective” when it does not provide the safety that a person is entitled to expect, taking into account: (a) its presentation; (b) its reasonable/expected use; and (c) the time when it was put into circulation.

    The victim does not have to prove a fault of the producer. However, the victim bears the burden of proof regarding the defect in the product, the actual damage and the causal link between the defect and the damage.

    Pursuant to Article 3 PLD, “producer” means the manufacturer of a finished product, the producer of any raw material or the manufacturer of a component part and any person who, by putting his name, trademark or other distinguishing feature on the product presents himself as its producer. 262  

    Under certain circumstances, the producer is exempt from strict liability under the PLD. Article 7 provides for exemptions in six cases. With respect to liability for damage caused by AI, the ‘later defect defence’ and the ‘development risk defence’ are particularly relevant. In accordance with the ‘later defect defence’, the producer is not liable if he proves that it is probable that the defect which caused the damage did not exist at the time when the product was put into circulation by him or that this defect came into being afterwards. Based on the ‘development risk defence’, the producer is not liable if he demonstrates that the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered.

    1.2. PLD review and interplay with the AI liability initiative

    In the framework of the pending PLD review assessed in parallel to the AI liability initiative, a number of horizontal (i.e. not AI-specific) amendments of the Directive are envisaged, notably to modernise it and adapt it to the digital age. These measures and the interplay with the AI-specific measures covered by this impact assessment are explained in detail in the impact assessment itself and are therefore not repeated in this Annex.

    2. Rome II Regulation

    2.1. Description

    The Rome II Regulation 263 sets out the conflict-of-law rules for non-contractual obligations. As a general rule, the law applicable to a non-contractual obligation arising out of a tort/delict is the law of the country in which the damage occurs (Article 4). It is irrelevant in which country the event giving rise to the damage occurred, or in which country or countries the indirect consequences of that event occurred. In situations where the liable person and the victim both have their habitual residence in the same country at the time when the damage occurs, the law of that country applies. Finally, in situations where it is clear from all the circumstances of the case that the tort/delict is manifestly more closely connected with a country other than that where the damage occurred or in which both the liable person and the victim have their habitual residence, the law of that other country shall apply.

    For claims based on product liability, the Rome II Regulation determines the applicable law by means of a cascade system, where the first element to be taken into account is the law of the country in which the person sustaining the damage had his or her habitual residence when the damage occurred, if the product was marketed in that country. The other elements are triggered if the product was not marketed in that country and, in essence, the law applicable to the claim depends on the place where the product was marketed (Article 5). If the liable person under the product liability regime and the victim both have their habitual residence in the same country at the time when the damage occurs, the law of that country applies. Where there is a manifestly closer connection to another country, the law of that country applies.

    2.2. Interplay with the AI liability proposal

    The AI liability proposal does not contain provisions determining the applicable law, while the Rome II Regulation does not contain provisions on the substantive conditions of liability or proof-related rules. The two instruments hence complement one another without overlapping. Indeed, the application of the Rome II Regulation leads to a situation where various different national liability rules can apply to the same type of AI-induced damage. In such a situation, the expected emergence of AI-specific national liability rules would lead to legal fragmentation and obstacles to cross-border trade in AI-enabled products and services (e.g. due to the lack of affordable cross-border insurance cover, increased legal information, financing and risk management costs). The fact that Article 14(1) of the Rome II Regulation allows parties to agree to submit non-contractual obligations to the law of their choice in certain cases does not negate these problems. Firstly, contractual choice of law clauses are permissible only in B2B relationships, not vis-à-vis parties not pursuing a commercial activity. Secondly, even in B2B relationships, such clauses are by nature only possible if parties are already in a direct commercial relationship before the damage occurs; the legal fragmentation problem can therefore not be resolved by such clauses when it comes to third party damage.

    However, when the victim is a third party (business or private individual) who is not in a contractual relationship with the liable party before the damage happens, the applicable law will almost always be determined by the general rules of the Rome II Regulation. The liable party and a third party victim are unlikely to agree contractually to choose a certain law after the damage has already happened, as the law applicable by default will always be more advantageous for one or the other party than alternative laws.

    The AI liability proposal aims at preventing the internal market barriers linked inter alia to fragmentation between future AI-specific national liability rules.

    3. E-Commerce Directive and Digital Services Act: Liability exemptions

    3.1. Description

    The E-Commerce Directive 264 establishes certain limitations of the liability of intermediary service providers. In particular, “hosting”, “caching” and “mere conduit” service providers are exempted, under certain conditions, from liability for the third-party information they transmit or store. These exemptions cover only cases where the activity of the information society service provider is limited to the technical process of operating and giving access to a communication network over which information made available by third parties is transmitted or temporarily stored, for the sole purpose of making the transmission more efficient. They are based on the consideration that such activities are of a mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored.

    The proposed Digital Services Act 265 maintains these liability exemptions and transfers them into a Regulation.

    3.2. Interplay with the AI Liability Proposal

    The AI liability proposal does not address the substantive conditions under which providers of intermediary services are liable. It hence does not affect the existing liability exemptions.

    4. Directive 2004/35/EC on environmental liability

    4.1. Description

    The Directive on environmental liability 266 (ELD) establishes a framework based on the polluter pays principle to prevent and remedy environmental damage.

    The Directive defines "environmental damage" as damage to protected species and natural habitats, damage to water and damage to soil. Operators carrying out dangerous activities listed in Annex III of the Directive fall under strict liability (no need to prove fault). Operators carrying out occupational activities other than those listed in Annex III are liable on a fault basis for damage to protected species or natural habitats. The establishment of a causal link between the activity and the damage is always required. Affected natural or legal persons and environmental NGOs have the right to request the competent authority to decide about remedial action.

    4.2. Interplay with the AI Liability proposal

    As the ELD deals with environmental damage, it is based on the powers and duties of public authorities, as distinct from a civil liability system for, for instance, damage to property, economic loss or personal injury. Conversely, the AI liability proposal deals only with civil liability. The two instruments are thus complementary without overlapping.

    5. Antitrust Damages Directive 2014/104/EU

    5.1. Description

    The Antitrust Damages Directive 267 helps citizens and companies to claim damages if they are victims of infringements of EU antitrust rules, such as cartels or abuses of dominant market positions. The Directive pursues two complementary goals: First, it removes practical obstacles to compensation for all victims of infringements of EU antitrust law. For this purpose, it notably establishes harmonised rules on the disclosure of evidence from the defendant or a third party. Second, the Directive fine-tunes the interplay between private damages actions and public enforcement of the EU antitrust rules by the Commission and national competition authorities.

    5.2. Interplay with the AI Liability proposal

    The AI Liability Proposal and the Antitrust Damages Directive largely cover different issues. While the former is aimed at addressing the particular difficulties of proof linked to the specific characteristics of AI, the latter facilitates (stand-alone and follow-on) actions for damages to ensure the effective compensation of harm caused by infringements of EU antitrust rules.

    In any event, the approaches of these two instruments are aligned. Both are meant to ensure the effectiveness of EU law rules and existing rights in civil actions for damages before national courts. While the AI Liability Proposal is linked to the requirements established by the AI Act and designed to ensure that victims can claim compensation where non-compliance with these requirements causes damage, the Antitrust Damages Directive supports the application of competition rules by national courts when ruling on disputes between (alleged or declared) competition law infringers and (alleged) injured part(ies). Both instruments recognise the crucial importance of access to information. Therefore, the AI Liability proposal harmonises the conditions for victims’ access to information about AI-systems documented or logged in accordance with the AI Act, and the Antitrust Damages Directive contains a similar provision (although not linked to AI systems in particular) applying in the context of actions for damages based on infringements of antitrust rules.

    6. Directive 2004/48/EC on the enforcement of intellectual property rights

    6.1. Description

    Directive 2004/48/EC on the enforcement of intellectual property rights 268 provides for a minimum set of harmonised measures, procedures and remedies facilitating the civil enforcement of intellectual property rights (IPR) across the internal market. Similarly to the approach of the Antitrust Damages Directive, Directive 2004/48/EC notably provides for the possibility, in the framework of proceedings concerning an IPR infringement, to order the disclosure of evidence lying in the control of the opposing party. In addition, Directive 2004/48/EC requires Member States to ensure that the holder of an infringed IPR can claim damages appropriate to the actual prejudice suffered as a result of the infringement.

    6.2. Interplay with AI Liability proposal

    Both the AI Liability Proposal and Directive 2004/48/EC are meant to leverage ‘private enforcement’ to ensure the effectiveness of EU law rules and existing rights. The two instruments complement one another in cases where highly autonomous, complex, opaque and unpredictable AI systems cause IPR infringements in contravention of EU law. In such cases, the targeted alleviation of the burden of proof as regards the question of how or why the relevant AI system reached the harmful output will make it easier for the IPR holder to claim compensation. In particular, the victim will be relieved from having to establish that a specific action or omission of the defendant was the causal trigger of the output that caused the IPR infringement, because this proof will often be prohibitively difficult or even impossible to provide under existing fault-based liability rules.

    7. Directive (EU) 2020/1828 on representative actions

    7.1. Description

    Directive (EU) 2020/1828 on representative actions for the protection of the collective interests of consumers 269 empowers organisations or public bodies designated by EU countries to seek injunctive or redress measures on behalf of groups of consumers through representative actions (including cross-border representative actions). This includes seeking compensation from traders who infringe consumer rights in areas such as financial services, travel and tourism, energy, health, telecommunications and data protection, as appropriate and available under EU or national law. The Directive leaves it to MS’ discretion whether the representative action can be brought in judicial or administrative proceedings, or both, depending on the relevant area of law or relevant economic sector.

    7.2. Interplay with AI Liability proposal

    The AI Liability proposal does not address the question of representative actions (or standing in general). When it comes to civil proceedings, the AI Liability proposal would create synergies with the Directive on representative actions. Insofar as damage caused by AI systems is at stake, the measures to ease the burden of proof would ensure the effectiveness of existing claims also in cases where victims are represented by consumer organisations or bodies in accordance with Directive (EU) 2020/1828.

    B. EU and international legal instruments from the transport field

    1. Road transport

    1.1. Motor Insurance Directive

    (a) Description

    The Motor Insurance Directive (MID) 270 pursues the objectives of protecting victims of motor vehicle accidents and facilitating the free movement of motor vehicles between Member States. All Member States have to ensure that civil liability for the use of vehicles is covered by insurance and that victims of an accident caused by a vehicle enjoy a direct claim against the insurer covering the person responsible against civil liability.

    Directive (EU) 2021/2118 has amended certain provisions of the MID; the national measures transposing those amendments are to apply from 23 December 2023. This MID review has not introduced any AI-specific measures, such as for autonomous vehicles. Indeed, the last evaluation of the Directive concluded that no changes were necessary regarding autonomous vehicles. Such vehicles will require third party liability insurance in line with the Directive. 271  However, the Commission will submit a report, by December 2030, evaluating the application of the Directive specifically with regard to technological developments, in particular autonomous and semi-autonomous vehicles.

    (b) Interplay with the AI liability initiative

    The policy measures envisaged under the AI liability initiative do not overlap with the MID. The MID does not harmonise issues of civil liability or the burden of proof. Insofar as the policy options assessed in this IA involve a mandatory insurance regime covering strict liability for the use of certain AI-enabled products or the provision of certain AI-enabled services, the scope of such an insurance obligation would specifically not include AI-enabled products, such as autonomous vehicles, of a category falling under the MID.

    1.2. Regulation (EU) No 181/2011 concerning the rights of passengers in bus and coach transport

    (a) Description

    Pursuant to Article 7 of Regulation (EU) No 181/2011, passengers are entitled to compensation for death or personal injury as well as for loss of or damage to luggage due to accidents arising out of the use of the bus or coach in accordance with national law. The reference to national law covers contractual and extra-contractual liability regimes and does not specify whether the national liability law should be fault-based or strict. The Regulation lays down minimum amounts for liability limits set out in national law (EUR 220 000 per passenger and EUR 1 200 per item of luggage). 272

    (b) Interplay with the AI Liability proposal

    The policy measures envisaged under the AI liability initiative do not overlap with Regulation (EU) No 181/2011. The Regulation does not regulate the substantive liability conditions, liability for third-party damage or the allocation of the burden of proof. As far as liability is concerned, the Regulation refers only to passengers’ right to compensation vis-à-vis bus or coach carriers. While the Commission proposal (COM/2008/817/FINAL) had contained a chapter with detailed rules on the liability of bus or coach undertakings with regard to passengers and their luggage, the Council replaced the proposed harmonised rules with a simple reference to the applicable national law. The effect is that the substantive conditions of liability are not regulated in existing EU law. Moreover, damage suffered by third parties, i.e. accident victims other than passengers, is not addressed in Regulation (EU) No 181/2011 at all.

    1.3. Directive 2010/40/EU on the framework for the deployment of Intelligent Transport Systems in the field of road transport

    (a) Description

    Directive 2010/40/EU establishes a framework for the deployment and use of “systems in which information and communication technologies are applied in the field of road transport, including infrastructure, vehicles and users, and in traffic management and mobility management, as well as interfaces with other modes of transport”. Pursuant to Article 11 of that Directive, “Member States shall ensure that issues related to liability, concerning the deployment and use of ITS applications and services set out in specifications adopted [by the Commission], are addressed in accordance with Union law, including in particular [the PLD], as well as relevant national legislation”. ITS information service providers usually include disclaimers to reduce liability.

    (b) Interplay with the AI Liability proposal

    The policy measures envisaged under the AI liability initiative do not overlap with Directive 2010/40/EU. The Directive does not regulate the substantive liability conditions, liability for third-party damage or the allocation of the burden of proof. Similarly, it does not regulate the liability of road traffic management service providers.

    1.4. UNECE Convention on the contract for the international carriage of passengers and luggage by road (CVR)

    (a) Description

    The CVR is the only international agreement establishing bus and coach passenger rights. Only four Member States have ratified the CVR: HR, CZ, LV and SK. It applies to contracts for the cross-border carriage of passengers and luggage by road. Chapter IV of the CVR provides for strict liability of the carrier 273 for passengers’ personal injuries and damage to luggage connected with the carriage, as well as different liability caps. Force majeure and the passenger’s own fault are laid down as defences. If a third party contributed to the damage, the carrier is jointly and severally liable, without prejudice to recourse claims. Pursuant to Article 18 CVR, ‘in all cases governed by this Convention, proceedings for liability on any grounds whatever may not be instituted against the carrier or against persons for whom he is responsible […] otherwise than on the terms and within the limits laid down in this Convention’. This includes contractual as well as extra-contractual liability claims. However, in the case of gross negligence or wilful misconduct, the carrier cannot invoke the limitations of his liability.

    (b) Interplay with the AI Liability proposal

    Damage suffered by third parties, e.g. physical injury or property damage suffered by accident victims other than passengers or parties to a contract for carriage of goods, is not addressed at all by the CVR. Neither does it cover possible alleviations of the burden of proof borne by the claimant, in particular with respect to facts that may be obscured by AI systems. In addition, the convention does not cover the liability of road traffic management service providers.

    If the Commission were to decide, at the stage of the targeted review envisaged by the preferred policy option on AI liability, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, the user’s/operator’s strict liability might apply to persons and damages that also fall under the CVR provisions. In this respect, it could be clarified that Member States can continue to apply provisions which exclude, fix or limit carriers’ contractual liability in accordance with the CVR also to extra-contractual liability (despite the fact that only a small number of Member States have ratified the CVR).

    1.5. UN Convention on the contract for the international carriage of goods by road (CMR)

    (a) Description

    All EU Member States have ratified the CMR. This Convention covers contracts for the international carriage of goods by road in vehicles. Article 17 provides for no-fault liability of the carrier for loss of the goods and damage thereto, and lays down certain grounds for relief from liability. The carrier bears the burden of proof with respect to those grounds. In cases where loss, damage or delay arising out of carriage under the CMR gives rise to an extra-contractual claim, the carrier may avail itself of the provisions of this Convention which exclude its liability or which fix or limit the compensation due, except in cases of wilful misconduct (Articles 28 and 29). This applies both to the grounds for relief listed above and to other limitations and exclusions.

    (b) Interplay with the AI Liability proposal

    Damage suffered by third parties, i.e. accident victims other than passengers or parties to a contract for carriage of goods, for instance physical injury or property damage, is not addressed at all by the CMR. Neither does it cover possible alleviations of the burden of proof borne by the claimant, in particular with respect to facts that may be obscured by AI-systems. In addition, the Convention does not cover the liability of road traffic management service providers.

    If the Commission were to decide, at the stage of the targeted review envisaged by the preferred policy option on AI liability, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, the user’s/operator’s strict liability might apply to persons and damages that also fall under the CMR provisions. With respect to damage to passengers, luggage or transported goods, the CMR prescribes strict contractual liability of the carrier and certain limits, conditions and defences that cannot be undermined through extra-contractual liability rules. Therefore, if strict liability were to be harmonised at the stage of the targeted review, a provision could be considered clarifying that Member States can continue to apply provisions which exclude, fix or limit carriers’ contractual liability in accordance with the CMR also to extra-contractual liability.

    2. Air transport

    2.1. Regulation on insurance requirements for air carriers and aircraft operators

    (a) Description

    Regulation (EC) No 785/2004 on insurance requirements for air carriers and aircraft operators 274 establishes “minimum insurance requirements for air carriers 275 and aircraft operators 276 in respect of passengers, baggage, cargo and third parties” (Article 1) while flying “within, into, out of, or over the territory of a Member State” (Article 2(1)). However, Article 2(2) excludes some aircraft from its scope, including model aircraft with a maximum take-off mass of less than 20 kg. There is no definition of ‘model aircraft’ and there is divergence between the rules at national level in respect of what is covered by this exemption. In the view of the Commission services, commercially used drones with a maximum take-off mass of less than 20 kg are still subject to insurance requirements because Article 14 of Regulation (EU) 2019/947 requires an insurance policy number according to national or European legislation.

    (b) Interplay with the AI liability proposal

    Similarly to the interplay with the Motor Insurance Directive, the policy measures envisaged under the AI liability initiative do not overlap with Regulation (EC) No 785/2004. The Regulation does not harmonise issues of civil liability or the burden of proof. The future proposal on liability for AI would include targeted alleviations of the burden of proof to ensure that victims are equally protected when AI is involved. Insofar as the policy options assessed in this IA involve a mandatory insurance regime covering strict liability for the use of certain AI-enabled products or the provision of certain AI-enabled services, the scope of such an insurance obligation would specifically not include AI-enabled products, such as some types of autonomous drones, of a category falling under Regulation (EC) No 785/2004.

    2.2. Regulation (EC) No 2027/97 on air carrier liability in the event of accidents

    (a) Description

    Regulation (EC) No 2027/97 imposes unlimited liability on EU air carriers, i.e. licensed air transport undertakings, in the event of death or injury to passengers. In accordance with the Montreal Convention, that liability is strict up to 100 000 special drawing rights. 277 The air carrier can contest and potentially avoid liability above that amount by proving that its conduct was not negligent or otherwise at fault. Different rules apply with respect to damage to passengers’ baggage and delay, which are covered by Regulation (EC) No 2027/97 in conjunction with the Montreal Convention, as well as to cargo and damage occasioned by delay in the carriage of cargo, which are covered only by the Montreal Convention.

    Regulation (EC) No 889/2002 amended Regulation (EC) No 2027/97 to implement the Montreal Convention, which had been ratified by all Member States and the EU. 278 Following the amendment, Regulation (EC) No 2027/97 continues to cover liability of Community air carriers in respect of passengers including passenger delay and their baggage. The scope of the Regulation was extended to cover also domestic flights. The provisions of the Montreal Convention govern the conditions and limits of that liability irrespective of the basis of the claims, be it a contract, tort or otherwise. Regulation (EC) No 2027/97 does not exclude air transport undertakings providing air services by means of unmanned aircraft systems (UASs) from its scope. The Regulation covers damage caused by passenger drones to the passengers. The Montreal Convention covers damage caused to cargo by delivery drones.

    (b) Interplay with the AI Liability proposal

    The policy measures envisaged under the AI liability initiative do not overlap with Regulation (EC) No 2027/97. The Regulation does not address liability for damage caused by UASs to third parties or the liability of parties other than the air carrier for damage caused by UAS, such as aircraft operators not licenced as air transport undertakings or service providers of air traffic management. In addition, it does not cover alleviations of the burden of proof to the benefit of the claimant seeking compensation for damage caused by an UAS.

    The future proposal on liability for AI could apply to autonomous AI-enabled UASs and air traffic management systems. It would adapt national liability rules to ensure that injured persons, including in particular third parties who do not have a contract of carriage, can claim compensation despite the autonomy, opacity, complexity and the lack of predictability and explainability characterising some of the AI-systems used in these technologies. With respect to damage caused to passengers, their baggage, or cargo, there would also be no risk of a conflict between, on the one hand, possible AI-specific alleviations of the burden of proof as envisaged under the preferred policy option, and on the other hand, the existing EU and international rules in the field of air transport. If the Commission were to decide, at the stage of the targeted review, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, it could be clarified, in order to be future-proof, that this possible regime is without prejudice to the liability conditions and limits laid down in the Montreal Convention concerning injury or death of passengers or damage to their baggage. A similar clarification could be envisaged, at that stage, with respect to liability for damage to cargo, because the respective provisions of the Montreal Convention are binding on the EU and all Member States as ratifying parties.

    2.3. Convention for the Unification of Certain Rules for International Carriage by Air (Montreal Convention)

    (a) Description

    All Member States and the EU are parties to the Montreal Convention. Pursuant to Article 29 of that Convention, “in the carriage of passengers, baggage and cargo, any action for damages, however founded, whether under this Convention or in contract or in tort or otherwise, can only be brought subject to the conditions and such limits of liability as are set out in this Convention”. Drones are not excluded from the scope of the Montreal Convention, but as only damage caused in the international carriage of passengers, baggage and cargo is covered, only drones used for those professional purposes are concerned. The Convention provides for air carriers’ liability in the event of bodily injury sustained by passengers or damage to baggage or cargo during the operation of the aircraft or in the course of any of the operations of embarking or disembarking. Different limits are defined depending on the type of damage. With respect to bodily injury, strict liability is capped at 100 000 Special Drawing Rights per passenger whereas the carrier can avoid liability exceeding that cap by proving that the damage was not due to the carrier’s fault, or was due to the fault of a third party.

    Liability for damage to checked baggage or cargo occurring while the damaged items were in the charge of the carrier does not require fault, but various limitations, exclusions or defences apply in that respect. The carrier is notably exempted from liability for damage to cargo if it proves that the damage resulted from certain facts like war, inherent defects of the cargo, etc. Regarding damage occasioned by delay, the carrier is not liable if it proves that it took all reasonable measures to avoid the damage. Fault of the injured party is generally admitted as a defence, for which the defendant bears the burden of proof. The Montreal Convention also contains several provisions on prima facie evidence, namely regarding the conclusion of an air carriage contract, cargo, and receipt of cargo in good condition.

    Parties to the Montreal Convention are to require air carriers to maintain adequate insurance covering their liability under this Convention.

    (b) Interplay with the AI liability proposal

    It is doubtful whether the Montreal Convention covers the liability of UAS operators for damage caused to third parties on the ground. In any event, the Montreal Convention does not cover liability towards third parties, e.g. liability vis-à-vis a passer-by who is injured by an autonomous delivery drone during a landing manoeuvre, due to an erroneous output of the drone’s AI-enabled perception system. Neither does it cover the liability of entities other than the air carrier, such as aircraft operators not licenced as air transport undertakings, or air traffic management service providers. In addition, the Convention does not regulate alleviations of the claimant’s burden of proof regarding substantive liability conditions that could be obscured by the use of AI.

    If the Commission were to decide, at the stage of the targeted review, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, it could be clarified, in order to be future-proof, that this possible regime is without prejudice to the liability conditions and limits laid down in the Montreal Convention concerning injury or death of passengers or damage to their baggage. A similar clarification could be envisaged, at that stage, with respect to liability for damage to cargo, because the respective provisions of the Montreal Convention are binding on the EU and all Member States as ratifying parties.

    2.5. Implementing Regulation (EU) 2021/664 on a regulatory framework for the U-space

    (a) Description

    The conditions under which U-space service 279 providers are liable for damage caused in the provision of such services are not harmonised. Pursuant to Article 15(1)(j) of Implementing Regulation (EU) 2021/664, U-space service providers are required to have in place arrangements to cover liabilities related to the execution of their tasks appropriate to the potential loss and damage.

    (b) Interplay with the AI Liability proposal

    The policy measures envisaged under the AI liability initiative do not overlap with Implementing Regulation (EU) 2021/664. That Implementing Regulation does not address liability for damage caused by UASs to third parties or the liability of parties other than the air carrier for damage caused by UAS, such as aircraft operators not licenced as air transport undertakings or service providers of air traffic management. In addition, it does not cover alleviations of the burden of proof to the benefit of the claimant seeking compensation for damage caused by an UAS.

    2.6. Implementing Regulation (EU) 2017/373 laying down common requirements for providers of air traffic management/air navigation services and other air traffic management network functions

    (a) Description

    The conditions under which providers of air traffic management services are liable for damage caused in the provision of such services are not harmonised. Pursuant to Implementing Regulation (EU) 2017/373 (Annex III, ATM/ANS.OR.D.020 Liability and insurance cover), air navigation services providers, air traffic flow management providers and the Network Manager are to have in place “arrangements to cover liabilities related to the execution of their tasks in accordance with the applicable law”. In addition, “air navigation services and air traffic flow management providers and the Network Manager, which avail themselves of services of another service provider shall ensure that the agreements that they conclude to that effect specify the allocation of liability between them”. In this way, insurance cover is required to address liability risks borne by air navigation service providers. Legislation concerning Air Traffic Management Data Service Providers (ADSPs) is still under development.

    (b) Interplay with the AI Liability proposal

    The preferred policy option envisaged under the AI liability initiative does not overlap with Implementing Regulation (EU) 2017/373. That Implementing Regulation does not cover alleviations of the burden of proof to the benefit of the claimant seeking compensation for damage caused by an UAS.

    2.7. Convention on Damage Caused by Foreign Aircraft to Third Parties on the Surface (1952 Rome Convention)

    (a) Description

    BE, ES, IT and LU are parties to the 1952 Rome Convention. It covers damage to third parties on the earth’s surface caused by an aircraft in flight or by any person or thing falling therefrom. The “operator” is strictly liable. The Rome Convention does not contain any reference to UASs. However, some scholars interpret its provisions as applicable to all kinds of aircraft or ‘air vehicles’, provided they are ‘usable for transport’. 280  The operator is defined as the “person who was making use of the aircraft”, i.e. the person “using it personally or when his servants or agents are using the aircraft in the course of their employment, whether or not within the scope of their authority” (Article 2(2)). The registered owner of the aircraft is presumed to be the operator, unless s/he proves that some other person was the operator and ensures that other person becomes a party to the dispute. The 1952 Rome Convention contains further detailed provisions on collisions between aircraft, defences, joint and several liability, the extent of liability, and security for operators’ liability. Liability regulated in the 1952 Rome Convention is exclusive. Pursuant to its Article 9, “neither the operator, the owner, any person liable under Article 3 or Article 4, nor their respective servants or agents, shall be liable for damage on the surface caused by an aircraft in flight or any person or thing falling therefrom otherwise than as expressly provided in this Convention”.

    (b) Interplay with the AI Liability proposal

    It is doubtful whether the Convention covers the liability of UAS operators for damage caused to third parties on the ground. While the 1952 Rome Convention, ratified by only four Member States, addresses this kind of damage, it is uncertain whether it can be interpreted as covering UASs. Furthermore, the 1952 Rome Convention covers neither liability for collisions between aircraft in the air nor alleviations of the claimant’s burden of proof regarding substantive liability conditions that could be obscured by the use of AI. Lastly, the Convention only covers international flights, unless the signatory state explicitly declares that it also covers domestic flights in that state. Hardly any unmanned aircraft fly internationally today.

    If the Commission were to decide, at the stage of the targeted review, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, it could be clarified, in order to be future-proof, that this possible regime is without prejudice to the liability conditions and limits laid down in the 1952 Rome Convention. However, it would then need to be ascertained whether that Convention is to be interpreted as covering UASs. The fact that only four Member States have ratified this Convention would also need to be taken into account in this context.

    3. Transport by water

    3.1. Regulation (EU) No 392/2009 on the liability of carriers of passengers by sea in the event of accidents

    (a) Description

    The liability regime in respect of passengers, their luggage and their vehicles and the rules on insurance or other financial security are governed by Regulation (EU) No 392/2009, as well as the Articles of the Athens Convention set out in Annex I of that Regulation and the provisions of the IMO Guidelines set out in Annex II of the same Regulation. It should be noted that the Regulation is not exactly identical to the Athens Convention. 281

    Depending on the volume and type of damage and whether the damage was due to a shipping incident, liability is either strict with certain admissible defences or fault-based. In cases where the damage was due to a shipping incident, the burden of proof regarding fault is shifted to the carrier. Liability rules relate only to incidents that occurred in the course of carriage, for which the claimant bears the burden of proof. Presumptions of fault or neglect of a party or the allocation of the burden of proof to a party do not prevent evidence in favour of that party from being considered.

    While the basis of liability, either in contract or in tort, is not specified in Regulation (EU) No 392/2009 or the provisions of the Athens Convention included in Annex I of that Regulation, the Convention is designed to be the sole legal basis, within its subject matter, for passengers’ claims against the carrier. No action for damages for the death of or personal injury to a passenger, or for the loss of or damage to luggage, can be brought against a carrier or performing carrier otherwise than in accordance with the Athens Convention.

    Compulsory insurance or other security applies to ships licensed to carry more than twelve passengers.

    (b) Interplay with the AI Liability proposal

    The Regulation does not regulate liability for damage, other than loss suffered as a result of death or personal injury to a passenger or loss or damage to luggage, due to an incident occurring in the course of carriage, nor the liability of parties other than the carrier, such as navigation service providers. For example, the liability of the operator of an autonomous AI-enabled vessel for damage, e.g. physical injuries or property damage, suffered by a third party, e.g. a surfer or swimmer rammed by the AI-enabled vessel, would not be covered by the existing EU rules. Instead, EU law leaves the regulation of liability to the tort law of the Member States.

    The future proposal on liability for AI could apply to autonomous AI-enabled ships. It would adapt national liability rules to ensure that injured persons, including in particular third parties who do not have a contract of carriage, can claim compensation despite the autonomy, opacity, complexity and the lack of predictability and explainability characterising some of the AI-systems used in these technologies.

    If the Commission were to decide, at the stage of the targeted review, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, it could be clarified, at that stage, that such rules are without prejudice to the rules on carrier liability for damage to passengers of sea-going vessels and their luggage under Regulation (EU) No 392/2009.

    3.2. Athens Convention relating to the Carriage of Passengers and their Luggage by Sea and the 2002 Protocol

    (a) Description

    The 1974 Convention and its 2002 Protocol address the carriage of passengers and their luggage by sea. They lay down mandatory rules on the substantive conditions of liability (strict v. fault-based), the burden of proof, insurance requirements and defences. According to Article 14, no action for damages governed by the Athens Convention shall be brought against a carrier “otherwise than in accordance with this Convention”. Within its subject matter, the Convention is designed to be the sole legal basis for passengers’ claims against the carrier. Any contractual provision purporting to relieve a person liable under the Convention of liability towards the passenger, to prescribe a lower limit of liability, or to shift the burden of proof which rests on the carrier is void.

    The EU itself and several Member States are parties to the 2002 Protocol to the Athens Convention. Only a few Member States are still parties to the 1974 Convention itself: EE, IE, LU and PL.

    (b) Interplay with the AI Liability proposal

    See previous point on Regulation (EU) No 392/2009.

    3.3. Directive 2009/20/EC on the insurance of shipowners for maritime claims

    (a) Description

    This Directive establishes a legal framework applicable to the insurance of shipowners for maritime claims in order to make economic operators act more responsibly and to improve the quality of merchant shipping. It applies to ships of 300 gross tonnage or more (except warships or State owned or operated ships). Member States are to require that ships flying their flag be insured by their owners and that other ships be insured when they enter ports under the Member States’ jurisdiction.

    (b) Interplay with the AI Liability proposal

    Directive 2009/20/EC does not include any rules on substantive liability conditions or the burden of proof. If the Commission were to decide, at the stage of the targeted review, to lay down a harmonised mandatory insurance regime for certain AI-enabled technologies, overlap with Directive 2009/20/EC would need to be avoided.

    3.4. Regulation (EC) No 391/2009 and Directive 2009/15/EC on ship inspection and survey organisations

    (a) Description

    Regulation (EC) No 391/2009 establishes measures to be followed by organisations entrusted with the inspection, survey and certification of ships for compliance with the international conventions on safety at sea and prevention of marine pollution. 282 Directive 2009/15/EC complements this safety regime by establishing measures to be followed by Member States in relation to those organisations.

    (b) Interplay with the AI Liability proposal

    Regulation (EC) No 391/2009 does not determine liability or the burden of proof. 283 It has solely the preventive purpose of ensuring safety at sea and preventing marine pollution. While Directive 2009/15/EC requires MS to formalise a ‘working relationship’ with ship inspection and survey organisations, inter alia by agreeing on the MS’ entitlement to financial compensation from such organisations if liability arising out of a marine casualty is imposed on that MS vis-à-vis injured parties, the Directive does not determine the conditions or the burden of proof under extra-contractual liability rules.

    The AI liability proposal complements the EU legal framework regarding safety at sea by ensuring effective compensation under extra-contractual liability rules when AI is involved in causing damage.

    3.5. Directive 2009/18/EC on the investigation of accidents in the maritime transport sector

    (a) Description

    Directive 2009/18/EC serves the purpose of establishing the facts that led to a maritime accident and of issuing safety recommendations. Its purpose is to learn from accidents in order to prevent similar accidents from happening in the future.

    (b) Interplay with the AI Liability proposal

    Directive 2009/18/EC and the AI liability proposal are complementary. The former has an exclusively preventive function and clarifies that conclusions and safety recommendations derived from the investigation of maritime incidents should not determine liability or apportion blame. The latter serves to enable effective liability claims.

    3.6. 1910 Brussels Collision Convention

    (a) Description

    23 Member States 284 are parties to the 1910 Brussels Collision Convention, which only lays down principles for collisions between sea-going vessels or between sea-going vessels and vessels of inland navigation. It addresses the compensation for damages caused to the vessels, or to any things or persons on board thereof (Article 1). It includes a fault-based liability regime, for which it regulates the conditions (referring to the fault “of the vessel”). While it does not expressly prohibit concurrent claims under different liability regimes, it might be interpreted as an exclusive regulation of the extra-contractual liability issues within its scope, because pursuant to Article 1 “compensation … shall be settled in accordance with” the provisions of the Convention, and only specific questions, namely the determination of the meaning and effect of any contract or provision of law which limits the liability of the owners of a vessel towards persons on board, are “left to the law of each country” (cf. Article 4(4)). By contrast, issues not regulated by the Convention are left to national law, for instance the question of how to attribute fault and causation to a liable entity and – apart from a prohibition of presumptions of fault (Article 6) – the rules governing the proof of the relevant facts.

    Pursuant to Article 2 of the 1910 Brussels Collision Convention, the damages are borne by those who suffered them if the collision is accidental, caused by force majeure, or if the cause of the collision is left in doubt.

    The Convention does not address situations where a sea-going vessel causes damage to a person or to property, which is not on another vessel.

    (b) Interplay with the AI Liability proposal

    If the Commission were to decide, at the stage of the targeted review, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, liability for collisions involving sea-going vessels could be carved out from this possible strict liability regime to preserve the fault-based regime laid down in the 1910 Brussels Collision Convention. It could also be clarified that the envisaged measures to ease the burden of proof cannot lead to liability if the cause of a collision falling under the scope of the 1910 Brussels Collision Convention is left in doubt, because such an outcome would conflict with Article 2 of that Convention.

    3.7. 1960 Convention relating to the unification of certain rules concerning collisions in inland navigation

    (a) Description

    AT, DE, FR, NL, HU, PL and RO are parties to the Convention relating to the unification of certain rules concerning collisions in inland navigation. It applies to a collision of ships on inland waterways on the territory of a contracting state. It provides for fault-based liability for damage caused to the ships involved and to persons and objects on board. It specifically excludes legal presumptions of fault. Issues not regulated by the Convention are left to national law, for instance the question of how to attribute fault and causation to a liable entity and – apart from the prohibition of presumptions of fault (Article 2(2), second sentence) – the rules governing the proof of relevant facts.

    The Convention does not address situations where a sea-going vessel causes damage, nor any situations where damage is caused to a person or to property, which is not on a vessel of inland navigation.

    (b) Interplay with the AI Liability proposal

    As the preferred policy option on AI Liability does not involve any presumptions of fault, this option is compatible with the 1960 Convention relating to the unification of certain rules concerning collisions in inland navigation.

    3.8. Hague-Visby Rules (International Convention for the Unification of Certain Rules of Law relating to Bills of Lading (Hague Rules, 1924) amended in 1968 by the Protocol to Amend the International Convention for the Unification of Certain Rules of Law relating to Bills of Lading of 25 August 1924)

    (a) Description

    BE, HR, CY (only Hague Rules), DK, DE, FI, FR, HU (only Hague Rules), IE (only Hague Rules), IT, LV, LT, NL, PL, PT (only Hague Rules), SI (only Hague Rules) and SE are parties to this Convention. Pursuant to Article II of the Hague-Visby Rules, under every contract of carriage of goods by sea the carrier, in relation to the loading, handling, stowage, carriage, custody, care and discharge of such goods, is subject to the responsibilities and liabilities and entitled to the rights and immunities set out in these rules. Article IV sets out defences against the carrier’s or the ship’s liability, e.g. with respect to damage arising from neglect of the ship master, latent defects not discoverable by due diligence, acts of God or war, etc., and any other cause arising without the actual fault of the carrier, as well as limits capping the amount recoverable in the case of damage to shipped goods. Pursuant to Article IV bis, these limits and defences apply to contractual and extra-contractual claims against the carrier in respect of loss or damage to goods covered by a contract of carriage.

    (b) Interplay with the AI Liability proposal

    The Hague-Visby Rules guarantee ship-owners, which may also include managers and operators of ships, the right to limit their liability, including their extra-contractual liability vis-à-vis injured third parties. Therefore, if the Commission were to decide, at the stage of the targeted review, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, it could be clarified, at that stage, that these rules are without prejudice to the limitations laid down in, or stipulated in accordance with, the Hague-Visby Rules.

    3.9. Budapest Convention on the Contract for the Carriage of Goods by Inland Waterway (CMNI)

    (a) Description

    BE, BG, CZ, DE, FR, HR, LU, HU, NL, RO, SK are parties to the CMNI. PL and PT have signed but not ratified. The CMNI is applicable to contracts between a carrier and a shipper on the cross-border carriage of goods. The carrier is strictly liable for loss resulting from loss or damage to the goods transported unless he can show that the loss was due to circumstances which a diligent carrier could not have prevented and the consequences of which he could not have averted (Article 16(1)). Claims based on Member States’ extra-contractual tort liability rules may lie independently of the carrier’s liability under the contract of carriage. However, the maximum limits of liability as well as exonerations from liability set out in the CMNI, e.g. where the damage results from acts of the shipper or consignee, apply in any action in respect of loss or damage to or delay in delivery of the goods covered by the contract of carriage, whether the action is founded in contract, in tort or on some other legal ground (Article 22). Any contractual stipulation intended to exclude, limit or increase the liability, within the meaning of the CMNI, of the carrier, the actual carrier or their servants or agents, or to shift the burden of proof is void (Article 25).

    (b) Interplay with the AI Liability proposal

    If the Commission were to decide, at the stage of the targeted review, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, carrier liability for loss resulting from loss or damage to goods transported on inland waterways could be carved out, at that stage, from that possible strict liability regime to avoid conflicts with the Budapest Convention on the Contract for the Carriage of Goods by Inland Waterway. In that eventuality, it could also be clarified that the ratifying Member States can maintain the fault-based liability regime prescribed by international law for collisions in inland navigation, which would be without prejudice to the harmonisation of strict liability vis-à-vis injured parties not on board the colliding vessels. The AI-specific alleviations of the claimant’s burden of proof envisaged under the preferred fault-based liability option do not include presumptions of fault and are therefore compatible with the Budapest Convention.

    3.10. 1957 Liability Limitation Convention

    (a) Description

    FR, NL, ES, BE, DE, PL and PT are parties to the 1957 Liability Limitation Convention. It does not provide for a substantive liability regime, but allows the owner of a sea-going vessel to limit his liability, whatever the basis of liability may be, for loss of life or personal injury of persons on that vessel, or damage to goods carried on that vessel. It also allows the owner to limit liability in respect of loss of life of, or personal injury to, any other person, whether on land or on water, and in respect of loss of or damage to any other property or infringement of any rights, caused by the act, neglect or default of any person, whether or not on board the ship, for whose act, neglect or default the owner is responsible.

    (b) Interplay with the AI Liability proposal

    There is no overlap between the envisaged AI-specific alleviations of the burden of proof and the 1957 Liability Limitation Convention. If the Commission were to decide, at the stage of the targeted review, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, it could be clarified, at that stage, that these rules are without prejudice to the limitations laid down in, or stipulated in accordance with, that Convention.

    3.11. 1976 Convention on Limitation of Liability for Maritime Claims

    (a) Description

    DK, FI, FR, SE, ES and PL are parties to the Convention. It allows the owner, charterer, manager and operator of a sea-going vessel to limit their liability for damage related to the operation of a maritime vessel, whether the basis of liability is contractual or extra-contractual in nature.

    (b) Interplay with the AI Liability proposal

    The same considerations as with respect to the 1957 Liability Limitation Convention apply, cf. previous point.

    3.12. Strasbourg Convention on the Limitation of Liability in Inland Navigation

    (a) Description

    DE, HU, LU and NL are parties to this Convention. It applies to inland waterways and allows the ship owner to limit his liability, whatever the basis of liability may be.

    (b) Interplay with the AI Liability proposal

    The same considerations as with respect to the 1957 Liability Limitation Convention apply, cf. point 3.10.

    3.13. International Convention on Civil Liability for Bunker Oil Pollution Damage

    (a) Description

    Council Decision 2002/762/EC authorised the Member States, in the interest of the EU, to accede to this Convention. This authorisation was necessary because Articles 9 and 10 of the Bunkers Convention affect the Brussels I Regulation. All Member States are parties to this Convention. It regulates the strict liability of the ship owner for damage by oil pollution caused by seagoing vessels and seaborne craft that carry oil but not as bulk cargo. It regulates the ship owner’s liability for oil pollution damage exclusively (Article 3(5)).

    (b) Interplay with the AI Liability proposal

    There is no overlap or conflict between the envisaged AI-specific alleviations of the burden of proof and the International Convention on Civil Liability for Bunker Oil Pollution Damage.

    4. Rail transport

    4.1. Regulation (EU) 2021/782 on rail passengers’ rights and obligations

    (a) Description

    Pursuant to the recast Regulation (EU) 2021/782 on rail passengers’ rights and obligations, the liability of railway undertakings for passengers and their luggage is governed by the provisions of COTIF included in Annex I to the Regulation (see next point for further explanations on COTIF). Pursuant to Article 14 of Regulation (EU) 2021/782, railway undertakings are to be adequately insured or have adequate guarantees to cover their liabilities, in accordance with Article 22 of Directive 2012/34/EU. The latter Article requires adequate insurance or guarantees also for covering liability in respect of third parties.

    (b) Interplay with the AI Liability proposal

    Regulation (EU) 2021/782 does not cover liability for damage caused to third parties, i.e. persons other than passengers or things other than luggage, e.g. a driver or pedestrian hit by an autonomous AI-enabled train. Additionally, it does not address liability of providers of AI-enabled rail traffic management services, e.g. for damage caused by a collision between trains due to conflicting signals given by an autonomous AI-software.

    The future proposal on liability for AI could apply to autonomous AI-enabled trains and rail traffic management systems. It would adapt national liability rules to ensure that injured persons, including in particular third parties who do not have a contract of carriage, can claim compensation despite the autonomy, opacity, complexity and the lack of predictability and explainability characterising some of the AI-systems used in these technologies.

    In certain limited cases, there could be overlaps or conflicts between the measures envisaged as part of the future proposal on liability for AI and Regulation (EU) 2021/782 in conjunction with COTIF. For instance, AI-specific alleviations of the claimant’s burden of proof regarding liability for damage caused by AI-enabled autonomous trains may not be compatible with Article 46(2) of COTIF, pursuant to which the carrier’s liability for articles stowed on the outside of a vehicle transported on the train requires that “it is proved that the loss or damage results from an act or omission, which the carrier has committed either with intent to cause such a loss or damage or recklessly and with knowledge that such loss or damage would probably result”. Moreover, if the Commission were to decide, at the stage of the targeted review, to adopt harmonised rules on strict liability of users/operators of certain AI-enabled technologies, extending that possible strict liability regime to passengers’ luggage in cases where the passenger has not also suffered death or personal injury could conflict with COTIF, because in such cases, COTIF provides for fault-based liability and excludes claims subject to other conditions. Furthermore, if a possible future strict liability regime for AI were to include a closed list of defences available to the operator of the AI-enabled system that caused the damage, the specific defences guaranteed by Regulation (EU) 2021/782 in conjunction with COTIF would nevertheless have to be preserved. In order to ensure legal certainty, the future proposal on liability for AI could clarify, at the stage of the targeted review, that the provisions of Regulation (EU) 2021/782 in conjunction with COTIF take precedence in case of a conflict.

    4.2. Convention concerning International Carriage by Rail (COTIF)

    (a) Description

    The EU acceded to COTIF through an agreement concluded with the Intergovernmental Organization for International Carriage by Rail (OTIF) on 23 June 2011. All Member States except Cyprus and Malta are signatories. COTIF sets out uniform rules concerning contracts of international carriage of passengers (CIV, Appendix A to COTIF) or international carriage of goods (CIM, Appendix B to COTIF) by rail. It provides for the carrier’s strict liability for death or personal injury of passengers. Carriers are exempted from strict liability if the damage was caused by force majeure, by the passenger’s fault or by a third party under certain circumstances. Carriers are also liable for damage caused to passengers’ luggage. Strict liability for damage to luggage only applies if the passenger has also suffered death or personal injury, as well as in the case of registered luggage, under certain conditions. In the other cases, fault-based rules set out in the Convention apply. COTIF also contains provisions specifying what the parties to the various liability claims have to prove.

    With respect to contracts of international carriage of goods, CIM provides for the carrier’s liability for loss or damage resulting from the loss of, or damage to, the transported goods. The carrier is relieved of this liability to the extent that the loss or damage was caused by the fault of the person entitled, by an inherent defect of the goods, force majeure, or other specified risks, e.g. inadequate packaging, the nature of the goods, etc. Article 25 CIM regulates the burden of proof, and Articles 30 et seq. provide for limitations of liability.

    In all cases where these uniform rules apply, any action in respect of liability, on whatever grounds, may be brought against the carrier only subject to the conditions and limitations laid down in these uniform rules (Article 52 of Regulation (EU) 2021/782).

    (b) Interplay with the AI Liability proposal

    For explanations regarding the relationship between COTIF and the measures envisaged in the framework of the AI Liability proposal, see the previous point on Regulation (EU) 2021/782.

    Finally, it should be mentioned, for the sake of completeness, that the EU declaration on the exercise of competence between the EU and its Member States is annexed to the Council decision to sign and conclude the COTIF accession agreement. The EU also provided a list of EU secondary law in the field covered by the COTIF and pointed out that the Union competence is subject to continuous development. The COTIF accession agreement also contains a disconnection clause in Art. 2 according to which “without prejudice to the object and the purpose of [the COTIF] to promote, improve and facilitate international traffic by rail and without prejudice to its full application with respect to other Parties to [the COTIF], in their mutual relations, Parties to [the COTIF] which are Member States of the Union shall apply Union rules and shall therefore not apply the rules arising from [the COTIF] except in so far as there is no Union rule governing the particular subject concerned”.

    Both the declaration of competences and the disconnection clause refer to EU rules that have already been adopted. However, the Union is not bound by that declaration and can exercise its competence and adopt new rules with regard to matters covered by the COTIF. 285 Nevertheless, deviations from CIV seem to require a textual or substantive amendment to the recently adopted Regulation (EU) 2021/782, because pursuant to Article 13 of that Regulation, the liability of railway undertakings in respect of passengers and their luggage is governed by the CIV provisions reproduced in Annex I of the same Regulation.

    4.3. Directive (EU) 2016/798 on railway safety

    (a) Description

    Directive (EU) 2016/798 serves the purpose of establishing the facts that led to a railway accident and of issuing safety recommendations. Its purpose is to learn from accidents in order to prevent similar accidents from happening in the future.

    (b) Interplay with the AI Liability proposal

    Directive (EU) 2016/798 and the AI liability proposal are complementary. The former has an exclusively preventive function and clarifies that conclusions and safety recommendations derived from the investigation of railway incidents should not determine liability or apportion blame. The latter serves to enable effective liability claims.



    Annex 7

    Interplay with the AI Act

    The characteristics of AI were identified in the AI Act Impact Assessment as driving specific problems: safety and fundamental rights risks, difficulties of enforcement, legal uncertainty and fragmentation that can lead to mistrust in AI technologies.

    The AI Act addresses these problems by proposing measures, in line with the EU product safety legislation framework, to:

    -set requirements specific to certain AI systems and obligations on all value chain participants in order to ensure that a limited number of AI systems that contradict EU values are prohibited, that high-risk AI systems placed or used on the EU market are safe and respect the existing law on fundamental rights and Union values, and that certain AI systems that directly interact with citizens are subject to transparency obligations;

    -ensure legal certainty to facilitate investment and innovation in AI by making it clear what essential requirements, obligations, as well as conformity and compliance procedures must be followed to place a high-risk AI system on the Union market or to use it;

    -enhance governance and effective enforcement of the existing law on fundamental rights and safety requirements applicable to AI systems by establishing clear rules for relevant authorities on conformity assessment and ex post monitoring procedures and the division of governance and supervision tasks between national and EU levels;

    -facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

    The AI Act’s general objective is to create the conditions for the development and use of trustworthy artificial intelligence in the Union, an objective shared by this initiative. Given this complementarity, the IA accompanying the proposed AI Act concluded that “only a combination of the AI horizontal framework with future liability rules can fully address the problems listed in this impact assessment specifically in terms of […] legal certainty and single market for trustworthy AI”. It stressed that “effective liability rules will also provide an additional incentive to comply with the due diligence obligations laid down in the AI horizontal initiative, thus reinforcing the effectiveness and intended benefits of the proposed initiative.” 286  

    Therefore, the AI Act is a relevant component of the baseline scenario, because it plays an essential role in creating an ecosystem of trust for AI. The relationship between this proposal and civil liability is described along six main aspects.

    1.Reducing risks and liability-related costs

    Clear and specific safety rules and standards for AI-systems and AI-enabled products are likely to reduce the risk of accidents and thus liability-related costs. They may lead to a reduction of safety risks, facilitate insurance coverage of liabilities and, in the long run, reduce insurance costs. This will be particularly relevant for professional entities covered by the AI Act, such as producers integrating AI in products, as well as all other ‘providers’ and ‘users’.

    In this respect, reference should for instance be made to the AI Act’s requirements on risk management (Article 9), data governance (Article 10), human oversight (Article 14), accuracy, robustness and cybersecurity (Article 15), quality management system (Article 17), conformity assessment (Article 19) and corrective actions (Article 21).

    However, in spite of these enhanced safety rules for AI and the sought-after reduction of accidents, AI-related liability cases – and thus the associated problems – may still arise during the baseline period. As the market for AI-enabled products and services is expected to grow substantially, and AI technologies can be implemented in more and more situations and sectors, the economic and societal relevance of the identified problems will remain significant. The Deloitte study concluded that the sum of the AI market share affected by liability issues will increase at a compound annual growth rate of 44% – 56% until 2025. 287 The safety requirements applicable to AI systems will likely increase safety and reduce risks, but they will not remove liability-related issues, given the expected exponential increase, in absolute terms, of the AI market potentially affected.

    While adding to the overall cost of doing business for manufacturers, liability rules may provide some additional incentive to comply with the due diligence obligations laid down in the AI Act and to prevent damage from occurring.

    2.Setting standards of care relevant for establishing fault

    As explained in section 1.2. above, all jurisdictions in the EU require some misconduct as a prerequisite of fault liability. The benchmark for what counts as ‘misconduct’ is usually set by the legal system as a whole, based on objective standards (broadly, a comparison with what a ‘reasonable person’ would have done). Courts make this assessment after the damage has occurred, but it becomes easier and more predictable if the legal system provides for specific rules of conduct that were breached. These duties of care are usually themselves at least partially introduced to prevent harm (for example traffic regulations). 288

    In this context, the requirements and obligations introduced by the AI Act for providers and users of high-risk AI systems are meant to minimise risks to safety and fundamental rights. Non-compliance with those requirements and obligations could be taken into account by courts when determining whether the potentially liable person acted with fault. One example is the obligations of users under Article 29 of the AI Act: to use the systems in accordance with the instructions of use, to ensure that input data (where it is under the control of the user) is relevant in view of the intended purpose, and to monitor the operation of the AI system. Another example is the obligation of providers of AI systems under Article 21: to take the necessary corrective actions when the AI system is not in conformity with requirements set by the AI Act.

    For non-high-risk AI systems, the AI Act supports the development of voluntary codes of conduct. Where such codes of conduct exist in a specific sector or for specific applications, courts might in principle look to them in order to determine the applicable duties of care.

    The present initiative will incentivise compliance with the duties of care set by the AI Act for high-risk AI systems. For this purpose, it proposes to introduce a presumption of a causal link between non-compliance and the damage, where the requirements and obligations breached were designed to prevent such damage: if the victim can prove that the liable person did not comply with a requirement set by the AI Act which was meant to prevent the damage that occurred, then the court can presume that the non-compliance led to the damage.

    However, in light of the characteristics of AI and the problem drivers explained above, this presumption could be used effectively only if the victim can prove non-compliance with the AI Act. This concerns cases where the non-compliance can be proven from elements external to the AI system itself (for example because the system was used in circumstances not allowed under the instructions of use). Moreover, for non-high-risk AI systems, there are no particular requirements set by the AI Act that could be interpreted as duties of care. Therefore, additional measures are needed to ensure the effectiveness of liability claims.

    3.Information potentially relevant in civil proceedings

    The AI Act provides for information to be included in the technical documentation of high-risk AI systems (Article 11 in conjunction with Annex IV of the AI Act), automatic logging capabilities and traceability (Article 12), as well as transparency and information obligations vis-à-vis users of high-risk AI systems (Article 13) 289 .

    While these measures are conceived to increase transparency for the sake of harm prevention and ex-post checks and improvements, and are useful for that purpose, they are not designed to make it easier for injured persons to substantiate the conditions of existing liability claims despite the involvement of AI. In particular, the AI Act does not give injured persons a direct right to access the documented or logged information. They can however request access to documents held by national authorities or bodies under the conditions defined by the relevant existing rules.

    Having access to such information, in the framework of court proceedings, would help these persons to establish, for instance, that the defendant did not comply with their obligations under the AI Act (e.g. as regards the quality of training data, testing, or human oversight), which would trigger the envisaged presumption of causality included in policy option 1. This means that the court could presume that the non-compliance with the obligation caused the damage and the burden to rebut this presumption would be on the professional operator.

    In discrimination cases, access to parameters and weights applied by an AI-system may also allow the injured person to show – with the help of an expert – that the applied criteria are likely to correlate with criteria protected by the fundamental right to non-discrimination/equal treatment. This could help to meet the prima facie threshold for establishing indirect discrimination under the existing Union or national legislation in that field. Likewise, access to input data, output data and the internal states of an AI-system could enable injured persons to establish – with the help of an expert – which input is most likely to have triggered the relevant (harmful) output.

    Access to logged information on the functioning of AI-systems could thus provide a basis, in the court proceedings, for proving the causal link between a negligent human behaviour and the relevant damage. In accordance with relevant procedural law, the competent national court could order that such disclosure would be subject to stringent safeguards to ensure proportionality and protect the legitimate interests of all parties concerned, for instance confidential information, intellectual property rights and trade secrets.

    4.Explaining how or why an AI system arrived at a certain harmful output

    The requirements of the AI Act are designed to enable the effective monitoring and supervision of ex ante safety requirements. While these requirements have the potential to also contribute to helping victims establish that a specific human wrongful behaviour – or a defect – caused an AI output that caused the damage, they are not conceived for this purpose. They are designed to allow supervisory authorities to understand and monitor AI systems to ensure that only safe products are allowed on the market. Their aim is not for instance to allow victims to prove for the purposes of their liability claims that a specific input led to a specific output, which caused the harm to the victim. Moreover, they are for the most part limited in scope to high-risk AI systems. Therefore, despite the envisaged provisions on the disclosure of information to be recorded or logged pursuant to the AI Act, the specific AI characteristics may still prevent victims from substantiating their liability claim. In particular, the opacity, complexity and autonomy of certain AI-systems can make it very challenging for victims, in court proceedings, to establish the internal decision-making processes of that system, which may be necessary in order to prove the causal link between a human behaviour and the harmful output of an AI-system.

    The targeted alleviation of the burden of proof would ultimately mean, in practice, that it is for the defendant to prove (to the standard required under national law) either that their own behaviour was not at fault (for example that they followed the instructions of use) or that a cause other than their behaviour caused the damage. The defendant is in a better position than the victim to use the technical documentation at their disposal, or which they can procure based on contractual arrangements with the seller or provider, to explain how the AI system reached the harmful output.

    5.AI used in special risk circumstances

    Another avenue to address the difficulty of proving how the use of certain AI led to the damage is strict liability. This would imply that the law determines the person liable for the harm caused by the AI-equipped product or service, irrespective of any fault on the part of that person. The law would usually attribute that liability because the liable person has decided to expose the public to a risk and derives a benefit from it (for example by using that product or by providing that service). In this case, the victim would no longer have to prove the misconduct of the liable person but only the fact that the damage arose in connection with the risk posed by the ‘behaviour’ of the AI.

    Taking into account the diverse existing national strict liability regimes and the need for a proportionate approach, a harmonised strict liability for AI systems would need to consider both the AI systems’ characteristics and the risk of their use in practice, in particular to high-ranking legal interests (life, health or property) of potential victims.

    Therefore, such an analysis would require information about the market developments regarding the rollout of products and services driven by AI-systems with the characteristics that challenge existing liability rules: a high degree of behavioural autonomy (low level of human oversight), opacity (complexity + lack of transparency), continuous adaptation and a lack of predictability. In addition, the assessment would need to take into account the risk-profile and operating environment of the products and services, in particular whether they can cause harm to high-ranking legal interests of the public at large and, to the extent such information is available, the incidence rate of accidents caused by such AI systems.

    An important aspect influencing the rollout of such products and services likely to meet the risk profile for strict liability will be the regulatory measures meant to prevent harm: the AI Act, the General Product Safety Regulation, the MPR, and future measures under the ‘old approach’ safety legislation.

    6.Different legal tools to address risks

    The AI Act, in line with the general approach of product safety legislation, employs tools that address the risks posed by high-risk AI by imposing requirements on the ‘potential wrongdoer’, either with the aim of prevention (authorisations, registrations, certifications) or of compliance control (monitoring, reporting, administrative sanctions, etc.).

    While this approach is appropriate for the objectives pursued by the AI Act, these tools do not provide individual avenues for compensation to those that nevertheless suffered harmful consequences of AI systems. The AI Act measures apply to the entity subject to requirements under the AI Act or to the competent authorities but they will not compensate the victim of harm. This is why the Commission has committed to look also into the issue of liability for AI in the present Impact Assessment and in the Impact Assessment for the PLD.

    Liability tools analysed under the present Impact Assessment have as their primary purpose the compensation of victims that have suffered harm with the involvement of AI. Secondly, they aim to incentivise prudent behaviour and thus compliance with regulatory requirements (including those set by the AI Act). By clarifying the rules applicable to such cases and ensuring the effectiveness of compensation claims, the present initiative aims to support societal trust and the uptake of AI, which are the same objectives as those pursued by the AI Act.

    7.Conclusion

    In line with the Commission’s staged approach and the complementary nature of safety and liability rules, the provisions of the AI Act are designed to reduce and mitigate safety and fundamental rights risks, while the present initiative complements them by facilitating compensation for justified liability claims where AI systems cause harm to victims.

    The AI Act’s envisaged safety, transparency and human oversight requirements are meant to minimise safety and fundamental rights risks and support the roll-out of advanced, i.e. increasingly autonomous, complex, flexible and learning-based, AI-systems. Systems with such characteristics are likely to become more widespread. This also means that the negative effect of liability gaps on societal trust and consumer uptake will become increasingly relevant under the baseline scenario. This situation is illustrated by the case-studies developed together with the Joint Research Centre (Annex 13), where the difficulty of identifying and proving a faulty behaviour of a person as the cause of an accident is explained concretely.

    As questions of liability are not covered by the other already adopted proposals relevant for AI, legal uncertainty regarding the interpretation and applicability of existing liability rules, and the likelihood that fragmented AI-specific liability rules will be adopted by some MS, will persist under the baseline scenario.



    Annex 8

    AI-specific fundamental rights concerns –

    Overall Commission policy approach and the role of the AI liability proposal

    1. Fundamental rights challenges linked to AI

    The use of AI with its specific characteristics (e.g. opacity, complexity, dependency on data, autonomous behaviour) can adversely affect a number of fundamental rights enshrined in the EU Charter of Fundamental Rights (‘the Charter’). In the context of the AI Act, the Commission for instance identified risks regarding the right to human dignity (Article 1), respect for private life and protection of personal data (Articles 7 and 8), non-discrimination (Article 21) and equality between women and men (Article 23). As the specific characteristics of AI challenge existing liability rules – which often require the victim to prove fault in a human action or omission and a causal link between a human’s wrongful behaviour and the damage – they present a risk regarding the right to an effective remedy and a fair trial (Article 47).

    2. Role of the AI liability proposal with respect to fundamental rights risks

    One of the most important functions of civil liability rules is to ensure that victims of harm can claim compensation. By guaranteeing effective compensation, these rules also give an incentive to potentially liable persons to prevent harm, in order to avoid liability.

    Existing civil liability rules provide for the compensation of damage caused by harm to legal interests corresponding to some of the most basic fundamental rights, such as the right to life (Article 2 of the Charter), the right to physical and mental integrity (Article 3), and the right to property (Article 17). In addition, depending on each Member State’s civil law system and traditions, victims can claim compensation for harm to other legal interests, such as violations of personal dignity (Articles 1 and 4 of the Charter), the right to liberty and security of person (Article 6), respect for private and family life (Article 7), the right to equality (Article 20) and non-discrimination (Article 21). 290

    Where existing liability rules allow victims to claim compensation for damage, this same right is in principle available also where AI is involved in causing harm. However, the specific characteristics of AI can make it prohibitively difficult or even impossible to use these existing rights successfully. With the AI liability proposal, the Commission aims to ensure that victims of harm caused by AI have an equivalent level of protection under civil liability rules as victims of damage caused without the involvement of AI. This means that the proposal will enable effective private enforcement – consistent with the existing system of civil liability and compensable harm – of fundamental rights and preserve the right to an effective remedy where the AI-specific fundamental rights risks have materialised.

    2.1. Disclosure of information 

    The envisaged rules on the disclosure of information to be documented/logged under the AI Act will help victims of damage to meet the burden of proof. In the context of pending civil proceedings, these rules would cover information to be logged and documented under the AI Act with respect to ‘high-risk’ AI systems. This includes uses of AI posing a high risk of fundamental rights breaches, in particular regarding the rights to equality and non-discrimination (e.g. AI systems used for the purposes of recruitment and staff matters, access to education, access to essential services through credit scoring, etc.). 291 The proposal on civil liability for AI would enable victims of discrimination to leverage that information to substantiate their compensation claims. If the defendant refuses without justification to disclose information, the facts that might have been proven (e.g. the application of criteria correlating with criteria protected by the right to non-discrimination, or unequal treatment) would be presumed. This rebuttable presumption will for instance help victims of discrimination to meet the prima facie threshold under the existing non-discrimination/equal treatment Directives (see 3.4. below). In accordance with relevant procedural law, the competent national court could order that the disclosure of information would be subject to stringent safeguards to ensure proportionality and protect the legitimate interests of all parties concerned, for instance confidential information, intellectual property rights and trade secrets.  

    2.2. Presumption of causality

    If the victim establishes (where necessary using information to be disclosed in accordance with the first measure described above) that the defendant did not comply with their obligations under the AI Act, it would be presumed that harm of a type the relevant obligations were intended to prevent (e.g. discrimination) was caused by that non-compliance. For example, where the victim shows that the defendant did not comply with their obligation to minimize the risk of discrimination by ensuring high quality training, validation and testing data, it would be presumed that this failure to comply with the AI Act entailed the discrimination for which compensation is sought. This rebuttable presumption could thus help victims to overcome the AI-specific difficulty of establishing a link between specific input parameters and a harmful output, linked to the probabilistic nature and complexity of certain AI systems.

    2.3. Targeted alleviation of the burden of proof

    A targeted alleviation of the burden of proof would ensure that the victim would not have to prove how or why an AI-system arrived at the relevant (harmful) output, as this could be excessively difficult due to the specific characteristics of AI. This measure would apply irrespective of whether the relevant AI-system is qualified as ‘high-risk’ by the AI Act. It could thus help victims of discrimination also outside of the areas identified as high risk in Annex III of the AI Act.

    2.4. Clarifications regarding the purpose and scope of the proposal on AI liability – complementarity with other Commission policy areas

    The scope of the AI liability proposal is limited to matters of civil liability. Civil liability rules are by nature complaint-based and intervene only once damage has materialised. Accordingly, preventive regulatory and supervisory requirements aimed directly at avoiding fundamental rights breaches (such as discrimination), are outside the scope of this initiative. However, as explained in the subsequent sections, preventive and public enforcement tools are delivered by other existing and future policy instruments, namely the AI Act, the General Data Protection Regulation, the Digital Services Act and the non-discrimination/equal treatment acquis. The AI liability proposal complements these other strands of the Commission’s AI policy with a view to ensuring an effective protection of victims’ right to compensation under private law, including in the case of fundamental rights breaches.

    3. Complementarity and synergies with other strands of the Commission policy addressing fundamental rights concerns linked to AI

    3.1. AI Act

    The complementarity and synergies between the AI liability initiative and the proposed AI Act have already been described in the dedicated Annex 7. The present section looks specifically at how these instruments work together to address in particular the fundamental rights risks of AI.

    The AI Act provides a regulatory framework enhancing governance and effective enforcement of existing law on fundamental rights. It seeks to ensure a high level of protection for fundamental rights and aims to address various sources of risks through a clearly defined risk-based approach.

    The AI Act notably establishes a list of prohibited AI practices comprising AI systems the use of which is considered unacceptable as contravening Union values, for instance by violating fundamental rights. The prohibitions notably cover practices that have a significant potential to manipulate persons through subliminal techniques or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm. 292 The proposal also prohibits AI-based social scoring for general purposes done by public authorities. Finally, the use of ‘real time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply.

    With a set of requirements for trustworthy AI and obligations on all value chain participants, the AI Act promotes the protection of the rights enshrined in the Charter. The obligations for ex ante testing, risk management and human oversight will facilitate the respect of fundamental rights by minimising the risk of erroneous or biased AI-assisted decisions in critical areas such as education and training, employment, important services, law enforcement and the judiciary. These mechanisms apply in particular to stand-alone AI systems with fundamental rights implications listed in Annex III of the AI Act. This covers for instance AI systems intended to be used in areas such as access to education, recruitment, access to essential services (e.g. through credit scoring), law enforcement, migration, asylum, border control, administration of justice and democratic processes. 293 For these AI systems, the AI Act establishes a new compliance and enforcement system involving a comprehensive ex-ante conformity assessment through internal checks, combined with strong ex-post enforcement.

    As regards the risk of AI-induced discrimination in particular, the AI Act complements existing Union law on non-discrimination (see 3.4. below) with specific requirements that aim to minimise the risk of algorithmic discrimination, in particular in relation to:

    -the design and the quality of data sets used for the development of AI systems;

    -obligations for testing, accuracy, robustness, risk management, transparency, documentation and human oversight throughout the AI systems’ lifecycle.

    Where infringements of fundamental rights nevertheless occur, the increased transparency and traceability of AI systems afforded by the AI Act, coupled with ex post controls, will improve the conditions for seeking redress. The AI Act notably enables competent authorities, users and other interested persons to exercise enhanced oversight over those AI systems posing high risks to fundamental rights. Moreover, AI providers will be obliged to inform national competent authorities about serious incidents or malfunctioning that constitute a breach of fundamental rights obligations as soon as they become aware of them. National competent authorities will then investigate the incidents or malfunctioning, collect all the necessary information and regularly transmit it to the Commission with adequate metadata.

    However, the AI Act does not include specific provisions on individuals’ claims for compensation for AI-induced fundamental rights breaches. These private enforcement needs will be addressed by the complementary proposal on AI liability.

    3.2. Digital Services Act

    Internet users are exposed to ever-increasing risks and harms online – from the spread of illegal content and activities to limitations on their ability to express themselves and other societal harms. AI-enabled systems play a pivotal role in online services, e.g. for user profiling and content recommendations. Specific groups or persons may be vulnerable or disadvantaged in their use of online services because of their gender, race or ethnic origin, religion or belief, disability, age or sexual orientation. They can for instance be disproportionately affected by (unconscious or conscious) biases embedded in notification systems, as well as replicated in automated content moderation tools used by platforms.

    The proposed Digital Services Act 294 addresses these risks through a modern, future-proof governance framework effectively safeguarding the rights and legitimate interests of all parties involved, most of all Union citizens. The proposal will mitigate discriminatory risks and contribute to the protection of the rights of the child, the right to human dignity as well as the protection of personal data and privacy online. It notably defines clear responsibilities and accountability for providers of intermediary services, and in particular online platforms, such as social media and marketplaces. By setting out clear due-diligence obligations for certain intermediary services, the Digital Services Act seeks to improve users’ safety online across the entire Union and improve the protection of their fundamental rights.

    Recognising the particular impact of very large online platforms on our economy and society, the proposal sets a higher standard of transparency and accountability on how the providers of such platforms moderate content, on advertising and on algorithmic processes. It sets obligations to assess the systemic risks their systems pose and to develop appropriate risk management tools to protect the integrity of their services against the use of manipulative techniques. The Digital Services Act also includes specific obligations relating to recommender systems used by very large online platforms and systems to display online advertising on their online interface.

    The AI liability proposal does not interfere with the liability exemptions laid down in the Digital Services Act. It does not create new claims or harmonise the substantive conditions of existing claims, but comes in where MS’ liability rules do offer an avenue to compensation, i.e. in situations not falling under the liability exemptions harmonised by the Digital Services Act. This might for instance be the case where claims are not based on the alleged illegality of third-party information, but on damage induced by the functioning of the AI systems used by such providers. 295

    3.3. Data protection acquis

    The General Data Protection Regulation (Regulation (EU) 2016/679) and the Law Enforcement Directive (Directive (EU) 2016/680) aim to protect the fundamental rights and freedoms of natural persons, and in particular their right to the protection of personal data, whenever their personal data are processed. This covers the processing of personal data through ‘partially or solely automated means’, which includes any AI system that processes personal data. Users that determine the purposes and means of the processing (‘data controllers’) have to comply with a number of data processing principles such as lawfulness, transparency, fairness, accuracy, data minimisation, purpose and storage limitation, confidentiality and accountability. Natural persons whose personal data are processed have, in turn, a number of rights, for instance the right of access, the right to rectification, the right not to be subject to solely automated decision-making with legal or similarly significant effects unless specific conditions apply, and the rights to obtain human intervention and to contest the decision. Stricter conditions also apply to the processing of sensitive data, including biometric data for identification purposes, while processing that poses a high risk to natural persons’ rights and freedoms requires a data protection impact assessment. Regarding civil liability, any person who has suffered material or non-material damage as a result of an infringement of the General Data Protection Regulation has the right to receive compensation from the controller or processor for the damage suffered (Article 82).

    3.4. Non-discrimination/equal treatment acquis and ongoing work in relation to discrimination risks linked to AI

    (a) Existing instruments

    The EU equality acquis prohibits discrimination based on a number of protected grounds (such as racial and ethnic origin, religion, sex, age, disability and sexual orientation) and in specific contexts and sectors (for example, employment, education, social protection, access to goods and services). This existing acquis has been complemented with the new EU Accessibility Act setting requirements for the accessibility of goods and services, to become applicable as of 2025. Users of AI systems are bound by the following equality Directives:

    -Directive 2006/54/EC relating to the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation;

    -Directive 2004/113/EC relating to the implementation of the principle of equal treatment between men and women in the access to and supply of goods and services, which thus deals with gender equality in the consumption market (the fields of media, advertising and education are excluded from this Directive’s scope of application);

    -Directive 92/85/EEC relating to workplace safety and health for pregnant and breastfeeding women and women who have recently given birth;

    -Directives 2019/1158/EU (work-life balance), 79/7/EEC (social security) and 2010/41/EU (the self-employed) extending the protection of the principle of gender equality in matters related to the labour market;

    -Directive (EU) 2019/882 relating to accessibility requirements for products and services.

    -Directive 2000/43/EC implementing the principle of equal treatment between persons irrespective of racial or ethnic origin (the ‘Racial Equality Directive’), prohibiting discrimination based on racial or ethnic origin in employment, social protection, including social security and healthcare, social advantages, education and the access to and supply of goods and services available to the public. The material scope of this Directive therefore extends beyond that of the gender acquis, as it includes education.

    -Directive 2000/78/EC establishing a general framework for equal treatment in employment and occupation, prohibiting discrimination in the field of employment and occupation based on religion or belief, disability, age and sexual orientation (the ‘Employment Framework Directive’).

    It is important to note that the EU non-discrimination legal framework is not complete. It applies in relation to discrimination based on certain grounds (i.e. gender, racial or ethnic origin, disability, age, religion or belief, and sexual orientation) and in specific areas. When it comes to discrimination based on disability, age, religion or belief and sexual orientation, EU law prohibits discrimination in the area of employment/occupation but not in other areas, such as education, social protection and access to and supply of goods and services. A 2008 Commission proposal for a horizontal equal treatment directive (COM/2008/0426 final) aimed to fill this gap, but the required unanimity has so far not been reached in Council.

    (b) Provisions on redress and burden of proof – interplay with AI liability proposal

    Pursuant to the gender equality and non-discrimination Directives, infringements of the prohibition of discrimination must be met with effective, proportionate and dissuasive sanctions, which may include compensation being paid to the victim. The Directives do not prescribe specific measures and allow Member States to decide on suitable remedies for achieving the objectives pursued. Depending on the legal avenue chosen, these can take various forms, such as a fine, compensation, an injunction for the wrongdoer to perform or refrain from certain action, publicising the wrongdoing, requiring an apology or imposing criminal sanctions.

    Based on the consideration that discrimination is difficult to prove and that the person who may be responsible for the alleged discrimination (e.g. a professional user of an AI system) is typically in possession of the information needed to prove a claim, the Equality Directives envisage a shift of the burden of proof. Once the victim establishes sufficient facts (prima facie evidence) for presuming discrimination, the burden of proof shifts to the defendant, who has to show that the difference in treatment is not discriminatory. The key effect of this adaptation of the burden of proof in non-discrimination cases is that it alleviates the burden on victims to show a clear causal link between the protected ground and the harm. The burden of proof shifts even if the causation between the protected ground and the harm is only probable or likely. The Court of Justice has clarified that the victim can meet the prima facie threshold for instance by providing general statistical evidence of discrimination. 296 The Court also recognised that, although the Directives do not give the victim a right to access relevant information, the fact that an employer refuses to give the victim access to requested information is ‘among the factors which may be taken into account’ to establish a prima facie case of discrimination. 297  

    The defendant can rebut the presumed discrimination either by proving that there was no causal link between the prohibited ground and the differential treatment, or by demonstrating that although the differential treatment is related to the prohibited ground, it has a reasonable and objective justification. If the alleged discriminator is unable to prove either of the two, they will be liable for discrimination.

    As far as civil liability rules are concerned, the AI liability proposal will complement the existing mechanisms to facilitate compensation in accordance with the Equality Directives:

    -the provisions on the disclosure of information about AI systems will help the victim to meet the prima facie threshold to trigger the shift of the burden of proof and

    -the targeted alleviation of the burden of proof as regards the question of how or why an AI system arrived at a certain output will relieve victims of the need to prove a link between the use of a protected ground of discrimination and a discriminating action, even where this is not guaranteed by the existing acquis.

    (c) Further relevant policy measures and ongoing analysis

    The Commission’s pending legislative proposal on pay transparency (a lex specialis to the Gender Equality Directive 2006/54/EC) strengthens the tools for workers to claim their rights and facilitates access to justice. Employers are required to provide pay-related anonymised data upon employee request, and employees will have the right to compensation for discrimination in pay. The existing reversal of the burden of proof under Directive 2006/54/EC is reinforced, to the effect that employers who do not respect their pay transparency obligations would have to prove that there was no discrimination in relation to pay. Therefore, the burden of proof shifts to the defendant without requiring the worker to establish even a prima facie case of discrimination. These provisions are highly relevant with respect to AI-induced algorithmic discrimination, as the (even prima facie) burden of proof would be particularly difficult to meet for victims where unequal pay or other forms of discrimination are triggered by an opaque AI system.

    As part of the ongoing efforts to achieve a ‘Union of Equality for All’, the European Commission has included legislative measures aimed at strengthening the role and independence of equality bodies in its work programme for 2022. These bodies play a key role in upholding the right of all persons to be protected against discrimination. The initiative is scheduled for the second half of 2022. In preparation of this upcoming initiative, the Commission is exploring certain avenues relevant to address possible challenges posed by the use of AI systems that may lead to discrimination. Such avenues include the role of equality bodies in relation to collective complaint mechanisms and strategic litigation, effective mechanisms for cooperation with relevant stakeholders and duty bearers, a reinforced advisory and awareness-raising role, etc.

    Finally, as announced in the EU Anti-Racism Action Plan 2020-2025, the Commission is looking into possible gaps related to the EU legislation prohibiting racial and ethnic discrimination, in particular in the area of law enforcement. The use of AI will be taken into consideration when executing this gaps assessment.



    Annex 9

    The types of compensable (in particular immaterial) harm and the admissibility of contractual exclusions/limitations of liability – Member States’ legal approaches/traditions and reasons for not harmonising these aspects specifically for AI

    Context and introduction

    In its legislative own-initiative resolution on a civil liability regime for artificial intelligence 298 , the European Parliament requested the Commission to evaluate, taking into account existing national rules and legal traditions, whether certain aspects relating to liability for AI should be covered by the future initiative. The Parliament inquired in particular whether:

    -the inclusion of immaterial damage in an AI-specific EU instrument is legally sound and necessary;

    -there is a need to include provisions regarding contractual exclusions/limitations of liability in an AI-specific initiative.

    This Annex provides an overview of the Commission’s findings following up on these requests by the European Parliament, and explains the proposed regulatory choices, taking into account these findings.

    In Section A, Member States’ approaches and legal traditions regarding the compensation for immaterial damage are summarised. For practical reasons and reasons of proportionality, this IA cannot incorporate a full and exhaustive description of the relevant legal framework. Instead, the main findings are highlighted, pointing to illustrative examples of national laws and identifying trends and differences. On this basis, it is explained why the preferred policy option on AI liability does not include provisions on the types of compensable harm.

    Section B reports the main findings regarding the question of contractual exclusions / limitations of liability, and explains why the preferred policy option on liability for AI does not seek to harmonise this question either.

    A.    Immaterial harm

    Examples of relevant provisions (non-exhaustive): Austria: Art. 1323, 1324 ABGB; Denmark: § 26 Erstatningsansvarslov; Finland: Chap. 5 § 6 Liability Act; Germany: Section 253 para. 1 BGB; Greece: Art. 57, 59, 299, 932, 933 CC; Italy: Art. 2043, 2059 CC, Art. 185 CP; the Netherlands: Art. 6:106, 6:162, 6:95 et seqq. BW; Poland: Art. 448 Civil Code; Portugal: Art. 483 (1), 496 CC; Spain: Art. 1902 CC, Art. 110 (3) Código Penal; Sweden: Chap. 2 § 3 Liability Act.

    1.    Background: the approach proposed by the European Parliament

    The European Parliament proposed that, in line with strict liability systems of the Member States, a harmonised civil liability regime for AI should cover violations of the important legally protected rights to life, health, physical integrity and property. In addition, the Parliament considered that such a regime should also incorporate “significant immaterial harm that results in a verifiable economic loss”. However, before adopting such an approach, the Commission was asked to analyse the legal traditions in the Member States and their existing national laws that grant compensation for immaterial harm, in order to evaluate whether the inclusion of immaterial harm in AI-specific legislative acts is necessary and whether it contradicts the existing Union legal framework or undermines the national law of the Member States. 299

    In the draft proposal annexed to its resolution, the European Parliament specifies that “significant immaterial harm should be understood as meaning harm as a result of which the affected person suffers considerable detriment, an objective and demonstrable impairment of his or her personal interests and an economic loss calculated having regard, for example, to annual average figures of past revenues and other relevant circumstances”. 300

    2.    The relevant categories of harm

    The concepts needed to discuss the questions raised by the European Parliament – ‘harm’, ‘immaterial harm’, ‘damage’, ‘loss’ – are not harmonised across MS legal systems. Different terminology is used, sometimes interchangeably, and the attached meaning can vary from one MS to another. To nevertheless develop a consistent comparative overview, a broad concept of ‘damage’ is used in this Annex, referring to any type of loss, harm or injury. Where a more granular terminological differentiation is needed, specific explanations are made to that effect.

    For the purposes of this summary, immaterial or non-material damage refers to losses which do not relate to a person’s assets, wealth or income and, as such, cannot be quantified in an objective manner by reference to a market price or value. 301 Such losses are, for example, damage to reputation, discrimination, loss of confidentiality, psychological harm and mental suffering. This type of damage is different from the concept of pure economic loss.

    Pure economic loss usually refers to financial loss suffered by a party, which is not the result of any damage to that party’s own person or property. Instead, the damage arises from economic relationships in which the party is involved. This concept is distinct from the concept of consequential economic loss, i.e. loss that arises from physical injury or property damage. However, one should bear in mind that there is no universally accepted definition or description of pure economic loss in MS’ or EU law. 302

    3.    Summary of national legal traditions and approaches

    3.1.    Basic distinctions and trends

    The comparative law study commissioned for this impact assessment highlighted that “there are important differences throughout Europe when it comes to recognizing which damage triggers tort claims in the first place (specifically evident in the case of pure economic loss). Furthermore, jurisdictions differ with regard to which consequences of an initial harm will be indemnified at all. The range and extent of remedies available are equally divergent, in particular (but clearly not limited to) the extent of compensation for immaterial harm.” 303

    While all Member States’ civil law systems know the concept of non-material damage, there is no uniform approach to its recoverability. Broadly speaking, Member States follow one of two models: some do not differentiate between material and immaterial damage and simply consider both types of damage equally recoverable; this is especially the approach of Napoleonic legal systems. Other Member States only accept compensation for immaterial damage if expressly provided for by law; this is especially the approach of Germanic legal systems. 304

    There is also no uniform approach as to whether the question of compensable harm should be addressed differently under fault-based and strict liability. 305

    Some legal systems treat compensation for immaterial harm differently under contract law and under tort law. Developments usually start in tort law and later expand to contract law. In several Member States, one can observe an increasing acceptance of compensation for immaterial damage. 306

    3.2.    Legal systems that generally allow recovery of immaterial damage

    The first category of Member States, i.e. those that generally allow recovery for immaterial damage, does not distinguish between material and immaterial damage. This category includes, for example, Belgium, France, Luxembourg, Spain, Hungary and Slovenia. The starting point is that damage in general is recoverable. There is little distinction between cases of fault liability and cases of strict liability. Recovery (also) serves the purpose of easing non-material suffering and compensating the irreparable. 307 That is not to say that these legal systems do not apply restrictions on the recovery of immaterial damage. In Portugal, for example, loss is only recoverable if it is sufficiently severe to deserve legal protection. 308

    3.3.    Legal systems that only allow recovery for immaterial damage in certain cases

    The second category of Member States only allows recovery of immaterial damage if expressly provided for by law. For example, Italy, Germany, Poland, Austria, the Netherlands, Estonia, Lithuania and the Nordic Countries follow this approach. These Member States start from the opposite premise: immaterial damage is generally non-recoverable.

    However, there are a number of “exceptions”, in which the law expressly provides for the recovery of immaterial damage. One such situation is where the infringed rights deserve special protection. In this sense, most legal systems allow recovery if the tort infringes certain protected rights such as health and bodily integrity, liberty or the right to sexual self-determination. This is true, for instance, for Italy, Poland, Greece, the Czech Republic, Slovakia, Germany, the Netherlands, Estonia and the Nordic Countries. Similarly, Italy generally permits recovery if the tort also constitutes a crime or if the tort causes a permanent detrimental effect on a person’s life and social interaction with others. The Spanish legal system, which generally considers immaterial damage as recoverable, also expressly provides for compensation in case of criminal activity. 309

    Intention or gross negligence may also justify compensation for immaterial damage. This is the case in Austria and the Netherlands. In cases of intentional damage, the tort will often also constitute a crime. 310

    The Member States also frequently grant compensation for immaterial damage if recovery would otherwise be difficult or impossible. Poland, Estonia and the Nordic Countries allow recovery if the tort concerns incorporeal rights of personality. The Netherlands permits recovery if the harming party violates the reputation of a deceased person.

    In the context of the private enforcement of intellectual property rights (IPRs), Directive 2004/48/EC 311 requires national judicial authorities to take into account, when setting the damages to be paid to the rightholder for an infringement of IPRs, elements other than economic factors, such as the moral prejudice caused to the rightholder by that infringement. However, as the Directive gives MS the option to provide instead for a compensation of IPR infringements by lump sum payments (based e.g. on hypothetical royalties or fees), it did not force MS to depart from their traditional approach regarding the recoverability of immaterial harm.

    3.4.    Damage to or loss of data

    Currently, Member States mostly do not consider data as property. Accordingly, damage to or loss of data does not give rise to a tort claim under general rules. However, Article 82 of the General Data Protection Regulation 312 gives any person who has suffered material or non-material damage as a result of an infringement of that Regulation the right to receive compensation from the controller or processor for the damage suffered. While there is a recent tendency in national legislation, court decisions and doctrine to ascribe proprietary attributes to data, 313 it needs to be kept in mind that the right to the protection of personal data is an inalienable fundamental right, which is not compatible with its assimilation to property.

    3.5.    Damage caused by discrimination

    A vast majority of Member States provides for compensation awards in the case of unlawful discrimination. 314 Legal mechanisms to ensure effective, proportionate and dissuasive sanctions were notably developed to transpose the EU’s equality Directives (see in this respect Annex 8 on fundamental rights risks of AI). However, the amounts of damages, in particular for non-pecuniary harm linked to discrimination, have been pointed out as insufficient by experts and vary widely. In a recent comparative analysis of non-discrimination and gender equality law in Europe 315 , prepared on behalf of DG JUST by the European network of legal experts in gender equality and non-discrimination, the remedies available in MS in the case of infringements of EU directives in non-discrimination and gender equality have been summarised. According to a 2015 report from the European Network of Equality Bodies, Equinet, almost all MS provide for compensation for immaterial damage in discrimination cases. 316 It is important to note however that the EU equality Directives are incomplete and do not cover all possible areas and/or grounds of discrimination.

    3.6.    Immaterial harm suffered by legal persons

    Most Member States accept immaterial damage of legal persons, e.g. in cases of injury to reputation. France, Belgium, Spain, Austria, Italy, Hungary, Slovenia, Poland, the Netherlands, Greece, Portugal and the Nordic countries have adopted this approach. By contrast, the German legal system, for example, only allows legal persons to claim for immaterial damage in a restricted number of cases. 317

    3.7.    Personality rights

    The Member States largely accept that tort liability may arise out of the infringement of the right to one’s name or the unauthorised use of one’s image. Many legal systems also protect these rights after the death of the concerned party (post mortem). Legal systems that permit legal persons to claim damages for immaterial harm accept that these legal persons are also entitled to personality rights. 318

    3.8.    Personal injury of others

    If a person is killed, most but not all European systems grant close relatives compensation as secondary victims. The requirements differ substantially. Austria, for example, only permits compensation if the primary victim was killed by an act of gross negligence. Personal injury other than death, however, does not generally lead to compensation for close relatives. In addition, the amounts of compensation vary significantly among the Member States. 319

    3.9.    Pure economic loss

    Some Member States distinguish the concept of pure economic loss from other types of losses. Others do not view it as a separate type of damage. Regardless of this distinction, most legal systems accept compensation for pure economic loss in case of intentional conduct. However, the situation is different for negligent behaviour: where the conduct is merely negligent, the Member States that conceptualise pure economic loss do not universally accept compensation for this type of loss. Instead, this matter remains contested. 320

    (a)    Member States that acknowledge the notion of pure economic loss

    In Germany and Austria, which distinguish the notion of pure economic loss from other types of economic loss like property damage, recovery of pure economic loss is possible only if statutory law explicitly allows it. This stems from the fact that, often, an injured party can only raise tort law claims if the other party has breached certain protected interests. Mere economic interests are not included among such protected interests. In Germany, the victim can recover pure economic loss if the other party’s conduct was intentional and contrary to public policy. The Dutch legal system, on the other hand, operates on a more pragmatic case-by-case basis and focuses on the socio-economic implications of recovery. 321

    If a contracting party incurs pure economic loss, the legal systems that allow its recovery in principle need to strike a delicate balance between the contractual risk allocation agreed upon by the parties and the default rules of compensation in tort law. 322

    If a non-contracting party incurs pure economic losses, the situation can also be complex. If a primary victim incurs personal injury, a secondary victim linked to this person may suffer economic losses (relational economic loss). For example, the death of a family member can result in losses for other family members who were financially dependent on the deceased relative. In such cases, recovery is generally impossible except for close relatives. Similarly, there are cases in which the loss that the primary victim would normally incur is actually incurred by a secondary victim (transferred loss). For example, damage to a leased car primarily constitutes damage to the lessor’s property; however, it also results in an economic loss for the lessee. In such cases and subject to certain conditions, the secondary victim may also recover the economic loss. 323 The recovery of pure economic loss is also relevant if a third party relies on the negligent provision of information or defective services. 324

    (b)    Member States that do not acknowledge the notion of pure economic loss

    Member States in which pure economic loss is not distinguished from other types of losses, such as Poland, France, Italy, Greece and Spain, approach this issue differently. Claims for pure economic loss are possible but often limited. In France, a party can generally not raise tort claims if contractual claims exist (non-cumul rule). This principle is also the starting point in Belgium and Luxembourg. 325 Other countries limit the conditions under which an injured party can successfully bring forward such claims. For example, Poland and Greece limit the circle of victims eligible for compensation to directly affected parties. Spain applies rules of certainty to damages and limits the scope of liability. 326

    3.10.    Breach of an absolute right

    The question of the recoverability of pure economic loss is closely intertwined with the question of whether tort liability requires the breach of absolute rights or other protected legal interests. Such a requirement is expressly imposed by Section 823(1) of the German BGB, and other legal systems, such as the Austrian, Dutch and Estonian ones, follow a similar approach. Conversely, the majority of Member States do not require the breach of absolute rights. For instance, Belgium, France, Malta, Luxembourg, Spain, Italy, the Czech Republic, Slovakia, Hungary, Poland and Slovenia do not establish such a condition. Some of these legal systems also protect, to a certain degree, relative rights arising from contractual agreements under tort liability. This is the case in France, for example.

    4.    Conclusion: Reasons for not including provisions on the types of compensable harm in the preferred policy option on AI liability

    The analysis of national legal traditions and existing approaches shows that victims in legal systems which do not universally permit recovery of immaterial damage may not always be able to claim compensation for such damage. Nevertheless, it is appropriate to leave the question of whether and to what extent immaterial harm gives rise to civil liability claims outside the scope of the AI liability initiative, for the following reasons:

    First, the specific characteristics of AI (such as an increasing degree of autonomy, complexity, opacity, and a lack of transparency and predictability) do not alter the existing types or concepts of damage. The recoverability of immaterial harm therefore does not form part of the AI-specific issues to be addressed by the AI liability initiative. The significant divergences between national legal systems outlined above do not relate to these AI-specific characteristics. 327 During a workshop with MS representatives, those who took a position were in favour of leaving the definition of compensable harm to national law.

    Second, Member States have developed varying concepts and approaches as regards the compensation for immaterial damage. Novel concepts such as “significant immaterial harm resulting in a verifiable economic loss” will not easily fit into the existing legal regimes.

    Third, treating AI-induced damage as a category apart when it comes to its recoverability could upset the coherence of the existing national rules concerning the types of compensable harm. The prevailing divergences between MS’ approaches and legal traditions apply horizontally, irrespective of whether immaterial damage was caused with the involvement of AI systems or not. 328 Harmonising this aspect in the AI liability proposal could lead to a situation where victims can claim compensation for certain types of harm only if AI was involved in causing it. In addition to concerns regarding the coherence of such an approach, it might also create an uneven playing field for AI-enabled products and services. Such an outcome would not be aligned with the policy objective to promote the roll-out of lawful and safe AI in Europe.

    Fourth, it is doubtful whether the current lack of harmonisation of the types of compensable harm amounts to an internal market barrier justifying harmonised measures. According to previous research, businesses tend to think that the recoverability of immaterial damage does not play a large role in selecting their place of business within the EU, despite the significant differences between Member States. 329

    Fifth, the recoverability of damage in tort law has implications for the recoverability of damage in contract law. Harmonising recoverability in tort law could therefore widen the gaps between contract and tort law. Member States that treat the recoverability of immaterial damage in tort and contract law in the same or at least a similar way might face difficulties implementing such EU measures in their national law.

    B.    Contractual exclusions / limitations of liability (liability waivers)

    Examples of relevant provisions (non-exhaustive): Czech Republic: § 2898 CC; France: Art. 1231-3, 1245-14 Code civil; Germany: § 276 para. 3, § 278, § 307 No. 7 BGB; Greece: Art. 332 CC; Italy: Art. 1229 CC; Portugal: Art. 12, 18 Decreto-Lei No. 249/99; Spain: Art. 1255 CC.

    1.    Background and scope of the summary

    In its Resolution of 20 October 2020, the European Parliament requested the Commission “to evaluate the need for legal provisions at Union level on contracts to prevent contractual non-liability clauses”. 330

    The Product Liability Directive includes such a provision in its Art. 12: “The liability of the producer arising from this Directive may not, in relation to the injured person, be limited or excluded by a provision limiting his liability or exempting him from liability.” 331

    Liability waivers are primarily relevant in the context of contractual claims. However, almost all Member States permit certain limitations of non-contractual (tort) liability as well, although the detailed conditions vary. The possibility to exclude or limit liability has the potential to undermine the protection that the liability regime intends to provide. In the context of the AI liability initiative, the following question arises: Should the EU also regulate and thereby harmonize the ability to limit or exclude liability specifically for damage caused by AI? Prior to making this policy choice, a thorough stocktaking of existing regulations on an EU as well as on a national level is essential. This section summarises the Commission services’ findings regarding the admissibility, under existing EU or national law, of contractual waivers in the context of tort liability claims. It also takes into account legal systems that preclude claimants from invoking contractual and non-contractual claims at the same time (non-cumul principle).

    The summary focuses on liability waivers that the parties agreed upon prior to the onset of the damage. Liability waivers agreed upon after the onset of the damage raise different issues, such as whether the waiver bars an injured party from bringing its claim from a procedural point of view. In any case, the introduction of a new liability regime would not interfere with the possibility to settle a dispute amicably out of court.

    2.    Summary of the existing rules governing the admissibility of contractual exclusions / limitations of tort liability

    There is no uniform treatment of contractual liability waivers among the Member States in the context of tort law. Almost all legal systems allow such contractual agreements to some extent (but see 2.1. for exceptions). However, they impose several restrictions, a violation of which renders the waiver null and void. Broadly speaking, contractual liability waivers will usually be effective only if the parties limit their scope to situations of ordinary negligence and damage to property.

    2.1.    Legal systems that do not allow liability waivers in tort law

    While most Member States accept contractual liability waivers of tort claims at least under certain conditions, the French legal system has adopted a more restrictive approach. Extra-contractual liability is part of France’s ordre public. Therefore, except for special cases, contractual agreements that attempt to limit or exclude liability for torts committed in the future are invalid. In other Member States, such as Spain, Italy, Bulgaria and Portugal, academic literature at least discusses a similarly restrictive approach. 332 The fact that under the French legal system a party can only bring a claim under tort law if no contract exists (non-cumul principle) is relevant to understanding this approach. However, the inadmissibility of contractual waivers of tort liability is not a necessary consequence of the non-cumul principle, as shown e.g. by the Belgian legal system, which also acknowledges the non-cumul principle but permits waivers of tort liability. 333

    2.2.    Legal systems that allow liability waivers in tort law under certain restrictions

    While the majority of Member States permit the parties to agree on liability waivers, all Member States substantially restrict the use of such clauses in the context of tort law. Due to these limits, the outcome often does not differ greatly from the French approach. The main restrictions on contractual limitations / exclusions of liability are summarised in the subsequent points.

    (a)    Intention or gross negligence

    All Member States prevent the parties from excluding or limiting liability in the case of intentionally harmful conduct. For example, § 276 para. 3 of the German BGB provides: “The obligor may not be released in advance from liability for intention.” Intentionally causing harm to another party will often also constitute a crime. Limiting liability for such actions conflicts with the principles of morality that are often also part of the ordre public.

    Many legal systems also prohibit liability waivers for instances of gross negligence. This is the case, for example, in the Czech Republic, Italy and Greece. In Austria, opinions on this matter differ. 334 Proving intentional damage will often be difficult for the claimant. Therefore, the injured party will face difficulties if the damaging party has excluded its liability for grossly negligent conduct. Member States that do not permit such waivers accordingly afford the injured party greater protection.

    The situation is more heterogeneous with respect to acts of auxiliaries. Some jurisdictions are more liberal with regard to intentional or grossly negligent conduct of auxiliaries and permit waivers excluding or limiting liability for such acts. Germany, for example, has adopted this approach. Other Member States, such as Italy, apply the same or at least similar rules to principals and auxiliaries. Consequently, liability waivers cannot extend to damage intentionally caused by auxiliaries, and the same is true for grossly negligent conduct of auxiliaries where the principal cannot restrict his or her own liability in such cases. 335

    (b)    Personal injury

    Generally, the provisions regulating waivers for personal injuries are stricter than those regarding waivers for property damage. Some countries such as Spain and Poland do not allow waivers of liability for personal injury. The Spanish legal system goes even further and prohibits clauses that limit liability for acts which violate moral integrity, fundamental rights or human dignity, or amount to an abuse of a dominant position. Protection of bodily integrity and other fundamental rights is often part of the ordre public. Therefore, parties cannot limit their protection by contractual agreements. 336 Similarly, the Unfair Contract Terms Directive prohibits the use of general terms “excluding or limiting the legal liability of a seller or supplier in the event of the death of a consumer or personal injury to the latter resulting from an act or omission of that seller or supplier”. 337

    (c)    Product liability

    Pursuant to Article 12 of the Product Liability Directive, the parties can generally not agree on contractual liability waivers for claims arising from that Directive. The Member States have implemented this or a similar provision in their national law.

    (d)    General terms and conditions

    Some Member States limit the possibility to include liability waivers in one’s general terms and conditions. For example, under the German legal system, a party cannot use its general terms and conditions to exclude liability for injury to life, body or health, or for gross fault (§ 309 No. 7 BGB). Austria, Greece and Portugal have adopted a similar approach. 338 The Unfair Contract Terms Directive prohibits general terms with the object or effect of “excluding or limiting the legal liability of a seller or supplier in the event of the death of a consumer or personal injury to the latter resulting from an act or omission of that seller or supplier”. 339

    (e)    Liability of professionals vis-à-vis private individuals

    There is a general tendency towards applying stricter rules to liability waivers invoked by professionals vis-à-vis consumers. Some legal systems consider certain waivers as unfair or abusive specifically in a B2C situation. 340

    (f)    Dangerous activities

    There is also a general tendency to apply stricter rules to liability waivers in cases of dangerous activities, e.g. in Poland. In Germany, liability waivers often cannot extend to cases of strict liability. 341

    (g)    Interpretation of contractual liability waivers

    Usually, waivers focus on contractual liability. Whether a waiver also extends to tortious liability is a matter of interpretation of the contract. In this regard, MS approach the interpretation of contractual liability waivers differently. Some legal systems, such as Greece, generally assume that waivers are intended to cover both contractual and tortious liability. Other jurisdictions, such as Germany, apply a more restrictive approach and look more closely at the wording of the waiver and the intent of the parties.

    3.    Conclusion: Reasons for not including provisions on contractual exclusions / limitations of liability in the preferred policy option on AI liability

    In light of the foregoing analysis, it is proposed not to regulate the admissibility of contractual liability waivers in the AI liability initiative. The following arguments support this conclusion:

    First, the specific characteristics of AI (such as an increasing degree of autonomy, complexity, opacity, and a lack of transparency and predictability) do not affect the admissibility of liability waivers. None of the differences examined above is linked to these specific characteristics. The tendency to dismiss waivers in case of certain dangerous activities could not easily be applied to AI systems, because the risk attached to such systems depends on each individual software application and the situation in which it is used. Therefore, it would not be coherent to address the admissibility of liability waivers under the premise of AI-specific adaptations of existing liability rules. Instead, according to national legal traditions, this matter is regulated on a horizontal, technology-neutral basis.

    Second, Member States’ approaches to address the admissibility of liability waivers are highly heterogeneous. Some jurisdictions include explicit legislative provisions; others rely on jurisprudence and academic literature. In some legal systems, certain sub-questions are still subject to debate. Therefore, a regulation of this matter in the AI liability initiative would represent a significant intervention into the structure of national law and legal traditions.

    Third, the national provisions and rules on the admissibility of liability waivers are often intertwined with other legal matters and general principles of law. The rules are often derived from explicit provisions that regulate the admissibility of waivers in the context of contractual agreements. Sometimes, the issue of liability waivers even relates to questions of fundamental rights and ordre public. Therefore, a regulation of this matter could interfere with general and even fundamental principles of law within the respective jurisdiction.

    Fourth, although the legal systems apply different provisions and rules, as explained in the second argument, the result is often similar. In all legal systems, a party can limit its liability only in a limited number of cases, often only for property damage caused by ordinary negligence. Even where the legal provisions or the rules established by jurisprudence and academic literature differ, the results will frequently be the same. Injured parties would therefore not be left unprotected even if an AI-specific instrument did not harmonise the matter at EU level. Consequently, there is no urgent need to align the national approaches through harmonised provisions.

    Fifth, the preferred policy option does not provide for a harmonisation of the substantive conditions of liability, nor does it introduce a harmonised claim for compensation. The envisaged measures are limited to targeted adaptations regarding the burden of proof. Unlike under the PLD, there are hence no claims arising from the AI liability initiative whose exclusion or limitation would need to be regulated. Rather, the preferred option comes into play where the victim can in principle invoke an extra-contractual liability claim in accordance with the general national liability rules.



    Annex 10

    Detailed explanations and results regarding the assessment and comparison of policy options (multi-criteria analysis)

    This Annex provides further detailed explanations on the assessment of policy options and their comparison by means of a multi-criteria analysis. It complements Sections 6 and 7 of the main impact assessment report.

    A.    Mapping of impacts and criteria for effectiveness, efficiency, coherence and proportionality

    The following tables provide an overview of the criteria used to assess the policy options with respect to the main impact assessment categories (effectiveness, efficiency, coherence and proportionality). In line with the internal market objective of the initiative, the impacts are for the most part economic in nature. However, the initiative would also have social impacts, primarily connected to the compensation of victims and the fact that effective liability rules can contribute to improved safety of AI-enabled products and services, by providing an incentive to prevent harm. The choice of significant impacts retained for deeper assessment is based on stakeholders’ views and the objective to usefully inform political decision-making.

    1.    Criteria for assessing effectiveness

    First specific objective (increase legal certainty and address associated internal market obstacles)

    Stakeholder group: Potentially liable companies, in particular companies operating AI-enabled products or providing AI-enabled services (differentiating by company size)
    Success criteria: Increased level of legal certainty, leading to improved conditions for (in particular cross-border) business activities/investments (reduced costs, more investment and financing security, etc.) → qualitative assessment
    Data sources: economic analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies)

    Stakeholder group: Insurance companies
    Success criteria: Increased level of legal certainty, leading to improved conditions for offering insurance coverage, in particular for cross-border activities; emergence of new market opportunities for insurance companies → qualitative assessment
    Data sources: economic analysis in supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies)

    Second specific objective (prevent fragmentation and associated internal market obstacles)

    Stakeholder group: Potentially liable companies, in particular companies operating AI-enabled products or providing AI-enabled services (differentiating by company size)
    Success criteria: No emergence of fragmented liability rules for AI at MS level → improved conditions for cross-border business activities / investments (reduced costs, more investment and financing security, ...) → qualitative assessment
    Data sources: economic analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies)

    Stakeholder group: Insurance companies
    Success criteria: Improvement of the conditions for offering insurance coverage, in particular for cross-border activities; emergence of new market opportunities for insurance companies → qualitative assessment
    Data sources: economic analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies)

    Third specific objective (ensure that victims suffering harm caused by AI have the same level of protection as victims suffering harm caused by other technologies)

    Stakeholder group: Victims of damage caused by AI (citizens, consumers, companies)
    Success criteria: The effectiveness of liability claims is not diminished by the specific characteristics of AI (opacity, autonomy, complexity, lack of predictability, etc.) → qualitative assessment
    Data sources: supporting studies; qualitative assessment by Commission services

    Stakeholder group: Victims of damage caused by AI (citizens, consumers, companies)
    Success criteria: Liability rules provide an effective incentive to potentially liable persons (in particular users and providers of AI systems) to prevent harm
    Data sources: supporting studies; qualitative assessment by Commission services

    Stakeholder group: Companies in the European AI-sector (differentiating by company size)
    Success criteria: Increase in the level of societal trust and consumers’ willingness to take up AI-technologies
    Data sources: supporting studies (in particular behavioural analysis); stakeholder feedback (public consultation, bilateral, webinars, supporting studies)

    2.    Criteria for assessing efficiency

    2.1.    Preliminary methodological clarifications on costs and benefits

    2.1.1. Quantification challenges

    Regarding the assessment of costs and benefits, the future-oriented nature as well as the specific subject-matter of this initiative impose a number of methodological limitations. As already explained in Annex 4 (analytical methods) with respect to the supporting studies, quantified data on costs and benefits was in many cases not available because:

    -Products and services that are, first, powered by AI systems with the specific characteristics challenging liability rules (high degree of autonomy, opacity, complexity, etc.), and second, capable of causing damage giving rise to civil liability claims, are for the most part not yet rolled out on the market. No data on the prices, margins, specific characteristics, etc. of such products and services are yet available as a basis for measuring the impacts of adapting civil liability rules.

    -There is no sufficiently robust basis (yet) for estimating the damage that may be caused by AI-enabled products and services, as there are no relevant statistics on cases of damage or compensation yet. It is therefore not possible either to quantify the cost of compensation (i.e. the cost of repairing damage caused) that might be re-allocated from the victim to the liable party due to the policy measures.

    -Stakeholders were generally not in a position to provide quantified estimates of the impacts of either the identified problems under the baseline scenario or the policy options. Although a number of business stakeholders confirmed that they do see legal uncertainty and future fragmentation regarding liability for AI as a challenge, it was too early to quantify the impact of these problems on their cost structure.

    2.1.2. Mitigating actions

    Various actions were implemented in an effort to address the scarcity of quantified data:

    (a)    The consultant tasked with economic analysis (Deloitte) took a number of measures to obtain additional stakeholder feedback (wider outreach, prolongation of consultation activities, semi-structured interviews to complement the surveys) and carried out additional research to draw conclusions based on available literature, studies and economic theory. While these measures strengthened the qualitative economic analysis and allowed for relevant conclusions to be drawn, they did not allow a sufficiently stable quantification of specific costs and benefits.

    (b)    In close collaboration with better regulation experts of DG JUST and experts of the JRC, several approaches to obtaining quantified estimates were undertaken in order to complete the – largely qualitative – data delivered by the supporting studies:

    -A use-case based modelling of the costs of insurance covering the liability risks linked with the use of relevant AI-enabled products or the provision of relevant AI-enabled services was explored. Despite reaching out to several insurance companies and attempting to procure relevant data from other available sources (databases, surveys, etc.), it proved impossible to obtain sufficient data on insurance products covering this type of liability risks. While input from insurance companies confirmed that they are working on developing AI-specific insurance products or covering AI-related liability risks through existing ‘all-risk’ policies, they were not able/willing to provide datasets that would have allowed a modelling of insurance costs for the purposes of this IA.

    -A quantification of the economic impact of legal fragmentation regarding liability for AI was pursued based on a macro approach, namely to show the loss in trade due to legal fragmentation. The approach consisted in estimating the impact that discrepancies between MS’ national liability rules applicable to AI have on intra-EU cross-border trade. For the purposes of these estimations, a gravity model of trade was explored in which, in addition to traditional factors explaining trade between two countries (GDP, cost of trade, etc.), a ‘legislation distance’ score would have been added (a stylised specification of such a model is sketched after this list). However, the available information on MS’ intentions regarding future legislative measures on AI liability proved insufficiently detailed to implement this approach. While several MS envisage such measures in their national AI strategies, no information which was sufficiently stable and detailed for modelling purposes could be obtained. Input from MS was sought both in bilateral exchanges and in a Workshop on the topic of Liability for AI, but these efforts did not yield sufficiently comprehensive and detailed information to support a robust quantification of the economic impact of legal fragmentation.

    -In addition, in order to illustrate the economic impact of the envisaged policy measures designed to address the specific proof-related problems in AI-related claims, a micro-economic approach was pursued by the digital economy team of the JRC. This approach consisted in assessing the variations in the supply curve (legal uncertainty affecting businesses) and the demand curve (lack of consumer trust affecting consumer demand), building on a dataset for robotic vacuum cleaners.
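    For illustration only, and not as a specification actually estimated for this impact assessment, a gravity model augmented with a ‘legislation distance’ term could take a stylised form such as

        \ln T_{ij} = \beta_0 + \beta_1 \ln(\mathrm{GDP}_i) + \beta_2 \ln(\mathrm{GDP}_j) + \beta_3 \ln(D_{ij}) + \beta_4 L_{ij} + \varepsilon_{ij}

    where T_{ij} denotes the trade flow between Member States i and j, D_{ij} captures traditional trade costs (e.g. geographic distance), L_{ij} is a hypothetical score expressing the distance between the two national liability regimes applicable to AI, and \beta_4 would measure the trade loss attributable to legal fragmentation. The symbols and functional form are assumptions introduced here solely to illustrate the approach described above.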

    2.1.3.    Available quantified estimates

    The quantification efforts have yielded the following results (see Annex 4 and the economic study for further details):

    (a)    Production value affected by internal market obstacles linked to legal uncertainty and legal fragmentation regarding liability for damage caused by AI (use-cases)

    For the baseline, the EU27 intra-trade production value potentially affected by internal market obstacles linked to legal uncertainty and legal fragmentation was estimated for the markets represented by six relevant use-cases 342 . These estimated shares amount to EUR 829 million in 2021 and ca. EUR 3 billion in 2029. However, due to a lack of precise information regarding the degree of perceived legal uncertainty and the extent to which national measures would increase legal fragmentation under the baseline scenario, it was not possible to develop a robust estimate of the extent to which those production values would be reduced due to liability-related problems:

    [Figure not reproduced – source: Deloitte analysis]

    (b)    Overall market value affected by liability-related internal market obstacles

    Still regarding the baseline scenario, beyond the six use-cases, AI market shares affected by legal uncertainty and/or legal fragmentation regarding liability for AI were estimated by economic sector. These estimated shares range from EUR 1.119 billion to EUR 3.459 billion in 2020, and from EUR 10.204 billion to EUR 21.342 billion in 2025:

    [Figure not reproduced – source: Deloitte analysis]

    However, it has to be acknowledged that these estimates do not as such represent a quantification of the relevant problems because they do not express the extent to which the affected market shares will be reduced due to legal uncertainty and fragmentation regarding liability. It was not possible to develop a robust estimate of this metric, due to the uncertain factors mentioned under the previous point (a).

    (c)    Costs of claiming compensation linked to the specific challenges of AI: estimates by experts in ICT and liability law

    In the framework of the supporting economic study (Deloitte), legal experts estimated the costs for legal and technical experts needed to claim compensation, based on the existing national liability rules in 13 representative legal systems, when AI is involved in causing harm compared to cases not involving AI. 343 Certain damage scenarios involving a range of AI use cases (autonomous AI-enabled motor vehicles, drones, traffic management systems, warehouse robots, post-surgical infection analysis devices, robotic lawnmowers and fire detection systems) or, respectively, the corresponding (i.e. functionally equivalent) ‘non-AI’ technologies were posited for this purpose. The following tables show the estimated costs of technical expertise, which are the most relevant cost factor for assessing the AI-specific challenges of claiming compensation and the impacts of policy options designed to address those challenges 344 :

    -Data on costs for technical experts in AI-related cases (for all AI applications covered)

    Costs for technical experts (total EUR):
    AT: 15 000 – 40 000
    BE: >1 000 – 10 000
    BG: 408 – 2 040
    DE: >500 – 1 800
    DK: 10 000
    ES: 4 000 – 6 000
    FR: 15 000 – 75 000
    IE: 4 000 – 6 000
    IT: 10 000 – 20 000
    PL: 4 500 – 5 000
    PT: 1 000 – 10 000
    RO: 2 000 – 10 000
    SK: 400
    UK: 12 000 – 36 000

    Source: Survey completed by legal experts for the supporting economic study (Deloitte)

    -Costs of technical experts for traditional liability cases (not involving AI)

    Country: motor vehicles / remotely piloted drones / traffic management systems / industrial appliances / medical devices / lawnmowers / fire detection systems (EUR)
    AT: 5 000 – 15 000 / 5 000 – 15 000 / 15 000 – 20 000 / 15 000 – 20 000 / 15 000 – 20 000 / 5 000 – 15 000 / 5 000 – 15 000
    BE: 1 000 / 5 000 / 2 000 / 3 000 / 10 000 / 1 000 / 10 000
    BG: 204 – 1 020 (same range for all categories)
    DE: 500 – 1 800 (same range for all categories)
    DK: 2 000 / 2 000 / 10 000 / 4 000 / 4 000 / 4 000 / 10 000
    ES: 1 000 – 3 000 / 2 500 – 5 000 / 1 000 – 3 000 / 2 000 – 3 500 / 1 000 – 3 000 / 1 000 – 3 000 / 1 000 – 3 000
    FR: 10 000 – 50 000 (same range for all categories)
    IE: 1 000 – 2 000 / 3 000 / 2 000 – 3 000 / 1 000 – 2 000 / 2 000 – 4 000 / 1 000 – 2 000 / 1 000 – 2 000
    IT: 5 000 – 15 000 (same range for all categories)
    PL: 800 / 1 000 / 1 100 / 300 / 1 200 / 1 000 / 850
    PT: 800 / 800 / 2 000 / 1 000 / 3 000 / 400 / 2 000
    RO: 600 – 800 / 600 – 1 200 / 600 – 800 / 500 – 700 / 800 – 1 500 / 600 – 800 / 600 – 800
    SK: 300 (same for all categories)

    Source: Survey completed by legal experts for the supporting economic study (Deloitte)

    The following differences between the respective costs of technical expertise to be advanced by victims provide an idea of the scale of the challenges faced by victims due to the specific characteristics of AI 345 :

    Difference between costs of technical expertise needed in AI-related cases compared to ‘traditional’ cases

    Costs for technical experts:
    AT: +20% / +113%
    BE: difference to be determined
    BG: +100%
    DE: difference to be determined
    DK: +100%
    ES: +190% / +80%
    FR: +50%
    IE: +30% / +20%
    IT: +100% / +25%
    PL: +430% / +490%
    PT: +0% / +600%
    RO: +226% / +820%
    SK: +33%

    Source: Deloitte

    (d)    Quantified estimates of the impacts of policy options on the costs linked to the burden of proof borne respectively by victims and liable parties

    The estimates summarised under the previous point (c) were used to approximate, in a first step, the cost linked to the burden of proof under current liability rules, due to the specific challenges of AI. The additional costs of technical expertise in cases involving AI compared to other cases are suitable as a proxy for this quantification, because they reflect the difficulty of attributing liability given the specific opacity/lack of transparency, behavioural autonomy, complexity and limited predictability, etc. of certain AI systems. 346 On that basis, quantified estimates were generated regarding, on the one hand, the cost savings that each policy option can bring for victims, and on the other hand, the possible increase of costs linked to the burden of proof for the liable party. More specifically, the following steps were implemented for this quantification (an illustrative numerical sketch is provided after the description of these steps):

    (i)    Firstly, the difference was calculated between, on the one hand, the average estimated costs to be advanced by victims for technical expertise in cases where AI was involved in causing damage, and on the other hand, the same average in cases not involving AI. 347 Based on the estimates by legal experts shown in the tables above, the average difference amounts to EUR 4149.

    (ii)    Secondly, reasoned assumptions were made regarding the extent (expressed as a percentage) to which each policy option will alleviate victims’ costs linked to the burden of proof (see point 3.2. below for details). The estimated percentage was applied to the average AI-specific cost of technical expertise, to obtain a quantified estimate of this benefit.

    (iii)    Thirdly, further reasoned assumptions were made regarding the extent (expressed as a percentage) to which each policy option would lead to a transfer of the cost linked to the burden of proof to the defendant (i.e. the allegedly liable party) (see point 3.2. below for details). The estimated percentage was applied to the average quantified reduction of costs to be advanced by victims, to obtain a quantified estimate also of the ‘burden of proof costs’ attached to the policy options for liable parties. 348  

    These estimates should not be misconstrued as a quantification of the problem that the specific characteristics of AI can make it prohibitively difficult, or even impossible, to meet the burden of proof. In particular, they do not take into account the cases in which liability claims would not be pursued in the first place under current liability rules, because the victim either cannot identify the liable party or considers the prospect of a successful claim insufficient to justify legal action. The preferred policy option will help victims also in the latter cases, by overcoming the compensation gaps induced by the specific characteristics of AI. This benefit is reflected in the previous row (‘reduced AI-induced compensation gaps’).
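    By way of illustration only, the arithmetic of steps (i) to (iii) can be sketched in the following short Python snippet. The only figure taken from the expert estimates above is the average cost difference of EUR 4 149; the alleviation and transfer percentages are placeholders, not the assumptions actually made per policy option (those are set out under point 3.2. below).

        # Illustrative sketch of steps (i)-(iii); the percentages are placeholders only.
        avg_extra_cost_for_victims = 4149   # EUR, step (i): average additional cost of technical
                                            # expertise in AI-related cases compared to non-AI cases

        alleviation_share = 0.50            # step (ii): assumed share of the AI-specific proof costs
                                            # alleviated for victims by a given policy option (placeholder)
        victim_cost_saving = avg_extra_cost_for_victims * alleviation_share

        transfer_share = 0.50               # step (iii): assumed share of the alleviated costs shifted
                                            # to the (allegedly) liable party (placeholder)
        defendant_cost_increase = victim_cost_saving * transfer_share

        print(f"Estimated saving per claim for victims:  EUR {victim_cost_saving:.0f}")
        print(f"Estimated added cost for liable parties: EUR {defendant_cost_increase:.0f}")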

    (e)    Quantified estimates of the impact of policy options on the EU AI market value

    Regarding the impacts of policy options, the economic study delivered estimates, for the markets represented by the six use-cases (see point 2.1.3.(a) above), of the impact that the envisaged measures would have on the cross-border trade affected. Policy options involving a combination of measures to ease the burden of proof and a harmonisation of strict liability for certain AI applications (like PO2 and PO3) were estimated to increase the production value of the affected cross-border trade by 5-7%, leading to the following impact estimates net of the baseline:

    [Figure not reproduced – source: Deloitte estimation]

    Note: In this figure, Option 1 represents a non-binding instrument, Options 2 and 3 represent different combinations of alleviations of the burden of proof with strict liability, and Option 4 represents the discarded policy option applying strict liability to all AI systems that challenge the current liability rules.

    On the basis of the estimated incremental impact on AI market values and the estimates regarding the overall AI market affected by legal uncertainty and fragmentation under the baseline scenario, the policy options are expected to deliver an increase of the AI market value in the EU of between ca. EUR 500mln and ca. EUR 1.1bln. These values are obtained by multiplying the estimated shares of the AI market affected by legal uncertainty and fragmentation regarding civil liability in 2025 under the baseline scenario (low and high scenarios assumed by the economic study supporting this IA) with the estimated impact of the policy options (+5%). This percentage was determined conservatively, taking into account the estimated impact generated by a combination of measures to ease the burden of proof with a harmonisation of strict liability limited to certain AI applications (cf. Economic Study, pp. 195 et seq.). In the supporting study, policy options including these elements were estimated to increase the production value of the affected cross-border trade by 5-7%, for the six use-cases analysed specifically by that study (AI-enabled autonomous vehicles, autonomous drones/delivery robots, AI-enabled road traffic management systems, AI-enabled warehouse robots, AI-enabled medical-diagnosis services, AI-enabled automated lawnmowers/vacuum cleaners). In order to quantify the overall economic benefits generated by the policy options (not limited to the six use-cases), a conservative extrapolation of this estimate was applied to the relevant market shares of all sectors affected by legal uncertainty and fragmentation, taking into account also that the preferred PO does not include the strict liability element assumed for the supporting study with respect to a small number of specific AI applications.
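    For transparency, the multiplication underlying the range of ca. EUR 500mln to EUR 1.1bln can be reproduced as follows; the 2025 market figures are the low and high estimates quoted under point (b) above, and minor rounding differences may occur.

        # Reproduces the extrapolation described above (values in EUR).
        affected_market_2025_low = 10.204e9    # low estimate of the AI market affected by legal
                                               # uncertainty/fragmentation in 2025
        affected_market_2025_high = 21.342e9   # high estimate
        impact_rate = 0.05                     # conservative +5% impact assumed for the policy options

        increase_low = affected_market_2025_low * impact_rate    # ~ EUR 0.51bn
        increase_high = affected_market_2025_high * impact_rate  # ~ EUR 1.07bn
        print(f"Estimated increase in EU AI market value: "
              f"EUR {increase_low / 1e9:.2f}bn to EUR {increase_high / 1e9:.2f}bn")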

    For illustration purposes, the Joint Research Centre has provided a complementary micro-economic quantification of the envisaged measures to ease the victim’s burden of proof, based on the use-case of robotic vacuum cleaners. This analysis reaches the conclusion that these measures would generate an increase in consumer welfare of EUR 11.5-19.12mln and in total welfare 349 of EUR 30.11-53.74mln for this product category alone in the EU-27. 350

    (f)    Quantified estimates of the incremental changes in insurance premiums that might be caused by the policy options 

    Quantified estimates of the incremental changes in insurance premiums that might be caused by the policy options were generated based on:

    -available information on annual premiums paid for general liability insurance (EUR 42bn in 2019) 351 ;

    -input from insurance stakeholders to the effect that AI-related liability risks can largely be covered by existing general liability insurance policies 352 ;

    -estimates of the extent to which policy options could shift the burden of compensating damage caused by AI from the victim to the party responsible for that damage. 353

    The following steps were followed to obtain these quantified estimates (an illustrative numerical sketch is provided after the description of the steps):

    (i)    In a first step, the EU AI market size (low estimate = EUR 3.473bn; high estimate = EUR 10.737bn in 2020 354 ) was divided by the overall market value of the EU economy (ca. EUR 14 trillion 355 ).

    (ii)    In a second step, the percentages thus obtained (approximately 0.025% and 0.077%) were used to approximate the share of liability insurance premiums linked to AI-related economic activities. This step is based on the consideration that premiums not linked to the AI market are not affected by the AI liability initiative in the first place. Given the overall annual premiums paid for general liability insurance, these shares amount to EUR 10.67mln (based on the low estimate of the AI market size) or EUR 32.21mln (based on the high estimate of the AI market size).

    (iii)    Economic analysis and stakeholder feedback indicate that, during an initial transitional period, the scarcity of relevant actuarial data on AI liability risks will make it more difficult for insurers to calculate premiums compared to insured activities not involving AI. In a third step, the shares of annual premiums linked to AI were therefore multiplied by two, to take into account the initial need for insurers to allow for sufficient risk margins with respect to AI. The shares thus adjusted amount to EUR 21.34mln (based on the low estimate of the AI market size) and EUR 64.42mln (based on the high estimate of the AI market size). This step ensures that the final cost estimates are conservative even during an initial stage of scarce actuarial data. As more data becomes available with the increasing market rollout of AI-enabled products and services, the need to allow for added uncertainty-induced risk margins will likely dissipate quickly.

    (iv)    In a fourth step, the share of general liability insurance premiums on which the policy options can have an impact was further narrowed down, by excluding the share of premiums that can be allocated to insured events to which these policy options would not apply. This step involves estimating the share of insured events that are either devoid of factual uncertainty or subject to strict liability under the baseline scenario. In such cases, the AI-specific problems to be addressed by the AI liability initiative do not materialise. The policy options are designed to apply only where those AI-specific problems arise, and only in those cases can these options hence have an impact on insurance premiums compared to the baseline. It is estimated that the policy options are relevant for one third of the adjusted liability insurance premiums linked to AI, taking into account the following:

    -Some MS have broad strict liability regimes covering damage caused by ‘dangerous’ activities or things, or even more broadly any damage caused by things.

    -The potential to cause damage varies depending on the purpose, context of use and operating mode of AI systems. Not all AI-related economic activities involve a significant liability risk, and only AI systems with certain specific characteristics (highly autonomous behaviour, opacity, low predictability, complexity, etc.) pose particular challenges in terms of allocating liability. 356

    On this basis, the shares of the premiums paid for general liability insurance on which the policy options can have an impact amount to EUR 7.11mln (based on the low estimate of the AI market size) or EUR 21.47mln (based on the high estimate of AI market size).

    (v)    In a fifth step, the incremental effect of each policy option on the determined relevant shares of insurance premiums is estimated in terms of percentages. For the percentages assumed respectively, see below under point 3.2. (Efficiency).

    (vi)    Finally, in a sixth step, a multiplier is applied to the changes in insurance premiums, to obtain estimated impacts for 2025. In light of the following factors, it is appropriate to multiply the changes in insurance premiums linked to each policy option by five to approximate the impact of the policy options in 2025:

    -Until 2025, the EU AI market is expected to grow six to ten-fold 357 , whereas the overall EU economy will likely see moderate growth rates of around two percent in 2023-2025, following post-Covid-19 catch-up effects and faster growth in 2022. 358

    -As more and more AI-enabled products and services are rolled out over the coming years, the difficulties linked to the lack of actuarial data are expected to diminish. Insurers will thus be able to estimate liability risk with increasing precision, and calculate premiums reflecting the real risk exposure.
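    To make the six steps traceable, the arithmetic can be sketched in the following short Python snippet. All inputs are the figures quoted above, except the incremental effect per policy option in step (v), which is a placeholder (the assumed values are given under point 3.2. below); minor rounding differences relative to the figures in the text may occur.

        # Illustrative reproduction of steps (i)-(vi) (values in EUR).
        ai_market_low, ai_market_high = 3.473e9, 10.737e9   # EU AI market size, 2020
        eu_economy = 14e12                                   # overall market value of the EU economy
        general_liability_premiums = 42e9                    # annual premiums paid in 2019

        for ai_market in (ai_market_low, ai_market_high):
            share = ai_market / eu_economy                    # step (i): AI share of the EU economy
            ai_premiums = general_liability_premiums * share  # step (ii): premiums linked to AI activities
            adjusted = ai_premiums * 2                        # step (iii): risk margin for scarce actuarial data
            relevant = adjusted / 3                           # step (iv): one third assumed relevant for the options
            option_effect = 0.10                              # step (v): placeholder incremental effect (see 3.2.)
            impact_2025 = relevant * option_effect * 5        # step (vi): multiplier of five to approximate 2025
            print(f"AI market EUR {ai_market / 1e9:.3f}bn -> "
                  f"premium impact in 2025: EUR {impact_2025 / 1e6:.2f}mln")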

    2.1.4.    Consequences for the assessment of efficiency

    In light of the scarcity of quantified data or estimates, the costs and benefits attached to the policy options are largely assessed qualitatively, in terms of trends, taking into account all available information. On this basis, the policy options are compared using qualitative scales rather than quantified impact metrics. The quantified estimates available are taken into account in that framework.

    2.1.5.    Cost of compensation

    As regards the cost of compensation (i.e. the cost of repairing damage caused), it is assumed that damage and compensation offset one another in absolute economic terms, the rationale of liability rules being to “make the victim whole”. In line with the Commission’s political objective to ensure that victims of damage caused with the involvement of AI systems have the same level of protection as victims of damage caused by other technologies (specific objective 3), the policy options necessarily involve a certain re-allocation of the costs of compensation. This effect is intended to materialise notably to the extent that the specific characteristics of AI would have caused a justified claim to fail under the baseline scenario, because of the difficulty of proving the conditions of liability under the current rules. As this effect is in line with the policy objectives and also achieves an efficient cost allocation (to the person best placed to prevent damage from occurring), it is not regarded as an undesired impact or undue burden for the purposes of this impact analysis. The impact of the policy options on the effectiveness of liability claims is nevertheless relevant for assessing the equitable distribution of costs and benefits between stakeholders, which is a separate important criterion for assessing efficiency. The ‘cost of compensation’ is therefore mentioned in the efficiency assessment as a category apart, specifically to reflect the described redistribution effect of the policy options.

    2.1.6.    Environmental benefits

    In terms of methodological limitations, it is furthermore acknowledged that there is no sufficient basis for assessing possible indirect environmental benefits that might be achieved through an increased uptake of AI. On a general level, it is expected that AI solutions can generate efficiencies and contribute to the innovation of environmentally friendly technologies. However, these effects are too far removed from the envisaged adaptations of liability rules to assess them even approximately as impacts of those measures.

    2.2.    Overview of the relevant types of costs and benefits

    The following tables provide an overview of the relevant types of costs and benefits considered for the assessment of the policy options as to their efficiency:

    Regulatory costs

    Direct costs
    Type of costs: Substantive compliance costs, e.g. for mandatory insurance coverage (type of impact: economic)
    Stakeholders affected: potentially liable companies (differentiating according to size), in particular companies using/operating AI-enabled products or providing AI-enabled services falling under a harmonised strict liability regime
    Data sources: economic analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies); quantified estimate of incremental changes of insurance premiums, otherwise qualitative assessment

    Indirect costs
    Type of costs: Costs of legal uncertainty and fragmentation – to the extent that a policy option fails to address the problems identified under the baseline, the associated costs would also persist (e.g. transaction, compliance, familiarisation and information costs, higher insurance costs, loss of business opportunities, increased cost of capital) (type of impact: economic)
    Stakeholders affected: potentially liable companies (differentiating according to size), in particular companies using/operating AI-enabled products or providing AI-enabled services, in particular cross-border; insurance companies
    Data sources: economic analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies)

    Enforcement costs
    Type of costs: litigation – increased costs for substantiating liability claims, persisting to the extent that a policy option fails to address the AI-specific difficulties of meeting the burden of proof; litigation – costs linked to meeting the burden of proof; adjudication – increased costs for the justice systems due to an increased number of civil actions (type of impact: economic)
    Stakeholders affected: litigation – victims of damage caused by AI applications, including companies, public entities and private individuals; costs linked to the burden of proof – potentially liable parties; adjudication – judiciary, public budget
    Data sources: economic analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies); quantified estimates of costs linked to the burden of proof, otherwise qualitative assessment

    Cost of compensation
    Type of costs: redistribution of the costs of compensation and/or partial redistribution of the costs linked to the burden of proof from the victim to the liable party (type of impact: economic)
    Stakeholders affected: liable parties (companies and private individuals)
    Data sources: qualitative assessment based on supporting studies and stakeholder feedback

    Regulatory benefits

    Direct benefits
    - Lower costs linked to the burden of proof (e.g. analysis by IT experts) (type of impact: economic / social). Stakeholders affected: victims of damage caused by AI-applications, including companies, consumers and citizens. Data sources: economic and legal analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies); quantified estimates of the reduction of costs to be advanced by victims due to the specific challenges of AI.
    - Guarantee of the right to an effective remedy. Stakeholders affected: victims of damage caused by AI-applications, including companies, consumers and citizens. Data sources: legal analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies).
    - Cost savings regarding the transaction, compliance, familiarisation, information and capital costs as well as insurance costs caused by legal uncertainty and fragmentation. Stakeholders affected: potentially liable companies (differentiating according to size), in particular companies using/operating AI-enabled products or providing AI-enabled services, in particular cross-border. Data sources: economic analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies).
    - Market efficiency: increased business opportunities. Stakeholders affected: insurance companies.

    Indirect benefits
    - Wider economic benefits: increased uptake of AI-technologies, due to increased societal trust and increased legal certainty / reduced legal fragmentation. Stakeholders affected: companies in the European AI-sector; citizens. Data sources: economic analysis by supporting studies; stakeholder feedback (public consultation, bilateral, webinars, supporting studies); quantified estimates of the impact of the policy options on the AI market value in the EU.
    - Improved competitiveness of the European AI sector. Stakeholders affected: companies in the European AI-sector.
    - Improved safety of AI-enabled products and services, due to effective incentive to prevent harm. Stakeholders affected: victims of damage caused by AI-applications, including companies, consumers and citizens.

    3.    Criteria for assessing coherence

    The coherence of the policy options is assessed taking into account the following criteria:

    -consistency and synergetic interplay with the other initiatives forming part of the follow-up to the White Paper on AI, with particular focus on the complementarity with the AI Act and the PLD review;

    -consistency with the rationale and approaches of existing national liability systems, so as to enable a frictionless integration into national law.

    4.    Criteria for assessing proportionality

    With respect to the proportionality criterion, it has been assessed in particular whether

    -the measures envisaged under the different policy options are suitable to achieve the specific objectives;

    -the measures are limited to the minimum intervention necessary to achieve those objectives, or whether less intrusive measures could be equally effective;

    -the impacts of the measures are overall justified given the expected benefits.

    B.    Methodology for determining the preferred policy option: multi-criteria analysis

    The policy options were assessed and compared as to their effectiveness, efficiency, coherence and proportionality, differentiating between the stakeholders affected by the respective impacts. Given the varied nature of the criteria applied as well as the difficulty of quantifying most of the impacts, a multi-criteria analysis (MCA) methodology is applied for the purposes of this comparison. The respective scores are subsequently represented in a matrix showing the overall result. In order to ensure the robustness and test the sensitivity of that result, different weights are applied.

    C.    Comparative assessment and scoring of policy options

    The following sections present an overview of the scores given to the policy options for their respective efficiency, effectiveness, coherence and proportionality. Explanations are provided with a focus on the differentiating elements. Each IA criterion is scored on a scale from -5 to +5 for the purposes of the comparative ranking. Different scores are given as regards the implementation of the respective measures by way of a binding instrument (most likely a Directive) or a non-binding one (recommendation).
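    Purely for illustration, the mechanics of such a weighted multi-criteria aggregation can be sketched in the following short Python snippet; the scores, weights and option labels used here are placeholders, not the values actually assigned in this impact assessment.

        # Illustrative weighted multi-criteria aggregation with a simple sensitivity check.
        # All scores (-5 to +5) and weights below are placeholders, not the values used in this IA.
        scores = {
            "PO1": {"effectiveness": 4, "efficiency": 3, "coherence": 4, "proportionality": 4},
            "PO2": {"effectiveness": 4, "efficiency": 2, "coherence": 3, "proportionality": 2},
            "PO3": {"effectiveness": 4, "efficiency": 3, "coherence": 4, "proportionality": 3},
        }
        weightings = {
            "equal weights":         {"effectiveness": 1, "efficiency": 1, "coherence": 1, "proportionality": 1},
            "effectiveness doubled": {"effectiveness": 2, "efficiency": 1, "coherence": 1, "proportionality": 1},
        }
        for label, weights in weightings.items():
            total_weight = sum(weights.values())
            ranking = sorted(
                ((sum(s[c] * weights[c] for c in weights) / total_weight, po) for po, s in scores.items()),
                reverse=True,
            )
            print(label, [(po, round(score, 2)) for score, po in ranking])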

    1.    Effectiveness

    1.1.    Specific objectives 1 (ensure legal certainty) and 2 (prevent fragmentation)

    (a)    Policy Option 1

    The targeted, risk-based alleviations of the burden of proof envisaged under Option 1 would address the major sources of legal uncertainty identified in the problem analysis, which are linked to the AI-specific difficulties of meeting the burden of proof under fault-based liability rules. Option 1 would do so in a harmonised manner, ensuring a consistent minimum level of AI-specific alleviations of the burden of proof at national level. It would thereby achieve the more general objective to improve conditions for cross-border business activities involving AI.

    Example 359 : A start-up based in MS A is considering providing educational services for autistic children in another MS (B). The company uses a social robot equipped with AI-enabled software modules that can autonomously interact with the children and adapt to their individual behaviour. As it is not a producer, the company does not fall under the PLD. Due to the harmonised measures envisaged under PO1, the company nevertheless knows that, in the case of an accident, it may be required by B’s civil courts to prove how or why the robot’s AI systems came to the output that caused the accident. This knowledge facilitates the company’s choice between alternative technological solutions, as it provides an incentive to favour transparent and explainable AI systems. Moreover, PO1 enables the company to estimate its liability risks more accurately, and thus to develop a more robust profitability and cost-management model to obtain financing. By the same token, the increased legal certainty afforded by PO1 helps the company to obtain more appropriately priced liability insurance coverage. When rolling out its services in additional MS, the company faces reduced legal information and transaction costs as it can rely on harmonised adaptations of the burden of proof. The increased level of legal certainty and lower fragmentation may thus make it easier and more attractive for the company to roll out its services across borders.

    Legal certainty and reduced fragmentation would benefit start-ups and other SMEs in particular, as they are disproportionately affected by legal uncertainty and fragmentation.

    However, there is currently still a certain degree of uncertainty as to whether measures to ease the burden of proof will by themselves be sufficient to ensure legal certainty and prevent fragmentation completely. In light of the technological, regulatory and market developments over the coming years regarding AI-systems posing a risk to high-ranking legal interests, MS might come to the conclusion that there is a need for AI-specific strict liability rules. If they enact such rules at national level – which policy option 1 allows – it cannot be ruled out that the legal fragmentation expected under the baseline scenario might materialise to some extent. For these reasons, policy option 1 is given a score of 4 for its effectiveness regarding specific objectives 1 and 2, assuming that this option would be implemented through a binding EU instrument. By contrast, if the measures to ease the burden of proof were to be merely recommended to MS, it is not expected that a significant harmonisation effect would be achieved. It is highly uncertain how many MS would follow the recommendation, and even amongst those that do, approaches to implementing it are unlikely to be aligned given the diverging legal traditions in the area of civil and procedural law. The implementation of policy option 1 by means of a recommendation is therefore given a score of 1 regarding specific objective 2 (prevent fragmentation). As a recommendation may, because of its non-binding nature, prompt only a few MS to address the issue of legal uncertainty in their national law in the way it is recommended, this approach is given a score of 1 regarding specific objective 1 (legal certainty).

    (b)    Policy Option 2

    A harmonised strict liability regime, possibly coupled with mandatory insurance, is in principle suitable to ensure legal certainty and prevent fragmentation. Provided that the AI-enabled technologies covered by that regime can be specified with a high degree of precision, the companies operating / using those technologies could have an even clearer and more consistent basis for assessing their liability risk. This would benefit start-ups and other SMEs in particular, as they are disproportionately affected by legal uncertainty and fragmentation. 360 However, the time horizon and modalities of the roll-out of technologies with a relevant risk profile and degree of autonomy are not yet known. It would therefore, at this point in time, be somewhat challenging to assess the risk profile of those technologies, and to specify them in a legislative instrument in a way that ensures a maximum of legal certainty regarding the scope of the harmonised strict liability regime. This consideration applies all the more with respect to a possible mandatory insurance regime covering strict liability. The obligation to ensure insurance coverage is a market entry requirement, which means that it is crucial to enable market participants to assess with a high degree of certainty whether they fall under this requirement or not.

    Example 361 : A company based in Belgium wants to provide outdoor cleaning services using AI-enabled cleaning robots in the Netherlands, France, Luxembourg and Germany. The robots are the size of a small car and are intended to move autonomously (without direct or constant human control or supervision), including in public spaces accessible to unwitting third parties. PO2 could clarify that this activity would be subject to strict liability, possibly coupled with mandatory insurance coverage, in all MS. It would thereby support more predictable and certain financial planning and cost management, and liability-related costs would be limited to the insurance premiums. For insurers, harmonised strict liability would enable a more robust risk assessment and more accurate pricing.

    In light of these elements, policy option 2 is given a score of 3 regarding specific objective 1 (legal certainty), and a score of 4 regarding specific objective 2 (prevent fragmentation), assuming it would be implemented through a binding instrument (Directive). For the same reasons as set out above for policy option 1, a non-binding instrument is unlikely to achieve those objectives to a significant extent even if it also includes recommendations to provide for strict liability, possibly coupled with mandatory insurance. This sub-option is therefore given the same effectiveness scores as policy option 1.

    (c)    Policy option 3 (staged approach)

    During the first stage, Option 3 would be as effective as Option 1. Deferring the possible harmonisation of strict liability until there is more certainty about the technological and regulatory context defining the risk-profile and other conditions of deployment of the AI-enabled technologies with a potential ‘strict liability profile’ (operational environment, safety requirements, user profile, etc.) is conducive to ensuring legal certainty and uniform implementation of the possible strict liability regime (objectives 1 and 2), as it allows its material and personal scope to be specified with greater precision. For these reasons, policy option 3 is given a score of 4 for its effectiveness regarding specific objectives 1 and 2. The non-binding sub-option is scored consistently with options 1 and 2, as the same considerations and limitations apply.

    1.2.    Specific objective 3 (ensure consistent level of victim protection; increase level of societal trust and consumer uptake)

    (a)    Policy option 1

    The measures to ease the burden of proof under policy option 1 would effectively prevent AI-induced compensation gaps, and thus be suitable to ensure that victims suffering harm caused by AI – whether they are consumers or businesses – have the same level of protection as victims harmed by other technologies. Victims would be relieved of having to overcome the characteristic opacity of certain AI-systems to prove their claims. 362 Consequently, they would spend less on technical expertise and have better prospects of making a successful claim. 363  

    Example 364 : In the previous example involving an autonomous cleaning robot with remote human supervision (see 2.6.), PO1 would allow the victim to gain access to information held, in accordance with the proposed AI Act, by the provider or the user of the relevant AI-systems. This information could include e.g. logged information on inputs, outputs or internal states of the AI subsystems, or information on the suitable operating environment and human oversight requirements. On this basis, an expert could analyse the correlations between input parameters and the output signals that caused the robot to crash. It may thus be possible to rule out certain causes of the accident. To the extent that the required causal link between an action or omission of the potentially liable person and the damage would remain obscured by the opacity and the lack of explainability of the AI systems involved, PO1 would alleviate the victim’s burden of proof to prevent these characteristics from leading to a lower level of victim protection. Moreover, if the victim, likely with the help of an expert analysing the available information on the AI-systems involved, can establish that the liable person (e.g. the company using the robot to provide cleaning services) did not comply with their obligations under the AI Act, the liable person’s fault would be presumed under PO1. As regards claims under the PLD against the final producer of the robot or manufacturers of individual AI-components, the PLD revision would provide the injured party with access to technical information held by the producer, and ensure that the latter cannot avoid liability based on the development risk defence if the AI-systems at issue were by design unpredictable. Taken together, these measures would thus ensure that the involvement of AI does not lead to a lower level of protection of the injured person.

    The behavioural economics study commissioned for this IA has shown that the perceived low likelihood of compensation and the difficulty of determining who is liable count amongst the most relevant reasons for low levels of consumer trust in and societal acceptance of AI. 365 It has also confirmed that consumers who perceive liability rules as appropriate to protect victims of harm are significantly more willing to take up such products and services 366 and that a liability regime where the burden of proof has been adapted in favour of the victim ranks higher in the perception of consumers than a regime where the victim bears the full burden of proof. Option 1 is therefore expected to effectively contribute – together with the already proposed adaptations of safety rules – to increasing the level of societal trust in AI-enabled products and services and consumers’ willingness to take up such products and services.

    Indirect social impacts: By preventing liability deficits, PO1 would provide an effective incentive to prevent harm and thus drive safety-enhancing innovation and contribute indirectly to people’s overall level of safety. 367 This mechanism would apply, firstly, to businesses subject to specific safety requirements – in particular the user and provider under the AI Act. Secondly, by ensuring the effectiveness of general liability rules under national law, the incentive effect of PO1 could extend to any stakeholders whose actions or omissions may have contributed to the causation of damage, such as providers of labelled training or testing data. 368 Moreover, behavioural research has shown that adapting the burden of proof in favour of the injured party makes people more likely to consider that victims receive just compensation and that the legal framework is reasonable, predictable and transparent. 369 By promoting effective access to justice, PO1 is hence likely to increase societal trust in the justice system.

    Given that the risk-profile and operating parameters of future AI-systems are not fully known at the present stage, it is however not entirely certain (yet) that measures to ease the burden of proof will, in combination with the revision of the PLD, be sufficient to fully ensure that victims are protected equally well when suffering harm caused by AI. For these reasons, policy option 1 is given a score of 4 for its effectiveness regarding specific objective 3. By contrast, if the measures envisaged under this policy option were merely recommended to MS in a non-binding instrument, the degree to which this specific objective could be achieved would hinge on the implementation rate by MS and would thus be uncertain. As it is highly likely that a significant number of MS would not act on the suggestions in the way recommended, this sub-option is given a score of 1 for its effectiveness regarding specific objective 3.

    (b)    Policy option 2

    A harmonised strict liability and possible mandatory insurance regime, as the distinguishing features of Option 2, could prevent a lack of compensation even more effectively than the alleviations of the burden of proof common to Options 1 and 2. The expected effects on societal trust follow a similar pattern as under Option 1, as it represents simply another – potentially even more effective – way of ensuring an effective compensation of victims. As with the alleviations of the burden of proof, this is likely to have a positive effect on consumers’ perception of the appropriateness of liability rules, which in turn is likely to increase their willingness to take up AI applications. However, the assessment must also account for the fact that the specific risk profile of relevant AI-enabled products and services is not yet fully known, as these products and services are still in a pre-market development phase.

    Indirect social impacts: The mechanisms by which the strict liability element of PO2 would contribute to incentivising users of AI-enabled technologies with a special risk profile to minimise harm are similar to the ones discussed under PO1. The ability of strict liability rules to incentivise safety efforts depends to a large extent on whether the strictly liable person has cost-efficient means to prevent damage 370 . The control criterion envisaged for assigning strict liability to professional users/operators is therefore conducive to the desired incentive effects. Moreover, behavioural research has shown that a strict liability framework for AI is more likely to be perceived as predictable and transparent than fault-based liability. 371 The strict liability element is thus likely to increase societal trust in the justice system.

    Taking into consideration all of these elements, Option 2 is given a score of 4 for its effectiveness regarding specific objective 3. The sub-option assuming a non-binding instrument is scored consistently with Option 1, as the same limitations apply.

    (c)    Policy option 3

    During the first stage, Option 3 would be as effective as Option 1 in achieving specific objective 3. While a potentially even more far-reaching protection of victims through strict liability and possibly mandatory insurance for the use of certain AI-technologies will not be realised during this first stage, the targeted review mechanism makes it possible to systematically re-assess the need for these more far-reaching measures. This mechanism lays the groundwork for ensuring that specific objective 3 can be fully achieved even if the need for a harmonised strict liability regime is confirmed in light of technological and market developments as well as empirical evidence on civil liability cases involving AI.

    Option 3 is therefore given a score of 4 for its effectiveness regarding specific objective 3. The sub-option assuming a non-binding instrument is scored consistently with Options 1 and 2, as the same limitations apply.

    1.3.    Comparative overview of effectiveness scores

    Success criteria for specific objectives and scores (impact net of the baseline, -5 to +5)

    Specific objective 1 (legal certainty)
    - Increased level of legal certainty, leading to improved conditions for (in particular cross-border) business activities / investments (e.g. reduced costs, more investment and financing security)
    - Increased level of legal certainty, leading to improved conditions for offering insurance coverage, in particular for cross-border activities; emergence of new market opportunities for insurance companies.
    Scores: Policy Option 1: 4 (binding) / 1 (non-binding); Policy Option 2: 3 (binding) / 1 (non-binding); Policy Option 3: 4 (binding) / 1 (non-binding)

    Specific objective 2 (prevent legal fragmentation)
    - No emergence of fragmented liability rules for AI at MS level → improved conditions for cross-border business activities / investments (reduced costs, more investment and financing security, ...)
    - Improvement of the conditions for offering insurance coverage, in particular for cross-border activities; emergence of new market opportunities for insurance companies.
    Scores: Policy Option 1: 4 (binding) / 1 (non-binding); Policy Option 2: 4 (binding) / 1 (non-binding); Policy Option 3: 4 (binding) / 1 (non-binding)

    Specific objective 3 (compensation of victims / trust in AI)
    - The effectiveness of liability claims is not diminished by the specific characteristics of AI (opacity, autonomy, complexity, lack of predictability, etc.)
    - Liability rules provide an effective incentive to potentially liable persons (in particular users and providers of AI systems) to prevent harm
    - Increase in the level of societal trust and consumers’ willingness to take up AI-technologies.
    Scores: Policy Option 1: 4 (binding) / 1 (non-binding); Policy Option 2: 4 (binding) / 1 (non-binding); Policy Option 3: 4 (binding) / 1 (non-binding)

    Overall score: Policy Option 1: 12 (binding) / 3 (non-binding); Policy Option 2: 11 (binding) / 3 (non-binding); Policy Option 3: 12 (binding) / 3 (non-binding)

    2.    Efficiency

    2.1.    Policy option 1

    (a)    Impacts on potentially liable parties (businesses and natural persons) 

    Due to the envisaged harmonised adaptations of the burden of proof, potentially liable parties – such as companies using AI-enabled products to provide services – would have a more robust and consistent basis for assessing their liability risk outside the scope of the PLD. As explained in the context of effectiveness above, it cannot be ruled out that certain MS might come to the conclusion, in light of the technological, regulatory and market developments over the coming years, that there is a need for AI-specific strict liability rules. If they enact such rules at national level, the legal fragmentation expected under the baseline scenario might materialise to some extent despite PO1. Nevertheless, the reduction of legal uncertainty and fragmentation through harmonised measures delivered by PO1 would benefit developers and users of AI-enabled technologies (e.g. companies providing AI-enabled services) by generating direct regulatory benefits, namely through the reduction of legal information/representation, internal risk management, and other compliance-related costs, as well as additional cross-border revenue. 372 By clarifying the kind of information and evidence potentially liable parties may be required to submit in civil proceedings, PO1 would also help them to choose more efficiently between different technological options 373 , namely by favouring more transparent and explainable solutions.

    As start-ups and other SMEs are significantly more affected by the internal market barriers created by legal uncertainty and fragmentation (see 2.6.), this stakeholder group would also benefit to a higher degree. 374 The expected positive impacts of PO1 on societal trust in AI and consumers’ willingness to take up AI-enabled products and services, as well as the improved competitiveness of the European AI sector would directly or indirectly benefit all companies in the AI value chain. 375  

    The findings of the supporting economic study refer to a preliminary set of policy options, which was not identical to the policy options retained for this impact assessment. In particular, the economic study assumed two different combinations of alleviations of the burden of proof under fault-based liability rules with a harmonised strict liability regime. One of these options reflected a targeted and AI-specific approach similar to the one described above under PO2, the other reflected the European Parliament’s resolution on a civil liability regime for AI. The economic study did not explicitly assess the economic impacts of alleviations of the burden of proof, as per PO1, taken in isolation. The policy options retained for detailed assessment have evolved precisely due to the conclusions of the economic study, as well as the results of the public consultation and subsequent discussions with stakeholders. Some assumptions made in the framework of the economic study had to be reconsidered, in particular as regards the feasibility, at the current point in time, of defining the scope of a harmonised strict liability regime for AI with a sufficient degree of precision and certainty. It therefore proved important to consider alleviations of the burden of proof on their own (PO1), as an alternative to introducing these alleviations together with strict liability (PO2). It is acknowledged that this entails a degree of uncertainty as to the extent to which the conclusions of the economic study apply to PO1. For the following reasons, the assessments made by the economic study are nevertheless still largely relevant:

    - According to the study, the economic benefits of an EU initiative on AI liability are primarily attached to the expected gains of legal certainty and reduced legal fragmentation. These effects are expected to materialise also with respect to the alleviations of the burden of proof taken in isolation (PO1), which will clarify in a harmonised manner how the burden of proof is to be handled in cases involving AI.

    - The measures PO1 shares with two of the policy options assumed for the purposes of the economic study are relevant for the major share of AI-enabled products and services, and thus decisive for the economic impacts on most stakeholders. This is because only a small set of AI-enabled technologies would have a risk profile warranting the application of strict liability.

    These benefits are likely to outweigh the following adaptation (substantive compliance) costs and redistribution effects linked to PO1: The business-as-usual costs under the baseline scenario, related to the uncertainty as to which liability rule would apply to AI and which burden-of-proof rule a court would apply in a concrete case, are higher than any potential adjustment costs borne by potentially liable parties. It will be easier for companies to estimate liability risks and related costs. While under the baseline scenario courts might apply, on an ad-hoc basis, alleviations of the burden of proof to remedy what they consider an inequitable result, clear and harmonised alleviations of the burden of proof will help liable parties to know what to expect in cases involving AI, both domestically and cross-border. Companies operating cross-border would benefit from reduced compliance costs compared to the very fragmented baseline scenario. Such clarity might also help companies obtain appropriately priced liability insurance coverage.

    In cases where the specific characteristics of AI would not have allowed the victim to prove the necessary facts under the baseline scenario, PO1 would shift the cost of compensating the relevant damage from the victim to the liable person, increasing the latter’s liability exposure. Likewise, victims would be relieved of some of the cost linked to meeting the burden of proof (e.g. costs of expert analysis). This cost may partly shift to the potentially liable party (who is however much more likely to have the necessary knowledge of the relevant AI systems in-house, without the need to procure external technical expertise). These effects are inherent in the Commission’s objective of avoiding that victims are less protected due to the use of AI. They are not regarded as undesirable impacts or undue burden. They are in line with the policy objective to ensure that victims of damage caused with the involvement of AI systems have the same level of protection as victims of damage caused by other technologies and in general with the purpose of liability law. They also achieve a macro-economically more efficient cost-allocation to the person best placed to prevent damage from occurring. For the impact analysis, these effects are taken into account as a re-distribution effect. This effect may entail an incremental increase of insurance premiums covering AI liability risks. As a major share of liable parties is likely to hold liability insurance 376 covering (also) risks linked to activities involving AI 377 , the impact of the burden re-distribution is approximated through a quantified estimate of its indirect effect on insurance premiums. For this purpose, it is assumed that the measures to ease the burden of proof would cause an increase by 15% of the share of general liability insurance premiums attributable to AI liability risks. This assumption is based on the following considerations:

    -PO1 does not involve a general reversal of the burden of proof, but only targeted adjustments to counter-balance the specific challenges of AI. The negative economic impacts for potentially liable parties are likely to be very marginal, as indicated by the quantified estimates set out below.

    -PO1 is likely to achieve an overall more efficient allocation of the burden of proof, as the difficulty to establish how or why an AI system arrived at a harmful output is typically less burdensome for potentially liable parties having influenced the operation of that AI-system (e.g. developers, users) than for victims.

    -As PO1 does not introduce new grounds of liability and keeps the basic allocation of the burden of proof intact, it is not expected to lead to a major increase in the number of civil actions – or an associated increase of insurance premiums – compared to the baseline, nor to significant familiarisation and implementation costs for businesses.

    -In many cases, national courts already have similar tools (disclosure orders, presumptions) at their disposal under the baseline scenario, although it is highly uncertain whether and how these tools would be used in practice.

    -Under the baseline, some MS might take partly similar measures in their national legal systems to address the specific challenges of AI. However, it is uncertain how many would do so and what precise shape these measures would take. National initiatives would in all likelihood not be aligned and thus entail further legal fragmentation.

    -The increased legal certainty and reduced fragmentation delivered by PO1 will have a premium-lowering effect on insurance, which will partly offset the premium-driving effect of preventing AI-induced compensation gaps.

    Based on the methodology described under point 1.2.(a)(iii)(last indent) above, an increase by 15% of the insurance premiums attributable to AI liability risks would represent an overall cost of EUR 5.35mln (based on the lower estimate of the AI market size) to EUR 16.1mln (based on the higher estimate of the AI market size) for potentially liable parties.
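    As a purely illustrative back-calculation (the underlying premium base is not restated in this section and is inferred here from the figures quoted above), the quoted cost range follows from applying the assumed 15 % increase to the share of general liability insurance premiums attributable to AI liability risks (P_AI):

    \[ \Delta C = 0.15 \times P_{AI} \;\Rightarrow\; P_{AI} \approx \frac{5.35}{0.15} \approx 35.7\ \text{mln EUR (lower estimate)}, \qquad P_{AI} \approx \frac{16.1}{0.15} \approx 107.3\ \text{mln EUR (higher estimate)} \]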

    The instrumental role of insurance in distributing and managing the impacts of the envisaged policy measures needs to be underlined: ultimately, insurance coverage will allow potentially liable businesses to spread liability costs across the community of all insured and thereby mutualise those costs. This mechanism limits the economic burden on each individual policyholder to the premium, preventing a possible deterring effect of liability risks and keeping market entry barriers low, which facilitates the roll-out of AI in particular by start-ups and other SMEs. 378 A large portion of the businesses concerned will likely procure insurance coverage voluntarily to benefit from this cost-limiting effect. 379 The expected development of a competitive insurance market for AI-related liability risks will provide the necessary conditions for effective coverage at moderate prices. 380 This expectation is supported by the fact that the insurance industry is actively pursuing the development of innovative AI-specific insurance products in order to explore the substantial new opportunities linked to this growth market 381 . The first insurance policies designed specifically to cover AI-enabled technologies have already been rolled out 382 . AI liability insurance may also be incorporated as an additional feature into existing general policies. 383 While an initial lack of actuarial data is likely, for a transitional period, to influence the premiums of AI-specific insurance products to some extent and make them more volatile, this effect is expected to dissipate quickly, as the data generated during the operation of these technologies will allow risk estimations to converge towards the optimum faster than in the case of ‘traditional’ technologies. Moreover, even if AI-specific insurance policies may initially be priced higher than warranted by the actual liability risk covered, this does not mean that premiums would be higher than for competing technologies not equipped with AI. The expected safety gains achieved through AI are likely to have a mitigating effect on insurance premiums in many cases 384 , and a sufficient number of insurance companies is likely to be active in the AI-specific insurance market from the outset, as they can rely on various tools and approaches to overcome the initial lack of actuarial data. 385 The regulatory framework established by the AI Act for the development and use of high-risk AI systems is likely to improve the conditions for AI risk assessment by insurers over the coming years. In addition, the Data Act will promote access to data generated by a user’s product and thus facilitate the provision of services that depend on or can be improved by such data, including insurance and data analytics. 386 Moreover, the Commission services will respond to the Parliament’s call to work closely with the insurance market to develop innovative insurance products 387 . The Commission will notably facilitate a dialogue between the insurance industry and companies active in the AI market (in particular SMEs).

    The results of the public consultation confirmed that insurance solutions could ensure that the victim receives compensation (63,7 % agreement v. only 5 % disagreement) and limit the costs of potential damage for the liable person to the insurance premiums (49,3 % agreement v. only 18,8 % disagreement). Even the share of business stakeholders (business associations + companies/business organisations) who confirmed the latter effect (37,8 %) was more than three times as large as the share of those disagreeing (12,2 %).

    Insurance Europe submitted that “liability insurance plays a vital role by transferring liability risks from companies and consumers to insurers and thereby, protecting the insureds’ economic position as well as ensuring that injured persons are compensated for loss or damage”.

    With respect to high-risk AI-systems, PO1 could indirectly entail some minor administrative burden, namely to the extent that it prompts the disclosure of information documented pursuant to the AI Act. However, PO1 relates only to information that had to be logged, documented, and stored for possible disclosure to supervisory authorities pursuant to the AI Act – activating the same information also in the context of civil proceedings is not expected to entail a significant added burden. Such disclosure would, moreover, only be ordered in the context of pending civil proceedings before national courts. In accordance with relevant procedural law, the competent national court could subject the disclosure to stringent safeguards to ensure proportionality and to protect the legitimate interests of all parties concerned, for instance with regard to confidential information, intellectual property rights and trade secrets. Potentially liable parties’ interests would thus be effectively safeguarded by PO1.

    (b)    Impacts on victims of damage caused by AI (natural persons and businesses)

    In line with the political objective to ensure that victims of harm caused with the involvement of AI enjoy the same level of protection as persons having suffered harm caused by other technologies, PO1 would relieve the victims of the burden of bearing the damage, to the extent that their claims for compensation would have failed under the baseline due to the specific challenges of AI. This burden would be re-distributed to the person responsible for causing the damage. This applies not only in respect of material damage, but also pure economic loss and non-material harm (such as psychological harm and damage caused by discrimination) to the extent that these types of harm are compensable under existing rules. PO1 would also reduce victims’ costs linked to the burden of proof (e.g. for expert analysis), by ensuring access to relevant information and alleviating the victim’s burden of establishing how or why an AI-system arrived at a certain output. Based on a conservative estimate, it is assumed that these measures would reduce the costs to be advanced by victims due to the AI-specific difficulty of meeting the burden of proof by at least 50 %. This assumption is based on the following considerations:

    -PO1 does not provide for a general reversal of the burden of proof. In principle, the victim would still bear the burden of proof in accordance with general rules. However, the targeted alleviation of the burden of proof regarding the question how or why an AI-system reached a certain (harmful) output would relieve victims of the need to ‘look inside the black box’ to demonstrate the inner workings of the AI system.

    -The measures linked to the AI Act (rules on the disclosure of information to be documented/logged pursuant to the AI Act + presumption of causality in the case of non-compliance with relevant requirements of the AI Act) apply only with respect to AI systems qualified as ‘high risk’ by the AI Act. While this is in line with the principle of proportionality and the Commission’s risk-based approach to AI regulation, it means that the measures do not alleviate the victim’s burden of proof in cases where other types of AI systems cause damage. In the latter cases, victims can nevertheless invoke the targeted alleviation of the burden of proof.

    Based on this assumption, it is estimated that PO1 would reduce costs to be advanced by victims to meet the burden of proof by ca. EUR 2 000 on average, per case in which the harmonised provisions apply. This estimate should not be misconstrued as a comprehensive quantification of the AI-specific difficulty of meeting the burden of proof, because it does not take into account cases in which liability claims would not be pursued in the first place based on current liability rules, because the victim either cannot identify the liable party or considers the prospect of a successful claim insufficient to justify legal action. The preferred policy option will help victims also in the latter cases, by overcoming the compensation gaps induced by the specific characteristics of AI.

    The burden of proof will be distributed more efficiently overall, as potentially liable parties must by definition be capable of influencing, to some extent, the operation of AI-systems. They are therefore typically in a position to more easily discharge the burden of proof with respect to how or why such systems arrived at a certain harmful output. This has a cost-cutting effect on overall litigation costs.

    (c)    Consumers

    A faster roll-out of AI-technologies under PO1 would benefit consumers, e.g. in the form of faster and more personalised services, innovative and performant products as well as advances in the fields of health, safety, security, mobility, sustainability, circular economy, media, etc. Given the overall positive economic impacts also on businesses, it is not expected that the envisaged measures would lead to costs being passed on through increased consumer prices.

    (d)    Insurance companies

    PO1 may marginally increase the take-up of insurance by potentially liable parties (to the extent that the relevant coverage is not already included in existing all-risks policies 388 ), as increased legal certainty and reduced fragmentation create more favourable conditions for offering insurance coverage, and awareness of liability risks may rise slightly due to this initiative. An increased coverage rate would benefit victims of damage, as insurance claims provide an easier path to compensation and relieve victims of the liable party’s insolvency risk.

    (e)    Indirect economic impacts and impacts on the competitiveness of the internal market

    By the same token (avoidance of liability gaps), PO1 would contribute to an efficient cost allocation. Its combined impacts are expected to have a positive effect on cross-border trade in AI-enabled products and services and the development of the European AI-sector as a whole. 389 The economic study commissioned for this impact assessment estimated that a combination of alleviations of the burden of proof (as per PO1) and measures to harmonise strict liability for certain AI-enabled products and services (cf. PO2 and 3) would increase the cross-border trade in the AI-enabled goods and services falling under the six use-cases analysed in depth for that study by about 5 %. While PO1 does not include all of the assumptions made for that estimation, it is nevertheless relevant because the decisive drivers of the expected economic benefits – increased legal certainty, reduced fragmentation and increased consumer trust – are likely to materialise under PO1. 390 As explained under point A.2.1.3.(f) above, PO1 is expected to deliver an increase of the AI market value in the EU of between ca. EUR 500mln and ca. EUR 1.1bln.

    The attempt of the European Parliament to quantify the benefits of a clear and coherent EU civil liability regime for AI remained inconclusive 391 . Nevertheless, its preliminary analysis suggests that the added value of EU action on liability could generate EUR 54.8 billion by 2030 for the EU economy, in terms of acceleration of the level of research and development in AI, and in the range of EUR 498.8 billion if other impacts, including reductions of accidents, health and environmental impacts and user impacts are also taken into consideration. 392 As these numbers were not linked to a clearly defined set of PO, they cannot be readily applied to the PO described in this impact assessment. However, they provide a reasoned view on the order of magnitude of potential economic benefits linked to a clear and consistent civil liability framework for AI.

    (f)    Enforcement, adjudication and litigation costs

    Only small incremental impacts on enforcement, adjudication and litigation costs, borne by MS and parties to the proceedings respectively, are expected under PO1. The envisaged targeted and limited adaptations of the burden of proof are not likely to entail a substantial increase in the number of civil actions, as they are designed to apply only in the confined cases where the specific characteristics of an AI system make it unduly difficult to meet the default burden of proof. Moreover, the burden of proof will be distributed more efficiently overall, as potentially liable parties must by definition be capable of influencing the operation of AI-systems. They are therefore typically in a position to more easily discharge the burden of proof with respect to how or why such systems arrived at a certain harmful output. This has a cost-cutting effect on the overall enforcement, adjudication and litigation costs. More particularly, it is assumed that a fraction ranging from 10 % to 80 % of the amount saved by victims due to the alleviations of the burden of proof under PO1 will have to be advanced by potentially liable businesses 393 . This broad assumed range is based on the following considerations:

    -In certain cases, for instance where the defendant is a provider of a high-risk AI system falling under the AI Act, they will have optimal information on and understanding of the functioning of the relevant AI system. They will thus not need to procure any external technical expertise to discharge the burden of proof. A small fraction (e.g. 10%) of the cost may nevertheless be shifted to this type of liable party as PO1 may cause them to devote some additional internal resources to:

    o    establishing why or how the relevant AI system arrived at a certain (harmful) output (e.g. through reverse engineering of the output, testing of the AI system and digital forensics) and

    o    providing information on the AI system to the victim.

    -On the other end of the spectrum of conceivable cases, the defendant may not have any advanced understanding of the functioning of the relevant AI system, nor easy access to detailed information on that system. This may for instance be the case where the defendant is an SME using an AI system not falling under the transparency and documentation requirements of the AI Act. As this type of defendant is nevertheless in a better position than the victim to establish what triggered the damage, it is assumed that even in such cases the amount saved by victims would not be re-distributed to the defendant in its entirety (but only to a large extent, e.g. by 80%).

    Based on these assumptions, PO1 would entail an increased cost for the liable party linked to the burden of proof of ca. EUR 200 to ca. EUR 1 600 per case in which the harmonised provisions apply.
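    For illustration, this per-case range follows directly from applying the assumed re-distribution fractions to the average amount saved by victims under PO1 (ca. EUR 2 000 per case):

    \[ 0.10 \times 2\,000\ \text{EUR} = 200\ \text{EUR}, \qquad 0.80 \times 2\,000\ \text{EUR} = 1\,600\ \text{EUR} \]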

    (g)    Efficiency score of PO1 assuming a binding legal instrument (Directive)

    In light of these considerations, PO1 is given an efficiency score of 4, assuming that it would be implemented by a binding legal instrument (most likely a Directive).

    (h)    Efficiency score of PO1 assuming a non-binding instrument (recommendation)

    By contrast, if the measures under PO1 were merely recommended to MS in a non-binding instrument, the described benefits would likely materialise to a much lesser extent. The implementation rate of non-binding instruments is difficult to predict and there is no sufficient indication that the persuasive effect of a recommendation would be strong enough to produce consistent adaptations of national laws. Therefore, the desired harmonisation effect and legal certainty are unlikely to be delivered by a recommendation. This applies in particular to measures concerning the private law of obligations, of which extra-contractual liability rules form part. This area is characterised by long-standing legal traditions, which makes MS reluctant to pursue harmonised reform unless driven by the clear prospect of internal market benefits under a binding EU instrument. 394 Moreover, the significant existing divergences between MS’ civil liability frameworks (see 2.4.) are another reason why a recommendation is unlikely to be implemented in a consistent manner. While the expected slight increase in insurance premiums might be even lower if PO1 were implemented by a non-binding instrument 395 , divergences and reduced legal certainty in cross-border cases would persist (even to the – likely very limited – extent that MS choose to implement a recommendation). Likewise, compensation gaps are likely to persist to a large extent and the expected cost savings for victims linked to the alleviations of the burden of proof would mostly not materialise. 396 The economic study confirmed that a non-binding initiative would not address the identified internal market obstacles effectively, as the underlying problems would likely be perpetuated to a substantial extent. 397 It concluded that a non-binding instrument would not achieve any increase in cross-border trade. 398 Therefore, this sub-option is given an efficiency score of 1.

    2.2.    Policy option 2

    PO2 differs from PO1 as regards the strict liability regime applicable to users of AI technologies with a special risk-profile, possibly coupled with a mandatory insurance regime. The following efficiency assessment therefore focuses on these elements.

    (a)    Impacts on businesses (in particular as potentially liable parties)

    Provided that the AI-enabled technologies covered by the harmonised strict liability regime can be determined with a sufficient degree of precision, this element of PO2 would be suitable to increase legal certainty and reduce fragmentation by providing users of AI-enabled technologies having a special risk profile with a broader basis for assessing their liability risk outside the scope of the PLD. This could generate direct regulatory benefits, in the form of reduced legal information/representation, internal risk management, insurance and other compliance-related costs, as well as additional cross-border revenue. 399 Moreover, the expected positive impacts of PO2 on consumers’ willingness to take up AI-enabled products and services (see below) would directly or indirectly benefit all companies in the AI value chain. However, the fact that the time horizon for the roll-out of the technologies that might fall under the scope of the envisaged strict liability regime, and the modalities defining their risk profile (e.g. their regulatory and operating environment, safeguards, supervision), are not yet entirely clear could reduce the expected benefits of PO2 in terms of ensuring legal certainty. 400 For the reasons set out in the following paragraphs, the overall economic benefits are nevertheless expected to outweigh the cost factors under PO2.

    In certain cases, beyond merely harmonising existing strict liability rules, PO2 would entail the application of strict liability and possibly an insurance obligation to operators/users of AI technologies that would otherwise be subject only to fault-based liability under national law. In such cases, PO2 could be a disincentivising factor for businesses choosing between AI-enabled technologies and functionally equivalent alternatives (e.g. human-driven sweepers or traffic management systems relying on merely automated, deterministic software). However, this effect is likely to be offset by the cost reduction and internal market opportunities generated through harmonised liability rules. More specifically, the economic study found that the moderate compliance costs linked to PO2 would be outweighed by cost savings thanks to higher legal certainty, saved resources on compliance, and higher revenue enabled by a clearer and less fragmented legal framework. 401 Moreover, the role of insurance solutions, as described under PO1, is instrumental, as it limits potentially liable parties’ costs to the insurance premiums, keeping market entry barriers low 402 .

    In principle, this mechanism holds true whether the relevant risk is covered by voluntary (market-driven) insurance or a mandatory insurance regime (either harmonised or regulated at national level). 403 In response to the public consultation, insurance stakeholders have pointed to a potential problem with a harmonised insurance obligation for AI-enabled products and services. This was linked to the lack of statistical data on accidents and damages, which could initially drive up risk margins and thus insurance premiums. 404 Some SME stakeholders have also raised concerns about high insurance premiums due to the difficulties in assessing the covered liability risks. On the other hand, the fact that the insurance industry is already proactively developing new insurance products for AI risks 405 and that this field is considered by some as the next big growth market 406 supports the expectation that coverage will be offered at competitive prices. The parallel initiative adapting the PLD to the digital age is key to enable insurers to take recourse against producers, based on claim subrogation, in particular where defective AI software contributed to the insured damage. This can also contribute to keeping premiums low. 407

    In any event, the Commission services will respond to the Parliament’s call to work closely with the insurance market to develop innovative insurance products 408 . The Commission will notably facilitate a dialogue between the insurance industry and companies active in the AI market (in particular SMEs).

    As start-ups and other SMEs are significantly more affected by the internal market barriers created by legal uncertainty and fragmentation (see 2.6.), this stakeholder group would also benefit to a higher degree from the overall positive economic impacts of PO2. 409

    Regarding the effect of PO2 on insurance premiums, it is assumed that:

    -a combination of PO1 with a limited strict liability regime applicable to damage caused by the operation/use of certain AI-enabled products or the provision of certain AI-enabled services with a specific risk profile would cause an incremental increase by 25% of liability insurance premiums attributable to AI liability risks;

    -if these measures were combined, in addition, with an obligation to cover the harmonised strict liability regime by insurance, an incremental increase by 35 % of the relevant shares of insurance premiums is assumed.

    In addition to the considerations set out under point (a) above (regarding the measures to ease the burden of proof under fault-based liability rules), these assumptions are based on the following:

    -Where strict liability applies, the effect of re-distributing the burden of bearing the damage is significantly stronger, as the liable party cannot avoid liability even if not at fault. However, this regime would apply only with respect to a small number of AI-enabled products and AI-enabled services, and would therefore change the estimated impact on insurance premiums only to a limited extent.

    -Coupling strict liability with an insurance obligation would entail an incrementally bigger increase of insurance premiums, as this sub-option of PO2 would preclude to some extent the possibility for insurers to manage risks through contractual exclusions and limitations of coverage. As confirmed by feedback received during the public consultation, mandatory insurance can drive up premiums in particular if insurers do not have sufficient statistical accident data to price premiums with a high degree of accuracy. Regarding novel AI-enabled products and services, this issue would primarily be relevant during an initial stage marked by scarce actuarial data, but is expected to dissipate gradually as data becomes available. Even during the initial stage, the harmonised safety framework provided by the AI Act and sectoral safety legislation at EU level will help insurers to assess the risks linked to the operation of relevant AI systems.

    Based on the methodology described under point 1.2.(a)(iii)(last indent) above, PO2 would entail the following costs through an increase of the insurance premiums attributable to AI liability risks:

    -if the alleviations of the burden of proof under fault-based liability rules were combined with a limited strict liability regime (only): EUR 8.89mln (based on the lower estimate of the AI market size) to EUR 26.84mln (based on the higher estimate of the AI market size);

    -if these were combined, in addition, with an insurance obligation covering strict liability: EUR 12.44mln (based on the lower estimate of the AI market size) to EUR 37.57mln (based on the higher estimate of the AI market size).
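    As an illustrative cross-check (using the AI-attributable premium base of roughly EUR 35.6mln to EUR 107.3mln implied by the PO1 estimate above), these amounts correspond to the assumed incremental increases of 25 % and 35 %:

    \[ 0.25 \times 35.6 \approx 8.9, \quad 0.25 \times 107.3 \approx 26.8, \qquad 0.35 \times 35.6 \approx 12.5, \quad 0.35 \times 107.3 \approx 37.6 \quad (\text{mln EUR}) \]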

    (b)    Victims of damage caused by AI applications with a special risk profile (natural persons and businesses)

    A harmonised strict liability regime would facilitate victims’ access to compensation to an even greater extent than alleviations of the burden of proof. Injured persons would have to establish only that the covered risk materialised. 410 Within the scope of strict liability, PO2 would thus substantially ease the victim’s burden of proof and reduce associated costs (e.g. for expert analysis). Based on a conservative estimate, it is assumed that PO2 would reduce the costs to be advanced by victims due to the AI-specific difficulty of meeting the burden of proof by at least 60 %. This assumption is based on the following considerations:

    -Regarding AI systems not falling under the scope of the harmonised strict liability regime, that is to say the vast majority of AI systems, the same considerations as under PO1 apply.

    -Where victims can invoke the harmonised strict liability regime, they would not bear a significant cost linked to the burden of proof. However, this would represent a benefit compared to the baseline only to the extent that strict liability does not apply also under the baseline. Most MS already have strict liability regimes in place for the use of AI-enabled products such as autonomous vehicles and autonomous drones.

    Based on this assumption, it is estimated that PO2 would reduce costs to be advanced by victims to meet the burden of proof by ca. EUR 2 500 on average.

    In addition, a possible mandatory insurance regime would relieve victims of the liable parties’ insolvency risk and provide them with an even cheaper, faster and easier path to compensation. 411

    (c)    Consumers

    The expected benefits for consumers in terms of a faster roll-out of AI-enabled products and services are likely to be similar under PO2 as under PO1, because their common element – alleviations of the burden of proof under fault-based liability rules – applies to the major part of AI-enabled products and services affected by the initiative. The strict liability and mandatory insurance elements of PO2 are much more limited in scope. In respect of these elements, the difficulty of specifying, at the present point in time, the AI-enabled technologies with a ‘strict liability profile’ could lead to a certain degree of legal uncertainty, which could in turn disincentivise AI roll-out in certain cases. Taking into account the cost-mitigating effect of insurance, it is not expected that the shift, due to strict liability, of compensation costs from victims to the entities controlling the relevant risk would lead to increased consumer prices.

    (d)    Insurance companies

    The harmonisation of strict liability under PO2 is likely to entail some additional demand for insurance, in particular when coupled with a compulsory insurance regime, which generates new market opportunities for insurance companies.

    (e)    Indirect economic impacts and impacts on the competitiveness of the internal market 

    While the envisaged strict liability regime may be the most certain and easiest way to ensure that the victim does not bear the cost of the damage, it may not in all cases allocate that cost to the party at the origin of the damage. However, if it is coupled with mandatory insurance, such a cost allocation will be achieved through subrogated recourse claims. 412 Where the insurance company of the strictly liable person compensates the victim, the victim’s damage claim based on fault or product liability would pass to the insurance company. On the basis of this claim, the insurance company could take recourse against the person at the origin of the damage, for instance against the producer if the product was defective. Thereby the most efficient cost allocation would be achieved. 413 At the same time, insurance coverage of liable persons would limit their economic costs to the annual insurance premium and keep market entry barriers for AI producers and operators low, while ensuring that the victim’s harm is compensated in a smooth way. The combined impacts of PO1 and 2 are expected to have a positive effect on cross-border trade in AI-enabled products and services and the development of the European AI-sector as a whole. 414

    (f)    Enforcement, adjudication and litigation costs

    For similar reasons as under PO1, only small incremental impacts on enforcement, adjudication and litigation costs are expected under PO2. By dispensing with the need to establish fault and a causal link between fault and damage, strict liability considerably reduces the overall evidentiary complexity and need for costly technical expertise. More particularly, it is assumed that a fraction ranging from 5 % to 60 % of the amount saved by victims due to the eased burden of proof under PO2 will have to be advanced by potentially liable businesses 415 . This assumed range is based on the following considerations:

    -Regarding AI systems not falling under the scope of the harmonised strict liability regime, that is to say the vast majority of AI systems, the same considerations as under PO1 apply.

    -In cases where PO2 leads to the application of strict liability, the cost savings for victims due to the fact that they do not have to prove fault would not be shifted to the defendant because the latter cannot avoid liability by establishing, possibly with the help of costly technical expertise, that the damage was not caused by their fault.

    Based on these assumptions, PO2 could entail an increased cost for potentially liable businesses linked to the burden of proof of ca. EUR 100 to ca. EUR 1 500 per case in which the harmonised provisions apply.

    To the extent that risks within the scope of the strict liability element of PO2 are not already covered by a national strict liability regime under the baseline, the easier and more predictable path to compensation afforded by this measure may lead to a slight increase in the number of claims made. Any such increase is likely to be marginal. As strict liability would apply only in cases involving significant risks to important legal interests (life, health, property), victims would in such cases be likely to seek compensation also under the baseline, despite the challenges and costs linked to fault-based claims. 416

    (g)    Efficiency score of PO2 assuming a binding legal instrument (Directive)

    For all of the above considerations, PO2 is given an efficiency score of 3, assuming that it will be implemented through a binding legal instrument (Directive). Compared to the assessment of PO1, this score takes into account, in particular, that the time horizon and modalities of the roll-out of technologies with a risk profile relevant for strict liability are not yet known. It would therefore, at the present point in time, be somewhat challenging to assess the risk profile of those technologies and to specify them in a legislative instrument in a way that ensures a maximum of legal certainty regarding the scope of the harmonised strict liability regime. This consideration would be all the more relevant with respect to a possible mandatory insurance regime covering strict liability. The obligation to ensure insurance coverage is a market entry requirement. Having regard to the policy objective of promoting the roll-out of AI-enabled products and services, it is crucial to enable businesses to assess with a high degree of certainty whether they fall under such a requirement or not.

    (h)    Efficiency score of PO2 assuming a non-binding instrument (recommendation)

    If the measures under PO2 were merely recommended to MS in a non-binding way, the expected benefits would be likely to materialise, if at all, to a much lesser extent. For the reasons explained above in the context of PO1 (and in the section on effectiveness), this sub-option receives an efficiency score of 1.

    2.3.    Policy option 3

    During the 1st stage, PO3 would have the same impacts as PO1. As MS have to report on the implementation of the initiative in any event, in line with better regulation requirements on evaluation and monitoring, targeted reporting requirements supporting the review mechanism under PO3 would not entail a significant burden for them. While the potential benefits of a harmonised strict liability regime would materialise later than under PO2, the following factors will improve the efficiency of the more far-reaching measures (strict liability, possibly coupled with mandatory insurance) potentially taken at the second stage:

    -By ensuring that the assessment of a possible strict liability regime for the use of AI-enabled technologies can rely on a more developed factual basis, the staged approach minimises the risk of creating an uneven playing field for AI-enabled technologies.

    -The staged approach allows the development of tailored market-driven insurance solutions, which can be taken into account at the second stage when assessing the need for and effects of a mandatory insurance regime. Moreover, as AI technologies potentially subject to such a regime are rolled out over the coming years and statistical accident data is accumulated, the lack of actuarial data, the main potential cost driver of AI-specific liability insurance, will have become considerably less relevant by the time of the targeted review under PO3.

    As relevant safety standards for the AI-enabled technologies will be available by the time of the targeted review, the conditions for assessing liability risks will have improved for both insurers and liable users/operators. Given that the decision on a possible harmonisation of strict liability and mandatory insurance for certain AI-enabled products and services with a specific risk profile would be taken only at the stage of the targeted review, the same incremental increase of insurance premiums is assumed as for PO1 (EUR 5.35mln to EUR 16.1mln). Likewise, the same cost savings for victims linked to the alleviated burden of proof (ca. EUR 2 000 per case on average) and the same incremental cost increase linked to the burden of proof for liable businesses (ca. EUR 200 to EUR 1 600 per case) are expected.

    These considerations lead to an efficiency score of 4 for PO3, assuming it will be implemented through a binding legal instrument (Directive). The non-binding sub-option is scored consistently with the other two policy options, for the reasons explained above in the context of PO1 (and in the section on effectiveness).

    2.4.    Comparative overview of efficiency scores

    The following table shows the policy options’ respective efficiency scores, reflecting the ratio between all relevant costs and benefits for the affected stakeholder groups, as summarised in the mapping tables at the beginning of this Annex.

    Policy Option      Efficiency score (impacts net of the baseline, scale of -5 to +5)
    Policy Option 1    4 (binding) / 1 (non-binding)
    Policy Option 2    3 (binding) / 1 (non-binding)
    Policy Option 3    4 (binding) / 1 (non-binding)

    3.    Coherence

    3.1.    Policy option 1

    Option 1 would be coherent with the – not AI-specific – measures envisaged in the framework of the PLD review. These instruments are complementary as they address challenges posed by emerging digital technologies with respect to claims based on different grounds, directed against different liable persons and covering compensation for different victims and types of harm. The respective instruments use a consistent approach and similar tools (access to information, adaptations of the burden of proof) to ensure in their combination that products or services using AI or other digital technologies do not make it more difficult under any of the existing pillars of liability to get compensation compared to traditional products. The envisaged measures will together contribute to creating a more consistent and adequate civil liability framework for the digital economy, without upsetting the balance established by the existing rules.

    Option 1 would also be coherent with the proposals already adopted as part of the follow-up to the AI White Paper, in particular the AI Act. Notably, it would take over definitions of key concepts from those acts, such as ‘AI-system’, ‘provider’ and ‘user’, although additional criteria would be necessary to ensure that the envisaged alleviations of the burden of proof apply only where the specific characteristics of AI effectively challenge existing liability rules. The provisions on disclosure of information and presumptions of causality would build specifically on the requirements of the AI Act. Option 1 would thus complement that act by providing an additional incentive for ensuring the safety of AI-enabled products and services as well as the respect of fundamental rights.

    Finally, MS could fit the alleviations of the burden of proof envisaged under Option 1 into their national liability regimes without disrupting their respective legal traditions, because this policy option allows for sufficient flexibility and is based on tools that are already well-known in MS’ civil liability systems.

    For these reasons, policy option 1 is given a coherence score of 4, assuming that it would be implemented by means of a binding legal instrument.

    While a non-binding instrument would be even less intrusive and would allow MS maximum flexibility in assessing whether and how to adjust their national rules, this approach would not be coherent with the Commission’s overall AI policy, which aims at creating an ecosystem of trust and promoting the roll-out of AI in the internal market through a reliable and harmonised legal framework. Therefore, this sub-option is given a coherence score of 3.

    3.2.    Policy option 2

    Policy option 2 would be coherent with the measures envisaged in the PLD review. In particular, the criteria defining the operator/user for the purposes of the harmonised strict liability regime would differ from those defining the producer under the revised PLD. In cases where the producer is nevertheless at the same time also the user/operator, the fact that they would be subject to both the revised PLD and the harmonised strict liability regime for AI would not lead to inconsistent results, because they would incur these liabilities in different capacities. In any event, such scenarios already exist in national laws where strict liability and product liability apply in parallel.

    Strict liability of the user/owner of technologies with a specific risk profile is a widely known and established approach in national laws. Option 2 would thus be coherent with the approaches of existing liability systems. However, in cases where the use of AI-enabled technologies would fall under a different liability regime than the use of functionally equivalent non-AI technologies, the coherence of Option 2 with the existing legal framework could be questioned.

    Regarding the other strands of the Commission’s AI policy, in particular the AI Act, it needs to be considered that the already proposed measures, aimed at ensuring safety and an effective protection of fundamental rights, do not apply directly to some of the AI-enabled products that could fall under a strict liability regime (e.g. autonomous vehicles and drones). Safety rules specify the duties of care to be fulfilled by various actors with a view to preventing harm. These rules thus provide an important framework for assigning liability based on the control over the risks that safety rules aim to minimise. Conversely, liability rules provide an additional incentive to comply with safety rules and fundamental rights. Given these complementary functions of safety and liability rules, it would be more conducive to the coherence of the overall EU regulatory framework for AI if the relevant safety rules for AI-enabled technologies with a ‘strict liability profile’ were already in place at the time the harmonised strict liability regime for the use/operation of these technologies is laid down.

    These considerations lead to a coherence score of 3, assuming that it would be implemented by means of a binding legal instrument. The non-binding sub-option is given the same coherence score because, on the one hand, it ensures more flexibility and allows MS to avoid any inconsistencies in their national legal systems, but on the other hand, it would not be entirely coherent with the Commission’s overall AI policy, as explained in the context of Option 1 above.

    3.3.    Policy option 3

    Option 3 would insert itself without friction into the existing liability system (PLD and national rules), and be consistent also with the other strands of the Commission’s AI policy. The staged approach would allow the Commission to take stock of the practical effect of the planned adaptations to the PLD, and in particular the extension of product liability to providers of safety-relevant AI-systems, before deciding on the need to harmonise strict liability of users/operators. AI-specific requirements applicable to AI-systems used in technologies with a typical ‘strict liability profile’ (in particular transport vehicles and aircraft) are likely to be gradually integrated into the ‘old approach’ safety legislation in step with technological developments. Requirements of the AI Act (and also the Machinery Products Regulation) do not currently apply to such technologies. The targeted review mechanism for strict liability and mandatory insurance would allow the Commission to take into account regulatory developments at national and EU level over the following years, and thus to ensure coherence also with respect to future AI-related policy measures beyond the proposed AI Act. This may for instance concern future safety rules tailored to autonomous vehicles and highly autonomous AI-enabled drones, in line with the EU’s complementary approach to safety and liability rules. These considerations lead to a coherence score of 4 for policy option 3, assuming that it would be implemented by means of a binding legal instrument. The non-binding sub-option receives a coherence score of 3, for the same consideration as set out in the context of PO1 above.

    3.4.    Indirect environmental impacts

    As regards indirect environmental impacts, all policy options are expected to contribute – albeit to a non-quantifiable extent – to the uptake of AI applications that are beneficial for the environment. For instance, AI systems used in process optimisation make processes less wasteful (e.g. by reducing the amount of fertilizers and pesticides needed, decreasing the water consumption at equal output, etc.). AI systems supporting improved vehicle automation and traffic management contribute to the shift towards cooperative, connected and automated mobility, which in turn can support more efficient and multi-modal transport, lowering energy use and related emissions.

    3.5.    Comparative overview of coherence scores

    Policy Option      Coherence score
    Policy Option 1    4 (binding) / 3 (non-binding)
    Policy Option 2    3 (binding) / 3 (non-binding)
    Policy Option 3    4 (binding) / 3 (non-binding)

    4.    Proportionality

    4.1.    Policy option 1

    Option 1 is limited to the measures strictly necessary to address the AI-specific problems identified. In particular, it would not touch upon the substantive conditions of liability, like fault or causality, but would focus on targeted proof-related measures ensuring that victims have the same level of protection as in cases not involving AI. In this way, Option 1 is strictly aligned with the Commission’s targeted approach of ensuring that victims are not less, but also not more, protected due to the involvement of AI. Any shifts in the risk and cost distribution between affected stakeholders that would go beyond counter-balancing the specific proof-related challenges of AI would thus be avoided, and MS’ well-established liability systems would be respected to the maximum extent possible. However, while proportionality requires that the mildest option suitable to achieve the relevant objectives is chosen, there is presently still a degree of uncertainty as to whether the envisaged measures to ease the burden of proof will in all cases be sufficient to prevent compensation gaps. These considerations lead to an overall proportionality score of 4, assuming that PO1 would be implemented by means of a binding legal instrument. While a non-binding approach would interfere even less with the existing legal framework, it receives a lower proportionality score (2) because it is much less suitable to achieve the policy objectives.

    4.2.    Policy option 2

    Option 2 would in principle be suitable and effective to achieve the policy objectives. However, as there is still a certain level of uncertainty about whether, despite the alleviations of the burden of proof envisaged under Option 1, the absence of a harmonised strict liability regime for AI users would indeed entail liability gaps over the baseline period, the strict proportionality of Option 2 cannot be affirmed with certainty at this point in time. In particular, it cannot currently be confirmed with certainty that policy option 2 does not go beyond the minimum intervention necessary to address the relevant issues. Given the more far-reaching nature of a harmonised strict liability regime compared to measures easing the burden of proof, policy option 2 therefore receives a proportionality score of 3. The non-binding sub-option is scored consistently with PO1, as the same limitations apply.

    4.3.    Policy option 3

    Option 3 would keep the immediate measures to the strict minimum necessary to address the proof-related challenges posed by AI under national – primarily fault-based – liability rules, leaving other aspects to national law and the technology-neutral provisions of the PLD. At the same time, this option would ensure that future technological, regulatory and jurisprudential developments are systematically taken into account to verify the need to harmonise strict liability for certain uses of AI. The staged approach is thus aligned with the principle of proportionality to a very large extent, although it cannot be ruled out that the measures enacted at the first stage (alleviations of the burden of proof) will not in all cases be sufficient to achieve the policy objectives (proportionality score of 4). The non-binding sub-option is scored consistently with the other two policy options, as the same limitations apply.

    4.4.    Comparative overview of proportionality scores

    Policy Option      Proportionality score
    Policy Option 1    4 (binding) / 2 (non-binding)
    Policy Option 2    3 (binding) / 2 (non-binding)
    Policy Option 3    4 (binding) / 2 (non-binding)

    5.    Overall results of the multi-criteria analysis and comparison of options

    The following tables summarise the results of the impact analysis, and show how the options were compared following a multi-criteria analysis methodology. As the sub-option of implementing the policy measures through a non-binding instrument (recommendation) consistently scored lower across the IA criteria, it can be discarded for the purposes of this comparison. A binding legislative approach (most likely a Directive) is assumed for the comparison.

    5.1.    Simple aggregation of scores

    Firstly, the following impact matrix presents the scores using simple aggregation and assuming an equal weight of each individual criterion. As three out of six individual criteria would come under the umbrella of effectiveness, effectiveness is de facto given more importance than the other IA criteria.

    The assessment has not led to a preference between PO1 and PO3, which is consistent with the fact that they involve the same harmonising provisions at the present stage. The feature distinguishing these options, the targeted review mechanism, does not change the impacts of the measures to be implemented at this first stage. PO2 scores slightly lower in terms of its effectiveness in achieving specific objective 1 (increase legal certainty regarding liability for AI), coherence and proportionality, leading to a marginally lower overall score. This ranking is linked primarily to the fact that PO2 entails the need, at the present stage, to assess the risk profile and operational characteristics of specific types of AI-enabled products and services in order to determine and specify with a high degree of precision the products and services falling under the harmonised strict liability regime. As the relevant products and services have not yet reached market maturity and the conditions for their operation (e.g. the stakeholders in charge of ensuring human supervision, operational environments, safety constraints, etc.) are not yet known, this situation potentially engenders some uncertainty. This potential lack of certainty would be particularly pertinent if the harmonised strict liability regime were to be coupled with the obligation to ensure insurance coverage. Such an obligation acts as a market entry requirement. With regard to the policy objective of promoting the roll-out and take-up of AI-enabled products and services, it is therefore especially important that businesses can know with certainty whether they fall under such a regime or not.

    MCA (simple aggregation)

    Criterion                                  Score net of the baseline (scale of -5 to +5)
                                               Option 1    Option 2    Option 3
    Effectiveness
       Specific objective 1                    4           3           4
       Specific objective 2                    4           4           4
       Specific objective 3                    4           4           4
    Efficiency                                 4           4           4
    Coherence                                  4           3           4
    Proportionality                            4           3           4
    → Simple sum                               24          21          24
    → Ranking based on simple aggregation      1           2           1

    5.2.    Sensitivity analysis based on neutral weighting of impact assessment criteria

    Secondly, a weighted sum method was applied to analyse the sensitivity of the MCA. This sensitivity analysis neutralises the number of sub-criteria, giving equal weight to overall effectiveness, efficiency, coherence and proportionality.

    As shown in the following table, PO1 and PO3 are confirmed as the highest ranking options. As the sensitivity analysis shifts the weighting from effectiveness to the other IA criteria, the difference compared to PO2 is more distinct based on this method. This is due to the fact that PO2 scored slightly lower in terms of proportionality and coherence.

    MCA (based on an equal weight of the IA criteria)

    Criterion                                  Score net of the baseline (scale of -5 to +5)
                                               Option 1    Option 2    Option 3
    Effectiveness
       Specific objective 1                    4           3           4
       Specific objective 2                    4           4           4
       Specific objective 3                    4           4           4
    → Overall effectiveness                    12          11          12
    Efficiency                                 4           4           4
    Coherence                                  4           3           4
    Proportionality                            4           3           4
    → Simple sum                               24          21          24
    → Weighted sum of scores
      (= (effectiveness/3 + efficiency + coherence + proportionality) / 4)
                                               4           3,42        4
    → Ranking based on the weighted sum        1           3           1
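    For transparency, the weighted-sum aggregation used in the table above can be reproduced with the following minimal sketch (Python); the scores are simply those reported in the table.

# Reproduces the neutral weighting: (effectiveness/3 + efficiency + coherence + proportionality) / 4
scores = {  # [SO1, SO2, SO3, efficiency, coherence, proportionality]
    "PO1": [4, 4, 4, 4, 4, 4],
    "PO2": [3, 4, 4, 4, 3, 3],
    "PO3": [4, 4, 4, 4, 4, 4],
}
for option, (so1, so2, so3, eff, coh, prop) in scores.items():
    weighted = ((so1 + so2 + so3) / 3 + eff + coh + prop) / 4
    print(option, round(weighted, 2))   # PO1: 4.0, PO2: 3.42, PO3: 4.0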

    5.3.    Sensitivity analysis based on weighted scores

    Thirdly, further sensitivity analysis was done by attaching different weights to individual criteria. Given the political priorities of the Commission, as well as Member States’ tendency to minimise interference into their traditional civil law systems, a higher weight was attached to effectiveness, coherence and proportionality.

    As shown in the following table, the ranking based on the previous methods of comparison is confirmed. Options 1 and 3 are most suitable to achieve the Commission’s political objectives, while taking into account Member States’ emphasis on considerations of proportionality and coherence.

    MCA (based on a weighting taking into account political priorities)

    Criterion                        Weight    Score net of the baseline (scale of -5 to +5), multiplied by the weight
                                               Option 1    Option 2    Option 3
    Effectiveness
       Specific objective 1          0,1       0,4         0,3         0,4
       Specific objective 2          0,1       0,4         0,4         0,4
       Specific objective 3          0,1       0,4         0,4         0,4
    Efficiency                       0,1       0,4         0,4         0,4
    Coherence                        0,3       1,2         0,9         1,2
    Proportionality                  0,3       1,2         0,9         1,2
    → Sum of weighted scores                   4           3,3         4
    → Ranking based on the weighted scores     1           3           1
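    The priority-based weighting can be reproduced analogously (Python; weights and scores as in the table above).

# Reproduces the weighting reflecting political priorities: 0.1 for each effectiveness objective and
# for efficiency, 0.3 for coherence and proportionality.
weights = [0.1, 0.1, 0.1, 0.1, 0.3, 0.3]
scores = {  # [SO1, SO2, SO3, efficiency, coherence, proportionality]
    "PO1": [4, 4, 4, 4, 4, 4],
    "PO2": [3, 4, 4, 4, 3, 3],
    "PO3": [4, 4, 4, 4, 4, 4],
}
for option, option_scores in scores.items():
    weighted_sum = sum(w * s for w, s in zip(weights, option_scores))
    print(option, round(weighted_sum, 1))   # PO1: 4.0, PO2: 3.3, PO3: 4.0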

    5.4.    Overall conclusions on the ranking of options

    The multi-criteria analysis provides a clear ranking between PO1/PO3 and PO2, confirming that it is preferable not to lay down a harmonised strict liability regime for certain types of AI-enabled products and services at the present time, whether or not coupled with mandatory insurance. This result is also corroborated by the consideration that a clear majority of business stakeholders opposed strict liability in the public consultation. These stakeholders’ views are of particular relevance given the Commission’s policy objective to promote the roll-out of AI-enabled products and services.

    In order to inform the necessary political decision between PO1 and PO3, it is important to elaborate on other factors of relevance for that decision, and notably to consider how well these options take into account the suggestions of the European Parliament as well as stakeholder opinions. In this respect, firstly, only the staged approach (PO3) incorporates both main elements suggested by the Parliament – a facilitated burden of proof under fault-based liability rules and a limited strict liability regime for certain AI-enabled technologies – explicitly into the proposed legislative instrument. While the strict liability element is not yet implemented at the present stage, the targeted review mechanism provides a dedicated framework preparing the ground for the future policy decision on this element. Secondly, by explicitly acknowledging the possible need for a harmonised strict liability regime in the proposed legislative provisions, PO3 also satisfies to a greater extent the opinions expressed by non-business stakeholders, a large majority of whom supported the harmonisation of strict liability.

    If, at the stage of the targeted review, a political decision is taken to propose a harmonised strict liability regime, this measure could be designed in a way to meet concerns expressed by business stakeholders, which are at present sceptical towards it.

    In light of these considerations, the staged approach (PO3) is the most balanced, politically feasible, proportionate and yet effective option. It is most suitable to deliver the desired economic benefits in terms of roll-out of AI-enabled products and services in the internal market, and to increase citizens’ trust in AI by ensuring that victims who suffered harm caused with the involvement of AI systems enjoy the same level of protection as victims who suffered harm caused by other technologies. It is also most adapted to the political context of the AI liability initiative, including the Parliament’s legislative own-initiative resolution, and stakeholder feedback.



    Annex 11

    The economic impact of adapting civil liability rules to the specific challenges of AI

    The example of the European market for vacuum cleaners

    Bruno Carballa-Smichowski and Nestor Duch-Brown

    February 2022

    JRC Digital Economy Working Paper 2022-01

    1.    Introduction

    In its White Paper on artificial intelligence (AI), the Commission set the objectives of promoting the uptake of AI and addressing the risks associated with certain of its uses. As part of the latter, one of the specific objectives of the White Paper is to ensure the same level of protection for persons having suffered harm caused by AI systems and persons having suffered harm by other technologies. In the report on AI liability accompanying the White Paper, the Commission recognised specific challenges posed by AI to current liability rules. The Commission Work Programme 2020 foresaw a follow-up to the White Paper on AI in the form of new legislative initiatives, including on liability.

    Against this backdrop, DG JUST is proposing an initiative to adapt civil liability rules to the specific challenges posed by artificial intelligence (“the initiative” hereafter). The initiative details the problem stemming from a lack of intervention in the baseline scenario as well as three policy options and their respective expected effects.

    The objective of this report is to contribute to the impact assessment of the initiative by providing empirical evidence of the economic impacts of the main intervention common to all policy options: an alleviation of victims’ burden of proof (“the intervention” hereafter). In line with this purpose, the report focuses on the application of national fault-based liability rules to damage caused with the involvement of AI-systems. This scope is consistent with the other studies supporting the impact assessment of the initiative (Deloitte, 2021; Kantar, 2020). The report does in particular not cover the review of the Product Liability Directive, which is subject to a separate impact assessment addressing more general (i.e. not AI-specific) issues in relation to producers’ liability for damage caused by defective products.

    Focusing on the vacuum cleaners market, the report first estimates the baseline scenario, which captures the economic impact of two problems: legal uncertainty regarding the liability exposure of the operator and a lack of consumer trust in AI-enabled robot vacuum cleaners (RVC). Then, it simulates the impact of three expected effects of the intervention: an increase in demand due to increased legal certainty regarding liability exposure, an increase in demand due to increased consumer trust in AI-enabled products, and a re-distribution of compensation costs due to the prevention of AI-induced compensation gaps.

    The remainder of the report is structured as follows. Section 2 describes the economic impacts to be evaluated. Section 3 describes the data used to evaluate the impact of the intervention on the vacuum cleaners market. Section 4 explains the methodology used to estimate the demand for vacuum cleaners and, on that basis, to simulate the impact of the intervention on that market. Section 5 presents the results.

    2.    The impacts to be evaluated


    The initiative identifies a series of problems that arise in the absence of intervention (baseline scenario), proposes three policy options and describes their expected impacts. In this section, we briefly describe only those economic effects of the problems and policy measures that, given the time and data availability constraints, we evaluate in this report.

    Problem 1: legal (un)certainty

    AI poses two types of difficulties to the interpretation of liability rules. First, in the absence of AI-specific legislation on the matter, national courts might decide to either employ the traditional allocation of the burden of proof (which means that the burden to establish the liable party’s fault and the causal link between that fault and the damage is usually fully borne by the victim), or to adapt it to the specificities of AI. Second, it is difficult to establish the fault of a person when AI-enabled products are involved. This is because, in these cases, it is very difficult to determine the standard of care. Even if it is established, the specific characteristics of certain AI systems (autonomous behaviour, opacity/lack of transparency, complexity, limited predictability) make it highly uncertain whether and how the victim can identify and prove a behaviour that failed the objective standard. This generates legal uncertainty as to how existing liability regimes will be applied in the case of damage caused by AI-enabled products and services.

    Legal uncertainty is expected to have various negative economic impacts. For the purpose of this report, we focus on one: lower willingness to take up AI-enabled products and services. In the presence of legal uncertainty, businesses and consumers are expected to demand fewer AI-enabled products and services than in its absence.

    Problem 2: lack of consumer trust

    The initiative presents evidence of a current lack of consumer trust in AI-enabled products and services. This is rooted in two major concerns consumers have regarding these products and services: an expected low likelihood of receiving compensation if AI applications cause damage, and concerns regarding the allocation of responsibility and liability if something goes wrong. As a result, consumers’ demand for AI-enabled products and services is restrained.

    Intervention: easing victims’ burden of proof for AI-related claims

    All the policy options include targeted and risk-based measures to ease the victim’s burden of proof. The objective is a harmonised adaptation of Member States’ national rules that determine the possibility for injured parties to meet the burden of proof. This alleviation will consist of three measures:

    (I)Harmonised rules on the possibility for national courts to order the disclosure of relevant information to be recorded or documented pursuant to the AI Act. National courts would ensure confidentiality, proportionality and the protection of both parties’ legitimate interests (e.g. intellectual property rights, trade secrets). In the case of the refusal or inability to comply with an order to disclose the information, a rebuttable presumption of the facts that the victim sought to prove based on that information would apply.

    (II)A rebuttable presumption, for the purposes of fault-based claims under national law, of the causal link between the non-compliance with the applicable standard of care (=fault) and the damage for which compensation is sought, if the claimant shows that the defendant did not comply with AI Act requirements designed to prevent such damage.

    (III)An alleviation of the victim’s burden of proof, for the purposes of national civil liability claims, to ensure that the victim does not bear the burden of demonstrating how or why an AI-system reached a certain output. This could for instance be achieved through the legislative technique of a rebuttable presumption.

    These measures should provide legal certainty to businesses and consumers by clarifying how the proof-related challenges of AI are to be handled by national courts and the conditions for meeting the burden of proof. As a result, the valuation of AI-enabled products by firms and consumers should increase, which should boost their demand.

    The measures should also allow injured parties to use their existing civil liability claims effectively in cases involving AI. In that manner, they would tackle one of the roots of the lack of consumer trust: the low likelihood of compensation in case of AI-generated damage. This should result in an increase in demand for AI-enabled products. Note that a corollary effect of the intervention is that compensation for AI-caused damage born by the defendant should be more likely. We will come back to this point in Section 4.4.3. It is however appropriate to specify already at this stage that this result materialises in cases where the specific characteristics of certain AI systems would not have allowed the victim to prove the necessary facts under the baseline scenario. Only in these cases, the intervention would shift the cost of compensating the relevant damage from the victim to the liable person, increasing the latter’s liability exposure. Likewise, victims would be relieved of some of the costs linked to meeting the burden of proof (e.g. costs of expert analysis). These re-distribution effects are inherent in the Commission’s policy objective to ensure that victims of damage caused with the involvement of AI systems have the same level of protection as victims of damage caused by other technologies, and in general with the purpose of liability law. These impacts lead to a more efficient cost-allocation to the person best placed to prevent damage from occurring. Moreover, the potentially liable party is much more likely to have the necessary knowledge of the relevant AI systems in-house, and thus to discharge the burden of proof more efficiently without the need to procure external technical expertise. These re-distribution effects are therefore not regarded as an undesirable impacts or undue burden in the impact assessment.

    3.    Data


    We use panel data on the sales of household vacuum cleaners provided by GfK, one of the major providers of data and analytics on consumer goods. The data covers the years 2015-2019 for 6 European Union countries: Belgium, Germany, Greece, Ireland, Poland and Slovakia and includes all the brick-and-mortar and online sales channels. Each vacuum cleaner is described by its brand and model. The unit of observation is a vacuum cleaner (brand-model), country and year.

    For each observation, we observe quantity sold, total sales and several observable characteristics: type of vacuum cleaner (e.g. handstick, robot, etc.), maximal wattage, battery type, whether it has a control handle, whether it has an EPA filter, the motor position, the noise level, whether it has a power brush, the type of power supply, whether it has smart connect, whether it has special provisions for animals, the type of dust collector and the type of EPA filter, if any.

    Around 12% of the observations of the original dataset correspond to unidentified models. These are private labels sold by big retailers that were eliminated during the data cleaning process. During the latter, vacuum cleaners were classified into 5 categories (robot, cylinder, upright, handstick and handheld) and 9 subcategories based on their type. In order to reduce the computational burden of the estimation, only brands and models summing up to 99% of the sales were retained. Finally, we computed average prices across all sales channels and eliminated observations with outlier values, namely average prices below 25€ and above 1 000€. The cleaned dataset used for the estimation contains 28 373 observations.
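    For illustration, the cleaning steps described above could be implemented along the following lines (Python/pandas). The column names (brand, model, sales_value, q_total) and the file name are hypothetical placeholders; the snippet sketches the logic rather than reproducing the exact code used for this report.

import pandas as pd

# One row per vacuum cleaner (brand-model), country and year.
df = pd.read_csv("gfk_vacuum_cleaners.csv")

# Drop unidentified private-label models and compute average prices across all sales channels.
df = df[df["model"] != "UNIDENTIFIED"]
df["aprice"] = df["sales_value"] / df["q_total"]

# Remove price outliers (average prices below EUR 25 or above EUR 1 000).
df = df[(df["aprice"] >= 25) & (df["aprice"] <= 1000)]

# Retain only the brands and models that jointly account for 99% of total quantities sold.
sales_by_model = df.groupby(["brand", "model"])["q_total"].sum().sort_values(ascending=False)
cumulative_share = sales_by_model.cumsum() / sales_by_model.sum()
kept_models = set(cumulative_share[cumulative_share <= 0.99].index)
df = df[df.set_index(["brand", "model"]).index.isin(kept_models)]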

    Table 1 shows the share of robot vacuum cleaners in total quantities by country and year.

    Table 1: Percentage of robot vacuum cleaners over total quantities of vacuum cleaners sold

    Year    Belgium    Germany    Greece    Ireland    Poland    Slovakia
    2015    8.9%       6.9%       1.0%      1.3%       6.9%      7.3%
    2016    8.4%       7.6%       0.8%      1.0%       7.3%      8.5%
    2017    9.1%       8.5%       1.1%      1.3%       8.4%      8.6%
    2018    9.9%       10.4%      1.9%      1.8%       9.6%      9.4%
    2019    10.8%      10.9%      3.5%      1.1%       12.0%     11.8%

    As Table 1 shows, the penetration of robot vacuum cleaners increased over the 2015-2019 period in all the countries studied. This confirms the increasing relevance of artificial intelligence in the vacuum cleaners market and hence of the potential third-party liability issues addressed by the initiative.

    Table 2 below provides summary statistics of the variables retained for the demand estimation described in Section 4.

    Table 2: Summary statistics of the variables retained in the nested logit model

    Variable      Mean     Std. Dev.    Min      Max
    aprice        173      144          25       1000
    q_total       2.04     7.68         0.001    292.164
    control       0.05     0.21         0        1
    motor         0.11     0.32         0        1
    powerbrush    0.11     0.31         0        1
    rech          0.31     0.46         0        1
    smart         0.02     0.15         0        1
    animal        0.12     0.32         0        1
    bagless       0.44     0.50         0        1
    epa           0.40     0.49         0        1
    robot         0.08     0.27         0        1



    NB: “aprice” stands for “average price” and “q_total” for “total quantities”. Variables “control”, “motor”, “powerbrush”, “rech”, “smart”, “animal”, “bagless”, “epa” and “robot” are dummy variables describing technical characteristics.

    The two main limitations of the dataset stem from its nature. First, none of the policy options involves strict liability for household robot vacuum cleaners. Hence, we can only analyse the impact of the measures aimed at easing victims’ burden of proof under fault-based liability rules (which are common to all policy options), but not that of the application of strict liability, which is one of the measures forming part of option 2. The dataset therefore does not allow us to analyse the different impact of each policy option, only that of any policy option in comparison to the baseline scenario. Second, given that the dataset refers to household vacuum cleaners (as opposed to industrial vacuum cleaners used by professional service providers), we can only estimate households’ demand. This circumscribes the analysis to cases implying lower risks of third-party liability than those faced by professional service providers. However, these risks are not non-existent, as illustrated in Section 4.1.

    The richness of the database, in terms of its temporal and country coverage, its exhaustiveness and the technical characteristics of the vacuum cleaners covered, allows us to circumscribe the baseline scenario with high precision, which is the cornerstone of any impact analysis of an intervention.

    4.    Methodology

    This section presents the methodology used to estimate demand for the baseline scenario and to build the counterfactuals used to assess the impact of the intervention.

    Note that, regarding the relevant liability regimes, the analysis for which the methodology is described in this section focuses on issues and policy solutions concerning the application of national fault-based liability rules to damage caused with the involvement of AI-systems. It is designed to complement the studies supporting the impact assessment on AI Liability (Deloitte, 2021; Kantar, 2020), which had the same scope. Therefore, it does in particular not cover the Product Liability Directive, which harmonises producers’ liability based on defects in their products.

    4.1.    Setting


    We consider a three-agent setting summarized in the figure below.

    Figure 1: The setting analyzed

    In this setting, final consumers can expose third parties to three types of damages by operating AI-enabled robot vacuum cleaners:

    ·Harm to persons (e.g. stuck hair, harming a toddler, electrocution, stumbling, etc.) 417

    ·Harm to property (e.g., damaging pets, damaging jewellery, damaging electronics, catching fire, etc.)

    ·Cybersecurity threats (exposure of personal data collected by the RVC)

    Our dataset allows us to identify two of these parties: robotic vacuum cleaner producers (which we assume to be the same as sellers in Figure 1 for the sake of simplicity) and final consumers (households buying robotic vacuum cleaners). Given that the dataset concerns risks posed to third parties by household RVC, the scope of the risks to which the intervention would apply is circumscribed to situations in which a non-member of the household is exposed to any of the above-mentioned risks while present in the household.

    4.2.    The choice of a nested logit model


    In order to simulate different scenarios corresponding to the policy options, we first estimate the demand for vacuum cleaners using a structural demand estimation model. Structural demand estimation models allow us to estimate the demand for a product in a market and, using the results of that estimation, to obtain firms’ marginal costs (i.e. the cost of producing an additional unit of the good). Using a structural demand estimation model, we can therefore produce an empirical description of how consumers and producers behave in a market (in this case, the market for vacuum cleaners) and use it to simulate scenarios in which consumer preferences or producers’ costs change (i.e. “counterfactuals”), thereby obtaining new (hypothetical) quantities sold and prices. Hence, after estimating the demand and marginal costs we observe in the vacuum cleaners market today, we apply counterfactuals to simulate the impacts of the baseline scenario and of each policy option on the vacuum cleaners market.

    Within the family of demand estimation models, discrete choice models are particularly suited to analysing the vacuum cleaners market. In these models, a product is considered as a bundle of characteristics. For example, a specific robot vacuum cleaner is a bundle of the characteristics “robot”, “EPA filter”, “with power brush”, etc. These models take into account heterogeneous consumers (i.e., consumers with different tastes for the same product or bundle of characteristics and, hence, different price sensitivities to each product) that buy one out of many differentiated products (i.e. products that vary in their characteristics but fulfil the same need), which they can choose from within a period that defines the market (typically a year). In this respect, this type of model is suited to estimating consumers’ demand for vacuum cleaners. The latter are differentiated products (different types of vacuum cleaners, different noise levels, with or without smart connect, etc.) and durable goods that consumers typically buy at most once a year. Moreover, given the existence of multiple types of vacuum cleaners in the market and the different needs and budgets consumers have, a model accounting for consumer heterogeneity is fit for the analysis of the vacuum cleaners market. In other words, it is realistic to think that different consumers value a given characteristic (e.g. the vacuum cleaner being small) differently, and hence are willing to pay different amounts of money for that characteristic.

    Discrete choice models consider a utility-maximizing consumer and calculate the probability of a consumer choosing a particular good (e.g., a robot vacuum cleaner) over other alternative goods in the market (e.g., a traditional vacuum cleaner), including an “all other goods” choice. Then, they equate the observed market shares to the calculated probability. This means that what we observe in the market as the market share of a product is the result of a discrete choice (i.e., choosing either one good or another, but not more than one, within typically a year) of consuming a particular good (e.g. a robot vacuum cleaner with certain characteristics) over other goods he/she could have consumed with the same income. On the basis of that identity between the calculated probability of buying a good and the observed market share, these models use data on the latter to estimate several metrics, and in particular:

    ·Own price elasticity of a given product (i.e., by what percentage would the quantities sold of a given vacuum cleaner model decrease if its price increased by 1%)

    ·Cross-price elasticity of a given product (i.e., by what percentage would the quantities sold of a given vacuum cleaner vary if the price of another model increased by 1%)

    ·The marginal cost of a given product (i.e., the cost of producing an additional unit of a given vacuum cleaner model)

    These metrics will be used to produce the above-mentioned counterfactuals that we generate to estimate the impact of the different policy options and the baseline scenario.
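    For reference, under the one-level nested logit specification presented in Section 4.3 below (with price coefficient α, nesting parameter σ, market share s_j and within-group share s_j|g), these elasticities take the standard closed forms shown below; this is a sketch of the textbook expressions rather than a statement of the exact implementation used for this report. Marginal costs are then typically recovered by inverting the firms’ pricing first-order conditions.

\[
\eta_{jj} = \frac{\partial s_j}{\partial p_j}\,\frac{p_j}{s_j} = -\alpha\, p_j\left(\frac{1}{1-\sigma} - \frac{\sigma}{1-\sigma}\, s_{j|g} - s_j\right)
\]
\[
\eta_{jk} = \alpha\, p_k\left(\frac{\sigma}{1-\sigma}\, s_{k|g} + s_k\right)\ \text{(same nest)}, \qquad
\eta_{jk} = \alpha\, p_k\, s_k\ \text{(different nests)}
\]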

    The particularity of the nested logit model within the discrete choice models family is that it assumes that consumers consider some products to be closer substitutes for each other than other products. For example, it might be the case that, when buying a vacuum cleaner, consumers decide first between buying a robot vacuum cleaner or a traditional one, and then decide on which power level to choose. This is referred to as the “nesting structure”. In this example, there would be 2 nests: the “robot vs non-robot” nest and the “power” nest. Given that the observed vacuum cleaner models vary considerably in their characteristics, a nested logit model seems the most appropriate one.

    4.3.    Model specification, estimation and instruments


    In this section, we present the specification of the nested logit model estimated.

    4.3.1.    The model



    We use a nested logit model as described by Berry (1994). We specify the model as a one-level nested logit model (Berry, 1994) in which consumers choose first which category of vacuum cleaner to buy, and then the specific product (brand-model). We build the following 5 categories on the basis of the variable type: robot, cylinder, upright, handstick and handheld.

    Consumers choose among J differentiated products (vacuum cleaners), j = 1, …, J, and an outside good j = 0 representing the decision of not buying any. The indirect utility of an individual i for product j is given by:

        u_ij = δ_j + ζ_ig + (1 − σ) ε_ij

    Where u_ij designates the utility obtained by consumer i from consuming good j. Variable ε_ij is an individual-specific valuation of product j. Variable ζ_ig is a group-g-specific valuation of the goods that are part of group g, where each group g corresponds to one of the 5 above-mentioned categories of vacuum cleaners. Parameter σ, with 0 ≤ σ ≤ 1, represents the degree of preference correlation for products of the same category. The first part of the equation, δ_j, represents the mean utility of consuming good j that is common to all consumers, which is defined as:

        δ_j = x_j β − α p_j + ξ_j

    The mean utility of consuming product j depends on three variables. First, variable x_j, which is a vector of the product characteristics affecting consumers’ utility, namely: whether it has a control handle, the motor position, whether it has a power brush, whether it is rechargeable, whether it has smart connect, whether it has special provisions for animals, whether it is bagless, whether it has an EPA filter and whether it is a robot vacuum cleaner. Second, the observed price p_j. Third, an unobserved quality of the product, ξ_j. Parameters β and α are unobserved and represent how sensitive the mean utility is to each of the product characteristics and to the product’s price, respectively.

    Following Berry (1994), the estimation equation is:

        ln(s_j) − ln(s_0) = x_j β − α p_j + σ ln(s_j|g) + ξ_j        (1)

    Where s_j is the market share of product j and s_0 is the share of the outside good, so that s_j/s_0 is equal to the quantities sold of good j divided by the quantities corresponding to the outside good (i.e., market size minus total quantities sold of all vacuum cleaners). Each market is defined as a combination of country and year. This gives us a total of 30 markets. In order to calculate the market size, we considered that, in each market, 20% of the population of the country decides whether to buy a certain vacuum cleaner or the outside good. The variable s_j|g represents the market share of product j within group g. It is calculated as the quantities sold of product j (a vacuum cleaner model-brand) divided by the quantities sold for group g (the category of vacuum cleaner).
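    As an illustration of how the variables entering equation (1) can be constructed from the sales data described in Section 3, the following sketch (Python/pandas) computes market shares, the outside-good share and within-group shares. The column names (q_total, country, year, type) and the population file are hypothetical placeholders; this is a simplified illustration, not the code used for this report.

import numpy as np
import pandas as pd

# df: one row per vacuum cleaner (brand-model), country and year, with quantities sold (q_total)
# and the category of vacuum cleaner (type). Quantities and population must be in the same unit.
df = pd.read_csv("vacuum_cleaners_clean.csv")
population = pd.read_csv("population.csv")          # columns: country, year, population

df = df.merge(population, on=["country", "year"])
df["market_size"] = 0.20 * df["population"]          # 20% of the population per country-year market

market = ["country", "year"]
df["s_j"] = df["q_total"] / df["market_size"]                                    # product market share
df["s_0"] = 1 - df.groupby(market)["s_j"].transform("sum")                       # outside-good share
df["s_j_g"] = df["q_total"] / df.groupby(market + ["type"])["q_total"].transform("sum")  # within-group share

# Dependent variable and within-group share regressor of the Berry (1994) estimation equation (1).
df["y"] = np.log(df["s_j"]) - np.log(df["s_0"])
df["ln_s_j_g"] = np.log(df["s_j_g"])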

    4.3.2.    Estimation and instruments


    Using the dataset described in Section 3, we estimate equation (1) using the generalized method of moments. In order to improve the quality of the estimation, we include a series of fixed effects and instrument the variables p_j and ln(s_j|g).

    We create vectors of dummy variables to control for market (i.e., country-year combinations), vacuum cleaner model (variable model) and type of vacuum cleaner (variable type) fixed effects, which we label γ_market, γ_model and γ_type. Given that prices and within-group market shares are endogenous and correlated with the unobserved quality ξ_j, we use BLP instruments (Berry, Levinsohn & Pakes 1995). For each of the 9 characteristics affecting the indirect utility for product j and represented by the vector x_j, we create 7 instrumental variables by computing the counts of product characteristics and the sums of product characteristics’ values of each firm, first, and of its competitors, second, in each case overall, by brand and by nest. This results in a total of 63 instrumental variables.

    Hence, the estimation equation becomes:

        ln(s_j) − ln(s_0) = x_j β − α p_j^IV + σ ln(s_j|g^IV) + γ_market + γ_model + γ_type + ξ_j        (2)

    Where p_j^IV is the instrumented price variable and s_j|g^IV is the instrumented within-group market share variable.
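    A minimal sketch of how instruments of this kind can be constructed from the characteristics data is given below (Python/pandas). It builds only two of the seven instrument types per characteristic (competitors' sums overall and by nest); the column names firm and type are hypothetical placeholders, and the snippet illustrates the counts-and-sums logic described above rather than the exact construction used for this report.

import pandas as pd

CHARACTERISTICS = ["control", "motor", "powerbrush", "rech", "smart", "animal", "bagless", "epa", "robot"]
MARKET = ["country", "year"]

def blp_style_instruments(df: pd.DataFrame) -> pd.DataFrame:
    """Add sums of rival products' characteristics per market, overall and within the same nest."""
    out = df.copy()
    for char in CHARACTERISTICS:
        # Competitors' sum overall: total over the market minus the products of the same firm.
        total = out.groupby(MARKET)[char].transform("sum")
        own_firm = out.groupby(MARKET + ["firm"])[char].transform("sum")
        out[f"iv_rivals_{char}"] = total - own_firm
        # Competitors' sum restricted to products in the same nest (variable 'type').
        total_nest = out.groupby(MARKET + ["type"])[char].transform("sum")
        own_firm_nest = out.groupby(MARKET + ["type", "firm"])[char].transform("sum")
        out[f"iv_rivals_nest_{char}"] = total_nest - own_firm_nest
    return out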

    After estimating the model using equation (2), we compute the resulting total quantities of vacuum cleaners sold, consumer surplus and firms’ revenues and profits. This is our baseline scenario. We then re-run the simulation three times for three counterfactual scenarios. For each counterfactual scenario, we compute the resulting total quantities of vacuum cleaners sold, consumer surplus and firms’ revenues and profits and compare them to the baseline results. The next subsection details how we design and calibrate the counterfactuals. Section 5 shows the results of the comparison between the counterfactual scenarios and the baseline scenario.

    4.4.    Design and calibration of the counterfactuals


    The results of the estimation described in Section 4.3 correspond to the baseline scenario. Currently, in the absence of policy intervention, we expect legal uncertainty regarding liability risks and a related lack of consumer trust to exist with respect to the use of AI-enabled products in particular, which negatively affects consumers’ valuation of robot vacuum cleaners relative to non-robot vacuum cleaners. In order to assess the economic impact of the intervention, we design counterfactuals translating the economic impacts of an alleviation of victims’ burden of proof, including the impacts of the consequent increases in legal certainty and consumer trust. Figure 2 below provides a summary of how the intervention (i.e. an alleviation of victims’ burden of proof) generates effects that are translated into the economic impacts simulated in the counterfactuals (in orange): higher demand for RVC and a higher expected cost of compensation paid by RVC users.

    Figure 2: The estimated economic impacts of the intervention

    NB: “RVC” stands for “robot vacuum cleaners”. Orange-filled rectangles correspond to the economic effects estimated through counterfactuals.

    The remainder of this section details how each of these counterfactuals was designed and calibrated. Given the uncertainty over the magnitude of the impacts, we considered a “high” and a “low” scenario for each of them.

    4.4.1.    Increased legal certainty


    If consumers face an increase in legal certainty regarding their liability exposure in case of third-party damage caused by the RVC they own, their valuation of these products relative to other, non-robot vacuum cleaners should increase. With legal certainty, consumers know that, in the case of an accident caused by an RVC, the legal framework will be clear as to which party should be held liable and what they might be required to prove in civil proceedings, and hence they have more incentive to buy one. In terms of the model described in Section 4.3, this translates into an increase in the δ parameter of RVC, which expresses consumers’ mean utility or willingness to pay for those products. We therefore simulate the “increased legal certainty” effect as an increase in the δ parameters corresponding to each market.

    In order to provide a magnitude to this increase, we use data from the behavioural study produced to support the initiative (Kantar, 2020). The study methodology includes an online survey and a survey-based online experiment. The survey was carried out on representative samples of the adult population from eight countries (Denmark, Netherlands, Ireland, France, Germany, Italy, Poland, Romania), with a sample size of minimum 1 000 respondents per country, amounting to a total of 8 079 respondents across the eight countries. Following a between and within subject design, the study tested the effect of alternative liability regimes on consumer behaviour towards AI applications. The latter is focused on three AI-enabled products: smart lawnmowers, smart irrigation systems and grocery-carrying robots. Using the results of the survey and the experiment, the study finds that consumers’ willingness to pay for a smart lawnmower increases by 6% if it contains third-party civil liability insurance.

    This is the product in the study most similar to robot vacuum cleaners, both in terms of its use and, more importantly, the type of damage it can cause. We therefore use the results on this product as a proxy to approximate the results corresponding to RVC. Moreover, we use the willingness to pay attributable to third-party civil liability insurance (+6%) as a proxy for consumers’ valuation of legal certainty. By stating how much they value this (currently non-existent) insurance, consumers are putting a value on the legal certainty that the existence of such insurance would give them. In case of an accident, they would be certain that the legal framework would be clear regarding the distribution of liability. Given that lawnmowers contain blades that could do harm to property and persons that an RVC cannot, we take this average 6% increase in valuation as the high scenario of the impact of increased legal certainty on consumers’ willingness to pay for RVC. In the absence of additional data, we take a 100% spread between the high and the low scenario, so that the increase in consumers’ willingness to pay due to an increase in legal certainty in the low scenario is 3%. Hence, our counterfactual δ parameters are the following:

        δ_RVC^LC,high = 1.06 × δ_RVC
        δ_RVC^LC,low = 1.03 × δ_RVC

    Where δ_RVC is the vector containing the δ parameters (i.e. consumers’ valuation) corresponding to RVC, δ_RVC^LC,high is the vector containing the δ parameters corresponding to RVC in the high “increased legal certainty” scenario and δ_RVC^LC,low is the vector containing the δ parameters corresponding to RVC in the low “increased legal certainty” scenario.
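    The following sketch (Python/NumPy) illustrates how scaled δ parameters of this kind can be fed back into the one-level nested logit share equations to recompute counterfactual market shares. The function implements the standard nested logit share formula under the notation of Section 4.3; the input arrays (delta, nest_id, is_robot) and the value of σ are placeholders for the estimated objects, not figures from this report.

import numpy as np

def nested_logit_shares(delta, nest_id, sigma):
    """Market shares implied by a one-level nested logit; the outside good has mean utility 0."""
    delta = np.asarray(delta, dtype=float)
    expu = np.exp(delta / (1.0 - sigma))
    nests = np.unique(nest_id)
    D = {g: expu[nest_id == g].sum() for g in nests}              # inclusive value per nest
    denom = 1.0 + sum(d ** (1.0 - sigma) for d in D.values())     # the 1.0 accounts for the outside good
    shares = np.empty_like(delta)
    for g in nests:
        in_g = nest_id == g
        shares[in_g] = (expu[in_g] / D[g]) * (D[g] ** (1.0 - sigma) / denom)   # s_j|g * s_g
    return shares

# Placeholder inputs (in practice: estimated mean utilities, nest identifiers and sigma).
delta = np.array([1.2, 0.8, 0.5, 0.3])
nest_id = np.array(["robot", "robot", "cylinder", "cylinder"])
is_robot = nest_id == "robot"
sigma = 0.5

# High "increased legal certainty" scenario: +6% on the mean utility of robot vacuum cleaners.
delta_cf = delta.copy()
delta_cf[is_robot] *= 1.06
shares_cf = nested_logit_shares(delta_cf, nest_id, sigma)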

    4.4.2.    Increased consumer trust


    This effect is simulated in a similar manner to the effect of increased legal certainty. Consumer trust refers to “consumers’ trust in services, products, or, latently, providers and manufacturers to deliver against quality expectations” (Kantar, 2020, p.13). The above-mentioned study built a consumer trust index on the basis of consumers’ answers to a question linking the level of trust to the need for regulation. However, when analysing the impact of consumers’ trust index on their willingness to pay for AI-enabled products, it finds that, counterintuitively, the higher the trust, the lower the willingness to pay. According to the study, this is because consumers associate more trustworthy AI-enabled products with more expensive products they cannot afford. In other words, the results contain a mix of two effects that cannot be dissociated: higher consumer trust (which should lead to a higher valuation of the product) and price elasticity of demand (consumers expect these products to be more expensive). Given that the latter is already taken into account in the nested logit model used, we cannot employ the results using the consumer trust index.

    We therefore use the results for a related concept that is correlated with consumer trust: the likelihood of compensation 418 . The study shows that consumers trust AI-enabled products more if, in case of an accident, the likelihood of compensation is high. This is consistent with the very definition of consumer trust, as providing compensation in case of an accident is a way of delivering against quality expectations. Moreover, the intervention analysed (the alleviation of victims’ burden of proof) would tackle precisely the problem of a low likelihood of compensation in order to generate more consumer trust. Hence, this variable is well suited to simulate the impact of an increase in consumer trust as a result of the intervention.

    The behavioural study runs two models to test the impact of an increase in the likelihood of compensation on the willingness to pay for a smart lawnmower: a generalized linear model and a linear mixed effects model. In each of these models, the coefficient for the variable “likelihood of compensation” indicates by how much the willingness to pay increases when the likelihood of compensation rises. Given that lawnmowers generate higher risks than RVC, we take the lower bound of the confidence interval of the estimated coefficients and express it as a percentage of the baseline valuation given to lawnmowers. For the generalized linear model, this percentage is equal to 4% and for the linear mixed effects model to 6%. Hence, we use the former as the low scenario and the latter as the high scenario to estimate the variation in consumers’ willingness to pay for a RVC as a result of an increase in consumer trust.
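    Schematically, and using notation introduced here purely for illustration (β^low for the lower bound of each model’s confidence interval for the “likelihood of compensation” coefficient, WTP_0 for the baseline valuation of the smart lawnmower), the two scenario percentages correspond to:

    \[ \frac{\beta^{low}_{GLM}}{WTP_0} \approx 4\% \ \text{(low scenario)}, \qquad \frac{\beta^{low}_{LME}}{WTP_0} \approx 6\% \ \text{(high scenario)} \]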

    As in the case of increased legal certainty, we use the above-mentioned percentages to generate two counterfactual δ parameters:

    δ_RVC(high) = 1.06 · δ_RVC        δ_RVC(low) = 1.04 · δ_RVC

    where δ_RVC is the vector containing the δ parameters (i.e. consumers’ valuation) corresponding to RVC, δ_RVC(high) is the vector containing the δ parameters corresponding to RVC in the high “increased consumer trust” scenario and δ_RVC(low) is the vector containing the δ parameters corresponding to RVC in the low “increased consumer trust” scenario.

    4.4.3.    Re-distribution of compensation costs due to the prevention of AI-induced liability gaps

    All policy options consider an alleviation of victims’ burden of proof. As explained in Section 2, the alleviation of victims’ burden of proof is the reason why there should be an increase in legal certainty and consumer trust. Moreover, the alleviation of victims’ burden of proof should make it more likely that victims obtain compensation under national liability rules for damage caused by AI-enabled products or services. Conversely, users/operators, who in our setting are the consumers buying RVC, would be more likely to face the cost of the compensation. The alleviation of victims’ burden of proof therefore constitutes an additional expected cost for rational consumers, who take into account both the price of the product (the RVC) and the related costs of ownership, in this case the cost of a possible compensation. In terms of the nested logit model estimated, this can be simulated as a counterfactual in which the price of RVC increases by the expected compensation cost, the latter being equal to the probability of an accident happening multiplied by the average compensation.

    Note that, as stated by insurance industry representatives during exchanges with DG JUST, it is likely that damage to third parties generated by an RVC owned by an individual will be covered by existing general liability insurance policies, provided that the individual is insured. However, when estimating the effects of a re-distribution of compensation costs, we consider that RVC consumers are not insured. Although it is likely that existing general liability insurance policies will cover such damage, we cannot be sure this will be the case in the absence of an intervention. Hence, given that the re-distribution of compensation costs has a negative effect on (robot) vacuum cleaner demand (although a positive effect for the victims), we opt for a prudent methodological choice that likely underestimates the positive impact of the initiative on the vacuum cleaners market.

    In order to calculate the magnitude of the “re-distribution of compensation costs” effect, we use data from the United States 419 generated by the Insurance Information Institute on the basis of Insurance Services Office data. The data refers to average observed homeowners insurance losses for the period 2015-2019, which matches the timespan of the vacuum cleaners sales dataset. In particular, we focus on the liability subcategory of losses defined as “bodily injury and property damage”, which corresponds to the type of damage that RVC could cause. For this category and period, the claim frequency is 0.07/100 and the average claim is 29 752 USD (26 219 €). The expected loss is therefore equivalent to 18 € per year.

    Given that these figures cover all claims related to bodily injury and property damage to third parties, they are not circumscribed to RVC, although we can expect claims related to these and other AI-enabled products used domestically to increase with their (increasing) use. Hence, we consider that, in the high scenario, the percentage of RVC-related claims over total claims is 10%, so that the claim frequency becomes 0.1 x 0.07/100 and the expected loss 1.84 € per year. In the low scenario, we consider that the percentage of RVC-related claims over total claims is 5%, and hence the expected loss is 0.92 € per year. Therefore, we define the counterfactuals for the effect of an alleviation of victims’ burden of proof on expected compensation costs as hypothetical RVC price increases equal to the expected compensation cost per year in each scenario.
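    As a plain-arithmetic cross-check (our illustration; the figures are those cited above and the variable names are ours), the expected-loss values can be reproduced as follows:

```python
# Cross-check of the expected compensation cost figures cited above (illustrative sketch).
claim_frequency = 0.07 / 100      # 0.07/100, i.e. 0.0007 liability claims per insured home per year (III/ISO, 2015-2019)
average_claim_eur = 26_219        # average "bodily injury and property damage" claim, in EUR

expected_loss = claim_frequency * average_claim_eur
print(f"Expected loss, all liability claims: {expected_loss:.2f} EUR per year")  # ~18.35, i.e. ~18 EUR

# Assumed shares of claims attributable to robot vacuum cleaners (RVC)
for scenario, rvc_share in {"high": 0.10, "low": 0.05}.items():
    print(f"Expected RVC-related loss ({scenario} scenario): "
          f"{rvc_share * expected_loss:.2f} EUR per year")
# high: ~1.84 EUR per year; low: ~0.92 EUR per year, as used in the counterfactuals
```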

    5.    Results


    In this section, we present the results of the three effects studied in terms of changes in consumer surplus, profits, revenues and quantities in the vacuum cleaners market. The figures for the 6 countries studied result from the model described in Section 4.3. Values for the EU-27 are computed by extrapolating the model results for the 6 countries studied. Hence, EU-27 values should be taken as rough approximations. 420  

    Table 3 below presents the variation in consumer surplus, profits, revenues and quantities sold for all vacuum cleaners stemming from the combined effect of the three counterfactuals. The table reflects the sum of all the effects studied that stem from an alleviation of victims’ burden of proof: increased legal certainty, increased consumer trust and a re-distribution of compensation costs due to the prevention of AI-induced liability gaps.
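    The estimated nested logit model itself is described in Section 4.3 and is not reproduced here. Purely as an illustration of the mechanics, the following simplified sketch uses a plain (non-nested) logit with hypothetical parameters to show how the counterfactual changes simulated above (a higher δ for RVC and a price increase equal to the expected compensation cost) translate into changes in market shares and per-consumer surplus:

```python
import numpy as np

# Illustrative plain-logit sketch with hypothetical parameters.
# This is NOT the nested logit estimated in Section 4.3; it only shows the mechanics
# of feeding counterfactual delta parameters and prices into a discrete-choice model.

def shares_and_surplus(delta, prices, alpha):
    """Market shares and per-consumer surplus in a plain logit with an
    outside option whose utility is normalised to zero."""
    utility = delta - alpha * prices
    expu = np.exp(utility)
    denom = 1.0 + expu.sum()
    shares = expu / denom
    surplus = np.log(denom) / alpha  # standard logit consumer-surplus formula
    return shares, surplus

alpha = 0.004                              # hypothetical price sensitivity
prices = np.array([250.0, 120.0])          # hypothetical prices: [RVC, non-robot vacuum]
delta = np.array([1.5, 1.2])               # hypothetical baseline valuations (delta)

s0, cs0 = shares_and_surplus(delta, prices, alpha)

# Counterfactual: +6% valuation of RVC (high scenario) and a price increase of
# 1.84 EUR (expected compensation cost, high scenario).
delta_cf = delta.copy()
delta_cf[0] *= 1.06
prices_cf = prices.copy()
prices_cf[0] += 1.84

s1, cs1 = shares_and_surplus(delta_cf, prices_cf, alpha)

print(f"Change in RVC market share: {s1[0] - s0[0]:+.4f}")
print(f"Change in per-consumer surplus: {cs1 - cs0:+.2f} EUR")
```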

    Table 3: Impact of the intervention on consumer surplus, profits, revenues and quantities sold (in thousands of euros/units)

    *Profits and revenues correspond to vacuum cleaner sellers and the third-party victims benefiting from compensation.

    Table 3 shows that, for the six countries studied, the overall impact of a policy intervention that alleviates victims’ burden of proof is an increase in consumer welfare that ranges from 4 185 000 € to 6 959 000 € depending on the scenario considered. For the EU-27, consumer welfare would increase by between 11 500 000 € and 19 124 000 €, depending on the scenario considered. Total welfare (i.e., the sum of vacuum cleaner consumers’ surplus, vacuum cleaner sellers’ profits and the compensation received by third-party victims), in turn, would increase by between 10 957 000 € and 19 556 000 € for the 6 countries studied and between 30 111 000 € and 53 742 000 € for the EU-27, depending on the scenario considered. These results indicate that the intervention would have a market expansion effect: the increase in demand explained by ‘new’ demand for RVC would be greater than the decrease in demand for non-robot vacuum cleaners coming from consumers replacing the latter with RVC. As a result, we observe an expansion both in consumer welfare and in firms’ profits. The intervention should therefore benefit both consumers and sellers of vacuum cleaners.

    Tables 4, 5 and 6 below disaggregate the contribution of the three effects that are conflated in Table 3: increased legal certainty, increased consumer trust and a re-distribution of compensation costs due to the prevention of AI-induced liability gaps, respectively.

    Table 4: Impact of an increase in legal certainty on consumer surplus, profits, revenues and quantities sold (in thousands of euros/units)

    Table 4 shows that an increase in legal certainty would make consumer welfare rise by between 2 149 000 € and 4 311 000 € for the 6 countries studied and between 5 905 000 € and 11 847 000 € for the EU-27, depending on the scenario considered. Profits, in turn, would increase by between 1 414 000 € and 2 837 000 € for the 6 countries studied and between 3 887 000 € and 7 797 000 € for the EU-27, depending on the scenario considered. Given the market expansion effect observed, an increase in legal certainty should benefit both consumers and sellers of vacuum cleaners.

    Table 5: Impact of an increase in consumer trust on consumer surplus, profits, revenues and quantities sold (in thousands of euros/units)

    The impact of increased consumer trust is similar to that of an increase in legal certainty. The impact on consumer welfare would be between 2 868 000 € and 4 311 000 € for the 6 countries studied and between 7 882 000 € and 11 847 000 € for the EU-27, depending on the scenario considered. Profits, in turn, would increase by between 1 888 000 € and 2 837 000 € for the 6 countries studied and between 5 188 000 € and 7 797 000 € for the EU-27, depending on the scenario considered. As in the case of an increase in legal certainty, we observe a market expansion effect. Hence, an increase in consumer trust should benefit both consumers and sellers of robot vacuum cleaners.

    Table 6: Impact of a redistribution of compensation costs on consumer surplus, profits, revenues and quantities sold (in thousands of euros/units)

    *Profits and revenues correspond to vacuum cleaner sellers and the third-party victims benefiting from compensation.

    Table 6 shows that a redistribution of compensation costs from third-party victims to consumers of robot vacuum cleaners makes consumer welfare in the vacuum cleaners market decrease by between 832 000 € and 1 663 000 € for the 6 countries studied and by between 2 287 000 € and 4 570 000 € for the EU-27, depending on the scenario considered.

    As expected, a re-distribution of compensation costs from third-party victims to consumers of robot vacuum cleaners, resulting from an alleviation of victims’ burden of proof, has a negative impact on consumer welfare in the vacuum cleaners market and a positive one on third-party victims, as explained below. This results from the combination of two effects. On the one hand, the additional compensation costs contract demand for RVC. On the other hand, some consumers replace the (now more expensive) RVC with non-robot vacuum cleaners, which boosts demand for the latter. However, the former effect prevails, so the overall impact on all vacuum cleaners (robot and non-robot) is negative in terms of consumer welfare in the vacuum cleaners market. Hence, we see a market contraction effect.

    Note that profits increase here because the additional compensation costs are modelled as an increase in RVC prices (cf. Section 4.4.3). These profits should therefore be interpreted as belonging to vacuum cleaner sellers and the third-party victims that benefit from the compensation. Vacuum cleaner sellers’ profits, in turn, should decrease as a result of re-distributing compensation costs to the parties responsible for the risks created by the use of these products: consumers would buy fewer robot vacuum cleaners and only partially replace them with non-robot vacuum cleaners. However, as shown in Table 3, the positive impact of an increase in legal certainty and consumer trust would outweigh the negative impact of the intended re-distribution of compensation costs.

    References

    Berry, S. T. (1994). Estimating discrete-choice models of product differentiation. The RAND Journal of Economics, 242-262.

    Berry, S., Levinsohn, J., & Pakes, A. (1995). Automobile prices in market equilibrium. Econometrica: Journal of the Econometric Society, 841-890.

    Deloitte (2021). Study to Support the Commission’s Impact Assessment on Liability for Artificial Intelligence. Forthcoming.

    Insurance Information Institute (2020). Facts + Statistics: Homeowners and renters insurance. Available at: https://www.iii.org/fact-statistic/facts-statistics-homeowners-and-renters-insurance

    Kantar (2020). Behavioural Study on the link between challenges of Artificial Intelligence for Member States’ civil liability rules and consumer attitudes towards AI-enabled products and services. JUST/2020/RCON/FW/CIVI/0065



    Annex 12

    Monitoring and evaluation

    1.In order to ensure that the targeted review under the preferred policy option (staged approach) can rely on a sufficient evidentiary basis, this mechanism would:

    -provide for reporting and information sharing by Member States regarding the application of the measures under Option 1 in national judicial or out-of-court settlement procedures;

    -use information collected by the Commission or market surveillance authorities under the AI Act (in particular Article 62) or other relevant instruments;

    -use information and analyses supporting the evaluation of the AI Act and the reports to be prepared by the Commission on the implementation of that act;

    -take into account any information and analyses supporting the assessment of relevant future policy measures under the ‘old approach’ safety legislation;

    -rely on the information and analyses supporting the Commission’s report on the application of the Motor Insurance Directive with regard to technological developments (in particular autonomous and semi-autonomous vehicles) pursuant to Article 28c(2)(a) MID.

    2.For the purposes of evaluating the effectiveness of the preferred policy option, success criteria have been defined, on a provisional basis, for each of the specific objectives. Possible sources of data informing this evaluation have also been identified, as shown in the subsequent overview table.

    Success criteria for monitoring and evaluating effectiveness

    Specific objective: 1. Ensuring legal certainty
    Success criteria / indicators:
    - positive: significant improvement of the level of legal certainty as perceived by business stakeholders (benchmark: public consultation results)
    - positive/negative: legal experts’ opinion on the level of legal certainty achieved
    Data sources:
    - Survey of business stakeholders with focus on SMEs
    - Commission services / external legal experts

    Specific objective: 2. Preventing legal fragmentation
    Success criteria / indicators:
    - positive: correct transposition by MS
    - negative: adoption of diverging measures on AI liability at national level
    Data sources:
    - MS reporting

    Specific objective: 3. Ensuring an equal level of protection of victims, and increasing trust in AI
    Success criteria / indicators:
    - positive: reduced difficulties for victims claiming compensation (benchmark: cost/time estimates by legal experts for economic study)
    - positive impact on consumers’ perception of liability rules and attitudes vis-à-vis AI-enabled products and services; increased likelihood to buy / use such products and services and increased willingness to pay (benchmark: results of behavioural economics study)
    Data sources:
    - MS reporting
    - Commission services
    - analysis by legal experts
    - behavioural analysis

    3.In order to benchmark the efficiency of the envisaged measures, operational objectives and preliminary indicators have been defined regarding the relevant stakeholder groups, together with data sources enabling an effective monitoring.

    Operational objectives and indicators for monitoring and evaluating efficiency (costs / benefits)

    Stakeholders: 1. Companies (as potentially liable parties)
    Operational objectives / indicators:
    - extent to which the AI liability initiative has shifted costs of compensation from victims to liable companies
    - cost reductions (e.g. legal information, insurance, risk management costs) due to increased legal certainty and reduced legal fragmentation
    - higher demand due to increased consumer trust
    Data sources:
    - Survey of business stakeholders with focus on SMEs
    - MS reporting on legal cases
    - Economic data on AI market
    - behavioural analysis

    Stakeholders: 2. Victims
    Operational objectives / indicators:
    - reduced difficulties for victims claiming compensation (benchmark: cost/time estimates by legal experts for economic study)
    - victims of harm caused with the involvement of AI have the same level of protection as victims of harm caused by other technologies
    Data sources:
    - MS reporting on legal cases
    - Commission services
    - analysis by legal experts

    Stakeholders: 3. Insurers
    Operational objectives / indicators:
    - improved conditions for offering insurance coverage due to increased legal certainty and reduced fragmentation
    - increased demand for insurance solutions due to higher awareness of liability risks and increased uptake of AI-enabled products and services
    Data sources:
    - Survey / workshops with insurance stakeholders

    Stakeholders: 4. Public authorities
    Operational objectives / indicators:
    - no significant added burden due to changes in national courts’ caseload compared to the baseline scenario
    - no significant administrative burden on MS linked to reporting requirements
    Data sources:
    - MS

    4.In order to evaluate the proportionality, coherence and continued relevance as well as the EU added value of the policy measures, the following criteria are envisaged on a provisional basis:

    Criteria: Proportionality
    - Availability of more effective or efficient means to achieve the policy objectives?
    Data source:
    - Qualitative assessment by Commission services taking into account MS reporting on legal cases and stakeholder feedback

    Criteria: Coherence
    - Synergetic interplay between the AI liability initiative and the revised PLD / the AI Act
    - frictionless integration of AI liability provisions into MS’ existing civil liability systems
    Data source:
    - Qualitative assessment by Commission services
    - MS reporting on legal cases
    - Feedback from MS

    Criteria: Continued relevance
    - Need for harmonised adaptations of general liability rules given the specific characteristics of AI systems on the market and practical experience with liability cases
    Data source:
    - MS reporting on legal cases
    - Stakeholder survey
    - Analysis of the characteristics and risk profile of AI systems on the market

    Criteria: EU added value
    - Likely consequences of stopping or withdrawing the harmonised provisions on AI liability, in particular regarding (i) the level of legal certainty, (ii) the level of legal fragmentation, (iii) the protection of victims of harm caused by AI
    Data source:
    - Legal and economic assessment in the evaluation framework, based on stakeholder feedback and data reported by MS



    Annex 13

    Illustration of AI-specific difficulties in claiming compensation based on case scenarios

    This Annex presents a number of case scenarios, which have been developed in close collaboration with the Joint Research Centre (JRC). The technical description of the technologies involved has been provided by AI experts of the JRC.

    The purpose of this Annex is to illustrate concretely the AI-specific challenges this initiative is designed to address. It only provides examples. Due to the general nature of the national civil liability rules to be adapted by this initiative, the envisaged measures can apply in a large variety of cases involving various AI-enabled technologies (either self-standing software systems or physical products enabled by AI), potentially liable parties and types of harm (material harm and immaterial harm to the extent it is compensable under national law).

    1. Case scenario 1: autonomous cleaning robots

    (a) Characteristics of the relevant AI systems and operational environment

    An autonomous fleet of cleaning robots operates in pedestrianised public areas. The robots are equipped with multiple sensors (cameras, LiDAR, radar, ultrasound, GPS, etc.), digital information (digital maps), and connectivity features including communication between the robots and between the robots and the infrastructure. The robots include multiple AI systems, each one responsible for a particular task (e.g., detection and location of litter and dirt, robot localization and mapping, detection of obstacles, trajectory planning or lateral and longitudinal control).

    Each cleaning robot belongs to a fleet deployed throughout the city. An employee is in charge of defining the operation areas to be cleaned (the missions) and monitoring multiple robots in simultaneous operation from a remote-control centre. The fleet can coordinate the safe cleaning of the selected region, the interaction with pedestrians, and the avoidance of obstacles, with a high degree of automation. The role of the human operator is of a supervisory nature.

    (b) Description of the events leading to damage

    A colourful baby stroller is parked in front of a similarly patterned advertising banner while the baby’s guardian looks at a nearby shop window. One of the cleaning robots seems to fail to recognise the stroller as an obstacle and collides with it. The stroller is damaged and the baby slightly injured.

    (c) Potential reasons for the accident

    The accident might have been caused by any of the following issues:

    -An original flaw in the AI vision component, an AI-based perception system which is not capable of detecting the stroller since it was somehow camouflaged against the background (an advertising banner of a similar colour and pattern to the stroller). This leads to an image segmentation error (i.e., a false negative) that treats the stroller as part of the background of the banner.

    -Failure by the provider of the AI vision component to distribute an available software update fixing an identified safety issue, or failure by the operator to install the latest version. The fix was based on an enhanced segmentation system using a sensor fusion approach, including data from range-based sensors, which would have allowed the 3D volume of the stroller to be detected as an obstacle and avoided by the path planning system.

    -Failure by the human remote operator to appropriately monitor the operation of the fleet of robots. The failure may be either due to inadequate supervision by the human operator (i.e. incorrect compliance with the human oversight mechanisms defined by the provider), or to defects in the human-robot interfaces (i.e. deficiencies of the human oversight mechanisms defined by the cleaning robot producer).

    -A specific vulnerability of the perception system to certain patterns designed through adversarial machine learning. Such patterns can be printed on stickers that, when pasted on the billboard or any other surface, cause critical failures in machine learning-based segmentation algorithms, leading to unexpected perception results.

    -A deliberate attack on the robot's sensors such as blinding, jamming or spoofing. If there are no mechanisms to detect and counteract this type of attack, the perception systems are completely compromised, leading to failures in the interpretation of the environment, such as the non-detection of an object like a baby stroller.

    -Failure due to a cybersecurity breach involving unauthorised access to and subsequent manipulation of the internal electronic control units by an attacker, using the wireless interface as the entry point. In this case, the attacker would have forced the system to take an unsafe action, making the robot collide with the stroller.

    -Failure due to an updated version of the perception system devised to reduce the number of false positives and false negatives of previous versions (which led to many regions not being properly cleaned). The confidence threshold for considering a detection a true obstacle was increased to reduce the number of false positives. Unfortunately, from the perspective of the camera, the similarity of the texture and colour of the baby stroller to the background of the advertising banner meant that the potential obstacle was detected with low confidence and was discarded by the updated segmentation system.

    (d) Potentially liable parties

    The final producer of the cleaning robot can be strictly liable under the Product Liability Directive for damage caused by a defect in its product. However, the autonomous cleaning robots are very complex systems with many different components based on AI, affecting each other, and usually developed and integrated by different parties or subcontractors. The defect (e.g. a safety or a cybersecurity vulnerability) can be in one of the components, in several components, or in a faulty integration of these components. Therefore, other potentially liable parties can be found within the complex supply chain involved in the development of autonomous cleaning robots (e.g., AI-software developer, company providing the perception systems of the robots). The existing Product Liability Directive may not adequately cover these parties as component manufacturers, because of doubts as to the extent to which software falls within the definition of ‘product’.

    The producer establishes a set of operational requirements for the AI systems involved (in particular for human oversight). The professional user (or the user’s employees in the case of legal persons) has to follow these requirements to ensure the safe operation of the autonomous cleaning robots. If a user (or their employee in charge) does not comply with these obligations, the user can be (vicariously) liable for damage caused by a negligent use of the robotic fleet. Potentially liable parties also include companies providing the datasets for training perception and decision-making systems.

    Other potentially liable parties are third parties (adversaries) that perform a cyberattack on the autonomous robotic fleet, for example by attacking the sensors through jamming or spoofing, placing adversarial artefacts in the scene (e.g. stickers), or gaining unauthorised access to the electronic control units of the robots through wireless connectivity vulnerabilities.

    In summary, potentially liable parties include:

    -Producer of the cleaning robots. This stakeholder would likely be considered as the provider of the robots’ AI-systems for the purposes of the AI Act.

    -Provider of individual AI components integrated in the cleaning robots (e.g. navigation, perception systems such as vision component, path planning, low-level controllers, operational interfaces). These stakeholders would likely be considered as providers under the AI Act.

    -Professional user / operator: the municipality, or a company providing the service to the municipality, deploying the cleaning robot services in the city. This stakeholder would likely be considered as the user of the robots’ AI systems for the purposes of the AI Act.

    -Adversaries (e.g. cybercriminals) that attack the system exploiting vulnerabilities in the AI components (e.g. adversarial ML) or on the broader software and hardware surface (e.g. buffer overflows).

    (e) Fault-based compensation claims under national law

    Under national tort law, the injured party or claimants in principle have to prove that the defendant – or their employees in the case of vicarious liability – caused the damage intentionally or negligently (i.e. by not complying with the applicable standard of care). As mentioned, there may be multiple alternative or cumulative reasons for the damage, including low confidence detection, internal cybersecurity breach, external adversarial attack with some specific patterns, etc.

    Due to the high degree of complexity and lack of explainability of the AI systems with which the cleaning robots are endowed, proving the fault of the defendant and the causal link between that fault and the damage in this case will often require a (possibly costly) expert opinion. Provided that the claimant can obtain access to the relevant data, the expert can examine and interpret the raw inputs read by the sensors, the internal variables of some of the subsystems, and the outputs or actions taken by the robot. The necessary analysis would require in particular:

    -access to documentation related to operational and human oversight requirements of the system;

    -access to system logs (inputs, outputs, and internal states of any of the subsystems) corresponding to the last few minutes before the accident.

    The initiative on civil liability for AI will include provisions on the disclosure of information to be documented/logged pursuant to the AI Act. In accordance with relevant procedural law, the competent national court could order that such disclosure be subject to stringent safeguards to ensure proportionality and protect the legitimate interests of all parties concerned, for instance confidential information, intellectual property rights and trade secrets. On the basis of the disclosed information, an expert could for instance determine that the result of the perception system appears to be wrong at the time of the collision, since the stroller does not appear in the list of detected objects. The expert may thus be able to prove that the stroller was not properly detected (without indicating the cause). The expert may also be able to rule out that the sensors were jammed or spoofed, since the raw data seems correct. The expert could further infer a correlation between such a detection failure and the control decision of the robot to move forward until colliding with the stroller. This may allow the claimant to establish prima facie evidence.

    However, the high degree of autonomy, opacity and complexity of the AI systems involved may make it impossible even for an expert to infer a clear causal link between a specific input and the harmful output, given in particular that not only the perception module but also the trajectory planning system and the low-level controllers are based on complex AI models. Depending on the standard of proof national courts will require of the claimant, the inability to discard alternative causes of the damage (e.g., cybersecurity vulnerability, unpredictable behaviour due to reinforcement learning or continuous adaptation of the trajectory planning system, etc.) may lead to all fault-based claims being dismissed, leaving the victims to bear the damage.

    The initiative on civil liability for AI will address these issues by:

    -providing for a presumption of causality if the defendant did not comply with obligations under the AI Act designed to prevent damage of the type for which compensation is sought;

    -providing for a targeted alleviation of the burden of proof regarding the question how or why an AI system reached the relevant (harmful) output.

    These measures are designed specifically to avoid that the peculiar characteristics of certain AI-systems lead to compensation gaps compared to cases not involving AI.

    (f) Obtaining compensation under the Product Liability Directive

    - From the manufacturer of the final product:

    The injured party has to prove that the robot was defective, i.e. it failed to provide the safety that the public at large is entitled to expect, and that this lack of safety caused the damage. Whether the source of the defect was a mechanical flaw, a software flaw or a data flaw is irrelevant. It should not be difficult to prove that the robot failed to provide the safety the public at large was entitled to expect when it crashed into the stationary stroller.

    However, the Directive exempts producers from liability in some cases:

    -if they prove that the defect probably did not exist when the product was put on the market (Article 7(b) PLD). The robot manufacturer might argue the defect emerged while the robot was in operation. Under plans to revise the PLD, this exemption would be adapted to the reality of digital products, which change and interact with other products/systems after being placed on the market.

    -if they prove that even though the defect existed when the product was put on the market, it was undiscoverable according to state-of-the-art knowledge (Article 7(e) PLD). This exemption is intended to encourage producers to put innovative products on the market. In this case, however, the risk of a robot that operates in public colliding with people and property is a clearly foreseeable risk, so it is unlikely this exemption would be available. The availability of the defence in such cases should be clarified in the revision of the Directive.

    -if the damage was caused entirely by a third party, in this case the hacker, and not because of a defect. It is currently unclear if a product’s cybersecurity vulnerabilities can make a product defective – should a product be able to withstand the most sophisticated cyberattack? The assessment of defectiveness would need to take into account, among other elements, the intended usage of the product, the threats identified in the context of use, the state of the scientific literature, and the actions taken by the manufacturer to mitigate the cybersecurity risks. If the damage is only partially due to a defect, the robot manufacturer is nevertheless fully liable for the damage (Article 8(1) PLD).

    - From component manufacturers:

    In most cases, the injured party will seek compensation from the manufacturer of the final product, but it is possible to seek compensation from component manufacturers instead. The injured party has to prove that the perception system, AI-software or datasets were defective and that this lack of safety caused the damage.

    The first difficulty is that the AI-software developer and provider of datasets could argue that software/data is not a product within the meaning of the Directive and that they are therefore not producers and cannot be sued. Such products and producers should be brought within the scope of the Directive through the revision.

    Proving that the components failed to provide the safety that the public at large was entitled to expect would be challenging. What safety expectations is the public entitled to when it comes to new technological developments like AI software or datasets? The court would take into account all circumstances, including applicable legal requirements, like the AI Act and the Machinery Directive/Regulation, which will apply to AI systems as such and to AI software integrated into machinery. The notion of defectiveness in relation to new technologies should be clarified through the revision of the Directive.

    Proving causality raises similar issues to those described above for the fault-based claims: it is difficult and costly. The revision will look at easing the burden of proof, including an obligation on the producer to disclose technical information (e.g. data logs) to the claimant.

    - Limits on claims

    The claimant can claim compensation for physical injury to the baby, but property damage below the value of EUR 500 is not covered by the Directive. In all likelihood therefore, damage to the stroller would not be recoverable. The EUR 500 threshold may be lowered or removed through the revision.

    2. Case scenario 2: socially assistive robots in education

    (a) Characteristics of the relevant AI systems and operational environment

    Socially assistive robots are defined as physical devices that can autonomously sense, process sensory information, and perform actions upon the surrounding social environment in a meaningful way. This case scenario refers to the use of socially assistive robots as a tool to teach or demonstrate socially desirable behaviours to help children who have difficulties expressing themselves to others because of neurodevelopmental conditions characterized by social communication difficulties as a result of autism. Socially assistive robots can have anthropomorphic features, or they may have minimalistic morphological characteristics. They can be designed across a spectrum of behavioural and morphological characteristics regarding their mimicry of agency, so as to be perceived as anything from a social agent to a smart toy. In any of those cases, the robots might function fully autonomously, and they might learn from their environment and through interaction.

    Autism Europe highlights the need for inclusion and preparation of autistic children for an independent life. One of the ways to address this challenge is the connection between special and mainstream schools. In the present case scenario, a school unit therefore decides to increase the number of admissions of high-functioning autistic children. However, the staff is not sufficient to provide individual support to the autistic children. For this reason, the director of the school decides to introduce one socially assistive robot per class for personalized interaction with the autistic children, in order to improve their social skills. The school signs a contract with a company providing robotic educational services to deliver the service at the school.

    In this case scenario, the robot has the following characteristics and AI-based modules:

    -It is 1.30m tall.

    -It has arms.

    -It has multiple sensors to detect the environment, including cameras, 3D sensors, laser, sonar, infrared, tactile sensors, microphones and inertial measurement units.

    -It is mobile, including AI-based perception, navigation, facial, speech and emotion recognition, localization, decision-making, mapping and path planning systems, manipulation, grasping, expressive communication and other AI-based systems.

    -It has a tablet for an alternative means for communication.

    -It can perceive and process natural language using AI systems, including a module for verbal communication with expressive voice.

    -It is capable of detecting obstacles, people, and facial expressions using AI-based computer vision algorithms.

    -It is equipped with an AI-based cognitive architecture which combines task-related actions and socially adaptive behaviour for effective and sustained human-robot interaction.

    For the robot to be tailored for interaction with autistic children in school environments, it provides the following AI-based modules:

    -Pre-designed interventions for cognitive engagement (task-oriented).

    -Adaptation of the robot’s social behaviour according to the child’s personal abilities and preferences in the context of autism. 

    The robots are installed in the classrooms for regular personalized interventions with the autistic children and for voluntary interaction during children’s free time. The robots are mobile, and they can navigate dedicated space during the children’s free time if a child requests so. The robots learn from the interaction with the autistic children and adapt their social behaviour. While during the lesson time the robot is available only for children with autism to perform personalized interventions, during the free time, any child of the school can interact with the robot at dedicated spaces.

    (b) Description of the events leading to damage

    In this use-case, we focus on injury/damage/harm which might be caused because of the adaptive behaviour of the robot. Some property damage may also occur.

    Case 1 – physical harm and property damage towards a child with darker skin: Because of biases in the development of robot adaptation (perception system) from the prolonged period of interaction with children with certain characteristics, the robot fails to perceive a child with darker skin, and it causes physical harm to the child. The blow caused by the robot also resulted in the breakage of the child's glasses, valued at less than €500.

    Case 2 – physical harm and property damage towards a child that behaves in an unexpected way: Because of biases in the development of the robot’s adaptation (decision-making and path-planning system), resulting from a prolonged period of interaction with children with certain behavioural characteristics, the robot fails to respond in an appropriate way to an autistic child that might show unexpected behaviour; it loses control and collides with the child. This incident also resulted in the breakage of the child's glasses, valued at less than €500. In addition to the physical harm, the inappropriate robot response causes emotional distress to the interacting child, which might lead to psychological trauma.

    Case 3 - Long-term psychological harm towards a neurotypical child 421

    During children’s free time at the school, a neurotypical child interacts with the robot on a regular basis. The robot adapts to the child's needs and requests, which subsequently leads the child to develop the following medically recognised pathological conditions:

    -The child develops symptoms of addiction to the robot. The robot perceives the child's social interaction styles, the emotional states and the cognitive level and adapts its behaviour accordingly. The robot is equipped with the intrinsic motivation for sustained human-robot interaction which makes it develop behaviours that stimulate the development of an emotional bond of the child towards the robot. This, in turn, results in the child's possible addictive behaviour towards the robot. The increased preference of the child to interact with the robot rather than with humans causes an abnormal socio-emotional development.

    -Medical condition of depression: Deficiency of emotion regulation skills and anxiety. The lack of interactions with humans and social relationships (i.e., social isolation) can have detrimental effects on an individual’s physical and psychological health. Social isolation can negatively influence psychological health leading to depressive symptoms.

    -Abnormal cognitive and socio-emotional development: The robot’s limited modalities allow for a limited variety in the robot-initiated social interactions with the child and for poor and predictable loops of interaction. The children have the tendency to adapt their behaviour to the social agents they interact with. Consequently, for neurotypical children, the predictable robot behaviour has a negative impact on the child’s cognitive and socio-emotional development. This might lead to cognitive deficiencies and learning difficulties.

    -Cognitive and socio-emotional dependencies. This might have a negative impact on the child’s development of autonomy and agency.

    (c) Potential causes of the damage suffered

    In cases 1 and 2 (primarily physical harm), the damage could have the following possible causes:

    -the perception module of the robot fails to perceive the child;

    -the decision-making and path planning modules of the robot fail to adapt to the child user;

    -the control module fails to consider the physical and behavioural differences of the child user.

    -Where a person was entrusted with ensuring regular human supervision (e.g. an educator or therapist), that person’s possible failure to adequately monitor the robot during operation is an additional possible cause of the damage.

    These potential causes could be related to the adaptation of the perception and decision-making (and control) elements of the robot, which evolved continuously via the interaction of the robot with children with specific characteristics. As a result, the robot could have developed biases towards children with darker skin or unexpected behaviours, whereby the causation of harm is linked to protected grounds of discrimination.

    In case 3 (psychological harm), the possible reasons are linked primarily to the robot’s adaptation module. This module embeds an intrinsic motivation element which contributes to the human-robot sustained interaction. The robot's internal motivation to remain in an optimal level of comfort for the child-user contributes to its adaptation to the specific child's characteristics, needs and behaviours. This robot behaviour develops a closed loop of cognitive and socio-emotional interaction with the child that might lead to the child's addiction to the specific robot behaviour. In a long-term interaction the child might exhibit a preference for interaction with the robot rather than human social agents. In that case, the child and the robot develop in a mutual adaptation loop. Here again, a possible neglect by the designated human supervisor to adequately monitor the robot-child interaction (if such human supervision is prescribed) is an additional possible cause of the harm.

    (d) Overview of stakeholders involved

    Potentially liable parties:

    -Provider of the robotic educational system: this stakeholder owns the robots and equips them with tailored AI-based educational capabilities. They modify the robot to add new functions to it after it has been placed on the market. This stakeholder would likely be considered as the provider of the educational AI system for the purposes of the AI Act, as it provides the system for the particular context. They can also be the users of the system, if they themselves use it to provide educational services.

    -Manufacturer of the robot: Designs and develops various AI and non-AI modules of the robot or collaborates with providers that develop the various modules and is responsible for their integration. This stakeholder would likely be considered as the provider of the robot’s AI systems for the purposes of the AI Act.

    -Providers of the various AI-based systems integrated in the robot before it is placed on the market, e.g. perception systems, control systems.

    -User of the robotic educational platform: This entity provides the robotic educational services. It may be the same as the provider of the robotic educational system or a different entity. In the latter case, they may purchase or lease the robotic education platform from the provider.

    -School unit: a mainstream school which accommodates high-functioning, verbal children that have been diagnosed with ASD.

    Injured parties:

    -End-users (neurotypical and autistic children, children with a certain skin colour). From a psychological perspective, children diagnosed with ASD show a notably lower number of behaviours aimed at initiating, responding to, or maintaining social interaction and an extreme difficulty in filtering sensory input. For autistic children, robots represent safe, predictable, and coherent environments in which to experience prototypes of social interactions and reduce emotional dysregulation.

    (e) Compensation claims for the psychological and physical injury based on fault-based liability under national law 422

    Under national fault-based liability rules, claims can in principle be brought against all potentially liable parties listed above. Both physical injury and psychological harm amounting to a recognised pathological condition are usually compensable under national liability rules, and, unlike under the PLD, there is no minimum threshold in respect of property damage. However, the injured party in principle has to prove that the respective defendant – or their employees in the case of vicarious liability – caused the damage intentionally or negligently (i.e. by not complying with the relevant standard of care).

    Claims against the manufacturer of the robot: Depending on how fault-based producer liability is approached under the relevant national law, national courts may or may not infer fault and causality from the fact that the robot caused the relevant injuries. The fact that the AI-systems influencing the robot’s behaviour adapted during the latter’s autonomous operation may put into doubt such an inference. National courts may namely take into account that the robot’s behaviour depends on various circumstances that may be considered unforeseeable for the manufacturer (namely the precise operating environment, human interaction and the input data the robot is exposed to). The challenges described below with respect to claims against the providers of AI modules may then apply also vis-à-vis the manufacturer of the end-product.

    If the injured party can prove that the manufacturer did not comply with certain obligations incumbent upon them as provider pursuant to the AI Act (e.g. the obligation to ensure balanced and suitable training data), this may facilitate the compensation claim as national courts may infer liability from such non-compliance. However, in order to establish this, the injured party may need access to relevant information that providers are required to record/document under the AI Act, such as information on the training data used, the specifications defining suitable input data, the provider’s risk-management system, etc.

    The initiative on civil liability for AI will address these challenges by, first, laying down harmonised provisions on the disclosure of information (subject to appropriate confidentiality safeguards in accordance with the relevant procedural law) to be documented/logged pursuant to the AI Act, second, presuming that non-compliance with requirements of the AI Act was causal for damage of the kind those requirements are meant to prevent, and third, alleviating the victim’s burden of proof as regards the question how or why an AI system reached a certain output. The latter measure is particularly relevant with respect to AI systems that self-adapt continuously, as the characteristic unpredictability of such systems can make it very difficult to prove the necessary link with a specific trigger of a harmful AI output.

    Claims against providers of individual AI modules, including the education system: The applicability of national fault-based liability rules does not depend on whether an AI-system has been integrated into the robot before or after it was placed on the market by the manufacturer. Fault-based liability thus extends in principle to the provider of the various individual AI modules, including the educational system with which the robot was equipped to tailor it to its specific intended purpose. In order to claim compensation from any of the providers of the various AI modules enabling the autonomous functioning of the educational robot, the injured party would have to establish that the specific AI-system provided by the defendant malfunctioned, that this malfunctioning was caused by a negligent or deliberately harmful action or omission of the defendant, and that it caused the damage. Given the high degree of autonomy, complexity and the lack of explainability of the relevant AI-systems, meeting this burden of proof will often require expert analysis of the design and functioning of the relevant AI-systems.

    The necessary expert analysis could notably be based on the documentation and system logs required by the AI Act (inputs read by the sensors, outputs, internal states of the AI subsystems, actions taken by the robot), provided that the injured party is entitled to access that information for the purposes of their claim. The initiative on civil liability for AI will include provisions on the disclosure of such information, subject to appropriate confidentiality safeguards in accordance with the relevant procedural law. On this basis, an expert may notably be able to determine whether the result of the robot’s perception system is correct at the time of the accident, for instance by checking whether the physically injured child appears in the list of detected objects. The expert may also review the relevant control decisions of the robot, e.g. the decisions to interact in a certain way with the affected children, or the decision to actuate certain movements. The analysis may also inform the supposition of a correlation between, for instance, a detection failure and a relevant control decision of the robot.

    However, the high degree of autonomy, complexity and the lack of explainability of the AI systems involved may make it impossible – even for an expert – to establish a clear causal link between a specific input variable and the harmful AI output. Given the adaptive nature of the robot’s interaction capabilities, which are shaped by various parameters in its operational environment and thus outside the providers’ control, national courts may deny the attribution of that output to any specific provider. The initiative on civil liability for AI will address this challenge by providing for a targeted alleviation of the victim’s burden of proof regarding the question how or why an AI system reached the relevant (harmful) output.

    The initiative on civil liability will ensure that, if the claimant can prove that the respective provider has not complied with relevant preventive requirements of the AI Act, a liability claim based on national law can be brought on those grounds, without the need to prove a causal link between that non-compliance and the damage. In order to provide the necessary proof of non-compliance, the injured party may again rely on the harmonised conditions for access to information that providers are obliged to document or record under the AI Act.

    These measures are designed specifically to avoid that the peculiar characteristics of certain AI systems – and in this case scenario in particular their continuously adapting nature – lead to compensation gaps compared to cases not involving AI.

    Claims against the company using the robot to provide education services: The company providing educational services is using the robot’s AI systems under its authority, and would therefore be considered as the ‘user’ for the purposes of the AI Act. In the case at hand, proving that the damage was caused by a negligent or intentionally harmful action or omission of that company’s employees would be challenging because the potential causes of the harmful output are linked to autonomous, adaptive output of different AI-systems. In such a context, it is uncertain how national courts would assess the professional user’s duty to ensure the safe operation of the robot. Access to information such as the human oversight requirements (instructions of use) and data logged during the operation could help the victim to establish a possible failure to comply with the user’s standard of care.

    Claims against the school: A failure of the school’s staff to duly monitor children’s activities and prevent harm can in principle give rise to the school’s (vicarious) liability under national law. However, given that a company was in charge of providing the educational services using the robot, and assuming that the robot was intended to function without supervision by a teacher, national courts are unlikely to uphold such a claim.

    (1)

     McKinsey Global Institute estimated that by 2030 AI technologies could contribute about 16% higher cumulative global gross domestic product (GDP) compared with 2018, or about 1.2% additional GDP per year. For comparison, the introduction of steam engines in the 1800s boosted labour productivity by 0.3% a year and the spread of IT during the 2000s by 0.6% a year, cf. ITU/McKinsey, Assessing the Economic Impact of Artificial Intelligence, 2018.

    (2)

      https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf  

    (3)

    White Paper On Artificial Intelligence – A European approach to excellence and trust, 19.2.2020, COM(2020) 65 final.

    (4)

    Ibid., p. 15.

    (5)

    Report from the Commission to the European Parliament, the Council and the Economic and Social Committee on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, 19.2.2020, COM(2020) 64 final.

    (6)

    Communication from the Commission of 29 January 2020, Commission Work Programme 2020 – ‘A Union that strives for more’, COM(2020) 37 final, Annex I, row 10 of the table.

    (7)

    European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)).

    (8)

    COM(2021) 205 final.

    (9)

    See Annex 3, sections 1.2(c) and 3.

    (10)

    In the proposed AI Act, ‘AI systems’ are defined as software developed with certain techniques and approaches (machine learning, logic- and knowledge-based approaches, statistical approaches etc.) that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments these systems interact with.

    (11)

    See 2.2.

    (12)

    For further explanations, including on other relevant EU and international law instruments, see Annex 6.

    (13)

    Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the MS concerning liability for defective products (OJ L 210, 7.8.1985, p. 29).

    (14)

    See Karner/Koch/Geistfeld, Comparative Law Study on Civil Liability for Artificial Intelligence, November 2020, p. 38 ( https://op.europa.eu/publication/manifestation_identifier/PUB_DS0921157ENC ).

    (15)

    In a few MS, the fault is presumed once the other two elements are proven by the victim; several fault-based regimes contain specific rules which modify the premises of fault-based liability (especially the distribution of the burden of proving fault) (cf. ibid., p. 41 et seq).

    (16)

    There are currently many differences between MS rules on the types of compensable harm, see Annex 9.

    (17)

    Comparative Law Study, p. 59 et seq.

    (18)

    Ibid, p. 58.

    (19)

    Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final.

    (20)

    Only a small number of AI use-cases are expressly prohibited by the AI Act.

    (21)

    For details on the interaction of this initiative with the AI Act, see Annex 7.

    (22)

    Proposal for a Regulation of the European Parliament and of the Council on general product safety (COM/2021/346 final).

    (23)

    Proposal for a Regulation of the European Parliament and of the Council on machinery products (COM(2021) 202 final).

    (24)

    Commission Delegated Regulation (EU) 2022/30 supplementing Directive 2014/53/EU of the European Parliament and of the Council with regard to the application of the essential requirements referred to in Article 3(3), points (d), (e) and (f), of that Directive (OJ L 7, 12.1.2022, p. 6).

    (25)

    For explanations on how this initiative addresses fundamental rights (in particular discrimination) risks, see Annex 8.

    (26)

    Commission SWD(2021) 84 final, Impact assessment accompanying the Artificial Intelligence Act, p. 88.

    (27)

    European enterprise survey on the use of technologies based on AI, Ipsos 2020, Final report p. 58 ( https://op.europa.eu/en/publication-detail/-/publication/f089bbae-f0b0-11ea-991b-01aa75ed71a1 ) .

    (28)

    Deloitte, Study to support the Commission’s IA on liability for artificial intelligence, 2021 (‘economic study’).

    (29)

    Kantar, Behavioural Study on the link between challenges of Artificial Intelligence for MS’ civil liability rules and consumer attitudes towards AI-enabled products and services, Final Report 2021 (‘Behavioural Economics Study’); cf. also Special Eurobarometer Survey 460, ‘Attitudes towards the impact of digitisation and automation on daily life’, May 2017.

    (30)

    Comparative Law Study.

    (31)

    For details, see: Annex 2 on Stakeholder consultation.

    (32)

    Regarding the difficulty to identify a potentially liable person, see e.g. Renda e.a., Study to support an impact assessment of regulatory requirements for Artificial Intelligence in Europe, final report, 2021, p. 67. Concerning the difficulty to trace back the damage to an action or omission of that person, to prove human fault and establish a causal link with the damage, cf. Comparative Law Study, pp. 23 et seq. and 32.

    (33)

    For more detailed explanations on these characteristics and the related challenges regarding liability, see Annex 5. In order to avoid misunderstandings, it should be noted that the specific characteristics of certain AI systems are not referred to as grounds of liability but because these characteristics make it difficult for victims to prove the conditions of national civil liability claims. While the provider of the AI system is responsible for its design and can therefore be subject to a liability claim for this reason, the characteristics of the AI system can make it difficult to meet the burden of proof also in the context of claims against other parties.

    (34)

    For details, see Annex 13.

    (35)

    The required proof of causality depends on the degree of likelihood (i.e. ‘How likely is it that this faulty behaviour caused the damage?’) required by each national law.

    (36)

    For this, it is not necessary that the human understands, for instance, what exactly triggered the ‘wrong’ AI output.

    (37)

    Based on a qualitative analysis differentiating between various economic sectors, the analysis took into account the percentage of enterprises perceiving liability issues as a barrier to the adoption of AI-enabled technologies as well as the potential of AI-enabled products and services to cause harm. Cf. Economic Study, pp. 104 et seq.

    (38)

    Ibid, p. 109.

    (39)

    By triangulating qualitative research findings with the results of the representative survey on the use of AI technologies (Ipsos, op. cit.)

    (40)

    Comparative Law Study, pp. 34 et seq. and 48 and Expert Group Report, pp. 50-55.

    (41)

    For examples, see Comparative Law Study, pp. 32 et seq.

    (42)

    Economic Study, pp. 51 et seq.

    (43)

    Article 4(1) of the Rome II Regulation. For more details see Annex 6.

    (44)

    ‘Cost of non-Europe in artificial intelligence – liability, insurance and risk management’, Study by the European Parliamentary Research Service, June 2019, pp. 39-41.

    (45)

    This problem was confirmed by legal experts who, when consulted about the costs of claiming compensation for damage caused by AI, submitted widely differing estimates across MS, cf. Economic Study, pp. 51 et seq.

    (46)

    See CSES, Impact assessment study on the possible revision of the Product Liability Directive, 2022, p. 54.

    (47)

    For an overview see e.g. European Parliamentary Research Service, Civil liability regime for artificial intelligence, September 2020, Table 18, p. 45; AI Watch, National strategies on artificial intelligence – a European perspective (JRC-OECD report), 2021.

    (48)

    Around 80 % of those public authorities and SMEs that took a position on this question were in favour of adapting national liability rules. See Public consultation on the AI White Paper – Final Report, p. 14.

    (49)

    Economic Study, pp. 51 et seq.

    (50)

    For example, for the Austrian legal system, minimum lawyers’ fees were estimated to be ca. EUR 6 500 higher in liability cases involving AI and maximum lawyers’ fees ca. EUR 58 000 higher. For all legal systems covered, see ibid, Annex B, tables 13 et seq. The additional costs of technical expertise in cases involving AI compared to other cases were used as a proxy to quantify the costs linked to the burden of proof, see Annex 10, 2.1.3. (d).

    (51)

    Ibid., p. 66; Expert Group Report, p. 35.

    (52)

    Behavioural Economics Study, op. cit. p. 24-28.

    (53)

    The difficulty to determine who is responsible in cases of damage and the perceived low likelihood of compensation count amongst the strongest reasons for a lack of trust in AI-enabled products and services, cf. ibid., pp. 24 et seq. and 33.

    (54)

     See BEUC, ‘Artificial Intelligence: what consumers say’, http://www.beuc.eu/publications/beuc-x-2020-078_artificial_intelligence_what_consumers_say_report.pdf  

    (55)

    43% of respondents in Standard Eurobarometer 92, Europeans and Artificial Intelligence, 12/2019, p. 16.

    (56)

    Behavioural Economics Study, op. cit., p. 47 and 48.

    (57)

    71 % of businesses that have already adopted AI identified lack of trust as a barrier, cf. Ipsos 2020, op. cit., p. 57. Already in 2017, stakeholders considered public attitude towards and acceptance of robotics and AI as the second most relevant obstacle preventing the full development of those technologies, cf. the summary of the public consultation of the European Parliament in 2017 at https://www.europarl.europa.eu/cmsdata/130181/public-consultation-robotics-summary-report.pdf . A Eurobarometer survey of the same year revealed that the unsatisfactory uptake of AI-enabled products and services over the preceding years coincides with a decline in the proportion of respondents having a positive view of robots and AI, available at: https://ec.europa.eu/jrc/communities/sites/jrccties/files/ebs_460_en.pdf .

    (58)

    Cf. Ipsos, op. cit., p. 12.

    (59)

    Economic Study, pp. 71 et seq.

    (60)

    Economic Study, p. 91.

    (61)

    Economic Study, pp. 113 and 114.

    (62)

    Cf. “Study on Safety of non-embedded software; Service, data access, and legal issues of advanced robots, autonomous, connected, and AI-based vehicles and systems”, SMART 2016/0071, TNO 2019-R10095 – Final Study Report regarding CAD/CCAM and Industrial Robots p. 31.

    (63)

    See e.g. Galasso/Luo, Punishing Robots – Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence, in: Agrawal/Gans/Goldfarb (eds.), The economy of artificial intelligence: an agenda, Chicago 2020, p. 493(501).

    (64)

    The McKinsey Global Institute found that uncertainty has a direct negative effect on the propensity to invest in AI technologies. See ‘Notes from the AI frontier – Tackling Europe’s gap in digital and AI‘, op. cit., fn. 52.

    (65)

    Economic Study, p. 112. These numbers refer to the AI market sizes as estimated for the purposes of the IA accompanying the AI Act.

    (66)

    Renda e.a. (2020), Study to Support an Impact Assessment of Regulatory Requirements for Artificial Intelligence in Europe.

    (67)

    The lower estimate of the AI market size (EUR 5.404 billion in 2021) stems from Allied Market Research (2018), Artificial Intelligence Market Size, Growth | AI Market Forecast - 2025 (alliedmarketresearch.com). The higher estimate (EUR 15.451 billion in 2021) stems from Grand View Research (2020), Artificial Intelligence Market Size & Share Report, 2020-2027 (grandviewresearch.com).

    (68)

     For example, in the area of mobile robotics, a risk of physical harm was taken into account; in the area of recruitment services, a risk of damage caused by discrimination was assumed; in the health sector, a risk of physical harm exists, etc.

    (69)

    EPRS, Civil liability regime for artificial intelligence - European added value assessment, September 2020, p. 48: https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2020)654178 .

    (70)

    Economic Study, pp. 104 et seq. For the shares of affected AI-enabled activities per sector, and absolute sizes of the affected market shares, see ibid., pp. 110-112 and 121. In absolute terms, the market shares affected by legal uncertainty and fragmentation regarding civil liability are the biggest in the manufacturing sector (EUR 226-699 mio. in 2020 / EUR 1 925-4 026 mio. in 2025) and other technical or scientific sectors (EUR 100-309 mio. in 2020 / EUR 924-1 932 mio. in 2025).

    (71)

    Examples of affected activities include the provision and use of predictive AI tools in the agricultural sector, the operation of AI equipment in manufacturing, AI-enabled solutions to remotely monitor work sites in the oil and gas sector, speech and text recognition tools in the IT sector, automated waste management, etc.

    (72)

    Economic Study, p. 122.

    (73)

    Ibid, p. 115.

    (74)

    Ibid., p. 117.

    (75)

    Ibid., pp. 118 et seq.

    (76)

    See ‘Notes from the AI frontier – Tackling Europe’s gap in digital and AI‘, McKinsey, op. cit., p. 47.

    (77)

    ‘Access to insurance for services provided in another Member State’, SWD(2014) 130 final, 31/3/2014 p. 7.

    (78)

    Economic Study, p. 142.

    (79)

    Behavioural Economics Study, op. cit. p. 24-28.

    (80)

    Economic Study, pp. 188 et seq.

    (81)

     Ipsos 2020, op. cit.

    (82)

    Economic Study, pp. 32 et seq.

    (83)

    Cf. the Winter 2022 Economic Forecast . 

    (84)

    See Annex 7 for a more detailed analysis of how the proposed AI Act is taken into account for the baseline scenario, and the interplay with the proposed AI Act.

    (85)

    For approaches to improve explainability, see Joint Research Centre Technical Report, Robustness and Explainability of Artificial Intelligence – From technical to policy decisions, 2020, point. 3.2.2. et seq. and footnotes 49 et seq.

    (86)

    Cf. e.g. J.W. Hong, Why Is Artificial Intelligence Blamed More? Analysis of Faulting Artificial Intelligence for Self-Driving Car Accidents in Experimental Settings, International Journal of Human–Computer Interaction, 2020, DOI: 10.1080/10447318.2020.1785693.

    (87)

    Behavioural Economics Study, op. cit., pp. 43, 47, 87.

    (88)

    See https://www.theverge.com/2018/5/22/17380374/self-driving-car-crash-consumer-trust-poll-aaa

    (89)

    Any suggestion that the expected increase in absolute terms of the market value potentially affected by liability-related issues might be outweighed by attenuating factors such as increased safety requirements applicable to AI systems is not supported by evidence; cf. Economic Study, p. 113.

    (90)

    Economic Study, p. 121.

    (91)

    Economic Study, pp. 97, 114 and 121.

    (92)

    See Comparative Law Study, Executive Summary.

    (93)

    2025 Strategia per l’innovazione tecnologica e la digitalizzazione del Paese (Italy’s 2025 strategy for technological innovation and digitalisation of the country): https://assets.innovazione.gov.it/1610546390-midbook2025.pdf .

    (94)

    Economic Study, p. 96.

    (95)

    National Artificial Intelligence Strategy of the Czech Republic, 2019: https://www.mpo.cz/assets/en/guidepost/for-the-media/press-releases/2019/5/NAIS_eng_web.pdf ; AI Watch, ‘National strategies on Artificial Intelligence – A European perspective’, 2021 edition – a JRC-OECD report: https://op.europa.eu/en/publication-detail/-/publication/619fd0b5-d3ca-11eb-895a-01aa75ed71a1 , p.  41

    (96)

     See Polityka Rozwoju Sztucznej Inteligencji w Polsce na lata 2019-2027 (Policy for the Development of Artificial Intelligence in Poland for 2019-2027) ( www.gov.pl/attachment/0aa51cd5-b934-4bcb-8660-bfecb20ea2a9 ), pp. 102-103. The policy document emphasised that the provisions of private law on liability for damages are not adapted to the challenges posed by AI, and pointed out that the emerging problems could initially be addressed on a small scale through temporary solutions, while entirely new rules of civil liability for algorithms should be developed in the long term. The document emphasised that it would be optimal to establish these rules by way of international consensus.

    (97)

    AI Portugal 2030: https://www.incode2030.gov.pt/sites/default/files/julho_incode_brochura.pdf ; AI Watch, op. cit., p. 113.

    (98)

    See 2.4. above.

    (99)

    Resolution of 16 February 2017 on Civil Law Rules on Robotics (2015/2103(INL)); legislative own-initiative Resolution on a civil liability regime for AI (op. cit.)

    (100)

    Economic Study, p. 114

    (101)

    Ibid.

    (102)

    Ibid., p. 117.

    (103)

    Ibid., p. 122.

    (104)

    See chapter 7 of the Economic Study, and section 6 for details.

    (105)

    See Economic Study, pp. 196 et seq.

    (106)

    For an overview of economic effects of harmonisation measures, see Economic Study, pp. 202 et seq.

    (107)

      Coordinated Plan on Artificial Intelligence 2021 Review, p. 2.

    (108)

    See summary of the public consultation, p. 5 ( https://www.europarl.europa.eu/cmsdata/130181/public-consultation-robotics-summary-report.pdf ).

    (109)

    See footnote 50.

    (110)

    AI White Paper, p. 15.

    (111)

    Given the general nature of these liability rules (they can apply to any ‘wrongdoer’), a broad range of companies can fall under the scope, e.g. companies providing labelled training data, companies involved in the development of AI components, companies using AI to provide services, etc.

    (112)

    Cf. ‘Notes from the AI frontier – Tackling Europe’s gap in digital and AI‘, McKinsey Global Institute Discussion Paper, February 2019, p. 47; Economic Study, p. 68. The representative IPSOS survey also came to the conclusion that reducing uncertainty can be highly beneficial for enterprises, as enterprises find liability for potential damages to be a major external challenge to AI adoption, see IPSOS, op. cit., p. 6.

    (113)

    Economic Study, pp. 97 and 117.

    (114)

    Ibid., p. 142.

    (115)

    The link between effective liability rules for AI and consumer trust was confirmed specifically by the behavioural study, see Behavioural Economics Study, op. cit.

    (116)

     For details about how the present initiative interacts with other relevant legislation and policies, see Annex 6.

    (117)

    See IA on the AI Act, SWD(2021) 84 final, p. 88: “Effective liability rules will […] provide an additional incentive to comply with the due diligence obligations laid down in the AI horizontal initiative, thus reinforcing the effectiveness and intended benefits of the proposed initiative.”

    (118)

    This does not imply that the AI liability initiative would take precedence (e.g. based on the lex specialis rule) over the PLD. The respective scopes would be carefully delineated to avoid any conflicts or inconsistencies. As the AI-specific alleviations of the burden of proof are designed to apply in the context of national fault-based liability rules, claims falling under the scope of the PLD would not be affected by these alleviations.

    (119)

    See Annex 7 for details of the links with the AI Act.

    (120)

    This time horizon takes into account the usual length of the procedure for proposing, negotiating, adopting and implementing the policy options as well as the future-oriented nature of this initiative, which is designed to provide the right conditions for the roll-out of innovative AI-enabled products and services.

    (121)

    Namely through access to technical information, by expressly allowing the presumption of defectiveness under certain circumstances, and by clarifying the ‘development risk defence’ to ensure that producers remain liable for undiscoverable defects in products designed with limited predictability.

    (122)

    Due to the interconnected nature of the problems of legal uncertainty and legal fragmentation, those problems are considered together for the purposes of the baseline scenario.

    (123)

    Cf. Economic Study, p. 112. For explanations about how the relevant shares of the AI market were determined, see point 2.6.(a) above.

    (124)

     For example, in the area of mobile robotics, a risk of physical harm was taken into account; in the area of recruitment services, a risk of damage caused by discrimination was assumed; in the health sector, a risk of physical harm exists, etc.

    (125)

    Behavioural Economics Study. pp. 33, 85.

    (126)

    Ibid, pp. 24, 28, 32, 33, 35.

    (127)

    Namely to promote the roll-out of trustworthy and safe AI in Europe, cf. the impact assessment accompanying the AI Act proposal, pp. 13, 24, 25, 29 and 33.

    (128)

    For explanations on how the policy options take into account the EP’s resolution on a civil liability regime for AI (2020/2014(INL), see point 4 of Annex 4.

    (129)

    Without prejudice to the possibility of disclosure of other information pursuant to national law or the PLD.

    (130)

    Similar approaches have been successfully implemented at EU level e.g. in the ‘Damages Directive’ (2014/104/EU) facilitating the private enforcement of EU competition law, and the ‘Enforcement Directive’ (2004/48/EC) enabling the effective civil enforcement of harmonised intellectual property rights.

    (131)

    This measure would harmonise an approach known from some national tort laws, which was recommended by the Expert Group.

    (132)

    These examples of potential causes of the damage are mentioned to illustrate the victim’s difficulty in proving what triggered the harmful output of the relevant AI system. The company using the AI system would not be liable for flawed training / testing data or an external attack (unless the latter was enabled by a failure by that company to take appropriate cybersecurity measures).

    (133)

    While this measure is inspired by similar tools available to national courts in many MS, the respective national rules and approaches currently diverge significantly and their application in AI-related cases is highly uncertain (cf. Comparative Law Study, pp. 26-29 and 32-37).

    (134)

    See 2.5. and 2.6. for explanations on the link between effective liability rules for AI and consumer trust and willingness to take up AI-enabled products and services, as confirmed in particular by the behavioural study.

    (135)

    Cf. Expert Group Report, pp. 39 and 40.

    (136)

    The fact that someone benefits from the use of a certain technology is a common criterion under many existing national strict liability regimes (cf. Expert Group Report, pp. 35 and 39).

    (137)

    Cf. Comparative Law Study, p. 61.

    (138)

    Expert Group Report, p. 21

    (139)

    Cf. Economic Study, pp. 170 et seq.

    (140)

    Ibid., pp. 183 et seq

    (141)

    Ibid., pp. 170 et seq.

    (142)

    Ibid., pp. 197 and 198.

    (143)

    Economic Study, p. 137.

    (144)

    See Annex 9 for details.

    (145)

    For an overview of the success criteria, relevant stakeholders and respective scores of the PO with respect to effectiveness, and detailed explanations on the comparison of PO, see Annex 10, Section A.

    (146)

    For the interplay and synergies between the policy options under this impact assessment and the PLD impact assessment see section 8.2.

    (147)

    Given the close links and interactions between legal uncertainty and legal fragmentation (see 2.6. and 2.7.), the efficiency of PO is assessed together for SO 1 and 2.

    (148)

    Economic Study, p. 133.

    (149)

    Ibid.

    (150)

    Behavioural Economics Study, op. cit., pp. 24, 25 and 33.

    (151)

    Ibid., p. 47.

    (152)

    Ibid., p. 75.

    (153)

    Economic Study, p. 164.

    (154)

    Behavioural Economics Study, executive summary, p. iii.

    (155)

    Cf. Economic Study, pp. 113 et seq., 161 and 163.

    (156)

    For explanations on how liability rules may influence how products and services are designed, see e.g. Galasso/Luo, Punishing Robots - Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence, in: Agrawal/Gans/Goldfarb (eds.), The economy of artificial intelligence: an agenda, Chicago 2020, p. 493(501).

    (157)

    Cf. Economic Study, pp. 122 and 161.

    (158)

    Ibid., p. 165.

    (159)

    See below in ‘indirect economic impacts’ for details

    (160)

    Ibid., pp. 141, 142, 163, 164. The study supporting the evaluation of the PLD found that around 80% of manufacturers hold liability insurance (cf. Ernst & Young, Evaluation of Council Directive 85/374/EEC, 2018, p. 19). While insurance uptake may not be identical amongst operators/users of AI technologies subject to strict liability, it can nevertheless be deduced that voluntary insurance coverage is widespread, at least amongst businesses whose activities involve the risk of causing substantial damage.

    (161)

    This was confirmed by input received from the insurance industry during the consultation activities feeding into this IA.

    (162)

    This is indicated by the quantified estimates set out under the relevant quantification box.

    (163)

    Economic Study, p. 163.

    (164)

    Ibid., pp. 163 and 164.

    (165)

    See the proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act), COM(2022) 68 final ; cf. also Economic Study, p. 201.

    (166)

    EP, Resolution 2020/2014(INL) on a civil liability regime for artificial intelligence, recital 22.

    (167)

    Ibid., pp. 148 et seq., 164.

    (168)

    Cf. e.g. Harvard Business Review, The Case for AI Insurance, April 2020.

    (169)

    E.g. ‘Ensure AI’ by MunichRe, enabling AI providers to guarantee the performance of their AI systems, including for AI systems controlling physical products (e.g. robots). Novel technologies are being developed through statistical research (e.g. ‘distribution-free uncertainty quantification’ including ‘conformal inference’), which will enable a more standardised failure rate assessment.

    Cf. also La Playa, offering brokerage of specialist AI insurance. Experts working on AI insurance for a leading re-insurance company confirmed in bilateral contacts with the Commission services that the insurance industry expects an increasing demand for specific AI liability coverages and is developing capabilities to assess AI risks with a view to developing new products, including for the coverage of third-party losses.

    (170)

    See e.g. Insurance Europe’s reply to the public consultation: “…insurance can lessen the negative consequences of accidents involving AI by ensuring that the victim receives compensation. There are already many such insurance solutions available in the European insurance market. Protection against material damage incurred by AI generally falls within the remit of general liability insurance policies, which are sold on an all-risks basis.” Similarly, the German Insurance Association (GDV) submitted that “the voluntary insurance market works well in providing actors of all kinds with appropriate liability insurance.” Likewise, a leading European insurance company explained in a bilateral exchange with the Commission services that AI liability will likely be incorporated as an additional feature into existing insurance policies, and that the increasing integration of AI in business activities is not expected to lead to an increase of premiums on a general level.

    (171)

    This assumption is based on the following considerations:

    - In many cases, national courts already have similar tools (disclosure orders, presumptions) at their disposal under the baseline scenario, although it is highly uncertain whether and how these tools would be used in practice.

    - Under the baseline, some MS might take partly similar measures in their national legal systems to address the specific challenges of AI. However, it is uncertain how many would do so and what precise shape these measures would take. National initiatives would in all likelihood not be aligned and thus entail further legal fragmentation.

    - The increased legal certainty and reduced fragmentation delivered by PO1 will have a premium-lowering effect on insurance, which will partly offset the premium-driving effect of preventing AI-induced compensation gaps.

    (172)

    See Annexes 3, 4 and 10 (A.2.1.3.(e) and B.2.1.(a)) for further detailed information on the quantification challenges and steps undertaken to remedy the scarcity of quantified data, as well as the methodology and assumptions underlying the quantified estimates.

    (173)

    Insurance Europe, European Insurance in Figures, 2019 data, p. 48, https://www.insuranceeurope.eu/publications/689/european-insurance-in-figures-2019-data/download/EIF+2021.pdf .

    (174)

    See e.g. Insurance Europe’s reply to the public consultation: “…insurance can lessen the negative consequences of accidents involving AI by ensuring that the victim receives compensation. There are already many such insurance solutions available in the European insurance market. Protection against material damage incurred by AI generally falls within the remit of general liability insurance policies, which are sold on an all-risks basis.” Similarly, the German Insurance Association (GDV) submitted that “the voluntary insurance market works well in providing actors of all kinds with appropriate liability insurance.” Likewise, a leading insurance company explained in a bilateral exchange with the Commission services that AI liability will likely be incorporated as an additional feature into existing insurance policies, and that the increasing integration of AI in business activities is not expected to lead to an increase of premiums on a general level.

    (175)

    For detailed explanations regarding the methodology and assumptions underlying this estimate, see Annex 10, A.2.1.3.(d) and B.2.1.(b).

    (176)

    For detailed explanations regarding these cost estimates, see Annex 10 and the Economic Study, pp. 48 et seq.

    (177)

    AI liability risks are likely to be covered by existing all-risk policies in many cases, cf. fn. 157.

    (178)

    Economic Study, p. 196 and 197.

    (179)

    Ibid., pp. 196-198.

    (180)

    For detailed explanations on the methodology and assumptions underlying this estimate, see Annex 10, A.2.1.3.(e).

    (181)

    Economic Study, pp. 195 et seq. and the explanations on the evolution of policy options in section 6.4.

    (182)

    Sum of vacuum cleaner consumers‘ surplus, vacuum cleaner sellers‘ profits and compensation received by third-party victims.

    (183)

    See Annex 11 for details on the methodology and results of this quantification effort.

    (184)

    EPRS, European added value assessment, September 2020.

    (185)

    Ibid.

    (186)

    For detailed explanations on the methodology and assumptions underlying this estimate, see Annex 10, A.2.1.3.(d) and B.2.1.(g).

    (187)

    Economic Study, p. 133 and 134.

    (188)

    Cf. e.g. Caruso, The Missing View of the Cathedral: The Private Law Paradigm of European Legal Integration, European Law Journal, Vol. 3, No. 1 March 1997, p. 3.

    (189)

    Economic Study, pp. 131, 132, 158-160 and 195.

    (190)

    Ibid., p. 196.

    (191)

    Given the close links and interactions between legal uncertainty and legal fragmentation (see 2.6. and 2.7.), the efficiency of PO is assessed together for SO1 and 2.

    (192)

    Behavioural Economics Study, pp. 67 and 68.

    (193)

    Ibid, p. 47.

    (194)

    Economic Study, p. 142.

    (195)

    Behavioural Economics Study, op. cit., executive summary, p. iii.

    (196)

    Cf. Economic Study, pp. 113 et seq., 161 and 163, and Executive Summary, p. ii.

    (197)

    Ibid, pp. 122 and 161.

    (198)

    Ibid, p. 164.

    (199)

    Ibid., p. 163.

    (200)

    Ibid., p. 141.

    (201)

    E.g. ‘Ensure AI’ by MunichRe, enabling AI providers to guarantee the performance of their AI systems. Cf. also La Playa, offering brokerage of specialist AI insurance.

    (202)

    Cf. e.g. Harvard Business Review, The Case for AI Insurance, April 2020.

    (203)

    Cf. Economic Study, p. 153.

    (204)

    EP, Resolution 2020/2014(INL) on a civil liability regime for artificial intelligence, recital 22.

    (205)

    For details regarding methodology and assumptions underlying these estimates, see Annex 10, A.2.1.3.(e) and B.2.2.(a).

    (206)

    This liability risk can be substantial, in particular in the case of robotics start-ups. Cf. e.g. Schmelzer, Why Are Robotics Companies Dying?, 2018; Fresh Consulting, Why Robotics Companies Fail , 2021.

    (207)

    For further detailed explanations regarding the methodology and assumptions underlying this estimate, see Annex 10, A.2.1.3.(d) and B.2.2.(b). This estimate should not be misconstrued as a quantification of the AI-specific difficulty of meeting the burden of proof, because it does not take into account cases in which liability claims would not be pursued at all based on current liability rules (because the victim either cannot identify the liable party or considers the prospect of a successful claim insufficient to justify legal action). The preferred policy option will help victims also in the latter cases, by overcoming the compensation gaps induced by the specific characteristics of AI.

    (208)

    Economic Study, pp. 142, 154 and 156.

    (209)

    For details regarding methodology and assumptions underlying these estimates, see Annex 10, A.2.1.3.(d) and B.2.2.(g).

    (210)

    Economic Study, p. 61.

    (211)

    Cf. section 1.1. above and see Annex 3 for further details.

    (212)

    The detailed assessment of POs (cf. Annex 10) has shown that the sub-option of implementing the policy measures through a non-binding instrument (recommendation) consistently scored lower across the IA criteria. This sub-option was therefore discarded for the purposes of the comparison. The scores in the table relate to the option to implement the respective policy measures through a binding legislative instrument (Directive).

    (213)

    See Annex 10 for detailed explanations on the assessment and comparison of the POs.

    (214)

    Cf. Section B of Annex II to the AI Act.

    (215)

    For details on how the impacts will be monitored and evaluated for all the relevant categories, see Annex 12.

    (216)

      Register of Commission expert groups and other similar entities (europa.eu)  

    (217)

    Liability for artificial intelligence and other emerging digital technologies, November 2019, https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en  

    (218)

    Report from the Commission to the European Parliament, the Council and the Economic and Social Committee on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, 19.2.2020, COM(2020) 64 final.

    (219)

    Karner/Koch/Geistfeld, Comparative Law Study on Civil Liability for Artificial Intelligence, 2021, https://op.europa.eu/en/publication-detail/-/publication/8a32ccc3-0f83-11ec-9151-01aa75ed71a1/language-en

    (220)

    Deloitte, Study to Support the Commission’s Impact Assessment on Liability for Artificial Intelligence, 2021 (‘Economic Study’).

    (221)

    Behavioural Economics Study, op.cit.

    (222)

    EPRS, Civil liability regime for artificial intelligence, September 2020, Author: Tatjana Evas, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/654178/EPRS_STU(2020)654178_EN.pdf  

    (223)

    The Czech Ministry of Trade and Industry, the Austrian Ministry for Social Affairs, the Bulgarian Ministry for the Economy and the Ministries of Justice of Estonia and Finland participated in the survey.

    (224)

    The answer options were multiple choice; respondents could express support for both full and minimum harmonisation.

    (225)

    Average score of 5,87 on a scale of 1 (most preferred) to 8 (least preferred), with the second least popular option having an average score of 4,44, and the best average being 4,03.

    (226)

    Only 2 responding citizens chose ‘No EU action’ as their preferred option, and this option scored an average 7,56 on the scale from 1 (most preferred) to 8 (least preferred).

    (227)

    Deloitte, Study to Support the Commission’s Impact Assessment on Liability for Artificial Intelligence, 2021 (‘Economic Study’).

    (228)

    Kantar/Behavia/CEPS, Behavioural study on the link between challenges of Artificial Intelligence for Member States’ civil liability rules and consumer attitudes towards AI-enabled products and services, December 2021 (‘Behavioural Economics Study’).

    (229)

    Economic Study, p. 196 and 197.

    (230)

    Ibid., pp. 196-198.

    (231)

    See the overview of benefits below, p. 32.

    (232)

    Vinuesa, R. et al., ‘The role of artificial intelligence in achieving the Sustainable Development Goals’, Nature communications 11(1), 2020, pp. 1-10.

    (233)

    OECD, Using artificial intelligence to help combat COVID-19, 2020.

    (234)

    Ibid., p.2.

    (235)

    Ibid., p. 4.

    (236)

    Due to the future-oriented nature of this initiative, aimed at creating the right conditions for the rollout of AI-enabled products and services, the technologies to which this initiative would apply are in most cases not yet on the market. There is hence no statistical data available on damage caused by such products and services, nor on the success rate of liability claims brought on the basis of current liability rules. The qualitative assessment of the expected compensation gaps (under the current liability rules = baseline scenario) and the extent to which the policy options would address those gaps are based on expert analysis, stakeholder feedback and desk research on the tools used in national and EU law to overcome information asymmetries and difficulties of proof.

    (237)

    This quantification is based on estimated costs of technical expertise to be advanced by victims to claim compensation under current liability rules. In the framework of the supporting economic study (Deloitte), these costs were estimated, on the one hand, for cases where AI systems are involved in causing damage, and on the other hand, for cases not involving AI. The difference between these estimates was used to approximate the cost of meeting the burden of proof due to the specific characteristics of certain AI systems. On that basis, assumptions were made regarding the effect each policy option would have on this cost factor. For detailed explanations regarding the methodology and assumptions made, see Annex 10, A.2.1.3.(d) and B.1.1.(b).

    (238)

    These values are obtained by multiplying the estimated shares of the AI market affected by legal uncertainty and fragmentation regarding civil liability in 2025 under the baseline scenario (low and high scenarios assumed by the economic study supporting this IA) by the estimated impact of the preferred option (+5%). This percentage was determined conservatively, taking into account the estimated impact generated by a combination of measures to ease the burden of proof with a harmonisation of strict liability limited to certain AI applications (cf. Economic Study, pp. 195 et seq.). In the supporting study, policy options including these elements were estimated to increase the production value of the affected cross-border trade by 5-7 %, for the six use-cases analysed specifically by that study (AI-enabled autonomous vehicles, autonomous drones/delivery robots, AI-enabled road traffic management systems, AI-enabled warehouse robots, AI-enabled medical-diagnosis services, AI-enabled automated lawnmowers/vacuum cleaners). In order to quantify the overall economic benefits generated by the preferred option (not limited to the six use-cases), a conservative extrapolation of this estimate was applied to the relevant market shares of all sectors affected by legal uncertainty and fragmentation, taking into account that the preferred PO does not include the strict liability element assumed in the supporting study with respect to a small number of specific AI applications.

    The Joint Research Centre (JRC) has provided complementary micro-economic quantification of the impacts of the preferred policy option, based on the use-case example of robotic vacuum cleaners. This analysis reaches the conclusion that the envisaged measures to ease the victim’s burden of proof would generate an increase in consumer welfare of EUR 11.5-19.12 million and in total welfare of EUR 30.11-53.74 million for this product category in the EU-27. See Annex 11 for the JRC report with detailed explanations and results.

    (239)

    See footnotes 25 and 26.

    (240)

    As explained in the main part of the IA, only the targeted alleviation of the burden of proof regarding the ‘inner workings’ of an AI system could apply vis-à-vis citizens as potentially liable parties. The other measures forming part of the preferred policy option (presumption of causality in the case of non-compliance with relevant requirements of the AI Act / harmonised rules on the disclosure of information on AI systems to be documented/logged pursuant to the AI Act) are designed to apply only to addressees of obligations under the AI Act, that is to say businesses.

    (241)

    This quantified estimate is based on reasoned assumptions regarding the extent to which the liable parties might have to advance the costs of technical expertise that would otherwise be borne by victims under the baseline scenario. This extent would vary widely in practice, as it depends on the liable party’s knowledge and information on the AI system. Moreover, it is important to underline that this cost increase would apply only in cases where national courts consider it necessary to establish how or why an AI system arrived at a certain output. As it is not possible to estimate in how many instances this might be the case, the costs are estimated only per individual case in which the targeted alleviation of the burden of proof would apply. The estimate also takes into account that for businesses falling under the AI Act, the preferred PO can trigger, aside from the targeted alleviation of the burden of proof, the disclosure (subject to appropriate confidentiality safeguards) of information on the relevant AI system as well as a presumption of causality in the case of non-compliance with the AI Act. For details regarding the methodology and assumptions underpinning these estimates, see Annex 10, A.2.1.3.(d) and B.2.1.(g).

    (242)

    Liability for artificial intelligence and other emerging digital technologies, November 2019, https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en  

    (243)

    Karner/Koch/Geistfeld, Comparative Law Study on Civil Liability for Artificial Intelligence, 2021, https://op.europa.eu/en/publication-detail/-/publication/8a32ccc3-0f83-11ec-9151-01aa75ed71a1/language-en  

    (244)

    Cf. Comparative Law Study, Executive Summary.

    (245)

    EPRS, Civil liability regime for artificial intelligence, September 2020, Author: Tatjana Evas, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/654178/EPRS_STU(2020)654178_EN.pdf  

    (246)

    Behavioural Economics Study, op. cit.

    (247)

    The study focused on national liability rules and regulatory alternatives for adapting these rules. Accordingly, none of the posited liability regimes corresponded to the PLD. In particular, strict liability as posited for the purposes of this study does not require a defect and was not linked specifically to the producer.

    (248)

    Cf. Behavioural Economics Study, op. cit, Executive Summary.

    (249)

    Deloitte, Study to Support the Commission’s Impact Assessment on Liability for Artificial Intelligence, 2021 (‘economic study’).

    (250)

    The share was determined at an overall EU-level and did not differentiate within sectors across EU Member States. This is due to a lack of sufficient data at the Member State level and because the study focused on the impact of uncertainty and fragmentation at the EU level. Existing country-level differences in AI adoption were, however, mentioned and analysed where necessary.

    (251)

    Economic Study, Chapter 8 Conclusions.

    (252)

    SWD(2018) 157 final, 7.5.2018.

    (253)

    For further explanations regarding the scarcity of quantified data and measures implemented to overcome this challenge, see Annex 10, point 1.2.

    (254)

    European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)).

    (255)

    For the follow-up to both of these requests, see Annex 9 below.

    (256)

    Certain AI systems may include only some of the relevant characteristics and may include mitigating mechanisms to reduce negative effects of some of these characteristics. As a rule, the more specific characteristics a given AI system has, the higher the probability that it becomes a ‘black box’. For explanations about how the AI Act will impact the relevant characteristics of ‘high-risk’ AI systems, see Annex 7.

    (257)

    Report from the Expert Group on Liability and New Technologies – New Technologies Formation, European Commission, 2019, p. 33.

    (258)

    Comparative Law Study, p. 48. In this respect, see also the impact assessment accompanying the proposed AI Act, op. cit., p. 28, concluding that opacity (lack of transparency) and complexity make it difficult to prove possible breaches of laws.

    (259)

    Comparative law study, https://op.europa.eu/en/publication-detail/-/publication/8a32ccc3-0f83-11ec-9151-01aa75ed71a1/language-en  

    (260)

    Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, pp. 29-33.

    (261)

    Non-material damage can be defined as “losses which do not relate to a person’s assets, wealth or income and, as such, cannot be quantified in an objective manner by reference to a market price or value” (CJEU C‑371/12).

    (262)

    Pursuant to Article 5 of the Product Liability Directive, where two or more persons are liable for the same damage, they shall be liable jointly and severally without prejudice to the provisions of national law concerning the rights of contribution or recourse.

    (263)

     Regulation (EC) No 864/2007 of the European Parliament and of the Council of 11 July 2007 on the law applicable to non-contractual obligations (Rome II), OJ L 199, 31.7.2007, p. 40–49.

    (264)

    Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce'), OJ L 178, 17.7.2000, p. 1.

    (265)

    Proposal COM/2020/825 final for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC.

    (266)

    Directive 2004/35/CE of the European Parliament and of the Council of 21 April 2004 on environmental liability with regard to the prevention and remedying of environmental damage (OJ L 143, 30.4.2004, p. 56).

    (267)

    Directive 2014/104/EU of the European Parliament and of the Council of 26 November 2014 on certain rules governing actions for damages under national law for infringements of the competition law provisions of the Member States and of the European Union (OJ L 349, 5.12.2014, p. 1).

    (268)

    Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights (OJ L 157, 30.4.2004, p. 45).

    (269)

    Directive (EU) 2020/1828 of the European Parliament and of the Council of 25 November 2020 on representative actions for the protection of the collective interests of consumers and repealing Directive 2009/22/EC (OJ L 409, 4.12.2020, p. 1).

    (270)

    Directive 2009/103/EC of the European Parliament and of the Council of 16 September 2009 relating to insurance against civil liability in respect of the use of motor vehicles, and the enforcement of the obligation to insure against such liability (OJ L 263, 7.10.2009, p. 11).

    (271)

    Public consultation on REFIT review of Directive 2009/103/EC on motor insurance, https://ec.europa.eu/info/consultations/finance-2017-motor-insurance_en .

    (272)

    According to the evaluation of Regulation (EU) No 181/2011 concerning the rights of passengers in bus and coach transport published on 10 December 2021, operators attempt to limit the compensation below the minimum compensation of EUR 1 200 per item of luggage to which passengers are entitled under the Regulation (Article 7(2)(b)). Serious accidents remain relatively rare in bus and coach passenger transport and, therefore, liability for injury or death is not an issue thoroughly commented on by stakeholders. However, the objective of harmonising these different rules in the interest of greater clarity for passengers remains relevant as passengers’ awareness of their rights and willingness to exercise them grow, cf. evaluation of Regulation (EU) No 181/2011 on the rights of passengers travelling by bus and coach, https://transport.ec.europa.eu/transport-themes/passenger-rights/passenger-rights-studies_en

    (273)

    ‘Carrier’ is defined as ‘any person who in the course of trade or business, but acting other than as an operator of a taxi service […], undertakes […] to carry one or more persons and, where appropriate, their luggage, whether or not he performs the carriage himself’. The carrier is responsible also for the acts of any persons of whose services she makes use for the performance of her obligations under the contract of carriage, as if such acts were her own.

    (274)

     Regulation (EC) No 785/2004 of the European Parliament and of the Council of 21 April 2004 on insurance requirements for air carriers and aircraft operators, OJ L 138, 30.4.2004, p. 1, as last amended by Regulation (EU) 2019/1243 of the European Parliament and of the Council of 20 June 2019 adapting a number of legal acts providing for the use of the regulatory procedure with scrutiny to Articles 290 and 291 of the Treaty on the Functioning of the European Union, OJ L 198, 25.7.2019, p. 241.

    (275)

     “Air carrier” is defined as “an air transport undertaking with a valid operating license” (Article 3 lit. a).

    (276)

     Unless they are “air carriers”, someone “who has continual effective disposal of the use or operation of the aircraft” is its “operator”, with a rebuttable presumption in favour of the person in whose name the aircraft is registered (Article 3 lit. c).

    (277)

    SDRs (Special Drawing Rights) are valued on the basis of a basket of currencies established by the International Monetary Fund (IMF).

    (278)

    For details on the Montreal Convention, see subsequent point.

    (279)

    ‘U-space airspace’ means a UAS (unmanned aircraft system) geographical zone, where UAS operations are only allowed to take place with the support of U-space services; ‘U-space service’ means a service relying on digital services and automation of functions designed to support safe, secure and efficient access to U-space airspace for a large number of UAS

    (280)

    Cf. Masutti, Anna / Tomasello, Filippo. International Regulation of Non-Military Drones, 2018, p. 187.

    (281)

    See subsequent point for further explanations on the Athens Convention.

    (282)

    International Convention for the Safety of Life at Sea of 1 November 1974 (SOLAS 74); International Convention for the Prevention of Pollution from Ships of 2 November 1973 (MARPOL).

    (283)

    While recital 7 states that ship inspection and survey organisations should be subject to global joint and several liability, there is no corresponding legal provision. The question of joint and several liability is in any event distinct from the question of substantive liability conditions and the burden of proof.

    (284)

    DE, AT, BE, FR, CY, HU, IE, MT, NL, RO, IT, DK, PT, EL, SE, PL, FI, EE, ES, LV, LU, HR and SI.

    (285)

    Art.7 of the COTIF accession agreement provides that “The scope of the competence of the Union shall be indicated in general terms in a written declaration made by the Union at the time of the conclusion of this Agreement. That declaration may be modified as appropriate by notification from the Union to OTIF. It shall not replace or in any way limit the matters that may be covered by the notifications of Union competence to be made prior to OTIF decision-making by means of formal voting or otherwise”.

    (286)

    Commission SWD(2021) 84 final, Impact assessment accompanying the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), p. 88 ( https://eur-lex.europa.eu/legal-content/SV/TXT/?uri=CELEX:52021SC0084 ).

    (287)

    Deloitte, op. cit., p. 121.

    (288)

     See Karner/Koch/Geistfeld, Comparative Law Study on Civil Liability for Artificial Intelligence, November 2020, p. 39 et seq. ( https://op.europa.eu/publication/manifestation_identifier/PUB_DS0921157ENC ).

    (289)

    The AI Act defines the ‘user’ as ‘any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity’ (Article 3(4)).

    (290)

    Member States’ rules on right to compensation for damage caused by discrimination or unequal treatment are closely linked with the existing EU acquis on these rights, see 3.4. below for details.

    Furthermore, under Article 82 GDPR, any person who has suffered material or non-material damage as a result of an infringement of the General Data Protection Regulation has the right to receive compensation from the controller or processor for the damage suffered. This right is linked to the fundamental right to the protection of personal data (Article 8 of the Charter).

    (291)

    See 3.1. below for further details on the AI Act.

    (292)

    Other manipulative or exploitative practices affecting adults that might be facilitated by AI systems could be covered by existing data protection, consumer protection and digital services legislation, which guarantees that natural persons are properly informed and have a free choice not to be subject to profiling or other practices that might affect their behaviour.

    (293)

    Annex III namely lists the following:

    - AI systems intended for the remote biometric identification of natural persons;

    - AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity;

    - AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education;

    - AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships;

    - AI systems used in the area of access to and enjoyment of essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living (e.g. AI systems intended to be used to evaluate the creditworthiness of natural persons);

    - AI systems intended for certain uses by law enforcement authorities;

    - AI systems intended for certain uses by public authorities in the area of migration, asylum and border control management;

    - AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

    (294)

    Proposal COM/2020/825 final for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC.

    (295)

    Cf. in this respect footnote 156 of Advocate General Øe’s opinion in case C-682/18, Frank Peterson v. Google LLC, YouTube LLC, e.a., ECLI:EU:C:2020:586: "[I]t is possible that, where a service provider controls an algorithm, that service provider may be held liable for the damage caused by the functioning of that algorithm in itself. [...] I repeat that the exemption under Article 14(1) of Directive 2000/31 covers only liability for stored information."

    (296)

    Cf. e.g. Judgment of 8 May 2019, Villar Láiz, C-274/18, ECLI:EU:C:2019:828.

    (297)

    Judgment of 19 April 2012, Galina Meister v Speech Design Carrier Systems GmbH, C-415/10, EU:C:2012:217, [44], [47].

    (298)

    European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)).

    (299)

    Point 19 of the resolution.

    (300)

    Annex B, recital 16.

    (301)

    Cf. e.g. the Opinion of Advocate General Wahl in case C-371/12, Petillo v Unipol, ECLI:EU:C:2013:652, para. 38, with reference to Horton Rogers, W.V. (ed.), Damages for Non-Pecuniary Loss in a Comparative Perspective, European Centre of Tort and Insurance Law, Springer Verlag, Wien New York: 2001, p. 246.

    (302)

    Von Bar/Drobnig, Study on Property Law and Non-contractual Liability Law as they relate to Contract Law, p. 134.

    (303)

    Comparative Law Study, op. cit., pp. 9 and 105.

    (304)

    Ibid., p. 108.

    (305)

    Von Bar et al. (eds), Principles, Definitions and Model Rules of European Private Law – Draft Common Frame of Reference (DCFR), pp. 3052-3059.

    (306)

    Martin-Casals in: The Borderlines of Tort Law: Interactions with Contract Law, London 2019, p. 731 (733).

    (307)

    Ibid., p. 731.

    (308)

    Von Bar/Drobnig, Study on Property Law and Non-contractual Liability Law as they relate to Contract Law, p. 109.

    (309)

    Martin-Casals in: The Borderlines of Tort Law: Interactions with Contract Law, London 2019, p. 732; Von Bar/Drobnig, Study on Property Law and Non-contractual Liability Law as they relate to Contract Law, pp. 109 and 112.

    (310)

    Martin-Casals in: The Borderlines of Tort Law: Interactions with Contract Law, London 2019, p. 732.

    (311)

    Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights (OJ L 157, 30.4.2004, p. 45).

    (312)

    Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).

    (313)

    Stepanov, Introducing a property right over data in the EU: the data producer’s right – an evaluation, International Review of Law, Computers & Technology 2020, p. 65 et seqq.

    (314)

    Von Bar et al., DCFR, pp. 237 et seq., pointing to Greek law as the only exceptional case where civil law does not provide for compensation, but only penal sanctions (ranging from administrative fines to imprisonment) apply.

    (315)

    European network of legal experts in gender equality and non-discrimination, https://www.equalitylaw.eu/publications/comparative-analyses .

    (316)

    See table at page 19 of the report by K. Wladasch, ‘The sanctions regime in discrimination cases and its effects’, available at: https://migrate.equineteurope.org/wp-content/uploads/2015/12/sanctions_regime_discrimination_-_final_for_web.pdf

    (317)

    Von Bar et al., DCFR, pp. 3059 et seq.

    (318)

    Ibid., pp. 3061-3066.

    (319)

    Comparative Law Study, op. cit., pp. 22 and 44.

    (320)

    Martin-Casals in: The Borderlines of Tort Law: Interactions with Contract Law, London 2019, pp. 745 et seq.

    (321)

    Ibid., p. 746.

    (322)

    Ibid., p. 747 et seq.

    (323)

    Ibid., p. 749 et seq.

    (324)

    Ibid., pp. 750-752.

    (325)

    Von Bar/Drobnig, Study on Property Law and Non-contractual Liability Law as they relate to Contract Law, p. 205.

    (326)

    Martin-Casals in: The Borderlines of Tort Law: Interactions with Contract Law, London 2019, pp. 752-754.

    (327)

    The Expert Group on Liability and New Technologies concluded: “[G]enerally speaking, AI and other emerging digital technologies do not call into question the existing range of compensable harm per se.”, Expert Group Report, p. 19.

    (328)

    The Comparative Law Study commissioned for this impact assessment emphasised that “while claims for compensation invariably require that the victim incurred some harm, the range of compensable losses and the recognized heads of damage will not be different in AI cases than in any other tort scenario.”, cf. Comparative Law Study, pp. 9 and 105.

    (329)

    Von Bar/Drobnig, Study on Property Law and Non-contractual Liability Law as they relate to Contract Law, p. 108.

    (330)

    Para. 22.

    (331)

    Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.

    (332)

    Von Bar et al., Principles, Definitions and Model Rules of European Private Law, p. 3551 et seqq.; Von Bar/Drobnig, Study on Property Law and Non-contractual Liability Law as they relate to Contract Law, p. 168.

    (333)

    Von Bar et al., Principles, Definitions and Model Rules of European Private Law, p. 3551; Martin-Casals, Comparative Report, in: The Borderlines of Tort Law: Interactions with Contract Law, London 2019, p. 784; Von Bar/Drobnig, Study on Property Law and Non-contractual Liability Law as they relate to Contract Law, p. 166 et seq.

    (334)

    Martin-Casals, Comparative Report, in: The Borderlines of Tort Law: Interactions with Contract Law, London 2019, p. 782 et seq.

    (335)

    Ibid., p. 783.

    (336)

    Ibid., p. 783.

    (337)

    Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts, Annex, No. 1 lit. a.

    (338)

    Von Bar/Drobnig, Study on Property Law and Non-contractual Liability Law as they relate to Contract Law, p. 167.

    (339)

    Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts, Annex, No. 1 lit. a.

    (340)

    Martin-Casals, Comparative Report, in: The Borderlines of Tort Law: Interactions with Contract Law, London 2019, p. 782 et seq.

    (341)

    Von Bar/Drobnig, Study on Property Law and Non-contractual Liability Law as they relate to Contract Law, p. 169.

    (342)

    AI-enabled autonomous vehicles, autonomous drones/delivery robots, AI-enabled road traffic management systems, AI-enabled warehouse robots, AI-enabled medical-diagnosis services, AI-enabled automated lawnmowers/vacuum cleaners.

    (343)

    For details see Economic Study, pp. 48 et seq. and Annex B.

    (344)

    Legal experts provided more granular use-case specific estimates for the damage scenarios involving ‘traditional’ use cases, whereas only global estimates could be provided for cases involving AI.

    (345)

    In order to enable this comparison, an average of all the data on non-AI cases per country was calculated. In the following table, the first value for each country refers to the difference between the minimum fees (non-AI vs. AI) and the second value to the difference between the maximum fees.

    (346)

    For detailed explanations given by the experts for their estimates, see Economic Study, pp. 52 et seq.

    (347)

    Where high and low estimates were submitted for a certain legal system, the average of the difference between the respective high and low estimates was calculated.
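    Purely as an illustration of the averaging described in footnotes (345) and (347), the following Python sketch works through the calculation with hypothetical figures; the country names, fee values and the exact averaging rule are assumptions for illustration only and are not taken from the Economic Study.

    # Hypothetical expert estimates of litigation fees (EUR):
    # for non-AI cases, several use-case-specific (min, max) estimates per country;
    # for AI cases, a single global (min, max) estimate per country.
    non_ai_estimates = {
        "Country A": [(1000, 3000), (1200, 3400), (900, 2800)],
        "Country B": [(800, 2500), (1100, 2900)],
    }
    ai_estimates = {
        "Country A": (1500, 4200),
        "Country B": (1300, 3600),
    }

    for country, cases in non_ai_estimates.items():
        # Footnote (345): average all non-AI estimates per country ...
        avg_min = sum(low for low, _ in cases) / len(cases)
        avg_max = sum(high for _, high in cases) / len(cases)
        ai_min, ai_max = ai_estimates[country]
        # ... and compare them with the AI estimate: first value = difference
        # between minimum fees, second value = difference between maximum fees.
        diff_min = ai_min - avg_min
        diff_max = ai_max - avg_max
        # Footnote (347): where both high and low estimates are available,
        # the average of the two differences is reported.
        avg_diff = (diff_min + diff_max) / 2
        print(country, round(diff_min), round(diff_max), round(avg_diff))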

    (348)

    The policy options are not expected to entail additional litigation costs for private persons as potentially liable parties. These stakeholders are likely to defend themselves against liability claims using the same type of arguments and evidence as under the existing burden of proof rules. For example, they might seek to avoid liability by demonstrating that they acted diligently and in accordance with the instructions of use accompanying an AI-enabled product. Contrary to potentially liable businesses, which may have special knowledge and be subject to certain requirements regarding the functioning and ‘inner workings’ of an AI system (in particular under the AI Act), private persons would not have to base their defence on an analysis of the functioning of such a system. The envisaged alleviation of victims’ burden of proof regarding the ‘inner workings’ of AI systems is therefore not expected to prompt potentially liable private persons to commission technical expertise.

    (349)

    Sum of vacuum cleaner consumers’ surplus, vacuum cleaner sellers’ profits and the compensation received by third-party victims.

    (350)

    See Annex 11 for details on the methodology and results of this quantification effort. In terms of methodological limitations, it is important to acknowledge that this quantification approach focused on civil liability rules outside the scope of the PLD, in line with the scope of the AI liability initiative.

    (351)

    Insurance Europe, European Insurance in Figures, 2019 data, p. 48, https://www.insuranceeurope.eu/publications/689/european-insurance-in-figures-2019-data/download/EIF+2021.pdf . The latest available data on the overall annual premiums paid for general liability insurance is from 2019 and includes the UK insurance market. Nevertheless, it provides a suitable basis for approximating the possible increase of insurance premiums due to the policy options.

    (352)

    See e.g. Insurance Europe’s reply to the public consultation: “…insurance can lessen the negative consequences of accidents involving AI by ensuring that the victim receives compensation. There are already many such insurance solutions available in the European insurance market. Protection against material damage incurred by AI generally falls within the remit of general liability insurance policies, which are sold on an all-risks basis.” Similarly, the German Insurance Association (GDV) submitted that “the voluntary insurance market works well in providing actors of all kinds with appropriate liability insurance.” Likewise, a leading insurance company explained in a bilateral exchange with the Commission services that AI liability will likely be incorporated as an additional feature into existing insurance policies, and that the increasing integration of AI in business activities is not expected to lead to an increase of premiums on a general level.

    (353)

    In this respect, see clarifications regarding the ‘cost of compensation’ below under this heading.

    (354)

    Cf. Economic Study, p. 100.

    (355)

    While the EU economy was slightly smaller in 2020 due to the exceptional economic shock caused by the Covid-19 pandemic (cf. Statista, https://www.statista.com/statistics/279447/gross-domestic-product-gdp-in-the-european-union-eu/ ), it is appropriate, for the purposes of assessing the relevant market shares, to assume a more regular development of the EU economy.

    (356)

    Cf. in this respect the supporting economic study for reasoned estimations of market shares affected by liability-related problems, Economic Study, pp. 104 et seq.

    (357)

    Ibid., p. 32.

    (358)

    See Statista, https://www.statista.com/statistics/267898/gross-domestic-product-gdp-growth-in-eu-and-euro-area/  

    (359)

    For a more detailed description of the technology involved in this use-case, see Annex 13.

    (360)

    Economic Study, pp. 155 and 156.

    (361)

    For a more detailed description of the technology involved in this use-case, see Annex 13.

    (362)

    Economic Study, p. 133.

    (363)

    Ibid.

    (364)

    For details, see Annex 13.

    (365)

    Behavioural Economics Study, op. cit., pp. 24, 25 and 33.

    (366)

    Ibid., p. 47.

    (367)

    Economic Study, p. 164.

    (368)

    Cf. in this respect Economic Study, p. 139, pointing out that an approach assigning (strict) liability to a specific actor has the downside that the role of “distant” parties in the chain may be overlooked despite their ability to affect the risk and likelihood of damage, which may cause such parties to care less about safety considerations.

    (369)

    Behavioural Economics Study, op. cit., executive summary, p. iii.

    (370)

    Economic Study, p. 142.

    (371)

    Behavioural Economics Study, op. cit., executive summary, p. iii.

    (372)

    Cf. Economic Study, pp. 113 et seq., 161 and 163.

    (373)

    For explanations on how liability rules may influence how products and services are designed, see e.g. Galasso/Luo, Punishing Robots – Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence, in: Agrawal/Gans/Goldfarb (eds.), The Economics of Artificial Intelligence: An Agenda, Chicago 2020, p. 493 (501).

    (374)

    Cf. Economic Study, pp. 122 and 161.

    (375)

    Ibid., p. 165.

    (376)

    Ibid., pp. 141, 142, 163, 164. The study supporting the evaluation of the Product Liability Directive found that around 80% of manufacturers hold liability insurance (cf. Ernst & Young, Evaluation of Council Directive 85/374/EEC, 2018, p. 19). While insurance uptake may not be identical amongst operators/users of AI technologies subject to strict liability, it can nevertheless be deduced that voluntary insurance coverage is widespread, at least amongst businesses whose activities involve the risk of causing substantial damage.

    (377)

    This was confirmed by input received from the insurance industry during the consultation activities feeding into this impact assessment.

    (378)

    Economic Study, p. 163.

    (379)

    Ibid., pp. 163 and 164.

    (380)

    Ibid., pp. 148 et seq., 164.

    (381)

    Cf. e.g. Harvard Business Review, The Case for AI Insurance, April 2020.

    (382)

    E.g. ‘Ensure AI’ by MunichRe, enabling AI providers to guarantee the performance of their AI systems, including for AI systems controlling physical products (e.g. robots). Novel technologies are being developed through statistical research (e.g. ‘distribution-free uncertainty quantification’, including ‘conformal inference’), which will enable a more standardised failure rate assessment.

    Cf. also La Playa, offering brokerage of specialist AI insurance. Experts working on AI insurance for a leading re-insurance company confirmed in bilateral contacts with the Commission services that the insurance industry expects an increasing demand for specific AI liability coverages and is developing capabilities to assess AI risks with a view to developing new products, including for the coverage of third-party losses.

    (383)

    See e.g. Insurance Europe’s reply to the public consultation: “…insurance can lessen the negative consequences of accidents involving AI by ensuring that the victim receives compensation. There are already many such insurance solutions available in the European insurance market. Protection against material damage incurred by AI generally falls within the remit of general liability insurance policies, which are sold on an all-risks basis.” A major insurance company explained in a bilateral exchange with the Commission services that AI liability will likely be incorporated as an additional feature into existing insurance policies, and that the increasing integration of AI in business activities is not expected to lead to an increase of premiums on a general level.

    (384)

    Economic Study, p. 146.

    (385)

    Such as risk testing, advanced analytics, cooperation with developers and risk modelling, cf. Economic Study, pp. 145 and 146. Experts working on AI insurance for a major re-insurance company confirmed in bilateral contacts with the Commission services that the insurance industry expects an increasing demand for specific AI liability coverages and is developing capabilities to assess AI risks with a view to developing new products, including for the coverage of third-party losses. The White Paper ‘Artificial Intelligence and Algorithmic Liability – a technology and risk engineering perspective from Zurich Insurance Group and Microsoft Corp’ (July 2021) points for instance to ‘AI model monitoring’ based on application and model telemetry, ‘acceptable use policies’ specifying rules for appropriate conduct by insured AI users, and the use of ‘algorithmic design history files’ to document the input, output, review, verification, validation, transfer and changes to the model. The same paper also underlines that technology companies and insurers can pair their strengths to mitigate algorithmic risks across industries.

    (386)

    See the policy measures envisaged in the Data Strategy Communication (COM/2020/55/final); cf. also Economic Study, p. 201.

    (387)

    EP, Resolution 2020/2014(INL) on a civil liability regime for artificial intelligence, recital 22.

    (388)

    AI liability risks are likely to be covered by existing all-risk policies in many cases, cf. fn. 161.

    (389)

    Economic Study, pp. 196 and 197.

    (390)

    Ibid., pp. 196-198.

    (391)

    EPRS, European added value assessment, September 2020, op. cit., p. 63.

    (392)

    Ibid.

    (393)

    The policy options are not expected to entail additional litigation costs for private persons as potentially liable parties. These stakeholders are likely to defend themselves against liability claims using the same type of arguments and evidence as under the existing burden of proof rules. For example, they might seek to avoid liability by demonstrating that they acted diligently and in accordance with the instructions of use accompanying an AI-enabled product. Contrary to potentially liable businesses, which may have special knowledge and be subject to certain requirements regarding the functioning and ‘inner workings’ of an AI system (in particular under the AI Act), private persons would not have to base their defence on an analysis of the functioning of such a system. The envisaged alleviation of victims’ burden of proof regarding the ‘inner workings’ of AI systems is therefore not expected to prompt potentially liable private persons to commission technical expertise.

    (394)

    Cf. e.g. Caruso, The Missing View of the Cathedral: The Private Law Paradigm of European Legal Integration, European Law Journal, Vol. 3, No. 1 March 1997, p. 3.

    (395)

    Given the high degree of uncertainty regarding the take-up of a non-binding recommendation by MS, there is no sufficiently robust basis for a quantified estimate in this respect. In order to estimate the added insurance costs, one would have to account not only for the likelihood that a number of MS would not follow the recommendation, but also for the fact that the premium-lowering effect of increased legal certainty and reduced fragmentation would materialise only to a much smaller extent.

    (396)

    Given the high degree of uncertainty regarding the take-up of a non-binding recommendation by MS, there is no sufficiently robust basis for a quantified estimate of either victims’ cost savings linked to the burden of proof or the additional cost that the potentially liable party might have to advance.

    (397)

    Economic Study, pp. 131, 132, 158-160 and 195.

    (398)

    Ibid., p. 196.

    (399)

    Cf. ibid., pp. 113 et seq., 161 and 163.

    (400)

    Ibid., p. 164.

    (401)

    Ibid., p. 164.

    (402)

    Ibid., p. 163.

    (403)

    Ibid., p. 141.

    (404)

    For instance, Insurance Europe cautioned that “introducing new mandatory insurance requirements would result in eliminating the existing cover in voluntary insurance policies and creating new dedicated insurance products tailored to the specific requirements of the mandatory insurance in question”. Further, Insurance Europe submitted that the insurability of AI-enabled technologies “requires individual risk appraisal and the ability of insurers and insureds to be free to agree insurance terms and conditions suited to the insured’s individual risks. […V]oluntary insurance is usually the best solution as it enables insurers and insureds to agree cover that is tailored to individual needs (risk profile).”

    (405)

    E.g. ‘Ensure AI’ by MunichRe, enabling AI providers to guarantee the performance of their AI systems. Cf. also La Playa, offering brokerage of specialist AI insurance.

    (406)

    Cf. e.g. Harvard Business Review, The Case for AI Insurance, April 2020.

    (407)

    Cf. Economic Study, p. 153.

    (408)

    EP, Resolution 2020/2014(INL) on a civil liability regime for artificial intelligence, recital 22.

    (409)

    Cf. Economic Study, pp. 122 and 161.

    (410)

    Economic Study, pp. 127, 128 and 133.

    (411)

    This liability risk can be substantial, in particular in the case of robotics start-ups. Cf. e.g. Schmelzer, Why Are Robotics Companies Dying?, 2018; Fresh Consulting, Why Robotics Companies Fail, 2021.

    (412)

    Economic Study, pp. 142, 154 and 156.

    (413)

    This scenario could be extended along the value chain. For example, if the producer has a liability claim against the developer, the producer’s insurer would have a possibility of recourse against the developer on the basis of this contractually subrogated claim. In the end, the developer’s insurer would pay.

    (414)

    See Economic Study, pp. 196 and 197.

    (415)

    As explained above in the assessment of PO1, the policy options are not expected to entail additional litigation costs for private persons as potentially liable parties. The additional elements of PO2 (strict liability and possibly mandatory insurance) would in any case apply only vis-à-vis businesses as potentially liable parties.

    (416)

    Ibid., p. 61.

    (417)

    For anecdotal evidence of a robot vacuum cleaner harming persons, see https://www.theguardian.com/world/2015/feb/09/south-korean-womans-hair-eaten-by-robot-vacuum-cleaner-as-she-slept

    (418)

    The other variable positively correlated with consumer trust, “utility of the application”, yielded similar results.

    (419)

    We were unable to find similar data for European countries. However, it is likely that, for the category of losses studied, the frequency and average loss in the United States are similar to those in the European countries studied in this report.

    (420)

    The extrapolation made has two limitations. First, demand is influenced by country-specific attributes (e.g. culture, income, etc.). By extrapolating results combining demand functions for markets in 6 countries, we implicitly assume that those country-specific attributes are representative of the EU-27, which introduces a bias. Second, given that we had to use an alternative data source for the extrapolation, we cannot be sure how accurate the share of these 6 countries’ sales in total EU-27 sales is. For the extrapolation, we used data provided by Statista on total vacuum cleaner units sold between 2015 and 2019 in the EU-27 and in each of the 6 countries studied. The figures for the 6 countries provided by Statista and by GfK (the data source used to estimate the model) differ considerably across each of the 30 markets (country-years) studied. However, despite these discrepancies, if the proportion of units sold in these countries over the EU-27 total reported by Statista (36%) is accurate (this proportion cannot be retrieved from GfK), then the extrapolation is reliable.
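    Purely as an illustration of this extrapolation step, the following Python sketch scales a hypothetical 6-country result up to the EU-27 using the 36% sales share mentioned above; the input value is an assumption for illustration only, not a figure from the underlying studies.

    # Hypothetical result estimated from the 6-country demand model (e.g. units demanded).
    six_country_result = 1_000_000
    # Share of EU-27 vacuum cleaner units sold in those 6 countries, as reported by Statista.
    share_of_eu27_sales = 0.36
    # Scale up to the EU-27, assuming the 6 countries are representative of the EU-27
    # (the bias acknowledged in this footnote).
    eu27_result = six_country_result / share_of_eu27_sales
    print(round(eu27_result))  # ~2,777,778 under these hypothetical inputs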

    (421)

    The risk of robot-related psychological harm to children is not based on existing scientific results, since there is no longitudinal study in the field of child-robot interaction that has looked at this topic. The JRC is interested in carrying out such a study in the future.

    (422)

    For the Product Liability Directive, see mutatis mutandis the cleaning robot use-case above. Under the existing provisions of the Directive, damage means damage caused by death or personal injuries and damage to items of property for private use or consumption, other than the defective product itself. The review of the Directive will not extend the types of harm giving rise to claims under the PLD to psychological harm.
