1. Can the Union act? What is the legal basis and competence of the Union’s intended action?
|
1.1 Which article(s) of the Treaty are used to support the legislative proposal or policy initiative?
|
The (proposal for a) Directive on adapting non-contractual civil liability rules to artificial intelligence is based on Article 114 of the Treaty on the Functioning of the European Union (TFEU), which regulates the approximation of the provisions laid down by law, regulation or administrative action in Member States having as their object the establishment and functioning of the internal market. The Directive harmonises targeted aspects of the Member States’ existing civil liability rules applicable to AI systems, in order to improve the conditions for the functioning of the internal market in AI-enabled products and services. The choice of Article 114 TFEU as a legal basis is also supported by the EP, which has twice called upon the Commission to use this legal basis for a legislative proposal.
|
1.2 Is the Union competence represented by this Treaty article exclusive, shared or supporting in nature?
|
In the case of civil liability rules applicable to AI systems, aiming to improve the conditions for the functioning of the internal market in AI-enabled products and services, the Union’s competence is shared – according to Article 4(2)(a) TFEU, the internal market as a policy area is subject to a shared competence between the Union and its Member States.
|
Subsidiarity does not apply for policy areas where the Union has exclusive competence as defined in Article 3 TFEU. It is the specific legal basis which determines whether the proposal falls under the subsidiarity control mechanism. Article 4 TFEU sets out the areas where competence is shared between the Union and the Member States. Article 6 TFEU sets out the areas for which the Union has competence only to support the actions of the Member States.
|
2. Subsidiarity Principle: Why should the EU act?
|
2.1 Does the proposal fulfil the procedural requirements of Protocol No. 2:
- Has there been a wide consultation before proposing the act?
- Is there a detailed statement with qualitative and, where possible, quantitative indicators allowing an appraisal of whether the action can best be achieved at Union level?
|
An extensive consultation strategy was implemented to ensure a wide participation of stakeholders throughout the policy cycle of this proposal. The consultation strategy was based on both a public and several targeted consultations.
An online public consultation was open from 18 October 2021 to 10 January 2022 to gather views from a wide variety of stakeholders, including consumers, civil society organisations, industry associations, businesses, including SMEs, and public authorities. After analysing all the responses received, the Commission published a summary outcome and the individual responses on its website.
In addition, the proposal builds on 4 years of analysis and close involvement of stakeholders, including academics, businesses, consumer associations, Member States and citizens. The preparatory work started in 2018 with the setting up of the Expert Group on Liability and New Technologies (New Technologies Formation). The Expert Group produced a Report in November 2019 that assessed the challenges some characteristics of AI pose to national civil liability rules.
The input from the Expert Group report was complemented by three additional external studies: a comparative law study, a behavioural economics study and an economic study.
The explanatory memorandum (section 2) and the impact assessment (chapter 3) explain how the principle of subsidiarity has been taken into account by this legislative proposal – for further details see answer to question 2.2 below.
|
2.2 Does the explanatory memorandum (and any impact assessment) accompanying the Commission’s proposal contain an adequate justification regarding the conformity with the principle of subsidiarity?
|
The explanatory memorandum clarifies that the objectives of this proposal cannot be adequately achieved at national level because emerging divergent national rules would increase legal uncertainty and fragmentation, creating obstacles to the rollout of AI-enabled products and services across the internal market. Legal uncertainty would particularly affect companies operating cross-border, which would face additional legal information and representation costs, risk management costs and foregone revenue. At the same time, differing national rules on compensation claims for damage caused by AI would increase transaction costs for businesses, especially in cross-border trade, entailing significant internal market barriers. Further, legal uncertainty and fragmentation disproportionately affect start-ups and SMEs, which account for most companies and the major share of investments in the relevant markets.
In the absence of EU harmonised rules for compensating damage caused by AI systems, providers, operators and users of AI systems on the one hand and injured persons on the other hand would be faced with 27 different liability regimes, leading to different levels of protection and distorted competition among businesses from different Member States.
Harmonised measures at EU level would significantly improve conditions for the rollout and development of AI-technologies in the internal market by preventing fragmentation and increasing legal certainty. This added value would be generated notably through reduced fragmentation and increased legal certainty regarding stakeholders’ liability exposure. Moreover, only EU action can consistently achieve the desired effect of promoting consumer trust in AI-enabled products and services by preventing liability gaps linked to the specific characteristics of AI across the internal market. This would ensure a consistent (minimum) level of protection for all victims (individuals and companies) and consistent incentives to prevent damage and ensure accountability.
The arguments above are substantiated in more detail in chapter 3 of the impact assessment.
|
2.3 Based on the answers to the questions below, can the objectives of the proposed action be achieved sufficiently by the Member States acting alone (necessity for EU action)?
|
Given the existing differences between tort liability regimes across Member States and the likelihood that those differences would only increase if measures were adopted at national level to adapt such liability regimes to AI risks, the objectives of the proposed action (such as ensuring legal certainty on a cross-border basis and consistently preventing compensation gaps in cases where AI systems are involved) cannot be sufficiently achieved by Member States acting alone. In the absence of EU action, the identified obstacles to a well-functioning internal market for safe AI-enabled products and services would persist.
|
(a) Are there significant/appreciable transnational/cross-border aspects to the problems being tackled? Have these been quantified?
|
In the area covered by this proposal, emerging divergent national rules would increase legal uncertainty and fragmentation, creating obstacles to the rollout of AI-enabled products and services across the internal market. Legal uncertainty would particularly affect companies operating cross-border, which would face additional legal information and representation costs, risk management costs and foregone revenue. At the same time, differing national rules on compensation claims for damage caused by AI would increase transaction costs for businesses, especially in cross-border trade, entailing significant internal market barriers.
For the quantification of the cross-border activities in this sector, see also section 6.2.6 of the economic study.
|
(b) Would national action or the absence of EU-level action conflict with core objectives of the Treaty or significantly damage the interests of other Member States?
|
In the absence of EU harmonised rules for compensating damage caused by AI systems, providers, operators and users of AI systems on the one hand and injured persons on the other hand would be faced with 27 different liability regimes, leading to different levels of protection and distorted competition among businesses from different Member States, which is against the core objective of the Treaty to establish a well-functioning internal market. At the same time, the fact that victims of harm caused by AI systems would not benefit from the same level of protection as victims of harm caused by other technologies is against the principle of equal access to justice, hampering trust in AI and ultimately the uptake of AI-enabled products/services across the EU Member States.
|
(c) To what extent do Member States have the ability or possibility to enact appropriate measures?
|
There is concrete evidence that a number of Member States may take unilateral legislative measures to address the specific challenges posed by AI with respect to liability. For example, the AI strategies adopted in Czechia, Italy, Malta, Poland and Portugal mention initiatives to clarify liability (see also section 3.1 of the IA). However, given the already large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measures on liability would follow existing different national approaches and therefore increase fragmentation in the EU. Thus, relying on separate measures at the level of each Member State cannot address the main concerns identified by the present initiative.
|
(d) How do the problem and its causes (e.g. negative externalities, spill-over effects) vary across the national, regional and local levels of the EU?
|
Some Member States already envisage the adoption of measures on liability for AI-enabled products and services, while other Member States have not put forward such initiatives. This divergence of approaches would increase the already existing fragmentation of tort liability regimes among Member States. As a consequence, some victims would be left to bear the burden of damage caused by AI, affecting victims disproportionately and allowing for an externalisation of costs linked to the roll-out of AI. At the same time, companies active cross-border would face a multiplicity of liability regimes, hampering their development and innovation potential. These elements would undermine the functioning of the internal market for safe AI-enabled products and services. This matter is less relevant for the local and regional levels within Member States, given that tort liability is generally regulated at national level in Europe.
|
(e) Is the problem widespread across the EU or limited to a few Member States?
|
So far, businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes in Member States would apply to damage caused by AI. This uncertainty concerns not only their own Member State but particularly the other Member States to which they export, or in which they operate, their products and services. Some Member States may take unilateral legislative measures to address the specific challenges posed by AI with respect to liability (see the Member States mentioned under 2.3(c) above). It is very likely that any national AI-specific measures on liability would follow existing different national tort law approaches, thus increasing the barriers to the roll-out of AI-enabled products and services across the internal market in general, and not only on some national markets.
|
(f) Are Member States overstretched in achieving the objectives of the planned measure?
|
No, the measure is proposed to support Member States in ensuring legal certainty and consistently preventing compensation gaps in cases where AI systems are involved.
|
(g) How do the views/preferred courses of action of national, regional and local authorities differ across the EU?
|
Tort law traditions differ significantly across Member States, and their adaptation to deal with AI-liability cases is likely to increase such differences. These differences concern, for instance, the standard of proof for causation, procedural alleviations of the burden of proof, administrative law measures assisting those bearing the burden of proof, the basis for liability and different approaches to causal uncertainty. In addition, while fault-based liability is the default liability regime in Europe and is framed very differently across Member States, some Member States also have strict liability regimes in place.
|
2.4 Based on the answers to the questions below, can the objectives of the proposed action be better achieved at Union level by reason of scale or effects of that action (EU added value)?
|
The objectives of the proposed action – to ensure legal certainty and consistently prevent compensation gaps in cases where AI systems are involved, thus creating the conditions for the deployment of AI-technologies in the internal market – can be better achieved at Union level by ensuring a minimum level of harmonisation. This will increase public trust and the uptake of AI-enabled products and services, enabling companies (in particular SMEs, which are among the most active in the AI sector) to expand their activities across borders. On the contrary, relying only on possible measures at national level would increase legal fragmentation across the EU and create obstacles to cross-border activities (as explained under question 2.3 above).
|
(a) Are there clear benefits from EU level action?
|
The conditions for the roll-out and development of AI-technologies in the internal market can be significantly improved by preventing fragmentation and increasing legal certainty through harmonised measures at EU level, compared to possible adaptations of liability rules at national level. The economic study underpinning the Impact Assessment of this proposal concluded – as a conservative estimate – that targeted harmonisation measures on civil liability for AI would have a positive impact of 5 to 7 % on the production value of relevant cross-border trade. This added value would be generated notably through reduced fragmentation and increased legal certainty regarding stakeholders’ liability exposure. This would lower stakeholders’ legal information/representation, internal risk management and compliance costs, facilitate financial planning as well as risk estimates for insurance purposes, and enable companies – in particular SMEs – to explore new markets across borders.
|
(b) Are there economies of scale? Can the objectives be met more efficiently at EU level (larger benefits per unit cost)? Will the functioning of the internal market be improved?
|
It is only through EU action that the desired effect of promoting consumer trust in AI-enabled products and services by preventing liability gaps linked to the specific characteristics of AI can be achieved consistently across the internal market. Harmonised adaptations of existing liability rules are needed, ensuring a consistent (minimum) level of protection for all victims (citizens and companies) and consistent incentives to prevent harm and ensure accountability. The Directive will create the conditions for businesses to use cross-border opportunities and expand their AI-related activities in the internal market. Based on the overall value of the EU AI market affected by the liability-related problems addressed by this Directive, it is estimated that the latter will generate an additional market value between ca. EUR 500 million and ca. EUR 1.1 billion (see page 49 of the Impact Assessment).
|
(c) What are the benefits in replacing different national policies and rules with a more homogeneous policy approach?
|
Harmonising at EU level the relevant liability rules to respond to the risks posed by AI systems would significantly improve the conditions for the roll-out and development of AI-technologies in the internal market. Such a more homogeneous approach is needed to prevent legal fragmentation (which may result from possible adaptations of liability rules at national level) and to increase legal certainty for consumers and businesses, while preventing liability gaps and ensuring that victims of harm caused by AI systems are duly compensated, including in cases with cross-border dimensions.
|
(d) Do the benefits of EU-level action outweigh the loss of competence of the Member States and the local and regional authorities (beyond the costs and benefits of acting at national, regional and local levels)?
|
The Directive follows a minimum harmonisation approach, focusing on the disclosure of information and the alleviation of the burden of proof through the establishment of a presumption of causality subject to strict conditions. This approach, based on a minimum intervention at EU level, allows victims of damage caused by AI systems to invoke more favourable rules of national law. Thus, national laws could maintain, for example, reversals of the burden of proof under fault-based regimes, or national no-fault liability regimes, of which there is already a large variety in national laws, possibly applying to damage caused by AI systems. The benefits of EU action, described under points 2.4(a)-(c) above, outweigh the minimal loss of competence of Member States in this area.
|
(e) Will there be improved legal clarity for those having to implement the legislation?
|
The risks stemming from AI systems pose challenges for the current tort liability regimes across Member States. To tackle those risks, the Directive will bring legal clarity on the right of access to information and the corresponding disclosure obligations, and will alleviate the burden of proof of claimants by establishing rebuttable presumptions of causality between non-compliance with legal duties of care and the output of the AI system that caused the damage. These rules will significantly increase legal certainty for claimants and defendants, for legal practitioners and national courts, as well as for relevant public authorities. At the same time, consumers and businesses will benefit from this enhanced legal clarity, which will increase their trust in the internal market for AI-enabled products and services. The assessment of relevant liability risks, as well as their insurability, will be facilitated.
|
3. Proportionality: How the EU should act
|
3.1 Does the explanatory memorandum (and any impact assessment) accompanying the Commission’s proposal contain an adequate justification regarding the proportionality of the proposal and a statement allowing appraisal of the compliance of the proposal with the principle of proportionality?
|
The explanatory memorandum clarifies that the proposal is based on a staged approach. In the first stage, the objectives are achieved with a minimally invasive approach; the second stage involves re-assessing the need for more stringent or more extensive measures.
The first stage is limited to the burden-of-proof measures strictly necessary to address the AI-specific problems identified. It builds on the substantive conditions of liability currently existing in national rules, such as fault or causality, but focuses on targeted proof-related measures, ensuring that victims have the same level of protection as in cases not involving AI systems. Moreover, from the various tools available in national law for easing the burden of proof, this proposal has chosen to use rebuttable presumptions as the least interventionist tool. Such presumptions are commonly found in national liability systems, and they balance the interests of claimants and defendants. At the same time they are designed to incentivise compliance with existing duties of care. However, the proposal does not lead to a reversal of the burden of proof, to avoid exposing providers, operators and users of AI systems to higher liability risks, which may hamper innovation and reduce the uptake of AI-enabled products and services.
The second stage included in the proposal ensures that future technological, regulatory and jurisprudential developments will be taken into account when re-assessing the need to harmonise other elements of the claims for compensation, including situations where no-fault liability would be more appropriate, as requested by the European Parliament. Such assessment would also likely consider whether such a harmonisation would need to be coupled with mandatory insurance to ensure effectiveness.
The elements above are substantiated in more detail in the impact assessment (see especially section 4 of annex 10 to the IA, on the comparison of the policy options in terms of proportionality).
|
3.2 Based on the answers to the questions below and information available from any impact assessment, the explanatory memorandum or other sources, is the proposed action an appropriate way to achieve the intended objectives?
|
Yes, this Directive, focusing on the alleviation of the burden of proof, reflects the Commission’s targeted approach of ensuring that victims are neither less nor more protected due to the involvement of AI. Any shifts in the risk and cost distribution between affected stakeholders that would go beyond counter-balancing the specific proof-related challenges of AI are thus avoided, and Member States’ well-established liability systems are respected to the maximum extent possible. In a second stage, under the review clause, it will be assessed whether a higher level of regulatory intervention is needed (such as no-fault liability or mandatory insurance), subject also to the evolution and deployment of AI-technologies on the market and the emergence of possible incidents linked to AI systems.
|
(a) Is the initiative limited to those aspects that Member States cannot achieve satisfactorily on their own, and where the Union can do better?
|
The Directive is limited to the measures strictly necessary to address the AI-specific problems identified. In particular, it would not touch upon the substantive conditions of liability like fault or causality (which remain in the remit of national law), but focus on targeted proof-related measures (disclosure of information and alleviation of the burden of proof through rebuttable presumptions) ensuring that victims have the same level of protection as in cases not involving AI. Under the review clause, the Directive provides for re-assessing the need for more stringent or more extensive measures at a later stage, without committing to specific outcomes of such review.
|
(b) Is the form of Union action (choice of instrument) justified, as simple as possible, and coherent with the satisfactory achievement of, and ensuring compliance with the objectives pursued (e.g. choice between regulation, (framework) directive, recommendation, or alternative regulatory methods such as co-legislation, etc.)?
|
A binding instrument can prevent protection gaps stemming from partial or non-implementation. While a non-binding instrument would present a less intrusive approach, it is unlikely to address the identified problems effectively. Among binding instruments, a directive is the most suitable, as it provides the desired harmonisation effect and legal certainty while giving Member States the flexibility to fit the harmonised measures without friction into their national liability regimes. A regulation, by contrast, would not have allowed for such flexibility in the politically sensitive field of civil law. On these grounds, the choice has been made to submit this proposal in the form of a directive.
|
(c) Does the Union action leave as much scope for national decision as possible while achieving satisfactorily the objectives set? (e.g. is it possible to limit the European action to minimum standards or use a less stringent policy instrument or approach?)
|
The Directive achieves the set objectives with a minimally invasive approach. It is focused on burden-of-proof measures strictly necessary to address the AI-specific problems identified. It builds on the substantive conditions of liability currently existing in national rules, such as fault or causality, but focuses on targeted proof-related measures, ensuring that victims have the same level of protection as in cases not involving AI systems. Moreover, from the various tools available in national law for easing the burden of proof, this proposal has chosen to use rebuttable presumptions as the least interventionist tool. However, the proposal does not lead to a reversal of the burden of proof, to avoid exposing providers, operators and users of AI systems to higher liability risks, which may hamper innovation and reduce the uptake of AI-enabled products and services. Given that this Directive follows a minimum harmonisation approach, Member States remain free to use in their national law more invasive tools if they deem it appropriate, such as reversals of the burden of proof, irrebuttable presumptions, strict liability etc.
|
(d) Does the initiative create financial or administrative costs for the Union, national governments, regional or local authorities, economic operators or citizens? Are these costs commensurate with the objective to be achieved?
|
The Directive will not have implications for the budget of the European Union or its Member States, at national, regional or local level. It also does not impose specific costs on economic operators or citizens. By reducing fragmentation and increasing legal certainty regarding stakeholders’ liability exposure, the Directive lowers stakeholders’ legal information/representation, internal risk management and compliance costs, facilitating financial planning as well as risk estimates for insurance purposes.
|
(e) While respecting Union law, have special circumstances applying in individual Member States been taken into account?
|
By proposing a minimum harmonisation approach, this Directive takes into account the different levels of regulatory intervention of Member States as regards tort liability for damage caused by AI systems. The Impact Assessment and the comparative law study underpinning this proposal have assessed the various civil law traditions on tort liability present in different Member States. The directive, as a legal instrument, gives the Member States the needed flexibility when integrating the harmonised measures into their national liability regimes.
|