Document 52026AB0010
Opinion of the European Central Bank of 13 March 2026 on a proposal for a regulation as regards the simplification of the implementation of harmonised rules on artificial intelligence (CON/2026/10)
CON/2026/10
OJ C, C/2026/2285, 15.4.2026, ELI: http://data.europa.eu/eli/C/2026/2285/oj (BG, ES, CS, DA, DE, ET, EL, EN, FR, GA, HR, IT, LV, LT, HU, MT, NL, PL, PT, RO, SK, SL, FI, SV)
OPINION OF THE EUROPEAN CENTRAL BANK
of 13 March 2026
on a proposal for a regulation as regards the simplification of the implementation of harmonised rules on artificial intelligence
(CON/2026/10)
(C/2026/2285)
Introduction and legal basis
On 19 November 2025, the European Commission published a proposal for a regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI) (1) (hereinafter the ‘proposed regulation’).
The European Central Bank (ECB) has decided to deliver an own-initiative opinion on the proposed regulation. The ECB’s competence to deliver an opinion is based on Articles 127(4) and 282(5) of the Treaty on the Functioning of the European Union since the proposed regulation contains provisions falling within the ECB’s fields of competence, in particular regarding the ECB’s tasks concerning the prudential supervision of credit institutions pursuant to Article 127(6) of the Treaty. In accordance with Article 17.5, first sentence, of the Rules of Procedure of the European Central Bank, the Governing Council has adopted this opinion.
1. General observations
1.1. In 2021 the ECB received a request from the Council of the European Union for, and adopted, an opinion on the original proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (2).
1.2. The ECB welcomes the objective of the proposed regulation to promote innovation and competitiveness within the internal market by simplifying and streamlining the implementation of Regulation (EU) 2024/1689 of the European Parliament and of the Council (3) (hereinafter the ‘AI Act’). The ECB acknowledges the importance of establishing an accessible regulatory framework specific to AI systems while ensuring a consistent and high level of protection of public interests as regards health, safety and fundamental rights, as well as the importance of trustworthy and ethically sound artificial intelligence (AI) usage, as stipulated in the AI Act and in the proposed regulation (4). In particular, as credit institutions increasingly rely on both high-risk and general-purpose AI systems, a streamlined framework is essential to ensure compliance with the AI Act and Union banking law.
1.3. From a prudential supervisory perspective, the establishment of well-designed innovation pathways allows credit institutions to safely experiment with AI solutions in controlled environments, while ensuring early identification of potential risks. Such pathways support both technological progress in the banking sector and the ECB’s ability to assess potential impacts on the operational resilience, model risk and data governance of credit institutions.
1.4. The ECB understands that the proposed regulation does not impose additional obligations on the ECB in its capacity as prudential supervisor of credit institutions, nor does the ECB need to apply the AI Act in its supervision. Nonetheless, due to the ECB’s supervision of credit institutions’ information and communication technology (ICT) risk, and the fact that certain requirements under the AI Act may overlap with, or be addressed through, compliance with Union financial services law, close coordination and effective exchange of information between the ECB and national market surveillance authorities supervising credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council (5) (hereinafter the ‘CRD’) are essential to ensure consistent supervision and the effective application of the AI Act in respect of such laws concerning credit institutions.
2. Institutional competences and supervisory powers
2.1. Under Article 127(6) of the Treaty and Council Regulation (EU) No 1024/2013 (6) (hereinafter the ‘SSM Regulation’), the Council has conferred on the ECB specific tasks concerning policies relating to the prudential supervision of credit institutions, with a view to contributing, inter alia, to the safety and soundness of credit institutions and the stability of the financial system within the Union and each Member State (7). The ECB’s mandate is limited to the prudential supervision of credit institutions. The ECB understands that conformity assessments (8), breach identifications (9) and enforcement of the AI Act (10) constitute tasks of market surveillance authorities and are not part of the ECB’s prudential supervisory mandate (11).
2.2. Notwithstanding the ECB’s restricted mandate in this field, the AI Act provides that national market surveillance authorities, supervising regulated credit institutions participating in the Single Supervisory Mechanism (SSM), should report, without delay, to the ECB any information identified in the course of their market surveillance activities that may be of potential interest for the ECB’s prudential supervisory tasks (12).
3. Exchange of supervisory information
3.1. Credit institutions that are providers or deployers of high-risk AI systems are deemed to comply with certain provisions of the AI Act when they comply with similar obligations under Union financial services law (13). The ECB welcomes this principle, which prevents credit institutions from being faced with double reporting and potentially conflicting supervisory views. However, as indicated in paragraph 2.1, the ECB does not have a market surveillance mandate under the AI Act and, in general, banking supervisors do not have the same mandate and supervisory priorities as market surveillance authorities under the AI Act. This means that, generally, prudential supervisors of credit institutions will not or cannot review requirements and pursue breaches under the AI Act. To ensure a level playing field in overseeing the application of the AI Act, banking supervisors such as the ECB should therefore be able to share with market surveillance authorities supervising credit institutions any information that is relevant to those market surveillance authorities.
3.2. The current framework primarily envisages the ECB as a recipient of information from market surveillance authorities (14). In order to achieve the objectives of the AI Act, including the protection of fundamental rights, and ensure effective and coordinated market surveillance and prudential supervision, the proposed regulation should include an explicit legal basis allowing the ECB, on a need-to-know basis and subject to professional secrecy requirements, to share relevant prudential supervisory information with competent national market surveillance authorities supervising credit institutions which are regulated under the CRD and which fall under the ECB’s supervision. A similar information exchange arrangement should also be introduced between national competent authorities supervising credit institutions and market surveillance authorities. Without such a clear legal basis, it is unclear how coordination between market surveillance and prudential supervisory authorities (15) could be implemented in practice. The ECB also sees merit in extending to national market surveillance authorities supervising credit institutions which are not competent authorities for banking supervision (and therefore not part of the SSM) the duty to report to the ECB and to national competent authorities for banking supervision any information identified in the course of their market surveillance activities that may be of potential interest for the prudential supervisory tasks of the ECB and the national competent authorities for banking supervision (16). A legal basis for such information sharing could be used, for instance, to share information on the risk management or governance of the AI systems deployed by supervised credit institutions, or to share information with an impact on their risk profile.
4. Scope and classification of models and use cases
4.1. It is important for market participants, in particular credit institutions that use AI for credit scoring, to know which techniques fall under the definition of AI system (17) and which do not, as well as which techniques are considered ‘high-risk’ under the AI Act.
4.2. Further clarification would be appropriate, particularly by expressly excluding generalised linear models (e.g. linear or logistic regressions) from the scope of the AI Act’s definition of high-risk AI systems, when used for credit scoring (18), due to their high level of explainability and transparency (19). Such clarification could be reflected in the AI Act (20) and in the Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (21).
4.3. In this regard, the ECB notes that generalised linear models, including linear and logistic regression, are inherently interpretable and transparent (22). Their operation can be fully explained through a limited and stable set of parameters, whose influence on outcomes can be assessed using well-established statistical tools. Consequently, such models do not exhibit the ‘black-box’ characteristics that underpin the AI Act’s enhanced governance and risk-mitigation requirements, which are primarily designed to address the challenges posed by complex, non-linear or self-learning systems (23), for which the current legal framework might not be suitable (24). Further, from a proportionality and legal-certainty perspective, including generalised linear models, when used as standalone statistical techniques, within the scope of the AI Act’s definition of high-risk AI systems would not materially contribute to reducing the risks associated with these models. Rather, this would create unnecessary compliance burdens, in particular in regulated financial sectors such as banking where such models are well established, broadly used and already subject to extensive prudential governance and supervisory scrutiny. Treating those algorithms in the same way as more complex and novel modelling methods used for credit scoring would run counter to the stated simplification objective of the proposed regulation. The ECB considers that further clarification would be warranted to expressly exclude generalised linear models from the definition of high-risk AI systems under the AI Act, without prejudice to cases where such models form part of more complex AI systems requiring a holistic risk assessment.
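The interpretability property invoked in paragraph 4.3 can be illustrated with a minimal sketch. The coefficient values below are purely hypothetical and are not drawn from any real scoring model: the point is only that a logistic regression's behaviour is fully determined by a small, fixed set of parameters, and that each parameter's influence on the outcome can be summarised with a standard statistical quantity (the odds ratio, exp(coefficient)).

```python
import math

# Hypothetical logistic-regression credit-scoring model; all values are
# illustrative only and not taken from any institution's actual model.
INTERCEPT = -1.5
COEFFICIENTS = {
    "income_to_debt_ratio": -0.8,    # higher ratio lowers the odds of default
    "years_of_credit_history": -0.05,
    "recent_defaults": 1.2,          # each recent default raises the odds
}

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def default_probability(features: dict) -> float:
    # The model's entire behaviour is one linear formula passed through the
    # sigmoid: a limited, stable set of parameters, with no hidden state.
    z = INTERCEPT + sum(COEFFICIENTS[name] * value
                        for name, value in features.items())
    return sigmoid(z)

def odds_ratio(feature: str) -> float:
    # exp(coefficient) gives the multiplicative change in the odds of default
    # per one-unit increase in the feature: a directly interpretable summary.
    return math.exp(COEFFICIENTS[feature])

applicant = {"income_to_debt_ratio": 2.0,
             "years_of_credit_history": 10.0,
             "recent_defaults": 0.0}
p = default_probability(applicant)
```

In this sketch, every prediction can be decomposed term by term, which is the sense in which such models lack the ‘black-box’ characteristics of complex, non-linear or self-learning systems.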
4.4. The AI Act clarifies that AI systems provided for by Union law for prudential purposes to calculate credit institutions’ capital requirements (known as ‘internal models’) should not be considered high-risk (25). At the same time, AI systems intended to evaluate the creditworthiness of natural persons or to establish their credit score are to be considered high-risk under the AI Act (26). However, under Regulation (EU) No 575/2013 of the European Parliament and of the Council (27) (hereinafter the ‘CRR’), internal models should play an essential role in the credit approval process and are consequently closely interlinked with credit scoring models (28). This may lead to a situation in which prudential supervisors, assessing the internal model, and the market surveillance authority, assessing the credit scoring model, provide different and potentially conflicting guidance in their supervision. In this context, the ECB reiterates that effective coordination and information exchange between market surveillance authorities and prudential supervisors are necessary to foster consistent application of overlapping requirements under the AI Act and the CRR, and to ensure effective protection of fundamental rights. In any case, the ECB acknowledges that, even with proper information exchange, the root cause of the issue of potentially conflicting requirements would not be completely resolved. In this regard, credit institutions and their supervisors are faced with a complex process in order to ensure that the credit institution’s internal model complies with the CRR and, in parallel, with the AI Act, also considering that the prudential supervisor and the competent market surveillance authority will most likely assess each model at a different point in time.
4.5. Finally, enhanced coordination between the Commission guidelines and guidance on the implementation of the AI Act and the European System of Financial Supervision would help avoid unnecessary administrative burdens and support a coherent supervisory framework.
Where the ECB recommends that the proposed regulation is amended, a specific drafting proposal is set out in a separate technical working document accompanied by an explanatory text to this effect. The technical working document is available in English on EUR-Lex.
Done at Frankfurt am Main, 13 March 2026.
The President of the ECB
Christine LAGARDE
(1) COM(2025) 836 final.
(2) Opinion CON/2021/40 of the European Central Bank of 29 December 2021 on a proposal for a regulation laying down harmonised rules on artificial intelligence (OJ C 115, 11.3.2022, p. 5).
(3) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj).
(4) See recital 1 of the AI Act and recital 1 of the proposed regulation.
(5) Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338, ELI: http://data.europa.eu/eli/dir/2013/36/oj).
(6) Council Regulation (EU) No 1024/2013 of 15 October 2013 conferring specific tasks on the European Central Bank concerning policies relating to the prudential supervision of credit institutions (OJ L 287, 29.10.2013, p. 63, ELI: http://data.europa.eu/eli/reg/2013/1024/oj).
(7) See Article 1, first paragraph, of the SSM Regulation.
(8) Article 5(6) and Articles 46 and 60 of the AI Act.
(9) Articles 73, 74, 75, 76 and 80, and Article 83(1) of the AI Act.
(10) Article 79(5) to (9), Articles 81 and 82, Article 83(2) and Articles 85 and 99 of the AI Act.
(11) See paragraphs 2.1.4 and 2.1.5 of Opinion CON/2021/40.
(12) Article 74(7) of the AI Act.
(13) Article 26(5) and (6) of the AI Act.
(14) See Article 74(7) of the AI Act.
(15) See recital 158 of the AI Act.
(16) See Article 74(7), second paragraph of the AI Act.
(17) Article 3, point (1), of the AI Act.
(18) See Annex III to the AI Act.
(19) On this point, see also European Parliament resolution of 25 November 2025 on the impact of artificial intelligence on the financial sector (2025/2056(INI)), available on the European Parliament’s website at www.europarl.europa.eu.
(20) For instance, in Annex III to the AI Act.
(21) See, for instance, paragraph 42 of the Communication from the Commission, Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act) (C(2025) 5053 final).
(22) National Bank of Belgium, ‘Financial Market Infrastructures and Payment Services Report’, 2019, pp. 64-65, available on the NBB’s website at www.nbb.be; F. Pérez-Cruz et al., ‘Managing explanations: how regulators can address AI explainability’, in Financial Stability Institute Occasional Paper No 24, 2025, p. 15, available on the website of the Bank for International Settlements at www.bis.org.
(23) J. Tejero, ‘Unwrapping black-box models: a case study in credit risk’, in Revista de Estabilidad Financiera, 43, 2022, pp. 96 et seq. Available on the website of the Banco de España at www.bde.es.
(24) The ECB also excludes generalised linear regression models from the stricter expectations applicable to more complex models used for own funds requirements calculations. See ECB Guide to Internal Models, para. 31, fn. 35, available on the ECB’s banking supervision website at www.bankingsupervision.europa.eu.
(25) Recital 58 of the AI Act.
(26) See Annex III to the AI Act.
(27) Regulation (EU) No 575/2013 of the European Parliament and of the Council of 26 June 2013 on prudential requirements for credit institutions and amending Regulation (EU) No 648/2012 (OJ L 176, 27.6.2013, p. 1, ELI: http://data.europa.eu/eli/reg/2013/575/oj).
(28) See Article 144(1), point (b), of the CRR.
ISSN 1977-091X (electronic edition)