Official Journal
EN C series
C/2025/113
10.1.2025
Opinion of the European Economic and Social Committee
a) General-purpose AI: way forward after the AI Act
(exploratory opinion requested by the European Commission)
b) A secure technology for the future: Artificial Intelligence
(exploratory opinion requested by the Hungarian Presidency)
INT/1055
(C/2025/113)
Rapporteur: Sandra PARTHIE
Advisor: Vera DEMARY (for the rapporteur)
Referrals:
Legal basis: a) and b) Article 304 of the Treaty on the Functioning of the European Union
Section responsible: Single Market, Production and Consumption
Adopted in section: 1.10.2024
Adopted at plenary session: 23.10.2024
Plenary session No: 591
Outcome of vote (for/against/abstentions): 186/2/2
1. Conclusions and recommendations
1.1. To be competitive in general-purpose AI (GPAI), Europe must invest in secure connectivity and resilient backbone infrastructure as well as a resilient supply chain to ensure that the effects of generative AI can be harnessed for European actors and aligned with European values and needs.
1.2. The European Economic and Social Committee (EESC) emphasises that AI is a very dynamic subject and that the AI Act will have to be updated in a similarly flexible and dynamic way to achieve its objective of creating an ecosystem that is trustworthy and respects EU fundamental rights and values. Even though GPAI models are highly technical and predominantly relevant in the business-to-business (B2B) context, they have an indirect impact on workers and consumers. To dispel fears and enhance awareness, the EESC advises organising dialogues with stakeholders, including social partners, about the codes of practice in workplaces and workers’ rights in the context of GPAI.
1.3. To combat market concentration dominated by large, often non-European, digital companies, the EESC believes it is essential to mobilise the tools of competition policy (assessing the potential abuse of a company’s dominant position, merger control) to prevent, identify and address critical behaviour and situations. Coordinated European and national investment in innovation is needed to help develop EU value chains and value creation in AI.
1.4. The planned voluntary codes of practice under the AI Act will make it easier for companies to comply with the regulation. The EESC expects them to provide users, developers and other AI stakeholders with guidelines, best practices for applying the regulation, templates, information on thresholds and standards, and easy-to-use checklists.
1.5. The AI Office will play a crucial role in implementing and enforcing the AI Act’s provisions, including providing guidance, establishing codes of conduct, promoting international cooperation in AI, promoting European standards and enforcing EU regulations vis-à-vis European and non-European companies active in the EU. The AI Office and national authorities must have the necessary resources to monitor, evaluate and enforce the provisions of the law, ensuring compliance and consumer rights protection.
1.6. The EESC is conscious of the concerns that various categories of content creators currently have about the use of generative AI. It is of utmost importance to ensure that AI is developed in a way that respects patentability, copyright and intellectual property rules.
1.7. AI can increase energy and resource efficiency by improving processes and providing solutions that allow for virtual tests, digital twins and other options for reducing the use of materials. AI systems and models and their development also have an impact on the environment and energy usage, which must be accurately measured and considered.
2. General comments
2.1. With the AI Act in force, the focus now shifts to implementing the regulation with respect to general-purpose AI and to producing competitive ‘AI made in Europe’. AI is a very dynamic subject and the AI Act will have to be constantly updated to achieve its objective of creating an ecosystem that is trustworthy and respects EU fundamental rights and values.
2.2. As the AI Act is very comprehensive, this opinion concentrates on the key aspects of GPAI that is trained with non-personal data and applied in a business-to-business (B2B) or business-to-government (B2G) setting, for example in Internet of Things (IoT) devices. It aims to address providers and deployers, as well as issues of innovation, investment in all parts of the AI value chain, the promotion of use cases, access to data and the future governance system related to GPAI.
2.3. According to the AI Act, general-purpose AI refers to an AI model ‘that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications’ (1). General-purpose AI systems (GPAIS) can therefore perform many tasks, including tasks that they were not originally trained for. This definition of GPAI serves as an umbrella term covering foundation models and large language models, making generative AI one type of foundation model and hence also GPAIS.
2.4. The AI Act addresses the risks of developing and using GPAI extensively and aims to implement a framework for the safe handling of the technology. In this context, the EESC supports the implementation of the ‘AI Pact’ by the Commission and stresses that any lessons learnt from this, e.g. by early adopters, could be beneficial for MSMEs and should therefore be put in a format that makes them easily accessible to this group.
2.5. The EESC deplores the fact that the AI Act fails to address the risk of not developing and using GPAI, which mainly lies in decreasing competitiveness among European companies, possibly resulting in declining sales, job losses, economic stagnation and poverty. Even though the EESC concurs with the need to regulate AI and the risks that its development and use pose, it also calls for the AI Act’s effects to be closely monitored and for adjustments to be considered should the regulation prove to have a negative impact on how well European AI-focused companies can innovate. This may happen in particular if companies are uncertain about how the regulation applies to them, apply it incorrectly or conclude that the regulation makes the European market too complex to be worth investing and innovating in.
2.6. The EESC is conscious of the fact that major GPAI developments are being spearheaded by actors outside the EU’s jurisdiction. Nevertheless, European values (in particular sustainability, social rights and respect for human rights and the environment), data protection and transparency need to be key criteria for all developers and providers as well as for users of AI, especially in high-risk applications. In other AI use cases, users and consumers should be made aware of the technology. Furthermore, non-European actors operating in the European market or using European data must comply with these requirements.
3. Specific comments
3.1. Governance
3.1.1. The planned codes of practice with respect to GPAI and the AI Act will make it easier for companies to comply with the rules. The EESC expects them to provide users, developers and other AI stakeholders with guidelines, best practices for applying the regulation, templates, information on thresholds and standards, and easy-to-use checklists. The quality of the templates the AI Office provides will determine the quality of the information that providers of GPAI will have to make available.
3.1.2. The EESC strongly advises organising dialogues with stakeholders, including social partners, about the codes of practice in workplaces, as these will make it easier for providers and users to comply with the regulations and help build trust in AI, including GPAI. The EESC underlines that adherence to labour rights and safe workplace requirements must always be ensured, including making sure that workers understand AI and use it responsibly. In this context, the EESC advocates providing adequate training, increasing general digital literacy and raising awareness among members of the public (2).
3.1.3. The AI Office will play a crucial role in implementing and enforcing the AI Act’s provisions, including providing guidance, establishing codes of conduct, promoting international cooperation in AI, promoting European standards and enforcing EU regulations vis-à-vis European and non-European companies active in the EU. The EESC underlines that the European AI Office needs to be able to assess models’ training procedures and offer the possibility of independent auditing. It should establish effective complaints mechanisms for members of the public and users regarding GPAI and should coordinate cross-border investigations.
3.1.4. Standards for AI systems, for example governing their robustness and reliability in the medical field or in recruitment, are crucial for the development of AI models and use cases. The EESC believes that rapidly establishing and applying standards for evaluating GPAIS is of global interest and that these standards should be harmonised at least at EU level but preferably at global level.
3.1.5. Constant evaluation of the effects of the AI Act on GPAI in the EU is crucial for the regulation to achieve the intended effects in the medium term. Unlike the stance favoured by the US, the EU’s vision, which the EESC supports, includes bans on the most invasive forms of the technology and strict rules requiring hyperscale companies to be more open about how they design AI-based products.
3.1.6. The EESC is conscious of the concerns that various categories of content creators currently have about the use of generative AI. It is of utmost importance to ensure that AI is developed in a way that respects patentability, copyright and intellectual property rules. Creators (e.g. of literary or artistic work) should retain the ability to grant or refuse permission for their work to be used, including by GPAI models.
3.1.7. The EESC calls for the AI Office to help stakeholders understand other international AI frameworks (e.g. the OECD’s (3)) and regulations, such as those in China and the US, and how they relate to the EU AI Act, including such topics as ‘open’ vs ‘licensing-based’ models. The AI Office should also cooperate with other EU and Member State authorities dealing with access to data for companies in order to make data more accessible, especially for MSMEs. The EESC also calls on the Commission, the European External Action Service (EEAS) and Member States to adopt a ‘multistakeholder approach’ in efforts to create a global AI governance mechanism, taking into account national initiatives such as the US’s executive order on AI (4) and the Chinese government’s measures for the management of GPAI (5). This approach could also entail regular international and inclusive policy dialogues on AI governance to avoid fragmentation into different regions.
3.1.8. The EESC calls on the European Commission and the relevant national authorities to provide sufficient funding to implement the AI Act efficiently and to cooperate closely with the AI Office so that it can achieve the stated aims. Without such funding, the AI Office will not be able to attract and retain the talented people and AI experts necessary to fulfil the above tasks.
3.1.9. The EU recently adopted the Data Act to improve data sharing and access to data. Its data strategy and the Data Governance Act are along the same lines. These regulations must now be implemented and put into practice. Their impact must be carefully monitored and evaluated before any further regulatory attempts to foster data access are considered.
3.1.10. Regulatory sandboxes are included in the provisions of the AI Act at national level, providing a space for newcomers to experiment. In the EESC’s view, it is crucial to streamline the efforts and goals of these sandboxes and for Member States and participants to share the lessons learnt with each other.
3.2. Innovation and investment
3.2.1. The AI Act aims to foster a safe and reliable AI ecosystem. Since users and consumers can benefit from AI innovation through more advanced and efficient products and services used in sectors like health and sustainable energy, it is important that the implementation of the AI Act does not impede such innovation.
3.2.2. The future of GPAI made in Europe relies on computing power, chips, cloud capacity and data, all of which require considerable investment in land, energy and data-centre equipment. Investment, as well as other incentives to that end, should focus on secure connectivity and resilient backbone infrastructure, as both are paramount if Europe wants to become digitally sovereign. The EESC points specifically to IMEC, the world’s leading independent nanoelectronics R&D hub, located in Belgium, as a centre of excellence in Europe and a key piece in the AI value chain, which should be further developed and emulated.
3.2.2.1. Cloud hosts have unparalleled power to monitor, detect and stifle competitors. In the current context, smaller European providers find themselves restricted to fine-tuning US models into customised or domain-specific systems that require less computing power, data and labour. This leaves downstream developers and deployers dependent on larger upstream model providers. European cloud service providers can be part of a solution that meets the demands for compliance with European values and security needs and should be promoted, e.g. through using their services in EU public procurement projects.
3.2.3. The market for AI chips that can run models during their training phase and deal with user queries is highly concentrated and dominated by a very small number of companies, notably Taiwanese and US ones. An estimated USD 5-7 trillion worth of investment is needed to create matching sovereign chip-building capacity in the EU. Forming strategic partnerships with several reliable allies in this area could be a viable and less costly alternative.
3.2.4. Generative AI will affect the pharmaceutical, manufacturing, media, automotive, aerospace, defence, medical, electronics and energy industries and the service sector, among others, by augmenting core processes with AI models. It will impact marketing, design, corporate communications, training and software engineering by augmenting the support processes carried out in many organisations.
3.3. Promotion of use cases
3.3.1. GPAI will, to a significant degree, replace cognitive, intelligent work with automation supported by AI. Companies, independent of their size, need technical expertise and support in adapting to the new AI environment, in deploying it usefully and in complying with the relevant and possibly fast-changing regulatory framework. Equally, the effects this transition might have on the labour market and workers have to be addressed by adapting education programmes and curricula, reskilling and upskilling workers and supporting those less able to adapt to the changes in those processes.
3.3.2. The EESC believes that, in the future, new materials, drugs and services will be systematically discovered using generative AI techniques. Generative AI can reduce the costs and time required to discover new cures and can create designs optimised to meet specific goals and constraints, accelerating the design process. Europe needs to be at the forefront of these developments to benefit from them. The EU already possesses several supercomputing centres. The EESC calls on the Commission to ensure that the capacities provided are sufficient to enable such projects to be operated in Europe, and to raise awareness among stakeholders about this offer and the possibilities of university-industry collaboration.
3.3.3. To increase access to and use of GPAIS, they need to be explainable. Transparency could, for example, help make automation more efficient, make decision-making easier, prevent mistakes in repetitive tasks, find errors in processes faster and increase productivity. The gains in energy and resource efficiency through better processes and solutions that allow for virtual tests, digital twins and other options for reducing the use of materials must be weighed against the impact of GPAIS on the environment and energy usage.
3.3.4. The EESC is conscious of the concerns that various categories of content creators currently have about the use of generative AI technologies. The EU copyright framework provides for key rules to protect content creators when their work is used by AI developers. The text and data mining (TDM) exceptions introduced with Directive (EU) 2019/790 (6) provide a relevant framework for the use of protected content for AI training. In particular, the TDM exception in Article 4 applies on condition that the rights over the content used for TDM have not been expressly reserved by rightholders. Rightholders may leverage this opt-out mechanism to negotiate commercial licences with AI developers for the use of their content. The EESC calls on the Commission and the future AI Office to ensure that all developers and users of GPAI-based applications comply with the rules.
3.4. EU AI competitiveness
3.4.1. The EESC calls on the Commission to promote competition in AI by ensuring consistency with the EU Digital Markets Act and adapting it to the structure of the AI value chain, and by providing a favourable framework for research, development and the scaling-up of European AI models, with European values serving as guiding principles.
3.4.2. To combat market concentration dominated by large, often non-European, digital companies, the EESC believes it is essential to mobilise the tools of competition policy (assessing the potential abuse of a company’s dominant position, merger control) to prevent, identify and address critical behaviour and situations. Coordinated European and national investment in innovation is needed to help develop EU value chains and value creation in AI.
3.4.3. Competition authorities in the EU need to leverage their capacities and ensure that so-called hyperscalers do not abuse their B2B or B2G market position. They urgently need qualified staff to follow up on the enforcement of EU policies, such as the Digital Markets Act. Agreements such as those between the EU and the US on collaborating on semiconductors to ensure transparency regarding subsidies and provide an early warning system in case of disruption need to be implemented and extended to other stakeholders.
3.4.4. It is important to help European companies and MSMEs access the resources needed to develop AI models, such as research facilities, computing capacity, data and skilled labour, so that they can be not only users of AI solutions but also developers, able to create GPAI models based on their own needs and products. Access to supercomputing capacity must be comprehensive, as the long-term availability of computing power is required to train models.
3.4.5. Once European solutions have been developed, the EU should aim to promote and disseminate them through bodies such as the G7, the G20 and other international fora and standard-setting organisations.
3.4.6. Although the AI Act goes into detail on regulating GPAI, statistical data on the status quo of these models in Europe, their development and their impact is scarce. To improve this, the topic should be covered in the Eurostat ICT survey. The data collected could then be used for the above-mentioned constant evaluation of the AI Act.
Brussels, 23 October 2024.
The President
of the European Economic and Social Committee
Oliver RÖPKE
(1) Article 3(63).
(2) See also the existing and upcoming EESC opinions specifically targeting the use of AI at the workplace and the use of AI to achieve more equal and inclusive societies.
(3) https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
(4) www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
(5) Interim Measures for the Management of Generative Artificial Intelligence Services.
(6) Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (Text with EEA relevance) (OJ L 130, 17.5.2019, p. 92).
ELI: http://data.europa.eu/eli/C/2025/113/oj
ISSN 1977-091X (electronic edition)