Document 52024IE1024
Opinion of the European Economic and Social Committee – Pro-worker AI: levers for harnessing the potential and mitigating the risks of AI in connection with employment and labour market policies (own-initiative opinion)
EESC 2024/01024
OJ C, C/2025/1185, 21.3.2025, ELI: http://data.europa.eu/eli/C/2025/1185/oj (BG, ES, CS, DA, DE, ET, EL, EN, FR, GA, HR, IT, LV, LT, HU, MT, NL, PL, PT, RO, SK, SL, FI, SV)
Official Journal of the European Union, C series (EN), C/2025/1185, 21.3.2025
Opinion of the European Economic and Social Committee
Pro-worker AI: levers for harnessing the potential and mitigating the risks of AI in connection with employment and labour market policies
(own-initiative opinion)
(C/2025/1185)
Rapporteur: Franca SALIS-MADINIER
Advisors: Odile CHAGNY (for the rapporteur), Isaline OSSIEUR (for Group I), Clemens Ørnstrup ETZERODT (for Group I), Aïda PONCE DEL CASTILLO (for Group II), Anthony BOCHON (for Group III)
Plenary Assembly decision: 15.2.2024
Legal basis: Rule 52(2) of the Rules of Procedure
Section responsible: Employment, Social Affairs and Citizenship
Adopted in section: 17.12.2024
Adopted at plenary session: 22.1.2025
Plenary session No: 593
Outcome of vote (for/against/abstentions): 142/103/14
1. Conclusions and recommendations
1.1. The EESC supports the joint declaration of the social partners issued during the last G7 summit in Italy (1), which states that ‘the rapid advancement of artificial intelligence (AI) systems, including generative AI (GAI), is undoubtedly one of the most significant trends affecting the world of work and our societies more broadly. If this change is for the better or for the worst is not pre-determined: it depends on the decisions taken by policy-makers to adopt ambitious and effective policies as well as regulatory frameworks that favour social progress, inclusiveness, equality, economic prosperity, sustainable enterprises, business continuity and resilience, the creation of decent jobs, respect for democratic institutions and workers’ rights […] social dialogue plays a key role in this regard’.
1.2. Social dialogue and worker involvement play a crucial role in preserving workers’ fundamental rights and promoting ‘trustworthy’ AI in the world of work. One of the levers for minimising the risks and harmful impacts of AI systems is stronger involvement of workers and their representatives.
1.3. The EESC insists that legislative initiatives, or any other initiatives adapting existing law, should address the gaps in the protection of workers’ rights at work and ensure that humans remain in control in all human-machine interactions.
1.4. The EU’s existing legal provisions relevant to the use of AI in the workplace should be backed up by explicit guidance on how that legislation applies.
1.5. The EESC supports the swift implementation of Article 4 of the AI Act (2), which states that providers and deployers of AI systems shall take measures to ensure a sufficient level of AI literacy among their staff.
1.6. Public authorities must implement skills development initiatives for workers to ensure that artificial intelligence systems enhance, rather than replace, humans.
1.7. The EESC strongly recommends that the EU and the Member States develop, through public policy, AI modules in education and training and make awareness-raising materials on AI available to citizens. Formal education on AI technologies should be provided at an early stage, thereby allowing for the acculturation of all citizens.
1.8. In order to avoid the fragmentation of Member States’ current initiatives and to ensure a level playing field in the single market, the EESC issues a strong call for reinforced social dialogue on the deployment of AI systems on the basis of an ad hoc EU legal instrument that includes provisions to achieve the following more effectively:
1.9. The EESC calls on the European AI Office to establish close cooperation with the European cross-sectoral social partners, in order to ensure that the role of social dialogue is adequately reflected in the AI Office’s upcoming guidelines and to produce clarifications on all AI systems. In-depth, robust and clearly structured coordination channels should be established between the AI Office and the European Commission’s DG EMPL and DG CONNECT.
2. General comments
2.1. The objective of this own-initiative opinion is to deliver recommendations and specific proposals to policy-makers at European and national level with a view to creating an environment conducive to the positive deployment of AI systems and tools in the world of work. Taking into account the request from the incoming Polish Presidency, a specific chapter is dedicated to exploring levers for harnessing the potential and mitigating the risks of AI in employment and labour market policies. The opinion draws on input gathered through two workshops held at the EESC using the Joint Research Centre’s foresight methodology, with the participation of European and international experts.
2.2. AI has been identified as a ‘general purpose’ technology (8), enabling transformative digital applications with high potential for social and economic impact (9). AI, big data and high-performance computing are intertwined. Generative AI (GAI) includes systems such as sophisticated large language models that can create new content, ranging from text to images, by learning from extensive training data. This opinion takes a broad approach, examining both algorithmic management systems and the wider use of digital workplace technologies, such as surveillance tools and digital HR platforms, that have a significant impact on workers.
2.3. While estimates show that the uptake of AI tools in European companies has increased rapidly since the emergence of GAI tools, there appears to be a significant discrepancy in uptake between large enterprises and SMEs (10). Despite the lack of consistent data and knowledge on exactly how AI will develop, it is reasonable to expect that AI will affect the world of work in many ways and could become a prominent feature of many people’s jobs (11).
2.4. The scenarios explored at our workshops on the future of AI in the world of work (12) demonstrate that it is not too late to influence the development of AI at work in order to ensure that it is implemented to the benefit of all. The visionary scenario, defined as the ‘ideal scenario’, showed that adapting EU law and strengthening social dialogue at the most appropriate level can play a significant role in shaping trusted uses of AI at work, encouraging European companies to invest in research, development and innovation.
2.5. The impact of AI on the world of work is indirectly addressed in EU legislation relating to social issues (13). The EU institutions have also invested significant resources in developing a shared digital agenda and a digital single market.
2.6. In line with the European Commission President’s proposal to include initiatives looking at how digitalisation is impacting the world of work, from AI management to telework, and in line with the recent social partners’ declaration issued at the G7 summit, the EESC maintains that any legislative initiative should address the gaps in the protection of workers’ rights provided for in current legislation relevant to AI and ensure that the ‘human in control’ principle is effectively applied.
2.7. The EESC asks the European Commission to ensure that existing legal provisions relevant to the use of AI in the workplace are backed up by explicit guidance.
2.8. The social partners and citizens have an essential role to play in preserving workers’ fundamental rights and in promoting ‘trustworthy’ AI in the world of work and in society as a whole (14).
3. Definition of AI systems in the context of this opinion
3.1. For the purpose of this opinion, the EESC recommends using the definition set out in the AI Act (15).
3.2. According to Article 3(1) of the AI Act, an ‘“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.
3.3. This definition is broad enough to cover the three essential features of AI systems, namely:
3.4. Moreover, the EESC highlights three main specific features of AI systems used in the workplace compared with previous digital technologies:
4. Impact of AI on the labour market and working conditions
4.1. Impact on employment
4.1.1. Given AI’s rapid evolution, the uncertainty around its integration across production processes, and its effects on organisations and workers, the impacts of AI on employment cannot be predetermined. The extent to which AI creates, modifies, supports or destroys jobs also depends on the purpose of its adoption (process optimisation, cost reduction, efficiency, innovation, etc.).
4.1.2. AI has potential value in: (i) matching job profiles with prospective candidates, thereby improving the efficiency of employment policies; (ii) recruiting suitable candidates; (iii) accelerating onboarding processes; (iv) managing performance; (v) conceiving effective personalised training methods and identifying digital skills gaps; (vi) collecting and analysing big data; (vii) drawing up minutes; (viii) drafting job descriptions; and (ix) managing retention and turnover, as well as predicting future hiring needs (21).
4.1.3. The biggest potential economic uplift from AI is likely to come from improved productivity (22). This includes the automation of routine and tedious tasks (23), complementing employees’ capabilities and freeing them up to focus on more stimulating work that adds greater value (24). Recent technological developments enable AI systems to replace or complement human roles in a much more diverse range of fields than previous technologies, including non-routine and cognitive tasks (25), notably in the health and education sectors.
4.1.4. Recent studies highlight the ‘complementation’ potential of GAI technologies (26). The ILO estimates (27) that in high-income countries the potential for transformation is more than twice as great as the potential for automation (13,4 % against 5,1 %, respectively). Empirical findings, however, remain inconclusive when it comes to the effects on employment and productivity (28). The actual scale of productivity gains is uncertain (29) and could be exaggerated (30). As regards GAI, the proportion of occupations for which there is uncertainty about the potential for automation and complementation is high (31) (11,6 % in high-income countries (32)). GAI can improve access to work for people with disabilities (33), but may also automate the jobs they occupy.
4.2. AI will have a significant impact on skills needs (34)
4.2.1. On the one hand, AI will replicate some manual skills and fine psychomotor abilities, as well as cognitive skills such as comprehension, planning, and advising. On the other hand, the skills needed to develop and maintain AI systems, and the skills required to adopt, use and interact with AI applications, will become more important. Demand for basic digital skills and data science (35) will increase. There will be a need for cognitive and transferable skills that best complement AI (including creative problem solving, originality, and other transferable skills such as social and managerial skills (36)). The skills and knowledge of managers and business leaders will also matter for the adoption of AI (37). There is evidence that a lack of adequate AI-related skills acts as a barrier to the use of AI at work (38). AI can also be used to improve adult learning systems.
4.3. Working conditions
4.3.1. Working conditions are the field in which the impacts of AI systems are the most ambiguous (39). This is because AI systems have the potential to transform organisational and managerial activities and control (40), as well as to redesign organisational processes (41), in profound ways.
4.3.2. If used sensibly, AI-enabled tools could improve occupational health and safety conditions by helping to lighten workers’ workload (42) and improve work-life balance (43) and mental health at work (44). AI tools may help to remove or reduce hazardous tasks and help avoid musculoskeletal disorders. If redistributed in a fair way, the time workers save on tasks can help improve their wellbeing. When used to replace repetitive and tedious tasks, AI tools can increase job satisfaction (45). Using AI applications can lead to better, fairer and non-discriminatory decisions and practices (46).
4.3.3. AI applications may ensure a more comprehensive distribution of better-quality information, provide deep data insights, and improve the quality of well-delimited decisions.
4.3.4. On the other hand, risks may arise if the conditions for the trustworthy deployment of AI systems at work are not met (47).
4.3.5. Algorithmic management systems enable a form of pervasive control that is much more powerful than any previous form of control, with potentially harmful impacts for workers (48). Evidence has been gathered (49) showing the adverse impact that algorithmic management can have on workers’ health and safety, and the lack of corporate interest in employing such technologies to improve occupational health and safety.
4.3.6. Workers may face abusive surveillance, discrimination, loss of autonomy, and psychosocial risks (50). AI tools can disrupt workplace collectives and exacerbate feelings of isolation among employees.
4.3.7. AI systems may reinforce information asymmetries between management and workers (51).
4.4. Tackling inequalities
4.4.1. Recent studies focused on GAI highlight that these systems have the potential to reduce inequalities, to perpetuate them, or even to create new ones.
4.4.1.1. Inequalities between high-skilled and less-skilled workers could potentially be reduced. Most studies agree that white-collar, higher-skilled occupations will face greater employment-related risks as a result of the adoption of GAI (52). If the purpose of deploying GAI is to increase company innovation, improve work organisation and boost quality jobs, less-skilled and less-experienced workers will benefit the most from GAI tools, with AI models sharing the best practices of expert workers with new recruits (53).
4.4.1.2. Women, low-skilled workers and older workers are the vulnerable groups most affected by AI (54). Because clerical jobs are more exposed to the risks of automation, the employment effects are gendered: the share of women potentially affected by automation is more than double that of men (55), and women are also more often concerned when it comes to their jobs’ potential for transformation.
4.4.1.3. The low proportion of women among graduates in STEM fields or in computer science and information technology in some countries raises significant challenges in terms of gender employment and wage inequalities in the context of the spread of AI (56). The issue of gender equality should be better addressed at all levels. Social dialogue and better policies to anticipate these trends are key.
4.4.2. New global inequalities
4.4.2.1. The development of AI is supported by invisible workers, mostly located in low-income countries and working in poor conditions (57).
4.5. Social dialogue and worker involvement are key for harnessing the potential of AI
4.5.1. Because of the specific features of AI systems and their impact on organisational issues, acceptance, reliability and trust in the development and adoption of AI are key to promoting the positive effects of AI (58).
4.5.2. There is evidence that consulting workers’ representatives leads to better performance and working conditions (59). When workers’ representatives use the ‘collective voice’ system of codetermination, this helps protect workers’ privacy, autonomy and discretion against workforce management technologies (60).
4.5.3. To ensure effective social dialogue in all entities where AI is deployed, the EESC calls for explicit guidance to be provided in relevant legal and non-legal texts.
4.5.4. People and stakeholders should choose the uses and purpose of AI in our societies and at work. The social partners and citizens should be involved in public debate and in AI literacy and training initiatives.
4.6. AI-preparedness is essential (61)
4.6.1. To harness the potential of AI deployment at work, the EESC calls on the European Commission to encourage:
4.6.1.1. policy-makers at all levels to review skills policies in order to ensure that emerging AI systems will complement workers rather than replace them;
4.6.1.2. public policies to develop AI modules in early education and training and to make awareness-raising materials on AI available to citizens. These policies should provide formal education on AI technologies at an early stage, thus allowing all citizens to become acculturated;
4.6.1.3. the swift implementation of Article 4 of the AI Act, under which providers and deployers of AI systems must adopt measures to ensure a sufficient level of AI literacy among their staff.
5. Enforcement and adaptation of current EU legislation
5.1. The GDPR (62) is not specifically designed to address workplace data protection issues.
5.1.1. The legal bases of the GDPR as a stand-alone Regulation are not sufficient to mitigate the risk of harmful AI systems at work, as they do not take into account the specific nature of AI systems when it comes to the processing of personal data in the employment and workplace context.
5.1.2. There is a consensus that neither employee consent nor legitimate interest constitutes a valid basis for data processing in the context of asymmetric power dynamics in the workplace (63). A recent survey of 6 300 workers in Nordic countries provides evidence that this asymmetric power is abused across all the sectors covered by the study (64).
5.1.3. Article 15(1)(h) of the GDPR on transparency requirements has limited scope, as it excludes semi-automated decision-making.
5.1.4. Employees can be affected more by the processing of data collected from other employees than by data collected relating to themselves (65). The rights of trade unions or workers’ representatives to control data collection and processing are limited. The assumption that privacy-related harm is always individual constitutes a significant weakness in the context of algorithmic management systems. Moreover, the provisions on data protection impact assessments are insufficient.
5.1.5. Article 88 of the GDPR states that the processing of personal data in the employment context is to be addressed via ‘more specific rules’, to be set by the Member States in law or through collective agreements (66). Since the GDPR entered into force, Article 88 has been poorly implemented and has remained a dead letter in almost all Member States (67).
5.1.6. To give workers more power over their data, the EESC asks the European Commission to adopt measures to ensure that Article 88 of the GDPR is effectively enforced.
5.2. The AI Act
5.2.1. The overarching goal of the AI Act is to stimulate the uptake and spread of AI in the EU by advancing a uniform legal framework for ‘trustworthy AI’, based on the logic that if the risks associated with using AI are addressed, its uptake will increase. As a cross-cutting piece of legislation, the Regulation pursues a number of overriding objectives, such as a high level of protection of health, safety and fundamental rights. The legal basis of the AI Act is primarily rooted in the single market provisions of the TFEU. The AI Act is not specifically designed to address workplace issues (68).
5.2.2. The AI Act imposes limitations on the use of AI in the workplace through the prohibition of certain systems and strict requirements associated with the provision or deployment of AI systems, especially those categorised as high-risk due to their application in employment, worker management and access to self-employment. In this regard, the AI Act represents an important step in the right direction.
5.2.3. The EESC identifies several loopholes in the AI Act with respect to fundamental rights in the workplace.
5.2.3.1. The AI Act acknowledges that an ex ante definition of risk is not sufficient to protect against the potential harm caused by AI systems, and that such harm cannot be fully determined in advance, since it also depends on the context of deployment (69). Systems for managing the risks associated with high-risk AI systems have to be implemented throughout the whole lifecycle of the system (Article 9). However, the AI Act explicitly recognises the possibility that, although an AI system may be in compliance with the Regulation, it can still pose a risk to the health or safety of persons, or to fundamental rights (Article 82) (70).
5.2.3.2. The obligation to carry out a Fundamental Rights Impact Assessment on high-risk systems applies only to bodies governed by public law, private entities providing a public service, and banking and insurance entities (Article 27).
5.2.3.3. Workers’ representatives only have a right to be informed, not to be consulted (Article 26(7)). Regulatory sandboxes (Article 57) and the testing of high-risk systems in real-world conditions (Article 60) are allowed exclusively before the system is put into service.
5.2.3.4. The EESC considers that experimentation is crucial to allow organisations to identify risks, test for unintended consequences and fine-tune algorithms in a controlled environment before full-scale deployment. The AI Act should be backed up by explicit guidance to provide legal clarity and ensure its unambiguous application and enforcement.
5.2.4. In order to tackle these issues, the EESC calls on the AI Office to establish close cooperation with the European cross-sectoral social partners when developing its first guidelines, in order to produce clear guidelines and clarifications on AI systems that infer the emotions of natural persons. In-depth, robust and clearly structured coordination channels should be established between the AI Office and the European Commission’s DG EMPL and DG CONNECT.
5.2.5. The EESC considers that it is up to providers to carry out an ex ante Fundamental Rights Impact Assessment for high-risk systems before they are deployed, in all entities.
5.2.6. The EESC asks for clear guidelines on how sandboxes and real-world testing can be used to be included in any upcoming initiative on AI at work.
5.3. Directives on informing and consulting employees, safety and health of workers at work, and the autonomous European Social Partners framework agreement on digitalisation
5.3.1. Directive 2002/14/EC (71) guarantees collective rights to information and consultation for employee representatives and covers any anticipatory measure that poses a threat to employment, as well as any decision that leads to ‘substantial changes’ in work organisation (Article 4). Under Directive 89/391/EEC (72), employers have to ensure the safety and health of workers in all work-related aspects and must provide information and training and consult workers’ representatives on health and safety. However, the opaque way in which AI tools are introduced, their evolving nature, and the complexity involved in defining ‘substantial changes’ require these Directives to be strengthened through explicit guidance.
5.3.2. The iterative and dynamic nature of AI systems resonates with the iterative dialogue process set out in the European Social Partners Framework Agreement on Digitalisation (73).
5.3.3. However, the agreement only addresses AI issues marginally, and specific AI-related actions by the national social partners in connection with its implementation have been rather limited, with most of them only addressing issues relating to teleworking and the right to disconnect.
5.3.4. The EESC asks the European Commission to address the context of AI clearly in an ad hoc instrument, in order to take into account, in the autonomous agreement, the dynamic dimension of social dialogue and the health and safety risk assessments of AI systems.
5.4. The Platform Work Directive (PWD) (74)
5.4.1. The PWD contains provisions, set out in Chapter III (75), that could effectively regulate automated monitoring and decision-making systems.
5.4.2. The provisions of Chapter III of the PWD apply only to persons performing platform work. However, algorithmic management practices in regular workplaces are already a reality (76), for example in allocating and optimising work shifts, screening and assessing job applicants, assessing employment performance, and addressing human resources issues. The EESC calls on the European Commission to broaden the scope of the provisions of Chapter III of the PWD to cover all workers.
Brussels, 22 January 2025.
The President
of the European Economic and Social Committee
Oliver RÖPKE
(1) Shaping the advancement of artificial intelligence through social dialogue.
(2) OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj.
(4) OJ L, 2024/2831, 11.11.2024, ELI: http://data.europa.eu/eli/dir/2024/2831/oj.
(5) OJ L 80, 23.3.2002, p. 29.
(6) OJ L 183, 29.6.1989, p. 1.
(7) Framework Agreement on Digitalisation, ETUC Resource Centre.
(8) World Economic Forum, Markets of Tomorrow: Pathways to a New Economy, 2020.
(9) McKinsey, Shaping the digital transformation in Europe, 2020.
(11) STOA (Scientific foresight) study on The use of artificial intelligence in workplace management, 2022.
(13) See Annex.
(15) OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj.
(16) K. C. Kellogg et al. (2020), Algorithms at Work: The New Contested Terrain of Control.
(17) This is the case for all AI tools based on adaptive machine learning. See the updated OECD definition.
(18) OECD (2024), The impact of Artificial Intelligence on productivity, distribution and growth.
(19) Kellogg et al. (2020); Cazzaniga et al (2024), Gen-AI: Artificial Intelligence and the Future of Work.
(20) T. Babina et al (2024), ‘Firm Investments in Artificial Intelligence Technologies and Changes in Workforce Composition’.
(21) EPRS (2022), AI and digital tools in workplace management and evaluation; Tambe et al ‘Artificial Intelligence in Human Resources Management: Challenges and a Path Forward’, California Management Review, 2019.
(22) PwC (2017), Sizing the prize.
(23) EPRS (2022), Hmoud, B. and Laszo V. L. (2019), Will Artificial Intelligence Take Over Human Resources Recruitment and Selection, Network Intelligence Studies.
(24) PwC (2017).
(25) OECD (2024).
(26) M. Comunale et al.(2024), ‘The Economic Impacts and the Regulation of AI: A Review of the Academic Literature and Policy Actions’, IMF Working Paper No. 2024/65.
(27) Gmyrek P. et al. (2023), ‘Generative AI and jobs: A global analysis of potential effects on job quantity and quality’, ILO Working Paper 96.
(28) Comunale et al. (2024).
(29) Comunale et al. (2024).
(30) D. Acemoglu (2024), The Simple Macroeconomics of AI, Massachusetts Institute of Technology.
(31) Gmyrek P. et al. (2023).
(32) Gmyrek P. et al. (2023), p. 37.
(33) OECD (2024), Who will be the workers most affected by AI?, OECD AI Working Paper no. 26.
(34) OECD (2023), OECD Employment Outlook 2023.
(35) Report of the French Generative AI Commission, 2024.
(36) Alekseeva, L. et al. (2021), ‘The demand for AI skills in the labor market’, Labour Economics, Vol. 71.
(37) OECD (2023), OECD Employment Outlook 2023.
(38) OECD (2023), The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers.
(39) Böhmer & H. Schinnenburg (2023), Critical exploration of AI-driven HRM to build up organizational capabilities.
(40) Kellogg et al. (2020); J. Adams-Prassl, H. Abraha et al. (2023), ‘Regulating algorithmic management: A blueprint’, European Labour Law Journal, Vol. 14(2), pp. 124-151.
(41) Nurski L. (2024), AI at Work, why there’s more to it than task automation, CEPS Explainer.
(42) EPRS (2022); CIPD and PA Consulting, People and machines: from hype to reality, Chartered Institute of Personnel and Development, 2019.
(43) EP study, Improving working conditions using Artificial Intelligence, 2021.
(44) Workplace Intelligence, AI at Work 2020 Study.
(45) OECD (2023), OECD Employment Outlook 2023.
(46) Pessach, D., Singer, G., Avrahami, D., et al. (2020), Employees recruitment: A prescriptive analytics approach via machine learning and mathematical programming. Decision Support Systems.
(47) V. Mandinaud, A. Ponce del Castillo (2024), ‘AI systems, risks and working conditions’, in Artificial intelligence, labour and society, ETUI.
(48) Kellogg et al. 2020.
(49) EU-OSHA (2024), Worker management through AI - From technology development to the impacts on workers and their safety and health.
(50) EU-OSHA (2022), OSH Pulse - Occupational safety and health in post-pandemic workplaces.
(51) OECD (2023).
(52) Eloundou, T. et al. (2023), GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
(53) Comunale et al. (2024); Brynjolfsson E. et al. (2023), Generative AI at work, NBER Working Paper No. 3116.
(54) OECD 2024.
(55) ILO Global Survey on Microtask Workers (2017); Tubaro et al. (2020), ‘The trainer, the verifier, the imitator: Three ways in which human platform workers support artificial intelligence’, Big Data & Society, 7(1).
(56) Report of the French Generative AI Commission, 2024.
(57) At global level, the development of AI is supported by invisible workers, mostly located in low-income countries, with poor working conditions. Tubaro et al. (2020).
(58) OECD (2024).
(59) OECD (2023), ‘The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers’.
(60) FEPS (2024), Algorithm by and for the workers.
(61) Cazzaniga et al. (2024).
(62) OJ L 119, 4.5.2016, p. 1.
(63) EPRS (2022).
(64) FEPS (2024).
(65) Martin Tisné (2020), The Data Delusion: protecting individual data isn’t enough when the harm is collective, Luminate, Stanford University’s Cyber Policy Center.
(66) Article 88 is still massively underutilised in the EU Member States.
(67) Abraha H. (2023), Article 88 GDPR and the Interplay between EU and Member State Employee Data protection rules, The Modern Law Review.
(68) Aida Ponce Del Castillo, The EU’s AI Act: governing through uncertainties and complexity, identifying opportunities for action, global workplace law and policy, kluwerlawonline.com, 2024.
(69) Isabel Kusche (2024), Possible harms of artificial intelligence and the EU AI act: fundamental rights and risk, Journal of Risk Research.
(70) Isabel Kusche (2024).
(71) OJ L 80, 23.3.2002, p. 29.
(72) OJ L 183, 29.6.1989, p. 1.
(73) https://resourcecentre.etuc.org/agreement/framework-agreement-digitalisation.
(74) OJ L, 2024/2831, 11.11.2024, ELI: http://data.europa.eu/eli/dir/2024/2831/oj.
(75) J. Adams-Prassl, H. Abraha et al (2023).
(76) EU Science Hub (2024), ‘Algorithmic management practices in regular workplaces are already a reality’.
ANNEX I
The following amendment, which received at least a quarter of the votes cast, was rejected in the course of the debate (Rule 14(3) of the Rules of Procedure):
AMENDMENT 1
SOC/803
Pro-worker artificial intelligence
Replace the whole opinion presented by the SOC section with the following text (reason provided at the end of the document):
Amendment
1. Conclusions and Recommendations
2. The opportunities and challenges of AI for the EU’s economy
2.7. Opportunities and challenges of AI in the world of work
3. The European Framework covering the use of AI at work
3.1. The existing EU legislative framework
3.2. Role of Social Dialogue
3.3. Assessment of the current situation
Reason
This text comprises an amendment which aims to set out a generally divergent view from the opinion presented by the section and is therefore to be described as a counter-opinion. It sets out the reasons why the EESC considers that there is no need for additional legislation on AI in the world of work and why the Commission should leave space for companies to develop responsible and ethical approaches to work with AI technologies within the current legal framework.
Outcome of the vote
In favour: 112
Against: 136
Abstention: 11