OPINION

European Economic and Social Committee

Pro-worker artificial intelligence

_____________

Pro-worker AI: levers for harnessing the potential and mitigating the risks of AI in connection with employment and labour market policies

(own-initiative opinion)

SOC/803

Rapporteur: Franca SALIS-MADINIER


Advisors

Odile CHAGNY (for the rapporteur)

Isaline OSSIEUR (for Group I)

Clemens Ørnstrup ETZERODT (for Group I)

Aïda PONCE DEL CASTILLO (for Group II)

Anthony BOCHON (for Group III)

Plenary Assembly decision: 15/2/2024

Legal basis: Rule 52(2) of the Rules of Procedure

Section responsible: Employment, Social Affairs and Citizenship

Adopted in section: 17/12/2024

Adopted at plenary session: 22/1/2025

Plenary session No: 593

Outcome of vote (for/against/abstentions): 142/103/14

1.Conclusions and recommendations

1.1The EESC supports the joint declaration of the social partners issued during the last G7 summit in Italy 1 , which states that ‘the rapid advancement of artificial intelligence (AI) systems, including generative AI (GAI), is undoubtedly one of the most significant trends affecting the world of work and our societies more broadly. If this change is for the better or for the worst is not pre-determined: it depends on the decisions taken by policy-makers to adopt ambitious and effective policies as well as regulatory frameworks that favour social progress, inclusiveness, equality, economic prosperity, sustainable enterprises, business continuity and resilience, the creation of decent jobs, respect for democratic institutions and workers’ rights […] social dialogue plays a key role in this regard’.

1.2Social dialogue and worker involvement play a crucial role in preserving workers’ fundamental rights and promoting ‘trustworthy’ AI in the world of work. One of the levers for minimising the risks and harmful impacts of AI systems is stronger involvement of workers and their representatives.

1.3The EESC insists that legislative initiatives, or any other initiatives adapting existing law, should address the gaps in the protection of workers’ rights at work and ensure that humans remain in control in all human-machine interactions.

1.4The EU’s existing legal provisions relevant to the use of AI in the workplace should be backed up by explicit guidance concerning existing legislation.

1.5The EESC supports the swift implementation of Article 4 of the AI Act 2 , which states that providers and deployers of AI systems shall take measures to ensure a sufficient level of AI literacy among their staff.

1.6Public authorities must implement skills development initiatives for workers to ensure that artificial intelligence systems enhance, rather than replace, humans.

1.7The EESC strongly recommends that the EU and the Member States develop AI modules in education and training and make awareness-raising materials on AI available to citizens. Formal education on AI technologies should be provided at an early stage, thereby allowing all citizens to become acculturated to them.

1.8In order to avoid the fragmentation of Member States’ current initiatives and to ensure a level playing field in the single market, the EESC strongly calls for reinforced social dialogue on the deployment of AI systems, on the basis of an ad hoc EU legal instrument that includes provisions to achieve the following more effectively:

·enable the enforcement of Article 88 of the General Data Protection Regulation (GDPR) 3 and give explicit guidance on consent and legitimate interest;

·broaden the scope of the provisions contained in the Platform Work Directive (PWD) 4 , addressing the challenges that algorithmic management systems pose to all workers;

·strengthen the rules applicable under Directive 2002/14/EC 5 when high-risk AI systems are introduced, and give explicit guidance on the provisions of Directive 89/391/EEC 6 on safety and health at work;

·integrate the dimension of the dynamic process of social dialogue and the risk assessments of AI systems, as defined in the Autonomous European Social Partners Framework Agreement on Digitalisation of 2020 7 ;

·extend the communication of Data Protection Impact Assessments (DPIA) to workers’ representatives, as provided for under the PWD;

·provide ex-ante Fundamental Rights Impact Assessments (FRIA), to be carried out by providers before high-risk systems are deployed; and

·establish clear guidelines on how sandboxes and real-world testing conditions can be used.

1.9The EESC calls on the European AI Office to establish close cooperation with the European cross-sectoral social partners, in order to ensure that the role of social dialogue is adequately reflected in the AI Office’s upcoming guidelines and to produce clarifications on all AI systems. In-depth, robust and clearly structured coordination channels should be established between the AI Office and the European Commission’s DG EMPL and DG CONNECT.

2.General comments

2.1The objective of this own-initiative opinion is to deliver recommendations and specific proposals to policy-makers at European and national level with a view to creating an environment conducive to the positive deployment of AI systems and tools in the world of work. Taking into account the request from the incoming Polish Presidency, a specific chapter is dedicated to exploring levers for harnessing the potential and mitigating the risks of AI in employment and labour market policies. This opinion has gathered input through two workshops held at the EESC using the Joint Research Centre’s foresight methodology, with the participation of European and international experts.

2.2AI has been identified as a ‘general purpose’ technology 8 , enabling transformative digital applications with high potential for social and economic impact 9 . AI, big data and high-performance computing are intertwined. Generative AI (GAI) includes systems such as sophisticated large language models that can create new content, ranging from text to images, by learning from extensive training data. This opinion takes a broad approach, examining both algorithmic management systems and the wider use of digital workplace technologies, such as surveillance tools and digital HR platforms, that have a significant impact on workers.

2.3While estimates show that the uptake of AI tools in European companies has increased rapidly since the emergence of GAI tools, there appears to be a significant discrepancy in uptake between large enterprises and SMEs 10 . Despite the lack of consistent data and knowledge on exactly how AI will develop, it is reasonable to expect that AI will affect the world of work in many ways and could become a prominent feature of many people’s jobs 11 .

2.4The scenarios explored at our workshops on the future of AI in the world of work 12 demonstrate that it is not too late to influence the development of AI at work in order to ensure that it is implemented to the benefit of all. The visionary scenario defined as the ‘ideal scenario’ showed that adapting EU law and strengthening social dialogue at the most appropriate level can play a significant role in shaping the trusted uses of AI at work, encouraging European companies to invest in research, development and innovation.

2.5The impact of AI on the world of work is indirectly addressed in EU legislation relating to social issues 13 . The EU institutions have also invested significant resources in developing a shared digital agenda and a digital single market.

2.6In line with the European Commission President’s proposal to include initiatives looking at how digitalisation is impacting the world of work, from AI management to telework, and in line with the recent social partners’ declaration issued at the G7 summit, the EESC maintains that any legislative initiative should address the gaps in the protection of workers’ rights in current legislation relevant to AI and ensure that the ‘human in control’ principle is effectively applied.

2.7The EESC asks the European Commission to ensure that existing legal provisions relevant for the use of AI in the workplace are backed up by explicit guidance.

2.8The social partners and citizens have an essential role to play in preserving workers’ fundamental rights and in promoting ‘trustworthy’ AI in the world of work and in society as a whole 14 .

3.Definition of AI systems in the context of this opinion

3.1For the purpose of this opinion, the EESC recommends using the definition set out in the AI Act 15 .

3.2According to Article 3(1) of the AI Act, an ‘“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.

3.3This definition is broad enough to cover the three essential features of AI systems, namely:

·to enable automated decision-making;

·to do so with a degree of autonomy;

·to interact with workers and impact them.

3.4Moreover, the EESC highlights three main specific features of AI systems used in the workplace compared with previous digital technologies:

·AI systems can be instantaneous, interactive and opaque 16 .

·An important group 17 of AI systems have a capacity for adaptiveness once introduced in the workplace 18 . This can introduce uncertainty and unpredictability over how an AI system will behave in a specific context.

·AI systems offer great opportunities to improve allocation and coordination, to facilitate efficient decision-making within businesses, and to improve organisational learning 19 . As an emerging method of invention, AI is expected to foster innovation 20 . When increasing its scale (size of parameters, training data, etc.), an AI system may even produce ‘accidental discoveries’, thereby fostering innovation.

4.Impact of AI on the labour market and working conditions

4.1Impact on employment

4.1.1Due to AI’s rapid evolution, the uncertainty around its integration across production processes, and its effects on organisations and workers, the impacts of AI on employment and workers cannot be predetermined. The extent to which AI creates, modifies, supports or destroys jobs also depends on the purpose of its adoption (process optimisation, cost reduction, efficiency, innovation, etc.).

4.1.2AI has potential value in: (i) matching job profiles with prospective candidates, thereby improving efficiency in employment policies; (ii) recruiting suitable candidates; (iii) accelerating onboarding processes; (iv) managing performance; (v) conceiving effective personalised training methods and identifying digital skills gaps; (vi) collecting and analysing big data; (vii) drawing up minutes; (viii) drafting job descriptions; and (ix) managing retention and turnover, as well as predicting future hiring needs 21 .

4.1.3The biggest potential economic uplift from AI is likely to come from improved productivity 22 . This includes automation of routine and tedious tasks 23 , complementing employees’ capabilities and freeing them up to focus on more stimulating work that adds greater value 24 . Recent technological developments enable AI systems to replace or complement human roles in a much more diverse range of fields than previous technologies, including non-routine and cognitive tasks 25 , notably in health and educational sectors.

4.1.4Recent studies highlight the ‘complementation’ potential of GAI technologies 26 . The ILO estimates 27 that the potential for transformation is more than twice as great as the potential for automation in high-income countries (13.4% versus 5.1%). Empirical findings, however, remain inconclusive when it comes to the effects on employment and productivity 28 . The actual scale of productivity gains is uncertain 29 and could be exaggerated 30 . Regarding GAI, the proportion of occupations for which there is uncertainty about the potential for automation and complementation is high 31 (11.6% in high-income countries 32 ). GAI can improve access to work for people with disabilities 33 , but may also automate the jobs they occupy.

4.2AI will have a significant impact on skills needs 34

4.2.1On the one hand, AI will replicate some manual skills and fine psychomotor abilities, as well as cognitive skills such as comprehension, planning, and advising. On the other hand, the skills needed to develop and maintain AI systems, and the skills required to adopt, use and interact with AI applications, will become more important. Demand for basic digital skills and data science 35 will increase. There will be a need for cognitive and transferable skills that best complement AI (including creative problem solving, originality, and other transferable skills such as social and managerial skills 36 ). The skills and knowledge of managers and business leaders will also matter for the adoption of AI 37 . There is evidence that a lack of adequate AI-related skills acts as a barrier to the use of AI at work 38 . AI can also be used to improve adult learning systems.

4.3Working conditions

4.3.1Working conditions are the field where the impacts of AI systems are the most ambiguous 39 . This is because AI systems have the potential to transform organisational and managerial activities and control 40 , as well as to redesign organisational processes 41 in profound ways.

4.3.2If used sensibly, AI-enabled tools could improve occupational health and safety conditions by helping lighten workers’ workload 42 and improve work-life balance 43 and mental health at work 44 . AI tools may help to remove or reduce hazardous tasks and help avoid musculoskeletal disorders. If redistributed in a fair way, the time workers save on tasks can help improve their wellbeing. When used to replace repetitive and tedious tasks, AI tools can increase job satisfaction 45 . Using AI applications can lead to better, fairer and non-discriminatory decisions and practices 46 .

4.3.3AI applications may ensure a more comprehensive distribution of better-quality information, provide deep data insights, and improve the quality of delimited decisions.

4.3.4On the other hand, some risks may arise if the conditions for the trustworthy deployment of AI systems at work are not met 47 .

4.3.5Algorithmic management systems enable a form of pervasive control much more powerful than any previous form of control, with potentially harmful impacts for workers 48 . Evidence has been gathered 49 showing the adverse impact that algorithmic management can have on workers’ health and safety and the lack of corporate interest in employing such technologies to improve occupational health and safety.

4.3.6Workers may face abusive surveillance, discrimination, loss of autonomy, and psychosocial risks 50 . AI tools can disrupt workplace collectives and exacerbate feelings of isolation among employees.

4.3.7AI systems may reinforce information asymmetries between management and workers 51 .

4.4Tackling inequalities

4.4.1Recent studies focused on GAI highlight that these systems have the potential to reduce inequalities, to perpetuate them, or even to create new forms of inequality.

4.4.1.1Inequalities between high-skilled and less-skilled workers could potentially be reduced. Most studies agree that white-collar, higher-skilled occupations will face greater employment-related risks as a result of the adoption of GAI 52 . If the purpose of deploying GAI is to increase company innovation, improve work organisation, and boost quality jobs, less-skilled and less-experienced workers will benefit the most from GAI tools, with AI models sharing the best practices of expert workers with new recruits 53 .

4.4.1.2Women, low-skilled workers and older workers are the vulnerable groups most affected by AI 54 . Because clerical jobs are more exposed to the risks of automation, employment effects are gendered: more than double the share of women is potentially affected by automation 55 , and women are more often concerned when it comes to their jobs’ potential for transformation.

4.4.1.3The low proportion of women among graduates in STEM fields or in computer science and information technology in some countries raises significant challenges in terms of gender employment and wage inequalities in the context of the spread of AI 56 . The issue of gender equality should be better addressed at all levels. Social dialogue and better policies to anticipate these trends are key.

4.4.2New global inequalities

4.4.2.1The development of AI is supported by invisible workers, mostly located in low-income countries and working in poor conditions 57 .

4.5Social dialogue and worker involvement are key for harnessing the potential of AI.

4.5.1Because of the specific features of AI systems and their impact on organisational issues, acceptance, reliability and trust in the development and adoption of AI are key in promoting the positive effects of AI 58 .

4.5.2There is evidence that consulting workers’ representatives leads to better performance and working conditions 59 . When workers’ representatives use the ‘collective voice’ system of codetermination, this helps protect workers’ privacy, autonomy and discretion against workforce management technologies 60 .

4.5.3To ensure an effective social dialogue in all entities where AI is deployed, the EESC calls for explicit guidance to be provided in relevant legal and non-legal texts.

4.5.4People and stakeholders should choose the uses and purpose of AI in our societies and at work. The social partners and citizens should be involved in public debates, literacy and training.

4.6AI-preparedness is essential 61 .

4.6.1To harness the deployment of AI at work, the EESC calls on the European Commission to encourage:

4.6.1.1policy-makers at all levels to review skills policies in order to ensure that emerging AI systems will complement workers rather than replacing them;

4.6.1.2public policies to develop AI modules in early education and training and to make awareness-raising materials on AI available to citizens. These policies should provide formal education on AI technologies at an early stage, thus allowing all citizens to become acculturated;

4.6.1.3swift implementation of Article 4 of the AI Act, under which providers and deployers of AI systems must adopt measures to ensure a sufficient level of AI literacy among their staff.

5.Enforcement and adaptation of current EU legislation

5.1The GDPR 62 is not specifically designed to address workplace data protection issues.

5.1.1The legal bases of the GDPR as a stand-alone Regulation are not sufficient to mitigate the risk of harmful AI systems at work, as they do not take into account the specific nature of AI systems when it comes to the processing of personal data in the employment and workplace context.

5.1.2There is a consensus that neither employee consent nor legitimate interest constitutes a valid basis for data processing in the context of asymmetric power dynamics in the workplace 63 . A recent survey of 6300 workers in Nordic countries provides evidence that this power asymmetry is abused across all the sectors covered by the study 64 .

5.1.3Article 15(1)(h) of the GDPR on transparency requirements has limited scope, as it excludes semi-automated decision-making.

5.1.4Employees can be affected more by the processing of data collected from other employees than by data collected relating to themselves 65 . The rights of trade unions or workers’ representatives to control data collection and processing are limited. The assumption that privacy-related harm is always individual constitutes a significant weakness in the context of algorithmic management systems. Moreover, the provisions on data protection-related impact assessments are insufficient.

5.1.5Article 88 of the GDPR states that the processing of personal data in the employment context is to be addressed via ‘more specific rules’, to be set by the Member States in law or through collective agreements 66 . Since the GDPR entered into force, Article 88 has been poorly implemented and has remained a dead letter in almost all Member States 67 .

5.1.6To give workers more power over their data, the EESC asks the European Commission to adopt measures to ensure Article 88 of the GDPR is effectively enforced.

5.2The AI Act

5.2.1The overarching goal of the AI Act is to stimulate the uptake and spread of AI in the EU by advancing a uniform legal framework for ‘trustworthy AI’, based on the logic that if the risks associated with using AI are addressed, its uptake will increase. As a cross-cutting piece of legislation, the Regulation pursues a number of overriding objectives, such as a high level of protection of health, safety and fundamental rights. The legal basis of the AI Act is primarily rooted in the single market provisions of the TFEU. The AI Act is not specifically designed to address workplace issues 68 .

5.2.2The AI Act imposes limitations on the use of AI in the workplace through the prohibition of certain systems and strict requirements associated with the provision or deployment of AI systems, especially those categorised as high-risk due to their application in employment, worker management, and access to self-employment. In this regard, the AI Act represents an important step in the right direction.

5.2.3The EESC identifies several loopholes in the AI Act with respect to fundamental rights in the workplace.

5.2.3.1The AI Act acknowledges that an ex-ante definition of risk is not sufficient to protect against the potential harm caused by AI systems and that this harm cannot be fully determined in advance, since it also depends on the context of deployment 69 . Systems for managing the risks associated with high-risk AI systems have to be implemented throughout the whole lifecycle of the system (Article 9). However, the AI Act explicitly recognises that, even where an AI system complies with the Regulation, it can still pose a risk to the health or safety of persons, or to fundamental rights (Article 82) 70 .

5.2.3.2The obligation to carry out a Fundamental Rights Impact Assessment on high-risk systems applies only to bodies governed by public law, private entities providing a public service, and banking and insurance entities (Article 27).

5.2.3.3Workers’ representatives only have a right to be informed, not to be consulted (Article 26(7)). Regulatory sandboxes (Article 57) and the testing of high-risk systems in real-world conditions (Article 60) are allowed exclusively before the system is put into service.

5.2.3.4The EESC considers that experimentation is crucial to allow organisations to identify risks, test for unintended consequences, and fine-tune algorithms in a controlled environment before full-scale deployment. The AI Act should be backed up by explicit guidance to provide legal clarity and ensure its unambiguous application and enforcement.

5.2.4In order to tackle these issues, the EESC calls on the AI Office to establish close cooperation with the European cross-sectoral social partners when developing its first guidelines, in order to produce clear guidelines and clarifications on AI systems that infer the emotions of natural persons. In-depth, robust and clearly structured coordination channels should be established between the AI Office and the European Commission’s DG EMPL and DG CONNECT.

5.2.5The EESC considers that providers should carry out an ex-ante Fundamental Rights Impact Assessment for high-risk systems before they are deployed in any entity.

5.2.6The EESC asks for clear guidelines on how sandboxes and real-world testing conditions can be used to be included in any upcoming initiative on AI at work.

5.3Directives on informing and consulting employees, safety and health of workers at work, and the autonomous European Social Partners framework agreement on digitalisation

5.3.1Directive 2002/14/EC 71 guarantees collective rights to information and consultation for employee representatives and covers any anticipatory measure that poses a threat to employment, as well as any decision that leads to ‘substantial changes’ in work organisation (Article 4). Under Directive 89/391/EEC 72 , employers have to ensure the safety and health of workers in every aspect related to work, must provide information and training, and must consult workers’ representatives on health and safety. However, the opaque way in which AI tools are introduced, their evolving nature, and the complexity involved in defining ‘substantial changes’ require these Directives to be strengthened through explicit guidance.

5.3.2The dimension of the iterative and dynamic nature of AI systems resonates with the iterative dialogue process contained in the European Social Partners Framework Agreement on Digitalisation 73 .

5.3.3However, the agreement only addresses AI issues marginally, and specific AI-related actions by the national social partners in connection with its implementation have been rather limited, with most of them only addressing issues relating to teleworking and the right to disconnect.

5.3.4The EESC asks the European Commission to address the context of AI clearly in an ad hoc instrument, in order to take into account the dynamic dimension of social dialogue and the health and safety risk assessments of AI systems in the autonomous agreement.

5.4The Platform Work Directive (PWD) 74

5.4.1The PWD contains provisions that could effectively regulate automated monitoring and decision-making systems. Specifically, Chapter III 75 :

·expands the algorithmic transparency regime of the GDPR to cover both solely automated and semi-automated decisions;

·establishes a collective right to information and expertise by requiring digital labour platforms to make algorithmic management systems intelligible to platform workers, their representatives, and labour authorities;

·provides that platforms shall communicate the Data Protection Impact Assessment to workers’ representatives;

·prohibits the processing of personal data ‘not intrinsically connected to and strictly necessary for the performance of the contract’ and bans the processing of any personal data ‘on the emotional or psychological state’ of platform workers under all circumstances;

·introduces a right to a human interface (Article 10 on human oversight of monitoring systems and of decision-making systems, and Article 11 on human review);

·introduces, in Article 12 on safety and health, specific requirements for the evaluation of the risks that automated monitoring or decision-making systems pose to the safety and health of workers.

5.4.2The provisions of Chapter III of the PWD apply only to persons performing platform work. However, algorithmic management practices in regular workplaces are already a reality 76 , for example in allocating and optimising work shifts, screening and assessing job applicants, assessing employment performance, and addressing human resources issues. The EESC calls on the European Commission to broaden the scope of the provisions of Chapter III of the PWD to cover all workers.

Brussels, 22 January 2025.

The president of the European Economic and Social Committee

Oliver Röpke

_____________

N.B.: Appendices overleaf.

APPENDIX I to the OPINION 
of the

European Economic and Social Committee

Overview of existing EU laws pertaining to AI in relation to the world of work

For each instrument, the overview below indicates the document and date of adoption, its subject, whether it mentions workers, and its provisions on AI literacy, consent, AI management, health and safety at work, surveillance, worker involvement, and discrimination and equality.

AI Act, 13.06.2024

Subject: The AI Act divides artificial intelligence products into four categories of risk and provides for appropriate regulation at each level.

Does it mention workers? Yes (Recitals 9, 48, 57 and 92; Articles 2 and 26; Annex III).

AI literacy: Required for providers, deployers and other persons affected by AI (Recital 20).

Consent: Consent of natural persons to participate in AI testing (Recital 141).

AI management: AI management tools must be classified as high-risk AI (Annex III). Employers are obliged to inform workers about the AI tools deployed (Recital 20).

Health and safety at work: Not specifically about the workplace. AI systems are considered high-risk when they pose threats to health and safety (Article 6).

Surveillance: It is acknowledged that AI might interfere excessively in workers’ privacy (Recital 57).

Worker involvement: Employers are obliged to inform and consult workers on AI implementation (Recital 92).

Discrimination and equality: AI management can reinforce discrimination by perpetuating existing biases (Recital 57), with detrimental effects when important decisions are made (Recital 58). The technical documentation of AI systems should contain assessments of their discriminatory potential (Annex IV).

Platform Work Directive (Council adopted the finalised act on 14 October 2024; signature expected soon; not yet published in the OJ)

Subject: The Platform Work Directive protects platform workers by setting clear criteria for an employment relationship in a platform setting. The Directive also protects platform workers from the negative effects of algorithmic management.

Does it mention workers? The scope of the Directive concerns platform workers (Article 2).

AI literacy: Not mentioned.

Consent: Not mentioned.

AI management: No psychological, biometric, private or off-duty data should be processed or collected (Article 7). All decisions concerning dismissal or measures of equivalent detriment must be taken by a human (Article 10).

Health and safety at work: All platforms are obliged to evaluate the health and safety risks of AI management for platform workers, and must also introduce protective and preventive measures. AI cannot be used in a way that puts platform workers under pressure (Article 12, Chapter III).

Surveillance: Digital labour platforms must provide workers with detailed information on the specifics of AI monitoring and decision-making, as well as the specific features of the system. All workers have a right to receive a written and understandable explanation (Article 9).

Worker involvement: Platforms must involve workers’ representatives in evaluating decisions made by AI (Article 11). If more than 250 people work on a platform in a Member State, the platform must cover the costs of hiring experts to assist workers’ representatives (Article 13). Platform workers are encouraged to participate in collective bargaining (Article 25).

Discrimination and equality: Algorithmic management should not process any racial data, should not be discriminatory, and should not pose threats to fundamental rights (Article 7). If AI tools are discriminatory, platforms are obliged to modify their systems (Article 11). Platforms must protect their workers from violence and harassment (Article 12).

GDPR, 27.04.2016

Subject: The General Data Protection Regulation lays down general security and protection measures for the processing of data (personal, health and other categories). The Regulation establishes the responsibility to ask for consent when processing data, and prohibits the use of personal data for automated decision-making, except in exceptional situations.

Does it mention workers? Yes, with regard to the limitations on AI profiling based on a person’s performance at work (Articles 4 and 22). Data processing in the employment context can be further regulated by the Member States (Article 88).

AI literacy: Not mentioned.

Consent: Consent in the context of employment can be defined individually by the Member States (Article 88).

AI management: Ban on automated decision-making when it involves profiling and when it makes life-changing decisions for people; automated decision-making should be allowed only in special cases (Article 22). People affected by automated decision-making must be informed about this in a clear way (Articles 13 and 14).

Health and safety at work: To be decided by the Member States (Article 88, Recital 155).

Surveillance: Not mentioned.

Worker involvement: Not mentioned.

Discrimination and equality: AI and data processing must not be discriminatory. Potential biases should be anticipated and prevented (Recital 71).

NIS2 Directive, 14.12.2022

The NIS2 Directive obliges the Member States to establish sound cybersecurity measures, and requires key industry entities (‘essential entities’ and ‘important entities’) to cooperate with national authorities in cases of cybersecurity risk. The Directive also requires Member States to adopt holistic national cybersecurity strategies. Breaches of personal data are to be investigated by national authorities.

Yes, with regard to management bodies and employees, who should be trained in identifying and tackling security risks (Article 20).

Not mentioned.

Not mentioned.

Not mentioned.

Not mentioned.

At national level, each Member State should establish ‘computer security incident response teams’ (CSIRTs) to monitor network flows and respond to threats and attacks. Not applicable to workers unless the system at work is under attack (Article 10).

Not mentioned.

Not mentioned.

Cyber Resilience Act. (The Council approved the text on 10 October 2024. The text still needs to be signed. Not yet published in the OJ.)

The Act identifies two issues: digital products are not made secure enough, and users lack the knowledge needed to choose secure products and use them safely.

The Member States should encourage and enable the upskilling of employees of cybersecurity tool manufacturers (Article 10).

Not mentioned.

Not mentioned.

Not mentioned.

Products that might influence the health and safety of their users must undergo conformity assessment (Article 7).

Not mentioned.

Not mentioned.

Not mentioned.

Critical Entities Resilience Directive, 14.12.2022

This Directive obliges Member States to ensure the continuity of essential services in the event of security disruptions. It also obliges critical entities (energy, transport, banking, finance, health, water, digital infrastructure, space, public administration, food) to engage in ongoing risk assessments and to be prepared to address emerging hazards.

Yes, with regard to workers handling sensitive data, who might need to undergo background checks. Workers should be divided into different categories so that not everyone has access to sensitive data (Article 13).

Not mentioned.

Not mentioned.

Not mentioned.

Not mentioned.

Critical entities are authorised to monitor and carry out background checks on employees whenever there are risks of them misusing their access to sensitive data (Article 13).

Not mentioned.

Not mentioned.

Foresight methodology used for this opinion and contributions

In preparing this opinion, two participatory workshops were held, applying the foresight methodology developed by the European Commission’s Joint Research Centre (JRC) and the Competence Centre on Foresight 77 . This methodology is a structured approach aimed at helping policy-makers anticipate and prepare for future challenges. It integrates various tools and techniques to systematically explore long-term trends, emerging issues, and potential scenarios. The JRC also emphasises participatory foresight by engaging stakeholders in workshops and consultations, ensuring a collaborative, ‘out of the box’ approach to envisioning the future.

The first participatory foresight workshop, held on 27 June 2024, produced four distinct scenarios concerning the future of artificial intelligence in the world of work and the workplace. All linguistic versions of these scenarios are available on the webpage dedicated to this opinion 78 .

The second workshop, held on 26 September 2024, produced a document titled ‘An Ideal Vision of AI in the World of Work and at the Workplace in 2035’, which is also available in all languages on the opinion’s webpage 79 .

The EESC extends its sincere gratitude to the representatives of the following organisations for their invaluable contributions throughout the development of this opinion, and particularly for their active participation in the workshops:

·European Commission (Joint Research Centre, Directorate-General for Employment, Social Affairs and Inclusion, Directorate-General for Research and Innovation)

·European Parliament (The European Parliamentary Research Service (EPRS))

·European Centre for the Development of Vocational Training (CEDEFOP)

·European Foundation for the Improvement of Living and Working Conditions (Eurofound)

·European Agency for Safety and Health at Work (EU-OSHA)

·European Training Foundation (ETF)

·International Labour Organization (ILO)

·Organisation for Economic Co-operation and Development (OECD)

·Council of Europe (CoE)

·National Agency for the Improvement of Working Conditions, France (Agence Nationale pour l'Amélioration des Conditions de Travail – ANACT)

·SMEunited – the Association of Crafts and SMEs in Europe

·SGI Europe – Services of General Interest Europe

·Confederation of German Employers’ Associations (Bundesvereinigung der Deutschen Arbeitgeberverbände – BDA)

·EuroCommerce – The European Retail and Wholesale Association

·industriAll European Trade Union

·Union Network International Global Union (UNI Global Union)

·European Grouping of Societies of Authors and Composers (GESAC)

·Professional artists association LESVOIX.FR

·AlgorithmWatch

·Centre for European Policy Studies (CEPS)

·Foundation for European Progressive Studies (FEPS)

·Technical University of Denmark (Danmarks Tekniske Universitet – DTU)

·Utrecht University

·Paris-Sorbonne Business School (Université Paris 1 Panthéon-Sorbonne)

·Observatory of the Social and Ethical Impact of Artificial Intelligence, Spain (Observatorio del Impacto Social y Ético de la Inteligencia Artificial – OdiseIA)

·Einstein Center Digital Future (ECDF) (interdisciplinary project involving several universities in Berlin)

APPENDIX II to the OPINION of the European Economic and Social Committee

The following amendment, which received at least a quarter of the votes cast, was rejected in the course of the debate (Rule 14(3) of the Rules of Procedure):

AMENDMENT 1

SOC/803

Pro-worker artificial intelligence

Replace the whole opinion presented by the SOC section with the following text (reason provided at the end of the document):

Amendment

1.Conclusions and Recommendations 

1.1‘With the world now on the cusp of another digital revolution, triggered by the spread of artificial intelligence (AI), a window has opened for Europe to redress its failings in innovation and productivity and to restore its manufacturing potential.’ 80

1.2At present Europe is weak in digital technologies such as AI. 81 The US and China are already way ahead and this gap will be difficult to close. 82 Moreover, there are large differences in labour productivity growth between the US and the EU (euro area) 83 : from the pandemic until mid-2024, labour productivity per hour worked increased by 0.9% in the euro area, compared with 6.7% in the United States. Analysis by the European Central Bank attributes these figures to labour market churn and higher investment in digitalisation. 84

1.3The potential benefits of deploying AI are substantial: it increases competitiveness and productivity, drives innovation and scientific progress 85 , boosts the green transition 86 and supports improvement of working conditions. We must ensure that the EU does not lose out on the digital transition. In order to benefit from AI’s potential, the myths and fears surrounding AI need to be dismantled and alleviated.

1.4In the world of work, the benefits of AI include, for example, the automation of routine and tedious tasks; complementing employees’ capabilities and freeing them up to focus on more stimulating work that adds greater value; and enabling workers to complete tasks more quickly while improving the quality of their output. AI can also support better work organisation and job design and help to identify future skills and hiring needs. 87 This requires broad acceptance by and collaboration with the workforce, as well as the necessary training of workers for the deployment and use of AI at the workplace. The EESC recalls that the development, deployment and use of AI must always follow the ‘human in command’ principle.

1.5Widespread deployment of AI will also strengthen the ability of managers and workers to improve occupational safety and health (OSH), by supporting unbiased and evidence-based risk assessment and targeted OSH inspections, and by helping to better identify issues (including psychosocial risks) where interventions are required. This includes better prevention of workplace accidents. 88

1.6At the same time there are fears and concerns linked to more widespread use of AI in the world of work. These include, for instance, work intensification leading to increased stress, increased monitoring and control, lack of human oversight, loss of autonomy and acquired skills becoming quickly obsolete.

1.7In order to address these fears, social partners and social dialogue at all levels have an essential role to play. The EESC considers that promoting responsible and ‘trustworthy’ AI in the world of work requires a positive and enabling environment for social dialogue in accordance with applicable national rules and practices.

1.8The EESC notes that in total there are 116 pieces of legislation in the EU digital agenda for 2030 89 . More specifically, the impact of AI on the world of work is already covered by EU legislation on AI, following the human in command principle, as well as existing social legislation 90 . Implementation and enforcement of the existing legal framework is essential to ensure smooth deployment of the AI so that it can be a motor for economic and technological progress in the EU.

1.9In light of all this, the EESC regrets that notwithstanding the existing wide legislative framework that already provides a comprehensive and sufficient regulation of AI in working life, the European Commission considers that new legislation is still needed regarding the impact of digitalisation in the world of work 91 . This also contradicts the current political commitment to regulatory simplification and reduction of regulatory and reporting requirements by 25%. Changing the existing regulatory framework even before its implementation would send a very negative message in terms of advancement and investment in AI in the EU.

1.10Instead, the Commission should allow companies to develop responsible and ethical approaches to working with AI technologies within the current legal framework. This ensures that the social partners’ autonomy is respected and that the deployment of AI will be a tool to improve working conditions, advance the green transition and boost the EU’s competitiveness.

1.11In order to effectively support companies, in particular SMEs, in the uptake of AI, there is a need for: (i) efficient and effective implementation and enforcement of the existing legislation and guidance, while avoiding at all costs the introduction of additional requirements and multiple reporting obligations; (ii) strong social dialogue, including through reinforcing the capacities of the social partners while respecting national practices; and (iii) the availability of a skilled workforce and appropriate training opportunities.

2.The opportunities and challenges of AI for the EU’s economy

2.1The opinion SOC/803 was prepared on the basis of the own-initiative opinion (OIO) proposal by Group II (under the original title ‘For an artificial intelligence pro-workers: the trade union role to prevent and minimise the negative impacts on the world of work’), which aimed to assess the impact of AI on the world of work and to provide proposals (legislative and non-legislative) and recommendations to address the protection of workers’ privacy and fundamental rights. It was later merged with the exploratory opinion ‘Artificial intelligence - potential and risks in the context of employment and labour market policies’ requested by the Polish Presidency 92 .

2.2The EESC believes that AI has the potential to yield tremendous benefits, including enhanced productivity gains, accelerating scientific progress and helping address climate change. 93 It drives innovation and has rightly been called a transformative force reshaping our entire economy. 94 It is paramount that EU businesses are at the forefront of this development, to enhance the EU’s competitiveness and position the EU as an international reference on AI. ‘Through our Artificial Intelligence Act, Europe is already leading the way on making AI safer and more trustworthy, and on tackling the risks stemming from its misuse. We must now focus our efforts on becoming a global leader in AI innovation.’ 95

2.3The digital transformation represents an opportunity for Europe, but we are facing significant challenges. Estimates show that the initially slow uptake of AI tools in European companies has increased rapidly since the emergence of Generative AI (GAI) tools, but there appears to be a significant discrepancy in uptake between large enterprises and SMEs 96 . There are also sectoral and country-by-country differences: according to Eurostat, in 2023 the use of AI was widespread in the information and communication sector and in professional, scientific and technical activities, whereas uptake in other sectors was more limited. There are also significant differences in uptake across EU countries. 97

2.4As stated in the Draghi report: ‘With the world now on the cusp of another digital revolution, triggered by the spread of artificial intelligence (AI), a window has opened for Europe to redress its failings in innovation and productivity and to restore its manufacturing potential.’ 98 However, Europe is weak in digital technologies such as AI. The US and China are already way ahead and this gap will be difficult to close. 99

2.5The Stanford University Global AI Vibrancy tool 100 ranks individual countries by AI vibrancy. The US, China and the United Kingdom occupy the top three places, and only two EU Member States feature among the top 10 (France and Germany, ranked 5th and 8th respectively). As for the origins of AI models, according to the AI Index Report 101 , 61 notable AI models originated from US-based institutions, clearly outperforming the EU’s 21 and China’s 15.

2.6According to McKinsey’s global survey 102 , 65% of respondents report that their organisations are regularly using generative AI (GAI), nearly double the percentage from the previous survey less than a year earlier. This has an impact on company performance: rough estimates point to 10-20% efficiency gains from deploying well-known GAI technologies at the workplace, and when AI is used to reshape workflows and tasks the potential may be even greater. 103 The functions where companies use AI most often are marketing and sales, product and service development, and IT. 104

2.7Opportunities and challenges of AI in the world of work

2.7.1AI will affect the world of work in many ways and could become a prominent feature of many people’s jobs across all sectors of the economy. There are both opportunities and challenges and how these are perceived by both employers and workers also plays an important role.

2.7.2AI improves productivity, for instance by automating routine tasks and complementing workers’ capabilities. 105 One of the top 10 takeaways of the AI Index report 106 was that AI supports workers in being more productive and leads to higher-quality work. AI enables workers to complete tasks more quickly and to improve the quality of their output. Some studies show AI’s potential to bridge the skill gap between low- and high-skilled workers, while others warn that using AI without proper human oversight can lead to diminished performance.

2.7.3AI tools can help companies identify which skills are absent from their workforce and address digital (or other) skills gaps 107 . AI can thus help companies better predict future hiring needs. As pointed out by the European Labour Authority (ELA), 70% of human resources agents across Europe use some sort of AI tool when searching for or assessing candidates, and AI-assisted hiring procedures may also enhance applicants’ experience of the process. 108

2.7.4Several studies examine the employment impacts of AI. The IMF discussion note 109 estimates that 40% of global employment will be exposed to AI. In advanced economies, an estimated 60% of jobs are exposed to AI, with about half of those jobs benefiting from AI and increased productivity, while the other half may be negatively affected. According to the WEF 110 , net growth of 78 million jobs (7% of today’s total employment) is expected by 2030.

2.7.5According to a large OECD survey of workers and employers on the impact of AI at work: ‘Workers and employers alike were overwhelmingly positive about the impact of AI on performance and working conditions. For instance, 79% and 80% of AI users in finance and manufacturing, respectively, said that AI had improved their own performance, compared to 8% in both sectors who said that AI had worsened it. Across all performance and working conditions indicators considered, workers who use AI were more than four times as likely to say that AI had improved their performance and working conditions as to say that it had worsened them.’ 111

2.7.6The EESC underlines that AI-enabled tools can improve occupational health and safety conditions by helping lighten workers’ workload 112 and improve work-life balance 113 and mental health at work 114 . AI tools will help to remove or reduce hazardous tasks and help avoid musculoskeletal disorders. The time workers save on tasks can help improve their wellbeing. AI tools can increase job satisfaction 115 . Using AI applications can lead to better, fairer and non-discriminatory decisions and practices for instance in hiring 116 .

2.7.7For instance, the ways in which AI may reduce or remove occupational safety and health (OSH) risks include, but are not limited to, the following:

a)By providing managers and workers’ representatives with better information to identify OSH issues – including psychosocial risks – and areas where OSH interventions are required to reduce risk factors such as harassment and violence, and by providing early warnings of hazardous situations, stress, health issues and fatigue in relation to the tasks and activities carried out by workers.

b)By providing workers and managers with individually tailored real-time advice to influence their behaviour in a safer manner. For instance, organisations can use monitoring devices that measure the biometric information of workers to ensure that they are not fatigued, which may increase the risk of accidents.

c)By supporting evidence-based prevention and advanced workplace risk assessment.

d)By supporting evidence-based and more efficient, targeted OSH inspections.

e)By harnessing the power of automation and robotisation in industry, logistics or construction to reduce the risks of repetitive and hazardous tasks. 117

f)By using IoT devices and sensors to monitor work equipment in real time, detecting failures before they occur and thereby enhancing safety.

g)By using advances in artificial intelligence and virtual and augmented reality to virtually test certain safety configurations and conditions, offering workers risk-free training and helping employers provide tailored training.

h)By applying exoskeleton research to relieve employees when handling heavy loads. Significant progress is also being made in this field for workers with disabilities.

i)By integrating automated systems into supply chains to limit manual handling tasks.

j)By processing data via AI to better design workstations and logistics processes, limiting employees’ exposure to risks. 118

2.7.8JRC research 119 has identified the ongoing digitisation and automation across industries and the push for efficiency as drivers for adoption of algorithmic management in regular workplaces. It identified and analysed changes and challenges arising from algorithmic management in terms of changes in work organisation and effects on job quality.

2.7.9In terms of changes in work organisation, the JRC specifically points to the following potential impacts of algorithmic management: centralisation of knowledge and control, redefinition of tasks and roles, and blurring of organisational boundaries.

2.7.10The JRC also identifies effects on job quality in terms of skills and discretion, work intensity, the social environment, and earnings and prospects.

2.7.11Concerns, uncertainties and fears about potential risks and consequences of deploying AI can prevent us from taking it on board even though AI can improve jobs and make them more efficient.

2.7.12Occupations are not static, and the EESC believes that even jobs that are not extensively affected by AI at the moment will face reskilling needs. Training and new skills are needed to make the most of new data-based technological solutions (including AI) at work.

2.7.13It is essential to support companies and their workers in the uptake of AI and to ensure that all companies, in particular SMEs, do not miss the AI train but can fully benefit from it. This requires ensuring the upskilling of workers and better support for companies. Workers need access to the necessary training, and companies need the flexibility to find the training methods that suit them best. A trust-based dialogue to build good, company-specific practices is key. The goal is to ensure that the deployment of AI technologies benefits both companies and workers, as it leads to higher productivity. This requires commitment from both companies and workers.

2.7.14In order to benefit from AI’s potential to increase competitiveness and productivity, and to ensure that the EU does not lose out on the digital transition, the myths and fears surrounding AI need to be dismantled and alleviated while focussing on the implementation and enforcement of the existing legislative framework.

3.The European Framework covering the use of AI at work

3.1The existing EU legislative framework

3.1.1The EESC points out that the following existing EU legislation includes provisions to ensure that, when an AI tool is deployed in the workplace, on the one hand the safe and fair working conditions of employees are safeguarded and, on the other hand, workers are involved in the deployment process of the AI tool:

·The 2016 General Data Protection Regulation (GDPR): A non-exhaustive overview of relevant articles includes Article 35 GDPR (data protection impact assessments); Article 7 GDPR (prohibits making consent conditional on the performance of a contract); Article 9(2) GDPR (limits on the ability to process sensitive personal data); Article 15(1)(h) GDPR (right of access to meaningful information about automated decision-making, with an opt-out of fully automated decision-making in the workplace); Article 22 GDPR (restricts fully automated decision-making processes in employment relationships); Article 88 GDPR (room for collective bargaining and for Member States to enact stricter GDPR provisions at national level in the employment context);

·The 2024 AI Act: Annex III of the AI Act classifies certain AI systems used for recruitment, decisions about promotion, dismissal and task assignment, and the monitoring of persons in work-related contractual relationships as ‘high risk’. Due to this classification, these AI systems are subject to legal requirements relating to risk management (Article 9), data quality and data governance (Article 10), documentation and record-keeping (Articles 11-12), transparency and the provision of information to users (Article 13), human oversight (Article 14), robustness, accuracy and security (Article 15), and information on the deployment of high-risk AI in the workplace, under penalty of up to EUR 15 000 000 or up to 3% of worldwide annual turnover (Article 99(4)(g)). Moreover, Article 2(11) of the AI Act enables Member States to create provisions more favourable to the protection of workers’ rights. Also, Article 4 (AI literacy) provides that providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context in which the AI systems are to be used, and considering the persons or groups of persons on whom the AI systems are to be used;

·The 1989 Framework Directive on OSH: The Framework Directive 89/391/EEC obliges employers to perform a risk assessment to pre-emptively ensure that AI tools will not harm the safety and health of workers;

·The 2000 Employment Equality Directive;

·The 2002 Directive on Information and Consultation: Article 4, paragraph 2 (c) obliges employers to inform and consult workers on decisions likely to lead to substantial changes in work organisation or in contractual relations;

·The 2024 Platform Work Directive: Digital labour platforms will be required to introduce specific measures on the use of automated monitoring and decision-making systems (article 6), the human monitoring of automated systems (article 7), the human review of significant decisions (article 8), and the information and consultation rights of platform workers (article 9).

3.1.2The challenges arising for companies in their organisational operations (cybersecurity, security breaches, privacy, data management, etc.), as well as those regarding work organisation, have also been addressed at EU level. In addition to the GDPR:

·The NIS2 Directive: The NIS2 Directive obliges companies providing essential services in a country - e.g. energy, transport, water management, digital infrastructure, telecoms, etc. - to organise operations in a way that increases their protection against attacks and breaches of security, including data security;

·The Cyber Resilience Act 120 : The Cyber Resilience Act increases the obligations on manufacturers of connected products to make sure vulnerabilities are handled and patched, as well as to increase the protection of devices and machines;

·The Critical Entities Resilience Directive and sector specific regulation, including DORA, aim to specifically tackle the organisational challenges for companies related to the use of AI.

3.1.3The EESC notes that there is ongoing work, facilitated by the AI Office, to prepare a Code of Practice for General-Purpose AI (CoP), which will detail the rules of the AI Act for providers of general-purpose AI models and of general-purpose AI models with systemic risks. 121

3.2Role of Social Dialogue

3.2.1The social partners and social dialogue at all levels have an essential role to play in promoting responsible and ‘trustworthy’ AI in the world of work. To ensure an effective social dialogue on AI matters, the EESC calls for promoting a positive and enabling environment for social dialogue. Strong and constructive social dialogue at all levels in accordance with applicable national rules and practices is the main tool for minimising the risks and possible harmful impacts, which should also facilitate the use of AI in order to benefit from its potential. To that end, capacity building of social partners on AI needs to be developed to ensure knowledge and understanding of the challenges and opportunities it poses.

3.2.2The EESC notes that the European social partners signed the Autonomous Framework Agreement on Digitalisation 122 in 2020, which covers 1) digital skills and how to secure employment; 2) modalities of connecting and disconnecting; 3) AI and guaranteeing the human in control principle; and 4) respect of human dignity and surveillance. It provides, inter alia, that the deployment of AI systems should follow the ‘human in command’ principle and should be safe, i.e. it should prevent harm.

3.2.3As regards the interplay between the existing EU legislation (the AI Act) and social dialogue, the EESC calls on the European AI Office to establish close cooperation with the European cross-sectoral social partners, to ensure that the role of social dialogue is adequately reflected in the AI Office’s upcoming guidelines and in secondary legislation. The EESC further calls on the AI Office to produce clarifications on all AI systems. In-depth, robust and clearly structured coordination channels between the AI Office and the European Commission’s DG EMPL and DG Connect should be established.

3.3Assessment of the current situation

3.3.1The EESC considers that the existing 116 pieces of legislation in the EU digital agenda for 2030 – in particular GDPR and the AI Act and the other legislation referred to above – sufficiently cover the challenges posed by AI at work, including discrimination, occupational safety and health, information and consultation, data protection etc.

3.3.2In light of this, the EESC regrets that, notwithstanding the existing wide legislative framework that already provides comprehensive and sufficient regulation of AI in working life, the European Commission considers that new legislation is still needed regarding the impact of digitalisation on the world of work. As stated in the mission letter of Roxana Mînzatu, Executive Vice-President for Social Rights and Skills, Quality Jobs and Preparedness: ‘[t]his should be done notably through an initiative on algorithmic management and through possible legislation on AI in the workplace, following consultation with social partners. (…)’ 123 .

3.3.3This intention also contradicts the current political priority of simplification and the reduction of regulatory and reporting requirements by 25%. It is also in clear contradiction with the common understanding on the need to simplify the current EU framework in order to improve the EU’s competitiveness and business environment.

3.3.4Instead, the Commission should allow companies to develop responsible and ethical approaches to working with AI technologies within the current legal framework. This ensures that the social partners’ autonomy is respected.

3.3.5Should there, however, be any potential initiative related to AI in the workplace, it must first and foremost aim to effectively implement and enforce the existing comprehensive EU legislative framework. Secondly, it should aim to help companies mitigate the possible risks in the world of work 124 while fully benefiting from the opportunities offered by AI. This would ensure enhanced prosperity, productivity, sustainability and social well-being.

3.3.6The EESC is of the strong opinion that, if the Commission proposes a new initiative on AI in the workplace or on algorithmic management, this initiative should not use Chapter III of the Platform Work Directive as a blueprint. The rules in the Platform Work Directive are designed specifically for those kinds of businesses, and treating all EU companies as if they were digital labour platforms would be a significant barrier to the uptake of new technologies.

3.3.7Furthermore, a wide revision of the existing legislation would be detrimental in terms of the burden for both legislators and enforcers, and would send a very negative message in terms of advancement of and investment in AI in the EU.

3.3.8Going forward, the EESC therefore calls for a significant focus on reducing legal complexity, as this will better enable European businesses to use AI in a responsible and ethical way. This need for reducing legal complexity arises for instance from overlaps in existing legislation as well as multiple and continuous reporting obligations.

Reason

This text comprises an amendment which aims to set out a generally divergent view to an opinion presented by the section and is therefore to be described as a counter-opinion. It sets out the reasons why the EESC considers that there is no need for additional legislation on AI in the world of work and why the Commission should leave space for companies to develop responsible and ethical approaches to work with AI technologies within the current legal framework.

Outcome of the vote

In favour:    112

Against:    136

Abstention:    11

(1)    Shaping the advancement of artificial intelligence through social dialogue.
(2)    Regulation (EU) 2024/1689.
(3)    Regulation (EU) 2016/679.
(4)    Directive (EU) 2024/2831.
(5)    Directive 2002/14/EC.
(6)    Council Directive 89/391/EEC.
(7)    Framework Agreement on Digitalisation | Etuc resources center.
(8)    World Economic Forum, Markets of Tomorrow: Pathways to a New Economy, 2020.
(9)    McKinsey, Shaping the digital transformation in Europe, 2020.
(10)    See link.
(11)    STOA (Scientific foresight) study on The use of artificial intelligence in workplace management, 2022.
(12)    See webpage.
(13)    See Annex.
(14)    See webpage.
(15)    Regulation (EU) 2024/1689.
(16)    K. C. Kellogg et al. (2020), Algorithms at Work: The New Contested Terrain of Control.
(17)    Case of all AI tools based on adaptive machine learning. See the updated OECD definition.
(18)    OECD (2024), The impact of Artificial Intelligence on productivity, distribution and growth.
(19)    Kellogg et al. (2020); Cazzaniga et al. (2024), Gen-AI: Artificial Intelligence and the Future of Work.
(20)    T. Babina et al. (2024), ‘Firm Investments in Artificial Intelligence Technologies and Changes in Workforce Composition’.
(21)    EPRS (2022), AI and digital tools in workplace management and evaluation; Tambe et al. (2019), ‘Artificial Intelligence in Human Resources Management: Challenges and a Path Forward’, California Management Review.
(22)      PwC (2017), Sizing the prize .
(23)      EPRS (2022); Hmoud, B. and Laszo V. L. (2019), Will Artificial Intelligence Take Over Human Resources Recruitment and Selection, Network Intelligence Studies.
(24)      PwC (2017).
(25)      OECD (2024).
(26)      M. Comunale et al. (2024), ‘The Economic Impacts and the Regulation of AI: A Review of the Academic Literature and Policy Actions’, IMF Working Paper No. 2024/65.
(27)      Gmyrek P. et al. (2023), ‘Generative AI and jobs: A global analysis of potential effects on job quantity and quality’, ILO Working Paper 96.
(28)      Comunale et al. (2024).
(29)      Comunale et al. (2024).
(30)      D. Acemoglu (2024), The Simple Macroeconomics of AI, Massachusetts Institute of Technology.
(31)    Gmyrek P. et al. (2023).
(32)    Gmyrek P. et al. (2023), p. 37.
(33)    OECD (2024), Who will be the workers most affected by AI?, OECD AI Working Paper No. 26.
(34)    OECD (2023), OECD Employment Outlook 2023 .
(35)      Report of the French Generative AI Commission, 2024.
(36)      Alekseeva, L. et al. (2021), ‘The demand for AI skills in the labor market’, Labour Economics, Vol. 71.
(37)    OECD (2023), OECD Employment Outlook 2023.
(38)      OECD (2023), The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers .
(39)      Böhmer & H. Schinnenburg (2023), Critical exploration of AI‐driven HRM to build up organizational capabilities .
(40)      Kellogg et al. (2020); J. Adams-Prassl, H. Abraha et al. (2023), Regulating algorithmic management: A blueprint, European Labour Law Journal, Vol. 14(2), pp. 124-151.
(41)      Nurski L. (2024), AI at Work, why there’s more to it than task automation, CEPS Explainer.
(42)      EPRS (2022); CIPD and PA Consulting (2019), People and machines: from hype to reality, Chartered Institute of Personnel and Development.
(43)      EP study, Improving working conditions using Artificial Intelligence, 2021.
(44)      Workplace Intelligence, AI at Work 2020 Study.
(45)      OECD (2023), OECD Employment Outlook 2023 .
(46)      Pessach, D., Singer, G., Avrahami, D., et al. (2020), Employees recruitment: A prescriptive analytics approach via machine learning and mathematical programming, Decision Support Systems.
(47)    V. Mandinaud, A. Ponce del Castillo (2024), AI systems, risks and working conditions, in Artificial intelligence, labour and society, ETUI.
(48)    Kellogg et al. 2020.
(49)    EU-OSHA (2024), Worker management through AI - From technology development to the impacts on workers and their safety and health.
(50)    EU-OSHA (2022), OSH Pulse - Occupational safety and health in post-pandemic workplaces.
(51)    OECD (2023).
(52)    Eloundou, T. et al. (2023), GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
(53)    Comunale et al. (2024); Brynjolfsson E. et al. (2023), Generative AI at Work, NBER Working Paper No. 3116.
(54)    OECD (2024).
(55)      ILO Global Survey on Microtask Workers (2017); Tubaro et al. (2020), The trainer, the verifier, the imitator: Three ways in which human platform workers support artificial intelligence, Big Data & Society, 7(1).
(56)      Report of the French Generative AI Commission, 2024.
(57)      At global level, the development of AI is supported by invisible workers, mostly located in low-income countries, with poor working conditions. Tubaro et al. (2020).
(58)    OECD (2024).
(59)      OECD (2023), The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers .
(60)      FEPS (2024),  Algorithm by and for the workers .
(61)    Cazzaniga et al. (2024).
(62)       Regulation (EU) 2016/679 .
(63)    EPRS (2022).
(64)      FEPS (2024).
(65)    Martin Tisné (2020), The Data Delusion: protecting individual data isn’t enough when the harm is collective, Luminate, Stanford University’s Cyber Policy Center.
(66)    Article 88 is still massively underutilised in the EU Member States. 
(67)      Abraha H. (2023), Article 88 GDPR and the Interplay between EU and Member State Employee Data protection rules, The Modern Law Review.
(68)      Aida Ponce Del Castillo, The EU’s AI Act: governing through uncertainties and complexity, identifying opportunities for action, global workplace law and policy, kluwerlawonline.com, 2024.
(69)      Isabel Kusche (2024), Possible harms of artificial intelligence and the EU AI act: fundamental rights and risk, Journal of Risk Research.
(70)    Isabel Kusche (2024).
(71)       Directive 2002/14/EC .
(72)       Directive 89/391/EEC .
(73)       https://resourcecentre.etuc.org/agreement/framework-agreement-digitalisation .
(74)     Directive (EU) 2024/2831 .
(75)    J. Adams-Prassl, H. Abraha et al. (2023).
(76)    EU Science Hub (2024), ‘Algorithmic management practices in regular workplaces are already a reality’.
(77)      Since 2019, there has been a Commission Vice-President - the first ever member of the College of Commissioners - in charge of strategic foresight, ensuring long-term policy coordination between all Directorates-General and building close foresight cooperation and alliances with the other EU institutions. The Commission produces an annual Strategic Foresight Report, which feeds into the Commission’s work programmes and multi-annual programming exercises.
(78)      See web page dedicated to the opinion.
(79)      See web page dedicated to the opinion.
(80)    European Commission (2024), The future of European competitiveness, Part A: A competitiveness strategy for Europe.
(81)    European Parliament Research Service, AI investment: EU and global indicators .
(82)    European Commission (2024), The future of European competitiveness, Part B: In-depth analysis of recommendations.
(83)     https://www.ecb.europa.eu/press/economic-bulletin/focus/2024/html/ecb.ebbox202406_01~9c8418b554.en.html .
(84)    https://www.ecb.europa.eu/press/economic-bulletin/focus/2024/html/ecb.ebbox202404_01~3ceb83e0e4.en.html.
(85)    See for instance CEPS’s AI World: Navigate Tomorrow's Intelligence Today.
(86)    OECD (2024), OECD Digital Economy Outlook 2024 (Volume 1): Embracing the Technology Frontier, OECD Publishing, Paris,  https://doi.org/10.1787/a1689dc5-en .
(87)    See for instance Lane, M., M. Williams and S. Broecke (2023), OECD Social, Employment and Migration Working Papers, No. 288, OECD Publishing, Paris.
(88)    EU-OSHA (2021) Impact of artificial intelligence on occupational safety and health .
(89)    Bruegel_factsheet_2024_0.pdf.
(90)    See section 3 of this counter-opinion for further reference.
(91)    As stated in the mission letter of Roxana Mînzatu, Executive Vice-President.
(92)    The Polish Presidency's exploratory opinion request was titled Artificial intelligence - potential and risks in the context of employment and labour market policies.
(93)    OECD (2024), OECD Digital Economy Outlook 2024 (Volume 1): Embracing the Technology Frontier, OECD Publishing, Paris.
(94)    See for instance CEPS’s AI World: Navigate Tomorrow's Intelligence Today.
(95)     Political Guidelines for the next European Commission 2024-2029 .
(96)    In 2023, 8% of EU enterprises used artificial intelligence technologies. For large EU enterprises this figure was 30.4%. See Eurostat, Use of artificial intelligence in enterprises - Statistics Explained.
(97)    Eurostat Use of artificial intelligence in enterprises - Statistics Explained .
(98)    European Commission (2024), The future of European competitiveness, Part A: A competitiveness strategy for Europe.
(99)    European Commission (2024), The future of European competitiveness, Part B: In-depth analysis of recommendations.
(100)     Global AI vibrancy tool .
(101)    The AI Index Report; see also here.
(102)     The state of AI in early 2024: Gen AI adoption spikes and starts to generate value .
(103)      Boston Consulting Group (2023) Turning GenAI Magic into Business Impact .
(104)     The state of AI in early 2024: Gen AI adoption spikes and starts to generate value .
(105)      This evolution is reflected in reports and studies on the evolution of jobs, such as the Future of Jobs Report 2025 by the World Economic Forum (Future of Jobs Report 2025: These are the fastest growing and declining jobs | World Economic Forum).
(106)    The AI Index Report; see also here.
(107)    WEF forum .
(108)    See for instance EURES How AI can improve the talent acquisition process .
(109)    IMF, Gen-AI: Artificial Intelligence and the Future of Work.
(110)    WEF, Future of Jobs Report 2025, January 2025.
(111)    Lane, M., M. Williams and S. Broecke (2023), OECD Social, Employment and Migration Working Papers, No. 288, OECD Publishing, Paris.
(112)    Workplace Intelligence, AI at Work 2020 Study; CIPD and PA Consulting (2019), People and machines: from hype to reality, Chartered Institute of Personnel and Development.
(113)    EP study, Improving working conditions using Artificial Intelligence , 2021.
(114)     Workplace Intelligence, AI at Work 2020 Study.
(115)    Lane, M., M. Williams and S. Broecke (2023), OECD Social, Employment and Migration Working Papers, No. 288, OECD Publishing, Paris.
(116)    Dana Pessach, Gonen Singer, Dan Avrahami, Hila Chalutz Ben-Gal, Erez Shmueli, Irad Ben-Gal, Employees recruitment: A prescriptive analytics approach via machine learning and mathematical programming, Decision Support Systems, Volume 134, 2020.
(117)    See Report Artificial intelligence for worker management: an overview .
(118)    See Work organisation and job quality in the digital age | European Foundation for the Improvement of Living and Working Conditions.
(119)    Baiocco, S., Fernández-Macías, E., Rani, U. and Pesole, A., The Algorithmic Management of work and its implications in different contexts, Seville: European Commission, 2022, JRC129749.
(120)    The Cyber Resilience Act increases the obligations on manufacturers of connected products to ensure that vulnerabilities are handled and patched, and to increase the protection of devices and machines.
(121)    See for instance here .
(122)    See e.g. here .
(123)    As stated in the mission letter of Roxana Mînzatu, Executive Vice-President.
(124)    For risks, see for instance Baiocco, S., Fernández-Macías, E., Rani, U. and Pesole, A., The Algorithmic Management of work and its implications in different contexts, Seville: European Commission, 2022, JRC129749.