1. Conclusions and Recommendations
1.1 As stated in the Draghi report, ‘With the world now on the cusp of another digital revolution, triggered by the spread of artificial intelligence (AI), a window has opened for Europe to redress its failings in innovation and productivity and to restore its manufacturing potential.’
1.2 At present, Europe is weak in digital technologies such as AI. The US and China are already far ahead, and this gap will be difficult to close. Moreover, there are large differences in labour productivity growth between the US and the EU (euro area): from the pandemic until mid-2024, labour productivity per hour worked increased by 0.9% in the euro area, compared to 6.7% in the United States. Analysis by the European Central Bank associates these figures with labour market churn and higher investment in digitalisation.
1.3 The potential benefits of deploying AI are substantial: it increases competitiveness and productivity, drives innovation and scientific progress, boosts the green transition and supports the improvement of working conditions. We must ensure that the EU does not lose out on the digital transition. In order to benefit from AI’s potential, the myths and fears surrounding AI need to be dispelled and the related concerns alleviated.
1.4 In the world of work, the benefits of AI include, for example, the automation of routine and tedious tasks; complementing employees’ capabilities and freeing them up to focus on more stimulating work that adds greater value; and enabling workers to complete tasks more quickly while improving the quality of their output. AI can also support better work organisation and job design and help identify future skills and hiring needs. This requires broad acceptance by, and collaboration with, the workforce, as well as the necessary training of workers for the deployment and use of AI at the workplace. The EESC recalls that the development, deployment and use of AI must always follow the human in command principle.
1.5 Widespread deployment of AI will also strengthen the ability of managers and workers to improve occupational safety and health (OSH) by supporting unbiased, evidence-based risk assessment and targeted OSH inspections, and by helping to better identify issues (including psychosocial risks) where interventions are required. This includes better prevention of workplace accidents.
1.6 At the same time, there are fears and concerns linked to the more widespread use of AI in the world of work. These include, for instance, work intensification leading to increased stress, increased monitoring and control, a lack of human oversight, loss of autonomy and the rapid obsolescence of acquired skills.
1.7 In order to address these fears, the social partners and social dialogue at all levels have an essential role to play. The EESC considers that promoting responsible and ‘trustworthy’ AI in the world of work requires a positive and enabling environment for social dialogue, in accordance with applicable national rules and practices.
1.8 The EESC notes that the EU digital agenda for 2030 comprises a total of 116 pieces of legislation. More specifically, the impact of AI on the world of work is already covered by EU legislation on AI, which follows the human in command principle, as well as by existing social legislation. Implementation and enforcement of the existing legal framework is essential to ensure the smooth deployment of AI so that it can be a motor for economic and technological progress in the EU.
1.9 In light of all this, the EESC regrets that, notwithstanding the existing broad legislative framework that already provides comprehensive and sufficient regulation of AI in working life, the European Commission considers that new legislation is still needed on the impact of digitalisation in the world of work. This also contradicts the current political commitment to regulatory simplification and to reducing regulatory and reporting requirements by 25%. Changing the existing regulatory framework even before it has been implemented would send a very negative message in terms of the advancement of, and investment in, AI in the EU.
1.10 Instead, the Commission should allow companies to develop responsible and ethical approaches to working with AI technologies within the current legal framework. This ensures that the social partners’ autonomy is respected and that the deployment of AI becomes a tool to improve working conditions, advance the green transition and boost the EU’s competitiveness.
1.11 In order to effectively support companies, in particular SMEs, in the uptake of AI, there is a need for: (i) efficient and effective implementation and enforcement of the existing legislation and guidance, while avoiding at all costs the introduction of additional requirements and multiple reporting obligations; (ii) strong social dialogue, including through reinforcing the capacities of the social partners while respecting national practices; and (iii) the availability of a skilled workforce and appropriate training opportunities.
2. The opportunities and challenges of AI for the EU’s economy
2.1 Opinion SOC/803 was prepared on the basis of an own-initiative opinion (OIO) proposal by Group II (under the original title ‘For an artificial intelligence pro-workers: the trade union role to prevent and minimise the negative impacts on the world of work’), which aimed to assess the impact of AI on the world of work and to provide proposals (legislative and non-legislative) and recommendations to address the protection of workers’ privacy and fundamental rights. It was later merged with the exploratory opinion ‘Artificial intelligence - potential and risks in the context of employment and labour market policies’ requested by the Polish Presidency.
2.2 The EESC believes that AI has the potential to yield tremendous benefits, including enhanced productivity, accelerated scientific progress and help in addressing climate change. It drives innovation and has rightly been called a transformative force reshaping our entire economy. It is paramount that EU businesses are at the forefront of this development in order to enhance the EU’s competitiveness and position the EU as an international reference for AI. ‘Through our Artificial Intelligence Act (AI Act), Europe is already leading the way on making AI safer and more trustworthy, and on tackling the risks stemming from its misuse. We must now focus our efforts on becoming a global leader in AI innovation.’
2.3 The digital transformation represents an opportunity for Europe, but we are facing significant challenges. Estimates show that the initially slow uptake of AI tools in European companies has increased rapidly since the emergence of generative AI (GAI) tools, but there appears to be a significant discrepancy in uptake between large enterprises and SMEs. There are also sectoral and country differences: according to Eurostat, in 2023 the use of AI was widespread in the information and communication sector and in professional, scientific and technical activities, whereas uptake in other sectors was more limited. There are also significant differences in uptake across EU countries.
2.4 As stated in the Draghi report, ‘With the world now on the cusp of another digital revolution, triggered by the spread of artificial intelligence (AI), a window has opened for Europe to redress its failings in innovation and productivity and to restore its manufacturing potential.’ However, Europe is weak in digital technologies such as AI. The US and China are already far ahead, and this gap will be difficult to close.
2.5 The Stanford University Global AI Vibrancy tool ranks individual countries by AI vibrancy. The US, China and the United Kingdom occupy the top three positions, and among the top 10 countries there are only two EU Member States (France and Germany, ranked 5th and 8th respectively). As regards the origin of AI models, according to the AI Index Report, 61 notable AI models originated from US-based institutions, clearly outperforming the EU’s 21 and China’s 15.
2.6 According to McKinsey’s global survey, 65% of respondents report that their organisations are regularly using generative AI (GAI), nearly double the percentage from the previous survey less than a year earlier. This has an impact on company performance: rough estimates point to efficiency gains of 10-20% from deploying well-known GAI technologies at the workplace, and the potential may be even greater when AI is used to reshape workflows and tasks. The functions where companies use AI most often are marketing and sales, product and service development, and IT.
2.7 Opportunities and challenges of AI in the world of work
2.7.1 AI will affect the world of work in many ways and could become a prominent feature of many people’s jobs across all sectors of the economy. There are both opportunities and challenges, and how these are perceived by employers and workers also plays an important role.
2.7.2 AI improves productivity, for instance by automating routine tasks and complementing workers’ capabilities. One of the top ten takeaways of the AI Index Report was that AI helps workers to be more productive and leads to higher-quality work. AI enables workers to complete tasks more quickly and to improve the quality of their output. Some studies show AI’s potential to bridge the skill gap between low- and high-skilled workers, while others warn that using AI without proper human oversight can lead to diminished performance.
2.7.3 AI tools can help companies identify which skills are missing from their workforce and address digital (and other) skills gaps, and can thus help companies better predict future hiring needs. As also pointed out by the European Labour Authority (ELA), 70% of human resources agents across Europe use some sort of AI tool when searching for or assessing candidates, and AI-assisted hiring procedures may also enhance applicants’ experience of the process.
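Purely for illustration, the following minimal sketch (in Python, with hypothetical role requirements and employee skill data that are not drawn from this opinion or from any specific AI tool) shows the kind of skills-gap comparison that such tools automate at much larger scale:

```python
# Minimal, hypothetical sketch of a skills-gap analysis: compare the skills
# required for each role against the skills present in the current workforce
# and report what is missing. Data and skill names are invented for illustration.

required_skills = {
    "data analyst": {"sql", "statistics", "data visualisation"},
    "customer support": {"crm tools", "communication", "basic ai literacy"},
}

workforce = {
    "employee_1": {"role": "data analyst", "skills": {"sql", "statistics"}},
    "employee_2": {"role": "customer support", "skills": {"communication"}},
}

def skills_gap(required, staff):
    """Return, per role, the required skills not yet covered by staff in that role."""
    gaps = {}
    for role, needed in required.items():
        covered = set()
        for person in staff.values():
            if person["role"] == role:
                covered |= person["skills"]
        gaps[role] = needed - covered
    return gaps

if __name__ == "__main__":
    for role, missing in skills_gap(required_skills, workforce).items():
        print(f"{role}: missing skills -> {sorted(missing) or 'none'}")
```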
2.7.4 Studies have examined the employment impacts of AI. An IMF discussion note considers that 40% of global employment will be exposed to AI. Specifically, in advanced economies it is estimated that 60% of jobs are exposed to AI, with about half of those jobs benefiting from AI and increased productivity, while the other half may be negatively affected. According to the WEF, a net growth of 78 million jobs (7% of today’s total employment) is expected by 2030.
2.7.5 According to a large OECD survey of workers and employers on the impact of AI at work: ‘Workers and employers alike were overwhelmingly positive about the impact of AI on performance and working conditions. For instance, 79% and 80% of AI users in finance and manufacturing, respectively, said that AI had improved their own performance, compared to 8% in both sectors who said that AI had worsened it. Across all performance and working conditions indicators considered, workers who use AI were more than four times as likely to say that AI had improved their performance and working conditions as to say that it had worsened them.’
2.7.6 The EESC underlines that AI-enabled tools can improve occupational health and safety conditions by helping to lighten workers’ workload and to improve work-life balance and mental health at work. AI tools can help to remove or reduce hazardous tasks and to avoid musculoskeletal disorders. The time workers save on tasks can help improve their wellbeing, and AI tools can increase job satisfaction. Using AI applications can also lead to better, fairer and non-discriminatory decisions and practices, for instance in hiring.
2.7.7 The ways in which AI may reduce or remove occupational safety and health (OSH) risks include, but are not limited to, the following:
a) By providing managers and workers’ representatives with better information to identify OSH issues – including psychosocial risks – and areas where OSH interventions are required to reduce risk factors such as harassment and violence, and by providing early warnings of hazardous situations, stress, health issues and fatigue in relation to the tasks and activities carried out by workers.
b) By providing workers and managers with individually tailored, real-time advice to encourage safer behaviour. For instance, organisations can use monitoring devices that measure workers’ biometric information to ensure that they are not fatigued, as fatigue may increase the risk of accidents.
c) By supporting evidence-based prevention and advanced workplace risk assessment.
d) By supporting evidence-based and more efficient, targeted OSH inspections.
e) By harnessing the power of automation and robotisation in industry, logistics or construction to reduce the risks of repetitive and hazardous tasks.
f) By using IoT devices and sensors to monitor work equipment in real time, detecting faults or failures before they occur and thereby contributing to enhanced safety (see the illustrative sketch after this list).
g) By using advances in artificial intelligence and virtual and augmented reality to virtually test certain safety configurations and conditions, to provide workers with risk-free training and to help employers offer tailored training.
h) By applying exoskeleton research to relieve employees when handling heavy loads. Significant progress is also being made in this field for workers with disabilities.
i) By integrating automated systems into supply chains to limit manual handling tasks.
j) By using AI to process data in order to better design workstations and logistics processes and to limit employees’ exposure to risks.
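As a purely illustrative sketch of point (f) above, the following Python example (with invented machine names, sensor readings and safety thresholds, not taken from this opinion or from any real monitoring system) shows how real-time equipment readings could be checked against safety limits to raise early warnings before a failure occurs:

```python
# Hypothetical sketch of real-time equipment monitoring (point f above):
# sensor readings are checked against safety thresholds and an early warning
# is raised before a failure occurs. All values are invented for illustration.

from dataclasses import dataclass

@dataclass
class Threshold:
    max_temperature_c: float = 80.0   # assumed safe operating temperature
    max_vibration_mm_s: float = 7.0   # assumed safe vibration level

def check_reading(machine_id: str, temperature_c: float, vibration_mm_s: float,
                  limits: Threshold = Threshold()) -> list[str]:
    """Return alert messages for readings that exceed the safety limits."""
    alerts = []
    if temperature_c > limits.max_temperature_c:
        alerts.append(f"{machine_id}: temperature {temperature_c} C exceeds limit")
    if vibration_mm_s > limits.max_vibration_mm_s:
        alerts.append(f"{machine_id}: vibration {vibration_mm_s} mm/s exceeds limit")
    return alerts

if __name__ == "__main__":
    # Simulated stream of readings from two machines.
    readings = [
        ("press_01", 75.2, 4.1),
        ("press_01", 83.6, 5.0),    # over-temperature -> early warning
        ("conveyor_02", 60.0, 8.3), # excessive vibration -> early warning
    ]
    for machine, temp, vib in readings:
        for alert in check_reading(machine, temp, vib):
            print("EARLY WARNING:", alert)
```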
2.7.8 JRC research has identified ongoing digitisation and automation across industries, together with the push for efficiency, as drivers of the adoption of algorithmic management in regular workplaces. The research identified and analysed the changes and challenges arising from algorithmic management in terms of work organisation and effects on job quality.
2.7.9 In terms of changes in work organisation, the JRC specifically points to the following potential impacts of algorithmic management: centralisation of knowledge and control, redefinition of tasks and roles, and the blurring of organisational boundaries.
2.7.10 The JRC also identifies effects on job quality in terms of skills and discretion, work intensity, the social environment, and earnings and prospects.
2.7.11 Concerns, uncertainties and fears about the potential risks and consequences of deploying AI can prevent its adoption, even though AI can improve jobs and make them more efficient.
2.7.12 Occupations are not static, and the EESC believes that even jobs that are not extensively affected by AI at the moment will face reskilling needs. Training and new skills are needed to make the most of new data-based technological solutions (including AI) at work.
2.7.13 It is essential to accompany companies and their workers in the uptake of AI and to ensure that all companies, and in particular SMEs, are not left behind but can fully benefit from it. This requires ensuring the upskilling of workers and better support for companies. Workers need access to the necessary training, and companies need the flexibility to find the training methods that suit them best. A trust-based dialogue to build good, company-specific practices is key. The goal is to ensure that the deployment of AI technologies benefits both companies and workers by leading to higher productivity. This requires commitment from both companies and workers.
2.7.14 In order to benefit from AI’s potential to increase competitiveness and productivity, and to ensure that the EU does not lose out on the digital transition, the myths and fears surrounding AI need to be dispelled and the related concerns alleviated, while focusing on the implementation and enforcement of the existing legislative framework.
3. The European Framework covering the use of AI at work
3.1 The existing EU legislative framework
3.1.1 The EESC points out that the following existing EU legislation includes provisions to ensure that, when an AI tool is deployed in the workplace, on the one hand, safe and fair working conditions for employees are safeguarded and, on the other hand, workers are involved in the deployment process:
·The 2016 General Data Protection Regulation (GDPR): A non-exhaustive overview of relevant GDPR articles includes Article 35 (data protection impact assessments); Article 7 (prohibition on linking consent to the performance of a contract); Article 9(2) (transparency of data processing and limits on the ability to process sensitive personal data); Article 15(1)(h) (right to meaningful human input on important decisions, with an opt-out of fully automated decision-making in the workplace); Article 22 (prohibition of fully automated decision-making processes in employment relationships); and Article 88 (room for collective bargaining and for Member States to enact stricter provisions at national level in the employment context);
·The 2024 AI Act: Annex III of the AI Act classifies certain AI systems used for recruitment, for decisions about promotion, dismissal and task assignment, and for the monitoring of persons in work-related contractual relationships as ‘high risk’. Due to this classification, these AI systems are subject to legal requirements relating to risk management (Article 9), data quality and data governance (Article 10), documentation and record-keeping (Articles 11-12), transparency and provision of information to users (Article 13), human oversight (Article 14), robustness, accuracy and security (Article 15), and information on the deployment of high-risk AI in the workplace, subject to penalties of up to EUR 15 000 000 or up to 3% of worldwide annual turnover (Article 99(4)(g)). Moreover, Article 2(11) of the AI Act enables Member States to adopt provisions more favourable to the protection of workers’ rights. In addition, Article 4 (AI literacy) provides that providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context in which the AI systems are to be used, and considering the persons or groups of persons on whom the AI systems are to be used;
·The 1989 Framework Directive on OSH: The Framework Directive 89/391/EEC obliges employers to perform a risk assessment to pre-emptively ensure that AI tools will not harm the safety and health of workers;
·The 2000 Employment Equality Directive;
·The 2002 Directive on Information and Consultation: Article 4, paragraph 2 (c) obliges employers to inform and consult workers on decisions likely to lead to substantial changes in work organisation or in contractual relations;
·The 2024 Platform Work Directive: Digital labour platforms will be required to introduce specific measures on the use of automated monitoring and decision-making systems (article 6), the human monitoring of automated systems (article 7), the human review of significant decisions (article 8), and the information and consultation rights of platform workers (article 9).
3.1.2 The challenges arising for companies in their organisational operations (cybersecurity, security breaches, privacy, data management, etc.), as well as those regarding work organisation, have also been scrutinised at EU level. In addition to the GDPR, the following legislation applies:
·The NIS2 Directive: The NIS2 Directive obliges companies providing essential services in a country - e.g. energy, transport, water management, digital infrastructure, telecoms, etc. - to organise operations in a way that increases their protection against attacks and breaches of security, including data security;
·The Cyber Resilience Act: The Cyber Resilience Act increases the obligations on manufacturers of connected products to ensure that vulnerabilities are handled and patched, as well as to increase the protection of devices and machines;
·The Critical Entities Resilience Directive and sector-specific regulation, including DORA, aim to specifically tackle the organisational challenges for companies related to the use of AI.
3.1.3 The EESC notes that there is ongoing work, facilitated by the AI Office, to prepare a Code of Practice for General-Purpose AI (CoP), which will detail the rules of the AI Act for providers of general-purpose AI models and of general-purpose AI models with systemic risk.
3.2 Role of Social Dialogue
3.2.1 The social partners and social dialogue at all levels have an essential role to play in promoting responsible and ‘trustworthy’ AI in the world of work. To ensure an effective social dialogue on AI matters, the EESC calls for the promotion of a positive and enabling environment for social dialogue. Strong and constructive social dialogue at all levels, in accordance with applicable national rules and practices, is the main tool for minimising the risks and possible harmful impacts, and should also facilitate the use of AI so that its potential can be realised. To that end, the capacities of the social partners on AI need to be built up to ensure knowledge and understanding of the challenges and opportunities it poses.
3.2.2 The EESC notes that in 2020 the European social partners signed the Autonomous Framework Agreement on Digitalisation, which covers: 1) digital skills and securing employment; 2) modalities of connecting and disconnecting; 3) AI and guaranteeing the human in control principle; and 4) respect for human dignity and surveillance. It provides, inter alia, that the deployment of AI systems should follow the ‘human in command’ principle and should be safe, i.e. it should prevent harm.
3.2.3 As regards the interplay between the existing EU legislation (the AI Act) and social dialogue, the EESC calls on the European AI Office to establish close cooperation with the European cross-sectoral social partners, to ensure that the role of social dialogue is adequately reflected in the AI Office’s upcoming guidelines and in secondary legislation. The EESC further calls on the AI Office to produce clarifications on all AI systems. In-depth, robust and clearly structured coordination channels should be established between the AI Office and the European Commission’s DG EMPL and DG Connect.
3.3 Assessment of the current situation
3.3.1 The EESC considers that the existing 116 pieces of legislation in the EU digital agenda for 2030 – in particular the GDPR, the AI Act and the other legislation referred to above – sufficiently cover the challenges posed by AI at work, including discrimination, occupational safety and health, information and consultation, and data protection.
3.3.2 In light of this, the EESC regrets that, notwithstanding the existing broad legislative framework that already provides comprehensive and sufficient regulation of AI in working life, the European Commission considers that new legislation is still needed on the impact of digitalisation in the world of work. As stated in the mission letter of Roxana Mînzatu, Executive Vice-President for Social Rights and Skills, Quality Jobs and Preparedness: ‘[t]his should be done notably through an initiative on algorithmic management and through possible legislation on AI in the workplace, following consultation with social partners. (…)’.
3.3.3 This intention also contradicts the current political priority of simplification and of reducing regulatory and reporting requirements by 25%, and is in clear contradiction with the common understanding of the need to simplify the current EU framework in order to improve the EU’s competitiveness and business environment.
3.3.4 Instead, the Commission should allow companies to develop responsible and ethical approaches to working with AI technologies within the current legal framework. This ensures that the social partners’ autonomy is respected.
3.3.5 Should there nevertheless be any initiative related to AI in the workplace, it must first and foremost aim to effectively implement and enforce the existing comprehensive EU legislative framework. Secondly, it should aim to help companies mitigate the possible risks in the world of work while fully benefiting from the opportunities offered by AI. This would ensure enhanced prosperity, productivity, sustainability and social well-being.
3.3.6 The EESC is of the strong opinion that, if the Commission proposes a new initiative on AI in the workplace or on algorithmic management, that initiative should not use Chapter III of the Platform Work Directive as a blueprint. The rules in the Platform Work Directive are designed to work specifically in those kinds of businesses, and treating all EU companies as if they were digital labour platforms would be a significant barrier to the uptake of new technologies.
3.3.7 Furthermore, a wide-ranging revision of the existing legislation would be detrimental in terms of the burden placed on both legislators and enforcers, and would send a very negative message in terms of the advancement of, and investment in, AI in the EU.
3.3.8 Going forward, the EESC therefore calls for a significant focus on reducing legal complexity, as this will better enable European businesses to use AI in a responsible and ethical way. The need to reduce legal complexity arises, for instance, from overlaps in existing legislation as well as from multiple and continuous reporting obligations.