Document 52018IE1473
EESC 2018/01473
OJ C 440, 6.12.2018, p. 1–7
Opinion of the European Economic and Social Committee on ‘Artificial intelligence: anticipating its impact on work to ensure a fair transition’
(own-initiative opinion)
(2018/C 440/01)
Rapporteur: Franca SALIS-MADINIER
Plenary Assembly decision: 15.2.2018
Legal basis: Rule 29(2) of the Rules of Procedure
Section responsible: Single Market, Production and Consumption
Adopted in section: 4.9.2018
Adopted at plenary: 19.9.2018
Plenary session No: 537
Outcome of vote (for/against/abstentions): 183/1/2
1. Conclusions and recommendations
1.1. Artificial intelligence (AI) and robotics will expand and amplify the impact of the digitalisation of the economy on labour markets (1). Technological progress has always affected work and employment, requiring new forms of social and societal management. The EESC believes that technological development can contribute to economic and social progress; however, it feels that it would be a mistake to overlook its overall impact on society. In the world of work, AI will expand and amplify the scope of job automation (2). This is why the EESC would like to give its input to efforts to lay the groundwork for the social transformations which will go hand in hand with the rise of AI and robotics, by reinforcing and renewing the European social model.
1.2. The EESC flags up the potential of AI and its applications, particularly in the areas of healthcare, security in the transport and energy sectors, combating climate change and anticipating threats in the field of cybersecurity. The European Union, governments and civil society organisations have a key role to play when it comes to fully tapping the potential advantages of AI, particularly for people with disabilities or reduced mobility, the elderly and people with chronic health issues.
1.3. However, the EU has insufficient data on the digital economy and the resulting social transformation. The EESC recommends improving statistical tools and research, particularly on AI, the use of industrial and service robots, the Internet of Things and new economic models (the platform-based economy and new forms of employment and work).
1.4. The EESC calls on the European Commission to promote and support studies carried out by European sector-level social dialogue committees on the sector-specific impact of AI and robotics and, more broadly, of the digitalisation of the economy.
1.5. It is acknowledged that AI and robotics will displace and transform jobs, by eliminating some and creating others. Whatever the outcome, the EU must guarantee access to social protection for all workers, employees and self-employed or bogus self-employed persons, in line with the European Pillar of Social Rights.
1.6. The Commission has proposed reinforcing the European Globalisation Adjustment Fund so that it can assist employees who lose their jobs and self-employed people who have to wind up their businesses as a result of the digitalisation of the economy (3). The EESC sees this as a step towards the establishment of a fully-fledged European transition fund which would help manage the digital transformation in a socially responsible way.
1.7. The EESC recommends applying and reinforcing the principles, commitments and obligations set out in the existing texts adopted by the European institutions and the social partners on informing and consulting workers (4), particularly when deploying new technologies, including AI and robotics. The EESC calls for a European programme that takes an inclusive approach to AI, is founded on these texts and on the European Pillar of Social Rights, and involves all stakeholders.
1.8. The EESC recommends that the ethical guidelines on AI to be prepared by the Commission should draw a line in the sand for interaction between workers and intelligent machines, so that humans never become the underlings of machines. With a view to inclusive AI, these guidelines must establish principles of participation, responsibility and ownership of production processes so that, as stressed by the ILO Constitution, work gives those who perform it the satisfaction of giving the fullest measure of their skill and attainments and making their greatest contribution to the common wellbeing.
1.9. The EESC also recommends that these guidelines factor in principles of transparency when using AI systems for recruitment, assessment and supervision of workers for management purposes, along with principles of health and safety and improving working conditions. Lastly, the guidelines must safeguard rights and freedoms with regard to the processing of workers’ data, in accordance with the principles of non-discrimination.
1.10. The implementation of the ethical guidelines on AI must be monitored. A European observatory focusing on ethics in AI systems could be assigned responsibility for acting as watchdog, including in businesses.
1.11. The EESC recommends that engineers and intelligent machine designers be trained in ethics to avoid establishing new forms of digital Taylorism, where humans are relegated to following orders dictated by machines. Spreading best practice and exchanging experiences in this field should be encouraged.
1.12. The EESC calls for the principle of legal responsibility to be clarified. In the interaction between man and machine, emerging health and safety risks must be tackled more ambitiously under the umbrella of the Product Liability Directive (5).
1.13. Given the danger of social polarisation in the digital transformation, the EESC is calling on the EU institutions to begin a debate on financing public budgets and social protection systems in an economy with increasing numbers of robots (6), as taxation on labour is still the main source of tax revenue in Europe. In order to apply the principle of fairness, this debate should consider the redistribution of the benefits of digitalisation.
2. Introduction
2.1. The development of AI has been patchy since the concept first appeared in 1956, and throughout the second half of the 20th century. It has been the cause of high hopes alternating with crushing disappointments. However, it has seen a significant new upsurge in the last few years, made possible by the collection, organisation and storage of amounts of data that are unprecedented in human history (big data) and by the exponential increase in computing power and algorithm capacity.
2.2. The EESC drew up an opinion on AI in 2017 (7), which addressed a considerable number of issues. As pointed out in that opinion, there is no precise definition of AI. For the purposes of the present opinion, we will consider AI to be a discipline which sets out to use digital technologies to create systems capable of autonomously reproducing human cognitive functions, including in particular grasping data, a form of understanding and adaptation (problem solving, automatic reasoning and learning).
2.3. AI systems are now capable of solving complex problems which are sometimes beyond the scope of human intelligence. AI applications would seem to be potentially unlimited: in banking, insurance, transport, healthcare, education, energy, marketing and defence, along with sectors such as industry, construction, farming, crafts etc. (8). AI is expected to render production processes for goods and services more efficient, make businesses more profitable and help promote economic growth.
2.4. This renewed surge forward in AI also means that a number of questions regarding its potential role in society, its level of autonomy and its interaction with human beings have surfaced again. As pointed out in the EESC’s 2017 opinion on AI (9), these questions focus particularly on ethics, security, transparency, privacy and labour standards, education, accessibility, legislation and regulation, governance and democracy.
2.5. The different approaches need to come together in the debate on AI in order to look beyond the purely economic considerations which sometimes fetter it. A multidisciplinary framework of this sort would be valuable when analysing the impact of AI on the world of work, since this is one of the main areas in which humans and machines interact. Work has always been affected by technology. The effects of AI on jobs and work therefore need to be considered very carefully at political level, as part of the institutions’ role involves making economic changes socially sustainable (10).
2.6. This own-initiative opinion aims to shine a spotlight on how AI will affect work, including the nature and organisation of work and working conditions. As the EESC has already pointed out (11), we need better statistics and research to be able to deliver accurate forecasts of developments in the labour market and clear indicators of particular trends, particularly as regards the quality of work, the polarisation of jobs and income, and working conditions during the digital transformation. The EU has insufficient data on what is referred to as the ‘sharing’ economy, on-demand work platforms and the new models of online subcontracting, as well as on the use of robots in industry and services to individuals, the Internet of Things, and the use and spread of AI systems.
3. AI and developments in the number of jobs
3.1. The question of how the deployment of AI and robotics across production processes will affect the number of jobs is controversial. Many studies have endeavoured to find an answer but failed to reach a scientific consensus, and the range of findings (from 9 % to 54 % of jobs at risk (12)) reflects the complexity of choosing a methodology and the way in which this shapes the outcome of the research.
3.2. Accurately predicting what will happen is no easy task, because the technical potential of automation is not the only factor which comes into play: political, regulatory, economic and demographic changes, along with social acceptability, also have a bearing. The availability of cutting-edge technology is no guarantee that it will be used and become widespread.
3.3. Lastly, it is still impossible to predict the net number of jobs that can be automated in each sector without taking into consideration the changes in professions and the pace of job creation. The development of AI systems will require new jobs in engineering, IT and telecommunications (engineers, technicians and operators) and in big data: data officers, data analysts, data miners, etc.
3.4. Public authorities will need to ensure that this digital transformation, which could affect both the number and quality of jobs, is socially sustainable (13). One of the risks flagged up by experts is the danger of jobs becoming polarised, with highly successful people, who have skills useful for the digital economy, on the one side and people who are losing out, whose qualifications, experience and expertise will be gradually rendered obsolete by this transformation, on the other. In its recent communication (14), the European Commission proposed a response to this challenge, rooted largely in education, training and improving basic writing, reading and numerical skills, along with digital skills. This response should be supported by the economic and social stakeholders, including in the context of national, European, interprofessional and sectoral social dialogue (15).
3.5. The EESC considers, however, that this focus will not be able to meet all the challenges, particularly uncertainty as regards job trends. Three additional pathways are worth exploring: ‘inclusive’ AI, anticipating change, and finally, when redundancy plans are unavoidable, socially responsible and managed restructuring.
4. Inclusive and smart AI and robotics
4.1. The EESC supports the principle of a programme of inclusive AI and robotisation. This means that when new processes using new technologies are introduced in businesses, workers should be involved in the practicalities of how these processes work. As pointed out by the WRR (16), ‘inclusive and smart’ deployment of new technologies, where workers remain central to the processes and are involved in improving them, can help improve production processes (17).
4.2. Given the impact of algorithms on recruitment, working conditions and professional evaluation, the EESC supports the principle of algorithmic transparency, which does not involve revealing codes but rather ensuring that the parameters and criteria used to make decisions are understandable. There must always be provision for appeal to a human.
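By way of illustration, a minimal Python sketch of what this principle could look like in practice: the criteria, weights, threshold and function names below are purely hypothetical and are not drawn from this opinion or from any existing HR system. The decision record carries its own explanation, and borderline outcomes are flagged for a human reviewer rather than decided by the system alone.

    # Minimal sketch (hypothetical criteria and weights, for illustration only).
    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        score: float
        explanation: dict = field(default_factory=dict)  # criterion -> contribution
        needs_human_review: bool = False

    # Illustrative weights; in practice the criteria would be documented and
    # communicated to the people they affect.
    WEIGHTS = {"years_experience": 0.5, "skills_match": 0.4, "assessment_score": 0.1}

    def score_candidate(features: dict, threshold: float = 0.6) -> Decision:
        contributions = {name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS}
        total = sum(contributions.values())
        # Borderline cases are routed to a human, in line with the principle of
        # appeal to a human.
        return Decision(score=total,
                        explanation=contributions,
                        needs_human_review=abs(total - threshold) < 0.1)

    if __name__ == "__main__":
        decision = score_candidate({"years_experience": 0.8, "skills_match": 0.5,
                                    "assessment_score": 0.7})
        print(decision.score, decision.explanation, decision.needs_human_review)

The scoring logic here is deliberately trivial; the point is that the explanation travels with the decision and that a human remains the final arbiter.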
4.3. AI which places workers at the centre takes account of the views of those people who will be working with the new technological processes, clearly defines the tasks and responsibilities which will stay in the hands of workers, and retains some forms of work ownership by workers so that they do not become mere underlings.
4.4. The principle of legal responsibility must be clarified. Industrial or service robots collaborate with humans on an increasingly frequent basis. AI enables robots to ‘climb out of their cages’, and accidents can happen (18). This is why the responsibility of autonomous systems in the event of accidents must be clearly pinned down, and there must be provision for covering the health and safety risks to which workers are exposed. The European Commission is beginning to explore these emerging risks in connection with the Product Liability Directive (19). A more ambitious approach is needed with regard to safety in the workplace.
4.5. The principle of fairness applied to the world of work consists of not alienating workers from their work. Some experts stress that there is a risk that AI may contribute to a form of de-skilling of workers. This is why steps must be taken to ensure that, as the ILO Constitution puts it, work gives those who perform it the satisfaction of giving the fullest measure of their skill and attainments and making their greatest contribution to the common wellbeing. From a management point of view, this is also a way to keep workers motivated.
5. Anticipating change
5.1. Many studies over the last few years have shown that European, and even national, social dialogue is being eroded, despite efforts by the Commission and the European Council to reinvigorate it. However, social dialogue is one of the most effective tools for coping with the social challenges of digitalisation. The EESC therefore strongly calls for this dialogue to be maintained in businesses and at all relevant levels, in order to prepare for the transformations in a socially acceptable way. The EESC would point out that social dialogue is one of the best guarantees of a peaceful society and reduced inequality. Above and beyond political pledges to revive social dialogue, the EU institutions have a clear duty to encourage and contribute to this form of dialogue.
5.2. Particularly when introducing these technologies, this dialogue must make it possible to discover how production processes will change in businesses and sectors and to assess what new skills and training will be needed. However, it should also be an opportunity to explore early on how AI can be used to improve organisational and production processes and boost workers’ skills, and how the resources freed up by AI can be optimised to develop new products and services or to improve the quality of customer service.
5.3. Socially responsible restructuring
5.4. When redundancy plans are deemed inevitable, the challenge is to manage the social impact of corporate restructuring. As the European social partners have pointed out in their Orientations for reference in managing change and its social consequences (20), many case studies stress the importance of exploring all possible alternatives to layoffs, such as training, re-skilling and start-up support.
5.5. In the event of restructuring, informing and consulting workers must make it possible, in line with the relevant European directives (21), to improve risk anticipation, facilitate employee access to training within the undertaking, make work organisation more flexible while maintaining security, and promote employee involvement in the operation and future of the undertaking.
5.6. Lastly, as the European Commission quite rightly points out, the EU must guarantee that everyone, including employees and self-employed or bogus self-employed persons, has access to social protection ‘regardless of the type and duration of their employment relationship’, in accordance with the European Pillar of Social Rights (22).
6. AI and developments in working conditions
6.1. On 25 April 2018, the European Commission proposed a European approach to promote investment policies in AI development and establish ethical guidelines. It stressed that AI technologies have the potential to radically change our society, particularly in the sectors of transport, healthcare and manufacturing.
6.2. This transformative potential affects production processes and the tasks involved in work. The impact can be positive, particularly as regards the way in which AI can improve these processes and the quality of work. The same positive knock-on effect may be felt in the form of ‘flexible’ work structures, with greater weight being attached to shared decision-making, independently organised teams, workers who perform a variety of tasks, a horizontal management structure and innovative and participatory work practices (23).
6.3. As pointed out by the EESC (24) and the Commission itself, AI can help workers perform repetitive, difficult or even dangerous tasks, and some AI applications can improve employees’ wellbeing and make their daily life easier.
6.4. However, this approach raises new questions at the same time, particularly as regards the interaction between AI and workers, and developments in the tasks involved in work. In factories, businesses and offices, just how autonomous will intelligent machines be, and how will they complement the work performed by human beings? The EESC points out that in the new world of work, the definition of the relationship between people and machines is crucial. An approach centred on humans controlling machines is fundamental (25).
6.5. As a matter of principle, it is not ethically acceptable for a human being to be controlled by AI or seen as the underling of a machine which issues orders regarding which tasks should be performed, and how and when. However, at times it would seem that we have already crossed that particular ethical Rubicon (26). This is why AI ethical guidelines must draw a line in the sand.
6.6. The EU must now make it a priority to avoid new forms of digital Taylorism shaped by the developers of intelligent machines. This is why, as the EESC recently pointed out, European researchers, engineers, designers and entrepreneurs who are involved in the development and marketing of AI systems must act in accordance with ethical and social responsibility criteria. One good response to this imperative could be to incorporate ethics and the humanities into training courses in engineering (27).
6.7. Another question touches on oversight and monitoring by management. Everyone agrees on the need for reasonable oversight of production processes and thus of the work carried out as well. Currently, new technological tools would potentially make it possible to deploy intelligent systems to monitor workers in every respect and in real time, with the risk that this oversight and monitoring could become disproportionate.
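As one illustration of what proportionate monitoring could mean in practice, the following minimal Python sketch (with invented team, field and function names; no real monitoring product is implied) reports only periodic, aggregated indicators rather than continuous, per-worker surveillance:

    # Minimal sketch (hypothetical data and names, for illustration only).
    from collections import defaultdict
    from statistics import mean

    def weekly_team_indicators(events):
        """Aggregate task durations per team and week; the resulting report
        keeps no real-time, per-worker detail."""
        buckets = defaultdict(list)
        for event in events:
            buckets[(event["team"], event["week"])].append(event["duration_minutes"])
        return {key: round(mean(values), 1) for key, values in buckets.items()}

    if __name__ == "__main__":
        sample = [
            {"team": "logistics", "week": 37, "duration_minutes": 42},
            {"team": "logistics", "week": 37, "duration_minutes": 39},
        ]
        print(weekly_team_indicators(sample))  # {('logistics', 37): 40.5}

Whether such aggregation is sufficient, and at what level it should be set, is precisely the kind of question that belongs on the agenda for social dialogue discussed below.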
6.8. The reasonable and proportionate nature of the monitoring of work performed and performance indicators, and the relationship of trust between managers and subordinates, are therefore issues which should also be included on the agenda for social dialogue at national, European, interprofessional and sectoral level.
6.9. The issue of algorithm and learning data bias and potentially harmful discrimination is still controversial. Some people feel that algorithms and other predictive recruitment software can reduce recruitment-related discrimination and promote ‘smarter’ recruitment, while others consider that recruitment software will always run the danger of reflecting, even involuntarily, the bias of the people who programmed these recruitment robots. Some experts feel that algorithmic models will only ever be opinions embedded in mathematics (28). This is why it is imperative to ensure that there is provision for appeal to a human (in connection with the principle of transparency considered above: the right to request the criteria on which decisions are made), and that the collection and processing of data are in line with the principles of proportionality and purpose limitation. In any event, data may not be used for any purpose other than the one for which they were collected (29).
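A minimal sketch, assuming a hypothetical data wrapper (the class, field and purpose names below are illustrative and do not correspond to any existing library), of how purpose limitation can be enforced in code so that data collected for recruitment cannot silently be reused for another purpose:

    # Minimal sketch (hypothetical names, for illustration only).
    class PurposeBoundRecord:
        """Holds personal data together with the purpose declared at collection time."""

        def __init__(self, data, purpose):
            self._data = dict(data)
            self._purpose = purpose

        def read(self, purpose):
            # Any attempt to read the data for a different purpose is refused.
            if purpose != self._purpose:
                raise PermissionError(
                    f"data collected for '{self._purpose}' may not be used for '{purpose}'")
            return dict(self._data)

    if __name__ == "__main__":
        record = PurposeBoundRecord({"cv_text": "..."}, purpose="recruitment")
        print(record.read("recruitment"))        # allowed
        # record.read("performance_monitoring")  # would raise PermissionError

Such a technical safeguard complements, but does not replace, the legal guarantees referred to in the following point.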
6.10. The General Data Protection Regulation gives Member States the option to establish more specific rules (through legislation or collective agreements) to guarantee the protection of rights and freedoms with regard to the processing of employees’ personal data within the framework of employment relationships, and this provides genuine leverage that the Member States and social partners must use (30).
6.11. It should be pointed out here that these dangers do not apply solely to employees. The development of online subcontracting, platform-based work and various forms of crowdworking also goes hand in hand with new automated systems for managing performance and attendance, which sometimes seem to overstep ethical bounds (for instance, when the worker’s webcam is activated by the platform and screenshots are taken remotely).
6.12. The algorithms used by these platforms, which establish how much freelancers are paid, their online reputation and access rights among other things, are often opaque. Workers are not told how the algorithms operate and do not have access to the operational criteria applied.
7. Laying the groundwork for a fair transition
7.1. In the medium term, the danger of social polarisation stressed by many experts calls for in-depth discussion on the future of our social models, including the way they are financed. The EESC calls on the Commission to launch a debate on taxation and the financing of public budgets and collective social protection systems in an economy with rapidly increasing numbers of robots (31), as taxation of work is still the main source of tax revenue in Europe. This debate should also touch on the redistribution of the benefits of digitalisation.
7.2. The Commission proposes reinforcing the European Globalisation Adjustment Fund (EGF), partly with a view to assisting employees whose jobs become obsolete and self-employed people who have to wind up their businesses as a result of the digitalisation and automation of the economy (32). The EESC sees this as a step towards the establishment of a fully-fledged European transition fund which would help anticipate and manage the digital transformation and the restructuring it will bring about in a socially responsible way.
7.3. National debate is increasingly coming to focus on the social, and more broadly societal, aspects of AI. Recent discussions in the UK Parliament (33) and the French Senate have illustrated the need to promote an ethical approach to AI, based on a number of principles such as loyalty, transparency and clear explanations of algorithm-based systems, the ethics and responsibility of AI applications, and raising awareness among researchers, experts and specialists as regards the potential for misuse of their research findings. In France, the Villani report sets out to ‘give meaning’ to AI (34). Many experts from Yale, Stanford, Cambridge and Oxford universities warn against the ‘unresolved vulnerabilities’ of AI and flag up the imperative need to anticipate, prevent and mitigate them (35). Similarly, Quebec’s Research Fund (FRQ) has been working with the University of Montreal for several months on a project to establish a global observatory on the societal impact of AI and digitalisation (36).
7.4. All these initiatives show that the debate on AI needs to look beyond purely economic and technical considerations, so that public discussion explores the role that society would like to see AI play, including in the world of work. This debate will be a way to avoid falling into the trap of a ‘false dichotomy’ between a totally naïve and optimistic view of AI and its impact, and the expectation of widespread disaster (37). Launching the debate at national level is a useful first step, but the EU also has a role to play, particularly in setting ethical guidelines, as the Commission has already begun doing.
7.5. Responsibility for enforcing these guidelines will have to be entrusted to an observatory focusing on ethics in AI systems. We need to ensure that AI and its applications promote the wellbeing and empowerment of people and workers with due respect for fundamental rights, and do not contribute, either directly or indirectly, to loss of ownership, de-skilling and loss of autonomy. The principle of humans being in the driving seat in every situation, including work, must be applied in practice.
7.6. This principle must also apply in other sectors, such as healthcare, where professionals provide services closely linked to human beings’ life, health, security and quality of life. Only through rigorous ethical rules will it be possible to guarantee that workers, along with consumers, patients, clients and other service providers, will be able to make the most of the new AI applications.
Brussels, 19 September 2018.
The President of the European Economic and Social Committee
Luca JAHIER
(1) D. Acemoglu, P. Restrepo (2018), Artificial Intelligence, Automation and Work, NBER Working Paper 24196, January 2018. See also: Employment Council (2017), Automatisation, numérisation et emploi (Automation, digitalisation and employment), Volume 1 (www.coe.gouv.fr).
(2) D. Acemoglu, op.cit.; Employment Council (2017), op. cit.
(3) COM(2018) 380 final.
(4) Directive 2002/14/EC; Joint Declaration of Intent by UNICE, ETUC and CEEP on social dialogue and new technologies, 1985; Joint opinion of the social partners on new technologies, the organisation of work and the adaptability of the labour market, 1991; Reference guidelines for managing change and its social impact, 2003.
(5) COM(2018) 246 final.
(6) https://ifr.org/ifr-press-releases/news/robots-double-worldwide-by-2020.
(7) OJ C 288, 31.8.2017, p. 1.
(8) For instance, see https://www.techemergence.com.
(9) OJ C 288, 31.8.2017, p. 1.
(10) Eurofound (2018), Automation, digitalisation and platforms: Implications for work and employment, Publications Office of the European Union, Luxembourg.
(11) OJ C 13, 15.1.2016, p. 161.
(12) Frey and Osborne, 2013; Bowles, 2014; Arntz, Gregory and Zierahn, 2016; Le Ru, 2016; McKinsey, 2016; OECD, 2017; see also exploratory opinion CCMI/136, OJ C 13, 15.1.2016, p. 161.
(13) http://www.oecd.org/employment/future-of-work/.
(14) COM(2018) 237 final.
(15) OJ C 367, 10.10.2018, p. 15.
(16) WRR: the Dutch Scientific Council for Government Policy.
(17) https://english.wrr.nl/latest/news/2015/12/08/wrr-calls-for-inclusive-robot-agenda.
(18) See work on Emerging risks by the European Agency for Safety and Health at Work (https://osha.europa.eu/emerging-risks). According to the agency, ‘Current approaches and technical standards aiming to protect employees from the risk of working with collaborative robots will have to be revised in preparation for these developments.’
(19) COM(2018) 246 final.
(20) Joint text by UNICE, CEEP, UEAPME and ETUC of 16.10.2003.
(21) Directive 2002/14/EC establishing a general framework for informing and consulting employees in the European Community.
(22) OJ C 303, 19.8.2016, p. 54; OJ C 173, 31.5.2017, p. 15; OJ C 129, 11.4.2018, p. 7; OJ C 434, 15.12.2017, p. 30.
(23) OJ C 434, 15.12.2017, p. 30.
(24) OJ C 367, 10.10.2018, p. 15.
(25) OJ C 288, 31.8.2017, p. 1; OJ C 367, 10.10.2018, p. 15.
(26) Several European media outlets have reported on working conditions in certain logistics centres where the workers are totally controlled by algorithms telling them which tasks need to be performed within set timeframes, and where their performance is assessed in real time.
(27) OJ C 367, 10.10.2018, p. 15.
(28) Cathy O’Neil, Harvard PhD and data scientist: ‘Models are opinions embedded in mathematics’ (https://www.theguardian.com/books/2016/oct/27/cathy-oneil-weapons-of-math-destruction-algorithms-big-data).
(29) For instance, see the work carried out by the French CNIL (Comment permettre à l’homme de garder la main? Les enjeux éthiques des algorithmes et de l’intelligence artificielle — How can we make sure that humans stay on top? The ethical issues of algorithms and artificial intelligence, https://www.cnil.fr/sites/default/files/atoms/files/cnil_rapport_garder_la_main_web.pdf).
(30) Regulation (EU) 2016/679 (Article 88).
(31) https://ifr.org/ifr-press-releases/news/robots-double-worldwide-by-2020.
(32) COM(2018) 380 final.
(33) https://www.parliament.uk/ai-committee.
(34) http://www.enseignementsup-recherche.gouv.fr/cid128577/rapport-de-cedric-villani-donner-un-sens-a-l-intelligence-artificielle-ia.html.
(35) https://www.eff.org/files/2018/02/20/malicious_ai_report_final.pdf.
(36) http://nouvelles.umontreal.ca/article/2018/03/29/le-quebec-jette-les-bases-d-un-observatoire-mondial-sur-les-impacts-societaux-de-l-ia/.
(37) D. Acemoglu, op. cit. See also Eurofound 2018, Automation, digitalisation and platforms: Implications for work and employment, Publications Office of the European Union, Luxembourg, p. 23: ‘The risks comprise unwarranted optimism, undue pessimism and mistargeted insights’.