Rules for trustworthy artificial intelligence in the EU

SUMMARY OF:

Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)

WHAT IS THE AIM OF THE REGULATION?

Regulation (EU) 2024/1689 aims to encourage the development and uptake of safe and trustworthy artificial intelligence (AI) systems across the European Union (EU) single market in both the private and public sectors, while ensuring EU citizens’ health and safety and respect for fundamental rights. The regulation sets out risk-based rules on:

  • placing on the market, putting into service and using certain AI systems;
  • banning certain AI practices;
  • requirements and obligations around high-risk AI systems;
  • transparency for certain AI systems;
  • transparency and risk management for general-purpose AI models (powerful AI models that underpin AI systems capable of carrying out a wide range of tasks);
  • market monitoring, market surveillance, governance and enforcement;
  • supporting innovation, focusing on small and medium-sized enterprises (SMEs) and start-ups.

There are some exemptions, such as for systems used exclusively for military and defence or for research purposes.

KEY POINTS

What is an AI system?

An AI system is a machine-based system that is designed to operate with some level of autonomy and that can:

  • adapt after it is deployed; and
  • generate outputs such as predictions, content, recommendations or decisions from input it receives (to achieve explicit or implicit objectives).

A risk-based approach

The legislation follows a risk-based approach: the higher the risk of causing harm to society, the stricter the rules. The regulation defines the use of AI in the following areas as high risk, due to the potential impact on fundamental rights, safety and well-being (a schematic sketch of the risk tiers follows this list):

  • safety components in products (or stand-alone products) covered by EU harmonisation legislation, where that legislation requires a third-party conformity assessment;
  • biometrics, when used for remote identification, categorising individuals by sensitive attributes (such as race or religion), or emotion recognition, except where used to simply verify identity;
  • critical infrastructure, when the AI is a safety component in areas such as digital infrastructure, traffic, water, gas, heating and electricity;
  • education and vocational training, including access to education issues, evaluating learning outcomes, assessing education levels or monitoring behaviour during tests;
  • employment, including recruiting, selecting candidates, making decisions on employment terms (promotions, terminations), task allocation or performance monitoring;
  • essential services – AI systems used by public authorities to evaluate eligibility for public benefits and services (e.g. healthcare), as well as systems for credit scoring, risk assessment and pricing in insurance, and the prioritisation of emergency responses;
  • law enforcement – AI systems used for assessing crime risks, polygraphs, evaluating evidence reliability, predicting recidivism or profiling individuals for criminal investigations;
  • migration and border control – AI systems used to assess risks related to migration, asylum and visa applications, or to detect and identify individuals in migration contexts;
  • administration of justice and democratic processes – AI systems used by judicial authorities for legal research and interpretation or systems that could influence election outcomes.
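
Purely as an illustration of this tiered structure, the risk-based approach can be pictured as a mapping from a use case to the rules that attach to it. The tier names and obligations below paraphrase this summary; they are not an official taxonomy or schema:

    # Illustrative sketch only: the tiers and obligations paraphrase this
    # summary of the regulation, not any official schema.
    RISK_TIERS = {
        "prohibited": "banned outright (unacceptable risk)",
        "high": "strict requirements and obligations apply",
        "transparency": "disclosure obligations apply (e.g. chatbots, generative output)",
        "limited": "no further rules under the regulation",
    }

    # Hypothetical example use cases, assigned to tiers following the lists
    # in this summary (social scoring, employment, chatbots, ...).
    USE_CASE_TIER = {
        "social_scoring": "prohibited",
        "recruitment_screening": "high",
        "customer_chatbot": "transparency",
        "spam_filter": "limited",
    }

    def rules_for(use_case: str) -> str:
        """Look up the (illustrative) rule set attached to a use case."""
        tier = USE_CASE_TIER.get(use_case, "limited")
        return f"{use_case}: {tier} -> {RISK_TIERS[tier]}"

    for case in USE_CASE_TIER:
        print(rules_for(case))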

The regulation prohibits the following AI practices, which pose an unacceptable level of risk:

  • Subliminal or deceptive techniques to manipulate individual or group behaviour, impairing their ability to make informed decisions and potentially causing harm.
  • Exploiting vulnerabilities based on age, disability, or socioeconomic situations to manipulate individuals or groups, leading to potential harm.
  • Social scoring, evaluating or classifying people based on behaviour or characteristics, resulting in unfair treatment unrelated to the context in which the data were collected or in a manner disproportionate to the behaviour’s severity.
  • Criminal risk assessment, predicting the likelihood of committing a crime solely based on profiling or personality traits, except in objective, fact-based criminal investigations.
  • Facial recognition database scraping from the internet or security cameras without specific targeting.
  • Inferring emotions in sensitive areas, such as workplaces or educational institutions, unless used for medical or safety purposes.
  • Biometric categorisation based on data to infer sensitive attributes like race, religion or political opinions, except for lawful use in law enforcement.
  • Real-time biometric identification in public by law enforcement, unless strictly necessary for particular situations (e.g. finding missing persons, preventing imminent threats or identifying suspects of serious crimes). This must follow strict legal procedures, including prior authorisation, a limited scope and safeguards to protect rights and freedoms.

The regulation introduces disclosure obligations where a risk could arise from a lack of transparency around the use of AI:

  • AI designed to impersonate humans (e.g. a chatbot) needs to inform the human it is interacting with;
  • the output of generative AI needs to be marked as AI-generated in a machine-readable way (see the sketch after this list);
  • in certain cases, the output of generative AI needs to be visibly labelled, namely deepfakes and text that is intended to inform the public of matters of public interest.
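
The regulation does not prescribe one marking technique; watermarks and provenance metadata (such as the C2PA standard) are both possible approaches. As a minimal sketch, assuming a simple JSON sidecar file – the schema and field names below are hypothetical, not mandated by the regulation:

    import hashlib
    import json
    from datetime import datetime, timezone

    def ai_disclosure(content: bytes, generator: str) -> dict:
        """Build a machine-readable disclosure for generated content.

        The regulation requires generative output to be marked as
        AI-generated in a machine-readable way; it does not mandate this
        particular schema. All field names here are assumptions."""
        return {
            "ai_generated": True,                                   # the disclosure itself
            "generator": generator,                                 # hypothetical field
            "marked_at": datetime.now(timezone.utc).isoformat(),    # hypothetical field
            "content_sha256": hashlib.sha256(content).hexdigest(),  # ties label to content
        }

    # Example: store the disclosure as a JSON sidecar next to the output file.
    disclosure = ai_disclosure(b"...generated image bytes...", "example-model-v1")
    with open("generated_image.json", "w") as f:
        json.dump(disclosure, f, indent=2)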

All other AI systems are deemed to present limited risk, and the regulation introduces no further rules for them.

Trustworthy use of large AI models

  • General-purpose AI models are AI models that are trained on large amounts of data and can perform a wide range of tasks. They can be components of AI systems.
  • The regulation introduces transparency obligations for providers of such general-purpose AI models, namely technical documentation, the provision of information to downstream developers of AI systems and the publication of a summary of the content used to train the model.
  • The most powerful general-purpose AI models can pose systemic risks. If a model meets a certain threshold of capability, its provider must fulfil additional risk-management and cybersecurity obligations (see the sketch below).
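
The capability threshold is expressed in training compute: the regulation presumes high-impact capabilities, and hence systemic risk, when the cumulative compute used to train a model exceeds 10^25 floating-point operations. A minimal sketch of that presumption follows; the function and its inputs are hypothetical, and only the threshold value comes from the regulation (the Commission can update it):

    # 10**25 FLOPs is the presumption threshold the regulation initially
    # sets for "high-impact capabilities"; everything else here is a
    # hypothetical illustration.
    SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

    def presumed_systemic_risk(training_flops: float) -> bool:
        """Return True if a general-purpose model's cumulative training
        compute exceeds the regulation's presumption threshold."""
        return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

    # A model trained with ~5e25 FLOPs would be presumed to pose systemic
    # risk, triggering the additional obligations for its provider.
    print(presumed_systemic_risk(5e25))  # True
    print(presumed_systemic_risk(1e24))  # False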

Governance

  • The regulation sets up several governing bodies, active from 2 August 2025:
    • national competent authorities that will oversee and enforce the rules for AI systems;
    • an AI Office within the European Commission that will coordinate the coherent application of the common rules across the EU and act as regulator for general-purpose AI models.
  • The EU Member States and the AI Office will cooperate closely on an AI Board, comprising Member States’ representatives, to ensure the consistent and effective application of the regulation.
  • The regulation sets up two advisory bodies for the AI Office and the AI Board:
    • a scientific panel of independent experts to provide scientific advice;
    • an advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission.

Penalties

The fines for infringements are set as a percentage of the offending company’s annual turnover or a predetermined amount, whichever is higher. Small and medium-sized enterprises and start-ups are subject to proportional administrative fines.
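
As a worked example of the "whichever is higher" rule, using the ceiling the regulation sets for prohibited practices (7% of worldwide annual turnover or EUR 35 million); the helper function itself is just an illustration:

    def max_fine(annual_turnover_eur: float,
                 pct_cap: float = 0.07,
                 fixed_cap_eur: float = 35_000_000) -> float:
        """Upper bound of the fine: a percentage of worldwide annual
        turnover or a fixed amount, whichever is higher. The 7% / EUR 35
        million pair is the ceiling for prohibited practices; lower
        ceilings apply to other infringements."""
        return max(pct_cap * annual_turnover_eur, fixed_cap_eur)

    # A company with EUR 1 billion turnover: 7% = EUR 70 million, which
    # exceeds EUR 35 million, so the turnover-based ceiling applies.
    print(max_fine(1_000_000_000))  # 70000000.0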

Transparency and protecting fundamental rights

Increased transparency applies to the development and use of high-risk AI systems:

  • before a high-risk AI system is deployed by entities providing public services, its fundamental rights impact must be assessed;
  • high-risk AI systems and the entities using them must be registered in an EU database (a hypothetical record sketch follows this list).
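
Purely as a hypothetical illustration of what such a database entry might carry – the field names below are assumptions, while the regulation's annexes define the information actually required – a minimal registration record could look like:

    # Hypothetical sketch of a registration record for the EU database.
    # Field names are illustrative assumptions, not the regulation's schema.
    registration_entry = {
        "provider": "Example Provider Ltd",
        "system_name": "example-recruitment-screener",
        "intended_purpose": "ranking job applications (high-risk: employment)",
        "risk_category": "high",
        "conformity_assessment": "completed",
        "deployer": "Example Employer SA",
    }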

Innovation

The regulation provides for an innovation-friendly legal framework and aims to promote evidence-based regulatory learning. It envisages AI regulatory sandboxes, enabling a controlled environment in which innovative AI systems can be developed, tested and validated, including in real-world conditions. Furthermore, the regulation allows real-world testing of high-risk AI systems under certain conditions.

Evaluation and review

The Commission assesses the need for amendments to the list of high-risk uses of AI and the list of prohibited practices every year. By 2 August 2028, and every four years thereafter, the Commission will evaluate and report on the following:

  • adding to or extending the list of high-risk categories;
  • amendments to the list of AI systems requiring additional transparency measures;
  • amendments to improve supervision and governance.

FROM WHEN DOES THE REGULATION APPLY?

The regulation applies from 2 August 2026. However, there are some exceptions:

  • the prohibitions, definitions and obligations regarding AI literacy have applied since 2 February 2025;
  • some rules will take effect on 2 August 2025, including those on governance structure, penalties, and obligations for providers of general-purpose AI models.

MAIN DOCUMENT

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024).
