
AI Act

What is the AI Act?

Negotiations on the rules for dealing with AI took a good three years, but an agreement has now been reached. The AI Act is the world's first comprehensive set of rules for artificial intelligence. It is intended to create legal certainty for all economic players involved in the private and public sectors (providers and deployers of AI systems, product manufacturers, authorised representatives, importers and distributors). The AI Act is intended to promote the introduction of human-centred and trustworthy AI systems while ensuring a high level of protection for health, safety and fundamental rights, including democracy, the rule of law and protection of the environment.

Text of the AI Act

What has happened so far at EU level?

  • With the White Paper on Artificial Intelligence on 19 February 2020, the first concepts regarding the development of AI in the European Union were presented
  • On 21 April 2021, the Commission followed with its official proposal for a regulation
  • Trilogue negotiations between the EU Parliament, Council and Commission led to a political agreement on 9 December 2023
  • On 13 March 2024, the EU Parliament passed the AI Act
  • On 21 May 2024, the Council of the European Union adopted the AI Act

Next steps

The AI Act is in the final phase of the ordinary legislative procedure. The EU Parliament adopted it on 13 March 2024, and the Council of the EU ("Council of Ministers") adopted it on 21 May 2024. Publication in the Official Journal of the EU will follow.

The AI Act will come into force in 2024. The obligations will apply in stages.

Time Frame of the AI Act

Objectives of the AI Act

The AI Act was adopted with the aim of establishing a standardised legal framework for aspects relating to AI systems. It provides for:


  • Harmonised rules for the placing on the market, putting into service and use of AI systems in the Union

The AI Act is an EU regulation, which means that the same rules apply throughout the EU. In the harmonised areas, the individual member states are prohibited from adopting national regulations; they may only legislate where the AI Act expressly permits it. Harmonised rules also ensure the free movement of AI-based goods and services across national borders.


  • Prohibitions of certain AI practices

The AI Act follows a risk-based approach and divides AI systems and practices into four groups. The first group covers prohibited AI practices: certain possible applications of AI systems have too great a potential for harm to health, physical integrity and fundamental rights and are therefore banned. The prohibited practices are enumerated in an exhaustive list. These include:

  • AI systems that manipulate human behaviour in order to circumvent human free will;
  • AI that is used to exploit people's weaknesses (due to their age, disability, social or economic situation);
  • Biometric categorisation systems to draw conclusions about sensitive information (e.g. political, religious or philosophical beliefs, sexual orientation, race);
  • Ratings based on social behaviour or personal characteristics (social scoring);
  • The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (with some exceptions, such as the targeted search for certain victims or missing children, or for perpetrators of certain offences);
  • Risk assessment systems that use profiling to assess the risk or predict whether a natural person will commit a criminal offence (predictive policing);
  • Untargeted reading of facial images from the Internet or from surveillance footage to create facial recognition databases;
  • Emotion recognition in the workplace and in educational institutions.


  • Specific requirements for high-risk AI systems and obligations for operators of such systems

The second group of risk-based rules covers high-risk AI systems (also known as high-risk AI, see Annexes I and III of the AI Act). Although these AI systems have a high risk potential, they are not prohibited. Instead, specific requirements are placed on the AI systems themselves and obligations are imposed on the operators of such systems.

The obligations include, among others:

  • Establishment of risk management systems;
  • Fulfilment of data governance requirements;
  • Technical documentation obligations;
  • Record-keeping obligations;
  • Transparency obligations in relation to users;
  • Sufficient implementation of human oversight measures;
  • Ensuring accuracy, robustness and cybersecurity.


  • Harmonised transparency rules for certain AI systems

Certain AI systems pose only a low risk, which is why, in line with the risk-based regulatory approach, the requirements for high-risk systems do not apply to them; however, transparency requirements must be complied with. For example, when a chatbot is used, users must be informed that they are communicating with an AI system and not with a human.
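By way of illustration only, the following minimal Python sketch shows one way such a disclosure could be surfaced before a chatbot's first reply. The wording of the notice and all function names are assumptions made for this example, not requirements or terminology taken from the AI Act.

```python
# Illustrative sketch only: showing an AI disclosure before a chatbot's first reply.
# The notice text and function names are hypothetical, not prescribed by the AI Act.

from typing import Callable, List

AI_DISCLOSURE = (
    "Please note: you are chatting with an AI-based assistant, not a human agent."
)

def start_conversation(generate_reply: Callable[[str], str],
                       first_user_message: str) -> List[str]:
    """Return the opening exchange, placing the disclosure before any AI reply."""
    transcript = [AI_DISCLOSURE]                      # transparency notice comes first
    transcript.append(generate_reply(first_user_message))
    return transcript

if __name__ == "__main__":
    # Placeholder reply function standing in for a real chatbot backend.
    echo_bot = lambda msg: f"You said: {msg}"
    for line in start_conversation(echo_bot, "What services do you offer?"):
        print(line)
```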


  • Harmonised rules for the placing on the market of general purpose AI models

Due to market developments relating to AI in the meantime, rules on the placing on the market of general-purpose AI models were also introduced during the negotiations.


  • Rules on market monitoring, market surveillance governance and enforcement

After AI systems have been placed on the market, they should continue to be monitored by market surveillance authorities. It should be possible to detect malfunctions of an AI system. Users should also be able to report violations of the AI Act. Market surveillance and enforcement should be carried out by national market surveillance authorities and by the AI Office that has already been established at EU level.


  • Measures to support innovation with a particular focus on SMEs, including start-ups.

In order not to restrict the further development of AI, the AI Act also lays down rules to promote innovation. AI systems should be able to be developed and further developed in test environments (so-called "sandboxes"). A distinction is made between regulatory and operational sandboxes.

Other relevant legal acts

Even before and during the negotiations on the AI Act, other legal acts that are important for the application of the AI Act and for AI systems were adopted at EU level or are still being negotiated.

Other relevant legal acts (extract):