
Risk levels of AI systems

AI systems and their categorisation into four risk levels

The AI Act pursues a risk-based approach in order to introduce a proportionate and effective binding set of rules for AI systems. AI systems are categorised according to their risk potential as posing unacceptable, high, limited or minimal risk. The AI Act defines risk as "the combination of the probability of harm and the severity of that harm" to public interests (health, safety and fundamental rights, including democracy, the rule of law and environmental protection) and individual interests. Harm can be material or immaterial in nature and covers physical, psychological, social or economic harm.

General-purpose AI models (GPAI) occupy a special position in this categorisation.

Infographic: AI systems are categorised according to their risk potential as unacceptable, high, limited and minimal/no risk. © RTR (CC BY 4.0)
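Purely as an illustration of this four-tier structure, the following minimal Python sketch maps each risk level to the main regulatory consequence described in this text. The enum and the one-line summaries are simplifications for orientation only, not legal definitions.

from enum import Enum

class RiskLevel(Enum):
    """Illustrative summary of the AI Act's four risk levels (not a legal classification tool)."""
    UNACCEPTABLE = "prohibited practice (Article 5 AI Act)"
    HIGH = "permitted only in compliance with the requirements for high-risk AI systems (Article 6 ff. AI Act)"
    LIMITED = "transparency obligations (Article 50 AI Act)"
    MINIMAL = "no specific obligations; voluntary codes of conduct"

# Print a compact overview of the four levels and their consequences.
for level in RiskLevel:
    print(f"{level.name}: {level.value}")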

Unacceptable risk

Some practices involving AI systems pose too high a risk, in terms of both the probability of harm occurring and the severity of that harm, to individual or public interests, which is why they are prohibited.

According to Article 5 AI Act, these include:

  • AI systems that manipulate human behaviour in order to circumvent human free will;
  • AI systems that are used to exploit people's weaknesses (due to their age, disability, social or economic situation);
  • AI systems that make assessments of natural persons based on social behaviour or personal characteristics (social scoring);
  • Risk assessment systems that use profiling to assess the risk or predict whether a natural person will commit a criminal offence (predictive policing);
  • Untargeted extraction of facial images from the internet or CCTV footage to create facial recognition databases;
  • Emotion recognition in the workplace and in educational institutions (with the exception of AI systems intended for medical, e.g. therapeutic, or safety purposes);
  • Biometric categorisation systems used to infer sensitive information (e.g. political, religious or philosophical beliefs, sexual orientation, race);
  • Use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (with some exceptions, such as searching for certain victims or missing children, perpetrators of certain offences, etc.).

High-risk AI systems

As the name suggests, high-risk AI systems under Article 6 AI Act pose a high risk, in terms of the probability of harm occurring and the severity of that harm, to individual or public interests. However, high-risk AI systems are not prohibited per se; placing them on the market or putting them into service is only permitted in compliance with certain requirements. Such AI systems are listed, among others, in Annexes I and III of the AI Act:

Annex I - the AI system is itself a product, or is intended to be used as a safety component of a product, in the following areas regulated by Union law:

Section A - List of Union harmonisation legislation based on the New Legislative Framework

Section B - List of other Union harmonisation legislation

Annex III - AI systems in the following areas of use:

  • Biometrics, in so far as their use is permitted under relevant Union or national law;
  • Safety components in critical infrastructure;
  • Education and vocational training;
  • Employment, workers management and access to self-employment;
  • Access to and enjoyment of essential private services and essential public services and benefits;
  • Law enforcement, in so far as their use is permitted under relevant Union or national law;
  • Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law;
  • Administration of justice and democratic processes.

Critical Infrastructure and safety components

Infographic: Critical infrastructure under the AI Act - the relationship between the AI Act and the Critical Entities Resilience Directive, shown as a Venn diagram. © RTR (CC BY 4.0)

AI systems used in critical infrastructure may be considered high-risk AI systems under Article 6(2) in conjunction with Annex III, point 2 of the AI Act (AIA). Specifically, an AI system is classified as a high-risk AI system if it is used as a safety component in the management and operation of:

  • critical digital infrastructure,
  • road traffic, or
  • the supply of water, gas, heating or electricity.

For the assessment, the key questions are what constitutes "critical infrastructure" and what qualifies as a "safety component."

In general, critical infrastructure is part of a critical entity. According to the definition in Article 3(62) AIA, the term "critical infrastructure" refers to Article 2(4) of Directive (EU) 2022/2557 ("Directive on the Resilience of Critical Entities," "CER"). According to Articles 2(1), 2(4), and 2(5) CER, a "critical entity"—whether a public or private entity—must be designated as such by the respective Member State.

The corresponding "critical infrastructure" includes:

  • Assets, facilities, equipment, networks, or systems, or
  • Parts of an asset, facility, equipment, network, or system,
  • that are necessary for the provision of an essential service. 

A service is deemed essential if it is crucial for:

  • The maintenance of vital societal functions,
  • Critical economic activities,
  • Public health and safety, or
  • The environment.

Article 2 of the Delegated Regulation 2023/2450 of the European Commission, issued pursuant to CER, provides a non-exhaustive list of essential services.

The term "safety component" is defined in Article 3(14) AIA as follows:

A component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property.

Regarding critical infrastructure, the co-legislators provide further clarification in Recital 55 AIA:

(55) […] Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or the health and safety of persons and property but which are not necessary in order for the system to function. The failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of safety components of such critical infrastructure may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres.


Whether an exception applies under one of the grounds for exemption outlined in Article 6(3) AIA will depend on the specific circumstances of each case.
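To make the assessment steps outlined above easier to follow, the following deliberately simplified Python sketch strings together the checklist discussed in this section: critical infrastructure of an entity designated under the CER, an area listed in Annex III, point 2, a safety component within the meaning of Article 3(14) AIA, and no exemption under Article 6(3) AIA. The class, field names and area labels are illustrative assumptions; the actual classification always requires a case-by-case legal assessment.

from dataclasses import dataclass

# Areas named in Annex III, point 2 AIA (labels chosen here for illustration only).
ANNEX_III_POINT_2_AREAS = {
    "critical digital infrastructure",
    "road traffic",
    "water supply",
    "gas supply",
    "heating supply",
    "electricity supply",
}

@dataclass
class CriticalInfrastructureUseCase:
    """Simplified description of an AI system used in critical infrastructure."""
    area: str                                  # one of the areas listed above
    part_of_designated_critical_entity: bool   # critical infrastructure of an entity designated under the CER
    is_safety_component: bool                  # fulfils a safety function, Article 3(14) AIA
    article_6_3_exemption_applies: bool        # to be assessed case by case

def is_high_risk_under_annex_iii_point_2(uc: CriticalInfrastructureUseCase) -> bool:
    """Rough sketch of the checklist described above; not a substitute for legal assessment."""
    return (
        uc.part_of_designated_critical_entity
        and uc.area in ANNEX_III_POINT_2_AREAS
        and uc.is_safety_component
        and not uc.article_6_3_exemption_applies
    )

# Example inspired by Recital 55 AIA: a fire alarm controlling system in a cloud computing centre.
example = CriticalInfrastructureUseCase(
    area="critical digital infrastructure",
    part_of_designated_critical_entity=True,
    is_safety_component=True,
    article_6_3_exemption_applies=False,
)
print(is_high_risk_under_annex_iii_point_2(example))  # True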

AI systems with "limited" risk

AI systems with "limited" risk are AI systems whose risk can be minimised through transparency. Such AI systems are not prohibited; providers and deployers are mainly subject to transparency obligations, such as informing persons that they are interacting with an AI system or that content has been artificially generated. AI systems with "limited" risk include the following systems in accordance with Article 50 AI Act:

  • AI systems that interact directly with natural persons (e.g. chatbots);
  • AI systems that generate or manipulate image, audio, text or video content, also known as generative AI (to be distinguished from deepfakes used to manipulate human behaviour, which are prohibited);
  • Use of biometric categorisation and emotion recognition systems (to be distinguished from the prohibited uses of such systems described above).

AI systems with "minimal" or no risk

All other AI systems are classified as posing "minimal" or no risk. They are not subject to any specific obligations under the AI Act; adherence to codes of conduct is recommended but voluntary.
