Risk levels of AI systems

AI systems and their categorisation into four risk levels

The AI Act pursues a risk-based approach in order to introduce a proportionate and effective binding set of rules for AI systems. AI systems are categorised according to their risk potential as posing unacceptable, high, limited, or minimal risk. The AI Act defines risk as "the combination of the probability of harm and the severity of that harm" to public interests (health, safety, and fundamental rights, including democracy, the rule of law, and environmental protection) and individual interests. Harm can be material or immaterial in nature and covers physical, psychological, social, and economic harm.
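The Act defines risk conceptually rather than numerically. Purely as an illustration of how such a "probability combined with severity" notion is often operationalised in risk management (this is not a method prescribed by the AI Act, and the Probability and Severity scales below are hypothetical), a minimal Python sketch could look like this:

    from enum import Enum

    class Probability(Enum):
        # Hypothetical three-step scale; the AI Act prescribes no such scale.
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    class Severity(Enum):
        # Hypothetical three-step scale for the severity of harm.
        MINOR = 1
        SIGNIFICANT = 2
        SEVERE = 3

    def risk_score(probability: Probability, severity: Severity) -> int:
        """Combine probability and severity of harm into a single score.

        Illustrative only: under the AI Act, categorisation follows the
        use cases listed in the Act, not a computed score.
        """
        return probability.value * severity.value

Under the AI Act itself, no such score is calculated; the legislator has already assigned use cases to the four levels described below.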

General-purpose AI models (GPAI) occupy a special position in this categorisation.

Infographic: AI systems are categorised according to their risk potential as unacceptable, high, limited, and minimal/no risk. © RTR (CC BY 4.0)

Unacceptable risk

Some practices in connection with AI systems pose too high a risk, in terms of both the probability of harm occurring and the severity of that harm to individual or public interests, and are therefore prohibited.

According to Article 5 AI Act, these include:

  • AI systems that manipulate human behaviour in order to circumvent human free will;
  • AI systems that are used to exploit people's weaknesses (due to their age, disability, social or economic situation);
  • AI systems that make assessments of natural persons based on social behaviour or personal characteristics (social scoring);
  • Risk assessment systems that use profiling to assess the risk or predict whether a natural person will commit a criminal offence (predictive policing);
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • Emotion recognition in the workplace and in educational institutions (with the exception of AI systems for medical [e.g. therapeutic use] or security purposes);
  • Biometric categorisation systems used to deduce or infer sensitive information (e.g. political, religious or philosophical beliefs, sexual orientation, race);
  • Use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (with narrow exceptions, such as the targeted search for victims of certain crimes or missing children, or for perpetrators of certain offences).

High-risk AI systems

As the name suggests, high-risk AI systems according to Article 6 AI Act pose a high risk in terms of the probability of harm occurring and the severity of that harm to individual or public interests. However, high-risk AI systems are not prohibited per se; placing them on the market or putting them into service is only permitted in compliance with certain requirements. Such AI systems are listed, among others, in Annexes I and III of the AI Act:

Annex I - the AI system is itself a product, or is intended to be used as a safety component of a product, in areas regulated by the Union harmonisation legislation listed there (e.g. machinery, toys, medical devices).

Annex III - AI systems in the following areas of use:

  • Biometrics, in so far as their use is permitted under relevant Union or national law;
  • Critical infrastructure;
  • Education and vocational training;
  • Employment, workers' management and access to self-employment;
  • Access to and enjoyment of essential private services and essential public services and benefits;
  • Law enforcement, in so far as their use is permitted under relevant Union or national law;
  • Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law;
  • Administration of justice and democratic processes.

AI systems with "limited" risk

AI systems with "limited" risk are those whose risk can be mitigated through transparency. Such AI systems are not prohibited; providers and deployers are mainly subject to transparency obligations, such as informing persons that they are interacting with an AI system or that content has been artificially generated (a minimal illustration follows the list below). In accordance with Article 50 AI Act, AI systems with "limited" risk include:

  • AI systems that interact directly with natural persons (e.g. chatbots);
  • AI systems that generate or manipulate image, audio, text or video content, also known as generative AI (to be distinguished from AI systems used to manipulate human behaviour, which are prohibited);
  • Use of biometric categorisation and emotion recognition systems (these must be distinguished from the prohibited uses under Article 5, such as emotion recognition in the workplace).
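As a minimal sketch of what such transparency obligations could look like in practice (the function names and the wording of the notices are hypothetical and not prescribed by the AI Act):

    def start_chat_session() -> str:
        # Article 50 AI Act: natural persons must be informed that they
        # are interacting with an AI system (unless this is obvious).
        return "Please note: you are chatting with an AI assistant."

    def label_generated_content(content: str) -> str:
        # Artificially generated or manipulated content must be disclosed
        # as such; the marker format here is purely illustrative.
        return f"[AI-generated] {content}"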

AI systems with "minimal" or no risk

All other AI systems are classified as posing "minimal" or no risk. They are not subject to any specific obligations under the AI Act; adherence to voluntary codes of conduct is encouraged.
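The four levels and their main legal consequences can be summarised as follows. This is a purely illustrative sketch; the RiskLevel names and the obligation summaries are paraphrases of this text, not legal wording:

    from enum import Enum, auto

    class RiskLevel(Enum):
        UNACCEPTABLE = auto()  # Article 5: prohibited practices
        HIGH = auto()          # Article 6, Annexes I and III: permitted subject to requirements
        LIMITED = auto()       # Article 50: transparency obligations
        MINIMAL = auto()       # no specific obligations; voluntary codes of conduct

    # Paraphrased summary of the main legal consequence per level.
    OBLIGATIONS = {
        RiskLevel.UNACCEPTABLE: "prohibited",
        RiskLevel.HIGH: "requirements before placing on the market or putting into service",
        RiskLevel.LIMITED: "transparency obligations towards the persons concerned",
        RiskLevel.MINIMAL: "no specific obligations; codes of conduct recommended",
    }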
