The AI Act pursues a risk-based approach in order to introduce a proportionate and effective binding set of rules for AI systems. AI systems are categorised according to their risk potential as posing unacceptable, high, limited, or minimal risk. The AI Act defines risk as "the combination of the probability of harm and the severity of that harm" to public interests (health, safety and fundamental rights, including democracy, the rule of law and environmental protection) and to individual interests. The harm may be material or immaterial in nature and covers physical, psychological, societal or economic damage.
General-purpose AI models (GPAI models) occupy a special position in this categorisation.
Some practices in connection with AI systems pose too high a risk in terms of the probability of harm occurring and the extent of harm to individual or public interests, which is why they are prohibited.
According to Article 5 AI Act, these include:
As the name suggests, high-risk AI systems under Article 6 AI Act pose a high risk in terms of the probability of harm occurring and the severity of that harm to individual or public interests. However, high-risk AI systems are not prohibited per se; placing them on the market or putting them into service is only permitted subject to compliance with certain requirements. Such AI systems are listed in Annexes I and III of the AI Act and include, among others:
Annex I - the AI system is itself a product, or is intended to be used as a safety component of a product, in the following areas regulated by Union harmonisation legislation:
Section A - List of Union harmonisation legislation based on the New Legislative Framework:
Section B - List of other Union harmonisation legislation:
Annex III - AI systems classified as high-risk depending on their area of use:
AI systems used in critical infrastructure may be considered high-risk AI systems under Article 6(2) in conjunction with Annex III, Item 2 of the AI Act (AIA). Specifically, an AI system is classified as a high-risk AI system if it is used as a safety component:
For the assessment, the key questions are what constitutes "critical infrastructure" and what qualifies as a "safety component."
In general, critical infrastructure is part of a critical entity. According to the definition in Article 3(62) AIA, the term "critical infrastructure" refers to Article 2(4) of Directive (EU) 2022/2557 ("Directive on the Resilience of Critical Entities," "CER"). According to Articles 2(1), 2(4), and 2(5) CER, a "critical entity"—whether a public or private entity—must be designated as such by the respective Member State.
The corresponding "critical infrastructure" includes:
A service is deemed essential if it is crucial for:
Article 2 of Commission Delegated Regulation (EU) 2023/2450, issued pursuant to the CER Directive, provides a non-exhaustive list of essential services.
The term "safety component" is defined in Article 3(14) AIA as follows:
A component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property.
Regarding critical infrastructure, the co-legislators provide further clarification in Recital 55 AIA:
(55) […] Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or the health and safety of persons and property but which are not necessary in order for the system to function. The failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of safety components of such critical infrastructure may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres.
Whether an exception applies under one of the grounds for exemption outlined in Article 6(3) AIA will depend on the specific circumstances of each case.
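To make the steps above easier to follow, the following minimal sketch translates the Annex III, point 2 analysis into a first-pass screening: safety-component status (Article 3(14) AIA), the cybersecurity-only carve-out from Recital 55 AIA, and a placeholder for the Article 6(3) exemptions. The data structure, field names and yes/no questions are illustrative assumptions made for this sketch, not terms of the AI Act, and such a screening cannot replace the case-by-case legal assessment.

```python
# Illustrative sketch only: a simplified pre-screening of the Annex III, point 2
# analysis described above. All field names and questions are assumptions made
# for illustration; the legal classification depends on the circumstances of
# each individual case.
from dataclasses import dataclass


@dataclass
class CriticalInfraUseCase:
    is_safety_component: bool          # fulfils a safety function (Art. 3(14) AIA)
    protects_physical_integrity: bool  # directly protects infrastructure or persons (Recital 55 AIA)
    cybersecurity_only: bool           # intended to be used solely for cybersecurity purposes
    art_6_3_exemption_applies: bool    # Art. 6(3) AIA exemption; case-by-case assessment


def likely_high_risk(use_case: CriticalInfraUseCase) -> bool:
    """Rough first-pass screening; not a substitute for legal analysis."""
    if use_case.cybersecurity_only:
        # Recital 55 AIA: cybersecurity-only components should not qualify as safety components.
        return False
    if not (use_case.is_safety_component and use_case.protects_physical_integrity):
        return False
    # Safety component of critical infrastructure: presumptively high-risk,
    # unless one of the Article 6(3) AIA exemptions applies.
    return not use_case.art_6_3_exemption_applies


# Example mirroring Recital 55 AIA: a system monitoring water pressure.
water_pressure_monitor = CriticalInfraUseCase(
    is_safety_component=True,
    protects_physical_integrity=True,
    cybersecurity_only=False,
    art_6_3_exemption_applies=False,
)
print(likely_high_risk(water_pressure_monitor))  # True
```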
AI systems with "limited" risk are AI systems whose risk can be minimised through transparency. Such AI systems are not prohibited; providers and deployers are mainly subject to transparency obligations, such as informing persons that they are interacting with an AI system or that content has been artificially generated. AI systems with "limited" risk include the following systems in accordance with Article 50 AI Act:
All other AI systems are classified as posing "minimal" or no risk. They are not subject to any specific obligations under the AI Act; compliance with voluntary codes of conduct is encouraged but not mandatory.
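For orientation, the risk tiers discussed in this section can be summarised in a short illustrative sketch. The tier names and the one-line obligation summaries are simplifications chosen here for readability; the precise obligations follow solely from the provisions of the AI Act cited above.

```python
# Illustrative summary of the AI Act's risk tiers as described in this section.
# The one-line summaries are simplifications for orientation only.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practices (Art. 5 AI Act)"
    HIGH = "permitted only subject to the high-risk requirements (Art. 6 et seq. AI Act)"
    LIMITED = "mainly transparency obligations (Art. 50 AI Act)"
    MINIMAL = "no specific obligations; voluntary codes of conduct"


for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```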
Further Links