AI systems are developing rapidly and increasingly affect sensitive areas such as health, security and fundamental rights. With the AI Act, the EU is introducing comprehensive legal rules to minimise the risks in these areas. The AI Act is also intended to protect the rule of law, democracy and the environment.
Another aim of the AI Act is to create standardised rules for affected parties throughout the EU. It also seeks to eliminate legal uncertainty in order to encourage companies to pursue progress and innovation through artificial intelligence.
The AI Act was adopted as a regulation and is therefore directly applicable in the Member States; it regulates both the private and the public sector. Companies inside and outside the EU are affected if they place AI systems on the market in the EU or if individuals in the EU are affected by those systems. This ranges from providers of tools that merely utilise artificial intelligence to developers of high-risk AI systems.
The categorisation depends on the intended purpose and the application modalities of the AI system. The AI Act lists the prohibited practices (Article 5) and the use cases of high-risk AI systems (Annexes I and III). The EU Commission is authorised to extend the list of high-risk AI systems, taking market and technological developments into account and ensuring consistency. AI systems that carry out profiling, i.e. that create personality profiles of natural persons, are always considered high-risk.
The natural or legal person, authority, institution or other body that develops a high-risk AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark is the provider and bears the most extensive obligations. Providers must ensure that the requirements placed on high-risk AI systems are met. The obligations include, among others:
Following its adoption by the European Parliament and the Council, the AI Act will enter into force on the twentieth day following its publication in the Official Journal. It will then be fully applicable 24 months after its entry into force, in accordance with the following staged procedure:
Each Member State shall establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purposes of this Regulation. Those national competent authorities shall exercise their powers independently, impartially and without bias.
In addition, the Commission has set up a new European AI Office within the Commission to monitor general-purpose AI models.
There will also be an AI Board, a Scientific Panel and an Advisory Forum, which will have an advisory and supportive function.
The AI Service Desk at RTR acts as a point of contact and information hub and is available to the Austrian AI ecosystem in preparation for the European AI Act. The following tasks are at the centre of this:
We recommend that you make use of the information offered by the AI Service Desk at RTR.
If AI systems that do not comply with the requirements of the Regulation are placed on the market or put into service, Member States must lay down effective, proportionate and dissuasive penalties, including fines, and notify them to the Commission.
Certain threshold values for fines are defined in the Regulation for this purpose:
- up to EUR 35 million or 7 % of total worldwide annual turnover, whichever is higher, for infringements of the prohibited AI practices;
- up to EUR 15 million or 3 % for non-compliance with other obligations under the Regulation;
- up to EUR 7.5 million or 1 % for supplying incorrect, incomplete or misleading information to the authorities.
In order to harmonise national rules and procedures for setting fines, the Commission will draw up guidelines based on recommendations of the AI Board.
As the EU institutions, bodies, offices and agencies should lead by example, they will also be subject to the rules and possible sanctions. The European Data Protection Supervisor will be authorised to impose fines on them.
The AI Act provides for the right of natural and legal persons to lodge a complaint with a national authority. On this basis, national authorities can initiate market surveillance in accordance with the procedures of the market surveillance regulations.
In addition, the proposed AI Liability Directive aims to provide individuals seeking compensation for harm caused by high-risk AI systems with effective means to identify potentially liable persons and to secure relevant evidence for a claim for compensation. To this end, the proposed Directive provides for the disclosure of evidence of certain high-risk AI systems suspected of having caused harm.
In addition, the Product Liability Directive, which is currently being revised, will ensure that persons in the Union who suffer death, personal injury or material damage caused by a defective product receive compensation. It clarifies that AI systems and products containing AI systems are also covered by the existing rules.
The AI Act does not require the appointment of an AI officer or an AI legal representative. However, regardless of the risk level, it obliges providers and deployers of AI systems to take measures to ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy.
The legal definition in the AI Act is decisive for the legal treatment of artificial intelligence: it is the gateway to the Regulation's scope of application. The definition reads as follows:
"'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." (Article 3(1) AI Act)
The intention of the Union legislator is not to cover simpler traditional software applications or programming approaches that are based exclusively on rules defined by natural persons for the automatic execution of processes.
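To make the distinction tangible, the following sketch contrasts a purely rule-based program, whose every decision rule is written down by a person, with a system whose decision boundary is inferred from example data. This is an illustrative simplification for the legal distinction described above, not legal advice; the function names and the toy learning rule are invented for this example.

```python
# Rule-based software: the decision rule is explicitly authored by a
# natural person and executes automatically. The system infers nothing.
def rule_based_credit_check(income: float, debts: float) -> bool:
    return income - debts > 1000  # fixed, human-defined threshold


# Data-driven system: the threshold is *inferred* from labelled examples
# rather than defined by a person (a deliberately tiny toy learner).
def train_threshold(samples):
    # samples: list of (net_income, accepted) pairs
    accepted = [x for x, ok in samples if ok]
    rejected = [x for x, ok in samples if not ok]
    # learn the midpoint between the highest rejected and lowest accepted value
    return (min(accepted) + max(rejected)) / 2


threshold = train_threshold([(500, False), (900, False), (1200, True), (2000, True)])

def learned_credit_check(income: float, debts: float) -> bool:
    return income - debts > threshold
```

The first function would fall outside the definition quoted above because its behaviour follows exclusively from human-defined rules; the second illustrates, in miniature, a system that infers how to generate an output from the input it receives.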
Generative AI refers to AI systems that can generate new content, including text, audio and images, based on user input. Owing to this wide range of capabilities, such AI systems are used in a great variety of contexts, for example for translations, answering questions and chatbots.
In IT, the English term "prompt" traditionally describes a cue for the user to make an input; in generative AI, it denotes that input itself. Generative AI works by entering prompts: to generate an image, text or video (the output), the AI system needs an input. Depending on the AI system, a prompt can be text-, image- or audio-based. A text-based prompt can consist of words, special characters and numbers, e.g. "A picture of 3 cats sitting on the windowsill sleeping."
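The input/output mechanics described above can be sketched as follows. The sketch shows how a text prompt might be packaged as a request to a generative AI service; the field names, the task label and the `build_image_request` helper are invented for illustration, as real APIs differ from provider to provider.

```python
import json


def build_image_request(prompt: str, size: str = "1024x1024") -> str:
    """Package a text-to-image prompt as a JSON request (hypothetical schema)."""
    # A text-based prompt may contain words, numbers and special characters.
    request = {
        "task": "text-to-image",  # the kind of output the system should generate
        "prompt": prompt,         # the user input the system works from
        "size": size,             # an additional generation parameter
    }
    return json.dumps(request)


payload = build_image_request("A picture of 3 cats sitting on the windowsill sleeping.")
```

The `payload` string is what would then be sent to the generative AI system, which produces the output (here, an image) from the prompt it contains.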
The importance of prompts has already led to the development of prompt marketplaces.