The AI Service Desk at RTR serves as a point of contact and information hub for the Austrian AI ecosystem in preparation for the European AI Act. Its work centres on the following tasks:
We recommend that you make use of the information offered by the AI Service Desk at RTR.
AI systems are developing rapidly and also affect sensitive areas such as health, security and fundamental rights. With the AI Act, the EU is introducing comprehensive legal rules to minimise the risks in these areas. The AI Act is also intended to protect the rule of law, democracy and the environment.
Another aim of the AI Act is to create standardised rules for affected parties throughout the EU. It also aims to eliminate legal uncertainties in order to motivate companies to participate in progress and innovation through artificial intelligence.
The AI Act was adopted as a regulation and is therefore directly applicable in the Member States; it covers both the private and public sectors. Companies inside and outside the EU are affected if they place AI systems on the market in the EU or if individuals in the EU are affected by such systems. This ranges from providers of tools that merely make use of artificial intelligence to developers of high-risk AI systems.
Following its adoption by the European Parliament and the Council, the AI Act will enter into force on the twentieth day following its publication in the Official Journal. It will then be fully applicable 24 months after its entry into force, in accordance with the following staged procedure:
More on the time frame of the AI Act
More on the risk levels of AI systems
The categorisation depends on the intended purpose and the modalities of use of the AI system. The AI Act lists the prohibited practices as well as the use cases of high-risk AI systems (Annexes I and III). The EU Commission is authorised to extend the list of high-risk AI systems; in doing so, it takes market and technological developments into account and ensures consistency. AI systems that carry out profiling, i.e. the creation of personality profiles of natural persons, are always considered high-risk.
The natural or legal person, authority, institution or other body that develops a high-risk AI system, or has it developed, and places it on the market or puts it into service under its own name or trademark has the most extensive obligations. They must ensure that the requirements placed on high-risk AI systems are met. The obligations include, among others:
More on the provider obligations
Each Member State shall establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purposes of this Regulation. Those national competent authorities shall exercise their powers independently, impartially and without bias.
In addition, a new European AI Office has been set up within the Commission to monitor general-purpose AI models.
There will also be an AI Board, a Scientific Panel and an Advisory Forum, which will have an advisory and supportive function.
More on the authorities and bodies at EU-level
For AI systems that are placed on the market or put into service without complying with the requirements of the Regulation, Member States must lay down effective, proportionate and dissuasive penalties, including fines, and notify them to the Commission.
Certain thresholds are defined in the Regulation for this purpose:
In order to harmonise national rules and practices for setting fines, the Commission will draw up guidelines based on the recommendations of the AI Board.
As the EU institutions, bodies, offices and agencies should lead by example, they will also be subject to the rules and possible sanctions. The European Data Protection Supervisor will be authorised to impose fines on them.
The AI Act provides for the right of natural and legal persons to lodge a complaint with a national authority. On this basis, national authorities can initiate market surveillance in accordance with the procedures of the market surveillance regulations.
In addition, the proposed AI Liability Directive aims to provide individuals seeking compensation for harm caused by high-risk AI systems with effective means to identify potentially liable persons and to secure relevant evidence for a claim for compensation. To this end, the proposed Directive provides for the disclosure of evidence concerning certain high-risk AI systems suspected of having caused harm.
In addition, the Product Liability Directive, which is currently being revised, will ensure that persons who suffer death, personal injury or material damage in the Union as a result of a defective product receive compensation. It is clarified that AI systems and products containing AI systems are also covered by the existing rules.
The AI Act does not require the appointment of an AI officer or an AI legal representative. However, regardless of the risk level, it obliges providers and deployers of AI systems to take measures to ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy.
The legal definition in the AI Act is decisive for the legal treatment of artificial intelligence: it is the gateway to the Regulation's scope of application. The definition in Art. 3 no. 1 AIA reads as follows:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The intention of the Union legislator is not to cover simpler traditional software applications or programming approaches that are based exclusively on rules defined by natural persons for the automatic execution of processes.
Generative AI refers to AI systems that make it possible to generate new content, including text, audio and images, based on user input. Due to their wide range of applications, such AI systems are used in a wide variety of contexts, for example for translations, answering questions and chatbots.
In IT, the English term "prompt" traditionally describes a request to the user to make an input. Generative AI works by entering "prompts": to generate an image, text or video (output), the AI system needs an input. Depending on the AI system, a prompt can be text-, image- or audio-based. A text-based prompt can consist of words, special characters and numbers, e.g.: "A picture of 3 cats sitting on the windowsill sleeping."
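Purely as an illustration of this input/output flow, the following minimal Python sketch passes a text prompt to a stand-in function; "generate" is a mock placeholder, not the API of any real AI system:

```python
# Minimal illustration of the prompt -> output flow described above.
# "generate" is a stand-in (mock) for a real generative AI system;
# no actual model is called here.
def generate(prompt: str) -> str:
    """Stand-in for a generative model: takes a prompt (input), returns an output."""
    return f"[generated content for prompt: {prompt!r}]"

# A text-based prompt may combine words, numbers and special characters:
prompt = "A picture of 3 cats sitting on the windowsill sleeping."
print(generate(prompt))
```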
The importance of prompts has already led to the development of prompt marketplaces.
"Large Language Models" are computational linguistic language models that generate texts. In a given context, the next word is selected on the basis of a probability previously defined in the algorithm. These models are called "large" in relation to the scope of their training data and the number of parameters. To exaggerate, it is said that these models are trained with the "entire Internet".
LLMs are based on the so-called transformer model, a special type of artificial neural network; they therefore fall into the area of deep learning. The areas of application are diverse: creating texts, answering questions (chatbots, virtual assistants), generating code, creating content for marketing and websites, and translating between different languages, to name just a few possibilities. The best-known examples of large language models include the GPT model series from OpenAI, the Llama series from Meta and the Mistral series from Mistral AI.
Under certain circumstances, LLMs can be categorised as general-purpose AI models. In any case, it should be noted that they are not perfect: they require human supervision, as they can make mistakes or raise challenges in terms of ethics and fairness.
The amount of data required to train an AI model can vary greatly and depends on several factors. It is difficult to give general figures, as this depends heavily on the use case, the complexity of the model and the specific requirements of the project. However, some rough guidelines and examples can help to get a feel for it. A rule of thumb is that you need at least 10 to 100 times more training examples than model parameters to train a model that generalises well. For a simple task such as email spam filtering, where a classification into only a few classes takes place, a few hundred to a thousand emails may be sufficient to train the model.
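To make the rule of thumb concrete, the following short sketch applies the 10x to 100x factor to two arbitrary example parameter counts; the numbers are illustrative only, not recommendations for any particular task:

```python
# Rough back-of-the-envelope estimate based on the 10x-100x rule of thumb
# mentioned above. The parameter counts are arbitrary example values.
def estimated_examples(num_parameters: int, factor: int) -> int:
    """Estimate the number of training examples as factor * parameters."""
    return factor * num_parameters

for params in (1_000, 100_000):
    low = estimated_examples(params, 10)
    high = estimated_examples(params, 100)
    print(f"{params:>9,} parameters -> roughly {low:,} to {high:,} training examples")
```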
An example of a moderate task is the classification of handwritten digits with the MNIST dataset. The training dataset consists of 60,000 images of handwritten digits (0-9), plus a test dataset with 10,000 images.
Complex tasks can require data sets ranging in size from hundreds of thousands to millions of training data. One example of this is the classification of objects in high-resolution images using deep learning and neural networks. A concrete application example of this is the recognition and diagnosis of diseases on medical images such as X-rays, CT scans and MRI images. Considerably more training data is required by large language models (LLMs).
The Llama 3 model published by Meta as open source was trained on 15T tokens of text, i.e. 15 trillion tokens (T stands for "trillion", i.e. 1,000,000,000,000). Finally, petabytes of data are analysed during the development and testing of self-driving cars, as Tesla and NVIDIA, for example, have announced (1 petabyte (PB) equals 1,000,000 gigabytes (GB)).
As you can see, this question cannot be answered in general terms and always depends on the specific use case.
If an AI model generates information that is not based on training data or real facts, it is said to be "hallucinating". Such hallucinations are particularly familiar from large language models (LLMs). They can appear to be real, plausible answers, but are actually incorrect or unreliable. Depending on the context of the query, different forms of hallucinations can be distinguished.
A distinction is made depending on the point of reference: if the output contradicts the data that has been provided to the AI system, this concerns fidelity (faithfulness); if it contradicts real, verifiable knowledge, it concerns factuality. Hallucinations can occur for various reasons. If the training data is incomplete, incorrect or biased, the AI model can draw false conclusions. Generative AI models that make predictions from probabilities can hallucinate when they try to give logical or coherent answers.
However, since the most probable answer is not always the correct one, how can this problem be addressed? Factual hallucinations can be limited by retrieval-augmented generation (RAG). This involves adding an external knowledge source (e.g. a database) from which relevant documents or information are identified and extracted on the basis of the respective user query. This retrieval component forms the basis for the subsequent generation component of the RAG system. Other strategies include improved training data and the development of mechanisms to validate and verify generated information against external sources.
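As a highly simplified sketch of the retrieval step in a RAG system: a real implementation would use vector embeddings and a large language model, whereas here simple word overlap stands in for the similarity search and the knowledge source is a small list of invented example sentences:

```python
# Highly simplified RAG-style retrieval: select the entries from an external
# knowledge source that best match the user query and build an augmented
# prompt for the generation step. Word overlap stands in for the
# embedding-based similarity a real system would use.

KNOWLEDGE_BASE = [
    "The AI Act entered into force on 1 August 2024.",
    "The transparency obligations of the AI Act apply from 2 August 2026.",
    "RTR operates the AI Service Desk in Austria.",
]

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents with the largest word overlap with the query."""
    query_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query: str) -> str:
    """Combine the retrieved context with the user query (retrieval -> generation)."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

print(build_prompt("From when do the transparency obligations of the AI Act apply?"))
```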
Last but not least, awareness raising and user training are important. It is important to critically scrutinise the answers of an AI system and, if possible, validate them with reliable sources. This is especially true for important or sensitive information.
If a company uses an AI system offered on the market, i.e. the company is a deployer and not a provider within the meaning of the AI Act ("AIA"), the following applies: without prejudice to other transparency obligations under Union or national law, deployers of an AI system generating or manipulating image, audio or video content that constitutes a deep fake must disclose that the content has been artificially generated or manipulated, in accordance with Art. 50 para. 4 AIA.
A "deep fake" within the meaning of the AI Act is an AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful (see Art. 3 para. 60 AIA). For example, a video of a politically active person giving an interview that appears to be real but it is not the case.
If the images generated are not "deep fakes", you are not subject to any transparency obligations, i.e. you do not have to identify AI-generated images as such. There are also exceptions to the transparency obligation for artistic, creative, satirical or fictional depictions (Art. 50 para. 4 AIA).
The transparency obligations do not apply to AI-generated or manipulated texts if they have undergone human review or editorial control and a person bears editorial responsibility for their publication. If this is not the case and the texts are published to inform the public on matters of public interest, they must be disclosed as AI-generated.
If labelling is required, this must be done in a clear and unambiguous manner and must comply with the applicable accessibility requirements (see Art. 50 para. 5 AIA).
It should also be noted that the obligations of the AI Act apply gradually: although it came into force on 1 August 2024, there are transitional arrangements. The provisions on transparency obligations become mandatory on 2 August 2026.
According to Article 3(3) of the AI Act (AIA), a "provider" is defined as a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
"Putting into service" is defined in Article 3(11) AIA as the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.
If a provider develops or has developed an AI system or general-purpose AI model and puts it into service for own use, the provider obligations still apply. Any "naming" of the AI system is irrelevant.
The designation of national authorities responsible for implementation must be finalized by August 2, 2025 (https://www.rtr.at/rtr/service/ki-servicestelle/ai-act/Zeitplan.en.html).
Currently, there is no national implementation of the responsibilities outlined in the AI Act in Austria. This applies to both market surveillance authorities and the notifying authority.
Annex III does not specify any particular division of tasks or roles but defines which AI systems are considered high-risk under Article 6(2). A company that develops and puts its own AI system into service assumes both the role of the provider and the role of the deployer. As such, the obligations of both the provider (https://www.rtr.at/rtr/service/ki-servicestelle/ai-act/Provider_obligations.en.html) and the deployer (https://www.rtr.at/rtr/service/ki-servicestelle/ai-act/Deployer_obligations.en.html) apply.
If a high-risk AI system is developed in one subsidiary of the group and used in another subsidiary, the obligations for the provider and the deployer will depend on the specific circumstances of each case.
If you, as a business, use third-party AI systems without making any modifications to them, you are generally classified as a deployer within the AI value chain, meaning the obligations for deployers apply to you. AI systems that generate videos are considered limited-risk AI systems, for which transparency obligations will apply starting on August 2, 2026.
This means that if your videos are classified as "Deep Fakes" under the AI Act, you will be subject to a labelling requirement. Otherwise, no additional obligations under the AI Act apply to the videos. For further information, please visit our website: https://www.rtr.at/rtr/service/ki-servicestelle/ai-act/Transparency_obligations.en.html
Whether the AI provider has specific claims will typically be outlined in the terms and conditions or other contractual agreements. Providers of general-purpose AI models, such as systems for the synthetic generation of text, images, and videos, are required, under the obligations of the AI Act from August 2, 2025, to provide a strategy for copyright compliance (Article 53(1)(c) AIA).
Some providers already offer such strategies, limiting the model's training data to licensed content. Certain providers also ensure the safe use of the generated content with a guarantee of indemnity. It is advisable to review the specific terms in the contractual agreements.
An AI system may be considered a high-risk AI system under Article 6(2) of the AI Act (AIA) if it is used as a safety component:
In the area of high-risk AI systems, Article 6(1)(a) of the AI Act states that an AI system is classified as high-risk if:
In general, critical infrastructure is part of a critical entity. According to the definition in Article 3(62) AIA, the term "critical infrastructure" refers to Article 2(4) of Directive (EU) 2022/2557 ("Directive on the Resilience of Critical Entities", "CER"). According to Articles 2(1), 2(4), and 2(5) CER, a "critical entity"—whether a public or private entity—must be designated as such by the respective Member State.
The corresponding "critical infrastructure" includes:
A service is deemed essential if it is crucial for:
Article 2 of the Delegated Regulation 2023/2450 of the European Commission, issued pursuant to CER, provides a non-exhaustive list of essential services.
The term "safety component" is defined in Article 3(14) AIA as follows:
"A component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property."
Regarding critical infrastructure, the co-legislators provide further clarification in Recital 55 AIA:
"(55) […] Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or the health and safety of persons and property but which are not necessary in order for the system to function. The failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of safety components of such critical infrastructure may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres."
Whether an exception applies under one of the grounds for exemption outlined in Article 6(3) AIA will depend on the specific circumstances of each case.
AI regulatory sandboxes allow for supervised testing under real-world conditions.
Art. 57 of the AIA requires Member States to ensure that their competent authorities establish at least one AI regulatory sandbox at the national level, which must be operational by 2 August 2026. This regulatory sandbox may also be established in collaboration with the competent authorities of other Member States. The European Commission may provide technical support, guidance, and tools to facilitate the establishment and operation of AI regulatory sandboxes.
Within AI regulatory sandboxes, developers have access to a controlled environment to test and refine new AI systems for a specified period before they are released to the market. The testing process is governed by a plan agreed upon by both the developers and the authorities, detailing the testing procedures.
The primary benefit of these sandboxes is the support and oversight provided by the competent authorities, which helps to mitigate risks. This includes ensuring compliance with fundamental rights, health and safety standards, and other legal requirements. Additionally, guidelines will be available to clarify regulatory expectations and requirements.
Developers may request written documentation of the activities conducted within the sandbox. Furthermore, the authorities will prepare a report summarizing the activities performed and the results obtained, which developers can use in the context of the conformity assessment process.
The establishment of AI regulatory sandboxes aims to contribute to the following objectives:
Developers participating in the AI regulatory sandboxes remain liable under applicable EU and national liability laws for any damages caused to third parties as a result of testing conducted within the sandbox.