
Companion legislation

Delegated Acts – Implementing Acts – Codes of Practice and Conduct – Guidelines – Harmonised Standards – Common Specifications

The AI Act is the world’s first comprehensive regulatory framework addressing the governance of AI systems and general-purpose AI models (GPAI). It aims to provide legal certainty for the coming years — ideally, decades.

However, it is sometimes necessary to revise already adopted legal acts in order to adapt them to current developments in a specific area and thus ensure their effective enforcement. The European Parliament and the Council may therefore delegate to the Commission the power to adopt delegated acts or implementing acts in certain matters.

To support providers and deployers as effectively as possible in the practical implementation of and compliance with the provisions of the AI Act, the development of codes of conduct is both encouraged and facilitated, while the Commission is at the same time empowered to issue guidelines.

The following sections provide an overview.


Involved Institutions

An overview of the institutions established at the EU level under the AI Act can be found on our overview page. Below is a brief summary of their respective roles.


European Commission

The European Commission has the exclusive right of initiative to propose formal EU legislation and submit it to the Council of the European Union and the European Parliament. In addition, the Commission regularly conducts calls for inputs and public consultations. These allow stakeholders to participate in the legislative development process and share their perspectives.


AI Board

The AI Board is composed of representatives from the Member States. The AI Office also participates but does not take part in the voting. The AI Board establishes two permanent sub-groups to ensure cooperation and exchange between market surveillance authorities and notifying authorities. Furthermore, the AI Board may set up additional permanent or temporary sub-groups to fulfil its tasks. The AI Board advises and supports the European Commission and the Member States in order to facilitate the consistent and effective implementation of the AI Act.


AI Office

The AI Office, which is part of the European Commission, supports, among other things, the preparation of implementing and delegated acts, the development of guidelines to support and facilitate the practical application of the AI Act, and aims to contribute to the development of expertise and capabilities within the Union in the field of artificial intelligence (Art. 64 AIA). The AI Office also performs numerous other tasks, which are defined both in the AI Act itself and more specifically in the Commission Decision establishing the AI Office. Additionally, it assumes administrative responsibilities for the AI Board. Further details about the AI Office can be found under "Authorities & Institutions".


Delegated Acts

Delegated acts, pursuant to Article 290 of the Treaty on the Functioning of the European Union (TFEU), are legal acts adopted by the Commission under powers conferred upon it by the legislator. The Council and the European Parliament may delegate part of their legislative powers to the Commission, provided this delegation is explicitly set out in the basic act—in this case, the AI Act.

Delegated acts serve to amend or supplement certain non-essential elements of a legislative act. They are particularly used in areas that require a swift and flexible response, such as technology-driven sectors. In contrast to codes of conduct and guidelines, delegated acts are legally binding.

Article 97 of the AI Act lays down the procedure and conditions for the adoption of delegated acts by the Commission. Under the AI Act, the Commission is empowered to adopt delegated acts in the following areas:

  • Modifying conditions under which a high-risk AI system is determined not to pose a significant risk (Article 6(6) and (7))
  • Adding to, removing from, or amending the list of high-risk AI systems in Annex III (Article 7(1) and (3))
  • Amending the minimum content requirements of the technical documentation in Annex IV (Article 11(3))
  • Amending Annexes VI and VII (Article 43(5))
  • Specifying which high-risk AI systems listed in points 2 to 8 of Annex III are subject to the conformity assessment procedure referred to in Annex VII (Article 43(6))
  • Updating the content of the EU declaration of conformity as set out in Annex V (Article 47(5))
  • Modifying the thresholds and supplementing benchmarks and indicators used to identify general-purpose AI models with systemic risk (Article 51(3))
  • Updating and clarifying the indicators listed in Annex XIII (Article 52(4))
  • Detailing measurement and calculation methods to support compliance with the minimum content requirements of the technical documentation set out in Annex XI (Article 53(5))
  • Amending Annexes XI and XII (Article 53(6))

To date, the Commission has not made use of these delegated powers.

Implementing Acts

As a general rule, Member States are responsible for adopting all measures necessary under national law to implement a legal act. However, under Article 291 of the Treaty on the Functioning of the European Union (TFEU), the Commission may adopt binding implementing acts where uniform conditions for implementation are required.

The AI Act provides for powers to adopt implementing acts in several areas, including the following:

  • Suspension, restriction, or withdrawal of the status as a notified body where the Member State fails to act (Article 37(4) AI Act)
  • Establishment of common specifications for requirements under Chapter III, Section 2 ("Requirements for high-risk AI systems") or, where applicable, obligations under Chapter IV (AI systems with "limited" risk) (Article 41 AI Act)
  • Approval of a code of practice (Article 56(6) and Article 50(7) AI Act)
  • Implementing acts may also be adopted where codes of practice are not finalised in due time (Article 56(9) AI Act) or where no harmonised standards exist (Article 41 AI Act)
  • Detailed provisions on the establishment, development, implementation, operation, and supervision of AI regulatory sandboxes (Article 58 AI Act)
  • Individual elements of a plan for real-world testing (Article 60(1) AI Act)
  • Establishment of a scientific panel of independent experts (Article 68(1) AI Act)
  • Detailed rules for the development of a template for the post-market monitoring plan of a high-risk AI system (Article 72(3) AI Act; to be adopted no later than 2 February 2026)
  • Detailed procedures and conditions for the evaluation of a general-purpose AI (GPAI) model (Article 92(6) AI Act)
  • Detailed rules and procedural safeguards for the imposition of fines by the Commission (Article 101(6) AI Act)


Codes of Practice

In recent years, the EU has increasingly relied on the instrument of co-regulation. Unlike traditional legislation (hard law), co-regulation involves economic operators voluntarily adopting measures and practices to regulate certain economic and social interests. This process is supported, guided, supervised and/or monitored by EU institutions or independent regulatory bodies.

Codes of practice do not constitute "laws" in the traditional sense—unless they are formally made binding.

Under the AI Act, codes of practice are used in specific areas. While these codes are generally voluntary, the Commission may approve a code of practice through an implementing act, thereby granting it general applicability within the Union. In such cases, the code becomes binding (see Article 56(6) and Article 50(7) AI Act). The AI Office is responsible for steering the co-regulatory process under the AI Act.

By 2 May 2025, codes of practice are to be developed in the following area:

  • General-purpose AI (GPAI) models / GPAI models with systemic risk (Article 56(2) AI Act)
    The code should cover, at a minimum, the obligations laid down in Articles 53 and 55 AI Act, including the following aspects:
    • Means to ensure that the information referred to in Article 53(1)(a) and (b) remains up to date in light of market and technological developments;
    • Appropriate level of detail in summarising the content used for training;
    • Identification of the nature and characteristics of systemic risks at Union level, where applicable including their root causes;
    • Measures, procedures, and modalities for the assessment and management of systemic risks at Union level.

On 10 July 2025, the AI Office published the so-called GPAI Code of Practice. The code consists of three separate documents. The first document covers transparency measures. The second sets out provisions on copyright compliance. The third includes, among other things, guidance on the identification and analysis of systemic risks, documentation obligations, and a framework for risk mitigation and assessment for GPAI models with systemic risk.
 



Additional Code of Practice (to be developed by 2 August 2026)

  • Code of Practice on Transparency Obligations: Aims to facilitate the implementation of the obligations related to the detection and labelling of artificially generated or manipulated content (Article 50(7) AI Act).


Codes of Conduct

Article 95 of the AI Act provides for the possibility of developing voluntary codes of conduct. However, the Commission is not empowered to make such codes legally binding across the Union. These codes are intended to facilitate the application of some or all of the requirements laid down for high-risk AI systems to AI systems that do not pose a high risk (Article 95(1) AI Act).

  • Application of specific requirements to all AI systems, including by providers and deployers. Among other aspects, the following elements should be covered:
    • Relevant elements from the Union’s Ethics Guidelines for Trustworthy AI;
    • Assessment and mitigation of the environmental impact of AI systems, including energy-efficient programming and techniques for the efficient design, training, and deployment of AI;
    • Promotion of AI literacy, particularly for individuals involved in the development, deployment, and use of AI systems;
    • Support for inclusive and diverse design of AI systems;
    • Assessment and prevention of adverse effects on vulnerable persons or groups of vulnerable persons.


Guidelines

Guidelines must be distinguished from codes of conduct and codes of practice. Guidelines issued by the Commission are not "laws" in the formal sense but rather explanatory and informal documents intended to provide practical guidance on how specific provisions of the AI Act should be applied.

These Commission-issued guidelines are not legally binding, but serve as practical and informal tools to explain the application of Union law. While they do not constitute legal norms, they often result in a self-commitment by the Commission. Administrative bodies are not strictly bound to follow them in all cases; however, Article 99(1) AI Act specifies that guidelines must be taken into account when imposing penalties or other enforcement measures. The Court of Justice of the European Union (CJEU) is not bound by Commission guidelines when interpreting Union law (see, for instance, CJEU, Case C‑376/20 P, Commission v. CK Telecoms UK Investments, para. 125).

According to Article 96 of the AI Act, the Commission shall develop guidelines to support the practical implementation of the Regulation, in particular with regard to:

  • The application of the requirements and obligations set out in Articles 8 to 15 and Article 25;
  • The prohibited practices referred to in Article 5;
    • On 4 February 2025, the Commission published the Guidelines on Prohibited AI Practices.
  • The practical implementation of the provisions on substantial modifications;
  • The practical application of the transparency obligations under Article 50;
  • Detailed information on the relationship between this Regulation and the Union harmonisation legislation listed in Annex I, as well as other relevant Union legislation, including with regard to coherent enforcement;
  • The application of the definition of an AI system as set out in Article 3(1).
    • On 6 February 2025, the Commission published the Guidelines on the Definition of an AI System.


On 18 July 2025, the Commission published guidelines on the scope of obligations for providers of general-purpose AI models.



The Commission is also preparing guidelines in the following areas:

  • The practical implementation of Article 6 AI Act, including a comprehensive list of practical examples of use cases that qualify as high-risk or non-high-risk AI systems (Article 6(5) AI Act; to be adopted no later than 2 February 2026);
  • Elements of the quality management system referred to in Article 17, in a simplified form for micro-enterprises (Article 63(1) AI Act);
  • Reporting of serious incidents involving high-risk AI systems by providers to the relevant market surveillance authorities (Article 73(7) AI Act; to be adopted no later than 2 August 2025).

In addition, at the national level, the Austrian Federal Ministry of Social Affairs, Health, Care and Consumer Protection commissioned the Research Institute – Digital Human Rights Center to develop guidelines on the right to explanation under Article 86 AI Act.



Harmonised Standards

Standards are, in principle, non-binding guidelines that define technical specifications for products, services, and processes of various kinds. Typically, standards are developed by private standardisation bodies upon the initiative of stakeholders who have identified a relevant need. A standard becomes a "harmonised standard" when it is requested by the European Commission through a formal standardisation mandate and accepted by the relevant body (see Article 3(27) in conjunction with Article 2(1)(c) of Regulation (EU) No 1025/2012).

Harmonised standards also play a key role in relation to high-risk AI systems and general-purpose AI (GPAI) models. Compliance with such standards gives rise to a presumption of conformity with the requirements of the AI Act. This applies to the requirements for high-risk AI systems set out in Chapter III, Section 2, as well as the requirements for GPAI models and GPAI models with systemic risk laid down in Chapter V, Sections 2 and 3 (see Article 40(1) AI Act).

The European standardisation organisations are:

  • CENELEC for electrotechnical standards
  • ETSI for telecommunications standards
  • CEN for all other sectoral standards

On 22 May 2023, the Commission issued a standardisation request to CEN and CENELEC in support of Union policy on artificial intelligence (see reference here). The two organisations were invited to develop European standards by 30 April 2025.

The following standards were requested:

1. European standard(s) and/or European standardisation deliverable(s) on risk management systems for AI systems (currently under development)

2. European standard(s) and/or European standardisation deliverable(s) on governance and quality of datasets used to build AI systems (currently under development)

3. European standard(s) and/or European standardisation deliverable(s) on record keeping through logging capabilities by AI systems

4. European standard(s) and/or European standardisation deliverable(s) on transparency and information provisions for users of AI systems

5. European standard(s) and/or European standardisation deliverable(s) on human oversight of AI systems

6. European standard(s) and/or European standardisation deliverable(s) on accuracy specifications for AI systems

7. European standard(s) and/or European standardisation deliverable(s) on robustness specifications for AI systems

8. European standard(s) and/or European standardisation deliverable(s) on cybersecurity specifications for AI systems (currently under development)

9. European standard(s) and/or European standardisation deliverable(s) on quality management systems for providers of AI systems, including post-market monitoring processes

10. European standard(s) and/or European standardisation deliverable(s) on conformity assessment for AI systems (approved on 25 November 2024)

The CEN-CENELEC work programme can be accessed here.

Independently of the standardisation request issued by the European Commission, the European Telecommunications Standards Institute (ETSI) has already published technical reports and specifications, which are available here.

In addition, attention should also be drawn to internationally recognised ISO standards, although these do not qualify as harmonised standards within the meaning of the AI Act. Currently published ISO standards can be found here.


Common specifications

Common specifications, as referred to in Article 41 of the AI Act, are a set of technical specifications within the meaning of Article 2(4) of Regulation (EU) No 1025/2012, compliance with which enables conformity with certain requirements of the AI Act. The Commission may adopt common specifications through implementing acts, but only in the absence of harmonised standards.


AI Pact

Given the varying risk levels of certain AI practices and the corresponding need for gradual adaptation, the AI Act provides for a phased timeline for the application of its provisions (see Timeline & Implementation). The AI Pact is intended to support early preparedness and advance planning for the implementation of the measures and obligations under the AI Act. Participation in the AI Pact is voluntary and is primarily aimed at fostering collaboration and exchange among organisations and companies. Further information can be found here.


Further links

EUR-Lex: Delegated acts

Council: Implementing and delegated acts

European Commission: Implementing and delegated acts

European Economic and Social Committee: Opinion on Self-regulation and co-regulation in the Community legislative framework

Harmonised Standards: Standards in Europe