Cambridge Commentary on EU General-Purpose AI Law

Chapter V
Codes of Practice
Commentary by Zlatko Grigorov & Ludivine Stewart (joint first authors)

AI Act provision

Article 56: Codes of Practice

  1. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level in order to contribute to the proper application of this Regulation, taking into account international approaches.
  2. The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in Articles 53 and 55, including the following issues:
    (a) the means to ensure that the information referred to in Article 53(1), points (a) and (b), is kept up to date in light of market and technological developments;
    (b) the adequate level of detail for the summary about the content used for training;
    (c) the identification of the type and nature of the systemic risks at Union level, including their sources, where appropriate;
    (d) the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in light of the possible ways in which such risks may emerge and materialise along the AI value chain.
  3. The AI Office may invite all providers of general-purpose AI models, as well as relevant national competent authorities, to participate in the drawing-up of codes of practice. Civil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts, may support the process.
  4. The AI Office and the Board shall aim to ensure that the codes of practice clearly set out their specific objectives and contain commitments or measures, including key performance indicators as appropriate, to ensure the achievement of those objectives, and that they take due account of the needs and interests of all interested parties, including affected persons, at Union level.
  5. The AI Office shall aim to ensure that participants to the codes of practice report regularly to the AI Office on the implementation of the commitments and the measures taken and their outcomes, including as measured against the key performance indicators as appropriate. Key performance indicators and reporting commitments shall reflect differences in size and capacity between various participants.
  6. The AI Office and the Board shall regularly monitor and evaluate the achievement of the objectives of the codes of practice by the participants and their contribution to the proper application of this Regulation. The AI Office and the Board shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55, and shall regularly monitor and evaluate the achievement of their objectives. They shall publish their assessment of the adequacy of the codes of practice. The Commission may, by way of an implementing act, approve a code of practice and give it a general validity within the Union. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).
  7. The AI Office may invite all providers of general-purpose AI models to adhere to the codes of practice. For providers of general-purpose AI models not presenting systemic risks this adherence may be limited to the obligations provided for in Article 53, unless they declare explicitly their interest to join the full code.
  8. The AI Office shall, as appropriate, also encourage and facilitate the review and adaptation of the codes of practice, in particular in light of emerging standards. The AI Office shall assist in the assessment of available standards.
  9. Codes of practice shall be ready at the latest by 2 May 2025. The AI Office shall take the necessary steps, including inviting providers pursuant to paragraph 7.

If, by 2 August 2025, a code of practice cannot be finalised, or if the AI Office deems it is not adequate following its assessment under paragraph 6 of this Article, the Commission may provide, by means of implementing acts, common rules for the implementation of the obligations provided for in Articles 53 and 55, including the issues set out in paragraph 2 of this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).

Recitals

Recital 116

The AI Office should encourage and facilitate the drawing up, review and adaptation of codes of practice, taking into account international approaches. All providers of general-purpose AI models could be invited to participate. To ensure that the codes of practice reflect the state of the art and duly take into account a diverse set of perspectives, the AI Office should collaborate with relevant national competent authorities, and could, where appropriate, consult with civil society organisations and other relevant stakeholders and experts, including the Scientific Panel, for the drawing up of such codes. Codes of practice should cover obligations for providers of general-purpose AI models and of general-purpose AI models presenting systemic risks. In addition, as regards systemic risks, codes of practice should help to establish a risk taxonomy of the type and nature of the systemic risks at Union level, including their sources. Codes of practice should also be focused on specific risk assessment and mitigation measures.

Recital 117

The codes of practice should represent a central tool for the proper compliance with the obligations provided for under this Regulation for providers of general-purpose AI models. Providers should be able to rely on codes of practice to demonstrate compliance with the obligations. By means of implementing acts, the Commission may decide to approve a code of practice and give it a general validity within the Union, or, alternatively, to provide common rules for the implementation of the relevant obligations, if, by the time this Regulation becomes applicable, a code of practice cannot be finalised or is not deemed adequate by the AI Office. Once a harmonised standard is published and assessed as suitable to cover the relevant obligations by the AI Office, compliance with a European harmonised standard should grant providers the presumption of conformity. Providers of general-purpose AI models should furthermore be able to demonstrate compliance using alternative adequate means, if codes of practice or harmonised standards are not available, or they choose not to rely on those.

Recital 121

Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art, to promote innovation as well as competitiveness and growth in the single market. Compliance with harmonised standards as defined in Article 2, point (1)(c), of Regulation (EU) No 1025/2012 of the European Parliament and of the Council (41), which are normally expected to reflect the state of the art, should be a means for providers to demonstrate conformity with the requirements of this Regulation. A balanced representation of interests involving all relevant stakeholders in the development of standards, in particular SMEs, consumer organisations and environmental and social stakeholders in accordance with Articles 5 and 6 of Regulation (EU) No 1025/2012 should therefore be encouraged. In order to facilitate compliance, the standardisation requests should be issued by the Commission without undue delay. When preparing the standardisation request, the Commission should consult the advisory forum and the Board in order to collect relevant expertise. However, in the absence of relevant references to harmonised standards, the Commission should be able to establish, via implementing acts, and after consultation of the advisory forum, common specifications for certain requirements under this Regulation. The common specification should be an exceptional fall back solution to facilitate the provider’s obligation to comply with the requirements of this Regulation, when the standardisation request has not been accepted by any of the European standardisation organisations, or when the relevant harmonised standards insufficiently address fundamental rights concerns, or when the harmonised standards do not comply with the request, or when there are delays in the adoption of an appropriate harmonised standard. 
Where such a delay in the adoption of a harmonised standard is due to the technical complexity of that standard, this should be considered by the Commission before contemplating the establishment of common specifications. When developing common specifications, the Commission is encouraged to cooperate with international partners and international standardisation bodies.

Recital 135

Without prejudice to the mandatory nature and full applicability of the transparency obligations, the Commission may also encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content, including to support practical arrangements for making, as appropriate, the detection mechanisms accessible and facilitating cooperation with other actors along the value chain, disseminating content or checking its authenticity and provenance to enable the public to effectively distinguish AI-generated content.

Recital 179

This Regulation should apply from 2 August 2026. However, taking into account the unacceptable risk associated with the use of AI in certain ways, the prohibitions as well as the general provisions of this Regulation should already apply from 2 February 2025. While the full effect of those prohibitions follows with the establishment of the governance and enforcement of this Regulation, anticipating the application of the prohibitions is important to take account of unacceptable risks and to have an effect on other procedures, such as in civil law. Moreover, the infrastructure related to the governance and the conformity assessment system should be operational before 2 August 2026, therefore the provisions on notified bodies and governance structure should apply from 2 August 2025. Given the rapid pace of technological advancements and adoption of general-purpose AI models, obligations for providers of general-purpose AI models should apply from 2 August 2025. Codes of practice should be ready by 2 May 2025 in view of enabling providers to demonstrate compliance on time. The AI Office should ensure that classification rules and procedures are up to date in light of technological developments. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions on penalties should apply from 2 August 2025.

Select bibliography

  • Bernsteiner C and Schmitt TR, ‘Praxisleitfäden’ in Martini M and Wendehorst C (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (C.H. Beck 2025).
  • Kutterer C and Karathanasis T, ‘The AI Act’s GPAI Code: Hidden Policy Choices’ (2025) AI Regulation Papers 25-03-1 <https://ai-regulation.com/gpai-cop-hidden-policy-choices/> accessed 18 September 2025.
  • Pehlivan CN, Forgó N and Valcke P (eds), The EU Artificial Intelligence (AI) Act: A Commentary (Kluwer Law International BV 2024).
  • Pirvan P, ‘The EU Commission’s General-Purpose AI Code of Practice: Pioneering Accountable AI Development While Setting a Global Governance Milestone’ (2025) 2 Journal of AI Law and Regulation 257.
  • Schneider A, ‘Artikel 56 Praxisleitfäden’ in Schefzig J and Kilian R (eds), Beck’scher Online-Kommentar KI-Recht (3rd edn, C.H. Beck 2025).

Commentary

1. General remarks

1.1. Introduction

1Article 56 of the AI Act sets out the procedure for the drawing up and adoption of the codes of practice in relation to general-purpose AI (GPAI) models and their providers. Codes of practice are envisioned as a ‘central tool’1 for the implementation of the obligations of providers of GPAI models under the AI Act. The codes of practice are intended to be a temporary instrument2 for providers of GPAI models to demonstrate compliance with certain of their obligations under the AI Act, until harmonised standards are adopted.3 These obligations concern Articles 53(1) and 55(1) AI Act, covering respectively the obligations for providers of GPAI models and of GPAI models with systemic risk.4

2Article 56’s role goes beyond the GPAI model regime. Article 50(7) of the AI Act provides that the AI Office should ‘encourage and facilitate’ the adoption of codes of practice ‘to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content’.5 This provision states that the Commission may approve such codes via implementing acts in accordance with the procedure laid down in Article 56(6) of the AI Act, or adopt common rules if the Commission does not deem the code of practice adequate.6 The Commission recently launched a call for expression of interest to participate in the creation of a code of practice for transparent AI systems.7 It should be noted that Article 50 is not part of Chapter V on GPAI models but of Chapter IV concerning transparency obligations for providers and deployers of certain AI systems. Article 56(6) specifically concerns GPAI models, as its insertion in Chapter V indicates. The Commission indicated that the requirements laid down in Article 50 are ‘complementary’ to the transparency rules applicable to GPAI models.8 Considering that Article 56(6) has been drafted specifically for GPAI models,9 it will need to apply mutatis mutandis10 to the adoption of the codes of practice under Article 50(7).

3The role of codes of practice is best understood in light of the rules that apply to GPAI models, as well as the context in which they were adopted.11 The obligations laid down in Articles 53(1) and 55(1) AI Act are phrased in rather general terms, which naturally presents a challenge for providers in implementing and complying with them. These obligations have applied since 2 August 2025.12 As noted elsewhere in this commentary,13 harmonised standards are expected to be the principal mechanism for GPAI model providers to demonstrate compliance. A harmonised standard is defined as a ‘European standard adopted on the basis of a request made by the Commission for the application of Union harmonisation legislation’.14 The adoption of harmonised standards is, however, a complex and lengthy process. The ongoing process for the adoption of standards in relation to high-risk AI systems, which has been underway for more than two years since the Commission issued its request in 2023, is indicative of the time required to develop standards.15 At the time of writing, the Commission has not issued a standardisation request for GPAI models. In that regard, there are important differences between codes of practice and harmonised standards, such as the fact that codes of practice cannot create a presumption of conformity, a point which is further explored below.16 Harmonised standards are adopted on the basis of a request made by the Commission for the application of Union harmonisation legislation and can be subject to an objection mechanism by the European Parliament and Member States.17

4Finally, codes of practice must be distinguished from ‘common specifications’, which are adopted by the Commission via an implementing act pursuant to Article 41 AI Act. Common specifications are defined under Article 3(28) in connection with Article 2(4) of Regulation (EU) No 1025/2012 as a set of technical specifications: documents that prescribe technical requirements to be fulfilled by a product, process, service or system. Common specifications are described in recital 121 as ‘an exceptional fall back solution to facilitate the provider’s obligation to comply with the requirements of this Regulation’ in the absence of harmonised standards.

5The codes of practice in the EU AI Act are an example of co-regulation, defined as a ‘regulation method that includes the participation of both private and public actors in the regulation of specific interests and objectives’.18 Pursuant to Article 56(3) AI Act, the AI Office may invite ‘all providers of general-purpose AI models, as well as relevant national competent authorities, to participate in the drawing-up of codes of practice’. The same provision specifies that ‘[c]ivil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts’ may support the process. The General-Purpose AI Code of Practice (GPAI Code of Practice), which was finalised and published in July 2025, was developed through a multi-stakeholder process that followed a call to participate19 and involved industry (notably providers of GPAI models and downstream providers), academia, civil society, rightsholders, as well as EU Member States represented on the AI Board.20

1.2. Structure and overview

6This chapter analyses Article 56 in the order of the provision, proceeding paragraph by paragraph. Where a concept spans multiple paragraphs, the core discussion appears at its first occurrence, with appropriate summaries and cross-references elsewhere. Where relevant, the analysis is grounded in the adopted GPAI Code of Practice21 and the first adequacy assessments issued by the AI Board22 and the Commission,23 as well as any other actions of the AI Board and the AI Office.

7In line with this approach, this chapter first addresses Article 56(1), which frames the AI Office’s role in drafting codes of practice as one of encouragement and facilitation. The analysis then turns to Article 56(2), which sets the codes’ minimum content by reference to Articles 53 and 55. This section is best read alongside the substantive chapters on those provisions, which detail the commitments and measures reflected in the adopted GPAI Code of Practice.

8The next section sets out the consultation framework under Article 56(3). It assesses the scope of the AI Office’s discretion in inviting different stakeholder categories and in determining the appropriate modes of engagement: direct involvement in drafting versus advisory input. It then scrutinises the participation observed in the drafting of the GPAI Code of Practice and offers legal interpretative guidance on participation requirements during the adaptation procedure under Article 56(8).

9The section on Article 56(4) assesses the legal character of its three criteria for codes of practice: (i) a clear statement of specific objectives, (ii) commitments or measures for achieving those objectives, and (iii) due account of the needs and interests of all interested parties. It also examines whether the fulfilment of these criteria is required for an adequacy finding under Article 56(6), first subparagraph. The section on Article 56(5) analyses the scope and purpose of regular reporting to the AI Office, identifies the AI Office as the provision’s addressee, and clarifies the nature of its duties. It then explores the potential consequences of this provision for providers.

10The analysis of Article 56(6) proceeds in two parts, corresponding to its subparagraphs and their distinct functions. The discussion of the first subparagraph defines what regular monitoring and evaluation entails. It then turns to the second and third sentences of the first subparagraph, which serve as the legal basis for adequacy assessments by the AI Board and the AI Office. It scrutinises the legal form of the instruments used to publish those assessments, drawing on comparisons with similar procedures across EU legislation, and then examines potential avenues for judicial review, as well as the required form, means, and timing of the publication of assessments.

11The second subparagraph of Article 56(6) has generated significant uncertainty about what it means for a code to acquire ‘general validity within the Union’ when approved via an implementing act. The section addresses this by analysing the conferral of implementing powers on the Commission in the context of centralised enforcement of Chapter V of the AI Act, compared to the ordinary function of implementing acts to unify national enforcement. It draws parallels to other instruments and evaluates competing views on the legal consequences of approving a code of practice by an implementing act.

12The section on Article 56(7) clarifies the legal consequences of adherence to a code of practice. It proceeds from the absence of a formal presumption of conformity and derives the legal and practical implications of adhering to a code. It assesses the compliance value and supervisory expectations attached to adherence, and the corresponding potential adverse consequences of non-adherence. It also addresses the effects of selective adherence and analyses the relation between codes of practice and harmonised standards.

13The chapter then turns to the review and adaptation mechanism of Article 56(8). While potential adaptations are discussed where they arise in other sections of the chapter, the focus here is on the formal procedure for introducing them. Finally, the section on Article 56(9) examines its two subparagraphs separately, considering their different aims. First, it assesses the legal character of the deadline for drawing up codes of practice and the consequences of delay. Second, it analyses the power of the Commission to adopt common rules by an implementing act – the scope and effects of the rules, their interaction with other instruments under the AI Act, and the conditions that must be met before adoption is permissible.

2. Substance

2.1. Article 56(1): Encouragement and facilitation of the drawing up of codes of practice by the AI Office

14Article 56(1) mandates the AI Office to ‘encourage and facilitate the drawing up of codes of practice at Union level in order to contribute to the proper application of this Regulation, taking into account international approaches’. The role of the AI Office in relation to the codes of practice has been described in the literature as ‘giving the impulse’,24 in other words setting in motion and supporting the drafting process. The terms ‘encourage’ and ‘facilitate’ cover tasks related to organisation, logistics and coordination. The experience of the adopted GPAI Code of Practice sheds further light on what this means in practice. In drawing up the GPAI Code of Practice, the AI Office carried out the following tasks: opening a call for expression of interest to participate in the drawing up of a code of practice, verifying eligibility and confirming participation to the stakeholders that had expressed their interest,25 appointing the chairs and vice-chairs for each working group,26 organising meetings, and drawing up meeting minutes.27 Article 56(1) emphasises that the process should take into account international approaches. Given that Article 56(1) is addressed to the AI Office, it follows that the Office is required to consider such approaches in facilitating the development of codes of practice. This obligation is reflected, for example, in the AI Office’s appointment of chairs and vice-chairs of the GPAI Code of Practice working groups, which ensured diverse representation from geographical regions beyond Europe and from different areas of expertise.28

15At the same time as the call to participate in the drafting process, the AI Office launched a multi-stakeholder consultation on trustworthy GPAI models under the AI Act in July 2024. This consultation aimed to allow any stakeholder ‘to have their say on the topics covered by the first adopted GPAI Code of Practice’29 and to inform the drafting process of the GPAI Code of Practice.30

2.2. Article 56(2): Minimum obligations to be covered in the codes of practice

16Under the AI Act, the AI Office and the AI Board shall aim to ensure that the codes of practice cover ‘at least’ the obligations provided for in Article 53 (for all providers of GPAI models) and in Article 55 AI Act (for providers of GPAI models with systemic risk).31 Article 56(2) specifically indicates that the following issues should be covered in the codes of practice:

  • ‘the means to ensure that the information referred to in Article 53(1), points (a) and (b), is kept up to date in light of market and technological developments’;32
  • ‘the adequate level of detail for the summary about the content used for training’;33
  • ‘the identification of the type and nature of the systemic risks at Union level, including their sources, where appropriate’;34
  • ‘the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in light of the possible ways in which such risks may emerge and materialise along the AI value chain’.35

17Recital 116 indicates that the codes of practice should ‘help to establish a risk taxonomy of the type and nature of the systemic risks at Union level, including their sources’ and focus on specific risk assessment and mitigation measures. Both the AI Office and the AI Board must assess whether the codes of practice cover these obligations.36 Only adherence to a code of practice that has been assessed as adequate in accordance with the procedure laid down in Article 56(6) produces the effects analysed below.37

18As already mentioned in the introduction, the GPAI Code of Practice was finalised and published in July 2025.38 It is composed of three separate chapters: the Transparency Chapter, the Copyright Chapter, and the Safety and Security Chapter.39 The first two chapters concern all providers of GPAI models, whereas the Safety and Security Chapter is relevant only to providers of GPAI models with systemic risk. The three chapters contain 12 commitments in total, 10 of which feature in the Safety and Security Chapter. On 1 August 2025, the Commission40 and the AI Board41 published their respective adequacy assessments of the GPAI Code of Practice, confirming that it sufficiently covers the obligations laid down in Articles 53 and 55 as required by Article 56(6) AI Act. Both the Commission’s opinion and the AI Board’s conclusion highlight the possibility that the GPAI Code of Practice may be deemed inadequate in the future,42 following the procedure for monitoring and evaluating the effectiveness of codes of practice under Article 56(6),43 and the possibility for review and adaptation under Article 56(8).44 This is indicative of the character of the codes of practice as flexible instruments, to be reviewed and adapted in light of technological advancements, such as new capabilities or new model architectures, and of expertise gained in the field.

2.3. Article 56(3): Stakeholder participation in the drafting process

19While the AI Office bears the primary responsibility for organising the preparation of the codes of practice under Article 56(1), Article 56(3) of the AI Act empowers it to invite GPAI model providers and national competent authorities to participate in the drafting process itself. The phrase ‘may invite’45 suggests that the AI Office has discretion as to whether to include those stakeholders, rather than a legal duty to do so.46 Although the provision is framed in discretionary terms, recital 116 states that the AI Office ‘should’ collaborate with the national competent authorities to ensure that the codes adequately account for diverse perspectives.47 However, despite the interpretative guidance of this non-binding recital, the AI Act contains no reference to procedures for participation or to consequences of the non-inclusion of national authorities. It is therefore preferable to interpret the use of ‘should’ in recital 116 as setting a policy objective for, rather than a definitive obligation upon, the AI Office.48

20In contrast to the possibility of including model providers and national competent authorities in the drawing up of the codes itself, Article 56(3), second sentence is framed as allowing the invitation of ‘[c]ivil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts’ to ‘support’ the drafting process, but not necessarily to participate in it. This has led some authors to postulate that such support activities encompass observing the drafting process, providing advice and comments, or occasionally suggesting wording proposals, but, as a rule and in line with principles of co-regulation, should not involve any direct participation in the detailed drafting.49 While such a reading is compatible with Article 56(3), there is no strong indication that the provision makes this approach mandatory for the AI Office. Rather, as stated above, it is evident from the wording of Article 56(3) that it accords the AI Office a wide margin of discretion as to whether to include various stakeholders in the process, the scope of such participation, and the prioritisation of different views. Still, some non-mandatory, soft-law considerations support broader stakeholder inclusion in the drafting process, such as the fact that stakeholder consultations form one of the ‘key instruments’ for better regulation under the Commission’s internal Better Regulation Guidelines.50 In view of this discretion, the way in which the AI Office organised stakeholder participation for the drafting of the adopted GPAI Code of Practice may be taken as indicative of its preferred interpretative approach and may inform how Article 56(3) is operationalised for drawing up future codes of practice or, as discussed below, for adapting the current GPAI Code of Practice under Article 56(8).51 Consequently, the following analysis focuses on the procedures adopted by the AI Office in structuring the drafting process of the initial GPAI Code of Practice.

21As stated above, the drafting process commenced on 30 July 2024 with the publication of an open call for expression of interest to participate in the drawing up of the codes of practice, which also included an indicative timeline of the drafting process.52 The open call first described the establishment of a Code of Practice Plenary to function as the primary forum for the iterative drafting process.53 The Plenary itself was divided into four working groups to allow for the thematic development of the different chapters and measures:

  • Working Group 1: Transparency and copyright-related rules;
  • Working Group 2: Risk identification and assessment measures for systemic risks;
  • Working Group 3: Risk mitigation measures for systemic risks; and
  • Working Group 4: Internal risk management and governance for general-purpose AI model providers.54

22The AI Office appointed chairs and vice-chairs for each of the working groups, who, according to the initial open call, were required to be ‘independent experts’.55 On this basis, the call set out three categories of selection criteria for these roles: (i) expertise in relevant areas, (ii) ability to fulfil the role and its tasks effectively, and (iii) independence.56 It described these criteria as non-exhaustive,57 leaving room for the AI Office to take additional considerations into account as they arose. The independence criterion entails the absence of financial or other interests that might affect the selected candidate’s independence, impartiality, or objectivity.58 For this reason, candidates were asked to declare any direct or indirect conflicts,59 but the call did not set out any specific conflict evaluation procedures or thresholds. Overall, the criteria were rather abstract, reflecting the broad discretion the AI Office retained in selecting participants and (vice-)chairs. Although the call stated that the declarations of interest of selected candidates might be made public,60 this has not occurred.

23The call further provided that the Plenary shall consist of:

all interested and eligible general-purpose AI model providers, downstream providers integrating a general-purpose AI model into their AI system, other industry organisations, other stakeholder organisations such as civil society or rightsholders organisations, as well as academia and other independent experts.61

24Unlike the eligibility requirements for (vice-)chairs, those for other participants understandably did not include independence criteria. The call required GPAI model providers to either have existing or planned operations within the Union, the latter of which had to be substantiated by demonstrating that genuine steps had been taken in that direction.62 Crucially, considering that the open call envisioned dedicated workshops with the chairs and vice-chairs explicitly only for GPAI model providers,63 declarations of interest had to include a self-assessment that the participant met the GPAI model provider criteria.64 In addition to an actual or planned presence, downstream providers and other industry stakeholders had to demonstrate a legitimate interest in participating.65 For downstream providers, eligibility was limited to those integrating a GPAI model into their AI system,66 whereas other industry organisations had to establish that their members were affected by or had a direct link to the ‘objectives and the content of the future Code of Practice’.67 Academia, independent experts, and related organisations were able to participate irrespective of any EU presence, provided they substantiated relevant expertise in at least one of the working group topics and participated in a personal capacity.68 Other stakeholders, such as civil society or rightsholder organisations, had to be established or physically present in the Union, demonstrate a legitimate interest, and show representative powers as regards the affected stakeholder group they claimed to represent.69

25Therefore, the AI Office adopted a maximally inclusive approach to stakeholder participation under Article 56(3). The Commission has repeatedly highlighted that more than 1,000 stakeholders took part in the Plenary sessions to assist with the drawing up of a code of practice.70 In fact, the participation of this wide array of stakeholders in the drafting process has been highlighted in the adequacy assessments of both the Commission71 and the AI Board.72 While some observers and stakeholders have criticised the AI Office for the relative weight it accorded to varying actor inputs and for the lack of structured procedural and transparency rules,73 from a legal perspective, as noted above, Article 56(3) affords the AI Office broad discretion over the extent of stakeholder involvement.

26Simultaneously with the call, the AI Office launched a multi-stakeholder consultation procedure,74 the primary aim of which was the collection of a ‘broad range of input and perspectives’75 through a template questionnaire that was intended to ‘form the basis of the first drafting iteration of the Code’.76 The original invitation to participate in the consultation stated that the AI Office would publish a summary of the received responses in aggregate form.77 Commission press releases state that more than 430 submissions were received and that preliminary results were presented during the first Plenary meeting.78 While there had been expectations that a more comprehensive public report would be published in autumn 2024,79 no such report appears to have been made publicly available as of the time of writing. Thus, it remains unclear to what extent different stakeholders’ input was reflected in the initial drafts and which submissions informed particular measures.

27Given the anticipated reviews and adaptations of the GPAI Code of Practice under Article 56(8),80 it should be considered whether Article 56(3) may likewise apply to the review phase. Considering that stakeholder involvement under Article 56(3) is discretionary and entrusted to the AI Office, there are no legal obstacles to inviting stakeholders to participate in or support the amendment process. If amendments are undertaken and the AI Office again decides to opt for broad stakeholder participation, there are no binding rules requiring it to apply the same eligibility criteria as in the initial drafting process. However, even though such procedures do not represent legislative action by the Union, the selection procedures should still be prepared in view of the general principles of EU law, which are binding on all Union institutions, bodies, offices, and agencies, and specifically the principles of participation and transparency,81 civil society dialogue,82 good administration,83 good governance and access to documents,84 and legal certainty.85

2.4. Article 56(4): Clear objectives, measures for achievement of objectives, and stakeholder consideration

28Article 56(4) mirrors the structure of Article 56(2) in that it is framed as a requirement addressed to the AI Office and AI Board, which ‘shall aim to ensure’ that the adopted codes of practice cover three criteria in addition to those contained in Article 56(2): (i) ‘that the codes of practice clearly set out their specific objectives’; (ii) that they ‘contain commitments or measures, including key performance indicators as appropriate, to ensure the achievement of those objectives’; and (iii) ‘that they take due account of the needs and interests of all interested parties, including affected persons, at Union level’.86 Some authors have contended that the chosen wording across different language versions of that provision, specifically ‘shall aim to ensure’ in the English version, ‘anstreben’ in the German version,87 and ‘s’efforcent’88 in the French version, leaves ambiguity as to whether these additional criteria should be interpreted as imposing a binding obligation on the AI Office and AI Board or whether they articulate non-binding policy objectives.89

29Considering the equivalent wording in Articles 56(2) and 56(4), the chosen formulation ‘shall aim to ensure’ (across the different language versions) can be interpreted as occupying a middle ground between those two options on the basis of the co-regulatory model applicable to the drafting and adoption of the codes of practice. Specifically, the AI Office and the AI Board are not the final decision-making authorities on the substantive content of the codes of practice,90 and thus obligations on the mandatory content of the codes cannot be directly placed on them. Instead, the two bodies have the authority to assess the adequacy of the adopted codes.91 Relatedly, Article 56(1) frames the AI Office’s role in relation to codes of practice as one of encouragement and facilitation rather than direct drafting.

30Thus, the less determinate phrasing could reflect the presence of two separate duties: (i) for the AI Office, under Article 56(4) read in conjunction with Article 56(1), to exercise its facilitation responsibilities to encourage the inclusion of the covered criteria in the contents of the codes of practice and (ii) for both the AI Office and the AI Board, under Article 56(4) read in conjunction with Article 56(6), to take the cumulative fulfilment of those criteria into account in their respective adequacy assessments.92 Conversely, if the adopted codes of practice fail to appropriately meet these criteria, this interpretation of Article 56(4) would inform a duty on the AI Office and AI Board to deem them inadequate in their assessments under Article 56(6), first subparagraph.

31However, while this reading leads to a coherent interpretative approach across Article 56(2), (4) and (6), it does not necessarily follow from the text of Article 56(6), which does not cross-reference the preceding paragraphs as requisite conditions for a positive adequacy assessment. By contrast, Article 45(4) of the Digital Services Act (“DSA”),93 which provides for comparable assessments of the codes of conduct under the DSA by the Commission and the European Board for Digital Services, in its wording expressly ties the assessment to the fulfilment of the aims set out in its preceding paragraphs 1 and 3.94

32With that said, the interpretation that the adequacy assessments under Article 56(6) must take into account the fulfilment of paragraphs 2 and 4 seems to be shared by both the Commission and the AI Board. In their respective adequacy assessments, each of the two bodies has based its conclusions on a combined examination of whether the criterion under Article 56(2) and the three criteria under Article 56(4) have been fulfilled.95 Specifically, in its assessment the AI Board has explicitly treated the criteria of Article 56(2) together with those of Article 56(4) as forming one cumulative whole for evaluating the GPAI Code of Practice’s adequacy.96 The Commission has likewise included the fulfilment of the objectives set out in Article 56(4) as part of its adequacy assessment criteria,97 but it has not made the same explicit consolidation of the two provisions as essentially representing one unified whole for the purposes of adequacy evaluation.

2.4.1. Clear statement of specific objectives

33With regard to the first criterion under Article 56(4), namely the requirement for the codes of practice to ‘clearly set out their specific objectives’, both the AI Board and the Commission have assessed the adopted GPAI Code of Practice as adequate.98 Both bodies primarily note that each of the Code’s chapters has a dedicated ‘Objectives’ section, which contains Objective A, for the Code to serve as a guiding document for demonstrating compliance, and Objective B, to ensure compliance and enable the assessment of compliance by the AI Office.99 Both assessments also refer to the recitals of each chapter as dedicated to clarifying the various purposes and considerations of the underlying commitments.100 The Commission goes further and also treats purpose-clause phrases such as ‘in order to’ or ‘for the purpose of’ as setting out the specific objectives of the individual commitments to which they attach.101

34For its part, the AI Board has moved beyond an adequacy assessment in the strict sense and has issued two forward-looking recommendations: (i) to consider whether additional objectives should be introduced in future iterations and (ii) to take account of relevant international approaches, as part of ongoing monitoring and evaluation for future versions.102 By articulating these considerations in its initial adequacy assessment, the AI Board appears to signal the factors it may weigh in subsequent adequacy reviews, including the possibility of deeming the current GPAI Code of Practice inadequate if evolving circumstances warrant amendments to the objectives that the Code fails to incorporate. This approach by the AI Board is also instructive for the future review and adaptation procedures under Article 56(8), discussed in more detail below.103

2.4.2. Commitments or measures for achievement of objectives

35The second criterion used under Article 56(4) to assess the adequacy of the adopted GPAI Code of Practice is that it contains ‘commitments or measures, including key performance indicators as appropriate, to ensure the achievement’ of the objectives under the first criterion. As outlined above, the Transparency and Copyright Chapters of the GPAI Code of Practice each contain one commitment, while the Safety and Security Chapter, aimed at the providers of GPAI models with systemic risk, contains ten.104 Each commitment is accompanied by detailed measures for its specific implementation. Both the AI Board and the Commission have assessed these commitments and measures as adequate.105

36Both bodies paid particular attention to the absence of proactive reporting obligations in the Transparency and Copyright Chapters, which they deemed not to be a prerequisite for a positive adequacy assessment on the basis that effective monitoring can be accomplished by alternative means sufficient to meet the stated objectives.106 The Commission specifically noted that this choice takes into account ‘the size and capacity of providers that typically place general-purpose AI models without systemic risk on the market’.107 This reasoning links the second criterion to the third, which concerns taking into account all interested parties and is discussed in the subsection below.

37The adopted GPAI Code of Practice does not contain any key performance indicators (KPIs) for any of the commitments and measures across its three chapters. The Commission has specifically stated that: ‘[i]n line with the assessment of the independent experts that drafted the Code, the Commission considers key performance indicators as currently not appropriate’ for ensuring the fulfilment of the Code’s objectives.108 Notably, the final adopted text of the Code of Practice provides no reasoning for the choice to omit KPIs and, in fact, does not mention the term at all. An examination of the drafting history of the Code reveals the progressive abandonment of KPIs from its intended framework. The first draft contained an outline for each of the commitments with respective measures, sub-measures, and placeholders for KPIs, but no concrete KPIs had been written in at that point.109 The second draft removed sub-measures as an item and included substantive KPIs that largely required complete compliance with the underlying measures (i.e., 100% fulfilment).110 In line with this, the third draft incorporated the majority of the items previously labelled as KPIs into the measures themselves and explicitly stated that ‘[s]takeholders should not expect the final adopted version of the Code to contain KPIs’.111

38Nevertheless, in its adequacy assessment the Commission still encouraged signatories to aim to expand their reporting: ‘for example by including key performance indicators, where they become appropriate to measure the implementation and outcome of the Code’.112 Such voluntary adoption of KPIs by signatories may in turn inform potential adaptations of the GPAI Code of Practice under Article 56(8). The AI Board likewise included future-oriented recommendations, specifically to monitor fulfilment of the Safety and Security Chapter’s risk assessment process with a view to determining whether further clarifications would be needed to align it with the requirements of the Cyber Resilience Act.113

2.4.3. Taking due account of the needs and interests of all interested parties

39The third criterion under Article 56(4) requires codes of practice to ‘take due account of the needs and interests of all interested parties, including affected persons, at Union level.’ Specific note must be made of the explicit distinction the provision draws between interested parties and affected persons. While balancing interests along the value chain, for example by taking into account the special requirements of small and medium-sized enterprises (SMEs), downstream providers, open-source model providers, rightsholder organisations, and other such stakeholders, is one of the leading aims of the preparation of codes of practice, the explicit singling out of affected persons underscores the protection of fundamental rights as one of the primary facets of the AI Act’s purpose.114 It also signals that the rights of natural persons constitute a distinct class of interests, separate from those of other stakeholders, that must be substantively considered and evidenced in both the drafting and assessment of a code of practice.

40In view of the requirement to reflect and balance competing interests, the AI Board in its adequacy assessment has highlighted the breadth and diversity of stakeholder participation in the drafting process.115 However, whereas Article 56(3) affords the AI Office discretion as to whether to include various stakeholders in different capacities, Article 56(4) is substantive and prescribes that an adopted code of practice should meaningfully reflect and balance the needs and interests of all interested parties. Therefore, this criterion under Article 56(4) could, in principle, be satisfied even without active stakeholder participation, provided that the adopted text demonstrates a reasoned and proportionate weighing of those interests.

41Both adequacy assessments have commended the clarification in the Transparency Chapter that modifiers need only comply with documentation requirements in respect of the modifications they have introduced.116 While this general interpretation is also reflected in recital 68 of the AI Act, and has subsequently been further elaborated in the Commission Guidelines, including as regards copyright and systemic risk obligations,117 the GPAI Code of Practice’s clarification provides a further soft-law source of legal certainty, considering that GPAI model modifications are not directly dealt with in any of the binding provisions of the AI Act. The assessments also note the Code’s balancing of providers’ interests in protecting trade secrets and confidential information against the public interest in transparency, through the express cross-reference to the confidentiality provisions in Articles 78 and 53(1)(b) of the AI Act.118

42Both assessments conclude that the Copyright Chapter sets out clear, practicable measures for providers and respects the principle of proportionality by taking into account providers’ size and capacity in setting obligations while ensuring transparency to rightsholders.119

43The adequacy assessments also conclude that the Safety and Security Chapter has adequately accounted for interested parties.120 Specifically, both assessments have highlighted Measure 3.1’s focus on investigating potential effects ‘on natural persons, including vulnerable groups’121 as a goal when collecting model-independent information.122 This builds on the specific attention paid by Article 56(4) to affected persons, as discussed above. The other measures highlighted by both bodies as exemplifying the consideration of interested parties’ interests are Appendix 3.2, which requires model evaluations to align with the expected downstream uses; Measure 3.5, which calls for consideration of end-user feedback in post-market monitoring; and Measure 9.1, which prescribes the facilitation of incident reporting by downstream providers and final users.123

2.5. Article 56(5): Reporting to the AI Office

44The AI Act also requires the AI Office to ensure that participants to the codes of practice report regularly to the AI Office.124 This reporting concerns the implementation of the commitments under the codes of practice, the measures taken, and their outcomes, including ‘as measured against the key performance indicators as appropriate’.125 The term ‘participants to the codes of practice’ must be understood as referring to providers of GPAI models that have signed the codes of practice and that implement the commitments laid down therein.126 The expectations regarding the reporting commitment and key performance indicators will depend on the size and capacity of providers.127

45In accordance with Article 56(6), the AI Office is required to monitor and evaluate the achievement of the objectives of the codes of practice, as well as the contribution of these codes to the proper application of the AI Act.128 To accomplish this objective, it is essential that providers report on the manner in which they implement commitments and on the outcomes of their measures.129 Article 56(5) is, however, addressed to the AI Office, not to the participants, which indicates that it is not an enforcement provision and that it imposes no legal obligation on participants to report to the AI Office. The implications for providers are therefore unclear. Providers are already required under Article 53(1)(a) to provide relevant information to the AI Office upon request,130 and the Commission may request documentation and information under Article 91 AI Act. It has been suggested in the literature that a failure to report on the implementation of the commitments, as well as on the measures taken and their outcomes, could trigger closer scrutiny on the part of the AI Office.131 It could furthermore be argued that an obligation to report specifically on the commitments taken under the codes of practice and their outcomes could be derived from Article 56(5) in combination with Article 53(3) AI Act, which sets out a general obligation for providers to cooperate with the Commission ‘in the exercise of their competences and powers pursuant to this Regulation.’ As discussed elsewhere in this commentary,132 the contours and implications of the duty of cooperation are uncertain. It is therefore doubtful whether such an interpretation would withstand the test of legal certainty, which requires legal rules to be sufficiently clear and precise.133

46The added value of Article 56(5) becomes clearer when considering the Safety and Security Chapter of the GPAI Code of Practice, which gives shape to the reference to reporting in that provision. Indeed, Commitment 7 in that chapter specifically refers to Article 56(5) AI Act and states that signatories ‘commit to reporting to the AI Office information about their model and their systemic risk assessment and mitigation processes and measures by creating a Safety and Security Model Report’.134 In addition, signatories ‘commit to keeping the Model Report up-to-date […] and notifying the AI Office of their Model Report’.135 Commitment 7.7 further clarifies the scope of this Model Report notification by stating that signatories will give the AI Office access to the Model Report by the time they place a model on the market and by specifying the conditions under which signatories may delay providing access to the Model Report.136

2.6. Article 56(6): Adequacy assessments

47Article 56(6) comprises two subparagraphs. The first imposes a duty on the AI Office and the AI Board to monitor and evaluate how codes of practice contribute to the proper application of the regulation, including whether their objectives are being achieved and whether they cover the obligations of Articles 53 and 55.137 It also requires publication of adequacy assessments by each of the two bodies on the codes of practice.138 The second subparagraph empowers the Commission to approve a code of practice via an implementing act, thereby ‘giving it general validity within the Union’.139

2.6.1. Article 56(6), first subparagraph

48The first sentence of the first subparagraph requires the AI Office and the AI Board to conduct regular monitoring and evaluation of the achievement of the objectives of the codes of practice and their contribution to the ‘proper application’ of the AI Act.140 The provision does not prescribe a specific time interval in which evaluations must be performed, meaning that in general it is left to the discretion of the two bodies.141 However, read together with Article 56(8), which envisages review and adaptation of the codes,142 particularly when relevant standards emerge, a potential interpretation is that adequacy should be reassessed at a minimum whenever new standards are issued. With that said, this approach would seem insufficient in light of the evidently distinct purposes of the two provisions. It is preferable to conclude that regular monitoring should occur at least more frequently than formal reviews and adaptations, that is, on an ongoing basis, and largely before any concrete needs for adaptation may arise.

49In its initial adequacy assessment, the Commission largely restated that it will ‘regularly monitor and evaluate’ the effectiveness of the adopted GPAI Code of Practice, while indicating that it will consider initiating formal amendments ‘at least every two years, for instance based on the emergence of standards, relevant technological developments, or changes in the risk landscape’.143 This suggests that the Commission considers that monitoring cycles should be more frequent than the proposed two-year horizon for amendments. In its adequacy assessment, the AI Board indicated a degree of deference to the AI Office, noting that it will rely on regular updates from the AI Office’s monitoring activities to inform its own actions.144 It also stated that it intends to ‘engage in collaborative efforts with the Commission, national competent authorities, downstream providers, and other relevant entities’.145

50The second sentence of the first subparagraph requires the AI Office and the Board to assess whether the codes of practice cover the obligations contained in Articles 53 and 55, and to monitor and evaluate whether the codes achieve their stated objectives. Some commentators have postulated that the apparent repetition of the monitoring and evaluation requirement is more likely a concretisation of the first sentence rather than a separate monitoring duty.146 Others treat it as a requirement to specifically include an evaluation of the fulfilment of the objectives of the codes as part of the adequacy assessment.147

51The final sentence of the first subparagraph serves as the primary legal basis for the publication of the adequacy assessments. A systematic reading treats the second and third sentences together, with the third sentence imposing publication of the adequacy assessments and the second sentence specifying the assessments’ content.148 On that reading, some authors contend that before a first phase of monitoring and evaluations could be performed, the assessment is to address only formal adequacy, namely whether an adopted code satisfies the minimum requirements in Article 56(2) and (4), and once monitoring has been conducted it should also address practical adequacy, namely whether the code effectively contributes to compliance with the AI Act.149

52A question arises as to the potential legal effects that may ensue in the event that the Commission and the AI Board issue divergent adequacy assessments. On a proper reading of Article 56(6), which assigns equal weight to the assessments of both bodies, it is preferable to adopt the interpretation that a negative adequacy finding by either would render a code inadequate in its entirety and would preclude reliance on it to demonstrate compliance. This is reinforced by the interpretation provided in the Commission Guidelines, which specify that the positive effects of adhering to a code of practice arise only when it is deemed adequate by both the AI Board and the AI Office.150 This is in contrast with Article 56(9), which grants discretionary power to the Commission to adopt common implementing rules in the event of a negative assessment by the AI Office but not by the AI Board, as discussed below.151

53In apparent response to this inconsistency between the weight accorded to the Commission’s and the AI Board’s assessments of codes of practice under Article 56(6) and Article 56(9), the Commission has, in its 19 November 2025 proposal to amend the AI Act, suggested changes to the structure of Article 56.152 First, the Commission proposes replacing references to the AI Office in the first paragraph of Article 56(6) with references to the Commission.153 More pertinently, the proposal retains the wording that the Commission and AI Board ‘shall regularly monitor and evaluate the achievement of the objectives of the codes of practice’, but suggests removing the requirement that both bodies issue an adequacy assessment on whether ‘the codes of practice cover the obligations provided for in Articles 53 and 55’.154 Instead, the proposal provides that only the Commission (‘taking utmost account of the opinion of the Board’) shall conduct such an adequacy assessment and publish it.155 The Commission’s proposal is subject to the ordinary legislative procedure, leaving it presently uncertain whether (or to what extent) its contents will be adopted.156 Accordingly, the following discussion analyses the originally enacted version of Article 56(6), which requires the publication of an assessment by both bodies.

54While Article 56(6) thus envisions the publication of the respective adequacy assessments by the AI Office and the AI Board, it remains silent as to their intended legal form, means of publication, and timing of adoption and publication.

2.6.1.1. Form of the adequacy assessment instrument

55The form in which the adequacy assessments may be issued depends on the issuer. Under Article 288(1) TFEU, Union institutions may adopt regulations, directives, decisions, recommendations and opinions. As an EU institution under Article 13 TEU, the Commission could in principle adopt a decision of general application under Article 288(4) TFEU, which would be binding in its entirety, or issue a non-binding opinion.157 In the absence of a specific legal basis empowering the Commission to adopt an adequacy assessment as a binding decision, such assessments under Article 56(6) should take the form of non-binding opinions. The adoption of opinions follows one of the decision-making procedures contained in Article 6 of the Commission’s Rules of Procedure.158 The Commission indeed published its initial adequacy assessment in the form of an opinion on 1 August 2025.159

56As to the form of the AI Board’s assessment, the Board is not an EU institution within the meaning of Article 13 TEU.160 Its acts are therefore not acts under Article 288 TFEU, but are governed by its Rules of Procedure adopted pursuant to Articles 65 and 66 of the AI Act.161 Under Article 6 of those Rules of Procedure, the Board may adopt ‘opinions, recommendations and advice’ on matters relevant to the implementation of the AI Act, all of which are non-binding acts.162 The AI Board framed its initial adequacy assessment, published on 1 August 2025, as the ‘Conclusion of the Artificial Intelligence Board on the Assessment of the General-Purpose AI Code of Practice pursuant to Article 56 of Regulation 2024/1689’.163 A ‘conclusion’ is not listed as a distinct act under the Rules of Procedure and would thus fall within the category of ‘other documents’. The choice of that title likely draws on the provision regarding the assessment of codes of conduct in Article 45(4) of the DSA, which expressly provides that the European Board for Digital Services and the Commission should publish their ‘conclusions’,164 a title subsequently adopted by the European Board for Digital Services in its assessment of the two adopted DSA codes.165

57Thus, the assessment instruments of both the Commission and the AI Board represent non-binding acts, which raises some key considerations regarding their contestability. As soft-law acts, they cannot be subject to an action for annulment under Article 263 TFEU, which explicitly excludes opinions and recommendations from its scope. Moreover, established case law firmly treats the binding legal effect of the contested act as a mandatory requirement for access to judicial review under Article 263 TFEU166 and has settled that arguments regarding effective judicial protection cannot transform a non-binding measure into a reviewable act.167 In a similar vein, the plea of illegality under Article 277 TFEU is also unavailable, as it is limited to acts of general application adopted by an EU institution, body, office or agency.168 The pre-Lisbon Article 241 EC confined that plea to regulations,169 while the current TFEU provision has broadened the scope to all acts of general application, which includes certain directives and decisions, but not non-binding acts.170

58Judicial challenges to an assessment, whether by way of annulment or disapplication, are therefore unavailable. The practical consequences of the absence of a route to judicial review may not be immediately apparent, considering that the AI Act explicitly provides for compliance by other adequate means for non-signatories.171 However, non-adherence can potentially entail significant administrative and material costs.172 Under the Commission Guidelines, non-signatories will be expected to substantiate how their chosen alternative adequate means are compliant, for example by preparing a gap analysis against an adequate code of practice, and may face more frequent requests for information and for access to conduct model evaluations across the lifecycle.173 A sanctions asymmetry may also arise, considering that the Commission may treat commitments implemented in line with an adequate code as a mitigating factor.174

59 A challenge to a future negative assessment of a code of practice previously deemed adequate likewise cannot be attempted via the routes of annulment or the plea of illegality. It may be considered whether a provider that had relied on a code previously assessed as adequate could challenge a subsequent negative assessment on the basis of the protection of legitimate expectations.175 However, the success of such a claim is questionable, given that, under established case law, no legitimate expectations may arise on the basis that an ‘existing situation which is capable of being altered by the Community institutions within the limits of their discretionary power will be maintained’.176

2.6.1.2. Means of publication

60 The means of publication of the respective adequacy assessments are also not regulated by the AI Act.177 Unlike decisions of general applicability, which must be published in the Official Journal of the European Union,178 publication of Commission opinions is merely encouraged by Article 13(2) of Regulation (EC) No 1049/2001.179 Similarly, the AI Board’s Rules of Procedure require the publication of its meetings’ agenda, participants and minutes on the Commission’s Register of Expert Groups, but do not contain any rules on the publication of its acts.180 Nevertheless, the general principle of transparency would require that publication occur at least in a manner that ensures foreseeability for interested parties as to its place and means.181 With that in mind, the initial assessments of both the AI Board and the Commission were made available solely on the Commission’s website: the Commission’s opinion in the ‘Library’ section containing Commission documents182 and the AI Board’s conclusion under the AI Board’s section of the website.183 Both were also cross-linked from the general information page on the GPAI Code of Practice.184

2.6.1.3. Timing of the adequacy assessment

61 The timing of the initial assessments was limited by the deadline contained in Article 56(9), second subparagraph, of the AI Act, which permits the Commission to adopt common rules via an implementing act for the implementation of Articles 53 and 55 where a code of practice cannot be finalised by 2 August 2025 or is found inadequate. Looking ahead, if a subsequent adequacy assessment is negative, the temporal interaction between Article 56(6) and Article 56(8) and (9) remains unclear. The AI Act does not specify any period for amending a code of practice following such a later negative assessment before the Commission may proceed to adopt common rules by means of implementing acts, nor does it clarify whether the Commission may even adopt common rules if an existing code is not amended after such a negative adequacy assessment. Those questions are discussed in more detail in the analysis of Article 56(9) below.185

2.6.2. Article 56(6), second subparagraph

62 Article 56(6), second subparagraph, empowers the Commission to approve a code of practice by means of an implementing act and thereby to ‘give it a general validity within the Union’. The provision has prompted vastly divergent views as to its meaning and effects, and its appropriate interpretation is still subject to debate.186 The main uncertainties concern two general issues. First, it is unclear what purposes and objectives are served by conferring power on the Commission to adopt implementing acts in the context of centralised enforcement, given that implementing acts are ordinarily used to ensure uniform conditions of implementation across Member State national enforcement systems.187 The second major line of debate surrounds the meaning of granting ‘general validity’ through approval by implementing act,188 with some commentators arguing that approval by an implementing act is a necessary precondition for reliance on a code of practice,189 whereas others contend that a code of practice approved by an implementing act produces stronger legal effects than one merely assessed as adequate, transforming it from a voluntary instrument of reliance into a (mandatory) set of obligations for all providers.190

2.6.2.1. Power to adopt implementing acts in the context of centralised enforcement

63 The first issue that arises is the drafting decision to require an implementing act for approval, which is inconsistent with that instrument’s principal purpose and functions. The primary legal basis for the adoption of an implementing act by the Commission is contained in Article 291(2) TFEU. This provision envisages the conferral of implementing powers on the Commission where uniform conditions of implementation are required. The case law of the CJEU has clarified that, when so empowered, ‘the Commission is called on to provide further detail in relation to the content of the legislative act, in order to ensure that it is implemented under uniform conditions in all Member States’.191 This means that implementing acts are required where harmonised legislation is to be enforced by the Member States and additional Commission guidance is needed to secure substantively identical conditions across national enforcement mechanisms.192 By contrast, the harmonised rules governing GPAI model providers are enforced exclusively by the Commission.193 With no national enforcement envisaged, the rationale for conferring the power to adopt an implementing act remains unclear.194 Furthermore, implementing acts may contain any measure necessary for the implementation of a given piece of legislation, but they may not substantively amend or supplement the legislative provisions which they elucidate.195 Accordingly, adopting an implementing act that merely elaborates on the content of obligations, without the possibility of amendment, appears superfluous, particularly where the institution that drafts the implementing act is the same authority that will apply it.

64 An argument can be made that adopting an implementing act would enhance legal certainty. The consideration of legal certainty, however, cannot serve as a justification for departing from primary EU law, which clearly delineates the basis and purpose of implementing acts. Still, Article 291(2) TFEU refers to ensuring uniform conditions for implementation without expressly limiting that uniformity to differences among Member States.196 While practice and case law have so far read it this way, it is not excluded that this interpretation might evolve to also cover implementing acts that, in effect, set self-binding implementation measures on the Commission itself.197 Compared with issuing guidelines, which may also be self-binding on the Commission,198 adoption through an implementing act under the comitology examination procedure would afford Member States a formal voice and would help to ensure that national preferences are reflected notwithstanding centralised enforcement.199 It would also allow for judicial review of the implementing act itself, discussed further below, in contrast to the absence of paths to judicially challenge adequacy assessments under Article 56(6), first subparagraph.200

2.6.2.2. ‘General validity within the Union’ in other EU legislation

65 The second main issue that arises with Article 56(6), second subparagraph, is the stated legal effect of approval via an implementing act, since ‘general validity within the Union’ has no settled meaning in EU legislation.

66 The phrase appeared for the first and only time in a similar context in Article 40(9) of the General Data Protection Regulation (“GDPR”),201 which at the time of its adoption also raised significant uncertainty among commentators as to its precise meaning.202 Article 40(2) GDPR allows associations and other bodies representing categories of controllers or processors to draft codes of conduct specifying the application of the GDPR. After approval by the competent supervisory authority and an opinion from the European Data Protection Board (“EDPB”), the Commission may grant such a code general validity in the Union by approving it via an implementing act.203 In the context of the GDPR, this approach is consistent, given ‘most importantly the fact that its enforcement is, in the case of GDPR undertaken by independent national [data protection authorities]’.204 The EDPB’s Guidelines 04/2021 on Codes of Conduct as Tools for Transfers state that ‘only those codes having been granted general validity within the Union may be relied upon for framing transfers’.205 Furthermore, the distinction between national and transnational codes of conduct, depending on whether a code relates to personal data processing in one or multiple Member States,206 constitutes an important delineation under the GDPR and exemplifies the need for a centralised procedure thereunder to ensure uniform, harmonised conditions of application across national authorities.

67 By contrast, as explained above, supervision and enforcement of Chapter V of the AI Act are centralised with the Commission, so the GDPR-specific rationales for formal recognition by the Commission do not fit the codes of practice framework under the AI Act.207 Furthermore, whereas under the GDPR the Commission’s approval by an implementing act is its first formal assessment of a given code of conduct, under the AI Act the Commission already issues an adequacy assessment pursuant to Article 56(6), first subparagraph.208 A subsequent implementing act therefore has the appearance of an unnecessary duplication of approval procedures if ‘general validity’ under the AI Act is interpreted in the same jurisdiction-extending way as under the GDPR.209 A code of practice assessed as adequate under Article 56(6), first subparagraph, of the AI Act can already be relied upon by GPAI model providers, regardless of where in the Union they have placed their models.210 The DSA, for example, as discussed above, also includes an assessment procedure by the Commission for codes of conduct under Article 45(4) thereof, yet does not additionally confer the power to further approve positively assessed codes through an implementing act.211

2.6.2.3. Potential interpretations of granting ‘general validity within the Union’ through an implementing act

68 The foregoing issues have produced vastly divergent views on the purpose and effects of an implementing act under Article 56(6), second subparagraph. Two substantive interpretations of the provision can be distinguished: (i) one opinion posits that reliance on a code of practice to demonstrate compliance within the meaning of Articles 53(4) and 55(2) is conditional on its approval by the Commission through an implementing act; (ii) conversely, the Guidelines consider that a positive adequacy assessment alone is sufficient for reliance by signatories within the meaning of Articles 53(4) and 55(2), in line with which other commentators postulate that approval through an implementing act renders a code mandatory for all GPAI model providers.

2.6.2.3.1. Implementing act required to rely on a code of practice

69 The first interpretation proposes that, in effect, Article 56(6) contains a three-tiered hierarchy of codes of practice.212 The first category includes codes that have been assessed as adequate and also approved by an implementing act.213 Under this view, providers may rely upon codes of practice to demonstrate compliance with Articles 53 and 55 only if they have been approved by an implementing act.214 This view is supported by a grammatical reading of Articles 53(4) and 55(2), both of which specify that providers who do not ‘adhere to an approved code’215 need to demonstrate compliance by other adequate means. Within Article 56, the term ‘approve’ appears only in the second subparagraph of Article 56(6), which empowers the Commission to approve a code of practice by an implementing act.

70 The second category under this proposed hierarchy comprises codes of practice found adequate but not approved via an implementing act by the Commission, which, under that interpretation, can operate as industry standards but cannot, by themselves, suffice to demonstrate compliance and would still require providers to explain how the measures adopted on their basis meet the AI Act’s obligations.216 Under this interpretation, the practical value of a positive adequacy assessment would lie chiefly in the private law context,217 for example to support a provider’s argument that it has satisfied a sufficient duty of care if a claim for damages is lodged against it.

71 The third category concerns codes of practice that have either been deemed inadequate or have not yet been assessed, which would also have no public law effect and, under the proposed delineation, would at most reflect a private understanding among the drafting participants about how to fulfil their obligations.218 The difference under this interpretative approach between the second and third categories lies in the private law effects a code of practice may produce for third parties, that is, for non-signatories. For example, if a downstream system provider claims it has suffered damages due to the failure of the GPAI model provider to furnish it with the necessary documentation on the underlying model, the model provider would be able to rely on the fact that it provided the information considered necessary for transparency according to an adequate code of practice to show that it acted in accordance with industry standards (and thus met the requisite duty of care). By contrast, no such reliance against third parties would be available where the code has not been assessed as adequate.219

72 Notably, the AI Office, at least at one point, also contemplated interpreting Article 56(6), second subparagraph, to mean that approval by an implementing act is required as a precondition for reliance on a code of practice. This is exemplified by the explanatory webpage of Questions & Answers regarding the regulation of GPAI models under the AI Act, which, at the time of initial writing (August 2025), stated that ‘[if] approved via implementing act, the [GPAI] Code of Practice obtains general validity, meaning that adherence to the Code of Practice becomes a means to demonstrate compliance with the AI Act, while not providing a presumption of conformity with the AI Act’.220 This view has since been abandoned by the AI Office: the Commission’s Guidelines adopt a diametrically opposed position, stating unequivocally that providers may rely on codes of practice assessed as adequate, even when they have not been approved by an implementing act, to demonstrate compliance with the AI Act.221

73 The Guidelines can produce a self-binding effect on the Commission.222 In short, upon publication of Guidelines, the Commission limits its own interpretative discretion, ‘at the risk of being found to be in breach of general principles of law, such as equal treatment or the protection of legitimate expectations’.223 This means that the interpretation contained in the current Guidelines may be considered the de facto applicable interpretation. However, the authoritative interpretation of the AI Act rests with the CJEU,224 which may still interpret Article 56(6), second subparagraph, as requiring approval by a Commission implementing act as a precondition to reliance, as suggested by the three-tiered hierarchy interpretation presented above.

2.6.2.3.2. Implementing act creating enforceable standards for all providers

74 In light of the Commission’s explicit position in the Guidelines that GPAI model providers may rely on codes of practice under Articles 53(4) and 55(2) once those codes have been assessed as adequate, without the need for approval by an implementing act,225 it is necessary to consider the possible legal effects of such approval, especially if the Guidelines’ reading remains the authoritative one, that is, if the CJEU confirms the view that a code of practice assessed as adequate is binding on the Commission. An alternative interpretation to the three-tiered hierarchy outlined above, advanced by other authors and compatible with the current Guidelines, is that approval by an implementing act transforms a code from a voluntary instrument, binding only on signatories, into an ‘enforceable compliance standard’ for all GPAI model providers.226 Such a reading would mean that a code of practice approved by an implementing act would remove the possibility of reliance on alternative adequate means to show compliance.

75 This interpretation follows, in part, from a systematic reading of Article 56(6), second subparagraph, together with Article 56(9), second subparagraph.227 Such analogous treatment of the two provisions rests on the reasonable expectation that, since both envisage operationalisation through the adoption of implementing acts, they should produce similar legal effects. The Commission’s Guidelines indirectly support this view by addressing the two instruments together in the same guidance. Specifically, paragraph 99 of the Guidelines presents approval of a code of practice by an implementing act together with adoption of common rules by an implementing act in the absence of an adequate code.228 Yet the cited paragraph still limits itself to repeating that approval of an adopted code of practice would lead to ‘general validity within the Union’, whereas it explicitly states that common rules adopted by an implementing act under Article 56(9) would apply to all GPAI model providers (with or without systemic risk).229

76 It can be contended that approval by an implementing act does not necessarily withdraw the possibility for providers to rely on alternative adequate means to show compliance. Such a reading may be supported by reference to the fact that even in the presence of harmonised standards, which grant a presumption of conformity, providers remain free to use alternative measures, simply if they ‘choose’ not to rely on those standards.230 This might suggest that, absent an express provision to the contrary, an implementing act conferring ‘general validity’ on a code of practice would not in itself exclude recourse to alternative means of compliance. This interpretation would limit the effects of approval by an implementing act to extending the potential for reliance on the approved code of practice to non-signatories. However, it is questionable whether such a limited consequence can be said to amount to a legal effect at all, considering that non-signatories are in any event free to adopt and rely on measures contained in a code of practice assessed as adequate, in line with the principle of legitimate expectations.

77 Therefore, a strong argument can be made that the exclusion of alternative means of compliance would be precisely the effect of an implementing act under Article 56(6), second subparagraph, by analogy with Article 56(9), second subparagraph. Specifically, as expounded in detail below, the AI Act contains no references to the possibility of relying on alternative means in the presence of common rules under Article 56(9), whereas it repeatedly does so for codes of practice and harmonised standards.231 Furthermore, the Commission Guidelines’ interpretation that common rules adopted by an implementing act are applicable to all providers could be read as meaning that they exclude alternative adequate means of compliance.232 Such a reading is supported by consistent CJEU case law, which favours interpretations that preserve a provision’s legal effectiveness and avoid rendering it redundant.233

78 The applicability of alternative adequate means is especially pertinent considering that codes of practice, even when approved via an implementing act, do not grant a presumption of conformity but serve only as an instrument for facilitating the demonstration of compliance.234 The same issue arises with regard to the common rules that may be adopted by an implementing act under Article 56(9), second subparagraph.235 Moreover, even where harmonised standards exist and confer a presumption of conformity, providers may still rely on alternative means of compliance.236 Yet the binding provisions that expressly preserve the option of relying on alternative adequate means – Articles 53(4) and 55(2), read together with recital 117 – do not prescribe such an option with regard to the implementing acts that the Commission is empowered to adopt in relation to codes of practice, namely a code of practice approved by an implementing act under Article 56(6), second subparagraph, and common rules in lieu of a code of practice under Article 56(9), second subparagraph. Therefore, this silence in the legislation can be interpreted as meaning that reliance on alternative adequate means is permissible only in the cases explicitly enumerated in the AI Act – in the presence of codes of practice or harmonised standards – whereas, by converse implication, such alternative means would not be available when an implementing act has been adopted.
This interpretative resolution is in line with the general principle of legal interpretation ‘ubi lex voluit dixit, ubi noluit tacuit’ (where the law wanted to regulate the matter further, it did so; where it did not want to regulate the matter further, it remained silent).237 The CJEU has repeatedly applied this reasoning, stating that where the EU legislature has expressly included a particular situation within the scope of a provision, but has omitted to do so for another, this precludes the latter from being brought within scope by analogy.238 That said, while the Court has applied this principle on occasion, there is no firmly established line of case law, so it cannot be asserted with certainty that the CJEU would also apply it to the question of the availability of alternative means of compliance discussed here.239

2.6.2.3.3. Limitations on the scope of an implementing act adopted under Article 56(6), second subparagraph

79 Even if this ‘enforceable compliance standard’ interpretation is accepted,240 the limits on the effects of a potential implementing act approving a code of practice should be examined.

80 Some authors posit that a code of practice not approved by an implementing act may contain voluntary commitments exceeding the statutory obligations under Articles 53 and 55; however, once approved, those additional commitments would bind all providers.241 This interpretation specifically advances the idea that this would ensure that signatories are not put in a more unfavourable position than non-signatories.242

81 However, this reading cannot be supported in light of the limits on the effects of implementing acts, which, as discussed above, may not amend or supplement their underlying legislation.243 If a code of practice approved by an implementing act is deemed to impose obligations that go beyond Articles 53 and 55, the implementing act might be subject to (partial) annulment under Articles 263 and 264 TFEU.244 Unlike the non-binding adequacy assessments,245 implementing acts are subject to direct judicial review, and the Commission must act within the scope of the powers conferred on it.246 According to settled case law, the adoption of an implementing act that goes beyond the scope of the conferred power renders that act unlawful for lack of competence.247 Since Article 56(6), second subparagraph, does not, and arguably cannot,248 confer on the Commission the power to implement specific measures that go beyond the obligations of Articles 53 and 55, strictly interpreted, any measures deemed in excess can be declared void under Article 264 TFEU, at least to the extent of the excess.249 A plea of illegality under Article 277 TFEU would also be available if an implementing act is adopted, as it would represent an act of general application, in contrast to the adequacy assessment instruments.250

82 Thus, approval by an implementing act cannot lawfully achieve the purported objective of widening the scope of application of all commitments, including those that are entirely voluntary and do not proceed directly from Articles 53 and 55.

2.6.2.4. Commission proposal to remove power to adopt an implementing act under Article 56(6), second subparagraph

83 As noted above,251 as part of its wider Digital Omnibus Package aimed at simplifying certain measures of EU digital regulation, on 19 November 2025 the Commission published a proposal for amending the AI Act.252 Notably, the explanatory memorandum of the proposal indicates that Article 1(15) and (16) thereof ‘remove the Commission empowerments in Articles 50 and 56 AI Act to adopt implementing acts to give codes of practice for general purpose AI models and transparency obligations for certain AI systems general validity in the Union.’253 Furthermore, recital 23 of the proposed amending regulation states that the AI Act ‘should […] be amended to remove the empowerments conferred on the Commission in Article 50(7), Article 56(6), and Article 72(3) thereof to adopt implementing acts’.254 However, the operative provisions of the Commission’s proposal do not provide for such a withdrawal of power. Article 1(16) of the proposal, referred to in the explanatory memorandum, begins as follows: ‘in Article 56(6), the first subparagraph is replaced by the following’,255 and then proposes changes to the procedure for the adequacy assessment under Article 56(6) AI Act, discussed in Section 2.6.1. above.

84 Accordingly, the proposed Article 1(16)’s express limitation of the amendment to Article 56(6)’s first subparagraph, if adopted as drafted, cannot withdraw the power of the Commission to adopt an implementing act under Article 56(6), second subparagraph. No other operative provisions of the proposal can be reasonably interpreted as repealing Article 56(6), second subparagraph. Moreover, Article 1(15) of the proposal, which contains suggested amendments to Article 50(7) AI Act and is also referred to in the explanatory memorandum to the proposal, refers to the ‘procedure laid down in Article 56(6), first subparagraph’,256 which presupposes that a second subparagraph is retained. Therefore, if the intention of the Commission is indeed to withdraw the power to adopt an implementing act under Article 56(6), second subparagraph, the final amending text would need to be made significantly more precise.

85 That said, as highlighted above,257 considering the inherent uncertainties of the ordinary legislative procedure required to adopt a regulation amending the AI Act, the legal consequences of any change to Article 56(6), second subparagraph, will depend on the final wording agreed by the EU legislature.

2.7. Article 56(7): Adherence to the codes of practice

86 Under the AI Act, the AI Office ‘may invite all providers of general-purpose AI models to adhere to the codes of practice’.258 In addition, for providers of GPAI models that do not present systemic risks, ‘adherence may be limited to the obligations provided for in Article 53, unless they explicitly declare their interest to join the full code’.259

87 As noted in the introduction to this chapter, codes of practice, as a voluntary tool,260 are not legally binding. This is clear from the formulation in Article 53(4) and Article 55(2) of the AI Act, which state that providers of GPAI models may rely on codes of practice to demonstrate compliance with their corresponding obligations under those provisions until a harmonised standard is published. The codes of practice themselves do not impose obligations on providers; rather, they serve as one possible means of demonstrating compliance. They therefore fall under the category of soft law.261 Under the AI Act, providers remain free to rely on other adequate means to meet their obligations.262 The Commission may nevertheless ‘approve’ a code of practice by means of an implementing act pursuant to Article 56(6), second subparagraph, the possible legal implications of which are examined in detail in the previous section.263 Importantly, the Commission indicates in its Guidelines that an implementing act is not a prerequisite for providers to rely on a code of practice deemed adequate under Article 56(6).264 This section therefore considers more closely the effects of adhering to a code of practice that has been assessed as adequate but not approved by an implementing act. While codes of practice are not legally binding, they can create legal effects,265 that is, effects capable of ‘inducing certain behaviour and modifying normative reality’,266 as well as practical effects, both of which are examined in this section.

2.7.1. Effects of adherence to the codes of practice
2.7.1.1. No formal presumption of conformity

88 The legal implications of adherence to the codes of practice are not entirely clear, in particular whether adherence grants a presumption of conformity similar to the one enjoyed under harmonised standards.267 As mentioned above,268 harmonised standards are European standards adopted following a standardisation request by the Commission for the application of Union harmonisation legislation.269 Harmonised standards that are published in the Official Journal of the European Union create a presumption of conformity,270 which means that if a product (here, a model) complies with the technical specifications defined under the harmonised standard, it is presumed to comply with the relevant requirement under the AI Act.271 As the CJEU has ruled, a presumption of conformity provided for in EU secondary law ‘means that any natural or legal person who wishes effectively to challenge that presumption in respect of a given product or service must demonstrate that that product or service does not meet that standard or, alternatively, that that standard is not fit for purpose’.272 The presumption of conformity represents an important incentive for private actors to adhere to harmonised standards. Indeed, the presumption affects the burden of proof. It creates a legal fiction and ‘modifies the legal position of economic operators vis-à-vis national public authorities in the sense of shifting the burden of proof in administrative and judicial procedures on those public authorities.’273 It is, however, not an absolute presumption, and the actor responsible for market surveillance, here the AI Office, can provide evidence that the provider does not comply with the relevant standard.274

89 The text of the AI Act expressly confers a presumption of conformity only in relation to harmonised standards275 and common specifications.276 In particular, the AI Act does not create a presumption of conformity in the context of the codes of practice. The Guidelines confirm this by stating that ‘[a]s opposed to adherence to a code of practice, compliance with harmonised standards grants a presumption of conformity with the corresponding obligations under the AI Act’.277

90 It is not self-evident why the EU legislature chose not to confer a presumption of conformity for codes of practice. This possibly reflects the temporary nature of the codes of practice, which will lose their legal effects as soon as harmonised standards are published.278 Furthermore, it could serve as a way for the AI Act to emphasise the distinction between codes of practice and harmonised standards, which are adopted upon request by the Commission and involve a higher degree of involvement by the EU institutions.279 As aptly summarised in the literature, harmonised standards constitute a ‘piece of private regulation that is not only commissioned by a public regulator but whose contents it also assesses and monitors and which it officially publishes’.280 Another key distinction is the publication of harmonised standards in the Official Journal of the European Union. The CJEU went so far as to declare in the James Elliott Construction case that the harmonised standards at issue formed ‘part of EU law’.281 However, the next sections of the present analysis, which examine the legal effects of signing and adhering to codes of practice assessed as adequate under Article 56(6),282 demonstrate the fine line between the presumption of conformity under harmonised standards and the legal effects of such codes of practice. The present analysis is based on the interpretation of the Commission Guidelines that a code of practice positively assessed under Article 56(6), first subparagraph, may be relied upon by its signatories to demonstrate compliance. Other authors have put forward different interpretations, considering approval by an implementing act to be necessary for such reliance.283

2.7.1.2. Effects of adherence to the codes of practice

91The main objective of the codes of practice is to facilitate compliance284 and, according to the Commission, ‘give [providers] more legal certainty’.285 In its Guidelines, the Commission has indicated that providers of GPAI models will be able to rely on codes of practice which have been assessed as adequate under the procedure laid down in Article 56(6) of the AI Act to demonstrate compliance with their obligations under Articles 53(1) and 55(1) of the AI Act.286 In other words, compliance with such codes of practice will be considered by the Commission as sufficient for satisfying the relevant obligations. In particular, the Guidelines mention that for providers that sign an adequate code of practice, the Commission ‘will focus its enforcement activities on monitoring their adherence to the code of practice’.287 Moreover, the Guidelines posit that insofar as providers are ‘transparent about the measures they implement to comply with the AI Act’ they will ‘benefit from increased trust from the Commission and other stakeholders’,288 without specifying which particular stakeholders.289

92It is noted, however, that the Guidelines may be considered legally binding only on the Commission, meaning that the effects of signing and/or adhering to a code of practice described therein may be subject to binding reinterpretation by the CJEU. Indeed, it is settled case law that Guidelines and other soft-law instruments which describe how an EU institution will exercise its discretion are self-binding on that institution only.290 Under the principle of legitimate expectations, which is a corollary of legal certainty,291 an EU institution cannot depart from the content of such instruments without breaching that principle.292 Concretely, it follows that if a provider complies with a code of practice assessed as adequate, the Commission may not find that the provider fails to comply with its obligations under Articles 53(1) and 55(1) AI Act without risking a breach of the principle of legitimate expectations. In practice, this means that while the AI Act does not formally confer a presumption of conformity, signing and adhering to codes of practice that have been assessed as adequate under Article 56(6), first subparagraph, triggers effects de facto similar to those of that presumption.

93As seen above, under a presumption of conformity, a provider that complies with harmonised standards is assumed to comply with the obligations under the AI Act covered by those standards. Similarly, the codes of practice allow providers, according to the Commission Guidelines, to demonstrate compliance with their relevant obligations under the AI Act. Under the principle of legitimate expectations, the Commission cannot depart from its own Guidelines. Furthermore, while a presumption of conformity shifts the burden of proof in favour of the provider, the indication in the Commission Guidelines that providers will benefit from ‘increased trust’293 is arguably close to creating such an effect. This being said, the Commission Guidelines are carefully drafted and still leave room for some discretion, indicating that the Commission will ‘focus’ its enforcement activities on monitoring adherence to the codes of practice, not that it will restrict itself exclusively to such monitoring.294 The main difference remains that the legal effects described for (adequate) codes of practice are triggered by the principle of legitimate expectations as developed in the case law of the Court, not by the AI Act itself. Indeed, the conditions for invoking the principle of legitimate expectations are not clearly articulated in the case law and may vary depending on the specific context.295 This in turn creates less legal certainty for providers in comparison to a presumption of conformity under harmonised standards.

94Moreover, commitments implemented in line with codes of practice assessed as adequate may play a role as a potential mitigating factor when fixing the amount of fines under Article 101(1) AI Act.296 Indeed, commitments can be considered a sign of good faith on the part of providers. The AI Act specifically indicates that the Commission shall take ‘into account commitments […] made in relevant codes of practice in accordance with Article 56’.297 The recent Guidelines on the scope of the obligations for GPAI models adopted by the Commission are more nuanced, indicating that the Commission ‘may take into account commitments implemented in line with a code of practice that is assessed as adequate as a mitigating factor when fixing the amount of fines, depending on the specific circumstances’.298 This phrasing does not necessarily contradict the wording of Article 101(1) AI Act, which only obliges the Commission to ‘take them into account’, that is, to consider commitments. Whether commitments under codes of practice can affect the level of fines ultimately depends on the circumstances of each case and remains within the Commission’s discretion. As this effect only arises at the stage of setting fines, it presupposes that the provider has already been found to have infringed one or more obligations under the AI Act. The mitigating effect could then potentially operate at two levels: (i) the Commission may take into account the range of commitments adopted by the provider under its other obligations as a mitigating factor for the infringement or (ii) where, in respect of a specific infringement, the provider has sought to engage with the code of practice but has not fully implemented the commitment at issue, such efforts may nonetheless be regarded as an indication of good faith.299 However, it is difficult at the present time to envisage precisely how a signatory’s adherence could be used as a mitigating factor.
In particular, the first scenario described above could lead to fragmentation of the GPAI Code of Practice, whereby a provider could effectively ‘trade off’ one commitment against another in the Code. Indeed, if for example a provider were to breach a commitment in the Copyright Chapter of the GPAI Code of Practice, adherence to commitments under the Safety and Security Chapter would offer little principled basis for mitigating the breach, considering that the two corresponding obligations under the AI Act have very different purposes.300

95Another related question is whether a provider that has signed a code of practice can still demonstrate compliance through alternative means, and whether this has any legal implications. This situation could arise, for example, if the Commission were to assess that a provider who has signed a code of practice is not complying with the commitments and the provider were to rely on alternative means as a defence. The question is whether, in such a situation, signing a code of practice has a self-binding effect, in line with the maxim venire contra factum proprium non valet, according to which one may not contradict one’s own previous conduct.301 Since codes of practice are considered a voluntary instrument,302 it would be consistent to allow providers to demonstrate compliance by alternative means in such cases. However, if a provider fails to demonstrate compliance by alternative means, one could argue that the departure from a code of practice could be considered an aggravating factor in determining the amount of the fine. As indicated in the previous paragraph, Article 101(1) indeed states that, when fixing the amount of the fine, the Commission shall take into account commitments ‘made in relevant codes of practice in accordance with Article 56’. While this clearly indicates that commitments should be considered a mitigating factor, this provision could also be read to mean that commitments that are not respected can be considered an aggravating factor.

96The GPAI Code of Practice has attracted criticism. Some providers have argued that it introduces measures that go beyond the AI Act,303 criticising the Code as overly prescriptive.304 Other companies that have signed the Code have expressed similar criticism in relation to EU copyright law.305 The discussion, which will become particularly salient should the Commission approve a code by an implementing act, will ultimately hinge on the interpretation of the scope of the power conferred on the Commission and the delineation between ‘implementation’ of uniform conditions and the prohibition on ‘supplementation and amendment’.306

97While the previous paragraphs explored the effects of signing and/or adhering to an approved code of practice, the effects of non-adherence remain to be examined. What are the consequences for providers who do not adhere to an approved code of practice under Article 56(6)? Arguably, the most important consequence for providers that do not adhere to an approved code of practice concerns legal certainty. As seen above, by virtue of the principle of legitimate expectations, providers that have signed and adhered to the GPAI Code of Practice enjoy effects similar to a presumption of conformity. Providers that do not adhere to this Code will need to demonstrate compliance with ‘alternative adequate means of compliance for assessment by the Commission’.307 Non-adherence can therefore trigger legal uncertainty and a higher workload when demonstrating compliance. Moreover, the Guidelines also indicate that in such situations, providers could expect a larger number of requests for information and for access to conduct model evaluations.308 As the Guidelines underline, in such cases the ‘AI Office will have less of an understanding of how they are ensuring compliance with their obligations under the AI Act and will typically need more detailed information, including about modifications made to general-purpose AI models throughout their entire lifecycle’.309 In particular, providers in such cases ‘are expected to explain how the measures they implement ensure compliance with their obligations under the AI Act’.310 The Guidelines specify that this could entail, for example, carrying out a ‘gap analysis’,311 that is, comparing the measures implemented with the measures set out in a code of practice assessed as adequate.312 This demonstrates that the GPAI Code of Practice will be taken into consideration by the Commission as the reference point when analysing compliance.
This may be a compelling incentive to adhere: beyond the likely higher workload, providers run the risk that the Commission will consider the alternative measures insufficient to comply with their obligations under Article 53 or, where relevant, Article 55.

98However, the requirement to carry out a gap analysis could be opposed on two grounds. Firstly, conducting a gap analysis would place an additional burden on providers that have not signed a code of practice, which contradicts the voluntary nature of such codes. In other words, it would amount to sanctioning non-adherence to codes of practice by imposing additional costs on non-signatories to demonstrate compliance. Secondly, and relatedly, this concern is all the more acute should a code of practice go beyond the obligations set out in the AI Act – as some AI providers have claimed is the case for the current GPAI Code of Practice.313 It should be clarified that the codes of practice, which are drafted by stakeholders, may contain voluntary commitments that go beyond the AI Act. This reflects the function of codes of practice not only as a tool for providers to demonstrate compliance with the AI Act but also as a means to promote best practice in the field314 and advance the state of the art.315 Nevertheless, in the event that a code of practice comprises voluntary commitments that extend beyond the regulatory framework, a distinction should be drawn, for the purpose of enforcement, between such voluntary commitments and those that implement the AI Act.316

2.7.1.3. Selective adherence to the codes of practice

99Another question that deserves further consideration concerns the case of selective adherence to codes of practice, or so-called ‘opt-out’.317 This scenario relates to a situation in which a provider has signed a code of practice but has expressed a reservation. One illustration is the company xAI, which has declared its intention to adhere only to the Safety and Security Chapter of the GPAI Code of Practice, and not to the two other chapters on transparency and copyright.318

100On the Commission website which includes a list of the signatories, it is indicated that ‘In addition, xAI signed up to the Safety and Security Chapter; this means that it will have to demonstrate compliance with the AI Act’s obligations concerning transparency and copyright via alternative adequate means’.319 This suggests that a selective adherence to a given chapter is indeed allowed by the Commission and raises the question of the consequences of opt-outs. According to the Guidelines on the scope of the obligations for GPAI models, any ‘opt-out from chapters of the code of practice results in losing the benefits of facilitating the demonstration of compliance in that respect’.320 The terms ‘in that respect’ suggest that the benefits of adhering to a given chapter of the GPAI Code of Practice (where the provider opts in) still apply. The provider ‘loses’ (or more accurately, does not benefit from) the ‘increased trust’ from the Commission and other stakeholders with regard to the chapters it opted out from. Such a reading encourages voluntary adherence to the GPAI Code of Practice and would be consistent with the nature of codes of practice as a flexible instrument. It must be noted, however, that this approach does not appear workable where a provider wishes to sign and/or adhere only to parts of a given chapter of the GPAI Code of Practice, considering that the commitments within each chapter, taken in the aggregate, represent a unified approach to compliance with the relevant provisions for which that chapter was adopted.321 A remaining question is whether an opt-out has implications for the other effects we envisaged above, such as the mitigating factor when fixing fines, beyond the question of facilitating compliance. Although this ultimately falls under the discretion of the Commission, it seems rather unlikely that a provider could rely on commitments made in relation to other parts of the GPAI Code of Practice.

2.7.2. Relation between the codes of practice and harmonised standards

101The codes of practice are intended to be a temporary instrument in the AI Act. Although Article 56 does not clearly set out the relation between the codes of practice and harmonised standards, both Articles 53(4) and 55(2) indicate that providers may rely on the codes of practice ‘until a harmonised standard is published’. This is also repeated in the Guidelines.322 Moreover, the position of harmonised standards as the preferred means to promote compliance under the AI Act is also evident from Article 41 concerning common specifications. According to Article 41(4), should a harmonised standard be published in the Official Journal of the European Union, the Commission shall repeal the relevant implementing acts adopting common specifications.

102It follows that once harmonised standards are adopted and published, any adopted codes of practice will ‘lose’ the effects described above, such as the benefit of ‘increased trust’ from the Commission or the mitigating factor for fines. Providers could then still rely on the codes of practice as alternative means to demonstrate compliance, but they will have to demonstrate how these correspond to the relevant obligations under the AI Act. However, once harmonised standards have been published, the Commission and the AI Board could be expected to declare the codes of practice no longer adequate for demonstrating compliance with the obligations specified in the AI Act. This is supported by a reading by analogy to Article 41(4), which requires the Commission to repeal common specifications once harmonised standards have been published. It is unclear whether codes of practice automatically lose their effect once a harmonised standard has been published or only once the Commission and the Board have declared them no longer adequate. The wording of Articles 53(4) and 55(2), which states that providers may rely on codes of practice ‘until a harmonised standard is published’, suggests that the codes of practice automatically lose their effect upon the publication of harmonised standards in the Official Journal of the European Union.

2.8. Article 56(8): Review and adaptation of codes of practice

103Article 56(8) indicates that the AI Office ‘shall, as appropriate, also encourage and facilitate the review and adaptation of the codes of practice, in particular in light of emerging standards’ and that it should assist in the assessment of available standards. As pointed out in the literature, the review and adaptation of the codes of practice is necessary to ensure that the latter reflect the state of the art.323 This provision serves as another reminder of the considerable challenges associated with the regulation of a rapidly evolving technology, and the (still) limited knowledge and research that has been accumulated in this field to date.324 In addition to advances in the technology, changes in the risk landscape or emerging standards may necessitate a review and adaptation of the code of practice.325 Despite considerations of ensuring certainty for providers, authors have suggested that no technological solutions should be entrenched with exclusivity in the long term.326 The Commission, as part of the ‘Questions and answers on the code of practice for General-Purpose AI’, explains that:

The Chairs and Vice-Chairs have written the Code to be as future-proof as possible. However, the rapid evolution of AI technology poses a challenge. Even with the future-proof design, the Code will still require periodic updates. The AI Office may facilitate formal updates to the Code in response to technological developments, changes in the risk landscape, or experience with the application of the AI Act rules.327

104Under Article 56(6), the AI Office and the Board are tasked with regularly monitoring and evaluating ‘the achievement of the objectives of the codes of practice by the participants and their contribution to the proper application of [the AI Act]’. As argued above, the term ‘regularly’ denotes a continuous process.328 As a result of the monitoring and evaluation process, the AI Office and the Board may assess the adopted GPAI Code of Practice as no longer adequate to achieve its objectives under the AI Act. Nevertheless, there is no requirement that a code of practice be assessed as inadequate for the possibility of its formal review and adaptation to be activated. The wording of Article 56(8) indicates that it is for the AI Office to decide when it is ‘appropriate’ to encourage and facilitate the review and adaptation of codes of practice. The AI Board has indicated that it may encourage the Commission to do so and propose updates to the GPAI Code of Practice.329 As stated above, the Commission has more concretely expressed that it intends the GPAI Code of Practice to be reviewed ‘at least every two years’.330

105There is an apparent trade-off between the flexibility of the codes of practice and their ‘inherently adaptable nature’,331 on the one hand, and the principles of legal certainty and legitimate expectations, on the other. A code assessed as adequate can subsequently be considered inadequate, which has the potential to increase uncertainty among providers and possibly discourage them from relying primarily on codes that may at any time lose their positive adherence effects. The established case law on this point, however, is unambiguous: economic operators cannot derive a legitimate expectation that an existing situation which is capable of being altered by the EU institutions in the exercise of their discretionary power will be maintained.332

106The possibility of reviewing and adapting co-regulatory codes is not unprecedented. For example, Article 45(4), second subparagraph, DSA uses equivalent wording regarding the codes of conduct that may be adopted thereunder: ‘The Commission and the Board shall also encourage and facilitate regular review and adaptation of the codes of conduct’.333 However, neither the DSA nor the AI Act clarifies the exact procedure for adaptation nor prescribes how the legal effects of an adapted code are operationalised.

107A key question that arises is whether signatories to an adopted code of practice will be automatically bound by any revised version of that code of practice under Article 56(8). The adopted GPAI Code of Practice contains no conditions that would bind signatories to a version other than the one they have already endorsed. Thus, each subsequently adopted iteration of a code of practice would require providers to sign anew. Therefore, to avoid parallel reliance regimes, any currently applicable code will need to be assessed as inadequate upon adoption of a revised code. Otherwise, multiple versions of a positively assessed code would coexist, each capable of being relied upon to demonstrate compliance.334 This logic is indirectly reflected in the Commission’s initial adequacy assessment of the GPAI Code of Practice, which notes that a positive adequacy finding does not preclude later reversal ‘following the Commission’s review and adaptation of the Code pursuant to Article 56(8) of the AI Act’.335

108Conversely, issuing an inadequacy assessment before a replacement code of practice is adopted would create an interim gap during which providers could not rely on centrally developed measures and would need to demonstrate compliance by alternative adequate means under the AI Act.336 While such an approach does not prima facie contradict Article 56, and it could even be argued that the AI Board and the AI Office, in effect, have a duty to issue an inadequacy assessment once the regular monitoring reveals material deficiencies, it would be undesirable from a practical perspective because it would create a regulatory compliance gap.

109While the current structure of the GPAI Code of Practice requires signing on to any new adapted version, an alternative approach can be foreseen by analogy to standard terms in private law contracts, under which the parties agree that one of them may unilaterally amend terms that bind the other unless it expressly objects. There do not seem to be any barriers in the AI Act that would preclude such a condition from being included in future iterations of codes of practice. Whether a unilateral adaptation clause would be desirable is uncertain, and it is arguable that such a solution may deter some providers from acceding to a code of practice that contains it.

110As with the initial drawing up of codes of practice under Article 56(1),337 the AI Act does not lay down a precise procedure for their review and adaptation in Article 56(8). The wording of Article 56(8) mirrors that of Article 56(1) in providing only that the AI Office may ‘encourage and facilitate’ the process. As outlined in the discussion of Article 56(3) above, stakeholder participation is, in principle, also available under the adaptation procedure of Article 56(8), but the AI Office again retains broad discretion as to whether and how to facilitate it.338 From a practical perspective, and considering that any subsequent iteration of a code of practice will require providers to sign on to the new iteration, involving providers in the drafting process is as desirable here as during the initial drafting of the GPAI Code of Practice.339

2.9. Article 56(9): Initial deadline for development of a code of practice and adoption of common rules by an implementing act

111Article 56(9) comprises two subparagraphs that address distinct matters and serve different functions.

2.9.1. Article 56(9), first subparagraph: initial deadline for development of a code of practice

112The first subparagraph requires that codes of practice be finalised ‘at the latest by 2 May 2025’ and charges the AI Office with taking the necessary steps to achieve this. The deadline contained therein appears to be purely indicative, as no legal consequences are attached to its non-fulfilment.340 By contrast, the second subparagraph of Article 56(9) explicitly establishes such consequences with regard to non-fulfilment by 2 August 2025.341 Therefore, the first subparagraph operates as a soft target for the AI Office rather than a binding legal term.342 In practice, the GPAI Code of Practice was not adopted within that initial deadline and was only published on 10 July 2025,343 with the AI Board and the Commission issuing their initial adequacy assessments on 1 August 2025.344

2.9.2. Article 56(9), second subparagraph: adoption of common rules by an implementing act

113The second subparagraph provides that, if a code of practice is not finalised by 2 August 2025, or if the AI Office finds it inadequate following its assessment, the Commission may adopt an implementing act laying down common rules for implementing the obligations in Articles 53 and 55, including in particular the matters listed in Article 56(2). A few issues immediately arise with regard to that subparagraph.

2.9.2.1. Pre-conditions and procedure for adopting common rules

114As a starting point, regard must be had to the chosen phrasing for the conferral of implementing powers, which is ambiguous as to (i) whether the Commission may adopt an implementing act only if no code of practice has been adopted, or a code is found inadequate, by 2 August 2025, or (ii) whether that date applies only to non-adoption, so that the option to establish common rules remains available whenever the AI Office issues a negative adequacy assessment. The ambiguity stems from the placement of the temporal qualifier ‘by 2 August 2025’ at the beginning of the sentence, before both alternative conditions,345 rather than as a postposed modifier to the first condition, that is, only after the condition regarding the absence of a finalised code of practice.346 This drafting uncertainty appears across language versions.347 The Commission could have clarified its preferred interpretation in the Guidelines; however, it has chosen only to describe the adoption of common rules where no code is available by 2 August 2025, omitting any discussion of the scope of the second alternative option provided for in the second subparagraph of Article 56(9).348

115The prevailing view in the literature seems to favour the second reading, whereby the Commission may adopt common rules at any point after a negative adequacy assessment is published by the AI Office.349 This reading promotes consistency across the AI Act, which treats codes of practice – or common rules in lieu of such codes – as interim mechanisms to facilitate compliance until harmonised standards are in place.350 Considering that the goal of both instruments is to improve legal certainty for providers by identifying the minimum measures required to meet the obligations, a regulatory gap may arise where a code of practice is deemed inadequate, no agreement can be reached within the co-regulatory process, and harmonised standards are not yet available, leaving no clear consensus on adequate compliance measures.

116It should also be noted that Article 56(9) treats a potential negative adequacy assessment by the AI Office alone – regardless of the AI Board’s position – as grounds for conferring the power to adopt an implementing act. Given that Article 56(6) assigns equal, cumulative weight to the assessments of the AI Board and the AI Office,351 it is difficult to explain why only the AI Office’s assessment is relied upon. A textual interpretation would suggest that a negative assessment by the AI Board alone, which would in principle preclude reliance on a code of practice as evidence of compliance,352 would not activate the Commission’s power to adopt an implementing act, thereby again risking a regulatory gap. Although, in practice, the AI Office and the AI Board cooperate closely and significant divergences in opinion are unlikely, the drafting choice remains significant. A possible explanation lies in the context of centralised enforcement – giving weight to assessments from national authorities would be logical if they were the ones with a direct view of a code’s enforcement effectiveness; however, the AI Board members are informed only indirectly, through the AI Office.353 This also raises the question of why an assessment by the AI Board was required in the first place absent national enforcement; those political drafting choices, however, are outside the scope of the present commentary. With that said, the choice of the examination procedure for the adoption of implementing acts under the provision still affords significant power to Member States in shaping potential common rules.354

117If we follow the interpretation that common rules may be adopted at any time when a negative adequacy assessment is issued by the AI Office, a further issue arises of how that power would interact with the review and adaptation of the codes of practice under Article 56(8).355 A systematic reading of Article 56(6), (8) and (9) in light of the purpose and goals of the AI Act reveals a potential practical interpretive framework. If, during its regular monitoring and evaluation activities, the AI Office deems that a code is likely to become inadequate, it may begin the process of adaptation and review before issuing an inadequacy decision on a previous version. During this procedure, the AI Office would be able to form a view of whether the process of adaptation could effectively resolve the identified issues, as well as whether an adapted code would be acceptable to providers. Only if this process fails would an inadequacy decision by the AI Office be published, to be followed by the development of common rules in the form of an implementing act. This approach would fit with the treatment of a potential negative assessment by the AI Office as the sole basis for conferring power on the Commission under Article 56(9), second subparagraph. The AI Office could defer a negative assessment to maximise incentives for providers to participate in the adaptation process of a code of practice, while allowing the Commission to retain the option, should the issuance of a negative assessment prove necessary, to resort to the adoption of common rules by an implementing act. This builds on the idea that the ‘discretionary power to adopt or withhold common rules represents yet another significant level of influence for the Commission, granting it substantial control over the compliance trajectory’.356

118Arguably, the most crucial question that must be examined in relation to the second subparagraph of Article 56(9) is that of the scope and legal effects of adopting implementing acts laying down common rules.

119First, as with the second subparagraph of Article 56(6),357 the choice of an implementing act as the vehicle for adopting common rules is difficult to reconcile with the general logic of such instruments under Article 291(2) TFEU, which is aimed at ensuring the application of uniform conditions across national implementations,358 whereas enforcement of the GPAI model provider rules lies exclusively with the Commission.359 This is supplemented by the limitation that implementing acts cannot amend or supplement the underlying legislation,360 which calls into question the practical purpose and effect of conferring implementing power in either case. The general considerations on the conferral of power to adopt implementing acts in the context of the rules on GPAI model providers are discussed in greater detail in the section above on Article 56(6) and need not be repeated here.361

120 While the parallels between the implementing acts envisaged in Articles 56(6) and 56(9) are instructive, differences must also be noted. Article 56(9), second subparagraph, does not employ the ambiguous notion of ‘general validity’ but instead refers to the adoption of ‘common rules’. The latter concept is better established in Union law, with numerous legislative acts empowering the Commission to lay down common rules by an implementing act, typically of a technical or procedural nature, to secure uniform application by national authorities.362 Set against the centralised enforcement under Chapter V of the AI Act, however, ‘common’ in the case of Article 56(9), second subparagraph, might be best understood not as common across Member States, but as common specifications on the content of the obligations in the AI Act that are applicable to all GPAI model providers where no code of practice is applicable.363

121 Such an interpretation is favoured by the Commission under the current Guidelines, which state that common rules adopted by an implementing act would be interpreted by the AI Office as applicable to all GPAI model providers (with or without systemic risk).364

122 There are two possible interpretations of the legal effects of this guidance. First, on a more literal reading, this might mean simply that, unlike codes of practice, which may be relied upon only by their signatories,365 common rules may be relied upon by any provider. On this interpretation, common rules adopted by an implementing act would produce, with respect to all GPAI model providers, an effect comparable to that which a code of practice assessed as adequate has for its signatories.366 In other words, GPAI model providers may rely on the common rules to demonstrate compliance, but they remain free not to do so and to rely on alternative adequate means instead.367

123 However, this general applicability may instead be interpreted more broadly, as requiring all providers to rely on the common rules where such rules are adopted.368 On this second view, common rules adopted by an implementing act preclude reliance on alternative adequate means. This exclusion aligns with the second possible interpretation of an implementing act under Article 56(6), second subparagraph, as an ‘enforceable compliance standard’. As set out in detail in the discussion of Article 56(6),369 this reading rests on the logic that Articles 53(4) and 55(2), read together with recital 117, expressly permit providers to rely on alternative adequate means only in relation to codes of practice and harmonised standards, but not where an implementing act has been adopted within the meaning of either Article 56(6), second subparagraph, or Article 56(9), second subparagraph. The conclusion that the absence of any explicit mention of implementing acts in the provisions governing alternative adequate means of compliance excludes implementing acts from their scope is grounded in interpretative principles applied by the CJEU. These principles are explained in detail in the relevant section on the implementing acts envisioned in Article 56(6), second subparagraph, and need not be repeated here.370

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L 1689/1 (“AI Act”) recital 117. ↩︎
  2. Annex to the Communication to the Commission – Approval of the content of the draft Communication from the Commission – Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act) C(2025) 5045 final para 100; this temporary character is also made clear by the reference in articles 53(4) and 55(2) AI Act to ‘until a harmonised standard is published’. ↩︎
  3. See AI Act, art 40. ↩︎
  4. For more details on the substance of those obligations refer to the commentaries on Article 53 and Article 55 in this work. ↩︎
  5. Under article 50(4) AI Act, deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated. Moreover, recital 136 indicates that: ‘The obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065 [Digital Services Act]’. ↩︎
  6. AI Act, art 50(7). ↩︎
  7. European Commission, ‘Commission launches consultation to develop guidelines and Code of Practice on transparent AI systems’ (4 September 2025) <https://digital-strategy.ec.europa.eu/en/news/commission-launches-consultation-develop-guidelines-and-code-practice-transparent-ai-systems> accessed 15 September 2025. ↩︎
  8. European Commission, ‘Guidelines and Code of Practice on transparent AI systems’ (Q&A, last updated 4 September 2025) <https://digital-strategy.ec.europa.eu/en/faqs/guidelines-and-code-practice-transparent-ai-system> accessed 15 September 2025. ↩︎
  9. In particular, AI Act, art 56(6) states: ‘The AI Office and the Board shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55, and shall regularly monitor and evaluate the achievement of their objectives.’ ↩︎
  10. While article 50(7) only refers specifically to article 56(6), it is plausible that the practice of the drafting of the GPAI Code of Practice will inform the process under article 50(7). For example, articles 56(1) and (3) could apply by analogy to the drafting process under article 50(7). ↩︎
  11. Also, consider that the focus of the present commentary is on rules that relate to GPAI models and GPAI models with systemic risk. ↩︎
  12. AI Act, art 113 point (b). ↩︎
  13. See commentaries on Articles 53 and 55 in this work. ↩︎
  14. Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council Text with EEA relevance OJ L 316/1, art 2(1)(c). See also AI Act, art 3(27). ↩︎
  15. Cynthia Kroet, ‘EU Standards Bodies Flag Delays to Work on AI Act’ Euronews (16 April 2025) <https://www.euronews.com/next/2025/04/16/eu-standards-bodies-flag-delays-to-work-on-ai-act> accessed 15 September 2025. ↩︎
  16. See Section 2.7.1.1. ↩︎
  17. See Regulation (EU) No 1025/2012 (n 14) art 11. ↩︎
  18. Paul Verbruggen, ‘Does Co-Regulation Strengthen EU Legitimacy?’ (2009) 15 European Law Journal 425. ↩︎
  19. European Commission ‘AI Act: Participate in the drawing-up of the first General-Purpose AI Code of Practice’ (30 July 2024) <https://digital-strategy.ec.europa.eu/en/news/ai-act-participate-drawing-first-general-purpose-ai-code-practice> accessed 14 September 2025. ↩︎
  20. European Commission, ‘Questions and answers on the code of practice for General-Purpose AI’ (last update 11 July 2025) <https://digital-strategy.ec.europa.eu/en/faqs/questions-and-answers-code-practice-general-purpose-ai> accessed 14 September 2025; European Commission, ‘AI Act: Participate in the drawing-up of the first General-Purpose AI Code of Practice’ (n 19). ↩︎
  21. European Commission, ‘The General-Purpose AI Code of Practice’ (2025) <https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai> accessed 22 September 2025; when referring to the adopted Code of Practice cited here, the present analysis uses the capitalised term ‘Code of Practice’, whereas when discussing codes of practice under article 56 more generally, ‘code(s) of practice’ is used. ↩︎
  22. European Commission, ‘Conclusion of the Artificial Intelligence Board on the Assessment of the General-Purpose AI Code of Practice pursuant to Article 56 of Regulation 2024/1689 (Artificial Intelligence Act)’ (“AI Board Adequacy Conclusion”) (1 August 2025) <https://ec.europa.eu/newsroom/dae/redirection/document/118687> accessed 22 September 2025. ↩︎
  23. Commission Opinion of 1 August 2025 on the assessment of the General-Purpose AI Code of Practice within the meaning of Article 56 of Regulation (EU) 2024/1689 C(2025) 5361 final (“Commission Adequacy Opinion”). ↩︎
  24. Clemens Bernsteiner and Thomas Rainer Schmitt, ‘Praxisleitfäden’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (C.H. Beck 2025) para 8; see also, Sections 2.2. and 2.4. on the role of the AI Office in practice to ensure that codes of practice cover specific subject matters and objectives. ↩︎
  25. European Commission, ‘AI Act: Participate in the drawing-up of the first General-Purpose AI Code of Practice’ (n 19). ↩︎
  26. ibid. ↩︎
  27. ibid. ↩︎
  28. The list is available at European Commission, ‘Meet the Chairs leading the development of the first General-Purpose AI Code of Practice’ (30 September 2024) <https://digital-strategy.ec.europa.eu/en/news/meet-chairs-leading-development-first-general-purpose-ai-code-practice> accessed 14 September 2025. ↩︎
  29. European Commission, ‘AI Act: Have Your Say on Trustworthy General-Purpose AI’ (30 July 2024) <https://digital-strategy.ec.europa.eu/en/consultations/ai-act-have-your-say-trustworthy-general-purpose-ai> accessed 14 September 2025. ↩︎
  30. European Commission, ‘AI Office received strong interest for participation in drafting the first General-Purpose AI Code of Practice’ (6 September 2024) <https://digital-strategy.ec.europa.eu/en/news/ai-office-received-strong-interest-participation-drafting-first-general-purpose-ai-code-practice> accessed 14 September 2025. ↩︎
  31. AI Act, art 56(2). ↩︎
  32. AI Act, art 56(2)(a). ↩︎
  33. AI Act, art 56(2)(b). ↩︎
  34. AI Act, art 56(2)(c). See also European Commission, ‘Code of Practice for General-Purpose AI Models – Safety and Security Chapter’ (2025) <https://ec.europa.eu/newsroom/dae/redirection/document/118119> accessed 19 September 2025 (“Code of Practice Safety and Security Chapter”), recital (c). ↩︎
  35. AI Act, art 56(2)(d). ↩︎
  36. AI Act, art 56(6). ↩︎
  37. See Section 2.7. ↩︎
  38. European Commission ‘General-Purpose AI Code of Practice now available’ (10 July 2025) <https://digital-strategy.ec.europa.eu/en/news/general-purpose-ai-code-practice-now-available> accessed 10 September 2025. ↩︎
  39. European Commission ‘GPAI Code of Practice’ (n 21). ↩︎
  40. Commission Adequacy Opinion (n 23). ↩︎
  41. AI Board Adequacy Conclusion (n 22). ↩︎
  42. AI Board Adequacy Conclusion (n 22) 1; Commission Adequacy Opinion (n 23) para 3. ↩︎
  43. For more details, see Section 2.6.1. ↩︎
  44. For more details, see Section 2.8. ↩︎
  45. AI Act, art 56(3). ↩︎
  46. Adrian Schneider, ‘Artikel 56 Praxisleitfäden’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (3rd edn, C.H. Beck 2025) para 11. ↩︎
  47. ibid. ↩︎
  48. It should also be kept in mind that according to the established case law, recitals are non-binding and may be relied upon to explain the ‘content’ of a given legislative measure, but may not by themselves serve as ‘ground for derogating from the actual provisions of the measure in question’ (see, in particular, Case C-344/04 International Air Transport Association and European Low Fares Airline Association v Department for Transport [2006] ECR I-00403 para 76 and case law cited therein), nor can they constitute a legal rule as such (see, Case 215/88 Casa Fleischhandels-GmbH v Bundesanstalt für landwirtschaftliche Marktordnung [1989] ECR 1989-02789 para 31). ↩︎
  49. Bernsteiner and Schmitt (n 24) para 9. ↩︎
  50. European Commission, ‘Commission staff working document – Better Regulation Guidelines’ SWD(2021) 305 final, 9. ↩︎
  51. See Section 2.8. ↩︎
  52. European Commission ‘AI Act: Participate in the drawing-up of the first GPAI Code of Practice’ (n 19). ↩︎
  53. European Commission, ‘Call for expression of interest to participate in the drawing-up of the first general-purpose AI code of practice’ (30 July 2024) <https://digital-strategy.ec.europa.eu/en/news/ai-act-participate-drawing-first-general-purpose-ai-code-practice> accessed 14 September 2025, 5. ↩︎
  54. ibid 8; see European Commission, ‘Second Draft General-Purpose AI Code of Practice’ (19 December 2024) <https://digital-strategy.ec.europa.eu/en/library/second-draft-general-purpose-ai-code-practice-published-written-independent-experts> accessed 18 September 2025 and European Commission, ‘1 – Commitments – Third Draft General-Purpose AI Code of Practice’ (11 March 2025) <https://digital-strategy.ec.europa.eu/en/library/third-draft-general-purpose-ai-code-practice-published-written-independent-experts> accessed 18 September 2025, according to which it appears that the names of the working groups were altered in the process of drafting to:
    ‘Working Group 1: Transparency and copyright-related rules;
    Working Group 2: Risk assessment for systemic risk;
    Working Group 3: Technical risk mitigation for systemic risk;
    Working Group 4: Governance risk mitigation for systemic risk.’ ↩︎
  55. ibid; cf with discussion on the rules applicable to the scientific panel of independent experts in forthcoming commentary on Article 68. ↩︎
  56. European Commission, ‘Call for expression of interest’ (n 53) 13–14; cf selection criteria for the scientific panel of independent experts in forthcoming commentary on Article 68, Section 2.3. ↩︎
  57. European Commission, ‘Call for expression of interest’ (n 53) 13: ‘[t]he criteria that will be considered for the selection include: […]’, where the use of the word ‘include’ indicates that the listed criteria are taken into account but do not exhaust the relevant considerations of the AI Office. ↩︎
  58. ibid 14. ↩︎
  59. ibid. ↩︎
  60. ibid. ↩︎
  61. ibid 5. ↩︎
  62. ibid 10–11. ↩︎
  63. ibid 6: the call for expression of interest specifies that ‘[a]s main addressees of the Code following Article 56, providers of general-purpose AI models will be invited to dedicated workshops with the Chairs and, as appropriate, Vice-Chairs to contribute to informing each iterative drafting round, in addition to their Plenary participation’, and that ‘[t]he work across all Working Groups can be accompanied by dedicated workshops on specific topics where a better understanding on specific items should be reached’ (emphasis added); the wording used expressly distinguishes dedicated workshops for providers of general-purpose AI models, whereas it envisions the organisation of workshops with other stakeholders only as a discretionary possibility. ↩︎
  64. ibid 11. ↩︎
  65. ibid 12. ↩︎
  66. ibid. ↩︎
  67. ibid. ↩︎
  68. ibid. ↩︎
  69. ibid 13. ↩︎
  70. European Commission, ‘Industry, academia and civil society contribute to the work on Code of practice for general-purpose artificial intelligence’ (24 September 2025) <https://digital-strategy.ec.europa.eu/en/news/industry-academia-and-civil-society-contribute-work-code-practice-general-purpose-artificial> accessed 18 September 2025; European Commission, ‘The kick-off Plenary for the General-Purpose AI Code of Practice took place online’ (30 September 2024) <https://digital-strategy.ec.europa.eu/en/news/kick-plenary-general-purpose-ai-code-practice-took-place-online> accessed 18 September 2025. ↩︎
  71. Commission Adequacy Opinion (n 23) para 5. ↩︎
  72. AI Board Adequacy Conclusion (n 22) 9. ↩︎
  73. Martin Ebers, ‘When Guidance Becomes Overreach: How the Forthcoming Code of Practice Threatens to Undermine the EU’s AI Act’ (Verfassungsblog, 8 April 2025) <https://verfassungsblog.de/when-guidance-becomes-overreach-gpai-codeofpractice-aiact/> accessed 18 September 2025; Corporate Europe Observatory, ‘Coded for Privileged Access: How Big Tech Weakens Rules on Advanced AI’ (30 April 2025) <https://corporateeurope.org/en/2025/04/coded-privileged-access> accessed 18 September 2025. ↩︎
  74. See European Commission, ‘Multi-stakeholder Consultation’ available at European Commission, ‘AI Act: Have Your Say on Trustworthy General-Purpose AI’ (n 29). ↩︎
  75. ibid 3. ↩︎
  76. ibid; also stated in European Commission, ‘Call for expression of interest’ (n 53) 5. ↩︎
  77. ibid 4. ↩︎
  78. European Commission, ‘Industry, academia and civil society contribute to the work on Code of practice for general-purpose artificial intelligence’ (n 70); European Commission, ‘The kick-off Plenary for the General-Purpose AI Code of Practice took place online’ (n 70). ↩︎
  79. Jacob Wulff Wold, ‘Commission Discloses Disagreements Between General-Purpose AI Providers and Other Stakeholders’ (Euractiv, 1 October 2024) <https://www.euractiv.com/section/tech/news/commission-discloses-disagreements-between-general-purpose-ai-providers-and-other-stakeholders/> accessed 18 September 2025. ↩︎
  80. See Section 2.8. ↩︎
  81. TEU, art 10(3): Consolidated versions of the Treaty on European Union and the Treaty on the Functioning of the European Union [2007] OJ C 202/1 (“TEU”, “TFEU”). ↩︎
  82. TEU, art 11(2). ↩︎
  83. TFEU, art 298(1). ↩︎
  84. TFEU, arts 15(1) and (3). ↩︎
  85. Established as a general principle of Union law; see, in particular, Herwig C.H. Hofmann, ‘General Principles of EU Law and EU Administrative Law’ in Catherine Barnard and Steve Peers (eds), European Union Law (3rd edn, Oxford University Press 2020), 225. ↩︎
  86. AI Act, art 56(4). ↩︎
  87. AI Act, art 56(4), German language version: ‘Das Büro für Künstliche Intelligenz und das KI-Gremium streben an, sicherzustellen, dass in den Praxisleitfäden ihre spezifischen Ziele […]’ (emphasis added). ↩︎
  88. AI Act, art 56(4), in the French language version: ‘Le Bureau de l’IA et le Comité IA s’efforcent de veiller à ce que les codes de bonne pratique définissent clairement leurs objectifs spécifiques et contiennent des engagements ou des mesures […]’ (emphasis added). ↩︎
  89. Schneider (n 46) para 15. ↩︎
  90. With view of article 56(1) AI Act, which specifies the role of the AI Office as one of encouragement and facilitation rather than direct drafting; see discussion in Section 2.1. ↩︎
  91. AI Act, art 56(6). ↩︎
  92. This view appears to be shared by Bernsteiner and Schmitt (n 24) paras 17–20 through combining the discussion on article 56(2) and (4) AI Act. ↩︎
  93. Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L 277/1 (“DSA”). ↩︎
  94. DSA, art 45(4): ‘The Commission and the Board shall assess whether the codes of conduct meet the aims specified in paragraphs 1 and 3, and shall regularly monitor and evaluate the achievement of their objectives, having regard to the key performance indicators that they might contain. They shall publish their conclusions.’ (emphasis added). ↩︎
  95. AI Board Adequacy Conclusion (n 22); Commission Adequacy Opinion (n 23). ↩︎
  96. AI Board Adequacy Conclusion (n 22) 3. ↩︎
  97. Commission Adequacy Opinion (n 23) para 13. ↩︎
  98. Commission Adequacy Opinion (n 23) para 40; AI Board Adequacy Conclusion (n 22) 7. ↩︎
  99. Commission Adequacy Opinion (n 23) para 39; AI Board Adequacy Conclusion (n 22) 7. ↩︎
  100. Commission Adequacy Opinion (n 23) para 39; AI Board Adequacy Conclusion (n 22) 7. ↩︎
  101. Commission Adequacy Opinion (n 23) para 39. ↩︎
  102. AI Board Adequacy Conclusion (n 22) 7. ↩︎
  103. See Section 2.8. ↩︎
  104. European Commission, ‘General-Purpose Code of Practice now available’ (n 38). ↩︎
  105. Commission Adequacy Opinion (n 23) para 45; AI Board Adequacy Conclusion (n 22) 9. ↩︎
  106. Commission Adequacy Opinion (n 23) para 42; AI Board Adequacy Conclusion (n 22) 8. ↩︎
  107. Commission Adequacy Opinion (n 23) para 42. ↩︎
  108. ibid para 44. ↩︎
  109. European Commission, ‘First Draft General-Purpose AI Code of Practice’ (14 November 2024) <https://digital-strategy.ec.europa.eu/en/library/first-draft-general-purpose-ai-code-practice-published-written-independent-experts> accessed 18 September 2025. ↩︎
  110. European Commission, ‘Second Draft General-Purpose AI Code of Practice’ (n 54). ↩︎
  111. European Commission, ‘1 – Commitments – Third Draft General-Purpose AI Code of Practice’ (n 54) 4. ↩︎
  112. Commission Adequacy Opinion (n 23) para 46. ↩︎
  113. AI Board Adequacy Conclusion (n 22) 8. ↩︎
  114. See also the forthcoming chapter on Product, Model and Entity Regulation in this work. ↩︎
  115. AI Board Adequacy Conclusion (n 22) 9; see also Section 2.3. ↩︎
  116. Commission Adequacy Opinion (n 23) para 47; AI Board Adequacy Conclusion (n 22) 9. ↩︎
  117. Annex to the Communication to the Commission – Approval of the content of the draft Communication from the Commission – Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act) C(2025) 5045 final paras 68–71. ↩︎
  118. Commission Adequacy Opinion (n 23) para 47; AI Board Adequacy Conclusion (n 22) 9. ↩︎
  119. Commission Adequacy Opinion (n 23) para 48; AI Board Adequacy Conclusion (n 22) 10. ↩︎
  120. Commission Adequacy Opinion (n 23) para 49; AI Board Adequacy Conclusion (n 22) 9. ↩︎
  121. Code of Practice Safety and Security Chapter (n 34) Measure 3.1. ↩︎
  122. Commission Adequacy Opinion (n 23) para 49; AI Board Adequacy Conclusion (n 22) 10. ↩︎
  123. ibid. ↩︎
  124. AI Act, art 56(6). ↩︎
  125. ibid; see also discussion on (non-)inclusion of key performance indicators in the adopted Code of Practice in Section 2.4.2. ↩︎
  126. It is also indicative that the Commission mentions on its website that ‘Providers adhering to the Code should regularly report to the AI Office on the implementation of measures taken and their outcomes, including as measured against key performance indicators as appropriate.’ in European Commission, ‘Artificial Intelligence – Questions and Answers’ (1 August 2024) <https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683> accessed 18 September 2025. ↩︎
  127. AI Act, art 56(6), second sentence: ‘Key performance indicators and reporting commitments shall reflect differences in size and capacity between various participants.’; see, also, AI Act, recital 119, second sentence. ↩︎
  128. See Section 2.6.1. for a discussion on the objectives under article 56(6), first subparagraph, AI Act. ↩︎
  129. See also Bernsteiner and Schmitt (n 24) paras 11–13. ↩︎
  130. See also European Commission, ‘The General-Purpose AI Code of Practice – Transparency Chapter’ (2025), <https://ec.europa.eu/newsroom/dae/redirection/document/118120> accessed 22 September 2025 (“Code of Practice Transparency Chapter”), Commitment 1. ↩︎
  131. See on this point Bernsteiner and Schmitt (n 24) paras 11–13. ↩︎
  132. See, in particular, the commentaries on Article 53, Section 2.3., and on Article 93, Section 2.2.2. ↩︎
  133. Case C-63/93 Duff and others v Minister for Agriculture and Food and Attorney General [1996] ECR I-00569, para 20. ↩︎
  134. Code of Practice Transparency Chapter (n 130) Commitment 7; see also, Code of Practice Transparency Chapter (n 130) Commitment 1, Measure 1.4. on the signatories’ obligation to provide the AI Office with access to their safety and security frameworks, as well as to any updates thereto. ↩︎
  135. ibid. ↩︎
  136. Code of Practice Transparency Chapter (n 130) Commitment 7.7. ↩︎
  137. AI Act, art 56(6), first subparagraph. ↩︎
  138. ibid. ↩︎
  139. AI Act, art 56(6), second subparagraph. ↩︎
  140. ibid. ↩︎
  141. Schneider (n 46) para 19. ↩︎
  142. For a discussion on article 56(8) see Section 2.8. ↩︎
  143. Commission Adequacy Opinion (n 23) para 51 (emphasis added). ↩︎
  144. AI Board Adequacy Conclusion (n 22) 11. ↩︎
  145. ibid. ↩︎
  146. Schneider (n 46) para 18. ↩︎
  147. Bernsteiner and Schmitt (n 24) para 15. ↩︎
  148. ibid. ↩︎
  149. ibid. ↩︎
  150. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 94. ↩︎
  151. See Section 2.9.2.1. ↩︎
  152. European Commission, ‘Digital Omnibus: AI Regulation Proposal’ (19 November 2025) <https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal> accessed 24 November 2025. ↩︎
  153. European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI)’ COM(2025) 836 final, 2025/0359 (COD) art 1(16). Considering that under the currently enacted article 3(47) AI Act, references to the AI Office are construed as references to the Commission, this proposed amendment is therefore largely corrective and stylistic in nature and offers limited additional legal effect. ↩︎
  154. Commission Proposal for Digital Omnibus on AI (n 153) art 1(16). ↩︎
  155. ibid. ↩︎
  156. TFEU, art 294. ↩︎
  157. See Kieran Bradley, ‘Legislating in the European Union’ in Catherine Barnard and Steve Peers (eds), European Union Law (3rd edn, Oxford University Press 2020), 105–107 for discussion on so-called ‘rule-making decisions’ (i.e. generally applicable decisions) as well as the legal effects of opinions. ↩︎
  158. Commission Decision (EU) 2024/3080 of 4 December 2024 establishing the Rules of Procedure of the Commission and amending Decision C(2000) 3614 [2024] OJ 2024/3080. ↩︎
  159. Commission Adequacy Opinion (n 23). ↩︎
  160. See, also, European Artificial Intelligence Board, ‘Rules of Procedure of the European Artificial Intelligence Board’ (12 September 2024) Ref Ares(2024) 6457550 <https://ec.europa.eu/transparency/expert-groups-register/screen/expert-groups/consult?lang=en&groupID=3966> accessed 18 September 2025, art 2(1): ‘The Board, an independent advisory group, is established in accordance with Article 65 of Regulation (EU) 2024/1689.’ (emphasis added). ↩︎
  161. ibid. ↩︎
  162. ibid art 6(1). ↩︎
  163. AI Board Adequacy Conclusion (n 22) 1. ↩︎
  164. DSA, art 45(4). ↩︎
  165. European Board for Digital Services, ‘Conclusion of the Board – Code of Practice on Disinformation’ <https://digital-strategy.ec.europa.eu/en/library/code-conduct-disinformation> accessed 18 September 2025; European Board for Digital Services, ‘Conclusions of the Board – Code of Conduct Disinformation’ <https://digital-strategy.ec.europa.eu/en/library/code-conduct-countering-illegal-hate-speech-online> accessed 18 September 2025. ↩︎
  166. See, for example, Case C-743/19 European Parliament v Council of the European Union [2022] ECLI:EU:C:2022:569 para 36 and case law cited therein. ↩︎
  167. Case C-689/19 P VodafoneZiggo Group BV v European Commission [2021] ECLI:EU:C:2021:142 para 138 and case law cited therein. ↩︎
  168. For a detailed discussion on the application of article 277 TFEU, see Koen Lenaerts, Kathleen Gutman and Janek Tomasz Nowak, EU Procedural Law (2nd edn, Oxford University Press 2023) ch 9, 433–447. ↩︎
  169. Consolidated Version of the Treaty establishing the European Community [2002] OJ C 340/173. ↩︎
  170. Lenaerts, Gutman and Nowak (n 168) 436. ↩︎
  171. AI Act, arts 53(4) and 55(2); see also AI Act, recital 117. ↩︎
  172. See Section 2.7.1.2. ↩︎
  173. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 95. ↩︎
  174. AI Act, art 101(1), second subparagraph; Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 96; see also forthcoming commentary on Article 101 in this work. ↩︎
  175. For a broader discussion of legitimate expectations, see Section 2.7.1.2. ↩︎
  176. Case C-110/97 Kingdom of the Netherlands v Council of the European Union [2001] ECR I-08763 para 115 and case law cited therein. ↩︎
  177. Schneider (n 46) para 19. ↩︎
  178. TFEU, art 297(2). ↩︎
  179. Regulation (EC) No 1049/2001 of the European Parliament and of the Council of 30 May 2001 regarding public access to European Parliament, Council and Commission documents [2001] OJ L145/43 art 13(2). ↩︎
  180. European Artificial Intelligence Board, ‘Rules of Procedure’ (n 160) art 16. ↩︎
  181. Bernsteiner and Schmitt (n 24) para 15. ↩︎
  182. Commission Adequacy Opinion (n 23). ↩︎
  183. European Commission, ‘AI Board’ <https://digital-strategy.ec.europa.eu/en/policies/ai-board> accessed 18 September 2025. ↩︎
  184. European Commission, ‘The General-Purpose AI Code of Practice’ (n 21). ↩︎
  185. See Section 2.9.2.1. ↩︎
  186. The main sources of the different interpretations described in the present subsection include: Cornelia Kutterer and Theodoros Karathanasis, ‘The AI Act’s GPAI Code: Hidden Policy Choices’ (2025) AI Regulation Papers 25-03-1 <https://ai-regulation.com/gpai-cop-hidden-policy-choices/> accessed 18 September 2025; Bernsteiner and Schmitt (n 24). ↩︎
  187. Discussed in Section 2.6.2.1. ↩︎
  188. See Section 2.6.2.2. ↩︎
  189. This view is discussed in Section 2.6.2.3.1. ↩︎
  190. This view is discussed in Section 2.6.2.3.2. ↩︎
  191. Case C-65/13 European Parliament v European Commission [2014] ECLI:EU:C:2014:2289 para 43 and case law cited therein (emphasis added). ↩︎
  192. Bradley (n 157) 136. ↩︎
  193. AI Act, art 88(1). ↩︎
  194. Kutterer and Karathanasis (n 186) 6. ↩︎
  195. European Parliament v European Commission (n 191) paras 44–45. ↩︎
  196. TFEU, art 291(2): ‘Where uniform conditions for implementing legally binding Union acts are needed, those acts shall confer implementing powers on the Commission, or, in duly justified specific cases and in the cases provided for in Articles 24 and 26 of the Treaty on European Union, on the Council’. ↩︎
  197. cf the self-binding effect that Commission Guidelines have as described in paragraph 73. ↩︎
  198. See paragraph 73 and case law cited in n 222–223. ↩︎
  199. AI Act, art 56(6), second subparagraph, read together with AI Act, art 98(2) and Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by Member States of the Commission’s exercise of implementing powers [2011] OJ L 55/13 art 5. ↩︎
  200. See Section 2.6.2.3.3. ↩︎
  201. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1 (“GDPR”). ↩︎
  202. Carl Vander Maelen, ‘Codes of (Mis)Conduct? An Appraisal of Articles 40-41 GDPR in View of the 1995 Data Protection Directive and Its Shortcomings’ (2020) 6 European Data Protection Law Review 231, 239. ↩︎
  203. Article 40(5)–(9) GDPR. ↩︎
  204. Kutterer and Karathanasis (n 186), 6. ↩︎
  205. European Data Protection Board, ‘Guidelines 04/2021 on Codes of Conduct as tools for transfers’ (version 2.0, 22 February 2022) <https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-042021-codes-conduct-tools-transfers_en> accessed 19 September 2025, para 22. ↩︎
  206. European Data Protection Board, ‘Guidelines 1/2019 on Codes of Conduct and Monitoring Bodies under Regulation 2016/679’ (4 June 2019) <https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-12019-codes-conduct-and-monitoring-bodies-0_en> accessed 19 September 2025, Appendix 1. ↩︎
  207. ibid. ↩︎
  208. See Section 2.6.1. ↩︎
  209. The term ‘jurisdictional effect’ is used by analogy to the effect of the implementing act envisioned in article 40(9) GDPR in Kutterer and Karathanasis (n 186) 7. ↩︎
  210. AI Act, arts 53(4) and 55(2); Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 94. ↩︎
  211. For a general discussion of the different legal instruments from which the AI Act appears to have drawn inspiration and the implications for analogical interpretation, see forthcoming chapter on Interpreting the AI Act through Systematic Analogies in this work. ↩︎
  212. Bernsteiner and Schmitt (n 24) para 21. ↩︎
  213. ibid para 22. ↩︎
  214. ibid. ↩︎
  215. AI Act, arts 53(4) and 55(2) (emphasis added). ↩︎
  216. Bernsteiner and Schmitt (n 24) para 23. ↩︎
  217. ibid. ↩︎
  218. ibid para 24. ↩︎
  219. ibid. ↩︎
  220. The quoted version of the explanatory webpage was active until 4 September 2025, as seen in the web-archived version of European Commission, ‘General-Purpose AI Models in the AI Act – Questions & Answers’ (Internet Archive, 4 September 2025) <https://web.archive.org/web/20250904171338/https://digital-strategy.ec.europa.eu/en/faqs/general-purpose-ai-models-ai-act-questions-answers> accessed 19 September 2025 (emphasis added). The current version has been updated: European Commission, ‘Questions and answers on the code of practice for General-Purpose AI’ (n 20) to reflect the position of the Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 94. ↩︎
  221. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) section 5.1, paras 94–100. ↩︎
  222. Case C-57/19 P European Commission v Tempus Energy and Tempus Energy Technology [2021] ECLI:EU:C:2021:663 para 143; Case C-654/17 P Bayerische Motoren Werke v European Commission and Freistaat Sachsen [2019] ECLI:EU:C:2019:634 para 82; Case C-526/14 Tadej Kotnik and Others v Državni zbor Republike Slovenije [2016] ECLI:EU:C:2016:570 para 40 and case law cited therein. ↩︎
  223. Case C-11/22 Est Wind Power OÜ v AS Elering [2023] ECLI:EU:C:2023:765 para 31. ↩︎
  224. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 9. ↩︎
  225. ibid para 94. ↩︎
  226. Kutterer and Karathanasis (n 186) 6; Theodoros Karathanasis, ‘The AI Act: Balancing Implementation Challenges and the EU’s Simplification Agenda’ (24 June 2025) <https://ssrn.com/abstract=5311501> accessed 19 September 2025, 2. ↩︎
  227. Kutterer and Karathanasis (n 186) 6; see, also, the discussion in Section 2.9.2.2. ↩︎
  228. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 99. ↩︎
  229. ibid. ↩︎
  230. AI Act, arts 53(4) and 55(2); AI Act, recital 117. ↩︎
  231. ibid. ↩︎
  232. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 99. ↩︎
  233. See, for example, Case C-928/19 P European Federation of Public Service Unions (EPSU) v European Commission [2021] ECLI:EU:C:2021:656 para 38; Case C-339/15 Criminal proceedings against Luc Vanderborght [2017] ECLI:EU:C:2017:335 para 41; Case C-601/14 European Commission v Italian Republic [2016] ECLI:EU:C:2016:759 para 46. ↩︎
  234. By comparison to articles 40(1) and 41(3) AI Act; see, also, Section 2.7.1. ↩︎
  235. See discussion in Section 2.9.2.2. ↩︎
  236. AI Act, arts 53(4) and 55(2) and recital 117. ↩︎
  237. Case C-482/23 European Commission v Kingdom of Denmark [2025] ECLI:EU:C:2025:150, Opinion of AG Rantos, para 32; the principle is also directly invoked in Case C-621/18 Andy Wightman and Others v Secretary of State for Exiting the European Union [2018] ECLI:EU:C:2018:978, Opinion of AG Campos Sánchez-Bordona, fn 67. ↩︎
  238. See, for example, Case C-480/22 EVN Business Service and Others [2023] ECLI:EU:C:2023:918 para 30; Case C-573/17 Criminal proceedings against Daniel Adam Popławski [2019] ECLI:EU:C:2019:530 para 47. ↩︎
  239. For an examination of the extent to which the CJEU relies on case-based or precedent-based reasoning, see forthcoming chapter on Interpreting the AI Act through Systematic Analogies in this work. ↩︎
  240. Kutterer and Karathanasis (n 186) 6; see Section 2.6.2.3.2. ↩︎
  241. This view is stated in Kutterer and Karathanasis (n 186) 7; and Karathanasis (n 226) 2. ↩︎
  242. ibid. ↩︎
  243. See paragraph 61. ↩︎
  244. For a detailed discussion on the application of articles 263 and 264 TFEU, see Lenaerts, Gutman and Nowak (n 168) ch 7, 275–411. ↩︎
  245. On the (im)possibility of judicial review to adequacy assessments, see Section 2.6.1.1. ↩︎
  246. Case C-355/10 European Parliament v Council of the European Union [2012] ECLI:EU:C:2012:516 paras 66–67 (the scope of the conferred power to adopt implementing measures ‘cannot amend essential elements of basic legislation or supplement it by new essential elements. Ascertaining which elements of a matter must be categorised as essential is not – contrary to what the Council and the Commission claim – for the assessment of the European Union legislature alone, but must be based on objective factors amenable to judicial review’). ↩︎
  247. Case 22/88 Industrie- en Handelsonderneming Vreugdenhil BV and Gijs van der Kolk – Douane Expediteur BV v Minister van Landbouw en Visserij [1989] ECR 1989-02049 paras 17–25; Joined Cases C-14/06 and C-295/06 European Parliament and Kingdom of Denmark v Commission of the European Communities [2008] ECR I-01649 paras 50–78; Case C-540/14 DK Recycling und Roheisen v Commission [2016] ECLI:EU:C:2016:469 paras 47–58. ↩︎
  248. cf the scope of implementing acts under article 291(2) TFEU to the scope of delegated acts under article 290(1) TFEU, which includes the power to amend and supplement certain ‘non-essential elements of the legislative act’ conferring delegated power. ↩︎
  249. TFEU, art 264(2) on the possibility for partial annulment; for limits of conferred powers, see Vreugdenhil and Others (n 247) para 20. ↩︎
  250. TFEU, art 277; for a detailed discussion on the application of article 277 TFEU, see Lenaerts, Gutman and Nowak (n 168) ch 9, 433–447. ↩︎
  251. See Section 2.6.1, paragraph 53. ↩︎
  252. European Commission, ‘Digital Omnibus: AI Regulation Proposal’ (n 152). ↩︎
  253. Commission Proposal for Digital Omnibus on AI (n 153) 8. ↩︎
  254. ibid recital 23. ↩︎
  255. ibid art 1(16) (emphasis added). ↩︎
  256. ibid art 1(15) (emphasis added). ↩︎
  257. See Section 2.6.1, paragraph 53. ↩︎
  258. AI Act, art 56(7). ↩︎
  259. ibid. ↩︎
  260. Commission Adequacy Opinion (n 23) para 4.  ↩︎
  261. See Linda Senden, Soft Law in European Community Law (Bloomsbury Publishing Plc, 2004) 112, which defines ‘soft law’ as ‘[r]ules of conduct that are laid down in instruments which have not been attributed legally binding force as such, but nevertheless may have certain (indirect) legal effects, and that are aimed at and may produce practical effects’. ↩︎
  262. AI Act, recital 117. ↩︎
  263. See Section 2.6.2.2. ↩︎
  264. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 94; see also discussion in Section 2.6.2.3. ↩︎
  265. On the difference between legally binding effects and legal effects, see Petra Láncos and others (eds), The Legal Effects of EU Soft Law: Theory, Language and Sectoral Insights (Edward Elgar Publishing 2023). ↩︎
  266. Case C-16/16 P Kingdom of Belgium v European Commission [2017] ECLI:EU:C:2017:959, Opinion of Advocate General Bobek, para 88. ↩︎
  267. See, for example, Ceyhun Necati Pehlivani, Nikolaus Forgó and Peggy Valcke (eds), The EU Artificial Intelligence (AI) Act: A Commentary (Kluwer Law International BV, 2024), 856 which indicates: ‘A presumption of conformity with the obligations of Article 53 exists when providers of GPAI models have adhered to codes of practice within the meaning of Article 56’; see also, Petruta Pirvan, ‘The EU Commission’s General-Purpose AI Code of Practice: Pioneering Accountable AI Development While Setting a Global Governance Milestone’ (2025) 2 Journal of AI Law and Regulation 257, 257; Ebers (n 73). ↩︎
  268. See Section 1.1. ↩︎
  269. Regulation 1025/2012 (n 14) art 2(1)(c). ↩︎
  270. See Commission Notice, The ‘Blue Guide’ on the Implementation of EU Product Rules 2022 [2022] OJ C247/1: ‘Union harmonisation legislation may set out that Harmonised standards provide a presumption of conformity with the essential requirements they aim to cover, if their references have been published in the OJEU’. ↩︎
  271. See, on the presumption of conformity under harmonised standards, Annalisa Volpato, ‘The Legal Effects of Harmonised Standards in EU Law: From Hard to Soft Law, and Back?’ in Petra L Láncos and others (eds), The Legal Effects of EU Soft Law (Edward Elgar Publishing 2023) 204–207. See forthcoming chapter on Product, Model and Entity Regulation in this work. ↩︎
  272. Case C-588/21 P Public.Resource.Org and Right to Know v European Commission [2024] ECLI:EU:C:2024:201 para 76. ↩︎
  273. Volpato (n 271) 205–206. ↩︎
  274. ibid; see also, Public.Resource.Org v Commission (n 272) para 76. ↩︎
  275. AI Act, art 40(1) as well as arts 53(4) and 55(2). ↩︎
  276. AI Act, art 41(3): ‘High-risk AI systems or general-purpose AI models which are in conformity with the common specifications referred to in paragraph 1, or parts of those specifications, shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, to comply with the obligations referred to in Sections 2 and 3 of Chapter V, to the extent those common specifications cover those requirements or those obligations’ (emphasis added). ↩︎
  277. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 100 (emphasis added). ↩︎
  278. On the relation between the codes of practice and harmonised standards, see Section 2.7.3. ↩︎
  279. Consider, for example, the possibility for the European Parliament and Member States to issue formal objections under Regulation No 1025/2012 (n 14) art 11. ↩︎
  280. Linda Senden ‘The Constitutional Fit of European Standardization Put to the Test’ (2017) 44 Legal Issues of Economic Integration 337, 341. ↩︎
  281. Case C-613/14 James Elliott Construction Limited v Irish Asphalt Limited [2016] ECLI:EU:C:2016:821 para 40. ↩︎
  282. For a discussion on the procedure for assessing a code of practice as adequate, refer to Section 2.6.1. ↩︎
  283. For a detailed analysis of both views, see Section 2.6.2.3. ↩︎
  284. European Commission, ‘Questions and answers on the code of practice for General-Purpose AI’ (n 20) states that: ‘[a]dhering to a Code assessed as adequate by the AI Office and the Board will offer a simple and transparent way to demonstrate compliance with the AI Act. This offers a streamlined compliance process, with enforcement focused on monitoring their adherence to the Code, resulting in greater predictability and reduced administrative burden.’; See also Code of Practice Transparency Chapter, 3 (n 130) which states that the specific objectives of this Code include to ‘serve as a guiding document for demonstrating compliance with the obligations provided for in Articles 53 and 55 AI Act, while recognising that adherence to the Code does not constitute conclusive evidence of compliance with these obligations under the AI Act’ (emphasis added). This objective is also repeated in the Copyright and Security and Safety Chapters of the GPAI Code of Practice. ↩︎
  285. European Commission, ‘The General-Purpose AI Code of Practice’ (n 21). ↩︎
  286. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 94. ↩︎
  287. ibid. ↩︎
  288. ibid (emphasis added). ↩︎
  289. ibid. ↩︎
  290. Bayerische Motoren Werke v European Commission and Freistaat Sachsen (n 222) paras 81 and 82 and case law cited therein. ↩︎
  291. See, inter alia, Case C‑148/23, Gestore dei Servizi Energetici SpA – GSE v Erg Eolica Ginestra Srl and Others [2024] ECLI:EU:C:2024:555 para 54. ↩︎
  292. See, inter alia, Case C‑226/11 Expedia Inc. v Autorité de la concurrence and Others [2012] ECLI:EU:C:2012:795 para 28 and case law cited therein (concerning notices). As Advocate General Bobek indicated in his Opinion in Belgium v Commission, AG Opinion (n 266) para 90, as regards the legal effects of recommendations: ‘if an EU institution adopts recommendations as to how others are supposed to behave, it is perhaps fair to assume that should it become relevant, that institution can be expected to follow that same recommendation as to its own practices and behaviour. From this point of view, the legitimate expectation thus created is effectively analogous to other types of soft law that EU institutions or bodies generate and which is perceived as the (auto)limitation of the exercise of their own discretion in the future.’ ↩︎
  293. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 94. ↩︎
  294. ibid. ↩︎
  295. Herwig Hofmann, Gerard Rowe and Alexander Türk, Administrative Law and Policy of the European Union (OUP 2011) 178; the authors identify three main conditions to rely on the principle of legitimate expectations in the case law: ‘justifiable reliance, an affected interest, and priority for the protection of expectations over the interest of the EU’. ↩︎
  296. See forthcoming commentary on Article 101. ↩︎
  297. AI Act, art 101(1), last sentence. ↩︎
  298. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 96 (emphasis added). ↩︎
  299. AI Act, art 101(1), second subparagraph, states that ‘In fixing the amount of the fine or periodic penalty payment, regard shall be had to the nature, gravity and duration of the infringement, taking due account of the principles of proportionality and appropriateness. The Commission shall also take into account commitments made in accordance with Article 93(3) or made in relevant codes of practice in accordance with Article 56.’ For more details on this discussion, see forthcoming commentary on Article 101. ↩︎
  300. ibid. ↩︎
  301. This principle has been invoked in a few instances in the case law of the CJEU; see, for example, Joined Cases T-50/06 RENV II and T-69/06 RENV II Ireland and Aughinish Alumina v European Commission [2016] ECLI:EU:T:2016:227 para 192, which refers by analogy to Case C-177/13 P Marek Marszałkowski v Office for Harmonisation in the Internal Market [2014] ECLI:EU:C:2014:183 para 73. ↩︎
  302. Commission Adequacy Opinion (n 23) para 4. ↩︎
  303. Matthias Bastian ‘Google and xAI Sign EU AI Code of Practice’ (The Decoder, 31 July 2025) <https://the-decoder.com/google-and-xai-sign-eu-ai-code-of-practice/> accessed 18 September 2025. ↩︎
  304. European Commission, ‘The General-Purpose AI Code of Practice – Copyright Chapter’ (2025), <https://ec.europa.eu/newsroom/dae/redirection/document/118115> accessed 22 September 2025, Measure 1.5. ↩︎
  305. Kent Walker, ‘We will sign the EU AI Code of Practice’ (Google, 30 July 2025) <https://blog.google/around-the-globe/google-europe/eu-ai-code-practice/> accessed 18 September 2025. ↩︎
  306. See Section 2.6.2.3. ↩︎
  307. AI Act, arts 53(4) and 55(2). ↩︎
  308. ibid. ↩︎
  309. ibid. ↩︎
  310. See Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 95. ↩︎
  311. ibid.; see also, Blue Guide (n 270) 55 on the use of a ‘gap analysis’ as a technique for demonstrating alternative means of compliance to conformity with a harmonised standard under the New Legislative Framework. Relatedly, for a general discussion on the relationship between the AI Act and the New Legislative Framework, see forthcoming chapter on Interpreting the AI Act through Systematic Analogies in this work. ↩︎
  312. ibid. ↩︎
  313. See Bastian (n 303). In the event that a code of practice approved by an implementing act contains commitments that go beyond the scope of the AI Act’s obligations, it may be subject to judicial challenge and (partial) annulment as described in Section 2.6.2.3.3. ↩︎
  314. See Kutterer and Karathanasis (n 186) 9. ↩︎
  315. This is particularly evident in recital (f) of the Code of Practice Safety and Security Chapter (n 34), which states that ‘[t]he Signatories recognise that this Chapter should encourage providers of general-purpose AI models with systemic risk to advance the state of the art in AI safety and security and related processes and measures’ (emphasis added). ↩︎
  316. See also Kutterer and Karathanasis (n 186) 9. ↩︎
  317. This question has been raised by other commentators, for example, Gustavo Gil Gasiola, ‘The GPAI Code of Practice: Delayed, Yet Full of Promise’ (VerfBlog, 16 July 2025), <https://verfassungsblog.de/the-gpai-code-of-practice/> accessed 17 September 2025. ↩︎
  318. European Commission ‘The General-Purpose AI Code of Practice’ (n 21). ↩︎
  319. ibid. ↩︎
  320. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 94 (emphasis added). ↩︎
  321. See, for example, Commission Adequacy Opinion (n 23) para 7, which provides an overview of how the commitments and measures in each chapter of the Code of Practice correspond to the obligations set out in article 53(1)(a) and (b) AI Act (Transparency Chapter), article 53(1)(c) (Copyright Chapter), and the specific obligations applicable to providers of GPAI models with systemic risk (Safety and Security Chapter). ↩︎
  322. ibid para 100. ↩︎
  323. Bernsteiner and Schmitt (n 24) paras 8–10; see also, Cornelia Kutterer ‘Regulating Foundation Models in the AI Act: From “High” to “Systemic” Risk’ (2024) AI Regulation Papers 24-01-1 <https://ai-regulation.com/wp-content/uploads/2024/01/C-Kutterer-Regulating-Foundation-Models-in-the-AI.pdf> accessed 18 September 2025, 10 (‘[t]his co-regulatory approach allows for the evolution of these commitments and to adapt to emerging new advancements in AI and state-of-the-art safety research’). ↩︎
  324. Scholars have pointed out that little research has been dedicated to mitigation approaches; see, for example, Risto Uuk and others, ‘Effective Mitigations for Systemic Risks from General-Purpose AI’ (2024) <https://ssrn.com/abstract=5021463> accessed 19 September 2025. ↩︎
  325. Commission Adequacy Opinion (n 23) para 51. ↩︎
  326. Péter Mezei, ‘The Multi-layered Regulation of Rights Reservation (Opt-out) Under EU Copyright Law and the AI Act – For the Benefit of Whom?’ (2025) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5064018> accessed 19 September 2025, 9. ↩︎
  327. European Commission, ‘Questions and answers on the code of practice for General-Purpose AI’ (n 20). ↩︎
  328. For a discussion on the regularity with which monitoring and evaluation should occur see Section 2.6.1. ↩︎
  329. AI Board Adequacy Conclusion (n 22) 11. ↩︎
  330. Commission Adequacy Opinion (n 23) para 51; see also, European Commission, ‘Questions and answers on the code of practice for General-Purpose AI’ (n 20). ↩︎
  331. AI Board Adequacy Conclusion (n 22) 11. ↩︎
  332. Case T-79/13 Alessandro Accorinti and Others v European Central Bank [2015] ECLI:EU:T:2015:756 para 76 and case law cited therein. ↩︎
  333. DSA, art 45(4). ↩︎
  334. See, in particular, Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 94, which states that compliance may be demonstrated by ‘adhering to a code of practice that is assessed as adequate by the AI Office and the Board’, meaning that, if multiple codes are simultaneously assessed as adequate, any of them might be relied upon. ↩︎
  335. Commission Adequacy Opinion (n 23) para 3. ↩︎
  336. AI Act, arts 53(4) and 55(2) AI Act read in conjunction with Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 94. ↩︎
  337. See Section 2.1. ↩︎
  338. For a discussion on the discretion of the AI Office in organising stakeholder participation, see Section 2.3. ↩︎
  339. ibid. ↩︎
  340. Schneider (n 46) para 28. ↩︎
  341. AI Act, art 56(9), second subparagraph. ↩︎
  342. Bernsteiner and Schmitt (n 24) para 26. ↩︎
  343. European Commission, ‘General-Purpose AI Code of Practice now available’ (n 38). ↩︎
  344. Commission Adequacy Opinion (n 23). ↩︎
  345. AI Act, art 56(9), second subparagraph: ‘If, by 2 August 2025, a code of practice cannot be finalised, or if the AI Office deems it is not adequate following its assessment under paragraph 6 of this Article, the Commission may provide […]’ (emphasis added). ↩︎
  346. For example, such wording might have looked like this: ‘If a code of practice cannot be finalised by 2 August 2025, or if the AI Office deems it is not adequate following its assessment under paragraph 6 of this Article, the Commission may provide […]’. ↩︎
  347. For example, the German text of article 56(9), second subparagraph, AI Act reads: ‘Kann bis zum 2. August 2025 ein Verhaltenskodex nicht fertiggestellt werden oder erachtet das Büro für Künstliche Intelligenz dies nach seiner Bewertung gemäß Absatz 6 des vorliegenden Artikels für nicht angemessen, kann die Kommission […]’ (emphasis added); and the French version: ‘Si, à la date du 2 août 2025, un code de bonnes pratiques n’a pas pu être mis au point, ou si le Bureau de l’IA estime qu’il n’est pas approprié à la suite de son évaluation au titre du paragraphe 6 du présent article, la Commission peut prévoir […]’ (emphasis added). ↩︎
  348. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 99. ↩︎
  349. See, for example, Bernsteiner and Schmitt (n 24) para 27 and Kutterer and Karathanasis (n 186) 6. ↩︎
  350. AI Act, arts 53(4) and 55(2); Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 100; see also, Section 2.7.3. ↩︎
  351. See Section 2.6.1; cf Commission Proposal for Digital Omnibus on AI (n 153) art 1(16). ↩︎
  352. ibid. ↩︎
  353. cf other inconsistencies related to centralised enforcement of the AI Act in Section 2.6.2.1. ↩︎
  354. Regulation (EU) No 182/2011 (n 199) art 5. ↩︎
  355. See Section 2.8. ↩︎
  356. Kutterer and Karathanasis (n 186) 6. ↩︎
  357. See discussion in Section 2.6.2.1. ↩︎
  358. European Parliament v European Commission (n 191) para 43.  ↩︎
  359. AI Act, art 88(1). ↩︎
  360. European Parliament v European Commission (n 191) paras 44–45. ↩︎
  361. See Section 2.6.2.1. ↩︎
  362. Regulation (EU) 2020/1056 of the European Parliament and of the Council of 15 July 2020 on electronic freight transport information [2020] OJ L 249/33 art 8; Regulation (EU) 2021/2115 of the European Parliament and of the Council of 2 December 2021 establishing rules on support for strategic plans to be drawn up by Member States under the common agricultural policy (CAP Strategic Plans) and financed by the European Agricultural Guarantee Fund (EAGF) and by the European Agricultural Fund for Rural Development (EAFRD) and repealing Regulations (EU) No 1305/2013 and (EU) No 1307/2013 [2021] OJ L 435/1; Regulation (EU) 2021/2116 of the European Parliament and of the Council of 2 December 2021 on the financing, management and monitoring of the common agricultural policy and repealing Regulation (EU) No 1306/2013 [2021] OJ L 435/187; Regulation (EU) 2015/755 of the European Parliament and of the Council of 29 April 2015 on common rules for imports from certain third countries (recast) [2015] OJ L 123/33. ↩︎
  363. Kutterer and Karathanasis (n 186) 6–7. ↩︎
  364. Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 2) para 99. ↩︎
  365. See Section 2.7.1.1. ↩︎
  366. See Section 2.7.1.2. for discussion on effects of code of practice on signatories. ↩︎
  367. This is analogous to one of the potential interpretations of the effect of an implementing act under article 56(6), second subparagraph, AI Act, described in paragraph 76. ↩︎
  368. See, for example, Kutterer and Karathanasis (n 186) 6–7; Karathanasis (n 226) 2. ↩︎
  369. See discussion in Section 2.6.2.3.2. ↩︎
  370. See Section 2.6.2.3.2., text to paragraph 78. ↩︎
Zlatko Grigorov & Ludivine Stewart (joint first authors), 'Article 56: Codes of Practice' (Cambridge Commentary on EU General-Purpose AI Law, 1 Mar 2026) <https://cambridge-commentary.ai/article-56/>