AI Act provision
Article 111: AI systems already placed on the market or put into service and general-purpose AI models already placed on the marked [sic]
1. Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X that have been placed on the market or put into service before 2 August 2027 shall be brought into compliance with this Regulation by 31 December 2030.
The requirements laid down in this Regulation shall be taken into account in the evaluation of each large-scale IT system established by the legal acts listed in Annex X to be undertaken as provided for in those legal acts and where those legal acts are replaced or amended.
2. Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), this Regulation shall apply to operators of high-risk AI systems, other than the systems referred to in paragraph 1 of this Article, that have been placed on the market or put into service before 2 August 2026, only if, as from that date, those systems are subject to significant changes in their designs. In any case, the providers and deployers of high-risk AI systems intended to be used by public authorities shall take the necessary steps to comply with the requirements and obligations of this Regulation by 2 August 2030.
3. Providers of general-purpose AI models that have been placed on the market before 2 August 2025 shall take the necessary steps in order to comply with the obligations laid down in this Regulation by 2 August 2027.
Recitals
Recital 177
In order to ensure legal certainty, ensure an appropriate adaptation period for operators and avoid disruption to the market, including by ensuring continuity of the use of AI systems, it is appropriate that this Regulation applies to the high-risk AI systems that have been placed on the market or put into service before the general date of application thereof, only if, from that date, those systems are subject to significant changes in their design or intended purpose. It is appropriate to clarify that, in this respect, the concept of significant change should be understood as equivalent in substance to the notion of substantial modification, which is used with regard only to high-risk AI systems pursuant to this Regulation. On an exceptional basis and in light of public accountability, operators of AI systems which are components of the large-scale IT systems established by the legal acts listed in an annex to this Regulation and operators of high-risk AI systems that are intended to be used by public authorities should, respectively, take the necessary steps to comply with the requirements of this Regulation by end of 2030 and by 2 August 2030.
Select bibliography
- Mezei P, ‘A Saviour or a Dead End? Reservation of Rights in the Age of Generative AI’ (2024) 46 European Intellectual Property Review 461.
- Peukert A, ‘Copyright in the Artificial Intelligence Act – A Primer’ (2024) 73 GRUR International 497.
- Rosati E, ‘Infringing AI: Liability for AI-Generated Outputs Under International, EU, and UK Copyright Law’ (2024) 16 European Journal of Risk Regulation 603.
- Winkelmeier A and Korab C, ‘Article 111. AI Systems Already Placed on the Market or Put into Service and General-Purpose AI Models Already Placed on the Marked [sic]’ in Ceyhun Necati Pehlivan, Nikolaus Forgó, and Peggy Valcke (eds), The EU Artificial Intelligence (AI) Act: A Commentary (Kluwer Law International BV 2024), 1449.
- Wendehorst C, ‘Art. 111 Bereits in Verkehr gebrachte oder in Betrieb genommene KI-Systeme und bereits in Verkehr gebrachte KI-Modelle mit allgemeinem Verwendungszweck’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (1st edn, C.H. Beck 2024).
Commentary
1. General remarks
1Article 111 AI Act1 is complementary to Article 113, with both provisions governing the temporal application of the obligations placed on providers by the AI Act. While Article 113 sets out the general rules on the entry into force of the Act and the staggered entry into application of its various chapters and provisions, Article 111 prescribes transitional periods for bringing into conformity certain AI systems and all general-purpose AI (“GPAI”) models that were lawfully placed on the market before the respective obligations imposed on them entered into application.
1.1. Structure and overview
2Article 111(1) requires AI systems, other than those falling within the scope of the prohibited practices in Article 5, that (i) are components of the large-scale IT systems governed by the legislation listed in Annex X and (ii) were placed on the market or put into service before 2 August 2027 to be brought into compliance with the AI Act by 31 December 2030.
3Article 111(2) specifies that the AI Act’s obligations concerning high-risk AI systems do not apply to systems that were placed on the market or put into service before 2 August 2026, unless a system’s design undergoes a ‘significant change’ after that date. The provision also prescribes that a high-risk AI system intended for use by a public authority must, regardless of its placement date, be brought into compliance with the AI Act by 2 August 2030.
4Article 111(3) introduces a special transitional rule for the general entry into application of Chapter V with respect to GPAI models referred to in Article 113(b). While Chapter V generally applies from 2 August 2025, Article 111(3) states that providers of GPAI models placed on the market before that date have until 2 August 2027 to comply with the relevant obligations.
5Given that this commentary’s scope is focused primarily on GPAI models,2 the central focus of the present analysis is Article 111(3). Accordingly, reference is made to Articles 111(1) and 111(2) only insofar as needed to identify and resolve any tensions between the temporal obligations imposed on GPAI model providers and those applicable to AI systems.
6In light of the above, this analysis begins by situating Article 111(3) within the legislative context of the broader temporal frameworks under the EU’s New Legislative Framework, of which the AI Act is intended to form a part.3 It then compares the New Legislative Framework’s general treatment of products placed on the market before the entry into application of new harmonised rules with the sui generis approach adopted by the AI Act.
7The substantive analysis of Article 111(3) begins with overarching considerations applicable to all obligations imposed on providers of GPAI models placed on the market before 2 August 2025. It clarifies the provision’s legal character and effects through a systematic reading of the AI Act. Given that such models may be modified during the additional transitional period ending on 2 August 2027, either by the initial provider or by downstream actors, the section also pays particular attention to when specific modifications, under the current Commission interpretation, might necessitate immediate compliance with obligations and when the obligations relating to a modified model might continue to benefit from the temporal deferral under Article 111(3).
8Following the overarching considerations, the analysis turns to examining the temporal implications of Article 111(3) for the obligations of GPAI model providers, organised thematically. It begins with a discussion of the fulfilment of the documentation and transparency obligations under Article 53(1)(a) and (b) where the model was placed on the market before 2 August 2025. The section also briefly assesses how the deferral of these obligations affects high-risk AI systems based on GPAI models.
9Thereafter, a significant portion of the analysis deals with the obligations to adopt a copyright compliance policy and to provide a training-content summary under Article 53(1)(c) and (d). The attention afforded to this topic reflects the uncertainty among commentators about how compliance should be ensured where training activities took place before the obligations became applicable; namely, whether retraining or unlearning is required and under what conditions. The discussion centres on the temporal implications of rightsholders’ reservations against the use of their works for the purposes of text and data mining, and how the timing of such reservations affects providers’ copyright obligations during the additional transitional period under Article 111(3).
10The discussion then turns to the relationship between Article 111(3) and the obligation under Article 54 for third-country providers of GPAI models to appoint an authorised representative based in the Union. The analysis of Article 111(3) concludes with how the provision’s temporal deferral affects providers of GPAI models with systemic risk that were placed on the market before 2 August 2025. It focuses on three areas of potential tension: (i) when should a provider notify the Commission under Article 52(1) that a GPAI model placed on the market before 2 August 2025 has high-impact capabilities under Article 51(1)(a), (ii) the consequences when a model placed on the market before 2 August 2025 is designated during the transitional period as a GPAI model with systemic risk, and (iii) the consequences when post-market modifications of a GPAI model without systemic risk placed on the market before 2 August 2025 give rise to a GPAI model with systemic risk.
11Finally, the chapter briefly examines how the rules for providers of GPAI models placed on the market before 2 August 2025 interact with Articles 111(1) and 111(2) and highlights potential conflicts in their respective temporal regimes.
1.2. Legislative context
12The AI Act was initially largely set within the context of the New Legislative Framework,4 which aligns product harmonisation legislation with the reference provisions contained in Regulation (EC) No 765/2008,5 Decision No 768/2008/EC,6 and Regulation (EU) 2019/1020.7 The inclusion of transitional provisions, which ensure the continued lawful marketing of products already placed on the market in conformity with pre-existing harmonised rules or national regulatory measures predating harmonisation, represents a standard and usually straightforward element of the legislation forming part of the New Legislative Framework. While the AI Act has been categorised by some authors as fitting ‘into the traditional European concept of product safety law’,8 this view is largely informed by the initial drafts of the AI Act. The final version of the legislation, however, ‘explicitly combines three EU regulatory approaches, i.e., risk-based, product safety, and rights-based’.9 This departure from the AI Act’s product safety genesis, especially in relation to the unique regulatory challenges introduced with the inclusion of GPAI models within the Act’s subject matter scope, has led to a unique approach in the AI Act now in force for models already placed on the market. In short, Article 111 still exhibits some of the product safety logic present in the transitional regimes under other harmonisation legislation of the New Legislative Framework, but it enshrines a largely sui generis approach.
13A review of other legislative acts within the New Legislative Framework10 reveals that products already placed on the market prior to the entry into application of new or amended obligations are generally fully exempted from the need to comply with those new requirements (i.e. a full grandfather clause).11 This approach is readily explained by the fact that most of these pieces of legislation regulate physical products; requiring modifications to meet new legal standards post-market placement for such products would often be technically impossible without a full withdrawal from the market. Accordingly, the usual approach is either to require the recall of products already placed on the market or, where the safety risk is not sufficiently high, to permit sales until existing stocks are exhausted.12 However, the latter approach is not appropriate for digital products, which are not constrained by a finite stock of units already placed on the market.
14The Cyber Resilience Act (“CRA”),13 which represents the only other legislative measure falling within the current New Legislative Framework that substantially regulates digital products, exhibits notable structural parallels with the AI Act. Both adopt a horizontal framework, apply a risk-based methodology, and embed product safety principles.14 In the CRA, this means that ‘requirements and obligations combine a more traditional by-design approach, now fully entrenched in EU cybersecurity regulation, with a lifecycle approach, which is not a hallmark of EU product safety legislation’.15 As explained below, this combination is also reflected in the AI Act’s lifecycle compliance requirements for GPAI models.16 However, despite these similarities, the CRA’s transitional regime still follows the more conventional New Legislative Framework model whereby products with digital elements already placed on the market prior to the entry into application of the relevant provisions must be brought into conformity only if they have undergone ‘substantial modification’ after that applicability date.17
15The above approaches follow explicitly from the common transitional architecture of the New Legislative Framework as clarified in the Blue Guide, which represents the European Commission’s guidance on the harmonisation of product rules under the New Legislative Framework.18 Section 2.10. of the Blue Guide sets out a general rule that a ‘product, which is placed on the market before the end of the transitional period, should be allowed to be made available on the market or put into service’.19 Thus, the new compliance regime applies temporally only to those products that are placed on the market following the full entry into application of the relevant obligation.20 This permits the continued making available of products that were lawfully placed on the market before applicability, without imposing any requirement to bring them into conformity with subsequent harmonised rules. However, in specific circumstances, the Blue Guide does allow authorities to entirely prohibit ‘the making available of such products if this is deemed necessary for safety reasons or other objectives of the legislation’.21 The intermediate approach taken by the AI Act for GPAI models, however, which requires providers to take the necessary steps to comply with newly applicable rules, is entirely novel within this classic product safety structure.
16This novel approach reflects the AI Act’s evolution within the broader context of the EU’s strategy to regulate emerging technologies in a manner consistent with its established product safety and security framework.22 In its early stages, the legislative design was primarily oriented towards ensuring that AI systems, like other products in the internal market, complied with safety standards, closely mirroring the EU’s general product safety regulatory approach for physical goods.23 At that time, AI was conceptualised largely as a product component to be regulated within the familiar product safety structures, which is why the initial Commission proposal for the AI Act only concerned AI systems but not the underlying models.24 However, this legislative approach shifted markedly as public attention surged following the release of high-profile GPAI models, which is posited to have directly led to their inclusion within the scope of the AI Act.25
17This shift in public attention, and the corresponding evolution of the AI Act to include GPAI models, resulted in a unique approach to the transitional regime compared to the standard product safety regulatory method. On the one hand, the final and binding version of the GPAI temporal provisions still retains a largely product safety logic in that it requires objective, that is, results-based rather than conduct-based, compliance at the time of market placement, whereby ‘a subjective element is not relevant for this supervisory dimension’.26 On the other hand, the fact that providers of models placed on the market before 2 August 2025 are still expected to ‘take the necessary steps in order to comply’ by the end of the transitional period27 introduces, as suggested above, a sui generis approach to the transitional framework of product safety by mixing product-focused with entity-centred obligations.
2. Substance
2.1. Article 111(3)
2.1.1. Overarching considerations
18Assessed against the Blue Guide framework, Article 113 sets out a conventional transitional period by deferring the entry into application of Chapter V for approximately twelve months following the entry into force of the AI Act.28 Article 111(3), however, adds complexity by specifically delaying the application of obligations regarding models placed on the market before 2 August 2025 until the fixed date of 2 August 2027, rather than delaying them indefinitely as under the majority of other New Legislative Framework instruments.
19Thus, the temporal applicability under Article 111(3) can be interpreted as two-staged: (i) during the transitional period, there is, in effect, a duty of conduct placed on the provider to take the necessary steps to comply, for which purpose it is to be supported by the AI Office;29 and (ii) on the fixed final date of 2 August 2027, a result-based obligation arises, at which point models placed on the market before 2 August 2025 must comply with the applicable obligations under the AI Act.
20While there is no positive obligation for full compliance at any particular intermediate date before 2 August 2027, the question arises as to how compliance actions taken during the transitional period are to be assessed. More precisely, should the adequacy of those actions be evaluated by reference to the applicable rules and the relevant factual situation at the time the provider took the steps to comply, or at the fixed date of 2 August 2027?30 Under the former approach, Article 111(3) would be interpreted not as further deferring the entry into application under Article 113(b), but rather as providing an additional period during which enforcement against models placed on the market before 2 August 2025 is delayed. Such a reading may be supported by Article 111(3)’s focus on ‘necessary steps’ to comply ‘by’ the given date, rather than expressly delaying the entry into application of the full obligations under Chapter V for models placed on the market before 2 August 2025. Conversely, the provision may be interpreted as a lex specialis applicability rule taking precedence over the general applicability rule in Article 113(b), whereby compliance is assessed with reference to the full applicable legal and factual situation on 2 August 2027.
21Considering the divergent regimes that would arise for different providers under the first reading, depending on the date on which they took their compliance actions, as well as the general purposes and goals of the Act’s transitional regime, the second reading seems preferable at the time of writing (November 2025).
2.1.1.1. GPAI models placed on the market before 1 August 2024
22As an overarching consideration, it must also be determined which GPAI models placed on the market before 2 August 2025 fall within the scope of Article 111(3). The AI Act was published in the Official Journal of the European Union on 12 July 2024.31 Proceeding from the general rule established in Article 297(1) TFEU32 and restated in Article 113 AI Act, the regulation entered into force on the twentieth day following the date of its publication. Thus, the AI Act entered into force on 1 August 2024.
23Consistent with the principle of non-retroactivity, some authors have suggested that compliance is required only for those models that were placed on the market between 1 August 2024 and 2 August 2025.33 In short, the principle of non-retroactivity covers situations where a rule is introduced and applied to events which have already been concluded: ‘[t]his can occur either where the date of entry into force precedes the date of publication, or where the regulation applies to circumstances that have been concluded before the entry into force of the measure’.34
24The Court of Justice of the EU (CJEU) has further clarified that:
provisions of Community law have no retroactive effect unless, exceptionally, it clearly follows from their terms or general scheme that such was the intention of the legislature, that the purpose to be achieved so demands and that the legitimate expectations of those concerned are duly respected.35
25The above considerations concerning non-retroactivity point to a conclusion that the AI Act, in general, and Article 111(3), in particular, may impose obligations only in relation to models that were placed on the market after entry into force of the AI Act on 1 August 2024.
26This approach to retroactivity follows from treating the AI Act primarily as a product safety regulation. While all regulatory obligations are ultimately borne by persons (natural or legal), a product safety framework is first and foremost centred on the safety characteristics of the underlying products.36 Specifically, product safety regulation is aimed at preventing design or manufacturing dangers and defects (so-called pre-marketing product safety regulation), supplemented by post-marketing obligations, such as recall orders or requiring certain reporting to authorities.37 As explained above, the New Legislative Framework’s Blue Guide envisions two main approaches to treating products that were already on the market prior to the entry into force of new harmonisation rules: either permitting them to remain on the market until existing stocks are depleted or, when safety concerns prevail, ordering a full recall.38 The second case does not necessarily offend the concept of retroactivity: if a full recall is required for products placed on the market before the entry into force of the legislative act, the apparent retroactivity of the measure, which would otherwise be in direct conflict with legitimate expectations, can be justified on the public interest ground of ensuring safety per the case law cited above.39
27Conversely, the AI Act’s provisions on GPAI model providers may be understood not only as product safety measures directed at ensuring certain model characteristics, but rather as continuous obligations on the organisation of the entity that has placed a GPAI model on the market. Such a reading can be understood as a sui generis fusion of product safety regulation with what is referred to elsewhere in this commentary as ‘entity regulation’.40 The concept of ‘entity regulation’ can be understood as a series of norms that governs the behaviour of entities belonging to a defined class by virtue of a shared characteristic – in this case, entities that have placed a GPAI model on the market or otherwise qualify as providers. Unlike pure product safety obligations imposed on manufacturers, the obligations under this reading do not attach directly to the manufacturing process or the product’s qualities but to the provider’s conduct in relation to the product.41 Under such a reading, it can be argued that new harmonised legislation which, for the first time, imposes certain organisational obligations on GPAI model providers as a defined entity class, need not rely on the same safety justifications to bind entities that entered the class before the new regulation entered into force (provided they remain within the class after the transitional period).
28For example, under the General Data Protection Regulation (“GDPR”),42 which can be considered a form of entity regulation,43 data processors must continuously comply with its requirements, regardless of when they first began processing personal data, for so long as they remain processors.44 By analogy, a GPAI model provider, by virtue of placing and continuously offering a model on the market, must meet organisational obligations (e.g. transparency, respect for copyright reservations, and risk assessment and mitigation) after entry into force. Under this interpretation, those obligations would not be treated as retroactive product safety requirements imposed on a model placed on the market before entry into force on 1 August 2024, but as forward-looking obligations on the provider, contingent on its decision to continue offering the model. For this reason, explicit public interest justification may not be necessary to impose these new obligations on the provider after the end of the transitional period. This interpretative approach, treating the AI Act as a sui generis fusion of product safety regulation and the proposed concept of ‘entity regulation’, represents merely one possible way of thinking about the idiosyncratic characteristics of the AI Act compared to other legislative texts in the New Legislative Framework.45
29Proceeding from the foregoing, the compliance position of providers of GPAI models placed on the market before 1 August 2024 turns on whether Chapter V of the AI Act is understood as product safety regulation proper or as substantially falling under what this commentary terms ‘entity regulation’.46 In the former case, retroactivity is, in principle, prohibited unless the AI Act provides adequate justification. Conversely, if characterised primarily as ‘entity regulation’, imposing obligations on GPAI providers in respect of their models placed on the market before 1 August 2024 seems not to raise the same retroactivity concerns. The practical significance of this question is uncertain, given that the pace of GPAI development is such that by 2 August 2027, when providers of GPAI models placed on the market before 1 August 2024 might be required to comply, such models are likely to have limited commercial value.47 Nevertheless, where the treatment of models placed on the market before 1 August 2024 is identified as a concern, the present chapter addresses it expressly.48
2.1.1.2. Modifications during the transitional period
30An additional consideration that must be taken into account throughout the entire discussion of Article 111(3) AI Act is that of model modifications.
31It is foreseeable that during the extended transitional period provided for in Article 111(3), GPAI models already placed on the market may continue to be fine-tuned, updated, or otherwise modified.49 Whilst on the market, modifications to GPAI models may be introduced not only by the original provider but also by downstream actors. This raises the question of how the general temporal applicability of obligations is affected when a model placed on the market before 2 August 2025 is subsequently modified and if this is different depending on whether the initial provider introduced the modifications or they were made by a downstream actor.
32The AI Act’s binding provisions do not set out an express framework governing modifications to GPAI models, unlike the detailed regime for substantial modifications to high-risk AI systems.50 In fact, the sole textual reference to GPAI model modifications appears in recital 97 of the AI Act, which only addresses modifications by downstream actors, but not changes introduced by the initial provider. Therefore, questions about how obligations are allocated in connection with modifications turn on the interpretation of general concepts in the AI Act, such as when a new model is considered placed on the market and who qualifies as a provider. The Commission’s July 2025 Guidelines on the scope of the obligations for general-purpose AI models established by the AI Act (the Guidelines) seek to clarify many of these issues for both initial providers and downstream actors, which are dealt with respectively in Section 2.2. on the lifecycle of GPAI models and Section 3.2. on downstream modifiers as providers of GPAI models.51 Accordingly, the present discussion of modifications to models placed on the market before 2 August 2025 largely draws on the interpretation of the Guidelines. It should, however, be borne in mind that the Guidelines are non-binding (save on the Commission itself, as explained below) and do not necessarily represent a definitive interpretation of the AI Act.52 Such conclusive interpretation may only be given by the CJEU, which may ultimately adopt a different approach to modifications.53
33Nevertheless, although the Guidelines are non-binding on private parties and the courts, the settled case law of the CJEU recognises that such soft-law, interpretive guidance can produce a self-binding effect on the Commission.54 In short, upon publication of the Guidelines, insofar as the Commission has limited its own interpretative discretion, it is required to act according to its own guidance ‘at the risk of being found to be in breach of general principles of law, such as equal treatment or the protection of legitimate expectations’.55 Therefore, the Guidelines can be taken as the de facto applicable interpretation for the time being, and the present analysis proposes alternative readings only where these would materially enhance the analysis.56
34The Guidelines differentiate modifications to GPAI models by reference to the entity that has introduced them. In particular, the Commission distinguishes between: (i) modifications by the initial provider (or on its behalf), where modifications that do not involve a new large pre-training run are treated as part of the original model’s lifecycle rather than as a new model,57 and (ii) modifications by downstream actors, where the modifier is considered the provider of the modified model if the modification has given rise to a ‘significant change in the model’s generality, capabilities, or systemic risk’.58 The differentiated temporal treatment according to this delineation is discussed in detail in the following paragraphs.
35Under the Guidelines, a model’s lifecycle is framed broadly, such that it begins at the start of the large pre-training run, with subsequent developments by or on behalf of the provider forming ‘part of the same model’s lifecycle rather than giving rise to new models’.59 Therefore, in the event that modifications are introduced by the initial provider of the model placed on the market before 2 August 2025 during the transitional period of Article 111(3), there is a strong argument that those modifications would need to be assessed within the framework of the obligations placed on the provider during the model’s lifecycle. This interpretation is supported by the Guidelines’ discussion on Article 111(3), which expressly states that the AI Act covers the models placed on the market prior to 2 August 2025 ‘throughout their entire lifecycle’.60
36If the introduced modifications do not entail a large pre-training run, which the Commission considers necessary for a modified model to be classified as a new model placed on the market, then those modifications form part of that initial model’s lifecycle.61 It follows that the obligation of the provider to bring the model into conformity with the requirements of the AI Act under Article 111(3) would apply from 2 August 2027 to the initial model and any subsequently made modifications as part of its lifecycle. This is explained by the fact that those modifications would not amount to a new placing on the market, and thus all obligations regarding the model placed on the market before 2 August 2025 enter into applicability at the same time (i.e. on 2 August 2027).
37It is now necessary to consider the case of a downstream provider that modifies a GPAI model that is already placed on the market. In this scenario, the first step is to assess whether the modified model meets the Guidelines’ criteria to qualify as a new model placed on the market.62 Specifically, as stated above, the Commission considers a new model to be placed on the market only if the downstream modification has given rise to a ‘significant change in the model’s generality, capabilities, or systemic risk’.63
38. Where those criteria are not met, no new model has been placed on the market and the modifications are considered to form part of the original model’s lifecycle. This has two main consequences: (i) a systematic reading of the Guidelines suggests that the downstream modifier incurs no obligations regarding the modified model at any point in time;64 and (ii) the initial provider likewise incurs no further obligations if the modification cannot be considered to have been performed on its behalf.65 Such a situation may arise, for example, where a downstream modifier performs additional training or fine-tuning to improve a specific capability of the original model for its own use case, but the modification does not meet the relatively high thresholds for a ‘significant change’ under paragraphs 63 and 64 of the Guidelines.66 This creates a regulatory gap in which no actor has obligations under the AI Act with regard to the minor modifications introduced; for example, no one is obliged to draw up transparency documentation regarding the modification. Where the modification introduced by the downstream entity is, in fact, attributable to the initial model provider, the temporal applicability rules regarding a model’s lifecycle set out above apply.
39. If, however, the downstream modification does give rise to a ‘significant change in the model’s generality, capabilities, or systemic risk’,67 two consequences again follow: (i) the modified model will be classified as a new model and (ii) at the time of placing it on the market, the downstream modifier is considered the provider of that new model.68 If the modified model is a GPAI model without systemic risk, paragraph 68 of the Guidelines (based on recital 109 of the AI Act) prescribes that the scope of the modifier’s obligations is limited to the extent of the introduced modification:
the documentation required by Article 53(1), points (a) and (b), AI Act is limited to information on the modification, while the copyright policy required by Article 53(1), point (c), AI Act and the summary of the content used for training required by Article 53(1), point (d), AI Act are limited to the data used as part of the modification.69
40. As the modified model in this scenario is placed on the market after 2 August 2025, the extended deferral in Article 111(3) does not apply to the downstream modifier. Compliance with the AI Act is therefore required of the downstream modifier immediately upon placing the modified model on the market.
41. Importantly, given that the obligations are confined to the extent of the introduced modifications, there is a good argument that the downstream modifier must satisfy them even if the initial provider has not yet fulfilled its corresponding obligations. Considering the nature of the obligations under Article 53(1)(a) to (d), as well as Article 54, there is no apparent barrier to, or exemption from, downstream compliance in the absence of compliance by the initial provider. In particular, there is no indication, whether in the AI Act or in the currently published Guidelines, that the documentation or copyright requirements in Article 53, as applied to the downstream modifications, depend on prior compliance by the upstream initial provider. Rather, the downstream modifier must comply with its obligations from the moment it is considered a provider of the modified model. The precise consequences for the different sets of obligations imposed on providers of models placed on the market before 2 August 2025 are discussed in the relevant sections below.
2.1.2. Documentation and transparency obligations under Article 53(1)(a) and (b)
42. Providers who have placed a GPAI model on the market before 2 August 2025 must fully comply with the documentation and transparency obligations contained in Article 53(1)(a) and (b) by 2 August 2027.70 The precise substance of these obligations, namely to maintain technical documentation for provision to the AI Office and national authorities under Article 53(1)(a) and to ensure transparency to downstream providers under Article 53(1)(b), is analysed in the chapter on Article 53 of the present commentary and is not repeated here.71 The current section covers the temporal dimension of those obligations, that is, the date from which the provider of a model placed on the market before 2 August 2025 must comply. It covers only the temporal aspects applicable to GPAI models without systemic risk placed on the market before 2 August 2025. The specific obligations applicable to providers of GPAI models with systemic risk placed on the market before 2 August 2025 are discussed in Section 2.1.5.
2.1.2.1. Integration into high-risk AI systems
43. Because the models falling under Article 111(3) are already on the market, they may be freely integrated into AI systems during the additional transitional period until 2 August 2027.72 There are therefore significant benefits to early compliance with the documentation and transparency requirements, as it improves downstream providers’ access to the information needed to meet their own obligations.73 However, neither the AI Act nor the Transparency Chapter of the Code of Practice contains any express obligation or incentive to that effect. This risks creating compliance difficulties, especially for providers of high-risk AI systems placed on the market between 2 August 2026 and 2 August 2027 where the system is built upon a GPAI model placed on the market before 2 August 2025.74
44. In particular, in the absence of compliant transparency documentation under Article 53(1)(b) produced by the GPAI model provider, high-risk AI system providers may struggle to meet their own documentation obligations under Article 18 AI Act. More importantly, understanding the underlying GPAI model’s ‘capabilities and limitations’75 is essential for high-risk AI system providers to establish an appropriate risk management system under Article 9 AI Act. Without specific transparency carve-outs from the extended transitional period for GPAI models placed on the market before 2 August 2025 as regards their integration into high-risk AI systems, the resolution of this issue is left entirely to market dynamics.76 The legislator appears to have assumed that high-risk AI system providers will require GPAI model providers to furnish the information necessary for their own compliance as a precondition of integrating the GPAI model in question.
45. This market-centric approach creates a supervision challenge in verifying whether providers of high-risk AI systems have fulfilled their obligations, particularly the identification and analysis of risks, their estimation and evaluation (including probability of occurrence), and the adoption of appropriate and targeted risk management measures.77 It must be borne in mind that Article 111(3) does not defer the AI Act’s general applicability under Article 113 as regards the AI Office’s supervisory powers (which commence on 2 August 2026).78 Rather, it specifically defers the obligations imposed on GPAI model providers that placed models on the market before 2 August 2025.79
46. Accordingly, while such a GPAI model provider is not obliged to have fully compliant Article 53(1)(a) documentation before 2 August 2027, it seems possible for the AI Office to request whatever documentation and information is available to the provider for the purpose of assisting national authorities under Article 75(3) in verifying the compliance of the relevant high-risk AI system.80 However, Article 91(1) AI Act limits information requests by the AI Office to (i) ‘documentation drawn up by the provider in accordance with Articles 53 and 55’ and (ii) ‘any additional information that is necessary for the purpose of assessing compliance of the provider’ with the AI Act. The AI Office may not compel the provider to produce documentation beyond its obligations under the AI Act at the time of the request.81 On this reading, a provider could refuse to supply documentation it is not yet required to maintain, and there would be no legal basis for the AI Office to impose a fine (under Article 101 AI Act) in such circumstances. If such a situation arises, the question is whether a national authority’s inability to verify the high-risk AI system’s compliance, even after seeking assistance from the AI Office, can ground enforcement action against the system provider and, if so, what measures are permissible, including whether recall or market withdrawal of the system may be required. Those questions are outside the scope of the present analysis but represent an important research avenue.
2.1.2.2. Modifications to initial model
47. When a GPAI model placed on the market before 2 August 2025 is modified before 2 August 2027, it is necessary to consider when and how the documentation and transparency obligations regarding that modification must be fulfilled. As set out above,82 the main delineating consideration is whether the modification has been introduced by the initial provider of the model (including on its behalf) or by a downstream modifier.83
48. As explained above,84 where modifications are made by or on behalf of the initial provider, the current Commission Guidelines take the view that they form part of the same model’s lifecycle, unless they include a new large pre-training run.85 In this context, modifications that do not amount to developing and placing a new model on the market require, in principle, the initial provider to meet its documentation and transparency obligations within the lifecycle framework for the initial model. Specifically, Article 53(1)(a) and (b) require providers to keep the technical and transparency documentation for authorities and downstream actors up to date. The Guidelines interpret this as requiring regular updates throughout the model’s lifecycle, including for any modifications that form part of it.86 This general interpretation of lifecycle obligations must be read together with the Guidelines’ specific interpretation of Article 111(3), under which models placed on the market before 2 August 2025 enjoy the benefit of that provision throughout their lifecycle before 2 August 2027.87 On a systematic reading of the current Guidelines, therefore, the provider need not complete updates during the transitional period at the moment each modification is introduced, but must instead ensure that all modifications are appropriately reflected in the documentation it is required to prepare under Article 53(1)(a) and (b) by 2 August 2027.
49. As also noted above, where modifications are introduced by a downstream modifier, the Guidelines treat the downstream modifier as the provider of a new model if the modifications have given rise to a ‘significant change in the model’s generality, capabilities, or systemic risk’, assessed by reference to the compute used for the modification.88 If this criterion is met, the Guidelines deem a new model to have been placed on the market within the meaning of Article 3(9) after 2 August 2025, and the compliance deferral under Article 111(3) is therefore inapplicable to that new model. Nevertheless, recital 109 AI Act and paragraph 68 of the Guidelines provide that, notwithstanding the placing on the market of a new model, the downstream modifier’s obligations under Article 53(1)(a) and (b) are limited to the ‘information on the modification’.89 Although such a narrowing of the downstream modifier’s obligations is not explicit in the binding provisions of the AI Act, the approach is reasonable from both a formal legal and a practical perspective: downstream modifiers are not in a position to provide sufficiently detailed or accurate information about the initial model, only about the modification they introduced. Because the downstream modifier is expected to satisfy the documentation and transparency obligations only with respect to the modification, there is no legal basis to postpone its compliance until the initial provider’s deadline of 2 August 2027. Accordingly, the requisite documentation must be in place at the time the downstream modifier places the modified model on the market.
2.1.3. Copyright compliance policy and training-content summary obligations under Article 53(1)(c) and (d)
50. The requirement to bring GPAI models placed on the market before 2 August 2025 into compliance by 2 August 2027 has raised the most uncertainty in relation to the copyright obligations placed on providers.90 In contrast to the transparency and documentation obligations, which can, at least in principle, be prepared after market placement, the copyright-related obligations are tied most closely to actions taken during the training of the GPAI model. This has raised the question of whether compliance with the requirements of Article 53(1)(c) and (d) before 2 August 2027 requires retraining or unlearning of GPAI models placed on the market before 2 August 2025. The issue is especially pertinent to the requirement to ensure that reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790 (“CDSM Directive”), that is, the so-called text and data mining (“TDM”) opt-outs by rightsholders, have been respected during training.91
2.1.3.1. (Non-)Existence of obligation to retrain or unlearn
51. Some authors have suggested a narrow reading of both the substance of Article 53(1)(c) and its temporal dimension under Article 111(3).92 On this narrow interpretation, Article 53(1)(c) requires providers primarily to adopt internal policies prescribing the steps needed to respect rightsholders’ TDM opt-outs during training, but does not oblige them to retrain or unlearn models already placed on the market before 2 August 2025. According to this view, it is sufficient to ensure that only future training is conducted under those policies.93
52. By contrast, other authors defend a broader interpretation of the obligation to respect TDM opt-outs and its temporal reach.94 On this view, Article 111(3) operates prospectively with respect to the provider’s implementation of the adopted policy and also provides a grace period, after which full compliance of the model placed on the market and of its training data must be demonstrated. Notably, this view has been espoused by one of the co-chairs of the working group that developed the Chapter on Copyright Policy of the Code of Practice adopted under Article 56 of the AI Act:95
Thus, 36 months after the entry into force of the AI Act, all [GPAI model] providers will have to demonstrate that they identified and respected state-of-the-art bot-exclusions when they trained the model and they must draw up and make publicly available a training content summary.96
53. To be clear, under this wide interpretation, upon expiry of the transitional period the provider must fully demonstrate that it has sufficiently identified and respected rightsholders’ TDM reservations, regardless of where and when the GPAI model was initially trained. It also requires publication of a training-content summary pursuant to Article 53(1)(d). Such a reading implies backward-looking, after-the-event verification tasks and, where non-compliance is identified, requires technical remedial measures such as selective retraining or unlearning. This reading raises further interpretive questions, such as whether the state-of-the-art technical measures to respect opt-outs are assessed (i) against the baseline at the time of initial training, (ii) at the time of assessment, or (iii) at the end of the transitional period. It also raises the question of which reservations must be taken into account, that is, whether reservations made after initial training but before or at the time of the compliance assessment must be considered. Both of those questions are discussed further below.
54. On the other hand, the narrow interpretation introduced above is difficult to reconcile with the spirit and purpose of Article 53(1)(c), since it reduces the obligation to a formal policy-adoption requirement, regardless of its effectiveness. The text of the AI Act itself offers little support for such a narrow reading. Specifically, regard must be had to recital 106, which states that one of the primary aims of the Article 53(1)(c) obligation is to ensure a level playing field between GPAI model providers who performed training in the Union and those operating in third countries where copyright protection for TDM may be weaker. This is also supported by the Code of Practice Chapter on Copyright Policy, under which signatories commit to reproducing and extracting only lawfully accessible copyright content97 and to identifying and complying with the rights reservations expressed by copyright holders.98 These measures are presented as substantive requirements over and above a merely formal requirement to have an appropriate copyright policy in place.99
55. Assuming the broader interpretation prevails, obvious practical questions arise, in particular whether and how GPAI model providers can demonstrate that they have fully respected rightsholders’ reservations. The Commission’s interpretation in the current Guidelines reads:
In particular, providers of general-purpose AI models placed on the market before 2 August 2025 are not required to conduct retraining or unlearning of models, where it is not possible to do this for actions performed in the past, where some of the information about the training data is not available, or where its retrieval would cause the provider disproportionate burden. Such instances must be clearly disclosed and justified in the copyright policy and in the summary of the content used for training.100
56. Notably, the Commission’s phrasing is negative: it frames the scope of compliance requirements by reference to the exception rather than to what it considers the baseline rule. Implicitly, by focussing on the exceptional circumstances that permit an exemption, paragraph 111 of the current Guidelines indicates that retraining or unlearning is, in principle, necessary to comply with opt-outs that were not sufficiently respected at the time of initial training. The choice to focus on potentially wide exemptions seems to acknowledge the criticism that retraining models already on the market entails substantial financial costs and potential sustainability concerns.101 However, combining the Commission’s lenient approach with what appears to be the imposition of positive obligations on providers, not envisaged in the AI Act itself, creates inconsistencies and potential problems of interpretation and enforcement. The discussion that follows identifies some of the key challenges with applying the interpretation posited by the Guidelines, as they stand at the time of writing (November 2025), and the likely grounds for judicial challenge, whilst acknowledging that the Commission may amend its reading in due course.
57. First, by imposing additional disclosure and justification requirements alongside the copyright policy and training-content summary, the approach taken by the Guidelines raises the question of whether the Commission had the competence to establish such quasi-obligations through non-binding executive guidance not sufficiently supported by the legislation’s text. Second, and relatedly, the Guidelines supply no criteria for assessing what constitutes a ‘disproportionate burden’,102 and none of the exemption criteria are articulated or explored in the binding provisions of the AI Act or even in the recitals. The Guidelines thus appear to confer notable discretion on the Commission (not provided for in the AI Act) to assess whether a given GPAI model provider has met the exemption criteria. It is foreseeable that this will provide fertile ground for contentious interactions between the Commission and providers and may lead to judicial challenges if the Commission takes enforcement action informed by the Guidelines’ approach. That said, where a provider relies on those exemptions and the Commission declines to take enforcement action, providers have no incentive to challenge the interpretation, and rightsholders lack a reviewable act capable of challenge under Article 263 TFEU. Without an appropriate vehicle for judicial challenge, the Guidelines would continue to represent the de facto authoritative interpretation.
As has been explained above, although non-binding on private parties and the courts, the Guidelines can, according to settled case law, have a self-binding effect on the Commission.103 The Commission will therefore be expected to respect the exemptions it has articulated in the Guidelines when exercising its enforcement powers under the AI Act, or risk a GPAI model provider arguing before the CJEU that the Commission has unlawfully failed to apply its own Guidelines.104
58. However, a situation can be foreseen in which a GPAI model provider neither undertakes retraining or unlearning to respect relevant opt-outs nor provides sufficient justification in its copyright policy and training-content summary. For the time being, under the current Guidelines, the sufficiency of the provider’s disclosure and justification is to be assessed by an unclear internal Commission standard and procedure. In principle at least, a GPAI model provider’s failure to satisfy the Commission as to its disclosure and justification for invoking an exemption could itself ground an enforcement decision, even a sanction in the form of a fine under Article 101 AI Act, rather than a mere absence of confirmation that opt-outs have been respected. In that event, the provider would have a clear incentive and standing to challenge the decision, thereby enabling the CJEU to assess whether the Guidelines’ approach to Article 111(3) is compatible with the AI Act. This judicial challenge scenario, however, carries non-negligible risks for providers, who would risk the CJEU overturning the Guidelines’ interpretation excluding retraining obligations in its entirety, and not only the exemption criteria thereunder.
59. The other, and for this reason more practically likely, route to challenging the Guidelines’ interpretation is the preliminary reference procedure,105 triggered by private actions brought by copyright holders. As discussed in the commentary on Article 53,106 some authors contend that the obligations set out in the AI Act constitute a Schutznorm, with the consequence that copyright holders derive rights directly from the AI Act to bring damages claims for copyright violations.107 The notion that the AI Act’s reference to the CDSM Directive yields horizontal direct effect for opt-outs is unsupported by the AI Act’s wording and the general principles of copyright law; the more orthodox analysis is that ‘non-compliance does not automatically amount to copyright infringement, although it could still lead to administrative fines under the AI Act’.108 Moreover, recital 108 suggests that not every individual copyright infringement in training TDM would amount to a violation of the obligation under Article 53(1)(c).109 Nonetheless, some rightsholders are still likely to attempt this argument, particularly given that current and anticipated litigation against GPAI model providers is primarily concerned with training and output copyright infringements.110 For the purposes of the present analysis, the merits of such private law claims are largely irrelevant; what matters is that the claimants’ arguments are likely to hinge on a maximalist interpretation of the AI Act, including of the obligations on GPAI model providers falling within the scope of Article 111(3). Any ensuing litigation could result in an authoritative interpretation of Article 111(3) by the CJEU under the preliminary reference procedure of Article 267 TFEU, which might conflict with that of the current Guidelines.
60. Judicial review in either of the two potential CJEU pathways described above would need to address whether the framing of the temporal transitional obligation in Article 111(3), that is, as a requirement to ‘take the necessary steps’,111 can be read as limiting the scope of compliance to those actions that are strictly necessary, understood as measures that are not prohibitive or unduly burdensome following a proportionality test. The CJEU would then need to decide whether Article 111 confers sufficient executive discretion on the Commission to, in effect, permit a non-compliant GPAI model placed on the market before 2 August 2025 to remain on the market, subject to the provision of justifications in the copyright policy and summary. It is foreseeable that the CJEU may also be asked to determine whether sanctions or other enforcement measures may be based purely on the failure to meet the additional positive quasi-obligations introduced by the (current) Guidelines. In examining this question, even if the CJEU recognises and endorses the Commission’s discretion and the additional conditions it has imposed, the court would be in a position to introduce its own proportionality criteria for assessing a ‘disproportionate burden’.112 Pending judicial determination, the range of possible outcomes increases uncertainty regarding the rules applicable to the transitional regime; a binding judgment, however, would definitively settle many, if not all, of the questions raised, providing legal certainty even if it results in narrower or revised exemption criteria compared with those currently articulated by the Commission in the Guidelines.
61. Finally, it should also be established whether models placed on the market between the entry into force of the AI Act on 1 August 2024113 and the entry into applicability of Chapter V on 2 August 2025,114 but trained before 1 August 2024, must also be brought into compliance. In this instance, as noted above, the AI Act follows the product safety logic whereby the date of placing on the market is the operative date for the applicability of obligations.115 Thus, regardless of whether the AI Act applies to GPAI models placed on the market before 1 August 2024, for models placed on the market after that date the time when training was planned, took place, or concluded is irrelevant. The legal consequences appear straightforward: those models fall within the scope of Article 111(3), and their providers must, by 2 August 2027, assess the compliance of the training data and retrain if non-conformity with opt-outs is established or, alternatively, provide the required justifications under the Guidelines. That said, this approach might conflict with the general principle of legitimate expectations and the related case law, which holds that legitimate expectations are not respected where ‘the measures adopted, although foreseeable, were introduced at a time when they could no longer be taken into account in formulating investment decisions’.116 Providers may argue that when they decided to invest in training activities no regulation was in force, and that having either to limit the lifecycle of a model based on such training to 2 August 2027 or to undertake retraining breaches their legitimate expectations.
However, private actors’ legitimate expectations are not absolute, and a court would need to balance the competing interests.117 As with GPAI models placed on the market before 1 August 2024, it is debatable whether this issue has significant practical impact, given the rapid pace at which new, more capable models are released in order to compete effectively on the market.118
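The placing-on-the-market logic discussed in this subsection can be reduced to a simple date rule. The following sketch is purely illustrative of that reading (the function name and structure are the author's own device, not anything prescribed by the AI Act); the dates are those of Articles 111(3) and 113 as discussed above:

```python
from datetime import date

# Dates drawn from the AI Act as discussed in this commentary. The function
# below is an illustrative sketch only, not a normative classification tool.
CHAPTER_V_APPLICABLE = date(2025, 8, 2)   # Chapter V (GPAI obligations) applies
ART_111_3_DEADLINE = date(2027, 8, 2)     # extended deadline for pre-existing models

def gpai_compliance_date(placed_on_market: date) -> date:
    """The operative date is the placing on the market, not the training date:
    models placed before 2 August 2025 benefit from Article 111(3)'s deferral;
    models placed on or after that date must comply immediately upon placement."""
    if placed_on_market < CHAPTER_V_APPLICABLE:
        return ART_111_3_DEADLINE
    return placed_on_market

# A model trained before entry into force but placed on the market on
# 1 September 2024 still falls under Article 111(3) (see paragraph 61).
print(gpai_compliance_date(date(2024, 9, 1)))   # 2027-08-02
print(gpai_compliance_date(date(2025, 9, 1)))   # 2025-09-01
```

The sketch makes visible why the training date is legally irrelevant on this reading: only `placed_on_market` enters the rule.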
2.1.3.2. Timing of reservations’ expression
62. An important issue arising out of the transitional period for GPAI models already placed on the market concerns the timing of copyright reservations. The temporal effects of TDM opt-outs must be examined in two respects: (i) as a general point, whether only reservations made before the TDM activities took place must be taken into account, or whether there is an obligation to continuously monitor and respect subsequent opt-outs; and (ii) whether providers of GPAI models already placed on the market before 2 August 2025 must take into account all reservations made before 2 August 2027 or only those present at the time of training.
63. The first question appears largely settled, as discussed below, in favour of taking into account only those TDM reservations present at the time of a GPAI model’s initial training. Nevertheless, a brief examination of this discussion remains useful, if only to clarify the imperfect legal situation created by incorporating the private law instrument in Article 4(3) CDSM Directive into the AI Act’s ex ante obligations, and to elucidate the balancing of interests required to make this relationship work within the AI Act’s temporal logic as a public law instrument.
64. Establishing the legal consequences of the timing of a TDM reservation is more closely connected with interpreting the wording of Article 4(3) CDSM Directive than the text of the AI Act. The legislator chose to base the public law obligations of GPAI model providers under the AI Act on a reference to the CDSM Directive, rather than introduce AI Act-specific copyright requirements. Some authors have interpreted this as meaning that the delimitation of the scope of the obligation to respect opt-outs must be based on ‘the minimum content defined by the EU copyright directives’, with national implementing measures being irrelevant.119
65. Article 4(3) of the CDSM Directive is silent on the timing of reservations, and the cross-reference in Article 53(1)(c) AI Act likewise provides no limitations or caveats on its scope. While, notably, the Hungarian legislation transposing the Directive makes explicit that only ex ante opt-outs (i.e. those made prior to the TDM taking place) are covered,120 this national measure does not proceed directly from the minimum harmonisation rule. That said, the ‘fundamental’ logic of copyright law focuses on ex ante acts by rightsholders when it comes to authorisations, or opt-ins, for the use of their works.121 In the context of use for TDM, the CDSM Directive operationalises the opt-out as a mechanism conditioning a limitation, available to the rightsholder, rather than as an exclusive right, meaning that in the absence of a reservation the rightsholder effectively accepts the ‘“free use” of the proprietary subject matters for TDM purposes by others’.122 For GPAI models, this is preferable from a practical perspective: requiring compliance with ex post reservations (i.e. those exercised after the TDM has occurred) would be burdensome for providers. This has led to the popular conclusion that ‘only ex ante reservations could comply with the fundamental ideas behind reservation of rights’.123
66. While this is the prevailing opinion, authors have also warned against a narrow and strict application of the ex ante rule, pointing to practical considerations such as the fact that rightsholders often publish their protected works via third-party platforms over which they exercise little control.124 This can lead to situations where mining occurs before an effective reservation can be expressed.125 Taking into account the suggested goal of the relevant provision of the CDSM Directive to afford broad protection to rightsholders, a more nuanced reading whereby ‘ex post reservations shall not be automatically excluded from the scope of Article 4(3)’126 has some merit.
67However, it seems that this more nuanced reading is better suited to private actions by rightsholders for damages suffered due to a failure to respect their opt-outs than to the enforcement of public law obligations, where, in general, the principle of legal certainty takes priority.127 This is consistent with the approach adopted in the Copyright Chapter of the Code of Practice, which requires signatories to appropriately ensure that they inform rightsholders of the measures they have adopted ‘to identify and comply with rights reservations expressed pursuant to Article 4(3) of Directive (EU) 2019/790 at the time of crawling’.128 The wording of the Code of Practice thus confirms that a reservation must be made ex ante in order to affect any given TDM activity: opt-out detection and the corresponding measures are limited to the time of TDM, and no retroactive detection or removal is required for reservations expressed ex post.129
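The crawl-time logic described by the Code of Practice can be illustrated with a minimal sketch. It assumes, purely for illustration and without taking any position on whether this satisfies Article 4(3) CDSM Directive, that a rightsholder expresses a machine-readable reservation via robots.txt and that the provider’s crawler is named `ExampleAIBot` (both the file content and the crawler name are invented here):

```python
from urllib import robotparser

# Hypothetical robots.txt content: the rightsholder reserves all content
# against the (invented) TDM crawler "ExampleAIBot".
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(robots_txt)

# The reservation is assessed at the time of crawling: content disallowed
# now is excluded from TDM, but a reservation added only after crawling
# does not trigger retroactive detection or removal under the Code of
# Practice's ex ante approach.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The point of the sketch is the timing, not the protocol: whatever reservation mechanism is used, the Code of Practice only requires it to be detected and honoured at the moment the TDM takes place.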
68The question of reservations made between initial training and 2 August 2027 has been raised repeatedly in the recent literature discussing the copyright obligations under the AI Act.130 However, it has not received the same substantive analysis as the territorial reach of the opt-out obligations has.131 One explanation is the uncertainty among authors as to the precise scope of the copyright obligations faced by providers of GPAI models already placed on the market before 2 August 2025.132 Before the Guidelines, in the absence of a definitive interpretation as to whether this entailed a requirement to conform the models’ training to the relevant opt-out obligations or simply a forward-facing policy, it remained unclear whether the timing of reservations would even be an open question for models placed on the market before 2 August 2025. However, since the Guidelines were issued, as discussed in detail above, the debate has largely settled in favour of the predominant view that, for copyright, Article 111(3), prima facie, requires retraining to cure non-compliant TDM.133 That said, there remains the practical question of the timing of the reservations in those cases where the exemption justifications stated by the GPAI model provider are deemed insufficient by the Commission or where the CJEU decides to disapply or otherwise amend the exemptions contained in the Guidelines.134
69Following the discussion of the general nature of Article 111(3),135 it is argued that compliance will be assessed by reference to the applicable rules, and the understanding of the state of the art, at the final transitional date (2 August 2027), when a review of whether opted-out copyright works were included in the initial training data is to be conducted. This interpretative approach rests on the expectation that state-of-the-art techniques for identifying which works were included in the initial training data may have significantly improved by 2 August 2027. Even on that reading, however, it remains uncertain which date is relevant for assessing whether reservations were respected in the training: the date on which the training actually took place, or the deferral date for compliance, that is, 2 August 2027.
70The first possible reading is that compliance with opt-out obligations is assessed by reference to the date on which the GPAI model was initially trained. In that case, 2 August 2027 would only serve as the point by which compliance must be ensured, without setting any additional ex post reservation requirements. Providers would thus not be required to ensure conformity of the model’s training data with the reservations present on 2 August 2027; instead, the provider’s obligation would be assessed at the time of initial training. Accordingly, the training data would need to exclude any works for which a reservation within the meaning of Article 53(1)(c) had already been made at the time of initial training. This interpretation is in line with the baseline approach of the Guidelines136 because it still requires actual compliance with reservations, but only with those that were present when the initial training took place.
71A potential criticism of this approach is that Chapter V, which also contains the explicit reference to the CDSM Directive, entered into application only on 2 August 2025.137 It may be suggested that rightsholders who would otherwise have made reservations following the entry into application of Chapter V, if not allowed to opt out during the transitional period, would effectively be prevented from asserting their newly established right and from avoiding the inclusion of their copyrighted works in the training data of models placed on the market before 2 August 2025.138
72In line with this reasoning, an alternative reading would be that, even though the GPAI model was placed on the market before 2 August 2025, Article 111(3) introduces a legal fiction that, for the purposes of TDM compliance, the model is deemed to have been trained on 2 August 2027. The consequence of this interpretation is that the provider’s obligations are measured against the factual state of reservations on 2 August 2027, meaning that the training data would need to exclude all works for which an opt-out was expressed prior to that date. While maximising rightsholder protection, this reading would subject providers of models placed on the market before 2 August 2025 to retroactive and unequal treatment compared to providers placing models thereafter, who only need to respect ex ante reservations, that is, those present at the time of TDM.139
73For these reasons, the first reading would provide a higher level of interpretative consistency and seems preferable as it preserves legal certainty and legitimate expectations by avoiding disproportionate retroactive obligations. This reading is also better aligned with the Code of Practice’s demand for compliance at the time of TDM for the purposes of training across models placed on the market at different times, and would be easier to monitor.140 While it may afford a lower level of protection of rightsholders’ interests at the public law level of the AI Act, the approach seems to strike a sensible balance between competing interests and does not impair rightsholders’ ability to bring private actions for damages.141
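Purely by way of illustration, the practical difference between the two readings can be sketched as a filter over hypothetical training data. The data structures, work names, and dates below are invented for the example; nothing of the sort appears in the AI Act itself:

```python
from datetime import date

# Article 111(3) deferred compliance date for models placed on the
# market before 2 August 2025.
TRANSITION_DATE = date(2027, 8, 2)

def non_compliant_works(works, training_date, reading):
    """Works whose inclusion would breach the opt-out obligation.

    reading 1: only reservations predating the initial training count;
    reading 2: legal fiction that training occurred on 2 August 2027,
               so any reservation expressed before that date counts.
    """
    cutoff = training_date if reading == 1 else TRANSITION_DATE
    return [name for name, reserved_on in works
            if reserved_on is not None and reserved_on <= cutoff]

# Hypothetical corpus: (work, date on which an Article 4(3) CDSM
# reservation was expressed, or None if no reservation was ever made).
works = [
    ("work_a", date(2023, 5, 1)),   # reserved before initial training
    ("work_b", date(2026, 1, 15)),  # reserved during the transitional period
    ("work_c", None),               # never reserved
]
training_date = date(2024, 3, 1)    # hypothetical initial training date

print(non_compliant_works(works, training_date, reading=1))  # ['work_a']
print(non_compliant_works(works, training_date, reading=2))  # ['work_a', 'work_b']
```

Under the first reading only work_a would need to be excluded (or cured by retraining); under the second, the transitional-period reservation for work_b would also have to be honoured.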
2.1.3.3. Modifications to initial model
74The treatment of modifications made during the transitional period to GPAI models placed on the market before 2 August 2025, insofar as they affect providers’ copyright obligations, follows the framework for modifications set out in Section 2.1.1.2. of this commentary. For the purposes of the present discussion, the focus is on subsequent modifications that entail further training after 2 August 2025 of those GPAI models that have been placed on the market before 2 August 2025.
75In the event that such modifications are introduced by or on behalf of the initial provider and they do not lead to the placing of a new model on the market, then, based on the current Guidelines issued by the AI Office, they form part of the lifecycle of the model that was placed on the market prior to 2 August 2025.142 Considering that the AI Act does not establish rules on the differentiated treatment of training runs that follow the initial large training run, under the current interpretation of the Guidelines, compliance would be deferred until 2 August 2027 for those subsequent training runs completed within the same model’s lifecycle.143 The Guidelines adopt a comparatively lenient approach to the retraining of models already placed on the market by permitting providers to forgo retraining upon the provision of adequate justification.144 However, it is suggested that a different approach would be preferable for subsequent training iterations after 2 August 2025, even when falling within the same model’s lifecycle. This is because, as a matter of policy, such subsequent training runs would be completed after the entry into force and application of the obligations under Chapter V of the AI Act.145 Thus, GPAI model providers would have had the ability to familiarise themselves with the requirements for respecting appropriately expressed TDM reservations. Therefore, it is suggested that any training activities undertaken following 2 August 2025, regardless of the fact that they would benefit from the deferred applicability under Article 111(3), ought to respect reservations and not benefit from the exemptions contained in the (current) Guidelines.146
76In addition to the above, initial provider modifications to GPAI models placed on the market prior to the entry into force of the AI Act on 1 August 2024 also need to be considered. Must they be compliant with the Act? As outlined above, the principle of non-retroactivity could be interpreted to suggest that such models, including any modifications to them, fall entirely outside the AI Act’s scope.147 Some authors have argued, however, that while this might in principle be the case, if such a model was modified following the entry into force of the AI Act, then the latter could apply at least partially to those modifications.148 This argument is based on the settled case law of the CJEU, which states that:
a new rule of law applies from the entry into force of the act introducing it, and, while it does not apply to legal situations that have arisen and become final under the old law, it does apply to their future effects, and to new legal situations.149
77While such an interpretation seems equitable, it is difficult to reconcile with the concept of a model’s lifecycle under the current Guidelines, which deems any such modifications part of the original model.150 Considering that the initial model was placed on the market before the AI Act entered into force, and is therefore outside its scope, applying the product safety logic would suggest that the later training or modification would likewise fall outside the scope, since the relevant baseline is always the date of placing on the market.151 However, the AI Act’s temporal logic does not map neatly onto a traditional product safety framework.152 If, in this instance, the AI Act is understood as primarily concerned with regulating entities rather than products, the apparent contradiction can be resolved and obligations may lawfully attach to the provider conducting the modifications after entry into force, even if the underlying model predates it.153 If the initial provider’s modifications result in a new GPAI model being placed on the market, then regardless of when the original model was placed on the market, the provider will be required to comply fully with its obligations at the time of the new placing on the market.
78With regard to subsequent training after 2 August 2025 carried out by downstream modifiers, if the modification has resulted in the placing of a new model on the market, then a preferred interpretation of the Guidelines suggests that the modifier must ensure that the modifications comply with the applicable copyright obligations at the time of placing on the market and cannot rely on Article 111(3)’s deferral.154 Where the modification does not result in a new model, no additional obligations arise for the modifier.155 Further, if such a modification cannot be attributed to the initial provider, the AI Act likewise imposes no further obligations on that initial provider.156 Accordingly, minor subsequent training runs by downstream modifiers could contravene rightsholders’ reservations without offending the AI Act. However, some recourse is available: the public law exclusion does not prejudice rightsholders’ recourse under private law, such as claims for damages.157
2.1.4. Appointing an authorised representative under Article 54
79In addition to the obligations under Article 53, providers of GPAI models placed on the market before 2 August 2025 that are established in third countries must also ‘appoint an authorised representative established in the Union’ under Article 54(1).158 Whether and how Article 111(3) may affect this is examined below.
80First, it must be determined whether Article 54 places an ‘obligation’ on GPAI model providers within the meaning of Article 111(3). While the title of Article 53 explicitly refers to the fact that it governs ‘obligations’ of GPAI model providers, the title and text of Article 54 contain no such express reference. However, both Articles 53 and 54 sit within Chapter V, Section 2, of the AI Act, entitled ‘Obligations for providers of general-purpose AI models’. Moreover, Articles 55(1) and 93(1)(a) unambiguously treat Article 54 as (or as containing) an ‘obligation’.159 This interpretation also aligns with the Commission’s (current) Guidelines.160 A systematic reading, therefore, confirms that Article 54 constitutes an obligation within the meaning of Article 111(3), which in principle benefits from the temporal deferral contained therein for third-country providers of GPAI models placed on the market before 2 August 2025.
81In this context, it should be examined how the temporal deferral is operationalised, considering that Article 54(1) requires the appointment of an authorised representative ‘prior to placing’161 a GPAI model on the Union market. That requirement sits in immediate tension with Article 111(3), which by definition applies only to models already placed on the market. Neither the AI Act nor the Guidelines address this issue. It can be inferred that the purpose of requiring the appointment of a representative prior to the placing on the market is to ensure that the AI Office has an easily accessible and appropriately authorised Union contact point for the supervision and monitoring of third-country GPAI model providers from the moment the GPAI model is placed on the market.162 A practical reading of Article 111(3) together with Article 54 would therefore posit that a third-country provider whose GPAI model was placed on the market before 2 August 2025 must appoint an authorised representative before 2 August 2027, since that is the date on which all substantive obligations for that provider commence their applicability,163 and for practical purposes it can be treated as equivalent to the date of market placement.
82With that said, nothing prevents a GPAI model provider from appointing an authorised representative before that date.164 In fact, such an appointment is likely to benefit the provider, particularly considering the Commission’s acknowledgment that providers whose models were placed on the market before 2 August 2025 may experience challenges in fulfilling their obligations by 2 August 2027,165 and the AI Office’s stated commitment to support such providers in undertaking the necessary steps for compliance.166 Such support activities may well be extended to a GPAI model provider during the transitional period whether or not an authorised representative has been appointed. That said, the presence of an authorised representative in the Union during Article 111(3)’s transitional period is more likely to facilitate engagement and full compliance after 2 August 2027, especially given the representative’s role in verifying compliance with Article 53 (and, where applicable, Article 55)167 and in cooperating with the AI Office and national competent authorities.168
2.1.5. Obligations on GPAI models with systemic risk placed on the market before 2 August 2025
2.1.5.1. GPAI models with systemic risk under Article 51(1)(a)
83As a starting point, it is necessary to evaluate how providers of GPAI models with systemic risk placed on the market before 2 August 2025 that have high-impact capabilities under Article 51(1)(a) are to comply with the AI Act. Article 51(2) unambiguously introduces a presumption that models trained with a cumulative amount of computation measured in floating point operations greater than 10²⁵ have high-impact capabilities under Article 51(1)(a).169 Researchers estimate that some models placed on the Union market before 2 August 2025 have exceeded this threshold.170 Therefore, two important questions arise in relation to Article 111(3): (i) by which date should providers comply with the additional obligations for GPAI models with systemic risk under Article 55, and (ii) when should a provider notify the Commission of the fact that a given GPAI model has met the high-impact capability thresholds under Article 52(1), first sentence?
84The answer to the first question is reasonably clear: Article 111(3) grants an additional transitional period for compliance with obligations until 2 August 2027. This means that a provider of a GPAI model with systemic risk, classified as such on the basis of the condition under Article 51(1)(a) and placed on the market before 2 August 2025,171 has until 2 August 2027 to comply with the substantive obligations contained in Article 55, in addition to those under Articles 53 and 54.172 Considering that GPAI models with systemic risk are subjected to additional obligations, particularly the identification, assessment, and mitigation of systemic risks,173 precisely because of their expected higher negative impacts relative to GPAI models without systemic risk,174 the application of this transitional period is difficult to reconcile with the risk profile of those models and, therefore, with the risk-based ethos of the AI Act.175 Nevertheless, this policy compromise between legal certainty and risk mitigation appears to be a deliberate choice, considering that no exclusion from the additional transitional period under Article 111(3) has been introduced for GPAI models with systemic risk.
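For illustration, the Article 51(2) presumption and the resulting compliance timeline can be sketched as follows. Only the 10²⁵ FLOP threshold and the two dates are taken from the Act; the function names and the example figures are invented for the sketch and do not refer to any real model:

```python
from datetime import date

GPAI_APPLICATION_DATE = date(2025, 8, 2)     # Chapter V enters into application
DEFERRED_COMPLIANCE_DATE = date(2027, 8, 2)  # Article 111(3) deadline
HIGH_IMPACT_FLOP_THRESHOLD = 1e25            # Article 51(2) presumption

def presumed_high_impact(training_flop: float) -> bool:
    # A model trained with more than 1e25 FLOP is presumed to have
    # high-impact capabilities under Article 51(1)(a).
    return training_flop > HIGH_IMPACT_FLOP_THRESHOLD

def article_55_compliance_date(placed_on_market: date) -> date:
    # Models already on the market before 2 August 2025 benefit from the
    # Article 111(3) deferral; later models must comply from placement.
    if placed_on_market < GPAI_APPLICATION_DATE:
        return DEFERRED_COMPLIANCE_DATE
    return placed_on_market

print(presumed_high_impact(2e25))                    # True
print(presumed_high_impact(5e24))                    # False
print(article_55_compliance_date(date(2024, 6, 1)))  # 2027-08-02
print(article_55_compliance_date(date(2026, 1, 1)))  # 2026-01-01
```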
85The resolution of the second question is not as readily apparent from a literal reading of Article 111(3) alone. As with the discussion above on the interplay between Articles 54 and 111(3),176 the present analysis turns on first determining whether the requirement placed on the provider by the first sentence of Article 52(1), to notify the Commission once the high-impact capability condition is met (or it becomes known that it will be met), constitutes an obligation within the meaning of Article 111(3).
86Initially, one might note that the first sentence of Article 52(1) contains a procedural obligation rather than a substantive one. However, Article 111(3) speaks about ‘obligations’ broadly, and draws no distinction in consequences between the two categories. Therefore, a differentiated treatment of procedural and substantive obligations for providers of GPAI models with systemic risk solely on the basis of the obligations’ different legal character seems to lack concrete justification in Article 111(3).
87That conclusion, however, does not exhaust the inquiry into whether the notification required by the first sentence of Article 52(1) should benefit from the additional transitional period contained in Article 111(3). In particular, it is necessary to consider: (i) the systematic placement of the notification obligation within the structure of the AI Act, (ii) its meaning and legal effects, and (iii) the intended purposes and objectives pursued by that provision. These matters are addressed in that order below.
88Examined systematically, Article 52(1) sits within Section 1 of Chapter V, entitled ‘Classification rules’, whereas the substantive obligations for providers of GPAI models, including those with systemic risk under Article 55, are placed in Sections 2 and 3 of Chapter V, the titles of which explicitly state that they concern provider obligations.177 The case law of the CJEU, while not according the same legal effect to the titles of sections, chapters and articles as to the operative provisions themselves, has on numerous occasions treated such titles as interpretative aids, particularly for systematic interpretation across provisions included in or excluded from specific sections.178 The CJEU accordingly takes section and chapter titles into account unless the legislative act itself states that they are provided solely for ease of reference,179 which is not the case with the AI Act. Moreover, unlike the content of Articles 53, 54 and 55, which other binding provisions refer to explicitly as obligations,180 no such reference is made in respect of Article 52(1). The systematic placement of Article 52(1), therefore, can support the view that it should not be treated as an obligation within the meaning of Article 111(3).
89With that said, however, regard must be had not only to the grammatical considerations and systematic placement of Article 52(1), but also to its intended meaning and legal effects.181
90The first sentence of Article 52(1) uses the phrase ‘shall notify’, which indicates a positive obligation on the provider. Furthermore, the second sentence of Article 52(1) sets out requirements for the content of the notification, which may be read as specifying the necessary elements for the fulfilment of an obligation. Specifically, the provision requires the provider to collect and include the information that gives rise to the high-impact capability assessment.182 This can be understood as falling within the concept of ‘necessary steps’ required to comply with an obligation within the meaning of Article 111(3). This interpretation also appears to be shared by the Commission: the (current) Guidelines expressly refer to Article 52(1) as an ‘obligation’ to notify.183
91Relatedly, the legal consequences of non-notification include not only that the Commission may designate a GPAI model as having systemic risk under the third sentence of Article 52(1) in lieu of the absent notification,184 but also that non-notification itself may attract the imposition of a fine under Article 101(1)(a), irrespective of compliance with Article 55.185 This supports a conclusion that the first sentence of Article 52(1) lays down a distinct obligation producing legal effects independent of, and in addition to, giving rise to the application of Article 55. Under this interpretation, therefore, the first sentence of Article 52(1) should be treated as an obligation under Article 111(3).
92Finally, while keeping the foregoing in mind, it must be considered that Article 52(1), first sentence, contains a specific notification timeline, namely notification within two weeks after the high-impact capability thresholds are met or it becomes known that they will be met. Whether this entails an obligation to notify even before market placement is a contentious issue.186 The resolution of that debate bears directly on the interpretation of the relationship between Article 111(3) and Article 52(1), first sentence, insofar as it sheds light on the latter’s intended purpose when contrasted with Article 55.187
93The (current) Commission Guidelines support the view that a pre-market placement notification is required.188 Under this reading, GPAI models with systemic risk that are (or will be) placed on the market after 2 August 2025 have to be notified once it becomes reasonably foreseeable for the provider that the high-impact capabilities presumption threshold is likely to be met.189 This means that notification will be required before the date on which material obligations under Article 55 come into effect (i.e. the market placement date).190 This pre-market placement notification approach is supported by recital 112, sixth sentence, AI Act which states that early notification is ‘valuable for the AI Office to anticipate the placing on the market of general-purpose AI models with systemic risks and the providers can start to engage with the AI Office early on’.
94If the temporal applicability of the substantive obligations under Article 55 is distinct from the notification obligation for GPAI models with systemic risk placed on the market after 2 August 2025, an argument can be made that an analogous differentiation may be required under Article 111(3) for models placed on the market before 2 August 2025. This is explained by the fact that under this interpretation, the notification requirement pursues a set of objectives separate from and temporally antecedent to the substantive obligations applicable upon market placement. It follows that, by analogy, for GPAI models with systemic risk that have been placed on the market before 2 August 2025, notification under this interpretation should also occur before the Article 55 obligations become applicable on 2 August 2027.
95Precisely how long before 2 August 2027 notification should occur, however, is not easily determinable. Again, looking at GPAI models with systemic risk placed on the market after 2 August 2025, their notification should occur, at a minimum, when it becomes apparent that the high-impact capability thresholds have been met.191 For GPAI models with systemic risk placed on the market before 2 August 2025, however, this moment of reaching the high-impact capability threshold occurred before Article 52(1) even entered into application on 2 August 2025.192 It could be argued, therefore, that in line with the pre-market placement notification interpretation, notification for those models should occur at the earliest possible moment after entry into application of Chapter V or, in other words, within two weeks after 2 August 2025. Under this interpretation, GPAI models with systemic risk placed on the market before 2 August 2025 need to be notified within that two-week period, whereas the substantive obligations under Article 55 are deferred to 2 August 2027 by virtue of Article 111(3).
96On the other hand, if it is determined that no pre-market placement notification obligation exists in general, then, by the converse analogy, it could be concluded that the notification procedure and the Article 55 substantive obligations cannot be differentiated by reference to their intended purposes.193 If, for those GPAI models with systemic risk placed on the market after 2 August 2025, the date of placing on the market is the relevant date for operation of both the procedural notification obligation and the substantive obligations, Article 111(3) affords no basis for treating the obligations differently for models placed on the market before 2 August 2025. Under this interpretation, therefore, both Article 52(1) and Article 55 read in conjunction with Article 111(3) should become applicable concurrently on 2 August 2027.
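The two interpretations yield different notification deadlines for models placed on the market before 2 August 2025, which can be summarised in a short sketch. The function and its parameter are invented for the example; only the dates and the two-week period come from the discussion above:

```python
from datetime import date, timedelta

CHAPTER_V_DATE = date(2025, 8, 2)  # Chapter V enters into application
DEFERRED_DATE = date(2027, 8, 2)   # Article 111(3) compliance deadline

def notification_deadline(pre_market_notification_required: bool) -> date:
    """Article 52(1) notification deadline for a GPAI model with systemic
    risk placed on the market before 2 August 2025, under the two
    interpretations discussed above (illustrative only)."""
    if pre_market_notification_required:
        # Notification pursues purposes antecedent to Article 55, so it
        # falls due at the earliest possible moment: within two weeks of
        # Chapter V entering into application.
        return CHAPTER_V_DATE + timedelta(weeks=2)
    # Otherwise the procedural and substantive obligations defer together
    # under Article 111(3).
    return DEFERRED_DATE

print(notification_deadline(True))   # 2025-08-16
print(notification_deadline(False))  # 2027-08-02
```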
97While all three interpretative considerations analysed above find support in pre-existing CJEU case law in other areas, whether, and the extent to which, weight should be accorded to the systematic placement, the wording and effects, or the intended purposes of a specific provision is determined with regard to the particularities of the question being examined by the court.194 Attempting to perform this determination on Article 52(1) read together with Article 111(3), here, in the abstract, would be largely speculative and lack utility.
98Regardless of which date is ultimately determined to be applicable to the notification obligation, if the provider of a GPAI model with systemic risk pursuant to Article 51(1)(a) fails to notify, the Commission itself may designate the GPAI model as having systemic risk following the procedure of Article 52(1), third sentence.195 The designation decision does not change the deferred applicability of the material obligations under Article 55 read in conjunction with Article 111(3) for those GPAI models with systemic risk placed on the market before 2 August 2025 to a date other than 2 August 2027. However, as mentioned above, failure to notify in time (however that is determined) may give rise to a fine on that basis.196 Further, the designation decision may be accompanied by a penalty for failure to meet material obligations under Article 55 between the deferred date of applicability (2 August 2027) and the later date of the designation decision.197
2.1.5.2. GPAI model placed on the market before 2 August 2025, classified as a GPAI model with systemic risk under the designation mechanism of Article 51(1)(b)
99Even if a model does not meet the Article 51(1)(a) or 51(2) thresholds, it may still be classified as a GPAI model with systemic risk via Commission decision under Article 51(1)(b) if it has capabilities or an impact equivalent to those set out in Article 51(1)(a) assessed against the designation criteria in Annex XIII.198
100In light of legal certainty considerations, at the time of writing in September 2025, the Guidelines posit that a provider of a GPAI model with systemic risk so designated under Article 51(1)(b) is required to comply with the specific obligations concerning such models after it has been informed of the designation decision.199 However, a designation decision does not, in itself, amount to the placing of a new model on the market; rather, it only classifies a GPAI model that has already been placed on the market as one possessing systemic risk.200 Therefore, it can be concluded that a designation decision does not alter the date on which the model was placed on the market, which is the operative date for assessing the applicability of Article 111(3). Consequently, on a systematic reading of Article 111(3) together with Articles 51 and 52, a model placed on the market before 2 August 2025 and designated as a GPAI model with systemic risk during the transitional period must comply with all obligations, including those under Article 55, by 2 August 2027. If designation occurs after 2 August 2027, Article 111(3) no longer provides a deferral even if the model was placed on the market before 2 August 2025; thus, the provider will be required to comply with its obligations under Article 55 following receipt of the designation decision.201
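The timing rule just described reduces, for a model placed on the market before 2 August 2025, to a simple comparison against the transitional deadline. The following is an illustrative sketch only; the function name is invented:

```python
from datetime import date

DEFERRED_DATE = date(2027, 8, 2)  # Article 111(3) compliance deadline

def article_55_due_after_designation(designation_date: date) -> date:
    """For a model placed on the market before 2 August 2025 and later
    designated under Article 51(1)(b): if the designation falls within
    the transitional period, Article 55 compliance remains due on
    2 August 2027; if it falls after, compliance follows receipt of the
    designation decision."""
    return max(designation_date, DEFERRED_DATE)

print(article_55_due_after_designation(date(2026, 3, 1)))   # 2027-08-02
print(article_55_due_after_designation(date(2028, 1, 10)))  # 2028-01-10
```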
2.1.5.3. GPAI model placed on the market before 2 August 2025, modified into a GPAI model with systemic risk
101Under the current version of the Guidelines, only a new large pre-training run undertaken by the initial provider is treated as placing a new model on the market.202 All other modifications by or on behalf of the initial provider are considered to form part of the lifecycle of the original model.203 The Guidelines specifically note that different considerations apply when modifications are introduced by downstream actors.204 As already explained above, the Commission’s position is that a downstream modifier places a new model on the market where the introduced modifications result in a ‘significant change in the model’s generality, capabilities, or systemic risk’.205 While this dual approach is largely unproblematic for modifications to GPAI models placed on the market before 2 August 2025 that neither previously exhibited nor subsequently acquired systemic risk, it raises several issues where modifications by the initial provider result in a model that presents systemic risk, considering the importance of timely systemic risk assessment and mitigation.206
102Specifically, the Guidelines’ dual approach according to the identity of the modifier, rather than the nature of the modification,207 leads to a situation where: (i) if subsequent modifications made by the initial provider within the lifecycle of an original GPAI model that did not initially present systemic risk result in the emergence of systemic risk, compliance with the systemic risk-specific obligations is deferred to 2 August 2027, whereas (ii) if such subsequent modifications by a downstream modifier introduce systemic risk into a GPAI model that previously did not present it, compliance is required immediately upon placement of the modified GPAI model with systemic risk on the market. This dichotomy is explained in detail below.
103Modifications by the initial provider that do not include a large pre-training run, despite resulting in the emergence of systemic risk, will be considered by the Commission to represent part of the lifecycle of the initial model that was placed on the market prior to 2 August 2025.208 Therefore, the Commission’s current Guidelines result in Article 111(3) applying to the modified model, such that the original GPAI model provider would be required to meet the specific obligations related to a GPAI model with systemic risk only on 2 August 2027.209
104 Conversely, if a downstream entity introduces modifications to a GPAI model that was already placed on the market by an upstream GPAI model provider before 2 August 2025 and those modifications lead to the emergence of systemic risk, then the downstream modifier is deemed the provider of the modified model and must comply with Article 55 without any temporal deferrals.210
105 The resulting difference in the treatment of modifications giving rise to the emergence of systemic risk, depending on the entity introducing them, is difficult to reconcile with the binding provisions of the AI Act, which draw no distinction between modifications by the initial provider and those by a downstream modifier, chiefly because they do not regulate modifications to GPAI models at all. Rather, the AI Act’s definitions of ‘provider’,211 ‘general-purpose AI model’,212 and ‘placing on the market’,213 which the Guidelines are expected to follow and interpret when addressing modifications and describing their consequences, operate as unified concepts, without caveats or carve-outs depending on an entity’s position in the value chain.
106 One possible interpretative solution is to read the Guidelines as positing that a modification meeting the Guidelines’ ‘significant change’ threshold214 does not necessarily entail that a new model, as such, has been placed on the market within the meaning of Article 3(63) read in conjunction with Article 3(9). Rather, by operation of a legal fiction, the modified model is considered placed on the market only for the purposes of imposing the limited obligations on a downstream modifier set out in recital 109 AI Act.215 This reading finds some support in the Guidelines’ phrasing, which states that a ‘modifier becomes the provider of the modified general-purpose AI model’, rather than referring to the occurrence of a new placing on the market.216 This possible solution, however, is difficult to reconcile with the Guidelines’ specific approach to downstream modifiers who become providers of a GPAI model with systemic risk.217 In this regard, the Guidelines prescribe that such a modifier must comply with the full set of obligations applicable to providers of GPAI models with systemic risk, rather than a limited set of obligations related only to the introduced modifications,218 unlike the approach for modifiers of GPAI models without systemic risk under recital 109.219
107 Furthermore, this narrow and literal interpretation is not consistent with a systematic reading of the Guidelines’ general approach to downstream modifications. While the Guidelines’ text does not expressly state that a modified model constitutes a new model or a new placing on the market, it repeatedly contrasts the original model with the modified model.220 This suggests that the so-called ‘modified model’ is indeed considered a new model or, at the least, the subject of a new placing on the market.221 This is supported by the fact that there is no legal basis in the AI Act for imposing obligations on a downstream modifier unless it qualifies as a ‘provider’ within the meaning of Article 3(3), which in turn requires that the entity has placed a GPAI model on the market.
108 Therefore, it seems that the tension cannot be resolved through consistent interpretation of the Guidelines alone. As explained in detail above, the Guidelines can produce self-binding effects on the Commission’s exercise of interpretative discretion,222 which means that despite the apparent inconsistency in the treatment of systemic risk depending on whether it emerges from modifications to a GPAI model without systemic risk placed on the market before 2 August 2025 (i) by or on behalf of the initial provider or (ii) by a downstream modifier, the Commission will likely be obliged to apply its own reading. Under the former scenario, the initial provider will have to comply with the obligations for GPAI models with systemic risk by 2 August 2027; in the latter scenario, the downstream modifier will have to comply upon placing the modified GPAI model with systemic risk on the market. Consequently, only a revision of the Guidelines or a binding interpretation by the CJEU can alter the de facto standing of this interpretation as the applicable one.
2.2. Articles 111(1) and (2)
109 Article 111(1) defers the application of obligations to AI systems that are components of large-scale IT systems and were placed on the market or put into service before 2 August 2027 until 31 December 2030. Considering that any such systems that may be based on a GPAI model placed on the market before 2 August 2025 will need to achieve compliance well after the Article 111(3) deferral for GPAI models has expired,223 a discussion of Article 111(1) falls outside the scope of the present analysis.224
110 Article 111(2), first sentence, states that the AI Act is inapplicable to high-risk AI systems placed on the market before 2 August 2026 unless they undergo significant changes in their design after that date. Where such a high-risk AI system is based on a GPAI model placed on the market before 2 August 2025, a temporal tension may arise between the GPAI model provider’s deferred compliance date (2 August 2027) and a potential high-risk AI system provider’s obligation to comply between 2 August 2026 and 2 August 2027 following a significant change to its design in that period. These issues are substantively the same as those discussed in the subsection on documentation and transparency obligations for high-risk systems placed on the market after 2 August 2026 that rely on GPAI models placed on the market before 2 August 2025. Accordingly, these matters are fully addressed there and need not be repeated.225
2.2.1. Commission proposal to amend Article 111(2) and to add a new Article 111(4)
111 Additionally, on 19 November 2025, the Commission published a proposal to amend the AI Act as part of its Digital Omnibus Package aimed at simplifying certain measures of Union digital regulation.226 The proposal is subject to the ordinary legislative procedure and, at the time of writing (November 2025), it remains uncertain whether (or to what extent) its contents will be adopted.227 Accordingly, only the main points of the proposal are outlined herein.
112 Within its proposal, the Commission states that the general entry into application of the AI Act on 2 August 2026, and, in particular, the application of Sections 1, 2 and 3 of Chapter III AI Act containing substantive obligations for providers of high-risk AI systems, is likely to be impeded by the ‘delayed availability of standards, common specifications, and alternative guidance and the delayed establishment of national competent authorities’.228 Consequently, the Commission has proposed amendments to Article 113, whereby the application of Sections 1, 2 and 3 of Chapter III AI Act is preconditioned on ‘the adoption of a decision of the Commission confirming that adequate measures in support of compliance with Chapter III are available’.229 In connection with this, the proposal suggests amending Article 111(2) so that it applies to high-risk AI systems placed on the market before the date of entry into application of the relevant obligations under the amended Article 113,230 rather than placed on the market before the fixed date of 2 August 2026 as is the case under the enacted version of the Act. If adopted as proposed, these changes are likely to mitigate the potential conflicts, discussed above,231 between the entry into application of obligations for providers of high-risk AI systems based on GPAI models and for providers of those GPAI models. This is because the Commission’s proposal is aimed, in effect, at deferring the applicability date of high-risk AI system obligations. Consequently, even if a high-risk AI system is based on a GPAI model placed on the market before 2 August 2025, which benefits from the additional transitional period under Article 111(3), the obligations of the GPAI model provider to comply by 2 August 2027 are more likely to be satisfied before or at the same time as the obligations of Chapter III begin to apply to the AI system provider under the new proposed Article 113(3)(d).232
113 Chapter IV AI Act, which contains a single provision, Article 50, and sets out the transparency obligations for providers and deployers of certain AI systems, is unaffected by the proposed amendments to Article 113. Therefore, even if the Commission proposal is adopted, Article 50 would still apply from 2 August 2026. However, the Commission has highlighted the need for an additional transitional period to enable providers of AI systems that generate synthetic audio, image, video, or text to comply with Article 50(2) AI Act,233 which requires them to ‘ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated’.234 Accordingly, the proposal suggests adding a new paragraph 4 to Article 111, under which providers of such generative AI systems placed on the market before 2 August 2026 would benefit from an additional transitional period to comply with the Article 50(2) obligations by 2 February 2027 (six months from the date of entry into application).235
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L 1689/1 (“AI Act”). ↩︎
- Consider that the focus of the present commentary as a whole is on the AI Act’s rules on and application with relation to GPAI models and GPAI models with systemic risk. ↩︎
- AI Act, recital 9; see also paras 12–13. ↩︎
- AI Act, recital 9; European Commission, ‘New Legislative Framework’ <https://single-market-economy.ec.europa.eu/single-market/goods/new-legislative-framework_en> accessed 11 September 2025; for a general discussion on the relationship between the New Legislative Framework and the AI Act, see forthcoming chapter on Interpreting the AI Act through Systematic Analogies in this work. ↩︎
- Regulation (EC) 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93 [2008] OJ L218/30. ↩︎
- Decision 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC [2008] OJ L218/82. ↩︎
- Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 [2019] OJ L169/1. ↩︎
- Malte Stieper and Michael Denga, ‘The International Reach of EU Copyright through the AI Act’ (2024) 194 Beiträge zum Transnationalen Wirtschaftsrecht, Forschungsstelle für Transnationales Wirtschaftsrecht 1, 18. ↩︎
- Pier Giorgio Chiara, ‘Understanding the regulatory approach of the Cyber Resilience Act: Protection of fundamental rights in disguise?’ (2025) 62 European Journal of Risk Regulation 469, 470; for a detailed discussion on the legal character of the obligations contained in the AI Act more generally, see forthcoming chapter on Product, Model and Entity Regulation in this work. ↩︎
- European Commission, ‘New Legislative Framework’ (n 4). ↩︎
- See, for example, article 52(1) of Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing Directive 2006/42/EC and Council Directive 73/361/EEC [2023] OJ L 165/1; article 110(4) of Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU [2017] OJ L 117/176; article 120(4) of Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC [2017] OJ L 117/1; article 47(1) of Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC [2016] OJ L 81/51. ↩︎
- Commission Notice, The ‘Blue Guide’ on the Implementation of EU Product Rules 2022 [2022] OJ C247/1; for more details on the framework of the Blue Guide, see para 15; for a general discussion on the interpretative value of the Blue Guide in relation to the AI Act, see forthcoming chapter on Interpreting the AI Act through Systematic Analogies in this work. ↩︎
- Regulation (EU) 2024/2847 of the European Parliament and of the Council of 23 October 2024 on horizontal cybersecurity requirements for products with digital elements and amending Regulations (EU) No 168/2013 and (EU) 2019/1020 and Directive (EU) 2020/1828 (Cyber Resilience Act) [2024] OJ L 2024/2847 (“CRA”). ↩︎
- Chiara (n 9) 471. ↩︎
- ibid 476. ↩︎
- See Section 2.1.1.2. ↩︎
- CRA, art 69. ↩︎
- Blue Guide (n 12). ↩︎
- ibid 31–32. ↩︎
- See, for example, cited provisions of different New Legislative Framework legislative acts in n 11. ↩︎
- Blue Guide (n 12), 31–32. ↩︎
- Ioannis Revolidis, ‘Regulating General Purpose Artificial Intelligence (GPAI) within the EU AI Act: Challenges and Considerations’ (2024) <http://dx.doi.org/10.2139/ssrn.5122935> accessed 11 September 2025, 3. ↩︎
- ibid. ↩︎
- European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ COM (2021) 206 final. ↩︎
- Revolidis (n 22) 3. ↩︎
- Stieper and Denga (n 8) 18. ↩︎
- AI Act, art 111(3). ↩︎
- Blue Guide (n 12). ↩︎
- European Commission, ‘Annex to the Communication to the Commission – Approval of the content of the draft Communication from the Commission – Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act)’ C(2025) 5045 final para 110. ↩︎
- This question is most pertinent with relation to timing of reservations against text-and-data mining and is discussed in further detail in Section 2.1.3.2.2. ↩︎
- AI Act, OJ L, 2024/1689, 12.7.2024. ↩︎
- Consolidated version of the Treaty on the Functioning of the European Union (“TFEU”) [2012] OJ C326/1. ↩︎
- Eleonora Rosati, ‘Infringing AI: Liability for AI-Generated Outputs under International, EU, and UK Copyright Law’ (2024) 16 European Journal of Risk Regulation 603, 613. ↩︎
- Paul Craig and Gráinne de Búrca, EU Law: Text, Cases, and Materials (8th edn, Oxford University Press 2024) ch 16, 589 refers to this view as ‘actual retroactivity’. ↩︎
- Case T-357/02 Freistaat Sachsen v Commission [2007] ECR II-01261 para 98 and case law cited therein. ↩︎
- See discussion in forthcoming chapter on Product, Model and Entity Regulation in this work. ↩︎
- Luke Nottage, ‘Product Safety Regulation’ in Geraint Howells, Iain Ramsay and Thomas Wilhelmsson (eds), Handbook of Research on International Consumer Law (2nd edn, Edward Elgar 2018) 233. ↩︎
- See Section 1.2., para 16. ↩︎
- Freistaat Sachsen v Commission (n 35) para 98. ↩︎
- See discussion in forthcoming chapter on Product, Model and Entity Regulation in this work. ↩︎
- ibid. ↩︎
- Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1 (“GDPR”). ↩︎
- See forthcoming chapter on Product, Model and Entity Regulation in this work. ↩︎
- Article 2(1) GDPR sets a wide material scope, and article 99(2) GDPR prescribes one single date for entry into application regarding all covered entities two years after entry into force. Recital 171 GDPR specifically provides that processing already underway on the date of entry into application must be brought into conformity with the new harmonised rules of the GDPR, irrespective of when processing began. ↩︎
- See Section 1.2. for an overview of other differences in the transitional temporal regime of the AI Act compared to other New Legislative Framework instruments. ↩︎
- See forthcoming chapter on Product, Model and Entity Regulation in this work. ↩︎
- Jaime Sevilla and Edu Roldán, ‘Training Compute of Frontier AI Models Grows by 4–5x per Year’ (Epoch AI, 28 May 2024) <https://epoch.ai/blog/training-compute-of-frontier-ai-models-grows-by-4-5x-per-year> accessed 26 September 2025. ↩︎
- See, for example, the discussion in Section 2.1.3.1., para 61. ↩︎
- For a detailed general discussion on modifications to GPAI models see forthcoming chapter on Modifications in this work. ↩︎
- See, for example, AI Act, arts 3(23) and 25(1)(b). ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29). ↩︎
- ibid para 9. ↩︎
- ibid. ↩︎
- Case C-57/19 P Commission v Tempus Energy Ltd and Tempus Energy Technology Ltd. [2021] ECLI:EU:C:2021:663 para 143; Case C-654/17 P Bayerische Motoren Werke AG and Freistaat Sachsen v Commission [2019] ECLI:EU:C:2019:634 para 82; Case C-526/14 Tadej Kotnik and Others v Državni zbor Republike Slovenije [2016] ECLI:EU:C:2016:570 para 40 and case law cited therein. ↩︎
- Case C-11/22 Est Wind Power OÜ v Elering AS [2023] ECLI:EU:C:2023:765 para 31. ↩︎
- A more detailed discussion on modifications to GPAI models outside the current Guidelines’ interpretation is contained within forthcoming chapter on Modifications in this work. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 23. ↩︎
- ibid para 62. ↩︎
- ibid para 23. ↩︎
- ibid para 109. ↩︎
- ibid para 23. ↩︎
- ibid paras 61–64. ↩︎
- ibid para 62. ↩︎
- ibid para 62 read in conjunction with paras 68–69. ↩︎
- ibid para 23. ↩︎
- ibid paras 63 and 64; under para 63, a modification would lead to a new model being placed on the market where its training compute exceeds one third of the initial model’s training compute. Alternatively, under para 64, if the downstream modifier cannot know or estimate the original compute, the following substitute thresholds are applied by the Commission: (i) for a GPAI model with systemic risk, one third of the presumption threshold for high-impact capabilities (currently 10^25 FLOPs); (ii) otherwise, one third of the presumption threshold for presence of a GPAI model (currently 10^23 FLOPs). ↩︎
- ibid para 62. ↩︎
- ibid paras 68–69. ↩︎
- ibid para 68. ↩︎
- AI Act, art 111(3). ↩︎
- See commentary on Article 53 in this work, Sections 2.1.1. and 2.1.2. ↩︎
- AI Act, recital 85. ↩︎
- See AI Act, art 16 for overview of the obligations placed on providers of high-risk systems; see also commentary on Article 53 in this work, Section 2.1.2.3. ↩︎
- AI Act, art 113, second sentence. ↩︎
- AI Act, art 54(1)(b)(i). ↩︎
- See, also commentary on Article 53 in this work, Section 2.1.2.3., para 76 for a further discussion on the AI Act’s reliance on market functioning and incentives. ↩︎
- AI Act, art 9. ↩︎
- AI Act, art 113, second sentence. ↩︎
- AI Act, art 111(3). ↩︎
- See, also, AI Act, recital 161: ‘Furthermore, market surveillance authorities should be able to request assistance from the AI Office where the market surveillance authority is unable to conclude an investigation on a high-risk AI system because of its inability to access certain information related to the general-purpose AI model on which the high-risk AI system is built. In such cases, the procedure regarding mutual assistance in cross-border cases in Chapter VI of Regulation (EU) 2019/1020 should apply mutatis mutandis.’ ↩︎
- See also, forthcoming commentary on Article 91 in this work. ↩︎
- See Section 2.1.1.2. for a general analysis of the implications of modifications on models placed on the market before 2 August 2025. ↩︎
- See Section 2.1.1.2. ↩︎
- ibid. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 23. ↩︎
- ibid para 24. ↩︎
- ibid para 109. ↩︎
- ibid paras 62–64; for a detailed analysis of the Guidelines’ position, see Section 2.1.1.2. ↩︎
- ibid para 68. ↩︎
- For overview of relevant literature, see Péter Mezei, ‘The Multi-layered Regulation of Rights Reservation (Opt-out) Under EU Copyright Law and the AI Act-For the Benefit of Whom? (v3. 0)’ (31 March 2025) <https://ssrn.com/abstract=5064018> accessed 11 September 2025, 2, fn 8. ↩︎
- Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC [2019] OJ L 130/92. ↩︎
- Adriana Winkelmeier and Christoph Korab, ‘Article 111. AI Systems Already Placed on the Market or Put into Service and General-Purpose AI Models Already Placed on the Marked [sic]’ in Ceyhun Necati Pehlivan, Nikolaus Forgó, and Peggy Valcke (eds), The EU Artificial Intelligence (AI) Act: A Commentary (Kluwer Law International BV 2024), 1449. ↩︎
- ibid. ↩︎
- Alexander Peukert, ‘Copyright in the Artificial Intelligence Act – A Primer’ (2024) 73 GRUR International 497. ↩︎
- European Commission, ‘Meet the Chairs Leading the Development of the First General-Purpose AI Code of Practice’ (30 September 2024) <https://digital-strategy.ec.europa.eu/en/news/meet-chairs-leading-development-first-general-purpose-ai-code-practice> accessed 11 September 2025. ↩︎
- Peukert (n 94) 505 (emphasis added). ↩︎
- European Commission, ‘The General-Purpose AI Code of Practice – Copyright Chapter’ (2025), <https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai> accessed 11 September 2025, Measure 1.2. ↩︎
- ibid Measure 1.3. ↩︎
- ibid, cf Measure 1.2 and 1.3 to Measure 1.1. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 111 (emphasis added). ↩︎
- Winkelmeier and Korab (n 92) 1449. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 111. ↩︎
- See discussion in Section 2.1.1.2., para 33. ↩︎
- Case C-386/10 P Chalkor AE Epexergasias Metallon v Commission [2011] ECLI:EU:C:2011:815 para 62. ↩︎
- TFEU art 267. ↩︎
- See commentary on Article 53 in this work, Section 2.1.3.1.2. ↩︎
- Clemens Bernsteiner and Thomas Rainer Schmitt, ‘Art. 53 Pflichten für Anbieter von KI-Modellen mit allgemeinem Verwendungszweck’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (1st edn, C.H. Beck 2024) para 36. ↩︎
- João Pedro Quintais, ‘Copyright, the AI Act and Extraterritoriality’ (19 June 2025) The Lisbon Council <http://dx.doi.org/10.2139/ssrn.5316132> accessed 11 September 2025, 9; see also Susana Navas Navarro, ‘The Training of AI Models in the Context of the EU Copyright Law and the AI Act’ (2025) 13(8) Open Journal of Social Sciences (Print) 263, 273. ↩︎
- A more detailed discussion of this argument can be found in commentary on Article 53 in this work, Section 2.1.3.1.2., para 91. ↩︎
- See, for example, ‘DAIL – The Database of AI Litigation’ (GW Ethical Tech Initiative) <https://blogs.gwu.edu/law-eti/ai-litigation-database> accessed 11 September 2025, which contains a summary database about ongoing and completed AI-related litigation in the US. ↩︎
- Emphasis added. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 111 (emphasis added). ↩︎
- AI Act, art 113, first sentence; see also, discussion in Section 2.1.1.1. as to whether GPAI models placed on the market before 1 August 2024 fall within the scope of the AI Act in general and art 111(3) in particular. ↩︎
- AI Act, art 113(b). ↩︎
- Stieper and Denga (n 8) 18. ↩︎
- Case C-368/89 Antonio Crispoltoni v Fattoria autonoma tabacchi di Città di Castello [1991] ECR 1991 I-03695; see also Case 120/86 J. Mulder v Minister van Landbouw en Visserij [1988] ECR 1988-02321; Case C‑201/08 Plantanol GmbH & Co. KG v Hauptzollamt Darmstadt [2009] ECR 2009-I-08343 para 52. ↩︎
- While a detailed analysis of the case law and doctrine on the mechanisms for balancing interests under EU law falls outside the scope of this commentary, relevant considerations are discussed in detail in: Robert Thomas, Legitimate Expectations and Proportionality in Administrative Law (Hart Publishing 2000). ↩︎
- Sevilla and Roldán (n 47), as well as para 29. ↩︎
- Stieper and Denga (n 8) 14. ↩︎
- Péter Mezei, ‘A Saviour or a Dead End? Reservation of Rights in the Age of Generative AI’ (2024) 46 European Intellectual Property Review 461, 466. ↩︎
- ibid. ↩︎
- ibid. ↩︎
- ibid. ↩︎
- ibid. ↩︎
- ibid. ↩︎
- ibid. ↩︎
- For a general discussion on legal certainty as general principle of EU law, see Herwig C.H. Hofmann, ‘General Principles of EU Law and EU Administrative Law’ in Catherine Barnard and Steve Peers (eds), European Union Law (3rd edn, Oxford University Press 2020). ↩︎
- Code of Practice, Copyright Chapter (n 97) Measure 1.3, para 4 (emphasis added). ↩︎
- ibid. ↩︎
- Mezei, ‘The Multi-layered […]’ (n 90), 12. ↩︎
- A detailed discussion of the debate regarding the territorial scope of Article 53(1)(c) AI Act can be found in commentary on Article 53 in this work, Section 2.1.3.1.2. ↩︎
- See Section 2.1.3.1. ↩︎
- Section 2.1.3.1., para 54. ↩︎
- Section 2.1.3.1., paras 58–60. ↩︎
- I.e. the provision is treated as a lex specialis rule to the general applicability timeline contained in Article 113(b) AI Act, see Section 2.1.1. and particularly para 20. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 111; see, also discussion in Section 2.1.3.1. ↩︎
- AI Act, art 113(b). ↩︎
- This effect might be especially pronounced in respect to rightsholders based outside the Union who would not have been covered by the CDSM Directive provision prior to the applicability of the AI Act. The consequences would be exacerbated if the same providers trained subsequent models on synthetic data generated by the earlier models, thereby circumventing any future TDM opt-out obligations. However, those practical discussions warrant separate analysis and do not fall within the scope of the present commentary. ↩︎
- Code of Practice, Copyright Chapter’ (n 97) Measure 1.3, para 4; see also para 67. ↩︎
- ibid. ↩︎
- AI Act, recital 108. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 23; see also discussion in Section 2.1.1.2., para 36. ↩︎
- ibid. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 111; see also discussion in Section 2.1.3.1. ↩︎
- AI Act, art 113(b). ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 111; for a detailed discussion of the exemptions contained in that paragraph, see Section 2.1.3.1. ↩︎
- See Section 2.1.1.1., paras 23–25. ↩︎
- Rosati (n 33) 613. ↩︎
- Joined Cases C-339/20 and C-397/20 Criminal proceedings against VD and SR [2022] ECLI:EU:C:2022:703 para 62 as cited in Rosati (n 33) 613; see also Case C-258/17 E.B. v Versicherungsanstalt öffentlich Bediensteter BVA [2019] ECLI:EU:C:2019:17 para 50; Case C-266/09 Stichting Natuur en Milieu and Others v College voor de toelating van gewasbeschermingsmiddelen en biociden [2010] ECR I-13119 para 32. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 23; see also, forthcoming chapter on Modifications in this work for a detailed discussion on the concept of ‘lifecycle’ as used by the Commission. ↩︎
- See Section 2.1.1.1. ↩︎
- See Section 1.2. on legislative context. For a discussion on the idiosyncratic characteristics of the AI Act within the New Legislative Framework, see also Section 2.1.1.1. as well as forthcoming chapter on Product, Model and Entity Regulation and forthcoming chapter on Interpreting the AI Act through Systematic Analogies in this work. ↩︎
- See Section 2.1.1.1. ↩︎
- See Section 2.1.1.2. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 68. ↩︎
- ibid para 23. ↩︎
- AI Act, recital 108. ↩︎
- For a detailed discussion on the substance of AI Act, art. 54, see commentary on Article 54 in this work. ↩︎
- For a detailed analysis of articles 55 and 93 more generally, refer to commentary on Article 55 and forthcoming commentary on Article 93 in this work. ↩︎
- See, for example, Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) paras 72 and 73. ↩︎
- Emphasis added. ↩︎
- See, for example, Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 50. ↩︎
- AI Act, art 111(3). ↩︎
- See, also, discussion in commentary on Article 54 in this work, Section 2.1.2. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 110. ↩︎
- ibid. ↩︎
- AI Act, art 54(3)(a). ↩︎
- AI Act, art 54(3)(d); see, also, commentary on Article 54 in this work, Section 2.3.3. ↩︎
- Outside of the presumption under Article 51(2) AI Act, the substantive criteria under Article 51(1)(a) AI Act for evaluating when certain capabilities of a GPAI model are ‘high impact’ on the basis of ‘appropriate technical tools and methodologies, including indicators and benchmarks’ is still subject to debate as described in commentary on Article 51 in this work, Section 2.1.1.3. ↩︎
- Robi Rahman and others, ‘Over 30 AI Models Have Been Trained at the Scale of GPT-4’ (Epoch AI, 30 January 2025; last updated 6 June 2025) <https://epoch.ai/data-insights/models-over-1e25-flop> accessed 25 September 2025 (the researchers state that, for at least some models for which they have published estimated compute values, their conclusions rest on estimates made with a high degree of precision). ↩︎
- On the question of whether classification as a GPAI model with systemic risk under Article 51(1)(a) AI Act is automatic upon reaching the relevant thresholds, or requires a designation decision by the Commission, see commentary on Article 51 in this work, Section 2.1.1.1. ↩︎
- See Sections 2.1.2., 2.1.3., and 2.1.4. ↩︎
- AI Act, art 55(1). ↩︎
- See, for example, AI Act, recital 97, sentence 13. ↩︎
- It could be argued, specifically, that prolonged integration and deployment of AI systems based on GPAI models with systemic risk that are not subject to the obligations under Article 55 AI Act may deepen the hazards endemic to AI systems based on GPAI models with systemic risk. ↩︎
- See Section 2.1.4. ↩︎
- AI Act ch V s 2: ‘Obligations for providers of general-purpose AI models’ (emphasis added); AI Act, ch V s 3: ‘Obligations of providers of general-purpose AI models with systemic risk’ (emphasis added). ↩︎
- See, for example, Case C‑604/11 Genil 48 SL and Comercial Hostelera de Grandes Vinos SL v Bankinter SA and Banco Bilbao Vizcaya Argentaria SA [2013] ECLI:EU:C:2013:344 para 39; Case C‑291/13 Sotiris Papasavvas v O Fileleftheros Dimosia Etaireia Ltd and Others [2014] ECLI:EU:C:2014:2209 para 39; Case C-311/18 Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems (Schrems II) [2020] ECLI:EU:C:2020:559 para 92. ↩︎
- Case C‑330/13 Lukoyl Neftohim Burgas AD v Nachalnik na Mitnicheski punkt Pristanishte Burgas Tsentar pri Mitnitsa Burgas [2014] ECLI:EU:C:2014:1757 para 33; Case C-97/15 Sprengen/Pakweg Douane BV v Staatssecretaris van Financiën [2016] ECLI:EU:C:2016:556 para 31 and case law cited therein. ↩︎
- AI Act, arts 55(1) and 93(1)(a). ↩︎
- See, for example, Case C‑180/21 VS v Inspektor v Inspektorata kam Visshia sadeben savet [2022] ECLI:EU:C:2022:967 para 41 and case law cited therein (‘it is necessary, for the interpretation of a provision of EU law, that account be taken not only of its wording, but also of its context and the objectives pursued by the rules of which it is part’); Case C-480/10 European Commission v Kingdom of Sweden [2013] ECLI:EU:C:2013:263 para 33 (‘in determining the scope of a provision of European Union law, its wording, context and objectives must all be taken into account’). ↩︎
- For a substantive discussion on AI Act, art 52(1), second sentence, see commentary on Article 52 in this work, Section 2.1.2. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 45; authors have also referred to article 52(1), first sentence, as an obligation, see, for example, Tobias Haar and Jonas Siglmüller, ‘Art. 52 Verfahren’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (3rd edn, C.H. Beck 2025) paras 5–6. ↩︎
- For discussion on AI Act, art 52(1), third sentence, see commentary on Article 52 in this work, Section 2.1.3. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 45. ↩︎
- See commentary on Article 52, Section 2.1.1.2.3. ↩︎
- Teleological considerations regarding the importance of the purpose of EU legislation are taken into consideration by the CJEU; see, for example, Case C‑579/21 Proceedings brought by J.M. [2023] ECLI:EU:C:2023:501 para 38 and case law cited therein (‘the interpretation of a provision of EU law requires that account be taken not only of its wording, but also of its context and the objectives and purpose pursued by the act of which it forms part’, emphasis added); Case C‑162/09 Secretary of State for Work and Pensions v Taous Lassal [2010] ECR I-09217 paras 51–52. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 31. ↩︎
- ibid. ↩︎
- See also commentary on Article 52 in this work, Section 2.1.1.2.5. ↩︎
- AI Act, art 51(1), first sentence; Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) paras 30–31. ↩︎
- AI Act, art 113(b). ↩︎
- See commentary on Article 52 in this work, Section 2.1.1.2.5. ↩︎
- See, in particular, case law cited in nn 178, 181 and 187. ↩︎
- See commentary on Article 52 in this work, Section 2.1.3. ↩︎
- See Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 45. ↩︎
- ibid para 46 clarifies that the moment when the GPAI model met the condition under Article 51(1)(a) AI Act is the relevant date for applicability of obligations, and not the date of the designation decision under Article 52(1), third sentence AI Act. ↩︎
- AI Act, art 52(4); a detailed examination of the designation criteria and procedure can be found, respectively, in the commentaries on Article 51 and Article 52 in this work. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 46. ↩︎
- AI Act, art 51(1)(b). ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 46; see also commentary on Article 52 in this work, Section 2.3.1.2. ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 23. ↩︎
- ibid. ↩︎
- ibid. ↩︎
- ibid para 62; see also Section 2.1.1.2. ↩︎
- ibid para 71; whereas the obligations of downstream modifiers of GPAI models without systemic risk are limited to the extent of the introduced modifications, modifiers of a GPAI model with systemic risk (including where the modification itself gives rise to systemic risk) must comply fully with all obligations, which necessitates greater reliance on, for example, the initial provider’s assessments, transparency documentation, and similar materials. ↩︎
- ibid, cf para 23 and para 62. ↩︎
- ibid para 23. ↩︎
- ibid para 109. ↩︎
- ibid para 62. ↩︎
- AI Act, art 3(3). ↩︎
- AI Act, art 3(63). ↩︎
- AI Act, art 3(9). ↩︎
- Commission Guidelines on the Scope of the Obligations for General-Purpose AI Models (n 29) para 63. ↩︎
- ibid para 68. ↩︎
- ibid para 62. ↩︎
- ibid paras 70–71. ↩︎
- ibid. ↩︎
- ibid para 68; also cf Sections 2.1.2.2. and 2.1.3.3. on the effect of modifications with relation to, respectively, the timing of documentation and transparency obligations, and the timing of copyright obligations. ↩︎
- ibid paras 63, 64, 65 and 66. ↩︎
- For a detailed discussion on whether the current approach of the Commission Guidelines to downstream modifications presupposes the placement of a new model on the market, see the forthcoming chapter on Modifications in this work. ↩︎
- See para 33. ↩︎
- AI Act, art 111(1) and (3). ↩︎
- For a more detailed discussion of AI Act, art 111(1) refer to Christiane Wendehorst, ‘Art. 111 Bereits in Verkehr gebrachte oder in Betrieb genommene KI-Systeme und bereits in Verkehr gebrachte KI-Modelle mit allgemeinem Verwendungszweck’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (C.H. Beck 2025) paras 2–6. ↩︎
- See discussion in Section 2.1.2.1.; for a general analysis of AI Act, art 111(2) refer to Wendehorst (n 224) paras 5–7. ↩︎
- European Commission, ‘Digital Omnibus on AI Regulation Proposal’ (19 November 2025) <https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal> accessed 24 November 2025. ↩︎
- TFEU, art 294. ↩︎
- European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI)’ COM (2025) 836 final, 2025/0359 (COD) recital 22. ↩︎
- ibid art 1(31). The Commission has proposed differentiated periods for entry into application following the adoption of a Commission decision, depending on the type of high-risk AI system. ↩︎
- ibid art 1(30)(a). ↩︎
- See Section 2.1.2.1. ↩︎
- Commission Proposal for Digital Omnibus on AI (n 228) art 1(31). ↩︎
- ibid recital 20. ↩︎
- AI Act, art 50(2). ↩︎
- Commission Proposal for Digital Omnibus on AI (n 228) art 1(30)(b). ↩︎