Cambridge Commentary on EU General-Purpose AI Law

Chapter V
Procedure
Commentary by Gregor Gindlin

AI Act provision

Article 52: Procedure

  1. Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk.
  2. The provider of a general-purpose AI model that meets the condition referred to in Article 51(1), point (a), may present, with its notification, sufficiently substantiated arguments to demonstrate that, exceptionally, although it meets that requirement, the general-purpose AI model does not present, due to its specific characteristics, systemic risks and therefore should not be classified as a general-purpose AI model with systemic risk.
  3. Where the Commission concludes that the arguments submitted pursuant to paragraph 2 are not sufficiently substantiated and the relevant provider was not able to demonstrate that the general-purpose AI model does not present, due to its specific characteristics, systemic risks, it shall reject those arguments, and the general-purpose AI model shall be considered to be a general-purpose AI model with systemic risk.
  4. The Commission may designate a general-purpose AI model as presenting systemic risks, ex officio or following a qualified alert from the scientific panel pursuant to Article 90(1), point (a), on the basis of criteria set out in Annex XIII.
    The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex XIII by specifying and updating the criteria set out in that Annex.
  5. Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision.
  6. The Commission shall ensure that a list of general-purpose AI models with systemic risk is published and shall keep that list up to date, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law.

Recitals

Recital 111

It is appropriate to establish a methodology for the classification of general-purpose AI models as general-purpose AI model with systemic risks. Since systemic risks result from particularly high capabilities, a general-purpose AI model should be considered to present systemic risks if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, or significant impact on the internal market due to its reach. High-impact capabilities in general-purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models. The full range of capabilities in a model could be better understood after its placing on the market or when deployers interact with the model. According to the state of the art at the time of entry into force of this Regulation, the cumulative amount of computation used for the training of the general-purpose AI model measured in floating point operations is one of the relevant approximations for model capabilities. The cumulative amount of computation used for training includes the computation used across the activities and methods that are intended to enhance the capabilities of the model prior to deployment, such as pre-training, synthetic data generation and fine-tuning. Therefore, an initial threshold of floating point operations should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is a general-purpose AI model with systemic risks. This threshold should be adjusted over time to reflect technological and industrial changes, such as algorithmic improvements or increased hardware efficiency, and should be supplemented with benchmarks and indicators for model capability. To inform this, the AI Office should engage with the scientific community, industry, civil society and other experts. 
Thresholds, as well as tools and benchmarks for the assessment of high-impact capabilities, should be strong predictors of generality, its capabilities and associated systemic risk of general-purpose AI models, and could take into account the way the model will be placed on the market or the number of users it may affect. To complement this system, there should be a possibility for the Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is found that such model has capabilities or an impact equivalent to those captured by the set threshold. That decision should be taken on the basis of an overall assessment of the criteria for the designation of a general-purpose AI model with systemic risk set out in an annex to this Regulation, such as quality or size of the training data set, number of business and end users, its input and output modalities, its level of autonomy and scalability, or the tools it has access to. Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk, the Commission should take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks.

Recital 112

It is also necessary to clarify a procedure for the classification of a general-purpose AI model with systemic risks. A general-purpose AI model that meets the applicable threshold for high-impact capabilities should be presumed to be a general-purpose AI models with systemic risk. The provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a general-purpose AI model will meet the requirements that lead to the presumption. This is especially relevant in relation to the threshold of floating point operations because training of general-purpose AI models takes considerable planning which includes the upfront allocation of compute resources and, therefore, providers of general-purpose AI models are able to know if their model would meet the threshold before the training is completed. In the context of that notification, the provider should be able to demonstrate that, because of its specific characteristics, a general-purpose AI model exceptionally does not present systemic risks, and that it thus should not be classified as a general-purpose AI model with systemic risks. That information is valuable for the AI Office to anticipate the placing on the market of general-purpose AI models with systemic risks and the providers can start to engage with the AI Office early on. That information is especially important with regard to general-purpose AI models that are planned to be released as open-source, given that, after the open-source model release, necessary measures to ensure compliance with the obligations under this Regulation may be more difficult to implement.

Recital 113

If the Commission becomes aware of the fact that a general-purpose AI model meets the requirements to classify as a general-purpose AI model with systemic risk, which previously had either not been known or of which the relevant provider has failed to notify the Commission, the Commission should be empowered to designate it so. A system of qualified alerts should ensure that the AI Office is made aware by the scientific panel of general-purpose AI models that should possibly be classified as general-purpose AI models with systemic risk, in addition to the monitoring activities of the AI Office.

Select bibliography

  • Bernsteiner C and Schmitt T R, ‘Art. 52 Verfahren’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (2nd edn, C H Beck 2026).
  • Bond T and Abbady S, ‘Article 52 Procedure’ in Ceyhun Necati Pehlivan, Nikolaus Forgó and Peggy Valcke (eds), The EU Artificial Intelligence (AI) Act: A Commentary (Wolters Kluwer 2024).
  • Erben A, Negele M, Heim L and Sevilla J, Training Compute Thresholds – Key Considerations for the EU AI Act (Fernández Llorca D and Gómez E eds, Publications Office of the European Union, JRC143255, 2025).
  • Förster C and Straburzynski J, ‘§ 2 Pflichtenkataloge’ in Christian Förster (ed), Die KI-Verordnung in der Praxis: Rechtliche Grundlagen und Pflichten bei der Anwendung von KI im Unternehmen (C H Beck 2025).
  • Haar T and Siglmüller J, ‘Art. 52 Verfahren’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (4th edn, C H Beck 2025).
  • Hecht M, ‘Regulierung von GPAI-Modellen durch die KI-Verordnung’ (2025) Künstliche Intelligenz und Recht 30.
  • Hilgendorf E and Härtlein J, ‘Art. 52 Verfahren’ in Eric Hilgendorf and Johannes Härtlein (eds), KI-VO: Verordnung über künstliche Intelligenz (Nomos 2025).
  • Hofmann-Coombe J, ‘§ 7. KI-Modelle mit allgemeinem Verwendungszweck’ in Eric Hilgendorf and David Roth-Isigkeit (eds), Die neue Verordnung der EU zur Künstlichen Intelligenz (2nd edn, C H Beck 2025).
  • Martini M, ‘§ 3. Risikobasierter Ansatz’ in Eric Hilgendorf and David Roth-Isigkeit (eds), Die neue Verordnung der EU zur Künstlichen Intelligenz (2nd edn, C H Beck 2025).
  • Schneider A and Schneider L, ‘Art. 52 Verfahren’ in David Bomhard, Fritz-Ulli Pieper and Susanne Wende (eds), Kommentar KI-VO: Verordnung über Künstliche Intelligenz (dfv 2025).

Commentary

1. General remarks

1.1. Introduction

1Article 52 AI Act1 sets out rules for the classification of general-purpose AI (“GPAI”) models as GPAI models with systemic risk. Together with Article 51,2 it forms the basis for the AI Act’s two-tiered approach3 to the regulation of GPAI models, with some obligations applying to providers of all GPAI models4 and additional, more stringent, obligations applying to providers of GPAI models with systemic risk.5 Article 52 only applies to GPAI models,6 not to GPAI systems into which such models may be integrated.7

2The relationship between Article 52 and Article 51 is complex.8 Given that Article 51(1) establishes substantive requirements for classification9 and Article 52 is entitled ‘Procedure’,10 the provisions appear to distinguish substantive classification rules (Article 51) from procedural classification rules (Article 52).11 However, Article 52 contains not only procedural but also substantive rules for classification, as evidenced by its second and third paragraphs which establish the substantive requirements under which the Commission may reject arguments submitted by a provider contesting classification.12

3Moreover, to the extent that Article 52 contains procedural rules regarding notification,13 designation,14 contestation,15 and reassessment16 of classification, it only partially specifies which classification condition under Article 51(1) these rules concern.17 This adds to the complexity. The notification obligation under Article 52(1)’s first sentence and the procedure to contest classification under Article 52(2) both expressly reference the classification condition under Article 51(1)(a),18 yet none of Article 52’s provisions expressly reference Article 51(1)(b)’s classification condition.19 This raises the question to what extent Article 52’s procedural rules concern Article 51(1)(b) at all.20 At the same time, Article 52’s two designation provisions – Article 52(1)’s third sentence and Article 52(4)’s first subparagraph21 – expressly reference neither classification condition under Article 51(1). A systematic interpretation of these provisions supports the conclusion that designation under Article 52(1)’s third sentence relates to classification under Article 51(1)(a) and designation under Article 52(4)’s first subparagraph relates to classification under Article 51(1)(b),22 though clearer drafting would have enhanced legal certainty.23

1.2. Structure & overview

4Article 52’s six paragraphs can broadly be organized into three groups.24 The first group comprises Article 52’s first, second and third paragraphs, which are closely linked to Article 51(1)(a) establishing classification of GPAI models with high-impact capabilities.25 The second group comprises Article 52’s fourth and fifth paragraphs, which relate to Article 51(1)(b) establishing classification of GPAI models having capabilities or an impact equivalent to those set out in Article 51(1)(a).26 The third group consists of Article 52’s sixth paragraph, which does not share a specific connection to either condition under Article 51 and applies regardless of whether a model has been classified under Article 51(1)(a) or (b).

5This chapter’s analysis begins in Section 2.1. with Article 52(1). The first two sentences of this paragraph establish a notification obligation for providers of GPAI models with (actual or presumed) high-impact capabilities,27 while its third sentence grants the Commission the power to designate GPAI models as presenting systemic risk absent notification.28 Section 2.2. addresses Article 52(2) and (3), which set out a procedure by which providers may, together with the notification pursuant to Article 52(1), contest the classification of their models as presenting systemic risk, as well as the requirements for such a challenge.

6Section 2.3. examines Article 52’s fourth paragraph. Its first subparagraph grants the Commission the power to designate GPAI models as presenting systemic risk.29 Although Article 52(4)’s first subparagraph does not refer to Article 51(1)(b), convincing arguments support the view that Article 51(1)(b)’s substantive requirements apply, thereby requiring that a model has capabilities or an impact equivalent to those set out in Article 51(1)(a), that is, high-impact capabilities.30 Article 52(4)’s second subparagraph grants the Commission the power to amend Annex XIII through delegated acts.31 Section 2.4. proceeds with an analysis of Article 52(5)’s rules for reassessment of a designation decision under Article 52(4). In particular, it examines whether Article 52(5) may be applied, beyond its literal wording, to GPAI models that have been classified as presenting systemic risk based on the model’s high-impact capabilities.32

7Finally, Section 2.5. addresses the Commission’s obligation under Article 52(6) to publish and keep an up-to-date list of GPAI models with systemic risk.

2. Substance

2.1. Article 52(1): Notification obligation and Commission designation

8The internal structure of Article 52’s first paragraph is two-fold. Its first two sentences set out a notification obligation for providers of GPAI models with (actual or presumed)33 high-impact capabilities,34 while its third sentence establishes the Commission’s power to designate GPAI models as presenting systemic risks.35 The connection between these provisions becomes apparent when one considers that designation under Article 52(1)’s third sentence presupposes that the provider did not notify the Commission of its model (‘without its notification’).36 This requirement of non-notification not only establishes the link between the designation provision and the preceding provisions of Article 52(1), but also represents – in the most compelling view – an expression of the fact that Article 52(1)’s designation provision and notification obligation are to be understood in close connection with Article 51(1)(a).37 Thus, Article 52(1) adds to Article 51’s rules on classification of GPAI models with high-impact capabilities.38

2.1.1. Notification obligation

9The first sentence of Article 52(1) establishes a notification obligation for GPAI model providers whose model has, or is known to prospectively have, high-impact capabilities.39 Recital 112 clarifies that this obligation serves a dual regulatory purpose: it helps the AI Office to anticipate the market placement of GPAI models with systemic risks and enables providers to ‘engage with the AI Office early on’.40 This purpose is in line with the Commission’s Guidelines on the Scope of the Obligations for General-Purpose AI Models (“Commission Guidelines”) stating that ‘[t]he AI Office encourages close informal cooperation with providers during the training of their general-purpose AI models to facilitate compliance and ensure timely market placement, in particular for providers of general-purpose AI models with systemic risk.’41

10Recital 112 rightly emphasises the particular importance of notification for GPAI models that are planned to be released as open source42 because ‘after the open-source model release, necessary measures to ensure compliance with the obligations under this Regulation may be more difficult to implement’.43 The difficulties stem from the fact that, upon the open-source release of a model, its original developer can control its deployment and modification only in limited ways. Whereas proprietary models allow access to be restricted or revoked more easily, open-source models can be downloaded, copied, and distributed independently. This could render compliance measures under Article 93(1)(b) and (c), such as the withdrawal or recall of the model, practically impossible to implement, and the model weights and code could become permanently available to the public, for example through reuploads of downloaded copies to the internet.44

11Article 52’s second sentence requires the provider to include certain model information in the notification.45 The notification pursuant to Article 52(1) is not the only means by which the Commission can obtain information about GPAI models. Under Article 53(1)(a), providers of GPAI models must draw up, keep up to date and provide, upon request, to the AI Office the technical documentation of the model, including information about the ‘computational resources used to train the model’.46 Under Articles 91 and 92, the Commission has the powers to request documentation and information and to conduct evaluations. In contrast to these provisions, the notification obligation under Article 52(1) offers the Commission the advantage that providers must provide information proactively, rather than the Commission having to actively seek it out.47

12It should be emphasised that the obligation to notify the Commission constitutes an autonomous legal duty, the breach of which may be sanctioned independently of any systemic-risk-related obligations to which a provider may be subject.48 A provider failing to notify the Commission in breach of its obligation under Article 52(1)’s first sentence may be fined under Article 101(1)(a).49

2.1.1.1. Entry into application and transition period

13In principle, providers have to notify the Commission as of 2 August 2025, as this is the date when Chapter V of the AI Act, including Article 52(1), enters into application.50 However, there is some interpretive uncertainty regarding whether this also applies to models that have been placed on the market before 2 August 2025.51 For such models, Article 111(3)’s ‘grandfather provision’ sets out that providers ‘shall take the necessary steps in order to comply with the obligations laid down in this Regulation by 2 August 2027’.52 While the provision’s wording seems to include the notification obligation under Article 52(1)’s first sentence, a competing interpretation holds that Article 111(3) only refers to the ‘substantive obligations’ contained within Sections 2 and 3 of Chapter V.53 Depending on Article 111(3)’s interpretation, providers of GPAI models that have been placed on the market before 2 August 2025 therefore have to notify the Commission as early as 2 August 2025 or as late as 2 August 2027.54

2.1.1.2. Scope of the notification obligation

14Article 52(1) requires the provider of a GPAI model to notify the Commission where the model either has actual or presumed high-impact capabilities or it becomes known that it will have such capabilities per Article 51(1)(a). Article 51(2) introduces a presumption that a model has high-impact capabilities when the cumulative amount of computation used for its training is greater than 10²⁵ floating-point operations (“FLOPs”).55 This presumption directly applies to Article 51(1)(a)’s classification condition56 and – by Article 52(1)’s reference to Article 51(1)(a) – to the notification obligation that is linked to this condition as well.57

15The notification obligation is addressed at GPAI model ‘providers’ in the sense of Article 3(3).58 It remains unclear if Article 52(1)’s reference to ‘the relevant provider’ (emphasis added) carries any meaning beyond the ‘provider’ definition. There are several instances in which the AI Act refers to ‘the relevant provider’ without a clearly discernible system.59 For the purpose of the notification obligation, different explanations appear possible. For instance, it can be useful to distinguish between different providers in the context of GPAI model modifications.60 In that context, Article 52(1)’s reference to ‘the relevant provider’ could be read as acknowledging that the notification obligation may not only apply to the original provider but also to a downstream modifier.61 Moreover, the addition of ‘relevant’ could be explained differently if Article 52(1) is interpreted to apply before the placing of the GPAI model on the Union market, as discussed below.62 Under that interpretation, the provision’s use of ‘the relevant provider’ could clarify – in light of the provider definition under Article 3(3) referring to the model’s market placement – that an entity can be regarded as a provider before it places the model on the market for the purpose of the notification obligation.63 The absence of a similar reference to ‘the relevant provider’ in Article 52(2), however, makes it plausible that the legislature did not intentionally make this drafting choice.

2.1.1.2.1. Presumed high-impact capabilities

16Exceeding Article 51(2)’s training compute threshold is likely to be the primary trigger for the notification obligation,64 regardless of the rebuttable nature of the high-impact capabilities presumption.65 Article 52(2) allows a provider to contest its model’s classification by presenting arguments against the presence of systemic risks when submitting the notification,66 in particular by rebutting Article 51(2)’s high-impact capabilities presumption.67 Accordingly, a provider of a GPAI model that meets the training compute threshold but exceptionally lacks high-impact capabilities should rebut this presumption ‘with its notification’68 rather than choosing not to notify at all.69 Under Articles 52(2) and (3), the Commission determines whether the provider’s arguments against classification are convincing and should be accepted or rejected.70 The Commission’s decision-making authority would be undermined if providers could simply opt out of notification themselves. That would also sit in tension with Recital 112 which links the provider’s obligation to notify the Commission directly to the training compute threshold under Article 51(2) without mention of an exception.71

2.1.1.2.2. Actual high-impact capabilities

17While the notification obligation arguably also applies where a model’s high-impact capabilities are not merely presumed but actually present – as evidenced by Article 52(1)’s reference to Article 51(1)(a) rather than Article 51(2)72 – determining whether a GPAI model possesses actual high-impact capabilities could pose practical difficulties.73 Article 3(64) defines high-impact capabilities rather abstractly as ‘capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models’.74 Article 51(1)(a) envisages the use of ‘appropriate technical tools and methodologies, including indicators and benchmarks’ for the evaluation of whether a GPAI model possesses such capabilities.75 However, the Commission has not yet exercised its power under Article 51(3) to introduce any assessment instruments by delegated act, and it is uncertain to what extent appropriate assessment instruments are presently available.76 The Commission Guidelines address this challenge by taking the position that the notification obligation does not apply to a GPAI model with actual high-impact capabilities that, however, does not meet the Article 51(2) threshold, at least until delegated acts establish the relevant assessment instruments.77 Given the difficulties of determining whether a GPAI model has actual high-impact capabilities, there is a practical appeal to this interpretation which, however, does not accord well with Article 52(1)’s express reference to the Article 51(1)(a) condition. This makes a timely adoption of delegated acts under Article 51(3) to provide for appropriate assessment instruments under Article 51(1)(a) even more desirable.

2.1.1.2.3. Prospective high-impact capabilities

18A further question regarding the scope of the notification obligation is whether it can already be triggered by the provider’s knowledge that the training compute threshold under Article 51(2) will be met before it is actually met.78 The wording of Article 52(1)’s first sentence appears somewhat inconsistent in this respect. Its first part links the notification obligation to the GPAI model ‘meet[ing] the condition referred to in Article 51(1), point (a)’. This could mean that providers are not obliged to notify the Commission before their model has met a classification condition under Article 51(1)(a), and in particular before their model has reached Article 51(2)’s training compute threshold.79 However, the second part of the sentence lists (i) the model ‘meet[ing] that requirement’ and (ii) ‘it becom[ing] known that it will be met’ as alternative conditions for triggering the notification obligation. On this basis, prospective high-impact capabilities may be sufficient for triggering the notification obligation.80

19Acknowledging the somewhat infelicitous wording of Article 52(1)’s first sentence, this last interpretation is the most convincing one.81 This finds support in the well-established principle that where a provision of EU law – or a part of it, in this case the clause ‘or it becomes known that it will be met’ – is open to several interpretations, preference must be given to the interpretation which ensures that it is not rendered redundant and, thus, retains its effectiveness.82

20This interpretation finds further support in Recital 112’s second sentence which states that ‘[t]he provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a general-purpose AI model will meet the requirements that lead to the presumption’ (emphasis added),83 thereby avoiding the infelicitous wording of Article 52(1)’s first sentence. Moreover, recent research indicates that providers will, at least in some cases, know about the maximum amount of computation available for model training up to twelve months before training begins,84 will have drawn up a preliminary training plan up to three months before training85 and will be able to provide high-confidence estimates of expected compute expenditure due to pre-training resource commitments up to two weeks before training.86 This suggests that the phrasing ‘or it becomes known that it will be met’ has a relevant scope of application.

21The Commission Guidelines share this interpretation of Article 52(1)’s first sentence. They suggest that ‘a notification may be required before training is complete, if the provider can reasonably foresee that the requirement that leads to the presumption of the model having high-impact capabilities is reasonably likely to be met’ (emphasis added).87 Such an interpretation is reminiscent of cases in EU competition law where both actual and constructive knowledge have been considered as sufficient to establish an undertaking’s awareness of a certain conduct.88 The wording of the provision (‘becomes known’) can indeed be understood to encompass situations where the provider possesses all the necessary information and could reasonably have estimated the cumulative amount of computation used for the model’s training,89 regardless of whether this estimation is actually performed.90 Such an interpretation is sensible, as it prevents circumvention of the notification obligation. Moreover, Article 51(2)’s training compute threshold is a quantitative threshold, and assessing whether it is exceeded generally does not require complex normative assessments but rather estimating and adding compute expenditure for different computational activities.91 Recital 112 highlights the importance of the notification obligation in relation to Article 51(2)’s training compute threshold, as providers ‘are able to know if their model would meet the threshold before the training is completed’.92
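
The estimation described above is essentially additive. The following minimal Python sketch illustrates it; the activity names and compute figures are hypothetical examples, not taken from the AI Act, the Commission Guidelines, or any actual model, and the sketch is no substitute for a legal assessment:

```python
# Illustrative only: sum estimated training compute across activities
# intended to enhance the model's capabilities prior to deployment
# (Recital 111 mentions pre-training, synthetic data generation and
# fine-tuning) and compare the total against the Article 51(2) threshold.

PRESUMPTION_THRESHOLD_FLOP = 1e25  # Article 51(2): greater than 10**25 FLOPs

def cumulative_training_compute(estimates_flop: dict[str, float]) -> float:
    """Cumulative compute across all pre-deployment activities."""
    return sum(estimates_flop.values())

def presumed_high_impact(estimates_flop: dict[str, float]) -> bool:
    """True if the (rebuttable) high-impact capabilities presumption
    under Article 51(2) would be triggered."""
    return cumulative_training_compute(estimates_flop) > PRESUMPTION_THRESHOLD_FLOP

# Hypothetical planning-stage estimates:
estimates = {
    "pre_training": 8.0e24,
    "synthetic_data_generation": 1.5e24,
    "fine_tuning": 0.9e24,
}
print(presumed_high_impact(estimates))  # cumulative 1.04e25 > 1e25, prints True
```

Because the inputs are planning-stage estimates, a provider can perform this comparison before training is completed, which is precisely why Recital 112 expects notification in advance where the outcome is foreseeable.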

2.1.1.2.4. Model modifications

22A GPAI model can be modified, either by its original developer or by a downstream actor.93 Such modifications pose a multitude of interpretive issues, and different approaches to interpreting the AI Act’s GPAI model provisions in the face of modifications are conceivable.94 This section examines when a GPAI model’s modification gives rise to a notification obligation on the basis of the Commission Guidelines’ approach to modifications.95 This approach clearly distinguishes between different actors responsible for a model’s modification. Regarding the original developer’s modifications of a GPAI model, the Commission Guidelines take the position that any subsequent development following the initial large pre-training run forms part of the same model’s lifecycle rather than creating new models.96 According to this approach, a model stays the same model along its entire lifecycle, regardless of different development stages.97 The Commission Guidelines take a different approach when another actor modifies the model.98 In such cases, the Commission assumes that not every modification by a downstream actor results in that modifier becoming the provider of the modified model. Instead, a downstream modifier becomes the provider of the modified GPAI model only where the modification amounts to ‘a significant change in the model’s generality, capabilities, or systemic risk.’99 According to the Commission Guidelines, this is indicated where the training compute used for modifying the model exceeds a third of the training compute of the original model100 or, alternatively – where the downstream modifier can neither be expected to know the training compute of the original model nor to estimate it – where the modification exceeds a replacement training compute threshold.101 Specifically, this replacement threshold is set at one-third of the threshold for the high-impact capabilities presumption under Article 51(2) – currently 10²⁵ FLOPs – if the original model is a GPAI model with systemic risk, or otherwise at one-third of the Commission Guidelines’ threshold for the presumption of a model being a GPAI model – currently 10²³ FLOPs.102

23On the basis of this approach to modifications, the original provider of a GPAI model must notify the Commission where its modification causes the model to meet one of the conditions that trigger the notification obligation103 for the first time. This may occur, for example, where the training compute threshold under Article 51(2) is not met upon completion of the large pre-training run104 but only during the subsequent fine-tuning105 performed by the original provider of the model.106

24The downstream modifier of a GPAI model with systemic risk must notify the Commission if it modifies the model in such a way that it becomes the provider of the modified GPAI model.107 According to the Commission Guidelines, the modified model is presumed to have high-impact capabilities as well in this case and is therefore considered to be a GPAI model with systemic risk under Article 51(1)(a).108 This, in turn, means its provider must notify the Commission pursuant to Article 52(1)’s first sentence.109

25The Commission Guidelines do not expressly address the scenario of a downstream actor modifying a GPAI model that has not been classified as a GPAI model with systemic risk in such a way that they become the provider of a modified GPAI model which meets one of the conditions that trigger the notification obligation for the first time.110 This may be the case, for example, where subsequent fine-tuning performed by a downstream actor causes the training compute threshold under Article 51(2) to be met for the first time. On the basis of the Commission’s approach to modifications, there is no apparent reason why the downstream modifier would not be required to notify the Commission in this scenario as well.111
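The scenario described in the preceding paragraphs, in which the Article 51(2) threshold is met for the first time only at a later training stage, can be illustrated with a minimal cumulative-compute check. This is a sketch under the assumptions above; the stage breakdown and the helper name are hypothetical.

```python
ART_51_2_THRESHOLD = 1e25  # FLOPs, Art. 51(2)

def first_stage_crossing_threshold(stage_compute):
    """Return the index of the training stage (e.g. pre-training,
    fine-tuning, ...) at which cumulative training compute first meets
    the Art. 51(2) threshold, or None if it is never met."""
    cumulative = 0.0
    for i, compute in enumerate(stage_compute):
        cumulative += compute
        if cumulative >= ART_51_2_THRESHOLD:
            return i
    return None

# Example: pre-training alone stays below the threshold; a subsequent
# fine-tuning run pushes cumulative compute above it, so the
# notification obligation is triggered only at stage index 1.
stages = [9e24, 2e24]  # [pre-training, fine-tuning]
print(first_stage_crossing_threshold(stages))  # -> 1
```

Whether the notifying party is the original provider (paragraph 23) or a downstream modifier (paragraph 25) depends on who performs the stage at which the threshold is first crossed.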

2.1.1.2.5. Market placement of the model

26The AI Act applies, according to Article 2(1), to providers placing GPAI models on the market in the Union, and, according to Article 2(8), it does not apply to any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service.112 Moreover, an exclusion of research and development activities before the placing on the market of a GPAI model from the AI Act’s scope may be derived from Article 1(2)(e), Article 2(6) and Article 3(3) and (63). This raises the question of whether a prospective provider may be obliged to notify the Commission of a GPAI model before its placing on the market under Article 52(1)’s first sentence.113

27Prima facie, the abovementioned provisions appear to leave no room for any pre-market placement notification obligations.114 This, however, sits in tension with the fact that the legislature clearly envisaged that providers would notify the Commission pursuant to Article 52(1)’s first sentence even before the placing on the market of a GPAI model. Recital 112’s sixth sentence states in that respect that the information received by the Commission in the context of the notification – either via the notification itself or an exemption request under Article 52(2) – ‘is valuable for the AI Office to anticipate the placing on the market of general-purpose AI models with systemic risks’ (emphasis added).

28This legislative intent is also reflected in Article 52(1)’s first sentence itself, as the provision requires a provider to notify the Commission already where it knows of a model’s prospective high-impact capabilities (‘it becomes known that it will be met’).115 There may be cases in which the provider gains this knowledge after market placement, for example where Article 51(2)’s training compute threshold is only surpassed due to fine-tuning performed after the model has been placed on the market.116 However, Recital 112 suggests that the legislature included this second alternative triggering the notification obligation precisely for cases where a prospective provider has the relevant knowledge before the market placement of the model.117 This is not only evidenced by Recital 112’s sixth sentence referred to in the preceding paragraph but also reinforced by its fourth sentence, which states that ‘training of general-purpose AI models takes considerable planning which includes the upfront allocation of compute resources and, therefore, providers of general-purpose AI models are able to know if their model would meet the threshold before the training is completed’ (emphasis added).118

29In line with Article 52(1)’s wording and Recital 112, prospective providers may thus be required to notify the Commission already during a model’s training and at the earliest possible time.119 This idea is confirmed by the Safety and Security Chapter of the Code of Practice, which, in Measure 1.1, holds that signatories to the Code of Practice ‘will have confirmed their [Safety and Security] Framework no later than four weeks after having notified the Commission pursuant to Article 52(1) AI Act and no later than two weeks before placing the model on the market’.120 This measure presupposes pre-market placement notifications since its first requirement (‘no later than four weeks after having notified the Commission’) would otherwise be redundant in light of its second requirement (‘no later than two weeks before placing the model on the market’).121

30Interestingly, seemingly similar interpretive issues arise in the context of high-risk AI system obligations as well. For example, Article 9(8)’s first sentence expressly requires that high-risk AI systems must be tested before they are placed on the market or put into service,122 whereas Article 2(8) exempts testing activity regarding AI systems prior to their being placed on the market or put into service from the AI Act’s scope.123 To resolve this friction, obligations expressly aimed at the research phase of AI systems could be read as leges speciales to Article 2(8),124 or these obligations could apply retroactively once AI systems are placed on the market or put into service125 or there is an intention to do so.126 In light of Article 52(1)’s wording and the corresponding Recital 112, it appears worth considering corresponding interpretations with regard to pre-market placement notification obligations as well.127

2.1.1.3. Notification period

31The notification must take place ‘without delay and in any event within two weeks after [the notification] requirement is met or it becomes known that it will be met’ (emphasis added).128 According to one estimate, providers typically know one to three weeks before final pre-training whether their model will surpass Article 51(2)’s training compute threshold.129 Requiring the provider to notify ‘without delay’ – instead of ‘without undue delay’130 – suggests a strict interpretation of this requirement.131 The need for further investigation into whether the GPAI model, due to its specific characteristics, does not present systemic risks – in preparation of a submission pursuant to Article 52(2) – may justify use of the entire notification period.132 Absent sufficient reasons, the provider may be obliged to notify the Commission before the two-week period has expired.133
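For illustration, the outer two-week limit can be expressed as simple date arithmetic. This is a minimal sketch; the helper name is hypothetical, and, as discussed above, ‘without delay’ may require notification well before this outer limit.

```python
from datetime import date, timedelta

def latest_notification_date(trigger_date: date) -> date:
    """Outer limit under Art. 52(1): two weeks after the condition is
    met or it becomes known that it will be met. 'Without delay' may
    require notifying earlier than this date."""
    return trigger_date + timedelta(weeks=2)

# Example: condition met (or known to be met) on 2 August 2025.
print(latest_notification_date(date(2025, 8, 2)))  # -> 2025-08-16
```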

2.1.1.4. Impact on classification

32There are two ways in which a provider’s notification of a GPAI model pursuant to Article 52(1)’s first sentence indirectly influences classification. First, it bars the Commission from designating the model pursuant to Article 52(1)’s third sentence, as this provision requires that the Commission ‘becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified’ (emphasis added).134 However, the Commission can still designate the model pursuant to Article 52(4)’s first subparagraph if the requirements of that designation provision are met.135 Second, a provider of a GPAI model with high-impact capabilities may present arguments to demonstrate the absence of systemic risks only ‘with its notification’.136 This implies that having notified the Commission pursuant to Article 52(1)’s first sentence precludes the provider from submitting arguments pursuant to Article 52(2) at a later point in time.137 This consequence of notification is significant because – as discussed below138 – the reassessment provision under Article 52(5) does not apply to models that have been classified under Article 51(1)(a) and of which the provider has notified the Commission under Article 52(1)’s first sentence.139

33The AI Act does not specify whether these indirect effects of classification also occur in case of an incomplete notification – that is, one that does not contain all the information required pursuant to Article 52(1)’s second sentence.140 Such an incomplete notification likely triggers the effects of notification regarding designation under Article 52(1)’s third sentence, as is supported by both substantive and textual arguments. If one accepts that one rationale of such designation is to bring clarity as to whether a model meets the criteria for classification under Article 51(1)(a),141 such incomplete notification satisfies this rationale equally well. As in the case of ‘complete’ notification, no clarification is needed where the provider has already acknowledged that the model meets or will meet the Article 51(1)(a) condition by notifying the Commission. This conclusion finds textual support in the fact that even an incomplete notification arguably remains a ‘notification’ for the purposes of Article 52(1)’s third sentence, provided it contains the minimum content required by Article 52(1)’s first sentence – namely, the provider’s statement that its model meets or will meet the condition set out in Article 51(1)(a).

34Regarding the bar on contesting classification under Article 52(2) upon submission of an incomplete notification, similar arguments apply. Article 52(2) requires contestation of classification with the ‘notification’ – not upon providing a complete notification. While contesting classification alongside an incomplete notification may pose considerable difficulties,142 it would appear unjustified for a provider to preserve the right to contest classification simply by submitting an incomplete notification.143 Moreover, a provider who fails to submit a complete notification breaches its obligation under Article 52(1)’s second sentence and may be fined under Article 101(1)(a).144

35Beyond these indirect effects of notification, it is worth considering whether notification itself directly triggers the classification of a GPAI model with systemic risk. Although the AI Act does not expressly provide for that effect, it is not prima facie implausible; on the contrary, it would offer an elegant explanation for certain interpretive issues raised by Articles 51 and 52. It could, for example, explain why designation under Article 52(1)’s third sentence requires the absence of prior notification.145 This requirement establishes a form of alternativity between notification and designation within Article 52(1),146 which could be explained by both notification and designation having the same effect of triggering classification.

36Moreover, this interpretation would resolve an interpretive tension that potentially arises between classification under Article 51(1)(a) and (2) and the rejection of the provider’s arguments under Article 52(3). Under Article 51(1)(a) and (2), a model is classified once it reaches the training compute threshold of 10²⁵ FLOPs.147 Article 52(3), however, suggests that a model may already be classified before this threshold is reached, as, by its wording, classification immediately follows the Commission’s rejection of the provider’s arguments,148 which may occur before the model reaches the training compute threshold under Article 51(2).149 Earlier classification in the case of a rejection decision under Article 52(3) compared to classification under Article 51(1)(a) and (2) would be problematic, since it would effectively penalise providers that contest classification under Article 52(2). The same issue would not arise if notification under Article 52(1)’s first sentence already triggered classification.

37Nevertheless, it is unlikely that the legislature intended to link the classification of GPAI models with high-impact capabilities to notification, since the wording of Article 51(1)(a) mentions neither notification nor designation. Had the legislature intended notification to serve as a classification trigger, it would have been natural to express this in Article 51(1)(a) itself, just as Article 51(1)(b) mentions the procedural requirement of a Commission decision. Moreover, neither Article 52(1)’s first sentence nor the recitals provide any indication that the legislature intended to accord notification such significance. It is therefore more convincing that a notification pursuant to Article 52(1)’s first sentence does not trigger classification and that Article 51(1)(a) instead automatically classifies GPAI models with high-impact capabilities as presenting systemic risk – an interpretation analysed in depth in the Commentary on Article 51 in this work.150

38There are further consequences of notification to consider. Under Article 52(6), the Commission must update the published list of GPAI models with systemic risk where the Commission becomes aware of such a model through a provider’s notification.151 Furthermore, the time of notification is relevant for signatories of the Code of Practice with respect to their commitment to create a Safety and Security Framework as signatories commit to ‘hav[ing] confirmed the [Safety and Security] Framework no later than four weeks after having notified the Commission pursuant to Article 52(1) AI Act and no later than two weeks before placing the model on the market’.152

39Some authors have further argued that the Commission must necessarily render a designation (or non-designation) decision regarding the classification of the model as a GPAI model with systemic risk in all cases of notification under Article 52(1)’s first sentence.153 According to this view, the provider would have a legitimate interest in obtaining clarity concerning the obligations it faces, while from a systematic perspective such an approach would be warranted given that the Commission must also decide in cases where a provider contests classification under Article 52(2) and (3) or requests reassessment under Article 52(5).154 However, the AI Act neither requires nor provides a legal basis for a Commission decision following every notification, as it does not require a Commission designation for classification under Article 51(1)(a) either.155 Nor would such a decision appear necessary for reasons of legal certainty. Where a provider notifies the Commission without challenging classification under Article 52(2), the provider is able to know that Article 55 obligations must be observed upon the model’s automatic classification under Article 51(1)(a).156 Moreover, where a provider contests classification under Article 52(2), a Commission decision under Article 52(3) will be issued,157 thereby providing further clarity.

2.1.2. Article 52(1), second sentence: Content of the notification

40The second sentence of Article 52(1) specifies the information a provider must supply with its notification pursuant to Article 52(1)’s first sentence. It sets out that the notification must include ‘the information necessary to demonstrate that the relevant requirement has been met’. The ‘relevant requirement’ refers to the conditions that trigger the notification obligation under Article 52(1)’s first sentence, that is, the GPAI model meeting Article 51(2)’s compute threshold or otherwise having high-impact capabilities, or the provider’s knowledge that either of these conditions will be met.158

41While the notification is not the only means by which the Commission can obtain information about GPAI models, it offers a distinctive advantage over Articles 53(1)(a)159, 91160 and 92161 in that providers must provide information proactively, rather than the Commission having to actively seek it out.162 By requiring the provider to demonstrate that its model meets the classification requirements under Article 51(1)(a) and (2), the notification procedure further enables the Commission to identify errors or misrepresentations in the provider’s assessment before including the model in the published list of GPAI models with systemic risk under Article 52(6).163 In this way, the requirement in Article 52(1)’s second sentence guards against both inadvertent misassessments and providers seeking to advertise their model as sufficiently powerful to present systemic risk and thus secure inclusion on the published list.164 Moreover, the information included in the notification helps further the Commission’s understanding of the state of the art of the most advanced GPAI models.

42Where the notification obligation is triggered because the model meets or will meet the training compute threshold under Article 51(2),165 the provider must provide information about the cumulative amount of training compute that was used or will be used.166 According to the Commission Guidelines, this includes a ‘description of the approach used to estimate this amount of compute, including approaches used to make approximations where precise information is not available’.167

43Article 52(1)’s second sentence does not specify whether it is sufficient for the provider to state that the amount of compute used for the model’s training lies above the threshold under Article 51(2) or whether the provider is required to provide the Commission with its exact estimate of the cumulative amount of computation used for the model’s training.168 Arguments can be made for either interpretation. The exact amount of compute by which the threshold under Article 51(2) is surpassed may be considered irrelevant for the presumption of high-impact capabilities and therefore unnecessary to demonstrate that the model meets the condition under Article 51(1)(a). However, the inclusion of some estimate – in combination with a description of the estimation method – enables the Commission to verify whether the provider’s determination that the threshold is met is plausible. A requirement is arguably not demonstrated to be met if the information provided does not allow verification of the plausibility of this claim.169 Merely stating that the lower bound of the estimate exceeds the threshold could be insufficient, as it would not permit the Commission to cross-check a compute estimate against other available information about the model, its provider or comparable models.170

44Moreover, Recital 112 states that information provided in the context of notification ‘is valuable for the AI Office to anticipate the placing on the market of general-purpose AI models with systemic risks’.171 It is not clear to what information this refers – information provided under Article 52(1)’s second sentence or information provided in the context of a challenge to classification under Article 52(2).172 Nevertheless, it can be argued that the provision of an exact estimate of training compute would be of particular value for the AI Office, whereas the mere information that the threshold under Article 51(2) is met would only allow it to draw limited conclusions regarding the capabilities or risks that can reasonably be expected of the model. In this respect, a purposive argument can be made that the obligation to provide information with the notification should not be interpreted as encompassing only information that has little or no additional value for the recipient of the notification.173

45Where the notification obligation is triggered because the model has or will have actual high-impact capabilities, the information necessary to demonstrate this may be more encompassing and could include any information relevant to verifying that this is the case, such as information about the ‘technical tools and methodologies’ used in the evaluation of the model’s high-impact capabilities under Article 51(1)(a)174 or information about the number of parameters, the quality or size of the data set, or evaluations of the model’s capabilities.175 Where the notification obligation is triggered because the compute threshold is met, supplying such information generally does not appear necessary under Article 52(1)’s second sentence, except where, for the reasons set out above, it is necessary to verify the compute estimate itself.176 For example, this could be the case with regard to the number of parameters and training tokens where the model’s training compute has been estimated via the so-called ‘architecture-based approach’.177

2.1.3. Article 52(1), third sentence: Commission designation

46The third sentence of Article 52(1) is one of the two provisions under Article 52 establishing the Commission’s power to designate GPAI models as presenting systemic risk, with the other being the first subparagraph of Article 52(4).178 It sets out both substantive and procedural requirements.179

47Since Article 52(1)’s third sentence addresses scenarios where the provider has not notified the Commission of its model (‘of which it has not been notified’), the Commission must obtain the necessary information for the designation decision from other sources. These sources include the AI Office’s monitoring activities under Article 89(1),180 complaints received from downstream providers pursuant to Article 89(2),181 and qualified alerts by the scientific panel pursuant to Article 90(1).182 Although Article 52(1)’s third sentence – unlike Article 51(1)(b) and Article 52(4)’s first subparagraph – does not expressly mention these qualified alerts, this follows from the wording of Article 90(1)(b), which refers to Article 51 in general and therefore includes Article 51(1)(a) as the basis for Commission designation under Article 52(1)’s third sentence. It is further suggested by Recital 113, which mentions a ‘system of qualified alerts’ in the context of the designation of a GPAI model as presenting systemic risk of which the Commission has not been notified.183 Information shared by providers in structured dialogues or as a response to information requests under Article 53(1)(a) and Article 91(1) could also play a role for designation decisions.

2.1.3.1. Relationship with Article 52(4), first subparagraph

48To the extent that Article 52 contains procedural rules relating to Article 51,184 it only partially specifies the relevant substantive classification condition under Article 51(1) that it concerns.185 The absence of express reference to Article 51(1) is particularly notable in the case of the two designation provisions contained in Article 52 – Article 52(1)’s third sentence and Article 52(4)’s first subparagraph. Even though these provisions do not expressly identify how they relate to Article 51(1) and its conditions for classification, Article 52(1)’s third sentence is commonly interpreted to relate to the high-impact capabilities-based classification under Article 51(1)(a),186 while Article 52(4)’s first subparagraph is commonly understood to relate to classification under Article 51(1)(b), which is based on capabilities or impact equivalent to high-impact capabilities.187

Commission Designation: Overview of Relevant Provisions and Their Relationship

Classification provision (Article 51(1)): ‘A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: (a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks’
Corresponding designation provision: Article 52(1), third sentence – ‘If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk.’

Classification provision (Article 51(1)): ‘(b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII.’
Corresponding designation provision: Article 52(4), first subparagraph – ‘The Commission may designate a general-purpose AI model as presenting systemic risks, ex officio or following a qualified alert from the scientific panel pursuant to Article 90(1)(a), on the basis of criteria set out in Annex XIII.’

49For Article 52(1)’s third sentence, its connection to the high-impact capabilities-based classification under Article 51(1)(a) follows from its systematic placement in the first paragraph of Article 52, of which the other sentences exclusively relate to Article 51(1)(a) as well.188 This argument is reinforced by the negative requirement for designation under Article 52(1)’s third sentence, that is, the requirement that the Commission ‘has not been notified’ of the model. This requirement implies that the model did meet Article 51(1)(a)’s classification threshold, as that is the only scenario that would require notification.189

50Similarly, Article 52(4)’s first subparagraph refers to Annex XIII, which contains the criteria to assess Article 51(1)(b)’s determination ‘that a general-purpose AI model has capabilities or an impact equivalent to those set out in Article 51(1), point (a)’.190 This implies that this first subparagraph of Article 52(4) relates to Article 51(1)(b).191 Article 51(1)(b) and Article 52(4)’s first subparagraph support this interpretation by expressly stating that the Commission may decide ‘ex officio or following a qualified alert from the scientific panel’ – a phrase absent from Article 51(1)(a) and Article 52(1)’s third sentence.192 It finds additional support in Recital 111, which conflates the wording of Article 51(1)(b) and Article 52(4)’s first subparagraph,193 suggesting that these provisions refer to the same designation decision. Against this background, the apparent difference in decision content appears to be a minor drafting inconsistency without substantive relevance: Article 51(1)(b)’s Commission decision determines that the GPAI model ‘has capabilities or an impact equivalent to those set out in point (a)’, whereas the Commission decision under Article 52(4)’s first subparagraph ‘designat[es] […] a general-purpose AI model as presenting systemic risks’. A Commission decision finding that the model has capabilities or an impact equivalent to high-impact capabilities, however, apparently has no independent meaning apart from the model’s classification as a GPAI model with systemic risk.194 This corresponds to the consequence of designation under Article 52(4)’s first subparagraph being that the GPAI model is considered to present systemic risks.195

51Article 51(1)(b)’s reference to Article 51(1)(a) and Recital 111 allows an additional parallel that suggests that Article 51(1)(a) corresponds to Article 52(1)’s first sentence and Article 51(1)(b) corresponds to Article 52(4)’s first subparagraph: just as classification under Article 51(1)(b) (‘equivalent to those set out in point (a)’) builds on classification under Article 51(1)(a), designation under Article 52(4)’s first subparagraph serves to complement the system of notification and designation under Article 52(1), first and third sentence.196

2.1.3.2. Procedural requirements

52Article 52(1)’s third sentence itself does not establish any procedural requirements – apart from the requirement that the Commission should not have been previously notified of the model197 – for the designation of a GPAI model as presenting systemic risk. However, Article 94 establishes that the procedural rights laid down in Article 18 of the Market Surveillance Regulation198 (“MSR”) apply mutatis mutandis to GPAI model providers.199

53The more compelling arguments support the applicability of Article 94 in the context of Article 52, despite both provisions being located in different chapters of the AI Act.200 This interpretation finds support in the wording of Article 94, which does not refer to the enforcement powers under Section 5 of Chapter IX of the AI Act specifically but to ‘the providers of the general-purpose AI model’ in general.201 It is reinforced by Article 94 stating that the procedural rights laid down in Article 18 MSR shall apply ‘without prejudice to more specific procedural rights provided for in this Regulation’, which implies its application where the AI Act – as with regard to designation under Article 52 – does not provide for more specific procedural rights. However, it should be noted that Article 52 applies from 2 August 2025, whereas Article 94 only applies from 2 August 2026.202

54The application of the procedural rights under Article 18 MSR in the context of designation has some interesting consequences. For the Commission, this means that it must state the exact grounds of the designation decision.203 It also entails that the designation decision must be communicated without delay to the relevant provider, who must at the same time be informed of the remedies available to it and of the time limits to which those remedies are subject.204 Moreover, it implies that before a designation decision is made, the provider concerned must be given the opportunity to be heard within an appropriate period of not less than 10 working days.205 This right to prior hearing is an expression of the right to good administration under Article 41(2)(a) of the Charter.206 Article 18(3) MSR also provides for an exception from the opportunity to be heard where it is not possible to give the provider that opportunity because of the urgency of the designation decision, based on health or safety requirements or other grounds relating to the public interests covered by the AI Act.

55Beyond Article 18 MSR and partially overlapping with it, the right to good administration under Article 41(1) of the Charter applies, giving the provider the right to have its affairs ‘handled impartially, fairly and within a reasonable time’ by the Commission. This includes (i) the provider’s right to be heard before a designation decision,207 (ii) the provider’s right to have access to its file, while ‘respecting the legitimate interests of confidentiality and of professional and business secrecy’,208 and (iii) the Commission’s obligation to give reasons for its designation decision.209

2.1.3.3. Substantive requirements

56The Commission may designate a model as presenting systemic risk pursuant to Article 52(1)’s third sentence if it becomes aware of ‘a general-purpose AI model presenting systemic risks of which it has not been notified’ (emphasis added). The main interpretive question regarding this substantive requirement – discussed in the following subsections210 – is whether it sets out a procedure regarding Article 51(1)(a) and its substantive requirements or whether it introduces a distinct classification pathway and corresponding substantive requirement, that is, whether the GPAI model can be designated if it presents risks that meet the definition of systemic risk under Article 3(65).211

2.1.3.3.1. High-impact capabilities (Article 51(1)(a))

57The more convincing view is that Article 52(1)’s third sentence relates to the classification condition under Article 51(1)(a) and requires a model to have high-impact capabilities.212 This interpretation is supported by the connection between both provisions, evidenced by the designation provision’s systematic placement in the first paragraph of Article 52, of which the other sentences exclusively relate to Article 51(1)(a), and by its requirement that the Commission ‘has not been notified’ of the model.213 This connection implies that Commission designation under Article 52(1)’s third sentence serves to replace a lack of (obligatory) notification and is subject to the same substantive requirement as classification under Article 51(1)(a).214 This is reinforced by the strong indicators that the other designation provision under Article 52(4)’s first subparagraph relates to Article 51(1)(b).215 These suggest that Article 52(1)’s third sentence does not relate to Article 51(1)(b) as well, as the existence of two designation provisions under Article 52 relating to one and the same classification condition under Article 51(1) would appear duplicative. Relating Article 52(1)’s third sentence to Article 51(1)(a) avoids this redundancy, as Article 52 contains no other designation provision relating to Article 51(1)(a). While Article 51(1)(b)’s wording suggests that GPAI models with high-impact capabilities meet the requirements not only for automatic classification under Article 51(1)(a) but also for designation under Article 52(4)’s first subparagraph in conjunction with Article 51(1)(b),216 this would not necessarily render designation under Article 52(1)’s third sentence based on Article 51(1)(a) redundant, as this designation provision does not require the Commission to have regard to the criteria contained in Annex XIII,217 and the presumption of high-impact capabilities under Article 51(2) does apply.218

58This interpretation is not fundamentally challenged by the fact that models that meet Article 51(1)(a)’s requirements are automatically classified as presenting systemic risk.219 The automatic classification of GPAI models with high-impact capabilities under Article 51(1)(a) means that designation under Article 52(1)’s third sentence is not required for such a model’s classification and has no direct effect on the classification status of the model.220 This appears counterintuitive given the distinct consequences of designation under Article 52(4)’s first subparagraph as constitutive for classification of a GPAI model under Article 51(1)(b).221 Nonetheless, a model’s automatic classification under Article 51(1)(a) does not necessarily render a Commission decision under Article 52(1)’s third sentence redundant. Where there is uncertainty about whether a model meets the criteria for classification under Article 51(1)(a), a legally binding Commission decision under Article 52(1)’s third sentence could bring clarity.222 Moreover, the characterisation of this Commission decision as a designation (‘decide to designate’) instead of a mere confirmation can potentially be explained by the further consequences of this decision beyond confirming the classification status.223 In particular, the provider can no longer challenge classification pursuant to Article 52(2) and (3).224

2.1.3.3.2. Systemic risk presence (Article 3(65))

59By contrast, an interpretation of Article 52(1)’s third sentence as allowing the Commission to designate a GPAI model if it presents risks that meet the definition of systemic risk under Article 3(65) appears less convincing. Such an interpretation would find support in the provision’s wording, which requires that the Commission become aware of a ‘general-purpose AI model presenting systemic risks’ (emphasis added) without expressly referencing either Article 51(1)(a) or (b).225 It would further afford designation under Article 52(1)’s third sentence a constitutive effect for classification more similar to that of designation under Article 52(4)’s first subparagraph. Moreover, it would establish Article 52(1)’s third sentence as an additional classification pathway beyond Article 51(1)(a) and (b), which could enhance the classification framework’s adaptability to evolving technological developments.226

60However, there are convincing arguments against an interpretation of Article 52(1)’s third sentence as directly relating to the definition of systemic risk under Article 3(65). The first sentence of Recital 113 – corresponding to designation under Article 52(1)’s third sentence227 – sets out that ‘[i]f the Commission becomes aware of the fact that a general-purpose AI model meets the requirements to classify as a general-purpose AI model with systemic risk […] the Commission should be empowered to designate it so.’ (emphasis added) This indicates that the legislature intended the designation to be based on the conditions for classification under Article 51(1), rather than on the systemic risk definition under Article 3(65). If the legislature had intended to provide for the designation of a GPAI model as presenting systemic risk directly on the basis of the definition under Article 3(65), it would have been natural to establish this additional classification pathway in Article 51(1) itself. Moreover, the wording of Article 52(1)’s third sentence does not permit only a reading of the provision as referring to Article 3(65). Its requirement that the Commission become aware of a ‘general-purpose AI model presenting systemic risks’ can also be read as shorthand for the Commission becoming aware of a GPAI model with high-impact capabilities that has therefore been automatically classified as a GPAI model with systemic risk.228

2.1.3.3.3. Applicability of high-impact capabilities presumption (Article 51(2))

61Following an interpretation of designation under Article 52(1)’s third sentence as requiring the GPAI model to meet the condition under Article 51(1)(a), the question arises as to whether the presumption of high-impact capabilities under Article 51(2) is applicable in this context. The more compelling arguments support its applicability.229 Article 51(2) contains a presumption of high-impact capabilities for the purpose of the classification condition under Article 51(1)(a).230 Its applicability further flows from the relationship of Commission designation under Article 52(1)’s third sentence with the notification obligation under Article 52(1)’s first sentence, as set out above.231 As the high-impact capabilities presumption under Article 51(2) applies in the context of the notification obligation,232 this implies that it is applicable in the context of designation under Article 52(1)’s third sentence as well.

62The fact that Article 52(2) expressly allows providers to contest classification only with notification – not following designation under Article 52(1)’s third sentence233 – does not fundamentally challenge this interpretation. While this means that a provider whose model has been designated cannot rebut the high-impact capabilities presumption through Article 52(2),234 the provider retains the right to be heard under Article 94 in conjunction with Article 18(2) MSR before a designation decision is made.235 Moreover, where a model meets Article 51(2)’s training compute threshold but does not actually have high-impact capabilities, its provider can prevent its designation under Article 52(1)’s third sentence by notifying the Commission pursuant to Article 52(1)’s first sentence and contesting classification pursuant to Article 52(2) together with the notification.

2.1.3.4. Commission discretion and timing

63The wording of Article 52(1)’s third sentence (‘may’) suggests that the Commission has discretion in deciding whether to designate a GPAI model that meets the requirements for designation as presenting systemic risk.236 In this respect, the provision aligns with the other designation provision under Article 52(4)’s first subparagraph.237 It should be noted, however, that GPAI models with high-impact capabilities are automatically classified as presenting systemic risk under Article 51(1)(a).238 Designation of such models under Article 52(1)’s third sentence thus has – in contrast to designation pursuant to Article 52(4)’s first subparagraph – no direct effect on the classification status of the model.239 As a result, it is likely that different considerations influence the Commission’s exercise of its discretion under the two provisions. Considerations that could inform the Commission’s decision under Article 52(1)’s third sentence include the time elapsed since the provider became aware that its model would meet the training compute threshold and thus require notification under Article 52(1)’s first sentence,240 and the extent to which a model surpasses Article 51(2)’s training compute threshold, given that higher training compute may indicate greater model capabilities.241

64As discussed above, Article 52(1)’s first sentence raises the question of whether a provider may be obliged to notify the Commission of a GPAI model before its placing on the market.242 Article 52(1)’s third sentence raises the parallel question of whether the Commission can designate a GPAI model as presenting systemic risk even prior to the model’s market placement. Like pre-market notification obligations, pre-market Commission designations could sit in tension with an exclusion from the AI Act’s scope of research and development activities conducted before a GPAI model’s placing on the market.243 Unlike pre-market notification obligations, however, the recitals provide no indication of the legislature’s intent to allow pre-market Commission designations.244 At the same time, the fact that designation under Article 52(1)’s third sentence substitutes for a missing notification under Article 52(1)’s first sentence (‘of which it has not been notified’) suggests that if pre-market notification obligations are recognised, pre-market designations should likewise be possible.

2.1.3.5. Consequences of designation

65Uncertainty regarding the consequences of a GPAI model’s designation as presenting systemic risk pursuant to Article 52(1)’s third sentence arises from the fact that – according to the Commission Guidelines and the view taken here245 – a GPAI model with high-impact capabilities is automatically classified as presenting systemic risk under Article 51(1)(a).246 Thus, designation of such a model does not alter its classification status as it is already classified as presenting systemic risk.247 Accordingly, designation under the third sentence of Article 52(1) appears to be largely declaratory in nature.

66However, a designation decision under Article 52(1)’s third sentence does have important legal implications. Where a model is designated under this provision, it becomes impossible for the provider to contest the model’s classification pursuant to Article 52(2).248 Such a challenge to classification must be submitted together with notification pursuant to Article 52(1)’s first sentence.249 Yet designation under Article 52(1)’s third sentence presupposes the absence of notification250 and establishes that the GPAI model meets Article 51(1)(a)’s classification condition,251 thereby supplanting the provider notification and precluding the provider from challenging classification under Article 52(2).

67Moreover, any challenge to classification after the model’s designation under Article 52(1)’s third sentence would amount to a contestation of the designation itself – yet the legislature has provided for challenges to designation not in Article 52(2) but in Article 52(5).252 Furthermore, unlike regular cases of contestation pursuant to Article 52(2), in which the Commission considers the model’s classification for the first and only time in the context of notification, contestation after designation would effectively entitle a provider who has failed to fulfil its notification obligation to two reviews of the model’s classification by the Commission. It is unclear how this additional review could be justified.

68A further question is whether a provider of a model that has been designated pursuant to Article 52(1)’s third sentence can request reassessment of this designation under Article 52(5).253 The provision’s wording only applies to designations under Article 52(4)’s first subparagraph, and the more convincing arguments – which are discussed in detail in a subsequent section254 – appear to preclude its analogous application to designations under Article 52(1)’s third sentence.

69Moreover, a designation decision pursuant to Article 52(1)’s third sentence is – by its nature as a Commission decision – legally binding.255 It can be challenged pursuant to Article 263(4) TFEU.256 The challenge does not have a suspensory effect.257 A provider of a designated GPAI model that did not challenge the designation is in principle precluded from challenging an enforcement decision on the grounds that the GPAI model does not have high-impact capabilities and therefore should not have been designated as presenting systemic risk.258

2.2. Article 52(2) and (3): Procedure for contesting classification upon notification

70GPAI models with high-impact capabilities, including those which meet the training compute threshold under Article 51(2) and are therefore presumed to have high-impact capabilities, are automatically classified as GPAI models with systemic risk under Article 51(1)(a).259 However, for such models, Article 52’s second and third paragraphs set out a procedure that allows providers to contest260 their classification.261

71The two paragraphs are closely interconnected and overlap in part.262 Article 52(2) sets out the scope of the procedure and establishes the requirements for contesting classification,263 whereas Article 52(3) governs the Commission’s decision upon a provider’s submission pursuant to Article 52(2).264

2.2.1. Scope (‘general-purpose AI model that meets the condition referred to in Article 51(1)(a)’ and ‘although it meets that requirement’)

72In general, the procedure to contest classification under Article 52(2) and (3) applies in cases of notification pursuant to Article 52(1)’s first sentence.265 As the notification obligation under Article 52(1)’s first sentence applies both (i) to GPAI models with actual high-impact capabilities266 and (ii) to GPAI models whose high-impact capabilities are presumed based on the training compute threshold under Article 51(2),267 so does the procedure to contest classification under Article 52’s second and third paragraphs.268 The former follows naturally from Article 52(2)’s wording, which refers to ‘a general-purpose AI model that meets the condition referred to in Article 51(1), point (a)’. The Article 51(1)(a) condition requires the GPAI model to have high-impact capabilities.269 The latter application – the procedure to contest classification for GPAI models with only presumed high-impact capabilities – merits discussion because a literal reading of Article 52(2) provides, at first glance, an argument against the possibility of contesting classification of a GPAI model whose high-impact capabilities are merely presumed. Article 52(2) not only fails to reference the high-impact capabilities presumption under Article 51(2) but also requires that the GPAI model ‘meets that requirement’, with ‘that requirement’ appearing to refer to the condition referred to in Article 51(1)(a) – namely, the GPAI model having high-impact capabilities.270

73Despite this ambiguous wording, the classification procedure is not restricted to providers of GPAI models with actual high-impact capabilities.271 In any case, excluding providers of models with only presumed high-impact capabilities would be difficult to justify. If a GPAI model with actual high-impact capabilities may possess specific characteristics that demonstrate the absence of systemic risk, there is no reason why such characteristics should not be equally relevant for a GPAI model with only presumed high-impact capabilities. Moreover, limiting the procedure under Article 52’s second and third paragraphs to GPAI models with actual high-impact capabilities would create the counterintuitive consequence that a provider of a model with presumed high-impact capabilities would first be required to prove that the model possesses actual high-impact capabilities in order to be able to demonstrate that the model lacks systemic risk.

74This reading of Article 52(2)’s reference to models that ‘mee[t] the condition referred to in Article 51(1), point (a)’ as including ‘or is presumed to meet it under Article 51(2)’ finds support in the notification obligation under Article 52(1)’s first sentence. The wording of this provision corresponds with Article 52(2) insofar as only Article 51(1)(a) is expressly referenced. However, in the context of the notification obligation, the recitals clarify that this reference encompasses models whose high-impact capabilities under Article 51(2) are only presumed.272

2.2.2. Requirements for contesting classification

75Article 52(2) contains several interconnected requirements for a provider’s contestation of classification, which are discussed in the subsequent sections. It requires that a provider present ‘arguments to demonstrate that […] the general-purpose AI model does not present […] systemic risks’,273 and further stipulates that these arguments be ‘sufficiently substantiated’.274 Article 52(2)’s structure indicates that the requirement of sufficient substantiation is closely related to the requirement to demonstrate the absence of systemic risk.275 Moreover, Article 52(3)’s reference to both requirements in the same breath suggests that the provider’s arguments are sufficiently substantiated if, and only if, they demonstrate the absence of systemic risks.276

76The phrase ‘due to its specific characteristics’ in Article 52(2) further qualifies the nature of the arguments the provider is required to present,277 whereas the requirement to present arguments ‘with its notification’ relates to the timing of the submission.278 Article 52(2)’s use of the term ‘exceptionally’ in requiring the provider to demonstrate that the model does not present systemic risks seems to establish no independent requirement but rather suggests that the legislature considered models within the scope of Article 52(2) as typically presenting systemic risk and that the requirements for contesting classification must therefore not be interpreted too leniently.279 The last subclause of Article 52(2) (‘and therefore should not be classified as a general-purpose AI model with systemic risk’) does not denote an additional requirement for the provider’s submission but rather clarifies its purpose: to contest classification.280

2.2.2.1. Systemic risk absence, including rebuttal of high-impact capabilities presumption under Article 51(2) (‘does not present […] systemic risks’)
2.2.2.1.1. Available lines of argument

77In order to challenge a model’s classification under Article 52(2) and (3), the provider must demonstrate that it does not present systemic risks. As laid out above, the procedure for contesting classification applies to both GPAI models with actual high-impact capabilities and to GPAI models whose high-impact capabilities are presumed based on the training compute threshold under Article 51(2).281 Therefore, there are two conceivable lines of argument: first, the provider’s arguments can be aimed at rebutting the presumption under Article 51(2) that the model has high-impact capabilities; second, they can be aimed at demonstrating that the model does not present systemic risks despite having high-impact capabilities or regardless of having such capabilities.282 Both lines of argument – which are explored in greater detail below283 – are available for providers seeking to contest classification of their model.284

78For the latter line of argument, this follows directly from Article 52(2)’s wording which expressly allows a provider of a model with high-impact capabilities ‘to demonstrate that the general-purpose AI model does not present […] systemic risks.’ Regarding the former line of argument, Article 52(2)’s wording might initially suggest that it precludes the rebuttal of the high-impact capabilities presumption under Article 51(2). The parenthetical ‘although it meets that requirement’ could – while not excluding the possibility to contest classification for GPAI models with only presumed high-impact capabilities285 – appear to presuppose as immutable that the model possesses such capabilities.286 Under Article 52(2), ‘that requirement’ can only reasonably refer to ‘the condition referred to in Article 51(1), point (a)’, that is, having high-impact capabilities.287

79However, this textual argument is not persuasive. As demonstrated above, Article 52(2)’s reference to Article 51(1)(a) must be understood as encompassing the high-impact capabilities presumption under Article 51(2).288 Allowing a provider of a GPAI model with only presumed high-impact capabilities to contest its classification289 but not allowing for the rebuttal of this presumption must therefore be rejected as an inconsistent reading of Article 52(2)’s wording. This is reinforced by the fact that high-impact capabilities, not Article 51(2)’s training compute threshold, remain the primary indicator that the model presents systemic risks.290

2.2.2.1.2. Effects of rebutting the high-impact capabilities presumption under Article 51(2)

80Where a provider presents sufficiently substantiated arguments to rebut Article 51(2)’s presumption of high-impact capabilities in the context of the procedure to contest classification under Article 52(2) and (3), the question arises of whether such a rebuttal alone suffices ‘to demonstrate that the general-purpose AI model does not present […] systemic risks’.291 The Commission Guidelines suggest that it does, stating that ‘[i]n the case where the provider’s arguments are aimed at rebutting the presumption that the model has high-impact capabilities and therefore does not present systemic risks, the Commission will assess whether the provider has presented sufficiently substantiated arguments manifestly calling into question this presumption.’292 Tellingly, the Commission Guidelines make no mention of any further assessment of whether the model might nevertheless present systemic risks despite lacking high-impact capabilities.293

81Support for the Commission’s view may be found in Article 3(65)’s definition of systemic risk, which characterises systemic risk as ‘specific to the high-impact capabilities of general-purpose AI models’ (emphasis added). If Article 3(65)’s ‘specific to’ is read as exclusive to,294 it follows that, by definition, only GPAI models with high-impact capabilities can present systemic risks. Under this interpretation, a provider who successfully rebuts Article 51(2)’s presumption would therefore have demonstrated that its model does not present systemic risks.

82By contrast, if Article 3(65)’s ‘specific to’ is read as characteristic of295 – thus implying that GPAI models with high-impact capabilities typically present systemic risks without excluding that GPAI models without such capabilities may under certain circumstances present systemic risks as well – then rebutting Article 51(2)’s presumption alone does not conclusively prove that a model does not present systemic risks. On this reading, the procedure to contest classification under Article 52(2) and (3) could require a provider to present further arguments beyond rebutting Article 51(2)’s presumption to demonstrate the absence of systemic risks.

83Ultimately, the Commission’s interpretation appears compelling irrespective of how Article 3(65) is construed – a question discussed elsewhere296 – as it finds further support in a comparison of the procedure to contest classification under Article 52(2) and (3) with designation of GPAI models as presenting systemic risk under Article 51(1)(b) in conjunction with Article 52(4)’s first subparagraph.297 In this context, the Commission carries the burden of proof for designating a GPAI model as presenting systemic risks based on reasons beyond its high-impact capabilities.298 Departing from this principle where a provider has successfully rebutted Article 51(2)’s presumption of high-impact capabilities in its submission of arguments under Article 52(2) appears to lack justification, as the model’s high-impact capabilities – and not its exceeding of Article 51(2)’s training compute threshold – are the primary indicator that the model presents systemic risks.299

84This is confirmed by a further consideration, which relates to Article 51(1)(a) and the timing of the Commission’s decision pursuant to Article 52(3). Where the Commission accepts the provider’s arguments aimed at rebutting the high-impact capabilities presumption before the model exceeds Article 51(2)’s training compute threshold,300 this arguably eliminates the legal basis for the model’s classification under Article 51(1)(a), insofar as that classification is based on the model’s presumed high-impact capabilities due to meeting Article 51(2)’s threshold.301 In other words, once the model exceeds Article 51(2)’s training compute threshold, the presumption of high-impact capabilities can no longer take effect, as it has already been rebutted. This in turn suggests that even where the Commission decides after the model exceeds Article 51(2)’s threshold, a rebuttal of the high-impact capabilities presumption must suffice to contest classification under Article 52(2) and (3), as there is no apparent reason why the consequences of successfully rebutting the high-impact capabilities presumption should depend on when the Commission makes its decision under Article 52(3).302

2.2.2.2. Relevant model characteristics (‘due to its specific characteristics’)

85The parenthetical ‘due to its specific characteristics’ qualifies the nature of the arguments the provider is required to present to contest classification. ‘Specific characteristics’ refers to the individual model whose classification the provider contests, rather than to GPAI models generally.303 ‘Characteristics’ can be broadly defined as any qualities, properties and features of the GPAI model.304 Beyond this basic understanding, however, there is considerable uncertainty surrounding the questions of what Article 52(2) means by ‘specific characteristics’ and, correspondingly, in which ways the parenthetical ‘due to its specific characteristics’ constrains the provider’s argumentative scope.305 These questions carry considerable significance, as their resolution determines how high the bar is set for providers seeking to challenge the systemic risk classification of models that meet the compute threshold under Article 51(2) and are therefore classified as presenting systemic risk under Article 51(1)(a).306

86While the Commission Guidelines acknowledge the existence of the ‘specific characteristics’ requirement, they do not elaborate on its interpretation.307 Insofar as they state that measures to mitigate systemic risks are not suitable grounds for challenging classification,308 the Commission does not expressly argue that these mitigations are not ‘specific characteristics’ but rather that the mitigation of a systemic risk does not lead to its absence.309

87Two principal approaches present themselves for clarifying the notion of ‘specific characteristics’. A first would be comparative in nature: characteristics could be deemed specific if they sufficiently distinguish the model from reference GPAI models with systemic risk.310 A second approach would not require such a comparison: on this view, ‘specific characteristics’ could be understood as encompassing certain characteristics that are particularly relevant in the context of systemic risk classification.311

88Both of these approaches, which are discussed in the following sections, allow for broad and narrow interpretations of ‘specific characteristics’.312 While they are not necessarily mutually exclusive, the second approach appears particularly compelling as it accords with the legislative assumption underlying the AI Act that arguments based on certain characteristics of a GPAI model may be more suitable for demonstrating the absence of systemic risk than arguments based on others.313

2.2.2.2.1. Characteristics distinguishing the model from reference GPAI models with systemic risk

89One approach that could help clarify the notion of ‘specific characteristics’ in Article 52(2) involves determining whether a model’s characteristics constitute ‘specific characteristics’ by comparison with other models of a reference group. According to this approach, where the characteristics already occur to a certain extent in other models, the provider could no longer challenge its model’s systemic risk classification on the basis of these characteristics. Such a relative approach would require establishing (i) a reference group against which the comparison is made, (ii) thresholds for how often a characteristic may occur in the reference group before it ceases to be a ‘specific’ characteristic (‘cut-off points’),314 and (iii) criteria for when a difference between two characteristics is significant enough for them to constitute different characteristics rather than variants of the same characteristic (‘differentiation criteria’).315 The obvious choice of reference group comprises models that have been classified as GPAI models with systemic risk (or a subgroup of these),316 since the provider seeks to demonstrate through its challenge to classification that its model does not belong to this group because it does not present systemic risk.317 The first approach’s reliance on cut-off points and differentiation criteria that are not specified in the AI Act would make ‘specific characteristics’ an inherently flexible criterion. It may be adjusted in light of technological developments and regulatory amendments, such as updates to the training compute threshold under Article 51(2).318 However, this flexibility would increase the legal uncertainty for providers seeking to challenge classification under Article 52(2).

90This first approach to interpreting ‘specific characteristics’ finds support in the provision’s wording. One possible meaning of ‘specific’ in natural language use is ‘relating to one thing but not others’.319 Moreover, the resulting limitation of the providers’ scope for argumentation can, at least in some cases, be justified with administrative efficiency: where one or several GPAI models with systemic risk adopt a certain safety characteristic and the Commission has already concluded in previous reviews of this safety characteristic that its adoption is not sufficient to establish the absence of systemic risk, not allowing another provider to challenge classification under Article 52(2) based on this safety characteristic reduces the Commission’s administrative burden to review this characteristic again.

91Nevertheless, circumstances may warrant allowing a provider to challenge classification based on a particular safety characteristic even where one or several GPAI models with systemic risk have already adopted that characteristic. First, the Commission might not have reviewed this safety characteristic yet despite its earlier adoption by other GPAI models.320 Particular difficulties arise where the providers of those models have chosen not to challenge classification. Second, even where the Commission has reviewed a safety characteristic before, another review of it may still be warranted, for example in light of new scientific evidence or of other features of the GPAI model that justify a different evaluation of the same safety characteristic in different GPAI models.321

92However, these cases do not raise objections against the first approach in principle, as they appear to be addressable through the selection of appropriate cut-off points and differentiation criteria. For example, new scientific evidence that has emerged in the interim could form part of the differentiation criteria, thereby permitting renewed review of a safety characteristic despite previous GPAI models with systemic risk possessing the same characteristic. Moreover, under an appropriately high cut-off point for determining when a characteristic ceases to be a ‘specific characteristic’ because it is sufficiently prevalent, the scenario of a provider of a GPAI model with systemic risk forgoing a classification challenge to stifle competition becomes hypothetical.

93A more principled objection against the first approach is that it is not apparent what it actually demands of the provider in practice. The provider bears the burden of proof in the context of the challenge to classification under Article 52(2).322 However, proving that a characteristic is ‘specific’ under the first approach requires information about the prevalence of this characteristic in GPAI models with systemic risk. Where providers of GPAI models with systemic risk do not make such information publicly available, the provider seeking to contest classification may have no means of acquiring it. The difficulty is compounded by the fact that the Commission will not be able to provide the necessary information in all cases, as information or documentation related to GPAI models obtained by the Commission pursuant to Article 53 shall be treated in accordance with the confidentiality obligations set out in Article 78.323 However, this problem may be smaller than it appears at first glance. In particular, where a provider has developed a certain safety characteristic itself, it may not prove difficult to argue that this characteristic constitutes a ‘specific characteristic’. In other cases, one could consider it sufficient if the provider presents arguments on the basis of the information available to it, as nothing impossible can be demanded of it (impossibilium nulla obligatio).324

2.2.2.2.2. Certain types of characteristics

94Another approach to clarifying the notion of ‘specific characteristics’ that would not require a comparison with reference GPAI models with systemic risk325 would be to interpret it as covering certain types of model characteristics. This second approach could draw on the legislative assumption underlying the AI Act that arguments based on a certain type of characteristic of a GPAI model may generally be more suitable for demonstrating the absence of systemic risk than arguments based on other types. This assumption finds support in Annex XIII, which contains a list of criteria that the Commission shall take into account for designation of a model as presenting systemic risk under Article 51(1)(b) and Article 52(4)’s first subparagraph.326 The existence of this list acknowledges that the listed criteria might be more relevant than others for systemic risk classification. Such an interpretation of ‘specific characteristics’ finds further support in the term’s use in a similar sense in Recital 65, which distinguishes between the ‘specific characteristics’ of an AI system and its ‘use’, thereby implying – with regard to AI systems, not GPAI models – that an AI system’s use does not fall into the category of ‘specific characteristics’.327

95However, the difficulty of this approach lies not so much in its abstract justification as in determining which types of characteristics actually constitute ‘specific characteristics’ under it.328 The criteria listed in Annex XIII can only provide limited guidance on this question,329 and with other reference points largely absent,330 this approach currently suffers from a lack of legal certainty. This might change with the availability of appropriate technical tools and methodologies, including indicators and benchmarks, for the evaluation of high-impact capabilities under Article 51(1)(a)331 or with other forms of guidance by the Commission or the Court of Justice of the European Union.332

96A more principled objection against the second approach is that it threatens the coherence of the AI Act’s systemic risk classification framework. To the extent that the GPAI model having ‘specific characteristics’ is an additional substantive requirement for a challenge to classification under Article 52(2), there is an inherent tension with the fact that the designation of a GPAI model as presenting systemic risk under Article 52(1)’s third sentence and Article 52(4)’s first subparagraph comes with no comparable requirement.333

2.2.2.3. Timing of classification contestation (‘with its notification’)

97Article 52(2) requires the provider of a GPAI model with high-impact capabilities to contest its classification as presenting systemic risk ‘with its notification’. This wording implies that the submission of arguments against classification must coincide with the notification under Article 52(1)’s first sentence and that a later submission is inadmissible.334 Moreover, it suggests that the provider cannot contest classification in the absence of notification, meaning that it cannot contest classification in designation cases under Article 52(1)’s third sentence.335

98This requirement of temporal coincidence has the potential to pose considerable difficulties both for the provider seeking to contest classification and for the Commission deciding on the provider’s submission: under Article 52(1)’s first sentence, the (prospective) provider of a (future) GPAI model may be obliged to notify the Commission even before or during the training of the GPAI model.336 However, at this early stage in the model’s lifecycle, it may be difficult to foresee whether the model’s specific characteristics will be sufficient to rule out the presence of systemic risks or, where the notification obligation is triggered by Article 51(2)’s training compute threshold, whether the GPAI model will actually possess high-impact capabilities.337 Although the ability to contest classification later could help overcome this difficulty, it is uncertain to what extent a notifying provider can challenge its model’s classification after notification, as Article 52(5) expressly establishes reassessment rights only for models that have been designated under Article 52(4)’s first subparagraph.338 Article 52(2)’s timing requirement might thus incentivise providers not to notify the Commission of the model’s high-impact capabilities within the timeframe provided for by Article 52(1)’s first sentence in order to collect more argumentative material for contesting classification under Article 52(2).339 Conversely, the Commission risks deciding on the provider’s arguments against systemic risk classification on the basis of forecasted capabilities and forecasted risks of the GPAI model, which might be far better understood after training is completed.

99It seems doubtful whether these difficulties can be fully resolved within the existing legal framework. Among various conceivable interpretive approaches, the one that finds most support in Article 52 is to allow a provider who has notified the Commission on the basis of prospective knowledge that the model will meet the classification condition under Article 51(1)(a) or the training compute threshold under Article 51(2) to renotify once that condition or threshold is actually met. This approach finds limited support in Article 52(1)’s first sentence, which provides for different temporal triggers for notification without excluding the possibility of renotification.340 Renotification aligned with the temporal triggers in Article 52(1)’s first sentence would usefully inform the Commission not only that the model will eventually meet the classification condition, but also when it has done so. However, while Article 52(1) does not expressly preclude renotification, there is no indication that the legislature envisaged double notification. By establishing alternative temporal triggers in Article 52(1)’s first sentence, the legislature likely sought to ensure earlier notification while also preserving an objective trigger for the notification obligation that does not depend on the provider’s knowledge – not to permit repeated notifications serving as vehicles for contestation pursuant to Article 52(2).

100Less convincing alternative approaches to resolving the difficulties arising from the requirement of temporal coincidence include interpreting ‘with its notification’ not as ‘simultaneously with’ but as ‘not before’ the notification, or requiring that the submission of arguments coincide with notification while allowing arguments to be supplemented in certain cases. In particular, one could consider submissions of arguments as not being late where they occur before or simultaneously with the market placement of the model. However, these interpretations sit uneasily with the wording of Article 52(2) and with the fact that the exclusion from systemic risk classification operates only ‘exceptionally’.341

101Further, even though the requirement of temporal coincidence may pose considerable difficulties, these do not appear insurmountable. The Commission Guidelines provide guidance on what kind of information could be included in a submission under Article 52(2).342 This guidance is apparently given with the temporal coincidence requirement in mind, as the Commission Guidelines refer to ‘information available to [the provider] at the time of notification about the model’s achieved or anticipated capabilities, including in the form of actual or forecasted benchmark results (for example based on scaling analyses)’.343 Moreover, a provider may be able to compare the model’s architecture and function to those of existing models even before training has started and thus demonstrate that the model does not pose systemic risk.344 Pre-notification contacts between a provider and the Commission may further facilitate a provider’s challenge to classification.345

2.2.2.4. Standard of proof (‘sufficiently substantiated arguments’)

102Article 52(2) requires the provider to present ‘sufficiently substantiated arguments’ to demonstrate that the GPAI model does not present systemic risks due to its specific characteristics. The burden of proof thus rests with the GPAI model provider.346 As to substance, the AI Act does not specify when the threshold of sufficient substantiation is met. The parallel provision in Article 3(5) DMA provides some guidance on this question.347 Under that provision, arguments brought forward by an undertaking against its gatekeeper designation are not sufficiently substantiated if they ‘do not manifestly call into question the presumptions set out in [Article 3(2) DMA]’.348 The Commission Guidelines echo this language by stating that where the provider seeks to rebut Article 51(2)’s presumption of high-impact capabilities, the Commission assesses ‘whether the provider has presented sufficiently substantiated arguments manifestly calling into question this presumption.’349 On the basis of the German-language version of Article 52(2),350 some scholars argue that Article 52(2) requires the provider to present the necessary factual information together with a logical and transparent argument, including addressing potential counterarguments.351 In its assessment of the submitted arguments, the Commission may take into account that the absence of systemic risk in models with high-impact capabilities would be the exception under Article 52(2).352

103With regard to the rebuttal of the high-impact capabilities presumption under Article 51(2),353 the Commission has specified in its guidelines what kind of evidence providers are expected to submit with their challenge to classification. Providers are required to submit available information about the model’s actual or forecasted capabilities.354 Additionally, the Commission ‘strongly advises’ providers to submit further information that allows the Commission to draw conclusions about the model’s (high-impact) capabilities.355 This encompasses information relating to ‘model architecture, number of parameters, number of training examples, data curation and processing techniques, training techniques, input and output modalities, expected tool use, and expected context length’.356 Based on the Commission Guidelines, it is easier for providers to rebut the high-impact capabilities presumption where their models exceed the training compute threshold under Article 51(2) by a smaller margin.357

104With regard to models that do not present systemic risk despite having high-impact capabilities, the Commission has, so far, provided little guidance on the kind of evidence providers are expected to submit to prove this. The Commission Guidelines merely state in that respect that ‘mitigations already or planned to be implemented are not suitable grounds for a model being excluded from classification as a general-purpose AI model with systemic risk’.358

2.2.3. Commission decision
2.2.3.1. Acceptance, rejection and discretion

105Interestingly, and in contrast to the parallel provision for challenging gatekeeper designation under Article 3(5) DMA,359 the AI Act expressly provides only for the rejection of the providers’ arguments pursuant to Article 52(3) – and not for their acceptance. It may reasonably be assumed, however, that the second and third paragraphs of Article 52 implicitly presuppose the possibility of the Commission accepting the providers’ arguments,360 as only a legally binding acceptance decision provides the provider with legal certainty that the Commission will not reject the provider’s arguments at a later point in time.361

106If the Commission considers that a provider’s contestation of classification does not meet Article 52(2) and (3)’s requirements, it is obliged to reject the provider’s arguments.362 Given that Article 52(3) expressly only regulates the case where the provider does not present sufficiently substantiated arguments and was not able to demonstrate that the GPAI model does not present systemic risks,363 the question arises as to whether this provision establishes the Commission’s lack of discretion in all cases. The most compelling view is that the Commission may accept the provider’s arguments if it recognises the absence of systemic risks despite their insufficient substantiation. The contrary view – that the Commission must reject the provider’s arguments in all cases of insufficient substantiation – is defended by some scholars,364 but does not appear entirely convincing, as there is no apparent reason that would require the erroneous classification of a GPAI model as presenting systemic risk in such cases.

107The AI Act does not specify whether the Commission can also issue a conditional acceptance decision in certain cases – that is, a decision accepting the provider’s arguments against systemic risk classification of its model while simultaneously laying down obligations to enable the Commission to monitor the emergence of systemic risks. Such conditional acceptance decisions could offer a practical solution to the challenges facing both providers and the Commission arising from the requirement to contest classification together with notification.365 The Commission’s power to make favourable decisions conditional upon certain requirements is generally recognised in areas such as EU state aid and merger control law.366 In the context of systemic risk classification, the wording of Article 52(3) (‘shall reject’) might argue against the Commission’s power to issue conditional acceptance decisions where not all requirements for an unconditional acceptance decision are met. This literal interpretation is, however, weakened by the fact that Article 52(2) and (3) do not contain any language concerning acceptance decisions, whether conditional or unconditional, at all. Moreover, combining an acceptance decision with obligations to enable the Commission to monitor the emergence of systemic risks may find support in Articles 89(1) and 91(1), which, under certain conditions, empower the Commission to undertake monitoring actions and request information to ensure the effective implementation of the AI Act and assess provider compliance with it.

2.2.3.2. Legal effects

108Following an interpretation where GPAI models with high-impact capabilities are automatically classified as presenting systemic risk under Article 51(1)(a),367 there are two scenarios with regard to the legal effects of a Commission decision on a provider’s challenge to classification. In the first scenario, the model is already classified as presenting systemic risk at the time of the Commission decision. In this scenario, where the Commission accepts the provider’s arguments, the GPAI model is no longer classified as presenting systemic risk.368 This means that its provider is no longer subject to the specific obligations for providers of GPAI models with systemic risk.369 Where the Commission rejects the provider’s arguments, the GPAI model continues to be classified as presenting systemic risk under Article 51(1)(a).370

109In the second scenario, the model is not yet classified as presenting systemic risk at the time of the Commission decision. For example, this may be the case where the provider notified the Commission before starting model training because it knew that the model would meet the compute threshold under Article 51(2)371 and the Commission promptly decides on the provider’s challenge to classification. Here, the Commission’s acceptance of the provider’s arguments means that the model will not be classified under Article 51(1)(a) once it meets its requirements, whereas a rejection decision likely means that it will be classified once classification requirements are met.372

110The first scenario laid out above raises the question of when the Commission’s decision under Article 52(3) takes effect – more specifically, whether it applies from the decision date onwards (ex nunc) or is backdated to the start of a model’s development (ex tunc).373 The Commission Guidelines favour ex nunc application, stating that providers are released from systemic risk obligations only ‘from the moment when [they are] informed of the acceptance decision.’374 However, there are also arguments supporting ex tunc effect: if the Commission determines that a model does not present systemic risks warranting heightened obligations, imposing sanctions for non-compliance during the review period appears disproportionate and unnecessarily punitive. Additionally, merely prospective application incentivises providers to over-comply with potentially inapplicable obligations, with a chilling effect on the innovation that the AI Act intends to support.375

111According to the Commission Guidelines, an acceptance decision does not protect the provider from later classification of its model as presenting systemic risk.376 The Commission holds that it may reverse its acceptance decision where the information underlying the decision was substantially incomplete, incorrect or misleading, or where there is a substantial change of facts.377 Moreover, the Commission assumes that an acceptance decision does not preclude the designation of the model as a GPAI model with systemic risk based on the Annex XIII criteria.378 Indeed, the AI Act does not preclude the Commission from designating a GPAI model as presenting systemic risk pursuant to Article 52(4)’s first subparagraph in conjunction with Article 51(1)(b) based on these criteria where the requirements for designation are met.

2.2.3.3. Timing

112Article 52(3) does not state a time limit for the Commission’s decision. However, under Article 41(1) of the Charter, providers are entitled to have their classification challenges handled within a reasonable time.379 The Commission decision can be challenged pursuant to Article 263(4) TFEU.380 The view advanced by some authors that a Commission decision in the context of Article 52(2) and (3) cannot be challenged in isolation from a Commission decision designating the model as presenting systemic risk381 does not appear convincing, as classification under Article 51(1)(a) operates automatically and thus does not require a Commission designation decision.382

2.2.3.4. Classification pending decision

113As classification under Article 51(1)(a) operates automatically,383 the question arises whether a provider challenging its model’s classification must comply with the obligations that may follow from the classification384 while the Commission’s decision is still pending.385 This seems to be the case,386 as Article 52’s second and third paragraphs do not expressly provide for a suspensive effect of a challenge to classification.387 This supports the view that, until the Commission’s decision, the principle established by Article 51(1)(a) and (2) continues to apply: a GPAI model with actual or presumed high-impact capabilities is classified as a GPAI model with systemic risk, and the specific obligations for providers remain applicable. This ensures that systemic risks potentially stemming from a model, some of which may require mitigation early in the model’s lifecycle,388 are properly addressed, while simultaneously preventing providers from contesting classification for the sole purpose of delaying their compliance with the specific obligations for GPAI models with systemic risk.

2.3. Article 52(4): Commission designation and delegated acts

114Article 52(4) consists of two subparagraphs. The first subparagraph establishes the Commission’s power to designate GPAI models as presenting systemic risks on the basis of the criteria set out in Annex XIII.389 The second subparagraph establishes the Commission’s power to amend this annex.390

2.3.1. Article 52(4), first subparagraph: Commission designation

115Article 52(4)’s first subparagraph is one of the two provisions under Article 52 establishing the Commission’s power to designate GPAI models as presenting systemic risks, with the other one being Article 52(1)’s third sentence.391 It allows the Commission to designate a GPAI model as presenting systemic risks based on the criteria set out in Annex XIII.392 While the first subparagraph of Article 52(4) contains no express reference to Article 51(1)(b), compelling reasons – including the shared reference to Annex XIII – indicate that a designation decision under Article 52(4)’s first subparagraph constitutes the Commission decision referred to in Article 51(1)(b),393 as explored in more detail above.394

116The designation decision can be taken ex officio or following a qualified alert from the scientific panel.395 With regard to the latter, it is unclear why Article 52(4)’s first subparagraph specifically references Article 90(1)(a) rather than Article 90(1)(b) or Article 90(1) in its entirety.396 Article 90(1)(a) concerns qualified alerts where a GPAI model ‘poses concrete identifiable risk at Union level’, while Article 90(1)(b) addresses qualified alerts where a GPAI model ‘meets the conditions referred to in Article 51’, thereby specifically addressing systemic risk classification. Information shared by providers in structured dialogues or in response to information requests under Article 53(1)(a) and Article 91(1) could also play a role in designation decisions.397

2.3.1.1. Requirements

117Article 52(4)’s first subparagraph establishes the Commission’s power to designate GPAI models as presenting systemic risks, without expressly setting out the conditions under which the Commission is empowered to do so. Its wording (‘as presenting systemic risks’) might suggest that designation requires a model to exhibit risks that meet the systemic risk definition under Article 3(65). However, that wording describes the effect of a designation; it does not make the actual presence of systemic risks a precondition for it. More convincingly, designation under Article 52(4)’s first subparagraph comes with the substantive requirements for classification set out in Article 51(1)(b), thereby requiring that a model has capabilities or an impact equivalent to those set out in Article 51(1)(a), that is, high-impact capabilities.398

118This interpretation, shared by the Commission Guidelines,399 also finds support in Article 52(4)’s first subparagraph, which requires the Commission to take designation decisions ‘on the basis of criteria set out in Annex XIII’. While, in principle, the same set of criteria could be taken into account for various stricter or broader requirements, Annex XIII by its wording specifically contains criteria ‘[f]or the purpose of determining that a general-purpose AI model has capabilities or an impact equivalent to those set out in Article 51(1), point (a)’. Therefore, Article 52(4)’s first subparagraph’s reference to Annex XIII implies that the standard set out in this annex applies to designation under this provision – in particular since Article 52(4)’s first subparagraph does not expressly set out the conditions for designation itself. The interpretation that designation under Article 52(4)’s first subparagraph mirrors Article 51(1)(b)’s substantive requirements is further supported by the substantial textual alignment between the two provisions and their conflation in the recitals, as laid out above.400 The point at which a GPAI model possesses capabilities or an impact equivalent to high-impact capabilities, as well as the meaning of the criteria set out in Annex XIII, is discussed elsewhere.401

119Article 52(4)’s first subparagraph does not set out any further procedural requirements for designation. As with designation under Article 52(1)’s third sentence, the procedural rights laid down in Article 18 MSR and the right to good administration under Article 41(1) and (2) of the Charter – which have been discussed above – apply to a provider facing designation of its model under Article 52(4)’s first subparagraph.402 In particular, the provider has the right to be heard under Article 18(3) MSR.403

2.3.1.2. Consequences

120A model’s designation under Article 52(4)’s first subparagraph means that it is classified as a GPAI model with systemic risk.404 Therefore, the AI Act’s specific provisions for such models – in particular, the obligations for providers of GPAI models with systemic risk under Article 55(1) – apply.405 The designation is constitutive of classification under Article 51(1)(b), as indicated by the provision’s wording (‘based on a decision of the Commission’).406 The Commission’s designation decision becomes effective once the provider is informed of it,407 and is amenable to judicial review under Article 263 TFEU.408 It is further subject to reassessment pursuant to Article 52(5) on the basis of new reasons that have arisen since the designation decision.409

2.3.2. Article 52(4), second subparagraph: Delegated acts

121The second subparagraph of Article 52(4) is one of two provisions in Section 1 of Chapter V establishing the Commission’s power to adopt delegated acts concerning the AI Act’s systemic risk classification framework, with the other being Article 51(3).410 Based on Article 290(1) TFEU,411 that subparagraph empowers the Commission to amend Annex XIII by ‘specifying’ and ‘updating’ the criteria it contains. Disagreement exists regarding the scope of this delegation of power. Some argue for a narrow interpretation based on the provision’s wording.412 On this view, specification would only cover a more precise description of existing criteria but no substantive change, while an update would cover only an adjustment of criteria for the purpose of reflecting societal or technological advancement.413 Accordingly, the Commission would not be allowed to introduce new criteria.414 In stark contrast, it has also been argued that Article 52(4)’s second subparagraph allows the Commission to both detail existing criteria and introduce new ones, granting the Commission a ‘wide ranging license’ to determine the criteria for systemic risk designation.415

122The wording of Article 52(4)’s second subparagraph weighs against the narrow interpretation, given that it empowers the Commission ‘to amend Annex XIII’ (emphasis added). Under Article 290(1) TFEU, the power to amend a legislative act constitutes, alongside the power to supplement, one of two distinct categories of delegated powers.416 In general, a power to ‘amend’ a legislative act aims to ‘authorise the Commission to modify or repeal non-essential elements’ of an act, whereas a power to ‘supplement’ a legislative act aims to ‘authorise the Commission to flesh out that act’.417 Unlike a power to supplement, which must be exercised ‘in compliance with the entirety of the legislative act’, a power to amend is not subject to this constraint, since the Commission ‘is not required to act in compliance with the elements that the authority conferred on it aims precisely to “amend”’.418 The use of ‘amend’ in Article 52(4)’s second subparagraph therefore suggests that the Commission may add or remove Annex XIII criteria, not merely elaborate existing ones.

123That the Commission is empowered to amend Annex XIII ‘by specifying and updating’ its criteria does not compel a different interpretation.419 In particular, ‘updating’ is a vague notion that can, in principle, encompass the inclusion of new criteria.420 That Article 52(4)’s second subparagraph refers syntactically to the ‘criteria set out in that Annex’, rather than directly to Annex XIII itself, does not dictate a different conclusion either, since this formulation can encompass both individual criteria and the criteria collectively.421

124Moreover, the original version of Annex XIII itself supports a broad interpretation of Article 52(4)’s delegation of power. It contains several broadly framed criteria, some of which include non-exhaustive lists of examples.422 Point (e) of Annex XIII exemplifies this approach, requiring the Commission to take into account ‘the benchmarks and evaluations of capabilities of the model, including considering the number of tasks without additional training, adaptability to learn new, distinct tasks, its level of autonomy and scalability, the tools it has access to’. Such broad and open-ended criteria suggest that the legislature sought to ensure consideration of diverse factors through Annex XIII rather than to confine the Commission to a closed list of criteria. Further, a convincing argument for a broad scope of Article 52(4)’s delegation of power emerges where Annex XIII is interpreted as providing merely a non-exhaustive list of criteria that the Commission can take into account when deciding whether to designate a GPAI model as presenting systemic risk.423 As this would imply that the Commission may take additional criteria into account for designation, it would appear natural that the Commission should also be able to amend Annex XIII to include such criteria.

125The power to adopt delegated acts under Article 52(4) is conferred on the Commission subject to the conditions laid down in Article 97.424 The delegation runs for five years from 1 August 2024, with tacit extension for periods of an identical duration absent opposition by the European Parliament and the Council no later than three months before the end of each period.425 Both institutions retain the power to revoke the delegation at any time.426 Before adopting any delegated act, the Commission must consult experts designated by the Member States in line with the principles established in the Interinstitutional Agreement on Better Law-Making of 13 April 2016.427 Once adopted, the delegated act must be notified to the European Parliament and the Council,428 which then have three months (extendable to six months) to raise objections that would prevent the act from taking effect.429

2.4. Article 52(5): Procedure for contesting designation

126Article 52(5)’s first sentence establishes the provider’s right to request reassessment of its model’s designation as presenting systemic risk. Whether it only applies to models that have been designated under Article 52(4)’s first subparagraph or to all models classified as presenting systemic risk is unclear. As will be examined below, the stronger arguments appear to oppose an analogous application of Article 52(5) beyond its literal wording.430 The second to fourth sentences of Article 52(5) establish requirements for the reasoning and frequency of such requests.431

127Neither Article 52(5) nor other provisions of the AI Act expressly empower the Commission to reassess the classification of models on its own initiative.432 Some authors have argued that, despite this omission, the Commission may initiate reassessment procedures433 and may even be required to do so in the event of an increase in the training compute threshold under Article 51(2).434 According to this view, the Commission’s power to initiate reassessment is implied from its power to conduct evaluations under Article 92, its duty to maintain an updated list of GPAI models with systemic risk under Article 52(6) and the fact that a model’s reassessment represents a continuation of initial classification, which may occur at the Commission’s initiative as well.435 Moreover, it has been advanced that the absence of Commission-initiated reassessment would risk perpetuating a model’s legally defective classification where the compute threshold is raised.436 Indeed, it is difficult to see why the Commission should be precluded from declassifying a model where it would no longer meet the criteria for original classification.
Given that a model’s classification as presenting systemic risk does not entail any corresponding rights for its provider,437 a provider cannot insist on maintaining a systemic risk status for its model.438 While it is conceivable that a provider may have a commercial interest in maintaining the classification status of its model – for instance, to advertise that the model remains among the most advanced439 – such commercial interests do not appear to be protected.440 Moreover, the AI Act’s definition of systemic risk as being specific to capabilities that match or exceed the capabilities of the most advanced GPAI models441 could imply that obligations to assess and mitigate such risks should apply to only a limited number of providers of GPAI models with systemic risk at a time.442 Indeed, the Safety and Security Chapter443 of the Code of Practice was drafted assuming no more than fifteen providers would be subject to the obligations for GPAI models with systemic risk at a time.444 Commission-initiated reassessment could be anchored either in Article 52(6)’s duty to update the list of GPAI models with systemic risk, which could be interpreted as including an implied power to reassess a model’s classification, or in an analogous application of Article 52(5).445 Both approaches appear problematic, however, given that Article 52(6)’s duty is unlikely to require the Commission to conduct substantive reassessment446 and the scope of Article 52(5) – on the interpretation advanced below – extends only to models that have been designated under Article 52(4)’s first subparagraph.447

2.4.1. Scope of Article 52(5)

128Uncertainty regarding the scope of reassessment under Article 52(5) arises from the seemingly incomplete nature of the framework for reassessment of systemic risk classification. While Article 52 contains two designation provisions448 and Article 51(1)(a) further establishes automatic classification of GPAI models with high-impact capabilities as presenting systemic risk,449 there is only one provision for reassessment of classification.450 By its wording, this provision applies only where a GPAI model has been designated as presenting systemic risk pursuant to Article 52(4)’s first subparagraph.451 This raises the question of whether it can be applied analogously to the other cases of classification, that is, where a GPAI model has been classified under Article 51(1)(a) and where a GPAI model has been designated pursuant to Article 52(1)’s third sentence. While scholars are divided on this question,452 the Commission Guidelines appear to interpret Article 52(5) narrowly as applying only to models designated under Article 52(4)’s first subparagraph, without, however, expressly excluding a wider scope of application.453 Even though weighty arguments can be advanced for an analogous application of Article 52(5), a literal interpretation of its scope ultimately appears more convincing.

129In favour of analogy, the conceivable grounds for reassessment – such as a change in the factual circumstances underlying classification or a change in the legal standards, particularly following an amendment of the training compute threshold under Article 51(2)454 – are not specific to designation under Article 52(4)’s first subparagraph but apply in all cases of classification. Additionally, Recital 111 draws no distinction between designation under Article 52(1)’s third sentence and designation under Article 52(4)’s first subparagraph regarding reassessment, merely stating that ‘[u]pon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk, the Commission should take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks.’455 This may suggest equal treatment of designation pursuant to Article 52(1)’s third sentence and designation pursuant to Article 52(4)’s first subparagraph with regard to reassessment.456 These arguments support an analogous application of Article 52(5) beyond its literal wording to the extent that no other provision of the AI Act provides for declassification or reassessment in such cases.457 Moreover, if a provider could request reassessment only for designation under Article 52(4)’s first subparagraph and not for designation under Article 52(1)’s third sentence, the Commission could effectively determine the availability of reassessment requests simply by choosing the legal basis for designation where the requirements of both provisions are met. It is doubtful whether, and for what reason, the legislature intended to confer this discretionary power upon the Commission.

130However, there are also arguments against an analogous application of Article 52(5). The wording of Article 52(5) arguably precludes its analogous application altogether. While it is not impossible that the addition of ‘pursuant to paragraph 4’ was a drafting oversight,458 that remains a largely speculative assumption. Assuming that Article 52(5) contains no drafting oversight, the addition of ‘pursuant to paragraph 4’ can only serve the purpose of excluding the applicability of Article 52(5) in cases of designation under Article 52(1)’s third sentence, as it does not add to the meaning of Article 52(5)’s first sentence in any other discernible way.459

131In addition, there are substantive arguments in favour of applying certain elements of Article 52(5) only to designation under Article 52(4)’s first subparagraph and not to designation under Article 52(1)’s third sentence. These arguments imply that – even if one accepts the general arguments in favour of the possibility of declassification or reassessment in cases of designation under Article 52(1)’s third sentence – a basis for reassessment in such cases cannot be found in Article 52(5)’s analogous application.460 Specifically, a provider’s right to request reassessment six months after the first designation decision and subsequently after every decision maintaining the designation461 may be particularly justified for designation under Article 52(4)’s first subparagraph. This is because designation under Article 52(4)’s first subparagraph in some respects comes with a lower substantive threshold than designation under Article 52(1)’s third sentence, if one accepts the argument that designation under Article 52(4)’s first subparagraph is linked to the condition under Article 51(1)(b)462 and that this condition does not require the GPAI model to have high-impact capabilities for classification.463 Moreover, designation under Article 52(4)’s first subparagraph is not necessarily based on the model’s capabilities alone but rather on ‘an overall assessment of the criteria for the designation of a general-purpose AI model with systemic risk set out in [Annex XIII]’.464 The higher number of criteria playing a role in designation under Article 52(4)’s first subparagraph may be an argument for more frequent reassessments, as a larger set of criteria could give rise to more diverse grounds justifying reassessment.

132Furthermore, Article 52(5) does not readily provide an appropriate legal standard for reassessment of models classified under Article 51(1)(a) or designated under Article 52(1)’s third sentence. As will be discussed below, Article 52(5) in its direct application employs the same substantive standard as Article 51(1)(b) in conjunction with Article 52(4)’s first subparagraph – namely whether the model has capabilities or an impact equivalent to high-impact capabilities – while also reflecting changes in legal standards.465 This standard, however, corresponds to the designation pathway under Article 52(4)’s first subparagraph and does not fully align with the classification criteria applicable under Article 51(1)(a) and Article 52(1)’s third sentence. Analogous application would thus require not only extending the scope of Article 52(5) but also determining an appropriate adapted standard for reassessment,466 which further undermines the case for analogy.

133Overall, while there are arguments supporting an analogous application of Article 52(5), the stronger arguments appear to oppose such an analogy. In light of the unclear evidence as to whether the addition of ‘pursuant to paragraph 4’ in Article 52(5)’s first sentence was intentional or a drafting oversight, there is a strong argument in favour of treating the wording of Article 52(5)’s first sentence as binding and therefore not applying it to cases where a GPAI model has been classified under Article 51(1)(a) or has been designated as presenting systemic risk pursuant to Article 52(1)’s third sentence.

134It should be noted that, if Article 52(5) were to be applied analogously to designation under Article 52(1)’s third sentence, this could be an argument to extend the analogy to cases of automatic classification under Article 51(1)(a). To conclude otherwise would give providers of GPAI models with high-impact capabilities an incentive to breach their notification obligation under Article 52(1)’s first sentence in order to obtain a designation under Article 52(1)’s third sentence,467 which would allow them to periodically request reassessment later on.468

2.4.2. Reassessment request
2.4.2.1. Timing

135A provider may request reassessment at the earliest six months after the designation decision.469 Where the Commission, following its reassessment, decides to maintain the designation as a GPAI model with systemic risk, providers may request reassessment at the earliest six months after that decision.470 The AI Act does not provide for an exception from these timing requirements in case of an update of the relevant criteria for systemic risk classification.471

2.4.2.2. Reasons

136Article 52(5)’s third sentence requires that a request contain new, objective and detailed reasons.472 For the first request following designation, a reason is new when it has arisen since the designation decision.473 For subsequent requests, new reasons should be understood as those that have arisen since the last reassessment request.474 These reasons include both a change in the factual circumstances underlying the designation and a change in the legal standards, particularly with regard to the criteria contained in Annex XIII,475 which may be amended by the Commission via delegated acts.476 Changes in the factual circumstances underlying the designation could include, for example, a decline in the number of registered users477 or new evaluations of the model’s capabilities.478

137A borderline case between a change in circumstances and a change in the legal standard arises when more advanced GPAI models are developed and placed on the market. The emergence of such models may prompt the Commission to amend the training compute threshold under Article 51(2) and the criteria contained in Annex XIII.479 However, it could also constitute grounds for reassessment in itself given the dynamic nature of the concept of high-impact capabilities, which Article 3(64) defines as capabilities that match or exceed the capabilities recorded in the most advanced GPAI models.480 The emergence of new GPAI models can therefore raise the threshold of what constitutes high-impact capabilities, with the consequence that models which previously met this threshold may subsequently fall outside it. This has implications not only for Article 51(1)(a)’s classification condition, which directly requires high-impact capabilities, but also for Article 51(1)(b)’s classification condition, which requires a model to have capabilities or an impact equivalent to high-impact capabilities.

2.4.3. Commission decision

138The Commission must take changes in the legal standards – particularly with regard to the criteria contained in Annex XIII481 – into account in its decision on a reassessment request.482 For example, if the Commission raised the threshold in point (f) of Annex XIII from 10,000 to 20,000 business users and a provider subsequently requested reassessment, the new threshold would apply.483 For Annex XIII’s criteria, this follows from the fact that the reference in the first sentence of Article 52(5) to the ‘criteria set out in Annex XIII’ implies that the currently applicable version of the annex applies. More generally, it appears reasonable to assume that the legislature’s intent to design the classification rules under Articles 51 and 52 as responsive to technological developments484 extends to the rules for reassessment as well. Additionally, it would be difficult to justify the application of outdated legal standards for reassessment of GPAI models where a different set of rules would be applicable to the classification of new models.

139Where Article 52(5) is applied directly – that is, in cases of GPAI models that have been designated as presenting systemic risk under Article 52(4)’s first subparagraph in conjunction with Article 51(1)(b) – the question arises as to whether, apart from the changes in legal standards that need to be taken into account as just outlined, fundamentally the same legal standard applies. In other words, the question is whether both initial designation decisions under Article 52(4)’s first subparagraph and reassessment decisions under Article 52(5) apply the standard of whether the GPAI model has capabilities or an impact equivalent to high-impact capabilities.485

140Article 52(5) does not expressly address this question. Regarding the substantive standard for reassessment, it expressly references neither Article 51(1)(a) nor Article 51(1)(b) but sets out that the Commission decides whether the GPAI model ‘can still be considered to present systemic risks’. This could be read as implying that the relevant test for the Commission is whether the model presents risks that meet the definition of systemic risk under Article 3(65).486 However, this textual argument appears weak, as assessing whether a GPAI model ‘can still be considered to present systemic risks’ can be distinguished from assessing whether it ‘still presents systemic risks’.487 The phrase ‘can still be considered to present systemic risks’ refers to the result of the assessment – whether the model can still be considered to be a GPAI model with systemic risk488 – rather than articulating the applicable standard.

141Despite the lack of an express reference, the wording of Article 52(5)’s first sentence indicates that the Commission must – subject to the qualification noted above concerning changes in legal standards – determine for the reassessment whether the GPAI model has capabilities or an impact equivalent to high-impact capabilities, just as for initial designation under Article 52(4)’s first subparagraph.489 This is because both Article 52(4)’s first subparagraph and Article 52(5)’s first sentence set out that the original designation decision and the reassessment decision must be taken ‘on the basis of [the] criteria set out in Annex XIII’. This requirement, together with the use of the term ‘reassess’ in Article 52(5)’s first sentence, implies some connection with the original assessment. This is reinforced by the fact that Annex XIII, by its wording, contains criteria for the purpose of determining that a GPAI model has ‘capabilities or an impact equivalent to those set out in Article 51(1), point (a)’.

142Furthermore, applying the same legal standard provides a convincing explanation for the provider’s right to periodically request reassessment under Article 52(5)’s fourth sentence.490 On the view taken here, reassessment under Article 52(5) is only available for models that have been designated under the designation pathway established by Article 52(4)’s first subparagraph in conjunction with Article 51(1)(b).491 If a different substantive standard were to apply for reassessment – in particular, whether the model presents risks that fall under the definition of systemic risk under Article 3(65) – the question would arise as to why a renewed request would be permissible following rejection of a reassessment request, whereas for GPAI models where the Commission has rejected the provider’s challenge to classification under Article 52(2) and (3), reassessment is not available.492 Both procedures would culminate in a Commission decision applying similar substantive tests that assess whether systemic risks are present in the model.493 Applying the same standard for both designation under Article 52(4)’s first subparagraph and reassessment under Article 52(5) avoids this inconsistency, as the justification for periodic reassessment based on Article 52(4)’s first subparagraph’s substantive standard for designation would persist following the first reassessment request.

143As laid out above, the more convincing arguments speak against extending Article 52(5)’s scope to models that have been automatically classified under Article 51(1)(a) or designated under Article 52(1)’s third sentence.494 However, if Article 52(5) were applied analogously in these cases, the question would arise as to whether the analogy also extends to the applicable legal standard. Put differently, the question is whether the Commission would apply the same legal standard as for models that have been designated as presenting systemic risk under Article 52(4)’s first subparagraph or whether the applicable legal standard would reflect the difference in original classification criteria. The latter interpretation could find support in the notion of reassessment itself, which implies – notwithstanding changes in the legal standards themselves, which, as noted above, need to be taken into account – continuity in the applicable legal standard. Moreover, from a practical perspective, it would be burdensome if the provider and the Commission had to apply two different standards for the assessment of classification within, in extreme cases, merely six months. On the other hand, if reassessment under Article 52(5) were available to all models, it would also be worth considering whether to apply the same legal standard across all cases.

144It has been pointed out that the wording of Article 52(5)’s first sentence (‘may decide to reassess’) suggests that it is up to the Commission to decide whether to initiate a reassessment following a provider’s request.495 However, compelling arguments support the view that the Commission may not remain inactive upon receiving such a request.496 This interpretation is supported by the fact that Article 52(5)’s first sentence also sets out that the Commission ‘shall take the request into account’, which implies a duty to engage substantively with the arguments brought forward by the provider. Moreover, the requirements that Article 52(5)’s second to fourth sentences impose on a reassessment request appear justified precisely because of this duty. Were the Commission permitted to disregard a request at its discretion, there would be no need to limit the frequency of such requests. Accordingly, the discretionary element reflected in the wording of Article 52(5)’s first sentence does not concern whether the Commission must act, but rather suggests that the Commission has discretion in its assessment of whether the model still has capabilities or an impact equivalent to high-impact capabilities, just as with the initial designation decision under Article 52(4)’s first subparagraph in conjunction with Article 51(1)(b).497

145A provider can challenge the reassessment decision pursuant to Article 263(4) TFEU498 or request further reassessment six months after the reassessment decision on the basis of new reasons.499

2.5. Article 52(6): List of GPAI models with systemic risk

146Article 52(6) obliges the Commission to publish a list of GPAI models with systemic risk and to keep that list up to date.500 The list includes both models that are automatically classified under Article 51(1)(a) because of their (actual or presumed) high-impact capabilities and models which have otherwise been designated as presenting systemic risk.501 The recitals do not clarify the purpose of this published list, which some authors regard as an information source for affected persons and for civil society at large.502

147The Commission’s obligation to publish the list is ‘without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law’.503 It has been pointed out that having regard to the various Member State laws protecting such information could pose considerable difficulties for the Commission.504

148The list will contain information necessary to identify the GPAI model, such as its name and the name of the respective provider.505 Whether a provider can prevent its model from being featured on the list by arguing that its existence represents confidential business information or a trade secret appears questionable. In any case, the provision does not provide a legal basis for the further publication of information about the GPAI model with systemic risk that is protected by intellectual property rights or concerns confidential business information or trade secrets.506 However, Article 53 may provide a legal basis for certain interested parties to request such information. Providers of AI systems who intend to integrate a GPAI model – which does not need to be classified as presenting systemic risk – into their system have the right to request information and documentation of the GPAI model under Article 53(1)(b) in conjunction with Annex XII.507

149In contrast to the parallel provision in Article 33(6) DSA – which provides for the publication of a list of designated very large online platforms (“VLOPs”) and very large online search engines (“VLOSEs”) by the Commission – Article 52(6) does not prescribe that the list must be published in the Official Journal of the European Union. The Commission would therefore appear to have discretion regarding the location, format and manner of publication.508

150The AI Act does not specify the scope of the Commission’s duty to keep the list up to date under Article 52(6). It seems clear that this duty encompasses the obligation to add to the list models that have been designated as presenting systemic risk and to remove from the list models whose providers have successfully requested reassessment of their designation under Article 52(5). With regard to models with high-impact capabilities that have been classified under Article 51(1)(a),509 it can also be assumed that the Commission is obliged to add these to the list where the provider has notified the Commission without contesting classification pursuant to Article 52(2), and where the provider has contested classification but the Commission has rejected the provider’s arguments pursuant to Article 52(3).

151However, it remains uncertain whether the Commission can or must add GPAI models of which it has been notified pursuant to Article 52(1)’s first sentence to the list where the provider has contested classification and the decision under Article 52(3) is still pending. This could be possible if one assumes that models with high-impact capabilities are automatically classified under Article 51(1)(a)510 and that the provider’s submission of arguments pursuant to Article 52(2) has no suspensive effect.511 However, it appears questionable whether it is justified to temporarily include the model in the published list in cases where the Commission later accepts the provider’s arguments against classification. Including the model with a note that a decision on classification is still pending could in such cases represent a compromise between the provider’s interests and the public interest in learning about GPAI models that (potentially) present systemic risk.

152Article 52(6) does not appear to provide a legal basis for the Commission to reassess the classification of GPAI models contained in the list.512 While an interpretation of Article 52(6) that allows the Commission to reassess a GPAI model’s designation or classification – for example, when the Commission raises the compute threshold under Article 51(2) – could arguably be considered part of the Commission’s obligation to ‘keep that list up to date’ and would therefore not automatically exceed the wording of Article 52(6), it does not seem to correspond to the legislature’s intent. Even though no other provision expressly provides for Commission-initiated reassessment of a model’s classification as presenting systemic risk,513 there is no indication that the legislature intended the Commission’s duty to update the list under Article 52(6) to be understood in such a broad sense. This narrower understanding of the provision is, in any case, supported by a comparison with the parallel provisions concerning the designation of VLOPs and VLOSEs under the DSA and the designation of gatekeepers under the DMA.514

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L 1689/1 (“AI Act”). ↩︎
  2. See commentary on Article 51, Section 1.1. in this work. ↩︎
  3. See Claudio Novelli and others, ‘A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities’ (2025) 16 European Journal of Risk Regulation 566, 572; David Bomhard and Jonas Siglmüller, ‘AI Act – das Trilogergebnis’ (2024) Recht Digital 45 para 29; Mario Martini, ‘§ 3. Risikobasierter Ansatz’ in Eric Hilgendorf and David Roth-Isigkeit (eds), Die neue Verordnung der EU zur Künstlichen Intelligenz (2nd edn, C H Beck 2025) para 190; for a critique of this tiered approach, see Sandra Wachter, ‘Loopholes in EU AI Regulation’ (2024) 26 Yale Journal of Law & Technology 671, 697. ↩︎
  4. See AI Act, art 53(1). However, article 53(2) provides a partial exemption from these obligations for providers of certain free and open-source models (see commentary on Article 53 paras 110–114 in this work). ↩︎
  5. See AI Act, art 55(1). See Adrian Schneider and Leonie Schneider, ‘Art. 51 Einstufung von KI-Modellen mit allgemeinem Verwendungszweck als KI-Modelle mit allgemeinem Verwendungszweck mit systemischem Risiko’ in David Bomhard, Fritz-Ulli Pieper and Susanne Wende (eds), Kommentar KI-VO: Verordnung über Künstliche Intelligenz (Fachmedien Recht und Wirtschaft 2025) para 1; Martini (n 3) para 190; Gregory Smith and others, ‘General-Purpose Artificial Intelligence (GPAI) Models and GPAI Models with Systemic Risk: Classification and Requirements for Providers’ (RAND, 2024) <https://www.rand.org/pubs/research_reports/RRA3243-1.html> accessed 27 January 2026. ↩︎
  6. Article 3(63) defines a GPAI model as ‘an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market’. For an analysis of this definition, see forthcoming commentary on Article 3(63) in this work. ↩︎
  7. Clemens Bernsteiner and Thomas Rainer Schmitt, ‘Art. 51 Einstufung von KI-Modellen mit allgemeinem Verwendungszweck als KI-Modelle mit allgemeinem Verwendungszweck mit systemischem Risiko’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (2nd edn, C H Beck 2026) para 5; Jason Hofmann-Coombe, ‘§ 7. KI-Modelle mit allgemeinem Verwendungszweck’ in Eric Hilgendorf and David Roth-Isigkeit (eds), Die neue Verordnung der EU zur Künstlichen Intelligenz (2nd edn, C H Beck 2025) para 9. Article 3(66) defines a GPAI system as ‘an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems’. However, article 52 continues to apply to the GPAI model even after its integration into an AI system (see AI Act, recital 97, ninth sentence). ↩︎
  8. Tobias Haar and Jonas Siglmüller, ‘Art. 51 Einstufung von KI-Modellen mit allgemeinem Verwendungszweck als KI-Modelle mit allgemeinem Verwendungszweck mit systemischem Risiko’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (4th edn, C H Beck 2025) para 17; see also commentary on Article 51, Section 1.1. in this work. ↩︎
  9. See commentary on Article 51, Section 2.1. in this work. ↩︎
  10. For the role of the wording of an article’s title in the interpretation of operative provisions, see Case C-311/18 Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems (Schrems II) [2020] ECLI:EU:C:2020:559 para 92; see also Case C‑291/13 Sotiris Papasavvas v O Fileleftheros Dimosia Etaireia Ltd and Others [2014] ECLI:EU:C:2014:2209 para 39 with regard to a section title. ↩︎
  11. See Eric Hilgendorf and Johannes Härtlein, ‘Art. 52 Verfahren’ in Eric Hilgendorf and Johannes Härtlein (eds), KI-VO: Verordnung über künstliche Intelligenz (Nomos 2025) para 1; see also commentary on Article 51, Section 1.1. in this work. ↩︎
  12. See Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 1; further, see Section 2.2.2.; see also commentary on Article 51, Section 1.1. in this work. ↩︎
  13. AI Act, art 52(1), first and second sentences. ↩︎
  14. AI Act, art 52(1), third sentence, and 52(4), first subparagraph. ↩︎
  15. AI Act, art 52(2) and (3). ↩︎
  16. AI Act, art 52(5). ↩︎
  17. Article 52’s first and second paragraphs expressly reference article 51, whereas article 52’s third to fourth paragraphs do not expressly reference article 51. ↩︎
  18. For classification under article 51(1)(a), which is based on a model’s high-impact capabilities, see commentary on Article 51, Section 2.1.1. in this work. ↩︎
  19. For classification under article 51(1)(b), which is based on a model having capabilities or an impact equivalent to high-impact capabilities, see commentary on Article 51, Section 2.1.2. in this work. ↩︎
  20. As discussed below, both designation under article 52(4)’s first subparagraph (see Section 2.1.3.1. and Section 2.3.1.) and reassessment under article 52(5) (see Section 2.4.1.) should be interpreted as relating to article 51(1)(b). ↩︎
  21. For a discussion of these designation provisions, see Section 2.1.3. and Section 2.3.1. respectively. ↩︎
  22. See Section 2.1.3.1. ↩︎
  23. Haar and Siglmüller, ‘Art. 51’ (n 8) para 17; Hofmann-Coombe (n 7) para 52. ↩︎
  24. See Haar and Siglmüller, ‘Art. 51’ (n 8) para 18. ↩︎
  25. Haar and Siglmüller, ‘Art. 51’ (n 8) para 18; for classification of GPAI models with high-impact capabilities, see commentary on Article 51, Section 2.1.1. in this work. ↩︎
  26. Haar and Siglmüller, ‘Art. 51’ (n 8) para 18; see Section 2.3.1.1. and Section 2.4.1. ↩︎
  27. See Section 2.1.1.2. ↩︎
  28. See Section 2.1.3. ↩︎
  29. See Section 2.3.1. ↩︎
  30. See Section 2.3.1.1. ↩︎
  31. See Section 2.3.2. ↩︎
  32. See Section 2.4.1. ↩︎
  33. The AI Act defines high-impact capabilities under article 3(64) and contains a presumption of high-impact capabilities under article 51(2). Both are relevant in the context of the notification obligation (see Section 2.1.1.2.). ↩︎
  34. See Section 2.1.1.; for the content of the notification, see Section 2.1.2. ↩︎
  35. See Section 2.1.3. ↩︎
  36. See Section 2.1.3.2. ↩︎
  37. See Section 2.1.3.1. For the general need to interpret articles 51 and 52 in context, see commentary on Article 51, Section 1.1. in this work. ↩︎
  38. See further commentary on Article 51, Section 2.1.1. in this work. ↩︎
  39. Providers of GPAI models that meet the substantive requirements for classification under article 51(1)(b) (for a discussion of these requirements see commentary on Article 51, Section 2.1.2.1. in this work), do not face a comparable obligation to notify the Commission. Instead, such models are designated by the Commission as GPAI models with systemic risk either ex officio or following a qualified alert from the scientific panel under article 52(4)’s first subparagraph (see commentary on Article 51, Section 2.1.2.2.). ↩︎
  40. AI Act, recital 112, sixth sentence; see Adrian Schneider and Leonie Schneider, ‘Art. 52 Verfahren’ in David Bomhard, Fritz-Ulli Pieper and Susanne Wende (eds), Kommentar KI-VO: Verordnung über Künstliche Intelligenz (Fachmedien Recht und Wirtschaft 2025) para 6. ↩︎
  41. European Commission, ‘Annex to the Communication to the Commission – Approval of the content of the draft Communication from the Commission – Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act)’ C(2025) 5045 final (“Commission Guidelines”), para 102. ↩︎
  42. For an explanation of “open-source AI” see OECD, ‘AI Openness: A Primer for Policymakers’, (OECD, Artificial Intelligence Papers No. 44, 2025) <https://www.oecd.org/en/publications/ai-openness_02f73362-en.html> accessed 27 January 2026, 12–16. ↩︎
  43. The systematic positioning of recital 112’s seventh sentence suggests that it relates to information contained in a submission of arguments pursuant to article 52(2) and (3). However, its actual content strongly implies that it relates to information contained in the notification pursuant to article 52(1) as well. ↩︎
  44. For an analysis of the different measures under article 93(1), see forthcoming commentary on Article 93, Section 2.1. in this work. ↩︎
  45. For a discussion of this requirement see Section 2.1.2. ↩︎
  46. AI Act, art 53(1)(a) in conjunction with annex XI, s 1, point 2(d); see commentary on Article 53, Section 2.1.1. in this work. ↩︎
  47. See commentary on Article 53, para 57 in this work. ↩︎
  48. For a discussion of the latter, see forthcoming commentary on Article 55 in this work. ↩︎
  49. Commission Guidelines (n 41) para 45; Paul Nemitz, ‘Art. 101 Geldbußen für Anbieter von KI-Modellen mit allgemeinem Verwendungszweck’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (2nd edn, C H Beck 2026) para 19; unclear: Jens Schefzig, ‘Art. 101 Geldbußen für Anbieter von KI-Modellen mit allgemeinem Verwendungszweck’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (4th edn, C H Beck 2025) para 15 who argues that article 101(1)(a) encompasses all substantive obligations for providers and specifically lists articles 53, 54(1)–(2) and 55 but not article 52. ↩︎
  50. AI Act, art 113(3)(b). See Tobias Haar and Jonas Siglmüller, ‘Art. 52 Verfahren’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (4th edn, C H Beck 2026) para 3. ↩︎
  51. See commentary on Article 111, Section 2.1.5.1. in this work. ↩︎
  52. AI Act, art 111(3); see commentary on Article 111, Section 2.1.1. in this work. ↩︎
  53. See commentary on Article 111, Section 2.1.5.1. in this work. ↩︎
  54. The competing interpretations of article 111(3) with regard to the notification obligation are discussed in-depth in commentary on Article 111, Section 2.1.5.1. in this work. ↩︎
  55. See commentary on Article 51, Section 2.2. in this work. ↩︎
  56. AI Act, art 51(2). ↩︎
  57. See Section 2.1.1.2.1.; further see Toby Bond and Shima Abbady, ‘Article 52 Procedure’ in Ceyhun Necati Pehlivan, Nikolaus Forgó and Peggy Valcke (eds), The EU Artificial Intelligence (AI) Act: A Commentary (Wolters Kluwer 2024) 840, s 3.1; Commission Guidelines (n 41) para 32; Haar and Siglmüller, ‘Art. 52’ (n 50) para 14; see AI Act, recital 112, third and fourth sentences: ‘The provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a general-purpose AI model will meet the requirements that lead to the presumption. This is especially relevant in relation to the threshold of floating point operations […].’ ↩︎
  58. AI Act, art 3(3): ‘“provider” means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge’. For a discussion of this definition, see forthcoming commentary on Article 3(3) in this work. ↩︎
  59. The AI Act refers to the ‘relevant provider’ of a GPAI model in article 52(1) and (3) and recital 113’s first sentence and to the ‘relevant provider’ of an AI system in article 80(2), article 83(1) and recital 143’s tenth sentence. ↩︎
  60. See forthcoming chapter on Modifications in this work. ↩︎
  61. See Section 2.1.1.2.4. ↩︎
  62. For a discussion of pre-market placement notification obligations, see Section 2.1.1.2.5. ↩︎
  63. See Bond and Abbady (n 57) 840–841, s 3.1 for an analysis of this tension between pre-market placement notification obligations under article 52(1) and the provider definition under article 3(3). They propose a reading of these provisions where ‘for the notification obligation under Article 52(1) to apply, the developer must have an intention to place the model on the market or put it into service such that they will become a provider of the model in future.’; further, see forthcoming commentary on Article 2 in this work. ↩︎
  64. See Commission Guidelines (n 41) paras 30, 32; Haar and Siglmüller, ‘Art. 52’ (n 50) para 6; Christian Förster and Julia Straburzynski, ‘§ 2 Pflichtenkataloge’ in Christian Förster (ed), Die KI-Verordnung in der Praxis: Rechtliche Grundlagen und Pflichten bei der Anwendung von KI im Unternehmen (C H Beck 2025). ↩︎
  65. See commentary on Article 52, Section 2.2.2. in this work. ↩︎
  66. For the requirements for contesting classification, see Section 2.2.2. ↩︎
  67. See Section 2.2.2.1. ↩︎
  68. See AI Act, art 52(2). ↩︎
  69. See also Haar and Siglmüller, ‘Art. 51’ (n 8) para 39; Clemens Bernsteiner and Thomas Rainer Schmitt, ‘Art. 52 Verfahren’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (2nd edn, C H Beck 2026) para 15. In that respect, the high-impact capabilities presumption under article 51(2) and the systemic risk “presumption” under article 51(1)(a) (see commentary on Article 51, Section 2.1.1., para 14 in this work on the presumptive nature of article 51(1)(a)) operate in the same way. This appears to accord with the legislature’s intent, given that the recitals themselves do not clearly distinguish between them (see AI Act, recital 111, seventh sentence, and recital 112, third sentence). Treating the high-impact capabilities presumption in the same manner as the systemic risk presumption may also be warranted as long as the former serves as the primary trigger for the notification obligation, since the “rebuttal framework” established by articles 51(1)(a) and 52(2) and (3) would otherwise risk being undermined. ↩︎
  70. See Section 2.2.3.1. ↩︎
  71. See AI Act, recital 112, third and fourth sentences: ‘The provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a GPAI model will meet the requirements that lead to the presumption. This is especially relevant in relation to the threshold of floating point operations because training of GPAI models takes considerable planning which includes the upfront allocation of compute resources and, therefore, providers of GPAI models are able to know if their model would meet the threshold before the training is completed.’ ↩︎
  72. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) paras 14–15; Hofmann-Coombe (n 7) para 44; opposing view: Haar and Siglmüller, ‘Art. 52’ (n 50) para 6 who argue that providers would not be able to know when a model has actual high-impact capabilities; recital 112’s third sentence further implies that the notification obligation is not exclusively triggered by the training compute threshold under article 51(2), as it states that the notification obligation is ‘especially relevant in relation to the threshold of floating point operations’ (emphasis added). ↩︎
  73. Haar and Siglmüller, ‘Art. 52’ (n 50) para 6. ↩︎
  74. For a discussion of this definition, see forthcoming commentary on Article 3(64) in this work. ↩︎
  75. See commentary on Article 51, Section 2.1.1.3. in this work. ↩︎
  76. For a proposal of how to assess whether a GPAI model has high-impact capabilities based on principal component analysis (PCA) from a model’s results on a selection of benchmarks, see Marius Hobbhahn, Dirk Hovy and Joaquin Vanschoren, ‘A Proposal to Identify High-Impact Capabilities in General-Purpose AI Models’ (Publications Office of the European Union, JRC143258, 2025) <https://publications.jrc.ec.europa.eu/repository/handle/JRC143258> accessed 27 January 2026. For a discussion of article 51(1)(a)’s requirements regarding assessment instruments, see commentary on Article 51, Section 2.1.1.3. in this work. ↩︎
  77. The Commission Guidelines (n 41) address this question in a footnote to paragraph 32. The paragraph itself states that a notification may be triggered ‘because the model meets or will meet the threshold laid down in Article 51(2) AI Act, or because the model is the result of a modification of a general-purpose AI model with high-impact capabilities that meets or will meet the threshold laid down in [the Commission Guidelines]’. The accompanying footnote clarifies that ‘[i]n the absence of a delegated act amending the thresholds or supplementing the benchmarks and indicators listed in Article 51(1) and (2) AI Act, these two requirements are the only requirements that trigger the obligation to notify the Commission under Article 52(1) AI Act.’; to the same effect: Haar and Siglmüller, ‘Art. 52’ (n 50) para 6. ↩︎
  78. It should be noted that this question is different from the question of whether a provider can be obliged to notify the Commission under article 52(1)’s first sentence of a GPAI model before its placing on the market (for a discussion of this question, see Section 2.1.1.2.5.), since model training may continue after its placing on the market and the training compute threshold may therefore be met after market placement. In practice, however, both questions may arise in parallel. ↩︎
  79. Haar and Siglmüller, ‘Art. 52’ (n 50) para 6. ↩︎
  80. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) paras 14–15; see also Lukas Feiler, Nikolaus Forgó and Michaela Nebel, ‘Article 52’ in Lukas Feiler, Nikolaus Forgó and Michaela Nebel (eds), The EU AI Act: A Commentary (Globe Law and Business 2025) para 4 (‘With regard to the question of when it “becomes known” that a general-purpose AI model fulfils the conditions of Article 51(1)(a), the actual knowledge of the specific provider must be taken into account.’) ↩︎
  81. This interpretation is shared by the Commission Guidelines (n 41) para 31 and Bernsteiner and Schmitt, ‘Art. 52’ (n 69) paras 13–14. ↩︎
  82. See, for example, Cases C-154/21 RW v Österreichische Post AG [2023] ECLI:EU:C:2023:3 para 29, and C-31/17 Cristal Union, the legal successor to Sucrerie de Toury SA v Ministre de l’Économie et des Finances [2018] ECLI:EU:C:2018:168 para 41; further, see Koen Lenaerts and José A. Gutiérrez-Fons, ‘To Say What the Law of the EU Is: Methods of Interpretation and the European Court of Justice’ (2014) 20 Columbia Journal of European Law 3, 17–21. ↩︎
  83. Förster and Straburzynski (n 64) para 347. ↩︎
  84. Alexander Erben and others, ‘Training Compute Thresholds – Key Considerations for the EU AI Act’ (Publications Office of the European Union, JRC143255, 2025) <https://publications.jrc.ec.europa.eu/repository/handle/JRC143255> accessed 27 January 2026, 40–41. ↩︎
  85. ibid 40–41. ↩︎
  86. ibid 40–41. ↩︎
  87. Commission Guidelines (n 41) para 31. ↩︎
  88. See Case T-340/17 Japan Airlines Co. Ltd v European Commission [2022] ECLI:EU:T:2022:181 para 262. ↩︎
  89. See AI Act, art 51(2). ↩︎
  90. In particular, article 52(1)’s first sentence does not expressly limit the notification obligation to cases of ‘actual knowledge’ – a term which the DSA uses and distinguishes from ‘awareness’ of a fact (see Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L 277/1 (“DSA”) recital 53 and arts 5(1)(e), 6(1)(a), 16(3)). ↩︎
  91. See AI Act, recital 112, fourth sentence, and commentary on Article 51, Section 2.2.1. in this work. However, there are certain computational activities such as the training of parent models used in distillation for which it is difficult to determine whether they count towards article 51(2)’s training compute threshold (see commentary on Article 51, Section 2.2.1.2. in this work). ↩︎
  92. AI Act, recital 112, fourth sentence. ↩︎
  93. See AI Act, recital 97, fifth sentence; Commission Guidelines (n 41) paras 23, 60. ↩︎
  94. For a broader discussion of the role of GPAI model modifications under the AI Act, see forthcoming chapter on Modifications in this work; see also Philipp Hacker and Matthias Holweg, ‘The Regulation of Fine-Tuning: Federated Compliance for Modified General-Purpose AI Models’, 60 Computer Law & Security Review (2026) 106234 6–11. ↩︎
  95. See Commission Guidelines (n 41) paras 23, 60–71. ↩︎
  96. Commission Guidelines (n 41) para 23. ↩︎
  97. Commission Guidelines (n 41) para 23. ↩︎
  98. Commission Guidelines (n 41) paras 23, 61–62. ↩︎
  99. Commission Guidelines (n 41) paras 61–62. ↩︎
  100. Commission Guidelines (n 41) para 63. ↩︎
  101. Commission Guidelines (n 41) para 64. ↩︎
  102. Commission Guidelines (n 41) para 64. For the use of a FLOPs threshold for determining whether an AI model is a GPAI model, see Commission Guidelines (n 41) paras 15–21. ↩︎
  103. For these conditions, see Sections 2.1.1.2.1.–2.1.1.2.3. ↩︎
  104. In accordance with the Commission Guidelines (n 41) para 25, fn 5, the large pre-training run is understood as ‘the foundational training run conducted on a large amount of data to build the model’s general capabilities, which may take place after smaller experimental training runs, and which may be followed by fine-tuning for specialisation or other post-training enhancements.’ ↩︎
  105. See Commission Guidelines (n 41) para 60, fn 11: ‘The Commission considers “fine-tuning” to be one way of “modifying” a general-purpose AI model.’ ↩︎
  106. For a discussion of the different “computational activities” that count towards article 51(2)’s training compute threshold, see commentary on Article 51, Section 2.2.1. in this work. ↩︎
  107. Commission Guidelines (n 41) paras 70–71; see also Commission Guidelines (n 41) para 30, fn 6 which states that a provider must notify the Commission where ‘the model is the result of a modification of a general-purpose AI model with high-impact capabilities that meets or will meet the threshold laid down in paragraph 60’. The reference to paragraph 60 should likely be read as a reference to Commission Guidelines (n 41) para 64, as Commission Guidelines (n 41) para 60 does not contain any thresholds. ↩︎
  108. Commission Guidelines (n 41) para 70. ↩︎
  109. Commission Guidelines (n 41) para 71. ↩︎
  110. For these conditions, see Sections 2.1.1.2.1.–2.1.1.2.3. ↩︎
  111. In particular, the Commission Guidelines (n 41) para 62 assume that modifications by a downstream actor can lead to ‘a significant change in the model’s […] systemic risk’. ↩︎
  112. For a discussion of the exclusion of research and development activities from the AI Act’s scope, see forthcoming commentary on Article 2 in this work. ↩︎
  113. See European Commission, ‘General-Purpose AI Models in the AI Act – Questions & Answers’ <https://digital-strategy.ec.europa.eu/en/faqs/general-purpose-ai-models-ai-act-questions-answers> accessed 16 December 2025. ↩︎
  114. See Bond and Abbady (n 57) 840–841, s 3.1; Christiane Wendehorst, ‘Art. 2 Anwendungsbereich’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (2nd edn, C H Beck 2026) paras 114–115. ↩︎
  115. Bond and Abbady (n 57) 840, s 3.1; for notifications in case of prospective high-impact capabilities, see Section 2.1.1.2.3. ↩︎
  116. See commentary on Article 51, Section 2.2.1. in this work for a discussion of the different computational activities that need to be taken into account for article 51(2)’s training compute threshold. ↩︎
  117. Bond and Abbady (n 57) 840, s 3.1; see European Commission, ‘General-Purpose AI Models in the AI Act – Questions & Answers’ (n 113). ↩︎
  118. Bond and Abbady (n 57) 840, s 3.1; see European Commission, ‘General-Purpose AI Models in the AI Act – Questions & Answers’ (n 113). ↩︎
  119. Commission Guidelines (n 41) para 31 (‘In particular, a notification may be required before training is complete, if the provider can reasonably foresee that the requirement that leads to the presumption of the model having high-impact capabilities is reasonably likely to be met’); Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 15; see also Bond and Abbady (n 57) 840–41, s 3.1 who propose a reading of the notification obligation as applying before the market placement of the model in cases where the provider has ‘an intention to place the model on the market or put it into service such that they will become a provider of the model in future’ (emphasis added), while concluding that this reading sits in tension with article 2(8), which, according to their view, implies ‘that the AI Act does not impose obligations prior to placing on the market or putting into service’ of a model; opposing view: Haar and Siglmüller, ‘Art. 52’ (n 50) para 7 who reject a “knowledge-based” notification obligation under the second alternative of article 52(1)’s first sentence altogether based on the provision’s wording but acknowledge that the legislature appears to have afforded this alternative, based on recital 112, the predominant scope of application. ↩︎
  120. European Commission, ‘Code of Practice for General-Purpose AI Models – Safety and Security Chapter’ (2025) <https://ec.europa.eu/newsroom/dae/redirection/document/118119> accessed 27 January 2026, 6. For the role of codes of practice for GPAI model regulation under the AI Act, see commentary on Article 56, Section 1.1. in this work. ↩︎
  121. Where providers only notify the Commission with or after placing the model on the market, fulfilling the second requirement automatically entails fulfilment of the first requirement. ↩︎
  122. See AI Act, art 9(8), first sentence: ‘The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service.’ ↩︎
  123. See Michèle Finck, ‘In Search of the Lost Research Exemption: Reflections on the AI Act’ (2025) 74 GRUR International 903. ↩︎
  124. This is the first interpretation proposed by Finck (n 123) 904 who admits that it sits in tension with article 2(8)’s broad wording. ↩︎
  125. Paul Voigt, ‘Art. 2 Anwendungsbereich’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (4th edn, C H Beck 2025) para 42; see Wendehorst, ‘Art. 2’ (n 114) para 115. ↩︎
  126. This is the second interpretation proposed by Finck (n 123) 904 who admits that it renders article 2(8) redundant (presumably in light of article 2(1)(a) establishing that the AI Act applies to providers placing on the market or putting into service AI systems). ↩︎
  127. See forthcoming commentary on Article 2 in this work. ↩︎
  128. AI Act, art 52(1), first sentence. ↩︎
  129. Erben and others (n 84) 11. Pre-training is an important stage of the training of GPAI models. It has been estimated that it accounted for more than 90% of a model’s training compute at the time of the AI Act’s drafting (Venkat Somala, Anson Ho and Séb Krier, ‘Three Challenges Facing Compute-Based AI Policies’ (Epoch AI, 2025) <https://epoch.ai/gradient-updates/three-issues-undermining-compute-based-ai-policies> accessed 16 December 2025). ↩︎
  130. AI Act, art 26(5) (emphasis added). ↩︎
  131. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 15; however, see Schneider and Schneider, ‘Art. 52’ (n 40) para 7 who argue that the duration of the notification period needs to be determined on a case-by-case basis and may take into account the extent of systemic risk stemming from the model. ↩︎
  132. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 15. ↩︎
  133. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 15; Schneider and Schneider, ‘Art. 52’ (n 40) para 7; see also Section 2.1.1.2.5. for a discussion of pre-market placement notification obligations under article 52(1)’s first sentence. ↩︎
  134. See Section 2.1.3.2. ↩︎
  135. Commission Guidelines (n 41) paras 42, 45; for designation under article 52(4)’s first subparagraph, see Section 2.3.1. ↩︎
  136. AI Act, art 52(2). ↩︎
  137. See Section 2.2.2.3. ↩︎
  138. See Section 2.4.1. ↩︎
  139. See Section 2.4.1. ↩︎
  140. See Section 2.1.2. Pre-notification contacts in which the provider does not yet indicate that its model meets or will meet the classification condition under article 51(1)(a) do not constitute (incomplete) notification (for such pre-notification contacts, see Section 2.2.2.3.). ↩︎
  141. See Section 2.1.3.3.1. ↩︎
  142. See Section 2.2.2.3. ↩︎
  143. It may be objected, with some justification, that a provider who completely fails to notify in breach of its notification obligation also preserves the possibility of contesting classification under article 52(2) (see Section 2.2.2.3.). This objection holds only to a limited extent, however. As just noted, in the absence of notification, the Commission can designate the GPAI model as presenting systemic risk pursuant to article 52(1)’s third sentence, which likewise precludes the provider from submitting arguments pursuant to article 52(2) at a later point in time. Moreover, the view taken here does not necessarily incentivise a provider to fail to notify entirely rather than submit an incomplete notification, since complete failure to notify will typically be a more serious infringement of the notification obligation under article 52(1), which should be reflected in the assessment of a fine under article 101(1)(a). ↩︎
  144. See Commission Guidelines (n 41) para 45, which discusses the notification obligation under Article 52(1) generally but does not expressly distinguish between the first and second sentence of the provision. ↩︎
  145. See Section 2.1.3.2. ↩︎
  146. As both notification and designation under article 52(1) arguably relate to article 51(1)(a), this “alternativity” is different from article 51’s establishment of two alternative conditions for classification of GPAI models as presenting systemic risk. ↩︎
  147. See Section 2.2.2. ↩︎
  148. To the same effect: Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 20. According to article 52(3), upon rejection of the provider’s arguments ‘the general-purpose AI model shall be considered to be a general-purpose AI model with systemic risk’ (emphasis added). This wording implies classification from the moment of rejection onwards. The translations of ‘shall be considered’ in the German (‘gilt als’) and French language versions (‘est considéré’) of article 52(3) express this even more clearly. ↩︎
  149. This would be the case where (i) the provider notifies the Commission before its model meets article 51(2)’s training compute threshold (see Section 2.1.1.2.3.) and (ii) the Commission promptly rejects the provider’s challenge to classification (see Section 2.2.3.1.). ↩︎
  150. See commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  151. See Section 2.5. ↩︎
  152. Code of Practice, Safety and Security Chapter (n 120) 7, Measure 1.1. ↩︎
  153. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 12; for the similar question of whether classification under article 51(1)(a) requires a Commission designation decision, see commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  154. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 12; similar: Haar and Siglmüller, ‘Art. 51’ (n 8) paras 26–28 with regard to the question of automatic classification under article 51(1)(a). ↩︎
  155. See commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  156. This is clarified by Commission Guidelines (n 41) paras 26–27, 45–46; for automatic classification under article 51(1)(a), see commentary on Article 51, Section 2.1.1.1. in this work; for the effects of classification, see commentary on Article 51, Section 2.1.4. in this work. ↩︎
  157. See Section 2.2.3.1. ↩︎
  158. See Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 16; Hofmann-Coombe (n 7) para 46; see also Haar and Siglmüller, ‘Art. 52’ (n 50) para 8 which assume that the notification obligation is only triggered where a GPAI model meets the training compute threshold under article 51(2) and, accordingly, relate article 52(1), second sentence, only to this condition. For a discussion of the conditions that trigger the notification obligation, see Section 2.1.1.2. It is assumed that the minor wording differences between article 52(1)’s first sentence (‘that requirement’ that ‘is met or it becomes known that it will be met’) and second sentence (‘the relevant requirement’ that ‘has been met’) are merely attributable to the fact that the GPAI model provisions were drafted under heavy time constraints, with no substantive differentiation intended. ↩︎
  159. See AI Act, art 53: ‘Providers of general-purpose AI models shall: (a) draw up and keep up-to-date the technical documentation of the model […] for the purpose of providing it, upon request, to the AI Office and the national competent authorities; […].’ ↩︎
  160. See AI Act, art 91(1): ‘The Commission may request the provider of the general-purpose AI model concerned to provide the documentation drawn up by the provider in accordance with Articles 53 and 55, or any additional information that is necessary for the purpose of assessing compliance of the provider with this Regulation.’ ↩︎
  161. See AI Act, art 92(1): ‘The AI Office, after consulting the Board, may conduct evaluations of the general-purpose AI model concerned: (a) to assess compliance of the provider with obligations under this Regulation, where the information gathered pursuant to Article 91 is insufficient; or (b) to investigate systemic risks at Union level of general-purpose AI models with systemic risk, in particular following a qualified alert from the scientific panel in accordance with Article 90(1), point (a).’ ↩︎
  162. See commentary on Article 53, para 57 in this work. ↩︎
  163. See Hofmann-Coombe (n 7) para 46. ↩︎
  164. The wording of article 52(1)’s second sentence (‘information necessary to demonstrate that the relevant requirement has been met’, emphasis added) suggests that the legislature was concerned that providers would notify the Commission of GPAI models that do not meet classification requirements under article 51(1)(a) and (2). Given that systemic risk classification comes with substantial obligations under article 55(1), however, it is unclear whether this concern is well-founded. ↩︎
  165. See Section 2.1.1.2.1. ↩︎
  166. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 16; Haar and Siglmüller, ‘Art. 52’ (n 50) para 8; Erben and others (n 84) 43. ↩︎
  167. Commission Guidelines (n 41) para 32; see Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 16; opposing view: Haar and Siglmüller, ‘Art. 52’ (n 50) para 8. This duty is reminiscent of the notification obligation in the context of gatekeeper designation under Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) [2022] OJ L 265/1 (“DMA”). In this context, an undertaking notifying the Commission pursuant to article 3(3) DMA that it meets the requirements for gatekeeper designation is required to provide ‘precise and succinct explanations about the methodology’ that it used to determine its relevant turnover, market capitalisation and numbers of end users and business users (see article 2(1) in conjunction with sections 3.4 and 4.3 of annex I of Commission Implementing Regulation (EU) 2023/814 of 14 April 2023 on detailed arrangements for the conduct of certain proceedings by the Commission pursuant to Regulation (EU) 2022/1925 of the European Parliament and of the Council [2023] OJ L 102/6). ↩︎
  168. Apparently in favour of the latter interpretation: Haar and Siglmüller, ‘Art. 52’ (n 50) para 8; unclear: Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 16 who interpret article 52(1), second sentence, broadly as encompassing information regarding the criteria set out in annex XIII. ↩︎
  169. See Erben and others (n 84) 42 (‘According to Article 52(1), the notification should include evidence that the compute threshold has or is expected to be met. We recommend that the evidence provided enables the competent authority to validate the provider’s compute estimation that has triggered the obligation to notify.’) ↩︎
  170. For different methods of verifying declared amounts of training compute, see Erben and others (n 84) 43–45. ↩︎
  171. AI Act, recital 112, sixth sentence. ↩︎
  172. See AI Act, recital 112, third to sixth sentences: ‘The provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a general-purpose AI model will meet the requirements that lead to the presumption. […] In the context of that notification, the provider should be able to demonstrate that, because of its specific characteristics, a general-purpose AI model exceptionally does not present systemic risks, and that it thus should not be classified as a general-purpose AI model with systemic risks. That information is valuable for the AI Office to anticipate the placing on the market of general-purpose AI models with systemic risks and the providers can start to engage with the AI Office early on.’ ↩︎
  173. See AI Act, recital 112, sixth sentence. ↩︎
  174. For an analysis of the meaning of ‘appropriate technical tools and methodologies, including indicators and benchmarks’ in article 51(1)(a), see commentary on Article 51, Section 2.1.1.3. in this work. ↩︎
  175. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 16; see AI Act, annex XIII, points (a)–(e). For the relevance of the criteria contained in annex XIII for classification of GPAI models based on their high-impact capabilities, see commentary on Article 51, Section 2.4.2. in this work. ↩︎
  176. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 16 argue that the notification must include information on benchmarks and training performed, as well as details regarding the criteria set out in annex XIII without differentiating between the different triggers of the notification because of the relevance of such information for the Commission. Erben and others (n 84) 11 recommend that providers ‘include data for compute verification: training duration, cluster specifications (accelerator count and type), expected utilisation rates, and basic architectural parameters.’ ↩︎
  177. For this approach, see Commission Guidelines (n 41) paras 124–133; Erben and others (n 84) 30–34. For different methods to estimate a model’s training compute, including this architecture-based approach, see commentary on Article 51, Section 2.2.1.3.2. in this work. ↩︎
  178. Commission Guidelines (n 41) para 45; see Haar and Siglmüller, ‘Art. 52’ (n 50) para 10; see also Feiler, Forgó and Nebel, ‘Art. 52’ (n 80) para 5 who argue that the designation decision referred to in article 52(1)’s third sentence is made pursuant to article 52(4)’s first subparagraph, thereby challenging the distinction between both provisions. This is not entirely convincing. While such an interpretation finds limited support in the wording of article 52(1)’s third sentence (‘may decide to designate’ instead of ‘may designate’) and could explain why the reassessment provision under article 52(5) only expressly refers to designation under article 52(4) (see Section 2.4.1.), there are convincing arguments against this interpretation, as article 52(1)’s third sentence does not contain any reference to article 52(4)’s first subparagraph and establishes its own requirements (see Commission Guidelines (n 41), para 45; Haar and Siglmüller, ‘Art. 52’ (n 50) para 10). For these requirements see Section 2.1.3.2. and Section 2.1.3.3.; further, see Section 2.1.3.1. on the relationship between article 52(1)’s third sentence and article 52(4)’s first subparagraph. ↩︎
  179. See Section 2.1.3.2. and Section 2.1.3.3. ↩︎
  180. See AI Act, recital 113, second sentence. ↩︎
  181. Article 89(2) confers upon downstream providers (see AI Act, art 3(68)) ‘the right to lodge a complaint alleging an infringement of [the AI Act]’. A complaint pursuant to article 89(2) may relate not only to a GPAI model provider’s breaches of its substantive obligations under articles 53 and 55 but also to a breach of the notification obligation under article 52(1) (Clemens Bernsteiner and Thomas Rainer Schmitt, ‘Art. 89 Überwachungsmaßnahmen’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (2nd edn, C H Beck 2026) para 16). ↩︎
  182. See AI Act, recital 113, second sentence. ↩︎
  183. See AI Act, recital 113, first and second sentence. ↩︎
  184. Despite its title ‘Procedure’, article 52 does not contain only procedural rules for classification (see commentary on Article 51, Section 1.1. in this work and Section 1.1.). ↩︎
  185. See Section 1.1. ↩︎
  186. Commission Guidelines (n 41) para 45; Haar and Siglmüller, ‘Art. 51’ (n 8) para 18; without a clear differentiation between article 52(1)’s third sentence and article 52(4)’s first subparagraph: Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 21 (presupposing – contrary to the view taken in commentary on Article 51, Section 2.1.2.1.1. in this work – that article 51(1)(a) and (b) do not come with distinct substantive requirements, see Bernsteiner and Schmitt, ‘Art. 51’ (n 7) para 25). ↩︎
  187. Haar and Siglmüller, ‘Art. 51’ (n 8) paras 18, 59; Bond and Abbady (n 57) 842–843, s 3.3; Janine Wendt and Domenik Wendt, Das neue Recht der Künstlichen Intelligenz (Nomos 2025) s 11 para 22; Moritz Hecht, ‘Regulierung von GPAI-Modellen durch die KI-Verordnung’ (Künstliche Intelligenz und Recht, 2025) 30, 34; to the same effect also: Commission Guidelines (n 41) para 45 (‘The designation can occur: under Article 52(4) AI Act, if the Commission concludes that a model has capabilities or an impact equivalent to high-impact capabilities based on the criteria set out in Annex XIII AI Act […].’); unclear: Schneider and Schneider, ‘Art. 52’ (n 40) para 13 and Hofmann-Coombe (n 7) para 52; without a clear differentiation between article 52(1)’s third sentence and article 52(4)’s first subparagraph in general: Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 21 (presupposing – contrary to the view taken in commentary on Article 51, Section 2.1.2.1.1. in this work – that article 51(1)(a) and (b) do not come with distinct substantive requirements, see Bernsteiner and Schmitt, ‘Art. 51’ (n 7) para 25). ↩︎
  188. See Haar and Siglmüller, ‘Art. 51’ (n 8) para 18. ↩︎
  189. See Section 2.1.1.2. These arguments weigh not only against interpreting article 52(1)’s third sentence as relating to article 51(1)(b) but also against interpreting it as introducing a procedure and corresponding substantive requirement independent of article 51(1) altogether (see Section 2.1.3.3.2.). ↩︎
  190. AI Act, annex XIII. ↩︎
  191. See Haar and Siglmüller, ‘Art. 51’ (n 8) para 18. ↩︎
  192. Haar and Siglmüller, ‘Art. 51’ (n 8) para 18; Hofmann-Coombe (n 7) para 52 questions this relationship between article 51(1)(b) and article 52(4)’s first subparagraph because of the latter provision’s reference to article 90(1)(a). It is indeed unclear why article 52(4)’s first subparagraph references article 90(1)(a) rather than article 90(1)(b) or article 90(1) in its entirety (see Section 2.3.1.). It appears possible that this is due to a drafting oversight. ↩︎
  193. Recital 111’s eleventh and twelfth sentences state that ‘there should be a possibility for the Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is found that such model has capabilities or an impact equivalent to those captured by the set threshold. That decision should be taken on the basis of an overall assessment of the criteria for the designation of a general-purpose AI model with systemic risk set out in an annex to this Regulation […]’ (emphasis added). The phrase ‘capabilities or an impact equivalent’ matches the wording of article 51(1)(b) whereas ‘on the basis of’ matches the wording of article 52(4)’s first subparagraph. ↩︎
  194. AI Act, art 51(1). ↩︎
  195. See AI Act, art 52(5), first sentence (‘Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII.’); further, see Section 2.3.1.2. ↩︎
  196. See AI Act, recital 111: ‘It is appropriate to establish a methodology for the classification of general-purpose AI models as general-purpose AI models with systemic risks. […] To complement this system, there should be a possibility for the Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is found that such model has capabilities or an impact equivalent to those captured by the set threshold. […]’. For the complementary purpose of classification under article 51(1)(b) in conjunction with article 52(4)’s first subparagraph, see also commentary on Article 51, Section 2.1.2. in this work. ↩︎
  197. Haar and Siglmüller, ‘Art. 52’ (n 50) para 10; unclear with regard to this requirement: Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 12. In the German language version of article 52(1)’s third sentence (‘Erlangt die Kommission Kenntnis von einem KI-Modell mit allgemeinem Verwendungszweck, das systemische Risiken birgt, die ihr nicht mitgeteilt wurden, […]’), ‘of which it has not been notified’ relates to the systemic risks, not the GPAI model. As the notification obligation under article 52(1)’s first sentence relates to the GPAI model and not its associated risks, this appears to be a translation inconsistency without substantive relevance. ↩︎
  198. Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 [2019] OJ L 169/1 (“MSR”). ↩︎
  199. Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 10. However, see Bernsteiner and Schmitt, ‘Art. 94 Verfahrensrechte der Wirtschaftsakteure des KI-Modells mit allgemeinem Verwendungszweck’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (C H Beck 2026) paras 6–8 who argue for a narrow scope of application for article 94 but do not specifically address designation of GPAI models as presenting systemic risk. ↩︎
  200. Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 10. Article 94 is contained in Section 5 of Chapter IX AI Act, entitled ‘Supervision, investigation, enforcement and monitoring in respect of providers of general-purpose AI models’, and article 52 is contained in Section 1 of Chapter V AI Act, entitled ‘Classification rules’. ↩︎
  201. Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 10. ↩︎
  202. AI Act, art 113(2) and (3)(b). ↩︎
  203. MSR, art 18(1) in conjunction with AI Act, art 94; Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 11. Article 18 MSR obliges ‘market surveillance authorities’ which article 3(4) MSR defines as ‘authorit[ies] designated by a Member State under Article 10 [MSR] as responsible for carrying out market surveillance in the territory of that Member State’. Where article 18 MSR is applied mutatis mutandis under article 94, it obliges the Commission (see Jens Ambrock and Behrang Raji, ‘Art. 94 Verfahrensrechte der Wirtschaftsakteure des KI-Modells mit allgemeinem Verwendungszweck’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (4th edn, C H Beck 2026) para 6). ↩︎
  204. MSR, art 18(2) in conjunction with AI Act, art 94; Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 11. ↩︎
  205. MSR, art 18(3) in conjunction with AI Act, art 94; Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 11. ↩︎
  206. Charter of Fundamental Rights of the European Union [2012] OJ C 326/391 (“Charter”); compare Case T-1077/23 Bytedance v European Commission [2024] ECLI:EU:T:2024:478 paras 343–344 for the relevance of the right to good administration regarding gatekeeper designation under the DMA. ↩︎
  207. Charter, art 41(2)(a). ↩︎
  208. Charter, art 41(2)(b). ↩︎
  209. Charter, art 41(2)(c). ↩︎
  210. See Section 2.1.3.3.1., Section 2.1.3.3.2. and Section 2.1.3.3.3. ↩︎
  211. For a broader discussion of further classification pathways beyond article 51(1)(a) and (b), see commentary on Article 51, Section 2.1.3. in this work. It does not appear close at hand to interpret article 52(1)’s third sentence as referring to article 51(1)(b) and annex XIII, as article 52(4)’s first subparagraph already provides for designation with regard to these provisions (see Section 2.1.3.1. and Section 2.3.1.). ↩︎
  212. Commission Guidelines (n 41) para 45; Haar and Siglmüller, ‘Art. 52’ (n 50) para 10; unclear: Bernsteiner and Schmitt, ‘Art. 52’ (n 69) paras 21–22; see also Stefan Larsson, Jockum Hildén and Kasia Söderlung, ‘Implications of Regulating a Moving Target: Between Fixity and Flexibility in the EU AI Act’, (2025) 18 Law, Innovation and Technology (forthcoming) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5211101> accessed 16 December 2025, 26 (‘The Commission has the same competence [of designating a GPAI model as posing systemic risk] in cases when it becomes aware of a GPAI model presenting systemic risks, which meets the criteria concerning the number of FLOPs, and of which it has not been notified (Article 52 (1)’). ↩︎
  213. See Section 2.1.3.1. ↩︎
  214. Commission Guidelines (n 41) para 45 (‘The designation can occur […] under Article 52(1) AI Act, if the provider of a general-purpose AI model meeting the condition referred to in Article 51(1)(a), AI Act failed to notify the Commission […].’); Haar and Siglmüller, ‘Art. 52’ (n 50) para 10. ↩︎
  215. See Section 2.1.3.1. ↩︎
  216. See commentary on Article 51, Section 2.1.2.1.6.1. in this work. ↩︎
  217. For a discussion of mandatory consideration of annex XIII criteria for designation under article 51(1)(b) in conjunction with article 52(4)’s first subparagraph, see commentary on Article 51, Section 2.1.2.1.2.1. in this work. For the relevance of annex XIII criteria for classification under article 51(1)(a) see commentary on Article 51, Section 2.4.2. in this work. ↩︎
  218. See Section 2.1.3.3.3. Article 51(2)’s wording and point (c) of annex XIII suggest that the presumption of high-impact capabilities does not apply to designation under article 52(4)’s first subparagraph in conjunction with article 51(1)(b). ↩︎
  219. For a discussion of automatic classification under article 51(1)(a) see commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  220. See Section 2.1.3.5. ↩︎
  221. For the role of Commission designation in the context of article 51(1)(b), see commentary on Article 51, Section 2.1.2.2. in this work. ↩︎
  222. In particular, a provider of a designated GPAI model that did not challenge the designation is in principle precluded from challenging an enforcement decision on the grounds that the GPAI model does not have high-impact capabilities and therefore should not have been designated as presenting systemic risk (see commentary on Article 51, Section 2.1.4., last paragraph in this work). ↩︎
  223. For a discussion of the consequences of designation under article 52(1)’s third sentence, see Section 2.1.3.5. ↩︎
  224. See Section 2.1.3.5. ↩︎
  225. Moreover, the AI Act’s definition of ‘systemic risk’ under article 3(65), in principle, enjoys general applicability throughout the AI Act, as the definitions under article 3 apply ‘[f]or the purposes of this Regulation’ (AI Act, art 3). ↩︎
  226. For a discussion of further classification pathways beyond article 51(1)(a) and (b) and their potential to enhance the classification framework’s adaptability, see commentary on Article 51, Section 2.1.3. in this work. ↩︎
  227. There are several indicators for recital 113’s first sentence relating to article 52(1)’s third sentence. First, it mentions requirements for designation (‘which previously had either not been known or of which the relevant provider has failed to notify the Commission’) which correspond in part with article 52(1)’s third sentence (‘of which it has not been notified’). Second, it is directly preceded by recital 112, which relates to the notification obligation under article 52(1)’s first and second sentence and the right of the provider to demonstrate the absence of systemic risks pursuant to article 52(2). Third, recital 111’s eleventh sentence (‘To complement this system, there should be a possibility for the Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is found that such model has capabilities or an impact equivalent to those captured by the set threshold.’) relates to the designation provision under article 52(4)’s first subparagraph. In light of recital 111’s eleventh sentence, it appears reasonable to assume that recital 113’s first sentence relates (at least also) to article 52(1)’s third sentence rather than exclusively to article 52(4)’s first subparagraph. ↩︎
  228. For a discussion of automatic classification under article 51(1)(a) see commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  229. The Commission Guidelines (n 41) para 45 do not expressly address this question. However, their statement that ‘[t]he designation can occur […] under Article 52(1) AI Act, if the provider of a general-purpose AI model meeting the condition referred to in Article 51(1)(a), AI Act failed to notify the Commission […]’ suggests that the Commission Guidelines favour the applicability of the high-impact capabilities presumption under article 51(2) in the present context, as they regard it as applicable both in the context of article 51(1)(a) (Commission Guidelines (n 41) para 28) and in the context of the notification obligation under article 52(1)’s first sentence (Commission Guidelines (n 41) para 30). ↩︎
  230. See AI Act, art 51(2) (‘shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a)’). ↩︎
  231. See Section 2.1.3.1. ↩︎
  232. See Section 2.1.1.2.1. ↩︎
  233. See Section 2.2.2.3. ↩︎
  234. For a discussion of the rebuttal of the high-impact capabilities presumption in the context of a challenge to classification under article 52(2), see Section 2.2.2.1. ↩︎
  235. See Section 2.1.3.2. ↩︎
  236. See also AI Act, recital 113, first sentence: ‘If the Commission becomes aware of the fact that a general-purpose AI model meets the requirements to classify as a general-purpose AI model with systemic risk, which previously had either not been known or of which the relevant provider has failed to notify the Commission, the Commission should be empowered to designate it so’ (emphasis added). ↩︎
  237. See AI Act, art 52(4), first subparagraph (‘may designate’); for a discussion of Commission discretion with regard to designation under article 52(4)’s first subparagraph in conjunction with article 51(1)(b), see commentary on Article 51, Section 2.1.2.1. in this work. ↩︎
  238. This is discussed in depth in commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  239. For the consequences of designation under article 52(1)’s third sentence see Section 2.1.3.5. ↩︎
  240. A designation decision pursuant to article 52(1)’s third sentence may be unnecessary where the two-week period for notification under article 52(1)’s first sentence (see Section 2.1.1.3.) is only marginally exceeded. ↩︎
  241. Compare Commission Guidelines (n 41) para 39 which indicate that the extent to which a model surpasses article 51(2)’s training compute threshold will be taken into account in the procedure to contest classification under article 52(2) and (3). ↩︎
  242. See Section 2.1.1.2.5. ↩︎
  243. See, in particular, AI Act, art 2(1) and (8); see also Section 2.1.1.2.5.; for a discussion of article 2(8)’s exclusion for research, testing and development activities, see forthcoming commentary on Article 2 in this work. ↩︎
  244. For pre-market placement notification obligations, see Section 2.1.1.2.5. ↩︎
  245. See Commission Guidelines (n 41) paras 27, 43 and 46. For a discussion of automatic classification under article 51(1)(a), see commentary on Article 51, Section 2.1.1.1. in this work; for a discussion of the effects of classification, see commentary on Article 51, Section 2.1.4. in this work. ↩︎
  246. See Bond and Abbady (n 57) 839–840, s 1; Förster and Straburzynski (n 64) para 66. In favour of the requirement of a Commission decision for classification under article 51(1)(a): Haar and Siglmüller, ‘Art. 51’ (n 8) paras 26–31; Bernsteiner and Schmitt, ‘Art. 51’ (n 7) para 12; Hofmann-Coombe (n 7) para 35; Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 3; Martini (n 3) para 197; Philipp Schöbel and Anna Maria Yang-Jacobi, ‘Systemische Risiken im Zeitalter generativer KI’ (2025) Recht Digital 627, 631; Claudio Novelli and others, ‘Generative AI in EU law: Liability, Privacy, Intellectual Property, and Cybersecurity’ (2024) 55 Computer Law & Security Review 106066, 2–3. ↩︎
  247. See Commission Guidelines (n 41) para 46 which state that ‘the provider must comply with the obligations for providers of general-purpose AI models with systemic risk from the moment the model is classified as a general-purpose AI model with systemic risk’ and further clarify that, in case of a designation under article 52(1), ‘this is the moment when the model meets the condition laid down in Article 51(1), point (a), AI Act.’ ↩︎
  248. Opposing view: Haar and Siglmüller, ‘Art. 52’ (n 50) para 14. ↩︎
  249. See Section 2.2.2.3. ↩︎
  250. See Section 2.1.3.2. ↩︎
  251. See Section 2.1.3.3.1. ↩︎
  252. This argument is nuanced by the fact that classification under article 52(2) may be understood broadly as encompassing both automatic classification and designation. ↩︎
  253. See Section 2.4.1. ↩︎
  254. See Section 2.4.1. ↩︎
  255. Consolidated version of the Treaty on the Functioning of the European Union [2012] OJ C 326/47 (“TFEU”), art 288(4). ↩︎
  256. Haar and Siglmüller, ‘Art. 52’ (n 50) para 27. ↩︎
  257. TFEU, art 278. ↩︎
  258. See commentary on Article 51, Section 2.1.4., para 85 in this work. ↩︎
  259. See commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  260. This commentary follows the terminology of the Commission Guidelines (n 41) para 33, which refer to the procedure under article 52(2) and (3) as the ‘Procedure for contesting classification’. ↩︎
  261. For a discussion of the scope of the procedure to contest classification under article 52(2) and (3), see Section 2.2.1. ↩︎
  262. Article 52(2) establishes that ‘[t]he provider of a general-purpose AI model that meets the condition referred to in Article 51(1), point (a), may present, with its notification, sufficiently substantiated arguments to demonstrate that, exceptionally, although it meets that requirement, its general-purpose AI model does not present, due to its specific characteristics, systemic risks and therefore should not be classified as a general-purpose AI model with systemic risk’. Article 52(3) builds upon article 52(2) by providing that ‘[w]here the Commission concludes that the arguments submitted pursuant to paragraph 2 are not sufficiently substantiated and the relevant provider was not able to demonstrate that the general-purpose AI model does not present, due to its specific characteristics, systemic risks, it shall reject those arguments, and the general-purpose AI model shall be considered to be a general-purpose AI model with systemic risk’. ↩︎
  263. See Haar and Siglmüller, ‘Art. 52’ (n 50) paras 15–16 (‘Entscheidung der Kommission (Abs. 3)’). For the scope of the article 52(2) and (3) procedure, see Section 2.2.1.; for the requirements for contesting classification, see Section 2.2.2. ↩︎
  264. See Haar and Siglmüller, ‘Art. 52’ (n 50) paras 11–14 (‘Exkulpationsargumentation (Abs. 2)’). For the Commission decision following a provider’s submission, see Section 2.2.3. ↩︎
  265. See AI Act, art 52(2) (‘with its notification’). A provider cannot contest classification in the absence of notification, in particular in cases of designation under article 52(1)’s third sentence or article 52(4)’s first subparagraph (see Section 2.2.2.3.). ↩︎
  266. See Section 2.1.1.2.2. ↩︎
  267. See Section 2.1.1.2.1. ↩︎
  268. See Commission Guidelines (n 41) paras 33–34; specifically for the applicability in case of article 51(2): Bond and Abbady (n 57) 841, s 3.2; Haar and Siglmüller, ‘Art. 51’ (n 8) para 39; Bernsteiner and Schmitt, ‘Art. 51’ (n 7) para 39; for the similar question regarding the scope of the notification obligation under article 52(1), first sentence, see Section 2.1.1.2. ↩︎
  269. See commentary on Article 51, Section 2.1.1. in this work. ↩︎
  270. See AI Act, art 52(2): ‘The provider of a general-purpose AI model that meets the condition referred to in Article 51(1), point (a), may present, with its notification, sufficiently substantiated arguments to demonstrate that, exceptionally, although it meets that requirement, the general-purpose AI model does not present, due to its specific characteristics, systemic risks […].’ (emphases added); see also Article 51(1): ‘A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: (a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; (b) […]’ (emphasis added). ↩︎
  271. See Commission Guidelines (n 41) paras 33–34; Bond and Abbady (n 57) 841, s 3.2; Haar and Siglmüller, ‘Art. 51’ (n 8) para 39; Bernsteiner and Schmitt, ‘Art. 51’ (n 7) para 39. ↩︎
  272. See AI Act, recital 112, third and fourth sentences; see also Section 2.1.1.2.1. ↩︎
  273. See Section 2.2.2.1. ↩︎
  274. See Section 2.2.2.4. ↩︎
  275. ‘The provider of a general-purpose AI model that meets the condition referred to in Article 51(1), point (a), may present, with its notification, sufficiently substantiated arguments to demonstrate that […]’ (AI Act, art 52(2), emphasis added). ↩︎
  276. The use of ‘and’ instead of ‘or’ in article 52(3) signals that the legislature did not conceive these as distinct requirements (see AI Act, art 52(3): ‘Where the Commission concludes that the arguments submitted pursuant to paragraph 2 are not sufficiently substantiated and the relevant provider was not able to demonstrate that […]’, emphasis added). ↩︎
  277. See Section 2.2.2.2. ↩︎
  278. See Section 2.2.2.3. ↩︎
  279. This is confirmed by article 52(3), which largely reflects article 52(2)’s requirements but does not use the notion ‘exceptionally’. See also Commission Guidelines (n 41) paras 33–42, which do not discuss ‘exceptionally’ as an independent requirement. ↩︎
  280. Article 52(3) supports this as it does not list this last subclause of article 52(2) as a requirement; rather, it states that the model ‘shall be considered to be a general-purpose AI model with systemic risk’ as a consequence of the Commission’s rejection of the provider’s arguments. ↩︎
  281. See Section 2.2.1. ↩︎
  282. See Commission Guidelines (n 41) paras 34, 37 and 40 which expressly mention (i) the case where ‘the provider’s arguments are aimed at rebutting the presumption that the model has high-impact capabilities and therefore does not present systemic risks’ and (ii) the case where ‘the provider’s arguments are aimed at demonstrating that their model does not present systemic risks despite having high-impact capabilities’. ↩︎
  283. For a discussion of the standard of proof and the kind of evidence the Commission expects providers to submit with their challenge to classification, see Section 2.2.2.4. ↩︎
  284. See Commission Guidelines (n 41) paras 34, 37 and 40; regarding the rebuttal of article 51(2)’s presumption of high-impact capabilities: Haar and Siglmüller, ‘Art. 51’ (n 8) para 39; Bernsteiner and Schmitt, ‘Art. 51’ (n 7) para 39; Schneider and Schneider, ‘Art. 51’ (n 5) para 30. ↩︎
  285. See Section 2.2.1. ↩︎
  286. This would limit the provider’s argumentative scope to demonstrate the absence of systemic risk; the provider could, however, still present arguments that the model does not have ‘a significant impact on the Union market’ and therefore does not present systemic risks (see AI Act, art 3(65); see also forthcoming commentary on Article 3(65) in this work). ↩︎
  287. See also Section 2.2.1. ↩︎
  288. See Section 2.2.1. ↩︎
  289. See Section 2.2.1. ↩︎
  290. See AI Act, art 51(1)(a); further, see AI Act, recital 111, second sentence, and recital 112, second sentence. ↩︎
  291. See AI Act, art 52(2) and (3). ↩︎
  292. Commission Guidelines (n 41) para 37. ↩︎
  293. The Commission Guidelines (n 41) para 40 only examine the further case ‘where the provider’s arguments are aimed at demonstrating that their model does not present systemic risks despite having high-impact capabilities’ (emphasis added). ↩︎
  294. See Jonathan Kirschke-Biller and Anna Lena Füllsack, ‘Art. 3 Begriffsbestimmungen’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (4th edn, C H Beck 2025) para 741; see also forthcoming commentary on Article 3(65) in this work. ↩︎
  295. See forthcoming commentary on Article 3(65) in this work; for a discussion of ‘specific to’, see also Philipp Hacker, Atoosa Kasirzadeh and Lilian Edwards, ‘AI, Digital Platforms, and the New Systemic Risk’ (2025) <https://arxiv.org/abs/2509.17878> accessed 7 January 2026, 24–25. ↩︎
  296. See forthcoming commentary on Article 3(65) in this work. ↩︎
  297. For article 51(1)(b)’s classification condition, see commentary on Article 51, Section 2.1.2. in this work; for designation under article 52(4)’s first subparagraph, see Section 2.3.1.; for the relationship between both provisions, see Section 2.1.3.1. ↩︎
  298. This follows from the general apportionment of the burden of proof under EU law, which requires each party to prove the facts supporting its claim or defence (see Koen Lenaerts, Kathleen Gutman and Janek Tomasz Nowak, EU Procedural Law (2nd edn, Oxford University Press 2023) 788, para 24.47; Case C-213/19 European Commission v United Kingdom of Great Britain and Northern Ireland [2022] EU:C:2022:167 para 221; Case C-187/22 P Laboratorios Ern, SA v European Union Intellectual Property Office [2022] EU:C:2022:547 para 17 (order)). ↩︎
  299. See AI Act, art 51(1)(a); see also AI Act, recital 111, second sentence, and recital 112, second sentence. ↩︎
  300. For this scenario, see Section 2.2.3.2. ↩︎
  301. For the effects of a model meeting article 51(2)’s training compute threshold on classification under article 51(1)(a), see commentary on Article 51, Section 2.2.2. in this work. ↩︎
  302. For the relevance of the timing of the Commission decision under article 52(3) with regard to its legal effects, see Section 2.2.3.2. ↩︎
  303. The word ‘its’ in article 52(2) refers to ‘the general-purpose AI model’. If specific characteristics of GPAI models in general were meant, it would read ‘their specific characteristics’ in article 52(2). ↩︎
  304. Compare article 2(b) of the Commission Delegated Decision of 23.11.2021 on further defining security, illegal immigration or high epidemic risks, C(2021) 4981 final, which defines ‘sets of characteristics’ as ‘distinguishing sets of observable qualities or properties identified based on information and statistics referred to in Article 33(2) of Regulation (EU) 2018/1240 and taking into account the data referred to in Article 33(4)(a) to (d) of that Regulation’ (emphasis added); further, see ‘characteristic’, Cambridge Dictionary (online version) <https://dictionary.cambridge.org/dictionary/english/characteristic> accessed 16 December 2025 (‘a typical or noticeable quality of someone or something’); ‘characteristic’, Merriam-Webster Dictionary (online version) <https://www.merriam-webster.com/dictionary/characteristic> accessed 16 December 2025 (‘a distinguishing trait, quality, or property’). ↩︎
  305. For this interpretive uncertainty, see Bond and Abbady (n 57) 841, s 3.2 (‘To rebut the presumption that a GPAI model trained using more than 10²⁵ FLOPs of computation presents a systemic risk, Article 52(2) requires the provider to identify “specific characteristics” which result in the model not presenting systemic risks. The AI Act does not provide any examples of specific characteristics which could be relied on to rebut the presumption.’); see also Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 7. ↩︎
  306. See Haar and Siglmüller, ‘Art. 52’ (n 50) para 13 who argue that a model’s ‘specific characteristics’ do not need to relate to all the risks mentioned in recital 110, as otherwise article 52(2) would be without a significant scope of application. ↩︎
  307. See Commission Guidelines (n 41) para 33. Instead, they offer some guidance on arguments that may play a role in the Commission’s decision (see Section 2.2.2.4.). ↩︎
  308. Commission Guidelines (n 41) para 40. ↩︎
  309. Commission Guidelines (n 41) para 40 (‘In the case where the provider’s arguments are aimed at demonstrating that their model does not present systemic risks despite having high-impact capabilities, the Commission considers that arguments that a model does not present systemic risks because of mitigations already or planned to be implemented are not suitable grounds for a model being excluded from classification as a general-purpose AI model with systemic risk. In those cases, the model still poses systemic risks which must continuously be assessed and mitigated.’) ↩︎
  310. For a discussion of this interpretive approach, see Section 2.2.2.2.1. ↩︎
  311. For a discussion of this interpretive approach, see Section 2.2.2.2.2. ↩︎
  312. See Haar and Siglmüller, ‘Art. 52’ (n 50) para 13 who argue against a narrow interpretation of ‘specific’, insofar as they reject requiring the characteristics to relate to all risks mentioned in recital 110. ↩︎
  313. See Section 2.2.2.2.2. ↩︎
  314. A cut-off point could be expressed in an absolute number or a percentage. For example, a characteristic could cease to be a ‘specific characteristic’ if one model has it (threshold = 1), fewer than five models have it (threshold = 5), or less than 10% of models have it (threshold = 10%). ↩︎
  315. Apart from obvious cases (for example, using a different label for the same model training technique), distinguishing between different characteristics and variants of the same characteristic might require a case-by-case determination. ↩︎
  316. As the procedure for contesting classification under article 52(2) and (3) is linked to the classification condition under article 51(1)(a) (see Section 2.2.1.), it appears worth considering limiting the subgroup to GPAI models classified under this condition. This would exclude GPAI models designated under article 52(4)’s first subparagraph based on the condition under article 51(1)(b). ↩︎
  317. Choosing all GPAI models as a reference group does not appear to be a defensible option. It implies in some cases exempting earlier GPAI models from systemic risk classification while classifying later GPAI models as presenting systemic risk, even when they share the same safety characteristic. This could give providers of later GPAI models the incentive to adopt a different safety characteristic instead of this demonstrably systemic risk-eliminating safety characteristic for the sole purpose of being able to request exemption under article 52(2). Having only GPAI models with systemic risk as a reference group avoids these issues to some extent (see paras 87–88). ↩︎
  318. See AI Act, art 51(3). ↩︎
  319. See, ‘specific’, Cambridge Dictionary (online version) <https://dictionary.cambridge.org/dictionary/english/specific> accessed 16 December 2025. See also the French (‘caractéristiques spécifiques’) and German (‘besonderes Merkmal’) language version of article 52(2). ↩︎
  320. For example, the providers of the GPAI models with systemic risk with this safety characteristic may not have contested classification because of their commitment to voluntary adherence to article 55 obligations, or their model may present systemic risk for reasons unrelated to this safety characteristic. ↩︎
  321. These cases may be addressable by choosing appropriate criteria for determining when a difference between two characteristics is significant enough for them to constitute different characteristics rather than variants of the same characteristic; for example, the emergence of new scientific evidence or the interplay of a characteristic with the model’s other characteristics could be taken into account. ↩︎
  322. See Section 2.2.2.4. ↩︎
  323. See AI Act, art 53(7). It appears doubtful whether the list of GPAI models with systemic risk that the Commission shall publish under article 52(6) will contain sufficiently specific information that providers could use to contest classification under article 52(2) and (3) (see Section 2.5.). ↩︎
  324. For this general principle of EU law see Joined Cases C‑622/16 P to C‑624/16 P Scuola Elementare Maria Montessori Srl v European Commission, European Commission v Scuola Elementare Maria Montessori Srl and European Commission v Pietro Ferracci [2018] EU:C:2018:873 para 79 and the case law cited. ↩︎
  325. See Section 2.2.2.2.1. ↩︎
  326. For an overview of annex XIII’s criteria, see commentary on Article 51, Section 2.4.1. in this work; for its relevance for designation under articles 51(1)(b) and article 52(4)’s first subparagraph, see commentary on Article 51, Section 2.1.2.1.2. in this work. ↩︎
  327. See AI Act, recital 65 (‘In identifying the reasonably foreseeable misuse of high-risk AI systems, the provider should cover uses of AI systems which, while not directly covered by the intended purpose and provided for in the instruction for use may nevertheless be reasonably expected to result from readily predictable human behaviour in the context of the specific characteristics and use of a particular AI system.’, emphasis added). ↩︎
  328. See Bond and Abbady (n 57) 841, s 3.2 who observe that the AI Act does not provide any examples of ‘specific characteristics’. ↩︎
  329. Annex XIII specifically relates to designation of a GPAI model as presenting systemic risk under article 51(1)(b) and article 52(4)’s first subparagraph, but its criteria may be taken into account in the procedure for contesting classification under article 52(2) and (3) as well (see commentary on Article 51, Section 2.4.2. in this work). However, going so far as to define ‘specific characteristics’ under article 52(2) on the basis of the criteria listed in annex XIII appears to lack normative justification. The AI Act does not provide for an exact alignment of annex XIII criteria with ‘specific characteristics’ under article 52(2). Neither provision references the other, and such an interpretation would ignore the difference between the conditions under article 51(1)(a) and article 51(1)(b) to which they relate (see commentary on Article 51, Section 2.1.2.1.1. in this work). Moreover, annex XIII does not serve the same purpose of limiting the scope of argumentation as the parenthetical ‘due to its specific characteristics’ under article 52(2), as it merely contains a non-exhaustive list of criteria that the Commission shall take into account for designation under article 51(1)(b) and article 52(4)’s first subparagraph (see commentary on Article 51, Section 2.1.2.1.2.2. in this work). ↩︎
  330. See Bond and Abbady (n 57) 841, s 3.2. ↩︎
  331. For the Commission’s power to introduce such assessment instruments via delegated act, see article 51(3), which is discussed in commentary on Article 51, Section 2.3.1.3. in this work. Such assessment instruments could provide guidance on which characteristics are generally more suitable for demonstrating the absence of high-impact capabilities in a GPAI model (as one aspect of systemic risk classification) and therefore could be considered ‘specific characteristics’ under article 52(2). ↩︎
  332. See AI Act, art 96 for Commission guidelines on the implementation of the AI Act. ↩︎
  333. For a discussion of the requirements for designation under these provisions, see Section 2.1.3.3. (on designation under article 52(1)’s third sentence) and Section 2.3.1.1. (on designation under article 52(4)’s first subparagraph). ↩︎
  334. See Commission Guidelines (n 41) para 36 (‘Upon receiving a notification with arguments for why the model does not present systemic risks, the Commission will assess the arguments and decide whether to accept or reject them, in line with the rules of procedure of the European Commission and established principles of EU law.’). ↩︎
  335. See Section 2.1.3.5. Opposing view: Haar and Siglmüller, ‘Art. 52’ (n 50) para 14. ↩︎
  336. See Section 2.1.1.2.3. ↩︎
  337. See Erben and others (n 84) 42. ↩︎
  338. For a discussion of article 52(5)’s scope of application, see Section 2.4.1. ↩︎
  339. However, a provider failing to notify the Commission in breach of its obligation under article 52(1)’s first sentence may be fined under article 101(1)(a) (see Commission Guidelines (n 41) para 45). ↩︎
  340. See Section 2.1.1.2.3. ↩︎
  341. Moreover, by relaxing the ‘with its notification’ requirement, the procedure for contesting classification risks losing its connection to the initial classification of the GPAI model as presenting systemic risk under article 51(1)(a) and becoming a quasi-reassessment procedure. Given the express provision of a reassessment procedure in article 52(5) – albeit arguably only for models designated under article 52(4)’s first subparagraph (see Section 2.4.1.) – it appears questionable whether this is in line with the purpose of the procedure for contesting classification under article 52(2) and (3). ↩︎
  342. See Section 2.2.2.4. ↩︎
  343. See Commission Guidelines (n 41) para 35. ↩︎
  344. See Erben and others (n 84) 42. ↩︎
  345. Such pre-notification contacts are recognised as a means to ensure an effective notification procedure in the context of gatekeeper designation under the DMA (see Commission Implementing Regulation (EU) 2023/814 of 14 April 2023 on detailed arrangements for the conduct of certain proceedings by the Commission pursuant to Regulation (EU) 2022/1925 of the European Parliament and of the Council [2023] OJ L 102/6, recital 2: ‘In the process of preparing a notification pursuant to Article 3(3) of Regulation (EU) 2022/1925 and Article 2 of this Regulation and within a reasonable timeframe before this notification, an undertaking providing core platform services should be able to engage in pre-notification contacts with the Commission in view of ensuring an effective notification procedure pursuant to Article 3(3) of Regulation (EU) 2022/1925.’). The AI Act does not expressly provide for pre-notification contacts. However, it envisages in the context of notification the possibility that providers ‘start to engage with the AI Office early on’ (AI Act, recital 112, sixth sentence). ↩︎
  346. See Commission Guidelines (n 41) para 34 (‘The Commission notes that the burden of adducing evidence that the presumption deriving from the fulfilment of the quantitative thresholds should not apply should be borne by that provider.’); Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 7; Schneider and Schneider, ‘Art. 52’ (n 40) para 10. ↩︎
  347. For an analysis of the meaning of ‘sufficiently substantiated arguments’ under this provision, see Jan-Frederick Göhsl and Daniel Zimmer, ‘VO (EU) 2022/1925 Art. 3 Benennung von Torwächtern’ in Torsten Körber, Heike Schweitzer and Daniel Zimmer (eds), Immenga/Mestmäcker Wettbewerbsrecht, Band 1 (2025) paras 54–57. ↩︎
  348. In Case T-1077/23, Bytedance Ltd v European Commission [2024] EU:T:2024:478 (appeal pending, Case C-627/24 P), para 71, the General Court found that ‘[i]t is indisputably apparent from the terms “exceptionally” and “manifestly”, in Article 3(5) of the DMA, that the standard of proof required of the undertaking concerned is high, in the sense that the arguments presented by that undertaking must be capable of showing, with a high degree of plausibility, that the presumptions laid down in Article 3(2) of the DMA are called into question’ and that ‘[t]he standard of proof […] of the existence of mere “doubts” or “prima facie” evidence, is hence lower than that required by the DMA.’ ↩︎
  349. Commission Guidelines (n 41) para 37 (emphasis added). ↩︎
  350. One may note that ‘substantiated’ is not translated as ‘substantiiert’ but as ‘begründet’ in the German language version of article 52(2). The former translation would have appeared an obvious choice, given that ‘substantiated’ has been translated as ‘substantiiert’ in the German language version of the parallel provision under article 3(5)(2) DMA. ↩︎
  351. Haar and Siglmüller, ‘Art. 52’ (n 50) para 12. ↩︎
  352. See AI Act, art 52(2): ‘exceptionally’. However, the number of GPAI models potentially eligible for exemption will be influenced by the degree to which the training compute threshold under article 51(2) is adjusted to reflect technological developments (see AI Act, art 51(3)). ↩︎
  353. See Section 2.2.2.1. ↩︎
  354. Commission Guidelines (n 41) para 35 (‘information available to them at the time of notification about the model’s achieved or anticipated capabilities, including in the form of actual or forecasted benchmark results (for example based on scaling analyses)’). ↩︎
  355. Commission Guidelines (n 41) para 35. ↩︎
  356. Commission Guidelines (n 41) para 35. See also Commission Guidelines (n 41) para 39: ‘The Commission will also assess any other elements beyond cumulative training compute which influence the achieved or expected capabilities of the model, including forecasted or achieved benchmark scores.’ ↩︎
  357. See Commission Guidelines (n 41) para 39: ‘In its assessment of whether the model is amongst the most advanced models at the time of notification, the Commission will take into account the extent to which the cumulative training compute of the model is indicative of the model being amongst these models. In this regard, the extent to which the cumulative training compute of the model exceeds the threshold laid down in Article 51(2) AI Act will also be taken into account.’ ↩︎
  358. Commission Guidelines (n 41) para 40; similar: Schneider and Schneider, ‘Art. 52’ (n 40) para 10 arguing that indications of a supposedly intact risk management system are not sufficient by themselves. ↩︎
  359. See DMA, art 3(5), second to fourth subparagraph. ↩︎
  360. See Commission Guidelines (n 41) para 36; seemingly opposing view: Haar and Siglmüller, ‘Art. 51’ (n 8) para 28 and Haar and Siglmüller, ‘Art. 52’ (n 50) para 15. ↩︎
  361. See article 41(1) of the Charter which grants every person the right to have their affairs handled within a reasonable time by the EU institutions as part of the right to good administration. ↩︎
  362. See AI Act art 52(3) (‘shall reject’). ↩︎
  363. See Section 2.2. ↩︎
  364. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 20 (‘Bei mangelhafter Ausführung oder wenn sie die vorgebrachten Argumente und Nachweise für nicht überzeugend hält, hat die Kommission das Vorbringen zurückzuweisen.’; translation: ‘Where the submission is deficient, or where it considers the arguments and evidence put forward unconvincing, the Commission must reject the submission.’); Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 7. ↩︎
  365. See Section 2.2.2.3. ↩︎
  366. For conditional positive decisions in EU state aid law, see Council Regulation (EU) 2015/1589 of 13 July 2015 laying down detailed rules for the application of Article 108 of the Treaty on the Functioning of the European Union [2015] OJ L 248/9, art 9(3) and (4); for conditional positive decisions in EU merger law, see Council Regulation (EC) No 139/2004 of 20 January 2004 on the control of concentrations between undertakings (the EC Merger Regulation) [2004] OJ L 24/1, art 8(2), second subparagraph. ↩︎
  367. For a discussion of automatic classification under article 51(1)(a), see commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  368. Commission Guidelines (n 41) para 42. For a discussion of whether a challenge to classification has suspensive effect, see Section 2.2.3.4. ↩︎
  369. Commission Guidelines (n 41) para 42. See also commentary on Article 51, Section 2.1.4. in this work for the effects of classification. ↩︎
  370. Commission Guidelines (n 41) para 42. This is similar to the effect of Commission designation under article 52(1)’s third sentence (see Section 2.1.3.5.). ↩︎
  371. See AI Act, art 52(1), first sentence. ↩︎
  372. The Commission Guidelines (n 41) do not expressly address this scenario. Under a competing interpretation, the Commission’s rejection decision itself classifies a GPAI model as presenting systemic risk (see Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 20; Schneider and Schneider, ‘Art. 52’ (n 40) para 11; apparently opposing view: Haar and Siglmüller, ‘Art. 52’ (n 50) para 16 who argue that a rejection decision cannot produce more far-reaching legal effects than a designation decision under article 52(1)). ↩︎
  373. The question of the temporal effect of the Commission’s decision on the provider’s challenge to classification comes with different implications depending on whether article 51(1)(a) is interpreted as automatically classifying GPAI models with high-impact capabilities as presenting systemic risk. This paragraph assumes – in line with the Commission Guidelines (n 41) paras 26–27, 41 – that classification under article 51(1)(a) is automatic and does not require a Commission designation (for a discussion of automatic classification under article 51(1)(a), see commentary on Article 51, Section 2.1.1.1. in this work). ↩︎
  374. Commission Guidelines (n 41) para 42; see also European Commission, ‘Guidelines on obligations for General-Purpose AI providers. General FAQ’ 3, <https://digital-strategy.ec.europa.eu/en/node/13981/printable/pdf> accessed 16 December 2025 (‘If the Commission accepts the arguments, the model will no longer be classified as a general-purpose AI model with systemic risk, and its provider will not be subject to the related obligations from the moment they are informed of the acceptance decision.’). ↩︎
  375. See AI Act, art 1(1). ↩︎
  376. Commission Guidelines (n 41) para 42. ↩︎
  377. Commission Guidelines (n 41) para 42. The Commission Guidelines (n 41) para 42 further assume that an acceptance decision will oblige the provider to renotify the Commission in such cases. As article 52 itself does not provide for such a renotification obligation, article 91 may provide a basis for the Commission to request renotification with its acceptance decision. Article 91(1) allows the Commission to request model providers to provide ‘additional information that is necessary for the purpose of assessing compliance of the provider with this Regulation’. Such a request must also meet the formal requirements under article 91(4). In particular, it must state its legal basis and the purpose of the request (see forthcoming commentary on Article 91 in this work). ↩︎
  378. Commission Guidelines (n 41) para 42. ↩︎
  379. See Commission Guidelines (n 41) para 36: ‘Upon receiving a notification with arguments for why the model does not present systemic risks, the Commission will assess the arguments and decide whether to accept or reject them, in line with the rules of procedure of the European Commission and established principles of EU law.’ ↩︎
  380. Schneider and Schneider, ‘Art. 52’ (n 40) para 11; Bond and Abbady (n 57) 842, s 3.2. ↩︎
  381. Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 12. ↩︎
  382. Automatic classification under article 51(1)(a) is discussed in commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  383. For a discussion of automatic classification under article 51(1)(a), see commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  384. For the effects of classification, see commentary on Article 51, Section 2.1.4. in this work. ↩︎
  385. This question is not relevant where the provider’s challenge and the Commission’s decision precede classification under article 51(1)(a). For this scenario and the legal effects of a Commission decision, see Section 2.2.3.2. ↩︎
  386. To the same effect: Commission Guidelines (n 41) para 41: ‘The provider is subject to the obligations for providers of GPAI models with systemic risk from the moment when the model meets the condition laid down in Article 51(1)(a), AI Act. In particular, this means that presenting arguments with the notification does not suspend providers’ obligations as providers of GPAI models with systemic risk.’ ↩︎
  387. Article 52(3) only states that in case of a rejection decision ‘the general-purpose AI model shall be considered to be a general-purpose AI model with systemic risk’ without addressing the model’s classification status before the Commission’s decision; compare TFEU, art 278: ‘Actions brought before the Court of Justice of the European Union shall not have suspensory effect. The Court may, however, if it considers that circumstances so require, order that application of the contested act be suspended’. ↩︎
  388. For example, recital 110 states that ‘[s]ystemic risks […] can arise along the entire lifecycle of the model’. For the role of the model’s lifecycle under the AI Act, see Commission Guidelines (n 41) paras 22–24. ↩︎
  389. See Section 2.3.1. ↩︎
  390. See Section 2.3.2. ↩︎
  391. See Section 2.1.3. ↩︎
  392. Bond and Abbady (n 57) 843, s 3.3.2; Haar and Siglmüller, ‘Art. 52’ (n 50) para 18; for an overview of annex XIII, see commentary on Article 51, Section 2.4.1. in this work. ↩︎
  393. Bond and Abbady (n 57) 842–843, s 3.3; Haar and Siglmüller, ‘Art. 51’ (n 8) para 59; Hecht (n 187) 34; see also Commission Guidelines (n 41) para 45 (‘The designation can occur: under Article 52(4) AI Act, if the Commission concludes that a model has capabilities or an impact equivalent to high-impact capabilities based on the criteria set out in Annex XIII AI Act […].’); unclear: Hofmann-Coombe (n 7) para 52. These reasons are discussed in depth in Section 2.1.3.1. See also Section 2.3.1.1. on the requirements for designation under article 52(4)’s first subparagraph. ↩︎
  394. See Section 2.1.3.1. ↩︎
  395. AI Act, art 52(4), first subparagraph and art 51(1)(b); Schneider and Schneider, ‘Art. 52’ (n 40) para 12. For an analysis of the role of qualified alerts, see forthcoming commentary on Article 90 in this work. ↩︎
  396. See Hofmann-Coombe (n 7) para 52 who regards the reference to article 90(1)(a) as an argument against article 52(4)’s first subparagraph relating to article 51(1)(b). ↩︎
  397. For a discussion of the role of structured dialogues in the supervision and enforcement of GPAI model provider obligations, see forthcoming Introduction to Chapter IX and XII in this work. ↩︎
  398. See Bond and Abbady (n 57) 842, s 3.3; see also Haar and Siglmüller, ‘Art. 51’ (n 8) para 21 who characterise designation under article 52(4) as the procedural consequence (‘formelle Folge’) of classification under article 51(1)(b); unclear: Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 21. ↩︎
  399. Commission Guidelines (n 41) para 45. ↩︎
  400. See AI Act, recital 111, eleventh and twelfth sentences; see also Section 2.1.3.1. ↩︎
  401. For a discussion of article 51(1)(b)’s substantive requirements, see commentary on Article 51, Section 2.1.2.1. in this work. For an overview of annex XIII, see commentary on Article 51, Section 2.4.1. in this work. ↩︎
  402. Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 10; see Section 2.1.3.2. ↩︎
  403. Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 10; to the same effect on the basis of article 41(2)(a) of the Charter: Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 21; see also Section 2.1.3.2. ↩︎
  404. See AI Act, art 51(1)(b); see also AI Act, art 52(5), first sentence (‘Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII.’, emphasis added). ↩︎
  405. See Commission Guidelines (n 41) paras 26–27, 46; for a discussion of the effects of classification, see commentary on Article 51, Section 2.1.4. in this work. ↩︎
  406. See commentary on Article 51, Section 2.1.2.2. in this work. ↩︎
  407. See Commission Guidelines (n 41) para 27. ↩︎
  408. Bond and Abbady (n 57) 843, s 3.3.2; Haar and Siglmüller, ‘Art. 52’ (n 50) para 26. ↩︎
  409. AI Act, art 52(5), first and second sentences. For a discussion of reassessment under article 52(5), see Section 2.4. ↩︎
  410. A notable difference between both provisions is that article 51(3)’s wording (‘shall adopt delegated acts’) suggests an obligation to adopt delegated acts (see commentary on Article 51, Section 2.3.2. in this work), whereas article 52(4)’s second subparagraph merely empowers the Commission to adopt such acts. For the question of whether an amendment of annex XIII is within scope of article 51(3), see commentary on Article 51, Section 2.3.1.4. in this work. ↩︎
  411. See TFEU, art 290(1): ‘A legislative act may delegate to the Commission the power to adopt non-legislative acts of general application to supplement or amend certain non-essential elements of the legislative act.’ ↩︎
  412. Haar and Siglmüller, ‘Art. 52’ (n 50) paras 21–24; see Bernsteiner and Schmitt, ‘Art. 51’ (n 7) para 7. ↩︎
  413. Haar and Siglmüller, ‘Art. 52’ (n 50) paras 22–23; Bernsteiner and Schmitt, ‘Art. 51’ (n 7) para 7. ↩︎
  414. Haar and Siglmüller, ‘Art. 52’ (n 50) paras 22–23; Hecht (n 187) 33. ↩︎
  415. Bond and Abbady (n 57) 843, s 3.3.2. ↩︎
  416. See TFEU, art 290(1): ‘A legislative act may delegate to the Commission the power to adopt non-legislative acts of general application to supplement or amend certain non-essential elements of the legislative act.’; further, see Case C-286/14 European Parliament v European Commission [2016] EU:C:2016:183 para 40; Case C‑617/24, Siegfried PharmaChemikalien Minden v Hauptzollamt Bielefeld [2025] EU:C:2025:908 paras 23–24; Clara Saillant, ‘Article 97 Exercise of the Delegation’ in Ceyhun Necati Pehlivan, Nikolaus Forgó and Peggy Valcke (eds), The EU Artificial Intelligence (AI) Act: A Commentary (Wolters Kluwer 2024), 1339, s 3.2. ↩︎
  417. See European Parliament v European Commission (n 416) paras 41–42; Siegfried PharmaChemikalien Minden v Hauptzollamt Bielefeld (n 416) para 30; further, see European Parliament, Council and Commission, Non-Binding Criteria for the application of Articles 290 and 291 of the Treaty on the Functioning of the European Union [2019] OJ C 223/1 ss II.B. and C; Saillant, ‘Art. 97’ (n 416) 1339, s 3.2. ↩︎
  418. European Parliament v European Commission (n 416) paras 41–42; Siegfried PharmaChemikalien Minden v Hauptzollamt Bielefeld (n 416) para 30. ↩︎
  419. See Bond and Abbady (n 57) 843, s 3.3.2; opposing view: Hecht (n 187) 34–35; see also Haar and Siglmüller, ‘Art. 52’ (n 50) paras 21–24 (arguing that ‘specifying’ and ‘updating’ need to be interpreted narrowly). ↩︎
  420. For example, the Commission’s obligation to publish a list of GPAI models with systemic risk and to ‘keep that list up to date’ (AI Act, art 52(6)) also encompasses the inclusion of new models in the list (see Section 2.5.). ↩︎
  421. Opposing view: Haar and Siglmüller, ‘Art. 52’ (n 50) para 24. ↩︎
  422. For an overview of annex XIII and its criteria, see commentary on Article 51, Section 2.4.1. in this work. ↩︎
  423. For a discussion of the non-exhaustive nature of annex XIII, see commentary on Article 51, Section 2.1.2.1.2.2. in this work. ↩︎
  424. AI Act, art 52(4), second subparagraph, in conjunction with AI Act, art 97(1). ↩︎
  425. AI Act, art 97(2); see Christina Brandt-Steinke, ‘Art. 97 Ausübung der Befugnisübertragung’ in Jens Schefzig and Robert Kilian (eds), Beck’scher Online-Kommentar KI-Recht (4th edn, C H Beck 2025) paras 19–20; Michael Kolain, ‘Art. 97. Ausübung der Befugnisübertragung’ in Mario Martini and Christiane Wendehorst (eds), KI-VO: Verordnung über Künstliche Intelligenz: Kommentar (2nd edn, C H Beck 2026) para 20. ↩︎
  426. AI Act, art 97(3); see Brandt-Steinke, ‘Art. 97’ (n 425) paras 22–23; Kolain, ‘Art. 97’ (n 425) paras 22–26. ↩︎
  427. AI Act, art 97(4); see Brandt-Steinke, ‘Art. 97’ (n 425) paras 27–30; Kolain, ‘Art. 97’ (n 425) paras 27–28; further, see AI Act, recital 173, second sentence (‘It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making’). ↩︎
  428. AI Act, art 97(5); see Brandt-Steinke, ‘Art. 97’ (n 425) para 31; Kolain, ‘Art. 97’ (n 425) para 29. ↩︎
  429. AI Act, art 97(6); see Brandt-Steinke, ‘Art. 97’ (n 425) para 33; Kolain, ‘Art. 97’ (n 425) para 30. ↩︎
  430. See Section 2.4.1. ↩︎
  431. See Section 2.4.2. ↩︎
  432. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 25. ↩︎
  433. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 25. ↩︎
  434. Haar and Siglmüller, ‘Art. 52’ (n 50) para 17. ↩︎
  435. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 25. ↩︎
  436. Haar and Siglmüller, ‘Art. 52’ (n 50) para 17. ↩︎
  437. For the effects of classification, see commentary on Article 51, Section 2.1.4. in this work. ↩︎
  438. Neither the AI Act’s enacting terms nor its recitals indicate the existence of a provider’s right to systemic risk classification of its model where the requirements for classification are met. ↩︎
  439. See AI Act, art 3(64). ↩︎
  440. In particular, article 52(6), by its wording, establishes the Commission’s duty to publish and maintain a list of GPAI models with systemic risk, rather than conferring a right of the provider to be included in the list. ↩︎
  441. AI Act, arts 3(64) and (65). For a discussion of the meaning of ‘specific to’ and ‘most advanced’, see forthcoming commentary on Article 3(65) in this work and forthcoming commentary on Article 3(64) in this work respectively. ↩︎
  442. See also commentary on Article 51, Section 1.1. in this work. ↩︎
  443. See Code of Practice, Safety and Security Chapter (n 120) 2. ↩︎
  444. Statement from the Chairs and Vice Chairs responsible for the Safety & Security Chapter of the Code of Practice, <https://code-of-practice.ai/?section=safety-security#chair-statement> accessed 16 December 2025. ↩︎
  445. See Haar and Siglmüller, ‘Art. 52’ (n 50) para 17. ↩︎
  446. See Section 2.5. ↩︎
  447. See Section 2.4.1. ↩︎
  448. AI Act, art 52(1), third sentence, and art 52(4), first subparagraph. For an analysis of their relationship, see Section 2.1.3.1. ↩︎
  449. See commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  450. See AI Act, art 52(5). ↩︎
  451. AI Act, art 52(5), first sentence. ↩︎
  452. For an application of article 52(5) in all cases of designation: Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 23 (under the premise that article 51(1)(a) does not allow for automatic classification); Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 8; Feiler, Forgó and Nebel, ‘Art. 52’ (n 80) para 7; opposing view: Hecht (n 187) 34; apparently also opposing: Hofmann-Coombe (n 7) para 52; left open by: Haar and Siglmüller, ‘Art. 52’ (n 50) paras 17, 25. ↩︎
  453. See Commission Guidelines (n 41) para 47. ↩︎
  454. See Section 2.4.2.2. ↩︎
  455. AI Act, recital 111, twelfth sentence. ↩︎
  456. This argument is weakened by the fact that recital 111’s second to tenth sentences relate to high-impact capabilities-based classification under article 51(1)(a), whereas recital 111’s eleventh and twelfth sentences relate to designation under article 52(4)’s first subparagraph, which is based on article 51(1)(b) (see Section 2.3.1.). Recital 111’s twelfth sentence, which touches on reassessment, immediately follows the recital’s remarks on designation under article 52(4)’s first subparagraph and could therefore be regarded as relating only to it. However, this positioning within recital 111 could also reflect the fact that article 52(5) was a late addition to article 52 in the drafting process (see fn 458). ↩︎
  457. For a discussion of Commission-initiated reassessment, see Section 2.4. ↩︎
  458. An oversight could potentially be explained by the legislative history of the AI Act. Article 52 is a product of the trilogue with article 52(5) specifically having been a particularly late addition to article 52 (see European Parliament, ‘Provisional Agreement Resulting From Interinstitutional Negotiations’ (2 February 2024, PE758.862v01-00 <https://artificialintelligenceact.eu/wp-content/uploads/2024/02/AIA-Trilogue-Committee.pdf> accessed 27 January 2026), 150, where article 52’s fifth paragraph is still numbered as paragraph 4a). Moreover, the assumption of a drafting oversight is not entirely implausible in light of further drafting errors contained in the AI Act’s text (see, for example, AI Act art 101(1), third sentence, (‘The Commission shall also into account [sic] […]’) or article 111’s title (‘general-purpose AI models already placed on the marked [sic]’)). ↩︎
  459. Where a provision of EU law – or a part of it, in this case the clause ‘pursuant to paragraph 4’ – is open to several interpretations, preference must be given to the interpretation which ensures that it is not rendered redundant and, thus, retains its effectiveness (see for example, RW v Österreichische Post AG (n 82) para 29 and Cristal Union (n 82) para 41; further, see Lenaerts and Gutiérrez-Fons, (n 82) 17–21). ↩︎
  460. For a discussion of Commission-initiated reassessment under article 52(6), see Section 2.4. ↩︎
  461. See Section 2.4.2.1. ↩︎
  462. See Section 2.3.1. ↩︎
  463. For the substantive requirements for classification under article 51(1)(b), see commentary on Article 51, Section 2.1.2.1. in this work. ↩︎
  464. AI Act, recital 111, twelfth sentence. The requirement of an ‘overall assessment’ does not, however, appear in the text of articles 51 and 52 themselves, and recitals may clarify the legislature’s intention but do not have binding legal force (see, for example, Case C-418/18 Patrick Grégor Puppinck and Others v European Commission [2019] EU:C:2019:1113 paras 75–76). For the role that annex XIII criteria play for designation under article 52(4)’s first subparagraph in conjunction with article 51(1)(b), see commentary on Article 51, Section 2.1.2.1.2. in this work. ↩︎
  465. See Section 2.4.3. ↩︎
  466. See Section 2.4.3. ↩︎
  467. Designation under article 52(1)’s third sentence is limited to GPAI models of which the Commission has not been notified (see Section 2.1.3.2.). ↩︎
  468. Similar: Hilgendorf and Härtlein, ‘Art. 52’ (n 11) para 8 who argue that a provider who notifies the Commission early would be at a disadvantage where the reasons to contest classification pursuant to article 52(2) emerge after notification and that article 52(5) must therefore allow such providers to request reassessment. For the requirements for designation under article 52(1)’s third sentence see Section 2.1.3.3. ↩︎
  469. AI Act, art 52(5), third sentence; see Bond and Abbady (n 57) 844, s 3.3.3. ↩︎
  470. AI Act, art 52(5), fourth sentence. ↩︎
  471. Bond and Abbady (n 57) 844, s 3.3.3, leave this open (‘Providers are barred from making further reassessment requests for a particular model for six months after a decision rejecting a previous reassessment request. It is not clear how this restriction, or reassessment requests in general, will interact with any future changes to the criteria set out in Annex XIII.’). ↩︎
  472. Bond and Abbady (n 57) 844, s 3.3.3; Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 25; Haar and Siglmüller, ‘Art. 52’ (n 50) para 25. ↩︎
  473. See AI Act, art 52(5), third sentence (‘new reasons that have arisen since the designation decision’); Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 25. ↩︎
  474. See Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 25. ↩︎
  475. See Haar and Siglmüller, ‘Art. 52’ (n 50) para 25. ↩︎
  476. See AI Act, art 52(4), second subparagraph. ↩︎
  477. See Haar and Siglmüller, ‘Art. 52’ (n 50) para 25. ↩︎
  478. See AI Act, annex XIII, points (e)–(g). ↩︎
  479. See AI Act, art 51(3) and art 52(4), second subparagraph. The wording of article 51(3) (‘shall adopt delegated acts’) even implies the Commission’s obligation to update the training compute threshold under article 51(2) under certain conditions (see commentary on Article 51, Section 2.3.2. in this work). ↩︎
  480. For an analysis of the notion of high-impact capabilities see forthcoming commentary on Article 3(64) in this work. ↩︎
  481. See Section 2.4.2.2. ↩︎
  482. See Haar and Siglmüller, ‘Art. 52’ (n 50) para 25. ↩︎
  483. See Haar and Siglmüller, ‘Art. 52’ (n 50) para 25. ↩︎
  484. See AI Act, art 51(3). ↩︎
  485. Apparently in favour of application of the legal standard established by article 52(4)’s first subparagraph in conjunction with article 51(1)(b): Haar and Siglmüller, ‘Art. 52’ (n 50) para 25 (‘In Sechs-Monats-Intervallen können Anbieter von KI-Modell mit allgemeinem Verwendungszweck mit systemischem Risiko beantragen, dass die Kommission erneut überprüft, ob die Voraussetzungen für eine subjektive Einstufung gem. Art. 51 Abs. 1 lit. b, Art. 52 Abs. 4 UAbs. 1 (noch) vorliegen […].’; in translation: ‘At six-month intervals, providers of general-purpose AI models with systemic risk may request that the Commission reassess whether the conditions for a subjective classification pursuant to article 51(1)(b) and article 52(4), first subparagraph, are (still) met […].’); in favour of applying the same legal standard for reassessment under article 52(5) as for challenges to classification under article 52(2) and (3): Schneider and Schneider, ‘Art. 52’ (n 40) para 16. ↩︎
  486. To this effect apparently: Schneider and Schneider, ‘Art. 52’ (n 40) para 16. ↩︎
  487. As evidenced by the interplay between article 51(1)(a) and article 52(2) and (3), a GPAI model can be considered to present systemic risks without necessarily presenting systemic risk as defined under article 3(65) (see commentary on Article 51, Section 2.1.1., para 15 in this work). The classification rules under Section 1. of Chapter V establish the term ‘general-purpose AI model with systemic risk’ (see article 51(1) and article 55(1)) and its variants as a technical term whose meaning derives from the classification process itself (see commentary on Article 51, Section 2.1.4., para 85 in this work). ↩︎
  488. For the effects of classification, see commentary on Article 51, Section 2.1.4. in this work. ↩︎
  489. See Haar and Siglmüller, ‘Art. 52’ (n 50) para 25. ↩︎
  490. See Section 2.4.2.1. ↩︎
  491. See Section 2.4.1. ↩︎
  492. See Section 2.4.1. ↩︎
  493. For the procedure to contest classification under article 52(2) and (3), see Section 2.2. ↩︎
  494. See Section 2.4.1. ↩︎
  495. Bond and Abbady (n 57) 844, s 3.3.3. ↩︎
  496. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 24. ↩︎
  497. See commentary on Article 51, Section 2.1.2.1. in this work. ↩︎
  498. Bond and Abbady (n 57) 844, s 3.3.3. ↩︎
  499. See Section 2.4.2.1. and Section 2.4.2.2. ↩︎
  500. See Haar and Siglmüller, ‘Art. 52’ (n 50) para 32; Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 26; Hofmann-Coombe (n 7) para 54; Bond and Abbady (n 57) 844, s 3.4. ↩︎
  501. See Bond and Abbady (n 57) 844, s 3.4. ↩︎
  502. See Ho-Dac, ‘The EU AI Act and the Challenge of Protecting Fundamental Rights’ (2025) 62 Common Market Law Review 1299, 1311; Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 27. ↩︎
  503. AI Act, art 52(6); see Haar and Siglmüller, ‘Art. 52’ (n 50) para 32; Schneider and Schneider, ‘Art. 52’ (n 40) para 18; Hofmann-Coombe (n 7) para 54; Bond and Abbady (n 57) 844, s 3.4. ↩︎
  504. Schneider and Schneider, ‘Art. 52’ (n 40) para 18. ↩︎
  505. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 28; Haar and Siglmüller, ‘Art. 52’ (n 50) para 31. ↩︎
  506. Haar and Siglmüller, ‘Art. 52’ (n 50) paras 31–32. The AI Act mentions intellectual property rights, confidential business information and trade secrets in several instances (see for example article 53(1)(b) and (7), article 55(3) and article 78(1)(a)). In particular, article 78(1)(a) requires that ‘[t]he Commission, market surveillance authorities and notified bodies and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union or national law, respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular […] the intellectual property rights and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive (EU) 2016/943 of the European Parliament and of the Council […].’ Notably, article 52 contains no provision comparable to article 53(7) and article 55(3), both of which provide that ‘[a]ny information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in Article 78.’ No apparent reason exists for this omission, since the obligation under the second sentence of article 52(1) may encompass confidential information (see Section 2.1.2.). ↩︎
  507. See commentary on Article 53, Section 2.1.2. in this work. One may note, however, that this right is also limited by ‘the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law’ (art 53(1)(b); see commentary on Article 53, Section 2.1.2.4. in this work). ↩︎
  508. Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 28. ↩︎
  509. See commentary on Article 51, Section 2.1.1. in this work. ↩︎
  510. See commentary on Article 51, Section 2.1.1.1. in this work. ↩︎
  511. See Section 2.2.3.4. ↩︎
  512. Compare Bernsteiner and Schmitt, ‘Art. 52’ (n 69) para 25, who argue that the Commission’s obligation to update the list under article 52(6) necessitates the Commission’s power to reassess classification. For further discussion of Commission-initiated reassessment, see Section 2.4. ↩︎
  513. Article 52(5) expressly only provides for provider-initiated reassessment of a model’s classification (see Section 2.4.). Beyond this, article 52(5) provides little interpretive guidance on whether article 52(6) encompasses the Commission’s obligation to conduct substantive reassessment of classification, as – under a literal interpretation – the scope of article 52(5) is limited. By its wording, it only governs provider-initiated reassessment requests for GPAI models that have been designated under article 52(4)’s first subparagraph, while remaining silent on both Commission-initiated reassessment and GPAI models classified under article 51(1)(a) and designated under article 52(1)’s third sentence. An analogous application of article 52(5) does not appear to be viable given its clear wording, which refers expressly to designation under article 52(4) (see Section 2.4.1.). The available information on the regulatory purpose underlying article 52(5) is too limited to support a strong argument for the interpretation of article 52(6) – whether as an argumentum e contrario or an analogy-type argument. Moreover, according excessive weight to article 52(5) in the interpretation of article 52(6) potentially conflicts with the complementary nature of designation under article 52(4)’s first subparagraph (see AI Act, recital 111, eleventh sentence). ↩︎
  514. Article 33(6) DSA and article 4(3) DMA contain comparable duties of the Commission to publish and update lists of VLOPs, VLOSEs and gatekeepers. However, these duties are unlikely to encompass the obligation to substantively reassess the VLOP, VLOSE and gatekeeper status, as such obligations are established separately under article 33(5) DSA and article 4(2) DMA. ↩︎
Gregor Gindlin, 'Article 52: Procedure' (Cambridge Commentary on EU General-Purpose AI Law, 1 Mar 2026) <https://cambridge-commentary.ai/article-52/>