EU AI Act
I. EU AI Act:
A. Full text of the Act
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689
B. Implementation Timeline:
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. However, its obligations are being introduced gradually. To reduce immediate burden, the Act provides transitional rules based on when a model or system is placed on the EU market. These rules can delay or lighten your compliance obligations, but only if you understand and follow the timeline carefully.
Below are the two most important compliance dates and how they affect your responsibilities:
August 2, 2025 — GPAI Compliance Begins
This is the key date for General-Purpose AI (GPAI) models, including the foundation models behind services such as ChatGPT and Claude. What you must do depends entirely on when the model was released.
1. GPAI models released after August 2, 2025
These models must immediately comply with the full obligations of Chapter V of the EU AI Act. This includes:
Comprehensive technical documentation for each model.
A detailed risk assessment conducted by the provider.
Ongoing monitoring and documentation of any modifications by downstream providers.
Clear verification that transparency and risk requirements are met before the model is placed on the market.
2. GPAI models released before August 2, 2025
These models benefit from transitional provisions, meaning you are not required to comply fully right away.
To preserve this benefit:
Ensure no substantial modifications are made after August 2, 2025. If such modifications occur, the model will be treated as new, triggering full and immediate compliance.
By August 2, 2027, providers must still submit technical documentation and training data summaries, even for models placed on the market before August 2, 2025.
During the transitional period, deployers must keep records, monitor usage, and ensure the system is used as originally intended.
August 2, 2026 — High-Risk AI Compliance Begins
This is the cutoff date for high-risk AI systems, such as those used in healthcare, critical infrastructure, or education. Whether your system is new or existing will determine the level of compliance required.
1. High-risk AI systems released after August 2, 2026
These systems must immediately comply with the full obligations of the Act. That means:
A documented risk management process.
Conformity assessments under applicable sectoral laws.
Detailed technical records covering design and system modifications.
Ongoing market monitoring to detect risks and respond to incidents.
2. High-risk AI systems released before August 2, 2026
These systems benefit from transitional provisions. You are not required to bring them into full compliance retroactively.
To retain this benefit:
Do not substantially modify the system after August 2, 2026. If you do, it will be treated as new and must comply immediately.
Continue to monitor safety and performance, and ensure the system is used within its original purpose.
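To make the transitional logic above concrete, here is a minimal illustrative sketch in Python of the date checks just described. The function names and the compressed outcome strings are hypothetical simplifications for orientation only, assuming the reading of the timeline given above; they are not a compliance tool or legal advice.

```python
from datetime import date

# Key application dates discussed above.
GPAI_DATE = date(2025, 8, 2)       # GPAI obligations begin to apply
HIGH_RISK_DATE = date(2026, 8, 2)  # high-risk obligations begin to apply

def gpai_regime(placed_on_market: date, substantially_modified: bool) -> str:
    """Sketch of the transitional logic for GPAI models described above."""
    if placed_on_market >= GPAI_DATE or substantially_modified:
        return "full Chapter V compliance applies immediately"
    return "transitional regime: documentation and training data summaries due by 2027-08-02"

def high_risk_regime(placed_on_market: date, substantially_modified: bool) -> str:
    """Sketch of the transitional logic for high-risk AI systems described above."""
    if placed_on_market >= HIGH_RISK_DATE or substantially_modified:
        return "full high-risk obligations apply immediately"
    return "transitional regime: keep records, monitor use, stay within the original intended purpose"

# Example: a GPAI model placed on the market in March 2025 and not substantially
# modified afterwards stays in the transitional regime.
print(gpai_regime(date(2025, 3, 1), substantially_modified=False))
```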
Below is the complete timeline for the implementation of the EU AI Act’s provisions.
Note:
The timeline does not include the dates when supplementary documents clarifying the EU AI Act’s obligations will be released. Keep in mind that, in addition to obligations that are already loosely defined, several further provisions are likely to follow that could significantly increase or ease the compliance burden.
From February 2, 2025, the following provisions apply:
Chapter I (General Provisions) that establishes the subject matter and scope of application of the EU AI Act and the obligation to ensure AI literacy. The EU AI Act defines “AI literacy” as the skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems, taking into account their respective rights and obligations. Additional information: clarifications by the AI Office (May 7, 2025).
Chapter II (Prohibited AI Practices) that lists specific AI practices that are deemed incompatible with Union values and fundamental rights and are therefore prohibited.
From August 2, 2025, the following provisions apply:
Chapter III, Section 4 (Notifying authorities and notified bodies) that defines the framework for designation, notification, and supervision of conformity assessment bodies, known as “notified bodies”.
Chapter V (General-Purpose AI Models) that introduces specific obligations for providers of general-purpose AI models, including transparency requirements, technical documentation, and measures to ensure the responsible development and deployment of such models.
Chapter VII (Governance) that defines roles of the European Artificial Intelligence Office and the AI Board. The AI Office will oversee the enforcement of rules concerning general-purpose AI models, coordinate the implementation of the Regulation, and provide guidance. The AI Board, composed of representatives from EU Member States, will contribute to consistent application of the Regulation across the Union.
Articles 99 and 100 of Chapter XII (Penalties), which outline the financial penalties for non-compliance with various provisions of the EU AI Act. Fines are scaled depending on the nature and severity of the infringement, with maximum penalties reaching up to EUR 35 million or 7% of total worldwide annual turnover.
Article 78 (Confidentiality) that ensures the protection of confidential information obtained through the implementation of the Regulation, including trade secrets and intellectual property. Authorities, notified bodies, and all involved entities are bound by strict confidentiality obligations, unless disclosure is required for public interest or enforcement purposes.
From August 2, 2026, the following provisions apply:
The remainder of the EU AI Act, except Article 6(1), meaning that all remaining chapters, sections, and provisions not previously in force become fully applicable. This includes, among others:
Chapter III (High-Risk AI Systems), except Section 4, which already applies from August 2, 2025. This chapter sets the obligations for providers, importers, distributors, and deployers of high-risk AI systems, including requirements for risk management, data governance, record-keeping, transparency, human oversight, and post-market monitoring.
Chapter III, Section 3 (Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties) that defines responsibilities for deployers, product manufacturers, authorised representatives, importers, and distributors involved in placing high-risk AI systems on the market or putting them into service.
Chapter IV (Transparency Obligations for Providers and Deployers of Certain AI Systems) that requires providers to ensure that AI systems interacting with humans disclose that fact, and that AI-generated content, including deepfakes, is marked as such and the use of emotion recognition systems is disclosed, protecting individuals from deception.
Article 101 (Fines for providers of general-purpose AI models) that introduces targeted penalties for providers of general-purpose AI models.
Chapter IX (Post-Market Monitoring, Information Sharing and Market Surveillance) that introduces obligations for ongoing monitoring of AI systems, incident reporting, and cooperation with market surveillance authorities to ensure continued compliance after deployment.
Chapter X (Codes of Conduct and Guidelines) that encourages the development and voluntary adoption of codes of conduct by providers of non-high-risk AI systems to foster trustworthy and ethical AI across the market.
Chapter VI (Measures in Support of Innovation) that, among other things, allows Member States to set up AI regulatory sandboxes, controlled environments in which AI systems can be developed, tested, and validated under regulatory supervision before entering the market.
Chapter VIII (EU Database for High-Risk AI Systems) that mandates the creation of a central public database containing information about high-risk AI systems placed on the EU market, aiming to increase transparency and traceability.
Regarding the dates for specific supplementary acts, templates, codes of practice, and other actions:
Note:
The timeline may currently be interpreted as indicative rather than strictly mandatory due to ongoing delays in preparation and publication of documents.
Commission’s delegated powers to amend annexes and technical provisions – August 1, 2024. The Commission is empowered for five years to adopt delegated acts addressing areas such as high-risk classification criteria, GPAI thresholds, technical documentation, transparency requirements, and conformity procedures. No specific deadlines apply to individual acts.
Codes of Practice for General-Purpose AI Models – May 2, 2025. These voluntary codes are intended to guide GPAI model providers and downstream users in meeting obligations under Chapter V. As of now, no finalised code has been published.
Common Rules for General-Purpose AI Models (fallback if a Code is unavailable or inadequate) – August 2, 2025. If no suitable Code of Practice is adopted by this date, the Commission may adopt binding Common Rules via implementing act under Article 56(6). This mechanism is optional and not guaranteed in advance.
Implementing Act on the Post-Market Monitoring Plan template and required elements – February 2, 2026. This act will specify the structure and required content of post-market monitoring obligations applicable to providers and deployers, particularly in high-risk contexts.
Guidelines on Article 6 (high-risk classification) and list of relevant use cases – February 2, 2026. These guidelines, to be adopted by the Commission after consultation with the AI Board, are intended to support consistent classification of AI systems across the Union.
Notification by Member States of national penalty frameworks – August 2, 2026. Under Article 99, Member States must adopt and notify their national rules on penalties and enforcement measures by this date. Although some penalty provisions apply from August 2, 2025, this is the formal notification deadline for the complete framework.
C. Breakdown of the EU AI Act
The EU AI Act will eventually apply in full. Given that the Act leaves many details to be filled in by supplementary measures, and given the significant legal burden imposed by other applicable EU legislation, one of the key steps toward compliance is to understand and monitor how the EU AI Act evolves over time.
As of now, August 2, 2025, is the key date to watch, as new obligations will begin to apply on this date, and further clarifications of the EU AI Act are expected to follow shortly afterward.
Content in Chapters:
Chapter 1. AI models and systems
Chapter 2. Your role in the AI value chain
Chapter 3. Network of supervisory authorities
Chapter 4. Liability under the EU AI Act
Chapter 5. General obligations on AI literacy and transparency
Chapter 6. Obligations of actors in the AI value chain (we invite you to comment)
Sub-chapter 6.1. Operators of GPAI models
Sub-chapter 6.2. Operators of high-risk AI systems
Chapter 7. Special status of SMEs, startups, and innovators
Chapter 8. Influence of the EU AI Act on the world regulation
Overall Conclusion and Recommendations
Chapter 1. AI models and systems
The EU AI Act distinguishes 8 types of AI technologies:
AI Systems:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments [Article 3(1)].
In the view of the legislator, the notion of an “AI system” should be:
clearly defined and closely aligned with the work of international organisations working on AI to ensure legal certainty, facilitate international convergence and wide acceptance, while providing the flexibility to accommodate the rapid technological developments in this field.
based on key characteristics of AI systems that distinguish them from simpler traditional software systems or programming approaches, and should not cover systems that are based on rules defined solely by natural persons to automatically execute operations.
The AI systems can be used [Recital 12 of the Preamble]:
on a stand-alone basis,
as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded).
Depending on the scenario chosen, the scope of the applicable legal acts will differ:
For stand-alone AI systems, the EU AI Act defines the applicable set of legal obligations.
For AI systems used as components, the EU AI Act applies in a complementary manner alongside the relevant product safety laws, which in particular define the measures (conformity assessment) that must be fulfilled in order to place such a product on the European market.
General-Purpose AI (GPAI) Models:
‘General-Purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market [Article 3(63)].
Large generative AI models are a typical example for a general-purpose AI model, given that they allow for flexible generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide range of distinctive tasks [Recital 99 of the Preamble].
GPAI models may be further modified or fine-tuned into new models [Recital 97 of the Preamble].
In the view of the legislator, the notion of a “GPAI model” should be set apart from the notion of an AI system to ensure legal certainty.
AI systems and GPAI models differ in their legal definitions and nature:
Although AI models are typically integrated into AI systems, form part of them, and are essential components of them, they do not constitute AI systems on their own. AI models require the addition of further components, such as, for example, a user interface, to become AI systems [Recital 97 of the Preamble].
However, their use may give rise to overlapping obligations and shared compliance responsibilities across the AI value chain:
When GPAI models are integrated into or form part of an AI system, the rules for AI systems and for GPAI models apply in their respective parts to the relevant AI value chain actors; the rules for GPAI models continue to apply even after integration. In such situations, the EU AI Act encourages cooperation between those actors in order to fulfil the obligations of the Act [Recital 97 of the Preamble].
Clarification by the AI Office (March 14, 2025):
Do AI systems play a role in the Code of Practice?
There are interactions between the two sets of rules, as general-purpose AI models are typically integrated into and form part of AI systems. If a provider of a general-purpose AI model integrates that model into an AI system, that provider must comply with the obligations for providers of general-purpose AI models and, if the AI system falls within the scope of the AI Act, with the requirements for AI systems. If a downstream provider integrates a general-purpose AI model into an AI system, the provider of the general-purpose AI model must cooperate with the downstream provider of the AI system to ensure that the latter can comply with its obligations under the AI Act if the AI system falls within the scope of the AI Act (for example by providing certain information to the downstream provider).
The EU AI Act provides specific rules for GPAI models and GPAI models that pose systemic risks.
General-Purpose AI Models with Systemic Risk:
The EU AI Act introduces the concept of systemic risk to describe certain GPAI models that could cause serious harm across sectors or societies if misused or poorly controlled.
‘Systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain [Article 3(65)].
High-impact capabilities are those that match or exceed the capabilities recorded in the most advanced GPAI models [Article 3(64)]; under Article 51(2), such capabilities are presumed when the cumulative amount of computation used for training the model exceeds 10^25 floating-point operations. Therefore, each new frontier-scale release by OpenAI, Google, xAI, Mistral, or Anthropic is presumed to be a GPAI model with systemic risk. However, AI companies may demonstrate to the AI Office that a new GPAI model “exceptionally does not present systemic risks” and, as such, should not be classified as a GPAI model with systemic risk.
Considering their potential significantly negative effects, the general-purpose AI models with systemic risk should always be subject to the relevant obligations under this Regulation [Recital 97 of the Preamble].
General-Purpose AI Systems (GPAI systems):
When a GPAI model is integrated directly into, or is a component of, another AI system, that system should be considered a General-Purpose AI system if, due to this integration, it has the capability to serve a variety of purposes [Recital 100 of the Preamble, Article 3(66)].
High-Risk AI Systems:
High-risk AI systems are those that have a significant harmful impact on the health, safety and fundamental rights of persons in the EU [Recital 46 of the Preamble], taking into account:
both the severity of the possible harm and its probability of occurrence,
the fact that they are used in a number of specifically pre-defined areas specified in the EU AI Act [Recital 52 of the Preamble].
In the view of the legislator, such a limited scope of the high-risk concept should minimise any potential restriction to international trade [Recital 46 of the Preamble].
To qualify as “high-risk”, an AI system must be at least one of the following two:
Stand-alone AI system | Product or safety component of a product |
---|---|
An AI system that applies in one of the sensitive areas listed in Annex III: Biometrics [Recital 54 of the Preamble]; Critical infrastructure [Recital 55]; Education and vocational training [Recital 56]; Employment, workers’ management and access to self-employment [Recital 57]; Access to and enjoyment of essential private services and essential public services and benefits [Recital 58]; Law enforcement [Recital 59]; Migration, asylum and border control management [Recital 60]; Administration of justice and democratic processes [Recitals 61-62] | An AI system that is subject to third-party conformity assessment under one of the EU laws listed in Annex I. In particular, such products are: machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, in vitro diagnostic medical devices, automotive and aviation [Recital 50 of the Preamble]. |
Exception: There may be specific cases in which AI systems do not lead to a significant risk of harm to the legal interests protected under those areas because they do not materially influence the decision-making or harm those interests substantially. Such a system is NOT high-risk. However, if it profiles individuals (e.g. builds personality or behavioural profiles), it is ALWAYS high-risk. | No exception applies. |
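The two routes to the “high-risk” qualification, together with the exception and the profiling rule shown in the table above, can be read as a short decision procedure. The following Python sketch is illustrative only; the boolean flags are hypothetical simplifications of legal tests that in practice require case-by-case assessment, not a compliance tool.

```python
def is_high_risk(
    annex_iii_area: bool,        # used in a sensitive area listed in Annex III
    profiles_individuals: bool,  # builds personality or behavioural profiles of individuals
    materially_influences: bool, # materially influences decisions or can substantially harm protected interests
    annex_i_conformity: bool,    # product/safety component subject to third-party conformity assessment (Annex I)
) -> bool:
    """Illustrative sketch of the classification logic summarised in the table above."""
    if annex_i_conformity:
        return True  # product route: no exception applies
    if annex_iii_area:
        if profiles_individuals:
            return True  # profiling is always high-risk
        return materially_influences  # otherwise the exception may remove the high-risk label
    return False

# Example: an Annex III recruitment tool that profiles candidates is high-risk.
print(is_high_risk(annex_iii_area=True, profiles_individuals=True,
                   materially_influences=False, annex_i_conformity=False))  # True
```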
GPAI systems that are qualified as high-risk:
GPAI systems may be used as high-risk AI systems by themselves or be components of other high-risk AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the AI value chain, the providers of GPAI systems should, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems and unless stated otherwise by the EU AI Act, closely cooperate with the providers of the relevant high-risk AI systems to enable their compliance with the relevant obligations under the EU AI Act and with the competent authorities [Recital 85 of the Preamble].
AI systems with limited risk
This category is not explicitly named in the EU AI Act, though its definition and name can be inferred from the Preamble.
“AI systems with limited risk” means AI systems referred to in pre-defined areas listed in Annex III that in specific cases do not lead to a significant risk of harm to the legal interests protected under those areas because they do not materially influence the decision-making or do not harm those interests substantially [Recital 53 of the Preamble].
A provider of an AI system with limited risk should:
draw up documentation of the assessment before that system is placed on the market or put into service and should provide that documentation to national competent authorities upon request.
register the AI system in the EU database established under the EU AI Act.
Prohibited AI systems
Aside from the many beneficial uses of AI, it can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict EU values of respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights [Recital 28 of the Preamble].
The list of the prohibited AI systems [Article 5]:
Subliminal and manipulative techniques that distort behavior [Recitals 28-29 of the Preamble]
Use of AI systems that deploy subliminal, manipulative, or deceptive techniques to impair decision-making and cause significant harm.
Exploitation of vulnerabilities due to age, disability, or socio-economic status [Recital 29 of the Preamble]
Use of AI systems targeting specific vulnerable groups in a way that distorts their behavior and results in significant harm.
Social scoring that leads to unjustified or disproportionate treatment [Recital 31 of the Preamble]
Use of AI systems by public authorities to evaluate or classify individuals based on behavior or personality traits, leading to unfair outcomes.
Predictive criminal risk assessments based solely on profiling [Recital 42 of the Preamble]
Use of AI systems to assess the risk of criminal behavior using only profiling or inferred traits, without objective evidence.
Untargeted scraping of facial images for biometric databases [Recital 43 of the Preamble]
Use of AI systems to build or expand facial recognition databases by collecting images from the internet or CCTV without targeting.
Emotion recognition in workplaces and educational institutions [Recital 44 of the Preamble]
Use of AI systems to infer human emotions in employment or education settings, except when used for medical or safety purposes.
Biometric categorisation that infers sensitive personal attributes [Recitals 16, 30, 39 of the Preamble]
Use of AI systems to deduce characteristics such as race, religion, or sexual orientation from biometric data.
Real-time remote biometric identification in public by law enforcement [Recitals 17, 32-38 of the Preamble]
Use of such systems is banned except in narrowly defined, high-risk scenarios (e.g., locating missing persons, preventing imminent threats, identifying suspects of serious crimes), subject to prior authorisation and strict safeguards.
At the same time, the Preamble sets out the limits of these prohibitions:
The prohibitions under the EU AI Act do not apply to practices that are legitimate under other EU legal acts, that is, practices that comply with EU values and fundamental rights [Recital 28 of the Preamble].
The prohibitions do not affect prohibitions that already apply under other EU law, including data protection law, non-discrimination law, consumer protection law, and competition law, which continue to apply alongside the EU AI Act [Recital 45 of the Preamble].
For example, the prohibitions of manipulative and exploitative practices in the EU AI Act should not affect lawful practices in the context of medical treatment such as psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out in accordance with the applicable law and medical standards, for example explicit consent of the individuals or their legal representatives [Recital 29 of the Preamble].
The European Commission must assess once a year whether to update the list of prohibited AI practices in light of technological developments and report any proposed changes accordingly [Recital 174 of the Preamble].
AI systems other than high-risk
The EU AI Act also applies to AI systems that do not present high or unacceptable risk.
The EU AI Act encourages providers and, as appropriate, deployers of AI systems other than high-risk [Recital 165 of the Preamble]:
to create codes of conduct, including related governance mechanisms, intended to foster the voluntary application of some or all of the mandatory requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems and the lower risk involved and taking into account the available technical solutions and industry best practices such as model and data cards.
to apply on a voluntary basis additional requirements related, for example, to: the elements of the Union’s Ethics Guidelines for Trustworthy AI; environmental sustainability; AI literacy measures; inclusive and diverse design and development of AI systems, including attention to vulnerable persons and accessibility for persons with disabilities; stakeholders’ participation, with the involvement, as appropriate, of relevant stakeholders such as business and civil society organisations, academia, research organisations, trade unions and consumer protection organisations, in the design and development of AI systems; and diversity of the development teams, including gender balance.
To ensure that the voluntary codes of conduct are effective, they should be based on clear objectives and key performance indicators to measure the achievement of those objectives. They should also be developed in an inclusive way, as appropriate, with the involvement of relevant stakeholders such as business and civil society organisations, academia, research organisations, trade unions and consumer protection organisations.
Note:
It is important that AI systems related to products that are not high-risk in accordance with the EU AI Act and thus are not required to comply with the requirements set out for high-risk AI systems are nevertheless safe when placed on the market or put into service. To contribute to this objective, General Product Safety Regulation (Regulation (EU) 2023/988) would apply as a safety net [Recital 166 of the Preamble].
AI Type | Key Characteristics | Examples / Notes |
---|---|---|
AI system | Machine-based system that infers how to generate outputs such as predictions, content, recommendations, or decisions | Gemini embedded in Google Search |
General-purpose AI (GPAI) model | Displays significant generality and performs a wide range of distinct tasks. Not an AI system on its own. | Releases by OpenAI, Google, Mistral, Anthropic |
General-purpose AI model with systemic risk | Has high-impact capabilities and significant reach or foreseeable negative effects that can propagate at scale | Latest releases by OpenAI, Google, Mistral, Anthropic, unless proven otherwise |
General-purpose AI system | AI system that, due to integration of a GPAI model, can serve a variety of purposes | Customer service AI based on a GPAI model |
High-risk AI system | Used in pre-defined areas (Annex III) with significant harmful impact on health, safety or fundamental rights | AI interview scoring system used in job recruitment to evaluate candidates’ speech and facial expressions. |
AI system with limited risk | Used in pre-defined areas (Annex III) but does not materially influence decision-making or harm protected interests substantially | AI chatbot on a bank’s website that answers customer service questions |
Prohibited AI system | Uses manipulative, exploitative, discriminatory, or abusive techniques contrary to EU values and fundamental rights | Emotion-recognition AI in classrooms used to monitor student engagement and attention levels, leading to disciplinary action. |
AI system other than high-risk | Not high-risk or prohibited. Mainly, this system is subject to other EU laws, including the Product Safety Regulation [Recital 166 of the Preamble], as well as voluntarily developed and applied standards and codes of conduct. | AI outfit recommender |
Chapter 2. Your role in the AI value chain
The EU AI Act has extraterritorial reach:
It applies to any AI system placed on the EU market, regardless of the provider’s location
It also applies to AI systems used within the EU, even if they are developed, deployed, or hosted elsewhere
Therefore, each company is subject to the EU AI Act when it develops and deploys AI systems that are accessible to or affect individuals within the European Union.
The EU AI Act applies to [Article 2]:
providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
deployers of AI systems that have their place of establishment or are located within the Union;
providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;
importers and distributors of AI systems;
product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
authorised representatives of providers, which are not established in the Union;
affected persons that are located in the Union.
These actors form the “AI value chain”, which refers to the sequence of stages and operators involved in the life cycle of AI systems and models, from their initial development through to their deployment and use in the EU market.
The EU AI Act highlights the importance of legal certainty across this chain and encourages cooperation and information sharing among actors. Robust documentation practices are promoted to accelerate the flow of information within the AI value chain.
The EU AI Act directly defines the following 7 roles in the AI value chain:
Operator (a general term for any actor in the AI value chain)
‘Operator’ means a provider, product manufacturer, deployer, authorised representative, importer or distributor [Article 3(8)].
In certain situations those operators could act in more than one role at the same time and should therefore fulfil cumulatively all relevant obligations associated with those roles. For example, an operator could act as a distributor and an importer at the same time [Recital 83 of the Preamble].
Provider (AI developer or AI trademark owner)
‘Provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge [Article 3(3)].
The role of the Provider bears most of the responsibilities, both for GPAI models and for AI systems, and, at the same time, the threshold for being assigned this role is low.
The key criterion for qualifying as a “Provider” is the entity’s involvement in either:
Development: the entity develops an AI system or a GPAI model, or
Commercialisation: the entity places an AI system or a GPAI model developed by another party on the market, or puts it into service, under its own name or trademark.
This commercialisation-first approach is based on the idea that it should be easy to identify the entity responsible for the system or model and to hold that entity accountable, as reflected in particular in the Product Safety Regulation.
For GPAI models | For AI systems | For AI system as a safety component of a product |
---|---|---|
Other operators may become providers if they: modify a GPAI model in part, with obligations applying to the modified part; or modify a GPAI model entirely, with obligations applying to the new model. | Other operators may become providers if they: affix their own name or trademark to a high-risk AI system already on the market; make a substantial modification to a high-risk AI system; or change the intended purpose of a non-high-risk AI system in a way that turns it into a high-risk one. | If an AI system is a safety component of a product regulated under EU product safety laws, the product manufacturer is considered the provider if the combined product is placed on the market under the manufacturer’s name or trademark. |
Authorised Representative
The framework established by the EU AI Act is clearly aimed at ensuring there is an identifiable entity responsible for compliance. For providers established outside the Union, an authorised representative established within the Union must be appointed.
‘Authorised representative’ means a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by the EU AI Act [Article 3(5)].
Downstream Provider
There are entities that develop GPAI models (providers of GPAI models).
There are also entities that integrate such GPAI models into their own AI systems, either by developing an AI system around the GPAI model or by connecting the system’s interface to the GPAI model’s API.
The provider of the GPAI model may be the same entity as the downstream provider of the AI system (“vertical integration”), or it may be a separate entity.
‘Downstream provider’ means a provider of an AI system, including a GPAI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations [Article 3(68)].
Deployer (user of the AI system)
Unlike a downstream provider, a deployer uses the final product, which is the AI system itself.
‘Deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity [Article 3(4)].
Importer
The importer is the entity that first places on the EU market an AI system coming from a non-EU country.
‘Importer’ means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country [Article 3(6)].
Distributor
The distributor handles moving and making the AI system available within the EU once it’s already been introduced to the market by the provider or an importer.
‘Distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market [Article 3(7)].
Role | Definition & Key Characteristics | Examples |
---|---|---|
Operator | General umbrella term that includes any of the roles in the AI value chain: provider, product manufacturer, deployer, authorised representative, importer, or distributor. A single entity may hold multiple roles and must fulfill all related obligations. | A US based entity that develops its own GPAI model and integrates it in its own GPAI system may act as a provider of the GPAI model and a downstream provider of the high-risk AI system. |
Provider | Entity that develops an AI system or GPAI model or places one on the market or into service under its own name or trademark, whether developed by itself or another party. This includes AI trademark owners. | A US company releasing a GPAI model under its brand in the EU. |
Authorised Representative | EU-based legal or natural person appointed by a non-EU provider to fulfill EU AI Act obligations on their behalf. A written mandate is required. | A French legal firm appointed by a Korean GPAI model provider to manage compliance within the EU. |
Downstream Provider | Entity that creates an AI system by integrating an existing GPAI model (e.g., through API or embedding) into its own product. May be the same as the GPAI provider (vertical integration) or a separate one. | A chatbot company that builds a service around a GPAI model like Claude or Gemini. |
Deployer | Final user of the AI system acting under its authority, excluding use in personal non-professional contexts. Does not alter or integrate the AI system. | A bank using an AI credit scoring system to assess loan applicants. |
Importer | Entity established in the EU that places on the market an AI system or model developed and trademarked by a party outside the EU. | A German tech wholesaler importing an AI medical diagnosis tool developed and branded in China. |
Distributor | Entity within the EU supply chain that makes an already marketed AI system available. Not the original provider or importer. | An electronics retailer offering AI-based smart cameras developed and imported by other companies. |
Chapter 3. Network of supervisory authorities
The EU AI Act establishes a comprehensive network of supervisory authorities and bodies at both the national and EU levels.
EU Level:
European Commission:
The Commission plays a central role in the implementation, supervision, and governance of the EU AI Act.
Implementation and amendment of the EU AI Act
Adopts delegated and implementing acts to adapt the Regulation and issues guidance to support compliance.
Evaluates and updates key elements of the Regulation and may propose amendments based on reviews and technological developments.
Encourages standardisation and may adopt common specifications when harmonised standards are lacking.
Oversees AI regulatory sandboxes.
In relation to GPAI models, the Commission
Holds exclusive supervisory powers.
Designates systemic risk models, amends risk thresholds and criteria.
Updates documentation and transparency requirements.
Imposes fines for non-compliance.
In relation to high-risk AI systems, the Commission
Amends the list of high-risk use cases under Annex III, adjusts classification criteria, and updates annexes related to conformity assessments and documentation.
Provides templates, guidelines, and monitoring mechanisms.
Manages the EU database for high-risk AI systems.
Coordination with other authorities
To fulfil its tasks, the Commission acts through the AI Office. References in this Regulation to the AI Office shall be construed as references to the Commission.
It coordinates with Member State authorities, receives notifications regarding national rules, designated authorities, and market activities.
It supports joint investigations, proposes voluntary contractual terms, appoints experts, and ensures the functioning of the European AI Board.
European Artificial Intelligence Office (AI Office):
The AI Office is a function within the Commission responsible for building Union-level expertise and supporting the implementation and enforcement of the AI Act.
Support for implementation of the EU AI Act
Assists in promoting AI literacy and public understanding of AI-related risks and obligations.
Contributes to the development of contractual best practices.
Develops tools, including templates for deployers’ compliance assessments.
Sandboxes
Assists national competent authorities in the establishment and development of AI regulatory sandboxes and facilitates cooperation and information-sharing among them.
Maintains a public list of AI sandboxes.
Offers technical support, advice, and tools for sandbox operation.
Coordinates access to sandbox exit reports, subject to confidentiality.
Helps establish a single interface for sandbox-related information.
In relation to high-risk AI systems, the AI Office
Supports the Commission in providing guidelines specifying the practical implementation of the rules, along with a comprehensive list of practical examples of high-risk and non-high-risk use cases, after consulting the Board.
Is involved in developing standardisation requests for requirements set out for high-risk AI systems. It is consulted when the Commission considers establishing common specifications for high-risk requirements.
Develops a questionnaire template to assist deployers with fundamental rights impact assessments.
Drafts guidance to support providers in fulfilling serious incident reporting obligations.
Assists in preparing a template for post-market monitoring plans.
Acts as the controller of the EU database for high-risk AI systems and manages its operation.
Coordinates support for joint investigations across Member States concerning high-risk systems presenting serious risks.
Supports market surveillance authorities in accessing information on general-purpose AI models used in high-risk systems.
Cooperates in compliance evaluations for general-purpose AI systems used for high-risk purposes.
Where the same provider develops both the GPAI model and the high-risk system, monitors and supervises compliance with the Regulation using market surveillance powers.
Advises and assists Member States in developing national capacities and contributes to guidance development.
Facilitates the creation of voluntary codes of conduct for non-high-risk systems applying high-risk requirements.
In relation to GPAI models, the AI Office
Enforces Chapter V on behalf of the Commission.
Monitors compliance and investigates possible infringements.
Requests access to documentation and APIs.
Evaluates models and may require mitigation measures or restrictions.
Encourages and monitors codes of practice.
Receives alerts from the scientific panel.
Engages in structured dialogue with providers of GPAI models.
Coordination with other authorities
Provides the secretariat to the European Artificial Intelligence Board, convenes Board meetings and prepares the agenda, and attends meetings without voting rights.
Supports the Board and national competent authorities in enforcement and cooperation efforts.
Scientific Panel of Independent Experts
The Scientific Panel consists of impartial experts with up-to-date technical and scientific knowledge. Members submit public declarations of interest and contribute to the effective enforcement of the EU AI Act.
Support for implementation and oversight
Supports the monitoring and supervision of GPAI models.
In relation to high-risk AI systems, the Scientific Panel
Advises the AI Office and national surveillance authorities on technical and scientific aspects.
May contribute directly to Member State activities related to high-risk AI systems.
Participates in the development of evaluation tools and methodologies.
In relation to GPAI models, the Scientific Panel
Issues alerts on potential systemic risks posed by GPAI models.
Advises on classification and the design of evaluation frameworks.
May recommend that the Commission obtain specific information from providers.
Can be tasked with conducting technical evaluations of GPAI models.
Consulted in the development and review of codes of practice.
Cooperation with other authorities
Engages with national competent authorities on scientific and technical matters.
May provide direct assistance to Member States on enforcement matters.
Advisory Forum:
The Advisory Forum provides technical input and stakeholder perspectives to the Board and Commission. It includes representatives from industry, start-ups, SMEs, academia, and civil society, with permanent members such as the Fundamental Rights Agency, ENISA, and European standards bodies. It delivers opinions, recommendations, and an annual public report.
It may be consulted on standardisation and specification matters, and may participate in Board sub-groups as an observer.
European Data Protection Supervisor (EDPS)
The EDPS is responsible for supervising EU institutions, bodies, and agencies under the framework of the EU AI Act.
Supervisory Functions:
Acts as the market surveillance authority for Union-level entities, with the exception of the Court of Justice of the European Union when acting in its judicial capacity.
May establish AI regulatory sandboxes specifically for Union bodies.
Reports annually to the Commission on enforcement actions, including administrative fines.
In relation to high-risk AI systems, the EDPS
Oversees the use of high-risk AI systems by Union institutions and agencies.
Has the authority to impose administrative fines in case of non-compliance.
Acts as a competent authority in the context of AI regulatory sandboxes involving Union entities.
Cooperation with other authorities
Participates in the European Artificial Intelligence Board as an observer.
Was formally consulted during the legislative drafting process of the EU AI Act.
Member State Level:
National Competent Authorities
This is a general term covering the national authorities responsible for supervising the application and implementation of the Regulation within their respective Member States. Each Member State must designate at least one notifying authority and at least one market surveillance authority as national competent authorities. They must exercise their powers independently, impartially, and without bias. Member States must provide them with adequate resources and ensure an adequate level of cybersecurity. They may provide guidance and advice, particularly to SMEs. They must communicate their identity and tasks to the Commission.
Notifying Authorities
Notifying Authorities are national competent authorities responsible for the assessment, designation, and monitoring of conformity assessment bodies (notified bodies).
Core responsibilities
Evaluate and designate conformity assessment bodies (notified bodies) in accordance with the EU AI Act.
Notify the Commission and other Member States of designated bodies.
Ensure impartiality, confidentiality, and the absence of conflicts of interest.
Monitor ongoing compliance and may suspend or withdraw designations if requirements are not met.
Coordination with other authorities
Engage in structured cooperation through sub-groups of the European Artificial Intelligence Board.
Market Surveillance Authorities
Market Surveillance Authorities are national bodies designated to enforce the EU AI Act, operating within the framework of the Market Surveillance Regulation (Regulation (EU) 2019/1020).
Data Protection Authorities act as market surveillance authorities for specific high-risk AI systems in biometric identification, migration, justice, etc.
Each Member State appoints at least one authority, which must operate independently and be adequately resourced.
Core responsibilities
Investigate potential risks and receive complaints concerning AI systems.
Lead national efforts in market oversight of high-risk AI systems.
Evaluate conformity and oversee real-world testing procedures.
Respond to serious incidents and impose corrective measures or administrative fines.
Monitor proper classification and detect cases of misclassification.
Take enforcement action across all categories of AI systems.
Coordinate with the Commission on cross-border or Union-level matters.
Coordination with other authorities
May collaborate with the AI Office in compliance evaluations and investigations.
Participate in sub-groups of the European Artificial Intelligence Board.
Fundamental Rights Authorities
These authorities are designated by each Member State to ensure that AI systems comply with Union law on fundamental rights.
Core responsibilities
Supervise or enforce the respect of obligations under Union law protecting fundamental rights in relation to high-risk AI systems.
Request documentation and initiate or support testing of high-risk AI systems when rights-related concerns arise.
Cooperate with market surveillance authorities in investigations.
Receive information on incidents involving potential breaches of fundamental rights.
Authority | Role Summary | Key Responsibilities |
---|---|---|
European Commission | Oversees and updates the AI Act | Adopts legal acts, manages the EU database, sets standards, enforces GPAI rules, amends Annexes, leads AI governance | |
AI Office | Executive branch of the Commission | Manages high-risk AI database, coordinates sandboxes, develops templates, enforces GPAI model rules, supports joint investigations | |
Scientific Panel | Provides scientific and technical advice | Issues risk alerts on GPAI, advises on classification and evaluation, supports enforcement and tool development | |
Advisory Forum | Brings stakeholder views into rulemaking | Provides expert opinions, represents industry, civil society, and standard bodies, gives input on specifications | |
EDPS | Supervises AI use by EU bodies | Oversees high-risk AI use by EU institutions, can fine non-compliance, runs AI sandboxes for EU bodies, reports to the Commission | |
Notifying Authorities | Approve conformity bodies | Designate, audit, and monitor conformity assessment bodies, report to Commission, ensure impartiality | |
Market Surveillance Authorities | Enforce AI Act and safety rules | Lead on high-risk system oversight, conduct tests, respond to incidents, fine violators, coordinate cross-border | |
Fundamental Rights Authorities | Safeguard human rights | Check compliance of high-risk AI with rights laws, request documentation, support or demand testing, escalate breaches |
Chapter 4. Liability under the EU AI Act
The EU AI Act establishes a liability framework that is enforced at two levels:
at the EU level, by the European Commission and the European Data Protection Supervisor (EDPS)
at the national level, by national authorities
At the EU level, the European Commission, through the AI Office, holds exclusive authority to supervise and fine providers of GPAI models for violations. Similarly, the EDPS has enforcement powers over EU institutions and bodies using AI systems.
At the national level, each Member State must designate competent authorities to enforce the rules and impose penalties. These authorities may include courts, market surveillance bodies (including data protection authorities), or other designated institutions, depending on the legal system of the respective Member State.
Enforcement measures must be effective, proportionate to the violation, and serve as a deterrent.
As with other EU legislation, both EU and national authorities are required to take into account several factors when imposing fines, including:
the seriousness and duration of the infringement,
whether it was committed intentionally or through negligence,
the size of the company, and
whether the company cooperated or attempted to mitigate the harm.
The maximum fines under the EU AI Act vary depending on the type of violation.
Violation Type | Private Sector | EU Institutions |
---|---|---|
Use of AI in prohibited practices like social scoring or real-time biometric identification in public spaces (Article 99(3)) | Up to €35M or 7% of global turnover, whichever is higher | Up to €1.5M |
For high-risk AI systems: breach of any obligations by providers, authorised representatives, importers, distributors, or deployers (Article 99(4)) | Up to €15M or 3% of global turnover, whichever is higher | Up to €750,000 |
For all AI systems: violation of transparency obligations, including in the handling of deepfakes and the non-marking of AI-generated content (Article 99(4)) | Up to €15M or 3% of global turnover, whichever is higher | |
Breach of any obligations by notified bodies (Article 99(4)) | Up to €15M or 3% of global turnover, whichever is higher | |
Providing false or misleading information to authorities (Article 99(5)) | Up to €7.5M or 1% of global turnover, whichever is higher | |
General-purpose AI model violations (Article 101(1)) | Up to €15M or 3% of global turnover, whichever is higher | |
Startups and SMEs (Article 99(6)) | Up to the referred percentage or amount, whichever is lower (not higher) | |
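The scaling rule behind these caps is simple arithmetic: for most operators the applicable maximum is the higher of the fixed amount and the turnover percentage, while for SMEs and startups it is the lower of the two (Article 99(6)). The following Python sketch is purely illustrative, using a hypothetical turnover figure; it is not legal advice.

```python
def max_fine_cap(fixed_cap_eur: float, turnover_share: float,
                 worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the applicable fine cap: the higher of the two amounts,
    or the lower of the two for SMEs and startups (Article 99(6))."""
    percentage_amount = worldwide_turnover_eur * turnover_share
    if is_sme:
        return min(fixed_cap_eur, percentage_amount)
    return max(fixed_cap_eur, percentage_amount)

# Example: prohibited-practice cap (EUR 35M or 7%) for a hypothetical company
# with EUR 2 billion worldwide annual turnover.
print(max_fine_cap(35_000_000, 0.07, 2_000_000_000))               # 140000000.0
print(max_fine_cap(35_000_000, 0.07, 2_000_000_000, is_sme=True))  # 35000000
```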
Consequences for AI-related harm
Importantly, the EU AI Act does not create a new liability system for AI-related harm. Instead, it complements existing liability regimes. Individuals or entities harmed by an AI system can seek compensation under existing laws such as the New Product Liability Directive (Directive (EU) 2024/2853 repealing Council Directive 85/374/EEC), consumer protection regulations, data protection laws like the GDPR, and employment law. These rights remain fully intact and continue to apply alongside the EU AI Act [Recital 9 of the Preamble].
AI sandboxes and testing in real-world conditions
Additionally, developers participating in testing programs or regulatory sandboxes are not exempt from liability. If their systems cause harm during testing, they remain fully responsible.
Special rules for participants of the AI regulatory sandboxes [Article 57(12)]:
Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox.
Exception applies:
There will be no administrative fines under the EU AI Act if:
the prospective providers observe the specific plan and the terms and conditions for their participation in the AI sandbox and follow in good faith the guidance given by the national competent authority.
There will be no penalty under the other applicable law where:
other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance.
Note:
The exemption from administrative fines within the sandbox does not mean participants are immune from liability for harm caused.
The exception applies only to the “prospective providers” distinguished from “providers” who have already placed a system on the market.
Special rules for providers or prospective providers testing high-risk AI systems in real world conditions [Article 60(9)]:
The provider or prospective provider shall be liable under applicable Union and national liability law for any damage caused in the course of their testing in real world conditions.
No exceptions apply.
Authority | Penalties |
---|---|
European Commission (via AI Office) | Exclusive power to impose fines on providers of GPAI models for Chapter V violations: • Up to €15M or 3% of global turnover (whichever is higher) |
European Data Protection Supervisor (EDPS) | Imposes fines on Union institutions, bodies, offices, agencies: • Up to €1.5M for prohibited AI practices • Up to €750,000 for other violations |
National Courts or Designated National Bodies | Impose fines for infringements by operators (e.g. providers, deployers, importers) as per national rules. Penalties may reach: • Up to €35M or 7% (for prohibited AI practices) • Up to €15M or 3% (for other obligations) • Up to €7.5M or 1% (for false/incomplete information) |
Market Surveillance Authorities | Enforce compliance measures and refer breaches for penalty imposition. Their work leads to the possibility of a fine, but they may not be the body that issues the fine order itself, depending on the Member State’s legal system. |
Data Protection Authorities (for specific high-risk systems) | Act as market surveillance authorities for AI systems in biometric identification, migration, justice, etc. May trigger enforcement and penalties, particularly when acting under GDPR-like mandates. |
Chapter 5. General obligations on AI literacy and transparency:
The EU AI Act sets general obligations concerning AI literacy and transparency that apply across the entire AI lifecycle. These obligations reflect the EU’s commitment to a trustworthy approach to AI and are intended to foster informed decision-making, mitigate risks, and safeguard fundamental rights, health, and safety.
Some obligations are already applicable, while others will come into effect with the main body of the Act on 2 August 2026. However, the direction is clear: all actors involved in the development, deployment, and use of AI systems must contribute to the safe, transparent, and responsible advancement of AI.
AI Literacy obligation
This obligation aims to equip individuals with the knowledge and skills needed to understand how AI systems function, assess their appropriate use, and critically evaluate their potential impact.
As of 2 February 2025, providers and deployers of all AI systems, irrespective of their risk level or generality of purpose, must take appropriate measures to ensure a sufficient level of AI literacy among their staff and any individuals involved in the operation or use of those systems on their behalf. This includes consideration of each person’s technical knowledge, experience, education, training, the context of use, and the individuals or groups impacted by the AI system (Article 4, Recital 20 of the Preamble).
To further promote the literacy obligation for all AI systems, while not imposing overly strict obligations, the EU AI Act encourages the development of voluntary codes of conduct to promote AI literacy, in particular that of persons dealing with the development, operation and use of AI (Article 95(2)(c)).
AI Transparency obligations
The transparency obligations under the EU AI Act aim to dispel the perception of AI as a “black box” by fostering clear understanding of how AI systems and models function, how they should be used, and what potential impact they may have.
The concept of transparency is rooted in earlier EU-level guidance. In its Preamble, the EU AI Act recalls the 2019 Ethics Guidelines for Trustworthy AI, which introduced seven key ethical principles for trustworthy and human-centric AI. Among them, transparency was defined to include appropriate traceability, explainability, and awareness of AI involvement, ensuring that users are informed when interacting with AI, and that deployers understand the system’s capabilities and limitations. According to Recital 27, this foundational understanding of transparency should guide the development of voluntary standards and best practices under the EU AI Act.
Given the critical role of transparency in supporting safety and fundamental rights, the EU AI Act sets binding transparency obligations, tailored to different types of AI systems and models. These obligations vary depending on the AI system’s classification and use, and include, in particular:
General-purpose AI models are subject to specific transparency obligations, reflecting their foundational role in the AI value chain (Articles 53 and 55).
Providers of such models must ensure that downstream providers can understand and appropriately integrate the models into their own AI systems. To support this, providers are required to draw up, maintain, and make available up-to-date technical documentation and other relevant information describing the capabilities and limitations of the model.
This documentation must include the elements listed in the applicable annexes of the Regulation and must be made available to the AI Office and competent national authorities upon request. These measures are intended to ensure safe integration, regulatory compliance, and informed use of general-purpose AI models throughout the ecosystem (Recital 101, Articles 53 and 55).
High-risk AI systems are subject to detailed transparency obligations due to their potential impact on health, safety, and fundamental rights. These obligations apply to both providers and deployers of such systems and are intended to ensure informed and responsible use in real-world settings (Article 13).
Provider obligations: To address concerns related to the opacity and complexity of certain AI systems and to help deployers fulfil their obligations, transparency should be required for high-risk AI systems before they are placed on the market or put into service.
The provider must design and document the system in a way that enables deployers to understand how it functions, assess its capabilities and limitations, and operate it appropriately in its intended context. Where appropriate, illustrative examples must be included, for instance on intended and precluded uses. Instructions must be adapted to the knowledge level of the intended deployer and provided in a language easily understood in the Member State of deployment (Recital 72).
Deployer obligations: Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use.
Deployers of high-risk AI systems play a critical role in informing natural persons and should, where applicable, inform the natural persons that they are subject to the use of the high-risk AI system and have the right to explanation of individual decision-making (Recitals 91-93, Article 86).
Transparency obligations for certain AI systems (Article 50)
Article 50 of the EU AI Act establishes specific transparency obligations for both providers and deployers of certain AI systems that directly interact with, or affect, natural persons. These obligations aim to ensure that individuals are appropriately informed when exposed to AI-generated content or subjected to AI processing, particularly where the risk of confusion, manipulation, or deception may arise.
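Article 50 does not prescribe how such disclosures must look in practice. The sketch below is purely illustrative: the function names, field names, and label values are our own assumptions, not terms from the Act or any particular library. It shows one way a deployer of a chatbot might attach both a human-readable notice and machine-readable metadata to AI-generated replies.

```python
# Minimal sketch: labelling AI-generated chatbot output with a human-readable
# notice and machine-readable metadata. Names and values are illustrative
# assumptions, not terminology from the EU AI Act or any particular library.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class LabelledReply:
    text: str                                               # reply shown to the user, notice prepended
    metadata: Dict[str, str] = field(default_factory=dict)  # machine-readable labels


def label_ai_output(raw_reply: str) -> LabelledReply:
    """Prepend an AI-interaction notice and attach machine-readable labels."""
    notice = "You are interacting with an AI system."
    return LabelledReply(
        text=f"{notice}\n\n{raw_reply}",
        metadata={"generated_by": "ai_system", "disclosure": "ai_interaction_notice"},
    )


if __name__ == "__main__":
    reply = label_ai_output("Here is the summary you requested.")
    print(reply.text)
    print(reply.metadata)
```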
Chapter 6. Obligations of actors in the AI value chain:
Operators of GPAI models:
Providers of General-Purpose AI models (Article 53)
Note:
The obligations for the providers of GPAI models should apply once the GPAI models are placed on the market. When the provider of a GPAI model integrates its own model into its own AI system that is made available on the market or put into service, that model should be considered to be placed on the market and, therefore, the obligations in the EU AI Act for models should continue to apply in addition to those for AI systems [Recital 97 of the Preamble].
The EU AI Act sets forth specific obligations for Providers of General-Purpose AI (GPAI) models, particularly in Articles 53 to 55.
These obligations apply progressively, beginning August 2, 2025, and are aimed at ensuring transparency, safety, and accountability in the development and deployment of GPAI technologies.
GPAI obligations apply to any entity placing a GPAI model on the EU market or putting it into service, irrespective of whether the model is embedded in a standalone product or used as part of a broader AI system.
Examples of AI systems that may be covered: ChatGPT, Claude, Gemini, Llama, Mistral AI, Grok, Qwen. If you use one of them, the new legal regime applicable to these AI systems will apply to you, in particular regarding the transparency of the AI systems in use and the importance of this knowledge for fulfilling your data protection, consumer protection, and product safety obligations.
Key obligations:
Draw up and keep up-to-date technical documentation of the GPAI model, with a minimum set of elements outlined in Annex XI [Article 53(1)(a)]:
A general description of the general-purpose AI model.
A detailed description of the elements of the model and relevant information of the process for the development.
Create and maintain information and documentation for downstream providers integrating the GPAI model, with a minimum set of elements outlined in Annex XII [Article 53(1)(b)]:
A general description of the GPAI model.
A description of the elements of the model and of the process for its development.
Note:
The two obligations above ↑ do not apply to providers of GPAI models released under a free and open-source licence (with publicly available parameters) unless those models are considered to present systemic risks [Article 53(2)].
Examples of AI systems covered: GPT-2.
Establish a policy to comply with copyright law [Article 53(1)(c)].
Draw up and make publicly available a sufficiently detailed summary of the content used for training the model [Article 53(1)(d)].
Cooperate with the European Commission, the AI Office, and national competent authorities in fulfilling their responsibilities under this Regulation, including by providing documentation upon request [Article 53(3)].
Note:
Providers of GPAI models have a particular role and responsibility along the AI value chain, as the models they provide may form the basis for a range of downstream systems, often provided by downstream providers that necessitate a good understanding of the models and their capabilities, both to enable the integration of such models into their products, and to fulfil their obligations under this or other regulations. Therefore, proportionate transparency measures should be laid down, including the drawing up and keeping up to date of documentation, and the provision of information on the general-purpose AI model for its usage by the downstream providers [Recital 101 of the Preamble].
Featured concepts:
Technical documentation for Providers of GPAI models [Annex XI, Section 1]
The GPAI’s technical documentation is a crucial aspect of the obligations for providers of GPAI models, allowing regulatory bodies to understand the model’s characteristics, development process, and potential for integration into various AI systems.
The content of this documentation is set out in Section 1 of Annex XI of the EU AI Act. It shall contain at least the following information, as appropriate to the size and risk profile of the model:
1. A general description of the general-purpose AI model including:
the tasks that the model is intended to perform and the type and nature of AI systems in which it can be integrated;
the acceptable use policies applicable;
the date of release and methods of distribution;
the architecture and number of parameters;
the modality (e.g. text, image) and format of inputs and outputs;
the licence.
2. A detailed description of the elements of the model referred to in point 1, and relevant information of the process for the development, including the following elements:
the technical means (e.g. instructions of use, infrastructure, tools) required for the general-purpose AI model to be integrated in AI systems;
the design specifications of the model and training process, including training methodologies and techniques, the key design choices including the rationale and assumptions made; what the model is designed to optimise for and the relevance of the different parameters, as applicable;
information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies (e.g. cleaning, filtering, etc.), the number of data points, their scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the unsuitability of data sources and methods to detect identifiable biases, where applicable;
the computational resources used to train the model (e.g. number of floating point operations), training time, and other relevant details related to the training;
known or estimated energy consumption of the model.
This documentation will serve three purposes:
Help regulators understand the model and evaluate its potential risks.
Support downstream providers (like developers and businesses) in safely integrating the model into their products.
Show good faith that your company is building powerful models responsibly with clarity, traceability, and accountability.
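Neither the Act nor Annex XI prescribes a file format for this documentation. Purely as a sketch of how a provider might keep the Annex XI, Section 1 elements in a single structured, reviewable record, consider the following; the field names mirror the list above but are our own labels, and all values are placeholders.

```python
# Illustrative sketch: a structured record mirroring Annex XI, Section 1.
# Field names are our own labels for the listed elements; values are placeholders.
from dataclasses import dataclass, asdict
from typing import List
import json


@dataclass
class GPAITechnicalDocumentation:
    # 1. General description of the model
    intended_tasks: List[str]
    integration_targets: str          # type/nature of AI systems it can be integrated into
    acceptable_use_policies: str
    release_date: str
    distribution_methods: List[str]
    architecture: str
    parameter_count: int
    input_output_modalities: str
    licence: str
    # 2. Elements of the model and development process
    integration_requirements: str     # instructions of use, infrastructure, tools
    design_specifications: str        # training methodologies, key design choices, rationale
    training_data_description: str    # type, provenance, curation, bias-detection measures
    compute_used: str                 # e.g. total FLOPs and training time
    energy_consumption: str           # known or estimated


if __name__ == "__main__":
    doc = GPAITechnicalDocumentation(
        intended_tasks=["text generation"],
        integration_targets="chat assistants, summarisation tools",
        acceptable_use_policies="see provider AUP v1.2",
        release_date="2025-09-01",
        distribution_methods=["API"],
        architecture="decoder-only transformer",
        parameter_count=7_000_000_000,
        input_output_modalities="text in / text out",
        licence="proprietary",
        integration_requirements="REST API; rate limits documented",
        design_specifications="next-token objective; RLHF fine-tuning",
        training_data_description="web corpus, curated and filtered; bias audits documented",
        compute_used="~1e23 FLOPs over 30 days",
        energy_consumption="estimated 1.2 GWh",
    )
    print(json.dumps(asdict(doc), indent=2))
```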
Documentation for downstream providers [Annex XII]
The primary purpose of this documentation is to enable Downstream Providers (those who build upon GPAI models) to have a good understanding of the capabilities and limitations of the general-purpose AI model. This understanding is crucial for them to effectively integrate the model into their AI systems and to comply with their own obligations under the EU AI Act.
According to Annex XII, the information provided in this documentation must contain at least the following elements:
1. A general description of the general-purpose AI model including:
the tasks that the model is intended to perform and the type and nature of AI systems into which it can be integrated;
the acceptable use policies applicable;
the date of release and methods of distribution;
how the model interacts, or can be used to interact, with hardware or software that is not part of the model itself, where applicable;
the versions of relevant software related to the use of the general-purpose AI model, where applicable;
the architecture and number of parameters;
the modality (e.g. text, image) and format of inputs and outputs;
the licence for the model.
2. A description of the elements of the model and of the process for its development, including:
the technical means (e.g. instructions for use, infrastructure, tools) required for the general-purpose AI model to be integrated into AI systems;
the modality (e.g. text, image, etc.) and format of the inputs and outputs and their maximum size (e.g. context window length, etc.);
information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies.
If you are a downstream provider yourself, receiving this information is essential. Without it, you risk using the model incorrectly or failing to meet your own obligations under the AI Act.
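A downstream provider may also want a systematic way to check that the documentation it received actually covers the Annex XII elements. Below is a minimal sketch of such a completeness check; the key names are illustrative labels for the elements listed above, not terms defined by the Act.

```python
# Minimal sketch: check that documentation received from a GPAI provider
# covers the Annex XII-style elements (illustrative key names, not prescribed by the Act).

REQUIRED_KEYS = {
    "intended_tasks",
    "acceptable_use_policies",
    "release_date_and_distribution",
    "external_interactions",        # hardware/software the model can interact with
    "relevant_software_versions",
    "architecture_and_parameters",
    "io_modalities_and_limits",     # formats and maximum sizes, e.g. context window
    "licence",
    "integration_requirements",
    "training_data_overview",
}


def missing_annex_xii_elements(received_doc: dict) -> set:
    """Return the elements absent or empty in the received documentation."""
    return {key for key in REQUIRED_KEYS if not received_doc.get(key)}


if __name__ == "__main__":
    doc = {"intended_tasks": "text generation", "licence": "Apache-2.0"}
    print("Missing elements:", sorted(missing_annex_xii_elements(doc)))
```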
Copyright obligations of the Providers of GPAI models:
Providers of general-purpose AI (GPAI) models are required to comply with EU law on copyright and related rights, including by:
publishing a summary of the content used to train their models, regardless of whether the training data includes copyrighted material.
According to Recitals 107-108 of the Preamble of the EU AI Act, the summary must include:
A general overview of the main data collections or datasets used in training, such as large public or private databases or archives.
A narrative explanation of other data sources used to train the model, even if they are not grouped into formal datasets.
Sufficient detail to provide a comprehensive understanding of the training data sources, while respecting trade secrets and confidential business information.
implementing a documented policy on conformity with copyright and related rights.
While the EU AI Act does not impose detailed content requirements for this policy, it must explicitly state that the Provider respects any reservation of rights expressed under Article 4(3) of Directive (EU) 2019/790 (Digital Single Market Directive) and outline the mechanisms used to comply with such reservations in practice, including through the use of state-of-the-art technologies.
Directive (EU) 2019/790 introduces two exceptions to copyright for text and data mining (TDM):
Scientific Research Exception (Article 3): Allows research organizations and cultural heritage institutions to perform TDM on content they have lawful access to, strictly for scientific research purposes. This exception is mandatory and cannot be overridden by contractual terms.
General TDM Exception (Article 4): Permits any individual or entity with lawful access to content to conduct TDM for purposes beyond scientific research, including commercial applications. However, this exception applies only if the rightholder has not expressly reserved their rights.
“Text and data mining” (TDM) means any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations (Article 2(2) of the Directive (EU) 2019/790).
Rightholders can opt out of the Article 4 exception by reserving their rights in an appropriate manner. For online content, this typically involves using machine-readable means, such as metadata or website terms of use. In other contexts, contractual terms or unilateral declarations may suffice. It’s important to note that this opt-out mechanism affects only the Article 4 TDM exception and does not impact the mandatory exception for scientific research under Article 3.
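There is no single mandated format for such machine-readable reservations, and conventions are still settling in practice. The sketch below is therefore only illustrative: it assumes a robots.txt disallow rule and a hypothetical "tdm-reservation" response header as two possible signals a crawler operator might check before mining a page; neither is a requirement of the Directive or the AI Act.

```python
# Illustrative sketch only: checking for a machine-readable TDM rights
# reservation before mining a page. The crawler name and the "tdm-reservation"
# header are assumptions about possible conventions, not legal requirements.
import urllib.robotparser
import urllib.request
from urllib.parse import urlsplit


def tdm_reservation_signalled(url: str, crawler_name: str = "ExampleTDMBot") -> bool:
    """Return True if the site appears to reserve rights against TDM for this crawler."""
    parts = urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    # Signal 1: robots.txt disallows our (hypothetical) crawler for this URL.
    robots = urllib.robotparser.RobotFileParser(robots_url)
    robots.read()
    if not robots.can_fetch(crawler_name, url):
        return True

    # Signal 2: a hypothetical "tdm-reservation: 1" response header.
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        if response.headers.get("tdm-reservation") == "1":
            return True

    return False


if __name__ == "__main__":
    print(tdm_reservation_signalled("https://example.com/article"))
```

If either signal is present, the content would be excluded from the training corpus and the decision recorded as part of the documented copyright compliance policy.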
Note:
Any company that offers a GPAI model in the EU must follow EU copyright rules, even if the training of the model happened outside the EU. This rule exists to make sure all companies follow the same standards. No provider should have an unfair advantage by using looser copyright rules in another country while still doing business in the EU. [Recital 106 of the Preamble]
These copyright-related obligations are intended to:
increase transparency regarding the data sources used to train GPAI models.
empower copyright holders and other parties with legitimate interests to monitor how their rights are respected and to take legal action under existing Union copyright law if necessary.
The AI Office is empowered to develop a standard template that providers must use when preparing the required summary of training data. This template ensures that summaries are clear, consistent across providers, and focused on narrative explanations of the data sources used.
The AI Office is also responsible for monitoring whether providers comply with their obligations under the EU AI Act, specifically:
The publication of the summary of training data.
The implementation of a documented copyright compliance policy.
However, the AI Office is not mandated to perform a work-by-work copyright analysis of the training data. Its monitoring function does not extend to the enforcement of copyright law, which remains governed by Union and national legal frameworks outside the scope of the AI Office’s authority.
Providers of General-Purpose AI models with systemic risk (Article 55):
The EU AI Act creates a special category for GPAI models that have systemic risks. These are powerful models that, because of their reach or capabilities, could cause serious problems across society if they are misused or fail.
If your company builds a GPAI model that falls into this category, you must meet stricter requirements compared to regular GPAI providers.
How is systemic risk defined?
A GPAI model is considered to have systemic risks if:
It has high-impact capabilities that could affect critical sectors like health, safety, democracy, public security, or infrastructure.
Its wide availability or misuse could cause large-scale disruptions or harm.
Large frontier models are usually presumed to have systemic risks unless proven otherwise. This includes the latest versions of ChatGPT, Claude, Gemini, and other models that appear in the news almost every week.
Key obligations:
In addition to the obligations under Article 53 and Article 54:
Prepare additional technical documentation [Annex XI, Section 2].
In addition to the general description, development process details, and architecture/parameter count required for all GPAI models under Section 1, providers of GPAI models with systemic risk must also provide a detailed description of the system architecture focusing on the interaction and integration of software components.
Perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing to identify and mitigate systemic risks [Article 55(1)(a)].
Assess and mitigate possible systemic risks at the EU level, including their sources, that may stem from the development, placing on the market, or use of these models [Article 55(1)(b)].
Keep track of, document, and report without undue delay to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures [Article 55(1)(c)].
Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure [Article 55(1)(d)].
Why these duties exist:
The EU AI Act recognizes that GPAI models with systemic risks are not like ordinary products. They are powerful tools that, if misused, could cause harm at the scale of entire industries, communities, or even societies.
These extra duties are designed to:
Protect public safety
Safeguard democratic institutions
Prevent large-scale technological accidents
Limit the risk of models being used to create biological, chemical, or cyber threats
Featured concepts:
‘Systemic risk’ [Recitals 110-115 of the Preamble, Article 3(65)]
The EU AI Act introduces the concept of systemic risk to describe certain GPAI models that could cause serious harm across sectors or societies if misused or poorly controlled.
A GPAI model has systemic risk when:
Its high-impact capabilities are so advanced that its use or misuse could cause large-scale disruption.
Its influence is broad enough that problems with the model could impact important areas like public safety, democratic processes, or critical infrastructure.
The AI Office and national authorities will use technical evaluations, testing protocols, and other tools to assess whether a model presents systemic risk.
High-impact capabilities are those that match or exceed the performance of the most advanced GPAI models. Therefore, each new release by OpenAI, Google, xAI, Mistral, or Anthropic is presumed to be a GPAI release involving systemic risk. However, AI companies may demonstrate to the AI Office that a new GPAI model “exceptionally does not present systemic risks” and, as such, should not be classified as a GPAI model with systemic risks.
Bearing “systemic risk” matters because providers are subject to additional obligations. These obligations are aimed at identifying and mitigating risks and ensuring an adequate level of cybersecurity protection, regardless of whether the model is provided as a standalone system or embedded within an AI system or product.
GPAI models could pose systemic risks which include, but are not limited to:
any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety;
any actual or reasonably foreseeable negative effects on democratic processes, public and economic security;
the dissemination of illegal, false, or discriminatory content.
Systemic risks should be understood:
to increase with model capabilities and model reach,
to arise along the entire lifecycle of the model, and
to be influenced by conditions of misuse, model reliability, model fairness and model security, the level of autonomy of the model, its access to tools [i.e., AI agents], novel or combined modalities, release and distribution strategies, the potential to remove guardrails, and other factors.
In particular, international approaches have so far identified the need to pay attention to risks from:
potential intentional misuse or unintended issues of control relating to alignment with human intent;
chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use;
offensive cyber capabilities, such as the ways in which vulnerability discovery, exploitation, or operational use can be enabled;
the effects of interaction and tool use, including for example the capacity to control physical systems and interfere with critical infrastructure;
risks from models making copies of themselves (‘self-replicating’) or training other models;
the ways in which models can give rise to harmful bias and discrimination with risks to individuals, communities or societies;
the facilitation of disinformation or harming privacy with threats to democratic values and human rights;
risk that a particular event could lead to a chain reaction with considerable negative effects that could affect up to an entire city, an entire domain of activity, or an entire community.
Failing to manage systemic risk appropriately can lead to enforcement action, penalties, and restrictions on market access within the EU.
Model evaluation [Recital 114 of the Preamble]
Model evaluation under the EU AI Act requires providers to systematically test and review their GPAI models to:
Identify weaknesses, vulnerabilities, or unintended behaviors
Verify that technical safeguards are working as expected
Ensure that potential risks, especially systemic risks, are being managed before and after the model enters the market
Evaluation must be documented and updated throughout the model’s lifecycle, not just at the launch phase.
To achieve the objectives of identifying and mitigating risks, providers shall carry out the necessary model evaluations, which must include the following:
Carrying out evaluations in accordance with standardised protocols and tools that reflect the state of the art.
Conducting adversarial testing of the model, both internal and/or external (e.g. red teaming), including thorough documentation and a detailed description of the measures implemented for such testing.
Performing model evaluations prior to the model’s first placement on the market, and ensuring ongoing assessment and mitigation of systemic risks throughout the model’s lifecycle.
Establishing risk-management policies, including accountability and governance processes, implementing post-market monitoring, taking appropriate measures throughout the model’s lifecycle, and cooperating with relevant actors across the AI value chain.
Evaluation is not a one-time event, but a structured and ongoing process to verify that the model remains safe, reliable, and aligned with its intended purpose.
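Recital 114 does not fix how this ongoing evaluation work should be documented. Below is a minimal sketch of keeping evaluation and adversarial-testing activity traceable across the lifecycle; the structure, stage labels, and field names are our own assumptions rather than anything prescribed by the Act.

```python
# Illustrative sketch: an append-only log of model evaluations and adversarial
# tests over the model lifecycle. Field names and stage labels are assumptions.
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class EvaluationEntry:
    performed_on: date
    stage: str                    # e.g. "pre-market", "post-market"
    protocol: str                 # evaluation protocol or benchmark used
    adversarial: bool             # whether this was red-teaming / adversarial testing
    findings: str
    mitigations: List[str] = field(default_factory=list)


@dataclass
class EvaluationLog:
    model_name: str
    entries: List[EvaluationEntry] = field(default_factory=list)

    def record(self, entry: EvaluationEntry) -> None:
        self.entries.append(entry)

    def open_findings(self) -> List[EvaluationEntry]:
        """Entries whose findings have no documented mitigations yet."""
        return [e for e in self.entries if e.findings and not e.mitigations]


if __name__ == "__main__":
    log = EvaluationLog(model_name="example-gpai-7b")
    log.record(EvaluationEntry(
        performed_on=date(2025, 7, 1),
        stage="pre-market",
        protocol="internal red-teaming round 1",
        adversarial=True,
        findings="jailbreak prompts bypass safety filter in edge cases",
    ))
    print(len(log.open_findings()), "finding(s) awaiting mitigation")
```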
Serious incidents and corrective measures [Recital 115 of the Preamble, Article 3(49)]
If, despite efforts to identify and prevent risks, the model causes a serious incident, the provider should without undue delay keep track of the incident and report any relevant information and possible corrective measures to the European Commission and national competent authorities.
“Serious incident” means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
(a) the death of a person, or serious harm to a person’s health;
(b) a serious and irreversible disruption of the management or operation of critical infrastructure;
(c) the infringement of obligations under EU law intended to protect fundamental rights;
(d) serious harm to property or the environment.
“Corrective measures” are not explicitly defined. However, the requirement to report “possible corrective measures” in response to serious incidents implies that providers are obligated to implement actions to mitigate the impact or prevent the recurrence of such incidents.
These actions may include “immediately taking the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate” [Article 20(1)].
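The Act defines what counts as a serious incident but not how a provider must record one internally. The sketch below maps the Article 3(49) effects (a) to (d) onto an illustrative incident record and a simple reportability check; the category names, record fields, and the simplifying assumption that anything outside (a) to (d) is not reportable under this provision are ours, not the Act's.

```python
# Minimal sketch: mapping the Article 3(49) definition of "serious incident"
# onto an internal record and a reportability check. Names and the record
# format are illustrative assumptions, not prescribed by the EU AI Act.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum, auto
from typing import List


class IncidentEffect(Enum):
    DEATH_OR_SERIOUS_HEALTH_HARM = auto()          # Article 3(49)(a)
    CRITICAL_INFRASTRUCTURE_DISRUPTION = auto()    # Article 3(49)(b)
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = auto()       # Article 3(49)(c)
    SERIOUS_PROPERTY_OR_ENVIRONMENT_HARM = auto()  # Article 3(49)(d)
    OTHER = auto()


@dataclass
class IncidentRecord:
    description: str
    effect: IncidentEffect
    corrective_measures: List[str]
    occurred_at: datetime


def requires_reporting(incident: IncidentRecord) -> bool:
    """Serious incidents (effects (a)-(d)) must be tracked and reported without undue delay."""
    return incident.effect is not IncidentEffect.OTHER


if __name__ == "__main__":
    record = IncidentRecord(
        description="Model output disrupted a hypothetical grid-control system",
        effect=IncidentEffect.CRITICAL_INFRASTRUCTURE_DISRUPTION,
        corrective_measures=["disable affected endpoint", "notify AI Office"],
        occurred_at=datetime.now(timezone.utc),
    )
    print("Report to AI Office:", requires_reporting(record))
```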
Adequate level of cybersecurity protection [Recital 115 of the Preamble]
The EU AI Act requires providers of GPAI models, particularly those with systemic risks, to ensure that their models and the infrastructure supporting them are protected against cybersecurity threats throughout the entire lifecycle of the model.
This requirement is part of the broader goal of the Act – to protect individuals, public safety, and critical services from the misuse, theft, or corruption of AI technologies.
Cybersecurity protection related to systemic risks associated with malicious use or attacks should duly consider:
accidental model leakage,
unauthorised releases,
circumvention of safety measures, and
defence against cyberattacks, unauthorised access or model theft
[this is especially relevant in the context of the OpenAI and DeepSeek controversy].
That protection could be facilitated by securing model weights, algorithms, servers, and data sets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the relevant circumstances and the risks involved.
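The Act names the goals (protecting weights, algorithms, servers, and data sets) rather than specific controls. One small, illustrative building block, sketched below with placeholder paths and digests, is verifying the integrity of weight files against a known-good SHA-256 digest before loading them, so tampering or unauthorised substitution is detected early.

```python
# Illustrative sketch: verify a model-weight file against a known-good SHA-256
# digest before use. The file path and expected digest are placeholders; in
# practice the digest would come from a signed manifest kept under access control.
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_weights(path: Path, expected_sha256: str) -> None:
    """Raise if the weight file does not match the expected digest."""
    actual = sha256_of_file(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {expected_sha256}, got {actual}."
        )


if __name__ == "__main__":
    weights = Path("model-weights.bin")                      # placeholder path
    if weights.exists():
        verify_weights(weights, expected_sha256="0" * 64)    # placeholder digest
```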
Authorised Representatives of Providers of General-Purpose AI models (Article 54)
If a company that builds a GPAI model is based outside the EU, it must appoint an Authorised Representative located inside the EU to act on its behalf.
The Authorised Representative plays a critical role. It is the company’s official contact point for EU regulators, and it takes on specific legal responsibilities to make sure the GPAI provider follows the EU AI Act.
Key obligation:
Verify that the technical documentation is drawn up and all relevant provider obligations are met [Article 54(3)(a)].
Keep a copy of the technical documentation and the provider’s contact details available for the AI Office and national authorities for 10 years [Article 54(3)(b)].
Cooperate with the AI Office and competent authorities, including when the model is integrated into AI systems, by providing all necessary information and documentation upon request to demonstrate compliance [Article 54(3)(c, d)].
Serve as the point of contact for regulatory bodies on all compliance matters, in addition to or instead of the Provider [Article 54(4)].
Terminate the mandate (a formal, written authorisation given by the Provider) if the Provider acts contrary to the EU AI Act, while informing the AI Office [Article 54(5)].
Note:
All above-mentioned obligations do not apply to Providers of AI models released under a free and open-source licence (with publicly available parameters) unless the models present systemic risks.
Providers of modified or fine-tuned General-Purpose AI Models (Recitals 97, 109 of the Preamble)
“In the case of a modification or fine-tuning of a model, the obligations for providers of general-purpose AI models should be limited to that modification or fine-tuning, for example by complementing the already existing technical documentation with information on the modifications, including new training data sources, as a means to comply with the value chain obligations provided in this Regulation”,
Recital 109 of the Preamble
Any entity that fine-tunes or modifies a GPAI model becomes a Provider, but only in relation to the modified part of the GPAI model.
This new Provider is responsible solely for its changes, not for the original model.
Scope of Obligations:
The scope of obligations for the new Provider is limited in subject (i.e. only the modified part), but not limited in volume.
The full extent of Provider obligations under the EU AI Act applies to the modified part of the GPAI model.
Additionally, Providers who modify GPAI models must document all changes and, where applicable, follow any relevant rules set by the initial Provider.
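Recital 109 suggests complementing the existing technical documentation with information on the modification, including new training data sources, but does not fix a format. A minimal sketch of such a supplement follows; the file name, structure, and field names are our own assumptions.

```python
# Minimal sketch: documenting a fine-tuning step as a supplement to the original
# model's technical documentation (file name, structure, and fields assumed).
import json
from dataclasses import dataclass, asdict, field
from typing import List


@dataclass
class ModificationRecord:
    base_model: str
    modification_type: str                 # e.g. "fine-tuning", "quantisation"
    description: str
    new_training_data_sources: List[str] = field(default_factory=list)
    date: str = ""


def append_modification(doc_path: str, record: ModificationRecord) -> None:
    """Add a modification entry to an existing JSON documentation file."""
    try:
        with open(doc_path, "r", encoding="utf-8") as handle:
            documentation = json.load(handle)
    except FileNotFoundError:
        documentation = {"modifications": []}
    documentation.setdefault("modifications", []).append(asdict(record))
    with open(doc_path, "w", encoding="utf-8") as handle:
        json.dump(documentation, handle, indent=2)


if __name__ == "__main__":
    append_modification(
        "model-documentation.json",
        ModificationRecord(
            base_model="example-gpai-7b",
            modification_type="fine-tuning",
            description="Domain adaptation for legal drafting",
            new_training_data_sources=["licensed contract corpus (hypothetical)"],
            date="2025-10-01",
        ),
    )
```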
Three Practical Scenarios
1. The original Provider modifies the GPAI model
If the entity that modifies the model is also the original Provider, it must update the technical documentation of the original GPAI model to reflect the changes made to the GPAI model.
2. A different Entity partially modifies the GPAI model:
Such entity becomes the Provider for the modified part. This situation triggers the need for cooperation between the new and initial Provider across the AI value chain.
Both the modified model and the new Provider depend on the technical documentation and technical characteristics of the original GPAI model. However, unlike in cases involving substantial modifications to high-risk AI systems (Article 25(2)), there is no explicit mechanism requiring cooperation in the case of GPAI model modification.
The obligation to cooperate here is based on the spirit and objectives of the EU AI Act, as reflected in the Preamble.
As a result, a gray area emerges. The initial Provider cannot prevent others from modifying or fine-tuning its GPAI model but is not formally required to cooperate. The new Provider, meanwhile, must fulfil its own obligations, while relying on a base model that may carry residual reputational or technical consequences for the initial Provider.
This undermines legal clarity, particularly as the original Provider is still expected to demonstrate the integrity of its model, which may be publicly questioned due to downstream modifications.
3. A different Entity fully modifies the GPAI model
If an Entity completely modifies the GPAI model, creating a new model, it becomes the sole Provider for that new GPAI model.
In this case, the initial Provider shall not be part of the AI value chain for the new GPAI model and, accordingly, shall have no obligation to cooperate. However, it is unclear from the EU AI Act whether this scenario is explicitly addressed.
Questions Yet Unanswered:
The Code of Practice for GPAI model providers will not resolve these uncertainties, and guidance from the AI Office will be necessary (March 14, 2025). It is unclear when these uncertainties will be clarified and resolved, and whether such clarity will arrive before the obligations for GPAI models become mandatory.
Clarifications by the AI Office (March 14, 2025):
If someone fine-tunes or otherwise modifies a model, do they have to comply with the obligations for providers of general-purpose AI models?
General-purpose AI models may be further modified or fine-tuned into new models (recital 97 AI Act). Accordingly, downstream entities that fine-tune or otherwise modify an existing general-purpose AI model may become providers of new models. The specific circumstances in which a downstream entity becomes a provider of a new model is a difficult question with potentially large economic implications, since many organisations and individuals fine-tune or otherwise modify general-purpose AI models developed by another entity.
In the case of a modification or fine-tuning of an existing general-purpose AI model, the obligations for providers of general-purpose AI models in Article 53 AI Act should be limited to the modification or fine-tuning, for example, by complementing the already existing technical documentation with information on the modifications (Recital 109). The obligations for providers of general-purpose AI models with systemic risk in Article 55 AI Act should only apply in clearly specified cases. The AI Office intends to provide further clarifications on this question.
Regardless of whether a downstream entity that incorporates a general-purpose AI model into an AI system is deemed to be a provider of the general-purpose AI model, that entity must comply with the relevant AI Act requirements and obligations for AI systems.
What is not part of the Code of Practice?
The Code of Practice should not address inter alia the following issues: defining key concepts and definitions from the AI Act (such as “general-purpose AI model”), updating the criteria or thresholds for classifying a general-purpose AI model as a general-purpose AI model with systemic risk (Article 51), outlining how the AI Office will enforce the obligations for providers of general-purpose AI models (Chapter IX Section 5), and questions concerning fines, sanctions, and liability.
These issues may instead be addressed through other means (decisions, delegated acts, implementing acts, further communications from the AI Office, etc.).
Nevertheless, the Code of Practice may include commitments by providers of general-purpose AI models that sign the Code to document and report additional information, as well as to involve the AI Office and third parties throughout the entire model lifecycle, in so far as this is considered necessary for providers to effectively comply with their obligations under the AI Act.
The basis for this Role is laid down in the Preamble:
Recital 97: the GPAI “models may be further modified or fine-tuned into new models”.
Recital 109: “Compliance with the obligations applicable to the providers of general-purpose AI models should be commensurate and proportionate to the type of model provider. … compliance with those obligations should take due account of the size of the provider and allow simplified ways of compliance for SMEs, including start-ups, that should not represent an excessive cost and not discourage the use of such models.
In the case of a modification or fine-tuning of a model, the obligations for providers of general-purpose AI models should be limited to that modification or fine-tuning, for example by complementing the already existing technical documentation with information on the modifications, including new training data sources, as a means to comply with the value chain obligations provided in this Regulation”.
That’s all for now.
II. Opinion:
Our view on the current version
The EU AI Act entered into force on August 2, 2024. It applies to nearly all AI models and systems made available in the EU, either independently or as part of a product.
As we approach the first year since the EU AI Act became effective, we have drawn the following conclusions regarding its impact on business regulatory risks and compliance burdens: the burden is high, even though the Act’s provisions still lack legal precision.
1. The EU AI Act functions as a complementary framework to existing legislation.
The EU AI Act repeatedly emphasizes its role as complementary to other EU legislation relevant to AI systems and models. Despite its title, the “AI Act” should not currently be seen as the primary regulatory framework for AI developers to define a full set of obligations. Its current value lies in how it references and integrates with pre-existing legal frameworks that already govern AI-related risks, including:
Data Protection (GDPR, ePrivacy Directive, etc.): The AI Act clarifies that data subjects retain their rights under data protection law. This means that AI system training, development, deployment, and use must respect data subject rights.
Intermediary Services (DSA - Regulation (EU) 2022/2065, formerly Directive 2000/31/EC): The AI Act complements the obligations for providers of intermediary services embedding AI. It explicitly acknowledges that AI systems can be intermediary services and be subject to the DSA’s risk management framework, compliance with which is presumed to fulfil corresponding AI Act obligations for certain risks.
Product Safety and Market Surveillance (Regulation (EU) 2023/988, Regulation (EU) 2019/1020, formerly Directive 2001/95/EC, etc.): For high-risk AI systems that are components of products covered by existing product safety laws, compliance with the AI Act is assessed as part of the conformity assessment required under those existing laws. Notified bodies under those other acts are entitled to control conformity with the AI Act requirements.
Throughout the EU AI Act, there are many examples of such interconnectivity, either through direct references or by indirectly overlapping with the obligations of other laws.
2. The EU AI Act lacks standalone clarity and relies heavily on future implementing instruments.
The Act repeatedly mandates or empowers the European Commission and the AI Office to develop or facilitate a wide range of supplementary documentation, including guidelines on practical implementation, delegated acts for amendments, implementing acts for common specifications and real-world testing procedures, templates for technical documentation and transparency summaries, codes of practice, and common rules.
The documents related to General-Purpose AI Models must be developed by August 2, 2025, when the obligations for these models and the penalties for non-compliance are set to take effect.
As of now, the Code of Practice for General-Purpose AI Models remains in draft form and fails to clarify several critical points, particularly regarding the treatment of modified or fine-tuned GPAI models.
It is still unclear how current uncertainties will affect those covered by the new rules beginning August 2, 2025. At the same time, it is clear that penalties in Europe are strict and designed to deter breaches of the EU AI Act (Chapter XII).
3. To comply with the EU AI Act, businesses must first comply with other EU laws.
The complex and staggered integration of the AI Act into the EU legal landscape means that an effective compliance strategy begins with:
Identifying the technical and functional characteristics of the AI system,
Determining which legal frameworks apply beyond the AI Act itself.
Only after clarifying these factors can businesses assess how the AI Act overlays these obligations, particularly regarding:
Transparency, which is often addressed through DPIAs and LIAs under the GDPR,
Conformity assessments under product safety regulations,
Data quality, which under the AI Act blends considerations of copyright, privacy, and consumer protection.
Our view on compliance strategy
With key provisions of the EU AI Act beginning to apply from 2025 and 2026, businesses should treat compliance planning as an immediate operational priority, even in the absence of full regulatory clarity.
Begin with mapping your AI systems and legal context. Identify which of your systems are general-purpose or high-risk, and determine which existing EU frameworks already apply, such as the GDPR, DSA, and product safety rules. These form the foundation of your AI compliance strategy.
Use existing compliance processes as the base. Many AI Act obligations build on data protection (DPIAs, transparency), product law (conformity assessment), and platform regulation (risk management). Review and extend your current practices rather than building separate AI-specific processes in isolation.
Prepare for GPAI and high-risk system deadlines. For GPAI models, ensure that technical documentation and risk assessments are being developed now to meet the August 2, 2025 obligations. For high-risk AI systems, readiness should align with the August 2, 2026 timeline unless a system is modified earlier.
Preserve transitional exemptions through control. Avoid substantial modifications to GPAI or high-risk systems placed on the market before their compliance dates if you intend to rely on transitional provisions. Document all updates to prove stability.
Stay alert to supplementary documents and national enforcement rules. Several key documents, including the Code of Practice, post-market monitoring templates, and national penalty frameworks, are expected by early to mid-2026. Track publication dates and be ready to incorporate new requirements as they become available.
Taking action now, based on what is already known and binding under existing law, is the most effective way to reduce exposure to penalties and ensure future flexibility under the EU AI Act.
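To support the mapping step described above, some teams keep a simple machine-readable inventory of their AI systems, roles, and applicable frameworks. The sketch below is one illustrative way to do this; the field names, risk labels, and filter logic are our own assumptions, not categories defined by the Act.

```python
# Minimal sketch of an AI-system inventory supporting the mapping step above.
# Field names and risk labels are illustrative, not taken from the EU AI Act.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AISystemEntry:
    name: str
    role: str                      # e.g. "provider", "deployer", "importer"
    risk_class: str                # e.g. "GPAI", "high-risk", "limited-risk"
    placed_on_market: str          # ISO date placed on the EU market, if applicable
    other_frameworks: List[str] = field(default_factory=list)   # GDPR, DSA, product safety...


def gpai_placed_after(inventory: List[AISystemEntry], deadline: str) -> List[str]:
    """Very rough filter: GPAI entries placed on the market after the given ISO date."""
    return [s.name for s in inventory
            if s.risk_class == "GPAI" and s.placed_on_market > deadline]


if __name__ == "__main__":
    inventory = [
        AISystemEntry("support-chatbot", "deployer", "GPAI", "2025-09-15", ["GDPR"]),
        AISystemEntry("cv-screening-tool", "provider", "high-risk", "2026-01-10", ["GDPR"]),
    ]
    print(gpai_placed_after(inventory, "2025-08-02"))
```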
III. Context:
Latest news:
1. Launch of Voluntary Code of Practice for General-Purpose AI 🎯
On 10 July 2025, the European Commission published the final version of its Code of Practice for general-purpose AI (like ChatGPT), offering guidelines on transparency, safety, copyright, and security. It is voluntary, designed to assist businesses and reduce compliance burdens as the August deadline approaches.
This move defies industry calls for a delay and sets out clear steps such as model documentation forms on data sources, energy usage, and licensing.
2. EU Stands Firm, Refuses to Delay Roll-out
Despite petitions from major tech firms (Apple, Google, Meta) and over 45 European companies asking for a “pause” or “stop the clock” on the Act’s provisions, the Commission confirmed the schedule remains:
2 August 2025: Obligations for general-purpose AI models (GPAI) come into force.
2026–27: Staggered implementation of high-risk AI rules.
“No stop the clock. No grace period. No pause,” confirmed Commission spokesperson Thomas Regnier.
3. Widespread Industry Pushback
CEOs from Siemens, SAP, Airbus, ASML, Mercedes-Benz, Philips, Mistral and others have expressed concerns that the overlapping AI Act and Data Act are stifling innovation. Some are calling for a two-year freeze on GPAI compliance.
Siemens CEO Roland Busch labeled the Data Act "toxic" and stressed that data access, not computing power, is the real bottleneck.
📌 OpenAI signals readiness to sign the GPAI Code
Following the EU’s publication on July 10, 2025, of the General-Purpose AI Code of Practice, OpenAI acted swiftly. On its Global Affairs blog (July 11, 2025), the company announced it will sign the voluntary Code to demonstrate compliance with EU obligations ahead of the August 2, 2025 deadline.
OpenAI noted this move aligns with its existing safety and transparency frameworks—including the Preparedness Framework, System Cards, safety hub, and red-teaming efforts—which it touts as industry-leading.
AI-relevant Regulation in the EU (July 2025):
“what is already known and binding under existing law”
Although the EU AI Act will eventually provide a dedicated legal structure for AI governance, it will not replace existing legal obligations. Instead, it will operate alongside them, complementing and expanding the current regulatory landscape.
The EU already has a broad legal framework governing artificial intelligence, using tools and regulations from data protection, cybersecurity, and consumer protection.
Below is a list of acts explicitly referenced in the EU AI Act as regulating and affecting actors in the AI value chain, alongside the EU AI Act itself:
Note:
The 30 acts listed below do not represent an exhaustive list of laws that may apply to the AI you use or develop. The full scope of applicable legislation will depend on the characteristics of the AI system, its area of application, and the countries in which it is available.
Most importantly, this list covers EU-level acts. Member States may adopt and enforce additional national legislation that is not introduced at the EU level and is based on their own national priorities and legal frameworks.
1. Data Protection & Privacy
GDPR (Regulation (EU) 2016/679)
Sets rules for processing personal data of individuals in the EU. Applies globally to anyone handling such data. Defines legal bases, user rights, data breach duties, and documentation requirements.
ePrivacy Directive (Directive 2002/58/EC)
Regulates privacy in electronic communications. Covers consent for cookies, traffic and location data, and confidentiality of communications.
Regulation (EU) 2018/1725
Governs how EU institutions process personal data. Aligns with GDPR, covering lawfulness, transparency, rights of individuals, and oversight by the EDPS.
Law Enforcement Directive (LED) (Directive (EU) 2016/680)
Applies to personal data processed by law enforcement. Sets rules for lawful use, safeguards for sensitive data, and rights of data subjects.
2. Digital & Cybersecurity
Digital Services Act (Regulation (EU) 2022/2065)
Sets legal duties for online services like marketplaces, social media, and search engines on content moderation, transparency, algorithm use, risk management, and user redress.
Cybersecurity Act (Regulation (EU) 2019/881)
Creates EU-wide cybersecurity certification for Information and Communication Technology (“ICT”) products, services, and processes.
Data Governance Act (Regulation (EU) 2022/868)
Creates rules for data sharing and reuse of protected public-sector data.
Web Accessibility Directive (Directive (EU) 2016/2102)
Requires public websites and apps to meet EU accessibility standards, with monitoring and feedback systems.
3. Consumer Protection
Unfair Commercial Practices Directive (Directive 2005/29/EC)
Prohibits misleading, aggressive, or unfair commercial practices in business-to-consumer marketing.
General Product Safety Regulation (Regulation (EU) 2023/988)
Sets baseline product safety rules. Requires traceability, incident reporting, and recalls when needed.
Representative Actions Directive (Directive (EU) 2020/1828)
Allows consumer groups or public bodies to take businesses to court on behalf of consumers harmed by illegal practices.
New Product Liability Directive (Directive (EU) 2024/2853 repealing Council Directive 85/374/EEC)
Holds producers strictly liable for harm from defective products, even without fault.
4. Market Surveillance & Product Compliance
Regulation (EC) No 765/2008
Sets rules for checking that products sold in the EU meet safety and quality standards. Forms the basis for CE marking and enforcement by national authorities.
Market Surveillance Regulation (Regulation (EU) 2019/1020)
Enhances product surveillance, enforcement against online sellers, and importer responsibilities.
Regulation (EU) 2024/900
Sets rules for political advertising transparency. Requires clear labelling of political ads, disclosure of who paid for them, and limits on how personal data is used for ad targeting.
Directive 2014/31/EU
Applies to non-automatic weighing instruments (e.g. retail scales). Sets accuracy and marking rules.
Directive 2014/32/EU
Covers measuring instruments (e.g. gas meters, taximeters). Harmonizes technical and conformity rules.
5. Financial & Insurance Regulation
Solvency II (Directive 2009/138/EC)
Sets capital, governance, and risk management standards for insurers and reinsurers.
Capital Requirements Directive (Directive 2013/36/EU)
Regulates access to banking. Sets prudential rules, capital buffers, and oversight of institutions.
Insurance Distribution Directive (Directive (EU) 2016/97)
Imposes conduct, transparency, and product oversight rules for insurance sellers and distributors.
6. Employment & Labour
Information and Consultation Directive (Directive 2002/14/EC)
Requires employers to inform and consult workers on major business and employment decisions, including ensuring that workers and their representatives are informed about the planned deployment of high-risk AI systems in the workplace, where the conditions for such obligations are not already fulfilled under other legal instruments.
7. Justice, Asylum & Whistleblower Protection
Directive 2013/32/EU
Defines procedures for granting, rejecting, or withdrawing asylum. Sets standards for fairness and legal aid.
Directive (EU) 2019/1937
Protects people reporting breaches of EU law, including the EU AI Act.
8. Critical Infrastructure & Essential Entities
Directive (EU) 2022/2557
Requires critical entities (e.g. transport, energy, banking) to assess risks and ensure resilience.