On August 2, 2025, the EU's rules for general-purpose AI models take effect: transparency, documentation, and oversight of their systemic impact.
August 2, 2025 will mark a founding date in the history of technological regulation in Europe. On that day, the first obligations of the European regulation on artificial intelligence, better known as the AI Act, officially enter into force for a very specific category of systems: general-purpose AI models (GPAI). With this milestone, the European Union affirms its intention to regulate AI at the root, where the most powerful technologies are designed, trained and disseminated.
A structural response to systemic AI transformations
Since the mid-2020s, the rapid progress of generative artificial intelligence, embodied in particular by models such as GPT-4, Mistral AI, Claude or Gemini, has blurred the boundary between support technology and autonomous system. These models are no longer simple tools programmed for a specific task: they have become cognitive infrastructures capable of adapting to a plurality of contexts, industries and audiences.
Faced with this rise in power, the AI Act has introduced an unprecedented legal category: that of general-purpose AI models, defined by the text as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market." (AI Act, Article 3, point 63)
August 2, 2025 corresponds to the entry into force of the first binding obligations that apply specifically to the providers of these models.
Transparency and accountability requirements
At the heart of the new provisions is the imperative of algorithmic transparency. From now on, any provider of a GPAI model placed on the market in the Union will have to publish detailed technical documentation, including:
- a description of the capabilities of the model (functionalities, known limits, test results),
- a summary of the training data, in particular an indication of content protected by copyright,
- a set of responsible-use recommendations for integrators and end users.
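As an illustration only (the AI Act does not prescribe a data format, and every field name below is hypothetical), the documentation bundle described above could be sketched as a simple structured record:

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of a GPAI documentation record; field names are
# illustrative, not taken from the AI Act or any official template.
@dataclass
class GPAIModelDocumentation:
    model_name: str
    capabilities: list[str]                # known functionalities
    known_limitations: list[str]           # documented limits and failure modes
    evaluation_results: dict[str, float]   # test/benchmark outcomes
    training_data_summary: str             # high-level description of training data
    copyrighted_content_notice: str        # indication of copyright-covered content
    responsible_use_guidelines: list[str]  # recommendations for integrators

doc = GPAIModelDocumentation(
    model_name="example-gpai-model",
    capabilities=["text generation", "summarization"],
    known_limitations=["may produce factual errors"],
    evaluation_results={"benchmark_accuracy": 0.87},
    training_data_summary="Publicly available web text and licensed corpora.",
    copyrighted_content_notice="Training data includes copyright-protected works.",
    responsible_use_guidelines=["Disclose AI-generated content to end users."],
)

# asdict() converts the record to a plain dict, e.g. for JSON publication.
print(asdict(doc)["model_name"])
```

A structured record like this makes it easy to check that each of the three documentation categories above is present before publication.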
This requirement is part of a logic of bottom-up accountability, aimed at better equipping regulators, user companies, but also researchers and civil society, to understand the functioning of these increasingly autonomous technologies.
The distinction of systemic risk models
The text does not stop at these baseline requirements. It also introduces a narrower, but more sensitive category: that of GPAI models deemed to pose "systemic risk". These models are identified as such when their dissemination, use or influence on economic or social processes reaches a critical threshold, capable of altering fundamental balances, whether in security, access to information, or cognitive sovereignty.
Providers of such models will have to undergo reinforced audits, publish risk reports, and demonstrate continuous efforts to limit abusive uses, covering cybersecurity, technical robustness, and protection against malicious exploitation.
Towards shared European governance
The entry into force of the first obligations is accompanied by an institutional reorganization. Each Member State of the European Union must, by that date, have designated a national AI supervisory authority. These bodies will have the mission of ensuring the application of the regulation on their territory, collaborating with the European AI Office, and exchanging data as well as alerts in a coordinated framework on a continental scale.
This decentralized but integrated architecture is inspired by precedents set in digital regulation (such as the GDPR), but introduces an unprecedented proactive and technical dimension in the supervision of these systems.
The European Union as a regulatory laboratory
August 2, 2025 will not be a simple administrative milestone. It represents the first operational realization of an ambitious project: to make Europe a space of trust for AI, capable of combining technological innovation, protection of fundamental rights, and democratic sovereignty.
By placing ex-ante obligations on the models themselves and not only on their uses, the AI Act inaugurates a systemic approach to digital regulation. It remains to be seen, in the months and years to come, whether this promise can translate into effective, comprehensible and measurable practices, at a time when AI is evolving at an exponential rate.




