RIA (AI Act): a delicate implementation

Late adoption, gradual implementation, pressure from international competition, regulatory divergence: a look at the challenges of implementing the European regulation on AI.

After a late regulatory adoption, the European Regulation on Artificial Intelligence (RIA, the French acronym for the AI Act) imposes a demanding implementation calendar on AI operators, in a context of strong international competition where visions of AI regulation differ.

Beyond the challenges that applying the RIA represents for technology suppliers and compliance professionals, this new regulatory framework also puts the European Commission under pressure, as it must publish, within tight deadlines, operational guidelines to support the actors concerned.

A staggered implementation, with already applicable provisions

The European Union officially adopted its AI regulation with the entry into force, on August 1, 2024, of the RIA (EU Regulation 2024/1689), twenty days after its publication in the Official Journal of the EU.

As a reminder, the objective of the RIA is to harmonize and regulate the use of AI systems likely to present risks to health, safety, the environment and, above all, fundamental rights.

Directly applicable in the Member States, the RIA sets out provisions with implementation periods ranging from 6 to 36 months, some of which are already applicable in 2025:

  • Since February 2, with the ban on AI systems presenting risks deemed “unacceptable” and the obligation to ensure an adequate level of AI literacy. Entities that develop and use AI systems must ensure that their employees, and anyone else using these systems on their behalf, are properly trained on the subject.
  • From August 2, with the entry into application of the rules for general-purpose AI (GPAI) models and the designation of the competent authorities responsible for enforcing the RIA in each Member State. The countdown has thus begun for suppliers of so-called “general-purpose” models: every GPAI supplier must demonstrate that its model is transparent, traceable and safe, and AI operators must now ensure rigorous control and full traceability of the models they develop.

The implementation of the RIA will gradually continue with other key stages, namely:

  • Entry into application of RIA obligations for so-called “high-risk” AI systems in August 2026;
  • The complete application of all other RIA provisions in August 2027.

Slipping deadlines and necessary adjustments

Despite these ambitions, certain deadlines have slipped, slowing the planned rollout of the RIA. Although some rules are already applicable, other essential elements, such as the drafting of codes of practice or the designation of the competent authorities, are experiencing delays.

Indeed, the GPAI code of practice, which was meant to clarify the obligations of generative model suppliers, was rejected in March 2025, notably over criticism that it was too lax on copyright. Initially due for adoption in May 2025, it was postponed to the summer of 2025.

Recently criticized over these tight deadlines, the European Commission is striving to stay on schedule and to publish practical guidelines to help companies comply with the regulation.

To guarantee the effectiveness of these obligations, the AI Office, the operational arm of the European Commission, is tasked with supervising GPAI compliance at Union level. This new European watchdog has considerable powers to ensure the rules are respected.

The AI Office has thus adopted a proactive approach, based on risk assessments, while retaining the option of a repressive approach in the event of manifest non-compliance.

European regulation of AI, one approach among others

While the European Union relies on strict, risk-tiered regulation with the RIA, other countries take a different view.

On May 28, the Japanese Parliament adopted its first law to promote the safe development and use of AI, in response to concerns over risks such as the spread of false information and disinformation. The law allows the government to request companies' cooperation in investigations into AI abuses, but imposes no sanctions. Japan thus takes the opposite tack, adopting its first AI law in a pro-innovation “soft law” spirit: no fines, no audits, but an incentive framework aimed at promoting research, ethics and international cooperation. Driven by a strategic body at government level, this flexible approach makes Japan attractive ground for companies.

China, on the other hand, plays the total-control card, with rules already in place on algorithms, deepfakes and foundation models, in a logic of reinforced state surveillance.

For its part, the United States advances through a patchwork of executive orders and sectoral laws, leaving regulation fragmented and largely market-driven.

Three visions of the future of AI: Europe regulates, China controls, Japan coordinates, while the United States… sticks to its habits.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
