The massive adoption of AI raises crucial issues of security and ethics. To oversee this technological revolution, the EU has adopted the AI Act, an unprecedented legal framework.
It aims to introduce harmonized rules across Member States that guarantee high standards of quality and safety for AI systems, while promoting ethical and responsible adoption.
Regulating to minimize the risks of AI use
According to a recent survey, 35% of French companies already use AI solutions, and among these, 72% report an improvement in their overall productivity (1). To govern these growing uses, the AI Act takes a risk-based approach, sorting AI systems into four main categories. Unacceptable risk covers prohibited systems, such as social scoring or certain forms of mass surveillance. High risk triggers strict obligations, including a risk management system, governance of training data, and human oversight; in France, sectors such as health, education and public security are directly affected by these requirements. Limited risk carries transparency obligations, while minimal risk falls outside the scope of the regulation.
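The four-tier classification above can be sketched as a simple lookup table. This is an illustrative sketch only, not legal advice: the tier names follow the text, but the `obligations_for` helper and the exact obligation strings are assumptions made for the example.

```python
# Illustrative sketch of the AI Act's four risk tiers and example
# obligations attached to each. Not a legal reference.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited (e.g. social scoring)
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # outside the regulation's scope

# Example obligations per tier, paraphrased from the text above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["placing on the market is prohibited"],
    RiskTier.HIGH: [
        "risk management system",
        "training data governance",
        "human oversight",
    ],
    RiskTier.LIMITED: ["transparency: inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

A real classification depends on the system's intended purpose and deployment context; the point here is only that each tier maps to a distinct set of duties.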
Meeting the requirements of the AI Act, a challenge for French companies
For companies, including SMEs, compliance with the AI Act represents a considerable challenge. Developers of high-risk AI will not only have to ensure the quality of the data used, but also provide detailed documentation on the systems they deploy. This documentation must include several elements: a precise mapping of the risks linked to the use of AI, human oversight mechanisms for automated decisions, and traceability of algorithms to guarantee transparency. Initiatives such as regulatory sandboxes nevertheless offer significant support for startups and SMEs (2). These controlled spaces make it possible to innovate while testing solutions in a secure framework that complies with legal requirements.
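The documentation elements listed above can be represented as a minimal record. This is a hedged sketch under stated assumptions: the field names and the `is_complete` check are illustrative inventions, far simpler than an actual conformity assessment.

```python
# Minimal sketch of a technical-documentation record for a high-risk
# system, mirroring the elements named in the text: risk mapping,
# human oversight, and algorithm traceability. Field names are
# illustrative assumptions, not AI Act terminology.
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    system_name: str
    identified_risks: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)
    algorithm_versions: list[str] = field(default_factory=list)  # traceability

    def is_complete(self) -> bool:
        # A real conformity assessment is far stricter; this only
        # checks that each required section is non-empty.
        return all([self.identified_risks,
                    self.human_oversight_measures,
                    self.algorithm_versions])

doc = TechnicalDocumentation(
    system_name="credit-scoring-model",  # hypothetical example system
    identified_risks=["bias against protected groups"],
    human_oversight_measures=["manual review of rejected applications"],
    algorithm_versions=["v2.3.1"],
)
print(doc.is_complete())  # True
```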
Data protection and quality at the heart of the regulation
Data management is one of the pillars of the AI Act, both in terms of quality and of the rights governing access to and use of training and validation data, as well as the output data generated by the AI. Companies must therefore guarantee that the data used to train their models are unbiased, traceable, and compliant with the GDPR and copyright law. This requirement is critical in sectors such as finance or the public sector, where algorithmic bias could lead to discrimination when, for example, granting credit or access to social benefits. To help protect privacy, the CNIL details actions that developers of artificial intelligence systems should follow: they must integrate personal data protection principles ("privacy by design") from the design stage onward, to guarantee rigorous management and monitoring of training data (3).
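The bias and traceability requirements above can be illustrated with crude pre-training checks on a dataset. This is a sketch under stated assumptions: the 0.5 balance threshold, the `group`/`source` field names, and the checks themselves are inventions for illustration, not CNIL or AI Act criteria.

```python
# Illustrative pre-training checks in the spirit of the text: a crude
# representation-balance check (bias) and a provenance check
# (traceability). Thresholds and field names are assumptions.
from collections import Counter

def representation_ratio(labels: list[str]) -> float:
    """Ratio of least- to most-represented group (1.0 = perfectly balanced)."""
    counts = Counter(labels)
    return min(counts.values()) / max(counts.values())

def check_dataset(records: list[dict], min_ratio: float = 0.5) -> list[str]:
    """Return a list of detected issues; empty means the crude checks pass."""
    issues = []
    if representation_ratio([r["group"] for r in records]) < min_ratio:
        issues.append("group imbalance above tolerated threshold")
    if any("source" not in r for r in records):
        issues.append("untraceable record: missing provenance")
    return issues

records = [
    {"group": "A", "source": "open-data-2023"},
    {"group": "A", "source": "open-data-2023"},
    {"group": "A", "source": "open-data-2023"},
    {"group": "B", "source": "open-data-2023"},
]
print(check_dataset(records))  # ['group imbalance above tolerated threshold']
```

Real bias auditing involves statistical testing against the intended population, not a single ratio; the sketch only shows where such checks sit in the pipeline.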
Guaranteeing trustworthy AI to stimulate innovation
Beyond quality, the trust of users, consumers and companies is a crucial issue. By imposing strict and detailed rules, the AI Act aims to strengthen this trust, particularly in sensitive sectors. For example, in healthcare, where one in two caregivers incorporates AI technologies into their practice (4), transparent and secure solutions are essential to guarantee patient safety. In parallel, Europe is relying on quality labels and certifications to distinguish companies that meet the standards. These initiatives could allow French companies to position themselves as trusted partners in a global AI market estimated at $500 billion by 2028, a fourfold increase over the market's estimated size in 2023 (5).
Preparing for the implementation of the AI Act, an essential step
For companies and organizations, preparing for the entry into application of the AI Act is essential, and the process involves several steps. First, they must carry out an inventory to identify which of their AI systems are subject to specific obligations. They must also train their teams, so that employees are aware of the legal and technical issues. Strategic collaboration matters as well: it means choosing trusted partners, well-versed in the legal requirements, across the entire chain of development and use. Finally, organizations should invest in compliance tools, using technological solutions to guarantee ongoing monitoring and conformity.
Beyond the legal obligations, the AI Act is an opportunity for European operators to define an ethical and responsible framework for artificial intelligence. Companies must anticipate the changes and turn these obligations into strategic levers to remain competitive and keep innovating. By capitalizing on the values of quality, security and trust, France could become a leader in the development of responsible AI, establishing itself as a key player in the European, and potentially global, AI ecosystem.
——————
(1) All about AI – AI statistics
(2) https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733544/EPRS_BRI(2022)733544_EN.pdf
(3) CNIL – Frequently asked questions about the AI Act
(4) https://pulselife.com/fr-fr/blog/post/barometre-ia-en-sante-alliee-ou-menace
(5) https://fr.statista.com/infographie/33548/projection-du-chiffre-daffaires-mondial-du-secteur-intelligence-artificielle-par-segment/