Founded by researchers from INRIA and CNRS together with former executives of TF1 and QWANT, the start-up aims to industrialize the detection of AI-generated content.
Label4.ai announced on July 7 a funding round of around 1 million euros from business angels to launch onto the French and European markets with its solutions for the traceability and detection of content generated or manipulated by AI. The company was quietly founded in December 2024 by six co-founders, including four researchers from INRIA and CNRS alongside former TF1 and QWANT executives. The round has just been closed.
The start-up intends to serve, at industrial scale, every sector that needs to know whether content has been partially or fully produced, or even manipulated, by an artificial intelligence. "AI tools are now so accessible that we are witnessing a general loss of the ability to distinguish the organic from the synthetic. Yet it is essential for any economic actor or individual to know the nature of the content they are dealing with. It is a matter of transparency, but also of security, public order and, in our case here in Europe, of sovereignty," declares Nicolas Bodin Guittard, CEO of Label4.ai. "The other major issue is the pace imposed by AI: the detection of modified content must be industrial, automated and adapted to the specificities, tools and challenges of each company. That is Label4.ai's proposition," he adds.
The use cases are numerous: detection of fakes, fraud and intrusions, marking of legal documents, the fight against cybercrime and fake news. Potential customers are just as varied: search engines, social or e-commerce platforms, banks and insurers, audit firms, legaltech, the public and institutional sector (including the police and the army), and even industry and ports. Five proofs of concept are under way, in online search, insurance and auditing.
Label4.ai acts along two axes: advanced forensic analysis, that is, the detection of content generated or manipulated by AI; and digital watermarking of any type of AI-generated content (image and video, but also text and audio) at the moment of generation, so that it can be more easily detected once online. One of Label4.ai's distinctive choices is to watermark content as close as possible to the user, as it leaves the AI tool each company has built for its business needs. "For detecting the synthetic nature of content, we draw on the expertise of CNRS researchers in Lille. But even the most advanced forensic technology carries a risk of error, on the order of 1 in 1,000. That is why it must be complemented by a watermarking system applied to AI-generated content, where the chances of being wrong are far lower. For that, we rely on the expertise of Inria in Rennes," explains Anthony Level, co-founder and CSO of Label4.ai.
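To make the watermarking principle described above concrete, here is a deliberately naive sketch, entirely illustrative and not Label4.ai's actual technology: a toy least-significant-bit (LSB) mark embedded in pixel values at generation time and checked later. Real digital-watermarking schemes, such as those studied in academic research, are designed to survive compression and editing, which this toy does not.

```python
# Toy LSB watermark: embed a bit pattern into the least significant bits of
# the first pixels of an image, then detect it later. Illustrative only;
# production watermarking is far more robust than this.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit mark

def embed(pixels, mark=WATERMARK):
    """Overwrite the LSB of the first len(mark) pixel values with the mark."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels, mark=WATERMARK):
    """Return True if the LSBs of the first len(mark) pixels match the mark."""
    return all((pixels[i] & 1) == bit for i, bit in enumerate(mark))

generated = [200, 137, 54, 90, 33, 77, 128, 255, 12, 64]  # toy "AI output"
tagged = embed(generated)
print(detect(tagged))     # True: the mark is found in watermarked content
print(detect(generated))  # False: the unmarked original does not match
```

The point of marking at generation time, as the article describes, is that detection then becomes a near-deterministic check rather than a statistical forensic judgment with its residual error rate.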
Label4.ai indeed has a major asset: its scientific committee, which notably brings together Teddy Furon, Inria research director in Rennes and world-renowned for his work on digital watermarking, as well as two CNRS researchers specializing in forensic analysis and an Italian researcher based in Naples who is an expert in deepfake detection. "It is the adoption of the AI Act, and especially its Article 50, that spurred us to create Label4.ai," comments Anthony Level. "From August 2026 in Europe, all generative AI systems, whatever they are, will have to mark their outputs so that the public knows the content was generated by artificial intelligence," he says. Label4.ai says it is taking part in the work of the European AI Office "to assist with the development of the standard for marking AI-generated content under the AI Act."