AI is upending information: caught between disinformation and opportunity, only a strategy built on transparency, education and shared governance can restore trust and democracy.
“The false is a moment of the true,” Guy Debord observed. Today, that moment has moved from the margins to the center: our informational universe is overturned, saturated, emotional. In this landscape, artificial intelligence acts as a particle accelerator: capable of instantly recomposing knowledge, but also of fracturing the truth still further.
The evidence is plain. More than half of Internet users worldwide (51%) say they get their news from social networks, a proportion even higher among 18-to-24-year-olds. In France, one young person in six already subscribes to flat-earth theory. And 55% of our fellow citizens fear being exposed to fake news while scrolling. These “digital natives” will enter the workplace tomorrow with a distorted relationship to information, forged by the attention economy, where the value of content is measured in clicks rather than factual solidity.
Yet the first avatars of generative AI show how double-edged the tool is. Grok, Elon Musk’s chatbot, has distinguished itself by relaying falsehoods about several historical periods, as well as about the recent Los Angeles riots. Worse, its system prompt was written so that it systematically questions “mainstream media narratives”, that is, those that do not always suit its owner, Elon Musk. At the other extreme, DeepSeek, born in China, delivers smooth answers until a sensitive subject forces it to endorse the party line. Technology obeys its designers; it inherits their blind spots. In short, it is not neutral.
For organizations, the shock is already here. Employees accustomed to “capture, remix, broadcast” share without filtering. This culture of virality collides with confidentiality, compliance and reputation. According to PwC’s “Hopes & Fears 2024” survey, 84% of young talents consider it a priority to be trained in the risks of AI, but only 12% of employers meet that expectation.
We propose a three-point compass:
- Radical transparency: publish the sources, the margins of error and the logic behind the prompts. Without this methodological clarity, the shadow of suspicion grows.
- Education for discernment: from primary school onward, learn to verify, cross-check, contextualize. Yesterday’s school compared Le Monde and Le Figaro; tomorrow’s will have to weigh an article, a TikTok post and a chatbot’s answer.
- Shared governance: companies, universities, media and public authorities must jointly build charters of use that reconcile innovation with scientific rigor. Multidisciplinary committees (journalists, data scientists, philosophers) must audit AI systems the way auditors review accounts.
AI is neither enemy nor savior. It can become a revealer of conscience: by confronting everyone with divergent answers, it forces each of us to formulate our own criteria of truth. But on one condition: that we invest in trust, that intangible infrastructure without which there is neither a market nor public debate.
Debord added that “the spectacle is the inversion of life”. Let us not allow the algorithmic spectacle to invert democracy. Let us make AI a tool of emancipation, not a shadow theater.




