In front of our screens, Artificial Intelligence seems to know everything. In just a few seconds, it writes polished texts with stunning confidence.
But what is really happening in the machine’s circuits when it doesn’t have the answer? Is AI capable of simply saying “I don’t know”, or does it prefer to fill in the gaps with beautiful lies? Behind this illusion of absolute knowledge lies the technology’s greatest flaw: its unfortunate tendency to “hallucinate”. Put plainly, it would rather invent facts out of thin air than leave us without an answer. A fascinating flaw that points to an essential truth: an AI does not think, it calculates.
When prediction masquerades as knowledge
Generative Artificial Intelligence is often perceived as a vast encyclopedia that has assimilated all of human knowledge. This representation is misleading. An AI does not understand what it writes: it predicts. From billions of data points, it calculates the probability that one word follows another. Meaning is not its compass; statistical likelihood is.
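To make this concrete, here is a minimal sketch in Python of what “predicting the next word” means. The vocabulary and probabilities are invented for illustration; a real model derives its distribution from billions of training examples rather than a hand-written table.

```python
import random

# Toy next-token prediction: given a context, the "model" assigns a
# probability to each candidate word and samples from that distribution.
# These numbers are made up for illustration, not taken from a real model.
next_word_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "beautiful": 0.03},
}

def predict_next_word(context: str) -> str:
    probs = next_word_probs[context]
    words = list(probs.keys())
    weights = list(probs.values())
    # The system does not "know" the answer; it picks the statistically
    # likely continuation of the text it has seen so far.
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word("The capital of France is"))  # usually "Paris"
```

Note that even here, “Paris” comes out not because the system knows geography, but because that word most often follows that context in the data.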
This is where the illusion is born. Because its sentences are structured, fluent and confident, we read them as a sign of mastery. We have learned to associate the clarity of a speech with the competence of the speaker. In the case of AI, however, form guarantees nothing about substance. An answer can be perfectly worded and yet inaccurate. Apparent coherence then becomes a substitute for truth.
When this predictive mechanism reaches its limits, the problem worsens. Incapable of admitting its ignorance, the model fills areas of uncertainty with invented material. These are what we call “hallucinations”: not random errors, but the logical consequence of a system designed to produce a response, even in the absence of reliable information.
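One way to see why hallucination is structural rather than accidental: a standard decoding loop always returns a word, however weak the distribution behind it. The sketch below (hypothetical probabilities, not a real model’s API) contrasts that behavior with a variant that abstains when no continuation is clearly likely; real systems rarely decode this way, which is precisely the point.

```python
# Hypothetical output distribution for a question the model has no
# reliable information about: probability mass is spread thinly across
# many plausible-sounding but unverified continuations.
uncertain_probs = {"in 1987": 0.12, "in 1992": 0.11, "in Geneva": 0.10,
                   "by decree": 0.09, "unanimously": 0.08}

def always_answer(probs: dict) -> str:
    # Standard decoding: emit the most likely token no matter how weak
    # it is. This is how a fluent but unfounded answer gets produced.
    return max(probs, key=probs.get)

def answer_or_abstain(probs: dict, threshold: float = 0.5) -> str:
    # Hypothetical safeguard: refuse to answer when even the best
    # candidate is improbable.
    best = max(probs, key=probs.get)
    return best if probs[best] >= threshold else "I don't know."

print(always_answer(uncertain_probs))      # "in 1987" — a confident-sounding guess
print(answer_or_abstain(uncertain_probs))  # "I don't know."
```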
The consequences of this major defect were recently documented by the HalluHard study, published in early 2026 by EPFL researchers. Their work shows that even the most recent, Internet-connected AI models are wrong nearly 30% of the time on complex questions. More worrying still, the study highlights a “snowball effect”: over the course of a long conversation, the AI becomes entangled in its own inventions and ends up locked in an alternative reality of its own creation.
Reality already offers striking examples. In the United States, lawyers have presented judges with false case law entirely invented by AI to defend their clients (1). In academia, researchers have warned about scientific articles fabricated from scratch, complete with serious-sounding but fictitious titles and author names (2). The health field is not spared: studies show that AI generates false clinical references and incorrect drug dosages (3). AI can even invent lives and defame the innocent: Australian mayor Brian Hood was wrongly accused by ChatGPT of having served prison time for corruption, when in reality he was the whistleblower who exposed that very affair (4)!
When responsibility escapes the machine
Faced with these failures, the question of responsibility arises. If a serious decision is made on the strength of a machine hallucination, who is at fault? Since AI has neither legal status nor moral conscience, the burden of verification inevitably falls on humans, who must answer alone for their actions before the law.
The danger here is not only cognitive but legal. Human beings are highly vulnerable to “automation bias”: faced with a screen that delivers clear, assertive answers, our critical thinking retreats and we tend to delegate our judgment to the machine. This reflex is not neutral: it turns intellectual passivity into concrete risk, because responsibility for decisions made on the basis of these answers rests entirely with humans.
In our society of immediacy, where access to ready-made summaries spares us the effort of research, we are losing the essential habits of verifying information and comparing sources. Yet this rigor is not merely a matter of accuracy: it is the condition for being able to assume, legally and morally, the consequences of a decision. By growing accustomed to the comfort of certainty delivered in a fraction of a second, we risk entrusting the machine with choices for which we remain legally responsible, without having exercised the rigor needed to assume them.
This is why keeping “humans in the loop” is not a mere precaution: it is a moral and legal obligation. Generative AI should remain an assistant: a synthesis tool, a creative partner, a starting point for a project. But validating information, publishing content and making the final decision must remain human tasks, with humans retaining responsibility for their choices. Field experience, understanding of context and moral conscience are not luxuries: they are the guarantees that our decisions can be assumed legally and ethically.