Should we halt a race toward an AI whose arrival date no one knows? More than 700 experts think so. Others counter that a pause would forfeit a crucial advantage. Between caution and progress, the debate remains open.
Their names are Geoffrey Hinton, deep learning pioneer and former Google researcher; Yoshua Bengio, director of the Quebec Artificial Intelligence Institute (Mila) and co-recipient of the Turing Award; and Stuart Russell, professor of computer science at the University of California, Berkeley, and a specialist in the safety of intelligent systems.
All three are among the main signatories of an international appeal (1) of more than 700 experts and personalities calling for a pause in the development of so-called “superintelligent” artificial intelligence.
Alongside them, entrepreneurs, philosophers and public figures come together to warn of a possible drift: a race for algorithmic power carried out without real governance or understanding of its consequences.
A call to slow down before the point of no return
The signatories call for “the temporary cessation of any development of artificial intelligence whose capabilities exceed those of current systems”, as long as no scientific consensus guarantees their safety. The aim is not to reject innovation, but to pause long enough to assess the consequences of a technological evolution that now seems to outstrip our capacity to understand it.
While the sector’s major players — OpenAI, Google DeepMind, Anthropic and Baidu — escalate toward ever more complex models, researchers fear a drift: technological progress for its own sake, at the expense of human control.
A concept as fascinating as it is indeterminate
The word superintelligence crystallizes these concerns. It designates an AI whose faculties would surpass those of humans in almost every domain: logical reasoning, creativity, strategic planning, even moral judgment. But the reality is vaguer: no one really knows what such an entity would be, or how to measure it. Is it an intelligence capable of self-improvement without supervision? An emerging consciousness? Or simply a system more efficient than our current models? This semantic vagueness is at the heart of the problem: how do we stop what we struggle to define? Since the work of psychologist Howard Gardner, we have known that there is not a single intelligence but a plurality of intelligences (2): linguistic, logical-mathematical, spatial, kinesthetic, musical, interpersonal, intrapersonal and naturalistic. Each individual mobilizes a unique combination of these forms, which cannot be reduced to performance in calculation or abstract reasoning.
A world race without a referee
Suspending this progression strikes many as utopian. How can a global pause be established when economic and geopolitical interests diverge? The United States, China and the European Union are locked in fierce competition to dominate the strategic sector of artificial intelligence, and slowing down unilaterally means risking the loss of a decisive advantage. Yet for the signatories, it is precisely this lack of international coordination that makes the pause essential.
They call for the creation of a public and independent body responsible for overseeing the most advanced developments. An idea which echoes, in Europe, the AI Act, the regulation recently adopted by the European Union. This pioneering text establishes a classification of AI systems according to their level of risk and imposes strict obligations of transparency, traceability and human control.
But for many experts, this framework remains insufficient in the face of potentially self-improving artificial intelligence. The AI Act sets rules for compliance, but it does not yet address superintelligence, which falls outside any current assessment framework. The call for caution then becomes an ethical requirement: think before accelerating.
Lucid ignorance, or how to admit uncertainty
The researchers themselves recognize the irony of the situation: they fear a phenomenon they do not yet know how to describe. Superintelligence is, for now, a theoretical horizon, almost a projection of our anxieties and ambitions. But it is precisely this uncertainty that justifies caution. If we don’t know the exact nature of the finish line, should we really keep running without watching where we step? The question is no longer merely technological. It is philosophical, political and deeply human. The possibility of superintelligence tests less our capacity to invent than our capacity to govern ourselves. Perhaps that is the mark of true intelligence: knowing how to stop before the machine thinks (and acts) for us.
(1) Open letter published by the organization Future of Life Institute (FLI) on October 22, 2025, entitled “We call for a prohibition on the development of superintelligence (…)” https://superintelligence-statement.org/
(2) Gardner, H. (2004). Frames of Mind: The Theory of Multiple Intelligences.