As the United Nations General Assembly opens in New York, several dozen artificial intelligence researchers and executives have published a letter calling for international oversight of the technology's development.
In an open letter made public on Monday, September 22, a collective of specialists from the artificial intelligence sector called on governments to agree on a common regulatory framework. The signatories urge the adoption of international rules intended to prevent the most dangerous uses of AI.
International safeguards deemed urgent by the signatories
According to excerpts published by Le Figaro with AFP, the letter stresses that "AI has immense potential for the well-being of humanity, but its current trajectory presents unprecedented dangers". The signatories call for "red lines" to be set for major players in the sector through "international agreements" aimed at overseeing certain developments deemed risky.
The authors specify that this is not an exhaustive set of rules but rather "minimum safeguards", presented as the "lowest common denominator on which governments should agree to contain the most imminent and unacceptable risks". These risks include the triggering of pandemics, mass dissemination of disinformation, damage to national security, a sudden surge in unemployment, and human rights violations.
This initiative is led by several organizations, including the Center for AI Safety (CeSIA), based in France, The Future Society, and the Center for Human-Compatible AI at the University of California, Berkeley. Twenty other partner organizations also support the effort.
Support from research and industry
The letter was co-signed by several recognized figures from academia. Among them are Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics and considered one of the pioneers of modern AI, and Yoshua Bengio, professor at the University of Montreal, often cited as one of the most influential experts in the field.
The text also gathers signatories from industry, such as Jason Clinton, chief information security officer at Anthropic, as well as several employees of DeepMind (Google's AI lab) and OpenAI.
According to Le Figaro, the initiative comes as many companies in the sector are concentrating their efforts on developing artificial general intelligence (AGI), a technology that would aim to match, or even exceed, all human intellectual capacities.
The authors of the letter recall that precedents exist for international cooperation on sensitive technologies. They cite in particular the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), which entered into force in 1970, and the Chemical Weapons Convention, in force since 1997. In their view, a similar framework, even a minimal one, could help anticipate potential abuses of AI on a global scale.