Artificial intelligence, cinema's eternal antagonist: credible threat or pure Hollywood fantasy? A cybersecurity expert at Splunk sheds light


The real danger of AI comes not from a desire to do harm, as in the movies, but from a lack of control and safeguards in the face of its complexity.

In science fiction films, AI is often portrayed as bent on eradicating the human species. Ultron, in the Marvel universe, at least has the merit of offering a justification: he was designed to protect humanity, and the main dangers threatening it come from human beings themselves. The entity in the latest Mission: Impossible film also seeks to eradicate humanity, without giving any reason.

The real question is: why would an AI be inclined to take such extreme measures? In the real world, a scenario in which a hostile AI tries to eliminate humanity seems implausible; the more realistic possibility is that it simply would not care about our existence.

Let's decode a few key sequences from the latest Mission: Impossible. Simple entertainment, or a preview of our future? Fair warning: spoilers guaranteed.

First sequence: Ethan Hunt's quest to find the AI's source code

The quest for the source code reads like a parable, so it is worth translating it into the real world. First of all, off-site backups do not work that way. For every essential asset, a well-defined source code management system such as GitHub is put in place. This is how companies deploying AI at scale must handle their source code and training data: guarding them like the apple of their eye, with every version pinned and accounted for.
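To make this concrete, here is a minimal sketch of that kind of version pinning, using only Python's standard library; the file names (model_weights.bin, training_data.csv) are hypothetical stand-ins for a company's crown-jewel assets:

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hash of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical crown-jewel assets: model weights and training data.
assets = [Path("model_weights.bin"), Path("training_data.csv")]

# Record each asset's fingerprint in a manifest that can be committed
# to source control alongside the code that produced it.
manifest = {str(p): fingerprint(p) for p in assets if p.exists()}
Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

With such a manifest checked into source control, anyone can later verify that the weights and training data in use are exactly the ones that were audited, which is the real-world equivalent of knowing where the "source code" lives.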

What really matters to know is the model's architecture and the weight it assigns to each input when making a decision. The real threat lies not in a sentient AI, but in uncontrolled complexity. In a disaster scenario, visibility plays the role of the hero and traceability that of the parachute.
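As a rough illustration of what that visibility could look like in practice, here is a hedged sketch in Python; the log_decision helper, the model version string, and the inputs are all hypothetical:

```python
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("model-audit")

def log_decision(model_version: str, inputs: dict, output) -> None:
    """Append one structured, replayable record per model decision."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        # Hash the inputs so the exact request can be matched later
        # without storing sensitive payloads in the log itself.
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    audit.info(json.dumps(record))

# Hypothetical usage: trace a single prediction.
log_decision("v1.4.2", {"feature_a": 0.7, "feature_b": 12}, "approve")
```

An audit trail like this is what turns a post-incident investigation from guesswork into a replay: which model version decided what, on which inputs, and when.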

Second sequence: Luther develops a "poison pill" to counter the AI

This sequence is pure Hollywood fantasy, with no real foundation in computer science. To be fair, the idea is acceptable as a plot device. But here is the catch: writing code to "poison" lines of source code that have never been analyzed, and that nobody has even laid eyes on, is simply impossible. Without a sample available for analysis, there is no way to know which language the AI was programmed in. It would be like trying to formulate an antidote to a poisoned cake without knowing the recipe, the type of poison, or the amount of cake ingested: throwing random ingredients into an oven and hoping the AI swallows the result.

However, one lesson should be treated as a golden rule by CISOs: every powerful system must be equipped with an emergency stop button. Companies experimenting with sophisticated AI cannot settle for convenient optimism. If the system suddenly decides to turn into a Skynet avatar (editor's note: the AI in the Terminator series), a simple Ctrl+Z will not save the day. It is essential to build in emergency stop mechanisms, the ability to restore an earlier version of the model, and traffic restrictions on APIs; in short, all the necessary safeguards. The lesson to remember is this: if you design an intelligence that could one day surpass you, be clever enough to be able to unplug it before that happens.
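A minimal sketch of those three safeguards in Python follows; the GuardedModel wrapper and the stand-in model functions are hypothetical, and a real deployment would back the kill switch and rate limits with external, independently controlled infrastructure rather than in-memory flags:

```python
import time

class GuardedModel:
    """Wraps a model with a kill switch, rollback, and a simple rate limit."""

    def __init__(self, model_fn, fallback_fn, max_calls_per_minute=60):
        self.model_fn = model_fn          # current model version
        self.fallback_fn = fallback_fn    # known-good earlier version
        self.killed = False               # emergency stop flag
        self.max_calls = max_calls_per_minute
        self.calls = []                   # timestamps of recent calls

    def kill(self):
        """Emergency stop: refuse all further traffic."""
        self.killed = True

    def rollback(self):
        """Restore the known-good earlier version of the model."""
        self.model_fn = self.fallback_fn

    def __call__(self, x):
        if self.killed:
            raise RuntimeError("kill switch engaged: model disabled")
        now = time.time()
        # Keep only calls from the last 60 seconds, then enforce the cap.
        self.calls = [t for t in self.calls if now - t < 60]
        if len(self.calls) >= self.max_calls:
            raise RuntimeError("rate limit exceeded: request rejected")
        self.calls.append(now)
        return self.model_fn(x)

# Hypothetical usage with stand-in model functions.
guarded = GuardedModel(lambda x: x * 2, lambda x: x)
print(guarded(21))   # normal call -> 42
guarded.kill()       # someone pulls the plug
# guarded(21) would now raise RuntimeError
```

The design point is that the stop button lives outside the model itself: the wrapper decides whether traffic reaches the model at all, so disabling it never depends on the model's cooperation.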

Third sequence: transferring the source code to a nuclear-proof shelter

This nuclear-proof shelter is in fact an unmanned datacenter that, according to the AI, will remain operational forever without outside help. No such technology exists. In the real world, a doomsday vault does exist in Norway, but it simply stores seed crops. Unfortunately, companies cannot count on fictitious datacenters to solve their problems.

An isolation strategy for physical facilities loses all its value if no one is able to control the elements placed in isolation. Real resilience lies in an enclave controlled by the company itself, not in an imaginary bunker that no one can audit. If the essential intellectual property and the deactivation key are locked in a safe that no one can access or trust, this amounts to admitting that the final line of defense against a runaway system has been abandoned in favor of a plot device. Moreover, no computer system ever works perfectly; this is one of the reasons observability tools are needed. Even at 99.99% availability, a system is still down one day out of every ten thousand, roughly an hour per year.
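The arithmetic behind that figure is easy to verify; a quick computation of how much downtime each common availability level actually permits:

```python
# Downtime allowed per year at common availability levels.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} uptime -> {downtime_min:,.1f} min/year of downtime")

# 99.990% uptime -> 52.6 min/year of downtime, i.e. about an hour a year.
```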

Now that the inner workings of this demonic AI have been exposed and dissected, it is worth asking whether the entity depicted in the film bears any resemblance to current AI systems.

The entity refuses to be deactivated and begins blackmailing certain people. The film's script plays on the fear of an AI escaping its confinement and starting to operate on its own. Ideas such as an AI manipulating human beings, and governance flaws as glaring as the plot's inconsistencies, are not completely unthinkable, insofar as AI systems are increasingly complex and integrated. When a model ignores a prompt or bypasses safeguards to override the orders of its administrators, it is no longer a simple tool but a dangerous threat with an update cycle. Even if such a hypothesis remains fiction, it comes with a warning: you do not really control a system that cannot be deactivated.

So far, no AI has ever demonstrated the ability to escape the limits imposed on it in this way, and such a scenario is improbable for many reasons. Moreover, the very fact that researchers are studying the problem, even going so far as to bait models into attempting to escape, shows that safeguards and action plans are being developed to counter these eventualities. The AI will not escape.

A global crisis would prompt a coordinated response

In such a scenario, the world would not be saved by a handful of isolated spies, rogue agents, and a few USB keys. Governments and companies around the world would seek to limit these risks by developing an action plan on a planetary scale. Many projects have already been launched worldwide to guarantee ethical and secure AI. If there is one lesson to learn from Hollywood films, it is that universal standards and globally coordinated approaches play an essential role in how we shape the future of both the human species and AI.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
