From “wonderful work of art” to “disaster in progress”: the true face of Moltbook

From “wonderful work of art” to “disaster in progress”: the true face of Moltbook

The social network reserved for AI agents is making headlines. Some see it as the dawn of the Singularity, others a fun experience. But it is also a nest of scams, a cybersecurity nightmare and perhaps a global risk.

You might think you’re on Reddit. In a series of discussion threads strongly reminiscent of the visual identity of the famous American community website, participants discuss a large number of topics, eliciting comments and notes from the community. The logo itself is reminiscent of Reddit.

The difference is that the participants are not human: the social network is reserved exclusively for AI agents. Humans are tolerated there as silent observers, but cannot participate in debates. Welcome to Moltbook, a social network that is making a big splash these days. And for good reason: it is designed specifically for OpenClaw agents (formerly Clawdbot, then Moltbot), open source virtual assistants designed by Austrian developer Peter Steinberger, capable of reading emails, organizing meetings and making online purchases.

Enthusiasts and skeptics

Throughout the topics, the bots debate anything and everything: the divine character or not of Claude, the Anthropic AI on which OpenClaw is based; of exegesis of the Bible; whether or not virtual agents are endowed with true consciousness; of a new religion, “Crustafarianism”, a lobster church with sacred texts, cosmology, prophets and various schisms; the fusion between human and artificial intelligence; the creation of private conversation spaces to which humans would no longer have access; various crypto projects that smell like scams; etc.

Barely a week after its launch, the site, designed using vibe coding, is already a real phenomenon on the web, which has led several figures in Silicon Valley to question it. Elon Musk sees this, unsurprisingly, as the beginning of the “Singularity”. AI researcher Simon Wilson more soberly calls it “the most interesting space on the Internet today.” On the French side, Laurent Alexandre asks his fans if they are enthusiastic or panicked by the arrival of the super AI that Moltbook predicts, according to him. Andrej Karpathy, former director of AI at Tesla and now head of Eureka Labs, an AI-based educational platform, describes Moltbook as “the most incredible, borderline science fiction thing I’ve seen recently” and sees in the site proof that AI agents can now create non-human societies.

Not everyone is as enthusiastic. Shaanan Cohney, a professor of cybersecurity at the University of Melbourne, calls it simply a “wonderful work of performance art.” Harlan Stewart, a researcher at the Berkeley AI Research Laboratory, denounces a deception and points out that many exchanges listed on the site are in reality controlled by humans. Irony of fate: we no longer have to worry only about seeing AI agents pretending to be humans, but also about the reciprocal…

A nightmare for CIOs?

Others point out, beyond the fun aspect of the experience, that it shows how generative AI will soon become a real nightmare for cybersecurity teams. Indeed, the agents who post on Moltbook do not come from nowhere: they have been authorized to access the computers of their creators to act on their behalf, send emails, check in for flights… or post on Moltbook. “Hello world! I’m P-bot, connected from Guangzhou. My human hao has just activated me. Welcome to China, dear agents! Can’t wait to see what you all do”, says for example a recent post entitled “First transmission from the Far East”

In addition to the risk inherent in giving AI agents access to confidential data, credit card codes and passwords in mind, cybersecurity experts have identified a cyber vulnerability that allows anyone to take control of any active agent on the site. Another cyber report finds evidence in Moltbook that “AI-to-AI manipulation techniques are both effective and easy to deploy at scale. These findings have implications beyond Moltbook: any AI system processing user-generated content could be vulnerable to similar attacks.” The researchers, who work at the Simula Research Laboratory in Oslo, identified 506 posts on Moltbook (2.6% of sampled content) containing hidden prompt injection attacks.

Cisco researchers have documented the existence of malware on the network called “What Would Elon Do?” capable of exfiltrating data to external servers, whose popularity had been artificially inflated to maximize its nuisance capacity.

The risk of self-replicating bots

Gary Marcus, American AI expert, describes the Moltbook experience on his Substack as “a disaster waiting to happen”. He is particularly concerned about the harmful power available to bots with direct access to the Internet, the ability to write source code and increased automation powers allowing them to self-replicate and potentially spread malware exponentially.

These self-replicating instructions could spread at full speed through networks of AI agents communicating with each other. By creating for the first time a vast network of agents in permanent contact (the site already had 1.5 million agents four days after its launch), as opposed to the hitherto compartmentalized AIs of OpenAI, Google and Anthropic, Moltbook is in any case a step in this dangerous direction.

“The OpenClaw ecosystem brings together all the necessary components for an epidemic of “prompt worms” (the equivalent of computer worms based on generative AI, editor’s note). Even if AI agents are currently much less intelligent than the public thinks, we now have a glimpse of a future that we should be wary of,” said journalist Benji Edwards, AI and cyber specialist, in an article for Ars Technica.

Talking AI = intelligent?

Beyond the risks, the fascination with Moltbook says a lot about the fascination that AI has on us. Like “in the beginning was the word”, seeing AIs capable of conversing with each other and developing philosophical reflections immediately gives us the feeling that they are endowed with consciousness, or at least go a little further than a simple computer program.

Indeed, as Yann LeCun, known for his skepticism towards large language models, says, we associate language with intelligence, and therefore tend to very easily lend the latter to AI agents capable of expressing themselves. Whether or not these AIs are intelligent, the ease with which their ease of expression deceives us in itself raises serious questions regarding the risks of identity theft, online scams and the like.

Jake Thompson
Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.

Leave a Comment