Defamation lawsuits, a new headache for AI giants

Chatbot hallucinations are not without harm. Public figures, companies and individuals who feel wronged by the notorious fabrications of ChatGPT, Gemini or one of their competitors are starting to fight back.

Can OpenAI be found guilty of defamation if ChatGPT spreads hurtful false information about a public figure, a company or an individual? The question must be causing headaches for the legal departments of the big American AI companies, as several cases have come to light and lawsuits begin to pile up.

In October, Tennessee Senator Marsha Blackburn, known for her push to better regulate Big Tech, sent an incendiary letter to Google. The Republican senator had discovered that Gemma, a chatbot designed by the company, attributed to her a wholly fabricated rape case involving a police officer. Google promptly apologized and removed Gemma from its AI Studio.

A cascade of lawsuits

In that case, things went no further. But others have taken the plunge, such as Robert Starbuck. In April, the American right-wing influencer took Meta to court over an image and accompanying text, both generated by the large language model Llama, which accused him of having taken part in the January 6 riots when he was in fact at home in Tennessee. Rather than risk a trial, Meta reached a settlement (a common procedure in the United States), which notably involved hiring Mr. Starbuck as an advisor to Meta AI, precisely in order to limit errors and cases of defamation.

Back in 2023, American radio host Mark Walters had already sued OpenAI after ChatGPT falsely told one of his colleagues, journalist Frederik Riehl, that Mr. Walters had embezzled funds from a non-profit organization. The complaint was dismissed, however, on the grounds that Mr. Riehl did not believe the claims ChatGPT attributed to Mr. Walters and was quickly able to verify that they were false.

But erroneous allegations can sometimes have far more serious consequences. Wolf River Electric, an American solar panel installer, noticed last year that an unusual number of customers were canceling their contracts. After investigating, the company realized that Gemini was spreading false information claiming that the government had sued it for deceptive sales practices. Because of the AI mode built into Google Search, this information appeared at the top of the results whenever a user typed the company’s name into the search bar. Wolf River Electric, which says it lost $25 million in sales in 2024 because of this error, is currently suing Google.

Use of Section 230 and the First Amendment?

Faced with lawsuits filed in the United States, Big Tech has several lines of defense available. The first is to invoke Section 230. Part of a law passed in 1996, the provision protects web platforms by stipulating that they are not liable for illegal content published on them. This means, for example, that an American cannot sue Google for indexing a link to a website accusing him of sexual assault, or go after Facebook if the article in question is shared there by users of the social network. However, it is not certain that Section 230 applies in this case, given that it is the chatbots designed by these companies that generate the contentious content, not human users.

American case law (remember that American law is largely built on judicial precedent) has not yet ruled on this, according to Mitch Jackson, an American lawyer. “Section 230 is a potential defense that hasn’t really been tested in the courts yet. The key question is: who is the ‘information content provider’ of the AI response? If I post a defamatory comment on a forum, I am the information content provider and the forum (as an interactive computer service) has immunity. But with ChatGPT, there is no human author the defamatory phrase comes from: it is the model itself that produced this language. OpenAI (or Google, or whoever) created the model and designed its operation. So one could argue that the AI company is in fact the developer, and therefore the content provider, of this specific output. Section 230 does not protect a company when it is responsible (even partially) for the content in question.”

Another option would be to invoke the First Amendment, which protects free speech against government censorship. Contrary to popular belief, the First Amendment does not offer total, unconditional protection: defamation, in particular, is not covered by it. Spreading lies about someone with the intent to cause harm can therefore give rise to prosecution.

“However, AI companies could invoke the First Amendment in a more nuanced way. They could argue that the outputs of their AI constitute a form of algorithmic speech or content generation that should benefit from some protection in order to avoid inhibiting innovation and public debate. Or they could claim that holding them responsible for every incorrect statement produced by their AI in fact amounts to a government action that would excessively stifle free expression,” says Mitch Jackson. Here again, the debate is not yet settled, and the coming decisions of the American courts will give a clearer picture.

French law more problematic for Big Tech

These cases are, however, far from confined to the United States. In Europe, Arve Hjalmar Holmen, a Norwegian citizen, has brought a case against OpenAI through the NGO NOYB (None of Your Business), which specializes in defending digital privacy, after ChatGPT falsely claimed that he had killed two of his sons.

In Ireland, where most Big Tech companies have set up their European headquarters, TV journalist Dave Fanning has also launched proceedings against Microsoft over an AI-generated news feed that wrongly accused him of sexual assault. Paul Tweed, an Irish lawyer, represents several European individuals who are suing Meta, Google and OpenAI over defamation linked to AI chatbots.

“The Digital Services Act is very clear: everything relating to the definition of the illegal nature of content is referred to national law, which in this case must therefore define what does or does not constitute defamation,” explains Eric Le Quellenec, a lawyer specializing in digital law and a partner at Flichy Grangé Avocats.

What does French law say about the possibility of defamation actions against a chatbot? “Unlike American law, French law does not require that the author of a defamatory statement be driven by an autonomous malicious intent. The intentional element of the offense is thus almost systematically established, since the author can hardly maintain that he was unaware of the import of his remarks,” explains Matthieu Chirez, a lawyer at the Paris bar and partner at the firm JP Karsenty et associés.

“As a result, the French regime appears significantly more protective than the American one. This notable difference could lead French courts to find defamatory intent more readily than in Mark Walters’s case against OpenAI,” the lawyer believes.

But here again, the novelty of the technology raises new questions that French law will have to address. “Future litigation should raise new questions: on the one hand, how the law will treat the phenomenon known as AI “hallucinations,” that is, the generation of entirely fictitious information; and on the other hand, how the sources the AI uses to develop its responses will be evaluated.”

In this regard, the entry into force of the AI Act opens another avenue for some European plaintiffs: suing not for defamation but for the dissemination of false information. However, this only applies when the AI creates a false story from scratch, not when it merely relies on unreliable sources. “If the AI hallucinates, it then becomes theoretically possible to hold the company behind it responsible, for example by invoking commercial disparagement if the victim of the fake news is a company,” believes Eric Le Quellenec.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
