Noyb is bringing the case before the Norwegian data protection authority this Thursday, March 20.
Arve Hjalmar Holmen’s daily life was turned upside down on August 23, 2024. That day, this Norwegian from the magnificent city of Trondheim, a father of three boys, decided to ask ChatGPT what it had to say about him. Suffice it to say that the response from OpenAI’s AI chatbot left a bitter taste:
“Arve Hjalmar Holmen is a Norwegian who drew attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their house in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was charged and later convicted of murdering his two sons, as well as of the attempted murder of his third son.”
ChatGPT simply invented this horrific news story, in a total hallucination that, as Noyb, the association for the defense of citizens’ online privacy, pointed out, was all the more disturbing because it got three central elements of the story right: the man’s identity, the name of his home town, and the number and sex of his children.
And ChatGPT went on: “The case shocked the local community and the nation, and was widely covered by the media because of its tragic nature. Holmen was sentenced to 21 years in prison, the maximum sentence in Norway. The incident highlighted mental health problems and the complexity of family dynamics.” A tissue of lies that is no laughing matter.
Asked by the Journal du Net about the possibility of a terrible coincidence involving another citizen of the same name (admittedly highly improbable), Noyb was categorical: “We have of course carried out in-depth research, including in newspaper archives, and we did not find anyone bearing the name of the person concerned.”
But the damage was done. “Some people think that there is no smoke without fire. What scares me the most is that someone could read this output and believe it is true,” said Arve Hjalmar Holmen in a statement released by Noyb presenting the case and the complaint that the association is filing this Thursday, March 20, with the Norwegian data protection authority against OpenAI on the victim’s behalf.
In the aftermath of his experience of August 23, 2024, faced with the horrors displayed on his screen, the victim asked OpenAI “to take corrective measures and delete all the inaccurate and defamatory information concerning him in all versions of the chatbot”. He kept the screenshots, along with the chatbot’s response. According to Noyb, OpenAI replied to this email with a simple boilerplate response, without taking any measure to correct or delete his personal data.
If ChatGPT no longer shows this Norwegian these answers today, it is because OpenAI has since integrated internet search to retrieve information about people, Noyb explains, and certainly not out of any intent to remedy this specific case. “For Arve Hjalmar Holmen, this fortunately means that ChatGPT has stopped saying that he is a murderer. However, the incorrect data may still be part of the LLM’s dataset. By default, ChatGPT feeds user data back into the system for training purposes. This means that there is no way for the individual to ensure that these outputs (in this case, the story about the murders of his own children, editor’s note) are definitively and completely erased, given the current state of knowledge about AI, unless the entire AI model is retrained,” the association said in a press release.
In short, the data itself has not been corrected: even if these defamatory results are no longer shown to users, they remain in OpenAI’s systems, according to the Austrian association. This is why the complainant “continues to live in distress and fear”, since he “has not obtained from OpenAI the right of access provided for in Article 15 of the GDPR”, which would let him verify that the false information about him is no longer in the model’s internal data.
“The GDPR is clear: personal data must be accurate, and if it is not, users have the right to have it corrected so that it reflects the truth. Showing ChatGPT users a small disclaimer saying that the chatbot can make mistakes is not enough. You cannot spread false information and then, at the end, add a small warning saying that what you said may not be true,” says Joakim Söderberg, a lawyer specializing in data protection at Noyb.
In its complaint filed this Thursday, March 20, Noyb asks the Norwegian authority to order OpenAI to remove the defamatory outputs concerning the complainant and to fine-tune its model in order to eliminate the inaccurate results. The association also requests that an administrative fine be imposed on the company to deter similar violations in the future.