OpenAI imposes its innovations through a “fait accompli strategy”, in disregard of copyright and image rights. Without regulation, creation grows impoverished and the entire creative ecosystem is threatened.
Amid the constant one-upmanship among AI players, OpenAI’s release of Sora 2 made a lasting impression, beyond the enthusiasm and fears aroused by the striking demonstration videos flourishing on TikTok and other platforms.
The fait accompli strategy as the spearhead
Each new OpenAI launch brings its share of appropriations in disregard of creators’ rights. Scarlett Johansson did not agree to the “Sky” voice being used for spoken interactions with users in GPT-4o? The company took no notice. Videos circulated, articles relayed the affair, and buzz and notoriety grew. The same approach applied to the launch of the Sora image generator: OpenAI introduced the “Ghibli effect” feature without seeking the studio’s agreement. The result? One million new users in a single hour.
The mechanism is simple: let the excitement build for a few days to maximize uptake. New users willingly play along with defying the forbidden, exhilarated by the experience. Once the controversies erupt, the company apologizes and offers controls or removes the feature. This approach has allowed OpenAI to achieve unrivaled dominance in adoption, daily use and general public visibility. ChatGPT had 400 million weekly users before Sora launched. Three months later, there are 800 million…
This technique, known as the “fait accompli strategy” and used notably by Uber in its early days, aims to create economic and social dependence by making the product indispensable. Once established, the company can negotiate regularization with states from a position of strength.
A company in the image of its time: outrageous
For the release of its new model, Sora 2, OpenAI deployed what has become its trademark: drawing on pop culture universes to demonstrate its power and attract attention. This time, the Avatar and Game of Thrones franchises served as reference points. Whether the authors agree or not hardly matters; the company is no longer in its infancy and has become a financial juggernaut that seems to recognize no limits. All that counts is the pursuit of visibility, even if it means deliberately trampling on intellectual property rights.
Beyond the company’s failure to set an example, the message OpenAI sends its users is catastrophic: it legitimizes using its tool to appropriate works for one’s own ends, without regard for rules, legal frameworks or the consequences for the creators of the original works. It has thus become commonplace to see pop culture characters like Mario or Pikachu misappropriated and generated ad nauseam on social networks. The resulting videos destroy the image of these patiently constructed creations.
And as if that were not enough, a further step has been taken: exploiting the image of Stephen Hawking on skate ramps, in MMA fights or in Formula 1 races. The trend quickly spread to other celebrities such as Michael Jackson and Martin Luther King. What do they have in common? They are all dead, so there is no need for their consent. This opens a new legal battle over image rights: their estates may plead for the deceased’s image to be respected, but the situation is difficult to control, even for OpenAI.
Do not lose sight of the precarity of authors
Faced with this wholesale looting, it remains very difficult to assert one’s rights. The AI Act, which was to serve as a regulatory framework from 2025, keeps being postponed for political and financial reasons. Apart from requiring that deepfake content be labeled as AI-generated, nothing has yet been legally imposed or put in place. Companies are waiting for case law before acting, including the lawsuit filed by Disney and NBCUniversal against Midjourney, whom they accuse of training its product on their content without authorization. While awaiting the long-term outcome of that trial, certain heavyweights in the sector are trying to react, or even to act as safeguards.
This is the case of Adobe, which lets creators claim ownership of their work via a watermarking system on images. The free “Adobe Content Authenticity” application is in public beta and should provide a first, tentative response.
While waiting for better, what can we do at our own scale to protect creation? AI is often presented as a revolution, but that is no excuse to break every rule and cast off the established order. Let us see it instead as an evolution. Copyright and intellectual property law still apply, AI or not; it is up to us to act responsibly and see that they are respected. Why does this matter? If their rights are violated, how many creative people will persist when they no longer believe they can make a living from their work? And in the long term, fewer creators means less creation. A paradox for OpenAI, given that creation is the raw material powering the creative side of its AI engines…




