X + XAI: When Elon Musk’s AI feeds on our tweets

The X-xAI merger raises major concerns about data protection and the reliability of information, and calls for urgent regulation of how AI exploits online content.

The recent acquisition of X (formerly Twitter) by xAI, Elon Musk's artificial-intelligence startup, raises serious concerns about the protection of personal data and the reliability of online information. The merger allows xAI to use public tweets and images shared on X to train its AI models, a practice that raises many ethical and legal questions.

Data privacy: blurred consent

X's privacy policy authorizes the use of public data, but most users are probably unaware that their posts and images are being used to feed AI models. The opt-out mechanism, whether through the settings or by making the account private, places the burden on individuals instead of ensuring genuine informed consent. Data protection should not depend on hidden settings; it should be an explicit, proactive choice for each user.

Users uncomfortable with the exploitation of their content for AI training should immediately review their privacy settings and consider making their account private. The inclusion of public images in these datasets is particularly problematic. Photographs often contain metadata, biometric characteristics, or sensitive information that can be misused. Without strong safeguards, this could lead to unforeseen consequences such as identity theft or unauthorized use of facial recognition.
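To make the metadata concern concrete: in JPEG files, EXIF data (which can include GPS coordinates, device identifiers, and timestamps) lives in so-called APP1 marker segments near the start of the file. The sketch below, a simplified illustration rather than a production tool (a full library such as Pillow handles far more edge cases), shows how those segments could be located and removed before an image is shared:

```python
import struct


def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Remove APP1 (EXIF) segments from a JPEG byte stream.

    Minimal sketch: JPEG metadata sits in marker segments between the
    SOI marker (FF D8) and the start-of-scan marker (FF DA). EXIF,
    including any GPS coordinates, is stored in APP1 (FF E1) segments.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")  # keep the SOI marker
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            # Reached entropy-coded data; copy the remainder verbatim.
            out += jpeg_bytes[i:]
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:
            # SOS marker: compressed image data follows, copy it all.
            out += jpeg_bytes[i:]
            break
        # Each segment declares its length (including the length field).
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xE1:  # drop APP1 (EXIF), keep everything else
            out += segment
        i += 2 + length
    return bytes(out)
```

The point is not the parsing details but how little stands between a casually shared photo and the sensitive metadata embedded in it: a few dozen lines suffice to read it, and most users never strip it.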

Data reliability and bias amplification

Training AI models on X content also poses a major data-reliability problem. Unlike verified sources, X does not apply systematic fact-checking, which means AI systems may absorb and amplify erroneous information, biases, or harmful content. Without appropriate safeguards, these models could generate misleading information, reinforce echo chambers, and erode trust in AI technologies.

Impact on public discourse

With the growing integration of AI into social platforms, its decisions about prioritizing, removing, or highlighting content can influence public discourse in unpredictable ways. Moderation policies must evolve in parallel with AI to ensure fairness, prevent the spread of harmful content, and maintain trust in digital communication.

Towards stricter regulation

Faced with these issues, it is crucial that companies exploiting public data at scale be held accountable for how they process and protect it. Using user-generated content for AI requires explicit and informed consent, not clauses buried in a privacy policy.

The merger between xAI and X represents a major turning point in the use of social-network data for AI training. It highlights the urgency of establishing robust regulatory frameworks to protect users' privacy and guarantee the integrity of online information. Without rapid action, we risk seeing a digital ecosystem emerge in which data privacy and factual accuracy are under constant pressure.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
