AI in business: between mediator and scapegoats

AIs are becoming social entities in business: as mediators, scapegoats or confidants, they are transforming our working relationships and shaking up traditional human roles.

The emergence of artificial intelligence in the form of chatbots in our work environments has profoundly changed our relationship to the machine. By conversing with them as we would with a colleague, we have crossed a threshold: that of anthropomorphization. This cultural and cognitive shift has paved the way for an unexpected but irreversible phenomenon: AI is becoming a full-fledged social entity within organizations. And, as such, it is already beginning to occupy roles that neither HR departments, nor CIOs, nor sociologists of work had anticipated.

Conversational interaction, or programmed anthropomorphization

The massive introduction of conversational AI has legitimized an illusion: that of a machine with which we speak. This simple mode of exchange, so natural to humans, was enough to lend it intentions, a personality, even feelings. In UX design, it is a well-known strategy: the chatbot creates emotional proximity to the machine. But in the world of work, this familiarity produces a deeper mutation: we no longer use AI, we coexist with it. And this change of posture prepares the ground for new social dynamics.

AI as neutral mediator

In some technophile companies, AIs are already used as mediators of internal conflicts. The example of affectivabot, an HR assistant deployed in a large American service company, is instructive: by analyzing the written and oral exchanges between two colleagues in tension, the AI identifies the points of friction, suggests reformulations, and proposes resolution scenarios. Its apparent neutrality paradoxically makes it more “credible” than a human manager. Employees say they feel less judged, more listened to, better understood – even by a machine (1).
This function of algorithmic mediator could become widespread: can we imagine an AI tomorrow sitting on an ethics committee, arbitrating priorities between teams, or moderating tensions within a project group? The analogy with a simultaneous translator of emotions and intentions is not exaggerated. The AI here becomes a stabilizing social third party, capable of smoothing exchanges where humans often fail to remain impartial.

The algorithmic scapegoat

Conversely, other organizations turn to AI for less noble purposes. When decisions become unpopular (schedule changes, denied promotions, refused training, etc.), some managers invoke the algorithm: “It is not me, it is the resource-management system that decided.” This phenomenon, already documented at Amazon and Uber (2), rests on a strategy of distancing. The AI is used here as a screen or a fuse: it absorbs frustrations in place of the hierarchy.
This shift is dangerous, because it turns the AI into a sacrificial object. Rather than owning the tensions inherent in management, humans delegate the bad news to the machine. This practice weakens employees' trust in decision-making processes, while blurring the real distribution of responsibilities.

The invisible confidant

Beyond the uses supervised by the company, some employees spontaneously develop personal interactions with their AI assistant. Internal studies at Microsoft (3) have shown that employees use their Copilot as a “professional diary”: they voice their frustrations, their doubts, even their career plans.
This emergence of the role of digital confidant might seem harmless, but it raises major ethical questions: what does the AI do with these confidences? Who can access them? And above all, what does it say about relational impoverishment in the company if the only safe interlocutor becomes an agent without consciousness?

A systemic mutation of human-machine relationships

These roles – mediator, scapegoat, confidant – are only the first avatars of what AI will become in organizations. They all testify to the same phenomenon: AI is no longer perceived as a simple tool, but as a presence at work. A presence that shapes relationships, redistributes power, and redefines the way we interact – not only with it, but also with one another.
This shift calls for an overhaul of regulatory, training and work-analysis frameworks. If AI becomes a social player, it should also be treated as such: with rights, responsibilities and governance. The question is no longer whether this will happen, but how we will govern the transition.

Towards an anthropology of AI in the company?

This transition opens a fruitful field of research for the social sciences. It is no longer just a question of assessing the technological impact of AI, but of understanding its symbolic integration into organizational cultures. By becoming a social figure – sometimes tutelary, sometimes demonized – AI challenges our conception of the collective, of otherness, and of social regulation at work.
What if the future of management were no longer played out only between humans, but also with non-human entities?

Notes and sources

1. Bailenson, Jeremy N. The Infinite Conversation: AI and the New Social Contract at Work. Stanford University Press, 2023.
2. Rosenblat, Alex, and Luke Stark. “Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers.” International Journal of Communication, vol. 10, 2016.
3. Microsoft Research. “The Copilot Diaries: Unexpected Uses of Workplace AI.” Internal White Paper, 2024.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.