AI has established itself as an everyday tool in business. Generative, conversational, predictive: its uses are spreading at high speed, often at the initiative of business teams themselves.
While this acceleration holds promise, it raises a central question: how can we prevent AI from becoming a fait accompli, adopted without a framework or real oversight?
Behind the technological enthusiasm, one observation keeps recurring: AI is progressing faster than skills, faster than governance and faster than regulatory frameworks. Three issues structure the debate today: control of uses, the real capacity to produce value, and the management of risks linked to identity, sovereignty and compliance.
Exploding uses, lagging governance
The risk is not so much AI itself as the speed at which it is being adopted, often without solid foundations. A recent Zscaler study reports a 93% increase in the use of AI in business: dazzling adoption, largely unregulated, marked by the massive use of external tools and a growing exposure of data.
Internal data, sometimes sensitive or strategic, is thus fed into models whose training, hosting or reuse conditions remain opaque to many organizations. Faced with this reality, a binary approach would be counterproductive. The point is neither to block AI nor to adopt it indiscriminately, but to make clear trade-offs: which uses are authorized, with which tools, for which types of data and under whose responsibility.
Governing AI thus becomes a strategic imperative: no longer merely a technological subject, but a business subject, at the crossroads of innovation, risk and compliance.
Producing with AI: the blind spot of the human factor
Contrary to the most optimistic claims, AI does not instantly transform employees into augmented experts. It makes no one more efficient without effort, without learning, without changing practices. Even today, most companies remain at the experimentation stage; only a few manage to truly industrialize their uses.
The main obstacle to value creation is therefore not technology but people. Without training and acculturation, AI tools generate as much confusion as value. Producing with AI requires a shift in posture: from the logic of a miracle tool to one of co-construction between humans and machines.
This calls for sustained investment in skills, but also an overhaul of business processes, responsibilities and validation methods. Otherwise, AI risks remaining an expensive gadget, or even becoming an operational, legal or reputational risk factor. In this context, giving the IT department a central steering and governance role becomes essential.
Identity, sovereignty and regulation: governing risks
The question of sovereignty naturally arises, provided it is freed from ideological posturing. Sovereignty is not an end in itself; it is above all a method of risk governance. Regulatory pressure will only intensify: data protection, traceability of algorithmic decisions, liability in the event of error or bias, compliance with emerging European frameworks.
Companies must be able to document, justify and control their use of AI. Who trains the models? With what data? Where are they hosted? Who bears final responsibility for the decisions produced? All questions that will need answering, including before supervisory authorities.
From this perspective, sovereignty, understood as the ability to understand, manage and control, becomes a key lever for establishing lasting trust around AI.
Taking back control of AI does not mean slowing down innovation. On the contrary, it means creating the conditions for responsible, sustainable and value-creating adoption, capable of withstanding the test of time, usage and regulation.