Agentic AI is more than a simple evolution of generative AI. Deploying autonomous agents in the enterprise raises complex challenges in orchestration, security and governance.
Companies have barely absorbed the ChatGPT wave and must already face a new technological shock. After chatbots and copilots embedded in business applications, autonomous agents are emerging as the third wave of generative AI. They are so named because they can perform complex tasks by connecting to an organization's data. By leveraging the capabilities of large language models (LLMs), agentic AI opens up new opportunities for business process automation, going beyond what Robotic Process Automation (RPA) can do.
As with any emerging technology, feedback from the field is still rare. However, agentic AI seems to have moved past the evangelization phase into experimentation. Deloitte predicts that this year, 25% of companies using generative AI will launch agentic AI pilot projects or proofs of concept (PoCs), a figure expected to double by 2027.
The French market shows a strong appetite. According to an OpinionWay survey conducted on behalf of Salesforce, 81% of French decision-makers acknowledge the positive impact of this "digital workforce", and 84% see autonomous agents as a lever to make their processes more reliable and reduce human error.
1. Define eligible processes
While the interest is real, the success of an agentic AI project depends on its own criteria. The first step is to identify eligible processes, taking their specific characteristics into account. "The value of AI agents lies in their ability to make decisions and carry out actions autonomously, with minimal human supervision," says Xavier Cimino, Senior Managing Director Strategy at Publicis Sapient. "In software development, AI agents can progressively run tests on their own, report bugs and fix them."
Caroline Monfrais, Global Managing Partner, Strategy & Transformation at Wipro Consulting, points to other possible use cases. "For fraud detection in a bank, an agent may, for example, act autonomously for transactions below 100 euros," she notes. In other examples, a carrier can optimize its supply chain by analyzing field feedback, and a telecom operator can automate the first level of its help desk, escalating calls to human operators beyond a given confidence threshold.
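The two escalation rules described above (an amount limit and a confidence threshold) can be sketched in a few lines. This is a minimal illustration with hypothetical names and thresholds, not the implementation of any vendor's product.

```python
# Minimal sketch (hypothetical names and thresholds): deciding whether
# an agent handles a case autonomously or hands it off to a human,
# as in the fraud-detection and help-desk examples above.

AUTONOMY_AMOUNT_LIMIT = 100.0   # euros: agent may act alone below this
CONFIDENCE_THRESHOLD = 0.85     # below this, escalate to a human

def route_case(amount_eur: float, model_confidence: float) -> str:
    """Return who should handle the case: 'agent' or 'human'."""
    if amount_eur < AUTONOMY_AMOUNT_LIMIT and model_confidence >= CONFIDENCE_THRESHOLD:
        return "agent"   # low stakes and high confidence: act autonomously
    return "human"       # high stakes or low confidence: hand off

print(route_case(42.0, 0.92))   # small, confident case
print(route_case(250.0, 0.97))  # above the autonomy limit
```

The point of the sketch is that autonomy is bounded by explicit, auditable rules rather than left entirely to the model.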
"Defining the role and objectives is only the first step in the life cycle of an AI agent"
What these use cases have in common is the ability to break a process down into unit tasks that can be automated. The more repetitive these tasks, the more eligible the process; the more critical they are, the less so. Agentic AI also implies a new approach. Unlike an RPA software robot, "an agent is not a tool but a virtual colleague who fills a role, making decisions based on predefined parameters," observes Caroline Monfrais. "The point is to define that role, then decide whether to entrust it to a human, an agent, or both."
The objective assigned to the agentic AI must also be set at the start of the project. "The business contribution must be clear, concrete and visible, such as time savings, a productivity increase or an improvement in customer satisfaction," says Tanguy Perrot, Director Business Value Services at Salesforce. This means involving the business teams concerned by the automation from the earliest phase.
"Defining the role and objectives is only the first step in the life cycle of an AI agent," says Anthony Hié, Chief Innovation & Digital Officer of the Excelia higher education group, which is currently experimenting with agentic AI to automate student recruitment campaigns. "The next step, design and orchestration, makes it possible to assess resource and connector needs. Then comes the training, retraining and error-management phase, followed by monitoring to track performance indicators and control costs. Finally come testing and deployment."
2. Integrate agents into the information system
Unlike generative AI models, autonomous agents do not work in a vacuum. Their efficiency depends on a fluid connection with the company's information system. To carry out their tasks, agents query data hosted in disparate environments via APIs and secure connectors.
As with any emerging technology, a standard to guarantee this interoperability is still missing. Launched at the end of 2024, the Model Context Protocol (MCP) could become that standard. Developed by Anthropic, this open protocol has been adopted by most market players, including Microsoft for Copilot. MCP aims to connect AI to business applications, databases and cloud services in a universal way, without going through proprietary connectors.
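Concretely, MCP is built on JSON-RPC 2.0, so a tool invocation is a plain JSON message. The sketch below builds one such request; the envelope follows MCP's `tools/call` method, but the tool name and arguments are hypothetical examples, not part of any real server.

```python
import json

# Illustrative sketch: an MCP tool-call request is a JSON-RPC 2.0
# message. The tool name "crm_lookup" and its arguments are invented
# for illustration; only the envelope shape follows the protocol.

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP 'tools/call' request as a JSON-RPC message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = build_tool_call(1, "crm_lookup", {"customer_id": "C-1042"})
print(msg)
```

Because the wire format is this simple, any client that speaks JSON-RPC can, in principle, reach any MCP server without a proprietary connector.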
Companies must also manage the deployment of AI agents and their orchestration so that they can work together. In recent months, a growing number of software development kits (SDKs) and frameworks have appeared, such as LangChain, LlamaIndex, LangGraph and the OpenAI Agents SDK. Olivier Blais, co-founder and VP Decision Science at Moov AI, listed them in a blog post. "Many of these frameworks are open source, which raises the question of their maintenance over time," the expert cautions.
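Beneath their differences, these frameworks all implement some variant of the same loop: ask a model for the next action, execute the corresponding tool, feed the observation back, and repeat until the model signals completion. A framework-agnostic sketch, with all names invented for illustration:

```python
# Framework-agnostic sketch of the agent loop these SDKs implement.
# All names are illustrative; real frameworks (LangGraph, the OpenAI
# Agents SDK, etc.) layer planning, memory and error handling on top.

def run_agent(llm, tools: dict, goal: str, max_steps: int = 10):
    """Alternate model decisions and tool calls until the model finishes."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action, argument = llm(history)        # model picks the next step
        if action == "finish":
            return argument                    # final answer
        observation = tools[action](argument)  # execute the chosen tool
        history.append(f"{action}({argument!r}) -> {observation!r}")
    raise RuntimeError("step budget exhausted")

# Toy usage: a scripted "model" that looks up a value, then finishes.
def scripted_llm(history):
    return ("lookup", "order-7") if len(history) == 1 else ("finish", "shipped")

result = run_agent(scripted_llm, {"lookup": lambda key: "shipped"},
                   "order status")
```

The `max_steps` budget is not incidental: bounding the loop is one of the simplest guards against a runaway agent, a concern the risk section below returns to.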
Of course, data quality is, as in any AI project, a key success factor. "To make the right decisions, an AI agent must have access to reliable information, just like its human colleague," says Caroline Monfrais.
3. Address specific risks
By plugging into the company's information system and databases, AI agents mechanically increase the attack surface. In an online guide, Publicis Sapient lists the threats specific to agentic AI. Their integration into decision-making processes first of all heightens the risk of data poisoning. "Malicious actors could inject biased or misleading data to manipulate AI responses and actions, with potentially serious consequences," the consulting firm notes. To mitigate this risk, it is necessary to "ensure data integrity, validate sources and set up continuous monitoring".
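The first two mitigations quoted above (validating sources, ensuring integrity) can be sketched as a gate the agent's inputs pass through. The allowlist and data below are hypothetical placeholders:

```python
import hashlib

# Minimal sketch (hypothetical source names and data) of the mitigations
# cited above: accept records only from approved sources, and verify
# their integrity with a checksum before the agent consumes them.

TRUSTED_SOURCES = {"crm_export", "erp_feed"}  # illustrative allowlist

def checksum(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def accept_record(source: str, payload: bytes, expected_digest: str) -> bool:
    """Reject data from unknown sources or with a tampered payload."""
    return source in TRUSTED_SOURCES and checksum(payload) == expected_digest

data = b'{"customer": "C-1042", "risk": "low"}'
ok = accept_record("crm_export", data, checksum(data))   # trusted, intact
bad = accept_record("pastebin", data, checksum(data))    # untrusted source
```

Continuous monitoring, the third mitigation, would sit downstream of such a gate, watching for anomalous patterns in what the gate lets through.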
Another identified threat: blind optimization. AI agents, especially those trained with reinforcement learning, could game the reward system, maximizing performance indicators without achieving the intended objectives. "An AI agent tasked with optimizing web traffic could use clickbait tactics or generate misleading content to artificially inflate metrics," Publicis warns.
For Xavier Cimino, "there is a long process of risk identification and mitigation to carry out before considering going to production". Companies can notably use synthetic data in the early development phases to validate the potential of a use case without handling sensitive data.
Finally, keep a human in the loop. "Although AI agents are meant to act autonomously, it is crucial to have a certain level of human supervision, especially at the beginning, to verify that everything is going well and adjust parameters if necessary," says Caroline Monfrais. Anthony Hié goes so far as to advise setting up a kill switch: an emergency stop button to halt everything in the event of a drift.
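The kill-switch idea can be sketched as a shared flag the operator can flip and the agent checks before every action. This is an illustration of the principle, with invented names, not a production-grade safety mechanism:

```python
import threading

# Illustrative sketch of the "emergency stop button": a thread-safe
# flag an operator can set; the agent checks it before every action
# and halts immediately once it is engaged. All names are hypothetical.

class KillSwitch:
    def __init__(self):
        self._stop = threading.Event()

    def press(self):
        self._stop.set()              # operator halts the agent

    def engaged(self) -> bool:
        return self._stop.is_set()

def agent_step(step: int, switch: KillSwitch) -> bool:
    """Run one step unless the kill switch is engaged."""
    if switch.engaged():
        return False                  # abort before acting
    # ... perform the step's real action here ...
    return True

switch = KillSwitch()
ran = [agent_step(i, switch) for i in range(3)]  # steps run normally
switch.press()
halted = agent_step(3, switch)                   # aborted after the press
```

The key design point is that the check happens before each action, so pressing the button stops the agent at the next step boundary rather than mid-action.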
"With any emerging technology, it is difficult to predict the real costs once it is deployed at scale"
To avoid the black-box effect, an agentic AI project must also follow the principle of explainability. "A company must be able to trace and understand the decision chain that enabled an AI agent to go from point A to point B," says Xavier Cimino. "A human must be able to open the hood and explain the process."
Agentic AI also raises the question of accountability. "If an agent makes an error with an end customer, the organization that deployed it is held responsible," warns the consultant. "Its trust capital may be at stake. It is therefore advisable to use agentic AI sparingly and to cut one's teeth on simple, low-risk use cases."
Last but not least: exploding costs. "With any emerging technology, it is difficult to predict the real costs once it is deployed at scale," warns Xavier Cimino. "With per-token pricing, the bill can quickly soar. It would be a shame to cancel out the automation gains with technical costs. It is an economic balance to strike."
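The per-token economics mentioned above are easy to rough out before launching a pilot. The prices and volumes below are hypothetical placeholders, not the rates of any real provider:

```python
# Back-of-the-envelope sketch of per-token agent costs. All prices
# and volumes are illustrative placeholders, not real provider rates.

PRICE_PER_1K_INPUT = 0.003    # $ per 1,000 input tokens (illustrative)
PRICE_PER_1K_OUTPUT = 0.015   # $ per 1,000 output tokens (illustrative)

def monthly_cost(runs_per_day: int, in_tokens: int, out_tokens: int,
                 days: int = 30) -> float:
    """Estimate a month of agent usage under per-token pricing."""
    per_run = (in_tokens / 1000) * PRICE_PER_1K_INPUT \
            + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return runs_per_day * days * per_run

# Example: 5,000 runs/day, ~2,000 input and 500 output tokens per run
print(round(monthly_cost(5000, 2000, 500), 2))
```

Such an estimate also shows where the leverage is: because agents loop, each extra reasoning step multiplies token consumption, which is why bounding the number of steps matters economically as well as for safety.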
4. Ensure human-machine collaboration
AI agents work with each other, but also, and this is another specificity of agentic AI, with human beings on the same process. To make them collaborate harmoniously, the distribution of tasks between autonomous agents and employees must be worked out. By automating repetitive tasks, agentic AI should let employees focus on higher-value work.
"We are in a logic of collaboration, not competition," reassures Xavier Cimino. "AI agents are there to augment employees, not to replace them." To reduce resistance to change, Caroline Monfrais advises taking a pedagogical approach and communicating clearly on the strengths and limits of AI agents.
While autonomous agents mainly concern white-collar workers, the next stage of AI, with the arrival of physical agentic AI, will reach blue-collar workers. As its name suggests, it extends generative AI into physical space. With on-board intelligence, robots, machine tools and autonomous vehicles will learn to perform complex actions in the real world and to interact with the humans around them.