Described as revolutionary and disruptive, AI agents have become the new pillar of innovation in 2025. As with any cutting-edge technology, this development is not without trade-offs.
Will this new blend of intelligence and autonomy really usher in a new era of efficiency? Or does the ability of agents to act independently widen the attack surface for cybercriminals, making them potentially risky?
Last February, France hosted the AI Action Summit, an international meeting convened to examine and encourage innovation in artificial intelligence. In recent years, AI has developed at high speed, carving out an ever larger place in the daily lives of the French. With it has come a market dominated by the United States, followed by China, with Europe playing only a modest part.
Unlike the generative AI tools with which we have become familiar, AI agents represent the next frontier of artificial intelligence. While widely known generative AI tools such as ChatGPT, Gemini, Grok, Le Chat, or Claude process user inputs and generate text from learned data patterns, agentic AI goes further, making decisions and taking actions autonomously to achieve specific objectives. Think of RoboCop or I, Robot and you will not be far from reality. Rather unsettling, isn't it?
In the hands of organizations, however, these agents could revolutionize entire sectors, from the automation of customer interactions to the seamless management of logistics operations. More down-to-earth applications of AI agents include customer service bots, personal assistants, financial advisers, and even autonomous vehicles.
Take the example of a personal assistant powered by agentic AI: this type of agent can rely on data-driven decision-making, machine learning, and logical reasoning to book flights, write and send emails, and even automate complex workflows without human intervention. According to the Ipsos-CESI survey, nearly nine out of ten French people say they have heard of AI, and 39% use it. Among these users, 15% use it for work or study, and 33% in their private lives.
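To make the idea concrete, here is a minimal, illustrative sketch of such an assistant's decide-and-act loop. Everything in it is a hypothetical stand-in: the tool names, the stub functions, and the keyword-based pick_tool heuristic, which a real agent would replace with an LLM reasoning over the tool descriptions.

```python
# Illustrative sketch of an agentic assistant's plan-act loop.
# All tools are stubs; a real agent would call actual travel or email APIs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

def book_flight(request: str) -> str:
    # Stub: a real tool would query an airline or travel-booking API here.
    return f"flight booked per request: {request!r}"

def send_email(request: str) -> str:
    # Stub: a real tool would authenticate against a mail server and send.
    return f"email sent per request: {request!r}"

TOOLS = [
    Tool("book_flight", "reserve plane tickets", book_flight),
    Tool("send_email", "draft and send an email", send_email),
]

def pick_tool(goal: str) -> Tool:
    # Hypothetical keyword heuristic; in practice an LLM selects the tool
    # and its arguments based on the tool descriptions.
    for tool in TOOLS:
        if tool.name.split("_")[1] in goal.lower():
            return tool
    raise ValueError("no suitable tool for this goal")

def run_agent(goals: list[str]) -> None:
    # The defining trait of agentic AI: the loop decides and acts on each
    # goal without a human approving every individual step.
    for goal in goals:
        tool = pick_tool(goal)
        print(tool.run(goal))

run_agent(["book a flight to Paris on Friday",
           "send an email confirming the meeting"])
```

The loop itself is the essential point: the agent selects and executes a tool for each goal without per-step human approval, which is precisely what enables both the efficiency gains and the security risks discussed below.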
The adoption of AI agents is gaining momentum, with sectors such as finance, healthcare, and retail increasingly integrating these autonomous technologies to streamline their operations and improve the customer experience.
Within organizations, agentic AI can offer unrivaled efficiency and optimized operations. In doing so, these agents can reduce human error and increase productivity.
But, as with any revolutionary technological advance, this clear shift from reactive assistance to proactive automation carries its share of risks. The most worrying is undoubtedly the potential widening of the attack surface available to cybercriminals.
Agentic AI could increase the sophistication, personalization, and scale of social engineering and phishing attacks, particularly over email. Generative AI has already strengthened phishing capabilities, enabling targeted and convincing attacks at scale. According to Egress threat intelligence, AI is mentioned in 74.8% of the phishing toolkits analyzed, with 82% referencing deepfakes.
Agentic AI pushes this threat even further by introducing an element of automation into these attacks. This could give rise to more dynamic, adaptive, and persistent phishing campaigns, capable of learning from and reacting to user behavior in real time. The autonomous nature of these AI agents would allow attackers to deploy and manage large-scale phishing operations with minimal human intervention, making detection and prevention even more complex.
In addition, 63% of cybersecurity leaders express concern about the use of deepfakes in cyberattacks, while 61% worry that cybercriminals will exploit generative AI chatbots to improve their phishing campaigns. These statistics highlight the severity of the situation and the need for robust countermeasures.
Although organizations must take these risks seriously, it is important to note that they are not entirely new. In many cases, they represent an evolution of existing threats already encountered with previous forms of AI. The key difference lies in the increased scale and automation that agentic AI brings.
Organizations wishing to adopt agentic AI must weigh all of these risks and carry out an in-depth assessment before deployment. Such measures may include implementing strong authentication, including multi-factor authentication, while ensuring that software is regularly updated and patched.
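As one concrete illustration of the authentication measure, here is a minimal sketch of time-based one-time-password (TOTP) verification, a common building block of multi-factor authentication, using the pyotp library. The user name, issuer, and secret handling are simplified placeholders, not a production design.

```python
# Minimal TOTP sketch with pyotp (pip install pyotp).
# Secret storage and user management are deliberately simplified.
import pyotp

secret = pyotp.random_base32()  # generated once per user at enrollment
totp = pyotp.TOTP(secret)

# URI to encode as a QR code for an authenticator app (names are placeholders).
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# At login, the server checks the code the user types in.
code = input("Enter the 6-digit code from your authenticator app: ")
print("Access granted" if totp.verify(code) else "Access denied")
```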
From an organizational standpoint, clear guidelines and ethical frameworks should be established for the operation of AI agents. Finally, organizations must invest in ongoing employee training on AI-related security.
Clearly, AI agents carry both significant advantages and risks. A multifaceted approach combining human expertise and intelligent AI technologies is therefore crucial, particularly in the field of email security, where social engineering attacks are constantly evolving; a mix of in-depth training and AI-powered detection systems offers the most complete defense.
By leveraging AI's ability to identify the subtle patterns characteristic of AI-generated content, while relying on human vigilance and critical thinking, organizations will be better armed to defend themselves against even the most sophisticated phishing attempts.
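To sketch what that detection side might look like, here is a toy email classifier built with scikit-learn. The four training samples are invented placeholders; a real system would train on large labeled corpora and combine this text signal with sender reputation, link analysis, and other features.

```python
# Toy phishing-detection sketch: TF-IDF features plus logistic regression.
# Training data below is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice is attached, please review at your convenience.",      # legitimate
    "Quarterly report draft for tomorrow's meeting, feedback welcome.",  # legitimate
    "Urgent: verify your account now or access will be suspended.",      # phishing
    "Your payment failed, confirm your card details immediately.",       # phishing
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Immediate action required: confirm your credentials now."
score = model.predict_proba([incoming])[0][1]
print(f"Phishing probability: {score:.2f}")
```

In practice, high-scoring messages would be quarantined or routed to a human reviewer, which is exactly where the combination of AI detection and human judgment described above comes into play.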
As we enter this new era of agentic AI, staying informed, adaptable, and proactive on security will be essential to reap the benefits while limiting the risks.