When a cybersecurity consultant discovers that the real risk of defensive AI is not technical but cognitive.
I’m going to tell you something that most cybersecurity experts will never admit in public.
For fifteen years, I have lived in the bowels of cyber governance. Risk analyses, NIS2 compliance, continuity plans, maturity audits: my daily life is helping organizations see what they refuse to look at. At the same time, I develop applications, I maintain websites, I write. I always have. But since AI agents moved from promise to everyday tool, the scale has changed. Today I can launch in parallel tasks that would once have taken me weeks. On paper, I have become ten times more productive.
And I am wiped out.
Not tired like after a long day of work. Empty. The kind of fatigue a good night’s sleep can’t fix, because it doesn’t come from the body. It comes from the head. My brain no longer spends its days just executing; it arbitrates, validates, contextualizes, and decides, continuously, across flows I could never have processed two years ago.
Simon Willison, co-creator of Django and one of the most respected voices in software engineering, said exactly the same thing on Lenny Rachitsky’s podcast in early April 2026: piloting code agents in parallel mobilizes all of his twenty-five years of experience, and by eleven in the morning his cognitive day is over. Nathan Baschez, entrepreneur and author, reacted by describing the same shift: programming, once comparable to a calm puzzle game, now resembles a permanent debate in which the machine’s raw power runs up against the limit of his capacity to absorb information and decide.
These accounts come from the world of software development. But the phenomenon is exactly the same in cybersecurity.
Cognitive debt: the real bug of 2026
The concept has a name. It has been circulating in the academic literature since February 2026, notably advanced by Margaret-Anne Storey, professor of software engineering at the University of Victoria, in an article that made the rounds of the tech community before being presented as a keynote at the ICSE TechDebt conference. Cognitive debt is the deficit in understanding that accumulates when the speed of production exceeds the human capacity to maintain a coherent mental model of what is happening.
Technical debt lives in the code. Cognitive debt lives in the heads of the people who are supposed to answer for it.
Translate this into a modern SOC. A cybersecurity analyst in 2026 no longer correlates alerts by hand. They supervise autonomous agents that detect, investigate, propose remediations, and generate attack scenarios. The tooling has taken over execution. But the human must still keep in mind the business context, the regulatory framework, the incident history, the real state of the infrastructure, and arbitrate between several probabilistic recommendations without always understanding how they were produced. The analyst no longer does the work; they orchestrate autonomous work. And their brain has not been upgraded for that.
The result is predictable, and it is already measurable. Either the analyst blindly follows the machine, the phenomenon human-factors researchers call automation bias, documented in aviation since the 1990s but exploding in scale with AI agents. Or they find themselves paralyzed by a volume of data that no human mind can prioritize at the required speed. In both cases, the decision slows down at the exact moment it should speed up.
Why this is a business risk, not an HR problem
I still see executive management treating cyber-team burnout as a workplace well-being issue. That is a categorization error. It is an operational risk.
Take a classic incident from 2026: an AI agent deployed in your SIEM reports a suspicious correlation between an abnormal DNS flow and an RDP connection from an unusual workstation. The alert is there, correctly generated, technically explainable. But the analyst who receives it has just validated eighty recommendations in three hours. Their mental model is saturated. They classify it as a false positive. Two weeks later, the exfiltration is confirmed.
Who is responsible? The tool worked. So did the human, within the limits of what their brain could absorb. The problem lies in the interface between the two, in what no one has measured, modeled, or even anticipated.
Meanwhile, cybersecurity budgets keep ballooning on the tooling side. More agents, more coverage, more escalations. We pay for increasingly powerful technologies that shift the load onto humans who are already saturated. MTTR, the mean time to respond to an incident, stagnates or rises in organizations that have massively deployed defensive AI without adapting their human processes. And the senior profiles, the ones with the experience to arbitrate amid the noise, are starting to leave. Not because the job no longer interests them, but because supervising autonomous agents eight hours a day is not the job they chose.
What I did
I found an answer. There is nothing technological about it.
Every morning, I get up at four o’clock. Not to work. To sit down and do nothing but focus my attention for an hour. I have been practicing concentration meditation for years, a structured mental training that develops the ability to hold sustained focus on a single object without getting caught up in the flow of thoughts. My partner and I even opened a center dedicated to these contemplative practices in Gran Canaria, where we live. This is not an aside to my professional life. It is what makes it possible.
I say this without any intent to proselytize. What interests me here is the mechanism. Concentration meditation does the exact opposite of what AI demands of the brain all day long. Where AI agents impose attention dispersed across parallel streams, meditation trains the ability to return to the signal in the noise. Where supervising agents encourages permanent reactivity, meditation builds a space of perspective that keeps you from getting lost in the perceived emergency. This is not wellness. It is decision-making hygiene.
Top athletes have been doing it for a long time. Fighter pilots go through mental preparation protocols before each flight. Surgeons practice centering techniques before operating. In all these professions, cognitive load is the limiting factor, not technical skill. Cybersecurity in 2026 has just entered this category, and it doesn’t know it yet.
What businesses should learn from it
I’m not saying that every CISO should start meditating at dawn. I am saying that organizations that deploy defensive AI without questioning the cognitive capacity of their teams are creating a risk that they do not measure.
Concretely, this means integrating cognitive training into cyber skills-development programs, on the same footing as technical training. It means measuring mental workload as an operational indicator, not as a quality-of-work-life topic. It means setting clear limits on the number of agents one person supervises simultaneously, a limit Willison himself ran into when he realized he could not sustain that pace all day. It means requiring vendors to express their agents’ output in the language of business decisions, not in technical probabilities that no one has time to contextualize. And it means creating decision-recovery protocols, protected time slots where nothing is supervised and the mental model the day has fragmented can be rebuilt. A sketch of what such a workload indicator could look like follows.
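To make "mental workload as an operational indicator" concrete, here is a minimal sketch in Python, under stated assumptions: it counts the recommendations an analyst validates in a sliding one-hour window and enforces a cap on concurrently supervised agents. Everything in it is hypothetical, the class name CognitiveLoadMonitor, the thresholds MAX_CONCURRENT_AGENTS and MAX_VALIDATIONS_PER_HOUR, and the routing logic; no SOC platform I know of exposes such an API today.

```python
# Hypothetical sketch: cognitive load as an operational indicator.
# All names and thresholds are illustrative assumptions, not part of
# any existing SIEM or SOC product API.
from collections import deque
from dataclasses import dataclass, field
from datetime import datetime, timedelta

MAX_CONCURRENT_AGENTS = 4      # illustrative cap per analyst
MAX_VALIDATIONS_PER_HOUR = 30  # illustrative saturation threshold


@dataclass
class CognitiveLoadMonitor:
    """Tracks one analyst's decision throughput over a sliding window."""
    analyst: str
    supervised_agents: set[str] = field(default_factory=set)
    validations: deque = field(default_factory=deque)  # timestamps

    def assign_agent(self, agent_id: str) -> bool:
        """Refuse new agent assignments beyond the concurrency cap."""
        if len(self.supervised_agents) >= MAX_CONCURRENT_AGENTS:
            return False  # route the agent to another analyst instead
        self.supervised_agents.add(agent_id)
        return True

    def record_validation(self, at: datetime | None = None) -> None:
        """Log one validated recommendation."""
        self.validations.append(at or datetime.now())

    def is_saturated(self, now: datetime | None = None) -> bool:
        """True once the analyst exceeds the hourly decision threshold."""
        now = now or datetime.now()
        cutoff = now - timedelta(hours=1)
        while self.validations and self.validations[0] < cutoff:
            self.validations.popleft()  # drop decisions older than 1h
        return len(self.validations) >= MAX_VALIDATIONS_PER_HOUR


monitor = CognitiveLoadMonitor(analyst="a.martin")
for _ in range(30):
    monitor.record_validation()
if monitor.is_saturated():
    print("Saturation: route new recommendations to a rested analyst.")
```

The exact numbers matter less than the principle: saturation becomes a routing signal, like queue depth in any other operational system, instead of something the analyst is left to absorb silently.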
The AI wave in cybersecurity is a reality. Autonomous agents in SOCs are here to stay. The question is no longer whether to adopt them. The question is whether organizations will continue to invest exclusively in the power of their machines while ignoring the only resource that really decides: the brains of those who pilot them.
In 2026, defensive AI does not lack intelligence. It lacks humans trained not to fry their brains while using it.