The AI divide does not run between generations but between company cultures: those that foster curiosity and those that impose rigid thinking. Here's why.
Until now, we have been framing the AI Gap debate entirely wrong. Since ChatGPT appeared in November 2022, the dominant narrative has been one of generational struggle: tech-savvy Gen Z employees use AI tools effortlessly, while their older colleagues wrestle with prompts and struggle to adapt. It's a gripping story, but much of it is fictional.
The Real Metrics of AI Success
Data from the St. Louis Fed reveals a more nuanced reality. Certainly, younger employees show higher adoption rates of artificial intelligence. But the strongest predictor is not age; it is education level: people with a bachelor's degree are 30% more likely to use AI than those without one. Even more striking, a study of teachers found that trust in AI correlates with knowledge of the technology, not with age, gender, or professional experience. The conclusion is telling: the AI gap is not about birth date, but about approach to learning.
This matters because companies make strategic decisions based on a flawed premise. They invest in reverse mentoring programs, in which younger employees train their elders, or, in the worst case, tacitly write off a whole segment of their staff as resistant to AI. In doing so, they overlook the real divide: the one between workplaces that encourage experimentation and those that do not.
Let's look at what really drives AI adoption in practice. It is not technical mastery: the user interfaces of most AI tools are deliberately simple. Nor is it generational fluency with new technologies: many millennial and Generation Z employees feel overwhelmed by the possibilities AI offers. Rather, it is something far more fundamental: the willingness to learn playfully, to experiment on a small scale, and to fail often. Because learning and failure are inseparable.
Create a space for collective discovery
This is precisely where most organizations go wrong. They introduce AI tools with training sessions and best practice guides, treating implementation as a software update rather than a cultural change. But AI is not Excel: it is a technology that rewards curiosity, iteration, and collective discovery. The employees who succeed with AI are not necessarily the youngest or most tech-savvy; they are the ones who feel confident enough to experiment.
A study conducted by IBM highlights this challenge: 33% of companies cite limited AI skills as their biggest barrier to adoption. However, skills that are lacking can be taught. It is more difficult, but also more valuable, to create an environment in which a 55-year-old accountant feels as capable of experimenting with AI as a 25-year-old developer. An environment where mistakes are viewed as learning opportunities, not career risks. An environment where knowledge from individual experiences can be brought together to form organizational knowledge.
Of course, AI is not without risks: we have seen countless reports of hallucinations and errors. But this is precisely where the key lies: curiosity and critical thinking must go hand in hand. Successful organizations not only create spaces for experimentation, but also the necessary safeguards: secure testing environments, clear training measures on AI risks, and a culture that encourages playful learning without neglecting necessary caution. No one should blindly trust AI tools, even in an experimental setting.
This is the paradox of our time. AI tools have never been more accessible, but most organizations are creating intentional barriers to adoption. They demand a return on investment before embarking on experimentation. They punish mistakes instead of rewarding learning. They isolate AI initiatives in silos instead of encouraging cross-functional experimentation.
The companies that will win the AI race share a fundamental belief: there is no silver bullet yet. We are all improvising along the way. Organizations that act on this reality – with structured freedom to experiment, robust knowledge-sharing mechanisms, and a work culture that values curiosity over credentials – gain a sustainable competitive advantage.
The goal is a workplace where every employee has dedicated time to experiment with AI tools relevant to their role. Where there is a shared record of AI experiments: what worked, what didn't, what surprised us. Where the janitor's discovery of using AI for predictive maintenance is valued as highly as the data scientist's new algorithm. This is not a dream; it is what AI pioneers are already doing.
Roping up through the AI fog
The real divide in AI is not between young and old, between those more or less familiar with the technology, or even between graduates and non-graduates. It lies between organizations that unleash the inherent curiosity of their staff and those that remain trapped in classic IT shackles. Between companies that treat AI as a conventional IT project with a fixed end date and those that treat it as an ongoing cultural change.
At this turning point, where AI models are improving every day and use cases are multiplying exponentially, the question is not whether your staff can adapt, but whether you give them the green light and support to try. Organizations that answer “yes” to this crucial question will not only bridge the foggy AI gap; they will master it brilliantly.
The future does not belong to the companies with the youngest staff or the biggest AI budgets, but to those who have the courage to say: "We don't know what lies in the fog, but we know our rope will hold strong." Let's go.