95% of AI POCs fail, according to MIT: can we really trust that figure?

The AI of tomorrow: hybrid, open source, and autonomous

We will examine the MIT report on the failure of AI POCs, set the context, critique the methodology used, and identify what really fails … and what is quietly working.

Imagine this: you are the CEO of a large company, investing massively in artificial intelligence. You have deployed an ambitious AI strategy (POCs, training, recruitment), and now you read in Fortune that 95% of generative AI pilot projects fail. The figure lands like a thunderclap. Your investors worry. The market reacts: AI-related stocks fall. A muted fear sets in, that of an AI bubble about to burst.

This scenario is not fiction. It is exactly what happened in August 2025, following the publication of an MIT report attributed to a group named Project Nanda. The message was clear, brutal, viral: generative AI fails in companies. But is that really what the report says? And above all, can we trust a study that speaks of failure when thousands of employees already use AI every day, often in secret?

In this article, we will dissect the MIT report, analyze the context of its release, critique its methodology, and above all, help you understand what really fails and what, on the contrary, already works, in silence.

The market context: a time bomb in a fragile environment

  • A nervous market, looking for a scapegoat

The MIT report arrived amid high tension in the markets, where AI, carried by sometimes excessive enthusiasm since 2022, had supported, even inflated, the valuations of the tech sector. This dynamic collided with the disappointing announcements around GPT-5, reviving an underlying fear of disenchantment.

It was in this climate of uncertainty, with the market looking for a reason to doubt, that the study acted as a spark: the shock figure of "95% of POCs" offered a simple, punchy story, immediately exploited by short sellers and amplified by media chasing virality.

  • Disinformation invites itself to the table

Add to that a dose of disinformation. Exaggerated rumors about the AI spending of giants like Meta, relayed without verification, fueled distrust. The MIT report, although presented as academic, was interpreted selectively, even distorted.

And yet, almost no one had read the report.

The methodology of the report: is the MIT study solid?

  • Accessibility and credibility: where is the report?

Here is a first warning sign: the report was difficult to access. You had to fill out a form to obtain it.

This document is not a peer-reviewed academic publication, but a field report presented as a scientific study, relayed in the press via summaries largely inspired by a Fortune article.

The group behind the report, Project Nanda, describes itself as working to “build the fundamental infrastructure for an internet of AI agents”. Interesting, but not exactly a classic MIT research laboratory. It is not a group of artificial intelligence professors, but rather an experimental initiative.

Already, its scientific credibility is in question.

  • A sample too small, too biased

The report is based on:

– Interviews with 52 organizations

– 153 responses to a survey

– An analysis of public announcements!

That is an extremely limited basis from which to draw conclusions about all companies using generative AI.

Imagine that you wanted to understand the behavior of French consumers, and you surveyed 150 people in Paris. You might get some insights, but you could not say: “95% of French people prefer tea to coffee.”

Worse: there is no data on the size of the companies, nor on the roles of the respondents. Were they developers? Executives? Interns? We don’t know.
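To see just how little 153 responses can support a headline percentage, here is a quick back-of-the-envelope margin-of-error calculation. It assumes a simple random sample at a 95% confidence level (an assumption the report does not even guarantee, which makes the real uncertainty larger):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a proportion estimated
    from n responses. p = 0.5 is the worst case (maximum variance)."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(153)
print(f"+/- {moe * 100:.1f} percentage points")  # prints "+/- 7.9 percentage points"
```

So even under ideal sampling conditions, any percentage drawn from this survey carries roughly an eight-point uncertainty, and that is before accounting for the far bigger problem of a non-representative sample.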

And above all, the report does not distinguish between types of AI. It talks about “generative AI”, but in reality it seems to focus on copilot tools (like GitHub Copilot or ChatGPT for daily tasks), not on autonomous AI agents, which are the future.

  • Definition of success: a bar set too high?

The report says that 95% of POCs fail because they have no measurable impact on productivity or the income statement.

But how do we measure this?

According to the authors, they looked for mentions in press releases and SEC filings. In other words: if there is no official communication, it does not count.

That is like saying a doctor has never saved a life, on the grounds that he has never published a medical study.

Many productivity gains are local, silent, informal. An employee saves 2 hours a day thanks to an AI assistant, but his company does not communicate it. Is that a failure? No. It is a real gain, but an invisible one.
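To make such an invisible gain concrete, here is a minimal sketch with hypothetical numbers (2 hours saved per day, 220 working days per year, a fully loaded cost of 50 EUR per hour; none of these figures come from the report):

```python
# Hypothetical illustration: value of a "silent" productivity gain that
# never appears in a press release or an income statement.
hours_saved_per_day = 2      # assumed gain for one employee
working_days_per_year = 220  # assumed working days
hourly_cost_eur = 50         # assumed fully loaded hourly cost

annual_value_eur = hours_saved_per_day * working_days_per_year * hourly_cost_eur
print(annual_value_eur)  # prints 22000 (EUR per employee, per year)
```

Multiply that across a few hundred employees and the "failed" pilot is quietly generating millions in value, none of it visible to the report's methodology.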

  • A blatant bias in favor of marketing?

The report says that 50 % of AI budgets go to sales and marketing.

It’s absurd.

All credible studies (McKinsey, Forrester, BCG) show that AI investments are relatively balanced across IT, operations, HR, finance, and marketing.

This 50% figure strongly suggests that the sample was biased, probably over-concentrated on marketing or sales teams.

And if the authors spoke mainly to marketers, then of course they heard about chatbots, content generation, automated campaigns … but not about AI in the supply chain, predictive maintenance, or risk management.

  • Fuzzy terminology: what is a “pilot”?

The word “pilot” is used without a clear definition. In some companies, a pilot is a test with 5 users. In others, it is a large-scale deployment with impact measurements.

And “implementation”? The report does not specify whether this means a tool used by one department, or a system integrated across the entire ERP.

Without clear definitions, the conclusions are blurry. And a figure like “95%” becomes a statistical fiction.

What the report omits: the shadow AI economy

  • The truth that nobody wants to see

Here is the most fascinating part of the report, and the least relayed by the media: 90% of employees regularly use language models (LLMs), even though only 40% of companies have purchased subscriptions.

In other words: employees use AI in secret.

They use ChatGPT and other tools in secret, without telling their managers. This is the shadow AI economy. And yet they use these tools several times a day, every day.

“AI does not fail. It succeeds, but outside the company’s control.”

  • The value goes to the individual, not to the organization

The report admits a crucial truth: currently, the value of generative AI accrues mainly to the individual, not to the company.

If the developer saves time, the marketer increases his content production, and the financial analyst summarizes his reports faster, where does the created value go?
For the moment, nowhere. Because the company has not yet built the processes, defined the policies, or chosen the tools to capture, measure, and industrialize these gains at the scale of the organization.

Why do POCs really fail?

If it is not because AI is ineffective, then what blocks the move to production?

  • Lack of strategic vision

Many companies launch POCs without a clear objective: a “let’s do AI!” project, without knowing what to automate, why, for whom, or what benefits to expect.

A POC must address a business pain point, not a technology fad.

  • Lack of AI governance

No usage policy. No security framework. No risk management. CIOs and CISOs are often absent from the conversation.

“We cannot industrialize what we do not control.”

  • Culture of secrecy and lack of training

Fearing that using AI would make their jobs obsolete, some employees use it in the shadows: without a framework, without support, and often … without oversight.

  • Difficulty moving from the pilot phase to production

This is the big leap: from POC to production. You need to:

– Integrate AI into existing systems

– Train users

– Measure the impact

– Validate operating budgets

– Obtain approval from management

And often, no one is responsible for this transition.

What if we were asking the wrong question?

Perhaps the real problem is not that AI fails, but that we measure success badly.

– If an employee saves 10 hours a month, it’s a gain.

– If customer service responds faster, it’s a gain.

– If a product launches 3 weeks earlier, it’s a gain.

But if these gains are not consolidated, centralized, or monetized, then they do not appear in financial reports.

The MIT report does not measure AI’s failure, but the failure of AI governance.

Conclusion: towards a responsible, controlled, and industrial AI

The MIT report has one merit: it asked an important question. But it provided an answer that was too simplistic, too biased, too sensationalized.

Yes, many AI POCs do not make it into production. But not because AI does not work: because companies have not yet learned to tame it.

The real challenge is not technical. It is organizational, cultural, strategic.

So what to do?

  1. Stop fearing shadow AI. Acknowledge it, govern it, train your teams.
  2. Clearly define what a “success” is – not only in figures, but in agility, in satisfaction, in innovation.
  3. Create a dedicated role: Chief AI Officer, or head of AI governance.
  4. Move from experimentation to industrialization. AI is not a gadget. It is a strategic lever.
  5. Invest in autonomous AI agents, not just in copilots.

In summary: AI has not failed. It is our way of adopting it that must evolve.

The MIT report is a mirror. It does not show that AI fails. It shows that we are not yet ready.

But the good news? AI is already there, in offices, in emails, in reports. It works, silently, effectively.

The role of management is not to stop it, but to guide it, structure it, and align it with the strategy.

Because tomorrow, it will no longer be the companies with the most POCs that win. It will be the ones with the most AI agents in production.

And that day, the real success will not be in a report. It will be in your results.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
