Why do some data teams deliver in a few hours what others take weeks to produce?

A silent revolution is shaking up data: AI is accelerating the shift to “everything as code”, and legacy tools are already slowing down those who delay.

A discreet but relentless revolution is under way within data teams. At first glance, nothing has changed: same dashboards, same pipelines, same speeches about transformation. Yet beneath this deceptive continuity, the gap is already widening. On one side, teams that AI powers. On the other, teams it barely helps. Not because the former are superior, but because they work in a language the machine can read. From this divide come the decline of legacy environments, the rise of “everything as code”, the collapse of production costs and, ultimately, a new hierarchy of performance.

A silent fracture

Productivity gaps between data teams are not a matter of talent or budget. They come down to a simple question: can AI read the technical environment in which the team operates?

Over the past eighteen months, a dividing line has emerged. On one side, teams working in SQL, YAML or Python, all versioned in Git. For them, AI acts as an immediate lever: it reads a project, grasps the logic, writes a model, adds tests, updates the documentation. What took hours yesterday now takes minutes.

On the other, teams still stuck in proprietary graphical interfaces: drag-and-drop ETL, BI tools configured through menus, business logic locked away in unreadable XML files. These teams lack neither skill nor rigor. But their environment remains a black box to AI. And at a time when AI is establishing itself as the foundation of productivity, being unreadable amounts, in practice, to being slow.

The difference, then, lies less in the people than in the technical framework in which they work.

Real gains, but very unevenly distributed

AI has established itself in tech teams. The 2025 DORA Report indicates that 90% of professionals in the sector use it daily, and the 2025 Stack Overflow Developer Survey finds that 84% of developers use it or plan to. Meanwhile, a meta-analysis of experiments at Microsoft, Accenture and several Fortune 100 companies estimates average productivity gains at 26%, with peaks of 39% among junior profiles.

But this average hides the essential. Some teams save more than ten hours per week; others, almost nothing. AI is not a magic wand, it is an amplifier: it accelerates what is well structured and stumbles over what is not. In short, it rewards good environments as much as it exposes bad foundations.

Legacy tools were not designed for this world

The problem with Informatica or Talend is not that they are bad tools, but that they were designed for a world where humans click, configure and correct, not one where an AI agent collaborates with them.

When business logic is locked inside proprietary visual flows, stored as indigestible XML, with no native Git, no automated tests and no usable documentation, the LLM is blind. And humans remain hampered too: every task stays manual, every change laborious, every automation difficult.

According to dbt Labs, legacy ETL teams spend 80% of their time on maintenance and only 20% on value creation. For its part, Macif, by migrating from Informatica to dbt, reduced its processing times from two hours to five minutes and its licensing costs from one million to 200,000 euros per year. Datafold, finally, estimates the legacy ETL migration market at more than ten billion dollars, a clear sign of technical debt becoming industrial.

The new paradigm: “everything as code”

We talk a lot about “BI as code”, but the movement is broader. What data teams are living through in 2026 is nothing less than the transition to a model where every component of the stack is written as text, and therefore becomes readable, understandable and manipulable by an AI agent.

Infrastructure goes through Terraform or Pulumi; pipelines through dbt or SQLMesh. BI evolves with Evidence or Hex, in Markdown and SQL versioned in Git. Orchestration relies on Airflow or Dagster, data quality on Great Expectations or Soda. As for documentation, dbt docs, MkDocs or a README, it stops being the poor relation of the system, because everything textual can be enriched, corrected and maintained by an LLM.

The logic is always the same: code that is versioned, testable, readable by an LLM and automatable by an agent. It is a chain: if one link is missing, the agent stops and the human takes back the mouse.
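To make the contrast concrete, here is a minimal sketch of what “pipeline logic as text” can look like in a dbt-style schema file. The model and column names are hypothetical, chosen purely for illustration; the point is that the model, its documentation and its tests all live in plain text that Git can version and an LLM can read.

```yaml
# models/schema.yml — hypothetical dbt schema file (illustrative names).
version: 2

models:
  - name: daily_orders
    description: "Orders aggregated per day, one row per order_date."
    columns:
      - name: order_date
        description: "Calendar day of the order."
        tests:
          - unique
          - not_null
      - name: order_count
        description: "Number of orders placed that day."
        tests:
          - not_null
```

Nothing here is hidden in a visual flow or a binary export: an agent can add a column, extend the tests or refresh the descriptions with an ordinary text edit, and the change shows up as a reviewable diff.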

The cost of production is collapsing, and what we deliver is changing in nature

What used to take days can now be resolved in hours. Exploring a new codebase, once one to two weeks of work, can now take thirty minutes. Writing unit tests for a dbt pipeline, once a full day, can be completed in one to two hours, documentation included. Needless to say, this revolution is not without consequences.

This acceleration produces two major effects

The first reshuffles the “make vs. buy” deck. For a long time, companies bought SaaS tools not because they were ideal, but because building in-house cost too much time and money. They paid for convenience, even if it meant using only a fraction of the features. But when an agent can build a tailor-made, maintainable and versioned front end in a few hours, the equation changes.

The second effect concerns the deliverables themselves. The dashboard is no longer the reflex answer. In practice, some 80% of dashboard requests are one-off questions that a governed conversational platform handles faster, better and without maintenance debt. And when an interface is genuinely needed, building it bespoke often becomes simpler and cheaper than configuring a generic tool. Tomorrow's data products will be less standardized dashboards than governed chatbots, data apps, auto-generated reports or APIs, designed in a few days for a specific need.

Time works against wait-and-see

At every technological turning point, the temptation is the same: wait. Observe, delay, postpone in the name of prudence. But history is harsh on fence-sitters. E-commerce, mobile, cloud: each time, those who delayed ended up chasing leaders who had already pulled away.

The good news is that this transformation can be gradual. No one migrates an entire stack in a weekend. But you can start with one pipeline, one report, one set of tests, then measure the gains. Very often, that is enough to convince.

If you manage a data team or its technological roadmap, four priorities are essential:

1. Switch to “everything as code”

This is the condition for everything else. As long as business logic remains locked in proprietary interfaces, AI cannot amplify anything.

2. Rethink deliverables

The majority of dashboard requests are one-off questions that a governed conversational platform addresses better, faster and without maintenance debt.

3. Reevaluate “make vs. buy” trade-offs

Each license renewal must now be examined in light of the new production costs.

4. Train teams in AI workflow

Working in a terminal, reviewing generated code, building a robust context: the data engineer of 2026 writes less code and supervises more. This change in posture requires structured support.

Ultimately, everything is at stake now. Each quarter spent in a stack that AI cannot read is a quarter of lost productivity, and one more quarter behind those who have already switched. Those who make the shift today will not just save time; they will build a lasting lead.

Sources

DORA Report 2025 (dora.dev)

Stack Overflow Developer Survey 2025

Meta-analysis MIT Economics — Copilot Experiments (Microsoft, Accenture, Fortune 100)

dbt Labs — “Talend & Informatica Migration”

Datafold — “The Hidden Opportunity in Legacy ETL Migrations”

La Macif — Informatica migration to dbt (dbt Labs case study)

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
