Who could doubt that AI is an accelerator of innovation? Yet an essential truth must be acknowledged: its performance depends fundamentally on the data that feeds it.
Everywhere in the business world, new practices and AI adoption are under discussion, and it is easy to see why. Its many use cases already create an unprecedented competitive advantage for those who put them into practice, increasing productivity and efficiency. Tomorrow, agents will even be able to run parts of the business and generate new opportunities. But it is clear that not all companies derive the same benefits. Some take a considerable lead while others remain stuck at the planning stage, underlining the gap that can exist in the concrete implementation of AI.
Who could doubt that AI is an accelerator of innovation? In a few months, we have moved from the foundations of large language models (LLMs) and the processes that optimize their results (known as RAG) to the advent of specialized AI agents. To understand why, an essential truth must be acknowledged: the performance of AI depends fundamentally on the data that feeds it. This is as true for AI models as for AI agents. To reach the level of critical value that companies expect from them, sometimes called agentic intelligence, AI agents must have access to quality contextual data covering the entire organization.
In my view, this equation can be solved by applying the following three principles:
SaaS applications as we know them will no longer exist. Make way for business agents
For years, the SaaS model succeeded in bridging the gap between human needs and those of data systems. For many, the daily experience of company data still happens exclusively through the prism of these SaaS applications.
However, AI completely redefines the way humans access, govern and collaborate around their data. The future lies in vendor-agnostic business agents, operating on data-centric architectures and driven by agentic intelligence.
Because corporate data architectures are complex, business applications have so far provided user-friendly graphical interfaces (GUIs) around proprietary data silos. That made sense as long as the users were exclusively human. But with agentic AI we have reached a turning point: it is now the data contained in these applications that matters more than the applications themselves, making the core mission of SaaS obsolete.
Ironically, the user-oriented interfaces of SaaS applications used to be an asset. In the AI era, however, they no longer bring any value to agents, which access data directly, without human intervention. It is therefore pointless to integrate these interfaces into data platforms, since agents simply pass through them, offering intelligent, personalized and conversational interactions instead.
And this is not a matter of the distant future: it is already happening today, and it is accelerating. McKinsey projects that by 2030, 30% of business work will be done by agents, while Deloitte estimates that 50% of companies will have adopted them by 2027. The result: richer insights, with fewer direct human interactions.
Likewise, on-premises data keeps growing, and this trend will continue. It will even be an accelerating factor in companies' adoption of AI. Not what you expected? Let me explain.
The disappearance of on-premises data? On the contrary, the world's most critical data is hosted locally!
Since the rise of the cloud, on-premises data has often been presented as obsolete. The opposite is true: it feeds some of the most critical applications in the world, and with AI its role will grow. As Arthur Lewis, president of Dell Technologies' Infrastructure Solutions Group, noted in March 2024, "83% of global data is stored on premises." That data will not disappear, even as its volume increases. This is a fact you have to accept if you want to understand how to help companies adopt AI.
Why does this data remain on premises? Because customers and businesses need it there. Certain sectors, such as financial services, insurance, healthcare and the public sector, operate entirely or mainly on premises, not for lack of modernity but because it is consistent with their regulatory requirements and operational needs. The data they manage is highly sensitive, and they want to keep control of it while accessing it where it resides. The ability to deploy AI on premises, using local data and models, will only increase the need for on-premises data.
Knowing who owns the data will become essential
AI brings opportunities, but also new rules and new risks. These issues are not new to data professionals; AI only amplifies them. Data security and governance will become crucial as AI establishes itself. With AI as an accelerator, what was critical becomes essential.
The ownership and choice of corporate data architectures matter for analytics, and even more for AI. The ability to query multiple sources without having to move or clone the data becomes essential, as the sketch below illustrates. Deploying AI in a hybrid environment then becomes the key to guaranteeing real and effective governance. Not to mention that data sovereignty now stands as a non-negotiable requirement; without it, projects may be hampered or abandoned for lack of regulatory compliance.
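To make "querying in place" concrete, here is a minimal sketch using DuckDB as one example of an engine that can join sources where they sit. The file paths, table and column names are purely illustrative assumptions, not a reference to any specific platform or to the architectures discussed here.

```python
import duckdb  # in-process engine able to query files and external sources in place

# Hypothetical sources: a Parquet extract sitting in a data lake and a local CSV export.
# Neither source is copied or cloned; the engine reads them where they reside at query time.
result = duckdb.sql("""
    SELECT c.region,
           SUM(o.amount) AS total_revenue
    FROM read_parquet('lake/orders/*.parquet') AS o        -- illustrative path
    JOIN read_csv_auto('exports/customers.csv') AS c       -- illustrative path
      ON o.customer_id = c.customer_id
    GROUP BY c.region
    ORDER BY total_revenue DESC
""").df()

print(result)
```

The point is not the particular engine but the pattern: the data stays under its existing governance and location, and only the query, not a copy, travels.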
Clearly, we are living through a pivotal moment for rethinking data architecture and making it AI-ready, built on three fundamental pillars: access, collaboration and governance.
The fact remains that many companies have not yet been able to act, not knowing how to bring AI to their data rather than the reverse. Little by little, however, a model is emerging for analytics: the lakehouse, an architecture that combines the advantages of a data warehouse and a data lake, and that AI can consume naturally. We then speak of "lakeside AI", an AI powered by the lakehouse. This is indeed the most direct and secure path to guarantee that the AI agents a company develops can access any data source or SaaS application. More than a sound technical architecture, it is a genuine business strategy, preparing tomorrow's AI on solid foundations centered on data, governance and interoperability.
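As a rough illustration of what an agent powered by a lakehouse might look like, here is a minimal sketch, assuming a hypothetical governed table stored as Parquet and a stubbed-out model call; the table name, columns and the `answer_with_llm` helper are invented for the example, not any vendor's API.

```python
import duckdb  # stands in for any SQL engine able to read lakehouse tables in place


def answer_with_llm(prompt: str) -> str:
    # Placeholder for whatever model endpoint the company actually uses (intentionally stubbed).
    raise NotImplementedError("plug in your LLM client here")


def retrieve_context(question: str) -> str:
    """Pull governed, in-place lakehouse data relevant to the question (illustrative query)."""
    rows = duckdb.sql("""
        SELECT customer_id, churn_risk, last_contact
        FROM read_parquet('lakehouse/gold/customer_health/*.parquet')  -- hypothetical table
        ORDER BY churn_risk DESC
        LIMIT 20
    """).fetchall()
    return "\n".join(str(r) for r in rows)


def agent_answer(question: str) -> str:
    """Combine the user's question with lakehouse context before calling a model."""
    context = retrieve_context(question)
    prompt = f"Context from the lakehouse:\n{context}\n\nQuestion: {question}"
    return answer_with_llm(prompt)
```

The design choice this sketch reflects is the one argued above: the agent reads governed data where it lives, rather than relying on a SaaS interface or a duplicated extract.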
To sum up: useful AI will not appear by magic. Adopting a lakehouse approach means laying the rails for relevant, secure and genuinely transformative AI. It is a strategic choice, not just a technical one.




