AI is entering a new era: embodied, equipped with memory and causality. The issue is no longer the size of the models, but their ability to understand and act in the real world.
Embodied machines with real-world insight, reacting in real time
Artificial intelligence is at a pivotal moment. After a first phase in which LLMs delivered spectacular results, and continue to have a profound impact across all sectors, fundamental limits are emerging. Current models, increasingly trained on their own outputs, face what researchers call “model collapse”: a progressive decline in quality, alongside soaring costs, diminishing returns and conceptual fragility.
The next stage of AI will be a matter of understanding: interaction with the real world, memory, causality, and an intelligence capable of acting responsibly. We are heading towards this “embodied” AI, closer to living things than to mere statistical computation.
The emergence of physical AI
The next breakthrough will come from machines capable of acting, adapting and remembering. The first humanoid robots are already here: companies such as Figure AI and Agility Robotics have deployed humanoids in warehouses. But their main challenge is not mechanical; it is cognitive.
Vision-language-action (VLA) models show progress in linking instructions to sensor data, but they often remain fragile. Move one object behind another, and the system fails. Rearrange the room, and it gets confused. The transition from reaction to anticipation will mark the beginning of physical AI: intelligence anchored in experience.
Memory as the basis of intelligence
Today’s robots may look like strong, agile athletes, but they forget their past experiences. An animal, by contrast, relies on experience to move through a cluttered room and avoid danger, without recalculating everything each time. Memory is the foundation of intelligence: without it, there is neither continuity nor understanding of the world.
From 2026, the most advanced systems will begin to integrate memory, reasoning and basic world models. They will remember past interactions, adapt in real time, and show early signs of planning. These systems will process sensor data, visual information and temporal sequences as naturally as current models process text, developing a multimodal intelligence that seamlessly combines vision, sensing, language and understanding of the environment.
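To make the idea of remembering past interactions concrete, here is a minimal toy sketch in Python. It is not tied to any robotics framework; the `EpisodicMemory` class, its sensor tuples and its nearest-neighbour recall are illustrative assumptions, standing in for the far richer memory systems the article anticipates.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    observation: tuple  # simplified sensor reading, e.g. (left_distance, right_distance)
    action: str
    outcome: float      # success signal from that past interaction

class EpisodicMemory:
    """Toy episodic memory: recall the most similar past situation."""

    def __init__(self):
        self.episodes = []

    def store(self, observation, action, outcome):
        self.episodes.append(Episode(observation, action, outcome))

    def recall(self, observation):
        # Nearest neighbour by squared distance over the sensor values.
        if not self.episodes:
            return None
        return min(
            self.episodes,
            key=lambda e: sum((a - b) ** 2 for a, b in zip(e.observation, observation)),
        )

memory = EpisodicMemory()
memory.store((0.9, 0.1), "turn_left", 1.0)   # obstacle on the right: turning left worked
memory.store((0.1, 0.9), "turn_right", 1.0)  # obstacle on the left: turning right worked

# A new situation close to the first episode is resolved from experience,
# without recomputing anything from scratch.
best = memory.recall((0.8, 0.2))
print(best.action)  # -> turn_left
```

The point of the sketch is the contrast the article draws: a purely reactive system would treat `(0.8, 0.2)` as an entirely new problem, while a system with memory reuses what a similar past situation taught it.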
World models and causality
The most significant breakthrough, from 2026 onwards, will be AI systems that build world models: digital representations of physical reality that enable rapid adaptation to new environments. These systems will develop an intuitive understanding of physics, similar to biological intelligence, grasping concepts such as weight, balance and structural integrity, and managing spatial relationships without explicit mathematical programming. They will thus be able to reason about how objects can be manipulated and how spaces can be navigated.
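The core mechanism behind a world model can be sketched in a few lines: an internal dynamics function predicts what would happen, and the system "imagines" rollouts before acting. This is a deliberately simplified illustration; `predict_next`, its friction constant and the candidate pushes are all made-up assumptions, not a description of any real system.

```python
def predict_next(position, velocity, push, dt=0.1, friction=0.9):
    """Toy internal 'world model': predict the next state of a pushed object."""
    velocity = friction * velocity + push * dt
    position = position + velocity * dt
    return position, velocity

def plan_push(start, target, candidate_pushes, steps=20):
    """Pick the push whose imagined rollout ends closest to the target."""
    def final_error(push):
        pos, vel = start, 0.0
        for _ in range(steps):
            pos, vel = predict_next(pos, vel, push)
        return abs(pos - target)
    return min(candidate_pushes, key=final_error)

# The system never touches the real object: it tries each push in imagination
# and selects the one whose predicted outcome is best.
best_push = plan_push(start=0.0, target=1.0, candidate_pushes=[0.5, 1.0, 2.0, 4.0, 8.0])
print(best_push)  # -> 1.0
```

This is what separates anticipation from reaction: instead of acting and correcting afterwards, the agent evaluates consequences inside its model first, which is also what makes rapid adaptation possible when the environment changes.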
Industrial applications will drive this change. Manufacturing systems will adapt to new production requirements within hours. In buildings, in logistics, in agriculture, AI will combine physical, safety, environmental and temporal data to make more robust and safer decisions.
Security, ethics and shared intelligence
As these systems become more advanced, security and ethics must be built into the design to enable innovation, not hinder it.
Post-quantum security measures and democratic governance mechanisms will be integrated into AI architectures: the former transparently, to protect against current and future threats; the latter contextually, to enable rapid adaptation to changing regulations while maintaining consistent ethical standards.
At the same time, advances in resource-efficient AI will allow these technologies to be widely disseminated to local, industrial, medical or educational stakeholders, thanks to systems adapted to specific operational needs. This democratization will stimulate innovation in previously underserved sectors.
The future does not belong to ever-larger models. The real turning point will come when a machine demonstrates the adaptive intelligence we observe in living things: understanding what matters, ignoring the rest, and acting precisely in a complex environment.