Improvements to Gemini 2.5, a new Deep Think mode, the rollout of AI Mode across the United States… Google is hitting hard this year with a raft of generative AI announcements.
Google I/O, Google's annual flagship conference for developers, was held on May 20 at the company's headquarters in Mountain View. As in 2024, the group put the emphasis on generative artificial intelligence. From web search to shopping to code, here are the new features that caught the JDN's attention.
A new update to Gemini 2.5
Google is significantly improving its Gemini 2.5 Pro and Flash models. The Flash version will use 20 to 30% fewer tokens. Both 2.5 Flash and 2.5 Pro gain native audio output, for an even more natural voice conversation experience.
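To give a sense of what a 20 to 30% token reduction can mean for an API bill, here is a quick back-of-the-envelope calculation. All figures below (request volume, tokens per request, price per million tokens) are made-up example values for illustration, not Google's actual pricing.

```python
# Hypothetical illustration of the announced 20-30% output-token reduction
# for Gemini 2.5 Flash. Volumes and prices are assumed example values,
# not Google's real pricing.

def monthly_cost(tokens_per_request: float, requests: int,
                 price_per_million: float) -> float:
    """Cost of the output tokens generated over a month of traffic."""
    total_tokens = tokens_per_request * requests
    return total_tokens / 1_000_000 * price_per_million

baseline_tokens = 1_000   # average output tokens per request (assumed)
requests = 100_000        # monthly request volume (assumed)
price = 0.60              # dollars per 1M output tokens (assumed)

before = monthly_cost(baseline_tokens, requests, price)
for reduction in (0.20, 0.30):
    after = monthly_cost(baseline_tokens * (1 - reduction), requests, price)
    print(f"{reduction:.0%} fewer tokens: ${before:.2f} -> ${after:.2f}")
```

With these assumed numbers, a 20% reduction turns a $60 monthly output bill into $48, and a 30% reduction into $42; the saving scales linearly with traffic.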
But the big news is the launch of Deep Think, an advanced reasoning mode for Gemini 2.5 Pro capable of considering several hypotheses before responding. The mode applies a "parallel thinking" process that evaluates different lines of reasoning. For now, Deep Think will be accessible only to trusted testers, ahead of a general rollout in the coming months. Finally, Google also announced support for the MCP protocol in the Gemini API SDK.
Google's ultimate goal is to turn Gemini into a "universal AI assistant" capable of understanding the user's context, planning, and acting on their behalf on any device. "We are working to extend our best multimodal foundation model, Gemini 2.5 Pro, to make it a world model capable of planning and imagining new experiences by understanding and simulating aspects of the world, as the human brain does," explains Demis Hassabis, CEO of DeepMind.
Native audio generation with Veo 3 and the arrival of Imagen 4
Google took advantage of I/O to announce two new models: Veo 3 for video generation with soundtracks and Imagen 4 for image generation. Veo 3 becomes the first model capable of generating not just moving images (video) but also sound and dialogue. The model can thus incorporate background noise (birds, cars, planes, etc.) and conversations between characters. Realism also goes up a notch, with a model likely to be state of the art in photorealism at release.
On the Imagen side, the fourth version arrives with significant improvements in quality and precision. The model generates images with exceptional sharpness. Imagen 4 can also produce images at resolutions up to 2K, ideal for good-quality prints (one of the main limitations of previous image generation models). Both models are available today.
Generalization of AI Mode in the United States
On the search side, Google announced the rollout of AI Mode (not to be confused with AI Overviews) to all users in the United States. Accessible from a new dedicated tab on search results pages and in the Google app, AI Mode lets users research a topic in depth using generative AI. The AI provides sourced, contextualized summaries.
AI Overviews in 200 countries
Google is deploying its new generative AI search experience to 200 countries and now more than 40 languages. Google claims that search with AI Overviews has already driven an increase of more than 10% in queries. The Mountain View giant also says that AI Overviews results are now generated at a speed close to that of traditional search.
A new shopping experience
Finally, Google took the opportunity to present agentic checkout, a new shopping feature that will let users automatically track a product's price and complete the purchase autonomously. By defining criteria (size, color, budget), the user can ask the AI to monitor prices and finalize the purchase at the right moment. The feature will launch in the coming months in AI Mode in the United States. For now, only the Ticketmaster, StubHub and Resy platforms are reported to be compatible.
Improvements to Gemini in Workspace (Gmail, Meet, Deep Research)
Gemini in Workspace keeps improving. In Gmail, Gemini will be able to generate smarter email replies adapted to your personal tone. The AI will draw on your previous replies to learn your writing style.
Google Meet also takes a leap forward with the integration of instant voice translation. It is now possible to translate conversations between different languages in real time, while preserving each speaker's voice, tone and nuances. The feature is initially available to Google AI Pro and Ultra subscribers in English and Spanish.
Finally, in the Gemini app, the Deep Research mode gains new capabilities. It is now possible to include your own PDFs or images in advanced research. Your data is then cross-referenced with all the sources crawled on the web to produce an even more complete, personalized report.
A new plan at 250 dollars per month
Google introduced Google AI Ultra, a new premium plan at $249.99 a month in the United States with extended AI capabilities. The plan lines up against the $200-per-month premium offerings from OpenAI and Anthropic. The subscription gives priority access to the most capable AI models and to experimental features. It provides, for example, exclusive access to Veo 3 and to Gemini 2.5 Pro's Deep Think mode. Google AI Ultra users will also be the only ones able to access Project Mariner, Google's equivalent of OpenAI's Operator.
In parallel, the old AI Premium plan, now renamed Google AI Pro, gains new features at no extra cost. Google AI Pro subscribers will notably get access to Flow's video editing capabilities with the Veo 2 model, as well as early access to Gemini in Chrome (the assistant will be available on demand in the browser).
Jules, a code assistant
Finally, Google in turn presented its autonomous coding agent: Jules. Like OpenAI's Codex, Jules can plug directly into existing code repositories, understand the full context of a project, and carry out complex development tasks. Like the OpenAI tool, it runs all its tasks asynchronously in a virtual machine. The tool, based on Gemini 2.5 Pro, is available in public beta, with no waitlist.