With Gemini 3, Google launches its “universal” AI (and lots of new features)

Alongside its new AI model, designed to operate in every environment, Google is enriching its application with agentic capabilities and multiplying its tools for professionals and the general public.

Google is hitting even harder than usual. Among the announcements of November 18: the release of Gemini 3, its best generative AI model to date, which combines cutting-edge multimodal capabilities (simultaneous understanding of text, images, video and code) with unrivaled reasoning power; several agentic features for the Gemini application and its digital workplace suite; and, finally, a new, even smarter vibe-coding platform.

Above all, with Gemini 3, Google is not just announcing a new model: the company now claims a truly “universal” AI, designed to behave consistently across all its products. This is a first. From launch day, Gemini 3 is deployed simultaneously in Google Search, in the Gemini application, and in all developer tools, via AI Studio, Gemini CLI and the API. AI is no longer a component bolted onto products, but a unified platform, capable of operating equally well in the browser, in a mobile application or in a development environment.
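To make the API availability concrete, here is a minimal, standard-library-only sketch of what a request to the Gemini API's `generateContent` endpoint might look like. The model identifier `gemini-3-pro-preview` is an assumption for illustration; check AI Studio for the exact id.

```python
# Build (but do not send) a Gemini API generateContent request using
# only the Python standard library. The model id below is assumed.
import json
import urllib.request

API_ROOT = "https://generativelanguage.googleapis.com/v1beta"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble the HTTP request for a single text prompt."""
    url = f"{API_ROOT}/models/{model}:generateContent"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-goog-api-key": api_key,
        },
    )

req = build_request("gemini-3-pro-preview", "Explain multimodality briefly.", "YOUR_KEY")
# urllib.request.urlopen(req) would send it; the generated text comes back
# in the response JSON under candidates[0].content.parts[0].text.
```

The same payload shape works from the Gemini CLI or any HTTP client; only the model id changes between Gemini versions.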

Gemini 3 Pro, an ultra-versatile reasoning model

Gemini 3 is explicitly presented by Google as its smartest model, and as the world’s best for multimodal understanding and on-the-fly code generation. The model scores 1501 Elo on the LMArena leaderboard, surpassing the previous record holder by 50 points… none other than Gemini 2.5 Pro. In multimodal analysis on MMMU-Pro, the model reaches 81%. In code generation it also shines: 76.2% on SWE-Bench Verified (in a single attempt), a clear improvement over Gemini 2.5 Pro’s 59.6%. Even on competitive programming problems (LiveCodeBench Pro), Gemini 3 Pro scores higher (2439 Elo) than its predecessor (1775).

“Gemini 3 Pro outperforms 2.5 Pro on all major benchmarks by a significant margin,” says Tulsee Doshi, product manager for Gemini models at Google DeepMind. Beyond the figures, it is above all the perceived quality that emerges from the first tests: “The answers are more useful, more concise, better formatted.” The main novelty lies not only in the capabilities of the model, but in the deployment strategy adopted by Google. For the first time, Gemini 3 arrives simultaneously in the Gemini application, in the AI Mode of Google Search (more than 2 billion monthly users for AI Overviews), and in the developer tools via AI Studio and the Gemini API.

Gemini 3 Deep Think: parallel thinking taken to the extreme

Google is also introducing Gemini 3 Deep Think, an advanced reasoning mode that pushes the model’s capabilities even further. This mode, which will be available in the coming weeks for Google AI Ultra subscribers, takes a different approach from classic LLM reasoning. “Unlike usual reasoning that generates thinking tokens sequentially, Deep Think creates a parallel reasoning structure,” explains Koray Kavukcuoglu, CTO of DeepMind. “The model explores different hypotheses in parallel, then uses its capabilities to select the best conclusion.”

Deep Think is not designed for everyday queries, but for problems that lie at the boundaries of human and AI capabilities, Google says. The company highlights several categories of problems where this mode excels: advanced mathematics, complex algorithms, data analysis, multi-constraint problems in engineering, etc.

As with previous releases, pricing should be made public within hours of the model’s release.

The Gemini application gains new features

With 650 million monthly active users and a tripling of daily requests over the last quarter, the Gemini application is experiencing meteoric growth. Google is taking advantage of the launch of Gemini 3 to introduce several features directly into the Gemini application.

Gemini Agent: a general agent in the Workspace suite

The most ambitious feature launched experimentally is undoubtedly “Agent” mode. The tool represents “the first step towards a true generalist agent, something that can work across your different Google products when you give it control,” said Josh Woodward, VP of Google Labs and Gemini.

“Today, if you go to Gemini and type ‘help me control my inbox’, it will give you general strategies on how to archive, delegate, or take action. Tomorrow, if you make this request with Agent enabled, it will take personalized actions on your behalf,” explains the VP.

For example, if a small business owner simply asks “help me control my inbox,” Gemini will take action for them. The agent automatically breaks the request down into identified tasks, creates a structured plan visible at a glance, and categorizes the different messages. The user can then select certain tasks to add to their Google Calendar or reject them, while the agent drafts replies ready to send to clients.

Visual Layout: immersive interfaces generated on the fly

Available in the Labs section of the application, Visual Layout guides the user through their query with a new experience built on interactive widgets. For example, if the user asks Gemini to help plan a trip to Italy, the model interacts with them through input elements (forms, checkboxes, sliders, etc.). It’s a genuine return of the graphical interface.

Dynamic View: to facilitate learning

Also available experimentally from launch, Dynamic View takes Gemini 3’s agentic capabilities even further by coding complete interactive experiences. For example, if the user asks a question about Van Gogh, Gemini 3 generates a mini live website that lets them navigate step by step through the painter’s career, with detailed explanations and contextual information for each period.

Google Antigravity: Google’s new vibe coding platform

Another big piece of news: Google announces the arrival of a new vibe-coding platform. Named Antigravity, it fundamentally rethinks the way developers interact with AI. Antigravity’s major innovation lies in the integration of a “computer use” capability, a feature that was sorely lacking in existing code agents. Concretely, the agent can now operate autonomously across three environments simultaneously: the code editor, the terminal, and an integrated Chrome browser.

Where traditional code agents are limited to generating code that the developer must then test manually, the Antigravity agent can itself launch the application in the browser, interact with the interface as a user would, identify bugs or usability problems, then return to the editor to correct the code. At each stage, the developer can intervene, modify tasks, add comments or specific instructions, then let the agent resume its work. Antigravity integrates Gemini 3 Pro, but also the Gemini 2.5 model for computer use (browser control) and Imagen 3 for image generation. The public preview will arrive soon on Mac, Windows and Linux.
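The edit-run-observe loop described above can be sketched abstractly. This is a toy, not Antigravity’s implementation: `run_app` and `propose_fix` are hypothetical stand-ins for launching the app in the browser and for the model editing the code after observing a bug.

```python
# Abstract sketch of an agentic edit/run/observe loop: the agent "runs"
# the app, inspects the observation, patches the code, and retries until
# the check passes or a step budget is exhausted.
def run_app(code: dict) -> str:
    """Stand-in for launching the app in a browser and observing it."""
    return "ok" if code.get("bug_fixed") else "button does nothing"

def propose_fix(code: dict, observation: str) -> dict:
    """Stand-in for the model rewriting the code after seeing the bug."""
    return {**code, "bug_fixed": True}

def agent_loop(code: dict, max_steps: int = 5) -> tuple[dict, int]:
    for step in range(1, max_steps + 1):
        observation = run_app(code)
        if observation == "ok":
            return code, step
        # A developer could intervene here: edit tasks, add instructions,
        # then let the agent resume.
        code = propose_fix(code, observation)
    return code, max_steps

fixed, steps = agent_loop({"bug_fixed": False})
print(steps)  # 2: one failing run, one fix, then a passing run
```

The key design point is that the loop closes automatically: the agent observes the effect of its own edits instead of handing untested code back to the developer.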

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
