Antigravity: we tested Google’s AI IDE

Launched in November 2025, Google Antigravity promises an experience redesigned around AI. An IDE that is struggling to attract developers.

Google Antigravity has not supplanted Cursor or Windsurf. The integrated development environment launched by Google has been anything but a runaway success. With major technical problems at launch and quotas considered stingy despite the free tier, the IDE is struggling to win over the community. Should we give it a chance, or is Antigravity a project doomed to disappear? To form an objective opinion, the Journal du Net decided to test the tool two months after its release.

The initial goal: an agentic IDE

Announced on November 18, 2025 alongside Gemini 3, Antigravity was pitched as a breakthrough in the IDE landscape. Where Cursor or Copilot assist the developer, Google promised an “agent-first” platform capable of delegating complex tasks to autonomous agents. The architecture rests on two distinct interfaces: a classic editor view (a fork of VS Code) for assisted development, and a manager view that orchestrates several agents in parallel. These agents control the editor, the terminal and an embedded Chrome to write, test and validate code. Google’s bet: a complete AI that can code, “see” interfaces and iterate until it reaches the desired result.

At launch, Antigravity only worked with the Gemini models (Pro, Flash, Deep Think). Google then opened the ecosystem to Claude Sonnet 4.5 and Opus 4.5, renowned for their coding performance, as well as to OpenAI’s GPT-OSS-120B. The IDE is available on Windows, macOS and Linux, and Google also integrates the Model Context Protocol (MCP).

Antigravity can be used for free with a personal Google account, with quotas the publisher describes as “generous” (a term disputed by user feedback). For more comfortable limits, you must switch to a Google AI Pro subscription at $19.99 per month (quotas refreshed every five hours) or Google AI Ultra at $249.99 monthly. These plans provide access to the full Google AI ecosystem: Gemini 3 Pro with Deep Think, Flow for video creation via Veo 3.1, NotebookLM with audio summaries, and Project Mariner. For teams, there is a Developer plan via Google Workspace AI Ultra for Business, with no public pricing communicated.

The JDN test

For this test, we installed Antigravity on a Windows 11 machine. First observation: the application is heavy, requiring at least 1 to 2 GB of RAM, which is a lot for an IDE. The menus are fairly convoluted and lack intuitiveness; Google’s usual polish is nowhere to be found. Was the fork redeveloped too quickly? Once launched, you select the usual working folder. With the workspace configured, the editor consists of two main windows: the classic editor and the manager, which lets you supervise the agents’ work. Everything is modular, which helps multi-screen developers work efficiently. Problem: on the move, the tool becomes unwieldy on a laptop screen. A shame.

For the test, we set out to build a small monitoring web application. It must scrape the content of several American tech sites and produce a report on the day’s tech trends. The goal: an interactive, fairly visual HTML report. To get there, we define clear rules in Google Antigravity (Customizations -> Rules -> Workspace) to guide the AI and keep it from wandering into needless complexity (see below).

Once the rules are defined, we design a prompt adapted to Antigravity. We start with Planning mode in Agent Manager which allows the model to develop a plan before taking action.

Prompt:

I want to create a tech news aggregation and analysis tool.

OVERALL GOAL:

Develop a complete web application that automatically scrapes several American tech news sites, extracts the key information, analyzes trends, and generates an interactive, visually impressive HTML report.

TECHNICAL SPECIFICATIONS:

1. BACKEND ARCHITECTURE (Python)

  - Use Beautiful Soup and Requests for the scraping

  - Target ONLY these 4 sources (verified as scrapable):

    * The Verge (Tech section): https://www.theverge.com/tech

    * VentureBeat (AI section): https://venturebeat.com/category/ai/

    * TechCrunch: https://techcrunch.com/category/artificial-intelligence/

    * Ars Technica: https://arstechnica.com/

  - IMPORTANT: Before scraping, use the built-in browser to check each site's HTML structure and identify the appropriate CSS selectors

  - Extract for each article: title, date, summary (first 150 words), source URL, author if available

  - Implement robust error handling (10 s timeout, 404 handling, rate limiting with a 2 s pause between requests)

  - Add User-Agent headers to avoid blocking

  - Temporarily store the data in a structured JSON file

  - Add a trend-analysis function that identifies the most frequent keywords (top 15)

2. REPORT GENERATION

  - Create a generator that turns the JSON data into an HTML report

  - The report must include:

    * A "Latest News" section with the 8 most recent articles (cards with thumbnail if available)

    * A "Trending Topics" section with a visual keyword cloud (wordcloud style)

    * A publication-frequency chart per source (Chart.js - horizontal bar chart)

    * A timeline of articles from the last 48 hours

    * A "Top Sources" section with an article counter per site

  - PREMIUM design is mandatory:

    * Modern theme with glassmorphism on a gradient background

    * Scroll animations (fade-in, slide-up)

    * Google Fonts typography: Inter for body text, Outfit for headings

    * Harmonious HSL palette: electric blue (#4F46E5) + purple (#7C3AED) + cyan accents

    * Cards with a subtle 3D hover effect

    * Micro-animations on buttons

3. WEB INTERFACE (HTML/CSS/JavaScript)

  - Single responsive page (mobile-first)

  - Header with logo and the title "Tech News Analyzer"

  - An "Analyze Now" button that triggers the scraping

  - Stylish animated loader during processing (spinner + text)

  - Dynamic display of the generated report with smooth scrolling

  - Dark mode by default with a light/dark toggle (moon/sun icon)

  - Footer with the timestamp of the last analysis

4. VALIDATION AND TESTING

  - CRITICAL: Use the built-in browser to:

    * Inspect each target site's HTML structure BEFORE coding the scrapers

    * Test that each scraper works individually

    * Check the report's rendering with real data

    * Validate the animations and mobile responsiveness

  - Create screen recordings of the tests for documentation

  - Check that the scraping handles timeouts and errors cleanly

5. DELIVERABLES

  - Modular Python code (one file per scraper + main.py)

  - Complete docstrings with examples

  - Separate, commented HTML/CSS/JS files

  - Detailed README.md with:

    * Installation instructions

    * Usage guide

    * Screenshots of the final result

    * Troubleshooting for common errors

  - requirements.txt for the Python dependencies

  - An appropriate .gitignore

TECHNICAL CONSTRAINTS:

- Python 3.10+

- No backend framework (Flask/Django), to keep things simple

- Everything local, no cloud deployment required

- Windows 11 compatible

DEVELOPMENT RULES:

- Follow PEP 8 strictly for Python

- Type hints required

- Comment complex sections

- Favor clarity over performance

- Each function must have a single role

- Error handling with try/except and informative logs

- The visual design must be WOW at first glance - avoid anything "basic"

WORKFLOW:

1. Create the complete Implementation Plan with a detailed architecture

2. Use the browser to analyze the HTML structure of the 4 sites

3. Develop the scrapers one by one with individual tests

4. Create the trend-analysis system

5. Develop the HTML report generator

6. Create the web interface with animations

7. Full integration tests via the browser

8. Final documentation with screenshots

EXPECTED RESULT:

An application that, in one click, scrapes the 4 sites, analyzes emerging tech trends, and generates an HTML report

Planning mode works as advertised: the agent breaks down the development stages, launches Chrome to analyze the pages to be scraped, inspects the tags via JavaScript, and requests human validation before each code execution. This friction slows the process down: the back and forth between the agent and the browser in the background makes the sequence heavier, but it secures the whole thing and eliminates the risk of runaway execution. In the end, we obtain a structured development plan, ready for the agent to execute. Reassuring, but time-consuming (around 20 to 25 minutes in total).

The plan generated by Antigravity:

Implementation Plan - Tech News Analyzer

Goal Description
Build a "Tech News Analyzer" web application that scrapes 4 major tech sites, extracts key information, analyzes trends, and presents a premium HTML report. The goal is to automate the gathering of tech news and provide a high-quality visual summary.

Proposed Changes

Architecture
- Backend: Python scripts (modular)
- Data Storage: Temporary data.json
- Frontend: Generated HTML report + simple landing page to trigger analysis
- Server: http.server customized to handle the "Analyze Now" trigger

Component: Scrapers (backend/scrapers/)
We will implement individual scrapers for each source using requests and BeautifulSoup.
- Base Scraper: A parent class or utility for common logic (User-Agent rotation, error handling, rate limiting).
- The Verge Scraper: Target https://www.theverge.com/tech. Container: div.duet--content-cards--content-card. Title: h2 a or a(href*="/tech/") (verify specific class duet--content-cards--content-card-link). Date: time. Strategy: fetch detail page for summary if not present (often missing on index).
- VentureBeat Scraper: Target https://venturebeat.com/category/ai/. Container: article. Title: h2 a. Summary: p.ArticleListing__excerpt or p.font-paragraph (available on index).
- TechCrunch Scraper: Target https://techcrunch.com/category/artificial-intelligence/. Container: li.wp-block-post or div.loop-card. Title: a.loop-card__title-link. Strategy: summary missing on index; MUST fetch detail page to get div.entry-content p or meta description.
- Ars Technica Scraper: Target https://arstechnica.com/. Container: article. Title: h2 a. Summary: p.excerpt (available on index).

Component: Analysis (backend/analysis.py)
- Trend Analysis: Load JSON data. Tokenize titles and summaries. Remove stop words. Calculate frequency distribution. Extract top 15 keywords.

Component: Report Generator (backend/generator.py)
- Template System: Use Python's string.Template or f-strings to inject data into a robust HTML structure.
- Visuals: Chart.js for the "Sources frequency" graph. Wordcloud logic (HTML/CSS based, pre-generated image, or D3.js cloud). Data injection for "Latest News" cards.

Component: Web Interface (main.py & frontend/)
- index.html: Landing page with "Analyze Now" button.
- main.py: An http.server subclass. GET /: serves index.html. POST /analyze: triggers the scraping/analysis pipeline, returns the generated report URL or content.

Verification Plan

Automated Tests
- Unit Tests: Simple assertions in tests.py to verify that scraper functions return dictionaries with the expected keys (title, url, etc.) and that the analysis function returns a list of keywords.
- Browser Testing: Use browser_subagent to visit the generated report.html, check for console errors, and verify that cards are populated.

Manual Verification
- Visual Check: Verify the "premium" look and feel (gradients, glassmorphism).
- Verify responsiveness in mobile view (using browser devtools emulation).
- Validate that the "Analyze Now" flow works from the landing page.
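To make the analysis component concrete, here is a minimal sketch of the kind of trend analysis the plan describes: tokenize titles and summaries, drop stop words, count frequencies, keep the top 15. The stop-word list and function name are our assumptions, not Antigravity's actual output.

```python
"""Sketch of the trend-analysis step described in the plan (assumed names)."""
import re
from collections import Counter

# Tiny illustrative stop-word list; a real one would be much larger
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on",
              "for", "with", "is", "at", "by", "its", "as", "new"}


def top_keywords(articles: list[dict], n: int = 15) -> list[tuple[str, int]]:
    """Return the n most frequent non-stop-word tokens across all articles."""
    counts: Counter[str] = Counter()
    for article in articles:
        text = f"{article.get('title', '')} {article.get('summary', '')}"
        for token in re.findall(r"[a-z]+", text.lower()):
            # skip stop words and very short tokens
            if token not in STOP_WORDS and len(token) > 2:
                counts[token] += 1
    return counts.most_common(n)
```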

Once the plan is generated, it’s time to act. We ask the AI to execute the plan. The system runs smoothly, writing the requested functions and files. Generating the project takes more time than with Claude Code, but the UI result is significantly more polished. The embedded Chrome brings a real advantage: the AI can see the rendering and make changes live. The first version is solid but remains basic. We then ask the AI to cluster topics by order of importance, all in French, with Gemini (via the API).
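For reference, the web-interface component from the plan (main.py as a customized http.server answering POST /analyze) could be sketched roughly as follows. The handler class and the pipeline stub are illustrative assumptions, not the code Antigravity generated.

```python
"""Sketch of the plan's customized http.server (assumed names, stub pipeline)."""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_pipeline() -> dict:
    """Placeholder for the scrape -> analyze -> generate-report pipeline."""
    return {"report": "report.html", "articles": 0}


class AnalyzerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET / serves the landing page with the "Analyze Now" button
        if self.path == "/":
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Tech News Analyzer</h1>")
        else:
            self.send_error(404)

    def do_POST(self):
        # POST /analyze triggers the pipeline and returns the report location
        if self.path == "/analyze":
            body = json.dumps(run_pipeline()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


# To run locally (blocks the terminal):
# HTTPServer(("127.0.0.1", 8000), AnalyzerHandler).serve_forever()
```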

The final UI is fully functional. Antigravity fulfilled its role. Only problem: after a few iterations, Antigravity started crashing and consuming more than 5 GB of RAM, forcing a restart to return to a stable state. Would we have achieved the same result with Cursor or Windsurf? Yes, most likely, but with a few more iterations.

Potential wasted by instability

Antigravity works. The integration of embedded Chrome to visually validate the code makes a real difference compared to Cursor or Windsurf, and Planning mode effectively structures complex tasks. But over the course of iterations, the experience deteriorates: response times lengthen, crashes multiply, and the application ends up requiring a complete restart, several times over. This chronic instability hampers daily use. Either Google rushed Antigravity’s launch without sufficient investment, or the very architecture of the VS Code fork poses structural problems the team has been unable to resolve. Too bad, because the potential is there.

For now, Antigravity remains a testing ground for curious developers, not a daily production tool. Technical instability disqualifies it for everyday use. It remains to be seen what Google intends to do with this project. The publisher has a long history of discontinuing promising products. If Antigravity follows this trajectory, it will join the Google cemetery. If, on the contrary, Mountain View seriously invests in stabilization and optimization, the IDE could become a credible alternative to Cursor and Windsurf.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
