Claude Code vs Codex CLI: What is the best autonomous code agent?

Anthropic and then OpenAI unveiled autonomous code agents driven by natural-language prompts this year. Features, models, pricing … We tell you everything.

After AI code assistants, make way for autonomous code agents. These new tools promise to quickly generate code for an entire project with minimal human intervention. Anthropic drew first in February 2025 with Claude Code; OpenAI followed in April with Codex CLI. Here is a comparison of these new agents that save developers hours of work.

Features: finer-grained control of autonomy with Codex CLI

Claude Code and OpenAI's Codex CLI are not built on the same philosophy. Anthropic offers an agent that runs locally but relies on API calls to its models. Claude Code's source code is proprietary, and only Anthropic models can be used with it. OpenAI's tool, for its part, is fully open source (Apache 2.0 license). The official tool only supports OpenAI models, but a community fork named Open Codex adds support for a multitude of third-party providers.

Claude Code runs from a terminal. It can take in all the code in a project (spread across multiple files), generate and edit code fully autonomously, and run unit tests. It also handles Git operations (commits, pull requests) very well. The tool is geared toward maximum autonomy (including making modifications automatically). Interaction happens in natural language and through simple slash commands (/init, /review). It can execute bash commands and also offers an Extended Thinking mode for more advanced reasoning.
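A typical session looks like the sketch below. The project path is hypothetical; the slash commands are the ones mentioned above:

```shell
# Launch the interactive agent from the project root
cd my-project        # hypothetical project directory
claude

# Inside the session, slash commands drive the agent:
#   /init      generate a CLAUDE.md file describing the project
#   /review    ask the agent to review code changes
# Plain natural language works too:
#   "run the unit tests, fix whatever fails, then commit"
```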

For its part, Codex CLI also runs (as its name, command-line interface, suggests) in a terminal. The principle is the same: the tool can edit files, generate code, fix bugs, or answer questions about your code. Codex CLI can also handle multimodal requests (images + text), which is useful for providing screenshots, for example.

Finally, the OpenAI agent has an additional asset: the granularity of the AI's autonomy is configurable. Three modes are available:

  • Suggest (the default mode), which allows the agent to read the code and propose modifications that require your approval,
  • Auto Edit, which allows the agent to edit code automatically but asks for authorization before running bash commands,
  • Full Auto, in which the agent reads and modifies code and executes bash commands entirely on its own. It is, however, confined to a sandbox isolated from the internet. In this mode, the agent therefore cannot install project dependencies that are not already available locally.
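These levels are selected with Codex CLI's --approval-mode flag. The commands below are illustrative; the flag spellings follow the initial Codex CLI release and may differ in later versions:

```shell
# Suggest (default): the agent reads code and proposes changes for approval
codex "explain the data model in this repo"

# Auto edit: file edits apply automatically, shell commands still ask
codex --approval-mode auto-edit "add docstrings to utils.py"

# Full Auto: edits and commands run unattended in a network-isolated sandbox
codex --approval-mode full-auto "fix the failing tests"
```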

| Functionality | Claude Code | Codex CLI |
| --- | --- | --- |
| Local installation | ✅ | ✅ |
| Open source | ❌ | ✅ |
| Third-party model support | ❌ | ✅ (via the Open Codex fork) |
| Terminal interface | ✅ | ✅ |
| Multi-file understanding | ✅ | ✅ |
| Autonomous code generation | ✅ | ✅ |
| Code editing | ✅ | ✅ |
| Unit test execution | ✅ | ✅ |
| Git management (commits, PRs) | ✅ | ✅ |
| Bash commands | ✅ | ✅ |
| "Extended Thinking" mode | ✅ | ✅ (via OpenAI's o model family) |
| Multimodal input (images + text) | ❌ | ✅ |
| Configurable autonomy levels | ❌ | ✅ |
| Natural language interaction | ✅ | ✅ |
| Internet-isolated sandbox | ❌ | ✅ (in Full Auto mode) |

Models: Claude Code limited to Anthropic AIs

On the model side, OpenAI's strategy is much more open. Codex CLI uses codex-mini, OpenAI's new model for code, as its default engine. But it is also possible to choose any model available through the Responses API (using the -m argument). Codex's real strength, though, lies in its fork, Open Codex. The community version supports OpenAI models (obviously), the Gemini models, the models available on OpenRouter, the xAI models, and finally open-source models via Ollama.
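Switching models is a one-flag change. The model and provider names below are examples, and the Open Codex flags in particular are illustrative rather than guaranteed:

```shell
# Default engine (codex-mini)
codex "refactor the argument parser"

# Any Responses API model can be substituted with -m
codex -m o4-mini "refactor the argument parser"

# The Open Codex fork extends this to third-party providers, e.g. Ollama
open-codex --provider ollama -m llama3 "refactor the argument parser"
```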

By contrast, the choice of models in Claude Code is quite restrictive. By default the agent uses Claude 3.7 Sonnet (Anthropic's best model to date, with excellent coding performance). It is possible to configure Claude Code with another model from the Anthropic family (by setting the ANTHROPIC_MODEL environment variable), but that's it. Claude Code does, however, leave the choice of inference provider: directly through Anthropic's own API, through Amazon Bedrock, or through Google Vertex AI.
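In practice, both the model override and the inference provider are controlled through environment variables. The model identifier below is one example from the Claude 3.7 family, and the Bedrock switch is shown as one of the provider options:

```shell
# Pin Claude Code to a specific Anthropic model
export ANTHROPIC_MODEL="claude-3-7-sonnet-20250219"
claude

# Or route inference through Amazon Bedrock instead of Anthropic's API
export CLAUDE_CODE_USE_BEDROCK=1
claude
```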

Per-token billing at OpenAI, while Anthropic offers a choice

On pricing, Anthropic and OpenAI take two different approaches. OpenAI simply bills model usage through the API. With the default codex-mini model, costs are $1.50 per million input tokens and $6 per million output tokens. It is thus possible to generate roughly a million lines of code for between $25 and $75 (depending on the calculation method, between 5 and 10 million tokens used).

Anthropic offers two pricing models for Claude Code. The first is per-token payment: Claude 3.7 Sonnet costs $3 per million input tokens and $15 per million output tokens, which comes to roughly $75 to $150 for a million lines of code. More interestingly, Anthropic includes Claude Code in its Max and Max+ plans. These subscriptions allow unlimited use similar to the web interface, with progressively larger request volumes: Max at $100 offers five times more queries than Pro, while Max+ at $200 offers up to twenty times more. Concretely, Max allows between 50 and 200 prompts every five hours, and Max+ between 200 and 800, depending on the complexity of the requests.
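As a sanity check on these figures, the per-token arithmetic can be sketched in a few lines of shell. The 4M-input / 6M-output token split for a million lines of code is an assumption for illustration, chosen inside the 5-10M range cited above:

```shell
# Prices in cents per million tokens, to keep shell arithmetic integer-only
# codex-mini: $1.50 in / $6 out; Claude 3.7 Sonnet: $3 in / $15 out
codex_cost=$(( (4*150 + 6*600) / 100 ))
claude_cost=$(( (4*300 + 6*1500) / 100 ))
echo "codex-mini: \$${codex_cost}  sonnet: \$${claude_cost}"
```

Both results ($42 and $102 for this split) land inside the $25-75 and $75-150 ranges quoted above.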

| Criteria | OpenAI (codex-mini) | Anthropic (Claude 3.7 Sonnet) |
| --- | --- | --- |
| Pricing model | Per token | Per token + unlimited plans |
| Input token price | $1.50 / million | $3 / million |
| Output token price | $6 / million | $15 / million |
| Estimated cost for 1M lines of code | $25-75 | $75-150 |
| Additional options | API only | Max ($100) and Max+ ($200) plans |
| Prompt limit (Max) | N/A | 50-200 prompts / 5h |
| Prompt limit (Max+) | N/A | 200-800 prompts / 5h |

Claude Code or Codex CLI: Which agent should you choose?

Claude Code has the major drawback of being a closed ecosystem, a limitation that restricts you to Anthropic models. Its pricing only becomes attractive with the Max subscription plans, which offer unlimited use at a reasonable price for developers with regular needs.

Conversely, Codex CLI is more flexible, with particularly competitive pricing from OpenAI. For occasional use, Codex CLI will clearly be the wiser choice. Finally, developers looking for the most economical option can turn to Open Codex, ideally paired with a model running locally via Ollama. An economical solution, though it requires a robust machine capable of running a high-quality code and reasoning model.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
