GPT-5-CODEX: 4 simple tips to code faster (and better) with AI

GPT-5-Codex replaces GPT-5 within OpenAI's Codex coding agent. A few techniques can help you maximize the performance you get from the model.

With GPT-5 and GPT-5-Codex, new prompting best practices are emerging while others are being confirmed. OpenAI itself offers good advice for optimizing the results obtained from its models, especially for code generation: the start-up has published a very detailed cookbook for prompting its flagship GPT-5 model.

Although OpenAI's advice was initially written for GPT-5, the recent arrival of GPT-5-Codex does not make these recommendations any less relevant. Since GPT-5-Codex is essentially a version of GPT-5 fine-tuned for code, the best practices detailed below apply equally well to both models. Whether you use GPT-5 for development tasks or have migrated to GPT-5-Codex, these techniques will help you get the most out of either model in an agent configuration.

1. Be precise in your prompts

Be precise in your instructions to GPT-5. Like GPT-4.1, the model is quite literal and will take every instruction in the prompt into account. Avoid contradicting yourself, and even more so avoid few-shot examples that contradict each other. If your prompt is too vague or contains inconsistent instructions, the model will tend to "think" in order to work out what is wrong. You will then consume more tokens and get a less precise result.

OpenAI also provides a tool for optimizing your prompts for GPT-5. Once a prompt is submitted for optimization, the AI applies corrections and returns the improved prompt.

2. Avoid firm, limiting instructions

Unlike previous models, which sometimes needed to be "pushed" with authoritarian instructions, GPT-5 and GPT-5-Codex respond better to more nuanced language. Imperative formulations in capital letters or repeated injunctions, such as "ALWAYS check the code before submitting it" or "you MUST test every function", can paradoxically hurt performance. In its eagerness to follow these strict guidelines to the letter, the model risks over-analyzing or performing redundant checks.

For example, rather than writing "you absolutely must analyze all the project files before any modification", prefer a formulation like "Where possible, examine the structure of the project to understand the context before making changes". The model will be more effective and will respond without over-thinking.

3. Encourage the model to think and act independently

This advice may seem to contradict the previous tips, but it does not. To carry out their mission, GPT-5 and GPT-5-Codex sometimes need instructions pushing them to think before acting and to act autonomously without requesting human feedback too often. This is particularly true if you want to build a complete application from scratch. In your prompt, OpenAI recommends including a section that asks the model to first reflect on what makes an excellent application, for example, before starting to generate the code.

For example, to develop a task management application:

Create a React task management application with TypeScript.

- First, think about the criteria that define an excellent productivity application
- Establish 5-6 evaluation categories (UX, performance, architecture, etc.)
- Use these criteria to design and validate your solution
- Iterate until your approach meets your standards in every category
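Rubric-style instructions like the ones above can also be assembled programmatically when you build prompts in code. A minimal sketch, assuming a hypothetical helper (`with_self_evaluation` and its wording are illustrative, not an OpenAI API):

```python
def with_self_evaluation(task: str, categories: list[str]) -> str:
    """Append 'think first' self-evaluation instructions to a base coding task."""
    rubric = ", ".join(categories)
    steps = [
        "First, think about the criteria that define an excellent application.",
        f"Establish evaluation categories ({rubric}).",
        "Use these criteria to design and validate your solution.",
        "Iterate until your approach meets your standards in every category.",
    ]
    # Base task first, then the reflection steps as a bulleted list
    return task + "\n\n" + "\n".join(f"- {s}" for s in steps)

prompt = with_self_evaluation(
    "Create a React task management application with TypeScript.",
    ["UX", "performance", "architecture"],
)
print(prompt)
```

Keeping the rubric in one place like this makes it easy to reuse the same "think before coding" preamble across different tasks.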

In addition, GPT-5 tends to ask the user for clarification too quickly when faced with uncertainty. To avoid these interruptions, OpenAI recommends adding persistence instructions telling the model to continue until the task is complete, to infer the most reasonable approach when in doubt, and to document its assumptions rather than requesting human feedback.

Example of a prompt with persistence instructions:

Refactor this React codebase to improve performance and maintainability.

- Continue the analysis and the changes until every identified issue is resolved
- If you encounter ambiguous code, choose the most likely interpretation based on context
- Document your assumptions and decisions rather than asking for confirmation
- Only finish when the refactoring is complete and functional
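If you call the model from code, persistence instructions like these can live in a reusable constant attached to every request. A minimal sketch: the payload shape loosely follows OpenAI's Responses API (`model`, `instructions`, `input`), but `PERSISTENCE` and `build_request` are illustrative assumptions, and the actual API call is only hinted at in a comment:

```python
# Reusable persistence instructions, as recommended in the article.
PERSISTENCE = """\
- Continue the analysis and the changes until every identified issue is resolved.
- If you encounter ambiguous code, choose the most likely interpretation based on context.
- Document your assumptions and decisions rather than asking for confirmation.
- Only finish when the refactoring is complete and functional."""

def build_request(task: str) -> dict:
    """Assemble a request payload pairing a task with persistence instructions."""
    return {
        "model": "gpt-5-codex",       # model name as used in the article
        "instructions": PERSISTENCE,  # system-level guidance, sent with every task
        "input": task,
    }

request = build_request(
    "Refactor this React codebase to improve performance and maintainability."
)
# e.g. client.responses.create(**request) with the official openai client
```

Separating the persistence block from the task keeps the guidance consistent across requests without rewriting it each time.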

4. Format your code conventions in XML

This last recommendation is not new; it was already advised for GPT-4.1 and even the latest Claude models. Using XML tags makes it possible to structure instructions clearly and significantly improves the model's understanding. XML is also particularly effective for providing detailed context about your code conventions. Rather than listing your rules in a dense paragraph, OpenAI recommends wrapping them in dedicated tags, such as a tag per topic. For example, to specify your naming and organization conventions:

<naming_conventions>
- Variables in camelCase (userName, isActive)
- Constants in UPPER_SNAKE_CASE (API_URL, MAX_RETRIES)
- Files in kebab-case (user-profile.js, api-client.ts)
</naming_conventions>

<code_structure>
- One function per file when possible
- Maximum 100 lines per function
- Comments in French for business logic
</code_structure>

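When conventions are already stored as data (for example in a team config file), a small helper can render them into XML-tagged prompt sections. A minimal sketch; the tag names and `xml_section` helper are illustrative choices, not names mandated by OpenAI:

```python
def xml_section(tag: str, rules: list[str]) -> str:
    """Render a list of rules as a bulleted, XML-tagged prompt section."""
    body = "\n".join(f"- {r}" for r in rules)
    return f"<{tag}>\n{body}\n</{tag}>"

# Build the two convention sections from plain Python lists
conventions = xml_section("naming_conventions", [
    "Variables in camelCase (userName, isActive)",
    "Constants in UPPER_SNAKE_CASE (API_URL, MAX_RETRIES)",
    "Files in kebab-case (user-profile.js, api-client.ts)",
]) + "\n\n" + xml_section("code_structure", [
    "One function per file when possible",
    "Maximum 100 lines per function",
])
print(conventions)
```

Generating the sections this way guarantees every tag is properly closed and keeps the conventions in sync with a single source of truth.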
Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
