Prompts: 5 context engineering techniques to gain precision

After prompt engineering, make way for context engineering. Context allows models to respond to your prompts more precisely and with more predictable behavior.

What if context were the main lever for improving the precision of LLMs? That is the theory developed by a team of researchers from the Generative Artificial Intelligence Research Lab at Shanghai Jiao Tong University. The specialists propose a methodological framework, context engineering, that allows AI models to better “understand” intentions. The result: more precise responses that are more in line with expectations. We have selected five concrete techniques drawn from this framework to improve your prompts.

1. Structure and tag your context

Structuring the context given to LLMs is one of the main optimization levers. To structure the context effectively (prompts being part of it), the researchers recommend using explicit tags (“role”, “action”, “example”, “format”, etc.). Though still underused, this technique prioritizes information and reduces the model’s cognitive load, which is why XML or YAML prompting is so effective. The researchers cite in particular the CodeRabbit code-review agent, which takes advantage of this technique.

Example:

(GOAL) Summarize the key points of a customer interview
(DATA) Full transcript (below)
(CONSTRAINTS) Summary < 200 words, neutral tone
(OUTPUT FORMAT) JSON with { "insights": [], "sentiments": [], "next_actions": [] }

Another caveat: the researchers recommend stabilizing system prompts, because even small changes, such as adding a timestamp at the start of the prompt, can invalidate the entire prompt cache.
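A minimal sketch of this idea in Python (the message structure is illustrative, not tied to any specific API): the system prompt is a frozen constant so it stays byte-identical and cache-friendly, while volatile data such as a timestamp travels in the user turn.

from datetime import datetime, timezone

# Stable, byte-identical system prompt: safe to cache across calls.
SYSTEM_PROMPT = (
    "(ROLE) You are a meeting-notes assistant.\n"
    "(OUTPUT FORMAT) Reply only in JSON with keys: insights, sentiments, next_actions."
)

def build_messages(transcript: str) -> list[dict]:
    # Volatile data (timestamp, transcript) goes in the user turn,
    # so the cached system prefix is never invalidated.
    stamp = datetime.now(timezone.utc).isoformat()
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"(DATA) {transcript}\n(RECEIVED AT) {stamp}"},
    ]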

2. Regularly summarize what the model has understood

Summarizing previous exchanges is an increasingly common practice, yet still under-exploited in production. It allows the model to reduce the overall context of the discussion while retaining the important elements. The summary matters all the more for an agent that must maintain a general context while carrying out its task. It can be stored as natural-language context or hierarchically for more complex contexts. For example, a JSON structure can be used to formalize the summary:

 "étapes_accomplies": (

    "lecture du rapport",

    "résumé du rapport"

  ),

  "points_restants": (

    "Générer une critique du rapport",

    "Produire un tableau de résumé"

  ),

Claude Code and Gemini CLI, for example, use this technique. Claude Code summarizes its working environment, the scripts, and the steps already carried out to keep a compact trace. Gemini CLI remembers command-line sessions and extracts prioritized summary notes for subsequent sessions.
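A minimal sketch of such a rolling summary in Python, assuming a hypothetical llm_complete(prompt) helper that returns the model’s text: once the history grows past a threshold, older turns are collapsed into the structured JSON summary shown above, and only the most recent turns are kept verbatim.

import json

MAX_TURNS = 20  # compaction threshold, chosen arbitrarily for illustration

def compact_history(history: list[str], llm_complete) -> list[str]:
    # Collapse older turns into a structured summary; keep recent turns verbatim.
    if len(history) <= MAX_TURNS:
        return history
    old, recent = history[:-5], history[-5:]
    prompt = (
        'Summarize these exchanges as JSON with keys '
        '"completed_steps" and "remaining_steps":\n' + "\n".join(old)
    )
    summary = llm_complete(prompt)  # hypothetical LLM call
    json.loads(summary)  # fail fast if the summary is not valid JSON
    return ["(SUMMARY) " + summary] + recent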

3. Keep errors and their corrections in context

Another, more counterintuitive technique: keeping the LLM’s errors in context. To preserve the context and the model’s overall understanding, the researchers insist on the importance of keeping traces of the errors the model has made previously. This is all the more relevant for an agentic system. By retaining its errors, the model tends to analyze its own reasoning, avoid repeating the same mistakes, and reinforce its consistency over the long term. To make this approach even more effective, it is advisable to accompany each past error with an explanation of its cause, or better yet the correct solution. This is how Manus and CodeRabbit guide the model.

Example of a summary of previous errors (with tags):

(ERROR) 2025-11-04 14:03 Wrong function name used: "getDatas()" instead of "getData()".
(CORRECTION) 2025-11-04 14:05 Updated call to "getData()" and verified outputs.
(NOTE) Cause: typo from prior completion; solution: review function naming conventions.
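A minimal sketch in Python of how such traces could be accumulated and replayed into the next prompt (the class and field names are illustrative; the tag format mirrors the example above):

from dataclasses import dataclass

@dataclass
class ErrorTrace:
    when: str        # timestamp of the failed attempt
    error: str       # what went wrong
    correction: str  # how it was fixed
    note: str        # cause and lesson learned

    def to_context(self) -> str:
        # Render the trace with the same tags as the prompt example.
        return (
            f"(ERROR) {self.when} {self.error}\n"
            f"(CORRECTION) {self.when} {self.correction}\n"
            f"(NOTE) {self.note}"
        )

# Each failure appends a trace; all traces are prepended to the next prompt.
traces: list[ErrorTrace] = []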

4. Fewer tools, better described

Quality beats quantity. This maxim also applies to prompting methods. In an agentic configuration, it is better to activate only a few tools in order to maximize the model’s consistency and performance. Each additional tool increases decision complexity and the risk of error: the more similar options the model has, the more it hesitates and the slower its reasoning. The researchers point out that a model’s performance deteriorates beyond 30 tools and almost systematically collapses beyond 100. They recommend documenting each tool in the context, precisely indicating its conditions of use and varying the semantic field from one description to the next so the tools remain distinguishable.

Example of a properly defined tool:

{
  "tool": "SearchPapers",
  "purpose": "Search scientific papers (arXiv) by keywords and year.",
  "when_to_use": [
    "The user asks for recent academic sources",
    "An arXiv URL + title + year is needed"
  ],
  "when_not_to_use": [
    "The answer requires no external source",
    "The request concerns blogs, general media, or podcasts"
  ],
  "inputs": {
    "query": {"type": "string", "required": true, "example": "context engineering"},
    "from_year": {"type": "integer", "required": false, "default": 2023}
  },
  "outputs": {
    "papers": {
      "type": "array",
      "items": {
        "title": "string",
        "url": "string",
        "year": "integer"
      },
      "max_items": 10
    }
  },
  "constraints": [
    "Never return more than 10 results",
    "Remove duplicate URLs",
    "Reject domains other than arxiv.org"
  ],
  "disambiguation_rules": [
    "If the query is vague, ask for 1 clarification (an extra keyword) before executing",
    "Do not chain other tools automatically"
  ],
  "error_handling": {
    "no_results": "Reply: No relevant paper found since 'from_year'. Suggest 2 alternative keywords."
  },
  "usage_examples": [
    {
      "input": {"query": "context engineering LLM", "from_year": 2024},
      "expected": {"papers": [{"title": "...", "url": "https://arxiv.org/abs/XXXX", "year": 2025}]}
    }
  ]
}
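To keep the active tool count low in practice, the registry can be filtered before each call. A minimal sketch in Python, assuming each tool follows the schema above (the keyword-match scoring is a deliberately naive illustration):

def select_tools(request: str, registry: list[dict], limit: int = 5) -> list[dict]:
    # Expose only the few tools relevant to this request.
    words = set(request.lower().split())

    def score(tool: dict) -> int:
        # Overlap between the request and the tool's purpose / usage conditions.
        text = tool["purpose"] + " " + " ".join(tool.get("when_to_use", []))
        return len(words & set(text.lower().split()))

    ranked = sorted(registry, key=score, reverse=True)
    return [tool for tool in ranked if score(tool) > 0][:limit]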

5. Vary your few-shot prompting examples

Few-shot prompting has gradually established itself as one of the most effective ways to steer the accuracy of LLMs: providing relevant input–output examples helps the model capture the expected intent and format. The issue? Recent work shows a frequent drift: models tend to mimic the examples instead of generalizing, which produces a reproduction bias. To limit it, the researchers recommend greatly diversifying the examples presented (formulations, order, styles, input sizes, edge cases) to discourage rote reproduction and force generalization.

Example:

(SYSTEM) You extract the requested fields. If a piece of information is missing, use null. Reply ONLY in JSON.

(TASK) Extract: { "client": str, "product": str, "quantity": int, "date": "YYYY-MM-DD" }

(EXAMPLE A – short phrasing, standard order)
INPUT: "Paul bought 2 mugs on 2025-10-12."
OUTPUT: {"client":"Paul","product":"mug","quantity":2,"date":"2025-10-12"}

(EXAMPLE B – long text, synonyms, reversed order)
INPUT: "On 03/09/2025, order confirmed: three notebooks for Ms. Jeanne Dubois (ref. NOTEBOOK)."
OUTPUT: {"client":"Jeanne Dubois","product":"notebook","quantity":3,"date":"2025-09-03"}

(EXAMPLE C – informal register, plurals, mixed case)
INPUT: "Yesterday (12.08.2025) Hugo grabbed 5 BOTTLES of water."
OUTPUT: {"client":"Hugo","product":"water bottle","quantity":5,"date":"2025-08-12"}

(EXAMPLE D – edge case: missing information)
INPUT: "Camille wants 4 photo prints but hasn't set a date yet."
OUTPUT: {"client":"Camille","product":"photo print","quantity":4,"date":null}