One hundred million users two months after its launch: never before has a technology captured collective consciousness to this extent.
The emergence of ChatGPT in the global digital landscape has crystallized hopes, curiosities and anxieties. But as is often the case in the history of innovation, excitement precedes reflection. The challenge now is to determine rigorously where artificial intelligence will genuinely be of service. In particular, what about its impact on sustainability?
So-called “generative” technologies are distinguished by their ability to produce text, code, images or reasoning by aggregating existing corpora through statistical modeling and proposing a plausible result. One may debate whether “intelligence” is the right word for an approach so fundamentally anchored in the already-seen, but there is no denying that this automatic production of content is impressive and has numerous areas of application.
However, their physical cost is far from negligible. The environmental impact of artificial intelligence, although difficult to quantify precisely, is beginning to raise serious concerns. Far from the immaterial image that often surrounds digital technology, AI rests on very concrete infrastructure: oversized GPUs, giant data centers, and heavy consumption of electricity, water and resources more broadly. In a context where CSR and HSE departments face growing obligations around energy efficiency – particularly in data centers – the subject cannot be avoided.

Both training a large-scale model and using it carry a significant environmental cost. On this subject, the French AI startup Mistral showed rare transparency, publishing on July 22 – in collaboration with Ademe and Carbone 4 – a study on the impact, notably in CO₂ emissions, of training and using its models. The study finds that while a single request to its conversational agent is equivalent, in carbon footprint, to only 5 meters traveled by car, training just one of its models is equivalent to driving around the Earth twice. That figure alone should encourage a measure of sobriety.

For companies that are heavy users of AI, the tension is plain between technological performance imperatives and the emissions-reduction targets they set themselves as part of their CSR trajectory. More broadly, voices such as Jean-Marc Jancovici no longer hesitate to pose a fundamental question head-on: what, exactly, is the social utility of tools that are so costly for the environment?
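To get a sense of scale, the two car-distance equivalents cited above can be put side by side with a rough back-of-envelope calculation. The 5-meters-per-request and twice-around-the-Earth figures come from the study as reported here; the Earth circumference value and the resulting ratio are our own assumptions and arithmetic, not figures from the study.

```python
# Rough comparison of the two figures cited from the Mistral / Ademe /
# Carbone 4 study. The car-distance equivalents come from the article;
# the circumference and the ratio below are our own approximation.

EARTH_CIRCUMFERENCE_KM = 40_075   # equatorial circumference (approximate)
METERS_PER_REQUEST = 5            # one request ≈ 5 m traveled by car

training_m = 2 * EARTH_CIRCUMFERENCE_KM * 1_000   # "around the Earth twice", in meters
requests_per_training = training_m / METERS_PER_REQUEST

print(f"One training run ≈ {requests_per_training:,.0f} requests")
```

On these assumptions, a single training run weighs roughly as much as sixteen million individual requests – which is why the debate centers on training as much as on usage.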
AI is an amplifier. Far from being an end in itself, it acts as a powerful catalyst, amplifying human and systemic capacities. Its fundamental role is to optimize the execution of tasks, bringing unprecedented speed and efficiency. The societal value of AI is therefore inseparable from the objectives its user pursues. In the current context, marked by the growing urgency of sustainability challenges – environmental, social and economic alike – applying AI to these issues appears not only relevant but legitimately necessary.
In the sustainability sector, the temptation for companies to “do AI” at all costs is therefore strong, so attractive is the promise: effortless reporting, automatic error detection, emissions factors matched in the blink of an eye… The reality is quite different. More than in any other sector, CSR and HSE data must be perfectly auditable. Yet AI is notoriously a black box, which breaks the traceability of data from collection to reporting. How can we justify our data-control heuristics if AI did the sorting? How can we defend an emissions factor assigned by AI? And what of an indicator whose response is extremely vague for lack of data – something that did not trouble the AI but would have alerted a human contributor?
Let’s not be dazzled by AI’s prowess. Its statistical nature means the content it generates appears plausible, wins our confidence and monopolizes our attention. Yet however important it may be, it is not data that will transform our businesses; it is our ability to make that data actionable and set it in motion, surrounding it with clear governance and building the collective capacity to grasp it. We must be especially vigilant when entrusting AI with the production of strategic, auditable content. Let’s use it to automate the configuration of our tools, to improve quality by analyzing questionnaire responses we would never have taken the time to process manually, and as an assistant for training on sustainability topics – but let us never lose control and traceability of our data. Above all, let us not forget that it is only a tool in the service of our sustainable-performance objectives. The credibility of our companies depends on it.