The rise of AI will not be accompanied only by job destruction. Algorithm exegete, data annotation specialist, and AI risk and governance specialist are among the professions set to appear.
170 million: that is the number of jobs AI could create by 2030, according to a study by the World Economic Forum. Some of these jobs correspond to professions that already exist and will be stimulated by the AI economy. Construction trades, for example, will be in high demand thanks to big tech's all-out investment in giant data centers, as will those in education, as lifelong training becomes an imperative for adapting to the changes AI brings. But some of these jobs will be entirely new. What might they look like? Which positions will companies be scrambling to fill, and what skills will they require? Here are some ideas.
Algorithm exegete
Built from millions, even billions, of parameters and as many tokens, large language models (LLMs) are notorious for their opaque operation, which sometimes leads them to hallucinate or produce unexpected responses that leave even their designers perplexed. This is the famous black box phenomenon, which makes algorithms difficult for businesses to use.
As this technology begins to be deployed in areas as sensitive as screening CVs, granting bank loans or making medical diagnoses, it is necessary to ensure that the AI systems in use are regularly updated and working correctly. For companies, this will become increasingly necessary not only to avoid major malfunctions or an ethical scandal, but simply to stay compliant, as a growing number of governments put regulations around AI in place.
For all these reasons, most companies will likely soon count among their staff an exegete capable of dissecting how the algorithms in use work and translating that into natural language for users, managers and regulators. The position will require in-depth knowledge of how LLMs operate, a talent for popularization, and the soft skills to dialogue and build relationships with various stakeholders.
Data Annotation Specialist
Given the astronomical cost of training a large language model, for most companies adopting this technology will not mean training new models, but rather fine-tuning: adapting an existing model by retraining it on company-specific data.
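The logic of fine-tuning can be sketched with a toy model (this is a didactic illustration only, not an LLM): a linear model is first "pretrained" on a large generic dataset, then the same weights are trained for a few more steps on a small company-specific dataset, instead of starting again from scratch.

```python
def sgd(data, w, b, lr, epochs):
    """Train a toy linear model y = w*x + b with stochastic gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# "Generic" data follows y = 2x + 1; the small "domain" dataset
# follows y = 2x + 3 (a slightly different relationship).
generic = [(x, 2 * x + 1) for x in range(-5, 6)]
domain = [(0, 3.0), (1, 5.0), (2, 7.0)]

# Pretraining from scratch on the large generic dataset...
w, b = sgd(generic, 0.0, 0.0, lr=0.01, epochs=200)
# ...then fine-tuning: continue training the SAME weights on domain data.
w, b = sgd(domain, w, b, lr=0.05, epochs=100)
```

Because fine-tuning starts from already-useful weights, only a small dataset and a few training steps are needed to adapt the model; the same economics, at vastly larger scale, drive companies toward fine-tuning rather than pretraining.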
But data from complex, highly regulated sectors such as health, finance or law requires detailed business expertise coupled with a solid technological background to be read, annotated and processed adequately. A DNA sequence, for example, is a series of letters, and will appear in that form in a traditional database; an expert fluent in the language of the science can interpret it correctly, determining, say, which sequence encodes which type of protein, something a layperson could not do. It is therefore very likely that we will see a growing number of positions for data annotation specialists, who combine deep expertise in a specific field with a mastery of data science. The young American company Mercor has already specialized in this niche.
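The DNA example above can be made concrete with a minimal sketch: to a generic database the sequence is just a string, but a specialist reads it as codons (triplets of bases), each coding for an amino acid. Only a handful of real codons are mapped here for illustration; a genuine annotation pipeline would use the full genetic code.

```python
# A tiny excerpt of the standard genetic code (codon -> amino acid).
CODON_TABLE = {
    "ATG": "Met",  # also the "start" codon
    "TTT": "Phe", "GGC": "Gly", "AAA": "Lys",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def annotate(sequence):
    """Split a DNA sequence into codons and label each one."""
    codons = [sequence[i:i + 3] for i in range(0, len(sequence) - 2, 3)]
    return [(c, CODON_TABLE.get(c, "?")) for c in codons]

print(annotate("ATGTTTGGCTAA"))
# [('ATG', 'Met'), ('TTT', 'Phe'), ('GGC', 'Gly'), ('TAA', 'STOP')]
```

The code is trivial; the value lies entirely in the table and in knowing which labels matter, which is precisely the domain expertise the annotation specialist brings.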
Engineer specialized in AI deployment
Once the data is annotated, the algorithms retrained and the AI agents designed, they must be effectively deployed and disseminated within the organization. That mission will fall to specialists who are part salesperson, part consultant and part IT developer. They will likely be embedded for several months at the client company, supervising the entire process, from refining the algorithms to designing chatbots and rolling them out to the appropriate teams.
Here again, the job will require a mix of hard and soft skills: in-depth mastery of AI algorithms, but also the ability to listen to teams, understand their needs and teach them to use AI to meet those needs. It will also demand the ability to juggle different mental universes. In IT, it regularly happens that a software update causes a problem that is corrected a few days later. In a sector like industry, by contrast, it is unthinkable to interrupt an assembly line for several days, or to halt a tram line for the same period. Likewise, a greater risk of error is acceptable when designing an algorithm that matches profiles on a dating application than when a similar algorithm is deployed in a sector where lives are at stake, such as health, or one that is highly regulated, such as finance. An engineer specializing in AI deployment must be aware of these differences and know how to make the right trade-offs.
AI Risk and Governance Specialist
As mentioned above, regulations governing the use of algorithms are being put in place around the world, while lawsuits over chatbots are starting to multiply. AI governance specialists will be tasked with ensuring that the algorithms in use comply with the regulations in force in a given geographical area, but also with managing risk to head off individual lawsuits or class actions from dissatisfied customers.
Their mission will also be to ensure that the bots the company uses do not open up privacy or cybersecurity vulnerabilities, for example by leaking confidential data or giving a malicious actor a way into the SOC. Rather than a single expert, companies, at least large ones, will therefore likely employ several, with different specialties (law, regulation, cybersecurity, large-scale data management, etc.).
Specialist in custom AI chips
In 2025, OpenAI signed an agreement with the American firm Broadcom to design its own custom AI chips, following in the footsteps of Google, whose latest Gemini model, widely praised by the AI world for its performance, was trained on the company's in-house TPUs. Amazon also designs its own Trainium (for model training) and Inferentia (for inference) chips. Based on technology from the Israeli start-up Annapurna Labs, acquired in 2015, they are installed in the company's data centers. Meta, too, is testing its own Meta Training and Inference Accelerator (MTIA) chips.
For the major AI players, designing custom chips provides an advantage over Nvidia's generic chips. Other companies, like Tesla, have also embarked on the adventure. As AI becomes more widespread, other large companies will likely follow suit to obtain semiconductors calibrated for their very specific business uses. These companies will be hungry for engineers with solid skills in semiconductor design, but also a good familiarity with their specific business problems.