Why does generative AI make mistakes?

Prized for its power and versatility, generative AI is being invited into every sector and presented as a miracle solution to our productivity problems.

But behind this magical promise, the reality is more nuanced. Can we really consider it a tool, in the same way as a spreadsheet or a search engine?

No, generative AI is not a tool in the strict sense

A tool, strictly speaking, is a reliable device, simple to use, designed to perform specific tasks effectively, and one that saves time and effort. But when this definition is applied to generative AI, certain deviations appear.

Indeed, in theory a generative AI must understand the user's instructions, take the context into account, generate original content, and learn from feedback to improve. While access to these technologies is generally simple, using them optimally requires some practical experience, especially in formulating instructions (prompts).

In addition, the same request can produce different results at different moments, due to the probabilistic way the models operate, which can destabilize the user. For example, ask a popular generative AI for your biography twice in a row and you will see differences between the responses. Yes, "ChatGPT can make mistakes", yet humans tend to trust the result.
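The probabilistic behaviour described above comes from how models pick each next word: they sample from a probability distribution rather than always choosing the single most likely option. The following is a minimal sketch of temperature-based sampling; the tiny vocabulary and logit values are illustrative assumptions, not taken from any real model.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index from temperature-scaled softmax probabilities.
    Lower temperature makes the distribution peakier (more deterministic);
    higher temperature flattens it (more varied outputs)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy vocabulary: several plausible continuations compete for selection.
vocab = ["Paris", "Lyon", "Marseille"]
logits = [2.0, 1.5, 0.5]

# The same "prompt" (same logits) can yield different tokens on each run.
samples = [vocab[sample_next_token(logits)] for _ in range(10)]
print(samples)
```

Because each run draws fresh random numbers, two identical requests can diverge at any token, and the divergence compounds over a long answer, which is why the two biographies differ.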

Its inherently probabilistic character generates errors

This probabilistic character also produces an undesirable phenomenon: hallucinations. These are false or invented answers, formulated in a completely credible way, which makes them hard to detect. The problem is accentuated by the fact that the models are trained on content freely available on the web, whose quality is, de facto, variable.

For example, false information integrated during training is difficult to correct: on average, nine pieces of true information are needed for it to be "downgraded". An aggravating factor: the growth of false content on the web further degrades model performance over time.

New techniques to reinforce reliability

To counter this phenomenon, techniques have been developed that improve the reliability of results. The RAG approach (Retrieval-Augmented Generation), for example, consists of enriching the user's request with information extracted from reliable sources, which helps limit errors. Another, more technical method is fine-tuning, which specializes an AI on specific data in order to obtain responses that are around 90% accurate in targeted contexts.
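The RAG pattern described above can be sketched in a few lines. This is a deliberately simplified illustration: the corpus, the word-overlap scoring, and the prompt template are all assumptions made for the example. Real systems use embedding-based vector search and send the built prompt to an actual language model.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive word-overlap with the query (a stand-in
    for embedding similarity) and return the top k matches."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Enrich the user's request with retrieved context, so the model
    grounds its answer in reliable sources instead of guessing."""
    context = "\n".join(f"- {d}" for d in documents)
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

# Illustrative mini-corpus standing in for a trusted document store.
corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "The Louvre is the world's most visited museum.",
    "Mont Blanc is the highest mountain in the Alps.",
]

query = "How tall is the Eiffel Tower?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

The key design point is that the model is instructed to answer from the retrieved passages rather than from its training data alone, which is what limits hallucinations.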

In conclusion, although solutions exist to make the results more reliable, using generative AI requires continuous training, because its optimal scope of use remains vague for many non-specialist users. In addition, near-systematic human supervision is necessary to verify the results.

This complicates its integration into automated processes in the professional world, makes that integration uncertain, and therefore makes the expected return on investment even more uncertain.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
