MAMMOth, it doesn’t lie: lessons from bias-free facial recognition

European MAMMOth project: how to reduce bias in facial recognition and improve the reliability, fairness, transparency and compliance of AI systems.

While artificial intelligence improves the efficiency of many processes every day, its deployment still raises legitimate concerns: the datasets that feed its algorithms may be incomplete or contain partial or distorted information, undermining their neutrality. These biases weigh on the algorithms themselves, reproducing existing stereotypes and amplifying systemic discrimination by automating it.

The risks are particularly acute in facial recognition: as the MIT Media Lab showed in 2018, error rates of facial recognition software were far higher for Black women (up to 34%) than for white men (less than 1%), owing to the prevalence of images of white men in training databases. Used in, among other things, policing and surveillance systems, such biased algorithms can lead to errors with serious consequences.

Even setting aside critical errors, these biases have implications in daily life. From banking applications to airport security, facial recognition, when it works smoothly, can turn a cumbersome identification step into a quick and painless experience. Any demographic bias is therefore more than a technical flaw: it is an attack on the right to equal access to essential digital services, such as opening a bank account or registering for social or health services.

It is in this context that the MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) project was launched in November 2022, supported by the European Research Executive Agency and 12 European partners from academia, the non-profit sector and industry, to analyze the biases present in AI systems and to develop tools for detecting and correcting them. Three years later, it is time to take stock.

Fairer decisions for more inclusive recognition

The analyses carried out as part of MAMMOth highlighted a central problem: the under-representation of entire sections of the population in the datasets that feed AI tools creates performance gaps in recognition, particularly in identity verification, where variations in skin tone across ID photos can undermine the reliability of results for certain demographic groups.

By developing methods to broaden the diversity of the data, integrating synthetic ID photos that represent a wider range of skin tones, the project reduced the accuracy gap between light and dark skin by more than 50%. From an industry perspective, this means a better user experience for many customers.
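To make the idea of an "accuracy gap" concrete, here is a minimal sketch of how such a gap between demographic groups can be quantified. The data, group names and figures below are purely illustrative and do not come from the MAMMOth project:

```python
# Minimal sketch: measuring the accuracy gap of a face-verification
# system between demographic groups. All data here is illustrative.

def accuracy(predictions, labels):
    """Fraction of verification decisions matching the ground truth."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def accuracy_gap(results_by_group):
    """Largest difference in accuracy across groups, plus per-group scores."""
    accs = {group: accuracy(preds, labels)
            for group, (preds, labels) in results_by_group.items()}
    return max(accs.values()) - min(accs.values()), accs

# Hypothetical verification outcomes (1 = decision matched the truth)
results = {
    "group_a": ([1, 1, 1, 1, 1, 1, 1, 1, 1, 0], [1] * 10),  # 90% accurate
    "group_b": ([1, 1, 1, 1, 1, 1, 0, 0, 0, 0], [1] * 10),  # 60% accurate
}
gap, per_group = accuracy_gap(results)
print(f"per-group accuracy: {per_group}, gap: {gap:.2f}")
```

Halving this gap, as MAMMOth reports doing through data diversification, means bringing the worst-served group much closer to the best-served one.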

Better fraud prevention

Beyond the ethical considerations, biased systems are also an open door to vulnerabilities. If facial recognition tools leave entire sections of the population behind, they also create blind spots that fraudsters can exploit.

Through better model training, the tests carried out as part of MAMMOth achieved an 8% increase in verification accuracy without increasing the volume of data required. Homogenizing algorithmic performance across groups reduces these gray areas and closes off potential weaknesses.

A more transparent AI

One of the project's major contributions is the development of tools to document and explain algorithmic decisions. The MAI-BIAS toolkit developed within MAMMOth provides a standardized framework for assessing the fairness of a model before it is released to the market.
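The MAI-BIAS toolkit's own API is not shown here; as a hedged illustration of the kind of pre-release check such a framework standardizes, the sketch below computes a simple demographic parity ratio. The 0.8 threshold (the "four-fifths rule") is a common convention in fairness auditing, not a MAMMOth requirement, and the decision data is hypothetical:

```python
# Illustrative pre-release fairness check: demographic parity ratio.
# Generic sketch only; this is not the MAI-BIAS toolkit's actual API.

def selection_rate(decisions):
    """Share of positive (accepted) decisions for one group."""
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(decisions_by_group):
    """Ratio of lowest to highest group selection rate (1.0 = parity)."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return min(rates) / max(rates)

# Hypothetical accept/reject decisions (1 = accepted) per group
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],  # 87.5% accepted
    "group_b": [1, 0, 1, 0, 1, 1, 0, 1],  # 62.5% accepted
}
ratio = demographic_parity_ratio(decisions)
# The "four-fifths rule" convention flags ratios below 0.8 for review
print(f"parity ratio: {ratio:.3f} -> "
      f"{'pass' if ratio >= 0.8 else 'review needed'}")
```

A standardized report of metrics like this one, produced before market release, is also exactly the kind of documentation that eases regulatory audits.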

Beyond making algorithms intelligible by guaranteeing their transparency, this approach helps companies comply with the European AI Act, facilitating audits and allowing organizations to clearly justify the reliability of their systems.

A continuous correction

The fight against bias cannot be limited to a one-off audit. An ethical and fair framework requires continuous monitoring and regular retraining of AI models as they evolve, to prevent new deviations from emerging over the model's life cycle.
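In practice, continuous monitoring often boils down to tracking a model's performance on fresh data and triggering retraining when it drifts too far from its baseline. The sketch below shows one simple way to express that loop; the baseline, tolerance and weekly figures are hypothetical, not MAMMOth's actual thresholds:

```python
# Illustrative continuous-monitoring loop: flag a model for retraining
# when accuracy on newly collected data drifts below a baseline by more
# than a tolerance. All thresholds and measurements are hypothetical.

BASELINE_ACCURACY = 0.95
TOLERANCE = 0.03  # retrain if accuracy drops by more than 3 points

def needs_retraining(recent_accuracy,
                     baseline=BASELINE_ACCURACY, tol=TOLERANCE):
    """True when the drop from baseline exceeds the tolerance."""
    return (baseline - recent_accuracy) > tol

# Simulated weekly accuracy measurements on fresh production data
weekly_accuracy = [0.95, 0.94, 0.93, 0.91, 0.89]
for week, acc in enumerate(weekly_accuracy, start=1):
    if needs_retraining(acc):
        print(f"week {week}: accuracy {acc:.2f} -> schedule retraining")
        break
else:
    print("no retraining needed")
```

The same pattern applies to fairness metrics: monitoring per-group performance over time catches biases that only appear as data, uses and acquisition conditions change.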

For companies, this monitoring is a guarantee of stability and compliance, allowing them to deploy scalable systems that remain reliable as data volumes, uses and image acquisition conditions evolve.

Legacy lessons

The technical advances enabled by the project reach far beyond facial recognition, because the tools and techniques developed are reproducible and can be adopted by any organization keen to reduce the biases in the AI models it uses. And the list of sectors that stand to benefit is long: financial services, health, education, e-commerce, mobility… the lessons learned by MAMMOth lay a solid foundation for improving the reliability and fairness of a multitude of systems, to the direct benefit of users.

Integrating ethical considerations into algorithm monitoring processes is therefore not a brake on innovation but an engine of growth. Beyond giving a reputational advantage to companies and organizations that take bias seriously, the models reworked under MAMMOth have shown that more responsible AI can be deployed at scale without sacrificing performance.

By advocating a use of AI that is not only efficient but also fair, companies make a strategic choice: they strengthen their customers' trust and differentiate themselves in a market where algorithmic responsibility is emerging as a new standard.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
