AI and Banking: How to reconcile performance and risk control?

Artificial intelligence (AI) is redrawing the global banking landscape, delivering improved user experiences, stronger security and higher performance.

But the more AI is integrated into key processes, the more financial institutions face security, ethics and compliance issues. The AI revolution will therefore also be a regulatory one.

Supervising the deployment of AI to exploit its full potential

According to McKinsey, generative AI could generate up to $340 billion in annual gains for the banking sector, notably through the automation of business processes, the personalization of offers and the reduction of operating costs (1). The reality, however, is more nuanced: a recent study by the Capgemini Research Institute reveals that only 6% of retail banks have already developed a roadmap for a company-wide AI-driven transformation, even though the majority of their executives are convinced of its value (3). The potential is there, but realizing it requires method, governance and, above all, vigilance.

The foundations of a European regulatory framework

Faced with this generalization of AI uses, the European Union, through the AI Act, has classified AI systems according to their level of risk, imposing obligations of transparency, auditability and supervision on companies. The DORA regulation strengthens the digital resilience of financial organizations by imposing strict rules on operational continuity. The NIS2 directive, for its part, extends cybersecurity obligations to a larger number of actors. Finally, more transversal texts such as the DSA (Digital Services Act) and the DMA (Digital Markets Act) aim to guarantee a safe and fair digital space. The challenge for companies is to ensure that their AI models are reliable, traceable and compliant throughout their life cycle.
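As a rough illustration of the AI Act's risk-tiered logic, the sketch below maps a few banking use cases to risk tiers and the obligations each tier entails. The use-case names and the exact obligation lists are illustrative assumptions, not a restatement of the legal text; for instance, creditworthiness assessment is among the high-risk uses listed in the Act's Annex III, while social scoring is prohibited.

```python
# Illustrative sketch of AI Act risk tiers for banking use cases.
# Use-case names and obligation wording are assumptions for illustration only.
USE_CASE_TIER = {
    "social_scoring": "unacceptable",  # prohibited practice
    "credit_scoring": "high",          # Annex III: access to essential services
    "customer_chatbot": "limited",     # transparency obligations
    "spam_filter": "minimal",
}

TIER_OBLIGATIONS = {
    "unacceptable": ["deployment prohibited"],
    "high": ["risk management system", "logging and auditability",
             "human oversight", "conformity assessment"],
    "limited": ["disclose to users that they interact with an AI system"],
    "minimal": [],
}

def obligations(use_case: str) -> list[str]:
    """Return the compliance obligations for a given use case."""
    tier = USE_CASE_TIER.get(use_case, "minimal")
    return TIER_OBLIGATIONS[tier]
```

The point of such a mapping is that compliance effort scales with risk: a credit-scoring model triggers the full high-risk toolkit, while a spam filter carries essentially no AI Act obligations.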

Towards integrated governance of AI models

In this context, the concept of Model Risk Management, that is, the management of risks linked to models, becomes strategic. The challenge is clear: it involves mapping all deployed models, assessing their risks, monitoring their behavior in real time and generating auditable reports for regulators. Effective governance must rest on several pillars. First, a centralized inventory, which must reference all the AI models used in the organization and classify them according to their nature (statistical, machine learning, generative) and their criticality. Then, rigorous risk assessment processes must cover algorithmic biases, potential strategic impact and ethical implications. Financial institutions must also carry out real-time monitoring to detect statistical drift or performance anomalies. Another key element is automated reporting, standardized and aligned with regulatory expectations. Finally, collaborative governance must be implemented, mobilizing business departments, compliance functions, data science teams and risk managers alike. This demanding framework is no longer an option: it is a condition of sustainability for any banking organization that wishes to capitalize on AI over time.

Model Risk Management: a regulatory imperative

On the one hand, developers and data scientists mainly use technical platforms for building and deploying models. On the other hand, governance teams – generally positioned in the second line of defense, on the risk side – require dedicated Model Risk Management (MRM) or AI governance solutions. This separation of roles is fundamental: governance teams are ultimately responsible for applying internal policies and complying with regulations. Their needs go beyond simple development tools: they must be able to carry out comprehensive risk assessments, produce board-level reports, and ensure continuous supervision of models in production. Facilitating innovation must remain compatible with the regulatory requirements and risk management policies of the institution.

AI acculturation: a major challenge for companies

While international working groups regularly emerge to work on harmonizing good practices, strengthening supervision standards and anticipating regulatory developments, internal communication also plays a decisive role. It makes it possible to convey the importance of these mechanisms, to raise awareness of the issues among all stakeholders, and to facilitate the adoption of new practices. This approach involves appropriating the concepts, building a shared understanding of the objectives, and establishing a common language between the different actors of the company's AI ecosystem. Feedback from the field shows that approaches imposed "from above" without this work of explanation and support often run into resistance that considerably slows down the transformation.

To take full advantage of the AI revolution, banks will have to combine technological innovation with methodological rigor. A responsible AI is a supervised, monitored and therefore governed AI. This governance is the price to pay for banks to transform themselves durably... without transgressing.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
