"To ensure reliable AI models, companies should consider implementing a system that includes checks of impartiality, transparency, responsibility, and accountability, having the right controls and security measures in place, constant tracking of reliability, and the protection of customer data."

… stakeholders, i.e., customers, management, and regulators. As such, transparency and explainability are key requirements to ensure that AI/ML algorithms are functioning properly.

Striving for a Modern Model Risk Management Framework

AI/ML algorithms are often embedded in larger AI application systems, such as software-as-a-service (SaaS) offerings from vendors. Risk management cannot be an afterthought or addressed only by model validation functions such as those that currently exist in financial services. Companies need to build risk management directly into their AI initiatives so that oversight is constant and concurrent with internal development and external provisioning of AI across the enterprise. To tackle these challenges without constraining AI innovation or disrupting the agile ways of working that enable it, banks need to adopt a new approach to their existing MRM framework.

1. Data pre- and post-processing controls: data pipeline testing, data-sourcing analysis, statistical data checks, and data-usage fairness (a simple sketch of such checks follows the conclusion below).

2. Stakeholder engagement beyond the model developers: business, IT, risk, and compliance should be involved in checking that the model actually solves the problem framed during ideation, through model-robustness reviews, business-context metrics testing, data-leakage controls, label-quality assessment, and data-availability checks.

3. Pre-deployment evaluation: once the model is complete or a few candidate models have been shortlisted, evaluate their performance and engage the model owner regularly to confirm the model fits its business use before moving it into production.

4. Evaluation and monitoring at every stage of the MLOps and MRM life cycle: tools for model interpretability, bias detection, and performance monitoring should be built in so that oversight is constant, concurrent with AI development activities, and consistent across the enterprise (the second sketch after the conclusion illustrates a basic monitoring check).

Conclusion

Over the past few years, organizations have addressed many of the technological challenges posed by new uses of data and innovative applications of AI. Even so, there have been repeated reports of AI models going awry, from gender and race discrimination in loan applications to the misidentification of images of people of certain races. These incidents are a reminder that using AI can create significant risks. Especially in a highly regulated environment like banking, where the cost of not properly addressing these model risks can be high, organizations must adapt quickly to meet the AI/ML model risk challenge.
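As referenced in item 1 above, the data-quality and data-leakage controls can be automated. The sketch below is a minimal illustration only, assuming a pandas/SciPy stack; the function names, column selections, and thresholds are hypothetical and are not prescribed by the article.

```python
# Illustrative sketch only: hypothetical pre-processing controls of the kind
# named in items 1 and 2. Thresholds and names are assumptions, not standards.
import pandas as pd
from scipy import stats


def statistical_data_checks(train: pd.DataFrame, incoming: pd.DataFrame,
                            numeric_cols: list[str], alpha: float = 0.01) -> dict:
    """Flag numeric features whose incoming distribution drifts from the
    training data, using a two-sample Kolmogorov-Smirnov test."""
    flags = {}
    for col in numeric_cols:
        stat, p_value = stats.ks_2samp(train[col].dropna(), incoming[col].dropna())
        flags[col] = {"ks_stat": stat, "p_value": p_value, "drift": p_value < alpha}
    return flags


def leakage_check(features: pd.DataFrame, target: pd.Series,
                  threshold: float = 0.95) -> list[str]:
    """Flag features that correlate almost perfectly with the label,
    a common symptom of target leakage."""
    suspicious = []
    for col in features.select_dtypes("number").columns:
        corr = features[col].corr(target)
        if pd.notna(corr) and abs(corr) >= threshold:
            suspicious.append(col)
    return suspicious
```

Checks like these would typically run inside the data pipeline itself, so that drift or leakage is caught before a model is retrained or scored, rather than in a separate validation exercise.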
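The ongoing monitoring described in item 4 can likewise be expressed as simple automated checks. The sketch below is a rough illustration, assuming scikit-learn is available; the AUC floor, the demographic-parity gap ceiling, and the function name are assumptions chosen for demonstration, not figures from the article.

```python
# Illustrative sketch only: a hypothetical post-deployment check combining a
# performance metric with a simple group-fairness metric.
import numpy as np
from sklearn.metrics import roc_auc_score


def monitoring_report(y_true, y_score, sensitive_attr,
                      auc_floor: float = 0.70,
                      parity_gap_ceiling: float = 0.10) -> dict:
    """Return alert flags for model performance and for gaps in the
    positive-prediction rate across groups of a sensitive attribute."""
    auc = roc_auc_score(y_true, y_score)
    y_pred = (np.asarray(y_score) >= 0.5).astype(int)

    # Selection rate (share of positive predictions) per group.
    attr = np.asarray(sensitive_attr)
    rates = {g: float(y_pred[attr == g].mean()) for g in np.unique(attr)}
    parity_gap = max(rates.values()) - min(rates.values())

    return {
        "auc": auc,
        "auc_alert": auc < auc_floor,
        "selection_rates": rates,
        "parity_gap": parity_gap,
        "fairness_alert": parity_gap > parity_gap_ceiling,
    }
```

Reports of this kind, generated on a schedule and reviewed by the model owner and the risk function, are one way to make oversight constant and concurrent rather than a one-off validation step.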