Standardizing & Streamlining Algorithmic Bias Assessment in the Insurance Industry
Challenge
One of the leading global reinsurance providers started an internal AI risk and compliance assessment process to address growing regulatory concerns and demonstrate that it was effectively governing its AI and mitigating potentially harmful bias in its models. The entire process was managed through Excel spreadsheets and was incredibly burdensome for technical development teams, requiring significant hours per report.
Solution
With Credo AI’s Responsible AI (RAI) Platform and Lens, the company found a complete solution that met its needs. The reinsurance company worked with Credo AI to develop a set of custom Policy Packs that operationalize the company’s internal risk and compliance assessment policies within the Responsible AI Platform. Every AI use case in development is now registered for governance, and the governance team can manage and track progress through the risk and compliance assessment process from the Credo AI UI rather than updating spreadsheets and documents manually.
Impact
The team also implemented Credo AI Lens within its MLOps pipeline to programmatically run pre-deployment model assessments for bias and performance as part of the model build package. By integrating Lens with the RAI Platform, the company ensures that data scientists run the required set of bias and performance tests whenever they build a new model, and that the assessment results are automatically sent back to the RAI Platform for reporting. Technical teams no longer need to gather assessment requirements from the governance team, nor do they need to hand-write code for standard bias and performance assessments. Lens and the RAI Platform give them everything they need to quickly generate the technical evidence required for governance without significant manual effort.
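As a rough illustration of what a programmatic pre-deployment gate like this can look like, the sketch below computes a performance metric (accuracy) and a common bias metric (demographic parity difference across sensitive groups) and checks both against thresholds before a model is cleared for deployment. All function names, metric choices, and thresholds here are illustrative assumptions; this is not the Credo AI Lens API, whose specifics are not shown in this case study.

```python
# Hypothetical sketch of a pre-deployment bias/performance gate.
# Names and thresholds are illustrative, not the Credo AI Lens API.

def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Largest gap in selection rate between any two sensitive groups."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [selection_rate(ps) for ps in by_group.values()]
    return max(rates) - min(rates)

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def pre_deployment_gate(y_true, y_pred, groups,
                        max_parity_gap=0.1, min_accuracy=0.8):
    """Run the required tests and return evidence for governance reporting."""
    evidence = {
        "accuracy": accuracy(y_true, y_pred),
        "demographic_parity_difference":
            demographic_parity_difference(y_pred, groups),
    }
    evidence["passed"] = (
        evidence["accuracy"] >= min_accuracy
        and evidence["demographic_parity_difference"] <= max_parity_gap
    )
    return evidence

# Example: a toy evaluation set with two sensitive groups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
report = pre_deployment_gate(y_true, y_pred, groups)
# Group A selection rate 0.75 vs. group B 0.50: the 0.25 parity gap
# exceeds the 0.1 threshold, so report["passed"] is False.
```

In a pipeline integration such as the one described above, a report like this would be emitted automatically during the model build and forwarded to the governance platform, so passing or failing the gate requires no manual test-writing by the data scientists.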