BID® Daily Newsletter

Sep 1, 2020

Using AI To Reduce Bias

Summary: AI decision-making systems can help community financial institutions reduce bias in credit decisions. We explain how.

The New York Times reported that the movie "b" will feature the "first fully autonomous artificially intelligent actor," Erica. Created by a Japanese roboticist, she is practicing lines with local actors while she awaits the start of production.
Artificial intelligence (AI) is not only being used in Hollywood; it is also becoming more widespread in the financial industry. Fraud detection and online customer engagement are two common applications. Some institutions are also using it to make better credit decisions and to reduce racial and gender bias. While the machine-learning algorithms within AI decision-making systems may not completely eliminate bias, some contend they are still better than traditional decision methods. To keep you updated, we share takeaways from a roundtable hosted by The Brookings Institution.
Thought leaders participating in these roundtable discussions detailed several best practices for algorithm developers. First, cross-functional work teams draft a "bias impact statement," which creates a framework for minimizing bias during development. Then, processes are put in place to detect and mitigate any bias found after the algorithm is built. While this may sound quite technical, we think it could be important for community financial institutions (CFIs) in the future.
While CFIs will likely partner with vendors to develop decision-making algorithms, bank executives and their staff still need to determine the goal, scope, and factors to include. They also need a plan to monitor the institution's subsequent underwriting results, making sure bias is being reduced without introducing unintended new biases.
To give you some idea of how to start, the overarching blueprint for developers to follow should be detailed in the bias impact statement crafted by the institution. This statement should be a template that guides the developers through the design, implementation, and monitoring stages. The development team will need to answer questions such as: Is the training data sufficiently diverse and reliable? What will be the threshold for measuring and correcting for bias? What feedback loop connects the algorithm to developers, internal partners, and customers?
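
To make the first question concrete, here is a minimal Python sketch, purely illustrative, of one check a development team might run: whether each group is adequately represented in the training data. The column name and the 10% floor are hypothetical assumptions, not an industry standard.

import pandas as pd

def underrepresented_groups(df, column, floor=0.10):
    # Share of training rows per group; flag any group below the floor
    shares = df[column].value_counts(normalize=True)
    return {group: round(share, 3) for group, share in shares.items() if share < floor}

# Made-up applicant data: 1 woman out of 12 applicants
training_data = pd.DataFrame({"gender": ["F"] + ["M"] * 11})
print(underrepresented_groups(training_data, "gender"))  # {'F': 0.083}

A flagged group would prompt the team to gather more data or resample before training, per the "diverse and reliable" question above.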
Once those questions are answered, it is important for CFIs and their vendors to consider how best to measure error rates. When testing whether an algorithm introduces bias, it is better to analyze the equality of error rates between groups -- whether the algorithm makes more mistakes for one group of people than for another. For instance, if a job-matching algorithm surfaces more men than women, deeper analysis of each group's error rates may be necessary.
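
As a hedged illustration of what comparing error rates between groups could look like in practice, the Python sketch below computes the error rate per group and flags a gap. The field names and the 5-percentage-point tolerance are assumptions for illustration only, not a regulatory threshold.

import pandas as pd

def error_rates_by_group(df, group_col, actual_col, predicted_col):
    # Fraction of rows per group where the model's decision was wrong
    return (df[actual_col] != df[predicted_col]).groupby(df[group_col]).mean()

decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "repaid":   [1, 1, 0, 1, 1, 0, 1, 1],    # actual outcome
    "approved": [0, 1, 0, 0, 1, 0, 1, 1],    # model's decision
})
rates = error_rates_by_group(decisions, "gender", "repaid", "approved")
print(rates)  # F: 0.50, M: 0.00
if rates.max() - rates.min() > 0.05:
    print("Error-rate gap exceeds tolerance; investigate before relying on the model.")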
The monitoring process should also include manual (human) review, with care taken to protect the sensitive information used in building and testing the algorithm. CFIs should also periodically audit the data collected for the algorithm's operation, as well as regularly obtain feedback from all stakeholders affected by its decisions.
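
One way such a periodic audit might be sketched, assuming hypothetical field names and an illustrative 10-point tolerance, is to compare each group's current approval rate against a baseline period and flag any drift for the human review described above.

import pandas as pd

def approval_rate_drift(baseline, current, group_col="gender",
                        decision_col="approved", tolerance=0.10):
    # Compare each group's approval rate now vs. the baseline period
    report = pd.DataFrame({
        "baseline": baseline.groupby(group_col)[decision_col].mean(),
        "current": current.groupby(group_col)[decision_col].mean(),
    })
    report["drift"] = (report["current"] - report["baseline"]).abs()
    report["flagged"] = report["drift"] > tolerance
    return report

# Made-up quarterly decision data
q1 = pd.DataFrame({"gender": ["F", "F", "M", "M"], "approved": [1, 1, 1, 1]})
q2 = pd.DataFrame({"gender": ["F", "F", "M", "M"], "approved": [0, 1, 1, 1]})
print(approval_rate_drift(q1, q2))  # F is flagged: approval rate fell 0.50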
AI-based decision algorithms may not be right for most CFIs just yet. But as development costs decrease and the technology becomes more mainstream, they will likely be used by most financial institutions. For now, we hope we have given you some things to think about.
Subscribe to the BID Daily Newsletter to have it delivered by email daily.
