In the classic 1987 movie RoboCop, the hero — a cyborg created from the remains of murdered police officer Alex Murphy — is eventually able to regain some of his humanity and override his programming to go after the killers. While the movie may be science fiction, the idea that artificial intelligence (AI) needs human elements remains part of the broader AI discussion more than three decades later.

Community financial institutions (CFIs) using AI should consider adopting a similarly humanized approach to this machine learning technology. Doing so could help them harness AI's full potential while avoiding an intrinsic pitfall: algorithmic bias. We review how financial institutions use AI and offer three strategies to help mitigate bias.

The case for AI
AI is increasingly vital to remaining competitive in today's business world. Large banks have long recognized its benefits and invested heavily in these key areas:
- Risk management. Big banks are implementing AI within middle-office functions to assess risks, detect and prevent payments fraud, improve processes for anti-money laundering (AML), diversify credit risk, perform regulatory checks, and enhance cybersecurity.
- Customer service. Increasingly, financial institutions are leveraging algorithms to smooth customer identification and authentication processes, provide support through chatbots and voice assistants, give personalized recommendations, and, ultimately, deepen customer relationships.
- Cost savings. According to Business Insider Intelligence, the total potential cost savings for financial institutions from AI-based applications is estimated to be $447B by 2023, with front and middle office functions accounting for over 90% of this total.
CFI examples
With the emergence of fintechs and regtechs, CFIs can now compete on a more level playing field with the megabanks through affordable AI solutions. For example, a $119MM-asset CFI in WI is working with a fintech to provide a cloud-based system that will gather all financial data and create a complete customer profile. Different modules will then allow the institution to offer additional services, detect suspicious activity, and increase the ease of transactions. "If I can do a million transactions a day at a penny or two pennies, that adds up to a lot of noninterest income," says the institution's CEO.

Another CFI has found that leveraging AI can provide a wider community with access to credit. According to the Board chair of a $2.45B-asset CFI in WA: "With the predictive underwriting models and lower fraud rates delivered by the platform, we will provide a more inclusive experience and be able to offer a broader range of lending solutions for our customers."

Overcoming the challenge of bias
But the rapid introduction of AI in financial institutions is not without hiccups. The main challenge is bias created by AI algorithms, which can lead to unfair or discriminatory practices and, left unchecked, can spiral out of control. Bias can be introduced unintentionally at several stages. First, in the input data: incomplete or unrepresentative data will skew the results. Second, in development: algorithms are built by development teams, who may themselves introduce subconscious bias. Finally, in the continuous learning phase: unintended consequences of past behavior can perpetuate an unwanted decision.

In a recent PwC survey, over 40% of respondents cited "making AI systems responsible and trustworthy" as a bank's top challenge. With this in mind, here are three strategies CFIs could use to mitigate potential bias.
- Question the data. CFIs need to look for all kinds of bias in the data and actively work to prevent it from being reflected in the system. For example, minorities and women have historically been underserved in the credit market, so training on historical data alone is likely to magnify the problem. To counter this type of bias, consider reweighing the data so that underrepresented groups and outcomes carry appropriate weight.
- Team humans with machines. Let machines handle the high-volume, repetitive tasks that benefit from speed and scalability, but use highly trained people, in both the design and operational phases, to ensure the models are accurate and fair. Review your algorithms regularly to check that they are performing against expected outcomes.
- Ensure accountability. An understanding of bias, and a willingness to spot it, is essential at all levels of the organization. Responsibility for detecting bias shouldn't rest solely with your technical team. Establish an ethics committee with people from diverse backgrounds and departments, and take action to correct any issues it identifies.
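To make the first and second strategies a little more concrete, here is a minimal Python sketch of two ideas mentioned above: count-based reweighing of training data (in the spirit of the well-known Kamiran and Calders preprocessing technique) and a simple disparate impact check for reviewing outcomes. The function names, toy data layout, and thresholds are illustrative assumptions, not part of any specific vendor platform.

```python
from collections import Counter

def reweigh(groups, labels):
    """Per-example weights that make group membership and outcome
    statistically independent in the weighted data (count-based
    reweighing). `groups` and `labels` are parallel lists."""
    n = len(groups)
    g_counts = Counter(groups)                 # examples per group
    y_counts = Counter(labels)                 # examples per outcome
    gy_counts = Counter(zip(groups, labels))   # examples per (group, outcome)
    # w(g, y) = P(g) * P(y) / P(g, y), estimated from the counts above
    return [
        (g_counts[g] / n) * (y_counts[y] / n) / (gy_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

def disparate_impact(groups, decisions, protected, reference):
    """Ratio of favorable-decision rates for the protected group vs. the
    reference group. `decisions` holds 1 (favorable) or 0 (unfavorable)."""
    def favorable_rate(g):
        outcomes = [d for grp, d in zip(groups, decisions) if grp == g]
        return sum(outcomes) / len(outcomes)
    return favorable_rate(protected) / favorable_rate(reference)

# Toy example: group 'a' is approved twice as often as group 'b'.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]

weights = reweigh(groups, labels)
ratio = disparate_impact(groups, labels, protected="b", reference="a")
```

In this toy data the disparate impact ratio comes out at 0.5, well below the "four-fifths" (0.8) heuristic often borrowed from US employment-discrimination guidance, which would flag the model for the kind of human review described above. The computed weights could then be passed to a training routine that accepts per-sample weights (as many libraries do) so the model learns from a rebalanced view of the data.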
Financial regulators are working hard to put appropriate AI oversight in place. In the meantime, financial institutions need to do all they can to make sure their use of AI not only creates important efficiencies but is also fair and transparent.