“The Giving Tree,” one of Shel Silverstein’s most famous children’s books, tells the story of the relationship between a boy and an apple tree. Throughout the boy’s life, the tree is a constant source of joy for him. As a child, she provides him with apples to eat and limbs to climb on, all of which makes her just as happy as him. As the boy ages, however, the things he takes from her become more extreme, beginning with the sale of her apples for money and moving on to the use of her branches to build a house. Despite the one-sided nature of things between them, the tree never worries about whether her relationship with the boy is fair or balanced; she merely considers her love for him and the joy it brings her when she can meet his needs and make him happy.

Teachers and parents alike have long used “The Giving Tree” to teach the value of not keeping score, tallying things up, or worrying about fairness. For financial institutions, however, the importance of fairness cannot be overstated. In fact, fairness in lending has become such an important issue that many financial institutions have begun using fairness testing software to make sure there aren’t unknown biases within the artificial intelligence (AI) programs many organizations now rely on to determine loan worthiness.

Invisible Bias
Countless businesses have embraced AI programs, statistical models, and machine learning (ML) algorithms for their ability to rapidly sift through large quantities of information and extract key data and patterns, absent the influence of human opinions or biases. For a refresher: AI uses math- and logic-based algorithms to help computer systems mimic human reasoning, allowing companies to improve their data integrity, draw on a larger range of data sources, and make better decisions. Machine learning, a subset of AI, refers to algorithms that learn to recognize patterns in whatever data they are given, to the point where they begin improving on their own.

For financial institutions, AI/ML algorithms are appealing for their ability to weed out risky borrowers and foster more consistent lending. The problem is that many of the algorithms organizations use to determine loan worthiness actually carry biases of their own that, in most cases, are not visible to the institutions using them.

In the same way that statistical outcomes can be skewed by the type of data used to generate them, factors such as data limitations and inadequate diversity among the engineers who program AI/ML software can leave financial institutions unknowingly sustaining some of the very biases and prejudices they hoped to eliminate.

Take, for instance, AI programs that rely on historical loan data. That data often fails to account for zip codes and regions of the country recognized as credit deserts, where borrowers may be forced to rely on payday lenders because traditional banks are absent. AI that relies on historical data excluding this type of loan applicant would need to be programmed to offset the missing information; without offsetting factors, it winds up perpetuating existing prejudices and unfair treatment.

Similarly, unless they are explicitly programmed otherwise, AI/ML algorithms can also prove biased against women who take a break from their careers to help raise children, flagging them as credit risks because of income gaps. Kareem Saleh, CEO of fairness testing software vendor FairPlay, revealed that “25% to 33% of the time, the highest-scoring folks from minority backgrounds that get declined would have performed at least as well as the riskiest folks that those lenders are currently approving.”
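To make the mechanics of that statistic concrete, here is a minimal sketch of a swap-set-style comparison in Python. The data, column names, and 5% cutoff are entirely hypothetical and are not drawn from FairPlay’s actual methods; real tools must also infer how declined applicants would have performed, since their outcomes are never observed.

```python
import pandas as pd

# Hypothetical decision log: one row per applicant, with the model's
# risk score (higher = safer), the lending decision, and a group label.
loans = pd.DataFrame({
    "score":    [720, 685, 640, 610, 700, 605, 630, 602],
    "approved": [True, True, True, True, False, False, False, False],
    "group":    ["majority", "majority", "minority", "majority",
                 "minority", "minority", "majority", "minority"],
})

# The score of the riskiest borrower the lender currently approves.
risk_floor = loans.loc[loans["approved"], "score"].min()

# Declined minority applicants who scored at least as well as that floor:
# the kind of "swap set" Saleh's statistic describes.
declined = loans[(~loans["approved"]) & (loans["group"] == "minority")]
swap_set = declined[declined["score"] >= risk_floor]

share = len(swap_set) / max(len(declined), 1)
print(f"{share:.0%} of declined minority applicants scored at or above "
      "the riskiest approved borrower")
```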
Noticeable Issues

More attention is being paid to the weaknesses of AI algorithms, particularly among federal regulators. Six federal regulatory agencies, including the Office of the Comptroller of the Currency, the Federal Trade Commission, and the Consumer Financial Protection Bureau, are looking into updating existing regulations, guidance, and laws governing the use of AI and machine learning to address fairness concerns.

As a growing number of financial institutions rely on AI/ML to determine loan worthiness, regulators want to be sure that organizations are aware of potential biases that may exist within their algorithms and that they are taking appropriate measures to offset them. This is especially important given that research has uncovered disproportionately high home loan denials of Latino and Black borrowers within Fannie Mae and Freddie Mac, related to their use of automated credit scoring models.

Under the Microscope
With financial institutions’ embrace of AI/ML unlikely to wane, it is important for organizations that use the technology in their lending practices to take a deep dive into the algorithms they rely on and ensure that they are not unintentionally furthering biases. This is also important for any AI/ML you might be using in your hiring process, which we’ve previously covered.

Since doing this can be difficult on your own, organizations that employ AI/ML should look into fairness testing software to check for any biases they may be unaware of and to ensure that their lending practices are as fair as possible, particularly since it seems inevitable that regulators will eventually require such checks.
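For a sense of what such a check can look like, here is a minimal sketch of an adverse-impact ratio test, the kind of approval-rate comparison many fairness tools build on. The function name and data layout are hypothetical, not any particular vendor’s API; the 0.8 threshold is the “four-fifths rule” of thumb borrowed from employment testing and often applied in fair lending analysis.

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str,
                         approved_col: str, reference_group: str) -> pd.Series:
    """Approval rate of each group divided by the reference group's rate.

    Ratios below 0.8 are commonly flagged as potential disparate impact.
    """
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates[reference_group]

# Hypothetical decision log: one row per applicant.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [True, True, False, True, False, False, False],
})

ratios = adverse_impact_ratio(decisions, "group", "approved", reference_group="A")
flagged = ratios[ratios < 0.8]
print(ratios)
print("Groups needing review:", list(flagged.index))
```

A single ratio like this is only a starting point; fairness testing in practice examines outcomes across many protected groups, model versions, and decision stages, and documents the results so they can withstand regulatory scrutiny.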