When Henry Ford introduced the assembly line in 1913, it revolutionized the manufacturing industry. By breaking production down into smaller, repeatable tasks, the assembly line drastically reduced build times for the earliest cars, from more than 12 hours to just one hour and 33 minutes per vehicle. This innovation not only boosted efficiency but also made cars more accessible to the average consumer. Today, generative artificial intelligence (AI) offers financial institutions a similar opportunity to streamline credit risk assessments.

According to a McKinsey survey of senior credit risk executives, 20% reported already using generative AI. Another 40% said they would implement it in the next 3 to 6 months, 20% said within 6 to 12 months, and the final 20% said within the next 1 to 2 years. In other words, half of those credit risk executives are already using AI or will start using it this year. Their thoughts about how to use AI can help community financial institutions (CFIs) consider how they might apply it to managing their own credit risks.

How will they use AI? The executives McKinsey surveyed are using or considering applications built on large language models, which summarize, combine, and analyze language and data. These tools can create reports, emails, summaries, and other complex forms of natural language, and can also generate structured data or instructions that other software tools can follow. That makes generative AI applications potentially useful in:
- Client engagement. CFIs can put generative AI to work by offering customers tightly personalized product menus, based on the information that a CFI already has about those customers. AI can also write emails and meeting summaries, suggest what to do next, or help customers pick products that work for their individual situations.
- Credit decision and underwriting. Generative AI can compare submitted paperwork against a list of required documents, tell the user what's missing, or flag problems. It can also write emails requesting missing customer information and compile whatever customer information you do have. Generative AI built for customer service can even follow task sequences: pulling information from appropriate sources, finding relevant ratios, comparing those ratios against standard thresholds, and summarizing the results in credit memos for human credit officers to review, all without requiring users to have substantial programming or advanced modeling skills. If your CFI approves the credit request, AI can draft contracts or write emails that tell customers about credit decisions or explain next steps.
- Portfolio monitoring. A portfolio manager could use generative AI to automatically create performance and risk reports or summarize analyst reports on possible ways to optimize portfolios. Systems based on generative AI can help identify borrowers who would benefit from extra attention or build optimization strategies for subsegments of a loan book that are in keeping with a CFI’s overall risk profile.
- Customer assistance. When lending issues occur, generative AI can draft emails or letters to the borrower, point out possible loan restructuring options, and walk borrowers through the restructuring process. The technology can even help coach interactions between agents and customers.
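To make the underwriting task sequence above more concrete, here is a minimal sketch of the deterministic checks an LLM-driven assistant might be instructed to run: compare submitted paperwork against a required-documents list, screen ratios against standard thresholds, and draft a memo for a human credit officer. Everything here is illustrative — the document names, ratio names, and threshold values are assumptions, not figures from the McKinsey survey or any vendor product.

```python
# Hypothetical sketch of the checks behind an AI-assisted underwriting
# workflow. All names and numbers below are illustrative assumptions.

REQUIRED_DOCS = {"tax_return", "balance_sheet", "income_statement"}

# Standard thresholds a credit officer might configure (illustrative values).
THRESHOLDS = {
    "debt_to_income": ("max", 0.43),  # ratio must stay at or below this
    "current_ratio": ("min", 1.25),   # ratio must stay at or above this
}

def missing_documents(submitted):
    """Compare submitted paperwork against the required-documents list."""
    return sorted(REQUIRED_DOCS - set(submitted))

def screen_ratios(ratios):
    """Flag any ratios outside standard thresholds for human review."""
    flags = []
    for name, (direction, limit) in THRESHOLDS.items():
        value = ratios.get(name)
        if value is None:
            flags.append(f"{name}: not provided")
        elif direction == "max" and value > limit:
            flags.append(f"{name}: {value:.2f} exceeds max {limit}")
        elif direction == "min" and value < limit:
            flags.append(f"{name}: {value:.2f} below min {limit}")
    return flags

def credit_memo(applicant, submitted, ratios):
    """Summarize results in a draft memo for a credit officer to review."""
    missing = missing_documents(submitted)
    flags = screen_ratios(ratios)
    return "\n".join([
        f"Credit memo draft: {applicant}",
        "Missing documents: " + (", ".join(missing) or "none"),
        "Ratio flags: " + ("; ".join(flags) or "all within thresholds"),
    ])

print(credit_memo(
    "Acme LLC",
    submitted={"tax_return", "balance_sheet"},
    ratios={"debt_to_income": 0.51, "current_ratio": 1.4},
))
```

In practice, the language model would sit in front of logic like this: extracting the ratios from submitted financials, invoking the checks, and turning the structured output into a readable memo — the point being that the final judgment stays with a human credit officer.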
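Similarly, the portfolio-monitoring idea of identifying borrowers who would benefit from extra attention can be sketched as a simple rules layer whose output a generative AI tool could then summarize into a report. The field names and cutoffs below are illustrative assumptions, not standards from the survey.

```python
# Hypothetical sketch: rules a generative-AI monitoring tool could draw on
# to flag borrowers for extra attention. Field names and cutoffs are
# illustrative assumptions only.

def needs_attention(loan, days_past_due_cutoff=30, utilization_cutoff=0.9):
    """Return the reasons a loan merits a closer look, if any."""
    reasons = []
    if loan["days_past_due"] >= days_past_due_cutoff:
        reasons.append(f"{loan['days_past_due']} days past due")
    if loan["credit_line_utilization"] >= utilization_cutoff:
        reasons.append("credit line nearly exhausted")
    return reasons

def watch_list(portfolio):
    """Map flagged borrowers to reasons, e.g. as input to an AI-drafted report."""
    return {
        loan["borrower"]: reasons
        for loan in portfolio
        if (reasons := needs_attention(loan))
    }

portfolio = [
    {"borrower": "Acme LLC", "days_past_due": 45, "credit_line_utilization": 0.60},
    {"borrower": "Bravo Inc", "days_past_due": 0, "credit_line_utilization": 0.95},
    {"borrower": "Cedar Co", "days_past_due": 5, "credit_line_utilization": 0.40},
]
print(watch_list(portfolio))
```

A generative AI system would add value on top of rules like these by drafting the narrative performance and risk report, or by suggesting optimization strategies for subsegments of the loan book, while keeping the flagging criteria transparent and aligned with the CFI's risk profile.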
Portfolio monitoring is the most popular area for generative AI, followed by credit applications, controls, and reporting. Respondents also see somewhat more potential for generative AI in wholesale credit than in retail.

A McKinsey survey showed that credit risk executives are either already using generative AI or plan to start soon. Client engagement, credit decisions and underwriting, loan portfolio monitoring, and customer assistance are all areas where AI can help manage credit risk. As AI continues to integrate into financial practices, it offers significant opportunities to enhance efficiency and accuracy. However, CFIs should be prudent in their AI adoption, ensuring that any AI applications align with their overall risk management practices and do not introduce unforeseen vulnerabilities. Careful implementation and ongoing oversight will be key to leveraging AI effectively while maintaining sound risk management.