In the HBO series “Westworld,” a theme park where AI-powered robots serve as the citizens of a mock western town, things go awry when the robots become a bit too realistic. They start to exhibit human emotion and even exercise autonomy, despite being programmed to play specific characters for park visitors. The robots rebel, and some even escape, posing a danger to the real humans who designed them and to the park itself.

Luckily, ChatGPT, a trending AI-powered chatbot, can produce realistic writing and interact with humans without raising the safety concerns posed by the robots in “Westworld.” Could this technology replace human writers? In some cases, maybe, but the answer is more complex than that (and so are ChatGPT’s other potential uses beyond writing). As knowledge about this form of artificial intelligence evolves, it’s clear that ChatGPT presents both possibilities and perils. What’s less in the news is the fact that other aspects of generative AI can do even more for financial institutions.
How ChatGPT Fits into the World of AI
Launched in November 2022, ChatGPT is a kind of generative artificial intelligence. There are currently two prominent kinds of generative AI: the generative adversarial network (GAN) and the generative pre-trained transformer (GPT). ChatGPT, the GPT that has been in the news most recently, is an artificially intelligent chatbot built on top of OpenAI’s GPT-3 family of language models. Its main function is to mimic human conversation, but it can also write and debug computer programs and compose music, fiction, poetry, song lyrics, and essays. It’s that very ability to imitate human writing, in fact, that’s at the core of the Writers Guild of America strike.

Despite everything it can do, ChatGPT has its problems. Its chat responses are uneven: sometimes it writes nonsensical or incorrect answers, even though they may initially sound plausible. Because it sifts through a huge aggregate of online material to generate its responses, it can sometimes produce racist or sexist material.

It can also lead to legal problems. A recent study from information technology services and consulting company Capgemini found that 60% of businesses that employ AI have faced legal scrutiny, with 22% of these firms facing customer backlash in the wake of decisions made by their AI systems.

So what can AI do for community financial institutions (CFIs)? This new form of generative AI may be able to handle some obvious tasks, such as responding to customer chats or handling routine email, as long as actual people supervise the technology. That frees up humans for more substantive work. But the uses of generative AI go deeper than that.

Fraud Detection Using Generative AI
Generative AI can be trained to detect anomalous and potentially fraudulent transactions. In one study, researchers built a GAN that produced artificial fraudulent transactions, then compared genuine data against the synthetic data to sharpen its ability to pick out fraud. The GAN “learned” to find potential problems, an ability that could be very helpful for financial institutions that handle large transaction volumes. This is key for CFIs, with their millions of records.
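To make the idea concrete, here is a minimal sketch of the discriminator side of that setup: a toy logistic-regression classifier trained to separate genuine transactions from synthetic, fraud-like ones. All of the data, features, and numbers below are invented for illustration; a production GAN would use far richer features and a neural network rather than this two-feature model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "genuine" transactions: modest amounts, daytime hours.
genuine = np.column_stack([
    rng.normal(50, 15, 500),    # amount ($)
    rng.normal(13, 3, 500),     # hour of day
])

# Synthetic "fraud-like" transactions a generator might produce:
# unusually large amounts at odd hours.
synthetic_fraud = np.column_stack([
    rng.normal(400, 100, 500),
    rng.normal(3, 2, 500),
])

X = np.vstack([genuine, synthetic_fraud])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = fraud-like

# Standardize features, then fit a logistic-regression discriminator
# by plain gradient descent.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    w -= 0.5 * (Xs.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

def fraud_score(txn):
    """Probability the discriminator assigns to 'fraud-like'."""
    z = ((np.asarray(txn) - mu) / sigma) @ w + b
    return float(1 / (1 + np.exp(-z)))

print(fraud_score([45, 14]))   # ordinary daytime purchase -> low score
print(fraud_score([500, 2]))   # large 2 a.m. charge -> high score
```

In a full GAN, the generator would keep producing harder-to-spot fakes while the discriminator retrains, which is what drives the "learning" described above.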
Data Privacy

A GAN’s ability to build synthetic data makes it a resource for creating shareable data in place of actual customer data that CFIs can’t share because of rules around customer privacy. The same synthetic customer data can help train models that help CFIs discern a customer’s creditworthiness and find opportunities for cross-selling.

Without proper training, however, a GAN will propagate biases in its training data, and that could lead to legal trouble when customers aren’t treated equitably.
Risk Management

Generative AI can also help minimize risk-management losses. A GAN can estimate value at risk (VaR) for a given time period or create hypothetical scenarios that help predict financial market movement. By using a GAN to generate novel, assumption-free scenarios, bankers can see the potential effects of volatility, based on historical data patterns, and maintain appropriate levels of risk exposure.

Generative AI Can Even Explain Itself

Customers who are turned down for a loan can get an explanation from a GPT model. The technology can write a polite, useful denial letter that organizes the reasons for the denial from simple to complex. When customers understand why their loan applications were denied, they can reapply in a more informed way.

AI holds a lot of promise for CFIs. It can help manage risk, detect fraud, and even handle routine tasks. But it can also cause problems and has yet to be fully regulated. A CFI that wants to take advantage of AI needs skilled help and lots of historical data to adequately train an AI system before it can reap the potential benefits. We also recently explored ChatGPT’s capabilities by asking it to write a BID article about its value in banking. See what it came up with here.
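As a closing sketch of the value-at-risk idea from the risk-management section above: given a set of profit-and-loss scenarios, VaR is simply a loss quantile. Here the scenarios are drawn from an assumed normal distribution for brevity; in practice they could be GAN-generated instead, which is precisely the appeal. All figures are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical one-day P&L scenarios (in $k). In practice these could be
# GAN-generated rather than drawn from an assumed distribution.
scenarios = rng.normal(loc=0.0, scale=50.0, size=10_000)

def value_at_risk(pnl, confidence=0.95):
    """One-day VaR: the loss not exceeded at the given confidence level."""
    return -np.quantile(pnl, 1 - confidence)

var95 = value_at_risk(scenarios)          # roughly 1.645 * 50 ~ $82k
var99 = value_at_risk(scenarios, 0.99)    # roughly 2.326 * 50 ~ $116k
print(var95, var99)
```

Swapping the `scenarios` array for generator output is the only change needed to move from this parametric toy to the scenario-based approach described above.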