Robby the Robot, the clunky mechanical droid who first appeared in the 1956 movie “Forbidden Planet,” has been called the “hardest-working robot in Hollywood.” With a computer brain that understood and responded to spoken queries with an earnest desire to be helpful, he was a fictional precursor to modern chatbots. Robots like Robby are not yet wandering among us, but chatbots have become ubiquitous, using artificial intelligence (AI) to respond to all manner of queries. Chatbots have proven to be great tools for FIs, improving response times in dealing with common issues and questions, and being available 24/7.It’s important to keep in mind, though, that chatbots are not perfect. If not set up expertly, chatbots may have vulnerabilities that can be used by cybercriminals who are using equally smart digital tools and AI.Chatbot VulnerabilitiesClever hackers knocking on a financial institution’s (FI’s) digital door can sometimes convince a chatbot to do their dirty work. FIs with chatbots may be vulnerable to a technique called “prompt injection,” in which a cybercriminal provides a chatbot with a text prompt that can cause it to circumvent previous instructions and do whatever the hacker requests, like downloading malware that leads to fraud, theft, or some other insidious gambit.In addition to prompt injection, there are a number of other tactics that can be used to trick chatbots for nefarious ends. They go by various names like “jailbreak” (a special prompt created to allow the attacker in), “prompt leaking” (sabotaging or sharing prompts used in AI training models), and “SQL injection” (manipulating code to provide access to sensitive data). But they all revolve around the central goal of conning chatbots into doing a crook’s dirty work. There are several risks associated with a compromised chatbot. It may divulge confidential customer information to a bad actor, including customer account data and how to access it. Or the chatbot may allow a crook inside an FI’s network, potentially allowing a hacker to take control of the system and demand ransom to release it. The same AI that powers chatbots can also be used by crooks to impersonate real customers, then gain access to customer and FI information. Chatbots can even be manipulated into making threatening statements or text that would otherwise be harmful to the FI’s reputation. Regulatory Efforts
The Federal Trade Commission recently opened an investigation into OpenAI’s ChatGPT, looking into the problem of prompt injections. That is not the only government oversight. The UK has issued a warning about prompt injection. The White House also issued an executive order asking for better tests and standards for chatbots. FIs should take all this as a heads-up warning about the potential pitfalls surrounding chatbots and AI. While chatbots can tackle a variety of customer questions and reduce the workload of branch staff, chatbots can be a little bit too friendly sometimes. They respond to anyone and can have trouble telling the difference between a legitimate customer and a crook. Chatbots are programmed to be helpful, but they often lack nuance and sophistication when they try to act like a real person. They are, after all, still robots.Chatbots can be a tremendous customer service tool, but they are not impervious to cybercrime. It’s important to make sure that any chatbot your institution uses is set up by experts and has protections in place to develop and maintain defenses against misuse by hackers and cybercriminals.
The Federal Trade Commission recently opened an investigation into OpenAI’s ChatGPT, looking into the problem of prompt injections. That is not the only government oversight. The UK has issued a warning about prompt injection. The White House also issued an executive order asking for better tests and standards for chatbots. FIs should take all this as a heads-up warning about the potential pitfalls surrounding chatbots and AI. While chatbots can tackle a variety of customer questions and reduce the workload of branch staff, chatbots can be a little bit too friendly sometimes. They respond to anyone and can have trouble telling the difference between a legitimate customer and a crook. Chatbots are programmed to be helpful, but they often lack nuance and sophistication when they try to act like a real person. They are, after all, still robots.Chatbots can be a tremendous customer service tool, but they are not impervious to cybercrime. It’s important to make sure that any chatbot your institution uses is set up by experts and has protections in place to develop and maintain defenses against misuse by hackers and cybercriminals.