BID® Daily Newsletter
May 13, 2024

The Growing Threat of Deepfakes

Summary: Advancements in AI are making deepfakes a growing threat to financial institutions. Knowing the red flags to watch for and educating employees and customers about these threats should be a central part of CFIs' security measures.

As reality TV producers scramble to come up with ever more content, the shows being produced just keep getting stranger, even in the category of food preparation. Take, for instance, “Is it Cake?”, a game show where contestants can win up to $50K by baking a cake replica of an object realistic enough to fool celebrity judges. Each completed cake is placed among four of the real objects it was made to resemble, and the judges must determine which items are real and which is the cake.
Unfortunately, such realistic fakes are not limited to the baking industry. As artificial intelligence (AI) has grown dramatically more capable, it has helped criminals create deepfake videos realistic enough to fool the security programs many financial institutions rely on to detect fraud.
The low cost and widespread availability of ever-more-sophisticated AI applications are making these tools easier for criminals to use. As a result, fraudsters are now embracing the very same methods to perpetrate deepfakes that financial institutions are using to counteract them. Deepfake audio and video have become so realistic that bad actors have successfully created believable videos of high-profile individuals such as Mark Zuckerberg. In one recent case, fraudsters posing as executives at a Hong Kong bank instructed one of its employees to initiate a large wire transfer. The employee even participated in a video conference in which the “executives” assured him he should complete the transfer, only to discover later that they were all deepfakes.
Given that AI-generated recordings are believable enough to pass for well-known, recognizable individuals, fraudsters have realized that deepfakes mimicking ordinary people stand an even greater chance of success, particularly when the target is someone an organization does not know well. One way fraudsters do this is through so-called “Frankenstein” identities, in which scammers combine legitimate information with fake ID details to create phony identities that can be used to set up bank accounts and open lines of credit.
Such fraud is problematic for the banking industry, where biometrics are a common way of authenticating customers. Since criminals can now convincingly replicate an individual’s voice and appearance in both audio and video, CFIs can no longer trust biometrics as a stand-alone method of identity confirmation. As a result, the banking industry is scrambling to find new ways to verify the legitimacy of transactions.
Forewarned Is Forearmed
With deepfake activity advancing so quickly, CFIs need to be aware of criminals’ latest tactics. One that the banking industry should have on its radar is GoldPickaxe, a new form of mobile malware that emerged in China and is attributed to a cybercrime group known as GoldFactory. The trojan can harvest facial recognition data on individuals from social media, identification documents, and even SMS messages (texts) — all of which can be used to create deepfakes believable enough to circumvent many biometric security measures. So far, scammers’ use of GoldPickaxe has been limited to the Asia-Pacific region, but it is likely just a matter of time before they cast a wider net. The trojan must be installed on an individual’s phone, requires unusual installation procedures, and has so far been found in phony Google Play stores.
Until new methods of identity verification that aren’t susceptible to AI-based fraud are identified, CFIs should focus on educating employees about deepfakes and driving home the importance of watching for them. People who are well versed in the warning signs of fraud are less likely to be fooled.
Among the red flags CFIs should watch for in digital communications are skin tones that look slightly off, mismatches between the proportions of bodies and faces in photos or videos, shadows that fall in unnatural places, and pauses or rhythms in speech that are inconsistent.
CFIs can also strengthen security measures against deepfakes, such as by issuing customers physical devices that can be used for verification; a simplified sketch of how such a device-based check might work appears below. Institutions can also educate customers on how to protect their information and on the risks created by widespread AI usage. Keeping employees and customers informed about specific malware such as the GoldPickaxe trojan is also crucial, as people need to understand the importance of verifying anything they click on or download to an electronic device. Similarly, people should be reminded how easy it has become for AI to replicate the voice and digital appearance of everyday people, and of the importance of scrutinizing any request for personal information to ensure it is legitimate.
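To make the device-based option concrete, here is a minimal sketch of how a server might check a one-time code read off a customer’s physical token or authenticator app. It assumes a TOTP-style device and the open-source pyotp library; the helper names and enrollment flow are illustrative assumptions, not any specific vendor’s implementation.

```python
# Minimal sketch of device-based verification using time-based one-time
# passwords (TOTP), assuming the open-source pyotp library. The shared
# secret would be provisioned onto the customer's physical token or
# authenticator app at enrollment and stored encrypted on the server.
import pyotp

def enroll_customer() -> str:
    """Generate a per-customer shared secret at enrollment (hypothetical helper)."""
    return pyotp.random_base32()

def verify_device_code(shared_secret: str, submitted_code: str) -> bool:
    """Check the code the customer reads off their device against the server clock.

    valid_window=1 tolerates one 30-second step of clock drift.
    """
    return pyotp.TOTP(shared_secret).verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = enroll_customer()
    # Simulate the customer's device generating the current code.
    device_code = pyotp.TOTP(secret).now()
    print("Code accepted:", verify_device_code(secret, device_code))
```

The appeal of this approach is that a deepfaked voice or face cannot produce the code; only someone holding the physical device can.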
Behavioral biometrics offers another way for CFIs to protect themselves and their customers: AI learns an individual’s typical digital usage and behavior patterns — whether online or on mobile devices — and flags suspicious changes when they emerge, such as someone suddenly visiting a part of a website or app they have never used before yet already navigating it rapidly. A simplified sketch of this kind of check follows.
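The sketch below is a deliberately simplified version of the anomaly check a behavioral-biometrics system performs; the feature names and threshold are illustrative assumptions, and production systems model far richer signals with machine learning.

```python
# Simplified behavioral-biometrics check: flag a session whose measurements
# deviate sharply from the user's historical baseline. Feature names and the
# z-score threshold are illustrative assumptions, not a production design.
from statistics import mean, stdev

def is_suspicious(history: dict[str, list[float]],
                  session: dict[str, float],
                  z_threshold: float = 3.0) -> bool:
    """Return True if any session feature is a statistical outlier for this user."""
    for feature, value in session.items():
        baseline = history.get(feature, [])
        if len(baseline) < 2:
            continue  # not enough data yet to judge this feature
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            return True
    return False

# Example: a user who normally types ~40 words per minute and browses at a
# leisurely pace suddenly races through a never-used wire-transfer flow.
history = {"typing_wpm": [38, 41, 40, 39, 42],
           "seconds_per_page": [22, 25, 20, 24, 23]}
session = {"typing_wpm": 95, "seconds_per_page": 3}
print("Flag for review:", is_suspicious(history, session))
```

A real deployment would weigh many such signals together and route flagged sessions to step-up verification rather than blocking outright.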
As fraudsters step up their use of deepfakes and deepfake quality continues to improve, CFIs need to be aggressive in their efforts to fend off such fraud. Biometrics remain an important method of identity verification, but they should not be used as a standalone security measure, and educating employees and customers about the red flags to watch for remains a critical component of protecting against deepfakes.