BID® Daily Newsletter
Feb 6, 2024

The Risks Lurking in the AI Shadows

Summary: Employees' use of unapproved, public generative AI tools poses a major security risk for your business. We review what shadow AI is, how it leaves your business vulnerable, and what you and your employees can do to minimize the risk.

Stairs are everywhere. The average home contains between 14 and 16 steps. Climbing stairs is so commonplace that most people don’t think about how dangerous they can be. In fact, accidents on stairs account for more than 1MM injuries and roughly 12K deaths each year. There’s a reason that soap operas depict people dramatically falling down a long staircase, leaving viewers gasping and wondering if the character will survive the tumble.
Generative artificial intelligence (AI) tools are far newer than stairs, but people use them just as unthinkingly, without weighing the risks. Businesses need to be aware of the risks that generative AI can create, particularly when its usage is unknown to an organization's information technology (IT) department.
Lurking in the Shadows
As a growing number of people embrace generative AI to handle mundane and time-consuming tasks in their jobs, organizations are unknowingly exposing themselves to the risks of what has been dubbed shadow AI. Similar to shadow IT (the use of software applications, devices, and systems by a company's employees without approval from the IT department), shadow AI is the use of unapproved AI applications. With shadow AI, employees use free, public large language model (LLM) tools, such as ChatGPT, unbeknownst to their IT departments.
Research from Gartner found that in 2022 alone, 41% of employees acquired, modified, or created technology outside the visibility of their organizations' IT departments. By 2027, that figure is expected to rise to 75% of employees.
This practice exposes organizations to hackers, who can use these programs to bypass a business's IT security, meaning each employee could ultimately serve as a portal into the organization's information and data.
Not All AI Is Created Equal
ChatGPT may be one of the most recognizable AI platforms, but it is not the only option, and there are important differences among AI apps that businesses need to understand. Take, for instance, OpenAI, which can use the information individuals enter into its public tools to train its models for other users. Unlike infrastructure as a service (IaaS), cloud computing provided by a third party that enhances security with features such as unique encryption keys, most artificial intelligence as a service (AIaaS) programs rely on user input to train their models.
Such programs often archive and store whatever data users enter. When employees use unauthorized AIaaS platforms without understanding the risks, or without IT training on how to use AI without compromising company or customer data, they open their employers up to data breaches and potential regulatory consequences. That is exactly what happened with OpenAI, which disclosed a March 2023 incident that may have exposed portions of some users' chat histories to other users.
How To Use AI Safely
Because AIaaS makes it possible for any individual to unintentionally create an entry point for hackers into an organization's data, educating employees about these risks is critical. There are a few different approaches businesses can take to establish greater control over AI usage among their employees:
  • Ban the use of generative AI. Businesses might choose to follow the lead of major companies such as Samsung and Amazon by banning generative AI usage among employees. This can be accomplished simply by blocking the domains of known AI platforms on all company-owned devices (see the first sketch after this list).
  • Use in-house generative AI programs. Private generative AI programs can be purchased and provided for company-wide usage; they are more secure and give IT departments greater transparency.
  • Monitor shadow AI usage. Software as a service (SaaS) monitoring tools can automatically detect when shadow AI is running on an employee's computer or when someone's business credentials are used by an application, which is particularly valuable since many people grant access permissions without even realizing it (see the second sketch below). Failing to take such precautions could open companies to the possibility of regulatory action over data handling.
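
To make the domain-blocking option concrete, here is a minimal sketch that generates hosts-file entries to sinkhole known generative AI domains on company-managed devices. The domain list, sinkhole address, and deployment method are illustrative assumptions, not an exhaustive or authoritative configuration:

```python
# Minimal sketch: generate hosts-file entries that sinkhole known
# generative AI domains on company-managed devices.
# ASSUMPTION: the domain list below is illustrative, not exhaustive.

BLOCKED_AI_DOMAINS = [
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
]

SINKHOLE_IP = "0.0.0.0"  # resolves blocked domains to an unroutable address


def hosts_entries(domains: list[str]) -> str:
    """Return hosts-file lines blocking each domain and its www alias."""
    lines = []
    for domain in domains:
        lines.append(f"{SINKHOLE_IP} {domain}")
        lines.append(f"{SINKHOLE_IP} www.{domain}")
    return "\n".join(lines)


if __name__ == "__main__":
    # Append this output to each device's hosts file, or distribute it
    # through your device-management or DNS-filtering tooling.
    print(hosts_entries(BLOCKED_AI_DOMAINS))
```

In practice, most organizations would push such a blocklist through a DNS filter or mobile device management tool rather than editing hosts files by hand.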
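For the monitoring option, here is a minimal sketch of the kind of check a SaaS monitoring tool automates: scanning web-proxy logs for requests to known generative AI domains. The log file name, column names, and domain list are assumptions to adapt to your own environment:

```python
# Minimal sketch: flag possible shadow AI usage by scanning web-proxy
# logs for requests to known generative AI domains.
# ASSUMPTIONS: the CSV log has 'user' and 'host' columns, and the
# domain list and file name are placeholders for your own environment.

import csv
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}


def flag_shadow_ai(log_path: str) -> Counter:
    """Count proxy requests per user that hit a known AI domain."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("host") or "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[row.get("user") or "unknown"] += 1
    return hits


if __name__ == "__main__":
    for user, count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{user}: {count} request(s) to generative AI domains")
```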
As generative AI use becomes more widespread, community financial institutions need to be aware of the risks created by shadow AI and educate employees about how to avoid them. Discuss with your IT team or service provider what options you have to protect your institution from AI risk. If your business already has authorized uses of AI, implementing company guidelines for its usage and employing SaaS tools to detect unauthorized usage can help reduce the risks.
Subscribe to the BID Daily Newsletter to have it delivered by email daily.

Related Articles:

The Importance of Bluetooth Security
As Bluetooth usage becomes more prevalent and scammers increasingly use it as a way to hack people’s devices, CFIs should pay closer attention to related security measures.
Crypto Fraud Is Growing Rapidly
While the number of fraud complaints related to cryptocurrency schemes is still small compared to all complaints, the dollar losses have skyrocketed, eclipsing all other types of financial fraud losses.