TITLE: Shadow AI Puts Company Data at Risk as Workers Share Sensitive Info
The Growing Shadow AI Problem in Workplaces
New research reveals a concerning trend in today’s workplaces: three out of five employees (59%) admit to using AI tools that haven’t been approved by their companies. This practice, known as “shadow AI,” is creating significant security vulnerabilities across organizations.
Widespread Data Sharing with Unapproved Tools
More alarming still, 75% of employees who use shadow AI acknowledge sharing sensitive company data through these unvetted platforms. Perhaps most surprising, 57% of direct managers support their team members' use of unauthorized AI tools, according to findings published by Cybernews.
Leadership Leading the Charge in Shadow AI Usage
Contrary to what one might expect, executives and senior managers are actually the most likely to use shadow AI tools, with a staggering 93% adoption rate. Managers follow at 73%, while professionals show slightly more restraint at 62% usage.
Types of Sensitive Information Being Compromised
The types of data being shared through unapproved AI platforms are particularly concerning:
- Employee data (35%)
- Customer information (32%)
- Internal documents (27%)
- Legal and financial data (21%)
- Security-related information (21%)
- Proprietary code (20%)
Awareness vs. Action on AI Risks
Despite this risky behavior, most workers (89%) recognize that AI poses potential threats to their organizations. Nearly two-thirds (64%) acknowledge that data breaches could result from shadow AI use, and 57% say they would stop using unapproved tools if a breach occurred. However, few take any preventive measures before an incident happens.
The Critical Need for Better AI Policies
Žilvinas Girėnas, Head of Product at nexos.ai, emphasizes the danger: “Once sensitive data enters an unsecured AI tool, you lose control. It can be stored, reused, or exposed in ways you’ll never know about.”
Many organizations are struggling to address this challenge effectively. Nearly a quarter (23%) of companies still have no official AI policy whatsoever, while only about half (52%) provide approved AI tools for their employees. Even when tools are provided, only one in three workers feels these meet their actual needs.
The Path Forward for Organizations
The solution lies in companies developing more robust AI policies and providing the right types of tools that genuinely meet employee needs. As Cybernews Security Researcher Mantas Sabeckis concludes, “Companies should look into ways to incorporate AI into their processes securely, efficiently, and responsibly.”