According to Dark Reading, Tenable has launched a new product called Tenable One AI Exposure as an add-on to its exposure management platform. The tool is designed to detect, map, and govern the use of AI platforms across all enterprise infrastructure, specifically targeting the risks of “shadow AI” and corporate data exposure. The capability, which was originally previewed last summer, integrates technology from Tenable’s acquisition of Apex Security and is now part of a product called Tenable AI Aware. Initially, it offers deeper detection for Microsoft Copilot and OpenAI’s ChatGPT, with support for Google’s Gemini coming in a future update. The system works by extending Tenable’s existing vulnerability scanners to find artifacts of AI usage and can enforce policies, detect misconfigurations, and even remediate threats through automated orchestration or ticketing in systems like ServiceNow.
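To make the "artifacts of AI usage" idea concrete, here is a minimal, purely illustrative sketch of what discovery can look like in practice: scan outbound proxy logs for known AI-service endpoints and report which internal hosts are talking to them. This is not Tenable's implementation; the endpoint list, log format, and file path are assumptions made for the example.

```python
# Illustrative sketch only, not Tenable's implementation. One way a scanner could
# flag "shadow AI": grep outbound proxy logs for known AI-service endpoints and
# report which internal hosts are contacting them.
import re
from collections import defaultdict

# Assumed mapping of well-known AI service domains to friendly names.
AI_ENDPOINTS = {
    "api.openai.com": "OpenAI ChatGPT/API",
    "copilot.microsoft.com": "Microsoft Copilot",
    "generativelanguage.googleapis.com": "Google Gemini API",
}

# Assumed log line format: "<timestamp> <source_host> <destination_domain> <bytes>"
LOG_LINE = re.compile(r"^(\S+)\s+(\S+)\s+(\S+)\s+(\d+)$")

def find_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Return a map of AI service -> internal hosts seen contacting it."""
    usage = defaultdict(set)
    with open(log_path) as fh:
        for line in fh:
            match = LOG_LINE.match(line.strip())
            if not match:
                continue
            _, source_host, destination, _ = match.groups()
            for domain, service in AI_ENDPOINTS.items():
                if destination == domain or destination.endswith("." + domain):
                    usage[service].add(source_host)
    return usage

if __name__ == "__main__":
    for service, hosts in find_ai_usage("proxy.log").items():
        print(f"{service}: {len(hosts)} host(s) -> {sorted(hosts)}")
```

Real products layer far more signal on top of this (endpoint telemetry, browser extensions, SaaS API audits), but the basic shape of the discovery problem is the same.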
The AI Security Gold Rush Is On
Here’s the thing: Tenable is far from alone in this scramble. The article points out that competitors like CrowdStrike, Rapid7, and Wiz are all rolling out their own flavors of AI security and governance. Analyst Andrew Braunberg calls GenAI and agentic AI an “important new attack surface,” and every major exposure management or cloud-native application protection platform (CNAPP) vendor is racing to cover it. It’s a classic land grab. The market has decided that AI governance isn’t a niche problem anymore—it’s a core requirement that needs to be baked into the existing security workflow. So Tenable’s move is less about innovation and more about table stakes. They have to have this feature now.
Beyond Just Finding the AI
What’s more interesting than the “who” is the “how.” Tenable’s approach, detailed in their blog post, isn’t just about discovering that ChatGPT is being used. It’s about context. The tool tries to map how an AI model or workflow connects to your specific infrastructure, identity systems, and sensitive data stores. That’s the crucial shift. The real risk isn’t that an employee uses Copilot; it’s that Copilot, through some automated agent, has been granted access to a database full of customer PII. Connecting those dots automatically is the hard part, and it’s where the real value will be. Can it actually understand the data flow and the blast radius of a misconfiguration? That’s the billion-dollar question for all these vendors.
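As a thought experiment on why "connecting the dots" matters, consider modeling access grants as a directed graph and walking it to find what a given AI agent can actually reach. The sketch below is a hypothetical toy model, not Tenable's data model; every name in it is invented for illustration.

```python
# Hypothetical sketch of the "connect the dots" problem, not Tenable's data model.
# Access grants form a directed graph (AI agent -> service identity -> data store);
# the blast radius is whatever sensitive stores are reachable from the agent.
from collections import deque

# Edges mean "has access to". All names are made up for illustration.
ACCESS_GRAPH = {
    "copilot-agent": ["svc-account-42"],
    "svc-account-42": ["crm-database", "build-artifacts"],
    "crm-database": [],
    "build-artifacts": [],
}
SENSITIVE_STORES = {"crm-database"}  # e.g. tagged as holding customer PII

def blast_radius(start: str) -> set[str]:
    """Return every sensitive data store reachable from `start` via access grants."""
    seen, queue, hits = {start}, deque([start]), set()
    while queue:
        node = queue.popleft()
        if node in SENSITIVE_STORES:
            hits.add(node)
        for neighbor in ACCESS_GRAPH.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return hits

print(blast_radius("copilot-agent"))  # {'crm-database'}: the risky exposure path
```

The graph walk is trivial; the hard part, and the part that will separate these vendors, is building the graph accurately from messy, real-world identity and data-store inventories in the first place.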
The Remediation Question
And then there’s the fix. It’s one thing to light up a dashboard with a bunch of scary red alerts about unsanctioned AI use. It’s another to actually do something about it effectively. Tenable says it can remediate through patch management for simple issues and workflow orchestration for complex ones, creating tickets in IT service management tools. That’s smart, because it leverages existing processes. But let’s be real. Enforcing a policy that blocks a popular AI tool employees find useful is a political and cultural nightmare for many IT teams. The technology can tell you the “what” and maybe even the “how,” but the “who gets to decide” and the “how do we change behavior” parts are much, much harder. The tool provides the lever, but someone still has to pull it.
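For a sense of what the ticketing half of that workflow looks like, here is a minimal sketch of the "open a ticket instead of auto-blocking" pattern using ServiceNow's standard Table API. The instance name, credentials, and field values are placeholders, and the article doesn't specify how Tenable actually files these tickets.

```python
# Minimal sketch of filing a remediation ticket via ServiceNow's Table API
# (POST /api/now/table/incident). Instance, credentials, and field values are
# placeholders; this is not Tenable's integration code.
import requests

SNOW_INSTANCE = "https://example.service-now.com"    # placeholder instance
AUTH = ("integration_user", "integration_password")  # placeholder credentials

def open_shadow_ai_ticket(host: str, service: str) -> str:
    """File an incident for unsanctioned AI usage and return its ticket number."""
    payload = {
        "short_description": f"Unsanctioned AI usage detected on {host}",
        "description": f"{host} was observed communicating with {service}. "
                       "Review against the corporate AI usage policy.",
        "urgency": "2",
        "category": "security",
    }
    response = requests.post(
        f"{SNOW_INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]["number"]
```

Routing findings into an existing ITSM queue is the easy, automatable part; deciding who owns the ticket and whether the policy behind it survives user pushback is not.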
A Market In Formation
So where does this leave us? We’re in the early, messy phase of a new security category—call it AI-SPM (AI Security Posture Management) or AI exposure management. The capabilities are evolving fast, and the integration depth with major platforms (Microsoft, Google, OpenAI) will be a key battleground. Tenable has a solid foothold with its existing vulnerability customer base, which is a good starting point. But they’re up against cloud-native heavyweights like Wiz and endpoint titans like CrowdStrike. This isn’t just a software feature war; it’s a race to see which existing security platform can most convincingly own this new problem. The next year will be all about who can move from basic discovery to truly intelligent, automated governance that doesn’t grind business productivity to a halt. Buckle up.
