Ninety-nine percent isn’t a statistic you expect to see in a security report. But that’s the finding from a new survey of 500 U.S. CISOs: 99.4% of organizations experienced at least one security incident tied to their SaaS or AI ecosystem in 2025. Only three respondents reported zero incidents. Three.
The survey, conducted by Consensuswide, covered companies ranging from 500 to 10,000 employees across all major industry verticals. It asked 17 questions about security posture, tooling, incidents, and preparedness. These organizations were running an average of 13 dedicated security tools each when those incidents occurred. Financial services firms, the most security-invested sector in the survey, averaged 15.6 tools, and still experienced SaaS supply chain attacks at a rate 26% above the cross-industry average.
The Threat Has Moved
I had the opportunity to speak with Amir Khayat, co-founder and CEO of Vorlon, about what the data reveals. His explanation begins with how enterprise workflows have fundamentally changed, and why security monitoring hasn’t kept up.
Traditional SaaS automation is deterministic: if this, then that. It breaks the moment a variable changes. AI agents work differently. They use large language models to interpret intent, handle edge cases on the fly, and select tools and APIs based on real-time goals rather than hard-coded paths. That creates a monitoring problem that security tools weren’t designed for.
“When behavior is deterministic, you can define normal and alert on deviation,” Khayat said. “When an agent is reasoning its way through a workflow, establishing a behavioral baseline becomes a fundamentally different problem.”
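To make the contrast concrete, here is a minimal sketch, with hypothetical function names that don’t correspond to any vendor’s actual code. A deterministic workflow follows the same fixed sequence every run, so a monitor can compare against it; an agent-driven workflow decides its next step at runtime, so there is no fixed sequence to compare against.

```python
# Illustrative sketch only: deterministic vs. agent-driven automation.

ALLOWED_SEQUENCE = ["verify_employee", "reset_credential", "notify_user"]

def deterministic_workflow():
    # The same three steps, in the same order, every time.
    return list(ALLOWED_SEQUENCE)

def agentic_workflow(choose_next_step):
    # choose_next_step stands in for an LLM deciding, per ticket, which
    # system to touch next (Okta, Slack, GitHub, payroll, ...).
    steps = []
    context = "ticket: employee cannot access repository"
    while True:
        step = choose_next_step(context, steps)
        if step == "done":
            return steps
        steps.append(step)
        context += f" | did {step}"

def baseline_check(observed_steps):
    # Trivial for the deterministic case. For the agentic case there is
    # no equivalent fixed sequence to check against.
    return observed_steps == ALLOWED_SEQUENCE
```

The point of the sketch is the last function: a rule like "alert on anything outside these three calls" only works when the path is hard-coded.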
Most enterprise security architecture was built around what Khayat calls the front door: user logins, credential validation, permission audits, and network perimeter controls. That covered two distinct entrances: human users coming in through browsers, and service-to-service APIs at the infrastructure level. Tools like CASBs, WAFs, and cloud security posture management were built for those patterns. The behavior was predictable enough to define normal and detect deviation.
The engine room is a different situation entirely. An AI agent resolving a routine IT ticket might autonomously touch identity systems, permissions, and configurations across Okta, Slack, GitHub, DocuSign, and payroll platforms, all in minutes, with no human involved. Each system logs its own slice. Nobody sees the full picture. The agent isn’t following a known pattern because it’s deciding the pattern as it goes. That doesn’t look like a suspicious login. It doesn’t trigger a configuration alert.
Asking the Wrong Questions
The tools most enterprises are running were built to answer specific questions: what are the configurations, who has what permissions, is anything misconfigured? Those are valuable questions. They’re just not the right questions when an AI agent is moving data through a legitimate OAuth-authorized integration.
The questions that matter in that scenario are: what is this agent actually doing, what data is it touching, and is that behavior consistent with what it was authorized to do. As Khayat put it: “You can have 15 of them running and still be blind to that activity.”
When CISOs were asked to rate their tools across 11 specific capability limitations, between 83% and 87% of organizations reported some level of limitation on every single one. The range spans only four percentage points across all 11. That’s not evidence that some vendors are outperforming others; it’s evidence that the entire category was built around the same assumptions, and those assumptions don’t hold for the agentic layer.
Confidence Versus What Actually Happened
Nearly 90% of CISOs surveyed claimed strong or comprehensive OAuth token governance. Yet 27.4% were breached through compromised OAuth tokens or API keys that same year. About 79% claimed comprehensive, real-time data flow mapping across SaaS and AI. Yet 86.8% said they can’t actually see what data AI tools are exchanging with SaaS applications. Those numbers can’t simultaneously be true.
Khayat traces that back to the difference between configuration-layer governance and runtime governance. Most organizations know which tokens exist, can audit permissions, and can revoke tokens manually. What they don’t have is visibility into whether active tokens are being used consistently with their intended scope, or whether a token’s behavior has drifted. Knowing a token exists isn’t the same as knowing what it’s doing right now.
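A rough illustration of that gap, using hypothetical field names rather than any particular product’s schema: a configuration audit can list tokens and the scopes they were granted, while runtime governance has to compare each observed API call against what the token was actually authorized to do.

```python
from dataclasses import dataclass

@dataclass
class Token:
    token_id: str
    owner: str               # e.g. "crm-sync-agent"
    granted_scopes: set      # what the integration was authorized to do

@dataclass
class ApiCall:
    token_id: str
    scope_used: str          # what the call actually did
    records_touched: int

def configuration_audit(tokens):
    # Inventory view: which tokens exist and what they *could* do.
    return {t.token_id: sorted(t.granted_scopes) for t in tokens}

def runtime_check(tokens, calls, volume_limit=1000):
    # Runtime view: what each token is *doing right now*, flagged when a
    # call falls outside the granted scope or activity volume looks drifted.
    granted = {t.token_id: t.granted_scopes for t in tokens}
    alerts = []
    for call in calls:
        if call.scope_used not in granted.get(call.token_id, set()):
            alerts.append((call.token_id, "out-of-scope call", call.scope_used))
        elif call.records_touched > volume_limit:
            alerts.append((call.token_id, "unusual volume", call.records_touched))
    return alerts
```

The first function is roughly what most organizations already have; the second is the piece the survey suggests is missing.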
ITDR platforms that track non-human identity activity run into the same wall: they typically stop at the authentication layer. They can tell you an agent is logged in. What they can’t tell you is what that agent did with data once it was inside: what it queried, what it moved, where it sent it, and whether any of that was within scope. 83.4% of CISOs said distinguishing between human and non-human behavior is a current limitation of their tools. That number should be part of every conversation about enterprise AI security right now.
More Budget, Same Architecture
More than 86% of organizations plan to increase SaaS security spending in 2026, and 84% plan to increase AI security spending. Budget directed at the same tool categories will produce the same results. The 99.4% breach rate happened at an average of 13 tools. Adding a 14th tool that monitors the front door won’t change anything in the engine room.
Khayat’s argument is that the layer itself needs to change: from configuration auditing to runtime monitoring. Behavioral baselines built around data interaction rather than login patterns. Real-time token governance tied to actual usage, not just inventory. And the ability to reconstruct a forensic timeline of agent activity across every connected system after something goes wrong. When a supply chain attack executes through a SaaS integration, the blast radius extends to every system the token was authorized to access. Without that reconstruction capability, scoping remediation and meeting regulatory disclosure timelines get harder than they should be.
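In outline, that reconstruction step might look something like the following sketch, which assumes each SaaS platform exposes an audit log with a timestamp, an actor, and an action; it is an illustration of the idea, not a description of any vendor’s implementation. Merging per-system audit slices on a shared agent or token identity yields one ordered timeline, and the set of systems in that timeline is the blast radius to scope.

```python
from datetime import datetime

def reconstruct_timeline(per_system_logs, agent_id):
    """Merge each platform's audit slice into one ordered timeline for a
    single agent or token identity (illustrative sketch, hypothetical log format)."""
    events = []
    for system_name, log in per_system_logs.items():
        for entry in log:
            if entry["actor"] == agent_id:
                events.append({
                    "time": datetime.fromisoformat(entry["timestamp"]),
                    "system": system_name,
                    "action": entry["action"],
                })
    return sorted(events, key=lambda e: e["time"])

def blast_radius(timeline):
    # Every system the identity actually touched during the incident window.
    return sorted({event["system"] for event in timeline})
```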

