The 2026 Data Threat Report from Thales lands with an uncomfortable message for enterprises already racing ahead with automation: the biggest security problem is no longer just what AI can do wrong, but what it is allowed to do at all. Based on research conducted by S&P Global 451 Research, the report shows that 61% of organizations across sectors like automotive, energy, finance, and retail now see AI as their top data security risk. That anxiety isn’t rooted purely in science-fiction fears of malicious AI. It’s about access. As AI systems shift from experimental tools into trusted operational actors, they are being granted privileges that look a lot like those of insiders, often without the same scrutiny or guardrails.
What’s changing is the definition of insider risk itself. The report makes it clear that this is no longer just a human problem. Automated systems are being trusted quickly, sometimes recklessly, and when identity governance, access policies, or encryption practices are weak, AI doesn’t just exploit those gaps; it magnifies them. A misconfigured permission that might once have exposed a small dataset can now ripple across environments at machine speed, touching cloud platforms, SaaS tools, analytics pipelines, and development systems in seconds rather than weeks.
As enterprises embed AI into daily workflows, from customer service and analytics to software development and decision support, these systems are being given broad, automated access to sensitive data. In many cases, the controls applied to machines are looser than those imposed on human employees. The report highlights a worrying visibility gap behind this trend. Only 34% of organizations say they know where all of their data resides, regardless of criticality, and just 39% can fully classify it. Nearly half of sensitive cloud data remains unencrypted. When AI systems are allowed to ingest and act on data spread across cloud and SaaS environments, that lack of visibility makes enforcing least-privilege access feel more like an aspiration than a reality.
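To make the least-privilege point concrete, here is a minimal, purely illustrative sketch of auditing an AI service identity against the narrow set of permissions it was approved for. The identity name and permission strings are hypothetical examples, not drawn from the report or any particular cloud provider.

```python
# Illustrative only: a minimal least-privilege audit for a machine identity.
# The identity name and permission strings are hypothetical, not taken from
# the Thales report or any specific IAM system.

APPROVED_SCOPES = {
    "analytics-agent": {"read:sales_warehouse", "read:ticket_archive"},
}

def excess_permissions(identity: str, granted: set[str]) -> set[str]:
    """Return permissions granted to an identity beyond its approved scope."""
    approved = APPROVED_SCOPES.get(identity, set())
    return granted - approved

if __name__ == "__main__":
    # In practice `granted` would come from an IAM or SaaS admin API;
    # it is hard-coded here to keep the sketch self-contained.
    granted = {
        "read:sales_warehouse",
        "read:ticket_archive",
        "write:prod_database",
        "read:hr_records",
    }
    extra = excess_permissions("analytics-agent", granted)
    if extra:
        print(f"analytics-agent exceeds its approved scope: {sorted(extra)}")
```

The value of even a toy check like this is that it forces someone to write down what an automated system is actually supposed to touch, which is exactly the inventory many organizations in the survey say they lack.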
Identity infrastructure has quietly become the primary attack surface. Credential theft is now the leading technique used in cloud attacks, cited by 67% of organizations that experienced incidents. At the same time, half of respondents rank secrets management as one of their top application security challenges. This reflects how difficult it has become to govern machine identities, API keys, and tokens at scale. AI doesn’t just use credentials; it depends on them, and when those credentials are compromised, the blast radius can be enormous.
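One way to picture why secrets management matters at machine scale is credential lifetime. The sketch below is an assumption-laden illustration, not guidance from the report: it mints a short-lived token with an explicit expiry so that a stolen credential is only useful for a bounded window. The token format, the 15-minute lifetime, and the in-memory store are all hypothetical.

```python
# Illustrative only: issuing and checking a short-lived machine credential so
# that a leaked token has a bounded blast radius. Token format and lifetime
# are hypothetical assumptions, not drawn from the report.

import secrets
from datetime import datetime, timedelta, timezone

TOKEN_TTL = timedelta(minutes=15)          # assumed lifetime for the example
_issued: dict[str, datetime] = {}          # token -> expiry (stand-in for a secrets store)

def issue_token(identity: str) -> str:
    """Mint a random, time-limited token for a machine identity."""
    token = f"{identity}:{secrets.token_urlsafe(32)}"
    _issued[token] = datetime.now(timezone.utc) + TOKEN_TTL
    return token

def is_valid(token: str) -> bool:
    """A token is valid only if it was issued here and has not yet expired."""
    expiry = _issued.get(token)
    return expiry is not None and datetime.now(timezone.utc) < expiry

if __name__ == "__main__":
    t = issue_token("build-pipeline-agent")
    print("valid now:", is_valid(t))           # True within the lifetime window
    _issued[t] -= TOKEN_TTL * 2                # simulate the window passing
    print("valid after expiry:", is_valid(t))  # False once expired
```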
Attackers, unsurprisingly, are not standing still. Nearly 60% of organizations report experiencing deepfake-driven attacks, while 48% say they have suffered reputational damage linked to AI-generated misinformation or impersonation. AI is not introducing entirely new categories of threat so much as accelerating existing ones. Human error already plays a role in more than a quarter of breaches, and once automation is layered on top, small mistakes can propagate faster and farther than ever before. One overlooked configuration or misissued token can suddenly operate at industrial scale.
Investment patterns suggest awareness is growing, but adaptation is lagging. About 30% of organizations now dedicate specific budgets to AI security, which signals that leadership teams recognize the shift. Yet a majority still rely on traditional security programs designed around human users and perimeter-based defenses. Those models struggle in environments where machines authenticate continuously, access data autonomously, and act without direct oversight. The operating assumptions have changed, but many security strategies have not.
The report’s conclusion is less about slowing AI adoption and more about redefining trust. AI is not replacing existing risks; it is intensifying them by increasing their speed, scale, and reach. As automated systems gain deeper access to enterprise data, identity, encryption, and data visibility need to be treated as foundational infrastructure, not optional add-ons. Organizations that bake strong governance into their AI strategies from the start will be far better positioned to innovate without accidentally turning their most powerful new capability into their most dangerous insider.