Wallarm’s newly released 2026 API ThreatStats Report lands with the kind of clarity CISOs usually only get after an incident review, except this time the damage is already mapped, counted, and correlated. Based on large-scale API attack telemetry, published vulnerabilities, confirmed exploitation, and API-related breaches disclosed throughout 2025, the report makes a blunt case: APIs are no longer just an application security concern tucked somewhere under DevSecOps. They are the single most exploited attack surface in production environments, and attackers are winning not through exotic techniques, but through repeatable failures in identity, access control, and exposed interfaces, executed at machine speed and massive scale. It’s uncomfortable reading, the kind that makes you pause mid-paragraph and think about how many APIs are quietly holding up your own critical systems right now.
The numbers refuse to stay abstract. Out of 67,058 vulnerabilities published in 2025, Wallarm identified 11,053 as API-related, already a significant slice at roughly 16%. But when those numbers are aligned with what actually gets exploited, the picture sharpens further. An analysis of CISA’s Known Exploited Vulnerabilities list from the same period shows that 43% of KEVs were API-related, making APIs the single largest exploited surface in the dataset. This isn’t theory or future risk modeling. APIs dominate exploit reality today, and they do so consistently across industries, stacks, and maturity levels. Treating API risk as secondary to “core” infrastructure security is increasingly detached from how breaches are actually happening.
The report also draws a clean, almost unavoidable line between AI security and API security. In 2025, Wallarm tracked 2,185 AI-related vulnerabilities, and 786 of them overlapped directly with APIs. That’s 36%, a proportion that repeats almost exactly when looking at exploited vulnerabilities as well. As Wallarm Founder and CEO Ivan Novikov puts it, every AI application, agent, or workflow is mediated through an API, which means the blast radius of API mistakes grows dramatically as AI adoption accelerates. AI doesn’t create an entirely new security problem; it amplifies existing ones, especially when APIs are exposed, trusted too broadly, or insufficiently monitored.
One of the more telling shifts highlighted in the report is behavioral rather than technical. Wallarm’s API ThreatStats Top 10, ranked by observed attack volume, shows that attackers increasingly favor abuse over bugs. Logic abuse, trust failures, and resource consumption attacks now outpace classic code-level flaws in real-world frequency. Cross-Site issues moved to the top spot by attack volume in 2025, while Injections remain a reliable high-impact threat and Broken Access Control continues to enable scalable exploitation across accounts, tenants, and services. The message here is subtle but important: many successful API attacks don’t rely on clever payloads; they rely on systems behaving exactly as designed, just in ways defenders didn’t anticipate or constrain.
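To make that concrete, here is a minimal, hypothetical sketch of the broken-access-control pattern the report describes, written in Flask with invented endpoint, header, and data names that are not drawn from the report. The endpoint behaves exactly as designed for a legitimate owner; the only thing standing between it and cross-account enumeration is an explicit per-object ownership check.

```python
# Hypothetical BOLA-style illustration (Flask); names and data are invented for this sketch.
from flask import Flask, jsonify, abort, request

app = Flask(__name__)

# Stand-in data store: invoice id -> owning account
INVOICES = {
    1: {"account": "acct-a", "amount": 120.00},
    2: {"account": "acct-b", "amount": 340.00},
}

def current_account() -> str:
    # Placeholder for real authentication; assumes an upstream gateway sets this header.
    return request.headers.get("X-Account-Id", "")

@app.route("/v1/invoices/<int:invoice_id>")
def get_invoice(invoice_id: int):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # The "working as designed" failure: without this ownership check, any caller
    # can enumerate invoice IDs belonging to other accounts or tenants.
    if invoice["account"] != current_account():
        abort(403)
    return jsonify(invoice)
```

Remove that one conditional and an attacker needs no payload at all, only a loop over IDs, which is exactly the kind of abuse-over-bugs pattern the Top 10 rankings reflect.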
Agentic AI adds another layer of risk, and the report flags Model Context Protocol as an early warning signal rather than a niche curiosity. Wallarm identified 315 MCP-related vulnerabilities in 2025, accounting for 14% of all published AI vulnerabilities, with growth accelerating sharply mid-year. MCP issues surged by 270% from Q2 to Q3 and were linked to a Top 10 API breach involving thousands of exposed MCP servers. It’s a familiar pattern in a new wrapper: a fast-moving integration standard, rapid adoption, and security controls lagging behind functionality. Anyone who has lived through earlier API framework booms will recognize the rhythm, even if the acronym is new.
Perhaps the most sobering finding is how easy most of these vulnerabilities are to exploit. According to the report, 97% of API vulnerabilities can be triggered with a single request, 98% are rated easy or trivial to exploit, and 99% are remotely exploitable. In nearly six out of ten cases, no authentication is required at all. These characteristics align perfectly with automated abuse, where scale matters more than sophistication and real-time defenses matter more than post-hoc detection. Tools that only alert after exploitation has occurred are structurally mismatched to this threat model, a gap many organizations are still underestimating.
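A deny-by-default posture is the structural counter to that exposure profile. The sketch below, again Flask with placeholder token handling that is an assumption of this example rather than anything prescribed by the report, rejects unauthenticated requests before any handler runs, which is what real-time defense means against a single-request, no-auth exploit.

```python
# Minimal deny-by-default sketch (Flask); token scheme and routes are illustrative only.
from flask import Flask, request, abort

app = Flask(__name__)

PUBLIC_PATHS = {"/healthz"}        # explicit allow-list; everything else requires auth
VALID_TOKENS = {"example-token"}   # stand-in for a real token/identity check

@app.before_request
def require_auth():
    if request.path in PUBLIC_PATHS:
        return None
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    if token not in VALID_TOKENS:
        # Rejecting at request time, rather than flagging it afterwards, is the
        # difference between a blocked probe and a completed single-request exploit.
        abort(401)

@app.route("/healthz")
def healthz():
    return "ok"

@app.route("/v1/reports")
def reports():
    return {"reports": []}
```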
For CISOs, the breach analysis cuts through any remaining ambiguity. The most damaging incidents in 2025 were not driven by elite adversaries using zero-day chains. They were driven by exposed APIs, weak identity handling, and predictable trust assumptions. AI platforms and tooling accounted for 15% of API-related breaches, tying with software as the largest category in the dataset, which reinforces how closely innovation velocity and risk exposure are now coupled. Improving AI security, in practical terms, means fixing API security, and improving API security doesn’t require chasing the next shiny attack class. It requires systematically closing identity gaps, reducing exposure, and designing against abuse before automation turns familiar weaknesses into material business risk.