2025 API ThreatStats Report: AI Vulnerabilities Surge 1,025%, 99% Connected to APIs

Wallarm’s 2025 API ThreatStats Report uncovers a dramatic 1,025% rise in AI-centric security flaws over the past year. Researchers cataloged 439 AI-related CVEs in 2024, and nearly every one—99%—traced back to insecure APIs. These include injection flaws, misconfigurations, and a sharp uptick in memory corruption exploits tied to AI’s reliance on high-performance binary endpoints.

AI technologies have exploded across industries, but the APIs that power AI models often lack robust security. Over 57% of AI-enabled APIs are publicly exposed, while only 11% employ strong authentication and access controls. Attackers exploit these weak points to inject malicious code, siphon training data, or even manipulate machine learning pipelines. Wallarm's researchers have seen these tactics succeed in major breaches, such as those targeting Twilio and Tech in Asia, where attackers bypassed insufficient API protections to gain unauthorized access.
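To make the authentication gap concrete, here is a minimal sketch in C of the kind of gate the report finds missing from most AI-enabled APIs: the handler refuses any request that does not present a valid key before touching model or data logic. The function names, the shared-key approach, and the handler shape are illustrative assumptions, not code from the report or from any named vendor.

```c
/* Minimal sketch (hypothetical names and handler shape):
 * authenticate at the API boundary before any model or data access. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Constant-time comparison so response timing does not leak key contents. */
static bool keys_equal(const char *supplied, const char *expected) {
    size_t sl = strlen(supplied);
    size_t el = strlen(expected);
    unsigned char diff = (unsigned char)(sl ^ el);
    for (size_t i = 0; i < sl && i < el; i++)
        diff |= (unsigned char)(supplied[i] ^ expected[i]);
    return diff == 0;
}

/* Hypothetical inference handler: reject unauthenticated callers first. */
bool handle_inference_request(const char *api_key, const char *expected_key) {
    if (api_key == NULL || !keys_equal(api_key, expected_key))
        return false;  /* would map to an HTTP 401 in a real service */
    /* ... authorized work (model inference, data access) goes here ... */
    return true;
}
```

A production deployment would more likely rely on an API gateway, OAuth, or mutual TLS rather than a shared key, but the ordering is the point either way: authenticate and authorize before the AI workload runs.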

A standout finding is the new “Memory Corruption & Overflows” category in the Top-10 threat list. AI workloads push hardware boundaries, triggering buffer overflows and integer overflows that let attackers execute arbitrary code or crash systems. This kind of flaw used to be rare in web applications but has surged as binary APIs have become standard in high-performance AI contexts. Malicious actors quickly seize these opportunities, using them to exfiltrate data or take over critical infrastructure.
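The pattern behind many of these flaws is an unchecked size calculation on attacker-controlled length fields. The sketch below uses a hypothetical binary payload parser (the names and message format are assumptions, not drawn from any CVE in the report) to show how a 32-bit multiplication can wrap, producing an undersized allocation that a later copy overruns, and how a pre-allocation check closes the hole.

```c
/* Sketch of the integer-overflow-to-buffer-overflow pattern behind the
 * "Memory Corruption & Overflows" category. The "tensor payload" format
 * is hypothetical, not a specific vulnerability from the report. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* VULNERABLE: with attacker-chosen count and elem_size, the 32-bit product
 * can wrap, so malloc receives a tiny size while the copy (performed in
 * 64-bit arithmetic on typical platforms) writes far past the end of buf. */
void *parse_tensor_payload_bad(const uint8_t *payload,
                               uint32_t count, uint32_t elem_size) {
    uint32_t total = count * elem_size;              /* silent 32-bit wrap */
    void *buf = malloc(total);
    if (buf == NULL) return NULL;
    memcpy(buf, payload, (size_t)count * elem_size); /* overruns buf */
    return buf;
}

/* SAFER: validate the multiplication before allocating, and size the copy
 * from the same checked value. */
void *parse_tensor_payload_safe(const uint8_t *payload,
                                uint32_t count, uint32_t elem_size) {
    if (elem_size == 0 || count > UINT32_MAX / elem_size)
        return NULL;                                 /* reject wrapping sizes */
    size_t total = (size_t)count * elem_size;
    void *buf = malloc(total);
    if (buf == NULL) return NULL;
    memcpy(buf, payload, total);
    return buf;
}
```

Memory-safe languages and compiler hardening (undefined-behavior sanitizers, fortified copies) catch this class of bug earlier in the pipeline, which is in the spirit of the report's call for stronger memory-safety checks.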

API issues are now the number one attack vector, eclipsing older exploit types like kernel or supply-chain vulnerabilities. More than half of CISA’s known exploited flaws involve APIs, underscoring attackers’ shift toward direct, internet-facing entry points. Legacy endpoints, such as .php files and AJAX calls, add another layer of exposure because they often remain unpatched in production environments, from healthcare providers to government agencies.

Wallarm’s analysis covers 99% of 2024’s API-related CVEs and bug bounty disclosures, classifying them by CWE categories to produce actionable insights. Security teams can use these findings to prioritize fixes, especially for APIs supporting AI services. Strong memory-safety checks, real-time threat monitoring, and tightened authentication should become the norm.

Organizations that embrace AI must address API security head-on. Failure to do so risks data theft, operational chaos, and damaged reputations. As AI reshapes core business operations—from predictive modeling to customer engagement—protecting the APIs behind these systems is no longer optional.

Download the report:
https://www.wallarm.com/resources/2025-api-threatstats-report-ai-security-at-raise