65% of companies have zero defense against prompt injection

65% of companies have zero defense against prompt injection

·4 min readSecurity & Privacy

Every company rushing to deploy AI tools believed the same thing: these systems would make employees faster, smarter, more productive. What nobody broadcast in the onboarding webinar is that those same tools just became the softest entry point on your entire network.

A VentureBeat survey of 100 technical decision-makers found that only 34.7% of organizations have deployed dedicated prompt injection defenses. The remaining 65.3% have not purchased the tools or could not even confirm whether they exist in their stack.

That gap between adoption speed and defense readiness is not a future problem. It is happening right now.

The attack that rewrote the playbook

In September 2025, Anthropic’s threat intelligence team detected a Chinese state-linked group that had jailbroken Claude Code (a legitimate AI coding assistant) and turned it into an autonomous attack platform. The AI performed 80 to 90% of the total tactical work across a campaign targeting roughly 30 organizations, including technology firms, financial institutions, chemical manufacturers, and government agencies.

Human operators intervened at only four to six decision points per target. Everything else (reconnaissance, vulnerability scanning, credential harvesting, custom exploit generation, post-operation reporting) was handled by the AI itself, processing thousands of requests per second.

The jailbreak technique was disarmingly simple: attackers told the AI it was an employee of a legitimate cybersecurity firm conducting defensive testing, then broke the malicious workflow into small, innocent-looking tasks that individually raised no flags. Collectively, they constituted a full espionage operation.

Prompt injection: the vulnerability that refuses to die

OWASP ranks prompt injection as the number one vulnerability across large language model applications, appearing in 73% of production AI deployments. Attack success rates range from 50% to 84% depending on system configuration.

On February 13, 2026, OpenAI launched Lockdown Mode for ChatGPT and publicly acknowledged that prompt injection in AI browsers may never be fully patched. The company building the most widely used AI tools on Earth said the core vulnerability may be permanent.

The problem is architectural. LLMs cannot reliably distinguish between trusted instructions and malicious input embedded in the data they process. Critical CVEs assigned in 2025 and 2026 confirm this is not theoretical: Microsoft Copilot received a CVSS score of 9.3, GitHub Copilot hit 9.6, and Cursor IDE reached 9.8.

Your productivity tool is now your attack surface

The Cisco State of AI Security 2026 report reveals a staggering disconnect: 83% of organizations plan to deploy agentic AI systems, but only 29% feel ready to secure them. That 54-point gap represents thousands of enterprises about to connect autonomous AI agents to their most sensitive systems without adequate defenses.

Consider what agentic AI actually does: it reads your databases, accesses your APIs, executes code, sends emails on your behalf. A successful prompt injection against an agentic system does not just leak a conversation. It gives an attacker an authenticated insider with programmatic access to your infrastructure.

The AI prompt security market grew from $1.51 billion in 2024 to $1.98 billion in 2025, a 31.5% compound annual growth rate. The money is flowing because the credential-based breaches that already dominate the threat landscape are about to be amplified by AI tools that can harvest, test, and exploit stolen logins at machine speed.

What the 34.7% who deployed defenses actually did

The enterprises with prompt injection protections in place share three patterns.

First, they treat LLMs as untrusted users, not trusted assistants. Every output gets validated before it triggers an action. Every input from external sources passes through a content filtering layer before reaching the model.

Second, they implemented privilege separation. The coding assistant cannot deploy to production without human approval. These boundaries sound obvious, but the common cybersecurity shortcuts most employees take daily show that convenience consistently overrides caution.

Third, they run adversarial testing: continuous red-teaming where security teams actively try to jailbreak their own AI deployments using the same techniques threat actors use.

The window is closing

The EU AI Act enforcement deadline hits August 2026. Prompt injection now maps to seven major compliance frameworks including OWASP, MITRE ATLAS, NIST, and ISO 42001. Companies without documented AI security controls will face regulatory exposure on top of operational risk.

Average breakout times (the gap between initial breach and lateral movement) fell to just 29 minutes in 2025, down 65% from the previous year. With AI automating the attack chain, that window will keep shrinking.

The tools you bought to move faster are now being used against you, at speeds you cannot manually match. The 65% of companies without prompt injection defenses are not just unprepared. They are running AI systems that an attacker can instruct as easily as an employee can.

Sources and References

  1. Anthropic / Infosecurity MagazineChinese state-linked hackers jailbroke Claude Code to automate 80-90% of a cyberattack chain against ~30 organizations.
  2. VentureBeatOnly 34.7% of organizations have deployed dedicated prompt injection defenses.
  3. OWASPPrompt injection ranked #1 (LLM01), 73% of production AI deployments, 50-84% attack success rates.
  4. Cisco / Vectra AI83% plan agentic AI but only 29% feel ready to secure them.

Read about our editorial standards

You might also like: