AI agents now run 42% of cyberattacks by impersonating your boss


Last Tuesday, a finance director at a mid-size logistics company received a Slack message from her CEO asking her to approve a wire transfer for an overdue vendor payment. The tone was casual, slightly impatient, exactly the way he always wrote. She approved it in under two minutes. The CEO never sent that message. An AI agent did.

This is not a hypothetical scenario from a cybersecurity conference. According to the World Economic Forum's Global Cybersecurity Outlook 2026, 94% of security leaders now identify AI as the single most significant driver of change in cybersecurity, and 87% flag AI vulnerabilities as the fastest-growing risk. Agentic phishing (where autonomous AI bots conduct real-time, grammatically flawless conversations while impersonating someone you trust) is now reshaping how breaches actually happen.

How agentic phishing actually works

Traditional phishing relied on volume: blast thousands of badly written emails, hope someone clicks. Agentic phishing inverts the model. These AI agents scrape LinkedIn profiles, past email threads, even calendar invites to build a behavioral fingerprint of the person they are impersonating. Then they initiate multi-turn dialogues that adapt in real time.

IBM security researchers found that an AI can generate a phishing campaign matching human expert quality in five minutes and five prompts. A human social engineer needs 16 hours for the same result. That 192x speed advantage means attackers can now run personalized impersonation campaigns against thousands of targets simultaneously.

The result: Harvard researchers found that 60% of recipients fall for AI-generated phishing emails, matching the success rate of experienced human operators. AI-generated phishing campaigns have surged 1,265%, according to SentinelOne.

The password reuse exploit chain nobody discusses

Here is where most coverage stops. Agentic phishing is devastating on its own, but it becomes nearly unstoppable when paired with one stubborn habit: password reuse, which still affects 94% of passwords in circulation.

Only 6% of passwords in circulation are unique. The remaining 94% are reused across an average of five to seven services. When an AI agent phishes one credential, automated tools test that same password against every service linked to your identity within seconds.

This is why stolen credentials now cause the majority of breaches. Credential stuffing alone accounted for 22% of all data breaches in 2024-2025, the single most common breach vector. Infostealer malware lifted 548 million passwords and 17 billion session cookies from infected devices in a single year.
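Reused passwords can be checked against breach corpora without the password itself ever leaving the device. A defensive sketch of the k-anonymity range query that breach-checking services such as Have I Been Pwned use (the endpoint is HIBP's real range API; the helper function name is ours):

```python
# Sketch of the k-anonymity scheme behind Have I Been Pwned's Pwned
# Passwords range API: only the first 5 hex characters of the SHA-1
# hash ever leave the device, so the service never sees the password.
import hashlib

def hibp_range_query_parts(password: str) -> tuple[str, str]:
    """Split the password's SHA-1 digest into the 5-char prefix that is
    sent to the API and the suffix that is matched locally against the
    returned list of breached-hash suffixes."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("password")
# The client would GET https://api.pwnedpasswords.com/range/<prefix>
# and scan the response for `suffix` to learn how often it was breached.
print(prefix)  # -> 5BAA6 (this famously breached password's hash prefix)
```

Because the server only ever sees a 5-character prefix shared by hundreds of hashes, it learns nothing about which password was checked.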

Why your MFA is not the shield you think it is

Modern AI-powered attacks outrun security teams using real-time session hijacking: the AI agent phishes your credentials, triggers the MFA prompt, and captures the session token the moment you authenticate. Your one-time code worked perfectly; it just authenticated the attacker.

The average infected device yields 44 stolen passwords and 1,861 session cookies. Each cookie is a direct MFA bypass, letting attackers walk into your accounts as if they were you.
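A stolen cookie is only a "direct MFA bypass" if the server accepts it from any device. One mitigation is to bind the session token to client attributes at issuance and re-verify that binding on every request. A minimal sketch, assuming a simplified client fingerprint and in-memory key handling:

```python
# Sketch: bind a session token to a client fingerprint with an HMAC,
# so a cookie lifted by infostealer malware fails when replayed from
# the attacker's machine. Fingerprint inputs and key management are
# deliberately simplified here.
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # in production: a managed, rotated secret

def issue_token(session_id: str, client_fingerprint: str) -> str:
    """Tie the session id to a fingerprint of the client (e.g. a hash
    of TLS and device properties) at login time."""
    mac = hmac.new(SERVER_KEY, f"{session_id}|{client_fingerprint}".encode(),
                   hashlib.sha256)
    return f"{session_id}.{mac.hexdigest()}"

def verify_token(token: str, client_fingerprint: str) -> bool:
    """Recompute the binding on every request; a mismatched device fails."""
    session_id, _, presented_mac = token.partition(".")
    expected = hmac.new(SERVER_KEY, f"{session_id}|{client_fingerprint}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(presented_mac, expected)

cookie = issue_token("sess-42", "victim-laptop-fp")
assert verify_token(cookie, "victim-laptop-fp")      # legitimate device
assert not verify_token(cookie, "attacker-box-fp")   # replayed cookie fails
```

Real deployments get the same effect more robustly via mechanisms like Device Bound Session Credentials or mutual TLS, but the principle is the same: the cookie alone should not be the whole session.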

What actually works against AI impersonation

The password-plus-MFA model was designed for human attackers operating at human speed. Against AI agents running thousands of simultaneous impersonation campaigns, it collapses. Three defenses shift the odds:

Hardware-bound passkeys. Passkeys cannot be phished, reused, or intercepted. They are cryptographically bound to your device and to the specific website. An AI agent impersonating your CEO cannot trick you into handing over a credential that physically cannot leave your hardware. And they work: passkey sign-ins already succeed about 98% of the time, while each one is tied to a single site, eliminating the reuse problem that afflicts 94% of passwords.
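The phishing resistance comes from scoping: a WebAuthn passkey is registered to a specific relying-party ID, and the authenticator will only offer it on an exactly matching origin. A toy illustration of that lookup rule (the names and store are illustrative, not the real WebAuthn API):

```python
# Toy model of WebAuthn relying-party scoping: credentials are keyed by
# the rpId they were created for, so a lookalike phishing domain simply
# has no credential to request. Store and names are illustrative only.
credential_store = {
    ("acme-corp.com", "finance-director"): "pubkey-credential-123",
}

def credentials_for(origin_rp_id: str, user: str):
    """The authenticator only surfaces credentials whose rpId exactly
    matches the requesting origin; there is nothing to type or leak."""
    return credential_store.get((origin_rp_id, user))

assert credentials_for("acme-corp.com", "finance-director") is not None
# A phishing page on a lookalike domain finds no credential at all:
assert credentials_for("acme-c0rp.com", "finance-director") is None
```

The user never makes a judgment call about whether the domain looks right; the exact-match rule makes that call for them.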

Out-of-band verification. When your boss sends an urgent request, call them on their personal phone, not through the same platform the request arrived on. The three seconds this takes could save millions, though voice-cloning scams (a fraud category estimated at $40 billion) are making even phone verification harder.

Zero-trust architecture. Every access request should be verified continuously, not just at login. This is the only framework designed for a world where the attacker sounds exactly like your colleague.
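In practice, "verified continuously" means every request passes through a policy gate, not just the login handshake. A minimal sketch of such a per-request check, where the signals and the step-up rule are illustrative assumptions rather than any particular vendor's policy engine:

```python
# Sketch of continuous, per-request authorization: trust signals are
# re-evaluated on every call, and sensitive actions from an unusual
# context trigger fresh re-authentication instead of a silent allow.
from dataclasses import dataclass

@dataclass
class RequestContext:
    token_valid: bool        # cryptographic session check passed
    device_compliant: bool   # posture: managed, patched, disk-encrypted
    known_location: bool     # matches the user's recent access pattern
    sensitive_action: bool   # e.g. approving a wire transfer

def authorize(ctx: RequestContext) -> str:
    """Evaluate trust on every request, not only at login."""
    if not (ctx.token_valid and ctx.device_compliant):
        return "deny"
    if ctx.sensitive_action and not ctx.known_location:
        return "step-up"  # demand fresh, phishing-resistant re-auth
    return "allow"

assert authorize(RequestContext(True, True, True, True)) == "allow"
assert authorize(RequestContext(True, True, False, True)) == "step-up"
assert authorize(RequestContext(True, False, True, False)) == "deny"
```

Under this model, the wire-transfer approval in the opening anecdote would have hit a step-up challenge regardless of how convincing the Slack message sounded.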

The AI agents running today's phishing campaigns will only improve at sounding like the people you trust. The question is whether you will keep relying on the same credentials they were designed to steal.


Sources and References

  1. World Economic Forum: 94% of security leaders identify AI as the most significant driver of cybersecurity change in 2026, and 87% flag AI vulnerabilities as the fastest-growing risk.
  2. StrongestLayer / IBM / Harvard: 60% of recipients fall for AI-generated phishing emails; IBM found AI creates phishing campaigns in 5 minutes vs. 16 hours for humans; 1,265% surge in AI phishing (SentinelOne).
  3. DeepStrike / Industry Research: only 6% of passwords are unique; 94% are reused across 5-7 services; credential stuffing accounts for 22% of all breaches; infostealer malware lifted 548M passwords in one year.
