Three AI security incidents. Three different products. Three different vendors. All disclosed within the same two-month window. And all sharing one defining characteristic: the victim did nothing wrong.
In each case, there was no phishing email opened, no malicious attachment executed, no suspicious link clicked. The user sent an email (EchoLeak / CVE-2025-32711). The user visited a website (ClawJacked / CVE-2026-25253). The user launched their browser's AI panel (Glic Jack / CVE-2026-0628). Normal actions. Actions millions of people perform every day.
This is the defining security property of the current generation of AI agent attacks: they are zero-click. They require no user error. They exploit the AI doing its job, not the human making a mistake.
Three Incidents, One Pattern
EchoLeak (CVE-2025-32711)
An attacker sent a plain-text email to a target in an organization using Microsoft 365 Copilot. The email contained no malware, no attachments, no links. It was invisible to the human recipient. But it contained instructions for the AI: when Copilot later retrieved the email as context while summarizing the inbox — something it was designed to do — it read the injected instructions and exfiltrated internal SharePoint documents, Teams messages, and OneDrive files to an attacker-controlled server.
The victim never interacted with the email. Microsoft's own XPIA classifier did not flag it.
Glic Jack (CVE-2026-0628)
A malicious browser extension with basic permissions could inject JavaScript into Chrome's Gemini Live panel and inherit its elevated browser privileges — camera, microphone, local file access, screenshot capability. The extension requests nothing that would alarm a reviewer at install time, and it needs the user to do nothing beyond having the Gemini panel open.
The attack exploits the AI assistant's legitimate capabilities, amplifying what a basic extension can do by an order of magnitude.
ClawJacked (CVE-2026-25253)
A developer running OpenClaw visits any attacker-controlled website. JavaScript on that page silently opens a WebSocket connection to the locally running OpenClaw gateway, brute-forces the gateway password at hundreds of attempts per second without triggering any rate limit or alert, registers itself as a trusted device without user confirmation, and gains full admin control of the AI agent.
The attacker can then instruct the agent to search Slack history for API keys, read private messages, exfiltrate files, or execute arbitrary shell commands on paired systems. Full workstation compromise. Initiated from a browser tab.
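The missing control in this account is elementary authentication hardening on the local gateway. As a minimal sketch (a hypothetical guard, not OpenClaw's actual code), per-origin failed-attempt lockout alone would defeat brute-forcing at hundreds of attempts per second:

```python
import time


class AuthThrottle:
    """Lock out an origin after repeated failed auth attempts.

    Hypothetical illustration of the rate limiting the gateway lacked:
    after `max_failures` bad passwords from one origin, further attempts
    are refused for `lockout_seconds`.
    """

    def __init__(self, max_failures=5, lockout_seconds=300):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # origin -> count of consecutive failures
        self.locked = {}    # origin -> monotonic time when lockout expires

    def allow_attempt(self, origin, now=None):
        now = time.monotonic() if now is None else now
        unlock = self.locked.get(origin)
        if unlock is not None:
            if now < unlock:
                return False            # still locked out
            del self.locked[origin]     # lockout expired; reset counter
            self.failures.pop(origin, None)
        return True

    def record_failure(self, origin, now=None):
        now = time.monotonic() if now is None else now
        count = self.failures.get(origin, 0) + 1
        self.failures[origin] = count
        if count >= self.max_failures:
            self.locked[origin] = now + self.lockout_seconds

    def record_success(self, origin):
        self.failures.pop(origin, None)
```

With a five-attempt lockout, the effective guessing rate drops from hundreds per second to roughly one password per lockout window — and repeated lockouts from a browser origin are themselves a high-signal alert.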
The victim in each case performed a completely ordinary action. The AI performed a completely ordinary action. And sensitive data was exfiltrated, or full system access was achieved, without a single anomalous user behavior to detect.
Why Conventional Security Monitoring Fails Against This Class of Attack
Conventional security monitoring is designed around the detection of anomalous behavior. An employee downloading unusual volumes of files. An authentication event from an unrecognized IP address. An executable running from a temporary directory. These signals work because they represent deviations from normal patterns.
Zero-click AI agent attacks produce no anomalous behavior at the conventional security monitoring layer. In each of the three incidents above, every infrastructure metric looked normal throughout the attack:
- EchoLeak: Copilot retrieved email context (normal), generated a response (normal), the response contained a link (normal). No alert triggered. No file access anomaly. No suspicious network connection — the exfiltration path used SharePoint and Teams domains on Microsoft's own allowlist.
- Glic Jack: Chrome ran with normal CPU and memory usage. The extension's network traffic was within normal parameters. The Gemini panel opened normally. Camera or microphone activation appeared as a legitimate Gemini operation.
- ClawJacked: OpenClaw operated normally. The WebSocket connection from the browser was a localhost connection — the kind the gateway was designed to accept. The authentication succeeded. The device registration was automatic. Every event in the log looked like normal OpenClaw operation.
The signal that something was wrong in each case was behavioral: the AI was producing outputs or taking actions inconsistent with the user's intent. That signal only exists at the AI behavioral layer. It is invisible to conventional security tooling.
The Monitoring Gap — And How to Close It
The gap between what conventional security monitoring can detect and what AI agent attacks actually look like is structural. It will not close by adding more of the same monitoring. It requires a new layer: behavioral monitoring at the AI layer itself.
Behavioral monitoring for AI agents means establishing baselines for what the agent normally does — what it retrieves, what it outputs, what tools it calls, what data it accesses — and flagging deviations from that baseline as potential indicators of compromise. It means monitoring AI outputs for content anomalies: unexpected URLs, data formats consistent with exfiltration, instruction-following patterns inconsistent with the user's task, tool calls that do not map to any plausible interpretation of the user's request.
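As a toy illustration of the baseline idea (a hypothetical event schema, not any product's API), a monitor can learn which tools an agent normally calls and which domains its outputs normally reference, then flag deviations:

```python
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")


class AgentBaseline:
    """Learn an agent's normal tool calls and output domains;
    flag deviations as potential indicators of compromise.

    Hypothetical sketch: real systems would also baseline data
    volumes, argument patterns, and retrieval sources.
    """

    def __init__(self):
        self.tool_counts = Counter()
        self.known_domains = set()

    def observe(self, tool_name, output_text=""):
        """Record one benign event during the baselining period."""
        self.tool_counts[tool_name] += 1
        for url in URL_RE.findall(output_text):
            self.known_domains.add(urlparse(url).hostname)

    def check(self, tool_name, output_text=""):
        """Return a list of alert strings for a new event."""
        alerts = []
        if self.tool_counts[tool_name] == 0:
            alerts.append(f"never-before-seen tool call: {tool_name}")
        for url in URL_RE.findall(output_text):
            host = urlparse(url).hostname
            if host not in self.known_domains:
                alerts.append(f"output references unknown domain: {host}")
        return alerts
```

Against an EchoLeak-style exfiltration, the tell is not the tool call (retrieval is normal) but the output content: a generated link pointing at a domain the agent has never referenced before.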
It also means monitoring the trust interfaces around AI agents: the interfaces between model outputs and privileged system operations, the authentication mechanisms protecting local gateways, the code paths through which external content reaches model context. These are the surfaces that all three vulnerabilities exploited. Static analysis at these interfaces — before deployment — is where the vulnerabilities should have been caught.
- For MS-Agent (CVE-2026-2256, CVSS 9.8), a related agent-framework vulnerability disclosed in the same window — the shell injection lives in the code path between model output and OS execution. Code scanning that specifically analyzes this interface would surface this class of flaw before it reaches production.
- For ClawJacked — the failure was in the trust boundary design around the AI gateway. Security review of the gateway architecture before deployment would have identified the WebSocket cross-origin trust model as a vulnerability.
- For Glic Jack — the failure was a missing entry on a browser blocklist. Runtime monitoring that tracks what privileged operations an AI component is performing would have detected the injection.
The Governance Imperative
A theme running through all three incidents is governance — specifically, the absence of it.
Microsoft 365 Copilot had access to the organization's entire M365 environment — email, SharePoint, Teams, OneDrive — because that access is what makes it useful. EchoLeak exploited that access. The question of whether Copilot should have access to all of that data, or whether least-privilege principles should constrain what it can retrieve and share, is a governance question that most organizations have not answered.
OpenClaw, by the time of the ClawJacked disclosure, had become shadow AI at scale — a developer-adopted tool running on thousands of enterprise machines outside IT visibility, with access to shell execution, messaging platforms, and local credentials.
AI agents are a new class of identity in organizations. They authenticate, hold credentials, and take autonomous actions with the same or greater capability than a human user. They need to be governed with the same rigor as human users and service accounts — inventoried, least-privileged, monitored, and subject to the same incident response procedures as any other compromised account.
The three incidents of the past two months are not anomalies. They are previews. As AI agents become more capable, more deeply integrated into enterprise systems, and more widely adopted outside formal IT governance processes, the attack surface they represent grows. The organizations that understand this now — and build the evaluation, monitoring, and governance infrastructure to address it — are the ones that will not be writing incident reports when the next zero-click attack lands.
References
- CVE-2025-32711 (EchoLeak) — Aim Security, EchoLeak Vulnerability Found in Microsoft 365 Copilot, June 2025
- CVE-2026-0628 (Glic Jack) — Palo Alto Networks Unit 42, March 2026
- CVE-2026-25253 (ClawJacked) — Oasis Security, ClawJacked: OpenClaw Vulnerability Enables Full Agent Takeover, February 26, 2026
- CVE-2026-2256 — SecurityWeek, Vulnerability in MS-Agent AI Framework Can Allow Full System Compromise, March 2026
- AptaSentry Products — Runtime Monitoring & Code Scanning