
The core risk is not classic remote code execution, but an indirect prompt-injection style attack: untrusted input is written into an artifact (logs) that may later be treated as a trusted troubleshooting context by an AI agent.
This matters because OpenClaw’s “agent gateway” model can expose a powerful control plane, and security failures often arise when “untrusted” data sources collide with privileged automation.jfrog+1
According to the advisory published by Eye Security, OpenClaw logged certain WebSocket request headers, including Origin and User-Agent, without adequate sanitization in affected versions.
If an attacker can reach an OpenClaw gateway interface, they could send crafted header values that end up embedded verbatim in log lines, creating a “poisoned” log trail that persists beyond the initial connection attempt.
The practical impact depends on how logs are consumed downstream, especially in workflows where operators ask the agent to diagnose errors, and the agent pulls recent logs into its reasoning context.
In that situation, injected content could be misinterpreted as operator instructions, trusted system messages, or structured records, potentially steering troubleshooting steps, influencing decisions, or manipulating how the agent summarizes events.
Simply searching OpenClaw’s default port (18789) on Shodan reveals thousands of exposed instances accessible over the internet, highlighting a large and growing attack surface for opportunistic probing.
Even if exploitation is “context-dependent,” log poisoning is attractive because it can be performed cheaply and repeatedly, and it targets the AI layer’s interpretation rather than a single memory-corruption bug.
Mitigations
OpenClaw addressed the issue in version 2026.2.13, and the advisory explicitly states that versions prior to 2026.2.13 are affected. Teams running OpenClaw should prioritize upgrading to 2026.2.13 (or later), then review gateway exposure to ensure the service is not reachable from the public internet without strong access controls.
Defenders should also treat agent-consumable logs as an untrusted input channel and apply standard hardening patterns: sanitize or redact user-controlled header fields before logging; cap header sizes to reduce payload room; and separate “human debugging logs” from “agent reasoning inputs” so the model does not ingest raw, attacker-influenced telemetry by default.
Where possible, implement monitoring for unusual header patterns and spikes in failed WebSocket connections, since these can be early indicators of attempted poisoning.
Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.
The post Critical “Log Poisoning” Vulnerability in OpenClaw AI Agent Allows Malicious Content Injection appeared first on Cyber Security News.
Discover more from RSS Feeds Cloud
Subscribe to get the latest posts sent to your email.
