
Attackers do not rely on a single silver bullet. They move incrementally. They probe. They chain together small weaknesses that, in isolation, look harmless. The problem is not a lack of data. It is a lack of context.

Signal vs. noise
A single vulnerability, a single anomalous login, or a single configuration drift rarely explains real risk. Even mature vulnerability frameworks, with precise CVE classification and continuous scanning, only describe theoretical exposure. They do not reveal which weaknesses are actually exploitable in a given environment. The gap between known vulnerability and real-world risk has widened beyond what human intuition alone can manage.
Risk becomes legible only when events are evaluated together.
This is the premise behind “toxic combinations.” Instead of treating alerts as isolated incidents, toxic combinations calculate cumulative risk. A low-severity flaw paired with unusual identity behavior and unexpected network movement may indicate far more than any one signal alone. Context turns fragments into patterns.
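To make "toxic combinations" concrete, here is a minimal sketch of cumulative scoring in Python. Everything in it is an illustrative assumption: the signal names, severity values, and synergy multiplier are placeholders, not any vendor's actual model.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    severity: float  # 0.0-1.0, as scored by the emitting tool

def toxic_score(signals: list[Signal], synergy: float = 1.5) -> float:
    """Cumulative risk for co-occurring signals on one identity.

    A plain sum would treat signals as independent; multiplying by a
    synergy factor per additional signal models the idea that chained
    weaknesses are worse than the sum of their parts.
    """
    if not signals:
        return 0.0
    base = sum(s.severity for s in signals)
    return base * synergy ** (len(signals) - 1)

# Each finding is routine on its own; together they cross a threshold.
alerts = [
    Signal("low-severity-flaw", 0.2),
    Signal("unusual-identity-behavior", 0.3),
    Signal("unexpected-network-movement", 0.3),
]
print(toxic_score(alerts))  # 0.8 * 1.5**2 = 1.8
```

The exact formula matters less than its shape: scores compound as signals co-occur, so a cluster of low-severity findings can outrank a single high-severity alert.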
Context at the source
Consider a seemingly benign example: a researcher accessing servers outside their normal region. That alone may be acceptable. But if that same identity is running an outdated dependency, using weak cryptography, and operating with stale credentials, the cumulative profile changes. Individually, these are routine findings. In combination, they suggest exposure.
The same logic applies to account lifecycle events. Provisioning a new user is standard. If that user accesses the most sensitive files within minutes of creation, the context changes again. For organizations protecting AI models, intellectual property, or regulated data, these combinations matter more than any single alert.
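The new-account example amounts to a simple temporal correlation. A rough sketch, assuming a hypothetical 30-minute window and made-up field names:

```python
from datetime import datetime, timedelta

# The window is an arbitrary illustration; real tuning would come
# from observed provisioning patterns.
NEW_ACCOUNT_WINDOW = timedelta(minutes=30)

def suspicious_new_account(provisioned_at: datetime,
                           first_sensitive_access: datetime) -> bool:
    """Provisioning alone is routine, and sensitive access alone may
    be too; flag only the combination of a fresh account touching
    sensitive data almost immediately."""
    return first_sensitive_access - provisioned_at <= NEW_ACCOUNT_WINDOW

created = datetime(2024, 5, 1, 9, 0)
accessed = datetime(2024, 5, 1, 9, 4)
print(suspicious_new_account(created, accessed))  # True
```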
The question then becomes how to generate and evaluate that context efficiently.
Fewer, better alerts
Historically, security tools have treated the operating system kernel as a boundary. Events are collected, exported, and analyzed downstream. That model assumes meaning is derived later, in centralized systems such as a SIEM.
Technologies such as eBPF shift that assumption. By allowing safe programs to run inside the kernel, teams can observe system calls, network flows, and file operations in real time and correlate them at the source. Instead of shipping raw events for later interpretation, context can be built closer to where behavior occurs.
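For a flavor of what correlating at the source looks like, here is a small sketch using the BCC Python bindings for eBPF (github.com/iovisor/bcc). It counts openat() syscalls per process in an in-kernel map and exports only that compact summary; the choice of syscall and the five-second window are arbitrary, and it needs root plus a BCC installation to run.

```python
from bcc import BPF
import time

# In-kernel program: count openat() calls per PID in a BPF hash map,
# so raw events never leave the kernel.
prog = r"""
BPF_HASH(counts, u32, u64);

int trace_openat(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("openat"), fn_name="trace_openat")

time.sleep(5)  # observe for a few seconds

# Only the aggregated map crosses the kernel boundary, not each event.
for pid, count in b["counts"].items():
    print(f"pid={pid.value} openat_calls={count.value}")
```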
The technical detail is less important than the architectural shift. Insight is generated earlier. Correlation happens before data leaves the workload. Signals arrive pre-shaped, not as disconnected fragments.
This has practical consequences.
First, it reduces noise and operational cost. Aggregating repeat events into compact records lowers storage and processing overhead. More importantly, it limits alert fatigue. Teams stop responding to isolated anomalies and start responding to contextualized risk.
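A sketch of that aggregation, with hypothetical event tuples: repeated occurrences of the same (identity, event type) pair collapse into one record carrying a count and a first/last-seen window.

```python
from datetime import datetime

def aggregate(events):
    """Collapse repeat events into compact records instead of
    shipping every occurrence downstream."""
    summary = {}
    for ts, identity, etype in events:
        rec = summary.setdefault(
            (identity, etype),
            {"count": 0, "first_seen": ts, "last_seen": ts})
        rec["count"] += 1
        rec["first_seen"] = min(rec["first_seen"], ts)
        rec["last_seen"] = max(rec["last_seen"], ts)
    return summary

# Fifty identical reads become a single record.
raw = [(datetime(2024, 5, 1, 9, 0, s), "svc-backup", "file_read")
       for s in range(50)]
print(aggregate(raw)[("svc-backup", "file_read")]["count"])  # 50
```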
Second, it enables feedback loops. When certain combinations consistently correlate with real incidents, policies can be tuned. Detection thresholds become grounded in observed exploit paths, not abstract rule sets. Security moves from reactive alarm management toward a clearer mapping of actual attack surface.
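One way to picture that feedback loop, with made-up numbers: adjust the alert threshold from triage outcomes, loosening it when confirmed incidents scored below the bar and tightening it when false positives scored above it.

```python
def tune_threshold(threshold: float,
                   triage: list[tuple[float, bool]],
                   step: float = 0.02) -> float:
    """triage: (combination_score, confirmed_incident) pairs fed
    back from analysts after investigation."""
    for score, confirmed in triage:
        if confirmed and score < threshold:
            threshold -= step   # missed a real exploit path: loosen
        elif not confirmed and score >= threshold:
            threshold += step   # false positive: tighten
    return threshold

# Two confirmed incidents under the bar pull it down; one false
# positive pushes it back up.
print(tune_threshold(1.0, [(0.9, True), (0.95, True), (1.2, False)]))
```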
In complex, AI-driven environments where workloads are ephemeral and identities are fluid, legibility is the real control. Volume alone does not create security. Context does.
Security observability is evolving beyond raw telemetry. The next phase is about understanding how low-level events relate to one another across systems and identities. Toxic combinations are one way to operationalize that shift.
The goal is not more alerts. It is fewer, better ones: signals that reflect how attackers actually operate.
About the essayist: Jeremy Colvin is a senior engineer at Isovalent.

