
AI has become a must-have for cybersecurity teams, and not just because vendors are adding AI to everything. It enables deeper analytics on an ever-increasing volume of alerts. In doing so, it reduces the risk that the signals of an attack will get lost in the noise. It also allows analysts to ask more complex questions of the data in order to identify new and previously unknown attacks. But there is some concern over how AI is being sold and positioned. For a decade, we’ve heard security vendors talk about how machine learning and AI will replace the SOC level 1 analyst. But does that make sense? Can it really replace a level 1 analyst, or is it just a tool? What can it offer, and how do you make the most of it?
With that in mind, Enterprise Times editor Ian Murphy talked with Brett Candon, VP International of Dropzone AI. The company says it is delivering the world’s first AI SOC analyst, so it seemed a good time to talk with Candon to see what they were doing.
Are we really going to replace level one SOC analysts?
Security teams are always undermanned, partly due to a skills shortage and partly due to the cost of training and retaining SOC analysts. Candon says that no customer is telling him they are overstaffed. Instead, they are understaffed and want help. That help has to come from tools, and this is where Dropzone is positioning itself.
Candon sees Dropzone as an AI analyst working alongside existing staff. He said, “We’re not a SIEM, we’re not a detection engine, we’re not a SOAR. You don’t write playbooks. You don’t tell us if this happens, then do this.
“We are a new member of your team. An autonomous AI SOC analyst that is there to do investigations that the humans are in full control of.”
The control of the investigations is important. SOC analysts learn and build their critical thinking skills by doing triage. It gives them an understanding of the alerts that occur and how they reveal the way an attack unfolds.
However, triage is time-consuming, and the number of alerts that SOC analysts are dealing with is leading to burnout. As Candon commented, it’s leaving organisations exposed because alerts are not getting investigated. It’s also leading to a high turnover of staff for many organisations.
It should come as no surprise, therefore, that customers are telling Candon, “We want to get rid of manual investigations completely. We want to eradicate that whole level one triage piece.”
This approach will lead to a reduction in critical thinking and knowledge over time. How this will manifest itself remains to be seen. One solution may be to use the Dropzone explainability feature. Customers should ensure there is a human-in-the-loop approach that includes the SOC analyst reviewing how the tool reached its conclusions. That will improve training and knowledge.
How is Dropzone helping the SOC?
The key here is the ability to analyse alerts and triggers. Candon says the company uses an internal methodology called OSCAR. It stands for:
- O – Obtain/Observe: Ingest the alert (e.g., from Microsoft Defender, SIEM, case queue).
- S – Strategize/Plan: Extract key fields, understand type, map to tactics and techniques.
- C – Collect: Gather evidence from systems, CTI sources, historical observations and any related context memory.
- A – Analyse (via recursive questioning): Generate multiple parallel investigation threads.
- R – Report/Recommend/Respond: Produce a conclusion with evidence, narrative, and suggested actions, and if enabled, it can assist with response.
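The OSCAR flow above can be sketched in code. This is a minimal, hypothetical illustration of the five phases; the class, function and field names are illustrative assumptions, not Dropzone's actual API.

```python
# Hypothetical sketch of the OSCAR phases described above.
# All names are illustrative, not Dropzone's real implementation.
from dataclasses import dataclass, field


@dataclass
class Investigation:
    alert: dict                                     # O - the ingested alert
    plan: list = field(default_factory=list)        # S - investigation threads
    evidence: list = field(default_factory=list)    # C - collected findings
    analysis: list = field(default_factory=list)    # A - answered questions
    verdict: str = "incomplete"                     # R - final conclusion


def run_oscar(alert: dict) -> Investigation:
    inv = Investigation(alert=alert)
    # S: extract key fields and plan investigation threads (placeholder logic)
    inv.plan = [f"thread: check {k}" for k in ("user", "host", "process")]
    # C: gather evidence per thread from integrated systems (stubbed lookups)
    inv.evidence = [{"thread": t, "source": "SIEM", "data": "..."} for t in inv.plan]
    # A: recursive questioning would refine each thread; stubbed here
    inv.analysis = [f"answered: {t}" for t in inv.plan]
    # R: conclude with a verdict backed by the collected evidence
    inv.verdict = "benign" if inv.evidence else "incomplete"
    return inv


result = run_oscar({"id": "alert-1", "user": "alice", "host": "ws-42"})
```

In a real system, each phase would call out to live tools; the point of the sketch is only the shape of the pipeline: alert in, threads planned, evidence gathered, verdict out.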

The most interesting piece of the methodology for SOC teams will be the Analyse phase. Candon explained how it scales, saying, “We’ll build, say, 20 investigation threads that we want to go out and ask questions on. Each of those investigation threads may have five to ten questions. Each of those threads may go into six, seven, ten different systems and CTI [cyber threat intelligence] sources.”
He went on to say, “We go through a recursive reasoning process. We ask multiple questions of integrated systems and CTI sources to gather evidence. These help pave the way to the final conclusion.”
That recursive questioning enables the investigation to target different angles, such as how this impacts the user, host, process, network path, domain, etc. It allows a wider breadth of investigation than most teams handle today.
More importantly, this use of parallel threads would require a team of analysts and would consume a large amount of time. This is a single investigation and a single resource that is returning detailed information to users.
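The fan-out Candon describes can be sketched with standard concurrency primitives. This is a generic illustration of parallel investigation threads querying multiple stubbed sources, not Dropzone's actual architecture; the source names and per-thread question counts are assumptions.

```python
# Illustrative fan-out: each investigation thread queries several (stubbed)
# data sources in parallel. Names and counts are hypothetical.
from concurrent.futures import ThreadPoolExecutor

SOURCES = ["EDR", "SIEM", "DNS logs", "CTI feed"]


def investigate_thread(thread_id: int) -> list[dict]:
    # A real thread would ask 5-10 questions of each integrated system;
    # here each source returns one stubbed finding.
    return [{"thread": thread_id, "source": s, "finding": f"q{thread_id}@{s}"}
            for s in SOURCES]


def run_parallel(num_threads: int = 20) -> list[dict]:
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = pool.map(investigate_thread, range(num_threads))
    # Flatten per-thread findings into one evidence list
    return [finding for batch in results for finding in batch]


evidence = run_parallel(20)  # 20 threads x 4 sources = 80 findings
```

A human team would work these 20 threads sequentially over hours; the parallel structure is what turns it into minutes.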
Does this deliver data that can be used in legal cases?
There is a constant battle when an incident occurs between IT operations, security and the legal and governance teams. IT operations is focused solely on the restoration of systems and data to minimise downtime. Ask the C-suite, and they will unanimously tell you that it is the most important thing for the business.
But that rush to restore has consequences. Security teams need as much data as possible to uncover the root cause and ensure there is no repeat. The incident response teams inside security are also looking for other markers. Not just who and why, but also what. That is often handed off to third parties to investigate with whatever data is left.
Legal and governance teams need data that is of a standard that meets the threshold of being admissible in a court. It has to be shown to be unaltered during the handling process. In electronic terms, that means storing it in an immutable store where there can be no question over its accuracy.
Candon says that this is something that Dropzone considered right from the start. He said, “Every thread produces findings that get stored as evidence. We’re asking many, many questions of the different tool sets, and from that, we’re building evidence.”
He continued, “At a summary level, you get a range of stats. For example, we had 20 investigation threads that we created. We went out to seven different data sources, we collected 67 pieces of evidence, and we got all of that back to them in six minutes.”
Evidence Lockers deliver that legal surety
To ensure that the evidence is kept secure, he says that Dropzone uses an Evidence Locker tied to every single investigation. Everything is placed into that evidence locker and can be queried by an analyst. Analysts can use this to check conclusions, as a base for other investigations or to learn.
Just as important is what is captured. The data comes from authentication logs, data access commands and other sources. This is data that the organisation already has, and which is already subject to current audit and evidence controls.
As Candon has already noted, all the evidence gathered is placed into a unique Evidence Locker for an investigation. But it stores more than just the data gathered. It records every step that was taken from the questions to the systems, tools and CTI sources that were queried.
It also places them in order and under the right investigation thread. That is particularly important given the parallel nature of the tool. What each step contributed to the conclusions and findings is also recorded.
This is a goldmine of data for legal, compliance and audit teams. Dropzone transparently shows where that source data was collected from and how, allowing easy traceability back to the source.
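The immutability requirement described above is commonly met with an append-only, tamper-evident store, where each record hashes the one before it so any later alteration breaks the chain. The sketch below shows that generic pattern; it is an assumption about how such a locker could work, not Dropzone's actual storage design.

```python
# Generic append-only, tamper-evident evidence store: each record carries a
# hash of the previous record, so edits to earlier evidence are detectable.
# This is an illustrative pattern, not Dropzone's implementation.
import hashlib
import json


class EvidenceLocker:
    def __init__(self, investigation_id: str):
        self.investigation_id = investigation_id
        self.records = []

    def append(self, thread: str, step: str, data: dict) -> None:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        record = {"thread": thread, "step": step, "data": data,
                  "prev_hash": prev_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.records.append(record)

    def verify(self) -> bool:
        # Recompute every hash; any change to an earlier record breaks the chain.
        for i, rec in enumerate(self.records):
            prev = self.records[i - 1]["hash"] if i else "0" * 64
            if rec["prev_hash"] != prev:
                return False
            body = {k: v for k, v in rec.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != rec["hash"]:
                return False
        return True


locker = EvidenceLocker("inv-001")
locker.append("thread-1", "query SIEM", {"hits": 3})
locker.append("thread-1", "query CTI", {"iocs": 0})
```

Because every query, source and finding is recorded in order under its thread, an auditor can replay exactly how a conclusion was reached, which is the traceability the legal and compliance teams need.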
How does the AI SOC analyst get smarter?
One interesting question is how does the AI SOC analyst get smarter? One of the concerns about AI in cybersecurity is how it is trained to spot attacks. Everyone uses a set of training data. That means we know the AI can spot known knowns, i.e. the things we tell it are bad. Over time, it can use its contextual model to get better at spotting new attacks that use known techniques or indicators of compromise.
The problem is, how do you keep the model updated? How do you add to the knowledge that it has?
The AI is built on third-party LLMs with which Dropzone has commercial relationships. Among those that Candon mentioned are OpenAI and Gemini. It also uses those models when it is doing its investigations. It goes out to them to ask questions and pulls that data back in.
What is not clear is how it then validates that data. Hallucinations and AI slop are a known problem, but Candon says Dropzone is managing that.
For example, the company uses a context engineering approach. That determines how prompts are structured and what information is passed to the LLM. Candon said, “We do context engineering to ask the right questions of these LLMs, all your security tools and of 20-plus CTI sources that come included.”
But, as anyone who has used LLMs will know, there is always a risk of leakage. To prevent that, Candon says that the commercial contracts have no-store, no-train and detonate-after-use clauses. This means that the LLMs are legally blocked from using anything in the prompt. Additionally, as all prompts are stored in the Evidence Lockers, the audit and compliance teams are able to see what information was used.
Can you use Evidence Lockers for fine-tuning?
No. Dropzone is clear that it doesn’t train LLMs with new client data. Instead, Dropzone told Enterprise Times that it “asks new questions with each investigation, collecting fresh evidence and log data, such as what has been historical activity over the last 30 days for a user or IP. This fresh evidence is then used to supply the analysis of the incident.
“This approach helps customers feel secure about their data usage, and gives Dropzone the ability to coach the process and experience in the ecosystem and leverage foundational models at scale, rather than train a proprietary model over time.”
That seems disappointing. Not allowing external LLMs to access data is absolutely the right decision. However, not allowing the Evidence Lockers to be used by the customer to continuously train its AI SOC analyst seems short-sighted. It should be seen as no different to training a human and helping them develop their critical thinking skills.
This leaves customers having to wait for the major LLMs to update their information on those attack paths and for that to be incorporated by Dropzone. It also runs counter to how you would expect to learn from data.
At present, the best you can do is treat it like a junior SOC analyst and use it to scope how it works. That allows for operational fine-tuning. All conclusions should be reviewed by other analysts to spot issues with how the AI interprets context. To correct that, organisations will need to update the context memory and strategies that the AI SOC analyst uses.
Should Dropzone provide a mechanism to mine the Evidence Lockers as a data source, it’s likely that customers will adopt it. Hopefully, Dropzone will find a way to enable this level of fine-tuning and the benefits it would bring.
Removing the AI slop and hallucinations
AI is prone to mistakes, otherwise referred to as AI slop and hallucinations. They occur due to poor contextual mapping, garbage data, and AI desperately trying to please. In effect, you could argue that AI isn’t necessarily a tool designed for qualitative responses.
Candon accepts the risk of error, saying, “Are there going to be hallucinations with any AI tools? Yes, there are.” Importantly, he went on to say that there is a range of mitigation strategies that organisations can adopt.
The three he called out are:
- Don’t ask the LLM to guess.
- Use LLMs to interpret and orchestrate, not to fabricate evidence.
- Cross-check against logs and CTI, and require proof.
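The "require proof" principle in the list above can be sketched as a corroboration step: a claim surfaced by an LLM is only accepted once it is cross-checked against logs and CTI. The function, field names and stubbed data below are hypothetical illustrations of the idea, not Dropzone's code.

```python
# Sketch of "don't guess, prove": an LLM-surfaced claim is only accepted
# when it can be corroborated against logs and CTI. Names are hypothetical.
def corroborate(claim: dict, logs: list[dict], cti_iocs: set[str]) -> str:
    indicator = claim["indicator"]
    in_logs = any(indicator in entry.get("message", "") for entry in logs)
    in_cti = indicator in cti_iocs
    if in_logs and in_cti:
        return "proven-malicious"
    if in_logs or in_cti:
        return "suspicious"      # partial evidence: escalate, don't assert
    return "unsupported"         # no proof for the claim; discard it


logs = [{"message": "outbound connection to 203.0.113.9:4444"}]
cti = {"203.0.113.9"}
verdict = corroborate({"indicator": "203.0.113.9"}, logs, cti)
```

The key design choice is that the LLM interprets and orchestrates, while the verdict rests on evidence the system can point to.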
He went on to say, “We are not just flagging something in front of an LLM and saying, is this good or is this bad? We are not asking our Dropzone AI analyst to guess. We are pushing our Dropzone AI analyst to prove.”
But proof assumes that it can get to a level of confidence in the data. As anyone who has worked with data will know, levels of confidence and trust in data vary. It is a very specific issue when dealing with AI.
What happens when the system isn’t confident?
When the system cannot reach full confidence, Candon said, “If the answer that we get back doesn’t help us move closer to a conclusion, then we’ll ask another question in a different way. We’ll keep going through this recursive questioning until we get to what we feel is mathematically comfortable for the conclusion.
“When we come to our conclusion, if we are not certain, it doesn’t get put as malicious or benign. It gets put as maybe suspicious, or maybe it’s incomplete. If we can’t access certain systems that we know we need to access to be able to answer this completely, we’ll mark it as incomplete.
“Is it a silver bullet, and is it always going to be 100% right? No, it’s not.”
That’s a refreshing level of honesty about the use of AI and one that organisations must take on board. Those who want to just leave all investigations to the AI need to think about how they manage an indeterminate conclusion. This is why we still need humans alongside the AI.
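The verdict labels Candon describes can be expressed as a simple decision rule: hard verdicts only above a confidence threshold, "suspicious" when uncertain, and "incomplete" when required systems could not be reached. The thresholds and function below are illustrative assumptions, not Dropzone's actual scoring.

```python
# Illustrative mapping from confidence to the verdict labels described
# above. The 0.8 and 0.5 thresholds are assumptions for the sketch.
def classify(confidence: float, threat_score: float,
             all_sources_reachable: bool) -> str:
    if not all_sources_reachable:
        return "incomplete"      # required systems could not be queried
    if confidence < 0.8:
        return "suspicious"      # not certain enough for a hard verdict
    return "malicious" if threat_score >= 0.5 else "benign"


sample = classify(0.6, 0.9, True)  # uncertain, so no hard verdict
```

Whatever the exact thresholds, the point is that "suspicious" and "incomplete" verdicts land back with a human, which is why the analyst role does not disappear.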
It’s also why we need to fine-tune the model. If an analyst has to then spend time taking that to a conclusion, that needs to be fed back into the model. It will not always be operational fine-tuning; it may be as simple as pointing to different CTI sources or materials.
Are we about to replace the SOC Analyst?
The most important thing here is that the role of a level one SOC analyst is far from dead, but it is evolving. The key thing will be how organisations look at their process for training and developing their human assets alongside their AI tools.
As Candon told us, AI SOC analysts are not perfect, but they can do a lot of things faster than a human. It is an advanced tool that will remove a lot of the mundane and the heavy lifting from teams. But it also needs guidance in the same way you would guide and develop your own staff.
At the moment, that guidance is purely around the operational approach that the AI SOC analyst is taking. Advancing its knowledge and making it a more complete AI will depend on future upgrades.
Enterprise Times: What does this mean?
What will be important is how the Dropzone customer base evolves. Will it end up with predominantly enterprise customers, or will Managed Security Service Providers (MSSPs) become the bigger segment? Both have very specific needs.
For the enterprise customer, ring-fencing all of its data is core to security. It doesn’t want any risk of that data leaking out. The MSSP has a split concern. It will want customers to be certain their data is ring-fenced, but it also knows that there is much to be gained from sharing knowledge.
It’s not unreasonable to think that an MSSP will want to do a high-level aggregation of the evidence gathered to create a system that allows it to deliver a much better service. That will require more than just fine-tuning on Evidence Lockers. It will require trusted anonymisation of the data. However, this is not something that Dropzone has been asked for at the moment.
For those worried that the AI SOC analyst is coming for your job, fear not. It is still a long way from replacing humans and will rely on humans to do that checking and verification. Just treat it as a tool and make the most of what it delivers.
The post Dropzone is reimagining the Level One SOC Analyst appeared first on Enterprise Times.