AI in the SOC: What's Real, What's Hype, and What's Next

The “AI SOC” is one of the hottest topics in security right now, but what does it really mean in practice?


We break down where AI is actually helping security teams work faster and smarter, where it’s still falling short, and what’s hype versus what’s truly useful in the SOC.

Posted on May 13, 2026
Transcript

Brian Moody: Today we're talking about AI in the SOC. It's all over. Everybody's talking about it. But is it hype? Is it real? And really, what are some of the challenges that are associated with bringing AI into the SOC?

 

Shahin Pirooz: So I think parsing that is important. AI in the SOC is not a bad thing. That's not hype. AI SOC, that's a whole different thing.  

There are a lot of companies out there positioning their technology as a replacement for security operations, and that is 100% hype today. There are plenty of thought leaders in the industry, people who are shaping how security happens for our country and for the globe, who have come out and said, "We are not at that point where AI can be a security analyst." And so in that context, AI SOC is hype.  

AI in the SOC is an enabler, just like AI in the workforce is an enabler. It's basically a productivity tool and it can help.

 

Brian Moody: Well, one of the things that I keep seeing when I search this topic is that AI is a force multiplier. It's not a replacement layer.

 

Shahin Pirooz: 100%.

 

Brian Moody: Or a replacement. And I will go out on a limb and make the statement right now: AI will never replace an analyst.

 

Shahin Pirooz: That's a hard statement to make, but I think it'll be a long while.

 

Brian Moody: I think, from the standpoint of what AI does for us today, and not just what everybody dreams it's going to do, the human aspect is what matters when it comes down to security. You've said this so many times: the tools and technology give us so much data, but it's the human mind that says, do I go left here?

 

Shahin Pirooz: Right.

 

Brian Moody: So when you get down into the investigation and really start digging into the nitty-gritty of it, I think the human aspect is really, really hard to replace.

 

Shahin Pirooz: So I think there are a lot of things happening in tech right now to reduce the overheating problem of the chips that enable AI.  

The biggest problem we have right now is that we don't have enough capacity to cool down the GPUs that are required to run billions of instructions for large language models. The challenge that creates is that we're limited in how big a language model we can use, and therefore it's difficult to replicate a human, because a human goes much bigger than that, the experiential aspect of it. So the—

 

Brian Moody: And it takes a lot less to cool them down.

 

Shahin Pirooz: Maybe, depends. Depends on the day and the topic. But ultimately, where things are going is the ability to change the way chips work, including technologies like diamond, basically growing synthetic diamonds for the chip substrate.  

There's a company that's doing that right now. Diamond is a better conductor of heat, so the chip can do a lot more with less heating and therefore less cooling requirement. So as chips become more effective, as our cooling requirements drop, as our ability to do much more with much less increases, I think there is an opportunity for building large language models that can replicate the human sense of "there's something fishy here." But we're nowhere near that today.

 

Brian Moody: I think that's a long way off. So let's talk about this hype. When people talk about the AI SOC, or AI in the SOC, what are they really driving at?

 

Shahin Pirooz: So, I'm going to go back to that parsing we talked about at the beginning. Let's start with the AI SOC.  

There are a lot of new security companies coming out and saying they are an AI SOC or an AI SIEM. Where we really get into trouble is with that claim, because leveraging an AI platform from within the SIEM is a good thing. It's not a bad thing; it's a force multiplier, as you were describing. It enables the ability to parse tens of thousands of lines from a log more quickly than a human could possibly do it. It allows correlation rules to be applied much more quickly than traditional correlation engines could. It is a really nice adjunct plugin to the traditional way raw logs and correlations and deduplications and all of those things happened historically.  
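
To make that force multiplier concrete, here is a minimal sketch of the pattern: chunk raw log lines, ask a model to flag candidates, and hand the flags to a human. The `llm_complete` client and the prompt are hypothetical stand-ins, not any specific product's API:

```python
# Hypothetical sketch: using an LLM as a first-pass log triager.
# llm_complete() is a placeholder for whatever model client you use;
# the point is the pattern, not the API.

CHUNK_SIZE = 200  # log lines per request; tune to the model's context window

def llm_complete(prompt: str) -> str:
    """Placeholder for a real model call (cloud API, local model, etc.)."""
    raise NotImplementedError

def triage_logs(raw_lines: list[str]) -> list[str]:
    """Return model-flagged line numbers; a human analyst reviews every flag."""
    flagged = []
    for start in range(0, len(raw_lines), CHUNK_SIZE):
        chunk = raw_lines[start:start + CHUNK_SIZE]
        prompt = (
            "You are assisting a SOC analyst. From the numbered log lines "
            "below, list only the line numbers that suggest suspicious "
            "activity (failed logins, privilege changes, odd destinations).\n\n"
            + "\n".join(f"{start + n}: {line}" for n, line in enumerate(chunk))
        )
        flagged.extend(llm_complete(prompt).splitlines())
    return flagged  # a lead for a human, never a verdict
```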

When we say AI SOC, there is an implication there, and there are a lot of MSPs getting drawn in: I want to get this AI SOC instead of paying for a human-based SOC because it's cheaper, it's a fraction of the cost, it's pennies on the dollar. But the reality goes back to the fact that you can't replace humans yet. Is there going to be a day? Is that something that will happen?  

Yes, there are definitely industries today where AI is taking out the analyst function. For example, legal analysts are becoming a scarce position, because contract processing, contract evaluation, and finding risk in contracts don't take as much experience anymore. When you have a senior lawyer who can review the analysis of an AI, it's a force multiplier for that space.  

Similarly, when you're looking at billions of lines of raw log data, processing it, and then presenting it for review, a senior analyst can evaluate that data and say, no, something still looks funky, something doesn't look quite right, as opposed to taking it at face value and running with it.  

The problem with an AI SOC is there is nobody there to say that; you have to take it at face value. So where it's hype is that an AI SOC can't be the only solution. You still have to have analysts, senior analysts, to process the data that comes out of it or to do the investigations, right? It really depends. The agentic revolution is allowing some aspects of investigations to happen. And could that be a force multiplier as well? Absolutely. The challenge that we're facing, though, is that every single tool in the security stack (we talked about this in a previous conversation) is building AI into their tool, but it's a siloed AI.  

It only understands their ecosystem. It only understands their dataverse, and I'm not using that word on purpose, Microsoft, it just came to mind. But everybody has their own equivalent to a dataverse within their manufacturer stack, if you will. And that construct is limited because that's the only data you have visibility into.  

So if you have tools from 5 or 6 different manufacturers, now you have 5 or 6 different AIs that don't talk to each other, that can't interact, that don't play nice with each other. And now you need an analyst who looks at the outputs of 5 or 6 different AIs that all may be hallucinating.

 

Brian Moody: Well, I'm going to get to a point that you've made and really what I think one of the real competitive advantages that WhiteDog brings, but we'll get to that.  

But you have made this comment many, many times. What you just said is that the AI within these tools is really, truly for the hygiene of that manufacturer's tool. It's not necessarily the hygiene for our environment across multiple tools, because it's specific to keeping their tool on track with, as you said, their kind of siloed dataset.

 

Shahin Pirooz: And playing the same force multiplier game, it is a force multiplier for their data, right? But as we've also said repeatedly, there is no one tool that is a security stack. There's not one in the market.

 

Brian Moody: So, I mean, you brought up a lot of good points, but the hype is also around speed. AI is advancing triage much, much more quickly. I think one of the things that we see is that it finds patterns far more quickly; think about the billions of lines of telemetry that come in.  

No human being can possibly look at all of that and really detect those patterns. The enrichment of the data, I think, is another aspect of AI in the SOC, again, not AI SOC, right? Where we really have the ability to enrich that data.  

So, to steal a little bit of your thunder, talk about one of the advantages of WhiteDog, what WhiteDog does that's unique, I think, with respect to how our SOC operates. We've talked about that one tool, we've talked about the AI in the tool. Now, we deploy multiple technologies, which we do, but talk a little bit about the WhiteDog management interface. What's different about us in terms of where we apply the AI and the normalization, and what advantage does that bring?

 

Shahin Pirooz: Yeah. Our AI under the surface is called Cybro. Cybro is basically doing all the things we just talked about. But what it's doing differently than all these players is playing that cross-ecosystem role, a neutralizer, if you will.  

It is taking the data, the alarms, the alerts, the events from all of the tools in the stack, bringing them into our own equivalent of a dataverse, and then applying all the logic associated with finding patterns and finding threats.  

What this allows us to do is go beyond where most SIEMs stop. I've said it, and I've written articles on it: the SIEM is dead.

 

Brian Moody: Dead, broken.

 

Shahin Pirooz: Yeah. SIEMs don't work because SIEMs are based—let's go back in time for a second. We always had log management software because it was important to collect logs from our applications, from our servers, from our network, from all the—

 

Brian Moody: It's the trail, it's what's happening.

 

Shahin Pirooz: Yeah—well, it's to be able to go and fix a bug. If there's a bug, you can go into the log. That was the original context of logs.  

Then, some smart people decided that, hey, we have these logs, let's parse these logs and find security events that are happening in these logs. Let's find patterns that imply there's a security issue happening within the data.  

The problem with SIEMs and log analysis is that it's a rearview mirror. It tells you what happened in the past. It doesn't tell you what's happening right now. So, SIEMs are log management software with a security lens. But they're still log management software.  

They collect the raw logs from a bunch of different sources, and then they try to draw correlations between them that say: we believe this user, Bob 21, is the same as Bob Smith, so we're going to correlate those together. We believe Bob 21 logged onto this machine, so it must be Bob Smith, because we made the other correlation. But there's really no direct mapping of the nodes, because there is no context to what the tools are returning other than a name, and they try to map those. The additional context might be IP addresses. If the tool returns a MAC address, that gives you a better idea that it's the same endpoint. So there are levels of what data you get back from each tool that allow that correlation to be cleaner and tighter.  
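
To picture the kind of fuzzy matching Shahin is describing, here is a minimal sketch of a correlation scorer. It is illustrative only; the field names and weights are hypothetical, not any SIEM's actual logic:

```python
# Illustrative sketch of the fuzzy entity correlation a traditional SIEM does.
# Field names and weights are hypothetical; real products differ.

def correlation_score(event_a: dict, event_b: dict) -> int:
    """Higher score = more confidence two events describe the same node."""
    score = 0
    if event_a.get("username") and event_a.get("username") == event_b.get("username"):
        score += 1  # "bob21" == "bob21": weak evidence on its own
    if event_a.get("ip") and event_a.get("ip") == event_b.get("ip"):
        score += 2  # shared IP: better, but IPs overlap and get reused
    if event_a.get("mac") and event_a.get("mac") == event_b.get("mac"):
        score += 4  # shared MAC: strongest signal it's the same endpoint
    return score

a = {"username": "bob21", "ip": "10.0.0.5"}
b = {"username": "bob21", "ip": "10.0.0.5", "mac": "aa:bb:cc:dd:ee:ff"}
assert correlation_score(a, b) == 3  # name + IP match; no MAC on one side
```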

The difference in what we do is we do the same collection from multiple telemetry sources, including APIs and our sensors, which do real-time analysis of data, across all of the tools that we use. That data then goes through the same standard deduplication, filtering, and compression before we send it into Cybro. The level of normalization that happens next is what's different. Normalization ends at the correlation engine for most SIEMs.  

Normalization for us is normalizing to two asset types: users and endpoints. So, entities and users. When we say UEBA, it's User and Entity Behavior Analytics. What we settled on is this concept of a user and an entity, and it's literally email addresses and IP addresses. Those are the two things that we can knock down and verify are the same thing across the board.  

Now, given that IP addresses overlap and have all these kinds of issues, what we've done is build engines that say: we know that this data from this tool belongs to this customer. So the IP addresses that show up are not overlapping with anybody else's; they are that customer's IP addresses, and we're able to get down to the raw entity.  

The next thing we do that is unique in terms of approach is we don't try to enrich the data post-analysis. We don't try to enrich the data when somebody's analyzing the logs. We enrich the data in the data source, in the database, at the root, in the dataverse, if you will. And that enriched data stays there, and it gives us the visibility to say that we saw this entity or user across these 6 different tools, and these are the types of alerts, alarms, and issues we're seeing from this device.  
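
As a concrete way to picture that, here is a minimal sketch of normalizing every tool's events down to those two keys, with enrichment attached at ingest. All field and function names here are hypothetical illustrations, not Cybro's actual schema:

```python
# Hypothetical sketch: normalize every tool's event down to two asset keys
# (user = email address, entity = customer-scoped IP) and enrich at ingest.
from collections import defaultdict

# Enriched store: (customer_id, asset_key) -> list of normalized events.
store: dict[tuple[str, str], list[dict]] = defaultdict(list)

def lookup_vulns(customer_id: str, ip: str | None) -> list[str]:
    """Placeholder for a per-customer vulnerability lookup."""
    return []

def normalize_and_enrich(customer_id: str, tool: str, event: dict) -> None:
    user = (event.get("email") or "").lower() or None
    # Scope the IP to the customer so overlapping private ranges don't collide.
    entity = f"{customer_id}/{event['src_ip']}" if event.get("src_ip") else None
    normalized = {
        "tool": tool,
        "user": user,
        "entity": entity,
        "alert": event.get("alert_type"),
        # Enrichment lives with the record, not bolted on at analysis time.
        "enrichment": {"vulns": lookup_vulns(customer_id, event.get("src_ip"))},
    }
    for key in filter(None, (user, entity)):
        store[(customer_id, key)].append(normalized)
```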

All of that allows us to very quickly identify malicious activity happening across DNS, email, identity, endpoint, network, applications, and data in one single entity type, and then apply vulnerability data against it, apply behavioral data against it, and quickly map the outcome of the behavior that we're seeing.  

And if that outcome looks like there's malicious intent, we can jump to the correlation rules that say: there are tactics and techniques from the MITRE ATT&CK matrix being used, there's a vulnerability on the target system they're trying to connect to, and the tactics look like they're taking advantage of that vulnerability; therefore, this is a malicious act.  

And with that flow, we're able to reduce dwell time from 6 months to 6 minutes for 8 years running.
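
A toy version of that final correlation step might look like the following. The asset inventory, the technique-to-CVE mapping, and the CVE identifier are all made up for illustration; MITRE technique T1190 (Exploit Public-Facing Application) is real:

```python
# Toy version of the final correlation: an observed ATT&CK technique plus a
# matching vulnerability on the target escalates to "likely malicious".
# The inventory and CVE pairing below are hypothetical.

TARGET_VULNS = {
    "srv-db-01": {"CVE-2024-0001"},  # hypothetical asset inventory entry
}

TECHNIQUE_EXPLOITS = {
    "T1190": {"CVE-2024-0001"},      # T1190 = Exploit Public-Facing Application
}

def is_likely_malicious(target: str, observed_technique: str) -> bool:
    """True when the observed tactic lines up with a live vuln on the target."""
    vulns_on_target = TARGET_VULNS.get(target, set())
    exploitable = TECHNIQUE_EXPLOITS.get(observed_technique, set())
    return bool(vulns_on_target & exploitable)

assert is_likely_malicious("srv-db-01", "T1190")
assert not is_likely_malicious("srv-db-01", "T1566")  # phishing: no vuln match
```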

 

Brian Moody: So instead of the AI kind of visually happening up here where the tools are, independently within the tools, we're collecting that down into the WhiteDog dataverse, right? And then we normalize it.  

So when we talk about rapid triage, enriching the data, normalization of the data: the AI, Cybro, that we're deploying is now at a different level, right?

 

Shahin Pirooz: So, years ago, when we first started WhiteDog, there was this concept of interstellar dust that we were talking about, and we call it intercyber dust as it relates to Cybro.  

In fact, in some of our early renderings of Cybro, Cybro wore a helmet, a space helmet and a spacesuit. And the idea was that Cybro lives in cyberspace. And in cyberspace, just like in interstellar space, astronomers look at the changes in the interstellar dust to see that something went through that space.  

Similarly, what Cybro is doing for us on the backend is looking at the changes in the intercyber dust to see that something went through that space. That's how we're able to reduce dwell time so quickly. That's how we're able to identify that something smelly is in this space, even though we can't see it and can't put our finger on it. We know there's something there.

 

Brian Moody: Ladies and gentlemen, that is the definition of a founder, right? Right there when they start talking about space dust.  

So we've talked about things that work, right? Why they work, and why AI is important in the SOC and useful to the SOC. So let's talk a little bit about the limits. Where does it not work? For our folks watching, what are some of the limits and failures of AI in the SOC today?

 

Shahin Pirooz: Great question. We've written an AI Implementation Checklist as a takeaway; after this event, each of you can go to our LinkedIn post for this live stream and download that checklist.  

And there are really 6 pitfalls. Number 1 is over-trusting the AI output. So let's think about AI SOC for a second. If you think that an AI SOC is the end-all be-all, you fell into pitfall number one. If you're taking raw logs, running them through a generative AI (pick one), and trusting the output without having a security specialist evaluate it, you fall into pitfall number one.

 

Brian Moody: Trust but verify, right?

 

Shahin Pirooz: Trust but verify. And verify with somebody who knows how to verify. If that's you, great. If it's not you, bring somebody to the table. Number 2, poor data quality. Bad data in is bad output out. There are no two ways around that. Ugly in, ugly out. That's always the way it's been, and that's always the way it's going to be.  

AI is no different. If you give AI bad data, it's going to return bad assessments and analysis of that data. So the cleanliness of your data model and data structures, and how you do the normalization we just talked about, is critical. Do you have software developers developing this AI SOC, or do you have security people developing it?

 

Brian Moody: Well, ask yourself this question: how much effort is the cyber community putting in today? And I'm talking about the bad side of cybersecurity, the hackers, the criminals in this space. How much time are they putting into polluting the dataset?

 

Shahin Pirooz: As much as they can.

 

Brian Moody: As much as they can.

 

Shahin Pirooz: Number 3, automating the wrong task. I'm going to go extreme and give you a weird example that nobody would actually do, just to give you context.  

Let's say you're getting an alarm that Bob's account has had yet another account takeover, and you're getting tired of the noise, so you automate squashing or quieting down that alert. That is the wrong task to automate, because Bob's account someday really will be taken over; Bob clearly has terrible hygiene with his account.  

 

Brian Moody: Bob, if you're out there, we apologize.

 

Shahin Pirooz: In my past company, that was Fred. Maybe we'll switch back to Fred.  

Number 4, weak integration. This, I think, is one of the most critical failure points. There are a lot of brilliant software developers out there who build a phenomenal mousetrap. But if that mousetrap doesn't integrate with anything else, or integrates poorly and doesn't understand the data models coming into it, that integration is bound to fail.  

And an AI without the ability to integrate into the source data structures that we're talking about, without an understanding of those source data structures and understanding the context, in this case security, is doomed to fail.  

So if you're building it yourself, make sure you understand the ecosystem of data, how it works, and what every single alert type means, because summarizing data you don't fully understand is doomed to fail.

 

Brian Moody: You talk about types of workflows that are out there or even processes, right? Stick AI on top of a broken workflow—

 

Shahin Pirooz: You'll have a broken workflow.

 

Brian Moody: You're going to have a bigger mess. I mean, we've talked about this before: at the speed at which agents and AI work, it'll break faster. It's going to break much faster and be a much bigger mess.

 

Shahin Pirooz: Number 5, hallucinations and blind spots. So how many of us have used just standard generative AI and you get a response and you're like, wait a minute, that doesn't seem right. Isn't this the right thing? And the response is, “Oh, you're right.”

 

Brian Moody: Oh, you're right.

 

Shahin Pirooz: Let me give you the right answer now. And then you're like, how do you know this is the right answer? Well, because I checked it again. Against what sources?  

And that's hallucination. It's basically responding with data that may or may not be valid. And when you question it, it has the ability to go back and inspect other sources. So you have to be clear with your prompts and say: I want you to use reputable sources. And you've all seen the TikToks about prompt generation. So be very careful of hallucinations.  

Blind spots are also critical, though. You can create blind spots yourself. If the AI model you're putting in place only looks for certain things, isn't looking for anything anomalous, and doesn't understand what anomalous means, you may not get alerts on things that are critical. Again, is it a software developer creating this, or somebody with security experience and context who knows how to look for things that don't exist?  

Our entire job in a security operations context is to look for stuff that has never happened before. How are you going to tell AI to do that?

 

Brian Moody: And from other comments and data points I've seen, AI doesn't necessarily do great with tone, either, in text or email, and understanding what it means. We've seen cases where AI has taken a false positive and made it critical, escalated it to a point where the human looks at it and goes, "Wait a minute, that's just Bob."

Shahin Pirooz: That's just Bob.  

Brian Moody: Poor Bob. But yeah, all very valid points.

 

Shahin Pirooz: There's one more, number 6, and it's the ever-expanding attack surface. The issue with this one is: sure, you can keep adding context around new attack surfaces as they come up, but the biggest attack surface that is contextually a real problem here is the AI attack surface itself.  

So, just like Brian said earlier, bad actors are spending their cycles trying to figure out how to add garbage into our data ecosystems, specifically to do what's called poisoning. And poisoning in the context of your security AI is a big deal, because what if the AI stops looking for a behavior a bad actor is trying to carry out, because that bad actor poisoned the model into treating that behavior as good data?
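
A deliberately stripped-down illustration of why that matters: if a detector's baseline is learned from recent data, an attacker who slowly feeds in slightly elevated "normal-looking" traffic drags the threshold up until the real attack no longer stands out. The numbers and the 3-sigma rule here are simplifications for the sketch:

```python
# Deliberately simplified: a rolling-baseline anomaly check, and how
# baseline poisoning blinds it. Numbers are illustrative only.
from statistics import mean, stdev

def is_anomalous(baseline: list[float], value: float) -> bool:
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) > 3 * sigma  # classic 3-sigma rule

clean = [10.0, 11.0, 9.0, 10.5, 9.5] * 10   # normal outbound MB/hour
attack = 40.0                               # the exfiltration spike

print(is_anomalous(clean, attack))          # True: the attack stands out

# The attacker drip-feeds slightly elevated "normal" traffic for weeks...
poisoned = clean + [20.0, 25.0, 30.0, 35.0] * 10
print(is_anomalous(poisoned, attack))       # False: the threshold got dragged up
```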

 

Brian Moody: Well, the other aspect, and as a matter of fact we've seen this example in customers, is that these hackers are mimicking normal behavior. They're mimicking approved behavior. We've talked about this with respect to agents, piggybacking on agents and their permissions, and we've seen several instances where the AI, or the SOC relying on its data, says, "That's normal behavior. That's an approved application."  

But it turned into something nefarious ultimately when the human analyst really started to look into it.

 

Shahin Pirooz: I mean, the most recent threat out there right now is that bad actors are using Teams to make it look like legit calls or legit contacts are coming in. And you accept it, and Bob's your uncle.

 

Brian Moody: Bob now becomes your uncle. Poor Bob. Again, I apologize, Bob.

 

Shahin Pirooz: Next time we'll call him Robert.

 

Brian Moody: So, actioning AI into the SOC now: where do they start, Shahin? Does this work for small SOCs? To what degree? If we're talking to the community now, where do they begin?

 

Shahin Pirooz: So, if you don't have a SOC yet, don't think you're going to build a SOC with AI. That's the fundamental thing. Partner with somebody who does security operations, and vet them to make sure they've figured out how to integrate AI into their stack and that they're future-proofing: not just the enablement and force-multiplier side of AI in their workflows, but also the readiness and understanding of what AI threats are, so that they can protect against them.  

We've said this for years: if you haven't already started building this in the past 5 years, you're never going to catch up, and it doesn't make sense to try to do it now. I've always been of the belief, in every business I've ever been a part of, that you've got to evaluate core versus context. Am I doing something that is core to my business and fundamental to what I decided to build? Or is this context that I need to add to my business, not core, so I need to bring in a partner whose core it is?

 

Brian Moody: So AI is a—

 

Shahin Pirooz: But, sorry.

 

Brian Moody: There's a but to this.

 

Shahin Pirooz: I didn't answer the question. I said don't do it, which isn't really an answer to the question.  

If you're going to go down this path, it is not as simple as: I'm going to go into Copilot Designer and Studio, build agents, turn on the ability to parse and process data against the Microsoft Dataverse, and dump all my data there.  

Bring in subject matter experts who know how to build AI but have context in security to help you train that AI. Staff someone who knows how to train the AI and fine-tune the data on a regular basis. It's not a one-and-done; it's continuous improvement. Just like monitoring a SIEM is continuous fine-tuning, it's continuous improvement and fine-tuning of your AI platform. So if you're going to do it, it's an investment of time, resources, and money.

 

Brian Moody: I think starting with a specific area, maybe around a specific data type, is another good approach. The other aspect is that I think you need to run this in parallel, keeping the human component alongside the AI component, and test it, analyze it, validate it, verify it. And as you move through time, if the AI is not beating your analyst—

 

Shahin Pirooz: Your junior analyst.

 

Brian Moody: Your junior analyst. Then boot it, right? Because it may not work.

 

Shahin Pirooz: And that's perfect guidance. And the thing to do first is alert triage. It's simple. You can say: when you see an alert of this category, this type, this definition, I want you to triage it this way; send everything else through to my analyst.  

So you can do one alert type at a time and start taking noise away from your analysts, assuming of course you've built a SOC and you're doing this. If not, partner. This is where I need that glistening ding.
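
That "one alert type at a time" approach is easy to picture as a small rule table: auto-handle only the categories you've explicitly vetted, and route everything else to a human. A minimal sketch, with made-up category names and actions:

```python
# Minimal sketch of incremental alert triage: automate only the alert
# categories you've explicitly vetted; everything else goes to a human.
# Category names and actions are made up for illustration.

AUTO_TRIAGE = {
    ("phishing", "known-campaign"): "quarantine_and_close",
    ("malware", "commodity-hash-match"): "isolate_host",
}

def triage(alert: dict) -> str:
    key = (alert.get("category"), alert.get("subtype"))
    # The default is always the analyst queue, never silent suppression
    # (remember Bob's account-takeover alerts from pitfall number 3).
    return AUTO_TRIAGE.get(key, "route_to_analyst")

print(triage({"category": "phishing", "subtype": "known-campaign"}))    # quarantine_and_close
print(triage({"category": "identity", "subtype": "account-takeover"}))  # route_to_analyst
```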

 

Brian Moody: So, I mean, I really think the key takeaway is that the AI SOC isn't there today. The claims are being made; that's a red flag. I caution you: be very, very careful with claims of, "We've got an AI SOC, this is gonna fix your problem."

 

Shahin Pirooz: The other thing to be wary of is AI SOCs that also have a human component to them. The real question is: is that human component fine-tuning their tool, or is it really behaving like a SOC analyst?

 

Brian Moody: So, a force multiplier, not a replacement. I think that's the big takeaway here. And again, I'm gonna stand by it: it's gonna be a long, long time before we see AI replace the analyst.

 

Shahin Pirooz: You said never.

 

Brian Moody: Well, I still think never. I just think there's an aspect where technology only goes so far. I don't think we're ever really going to replace the human element in functions like this.

So, anything in closing?

 

Shahin Pirooz: Don't forget to pick up the one-sheet, the AI Implementation Checklist. Please grab that and do with it what you want. And if you're interested, we offer complimentary health checks, security health checks.  

So we can very quickly do an external posture scan of your domain and give you visibility into what threats we find there, including a dark web analysis, including pen testing of your external—complimentary.

 

Brian Moody: With that, I love complimentary. Nothing's free.

 

Shahin Pirooz: Not a darn thing is free.

 

Brian Moody: Anyway, thank you for joining us today. Look forward to speaking with you again at the next WhiteDog SoundBytes.  
