Shadow Agents: A CIO's Security Blind Spot

Agentic AI didn’t just give us smarter copilots—it quietly created a new class of shadow AI agents inside the platforms you already trust. With just a few clicks, employees can enable AI agents inside Microsoft 365 and other SaaS tools without formal review, and those agents act with the same identity and data access as the user.

Posted on February 18, 2026
Transcript

Brian Moody: So, an interesting topic, a bit of an extension of what we talked about last week: security around APIs, security around agents. We touched on the security aspect of that last week and the things you do in your environment to understand what agents and what APIs you're using. This week is a bit of an extension of that. We covered the security angle last week, and we'll touch on it again today. But instead of a CISO issue, what we're talking about today is shadow agents, a CIO's issue, and why.

 

Shahin Pirooz: Yeah, the issue is that a lot of people out there are creating agents to interact with well-known, structured, and trusted platforms. For example, Office 365.

 

Brian Moody: Trusted platform.

 

Shahin Pirooz: Yes, we kind of all trust it. So what's happening is we're seeing... there was something that set off this topic beyond last month's topic, which was: let's talk about agents as identities. It's important to treat them like digital identities and not like just another tool. It's not another tool. You need to get more controls around it: understand what access it has, who owns it, who, let's say, hired it. So effectively, agents should be treated much like a digital identity, similar to how you would treat a carbon identity. That was the theme of last month's topic.

Right on the tail of that, Microsoft released a new feature which basically opened up about, I think, 300 agents from third-party manufacturers into Office 365 and allowed end users to enable them and turn them on against your corporate data. We already knew there were third-party agents that users were bringing into the environment, and in many cases we assumed they were going to have to get approval to turn them on. But this switch that got flipped by Microsoft meant these agents just showed up, and anybody in the company could turn them on and point them at their own data.

Brian Moody: So, great lead into what is a shadow agent?

 

Shahin Pirooz: Yeah. It's, you know, we've dealt with shadow IT for decades and just, you know, just to contrast and correlate those two, shadow IT is this notion that marketing or sales or finance are gonna go out with their credit card and buy a SaaS application that they implement without the CIO knowing.

 

Brian Moody: Or IT knowing anything about it.

 

Shahin Pirooz: Yeah. They're gonna go do it, and six months, a year, two years later you'll find out that marketing has a whole different marketing platform than the one sanctioned by the company, as an example. No offense to my marketing team. Don't come kill me after this. But that was traditionally how we understood shadow IT, and CASB, the cloud access security broker, actually evolved as a way to get your arms around it: visibility into traffic leaving our corporate walls and where that traffic was going, so we could understand what our people were using that we weren't aware of. The birth of CASB came, you know, 15, 20 years ago, maybe longer, I'm getting old and time's running together, but it was in the context of shadow IT and how we get our arms around it.

So fast forward to two, three weeks ago, when Microsoft released 300 agents, shadow agents, into the Microsoft ecosystem through a marketplace that anybody can turn on. Not just the VP of marketing, but anybody in marketing; not just the VP of finance, but anybody in finance can turn on these agents that have access to your corporate data housed in SharePoint, Teams, OneDrive, and email.

Implications? Some of your people have pretty important information that shouldn't be released to third-party platforms. So here's the contrast: with shadow IT, CIOs didn't know something was happening in an untrusted environment. With shadow agents, CIOs don't know something is happening in a trusted environment, one we believed was trusted up until two weeks ago, when all of a sudden all these untrusted agents got access to it.

 

Brian Moody: I'll take that a step further on the big differentiation: people are turning these on, and the key is that this is happening in a trusted environment, right? Shadow IT, as you said, is really where folks brought in applications, and those applications and/or the data they stored oftentimes existed outside of IT control, outside of security controls, et cetera. But those were applications and datasets.

The unique thing about shadow agents is that these are agents being enabled in trusted environments. The difference, though, and I think the scariest part, and we talked about this last week, is that these agents perform a function. They make decisions. They're actionable. With applications and data, we were processing things and storing data. Now, given what these agents are capable of doing, the scale at which they do it, and the speed at which they do it, a CIO has no idea what's happening in the environment.

 

Shahin Pirooz: And who has access to their walled garden.

 

Brian Moody: Well, and if something happens, and we'll talk about some of the risk factors as well, but when we get down into what happens, the CIO... If something happens, the answer to the board is, "AI did it." To the regulators, that's not gonna fly.

 

Shahin Pirooz: "It wasn't me."

 

Brian Moody: "It wasn't me." So how'd we get here? Like, I mean, I think there are some key drivers behind this, but how did we get here?

 

Shahin Pirooz: Yeah, the key drivers are a simple evolution. It's not a switch that flipped. We started with being able to chat and ask questions of intelligent systems, artificially intelligent systems that appeared to have intelligence and inferencing capabilities. Still massive if-then-else logic underneath, but they were able to take that logic and, based on programmatic enablement, make inferences about where the question was going and therefore where the answer had to be. So we started with question-answer. We then went to generative AI, which was not just interacting from a question-answer perspective but able to create content. Now we've gone to agentic, which completes the cycle.

So it's one thing to generate content from an AI perspective. It's another thing for an AI to be able to take action on content, to be able to now, based on triggers, I want you...  

So example, and we brought this up a few months ago in one of our tech talks. An example is, "I want you to write an email response to this email I got from a frustrated customer."  

Generative AI does that no problem. Now you gotta take it and figure out if you're gonna send it, if you're gonna adjust it, if you're gonna tweak it, whatever. Generative AI also gave you the ability to go through those tweaks and iterate and say, "I don't like that tone, change the tone," and adjust it to your liking.  

Now, to send it, you are pushing a button to send it, but if you take agentic AI and bring it in, you now can have an agent that says, "Generate this content."  

Generative AI generates this content. The agent says, "That doesn't match the style of this individual. Change it." Once it's adjusted to the liking of the agent, the agent then passes on to another agent and says, "I need you to send this to the following people." So no human involved. It happened in picoseconds, not hours or minutes, and the process is untethered.  

So you now have a series of agents that can generate content, act on that content, and make decisions based on that content. So it gets to a point where... Now, these things don't have intent. That's probably the most important thing. I wrote an article recently about MoltBot and how MoltBot and MoltBook really created this ecosystem. It's like a lab gone wild, where this ecosystem of agents is interacting with each other, creating religions, creating deities, creating drug dealers that are selling drugs, which are little bits of code that change the behavior of other agents.

And it's become an interesting lab experiment, and people could quickly leap to the conclusion that these things are scary, that they're gonna take over the world, that this is Skynet. The reality is there is no intent behind these actions. They're simply following a series of prompts to get to where they're going. So if you are careless, just like if you hand a weapon to someone who is careless and doesn't know how to use it, they're going to harm themselves or somebody else. Same thing with agentic platforms. If you say, "Go and create blank by any means necessary," you are giving an awful lot of leeway to something that doesn't have intent or malice in mind, but it might find that something unacceptable to us as humans is acceptable, by any means necessary, to achieve the task.

So guardrails are really critical here and we've gotten here because we didn't think about the guardrails first. We thought about those after.

 

Brian Moody: Yeah. And I think the real concern is, as you said, they execute these things, from a security standpoint, but not just that. A term that I've been seeing in a lot of print is blast radius: the blast radius of enabling that agent to just freely execute. And as you said, picoseconds. I've got to look that up to see what a picosecond is. But I'm sure it's pretty damn small.

 

Shahin Pirooz: It's pretty small.

 

Brian Moody: But from a standpoint of that is that the impact could be devastating from a standpoint of the blast radius of an automated agent executing something. It has happened before you know it, and the impact can be just dramatic.

 

Shahin Pirooz: We can't react at the speed they can act. That's the simplest way to get your head around it.

 

Brian Moody: Yeah. So I think a couple of other key areas in how we got here: the whole AI revolution right now is putting pressure on most organizations to implement. Boards are pushing for productivity based on AI.

 

Shahin Pirooz: Yep.

 

Brian Moody: And I think that's a big piece of why companies are implementing it without the guardrails: they're trying to satisfy a board or the pressure to become productive.

 

Shahin Pirooz: Well, this is an interesting repeat of shadow IT. Because it's the same thing. The board put a lot of pressure on: "We need to be faster in our go to market. We need to be faster in our product development. We need to be faster in closing our books."

And the executives that ran their divisions decided, "IT is not doing it for me. They're a roadblock. I'm going to go figure out how to solve this myself," and brought in third parties: implement an ERP solution that's SaaS-based that IT knows nothing about, implement a CRM solution that's SaaS-based that IT knows nothing about. And the implication is the same here.

We've got the same pressure from the board saying, "We need AI. We're falling behind. Our competitors are moving." And the CIO is dealing with compliance, with regulatory concerns: "We're exposing ourselves. We have risk."

And so how do you get this balance? How do you balance that scale and say, we have to have compliance, because we spent decades building our controls and we are governed and regulated, but we need to move faster? There are definitely answers to that. There are a lot of ways to achieve both and get there. But to your point, or to our point, it's the guardrails. You have to think about those guardrails first. You have to think about these agents as digital employees, not as a piece of software.

 

Brian Moody: Right. You mean going to IT and saying, "Hey, we wanna do this," and hearing, "Let's order the equipment, order the app, ship it, implement it. We'll have that environment ready for you in 90 days." So that doesn't work anymore?

 

Shahin Pirooz: No. I want it tomorrow.

 

Brian Moody: So I think, for me, one of the key points I was gonna make as well is that SaaS is an absolute driver in how we got here, right? We're implementing that. I think the other critical piece is the no-code, low-code AI implementations going on now, which are driving how companies step into this AI revolution.

You know, the other aspect is that AI agents now exist inside the SaaS applications we're using. So again, back to the risk for the CIO: they have a trusted, controlled environment that they've implemented, right? And we're releasing these things within that trusted environment, and back to your point, without guardrails.

 

Shahin Pirooz: It's literally like somebody hired a subject matter expert. Nobody knows who hired them. They have admin rights. Nobody knows who gave them the rights.

They have the ability to do what they need to, and they work faster than any other employee we have. That's what we're dealing with when somebody turns on agentic AI in trusted platforms. Like, who hired Bob? How did Bob get all this access? Why is Bob moving so fast? He's making platform changes in seconds instead of minutes. So for a CIO to stay on top of this and get ahead of it, you need to start thinking about, "How do I address this problem?"

Not stop it but address it. Just so we're clear.

 

Brian Moody: And no, we're not advocating stopping it, right? I mean, I think the critical point is that AI is here to stay and it should be embraced. It's how we embrace it, right?

 

Shahin Pirooz: And the board is right. If you're not doing it, if you're not enabling it, your competition will leave you behind.

 

Brian Moody: No question. Well, I think the critical risk factors come into play here, and you brought up a very key point: the CIO often gets tasked with the compliance aspect of this. We have to be in compliance, right? We're regulated in certain ways, and in some industries many ways.

But if you look at the key risk factors, operational, financial, security, and compliance, and the way these shadow agents act: operationally, immediately, back to the point of "Oh, AI did it," the CIO oftentimes doesn't know what's happening in the environment because of the actions being taken by the agents.

 

Shahin Pirooz: And, I'll tell you what, there are plenty of times where an employee did it, but guess what? The CIO is also impacted by the employee that did it. This is no different. This is IT infrastructure, and we're not trying to put pressure on CIOs. The really important thing to take away from this is that this is here and real. Shadow IT was real, and it caused a lot of problems: spend went through the roof and we had duplicate environments. The same thing is gonna happen here.

You've got to get your arms around this thing, and you've gotta create controls so that when your users need agents to accelerate, get faster, and be more productive, you know how to monitor, track, and audit their behavior and activity.

 

Brian Moody: So financially, you already brought up the idea of an agent generating an email or responding to an email. If you recall, in our last January SoundBytes we talked about the financial impact of an agent like this approving a vendor, approving an invoice, authorizing a quote, making payments. It can have a dramatic financial impact on your environment, given the way these agents execute. And then the security aspect of the risks we talked about: they act inside a trusted environment, but they're completely unregulated. Again, we talked about this last time, they access controls, they access permissions, they access applications and data, all outside of our traditional security infrastructure.

 

Shahin Pirooz: To clarify, they can be unregulated, but they can also be regulated. We're not here to say this is a problem that can't be solved. You can implement controls, access and identity monitoring, and specific identities for each agent today, in the platform, without buying any new software.

So this is not magic we're talking about, or some "You've got to buy this thing from WhiteDog or else you won't be able to solve this." This is really an advisory: there's a problem, get your arms around it, and the tools you're using today will continue to function and provide support for these agents.

 

Brian Moody: I think the other issue, stepping from security to compliance, and again tying back to our poor CIO, and honestly we're not trying to beat up on the CIO today, just talking to the challenges they have, is that a lot of these agents are not logging their actions. So when we talk about compliance, and about coming back to who did what and when it happened, a lot of this is not being reported back to a SIEM or some type of logging infrastructure.

 

Shahin Pirooz: Well, it's the audit trails themselves that are difficult to follow, and mostly it's because folks don't create dedicated identities for these agents.

So if you create an identity for each agent, instead of giving it generic app access in your Office 365, you now have the ability to say, "I know who requested this action to happen, and I know which agent did the work."

So if you then need to shut down that agent because it's run away for some reason, you know which agent to go and shut down quickly.
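As a rough sketch of what a per-agent identity can look like in an Entra ID / Microsoft Graph setup (one possible approach, not the only one), the snippet below registers a dedicated application and service principal for a single agent, and shows the "kill switch" of disabling that service principal if the agent runs away. It assumes a Graph access token with Application.ReadWrite.All; the agent name is hypothetical.

```python
# Minimal sketch: give each agent its own Entra ID identity so its actions are
# attributable and it can be shut off independently. Assumes a Microsoft Graph
# token with Application.ReadWrite.All; the agent name below is hypothetical.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def register_agent_identity(agent_name: str) -> dict:
    """Create a dedicated app registration + service principal for one agent."""
    app = requests.post(f"{GRAPH}/applications",
                        headers=HEADERS,
                        json={"displayName": agent_name}).json()
    sp = requests.post(f"{GRAPH}/servicePrincipals",
                       headers=HEADERS,
                       json={"appId": app["appId"]}).json()
    return {"appId": app["appId"], "servicePrincipalId": sp["id"]}

def disable_agent(service_principal_id: str) -> None:
    """Kill switch: disable the agent's identity if it needs to be shut down fast."""
    requests.patch(f"{GRAPH}/servicePrincipals/{service_principal_id}",
                   headers=HEADERS,
                   json={"accountEnabled": False}).raise_for_status()
```

Because the agent signs in as its own identity rather than a shared app, sign-in and audit logs can attribute each action to that specific agent, which is what makes the quick shutdown above meaningful.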

 

Brian Moody: Right. Talk a little bit about, again, we touched on this last time, the security aspect of these agents: hackers using prompt injection or taking control of their access.

 

Shahin Pirooz: Yep.

 

Brian Moody: Talk a little bit more about that aspect of the security.

 

Shahin Pirooz: So there are two things. One of them is that most people implement agents with full access by default, and it's a traditional IT... I don't wanna use the word failure, but it's the only thing coming to mind. It's a traditional IT misstep: start by opening access, then cut access back until we figure out what this particular application, in this case this digital identity, needs.

I think with agents, we need to think about it differently. We need to scope what that agent is supposed to do in the context of an employee. It's a digital employee: what should this digital employee be allowed to do? What should it have access to? And what actions is it allowed to take on its own without approval? Similar to hiring an employee, you don't hire a junior person out of college and give them rights to write checks to anybody, right? Same construct.

You have a digital employee, and just like a physical employee, you need to treat it like an employee. Let's talk about the implications, though. Bad actors are doing things like creating emails and documents and content that get ingested by these agents and make a request seem legitimate. That is prompt injection. So when they send a prompt in, that prompt triggers the agent to look at that "legitimate" thing and take action on it, which is: send $100,000 to the Caymans.

So if you don't constrain and restrict what that agent can do, it can send $100,000 to the Caymans without approval. A bad actor has just taken $100,000 out of your bank account and sent it to their Caymans account. And that's a very difficult thing to recover. You can call in the federal agents and get them to try to help you track it down, but it will be a long, drawn-out process, and the likelihood that you get your money back is very small.
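To make the "constrain what the agent can do" point concrete, here is a toy sketch of an action guardrail. Every action the agent proposes passes through a policy gate, and risky actions such as large payments are held for human approval rather than executed automatically. The action names, thresholds, and queue are hypothetical, not part of any specific platform.

```python
# Toy sketch of an action guardrail (hypothetical names throughout): every
# action an agent proposes is checked against a policy, and risky actions such
# as large payments are held for human approval instead of executing.
from dataclasses import dataclass, field

APPROVAL_REQUIRED = {"send_payment", "approve_invoice", "add_vendor"}
PAYMENT_AUTO_LIMIT = 1_000  # payments above this always wait for a human

@dataclass
class ProposedAction:
    agent_id: str
    action: str        # e.g. "draft_email", "send_payment"
    params: dict
    requested_by: str  # the human identity the agent is acting for

@dataclass
class Guardrail:
    pending_approval: list = field(default_factory=list)

    def evaluate(self, act: ProposedAction) -> str:
        if act.action in APPROVAL_REQUIRED:
            amount = act.params.get("amount", 0)
            # Small payments may auto-run; every other listed action is held.
            if act.action != "send_payment" or amount > PAYMENT_AUTO_LIMIT:
                self.pending_approval.append(act)
                return "held_for_human_approval"
        return "allowed"

# A prompt-injected "send $100,000 to the Caymans" gets held, not executed.
gate = Guardrail()
print(gate.evaluate(ProposedAction("ap-agent-01", "send_payment",
                                   {"amount": 100_000, "to": "unknown-account"},
                                   "finance_user@example.com")))
```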

 

Brian Moody: All right, so straight talk. So this is kind of a little bit of a summary like we did last SoundBytes. What can CIOs do now? I mean, from a standpoint of their organizations? What are some steps they can take?

 

Shahin Pirooz: So we created a little takeaway that's available with this post that you can go grab. It's a very light, quick one-pager that lays out a 30-day framework for how to handle this thing. And it's really not rocket science. We're not trying to make this difficult.

From day one to ten, get visibility into the environment. Understand and discover what shadow agents are in your environment. Super simple, same as anything in security, right? Assets: what assets do I have? Then tighten the tenant, the platform, the settings in the next five days, and reduce the sprawl of these things.
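As one rough illustration of that discovery step in a Microsoft 365 tenant (not the framework's prescribed tooling), the sketch below lists the service principals in the tenant and the delegated OAuth grants behind them via Microsoft Graph, which surfaces third-party apps and agents that users have consented to. It assumes a Graph access token with at least Application.Read.All.

```python
# Rough sketch: enumerate service principals and the delegated permission grants
# behind them to see which third-party apps/agents already have access to the
# tenant. Assumes a Microsoft Graph token with Application.Read.All.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <token>"}

def list_all(url: str) -> list:
    """Follow Graph's @odata.nextLink paging and return every item."""
    items = []
    while url:
        page = requests.get(url, headers=HEADERS).json()
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")
    return items

service_principals = list_all(f"{GRAPH}/servicePrincipals")
grants = list_all(f"{GRAPH}/oauth2PermissionGrants")

# Index delegated grants by the client service principal so each app shows its scopes.
scopes_by_client = {}
for g in grants:
    scopes_by_client.setdefault(g["clientId"], set()).update(g.get("scope", "").split())

for sp in service_principals:
    scopes = scopes_by_client.get(sp["id"], set())
    if scopes:  # only apps that users or admins have actually consented to
        print(sp["displayName"], sorted(scopes))
```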

So the example I gave: my director of operations came to me and said, "We have 300 agents that just popped up that anybody can turn on in our tenant." We of course turned it off overnight, immediately; within 15 minutes, we shut that down. Go inspect that. Go see what Microsoft did to your tenant, and lock down what you don't want your users to be able to turn on without you.

Establish an agent registry. Think of this as your HR registry. You know who your employees are, you know information about them: when they started, what role they have, and, by extension, what rights and access they should have based on their team and department. Similarly, you're gonna create a registry that says: this is when this agent started, this is the function it's supposed to perform, this is the team or owner of this agent, and here are the guardrails and controls in place for it.
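To make the registry idea concrete, here is one possible shape for a registry entry, a sketch only. The fields mirror the attributes described above (start date, function, owner, guardrails) plus the dedicated identity created for the agent; none of the field names come from the one-pager itself.

```python
# One possible shape for an agent registry entry: the "HR record" for a digital
# employee. Field names are illustrative, not taken from the takeaway document.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentRecord:
    agent_id: str                 # the dedicated identity (e.g., an Entra app ID)
    name: str
    started: date                 # when the agent was "hired"
    function: str                 # what it is supposed to do
    owner_team: str               # who owns / is accountable for it
    data_access: list[str] = field(default_factory=list)        # sites, mailboxes, ...
    allowed_actions: list[str] = field(default_factory=list)
    requires_approval: list[str] = field(default_factory=list)  # actions gated by a human

registry = [
    AgentRecord(
        agent_id="00000000-0000-0000-0000-000000000000",  # placeholder
        name="invoice-triage-agent",
        started=date(2026, 2, 1),
        function="Summarize incoming invoices and draft approval requests",
        owner_team="Finance Operations",
        data_access=["AP shared mailbox"],
        allowed_actions=["read_email", "draft_email"],
        requires_approval=["send_email", "approve_invoice"],
    ),
]
```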

Integrate it with your existing controls. Bring your identity controls to bear, put monitoring on top so you can audit the activities it performs, and apply data protection in case it makes a mistake and deletes data, so you can recover it. Don't assume these tools are infallible. They're software at the end of the day, and software makes mistakes, and if it deletes something critical, you need a way to recover. It's the traditional disaster recovery mantra and mindset.

Resilience comes from protection. So start with protecting yourself: identity, access controls, and data protection. And then really... that was five days, sorry, ten days, for integrating those controls.

The last ten days: create the guardrails and the culture for how to use these things. Now you're putting in controls that say how you're allowed to enable agents and what the approval steps are for turning them on. And now that we've got these guardrails in place, and we have ACLs and identities and data protection, what's the process for people to engage agents? What's the cultural training and change management you have to do to bring them into the fold?

If any of you are out there scribbling like crazy, stop. Just grab this off the LinkedIn live stream and it will also be on our website at some point. So, it's going back to basics, treating these agents like employees. They're digital employees, but they are employees.

 

Brian Moody: And for our partners who are attending today, this document will also be in your partner portal, down in the marketing hub. We'll make sure to post it there so you have access to it. So really, we talked about three key points last week as well, as you called out: discover, decide and put some controls around it, and then direct, in terms of how we use AI and control what kind of authorized AI components exist in the infrastructure.

 

Shahin Pirooz: Yep. And, you know, the thing I often get asked when I have this conversation, 'cause this has been a topic for the last three months, is partners saying, "This is all great, but are you expecting me to figure out how to do identity controls? Like, what are you guys doing, what's WhiteDog got to help us?" We have a new product we're bringing to the table. It's a cloud identity and access management portfolio that allows you to extend identities to cloud assets, like agents or APIs. It's really the ability to take identity to digital assets as opposed to just people.

 

Brian Moody: Fascinating topic. You brought this up with me a month and a half ago, and I started digging into it and was just like, "Wow." A massive challenge for CISOs and their counterparts, the CIOs, as well. But in terms of addressing this in your environment, getting approved AI, because you cannot not have AI, as you said, right? We're gonna fall behind if we don't. But how we use it, how we control it, how we guard it, those are the key components for any infrastructure.

 

Shahin Pirooz: Yeah. I think it's important. A year ago, Brian coined this saying: "They're no longer hacking in, they're logging in." And I've always lived by the mantra that imitation is the highest form of flattery. We are being flattered day in, day out over Brian's words. There are people, companies, organizations actively saying those exact same words and putting them up everywhere, but I think it's more parroting than really understanding the implication of what those words mean.

It means that of the five pillars we protect, email, DNS, identity, endpoint, and network, the most critical of those pillars, or attack vectors, is identity. 90% of attacks start in email. 80% of malware needs DNS. 73% of attacks that get to an endpoint end up in encryption, and that same 70% also ends up moving laterally, so the network. All of those are below 100% of the attack vector. 100% of every single attack has an identity hack involved in it.

Identity is the way people are getting in. They are not any longer hacking, they are logging into your environment. And imagine now that you have these agents that you have no identity around, and they're using these agents to do things in your environment on their behalf.

 

Brian Moody: So I'm gonna be a little selfish here. This is where we love to have thought-provoking conversations in SoundBytes, but I've gotta tie back to what we do. We talked last week about how these issues really circumvent the traditional security tool environment we have. We put in endpoint, we put in mail, we put in network. We put these tools in, and they're reactive, folks. That is the nature of the way they were developed. They're designed to do something when something happens.

 

Shahin Pirooz: Exactly.

 

Brian Moody: And in almost every case, we've found, and you have said this and drilled this into me, and we've looked at the stats, the average dwell time is six months plus. And oftentimes the hacker will act more quickly, but the net is they're bypassing these traditional tools, and they're utilizing the infrastructure we've been talking about for the last two months in order to do that.

 

Shahin Pirooz: Yep.

 

Brian Moody: Talk about our ASM platform, attack surface management.

 

Shahin Pirooz: Yeah. It's all good and great to have a defensive layer. So, most people, when they think about security, they think about defense in depth. And defense in depth is the right way to secure any environment.

 

Brian Moody: It's the foundation.

 

Shahin Pirooz: It's the foundation, but the best defense is a good offense.

 

Brian Moody: Love that term.

 

Shahin Pirooz: That adage has not changed in centuries. In order for you to be able to defend well, you have to understand what you're defending against. In order for you to know what you're defending against, you have to know what holes the bad actors are gonna take advantage of.  

Our attack surface management portfolio goes across all of the attack vectors we've talked about, there are about eight different vectors in total, and it tells you what risks are there that the bad actors are gonna take advantage of, so you can close those gaps, close those holes, before they do. We aim to not only be your defense-in-depth partner; we aim to give you the tools to prevent the bad actor from ever having to test that defense-in-depth layer.

 

Brian Moody: This is the proactive approach to security. The foundation protects against things that happen, but understanding the vulnerabilities that exist in your environment is the critical aspect of protecting yourself. How do we defend against something we don't know? And what we've been talking about for the last two months, APIs, when agents attack, shadow agents, is things happening in your environment that you don't know about. How do you defend against that?

So, what I find most fascinating about our attack surface management platform is that it's constantly evaluating the environment for vulnerability. That's the point. It's looking for vulnerabilities. We're looking for changes, we're looking for behavior, we're looking for actions occurring in the environment that aren't supposed to happen but aren't necessarily going to trigger the foundational toolset.

 

Shahin Pirooz: And it's not always malicious.

 

Brian Moody: True.

 

Shahin Pirooz: Configuration drift happens because there's a project that needs access to something for a short period of time, and the hole's never plugged. Or we're on a deadline, and we have to open up firewall ports for XYZ so we can hit the deadline, and they're never closed. Unless you're proactively looking to see what exposures you have, you're not gonna know that was left open.

And as an incoming CIO at a new company, you're relying on somebody coming in and evaluating that environment, a pen test or something else, to tell you what exposures you have. And I love our industry, I've been in security for 30 years, but we don't always get pen testing right. We miss one thing or another here or there. It's not a perfect game; it's a human against an environment. Whereas if you're continuously monitoring and scanning the environment from an automated pen testing, vulnerability scanning, and exploit evaluation perspective, you're gonna have much better visibility into configuration drift as well as actual new risks.
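As a toy illustration of catching drift like the forgotten firewall opening described above, the sketch below diffs what a scan says is exposed today against an approved baseline and flags anything new. The baseline file, hosts, and ports are hypothetical stand-ins for whatever output a real attack surface management scanner would produce.

```python
# Toy illustration of configuration-drift detection: compare what a scan says is
# exposed right now against an approved baseline and flag anything new. The
# baseline and "observed" inputs are hypothetical stand-ins for real scanner output.
import json

def load_baseline(path: str) -> dict[str, set[int]]:
    """Approved exposure per host, e.g. {"web-01": [443]}."""
    with open(path) as f:
        return {host: set(ports) for host, ports in json.load(f).items()}

def find_drift(baseline: dict[str, set[int]],
               observed: dict[str, set[int]]) -> dict[str, set[int]]:
    """Return ports exposed today that the baseline never approved."""
    drift = {}
    for host, ports in observed.items():
        extra = ports - baseline.get(host, set())
        if extra:
            drift[host] = extra
    return drift

# Example: a port opened "for the deadline" and never closed shows up here.
baseline = {"web-01": {443}, "files-01": {443}}
observed = {"web-01": {443}, "files-01": {443, 3389}}
print(find_drift(baseline, observed))   # {'files-01': {3389}}
```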

 

Brian Moody: Well, this gets back to something you have said from day one, almost a decade ago, which is the importance of humans watching this. The security operations center, human eyes on the glass. Because, as you've said, we can stand all this up, but it's like putting a guard tower up and not putting a guard in it. They're over your wall before you know it, and then it's too late.

 

Shahin Pirooz: Yup.

 

Brian Moody: And I think it's looking at and watching these behaviors. You said, "Identity, identity, identity," and I think one of the biggest key pieces of what we bring, too, is our ISPM, our identity security posture management: are there changes in Entra ID? Are there changes occurring in Active Directory? Did someone un-retire an account? Now, these are all natural, normal behaviors. It happens. People retire an account. People un-retire an account.

 

Shahin Pirooz: Yup.

 

Brian Moody: A person moves in the organization, and permissions are added or taken away. But is anybody watching that, right? Does someone understand why that happened?

 

Shahin Pirooz: Sometimes, but not always.

 

Brian Moody: Not always.

 

Shahin Pirooz: Yeah, and sometimes it's on a holiday weekend where everybody's enjoying the Super Bowl, and they miss it. Bad actors love holiday weekends. Almost every incident response we've done has been on a holiday weekend.

 

Brian Moody: Holiday weekend. Thanksgiving is a favorite, we found out.

 

Shahin Pirooz: Thanksgiving, Christmas, any long weekend.

 

Brian Moody: But I think that's just a critical aspect. Everything we've talked about is about behavior, and this whole shift with agents acting in an environment is all about behaviors, you know? UEBA isn't new terminology or a new ideology in the world of security. But the critical piece, I think, is the ability to have a platform that watches that.

 

Shahin Pirooz: I'm going to coin a new term, new acronym, and watch, we'll get copied on this too.

 

Brian Moody: Here we go.

 

Shahin Pirooz: It's instead of UEBA, which is user and entity behavior analysis, it should really be IEBA, identity and entity behavior analysis.

Because it isn't a user, it's the identity doing something. We don't really know what a user does. We know that their identity is doing something.

 

Brian Moody: Right.

 

Shahin Pirooz: And the same thing applies if you take a digital employee and apply an identity to it, now we can baseline that digital employee's identity behavior and, from that, be able to determine if there's variance from that baseline.  

So, I just spent time again with my director of operations responding to a lengthy questionnaire from one of our longtime customers who hired a new analyst. This individual is super curious, which is a great challenge: helping him understand what we're doing and how we help. But it also helped us document a lot of our approach to how we do threat hunting and evaluation, so that it's not just institutional knowledge but documented and available, and we created a set of FAQs out of that exercise.

But one of the key themes, across I would say 20 to 30% of the questions, was, "How are you doing threat hunting?" And the entire answer is behavior analysis. Where we use AI is to behaviorally understand how something is different from what it was over the past 12 months. So, baselines: we take sensor data from the local network, flow analysis, sensor data from logs and from APIs, pull all of that back into a centralized repository, and apply behavioral AI to it so we can see patterns and trends. When these deviate from the baseline patterns and trends, alarms go off and give us visibility, which is why we're able to claim we take that six months of dwell time down to six minutes.

We find bad actors within minutes of them starting to do something, not hours, days, weeks or months.
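To give a flavor of the baseline-and-deviation idea in the simplest possible terms (this is not WhiteDog's actual detection logic), the toy sketch below baselines an identity's daily activity counts and flags a day that deviates far from that baseline. The data and the 3-sigma threshold are made up for illustration.

```python
# Toy sketch of identity behavior baselining: model an identity's normal daily
# activity volume, then flag days that deviate sharply from that baseline.
# The data and the 3-sigma threshold are illustrative, not a real detection rule.
from statistics import mean, stdev

def flag_anomaly(daily_counts: list[int], new_count: int, sigmas: float = 3.0) -> bool:
    """True if today's activity is far outside the identity's historical baseline."""
    mu = mean(daily_counts)
    sd = stdev(daily_counts) or 1.0
    return abs(new_count - mu) > sigmas * sd

# Example: an agent identity that normally touches ~40 files a day suddenly touches 900.
history = [38, 42, 35, 44, 40, 37, 41, 39, 43, 36]
print(flag_anomaly(history, 900))   # True -> investigate this identity
```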

 

Brian Moody: So, ask yourself internally, for our customers who are watching: how do you address this issue in your environment? For our partners, folks partnered with WhiteDog and selling to customers, these are conversation starters. This is how you open the door to a conversation. Ask a customer, "Do you understand what agents you have within your environment? What's happening?" It starts the conversation. Please engage us. We're happy to go out on those calls, or to shadow you or assist you in these conversations. And then it drives into the operational aspect of how we deploy security and how we implement platforms that help us understand our vulnerabilities and the behaviors that exist in our environment.

So with that, we'll close February '26 SoundBytes. Thank you for joining us. Shahin, thank you for your thoughts today and please reach out to us if you have any questions, we'd be happy to help.
