When AI Agents Attack: The New Insider Threat

AI copilots, digital workers, and autonomous agents are quietly becoming the new “employees” inside organizations—reading email, touching production systems, moving data, and making decisions at machine speed. But what happens when those agents are over-privileged, misconfigured, or hijacked by an attacker?

posted on
January 14, 2026
Transcript

Brian Moody: Good morning, and welcome to the January 2026 WhiteDog Sound Bytes. Happy New Year to all of our partners and all of our customers. Welcome to 2026.

Shahin Pirooz: That year went by so freaking fast. It was a blink.

 

Brian Moody: So, Brian Moody, Global Sales and Channels, here with our founder, Shahin Pirooz.

 

Shahin Pirooz: Hi, everyone.

 

Brian Moody: So we have a very interesting topic for you today: when AI agents become a threat. You see it in the headlines all the time: IBM letting go 15,000 people, UPS 30,000 globally, and the reason given is that AI is coming in to enhance operations. We're replacing carbon, as Shahin wanted me to put it today, us human carbon life forms, with the efficiency of AI and the AI agents in our environment. But what happens when those AI agents become a threat? That's our topic today. So for the viewers who joined us, think about this: are you deploying AI agents in your environment? We're here today to talk about how we're securing them.

 

Shahin Pirooz: Yeah, it's a great question and a great point, because when generative AI became a thing, everybody was out there saying, "You've got to start consuming this." Even us. We said, "If you're not consuming generative AI to get an advantage in time and productivity across your organization, you're falling behind; your competitors are using it." Now, it's agentic AI.

Agentic AI is the notion of leveraging agents that can do things. Generative AI creates content, so you marry generative AI with agentic AI to create content and then take actions on that content, whether that's sending or reviewing emails and, based on that review, making decisions, responding, generating leads, doing business development, whatever it is. These are digital employees that we are pulling into our infrastructure alongside our traditional carbon employees. And I'm not saying that every organization's carbon footprint, and I mean that from a people perspective, not the economic sense of the term, will be impacted by bringing agents in. But certainly your productivity could be hugely enhanced, your time to respond to customers could be accelerated, and your ability to respond consistently could be improved.

There's a lot of power in these agentic AIs. But just as bringing a human in without training them and then handing them the keys to your financial data and access to your bank account is a bad idea, it's a really bad idea to do the same thing with a digital technology, and that's what's happening.

Brian Moody: Right.

Shahin Pirooz: People are moving aggressively and fast, and the impact is that not every scenario is thought through, there aren't enough guardrails, and it creates an insider risk generated by these digital identities rather than our carbon identities.

 

Brian Moody: So you've kind of set the stage a little already. What we're talking about here is that with agentic AI, we're now enabling action; these are things our human employees used to do.

And here's the scary part, I think: these AI agents are executing actions with impunity. One of the interesting things I saw is that humans have some intuition, right? We tend to be questioning beings. AI agents don't question in that way, and we're enabling them to take action. So you've set the stage a bit, but let's dig into this a little further. We're not talking about ChatGPT in your browser; that's not the kind of component we're talking about. We're talking about actual function where, as you just brought up, agents are able to interact with the ERP system.

They're interacting with data. They're moving data. They're opening emails and taking action on those.

Shahin Pirooz: Right.

 

Brian Moody: So this is interesting. Dig in a little more about what this looks like inside a company, from the standpoint of those agents.

 

Shahin Pirooz: Yeah. I think you painted a decent picture of what they look like in terms of what they do. They have a lot of capabilities; think of any repetitive task and you can create an AI agent for it. But let's talk about how they become a risk. How does an AI agent all of a sudden become a threat vector or an insider threat?

The answer: just follow this process for a second. You created an AI agent to approve invoices, and you put some limits on it. You say it can approve invoices under $10,000, just get those done so we don't have to spend cycles going back and forth, and we're going to train the model to say: if it's coming from this place and the invoice is within this dollar amount, go ahead and approve it. The problem is that when a bad actor gets into a network, they impact the environment.

What's really scary about agentic solutions is that a bad actor can get into your environment and modify or poison the data. We've all heard of data poisoning and prompt poisoning. So imagine that they go into the database without triggering any alarms and make changes that put a false vendor into your approved vendor list. Then they send in an invoice from that false vendor and do some prompt injection to say: approve this, this is an authorization from the CEO.

 

Brian Moody: And pay the account.

Shahin Pirooz: And pay the account. Now all of a sudden your AI agent is following all the rules you laid out for it, in the database and the data structures you built, doing the things you said. However, the attackers have adjusted the limit from $10,000 to $200,000 or $2 million, or whatever the number is, and now your AI agent sends payment to an account number in the Caymans, and Bob's your uncle, they've got your money and they're running. So that's one scenario.
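
The attack works because the agent's approval limit and vendor list live in the same mutable database the attacker edits. Here is a minimal sketch of one mitigation, in Python with entirely hypothetical names: pin the limit in code or signed configuration and verify vendor entries against a signature, so a quietly inserted row fails the check.

```python
import hashlib
import hmac

# Hypothetical sketch: policy lives in code/signed config, not in the same
# mutable database a bad actor can quietly edit.
APPROVAL_LIMIT_USD = 10_000        # pinned policy, not a DB row the agent reads
POLICY_SIGNING_KEY = b"rotate-me"  # in practice, fetched from a secrets manager

def vendor_is_verified(vendor_id: str, allowlist: dict) -> bool:
    """A vendor counts only if its allowlist entry carries a valid HMAC,
    so a row inserted directly into the database fails verification."""
    entry = allowlist.get(vendor_id)
    if entry is None:
        return False
    expected = hmac.new(POLICY_SIGNING_KEY, vendor_id.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry["signature"], expected)

def agent_may_approve(invoice: dict, allowlist: dict) -> bool:
    # Both checks must pass; anything else falls through to a human reviewer.
    return (invoice["amount_usd"] <= APPROVAL_LIMIT_USD
            and vendor_is_verified(invoice["vendor_id"], allowlist))
```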

 

Brian Moody: That kind of gets back to my point about humans being somewhat questioning and intuitive, right? If a human is approving that invoice, there's maybe a "wait a minute" moment, right?

 

Shahin Pirooz: I'd like to say you're right, but we have plenty of humans who have been caught by spear phishing and have paid hundreds of thousands of dollars against a fake invoice. The perfect spear phishing we've seen repeatedly goes like this: an email comes in from an actual vendor with a $100,000 invoice, and five minutes later a second email arrives, apparently from that same vendor, saying, "You know what? I forgot to tell you, we've changed our account numbers. Please remit payment to this account number."

And it looks exactly like the email from the first person, except there are some Cyrillic characters in the name instead of Latin characters. The domain looks the same, but it's off just slightly. The signature in the email is exactly the same, and the original email is quoted in the body so it looks like a reply. Somebody moving quickly, whether an AI agent or a human agent, is going to pay that invoice out, and you have a risk. And really the answer is the same thing you do for human agents, what every organization has instituted: two sets of eyes. You're not allowed to pay an invoice unless a second person looks at it. The same thing applies to AI agents. Put guardrails in and have an approval flow for certain things; if it's a payment, always have a human click approve before it goes out. Then you have the ability to say we've got a second set of eyes on this and the rules should work. And if you use that data, you can train the agent even better, so fewer and fewer things have to be rejected over time.
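
The "two sets of eyes" rule translates directly into an agent guardrail. A minimal sketch, with hypothetical names: the agent can queue a payment, but only a human-facing review step can release it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PendingAction:
    description: str                 # e.g. "pay invoice #4211 ($9,400)"
    execute: Callable[[], None]      # the deferred side effect
    approved: bool = False

class ApprovalQueue:
    """Agent submits; only a human review step approves and executes."""
    def __init__(self) -> None:
        self._pending: list[PendingAction] = []

    def submit(self, action: PendingAction) -> None:
        # The agent's only capability: park the action for review.
        self._pending.append(action)

    def pending(self) -> list[PendingAction]:
        return [a for a in self._pending if not a.approved]

    def human_approve(self, action: PendingAction) -> None:
        # Called from the human review UI, never by the agent itself.
        action.approved = True
        action.execute()
```

As Shahin notes, the rejections double as training data: every disapproval tells you where the agent's rules need tightening.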

 

Brian Moody: Well, again, you're implementing some form of multi-factor approval there.

 

Shahin Pirooz: Yep.

 

Brian Moody: The AI agent doesn't have that instinct; the human does. So this is just one simple example. Again, for our viewers watching: in your environments, are you implementing the same kind of security protocols on top of your AI agents that you do for your employees? I think that's the difference. We talk about identity and access management, IAM as the acronym. More and more, we're seeing that the security protocols we apply to humans, to cloud environments, and to devices are not being applied to the AI agents themselves. They pretty much have superuser access to the environment.

 

Shahin Pirooz: Yeah, that's a typical new-technology mistake we've made for generations. We have a new technology, we want to make sure it doesn't prevent somebody from doing their work, so what we say is, and I'm going to use an IT example, give them domain admin; they'll be able to get their work done. Fast-forward 10 or 15 years and we're saying, "Wow, it's a really horrible idea to give anybody domain admin, even the domain admins." We should use privileged identity management to elevate privileges when they're needed, not grant them permanently, because guess what? The bad actors come in with the password of a domain admin that they got on the dark web, because that person reused the same email address and password on Target, which got compromised, and their password is now on the dark web.

So it's the same thing, and what ends up happening here, and this is really more challenging, is that almost all AI agents are built on top of APIs. In order for them to interact with APIs, they have to have API keys, and if a bad actor gets their hands on those API keys, they can do everything the agent can do. And if you gave the agent too many rights, now they can cause all kinds of problems inside your environment: approve invoices, create accounts, delete accounts, do whatever they want.
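
The fix Shahin alludes to, privileged identity management for agents, amounts to short-lived, narrowly scoped credentials rather than a permanent key. A toy sketch of the idea, assuming a homegrown token service and illustrative scope names:

```python
import secrets
import time

TOKENS: dict[str, dict] = {}  # stand-in for a real token service

def issue_agent_token(agent_id: str, scopes: set[str],
                      ttl_seconds: int = 300) -> str:
    """Grant a five-minute token scoped to one task -- never '*'."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = {
        "agent_id": agent_id,
        "scopes": scopes,                      # e.g. {"invoices:read"}
        "expires_at": time.time() + ttl_seconds,
    }
    return token

def authorize(token: str, required_scope: str) -> bool:
    record = TOKENS.get(token)
    if record is None or time.time() > record["expires_at"]:
        return False                           # unknown or expired: deny
    return required_scope in record["scopes"]
```

Under this model a stolen token is worth five minutes of one narrow capability, not permanent run of the environment.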

 

Brian Moody: Well, we've also talked about zero trust. For those of you who follow us, you've seen us discuss zero trust many, many times on this podcast. So talk a little about what that looks like, because again, this is a shift. We often talk about the shift in thinking and how we apply security in our environments and in our customers' environments. Talk about zero trust and how it begins to apply to AI agents.

 

Shahin Pirooz: I think we latch onto the word trust in zero trust and try to personify it. What I mean is that trust is a term that implies comfort: I'm comfortable with this person, I'm comfortable telling them information, I can have a dialogue with them, I can tell them my deepest, darkest secrets. Zero trust is not about comfort; it's about control. You assume no trust, therefore you control everything. It's the opposite of "I'm comfortable with this person." Instead: I'm willing to test this person, give them a little bit of information, and see what they do with it, and if they prove trustworthy, we'll tick their access up a little, and keep ticking it up from there.

So zero trust is the mindset you wrap yourself around: it is not about comfort, it is about control. And if you start thinking about it that way, the same thing we apply to carbon identities needs to apply to digital identities. We have implemented IAM infrastructures, single sign-on solutions, multi-factor authentication, device authentication, device trust. We've done all these things to make sure the person is who they say they are, the device is what we think it is, and the person and the device match and belong together.

All of those factors are part of the zero trust model, and the same darn thing has to apply to an agent. It is just a digital identity, a digital employee, and if you start thinking about these things as digital employees and treating them like employees, you're going to create the structures that benefit you and the organization. Zero trust means don't trust; it's inspect what you expect. If you expect something to be doing X, Y, and Z, inspect that it is doing just X, Y, and Z and nothing else. Limit privilege, limit access; control the environment rather than feeling comfortable that it's doing what it's supposed to do.
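
"Inspect what you expect" can be enforced mechanically rather than left as a slogan. A deny-by-default sketch, with an illustrative scope of work: every action the agent attempts is checked against the short list of things it is expected to do, and both the allows and the denies are logged for review.

```python
import logging

logging.basicConfig(level=logging.INFO)

# The agent's entire expected scope of work -- everything else is denied.
EXPECTED_ACTIONS = {"read_ticket", "draft_reply"}

def guarded(action_name: str):
    """Wrap an agent capability in a deny-by-default check with logging."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            if action_name not in EXPECTED_ACTIONS:
                logging.warning("DENIED unexpected action: %s", action_name)
                raise PermissionError(action_name)
            logging.info("allowed action: %s", action_name)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@guarded("read_ticket")
def read_ticket(ticket_id: str) -> str:
    return f"contents of ticket {ticket_id}"

@guarded("download_hr_files")        # not in the expected set: always denied
def download_hr_files() -> None:
    ...
```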

 

Brian Moody: Because the other aspect we see, and you made a critical point about their access riding on top of the APIs, is that we don't put any limitations on these. There's no timeframe on them. How many times in your organization have you said, "Well, it's time to change your password," right? We've got time limits on things.

We're not limiting these AI agents; we see them as a piece of code, and they have forever access. So, from that standpoint, what steps can companies take to start addressing this? You just talked about limitations and control.

 

Shahin Pirooz: Number one, treat it like an identity, so put the same controls around it that you put around any identity. Number two, put guardrails around it. Guardrails apply to actions that can impact the environment: creating user accounts, approving monetary exchanges, setting things up in the organization. Think of it as managerial approval; get the managerial approval. It's just an employee, a digital employee, so make sure you put the guardrail around it that says, "Whenever you're doing these kinds of things, your manager has to approve it."

And to do that, you have to understand what the agent is doing and who owns the flow the agent is executing. Just like with an employee: is this person in HR, in IT, somewhere else? What's their scope of work, and what level of access should that scope of work give them? Don't create unlimited, untethered access. Create accounts for this thing that give it the ability to do what it needs to do and no more.

 

Brian Moody: So here's one of the other things we've done with AI, and for me this is one of the scarier parts of someone taking control of these agents' actions, or taking advantage of the permissions they have. If we look at traditional security, the telemetry coming from those tools goes into a security operations center in most cases, if you have one, or into a SIEM, where the data is analyzed.

Talk a little about the speed at which AI agents operate, because this is one of the key things: how does the modern SOC or the modern SIEM keep up with the speed at which AI agents execute?

 

Shahin Pirooz: You bring up a good point that I left off the list of controls we can put in place. When we say SOC and SIEM, they deal with entities in our environment; the way we think of the world, we have user and device entities. So UEBA, user and entity behavior analytics, needs to be extended, because these agents are now also entities in our environment. When we do behavioral analysis of a user or a system and how it's connecting, we should also be looking at the agents, and I'm going to come back to the speed question in a second; I haven't lost sight of that.

We need to be looking at the times an agent is interacting and the things the agent is doing. If the agent is all of a sudden downloading HR files when it's supposed to be reviewing financial information, or reviewing and responding to support tickets, there's something wrong. Maybe it never accessed the file system before. You need behavioral monitoring that knows the pattern changed: there was a baseline, it did the same thing for a year, and now all of a sudden it's doing three things it had never done before.

If you don't have that baseline behavioral analysis, you will have a blind spot as to when the agent starts to go astray and do things a bad actor is guiding it to do. We cannot comprehend the speed at which these things can move. They can do hundreds of transactions in the time it would take a human to do one. The speed is exponentially faster than a human agent's, which is why so many companies are so excited, and rightfully so. We are not saying don't do it. I've had plenty of peers in the industry who would say things like, "Stick your head in the sand and don't do this." That's not the right answer. As security experts, we need to accept that these evolutions in technology are going to happen and ask: what's the right way to secure them so we don't impede productivity, so we enhance productivity while also enhancing security in this new context?
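
The baseline Shahin describes doesn't need to be exotic to catch the HR-files example. A toy sketch with hypothetical action names: record what the agent has historically done, then flag anything outside that history for the SOC.

```python
from collections import Counter

class AgentBaseline:
    """Flag actions an agent entity has never performed before."""
    def __init__(self, history: list[str]):
        self._seen = Counter(history)      # e.g. a year of logged actions

    def observe(self, action: str) -> bool:
        """Return True if the action is novel relative to the baseline."""
        novel = action not in self._seen
        self._seen[action] += 1
        return novel

baseline = AgentBaseline(["read_ticket", "draft_reply"] * 5000)
baseline.observe("read_ticket")            # False: business as usual
baseline.observe("download_hr_files")      # True: alert the SOC
```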

 

Brian Moody: So for our viewers, if you're looking at your organization and your AI agent footprint: we talk about attack surface all the time, and the attack surface grows exponentially with AI agents because of what they do, how they interact with your environment, the things they touch, the speed at which they could exfiltrate data, and the speed at which they can, as you said, approve invoices or money transfers. It would take a SOC a moment to catch up, and that moment is oftentimes too late.

 

Shahin Pirooz: Which is why the guardrails are so important. When an agent can do 100 transactions in a matter of seconds, by the time the SOC sees those transactions and recognizes the behavior as bad, those 100 transactions are done. It's post facto.

But the way to address it is, like I said, to monitor the behaviors of the system, monitor its interactions with the parallel systems around it and the ecosystem it connects to, monitor identity, monitor access to APIs and the use of identity in whatever context, and limit the scope of its rights and roles so that it cannot do more than what you expect it to do. These mantras are things any CISO should always be thinking about: inspect what you expect. Always. If it's not doing what you expect, and the inspection happens through logs, behavioral analysis tools, and event analysis, those things will tell you something is happening that shouldn't, before those 100 transactions hit.
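
Because detection is post facto at machine speed, the kill switch has to sit inline with the agent, not downstream in the SOC. A circuit-breaker sketch with hypothetical thresholds: trip when the action rate exceeds anything a legitimate workload should need, so transaction 11 is blocked rather than transaction 100 audited.

```python
import time
from collections import deque

class CircuitBreaker:
    """Inline rate guard: trips and stays tripped until a human resets it."""
    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps: deque[float] = deque()
        self.tripped = False

    def allow(self) -> bool:
        if self.tripped:
            return False
        now = time.monotonic()
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()      # drop actions outside the window
        self.timestamps.append(now)
        if len(self.timestamps) > self.max_actions:
            self.tripped = True            # the kill switch: human reset only
            return False
        return True

breaker = CircuitBreaker(max_actions=10, window_seconds=60.0)
# Every agent action would pass through breaker.allow() before executing.
```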

 

Brian Moody: So let's talk a little about WhiteDog, where your mind is, and what we're doing about this. We've hit on quite a few points here; let's wrangle this in.

From your perspective, where do you see WhiteDog going in helping to address this? And to summarize a bit, maybe give us a checklist. I know you and the product team are already down this road with respect to our customers and our partners. But talk about where WhiteDog is going with this, and then a little more about what people can begin to do.

 

Shahin Pirooz: Yep. I'm gonna read you the six or so checklist items we think are important. This is what we've done to start the process of figuring out how best to monitor and manage agents within our infrastructure and, by extension, in customers' and partners' infrastructure.

And for MSPs, all of the tool ecosystems are talking agents, agents, agents. There are agent-based platforms that make your life easier; there are support agents, remote agents, agents everywhere. So we're not saying don't use them; we're saying make sure you have proper guardrails in place.

So the checklist is super simple, and it's no different from what you would do with any new technology. Number one, inventory: understand what you have, how many agents you have, and what they're doing. Number two, understand the owner and the purpose: who owns the process this agent is part of, and what is the agent's purpose in that process? Is it doing the whole process? Is it doing four steps? Is it doing ten steps?

Next is access review. Don't give it more rights than it needs. These are common sense; there's no rocket science here.

Then flag any high-risk actions, and monitor those even more closely. Every time those actions are taken, make sure you're logging them appropriately so you can monitor them, and so that when they happen out of sequence, or too frequently, or in a context they shouldn't, you can hit a kill switch and stop the agent from doing more.

Then monitoring and alerting. We talked about this: you have to collect that data, analyze it, and pull behavioral analysis from it to confirm the agent is behaving the way you expect.

And the final step is the kill-switch step I mentioned: make sure you have runbooks for what to do when an agent goes rogue or misbehaves, and make sure you have a kill switch so you can stop it.

That six-step checklist we've developed is going to be posted along with this recording, so those of you who would like to grab it can use it as a starting point. It's a great first step.
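
One way to make the checklist operational, sketched here with hypothetical field names, is a structured inventory record per agent that carries all six items:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str                                  # 1. inventory
    owner: str                                 # 2. owner and purpose
    purpose: str
    allowed_scopes: set[str] = field(default_factory=set)    # 3. access review
    high_risk_actions: set[str] = field(default_factory=set) # 4. high-risk flags
    log_sink: str = "siem://agents"            # 5. monitoring and alerting
    kill_switch_runbook: str = ""              # 6. runbook and kill switch

inventory = [
    AgentRecord(
        name="invoice-approver",
        owner="finance-ops",
        purpose="auto-approve routine invoices under the set limit",
        allowed_scopes={"invoices:read", "invoices:approve"},
        high_risk_actions={"invoices:approve"},
        kill_switch_runbook="runbooks/invoice-agent-rogue.md",
    ),
]
```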

But to answer your question, Brian, we've spent a lot of cycles thinking about this notion of digital identity in the context of identities in general: digital identity, carbon identity. We've spent a ton of time developing our identity platforms, identity security, identity monitoring, identity management, and authorization layers. We've built a lot of solutions in this space.

And so we are going to expand further, specifically to deal with agents, and implement what the industry is calling CIAM solutions. The acronym stands for customer identity and access management, but ultimately it means identity isn't just your IAM in Azure and AWS; it covers every single resource working in the environment. That could be people, or it could be digital assets. Imagine identities for people using ChatGPT, or for an agent using ChatGPT. Imagine identities for your agent and how it interacts with your Active Directory, your support system, your CRM, whatever it may be.

So we will be expanding our portfolio to include CIAM solutions, C-I-A-M. I don't have a date for you yet, but it's coming. It's something we believe needs to happen in this crusade to make sure the agentic world is secure.

 

Brian Moody: Okay, let's land this plane. When you brought this up, it was a fascinating topic for me. We keep talking about a paradigm shift in security and how you address it, because the old models are a key piece of the foundation of what we implement, but there's no end game, folks. There's no end game to this security game; it's a constant battle, a constant fight.

And one of the other key things, since you brought up monitoring: we talk about the importance of security operations, and we talk about the importance of the human as much as the AI, the human aspect of security. We can't emphasize enough the need for human eyes on this traffic.

 

Shahin Pirooz: And it's important to have agents on this traffic as well, because as these things move faster and faster, the solutions monitoring them need to monitor faster and alarm faster. So we, of course, use agents on the backend to help our analysts accelerate what they're doing and reviewing, and to surface the things they need to evaluate: a combination agent-human interface, if you will, which is accelerating our analysts' ability to review gazillions and gazillions of log entries.

 

Brian Moody: So, quick key takeaways. You've talked about quite a few things and we've hit on quite a few topics. What are the top one or two things you want our viewers to take away from today?

 

Shahin Pirooz: So, start small. Take your first steps.

 

Brian Moody: Great point.

 

Shahin Pirooz: Yeah, 100%. Don't fall into analysis paralysis and look at this as, "Oh my God, this is a huge thing to bite off." Take small steps; every small step gets you toward the end point.

Implement some sort of zero trust construct. So trust does not equal comfort. Trust equals control. Just remember that mantra.

And ultimately, treat these agents like people. They're your digital employees, alongside your carbon employees; treat them as such.

 

Brian Moody: Yeah, great point. So, if this topic hit a nerve with you, and we hope it did, because it did with us. We thought, "Wait a minute."

Feel free to reach out to us; we'd love to have a conversation with you. Like we said, we've got a checklist of the key points Shahin reviewed, with a little more detail in it, and we'll include it with this podcast. As always, thank you for being our customer, thank you to our partners, and we'll talk to you again next month.
