How AI & Automation are Reshaping Cybersecurity for MSPs

AI is transforming cybersecurity—bringing both new defenses and new dangers. In this session, we break down how AI is reshaping the threat landscape and what MSPs can do to scale protection.

posted on
September 10, 2025
Transcript

Brian Moody: So today we're taking on the topic of AI, and truly, the challenges that AI is bringing to cybersecurity and, frankly, how it's reshaping it. We're going to walk through four key areas today that we think are important to our MSPs, to our partners, and to customers. What are the challenges of AI? What types of threats are really changing the way we respond and how security responds?

We're going to talk about implementing automated detection and response capabilities that allow us to effectively respond to this kind of new AI approach. And then for our MSPs and for our partners that are looking at building a business, how can they scale their business? What can they do to respond to this? And then selfishly, I'm in sales, how does WhiteDog help and how do we kind of address this?

 

Shahin Pirooz: So, when we were leading up to this Sound Byte, I was having a conversation with Kirstin, our CMO, and she basically said, "Haven't we talked enough about AI?"

And I said, "The entire world is overflowing with AI." It's tip of the tongue, tip of the ear, right there hanging in the atmosphere. From a marketing perspective it's become as bad as "cloud" was, where you couldn't tell if the cloud was infrastructure inside your own data center, in somebody else's data center, or in a co-location facility. The word cloud became ubiquitous across everything. And it made everything a little cloudy... meatballs.

So part of why we wanted to talk about this one more time is that we've spent a bunch of time talking about the power of AI as it relates to productivity: improving workflows for your employees and helping you outpace your competitors. We've also spent a little bit of time on AI-powered attacks, because that same level of productivity is being taken advantage of by the bad actors.

So the bad actors aren't sitting around waiting for us to get good with AI. As usual, they're ahead of us. They're engineers just like us, and they're taking the power of an augmented toolset that helps them develop attacks faster, build attacks that are better at defense evasion, and create content that reads like the native language of the country they're attacking. Those are all challenges that we as security professionals now have to protect against at a faster pace, because the attacks are coming at a faster pace than before.

Brian Moody: Speed and scale.

Shahin Pirooz: 100%.

Brian Moody: Two quick words that I think describe most of what AI is doing. On the protection side, we are implementing and using AI, but as you just said, the cybercriminals are taking advantage of that same technology to increase the speed, capability, and scale at which they can attack folks. If you think about the scale, in simple terms, the ability for AI to scrape information today is dramatic.

For individuals and for companies, think about a targeted CEO: an attacker can scrape his social profile, his business profile, LinkedIn, corporate announcements, and with that information the attacks become so targeted and so defined that it's really difficult to see that it is, in fact, a fake email or a fake approach. It could appear to come from a customer, a social connection, what have you. It's so detailed now that it's really tough to see that it's not what it appears to be.

 

Shahin Pirooz: Well, it's even gotten worse. That's part of the first category you hinted at: what types of AI attacks are coming at us. One of the top ones that really hit this year, in '25, is deepfakes. Think about how many recordings of me are out there from these Sound Bytes: video, the tone of my voice, the way I speak, how I use words. And there are a lot of words out there; as we mentioned up front, I like to talk. An attacker can take that video content, and it doesn't take as many frames as we have out there. It takes only a handful of frames, a handful of videos from your company picnic or your company announcement or your family retreat or whatever.

Finding those things is not difficult. It wasn't difficult before AI, and it's even less difficult today. Generative research tools can go grab that data and bring it back, and now attackers can create deepfake videos: you, talking in your own voice, having a conversation with one of your employees to say, "Hey, I'm in Barbados. I got stuck. My work credit card's not working. Please transfer these funds."

So, it's challenging, like you said, to identify and pick those things apart. Really, it has to be a mix of human process and the second point you made, which is automation.

Brian Moody: Well, we're adding this on top of bot technology. And as we know, bot technology is not new by any means; we use it across all different types of platforms. But now, add this advanced AI capability to an automated bot, to a chatbot, and it can have real-time conversations with literally real-time responses. There's no lag.

And then think about the dynamic language capability. If you think about phishing, we don't laugh at phishing emails anymore. When they first came, they were so bad we used to chuckle.

Shahin Pirooz: Yeah.

Brian Moody: 'Cause it was really obvious that they were fake. "You can get your million dollars," and it's spelled wrong. We chuckled. Now there are no grammatical errors. It is dialed in.

Shahin Pirooz: And the graphics look like the company they're trying to impersonate.

Brian Moody: They're perfect.

Shahin Pirooz: Everything looks really good.

 

Brian Moody: But this bot technology has the ability to have real-time conversations now. Now add the deepfake piece you just brought in: that voice is your CEO's voice, or your CFO's voice, and you're having a real-time conversation with someone you think is that individual.

 

Shahin Pirooz: There was enough awareness of deepfakes in the personal context: if you get a call from your child or your spouse saying they've been kidnapped and to send money right away, and it's even coming from their own phone number, don't immediately jump to conclusions. Do a little investigation: open up Find My and see where they are, text them, call them, keep the person on the phone and ask questions. Those are all things we heard about, and they were scary, because obviously if my daughter calls me saying, "They've got me. Send money," I'm going to freak out.

 

Brian Moody: I had my own mother get hit with this attack.

 

Shahin Pirooz: Yeah.  

But, we don't spend a ton of time talking about the corporate implication of this, which is what Brian just hinted at. When you have the CEO or CFO of a company calling from their own phone number, saying, "Transfer funds to this account, because my credit card doesn't work in this country," how many finance employees are going to take the time to double-check that?  

So that's where the human process comes in: second sets of eyes. And this has to come from the executives down. You need to tell your finance team that you're never going to ask that question, and that if that question ever comes, to double-check it, have triple checks, and have your secret word, if you will, that lets you know it's a real request.

But it's so critical to have process in place that anticipates somebody taking advantage of these things, as opposed to it happening and then losing hundreds of thousands of dollars that you're not going to recover.

 

Brian Moody: So, tying these together, in the corporate world and even in the personal world, these become social attacks. We've seen so many; MGM is probably the biggest one in the last two years, and it was purely a social attack, right? I think the next challenge around AI, again speed and scale, is that hackers used to have to develop this code themselves. They had to write it. They had to investigate how to get around a tool, how to get into an environment, how to execute processes against operating systems or applications. With AI's speed and scale, it's done in milliseconds now.

 

Shahin Pirooz: You literally get on and say, "Write me a script that gets around an EDR tool's anti-tampering feature."

 

Brian Moody: Yep. And this is the scary piece: they're now generating polymorphic code that is almost sentient, in the sense that it knows it just got scanned by a security tool. The code can detect that, go into a benign mode, and hide itself from these tools.

 

Shahin Pirooz: And adjust its code base.

 

Brian Moody: And adjust its code base.

 

Shahin Pirooz: So the next time it runs it doesn't look the same.

 

Brian Moody: You just took the words out of my mouth. I was going to say, the next time it executes or copies to the next machine, it's morphed itself into something a little different. So look back at traditional tools that rely on defined signatures or defined hashes: that code is morphing, and the signature's not the same anymore.
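To make that concrete, here is a toy sketch (illustrative only, not any vendor's actual detection logic) of why hash-based signatures fail against even a one-byte mutation, while the payload's behavior is unchanged:

```python
import hashlib

def signature(payload: bytes) -> str:
    """Traditional signature: a cryptographic hash of the file contents."""
    return hashlib.sha256(payload).hexdigest()

# Two functionally identical payloads; the second appends one padding byte,
# the kind of trivial rewrite polymorphic code applies to every copy.
original = b"malicious_routine_v1"
mutated = original + b"\x90"

known_bad = {signature(original)}  # the "signature database"

print(signature(original) in known_bad)  # True: the original variant is flagged
print(signature(mutated) in known_bad)   # False: the mutated copy sails through
```

Behavior-based detection sidesteps this by matching what the code does rather than what it hashes to.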

 

Shahin Pirooz: Polymorphism is not something new in code. Self-replicating code goes back to the early '70s, and polymorphic variants have been around for decades. So it's nothing new, but it's being used at a scale that is very difficult to defend against. Back then, we didn't have networks like we do today.

You didn't hit one machine and then, from there, hit millions of machines on the internet. You could only hit whatever machines were on the same closed network as that system, which back then was usually a lab or a school or something like that. Today, the impact is much larger, and writing this polymorphic code is so simple.

 

Brian Moody: Right.

 

Shahin Pirooz: It's like having a conversation with your own developer who's creating code for you. Add some testing and fine-tuning, and you've got a viable product.

 

Brian Moody: So, we're highlighting just a couple of these types of attacks and the dramatic impact that AI has on them, right? Because it truly is reshaping the cybersecurity attack landscape.

 

Shahin Pirooz: So, what's this mean for MSPs?

 

Brian Moody: Well, I think the key is how they then implement some level of security for their customers. What are they bringing to market as a service that's able to address this? And I think the challenge they're running into is that the traditional tools that have been their foundation, the lock, stock, and barrel of how they've been protecting customers, don't work anymore.

 

Shahin Pirooz: Well, they work in their silo, fundamentally.  

So the issue is, as I think we've discussed in several of these Sound Bytes, the need for multi-layered security, which means multi-tooled security. For an average medium-sized enterprise, you're looking at between 10 and 20 tools. And as an MSP going to your customers, you're effectively larger than mid-size because you're going across customers, so you now have 20 tools to manage to provide layered security across that environment: across email, identity, DNS, endpoint, network, and data. And it keeps expanding and expanding. The attack surface is getting bigger and bigger.

Those are the core critical layers you need to put energy into. But each tool in each category, and there may be two or three tools per category that you have to implement, because one's not enough, has its own AI built into it and its own response capabilities built into it, and no two of them talk to each other. Those AIs are siloed, restricted to the content and telemetry their own tool has. But there's a wealth of telemetry across those 20 tools, and there is not a single player in the market, besides WhiteDog, that has taken the telemetry across a platform of multiple enterprise-grade technologies, pulled it together, put deep models against it, and used that correlation of data across platforms to identify a threat faster than anybody else in the industry.

What that translates to is that the world has shifted to a platform-based approach to security, as opposed to a tool-based model for building your own security environment. Tools just don't cut it. A tool is not enough. And, as we just described, you need 20 tools at a minimum to get a proper security portfolio. But now you've got to implement those tools, configure them, fine-tune them on a regular basis, figure out how to take information from one to the other, and correlate the data across those tool silos. All of that takes a lot of time, resources, and development.

And I'll be honest: what MSPs do today is already a lot of work, without all of a sudden adding a whole security company on top of the company you're running.

 

Brian Moody: Right.

 

Shahin Pirooz: So this is, you know, why the market keeps saying partner with an MSSP, partner with a security provider. Because you want to focus on the things that have made you invaluable to your customers, not the things that have become table-stakes commodities, which security has.

 

Brian Moody: Absolutely. And you talk about that layered approach, but I used this analogy the other day in a talk that I gave: it's not just the layers, it's the defense in depth.

So, look at all the vectors you called out. The analogy I was using was a house: you can't just lock your front door and leave your back door open. You've got your windows, your garage door, different access points, and it's the same within an organization or an enterprise.

But think about the window the way you'd think about email or endpoint. The defense in depth is: on my window I have a latch, and I've latched it. That's a single layer. I might also have a pin in my window, so now that window's got two points of lock. And I can go get a wooden dowel and put it in the window track. So now I have, in a sense, defense in depth.

Talk a little bit about what we bring to market and how we do that across the WhiteDog platform. It's not just one tool performing a function; we've built layers within each function to create defense in depth.

 

Shahin Pirooz: That same email scenario is a good example. Look at the email category, which is the single largest threat vector besides identity. Brian likes to say that people aren't breaking in, they're logging in, because identity is involved in virtually every attack. But besides identity, email is the single largest vector for attacks.

93% of all attacks start with email. So imagine if you could take that 93% and reduce it to single digits: how much less noise, and how many fewer issues, you would have in identifying and detecting threats. So, what have we got in our email security stacks today? We've got security awareness training and phishing simulation, and some companies feel that, plus antivirus at the gateway, is sufficient. So let's talk about those three components.

Security awareness is great. It's important to make your people aware of what's going on, and there are a lot of compliance requirements around regularly training your people. So it's a great checkbox: we've done this, we are doing a good job of communicating with and training our staff.

But expecting that staff to be your front line of defense is a mistake, because every attack that lands got past an individual. All it takes is one click, no matter how many times they pass their security awareness training. And the attacks are getting better; they look more and more real, they don't look like phishing, and they're not easy to spot. So relying heavily on that is a critical mistake.

Phishing simulation teaches people what to look for, but again, it's still just a simulation, and it takes one click. How many of your people click on the simulations by mistake? If they do, they're probably going to click when the real thing comes through.

Gateway security. Now we've moved past the latch, and past the awareness training that tells you to latch the window. Let's put a pin in it: the pin is gateway email security.

There are 13 attack types that target email, and three of them are external attack types that gateways catch. The first is malware: signature-based antivirus identifying that this is a known bad file. The second is URL-based attacks, where somebody sends a link to a bad site, which can be identified and caught. The third is domain impersonation, where the sending domain doesn't match the company or person. Those attacks are fairly simple, and some gateway solutions go a couple of steps beyond that.

But there are 10 attack types that most gateway solutions don't touch, because all of them happen at the inbox, past the gateway. They're about how you log in to the inbox, what tool you're using to access it, and what country or geography you're logging in from. Are you behaving the way you behaved before? What's the sentiment analysis of the email and the way it's written? Is it the same person? These all require deeper knowledge and correlation.

This is what we call inbox detection and response. Moving past the pin to putting that piece of wood in the window, the next mechanism, is what we call post-gateway or advanced phishing protection, which covers all 13 attack vectors and works in conjunction with security awareness training, phishing simulation, and gateway-based security.
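As a rough illustration of the inbox-level signals described above (hypothetical rule and field names, not WhiteDog's product logic), a post-gateway check might compare each inbox login against a per-user behavioral baseline:

```python
from dataclasses import dataclass, field

@dataclass
class LoginEvent:
    user: str
    country: str   # geography the login came from
    client: str    # tool used to access the inbox

@dataclass
class Baseline:
    countries: set = field(default_factory=set)
    clients: set = field(default_factory=set)

def flag_anomalies(event: LoginEvent, baseline: Baseline) -> list[str]:
    """Return the behavioral signals this login deviates on."""
    flags = []
    if event.country not in baseline.countries:
        flags.append("unusual-geography")
    if event.client not in baseline.clients:
        flags.append("unusual-client")
    return flags

# A CFO who normally logs in from the US via Outlook
cfo_baseline = Baseline(countries={"US"}, clients={"Outlook"})

print(flag_anomalies(LoginEvent("cfo", "US", "Outlook"), cfo_baseline))
# []
print(flag_anomalies(LoginEvent("cfo", "NG", "IMAP-script"), cfo_baseline))
# ['unusual-geography', 'unusual-client']
```

Real inbox detection and response weighs many more signals (sentiment, send patterns, mailbox rules) and scores them rather than applying hard rules, but the baseline-versus-event shape is the same.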

We can go to this depth on every layer of the security model: for DNS, for example, firewall-based DNS isn't enough, and you can keep digging at each of these layers. But the idea here is that there are so many tools in a proper security stack that no one tool will ever solve this problem.

And so you have to figure out now, "Now that I've got all these tools, how am I going to staff it to be able to support all this? How am I going to train my people to stay on top of the new tools? How do I evaluate the new tools?"  

And that's where the automation side of this conversation starts to come in.

 

Brian Moody: Well, maybe even a little bit deeper, and you just brought it up: is this person behaving like they were? So now we're talking about behavioral heuristics, the behavior of the environment. And it's not just people's behavior, it's systems, how systems behave. We can gather data on what systems are talking to what systems, what data they're involved with, how the operating system is running, and what is running on it.

So we get into host intrusion detection: are those host files changing? Is there a behavioral change? Are we seeing those key components? Add network traffic analysis, and these pieces become forward-looking threat hunting. Which I think is a great segue into the nexus: how do you bring automation, threat hunting, and the proactive side of security to bear against AI? Because, again, the traditional tools do so much and they're important, as we've stated. But how do we bring those key components together? I think I know where you're going with this, but it's always great coming from you.

 

Shahin Pirooz: There are two sides to this. The attacks are coming so much faster that you have to be able to match pace, so the obvious answer is autonomous response. You need to automate the core, basic responses.

One of the biggest challenges with technologies out there is that they rely on a technologist or a practitioner to identify and build the correlation rules that say, when you see these things, behave this way. A lot of the manufacturers make a best effort at a starting point, but if you rely on their updates, those updates don't necessarily come as fast as the bad actors are putting out new attacks.

At WhiteDog, for example, our threat intelligence team is constantly collecting indicators of compromise: from incident responses we've done, from threats we've identified and stopped at a customer, and from new threats published on the internet by all the security entities communicating them to us from government and industry sources.

We take those things, determine what's real, and encode them across all of our platforms. Creating our own threat intelligence feed and pushing it to all of our tools lets us say: if we see this behavior anywhere in the layered, in-depth security model, trigger on it. Then the correlation engines across the platform say, yes, it's also happening here. We saw it in email, we see it in identity, we see it at the endpoint, we see it on the network.

And the biggest challenge in the industry today is that it takes a significant amount of effort to take the data across all those different layers and pull it together. That's precisely what WhiteDog has built, and that's precisely what we present to our analysts so they can do that deeper layer of investigation and say: is this real? Is this false? If this is a real threat, let's react.

And the only way we're able to take dwell time down from six months to six minutes is because of the tremendous amount of automation we built into the platform. There's no human that could move that fast.
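The cross-layer correlation idea can be sketched in a few lines (a deliberately simplified illustration; a production engine weighs far more context): escalate an indicator only when independent layers report it.

```python
from collections import defaultdict

# Hypothetical sightings: (security layer, indicator of compromise)
sightings = [
    ("email",    "bad-domain.example"),
    ("endpoint", "bad-domain.example"),
    ("network",  "bad-domain.example"),
    ("dns",      "cdn.example"),
]

def correlate(events, min_layers=2):
    """Escalate any IOC seen independently in `min_layers` or more layers."""
    layers_by_ioc = defaultdict(set)
    for layer, ioc in events:
        layers_by_ioc[ioc].add(layer)
    return {ioc: sorted(layers)
            for ioc, layers in layers_by_ioc.items()
            if len(layers) >= min_layers}

escalated = correlate(sightings)
print(escalated)
# {'bad-domain.example': ['email', 'endpoint', 'network']}
```

The single DNS sighting stays as low-priority noise, while the indicator confirmed across email, endpoint, and network is escalated immediately; doing this in code rather than by hand is what compresses dwell time.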

 

Brian Moody: So it's these keys, I think, from our perspective, that allow our MSPs to respond in minutes.

 

Shahin Pirooz: Yes.

 

Brian Moody: Versus hours or days for their customers.

 

Shahin Pirooz: And, knock on wood, we haven't had a customer on our multi-layered solutions be impacted in the eight years we've been operating as a company. We've certainly had companies for whom we've done only SIEM be impacted, where we don't control the other tools. We've certainly had companies on one tool where the threat came in from another vector or from a system that wasn't monitored, and we've done the incident responses for those and helped those customers recover quickly.

But when we've got the multi-layered DeltaDR, for example, deployed across a customer, we haven't been hit with those types of attacks, and we've been able to identify the bad actors in minutes, sometimes sub-minute, from the time we see some behavior. That correlation is the key, along with the automation around correlation and alert escalation. And everything is tied to user and entity behavior.

The problem is that most OEM technologies out there, while they have some sort of UEBA in each of their stacks, don't look across all of them. And while some SIEMs have the capability to do what I'm describing, it's not a natural, intrinsic behavior; it's something you have to develop and fine-tune. There's a lot of noise that comes from just pushing logs and alerts to a SIEM, and reducing that noise to meaningful information is the only way a SIEM is going to be valuable.

And in the context of automation, a SOAR is a big factor there: taking security events and optimizing them into actionable alerts, filing them as things to investigate, or calling them noise and squashing them down. That changes from minute to minute. It's not something you can set and forget. This is not the Maytag repairman. This is the duck's feet under the surface of the pond: everything looks still and beautiful on the surface, but the feet are going crazy under the water keeping everything going.
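That squash-or-escalate step can be sketched as follows (illustrative rule names and thresholds; as noted above, a real SOAR playbook is retuned constantly rather than set once):

```python
from collections import Counter

# Hypothetical raw event stream arriving at the SOAR
raw_alerts = [
    {"rule": "failed-login", "host": "srv1"},
    {"rule": "failed-login", "host": "srv1"},
    {"rule": "edr-tamper",   "host": "wks7"},
]

def triage(alerts, noisy_rules=frozenset({"failed-login"}), burst=5):
    """Squash known-noisy rules unless they burst; escalate everything else."""
    counts = Counter((a["rule"], a["host"]) for a in alerts)
    actionable = []
    for (rule, host), n in counts.items():
        if rule in noisy_rules and n < burst:
            continue  # routine noise: file it, don't page an analyst
        actionable.append({"rule": rule, "host": host, "count": n})
    return actionable

print(triage(raw_alerts))
# [{'rule': 'edr-tamper', 'host': 'wks7', 'count': 1}]
```

Two scattered failed logins get filed, while a single EDR-tamper alert goes straight to a human; the hard, ongoing work is deciding which rules belong in the noisy set and where the burst threshold sits this week.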

And that is why the industry is saying: as an MSP, partner with a security partner. I often argue that partnering with an MSSP that grew out of an MSP is a challenge, because that partner also does MSP services, so they potentially compete with you and are inside your customer. It's hard to bring the frenemy inside the gates.

So partner with one of the pure-play platform players that are channel-only and aren't going to compete with you in the market. That's really what we recommend. Take that with a grain of salt; we are slightly biased in that context, but we think it's the right approach.

 

Brian Moody: What? Us?

 

Shahin Pirooz: Slightly biased.

 

Brian Moody: So, we've talked about this speed and scale, right? We've touched on the dynamics of the attacks that are coming in and the importance of automated detection and response capability. What can MSPs do today to scale their business? How can they address the scale of implementing a platform, not tools, to handle the security and compliance challenges within their customers today?

 

Shahin Pirooz: Number one, I would say in the context of this AI thunderstorm, if you will, that's going on right now: leverage AI to improve performance for your services, even outside of security. There are a lot of great support-based tools out there that will take the ticketing platform you're using, take the content from that platform, and serve up quick answers to your analysts so you can respond to your customers. That's a great first step. How do I optimize our support so that I get more out of the investment I've made, get more productive, and respond faster?

Tie into that a partner that can integrate with your ticketing system and add content about incidents and how they were solved and addressed, and that AI platform becomes even more powerful for you. So find a partner like WhiteDog that has integrations to your ConnectWise, your ServiceNow, whatever ticketing system you're using, to pass tickets back and forth so that it feels like one team from your customer's perspective. It's transparent, and it's your SOC, not somebody else's SOC.

Beyond that, for you to go to market quickly and deliver security at a top-tier security player's level as an MSSP, you need a platform that is comprehensive: not just tools, not just pieces and parts. WhiteDog brings a very complex stack of enterprise-grade technologies.

We have about 60 technologies in the stack: 50 of those are enterprise commercial technologies and 10 are open-source, already pre-configured, already integrated, already fine-tuned. All it requires is provisioning your customers into those tools, which is why we can offer a 30-day onboarding guarantee for any customer you bring to the table.

 

Brian Moody: I think one of the other key aspects behind that is, and you talk about this all the time, you can set that platform up, but you need someone watching. There's so much the technology does: we have automated response and these key features, but there have to be human beings behind it watching 7 by 24. I love a statement you always make about how our threat analysts will look at something.

Technology says one thing, but it's the human piece that asks, "Well, what if I go here?" A lot of times it's that ingenuity, that question, because the technology isn't always going to ask it, and that leads to the threat hunting that actually produces results in our SOC. So the other aspect is that most organizations can't field a 7 by 24 security operation, let alone a distributed one. WhiteDog has fully operational 7 by 24 security operations centers in Cincinnati, Ohio, as well as here in San Jose, California. That, I think, is critical to our ability to analyze, threat hunt, and respond.

 

Shahin Pirooz: I also think... A lot of our partners have end customers with government associations, so having a U.S.-based SOC staffed by U.S. persons has become critical to many of our partners. A lot of players in the platform space are leveraging overseas resources for the analyst roles. It's more cost-effective, it's easier to scale, you can build up facilities quicker, but it doesn't meet regulatory requirements, and those requirements are coming down harder and harder now.

The definition of supply chain has changed. MSPs are now in the supply chain. If your customer is servicing a government entity, you are now in the supply chain and you're responsible for CMMC certification for your environment. So the minute you have an outsourced SOC in Costa Rica or Mexico or Singapore, you are effectively no longer able to deliver services to customers whose government contracts impose higher requirements. Having U.S.-based locations is critical, and having U.S. persons in those locations is critical: all of our analysts are U.S. persons.

And from a scale-out perspective, not only do you have to find the people, hire them, and train them, but the analyst role is a difficult one to retain and keep those people happy in. It's a hard job, a thankless job, and so it becomes a turnstile for folks trying to build out their own operations.

It is our responsibility to keep our people happy and to make sure we have a pipeline of resources so we can bring people in when we have churn. We've been fortunate to have great tenure and great resources, and we put them through our boot camp and our academy to get them up to speed. And I think that human factor is just as important, to your point.

I had a recruiting firm reach out to me to have a conversation about how I hire. And I said, "Well, it's simple. We go through an extensive recruitment process with panel interviews, then subject matter interviews, then leadership interviews. And when it comes to me, all I care about is one thing: is that analyst curious?"

Curiosity is the number one metric. There are others: I want somebody introspective, someone intuitive, someone innovative, and someone focused on teamwork who can interplay with the rest of the team. Not just a self-starter focused solely on what they do, an individual performer operating outside the team. We're not looking for that.

We're looking for a curious team player who is introspective, wants to keep growing, and has the intuition to say, "This feels funny. Now I'm going to use my curiosity."

 

Brian Moody: I'm going to ask a question.

 

Shahin Pirooz: And then lastly, innovative to say, "You know, we've solved the same problem 14, 15 times. Let's automate it."  

So that's what we look for. That's the critical factor that makes us, I think, very different: our people. While we embed AI in a lot of what we do, and all the stacks we use have AI embedded in them, I think you cannot get away from having deep subject matter expertise that validates what the AI tells you. Because, as we all know, there's a lot of hallucination in these generative AIs.
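[Editor's note: the "we've solved the same problem 14, 15 times, let's automate it" idea can be sketched as a simple triage rule that auto-resolves alert patterns analysts have repeatedly cleared, while still escalating anything critical. This is a minimal illustration only; the alert fields and signature names are invented, not part of any WhiteDog product.]

```python
# Minimal sketch: auto-resolve alert signatures that analysts have already
# cleared many times, and escalate everything else to a human.
# All signatures and fields below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    signature: str
    severity: str

# Patterns the team has manually investigated and cleared repeatedly.
KNOWN_BENIGN = {"scheduled-backup-spike", "patch-window-reboot"}

def triage(alerts):
    """Split alerts into auto-resolved ones and ones needing an analyst."""
    auto, human = [], []
    for a in alerts:
        if a.signature in KNOWN_BENIGN and a.severity != "critical":
            auto.append(a)   # safe to close automatically
        else:
            human.append(a)  # escalate: unknown pattern or critical severity
    return auto, human

alerts = [
    Alert("1", "scheduled-backup-spike", "low"),
    Alert("2", "credential-stuffing", "high"),
    Alert("3", "patch-window-reboot", "critical"),
]
auto, human = triage(alerts)
print([a.id for a in auto])   # prints ['1']
print([a.id for a in human])  # prints ['2', '3']
```

Note the severity guard: even a known-benign signature goes to a human when it is critical, which is the kind of judgment call the subject matter expert still has to own.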

 

Brian Moody: We've talked about how difficult it is. We're not telling anybody anything they don't already know, right? But I think cybersecurity is different. And you made an interesting comment: security has almost become table stakes now.

This isn't a decision of whether you should have it or not. Everyone has to have it. I think the decision now comes down for our partners and those wanting to deploy or sell or implement cybersecurity is whether you do it yourself or you partner with someone like WhiteDog.

 

Shahin Pirooz: Yep.

 

Brian Moody: And as we've seen with most things offered as a service, I would ask the question: why would you want to do it yourself at this point?

We've got eight years behind us building this company and a platform that delivers enterprise cybersecurity at economics you couldn't touch today. And the challenge keeps getting bigger; the bar to implement this gets higher and higher. I think we're ahead of the game, and that's what makes us such a great partner and enables you to deliver a platform that addresses these challenges.

 

Shahin Pirooz: Agree.  

The last question you asked in that last bit was about how you align with compliance. Compliance has become a couple of things, but the one most people lean toward is cyber insurance. Do we have cyber insurance? Are we cyber-insurance ready?

We just had an interesting engagement with a prospect that was hit with ransomware and the reaction was, "We're fine. We have cyber insurance."  

And that wasn't accurate. There was much more risk involved: data was exfiltrated, and that data posed a regulatory risk to the company.

But the first thing the cyber insurance carrier will do is bring in some well-known tool, you pick which one, and that tool provider will deploy it with the hope of selling it. Then they'll bring in a forensics team, and usually the two are tied together. The outcome, if you're an MSP, is that you just got displaced by a tool, and possibly by services from that OEM, and they're not going to care about your business with that customer.

As a WhiteDog partner, all you have to do is tell the cyber insurance firm, "We already have a firm that has these technologies in place. They will work with your forensics team. You don't need to bring in any tools."  

And we've got all the tools that they recommend, at the top tier of their recommendations, so that you don't have to worry about that. So compliance is a couple of things. One of them is readiness.  

The other side of it is, as these regulatory concerns and the supply chain constructs are changing, you need to have checkboxes on your readiness to support your customers. You need to be able to address their security assessments of you, and not only do we do this deep ecosystem of security technologies and cybersecurity as a service, but also, we do governance, risk, and compliance so that you can apply that to your environment.  

We do asset lifecycle management. There's a lot of functionality that goes beyond the core blocking and tackling of EDR, email security, and DNS security. There's much more to the WhiteDog platform that makes you a holistic security player.

 

Brian Moody: Well, I think it gets back to the feet paddling under the water. There's no end game to this, folks. There's no end game to cybersecurity. This isn't a case of "I've implemented the tools, I'm good."

So think about how dynamic your own environment is. Think about your customers: how much they change, how they grow their business. They add assets, capabilities, employees, resources. Their environment is constantly changing.

How do you keep up with that and secure it? There is no end game here. It's a constant, evaluative approach, and that's what we bring to the table with the layered approach: attack surface management, our ASM capability, where we're always evaluating the attack surface across identity, network, storage, email, and endpoint.

We're constantly evaluating the points of change: did something change? Did you add something? Did someone leave credentials at admin/admin? Did you do something in your business that leaves a door open? Without that constant evaluation, it's when, not if.

 

Shahin Pirooz: It's guesswork.

 

Brian Moody: It's guesswork.

 

Shahin Pirooz: Yeah. 100%.  

So, we were at a conference not too long ago, and I think I shared this in a previous Sound Byte. The panel before me suggested that you should make policies that say, "Don't let your people use AI."  

That's the ostrich approach to dealing with what's in front of us. And it's not really a problem; it's a challenge. Putting your head in the sand doesn't make it go away. It just makes it so you can't see it.

And I promise you that your staff, your people are taking advantage of it in ways you're not aware of, so become AI-literate, meaning understand generative AI. Understand agentic AI. Use agents to be able to automate some of the workflow we just talked about. SOARs alone aren't enough.  

You need to create customized solutions that align with how you want to go to market. Use agents that can interact with your partners and OEMs, and use partners with API-first methodologies that let you do that. Take control of what you integrate into the ecosystem you take to customers. At the end of the day, it's your name and your reputation on the front line, saying, "I picked partners and technologies that I feel comfortable saying will help you stay secure."

And in a similar way, our name is on the front line right behind yours: we spend all of our time on research and development to make sure the partners we pick are doing exactly what we say they are, to you, in all of our collateral.

 

Brian Moody: So I'm not going to top that. I'm going to close with that.  

Thank you for joining us today. We'll be back again next month for October Sound Bytes, but appreciate you taking time to join us today to talk about AI. And as always, if there's anything we can do for you, you know where to find the pack.

Let's talk!

We've Got a Shared Goal: To Secure Your Customers