Compliance Unfiltered is TCT’s tell-it-like-it-is podcast, dedicated to making compliance suck less. It’s a fresh, raw, uncut alternative for anyone who needs honest, reliable compliance expertise with a sprinkling of personality.
Show Notes: How and Why to Vet Vendor AI Software Use for Security Risks
Quick Take
On this week’s Compliance Unfiltered, unlock the hidden risks driving AI security nightmares, and learn how proactive vendor vetting can save your organization from irreversible breaches.
As AI integration accelerates across industries, many organizations are blindly rushing in, unaware of the lurking dangers that could compromise sensitive data and even their reputation.
The CU Guys expose the critical gaps in vendor vetting practices and offer a clear roadmap to protect your business in the age of AI.
Read The Transcript
So let’s face it, managing compliance sucks. It’s complicated, it’s so hard to keep organized, and it requires a ton of expertise in order to survive the entire process.
Welcome to Compliance Unfiltered, a podcast dedicated to making compliance suck less. Now, here’s your host, Todd Coshow with Adam Goslin.
Well, welcome in to another edition of Compliance Unfiltered. I’m Todd Coshow, alongside the lemonade in your compliance Arnold Palmer, Mr. Adam Goslin. How the heck are you, sir?
I am doing just fantabulous today. How about you, Todd?
Cannot complain. Today we get a chance to chat about a hot-button topic right now, specifically how and why to vet vendor software for AI use and for security risks. Now, everybody’s looking for a better, more efficient way to do everything.
And in 2026, that usually includes AI. Why is this a topic that should be at the forefront of everybody’s mind?
Well, the advent of AI has fairly quickly taken a front-row seat in a lot of organizations, in just about every industry. Third parties are integrating AI engines and chatbots into basic subscription packages, whether you want it or not. AI is getting packaged into office products and search engines, and employees are using them without even a consideration for any of the potential security implications. As AI permeates the workspace, it’s not going away anytime soon, and that poses an issue for organizations.
AI presents some unique security challenges that most organizations aren’t fully prepared to address. Beyond that, depending on the organization’s security stance, you need to consider locking down software so that your folks can’t inadvertently share inappropriate data with third parties. So you need a couple of different elements to bring into play, a framework, if you will, including your AI policy, your approved software list, and your vendor vetting. We already did a fair amount on AI policies and approved software lists, so today we’ll focus in on the vendor vetting leg of the stool, if you will, and how to go about vetting vendors.
So good times.
Good times indeed. Now, where should an organization start?
Well, as you’re going in, first and foremost, just to touch on it: every organization needs to have policies governing the use of artificial intelligence. If you don’t have a standard for your organization, then people are just going to paint whatever they want to paint, and that typically means outside the lines. That’s not good for anybody.
It’s going to introduce security risks for the organization that they aren’t even prepared for. So before you can vet any vendors, you need to make sure you’ve internally done the forethought and thought leadership around how your organization is going to approach the advent of AI. Defining acceptable uses for AI, and what constitutes sensitive data within the organization, are critical first steps. You want a well-communicated standard, so that you have a shot at ensuring your proprietary or protected data doesn’t end up in the database of an AI system. It’s a good first step for organizations as they’re starting to pin things together.
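To make that “well-defined and well-communicated” idea concrete, here’s a minimal sketch of what a pre-flight check might look like before a prompt ever leaves your environment. The categories, patterns, and function names are purely illustrative assumptions, not any particular product; a real program would pull these definitions from the organization’s own data classification policy.

```python
import re

# Hypothetical sensitive-data categories; a real policy defines its own.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal use only)\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data categories found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Block a prompt from going to an external AI tool if it trips a rule.
prompt = "Customer SSN is 123-45-6789, please summarize the account."
hits = flag_sensitive(prompt)
if hits:
    raise ValueError(f"Blocked: prompt contains sensitive data ({', '.join(hits)})")
```

The point isn’t the specific regexes; it’s that “sensitive data” only becomes enforceable once it’s written down somewhere a tool or a reviewer can check.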
What are some of the considerations when defining your AI vetting strategy?
Well, a good underlying tenet of any security and compliance program is having a list of approved software. As personnel find new software they’d like to use, have them make those requests and put the software through a vetting and validation process, and keep doing that as part of your annual review of the approved software list. As an organization, you need to figure out whether this is the right tool for the right purpose at the right price point, but also decide on the security stance for that particular tool.
Some things that will come into play, especially as it relates directly to AI: are you going to allow the use of a publicly available AI platform, or does your organization require some form of a private instance? Some of the providers will offer private instances, but that can mean a virtual myriad of things in the AI world. So don’t walk into it making assumptions. “Oh, it’s a private instance, we’re cool.” No. Ask questions like, what the hell does a private instance actually mean? You’re going to have to go to a greater level of depth, and you want to walk into that conversation eyes wide open, whether you’re going with a private instance, a public instance, or some type of open-source AI capability. All of that’s going to come into play. The next arena is how you want to go about doing risk mitigation for the organization. If you aren’t leveraging an AI platform for any sensitive or internal data, then that automatically and greatly mitigates the level of risk the organization is going to take on.
But anytime you’re bringing sensitive data into it, you need your sensitive data to be well defined, and you need a very clear, well-communicated policy to attempt to ensure no sensitive internal data lands on AI systems. If your organization’s using AI-enabled software in association with either your sensitive or internal-use data, then you’re going to need to take extra care to make sure you’re thoroughly vetting all of those pieces of software. And one thing I mentioned a minute ago: even the existing stuff that you’ve already got approved. It’s not good enough to say, just to give an example, five years ago I blessed the use of Office 365. That’s great, but between five years ago and now, a couple of things have changed. So you’ve got to look even at the approved software and at what changes and modifications have happened within those tools as you’re going through and doing your annual reviews.
Since AI is poking its head into a number of different, even already existing platforms, it puts a great deal of onus on the organization doing the vetting and sanity checking to really be on its toes.
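One way to keep that annual re-review honest is to track the approved software list as structured data rather than a static document. The sketch below is a hypothetical shape for such a register; the field names and the one-year review window are assumptions, not a standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedSoftware:
    name: str
    approved_on: date
    last_reviewed: date
    ai_features_added: bool   # has the vendor bolted AI onto the product since approval?
    max_data_class: str       # highest data classification the tool is cleared for

def needs_re_review(entry: ApprovedSoftware, today: date) -> bool:
    """Flag entries that are overdue for annual review or newly AI-enabled."""
    overdue = (today - entry.last_reviewed).days > 365
    return overdue or entry.ai_features_added

# The Office 365 example from above: approved years ago, AI added since.
inventory = [
    ApprovedSoftware("Office 365", date(2020, 3, 1), date(2023, 6, 1), True, "sensitive"),
    ApprovedSoftware("InternalWiki", date(2025, 1, 15), date(2025, 1, 15), False, "internal"),
]

for entry in inventory:
    if needs_re_review(entry, date.today()):
        print(f"Re-vet: {entry.name}")
```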
Yeah. I guess the crux of this for me is: what is it about the use of AI in vendor products that increases risks for an organization?

Well, you know, I love referring to the blind stumble toward AI as the AI zombie walk, right? I’ve seen multitudes of organizations just throwing all caution to the wind for all of the wonderment that AI had promised them, with little or no consideration of any notion of risk.
And as your people are using AI for daily work, they may be unknowingly sharing sensitive data with insecure AI engines, so your standard vendor vetting reviews now need to include a thorough awareness of how AI is being implemented within those tools. There’s a certain inherent element of trust involved in vendor relationships, but you still need to go in and verify some of the points. I can’t help but harken back to the firestorm of Edward Snowden blowing the whistle on the big tech companies violating their own privacy statements by injecting backdoors and sharing information with unapproved parties. So it’s not that big of a leap to imagine AI organizations inappropriately or even surreptitiously using your data to feed back into their engines so they can use it for continuous improvement. Not a stretch at all; probably more than a notion. At the end of the day, if you’re not putting sensitive data in, you don’t have as much to worry about. But one specific risk that folks need to walk into eyes wide open is the trust-but-verify approach, especially as it relates to coding. Some of the AI platforms are spitting out code for people to leverage in internal development as an acceleration tool. Bad actors literally are actively attempting to influence those learning models so that the AI creating this wonderment of code injects backdoor vulnerabilities as it comes off the far end of the line. They want these security holes to appear as an expected outcome, so that real-world developers just go, oh yeah, thanks very much, and plug it straight into their environments. At no point in the game should anybody be taking code from a machine or an unknown developer and tossing it through unvetted, unvalidated, and moving it up to production. Some of these things are kind of common sense, but in the same sense…
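To put a floor under that trust-but-verify idea for code, here’s one possible shape of a pre-merge gate, assuming a Python shop and the open-source Bandit static analysis scanner. This is a sketch of one gate among several; passing a scan is not a substitute for the human review called for above.

```python
import subprocess
import sys

def vet_generated_code(path: str) -> bool:
    """Run a Bandit security scan over code (AI-generated or otherwise) before merge."""
    result = subprocess.run(["bandit", "-r", path], capture_output=True, text=True)
    print(result.stdout)
    # Bandit exits non-zero when it finds security issues.
    return result.returncode == 0

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "src/"
    if not vet_generated_code(target):
        sys.exit("Blocked: resolve the findings and get a human code review first.")
```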
Is it always that common?
Yeah, not so much. Not so much.
What are some of the vendor vetting questions that organizations should make sure that they ask when vetting their vendors?
Well, here’s my recommendation. In the earliest rounds of all the vetting, first and foremost, just vet based on whether or not the vendor’s going to suit your purpose. Get yourself from whatever it is, eight or ten options, down to two or three, shortlisting the vendors based on features and functions and things along those lines. Then go into progressively deeper levels of questioning based on the risk appetite your organization has already established for itself. Certainly ask vendors about their security and compliance posture: what certifications are they going up against? Did they get validated and vetted by a third party? And as crazy as I feel it is to have to actually articulate this, I have seen more cases than I can plausibly imagine where organizations that were ostensibly vetting vendors requested the security documentation, said thank you, and stuck it on a shelf. You need to actually read what they gave you.
Two quick stories on that. One organization had ostensibly done their vendor vetting but didn’t bother reading the documentation, and when we were getting prepped for an assessment, we found out the vendor had handed them a piece of security confirmation that didn’t even cover the product lines and services they were leveraging. In another case, a different company, again, didn’t bother to read it, and the paperwork covered specific locations that did not include the locations the organization was actually using. So you have to crack these things open, read them, and make sure the locations you’re leveraging, the scope elements you’re leveraging, and the services you’re leveraging are actually listed in the documentation.
What specific controls do they implement and have within their scope? Have they added AI capabilities? How are they protecting the environment and the data? Around AI specifically, you want to understand things like: is our stuff going into a public pool? Can you, as the customer, get a private or semi-private instance of the AI engine, and if so, what does that mean? Can we host the AI ourselves so we know exactly where the data is? If we’re using a private instance, does that mean nothing touches either our data or the metadata? And here’s a good question: is your quote-unquote private model being influenced by the learnings of somebody else from somewhere else?
Is there some type of baseline engine-improvement feed? You’re putting your effort into improving the learning of this particular AI instance; what else is feeding it?
Are there any other external influences that are going to sway the benefit of the platform you’re trying to train? And do I have the option to configure my instance of the software to exclude AI? I’ve seen many organizations, especially in what I’ll call the earlier days of AI, literally just make a blanket statement that said, you’re not going to use AI for anything related to our data. Well, you’d better have the capability to dial things on and off and handle things appropriately based on the expectations of your customer. That’s certainly one of the elements that’s going to be critical as folks go through that process.
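One way to keep answers to those questions from evaporating after the sales call is to record them as structured data with a minimum bar attached. Everything below, the field names, the answer values, and the baseline, is a hypothetical illustration of the approach, not a standard questionnaire.

```python
# Answers captured during vendor vetting; keys mirror the questions above.
vendor_answers = {
    "third_party_attestation": True,         # audited by a third party, and we read the report
    "deployment_model": "private_instance",  # "public_pool", "private_instance", "self_hosted"
    "private_instance_defined_in_writing": True,
    "trains_on_customer_data": False,
    "external_model_influences_disclosed": True,
    "ai_features_can_be_disabled": True,
}

# The organization's minimum bar, set by its own risk appetite.
BASELINE = {
    "third_party_attestation": True,
    "trains_on_customer_data": False,
    "ai_features_can_be_disabled": True,
}

def baseline_gaps(answers: dict) -> list[str]:
    """Return the questions where the vendor missed the baseline requirement."""
    return [q for q, want in BASELINE.items() if answers.get(q) != want]

gaps = baseline_gaps(vendor_answers)
print("Cleared baseline" if not gaps else f"Follow up on: {gaps}")
```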
No doubt. Now, what are some of the red flags that should be on folks’ minds as they’re doing the vendor vetting?
Well, don’t go into this with the guiding assumption that you’re going to get an astoundingly transparent response, especially from the massive AI companies. Make sure you’re digging into it and, wherever you can, getting evidence that verifies the claims. There are a couple of arenas where I would wave the flag of a potential deal breaker. One is a risk-appetite mismatch, where the vendor doesn’t possess a model that works in accordance with your risk profile. That should be one that starts the alarm bells going.
Another is a transparency gap, where the vendor isn’t able to provide acceptable answers about how they’re using your data and information, who they share it with, et cetera. It’s one thing for me to get on the phone and tell you, oh yeah, your stuff won’t be touched by anything under the sun. It’s quite a different story to put those words in writing in your agreement. So it matters, if you will. The other thing to watch out for is what I’ll call vague responses. No offense to the salespeople of the world, but a lot of folks in the sales arena are absolutely known for vague responses when they either don’t want to answer your question or don’t know the answer. So certainly be cognizant of whether you’re talking to a salesperson or a sales engineer. The sales engineer type, the one with the gearhead mentality, is going to know things at a deeper level and ought to be able to answer your questions in detail to your satisfaction. But if they’re obscuring or hiding their real data practices with vague answers, that should be another thing that starts ringing alarm bells. You want to take seriously the responsibility of being a good and appropriate steward of your organization’s security. Just remember that your customers, your employees, and your partners are all counting on you to walk into this AI era with your eyes wide open. There’s a lot at stake as you’re going through that process. Take that responsibility seriously.
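Those two deal breakers can be folded into the same vetting record from the earlier sketch. Again, the keys here are hypothetical; the point is that a red flag should become a written finding rather than a gut feeling that gets argued away.

```python
def deal_breakers(vendor: dict) -> list[str]:
    """Turn the two red flags above into explicit, recorded findings."""
    findings = []
    # Risk-appetite mismatch: the vendor only offers a model your policy forbids.
    if (vendor.get("deployment_model") == "public_pool"
            and not vendor.get("public_pool_allowed_by_policy", False)):
        findings.append("risk-appetite mismatch: public-pool model only")
    # Transparency gap: promises made verbally but absent from the agreement.
    if not vendor.get("data_use_terms_in_contract", False):
        findings.append("transparency gap: data-use terms not in writing")
    return findings

print(deal_breakers({"deployment_model": "public_pool",
                     "data_use_terms_in_contract": False}))
```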
Absolutely. Parting shots and thoughts for the folks this week, Adam.
Well, take your AI responsibility seriously, bottom line. A lot of people get sucked into the seductive nature of AI, and the reality is that the security risks have the capability to be both catastrophic and permanent. Once your sensitive data gets ingested someplace it shouldn’t, there isn’t a magical undo button. You want to stay ahead of the curve, you need to be proactive, and you want to be skeptical of the software your company considers using.
You want to make sure as an organization that you’re establishing, enforcing, and reinforcing your AI policy. You want to lock down that approved software list and make sure you’re doing a thorough job of vetting both new vendors and your existing vendors, because your reputation could depend on it. Do you really want to tie yourself to an organization that doesn’t have its act together? I really want to encourage folks: snap out of the AI zombie walk and start leading your organization toward a secure future, where you’ve got a clear, risk-appropriate AI strategy for your company.
Right there, that’s the good stuff. Well, that’s all the time we have for this episode of Compliance Unfiltered. I’m Todd Coshow, and I’m Adam Goslin. We hope we helped to get you fired up to make your compliance suck less.