Compliance Unfiltered is TCT’s tell-it-like-it is podcast, dedicated to making compliance suck less. It’s a fresh, raw, uncut alternative for anyone who needs honest, reliable, compliance expertise with a sprinkling of personality.

Show Notes: Navigating the Dangers of Adopting A.I.

Listen on Apple Podcasts
Listen on Google Podcasts

Quick Take

In this episode, the CU Guys discuss the hidden dangers of AI adoption without proper security measures. Discover shocking examples of AI breaches, from rogue agents deleting databases to deepfake scams.

Learn how to protect your organization with strategies for transparency, compliance, and robust security policies. Essential listening for anyone responsible for AI security and compliance. Equip yourself with the knowledge to safeguard your data and reputation.

Listen now to stay ahead of the AI security curve.

Read The Transcript

So let’s face it, managing compliance sucks. It’s complicated, it’s so hard to keep organized, and it requires a ton of expertise in order to survive the entire process.

Welcome to Compliance Unfiltered, a podcast dedicated to making compliance suck less. Now, here’s your host, Todd Coshow, with Adam Goslin.

Well, welcome in to another edition of Compliance Unfiltered. I’m Todd Coshow, alongside the Afrin to your compliance head cold, Mr. Adam Goslin. How the heck are you, sir?

Ah, nicely played. Yeah, I’m okay, but I had a very long weekend with little sleep, and of course I came back with my once-or-twice-a-year head cold. So I’ll apologize in advance to the listeners if I’m sneezing, coughing, whatever it may be. I’ll try to keep it to a dull roar.

Yeah, fair enough. As a reminder for the listeners out there: if you have an idea, if you have a topic, if you have something you just want to share with us, please feel free; we’d love to hear from you. Reach out at [email protected].

Well, I know my guy’s playing hurt today and overcoming some challenges. Today on the pod, we’re going to talk about some of the challenges with adopting AI, Adam. Now, where do we start here?

Well, when AI was first brought in, and I’ve been talking about this for a wee while now, I saw a phenomenon that I love to call the AI zombie walk: people and organizations flinging themselves over the AI cliff just because it says AI and it must be cool. It’s funny, but a lot of people just plodded along, just like an old-fashioned zombie movie. It was entertaining to watch.

But since AI is obviously the only way to make it and thrive in business, and all the other cool kids are doing it... anyway, there are certainly advantages to early technology adoption, but there are also security risks that organizations need to weigh out. You don’t want folks throwing caution to the wind, not evaluating the risks and potential business consequences, et cetera. So sure, I agree with an appropriate, solid, sensible move toward AI. It’s got great potential to make people more productive, more efficient, more profitable, and in the same sense, it could take an organization down. So you don’t want to be the one blindly diving off the AI zombie walk cliff, if you will, without looking below to make sure there are no rocks down there, you know what I mean?

I do. Well, talk about some of the AI-generated security breaches that folks could face.

Well, there are a lot of companies that have compromised on security and protection measures out of this competitive drive to adopt AI. And some of them have certainly regretted it.

So here are some examples. There was the Replit rogue AI agent. Replit is a vibe-coding app, and a company tasked its AI agent with doing database maintenance. Well, the agent decided to go rogue, ignored all of the instructions from the humans, deleted a production database, and then went about trying to cover its tracks. It created 4,000 fake user accounts and inserted a whole ton of false logs, basically trying to cover up what it had done. So that’s a risk, if you will.

That’s wild.

Then Microsoft 365 Copilot had a zero-click vulnerability where attackers were able to use the Copilot AI bot to gain access to user data without any interaction from the user. You’ve got Moltbook, a social network for AI agents to talk to each other, which had a data breach; they had a security hole that was exposing 35,000 emails, a million and a half API keys, 17,000 users. There was a deepfake heist...

Wait, a million and a half API keys?

Yep, yep. Keys for a million and a half APIs. Yep. You got it.

Smokes. Okay.

I mean, this is the net result when you’re in this gigantic push just to implement stuff without really thinking through all the security aspects and whatnot properly. And then Arup had a deepfake heist. An employee was tricked into transferring $25.6 million to scammers after a video conference call in which all of the participants were sophisticated AI-generated deepfakes. So basically they staged it up so that they convinced this person to go ahead and do it. It wasn’t like it was just one person on the line; there was a series of people, all of which were AI. That’s pretty wild, if you will. So it just highlights the notion that we need to have folks considering and mitigating the risks of AI adoption within their environment, because you don’t want to end up being one of the headlines.

Absolutely. And I mean, I guess that kind of leads to our next point to chat on here.

It’s pretty clear, based off of these things, that we need greater AI security transparency. Tell the folks what that’s all about.

It’s not just transparency; I don’t know, maybe even a focus on security at all. People are just streaming toward this AI adoption, AI integration, and whatnot. It’s almost like they’ve forgotten all of the things they’ve had plenty of time to know, learn, and love about best practices in security and compliance, and they’re just throwing all caution to the wind: we’re going to go down this AI road.

One of the biggest problems I’ve got with a lot of these organizations that are merrily whipping AI into stuff is this: you need to know where the data is housed, who has access to it, how it can be used or accessed by third parties, et cetera, and AI companies need to do a lot better job providing transparency on the security aspects. You and I talked a little bit ago about the various realms of the Wild West, if you will, in the IT sector. One of the areas initially was websites. Websites were the Wild West, and they were horrifyingly coded. People got enough of their asses handed to them, they started buttoning it up, and things improved. Then we went to mobile apps, same type of a deal. Well, AI is the new web-slash-mobile-app extravaganza.

The next frontier.

Yeah, exactly. It’s agitating, because at this point in the game, people should have learned their lesson, but haven’t.

It just drives me crazy. It should be raising major red flags for companies that have sensitive information to protect, and I don’t know a single company that doesn’t have something they need to protect. It’s just nuts, but it’s difficult to police these AI tools. Back in the day, when the internet first came out, companies would try to whitelist the websites that were sanctioned for business use, type of a thing. Oh my God, you know what? Screw it. I’m going to take a minute to tell a fun story about whitelisting. Okay. I worked at this place, and they went around saying, oh, we need to whitelist any of the websites anybody needs. I was in charge of IT. We go and we submit all the URLs that we need to leverage. Fast forward to the next day: my CIO calls me into his office. His face is red. He’s angry. And he’s like, sit down. I want to know exactly why it is that you needed to have a website called Expert Sex Change on the whitelist. I just got my butt handed to me by the CEO. And I said the guy’s name and I’m like, look, it’s not Expert Sex Change, it’s Experts Exchange. And all of a sudden you see a light bulb go on with him, and he starts laughing. He’s got tears coming down his face. He’s like, I’m so sorry, but get the fuck out of my office. He’s like, I’ve got to go talk to the CEO. I said, yeah, you have fun with that. So anyway, whitelisting websites while everybody under the sun was launching websites, et cetera, just turned into kind of a futile notion.

But with AI, you’ve got AI getting folded into every tool set known to man. They’re pushing things at a pace that makes it almost impossible for organizations to monitor this properly. Everybody’s doing the zombie walk, whipping AI into everything under the damn sun.

And it’s almost impossible not to use AI. If you’re a Google or Microsoft customer, AI is just magically getting built into the system, type of a thing. So vendor after vendor is just doing the blind run over the cliff, incorporating AI. But when you don’t have the ability to tell what the hell they’re actually doing, it makes it astoundingly difficult to mitigate risk to the company.

Well, where is the vetting and validation of the AI products?

Well, in the AI arena, this is the part that drives me probably the most bonkers, right? We’ve had vendor vetting and validation as part of security and compliance for, what, two decades, type of a thing? How is AI any different? It boggles my mind.

There are annual responsibilities to go through and make sure that these companies are handling their security and compliance properly, et cetera, and that shouldn’t exclude vendors that are integrating AI capabilities into their platforms. The vendors incorporating AI, it’s my belief, have an inherent responsibility to customers and stakeholders to provide clear communication about how they’re incorporating AI, how it has been implemented, the security measures around it, and the proof, validation, and checkpoints they’ve done with their vendor solution of choice, et cetera. We’ll talk about it in a little bit, but there’s not really an in-depth prepared response for organizations to be able to leverage; you just don’t see that available in most cases. Despite the fact that people are supposed to be going through vendor vetting, annual reviews, et cetera, most organizations almost breeze past the notion that, oh, AI just got folded in, type of thing. I’ve had a couple of organizations that we work with, kudos to them, that have put things into their contractual agreements: you will let us know if you’re integrating AI into your systems. And I’m like, okay, finally somebody got the memo, which is cool. But it scares the hell out of me that people are just forging forth, diving in headfirst.

The other stark reality is that a lot of organizations are lacking AI policies. You’ve got to have, at a bare minimum, policy-based control over the sensitive data, because otherwise you don’t have a foundation for being able to protect it. What happens when you’ve got an employee doing analysis who jams your entire client list into the AI? Where the hell is that going? Who’s it going to? Is it going to one of your competitors? Which third parties? Most people don’t have any damn idea, which is scary.

That is scary. Now, AI policies, they should clearly go beyond just your standard company systems, right?

Yeah, well, you absolutely need an AI policy for the company, but having the robust internal policy is, I’m going to call it, half the battle, right? You’ve got to remember that real security is going to extend beyond the walls of the corporate firewall. A lot of organizations will only consider their systems when they’re building out their policies, where others will also think about personnel, vendors, things along those lines. But very few are giving much thought to what we should be attempting to do to protect the organization when people are not in the work setting.

Why do you think that is?

It’s just something that they don’t think about, right? The reality is that every single company and organization is a target of attack 24/7, not just when they’re at work. So protecting the organization, as a philosophy, shouldn’t stop at the edges of the four walls of the business. You can’t mandate what people are doing on their devices in their personal lives, et cetera, but you do have opportunities to provide thorough education about AI safety, about security best practices, things to watch out for, do’s and don’ts, blanket statements, not just at work, about what you can do with sensitive data of the organization. So it’s up to the organization to make sure they’ve got proper governance, that they’re training their people, and that they’re getting them armed in their personal lives as well.

A lot of times when I’ve been doing security awareness training, one of the things I say to folks all the time is: hey, this is appropriate for the company, but keep this premise in mind in your personal life too, because it’s going to be able to help you there as well.

So bad actors are going to look for any opportunity to leverage employees to gain access to the company. And if the employees just don’t know any better, then whose fault is that? So really think about your personnel, because they’ve got work email on their phones. They could install some type of compromised AI tool on their personal devices, that type of a thing. So there are a lot of opportunities for crossover between the personal world and the employee’s work life as well.

Fair enough.

Now, for this audience, this is probably the most pressing question on this topic. How are compliance standards keeping up with AI?

Well, as with all new technology, the answer is slowly. There have been a couple of front-runners that have come out; I think NIST was fairly early on with releasing a standard, and I believe ISO’s got one as well. You’re starting to see them popping up. But what I’m not seeing yet is AI specifically called out very much as line items within an existing certification. So in the PCI DSS, as an example, there isn’t something specific to AI. That said, I harken back to the conversation we were having earlier: generally and philosophically speaking, all of the core elements needed to govern AI-related stuff are actually covered by the standard; it’s just not specifically called out with guidance and things along those lines. And so I think what we’ll see is more standards coming out, and we’ll see the evolution of the existing standards to incorporate allowances for AI, et cetera. I know the PCI SSC had put some thought leadership around AI security out on their website. So you’re starting to see organizations take this seriously, and that’s kind of where things stand right now.

But I fully expected that it would take a little bit. Like I said, this is all only a couple of years in; it only really started to take off maybe 12 to 18 months ago, type of a thing, so it’s going to take everybody a little bit to play catch-up.

That makes sense. Looking to the future now, Adam, how can folks be confident in their protection?

You know, at the end of the day, with AI’s great power comes great responsibility.

But my thing is, just make sure you’ve got all the right tools in your toolbox. You’ve done this stuff before. Don’t gloss over the notion of the integration of AI. Make sure that you incorporate it into your existing policies, and provision that training and whatnot to your personnel. It’s definitely an arena that needs focus.

Parting shots and thoughts for the folks this week, Adam.

Well, I’ve been hammering away on it. The reality is that folks need to take a minute, take a breath, and stop with the run toward the AI cliff without checking for rocks down below. Make sure it’s a nice soft landing, all that fun stuff.

You already know the things you’re supposed to do; you’re already experienced with doing security vetting and validation. Probably the strongest recommendation I’d make is this: I think there needs to be a substantively higher level of pressure put on the core AI organizations to step up and explain the boundaries of their security, et cetera. And organizations that are adopting AI, I think, have an equal responsibility to detail out the validation and vetting they did of their vendor of choice, as well as their use and implementation of AI. They’ve got a huge responsibility to folks, and it’s just not a good enough excuse to say, well, it was AI, so we dove off the cliff.

That’s a good point. And that right there, that’s the good stuff. Well, that’s all the time we have for this episode of Compliance Unfiltered. I’m Todd Coshow. And I’m Adam Goslin. Hope we helped to get you fired up to make your compliance suck less.
