Compliance Unfiltered is TCT’s tell-it-like-it-is podcast, dedicated to making compliance suck less. It’s a fresh, raw, uncut alternative for anyone who needs honest, reliable compliance expertise with a sprinkling of personality.
Show Notes: A.I. Grab Bag
Quick Take
On this episode of Compliance Unfiltered, Todd Coshow and cybersecurity expert Adam Goslin delve into the hidden dangers of AI’s rapid adoption. They uncover why organizations are neglecting essential safeguards, leaving sensitive data vulnerable, and how AI is being exploited as a malware command center.
With insights into recent security failures and emerging standards from ISO, NIST, and IEEE, this episode is a must-listen for security professionals and business leaders.
Learn how to implement responsible AI strategies and avoid becoming a cautionary tale.
Hit play to understand what’s truly at stake with AI.
Read The Transcript
So let’s face it, managing compliance sucks. It’s complicated, it’s so hard to keep organized, and it requires a ton of expertise in order to survive the entire process.
Welcome to Compliance Unfiltered, a podcast dedicated to making compliance suck less. Now, here’s your host, Todd Coshow with Adam Goslin.
Well, welcome in to another edition of Compliance Unfiltered. I’m Todd Coshow alongside the jalapeño poppers to your compliance happy hour, Mr. Adam Goslin. How the heck are you, sir?
I’m doing good, Todd. How about you?
I can’t complain. I cannot complain.
Just a reminder for the folks out there, Adam: if you’ve been listening to us for a while and you have some suggestions about things you’d like to hear in the future, or some security-related topics you’d like us to chat about, give us a shout. We’d love to hear from you. Reach out to us at [email protected].
Well, Adam, we’re going to do a little bit of something different. This is the episode that we’re calling the AI Grab Bag. Now, there continues to be a lot of chatter. Things continue to heat up in the world of AI. It seems like we’ve got a number of topics in that area to discuss. Why don’t you kick it off for the folks today?
Sure. I mean, in the AI space, this has been relatively well-documented: I’m like, eh, let’s just kind of ease our way into this and take it slow, let’s be smart, and all that fun stuff. And I don’t regret that approach as it relates to AI.
I mean, with any new technology there’s certainly a lot of opportunity, but there’s also a lot of risk involved. And really what you’re depending on is organizations to, oh, I don’t know, act in a scrupulous manner, to take this stuff seriously, to do their part as upstanding members of society who take their responsibility seriously. Now, I don’t typically crack out the good old tinfoil hat that often, but this arena especially reminds me a lot of lessons that don’t appear to have been learned from the fine days of good old Edward Snowden. That was like 13 years ago that that hit the airwaves, and everybody was like, oh, shock and awe, right? Oh my God, what do you mean there were surreptitious programs collecting up information and data on everybody under the sun, most of it illegal, with the biggest tech giants that existed at the time sharing information surreptitiously with governments?

You sit and you think about the nature of the information and data that exists today, especially in this world of AI, and I personally find it perplexing that more people haven’t been asking some real tough questions, some hard questions: What are you doing with this information? Who are you sharing it with? Where does it reside? Who has access to it? Are there back doors into the vaults of data that these AI platforms are grab-bagging for themselves? I think there’s a lot of lessons to be learned here. You would hope that people learn them, but it’s been pretty entertaining watching where people’s headspace is, you know, over the last couple of years on this AI trek, if you will.
No doubt about it. And it’s funny how quickly things move in the tech space. Like, 13 years ago is an eternity when you’re talking about technology.
So now, I know you’ve been extra cautious when it comes to what you call the zombie walk toward AI over the course of the last couple of years. It seems like you’re not alone there. Tell us more.
Well, one of the things as I walked into it was a slow, cautious, let’s-wait-for-this-arena-to-mature kind of approach. I was wanting to watch things mature a little bit, see how things were going to flesh out, et cetera. But we did a pod not too long ago about the fact that a ton of organizations don’t currently have anything in their policies surrounding AI; the listeners can go back and take a listen to that one.
But a lot of the things we’re talking about right now have just recently come out, and when I say recently, I mean in the last couple of days. One example is a mortgage technology firm called Mortgage Brain, which is putting out warnings to mortgage companies, advising them on secure authentication but also on some of the risks of leveraging AI. Mortgage companies hold a ton of sensitive data on the folks whose data they’re responsible for protecting, so the guidance to those in the mortgage industry is not to drop client data into consumer AI tools and not to put anything in there that you don’t want public, AKA don’t put sensitive data into AI platforms. It’s interesting, because from my past experience you would think the mortgage and insurance sectors would be at the forefront of security and controls, just given the types of information and data they have access to for their clients in order to do their business. From my perspective, at least, it almost seemed like those industries were lagging behind.
I don’t know if they got left behind or were just lagging in terms of the control sets they were leveraging. Similarly, and again very recently, the EU Parliament is blocking AI tools because of various cyber and privacy fears. They basically sent out a note to their members saying they were disabling built-in AI features on their corporate tablets after their IT department said, we can’t guarantee the security of the tool’s data. Well, why? Because you’re not the one in control of it. You’re beholden to a third party, trusting what they say about what they’re doing with the data, where it goes, who has access, how they’re using it, et cetera.
So it’s kind of interesting that you’ve got more and more folks raising their hand saying, hey, this stuff is cool and everything, but we’re going to put the kibosh on it for the time being.
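[Editor’s note: for anyone wondering what “don’t put sensitive data into AI platforms” looks like as an actual control, here is a minimal sketch of a redaction pass applied to text before it ever leaves your environment. The patterns and the scrub helper are illustrative assumptions, not any particular vendor’s DLP product, and a real implementation would cover far more data types.]

```python
import re

# Illustrative patterns only -- a real data-loss-prevention pass would be
# far broader (names, addresses, account formats specific to your business).
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Mask anything matching a sensitive pattern before the text is
    handed to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Summarize: borrower Jane Roe, SSN 123-45-6789, jane.roe@example.com"
print(scrub(prompt))
# Summarize: borrower Jane Roe, SSN [REDACTED-SSN], [REDACTED-EMAIL]
```

The design point is simply that the scrubbing happens on your side of the wire, so the third-party platform never sees the raw values.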
Yeah, I do. Now, there are some real tales to be told regarding AI in the news recently. Tell us about that recent issue with Microsoft.
Well, Microsoft added some AI into Notepad. Now, for those who have been around for a week or three, Notepad is the built-in text editor for the Microsoft operating system. They decided to go and jam some AI in there, and it ended up creating a security failure, because the AI was so stupid-easy for the hackers to go in and trick.
I think at least one of the problems is that this has the smackings of a case where they’re using AI code generation to accelerate improvements to their software, their platform, et cetera. The recently documented bug was a command injection issue within Notepad that would allow an unauthorized attacker to execute code over the network through the hole they jammed into Notepad. Once they were aware of it, they turned around and fixed it. But part of my problem with things like this is, oh, I don’t know, the dozens of controls all layered together that ostensibly serve to protect the organization and their customers, which appear to be lacking. Let’s see: secure code reviews, where do those fit into this mix? Where’s our functional testing that needs to come into play? Where is the security review of the new code they’re putting in to take advantage of the AI capabilities? And how in the absolute hell does something like command injection, an attack vector that’s been around for a long time, just get missed? It has all of the signs of organizations that just really aren’t taking this stuff seriously, and certainly aren’t living up to the expectations folks ought to have around how they do what they do when it comes to these code releases. I mean, do they have AI whipping up code while they’ve got AI looking for security holes? Let’s bring some sanity back into the mix; let’s make sure we have strong controls in place. I’m not saying that everybody’s perfect, but the fact that stuff like this is slipping through is indicative of that drive to push AI functionality and throw caution to the wind. It’s disappointing that there are organizations basically willing to roll the dice with the security of their clients.
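[Editor’s note: for readers who haven’t bumped into command injection before, here is a generic before-and-after sketch of the bug class being described. It is not Microsoft’s actual Notepad code; the log-search function is a hypothetical example of the classic pattern that secure code review and functional testing are supposed to catch.]

```python
import subprocess

def search_logs_vulnerable(keyword: str) -> str:
    # BAD: user input is spliced into a shell command line. A "keyword" like
    # '"; rm -rf ~ #' stops being a search term and starts being a command.
    return subprocess.run(
        f'grep "{keyword}" /var/log/app.log',
        shell=True, capture_output=True, text=True,
    ).stdout

def search_logs_fixed(keyword: str) -> str:
    # GOOD: no shell involved. The keyword travels as one discrete argument
    # and can never be interpreted as shell syntax ("--" also stops it from
    # being parsed as a grep option).
    return subprocess.run(
        ["grep", "--", keyword, "/var/log/app.log"],
        capture_output=True, text=True,
    ).stdout
```

The fix is decades old and boring, which is exactly the point above: this class of bug should never survive a basic secure code review.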
You know, what’s interesting is that I feel like, and I’ve seen this in a couple of instances, people are just looking for a reason to believe whatever they’re seeing. It’s like, oh, well, AI said it, so those are the numbers we’re going with; this is what it came through as. And I know that sounds kind of silly, but ultimately, in a business setting, you have a lot of people who are really just looking for a way to pass the buck.
It’s not my fault, that’s what the numbers say, what do you want from me? And we’re seeing more and more instances where these numbers are clearly fabricated by AI, just because they’re based off of something plausible.
Come on now, AI wouldn’t make something up.
Yes indeed. So yeah, it’s just an interesting crossroads of technology and morality right now.
Speaking of morality, bring the listeners up to speed on the newly released issue discovered within Copilot and Grok.
Well, it was interesting because, and again, this just dropped a couple of days ago, researchers were showing that both Copilot and Grok are capable of being abused as malware command-and-control relays, basically able to proxy malware traffic through the AI agent. They’ve kind of codenamed it “AI as a C2 proxy.”
This was an attack method demonstrated by Check Point, where they used anonymous web access combined with browsing and summarization prompts, and the same mechanism could also enable AI-assisted malware operations: generating reconnaissance workflows, scripting attacker actions, dynamically deciding what to do next while executing an intrusion, and whatnot. So with all of these things that people are in wonderment of, you can’t lose sight of the fact that, just as these tools bring the opportunity for positive outcomes, you should never underestimate the ingenuity of the bad actors out there. The same platform that’s bringing advancements in technology is the same platform the bad guys are going to use against us. It was interesting to see both Copilot and Grok being leveraged as a relay channel for spewing malware. It kind of puts a new shine on things, if you will.
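[Editor’s note: nobody needs a how-to for the attack itself, but the defensive flip side of the Check Point finding is visibility into which machines are talking to AI endpoints at all. Here is a minimal detection sketch over a web-proxy log; the log format, the domain list, and the host-naming convention are all assumptions made up for illustration.]

```python
import csv

# Domains associated with AI assistants (illustrative, not exhaustive --
# you would maintain your own list as part of an egress-filtering policy).
AI_DOMAINS = {"copilot.microsoft.com", "grok.com", "chatgpt.com"}

# Hosts that have no business chatting with an AI assistant.
SERVER_PREFIXES = ("db-", "app-", "batch-")

def flag_suspicious(proxy_log_path: str) -> list[dict]:
    """Assumes a CSV proxy log with 'src_host' and 'dest_domain' columns."""
    hits = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["src_host"].startswith(SERVER_PREFIXES)
                    and row["dest_domain"] in AI_DOMAINS):
                hits.append(row)
    return hits

# Anything this returns deserves a look: a database server "summarizing
# web pages" through an AI assistant is exactly the relay pattern above.
```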
No doubt about it. Now, not too long ago we had the Super Bowl, but it appears there’s already been some fallout from backlash over the AI-related ads. Tell us more, because I’ve got my own feelings on this one, too.
You know, I’m guessing a lot of the listeners happened to watch the Super Bowl. I’m not sure if they remember this ad, but there was an ad during the Super Bowl from Ring, touting the capability of their doorbell cameras to assist people in finding lost dogs. Going from memory, I think they even had a stat in there about how they’d found one lost dog a day, or something along those lines. And it was portrayed in this real feel-good way. Honestly, you know what it smacked of for me? Those Budweiser commercials, right, where you’ve got the dog coming over to the horse, that kind of feel. So it comes out as this feel-good ad.
And meanwhile, people start connecting the dots. Oh, geez, if we’re going to use the video off the Ring cameras to go find lost dogs, then what exactly is going to be the difference between that and using the camera feeds for nefarious purposes, or for tracking the comings and goings of individuals? And it turns out…
Adam, all I saw was a commercial for Skynet. I don’t know about you.
Well, it was interesting, because one of the partners they were looking at doing an integration with was an organization called Flock Safety, and looking a little deeper into the Flock Safety crew, they were involved in some things with law enforcement, et cetera. And it’s funny: as soon as that started coming out, there was apparently a lot of backlash, and as it turns out, I think Ring ended up stepping back from their relationship with Flock as a result. Maybe, and I’m not holding my breath here, it was just happenstance that all of this hit the fan and then they decided to back out, but the timing seemed a little too convenient, if you will, between all of this breaking loose and them deciding to deep-six that relationship.
The marketing executive that came up with the dog idea for the commercial absolutely got fired. Well, we’ve talked a lot about things going sideways. Any glimmers of hope out there?
Well, there have been several security and compliance standards that have come out for AI-related systems. I know that ISO has put out an international standard specifically for AI management systems (ISO/IEC 42001). There’s the NIST AI Risk Management Framework that’s out there, IEEE has put out some material, there’s the EU AI Act in the mix, and so on. So standards are coming out; that’s one side of it.
And I think the educational arena is starting to step up its education of students as it relates to AI. I’ve talked to a couple of different professors at a couple of different educational institutions, and certainly the notion of security surrounding AI is becoming more and more commonplace in the coursework for folks in the security space at major educational institutions. More recently, CompTIA put out a SecAI+ track where professionals can go get a certification focused on the security of artificial intelligence systems and the practical application of AI in cybersecurity. So more and more, we’re seeing educational opportunities popping up to train those who are going to be responsible for taking care of these systems in the future, so they’ll be in a position to actually do that. That’s the glimmer-of-hope side.
Before we get to parting shots and thoughts, I had a follow-up question there. Have we heard anything about requirements or enforcement coming out around these AI standards?
Is it something where companies can expect some standardized expectations sooner rather than later, or are we still kind of in wait-and-see mode?
Ah, it’s still really fricking new. I mean, most of these standards have been popping up, we’ll call it, relatively recently, and if you can imagine, there’s been a lot of morphing over the last couple of years, right? Maybe it was a year ago that we started seeing standards coming out, and it was like people were in this rush to just put a wrapper around it. The problem is what you’re putting a wrapper around. It’s kind of like saying I’ve got a one-meter-by-one-meter-by-one-meter cube of sand and we want to tie a ribbon around it. Good luck. It’s tough, because it just keeps shifting and morphing and changing.
Generally speaking, and this is just a general statement, there are some early comers to the game who basically take their crack at the bat, trying to get it right out of the gate, and it’s usually what we’ll call an educated guess about where things are heading. But the real dialing-in isn’t going to be possible until the technology’s growth and direction start to stabilize, when we really have some firm guidelines and firm structures that we can start wrapping real meat around. And most certainly, the actual enforcement side of that coin? Oh, you can bet your ass that’s going to trail by a wide margin. Right now, we’re back in the wild west; this just happens to be the latest wild west on the security front, and AI is undoubtedly taking the nod for the eye of the wild west hurricane in the cybersecurity space.
Parting shots and thoughts for the folks this week, Adam.
Well, as I sit here watching this show unfold, if you will, I can’t help but harken back to the Snowden days, and I really hope that we learn our lessons from the past. This zombie walk toward AI certainly seems to be setting aside many of the lessons that should have stuck. And that’s a little bit, no, not even a little bit, it’s a lot disappointing that we don’t have more sensibility and more application of those lessons learned in the current day.
The one thing that’s kind of sad is that, if you just go back to your basics, all of the various standards and certifications you’ve dealt with as a security and compliance professional over the years, those core tenets definitely have application in this space. You look at things like access control and controlling the access to data, the protection of that data: where is the data flowing from and to, who can basically put their fingers on it, what is it being used for, things along those lines. Gosh, just vendor due diligence, right? Everybody under the sun is on this AI zombie walk, and it’s like they’ve set aside the notion of truly vetting what the hell is going on with the AI functionality they’re about to bring into their environment. It’s tough when we already have controls in place, things that people should know and love today, that get set aside in the fervor for AI. We just need to take this sensibly. There are too many people throwing all caution to the wind. And especially for the listeners: I would sanity check the bullshit that’s coming out of the providers related to AI, especially if you’re talking about sharing, or potentially sharing, or providing access to any sensitive data. People aren’t doing the appropriate due diligence with these providers, and the providers are just jamming AI into everything under the sun. We’ve got to collectively hold these people accountable and make sure that they have validated, vetted, third-party attested documentation on precisely how this is working and what they’re doing with the data and the information. Those should be entry points for organizations, but I’m not even sure it’s an afterthought at this point in the game.
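[Editor’s note: the vendor due-diligence point above boils down to a repeatable set of questions. As a purely illustrative sketch, here is what that might look like captured as a reviewable artifact rather than tribal knowledge; the questions echo ones raised in this episode, and the pass/fail logic is just one assumed way a team might operationalize it.]

```python
# Hypothetical AI vendor due-diligence checklist as a reviewable artifact.
CHECKLIST = [
    "What data of ours does the AI feature receive, and where does it reside?",
    "Who (vendor staff, subprocessors, governments) can access that data?",
    "Is our data used for model training, and can we opt out in writing?",
    "Is there third-party attested documentation covering the AI components?",
    "Can the AI features be disabled or scoped down per tenant?",
]

def vendor_passes(answers: dict[str, bool]) -> bool:
    """A vendor passes only if every question has a documented,
    satisfactory answer; anything unanswered is flagged."""
    missing = [q for q in CHECKLIST if not answers.get(q)]
    for q in missing:
        print(f"UNANSWERED: {q}")
    return not missing
```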
And that right there, that’s the good stuff. Well, that’s all the time we have for this episode of Compliance Unfiltered. I’m Todd Coshow and I’m Adam Goslin. I hope we helped to get you fired up to make your compliance suck less.