Compliance Unfiltered is TCT’s tell-it-like-it-is podcast, dedicated to making compliance suck less. It’s a fresh, raw, uncut alternative for anyone who needs honest, reliable compliance expertise with a sprinkling of personality.

Show Notes: No AI Policy? Your Company is Flirting with Disaster

Listen on Apple Podcasts
Listen on Google Podcasts

Quick Take

On this episode of Compliance Unfiltered, the CU guys delve into the critical need for AI policies within organizations. As AI technology rapidly evolves, many companies find themselves unprepared, risking exposure of sensitive data through platforms like ChatGPT.

Adam emphasizes the urgency of implementing AI policies to protect against potential data breaches and compliance issues. Discover why having a robust AI policy is not just a best practice but a necessity in today’s digital landscape.

All this, and more, on this episode of Compliance Unfiltered.

Read The Transcript

So let’s face it, managing compliance sucks. It’s complicated. It’s so hard to keep organized and it requires a ton of expertise in order to survive the entire process. Welcome to Compliance Unfiltered, a podcast dedicated to making compliance suck less. Now here’s your host, Todd Coshow with Adam Goslin.

Well, welcome in to another edition of Compliance Unfiltered. I’m Todd Coshow, alongside the sunshine to your compliance morning, Mr. Adam Goslin. How the heck are you, sir?

I’m doing good. I thought for sure you were gonna go with a rainbow reference or something. I was mentally preparing for pots of gold and all sorts of other things.

Well, if you have those to share, I’m sure our listeners will be interested as well. But today, we’re actually going to talk about another thing that folks might not have, and that is an AI policy. No AI policy? Your company is flirting with disaster. Tell us more on this one at a high level, Adam. 

Well, what’s interesting about this arena is that things are rapidly changing. I’ve talked several times about the zombie walk that people have been on with AI for the last 18 to 24 months now. And now we’ve got Google indexing any of those ChatGPT conversations that were marked as shared. So depending on how people had set the sharing on their chat results, and what they were busily pouring into ChatGPT’s ears, organizations could be exposing sensitive data where anybody in the world can see the ChatGPT conversations that their employees have shared with colleagues, et cetera. So do you just sit and blindly trust the crew to do what’s right, or what? The AI arena has the potential for getting organizations into a bunch of trouble in a wide variety of ways. But there are few companies that are working up their AI policies, or even talking about working up AI policies, let alone actually having them in place. And most of the compliance standards today don’t directly mandate AI policies yet, so there isn’t a lot of incentive for people to actually go do something about this.

Well, I have a follow-up question on that, actually: do we anticipate any sort of AI regulatory body in the near future, like a standard for AI itself?

We shall see. There are already various standards popping up related to AI components. And if you think about it, look at any industry standard today, everything from HIPAA to PCI to ISO to SOC. All of these standards have the protection of sensitive data integrated into them, and this really is no different. When I’m on the in-scope systems, I should only be sharing appropriate information with the appropriate authorized individuals. Well, how are you plausibly holding those standards in mind when you’re potentially pouring sensitive data into ChatGPT, as an example? So it’s already theoretically there, indirectly. Part of the trouble is that people aren’t making the connection to the dangers of what they’re doing. And in a lot of cases, for these organizations, even going down the road of trying to cook together a new policy, what types of things do we want to have in there, et cetera, is a lot of work. It’s a lot of iterations, a lot of people involved, all that fun stuff. So there are a lot of organizations that just don’t have the drive or the interest to put valuable people-time into heading down that path. But the reality is that AI policies are going to become more and more crucial.

Yeah, no doubt. When would be a good time to consider adopting an AI policy? 

You don’t have your AI policy yet? Yesterday is the best way to put it. Like I was saying, if you don’t have an AI policy in place today, you’re already late to the game. Employee adoption of AI at work is increasing exponentially. Whether or not the organization provided the personnel with any guidelines, half of them are using generative AI today. But 70% of organizations don’t even have general guidelines around the use of AI in the workplace. If you’re not going to give the personnel any guidance on the safe and secure use of AI, then everything’s just going to be the Wild West, and the staff are free to use AI however they want. Depending on what they’re doing, that’s effectively building toward a fine opportunity for, at bare minimum, sensitive data exposure. Just to bring it up a level: what if I’m working in a medical organization, and I happen to be using patient data as part of a letter I’m trying to write to somebody? There are a billion ways it can go wrong or sideways. Here at TCT, we integrated AI into the acceptable use policy for personnel. For those that aren’t familiar, that’s the document that governs the acceptable use of all the assets and technology for the organization. We wanted to get that integrated because it was starting to bubble up. So yesterday, to answer your question, is when you should start contemplating doing something about it.

Duly noted. Honestly, why is this policy so important?

I mean, the purpose of the policy is to meet or exceed compliance standards for the organization. In addition, it’s part of the framework that helps to protect the organization. In many cases, it’s personnel not quite connecting the dots between the risks of what they’re doing and the systems that they’re leveraging. They know that they shouldn’t be spilling sensitive company data off to third parties. We know that, and yet it happens when I’m just going in and doing this one letter to a customer, or I’m in the research and development arm of the organization trying to write a letter to the board about new developments I’d like to do on my platform that are going to be in development for multiple years, et cetera. There are all sorts of things that could be happening within the organization that would open up risk to the organization itself. Some of them may appear benign, but they’re still important elements for the organization to consider. So they’re dropping in client names, they’re putting in patient data, they’re dropping in corporate secrets. You never know what they’re going to put in. It depends on which person, which department, what purpose, et cetera. This is just an internal email that I need some help writing, right? Part of the issue is that it all depends on what they entered into the system. They may be writing a sensitive letter to a patient, so they could be violating HIPAA. There are a lot of different elements, intellectual property, client data and whatnot, and now we’re exposing that information to a third party. It’s outside of your environment. It’s not under your control. Today it’s been more of a free-for-all of AI use, and the AI platforms have been popping up left, right and sideways as well. So you really don’t know: what are these people doing with the information? Who are they sharing it with? Who backs them? Things along those lines.
So there are a lot of open questions, if you will, which at the end of the day are going to put individual organizations at greater and greater risk, given the speed at which the frontliners are adopting this. Especially when in their personal worlds they use these types of systems for composing content. They’ve gained a familiarity with it in their personal lives already, and bringing that over into the workplace is kind of an easy next step. So they may or may not connect the dots on why it’s so important.

Yeah, I mean, when you put it in that context, it makes complete sense. How does one go about drafting an AI policy? 

Well, I mean, there are a couple of common approaches to it. You can take the route of blanketly forbidding AI tools, no exceptions, or you can take some type of middle-of-the-road approach where you’re providing some guardrails and direction, et cetera. Depending on the organization, what they’re doing, and the amount and volume of sensitive data they’ve got, different organizations are going to take a different approach to it. While either of those may be valid, you need to think it through. It’s certainly cleanest to just say no use of AI, but that’s becoming less and less practical. If you think about it, more and more of the service providers already coming into your organization are integrating AI capabilities. Prime example: Microsoft has now integrated Copilot into everything under the damn sun. So these could be tools that you already have that now have AI in them. There’s a lot of consideration there. Certainly, if you’re allowing AI within boundaries, then you need to figure out what types of sensitive information the company has and start thinking things through. There are various considerations. What certifications or standards is your company subject to, and what’s the intended protection that they provide? If you’re HIPAA compliant, then you’ve got HIPAA data that you need to worry about; if you’re PCI compliant, you’ve got credit card data. So you kind of think through the various realms. Do you have intellectual property
that needs to be protected? That could be software product diagrams, manufacturing processes, or research and development that’s being done, like I mentioned earlier. If you’ve got personnel or employees, you’ve got things like payroll information, Social Security numbers, personally identifiable information, and maybe medical info for insurance purposes as well. And almost any of your client information is going to be sensitive data and include PII, and could have account numbers, payment information, ACH banking, et cetera. So you want to think through what the scope is. Then the next element is really thinking through which AI tools we want to allow, and various considerations like: what are the rules and requirements for protecting company data and information? Who is that data and information being shared with? How long does that third-party tool leveraging AI keep the data and information? Is that data ever expunged, or is it kept on their internal systems indefinitely? What if the third party gets acquired? Do they still have your data? Does that data go to whoever bought them? And as you’re going through, don’t just limit the policy to work devices. Oftentimes organizations will restrict the use of AI on company equipment, but they forget that employees could still be using AI stuff on their phones. It’s kind of easy to make that mistake.
You know, somebody logs into their webmail account, sees something urgent that needs to be done, and cracks out ChatGPT on their personal laptop so they can just get it done quickly. So making sure that the policy statements are directional, not only toward the company equipment, but also toward the use of company data and information on those personal systems as well, that’s going to be part of the consideration, if you will.

Now, how should one review and finalize the AI policies? Is this a little uncharted territory for folks? 

Yeah. Well, it is certainly something that takes a second. For most organizations, quite honestly, they haven’t had to think through and write out new policy material in quite some time. As you go through and write it, you want to go through several rounds of review, and you want to include the detective and preventative measures that you intend to take. How are you going to ensure that the policy isn’t violated? How are you going to detect if it was violated? What are the consequences if it’s violated? These are all things that you want to go through and consider. The AI policy should be directional in nature. Don’t try to iterate out every single scenario, et cetera; keep it directional. As you’re developing the AI policy, you want to keep a couple of things in mind. Be aware of the built-in AI modules in the platforms you’re already leveraging; I mentioned Microsoft a minute ago. If necessary, do you want to opt out of those features, or do you want to allow them, et cetera? You want to know how the platforms are implementing AI into their systems, whether they’re doing it themselves or embedding some third party. Think about where your data is going, who houses it, and what you want to allow as well. How could that data be used by an AI company, and what’s acceptable for you? Consider scenarios like mergers and acquisitions, because those could definitely play into it as well. As you’re going through that process, and those rounds of review, you also need to consider the various people that you’ve got on staff, the departments that you have, et cetera, and what the implications are for them.
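One possible detective measure for a policy like this is periodically scanning egress or proxy logs for traffic to generative AI services. As a rough illustration only, here is a toy Python sketch; the domain list, log format, and field order are all assumptions for the example, not a real proxy format, and any real implementation would need to match your actual logging setup:

```python
# Toy detective control: flag proxy log lines that show requests to
# known generative AI domains. AI_DOMAINS and the "<user> <domain> <path>"
# log format are illustrative assumptions -- adapt both to your environment.

# Hypothetical blocklist of AI service domains to flag.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def flag_ai_traffic(log_lines):
    """Return (user, domain) pairs for requests to flagged AI domains.

    Assumes each log line looks like: "<user> <domain> <path>".
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines rather than crash
        user, domain = parts[0], parts[1]
        if domain.lower() in AI_DOMAINS:
            hits.append((user, domain))
    return hits

sample_logs = [
    "alice chat.openai.com /c/abc123",
    "bob intranet.example.com /wiki/home",
    "carol claude.ai /chat",
]
print(flag_ai_traffic(sample_logs))  # → [('alice', 'chat.openai.com'), ('carol', 'claude.ai')]
```

A check like this only catches known domains on monitored networks, which is exactly why the episode pairs detective measures with training and a directional policy rather than relying on blocking alone.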

Well, speaking of, you know, what about considerations during rollout and training? 

Well, once you’ve got it finalized and you start rolling it out throughout the organization, you want to communicate with the personnel beforehand so they know it’s coming, provide opportunities for them to ask questions, and be clear about the purpose of why the policy is being instituted. At rollout, provide thorough training so that everybody understands what the policy is and how to follow it. After implementation, don’t go put it on a shelf; you want to enable refresher training. Certainly, you want to review that policy on a regular basis in one of your manager meetings. As you first do the rollout, you’re going to start to get feedback and input and responses, and in some cases pushback, et cetera. Those are all considerations, especially with a new policy element. So I would suggest using one of your manager meetings, leaving this as a topic that’s kind of at the top of the list for a while, where you’re gathering up those inputs, bringing them to the management meetings, and making any needed adjustments and things along those lines. This process is important because it’s going to assist in making that policy better and better: more usable, greater protections, and maybe things you didn’t even think about. It’s funny, the frontliners have a tendency to come up with some really good ideas, so make sure you’ve got that feedback loop in place. I would review that policy at bare minimum once a quarter for the first year, doing that in one of your quarterly management meetings, and once everything settles in, you can shift it back to a couple of times a year and eventually bring it down to annual. Certainly in terms of training, you want to make sure that you’ve got training and retraining of the employees, including it in your new hire training and in friendly reminders throughout the year, especially during that first year.
And then as part of the annual security awareness training sessions that you have, make sure that you’re leveraging the content from that AI policy so that you can continue the review process. As well, as you’re going through that rollout, you want to keep an eyeball on the vendors that you have and who’s rolling out what features, and see if there are any additional implications for the existing policies that you already have rolled out. All of those are going to be helpful.

Parting shots and thoughts for the folks this week, Adam. 

Well, if I haven’t said it enough: don’t sit around waiting to draft your AI policy. Start putting some thought into it, get it moving. The notion of monitoring for AI-related capabilities showing up in already trusted vendors, that’s gonna be an important one, because AI is gonna keep popping up across your existing vendor list. You gotta know what these people are doing. What are they doing with it? How are we using these tools? What data and information could be exposed? Things along those lines. That’s going to be a big part of ongoing vendor vetting, validation, and annual revalidation, especially as it relates to their AI capabilities. And I can’t underscore enough: take the input and the feedback from your frontliners seriously. They oftentimes will come up with things that you don’t initially think about, so make them part of the solution in terms of helping to make iterative improvements to your AI-related policy stances.

And that right there, that’s the good stuff. Well, that’s all the time we have for this episode of Compliance Unfiltered. I’m Todd Coshow and I’m Adam Goslin, hope we helped to get you fired up to make your compliance suck less.