Google is now indexing your shared ChatGPT conversations, potentially exposing your company’s sensitive data. Anyone in the world can see the ChatGPT conversations your employees have shared with colleagues. It’s imperative to have a policy in place for using artificial intelligence.
Artificial intelligence has the potential to get organizations of all types into a heap of trouble. Few companies are having the conversations they should have about AI policies, and fewer still actually have an AI policy in place.
Most compliance standards don't directly mandate an AI policy (yet), so there isn't a lot of incentive to do the work of drafting a policy that addresses the use of artificial intelligence within the organization. Yet several industry-standard controls should already govern the use of AI (need-to-know access to sensitive data, confidentiality provisions, protection of company IP, vendor vetting, and more), even though those controls cover AI only by inference.
It takes time, research, collaboration, multiple meetings, and several rounds of revisions to draft a policy like that. Most organizations haven't been willing to pull valuable people away from their core responsibilities to do that work.
But an AI policy is critical for any organization. In fact, you could be setting your company up for tremendous risk if you haven't already implemented a well-crafted policy on artificial intelligence.
You Need an AI Policy Yesterday
If you don't have an AI policy in place today, you're already late to the game. Employee adoption of AI at work is growing rapidly, even if your organization hasn't officially addressed the use of AI. About half of employees are already using generative AI at least occasionally, but 70% of organizations have neither general guidelines nor formal policies for using AI at work.
In other words, your employees have no guidance for the safe and secure use of artificial intelligence — your workplace is like the Wild West, and staff are free to use AI however they choose.
This is a sure recipe for inappropriate Sensitive Data exposure.
At TCT, we've integrated AI into our Acceptable Use Policy, which describes the acceptable use of assets and technology for the organization. We added AI to that policy about 18 months ago, and we advise our clients to do the same immediately.
Why Is an AI Policy Important?
The purpose of a policy is to meet or exceed the best practices or compliance standards your organization is subject to. It's also part of the framework that protects the organization: its systems, physical facilities, personnel, electronic data, and more.
Often, with a ChatGPT-style AI platform, the application in and of itself isn’t the cause for concern. It’s the question of what your employees are exposing to the system. Are they inserting client names? Are they entering patient data? Corporate secrets? You might be surprised.
For example, an employee may simply use AI to help them write a sensitive email or a letter to a patient. In the process, they enter the patient’s name and medical condition into a third-party tool. That action has just violated HIPAA requirements.
Or an employee might use ChatGPT to improve their efficiency or effectiveness. Without thinking, they enter intellectual property or customer data into the tool. Now a third-party vendor is storing your company's sensitive information — it's outside of your environment, beyond your control.
So far, we've seen a blind scramble toward AI from one organization to the next. There are no industry-level best practices, and established regulations are scant. That puts individual organizations at tremendous risk, especially when they adopt AI tools with little to no thought about security and privacy issues.
At the employee level, it’s even worse. While your IT department may be aware of the security issues involved with AI adoption, your individual staff are largely clueless. Your employees are using various unvetted AI tools that you don’t even know about — tools that staff find on the Google Play Store, or from a random ad that popped up in a search results page.
How to Draft an AI Policy
There are two common approaches to AI policies. You can forbid AI tools altogether, without exception. Or you can take a middle-of-the-road approach and provide a set of guardrails that govern which tools are adopted and how they may be used.
Either approach may be valid, depending on the context and needs of your organization. Certainly, it’s cleanest and simplest to disallow all use of AI, but for many companies today that’s becoming less and less practical.
What information can be entered into AI tools?
If you allow the use of AI within certain boundaries, then the first action item is to catalog all of the types of sensitive information your company stores. Start with the following considerations (a simple catalog sketch follows the list):
- What compliance standards or certifications is your company subject to, and what are they intended to protect? For example, if you're HIPAA compliant, you have medical data to protect; if you're PCI compliant, you likely have credit card data to protect. This will quickly identify the crown jewels among your sensitive data, but there's much more to protect than this type of data.
- You may have intellectual property to protect. This can include everything from software code to product diagrams to secret processes to research and analytics.
- If you have employees, you have sensitive data, which includes payroll information, Social Security numbers, personally identifiable information (PII), and possibly medical information.
- Virtually all of your client information counts as sensitive data, including PII, account numbers, payment information, and the problems you're solving for them.
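To make that catalog concrete, here's a minimal sketch in Python. The category names, regulations, and example fields are illustrative assumptions, not a standard; your own catalog should reflect the obligations your company actually has.

```python
# A minimal sensitive-data catalog sketch. All entries are illustrative --
# replace them with the data types and obligations your company actually has.
from dataclasses import dataclass

@dataclass
class DataCategory:
    name: str            # category employees will recognize
    regulation: str      # framework or obligation protecting it
    examples: list[str]  # concrete fields to watch for
    ai_allowed: bool     # may this ever be entered into an AI tool?

CATALOG = [
    DataCategory("Patient health data", "HIPAA",
                 ["patient name", "diagnosis", "treatment notes"], False),
    DataCategory("Cardholder data", "PCI DSS",
                 ["card number", "expiration date", "CVV"], False),
    DataCategory("Employee records", "Privacy laws",
                 ["SSN", "payroll data", "home address"], False),
    DataCategory("Intellectual property", "Contract/NDA",
                 ["source code", "product diagrams", "research"], False),
]

for cat in CATALOG:
    status = "with approval only" if cat.ai_allowed else "never"
    print(f"{cat.name} ({cat.regulation}): AI use {status}")
```

Even a simple inventory like this gives employees something concrete to check against, rather than a vague instruction to "be careful."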
What AI tools can be used?
It is critical to the security of your organization to validate and vet every AI tool your company even considers using. Ask questions like these (a sketch for recording the answers follows the list):
- What are the rules and requirements for protecting company data and information?
- Who is that data and information being shared with?
- How long does the third-party tool keep that data and information?
- Is the data ever expunged, or is it kept on the third party’s internal systems?
- What if that third party gets acquired and they still have your data?
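One way to keep that vetting consistent is to record the answers in a structured format, so every tool gets asked the same questions. The sketch below assumes hypothetical field names and a fictional vendor ("ExampleAI"); it illustrates the record-keeping, not a prescribed schema.

```python
# A sketch of recording AI vendor-vetting answers as structured data.
# Field names map to the questions above; values shown are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIVendorAssessment:
    vendor: str
    data_shared_with: str    # who else sees the data?
    retention_period: str    # how long is data and information kept?
    data_expunged: bool      # is data ever deleted, or kept indefinitely?
    acquisition_terms: str   # what happens to your data if the vendor is acquired?
    approved: bool = False
    reviewed_on: date = field(default_factory=date.today)

assessment = AIVendorAssessment(
    vendor="ExampleAI",  # hypothetical vendor
    data_shared_with="Subprocessors listed in the data processing agreement",
    retention_period="30 days",
    data_expunged=True,
    acquisition_terms="Data deleted on change of control",
)
print(assessment)
```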
Don’t limit your policy to work devices. Often, organizations restrict the use of AI on company equipment, but they forget that employees may be using artificial intelligence on their phones. The real objective is to protect the organizational sensitive data, regardless of the use case.
It’s easy to do it without even thinking: someone logs into their webmail account over the weekend, sees an urgent request that has to be done by Monday, and they use ChatGPT on their personal laptop to crank it out fast.
Be sure your policy addresses AI use on personal devices as well as company equipment.
Writing the AI Policy
Next, start writing the AI policy for your organization, and make sure to involve every department that directly or indirectly touches any of the sensitive data you've identified. Expect several rounds of reviews and revisions. Your policy should include the detective and preventive measures you intend to take: How will you ensure the policy isn't violated? How will you detect a violation? And what consequences will follow if one occurs?
Your AI policy should be directional in nature. In other words, don’t try to call out every possible use case or AI tool that may or may not be allowed. If you need to blacklist or whitelist specific products, you can do that in your hardware and software inventory.
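If you do maintain a blacklist or whitelist in your inventory, a default-deny check is the safest posture: anything not explicitly approved is treated as blocked. Here's a minimal sketch, assuming made-up tool identifiers; a real inventory would live in your asset-management system rather than in code.

```python
# A minimal default-deny allowlist sketch. Tool identifiers are illustrative.
APPROVED_AI_TOOLS = {"copilot-enterprise", "internal-llm"}  # whitelisted
BLOCKED_AI_TOOLS = {"random-chat-app"}                      # blacklisted

def is_tool_allowed(tool_id: str) -> bool:
    """Return True only for explicitly approved tools; default deny."""
    if tool_id in BLOCKED_AI_TOOLS:
        return False
    return tool_id in APPROVED_AI_TOOLS

print(is_tool_allowed("copilot-enterprise"))  # True
print(is_tool_allowed("new-shiny-ai"))        # False: unvetted tools are denied
```

The design choice that matters here is the default: an unvetted tool should fail the check, so the policy doesn't have to enumerate every product on the market.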
As you develop your AI policy, keep the following considerations top of mind:
- Be aware of any built-in AI modules in the platforms you're already leveraging (e.g., Microsoft 365, HubSpot). If necessary, you can often opt out of those features.
- Know how these platforms are implementing AI into their systems — including whether or not they’re simply embedding a third-party AI chatbot into the software.
- Think about where your data is going and who houses it. What are you willing to allow?
- How could your data be used by an AI company? What is acceptable to you?
- Consider scenarios such as mergers and acquisitions, where your data may be controlled by a new entity with a different policy.
Rolling Out the AI Policy
Once the AI policy is finalized, roll it out throughout your organization. Communicate with your personnel before the rollout so they know it’s coming, and provide opportunities for staff to ask questions. Be clear about its purpose and why the policy is being instituted.
At rollout, provide thorough training so that your employees understand what’s in the policy and how to follow it.
After your AI policy has been implemented, don’t simply put it on a shelf and forget about it. Continue to provide refresher training to your employees, and include AI policy training in new hire onboarding.
Also review the policy on a regular basis in one of your management meetings. Ask whether the policy is missing anything, or whether any adjustments need to be made. Talk through any issues you may be having related to the policy. Have any new concerns surfaced? Are employees discovering loopholes or workarounds that should be addressed?
This process is important, because the way things play out in practice may be different from what you expected when drafting the policy. In addition, your company will need to keep up with AI technologies as they continue to evolve.
This is my recommendation for reviewing and updating your AI policy:
- Review the entire policy once per quarter for the first year. This could be done in one of your regular management meetings.
- Once the dust has settled and your employees are familiar with the policy, shift your reviews to twice per year.
- After some additional time, you can reduce your reviews to an annual basis, similar to your other organizational policies.
Train and Retrain Your Employees
Don’t just write the policy and include it in your training manual. Even your best employees will forget the details of your AI use policy. Train and retrain your personnel. Remind your staff of your requirements on a regular basis. Managers should have ongoing conversations with their direct reports. Post reminders in common spaces at your facility.
It is vital to leverage your internal IT and security/compliance resources to vet any organizational use of AI, the same way you would vet any other vendor that could have access to your data. Yes, it may mean you have a sizable backlog of vendors to review up front, but when you're talking about protecting the company, it's absolutely worth the time and effort. Better yet, appropriately vetting your vendors is a requirement, both as a best practice and under most industry standards.
Train your people to understand what constitutes sensitive data. It isn't enough to tell your staff not to enter sensitive data into ChatGPT, because sensitive data is more than trade secrets and credit card numbers. Sensitive data includes such mundane information as names and phone numbers.
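As a supporting detective control, even a crude pattern scan can flag the obvious cases before text reaches an AI tool. The sketch below uses illustrative regexes for phone numbers, Social Security numbers, and card numbers; it will miss plenty, and it complements training rather than replacing it.

```python
# A rough pattern-scan sketch: flag obvious sensitive data in outbound text.
# The regexes are illustrative and intentionally simple; they are a safety
# net alongside employee training, not a substitute for it.
import re

PATTERNS = {
    "US phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Card number (16 digits)": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

sample = "Call Jane at 555-867-5309 about claim 123-45-6789."
print(flag_sensitive(sample))  # ['US phone number', 'SSN']
```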
Don’t Wait to Draft Your AI Policy
Too many organizations are jumping into the use of artificial intelligence, without doing their due diligence. I expect that many of those companies will wake up one morning to be greeted by headlines announcing that their sensitive data was breached via the AI vendor, or inappropriately used by an AI company they never vetted.
Artificial intelligence is very cool, and it can do some incredible things. But the awe factor should never outweigh your responsibility to protect your organization's sensitive data.
Need guidance for writing your AI policy? TCT can provide consulting services that reduce your risks and maximize your peace of mind. We’ll give you the confidence that you’re well protected. Contact us today!