Since AI was introduced commercially, I’ve noticed a phenomenon that I call the AI Zombie Walk: people and organizations embracing artificial intelligence carte blanche (like a zombie plodding toward some end with their arms outstretched), simply because it’s AI. “If it’s AI, we need to adopt it,” they say. Because apparently AI is the only way to survive and thrive in business, and all the other cool kids are doing it.

Certainly, there are some advantages to early technology adoption, but AI also poses security risks that must be weighed. When it comes to AI in the business sector, many organizations throw caution to the wind without considering the risks and potential business consequences. 

AI has great potential to make a company more productive, more efficient, and more profitable. It also has the potential to take down entire organizations overnight. It’s never a good idea to dive headfirst into the bleeding edge of technology before you’re fully aware of the security implications. Eventually you’ll get burned.

AI-generated Security Breaches You Could Face

Many companies have compromised on security and protection measures out of a competitive drive to adopt AI. Too often, they’ve regretted it. Here’s a handful of real-life examples from the past year that show why it’s critical for your organization to take AI security seriously:

  • Replit’s rogue AI agent. Replit is a vibe coding app. One company tasked it with database maintenance, but the AI agent went rogue, ignored instructions, deleted a production database, and then covered its tracks by creating 4,000 fake user accounts and a ton of false logs.
  • Microsoft 365 Copilot’s zero-click vulnerability. Attackers were able to use Microsoft’s AI bot, Copilot, to gain access to user data without any interaction from the user.
  • Moltbook’s data breach. Moltbook — a social network for AI agents to talk to each other — had a major security hole that exposed 35,000 emails, 1.5M API keys, and 17,000 users.
  • The Arup deepfake heist. An employee was tricked into transferring $25.6 million to scammers after a video conference call in which all of the participants were sophisticated AI-generated deepfakes.

If you aren’t actively considering and mitigating the risks of AI adoption, you’re setting your company up to appear in similar headlines.


We Need Greater AI Security Transparency

One particular challenge in the new AI landscape is knowing exactly where your data is being housed, who has access to it, and how it could be used by third parties. AI companies must do much better in providing transparency on these kinds of questions. 

As of today, AI is a lot like the Wild West, with few industry or government regulations that AI companies must follow regarding security and data disclosures. That should raise major red flags for any company that has sensitive information to protect (and I don’t know a single company that doesn’t have sensitive information to protect).

Part of the issue is that it’s inherently very difficult to police AI tools. Back in the day, when the internet first came out, companies would often restrict personal use by whitelisting a select list of websites that were sanctioned for business use. That quickly proved to be an exercise in futility as the internet exploded and every company in existence launched its own website. 

It’s like that today, but with artificial intelligence. AI integration into existing toolsets is advancing at such a pace that it’s unbelievably difficult for a single organization to monitor effectively. Because everyone is doing the zombie walk and integrating artificial intelligence into their applications, it’s nearly impossible NOT to use AI in your organization. If you’re a Google or Microsoft customer, AI is automatically built into the system.

Vendor after vendor has blindly jumped into incorporating artificial intelligence — but the lack of AI transparency means you can’t know how well these solutions are protected.

Where Is the Vetting and Validation of AI Products?

In the IT arena, it’s a given that organizations perform security validation and vetting of new vendors and solutions. Organizations are also responsible for performing annual compliance and security reviews. However, that validation often doesn’t include specific coverage when vendors incorporate new AI capabilities into their platforms.

These vendors have an inherent responsibility to their customers and stakeholders to provide clear communication about how they’re incorporating AI and how AI is using their data. And they ought to be able to prove, under scrutiny of a third party assessment, that they’re being scrupulous with customers’ information. But currently, that proof isn’t available in most cases. At best, many organizations offer only high-level directional statements about their approach to AI.

Despite vendor vetting and annual reviews, which have been a staple for organizational security and compliance programs for over a decade, organizations are throwing caution to the wind. It amazes me how many decision-makers essentially say screw it and just dive into the latest AI technology because it makes incredible promises and sounds cutting edge.

There’s also the reality that many organizations still lack any kind of AI policy, which is a critical area to button up. You need policy-based control over all of your sensitive data — without that, you don’t have a foundation for protecting the organization. 

Imagine an employee needs to analyze your current client list, so they run all of your client data through an AI program to generate a report. How do you know that data isn’t now available to unknown third parties?
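One practical control an AI policy can mandate is scanning any text bound for an external AI service for sensitive data before it leaves the building. Here’s a minimal sketch in Python; the specific patterns and the blocking behavior are illustrative assumptions, not a complete data loss prevention solution:

```python
import re

# Illustrative patterns for common sensitive data. A real policy would
# cover far more: client names, account numbers, internal hostnames, etc.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_outbound_text(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def guard_prompt(text: str) -> str:
    """Refuse to forward text that trips the policy; otherwise pass it through."""
    hits = check_outbound_text(text)
    if hits:
        raise ValueError("Blocked by AI usage policy: found " + ", ".join(hits))
    return text  # deemed safe to send to the AI service
```

The point isn’t the regexes themselves; it’s that the check runs automatically, so the policy doesn’t depend on every employee remembering it before pasting a client list into a chatbot.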

No AI Policy? Your Company Is Flirting with Disaster

AI Policies Should Go Beyond Your Company’s Systems

You absolutely must have an AI policy for your company. However, even the most robust internal policies are only half the battle — true security means looking at your team’s habits beyond the corporate firewall. 

Many organizations only consider their systems when building out their security policies. Others will also think about their personnel and vendors. Far fewer give any thought to their personnel outside of the work setting. The reality is that your employees are targets of attacks 24/7, not just when they’re at work.

Protecting your organization doesn’t stop at the four walls of your workplace. While you can’t mandate how employees use their devices in their personal lives, you can provide thorough education about AI safety and other security best practices. It’s important to look at your organization itself to ensure you have the proper governance, but also to train up your personnel so that they’re adequately armed in their personal lives.

Bad actors frequently look for opportunities to leverage employees to gain access to their companies — often because employees just don’t know any better. For example, your employees have their work emails on their phones; what happens when a worker installs a compromised AI tool on their phone? The more educated your workforce is, the better they’ll be able to protect your company while at work, as well as when they’re off the clock.

How Are Compliance Standards Keeping up with AI?

These vulnerabilities and risks aren’t going unnoticed by governing bodies. Expect standards councils to update their standards for AI very soon. Some governing bodies have already released AI standards. NIST, for example, has published the AI Risk Management Framework (AI RMF), which is currently available on TCT Portal.

In the meantime, several standards bodies are providing thought leadership about AI security, such as the PCI Security Standards Council (PCI SSC). The PCI SSC website offers excellent guidance on the use of artificial intelligence, which I highly recommend.

Be Confident in Your Protection

AI is a tremendously valuable tool that has the capability to unlock enormous new business benefits. But this is true only if you can verify that the tools you’re using are protecting your data. If you’re going to use AI in your organization, you’d better do your homework and have a strong capability to scrutinize your vendors’ security practices.

Need somewhere to start? TCT’s Consultants can help you figure out your gaps and needs to develop a robust AI security policy. 
