
AI copilots are making internal breaches easier and costlier to defend against

  • Copilot programs using generative AI (like Microsoft's Copilot, GitHub Copilot, Salesforce's Einstein Copilot and Adobe Firefly) are the primary method of implementing gen AI at an organization.
  • However, these programs can increase the risk of insiders gaining access to information never meant for them.
  • As for what makes data insecure, it's not the gen AI itself, but the fact that organizations haven't cleaned up the data and set new access permissions in advance of copilot usage.

Security through obscurity is gone. At least, that's what data security experts like Matt Radolec, vice president of incident response at data security company Varonis, say.

Copilot programs using generative AI (like Microsoft's Copilot, GitHub Copilot, Salesforce's Einstein Copilot and Adobe Firefly) are the primary method of implementing gen AI at an organization, according to a recent Gartner report.

"Copilots have pass-through permissions," Radolec said. This means that whatever an employee can access, the technology can access — but the copilot has the benefit of being able to sift through a corporate-wide database on the fly, meaning it can share files and data to someone who may not have approval to access them.

In other words, copilots increase the risk of insiders gaining access to information never meant for them, whether the employee's intent is curiosity or something more malicious.

As for what makes data insecure, it's not the gen AI itself, according to Radolec. "It's because organizations haven't cleaned up the access to data that they're getting by a copilot," he said.

This is where permissions come into play.

Data permissions and zero-trust security

Permissions enable individuals or groups to access certain data, actions or operations across an organization. Cybersecurity best practices generally favor the principle of least privilege, where an employee receives the minimum level of access necessary to perform their job. This aligns with zero-trust architecture, a cybersecurity model that reduces risk by not inherently trusting any user or device and instead verifying every access request.
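In code, least privilege and a deny-by-default posture amount to checking every request against an explicit grant, roughly like the sketch below. The roles and actions are invented for illustration, not taken from any particular product.

```python
# Hypothetical sketch of least-privilege, deny-by-default access checks.
# Every request is verified against an explicit grant; nothing is trusted implicitly.

ROLE_GRANTS = {
    "hr_manager": {"read_salaries", "read_employee_records"},
    "engineer":   {"read_source_code"},
}

def is_allowed(role: str, action: str) -> bool:
    """Zero-trust style check: allow only what is explicitly granted."""
    return action in ROLE_GRANTS.get(role, set())

assert is_allowed("hr_manager", "read_salaries")
assert not is_allowed("engineer", "read_salaries")    # least privilege: not granted, so denied
assert not is_allowed("contractor", "read_salaries")  # unknown role: denied by default
```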

But people may get lazy with permissions, instead clicking buttons that share a document with everyone in the department or organization (or worse, everyone on the internet). "People tend to do the easiest thing, the one that keeps them from having to make more requests or set permissions five times a week," Radolec said. "But we have to shift it, not just as an industry but also as people towards taking care of this data just as we would if it was printed out."

While organizations want to believe everyone they've hired has the best intentions, malicious (and overly curious) insiders are a real threat.

"They're leaking it to competitors, they're using it for their own personal gain, or in some cases also committing things like identity fraud, wire fraud," Radolec said. Earlier this year, one client brought Varonis in to figure out how an administrative employee asked for the exact highest raise they could get — to the dollar. "They had abused their privileges as an admin to get to that data," he said. "This is the type of thing that these copilots make really, really easy."

On a more organized front, Radolec has seen criminal groups put five or six people in a call center with the intention of stealing identities at scale. "Copilots make that even easier. Now, they don't have to wait for the phone call to happen. They can say [to the copilot], 'Give me a list of customers and their social security numbers separated by customer lifetime value or total income.'"

There are a lot of ways to mitigate this risk through what Shawnee Delaney, CEO of human risk management consulting company Vaillance Group, calls the "security onion." For one, standards around collecting too much customer data for storage and sale are coming into play (Oracle's recent $115 million privacy settlement never went to trial, but it did raise awareness of the issue of over-collecting data).

Protecting against bad actors within an organization requires a broad-brush approach that manages both the technical and human sides. Delaney's experience recruiting spies as a former Defense Intelligence Agency case officer, and standing up insider threat programs at companies like Merck and Uber, gives her a clear view of the spectrum of threats an organization faces from within, and of the ways to manage them.

Where technical and human cybersecurity meet

On the technical side of preventing internal threats from materializing, Delaney references the importance of conducting comprehensive risk assessments and third-party security audits, particularly with third-party gen AI tools. Organizations should look out for vendor security certifications like ISO/IEC 27001, complete regular software updates and perform continuous monitoring and anomaly detection.
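Continuous monitoring and anomaly detection can be as involved as a full user-behavior analytics platform, but the core idea is simple enough to sketch: compare each user's latest activity to their own baseline and flag outsized jumps. The users, counts and threshold below are hypothetical, and real deployments use far richer signals.

```python
# Hypothetical sketch of a simple anomaly check on data-access logs:
# flag any user whose daily access count jumps far above their own recent baseline.

from statistics import mean, pstdev

def flag_anomalies(daily_counts: dict[str, list[int]], threshold: float = 3.0) -> list[str]:
    """Return users whose latest daily count exceeds mean + threshold * stdev of their history."""
    flagged = []
    for user, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to build a baseline
        baseline, spread = mean(history), pstdev(history)
        if latest > baseline + threshold * max(spread, 1.0):
            flagged.append(user)
    return flagged

logs = {
    "analyst_a": [12, 15, 11, 14, 13],   # steady usage
    "admin_b":   [20, 18, 22, 19, 400],  # sudden bulk access, e.g. via a copilot prompt
}
print(flag_anomalies(logs))  # -> ['admin_b']
```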

Additionally, Delaney says, segmenting sensitive data can reduce the impact of a potential breach, and using strong encryption for data at rest and in transit will help mitigate the risks gen AI introduces.
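As a rough illustration of the at-rest half of that advice, the snippet below uses the open-source Python cryptography package to encrypt a record before it is stored. The package choice and the record are assumptions made for the example, not something Delaney prescribes.

```python
# Hypothetical sketch: symmetric encryption of a sensitive record at rest,
# using the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep this in a key-management service
fernet = Fernet(key)

record = b'{"employee": "jdoe", "salary": 95000}'
ciphertext = fernet.encrypt(record)       # what actually lands on disk
assert fernet.decrypt(ciphertext) == record
```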

For engineering teams using GitHub Copilot to support coding tasks, a Stanford study concluded that participants who had access to an AI assistant were more likely to write insecure code, which is more likely to be targeted by bad actors on the outside, inside or both. This underscores how essential human review remains in the engineering equation, something GitHub recognizes. A GitHub spokesperson said Copilot now features an AI-based vulnerability prevention system that blocks insecure coding patterns in real time, and also offers Copilot Autofix to find and fix code vulnerabilities.
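The kind of flaw the Stanford researchers flagged, and that human review and vulnerability scanning are meant to catch, often looks like the generic example below: user input concatenated into a SQL query, versus the parameterized version a reviewer would ask for. The snippet is an illustration, not code from the study or from Copilot.

```python
# Generic illustration of an insecure pattern and its reviewed fix (not from the study).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.execute("INSERT INTO employees VALUES ('jdoe', 95000)")

def lookup_insecure(name: str):
    # Vulnerable: user input is concatenated into the SQL string (injection risk).
    return conn.execute(f"SELECT salary FROM employees WHERE name = '{name}'").fetchall()

def lookup_reviewed(name: str):
    # Fixed: the driver binds the parameter, so input cannot alter the query.
    return conn.execute("SELECT salary FROM employees WHERE name = ?", (name,)).fetchall()

print(lookup_insecure("jdoe' OR '1'='1"))  # returns every row: the injection succeeds
print(lookup_reviewed("jdoe' OR '1'='1"))  # returns nothing: treated as a literal name
```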

There's also value in proceeding at a slow and steady pace with gen AI processes. Meredith Graham, chief people officer at managed service provider Ensono, is part of an AI executive oversight committee at her company. "I'm a member of it, even though I'm not a technologist, to make sure that we think about that people side of things," said Graham, referencing her focus on the unintended consequences of how data could be used or flow down the pipeline.

Ensono uses Microsoft Copilot, but the team decided to limit access to it for security reasons. "We always want to make sure that data is only exposed to the people that need to know it," she said.

Instead, Ensono is developing its own tool so the company can better enable and control its data security. Only approved human resources personnel would be able to access certain personal and pay information about employees, for example. It intends to launch the tool slowly across the organization later this year.

"The biggest hurdle is putting the information that we know people need into that tool and then granting the right security," Graham said. "We're starting with sales and marketing, which is the data that's out there already, and then we'll start to transition to more secure data, like HR or financial data."

Whether it's filling technical gaps, understanding the wants and needs of employees or protecting an organization from bad intent, there are many ways to secure enterprise-wide gen AI technology. Gen AI has made cybersecurity mistakes costlier, and insiders now pose a bigger threat, malicious or otherwise, than ever before. It's not gen AI's fault (Radolec recognizes "the natural beauty of AI"), but as the vehicle for corporate change, copilot-like technologies amplify the risk, which makes now the time to lock down security.

 
