68% of security leaders admit to unauthorized AI usage

Greg Pollock, head of Research and Insights at UpGuard

Cybersecurity and risk management vendor UpGuard has released its new “State of Shadow AI” report, detailing the widespread use of unapproved generative AI tools, or “Shadow AI,” by employees in the workplace. The data shows that employees worldwide are actively bypassing corporate governance at all levels, with a staggering 8 out of 10 employees using unauthorized AI tools. This widespread non-compliance extends all the way to the top: 68% of security leaders, including CISOs, admit to incorporating unauthorized AI into their daily workflows. The trend is an increasing concern for organizations, as unsanctioned tools expose companies to greater security risk.

The report also highlights a critical AI security paradox. The 40% of employees who report having received AI safety training, and who therefore have a better understanding of the risks, are also the ones who use unapproved tools most frequently. This correlation suggests that compliance and security awareness campaigns need to evolve to accommodate employees’ growing drive for productivity and confidence in new technology.

“Shadow AI has triggered a challenge in maintaining trust between employer and employee,” said Greg Pollock, head of Research and Insights at UpGuard. “Our data shows that increased security training and literacy does not curtail shadow AI usage; in fact, it increases it. Organizations need to better engage with their employees about AI to channel that curiosity appropriately.”

UpGuard’s research indicates that traditional security awareness methods are not effective at curtailing unapproved AI usage and are instead enabling “AI power users.” The paradox is further aggravated by seniority, with shadow AI usage increasing alongside managerial authority; senior leadership across the organization is 50% more likely to use shadow AI.

“Does the world really need another study of shadow AI?” Pollock asked. “That was my first thought going into this project. Reading dozens of previous reports did not change that impression: there’s a lot of shadow AI out there, and a lot of reports saying so. But the more I read, the more apparent it became that something important was missing. This endless supply was not meeting what was actually in demand. While existing research answered the question ‘is there shadow AI?’, there wasn’t much on the more important question: why?

“The naive answer is that AI tools help workers accomplish their tasks, so of course they will use them,” Pollock stated. “On the other hand, it’s also useful for workers who want to maintain their employment to abide by company policies, so it’s not quite that simple. We need a more nuanced articulation of the incentives for and against using unsanctioned AI tools if we want an actor theory that can be operationalized to reduce risk from shadow AI.

“To understand why people use shadow AI, we need to be willing to consider more broadly why people do anything at all,” Pollock continued. “Yes, we want to get our work done, and we want to avoid punishment, but we are also social creatures, driven mostly by emotions and the need for belonging, working with imperfect information to optimize for our perceived in-group’s benefit. We need to add a little more texture to ‘people do what is good for them’ to explain actual human behavior.

“I won’t recapitulate the entire report here, but that is a useful frame for reading it, and in particular for digesting the most challenging findings,” Pollock said. “The people most likely to use shadow AI are those who, on paper, should be the least likely: the AI experts and executives who feel they have the intellectual or institutional authority to exempt themselves from the rules. (If you recognize yourself in that description, um, you aren’t alone.) Those findings are counter-intuitive if you view humans as meaty computers, and perfectly intelligible if you think about any of the people in your life. That is my big takeaway from this work: those concerned about the risks of shadow AI should engage with others in their organization as people. Our report discusses worker motives that are measurable in the aggregate, but human diversity is vast, and the incentives driving the people around you may differ. The risks of unapproved software are real, but so are the benefits that might be driving your coworkers to accept those risks. From here on, let’s just assume people are using AI tools, and instead start the conversation by asking why.”

The report finds that:

A surprising 90% of security leaders themselves report using unapproved AI tools at work, with 69% of CISOs incorporating them into their daily workflows.

27% of workers trust AI more than their managers or colleagues for reliable information, further highlighting a growing divide between employees and corporate authority.

23% of CISOs know that passwords and other credentials are being shared with AI tools within their company, indicating that organizations are becoming increasingly exposed.

Furthermore, while 52% of employees are familiar with their company’s AI usage policy, 70% know of sensitive data being shared with AI tools at their workplace.

Unauthorized AI usage in the workplace will continue to rise unless stronger governance is implemented. It is also clear that the problem cannot be solved simply by blocking applications, as 41% of employees find a way around such blocks.

For companies keen on creating a transparent environment, shifting from a fear-based approach of restriction to one of guided enablement is a strategic necessity. That shift means providing visibility, implementing intelligent guardrails, and offering vetted tools so that the secure path becomes the path of least resistance.

Data for this report were collected through two separate surveys. The survey of security leaders was conducted by Dynata between August 18 and 31, 2025. Its 542 respondents were security professionals in leadership positions at companies with more than 200 employees, located in the US, Canada, the APAC region (comprising Australia, New Zealand, Singapore, and Malaysia), and India. The survey of employees was conducted on the Prolific platform between July 30 and August 11, 2025. Its respondents were 1,020 people in the US and UK who reported being currently employed and could provide the employee count and industry classification of their employer.