As generative artificial intelligence (AI) platforms rapidly reshape U.S. workplaces, there’s a growing rift between employee behavior and company policies.
Nearly half of employees said they were using banned AI tools at work, according to a survey by security company Anagram, and 58 percent admitted to pasting sensitive data into large language models, including client records and internal documents.
Why It Matters
The widespread, sometimes covert, use of AI tools like ChatGPT, Gemini, and Copilot is exposing organizations to mounting cybersecurity, compliance, and reputational risks.
The onus increasingly falls on employers to train their teams and set clear AI governance, yet recent reports indicate most are lagging behind. Workplace culture, generational attitudes, and inadequate training further muddy the waters, leading to what experts call “shadow AI” use.
What To Know
The findings from Anagram's survey of 500 full-time U.S. employees across industries and regions were stark.
Roughly 78 percent of respondents said they are already using AI tools on the job, often in the absence of clear company policies, and 45 percent confessed to using banned AI tools at work.
Nearly six in 10 (58 percent) said they have entered sensitive company or client data into large language models like ChatGPT and Gemini. And 40 percent admitted they would knowingly violate company policy if it meant completing a task more efficiently.
“This poses significant threats. The content input into external AI systems may be stored or used to train models, risking leaks of proprietary information,” Andy Sen, CTO of AppDirect, a B2B subscription commerce platform that recently launched its own agentic AI tool, devs.ai, told Newsweek.
“The company may not be aware that AI tools have been used, creating blind spots in risk management. This could lead to noncompliance with industry standards or even legal consequences in regulated environments.”
A KPMG-University of Melbourne global survey of 48,340 professionals in April found that 57 percent of employees worldwide hide their AI use from supervisors, with 58 percent intentionally using AI for work and 48 percent uploading company information into public tools.
AI usage already shows sharp industry and generational divides.
Younger workers, particularly those in Generation Z, are at the forefront of AI adoption; nearly 50 percent of Gen Z employees think their supervisors do not understand the advantages of the technology, according to a 2025 UKG survey.
Many Gen Z workers taught themselves AI skills and want AI to handle repetitive workplace tasks, though even senior leaders encounter resistance and trust barriers when trying to foster responsible use.
“Employees aren’t using banned AI tools because they’re reckless or don’t care,” HR consultant Bryan Driscoll told Newsweek. “They’re using them because their employers haven’t kept up. When workers are under pressure to do more with less, they’ll reach for whatever tools help them stay efficient. And if leadership hasn’t set any guardrails, that’s not a worker problem.”
A lack of proper AI education compounds the risks across the workforce.
Fewer than half (47 percent) of employees globally say they have received any formal AI training, according to KPMG. Many rely on public, unvetted tools: 66 percent of surveyed employees said they use AI output without verifying its accuracy, and more than half reported mistakes attributed to unmonitored AI use.
Despite the efficiency gains cited by users, these shortcuts have led to incidents of data exposure, compliance violations, and damaged organizational trust.
What People Are Saying
Harley Sugarman, founder and CEO of Anagram Security, said in the company’s report: “With government resources shrinking, private companies must take on a bigger role in securing their networks and educating their teams. Our survey makes it clear: employees are willing to trade compliance for convenience. That should be a wake-up call.”
What Happens Next
Organizations are being urged to implement modern, transparent AI training and set clear guidelines so employees can develop, rather than hide, their AI competencies.
“It’s tempting for companies to simply block access to external AI tools, but this is challenging given how ubiquitous AI access is, and it may also stifle innovation,” Sen said. “A better solution is to create approved ‘AI playgrounds’ for employees…This way, companies gain the benefit of decentralized, rapid innovation while avoiding the risks of shadow AI.”