Good Intentions, Bad Consequences: Your Employees Are Probably Using AI in Secret

Half of your employees are using AI tools without your permission. More than two in five knowingly violate company guidelines. Nearly half upload sensitive company information to public AI platforms. Shadow AI, which has become a hot topic, isn’t a hypothetical risk but a reality in almost every company.

Martti Asikainen 13.2.2026 | Photo: Adobe Stock


Imagine an ordinary Tuesday at your office. The marketing manager asks ChatGPT for help planning a campaign and attaches last year’s sales data. A junior developer copies your company’s proprietary code into Claude for debugging. The HR manager uses AI to summarise confidential recruitment notes.

None of them asks for permission, and none of them tells anyone what they have done. This is far from a hypothetical scenario; it is the reality in almost every company right now. Research shows that business leaders systematically underestimate how extensively their staff use generative AI. According to some estimates, actual usage could be several times higher than what management imagines (Mishova 2025).

KPMG’s research also reveals a harsh truth: up to half of employees use AI tools without their employer’s permission. Even more alarming is that up to 44% knowingly violate company guidelines on AI use to improve their workflows. Additionally, nearly half of all employees upload sensitive company information to public AI platforms (KPMG 2025). In other words, it’s highly likely that your employees are doing this too.

The scale of the phenomenon shows that this isn’t a problem of individual employees but a systemic failure. Without clear guidelines and secure workflows provided by management, employees are left to meet their needs on their own. The result can be chaos, in which shadow AI spreads uncontrollably through the company and sensitive information ends up in the wrong hands (Khan & Asikainen 2026).

How Good Intentions Lead to Breaches

It’s time to acknowledge the facts. Employees are using AI, whether we like it or not. But when they adopt AI tools without guidance or an understanding of the tools’ limitations, they’re not just making their work more efficient; they’re also exposing your company to significant risks. Cautionary examples abound.

One of the first scandals emerged in 2023, when it came to light that a group of engineers at electronics giant Samsung had used ChatGPT to debug their code, sharing highly confidential source code and internal notes with the model (Gurman 2023; Ray 2023). They were using the free version of ChatGPT, which meant the information they entered could be used to train the model.

There are other similar cases. Amazon employees, for instance, used ChatGPT to summarise confidential AWS documents, and the AI later began producing responses that eerily resembled the corporation’s internal documents (Kim 2023). Claims have since been made on various forums that it’s still possible to extract AWS source code from the model.

The common denominator in these cases is ignorance. No one was trying to leak sensitive information; AI simply made doing so far too easy, and that is how the damage occurred. For this very reason, investment bank JPMorgan Chase restricted its employees’ use of external AI tools and developed its own secure solutions (Rosen 2023). Apple reached the same conclusion (Tilley & Kruppa 2023).

Hallucinations, Bias, and Flawed Decision-Making

Data security is far from the only challenge that generative AI poses to employees and their workflows. Using these models without a sufficient understanding of how they work can also cause unnecessary stress and headaches, and even lead to decisions based on incorrect information. Language models are notorious for their hallucinations: they are capable of producing convincing-sounding but completely fabricated information.

The worst part, however, is that we humans are particularly prone to believing the answers they provide (Bender et al. 2021; Asikainen 2026). Numerous studies have shown that even people who rely on critical thinking believe AI-generated answers, especially when the answer is fluent and convincing and falls outside the questioner’s own area of expertise (Zou et al. 2023; Ovide 2025). AI models also amplify the biases in their training data, which can make their answers prejudiced and one-sided (Carlini et al. 2023; Qiang et al. 2024; Zhou et al. 2025).

Real life offers several cases in which algorithms have repeated historical hiring patterns and discriminated against certain demographic groups, as happened at Amazon, whose recruitment algorithm discriminated against women (Dastin 2018). Similarly, a customer service chatbot built on a flawed model may discriminate by offering a different level of service to customers from different linguistic backgrounds.

In a company, this could play out as a chain of events in which an employee in the legal department turns to a language model for a legal analysis but forgets to verify the sources; as a result, the company ends up breaking the law or acting unethically and is held accountable. Or a junior analyst in the finance department produces an analysis with a language model without understanding its limitations, and investment decisions go completely awry.

Shadow AI as an Operational Threat

The use of AI in organisations without official oversight or approval has emerged as one of the most significant operational risks for companies in the 2020s. Unlike traditional IT risks, where compromised systems are usually detected relatively quickly, leaks and other breaches caused through AI can run quietly in the background until the damage is already done.

Without a critical understanding of the risks of AI applications, an organisation can unintentionally cause damage across the board, from breaches of equality legislation to harm to its brand and reputation (Noble 2018; Benjamin 2019). In Finland, for example, only 26% of people feel they have sufficient skills to use AI (KPMG 2025).

For this reason, every company should take seriously the minimum AI literacy requirements set out for employers in the EU AI Act (Regulation (EU) 2024/1689). The question is no longer whether AI is being used in your company. The question is whether it’s being used safely, efficiently and responsibly, or in the dark, without any understanding of the consequences.

The solution isn’t to ban AI use. The solution is to build a human firewall: AI literacy that protects your company from risks whilst enabling the efficiency leap that AI offers.

In the next part, we’ll explain what AI literacy means in practice, how it relates to the EU AI Act, and how you can succeed in building it through four concrete steps.

References

  • Asikainen, M. (2026). Is AI Making Us Confident Idiots? (And We Don’t Even Notice). Published in eSignals Pro on 21 January 2026. Accessed 4 February 2026.

  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

  • Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K. & Tramèr, F. (2023). Poisoning Web-Scale Training Datasets is Practical. arXiv. Cornell University.

  • Dastin, J. (2018). Insight – Amazon scraps secret AI recruiting tool that showed bias against women. Published in Reuters on 11 October 2018. Accessed 2 February 2026.

  • Gurman, M. (2023). Samsung Bans Staff’s AI Use After Spotting ChatGPT Data Leak. Published in Bloomberg on 2 May 2023. Accessed 2 February 2026.

  • Khan, A.U. & Asikainen, M. (2026). How (Not) to Destroy Your Business with AI. Published on the Finnish AI Region website on 5 February 2026. Accessed 7 February 2026.

  • Kim, E. (2023). Amazon warns employees not to share confidential information with ChatGPT after seeing cases where its answer ‘closely matches existing material’ from inside the company. 25 January 2023. Accessed 2 February 2026.

  • KPMG. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. Accessed 2 February 2026.

  • Mishova, A. (2025). AI literacy for businesses: What is it and why it matters. Published on the GDPR Local website on 20 June 2025. Accessed 2 February 2026.

  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press. https://doi.org/10.2307/j.ctt1pwt9w5

  • Ovide, S. (2025). You are hardwired to blindly trust AI. Here’s how to fight it. Published in The Washington Post on 3 June 2025. Accessed 29 November 2025.

  • Qiang, Y., Zhou, X., Zade, S.Z., Roshani, M. A., Khanduri, P., Zytko, D. & Zhu, D. (2024). Learning to Poison Large Language Models During Instruction Tuning. arXiv. Cornell University.

  • Ray, S. (2023). Samsung Bans ChatGPT Among Employees After Sensitive Code Leak. Published in Forbes on 2 May 2023. Accessed 2 February 2026.

  • Rosen, P. (2023). JPMorgan limits traders’ use of ChatGPT amid regulatory concerns about financial information, report says. Published in Business Insider on 22 February 2023. Accessed 2 February 2026.

  • Tilley, A. & Kruppa, M. (2023). Apple Restricts Employee Use of ChatGPT, Joining Other Companies Wary of Leaks. Published in Wall Street Journal on 18 May 2023. Accessed 2 February 2026.

  • Zhou, X., Qiang, Y., Ji, J., Cao, L., Li, J. & Gu, Q. (2025). Unlocking Backdoors in Large Language Models: On Data Contamination and Model Editing. arXiv. Cornell University.

  • Zou, A., Wang, Z., Carlini, N., Nasr, M., Kolter, J.Z. & Fredrikson, M. (2023). Universal and Transferable Adversarial Attacks on Aligned Language Models. arXiv. Cornell University.

 

Authors

Martti Asikainen

Communications Lead
Finnish AI Region
+358 44 920 7374
martti.asikainen@haaga-helia.fi

