Half of your employees are using AI tools without your permission. More than four in ten knowingly violate company guidelines. Nearly half upload sensitive company information to public AI platforms. Shadow AI, which has become a hot topic, isn’t a hypothetical risk but a reality in almost every company.
Martti Asikainen 13.2.2026 | Photo: Adobe Stock
Imagine an ordinary Tuesday at your office. The marketing manager asks ChatGPT for help planning a campaign and attaches last year’s sales data. A junior developer pastes your company’s proprietary code into Claude for debugging. The HR manager uses AI to summarise confidential recruitment notes.
None of them asks for permission, and none of them tells anyone what they did. This is far from a hypothetical scenario—it’s the reality in almost every company right now. Research shows that business leaders systematically underestimate how extensively their staff uses generative AI. According to some estimates, actual usage could be several times higher than what management imagines (Mishova 2025).
KPMG’s research also reveals a harsh truth: up to half of employees use AI tools without their employer’s permission. Even more alarming is that up to 44% knowingly violate company guidelines on AI use to improve their workflows. Additionally, nearly half of all employees upload sensitive company information to public AI platforms (KPMG 2025). In other words, it’s highly likely that your employees are doing this too.
The scale of the phenomenon shows that this isn’t a problem of individual employees but a systemic failure. When management provides no clear guidelines or secure workflows, employees improvise solutions on their own. The result can be chaos in which shadow AI spreads uncontrollably through the company and sensitive information ends up in the wrong hands (Khan & Asikainen 2026).
It’s time to acknowledge the facts. Employees are using AI, whether we like it or not. But when they adopt AI tools without your guidance and without understanding their limitations, they aren’t just making their work more efficient; they’re also exposing your company to significant risks. The cautionary examples keep piling up.
One of the first scandals emerged in 2023, when it came to light that a group of engineers at electronics giant Samsung had used ChatGPT to debug their code, sharing highly confidential source code and internal notes with the model (Gurman 2023; Ray 2023). Because they were using the free version of ChatGPT, that information could be used to train the model.
Amazon faced a similar problem. Employees had used ChatGPT to summarise confidential AWS documents, and the AI began producing responses that eerily resembled the corporation’s internal documents (Kim 2023). Claims have since circulated on various forums that it’s still possible to extract AWS source code from the model.
The common denominator in these cases is ignorance. No one set out to leak sensitive information; the tools simply made it too easy, and the damage was done. For this very reason, investment bank JPMorgan Chase banned all external AI tools for its employees and developed its own secure solutions (Rosen 2023). Apple reached the same conclusion (Tilley & Kruppa 2023).
Data security is far from the only challenge that generative AI poses for employees and their workflows. Using these models without a sufficient understanding of how they work can cause unnecessary stress and headaches, and even lead to decisions based on incorrect information. Language models are notorious for hallucinating: they can produce convincing-sounding but completely fabricated information.
The worst part, however, is that we humans are particularly prone to believing the answers they provide (Bender et al. 2021; Asikainen 2026). Numerous studies have shown that even people who rely on critical thinking accept AI-generated answers, especially when the answer is phrased fluently and convincingly and falls outside the questioner’s own area of expertise (Zou et al. 2023; Ovide 2025). AI models also amplify the biases in their training data, which can make their answers prejudiced and one-sided (Carlini et al. 2023; Qiang et al. 2024; Zhou et al. 2025).
Real life offers several cases where algorithms have repeated historical hiring patterns and discriminated against certain demographic groups. This happened at Amazon, whose recruitment algorithm discriminated against women (Dastin 2018). In the same way, a customer service chatbot built on a flawed algorithm can discriminate by offering different levels of service to customers from different linguistic backgrounds.
In a company, this could play out as follows: an employee in the legal department turns to a language model for a legal analysis but forgets to verify the sources, and the company ends up breaking the law or acting unethically and is held accountable. Or a junior analyst in the finance department produces an analysis with AI without understanding its limitations, and investment decisions go badly awry.
The use of AI in organisations without official oversight or approval has emerged as one of the most significant operational risks for companies in the 2020s. Unlike traditional IT risks, where compromised systems are usually detected relatively quickly, leaks and other breaches that happen through AI stay quietly in the background until something goes wrong.
Without a critical understanding of the risks of AI applications, an organisation can unintentionally do damage on every front, from breaching equality legislation to tarnishing its brand and reputation (Noble 2018; Benjamin 2019). And the skills gap is real: in Finland, only 26% of people feel they have sufficient skills to make use of AI (KPMG 2025).
For this reason, every company should take seriously the minimum AI literacy requirements that the EU AI Act (Regulation (EU) 2024/1689) sets for employers. The question is no longer whether AI is being used in your company. The question is whether it’s being used safely, efficiently and responsibly, or in the dark, without any understanding of the consequences.
The solution isn’t to ban AI use. The solution is to build a human firewall: to develop AI literacy that protects your company from these risks whilst enabling the efficiency leap that AI offers.
In the next part, we’ll explain what AI literacy means in practice, how it relates to the EU AI Act, and how you can succeed in building it through four concrete steps.
Communications Lead
Finnish AI Region
+358 44 920 7374
martti.asikainen@haaga-helia.fi