The Problem Is the AI Strategy — Specifically, the Lack of One

By the time your organisation finishes debating whether it needs an AI strategy, your employees will have already built one. It just will not be the one you wanted — and unlike the official version, theirs is already running.

Martti Asikainen 15.4.2026 | Photo created with AI

Illustration: a corporate leader overwhelmed by the gap between a perceived "easy guide" to AI strategy and the messy reality of technical and ethical implementation.

Picture a typical Monday morning. A junior account manager at a mid-sized consultancy has a client report due at noon. She opens ChatGPT, pastes in three pages of meeting notes, and asks it to produce a summary with recommendations. The notes contain client names, budget figures, and a candid assessment of the client’s internal politics written by a partner who assumed it would never leave the firm.

The summary is ready in forty seconds. The report goes out at 11:58. Two floors up, someone in legal is summarising contract documents with a cool new AI tool a neighbour recommended over the weekend. Meanwhile, HR is using a different one entirely to screen CVs. Nobody has told them to do any of this. Then again, nobody has told them not to either. Nothing goes wrong. Or at least nothing visibly goes wrong.

The client is satisfied. The partner never finds out. The account manager does the same thing the following week, and the week after that. So do the person in legal and the entire HR department. By the time anyone thinks to ask what tools are in use, the workflows are established, the habits are formed, and the data has already left the building dozens of times.

This is a hypothetical scenario. But it is also a fair description of the current state of most organisations — the majority of which, by most available evidence, have yet to build a coherent AI strategy. And the longer that strategy stays unbuilt, the harder the problem becomes to manage.

The Strategy Vacuum Does Not Stay Empty

Organisations often treat the absence of an AI strategy as a neutral state: a pause before the decision is made. Truth be told, it is not. A strategy vacuum is never empty. It fills immediately, and it fills with whatever tools individual employees find useful, affordable, and available.

Research published by KPMG found that up to half of employees use AI tools without their employer’s permission, and as many as 44% knowingly violate company guidelines in order to improve their own workflows (KPMG 2025). A separate finding from the same study revealed that nearly half of all employees upload sensitive company information to public AI platforms. Not out of carelessness, but because the platforms are genuinely useful and no sanctioned alternative exists.

The scale of this tells us something important. This is not an individual compliance problem. It is a structural consequence of organisational inaction. When management does not provide tools, guidance, or a framework for using AI responsibly, employees do not stop using AI. They use it anyway — invisibly, without guardrails, and without any shared understanding of what the risks might be.

This phenomenon, called shadow AI, has been discussed primarily as a cybersecurity and data protection risk, and it is both of those things (Khan & Asikainen 2026). But it is also something else: a signal. Employees reaching for AI tools outside official channels are not being malicious or reckless. They are solving real problems with the resources available to them. The organisation has simply chosen not to be one of those resources.

The Longer You Wait, the Harder It Gets

Here is what makes delayed action more than just a missed opportunity. Shadow AI does not remain static while the strategy is being developed. It compounds. Every week without a strategy is a week in which new workflows become established, new tool dependencies develop, and new habits form. The marketing team that has been using a particular AI assistant for six months has now built its entire content pipeline around it.

Asking them to switch to an officially approved alternative, or to stop using it altogether, is no longer a simple policy decision. It is a change management challenge. The same goes for legal and the HR department. You are no longer just introducing a policy. You are asking people to give up something that has made their working lives easier, without being entirely sure you can offer something better.

The same dynamic plays out across all departments, each developing its own informal AI culture, its own preferred tools, its own workarounds. By the time the organisation produces a strategy, it is not starting from a blank canvas. It is attempting to impose structure on a landscape that has already been settled — unevenly, inconsistently, and without any of the governance frameworks that a proper strategy would have provided from the start.

This is not a reason to abandon the effort. It is a reason to begin now, and to build in a way that accounts for the reality on the ground rather than the state the organisation wished it were in. Time is not on your side.

The Investment Trap

The numbers from Deloitte’s 2026 State of AI in the Nordics report make the problem concrete. 76% of Nordic organisations plan to significantly increase AI investment this year, while strategic preparedness has dropped from 61% to 43% and talent preparedness has collapsed from 33% to just 14% in twelve months (Deloitte 2026). Spending is accelerating while readiness declines — and that is not a coincidence or a temporary lag.

The report’s explanation is straightforward: organisations that deployed off-the-shelf generative AI tools developed a false sense of readiness. Operational integration then revealed that deploying tools and building workforce capability to use them are fundamentally different challenges. The infrastructure was never the hard part.

Compounding this, only 20% of Nordic organisations have appointed someone responsible for measuring value from AI initiatives, compared with 32% globally (Deloitte 2026). Organisations are tracking ROI, building frameworks, producing reports — but in four out of five cases, nobody owns the question of whether any of it is actually working. Investment without accountability is not a strategy. It is optimism with a budget.

The Strategy That Arrives Too Late

There is a particular failure mode worth naming explicitly. Organisations that delay AI strategy development often compensate by producing something comprehensive, formal, and slow. A committee is formed. A framework is drafted. Consultants are engaged. The resulting document is thorough, carefully worded, and entirely disconnected from what employees are actually doing.

This type of strategy does not replace shadow AI. It exists alongside it, largely ignored, because it does not address the specific tools people are using, the specific workflows they have developed, or the specific problems they were trying to solve when they reached for an unsanctioned AI assistant in the first place.

The question is not whether to have a strategy. The question is whether the strategy reflects organisational reality or only organisational aspiration. Most organisations, if they are honest, already know which side of that line their strategy falls on. The document exists. The committee met. The framework was approved. But the person in legal is still using the tool their neighbour recommended. Knowing is not the problem. Acting on what you know is.

The People Who Already Know the Answer

Here is the part that most AI strategy processes get exactly wrong. The employees who have been using AI tools outside official channels, the ones downloading browser extensions, signing up for free tiers, quietly building automations during lunch breaks, have already done a significant amount of the strategy’s underlying work.

They know which tasks are genuinely improved by AI assistance and which ones produce confident-sounding output that still requires a human to catch the errors. They know which tools are actually useful for their specific workflows and which ones were impressive in a demo and useless in practice. They know where the time savings are real and where the promised efficiency gains evaporate the moment a task requires any nuance.

This knowledge is almost never captured. It sits in individual workflows, in shared folders, in Slack threads between colleagues who have figured something out and are passing it along informally. The strategy process, if it happens at all, tends to involve senior leadership, a technology team, and possibly an external consultant: people who have often done considerably less hands-on AI work than the junior staff they are governing.

Excluding this knowledge from strategy development is not just a missed opportunity. It is a governance error. A strategy built without employee input is a strategy built on assumptions about how work happens rather than on knowledge of how it actually happens. It will mandate tools that do not fit the workflow, prohibit tools that have already become essential, and generate workarounds of its own.

The irony is that the same people whose unsanctioned tool use represents the problem are also the most efficient path to the solution. The most effective AI strategies are not handed down. They are built upward, from the people closest to the work. Engaging employees not as subjects of governance but as sources of intelligence is not a concession to informality; it is the difference between a strategy that gets followed and one that gets filed.

The Turn Most Strategies Miss

At this point, the natural instinct is to reach for a framework: audit existing use, identify risk categories, produce a policy, train staff, document compliance. All of that is necessary. None of it is sufficient on its own. The structural problem is that AI governance built primarily around risk mitigation tends to produce environments where employees are clear on what they cannot do and unclear on what they can.

The typical output is a list of prohibited platforms, a clause about sensitive data, and a reminder to check AI outputs before sharing them. This is a constraint system, not a capability system. It reduces the most visible risks whilst leaving the underlying dynamic unchanged: employees are left with real productivity problems and no supported path to solving them.

The Deloitte findings make this dynamic visible. Among Nordic organisations, the share giving at least 40% of the workforce access to approved AI tools jumped from 37% to 56% in a single year (Deloitte 2026). That is a meaningful increase in sanctioned access. Yet in the same period, strategic and talent preparedness both fell sharply. Broader access, in other words, did not produce better organisational readiness, because access without capability, ownership, and strategic clarity is not enablement. It is proliferation.

The organisations that manage this well do not just produce policies. They produce sanctioned alternatives that are actually better than the unsanctioned ones. These tools meet employees where they are, fit into existing workflows, and handle the data governance questions without placing the entire burden on individual judgement. That is a far harder problem to solve than writing a policy document.
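
The audit step, at least, can start small. As a minimal sketch, and purely as an illustration: assuming a web-proxy or DNS log exported to CSV with user and domain columns, a first pass at mapping existing use might be no more than counting requests to well-known consumer AI endpoints. The column names, the file name, and the domain watchlist below are assumptions for the example, not part of any cited framework or product.

```python
# First-pass shadow-AI inventory from a proxy log export (illustrative sketch).
# Assumes a CSV with "user" and "domain" columns; adapt names to your export.
import csv
from collections import Counter, defaultdict

# Hypothetical watchlist of consumer AI endpoints; extend with your own.
AI_DOMAINS = {
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def audit(log_path: str) -> None:
    """Print which AI domains appear in the log, and how widely they are used."""
    hits = Counter()          # requests per AI domain
    users = defaultdict(set)  # distinct users per AI domain
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_DOMAINS:
                hits[domain] += 1
                users[domain].add(row["user"])
    for domain, count in hits.most_common():
        print(f"{domain}: {count} requests from {len(users[domain])} users")

if __name__ == "__main__":
    audit("proxy_export.csv")  # hypothetical export file
```

Even a crude count like this turns "we suspect people are using these tools" into a concrete picture of which tools, how often, and how widely. That picture is exactly the evidence an employee-informed strategy process needs as its starting point, and a far better opening for the conversation than a list of prohibitions.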

The Cost of Waiting Is Already Being Paid

The decision to delay AI strategy development is often framed as prudence. A responsible choice to wait until the technology is better understood, the regulatory environment is clearer, or an internal consensus is reached. What this framing misses is that the delay has costs, and those costs are not hypothetical future risks. They are being paid right now, in the form of data exposed to unsanctioned platforms, decisions made on the basis of AI outputs that no one has verified, and a widening gap between the official position on AI and its actual practice.

Shadow AI incidents cost organisations significantly more to resolve than standard security incidents, primarily because of the time required to identify what data was involved and who had access to it (Zorz 2026). The EU AI Act, which becomes fully enforceable in August 2026, places explicit obligations on organisations to ensure employees have adequate AI literacy and that AI use is appropriately governed, obligations that an organisation without a strategy is structurally unable to meet (EU 2024/1689). 

And a strategy without a named owner is a document, not a commitment, which is precisely the situation the Deloitte data describes: organisations tracking KPIs and producing ROI reports whilst nobody owns the question of whether any of it is actually working. The Nordic picture makes the stakes concrete. These are organisations that outperform global peers on technical infrastructure (55% report high preparedness, compared with 43% globally), and yet their strategic and talent readiness is collapsing (Deloitte 2026).

More spending, stronger foundations, declining direction. The conclusion is not that Nordic organisations are doing something uniquely wrong. It is that technical readiness, on its own, does not produce strategic coherence. It just makes the gap between capability and direction more expensive. For an organisation still in the deliberation phase, the question is therefore not whether an AI strategy is necessary. It is whether the cost of continued delay is one the organisation has consciously decided to accept, or simply one it has not yet got around to calculating.

The next time you find yourself in a meeting where someone says “we are still working through our AI position,” ask a simpler question: what is everyone doing in the meantime? In most organisations, the honest answer is that employees are already using AI, and already know what works. That knowledge is where the strategy should start. Mapping it is the first step, and it is one you do not have to take alone. If you are unsure where to begin, the Finnish AI Region offers free support in mapping and developing AI strategy as part of its work across the region.

References

Deloitte. (2026). State of AI in the Nordics: Deloitte’s State of AI in the Enterprise report series — Nordic cut. Deloitte AI Institute.

European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.

Khan, A. U. & Asikainen, M. (2026). How (Not) to Destroy Your Business with AI. Finnish AI Region. https://www.fairedih.fi/en/2026/02/05/how-not-to-destroy-your-business-with-ai/

KPMG. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. KPMG International.

McKinsey & Company. (2025). The state of AI in 2025: Agents, innovation, and transformation. McKinsey Global Institute.

NIST. (2023). Artificial intelligence risk management framework (AI RMF 1.0). National Institute of Standards and Technology.

Zorz, M. (2026). AI went from assistant to autonomous actor and security never caught up. Help Net Security. https://www.helpnetsecurity.com/2026/03/03/enterprise-ai-agent-security-2026/

Authors

Martti Asikainen

Communications Lead
Finnish AI Region
+358 44 920 7374
martti.asikainen@haaga-helia.fi
