In the first part of this series, we showed that AI’s impact will be felt most acutely in office work, not just tech jobs. But awareness alone won’t protect your organization. The companies and regions that thrive in the years ahead won’t be those with the lowest AI exposure — they’ll be the ones that prepare now. Here’s how to start.
Umair Ali Khan & Martti Asikainen, 14.12.2025 | Photo by Adobe Stock Photos
In the previous article, we mapped where AI's impact will land and why it's critical to understand how AI will reshape your workforce. Knowing exactly which roles are affected, when, and what to do about it is powerful. Now we show you how to turn that insight into action: the questions to ask, the strategies to deploy, and the steps to take before disruption becomes crisis.
The pattern is clear from research: AI will reshape office and administrative work before it transforms most other sectors. But this clarity creates a choice. Organizations and regions can either prepare systematically now, or react in crisis mode later. The difference between these approaches will determine which workers successfully transition and which are left behind.
The good news: AI exposure doesn't have to mean unemployment or chaos. When leaders understand not just the size of the disruption but its specific shape (where it's concentrated, which tasks are affected, how quickly it's likely to move), they can design responses that actually work.
This article shows you how to move from awareness to action.
Here’s a crucial insight that many leaders miss: two places can have similar overall AI exposure and still face very different challenges. The topline number, “X% of jobs exposed to AI”, tells you almost nothing about what you should actually do.
Task-level exposure frameworks show that aggregate rates can conceal substantial differences in how AI affects specific occupations and tasks. Regions or organizations with similar headline exposure may nevertheless face very different adjustment challenges (Colombo et al. 2025).
Consider two hypothetical regions, both showing 40% workforce exposure to AI:
Region A is home to several large financial services companies and a tech hub. Most of the AI exposure is concentrated in these two sectors. The finance firms employ thousands of back-office workers processing transactions, compliance documents, and customer queries. The tech companies have large teams of developers, data analysts, and IT support staff. Outside these sectors, the region has traditional manufacturing, retail, and construction with much lower exposure.
Region B has a more diversified economy with balanced employment distribution. No single sector dominates, but AI exposure is spread across multiple industries including healthcare administration, municipal government services, logistics coordination, small business accounting, educational administration, and retail management.
Each sector has moderate exposure, but the impact touches nearly every organization. On paper, both regions face 40% exposure. But their challenges are completely different.
Region A can focus its response. It can work directly with the major employers in finance and tech, design sector-specific retraining programs, and create clear transition pathways, helping document processors become compliance analysts, or IT support staff become systems architects. The training needs are concentrated and predictable.
Region B needs a fundamentally different strategy. No single training program will work across all affected sectors. Instead, the region needs broad digital literacy initiatives, general AI skills that transfer across industries, and flexible support systems that can adapt to different organizational needs. A healthcare administrator and a logistics coordinator both face AI exposure, but they need different specific skills even though they share common foundations.
Reviews of AI and employment impacts highlight that exposure varies widely across sectors and occupations, reinforcing the need for differentiated policy and training responses rather than one-size-fits-all solutions (Das & Mujeebunnisa 2025). This is why understanding the pattern of exposure matters as much as the total amount. Before you design any programme, any policy, or any investment strategy, you need to answer: Is our risk concentrated or distributed? The answer fundamentally shapes everything that follows.
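The "concentrated or distributed?" question can be made precise with a concentration index over each sector's share of AI-exposed employment. Below is a minimal sketch in Python using the hypothetical Regions A and B above; the headcounts and the 0.25 classification threshold are illustrative assumptions, not figures from the research cited.

```python
# Illustrative sketch: classifying an AI-exposure pattern as concentrated
# or distributed. All sector names, headcounts, and the 0.25 threshold are
# hypothetical planning inputs, not real data.

def exposure_concentration(exposed_by_sector: dict[str, float]) -> float:
    """Herfindahl-Hirschman index over each sector's share of total
    AI-exposed employment (approaches 1.0 when one sector holds it all)."""
    total = sum(exposed_by_sector.values())
    shares = (v / total for v in exposed_by_sector.values())
    return sum(s * s for s in shares)

# Region A: exposure concentrated in finance and tech
region_a = {"finance": 12000, "tech": 9000,
            "manufacturing": 1500, "retail": 1500}

# Region B: a similar total, spread across many sectors
region_b = {"healthcare_admin": 4000, "municipal_services": 4000,
            "logistics": 4000, "sme_accounting": 4000,
            "education_admin": 4000, "retail_mgmt": 4000}

for name, region in [("Region A", region_a), ("Region B", region_b)]:
    hhi = exposure_concentration(region)
    pattern = "concentrated" if hhi > 0.25 else "distributed"
    print(f"{name}: HHI={hhi:.2f} -> {pattern}")
```

Both regions have the same headline exposure, but the index separates them cleanly: Region A scores well above the threshold (focused interventions are feasible), Region B well below it (broad, cross-sector initiatives are needed).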
The most useful idea here is not a specific number or metric, but a mindset: Treat AI exposure as something you can plan for, not just something that happens to you. In practice, that means asking very concrete questions.
For Governments and Policymakers:
For Companies:
This kind of thinking helps you decide where to start with training and experimentation. It prevents you from trying to AI-transform everything at once.
The most common mistake in AI workforce planning is assuming that jobs either survive completely unchanged or disappear entirely. The reality is messier and more interesting: most jobs transform.
Think about what happened to accountants when spreadsheet software arrived.
The job didn’t vanish — it evolved. Accountants stopped spending hours manually calculating figures and started spending that time on analysis, strategy, and advisory work. The tedious calculation work got automated, but the need for financial judgment increased.
AI is creating a similar pattern in office work, but faster and across more tasks. The future of most administrative and professional roles isn’t “no humans”. It’s humans overseeing AI. This shift from doing every task manually to supervising and refining AI-supported workflows fundamentally changes what skills matter.
This approach is consistent with research showing that AI often complements human skills by increasing demand for judgement, coordination, and contextual understanding, rather than simply substituting for labour (Brynjolfsson et al. 2025; Mäkelä & Stephany 2024).
Instead of running generic AI awareness sessions that leave people anxious but directionless, effective leaders focus on three concrete transitions:
First, identify which roles are clearly exposed. Don't waste limited training resources on people whose work won't change soon. A construction site supervisor has very different AI exposure than a financial analyst or an HR coordinator. Target your efforts where they'll matter most.
Second, train people to supervise rather than execute. This is the core shift. Consider a customer service team that currently handles routine queries manually.
With AI, their job becomes: reviewing AI-generated responses for accuracy and tone, handling complex cases that require human judgement, identifying patterns where the AI consistently fails, and improving the system over time.
They move from answering questions directly to managing a system that answers questions, stepping in when needed.
Third, design reskilling paths that build on existing expertise. The instinct to tell everyone to "learn to code" or "become a data scientist" is both impractical and unnecessary. Instead, help people move into adjacent roles that use skills they already have.
Finance staff who currently process invoices can become financial analysts who interpret patterns and advise on strategy. Customer service representatives who handle calls can become customer relationship managers who focus on complex problem-solving and retention.
HR coordinators who manage paperwork can become organizational development specialists who design better systems.
In our consultancy work with Finnish SMEs, we’ve seen this transition pattern work repeatedly. The most successful implementations follow a predictable path:
They start small—picking one repetitive workflow like invoice processing or customer query routing. They train existing staff to supervise the AI system: checking outputs, handling exceptions, and refining prompts or rules. As confidence grows and the team masters supervision skills, they gradually expand to more workflows.
Finally, they redeploy the freed-up capacity toward higher-value work that genuinely requires human judgment: strategic planning, relationship building, complex problem-solving, or innovation.
The goal isn't to eliminate jobs but to reshape them. People move from routine execution to strategic oversight. This isn't just semantically different; it's economically different too. Workers who master AI supervision become more valuable, not less, because they can leverage technology to accomplish what was previously impossible at their scale.
Because AI adoption doesn’t follow linear historical patterns, workforce planning increasingly emphasizes scenario-based approaches rather than simple trend extrapolation (OpenAI 2024; WEF 2025).
For policymakers and business leaders, one of the most powerful uses of AI exposure data is scenario planning. Think of it as a flight simulator for workforce transformation: a way to test different strategies and see their consequences before committing real resources or making irreversible decisions.
Traditional workforce planning often relies on extrapolating past trends: if employment in sector X grew 2% last year, we assume similar growth next year. But AI doesn’t follow linear patterns. A task that seems safe today can become automatable almost overnight when a new model or tool emerges. This unpredictability demands a different approach.
Instead of trying to predict a single future, scenario planning explores multiple possible futures. What if AI adoption happens faster than expected in some sectors but slower in others? What if government policy deliberately slows adoption in certain areas to allow time for retraining? What if a major employer suddenly announces an aggressive automation program?
A regional government might model three scenarios. In Scenario A, they invest heavily in targeted retraining for the two most exposed sectors (finance and healthcare administration).
In Scenario B, they spread resources more thinly across broad digital literacy programs for all workers. In Scenario C, they focus on supporting worker transitions after displacement occurs rather than preventing it.
By modeling all three, they can see which approach produces better outcomes given their specific exposure pattern, budget constraints, and political realities.
A mid-sized company with large administrative and customer service teams might test different adoption speeds. What happens if they automate invoice processing and level-one customer support within six months? What if they phase it in over two years?
The faster scenario might save money sooner but risk losing institutional knowledge and facing employee resistance. The slower scenario might cost more but allow smoother transitions and better retention of experienced staff who can train the AI systems.
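The fast-versus-phased trade-off above can be expressed as a small model. The sketch below is a deliberately simplified sketch in Python; every figure (monthly savings, transition costs, rollout lengths) is a made-up planning assumption to be replaced with your own estimates, not data from the article.

```python
# Illustrative scenario comparison for an automation rollout. All numbers
# are hypothetical planning assumptions, not measured values.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    rollout_months: int    # time until the workflow is fully automated
    monthly_saving: float  # saving per month once fully automated (EUR)
    transition_cost: float # training, severance, knowledge loss (EUR)

def net_benefit(s: Scenario, horizon_months: int = 24) -> float:
    """Net benefit over the horizon; savings accrue only after rollout."""
    automated_months = max(0, horizon_months - s.rollout_months)
    return automated_months * s.monthly_saving - s.transition_cost

# Fast rollout: savings start sooner, but knowledge loss and resistance
# make the transition cost higher.
fast = Scenario("6-month rollout", 6, 20_000, 150_000)
phased = Scenario("18-month rollout", 18, 20_000, 60_000)

for s in (fast, phased):
    print(f"{s.name}: net over 24 months = {net_benefit(s):,.0f} EUR")
```

Even this toy model makes the trade-off concrete: shorten the horizon or raise the knowledge-loss cost and the phased rollout wins; lengthen the horizon and the fast rollout pulls ahead. The point of the exercise is to surface which assumptions your decision actually hinges on.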
This scenario-based approach lets you compare strategies side by side, surface trade-offs early, and stress-test your assumptions before committing real resources.
The "flight simulator" metaphor is apt: as in pilot training, the goal is to practice making decisions under various conditions before real stakes are on the line.
Just as pilots practice emergency procedures hundreds of times before facing real turbulence, your organization should stress-test different AI adoption scenarios before committing resources. You can crash in the simulator, learn from it, and adjust your approach. You can’t afford to do that with people’s livelihoods.
The time to build that muscle memory is now, while the consequences are still hypothetical.
The task-focused view of AI risk aligns with broader policy discussions, which emphasize that the future of work will be shaped less by headline job losses and more by how effectively institutions anticipate task-level disruption and support reskilling and adaptation (Colombo et al. 2025; UNRIC 2025; WEF 2025).
Theory matters, but implementation matters more. Here’s a concrete framework for moving from awareness to action. These steps work whether you’re leading a regional government initiative, managing a company department, or advising organizational leadership.
Start by identifying which roles in your organization or region spend the most time on digital, repetitive, information-processing tasks. This isn't guesswork; you need actual data. Don't rely solely on job titles or organizational charts.
Two people with the same title might spend their time very differently. Instead, survey workers about what they actually do each day, review detailed job descriptions, and observe workflows directly. Look for tasks that involve processing documents, entering data, searching for information, answering routine questions, scheduling, reporting, or coordinating activities that follow clear rules.
The key question isn’t “Could AI theoretically do this job?” but rather “What percentage of this person’s working hours are spent on tasks that AI tools can already handle reasonably well?” A role might be 80% exposed (most tasks are repetitive and digital) or 20% exposed (mostly requires human judgment and physical presence). This can only be determined by mapping your exposure.
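The hours-weighted exposure question above translates directly into a simple calculation over task-survey data. A minimal sketch in Python follows; the role, tasks, weekly hours, and per-task "AI can already handle this" flags are hypothetical survey inputs, not findings from the article.

```python
# Illustrative sketch: estimating a role's AI exposure from a task-hour
# survey. Task names, hours, and readiness flags are hypothetical inputs.

def role_exposure(tasks: list[tuple[str, float, bool]]) -> float:
    """Share of weekly hours spent on tasks that AI tools can already
    handle reasonably well (name, weekly_hours, ai_ready)."""
    total = sum(hours for _, hours, _ in tasks)
    exposed = sum(hours for _, hours, ai_ready in tasks if ai_ready)
    return exposed / total

hr_coordinator = [
    ("process leave requests",          10.0, True),
    ("draft routine HR letters",         8.0, True),
    ("schedule interviews",              6.0, True),
    ("handle sensitive employee cases",  8.0, False),
    ("advise managers in person",        6.0, False),
]

# 24 of 38 weekly hours fall on AI-ready tasks
print(f"Exposure: {role_exposure(hr_coordinator):.0%}")
```

The output is a single number per role (here roughly 63%), which is exactly what the next step needs: a map of where exposure sits, ready to be analyzed for its pattern.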
Once you’ve mapped exposure, analyze the pattern. Is it concentrated in a few departments or sectors, or spread widely across your organization or region?
This determines your strategic approach. Concentrated exposure means you can design focused interventions, working directly with the affected teams, creating sector-specific training, and building targeted transition pathways. Distributed exposure requires broader initiatives — general digital literacy, cross-sector AI skills, and flexible support systems.
Neither pattern is inherently better or worse, but they demand fundamentally different responses. Getting this wrong wastes resources and leaves workers unprepared.
You can’t transform everything at once. Start with workflows where success is most likely and most valuable.
Look for these characteristics: AI tools already exist and are proven for this task (you’re not betting on future technology). The tasks are highly repetitive and well-defined (clear inputs and outputs). Workers are open to change rather than resistant (often true where current work is tedious). Success would free up significant capacity for higher-value work (the return on investment is clear).
A good first pilot might be automating invoice processing, customer query routing, or document summarization. A poor first pilot would be automating complex negotiations, creative design work, or anything requiring deep contextual judgment. Not because AI will never handle these, but because starting there increases your risk of failure and resistance.
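The four pilot-selection characteristics above lend themselves to a simple scoring exercise. Here is a hedged sketch in Python; the candidate workflows and their 1-5 scores are hypothetical, and in practice would come from a structured assessment with the affected teams.

```python
# Illustrative sketch: ranking candidate first pilots against the four
# selection criteria. Candidates and 1-5 scores are hypothetical inputs.

CRITERIA = ("tools_proven", "task_well_defined",
            "worker_openness", "capacity_freed")

def pilot_score(scores: dict[str, int]) -> float:
    """Average 1-5 score across the four pilot-selection criteria."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

candidates = {
    "invoice processing":     {"tools_proven": 5, "task_well_defined": 5,
                               "worker_openness": 4, "capacity_freed": 4},
    "customer query routing": {"tools_proven": 4, "task_well_defined": 4,
                               "worker_openness": 4, "capacity_freed": 3},
    "contract negotiation":   {"tools_proven": 2, "task_well_defined": 1,
                               "worker_openness": 2, "capacity_freed": 3},
}

for name, scores in sorted(candidates.items(),
                           key=lambda kv: pilot_score(kv[1]), reverse=True):
    print(f"{name}: {pilot_score(scores):.2f}")
```

The ranking simply formalizes the advice in the text: well-defined, tool-ready workflows like invoice processing surface at the top, while judgment-heavy work like negotiation falls to the bottom of the pilot queue.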
For each exposed role, identify adjacent positions that use similar skills but require more human judgment. Map out realistic transition paths that build on existing expertise rather than starting from zero.
This requires understanding both the current role and potential future roles in detail. What skills do workers already have? What additional skills would let them move into less-exposed positions? How long would training take? Are there enough of these positions to absorb displaced workers, or do you need to create new roles?
Focus training on skills that increase in value when paired with AI: judgment about edge cases, coordination across teams, contextual interpretation, exception handling, strategic thinking, and continuous system improvement.
Start with one or two well-defined pilots. Set clear success metrics. Not just “did the AI work?” but “did workers successfully transition to supervision roles?” and “did we maintain or improve service quality?”
Measure outcomes rigorously, gather honest feedback from participants, and iterate based on what you learn. Some things will work better than expected; others will reveal unexpected challenges. Scale the successes, kill the failures quickly, and adjust your approach continuously.
Resist the temptation to declare victory prematurely or to excuse failures as inevitable. Both learning and accountability matter.
Throughout this process, maintain transparent communication about what’s changing and why. Workers facing automation fear uncertainty more than change itself. When people understand the plan, see that leadership is thinking seriously about their futures, and have opportunities to shape the transition, resistance decreases significantly.
Involve workers in the design process. They understand their tasks better than anyone and often spot problems or opportunities that managers miss. Celebrate successes publicly to build momentum and confidence. When things go wrong, acknowledge it honestly and explain what you’re learning.
This isn’t just about being humane (though it is that). It’s strategically smart. Workers who feel valued and included become advocates for change rather than obstacles to it.
Here are the main points you can carry into your own context.
The pattern is clear from both research and practice: AI will reshape office and administrative work before it transforms most other sectors. This isn’t speculation. It’s already happening in organizations across Finland and globally. The only real question is whether we prepare systematically or react in crisis mode.
The difference between these two paths is stark. Organizations and regions that map their exposure now, design differentiated responses based on their specific exposure pattern, and invest in transition pathways will navigate this shift successfully. Their workers will move from routine execution to strategic oversight, becoming more valuable rather than displaced. Their operations will become more efficient while retaining institutional knowledge and human judgment where it matters most.
Those that wait for the disruption to fully arrive will face a much harder reality: scrambling to help workers whose roles have already disappeared, dealing with social and economic costs that could have been avoided, and competing for scarce training resources when everyone needs them simultaneously.
The tools for planning are available now. The research shows where to look and what patterns to expect. The successful case studies demonstrate what works. The window for proactive preparation is open, but it won’t stay open indefinitely.
This isn’t a counsel of despair. It’s a call to action. AI exposure creates both risk and opportunity. The risk is still real. Some tasks will become automated, some workflows will change dramatically, and some workers will need to transition to new roles. But the opportunity is equally real.
Organizations can become more effective, workers can move into higher-value activities, and regions can position themselves as leaders in the AI economy rather than victims of it. The choice is yours. Will you use this knowledge to prepare your workforce and organization systematically?
Or will you join the long list of those who saw the change coming but chose to wait until it was too late to plan effectively? The answer to that question will determine not just your organization’s future competitiveness, but the livelihoods of the people who depend on you for leadership.
Brynjolfsson, E., Li, D., & Raymond, L. R. (2025). Generative AI at work. The Quarterly Journal of Economics, 140(2), 889–942. https://doi.org/10.1093/qje/qjae044
Colombo, E., Mercorio, F., Mezzanzanica, M., & Serino, A. (2025). Assessing job exposure to artificial intelligence through large language models. In Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence (IJCAI-25). https://www.ijcai.org/proceedings/2025/1066.pdf
Das, S. K., & Mujeebunnisa, S. (2025). Impact of artificial intelligence on human jobs. International Journal of Sciences and Innovation Engineering, 2(9), 254–264. https://doi.org/10.70849/ijsci
Mäkelä, E., & Stephany, F. (2024). Complement or substitute? How AI increases the demand for human skills. arXiv. https://doi.org/10.48550/arXiv.2412.19754
OpenAI. (2024). Jobs in the intelligence age: How AI is changing work and creating new roles, and what we can do to prepare.
United Nations Regional Information Centre for Western Europe. (2025, November 10). Artificial intelligence and the future of work: Disruptions and opportunities. Retrieved December 11, 2025, from https://unric.org/en/ai-and-the-future-of-work-disruptions-and-opportunitie/
World Economic Forum. (2025). The Future of Jobs Report 2025. Insight Report.
Umair Ali Khan
Senior Researcher
Finnish AI Region
+358 294471413
umairali.khan@haaga-helia.fi
Martti Asikainen
Communications Lead
Finnish AI Region
+358 44 920 7374
martti.asikainen@haaga-helia.fi