AI was supposed to make knowledge work easier and more efficient. Instead, three-quarters of employees report increased workloads, and nearly half admit they have no idea how to extract the promised productivity gains. What went wrong – and more importantly, how can we fix it?
Text by Martti Asikainen, 27.11.2025 | Photo by Adobe Stock Photos
The adoption of artificial intelligence in organisations is growing explosively, but when we scratch beneath the glossy surface, an interesting contradiction is revealed. Microsoft reports (2024) that as many as 75 per cent of knowledge workers utilise AI in their work, and Copilot usage grew by 150 per cent in the first half of 2024.
These figures paint a picture of triumphant adoption – yet Upwork Research (2024) reveals a startling contradiction: 77 per cent of employees report that AI has actually increased their workload, and 47 per cent have no idea how to achieve the promised productivity benefits. The technology meant to liberate us appears instead to be burying us in work.
This paradox has been identified across a range of fields, and the Upwork survey is not alone in documenting it. According to the ‘State of AI at Work’ survey by the American software company Asana (2025), digital fatigue is increasing and employees are more exhausted than ever before. Microsoft’s ‘Work Trend Index’ report (2024) likewise indicates that overtime and work outside working hours have increased significantly.
There are undoubtedly many reasons why we end up doing more work even though we have tools at our disposal that should ease our everyday lives and make our work more efficient. Firstly, AI raises productivity expectations for personnel. Rather than using it to renew themselves, scale, and fix a fragmented working life, many organisations have ended up automating controlled chaos instead.
Secondly, in many cases, the time we save by using AI may be spent checking, editing, and correcting erroneous information in content produced by AI agents or colleagues using AI (Asana 2025). The same survey also revealed a significant training gap, with as many as 82 per cent of knowledge workers stating that training is essential for the effective use of AI agents, but only 38 per cent of respondents’ organisations have provided it.
The result did not surprise me. As an AI trainer in the ReiluAI project, among others, and as a heavy user myself, I emphasise in our research, development, and innovation projects that everyone should have the chance to experiment with, shape, and learn AI applications, and that organisations should provide appropriate training. Everyone should have equal opportunities to develop themselves and their work.
At the same time, I cannot help but ask myself why, for example, an HR specialist should know how to create mediocre videos with AI. Or why do we need an image of a golden retriever sitting on a rock in the middle of a lava flow? Or an animation of a cat on a hot tin roof? I believe these questions are also related to one of AI’s core problems.
Instead of making our work more efficient, we do more unnecessary work that someone else could do considerably faster and much better. We do things because we can, not because we should. This is partly evidenced by the fact that AI’s productivity benefits have so far remained modest, even though adoption has been rapid across almost all sectors (Maslej et al. 2024; OECD 2025).
AI has developed rapidly, but its content is, at best, only mediocre. It produces fluent, grammatically correct text, yet it tends towards a distinctive style: enthusiastic superlatives (‘game-changing’, ‘revolutionary’, ‘incredible’), lengthy em dashes to connect thoughts, and an almost relentlessly optimistic tone that feels distinctly American in character. For Finnish readers accustomed to more restrained communication, this can feel jarringly out of place. This observation has sparked national amusement and broader discussion about AI’s cultural biases (Güven 2025).
Research shows that AI-generated text (and likely other generated content as well) exhibits recurring patterns and typical characteristics, and that large language models produce erroneous and even fabricated information, a phenomenon commonly known as ‘hallucination’ (e.g. Ji et al. 2023). As worn as the saying may be, AI is only as good as the data used to train it. Moreover, when everyone uses the same tools in the same way, the result is uniform content.
What will distinguish us from our competitors, then, is no longer the ability to produce large volumes of generic content, but the ability to think critically and produce something genuine and human with AI assistance. Skilful prompting can set one apart from the crowd for a while, but it does not prevent the inevitable homogenisation of content.
Microsoft’s early adopter study (2023) reports that Copilot users are 29 per cent faster at certain tasks, but this does not tell the whole story. The study covered only a limited group of early adopters, and speed does not by itself mean a better end result. Nearly half of knowledge workers still do not know how to use these tools effectively (Upwork 2024).
Many utilise AI only superficially for summarising emails or producing and translating simple texts. If our only use case is to ask AI to write a version of text that we could write ourselves in five minutes, we are not exploiting its full potential. AI is most helpful when it frees us from routine tasks that take time but do not require creativity or deep expertise.
Such tasks include, for example, writing meeting minutes, quick translation tasks, and searching for trends, statistics, news, and research as background material. In my work with projects and communications, AI can analyse a large amount of feedback and identify recurring themes from it that can be utilised in strategic planning. Additionally, it can make the first draft of my text when I myself suffer from fear of the blank page.
Using AI well requires critical thinking about what it is and is not suited for. It requires an understanding of the diversity of tools, their applicability, limitations, training data, and cultural biases. At the same time, only the experts themselves know which tools best suit their tasks. For this reason, organisations should invest even more in experimentation and training.
Furthermore, creating real value requires redesigned work, a solid data foundation and architecture, change management, and AI literacy among personnel (McKinsey & Company 2025; Wade et al. 2025; Stanford Institute for Human-Centred Artificial Intelligence 2025). Simply distributing licences is not enough.
Workplaces need clear principles about when AI is worth using and when human input matters more or is even required. Workshops are needed where people learn together to identify or design the tasks where AI truly makes work more efficient, and the tasks where it only consumes time or creates more work and costs. Above all, courage is needed to say that not everything needs to be done with AI, even if it could be. Ultimately, this is also a question of attitudes: curiosity must be elevated to a central value, and failures must be discussed openly and learnt from.
At the same time, we must understand that integrating AI into workflows does not happen in an instant. It requires continuous dialogue at different levels of the organisation: between management and employees, between different departments, and between technology suppliers and organisations. The goal should be smarter use of AI than at present. Instead of continuing to automate controlled chaos in workplaces, we should stop to ask what work we really should be doing.
This article has been published as part of Haaga-Helia’s Artificial Intelligence and Equality in Work Communities (ReiluAI) project, which promotes the equal and ethical use of artificial intelligence in expert and knowledge work in a sustainable and inclusive manner. The project offers workplaces practical, easily accessible means and tools that support the development of fair AI practices and employee participation in the age of artificial intelligence. The project’s main funder is the Finnish Work Environment Fund.
Asana Work Innovation Lab. 2025. The State of AI at Work. San Francisco: Asana. Accessed 11 November 2025.
Güven, S. 2025. Kertooko ajatusviiva tekoälyn käytöstä? Näin kommentoivat asiantuntijat [Does the em dash reveal AI use? Experts weigh in]. Published on MTV’s website 8 November 2025. Accessed 11 November 2025.
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y., Madotto, A., & Fung, P. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38.
Maslej, N., Fattorini, L., Perrault, R., Parli, V., Reuel, A., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Niebles, J. C., Shoham, Y., Wald, R., & Clark, J. 2024. Artificial Intelligence Index Report 2024. Stanford Institute for Human-Centred Artificial Intelligence. Accessed 11 November 2025.
McKinsey & Company. 2025. The state of AI in 2025: Agents, innovation, and transformation. Survey. Accessed 11 November 2025.
Microsoft WorkLab. 2023. What can Copilot’s earliest users teach us about generative AI at work? Microsoft Corporation. Accessed 11 November 2025.
Microsoft. 2024. Work Trend Index: AI at work is here. Now comes the hard part. Microsoft Corporation. Accessed 11 November 2025.
Organisation for Economic Co-operation and Development (OECD). 2025. OECD Compendium of Productivity Indicators 2025.
Stanford Institute for Human-Centred Artificial Intelligence, HAI. 2025. Artificial Intelligence Index Report 2025. Stanford University. Accessed 11 November 2025.
Upwork Research Institute. 2024. From burnout to balance: AI-enhanced work models. Upwork Global Inc. Accessed 11 November 2025.
Wade, M., Trantopoulos, K., Navas, M. & Romare, A. 2025. How to Scale GenAI in the Workplace. Published in MIT Sloan Management Review 8 July 2025. Cambridge: MIT Sloan School of Management. Accessed 11 November 2025.
Communications Lead
Finnish AI Region
+358 44 920 7374
martti.asikainen@haaga-helia.fi