AI as Sparring Partner, Not Replacement: Why Algorithms Can Be Fairer Managers Than Humans

Algorithmic management raises fears, but at its best it can make leadership more equitable than traditional human management. As routine tasks become automated, managers gain time for what truly matters: leading people and thinking strategically.

Text by Martti Asikainen, 5.11.2025 | Photo: Adobe Stock Photos

Photo: A man and a robot sitting at a table, their backs towards the camera.

Algorithmic management has grown at a tremendous pace in recent years. At its best, it serves as a sparring partner for managers and handles the mundane tasks that consume their days. It monitors wellbeing and flags when absences accumulate. It remembers milestone dates. It assists with shift planning. It also makes sense of extensive and complicated employee satisfaction surveys and opens up new possibilities for personal development when combined with intelligent data use.

These might sound trivial, but their impact is profound. Managers spend vast amounts of time on administrative routines that require no human judgement whatsoever—yet someone still needs to do them. When an algorithm handles scheduling, a line manager can focus on asking an employee how their project is actually progressing. When the system identifies wellbeing risks from data, the manager can hold a proper support conversation rather than drowning in spreadsheets (Jarrahi et al. 2023).

The numbers are striking. According to an OECD report (2025), 90 per cent of American companies already use algorithmic management systems, alongside 79 per cent in Europe and 40 per cent in Japan. Around 60 per cent of line managers report improved decision-making quality with algorithmic tools, as data becomes readily available in real time. The caveat? This reflects managers’ subjective experience, not employees’ lived reality of being managed.

No Favourites, No Prejudices

Algorithmic management saves time and improves quality. More importantly, however, machines can be more impartial leaders than humans. They process data according to pre-set parameters, without personal prejudices or momentary moods, bringing much-needed objectivity to recruitment and performance reviews.

Humans are inherently biased. We all have favourites, prejudices, and bad days when patience runs thin. A good leader recognises these biases and tries to correct for them, but perfect objectivity remains impossible. An algorithm treats everyone identically, regardless of personal chemistry or daily circumstances.

Preliminary findings from Haaga-Helia University of Applied Sciences confirm that employees view algorithmic management positively in many respects, with fairness emerging as a key theme (Asikainen & Lahtinen 2025). Systems are perceived to treat employees more equitably than human managers. 

Tasks involving encouragement and control, such as measuring work efficiency and setting targets, are areas where algorithm-based systems can even exceed human performance.

International research echoes this. Studies of Amazon warehouse workers show that high-performing and experienced employees, alongside beginners, rate algorithmic systems positively precisely because of their fairness (Hirsch 2024). When systems operate transparently and consistently, they can increase trust in employers, even when working conditions are challenging and efficiency demands are high.

The Fairness Paradox

Yet algorithms aren’t automatically fair. A poorly designed system can reinforce existing prejudices or create new ones. If a recruitment algorithm is trained on data where certain groups are under-represented or portrayed negatively, it learns to discriminate. If performance evaluation relies solely on quantitative metrics, it favours easily measurable work whilst penalising qualitative and innovative contributions.

Technology demands caution. Transparency, privacy, and fairness must form the foundation. Whilst algorithms can be more impartial than humans, they still reflect their creators' values and assumptions. Paradoxically, this is also where their potential lies, a potential often underestimated in leadership discussions.

When a human makes a biased decision, proving or challenging it is nearly impossible. With algorithms, we can examine the logic, test different scenarios, and correct flaws. A transparent algorithm is easier to hold accountable than an opaque human. Research confirms that lack of transparency significantly weakens workplace atmosphere and trust, diminishing commitment and motivation (Parent-Rocheleau & Parker 2022; McParland & Connolly 2019).

Problems remain, however. In the OECD study, two-thirds of managers cited at least one ethical or administrative issue with algorithmic management. Common concerns include accountability in error situations, decision transparency, and employee wellbeing. Who bears responsibility when an algorithm errs? Many organisations have yet to answer this question.

The positive news: most companies are taking action. According to the OECD report (2025), 89 per cent of organisations have guidance for using algorithmic tools. Common practices include user guidelines, employee consultation, and regular audits. Awareness exists, and organisations are attempting to anticipate problems.

The Human Element Remains Essential

In algorithmic management, AI analyses data and suggests solutions, but it cannot grasp workplace dynamics or individual needs as humans can. Imagine an algorithm detects that an employee’s productivity has fallen 20 per cent over three months. It flags the issue and proposes actions. Useful, certainly—the matter won’t slip through unnoticed.
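To make the mechanics concrete, the sketch below shows what such a flag can amount to in practice: a simple rule comparing recent output with the employee's own earlier baseline. It is an illustrative example only, not a description of any particular system, and the threshold, field names, and figures are hypothetical.

```python
from statistics import mean

def flag_productivity_drop(monthly_output, baseline_months=9, recent_months=3, threshold=0.20):
    """Flag a drop of more than `threshold` (here 20%) in the average of the most
    recent months compared with the employee's own earlier baseline.
    All names, numbers, and fields are hypothetical, for illustration only.
    """
    if len(monthly_output) < baseline_months + recent_months:
        return None  # not enough history for a fair comparison

    history = monthly_output[-(baseline_months + recent_months):]
    baseline = mean(history[:baseline_months])   # the employee's own earlier level
    recent = mean(history[baseline_months:])     # the most recent months
    drop = (baseline - recent) / baseline if baseline else 0.0

    if drop > threshold:
        return {"drop": round(drop, 2), "suggestion": "schedule a one-to-one conversation"}
    return None

# Example: a clear decline over the last three months triggers the flag.
print(flag_productivity_drop([10, 11, 10, 10, 11, 10, 10, 11, 10, 8, 8, 7]))
```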

But what happens next? A human manager must have a conversation. Perhaps the employee is struggling with a personal crisis. Perhaps they’re frustrated because their best qualitative work goes unmeasured. Or perhaps they’re burnt out from doing two people’s jobs. An algorithm remains blind to such factors, seeing only the numbers it’s been instructed to examine.

Good management combines data's objectivity with human understanding. AI doesn't replace managers; it makes them better. When routines become automated, time opens up for strategic thinking and leading people.

This opportunity shouldn't be squandered through fear of technology, nor through its misuse, of which there are examples worldwide: workers often experience algorithmic control as complementing human management, but sometimes also as dominating it (e.g. Harrington 2021; Soper 2021; Hirsch 2024).

Regulation Catches Up

As algorithmic management spreads, regulation must evolve. The EU AI Act sets strict requirements for high-risk AI systems, a category that includes tools used in recruitment and employee management (EU 2024/1689). Systems must be transparent, safe, and under human oversight. Organisations should prepare for stricter requirements around transparency and employee rights.

In Finland, occupational safety authorities have begun examining algorithmic management's effects. The Occupational Safety and Health Act requires employers to ensure a reasonable mental workload, even when work is managed algorithmically (OSH Act 738/2002, Sections 8 and 25; Finnish Institute of Occupational Health 2023). The General Data Protection Regulation protects employee data and grants the right to human review instead of solely automated decisions (EU 2016/679).

Employees have the right to information about decision-making and to demand human assessment over automation. Regulation isn’t an obstacle to development but ensures technology’s benefits are distributed evenly whilst minimising harm (European Commission 2021; OECD 2022).

Combining Human and Machine

The question is no longer whether management should be human or algorithmic. The best results emerge when we combine both strengths: machines bring consistency and efficiency; humans bring empathy and understanding. Used correctly, algorithmic management can improve working life quality (Immonen 2024). Accepting this enables us to build workplaces where technology serves people, not the reverse.

The OECD report (2025) encourages experimentation whilst warning that technology alone doesn’t solve management problems—in the worst case, it creates more. Most critically, organisations must adopt algorithmic management strategically and ethically, because this concerns not just efficiency but power: who defines employees’ targets, who interprets data, and who bears responsibility for machine-made decisions.

Finland has every opportunity to pioneer future management models where technology serves people according to our values and algorithms function as tools rather than power-wielders. A strong culture of trust, a high level of digitalisation, and a long tradition of well-functioning workplace cooperation create an excellent foundation for developing responsible algorithmic management.

References

Asikainen, M. & Lahtinen, A. (2025). Algorithmic management spreads across Finnish workplaces – younger workers show greater acceptance than their older colleagues. Haaga-Helia University of Applied Sciences press release, published on STT (Suomen Tietotoimisto) on 16 June 2025. Accessed 27 October 2025.

European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation). Official Journal of the European Union.

European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council on artificial intelligence (AI Act). Official Journal of the European Union.

European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). COM/2021/206 final.

Harrington, C. (2021). As Amazon Workers Organize, They Stress: ‘We Are Not Robots’. Published on Wired’s website on 9 April 2021. Condé Nast Publications. New York City. Accessed 27 October 2025.

Hirsch, F. (2024). Algorithmic control in non-platform organizations: Workers’ legitimacy judgments (ICIS 2024 Proceedings). AIS eLibrary.

Immonen, J. (2024). Johtajana tietokone. Algoritmisen johtamisen vaikutuksia työntekijöihin [A computer as the manager: effects of algorithmic management on employees]. Foundation for European Progressive Studies, Brussels.

Jarrahi, M. H., Möhlmann, M. & Lee, M. K. (2023). Algorithmic Management: The Role of AI in Managing Workforces. MIT Sloan Management Review, 1–5.

Lahtinen, H. & Valtonen, T. (2025). Raportti: Algoritminen johtaminen yleistyy Pohjoismaissa – tutkijat varoittavat sen vaikutuksista työntekijöiden hyvinvointiin [Report: Algorithmic management is becoming more common in the Nordic countries – researchers warn of its effects on employee wellbeing]. Published on the Finnish Institute of Occupational Health's website on 8 September 2025. Accessed 27 October 2025.

McParland, C. & Connolly, R. (2019). Employee Monitoring in the Digital Era: Managing the Impact of Innovation. Proceedings of the ENTRENOVA Conference, Rovinj, Croatia.

OECD (2022). AI Principles and Policy Observatory.

OECD (2025). Algorithmic management in the workplace: New evidence from an OECD employer survey. OECD Publishing. Paris.

Parent-Rocheleau, X., & Parker, S. K. (2022). Algorithms as work designers: How algorithmic management influences the design of jobs. Human Resource Management Review, 32(3), 100838.

Soper, S. (2021). Fired by Bot at Amazon: ‘It’s You Against the Machine’. Published on Bloomberg’s website on 28 June 2021. Bloomberg News. New York City. Accessed 27 October 2025.

Tuomi, A., Jianu, B., Roelofsen, M., Ascenção, M.P. (2023). Riding Against the Algorithm: Algorithmic Management in On-Demand Food Delivery. In: Ferrer-Rosell, B., Massimo, D., Berezina, K. (eds) Information and Communication Technologies in Tourism 2023. ENTER 2023. Springer Proceedings in Business and Economics. Springer, Cham.

Finnish Institute of Occupational Health. (2023). Toimintakertomus 2023 [Annual report 2023]. Helsinki.

Occupational Safety and Health Act of Finland. 738/2002. Finlex.

Vuori, J. & Asikainen, M. (2025). When Algorithms Take the Reins: How Digital Management is Reshaping the Future of Work. Published on Finnish AI Region's website on 17 July 2025. Accessed 27 October 2025.

About This Article

This article is published as part of Haaga-Helia University of Applied Sciences' "RoboBoss – Artificial Intelligence in Expert and Knowledge Work Management" project, which examines algorithmic management in expert and knowledge work, areas where AI's role remains under-studied. The project is funded primarily by the Finnish Work Environment Fund.

Martti Asikainen

Communications Lead
+358 44 920 7374
martti.asikainen@haaga-helia.fi

Logos: Finnish AI Region (FAIR, EDIH) | Co-funded by the European Union.
