Enlightenment philosophers believed that knowledge liberates. The EU’s AI Act relies on the same belief, imagining that an enlightened citizen can identify deepfakes. This article examines the transparency paradox and three alternative paths forward. This is the third and final blog post in a series exploring democracy and transparency in the age of artificial intelligence.
Text by Martti Asikainen, 14.10.2025 | Photo by Adobe Stock Photos (Revised 12.11.2025)
Slovakia’s parliamentary elections in 2023 marked a turning point in Europe. Two days before the vote, an audio recording spread on social media in which the pro-European opposition leader Michal Šimečka, who was seeking to become prime minister, allegedly discussed election manipulation with a well-known journalist (Kováčik & Frankovská 2024). The media reported on it immediately, and the recording garnered hundreds of thousands of listens. Šimečka and the journalist consistently denied the authenticity of the recording, which was later revealed to be a deepfake.
The defeat in the elections of Šimečka, who had long been leading in opinion polls, fuelled speculation that these were the “first” elections decided by deepfakes (Conardi 2023; De Nadal & Jančárik 2024). More was to come in the 2024 presidential elections, the final round of which was marked by rampant disinformation and various deepfake attempts to smear the two leading candidates, Ivan Korčok and Peter Pellegrini (see Kiripolská & Barca 2024; EDMO 2024).
But let us consider an alternative scenario for a moment. What if every manipulated social media post had carried a clear label indicating that the content might be AI-generated? Would that have prevented the damage in the parliamentary and presidential elections? Unlikely. Research gives a grim answer. Estimates suggest that humans identify deepfakes with 55.54 per cent accuracy – essentially no better than a coin flip (Diel et al. 2024). Even more concerning, a simple warning about possible manipulation does not significantly improve recognition accuracy.
This creates the transparency paradox. The European Union’s AI Act (EU 2024/1689) is built on the Enlightenment belief that knowledge liberates, and that rational citizens can critically evaluate content when they know its source. But cognitive psychology reveals this assumption to be flawed. The power of disinformation lies in its ability to exploit human thinking patterns. We are all susceptible to cognitive biases, such as confirmation bias – the tendency to seek information that reinforces our existing beliefs (see Ecker et al. 2022; Soprano et al. 2024; Marma 2025). Transparency is necessary, but it is also insufficient. At worst, it creates an illusion of security whilst in reality leaving us defenceless.
The transparency paradox is not a simple problem. In my view, it is built on three mutually reinforcing cognitive mechanisms that render mere warnings ineffective. The first is a phenomenon I call warning fatigue. Imagine your typical day on social media. You scroll through Facebook in the morning, browse Instagram on the bus, check X at lunch, and during your coffee break you let your thoughts wander far away with the help of TikTok. You see a vast amount of content every day. Estimates range from three hundred to several thousand items. If each of these, or even half of them, is labelled as AI-generated, what happens to your brain?
In medicine, there is a phenomenon known as alert fatigue. When a nurse sees dozens of medication warnings per day, they begin to override them automatically (see Olakotan & Yusof 2020; PSNet 2024). The brain cannot maintain a high state of alertness continuously (Sundermann et al. 2024). It adapts. The warning becomes background noise that is filtered out completely unconsciously. On social media, this phenomenon is even stronger because we do not consume content thoughtfully and deliberately, but scroll through it mindlessly.
The average viewing time for a single piece of content on Facebook is 1.7 seconds. Hardly anyone believes we could, in that time, read the warning, process its meaning, pause to assess credibility, and still make a rational decision about the content’s truth value. The cognitive load is simply too great (Jacob et al. 2025). At the same time, it is clear that when almost everything carries a warning, nothing stands out. The warning loses its meaning in the same way as the health notices on cigarette packets telling us that smoking kills. This is not a hypothetical threat, but a psychological law that will inevitably materialise as transparency labels become widespread on social media (see Lewandowsky et al. 2017).
The second mechanism is the supremacy of emotion over rational knowledge. A meta-analysis of 56 separate studies with some 20,000 participants revealed something alarming: people who knew in advance they were watching a deepfake still could not identify it significantly better than those who had not been warned. With foreknowledge, recognition accuracy improved by only 4–8 percentage points (Diel et al. 2024; Groh et al. 2021). This may be partly because rational knowledge and emotional impact are processed in partly different brain systems.
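To put those percentages in perspective, here is a minimal back-of-the-envelope sketch. The scenario – a hypothetical feed of one thousand clips, half of them deepfakes – is purely illustrative; only the accuracy figures come from the research cited above.

```python
# Back-of-the-envelope illustration of the accuracy figures cited above.
# The scenario (1,000 clips, half genuine and half deepfakes) is hypothetical;
# only the percentages come from the studies cited in the text.

baseline_accuracy = 0.555          # average human accuracy (Diel et al. 2024)
warned_accuracy = 0.555 + 0.06     # midpoint of the reported 4-8 point gain
chance = 0.5                       # flipping a coin

clips = 1_000

for label, accuracy in [("coin flip", chance),
                        ("unwarned viewer", baseline_accuracy),
                        ("forewarned viewer", warned_accuracy)]:
    misjudged = round(clips * (1 - accuracy))
    print(f"{label:>17}: roughly {misjudged} of {clips} clips misjudged")

# Even a forewarned viewer still misjudges roughly 385 of 1,000 clips:
# the warning barely moves the needle.
```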
When you see a video in which a public figure you admire says something shocking, or a politician you despise confesses to a crime, your first reaction is not usually analytical. It is emotional. And this emotional reaction leaves a trace, even if moments later you rationally understand the video to be a forgery. In psychology, this is known as the mere exposure effect: exposure alone is enough to shape our attitudes (Zajonc 1968; Bornstein 1989). This suggests that if we see a deepfake video of a politician behaving unethically, part of our brain may register it as evidence even when we rationally know it to be fake. This lingering suspicion that there is no smoke without fire influences our decision-making (Nosek et al. 2015; Lewandowsky et al. 2017).
The third phenomenon is distrust. It is not about what happens when you see one piece of content labelled as AI-generated, but about what happens when you see thousands of them. When every video, audio recording and image carries the caveat that it may be artificially produced, a general distrust gradually develops towards everything seen and experienced on the internet. If anything can be a forgery, how can you trust anything anymore?
This is known as the information entropy problem: when there is too much noise in a system, the signal is lost. Communications professionals have long struggled to cut through that noise, because the amount of information grows exponentially whilst the human capacity to process it grows only linearly. Democracy and nationhood rest on the existence of shared values, truths and facts on which we can debate. But if everything is questionable, and every piece of video evidence is potentially a forgery, the epistemological foundation of democracy inevitably begins to erode.
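The information-theoretic intuition behind this can be made concrete. A label carries information only to the extent that it is rare or discriminating; the short sketch below computes the Shannon information of an “AI-generated” label as such labels become ubiquitous. The shares of labelled content are illustrative assumptions, not measured data.

```python
import math

# Shannon information (surprisal) of encountering an "AI-generated" label,
# I = -log2(p), as the share of labelled content in a feed grows.
# The shares below are illustrative assumptions, not measured data.

for share_labelled in (0.05, 0.25, 0.50, 0.90, 0.99):
    bits = -math.log2(share_labelled)
    print(f"{share_labelled:5.0%} of content labelled -> {bits:.2f} bits per label")

# As the share of labelled content approaches 100 %, each label carries
# close to 0 bits of information: the warning dissolves into background noise.
```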
Next, it is worth asking: who benefits from such chaos? Authoritarian regimes and cybercriminals. For example, Russia’s disinformation strategy is not based on making you believe the lies it produces. It is based on making you doubt everything – including the truth. When you no longer know what to think or what to believe, you withdraw from politics and become a passive citizen. And that is precisely what authoritarian powers typically want (McKenzie & Isis 2025; Puscas 2023; Bradshaw & Howard 2019).
Thus transparency can paradoxically also weaken democracy instead of strengthening it. Transparency can create epistemic relativism, in which the concept of truth itself becomes unclear. And this is not an entirely hypothetical scenario, as we have already seen it materialise in Russia, where years of information warfare have created a society in which citizens do not trust any source of information, nor do they care what happens in politics or in the surrounding world (Floridi 2023).
But if transparency does not work, why has the EU built its entire AI Act around it? The answer lies in the legacy of Enlightenment philosophy and the structural constraints of the political system. Western democracy is based on Enlightenment philosophy – the belief that rational knowledge liberates. The philosopher Immanuel Kant once wrote: Sapere aude – dare to know. His idea was that when citizens have knowledge, they can make rational decisions.
This is admittedly a beautiful idea that has inspired the development of democracies for centuries. But it rests on the assumption that people consume information thoughtfully and analytically, which may no longer hold true in the 2020s. Humanity is plagued by information overload, which creates friction in the wheels of the information society and leaves readers and audiences without the energy to separate the essential from the non-essential themselves (Asikainen 2022). Enlightenment philosophers could not even imagine a world in which every citizen is exposed to a stream of hundreds or even thousands of messages competing for their attention in milliseconds.
Still less could they imagine algorithms designed to keep people hooked on content rather than to make them wiser. The EU’s AI Act (2024/1689) is built on this same Enlightenment optimism. It assumes that if citizens know the content is artificial, they will be able to evaluate it more critically. This is a rather naive approach to the crisis at hand. On the other hand, transparency may also simply be the politically easiest solution. It does not require large investments, nor does it interfere too radically with commercial business models. It does not require the establishment of new institutions; it is a compromise to which every EU member state can adapt.
Compare this to an alternative in which the EU required all major social media platforms operating in its territory to install automatic deepfake detection systems. That would almost certainly work better than transparency, but it would probably cost tens of billions of euros and face massive resistance from technology companies. There would be lengthy legal battles, and it would require the establishment of a new European supervisory authority. In practice, this alternative is politically impossible, which is why the EU chooses transparency – not because it is the most effective solution, but because it is the only politically feasible one. This is the tragic compromise of regulation, in which the best possible is not the same as the best achievable.
Despite all my criticism, I do not believe we are powerless in the face of deepfakes. There are three different approaches that could offer real protection against deepfakes and other manipulation attempts, but each of them requires courage and resources. The first protection is the technological safeguard I mentioned earlier, which would shift the responsibility for identification from humans to machines. Instead of relying on every citizen being able to identify a forgery, we would build systems that do the filtering automatically on our behalf.
The technology already exists. Current AI detection tools can identify deepfakes in laboratory conditions with accuracy as high as 84–90 per cent – significantly better than the roughly 55.5 per cent recognition accuracy of humans (Masood et al. 2023; Rana et al. 2022; Diel et al. 2024). The problem, then, is not a lack of technology but its implementation. To achieve this, the EU would need to mandate that every major social media platform – from TikTok to Meta, YouTube and X – install automatic detection systems. When a user shares an image or video, the system would scan it immediately, and if it identified a harmful forgery, it would warn the user and ask them to confirm whether they still want to share it.
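To illustrate what such a platform-side safeguard might look like, below is a minimal sketch of an upload-time check. The detector function, thresholds and moderation actions are all hypothetical placeholders for illustration; no existing platform API or detection model is implied.

```python
from dataclasses import dataclass

# Minimal sketch of the upload-time check described above. The detector,
# thresholds and moderation actions are hypothetical placeholders; no real
# platform API or detection model is implied.

@dataclass
class ScanResult:
    score: float   # 0.0 = likely authentic, 1.0 = likely synthetic
    action: str    # "publish", "warn_user" or "hold_for_review"

def detect_deepfake_score(media_bytes: bytes) -> float:
    """Placeholder for a trained detection model (e.g. an ensemble of
    image, audio and video forensics classifiers)."""
    raise NotImplementedError("plug a real detection model in here")

def scan_upload(media_bytes: bytes,
                warn_threshold: float = 0.7,
                block_threshold: float = 0.95) -> ScanResult:
    score = detect_deepfake_score(media_bytes)
    if score >= block_threshold:
        return ScanResult(score, "hold_for_review")  # likely forgery: route to human moderation
    if score >= warn_threshold:
        return ScanResult(score, "warn_user")        # ask the uploader to confirm before sharing
    return ScanResult(score, "publish")
```

Even in a sketch like this, the thresholds embody a policy choice about how many false alarms a platform is willing to accept – which is precisely where the real-world difficulties described below begin.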
This would not, however, be a perfect solution. Firstly, small social media channels that fall outside the regulation’s scope would gain an unfair competitive advantage over the major players. Secondly, research has shown that recognition accuracy achieved in laboratory conditions drops dramatically in real-world conditions, and detection systems would be locked in a constant arms race with forgers and generative AI (Masood et al. 2023; Rana et al. 2022). But despite everything, it would be a significant improvement on the current situation. It would create a protective layer that does not depend on every citizen’s AI literacy.
The second potential approach is to build an entirely new infrastructure for verifying content provenance. Instead of labelling what is AI-generated, we label what is verifiably authentic. I toyed with this idea back in 2023, proposing that content created without AI could be certified. Such a certificate could be compared to the Fairtrade label, which aims to improve the position of smallholders and plantation workers in developing countries. An AI-free certificate would guarantee that human work lies behind the content being consumed (Asikainen 2023).
The Coalition for Content Provenance and Authenticity (C2PA) has already developed technical standards for this. The idea is to embed metadata in every piece of media that tells its origin. It would reveal who created it, with what device, and when. This metadata is almost impossible to forge (Fraser 2025). However, implementation has been slow because it requires global coordination. The EU could, if it wished, force European platforms to use some equivalent model, but most content flows from beyond our geographical borders. If a Chinese, American or Russian platform does not participate in this, the system leaks.
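To make the provenance idea concrete, the sketch below binds creation metadata to a content hash and signs the pair so that later tampering can be detected. It is a deliberately simplified stand-in for the C2PA approach: real C2PA manifests use certificate-backed public-key signatures and a standardised manifest format, whereas the shared signing key and field names here are purely illustrative.

```python
import hashlib, hmac, json

# Simplified provenance sketch: bind creation metadata to a content hash and
# sign the pair. Real C2PA manifests use certificate-backed public-key
# signatures and a standardised format; the key and fields here are illustrative.

SIGNING_KEY = b"demo-key-held-by-the-capture-device"   # hypothetical key

def make_manifest(content: bytes, creator: str, device: str, created_at: str) -> dict:
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "device": device,
        "created_at": created_at,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

photo = b"...raw image bytes..."
manifest = make_manifest(photo, "Jane Reporter", "Camera Model X", "2025-10-14T09:00:00Z")
print(verify_manifest(photo, manifest))                # True: provenance intact
print(verify_manifest(photo + b"tampered", manifest))  # False: content no longer matches
```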
Secondly, such a method requires user trust, which may be in short supply given the current state of the world. In the current climate, many would suspect that a certification system is just a new way to control information and monitor people’s activities online. Nevertheless, this is in many ways a promising direction. It would create a long-term structure that makes creating deepfakes more difficult – a kind of digital-age equivalent of the signature. On the other hand, achieving it would require enormous political will and massive investments in digital infrastructure.
The third way is clearly the slowest, but its foundation is the most sustainable. In practice, the EU could teach the next generation to live in the age of AI. This would not mean simple courses on identifying deepfakes, but a fundamental change in how we teach people to think critically. Research shows that training in deepfake identification improves people’s recognition accuracy by only a few percentage points, which is far too little to matter in practice (Mai et al. 2023).
Imagine school education in which children would learn from primary school onwards how to identify a reliable source or evaluate the strength of evidence. Or how emotionality affects decision-making, and how to recognise confirmation bias. These are not technical skills but cognitive meta-skills that protect not only against deepfakes but also against all other kinds of manipulation.
The EU’s AI Act (2024/1689) also recognises this need. Among its many requirements is the obligation for organisations to develop the AI literacy of staff who work with AI (European Commission 2024). However, the requirement is vague, and its implementation, at least at this stage, remains rather unclear. Effective change would require national-scale educational programmes extending from nurseries to universities. Even then, the problem would be time.
Educating a generation takes about 20 years, but the crisis is here and now. For example, in 2023, an estimated 500,000 deepfakes were already shared on social media – over 1,300 every day, and one every minute (Baptista et al. 2023; Ulmer & Tong 2023). The EU does not have time to wait 20 years. Education may be the right solution in the long term, but it does not protect us today.
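A rough check of that arithmetic, assuming the forgeries were spread evenly across the year:

\[
\frac{500\,000\ \text{deepfakes}}{365\ \text{days}} \approx 1\,370\ \text{per day},
\qquad
\frac{1\,370\ \text{per day}}{1\,440\ \text{minutes}} \approx 1\ \text{per minute}.
\]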
In my view, none of the three means I have presented is sufficient on its own to protect people from deepfakes. Technological protection is effective, but it is in a constant arms race with forgers. Institutional certification creates a long-term structure but requires global coordination, which is almost impossible to achieve. Education is the most sustainable solution but too slow to respond to an acute crisis. An effective strategy requires all three simultaneously.
The EU needs immediate protection from technological solutions. It needs long-term infrastructure and certification systems. In addition, it needs generational change through education. These are not alternatives but layered defence mechanisms that complement each other.
The EU’s greatest weapon for years has been its ability to spread its regulatory standards to global markets. This is called the Brussels Effect – a phenomenon in which EU rules become de facto global standards because companies do not want to maintain separate systems for different markets (Bradford 2019). The General Data Protection Regulation (EU 2016/679, GDPR) is a classic example. Although it is an EU regulation, in practice every major technology company in the world complies with it because it is easier to build one data protection system than several operating simultaneously. For this reason, many American and Asian companies have already begun preparing for the EU’s AI Act, even though it does not yet directly concern them.
But with AI, the Brussels Effect may encounter its limits. There are three critical problems. Firstly, deepfakes do not respect borders. GDPR worked because data processing is tied to physical infrastructure and a legal presence within the EU, whereas AI-generated content spreads freely. A Russian disinformation operation does not care about EU rules: it creates a forgery in Moscow and shares it in Brussels seconds later (see Asikainen 2025).
Secondly, EU regulation is powerless if it cannot stop the flood of content coming from elsewhere, as the greatest threats come from countries that do not comply with EU rules. McKenzie and Isis (2025) document how a Moscow-funded global news network has infected Western AI tools with Russian propaganda. China, for its part, is developing its own AI systems that do not comply with Western standards. The United States, meanwhile, is taking a more innovation-friendly approach, which in practice means less regulation.
Thirdly, one must also consider that excessive EU regulation may drive European technology companies elsewhere. If regulation makes doing business in the EU too expensive or complicated, technology companies can pack their virtual suitcases and move to more lightly regulated jurisdictions. This risk is not entirely hypothetical; the same happens all the time in tax planning, when companies move their registered addresses from high-tax countries to more lightly taxed states.
In practice, the EU faces an impossible situation. It can regulate its own internal market, but it cannot stop the global flood of content. It can demand transparency, but it cannot force Russia or China to comply. It can create the strictest standards in the world, but if those standards knock European technology companies off the global playing field, they may leave the EU technologically dependent on other great powers.
The solution would require global coordination and a UN-level agreement on the ethical use of AI, but dreaming of such a thing is absurd in the current geopolitical climate. The West cannot agree with China and Russia on basic questions, let alone on complex technological standards. Thus EU leadership and the Brussels Effect are simultaneously necessary and insufficient: necessary because someone must show the way, insufficient because no one can protect themselves alone from global epistemic chaos.
The European Commission has described the deepfake crisis as a question of democracy’s fate (European Commission 2025a). This is not an exaggeration. It is not just a technological challenge, but a question of whether democracy can function at all in a world where truth is negotiable.
Democracy is based on three fundamental principles. Firstly, that there are shared facts that we can agree upon (Lewandowsky et al. 2023). Secondly, that citizens can make informed decisions based on these facts (Council of Europe 2022; Curran & Bruttin 2025; European Commission 2025b). Thirdly, that public debate is based on truth, not propaganda (Van Dyk 2022; Kavanagh & Rich 2018). Deepfakes threaten all three. First, they shatter the idea of shared facts, when any video can be a forgery. Then they make informed decision-making impossible when one can no longer trust one’s eyes. Finally, they poison public debate when truth is negotiable and power decides instead of arguments (Floridi 2023).
And this is not a future threat. This is happening right now. By the end of 2024, 20 US states had passed laws against political deepfakes. Yet despite hundreds of documented cases of political deepfakes, not a single criminal charge was brought, because it is unclear how to prove intentional manipulation when the technology is so advanced (Gray 2025). In other words, the legal system has not kept pace with technological development.
At the same time, deepfake fraud has increased explosively. Keepnet Labs (2025) reports that deepfake-based scam attempts now occur every five minutes. In CEO fraud, the average loss is $280,000 per case. The total economic damage in 2024 exceeded $25 billion. The financial incentives to create forgeries are enormous – and growing every day.
But economic damage is trivial compared to the erosion of democracy. When truth loses its meaning, democracy becomes a power struggle. It is no longer a Habermasian system in which the best arguments win, but a system in which the strongest propaganda wins. At that point, democracy no longer differs much from authoritarianism.
Article 50 of the EU’s AI Act is not the solution, but it is a start. It acknowledges the problem and creates a framework for addressing it, but it does not protect us. Transparency is necessary but at the same time insufficient. We need technological safeguards that identify forgeries automatically, institutional infrastructures that verify content authenticity, and education that teaches the next generation to live in the age of AI. We need all of these simultaneously.
Above all, however, we need honesty. We must acknowledge that the current approach is insufficient and be prepared to make difficult decisions and massive investments. Symbolic gestures will not save democracy. Enlightenment philosophers believed that knowledge liberates. Perhaps they were right, but only if the knowledge is true. In the age of AI, protecting truth requires more than warning signs – it requires new kinds of epistemic infrastructures that rebuild trust in the digital world.
The EU’s AI Act is a testing ground for whether Europe is ready to adapt to the future. Article 50 does not solve the epistemic crisis, but it shows that the EU recognises the problem. The question is whether recognition is enough, or whether we need more radical change. By 2050, we will either live in a world where these new infrastructures have been successfully built, or in a world where no evidence is reliable any more. In the latter world, neither democracy nor the legal system can function, and we will have lost something fundamental about what it means to live in a free society.
To emphasise the transparency paradox: the more we label content as artificial, the less we trust anything. The solution is not less transparency, but smarter transparency – the kind that recognises our cognitive limitations and builds protective mechanisms around them. Otherwise, warning signs are merely an illusion of security whilst democracy erodes. This was the third and final part of a three-part blog series on transparency in the age of AI.
Aarts, A., Anderson, J. E., Anderson, C. J., Attridge, P., Attwood, P., Axt, A., Babel, J., Bahnik, M., Baranski, S., Barnett-Cowan, E., Bartmess, M., Beer, E., Bell, J., Bentley, R., Beyan, H., Binion, L., Borsboom, G., Bosch, D., Bosco, F., et al. (2015). Estimating the reproducibility of psychological science. Science, 349. https://doi.org/10.1126/science.aac4716
Asikainen, M. (2022). Part of the solution instead of the problem. Published in eSignals, 25 November 2022. Haaga-Helia University of Applied Sciences. Accessed 10 October 2025.
Asikainen, M. (2023). Should content produced without AI be certified for the sake of consumer protection? Published in eSignals, 28 June 2023. Haaga-Helia University of Applied Sciences. Accessed 10 October 2025.
Asikainen, M. (2025). How disinformation spreads through LLM grooming. Published in eSignals Pro, 9 June 2025. Haaga-Helia University of Applied Sciences. Accessed 10 October 2025.
Baptista, D., Smith, A., & Harrisberg, K. (2023). Whose voice is it anyway? Actors take on AI copycats. Thomson Reuters Foundation. Published by Reuters, 20 October 2023. Accessed 10 October 2025.
Bornstein, R. F. (1989). Exposure and Affect: Overview and meta-analysis of research, 1968–1987. Psychological Bulletin, 106(2), 265–289. American Psychological Association.
Bradford, A. (2019). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
Bradshaw, S., & Howard, P. N. (2019). The global disinformation order: 2019 global inventory of organised social media manipulation. Project on Computational Propaganda.
Chandra, N. A., Murtfeldt, R., Qiu, L., Karmakar, A., Lee, H., Tanumihardja, E., Farhat, K., Caffee, B., Paik, S., Lee, C., Choi, J., Kim, A., & Etzioni, O. (2025). Deepfake-Eval-2024: A multi-modal in-the-wild benchmark of deepfakes circulated in 2024. arXiv.
Cihon, P., Maas, M. M., & Kemp, L. (2020). Should artificial intelligence governance be centralised? Design lessons from history. arXiv preprint arXiv:2001.03573.
Conardi, P. (2023). Was Slovakia’s election the first to be swung by deepfakes? Published in The Times on 7 October 2023. Accessed 10 October 2025.
Curran, N., & Bruttin, T. (2025). No facts? No freedom. Democracy depends on access to reliable information. Published on the European Broadcasting Union website, 10 November 2025. Accessed 12 November 2025.
De Nadal, L., & Jančárik, P. (2024). Beyond the deepfake hype: AI, democracy, and “the Slovak case”. Published 22 August 2024 in the Misinformation Review. Cambridge: Harvard Kennedy School. Accessed 10 October 2025.
Diel, A., Lalgi, T., Schröter, I. C., MacDorman, K. F., Teufel, M., & Bäuerle, A. (2024). Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers. Computers in Human Behavior Reports, 16, 100538.
Ecker, U. K., Lewandowsky, S., Cook, K., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1, 13–29.
European Commission (2024). Artificial Intelligence Act: EU countries give final green light to the Commission’s proposal. 12 June 2024, Brussels. Accessed 10 October 2025.
European Commission (2025a). Commission launches consultation to develop guidelines and a code of practice on transparent AI systems. 6 September 2025, Brussels.
European Commission (2025b). Stronger measures to protect our democracy and civil society. Published on the European Commission website, 12 November 2025. Brussels. Accessed 12 November 2025.
Council of Europe (2022). Recommendation CM/Rec(2022) on access to information and democracy. Adopted by the Committee of Ministers, Strasbourg. Accessed 12 November 2025.
European Digital Media Observatory (EDMO) (2024). EU Elections. Disinfo Bulletin – Issue No. 2/2024. European Commission, Brussels. Accessed 12 November 2025.
Floridi, L. (2023). The ethics of artificial intelligence for international relations: Challenges and opportunities. Ethics & International Affairs, 37(3), 345–357. Cambridge University Press.
Fraser, M. (2025). Deepfake statistical data 2023–2025. Published on Views4You, 27 May 2025. Accessed 10 October 2025.
Gray, C. H. (2025). Political deepfakes and elections. Middle Tennessee State University. Published 6 December 2024, updated 11 January 2025. Accessed 10 October 2025.
Groh, M., Epstein, Z., Firestone, C., & Picard, R. (2021). Deepfake detection by human crowds, machines, and machine-informed crowds. Proceedings of the National Academy of Sciences, 119(1), e2110013119.
iProov (2025). iProov Study Reveals Deepfake Blindspot: Only 0.1% of People Can Accurately Detect AI-Generated Deepfakes. Published 12 February 2025. Accessed 10 October 2025.
Jacob, C., Kerrigan, P., & Bastos, M. (2025). The chat-chamber effect: Trusting the AI hallucination. Big Data & Society, 12(1). Sage Publishing.
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38.
Kavanagh, J., & Rich, M. D. (2018). Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life. RAND Corporation. Published online 16 January 2018. Accessed 10 October 2025.
Keepnet Labs (2025). Deepfake Statistics & Trends 2025: Growth, Risks, and Future Insights. Published 24 September 2025. Accessed 10 October 2025.
Khalil, M. (2025). Deepfake statistics 2025: The data behind the AI fraud wave. Published on DeepStrike, 8 September 2025. Accessed 10 October 2025.
Kiripolská, K., & Barca, R. (2024). How Fake Accounts Spread a Hoax in Slovakia’s Election Race. Published on VSquare, 10 October 2024. Accessed 10 October 2025.
Kováčik, T., & Frankovská, V. (2024). How AI-generated content influenced parliamentary elections in Slovakia: The Slovak Police will investigate the recording for a third time. Published on the Central European Digital Media Observatory website, 25 November 2024.
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369.
Lewandowsky, S., Ecker, U. K. H., Cook, J., Van der Linden, S., Roozenbeek, J., & Oreskes, N. (2023). Misinformation and the epistemic integrity of democracy. Current Opinion in Psychology, 54(101711). Elsevier.
Mai, K. T., Bray, S., Davies, T., & Griffin, L. D. (2023). Warning: Humans cannot reliably detect speech deepfakes. Published in PLOS One, 2 August 2023. Accessed 1 October 2025.
Marma, K. J. S. (2025). The Science of Disinformation: Cognitive Vulnerabilities and Digital Manipulation. Published in Modern Diplomacy, 9 February 2025. Accessed 10 October 2025.
Masood, M., Nawaz, M., Malik, K. M., Javed, A., Irtaza, A., & Malik, H. (2023). Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward. Applied Intelligence, 53(4), 3974–4026.
McKenzie, S., & Isis, B. (2025). A well-funded Moscow-based global “news” network has infected Western AI tools worldwide with Russian propaganda. Special Report, NewsGuard’s Reality Check.
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., … Yarkoni, T. (2015). Promoting an open research culture: Author guidelines for journals could help promote transparency, openness, and reproducibility. Science, 348(6242), 1422–1425.
Olakotan, O. O., & Yusof, M. M. (2020). Evaluating the alert appropriateness of clinical decision support systems in supporting clinical workflow. Journal of Biomedical Informatics, 106, 103453.
PSNet Editorial Team (2024). Alert Fatigue. Published on UC Davis PSNet, 15 June 2024. AHRQ Patient Safety Network. Accessed 10 October 2025.
Puscas, A. (2023). Artificial intelligence, influence operations, and international security: Understanding the risks and paving the path for confidence-building measures. UNIDIR.
Rana, M. S., Nobi, M. N., Murali, B., & Sung, A. H. (2022). Deepfake detection: A systematic literature review. IEEE Access, 10, 25494–25513.
Soprano, M., Roitero, K., La Barbera, D., Ceolin, D., Spina, D., Demartini, G., & Mizzaro, S. (2024). Cognitive biases in fact-checking and their countermeasures: A review. Information Processing & Management, 61(3).
Stix, C. (2021). Taking the European AI Act seriously. Nature Machine Intelligence, 3(6), 446–448.
Sundermann, M., Clendon, O., McNeill, R., Doogue, M., & Chin, P. K. L. (2024). Optimising interruptive clinical decision support alerts for antithrombotic duplicate prescribing in hospital. International Journal of Medical Informatics, 186, 105418.
Ulmer, A., & Tong, A. (2023). Deepfaking it: America’s 2024 election collides with AI boom. Published by Reuters, 31 May 2023. Accessed 10 October 2025.
Van Dyk, S. (2022). Post-truth, the future of democracy and the public sphere. Theory, Culture & Society, 39(4), 37–50.
Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology Monograph Supplement, 9(2), Part 2.