Point of View: The Deepfake Crisis Is Already Here – But Why Can't We See It?

The European Union is responding to the deepfake crisis with transparency: synthetic content must be labelled and users informed of its use. But is this enough when AI systems themselves do not know which of their outputs are true? When laws change slowly and AI evolves exponentially? This is the first installment in a three-part blog series.

Text: Martti Asikainen, 1.10.2025 | Photo: Adobe Stock Photos (Revised 10.11.2025)

A face being identified with a scanner.

We are witnessing the end of an epistemic era: seeing is no longer believing.

For thousands of years, humans have operated on the principle that our senses, particularly sight and hearing, provide reliable pathways to truth. That pillar of our worldview is now crumbling to dust as AI systems become ever more effective at manipulating content.

AI’s ability to produce artificial yet authentic-looking and authentic-sounding content has already reached a level that undermines our understanding of authenticity, evidence, and verification itself (Vaccari & Chadwick 2020; Floridi 2024). Above all, the challenges facing policymakers today are unprecedented both in scale and technical complexity.

Whilst just ten years ago creating convincing fake content required Hollywood resources and a director of Christopher Nolan or Peter Jackson’s calibre, today even a malicious teenager can produce a persuasive deepfake with AI in under ten minutes (Cruz 2025; Westerlund 2019).

Explosive Growth in Political and Financial Fraud

Between July 2023 and July 2024, researchers identified a total of 82 large-scale, politics-related deepfakes targeting public figures in 38 different countries (Insikt Group 2024). At the same time, private-sector monitoring data show that the use of fraud technologies has exploded. Sumsub, an identity verification service, reported that deepfake attempts detected in its system increased more than tenfold between 2022 and 2023.

In North America, growth was 1,740%, and in the Asia-Pacific region 1,530% (Sumsub 2024). Whilst these figures describe only one technological ecosystem, they reflect a broader trend where fraud technologies are rapidly spreading across different sectors of society. According to researchers, this development represents a paradigm shift with profound and significant consequences for societal trust and the credibility of public institutions (Bradshaw, Bailey & Howard 2021).

The European Commission has recognised the threats posed by deepfakes in its consultation documents. It warns of scenarios including AI impersonating humans, interpreting emotions without consent, and producing synthetic content at a pace that fact-checking cannot follow (European Commission 2025a). But in its rhetoric, the Commission still speaks of an emerging threat, which can be considered a fundamental error. The threat of deepfake technology is not emerging; it has already materialised on numerous occasions.

The Crisis Is Already Here

Let’s examine concrete examples from the past couple of years. In January 2024, thousands of Democratic voters in New Hampshire, United States, received AI-generated robocalls mimicking President Joe Biden’s voice, urging them not to vote in the state’s primary elections. The consultant responsible later received a $6 million fine from the Federal Communications Commission (Seitz-Wald 2024; Bond 2024).

In 2023, a fabricated audio recording circulated in Slovakia, claiming a political leader was planning to rig elections. This occurred just before parliamentary elections, resulting in considerable political instability and erosion of trust in the democratic system and those in power (Surfshark 2024).

In India, the renowned politician Muthuvel Karunanidhi was shown participating in a political conference in 2024, despite having died in 2018. The AI resurrection of the deceased leader was used to boost his party’s support. In Argentina, AI-generated content claimed in 2024 that presidential candidates had withdrawn from elections just hours before voting commenced (ibid.).

In Germany, the Russian Storm-1516 network established over 100 AI-driven websites between 2024 and 2025, spreading deepfake videos ahead of the federal elections (Zamji 2024). The Commission’s reading of the threat’s timeline may prove fatal when it attempts to tighten regulatory measures. This is not a future threat but a present crisis, in which technological development inevitably progresses years ahead of legislation (Mahieu et al. 2021).

Humans Cannot Reliably Detect Deepfakes

The European Commission’s approach rests on a strong assumption that an informed user can make rational decisions about synthetic content, but research reveals this to be far from reality. The statistics are frankly devastating: studies show that users repeatedly fail to recognise synthetic content. General detection accuracy is approximately 55.5%, barely better than random guessing, a figure derived from a meta-analysis of 56 studies involving a total of 86,155 participants (Diel et al. 2024).

Human detection rates fall even further with high-quality deepfake videos, where the detection rate is only 24.5%, meaning humans correctly identify advanced deepfakes in less than a quarter of cases (Khalil 2025a). One recent study suggests that for audio, humans recognise around 73% of deepfaked audio tracks (Mai et al. 2023; Schlenker 2024).

However, the study’s results can be questioned, as participants reported being disappointed with the quality of the machine-generated accents and background sounds; according to respondents, the audio tracks had problems with pronunciation and pauses. The most advanced AI models can already produce entirely or nearly natural images and speech. In an experiment conducted in early 2025 in the United Kingdom with 1,200 participants, only 0.1% of people could reliably identify all fake and authentic media content when different types of media content were mixed together (iProov 2025).

Worst of all, developing people’s AI literacy may not have much impact on detection rates. According to a recent study, systematic training in which participants are shown examples of deepfakes improves detection ability by an average of only 3.84% (Mai et al. 2023). If neither intuition nor training suffices, the conclusion is that this epistemic crisis can only be addressed through legislative means.

Deepfakes as Tools for Financial Crime

Political discourse around AI focuses primarily on threats to democracy, but we must also acknowledge the economic dimension of deepfakes. Statistics show that deepfakes have become a highly profitable tool for financial crime, which further reinforces the view that transparency requirements alone are insufficient.

According to statistics, fraud attempts involving deepfakes grew globally by as much as 3,000% in 2023 as generative AI tools became common amongst consumers (Onfido 2024). The number of identity thefts using deepfakes also increased 31-fold in just one year (Sumsub 2023). The most common reason is people’s inability to distinguish high-quality deepfakes from genuine content.

Currently, deepfake fraud attempts occur approximately every five minutes (Keepnet Labs 2025). One of the best-known cases is from 2024, when the international engineering firm Arup was attacked using a combination of traditional phishing and modern technology. It all began when an employee in the company’s finance department received an email that appeared to come from its UK-based Chief Financial Officer (CFO).

The message requested an urgent and confidential money transfer. The employee was suspicious of the message, but their doubts dissipated when they were invited to a video conference to discuss the matter. On the video call, the employee saw and heard a person who looked and sounded like the CFO. Other familiar colleagues were also present.

In reality, all meeting participants except the victim were deepfake personas created by AI. Convinced by the visual and audio evidence, the employee made 15 separate account transfers, ultimately sending a total of $25.6 million to the fraudsters. This wasn’t traditional hacking, but rather socially engineered fraud enhanced by technology, as Arup’s Chief Information Officer later described the incident.

Similar attempts have been made at WPP, the world’s largest advertising agency, where criminals used an audio clone of CEO Mark Read’s voice in a Microsoft Teams meeting. At Ferrari, one executive received a WhatsApp call where a cloned voice mimicked CEO Benedetto Vigna. In both cases, employees noticed small inaccuracies in the speech, which prevented them from falling for the scam (Khalil 2025b).

Transparency Labels Are Not Enough

Deepfakes are not a problem that transparency labels can solve. When the financial benefit is substantial and the technical barriers are low, regulation requires much more than warning signs. The European Union must create binding technical standards, effective oversight mechanisms, and sanctions that outweigh the gains from misuse or from enabling it.

Companies whose applications are used to commit fraud should also be held accountable if they fail to ensure their products are safe. The epistemic crisis caused by deepfake technology is not a future threat but an already-realised reality. Concrete cases from New Hampshire to Slovakia demonstrate that the technology has caused harm to democratic processes and economic security.

The research evidence is indisputable. Humans cannot reliably identify deepfakes: general detection accuracy is only 55.5%, and for high-quality videos it drops to 24.5%. Training improves detection ability only marginally, by an average of 3.84%.

Nor are AI-generated content detection tools convincing. Whilst laboratories at proper research centres report detection accuracy rates exceeding 90%, when top-tier detection systems are taken into the Wild West of real life, they lose as much as 45–50% of their accuracy in the worst cases (Chandra et al. 2025).

The technology is already within consumers’ reach, and the financial incentives entice criminality. To survive this transformation, we need new kinds of epistemic infrastructures that preserve trust, truth, and legitimacy. Without such measures, provisions like Article 50 of the AI Act may offer political comfort but little real protection against the digital fraud crisis.

References

Bond, S. (2024). How deepfakes and AI memes affected global elections in 2024. Published on National Public Radio (NPR) website 21 December 2024. Accessed 1 October 2025.

Bradshaw, S., Bailey, H., & Howard, P. N. (2021). Industrialized disinformation: 2020 global inventory of organized social media manipulation. Oxford Internet Institute.

Chandra, N. A., Murtfeldt, R., Qiu, L., Karmakar, A., Lee, H., Tanumihardja, E., Farhat, K., Caffee, B., Paik, S., Lee, C., Choi, J., Kim, A. & Etzioni, O. (2025). Deepfake-Eval-2024: A multi-modal in-the-wild benchmark of deepfakes circulated in 2024. arXiv.

Cruz, B. (2025). 2025 deepfakes guide and statistics. Published on Security.org website 26 August 2025. Accessed 1 October 2025.

Diel, A., Lalgi, T., Schröter, I. C., MacDorman, K. F., Teufel, M. & Bäuerle, A. (2024). Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers. Computers in Human Behavior Reports, 16, 100538.

European Commission. (2025a). Commission launches consultation to develop guidelines and a code of practice on transparent AI systems. Brussels. Published 6 September 2025. Accessed 1 October 2025.

Floridi, L. (2024). AI and the end of truth? Epistemic challenges of generative systems. Philosophy & Technology, 37(2), 15. Springer Science+Business Media.

Insikt Group. (2024). Targets, Objectives, and Emerging Tactics of Political Deepfakes. Threat Analysis. Recorded Future. Published 24 September 2024. Accessed 1 October 2025.

iProov. (2025). iProov Study Reveals Deepfake Blindspot: Only 0.1% of People Can Accurately Detect AI-Generated Deepfakes. Published on iProov website 12 February 2025. Accessed 1 October 2025.

Keepnet Labs. (2025). Deepfake Statistics & Trends 2025: Growth, Risks, and Future Insights. Published on Keepnet Labs website 24 September 2025. Accessed 1 October 2025.

Khalil, M. (2025a). Deepfake statistics 2025: The data behind the AI fraud wave. Published on DeepStrike website 8 September 2025. Accessed 1 October 2025.

Khalil, M. (2025b). AI Cybersecurity Threats 2025: How to Survive the AI Arms Race. Published on DeepStrike website 6 August 2025. Accessed 1 October 2025.

Mahieu, R., Van Hoboken, J., & Ausloos, J. (2021). Measuring the Brussels effect through access requests. Journal of Information Policy, 11, 301–331. Penn State University Press.

Mai, K. T., Bray, S., Davies, T. & Griffin, L. D. (2023). Warning: Humans cannot reliably detect speech deepfakes. PLOS ONE. Published 2 August 2023. Accessed 1 October 2025.

Onfido. (2024). Identity fraud report 2024. Onfido.

Schlenker, D. (2024). Listen carefully: UF study could lead to better deepfake detection. University of Florida News. Published 15 November 2024. Accessed 1 October 2025.

Seitz-Wald, A. (2024). Telecom company agrees to $1M fine over Biden deepfake. Published on NBC News website 21 August 2024. Accessed 1 October 2025.

Sumsub. (2023). Deepfake detection and fraud statistics. Sumsub.

Sumsub. (2024). Identity theft and fraud statistics. Sumsub.

Surfshark Research. (2024). 38 countries have faced deepfakes in elections. Surfshark Research. Published on Surfshark website 9 December 2024. Accessed 1 October 2025.

Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). Sage Journals.

Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 39–52.

Zamji, X. (2024). AI-driven Russian disinformation campaign targets German elections. Euractiv.

Martti Asikainen

Communications Lead
+358 44 920 7374
martti.asikainen@haaga-helia.fi

