Point of View: Transparency or Magic Tricks? The EU AI Act in Crosswinds

The EU AI Act aims to regulate AI’s Wild West landscape, but its transparency requirements raise a fundamental question: where does deception end and artistic freedom begin? Humanity has always used artificial means to enhance storytelling. Synthetic media can enrich culture if regulation focuses on context and purpose rather than the technology itself. This is the second installment in a three-part blog series.

Text by Martti Asikainen, 9.10.2025 | Photo by Adobe Stock Photos (Revised 11.11.2025)

Image: A magician on stage, with the shape of the EU flag and map visible behind him.

When a magician whispers a warning to their audience before the performance, telling them that everything is merely sleight of hand, is it still magic? The European Union’s AI Act (EU 2024/1689) finds itself in a rather similar situation, requiring that every AI-generated trick be labelled and disclosed to the audience in advance. But unlike in the world of entertainment, this is not merely about artistic experience—it concerns safeguarding democracy and truth in a changing world.

The EU AI Act must balance on the edge of the impossible: how does one create transparency-demanding regulation in a society built upon magic, metaphor, and artistic freedom? The Act seeks to expose ‘magic tricks’, even though these very elements form the core of human culture.

The AI Act Was Born of Necessity

The European Parliament adopted the EU AI Act (EU 2024/1689) in March 2024, and the Council of the EU gave its final approval to the text on 21 May 2024. The Act applies in stages: it entered into force on 1 August 2024, and its final obligations take effect in August 2027. National implementation will take considerably longer.

The AI Act has numerous positive aspects. A single rulebook for 27 member states reduces regulatory fragmentation and gives businesses legal certainty and predictability in the complex EU internal market. Its strictest requirements target high-risk systems, whilst low-risk applications face only light transparency requirements.

High-risk systems include, for example, credit scoring and lending decisions in the financial sector, uses in the justice system and law enforcement, critical infrastructure, and recruitment. Low-risk systems, by contrast, have virtually no impact on individual rights or safety; the category covers primarily customer service bots, content production tools, and home automation systems.

The AI Act is expected to reduce unnecessary administrative burden and focus efforts where their impact is greatest. Simultaneously, prohibitions on harmful practices come into force, substantially improving individual rights. The Act bans practices posing unacceptable risk, such as social scoring, remote biometric identification and surveillance in public spaces, and AI-driven behavioural manipulation and exploitation (EU AI Act 2024; Bird & Bodo 2023).

Transparency as a Cornerstone of Democracy

Article 50 of the EU AI Act requires separate notification when users interact with AI, and similarly, AI-generated content, such as deepfakes, must be marked with appropriate watermarks or other identifiable features. This is expected to improve trust and prevent deception, though there are also fears that it will kill the magic.
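Article 50 does not prescribe any single marking technique. Purely as an illustration, the following Python sketch shows one way a machine-readable disclosure could be attached to an image, here as a PNG metadata field written with the Pillow library; the key name ai_disclosure and the label text are hypothetical choices of ours, not anything the Act mandates.

```python
# Illustrative sketch only: one conceivable machine-readable disclosure,
# stored as a PNG text chunk. Article 50 does not mandate this format,
# and the key name "ai_disclosure" is hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (512, 512), "white")  # stand-in for AI-generated content

meta = PngInfo()
meta.add_text("ai_disclosure", "This image was generated with AI.")
image.save("generated.png", pnginfo=meta)

# A platform, browser, or checking tool could then surface the label:
reloaded = Image.open("generated.png")
print(reloaded.text.get("ai_disclosure"))  # -> "This image was generated with AI."
```

The catch, to which we return below, is that a label of this kind lives or dies with the file: such metadata rarely survives the re-encoding and stripping that social platforms routinely perform on uploads.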

In practice, the EU thereby seeks to create informed consent for algorithmic decision-making, a digital counterpart to informed consent in medicine (see Kim & Routledge 2022; Castelluccia & Le Métayer 2019). Informing people in advance that they are dealing with artificial intelligence is commendable for many reasons (e.g. Leslie 2019). At its foundation lies the goal of strengthening individual autonomy and trust by making transparent how algorithms affect people’s treatment, data, and choices in the digital environment.

The urgency of transparency requirements became concrete in 2024, when multibillionaire Elon Musk shared an AI-manipulated campaign video of Kamala Harris without any disclaimer. The video quickly gathered over 129 million views (Milmo 2024). The incident starkly demonstrated how rapidly and extensively synthetic media can distort political discourse and potentially even influence democratic processes.

When synthetic media can convincingly simulate reality in a manner that the average voter cannot distinguish from genuine material, transparency is not merely a technical requirement—it is a democratic necessity. Without clear labelling, users cannot make informed decisions about what information to trust, whom to vote for, or how to form their understanding of public figures and societal issues.

The Impossible Boundary Between Satire and Deception

A Californian case from 2024 offers a cautionary example of how difficult transparency requirements are to implement in practice. Following the video Musk shared, California Governor Gavin Newsom signed three laws concerning deepfakes. Amongst other things, the laws prohibited deepfakes intended to mislead voters within 120 days of an election. In his remarks, Newsom referred to Musk and his post, amongst others.

Soon afterwards, Christopher Kohls, who identified himself as the video’s creator and described himself as right-leaning, challenged the law in court (Korte 2024). According to Kohls, the content he created and Musk subsequently shared was political satire. Federal judge John Mendez then blocked the law’s enforcement as unconstitutional (Zeff 2024), holding that it violated freedom of speech by restricting political expression beyond constitutional limits.

The California case reveals regulation’s central paradox: who determines what constitutes satire and what constitutes deception? Kohls is a well-known political influencer and cannot by any measure be considered a comedian or satirist. On the other hand, the question simultaneously arises whether all synthetically produced political content is automatically deceptive, or whether it can also be legitimate, even welcome, societal criticism.

The EU's Approach and the Artistic Exception

The EU AI Act (2024/1689) has sought to learn from California’s mistakes. According to Article 50 of the Act, content produced or modified by AI, such as deepfakes, synthetic voices or images, must be marked as artificial. The exception, however, is manifestly artistic, creative, or satirical content, where the labelling requirement has been limited. In such contexts, it is sufficient to inform the audience in an appropriate manner that content has been produced or modified with the aid of AI, provided this does not ‘prevent the presentation or enjoyment of the work’.

In practice, this means that in art, entertainment, and creative expression, AI-generated content may be used without watermarks, provided the audience is given understandable information about the material’s artificial nature, for instance in the work’s description, opening credits, or in another contextually appropriate manner. This aims to protect the audience’s right to know and combat deception whilst not ‘killing the magic’, which is deeply connected not only to artistic freedom but also to experiencing it (European Commission 2024; Fiore 2024).

But the solution creates as many questions as it provides answers. Let us consider for a moment a situation where a Finnish film director makes a dystopian documentary in which AI generates speeches by an imaginary prime minister as criticism of the current political climate. Is this artistic commentary deserving of an exception, or dangerous deception requiring large warning texts? What if the same video is shared on social media detached from its original context? How will it then be treated?

Who decides what is ‘manifestly’ artistic, and on what grounds? When does a Netflix series’ AI-rejuvenated actor require labelling, and when is it simply part of the film’s magic? Is a museum’s virtual, AI-based guide art or an information service? How an artistic, creative, satirical, or similar work is defined, like what counts as ‘appropriate’ disclosure, will likely be left to national courts to decide, which creates uncertainty and possibly 27 different interpretations of the same regulation (European Commission 2025).

The Arms Race Between Technology and Regulation

Regulatory challenges are not limited to legal and ethical questions. The technology itself involves four central problems that make implementing transparency requirements complex. Firstly, the AI field has already become the Wild West, and taming it would require lawmen as tough as Bass Reeves (see Wheeler 2023; European Commission 2024). The technology has spread widely, its tools are available to almost anyone, and the platforms on which content is shared often operate outside EU jurisdiction, or at the very least in its grey areas.

Secondly, AI systems hallucinate information in ways that cannot be fully anticipated. When the system itself does not always know what in its output is fact and what is fiction, implementing warnings and watermarks may be easier said than done (e.g. Ji et al. 2023). How can one mark content as artificial when the system may confuse fact and fiction in ways that even its creators do not fully understand?

Thirdly, watermarks and technical identifiers are vulnerable to manipulation. They can be removed, modified, or bypassed, whether with generative AI built into applications or simply by resizing an image, and when content is copied and re-shared across platforms, the original markings easily disappear into the bitstream (see Li et al. 2023; Hitaj et al. 2019). Additionally, some AI systems are open source, which means they can be modified so that identifiers never make it into the final product.
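To make that fragility concrete, here is a toy Python sketch; it is an illustrative assumption of ours, not the scheme of any real generator. It hides a disclosure string in the least significant bits of an image’s pixels, and a single innocuous resize round-trip scrambles it beyond recovery.

```python
# Toy least-significant-bit (LSB) watermark, to illustrate fragility.
# Real provenance watermarks are far more robust, but face analogous
# attacks (see Li et al. 2023; Hitaj et al. 2019).
import numpy as np
from PIL import Image

LABEL = "AI-GENERATED"  # hypothetical disclosure string

def embed(img: Image.Image, label: str) -> Image.Image:
    """Hide the label's bits in the LSBs of the red channel."""
    data = np.array(img.convert("RGB"))
    bits = np.unpackbits(np.frombuffer(label.encode(), dtype=np.uint8))
    red = data[..., 0].ravel().copy()
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite LSBs
    data[..., 0] = red.reshape(data.shape[:2])
    return Image.fromarray(data)

def extract(img: Image.Image, n_chars: int) -> str:
    """Read n_chars back out of the red channel's LSBs."""
    red = np.array(img.convert("RGB"))[..., 0].ravel()
    return np.packbits(red[: n_chars * 8] & 1).tobytes().decode(errors="replace")

marked = embed(Image.new("RGB", (256, 256), "gray"), LABEL)
print(extract(marked, len(LABEL)))   # -> "AI-GENERATED"

# Sharing pipelines routinely resize and recompress content; even a
# one-pixel resize round-trip typically destroys the hidden bits.
mangled = marked.resize((255, 255)).resize((256, 256))
print(extract(mangled, len(LABEL)))  # -> unreadable noise
```

Production watermarks spread their signal across the whole image precisely to survive such edits, yet the research cited above shows that determined adversaries can still strip or forge them.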

Fourthly, defining an evolving technology produces challenges of its own. Legislators repeatedly attempt to tie regulation to a fixed definition of a moving target, which is much the same as trying to nail jelly to a wall (Bryson 2022). Generative AI is currently developing so rapidly that no one can predict what kind of systems we will have in two or even five years’ time.

Going through the legislative process in the EU and national parliaments takes time, and by the time regulations finally enter into full force across all EU countries, the technology will already have changed fundamentally since the rules were drafted. This poses a continuous challenge: the EU must keep pace with development without the regulation losing its significance or its ability to steer development in an ethically sustainable direction.

The Question of Oversight and Enforcement

The Act’s practical implementation also raises the question of oversight. Whilst the Act creates uniform rules, enforcement falls largely to national authorities. Who, in practice, verifies that an AI artwork in a Finnish gallery is appropriately labelled? And how can social media platforms be obliged to check the billions of images and videos shared daily?

Moreover, sanctions vary according to the severity of the violation. For the most serious violations, such as the use of prohibited AI systems, fines of up to €35 million or 7 per cent of a company’s global annual turnover, whichever is higher, may be imposed (2024/1689). Fines for violations of transparency obligations are smaller but still considerable. The question is whether these sanctions will be applied consistently, or whether the EU will find itself in the same situation as with the GDPR, where enforcement has been uneven.

On the other hand, the stakes are enormous. Demanding requirements may test the EU’s geopolitical authority on the global playing field if their oversight cannot be organised credibly. The EU is attempting to reproduce the so-called Brussels Effect, whereby its regulatory standard spreads to become global practice, just as the GDPR did for data protection (Murphy 2025; Li & Chen 2024; Bradford 2019). But if Chinese or American AI companies ignore EU markets or create content that circumvents the Act’s requirements, the regulation becomes toothless. There is also a risk that European creative companies will move their operations to jurisdictions where regulation is lighter.

Technology That Does Not Know Itself

The EU AI Act seeks to create clarity in a world that is inherently unclear, both technically and culturally. AI’s hallucination problem makes transparency almost a philosophical paradox: how can a system be transparent when it does not itself know what kind of content it produces?

Simultaneously, the cultural question deepens. Humanity has always used artificial means to enhance art and storytelling: from theatrical make-up to special effects, from forgeries and trompe-l’œil paintings to digital effects. None of these is ‘real’; all seek to create an illusion, a magic that transcends reality.

Viewed from this perspective, synthetic media and deepfakes can at their best even enrich cultural narrative rather than merely threatening it. Consider, for instance, a documentary in which AI revives a dead language and gives historical figures the ability to speak in their own voices in an educational context. Or an artwork that uses synthetic media to comment on society’s relationship with technology. What about a film in which an ageing actor can continue their role digitally rejuvenated?

The question reveals regulation’s tendency towards black-and-white thinking. Not all synthetic content is deceptive. The problem is not the technology itself, but the contexts and ways in which it is used. The same tools that can threaten democracy can also advance art, education, and cultural preservation.

Towards Context-Dependent Ethics

At its worst, the EU AI Act creates a deep tension between two important values. On one side is the democratic demand for truth and transparency, and on the other, the right to imagination and creating illusion (Helberger, Pierson & Poell 2018). But perhaps this very tension forces us to think more carefully about what we truly wish to protect when we regulate AI and its content.

The solution may not lie in stricter regulation of technology, but in a better understanding of context. Rather than asking ‘is this AI?’, we should ask for what purpose it is being used, for whom it was made and who the audience is, and what the likely impact of its use is on different stakeholders.

The same deepfake video can be dangerous disinformation in a political campaign, but entirely acceptable in a satirical talk show or educational context where its purpose is precisely to criticise the possibility of manipulation. Similarly, an AI-rejuvenated actor serves artistic vision, but the same technology could be harmful if used to create a forgery in which a person appears to do something they have never done.

The EU AI Act represents the first genuine attempt to navigate these crosswinds. Its greatest challenge is not technical but fundamentally philosophical: how can we protect truth without killing creativity, and how can we bring transparency to the darkest corners of the digital world whilst preserving the power of illusion and magic?

The answer is not found merely in watermarks or warning texts, but in understanding that the ethics of synthetic media does not depend on the technology itself, but on whose hands it is in and the purposes for which it is used. Transparency is a valuable goal, provided it does not simultaneously destroy the magic that makes art and culture meaningful.

The magician can whisper to the audience that their magic is an illusion, and yet the enchantment can succeed if the viewer understands the context and chooses to surrender to the experience. Similarly, AI-generated content can be labelled and yet impactful, if the labelling does not kill the purpose but clarifies it. The ultimate success of the EU AI Act depends on whether it can make this distinction—separate deceptive magic tricks from those magic tricks that enrich human experience.

References

Bird, R., & Bodo, B. (2023). The European Union’s Artificial Intelligence Act: A new paradigm for regulating AI. European Journal of Risk Regulation, 14(1), 1–20. Cambridge University Press.

Bradford, A. (2019). The Brussels Effect: How the European Union Rules the World. Oxford: Oxford University Press.

Bryson, J. J. (2022). Europe is in Danger of Using the Wrong Definition of AI. Wired, 2 March 2022. Accessed 3 October 2025.

EU Artificial Intelligence Act. (2024). High-level summary of the AI Act. Brussels: European Union. Published 27 February 2024. Accessed 3 October 2025.

European Commission. (2024). Artificial Intelligence Act: EU countries give final green light to the Commission’s proposal. Brussels: European Commission.

European Commission. (2025). Governance and enforcement of the AI Act. Brussels: European Commission. Updated 25 July 2025. Accessed 3 October 2025.

Castelluccia, C., & Le Métayer, D. (2019). Understanding algorithmic decision-making: Opportunities and challenges. European Parliamentary Research Service. Brussels: European Parliament.

Fiore, M. (2024). Attempts to stop “Fake News” may threaten satire. Stanford Journalism Fellowships, Stanford University. Published 10 December 2024. Accessed 3 October 2025.

Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. Taylor & Francis.

Hitaj, D., Hitaj, B., & Mancini, L. V. (2019). Evasion attacks against watermarking techniques found in MLaaS systems. 2019 Sixth International Conference on Software Defined Systems (SDS), Rome, Italy, pp. 55–63.

Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38.

Kim, T. W., & Routledge, B. R. (2022). Algorithmic transparency, informed consent, and the right to explanation. Business Ethics Quarterly. First published online 5 May 2021. Cambridge University Press.

Korte, L. (2024). Creator of Kamala Harris parody video sues California over election ‘deepfake’ ban. Politico, 18 September 2024. Accessed 3 October 2025.

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute.

Li, G., & Chen, J. (2024). From Brussels Effect to gravity assists: Understanding the evolution of the GDPR-inspired Personal Information Protection Law in China. Computer Law & Security Review, 54, 105048. Elsevier.

Li, G., Chen, Y., Zhang, J., Guo, S., Qiu, H., Wang, G., Li, J., & Zhang, T. (2023). Warfare: Breaking the watermark protection of AI-generated content. arXiv:2310.07726.

Milmo, D. (2024). Elon Musk accused of spreading lies over doctored Kamala Harris video. The Guardian, 29 July 2024. Accessed 3 October 2025.

Murphy, R. (2025). Mapping the Brussels Effect: The GDPR Goes Global. Center for European Policy Analysis (CEPA). Updated 7 August 2025. Accessed 3 October 2025.

Zeff, M. (2024). Judge blocks California’s new AI law in case over Kamala Harris deepfake. TechCrunch, 2 October 2024. Accessed 3 October 2025.

Wheeler, T. (2023). The three challenges of AI regulation. Brookings Institution, 15 June 2023. Accessed 3 October 2025.

About the Author

Martti Asikainen

Communications Lead
Finnish AI Region
+358 44 920 7374
martti.asikainen@haaga-helia.fi
