The clock is ticking: Europe's AI safety rules are about to get real

As new EU regulations take effect this August, companies deploying artificial intelligence face a stark choice: comply or pay the price.

Text: Martti Asikainen, 6.8.2025. Photo: Adobe Stock Photos

The buzzer has sounded for artificial intelligence companies operating in Europe. On 2 August, a critical phase of the EU’s groundbreaking AI Act came into force, bringing with it some of the world’s strictest requirements for managing AI-related incidents.

For companies that have grown accustomed to the relatively light-touch regulatory environment that has characterised the AI boom, the new rules represent a fundamental shift. The European Union isn’t just asking nicely anymore – it’s demanding that AI providers implement robust systems for detecting, reporting, and responding to serious incidents within 72 hours of becoming aware of them.

According to industry analysis, the EU AI Act represents more than routine regulatory compliance – it signals that organisations need to fundamentally rethink how they manage risk in an AI-powered world.

The stakes are substantial. Under the Act, companies that fail to comply with obligations such as serious incident reporting face fines of up to €15 million or 3 per cent of total worldwide annual turnover, whichever is higher.

The 72-hour ultimatum

At the heart of the new requirements lies Article 73 of the AI Act, which requires that serious incidents and malfunctions be reported within seventy-two hours of the provider becoming aware of them.

But the regulatory timeline presents a significant operational challenge: the 72-hour window means companies must capture accurate, report-ready information while the incident response is still under way.

For companies used to dealing with technical glitches internally before deciding whether to escalate, this represents a significant shift in operational approach.

The implications extend far beyond simple notification. Companies must track, document and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay. 

This means having systems in place that can simultaneously manage an ongoing crisis while generating the documentation that regulators will scrutinise.

Operating in the EU

The August deadline specifically targets providers of general-purpose AI models – think ChatGPT, Claude, or similar large language models. But the broader AI Act casts a much wider net. 

Any serious incident triggered (directly or indirectly) by an AI system that affects a critical entity’s infrastructure in the domains of energy, transport, health, drinking water, wastewater or space must be reported to the competent authority.

This means that a hospital using AI for diagnostic imaging, a transport company deploying autonomous vehicles, or an energy firm using AI for grid management could all find themselves subject to these reporting requirements under the Act’s broad definition of high-risk AI systems.

The regulation applies regardless of where companies are headquartered, as long as they operate in the EU market.

The compliance challenge

Meeting these requirements demands substantial changes to how companies operate. According to compliance analysis, organisations must implement automatic event logging systems that create tamper-proof records of AI system behaviour. 
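
To make the logging requirement concrete, here is a minimal sketch of a tamper-evident event log, assuming a simple hash-chain design. The `AuditLog` class and its field names are hypothetical illustrations, not a reference implementation of any Article 73 requirement; a production system would also need durable storage, access controls and synchronised clocks.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry embeds a hash of its
    predecessor, so any later modification breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": time.time(),  # when the AI system event was logged
            "event": event,            # e.g. model ID, input summary, anomaly flags
            "prev_hash": prev_hash,    # link to the previous entry
        }
        # Hash the canonical JSON form of the record, including the link
        # to the previous entry, so that later edits are detectable.
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

In a real deployment, the latest hash would also be anchored somewhere the logging system cannot overwrite, so that even wholesale replacement of the log can be detected.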

They need cross-functional incident response teams capable of coordinating legal, technical, and communications responses simultaneously. The regulation also requires automated post-incident reporting that documents impact assessments, corrective measures taken, and affected parties – all while managing an active crisis situation.
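
The deadline arithmetic itself is simple, but encoding it directly in the incident workflow removes ambiguity during a crisis. The sketch below assumes, as described above, that the 72-hour clock starts when the provider becomes aware of the incident; the `SeriousIncident` structure and its fields are hypothetical, not taken from the regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # the window described in this article

@dataclass
class SeriousIncident:
    incident_id: str
    description: str
    affected_parties: list[str]
    corrective_measures: list[str] = field(default_factory=list)
    # The clock starts when the provider becomes aware of the incident.
    aware_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def reporting_deadline(self) -> datetime:
        return self.aware_at + REPORTING_WINDOW

    def time_remaining(self) -> timedelta:
        return self.reporting_deadline - datetime.now(timezone.utc)

# Example: a hypothetical diagnostic-imaging incident at a hospital deployer
incident = SeriousIncident(
    incident_id="INC-2025-001",
    description="Diagnostic model produced unsafe recommendations",
    affected_parties=["hospital deployer", "patients"],
)
print(f"Report due by {incident.reporting_deadline.isoformat()}")
```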

The parallels with existing regulations like GDPR are notable, as the AI Act borrows from established frameworks. Companies that have successfully navigated GDPR compliance may find themselves better positioned, though the technical complexity of AI incidents introduces new challenges.

A new era for AI governance

The EU AI Act represents a significant regulatory development in artificial intelligence governance. It reflects Europe’s approach to shaping AI development standards, following patterns established with regulations like GDPR. With the August deadline now in force, companies operating in the AI space must implement robust incident response capabilities to meet the new requirements.

The clock is ticking, and for AI companies operating in Europe, time is running out to get their houses in order.
