The new AI regulation (AI Act), which came into effect in August 2024, aims to ensure the reliability of AI systems used within the EU and safeguard fundamental rights. This regulation adopts a risk-based approach, imposing obligations primarily on companies that provide high-risk AI systems.
Juuli Venkula 10.9.2024
AI is a rapidly advancing technology that presents businesses with numerous opportunities while introducing new regulatory challenges. The EU’s long-awaited AI Act (2024/1689) finally came into force on August 1, 2024. The AI Act will be implemented gradually within two years of its entry into force, with certain exceptions. It is the world’s first comprehensive regulatory framework for AI systems, designed to ensure that AI is safe, transparent, and respectful of fundamental rights. Additionally, the Act seeks to create a harmonized internal market for AI within the EU, promote the adoption of AI technologies, and establish an environment that supports innovation and investment (European Commission, 2024).
It is crucial for small and medium-sized enterprises (SMEs) to understand the practical implications of this regulation and how it will affect the use of AI in their business operations. With responsible AI practices, companies can position themselves as industry leaders and differentiate themselves from competitors. This article outlines the key aspects of the AI Act that SMEs need to be aware of and offers guidance on how to prepare for the changes it will bring.
The AI Act classifies AI systems into four risk categories based on their intended use: prohibited, high-risk, specific transparency risk, and minimal-risk AI systems. Each category comes with different levels of obligations that companies must comply with.
SMEs need to accurately identify which category their AI systems fall into to ensure they meet regulatory requirements.
In addition to the risk classification, companies must also consider their role within the AI ecosystem—whether they are providers, deployers, or other stakeholders. According to Article 3 of the AI Act, a provider is an entity that develops an AI system and places it on the market under its name or trademark. A deployer, on the other hand, is an entity that uses an AI system under its authority, excluding non-professional use by individuals. The obligations outlined in the AI Act vary depending on the role, with providers generally subject to stricter requirements than deployers.
By understanding these classifications and roles, SMEs can better navigate the regulatory landscape and ensure that their use of AI aligns with the new standards.
Most of the AI Act’s obligations will take effect on August 2, 2026. However, the prohibitions on certain AI practices will be enforced six months after the regulation’s entry into force, and the requirements concerning general-purpose AI models will apply 12 months after. Despite this phased timeline, companies should begin preparing for the new regulations now.
A study by the Finnish Ministry of Economic Affairs and Employment (TEM, 2023) revealed that many Finnish companies find the regulation’s complexity and unpredictable impacts challenging. With its 113 articles, 180 recitals, and numerous annexes, the AI Act is heavy reading even for legal experts, underscoring the need for support services and guidance.
For most SMEs, the use of AI systems falls into the categories of specific transparency risk or minimal risk. The Initiative for Applied Artificial Intelligence (2023) estimates that about 42% of companies fall into the minimal-risk category, meaning that many businesses utilizing AI will need to enhance transparency in their operations. Meanwhile, approximately 18% of companies are expected to fall into the high-risk category, though around 40% are uncertain about their risk classification (Initiative for Applied Artificial Intelligence, 2023). The risk classification can be complex and not always clear-cut due to potential exceptions. The European Commission is currently preparing guidelines to clarify the criteria for high-risk classifications (European Commission, 2024).
Certain industries and specific regulations may place AI systems in the high-risk category. For SMEs providing high-risk AI systems, the AI Act introduces new and significant obligations. Providers must ensure full compliance with the AI Act’s requirements before bringing a high-risk AI system to market or deploying it.
Other obligations for high-risk AI system providers include establishing a risk management system, data governance practices, comprehensive technical documentation, record-keeping, transparency measures, user notifications, human oversight, accuracy, robustness, and cybersecurity safeguards. Additionally, Article 16 outlines further obligations for providers, such as registering in the EU’s public database.
The obligations for deployers of high-risk AI systems are specified in Article 26 and include, for example, following the provider’s instructions for use, informing users of risks, and retaining log data for a specified duration. In practice, this may require companies providing and deploying high-risk AI systems to update their technical and administrative processes, such as conducting technical assessments, revising documentation, and performing safety tests. National market surveillance authorities will monitor compliance throughout the AI system’s lifecycle.
Beyond preparing for these obligations, it is essential to educate all employees on the risks and responsibilities associated with AI use. Proper training ensures that the company has the necessary expertise to utilize AI safely and ethically. Companies should develop internal guidelines for employees on the responsible use of AI systems. To fully and safely harness the opportunities AI offers, SMEs must invest in continuous AI skills development.
Complying with the AI Act’s requirements will demand adequate knowledge and resources from companies. The regulation includes provisions specifically designed to support SMEs, such as Article 62, which prioritizes access to regulatory sandboxes that serve as testing environments. Collaborating with higher education institutions can also be an effective way to develop the necessary expertise and receive support in implementing the regulations. Further guidance from the European Commission and national authorities is expected, so it is crucial for companies to stay informed about these developments.
To summarize the steps for SMEs to prepare for the AI Act, I recommend the following actions:
1. Identify which risk category your AI systems fall into.
2. Determine your role in the AI ecosystem—provider or deployer—and the obligations that come with it.
3. Update your technical and administrative processes, documentation, and safety testing where the requirements apply to you.
4. Train employees on the risks and responsibilities of AI use and develop internal guidelines for responsible use of AI systems.
5. Take advantage of support measures, such as regulatory sandboxes and collaboration with higher education institutions.
6. Follow upcoming guidance from the European Commission and national authorities.
By following these steps, you can leverage AI’s opportunities safely and effectively. When done correctly, AI can provide significant competitive advantages and create new business opportunities for companies.
European Commission 2024. Artificial Intelligence – Questions and Answers. Available: https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683. Accessed: 9.9.2024.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
Initiative for Applied Artificial Intelligence 2023. AI Act: Risk Classification of AI Systems from a Practical Perspective. A study to identify uncertainties of AI users based on the risk classification of more than 100 AI systems in enterprise functions. Available: https://www.appliedai.de/assets/files/AI-Act_WhitePaper_final_CMYK_ENG.pdf. Accessed: 9.9.2024.
TEM 2023. EU:n tekoälyasetusehdotuksen vaikutukset suomalaisyritysten liiketoimintaympäristöön [The impacts of the EU’s proposed AI regulation on the business environment of Finnish companies]. Julkaisuja 2023:46. Available: https://urn.fi/URN:ISBN:978-952-327-613-0.
This writing is part of the FAIR project publications. Finnish AI Region (FAIR) offers low-threshold expertise to companies in the fields of artificial intelligence, augmented reality, high-performance computing, and cybersecurity. FAIR, which provides free services, aims to accelerate and expand the adoption of artificial intelligence in small and medium-sized enterprises.
Finnish AI Region, 2022–2025.