What SMEs Need to Know About the AI Act: Key Responsibilities and Obligations

The new AI regulation (AI Act), which came into effect in August 2024, aims to ensure the reliability of AI systems used within the EU and safeguard fundamental rights. This regulation adopts a risk-based approach, imposing obligations primarily on companies that provide high-risk AI systems.

Juuli Venkula 10.9.2024


AI is a rapidly advancing technology that presents businesses with numerous opportunities while introducing new regulatory challenges. The EU’s long-awaited AI Act (2024/1689) came into force on August 1, 2024, and will apply gradually over the two years following its entry into force, with certain exceptions. It is the world’s first comprehensive regulatory framework for AI systems, designed to ensure that AI is safe, transparent, and respectful of fundamental rights. The Act also seeks to create a harmonized internal market for AI within the EU, promote the adoption of AI technologies, and establish an environment that supports innovation and investment (European Commission, 2024).

It is crucial for small and medium-sized enterprises (SMEs) to understand the practical implications of this regulation and how it will affect the use of AI in their business operations. With responsible AI practices, companies can position themselves as industry leaders and differentiate themselves from competitors. This article outlines the key aspects of the AI Act that SMEs need to be aware of and offers guidance on how to prepare for the changes it will bring.

Company Obligations are Based on Risk Classification and Role

The AI Act classifies AI systems into four risk categories based on their intended use: prohibited, high-risk, specific transparency risk, and minimal risk. In addition, the Act lays down separate rules for general-purpose AI models. Each category comes with a different level of obligations that companies must comply with.

SMEs need to accurately identify which category their AI systems fall into to ensure they meet regulatory requirements. 

Below is a summary of the risk classifications under the AI Act, followed by a short illustrative sketch of how an SME might inventory its systems by category.

  1. Prohibited Risk. The AI Act bans certain AI systems outright. These include applications that manipulate users subliminally or use biometric identification without sufficient legal justification. SMEs must ensure that such systems are not part of their operations.

  2. High-Risk. High-risk AI systems are at the core of the AI Act. This category includes AI systems used in areas such as recruitment, education, or creditworthiness assessment. Stricter requirements apply to high-risk systems, including a robust risk management system, detailed technical documentation, and the keeping of log data. SMEs that develop or provide high-risk AI systems must be particularly careful, as non-compliance can result in significant fines.

  3. Specific Transparency Risk. AI systems that pose a specific transparency risk must inform users that they are interacting with an AI system. Enhancing transparency, particularly in consumer-facing applications, helps users recognize AI-generated content. Examples include chatbots and AI systems that generate content.

  4. Minimal Risk. AI applications in the minimal-risk category are not subject to specific obligations under the AI Act because they pose little risk to human safety or fundamental rights. Nevertheless, companies can voluntarily adopt codes of conduct to increase transparency. Examples of minimal-risk AI systems include spam filters and AI-based recommendation engines.

  5. General Purpose AI Models. The regulation also addresses general-purpose AI models, which are increasingly used as components in various AI applications. Key obligations include ensuring transparency throughout the value chain, such as clearly labelling AI-generated content and deepfake materials, and addressing potential systemic risks associated with the most advanced models (European Commission, 2024).
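To make this classification work concrete, the sketch below shows one way an SME might keep a simple internal inventory of its AI systems, recording the assumed risk category and the company’s role for each. This is a minimal illustration in Python; the system names and their classifications are hypothetical examples, not determinations under the Act.

```python
# Minimal sketch of an internal AI-system inventory for AI Act preparation.
# All systems listed are hypothetical; classifying a real system requires
# a case-by-case legal assessment.
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "specific transparency risk"
    MINIMAL = "minimal risk"


class Role(Enum):
    PROVIDER = "provider"   # develops the system and places it on the market
    DEPLOYER = "deployer"   # uses the system under its own authority


@dataclass
class AISystem:
    name: str
    purpose: str
    category: RiskCategory
    role: Role


inventory = [
    AISystem("cv-screener", "ranks job applicants", RiskCategory.HIGH_RISK, Role.DEPLOYER),
    AISystem("support-chatbot", "answers customer questions", RiskCategory.TRANSPARENCY, Role.PROVIDER),
    AISystem("spam-filter", "filters incoming email", RiskCategory.MINIMAL, Role.DEPLOYER),
]

# Flag the systems that carry the heaviest obligations under the Act.
for system in inventory:
    if system.category in (RiskCategory.PROHIBITED, RiskCategory.HIGH_RISK):
        print(f"Review required: {system.name} ({system.category.value}, {system.role.value})")
```

Keeping such an inventory up to date, including systems acquired from partners, makes it easier to spot when a planned use case would shift a system into a stricter category.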

Be mindful of your role

In addition to the risk classification, companies must also consider their role within the AI ecosystem: whether they are providers, deployers, or other stakeholders. According to Article 3 of the AI Act, a provider is an entity that develops an AI system and places it on the market under its own name or trademark. A deployer, on the other hand, is an entity that uses an AI system under its authority, excluding non-professional use by individuals. The obligations outlined in the AI Act vary depending on the role, with providers generally subject to stricter requirements than deployers.

By understanding these classifications and roles, SMEs can better navigate the regulatory landscape and ensure that their use of AI aligns with the new standards.

Company Responsibilities and Obligations in Practice

Most of the AI Act’s obligations will take effect on August 2, 2026. However, the bans on prohibited AI systems apply six months after the regulation’s entry into force, from February 2, 2025, and the rules on general-purpose AI models apply after 12 months, from August 2, 2025. Despite this phased timeline, companies should begin preparing for the new requirements now.

A study by the Finnish Ministry of Economic Affairs and Employment (TEM, 2023) found that many Finnish companies consider the regulation’s complexity and unpredictable impacts challenging. The AI Act’s combination of Article 113, Recital 180, and Annex 15 is demanding reading even for legal experts, underscoring the need for support services and guidance.


For most SMEs, the use of AI systems falls into the categories of specific transparency risk or minimal risk. The Initiative for Applied Artificial Intelligence (2023) estimates that about 42% of companies fall into the minimal-risk category, meaning that many businesses using AI will mainly need to enhance transparency in their operations. Meanwhile, approximately 18% of companies are expected to fall into the high-risk category, though around 40% are uncertain about their risk classification (Initiative for Applied Artificial Intelligence, 2023). The classification can be complex and is not always clear-cut due to potential exceptions. The European Commission is currently preparing guidelines to clarify the criteria for high-risk classification (European Commission, 2024).
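For the specific transparency risk category, the core obligation is straightforward to demonstrate: tell the user they are interacting with a machine. Below is a minimal sketch, in Python, of a chatbot session that opens with an explicit AI disclosure. The wording and the placeholder reply are illustrative only; the Act does not prescribe exact phrasing.

```python
# Minimal sketch of the transparency obligation for a chatbot: the user is
# told up front that they are interacting with an AI system. The disclosure
# wording below is illustrative; the AI Act does not prescribe exact phrasing.

AI_DISCLOSURE = "Please note: you are chatting with an AI assistant, not a human."


def start_chat_session() -> None:
    """Run a chat loop that begins with an explicit AI disclosure."""
    print(AI_DISCLOSURE)
    while True:
        user_message = input("You: ")
        if user_message.lower() in {"quit", "exit"}:
            break
        # Placeholder for the actual model call in a real integration.
        print("Assistant: (AI-generated reply would appear here)")


if __name__ == "__main__":
    start_chat_session()
```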

Certain industries and specific regulations may place AI systems in the high-risk category. For SMEs providing high-risk AI systems, the AI Act introduces new and significant obligations. Providers must ensure full compliance with the AI Act’s requirements before placing a high-risk AI system on the market or putting it into use.

Other obligations for high-risk AI system providers include establishing a risk management system and data governance practices, preparing comprehensive technical documentation, record-keeping, transparency measures and user information, enabling human oversight, and ensuring accuracy, robustness, and cybersecurity. Additionally, Article 16 outlines further obligations for providers, such as registering the system in the EU’s public database.

The obligations for deployers of high-risk AI systems are specified in Article 26 and include, for example, following the provider’s instructions for use, informing users of risks, and retaining log data for a specified period. In practice, this may require companies providing and deploying high-risk AI systems to update their technical and administrative processes, such as conducting technical assessments, revising documentation, and performing safety tests. National market surveillance authorities will monitor compliance throughout the AI system’s lifecycle.
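As one concrete example of the record-keeping side of these obligations, the sketch below shows a minimal way to write structured, timestamped log entries for AI-assisted decisions and to check them against a retention period. It is an illustration only: the retention constant is a placeholder, and the actual period and log contents must follow the AI Act and any sector-specific rules that apply to the company.

```python
# Minimal sketch of structured, timestamped logging for a high-risk AI
# system, with a retention check. RETENTION_DAYS is a placeholder; the
# real period must follow the AI Act and any sector-specific rules.
import json
import logging
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 183  # illustrative placeholder only

logger = logging.getLogger("ai_system_audit")
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")


def log_decision(system_name: str, input_summary: str, output_summary: str) -> None:
    """Record one AI-assisted decision as a timestamped JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "input": input_summary,
        "output": output_summary,
    }
    logger.info(json.dumps(record))


def is_expired(record: dict) -> bool:
    """Check whether a log record is past the retention period."""
    logged_at = datetime.fromisoformat(record["timestamp"])
    return datetime.now(timezone.utc) - logged_at > timedelta(days=RETENTION_DAYS)


log_decision("cv-screener", "applicant profile summary", "shortlisted")
```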

Beyond preparing for these obligations, it is essential to educate all employees on the risks and responsibilities associated with AI use. Proper training ensures that the company has the necessary expertise to utilize AI safely and ethically. Companies should develop internal guidelines for employees on the responsible use of AI systems. To fully and safely harness the opportunities AI offers, SMEs must invest in continuous AI skills development.

Complying with the AI Act’s requirements will demand adequate knowledge and resources from companies. The regulation includes provisions specifically designed to support SMEs, such as Article 62, which prioritizes access to regulatory sandboxes that serve as testing environments. Collaborating with higher education institutions can also be an effective way to develop the necessary expertise and receive support in implementing the regulations. Further guidance from the European Commission and national authorities is expected, so it is crucial for companies to stay informed about these developments.

Preparation

To summarize the steps for SMEs to prepare for the AI Act, I recommend the following actions:

  • Understand Risk Classification: Identify and inventory which risk category your AI systems fall into. Ensure they comply with the regulation’s requirements. Also, consider future AI systems and those used by partners, and update risk levels as needed.
  • Prepare for Regulation in Advance: For high-risk applications, companies must prepare for stricter regulations, such as increased monitoring and documentation. Even for AI systems with specific transparency or minimal risk, focus on ensuring transparency. Early preparation helps avoid penalties, fosters responsible AI solutions, and facilitates compliance.
  • Understand Your Role: Are you an AI system provider or a deployer? Most obligations apply only to providers.
  • Invest in Employee Training and AI Expertise: Educate your staff, develop internal guidelines, and continuously enhance your company’s AI expertise.
  • Stay Informed on Developments: SMEs should be proactive and prepare for stricter AI regulation, including at the national level in Finland. The AI Act will also prompt a review and reassessment of domestic legislation, so staying informed about developments is crucial. Additional guidance from the Commission is also expected.

By following these steps, you can leverage AI’s opportunities safely and effectively. When done correctly, AI can provide significant competitive advantages and create new business opportunities for companies.

List of references

European Commission 2024. Artificial Intelligence – Questions and Answers. Available: https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683. Accessed: 9.9.2024.

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

Initiative for Applied Artificial Intelligence 2023. AI Act: Risk Classification of AI Systems from a Practical Perspective. A study to identify uncertainties of AI users based on the risk classification of more than 100 AI systems in enterprise functions. Available: https://www.appliedai.de/assets/files/AI-Act_WhitePaper_final_CMYK_ENG.pdf. Accessed: 9.9.2024.

TEM 2023. EU:n tekoälyasetusehdotuksen vaikutukset suomalaisyritysten liiketoimintaympäristöön. Julkaisuja 2023:46. Available: https://urn.fi/URN:ISBN:978-952-327-613-0.

This writing is part of the FAIR project publications. Finnish AI Region (FAIR) offers low-threshold expertise to companies in the fields of artificial intelligence, augmented reality, high-performance computing, and cybersecurity. FAIR, which provides free services, aims to accelerate and expand the adoption of artificial intelligence in small and medium-sized enterprises.

