As the role of artificial intelligence grows, so do the cybersecurity risks, which can affect not only small and medium-sized enterprises (SMEs) but also large companies. The Artificial Intelligence and Cybersecurity in SMEs event discussed practical solutions and approaches with experts in the field to navigate these challenges in the AI era.
Anna Lunden, 13.2.2025
The event, organized by FAIR, HealthHub Finland, Location Innovation Hub, and Robocoast EDIH, in collaboration with EEN and the Helsinki Chamber of Commerce, brought together 100 entrepreneurs and experts on February 13th to discuss how to manage and anticipate the risks posed by AI from a cybersecurity perspective. When dealing with new technologies, it is crucial to understand the risks associated with these changes and to take responsibility for training employees in both small and medium-sized enterprises. Effective AI solutions always require attention to both the technical core and process changes, as well as security and regulatory requirements.
“We are especially pleased to bring in some of the best experts in the field to provide SMEs with practical insights into the intersection of AI and cybersecurity – through solutions, risk management, current and upcoming threats, and the necessary skills. It is also great that we were able to share information about free or subsidized cybersecurity services from Finnish EDIHs available to SMEs,” says Jussi Rantsi, responsible for FAIR’s ecosystem.
Under the moderation of communication specialist Peter Nyman, the event speakers emphasized an important message: AI usage is not just a technological challenge, it also requires individual responsibility and management commitment – the greatest cybersecurity and technology risk is always the human factor. The first speaker, Jani Voutilainen, a cybersecurity expert from Gofore, reminded the audience that while AI brings enormous advantages to business, it is not immune to human errors. Developing and using AI systems requires careful planning and the assessment of cybersecurity risks. Whether it’s the implementation of a new language model or data collected and produced with AI, it is crucial to understand how AI has been used in creating new tools and how it can be restricted if needed.
The reactions of AI models can also become problematic if they attempt to please the user too much, as seen in some language models. For example, Voutilainen introduced the concept of “prompt injection,” which could lead to AI shaping the user’s actions rather than the other way around.
Management is responsible for ensuring that employees receive the necessary training in AI usage. Mikko Kiviharju, Professor of Work Life at Aalto University, emphasized that an AI strategy is a mandatory part of a company’s risk management. He also reminded the audience that EU AI regulations apply to all companies offering AI-based services, though there are differences in guidance for small businesses. It is important to understand that these regulations are designed to protect businesses from new, unforeseen risks that are likely to emerge as AI use grows.
A key question raised was the conflict between new innovations and cybersecurity – for businesses, AI is most effective when it allows the testing and trialing of entirely new operational models. But how can one remain innovative while also considering cybersecurity risks?
The key is to understand the risks posed by AI usage in advance.
Juhani Eronen, Lead Specialist at Traficom, emphasized in his speech that despite their efficiency, new AI innovations can also contain significant vulnerabilities. The use of AI increases cybersecurity risks because the development is moving faster, and automation adds to the vulnerabilities. Eronen also reminded that cyberattacks are increasingly leveraging AI and user behavior recognition. In this situation, companies are required to make clear decisions regarding cybersecurity – if a company is unable to handle security internally, it must be outsourced. The key is to understand the risks posed by AI usage in advance and to be transparent in the face of potential issues and the difficult decisions they may require.
The event was concluded by keynote speaker Samuel Marchal, Research Team Leader at VTT’s Cybersecurity Engineering & Automation group, who discussed the security of AI supply chains. Marchal pointed out that while security solutions are increasingly automated, errors can still occur when security decisions and assessments are made by probability-based systems. In many cases, AI is one of the best ways to enhance business operations, provided businesses are aware of the vulnerabilities in their operational models and the potential for cyberattacks throughout the chain, from the data used for training models to the pre-built components used in AI solutions. Marchal also reminded the participants of the importance of long-term and diversified testing of new models.
Errors can still occur when security decisions and assessments are made by probability-based systems.
Overall, expert presentations and discussions throughout the day emphasized the importance of training and interaction. Each company uses AI in its own way, which is why learning experiences and insights vary. Networks are crucial for sharing these ideas, and therefore the event also dedicated time for companies to discuss the possibilities and services for analyzing digital maturity and cybersecurity with the four EDIHs, EEN, and the Helsinki Chamber of Commerce.
“In terms of cybersecurity, it is important to stay vigilant when using AI, but with a systematic approach and proactive risk management, even SMEs can keep it under control. And, of course, the tools are constantly evolving to better support AI-driven solutions,” summarizes Jussi Rantsi after the event.
Finnish AI Region
2022-2025.
Media contacts