EU launches hunt for 60 independent AI experts to support the implementation and enforcement of the AI Act

The European Commission is inviting independent experts to assess the risks of foundation models like GPT and Gemini. Read on to learn more about the initiative and how to apply.


Text: Martti Asikainen, 17.6.2025. Photo: Adobe Stock Photos


The European Commission has launched a sweeping call for experts to join a newly formed Scientific Panel on Artificial Intelligence, tasked with guiding the continent through what it describes as the volatile and uncertain terrain of general-purpose AI (GPAI). 

Sixty independent specialists will be appointed by September to advise the EU’s AI Office on the risks and impacts posed by advanced foundation models—systems like GPT, Gemini and open-source equivalents that are now central to economic and social life but raise profound regulatory challenges.

Established under Article 68 of the EU’s landmark AI Act, the panel will serve as an independent advisory body, supporting the oversight and enforcement of one of the most ambitious AI regulatory frameworks in the world. That alone is reason enough to be excited.

From frontier hype to forensic oversight

The creation of the panel follows the adoption of the AI Act in 2024 and coincides with the law’s staged enforcement, beginning with general-purpose AI models in August 2025. As AI systems increasingly exhibit emergent behaviours and growing autonomy, the Commission is under pressure to ensure that model development remains aligned with fundamental rights, public safety, and democratic values.

“At FAIR, we strongly encourage experts in Finland to apply for this unique opportunity to contribute to the implementation of the EU AI Act. Finland boasts world-class talent in artificial intelligence, and this is a chance for our experts to bring their knowledge, values, and vision to the European stage. By taking part, we not only help shape the future of trustworthy AI in Europe but also ensure that Finland’s voice is heard where it matters most,” says FAIR’s Communications Lead, Martti Asikainen.

The panel’s mandate includes identifying systemic risks across the EU, contributing to benchmarks and tools for model evaluation, classifying high-risk GPAI models, and supporting both national and cross-border enforcement activities. It is expected to provide crucial input in scenarios involving misinformation, cybersecurity threats, biothreat misuse, and potential failures in model alignment or control.

EU officials have emphasised the need for a body that is scientifically rigorous, multidisciplinary, and free from industry influence. According to internal documents, selected members must operate with full independence and must not be employees or consultants of, or otherwise tied to, any AI model provider. Declarations of interest will be published for public scrutiny.

Guarding against unseen risks

The AI Act introduces a novel classification of GPAI systems with “systemic risk,” based on thresholds such as computational scale, generality of use, and possible misuse across domains. These classifications are critical to how models will be regulated, and the panel is expected to play a key role in shaping those determinations.

The areas of expertise sought reflect the complexity of the task: from adversarial testing, watermarking and incident response, to economic forecasting, red teaming, and the study of emergent capabilities such as long-horizon planning or recursive self-improvement.

“This scientific panel is a powerful testament to Europe’s commitment to developing responsible and trustworthy AI. With deep expertise in artificial intelligence and strong foundations in education, ethics, and innovation, Finland is exceptionally well placed to contribute. We would be proud to see Finnish experts helping to shape a European approach that balances technological progress with societal well-being,” says Asikainen.

The Commission’s official call for applications outlines eight specific domains of expertise, including evaluation of GPAI model capabilities, risk assessment methodologies, technical risk mitigations, misuse and deployment systemic risks, cyber offence risks, cybersecurity, emergent systemic risks, and compute measurements and thresholds.

Civil society groups have welcomed the transparency obligations built into the panel’s structure, but some have raised concerns about the speed at which new AI capabilities are emerging. Digital rights advocates have cautioned that effective oversight will require not only technical depth, but also legal and ethical competence across the expert group.

How to apply—and what's at stake

The Commission has implemented stringent measures to ensure independence from industry influence. Candidates “shall be able to demonstrate independence from any provider of AI systems or general-purpose AI models, requiring that, at the time of expressing interest to the scientific panel, and throughout the term of office, the candidate shall not be an employee of, or in a contractual relationship with a provider of an AI system or general-purpose AI model”.

All candidates must complete a detailed declaration of interests covering employment history, consultancy work, research funding, financial investments, intellectual property rights, and even relevant interests of close family members. These declarations will be made publicly available for scrutiny.

Applications are open until 14 September 2025 at 18:00 CET. Candidates must submit a motivation letter, CV (ideally no more than four pages), and a detailed declaration of interests. A PhD or equivalent experience in a relevant field is required, along with demonstrated independence from industry involvement.

Appointments are for a 24-month term, renewable. Experts may be called on to act as rapporteurs, attend hearings, support enforcement efforts, and contribute to EU-level recommendations. Remuneration is available for those assigned to formal outputs or working tasks; travel and subsistence costs will also be covered.

The Commission has pledged transparency throughout: the names, CVs, and conflict-of-interest declarations of selected panel members will be published online, and panel outputs—such as qualified alerts or thematic reports—will be made available unless confidentiality obligations apply.

Europe's regulatory gambit

While the initiative represents a strong commitment to science-based AI governance, observers say its success will depend on whether the panel can act not merely as an advisor but as a proactive force in shaping how AI is evaluated, classified and constrained.

The AI Act’s rules on general-purpose AI take effect in August 2025, making the panel’s establishment particularly urgent. As AI development accelerates, the EU’s approach stands apart in its insistence on enforceable regulation, public interest governance, and long-term societal resilience.

In the coming months, much will rest on whether the right experts step forward—and whether they are empowered to speak truth to power in an industry where commercial interests and public safety concerns increasingly collide.

At a glance: EU Scientific Panel on AI

  • What: 60-member expert panel supporting the implementation of the AI Act
  • Focus: General-purpose AI (GPAI) systemic risks, classification, enforcement
  • Requirements: PhD or equivalent; demonstrated independence from AI providers
  • Deadline: 14 September 2025, 18:00 CET
  • Contact: EU-AI-SCIENTIFIC-PANEL@ec.europa.eu

More information is available on the European Commission’s website.
