Finland's Statistics Agency Fights Back Against Flawed AI Search Results

Statistics Finland develops a database interpreter to combat inaccuracies as Google’s AI summaries contribute to a 17% drop in website visits.

Text by Martti Asikainen, 2.12.2025 | Photo: Adobe Stock Photos

Image: a mobile phone screen showing icons of different LLM apps

Finland’s official statistics agency has launched an initiative to tackle a growing problem: artificial intelligence is providing inaccurate information about statistical data, and it’s costing them visitors.

Statistics Finland reported a 17 per cent decline in website visits via Google searches during the first 11 months of this year compared to the same period last year, a drop the agency attributes to Google’s AI-generated search summaries keeping users from seeking more detailed information.

“We’ve noticed that the information provided by search assistants is often incorrect,” Director General Ville Vertanen told Yle News.

The problem stems from how AI systems retrieve information. Rather than accessing statistics databases directly, Google’s AI compiles answers from data previously sourced from Statistics Finland’s website or elsewhere, potentially serving outdated or inaccurate information to users who need current data.

We’re talking about a global problem

Vertanen told Yle News that the issue extends beyond Finland, with AI-generated inaccuracies becoming a heated topic amongst statisticians worldwide.

The concern is well-founded. A recent study coordinated by the European Broadcasting Union and led by the BBC found that 45 per cent of AI assistant responses had at least one significant issue, including hallucinated details and outdated information.

The international research, which involved 22 public service media organisations in 18 countries working in 14 languages, evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini and Perplexity. Professional journalists assessed the responses against key criteria including accuracy, sourcing and context.

The study identified that 31 per cent of responses showed serious sourcing problems, with missing, misleading or incorrect attributions, whilst 20 per cent contained major accuracy issues.

Building an AI interpreter

To address these challenges, Statistics Finland is developing what Vertanen calls an interpreter for its databases: a system that would teach AI how to read statistics properly.

The proposed solution uses the Model Context Protocol, which connects AI applications to external systems such as statistics databases. The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools.

With this technology, AI could search a database for answers to questions such as “how has inflation developed in Finland over the last three years?”, pulling information directly from authoritative sources rather than relying on potentially outdated web content.
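As a rough sketch of how such an interpreter could work, the example below uses the Model Context Protocol’s official Python SDK (its FastMCP helper) to expose a single statistics-lookup tool to an AI assistant. The server name, the tool and its placeholder return value are illustrative assumptions, not details of Statistics Finland’s actual implementation.

```python
# A minimal, illustrative MCP server sketch, not Statistics Finland's actual design.
# Assumes the official MCP Python SDK (the "mcp" package with its FastMCP helper).
from mcp.server.fastmcp import FastMCP

# The server name is arbitrary; an MCP-aware AI client discovers the tools it exposes.
mcp = FastMCP("statfin-interpreter-demo")

@mcp.tool()
def consumer_price_change(years: int = 3) -> str:
    """Summarise consumer price development in Finland over the given number of years."""
    # A real interpreter would query the statistics database here and return
    # authoritative, up-to-date figures for the assistant to cite.
    # This sketch returns a placeholder instead of live data.
    return f"Placeholder: consumer price index figures for the last {years} years."

if __name__ == "__main__":
    # Runs over stdio by default, so a local MCP-capable assistant can connect
    # and call the tool instead of guessing from cached web pages.
    mcp.run()
```

An assistant connected to a server like this could answer the inflation question by calling the tool and citing fresh figures, rather than repeating whatever copy of the numbers it last saw on the open web.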

Statistics Finland’s database is enormous, and due to its size, finding specific information can be challenging. Vertanen told Yle News that a database interpreter would make it easier for everyone to track down the right data, “whether you’re an official, a citizen, a journalist or a researcher”.

The agency plans to launch a pilot programme for the interpreter next year.

Why AI gets statistics wrong

The fundamental issue lies in how AI systems operate. Large language models predict the next word in a sequence based on statistical patterns learned from their training data, which makes them appear fluent but leaves them prone to making things up. They have no inherent ground truth to rely on.

Research has shown that Google’s AI Overview often repeats outdated answers simply because they represent the most common version found online. The system leans heavily on consensus rather than accuracy, meaning popular but incorrect information can dominate over newer, correct answers.
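As a loose illustration of that consensus effect, the toy Python sketch below, which is not how any real search engine or language model is implemented, simply returns whichever answer appears most often in a small invented collection of web snippets, so a widely repeated but outdated claim beats a newer, correct one.

```python
# Toy illustration of consensus-over-accuracy; the snippets are invented for the example.
from collections import Counter

# Imagine snippets scraped from the web at different times.
web_snippets = [
    "outdated figure",  # repeated many times across old pages
    "outdated figure",
    "outdated figure",
    "current figure",   # correct, but published recently and rarely repeated
]

def most_common_answer(snippets: list[str]) -> str:
    """Return whichever answer occurs most often, regardless of how recent or correct it is."""
    answer, _count = Counter(snippets).most_common(1)[0]
    return answer

print(most_common_answer(web_snippets))  # prints "outdated figure"
```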

Google maintains that its testing shows the accuracy rate of AI Overviews is on par with featured snippets, with inaccurate information presented in only a small number of cases. However, critics argue that even small percentages represent significant risks when millions rely on these summaries for information.

Trust and consequences

The implications extend beyond inconvenient statistics. According to the Reuters Institute’s Digital News Report 2025, 7 per cent of online news consumers use AI assistants to get their news, rising to 15 per cent among under-25s.

“This research conclusively shows that these failings are not isolated incidents,” said EBU Media Director Jean Philip De Tender. “They are systemic, cross-border and multilingual, and we believe this endangers public trust.”

The BBC-led EBU study found that AI assistants “routinely misrepresent news content no matter which language, territory or AI platform is tested”.

Statistics Finland’s initiative represents a broader push amongst data providers and news organisations to ensure AI systems access authoritative information directly rather than relying on potentially flawed aggregations. Whether this approach can scale to address the wider problem of AI inaccuracy remains to be seen.


Finnish AI Region, 2022-2025.