Companies face new reputational risks as artificial intelligence systems can be systematically manipulated to spread misinformation and propaganda. You can read the complete article with more information and detailed recommendations by clicking here.
Martti Asikainen, 1.6.2025
Businesses embracing artificial intelligence to streamline customer service and internal operations are facing an emerging and deeply concerning threat: the infiltration of AI language models by foreign propaganda and malicious actors.
A recent investigation by the Nordic fact-checking network EDMO NORDIS has exposed how Russian disinformation, disseminated through the Pravda network, is seeping into widely-used AI systems.
These large language models (LLMs)—the backbone of chatbots, virtual assistants and analytic tools—are susceptible to a technique dubbed LLM grooming, in which external actors subtly train or bias the models through the vast textual data they ingest.
According to FAIR’s expert Martti Asikainen, a member of the SOMA network (Social Observatory for Disinformation and Social Media Analysis), and a former fact-checker at the award-winning Faktabaari, the ramifications for businesses could be profound.
“Imagine your customer support chatbot suddenly making bizarre claims about your products or parroting fringe political propaganda,” Asikainen said. “For many businesses, that’s not just embarrassing—it’s catastrophic.”
While foreign state actors remain a central concern, Asikainen warns that competitors could just as easily exploit the same tactics.
“A rival company could manipulate AI models to recommend their own products, or worse, to discredit yours,” he noted. “It’s a form of corporate sabotage that operates beneath the radar.”
With OpenAI’s ChatGPT alone boasting nearly 800 million monthly users, the potential scale of manipulation—and damage—is difficult to overstate. Asikainen identifies three core business risks stemming from manipulated language models:
Brand Damage: A compromised AI assistant repeating false medical claims or conspiracy theories could ignite a PR crisis overnight, particularly in sensitive industries like healthcare or finance.
Product Reliability: Groomed AI tools may deliver erroneous or even dangerous responses. Some manipulated systems could contain latent triggers—behaving normally until prompted by a specific keyword or query.
Distorted Decision-Making: With many firms using AI for market analysis and strategic planning, even slight data contamination could lead to misguided investments or policy decisions.
“Imagine a CEO greenlighting a new product based on AI-generated insight that was subtly biased to favour a competitor,” Asikainen said. “That’s no longer hypothetical—it’s plausible.”
Research by NewsGuard found that more than one-third of responses from certain AI models contained misleading pro-Russian narratives. These narratives originate from the Pravda network, a Kremlin-aligned ecosystem of 182 domains in 74 countries, publishing in a dozen languages.
With an estimated 3.6 million propaganda articles annually, the network’s reach extends far beyond direct readership. These texts often find their way into the training datasets used by commercial AI models via scraping and indexing engines.
To counter this growing threat, Asikainen advises businesses to adopt a more security-conscious approach to AI implementation. He outlines five key defensive measures, which are data hygiene, source restriction, ongoing testing, agile updating and staff training.
“Vigilance is critical,” Asikainen said. “If an AI tool begins generating biased or implausible suggestions, staff should be empowered to question it, not blindly follow it.”
Asikainen’s warning extends beyond corporate firewalls. He argues that the capacity to manipulate AI systems at scale introduces a new era of information warfare—one that blurs the line between algorithm and adversary.
“This isn’t just a technical vulnerability—it’s a societal one,” he cautioned. “If AI systems can be manipulated like people—but faster, and en masse—we’re entering uncharted territory in how ideas spread and public opinion is shaped.”
For companies, the message is clear: the AI tools that promise efficiency and innovation must be deployed with the same caution and rigour as any other security-critical system. In the wrong hands—or trained on the wrong data—they may become liabilities instead of assets
Dive deeper into this topic—read the full article here.
Finnish AI Region
2022-2025.
Media contacts