American Sunlight Project. (2025). A Pro-Russia Content Network Foreshadows the Automated Future of Info Ops. American Sunlight Project. Washington, D.C.
Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K. & Tramèr, F. (2023). Poisoning Web-Scale Training Datasets is Practical. arXiv. Cornell University.
D’Alessandro, M.A. (2024). Data Poisoning attacks on Enterprise LLM applications: AI risks, detection, and prevention. Published on Giskard’s website 25 April 2024. Accessed 30 May 2025.
Ennis-O’Connor, M. (2024). The AI Empathy Paradox: Can Machines Understand What They Cannot Feel? Published on Medium 23 December 2024. Accessed 30 May 2025.
Faktabaari (2025). Venäjä on soluttanut propagandaansa tekoälymalleihin pohjoismaisilla kielillä [Russia has infiltrated its propaganda into AI models in the Nordic languages]. Published on Faktabaari’s website 28 May 2025. Accessed 30 May 2025.
Jacob, C., Kerrigan, P. & Bastos, M. (2025). The chat-chamber effect: Trusting the AI hallucination. Big Data & Society, 12(1). Sage Journals.
Sadeghi, M. & Blachez, I. (2025). A Well-funded Moscow-based Global ‘News’ Network has Infected Western Artificial Intelligence Tools Worldwide with Russian Propaganda. Published on NewsGuard’s website 6 March 2025. Accessed 28 May 2025.
Mektrakarn, T. (2025). OWASP Top 10 LLM & Gen AI Vulnerabilities in 2025. Published on Bright Defense’s website 6 May 2025. Accessed 28 May 2025.
Newport, A. & Jankowicz, N. (2025). Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots. Published on the Bulletin of the Atomic Scientists’ website 26 March 2025. Accessed 28 May 2025.
Nolan, B. (2025). Sam Altman says ‘10% of the world now uses our systems a lot’ as Studio Ghibli-style AI images help boost OpenAI signups. Published on Fortune’s website 14 April 2025. Accessed 30 May 2025.
Ovide, S. (2025). You are hardwired to blindly trust AI. Here’s how to fight it. Published on The Washington Post’s website 3 June 2025. Accessed 6 June 2025.
OWASP Foundation. (2025). LLM04:2025 Data and Model Poisoning. Published on OWASP Foundation’s website. Accessed 30 May 2025.
Qiang, Y., Zhou, X., Zade, S.Z., Roshani, M. A., Khanduri, P., Zytko, D. & Zhu, D. (2024). Learning to Poison Large Language Models During Instruction Tuning. arXiv. Cornell University.
Ray, R. & Bhalani, R. (2024). Mitigating Exaggerated Safety in Large Language Models. arXiv. Cornell University.
Si, C., Goyal, N., Wu, S.T., Zhao, C., Feng, S., Daumé III, H. & Boyd-Graber, J. (2023). Large Language Models Help Humans Verify Truthfulness — Except When They Are Convincingly Wrong. arXiv. Cornell University.
Zhou, X., Qiang, Y., Roshani, M. A., Khanduri, P., Zytko, D. & Zhu, D. (2025). Learning to Poison Large Language Models for Downstream Manipulation. arXiv. Cornell University.
Zou, A., Wang, Z., Carlini, N., Nasr, M., Kolter, J.Z. & Fredrikson, M. (2023). Universal and Transferable Adversarial Attacks on Aligned Language Models. arXiv. Cornell University.