Columbia University

Project Outline: AI Wars: Disinformation and Journalism in the Russo-Ukrainian War

Supervised By: Timothy M. Frye, Marshall D. Shulman Professor of Post-Soviet Foreign Policy

Project Background

As a Ukrainian-American student with a background in journalism and personal ties to both the technological and Eastern European spheres, I have developed a deep understanding of how disinformation campaigns threaten not only Ukraine’s sovereignty but also democratic institutions and the free press around the world. The Russo-Ukrainian War has marked a turning point, not just militarily but informationally: it is the first major conflict in which generative AI tools have been actively used by a foreign adversary to interfere in public discourse during wartime. Recent reports, including those from OpenAI and the U.S. Justice Department, document the Kremlin’s deployment of AI-generated content in attempts to manipulate American public opinion and distort narratives around Ukraine. In election years especially, as AI tools become more accessible and scalable, the danger of disinformation campaigns ballooning into systemic threats grows more urgent. This project investigates how the Russian government uses artificial intelligence to spread disinformation about the Russo-Ukrainian War in the United States, focusing on its impact on journalism, public discourse, and democratic integrity.

Objectives

  • To investigate how the Kremlin uses AI to target and manipulate American media ecosystems, especially around narratives of the Russo-Ukrainian War and U.S. elections.

  • To identify how AI-generated content is prioritized, suppressed, or flagged by major search and social media platforms.

  • To explore possible countermeasures, including content moderation systems and fact-checking tools, and assess their efficacy in containing Russian disinformation.

Research Questions 

  1. How has Russia used AI-generated content to target American audiences with disinformation about the Russo-Ukrainian War?

  2. To what extent do ranking and recommendation algorithms on platforms such as Google, Twitter/X, and YouTube amplify or suppress Kremlin-aligned narratives?

  3. What are the limitations of current fact-checking and AI-content detection tools in identifying and curbing this disinformation?



Methodology 

This project will combine digital media analysis, internet-based archival research, and interviews with key informants to understand how Russia uses artificial intelligence to shape public opinion in the U.S. through disinformation.

  1. Media Monitoring & Internet Research
    I will systematically track how narratives related to the Russo-Ukrainian War surface across search engines and social media platforms, using publicly available tools and keyword searches. By collecting and analyzing these results over time, I aim to uncover patterns in how AI-generated content is ranked, shared, or suppressed (a minimal illustration of this kind of longitudinal tracking appears in the first sketch following this list). I will compare these trends against data from fact-checking sites and journalistic watchdogs such as EUvsDisinfo and Bellingcat.
  2. Archival & Open-Source Investigation
    I will use digital archives, government statements, and OSINT (open-source intelligence) databases to trace the evolution of Russian AI use in disinformation campaigns. This includes examining known bot networks, AI-generated articles, and deepfakes that have been publicly documented in connection with the war (the second sketch following this list shows one way to retrieve dated captures of such material from public web archives).
  3. Interviews & Institutional Outreach
    To supplement digital findings with institutional knowledge, I will contact the offices of Attorneys General in states that have passed legislation related to AI and elections. These conversations will explore how U.S. authorities are responding to foreign AI-enabled propaganda. Where possible, I will also seek to interview journalists, digital security experts, or policy professionals working on counter-disinformation.
  4. Human Subjects & Ethical Considerations
    All interviews will follow ethical guidelines, with informed consent obtained in advance. If necessary, I will file for exemption or approval through Columbia’s IRB, especially if engaging with individuals directly affected by disinformation campaigns. The goal is not only to document how AI is being misused but also to amplify the voices of those resisting its weaponization.
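
To make the media-monitoring step concrete, below is a minimal sketch of how relative search interest in war-related keywords could be snapshotted over time. It assumes the third-party pytrends library (an unofficial Google Trends client) and pandas; the keyword list, geography, and timeframe are illustrative placeholders rather than the study’s actual instrument.

```python
# Minimal sketch of longitudinal keyword tracking, assuming the
# third-party pytrends library (an unofficial Google Trends client).
# Keywords, geography, and timeframe are illustrative placeholders.
import datetime

from pytrends.request import TrendReq

# Illustrative war-related queries; the real study would use a
# curated, documented keyword list.
KEYWORDS = ["Russo-Ukrainian War", "Ukraine aid", "Zelensky deepfake"]

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(kw_list=KEYWORDS, geo="US", timeframe="today 3-m")

# interest_over_time() returns a pandas DataFrame of relative search
# interest per keyword; snapshotting it on a schedule builds the
# time series this methodology calls for.
df = pytrends.interest_over_time()
stamp = datetime.date.today().isoformat()
df.to_csv(f"trends_snapshot_{stamp}.csv")
print(df.tail())
```

Run on a daily or weekly schedule, the accumulated snapshots would form the time series against which findings from EUvsDisinfo and Bellingcat can be compared.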
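
For the archival step, one publicly documented entry point is the Internet Archive’s Wayback Machine availability API, which returns the capture of a page closest to a given date. The sketch below assumes the third-party requests library; the queried domain is a placeholder, not a finding of this project.

```python
# Minimal sketch of querying the Wayback Machine's public
# "availability" API for the archived capture of a page closest
# to a given date. Assumes the third-party requests library.
import requests

def closest_snapshot(url: str, timestamp: str) -> str | None:
    """Return the Wayback Machine capture closest to `timestamp`
    (YYYYMMDD format), or None if the page was never archived."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=30,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Placeholder example: how did a given page look around the
# February 2022 full-scale invasion?
print(closest_snapshot("example.com", "20220224"))
```

Dated captures retrieved this way make it possible to document when a bot-network article or deepfake first appeared and how it changed over time, which is central to the open-source investigation described above.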