Artificial intelligence (AI) is rapidly transforming the information space, reshaping how fabricated content is created and spread. While large language models (LLMs), image generators, and other generative tools can unlock creativity and accelerate work processes, they can also be used to create false or misleading narratives meant to manipulate public opinion.
This project seeks to deepen understanding of how AI can be weaponized for misinformation and disinformation campaigns, the vulnerabilities this creates for both social cohesion and state security, how adversarial states might exploit those vulnerabilities, and how states can, and should, enhance cooperation to mitigate these growing risks.
This research is generously sponsored by the Korea Foundation.