Generative AI continues to blur the line between synthetic and authentic content. While many people have grown skeptical of what they see online, the emerging challenge lies in distinguishing real from AI-generated audio. From robocalls and voice phishing to fabricated ransom demands, the misuse of AI-manipulated speech is on the rise. Tools to detect manipulated audio exist in the Global North, but these solutions are often trained on narrow linguistic datasets that fail to reflect the language diversity of the global majority. Given the widespread deployment of AI-manipulated audio and deepfake technology across parts of the Global South, it is crucial to build protections that sustain resilience for communities in the region.
Join the Strategic Foresight Hub for a timely conversation with Responsible AI Fellow Pamposh Raina as we examine critical gaps in audio detection systems, explore how they can be strengthened, and consider what more equitable AI safeguards should look like going forward.
Featured Speakers

Pamposh Raina, Head of Deepfakes Analysis Unit, Trusted Information Alliance
Pamposh Raina leads the Deepfakes Analysis Unit (DAU) of the Trusted Information Alliance, a cross-sector alliance in India. The DAU is a collaborative project that focuses on countering AI-generated misinformation.

Matt Masterson, Senior Director, Elections and Societal Resilience, Microsoft
Matt Masterson is a Senior Director with the Tech for Society team at Microsoft. Previously, he served as a non-resident fellow at Stanford University's Internet Observatory. Prior to that, he was Senior Cybersecurity Advisor at DHS/CISA, where he focused on election security.

