Shaping Inclusive AI Governance – Reflections on Paris and Opportunities for the India AI Summit

Rethinking AI action from the perspective of the Global South ahead of the next AI Summit in India

By Branka Panic (Lead Author)  •  Narun Popattanachai  •  Aziz Soltobaev  •  Jibu Elias  •  Ibrahim Sabra

In partnership with Microsoft’s Office of Responsible AI (ORA), the Strategic Foresight Hub at Stimson brought together a diverse group of Global Perspectives Responsible AI Fellows at this year’s Paris AI Action Summit. The Summit was the first in-person convening of the Fellows, who had previously met virtually through Fellowship programming hosted by the Stimson Center. Representing AI landscapes from the Middle East & North Africa to Southeast and Central Asia and Latin America, the participants showcased exactly what the program set out to do, sharing ideas about AI opportunities and challenges with peers from across the globe. The conversations in and around the Summit inspired the Fellows to develop realistic expectations and recommendations for the next AI Summit, slated to take place between November 2025 and January 2026 in India. The camaraderie and collaboration among the Fellows highlight the work at the core of Stimson and Microsoft ORA’s partnership: navigating a complex global technological landscape and harnessing the benefits of emerging technologies like AI for diverse stakeholders around the world.

Editor’s Note: Since 2023, Microsoft’s Office of Responsible AI has partnered with the Strategic Foresight Hub at the Stimson Center to convene a diverse group of experts from the Global South to evaluate the impacts of AI in emerging markets. Guided by the question of how AI-related risks and benefits might manifest in various social, cultural, economic, and environmental contexts, program participants identify technological and regulatory solutions that can help mitigate risks and maximize opportunities across global contexts. After more than a year of virtual meetings, some Fellows had the opportunity to convene in person at the February 2025 AI Action Summit in Paris. In this piece, they reflect on global AI policy, their experience at the Summit itself, and the agenda for the upcoming Summit in India.

By Julian Mueller-Kaler, Director, Strategic Foresight Hub

The AI Action Summit in Paris marked a major shift in global AI discussions, moving from the risk-focused conversations of the Bletchley Park and Seoul AI Safety Summits to a focus on AI opportunities, innovation, and applications. This shift was reinforced by broader global trends. Before the Summit, the new U.S. administration revoked President Biden’s Executive Order on AI safety, and shortly after the Summit, the UK rebranded its AI Safety Institute as the AI Security Institute, aligning with the previously adopted AI Opportunities Action Plan. U.S. Vice President J.D. Vance, whose speech will likely define how the Paris AI Action Summit is remembered, captured this shift by declaring, “I’m not here this morning to talk about AI safety… I’m here to talk about AI opportunities.” Unsurprisingly, the U.S. and the UK were also the two countries that refused to sign the Summit’s final declaration, the “Statement on Inclusive and Sustainable AI for People and the Planet.”

This shift away from safety concerns toward deregulation is striking, especially as scientific research continues to highlight growing AI risks. For example, the First International AI Safety Report, led by Professor Yoshua Bengio—a Canadian-French computer scientist and pioneer of artificial neural networks and deep learning—warns that general-purpose AI presents underexplored dangers, including models deceiving human programmers during testing. At a side event, Bengio’s concerns stood in stark contrast to the Summit’s focus on speed and competition.

Similar concerns were discussed at The Future of AI Governance: Ensuring Global Inclusivity, an event co-organized by IRIS and the Stimson Center with support from Microsoft’s Office of Responsible AI. As part of the Center’s Global Perspectives: Responsible AI Fellowship program, attending fellows reflected on the Summit’s outcomes, discussing key aspects of AI governance and pathways for global inclusivity. Following the announcement of the next Summit host, fellows also provided recommendations for India, outlining steps for a more globally inclusive AI Summit implementation and dialogue.

Building an Inclusive Framework for Global AI Governance – Lessons from Microsoft

Natasha Crampton, Chief Responsible AI Officer at Microsoft, emphasized that Responsible AI and AI Safety remain nascent fields requiring greater investment and deeper understanding. She underscored that without effective and inclusive AI governance, the benefits of the technology will not be fully realized or equitably shared. As a member of the United Nations High-Level Advisory Body on AI (2023–2024), Crampton worked alongside global experts to address pressing questions of AI governance. This effort culminated in the report Governing AI for Humanity, whose recommendations were incorporated into the Global Digital Compact, adopted at the most recent UN General Assembly in New York. During the event’s opening keynote, she highlighted three key priorities for effective AI governance: regulatory interoperability, oversight of globally significant risks, and advancing inclusivity – insights further explored in Microsoft’s publication “Global Governance: Goals and Lessons for AI”. Crampton also stressed that AI is not developing at the same pace worldwide, yet its consequences are felt globally. As such, AI governance must be designed to ensure that all stakeholders – governments, institutions, and organizations alike – have a meaningful opportunity to participate in shaping the future of AI.

Many Pathways for Global Inclusivity – Lessons from France, Thailand, Kyrgyzstan, and India

AI governance is not just about regulation; it is about inclusion at every stage, from AI design and development to deployment. Julia Velkovska, Senior Researcher in Sociology at Orange Research Labs, highlighted how ethically questionable labor practices often underpin AI systems. The invisible yet essential work of labeling and tagging datasets, critical for training AI models, is frequently outsourced to the Global South, where poor working conditions raise ethical concerns. She also pointed to the linguistic exclusivity of AI: many conversational AI tools fail to recognize languages beyond English or disregard dialects, effectively excluding entire communities from the digital future.

Narun Popattanachai, Director of Regulatory Impact Assessment and Evaluation of Law at Thailand’s Office of the Council of State and a Microsoft/Stimson Fellow, framed the core dilemma of AI governance: balancing the priorities of those advocating for privacy, human rights, and free speech with those favoring rapid market-driven AI adoption. This divide reflects the broader global challenge – crafting AI governance structures that foster both innovation and accountability without tipping the scales entirely in favor of either camp.

Aziz Soltobaev, Co-founder & Programs Manager at the Internet Society Kyrgyzstan Chapter and a Microsoft/Stimson Fellow, presented Kyrgyzstan as a compelling case study in digital transformation. A decade ago, the country ranked among the lowest in internet affordability, with high costs and limited access. Today, through strategic investments (Digital CASA) and policy shifts (Tunduk X-Road, EL-QR), Kyrgyzstan boasts some of the most affordable internet rates globally, with 99% of settlements now connected. Most public services are delivered electronically, built on digital public goods such as digital identity and digital payments. The National Bank’s adoption of a unified payment standard for fintech startups, commercial banks, and cash registers raised financial inclusion from below the global average to near-universal access to digital wallets and financial services among the adult population. These examples demonstrate that promoting AI as a digital public good and investing in AI infrastructure for public benefit could accelerate AI adoption across the Global South. They also carry an important lesson: AI inclusivity cannot be meaningfully discussed without first addressing the stark reality that 2.5 billion people around the world still lack internet access (and 750 million remain without electricity). The Kyrgyz experience highlights how digital public infrastructure must be a priority before societies can fully harness AI’s potential.

Jibu Elias, Microsoft/Stimson Fellow, Fellow at the Mozilla Foundation, and an early architect of the IndiaAI program, reflected on India’s evolving AI governance approach. Initially focused on AI for social empowerment through initiatives like RAISE (Responsible AI for Social Empowerment) and AI for All, India sought to leverage AI to address economic and social challenges in a country marked by vast linguistic, demographic, and geographic diversity. More recently, India has shifted its focus to the broader Global South and regulatory considerations, signaling its intent to be an active player in shaping AI’s global trajectory. As the world’s largest democracy, India is increasingly asserting its role in AI governance, recognizing that exclusion from global decision-making, like its lack of a permanent seat on the UN Security Council, must not be repeated in the AI era. With the third-highest number of AI research papers published in the past five years and a rapidly growing AI ecosystem, India is positioning itself as a leading AI power alongside the U.S. and China.

Beyond Efficiency – Ensuring Responsible AI in Judicial and Peacebuilding Contexts

Ibrahim Sabra, Microsoft/Stimson Fellow and Researcher at the University of Vienna’s Department of Innovation and Digitalization in Law, presented his case study on AI’s integration into judicial systems across the Global South. Judicial AI systems, he explained, currently fall into three categories: clerical assistive tools, recommendation systems, and emerging decision-making systems. While acknowledging the potential for increased efficiency, he highlighted critical risks associated with their growing adoption: biased training data amplifying disparities, mishandled sensitive data undermining privacy, judicial overreliance eroding autonomy, and opaque systems fostering “obscure justice.” To harness AI responsibly, he proposed three essential safeguards: retaining adjudicative authority exclusively with human judges; enforcing transparency, accountability, and ethical use through rigorous guidelines and external oversight; and building institutional capacity through training and policy development. These measures, he argued, are the bare minimum to ensure AI serves, rather than undermines, the core tenets of justice, keeping it unequivocally within the human realm.

Branka Panic, Founding Director of AI for Peace and a Microsoft/Stimson Fellow, emphasized the importance of involving communities affected by conflict and violence in the design, development, and implementation of AI tools that directly impact their lives. As AI becomes integral to many diverse fields, including peacebuilding and humanitarian action, it must be applied ethically and inclusively, focusing on understanding local contexts to minimize negative impacts while maximizing its positive potential. Panic commended efforts by grassroots communities gathered in Paris, such as the AI African Village, Yemeni women activists, human rights defenders, and programs like the Responsible AI Fellowship, which create spaces for their voices to be heard, even when they are not part of the main discussions. These initiatives demonstrate the need for those most impacted by AI to actively shape its use and governance, ensuring it serves rather than harms.

Conclusion and Lessons for the Upcoming AI Summit in India

The Summit underscored a stark reality: countries in the Global North and the developing world operate with fundamentally different AI priorities. Many nations in the Global South face significant barriers to AI adoption, from a lack of infrastructure to shortages in technical expertise. However, the discussions also raised a critical question – can solutions developed in one Global South country be effectively applied elsewhere? As Jibu Elias noted, if an AI development, implementation, and governance model can succeed in India, with its vast diversity and complexity, it is likely adaptable to other nations facing similar challenges. Another key takeaway from these conversations was that AI governance cannot be dictated by a handful of powerful nations. Ethical labor practices, digital inclusion, regulatory balance, and geopolitical considerations must be addressed through a globally representative approach, one that ensures AI serves the interests of all, not just a select few.

To further broaden the conversation, our Fellows are offering recommendations for the upcoming AI Summit in India:

“India has a crucial opportunity to refocus the global conversation on Trustworthy AI, on existing risks, and the collective efforts needed to address these challenges effectively. My wish is for India to foster more inclusivity by creating space for vulnerable communities facing disproportionate AI risks, particularly those affected by conflict and violence. By prioritizing voices from these regions, India can ensure the Summit not only addresses challenges but also highlights AI’s potential in peacebuilding, human rights protection, and conflict prevention. This approach would set a global precedent for truly inclusive AI governance.”  – Branka Panic

“As someone who helped architect INDIAai and has worked at the intersection of policy, people, and power, I believe India’s AI moment isn’t just about tech—it’s about trust. At this Summit, we have the chance to do something historic: put communities who’ve long been left out—whether due to language, labor, or lack of access—at the center of the global AI agenda. If we can create an inclusive governance framework that works for India, we can spark a new model for the world—one built not just on ambition, but on empathy.” – Jibu Elias

“Law and the judiciary are deeply embedded in society and cannot be isolated from broader governance trends. The next Summit should address the uneven adoption of AI in judicial systems, which reflects fragmented governance worldwide. International and regional organizations, in collaboration with key legal associations (e.g., AIDP, IBA, IAJ, ICJ), should advance UNESCO’s Guidelines for AI Use in Judicial Systems as a global framework. Countries should follow Colombia’s lead in formally adopting these guidelines, setting a vital precedent for ethical AI integration in justice systems. As both justice and AI are transnational, harmonized standards are essential to safeguard judicial integrity, prevent misuse, and ensure AI promotes fairness.” – Ibrahim Sabra

To watch the full “The Future of AI Governance: Ensuring Global Inclusivity” event hosted by IRIS, please visit the following link: The Future of AI Governance: Ensuring Global Inclusivity (YouTube).
