A discussion evaluating the role of artificial intelligence and social justice in developing countries

Given the varied capacities of countries in the Global South to integrate emerging technologies in accordance with their societal needs, it is difficult to define international best practices, especially when it comes to building and maintaining ethical standards for boosting economic and technological opportunities. While the idea of digital sovereignty – defined as a government’s right to regulate the information its citizens are exposed to and engage with – is intriguing in itself, it carries the potential for abuse, allowing governments to control or assert dominance over their citizens. Many emerging markets within the Global South are transitioning from a traditional market-based economy to a digital, technology-driven growth model. This crossroads can leave citizens with inequitable access to technology and legal protections, as well as gaps in education and technological literacy, furthering social injustice. When developing national AI frameworks and strategies, it is therefore important that policymakers and governments consider the socio-economic needs of their citizens, countries, or regions.

On October 31, 2024, fellows participated in a discussion on the topic of AI in the Public Sector. To seed the conversation, four fellows presented case studies and research findings that highlighted existing uses of AI in public services and critical infrastructure, as well as strategies to leverage the technology for economic growth and to safeguard the wellbeing of citizens.

Discussion Summary

The third discussion session focused on AI’s potential to transform public services—from boosting efficiency and data-driven decision-making to automating routine tasks. While these benefits are promising, concerns remain about job displacement, costs for taxpayers, privacy, and ethics. Before presenting their case studies, speakers highlighted both the value of AI adoption in the public sector and the challenges it poses. On the one hand, there was little doubt that AI has the potential to improve service delivery by generating insights from large datasets, increasing accuracy and citizen satisfaction, and automating backend tasks to boost productivity. On the other hand, automation may displace workers and create social hardship for those affected. Speakers stressed that national strategies must invest in upskilling and reskilling programs to keep workers engaged in tech-driven environments. The high cost of AI implementation—including technology, infrastructure, and training—was mentioned as another concern, especially for developing countries that must weigh benefits against financial constraints. Sustainable funding models involving external funders, public-private partnerships, and innovative financing can help offset expenses and were mentioned as best practices. Most critically, fellows agreed that privacy and ethical risks must always be addressed. Without safeguards, AI can be misused for surveillance and introduce bias into decision-making processes.

The first case study examined AI and citizen engagement in Latin America, highlighting how poor public experiences often stem from mismanaged state resources and limited understanding of bureaucratic processes. Responsibly used, generative AI can help bridge this gap by making institutional roles clearer and more accessible. Taína, for example, a Smart Government AI system based on advanced machine learning, provides personalized communication to citizens seeking public services. Optimized for local Spanish dialects and accessible via voice and text on mobile devices, the program serves a broad population. It continuously learns from interactions to anticipate future needs and proactively connects users with relevant public and private services. Notably, it can complete procedures in a single session, bypassing typical bureaucratic delays. As citizen participation grows, Taína becomes more effective by collecting and structuring new data, ultimately strengthening the relationship between governments and their citizens.

The next presenter discussed the Oxford AI Readiness Index 2023 results for Central Asia—Kazakhstan, Uzbekistan, Tajikistan, Kyrgyzstan, Turkmenistan, and Afghanistan—highlighting efforts by the first five to pursue a harmonized regional AI approach. Kazakhstan ranks as the most AI-ready, though still behind India and Türkiye, and is supporting tech startups through the Astana Hub’s partnership with Google via the Silkway Accelerator. Tajikistan, which published the region’s first national AI strategy, and Uzbekistan have both made notable progress in digital transformation. According to the presenter, Kazakhstan’s five-year AI strategy (2024–2029), for example, aims to develop a proprietary KazLLM with 100 billion tokens, an open-source version with 40 billion tokens, and a national AI infrastructure accessible to stakeholders through the National Information Technologies Joint-Stock Company. While Uzbekistan’s strategy includes a $50 million investment in AI R&D, projecting an additional $1.5 billion in market value, Tajikistan’s plan allegedly considers adopting Chinese facial recognition technology to build a ‘safe city’ in Dushanbe. The remaining three countries lack formal AI strategies, though Kyrgyzstan operates two supercomputers and is developing Kyrgyz NLP with a private firm. Rights-respecting and responsible AI frameworks remain largely unknown in the region, with civil society often left to address issues related to natural language processing (NLP) and LLM development. To boost awareness, the presenter recommended tools like the GSMA Mobile Literacy Toolkit and educational videos to improve digital and AI literacy in the region.

Thailand—the third case study—has taken an innovative approach to AI and regulatory development. According to the presenter, policymakers use a so-called “relevancy checker,” an LLM that interprets international legal obligations and generates Thai-language keywords, enabling cross-referencing with national laws for gap analysis. Furthermore, Thailand published a non-binding AI strategy in 2021, allowing different government sectors to pursue independent projects. A key government priority, reinforced by the relevancy checker, is modernizing foundational laws and fostering a fair, equitable digital market. Recommendations include requiring disclosure when AI is used, allowing users to refuse AI-augmented services, mandating audits of platform technologies, and ensuring fair competition among core service providers. The second recommendation supports the evolving UN Model Law on Automated Contracting, which affirms that a contract should not be invalidated solely because no natural person was involved in reviewing its formation.

The final case study examined the use of AI in the judiciary across the Middle East and North Africa. In Egypt, AI transcription tools—developed by the Ministry of Communication and Information Technology—have improved courtroom efficiency and advanced judicial reform, now operating in 66 criminal and economic courts. A unified online petition system has also cut petition processing times from two weeks to four hours. The UAE’s Ministry of Justice, for example, launched Aisha, an AI assistant that enhances access to legal information at court entrances, supports judges with case analysis, and provides legal insights to lawyers. Turkey’s judiciary created an AI virtual center to improve case assignment, workload distribution, and employee evaluations. Like Aisha, it offers legal precedent analysis and may eventually author opinions and rulings. While these tools boost efficiency, they also raise concerns about judicial autonomy. Overreliance on AI insights may undermine judges’ interpretive judgment and erode public trust, especially in cases where AI lacks moral nuance or empathy, or is trained on biased datasets.

In the subsequent discussion, fellows agreed that successful AI integration requires a holistic approach that fosters stakeholder collaboration and addresses ethical and social concerns to establish responsible governance frameworks. Public sector organizations should identify appropriate AI use cases and define clear, achievable project objectives in partnership with other stakeholders to ensure feasibility and resource availability. These projects must be backed by adequate funding to support skilled personnel and necessary infrastructure. Clear government frameworks should outline roles, responsibilities, and ethical as well as legal standards. In emerging markets, closing the talent gap and aligning digitization with real societal or security needs—not just global optics—are essential. It was highlighted that, given the high cost of digital capacity building, countries cannot afford to implement AI systems that are inaccessible or misaligned with local contexts.

Another important point of discussion was the need for future procurement to prioritize stronger stakeholder understanding and resource sharing with more developed economies, especially as AI adoption grows across the Global South. AI proposals from these regions often lack complexity, clear guidelines, and strategic direction, with data gaps contributing to reluctance around data disclosure. To address this, one participant recommended partnering with global private-sector actors for secure data banking and storage. Establishing human-in-the-loop practices early on might also be critical to preventing automation bias—especially in contexts like Türkiye, where AI use in criminal justice could lead to wrongful imprisonment or unjust bail decisions without proper training, testing, or human oversight.

AI adoption in the public sector has the potential to transform everything from access to government services to the creation and enforcement of laws. However, it also brings ethical, social, and economic risks that, if unaddressed, could worsen inequality and instability. To ensure equitable and effective integration, national AI strategies and governance frameworks must be collaboratively developed by diverse stakeholders and continuously monitored.

This summary captures the extensive discussion and insights shared during the meeting held on October 31, 2024, at 10:00am EST while maintaining the anonymity of individual participants as per the Chatham House Rule.

A special thank you to Isaac Halaszi, Research Assistant for the Strategic Foresight Hub at the Stimson Center, who contributed to this event summary.