Case Studies
With an AI-enabled future on the near horizon, it is imperative to evaluate the lessons of earlier technological evolutions to best inform approaches in a new era. Over the past three decades, the digitalization journey of Central Asia, characterized by its mobile-centric nature, has served as a reminder that emerging technologies may not always follow predicted paths. Rather, broader regional patterns reveal how local constraints and conditions can lead to more innovative adaptations. Proper consideration of these lessons will ensure AI becomes a tool for broader social development, focused on creating systems that are as powerful as they are accessible.
As artificial intelligence continues to grow in popularity and application globally, various fields have shown both apprehension and fascination with the new technology. The judiciary seems to recognize the great potential, as well as the risk, of embracing artificial intelligence. On one hand, the judiciary can benefit greatly from its application by improving administrative tasks and automating case management. On the other, using AI in judicial systems risks building distrust, undermining accountability, and eroding privacy. While the double-edged nature of AI is not a new concept, the high stakes of judicial decision-making warrant caution and careful consideration of the risks that this new and emerging technology poses.
Despite undergoing decades of reform following the Asian Financial Crisis in the late 1990s, Thailand has yet to overcome the deeply ingrained inefficiencies within its administrative structure that continually interfere with interdisciplinary collaboration and hinder its ability to comply with international regulatory frameworks. In response to these issues, the Thai government has begun leveraging AI to perform technical translation, keyword identification, and policy and legal gap analysis in order to better implement frameworks such as the OECD’s IRC. This solution not only enables Thai officials and OECD staff to communicate more effectively through automated translation and analytical processes but also generates translated, standardized reports for gap analysis. While the use of advanced LLMs for such a purpose shows significant promise, concerns remain regarding quality control, transparency, and the extent of human involvement in the process.
As artificial intelligence (AI) gains traction across Africa, it is increasingly framed as a solution to developmental challenges, including in agriculture, health, and governance. However, the application of AI often reflects the priorities of Global North actors (governments, corporations, and donors) rather than those of African communities. This creates a power imbalance in who gets to define the problems AI should solve, resulting in solutions that may reinforce existing inequalities or fail to serve marginalized groups, especially women and gender-diverse people.
Africa has a strong normative framework for gender equality, evident in instruments like the AU Agenda 2063 and the Gender Equality and Women’s Empowerment Strategy (GEWE), but these instruments often lack feminist grounding, particularly around intersectionality and social justice. While countries like Rwanda and Kenya have made strides toward inclusive national AI strategies, broader challenges, such as limited infrastructure, digital illiteracy, language exclusion, and unequal access to technology, persist and leave many, especially women, behind.
India stands at a critical juncture in its artificial intelligence (AI) journey, uniquely positioned to bridge the developed world and the Global South. This case study explores India’s evolution from a fragmented AI ecosystem into a cohesive, multi-stakeholder model inspired by the indigenous triad of Samaj (Society), Sarkar (Government), and Bazaar (Market). This philosophy — where technological innovation is aligned with public purpose and rooted in collaboration — anchors India’s emerging role as a leader in inclusive and ethical AI. Crucially, India’s regulatory posture balances innovation with ethical oversight.
Through frameworks like the Digital Personal Data Protection Act (2023), the Safe & Trusted AI program, and the newly established AI Safety Institute, India embeds fairness, transparency, and cultural alignment into its AI ecosystem. The government promotes a “light-touch, innovation-friendly” approach, while mandating bias audits, explainability protocols, and localized safeguards. Despite notable progress, challenges persist. These include digital literacy gaps, risks of algorithmic bias, insufficient compute access in rural regions, and fragmented coordination across sectors. Addressing these will require deepening the Samaj-Sarkar-Bazaar alignment through continued investment in skilling, infrastructure, and inclusive design.