Editor’s Note: Since 2023, Microsoft’s Office of Responsible AI has partnered with the Strategic Foresight Hub at the Stimson Center to convene a diverse group of experts from the Global South to evaluate the impacts of AI in emerging markets. Guided by the question of how AI-related risks and benefits might manifest in various social, cultural, economic, and environmental contexts, program participants identify technological and regulatory solutions that can help mitigate risks and maximize opportunities across the globe. Fellows also have the opportunity to publish at Stimson; in the RAI Case Studies, Fellows share insights about responsible AI governance from within their own thematic and geographic areas of expertise.
By Giulia Neaher, Managing Editor for RAI Case Studies
Across Africa, AI is being harnessed to achieve positive impacts for marginalized communities. However, while AI can be used for good, some fear it could further marginalize and harm those it is intended to empower. Despite emphasis from both public and private sectors on equality and equity, uncertainty around policy-enabling environments, skills, and resources still presents a bottleneck for building inclusive AI. Though there are promising femtech1 solutions aimed at addressing specific gender concerns, the question of how to address women’s needs and wants through a feminist approach to AI remains open.
Gender is often still an afterthought in policy implementation and practice, but many initiatives, charters, and agreements in Africa support equality and aim to eliminate violence against women, combat the disproportionate effect of poverty on women, and support women’s participation in the political and economic spheres. For example, Agenda 2063 promotes gender equality and an engaged, empowered youth. The African Union Strategy on Gender Equality and Women’s Empowerment (GEWE) 2018-2028 likewise aims to strengthen women’s agency in Africa and ensure that women’s voices are amplified and their concerns fully addressed. The Protocol to the African Charter on Human and Peoples’ Rights on the Rights of Women in Africa (the Maputo Protocol) similarly requires member states to tackle “all forms of discrimination against women through appropriate legislative measures.”
In Africa, there is a strong normative framework on gender equality and women’s and girls’ rights, and correspondingly, some civil society and governmental initiatives support women in national digital transformation policies. In Rwanda, for example, women have become increasingly influential in shaping national plans for artificial intelligence, and in Kenya, women have influenced the national plan for data use. However, Africa still faces challenges in integrating AI and policy. The Oxford Insights Government AI Readiness Index illustrates the gap: the United States ranks first, while Mauritius, considered the African flagship country in AI policy, ranks 69th. Only four African countries – Mauritius, South Africa, Rwanda, and Egypt – scored above the global average of 47.59.
Policy efforts across the continent are increasing but still limited. In mid-2021, Egypt launched its national AI strategy, titled “Artificial Intelligence for Development and Prosperity,” making clear the country’s ambitious goals for development and economic growth. Senegal followed suit in 2023 with a strategy of its own, also focused on economic development. On April 20, 2023, Rwanda released its “National AI Policy for Responsible AI Adoption,” which emphasizes AI for sustainable development. In 2024, Kenya published its draft national AI strategy, with goals including social inclusion, ethics, and equity in AI.
Despite all the efforts led by African governments in recent years to accelerate digital transformation — particularly internet penetration, digital infrastructure, and economic innovation — vulnerable and poor communities still have limited access to digital technologies due to fragile infrastructure. Most websites are available only in English or other colonial languages such as French, rather than in local languages. As a result, several African demographics are left out of digital participation, since engagement with technology is often limited to those who can read and understand English or French. This effect is heightened for women and gender-diverse groups, who are also left out of the benefits of digital transformation and artificial intelligence due to gaps in education and opportunity.
Some significant challenges remain in addressing inclusive responsible AI on a global scale, and particularly in Africa.
Global North-Centric Problem Definition
Most AI systems are developed in the Global North by tech elites, corporations, and governments. These actors often define problems based on their own values, interests, and priorities — not those of the communities most affected by AI. As a result, AI solutions often serve corporate or state interests, not community empowerment. Global North-centric problem definition has trickle-down effects on AI solutions and data practices — in particular, which data is used to train models, how it is collected, in which domain, and who is involved in the process.
AI as a Solution Looking for a Problem
Too often, AI is used simply because it is new or profitable, not because it is the most ethical or effective tool for the issues at hand. This leads to misaligned priorities and wasted resources, while root causes like structural inequality go unaddressed. This is particularly true where communities lack basic AI infrastructure; for instance, AI-enabled tools are of limited use to farmers in areas where connectivity and access to smart devices remain a challenge. Do we define the problem first, or the solution?
Exclusion of Marginalized Voices
AI is a discipline that was founded, and for decades studied, almost exclusively by men; and since AI is a human construct, it is socially and culturally conditioned. Because of this, minority communities, such as indigenous peoples, rural populations, and people with disabilities, are often excluded from AI design and decision-making. As a result, the needs and harms they experience are overlooked or even worsened by AI interventions. This is where the question of identifying needs and wants in AI usage comes into the picture. Without the participation of all, can AI be for all?
Biased Data Leads to Biased AI
AI bias is introduced when systems are trained on biased data, and that bias can affect particular gender or ethnic groups more than others. In the Gender Shades study at the MIT Media Lab, researchers tested several facial recognition algorithms designed to determine a person’s gender from a photo. The results were worrying: the darker the skin tone, the higher the error rate. The rate was worst for darker-skinned women, for whom the error rate reached up to 35%. Gender bias stems from biases already present in societies, which can seep into AI-driven processes. Even if these biases are unintentional, their effects are real and significant.
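The kind of audit described above boils down to disaggregating a model’s error rate by demographic group rather than reporting a single aggregate accuracy. The following is a minimal sketch of that idea; the function name, group labels, and sample records are illustrative, not the study’s actual data:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the classification error rate for each demographic group.

    `records` is a list of (group, true_label, predicted_label) tuples.
    Returns a dict mapping each group to its error rate.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, true_label, predicted in records:
        totals[group] += 1
        if predicted != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records: (group, true gender label, model prediction)
audit = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "male"),   # a misclassification
    ("darker-skinned women", "female", "female"),
]
print(error_rates_by_group(audit))
# → {'lighter-skinned men': 0.0, 'darker-skinned women': 0.5}
```

An aggregate error rate over these four records would be 25%, masking the fact that all of the errors fall on one group; disaggregating is what makes the disparity visible.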
Research and Technical Capacity in AI
According to UNESCO, women make up 20% of employees in technical roles in major Machine Learning (ML) companies, 12% of AI researchers globally, and 6% of professional software developers. A notable gender gap exists in the AI workforce itself, just as there is inequity in job acquisition and retention for women in STEM roles generally. This disparity means that women’s perspectives are underrepresented not only in the application of AI but also in its very development.
Case Studies
The case studies below offer lessons in approaching AI from a gendered perspective.
South Africa: Zuzi AI Chatbot
South Africa faces some of the highest rates of gender-based violence (GBV) globally, disproportionately impacting Black women, rural populations, migrant communities, and LGBTQI+ individuals. Survivors often experience barriers to justice, including a lack of accessible legal information, language exclusion, and fear of institutional retaliation or inaction.
Sexual and reproductive health (SRH) services are often underfunded or stigmatized, leaving many without the knowledge or support to access care. In response to this, Gender Rights in Tech (GRIT) launched Zuzi, a survivor-centered chatbot designed to provide critical, accessible information on rights and services related to GBV and SRH.
Approach
- Zuzi was co-designed with survivors, activists, and relevant stakeholders through focus groups and co-creation activities.
- Available in local South African languages (e.g., isiZulu).
- Connected to relevant authorities and stakeholders.
- Emphasis on data privacy, allowing users to engage anonymously and without risk of being tracked or profiled.
Impacts on Marginalized Communities
- Black and Rural Women
- For many rural women, Zuzi became the first trusted source of information about their legal rights after experiencing GBV.
- The chatbot reduced travel costs and time, eliminating the need to physically visit government offices for information.
- LGBTQI+ Communities
- Users from queer and trans communities reported feeling seen and affirmed by Zuzi’s gender-inclusive language.
- LGBTQI+ users reported that Zuzi was one of the few spaces where they could ask questions safely without fear of judgement or outing.
Zuzi demonstrates the power of feminist, locally grounded, and participatory tech design in responding to structural inequalities in access to justice and health. While still evolving, its early impact highlights how AI tools rooted in community voice can become trusted lifelines for marginalized and underserved populations.
Uganda: Gender Bias in Luganda-English Translation
As machine translation systems become increasingly integrated into digital services, educational tools, and public communication platforms, bias in translations poses a serious risk, particularly around gender. For low-resource languages like Luganda — widely spoken in Uganda but underrepresented in AI research — machine translation systems often inherit and amplify stereotypical associations, leading to misgendering, erasure, or misrepresentation of women and non-binary people. Makerere AI Lab led a project that evaluated and quantified the gender bias within a Luganda-English machine translation system.
Approach
- Modified Translation Gender Bias Index (TGBI): Makerere AI Lab developed a tailored version of the TGBI for the project. The index was adapted to Luganda’s noun class system, which groups nouns based on semantics rather than gender, making direct comparisons with English gendered pronouns methodologically complex.
- Human Evaluation for Validation: Beyond computational metrics, the project incorporated human evaluation with native Luganda speakers to assess the validity and relevance of bias detection results. This participatory evaluation helped contextualize algorithmic findings with actual language use and community perspectives.
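The core intuition behind TGBI-style metrics is to translate gender-neutral source sentences and measure how often the output defaults to one gendered pronoun. The sketch below is a simplification under stated assumptions: the `pronoun_default_rate` function and the sample translations are hypothetical, and the actual study used a richer index adapted to Luganda’s noun-class system.

```python
import re

MASCULINE = {"he", "him", "his"}
FEMININE = {"she", "her", "hers"}

def pronoun_default_rate(translations):
    """Tally gendered pronouns in English translations of sentences
    whose source language (here, Luganda) carries no gender marking.

    Returns the share of translations rendered with masculine,
    feminine, or no gendered pronouns.
    """
    counts = {"masculine": 0, "feminine": 0, "neutral": 0}
    for text in translations:
        tokens = set(re.findall(r"[a-z']+", text.lower()))
        if tokens & MASCULINE:
            counts["masculine"] += 1
        elif tokens & FEMININE:
            counts["feminine"] += 1
        else:
            counts["neutral"] += 1
    total = len(translations)
    return {key: value / total for key, value in counts.items()}

# Hypothetical MT outputs for gender-neutral Luganda inputs such as
# "omusawo" (doctor) or "omusomesa" (teacher):
sample = [
    "The doctor said he would arrive soon.",
    "The teacher lost his book.",
    "The nurse said she was tired.",
    "The farmer planted maize.",
]
print(pronoun_default_rate(sample))
# → {'masculine': 0.5, 'feminine': 0.25, 'neutral': 0.25}
```

A skew toward masculine pronouns across a large set of neutral inputs is exactly the masculine-default behavior the Makerere study documented; the human evaluation step then checks that such automated tallies match how native speakers actually read the translations.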
Initial Impacts on Marginalized Groups
- Inclusion of African Languages in AI Bias Research: Most global AI fairness tools are developed for high-resource languages, which means that African languages and their unique grammatical structures are often ignored. By adapting tools to Luganda, the Makerere AI Lab advanced African-led methodologies in fairness research.
- Awareness of Misgendering in Translation Tools: The study revealed that Luganda-English machine translation systems frequently misgender neutral or female subjects, defaulting to masculine pronouns when translating to English.
This project is a pioneering example of decolonizing AI evaluation methodologies by not only applying fairness tools to African languages but also adapting and reimagining those tools based on African linguistic and gender realities. It highlights how language technology can either reinforce or challenge gender norms, and why inclusive, context-specific AI research is essential for equitable digital futures.
Best Practices and Recommendations
In a world where AI tools are often designed and implemented without community involvement, how do we define the needs and wants of communities, and more precisely, of marginalized communities? Feminist approaches, in which structural inequalities are not only recognized but addressed as core systemic factors, can help redefine how we identify what a community needs and how we frame problems and solutions together with communities. In other words, we can build by, for, and with those who are affected by digital transformation. Methods for such an effort are outlined below:
- Centering Lived Realities: Start by grounding AI design, use, and evaluation in the daily experiences, cultural contexts, and aspirations of marginalized African groups. Prioritize bottom-up knowledge production and usage of participatory methods such as community-driven needs assessments, co-design sessions, and feminist tech workshops to define what problems AI should solve.
- Challenging Extractive Data Practices: Critique and resist data colonialism, where African data is harvested without community benefit or consent. Embrace data licensing frameworks that center downstream impact, such as the Nwulite Obodo open data license.
- Intersectional and Contextual Design: Recognize that gender inequality in Sub-Saharan Africa intersects with race, ethnicity, disability, class, and geography. Intersectionality should be applied as a design principle to ask who is included, excluded, and affected at each stage of the AI lifecycle.
- Redistributing Power in AI Development: Decenter dominant tech narratives and elevate African women and queer technologists, researchers, and knowledge holders. Prioritize initiatives that promote open access to infrastructure, such as compute power, datasets, and research tools, for feminist actors.
- Building Ecologies of Feminist AI Practice: AI should not be built in isolation but as part of a broader ecosystem of justice movements that include digital rights, reproductive justice, climate, and land rights.
- Gender-Centered Policy Design: Design policy frameworks, regulations, laws, and acts through a gendered lens so that compliance translates into practice. Accountability, fairness, and intersectionality should be at the core of policy design.
Notes
- 1. Femtech is a term used to refer to technological solutions, often in the form of software, that are designed to address the unique health and wellness needs, and ensure the economic and social participation, of women and girls.