RAI Session: Inclusive Design Practices for Developing AI


Discussing approaches for translating complex and taboo topics around AI and responsible AI practices in order to effectively engage impacted stakeholders

Designing, developing, and deploying AI systems responsibly often requires inclusive design practices that bring impacted stakeholders and/or subject matter experts to the table to ensure their needs and concerns are incorporated into each phase. Although this concept of inclusivity appears in AI principles around the world, it is rarely practiced in earnest, for multiple reasons.

On August 17, 2023, fellows participated in a discussion on the topic of inclusive design practices for developing AI. To seed the discussion, three program participants presented concrete case studies and research findings on practices from developing countries that effectively bring impacted stakeholders to the table as AI systems are designed, developed, and deployed.

Presenters

Summary of the Discussion

The central focus of the second session was the critical topic of inclusive design practices in artificial intelligence (AI) development. First and foremost, participants underscored the significance of bringing together a diverse array of stakeholders and experts to ensure that AI systems incorporate the needs and desires of the broader community. The ensuing discussions explored the challenges of fostering inclusivity in AI, seeking to clarify what artificial intelligence is, how it operates, and how the technology is integrated into society.

Presentations focused, for example, on responsible AI development efforts in Africa, highlighting the remarkable presence of over 2,400 AI projects across six distinct sectors on the continent, accompanied by a burgeoning community of African developers. The region has also witnessed substantial tech investments, particularly in Kenya, and notable industry involvement, exemplified by Microsoft’s presence on site. Other presenters delved deeper into the crucial facet of ethical AI education, highlighting strategies to practically equip students with a strong foundation in AI ethics and emphasizing the importance of structured mentorship programs for tech students. Addressing the ethical implications inherent in AI deployment was considered paramount.

Discussions then progressed to ideas and work streams for effectively engaging stakeholders in the AI development process. Fellows emphasized the significance of fostering a sense of community, contextualizing AI problems within local socio-economic conditions, developing skills, and incentivizing contributions. It was acknowledged that many of these strategies are interdependent and should be addressed sequentially to maximize their impact.

Another program participant focused their presentation and insights on the concept of participatory AI in humanitarian and peace-building efforts, whose core idea is to involve diverse stakeholders in the development of AI solutions. Different levels of inclusion, ranging from mere consultation to active co-creation, were examined and illustrated in detail, alongside associated challenges and considerations. Discussions extended to the complexities of incorporating non-technical audiences into AI development, emphasizing the need to balance diverse interests. A key shift highlighted was moving from the traditional approach of “designing for” users to a more inclusive one of “designing with” users and affected stakeholders.

The final presentation of the day focused on developing AI tools that foster social inclusion in developing countries. Key points included the government’s pivotal role as a stakeholder, national AI strategies, and opportunities to overcome language barriers, enhance service delivery, and support vulnerable populations through tech applications. An innovative proposal was made to establish a national public digital platform for language in India, leveraging open-source AI technologies to promote linguistic diversity and accessibility. The presentation also underscored the significance of high-quality conversational AI solutions across various domains, particularly in multilingual contexts, as a means to enhance accessibility and inclusivity.

Additionally, the meeting explored the concept of data cooperatives designed to provide dignified digital work opportunities for economically disadvantaged individuals. Such a cooperative could, for example, offer an hourly wage and collaborate with local NGOs to ensure that access to these job opportunities primarily benefits the most economically vulnerable. Program participants also delved into AI-powered healthcare solutions, specifically mobile applications for disease detection. While discussing these initiatives, participants raised ethical concerns related to AI, including potential harm, bias, governance, and regulation. It was noted that digital privacy acts had recently been passed in a variety of countries, yet collaboration among stakeholders was deemed essential for addressing social challenges stemming from AI deployment and development. In closing, participants highlighted the importance of providing additional support for stakeholders, especially in rural areas where language models are involved, acknowledging the emotional and psychological dimensions of inclusion in AI development.

This summary captures the extensive discussion and insights shared during the meeting held on August 17, 2023, at 11:00 am EST, while maintaining the anonymity of individual participants in accordance with the Chatham House Rule.
