A discussion on the implications of importing AI and on responsible practices for deploying it locally

Although AI systems are often presented as neutral, they are heavily influenced by their creators' economic priorities, as well as social and cultural values. When AI systems are conceived, designed, and developed in advanced economies and deployed in developing ones, there is considerable risk that they will either not work well in the local context or cause AI-related harms, particularly if their development does not carefully consider the nuanced social, cultural, economic, and environmental contexts in which they are deployed.

On August 4, 2023, fellows participated in a discussion on importing AI and its implications. To seed the conversation, three program participants presented concrete case studies that highlighted the pros and cons of importing AI, outlined effective strategies for mitigating the risks while reaping the benefits, and proposed potential long-term solutions.

Presenters

Summary of the Discussion

The meeting began with a collective discussion about the various considerations and challenges associated with importing artificial intelligence (AI). Attendees shared their perspectives, with one highlighting a recent news story from Kenya, where a technology company’s data collection practices had raised concerns about privacy and data security. The case served as a timely example of the complexities surrounding the importation of AI technologies.

Another participant pointed out that discussions about digital sovereignty were gaining momentum in their region, particularly in the context of importing AI, most likely an indication that policymakers and legislators are increasingly focused on national control over AI technologies. Cultural, linguistic, regulatory, and economic differences were underscored as crucial factors when considering the importation of technology in general; such variations across regions necessitate a careful and nuanced approach to AI adoption. Importing AI was also seen as an opportunity to build local capacity, especially where existing expertise is limited, which in turn emphasizes the importance of aligning AI importation with local technology transfer laws.

Language-specific nuances in AI models, particularly gender biases, were also discussed. Languages with grammatical gender distinctions, for instance, pose significant challenges, and it was suggested that norms, rules, and automatic supervision tools are needed to mitigate these biases effectively. AI models often inherit biases present in their training data; when such models are applied in different regions, they may perpetuate or exacerbate local biases, leading to unfair outcomes. Discussants agreed on the need for careful consideration of data sources and their ethical implications.

The meeting also touched upon concerns about digital colonialism, in which Western powers provide foundational AI models to developing countries, potentially creating a dependency on these sources that runs counter to the goal of an equitable AI future. Efforts to further democratize AI were viewed as a step in the right direction, as they would empower smaller organizations to leverage AI technologies with confidence.

The discussion then shifted to human rights considerations in AI importation, with a focus on international human rights law as a guideline for both states and businesses. States should be encouraged to enact legislation that requires technologies to protect rather than interfere with human rights and that provides remedies for violations. Technology-related risks were also brought to the forefront, including the black-box nature and opacity of complex AI systems, limited explainability, weak oversight and accountability, and the potential for large-scale violations. In developing countries, concerns continue to arise about the balance of power between totalitarian regimes on the one hand and foreign tech companies on the other.

Fellows then delved deeper into the legal, political, and social contexts of importing AI. It was noted that in some regions the law-making process is driven mostly by political and economic motives, largely detached from society's needs, resulting in a lag between enacting and implementing regulations. Specific legal examples, such as electronic signature laws, were discussed, highlighting their limited use in practice. "Soft laws" appear to be adopted mainly to impress, and there is a perceived absence of a robust regulatory scheme for artificial intelligence almost everywhere.

The political context of imported AI systems was also explored, with concerns raised about how these systems often serve digital authoritarianism or lead to privacy invasions. Some laws were seen as aiming to control data rather than protect it. AI-driven propaganda, including deepfakes and botnets, was mentioned as a significant concern that erodes trust in the respective governments, as was the use of facial recognition to collect data on the population.

The social context of AI importation was said to include challenges related to digital illiteracy, cashless payment regulations, the accessibility and affordability of the technology, societal biases, language barriers, content moderation, and the need for more investment and research. Participants also discussed the limitations of AI-powered chatbots, particularly in customer service, where they often exacerbate issues rather than solve them. AI's potential role in education was considered as well, with the idea that it might automate administrative tasks and free up educators to focus on teaching, at least in an ideal scenario. However, concerns were raised about privacy issues such as face scanning in universities.

The meeting concluded with a series of questions and proposed actions, including strategies for moving forward with AI importation, the need for digital literacy programs, clarification of cashless payment schemes, and efforts to address the digital divide and societal biases. There was broad consensus on the importance of considering the wider societal impact of AI importation, including its benefits and potential consequences, as well as on the necessity of involving key stakeholders in decision-making processes early on.

This summary captures the extensive discussion and insights shared during the meeting held on August 4, 2023, at 11:00 a.m. EST, while maintaining the anonymity of individual participants as per the Chatham House Rule.
