2023 Global Artificial Intelligence Infrastructures Report

This report provides evidence to illustrate how cultural values and institutional priorities shape artificial intelligence (AI) infrastructures in national and global contexts.

By J.P. Singh  •  Amarda Shehu  •  Caroline Wesson  •  Manipriya Dua  •  David Bray (Foreword Author)

This report uses computer science techniques to analyze national and sub-national AI policies published by 54 countries. It is the most comprehensive analysis of national AI policies to date. It compares and contrasts national and global priorities for the development and deployment of AI, analyzes the empirical determinants of the dominant strategies for developing AI around the globe, and shows where countries are converging and diverging in their approaches. These cross-national and global comparisons matter for a host of key players in AI, including policy-makers, governments, businesses, and civil society organizations.


Executive Summary

In 2016, the United States published its National Artificial Intelligence Research and Development Strategic Plan, usually understood in policy communities as the first statement of a national AI infrastructure strategy (Select Committee on Artificial Intelligence, 2016). Since then, more than 60 countries have announced national or sectoral AI policies.

This report employs computer science techniques to analyze the published national AI plans of 54 countries. In other words, we employ AI to analyze AI strategies. The report analyzes 213 documents on AI strategies. Apart from national plans, the set includes reports and publications from government departments, ministries, national commissions, and bodies appointed to make recommendations on specific issues and sectors.

Our computer science methodology, specifically Latent Dirichlet Allocation (LDA) (Blei, Ng and Jordan, 2003), is calibrated to recognize the embedded or latent topics that each document contains. It does so by estimating the probabilities of words that are most likely to occur together in each document. All documents are analyzed together for a pre-specified number of topics, ascertained through rigorous methodological criteria: the choice of the number of topics reflects fulfillment of LDA criteria for model stability (consistency) and topic stability (coherence). A document may feature one dominant topic, or it may contain two or more topics. Further, we employ a technique known as ensemble LDA (e-LDA) to provide stable results assessed over multiple model specifications.
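
The report does not publish its pipeline, but the topic-number selection described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration using the open-source gensim library, not the authors' implementation: it fits LDA models over a range of candidate topic counts and several random seeds, then scores each count by average topic coherence. The preprocessing, candidate range, and coherence measure ("c_v") are assumptions for illustration.

    # Minimal sketch of choosing the number of LDA topics via coherence,
    # averaged over several random seeds in the spirit of ensemble LDA.
    # Illustrative only; not the report's actual code or parameters.
    from gensim.corpora import Dictionary
    from gensim.models import CoherenceModel, LdaModel

    def select_num_topics(texts, candidate_ks=range(5, 21, 5), seeds=(0, 1, 2)):
        """`texts` is a list of tokenized documents, e.g. [["ai", "policy", ...], ...]."""
        dictionary = Dictionary(texts)
        dictionary.filter_extremes(no_below=2, no_above=0.5)  # drop rare/ubiquitous terms
        corpus = [dictionary.doc2bow(doc) for doc in texts]

        scores = {}
        for k in candidate_ks:
            coherences = []
            for seed in seeds:  # multiple seeds per k as a crude stability check
                lda = LdaModel(corpus=corpus, id2word=dictionary,
                               num_topics=k, random_state=seed, passes=10)
                cm = CoherenceModel(model=lda, texts=texts,
                                    dictionary=dictionary, coherence="c_v")
                coherences.append(cm.get_coherence())
            scores[k] = sum(coherences) / len(coherences)

        # Prefer the candidate count with the highest average coherence.
        return max(scores, key=scores.get), scores

Once a topic count is fixed, each document's mixture (in gensim, lda.get_document_topics(bow)) yields the per-document topic probabilities that underlie document- and country-level comparisons of the kind reported here.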

Collectively, we present the most detailed and comprehensive empirical analysis undertaken of national AI infrastructures to date. This analysis provides comparisons and contrasts across 54 national strategies and a granular look at what these strategies contain. We note the priorities contained in the documents, but our analysis also points out the policy depth of particular countries. Policy depth refers to the extent to which countries have covered the entire gamut of issues that comprise an infrastructure, and the institutional and financial resources they have committed to these issues. For example, AI policies from leading powers such as the United States and China contain depth on basic research capabilities in science and mathematics, while European Union policies contain the most depth on data governance and ethics. One of the strategic objectives in the Chinese AI strategy states: “by 2025, China will achieve major breakthroughs in basic theories for AI, such that some technologies and applications achieve a world-leading level and AI becomes the main driving force for China’s industrial upgrading and economic transformation” (State Council, 2017).

We make three major claims:

  • There is no grand strategy or conclusion that applies to all AI infrastructures. Countries and clusters of countries feature different objectives and different means of achieving them.
  • Countries are pursuing a variable mix of similar elements in their national strategies. We propose and utilize the concept of ‘AI Wardrobes’ to show the various elements available for putting together an AI infrastructure and the variable ways in which countries assemble these wardrobes.
  • Clusters of countries pursuing similar strategies are identifiable. Our machine learning algorithms point out some expected clusters, such as the European Union, Latin America, and East Asia. But there are also surprises: the United Kingdom leads a British influence cluster, and Spain is prominent in the Latin American cluster. A sketch of one such clustering approach follows this list.
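
As a hedged sketch (not the report's published method), one way such country clusters could be recovered is to represent each country by the average topic distribution of its documents and cluster those vectors. The aggregation scheme, distance metric, and cluster count below are assumptions for illustration.

    # Hypothetical sketch: clustering countries by their average LDA topic mixtures.
    # `doc_topics` maps country -> list of topic-probability vectors (one per document).
    # Illustrative assumptions throughout; not the report's published method.
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def cluster_countries(doc_topics, n_clusters=5):
        countries = sorted(doc_topics)
        # Represent each country by the mean topic distribution of its documents.
        X = np.array([np.mean(doc_topics[c], axis=0) for c in countries])
        # Cosine distance suits probability vectors (requires scikit-learn >= 1.2).
        model = AgglomerativeClustering(n_clusters=n_clusters,
                                        linkage="average", metric="cosine")
        labels = model.fit_predict(X)
        return {country: int(label) for country, label in zip(countries, labels)}

Countries assigned the same label would then be read as pursuing similar mixes of wardrobe elements.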

Our three major claims are made at three different levels:

  • We analyze 54 plans that are taken to be national. These are often ‘performative’. They are as much statements of national priorities as declarations meant for the international community. But they reveal the broad trajectory of, and differences among, national strategies.
  • We analyze 213 documents, including the national plans, that national governments, commissions, and departments have published on their AI infrastructures. Unlike the performative and divergent national plans, the intra-national documents reveal fewer national differences, though a few countries show more policy depth than others. We can distinguish countries at the early stages of policies regarding their AI infrastructures from those with detailed regulatory and sub-sector policies.
  • We also analyze the 213 documents regardless of country labels, and here we see the broad topics that stand out in country plans: transportation, education, data ethics, and regulation. Looking at the documents, we can then identify the countries that dominate these topics and some broad differences among them.

Based on our analysis we present three policy recommendations:

  • Comparative analyses like ours provide countries with signposts and guidelines for their ambitions. There is no one-size-fits-all design for national AI infrastructures. Different countries have different capabilities and priorities.
  • Regulating AI will depend on country preparedness and political systems. Grand pronouncements in the media about AI, such as fears of sacrificing human rights or privacy to machine-led systems, need a reality check. Several countries, generally those with democratic systems, are putting together (or struggling to put together) systems of accountability, while others barely feature any such concerns. This provides room to think about governance issues, rather than ceding this authority to machines (or corporations) prematurely.
  • AI policies have many good stories to tell about service provision. These include AI applications for health, education and research, and transportation.

Foreword: Ensuring We Build the Right Foundation to Evaluate Trust in AI and Societies

By David Bray

Dear Readers,

As a field, Artificial Intelligence (AI) has been around since the mid-20th century. In 1955, U.S. computer scientist John McCarthy coined the term. Later, in 1959, McCarthy collaborated with Marvin Minsky to establish the Artificial Intelligence Project, now MIT’s CSAIL (Computer Science and Artificial Intelligence Laboratory). In parallel to McCarthy and Minsky, U.S. political scientist Herbert Simon completed a PhD in 1943 exploring decision-making in administrative organizations and pursued research that later influenced the fields of computer science, economics, and cognitive psychology. In 1957, Simon partnered with Allen Newell to develop the General Problem Solver, which separated information about a problem from the strategy required to solve it.

All four individuals – McCarthy, Minsky, Simon, and Newell – would go on to receive the ACM A.M. Turing Award during their respective careers. In the almost six and a half decades that followed, AI research developed several flavors of systemic approaches, including Logical Reasoning and Problem-Solving Algorithms, Expert Systems, Statistical Inference and Reasoning, Decision Support Systems, Cognitive Simulations, Natural Language Processing, Machine Learning, Neural Networks, and more.

Though AI has many subcategories and has had many flavors of approaches since the 1950s, within the last few years a subset of Neural Networks built on the transformer architecture has revolutionized natural language processing and given rise to what are now known as Large Language Models (LLMs). Just in the last year, LLMs such as ChatGPT and its variants have generated significant public interest, excitement, and anxiety with regard to the future of AI. While the full extent of the public, business, community, and individual value of LLMs remains to be seen, the ability of these models to respond to effectively engineered prompts with predictive text, synthesized images, and the full gamut of multimedia audio and even video outputs has captured the public zeitgeist.

I. A valuable compass reading as to where different nations have decided to steer approaches to AI

Pundits globally have expressed both excitement and concern about whether machines may be able to perform work previously thought performable only by humans, and whether they may be able to produce content and interactions that appear human. It is precisely at this moment that this 2023 Global Artificial Intelligence Infrastructures Report by J.P. Singh, Amarda Shehu, and their doctoral students is so prescient. By bridging multiple fields, including the best of computer science, economics, political science, and public policy, in a collaborative manner akin to the best work of Herbert Simon – Singh and Shehu have produced a valuable compass reading pointing to where different nations have decided to steer their approaches to AI for the future ahead. Their report presents rigorous and much-needed insights that demystify some of the current fervor around the future of AI and societies, namely:

  • First, the report shares convincing evidence that humanity’s AI-associated future will not be set by the United States and China alone – multiple nations beyond these two are pursuing distinct AI strategies, with different objectives and proposed paths outlined in their national AI policies.
  • Second, while there is no singular grand strategy across the fifty-four national AI plans analyzed in this report – Singh and Shehu find similar choice elements in the national strategies analyzed. The researchers dub these similar choice elements a collective ‘AI Wardrobe,’ a term coined by Caroline Wesson, one of the doctoral students in the team, to relate the various choices each country can make in assembling a tailored AI national strategy outfit.
  • Third – country clusters are apparent among the different national strategies analyzed for this report, including the European Union, East Asia, Spain leading a Latin American cluster, and the United Kingdom leading a British influence cluster. Whether these clusters will result in closer AI-related business interactions, nation-to-nation civil relations, and geopolitical ties amongst the countries more closely aligned in their national AI strategies represents a crucial area to watch both now and in the future.

II. Building the necessary foundation: an interdisciplinary mix of fields to tackle Trust, AI, and Societies

Juxtaposed against the global zeitgeist regarding AI, this important 2023 report exists amid a deeper milieu of questions regarding trust within and across nations. In October 2017, the Pew Research Center found that fewer than forty-five percent of United States residents under the age of twenty-five thought capitalism was a good force in society. Contemporary studies also found declining levels of trust among a similar age demographic in the essentialness of living in a democracy – not just for the United States but also for Sweden, Australia, the Netherlands, New Zealand, and the United Kingdom. Together these global trends create a central question – namely:

Can nations invoke strategies that result in Trust in AI and societies? – and a corollary: Can nations encourage Trust in AI and societies, while facing growing distrust in their economic and political systems?

Readers should note that trust can be defined as the willingness to be vulnerable to the actions of an actor not directly controlled by you – a definition that works for both human and AI actors. Multiple studies have established that the antecedents of trust include an individual’s perceptions of the benevolence, competence, and integrity of the actor. If perceptions of these three antecedents are positive, trust is more likely. If they are negative or absent, trust is less likely.

The clustering of similar choice elements in this report, specifically the set of elements that comprise the report’s described “AI Wardrobe”, represents an important tool for leaders in the public and private sectors to assess whether a national AI strategy has the requisite elements to address challenging questions of improving Trust in AI and societies.

Cumulatively, this question of Trust in AI and societies represents an essential one for nations’ AI strategies with regard to their expressed objectives. Though LLMs and their outputs have captured the public consciousness of 2023, there are many more outcomes for which AI can be employed by nations, communities, and networked groups of people working toward shared outcomes beyond generative content. Readers are invited, after seeing the analysis and results in the report, to consider more expansive objectives for AI and societies, including the following:

  • Can AI improve human understanding of decisions we need to make now?
  • Can AI help improve understanding of the impact of our decisions (or lack thereof) on possible local and global futures?
  • Can AI help improve human collaborations across sectors and geographies, potentially tipping and cueing humans that there are other humans with similar projects underway?
  • Can AI help improve identification and reconciliation of misaligned goals and incentives – be they community, regional, or global – for important peacekeeping activities?
  • Can AI help improve public safety, international security, and global preparedness for disruptions both natural and human-caused in the world?
  • Can AI help improve the operations and resilience of networked, digital technologies for both organizational and public benefit – especially in an era of increasing internet devices?
  • Can AI help improve the “essential fabrics” of open societies to include freedom of speech, freedom to think differently, and the need for an educated public to help inform pluralistic discussions all amid a digital tsunami of data?
  • Can AI help improve education, focus, and entrepreneurial activities to tackle big, thorny, “hairball” issues like climate change, immediate & long-term food security, natural resources, and future sustainability for a planet of 9+ billion people?

These questions represent a few of the important, shared outcomes to be explored and achieved through AI strategies that bring together human communities. While this report does not answer them all, it does indicate the different objectives being pursued by different nations with regard to their AI strategies, as well as their performative declarations meant for the broader international community. Furthermore, this report both embodies and demonstrates the importance of interdisciplinary teams for AI research and AI education. Working across multiple disciplines is essential for both research and education, especially as policymakers, business leaders, and students alike learn to explore and advance the necessary AI technical, commercial, civil, and ethical concepts required for a more positive future ahead.

III. For Trust, AI, and Societies, what if the Turing Test is the wrong test for AI?

This report represents a vanguard assembly of an interdisciplinary mix of fields, including the best of computer science, economics, political science, and public policy. Ultimately, for AI to succeed in benefiting nations, communities, and networked groups of people, we must understand human nature better. We humans are products of natural selection pressures. Darwinian evolution is akin to a “blind watchmaker” – and as a result evolution has not prepared us to encounter the true alienness of AI. It is risky for humans to assume AI is aligned with the same things we want and value, especially when the alignment of an AI to specific outcomes remains an unsolved challenge for several neural network approaches. In addition, we humans anthropomorphize lots of things, including animals, weather, inanimate objects, machines, and now AI – even if those things do not act, think, or behave at all like us. Furthermore, training an AI depends heavily on the datasets employed, meaning both extant human datasets and our human choices regarding AI may amplify some of the more socially beneficial or detrimental elements of human nature. These elements include the considerable number of known human biases that each of us possesses, including confirmation bias, sunk cost bias, “in vs. out group” biases (aka, xenophobia), and many more – though, fortunately, these biases can be mitigated somewhat by education and experience.

By both providing a valuable compass reading as to where different nations have decided to steer their approaches to AI for the future ahead, and building the necessary foundation for bringing together an interdisciplinary mix of fields to study national AI strategies – Singh, Shehu, and their students enable readers to ask what I professionally consider to be the crucial question of the 2020s, specifically: what if the Turing Test is the wrong test for AI?

It is important to remember that the original Turing test – designed by computer science pioneer Alan Turing himself – involved Computer A and Person B, with B attempting to convince an interrogator, Person C, that B was human and A was not, while Computer A tried to convince Person C that it was human. In reading the findings and conclusions of this 2023 report, I invite readers to consider: what if this test of a computer “fooling us” is the wrong test for the type of AI our society needs, especially if we are to improve extant levels of trust among humans and machines collectively?

After all, consider the current state of 2023 LLMs: the benevolence of the machine is indeterminate; competence is questionable, as existing LLMs do not fact-check and can deliver misinformation with apparent confidence and eloquence; and integrity is absent, as LLMs can, with some variability, change their stance if user prompts ask them to do so. These crucial questions regarding the antecedents of trust associated with AI should not fall upon these digital innovations alone. First, these are systems designed and trained by humans. Second, ostensibly the 2023 iteration of generative AI models will improve in the future. Third, and perhaps most importantly, readers who care about the national AI strategies of 2023 around the world should also carefully consider the other “obscured boxes” present in human societies, such as decision making in organizations, community associations, governments, oversight boards, and professional settings.

All of which brings us back, in conclusion, to the earlier corollary to the central question of Trust in AI and societies, namely: Can nations encourage Trust in AI and societies, while facing growing distrust in their economic and political systems? It could be that, for the near future, members of the public and representative leaders in both the public and private sectors need to take actions that remedy the perceptions of benevolence, competence, and integrity – namely Trust – both in AI and in societies (sans AI) simultaneously. Because mapping positive, deliberative paths forward to improve the state of Trust in AI and Societies matters, this prescient 2023 report delivers a view of the current expressed state of fifty-four different national AI strategies to help us understand the present and consider the next steps necessary for the future ahead.
