Measuring Artificial Intelligence Development

In the last decade, Artificial Intelligence (AI), along with its siblings machine learning and deep learning, has been growing by leaps and bounds. More importantly, the technology has been deployed effectively in a wide range of traditional sectors, bringing real transformational change while raising fundamental socio-economic (joblessness, rising inequality, etc.) and ethical (bias, discrimination, etc.) issues along the way. As it stands today, AI, understood as a set of still-evolving technologies, seems poised to become a general-purpose technology that could leave no sector untouched.

As with other digital technologies, most developing countries face the daunting challenge of harnessing AI to foster national human development. Prima facie, AI looks mostly like software, code that one can even download from the web for free, thus seemingly requiring no massive capital investments for practical use. While this is true, AI depends on four different but interconnected technologies: (1) high computing power, (2) fast and reliable network connectivity, (3) big data, and (4) vast storage capacity, most of which demand a substantial investment of financial resources. Indeed, most developing countries cannot claim to already have in place a technology ecosystem conducive to AI diffusion, not to mention glaring policy, human (including AI expertise), and institutional capacity gaps, among others.

So how can governments in this set of countries get the ball rolling?

It is here where a Government AI Readiness Index (GAIRI) could come in handy. The idea of measuring readiness when it comes to digital technologies is certainly not new. Recall that the Network Readiness Index (NRI), spearheaded by Harvard, was first published in the early 2000s. The last WEF NRI report available is for 2016.

First published in 2017, the Government AI Readiness Index is a joint publication of Oxford Insights and Canada's IDRC. The 2019 edition includes eleven variables, two of them taken directly from the latest NRI report. The data set now consists of all UN member states (193) plus Taiwan. Unfortunately, comparisons with the 2017 data set are not possible, as the index has been redefined, the geographical coverage expanded, and the data sources changed substantially.

The eleven core variables are grouped into four clusters: (1) Governance, (2) Infrastructure and Data, (3) Skills and Education, and (4) Government (effectiveness) and Public Services. The Governance cluster is composed of two discrete indicators, data protection/privacy laws and AI strategy development, and reports no missing data. The other three clusters include three continuous variables each. Figure 1 depicts the nine variables that comprise these three clusters.
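For concreteness, the cluster structure can be sketched as a simple mapping. Only the indicators this post explicitly mentions are listed by name; the remaining entries are hypothetical placeholders, not the report's actual labels:

```python
# Illustrative sketch of the GAIRI cluster structure: 2 + 3 + 3 + 3 = 11 indicators.
# Entries marked "placeholder" are hypothetical; the report's own labels differ.
CLUSTERS = {
    "Governance": [                          # two discrete indicators
        "data_protection_privacy_laws",
        "ai_strategy_development",
    ],
    "Infrastructure and Data": [             # three continuous indicators
        "data_availability",                 # named in this post
        "infra_indicator_b",                 # placeholder
        "infra_indicator_c",                 # placeholder
    ],
    "Skills and Education": [
        "skills_indicator_a",                # placeholder
        "skills_indicator_b",                # placeholder
        "skills_indicator_c",                # placeholder
    ],
    "Government and Public Services": [
        "government_effectiveness",          # named in this post
        "digital_public_services",           # named in this post
        "gov_indicator_c",                   # placeholder
    ],
}

assert sum(len(v) for v in CLUSTERS.values()) == 11
```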

The data set publicly available on the GAIRI website does not include coding for missing values. Instead, missing country data is reported as a zero value. As the report acknowledges, missing data points are extensive, especially when it comes to developing nations. Figure 1 clearly depicts this gap, with indicators such as data availability and the number of AI startups having well over 50% missing data. In fact, only 57 countries (or 29%) have data for all eleven core variables.

The GAIRI score is calculated as the unweighted average of all eleven variables, each previously normalized between 0 and 1. Figure 2 below presents the AI rankings by World Bank country income levels. The data is sorted by GDP per capita at constant prices. The light red line depicts the median AI score for all countries in the sample.
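As a rough sketch of that computation (column names and the exact order of zero-filling versus normalization are my assumptions, not the report's documented pipeline):

```python
import pandas as pd

def gairi_score(indicators: pd.DataFrame) -> pd.Series:
    """Unweighted average of min-max normalized indicators.

    indicators: one row per country, one column per indicator.
    Missing values are coded as 0, mirroring the public data set --
    which mechanically depresses the scores of affected countries.
    """
    filled = indicators.fillna(0.0)
    normalized = (filled - filled.min()) / (filled.max() - filled.min())
    return normalized.mean(axis=1)
```

Note that a country missing k of the eleven indicators contributes zeros for those columns, capping its possible score at (11 - k)/11 of the maximum.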

For each income category, the top five countries are highlighted. Singapore, China, India, and Nepal take the number one spot in each of the four country income categories, thus suggesting that Asia is one of the prime movers. While some correlation between country income and AI score can be seen in the chart, the actual correlation coefficient is below 0.5 and statistically insignificant. This is somewhat unexpected but could be partially explained by the lack of data for many countries and by the way the AI score is computed: missing values enter the estimation as zeros, which lowers the score of all affected countries. This is why China is ranked 20th overall, for example. Note also that all income categories have countries below the median AI score, their number increasing as income levels fall from left to right.
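For readers who want to replicate the significance check, here is a minimal sketch with SciPy, using stand-in data (the variable names and synthetic values are illustrative only):

```python
import numpy as np
from scipy.stats import pearsonr

# Stand-ins for aligned per-country vectors of GDP per capita
# (constant prices) and GAIRI scores; real data would come from
# the World Bank and the index's published data set.
rng = np.random.default_rng(42)
gdp_per_capita = rng.lognormal(mean=9.0, sigma=1.2, size=194)
ai_score = rng.uniform(0.0, 1.0, size=194)

r, p_value = pearsonr(gdp_per_capita, ai_score)
print(f"r = {r:.2f}, p = {p_value:.3f}")
# A coefficient below 0.5 with p > 0.05 reads as a weak, statistically
# insignificant association -- the pattern reported above.
```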

At the continental level (figure 3 below), Kenya (ranked 52 overall), the U.S., Singapore, Great Britain, and Australia come out on top.

Africa is the only region where lower-middle-income countries beat upper-middle-income ones. Note also that several low-income countries are performing better than others in higher-income categories. Nevertheless, most African nations have a hard time crossing the median AI score. In the Americas, Mexico is ranked third, behind the U.S. and Canada but surpassing high-income countries such as Chile and Uruguay. On the other hand, Europe is the only region where most countries (89%) find themselves above the median AI score. And, as expected, Asia's overall performance is impressive and could be even stronger if the estimates accounted for missing data. Asia is also boosted by the inclusion of Gulf countries, most of which are fully funding AI strategies and initiatives.

What about the role of political regimes in fostering AI development? Figure 4 shows GAIRI rankings by political regime, based on the EIU Democracy Index, which is available for 167 countries. The EIU defines four types of political regimes, ranging from full democracies to authoritarian states.

The graph presents the top five countries for each of the four political regimes. Great Britain, Singapore, Turkey, and the UAE are the best performers in each category. A close correlation between the two indices can be intuitively seen in the chart. The actual correlation coefficient is 0.72 and statistically significant. While this is much higher than the correlation between GDP and AI score, the difference between the two could be partially explained by the relatively smaller sample size used here (and thus less missing data). The light red line depicts a polynomial trend with a clear ascending inflection point when crossing the hybrid regimes category. Indeed, several authoritarian states are ahead of many hybrid regimes when it comes to deploying AI. Moreover, if we guide ourselves by the trend line only, higher AI scores are associated with relatively lower levels of democracy, especially in the case of both full and (so-called) flawed democracies.
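The trend line is equally easy to reproduce; a sketch with NumPy, again on stand-in data (the polynomial degree is my guess, as the report does not specify one):

```python
import numpy as np

# Stand-ins for aligned per-country vectors; illustrative values only.
rng = np.random.default_rng(7)
democracy_index = rng.uniform(0.0, 10.0, size=167)   # EIU scale runs 0-10
ai_score = 0.05 * democracy_index + rng.normal(0.0, 0.08, size=167)

# Fit a low-degree polynomial trend, as in Figure 4. A cubic can capture
# an inflection point like the one described around the hybrid-regimes range.
coeffs = np.polyfit(democracy_index, ai_score, deg=3)
trend = np.poly1d(coeffs)

xs = np.linspace(democracy_index.min(), democracy_index.max(), 100)
ys = trend(xs)   # points of the fitted curve, ready for plotting
```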

The role played by the four clusters identified in the GAIRI report is not evident at first sight. Since the AI score is the simple average of the eleven indicators, one could move them across the various clusters and still obtain the same country AI score. This raises the broader issue of the apparent lack of a conceptual framework to develop such clusters in a more systematic fashion. For example, one could argue that the Governance cluster looks more like a policy cluster and should also include the relevance ICT has in government. On the other hand, the current cluster structure helps shed some light on the critical significance of some of the proposed indicators, as seen below.

Much like the AI score, each cluster value per country is computed as the simple average of its normalized indicators. As mentioned above, the Governance cluster only takes a set of discrete values (four in total), so comparisons with the others via graphics are less than ideal. Figure 5 shows the comparative distribution of the other three clusters sorted by country rank, excluding all missing data. The graph also fits a polynomial trend line for each cluster, as well as a regression line (in black) for the whole sample.
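The cluster values follow the same recipe as the headline score, except that, for this figure, missing data is excluded rather than zero-filled; a sketch reusing the hypothetical CLUSTERS mapping from above:

```python
import pandas as pd

def cluster_scores(indicators: pd.DataFrame,
                   clusters: dict[str, list[str]]) -> pd.DataFrame:
    """Simple average of each cluster's normalized indicators per country.

    Unlike the headline score, NaNs are skipped rather than coded as 0
    (pandas' mean() ignores NaN by default), matching the 'excluding all
    missing data' treatment described above.
    """
    normalized = (indicators - indicators.min()) / (indicators.max() - indicators.min())
    return pd.DataFrame({
        name: normalized[cols].mean(axis=1)   # skipna=True by default
        for name, cols in clusters.items()
    })
```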

Using the regression line as a reference, we can see that the Government and Public Services cluster (government effectiveness and the provision of digital public services) pulls up the AI score, especially for most high-ranked countries. The trend starts to change for countries ranked beyond 100 or so. On the other hand, the other two clusters pull down the AI score for almost the same group of countries. The trend for both clusters then reverses itself for lower-ranked countries. In any event, the data shows the critical importance of state capacity to harness AI effectively. States with low capacity will undoubtedly face a plethora of challenges unless they can also use AI itself to build and increase overall state capacity.

A sound analytical framework for the analysis and ensuing measurements should have four core pillars: (1) Policy, not limited to technology; (2) Technology, including innovation, the overall startup environment, etc.; (3) Capacity, both institutional and human, and not limited to technology; and (4) Infrastructure, including some of the themes mentioned in the introduction to this post.

Each pillar, in turn, should be interrelated and sequenced accordingly, with state capacity taking center stage since we are dealing with government AI readiness. But governments must also have the capacity to create adequate policies, harness technologies, and support the deployment of infrastructure around the country. This does not mean governments need to do everything on their own. Here, the distinction between design and implementation is crucial. Once the former is in place, the latter can be executed by the best professionals in the areas under consideration.

Let us also remind ourselves that building state capacity in developing countries is part of the UN SDGs.

Cheers, Raúl
