While researching the deployment of artificial intelligence within the public sector, I encountered a small but valuable set of case studies that poked a bit deeper into the benefits and risks of such a move. For the most part, that set of studies focused on public service provision, while a few explored AI’s institutional impact within public administration. One surprising finding was the widespread use of chatbots within the public sector. That certainly makes sense, given the need to support the ever-increasing interactions between public institutions, their stakeholders and the public in general.
I also realized that most of these studies were not looking at themes that, in my view, are part and parcel of the process. For starters, the procurement process used by public entities to acquire a specific AI platform or technology was not part of the picture. Of course, it is quite possible that some of the cases under the microscope benefitted from donations from private companies or in-kind grants from various other sources. But more often than not, entities must follow standard public procurement processes that usually start with developing an RFP (request for proposals), which is then used to solicit bids from the various providers. Writing such an RFP becomes a monumental challenge if the entity lacks AI capacity or expertise. Here, the entity might end up facing the old chicken-and-egg conundrum: solving it demands external expertise, which might also require a public procurement process.
Another critical gap is the lack of analysis of AI governance. Public institutions already have traditional governance bodies and mechanisms available to decision-makers, which usually do not factor in consultations with external actors or potential beneficiaries and stakeholders. Given AI’s reputed “intelligence,” having adequate governance mechanisms to manage it responsibly is fundamental. Again, the lack of AI expertise and knowledge limits the impact older governance mechanisms can have when selecting and deploying AI within the entity and overseeing its overall implementation. That unquestionably opens the door to algorithmic governance, where AI can make final decisions without human supervision, and redress mechanisms for those affected are conspicuously absent. The governance of AI is the antidote to the abuse of AI in governance processes and should thus be part of the analysis. On the other hand, the case studies do reflect what has actually been happening in practice, where national and local governments have deployed AI platforms to determine eligibility for public services or to classify people as criminal risks based on past records and correlations with other people’s backgrounds.
However, I was more surprised by the researchers’ classification of the various AI platforms deployed in the public sector. That certainly demands some technical knowledge, which perhaps this group of researchers with social science backgrounds did not have handy. So, for example, it is not uncommon for them to place machine learning, artificial neural networks, chatbots, expert systems and genetic algorithms, to name a few, in the same column, as if they were entirely independent of one another. If academics can get a bit confused here, I cannot imagine how policymakers would handle this boiling and spicy soup of letters, acronyms and weird names.
To promptly cool such a sui generis dish, we must develop an AI typology that distinguishes between types, algorithms and applications. The first step toward that goal is differentiating between symbolic AI and sub-symbolic or connectionist AI. The former, also known as good old-fashioned AI (GOFAI), which I described in a previous post, dominated the AI scene for most of the second half of the last century. On the other hand, Neural Networks (NNs), which have been around almost since AI saw the light of day, are the best example of the latter. Success, however, only came this century, thanks to the rapid development of ICTs (Information and Communication Technologies). Machine Learning (ML) and Deep Learning (DL) are today’s crowning champions of connectionist AI, with resounding success. However, since life is never that simple, connectionist AI is broader than ML. For example, evolutionary algorithms (including genetic algorithms) and fuzzy logic systems, among others, are also club members. So, we can place them under the generic AI rubric, considering that they are also distinct from GOFAI. Within that context, we can depict the various types of modern AI in the graph below. I am sure many of you have seen this before, but I am convinced my color scheme is so much nicer.
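For readers who find code clearer than taxonomy, the toy sketch below contrasts the two families on the same made-up task. It is purely illustrative: the eligibility rule, thresholds and data are invented for this post, and the small scikit-learn network is just one possible stand-in for a connectionist approach.

```python
# Illustrative only: the same toy task (flagging a benefits application for review)
# solved the symbolic way (hand-written rules) and the connectionist way
# (parameters learned from data). All thresholds and data are made up.
from sklearn.neural_network import MLPClassifier

# Symbolic / GOFAI style: the knowledge lives in explicit, human-readable rules.
def flag_for_review_symbolic(income: float, dependents: int) -> bool:
    return income > 50_000 and dependents == 0

# Connectionist style: the "knowledge" lives in learned weights, not rules.
X = [[20_000, 2], [60_000, 0], [35_000, 1], [80_000, 0]]  # toy training data
y = [0, 1, 0, 1]                                          # toy labels
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

print(flag_for_review_symbolic(60_000, 0))  # True, and we can point to the exact rule
print(model.predict([[60_000, 0]])[0])      # likely 1, but the "why" is buried in the weights
```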
The words “Artificial Intelligence” thus have different meanings depending on who is speaking and the context in which they are doing so. In the case of connectionist AI, the above graph establishes a hierarchy between the three main types, where AI is the most generic and DL the most specific. So, while DL is always AI, not all AI is DL. And so on.
The table below shows the three interrelated types of modern AI, their respective algorithms, and the applications for which they can be used.
Policymakers, decision-makers, non-technical administrators, and staff should use the table starting with the last column and working their way back to the first. After all, what is needed in practice is an application, not a specific algorithm or AI type. In any case, when a vendor offers an AI or ML solution, further clarification should be requested to ensure it matches the application being sought. The same advice applies to researchers trying to classify AI applications in one way or another.
The table reveals at least three key points. First, different algorithms and AI types can create or support a given application; there is no one-to-one relation between applications and algorithms. Chatbots and customer service are excellent examples here. Second, the most sophisticated AI type, DL, can use algorithms that the table links to ML or AI. The best example here is an application such as ChatGPT, which was created using supervised, unsupervised and reinforcement learning algorithms in addition to GPT models. However, deploying GPTs or other DL algorithms on problems that plain ML or traditional AI already handles, while possible, might not lead to better models and applications. Third, while connectionist AI is less transparent and explainable than GOFAI, within the connectionist scheme depicted in the table, DL is the most complex, with greater opacity and less explainability than the other two. That is critical, as its deployment will demand, in theory at least, more robust governance and oversight mechanisms to ensure responsible and unbiased outputs and outcomes.
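To make the third point more concrete, here is a hedged sketch of that explainability gradient using scikit-learn on an invented eligibility dataset. The feature names, labels and numbers are all assumptions made for illustration; the point is simply that a shallow decision tree yields rules a caseworker or oversight body can read, while a neural network trained on the same data does not.

```python
# Toy illustration of the explainability gradient: all data invented for this sketch.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X = [[20_000, 2], [60_000, 0], [35_000, 1], [80_000, 0], [25_000, 3], [70_000, 1]]
y = [1, 0, 1, 0, 1, 0]  # 1 = eligible, 0 = not eligible (toy labels)

# A shallow decision tree: its decisions can be printed as human-readable rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "dependents"]))

# A small neural network: same task, but the learned weights do not translate
# into rules that can be audited directly.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0).fit(X, y)
print(net.predict([[40_000, 1]]))  # a prediction, but no rule trail to inspect
```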
Of course, nothing stops an AI vendor from pitching GOFAI to potential clients with little expertise. However, there is another twist here, and it comes in the form of artificial AI. No, that is not yet another type. Artificial AI happens when so-called AI applications are, in fact, supported by low-paid humans working behind the scenes in remote locations, totally invisible to end users, who might, in any case, end up glorifying AI. Typical applications here include chatbots, transcription services and virtual assistants.
I am now wondering how many of the chatbots deployed in the public sector since the late 2010s are indeed artificial AI. More research is needed, that is for sure.
Raúl