AI Typology

While researching the deployment of artificial intelligence within the public sector, I encountered a limited number of valuable case studies that poke a bit deeper into the benefits and risks of such a move. For the most part, that set of studies focused on public service provision, while a few explored AI’s institutional impact within public administration. One surprising finding was the widespread use of chatbots within the public sector. That certainly makes sense, given the need to support the ever-increasing interactions between public institutions, stakeholders and the public in general.

I also realized that most of these studies were not looking at themes that, in my view, are part and parcel of the process. For starters, the procurement process used by public entities to acquire a specific AI platform or technology was not part of the picture. Of course, it is quite possible that some of the cases under the microscope benefitted from donations from private companies or in-kind grants from various other sources. But more often than not, entities must follow standard public procurement processes that usually start with developing an RFP (request for proposals) and then soliciting bids from the various providers. Writing such an RFP becomes a monumental challenge if the entity lacks AI capacity or expertise. Here, the entity might end up facing the old chicken-and-egg conundrum: solving it demands external expertise, which might also require a public procurement process.

Another critical gap is the lack of analysis of AI governance. Public institutions already have traditional governance bodies and mechanisms available to decision-makers, which usually do not factor in consultations with external actors, potential beneficiaries or stakeholders. Given AI’s reputed “intelligence,” having adequate governance mechanisms to manage it responsibly is fundamental. Again, the lack of AI expertise and knowledge limits the impact older governance mechanisms can have in selecting and deploying AI within the entity and overseeing its overall implementation. That unquestionably opens the door for algorithmic governance, where AI can make final decisions without human supervision, and redress mechanisms for those affected are conspicuously absent. The governance of AI is the antidote to the abuse of AI in governance processes and should thus be part of the analysis. On the other hand, the case studies do reflect what has actually been happening in practice, where national and local governments have deployed AI platforms to determine eligibility for public services or to assess people’s criminal risk based on past records and correlations with other people’s backgrounds.

However, I was more surprised by the researchers’ classification of the various AI platforms deployed in the public sector. That task certainly demands some technical knowledge, which perhaps this group of researchers with social science backgrounds did not have handy. So, for example, it is not uncommon for them to place machine learning, artificial neural networks, chatbots, expert systems and genetic algorithms, to name a few, in the same column, as if they were mutually independent categories. If academics can get a bit confused here, I cannot imagine how policymakers would handle this boiling and spicy soup of letters, acronyms and weird names.

To promptly cool such a sui generis dish, we must develop an AI typology that distinguishes between types, algorithms and applications. The first step toward this goal requires differentiating between symbolic AI and sub-symbolic or connectionist AI. The former, also known as good old-fashioned AI (GOFAI), which I described in a previous post, dominated the AI scene for most of the second half of the last century. On the other hand, Neural Networks (NNs), which have been around almost since AI saw the light of day, are the best example of the latter. Their success, however, only came this century, thanks to the rapid development of ICTs (Information and Communication Technologies). Machine Learning (ML) and Deep Learning (DL) are today’s crowning champions of connectionist AI. However, since life is never that simple, connectionist AI is larger than ML. For example, evolutionary algorithms (including genetic algorithms) and fuzzy logic systems, among others, are also club members. So, we can place them under the generic AI rubric, considering that they are also distinct from GOFAI. Within that context, we can depict the various types of modern AI in the graph below. I am sure many of you have seen this before, but I am convinced my color scheme is so much nicer.

The words “Artificial Intelligence” thus have different meanings depending on who is speaking and the context in which they are doing so. In the case of connectionist AI, the above graph establishes a hierarchy among the three main types, where AI is the most generic and DL the most specific. So, while DL is always AI, not all AI is DL. And so on.
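Since that hierarchy is easy to lose track of once the acronyms start flying, here is a minimal sketch in Python that encodes the nested circles of the graph as sets and checks the subset relation just described. The member labels are illustrative examples drawn from this post, not an exhaustive catalogue.

```python
# A minimal sketch of the nesting shown in the graph: DL ⊂ ML ⊂ AI.
# Member labels are illustrative examples, not an exhaustive catalogue.
DEEP_LEARNING = {"deep neural networks", "GPT / transformer models"}
MACHINE_LEARNING = DEEP_LEARNING | {
    "supervised learning",
    "unsupervised learning",
    "reinforcement learning",
}
ARTIFICIAL_INTELLIGENCE = MACHINE_LEARNING | {
    "evolutionary (genetic) algorithms",
    "fuzzy logic systems",
}

# "While DL is always AI, not all AI is DL."
assert DEEP_LEARNING <= ARTIFICIAL_INTELLIGENCE      # every DL technique counts as AI
assert not ARTIFICIAL_INTELLIGENCE <= DEEP_LEARNING  # the converse does not hold
```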

The table below shows the three interrelated types of modern AI, their respective algorithms, and the applications for which they can be used.

Policymakers, decision-makers, non-technical administrators, and staff should use the table starting with the last column and working their way back to the first. After all, what is needed in practice is an application, not a specific algorithm or AI type. In any case, when a vendor offers an AI or ML solution, further clarification should be requested to ensure it matches the application in question. The same advice applies to researchers trying to classify AI applications in one way or another.
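Since the table itself is an image, here is a rough sketch of how its logic could be encoded, reading from the application column back to candidate AI types and algorithm families. The entries are illustrative reconstructions based on the examples discussed in this post, not a verbatim copy of the table.

```python
# Illustrative reconstruction of the table's logic: application -> (AI type, algorithm family).
# Rows and entries are examples drawn from this post, not the full table.
TYPOLOGY = {
    "chatbot / customer service": [
        ("AI", "scripted dialogue flows or fuzzy-logic matching"),
        ("ML", "supervised learning for intent classification"),
        ("DL", "GPT / transformer language models"),
    ],
    "eligibility determination": [
        ("AI", "fuzzy logic or rule-based scoring"),
        ("ML", "supervised learning on past records"),
    ],
    "conversational assistant (ChatGPT-like)": [
        ("DL", "GPT models refined with supervised, unsupervised and reinforcement learning"),
    ],
}


def candidates(application: str) -> list[tuple[str, str]]:
    """Start from the application (last column) and work back to AI types and algorithms."""
    return TYPOLOGY.get(application, [])
```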

The table reveals at least three key points. First, different algorithms and AI types can create or support a given application. There is no one-to-one relation between applications and algorithms. Chatbots and customer service are excellent examples here. Second, the most sophisticated AI type, DL, can use algorithms that the table links to ML or AI. The best examples here are applications such as ChatGPT, which was created using supervised, unsupervised and reinforcement learning algorithms in addition to GPT models. However, deploying GPTs or other DL algorithms for applications the table links to ML or generic AI, while possible, might not lead to better models and applications. Third, while connectionist AI is less transparent and explainable than GOFAI, within the connectionist scheme depicted in the table, DL is the most complex, with greater opacity and less explainability than the other two. That is critical, as its deployment will demand, in theory at least, more robust governance and oversight mechanisms to ensure responsible and unbiased outputs and outcomes.
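Using the sketch above, the first point, that there is no one-to-one relation between applications and algorithms, can be demonstrated directly: a single application returns several admissible type-and-algorithm pairs, which is precisely the clarification worth requesting from a vendor.

```python
# One application, several candidate AI types and algorithm families (illustrative entries).
for ai_type, algorithm in candidates("chatbot / customer service"):
    print(f"{ai_type}: {algorithm}")
# AI: scripted dialogue flows or fuzzy-logic matching
# ML: supervised learning for intent classification
# DL: GPT / transformer language models
```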

Of course, nothing stops an AI vendor from pitching GOFAI to potential clients with little expertise. However, there is another twist here, and it comes in the form of artificial AI. No, that is not yet another type. Artificial AI happens when so-called AI applications are, in fact, supported by low-paid humans working behind the scenes in remote locations, totally invisible to end users, who might nonetheless end up glorifying AI. Typical applications here include chatbots, transcription services and virtual assistants.

I am now wondering how many of the chatbots deployed in the public sector since the late 2010s are indeed artificial AI. More research is needed, that is for sure.

Raúl

References

Charalabidis, Y., Medaglia, R., & van Noordt, C. (Eds.). (2024). Research Handbook on Public Management and Artificial Intelligence. Edward Elgar Publishing Ltd.
Vogl, T. (2020). Artificial Intelligence and Organizational Memory in Government: The Experience of Record Duplication in the Child Welfare Sector in Canada. The 21st Annual International Conference on Digital Government Research, 223–231. https://doi.org/10.1145/3396956.3396971
Yalçın, O. G. (2021, June 21). Symbolic vs. Subsymbolic AI Paradigms for AI Explainability. Medium. https://towardsdatascience.com/symbolic-vs-subsymbolic-ai-paradigms-for-ai-explainability-6e3982c6948a
Goel, A. K. (2021). Looking back, looking ahead: Symbolic versus connectionist AI. AI Magazine, 42(4), 83–85. https://doi.org/10.1609/aaai.12026
Misuraca, G., van Noordt, C., & Boukli, A. (2020). The use of AI in public services: results from a preliminary mapping across the EU. Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance, 90–99. https://doi.org/10.1145/3428502.3428513
Giest, S. N., & Klievink, B. (2024). More than a digital system: how AI is changing the role of bureaucrats in different organizational contexts. Public Management Review, 26(2), 379–398. https://doi.org/10.1080/14719037.2022.2095001
Maragno, G., Tangi, L., Gastaldi, L., & Benedetti, M. (2023). AI as an organizational agent to nurture: effectively introducing chatbots in public entities. Public Management Review, 25(11), 2135–2165. https://doi.org/10.1080/14719037.2022.2063935
Oliveira, C., Talpo, S., Custers, N., Miscena, E., & Malleville, E. (2023). Citizen-centric and trustworthy AI in the public sector: the cases of Finland and Hungary. Proceedings of the 16th International Conference on Theory and Practice of Electronic Governance, 404–406. https://doi.org/10.1145/3614321.3614377
Medaglia, R., & Tangi, L. (2022). The adoption of Artificial Intelligence in the public sector in Europe: drivers, features, and impacts. Proceedings of the 15th International Conference on Theory and Practice of Electronic Governance, 10–18. https://doi.org/10.1145/3560107.3560110
Berman, A., de Fine Licht, K., & Carlsson, V. (2024). Trustworthy AI in the public sector: An empirical analysis of a Swedish labor market decision-support system. Technology in Society, 76. https://doi.org/10.1016/j.techsoc.2024.102471
van Noordt, C., & Misuraca, G. (2022). Exploratory Insights on Artificial Intelligence for Government in Europe. Social Science Computer Review, 40(2), 426–444. https://doi.org/10.1177/0894439320980449
Tangi, L., van Noordt, C., & Rodriguez Müller, A. P. (2023). The challenges of AI implementation in the public sector. An in-depth case studies analysis. Proceedings of the 24th Annual International Conference on Digital Government Research, 414–422. https://doi.org/10.1145/3598469.3598516
Neumann, O., Guirguis, K., & Steiner, R. (2022). Exploring artificial intelligence adoption in public organizations: a comparative case study. Public Management Review, 0(0), 1–28. https://doi.org/10.1080/14719037.2022.2048685
Kuziemski, M., & Misuraca, G. (2020). AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings. Telecommunications Policy, 44(6), 101976. https://doi.org/10.1016/j.telpol.2020.101976
Cantens, T. (2023). How Will the State Think With the Assistance of ChatGPT? The Case of Customs as an Example of Generative Artificial Intelligence in Public Administrations (SSRN Scholarly Paper No. 4521315). https://doi.org/10.2139/ssrn.4521315
Selten, F., & Klievink, B. (2023). Organizing public sector AI adoption: Navigating between separation and integration. Government Information Quarterly, 41(1), 101885. https://doi.org/10.1016/j.giq.2023.101885
Sienkiewicz-Małyjurek, K. (2023). Whether AI adoption challenges matter for public managers? The case of Polish cities. Government Information Quarterly, 40(3), 101828. https://doi.org/10.1016/j.giq.2023.101828
Miller, S. M. (2022). Singapore public sector AI applications emphasizing public engagement: Six examples. Research Collection School of Computing and Information Systems, 1–24. https://ink.library.smu.edu.sg/sis_research/7332
Estevez, E., Janowski, T., & Roseth, B. (2024). When Does Automation in Government Thrive or Flounder? (Argentina). Inter-American Development Bank. https://doi.org/10.18235/0005530
Haugeland, J. (1985). Artificial intelligence: the very idea. MIT Press.
