Better “AI for Good”


While the dystopian camp perceives digital technologies as a formidable, perhaps even insurmountable threat to society, those on the other, much more optimistic side never seem to tire of repeating their almost countless benefits. The latter camp apparently has the upper hand, at least for now, as its message captures most daily media headlines, mainstream and otherwise. Doom technology scenarios occasionally take center stage when one global personality decides to warn us, once again, about the war we are about to lose should technology be left to its own devices.

Despite such opposing views, both camps share the idea that technology is just like Frankenstein's monster, a human creation that has somehow acquired a life of its own, a distinct personality and a determined will. If we are on the side that sees the glass half full, the best thing we can do is try to convince Frankie 2.0 (and later) to do the right thing. Did someone say ethics? At least digital technologies are not demanding that we also create a mating partner, as the older version of the monster did. As things stand today, perhaps they can do even that on their own, leaving us primitive humans behind from here on.

Thus, it is not surprising to see the emphasis techno-optimists place on the equation's ethical and even moral side. Frankie needs to be tamed in exemplary fashion in all of its incarnations (HAL included!), good being the operative word. ICTs for good, Blockchains for good and, of course, AI for good are thus ideas we hear almost every day. And who could disagree with any of them, all espousing such a laudable goal? While perhaps a few will openly embrace more sinister perspectives, "for good" is good for most of us.

Nevertheless, that works until we start questioning what we mean by good. I will submit that universal consensus on the meaning of “good” even for those in the optimists’ team does not exist – and probably cannot be reached by crowdsourcing or collective intelligence. More so in an era where misinformation, glaring inequality, political contention, social divisiveness and virulent nationalism permeate our daily routines already dramatically disrupted by the royal pandemic.

On the other hand, consensus on universal AI applicability has emerged. AI is now defined as a general-purpose technology. Beware, Frankie will be coming to get you soon, if it has not already. In any event, we are mixing apples and oranges here. By adding "for good" to AI, we are trying to delimit its application and make it less universal. We are, however, using a relative and perhaps even contentious concept to draw the borders. Undoubtedly, we should be able to define the overall scope of AI use. But we could do so while avoiding concepts loaded with ethical and moral connotations that can vary from one person to another. Consensus will not be easily reached, even if we add the word social to the objective. However, "for good" or "for social good" do end up legitimizing AI diffusion regardless of any sort of consensus. But that is another long story that deserves a standalone post.

A way out of this conundrum is to introduce the idea of domains. At least two are useful here: 1) areas or sectors, and 2) geopolitical location. The first refers to specific social sectors such as health, justice, education and the economy. The impact of AI on the latter has been discussed ad nauseam, the future of work being one of the main characters in the picture. AI will surely bring critical gains in productivity (do more with much less) while throwing quite a few people out of the workplace, at least in the short run. Here, the "for good" qualifier shows its limitations. Good for whom? On the other hand, health has been touted as one of the ideal sectors for AI deployment, breast cancer detection being a well-known example.

Geopolitical location takes shape via nation-states, regional and sub-regional agreements, political projects such as the EU, and a few global instances trying to balance the others with little success on the ground. Positioning within this domain is crucial and, for the most part, reflects the division between rich countries and the rest, the Global South. Do not assume the latter is a monolith. On the contrary, we find a few countries closely trailing those on top, accompanied by many countries facing dire development contexts requiring immediate attention. The critical point here is that your position in the geopolitical chain changes the "for good" connotation of AI.

Combining the two domains offers a better way of harnessing AI according to local contexts. Data is the most obvious point here. Training an AI with data from a wealthy country and deploying it in a more impoverished nation is not only "unethical" but plainly absurd. More significant is the potential of prioritizing AI deployment areas according to geopolitical location. Health again provides a good example. Recent estimates suggest that over 600,000 people die of malaria each year. Countries where this is happening thus have a clear target for health interventions. But most countries do not face such a challenge, certainly not the rich nations.

In sum, recognizing that there is no level playing field out there is essential if AI is really going to be a force for "good." Prioritizing AI interventions by sector and geopolitical location is crucial. The issue then is not AI itself but rather how we ensure such prioritization is done in an open, transparent, inclusive and democratic fashion.

Do not blame good old Frankie for the failure to do this. After all, Frankie is just fiction. However, AI is a stark reality demanding prompt community response.

Cheers, Raúl

 
