The proliferation of top, best, fails and prediction posts on almost any topic is now a staple of the annual transition from one year to the next. As the new year sees the light of day, we seem compelled to take stock of the previous 365.25 days and dig deeper into the recent past. Regular note-taking, logging and recording are, among other things, part of the task. The end of a decade calls for even more elaborate efforts, given the longer period. A few attempts are more ambitious still and, for example, recommend the 100 books one must read before dying. A bit over the top, perhaps. In any event, one could spend a whole year just trying to catch up with all these posts. A better strategy is to focus on areas of interest or specialization. Books, films, social sciences and technology capture my attention. I have even published my own annual best-films post since 2009. So there you go.
A quick review of the structure of some of these posts reveals three traits. First is the number of items included: five or ten seem to be the preferred flavors. The more adventurous go for 20, 25 or even 50 in a few cases. Sure, the longer the list, the higher the risk that intended audiences switch to the next post, particularly in an era of short attention spans and tweet-length communications. In any case, it is hard to find posts with, say, three, seven, 11 or 17 items. Thirteen might be ideal for best-horror-film posts, I suppose. Regardless, being divisible by five seems to be a requirement. Remembering the top five or top ten is much easier than the top three or top seven. Maybe?
Second, most rank items in descending order, the best being on top. Picking winners, and by default losers, is part of the chore. Presentation styles diverge, however, as many opt to start with the last item and move upwards towards the top, imitating a suspense film. And third, the items selected are, in most cases, not interconnected or aggregated by category. Most film posts do not use genre to group choices. Some genres are, by tacit agreement, excluded, such as B-movies and documentaries. Separate posts exist for such beasts but seem to be a notch down. The same goes for books, with fiction and non-fiction being the leading contenders but genres disregarded altogether. What about technology?
A recent MIT Technology Review post pinpoints five ways to make AI a greater force for good. Issues start with the title itself, which assumes AI is already a force for good. I am not sure there is agreement on such a view across the board. Many researchers and practitioners are working hard to contain the “bad” and ugly side of AI. In a previous post, I noted the perils of using “good” as an AI objective. After taking stock of the 2020 AI developments, permeated by the pandemic and the killing of George Floyd, the author lists five themes that she labels “hopes” for 2021. They include:
- Reduce corporate influence on research
- Prioritize comprehension over prediction
- Empower marginalized researchers
- Foster participatory AI development
- Codify guard rails into regulation
The number five once again features prominently. Trait number one: checked. I am not sure the author’s list is ranked in any shape or form; all these areas are perhaps equally important. Trait number two is thereby not operational here. Finally, the hopes listed are seemingly disconnected and lack any sequencing, at least in how the post presents them. The last trait is thus fully at work here. Two out of three, final score.
Nevertheless, posts that deal with socio-economic and political issues should be held to a different standard. While ranking themes should be avoided, for the most part, interconnecting and sequencing them should be required whenever possible. The MIT post falls short in this regard. I will argue that the five themes highlighted can be aggregated under three distinct headers: 1. policy (themes 1 and 5); 2. governance (themes 3 and 4); and 3. domain focus (theme 2). Moreover, sequencing is feasible, as policies (not to be confused with regulations) could incentivize governance and domain-focus issues.
The author calls for more public investment to counter the private sector’s dominance of AI research. While this might be difficult to implement in practice (wealthy digital monopolies can financially outbid governments in the quest for research dominance, as has in fact happened in the UK), it is a public policy issue in the same way that calls for formal regulations are. The same goes for the hopes of opening the field to other researchers and giving voice to those impacted by AI deployments. The domain focus is also interconnected, as prediction is the most critical AI feature for many businesses focused only on the bottom line. Telling them to drop the goal of super-profits could be a tremendous challenge unless other incentives are found.
The last item on the list is perhaps the most important if we factor in the big push by tech companies, which almost monopolize AI research, to promote the self-serving idea of self-regulation. I disagree, however, with the idea that self-regulation is a fiction. On the contrary, it has been used as a real weapon to avoid public regulation and to legitimize AI’s inexorable development in the name of unstoppable innovation.
The tide seems to be turning now. I hope so.