For the last 30 years, the seemingly endless number of so-called technology revolutions invading our expansive yet decaying landscape has been accompanied by a proliferation of wide-ranging publications, usually playing catch-up while trying to predict the future on the spot. That has certainly been the case since the official birth of the Internet. In the early 1990s, Krol’s The Whole Internet became one of the first global best sellers in this arena, translations included. Looking at my aging hard copies of the book, it is curious to see that its first two editions barely mentioned the World Wide Web, which was just emerging at the time. I would not recommend the publication to anyone under 30 unless they are studying Internet archeology. A couple of years later, Negroponte’s Being Digital became even more popular, taking a different approach from Krol’s book. Negroponte was undoubtedly visionary but also targeted the business sector. In any event, the speedy resurgence of AI has not been immune to this phenomenon. There are so many AI publications, academia included, that keeping track of them is almost a full-time job.
Taking a stratospheric view of this scenery, it is possible to identify three overall publication types. The first and perhaps most evident comprises technical publications, those spelling out the complex nitty-gritty of AI technology. Here, we can find two subtypes. On the one hand are hardcore technology books demanding a priori knowledge before one can read the first page. On the other hand are those trying to simplify the thematic area by explaining to average earthlings how the seemingly incomprehensible and almost magical AI works, if at all. Needless to say, the audiences for these two technical subtypes are starkly different.
The second category falls within the human rights agenda that has dominated the global geopolitical space since the late 1970s – shifting away from colonialism and other forms of national liberation prevalent at the time and coinciding with Neoliberalism’s meteoric rise. Historically, such an agenda has prioritized civil and political rights, thus leaving economic, social and cultural ones on the back burner. No wonder economic inequality has soared in the last two decades while most human rights organizations looked the other way. Be that as it may, AI also brought innovation into this agenda by propelling the development of so-called “ethical and responsible” uses of the omnipresent technology as a much-needed feature to ensure the protection of human rights (civil and political, that is). The same large, monopolistic companies massively deploying AI have been peddling this approach to avoid regulation and thus preserve their autonomy. Although I do not have any hard data in hand, publications along these lines seem to lead all others, at least for the time being – and until AI regulation starts; that, in turn, will trigger another tsunami of publications on AI regulation.
The last category covers the socio-economic sphere. Here, topics range from jobs, productivity and national comparative advantages to education and training, inequality and social exclusion. Economists and, to a lesser extent, sociologists and anthropologists lead this area. As with the first, this category also comprises two subtypes that follow the same pattern. Here, AI algorithms are replaced by complex mathematical models that other publications within this rubric simplify for the average reader. Few of these publications reference economic, social and cultural human rights. It is indeed curious that so few invoke Article 25 of the Universal Declaration of Human Rights in the era of a stubborn and deadly pandemic. Article 19 rules – still unchallenged!
Descending to the Troposphere shows that the boundaries across these three categories are fuzzy and resemble Mandelbrot’s fractal geometry. Indeed, most authors within a given rubric easily venture at their own risk into the other two. That is especially true when predicting the future and suggesting human action and interaction to steer the AI warship in a given direction. Discussions on power and governance are the most common themes here. So technologists and engineers get to make policy recommendations. On the other hand, very few lawyers, economists, and sociologists dare to opine on Convolutional Neural Networks. The road is one way only, reflecting the dominance of a technology-centered perspective in this territory. The three rubrics thus have a clearly defined hierarchy. In any event, very few publications aim to bring these three categories under one umbrella.
Crawford’s Atlas of AI is a bright rosebud in this technology-plagued landscape. Against the grain, the author challenges the status quo by arguing that AI is neither artificial nor intelligent. However, questioning the dominant AI rhetoric demands a new approach that can help unveil its real nature. Based on her personal experience and the theoretical underpinnings of Science and Technology Studies (STS), political philosophy and law, Crawford opts to deploy a cartographic perspective to visualize the AI tentacles holding planet Earth hostage.
While making no claims to universality, the author argues that AI is a multi-dimensional phenomenon that comprises economic, political, cultural and historical processes. In this context, “artificial intelligence is a registry of power” (pg. 8) and not just another general-purpose technology. This reminds me of Heidegger’s claim that the essence of technology is nothing technological. But, of course, many other technologies could also be studied from this same perspective, especially if we were to take yet another look at the several industrial revolutions of the last three centuries. The perspective is thus not unique to AI – albeit the latter has its own idiosyncrasies. Not surprisingly, the book promptly claims that AI is yet another extractive industry that, unlike previous ones, is more than eager to conquer the whole planet via computational means.
Building on this framework, Crawford uncovers six distinct and interconnected landmarks. Earth, the first stop, brings together three layers. First is rare earth mineral mining, which provides the much-needed metals and materials for the electronic devices we use and overuse daily. Energy is the second layer, directly linked to Cloud Computing and its almost insatiable need for energy resources on a 24/7 basis. Finally, the logistical layer supplements the former two by creating a global transportation network that connects crucial regions of the churning world economy to deliver the goods expeditiously. Together, all three layers dent planet Earth, depleting its resources while contributing to anthropogenic Climate Change. In any event, these three earth layers demonstrate that AI is more than just another advanced technology. Instead, it is the newest member of the old team of extractive industries.
Labor pops up next. Workplace automation is undoubtedly not new. It has been at work for over two centuries and is one of Capitalism’s core traits. Productivity is the word economists use to describe it. AI introduces more sophisticated ways of tracking labor performance using algorithmic management, advanced sensors and digital video. Every second the worker spends at the workplace can thus be monitored while setting optimal and indisputable time requirements for completing specific tasks. Laggards and shirkers can be rapidly detected and penalized or fired almost instantly. Furthermore, capitalizing on the globalization of the Internet, AI can also recruit workers regardless of location to undertake specific microtasks (parts of broader processes unknown to them) and receive micropayments as rewards. Workers in the Global South who seem totally disconnected from the AI world are thus part of its expanding reach. A subtle little feature of this is what we can call Artificial Automation, where low-paid workers are used to patch glaring gaps in existing AI platforms – Amazon’s Mechanical Turk is the best example. In any event, Crawford sees this globalization process as positive, as it provides fertile ground for organizing workers on a global scale. That could be the platform to push back against yet another palpable extractive process.
The third landmark is data, perhaps the most widely cited and discussed in the AI arena. After all, data is the lifeblood of ML algorithms. Yet data extraction and collection predate the AI resurgence. Nowadays, AI has made collecting all sorts of data a sheer necessity, privacy and ethical issues notwithstanding. The author says, “data is more commonly seen as something that can be taken at will, used without restriction, and interpreted without context. There is a rapacious international culture of data harvesting…” (pg. 118). The open capture of the Commons as an indisputable mandate is another trait of AI’s inherent extractive hunger.
Taking a political economy stance, we can then say that Crawford’s first three interconnected topographical landmarks, Earth, Labor and Data, pinpoint the elaborate production and distribution processes required to generate the overall AI topography. Indeed, they corroborate the author’s claim about AI’s multi-dimensional complexity, which transcends the seemingly strong and overwhelming technological personality it displays when first seen. Labor and Data are critical inputs to the production of AI and thus deserve their own chapters. However, the Earth landmark is more of a mixed bag, as it includes both production and distribution (the logistical layer). Moreover, one could argue that labor is also part and parcel of such processes. And data, after all, resides in energy-hungry cloud locations around the globe. Here the metaphor falls short and the conceptual structure is weakened – probably because Crawford wanted to highlight AI’s environmental impact first and foremost. At any rate, missing from the analysis is the essential role global capital investment and financing play in creating and sustaining these landmarks.
The following three landmarks in the book, Classification, Affection and State, can be positioned within the consumption and exchange spheres. Here we move to how AI is used and made available for sale to any interested party.
Classification is perhaps one of the best-known AI features, usually presented as the capacity to predict where individuals could fall within specific pre-established categories. No worries, we are told: the source data has been anonymized. The problem is that, regardless of the source, ML algorithms put people into boxes. Such a practice is not new; colonial states and others used it to make decisions and strengthen their power. However, once people are classified in one way or another, changing their status is challenging.
Affection is certainly more recent and perhaps the most insidious. Emotion recognition systems are now widely used, especially in hiring and recruitment processes. The problem with such systems is that they are based on dubious theories that assume emotions are biological rather than constructions of the human mind. Furthermore, and not surprisingly, they totally ignore the fact that facial expressions might have very different meanings in diverse cultural contexts – more so in the case of cultures that have existed for centuries without being influenced by Western values. Regardless, Affection is a growing AI area, despite the above limitations.
The State landmark has two components. The first comprises AI for surveillance and military purposes, used to sustain global superiority against emerging rivals. Here, national states acquire state-of-the-art AI technologies from the private sector and implement them for “national defense” purposes. The second component is what Crawford calls the “outsourced state” (pg. 192), which emerges when states, national, regional and local, acquire AI to handle non-military issues such as migration, refugee inflows, crime and the provision of justice. AI platforms here become threatening tools for populations that usually have little to no recourse for redressing any adverse outcome. They are powerless indeed.
While Classification and Affection are more process-focused, State explicitly introduces human agency into the landscape, just as Labor did. Here, public and private actors address specific global, regional, national or local challenges and use AI as the final, indisputable solution. However, the same could be said about Classification and Affection. Looking at the specific ways these are deployed is critical, but it should also be complemented by examining the actors and agents pushing such solutions despite their glaring faults and known failures.
Overall, Crawford has done an excellent job of showing that AI is more than just a new, if somewhat mystical, technology. Instead, it comprises economic, social, environmental, political and technological components that work tightly together to propel its incessant growth on a global scale. As a result, AI is not only a global extractive industry but also power-hungry.
Power is thus the last chapter, pondering how we should respond to AI’s relentless advance, which is already negatively impacting some of us. To date, the most common response has been the development of “ethical and responsible” AI, which has, in fact, been created, steered and financed by the same actors busily deploying AI all over the place, no questions asked. I fully agree with the author that if ethics is to be brought into AI as a technology, then it should also be brought into all six of its topographical landmarks depicted in the book. I will go even further and argue that it should be introduced into all those areas of human development that can directly impact individuals and communities. As philosopher Enrique Dussel has argued, ethics can become essential for developing critical perspectives, including decolonization.
Crawford instead calls for the “democratization” of AI to curtail its overwhelming power. I have heard that before, however. While she offers some suggestions and hints on the approach to take, the issue of human agency is entirely missing. One could assume, for example, that the Labor landmark should be part of the democratization process. Who else needs to be involved? How do people get organized? What are the governance mechanisms required to make this work? How do we, in fact, confront those with perhaps too much power, financial resources, and the ears of many who allegedly make decisions in our name and for our sake? Not a simple topic, and one that perhaps requires a sequel publication. The Power Web of AI, maybe?