The idea of a technological singularity has been around for over 60 years. While initially confined to closed circles of experts, it has gained a lot of ground in the race to the future, which, according to its core tenets, will be devastating for us, poor dumb humans. I probably first heard about it in the early 1990s but did not pay it any mind. Then, in the early 2000s, I attended a presentation by Ray Kurzweil at a large international gathering. Unexpectedly, a colleague introduced me to him. We exchanged a few words, but when I asked a question, he suggested I keep an eye on the release of his upcoming book before walking away. I did follow his advice but never got around to reading the publication. I was living my own singularity, working in international development, fully aware that most humans at the time had never used or even heard of the Internet while enduring harsh socio-economic and even political environments. That was indeed a development singularity of sorts. Adding another one to my plate seemed unmanageable – and certainly not fair to them.

The resurgence of AI in the early 2010s, thanks mainly to the success of Machine Learning (ML), added plenty of logs to the singularity fire. Renowned scientists such as Stephen Hawking, a brilliant mind by all measures, started toying with the idea, amplifying its ominous long-term predictions. I would love to hear Hawking comment on the latest AI developments spearheaded by the powerful GPT-based Large Language Models (LLMs), which some claim are the first step in the unavoidable AGI (Artificial General Intelligence) race. In any event, over half a year ago, another group of scientists, entrepreneurs and personalities published a letter demanding a six-month pause on AI development. While mainstream media made a big deal about it, as expected, I do not think any action was taken by the companies leading the AI race. Not coincidentally, the latter are primarily members of the ultra-exclusive Big Tech club, informally founded over a decade ago.

Singularity proponents share a lot of ground with techno or digital dystopia followers who, as I see it, have a perhaps more optimistic take on our tech-driven future. Indeed, we will probably survive intelligent machines; the real danger is the ensuing chaos that threatens the socio-political order prevailing today. Of course, such blanket statements cannot be endorsed uncritically in a world where country diversity and differences are humongous. It is a bit more complex than usually assumed. In fact, critical AI researchers are already probing precisely that complexity, thus moving the agenda to new territories, perhaps much greener but also full of pitfalls and booby traps. Like their techno-utopian counterparts, “doomers” and dystopians must also be domesticated.

Undoubtedly, not all doomers are created equal. Different breeds can be found out there. That is certainly the case regarding AI’s role in the ongoing global disinformation pandemic. Here, doomers are single-mindedly focused on one theme, disregarding all others that apparently fall beyond their expertise or interest. The human race will indeed survive, but chaos and anarchy will thrive in their single-domain world. Not surprisingly, a few months after the release of ChatGPT, predictions about its insidious and inevitable impact on disinformation popped up all over the place, from mainstream media to academics and experts primarily based in the Global North. Another blanket statement looming on the horizon.

While researching the topic, I had the chance to interview several Global South disinformation experts and academics. When I asked them about AI’s impact on Global South disinformation processes, all agreed it would be minimal. While the Global South is no monolith, the experts argued that development, language and cultural factors must be considered. And perhaps more importantly, disinformation has been around in that set of countries for decades and thus has its own dynamics. Recall that mainstream disinformation theory tells us it all started in 2016, when the term “fake news” swept the world. More blanket statements in the name of all of humanity. However, no one bothered to ask us.

In any case, solving such an epistemological impasse requires that we address two critical issues. First, developing a suitable analytical framework capable of grasping local contexts and country variations around the globe. Second, understanding how AI works vis-à-vis disinformation generation. The Information Disorder framework (IDF) has been enshrined as the de facto analytical tool. It introduces three categories of false information (misinformation, disinformation and malinformation, terms that cannot be adequately translated into most other languages) while arguing that the intentionality of the agent creating it delivers the knockout punch.

By default, that ignores local context from the very start. Malevolent agents can access different types of resources, ranging from financial and human to networks and communication channels, depending on location and geopolitical context. As comprehensive research completed by Global South researchers has suggested, WhatsApp is one of the main disinformation channels in low- and middle-income countries – and not the usual Global North suspects. Consequently, the dynamics of disinformation production, distribution and consumption are indeed distinct and demand special attention. They cannot be subsumed under a Global North perspective, nor should they be the target of ambitious blanket statements. Studying the relationship between agents and the means to generate disinformation in a given context is therefore crucial. The IDF is thus limited and should be either enhanced or replaced altogether. I will share a more detailed critique in a future post – alongside a new framework.

To understand AI’s role in disinformation, it is best to start by having a common understanding of how it has evolved in the last decade or so. That entails looking at at least three factors: 1. the role of data; 2. the output or outputs generated; and 3. its overall scope.

The second part of this post will deal with all of that – and a bit more.

AI Disinformation – I