AI Philosophy

That Neural Networks (NNs) are the most successful AI in history is indisputable. The resounding success of Large Language Models (LLMs) has made that all the more evident and incontrovertible. Curiously, most people do not seem to remember that NNs predated the term “Artificial Intelligence” by over a decade. In 1943, a neuroscientist and a mathematician joined forces to develop the first conceptual NN, one that attempted to imitate the functioning of the human brain rather than, as Turing proposed, mimicking human behavior directly. Fifteen years later, a psychologist funded by the US Navy built the first NN machine, which he called the perceptron. It was also the first learning machine, created a year before the term Machine Learning (ML) was coined.

Meanwhile, core computer scientists, led by McCarthy and Minsky, were having their own private party at the 1956 Dartmouth AI workshop. Interestingly, NNs were created and initially developed by researchers who did not necessarily have a computer science background. Maybe that is why invitations to Dartmouth never came their way.

In any case, here we start to detect the emergence of two competing camps: symbolic and connectionist AI. However, that did not translate into a competitive match between computer scientists on one side and neuroscientists and company on the other. Not at all. Instead, the core issue was how AI could learn when confronting any given environment. Pasquinelli’s book, The Eye of the Master, traces this neatly while avoiding biological determinism. Accordingly, both the 1943 paper and the Dartmouth gang relied on deductive logic to program their allegedly intelligent machines. The goal was to find a set of general rules that could be applied to any particular situation, which is exactly how old-fashioned programming used to work and still works.

The perceptron took the opposite approach, using inductive logic: it deployed pattern recognition, driven by statistical inference, to learn from and about the environment. Its first version was a simple NN with three layers and 4,096 parameters, which at the time seemed huge. Today, that sounds more like a joke, as top-notch LLMs demand billions of parameters, not to mention hundreds of layers. At any rate, NNs could, in principle, recognize patterns and classify inputs accordingly. Thus, they learned and, on that basis, could process additional data and learn more as needed.
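For readers who want to see the inductive move in code, below is a minimal sketch of the classic perceptron learning rule in Python. It is an illustration, not a reconstruction of Rosenblatt’s electromechanical Mark I; the function name and the toy AND dataset are assumptions chosen for brevity.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """A sketch of the single-layer perceptron learning rule (Rosenblatt, 1958)."""
    w = np.zeros(X.shape[1])  # one weight per input feature
    b = 0.0                   # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0  # threshold activation
            error = target - pred                     # -1, 0, or +1
            w += lr * error * xi                      # nudge weights toward the target
            b += lr * error
    return w, b

# Toy data (an assumption for the demo): the logical AND of two binary inputs,
# which is linearly separable and hence learnable by a single-layer perceptron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w, b = train_perceptron(X, y)
for xi in X:
    print(xi, 1 if np.dot(w, xi) + b > 0 else 0)
```

The point of the sketch is precisely the contrast with deductive programming: no general rules are written in advance; the weights are adjusted, example by example, until the machine classifies its environment correctly. It also hints at the limitation the symbolic camp would later exploit: a single-layer perceptron can learn AND but not XOR.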

The symbolic camp, however, was not ready to concede any territory. At the end of the 1960s, Minsky et al. published a book detailing the apparent limitations of the perceptron in all its versions, which had the desired effect: funding for connectionist AI dried up. Undoubtedly, that contributed to the first major AI winter, which commenced in 1974. Nevertheless, AI research did not come to a complete stop. Universities with ample financial endowments and in-house AI expertise, including Minsky’s own, continued the work.

That short history shows that AI development has always been intertwined with philosophy. Issues regarding epistemology, ontology, and ethics have been its faithful companions despite the field’s many apparent winters.

One of the first philosophers to take a critical approach to AI, Hubert Dreyfus, published his first crucial assessment in 1965, using alchemy as an allegory for the field’s status. He identified four AI areas, namely game playing, problem-solving, language translation, and pattern recognition, all struggling at the time. Somehow, the perceptron was not included, suggesting he was targeting symbolic AI first and foremost. Fundamental epistemological concerns were at stake regarding how humans learn and how AI could replicate such a process. Dreyfus proposed a typology of “intelligent activities” comprising associationistic, non-formal, simple formal, and complex formal activities. He concluded that simple formal activities were the most promising, while non-formal activities, which cover pattern recognition, seemed to have a dubious future, an accurate assessment if we stick to symbolic AI exclusively.

Today, “non-formal” activities rule the world, thanks to Deep Learning (DL) and the triumph of connectionist AI over its long-term rival. However, such an outcome does not imply that AI’s philosophical issues have been solved; quite the opposite, in fact. Throughout his career, Dreyfus, a Heidegger scholar, raised critical epistemological and ontological issues about AI that are still under the microscope today.

Ethical issues have been placed on the philosophical front burner in this century. There is a vast literature on the subject, so I do not need to dive into that moral pool. Regardless, two issues have to be raised. First, AI’s ethical turn in the last few years has been partly triggered by Big Tech, old and new, and its concerted efforts to avoid regulation. Self-regulation is a safe haven that, in Big Tech’s view, allows it to control the perimeter efficiently. Philosophers benefitted from this, as job openings popped up all over the place. Ethical frameworks became almost viral at some point, with hundreds freely circulating in cyberspace. Nowadays, we hear more about “responsible AI,” which is the other side of the same coin and thus raises as many questions as the “pure ethics” focus.

Second, the ethical approaches deployed to study AI’s perils are pretty much the usual, mainstream ones. While different schools of ethics do exist, these, for the most part, start from formal principles presumed to have universal validity. However, that ignores two critical issues, as highlighted by Dussel’s work on the topic. The first is the material conditions of life, which vary dramatically around the globe; those usually excluded from global processes, for example, are never part of the ethical discourse. The second is the feasibility of deploying ethical principles in particular contexts. Ethical AI discourses and frameworks ignore both and are thus open to critical assessment. Introducing them into the fray opens the door to a potential decolonization of ethics.

The case of AI is significant here. Think about the data collected and used by Big Tech to generate its AI models. The exclusion of vast portions of the population from such data is well known, leading to biased and discriminatory results amid claims of universality. And those excluded live under conditions most in the West have never experienced firsthand. For such populations, the feasibility of changing their current status via ethical appeals is nearly zero. No one is really listening, and when they do, they switch to another channel.

Epistemological, ontological, and ethical issues are closely interrelated. For example, if someone argues that AI learns in the same fashion as humans, then, ontologically, it can be considered an alternative form of human and intelligent life, and we then need to attach an ethical framework to it to ensure potential harm is minimized. If, on the other hand, we assume it is just a series of semi-autonomous computational agents capable of undertaking a plethora of human chores, then the focus shifts to its production, distribution, and consumption. Ethical frameworks and principles then attach not to the dumb agents but to the humans creating them and selling them as systems of universal applicability. There is, therefore, no AI ethics per se, but rather ethical principles for those running the AI show in the name of unfettered innovation and the global public good.

Raúl

 
