While the concept of an algorithm has been around for centuries, the same cannot be said about algocracy. The latter has recently gained notoriety thanks in part to the renaissance of Artificial Intelligence and Machine Learning (AI/ML) and is frequently used to describe the increased use of algorithms in decision-making and governance processes. Indeed, the so-called Singularity could be seen as an extreme and seemingly irreversible case of algocracy, one where humans lose the capacity to control superintelligent machines and might even face extinction. Whether that will ever happen is another matter.
A more plausible scenario takes place when humans and human institutions rely on algorithms to make critical decisions without any reservations. This is happening today – the quasi-dictatorship of algorithms. In this context, the concept of algocracy seems to reify algorithms as they apparently have a life of their own and can single-handedly call most if not all shots. We, humans, seem to be content with sitting on the fence, dutifully enjoying the view.
We thus need to look more closely at algorithms themselves to understand this curious state of affairs a bit better. So what is an algorithm, really? Compared to blockchain technologies, algorithms lag far behind when it comes to online guides and digital videos explaining the concept ad nauseam. So let us not go there. Instead, let me use a non-digital example to illustrate the idea.
Solving complex puzzles
I still vividly recall the Rubik’s Cube boom of the last century. While I never actually bought one, several of my friends did. I was challenged to solve the puzzle to demonstrate my level of intelligence. I decided to confirm I was indeed irremediably dumb, so I never went for it. Even if one were to solve it once, the subsequent challenge was to replicate the success quickly and thus salvage one’s genius reputation. When it came to board games and puzzles, chess was (and still is) my cup of tea. Moreover, I had already learned how to code and was sure the solution would eventually be programmed.
The original cube had six colors, each with nine squares. The goal was to take a scrambled cube and, by rotating its movable faces, arrange it so that each side showed a single color. There are several ways to solve this puzzle. But of course, the ultimate goal is to minimize both the number of rotations and the amount of time required. It turns out that any scrambled cube can be solved in at most twenty moves, a bound known as God’s Number; a procedure that always finds such an optimal solution is called God’s Algorithm and is in fact used to tackle similar puzzles. Moreover, the cube can now be unscrambled in less than one second thanks to robotics and digital algorithms.
We now have sufficient ammunition to define an algorithm in simple terms. An algorithm comprises a finite series of effective and efficient sequential commands designed to tackle a specific task under certain given or initial conditions.
In the case of the Rubik’s cube, the mission is to unscramble the cube. The cube itself provides the requirements: a small tangible object with six colors and nine squares per color. The algorithm developed is effective, as it achieves the goal, and comprises a limited number of sequential commands that can be completed efficiently in terms of time.
The sorting algorithm is one of the first taught to most computer programming students. I soon learned that there are multiple ways to sort a given set of text or data, each with its own named algorithm. It all depends on the initial conditions and the input provided. Generally, the creation of an algorithm entails two distinct steps. The first consists of devising the strategy to tackle the task at hand and determining the best approach. The result of this process is called pseudocode which, in a nutshell, is a high-level and abstract set of instructions required to achieve the set goal. Step two consists of translating the pseudocode into actual computer code. To write the latter, one must have already mastered some programming language (such as C, C++, Java, Python, Solidity, etc.).
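To make the two steps concrete, here is a hypothetical sketch using one of the simplest sorting strategies, insertion sort: first the pseudocode, then its translation into Python (one of the languages mentioned above). The function name and details are mine, chosen for illustration only.

```python
# Step 1 (pseudocode, language-agnostic strategy):
#   for each element x after the first:
#     shift every larger element one slot to the right
#     insert x into the gap that opens up
#
# Step 2: the same strategy translated into Python.
def insertion_sort(items):
    """Sort a list in place and return it."""
    for i in range(1, len(items)):
        x = items[i]
        j = i - 1
        while j >= 0 and items[j] > x:
            items[j + 1] = items[j]  # shift the larger element right
            j -= 1
        items[j + 1] = x  # drop x into the gap
    return items

print(insertion_sort([5, 2, 9, 1]))  # → [1, 2, 5, 9]
```

Note how the pseudocode carries the strategy while the Python version worries about indices and loop bounds; the two steps demand rather different skills.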
Four points are worth highlighting here. First, the division of labor between strategy and actual code is significant, as one does not necessarily need to be an expert programmer to come up with pseudocode, especially if the targeted task is part of one’s daily work. Most seem to think that only programmers can generate pseudocode. Not exactly. Just think about all those well-known pop musicians who compose songs without knowing how to read or write music. Sure, they will eventually need “programmers” to write down the music so it can be arranged, recorded and performed in public. But the music’s copyright still belongs to the composer. This same division of labor also opens the door to the solution-in-search-of-a-problem conundrum if the programming part of the equation is allowed to lead the pack relentlessly.
Second, actual computer programming generally involves adding other algorithms to the mix. In the sorting example, one might need first to run a “cleaning” algorithm to ensure the integrity of the input data and then add at least one more to display the final output. Algorithms rarely run standalone. Rather, they seem to be social animals demanding interaction with others. Computer programs comprise a series of algorithms that interact to yield the desired set of outputs. Such interactions add complexity and increase the potential for errors.1
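A minimal sketch of that composition, with hypothetical helper names of my own: a cleaning step that discards invalid input, the sort itself, and a display step that formats the result.

```python
def clean(raw):
    """Keep only the entries that can be interpreted as numbers."""
    valid = []
    for item in raw:
        try:
            valid.append(float(item))
        except (TypeError, ValueError):
            pass  # quietly drop anything that is not a number
    return valid

def display(values):
    """Render the values as a single comma-separated line."""
    return ", ".join(f"{v:g}" for v in values)

# Three algorithms chained together: clean, then sort, then display.
raw_data = ["3", "oops", 1, None, "2.5"]
print(display(sorted(clean(raw_data))))  # → 1, 2.5, 3
```

Each piece is trivial on its own; the subtleties (what does "invalid" mean? should bad input be dropped silently?) live in how they interact.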
Third, humans are directly involved in the overall process. While mentioning this might seem tautological, two critical issues arise here. 1. Algorithms are not static, as new technological and other developments might introduce new ways of addressing a particular task. Innovation plays a part here. And 2. Coding by humans opens the door to biases, errors and bugs which together can generate unwanted or even harmful results. Something is thus lost in translation. More often than not, finding such pitfalls becomes a complex operation in itself. But they are there for sure.
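As a contrived illustration of how little it takes, consider a function meant to average a list of scores; a single off-by-one slice silently drops the last score and produces a plausible-looking but wrong result.

```python
def average_buggy(scores):
    # BUG: the slice stops one element early, so the last score
    # never enters the sum even though the count is correct.
    return sum(scores[:-1]) / len(scores)

def average_fixed(scores):
    return sum(scores) / len(scores)

scores = [80, 90, 100]
print(average_buggy(scores))  # ≈ 56.67, quietly wrong
print(average_fixed(scores))  # 90.0
```

The buggy version raises no error and returns a number in the right ballpark, which is precisely why such pitfalls are hard to find.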
Finally, note the fundamental importance of the initial conditions for developing an algorithm. In the case of the Rubik’s cube, the initial conditions are static, as we are dealing with a rather small physical object. Larger cubes with more squares are available, but in the end we can only handle so much. While facing distinct engineering constraints, algorithms running on digital computers do not face such physical limitations. Undoubtedly, they are much more powerful at addressing complex tasks. But they still live off what is fed to them by humans and are not picky about it.
In the next post, we will look at the role AI/ML has played in the evolution of algorithms and then turn to a brief overview of algocracy.
Endnotes
1. The development of computer programs is usually based on software specification requirements, not just pseudocode.