Algorithms and Algocracy – II

In the previous post, I provided a simple definition of an algorithm and then explored its use in the digital world. While algorithms live off the inputs they are fed, digital programs such as mobile apps and web platforms comprise a series of algorithms that, working in sync, deliver the desired output(s). Algorithms sit between a given input and the expected output: they take the former, do their magic and yield the latter.
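As a toy illustration (the task and function here are my own, not taken from any particular program), consider an algorithm in miniature: input goes in, a fixed recipe runs, output comes out.

```python
def median(numbers):
    """A small algorithm: take a list of numbers (input),
    follow a fixed recipe (sort, pick the middle), return the result (output)."""
    ordered = sorted(numbers)      # step 1: sort the input
    mid = len(ordered) // 2        # step 2: locate the middle
    if len(ordered) % 2 == 1:      # step 3: odd count -> the middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count -> average the two middle values

print(median([7, 1, 5, 3]))  # -> 4.0
```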

There is a direct relationship between the complexity of the planned output(s) and the required coding effort. The latter is usually measured by the number of lines of code in a given program. For example, Google is said to have over 2 billion (2×10^9) lines of code supporting its various services. You certainly need an army of programmers to create, manage and debug such a large code base.

I quickly learned that coding a particular task was much more complex than I had initially thought. While writing the actual algorithm was not that complicated, ensuring that the input could actually be processed and that the output was returned in a user-friendly format demanded quite a bit of work. In fact, I usually spent much more time trying to anticipate all the edge cases involved in effectively processing a given input, as the sketch below suggests.
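To give a flavor of that extra work, here is a hedged sketch of the kind of input checking that ends up surrounding even a trivial algorithm such as the median function above (the particular checks are my own illustrative choices):

```python
def median_safe(raw):
    """The 'real' work: guard the algorithm against inputs it cannot process."""
    if raw is None or len(raw) == 0:
        raise ValueError("Need at least one number to compute a median.")
    try:
        numbers = [float(x) for x in raw]   # coerce things like the string "3.5"
    except (TypeError, ValueError):
        raise ValueError("All inputs must be numeric.")
    return median(numbers)  # the actual algorithm is only one line of the story
```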

At the same time, this revealed that not all tasks could be easily programmed, especially those involving large volumes of text or images. But the AI/ML renaissance changed this for good.

Machine Learning

Recall that the goal of the original 1950s Perceptron, one of the first ML algorithms, was to recognize images using a simple neural network. Success was elusive back then; today, image recognition is one of the shining stars of AI/ML. At least two things have changed since. First, neural networks have improved substantially, as reflected in the rapid development of deep learning and backpropagation algorithms. As mentioned before, algorithms have their own history and thus continuously evolve.
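As an aside, here is a minimal sketch of Rosenblatt-style perceptron learning (a toy Python/NumPy version with made-up two-pixel “images,” nothing like the original hardware implementation):

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights for a binary classifier: predict 1 if w·x + b > 0, else 0."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Rosenblatt's rule: nudge the weights by the prediction error
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Toy data: learn the logical AND of two binary "pixels"
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([(1 if x @ w + b > 0 else 0) for x in X])  # -> [0, 0, 0, 1]
```

The rule simply nudges the weights whenever a prediction is wrong; it can learn any linearly separable pattern, which is also where the single-layer approach hits its well-known limits.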

Second, and perhaps more importantly, (most) AI/ML algorithms are now fed millions of images, data points or text samples as initial inputs, thanks to pervasive digitization and the subsequent emergence of big data. Instead of having a group of programmers try to code every possible case, algorithms can now analyze millions of data points or images to find patterns and anomalies and make predictions. How does this work?

Simplifying a bit, the original input data is randomly split into two sets, one of which is used for “training” purposes. ML algorithms then generate a model from the training data, which is subsequently tested on the unused part of the data. The results might show a good fit or diverge substantially. In the latter case, the process can be repeated using a different data split, and so on. The idea is to minimize such differences and increase the accuracy of the resulting model.1 Since no model is 100 percent accurate, programmers must trade off bias (underfitting) against variance (overfitting), a well-known statistical conundrum. Clearly, false positives are a distinct and real possibility in the ML realm and should thus be openly stated and addressed. So why are these fallible algorithms deployed in the real world and used as oracles that seemingly cannot be challenged?
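Before turning to that question, here is a minimal sketch of the train/test cycle just described, using scikit-learn (the library, dataset and model choices are mine, purely for illustration):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # ~1,800 small images of handwritten digits

# Randomly split the data: one part to "train" the model, a held-out part to test it
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fitting the model is, under the hood, minimizing a loss function on the training data
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Evaluate on the unused data; a large gap between the two scores signals over/underfitting
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```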

Note also the close dependence of the learning process on the input data. The fact that we have millions of data points or images at our disposal does not necessarily mean we have captured all possible variations of the object or subject under consideration. Errors are thus a real possibility – even though ML is undoubtedly more capable than humans at finding patterns and making predictions from large-scale data sets. Black swans are indeed possible. So how do we detect such errors? And how do we handle them once found?

The latest ML developments spearheaded by deep learning, such as DeepMind’s AlphaZero, show that algorithms can also learn to play chess (and other similarly complex games) without using external data. Instead, after being fed the rules of the game, the algorithm generates its own data by playing against itself (“self-play,” in computer speak). AlphaZero played over one billion games to train itself. I am not sure the human race has even recorded one billion chess games. Maybe. Not surprisingly, AlphaZero is now considered the best chess engine in the world, although top chess grandmasters have had mixed reactions to its accomplishments. Let us not forget that AlphaZero relies on tremendous computing power to achieve these results; do not expect to run it on your own laptop anytime soon. Moreover, AlphaZero has yet to prove it can be as effective in other, less deterministic domains of life.
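To make the self-play idea concrete, here is a toy sketch of generating training data from nothing but the rules of a game (my own tic-tac-toe example with random moves; the actual AlphaZero uses neural-network-guided tree search, not random play):

```python
import random

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    """Play one random game of tic-tac-toe; return (positions seen, final result)."""
    board, player, history = [" "] * 9, "X", []
    while winner(board) is None and " " in board:
        move = random.choice([i for i, s in enumerate(board) if s == " "])
        board[move] = player
        history.append("".join(board))
        player = "O" if player == "X" else "X"
    return history, winner(board) or "draw"

# Generate labeled training data from the rules alone: no human games required
dataset = [(pos, result) for _ in range(10_000)
           for positions, result in [self_play_game()]
           for pos in positions]
print(len(dataset), "labeled positions from 10,000 self-play games")
```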

While undoubtedly significant, this development is also revealing for our discussion. Games have clearly defined rules that must be followed to avoid cheating. Such rules imply the existence of a specific governance structure: those who do not abide by them must be penalized, disqualified or barred from playing. Game rules are created by humans who, mostly by consensus, establish institutional arrangements to manage the game and preserve its integrity. AlphaZero plays by these same rules but with much more computing power than any other digital engine. Regardless, it cannot change the game’s rules – nor does it have a vote in the World Chess Federation (FIDE).

Algocracy and Algorithm Fetishism

In some traditional societies, governance was in the hands of a council of elders who made decisions on community issues and managed internal conflict. Furthermore, one of these elders was usually selected or elected as the community leader. This particular governance scheme, which resembles Plato’s rule of philosopher kings, was rooted in the idea that elders were wise thanks to their personal experience and longevity.

There are at least two ways in which this governance scheme can flourish. In the first, the elders use their knowledge to impose their authority over the community and to prevent any unwanted changes to the governance structures. Alternatively, a bottom-up approach emerges when the community, given the elders’ knowledge and wisdom, demands that they run its affairs. Here, the community trusts the elders and delegates authority to the council. Needless to say, nothing stops the bottom-up approach from quickly turning into its top-down version.

Several authors2 have labeled this state of affairs epistocracy, or rule by the knowledgeable. Here, those who happen to have the most knowledge (scientific, political, cultural, etc.) command authority and can thus rule in a hopefully balanced fashion. In modern times, age alone by no means guarantees wisdom; we now live in the age of meritocracy. The advent of sophisticated AI/ML platforms has also made a dent in this perspective, as digital algorithms seem to amass vast knowledge and can quickly learn about almost any topic.

With this in hand, we can revisit the idea of algocracy. For our purposes, algocracy is similar to the bottom-up epistocracy described above. The fundamental difference today is that humans willingly delegate authority to intangible objects – algorithms – thus opening the door to algorithmic governance without creating counterbalancing mechanisms for critical review or redress. We are giving autonomous life to computer code that is seemingly intelligent while assuming it is infallible. Various authors and researchers have documented that allowing computer algorithms to make critical decisions in this fashion can have a devastating impact on people’s lives.

We have, however, seen the limitations of algorithms and AI/ML. As a result, we face a paradox: algorithms are enshrined as the ultimate source of knowledge even though we are fully aware of the many limitations they (and modern technologies in general) have. Algorithms have thus become a fetish, one that most people either profoundly admire or regard with extreme fear. In both cases, algorithms seem to exist beyond human reach.

Looking ahead

We need to bring human agency back into the picture. After all, humans are behind the latest algorithmic developments and happen to manage them, albeit mostly in private and not very open environments. Contrary to the so-called “democratization” brought by mobiles and social media, algorithms are following the same trajectory as growing income and wealth inequality: only a few have access. This is usually justified in the name of complexity and sophistication, as supposedly only a few can effectively understand, manage and control algorithms. The risk is that these few claim scientific authority and push the top-down version of algocracy to rule the world. This is indeed what is at stake today: the potential emergence of a social singularity in which a select few will have the last word, shielded behind the fetish of algorithms and complex knowledge.

The only way out of this conundrum – and the only way to ensure sound and democratic accountability of algorithmic decision-making and governance – is to demand more transparency, complemented by open governance mechanisms in which stakeholders have a seat at the table from the very start. Intangible beings such as algorithms do indeed have a positive role to play in society, as long as humans, acting in a concerted and democratic fashion, decide their own future every step of the way.

In perhaps one of the most dramatic scenes of Kubrick’s 2001: A Space Odyssey, we see Dave – the only member of the doomed crew still alive after surviving an assassination attempt orchestrated by the machine itself – trying to disable the superintelligent HAL computer by removing its memory banks. In an increasingly slurred voice, we can hear HAL politely begging for its life, so to speak. We probably do not need to go to such extremes to disable such an agent. We can always just turn off the power that feeds it. Make sure you never lose sight of – or access to – that power switch (or AC plug).

AI singularity, anyone?

Cheers, Raúl

Selected References

Bauer, Jennifer. “The Necessity of Auditing Artificial Intelligence Algorithms.” SSRN Electronic Journal, 2017. doi:10.2139/ssrn.3218675.

Beer, David (ed.). The Social Power of Algorithms. Routledge, Taylor & Francis Group, 2018.

Benson, Michael. Space Odyssey: Stanley Kubrick, Arthur C. Clarke, and the Making of a Masterpiece. Simon & Schuster, 2018.

Brennan, Jason. Against Democracy. Princeton University Press, 2017.

Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. MIT Press, 2018.

Burrell, Jenna. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3, no. 1 (2016). doi:10.1177/2053951715622512.

Danaher, John. “The Threat of Algocracy: Reality, Resistance and Accommodation.” Philosophy & Technology 29, no. 3 (2016): 245–68. doi:10.1007/s13347-015-0211-1.

Domingos, Pedro. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books, 2018.

Estlund, David M. Democratic Authority: A Philosophical Framework. Princeton University Press, 2008.

Eubanks, Virginia. Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press, 2017.

Ferguson, Andrew Guthrie. The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. New York University Press, 2017.

Finn, Ed. “What Is an Algorithm?” In What Algorithms Want. MIT Press, 2017. doi:10.7551/mitpress/9780262035927.003.0002.

Fry, Hannah. Hello World: Being Human in the Age of Algorithms. W.W. Norton, 2018.

Gillespie, Tarleton. “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot, 167–94. Cambridge, MA: MIT Press, 2014. http://culturedigitally.org/2012/11/the-relevance-of-algorithms/

Hill, Robin K. “What an Algorithm Is.” Philosophy & Technology 29, no. 1 (2016): 35–59. doi:10.1007/s13347-014-0184-5.

Hughes, James J. “Algorithms and Posthuman Governance.” Journal of Posthuman Studies 1, no. 2 (2018): 166. doi:10.5325/jpoststud.1.2.0166.

Kitchin, Rob. “Thinking Critically about and Researching Algorithms.” Information, Communication & Society 20, no. 1 (2017).

Laat, Paul B. de. “Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?” Philosophy & Technology (December 2017). doi:10.1007/s13347-017-0293-z.

Lévi-Strauss, Claude. Tristes Tropiques. New York: Atheneum, 1973 (1955).

Musiani, Francesco. “Governance by Algorithms.” Internet Policy Review 2, no. 3 (2013). doi:10.14763/2013.3.188.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Books, 2018.

Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2016.

Rosenblatt, Frank. The Perceptron: A Theory of Statistical Separability in Cognitive Systems (Project Para). Cornell Aeronautical Laboratory, 1958.

Sejnowski, Terrence Joseph. The Deep Learning Revolution. MIT Press, 2018.

Silver, David, et al. “Mastering the Game of Go without Human Knowledge.” Nature 550, no. 7676 (2017): 354–59. doi:10.1038/nature24270.

Tilly, Charles. Democracy. Cambridge University Press, 2008.

Wheeler, Tim. “AlphaGo Zero: How and Why Does It Work?” 2017. http://tim.hibal.org/blog/alpha-zero-how-and-why-it-works/.

Zarsky, Tal. “The Trouble with Algorithmic Decisions: An Analytic Roadmap to Examine Efficiency and Fairness in Automated and Opaque Decision Making.” Science, Technology, & Human Values 41, no. 1 (2016).

Ziewitz, Malte. “Governing Algorithms.” Science, Technology, & Human Values 41, no. 1 (2016): 3–16. doi:10.1177/0162243915608948.

 


Endnotes
1 One of the core functions of any ML process is to minimize the so-called loss function.
2 See references at the end of this post.
