Algorithms and Algocracy – II


In the previous post, I provided a simple definition of an algorithm and then explored their use in the digital world. While algorithms live off the data they are fed as inputs, digital programs such as mobile apps and web platforms are composed of a series of algorithms that, working in sync, deliver the desired output(s). Algorithms sit between a given input and the expected output. They take the former, do their magic and yield the latter.

There is a direct relationship between the complexity of the planned output(s) and the coding effort required for its delivery. The latter is usually measured by the number of lines of code in a given program. For example, Google is said to have over 2 billion lines of code (2×10^9) supporting its various services. You certainly need an army of programmers to create, manage and debug such a large codebase.

I quickly learned that coding a particular task was much more complicated than I initially thought. While writing the actual algorithm was not that complicated, ensuring that the input could actually be processed and that the output was returned in a user-friendly format demanded quite a bit of work. I usually ended up spending much more time trying to figure out all the possible caveats one could find when processing a given input.
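
A minimal sketch in Python illustrates the point (the median function here is purely illustrative): the core algorithm takes a few lines, while guarding the input and formatting the output takes most of the code.

    def median(values):
        """Core algorithm: the median of a list of numbers."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2:                      # odd number of values
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    def median_report(raw):
        """Everything around the algorithm: input checks, friendly output."""
        if not raw:                               # empty or missing input
            return "No data supplied."
        try:
            values = [float(v) for v in raw]      # reject non-numeric entries
        except (TypeError, ValueError):
            return "Input must contain only numbers."
        return f"The median of {len(values)} values is {median(values):g}"

    print(median_report(["3", "1", "2"]))   # The median of 3 values is 2
    print(median_report(["3", "x"]))        # Input must contain only numbers.
    print(median_report([]))                # No data supplied.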

At the same time, this revealed to me that not all tasks could be easily programmed, especially those involving mostly text or images (and lots of them). But the AI/ML renaissance changed this for good.

Machine Learning

Recall that the goal of the original 1950s Perceptron, one of the first ML algorithms, was to recognize images using a simple neural network. It did not succeed at the time. But today image recognition is one of the shining stars of AI/ML. At least two things have changed since. First, neural networks have improved substantially, as reflected by the rapid development of deep learning and backpropagation algorithms. As mentioned before, algorithms also have their own history and thus continuously evolve.

Second, and perhaps more importantly, (most) AI/ML algorithms are now fed millions of images, data points or text as initial inputs, thanks to pervasive digitalization and the subsequent emergence of big data. Instead of having a group of programmers trying to code every single possible case, algorithms can now analyze millions of data points or images to find patterns, anomalies or make predictions. How does this work?

Simplifying a bit, the original input data is randomly split into two sets, one used exclusively for “training” purposes. The model generated by the training process is then tested on the unused set of data. Results might show similar behavior or diverge substantially. In the latter case, the process can be repeated using a different data split, and so on and so forth. The idea is to minimize such differences and thus increase the accuracy of the resulting algorithm. Since no model is 100 percent perfect, programmers have to trade off bias (underfitting) against variance (overfitting), a well-known statistical conundrum. Clearly, false positives resulting from ML algorithms are a distinct and real possibility and must be addressed appropriately. So why is it that these fallible algorithms are deployed in the real world and used as oracles that cannot be challenged?
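
A minimal sketch of that train-and-test cycle in Python, assuming scikit-learn and its bundled digits data set (both purely illustrative choices):

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # ~1,800 small images of handwritten digits bundled with scikit-learn
    X, y = load_digits(return_X_y=True)

    # Repeat the random split a few times and watch how accuracy varies:
    for seed in range(3):
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=seed)   # hold out a test set
        model = LogisticRegression(max_iter=5000)      # fit on training data only
        model.fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"split {seed}: test accuracy = {acc:.3f}")

The spread in test accuracy across the different splits is precisely the variance a programmer must weigh against bias.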

Note also the close correlation between the input data and the learning process. The fact that we have millions of data points or images at our disposal does not necessarily mean that we have captured all possible variations of the object or subject under consideration. Errors are thus a real possibility – even though ML is undoubtedly more capable than humans at finding patterns and anomalies in large-scale data sets and making predictions from them. How do we detect such errors? And how do we handle them when found?

The latest ML developments spearheaded by deep learning, such as Google’s AlphaZero, show that algorithms are also capable of learning to play chess (or Go) without using any external data. Instead, after being fed the rules of the game, the algorithm generates its own data by playing against itself (recursively, as it is known in computer speak), almost a billion times in the case of AlphaZero. I am not even sure the human race has yet played and recorded one billion chess games. Not surprisingly, AlphaZero is now considered the best chess engine in the world, although top chess Grandmasters had mixed reactions to its accomplishments. Let us not forget that AlphaZero uses tremendous computing power to achieve these results. Do not expect to be installing it on your own laptop any time soon. In any event, AlphaZero has yet to prove it can be as effective in other, less deterministic domains of life.
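
To make the self-play idea concrete, here is a toy sketch in Python built around a deliberately trivial game (a version of Nim: 21 counters, take one to three per turn, whoever takes the last counter wins). It is only a schematic stand-in, not AlphaZero’s actual method, which layers deep neural networks, tree search and massive hardware on top of the same basic loop:

    import random
    from collections import defaultdict

    # Toy rules: 21 counters, players alternate taking 1-3, and whoever
    # takes the last counter wins. The rule set is the only human input;
    # all training data comes from the algorithm playing against itself.
    value = defaultdict(float)          # learned value of (counters, move)

    def choose_move(counters, explore=0.1):
        moves = [m for m in (1, 2, 3) if m <= counters]
        if random.random() < explore:   # occasionally try a random move
            return random.choice(moves)
        return max(moves, key=lambda m: value[(counters, m)])

    def play_one_game():
        """Self-play: one policy plays both sides and records every move."""
        counters, history = 21, []
        while counters > 0:
            move = choose_move(counters)
            history.append((counters, move))
            counters -= move
        return history                  # whoever moved last took the win

    def train(num_games=50_000, lr=0.05):
        for _ in range(num_games):
            history = play_one_game()
            # Walk backwards: +1 for the winner's moves, -1 for the loser's.
            for i, (state, move) in enumerate(reversed(history)):
                reward = 1.0 if i % 2 == 0 else -1.0
                value[(state, move)] += lr * (reward - value[(state, move)])

    train()
    print(choose_move(5, explore=0))    # should learn to take 1, leaving 4

Even in this toy version the key point survives: the only human input is the rule set, and every training example is generated by the algorithm playing against itself.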

While undoubtedly significant, this development actually surfaces a critical point relevant to our discussion. Games have clearly defined rules that must be followed to avoid cheating. Such rules define their governance structure. Those who do not abide by them are penalized, disqualified or barred from playing. Game rules are created, and can only be altered, by humans, who in turn must establish adequate institutional arrangements to manage the game realm on a consensus basis and preserve the integrity of the game. AlphaZero plays by these same rules but has much more computing power than any other digital engine and can surely beat any human being. Regardless, it cannot change the rules of the game – nor does it have a vote in the World Chess Federation (FIDE).

Algocracy and Algorithm Fetishism

In some traditional societies, governance was headed by a council of elders who made decisions on community issues and resolved internal conflicts. Furthermore, one of the elders was usually positioned as the community leader. The roots of this particular governance scheme, which in a way resembles Plato’s rule of philosopher kings, stemmed from the idea that the elders were wise thanks to their personal experience and longevity. There are at least two ways in which this scheme can flourish. In the first, the elders use their knowledge to impose their authority over the community and to prevent any unwanted changes in the governance structure. Alternatively, a bottom-up approach takes place when the community itself demands that the elders run its affairs given their knowledge and wisdom. Here, the community trusts the elders and delegates authority to the council.

Several authors1 have labeled this state of affairs as epistocracy, or rule by knowledge. Here, those who happen to have the most knowledge (scientific, political, cultural, etc.) command authority and can thus rule in a seemingly balanced fashion. In modern times, aging alone does not guarantee wisdom; we now live in the age of meritocracy. The advent of sophisticated AI/ML platforms has also made a dent in this perspective, as digital algorithms seem to muster lots of knowledge, in addition to being able to quickly learn about almost any topic.

With this in hand, we can revisit the idea of algocracy. For our purposes, algocracy is similar to the bottom-up epistocracy described above. The fundamental difference is that today humans are willingly delegating authority to intangible objects, algorithms, thus opening the door to algorithmic governance without creating counterbalancing mechanisms for critical review or redress. We are in fact giving autonomous life to code that seems to muster lots of knowledge while being apparently infallible. Allowing computer algorithms to make critical decisions in this fashion can have a devastating impact on people’s lives, as various authors and researchers have already documented.

We have, however, seen the various limitations of algorithms and AI/ML. As a result, we face a paradox: while algorithms are being enshrined as the ultimate source of knowledge, we are also well aware of their many limitations. Algorithms have become a fetish, one that most people deeply admire or fear but probably do not fully comprehend.

Looking ahead

We need to bring human agency back into the picture. After all, humans are behind the latest algorithmic developments and happen to manage them, albeit mostly in private and not very open environments. Contrary to the “democratization” of mobiles or social media, algorithms are going in the same direction as growing inequality: only a few have access. This is usually justified in the name of complexity and sophistication, as only those few can apparently understand, manage and control them. The risk here is that these few use the top-down version of algocracy to claim authority and rule the world from here on in the name of knowledge and wisdom.

The only way out of this conundrum, and the only way to ensure sound and democratic accountability of algorithmic decision-making and governance, is to demand more transparency, complemented by the creation of open governance mechanisms where all stakeholders have a seat at the table from the very start. Intangible beings such as algorithms do have a positive role to play in society, as long as humans, acting in a concerted and democratic fashion, have the last word.

One of the most dramatic scenes of Kubrick’s 2001: A Space Odyssey happens when Dave, the only surviving member of the doomed crew, disables the superintelligent HAL computer by removing its memory banks after surviving an assassination attempt orchestrated by the machine. We can hear HAL, in a drunken voice, politely begging for its life, so to speak. We probably do not need to go to such extremes to beat a superintelligent agent. We can always just cut off the power that feeds it. Just make sure you never lose sight of, and access to, that power switch (or plug). Singularity, anyone?

Cheers, Raúl

 

Selected References

Bauer, Jennifer. “The Necessity of Auditing Artificial Intelligence Algorithms.” SSRN Electronic Journal, 2017. doi:10.2139/ssrn.3218675.

Beer, David (ed.). The Social Power of Algorithms. Routledge, Taylor & Francis Group, 2018.

Brennan, Jason. Against Democracy. Princeton University Press, 2017.

Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. MIT Press, 2019.

Burrell, Jenna. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3, no. 1 (2016). doi:10.1177/2053951715622512.

Danaher, John. “The Threat of Algocracy: Reality, Resistance and Accommodation.” Philosophy & Technology 29, no. 3 (2016): 245-268. doi:10.1007/s13347-015-0211-1.

Domingos, Pedro. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books, a Member of the Perseus Books Group, 2018.

Estlund, David M. Democratic Authority: A Philosophical Framework. Princeton University, 2008.

Eubanks, Virginia. Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press, 2017.

Ferguson, Andrew Guthrie. Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. New York University Press, 2017.

Finn, Ed. “What Is an Algorithm?” In What Algorithms Want. MIT Press, 2017. doi:10.7551/mitpress/9780262035927.003.0002.

Gillespie, Tarleton. “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot, 167-194. Cambridge, MA: MIT Press. 2014. http://culturedigitally.org/2012/11/the-relevance-of-algorithms/

Hill, Robin K. “What an Algorithm Is.” Philosophy & Technology 29, no. 1 (2016): 35-59. doi:10.1007/s13347-014-0184-5.

Hughes. “Algorithms and Posthuman Governance.” Journal of Posthuman Studies 1, no. 2 (2018): 166. doi:10.5325/jpoststud.1.2.0166.

Kitchin, Rob. “Thinking Critically about and Researching Algorithms.” Information, Communication & Society 20, no. 1 (2017). http://www.tandfonline.com/doi/full/10.1080/1369118X.2016.1154087#abstract

Laat, Paul B. De. “Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?” Philosophy & Technology, Dec. 2017, doi:10.1007/s13347-017-0293-z.

Lévi-Strauss, Claude. Tristes Tropiques. New York: Atheneum, 1973 (1955).

Musiani, Francesco. “Governance by Algorithms.” Internet Policy Review 2, no. 3 (2013). doi:10.14763/2013.3.188.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Books, 2018.

Pasquale, Frank. Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2016.

Rosenblatt, Frank. The Perceptron: a Theory of Statistical Separability in Cognitive Systems (Project Para). Cornell Aeronautical Laboratory, 1958.

Sejnowski, Terrence Joseph. The Deep Learning Revolution. MIT Press, 2018.

Silver, David, et al. “Mastering the Game of Go without Human Knowledge.” Nature 550, no. 7676 (2017): 354-359. doi:10.1038/nature24270.

Tilly, Charles. Democracy. Cambridge University Press, 2008.

Wheeler, Tim. “AlphaGo Zero: How and Why It Works.” 2017. http://tim.hibal.org/blog/alpha-zero-how-and-why-it-works/.

Zarsky, Tal. “The Trouble with Algorithmic Decisions: An Analytic Roadmap to Examine Efficiency and Fairness in Automated and Opaque Decision Making.” Science, Technology, & Human Values 41, no. 1 (2016). http://sth.sagepub.com/content/early/2015/10/13/0162243915605575.abstract

Ziewitz, Malte. “Governing Algorithms.” Science, Technology, & Human Values 41, no. 1 (2016): 3-16. doi:10.1177/0162243915608948.

 

Endnotes

1. See references at the end of this post.