Biased Artificial Intelligence

A recent piece in MIT’s Technology Review nicely summarizes the issue of bias in AI/ML (AI for short) algorithms used in production to make decisions or predictions. The usual suspects make a cameo appearance, including data, design and implicit fairness assumptions. But the article falls a bit short, as it does not distinguish between bias in general and bias that is unique to AI.

Indeed, I was surprised to see the issue of problem framing listed as the first potential source of AI bias. While this might occur in some cases, it is not an issue that pertains only to AI projects and enterprises. Large multinational drug companies, for example, face a similar challenge. Nowadays, almost none of them are investing in developing new antibiotics to stop the spread of so-called superbugs nor have any interest

Read More

Algorithms and Algocracy – II

In the previous post, I provided a simple definition of an algorithm before exploring the use of algorithms in the digital world. While algorithms live off the inputs they are fed, digital programs such as mobile apps and web platforms comprise a series of algorithms that, working in sync, deliver the desired output(s). Algorithms sit between a given input and the expected output. They take the former, do their magic and yield the latter.
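To make this concrete, here is a minimal, hypothetical Python sketch of my own (the function names and data are illustrative, not drawn from any real app) showing two small algorithms chained together so that the output of one becomes the input of the next:

```python
# Hypothetical example: two tiny "algorithms" working in sync.
# Each one takes an input, does its work, and yields an output
# that the next step consumes.

def clean_scores(raw_scores):
    """Algorithm 1: drop invalid entries from the raw input."""
    return [s for s in raw_scores if 0 <= s <= 100]

def average(scores):
    """Algorithm 2: reduce the cleaned input to a single output."""
    return sum(scores) / len(scores) if scores else 0.0

# A "program" is simply these algorithms run in sequence:
raw_input = [88, 104, 73, -5, 91]     # given input
cleaned = clean_scores(raw_input)     # intermediate output
final_output = average(cleaned)       # expected output
print(final_output)                   # 84.0
```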

There is a direct relationship between the complexity of the planned output(s) and the coding effort required. The latter is usually measured by the number of lines of code in a given program. For example, Google is said to have over 2 billion lines of code (2×10^9) supporting its various services. You certainly need an army of programmers to create, manage

Read More

Algorithms and Algocracy – I

While the concept of the algorithm has been around for centuries, the same cannot be said about algocracy. The latter has recently gained notoriety, thanks in part to the renaissance of Artificial Intelligence and Machine Learning (AI/ML), and is frequently used to describe the increased use of algorithms in decision-making and governance processes. Indeed, the so-called Singularity could be seen as an extreme and seemingly irreversible case of algocracy in which humans lose the capacity to control superintelligent machines and might even face extinction. Not sure that will ever happen, though.

A more plausible scenario takes place when humans and human institutions blindly rely on algorithms to make critical decisions. This is happening today in many sectors – the quasi-dictatorship of algorithms. In

Read More

Uncertainty and Artificial Intelligence

In a world where perfect information supposedly rules across the board, uncertainty certainly poses a challenge to mainstream economists. While some of the tenets of this assumption have already been addressed – via the theory of information asymmetries and the development of the rational expectations school, for example – uncertainty still raises critical questions.

For starters, uncertainty should not be confused with risk. The latter, in a nutshell, can be quantified using probability theory. Based on existing data and previous behavior, we could predict a 75 percent chance that investments in the stock market will yield a 25 percent return in, say, 5 years. This is not the case for uncertainty, where the outcome is entirely unknown. In other words, we have no idea what is going to
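To illustrate the distinction, here is a small, hypothetical Python sketch built around the figures cited above (a 75 percent chance of a 25 percent return over 5 years); the probabilities and the bad-outcome return are assumptions of mine for the sake of the example. The point is simply that risk can be put into numbers, while genuine uncertainty cannot:

```python
# Hypothetical illustration of quantifiable risk, using the figures
# cited above: a 75% chance of a 25% return in 5 years, and (an
# assumption made purely for this example) a 25% chance of no return.

p_gain = 0.75           # estimated probability of the good outcome
gain = 0.25             # 25 percent return over 5 years
p_no_gain = 1 - p_gain  # remaining probability
no_gain = 0.0           # assumed return in the bad outcome

expected_return = p_gain * gain + p_no_gain * no_gain
print(f"Expected 5-year return: {expected_return:.1%}")  # 18.8%

# Uncertainty is different: we cannot even write down the probabilities
# or the possible outcomes, so there is nothing to compute.
```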

Read More

Learning about Machine Learning

A few months ago, as I was finishing a paper on blockchain technology, I received an unexpected comment on Artificial Intelligence (AI from here on in) from one of the peer reviewers. While addressing the overall topic of innovation in the 21st Century, I mentioned in passing the revival of both AI and Machine Learning (ML, not to be confused with Marxism-Leninism) as a good example. The reviewer requested the deletion of one of the two terms as, in his book, they were exactly the same. Not so fast, was my prompt reply. In the end, both terms survived the peer review.

Looking at the history of AI helps shed some light on these concepts. While the term AI was coined in the 1950s, the work of Alan Turing, constrained by the analog/mechanical computers of his era, can be seen as its launching pad. Digital computers

Read More