Uncertainty and Artificial Intelligence


In a world where perfect information supposedly rules across the board, uncertainty certainly poses a challenge to mainstream economists. While some of the tenets of this assumption have already been addressed, via the theory of information asymmetries and the development of the rational expectations school, for example, uncertainty still raises critical questions.

For starters, uncertainty should not be confused with risk. The latter, in a nutshell, can be quantified using probability theory. Based on existing data and past behavior, we could predict that there is, say, a 75 percent chance that investments in the stock market will yield a 25 percent return over 5 years. This is not the case for uncertainty, where the outcome is entirely unknown. In other words, we have no idea what is going to happen. The Global Economic Crisis of 2007/2008 is a good example here; not surprisingly, most economists did not see it coming. This led to the emergence of the so-called black swan theory, which holds that some events simply cannot be predicted. They are thus 100 percent uncertain. But are they, really? Can Artificial Intelligence (AI) play a role here?
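The distinction can be made concrete with a bit of arithmetic. The sketch below, using the illustrative figures above (75 percent chance of a 25 percent return; the numbers are hypothetical, not real market estimates), shows what it means for risk to be quantifiable:

```python
# Risk can be quantified: a hypothetical illustration using the
# figures from the text. These numbers are illustrative only.

p_gain = 0.75        # estimated probability of the favorable outcome
gain = 0.25          # return if the investment pays off
p_loss = 1 - p_gain  # probability of the unfavorable outcome
loss = 0.0           # assume the principal is merely flat otherwise

# Expected value: the probability-weighted average of outcomes.
expected_return = p_gain * gain + p_loss * loss
print(f"Expected 5-year return: {expected_return:.1%}")  # 18.8%
```

Uncertainty, by contrast, resists this arithmetic entirely: for a black swan event there is no probability distribution to plug in at all.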

In the book Prediction Machines: The Simple Economics of Artificial Intelligence, the authors tackle some of these questions. Their opening claim is that economics is ideally suited for comprehending the role of uncertainty in decision-making. Furthermore, they see prediction as the best tool to reduce uncertainty, and it is here where AI could play a decisive role (pg. 3). It seems, however, that uncertainty and risk as defined above are not treated as distinct concepts in their analysis.

In their view, AI’s real power lies in neither the A nor the I. Rather, it comes from the fact that AI is an excellent tool for developing and enhancing both prediction models and predictions in general, at relatively low cost for firms and end users. In fact, they argue, prediction is a form of intelligence, as it can generate new information from data and knowledge we already have. AI, in short, is essentially a prediction technology that can help drive business decision-making as well as many other activities. Prediction machines thus rule. While incisive, I am not entirely convinced that all AI is about prediction. Take chess, and the innovative AlphaZero reinforcement learning approach, which does not use external data to “predict” the next move.

One key contribution of the book is the way the authors introduce AI into the decision-making process. The core idea is not to replace how this currently works but rather to enhance it with AI-based prediction. They propose a model with seven layers (pg. 75), three of them data-centered. The key element in the model, however, is the distinction between prediction and judgment. Indeed, machines are better suited to predict new information, especially in complex situations with lots of data points and too many variables. But humans still excel at making judgments based on those predictions.

Before any action towards a given outcome is taken, a judgment about the prediction must be made. Nevertheless, nothing really stops AI from also mastering judgment the same way it conquered prediction. Autonomous vehicles are an example here, one where “prediction machines” not only take action but also assess the impact on the intended outcome and feed the results back to the data source to improve prediction power and refine judgment. And so on. In this scheme, most humans might become redundant.

Not so fast, the authors suggest. First, humans are better than prediction machines when it comes to events that occur only occasionally; this they call human prediction by exception (pg. 67). Second, predicting human judgment is limited by the lack of relevant data. Humans hold some data that machines do not (yet) have and thus retain an advantage; examples include individual preferences and cultural values. In sum, “Machines cannot predict judgment when a situation has not occurred many times in the past” (pg. 102). And third, the outputs and impact of prediction machines will undoubtedly generate new issues and challenges that will demand human intervention and thus create new types of jobs (pg. 211-212).

The book’s last chapter considers the overall impact of prediction machines on society, while acknowledging that AI is still taking baby steps. The issue boils down to a question of trade-offs: productivity/inequality, competition/monopolies, and performance/privacy. AI tends to heighten all three tensions. It is up to each country, we are told, to make the right decisions based on the local context.

Hopefully, I would add, they will not use prediction machines to reach such critical decisions.

Finally, we can conclude that AI, still nascent, can certainly play a key role in reducing risk, but not in alerting us to black swans and other uncertainties. Will a more mature AI be able to predict the next global economic crisis? Maybe – but do not hold your breath.

Cheers, Raúl