Uncertainty and Artificial Intelligence

In a world where perfect information supposedly rules across the board, uncertainty certainly challenges mainstream economists. While some of the tenets of this assumption have already been addressed – via the theory of information asymmetries and the development of the rational expectations school, for example – uncertainty still poses critical questions.

For starters, uncertainty should not be confused with risk. The latter, in a nutshell, can be quantified using probability theory. For example, based on existing data and previous behavior, we could predict a 75 percent chance that investments in the stock market will yield a 25 percent return in 5 years. That is not the case for uncertainty, where the outcome is entirely unknown. In other words, we have no idea what is going to happen. The Global Economic Crisis of 2007/2008 is a good example here: not surprisingly, most economists did not see it coming. This led to the emergence of the so-called black swan theory, which holds that some events cannot be predicted. They are thus 100 percent uncertain. But are they, really? Can Artificial Intelligence (AI) play a role here?
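The risk side of this distinction can be made concrete with a toy simulation. The sketch below estimates the chance of a 25 percent gain over 5 years by assuming annual returns follow a known distribution – the mean and volatility figures are illustrative assumptions of mine, not data from the post:

```python
import random

# Toy sketch of quantifiable risk: we ASSUME a known model of annual
# stock returns (normal, mean 5%, std 15% -- made-up illustrative values)
# and estimate the probability of a target gain by Monte Carlo.
def probability_of_gain(target=0.25, years=5, trials=100_000, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(years):
            wealth *= 1.0 + rng.gauss(0.05, 0.15)
        if wealth >= 1.0 + target:
            hits += 1
    return hits / trials

p = probability_of_gain()
print(f"Estimated chance of a 25% gain in 5 years: {p:.0%}")
```

The point of the sketch is what it cannot do: for genuine uncertainty – a black swan – there is no known distribution to draw from, so there is nothing to simulate.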

The authors of the book Prediction Machines: The Simple Economics of Artificial Intelligence deal with some of these questions. Their opening claim is that economics is ideal for comprehending the role of uncertainty in decision-making processes. Furthermore, they see prediction as the best tool to reduce uncertainty. And it is here that AI could play a decisive role (pg. 3). It seems that uncertainty and risk, as defined above, are not distinct concepts in their analysis.

In their view, AI’s real power lies in neither the A nor the I. Rather, it comes from the fact that AI is an excellent tool for developing and enhancing prediction models at relatively low cost for firms and end users. In fact, they argue prediction is a form of intelligence, as it can generate new information from data and knowledge we already have. AI, in their view, is essentially a prediction technology that can help drive business decision-making and many other activities. Prediction machines thus rule. While incisive, I am not entirely convinced all AI is about prediction. Take chess, and the innovative AlphaZero reinforcement learning approach, which does not use external data to “predict” the next move.

One essential contribution of the book is how the authors introduce AI into decision-making. The core idea is not to replace the way this currently works but rather to enhance it by introducing AI-based prediction. They propose a model with seven layers (pg. 75), three of them data-centered. The critical element in the model, however, is the distinction between prediction and judgment. Machines are better suited to prediction, especially in complex situations with many data points and variables. But humans still excel at making judgments based on those predictions.
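The prediction/judgment split can be illustrated with a minimal sketch – my own toy example (the classic umbrella decision), not the authors' seven-layer model. The machine supplies a probability; the human supplies the payoffs that turn that probability into a decision:

```python
# Toy illustration of the prediction/judgment split: the machine
# predicts, the human judges by attaching costs to outcomes.

def machine_predict(features):
    # Stand-in for a trained model returning P(rain).
    # The 0.3 is a made-up value for illustration.
    return 0.3

def human_judgment(p_rain, cost_carry=1.0, cost_wet=5.0):
    # Judgment: weigh the expected cost of each action and
    # pick the cheaper one. The costs encode human preferences.
    expected_cost_take = cost_carry
    expected_cost_leave = p_rain * cost_wet
    return "take umbrella" if expected_cost_take < expected_cost_leave else "leave it"

print(human_judgment(machine_predict(None)))  # prints "take umbrella"
```

Note that the same prediction yields different decisions under different judgments: someone who barely minds getting wet (a low `cost_wet`) would leave the umbrella at home.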

Before any action toward a given outcome is taken, a judgment about the prediction must be made. Nevertheless, nothing stops AI from mastering judgment the way it conquered prediction. Autonomous vehicles are an example here – and one where “prediction machines” not only take action but also assess the impact on the intended outcome and feed the results back to the data source, improving prediction power and refining judgment. And so on and so forth. In this scheme, most humans might become redundant.

Not so fast, the authors suggest. First, humans are better than prediction machines when it comes to events that take place only occasionally – what they call human prediction by exception (pg. 67). Second, predicting human judgment is limited by the lack of relevant data. Humans hold data that machines do not (yet) have – individual preferences and cultural values, for example – and thus retain an advantage. In sum, “Machines cannot predict judgment when a situation has not occurred many times in the past” (pg. 102). And third, the outputs and impact of prediction machines will undoubtedly generate new issues and challenges that demand human intervention and thus create new jobs (pg. 211-212).

The book’s last chapter considers the overall impact of prediction machines on society, while acknowledging that AI is still taking baby steps. The issue boils down to a question of trade-offs regarding productivity/inequality, competition/monopolies, and performance/privacy. Unfortunately, AI tends to exacerbate all three tensions. It is up to each country, we are told, to make the right decisions based on the local context.

Hopefully, they will not use prediction machines to reach such critical decisions.

Finally, we can conclude that AI, still nascent, can undoubtedly play a key role in reducing risk – but not in alerting us to black swans and other uncertainties. For example, will a more mature AI be able to predict the next global economic crisis? Maybe – but do not hold your breath.

Cheers, Raúl
