Trade is one of the main hallmarks of globalization. Nowadays, most countries in the world exchange products and services on a regular basis, leveraging local comparative advantages to specialize in specific trade sectors and/or commodities. Food and agricultural products are important components of this process. Within countries, rapid urbanization has increased the demand for food. At the same time, the number of people working in the agricultural sector and living in rural areas has decreased substantially. While some food staples are imported, others are still produced locally but must travel from rural areas to urban centers and big cities to meet the demand.
Food products are thus in perpetual motion, moving from their place of origin as soon as possible towards a wide variety
In the last decade, Artificial Intelligence (AI), including its siblings machine learning and deep learning, has been growing by leaps and bounds. More importantly, the technology has been deployed effectively in a wide range of traditional sectors, bringing real transformational change while raising fundamental socio-economic issues (joblessness, rising inequality, etc.) and ethical ones (bias, discrimination, etc.) along the way. As it stands today, AI, understood as a set of still-evolving technologies, seems poised to become a general-purpose technology that could leave no stone untouched.
As with other digital technologies, most developing countries face the daunting challenge of harnessing AI to foster national human development. Prima facie, AI looks mostly like software: code that one can
A recent piece in MIT’s Technology Review nicely summarizes the issue of bias in Artificial Intelligence and Machine Learning (AI/ML) algorithms used in production to make decisions or predictions. The usual suspects make a cameo appearance, including data, design, and implicit fairness assumptions. But the article falls a bit short, as it does not distinguish between bias in general and bias that is unique to AI.
Indeed, I was surprised to see the issue of problem framing listed as the first potential source of AI bias. While this might occur in some cases, it is not an issue that pertains only to AI projects and enterprises. Large multinational drug companies, for example, face a similar challenge: nowadays, almost none of them are investing in developing new antibiotics to stop the spread of the so-called superbugs, nor have any interest
In the previous post, I provided a simple definition of an algorithm and then explored their use in the digital world. While algorithms live off the inputs they are fed, digital programs such as mobile apps and web platforms are composed of a series of algorithms that, working in sync, deliver the desired output(s). Algorithms sit between a given input and the expected output. They take the former, do their magic and yield the latter.
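To make the input-to-output idea concrete, here is a minimal sketch in Python. The function name and the data are invented for illustration; the point is simply that an algorithm is a recipe that transforms a given input into an expected output.

```python
# Illustrative only: a toy "algorithm" in the input -> output sense.
# It takes an unordered list (the input), applies a sorting procedure
# (the "magic" in between), and yields the ordered list (the output).

def sort_scores(scores):
    """Return the input scores in ascending order."""
    return sorted(scores)

print(sort_scores([42, 7, 19]))  # -> [7, 19, 42]
```

A real program chains many such steps together, the output of one algorithm becoming the input of the next.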
There is a direct relationship between the complexity of the planned output(s) and the coding effort required. The latter is usually measured by the number of lines of code in a given program. For example, Google is said to run on over 2 billion lines of code (2×10^9) supporting its various services. You certainly need an army of programmers to create, manage
While the concept of algorithm has been around for centuries, the same cannot be said about algocracy. The latter has recently gained notoriety thanks in part to the renaissance of Artificial Intelligence and Machine Learning (AI/ML) and is frequently used to describe the increased use of algorithms in decision-making and governance processes. Indeed, the so-called Singularity could be seen as an extreme and seemingly irreversible algocracy case where humans lose the capacity to control superintelligent machines and might even face extinction. Not sure that will ever happen though.
A more plausible scenario takes place when humans and human institutions blindly rely on algorithms to make critical decisions. This is happening today in many sectors – the quasi-dictatorship of algorithms. In
In a world where perfect information supposedly rules across the board, uncertainty certainly poses a challenge to mainstream economists. While some of the tenets of that assumption have already been addressed (via the theory of information asymmetries and the development of the rational expectations school, for example), uncertainty still poses critical questions.
For starters, uncertainty should not be confused with risk. The latter, in a nutshell, can be quantified using probability theory. Based on existing data and previous behavior, we could predict, say, that there is a 75 percent chance that investments in the stock market will yield a 25 percent return in, say, five years. This is not the case for uncertainty, where the outcome is entirely unknown. In other words, we have no idea what is going to
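The risk side of the distinction can be sketched numerically. The snippet below is a hypothetical illustration, not a forecasting method: the historical returns are invented, and the simple resampling approach stands in for whatever model an analyst might actually use. The point is that risk lends itself to a probability estimate at all, whereas genuine uncertainty does not.

```python
# Hypothetical sketch: quantifying *risk* with probability theory by
# resampling from past data. All return figures here are made up.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def chance_of_target(annual_returns, target=0.25, years=5, trials=10_000):
    """Estimate the probability that `years` of returns drawn at random
    from historical data compound to at least `target` total gain."""
    hits = 0
    for _ in range(trials):
        growth = 1.0
        for _ in range(years):
            growth *= 1 + random.choice(annual_returns)
        if growth - 1 >= target:
            hits += 1
    return hits / trials

history = [0.12, -0.05, 0.08, 0.20, -0.10, 0.15]  # invented past returns
p = chance_of_target(history)
print(f"Estimated chance of a 25% gain over 5 years: {p:.0%}")
```

No analogous computation exists for uncertainty: with no data and no stable distribution to draw from, there is nothing to resample.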
A few months ago, as I was finishing a paper on blockchain technology, I received an unexpected comment on Artificial Intelligence (AI from here on in) from one of the peer reviewers. While addressing the overall topic of innovation in the 21st Century, I mentioned in passing the revival of both AI and Machine Learning (ML, not to be confused with Marxism-Leninism) as a good example. The reviewer requested the deletion of one of the two terms as, in his book, they were exactly the same. Not so fast, was my prompt reply. In the end, both survived the peer review.
Looking at the history of AI helps shed some light on these concepts. While the term AI was coined in the 1950s, the work of Alan Turing, limited by the use of analog/mechanical computers, can be seen as its launching pad. Digital computers