A few months ago, as I was finishing a paper on blockchain technology, I received an unexpected comment on Artificial Intelligence (AI from here on) from one of the peer reviewers. While addressing the overall topic of innovation in the 21st century, I mentioned in passing the revival of both AI and Machine Learning (ML, not to be confused with Marxism-Leninism) as a good example. The reviewer requested the deletion of one of the two terms as, in his book, they were exactly the same. Not so fast, was my prompt reply. In the end, both survived the peer review.
Looking at the history of AI helps shed some light on these concepts. While the term AI was coined in the 1950s, the work of Alan Turing, limited by the use of analog/mechanical computers, can be seen as its launching pad. Digital computers emerged around the same time, and the invention of the transistor a few years later opened the door for their rapid evolution. The famous Turing Test is perhaps the best-known AI example: a brilliant machine capable of fooling a human tester into believing it is indeed a regular human being.
So how do we build such a brilliant machine?
For starters, distinguishing between thinking and acting is critical here. Note that Turing’s test does not assume that machines must think like humans. Instead, the goal is to get them to act like humans, to play the imitation game. The brilliant computer can then be seen as an intelligent agent that acts rationally within, and reacts to, its environment. The thinking part of the equation belongs to cognitive science, a field that emerged in the 1960s and is nowadays quite distinct from AI.
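The agent view can be made concrete with a minimal sketch: an agent that perceives its environment and picks the action with the highest payoff. The thermostat-style percepts, actions, and utility values below are invented purely for illustration.

```python
# A minimal rational agent: perceive the environment, then pick the
# action with the highest utility for that percept. The environment
# (a thermostat) and the utility table are hypothetical examples.

def agent_step(percept, utility):
    """Act rationally: choose the best-scoring action for the current percept."""
    actions = utility[percept]
    return max(actions, key=actions.get)

# Hypothetical utility table for a thermostat-style agent.
UTILITY = {
    "too_cold": {"heat_on": 1.0, "heat_off": 0.0},
    "too_hot":  {"heat_on": 0.0, "heat_off": 1.0},
}

action = agent_step("too_cold", UTILITY)  # → "heat_on"
```

No claim of human-like thinking is made here; the agent simply acts sensibly given what it perceives, which is exactly the acting-not-thinking point above.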
While there are several approaches to AI, two broad ones can be highlighted for our purposes. One perspective places knowledge as the starting point. Here, knowledge of a specific area or theme is first codified into computer language and then used for further processing and implementation by computer algorithms and applications. Inductive and deductive inference are also feasible, thus making these agents behave in a seemingly smart fashion.
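The knowledge-first approach can be sketched as a handful of hand-coded if-then rules plus a simple forward-chaining loop that deduces new facts from known ones. The rules and facts below are invented for illustration; real expert systems held thousands of such rules.

```python
# Hand-coded knowledge base in the style of an expert system:
# each rule maps a set of required facts to a conclusion.
# The rules and facts are illustrative, not from any real system.
RULES = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Deductive inference: keep firing rules until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES)
# derived now also contains "flu_suspected" and "see_doctor"
```

Note that the machine never learns anything here: all the intelligence lives in the rules a human expert wrote down in advance.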
The second approach centers on learning and takes the opposite route. Here the machine is expected to interact with the environment, learn about it, and develop knowledge to operate more intelligently the next time around. The Perceptron, invented in 1957, is the first example. Based on a neural network, the algorithm-turned-machine was expected to scan images and recognize them as it learned about them. Nowadays, this is one of the core ML tasks, known as supervised learning. But back then it did not deliver and was soon abandoned.
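The Perceptron's learning rule is simple enough to sketch in a few lines: start with zero weights and nudge them whenever a labeled example is misclassified. The toy dataset (logical OR) and learning rate below are illustrative assumptions, not Rosenblatt's original setup.

```python
# Minimal perceptron: supervised learning of a linear decision rule.
# Toy data and learning rate are illustrative assumptions.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Adjust weights only when a prediction is wrong (the perceptron rule)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # labels are +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # misclassified: nudge the boundary toward x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Learn logical OR, a linearly separable function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, 1]
w, b = train_perceptron(X, y)
# After training, predict(w, b, x) reproduces y for every x in X.
```

The later disappointment is also visible here: a single perceptron can only draw one straight line, so a non-separable function such as XOR is beyond it, a limitation famously analyzed by Minsky and Papert.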
By the 1970s, the knowledge approach gained traction thanks to the development of expert systems. These systems soon showed severe limitations, putting a brake on their development – marking the opening of the so-called AI winter in the mid-1980s.
Five new developments helped end the AI winter. First, the emergence of new algorithms amid the rapid evolution of computer science in general. Second, new microchips and the geometric progression of cheaper computing power, as depicted by the now-defunct Moore’s Law. Third, the birth of the Internet and the subsequent linking of millions of networks around the globe. Fourth, the rapid increase in capacity and decrease in cost of digital storage. And fifth, the accelerated development of data digitization and the subsequent emergence of big data.
The convergence of these five factors during the first decade of this century provided fertile ground not only for the revival but also for the accelerated development of both AI and ML – to levels that previously seemed unthinkable or even impossible. While the two AI approaches still persist, the consensus today is that a mix of the two is needed to ensure the field continues to move forward in innovative ways. Conceptually, knowledge and learning are closely related if we think of them in dynamic terms.
Going back to the initial comment on my paper, we can safely say that while all ML is AI, not all AI is ML. ML is a distinct field within AI. In any case, the overall field has grown exponentially in the last few years thanks to the development of deep learning, the new kid on the block, seemingly creating even more havoc.