From ELIZA to Alexa: Misunderstanding the Limits of AI
Artificial intelligence is an amazing and powerful tool, deployed in every industry under the sun to solve a broad spectrum of problems. But its power has limits, and to maximize the value of our investments in AI, we, as investors, entrepreneurs, and customers, must understand and embrace those limits.
Artificial intelligence as a field of research (and product development) has been around for decades, since the earliest days of computer science. From the moment computer technology arrived on the scene, science fiction authors like Frank Herbert, Isaac Asimov, and many others imagined computerized versions of humans displaying humanlike intelligence: robots (Asimov’s Daneel Olivaw, Herbert’s Mentats, and Star Trek’s Data), disembodied AIs (HAL 9000 from 2001: A Space Odyssey), or all-powerful AI-driven networks (D. F. Jones’s Colossus, Terminator’s Skynet and, of course, the Matrix). All of these fictional instantiations of AI share a common thread: they imagine artificial intelligence with a deep understanding of the world commensurate with human understanding. And this promise (a false one) has driven the overpromising of artificial intelligence applications as they have been introduced into broader society.
Let’s go back to the 1950s and Alan Turing’s Turing Test: a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The key point is that an application that passes the Turing Test is one that gives the appearance of intelligence. There is no requirement, or expectation, that such an AI actually be intelligent. It is merely expected to mimic intelligent behavior.
One of the earliest artificial intelligence chatbots, Joseph Weizenbaum’s ELIZA, appeared to pass the Turing Test by mimicking a psychotherapist. But it was merely doing simple pattern matching, and once you figured out how it worked, you could easily trick it into giving nonsensical and meaningless responses in conversation.
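To make concrete how shallow the trick was, here is a minimal sketch of ELIZA-style pattern matching in Python. The rules below are invented for illustration; Weizenbaum’s actual script was larger and included pronoun reflection, but the mechanism was the same: match a keyword pattern, fill a canned template, and fall back to a stock phrase.

    import re

    # Toy ELIZA-style rules (illustrative, not Weizenbaum's original script):
    # each rule pairs a keyword pattern with a canned response template.
    RULES = [
        (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    ]
    FALLBACK = "Please go on."

    def respond(utterance):
        # No parsing, no memory, no model of meaning: just string surgery.
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return FALLBACK

    print(respond("I am sad about my job"))     # Why do you say you are sad about my job?
    print(respond("Purple monkey dishwasher"))  # Please go on.

Notice that the first reply parrots “my job” back instead of reflecting it to “your job”; cracks like that are exactly how users learned to expose the trick.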
Amazon’s Alexa, a far more impressive attempt at interactive, conversational, and informative artificial intelligence, is clearly more useful and effective than ELIZA. Nonetheless, it is still just mimicking intelligence. While it has a (mostly) human-sounding voice, many human-style mannerisms and idiosyncrasies, and humorous, humanizing Easter eggs, it is still just a personified accessor of Amazon’s search functionality. It frequently gives responses to queries that clearly demonstrate that it doesn’t “understand” human language, any more than the Google search bar does.
Between ELIZA and Alexa, there have been countless attempts at passing the Turing Test: chatbots that propose to seamlessly replace humans, and non-verbal artificial intelligence solutions that attempt to match human performance on complex tasks. One persistent approach to producing AI is to create an ontology of the physical world and build a logic engine that attempts to reason inside a virtual replica of the world based on that ontology. A classic example of this ontological approach is Doug Lenat’s Cyc project. The fact that Cyc was started in 1984 and, after more than 35 years of development, has still failed to fulfill its original promise should give some insight into the merits of the approach. IBM’s Watson is a data-driven successor to Cyc, and while it is more evolutionary (it adjusts its ontological “view” of the world as it sees new data) and thus more robust, it too seems to have failed to deliver true, broad understanding of the world.
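For flavor, here is a toy illustration of that ontological approach. It is a deliberately tiny caricature, not Cyc’s actual knowledge base or inference engine: facts are stored as subject-relation-object triples, and a single hand-written rule forward-chains new facts until nothing more can be derived.

    # Toy ontology: facts as (subject, relation, object) triples.
    facts = {
        ("Socrates", "is_a", "human"),
        ("human", "subclass_of", "mammal"),
        ("mammal", "subclass_of", "animal"),
    }

    def infer(facts):
        # Forward-chain one rule (X is_a Y, Y subclass_of Z => X is_a Z)
        # until no new facts appear.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for (a, r1, b) in list(facts):
                for (c, r2, d) in list(facts):
                    if r1 == "is_a" and r2 == "subclass_of" and b == c:
                        new_fact = (a, "is_a", d)
                        if new_fact not in facts:
                            facts.add(new_fact)
                            changed = True
        return facts

    print(("Socrates", "is_a", "animal") in infer(facts))  # True

The inference is the easy part. The hard part, which decades of hand-curation have not solved, is authoring enough triples and rules to cover an open-ended physical world.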
Today, AI promises unsupervised, human-level performance on a variety of tasks: self-driving cars, intelligent digital assistants, automated customer service representatives, and more. As an early-stage venture capital investor, I see dozens of companies every month promising to solve these problems and many more with truly intelligent AI solutions. These companies may produce useful products. They may produce outstanding revenues. They may even lead to unicorn-level exits. But one thing they promise that I am sure they will never achieve is the thing AI is incapable of doing: understanding.
That is the crux of the artificial intelligence community’s overpromising and under-delivering. They promise, and sometimes even believe, that their computerized creations will “understand” the world in the way, or at least at the performance level, that human beings appear to understand it. Aside from the fact that it isn’t clear what it means to understand something, AI-driven computer programs are not actually built to understand anything (with the possible exception of programs like Cyc and Watson, but those have failed to achieve their stated goals as well). AI is almost always about collecting data around complex tasks and training computers to mimic the behaviors exhibited in those tasks. If the data set is rich and complete enough, machine learning algorithms can build models detailed enough to capture those behaviors and reproduce human-like performance on new instances of the tasks.
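As a caricature of that data-driven mimicry, consider a bare-bones nearest-neighbor “learner” (the data below is invented for illustration). It reproduces whatever labeled behavior it has seen, and it will happily produce an answer for inputs unlike anything in its training data, with no notion of why.

    # Mimicry in miniature: a 1-nearest-neighbor classifier that copies the
    # label of the most similar training example. Data invented for illustration.
    def nearest_label(x, examples):
        # Return the label of the training point closest to x (squared distance).
        closest = min(examples,
                      key=lambda ex: sum((a - b) ** 2 for a, b in zip(x, ex[0])))
        return closest[1]

    # (feature vector, observed human decision)
    training = [
        ((0.9, 0.1), "approve"),
        ((0.8, 0.2), "approve"),
        ((0.1, 0.9), "reject"),
        ((0.2, 0.8), "reject"),
    ]

    print(nearest_label((0.85, 0.15), training))  # "approve": mimics behavior it has seen
    print(nearest_label((0.5, 0.5), training))    # still answers, with zero understanding

Richer models and bigger data sets smooth out the interpolation, but the underlying move is the same: reproduce the behaviors in the data.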
That’s all AI is doing: advanced mimicry. For many problems whose solutions are economically valuable, and where the occasional failure is tolerable (read: not self-driving cars), advanced mimicry at the level AI can achieve today is more than adequate, and often quite excellent. Many tasks expensively performed by humans today, such as customer service calls, can already be done by existing AI using advanced mimicry.
But AI does not understand the world (whatever that means), and in all likelihood it never will. To avoid creating a bubble that will inevitably burst in the marketplace for AI-powered solutions, entrepreneurs, technologists, and customers must acknowledge and embrace this limitation. Entrepreneurs should not oversell their products by claiming that they understand the problems they solve, or at least should explain the limits of what they mean by “understanding.” Technologists need to acknowledge the limits of their AI-powered solutions, both to their customers and, most importantly, to the executives selling those solutions. And customers of AI-powered solutions need to be properly informed about the performance ceiling they should expect from the products they are buying.
Before I was a venture capital investor, I was a data scientist and AI researcher working on human language processing. I once succumbed to the belief in the limitless ceiling of data-driven machine learning and its ability to understand the world. My expectations have since come down to earth, but I still love the power of AI and happily invest in it. I hope all of us, the investing community, the consumer community, and the world of AI users, can learn to accept the limitations of AI and learn to love its power.