When I began my career way back in 1985, in Silicon Valley, artificial intelligence (AI) was one of the trendier technology plays. I always had – and still have – a problem with the notion of artificial intelligence, mostly because I don’t think we can really define intelligence. And, in fact, the AI folks in the 1980s pretty much agreed, if only reluctantly, and it did not take long for the more practical among them to scale back their grander talk of machine intelligence to the narrower concept of the “expert system.” Expert systems were basically programs that encoded all of the known information (including, in the best cases, probabilistically known information) about a particular field (say, searching for oil), with the idea that they could then answer – to the extent current knowledge included an answer – any question in that field of enquiry.
The venture community financed a bunch of expert system startups in the 1980s. These were premised on the idea that while machine sentience – more or less (there is no generally accepted definition of sentience) the notion that a machine could not only “manipulate” knowledge (“remember” it, if you are of a Platonic bent) but “create” it – was impractical, machine manipulation of knowledge was itself sufficient to create value, if not new knowledge. Alas, the underlying computer technology of the time was, on the whole, not up to the task of creating commercially exciting machines that could outperform human experts even in relatively narrow fields, and by the early 1990s AI and its less ambitious expert system cousins had pretty much disappeared from the commercial technology landscape.
Which brings us to 2011, and IBM’s Watson, a computer system that more or less handily defeated the two most successful human champions of the popular game show Jeopardy. It was quite an accomplishment, one that some observers suggest heralds a new era of AI research and, in the not too distant future, AI commercialization. But does it?
My own take is that Watson does not herald the re-emergence of AI, but rather the re-emergence of expert systems, this time powerful enough to be commercially useful managers of knowledge. But managers are not creators. Manipulating data, even vast amounts of data, accurately, more or less precisely (Watson made some humorous mistakes, for example suggesting that Toronto was a city in the United States) and blazingly fast, is not the same thing as discovering heretofore unknown information. Watson, in other words, may be able to parse the writings of a Shakespeare, but Watson, at least the Watson I saw playing Jeopardy, did not impress me as being able to write original literature of Shakespearian proportions. Watson is no more (or less, I think) sentient than the best expert systems of the 1980s, which is, finally, to say it is no more intelligent than those systems.
Which is a good thing. Personally, I am glad that Watson heralds a new era of expert systems robust enough to be useful servants of mankind. A sentient Watson, on the other hand, would be a more problematic prospect. How far out that is – now there is a, if not the, question….