Archive for February, 2011

The Lighthill Parliamentary Debate on General Purpose Artificial Intelligence

Posted in General on February 24th, 2011 by Hector Zenil

In 1973, James Lighthill, then Lucasian Professor at Cambridge, was asked by the British Science Research Council to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the failure of AI to achieve its grandiose objectives. He specifically mentioned the problem of “combinatorial explosion” or “intractability” of the universe of discourse, which implied that many of AI’s most successful algorithms would grind to a halt when faced with real-world problems, being suitable only for solving toy versions of them.

The report was contested in a debate broadcast as part of the BBC series “Controversy” in 1973. The debate, on the proposition “The general purpose robot is a mirage,” was hosted by the Royal Institution and pitted Lighthill against a team of researchers including Donald Michie and Richard Gregory, led by AI pioneer John McCarthy. The report led to the near-complete dismantling of AI research in England.

Here is the debate on YouTube (in 6 parts):

This debate goes to show that as recently as 1973 it was generally believed (as indeed it still is in some quarters) that one had to write a complex program to obtain complex behavior, in other words that no simple rule could produce complex behavior (part 2, min. 7:45). This position was epitomized by Lighthill.
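
The canonical counterexample, which the study of simple programs has since made vivid, is that a trivially simple rule can generate enormously complex behavior. Here is a minimal Python sketch (my illustration, not something shown in the debate) of Wolfram’s Rule 30, an elementary cellular automaton whose entire rule fits in an 8-entry lookup table:

    # Wolfram's Rule 30: the new cell value is bit n of the number 30,
    # where n is the 3-bit neighborhood (left, center, right).
    RULE = 30
    TABLE = {tuple(int(b) for b in f"{n:03b}"): (RULE >> n) & 1
             for n in range(8)}

    def step(cells):
        """Apply Rule 30 to one row (cells beyond the edge count as 0)."""
        padded = [0] + cells + [0]
        return [TABLE[tuple(padded[i:i + 3])] for i in range(len(cells))]

    row = [0] * 31 + [1] + [0] * 31   # a single black cell in the middle
    for _ in range(32):
        print("".join(".#"[c] for c in row))
        row = step(row)

Despite having only eight cases, this rule produces a pattern complex enough to have been used as a practical source of randomness, a point very much against the intuition that complex behavior requires a complex program.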

Lighthill’s position does not come as a surprise. He was, after all, a researcher in fluid dynamics and aeroacoustics, where it is easy to be misled by complicated differential equations involving ‘continuous’ variables, and where equations so often turn out to have no solution at all. His main argument was that because one had to specify rules telling the robot how to behave in every possible scenario, any attempt to build a general purpose robot would quickly run into an intractable problem, a combinatorial explosion of possible cases. I don’t entirely disagree with Lighthill on this, but I can by no means endorse his conclusion.

Unfortunately, the response of the researchers on the panel was rather weak, except for a couple of minor arguments put forward by McCarthy against what seemed to be fundamental impediments Lighthill was invoking. From my point of view, one of the items that ought to have been put on the table was a serious discussion of whether universal computation has something to tell us about the possibility of general purpose artificial intelligence (i.e. what the abstract could say about the concrete). Another was the question of the essential differences, if any, between advanced automation and artificial intelligence, which McCarthy seems to have broached in one of his arguments against Lighthill without reaching any conclusion. Indeed, the point may be that there is no essential difference, something that was perhaps difficult to see back then. But instead of counting this as a caveat against AI, as Lighthill seemed inclined to do, it actually makes AI inevitable, even if it is achieved by means other than those traditionally employed in AI labs.

If automation is not really that different from AI, as I think the years since have shown, then automation will eventually lead to what we think of as intelligent behavior, and therefore to artificial intelligence.

In the end, Lighthill’s position was reduced by the moderator (Sir George Porter) to an argument about pragmatic difficulties rather than the fundamental impossibility of the project, in response to which Lighthill quipped that his neuroscientist friends had made it plain that they felt it was hopeless to try to model the brain using a computer. I find several problems with this position, beginning with the assumption that AI is exclusively about trying to mimic the brain.

Donald Michie made a good point concerning the combinatorial explosion Lighthill made so much of, citing the example of chess. In 1973 humans were still better than machines at chess, so he asked whether Lighthill would acknowledge artificial intelligence if a machine performing an exhaustive search managed to win a chess match against a Master.

In 1968, David Levy, then an International Master, bet £1,250 that no computer program would beat him within the next ten years. He won his bet in 1978 by beating Chess 4.7 (the strongest program at the time). Lighthill responded that a chess program would avoid the combinatorial explosion by virtue of having the benefit of human knowledge. In any case, he said, he was reluctant to believe that a chess program would ever beat a human chess player. Lighthill was cleverly hedging his bets.
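
To get a feel for the numbers behind the combinatorial explosion, a back-of-the-envelope sketch helps. Assuming an average branching factor of about 35 legal moves per chess position (a commonly cited estimate), the game tree grows exponentially with search depth:

    # Back-of-the-envelope game-tree sizes for chess, assuming an
    # average branching factor of about 35 legal moves per position.
    BRANCHING = 35

    for depth in (2, 4, 6, 8, 10):
        print(f"{depth:2d} plies ahead: ~{BRANCHING ** depth:.1e} positions")

    # Even at a billion positions per second, searching 10 plies by
    # pure brute force takes 35**10 / 1e9 seconds -- about a month.
    print(f"{35 ** 10 / 1e9 / 86400:.0f} days at 1e9 positions/sec")

This is why chess programs of the era leaned on pruning and hand-crafted heuristics, exactly the “benefit of human knowledge” Lighthill pointed to.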

Lighthill’s position wasn’t totally clear at the end of the debate: was his argument purely pragmatic, or did it concern something fundamental? I’m not wholly unsympathetic to his position, because AI has been misleading as a field. While it has certainly made some contributions to science and technology, it has achieved what it has by dint of technological improvements (hardware capabilities) rather than scientific breakthroughs (e.g. a set of sophisticated new algorithms) on the part of the AI community. I have explained this in some detail in my previous blog post on the IBM computer Watson and the Jeopardy! game, Is Faster Smarter?

Is Faster Smarter? IBM’s Watson Search Engine Approach to Beat Humans

Posted in General, Minds and Machines on February 18th, 2011 by Hector Zenil

IBM’s computer named “Watson” has beaten (human) Jeopardy! contestants in a series of games this month. IBM has a long history of innovations (watch its 100th-anniversary documentary, featuring Gregory Chaitin and Benoit Mandelbrot among others, here).

Not everybody was impressed by Watson, though. Gavin C. Schmitt interviewed Noam Chomsky, the renowned linguist:

  • Noam Chomsky: I’m not impressed by a bigger steamroller.
  • Interviewer: I assume that “a bigger steamroller” is a reference to Deep Blue. Watson understands spoken language and adapts its knowledge based on human interaction. What level of AI would be required to impress you?
  • Noam Chomsky: Watson understands nothing. It’s a bigger steamroller. Actually, I work in AI, and a lot of what is done impresses me, but not these devices to sell computers.

In some sense, I understand Chomsky’s view, and he does well to point out what may seem the clearest difference between Watson and a human being, but the point may be subtler and deeper. Wolfram’s Principle of Computational Equivalence (PCE) (see this previous post) may shed some light on the subject. According to this principle, basically any system that does not behave in an obviously trivial fashion is likely to be as computationally sophisticated as the most sophisticated system. So what is the difference between a system that is potentially intelligent and one that shows signs of actually being so? It is somehow a matter of scale in several directions (memory among them).

Take the example of weather. When Wolfram is asked whether clouds are intelligent according to his principle, his short answer is yes. Every time humans want to predict whether it will rain, it turns out to be incredibly difficult to do so more than a couple of days out, and often the forecast is wrong even for the next day. How is it that weather prediction is so hard despite our long experience at forecasting? Well, clouds are computing themselves, and as part of the weather they are quite complex: as complex as systems like the human brain, says PCE.

(Picture of Gregory Chaitin taken by Hector Zenil outside IBM’s Thomas J. Watson Research Center at Yorktown Heights, N.Y., where Watson was designed)

After all these years, IBM hasn’t come up with a theoretical breakthrough to meet the challenge, but rather with a supercomputer fast enough to beat the other participants. Watson uses fairly sophisticated algorithms, but these aren’t much more sophisticated than those used by search engines, which proceed by matching pieces of text and retrieving other pieces of text statistically related to the original. The IBM team came up with a supercomputer to challenge the Jeopardy! participants, not with a new algorithm. The main goal of machine learning labs is usually to perform about 1% to 3% better than the best benchmark of the top lab in a particular area (e.g. word tagging, word-sense disambiguation, information retrieval, etc.). If Watson has achieved anything like a breakthrough, it may be credited with having advanced the current state of AI research: it takes Google’s kind of technology a couple of steps further by drawing together several technologies on steroids. The point is that one may not need to come up with the smartest algorithm, because one may not be able to; it simply doesn’t make sense to engineer a super-complicated algorithm to reach intelligence, when it is more appropriate to start from a simple but potentially powerful system and then extract the best of it by running it on a super-large corpus of data on a super-fast computer. The Watson-in-Jeopardy! experience tells us that even clouds may look intelligent when running on a supercomputer.
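
To make the comparison with search engines concrete, here is a minimal sketch of the kind of statistical text matching involved. This is textbook TF-IDF retrieval with cosine similarity on a made-up three-sentence corpus, emphatically not IBM’s actual DeepQA pipeline, which combines many more components:

    # Minimal sketch of statistical text retrieval (TF-IDF weights
    # plus cosine similarity) -- textbook matching, not IBM's DeepQA.
    import math
    from collections import Counter

    corpus = [
        "james lighthill wrote a report on artificial intelligence",
        "deep blue beat garry kasparov at chess",
        "watson beat human contestants at jeopardy",
    ]
    docs = [text.split() for text in corpus]
    N = len(docs)
    df = Counter(term for doc in docs for term in set(doc))

    def vectorize(tokens):
        """TF-IDF weight per term; terms unseen in the corpus get none."""
        return {t: c * math.log(N / df[t])
                for t, c in Counter(tokens).items() if t in df}

    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    query = vectorize("which machine beat humans at jeopardy".split())
    for text, doc in zip(corpus, docs):
        print(f"{cosine(query, vectorize(doc)):.3f}  {text}")

The jeopardy sentence wins simply because it shares rare terms with the query; scaled up to terabytes of text and thousands of cores, matching of this statistical flavor can look remarkably clever.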

Watson confirms what I’d suspected: that we’re not that special after all (in this sense). Watson meets the challenge of beating humans on their own turf, at what they do best, basically through the use of brute force. Achieving artificial intelligence (AI) is not, as I suspected (among other thinkers), a matter of scientific breakthrough but rather a matter of scale and technological achievement. Over time we’ll have faster and faster computers, and that means computers with intelligence resembling ours (of course there are other subtle issues here, such as the fact that the system should be allowed to interact with the intelligent forms it is meant to interact with; otherwise its intelligence may prove alien to ours, and look as strange as that of clouds).

(Jean-Michel Jarre with his electronic harp. Picture by Hector Zenil, Paris concert, 2010)

Wired (back when it published interesting articles more often) ran a piece in 2008 that reached the conclusion that one could exchange data for models: The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.

Some interesting inferences can be drawn from this milestone in which an IBM supercomputer has beaten humans at a human task (remember, this is the second time, at least with such publicity; the first was when IBM’s Deep Blue beat Garry Kasparov, the reigning world chess champion at the time, in a series of chess matches). Among them we may single out the fact that we humans seem ready to call machines smart and intelligent when they seem as clever as we are, even though they may not think like us at all (humans can see only a handful of selected moves ahead, while chess programs perform an exhaustive search bounded only by time). This is already a vindication of Turing’s proposal for a test of intelligence.

Yet Chomsky’s opinion seems to point in the opposite direction, namely that we may still think we are much more special than Watson, and he might in some sense be right. We definitely play very differently from machines, but is chess that different from natural language? It may be, but this time the question is whether what Watson does when playing is that different from what we do.

At the end of the Deep Blue vs. Garry Kasparov match, Kasparov pointed out that he felt the machine actually had a strategy, something that made him think it was playing like a human, perhaps even teasing him. Ken Jennings (one of the two human participants against Watson) wrote a day after the match: “This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.”

Some people think that Watson has absolutely no ability to understand what it did, nor any awareness of it, and that this still makes the difference. This might be only partially true. Watson may not understand or yet be aware of its achievement, but I wonder whether the processes that made Watson win this match are not the same ones at work in even more sophisticated human thinking, such as self-awareness. The question can also be reversed: one may ask whether we are really aware of what happened, and how much of our feeling of being aware is the result of the kind of thinking Watson just exhibited.

As many of my readers certainly know, Alan Turing came up with a basic test suggesting that if something looks intelligent, then it is intelligent. So we have reached the point at which not only has a machine arguably passed the Turing test, but humanity may also be ready to accept the idea behind the test: that it doesn’t really matter how something thinks; if it looks clever enough to fool us (or even beat us at something), then it is intelligent. For the machine, the milestone is located at a point in time that reflects the current state of technology, which basically amounts to the miniaturization and mastery of computing devices and the management of very large quantities of data, not forgetting the current state of the fields involved (basically machine learning and natural language processing, or NLP). But it is by no means an isolated achievement. I see IBM as the standard-bearer for a combination of several current technologies run on the best hardware available today, a pioneer in this sense.

(Wolfram|Alpha computational engine control room. Launch-day picture by Hector Zenil)

It seems we are now able to gauge the size and power of the human brain against something that looks as sophisticated as we are, at least at playing a sophisticated game. Watson and humans may reach the same conclusions, whether or not they do so in different ways, but the fact that Watson requires a computer the size of ten full-size fridges, 15 terabytes of memory (likely full of data and programs) and 2,880 processor cores working in parallel tells us more about us than about Watson itself. We knew we carried a supercomputer in each of our heads, but we didn’t know what its specs might be. We also thought we were specially gifted with unprecedented intelligence, but now a machine that is certainly not aware of its achievement, and that hasn’t taken the same path we did, is also able to exhibit key features of intelligence.

Ken Jennings adds after the last match: “…But unlike us, Watson cannot be intimidated. It never gets cocky or discouraged. It plays its game coldly, implacably, always offering a perfectly timed buzz when it’s confident about an answer.” “…I was already thinking about a possible consolation prize: a second-place finish ahead of the show’s other human contestant and my quiz-show archrival, undefeated Jeopardy! phenom Brad Rutter.” Read more here and here.

Watson’s specs will fit in a small box in the future, given the trend of the past several decades following Moore’s law, and in the future, as is the case today, faster will be smarter.
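
A toy extrapolation makes the claim concrete. Assuming hardware density keeps doubling roughly every two years (a sketch of the historical trend, not a guarantee), and taking Watson’s hardware as roughly ten racks, shrinking it to a hundredth of a rack, a small box, takes about ten doublings:

    # Toy Moore's-law extrapolation: assume hardware density doubles
    # roughly every two years (an assumption, not a guarantee).
    import math

    racks_today = 10      # Watson, 2011: roughly ten racks of servers
    target_racks = 0.01   # a "small box": about 1/100 of one rack

    doublings = math.log2(racks_today / target_racks)
    print(f"{doublings:.1f} doublings, about {2 * doublings:.0f} years")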

PBS’s NOVA called its documentary on Watson The Smartest Machine on Earth:
