Archive for August, 2008

The 2008 Midwest NKS Conference: What is computation? How does nature compute?

Posted in General on August 19th, 2008 by Hector Zenil – 1 Comment

2008 Midwest NKS Conference: Call for Papers and/or Participation

GENERAL ANNOUNCEMENT AND CALL FOR PAPERS

What is computation? (How) does nature compute?

2008 Midwest NKS Conference

Fri Oct 31 – Sun Nov 2, 2008
Indiana University — Bloomington, IN

http://www.cs.indiana.edu/~dgerman/2008midwestNKSconference/

In 1964, in one of the six Messenger Lectures he delivered at Cornell University (later published as the book “The Character of Physical Law”), Richard Feynman said: “It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time … So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.”

The topic of the conference has been chosen with this quote in mind. The conference will host a distinguished group of scientists representing different views of a computable universe, ranging from the thesis that Nature performs (only) digital computation, and does so up to a maximal level, to the thesis of Nature as a quantum computer. Some strongly suggest, however, that the true nature of Nature can only be explained by the study of randomness. Randomness itself retains its mysterious reputation: some of these authors hold that it can be generated deterministically in the classical sense, while others claim that “true” randomness, rooted in the principles of quantum mechanics, is necessary to explain the complexity we see around us. This event will be a place of confluence where all these views are presented, discussed and analyzed by the guests and the conference participants themselves. After presenting their views during the three days of the conference, the keynote speakers will take part in a round table discussion on the topic.

Invited speakers:

  • * Charles Bennett (IBM Research)
  • William Bialek (Princeton University)
  • Cristian Calude (University of Auckland)
  • Gregory Chaitin (IBM Research)
  • David Deutsch (Oxford University, via videoconference)
  • Edward Fredkin (Carnegie Mellon University)
  • Tony Leggett (University of Illinois)
  • Seth Lloyd (Massachusetts Institute of Technology)
  • Stephen Wolfram (Wolfram Research)
  • * Leonid Levin (Boston University)

* to be confirmed

Round table moderators:

  • James Gleick (author of Chaos, Genius and Isaac Newton)
  • Gerardo Ortiz (Indiana University Bloomington)
  • Hector Zenil (Univ. of Paris 1 and Univ. of Lille 1)

Conference topics:
Topics of interest for submissions include (but are not limited to):

- Pure NKS projects
- The physics of computation
- Computational physics
- Foundations of computation
- Universality and irreducibility
- Classical (digital) and quantum computation
- Algorithmic information theory

Authors are encouraged to relate the above topics to the conference title (What is computation? (How) does nature compute?) and to the points of intersection between classical computation, quantum computation, algorithmic information theory, and the Principle of Computational Equivalence.
 

Organizing committee:

Adrian German (Indiana University Bloomington)
Gerardo Ortiz (Indiana University Bloomington)
Hector Zenil (Univ. Paris 1 and Univ. of Lille 1)

Misguided research on Artificial Intelligence

Posted in General on August 7th, 2008 by Hector Zenil – Be the first to comment


Athens (photo originally uploaded by hzenilc).

The field of strong AI, sometimes referred to as artificial general intelligence or AGI, is defined as the study of intelligence capable of the same intellectual functions as a human being, including self-awareness and consciousness. I will call the assumed object of study of AGI simply agi, in lowercase, not only to avoid confusion with the current field of research AGI, but also to differentiate agi from what is known as AI, just as AGI is meant to identify a field apart from AI.

I do think that techniques similar to those that led to the discovery of the Game of Life (see the previous post) will eventually produce artificial general intelligence, or agi. The study of the Game of Life is to the Game of Life what the fields of AI and AGI are to ai and agi respectively. In other words, there are no real fields of AI or AGI apart from adopting a suitable approach to the systems we already see around us. Simple systems such as CAs are likely capable of achieving agi of the kind we expect, without our having to do anything else! From my point of view it is just a matter of technological sophistication, of providing the necessary elements and interfaces with the real world, and not of theoretical foundations or the invention of a new mathematical or conceptual framework. If the former is what AGI is focusing on, I think it is headed in the right direction, but to think of the AGI field as building agi from the bottom up would be a mistake of the same kind as thinking of the Game of Life as designed or engineered. I find the terms usually used in the AGI field, “agi design” and “designing agi,” discouraging.

Perhaps an engineering-based approach will succeed first. Whatever the case, it is a very exciting prospect which I look forward to, and I hope to make a contribution in my own field. But that doesn’t change the fact that in the end, just as with the Game of Life, agi will have been engineered and successful because it was already there, rather than because we were clever enough to reinvent intelligence, as we reinvented flying by building airplanes. Current AI does for intelligence what airplanes did for flying. We don’t mind how we fly, but we do mind how machines think or don’t think (at least as a matter of research for AGI). So the common airplane analogy no longer works here. If we haven’t been able to create intelligence of the kind seen in humans, it is because we are still trying to build airplanes. While it is unobjectionable for airplanes to keep flying as they do, flying unlike birds do, current approaches from AI and AGI to agi will just keep producing airplane-type solutions to a non-airplane problem. That does not imply (as the negation of the airplane analogy might suggest) that we have to come up with agi by imitating human intelligence step by step; that is another matter.

I don’t see why intelligence would need to be different in nature from the potential behavior of all the sophisticated systems that are around us, why it should differ from them as airplanes differ from birds. Unfortunately all current research on AI and AGI seems to have a stake in making just such a distinction. While research on complexity theory and complex systems is faring better, it still seems to me to have missed the real problem when it comes to agi.

Systems such as the Game of Life and Rule 110 (both capable of universal computation) potentially contain all the primary ingredients needed for agi, disregarding matters of computational efficiency. I think it is just a matter of time before we are able to provide systems that already have the potential power of general intelligence with the means to reach that power. Perhaps it will turn out to be an engineered approach, just as the Game of Life turned out to be capable of universal computation. But even when the authors of these approaches have the strong impression that they are designing agi, they will end up with just another sufficiently powerful general system capable of agi, one so ubiquitous that it is more in the nature of a discovery than an invention.

“No one has tried to make a thinking machine. The bottom line is that we really haven’t progressed too far toward a truly intelligent machine. We have collections of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack. We have got to get back to the deepest questions of AI and general intelligence and quit wasting time on little projects that don’t contribute to the main goal.”
— Marvin Minsky (interviewed in Hal’s Legacy, Edited by David Stork, 2000)

The Game of Life: A notable discovery, not an invention

Posted in General on August 7th, 2008 by Hector Zenil – 1 Comment


John Conway giving a talk on knots (photo originally uploaded by hzenilc).

It is well known that cellular automata (CA) can exhibit a few specific types of behavior, one of which is precisely the type discovered by John Conway in his Game of Life. I happen to think it is something of a misnomer to speak of John Conway as having “designed” the Game of Life. Working from an initial intuition (using it as a kind of filter), it is quite possible to find a CA of an equivalent type. The procedure is very simple. If you limit the space of all possible 2D CAs to those that are clearly non-trivial (avoiding periodic and fractal evolutions) and apply a simple filter (based on compression, topological parameters or other known techniques), you end up with the two basic classes identified and studied before by Stephen Wolfram, complex and random behavior (ultimately a single class, according to what he calls the Principle of Computational Equivalence, or PCE). The probability that a complex 2D CA will behave like the Game of Life, or like another CA of equivalent sophistication, is very high after the filtering process. As has been demonstrated, many of these interesting but not uncommon 1D, 2D or nD CAs share some fundamental properties, reaching a maximal degree of sophistication (namely complexity/universality).
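The rule Conway settled on is, after all, tiny: a dead cell with exactly three live neighbors is born, and a live cell with two or three live neighbors survives; everything else dies or stays dead. A minimal sketch in Python (the toroidal wrapping of the grid edges is a simplifying assumption of mine; the game is usually stated on an unbounded plane):

```python
def step(grid):
    """One generation of Conway's Game of Life (B3/S23) on a toroidal grid."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count the eight neighbors, wrapping around the grid edges.
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                 or (not grid[r][c] and live_neighbors(r, c) == 3) else 0
             for c in range(cols)]
            for r in range(rows)]
```

Starting from a horizontal row of three live cells (a “blinker”) and applying `step` twice returns the original grid, the simplest of the oscillating patterns the text alludes to.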

All this notwithstanding, it is Conway’s great achievement to have drawn attention to this particular 2D CA and to have analyzed it in such detail. Another example of such a CA is the elementary cellular automaton Rule 110. What’s special about Rule 110 is not that it has been extensively studied by Wolfram and eventually proven universal by him and his assistant M. Cook, though this is no doubt significant. What is truly noteworthy is that Rule 110 shows that universality can be reached even by very simple systems, leading one to expect that all manner of interesting/complex CAs are likewise universal.

Almost every serious attempt to prove that a given arbitrary interesting CA is universal has been successful. Examples range from Conway’s Game of Life to Wolfram’s Rule 110 to Ed Fredkin’s Salt Model and, to cite the most recent, Maurice Margenstern and Yu Song’s universal cellular automaton on the ternary heptagrid (accepted at the 2nd Workshop on Reachability Problems, http://www.csc.liv.ac.uk/~rp2008/accepted.php).

Rule 110, just like the Game of Life, was discovered, not built. These discoveries are the product of an exhaustive (‘random’) search over the whole or part of the space of possible rules, which doesn’t mean that they are random discoveries. One can of course mistake such a discovery for an invention. In low-dimensional CA spaces it is very easy to explore the space of possible rules by changing the rules themselves, letting each system evolve for a few initial steps, and observing the result. This may make it seem as if one has designed the system rather than simply explored a part of the space, but such is not the case. And even if it were, it is, strictly speaking, misguided to think in terms of designing rather than exploring, for the reason given above: all complex systems have the potential to behave with a sophistication equivalent to that of the Game of Life.
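For the elementary (one-dimensional, two-color, nearest-neighbor) CAs this kind of exhaustive search is easy to sketch: there are only 256 rules, and a crude compression-based filter of the sort mentioned earlier already separates regular evolutions from complex or random-looking ones. A sketch in Python (the grid width, number of steps, and the use of zlib as the compressor are illustrative choices of mine, not a published method):

```python
import zlib

def eca_step(cells, rule):
    """One step of an elementary CA; `rule` is 0..255 in Wolfram's numbering."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

def evolve(rule, width=64, steps=64):
    """Run a rule from a single black cell on a circular row of `width` cells."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = eca_step(cells, rule)
        history.append(cells)
    return history

def compressed_size(history):
    """Crude complexity filter: how well does the space-time diagram compress?"""
    data = bytes(c for row in history for c in row)
    return len(zlib.compress(data, 9))
```

Regular rules such as Rule 250 compress far better than random-looking ones such as Rule 30, so ranking all 256 rules by this score surfaces the complex and random cases (Rule 110 among them) with no design step at all, exactly the sense in which such rules are explored rather than invented.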

Furthermore, by what Stephen Wolfram calls “irreducibility,” one cannot predict how a CA rule will evolve (and this applies particularly to complex CAs such as the Game of Life) without actually running the CA (and once it is proven universal, most properties of the CA become undecidable). This explains why the study of CAs has been so successful lately, attracting many minds: we now have the tools (computers) to let these systems run so we can observe how they develop. The idea of irreducible systems is simple and powerful at the same time. Basically it holds that one cannot shortcut any of these interesting complex CAs, which means that there was no way for Conway to test his CA other than running it. Which in turn means that if it had not behaved in the right way, he would have had to change the rule and test again; and even if he had found this particular CA on the first try, he would have had to let it run for some steps before being sure of what he had made. This doesn’t mean he couldn’t have figured out some basic properties of the rule beforehand, but he could never have been certain of his conclusions except by actually running the CA itself. This has been the case with other rules as well, such as Rule 30, which no one has succeeded in cracking by coming up with a function or formula that shortcuts the evolution of the rule, giving the value at an arbitrary step without calculating the previous steps (i.e. without running the CA up to the step in question). Even if Rule 30 turned out to be crackable, computational complexity considerations suggest that most of these systems are irreducible in several respects, from the undecidability of properties of a particular CA to the inability of our systems (including ourselves) to compress the information they generate.

Applying Wolfram’s PCE, a system with a maximal degree of computational sophistication has the same computational power as any other system of maximal degree, so one cannot expect any one of them to figure out what another is doing without emulating it step by step.
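The Rule 30 case is concrete enough to show directly: the rule reduces to the Boolean form left XOR (center OR right), yet as far as anyone knows the only way to obtain the values down the center column is to compute every intervening row. A small self-contained sketch in Python (the padding width is just a convenience of mine to keep the boundary outside the light cone of the center cell):

```python
def rule30_center_column(steps):
    """Center-column values of Rule 30 started from a single black cell.

    No closed-form shortcut is known: every row must be computed in turn.
    """
    width = 2 * steps + 5      # wide enough that the edges never reach the center
    cells = [0] * width
    mid = width // 2
    cells[mid] = 1
    column = [cells[mid]]
    for _ in range(steps):
        # Rule 30: new cell = left XOR (center OR right)
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
        column.append(cells[mid])
    return column
```

The sequence begins 1, 1, 0, 1, 1, … and looks statistically random from there on; this is the very column that has been used as a source of pseudorandomness, which is why “cracking” it with a direct formula would be so remarkable.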

This doesn’t mean that Conway does not deserve credit for the sheer cleverness of the Game of Life and for the discovery of this interesting evolution of a 2D CA. But the Game of Life has become so popular and interesting not because it is rare, but because it has been so extensively studied as a particular instance of an interesting complex 2D CA. There is also the fact that it involves some basic game-theoretic principles pertaining to how life actually develops.

I met John Conway three weeks ago (and took the photo that accompanies this text), and I probably should have brought up the subject with him. However, as far as I remember, Stephen Wolfram himself once asked him how he came up with the Game of Life, and unfortunately he seemed to say that he was no longer certain…