Conferences

On the Foundations of Quantum Mechanics, The Netherlands

Posted in Complexity, Conferences, Foundations of Physics, New Ideas on November 15th, 2007 by Hector Zenil – Be the first to comment



Models and Simulations 2
11 – 13 October 2007
Tilburg University, The Netherlands

I attended this conference one month ago. Among several interesting talks, one in particular caught my attention. It was given by Michael Seevinck from the Institute for History and Foundations of Science at Utrecht, The Netherlands. His talk was about the foundations of Quantum Mechanics, and it brought to mind many NKS-related topics. He talked about reconstructing Quantum Mechanics (QM) from scratch by exploring several restricted models, in order to address the so-called measurement problem, the nonlocality of quantum correlations, and QM's alleged non-classicality, there being no consensus on the meaning of Quantum Mechanics (Niels Bohr once said: "If you think you have understood quantum mechanics, then you have not understood quantum mechanics." More quotes of this sort on QM here). The restrictions chosen in order to reconstruct the theory must be physical principles and not theoretical assumptions. In other words, one approaches the problem in the opposite direction from the traditional one, taking the fewest possible restrictions and exploring the theories that can be built on them. The speaker characterized this approach as the "study [of] a system from the outside" in order to "reconstruct the model". It is basically a pure NKS approach: "Start from a general class of possible models and try to constrain it using some physical principles so as to arrive at the model in question (in this case QM)."

One can then proceed to ask such questions as how one might identify QM uniquely, what it is that makes QM quantum, what set of axioms in the model is to be used, and which of them are necessary and sufficient. The question of meaning, previously asked of the formalism, is removed, and bears, if at all, only on the selection and justification of first principles. Seevinck quoted the following interesting statement: "The partially ordered set of all questions in QM is isomorphic to the partially ordered set of all closed subspaces of a separable Hilbert space" (one of Mackey's axioms in his axiomatisation of 1957). He added: "They (the principles) have solely an epistemic status. The personal motives for adopting certain first principles should be bracketed. One should be ontologically agnostic. The principles should be free of ontological commitment." And further: "…axioms are neutral towards philosophical positions: they can be adopted by a realist, instrumentalist, or subjectivist." He cited Clifton, Bub and Halvorson, who derived quantum theory from the following quantum information constraints:

1. No superluminal information transfer via measurement.

2. No broadcasting.

3. No secure bit commitment.

Seevinck's methodology in further detail: start with a general reconstruction model with a very weak formalism. Gradually see which (quantum) features are consequences of which added physical principles, and which features are connected. One thereby learns which principle is responsible for which element in the (quantum) theoretical structure.

One can generate further foundational questions over the whole space of restricted models, e.g.  how many of them:

- forbid superluminal signalling?

- allow nonlocality, and to what extent?

- solve NP-complete problems in polynomial time?

An important question which arises is whether intrinsic randomness would be of a different nature in different models, or whether all of them would reduce to deterministic randomness.

His talk slides are available online. Highly recommended.

Among other interesting people I met was Rafaela Hillebrand, of the Future of Humanity Institute at Oxford University. The Institute's director, Nick Bostrom, has proposed an interesting theory concerning the likelihood that our reality is actually a computer simulation. I have myself approached the question in my work on experimental algorithmic complexity, in particular on the testability of the simulation hypothesis and how much skepticism it warrants. I will post on that subject later. The subject of thought experiments (in which I have an interest) came up frequently.

Gregory Chaitin cutting an Omega cake surrounded by Leibniz cookies.

Posted in Conferences, General on November 3rd, 2007 by Hector Zenil – Be the first to comment

The NKS Science Conference 2007 held at the University of Vermont included a special session featuring the contributors to the volume  “Randomness and Complexity: From Leibniz to Chaitin” (see related post),  recently published by World Scientific and edited by Cristian Calude. The session was organized by Calude and myself.

The program was as follows:
9:45am-12 noon
A. Presentations from “Randomness & Complexity: From Leibniz to Chaitin”, Angell Lecture Center B106:

* Cristian Calude, “Proving and Programming”
* John Casti, “Greg Chaitin: Twenty Years of Personal and Intellectual Friendship”
* Karl Svozil, “The Randomness Information Paradox: Recovering Information in Complex Systems”
* Paul Davies, “The Implications of a Cosmological Information Bound for Complexity, Quantum Information and the Nature of Physical Law”
* Gordana Dodig-Crnkovic, “Where Do New Ideas Come From? How Do They Emerge? Epistemology as Computation (Information Processing)”
* Ugo Pagallo, “Chaitin’s Thin Line in the Sand. Information, Algorithms, and the Role of Ignorance in Social Complex Networks”
* Hector Zenil, “On the Algorithmic Complexity for Short Sequences”
* Gregory Chaitin, “On the Principle of Sufficient Reason”

Calude began by talking about  “Randomness and Complexity: From Leibniz to Chaitin”, published to mark Gregory Chaitin’s  60th birthday.

The blog entry of my presentation is posted here:

http://blog.wolframscience.com/

while an extended version of the published paper (co-authored with Jean-Paul Delahaye)  from which that presentation was culled is available here:

http://arxiv.org/abs/0704.1043

Following the  presentations, there was a panel discussion on the subject “What is Randomness?” organized by myself  in collaboration with Cristian Calude (who edited the book), and Wolfram Research’s Catherine Boucher and Todd Rowland. It was held at the Angell Lecture Center and  featured Cristian Calude himself, John Casti, Gregory Chaitin, Paul Davies, Karl Svozil and Stephen Wolfram.




Gregory Chaitin cutting his Omega cake surrounded by Leibniz cookies

We  had a good time discussing various topics of interest  at a  luncheon on the university campus and again at dinner the following night in downtown Burlington. At the luncheon, Stephen Wolfram provided an overview of Chaitin’s prominent career as a pioneer of  algorithmic information theory and then invited Chaitin to cut an Omega cake surrounded by Leibniz cookies.

On the possible Computational Power of the Human Mind

Posted in Conferences, Minds and Machines on March 13th, 2007 by Hector Zenil – Be the first to comment

My paper On the possible Computational Power of the Human Mind (co-authored with my BSc thesis advisor Francisco Hernández-Quiroz of the Mathematics Department of the National University of Mexico [UNAM]), which I delivered as a lecture two years ago at the Complexity, Science & Society 2005 conference at the University of Liverpool, U.K., has recently been published by World Scientific as a book chapter. It is available from World Scientific at http://www.worldscibooks.com/chaos/6372.html, as a paper, or from Amazon.
The book is edited by Carlos Gershenson, Diederik Aerts (Brussels Free University, Belgium) & Bruce Edmonds (Manchester Metropolitan University Business School, UK).

Worldviews, Science and Us: Philosophy and Complexity (book cover)
Introduction: Scientific, technological, and cultural changes have always had an impact upon philosophy. They can force a change in the way we perceive the world, reveal new kinds of phenomena to be understood, and provide new ways of understanding phenomena. Complexity science, immersed in a culture of information, is having a diverse but particularly significant impact upon philosophy. Previous ideas do not necessarily sit comfortably with the new paradigm, resulting in new ideas or new interpretations of old ideas. In this unprecedented interdisciplinary volume, researchers from different backgrounds join efforts to update thinking upon philosophical questions with developments in the scientific study of complex systems. The contributed papers cover a wide range of topics, but share the common goal of increasing our understanding and improving our descriptions of our complex world. This revolutionary debate includes contributions from leading experts, as well as young researchers proposing fresh ideas.

Contents:
* Restricted Complexity, General Complexity (E Morin)
* Complexity Science as an Aspect of the Complexity of Science (D Mikulecky)
* On the Importance of a Certain Slowness (P Cilliers)
* Simplicity Is Not Truth-Indicative (B Edmonds)
* Why Diachronically Emergent Properties Must Also Be Salient (C Imbert)
* Some Problems for an Ontology of Complexity (M McGuire)
* Physical Complexity and Cognitive Evolution (P Jedlicka)
* Informational Dynamic Systems: Autonomy, Information, Function (W Riofrio)
* The Complexity of Information-Processing Tasks in Vision (J Symons)
* On the possible Computational Power of the Human Mind (H Zenil & F Hernandez-Quiroz)
and other papers

International Union of History and Philosophy of Science Conference on calculability and constructivity.

Posted in Conferences on December 3rd, 2006 by Hector Zenil – Be the first to comment

A Conference of the International Union of History and Philosophy of Science (Joint Session of the Division of Logic, Methodology and Philosophy of Science and of the Division of History of Science and Technics) was held in Paris at the Ecole Normale Supérieure on November 17-18th 2006. It was organized by Jacques Dubucs. Appointed president of the joint conference by the IUHPS,  he is also the director of the IHPST and  was   my master’s thesis adviser.

The conference began with an interesting talk by Yiannis Moschovakis, professor of mathematics at UCLA, about elementary algorithms, which according to him are usually defined in terms of mathematical models of computers with "unbounded memory". His talk was based on his article "What is an algorithm?" (a very fine piece, in my opinion) included in Mathematics Unlimited - 2001 and Beyond, edited by B. Engquist and W. Schmid and published by Springer in 2001 (also available online). Such definitions, he argued, do not coincide with our basic intuitions about algorithms. He provided examples of definitions that eschew abstract machines, for instance McCarthy's recursive programs, or elementary (first-order) algorithms. He took the common example of the GCD and asked whether the Euclidean algorithm is optimal among all algorithms from given primitives (there are upper bounds of the form $c_E(x,y) \leq 2\log_2 \min(x,y)$ on the number of divisions it performs, and definitions based on partial and pointed algebras). He then asked what an isomorphism between algorithms would be and defined what he called a "recursor": a tuple of monotone functions on complete posets which determine a system; it models an algorithm. A natural notion of recursor isomorphism models algorithm identity (which he seems to believe is the finest notion of algorithm identity). He went through concepts like "program congruence" and a related theorem (easily provable for McCarthy programs): every program E is reducible to a unique, up to congruence, irreducible program cf(E) (its canonical form), which is the optimal algorithm. He then moved on to more abstract notions such as "referential intension" and "partial algebras", which are covered in his paper. He defined a complexity measure he called "complexity functions for programs", the basic idea being to count the number of calls to some primitives: for example, the number of calls to the primitives q1, q2, …, qk in the computation of M(x, y1, …, yk) using a McCarthy program E, where the input x can be a vector. He also talked about a depth complexity measure, and stated a Church-Turing thesis for elementary algorithms along the following lines: every algorithm which computes a partial function f: A^n -> A from q1, …, qk is faithfully represented by a recursive program on some logical extension B of a partial algebra A = (A, 0, 1, q1, …, qk). On the basis of this assumption it would seem that the lower bound results obtained by the embedding method hold for all elementary algorithms.
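
As a rough illustration of the idea of counting calls to primitives (my own sketch in Python, not Moschovakis's formalism), one can instrument Euclid's algorithm and count how many times it invokes its one primitive, the remainder operation:

def gcd_with_cost(x, y):
    """Return (gcd(x, y), number of calls to the remainder primitive)."""
    calls = 0
    while y != 0:
        x, y = y, x % y   # one call to the 'rem' primitive
        calls += 1
    return x, calls

print(gcd_with_cost(231, 165))   # -> (33, 3)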

After Moschovakis, Prof. Serge Grigorieff of the Mathematics Department of the University of Paris VII gave a talk on the foundations of randomness in terms of recursion theory, which was of particular interest to me given my current research on the topic. According to him, there is no formal notion of a random object in probability theory; random variables are entities having nothing to do with random objects. He talked about Berry's paradox, as instanced by the phrase "the smallest number that needs at least n words to specify it, where n is large" or by "the first undefinable ordinal", and about a solution that replaces "specify" or "describe" with "compute". Then he went through the traditional definition by Kolmogorov: K_f(x) = min{|p| : f(p) = x}, where f can be interpreted as a computer or compiler, p a program taking no input, and x of course the object whose complexity is to be measured. In terms of compressibility, if K(x) >= |x| then x is incompressible (random), and if K(x) >= |x| - c then x is c-incompressible. We know of course that K is in general non-computable.
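
Since K is uncomputable, in practice one works with computable upper bounds. A crude, hedged illustration in Python: the length of a lossless compression of x (here with zlib) bounds K(x) from above up to a constant, and it already separates regular from random-looking strings:

import os
import zlib

def compressed_length(x: bytes) -> int:
    # An upper bound (up to an additive constant) on K(x), not K(x) itself.
    return len(zlib.compress(x, 9))

regular = b"01" * 500          # highly regular: compresses to a few dozen bytes
random_like = os.urandom(1000) # incompressible with overwhelming probability

for name, s in [("regular", regular), ("random-like", random_like)]:
    print(name, len(s), "->", compressed_length(s))
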
Following that, Prof. Grigorieff stated the invariance theorem, which basically says that the choice of f (the reference machine) changes K only by an additive constant. The problem, according to Kolmogorov (1964) himself, lies with these constants, since only "reasonable" optimal functions will lead to meaningful complexity estimates. In 1965 Martin-Löf gave another, equivalent definition: x is c-random if Delta(x) <= c, i.e. x passes statistical tests at significance level c. From Kolmogorov's and Martin-Löf's work we then know that incompressible = random: if the size of x equals the size of the smallest program p producing x, then x is random. Next he cited Chaitin's prefix-free approach, and finally pointed out the equivalence between them all. However, all these definitions are weak in some fashion. Take for instance the question of invariance ("reasonable" variance, if you prefer), or fragility: a string a0, a1, … may be random, while the same string with some regularity inside, such as a fixed digit in particular positions (a0, 0, a1, 0, …), would no longer be so. Kolmogorov extended his own idea to infinite objects but it did not work; Martin-Löf's random sequences do satisfy the requirements. I found some ideas related to Schnorr, martingales and Solovay relevant to the concept of irreducibility.

Michael Detlefsen, philosophy professor at the University of Notre Dame, USA, gave another interesting lecture, more philosophically oriented, in which he merged discussion of construction, computation, exhibition and proofs. He made some remarks on the concept of proof: a proof is a sequence of judgments, and a proof needs to use reference to judge. We then moved on to something that caught my attention, since it is a topic to which I have given considerable thought: the role of mental or graphic representation in proofs, and proofs that need more powerful tools and a higher language to prove a statement posed in a lower, less powerful language. For example, according to Detlefsen, Frege (on the existential requirement of proof) pointed out that the use of objects like the square root of -1 in proofs in real analysis would immediately be seen as distractors; some proofs use the square root of -1 even though that magnitude does not occur in the real-analysis theorem to be proven, and such proofs would collapse if that number were taken out. From my point of view the use of the square root of -1 and equivalent cases can have even more implications: they could mean that proofs sometimes use a stronger axiomatization in order to force a proof of a statement posed in a less powerful axiomatization (I am thinking, for example, of the case of Fermat's last theorem, but that is a subject for a separate post). According to Frege (who was not talking about Fermat's last theorem, of course), we import something foreign into arithmetic. Gauss himself asked the same question about the significance of this foreignness of symbols and even more powerful tools. A concern with "purity" played a large part in Frege's logicism. These questions seem to have engaged everyone from Proclus to Leibniz to Frege, and they can be found in papers about reference and rigor. Following this line of thought, in mathematics figures do more than simply "refer" to the objects they represent. In geometry objects are represented by entities of the same kind (lines by lines, circles by circles), but in algebra the use of signs to represent objects avoids explicit reference.
According to Hobbes, the prover maintains contact with the reality to be proved by exhibiting it, that is, by manifestly generating it through a process of efficient causation. According to Francis Maseres, visually exhibiting the objects of geometrical reasoning increases rational confidence in its premises.

Another interesting lecture was given by Maurice Margenstern, computer science professor at the Université Paul Verlaine, Metz, France, on the "Computer Science Revolution". The title did not really reflect the rich content of the talk, which brought home to me the importance of two concepts almost completely ignored in computer science and which seem to be of fundamental and foundational value. We have heard a lot about the equivalences between proof, algorithm and program (Curry-Howard, for example, and the concept of currying). However, an algorithm is a plan of execution while a program is the execution itself. According to Margenstern, time is the key concept in both an algorithm and a program (and, I would add, the arrow of time). The "equals" sign in a proof or in a program often means "compute", and what is merely a description of something to do becomes an actual computation: replacing "=" with ":=", a non-symmetrical relation, introduces the role of time into computation (see the small illustration below). I have much more to say in this regard, but I'll do so in a separate post, since it is part of my personal view and pertains to the basic requirements of a computation, which include the notion of time of course, but also the notions of the carrier and the medium, all of which are matters requiring in-depth analysis. Computation is often described in terms of an input, an output, and an intermediate process, but what goes on inside an actual computation deserves detailed analysis, and from my point of view it is inseparable from certain physical constraints.
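
A tiny illustration of the point, mine rather than Margenstern's: read as an equation, x = x + 1 has no solution, but read as an assignment executed in time it is a perfectly ordinary computation step.

x = 0
for _ in range(3):
    x = x + 1   # ":=" rather than "=": the old x produces the new x, one time step later
    print(x)    # prints 1, 2, 3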

Before the last lecturer I took some random notes: computability is an epistemological notion. Constructivism refined by Martin-Löf. Four features of finitism: a domain D is admissible if it is r.e.

The last lecturer was Wilfried Sieg, professor in the Philosophy Department at Carnegie Mellon University. His lecture was based on his most recent paper, "Church Without Dogma: Axioms for Computability". Prof. Sieg's talk drew attention to some comments made by Godel about Turing (in a letter to Martin Davis). According to Godel, Turing's work includes an analysis of the concept of "mechanical procedure" (I think it worth drawing attention here to Godel's so-called dichotomy, which casts doubt on the validity of Turing's approach to the human brain). An additional comment from me: in the history of calculability, several terms have served as equivalents for the vague concept in the first part of Church's thesis: "algorithm", "effective procedure", "verifiable by hand", "computable function", "effective function", "feasible computor", "mechanical procedure", "finite combinatorial procedure", "finite machine", and almost any permutation of them. According to Sieg there are two basic constraints on a mechanical procedure:

- Boundedness: there is a fixed bound on the number of configurations a computer can immediately recognize, and
- Locality: a computer can modify only immediately recognizable subconfigurations.

He talked about his own definition of a k-graph machine (equivalent to a string machine), with F: D -> D an operation transforming states into states. He takes finiteness for granted, from which I deduce that he inclines to the view that Church's thesis has no content and is therefore neither a thesis nor a hypothesis. He presented a nice diagram in which he drew arrows from the concept of effective calculability to several computation systems, such as lambda-definability, general recursive functions and Turing machines, all of which have been proven equivalent. He gave us his definition of a computor (a human computer or a mechanical device, for instance), inspired in some way by Gandy machines, as he himself put it, and he noted the two main constraints which, according to him, are basic to the definition of what is effectively computable: M = (S, T, G) is a T-computor on S when S is a structural class, T a finite set of patterns and G a structural operation. He then presented a list of statements expressed in first-order logic which can be found in his original paper.

Seth Lloyd’s quantum universe view

Posted in Complexity, Computability, Universality and Unsolvability, Conferences, Minds and Machines on November 22nd, 2006 by Hector Zenil – Be the first to comment


In an exchange of emails, Seth Lloyd and I discussed the topic I wrote about some posts ago. Here is some of it.

According to Lloyd, there is a perfectly good definition of a quantum Turing machine (basically, a Turing machine with qubits and extra instructions to put those qubits in superposition). A universal quantum computer is a physical system that can be programmed (i.e., whose state can be prepared) to simulate any quantum Turing machine. The laws of physics support universal quantum computation in a straightforward way, which is why my colleagues and I can build quantum computers. So the universe is at least as powerful as a universal quantum computer. Conversely, he says, a number of years ago he proved that quantum computers could simulate any quantum system precisely, including one such as the universe that abides by the standard model. Accordingly, the universe is no more computationally powerful than a quantum computer.

The chain of reasoning behind the quantum computer universe view seems to be that 1 and 2 imply 3, where the premises 1 and 2 and the conclusion 3 are:

1. The universe is completely describable by quantum mechanics.
2. Standard quantum computing completely captures quantum mechanics.
3. Therefore the universe is a quantum computer.

Seth Lloyd claims to have proved the connection between 1 and 2, which probably puts the standard (or some standard) theory of quantum mechanics and the standard quantum computing model in an isomorphic relation with each other.

Lloyd’s thesis adds to the conception of the Universe as a Turing computer an important and remarkable claim (albeit one that depends on the conception of the quantum computer), viz. that the Universe is not only Turing computable, but because it is constituted by quantum particles which behave according to quantum mechanics, it is a quantum computer.

In the end, the rigid definition of the qubit, together with the versatility of possible interpretations of quantum mechanics, makes it difficult to establish the boundaries of the claim that the universe is a quantum computer. If one assumes that it is a standard quantum computer, in the sense of the definition of a qubit, then a description of the universe in these terms assumes that quantum particles encode only a finite amount of information, as the qubit does, and that the qubit can be used for a full description of the world.

Quantum computation may, however, have another property that could make it more powerful than Turing machines, as Cristian Calude et al. have suggested: the production of indeterministic randomness for free. Nevertheless, no interpretation of quantum mechanics rules out the possibility of deterministic randomness even at the quantum level. Some colleagues, however, have interesting results establishing that hidden variables theories may require many more memory resources to keep up with known quantum phenomena. In other words, hidden variable theories are more expensive to assume: the memory needed to simulate what happens in the quantum world grows about as badly as it could for certain deterministic machines. But still, that does not rule out other possibilities, not even hidden variables theories, even if they are not efficient in traditional terms.

This is important because it means one does not actually need 'true' randomness, the kind of randomness assumed in quantum mechanics. So one does not really need quantum mechanics to explain the complexity of the world, or to underlie reality in order to explain it; what one does require, at least in this informational worldview, is computation. Unlike Lloyd and Deutsch, we think it is information that may explain some quantum phenomena, rather than quantum mechanics that explains computation (or the structures in the world and how it seems to unfold algorithmically), so we put computation at the lowest level, underlying physical reality.

On this view the universe is a quantum computer computing its future state from its current one. The better we understand and master such theories, the better prepared we will be to hack the universe in order to perform the kind of computations (quantum computations) we would like to perform.

I would also agree with Rudy Rucker as to why Seth Lloyd assigns such an important role to quantum mechanics in this story. Rudy Rucker basically says that, being a subscriber to quantum mechanics, Lloyd doesn't give enough consideration to the possibility of deterministic computations. Lloyd writes: "Without the laws of quantum mechanics, the universe would still be featureless and bare." I am one among many (including Stephen Wolfram) who agree that it is unlikely that the universe is a cellular automaton, simply because cellular automata seem unable to reproduce quantum behavior from empirical data (though note that Petri and Wolfram himself attempt explanations of quantum processes based on nets). Even so, there is absolutely no need to rush headlong into quantum mechanics. If you look at computer simulations of physical systems, they don't use quantum mechanics as a randomizer, and they seem able to produce enough variation to feed a computational universe. Non-deterministic randomness is not necessary; pseudorandomness or unpredictable computation seems to be enough.

Is the Universe a Computer? (Ist das Universum ein Computer?) Conference, Berlin, Germany, 6-7 November 2006

Posted in Computability, Universality and Unsolvability, Conferences, Minds and Machines, New Ideas on November 12th, 2006 by Hector Zenil – 2 Comments


Ist das Universum ein Computer?

http://www.dtmb.de/Aktuelles/Aktionen/Informatikjahr-Zuse/

Germany, November 2006, Informatik Jahr
Deutschen Technikmuseum Berlin
From Konrad Zuse’s Invention of the Computer to his “Calculating Space” to Quantum Computing.

Lesson One: For someone with a hammer in his hand, the world seems to be a nail. Joseph Weizenbaum.

Lesson Two: Knowing the input and the transition function of a Turing machine we know everything about it. Marvin Minsky.

Dr. Zuse’s futuristic drawing

- The first talk, "What Can We Calculate With?", by Prof. Dr. Bernd Mahr of the Technische Universität Berlin, was a very good introduction to the standard theory of computation based on Turing's model and classical mathematical logic. His remarks on computing having arisen out of mathematics when the Greeks discovered they were unable to compute the square root of 2 were interesting. He pointed out some evident but not always explicit facts: a calculation has a subject (the individual who calculates), an object (what is calculated), a medium (how it is calculated), and a symbolic representation (the language, binary for instance). His use of the Leibniz medallion to explain starting points, ending points and transitions (intermediate calculations) in a calculation was elementary but interesting. Further explanations of reversibility and non-linearity using transition nets were also illuminating. The context of a calculation (or computation) and the strong relation between the computation itself and its context are such that it is sometimes difficult to distinguish them. Since any process can be seen as an object in itself, the context can become the calculation, and so can the context of the context. In some way, as we know, the concept of a calculation is a constraint on part of a calculation, and it is then defined in terms of an input, an output and a transition. He pointed out too that behind many of these complicated systems there is a Turing machine: it is no longer visible from the top, but it is there.

Dr. Konrad Zuse

- Dr. Horst Zuse’s  talk titled “Konrad Zuse’s Calculating Space (Der Rechnende Raum)”:
Dr. Konrad Zuse's son, Dr. Horst Zuse, made some interesting remarks about his father's book "Calculating Space", in which Konrad Zuse proposes studying the universe as a digital system, specifically a cellular automaton. Dr. Horst Zuse is a professor at the Technische Universität Berlin and his personal webpages can be found at: www.cs-tu-berlin.de/~zuse and www.zuse.info

Dr. Konrad Zuse’s son, Dr. Horst Zuse

Konrad Zuse's main question was: "Is Nature Digital, Analog or Hybrid?" It seems that he tended to answer "Digital", proposing a No-Yes value language (binary). His thoughts were published in the Nova Acta Leopoldina. He did acknowledge that there could be problems in attempting to reconcile an explanation of the Universe in terms of cellular automata with Quantum Mechanics and General Relativity.

According to Konrad Zuse, the laws of physics could be explained in terms of the laws of switches (in the computational sense): physical laws are computational approximations captured by the formulas in our models. He saw that differential equations could be solved by digital (hence discrete) systems.

Dr. Carl Adam Petri at the Berlin conference

- Dr. Carl Adam Petri (yes, the creator of the Petri nets!) on “Rechnender Netzraum” or “Computing Net Universe”:
According to Dr. Petri, at the Planck length quantum computing can be described by digital systems using combinatorial models (net models, kennings, combinat), and therefore the universe can be studied using discrete nets which are even capable of explaining quantum and relativistic fundamentals like Bell’s theorem and Heisenberg’s uncertainty principle. That would mean that discrete systems (for instance those proposed by Stephen Wolfram) would suffice to explain even quantum and relativistic phenomena.
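
For readers unfamiliar with the formalism, here is a minimal sketch (in Python, my own and entirely generic) of the basic Petri-net firing rule with places, transitions and tokens; Dr. Petri's quantum and relativistic nets are of course far richer than this toy:

from collections import Counter

class PetriNet:
    def __init__(self, transitions):
        # transitions: name -> (list of input places, list of output places)
        self.transitions = transitions

    def enabled(self, marking, t):
        ins, _ = self.transitions[t]
        return all(marking[p] >= n for p, n in Counter(ins).items())

    def fire(self, marking, t):
        # Firing consumes one token per input place and produces one per output place.
        if not self.enabled(marking, t):
            raise ValueError(f"transition {t} is not enabled")
        ins, outs = self.transitions[t]
        m = Counter(marking)
        m.subtract(Counter(ins))
        m.update(Counter(outs))
        return +m   # drop places left with zero tokens

# A two-place handshake: a token alternates between 'idle' and 'ready'.
net = PetriNet({"produce": (["idle"], ["ready"]),
                "consume": (["ready"], ["idle"])})
marking = Counter({"idle": 1})
for step in ["produce", "consume", "produce"]:
    marking = net.fire(marking, step)
    print(step, dict(marking))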

According to Petri, measurement is equivalent to counting: in the SI, for instance, one second is 9,192,631,770 periods of the caesium radiation. In fact Norbert Wiener proposed an axiomatics of measurement.

The correspondence of Petri nets with the Heisenberg uncertainty principle arises from the limitations of our observational capacities when carrying out measurements. When two different types of observations are performed, for example momentum p and position q, we can only see p or q in a chain of successive events related by a causality net. His nets, as well as his explanations of such phenomena, are very neat and elegant. The relevant slides on causality and linear logic may be found at:

http://www.informatik.uni-hamburg.de/TGI/mitarbeiter/profs/petri/slides/

He also distributed a CD with his slide presentation at the conference.

For Petri, the Universe is a Petri Net.

Dr. Seth Lloyd’s presentation at the conference in Berlin, Germany

- Dr. Seth Lloyd’s talk entitled “The Universe is a Quantum Computer”:
Some researchers think of information as more fundamental than physics itself, and it is common practice in science to keep finding more fundamental structures underlying the previous ones. Others, such as Seth Lloyd and David Deutsch, strongly favor putting a quantum reality at the lowest level of the description of reality, since they hold that information only makes sense if it is carried and represented by a physical entity. These authors have therefore developed world theories based on the concept of quantum computation.

A quantum computer differs from a Turing machine in that its bits are quantum bits (qubits), which can exist in superposition, and it can be instructed to put those bits into superpositions. What a quantum computer does is still Turing computable, but whereas a universal Turing machine can simulate any other Turing machine with only polynomial overhead, a typical quantum computation of $T$ steps on $N$ qubits requires $O(2^N)$ bits and $O(T\,2^{2N})$ logical operations on a classical Turing machine. That is, it takes a Turing machine exponential amounts of time and space to simulate a quantum computer or quantum Turing machine.
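
A back-of-the-envelope sketch of where the exponential cost comes from (my own, assuming numpy): a dense classical representation of N qubits stores 2^N complex amplitudes, and a single-qubit gate built naively becomes a 2^N x 2^N matrix.

import numpy as np

def zero_state(n):
    state = np.zeros(2**n, dtype=complex)   # 2^n amplitudes for n qubits
    state[0] = 1.0                          # the basis state |00...0>
    return state

def apply_hadamard(state, target, n):
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    op = np.array([[1.0 + 0j]])
    for q in range(n):                      # build I (x) ... (x) H (x) ... (x) I
        op = np.kron(op, H if q == target else np.eye(2))
    return op @ state                       # exponential in n, which is the point

print(apply_hadamard(zero_state(2), 0, 2))  # equal superposition on the first qubit
for n in (10, 20, 30, 40):
    print(n, "qubits ->", 2**n, "amplitudes to store")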

The standard quantum computer as shaped by Feynman and Deutsch, among others, differs from the classical computer in the concept of the qubit. While the qubit takes advantage of a clear asset the quantum world provides, entanglement, it does not necessarily make use of the full properties of quantum mechanics as interpreted under the Copenhagen interpretation. The convenient definition of the qubit means that a quantum computer does not compute more than classical digital computers, only much faster.

Lloyd’s conclusion is that the universe is not only a computer but a quantum computer. However some questions arise:

1. What exactly is a quantum computer? Does the definition of a quantum computer actually capture all quantum phenomena? According to the standard model, quantum computing is Turing computable (disregarding running time). If Lloyd is assuming the standard model, then the universe is indeed a quantum computer, but even more remarkably (since we have scientific and philosophical hypotheses like the Turing thesis) it is Turing computable. However, if he is assuming the more general quantum mechanical model, let's say the standard model in physics (which basically assumes the possibility of harmless continuity rather than inquiring into it), then he is saying that the universe is not a computer (since the term derives from what we mean by Turing computable and hence covers digital computers too). So the assumptions made are significant and cannot be glossed over if one wishes to argue convincingly that the universe is a computer in some standard sense. If what we take to be computing is merely something that seems to be deterministic or rule-based, the concept becomes fuzzy and additional remarks need to be made.

2. What if a quantum particle encodes more information than just a 1 or 0 for the spin or any other quantum property? Say a third value, or, even worse, a non-computable number of values? In quantum mechanics, for example, the superposition of a particle ranges over an infinite and non-countable number of states, since it is everywhere in the spectrum at the same time. If space/time is a continuum then the particle is evidently in a non-countable number of positions, which leaves us with a non-computable model, or at least with something that is neither a Turing-computable model nor a standard quantum-computable model. And this is not a simple assumption, since it requires another Turing-type thesis and in the final analysis does not answer the most fundamental question, namely whether the universe is a computer or not, and if it is, what kind of computer (in the sense of computational power) it is.

I raised these questions with Seth Lloyd and I will be posting his answers soon.

Seth Lloyd at the conference in Berlin

An interesting concept mentioned by Seth Lloyd is "hacking the universe". As Charles Bennett used to say, a classical computer is a handicapped quantum computer. So if Lloyd is right, a classical computer is not only a handicapped quantum computer, it also fails to take advantage of the full computational power of the universe: it merely patches the universe instead of hacking it, as it would be within its power to do. By contrast, a quantum computer uses particles that are already computing "something" (nothing less and nothing more than the processes of the universe) to perform the computation we want it to perform. It can be said to be "hacking the universe" in Lloyd's terms.

On the other hand, if the notion of programmer monkeys is valid, it should be possible to test it experimentally. Under the supervision of Jean-Paul Delahaye, computer science professor at the University of Lille I (http://www2.lifl.fr/~delahaye/), we are undertaking this task. We are exploring Lloyd's quantum computational universe (or at least a handicapped but representative part of it, the recursive computational universe), applying some complexity measures (the universal distribution, average-case complexity, or Levin's measure) in order to uncover the monkeys behind the Universe, or in other terms, to analyze the average distribution of the outputs of randomly chosen discrete systems with random inputs.
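
For reference, and as standard textbook material rather than anything specific to our experiments: Levin's universal distribution (algorithmic probability) assigns to a string $s$ the probability $m(s) = \sum_{p\,:\,U(p)=s} 2^{-|p|}$, the sum running over all programs $p$ that make a prefix universal Turing machine $U$ output $s$; by the coding theorem, $-\log_2 m(s) = K(s) + O(1)$, which ties the frequency with which random programs produce a string to its Kolmogorov (prefix) complexity.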

Is Seth Lloyd falling into the carpenter’s problem of thinking that the universe is a nail and the moon made of wood?  Is it because he is a quantum computer scientist that he thinks the universe is a quantum computer? He argues of course that the charge is unfair, but then we have been told by Dr Petri  that the Universe is in fact a Petri Net which probably  needs neither strong randomness nor quantum mechanics!

Here  is a video online in which he explains much of this:

http://www.edge.org/video/dsl/EF02_Lloyd.html

Dr. Zuse’s futuristic drawing 2

- Jurgen Schmidhuber reprised his algorithmic approach to the theory of everything in his talk entitled   “The program that computes all computable universes”.
Jurgen Schmidhuber's major contribution is probably his Speed Prior, a complexity measure similar to algorithmic information complexity except that it is based on computation speed rather than program length, i.e. the fastest way of describing objects rather than the shortest.
There is more information on his website: http://www.idsia.ch/~juergen/ (where he includes an unfavorable review of NKS) and in his slide presentation on the Speed Prior at: http://www.idsia.ch/~juergen/speedprior/sld001.htm
Of course, Schmidhuber himself has identified a problem with the Prior measure: if every possible future exists, how can we predict anything?

Other interesting talks touched on philosophical issues: if the Universe is a computer, then the human mind should be a computer too.
Is "the Universe is a computer" a metaphor?
My answer: the metaphor is "the Universe is not a computer".

Lesson Three: Metaphors can be reversed.

Kovas Boguta’s  talk was titled “Is the Computer a Universe?” In it he pointed out the richness of mining the computational universe of simple programs.

Because we were together during the gala dinner, I had an interesting exchange with Dr. Konrad Zuse's son, Dr. Horst Zuse (also at our table were the Technikmuseum director Dr. Dirk Bondel and my colleague Kovas Boguta from Wolfram Research, among others). He shed some light on his father's interactions with Alan Turing (none, apparently) and with von Neumann (some interaction regarding the controversy over who first built a digital computer, and concerning von Neumann's architecture, to which our current digital computers do not conform, the ALU being separated from the memory as it is in Zuse's conception but not in von Neumann's original design).

Zuse's Z1, the first computer: the input device (roughly the equivalent of a keyboard), at the Technikmuseum in Berlin

Kurt Godel: The writings. Université de Lille III

Posted in Computability, Universality and Unsolvability, Conferences, Foundations of Math, Minds and Machines, New Ideas on November 12th, 2006 by Hector Zenil – Be the first to comment

Kurt Godel workshop for studying his legacy and writings. Lille, France, May 19-21, 2006

My thoughts, ideas, references, comments and informal notes:

- The wheel machine, a machine for real computation which I am proposing (as a thought experiment) in a forthcoming paper on the Church-Turing thesis (yes, one more paper on the CT thesis!), with comments on Wilfried Sieg's paper entitled "Church Without Dogma: Axioms for Computability".

- "In case Cantor's continuum problem should turn out to be undecidable from the accepted axioms of set theory, the question of its truth would lose its meaning, exactly as the question of the truth of Euclid's fifth postulate did in Euclidean geometry." Godel replies: "It has meaning anyway, as Euclid's fifth postulate gave rise to other now accepted mathematical fields."

- Godel's Gibbs Lecture and his dichotomy concerning absolutely undecidable propositions and the computational power of the human mind (Turing did great work… but he was wrong when he proposed his formal theory as a model of human thought…)

- New contacts and references: Olivier Souan, Rudy Rucker, Karl Svozil

- Mark van Atten's "On Godel's awareness of Skolem's lecture"; Rick Tieszen.

- Herbrand on general recursive functions, letter to Godel.

- Leibniz’ influence on Godel’s arithmetization?

- Sources: Godel Editorial Project. Firestone Library, Princeton University. I.A.S. Marcia Tucker, librarian for Godel papers.

- Godel’s concept of finite procedure as the most satisfactory definition of computation. “A machine with a finite number of parts as Turing did” or “finite combinatorial procedure” as a definition of an algorithm, mechanical or computational procedure.

- Computation’s main constraints: boundness and locality (paper from Hernandez-Quiroz and Raymundo Morado).

- Aphorisms and self-reference (Gabriel Sandu and Hintikka)

- Feferman on Turing

- Is Sieg's paper, and the identification "finite machine = effective procedure", a tautology? Such an approach seems to be one of the strictest versions of the Turing thesis, and even though both Church and Turing probably did propose it in such a strict sense, broader versions of the thesis have traditionally covered more content. Yet even in the strict formulation there is still room for a thesis: it is neither proved nor provable, from my point of view, and most authors would concur, though some clearly would not. I will comment on this more extensively later, since this was one of my Master's topics and merits a post by itself.

- Putnam's thought experiment on cutting off all sensory inputs. Solution: it is impossible in practice. However, machines are an example in a sense, and that is why we do not recognize intelligence in them: they are deprived of sensory capabilities.

- Yes, Godel found an inconsistency in the U.S. constitution. My answer: one? Certainly a bunch. That's why we need lawyers, who make them even worse.

SIGGRAPH 2006

Posted in Conferences on August 15th, 2006 by Hector Zenil – Be the first to comment

As part of Hacking Days we paid a visit to SIGGRAPH 2006. I found interesting material on the latest developments in computer graphics: some very impressive examples of holographic displays and visual effects, a device using light synchronized with rotating movement to create different effects in a small house, magnetic fluids, and a screen with a globe showing real-time queries from Google by country, language and frequency. There was a bunch of stuff on sensors and user interfaces, including 3D glasses and virtual-reality devices like gloves and headsets.


Hacking Days at MIT and Wikimania at Harvard, August 2006.

Posted in Computer Science, Conferences on August 7th, 2006 by Hector Zenil – Be the first to comment

Hacking Days at MIT and Wikimania at the Harvard Law School came to a close yesterday. Here is a brief summary:

Brion Vibber, Chief Technology Officer of Wikimedia, gave many talks. He discussed everything from why wiki projects are difficult to cache (since they are dynamic) to features to come: Semantic MediaWiki, a possible Xapian search engine, better wikitags, a better parser, possible support for PDF documents and integration with the DjVu image format among other media formats, and OpenID/YADIS/A-Select or better authentication. Outside there were some OLPC (One Laptop Per Child) computers, which are able to synchronize themselves, interconnecting to play music or share any type of information through a wireless network they build by themselves.

Mark Bergsma talked about the near-future server technology of the Wikimedia projects, such as 64-bit servers. He provided information about the geographical sites of the Wikipedia clusters, mainly located in Florida and Amsterdam. He talked about some features of the caching architecture using Squid, and about some new DNS technologies they are exploring, like PowerDNS, geographical balancing (e.g. BGP DNS) and object purging. He announced that they were already using the HTCP inter-cache protocol. He also announced a plan to make the core switching, which currently relies on a single core switch/router, more reliable. Some of the participants proposed the use of PlanetLab services (http://www.planet-lab.org), a collection of machines distributed over the globe running a common software package including a Linux-based OS, mechanisms for bootstrapping, distribution software and a set of node-monitoring tools. PlanetLab is mainly devoted to research, as a testbed for overlay networks, giving groups the opportunity to experiment with planetary-scale services.

A later talk was about enhancing Wiktionary so that it can be used as a database by external applications. Right now Wiktionary can only be exploited by querying it directly. A new database structure is being developed in order to give Wiktionary semantic meaning (among other things, relating each word to its translations in all the other languages already in Wiktionary), which will eventually allow many new features and turn it into a full knowledge database rather than a bunch of words (about a million at the moment, across all languages) with definitions.

There was also an interesting talk about managing posts on the discussion pages. In a nutshell, the idea was to treat each post as a wiki page in itself. Some questions were raised about the performance impact on the whole system of the huge number of new wiki pages, and other security and control questions emerged too, but the idea seemed to be very well received. Finally, a nice proposal to include video and audio streaming in wiki projects was presented.

There were several talks about WiktionaryZ during Hacking Days and Wikimania, by Erik Moeller and others. WiktionaryZ is an initiative to create the Ultimate Universal Wiktionary (pretty humble, isn't it?) by integrating semantic knowledge into Wiktionary. The project is based on defining meaning using a short, simple phrase that defines a word clearly and unequivocally and that can be translated exactly into all the languages of the Wiktionary. There is also a record of the relationships of words with each other, thus making it possible to build a machine-readable repository of semantic knowledge.

Following that, Andre Engels talked about pywikipediabot and the challenge of writing Wikipedia bots while avoiding anything that relies on screen scraping, which makes maintaining the bots quite complicated, given the need to change them every time the format of the articles changes. He also spoke about the dangers of using bots for big tasks, because errors in bot programming can lead to hundreds of thousands of damaged pages.

Other talks were about OpenID, an open-source authentication system very similar to Microsoft Passport in that it integrates the user's identity across several projects (Wikimedia projects, blogs, etc.) into a single ID. There are good plans to integrate this feature into Wikipedia soon.

WYSIWYG for Wikipedia: one of the main problems with editing Wikipedia is the difficulty of using the wikitags. Although technologically advanced users can easily adapt to the wikitag system, most people just can't get the hang of it. Hence the need for an easy-to-use, simple editor is evident, although the lack of a proper MediaWiki parser and the complexity of the wikitag language make such a thing hard to implement. In any case, Frederico Caldeira Knabben has created a very nice and useful WYSIWYG HTML editor called FCKeditor (www.FCKeditor.org) and is willing to join forces with MediaWiki to integrate it into Wikipedia.

There was also a panel featuring Brion Vibber, Andre Engels and Erik Moeller which addressed the possibility of a MediaWiki API. Many of the attendees were enthusiastic, and some went into a separate room and discussed the specification with Brion and Andre. They came up with a preliminary agreement that may be available soon on the web. The day ended with an enjoyable tour of MIT's Media Lab.

Some topics were very boring. For instance, the Wikimedia discussion panel was mainly about internal politics and logistics: nothing of interest to a broad audience.

From the point of view of our discussions here, the most interesting topics at Wikimania related to the Semantic Web and to the reliability of wiki projects, given that anybody can edit the entries. Jim Giles, the author of the Nature study comparing Wikipedia and Britannica, talked about the strong and weak points of the entries and the reviews. According to him, Britannica made the point that most of the entries evaluated were in the sciences, where Wikipedia is stronger, most of its contributors coming from those disciplines; the argument was that this kind of comparison could not serve as an adequate measure of the entire corpus of knowledge covered by Britannica. Britannica also made the point that the entries (50 in all) were badly reviewed, which would account for Wikipedia earning a rating very close to Britannica's (3.9 errors on average per article for Wikipedia as against the 2.9 obtained by Britannica). However, Giles argues that the reviewers were the same for both sides, so they would on average have committed the same number of errors.

In the session where a consensus on improving the reliability of Wikipedia content was expected, none was reached. According to Martin Walker, a professor of chemistry, things gradually coalesced during discussions over the weekend. Both the German- and the English-language contributors seem to have come to a similar consensus:
1. Set up stable (uneditable) versions of articles (static unvandalized versions). The Germans expect to be testing out this idea within a month.
2. Then set up a validation system, possibly using review teams of trusted people from wikiprojects, to check every fact (and sign off, giving the reference source). The fact checking would be made available to any user who wanted to take the time. This validated version would also be an uneditable version.
3. On the English Wikipedia we thought there ought to be an outside expert (with a PhD and a reputation) to sign off on the validated version, so we could say: "This page has been approved by Professor XXX from University of YYY".

The discussion page   set up on this issue is at:

http://en.wikipedia.org/wiki/Wikipedia_talk:Pushing_to_validation

and

http://wikimania2006.wikimedia.org/wiki/Proceedings_talk:MW2

Concerning the Semantic Web, there is already a working wiki project at www.ontoworld.org. The basic idea is to use tags for all the pieces of information contained in the articles in order to relate them to other data.
A typical example is:
”’Africa”’ is a continent, south of [[Europe]] and southwest of [[Asia]].

where the tags for Europe and Asia signal a  relation to  countries with the same tags.

In the end what we have is a big relational database underlying the system, which allows queries in an SQL-like query language called SPARQL, whose specification is at:
http://www.sparql.org/query.html

I conducted some tests using the following entry:
http://wiki.ontoworld.org/index.php?title=Germany&action=edit,
and using the Africa article I made a search on www.ontoworld.org and created a new article from a query looking for each country in Africa sorted by population. The query was:
Editing Africa Population by Countries

== List of African countries ==
[[Category:Country||Persons]]
[[located in::Africa]]
[[Population:=*]]
[[Area:=*]]

[[Category:Continent]]

The technology used is something called RDF. More on that at:

http://wiki.ontoworld.org/index.php/Special:ExportRDF

and there are some libraries in several languages that deal with RDF access:
- RDFLib, from rdflib.net
- the SMW lib
- RAP, from www.wiwiss.fu-berlin.de/suhl/bizer/rdfapi
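
To make the query idea concrete, here is a small hedged sketch using RDFLib (listed above); the namespace, property names and population figures are invented for illustration and do not correspond to the actual ontoworld.org vocabulary:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")   # hypothetical vocabulary, for illustration only
g = Graph()
for name, pop in [("Egypt", 95000000), ("Nigeria", 190000000), ("Kenya", 48000000)]:
    country = EX[name]
    g.add((country, RDF.type, EX.Country))
    g.add((country, EX.locatedIn, EX.Africa))
    g.add((country, EX.population, Literal(pop)))   # made-up figures

q = """
PREFIX ex: <http://example.org/>
SELECT ?country ?pop WHERE {
    ?country a ex:Country ;
             ex:locatedIn ex:Africa ;
             ex:population ?pop .
} ORDER BY DESC(?pop)
"""
for row in g.query(q):
    print(row.country, row.pop)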

and the description of the MediaWiki extension project for the Semantic Web is at:

http://ontoworld.org/index.php/Semantic_MediaWiki

We also explored  some browsers being developed for the Semantic Web. They are at:

http://brownsauce.sourceforge.net/

http://simile.mit.edu

The workshop on “Using Wikipedia’s Knowledge in Your Applications”
(http://wikimania2006.wikimedia.org/wiki/Proceedings:MK1) was very interesting. There I met the speakers (Markus Krötzsch, Denny Vrandecic), with whom I exchanged information and agreed to keep in contact to discuss my concerns about addressing queries to the URL and transforming unstructured data into semantically usable information. There was some discussion about translating natural language processing output into semantic information, directly taking clue words from the articles in Wikipedia and introducing tags and relations:

http://wikimania2006.wikimedia.org/wiki/Proceedings:DV2

http://wikimania2006.wikimedia.org/wiki/Proceedings:DV1

and

http://wikimania2006.wikimedia.org/wiki/Proceedings:MN1

I met Tim Berners-Lee, who is also very interested in the Semantic Web and in the idea of creating a browser exploiting these features. I also met Betsy Megas, who is working on a Semantic Web project called wiktionaryZ.org, which is like wiktionary.org but semantic. We had a long discussion about the convenience of having a "categories" scheme in the Semantic Web. My point was that in most cases the categories could be built dynamically: they would be present in an abstract form without there being any need to define them explicitly. A completely relational model strikes me as more interesting. People have tried to categorize everything since encyclopedias have existed, but the number of possible categories can be much higher than the number of elements to categorize, simply because categories can be so arbitrary that their final number can reach the number of subsets of the elements, which is not useful from my point of view.

A couple of plenary sessions were held in the main auditorium of Harvard Law School. One of them featured a lecture by Lawrence Lessig, a lawyer who has been involved in cases such as the monopoly case against Microsoft. He is the founder of the Creative Commons license, better known as copyleft, under which some rights are reserved but others are yielded to the public to allow them to be creative when reusing resources. He talked about the Read-Only (RO) society, which was how he characterized the last century, and the new Read-Write (RW) society, which is moving toward the Creative Commons license, open software, open hardware, free and open encyclopedias like Wikipedia, freedom of work (freelancers and people organizing free foundations), free access to human knowledge and communications (web access, Skype), and free and open broadcasting (podcasting), among other things.

Most of the sessions were recorded and are available at:

http://wikimania2006.wikimedia.org/wiki/Archives

And the abstracts and proceedings are at:

http://wikimania2006.wikimedia.org/wiki/Proceedings:Index

I also found an interesting site for exploring the degrees of separation between two topics through Wikipedia's link graph (sometimes it works); a minimal search sketch of the idea follows the link:

http://tools.wikimedia.de/sixdeg/index.jsp?from=cuneiform&to=Semantic_Web
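
The idea behind such a tool is just a shortest-path search over the article link graph. Here is a minimal breadth-first-search sketch of my own; the `LINKS` dictionary is a made-up, tiny stand-in for Wikipedia's real link data.

```python
from collections import deque

# Hypothetical, tiny stand-in for Wikipedia's article link graph.
LINKS = {
    "Cuneiform": ["Writing", "Mesopotamia"],
    "Writing": ["Language", "Symbol"],
    "Symbol": ["Semantics"],
    "Semantics": ["Semantic_Web"],
    "Mesopotamia": ["Iraq"],
}

def degrees_of_separation(start, goal):
    """Breadth-first search: length of the shortest link path, or None."""
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        article, distance = queue.popleft()
        if article == goal:
            return distance
        for neighbour in LINKS.get(article, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, distance + 1))
    return None

print(degrees_of_separation("Cuneiform", "Semantic_Web"))  # 4
```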

Computability in Europe Conference (CiE) Report, Wales UK

Posted in Computability, Universality and Unsolvability, Computer Science, Conferences, Foundations of Computation on August 7th, 2006 by Hector Zenil – Be the first to comment

This is a report on the Computability in Europe Conference (CiE), held at the University of Swansea, Wales in the United Kingdom in July 2006.

I attended a mini-course on Quantum Computing given by Julia Kempe, a lecture on the Church-Turing thesis by Martin Davis, who defended it against proposed models of hypercomputation, and a lecture on Proof Theory. Another very interesting lecture was on Gödel's and Turing's remarks on the human mind (the dichotomy argument from Gödel and the mechanistic vision from Turing). Among other noteworthy lectures were Samuel Buss' on the complexity of proofs, John Dawson's on Gödel and computability, and Wilfried Sieg's on the concept of mechanical procedure in Gödel and Turing, as well as many presentations on hypercomputability and computing over the reals. I met people whom I had only known through email exchanges, such as Felix Da Costa from the Technological Institute of Lisbon; Robert Meyer, professor emeritus at the Australian National University; and Julia Kempe from France, a renowned researcher in quantum computing, with whom I discussed my doubts about exactly which restrictions confine the power of quantum computing to the set of recursive functions. I also met people from SUNY doing interesting research on Turing computation, studying isomorphisms between oracle machines and their relation to Tenenbaum's theorem on the uniqueness of the recursive model of PA (Peano Arithmetic). Many lectures were given on computing over infinite time and space and on computing at the limits of general relativity. The conference was intended to take the pulse of the field of hypercomputation in Europe and worldwide.

International Conference in Complex Systems, NECSI

Posted in Complexity, Computability, Universality and Unsolvability, Computer Science, Conferences, Foundations of Computation on August 7th, 2006 by Hector Zenil – Be the first to comment

NECSI/ICCS Conference Report, Quincy, Greater Boston, USA, July 2006.

First lesson: For every complex problem there is a simple, neat, wrong solution.

I attended talks given by Ed Fredkin on Finite Nature, László Barabási on Complex Networks, Christoph Teuscher on Biology and Computation, and John Nash on his research on Game Theory.
* Ed Fredkin presented a 3-D cellular automaton model of table salt that fulfils all physical constraints on symmetry and energy conservation. The model is, surprisingly, Turing-universal. As a reminder, the Zuse-Fredkin thesis is a Church-type thesis which additionally claims that the universe is a cellular automaton and can be understood in terms of the evolution of cellular automata. An interesting paper entitled “Church-Turing Thesis is Almost Equivalent to Zuse-Fredkin Thesis” by Plamen Petrov is available here:

http://digitalphysics.org/~ppetrov/

Much more information on Ed Fredkin's interesting ideas is available at
http://www.math.usf.edu/~eclark/ANKOS_zuse_fredkin_thesis.html and on Ed Fredkin's homepage at MIT, which includes some remarks on why this thesis is not in agreement with Stephen Wolfram's, since Wolfram does not claim that the universe is a classical cellular automaton but rather conceives of a discrete universe based on systems following simple rules and producing all the complex behavior we find in nature, systems comparable to Turing machines, tag systems, axiom systems or any other equivalent formalism.
* In his lecture, László Barabási claimed that behind all complex systems there are complex networks, in particular scale-free networks (with a few very highly connected nodes and many weakly connected ones). Such networks are efficient and robust (they remain connected even when random nodes are removed) except under targeted attack (when the best-connected nodes are removed). A minimal preferential-attachment sketch appears after this list.
* Christoph Teuscher gave a lecture on computation inspired by biology. However, as he himself admitted, it was not his area of expertise.
* John Nash presented his research on Game Theory using Mathematica as an engine.
* I met Hava Siegelmann and one of her collaborators, Yariv, who is working on artificial intelligence. Siegelmann worked some years ago (while completing her PhD dissertation) on a model of computation called the Analog Recurrent Neural Network (ARNN) which, under certain circumstances, is able to compute more functions than the set of recursive functions, i.e. those computed by Turing machines. She is now working on topics related to molecular biology. I asked her about the forceful critiques delivered by traditional logicians like Martin Davis, who wrote a paper entitled “The Myth of Hypercomputation.” The target of most of these critiques is a certain circular argument: to compute more than Turing machines it is necessary to use non-Turing-computable weights previously coded into the neural network, and it has long been known that arbitrary real numbers can encode any language, even a non-Turing-computable one. What is remarkable from my point of view is her result relating to complexity classes, since weights of lower complexity are able to compute a higher class when used in those networks. Additionally she argued that even with a stochastic function her networks are able to solve non-computable functions. I recently discussed this with Stephen Wolfram and we agreed that she is assuming strong randomness. I would say, however, that Siegelmann’s work is much more beautiful from a complexity point of view than from a “hypercomputational” one. In her book “Neural Networks and Analog Computation: Beyond the Turing Limit” she proves that (a) there exists a universal neural network with only 9 neurons, and (b) p(r) suffices to compute a non-computable function, where r is an arbitrary real number and p(r) denotes the first p digits of its expansion, which means that linear precision suffices to achieve “super-Turing” capabilities, assuming the neural network has access to any possible real number before the computation. In other words, this seems to hold only if all possible real numbers are allowed a priori (just as Turing machines need an unbounded tape to accept all recursive languages; likewise, neural networks with rational weights do not compute the same set of functions as networks with integer weights, the former having been proven by Siegelmann and Sontag to compute the same set as Turing machines and the latter, following Kleene's work on nerve nets, to compute the same languages as finite automata, namely the regular languages).
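
As promised after the Barabási item above, here is a minimal sketch of preferential attachment (my own illustration, not material from the lecture): each new node links to an existing node with probability proportional to that node's current degree, which is the standard growth rule used to produce scale-free degree distributions.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, seed=None):
    """Grow a graph one node at a time; each new node attaches to an existing
    node chosen with probability proportional to its current degree.
    A toy illustration of how scale-free hubs emerge."""
    rng = random.Random(seed)
    edges = [(0, 1)]        # start from a single edge between two nodes
    degree_pool = [0, 1]    # each appearance of a node = one unit of degree
    for new_node in range(2, n_nodes):
        target = rng.choice(degree_pool)      # degree-proportional choice
        edges.append((new_node, target))
        degree_pool.extend([new_node, target])
    return edges

edges = preferential_attachment(1000, seed=42)
degrees = Counter(node for edge in edges for node in edge)
print(degrees.most_common(3))  # a few heavily connected hubs
print(degrees[999])            # a node added late: usually degree 1
```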
I exchanged ideas with some other interesting people, like an engineer from the Space and Airborne Systems Department of Raytheon. I also met John Nash Jr. at the gala dinner, where he was presented with an award for his contributions to Complexity Theory, mainly honoring his work on Game Theory.