Archive for the 'Recreation' Category

Hypercomputation in A Computable Universe

Friday, September 21st, 2012

This is the full version of my answer to a question Francisco A. Doria put to me, included in the Discussion Section of A Computable Universe, the volume I edited, published by World Scientific and Imperial College Press and coming out next month (already available in Asia). The question concerns whether I think hypercomputation is possible:

I was once a hypercomputation enthusiast myself (my Master’s thesis–in French–was on “Hyper calcul”, focused on the feasibility of hypercomputational models). Paradoxically, it wasn’t until I had a good knowledge of the subject that I started to better appreciate, and increasingly enjoy, the beauty of the digital (Turing) model. On the one hand, hypercomputational models do not converge in computational power: there is a plethora of possible models of hypercomputation, while there is only one digital model (in terms of computational power). I find it astonishing how little it takes to reach the full power of (Turing) universality, and I don’t see the power of universal machines as being subject to any limitation–quite the opposite. I happen to think the question has a truth value and is ultimately susceptible to a physical answer. However, I don’t think the answer will emerge in the foreseeable future, if it ever does.

Still, I found Doria’s contribution to the subject interesting, as he points out a particular (“ideal”) model of hypercomputation that I think gets around some of Martin Davis’ objections (The Myth of Hypercomputation), though it doesn’t fully address the problem of verification. Unlike a Turing computation, which can in principle be verified by carrying it out by hand, step by step, the inner workings of a hypercomputer can only be followed by another hypercomputer. The caricature version of the problem is the whimsical answer given by the (hyper?)computer Deep Thought in “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams, which proposed “42” as the “Ultimate (uncomputable?) answer to the Ultimate Question of Life, The Universe, and Everything” [added parentheses]. The only way to verify such an answer would be to build another, more powerful and even less understandable computer. This makes me wonder whether we ought not to favour meaningful computation over what could potentially be hypercomputation, even if hypercomputation were possible.

There is a strong analogy to the concept of proof in math. Mathematical proofs seem to fall into two types. They either serve to convince one of the truth of a statement one wasn’t certain was true, or else to provide logical evidence that a statement intuitively believed to be true is in fact so (e.g. the normality of the mathematical constant pi, widely believed but still unproven). But in the latter case, why would one bother to provide a proof of a statement that nobody would argue to be false? It is because, ideally, math proofs should provide insight into why a statement is true, and not simply establish whether or not it is so. There are a few exceptions of course. Some come from mathematical practice, for example Wiles’ proof of Fermat’s last theorem. It is not clear whether Wiles’ proof provides any insight into the original question of the truth value of Fermat’s theorem (though it does contribute to understanding the connections among different powerful mathematical theories). Some other cases, especially among computer-automated proofs, are of the same kind, often neglecting the fact that a proof is also about explanation (for humans). In that sense I think we should also favour meaningful mathematical proofs (arising from meaningful mathematical questions!), just as we should better appreciate meaningful (digital) computation.

The study of infinite objects has given us great insight into profound and legitimate questions of math and computation, such as what a number is or what a computation is. And it has been immensely useful to focus on the limits of computation in order to better understand it. It is at the boundary between decidability and undecidability that one seems best positioned to answer the question of what computation means. And there are some examples where the study of a hypercomputational model is valuable not as a model of hypercomputation but for its ancillary results. Unlike most people, I think the contribution of, for example, Siegelmann’s ARNN model has much more to say about computational complexity (the ARNN model relativises P = NP and P != NP in a novel fashion), and therefore about classical computation, than about hypercomputation! (Siegelmann’s ARNN model has come to be strangely revered among hypercomputation enthusiasts.)

While I may agree that the problem of verification of a computation is not necessarily an argument against hypercomputation (because it can also be used against digital computation in practice), the answer lies only in the realm of physics, not in paper models. As Davis points out, it is no surprise that encoding non-computable numbers as weights in an artificial neural network leads to non-computable computations! So, assuming real numbers exist, the consequence is straightforward and the justification of a hypercomputational model just circular.

Other models of hypercomputation are just fundamentally wrong, such as Peter Wegner’s model of “interactive computation”, which is claimed to break Turing’s barrier. For a recent commentary pointing out the flaws of yet another hypercomputational claim, you can consult Scott Aaronson’s entertaining blog post on his Toaster-Enhanced Turing Machine.

Meaningful Math Proofs and “Math is not Calculation”

Sunday, August 5th, 2012

Mathematicians are generally thought to be very good at calculation, and they sometimes are, but this is not because math is about calculation, just as astronomy is not about telescopes and computer science is not about computers (paraphrasing Edsger Dijkstra).

But if it is not preeminently about calculation, then what is mathematics about? The common answer, given by mathematicians themselves, is that it is about solving problems–learning to do so, and to think in a logical fashion. I think this may also be misleading. Let’s take as an example Fermat’s last theorem (which may itself be a misleading example, since math is not only about long-unresolved historical questions of little practical impact; in some sense, Fermat’s Last Theorem is not much more than a curiosity).

As is well known, Fermat’s last theorem was proved by Andrew Wiles in 1994. The proof wasn’t simple at all, and it required a lot of convoluted math (Wiles himself got it wrong at first and fixed it about a year later). The solution required much more machinery than is needed to formulate Fermat’s original statement; it is like going after a fly with a bazooka. For some mathematicians concerned with the foundations of math, Wiles’ theorem may not be as satisfactory as they would have wished, even though it is an impressive piece of mathematical work connecting several areas of advanced math.

Wiles’ exact solution couldn’t have been arrived at earlier, as it draws upon some of the most sophisticated areas of modern math: it transfers a pure problem of arithmetic to a very powerful framework founded in algebraic geometry and analysis (elliptic curves, modular forms). If Fermat had a solution, it wouldn’t have been at all similar to Wiles’. In fact, even before Wiles’ proof, no mathematician seriously thought that Fermat’s last theorem was actually false. The consensus was that the theorem was true, so the long-expected proof was supposed to shed light on the reason behind this truth rather than simply confirm it. Wiles’ great achievement only confirmed that with a very sophisticated set of tools one could prove Fermat’s last theorem; it didn’t shed any light on why it is true (and even if it does, it is still hard to justify a tradeoff in which shedding some light on arithmetic requires the use of set theory).

Wiles’ solution doesn’t answer the question of whether Fermat himself had a more basic (but no less valid) proof. The most likely case is that Fermat’s last theorem is actually independent of number theory, and requires the full power of set theory. Hence Fermat very likely had no such solution, contrary to what he claimed.

Fermat’s theorem is, however, a rare case of the ultimate great challenge, and not part of the common practice of the “average” mathematician. The work of an “average” mathematician depends very much on his or her field. If the field is concerned with foundations, it may involve a lot of set theory and logic, which in large part means finding sets of formulae to prove other sets of formulae and looking for relativisations. In algebra or group theory, or even topology, the work may have to do with all sorts of symmetries, or with characterising things in different ways in order to establish new and unanticipated connections, shedding more light on one area in terms of another, from a different point of view (and employing a different language). This kind of connection is in fact the greatest achievement of Wiles’ proof of Fermat’s last theorem, as it connects several subdisciplines of modern mathematics, building sophisticated intuitions of true statements ultimately connected to Fermat’s last theorem.

I think math can be better described as a discipline of patterns (e.g. mappings, homeomorphisms, isomorphisms). Patterns are integrally related to the discipline I value most: algorithmic information theory, the ultimate field for the study of patterns–patterns in strings and objects–leading to applications such as compression algorithms.

One of the consequences of this change of mindset is that it will allow students access to computers. Computers and humans together can achieve an impressive number of things; they do so every day in all sorts of fields, from car design to the development of new chips for the next generation of computers. In fact, airplanes today are mostly flown by computers–no pilot or set of pilots alone would be able to fly a modern airplane these days–and more than 60 to 70% (in some markets even 90%) of the operations in any top stock market are made by computers, with no human intervention. Today’s industry wouldn’t be conceivable without the combined forces of human creativity and motivation and the computer’s speed, accuracy and productivity.

There is an important new trend spearheaded by the UK initiative Computer-Based Math promoting these very ideas. Conrad Wolfram has recently explained in a blog post how computer programming could not only help in teaching math, but how the two could merge into a single discipline. I may have a chance to talk about all this at the House of Lords soon, where the education shift would need to take official form.

Stephen Hawking: A brief examination of the recent warning over alien civilizations

Tuesday, May 4th, 2010

Stephen Hawking asserts that while aliens almost certainly exist, humans should avoid making contact.

The original story published by BBC News can be found here.

He claims: “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.”

Stephen Hawking’s recent assertion makes this an interesting time to bring up the question of intelligent life, meaning and purpose.

Let’s examine what Hawking’s argument implies, assuming that intelligent life, other than human, exists elsewhere in the universe:

1. Aliens come from an Earth-like planet.
2. The Earth has something the aliens do not.
3. The Earth has something aliens need or want.

These points are not necessarily independent of each other; each may be chained or added to the others. Point 2 may imply Point 1. However, Earth-like planets are likely to be quite common, as current research on exoplanets seems to suggest. While Point 1 is chauvinistic, Point 3–the notion that Earth possesses something special that aliens need–is egocentric. Concerning Point 2 again, consider an analogy: many cities on Earth are in need of potable water. Does that mean they can just go looking for it in another city?

Ruined castle on the coast of Ireland

Originally uploaded by hzenilc.

If we think that aliens are looking for a planet with an atmosphere like ours, we are already making a very strong assumption that requires further support. To assume that their planet has the same atmosphere as ours, and that they would consequently need a planet exactly like it, is to make a strong claim–and a highly implausible one at that. We may think that water is the most precious element in the universe, but it might just be lethal to an alien civilization. If it is true that we can only imagine them on the basis of the kind of life, and its diversity, found on Earth, it is also true that we know nothing about them, if they really do exist.

If humans find life elsewhere, and if it is not related to Earth’s life, it is likely to be unrecognizable. After all, we have experience of a very large range of life forms here on Earth, and we find some of them alien enough already. “Extremophiles” is the name we give to Earth species that can survive in places that would quickly kill humans and other “normal” life-forms.

As for needing a work force, it doesn’t make sense that aliens would seek it here. Humans are highly unproductive. And aliens with the technological capabilities to travel to Earth are likely to have had long experience building the kinds of machines that humans have only recently got around to building and improving. Advanced intelligent aliens most likely make extensive use of robots, if they are not some kind of cybernetic beings themselves.

Though assuming that aliens and humans are alike in their biochemical constitution may be chauvinistic and misguided, the supposition that our intelligences are similar has a more solid basis, especially since we also assume that the alien civilization in question has built spaceships, travels in space and is looking for other civilizations. According to Stephen Wolfram’s Principle of Computational Equivalence (PCE), there is a good chance that if non-trivial life forms exist they will have or develop the same degree of sophistication (e.g. intelligence) as ours. Wolfram’s PCE would also suggest that once primitive life has developed, it will eventually achieve its maximal degree of sophistication, which would also be the maximal possible degree of sophistication. In other words, if life is around, intelligent life is as likely as life.

It is often said that aliens are likely to be much more advanced than humans. Why so? Because the universe has a history of almost 14 billion years. Characterizing a civilization that has been around for about 2 million years, and has only begun developing technology in the last few hundred, as ‘advanced’ is beyond naive; it is statistically implausible. As Carl Sagan pointed out, every intelligent civilization seems to reach a period when its own technological power is capable of destroying itself. This is the stage human civilization reached about 70 years ago (and remains in), with the invention of atomic bombs. The longevity of said aliens’ civilization means that they have managed to make good use of the resources of their host planet. Even assuming they have reached the point where they have exhausted their resources, we have to concede that they have probably been good, ecologically responsible stewards of their planet. Of course it is still possible, despite all this, that these aliens may wish to colonize a new planet in order to exploit its resources rather than simply seizing what they find and taking it away with them.

Water is one of the most common substances in the universe, and it is quite easy to synthesize (see the chemical recipe here). If they just need a rock revolving around a star at a distance such that life can be sustained, there are certainly many more possibilities than coming to Earth. Think about it: human civilization, given its current and near-future capabilities, is much closer to achieving the terraformation of Mars (reengineering Mars’ soil and atmosphere to make it friendly to Earth life) than to traveling abroad to conquer, or cohabit with, another civilization.

Now, from a practical standpoint, the notion that we could hide away in a corner of the universe is nonsense, to say the least–unless we are already hidden by the sheer size of the universe itself. The Earth has been broadcasting into space since radio signals were invented; only the deliberate messages have been kept symbolic on purpose. So in the worst-case scenario, assuming Hawking is right and our signals are likely to be picked up by an alien civilization willing to take us on, hiding is simply not an option. As Seth Shostak says, the first thing to do would be to shut down the BBC, NBC, CBS and the radars at all airports–short of installing a kind of Dyson sphere around the Earth, we cannot stop broadcasting altogether. The only way to make sense of Hawking’s particular comment in this regard is to take time scales into consideration. If it is true that Earth has been broadcasting into space for the last 50 or more years, it is also true that someday it may reach a technological state that allows it, purposely or not, to stop doing so. So it may be the case that civilizations decide to hide themselves after a short period of broadcasting history.

The advice to hide from aliens implies, of course, that they are destructive, or that contact between our civilizations would be destructive, and this brings us back to the question of their longevity. If they were indeed destructive they would have annihilated themselves. As Hawking rightly does, consider the human case.

But if Hawking is right, we would probably have nothing to be scared of. If an alien civilization wants us dead, either we will barely notice it when it happens, or it won’t ever happen if it hasn’t happened already. The chances that a destructive civilization exists seem lower than the chances that a peaceful civilization has extended its existence over the history of the universe: the former would likely have already destroyed itself, while the latter has better odds. Caution would not hurt, though. We ought to keep an eye on ourselves and how we develop, and that means using our resources more intelligently and not necessarily manufacturing more bombs. But being scared definitely says much more about us than about anyone else, because we simply know nothing about them, nor can we pretend to know what their intentions are, if any.

But of course in the matter of encounters between civilizations in space, every possibility is time-scale dependent. Imagine for an instant that two alien civilizations are at war. We would certainly be well advised to keep away from the conflict. We may be justified in thinking that if we were in the way when either of the parties ran short of resources to prosecute their war, they would most likely help themselves to anything we had that could be of use. But think a little further. The odds of encountering two civilizations actually engaged in warfare are rather small.

Civilizations making war would either annihilate each other, or if they don’t, then one party would end up achieving hegemonic status. It would seem logical that periods of peace are much longer than periods of war. In the first place civilizations that contemplate war must be both smart enough and hostile enough, and reaching these thresholds of achievement and antagonism takes time–peacetime.

As Hawking claims, many of the life forms in the universe are probably just microbes, but unlike Hawking I believe that if civilizations do exist they’d have little time to make war, even assuming they wanted to. Once again, one has only to think of the Earth to reach this conclusion. If the Cold War had not remained cold, we either wouldn’t be here or else we’d all now be ruled by a single country–as has been pretty much the case (despite there being no actual war) over the last few decades, with the emergence of a current single superpower (with a partial sharing of the global power, and other powers emerging). But when superpowers emerge these days, they are more interested in keeping the peace because they have reached a stage where they depend on each other, chiefly because trade and commerce have become globalized. It turns out that the human world has become much more cooperative than we might have expected. But this is not by chance; it seems to be a common path. Not that one can be absolutely certain, though. Only by witnessing other civilizations could we safely make generalizations. And yet logic dictates that the opposite path–the path of belligerence–would be unlikely.

What I think is that if civilizations were to find each other they are more likely to be interested in each other’s culture and knowledge, than in each other’s natural resources–either because natural resources can be found in many places, or even created by civilizations that are sufficiently advanced, or else because they are simply not needed, curiosity about the universe being the sole motive behind exploration. Which would mean that civilizations would preserve ‘alien’ civilizations to enrich themselves, just as anthropologists and ethnologists do on Earth. To make my point in terms of Hawking, aliens are more likely to be like modern scientists than like the barbaric Europeans who colonized the Americas.

The Shortest Universal Turing Machine Implementation Contest

Monday, December 22nd, 2008

23 Dec – 2008

Contest Overview

In the spirit of the busy beaver competition, though related to program-size complexity, we are pleased to announce the “Shortest Universal Turing Machine Implementation Contest”.

The contest is open-ended and open to anyone. To enter, a competitor must submit a universal machine implementation, written in the language specified on the contest web site (C++), with a smaller size value than the latest record published on the web page.

In order to take part in this competition it is necessary to submit the source code, to be compiled using the compiler program and version specified on the contest web site. It is important that you provide documentation of your code, either in an attached file or as commented text in the source code file.

Each submitter must agree to be bound and abide by the rules. Submissions remain the sole property of the submitter(s), but should be released under the GNU General Public License (GPL) so that we may be permitted to make them available on this web site for downloading and executing.
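For readers unfamiliar with what such an implementation involves, here is a minimal sketch of a Turing-machine simulator–in Python rather than the C++ the contest requires, and with an illustrative transition-table encoding that is not the contest’s format:

```python
# Minimal Turing machine simulator (illustrative only; not the contest's
# required C++ format or encoding). A machine is a dict mapping
# (state, symbol) -> (new_symbol, move, new_state).

def run_tm(rules, tape, state="A", head=0, halt="H", max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape; blank cells read as 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, 0)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return state, cells

# Example: the 2-state busy beaver, which writes four 1s and halts.
bb2 = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "H"),
}

state, cells = run_tm(bb2, [])
```

A universal machine is then simply a particular `rules` table that reads an encoding of another machine and its input from the tape and reproduces its behaviour; the contest is about making the implementation of that idea as short as possible.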

Rules (General Rules section)

Team composition

Players may enter alone or as teams of any size. Anyone is eligible to enter.

Hector Zenil (IHPST and LIFL, Paris 1 University and Lille 1 University)
Jean-Paul Delahaye (LIFL, Lille 1 University)

The art of creating creatures from simple rules

Sunday, November 18th, 2007

Having quit his studies in physics, Theo Jansen became an artist. In this video he demonstrates his amazing life-like kinetic sculptures, built from plastic tubes and bottles. His Beach Creatures or Strandbeest are built to move and even survive on their own:

I’ve been in touch with Theo Jansen recently. For further details about his creations he referred me to his book (available at his web shop) entitled The Great Pretender. Even more details are provided in Boris Ingram’s thesis on leg designs based on 12-bar linkages, in which he describes Jansen’s walker algorithm. Jansen’s designs are computer-generated using an evolutionary algorithm, and the animals, which are wind-powered, are made out of PVC piping.


The valves essentially act like logic gates, allowing air to pass or not depending on the state of the other gates.


Jansen’s creations do not require engines, sensors or any other type of advanced technology in order to walk and react to the environment. As for Boris Ingram’s work, it would be greatly enriched if it were to incorporate a wider range of possible structures and algorithms.




NKS Cooking

Monday, July 31st, 2006

A novel approach to cooking

The algorithm:

  1. Consider the space of all possible recipes.
  2. Enumerate the ingredients per food style (Mexican, Chinese, French, Italian, and so on).
  3. Take the subsets of the ingredients per food-style set.
  4. Begin with a random choice of initial ingredients.
  5. Mix them by following a set of simple cooking rules like baking, sprinkling or heating.
  6. Season with salt and pepper to taste.
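The steps above can be sketched in a few lines of (tongue-in-cheek) Python; the ingredient pools and rule names are of course made-up placeholders:

```python
import random

# A playful sketch of the algorithm above; ingredient pools and cooking
# rules are illustrative, not an actual culinary database.
ingredients = {
    "Mexican": ["chili", "beans", "corn", "tomato"],
    "Italian": ["tomato", "cheese", "basil", "pasta"],
}
rules = ["bake", "sprinkle", "heat"]

def random_recipe(style, rng):
    pool = ingredients[style]
    # Steps 2-4: enumerate the style's ingredients and pick a random
    # non-empty subset as the initial choice.
    chosen = rng.sample(pool, rng.randint(1, len(pool)))
    # Step 5: mix them by applying a random simple cooking rule to each.
    steps = [(rng.choice(rules), item) for item in chosen]
    # Step 6: season to taste.
    return steps + [("season", "salt and pepper")]

rng = random.Random(0)
recipe = random_recipe("Mexican", rng)
```

Iterating `random_recipe` over all styles and seeds is exactly the exhaustive exploration of the recipe space that Step 1 proposes.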

One could use a robotic device and see what results. One would also have to set up an evaluation scheme to decide which dishes are good and which are bad–either a human tester or a reliable sensor such as a modern “artificial nose” (essentially a microarray-like object with a number of different receptors).

However, since the testing procedure is undecidable in general, in the case of any given recipe, neither a human nor an automatic device can decide whether or not the end result will taste good.

It would also be of interest to find out which “cooking rules” lead to which patterns of smell and taste. For instance, one would expect any dish involving sugar, butter, and flour to taste reasonably good provided the rules do not yield a “burnt offering”. Of course, to make things really interesting, one should introduce additional ingredients, including ingredients which aren’t traditionally combined with the basic ones.

For French cuisine, butter would be a primitive; for American food, ketchup; for Mexican, chili; for Chinese, rice; for Italian, tomato. French and Italian food share cheese as a primitive; cooking “au gratin” would always be a reasonably safe bet. Since primitives are important, you cannot cook pancakes au gratin, and ketchup would be safer if you wanted your dish to meet with the approval of the American tester.

An ingredient is a primitive in the sense that any dish in a given style can use either more or less of it and still retain  its character as an exemplar of that style.  Primitives can be used at will,  including in emergencies or when the rules happen to yield nasty results.

One might investigate reducible shortcuts, such as preparing something faster or skipping steps. However, most of the recipes are irreducible, making it impossible to take shortcuts without going through the whole preparation.

With some effort one can also prove that there exists a universal algorithm which,  given a recipe and the initial ingredients,  is able to reproduce any other recipe.

Concerning time/space complexity, it remains an open question whether the preparation time can always be reduced to a polynomial.

One can also ask about the algorithmic complexity of a dish, since one can measure its information content–the number of ingredients involved, the number of rules and steps applied, whether the rules were iterated, nested or random-looking. However, since, as noted above, the procedure is undecidable, the only known way to approach the question of the complexity of a dish is to compress it into a piece of universal tupper-ware and see what happens.
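Compression is indeed the standard practical way to approximate algorithmic complexity: the compressed length of a description upper-bounds it up to an additive constant. A minimal sketch, with made-up recipe encodings standing in for actual dishes:

```python
import random
import zlib

def approx_complexity(description: str) -> int:
    # The length of the compressed description is an upper bound
    # (up to an additive constant) on its algorithmic complexity.
    return len(zlib.compress(description.encode()))

# An iterated (highly regular) recipe versus a random-looking one,
# both 500 characters long; the encodings are invented for illustration.
iterated = "stir;bake;" * 50
rng = random.Random(1)
random_looking = "".join(rng.choice("stirbake;") for _ in range(500))

# The iterated recipe compresses far better, i.e. has much lower
# approximated complexity, than the random-looking one.
```

This is the same trick used for real objects (texts, genomes, images): whatever patterns the compressor finds, iterated or nested rules show up as a short compressed description, while random-looking steps do not.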

Another open question: Given a compressed ingredient, how many other ingredients can one compress? It is believed that this yields a classification of food types, defining a threshold between those in a lower/trivial class and more sophisticated items.