Hypercomputation in A Computable Universe

This is the full version of my answer to a question put to me by Francisco A. Doria, included in the Discussion Section of A Computable Universe, the volume I edited, published by World Scientific and Imperial College Press and coming out next month (already available in Asia). The question concerns whether I think hypercomputation is possible:

I was once a hypercomputation enthusiast myself (my Master’s thesis, written in French, was on “Hyper calcul” and focused on the feasibility of hypercomputational models). Paradoxically, it wasn’t until I had a good knowledge of hypercomputation that I began to better appreciate, and increasingly enjoy, the beauty of the digital (Turing) model. For one thing, hypercomputational models do not converge in computational power: there is a plethora of possible models of hypercomputation, while there is only one digital model (in terms of computational power). I find it astonishing how little it takes to reach the full power of (Turing) universality, and I don’t see the power of universal machines as being subject to any limitation; quite the opposite. I happen to think the question has a truth value and is ultimately susceptible to a physical answer. However, I don’t think the answer will emerge in the foreseeable future, if it ever does.

Still, I found Doria’s contribution to the subject interesting, as he points out a particular (“ideal”) model of hypercomputation that I think gets around some of Martin Davis’ objections (The Myth of Hypercomputation), though it doesn’t fully address the problem of verification. Unlike a Turing computation, which can in principle be verified by carrying it out by hand, step by step, the inner workings of a hypercomputer can only be followed by another hypercomputer. The caricature version of the problem is the whimsical answer given by the (hyper?)computer Deep Thought in “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams, which proposed “42” as the “Ultimate (uncomputable?) answer to the Ultimate Question of Life, The Universe, and Everything” [added parenthesis]. The only way to verify such an answer would be to build another, more powerful and even less understandable computer. This makes me wonder whether we ought not to favour meaningful computation over what could potentially be hypercomputation, even if hypercomputation were possible.
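To make the contrast concrete, here is a minimal sketch (added for illustration; the machine and its rule table are made-up examples, not anything from Doria’s or Davis’ texts) of the kind of step-by-step audit a digital computation admits and a hypercomputation does not:

```python
# Minimal sketch (illustrative only): a toy Turing-style machine whose run
# can be audited step by step. The rule table below is a made-up example
# that flips bits until it reaches a blank cell and halts.

def run(tape, rules, state="start", head=0, blank="_", max_steps=100):
    cells = dict(enumerate(tape))
    for step in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:   # no applicable rule: the machine halts
            break
        state, new_symbol, move = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
        # Every intermediate configuration is explicit and checkable by hand.
        print(f"step {step}: state={state} head={head} tape={cells}")
    return cells

rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}
run("0110", rules)
```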

There is a strong analogy with the concept of proof in math. Mathematical proofs seem to fall into two types: they either serve to convince us of the truth of a statement we weren’t certain was true, or else they provide logical evidence that a statement intuitively believed to be true is in fact so (e.g. the conjectured normality of the mathematical constant pi). But in the latter case, why would one bother to provide a proof of a statement that nobody would argue to be false? Because ideally math proofs should provide insight into why a statement is true, and not simply establish whether or not it is so. There are a few exceptions of course. Some come from mathematical practice, for example Wiles’ proof of Fermat’s last theorem: it is not clear whether Wiles’ proof provides any insight into the original question of the truth value of Fermat’s theorem (though it does contribute to understanding the connections among different powerful mathematical theories). Some other cases, especially among computer-automated proofs, are of the same kind, often neglecting the fact that a proof is also about explanation (for humans). In that sense I think we should also favour meaningful mathematical proofs (to meaningful mathematical questions!), just as we should better appreciate meaningful (digital) computation.

The study of infinite objects has given us great insight into profound and legitimate questions of math and computation, such as what a number is or what a computation is. And it has been immensely useful to focus on the limits of computation in order to better understand it. It is at the boundary between decidability and undecidability that one seems best positioned to answer the question of what computation means. And there are cases where the study of a hypercomputational model is valuable not as a model of hypercomputation but for its ancillary results. Unlike most people, I think that the contribution of, for example, Siegelmann’s ARNN model has much more to say about computational complexity (the ARNN model relativises the P = NP and P != NP questions in a novel fashion), and therefore about classical computation, than about hypercomputation! (Siegelmann’s ARNN model has come to be strangely revered among hypercomputation enthusiasts.)

While I may agree that the problem of verifying a computation is not necessarily an argument against hypercomputation (because it can also be used against digital computation in practice), the answer lies only in the realm of physics and not in paper models. As Davis points out, it is no surprise that encoding non-computable numbers as weights in an artificial neural network leads to non-computable computation! So, assuming real numbers exist, the consequence is straightforward, and the justification of such a hypercomputational model is just circular.
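A back-of-the-envelope way to see the circularity (my own sketch, not Davis’ argument nor Siegelmann’s actual construction): suppose a single real-valued weight were allowed to encode the halting problem in its binary expansion; then “solving” the halting problem amounts to reading off a bit of that assumed number, so all the non-computability was put in by hand.

```python
# Hypothetical sketch, not Siegelmann's construction: pretend bit n of a
# real-valued "weight" records whether program n halts. Then deciding halting
# is a lookup into the assumed number, not a computation.

# A finite, computable stand-in for illustration only; a genuine hypercomputer
# would need the truly non-computable infinite expansion.
oracle_bits = [1, 0, 1, 1, 0, 1, 0, 0]

def halts(n, bits=oracle_bits):
    """'Decide' whether program n halts by reading bit n of the magic weight."""
    return bool(bits[n])

print(halts(3))  # True by construction of the weight, not by any computation
```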

Other models of hypercomputation are just fundamentally wrong, such as Peter Wegner’s model of “interactive computation”, which is claimed to break Turing’s barrier. For a recent commentary pointing out the flaws of yet another hypercomputational claim, you can consult this entertaining blog post by Scott Aaronson on his Toaster-Enhanced Turing Machine.

  1. plm says:

    Regarding the use of mathematical proofs: both Wiles’s proof and “human-unfriendly” computer-generated proofs are useful if you spend enough time studying them, with appropriate mathematical knowledge. They shorten the “search” for a proof of a statement, say P, in the sense of building in our head the appropriate certainty, implemented as neural networks that make us behave as if P were fact.

    Actually, knowing or thinking a bit about the history of Wiles’s proof may convince you of this. I myself do not understand the argument well, but even the faint picture I have of it gives me more confidence in FLT than, say, a proof of the n=3 or 4 cases.
    So, the historical details I am thinking of are these: many people have contributed to the only currently accepted proof. (Though stay tuned for another proof if Mochizuki’s work settles ABC, which implies FLT for large exponents, and perhaps all exponents with some tuning.) Actually, what motivated Wiles to start working on FLT was not really FLT itself, but the Shimura-Taniyama-Weil modularity conjecture, because the biggest insight into FLT had been obtained before: Hellegouarch and Frey (and others) had sought to show that a Fermat triple for any n>2 yielded an elliptic curve with strange properties. Such a triple would contradict, via the naturally associated elliptic curve, a conjecture of Szpiro related to the ABC conjecture. And Frey also looked at a similar contradiction supposing that curve was modular. He was helped (or rescued) by Serre and Ribet. Once this connection to modularity was well understood, Wiles was quite confident and “only” had to prove modularity, the STW conjecture. In fact he did not himself have the insight that FLT was reachable, but when he understood the work of the above-mentioned people he was convinced, enough to work hard for 7 years on STW. So that tells you that he definitely got insight that FLT was true from those partial results toward it, and he completed the work, and he certainly got surer and surer that FLT was true as he fleshed out his arguments and uncovered key parts of his eventual proof.

    Similarly, all the people involved in that (Nicholas Katz, Ribet, Sarnak, grad students like Buzzard) felt more and more sure as they learned and thought, each at their own pace, about the different parts of the proof, some old (the theory of elliptic curves, or even algebraic number theory), some new (Galois representations in GL(2,F_p)).

    If you invested enough time in studying all that, you too would gain insight into FLT and become comfortable with it.

    Unfriendly computer proofs are just hard to fit to our relatively flexible neural constructs, but they can be grasped. Whether it is more efficient to look for proofs ourselves, or to make computers good at finding them and then try to understand the results, is a very interesting question.

    But in the case of Wiles’s proof it definitely does shed light on FLT. I guess though it is still a matter of taste: you may say I am satisfied with checking 3 exponents up to a,b<1000 on the computer to be convinced of FLT, or you may be satisfied with Kummer's proof for regular primes.

    The questions seem to be: "How much certainty am I looking for?", "How much work am I willing to put in?", "What is the best tradeoff?". From this point of view we can question complicated proofs, but if they are full and fully understood proofs, they do provide more insight than heuristics.

    We can also think of the interesting situation where a proof is so complicated that we do not have the resources to understand it, and we do not trust experts that much. Then a heuristic may be more reliable than the proof, though not fully reliable. And we can argue about the definition of insightful; it may be a product, "information x speed to get it" = "information / time to get it". But I still like to say that Wiles et al.'s proof provides insight beyond the partial results.

    Well, this was a lot of rambling; I hope at least it will not bother you too much.

    I have to read more of your work to understand it, but I should already take the opportunity to thank you for it.

