## Archive for the 'Foundations of Math' Category

### Meaningful Math Proofs and “Math is not Calculation”

Sunday, August 5th, 2012

Mathematicians are generally thought to be very good at calculation, and they sometimes are, but this is not because math is about calculation, just as astronomy is not about telescopes and computer science is not about computers (paraphrasing Edsger Dijkstra).

But if it is not preeminently about calculation, then what is mathematics about? The common answer, given by mathematicians themselves, is that it is about solving problems: learning to do so, and learning to think in a logical fashion. I think this too may be misleading. Let’s take as an example Fermat’s last theorem (which may itself be a misleading example, since not all math is about long-unresolved historical questions; in some sense, Fermat’s last theorem is not much more than a curiosity).

As is well known, Fermat’s last theorem was proven by Andrew Wiles in 1994. The proof was not at all simple, and it required a great deal of convoluted math (Wiles himself got it wrong at first and fixed it a few years later). It required far more machinery than was needed to formulate Fermat’s original statement; it is like going after a fly with a bazooka. For some mathematicians concerned with the foundations of math, Wiles’ theorem may not be as satisfactory as they would have wished, even though it is a remarkable piece of mathematical work connecting several areas of advanced math.

Wiles’ exact solution could not have been arrived at earlier, as it draws upon some of the most sophisticated areas of modern math: it transfers a pure problem of arithmetic to a very powerful framework grounded in algebraic geometry and analysis (elliptic curves, modular forms). If Fermat had a proof, it would not have been at all similar to Wiles’. In fact, even before Wiles’ proof, hardly any mathematician thought that Fermat’s last theorem was actually false. The consensus was that the theorem was true, so the long-expected proof was supposed to shed light on the reason behind this truth rather than simply confirm it. Wiles’ great achievement confirmed that with a very sophisticated set of tools one could prove Fermat’s last theorem, but it did not shed much light on why it is true (and if it does, it is still hard to justify a tradeoff in which shedding some light on arithmetic requires the use of set theory).

Wiles’ solution does not answer the question of whether Fermat himself had a more basic (but no less valid) proof. The most likely scenario is that Fermat’s last theorem is actually independent of elementary number theory and requires the full power of set theory. Hence Fermat very likely had no such proof, contrary to what he claimed.

Fermat’s theorem is, however, a rare case of the ultimate great challenge, and not part of the common practice of the “average” mathematician. The work of an “average” mathematician depends very much on the field. If the field is concerned with foundations, it may involve a lot of set theory and logic, which in large part means finding sets of formulae that prove other sets of formulae, and looking for relativisations. In algebra, group theory, or even topology, the work may have to do with all sorts of symmetries, or with characterising things in different ways in order to establish new and unanticipated connections, shedding light on one area in terms of another, from a different point of view (and employing a different language). Such a connection is in fact the greatest achievement of Wiles’ proof of Fermat’s last theorem: it connects several subdisciplines of modern mathematics, building sophisticated intuitions about true statements ultimately connected to Fermat’s last theorem.

I think math can be better described as a discipline of patterns (e.g. mappings, homeomorphisms, isomorphisms). Patterns are integral to the discipline I value most: algorithmic information theory, the ultimate field for the study of patterns (patterns in strings and objects), leading to applications such as compression algorithms.
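As a rough illustration of this view (my own toy example, not part of the original post), a general-purpose compressor can serve as a practical, computable stand-in for the kind of pattern detection that algorithmic information theory formalizes: a highly patterned string compresses to far fewer bits than a random string of the same length.

```python
import random
import zlib

# Two strings of the same length: one fully patterned, one pseudo-random.
patterned = b"ab" * 500
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))

# Compressed length is a crude, computable upper bound on Kolmogorov
# complexity: the more pattern a string has, the shorter its description.
print(len(zlib.compress(patterned)))  # small: the repetition is captured
print(len(zlib.compress(noisy)))      # close to 1000: no pattern to exploit
```

The compressor is only an upper bound, of course: it detects repetition, not every computable regularity.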

One of the consequences of this change of mindset is that it would allow students access to computers. Computers and humans together can achieve an impressive number of things; they do so every day in all sorts of fields, from car design to the development of new chips for the next generation of computers. In fact, airplanes today are mostly flown by computers; no pilot or set of pilots alone would be able to fly a modern airplane these days, and more than 60 to 70% (in some markets even 90%) of the operations in any top stock market are computer-made, with no human intervention. Today’s industry would not be conceivable without the combined forces of human creativity and motivation and the computer’s speed, accuracy and productivity.

There is an important new trend, spearheaded by the UK initiative Computer-Based Math, promoting these very ideas. Conrad Wolfram has recently explained in a blog post how computer programming could not only help to teach math, but how the two could merge into a single discipline. I may have a chance to talk about all this at the House of Lords soon, where the education shift would need to take official form.

### Turing’s Deep Field: Visualizing the Computational Universe

Thursday, January 5th, 2012

I generated this image in the course of an investigation of the distribution of runtimes of programs in relation to the lengths of mathematical proofs. The results are being published in my paper “Computer Runtimes and the Length of Proofs, with an Algorithmic Probabilistic Application to Optimal Waiting Times in Automatic Theorem Proving”, in Volume 7160 of the Lecture Notes in Computer Science series (LNCS), a festschrift for Cristian Calude.

The preprint is now available online on the arXiv here. The paper shows that, as theoretically predicted, most machines either stop quickly or never halt. It also suggests how a theoretical result on halting times may be used to predict the number of random theorems (dis)proven in a random axiom system; see the section I’ve called “Gödel meets Turing in the computational universe”.

Turing View of The Computational Universe

The plot was the winning image in this year’s Kroto Institute Image Competition, in the Computational Imagery category, under the title “Runtime Space in a Peano Curve”. It shows the distribution of runtimes obtained by simulating the (4n+2)^(2n) = 10000 Turing machines with 2 symbols and n=2 states (out of a total of more than 10^12 simulated Turing machines with up to n=4 states), arranged in quasi-lexicographical order along a Peano curve, which preserves, as far as possible, the distance between 2 machines when a 1-dimensional enumeration of Turing machines is laid out in a 2-dimensional array.
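The small end of such a simulation is easy to sketch. The following minimal Python sketch (my own simplified formalization, not the setup used for the actual 10^12-machine computation, and not necessarily the paper’s exact machine encoding) enumerates all (4n+2)^(2n) = 10000 two-symbol machines with n = 2 states, runs each on a blank tape up to a cutoff, and tallies the runtimes; as the theory predicts, most machines halt within the first few steps or not at all.

```python
from itertools import product

N = 2          # number of states; state 0 plays the role of the halt state
HALT = 0

# Each of the 2n (state, read symbol) cases maps to one of 4n+2 actions:
# (symbol to write, head move, next state) with next state in 1..n,
# plus two halting actions (one per written symbol).
actions = [(w, m, s) for w in (0, 1) for m in (-1, 1) for s in range(1, N + 1)]
actions += [(0, 1, HALT), (1, 1, HALT)]                    # 4n+2 = 10 actions
cases = [(s, r) for s in range(1, N + 1) for r in (0, 1)]  # 2n = 4 cases

def runtime(rule, max_steps=100):
    """Steps until halting on a blank tape, or None if no halt within cutoff."""
    tape, pos, state = {}, 0, 1
    for step in range(1, max_steps + 1):
        write, move, nxt = rule[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == HALT:
            return step
        state = nxt
    return None

histogram = {}
for choice in product(actions, repeat=len(cases)):         # all 10**4 machines
    t = runtime(dict(zip(cases, choice)))
    histogram[t] = histogram.get(t, 0) + 1

print(sorted((k, v) for k, v in histogram.items() if k is not None))
print("did not halt within 100 steps:", histogram.get(None, 0))
```

The cutoff stands in for the Busy Beaver values: for such small machines the known Busy Beaver bounds certify that anything surviving the cutoff never halts.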

In the image each point or cluster of points represents a Turing machine or a group of Turing machines, and the color is determined by a spectrum encoding their halting runtimes: the lighter the square, the sooner the machine entered the halting state. White cells represent machines that are proven never to halt. Red cells show the Turing machines that take longest to halt (popularly called Busy Beavers); it is knowing the values of the Busy Beaver functions that allows us to identify the machines that never halt. The image is noncomputable, meaning that the process cannot be arbitrarily extended, because of the undecidability of the halting problem (i.e. there is no procedure for ascertaining the color of the next pixels in order to zoom out the picture and cover a larger fraction of the computational universe). To put it in the words of crystilogic,

What you’re looking at is a subset of descriptions of all possible things, and some impossible things. This is possibility and reality compressed into an image.

Turing machines with an arbitrary number of states can encode any possible mathematical problem and are therefore perfect compressors of the known, the yet to be known, and even the unknowable (due to the undecidability of the halting problem).

This is how the image looks as displayed in the stairway of the Kroto Research Institute in the UK:

Some postcards with the winning image have been printed; they can be sent to scholars or enthusiasts upon request by emailing hectorz[at] labores.eu.

I want to dedicate this prize to my former thesis advisor Jean-Paul Delahaye, who suggested the Peano arrangement to me as a packing for my visual results of halting times, and also to Cristian Calude, who co-advised me all along my PhD thesis and who encouraged me to publish this paper; what better place to do so than his festschrift.

I’m releasing the images under an open licence in honour of the Turing Year, so that they may be used for artistic illustrations of Turing’s main results (such as the halting problem) or for any other purpose in any medium.

Postcards of the winning image are also available upon request: just send an email requesting one or more postcards to hectorz [at] labores.eu, and let him know (if you can) whether you will be using any of the images, or whether you need higher-resolution versions.

### Collections of axioms and information on theory dependency

Saturday, March 3rd, 2007


> “Ten has a one-syllable name. Seventy-seven has a five-syllable name. The seventh power of seven hundred seventy-seven has a name that, if we were to work it out, might run to 100 syllables or so; but this number can also be specified more briefly in other terms. I have just specified it in 15 syllables. We can be sure, however, that there are no end of numbers that resist all specification, by name or description, under 19 syllables. There is only a finite stock of syllables altogether, and hence only a finite number of names or phrases of less than 19 syllables, whereas there are an infinite number of positive integers. Very well, then; of those numbers not specifiable in less than 19 syllables, there must be a least. And here is our antinomy: the least number not specifiable in less than nineteen syllables is specifiable in 18 syllables. I have just so specified it.
>
> The antinomy belongs to the same family as the antinomies that have gone before. For the key word of this antinomy, “specifiable”, is interdefinable with “true of”. It is one more of the truth locutions that would take on subscripts under the Russell-Tarski plan. The least number not specifiable-0 in less than nineteen syllables is indeed specifiable-1 in 18 syllables, but it is not specifiable-0 in less than 19 syllables; for all I know it is not specifiable-0 in less than 23.”
>
> — W.V. Quine, “The Ways of Paradox”

By the way, it seems that Russell thought Berry’s number was 111,777.

### Book on self-reference (comprising papers by various contributors)

Saturday, March 3rd, 2007

http://www.imm.dtu.dk/~tb/genintro.pdf

### On single and shortest axioms for Boolean logic

Saturday, March 3rd, 2007

Both the philosopher Charles Sanders Peirce, in 1880, and the American logician H. M. Sheffer, in 1913, realized that the truth-functions of elementary logic can all be defined from a single operation. The Sheffer stroke, also known as the Nand operation, is the logical operator with the following meaning: p Nand q is true if and only if not both p and q are true. Sheffer proved that all the usual operators of Boolean algebra (Not, And, Or, Implies) can be expressed in terms of Nand [A set of five independent postulates for Boolean algebras, with application to logical constants, Transactions of the American Mathematical Society 14 (1913), pp. 481-488]. There is one other logical operator able to express all the others by itself: Nor. In 1933, Edward Huntington proposed an alternative set of axioms for Boolean algebra, consisting of associativity and commutativity plus Huntington’s axiom:

!( !x [Or] y ) [Or] !( !x [Or] !y ) = x

Huntington showed that the three axioms together imply the axioms of Boolean algebra. Some time later, Herbert Ellis Robbins proposed another single equation:

!( !( x [Or] y ) [Or] !( x [Or] !y ) ) = x

which he conjectured (when taken together with associativity and commutativity) implied that of Huntington, so that Robbins algebras would be equivalent to Boolean algebras. The proof was finally delivered in 1996 by William McCune, using his theorem prover EQP. See:

Gina Kolata, Computer Math Proof Shows Reasoning Power, The New York Times, 1996.

A tutorial on automatic reasoning and theorem proving related to the course MAT 504, Topics in Logic (Spring term of 1997), by Prof. Edward Nelson.
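Going back to Sheffer’s result, the definability of the usual connectives from Nand alone is easy to confirm by brute force over all truth assignments (a toy verification of my own, not part of the original post):

```python
def nand(p, q):
    return not (p and q)

# Standard definitions of the usual connectives in terms of Nand alone.
def not_(p):       return nand(p, p)
def and_(p, q):    return nand(nand(p, q), nand(p, q))
def or_(p, q):     return nand(nand(p, p), nand(q, q))
def implies(p, q): return nand(p, nand(q, q))

# Check every truth table against the built-in connectives.
for p in (False, True):
    for q in (False, True):
        assert not_(p) == (not p)
        assert and_(p, q) == (p and q)
        assert or_(p, q) == (p or q)
        assert implies(p, q) == ((not p) or q)
print("all truth tables agree")
```

The dual definitions, with Nor in place of Nand, pass the same check.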

In an exhaustive and systematic computer exploration, Stephen Wolfram searched for the shortest single axiom equivalent to the axioms of Boolean algebra. Among the 288684 candidates he found a single formula (up to other equivalent formulas of the same size) with 6 Nands and 3 variables. It turns out that Wolfram’s axiom:

(((x [Nand] y) [Nand] z) [Nand] (x [Nand] ((x [Nand] z) [Nand] x))) = z

is equivalent to the axioms of Boolean algebra and the shortest (in the number of operators and variables) single axiom equivalent to Boolean algebra (Wolfram 2002, pp. 808-811 and 1174).

In the NKS notes at the end of the book there is an account of the axiom system that Carew Meredith found around 1949 (NKS page 1173):

f[f[a, f[b, c]], f[a, f[b, c]]] == f[f[f[c, a], a], f[f[b, a], a]]
f[f[a, a], f[b, a]] == a

And in 1967 George Spencer Brown found:

f[f[a, a], f[f[b, b], b]] == a
f[a, f[b, c]] == f[f[f[f[c, c], a], f[f[b, b], a]], f[f[f[c, c], a], f[f[b, b], a]]]

and again Meredith in 1969:

f[a, f[b, f[a, c]]] == f[a, f[b, f[b, c]]]
f[f[a, a], f[b, a]] == a
f[a, b] == f[b, a]

Carew Meredith spent a long time studying somewhat similar forms. Meredith’s interest in short axioms came originally from Lukasiewicz.

More bibliographical references (Meredith and Prior 1968 and Meredith 1969b are the main ones on equational logic, and Meredith 1951 contains the six-character single axiom that Prior described as ‘sensational’):

- Meredith, C.A. 1951. ‘On an Extended System of the Propositional Calculus’. Proceedings of the Royal Irish Academy, vol. 54 Sect. A, 37-47.
- Meredith, C.A. 1953a. ‘Single Axioms for the Systems (C,N), (C,0), and (A,N) of the Two-Valued Propositional Calculus’. Journal of Computing Systems, vol. 1, 155-164.
- Meredith, C.A. 1953b. ‘A Single Axiom of Positive Logic’. Journal of Computing Systems, vol. 1, 169-170.
- Meredith, C.A. 1958. ‘The Dependence of an Axiom of Lukasiewicz’. Transactions of the American Mathematical Society, vol. 87, 54.
- Meredith, C.A. 1966. ‘Postulates for Implicational Calculi’. Journal of Symbolic Logic, vol. 31, 7-9.
- Meredith, C.A. 1969b. ‘Equational Postulates for the Sheffer Stroke’. Notre Dame Journal of Formal Logic, vol. 10, 266-270.
- Meredith, C.A., Prior, A.N. 1963. ‘Notes on the Axiomatics of the Propositional Calculus’. Notre Dame Journal of Formal Logic, vol. 4, 171-187.
- Meredith, C.A., Prior, A.N. 1968. ‘Equational Logic’. Notre Dame Journal of Formal Logic, vol. 9, 212-226.

Sunday, November 12th, 2006

– Symbolic Visualizations, University of Texas: http://cvcweb.ices.utexas.edu/ccv/projects/VisualEyes/SymbVis/index.php
– Proof nets and zero-knowledge proofs.

### Kurt Gödel: The Writings. Université de Lille III

Sunday, November 12th, 2006

A workshop on Kurt Gödel’s legacy and writings. Lille, France, May 19-21, 2006.

My thoughts, ideas, references, comments and informal notes:

– The wheel machine, a machine for real computation which I am proposing (as a thought experiment) in a forthcoming paper on the Church-Turing thesis (yes, one more paper on the CT thesis!), with comments on Wilfried Sieg’s paper entitled “Church Without Dogma: Axioms for Computability”.

– “In case Cantor’s continuum problem should turn out to be undecidable from the accepted axioms of set theory, the question of its truth would lose its meaning, exactly as the question of the truth of Euclid’s fifth postulate in Euclidean geometry did”. Gödel replies: “It has meaning anyway, as Euclid’s fifth postulate gave rise to other now accepted mathematical fields.”

– Gödel’s Gibbs Lecture and his dichotomy on absolutely undecidable propositions and the computational power of the human mind (Turing did great work… but he was wrong when he proposed his formal theory as a model of human thought…)

– New contacts and references: Olivier Souan, Rudy Rucker, Karl Svozil

– Mark van Atten’s “On Gödel’s awareness of Skolem’s lecture”.
– Rick Tieszen.

– Herbrand on general recursive functions, in a letter to Gödel.

– Leibniz’ influence on Gödel’s arithmetization?

– Sources: the Gödel Editorial Project. Firestone Library, Princeton University. I.A.S. Marcia Tucker, librarian for the Gödel papers.

– Gödel’s concept of finite procedure as the most satisfactory definition of computation: “a machine with a finite number of parts”, as Turing put it, or “finite combinatorial procedure”, as a definition of an algorithm, mechanical or computational procedure.

– Computation’s main constraints: boundedness and locality (paper by Hernandez-Quiroz and Raymundo Morado).

– Aphorisms and self-reference (Gabriel Sandu and Hintikka)

– Feferman on Turing

– Are Sieg’s paper and the question of “finite machine = effective procedure” a tautology? Such an approach seems to be one of the strictest versions of the Turing thesis. Even though both Church and Turing probably did propose it in such a strict sense, extended versions of the thesis have traditionally covered more content. Yet even in the strict statement there is still space for a thesis: it is neither proved nor provable, from my point of view, and most authors would concur, though some clearly would not. I will comment on this more extensively later, since this was one of my Master’s topics and merits a post by itself.

– Putnam’s thought experiment on cutting all sensorial inputs. Solution: it is impossible in practice. However, machines are an example in a sense, and that is why we do not recognize intelligence in them: they are deprived of sensorial capabilities.

– Yes, Gödel found an inconsistency in the U.S. constitution. My answer: One? Certainly a bunch. That’s why we need lawyers, who make them even worse.

### Experimental Metamathematics

Sunday, July 30th, 2006

Metamathematics is to mathematics what metaphysics is to philosophy. It is about stepping outside the discipline and attempting to grasp it as a single entity/object of study; it is an investigation into the foundations of mathematics. (For an elucidation of Foundations of Mathematics see http://www.math.psu.edu/simpson/hierarchy.html)

Some mathematicians have begun to practice a quasi-empirical brand of mathematics and insist on results for which there is experimental evidence. This development has been facilitated by the fact that massive calculations are possible nowadays. This new brand of mathematics is related to experimental mathematics in being a type of mathematical research in which computation is used to investigate mathematical structures and identify their fundamental properties and patterns.

A few researchers have devoted themselves to what we could call experimental metamathematics, doing metamathematical research with the help of today’s vast computational resources, which were unavailable not too long ago. Among such researchers are Stephen Wolfram and Gregory Chaitin, both of whom have formulated rich questions and made remarkable discoveries and contributions to the field (an NKS-on-math sample video lecture by Wolfram is available here.)

Such efforts have made it possible to explore particular spaces of possible programs and arrive at important results, including precise results perfectly consonant with traditional mathematics. For instance, Wolfram arrived at a single axiom, stated using only Nand (with an equivalent dual in terms of Nor), equivalent to the set of all axioms of Boolean algebra, by considering the space of all propositional logic expressions up to six Nands and two variables. He found that none of the 16896 possible axiom systems of this kind that he explored worked even up to 3-valued operators. But with 6 Nands and 3 variables, he found that 296 of the 288684 possible axiom systems worked up to 3-valued operators, and 100 worked up to 4-valued operators (NKS Book, Wolfram 2002, p. 809). He then found the smallest of these, proving that there were none smaller, and formulated the following axiom: (((x [Nand] y) [Nand] z) [Nand] (x [Nand] ((x [Nand] z) [Nand] x))) == z.
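The model-checking step in such a search is simple to illustrate. As a sketch (my own, in Python rather than the original Mathematica setup), one can ask which of the 16 two-valued binary operators satisfy Wolfram’s axiom; only Nand and Nor survive, and it is this kind of filtering, extended to 3- and 4-valued operators, that prunes the candidate axiom systems:

```python
from itertools import product

def satisfies_axiom(op):
    """Check ((x.y).z).(x.((x.z).x)) == z over all two-valued x, y, z."""
    return all(op(op(op(x, y), z), op(x, op(op(x, z), x))) == z
               for x, y, z in product((0, 1), repeat=3))

# Enumerate all 16 binary operators on {0, 1} as truth tables t[2x + y].
models = [t for t in product((0, 1), repeat=4)
          if satisfies_axiom(lambda x, y, t=t: t[2 * x + y])]

print(models)  # [(1, 0, 0, 0), (1, 1, 1, 0)]: exactly Nor and Nand
```

Passing such finite-model filters is necessary but not sufficient; proving that a surviving candidate really implies all of Boolean algebra is the job of an equational theorem prover.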

Similar research on first-order logic would be worth undertaking, considering that it is powerful enough to express most or all of our mathematical theories.

The image below is a graph of what I call the universe of mathematical theories (or axiom systems, since they are not interpreted under any particular structure and have no semantic content). Each column represents a formula generated under an enumeration which eventually goes through all possible first-order formulas. Each cell in the matrix has a color: gray if the formula of the column is false in the axiom system of the row, white if it is true. Some cells are black; they represent formulas that are neither false nor true, in other words syntactically underivable within the axiom system in question. The project was developed in a pre-release of Mathematica Version 6 using Waldmeister, an automatic theorem prover embedded in the new Mathematica and based on Knuth-Bendix completion. Some patterns in the image can be explained by the chosen enumeration of the formulas and the axiom systems (which are subsets of the same formulas). However, the question is whether a pattern can always be so explained, and whether patterns are formed only by syntactical facts or whether they can shed light on general properties of classes of axiom systems. I have explored these issues through to the beginning of the mathematical universes of length 5 and 6, but the process is clearly intractable, so I am beginning to generate samples of universes of higher degree, and to perform translations of mathematical theories into purely syntactical axioms, and of axiom systems into theories under arbitrary interpretations. One way or another this procedure will touch one of our current mathematical theories. The question is which number it will be, and how many times it could be repeated under different interpretations.

The image above shows a sample with the first proofs for all 161 raw formulas of length 4 (involving 4 bound variables) and the first 1000 axiom systems (which come from the subset operation applied to the 161 raw formulas of length 4). Since the size (total number of axiom systems) of the universe of length 4 is 2.9×10^48, this is only the very first part of it, with the first 161000 proofs obtained using the automated theorem prover. The problem is evidently intractable, but a sampling method can be conceived in order to explore higher universes, to mix formulas between universes, and to find possibly unrecognized (not easily syntactically explained) structures.

All equational axiom systems can be represented with two binary operators f and p, where p is a pairing function. We specify a constant using an axiom of the form ForAll[a, ForAll[b, f[a, a] == f[b, b]]], and then the ith constant is c[i] = Nest[p[f[a, a], #] &, f[a, a], i]. We can then use f[c[i], p[c[1], x]] as the ith function of x. Note that x may be a list built from pairs. Of course, this assumes that there is an infinite number of individuals in the most general case.
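In Python terms (a direct transcription of the Mathematica encoding above, with terms represented as nested tuples), the constants look like this:

```python
# Terms as nested tuples: ('f', s, t) and ('p', s, t) are the two binary
# operators f and p applied to subterms s and t; 'a' is a free symbol.
A = ('f', 'a', 'a')  # f[a, a] plays the role of the distinguished constant

def c(i):
    """The ith constant: i-fold pairing of f[a,a] with itself,
    as in c[i] = Nest[p[f[a, a], #] &, f[a, a], i]."""
    term = A
    for _ in range(i):
        term = ('p', A, term)
    return term

# Distinct indices yield syntactically distinct terms, so the encoding
# is injective, which is all that is needed of a constant supply.
print(c(2))  # ('p', ('f', 'a', 'a'), ('p', ('f', 'a', 'a'), ('f', 'a', 'a')))
```

The sketch only checks the syntax of the encoding; the axiom ForAll[a, ForAll[b, f[a, a] == f[b, b]]] is what makes f[a, a] behave as a single constant inside the equational theory itself.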

All axioms can be given in Skolemized form, hence there is no need to use existential quantifiers in axioms.

The mathematical universes created by this method were checked for consistency (an inconsistent system proves a == b for any a and b; in other words, in it anything can be proven equal to anything) and internal independence (a minimal set of axioms; in other words, no axiom ai in a system S can be derived from another axiom aj or from a subset of {a1, …, an} in S, where n is the length, i.e. the number of axioms, of S). Some interesting results (to be published in the Journal of Complex Systems) have been obtained, such as the proportion of consistent to inconsistent axiom systems in this enumeration (which is not precisely arbitrary but quasi-lexicographical), and of internally independent to dependent ones. The purpose of the project is to shed some light on questions like the following:

– Euclid described axioms as self-evident truths. From the perspective of modern mathematics it is quite evident that it would be very difficult to find axioms that are self-evident truths, and even more difficult to claim that the axiom of choice or the continuum hypothesis are self-evident truths. So the problem is: where do we get axioms from?
– What is evident is that math theories begin from something assumed to be true, a starting point adopted for pragmatic reasons rather than because of its intrinsic truth value.
– How special are our mathematics (Are we being “mathcentric” in the Copernican sense)?
– How far along are current mathematical theories in the enumeration of all possible axiom systems (which are enumerable)?
– Given an enumeration, how arbitrary is it and what role will it have in placing our current mathematics?
– Is there a shorter axiom system (in terms of the number of axioms or their length) defining current mathematical theories? (My claim: there may be several copies of mathematical theories throughout the enumeration, but I wonder whether they get syntactically simpler while becoming semantically more complicated. For example, the same operations may have more complicated interpretations in order to capture an equivalent theory in fewer or shorter axioms. S. Wolfram was surprised at how close to the beginning the axiom systems for propositional logic were when he enumerated non-quantified systems for his NKS book. Could the same be true for arbitrary mathematical theories, and if so, what would that mean?)
– Is there a single type of math? In other words, could a hypothetical alien civilization have a different mathematics? It seems that there is a single physics, since, presumably, we live in a single universe. However, we have several models of that universe, not all of which are mutually compatible, and which are known a priori to be approximations. So the question could be split in two: perhaps there is a single physics, but we are not able to access it assuming real numbers, i.e. by traditional methods. However, it seems unclear how we might argue that there is a single type of mathematics.
– Is there a minimal set of mathematics shared by all possible hypothetical civilizations, probably at least arithmetic, the theory closest to physical reality?
– Assuming a discrete universe, and given the role of mathematics and physics in determining what we call “real”, the enumeration of possible universes would be feasible. What would it mean if our universe were very close to the beginning of the enumeration of all possible universes? Of course that is very dependent on the chosen enumeration, but could the question be made in some way independent of it, or less arbitrary? To what degree?