General

How Humans perceive the world is biased by how patterns are distributed in Nature and their intrinsic complexity

Posted in Algorithmic information theory, Complexity, Computer Science, General, New Ideas on August 27th, 2014 by Hector Zenil

A new paper of mine, co-authored with my colleagues and Algorithmic Nature Lab members Nicolas Gauvrit and Fernando Soler-Toscano, has just come out.

Using previously generated and new experimental data, together with new methods for calculating the algorithmic complexity of 2-dimensional objects, we found that when humans assess the complexity of an image (a small 4×4 pattern), their ratings correlate with the algorithmic complexity of the image, mediated by the probability that such a pattern appears in real-world scenes. In other words, human judgements are biased both by the patterns found in the world and by algorithmic complexity, and the patterns found in the world are themselves correlated with algorithmic complexity. This strengthens my claim for an algorithmic world, where patterns can be accounted for by an algorithmic production process.
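The paper estimates the algorithmic complexity of these 4×4 patterns with methods based on algorithmic probability. As a much cruder illustration of the underlying intuition, that repetitive patterns score lower than random-looking ones, here is a minimal Mathematica sketch (not the method used in the paper) that uses the Shannon entropy of a pattern's 2×2 blocks as a stand-in complexity proxy:

(* A crude complexity proxy for a 4x4 binary pattern: the base-2 Shannon
   entropy of the distribution of its four 2x2 blocks. This is only an
   illustration, not the algorithmic-probability method used in the paper. *)
blockEntropy[pattern_] := N@Entropy[2, Flatten[Partition[pattern, {2, 2}], 1]];

checkerboard = {{1, 0, 1, 0}, {0, 1, 0, 1}, {1, 0, 1, 0}, {0, 1, 0, 1}};
randomPattern = RandomInteger[1, {4, 4}];

{blockEntropy[checkerboard], blockEntropy[randomPattern]}
(* the repetitive checkerboard yields 0 bits; a typical random pattern yields close to 2 bits *)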

The journal (Visual Cognition) allows 50 free electronic copies of the paper to be downloaded. Should you be interested in this paper and unable to access it otherwise, you can obtain a free copy using the following e-print link.

The Turing Test pass fiasco

Posted in General on June 10th, 2014 by Hector Zenil

The Turing test was proposed by Turing in 1950 as a way to evaluate the apparent intelligence of a machine. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the human and which is the machine. Taken seriously, the test should measure the language skills required of chatbots performing as intelligent interlocutors.

I would have loved to hear the news that the Turing Test had been passed for the right reasons, if there were genuine grounds to believe it had been. Unfortunately this is not the case for the recent claim (http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx) and the bold reports published in its wake, without any critical comment, by the major newspapers and magazines. While it is difficult to fathom the exact motivations that drove Turing to come up with what he called “the imitation game”, it is clear that the chatbot claimed to have passed the Turing test is no different from any other chatbot tested before, judging by the kinds of conversations it held, except for its creators’ deliberate attempt to excuse its limitations by characterizing it as a 13-year-old non-native English speaker. If the rules can be bent in this way, I could, taking things to the limit, easily write a script to pass the Turing test by simulating a 2-month-old baby, or an alien being writing gibberish, or a drunkard for that matter, one that forgets even the last question that was asked.

Taken seriously, the Turing Test should not be a test of deceiving the judges in this way; on display should be the language skills of a typical interlocutor working in their native language and at full capability (which would rule out, for instance, a simulation of a drunkard or of an intellectually handicapped person). A milestone in AI in the context of the Turing test will be a chatbot that is genuinely able to simulate the entire range of language skills of a normal person working at full capability, a chatbot that does not deflect with questions or forget what was said at the beginning of a conversation (or one question before, for that matter), a chatbot that does not need a lookup table of about the same size as the number of questions it can answer, and yet is able to answer in about the same time as a human being.

The claim that the Turing Test has been passed does nothing but harm to the field of artificial intelligence, because anyone probing beyond what the newspapers and magazines have picked up from the original press release and repeated word for word (shame on them all, not only for this but for so many other atrocious errors they disseminated, such as mistaking a script for a supercomputer!) will judge it a fiasco, to the detriment of true successes in the field, past and future. This supposed success has done a disservice to the field and to the possibly honest creators of the chatbot, whose open admission that they had given it the character of a 13-year-old foreign kid may have been meant to lower expectations of what it could achieve.

The mistake of claiming that their winner passed the “true Turing test”, as they called it, and even calling it a milestone, is hard to excuse, especially in view of the damage it could do to the field, and indeed to the organizers themselves and to other Turing test events, which already have a hard time distancing themselves from merely entertaining activities. In summary, if a Turing test was deemed to have been passed 2 days ago, it was passed for all the wrong reasons.

For people interested in the subject, I recently gave an invited talk on Cognition, Information and Subjective Computation at the special session on Representation of Reality: Humans, Animals and Machines at the convention of the AISB (the Society for the Study of Artificial Intelligence and Simulation of Behaviour) at Goldsmiths, University of London. In the talk I surveyed some of the latest approaches to intelligence, consciousness and other generalized Turing tests for questions such as what life is. The slides are available at SlideShare here.

A more interesting discussion in the context of the Turing Test and common sense is what has happened with CAPTCHA, where images get more and more complicated in a battle to tell bots apart from humans. Computers have become better than humans at tasks we recently thought were decades away, such as text and face recognition (speech recognition is still getting there), and there is no need for a poorly performed *restricted* Turing Test to realize that computers can perform and excel at tasks that previously required human intelligence. Just as Turing said, I have no problem these days attributing intelligence to computers, even if it is a different kind of intelligence. And this, from my point of view, was the purpose of the Turing Test: how intelligence is achieved is not important, just as an airplane does not need to fly like a bird in order to fly.

Update (June 22): Prof. Warwick has responded to the critics in The Independent, in his defense:

“…the judges were not told he was a teenager and Turing never suggested that artificial intelligence would have to pose as an adult – just that it fooled people to thinking it was human. Judges were free to ask the subjects any questions they liked in unrestricted conversations – and Eugene was clearly capable of holding its own.”

Really… I stand by my proposal, then, to pass a truly unrestricted Turing Test in this spirit by writing a chatbot that emulates a 2-month-old baby. How little common sense for a test that should have been performed spotlessly and that has been claimed to have had very high standards. I also learnt that judges were allowed to be children, among others, so again, why not put babies in as judges? Turing never said anything against it; the only requirement Turing stated explicitly was to have judges who were not computer experts. Were children counted in the 30% of confused judges? A question for the organizers, who are withholding information for the sake of an academic paper.

One of the Eugene chatbot’s developers, Vladimir Veselov, has also spoken, in The Calvert Journal, where he said that

“I’m not sure that Eugene is an achievement in computer science, although, yes, we’ve shown that it’s possible to pass the Turing Test.”

(This is in reference to the way the organizers and the University of Reading announced the event as a milestone.)

Really… I can help you with that: no, it is not a milestone whatsoever, and you supply the answer yourself just a few sentences earlier, where you wrote:

“…There are two classes of chatbots, a self-learning system like a Cleverbot and the second, like Eugene, that comes with predetermined answers. In some instances, he asks questions and in others he uses information from the user to answer questions.”

and immediately afterwards, when asked by the journalist “Does that mean his ability to have a natural conversation is down to chance?”, Veselov answers:

“He’s not like a smart system, so yes. He has knowledge of certain facts and some basic information about different countries, different inventors and so on. He uses an encyclopaedia to answer questions and can do very simple arithmetic operations. He can also control conversation so that if he doesn’t know the answer, he’ll turn the conversation around by asking a question.”

Is that any different from what was done in the 60s with chatbots such as Eliza? Certainly there has been progress, if only because of how fast computers have become. Indeed, it is all the more surprising how badly chatbots perform despite the incredible increase in computer power and resources. Yet in 2014 it is claimed that the Turing Test has been passed, with an Eliza-type chatbot, after five decades of incremental improvements.

Veselov makes the outrageous claim that “The test is about imitation of humans, it’s not about artificial intelligence”! Wrong. Turing designed the test as a replacement for the question of whether machines can think; it has everything to do with AI. In fact, Turing’s 1950 landmark paper is considered the starting point of the field of AI!

New paper solves and addresses some of Wolfram’s Open Problems in Cellular Automata

Posted in General on June 19th, 2013 by Hector Zenil

In a new paper entitled Asymptotic Behaviour and Ratios of Complexity in Cellular Automata Rule Spaces, to be published soon in IJBC and available online here, at least three open problems in cellular automata are solved or addressed. These are related to:

  1. The development of a formal classification of cellular automaton behaviours;
  2. The development of automated ways to find “interesting” cellular automata; and
  3. The development of ways to estimate the frequencies of different types of cellular automaton behaviour across rule spaces.

Wolfram’s open problems document reads:

“If one looks at all possible cellular automaton rules, can one make a quantitative estimate of what fraction will show repetitive, nested, etc. behavior? Is there a definite limiting fraction of rules that, say, show nested behavior.”

In the aforementioned paper, strong arguments and a numerical answer are given: the fraction of complex cellular automata grows asymptotically to 1 for increasingly larger rule spaces. Put differently, picking a longer computer program at random gives a greater chance of producing complex (algorithmically random) behaviour. When the program space is large enough, the probability is arbitrarily close to 1. Notice that the converse is not true: small program spaces do not necessarily have a small fraction of complex behaviour. We also show that, once a concept of asymptotic behaviour is defined, complex behaviour dominates among the n possible stable behaviours of a single program.

In this paper, we studied the asymptotic behaviour of symbolic computing systems, notably one-dimensional cellular automata (CA), in order to ascertain whether, and at what rate, the number of complex versus simple rules dominates the rule space for increasing neighbourhood range and number of symbols (or colours), and how different behaviour is distributed in the spaces of different cellular automata formalisms (as applied to elementary and totalistic CA rules). In doing so we defined an enumeration of initial conditions and automated methods for the discovery of interesting rules. Using two different measures, Shannon’s block entropy and Kolmogorov complexity, the latter approximated by two different methods (lossless compressibility and block decomposition based on algorithmic probability), we arrive at the same trend of larger fractions of complex behaviour. We also advance a notion of asymptotic and limit behaviour for individual rules, both over initial conditions and runtimes, and we provide a formalisation of Wolfram’s classification as a limit function in terms of Kolmogorov complexity.

Among the results, we confirmed preliminary findings reporting a preponderance of the complex classes 3 and 4 as the number of symbols (colours) and the neighbourhood ranges of these systems increase. We have shown that behaviour is for the most part stable for a given initial condition, but that complexity increases with the number of initial configurations and with rule space size. The rate at which it does so is fast at first, before asymptotically slowing down as it approaches compressibility ratio 1 (incompressibility). Similar results were obtained using two different measures: one was a normalised block entropy measure applied to CA space-time diagrams, and the other was Kolmogorov complexity, approximated in two different ways (lossless compression and algorithmic probability). Using the latter, we found at least 2 ECA rule candidates for Class III or IV behaviour that were believed to belong to Class I or II (if you wonder which specific rules, have a look at the paper!).
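For readers who want a quick feel for the compression-based side of this, here is a rough Mathematica sketch, my own toy version rather than the paper's exact pipeline, which approximates a compressibility ratio for ECA space-time diagrams with the built-in Compress function:

(* Toy compressibility ratio for an ECA space-time diagram: compressed size
   over original size of the flattened 0/1 diagram, using Compress as a
   crude stand-in for the paper's lossless-compression estimates. *)
compressionRatio[rule_, steps_: 200] := Module[{diagram, flat},
  diagram = CellularAutomaton[rule, {{1}, 0}, steps]; (* single black cell on a white background *)
  flat = StringJoin[ToString /@ Flatten[diagram]];
  N[StringLength[Compress[flat]]/StringLength[flat]]]

compressionRatio /@ {0, 90, 30}
(* rule 0 (uniform) should compress best and rule 30 (random-looking) worst,
   with the nested rule 90 in between *)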

While the entropy result may be less surprising, because the number of possible configurations with more symbols and larger neighbourhood ranges suggests an increase in entropy, the fact that the three approaches used to measure complexity in rule spaces (including estimations of a universal and objective measure such as Kolmogorov complexity) yielded the same results provides a robust answer to the question of how the fractions of behaviour change, and at what rate, in computer programs as resources increase: complexity is clearly seen to dominate asymptotically.

To know more about this, see the paper linked above.

Information Special Issue “Physics of Information”

Posted in General on May 28th, 2013 by Hector Zenil

Special Issue “Physics of Information”

A special issue of Information (ISSN 2078-2489).

Deadline for manuscript submissions: 20 November 2013

Special Issue Editors

Guest Editor
Dr. Hector Zenil
Unit of Computational Medicine, Center for Molecular Medicine, Karolinska Institutet, Stockholm, Sweden
Website: http://www.mathrix.org/zenil/
E-Mail: hectorz@labores.eu
Interests: algorithmic information theory; computational biology and complex networks

Guest Editor
Dr. Gordana Dodig-Crnkovic
School of Innovation, Design and Engineering, Computer Science Laboratory, Mälardalen University, Sweden
Website: http://www.idt.mdh.se/~gdc
E-Mail: gordana.dodig-crnkovic@mdh.se
Interests: philosophy of information and computing

Special Issue Information

Dear Colleagues,

Information has enabled new interpretations of quantum phenomena that promise fresh theoretical and practical insights. It also lies at the heart of statistical mechanics. Longstanding paradoxes in thermodynamics have been solved (or raised) using information and computation. In cosmology, it has been suggested that the universe has an unexpectedly limited capacity to store and process information, perhaps indicating that space and time are not fundamental properties of reality. Paradoxically, physics seems to impose constraints on information, such as the speed at which it can travel from one region to another, or how much information we can extract from a physical event at the smallest scale. It has also been suggested that focusing on information flow will help us better understand how cells and complex biological organisms work. Indeed, these days molecular biology is mostly an information science.

But it is computer science that has placed information at the center of the modern debate. Digital information has dominated technology over the last half century, empowering and extending human capabilities to new frontiers. How unreasonable is the effectiveness of digital computation in the natural sciences? Is computation the obvious description of information processing? What are the connections between carrying information, programming artificial and natural systems, and computation? What is the nature of the interplay between information, entropy and other complexity measures?

This special issue is devoted to all these questions as approached through rigorous and unpublished technical work from areas such as cosmology, astrophysics, mathematics, computer science, complexity science, biology, and neuroscience. The central topic for authors to bear in mind is some foundational aspect of information and reality, or information processing in nature.

Dr. Hector Zenil
Dr. Gordana Dodig-Crnkovic
Guest Editors

 

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed Open Access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 300 CHF (Swiss Francs) but fee waivers are possible for authors in developing countries and invited authors.

 

Keywords

  • information paradoxes in physics
  • programming reality
  • digital and informational physics
  • it from bit and/or bit from it
  • bit-string physics
  • quantum information and computation
  • computational thermodynamics
  • mathematical/computational universe hypothesis
  • simulation and simulacra
  • theories of quantum gravity
  • algorithmic information theory
  • information, algorithms and automata
  • models of information processing
  • natural computing

Help science and behave randomly

Posted in General on May 9th, 2013 by Hector Zenil

Take part in a cutting-edge massive online experiment to assess how humans perceive and generate randomness

Human Randomness Perception and Generation Project:

http://complexitycalculator.com/hrng/

For obvious reasons I cannot say much.

This is an experiment undertaken by the Algorithmic Nature Group of LABoRES.

The results will be announced in this blog. Stay tuned.

New volume: A Computable Universe

Posted in General on March 23rd, 2013 by Hector Zenil

With a foreword by Sir Roger Penrose, my new volume A Computable Universe has been published by World Scientific and Imperial College Press, following my bestseller Randomness Through Computation in 2011. A Computable Universe: Understanding and Exploring Nature as Computation introduces the views of the pioneers of the computable-universe view of the world and of some of its strongest opponents (including Penrose himself).

 


In the context of the Alan Turing Year, the book discusses the foundations of computation in relation to nature.

  • What is computation?
  • How does nature compute?

The contributors discuss computation in the world from a variety of perspectives, ranging from foundational concepts to pragmatic models to ontological conceptions and philosophical implications.

The volume provides a state-of-the-art collection of technical papers and non-technical essays, representing a field that assumes information and computation to be key in understanding and explaining the basic structure underpinning physical reality. It also includes a new edition, prepared by Adrian German from Indiana University Bloomington and myself, of Konrad Zuse’s “Calculating Space” (the MIT translation), fully retyped in shiny LaTeX with a few corrections and improvements to the figures. A free copy is available here (trivia: Zuse’s name was used in the film Tron: Legacy (2010), certainly inspired by his mind-blowing ideas on a digital universe).

The book also includes the transcription of a panel discussion on the volume’s topic, featuring worldwide experts in quantum mechanics, physics, cognition, computation and algorithmic complexity, including a Nobel laureate. In this section I asked the authors (and authors asked each other, and me) the questions I thought an informed reader would ask them, sometimes leading to quite heated exchanges.

The volume is dedicated to the memory of Alan M. Turing — the inventor of universal computation, on the 100th anniversary of his birth, and is part of the Turing Centenary celebrations.

Free sample chapter(s)
Foreword (533 KB)
Chapter 1: Introducing the Computable Universe (476 KB), by myself.
Table of Contents:

    • Foreword (R Penrose)
    • Preface
    • Acknowledgements
    • Introducing the Computable Universe (H Zenil)
  • Historical, Philosophical & Foundational Aspects of Computation:
    • Origins of Digital Computing: Alan Turing, Charles Babbage, & Ada Lovelace (D Swade)
    • Generating, Solving and the Mathematics of Homo Sapiens. E Post’s Views on Computation (L De Mol)
    • Machines (R Turner)
    • Effectiveness (N Dershowitz & E Falkovich)
    • Axioms for Computability: Do They Allow a Proof of Church’s Thesis? (W Sieg)
    • The Mathematician’s Bias — and the Return to Embodied Computation (S B Cooper)
    • Intuitionistic Mathematics and Realizability in the Physical World (A Bauer)
    • What is Computation? Actor Model versus Turing’s Model (C Hewitt)
  • Computation in Nature & the Real World:
    • Reaction Systems: A Natural Computing Approach to the Functioning of Living Cells (A Ehrenfeucht, J Kleijn, M Koutny & G Rozenberg)
    • Bacteria, Turing Machines and Hyperbolic Cellular Automata (M Margenstern)
    • Computation and Communication in Unorganized Systems (C Teuscher)
    • The Many Forms of Amorphous Computational Systems (J Wiedermann)
    • Computing on Rings (G J Martínez, A Adamatzky & H V McIntosh)
    • Life as Evolving Software (G J Chaitin)
    • Computability and Algorithmic Complexity in Economics (K V Velupillai & S Zambelli)
    • Blueprint for a Hypercomputer (F A Doria)
  • Computation & Physics & the Physics of Computation:
    • Information-Theoretic Teleodynamics in Natural and Artificial Systems (A F Beavers & C D Harrison)
    • Discrete Theoretical Processes (DTP) (E Fredkin)
    • The Fastest Way of Computing All Universes (J Schmidhuber)
    • The Subjective Computable Universe (M Hutter)
    • What Is Ultimately Possible in Physics? (S Wolfram)
    • Universality, Turing Incompleteness and Observers (K Sutner)
    • Algorithmic Causal Sets for a Computational Spacetime (T Bolognesi)
    • The Computable Universe Hypothesis (M P Szudzik)
    • The Universe is Lawless or “Pantôn chrêmatôn metron anthrôpon einai” (C S Calude, F W Meyerstein & A Salomaa)
    • Is Feasibility in Physics Limited by Fantasy Alone? (C S Calude & K Svozil)
  • The Quantum, Computation & Information:
    • What is Computation? (How) Does Nature Compute? (D Deutsch)
    • The Universe as Quantum Computer (S Lloyd)
    • Quantum Speedup and Temporal Inequalities for Sequential Actions (M Żukowski)
    • The Contextual Computer (A Cabello)
    • A Gödel-Turing Perspective on Quantum States Indistinguishable from Inside (T Breuer)
    • When Humans Do Compute Quantum (P Zizzi)
  • Open Discussion Section:
    • Open Discussion on A Computable Universe (A Bauer, T Bolognesi, A Cabello, C S Calude, L De Mol, F Doria, E Fredkin, C Hewitt, M Hutter, M Margenstern, K Svozil, M Szudzik, C Teuscher, S Wolfram & H Zenil)
  • Live Panel Discussion (transcription):
    • What is Computation? (How) Does Nature Compute? (C S Calude, G J Chaitin, E Fredkin, A J Leggett, R de Ruyter, T Toffoli & S Wolfram)
  • Zuse’s Calculating Space:
    • Calculating Space (Rechnender Raum) (K Zuse)
    • Afterword to Konrad Zuse’s Calculating Space (A German & H Zenil)

Available at Amazon and your favorite online retailers.

My new book: Irreducibility and Computational Equivalence

Posted in General on March 23rd, 2013 by Hector Zenil

Irreducibility and Computational Equivalence is a collection of contributions by outstanding authors in celebration of Stephen Wolfram’s A New Kind of Science, 10 years after its publication. Published by Springer Verlag, it is already available in both paperback and hardcover, as well as in eBook and CDF (Computable Document Format) versions.

Here is a free preview and the book website from Springer.

It is clear that computation is playing an increasingly prominent role in the development of mathematics, as well as in the natural and social sciences. The work of Stephen Wolfram over the last several decades has been a salient part of this phenomenon, helping to found the field of Complex Systems, with many of the constructs and ideas in his book A New Kind of Science (NKS) becoming part of scientific discourse and general academic knowledge: from the now established Elementary Cellular Automata to the unconventional concept of mining the Computational Universe, and from today’s widespread Wolfram Behavioural Classification to his principles of Irreducibility and Computational Equivalence.

I found this volume fascinating in its efforts to flesh out the computational implications for biology more generally.
– Dr. Mark Changizi

I believe that this book will be an inspiration for future work in interdisciplinary research at the intersection of computer science, natural and social sciences.
– Prof. Ivan Zelinka

The volume, with a Foreword by Gregory Chaitin and an Afterword by Cris Calude, covers these and other topics related to or motivated by Wolfram’s seminal ideas, reporting on research undertaken in the decade following the publication of Wolfram’s NKS book. Featuring 39 authors, its 23 contributions are organized into seven parts.

It is now available from Amazon and any large online bookstore, and directly from Springer itself.

Table of Contents:

Foreword
Gregory Chaitin

Part I Mechanisms in Programs and Nature

1. Hyperbolic Cellular Automata
Maurice Margenstern
2. A Lyapunov View on the Stability of Cellular Automata
Jan M. Baetens & Bernard De Baets
3. On the Necessity of Complexity
Joost J. Joosten
4. Computational Technosphere and Cellular Engineering
Mark Burgin

Part II The World of Numbers & Simple Programs

5. Cellular Automata: Models of the Physical World
Herbert W. Franke
6. Symmetry and Complexity of Cellular Automata: Towards an Analytical Theory of Dynamical System
Klaus Mainzer
7. A New Kind of Science: Ten Years Later
David H. Bailey

Part III Everyday Systems

8. A New Kind of Finance
Philip Z. Maymin
9. The Relevance and Importance of Computation Universality in Economics
Kumaraswamy Velupillai
10. Exploring the Sources of and Nature of Computational Irreducibility
Brian Beckage, Stuart Kauffman, Louis Gross, Asim Zia, Gabor Vattay and Chris Koliba

Part IV Fundamental Physics

11. The Principle of a Finite Density of Information
Gilles Dowek and Pablo Arrighi
12. Artificial Cosmogenesis: A New Kind of Cosmology
Clément Vidal
13. Do Particles Evolve?
Tommaso Bolognesi

Part V The Behavior of Systems & the Notion of Computation

14. An Incompleteness Theorem for the Natural World
Rudy Rucker
15. Pervasiveness of Universalities of Cellular Automata: Fascinating Life-like Behaviours
Emmanuel Sapin
16. Wolfram’s Classification and Computation in Cellular Automata Classes III and IV
Genaro J. Martinez, Juan Carlos Seck Tuoh Mora and Hector Zenil

Part VI Irreducibility & Computational Equivalence

17. Exploring the Computational Limits of Haugeland’s Game as a Two-Dimensional Cellular Automaton
Drew Reisinger, Taylor Martin, Mason Blankenship, Christopher Harrison, Jesse Squires and Anthony Beavers
18. Irreducibility and Computational Equivalence
Hervé Zwirn and Jean-Paul Delahaye
19. Computational Equivalence and Classical Recursion Theory
Klaus Sutner

Part VII Deliberations and Philosophical Implications

20. Wolfram and the Computing Nature
Gordana Dodig-Crnkovic
21. A New Kind of Philosophy. Manifesto for a Digital Ontology
Jacopo Tagliabue
22. Free Will For Us, not For Robots
Selmer Bringsjord

Afterword
Cristian Calude

Available from Amazon and any large online bookstore, and directly from Springer itself.

Life as Thermodynamic Evidence of Algorithmic Structure in Natural Environments

Posted in General on March 23rd, 2013 by Hector Zenil

In evolutionary biology, attention to the relationship between stochastic organisms and their stochastic environments has leaned towards the adaptability and learning capabilities of the organisms rather than toward the properties of the environment. This article is devoted to the algorithmic aspects of the environment and its interaction with living organisms. We ask whether one may use the fact of the existence of life to establish how far nature is removed from algorithmic randomness. The paper uses a novel approach to behavioral evolutionary questions, using tools drawn from information theory, algorithmic complexity and the thermodynamics of computation to support an intuitive assumption about the near optimal structure of a physical environment that would prove conducive to the evolution and survival of organisms, and sketches the potential of these tools, at present alien to biology, that could be used in the future to address different and deeper questions. We contribute to the discussion of the algorithmic structure of natural environments and provide statistical and computational arguments for the intuitive claim that living systems would not be able to survive in completely unpredictable environments, even if adaptable and equipped with storage and learning capabilities by natural selection (brain memory or DNA).

This new paper has recently been published online and is freely available from MDPI here. The statistics show 355 readers in only 3 months.

Zenil, H.; Gershenson, C.; Marshall, J.A.R.; Rosenblueth, D.A. Life as Thermodynamic Evidence of Algorithmic Structure in Natural Environments. Entropy 2012, 14, 2173-2191.

Announcing the Online Algorithmic Complexity Calculator

Posted in Algorithmic information theory, Complexity, General on March 23rd, 2013 by Hector Zenil

We have made available a basic beta version of an Online Algorithmic Complexity Calculator implementing the methods we have developed in recent papers at the Algorithmic Nature lab.

The OACC provides a comprehensive framework of universal mathematical measures of algorithmic complexity for researchers and professionals. It retrieves objective numerical measures of randomness for potential applications in a very wide range of disciplines, from bioinformatics to psychometrics, from linguistics to economics.

It is based on several years of research devoted to new methods for evaluating the information content and algorithmic complexity of objects. The Coding Theorem method for dealing with short strings is described in this paper. The calculator currently retrieves numerical approximations of Kolmogorov complexity and of Levin’s universal distribution (or Solomonoff algorithmic probability) for short binary strings, for which lossless compression algorithms fail as a method of approximating program-size complexity, hence providing a useful and complementary alternative to compression algorithms. More algorithmic information measures, more data and more techniques will be incorporated gradually in the future, covering a wider range of objects such as longer binary strings, non-binary strings and n-dimensional objects (such as images).
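The reason lossless compression fails for short strings is easy to see in practice: the fixed overhead of the compressed format dwarfs the length of the string itself. Here is a quick Mathematica illustration using the built-in Compress function (the OACC's actual estimates come from the Coding Theorem method, not from this):

(* For very short strings the "compressed" representation is longer than the
   string itself, so compressed length is useless as a complexity estimate. *)
shortStrings = {"0101010101", "0110100110", "1111111111"};
{#, StringLength[#], StringLength[Compress[#]]} & /@ shortStrings
(* each compressed length should far exceed the original 10 characters *)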

It also includes a Short String Complexity Explorer (it may take some time to load if you open the link), a tool developed in Mathematica that runs online with a free player. It provides a comparison of the estimated Kolmogorov complexity (K), the algorithmic probability (m), Shannon entropy (E) and compressibility (using Deflate) of a string. It also gives an indication of the string’s relative complexity among all other strings for which K has been approximated, and its rank in that distribution.

Paper published: Life as Thermodynamic Evidence of Algorithmic Structure in Natural Environments

Posted in General on November 6th, 2012 by Hector Zenil

Life as Thermodynamic Evidence of Algorithmic Structure in Natural Environments

In evolutionary biology, attention to the relationship between stochastic organisms and their stochastic environments has leaned towards the adaptability and learning capabilities of the organisms rather than toward the properties of the environment. This article is devoted to the algorithmic aspects of the environment and its interaction with living organisms. We ask whether one may use the fact of the existence of life to establish how far nature is removed from algorithmic randomness. The paper uses a novel approach to behavioral evolutionary questions, using tools drawn from information theory, algorithmic complexity and the thermodynamics of computation to support an intuitive assumption about the near optimal structure of a physical environment that would prove conducive to the evolution and survival of organisms, and sketches the potential of these tools, at present alien to biology, that could be used in the future to address different and deeper questions. We contribute to the discussion of the algorithmic structure of natural environments and provide statistical and computational arguments for the intuitive claim that living systems would not be able to survive in completely unpredictable environments, even if adaptable and equipped with storage and learning capabilities by natural selection (brain memory or DNA).

Zenil, Hector; Gershenson, Carlos; Marshall, James A.R.; Rosenblueth, David A. 2012. “Life as Thermodynamic Evidence of Algorithmic Structure in Natural Environments.” Entropy 14, no. 11: 2173-2191.

http://www.mdpi.com/1099-4300/14/11/2173

Hypercomputation in A Computable Universe

Posted in Computability, Universality and Unsolvability, Foundations of Computation, General, Recreation on September 21st, 2012 by Hector Zenil

This is the full version of my answer to a question put to me by Francisco A. Doria, included in the Discussion Section of A Computable Universe, my edited volume published by World Scientific and Imperial College Press and coming out next month (it is already available in Asia). The question concerns whether I think hypercomputation is possible:

I was once a hypercomputation enthusiast myself (my Master’s thesis, in French, was on “Hyper calcul”, focused on the feasibility of hypercomputational models). Paradoxically, it wasn’t until I had a good knowledge of the subject that I began to better appreciate, and increasingly enjoy, the beauty of the digital (Turing) model. On the one hand, hypercomputational models do not converge in computational power. There is a plethora of possible models of hypercomputation, while there is only one digital model (in terms of computational power). I find it astonishing how little it takes to reach the full power of (Turing) universality, and I don’t see the power of universal machines as being subject to any limitation, quite the opposite. I happen to think the question has a truth value and is ultimately amenable to a physical answer. However, I don’t think the answer will emerge in the foreseeable future, if it ever does.

Still, I found Doria’s contribution to the subject interesting, as he points out a particular (“ideal”) model of hypercomputation that I think gets around some of Martin Davis’ objections (The Myth of Hypercomputation), though it doesn’t fully address the problem of verification. Unlike a Turing computation, which can in principle be verified by carrying it out by hand, step by step, the inner workings of a hypercomputer can only be followed by another hypercomputer. The caricature version of the problem is the whimsical answer given by the (hyper?)computer Deep Thought in “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams, which proposed “42” as the “Ultimate (uncomputable?) answer to the Ultimate Question of Life, The Universe, and Everything” [parenthetical remarks added]. The only way to verify such an answer would be to build another, more powerful and even less understandable computer. This makes me wonder whether we ought not to favour meaningful computation over what could potentially be hypercomputation, even if hypercomputation were possible.

There is a strong analogy to the concept of proof in math. Mathematical proofs seem to fall into two types. They either serve to convince us of the truth of a statement we weren’t certain was true, or else to provide logical evidence that a statement intuitively believed to be true was in fact so (e.g. the normality of the mathematical constant pi). But in the latter case, why would one bother to provide a proof of a statement that nobody would argue to be false? It is because, ideally, math proofs should provide insight into why a statement is true, and not simply establish whether or not it is so. There are a few exceptions of course. Some come from mathematical practice, for example Wiles’ proof of Fermat’s last theorem. It is not clear whether Wiles’ proof provides any insight into the original question of the truth value of Fermat’s theorem (though it does contribute to understanding the connections among different powerful mathematical theories). Some other cases, especially among computer-automated proofs, are of the same kind, often neglecting the fact that a proof is also about explanation (for humans). In that sense I think we should also favour meaningful mathematical proofs (of meaningful mathematical questions!), just as we should better appreciate meaningful (digital) computation.

The study of infinite objects has given us great insight into profound and legitimate questions of math and computation, such as the nature of what a number is or what a computation is. And it has been immensely useful to focus on the limits of computation in order to better understand it. It is at the boundary between decidability and undecidability that one seems best positioned to answer the question of what computation means. And there are some examples where the study of a hypercomputational model is valuable not as a model of hypercomputation but for its ancillary results. Unlike most people, I think the contribution of, for example, Siegelmann’s ARNN model has much more to say about computational complexity (the ARNN model relativises P = NP and P != NP in a novel fashion), and therefore about classical computation, than about hypercomputation! (Siegelmann’s ARNN model has come to be strangely revered among hypercomputation enthusiasts.)

While I may agree that the problem of verification of a computation is not necessarily an argument against hypercomputation (because it can also be used against digital computation in practice), the answer lies only in the realm of physics and not in paper models. As Davis points out, it is no surprise that encoding non-computable numbers as weights in an artificial neural network leads to non-computable computation! So, assuming real numbers exist, the consequence is straightforward and the justification of the hypercomputational model merely circular.

Other models of hypercomputation are just fundamentally wrong, such as Peter Wegner’s model of “interactive computation” that is claimed to break Turing’s barrier. For a recent commentary pointing out the flaws of yet another hypercomputational claim, you can consult this entertaining blog post by Scott Aaronson on his Toaster-Enhanced Turing Machine.

Meaningful Math Proofs and “Math is not Calculation”

Posted in Foundations of Math, General, New Ideas, Recreation on August 5th, 2012 by Hector Zenil

Mathematicians are generally thought to be very good at calculation, and they sometimes are, but this is not because math is about calculation, just as astronomy is not about telescopes and computer science is not about computers (paraphrasing Edsger Dijkstra).

But if it is not preeminently about calculation, then what is mathematics about? The common answer, given by mathematicians themselves, is that it is about solving problems: learning to do so, and learning to think in a logical fashion. I think this may also be misleading. Let’s take as an example Fermat’s last theorem (which may itself be a misleading example, as math is not only about long-unresolved historical questions of little practical impact; in some sense, Fermat’s Last Theorem is not much more than a curiosity).

As is well known, Fermat’s last theorem was solved by Andrew Wiles in 1994. The solution wasn’t simple at all, and it required a lot of convoluted math (Wiles himself got it wrong at first and fixed it a few years later). The solution required much more machinery than is needed to formulate Fermat’s original statement; it is like going after a fly with a bazooka. For some mathematicians concerned with the foundations of math, Wiles’ proof may not be as satisfactory as they would have wished, even though it is an impressive piece of mathematical work connecting several areas of advanced math.

Wiles’ exact solution couldn’t have been arrived at earlier, as it draws upon some of the most sophisticated areas of modern math: it transfers a pure problem of arithmetic to a very powerful framework founded in algebraic geometry and analysis (elliptic curves, modular forms). If Fermat had a solution, it wouldn’t have been at all similar to Wiles’. In fact, even before Wiles’ proof, no mathematician seriously thought that Fermat’s last theorem was actually false. The consensus was that the theorem was true, so the long-expected proof was supposed to shed light on the reason behind this truth rather than simply confirm it. Wiles’ great achievement only confirmed that with a very sophisticated set of tools one could prove Fermat’s last theorem, but it didn’t shed any light on why it is true (and if it does, it is still hard to justify a tradeoff in which shedding light on arithmetic requires the use of set theory).

Wiles’ solution doesn’t answer the question of whether Fermat himself had a more basic (but no less valid) proof. The most likely case is that Fermat’s last theorem is actually independent of number theory, and requires the full power of set theory. Hence Fermat very likely had no such solution, contrary to what he claimed.

Fermat’s theorem is, however, a rare case of the ultimate great challenge, and not part of the common practice of the “average” mathematician. The work of an “average” mathematician depends very much on the field. If the field is concerned with foundations, it may involve a lot of set theory and logic, which in large part means finding sets of formulae to prove other sets of formulae and looking for relativisations. In algebra or group theory, or even topology, the work may have to do with all sorts of symmetries, or with characterising things in different ways in order to establish new and unanticipated connections, shedding more light on one area in terms of another, from a different point of view (and employing a different language). Such a connection is in fact the greatest achievement of Wiles’ proof of Fermat’s last theorem, as it connects several subdisciplines of modern mathematics, building sophisticated intuitions about true statements ultimately connected to Fermat’s last theorem.

I think math can be better described as a discipline of patterns (e.g. mappings, homeomorphisms, isomorphisms). Patterns are integrally related to the discipline I value most: algorithmic information theory, which is the ultimate field for the study of patterns, patterns in strings and objects, leading to applications such as compression algorithms.

One of the consequences of this change of mindset is that it will allow students access to computers. Computers and humans together can achieve an impressive number of things; they do so every day in all sorts of fields, from car design to the development of new chips for the next generation of computers. In fact, airplanes today are mostly flown by computers (no pilot or crew alone would be able to fly a modern airplane these days), and more than 60 to 70% (in some markets even 90%) of the operations in any top stock market are made by computers, with no human intervention. Today’s industry wouldn’t be conceivable without the combined forces of human creativity and motivation and the computer’s speed, accuracy and productivity.

There is an important new trend, spearheaded by the UK initiative Computer-Based Math, promoting these very ideas. Conrad Wolfram has recently explained in a blog post how computer programming could not only help to teach math, but how the two could merge into a single discipline. I may have a chance to talk about all this at the House of Lords soon, where the education shift would need to take official form.

Towards a Web of One…

Posted in General on November 25th, 2011 by Hector Zenil

The author is a little more critical of personalization than I’d be inclined to be, and in my opinion he gives too much credit to human editors (I think they can be as bad as or worse than algorithmic editors; witness the hacking scandals in the UK, a phenomenon not exclusive to Great Britain).

Eli Pariser’s words resonate powerfully with the way I see the web and how it has evolved into its current form, with its relentless social networking emphasis. My main concern is not personalization but trivialisation, owing not only to the inane content posted by individual users but also by companies controlling the web.

Social networking has been driven by an insidious knack for making clever people spout nonsense, and those who succumb make the place invasive in every sense. Not to mention the frantic race to acquire more friends (there have been reports of periods in which this phenomenon reverses, perhaps when people realise how meaningless these connections are and start unfriending their superfluous contacts, trimming their friend lists). Google+ and Facebook eventually came up with the idea of Acquaintances rather than Friends, in large part to mitigate this problem.

The sad thing is that in many respects the Internet has brought out pretty much the worst in us rather than making us better. Here is a little meaningful piece of content though, to make us think about it:

David Hilbert and Deutsch in den Wissenschaften

Posted in General on November 13th, 2011 by Hector Zenil

David Hilbert was probably one of the most prominent German mathematicians, if not the most prominent. His interests ranged from quantum mechanics to topology, and notably from the foundations to the history of science and mathematics. Among his main interests was geometry, at which he excelled by formalizing two and a half millennia of (Euclidean) geometry, synthesizing it into a few (consistent and independent) sets of axioms and making the entire theory as mathematically precise as possible (in the tradition of Euclid himself, but taken to its culmination), to the point that any proof could from then on be carried out by a mechanical procedure manipulating symbols with no semantics attached.

His formalization of geometry certainly led him to think that math could one day be entirely formalized in the same way, proving any true theorem of a given mathematical theory. In his famous 1900 lecture at the Sorbonne in Paris, Hilbert set much of the mathematics agenda for the 20th century, and among the open questions of his programme was the one he called the Entscheidungsproblem (the decision problem): whether a mechanical treatment of math would eventually make it possible for theorems to be proven automatically.

In his address to the Society of German Scientists and Physicians in Königsberg on September 8, 1930, Hilbert finished his speech with a resounding “Wir müssen wissen — wir werden wissen!” (We must know — we will know!) (the radio broadcast was recorded and is available here, and an English transcription here). The statement captured his cast of mind so well that it was used as the epitaph on his tomb in Göttingen.

Hilbert’s ideas eventually led two mathematicians to make two of the greatest discoveries of the last century, answering Hilbert’s open question in the negative with remarkable consequences. First there was Kurt Gödel, who in 1931 found that in any (powerful enough) mathematical theory there are always (true) statements that the theory can never prove to be true (or false), calling them undecidable propositions. Then Alan Turing, in 1936, proved what he called the undecidability of the halting problem for any general-purpose mechanical machine, using nothing but what became the basis of the first description of the modern digital computer and the foundations of a new science: computer science.

Hilbert’s “Wir müssen wissen — wir werden wissen!” made all this possible, and its impact is yet to be measured in the centuries to come.

My entry to the Deutsch in den Wissenschaften Competition (German in the Sciences) organized by the Goethe Institut.

A bit of cryptography, passwords strength, and sysadmins of savings banks

Posted in General, Natural Language and Computational Linguistics on August 14th, 2011 by Hector Zenil

This recent webcomic from xkcd, Password Strength, relates to what I’ve been investigating lately: the question of whether there need be a tradeoff between password strength and ease of recall, using, as usual, my knowledge of finite algorithmic information theory. While my conclusions will be presented in a paper in the next few months/years, the webcomic made me realize how defective even sophisticated things sometimes turn out to be. It draws attention to an unnecessary, even silly set of requirements for password generation.

Password strength is related to two main parameters: length and the likelihood of being guessed. Length is a simple parameter, the longer the better, but the likelihood of something being guessed is a bit trickier. What makes a string like 0000000 more likely to be cracked than say 0110101? Classical probability tells us that both have the same likelihood of occurring, thanks to (uniform) chance. But there is a good reason not to choose a word in any language as a password, viz. that someone trying to guess a password will assume that people wish to remember their password and therefore will most likely use meaningful strings, such as words, anniversary dates, and the like. Hackers use a technique called ‘dictionary attacks’ where every entry in a dictionary is presumed to be a potential password and used as such. The results are often spectacular. Despite the general awareness that words are not good passwords, when hackers perform these attacks they usually find at least a few people with weak, word-based passwords. The only reason not to use words is dictionary attack, so sysadmins took it upon themselves to dissuade people from choosing words as passwords, and the way they set about this was to ask users, and now quite often to force them, to:

1. Observe fixed length requirements;
2. Sometimes, use a combination of upper- and lower-case letters;
3. Sometimes, include numbers and other symbols;
4. On some sites, change the password frequently.

While requirement number 1 makes some sense (people may argue, with reason, that a short but good password is superior to an easy-to-guess long one), requirements 2 to 4 are quite arbitrary, if not completely nonsensical.

If people didn’t pick word-based passwords, and if every character were assumed to be equally likely, only the length of the password would matter. If a password were generated randomly, the probability of hitting a word by random letter generation would be very low: there are vastly more random letter sequences of a given length than there are words of that length. The problem is that words are not random: they are sequences of correlated letters, distributed according to the distribution of words in a language.
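A one-line check makes the point concrete (using Mathematica's built-in word list; the exact count depends on the dictionary used):

(* Fraction of random 5-letter lowercase strings that happen to be
   dictionary words; the exact value depends on the word list used. *)
fiveLetterWords = Select[DictionaryLookup[], StringLength[#] == 5 && LowerCaseQ[#] &];
N[Length[fiveLetterWords]/26^5]
(* a tiny fraction: there are far fewer 5-letter words than 26^5 possible strings *)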

But under the circumstances, the way sysadmins decided to address this issue was to force people not to choose single words, by arbitrarily enforcing requirements 1 to 3, with the result that people frequently either no longer remember their passwords or cannot maintain a range of different passwords for use at different sites (remember that if a password is disclosed at one site, it is effectively disclosed for all the other sites where it is used). To fix this, sysadmins then introduced the further measure of forcing people to change their passwords periodically, so that users can no longer keep their passwords synchronized across all the sites that need one, leading them either to write down their nonsensical passwords because they cannot recall them, to store them where they are not secure, or else to ask the system to retrieve the password every time they want to log in, often over unsecured lines (e.g. public wireless networks). These arbitrary requirements aren’t listed while you’re trying to log in, so you may not remember whether you used a number or a symbol (it may seem a bad idea to make hackers privy to this, but they will be anyway if they have access to the registration process). The computational cost of retrieving or generating a new password may be low, but the human time cost may not be.

A longer password will always be better than a shorter one, even a shorter one containing all sorts of numbers and symbols. Numbers and symbols are only useful for preventing dictionary attacks, but if people use leeting (or chat slang) then I’m afraid this does not solve the problem that sysadmins are faced with, because hackers could just apply the translation tables to their dictionaries and try again, which I’m sure would be very successful. There are leetspeak translation sites on the web (e.g. www.computerhope.com/cgi-bin/convert.pl). The only truly effective way to solve this problem is to require longer passwords (which also prevents the use of single words, because most common words have fewer than 8 characters, so asking for a 10-character password may be enough to preempt the choice of most dictionary words, though more characters can be required for greater safety).

Could this problem have been solved otherwise? It would certainly be too expensive–computationally speaking–to check every password against a dictionary to avoid every possible word in every language (and even harder to avoid employing anything having to do with the user’s life, such as his forename or anniversary date), but being asked to come up with silly characters became unbearable for many users.

The xkcd comic gives you a foretaste of a side effect of my algorithmic approach to cryptography. One does not, and should not, have to sacrifice ease of recall for strength. Something as simple as composing random words has greater combinatorial strength than a leetspeak-corrupted single word or a password comprising a few random characters, with the added advantage that, even if longer, sentences are much easier to remember.
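To put rough numbers on this (my own illustrative figures, not taken from the comic or from my paper), combinatorial strength in bits is simply the number of independent choices times the base-2 logarithm of the number of options per choice:

(* Rough combinatorial strength in bits = picks * Log2[choices], with
   illustrative (assumed) vocabulary and alphabet sizes. *)
bits[choices_, picks_] := N[picks Log2[choices]];
{bits[50000*10, 1],  (* one of ~50,000 dictionary words plus a digit: ~19 bits *)
 bits[95, 8],        (* 8 characters from the ~95 printable ASCII symbols: ~53 bits *)
 bits[50000, 4]}     (* 4 words drawn at random from ~50,000 words: ~62 bits *)

Only the randomly chosen four-word passphrase beats the eight random characters, and it is the only one of the three that is easy to remember.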

There is no trick to this. Choosing a few words means a longer password, and putting them together circumvents dictionary attacks (to perform dictionary attacks over full sentences is computationally too expensive, so there is no way hackers can start doing so).

One shouldn’t need to choose between a good password and one that’s easy to remember, because there are passwords that can be both. What one may not be able to do is have short passwords that are easy to remember and also strong enough. Thus requirement 1 is the only important one, because one can also control the number of possible dictionary words, e.g. by picking a password length greater than that of most commonly used words. Zipf’s law tells us that the required length will not be that great, because the longest words are used less frequently and are therefore also less likely to be picked as passwords (besides, systems could actually make sure that there is no word in any major language of the stipulated length).

Compare the result of putting two random short (but not too short) words together using Mathematica:

In[1]:= passwds = Table[StringJoin[
    RandomChoice[
     Select[DictionaryLookup[], (StringLength[#] < 6) && LowerCaseQ[#] &],
     2]], {100}];  (* 100 candidate passwords, each the concatenation of two random lowercase words of fewer than 6 letters *)
In[2]:= RandomChoice[passwds, 5]
Out[2]= {"rimecrave", "stoneobeys", "mixedtromp", "whiffmutt", "letopted"}

with their leetspeak versions, which sysadmins would probably require one to use:

In[3]:= ToLeetSpeak /@ %
Out[3]= {"r1m3cr4v3", "570n30b3y5", "m1x3d7r0mp", "wh1ffmu77", "l370p73d"}

even though the joined pairs do not themselves exist in a standard English dictionary:

In[4]:=Select[passwds, MemberQ[DictionaryLookup[], #] &]
Out[4]={}
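Incidentally, ToLeetSpeak is not a built-in Mathematica function; the definition used above is not shown here, but a minimal sketch consistent with the outputs (the substitution table is my own assumption) would be:

ToLeetSpeak[s_String] := StringReplace[s,
  {"a" -> "4", "e" -> "3", "i" -> "1", "o" -> "0", "s" -> "5", "t" -> "7"}]
(* e.g. ToLeetSpeak["rimecrave"] returns "r1m3cr4v3", matching Out[3] above *)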

For fun, you can also generate two-word passwords in other languages using Mathematica:

In Spanish:

Table[StringJoin[
RandomChoice[
Select[DictionaryLookup[{"Spanish", All}], StringLength[#] < 6 &],
2]], {100}]

or French:

Table[StringJoin[
RandomChoice[
Select[DictionaryLookup[{"French", All}], StringLength[#] < 6 &],
2]], {100}]

Or any other of these languages:

In[5]:= DictionaryLookup[All]
Out[5]= {"Arabic", "BrazilianPortuguese", "Breton", "BritishEnglish", "Catalan", "Croatian", "Danish", "Dutch", "English", "Esperanto", "Faroese", "Finnish", "French", "Galician", "German", "Hebrew", "Hindi", "Hungarian", "IrishGaelic", "Italian", "Latin", "Polish", "Portuguese", "Russian", "ScottishGaelic", "Spanish", "Swedish"}

“The World is Either Algorithmic or Mostly Random” awarded a 3rd Place Prize in this year’s FQXi contest

Posted in Algorithmic information theory, Complexity, Foundations of Computation, Foundations of Physics, General, New Ideas on June 10th, 2011 by Hector Zenil – 2 Comments

Based on the combined ratings of the contest community and the panel of expert reviewers appointed by the FQXi, which included members of the institute, I was awarded a 3rd Place Prize for my work The World is Either Algorithmic or Mostly Random in this year's FQXi contest on the topic Is Reality Digital or Analog?, sponsored by the Foundational Questions Institute. The winners were announced at this year's World Science Festival in New York City.

My work can be summarized in one line as an explanation of the complexification process of the world, the process whereby we have evolved from a more primitive (random) state to the current organized state of the universe.

The essay is a summary of the philosophical branch of my current scientific research on finite algorithmic information theory. This philosophical branch is concerned with exploring the possible connections between algorithmic complexity and the physical world (or what happens in it). I propose the notion that the universe is likely digital, not as a claim about what the universe is ultimately made of but rather about the way it unfolds. Central to the argument are the concepts of symmetry breaking and algorithmic probability, which are used as tools to compare the way patterns are distributed in our world to the way patterns are distributed in a simulated digital one. These concepts provide a framework for a discussion of the informational nature of reality. I argue that if the universe were analog, then the world would likely look more random, making it largely incomprehensible. The digital model has, however, an inherent beauty in its imposition of an upper limit and in the convergence of computational power to a maximal level of sophistication. That the world is digital, even if deterministic, doesn't necessarily mean that it is trivial or predictable, but rather that it is built up from operations that are simple at the lowest scale but that at higher scales look complex and even random, though in appearance only.

How have we come from the early state of the universe (left) to the structures we find today (right)?

The arguments supporting my views are partially based on the findings of my research, epitomized by our most recent paper Numerical Evaluation of Algorithmic Complexity for Short Strings: A Glance into the Innermost Structure of Randomness, available on arXiv, in which my co-author and I describe a method that combines several theoretical and experimental results to numerically approximate the algorithmic (Kolmogorov-Chaitin) complexity of bitstrings by using the concept of algorithmic probability, which is connected to algorithmic complexity by way of the (Levin-Chaitin) coding theorem.

An extended (and detailed) version of The World is Either Algorithmic or Mostly Random is forthcoming and will eventually be posted.

New book: “Lo que cabe en el espacio” on Philosophy of Space and Geometry

Posted in General on June 10th, 2011 by Hector Zenil – Be the first to comment

I am excited to announce the publication of my new book, written in Spanish, Lo que cabe en el espacio, on the philosophy of space in connection with our reality, and what we can or cannot do with it and in it. The book, under the title "Lo que cabe en el espacio: La geometría como pretexto para explorar nuestra realidad física y matemática", approaches the basic notions of geometry in the context of their original formulations, that is, the formulations of the mathematicians in their particular times and places. By so doing it attempts to understand the motivations behind these original formulations and to build a modern view from the perspective of our time. Properties of space, such as dimension and curvature, are discussed as treated by mathematicians from Euclid to Legendre, from Descartes to Hilbert.

The book is available through CopIt Arxives UNAM, a peer-reviewed publishing organization run by professors of the National University of Mexico (UNAM). Readers can download the PDF at no charge from CopIt Arxives, and it will soon be available in physical form at Amazon.com for those wishing to own a paper copy. There is also a Kindle version of a previous incarnation of the book available from Amazon here, which, despite issues around accent encodings, has been in the top 100 Kindle books in Spanish for several weeks, if not months. I should be updating the Kindle version to the newest edition fairly soon.

How much fits in our space? Why can there be only five regular polyhedra and not six? Which figures fill the plane and space? Is space a trivial generalization of the plane? Could we tell whether the plane we live on is curved, or whether we merely sit on top of it? In almost colloquial language, we will detach figures from space in order to rotate them in four dimensions, count polyhedra in higher dimensions, and ask about the fundamental properties of space. This book reveals how one part of mathematics describes the most fundamental properties of the space in which we live, and therefore of the reality in which we are immersed.

The Lighthill Parliamentary Debate on General Purpose Artificial Intelligence

Posted in General on February 24th, 2011 by Hector Zenil – Be the first to comment

In 1973, the Lucasian professor at Cambridge, James Lighthill, was asked by the British Parliament to evaluate the state of AI research in the United Kingdom. His report, now known as the Lighthill report, criticized the failure of AI to achieve its grandiose objectives. He specifically mentioned the problem of "combinatorial explosion" or "intractability" of the discourse universe, which implied that many of AI's most successful algorithms would grind to a halt when faced with real-world problems and were only suitable for solving toy versions of them.

The report was contested in a debate broadcast as part of the BBC series "Controversy" in 1973. The debate, on the proposition "The general purpose robot is a mirage," was held at the Royal Institution and pitted Lighthill against a team of researchers including Donald Michie, Richard Gregory and John McCarthy. The report led to the near-complete dismantling of AI research in England.

Here is the debate on YouTube (in 6 parts):

This debate goes to show that as recently as 1973 it was generally believed (as indeed it still is in some quarters) that one had to write a complex program to obtain complex behavior, or that no simple rule could produce complex behavior (part 2, min. 7:45). This position was epitomized by Lighthill.

Lighthill's position does not come as a surprise. He was, after all, a researcher in fluid dynamics and aeroacoustics, a field where it is easy to be misled by complicated differential equations in 'continuous' variables and where solutions so often fail to exist. His main argument was that because one had to specify rules in a computer telling the robot how to behave in every possible scenario, every attempt to come up with a general purpose robot would quickly turn into an intractable problem, with a combinatorial explosion of possible solutions. I don't entirely disagree with Lighthill on this, but I can by no means endorse his conclusion.

Unfortunately the response of the researchers on the panel was rather weak, except for a couple of minor arguments put forward by McCarthy against what seemed to be the fundamental impediments Lighthill appeared to be invoking. From my point of view, one of the items that ought to have been put on the table was a serious discussion of whether universal computation actually has something to tell us about the possibility of general purpose artificial intelligence (i.e. what the abstract can say about the concrete). Another is the question of the essential differences, if any, between advanced automation and artificial intelligence, which McCarthy seems to have broached in one of his arguments against Lighthill, without reaching any conclusion. Indeed the point may be that there is no essential difference, something that was perhaps difficult to see back then. But instead of counting this as a caveat against AI, as Lighthill seemed inclined to do, it actually makes AI inevitable, even if achieved by means other than those traditionally employed in AI labs.

If automation is not really that different from AI, as I think has been proven over the years, what this suggests is that automation will eventually lead to what we think of as intelligent behavior and therefore to artificial intelligence.

In the end, Lighthill’s position was reduced by the moderator (Sir George Porter) to an argument about pragmatic difficulties rather than about the fundamental impossibility of the project, in response to which Lighthill quipped that his neuroscientist friends had made it plain that they felt it was hopeless to try and model the brain using a computer. I find several problems with this position, beginning with the assumption that AI is exclusively about trying to mimic the brain.

Donald Michie made a good point concerning the combinatorial explosion made so much of by Lighthill by citing the example of chess. In 1973 humans were still better at playing chess, so he asked whether Lighthill would acknowledge artificial intelligence if a machine performing an exhaustive search turned out to win a chess match against a Master.

In 1968, David Levy, an International Master, bet a thousand pounds that no computer program would beat him within the next ten years. He won his bet in 1978 by beating Chess 4.7 (the strongest chess program at the time). Lighthill responded that a chess program would avoid the combinatorial explosion by virtue of having the benefit of human knowledge. In any case, he said, he was reluctant to believe that a chess program would ever beat a human chess player. Lighthill was cleverly hedging his bets.

Lighthill's position wasn't totally clear at the end of the debate: was his argument purely pragmatic, or did it concern something fundamental? I'm not wholly unsympathetic to his position, because AI has been misleading as a field. While it has certainly made some contributions to science and technology, it has achieved what it has by dint of technological improvements (hardware capabilities) rather than scientific breakthroughs (e.g. sets of sophisticated new algorithms) on the part of the AI community. I have explained this in some detail in my previous blog post on the IBM computer Watson and the Jeopardy! game, Is Faster Smarter?

Is Faster Smarter? IBM’s Watson Search Engine Approach to Beat Humans

Posted in General, Minds and Machines on February 18th, 2011 by Hector Zenil – 4 Comments

IBM's computer named "Watson" has beaten (human) Jeopardy! contestants in a series of games this month. IBM has a long history of innovations (watch its 100th anniversary documentary, featuring Greg Chaitin and Benoit Mandelbrot among others, here).

Not everybody was impressed by Watson, though. According to Gavin C. Schmitt, who interviewed Noam Chomsky, the renowned linguist:

  • Noam Chomsky: I’m not impressed by a bigger steamroller.
  • Interviewer: I assume that “a bigger steamroller” is a reference to Deep Blue. Watson understands spoken language and adapts its knowledge based on human interaction. What level of AI would be required to impress you?
  • Noam Chomsky: Watson understands nothing. It’s a bigger steamroller. Actually, I work in AI, and a lot of what is done impresses me, but not these devices to sell computers.

In some sense I understand Chomsky's view, and he does well to point out what may seem the clearest difference between Watson and a human being, but the point may be much more subtle and deeper. Wolfram's Principle of Computational Equivalence (PCE) (see this previous post) may shed some light on the subject. According to this principle, memory is what makes a system intelligent, because basically any system that does not behave in an obviously trivial fashion is likely to be as sophisticated as the most sophisticated system. So what is the difference between a system that is potentially intelligent and one that shows signs of actually being so? It is, in several respects, a matter of scale. Take the example of weather. When Wolfram is asked whether clouds are intelligent according to his principle, his short answer is yes. Every time humans want to predict whether it will rain, it turns out to be incredibly difficult to do so for more than a couple of days, and often the forecast is wrong even for the next day. How is it that weather prediction is so hard despite our long experience at forecasting? Well, clouds are computing themselves, and as part of the weather they are quite complex, as complex as systems like the human brain, says PCE.

(Picture of Gregory Chaitin taken by Hector Zenil outside IBM's Thomas J. Watson Research Center at Yorktown, N.Y., where Watson was designed)

After all these years, IBM hasn't come up with a theoretical breakthrough to meet the challenge but rather with a supercomputer fast enough to beat the other participants. Watson uses fairly sophisticated algorithms, but these aren't much more sophisticated than those used by search engines, which proceed by matching pieces of text and retrieving other pieces of text statistically related to the original. The IBM team has come up with a supercomputer to challenge the Jeopardy! participants, not with a new algorithm. The main goal of machine learning labs is usually to perform about 1% to 3% better than the best benchmark of the top lab in a particular area (e.g. word tagging, word disambiguation, information retrieval, etc.). If Watson has achieved anything like a breakthrough, it may be credited with having advanced the current state of AI research: it takes Google's kind of technology a couple of steps further by drawing together several technologies on steroids. The point is that one may not need to come up with the smartest algorithm, and may not even be able to; it simply doesn't make sense to engineer a super-complicated algorithm to reach intelligence, when it is more appropriate to start from a simple but potentially powerful system and then extract the best of it by running it on a super-large corpus of data on a super-fast computer. The Watson-in-Jeopardy! experience tells us that even clouds may look intelligent when running on a supercomputer.

Watson confirms what I'd suspected: that we're not that special after all (in this sense). Watson meets the challenge of beating a human on its own turf, at what it does best, basically through the use of brute force. Achieving artificial intelligence (AI) is not, as I suspected (along with other thinkers), a matter of scientific breakthrough but rather a matter of scale and technological achievement. Over time we'll have faster and faster computers, and that means computers with intelligence resembling ours (of course there are other subtle issues here, such as the fact that the system should be allowed to interact with the intelligent forms it is meant to interact with, otherwise its intelligence may prove alien to ours, and look as strange as that of clouds).

(Jean Michel Jarre with his electronic harp. Picture by Hector Zenil, Paris concert 2010)

Back in 2008, Wired published (in the days when they published interesting articles more often) a piece which reached the conclusion that one could exchange data for models: The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.

Some interesting inferences can be drawn from this milestone in which IBM supercomputers have beaten humans at human tasks (remember, this is the second time, at least with such publicity; the first saw IBM's Deep Blue beat Garry Kasparov, the reigning World Chess Champion at the time, in a series of chess matches). Among these we may single out the fact that we humans seem ready to call machines smart and intelligent even though they may not necessarily think like us (humans can only see 'ahead' a handful of selected chess moves, while chess programs perform an exhaustive search bounded only by time), provided they seem as clever as we are. This is already a vindication of Turing's proposal for a test of intelligence.

Yet Chomsky's opinion seems to point in the opposite direction: that we may still think we are much more special than Watson, and he might in some sense be right. We definitely play very differently from machines, but is chess that different from natural language? It may be, but this time the question may be whether what Watson does when playing is that different from what we do.

At the end of the Deep Blue vs. Garry Kasparov series, Kasparov pointed out that he felt the machine actually had a strategy, something that made him think it was playing like a human, somehow perhaps even teasing him. Ken Jennings (one of the two human participants against Watson) wrote a day after the match: "This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing."

Some people think that Watson has absolutely no ability to understand what it did, nor any awareness of it, and that this still makes the difference. This might be only partially true. Watson may not understand or yet be aware of its achievement, but I wonder whether the same processes that made Watson win this match are not also at work in even more sophisticated human thinking, such as self-awareness. The question can also be reversed: one may ask whether we are really aware of what happened, and how much of our feeling of being aware is the result of the type of thinking Watson just exhibited.

As many of my readers certainly know, Alan Turing came up with a basic test suggesting that if something looks intelligent then it is intelligent. So we have reached the point at which not only has a machine passed the Turing test, but humanity may also be ready to accept the idea behind the test, namely that it doesn't really matter how something thinks: if it looks clever enough to fool us (or even beat us at something), then it is intelligent. For the machine, the milestone is located at a point in time that reflects the current state of technology, which basically amounts to the miniaturization and mastery of computing devices and the management of very large quantities of data, not forgetting the current state of the fields involved (basically machine learning and natural language processing, or NLP). But it is by no means an isolated achievement. I see IBM as the standard bearer for a combination of several current technologies run on the best hardware available today, a pioneer in this sense.

(Wolfram|Alpha computational engine control room. Launch day picture by Hector Zenil)

It seems we are now able to gauge the size and power of the human brain against something that looks as sophisticated as we are, at least at playing a sophisticated game. Watson and humans may reach the same conclusions, whether or not they do so in different ways, but the fact that Watson requires a computer the size of 10 full-size fridges, 15 terabytes of memory (likely full of data and programs) and 2,880 microprocessors working in parallel tells us more about us than about Watson itself. We knew we carried a supercomputer in each of our heads, but we didn't know what its specs might be. We also thought that we were specially gifted with unprecedented intelligence, but now a machine that is certainly not aware of it and hasn't taken the same path is also able to exhibit key features of intelligence.

Ken Jennings adds after the last match: "…But unlike us, Watson cannot be intimidated. It never gets cocky or discouraged. It plays its game coldly, implacably, always offering a perfectly timed buzz when it's confident about an answer." "…I was already thinking about a possible consolation prize: a second-place finish ahead of the show's other human contestant and my quiz-show archrival, undefeated Jeopardy! phenom Brad Rutter." Read more here and here.

Watson's specs will fit in a small box in the future, given the trend of the past several decades following Moore's law, and in the future, as is the case today, faster will be smarter.

Aired on PBS, NOVA called their documentary The Smartest Machine On Earth:


Alan M. Turing documentary teaser

Posted in General on January 28th, 2011 by Hector Zenil – Be the first to comment