I split my research time between Stockholm and Oxford. I am a research associate in algorithmic information theory at the Unit of Computational Medicine of the Karolinska Institute in Sweden. My research focuses on applying information theory and complexity science to genomics, synthetic biology and network biology. With backgrounds in mathematics, computer science and philosophy, I think of myself as a kind of experimental philosopher or a computational natural scientist. (Greg Chaitin once referred to me as a "new kind of practical theoretician".) I have been an R&D fellow and later a senior research associate for Wolfram Research in the U.S. since 2006, in Stephen Wolfram's science group in Boston. I worked on the Wolfram|Alpha project from its inception to its public release, on aspects of data mining, computational linguistics and fast prototyping. I am also a member of the Foundational Questions Institute (FQXi), an honorary founding associate of ASSRU (Economics Dept., Trento, Italy), a member of the Turing Centenary Advisory Committee in the UK and a member of the national researchers body (SNI) in Mexico.
Tradeoffs of complexity measures: how entropy, algorithmic randomness, logical depth, algorithmic probability, computational complexity and fractal dimension are connected (see e.g. papers 3, 4, 6, 19 and proceedings 6 and 11). Contributing the only method other than lossless compression for approximating the Kolmogorov complexity of strings (see papers 5, 23), with applications to many areas (see papers 2, 7, 8, 20).
Controllability and programmability of complex systems: devising strategies to understand, test and reprogram artificial and natural systems as models of computation (see e.g. papers 9, 12, 18), also devising two measures for testing the algorithmicity and programmability of a system (see also proceedings 10 and collections 5).
Spatial complexity, molecular and natural computing: contributing methods to evaluate two-dimensional complexity (previously identified by Feldman as an open problem; see papers 16, 18, 22).
Information theory and algebraic graph theory: contributed a solution to the challenge of network complexity (previously identified by Gell-Mann and Feldman as an open problem; see paper 22).
Emergence of organization and structure: providing numerical evidence in favour of Levin's universal distribution and Solomonoff's program-size bias towards simplicity (see papers 12, 15, 18, 23). I have also proposed what I call the Busy Beaver universality conjecture, namely that all Busy Beaver Turing machines are likely Turing universal (see paper 10).
Computational, synthetic and network biology: novel applications of information theory to genetic regulatory networks and other biological interaction networks (see papers 18, 22 and proceedings 10 and 13).
Cellular automata theory: contributing a novel formal behavioural classification based on asymptotic behaviour, answering one of Wolfram's open questions on the fraction of behaviour classes in CA rule spaces (see paper 17), a classification by compression (paper 1) and a new proof of universality in Elementary Cellular Automata (ECA) (soon available).
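The compression-based estimation of Kolmogorov complexity mentioned above can be illustrated in miniature. The sketch below uses zlib as a generic lossless compressor (an illustration only, not the Coding Theorem Method or the actual tools from the papers): the compressed size of a string is a crude, computable upper bound on its Kolmogorov complexity, so a regular string should come out far smaller than a pseudo-random one of the same length.

```python
import random
import zlib

def compressed_size(s: bytes) -> int:
    """Bytes needed by zlib at maximum effort: a crude, computable
    upper bound on the Kolmogorov complexity of s."""
    return len(zlib.compress(s, 9))

# A highly regular string compresses far below its length...
ordered = b"01" * 512
# ...while a pseudo-random string of the same length barely compresses.
random.seed(0)
random_ish = bytes(random.getrandbits(8) for _ in range(1024))

print(compressed_size(ordered) < compressed_size(random_ish))  # True
```

Compression works well for long strings but is uninformative for very short ones, which is the gap the method referred to in papers 5 and 23 addresses.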
I was invited to speak at the Information and Life workshop "Information, Causality and the Origin of Life", organized by Paul Davies' research group and his Beyond Center at Arizona State University, Tempe, with a talk titled Algorithmic Information Theory and Life.
I will serve on the jury for Cristina Santos, a PhD student of Luís Filipe Antunes, for her dissertation "Assessing Complexity of Physiological Interactions", Medical School, University of Porto, Portugal (date TBD).
Our paper (see paper 20) with colleagues J.-P. Delahaye, N. Gauvrit and F. Soler-Toscano was featured in a short note in the science section of the French newspaper Le Monde under the title "Le cerveau : probabiliste par nature ?" ("The brain: probabilistic by nature?"). I find the title misleading: what our paper actually suggests, based on actual experiments on the nature of subjective randomness, is that the human brain develops an algorithmic nature, not a classical probabilistic one.
21. Complexity Measurement Based on Information Theory and Kolmogorov Complexity, L. Ting Lui, G. Terrazas, H. Zenil, C. Alexander and N. Krasnogor, Artificial Life (in press).
20. Algorithmic complexity for short binary strings applied to psychology: a primer, N. Gauvrit, H. Zenil, F. Soler-Toscano and J.-P. Delahaye, Behavior Research Methods, vol. 46-3, pp. 732-744, 2013. [preprint, online] DOI: 10.3758/s13428-013-0416-0
19. Correspondence and Independence of Numerical Evaluations of Algorithmic Information Measures, F. Soler-Toscano, H. Zenil#, J.-P. Delahaye and N. Gauvrit, Computability, vol. 2, no. 2, pp. 125-140, 2013. DOI: 10.3233/COM-13019 [online, preprint]
18. Exploring Programmable Self-Assembly in Non-DNA-based Molecular Computing, G. Terrazas, H. Zenil and N. Krasnogor, Natural Computing, vol. 12(4), pp. 499-515, 2013. DOI: 10.1007/s11047-013-9397-2 [online, preprint]
16. Two-Dimensional Kolmogorov Complexity and Validation of the Coding Theorem Method by Compressibility, H. Zenil, F. Soler-Toscano, J.-P. Delahaye and N. Gauvrit, in revision. Preprint on arXiv:1212.6745 [cs.CC]. [preprint]
14. Life as Thermodynamic Evidence of Algorithmic Structure in Natural Environments, H. Zenil, C. Gershenson, J.A.R. Marshall and D. Rosenblueth, Entropy, 14(11), pp. 2173-2191, 2012. [online]
13. Computation and Universality: Class IV versus Class III Cellular Automata, G.J. Martinez, J.C. Seck-Touh-Mora and H. Zenil, Journal of Cellular Automata, vol. 7, no. 5-6, pp. 393-430.
9. What is Nature-like Computation? A Behavioural Approach and a Notion of Programmability, H. Zenil, Philosophy & Technology (special issue on the History and Philosophy of Computing), 2013. [online, preprint]
3. Image Information Content Characterization and Classification by Physical Complexity, H. Zenil, J.-P. Delahaye and C. Gaucherel, Complexity, vol. 17-3, pp. 26-42, 2012.
H. Zenil, G. Ball and J. Tegnér, Testing Biological Models for Non-linear Sensitivity with a Programmability Test. In P. Liò, O. Miglino, G. Nicosia, S. Nolfi and M. Pavone (eds.), Advances in Artificial Life, ECAL 2013, pp. 1222-1223, MIT Press, 2013. DOI: http://dx.doi.org/10.7551/978-0-262-31719-2-ch188
S. Hernández-Orozco, F. Hernández-Quiroz and H. Zenil, Sparsity of Non-trivial Proving Speed-up in Random Systems of Propositional Calculus. In Paola Bonizzoni, Vasco Brattka and Benedikt Löwe (eds.), The Nature of Computation: Logic, Algorithms, Applications; 9th Conference on Computability in Europe, CiE 2013, 2013.
H. Zenil, A Turing Test-Inspired Approach to Natural Computation. In G. Primiero and L. De Mol (eds.), Turing in Context II: Historical and Contemporary Research in Logic, Computing Machinery and Artificial Intelligence. Proceedings by the Royal Flemish Academy of Belgium for Science and the Arts, Belgium, 2013.
4. H. Zenil, What Randomness for What Biological Process?, XXIII SITGES, Understanding and Managing Randomness in Physics, Chemistry and Biology, University of Barcelona, p. 98, Spain, 2012.
3. J.J. Joosten, H. Zenil and F. Soler-Toscano, Fractal Dimension as an Indication of the Terminating Runtime of Discrete Dynamical Systems (abstract, p. 214). In S. Thurner and M. Szell (eds.), European Conference on Complex Systems (ECCS'11), Löcker Verlag.
2. J.J. Joosten, F. Soler-Toscano and H. Zenil#, Program-size versus Time Complexity: Slowdown and Speed-up Phenomena in the Micro-cosmos of Small Turing Machines. In H. Guerra (ed.), Physics and Computation 2010, CAMIT, 3rd International Workshop on Physics and Computation, 2010. [PDF]
1. J.J. Joosten, F. Soler-Toscano and H. Zenil#, Complejidad descriptiva y computacional en máquinas de Turing pequeñas (Descriptive and Computational Complexity in Small Turing Machines). Actas de las V Jornadas Ibéricas, Lógica Universal e Unidade da Ciência, CFCU 2010.
H. Zenil, On the Complex Behaviour of Natural and Artificial Machines and Systems. In F.P. Bonsignorio, A.P. del Pobil, E. Messina and J. Hallam (eds.), Metrics of Sensory Motor Integration in Robots and Animals: How to Measure the Success of Bio-inspired Solutions with Respect to their Natural Models, and Against More 'Artificial' Solutions?, Springer Cosmos Series, Cognitive Systems Monographs, Springer (to appear). [preprint]
H. Zenil and J. Tegnér, Methods of Information Theory and Algorithmic Complexity for Network Biology. Submitted to Frontiers in Bioengineering and Biotechnology, special issue on Computational Methods for Understanding Complexity: The Use of Formal Methods in Biology. [preprint]
10. The Π Research Network (H. Zenil i.a.), Algorithmic Information Theory. In The Philosophy of Information: An Introduction, Society for the Philosophy of Information, 2013. [online]
H. Zenil, Turing, Randomness and Economics. In K. Velupillai and S.-H. Chen (eds.), Mathematics in Economics: Foundations, Philosophy and Epistemology, World Scientific (forthcoming). [preprint]
G.J. Martinez, J.C. Seck-Touh-Mora and H. Zenil, Wolfram's Classification and Computation in Cellular Automata Classes III and IV. In H. Zenil (ed.), Irreducibility and Computational Equivalence: Wolfram Science 10 Years After the Publication of A New Kind of Science, Springer, 2013. [preprint]
7. H. Zenil and J.-P. Delahaye, Un método estable para la evaluación de la complejidad algorítmica de cadenas cortas (A Stable Method for the Evaluation of the Algorithmic Complexity of Short Strings). In G.J. Martinez, H. Zenil and C.R. Stevens (eds.), Complex Systems as Computing Models, Luniver Press, 2011. [preprint]
6. H. Zenil and J.-P. Delahaye, An Algorithmic Information-theoretic Approach to the Behaviour of Financial Markets. In S. Zambelli and D.A.R. George (eds.), Nonlinearity, Complexity and Randomness in Economics: Towards Algorithmic Foundations for Economics, Wiley-Blackwell.
2. J.J. Joosten, F. Soler-Toscano and H. Zenil#, Complejidad descriptiva y computacional en máquinas de Turing pequeñas (Descriptive and Computational Complexity in Small Turing Machines). In Lógica Universal e Unidade da Ciência, Centro de Filosofia das Ciências da Universidade de Lisboa, pp. 11-32, 2011. [preprint]
I. Zelinka, A. Sanayei, H. Zenil and O.E. Rössler (eds.), How Nature Works: Complexity in Interdisciplinary Research and Applications, Series: Emergence, Complexity and Computation, Vol. 5, Springer.
H. Zenil, F. Soler-Toscano, J.-P. Delahaye and N. Gauvrit, Methods and Applications of Kolmogorov Complexity, Springer, forthcoming 2014.
L'approche algorithmique de l'aléatoire (The Algorithmic Approach to Randomness), thesis fulfilling the dissertation requirement for the degree of Doctor of Philosophy, under the direction of J. Mosconi, IHPST (Paris 1/ENS Ulm/CNRS).
Calcul et hypercalcul (Computation and Hypercomputation), mémoire fulfilling the dissertation requirement for the degree of Master (Logic), under the direction of J. Dubucs at the University of Paris 1 Panthéon-Sorbonne.
Encaje de las Redes Neuronales Recurrentes Analógicas en la Jerarquía Aritmética (Embedding of Analog Recurrent Neural Networks in the Arithmetical Hierarchy), fulfilling the requirement for the B.Sc. (Math) degree, under the supervision of F. Hernández-Quiroz, Facultad de Ciencias, UNAM, 2005.
Selected Essays and Reviews
H. Zenil, The Seemingly Contradictory Philosophical Legacy of Turing and Shannon, AISB Quarterly, no. 138, pp. 9-15, 2014. [PDF]
H. Zenil and J.A.R. Marshall, Some Aspects of Computation Essential to Evolution and Life, Ubiquity (ACM), vol. 2013, no. April (2013), pp. 1-16. [online, preprint]
H. Zenil, Complejidad y Aleatoriedad (Complexity and Randomness), Ciencia (Turing special issue), Academia Mexicana de Ciencias, vol. 64-4, 2013. [online]
A. German and H. Zenil, Afterword to Konrad Zuse's Calculating Space (Rechnender Raum), the MIT translation. In A Computable Universe, World Scientific, 2012. [PDF]
H. Zenil, Introducing the Computable Universe, introduction to A Computable Universe (foreword by Roger Penrose), World Scientific, 2012.
H. Zenil, An Algorithmic Approach to Information and Meaning, APA Newsletter on Philosophy and Computers, vol. 11, no. 1, 2011. [online]
H. Zenil, Information Theory and Computational Thermodynamics: Lessons for Biology from Physics, Information, 3, no. 4, pp. 739-750, 2012. [online] DOI: 10.3390/info3040739
H. Zenil, The Complexity of Simple Programs: Workshop Report from Cork, Ireland, Bulletin of the European Association for Theoretical Computer Science (BEATCS), 2009.
H. Zenil, Très courte enquête sur l'extension non-triviale de la logique de propositions à la logique du premier et deuxième ordre (A Very Short Survey on the Non-trivial Extension of Propositional Logic to First- and Second-order Logic).
Reviewer for CONICYT (Comisión Nacional de Investigación Científica y Tecnológica), Government of Chile, 2009.
Peer reviewer for: Theoretical Computer Science; Fundamenta Informaticae; Complex Systems; Nonlinearity; Information Processing Letters; Discrete Applied Mathematics; International Journal of Unconventional Computing; Natural Computing; and New Journal of Physics.
Book and paper reviewer for ACM Computing Reviews (more than 25 reviews published, some of them featured as selected review and selected reviewer).
Lecturing, Recent & Forthcoming Talks
Invited speaker: UNESCO UniTwin CS-DC e-Kickoff, June 23-26, 2014. Talk title: The Online Algorithmic Complexity Calculator (OACC): From Sequence Complexity to Graph Complexity.
and Computation in Nature, CDT403 Research Methods in Natural Sciences and Engineering, Thursday 2013-10-03, 15:00-17:00, Mälardalen University, School of Innovation, Design and Engineering, Sweden.
Talk: How Can We Test for Artificial Life? Testing for Non-linear Sensitivity and Programmability, Synthetic Modeling of Life and Cognition (SMLC 2013), September 2013.
Seminar talk: Algorithmic Information Theory and Synthetic Biology: Towards Characterising and Reprogramming Nature, February 21, Unit of Computational Medicine, Karolinska Institute, Stockholm, Sweden.
Talk: A Turing Test-Inspired Approach to Natural Computation, Turing in Context II: Historical and Contemporary Research in Logic, Computing Machinery and Artificial Intelligence, Royal Flemish Academy for the Sciences and Arts, Paleis der Academieën, October 12, Brussels, Belgium.
Invited talk: Turing, Randomness and Economics (Turing, aleatoriedad y economía), Alan Turing Workshop "From Computers to Life", Turing Centenary event, CEIICH, Universidad Nacional Autónoma de México (UNAM), June-July 2012, Mexico City, Mexico.
Talk: The Algorithmic Approach to Information and Meaning, Interdisciplinary Workshop: Ontological, Epistemological and Methodological Aspects of Computer Science, Philosophy of Simulation (SimTech Cluster of Excellence), Faculty of Informatics and Institute of Philosophy, July 7, University of Stuttgart, Germany.
on Randomness, Turing machines and conducting searches, since 2007; also student supervision. Wolfram Science School, University of Vermont, Bentley, and ISTI, CRS Italy (in 2009).
Awards & Grants
Foundational Questions Institute (FQXi) mini-grant awarded by way of the Silicon Valley Community Foundation for travel support in the context of my project "The Nature and the Physics of Information" (FQXi-MGA-1316).
Member of the National Researchers Body (Sistema Nacional de Investigadores, SNI) of the National Council of Science and Technology of Mexico (CONACyT).
Foundational Questions Institute (FQXi) mini-grant awarded by way of the Silicon Valley Community Foundation for the project "Computation and Biology", (No. FQXi-MGA-1212) 2012-2013.
Foundational Questions Institute (FQXi) mini-grant awarded by way of the Silicon Valley Community Foundation for the project "Computation and Time" (no. 2011-93849 (4661)), 2011-2012.
Winner in the Computational Imagery category of the "2011 Kroto Institute Scientific Image Competition". Image: "Visualizing the Computational Universe: Runtime Space in a Peano Curve".
Early nominee to the MIT Technology Review (Spanish version) annual list of 35 Innovators Under 35, 2012.
Bourse aux jeunes chercheurs Comité National Français d'Histoire et de Philosophie des Sciences, 2011.
Ministère de l'Enseignement Supérieur et de la Recherche (bourse de mobilité aires culturelles), 2007, University of Paris.
Association for Symbolic Logic (ASL) Travel Grant Award (CiE 2006, Swansea, UK).
NSF and the New England Complex Systems Institute (NECSI) (ICCS 2006, Boston, USA).
Université de Lille III, Kurt Gödel: The Writings, Maison de la Recherche 2006, Lille, France.
Isaac Newton Institute for Mathematical Sciences, Cambridge University, UK (visiting fellow, Workshop LAAW04, Wolfson Court, Girton College, UK), 2006.
Master's and PhD fellowships from CONACYT and the French Ministry of Education, 2005-2011.
Volunteering & Internships
Massachusetts Institute of Technology (MIT)
Summer 2007 Mars Gravity Biosatellite team
Supporting member of the software engineering team for the program, in charge of assembling, reviewing and enhancing software requirements and flowdown. Project co-sponsored by NASA.
The International Wikimania Conference, for the Wikimedia Foundation, Inc. (the foundation behind Wikipedia), Wikimedia Organization Team, July-August, Cambridge, Massachusetts, U.S.A.
Massachusetts Institute of Technology (MIT) Summer 2006
Artificial Gravity Project (test subject)
Man-Vehicle Laboratory, August, Cambridge, Massachusetts, U.S.A.
Mexican Consulate in Seattle, WA.
December 2003-January 2004
Documentation and Consular Activities
Seattle, Washington, U.S.A.
An uncomputable picture of uncomputable art: This picture, titled "Visualizing the Computational Universe: Runtime Space in a Peano Curve", won me 1st prize in the Computational Imagery category of the "2011 Kroto Institute Scientific Image Competition" held in the UK. It represents a small slice of the computational universe of all possible computer programs. Each pixel represents a computer program with empty input running on an abstract computer (a Turing machine). The color indicates the runtime of the program upon completion of its computation: the darker a pixel, the longer the program took to produce its output. White pixels are computer programs that do not produce any output because they never halt (just as happens when your computer hangs forever, sometimes forcing you to reboot). Red pixels are the computer programs that take the longest time, relative to other programs of the same size, before finishing (they are also called Busy Beavers). I call this color coding the computability spectrum. Programs were sorted by size from smallest to largest (in terms of number of instructions) in such a way that one could be certain that any program would eventually be reached (including versions of MS Windows and Angry Birds, which can be encoded as one-run computer programs). Because there is no upper limit to the size of programs, the space of all computer programs is infinite but discrete, perhaps mirroring our own world. Because an enumeration of computer programs is linear (you start with program 1, then proceed to program 2, and so on), the pixels in this 2-dimensional figure are disposed along a "space-filling" Peano curve, a curve that guarantees that two contiguous elements in the enumeration remain as close as possible to each other in the 2D configuration. The picture has the striking particularity that it is ultimately uncomputable.
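The idea of laying a linear enumeration onto a 2D grid so that consecutive programs stay adjacent can be sketched with a space-filling curve. The snippet below uses the classic Hilbert-curve index-to-coordinate conversion as a stand-in for the Peano curve used in the actual picture (an illustration of the principle, not the code behind the image): consecutive indices always map to neighboring cells.

```python
def d2xy(n: int, d: int) -> tuple[int, int]:
    """Map a 1D index d (0 <= d < n*n) to (x, y) on an n-by-n grid
    along a Hilbert curve; n must be a power of two. Consecutive
    indices land on grid cells that share an edge."""
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx          # shift into the correct quadrant
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Consecutive enumeration indices stay adjacent on the grid:
print(d2xy(8, 0), d2xy(8, 1), d2xy(8, 2))
```

Coloring cell d2xy(n, d) by the runtime of program number d would reproduce the layout principle of the picture in miniature.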
Imagine you point a telescope and take a picture of an object in the computational universe for which some pixel values cannot be determined, not only in practice but in principle; that is, no matter how good your camera, or how many terapixels some future version of it may have. This slice of the computational universe was only possible because it is a tiny region of short computer programs for which all pixels can be determined by zooming in deep enough. However, Gödel and Turing proved that if you zoomed out to a larger view of the computational universe, the picture would begin to look completely blurry, because pixel colors would not be determinable. In other words, this picture is the inverse equivalent of Hubble's Deep Field image of space: the larger the field, the blurrier the picture. While we have access only to the visible universe determined by the speed of light, we can see only a small part of the computational universe, due to limits that cannot be overcome. Most of the computational space will remain unknowable, not because of a limitation of resources but because of a fundamental limitation of the digital world and possibly of our own reality. Only marginal improvements to this picture are possible. While the picture is ultimately uncomputable and unattainable, one can make an informed guess as to how it would look. The theory that would tell you the average color of the picture at a greater scale, even if uncomputable, is the theory of algorithmic probability, closely related to what is known as the Chaitin Omega number. It turns out that the picture will be mostly white: as in an ever-expanding universe, finding computer programs that halt and produce something becomes increasingly harder. Indeed, most programs will never halt, and those that do will do so relatively quickly in proportion to their size, as Calude has theoretically found and I have numerically confirmed.
Algorithmic probability also tells us that most programs will produce the same output, and only the rare ones producing different outputs will produce random-looking digital sequences. In other words, among the computer programs that do produce an output, most will be highly ordered, as if they were creating a universe of sparse but highly ordered patterns out of nothing (remember, these programs run with no input). Some of the programs that produce the same output may do so faster than others, but precisely how much faster a program may be is an open foundational question and one of the greatest challenges in mathematics and computer science. If it were possible to find a program considerably faster than any other, then everything could be computed faster and faster, regardless of the apparent difficulty of a program.
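The enumeration behind the picture can be reproduced in miniature. The sketch below (an illustration of the principle, not the actual code behind the image) enumerates all 2-state, 2-symbol Turing machines in Radó's Busy Beaver formalism, runs each on a blank tape under a step bound, and records halting times, i.e. the "colors" of the computability spectrum. The longest halting time found is Radó's S(2) = 6, the 2-state Busy Beaver step count; machines exceeding the bound are treated as non-halting, which is precisely the decision that becomes undecidable for larger machines.

```python
from itertools import product

# All 2-state, 2-symbol Turing machines in Rado's Busy Beaver formalism:
# each of the four (state, symbol) pairs gets an action (write, move, next),
# where next may be the special halting state 'H'.
ACTIONS = list(product((0, 1), (-1, 1), ('A', 'B', 'H')))  # 12 possible actions
KEYS = [('A', 0), ('A', 1), ('B', 0), ('B', 1)]

def run(machine, max_steps=50):
    """Run on a blank tape; return the halting step, or None if the
    step bound is exceeded (treated here as non-halting)."""
    tape, head, state = {}, 0, 'A'
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write   # the halting transition also writes and moves
        head += move
        if state == 'H':
            return step
    return None

halt_times, nonhalters = [], 0
for actions in product(ACTIONS, repeat=4):   # 12**4 = 20736 machines
    t = run(dict(zip(KEYS, actions)))
    if t is None:
        nonhalters += 1
    else:
        halt_times.append(t)

print(max(halt_times))  # 6: Rado's S(2), the 2-state Busy Beaver step count
```

For 2-state machines the bound of 50 steps is safe because S(2) = 6 is known; for larger rule spaces no such bound exists, which is why the full picture is uncomputable.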
I hereby give permission to reproduce this figure as a pedagogical tool for explaining deep and fundamental aspects of our reality. Just mention the source and, if possible, drop me a line. You can create your own too, and improve on mine (I did calculate the one that cannot be further improved upon without including undetermined pixels); if you do so, I would love to know about it. You may also need to ask permission of Springer Verlag, as this picture was published in my paper From Computer Runtimes to the Length of Proofs: With an Algorithmic Probabilistic Application to Waiting Times in Automatic Theorem Proving. In M.J. Dinneen, B. Khoussainov and A. Nies (eds.), Computation, Physics and Beyond, International Workshop on Theoretical Computer Science, WTCS 2012, LNCS 7160, pp. 223-240, 2012.
Pictures and Videos
From left to right:
H. Zenil, Stephen Wolfram, Paul Davies, Ugo Pagallo, Greg Chaitin, Cris Calude,
Karl Svozil, Gordana Dodig-Crnkovic and John Casti. Invited Speakers, NKS Conference, University of Vermont,
USA. 2007. (Picture by Sally McCay)
With Per Martin-Löf (student of Kolmogorov), having lunch at the Maison de la Recherche, Paris, France, 2010.
With Greg Chaitin at his office at the IBM Thomas J. Watson Research Center, Yorktown, New York, 2006, USA.
Moderating a roundtable featuring (from left to right): Greg Chaitin, Ed Fredkin, George Csicsery, Tony Leggett, Cris Calude, Tom Toffoli and Stephen Wolfram, on the subject: What is computation? How does nature compute? University of Indiana Bloomington, 2008, USA.
On the National Geographic Explorer ship, invited by the FQXi. From Bergen in Norway through the Swedish archipelago to Copenhagen, Denmark, 2011.
Lecturing at Mälardalen University in Sweden, CDT403 Research Methods in Natural Sciences and Engineering.
Space Shuttle Endeavour STS-123 liftoff (video by H. Zenil, March 11, 2008). This video recording was made from the vantage point of the Banana River, 6 miles from the Shuttle platform, in a restricted area inside the Kennedy Space Center. STS-123 was the 25th flight to the International Space Station (ISS) and delivered a Japanese module and the Canadian Dextre robot.
Control room of the Arecibo telescope in Puerto Rico with M. Nolan, director of the observatory, during a visit with friend L. Doyle from SETI, January 2014.
Wolfram Science School
trip to San Gimignano, Italy (Summer 2009). From left to right, school instructors: Hector Zenil, Jamie Williams, Paul-Jean Letourneau, Tommaso Bolognesi, Jean Baetens. Among the participants: Ray Aschheim.