Archive for August, 2006

SIGGRAPH 2006

Posted in Conferences on August 15th, 2006 by Hector Zenil – Be the first to comment

As part of Hacking Days we paid a visit to SIGGRAPH 2006, where I found plenty of interesting material on the latest developments in computer graphics. There were some very impressive examples of holography and visual effects, a device that synchronizes light with rotational movement to create different effects inside a small model house, magnetic fluids, and a screen with a globe showing real-time Google queries by country, language and frequency. There was also a good deal of work on sensors and user interfaces, including 3D glasses and virtual reality devices such as gloves and headsets.


NKS on Morphogenesis

Posted in Foundations of Biology on August 7th, 2006 by Hector Zenil – Be the first to comment

Stephen Wolfram’s NKS approach to the Reaction-Diffusion Process can be found at:

http://www.wolframscience.com/nksonline/page-1012g-text

A beautiful collection of different animals’ markings can be found on the following page of the NKS book:

http://www.wolframscience.com/nksonline/page-426

Turing on Morphogenesis

Posted in Foundations of Biology on August 7th, 2006 by Hector Zenil – Be the first to comment

Shortly before his death, Turing did research in biology, specifically on the formation of patterns (morphogenesis). He proposed that under certain conditions diffusion can destabilize a chemical system and give rise to spatial patterns.
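
To see the mechanism at work numerically, here is a minimal sketch of a one-dimensional reaction-diffusion simulation; it uses the Gray-Scott equations rather than Turing's original system, and the parameter values are illustrative assumptions, not taken from his paper.

import numpy as np

# A minimal 1-D Gray-Scott reaction-diffusion simulation (explicit Euler,
# periodic boundaries). Parameter values are illustrative, not Turing's.
n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

u = np.ones(n)
v = np.zeros(n)
u[n // 2 - 5 : n // 2 + 5] = 0.50   # small localized perturbation
v[n // 2 - 5 : n // 2 + 5] = 0.25

def laplacian(x):
    # discrete Laplacian with periodic boundary conditions
    return np.roll(x, 1) + np.roll(x, -1) - 2 * x

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print(np.round(v, 3))   # final concentration profile of the second species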

His original paper on the subject can be found at the Turing Archive [turingarchive.org].

More information can be found by searching for “Gierer and Meinhardt” on pattern formation. Here is an interesting introduction to the topic written by P.T. Saunders:

Collected Works of A.M. Turing
Morphogenesis
P.T. Saunders, Editor

Introduction

Turing’s work in biology illustrated just as clearly as his other work his ability to identify a fundamental problem and to approach it in a highly original way, drawing remarkably little from what others had done. He chose to work on the problem of form at a time when the majority of biologists were primarily interested in other questions. There are very few references in these papers, and most of them are for confirmation of details rather than for ideas which he was following up. In biology, as in almost everything else he did within science — or out of it — Turing was not content to accept a framework set up by others.

Even the fact that the mathematics in these papers is different from what he used in his other work is significant. For while it is not uncommon for a newcomer to make an important contribution to a subject, this is usually because he brings to it techniques and ideas which he has been using in his previous field but which are not known in the new one. Now much of Turing’s career up to this point had been concerned with computers, from the hypothetical Turing machine to the real life Colossus, and this might have been expected to have led him to see the development of an organism from egg to adult as being programmed in the genes and to set out to study the structure of the programs. This would also have been in the spirit of the times, because the combining of Darwinian natural selection and Mendelian genetics into the synthetic theory of evolution had only been completed about ten years earlier, and it was in the very next year that Crick and Watson discovered the structure of DNA. Alternatively, Turing’s experience in computing might have suggested to him something like what are now called cellular automata, models in which the fate of a cell is determined by the states of its neighbours through some simple algorithm, in a way that is very reminiscent of the Turing machine.

For Turing, however, the fundamental problem of biology had always been to account for pattern and form, and the dramatic progress that was being made at that time in genetics did not alter his view. And because he believed that the solution was to be found in physics and chemistry it was to these subjects and the sort of mathematics that could be applied to them that he turned. In my view, he was right, but even someone who disagrees must be impressed by the way in which he went directly to what he saw as the most important problem and set out to attack it with the tools that he judged appropriate to the task, rather than those which were easiest to hand or which others were already using. What is more, he understood the full significance of the problem in a way that many biologists did not and still do not. We can see this in the joint manuscript with Wardlaw which is included in this volume, but it is clear just from the comment he made to Robin Gandy (Hodges 1983, p. 431) that his new ideas were “intended to defeat the argument from design”.

This single remark sums up one of the most crucial issues in contemporary biology. The argument from design was originally put forward as a scientific proof of the existence of God. The best known statement of it is William Paley’s (1802) famous metaphor of a watchmaker. If we see a stone on some waste ground we do not wonder about it. If, on the other hand, we were to find a watch, with all its many parts combining so beautifully to achieve its purpose of keeping accurate time, we would be bound to infer that it had been designed and constructed by an intelligent being. Similarly, so the argument runs, when we look at an organism, and above all at a human being, how can we not believe that there must be an intelligent Creator?

Turing was not, of course, trying to refute Paley; that had been done almost a century earlier by Charles Darwin. But the argument from design had survived, and was, and indeed remains, still a potent force in biology. For the essence of Darwin’s theory is that organisms are created by natural selection out of random variations. Almost any small variation can occur; whether it persists and so features in evolution depends on whether it is selected. Consequently we explain how a certain feature has evolved by saying what advantage it gives to the organism, i.e. what purpose it serves, just as if we were explaining why the Creator has designed the organism in that way. Natural selection thus takes over the role of the Creator, and becomes “The Blind Watchmaker” (Dawkins 1986).

Not all biologists, however, have accepted this view. One of the strongest dissenters was D’Arcy Thompson (1917), who insisted that biological form is to be explained chiefly in the same way as inorganic form, i.e., as the result of physical and chemical processes. The primary task of the biologist is to discover the set of forms that are likely to appear. Only then is it worth asking which of them will be selected. Turing, who had been very much influenced by D’Arcy Thompson, set out to put the program into practice. Instead of asking why a certain arrangement of leaves is especially advantageous to a plant, he tried to show that it was a natural consequence of the process by which the leaves are produced. He did not in fact achieve his immediate aim, and indeed more than thirty-five years later the problem of phyllotaxis has still not been solved. On the other hand, the reaction-diffusion model has been applied to many other problems of pattern and form and Turing structures (as they are now called) have been observed experimentally (Castets et al. 1990), so Turing’s idea has been vindicated.

Hacking Days at MIT and Wikimania at Harvard, August 2006.

Posted in Computer Science, Conferences on August 7th, 2006 by Hector Zenil – Be the first to comment

Hacking Days at MIT and Wikimania at the Harvard Law School came to a close yesterday. Here is a brief summary:

Brion Vibber, Chief Technology Officer of Wikimedia, gave many talks. He discussed everything from why wiki projects are difficult to cache (since they are dynamic) to features to come, such as Semantic MediaWiki, a possible Xapian-based search engine, better wikitags, a better parser, possible support for PDF documents and integration with the DjVu image file format and various video formats, and single sign-on options such as OpenID/YADIS/A-Select. There were some OLPC (One Laptop Per Child) computers outside, which can synchronize with one another over a wireless network they build by themselves in order to play music or share any type of information.

Mark Bergsma talked about the near-future server technology of the Wikimedia projects, such as 64-bit servers. He provided information about the geographical sites of the Wikipedia clusters, mainly located in Florida and Amsterdam. He talked about some features of the caching architecture using Squid, some new DNS technologies they are exploring, such as PowerDNS, geographical load balancing (e.g. BGP DNS) and object purging. He announced that they were already using the HTCP inter-cache protocol, and that there is a plan to make the network core, currently a single core switch/router, more reliable. Some of the participants proposed the use of PlanetLab services (http://www.planet-lab.org), a collection of machines distributed over the globe running a common software package including a Linux-based OS, mechanisms for bootstrapping, distribution software and a set of node monitoring tools. PlanetLab is mainly devoted to research, serving as a testbed for overlay networks and giving groups the opportunity to experiment with planetary-scale services.

A later talk was about enhancing Wiktionary so that it can be used as a database by external applications. Right now Wiktionary can only be exploited by addressing a query directly to it. A new database structure is being developed in order to give Wiktionary semantic meaning, among other things relating each word to its translation in all the other languages already in Wiktionary, which will eventually allow many new features and the generation of a full knowledge database rather than a collection of words (about a million at the moment, across all languages) with definitions.

An interesting talk about managing and posting to discussion pages also took place. In a nutshell, the idea was to treat each post as a wiki page in itself. Some questions were raised about the performance impact on the whole system of the huge number of new wiki pages, and other security and control questions emerged too, but the idea seemed to be very well received. Finally, a nice proposal to include video and audio streaming in wiki projects was presented.

There were several talks about WiktionaryZ during Hacking Days and Wikimania, by Erik Moeller and others. WiktionaryZ is an initiative to create the Ultimate Universal Wiktionary (pretty humble, isn’t it?) by integrating semantic knowledge into Wiktionary. The project is based on defining meaning using a short, simple phrase that defines a word clearly and unequivocally and that can be exactly translated into all the languages of the Wiktionary. There is also a record of the relationships of the words with each other, thus making it possible to build a machine-readable repository of semantic knowledge.

Following that, Andre Engels talked about pywikipediabot and the challenges of writing Wikipedia bots that rely on screen scraping, which makes maintaining the bots quite complicated given the need to change them every time there is a change in the format of the articles. He also spoke about the dangers of using bots for big tasks, because errors in bot programming can lead to hundreds of thousands of damaged pages.
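
As a rough illustration of what such a bot does, here is a minimal sketch using pywikibot, the present-day successor of pywikipediabot (the 2006 interface differed); it edits a page through the MediaWiki API rather than by screen scraping, and the page title and edit are purely hypothetical.

import pywikibot

# Connect to a wiki and edit one page through the API (no screen scraping).
site = pywikibot.Site('en', 'wikipedia')            # target wiki
page = pywikibot.Page(site, 'Wikipedia:Sandbox')    # hypothetical page title
text = page.text                                    # fetch the current wikitext
page.text = text + '\n<!-- illustrative bot edit -->'
page.save(summary='Bot: illustrative edit')         # write back with an edit summary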

Other talks were about OpenID, an open-source authentication system similar to Microsoft Passport in that it integrates the user’s identity across several projects (Wikimedia projects, blogs, etc.) into a single ID. There are plans to integrate this feature into Wikipedia soon.

WYSIWYG for Wikipedia: One of the main problems when using Wikipedia is the difficulty of editing using the wikitags. Although technologically advanced users can easily adapt to the wikitag system, most people just can’t get the hang of it. Hence the need for an easy-to-use, simple editor is evident, although the lack of a proper MediaWiki parser and the complexity of the wikitag language makes such a thing hard to implement. In any case, Frederico Caldeira Knabben has created a very nice and useful WYSIWYG HTML editor called FCKeditor (www.fckeditor.org) and is willing to join forces with MediaWiki to integrate it into Wikipedia.

There was also a panel featuring Brion Vibber, Andre Engels and Erik Moeller which addressed the possibility of a MediaWiki API. Many of the attendees were enthusiastic, and some went into a separate room and discussed the specification with Brion and Andre. They came up with a preliminary agreement that may be available soon on the web. The day ended with an enjoyable tour of MIT’s Media Lab.
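
To give an idea of what such an API makes possible, here is a small sketch that queries today's MediaWiki web API (api.php), which grew out of this kind of specification; the endpoint and parameters reflect the current API rather than the 2006 draft, and the article title is just an example.

import json
import urllib.parse
import urllib.request

# Ask the MediaWiki API for a plain-text extract of one article.
params = urllib.parse.urlencode({
    'action': 'query',
    'prop': 'extracts',
    'explaintext': 1,
    'titles': 'Alan Turing',
    'format': 'json',
})
url = 'https://en.wikipedia.org/w/api.php?' + params
with urllib.request.urlopen(url) as response:
    data = json.load(response)

page = next(iter(data['query']['pages'].values()))
print(page['extract'][:300])   # first few hundred characters of the article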

Some topics were rather boring. For instance, the Wikimedia discussion panel was mainly about internal politics and logistics, with nothing of interest to the broader audience.

From the point of view of our discussions here, the most interesting topics at Wikimania related to the Semantic Web and the reliability of wiki projects, given that everybody can edit the entries. Jim Giles, the author of the Nature paper comparing Wikipedia and Britannica, talked about the strong and weak points of the entries and of the reviews. According to him, Britannica made the point that most of the entries that were evaluated were in the sciences, where Wikipedia is stronger, since most of its contributors come from these disciplines; the argument was that this kind of comparison could not serve as an adequate measure of the entire corpus of knowledge covered by Britannica. Britannica also made the point that the entries (50 in all) were badly reviewed, which would account for Wikipedia earning a rating very close to Britannica’s (3.9 errors per article on average for Wikipedia against 2.9 for Britannica). However, Giles argued that the reviewers were the same for both sides, so any errors the reviewers committed would on average have affected both sides equally.

As for the session in which they were expecting to reach a consensus on improving the reliability of Wikipedia content, no formal agreement was reached. According to Martin Walker, a professor of chemistry, things gradually coalesced during discussions over the weekend, and both the German and the English-language communities seem to have arrived at a similar position:
1. Set up stable (uneditable) versions of articles (static, unvandalized versions). The Germans expect to be testing out this idea within a month.
2. Then set up a validation system, possibly using review teams of trusted people from wiki projects, to check every fact (and sign off, giving the reference source). The fact checking would be made available to any user who wanted to take the time. This validated version would also be uneditable.
3. On the English Wikipedia we thought there ought to be an outside expert (with a PhD and a reputation) to sign off on the validated version, so we could say: “This page has been approved by Professor XXX from University of YYY”.

The discussion pages set up on this issue are at:

http://en.wikipedia.org/wiki/Wikipedia_talk:Pushing_to_validation

and

http://wikimania2006.wikimedia.org/wiki/Proceedings_talk:MW2

Concerning the Semantic Web, there is already a working wiki project at www.ontoworld.org. The basic idea is to use tags for all the pieces of information contained in the articles in order to relate them to other data.
A typical example is:
”’Africa”’ is a continent, south of [[Europe]] and southwest of [[Asia]].

where the tags for Europe and Asia signal a relation to the articles carrying the same tags.

In the end what we have is a big relational database underlying the system, which allows queries in an SQL-like query language called SPARQL, whose specification is at:
http://www.sparql.org/query.html

I conducted some tests using the following entry:
http://wiki.ontoworld.org/index.php?title=Germany&action=edit,
and, using the Africa article, I searched www.ontoworld.org and created a new article from a query listing each country in Africa, sorted by population. The query (on a page titled “Africa Population by Countries”) was:

== List of African countries ==
[[Category:Country||Persons]]
[[located in::Africa]]
[[Population:=*]]
[[Area:=*]]

[[Category:Continent]]

The underlying technology is RDF (Resource Description Framework). More on that at:

http://wiki.ontoworld.org/index.php/Special:ExportRDF

and there are libraries in several languages for RDF access:

RDFLib (Python): rdflib.net
SMW lib
RAP (RDF API for PHP): www.wiwiss.fu-berlin.de/suhl/bizer/rdfapi
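
As a sketch of how these libraries are used, the following loads an RDF export (for instance one saved from the Special:ExportRDF page mentioned above) into RDFLib and runs a SPARQL query over it; the file name is a hypothetical placeholder and the query simply lists a few triples.

from rdflib import Graph

# Load an RDF/XML export of a wiki article into an in-memory graph.
g = Graph()
g.parse('Africa_export.rdf', format='xml')   # hypothetical file saved from Special:ExportRDF

# List a handful of (subject, predicate, object) triples with SPARQL.
query = """
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
"""
for s, p, o in g.query(query):
    print(s, p, o)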

and the description of the MediaWiki extension project for the Semantic Web is at:

http://ontoworld.org/index.php/Semantic_MediaWiki

We also explored some browsers being developed for the Semantic Web. They can be found at:

http://brownsauce.sourceforge.net/

http://simile.mit.edu

The workshop on “Using Wikipedia’s Knowledge in Your Applications” (http://wikimania2006.wikimedia.org/wiki/Proceedings:MK1) was very interesting. There I met the speakers (Markus Krötzsch and Denny Vrandecic), with whom I exchanged information and agreed to keep in contact in order to discuss my concerns about addressing queries to the URL and transforming unstructured data into semantically usable information. There was some discussion about translating natural language into semantic information by taking cue words directly from the Wikipedia articles and introducing tags and relations:

http://wikimania2006.wikimedia.org/wiki/Proceedings:DV2

http://wikimania2006.wikimedia.org/wiki/Proceedings:DV1

and

http://wikimania2006.wikimedia.org/wiki/Proceedings:MN1

I met Tim Berners-Lee, who is also very interested in the Semantic Web and in the idea of creating a browser exploiting these features. I also met Betsy Megas, who is working on a Semantic Web project called wiktionaryZ.org, which is like wiktionary.org but semantic. We had a long discussion about the usefulness of having a “categories” scheme in the Semantic Web. My point was that in most cases the categories could be built dynamically: they would be present in an abstract form without there being any need to define them explicitly, and a completely relational model strikes me as more interesting. People have tried to categorize everything since the advent of encyclopedias, but the number of possible categories can be much higher than the number of elements to categorize, simply because categories can be so arbitrary that in the end their number can approach the number of subsets of the elements (2^n for n elements), which is not useful from my point of view.

A couple of plenary sessions were held in the main auditorium of Harvard Law School. One of them featured a lecture by Lawrence Lessig, a lawyer who has been involved in cases such as the Microsoft monopoly case. He is the founder of the Creative Commons licenses, often described as “copyleft”, under which some rights are reserved but others are yielded to the public to allow people to be creative when reusing resources. He talked about the Read-Only (RO) Society, which was how he characterized the last century, and the new Read-Write (RW) Society, which is moving toward Creative Commons licenses, open software, open hardware, free and open encyclopedias like Wikipedia, freedom of work (freelancers and people organizing free foundations), free access to human knowledge and communications (web access, Skype), and free and open broadcasting (podcasting), among other things.

Most of the sessions were recorded and are available at:

http://wikimania2006.wikimedia.org/wiki/Archives

And the abstracts and proceedings are at:

http://wikimania2006.wikimedia.org/wiki/Proceedings:Index

I also found an interesting site for exploring the degrees of separation between two different topics through Wikipedia (sometimes it works):

http://tools.wikimedia.de/sixdeg/index.jsp?from=cuneiform&to=Semantic_Web

Computability in Europe Conference (CiE) Report, Wales UK

Posted in Computability, Universality and Unsolvability, Computer Science, Conferences, Foundations of Computation on August 7th, 2006 by Hector Zenil – Be the first to comment

This is a report on the Computability in Europe Conference (CiE), held at the University of Swansea, Wales in the United Kingdom in July 2006.

I attended a mini-course on quantum computing given by Julia Kempe, a lecture on the Church-Turing thesis by Martin Davis, who defended it against proposed models of hypercomputation, and a lecture on proof theory. Another very interesting lecture was on Gödel’s and Turing’s remarks on the human mind (the dichotomy argument from Gödel and the mechanistic vision from Turing). Among other noteworthy lectures were Samuel Buss’ on the complexity of proofs, John Dawson’s on Gödel and computability, and Wilfried Sieg’s on the concept of mechanical procedure in Gödel and Turing, as well as many presentations on hypercomputation and computing over the reals.

I met people whom I had only known through email exchanges, such as Felix Da Costa from the Technological Institute of Lisbon, Robert Meyer, professor emeritus at the Australian National University, and Julia Kempe from France, a renowned researcher in quantum computing with whom I shared some doubts about where exactly the restrictions lie that constrain the power of quantum computing to the set of recursive functions. I also met people from SUNY who are doing interesting research on Turing computation, studying isomorphisms between oracle machines and the relation to Tennenbaum’s theorem on the uniqueness of the recursive model of PA (Peano Arithmetic). Many lectures were given on computing over infinite time and space and computing at the limits of general relativity. The conference was intended to take the pulse of the field of hypercomputation in Europe and worldwide.

International Conference in Complex Systems, NECSI

Posted in Complexity, Computability, Universality and Unsolvability, Computer Science, Conferences, Foundations of Computation on August 7th, 2006 by Hector Zenil – Be the first to comment

NECSI/ICCS Conference Report, Quincy, Greater Boston, USA, July 2006.

First lesson: For every complex problem there is a simple, neat, wrong solution.

I attended talks given by Ed Fredkin on finite nature, László Barabási on complex networks, Christoph Teuscher on biology and computation, and John Nash on his research in game theory.
* Ed Fredkin presented a 3-D cellular automaton model of table salt that fulfils all physical constraints on symmetry and energy conservation. The model is, surprisingly, Turing-universal. As a reminder, the Zuse-Fredkin thesis is a Church-type thesis which additionally claims that the universe is a cellular automaton and can be understood in terms of the evolution of cellular automata. An interesting paper entitled “Church-Turing Thesis is Almost Equivalent to Zuse-Fredkin Thesis” by Plamen Petrov is available here:

http://digitalphysics.org/~ppetrov/

And much more information on the interesting ideas of Ed Fredkin is available in
And much more information on the interesting ideas of Ed Fredkin is available at http://www.math.usf.edu/~eclark/ANKOS_zuse_fredkin_thesis.html and on Ed Fredkin’s homepage at MIT, which includes some remarks on why this thesis is not in agreement with Stephen Wolfram’s, since Wolfram does not claim that the universe is a classical cellular automaton, but rather conceives of a discrete universe based on systems that follow simple rules and produce all the complex behavior we find in nature. Such systems are comparable to Turing machines, tag systems, axiom systems or any other equivalent system.
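
As a small illustration of “simple rules producing complex behavior”, here is a minimal sketch of an elementary cellular automaton, Rule 110, which is known to be Turing-universal; the width, number of steps and initial condition are arbitrary choices.

# Elementary cellular automaton, Rule 110: each cell's next state is read off
# from the rule number using its left neighbour, itself and its right neighbour.
RULE = 110
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1   # a single seed cell

for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in cells))
    cells = [
        (RULE >> (4 * cells[(i - 1) % WIDTH] + 2 * cells[i] + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
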
* In his lecture, László Barabási claimed that behind all complex systems there are complex networks, in particular scale-free networks (with a few very well connected nodes and many weakly connected ones). These networks are efficient and robust (they remain connected even when random nodes are removed), except under targeted attack (when the best connected nodes are removed first).
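
A small sketch of this idea: the following generates a scale-free network with the Barabási-Albert preferential-attachment model using the networkx library and compares the few highly connected hubs with the typical node; the node count and attachment parameter are arbitrary choices.

import networkx as nx

# Grow a preferential-attachment (Barabasi-Albert) network.
g = nx.barabasi_albert_graph(n=1000, m=2)   # 1000 nodes, 2 edges per new node

degrees = sorted((d for _, d in g.degree()), reverse=True)
print('largest hub degrees:', degrees[:5])                  # a few heavily connected hubs
print('median degree:      ', degrees[len(degrees) // 2])   # most nodes are weakly connected
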
* Christoph Teuscher gave a lecture on computation inspired by biology. However, as he himself admitted, it was not his area of expertise.
* John Nash presented his research on Game Theory using Mathematica as an engine.
* I met Hava Siegelmann and one of her collaborators, Yariv, who is working on artificial intelligence. Siegelmann worked some years ago (while completing her PhD dissertation) on a model of computation called the Analog Recurrent Neural Network, or ARNN, which under certain circumstances is able to compute more functions than the set of recursive functions, i.e. those computed by Turing machines. She is now working on topics related to molecular biology. I asked her about the forceful critiques delivered by traditional logicians like Martin Davis, who wrote a paper entitled “The Myth of Hypercomputation.” The target of most of these critiques is a certain circular argument: to compute more than Turing machines, it is necessary to use non-Turing-computable weights previously coded into the neural network, and it has been known for a long time that the set of all real numbers is able to encode any arbitrary language, even a non-Turing-computable one.

What is remarkable from my point of view is her result relating to complexity classes, since weights of lower complexity are able to compute a higher class when used in those networks. Additionally, she argued that even with a stochastic function her networks are able to solve non-computable functions. I recently discussed this with Stephen Wolfram and we agreed that she is assuming strong randomness. I would say, however, that Siegelmann’s work is much more beautiful from a complexity point of view than from a “hypercomputational” one. In her book “Neural Networks and Analog Computation: Beyond the Turing Limit” she proves that (a) there exists a universal neural network with only 9 neurons, and (b) p(r) suffices to compute a non-computable function, where r is an arbitrary real number and p(r) denotes the first p digits of its expansion, which means that linear precision suffices to achieve “super-Turing” capabilities, assuming that the neural network can have access to any possible real number before the computation. In other words, this seems to be true only if all possible real numbers are allowed a priori, just as in Turing machines an unbounded tape is necessary to carry out all recursive languages, and just as neural networks with rational weights do not compute the same set of functions as neural networks with integer weights: the first were proven by Kleene to compute the same set as Turing machines and the second to compute the same set of languages as finite automata, namely the regular languages.
I exchanged ideas with some other interesting people, such as an engineer from the Space and Airborne Systems department of Raytheon. And I met John Nash Jr. during the gala dinner at which he was presented with an award for his contributions to complexity theory, mainly honoring his work in game theory.

Turing’s approach to Biology

Posted in Computer Science, Foundations of Biology on August 7th, 2006 by Hector Zenil – Be the first to comment

Where do the spots on animals come from?
Turing’s answer to this question, as well as much more on Turing and modern Fibonacci phyllotaxis, is presented and analysed by Jonathan Swinton on his Deodands blog:

http://www.swintons.net/deodands/archives/000091.html

The Web as a Graph, Adaptive Crawlers and more…

Posted in Computer Science on August 1st, 2006 by Hector Zenil – Be the first to comment

The Web as a graph, adaptive crawlers, and mining the Web from unstructured data?

http://www.almaden.ibm.com/webfountain/resources/TheWebasaGraph.pdf

To be discussed soon…

How To Criticize Computer Scientists

Posted in Computer Science, General on August 1st, 2006 by Hector Zenil – Be the first to comment

How To Criticize Computer Scientists, or Avoiding Ineffective Deprecation And Making Insults More Pointed, from http://www.cs.purdue.edu/homes/dec/essay.criticize.html

In recent exchanges, members of the faculty have tried in vain to attack other Computer Scientists and disparage their work. Quite frankly, I find the results embarrassing — instead of cutting the opponent down, many of the remarks have been laughably innocuous. Something must be done about it because any outsider who hears such blather will think less of our department: no group can hold the respect of others unless its members can deal a devastating verbal blow at will.

This short essay is an effort to help faculty make their remarks more pointed, and help avoid wimpy vindictives. It explains how to insult CS research, shows where to find the Achilles’ heel in any project, and illustrates how one can attack a researcher.

The Two Basic Types Of Research: Most lousy insults arise from a simple misimpression that all researchers agree on the overall aims of CS research. They do not. In particular, CS has inherited two, quite opposite approaches from roots in mathematics and engineering. Researchers who follow the mathematical paradigm are called theorists, and include anyone working in an area that has the terms “analysis”, “evaluation”, “algorithms”, or “theory” in the title. Researchers who follow the engineering paradigm are called experimentalists, and include most people working in areas that have the terms “experimental”, “systems”, “compiler”, “network”, or “database” in the title.

Complex Theory And Simple Systems: Knowing the tradition from which a researcher comes provides the basis for a well-aimed insult.

Theorists Favor Sophistication. Like mathematicians, theorists in Computer Science take the greatest pride in knowing and using the most sophisticated mathematics to solve problems. For example, theorists will light up when telling you that they have discovered how an obscure theorem from geometry can be used in the analysis of a computer algorithm. Theorists focus on mathematical analysis and the asymptotic behavior of computation; they take pride in the beauty of equations and don’t worry about constants. Although they usually imply that their results are relevant to real computers, they secretly dream about impressing mathematicians.

Experimentalists Favor Simplicity. Like engineers, systems researchers take pride in being able to invent the simplest system that offers a given level of functionality. For example, systems researchers will light up when telling you that they have constructed a system that is twice as fast, half as small, and more powerful than its predecessor. Experimentalists focus on the performance of real computer systems; they take pride in the beauty of their code and worry about constants. Although they usually imply that their results can extend beyond real computers, they secretly dream of filing patents that apply to extant hardware.

The Insult: Knowing that CS can be divided into two basic groups helps immensely when criticizing someone. There are two basic rules: identify the type of the researcher and issue an insult for that type. Avoid saying anything that inadvertently compliments them. If performed well, an insult will not only stun the researcher (who will be shocked to learn that not everyone agrees with his or her basic value system), but will also intimidate others in the audience.

Identifying A Type: Identifying the type of a researcher is usually easy and does not require a strong technical background or real thinking. It can be done using keyword matching according to the following lists.
Detecting Theory: You can tell someone is a theorist because they slip one or more of the following keywords and phrases into lectures and technical conversations: “theorem”, “lemma”, “proof”, “axiom”, “polynomial time”, “logarithmic”, “semantics”, “numerical”, “complexity”, “nondeterministic” or “nondeterminism”, and “for large enough N”. They write lots of equations, brag about knocking off the “extra log factor”, and often end their lecture with an uppercase “O” followed by a mathematical expression enclosed in parentheses. You can also recognize a theorist because they take forever to prove something that may seem quite obvious. (I once sat through an hour lecture where someone proved that after a computer executed an assignment statement that put the integer 1 into variable x, the value in x was 1.)

Detecting Systems: An experimentalist will slip one or more of the following keywords and phrases into lectures and technical conversations: “architecture”, “memory”, “cpu” (sometimes abbreviated “CISC” or “RISC”), “I/O” or “bus”, “network”, “interface”, “virtual”, “compile” or “compiler”, “OS” or “system”, “distributed”, “program” or “code”, and “binary”. They talk about building programs and running the resulting system on real computer systems. They refer to companies and products, and use acronyms liberally. Their lectures often end with a graph or chart of measured system performance. You can also recognize an experimentalist because they describe in excruciating detail how they set up an experiment to measure a certain value even if the measurement produced exactly the expected results. (I once sat through an hour lecture where someone carefully explained how they used three computer systems to measure network traffic, when their whole point was simply to show that the network was not the cause of the problem they were investigating.)

Forming An Insult: The key to a good insult lies in attacking whatever the researcher holds most dear and avoiding whatever the researcher does not care about. Thus, an insult lobbed at a theorist should focus on lack of sophisticated mathematics, such as the following:

Despite all the equations, it seems to me that your work didn’t require any real mathematical sophistication. Did I miss something? (This is an especially good ploy if you observe others struggling to understand the talk because they will not want to admit to that after you imply it was easy.)

Isn’t this just a straightforward extension of an old result by Hartmanis? (Not even Hartmanis remembers all the theorems Hartmanis proved, but everyone else will assume you remember something they have forgotten.)

Am I missing something here? Can you identify any deep mathematical content in this work? (Once again, audience members who found the talk difficult to understand will be unwilling to admit it.)

In contrast, an insult lobbed at an experimentalist should imply that the techniques were used in previous systems or that the work isn’t practical, such as:

Wasn’t all this done years ago at Xerox PARC? (No one remembers what was really done at PARC, but everyone else will assume you remember something they don’t.)

Have you tested this on the chip Intel got running last week in their lab? (No one knows what chip Intel got running last week, but everyone will assume you do.)

Am I missing something? Isn’t it obvious that there’s a bottleneck in the system that prevents scaling to arbitrary size? (This is safe because there’s a bottleneck in every system that prevents arbitrary scaling.)

How To Avoid Having An Insult Backfire On You: A misplaced insult can backfire, turning into an embarrassment for the attacker and a victory for the intended attackee. To avoid such occurrences, remember the following:

Never attempt to attack theoretical work as not considering constants, as unrelated to real computer systems, or as requiring too much sophisticated mathematics. (The intended victim is likely to smile and thank you for the flattery.)

Never attempt to attack a system as too small, too simple, or as lacking sophisticated mathematics. (Again, the intended victim is likely to smile and thank you for the flattery.)

Never attempt to attack systems work simply by saying that it’s so simple and obvious that you could have done it. (For years, people said that about UNIX and the TCP/IP protocols.) In fact, this is merely an extension of a ploy used by children on a playground: “Oh yeah? I could have done that if I wanted to.” Don’t try using it or someone will tell you to grow up.

Attacking Crossover Work: Although rare, a few researchers include both theoretical and experimental work in the same project. Insulting such combinations can be tricky because a researcher can escape unscathed by pointing to one part of their work or the other as the answer. You can try to attack both parts simultaneously:

I note that the systems aspect of this project seems quite complex. Do you think the cause of the convoluted implementation can be attributed to the more-or-less “simplistic” mathematical analysis you used?

However, a clever insult can avoid talking about the work by suggesting sinister reasons for the paradigm shift:

I notice that you did something unusual by combining both theory and experiment. Did you decide to try a second approach because you had insufficient results from the first?

You seem to have a little theory and a little experimental work combined into one project. Isn’t it true that if you had a sufficiently strong contribution in one or the other you would have lectured about them separately?

A Final Plea: I certainly hope faculty will take this essay to heart and sharpen their insult skills. In the future please make all your thrusts count.