Random Item Generation (RIG) tasks are commonly used to assess higher cognitive abilities such as inhibition or sustained attention. They also draw upon our approximate sense of complexity. A detrimental effect of ageing on pseudo-random productions has been demonstrated for some tasks, but little is as yet known about the developmental curve of cognitive complexity over the lifespan. In this paper we investigated the complexity trajectory across the lifespan of human responses to five common RIG tasks, using a large sample (n = 3429). Our main finding is that the developmental curve of the estimated algorithmic complexity of responses is similar to what may be expected of a measure of higher cognitive abilities, with a performance peak around 25 and a decline starting around 60, suggesting that RIG tasks yield good estimates of such cognitive abilities. Our study illustrates that very short strings, e.g. of 10 items, are sufficient for their complexity to be reliably estimated and to allow the documentation of an age-dependent decline in the approximate sense of complexity.
Our algorithmic brain hypothesis does not oppose the Bayesian brain hypothesis, because the Bayesian brain hypothesis can be updated to embrace the algorithmic one by changing what is commonly known as the ‘prior’: the probability distribution that you assume as a starting point, or in other words your degree of belief that certain observations are more likely than others. In the traditional statistical approach, the Bayesian model starts with a flat, non-informative distribution. In the algorithmic Bayesian approach, one starts from the assumption that the world is not statistically random, and therefore that the prior should not be flat, because a flat distribution amounts to regarding the world in precisely this ‘statistically random’ fashion; instead, the prior should favour simpler, more compressible observations.
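To make this concrete, here is a minimal sketch, not taken from the paper, of what a simplicity-biased prior could look like. It uses the length of a zlib-compressed string as a crude stand-in for algorithmic complexity, so sequences that are less statistically random (more compressible) receive more prior mass; the function names and example strings are illustrative only.

```python
import zlib

def complexity_proxy(s: str) -> int:
    """Crude stand-in for algorithmic complexity: the length of the
    zlib-compressed string (an upper bound, for illustration only)."""
    return len(zlib.compress(s.encode()))

def simplicity_prior(hypotheses):
    """Weight each candidate sequence by 2**-K(s), with K approximated
    by the compressed length, then normalize into a probability distribution."""
    weights = {h: 2.0 ** -complexity_proxy(h) for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

regular = "0" * 40        # a constant run: highly compressible
alternating = "01" * 20   # periodic: also compressible
irregular = "0110100110010110100101100110100110010110"  # less regular

prior = simplicity_prior([regular, alternating, irregular])
# The compressible strings receive more prior mass than the irregular one.
```

Unlike a flat prior, this distribution encodes the assumption discussed above: structured observations are a priori more likely than random-looking ones.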
So let me give you a few examples of how we know the brain is algorithmic in nature, both simple examples from everyday life and results from our study. In everyday life, we care not only about statistical patterns but also about algorithmic ones. I am not sure if I have given you this example before, but let’s say you are given a phone number and, having no paper to write it down, you are forced to memorize it; the first thing we do is look at the sequence of digits for some pattern. Say the sequence looks like 217 217 2172: you would be amazed to realize that you basically need to recall only 3 digits to reconstruct the number, because there is a statistical pattern, namely a period of 3 digits repeated 3.3 times. But what if the number looks rather like 246 810 1214? Then you may notice that the phone number is a sequence of even numbers, and that you only need to remember this compressed version of the phone number: ‘the first even numbers’. However, this latter pattern is not a statistical property of the sequence, it is an algorithmic one! And we know that most patterns, both those we can recognize and those we cannot, are of this (algorithmic) type. Notice also that all statistical patterns are algorithmic, so an algorithmic pattern is a generalization of a statistical one (rather than being in opposition to it).
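The two kinds of pattern in the phone-number example can be sketched in a few lines of Python. `shortest_period` and `is_even_run` are hypothetical helper names introduced here for illustration, not code from the study: the first detects the statistical (repetition) pattern, the second the algorithmic (even-numbers) one.

```python
def shortest_period(s: str) -> int:
    """Smallest p such that s is a (possibly truncated) repetition
    of its first p characters -- a purely statistical pattern."""
    for p in range(1, len(s) + 1):
        if all(s[i] == s[i % p] for i in range(len(s))):
            return p
    return len(s)

def is_even_run(digits: str) -> bool:
    """Check the 'algorithmic' pattern from the text: the digits spell
    out the consecutive even numbers 2, 4, 6, ... (illustrative helper)."""
    spelled, n = "", 2
    while len(spelled) < len(digits):
        spelled += str(n)
        n += 2
    return spelled.startswith(digits)

shortest_period("2172172172")  # 3: "217" repeated 3.3 times (statistical)
shortest_period("2468101214")  # 10: no repetition, yet still compressible
is_even_run("2468101214")      # True: the even numbers (algorithmic)
```

The second number defeats the period detector entirely, yet admits the short description ‘the first even numbers’, which is exactly the sense in which algorithmic patterns generalize statistical ones.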
An animated video from the authors shows the approach taken to conduct the experiment: https://www.youtube.com/watch?v=E-YjBE5qm7c
Media covering the article: