
Doing science backwards

A recent article (Trettenbrein, P. (2016), “The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift?”, Frontiers in Systems Neuroscience, 10) questions whether what many consider settled science – that plastic changes to synapses are the basis of learning and memory – is in fact correct. Thanks to Neuroskeptic for noting this paper (here).

“Actually, as of today, large parts of the field have concluded, primarily drawing on work in neuroscience, that neither symbolism nor computationalism are tenable and, as a consequence, have turned elsewhere. In contrast, classical cognitive scientists have always been critical of connectionist or network approaches to cognitive architecture.” Trettenbrein is in the classical cognitive scientist camp.

First, Trettenbrein assumes that the brain is a Turing machine. In other words, the coinage of thought is symbols, and they are manipulated by algorithms (programs) that write to a stable memory and read from it. The brain is assumed to deal in representations/symbols as variables, stepwise procedures as programs, and a random-access memory, which together make a Turing machine. “The crucial feature of a Turing machine is its memory component: the (hypothetical) machine must possess a read/write memory in order to be vastly more capable than a machine that remembers the past only by changing the state of the processor, as does, for example, a finite-state machine without read/write memory. Thus, there must be an efficient way of storing symbols in memory (i.e., writing), locating symbols in memory (i.e., addressing), and transporting symbols to the computational machinery (i.e., reading). It is exactly this problem, argue Gallistel and King (2009), that has by and large been overlooked or ignored by neuroscientists. …

“Synaptic plasticity is widely considered to be the neurobiological basis of learning and memory by neuroscientists and researchers in adjacent fields, though diverging opinions are increasingly being recognized. From the perspective of what we might call “classical cognitive science” it has always been understood that the mind/brain is to be considered a computational-representational system. Proponents of the information-processing approach to cognitive science have long been critical of connectionist or network approaches to (neuro-)cognitive architecture, pointing to the shortcomings of the associative psychology that underlies Hebbian learning as well as to the fact that synapses are practically unfit to implement symbols.” So the assumption that we have a Turing machine dictates that it needs a particular type of memory, one which is difficult to envisage with plastic synapses.
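To make the distinction in the quoted passage concrete, here is a minimal sketch in Python (the toy machine definitions are my own illustrations, not anything from Trettenbrein or Gallistel and King): a finite-state machine “remembers” the past only through its current control state, while a Turing machine can also write symbols onto a tape, move its head to a position on it (addressing), and read the symbols back later.

```python
# Illustrative sketch only: a finite-state machine versus a (tiny) Turing machine.

def fsm_parity(bits):
    """A finite-state machine: its only 'memory' is the current state.
    It can tell whether it has seen an even or odd number of 1s,
    but it cannot store an arbitrary symbol and retrieve it later."""
    state = "even"
    for b in bits:
        if b == "1":
            state = "odd" if state == "even" else "even"
    return state


def turing_increment(tape_str):
    """A very small Turing machine: besides a control state, it has a
    read/write tape. Symbols can be written (storing), the head can be
    moved (addressing), and symbols can be read back (retrieval).
    This one adds 1 to a binary number written on the tape."""
    tape = list(tape_str)
    head = len(tape) - 1              # start at the least significant bit
    state = "carry"
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else "_"   # read
        if state == "carry":
            if symbol == "0":
                tape[head] = "1"      # write, no carry remains
                state = "halt"
            elif symbol == "1":
                tape[head] = "0"      # write
                head -= 1             # move the head (addressing)
            else:                     # ran off the left edge: extend the tape
                tape.insert(0, "1")
                state = "halt"
    return "".join(tape)


if __name__ == "__main__":
    print(fsm_parity("110101"))      # -> 'even'  (no stored symbols, just a state)
    print(turing_increment("1011"))  # -> '1100'  (symbols written, addressed, read back)
```

The point of the contrast is simply that the read/write tape lets arbitrary symbols be stored and retrieved at will, which is the kind of memory Trettenbrein argues plastic synapses are unfit to provide.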

I, like many others, believe that science starts with observations and moves on to explanations of those observations; to state it differently, the theories of science are based on physical evidence. It is not science to start with a theoretical assumption and argue from that assumption what has to be. Science starts with ‘what is’, not ‘what has to be’.

Trettenbrein is not thinking that the brain resembles a computer in many ways (the computer metaphor); he is thinking that it IS a computer (an actual Turing machine). If the brain is an actual computer, then it is a Turing machine, working in a stepwise fashion controlled by an algorithmic program. He then reasons that the memory must be individual neurons that are – what? Perhaps addressable items in a random-access memory. Well, it seems that he does not know. “To sum up, it can be said that when it comes to answering the question of how information is carried forward in time in the brain we remain largely clueless… the case against synaptic plasticity is convincing, but it should be emphasized that we are currently also still lacking a coherent alternative.” We are not clueless (although there are lots of unknowns), and the case for synaptic plasticity is convincing (it has convinced many, if not most, scientists) because there is quite a bit of evidence for it. But someone who starts with an assumption, then looks for evidence and finds it hard to produce, is doing their science backwards.

Trettenbrein is not doing neuroscience, not even biology, in fact not even science. There are a lot of useful metaphors that help us understand the brain, but we should never get so attached to them that we believe they can take the place of physical evidence from actual brains.

Just because we use the same words does not mean that they describe the same thing. A neurological memory is not the same as a computer memory. Information in the neurological sense is not the same as the defined information of information theory. Brain simulations are not real brains. Metaphors give resemblances, not definitions.