Imagination and reality

ScienceDaily has an item (here) on a paper (D. Dentico, B.L. Cheung, J. Chang, J. Guokas, M. Boly, G. Tononi, B. Van Veen. Reversal of cortical information flow during visual imagery as compared to visual perception. NeuroImage, 2014; 100: 237) looking at EEG dynamics during thought.

The researchers examined electrical activity as subjects alternated between imagining scenes and watching video clips.

Areas of the brain are connected for various functions, and these interactions change during processing. The changes in network interactions appear as a movement of activity across the cortex. The research groups are trying to develop tools to study these changing networks: Tononi to study sleep and dreaming, and Van Veen to study short-term memory.

The activity seems very directional. “During imagination, the researchers found an increase in the flow of information from the parietal lobe of the brain to the occipital lobe — from a higher-order region that combines inputs from several of the senses out to a lower-order region. In contrast, visual information taken in by the eyes tends to flow from the occipital lobe — which makes up much of the brain’s visual cortex — ‘up’ to the parietal lobe… To zero in on a set of target circuits, the researchers asked their subjects to watch short video clips before trying to replay the action from memory in their heads. Others were asked to imagine traveling on a magic bicycle — focusing on the details of shapes, colors and textures — before watching a short video of silent nature scenes.”

The study was used mainly to verify their equipment, methods and calculations: could they discriminate the direction of ‘flow’ in the two situations, imagining and perceiving? It appears they could.

The actual directions of flow are not surprising. In perception, information starts in the primary sensory areas at the back of the brain. The information becomes more integrated as it moves forward, becoming objects in space, concepts and even word descriptions. On the other hand, during imagining, the starting points are objects, concepts and words. They must be rendered in sensory terms, and so processing would be directed back towards the primary sensory areas. In both cases the end point would be a connection between sensory qualia and their high-level interpretation. In perception the movement is from the qualia to the interpretation; in imagining it would be from the interpretation to the qualia.
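
The ‘flow’ being measured here is essentially statistical: does the recent past of one region help predict another region beyond what that region’s own past already predicts? As a rough, hedged illustration (my own sketch, not the authors’ method; the channel names, the autoregressive order and the synthetic signals are all invented), a pairwise Granger-style comparison between an ‘occipital’ and a ‘parietal’ signal might look like this:

```python
# A rough sketch (my own, not the authors' method; the channel names, the
# autoregressive order and the synthetic signals are all invented) of a
# pairwise Granger-style comparison of directed influence between an
# 'occipital' and a 'parietal' signal.
import numpy as np

def granger_gain(x, y, order=5):
    """Log ratio of residual variance when predicting x from its own past
    versus its own past plus y's past; bigger means more flow from y to x."""
    rows = range(order, len(x))
    X_own  = np.array([x[t - order:t] for t in rows])                         # x's past only
    X_full = np.array([np.r_[x[t - order:t], y[t - order:t]] for t in rows])  # plus y's past
    target = x[order:]
    res_own  = target - X_own  @ np.linalg.lstsq(X_own,  target, rcond=None)[0]
    res_full = target - X_full @ np.linalg.lstsq(X_full, target, rcond=None)[0]
    return float(np.log(res_own.var() / res_full.var()))

rng = np.random.default_rng(0)
occipital = rng.standard_normal(2000)
parietal = np.roll(occipital, 3) + 0.5 * rng.standard_normal(2000)   # parietal lags occipital

print("occipital -> parietal:", round(granger_gain(parietal, occipital), 3))  # large
print("parietal -> occipital:", round(granger_gain(occipital, parietal), 3))  # near zero
```

On these toy signals the occipital-to-parietal score comes out larger, mimicking the perception direction; the actual study’s source-level analysis is of course far more sophisticated than this.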

 

A new old discovery

Some say that science has been looking at the brain for some time now and yet there is no agreed explanation of how it works. This is sometimes followed by the claim that science will therefore never understand the brain. But the brain is much more complex than most people think, and the tools for examining it are far less powerful than people assume. On a regular basis new aspects of the brain are discovered, not little details but major discoveries.

Recently a large white matter tract was found. Really it was re-found, because it had been previously reported, doubted, and forgotten; it had first been described in the 1880s. This is basic brain anatomy in the most closely studied part of the cortex, the visual cortex, and it illustrates just how little is known about the brain. It is as if a major artery were missing from our knowledge of the circulatory system.

ScienceDaily has an item on this (here). The announcement is in the paper: Jason D. Yeatman et al. The vertical occipital fasciculus: A century of controversy resolved by in vivo measurements. PNAS, November 2014 DOI: 10.1073/pnas.1418503111.

Carl Wernicke discovered it; Yeatman and Weiner re-discovered it. They call it the vertical occipital fasciculus (VOF). There are three ways in which the knowledge could have been forgotten.

A scientific disagreement — In an 1881 neuroanatomy atlas, Wernicke, a well-known anatomist who in 1874 discovered “Wernicke’s area,” which is essential for language, wrote about a fiber pathway in a monkey brain he was examining. He called it “senkrechte Occipitalbündel” (translated as vertical occipital bundle). But its vertical orientation contradicted the belief of one of the most renowned neuroanatomists of the era, Theodor Meynert, who asserted that brain connections could only travel in between the front and the back of the brain, not up and down.

Haphazard naming methods — The 1880s and 1890s were a fertile time in the neuroanatomy world, but scientists lacked a shared process for naming the brain structures they found. Looking at drawings of the brain from this time period, Yeatman and coauthors saw that the fiber pathway that they were looking for appeared in brain atlases but was called different things, including “Wernicke’s perpendicular fasciculus,” “perpendicular occipital fasciculus of Wernicke,” and “stratum profundum convexitatis.”

“When we started, it was just for our own knowledge and curiosity,” said Weiner, who’s also the director of public information at the Institute for Applied Neuroscience, a nonprofit based in Palo Alto, California. “But, after a while, we realized that there was an important story to tell that contained a series of missing links that have been buried for so long within this puzzle of historical conversation among many who are considered the founders of the entire neuroscience field.”

Also, the way dissections were done changed, so that the VOF was less visible.

There are more details in Mo Costandi’s blog (here):

“The new measurements delineate the full extent of the VOF, revealing it as a flat sheet of white matter tracts that extends up through the brain for a distance of 5.5cm, connecting the ‘lower’ and ‘upper’ streams of the visual pathway. These run in parallel, and are sometimes called the ‘What’ and ‘Where’ pathways, for the type of information they carry: the lower stream connects brain regions involved in processes such as object recognition, including the fusiform gyrus, and the upper stream connects the angular gyrus to other areas involved in attention, motion detection, and visually-guided behaviour. The front portion of the VOF links the intraparietal sulcus, which encodes information about eye movements, to the occipito-temporal sulcus, which encodes representations of word forms. The portion further back links higher order visual areas within the two streams, which encode complex maps of the visual field. Given the functions of these brain regions, the researchers speculate that the VOF likely plays an important role in perceptual processes such as reading and recognising faces.”

It seems a pretty important piece of anatomy to have been lost for 100 years.

 

Habits and learning

Habits allow us to perform actions without attending to every detail; we can do complex things and more than one action at a time without overloading our cognitive and motor systems. They are goal-directed macro actions made up of a sequence of simple primitive actions. A habit allows a complex action to be launched as a unit and efficiently reach the goal of the habit without each step needing its own specific goal.

In forming a habit, a sequence of actions is consolidated by passing from a closed reward loop to an open reward loop. In other words the whole sequence comes to be evaluated rather than each step. Passing from step to step becomes much faster when it is automatic. “To explain how these sequences are consolidated, Dezfouli and Balleine distinguish between closed-loop and open-loop execution. At the beginning of learning, feedback is crucial. The organism needs a reward or some clues in the environment to identify and perform the proper behavior (closed-loop execution). In advanced stages of training, a step in the sequence is conditioned by the previous step, regardless of feedback stimuli or reward (open-loop execution). This independence accounts for the insensitivity to the outcome shown in experiments of reward devaluation and contingency degradation that are standard measures to determine if a habit has been acquired.” It takes persistent failure of the expected reward to disrupt the habit.
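
The contrast can be caricatured in a few lines of code. This is my own toy framing, not Dezfouli and Balleine’s actual model, and every name in it is invented; the only point is the control structure.

```python
# A toy caricature (my own framing, not Dezfouli and Balleine's model;
# every name here is invented) of closed-loop versus open-loop execution
# of a three-step action sequence.

def closed_loop(steps, feedback_ok):
    """Early learning: the environment/reward is checked after every step."""
    for step in steps:
        step()
        if not feedback_ok():          # a missing cue or reward halts the sequence
            return "aborted"
    return "goal reached"

def open_loop(steps, expected_reward, get_reward):
    """Consolidated habit: each step simply cues the next, and only the
    outcome of the whole macro action is compared with what was expected."""
    for step in steps:
        step()                         # no per-step feedback
    return "habit weakened" if get_reward() < expected_reward else "habit reinforced"

steps = [lambda: print("go-for"), lambda: print("select"), lambda: print("deliver")]
print(closed_loop(steps, feedback_ok=lambda: True))
print(open_loop(steps, expected_reward=1.0, get_reward=lambda: 1.0))
```

In the first mode feedback gates every step; in the second the chain runs to completion, and only a persistent shortfall in the final reward would eventually weaken the habit.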

Learning is the adaptation of an individual to the environment through changes in behavior that result from regularities in the environment. Learning is adaptive because it is a response to regularity. Since habits themselves present regularities, because one step automatically follows another, they can be the basis of further learning.

The author, Balderas (see citation below), uses the fast-mapping that dogs do in learning to associate a name with an object to illustrate the intertwining of habit and learning. Only some dogs do fast-mapping: learning that a new word applies to the only new object available, using exclusion logic. Other dogs stand about looking lost. She explains the learning of a particular dog, Rico, that uses two habits (automatic sequences): one is playing fetch and the other is associating a name with an object. The fetch sequence has three main actions: (a) go for, (b) select, (c) deliver. Select, however, can be seen as a sub-sequence: (1) look-for, (2) match, (3) take. If there is no new object/name then a-b-c can be executed without interruption. But during fast-mapping it becomes more complex. “In this case, take can not start because match was not executed. Since Rico does not dispose of a name-object association that enables it to complete the task, it is in a situation where it has to make a decision in the middle of the selection task, so the goal-directed system regains control. After solving the problem, the fetching-game sequence follows its tendency to completion and Rico returns to the sequence: it goes to take and to c (deliver). This description also follows the hierarchical view because at the starting point the behavior begins as a habit, when a decision is required it becomes goal-directed and ends again as a habit after overcoming the difficulty.” The dog uses the exclusion principle, and that involves matching previously learned pairs in order to eliminate them. When the dog finds the only possible answer is the unmatched object, he must select this object in order to deliver and reach the end-point, the habit’s goal. This sequence results in learning a new name/object matching. Habits modulate behavior and guide the animal to detect and solve a problem and thus learn.
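
Here is a minimal sketch of just the exclusion step. It is my own illustration, loosely following the Rico description; the vocabulary and object names are invented.

```python
# A minimal sketch (my own illustration, loosely following the Rico
# description; the vocabulary and object names are invented) of
# fast-mapping by exclusion during the "select" step of the fetch habit.
known = {"ball": "ball_toy", "rope": "rope_toy", "bone": "chew_bone"}

def select(name, objects_in_room):
    """Return the object to deliver, learning a new name-object pair
    when exclusion leaves only one candidate."""
    if name in known:                                    # ordinary match step
        return known[name]
    unmatched = [o for o in objects_in_room if o not in known.values()]
    if len(unmatched) == 1:                              # exclusion: one candidate left
        known[name] = unmatched[0]                       # fast-mapped in a single trial
        return unmatched[0]
    return None                                          # stand about looking lost

print(select("sock", ["ball_toy", "rope_toy", "striped_sock"]))  # -> striped_sock
print("sock" in known)                                           # the new pair is retained
```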

I have to admit that part of the reason for this post is my love of a former dog (a much missed border collie–husky cross) who could learn vocabulary, including by the exclusion principle. We were building a house and the internal walls were only studs. I had shown people around and the dog had followed. I would stand in a space and say, “This is the kitchen,” and then go on to the next room. After a few times the dog preceded the group. Then I would stay in the middle of the house and say, “Badger, show them the kitchen.” She did the tour, with me only naming the rooms. Then one day I said, “Show them the basement.” The dog looked at me and around the space a couple of times, and then trotted to the top of the stairs to the basement. I don’t think she picked up the word ‘basement’ from conversations, or she would not have been puzzled at first, but she did recognize that it was the only space left that she could possibly show them. From then on she could be told to go to the basement and she understood. When the walls were finished she could still be told to go to a particular room, although now she had to use the doors.

Balderas, G. (2014). Habits as learning enhancers. Frontiers in Human Neuroscience, 8. DOI: 10.3389/fnhum.2014.00918


Integration-to-bound decision model

Neuroskeptic has a posting (here) with the title ‘Do Rats have Free Will?’ It is a review of a paper by Murakami and others – abstract is below.

The paper supports the integration-to-bound model of decision making. A population of secondary motor cortex neurons ramps up its output to a constant threshold; crossing the threshold triggers the motor action. The researchers also found a second group of neurons that appears to set the rate of rise of the integrating neurons, and therefore the time that elapses before the threshold is reached. This fits the model. But what does it say about free will?
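
The basic idea can be simulated in a few lines. This is my own bare-bones sketch with arbitrary numbers, not the authors’ fitted model.

```python
# A bare-bones simulation (my own sketch with arbitrary numbers, not the
# authors' fitted model) of integration-to-bound: a noisy integrator ramps
# at a rate set by its input; the action fires when the bound is crossed,
# so stronger input means shorter waiting times.
import numpy as np

rng = np.random.default_rng(0)

def waiting_time(input_rate, bound=1.0, dt=0.001, noise=0.05):
    """Time at which the noisy integral of the input first crosses the bound."""
    level, t = 0.0, 0.0
    while level < bound:
        level += input_rate * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

for rate in (0.5, 1.0, 2.0):          # stronger input -> steeper ramp -> earlier action
    trials = [waiting_time(rate) for _ in range(200)]
    print(f"input rate {rate:.1f}: mean wait {np.mean(trials):.2f} s")
```

Stronger input makes the ramp steeper and the bound is reached sooner; trial-to-trial fluctuation in the input shows up as variability in waiting time, which is the pattern the second population of neurons seems to supply.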

The abstract does not mention free will but Neuroskeptic does. Fortunately, he has talked with the group and shares the conversation in his post. He points out the similarity between the integration signal and the readiness potential that Libet and others found preceded an action and preceded conscious awareness of a decision to act. He quotes Murakami: “activity preceding bound crossing, either input or accumulated activity, could be said to participate causally in the timing of an action, but does not uniquely specify it. The integration-to-bound theory implies that no decision has been made until the bound has been reached… as at any moment up to bound crossing, the arrival of opposing inputs may avert an action.” Neuroskeptic comments that the readiness potential may be a contributor to a decision rather than the consequence of a decision. And he again quotes Murakami: “Crossing the threshold from unawareness to awareness [could be] a reflection of bound crossing [in the integrator]…In this way, the integration-to-bound theory may help to resolve the contradiction between the subjective report of free will and the requirement for causal antecedents to non-capricious, willed actions.…our results provide a starting point for investigating mechanisms underlying concepts such as self, will and intention to act, which might be conserved among mammalian species.”

Although their results do confirm the integration-to-bound theory, I do not think they say much about free will. First, I cannot see how the researchers have any information on when the rats are consciously aware of whatever they may be aware of in a decision. Second, if another signal is controlling the rate of integration, when was it set on course and what are the signals that might control it? This is a long way from an understanding of how decisions are made and whether consciousness is involved.

Abstract of the paper (Murakami M, Vicente MI, Costa GM, & Mainen ZF (2014). Neural antecedents of self-initiated actions in secondary motor cortex. Nature Neuroscience, 17(11), 1574-82. PMID: 25262496):

The neural origins of spontaneous or self-initiated actions are not well understood and their interpretation is controversial. To address these issues, we used a task in which rats decide when to abort waiting for a delayed tone. We recorded neurons in the secondary motor cortex (M2) and interpreted our findings in light of an integration-to-bound decision model. A first population of M2 neurons ramped to a constant threshold at rates proportional to waiting time, strongly resembling integrator output. A second population, which we propose provide input to the integrator, fired in sequences and showed trial-to-trial rate fluctuations correlated with waiting times. An integration model fit to these data also quantitatively predicted the observed inter-neuronal correlations. Together, these results reinforce the generality of the integration-to-bound model of decision-making. These models identify the initial intention to act as the moment of threshold crossing while explaining how antecedent subthreshold neural activity can influence an action without implying a decision.

 

A multitasking neuron with a name

Mention of C. elegans always makes me smile. It is a small, simple worm. It has exactly 302 neurons (each of them named) and its connectome is completely known. And yet the relationship between the actions of those neurons and the animal’s behaviour is not yet understood. In a recent paper reviewed by NeuroScienceNews (here), researchers have found a multitasking neuron (AIY by name).

Multitasking neurons have been suspected in other animals and in humans, but how they manage it has not been understood. The researchers found that AIY sends an analog excitatory signal to one circuit, having to do with speed of movement, and a digital inhibitory signal to another circuit, having to do with switching direction. The neurotransmitter is the same for both signals, but the receptor that receives the signal is of a different type in the two circuits.
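
A cartoon of the idea, with my own toy numbers rather than the paper’s measurements: the same transmitter level is read through two dose-response curves, a shallow one with a wide dynamic range and a steep one with a narrow dynamic range, giving a graded output in one circuit and a switch-like output in the other.

```python
# A cartoon of the mechanism (my own toy numbers, not the paper's
# measurements): the same ACh level is read through two dose-response
# curves, a shallow one with a wide dynamic range (the "speed" circuit)
# and a steep one with a narrow dynamic range (the "direction-switch"
# circuit), giving analog-like and digital-like outputs respectively.
import numpy as np

def receptor(ach, half, steepness):
    """Simple saturating dose-response curve for a postsynaptic receptor."""
    return ach**steepness / (ach**steepness + half**steepness)

ach_release = np.linspace(0.0, 1.0, 6)                        # one transmitter, two readers
speed_drive  = receptor(ach_release, half=0.5, steepness=1)   # graded over the whole range
switch_drive = receptor(ach_release, half=0.5, steepness=12)  # essentially all-or-nothing

for a, s, d in zip(ach_release, speed_drive, switch_drive):
    print(f"ACh {a:.1f} -> speed circuit {s:.2f}, direction-switch circuit {d:.2f}")
```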

Here is the abstract of the paper (Z. Li, J. Liu, M. Zheng, S. Xu. Encoding of Both Analog- and Digital-like Behavioral Outputs by One C. elegans Interneuron. Cell, 159(4), Nov 2014):

Model organisms usually possess a small nervous system but nevertheless execute a large array of complex behaviors, suggesting that some neurons are likely multifunctional and may encode multiple behavioral outputs. Here, we show that the C. elegans interneuron AIY regulates two distinct behavioral outputs: locomotion speed and direction-switch by recruiting two different circuits. The “speed” circuit is excitatory with a wide dynamic range, which is well suited to encode speed, an analog-like output. The “direction-switch” circuit is inhibitory with a narrow dynamic range, which is ideal for encoding direction-switch, a digital-like output. Both circuits employ the neurotransmitter ACh but utilize distinct postsynaptic ACh receptors, whose distinct biophysical properties contribute to the distinct dynamic ranges of the two circuits. This mechanism enables graded C. elegans synapses to encode both analog- and digital-like outputs. Our studies illustrate how an interneuron in a simple organism encodes multiple behavioral outputs at the circuit, synaptic, and molecular levels.

The ghost is us

In schizophrenia, in some other conditions, and in extreme physical situations, people can feel an unseen presence accompanying them: a ghost. But this ghost has now been shown, most probably, to be ourselves. NeuroScienceNews (here) has a review of a new paper, including a video linked below.

The self that we experience is constructed from a number of sources: individual senses, internal body senses, motor prediction. This usually works seamlessly and we feel that we inhabit this self/body. The construct relies on three areas of the brain cooperating. If one of these areas is damaged, or their ability to work together is faulty, part of the self may be detached from the rest and then be experienced as a ‘presence’, near but displaced from the rest of the self. “Our brain possesses several representations of our body in space,” added Giulio Rognini. “Under normal conditions, it is able to assemble a unified self-perception of the self from these representations. But when the system malfunctions because of disease – or, in this case, a robot – this can sometimes create a second representation of one’s own body, which is no longer perceived as ‘me’ but as someone else, a ‘presence’.”

The researchers duplicated the effect in the lab with a robotic device which is clearly shown in a video (here).

I have found ghosts interesting since a conversation with my mother many years ago. She did not believe in ghosts or anything like that, but she found that after my father died, she could talk to him. She knew that it was herself talking in his voice in her head. She said that she knew him well enough to know what he would say and how. In fact she encouraged the voice – it was comforting. When she had a problem and wanted to know what he would advise, if he were alive, she would ask him. It worked best just as she was going to sleep. After a time the effect weakened and then was no longer available. Her grief and her immediate change in responsibilities would have affected her, and given her problems that she had not faced before. In trying to figure out what he would have done, she made those thoughts into a separate verbal presence. At first, she also thought she could see him out of the corner of her eye, but when she turned there was no one there. She put that down to missing him and to changing any little movement, half seen, into him.

I figure there were a number of tiny areas of her brain that were dedicated to monitoring my dad. When he died they were not called on to do any work and eventually started creating sightings of him, much as our brains react to sensory deprivation with hallucinations. I have been told that such things are quite common, but people do not mention them for fear of being ridiculed. Also, it is reported that many people hear voices from time to time but do not report it for fear of being thought mad.

Here is the abstract of the paper (“Neurological and Robot-Controlled Induction of an Apparition”; O. Blanke, P. Pozeg, M. Hara, L. Heydrich, A. Serino, A. Yamamoto, T. Higuchi, R. Salomon, M. Seeck, T. Landis, S. Arzy, B. Herbelin, H. Bleuler, and G. Rognini; Current Biology 2014):

Tales of ghosts, wraiths, and other apparitions have been reported in virtually all cultures. The strange sensation that somebody is nearby when no one is actually present and cannot be seen (feeling of a presence, FoP) is a fascinating feat of the human mind, and this apparition is often covered in the literature of divinity, occultism, and fiction. Although it is described by neurological and psychiatric patients and healthy individuals in different situations, it is not yet understood how the phenomenon is triggered by the brain. Here, we performed lesion analysis in neurological FoP patients, supported by an analysis of associated neurological deficits. Our data show that the FoP is an illusory own-body perception with well-defined characteristics that is associated with sensorimotor loss and caused by lesions in three distinct brain regions: temporoparietal, insular, and especially frontoparietal cortex. Based on these data and recent experimental advances of multisensory own-body illusions, we designed a master-slave robotic system that generated specific sensorimotor conflicts and enabled us to induce the FoP and related illusory own-body perceptions experimentally in normal participants. These data show that the illusion of feeling another person nearby is caused by misperceiving the source and identity of sensorimotor (tactile, proprioceptive, and motor) signals of one’s own body. Our findings reveal the neural mechanisms of the FoP, highlight the subtle balance of brain mechanisms that generate the experience of “self” and “other,” and advance the understanding of the brain mechanisms responsible for hallucinations in schizophrenia.

 

Which is the illusion?

There is a nice recent review of the state of play with regard to ‘free will’ (here). I must say that the comments on this blog were very frustrating. They seem to bypass important questions and facts.

  1. Almost everyone seems to believe that determinism and free will are opposites. There are compatibilists who say that free will can be defined so that it is not in opposition to determinism. Fine, but why do this? I don’t like the phrase ‘free will’; I don’t want it saved; I want to be rid of the phrase and its baggage. We do not have to accept determinism either. It is not that one is right and one wrong, nor that both are right; in my opinion they are both wrong.
  2. What is wrong with free will is the insistence that we make conscious decisions. We make decisions, freely in the sense that they cannot be predicted before we make them. But that does not mean they are in any sense conscious at that point. They (at least sometimes) rise into conscious awareness, but that does not mean that they were ‘made consciously’; they were made and then entered consciousness. The decision is ours whether we are aware of it or not, and if we are aware of it, that awareness comes after the decision is made.
  3. Our conscious awareness of the justifications for a decision does not necessarily reflect the real reasons. It is an illusion that we know our actual reasons. We guess, usually correctly but sometimes very incorrectly. Our justification mechanism can be fooled.
  4. Our conscious awareness takes responsibility for any action that appears to be ours, even if it is not. In a situation where we never made a decision or moved a muscle, we can be fooled into being mistakenly aware of doing both.
  5. In order to learn we need not only to remember actions and their outcomes, but also whether we caused the actions or not. We learn by making causal hypotheses. In episodic memory, we remember only the events that reach consciousness. It is important that the fact that we did something involved in an event is remembered along with the event. So we remember decisions as appropriate, but those decisions are not ‘made in memory’ any more than they are ‘made in consciousness’. Without this information about causes, we could not learn from experience.
  6. We are of course responsible for every single thing we do. But we are responsible to an extra degree (some would say morally responsible) if we have taken ownership of that action by labeling it with a ‘decision tag’. Again, we can fool ourselves, and some people are very good at not taking responsibility, or at taking responsibility but fudging the justifications. People can also, through false memory, take responsibility for an action they were not involved in.
  7. Absolutely nothing has been lost. These effects are noticeable only through carefully planned experimental set-ups that are most unnatural. But the experiments can fool this system and bring to light the picture of all thought being unconscious in its construction. This does not mean that we cannot continue to function normally.
  8. Calling what we have ‘free will’ is dangerous. It carries implications that are false. Forgetting ‘free will’ and just talking about decisions is a much better way to go. And given what we know about quantum mechanics (not to mention the practical impossibility of predicting as complex a system as the brain and all that might go into a decision) we should jettison ‘determinism’ too.
  9. The really important change in viewpoint is about the nature of consciousness. Simple consciousness is not an illusion – we have that stream of awareness and we know it. The idea that consciousness is more than an awareness-attention-memory sort of thing is the illusion; conscious mind as opposed to consciousness is an illusion; introspection is an illusion; conscious decision is an illusion; conscious thought is an illusion; a self watching consciousness is an illusion. We do our thinking unconsciously and then, not before, we may or may not be consciously aware of our thoughts. Even in the step-wise linear thinking that appears to be conscious, the creation of each step is still unconscious.

Carving Nature at its joints

If you have done any butchery or even carved the meat at the table, you will understand this metaphor. In order not to hack and end up with a terrible mess, you must follow the actual anatomy of the meat. In particular, the place to separate two bones leaving their muscles attached is at the joint. That is where you cut and break the two bones apart. This was Plato’s metaphor for making valid categories, ones that fit with the underlying ‘anatomy’ of nature.

It seems to me that we are not cutting at the joint in neuroscience. How does a science know if its concepts, categories, technical terms, contrasts/opposites are mirroring nature? Well, strictly speaking, there is no way to know that our categories are in keeping with nature. However, we can tell ways in which they are not. Perfection may not be possible but improvement almost always is. When we have to make room for odd little exceptions, when we can’t use the categories to make good predictions, when they are not easy to use, when they seem fragile to cultural or semantic differences, when they seem part of a slippery slope, when they do not fit with our theories – we have to think again about where the joints might be.

Why should neuroscience be in trouble with its categories? First, it is a very new science. It only really started in the last century; some would say it didn’t get going until the 1980s, and that would be only some 30 years ago. It does not have any overarching theory (not like Relativity, Quantum mechanics, Molecular theory, Plate Tectonics, Cell theory, Evolution and the like). Its territory is more in ignorance than in light. Finding the joints is almost a matter of luck.

Second, it is immensely complex.

Third, neuroscience has inherited a lot of folk psychology; a great burden of Freudian psychology and other older theories; medical terminology and theories to do with mental illness; dated biological theories; attempts to simulate thought with computers; philosophical, legal and religious notions and theories. It is little wonder that agreed categories are next to impossible at the present time.

Take schizophrenia as an example. Most people treat that name as denoting a single disease. But it is more likely to denote a variety of diseases with differing causes, courses, symptoms, treatments and outcomes. There is no reason to accept, and many reasons to doubt, that it is a single disease. So what exactly does a statement like “people suffering from schizophrenia hear voices” mean? Not all schizophrenics hear voices and not everyone who hears voices is schizophrenic. And so it is with most symptoms of this ‘disease’. The same problem dogs ‘autism’ and some other conditions.

Intelligence is also hard to see as a clean category. How can it be measured? Is it one general thing or many specific ones? Which specific ones? Do we know what personality is? Can we agree on subdividing it? What is its relationship to other things? There are so many, many words with such vague meanings. Neuroscience has words acquired from many sources. I read a philosophical paper and I wonder: where do these words touch physical reality? What, I wonder, is a ‘mental state’; could it be a real thing? The popular press and some academics talk of ‘ego’. That is a Freudian concept, and his division of the mind (ego, superego, id) is very clearly not at any ‘joints’. The computer set uses ‘algorithm’; just where are we likely to find algorithms in the brain?

It would seem that the closer a scientist is working to the level of cells and cell assemblies, the more likely they are to see the joints. But they would be less likely to be answering questions that people outside of neuroscience want answered. But unless people want to wade through oceans of muddy water, they may have to wait for answers to ‘important’ questions until after many boring questions have been investigated. My guess would be that the semantic arguments will continue because the words in which people are thinking are not doing a good job of the carving.

 

Why no brain-in-a-vat

A comment on the previous post asked for a discussion of embodied cognition. I will try to express why I find embodied cognition a more attractive model than classic cognition. My natural approach to living things is biological – I just think that way – and if something does not make much sense from a biological standpoint then I am suspicious.

So to start: why don’t all living things have brains? Brains seem to be confined to animals, organisms that move. This makes sense: to move, an organism needs mechanisms for propulsion (muscles for example), mechanisms to sense the environment (eyes for example), and mechanisms for coordinating and planning movement (nervous systems). So we have motor neurons that activate muscles and sensory neurons that sample the environment, and the two are connected in the simplest nervous systems. But all we have in this simple setup is reflexes and habituation. If there are nets of inter-neurons between the motor and sensory ones, then complex actions and thoughts become possible, including learning, memory, a working model of reality, emotion, problem solving and so on (that is, brains). In other words, I picture cognition as coming into being and then being honed by evolution as an integral part of the whole organism: its niche or way of life, its behaviour, its anatomy.

Did the evolutionary process give us a brain that is a general computer? Why would it? There tends to be a loss of anatomy/physiology when they are not particularly useful. For example, moles lost sight because their niche is without light; parasites can lose all functions except nutrition and reproduction. A general computer would be a costly organ so it would only be evolved if it were definitely useful.

Today science does not hold that there are exactly three dimensions but talks of 4, 11½, 37 and so on. We can accept more than 3, believe there are more than 3, but we cannot put ourselves in more than 3 dimensions no matter how we try. Our brain is constructed to create a model of the world with 3 dimensions and that is that. Why? We sense our orientation, acceleration and balance from the semi-circular canals of the inner ear. There are 3 canals and they are at mutual right angles to each other; the physical x, y, z planes are evident in this arrangement. The parts of the brain that carry out the cognitive processes to track orientation, acceleration and balance are built to use signals from the inner ear. It is not a general computing ability that could deal with the mathematics of any number of dimensions – no, it is a task-specific cognitive ability that only deals in 3 dimensions. I think that all our cognitive abilities are like this; they are very sophisticated in what they do but limited to tasks that are useful and matched to what the body and environment can supply.
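
As a small illustration of that point (my own sketch; the canal orientations are an arbitrary orthonormal set, not real anatomy), three mutually perpendicular sensors are exactly enough to encode, and recover, any rotation in three dimensions:

```python
# A small illustration (my own sketch; the canal orientations here are an
# arbitrary orthonormal set) that three mutually perpendicular sensors are
# exactly enough to encode, and recover, any rotation in three dimensions.
import numpy as np

rng = np.random.default_rng(2)
canal_axes, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # three orthonormal directions
head_rotation = np.array([0.3, -1.2, 0.5])                  # some angular velocity (rad/s)

canal_signals = canal_axes.T @ head_rotation    # the component each canal senses
recovered     = canal_axes @ canal_signals      # recombining the three signals

print("canal signals:", np.round(canal_signals, 3))
print("recovered    :", np.round(recovered, 3))             # equals head_rotation
```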

Further, when evolutionary pressures are forcing new behaviours and new modeling of reality, new cognitive abilities are not created from scratch, because changes to old cognitive abilities are faster; they will win the race. Take time for example. Animals usually have circadian rhythms and often seasonal or tidal rhythms too. But to incorporate time into our model of reality would probably require a lot of change if done from scratch. However, we already have an excellent system for incorporating space into our model of reality: the elaborate system of place cells, grid cells, border cells, head-direction cells and so on. So we can just deal with time as if it were space. Many of these re-uses of old abilities can be seen in the metaphors that people use. A whole branch of embodiment research is dedicated to identifying these metaphors in our normal thinking.

This business of re-using one ability to serve other domains brings up the question of ‘grounding’. People often remark on the circularity of dictionaries: each word is defined by other words. As we pile up metaphoric schemes, each an elaboration and re-identification of elements of other metaphors, the situation appears similarly circular and unsupported. But with a dictionary, what is needed is that a few primitive words are defined by pointing at the object. In the same way, each pile of metaphors needs to be grounded in the body. There are primitive schemes that babies are born with or that they learn naturally as they learn to use their bodies. In other words, all the cognitive abilities can be traced back to the nature of the body and environment.

There is one case where it can be shown that the cognition is embodied rather than classic. When a fielder catches a fly ball, the path he runs is that of an embodied method, not a classic one. The fielder makes no calculations or predictions; he simply keeps running in such a way as to keep the image of the ball in the sky in a particular place. He will end up with the ball and his glove meeting along that image line. There are good write-ups of this (here).
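
A hedged check of the geometry behind this (my own sketch, assuming no air resistance and made-up launch numbers): if the fielder stands exactly where the ball will land, the tangent of the ball’s elevation angle rises at a constant rate, and from anywhere else the rate drifts. So “run so that the rise stays steady” steers him to the catch without ever computing a trajectory.

```python
# A check of the geometry (my own sketch, assuming no air resistance and
# made-up launch numbers): stand where the ball will land and the tangent
# of its elevation angle rises at a constant rate; stand elsewhere and the
# rate drifts. "Run so the rise stays steady" therefore steers the fielder
# to the catch without any trajectory prediction.
import numpy as np

g, vx, vz = 9.8, 18.0, 22.0
T = 2 * vz / g                        # flight time
landing = vx * T                      # where the ball comes down

t = np.linspace(0.01, T - 0.01, 200)
ball_x = vx * t
ball_z = vz * t - 0.5 * g * t**2

for fielder_x in (landing, landing + 10.0):
    tan_elev = ball_z / np.abs(fielder_x - ball_x)   # what the fielder sees
    rate = np.gradient(tan_elev, t)
    print(f"fielder at {fielder_x:5.1f} m: d(tan elevation)/dt ranges "
          f"from {rate.min():.3f} to {rate.max():.3f}")
```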

By contrast, classical cognition is seen as isolated and independent from the body and environment, using algorithms to manipulate symbols and capable of running any algorithm (i.e. a general computer). It just does not ring true to me. I see the brain-in-a-vat as about as useful as a car engine in a washing machine. Why would anyone want a brain-in-a-vat? As a thought experiment to support skepticism it is so-so, because, like many philosophical ideas, it is concerned with Truth, capitalized, whereas the brain is not aiming at truth but at appropriate behaviour. A heart can be kept alive on an isolated heart perfusion apparatus and it will beat away and pump a liquid – but to what purpose? Even robots need bodies to really think in a goal-directed, real-time, real-place way, and so they are fitted with motors, cameras, arms and so on. Robots can be embodied.

 

Embodied thinking

TalkingBrains has a posting, “Embodied or Symbolic? Who Cares?” (here). Greg Hickok is asking what exactly is the difference between embodied and symbolic cognition. He takes a nice example of a neurocomputation that is understood, the way a barn owl turns its head to a sound source. If you have not seen it before have a look at the link – it is well explained and easy to follow.

He asks:

Question: what do we call this kind of neural computation? Is it embodied? Certainly it takes advantage of body-specific features, the distance between the two ears (couldn’t work without that!) and I suppose we can talk of a certain “resonance” of the external world with neural activation. In that sense, it’s embodied. On the other hand, the network can be said to represent information in a neural code–the pattern of activity in network of cells–that no longer resembles the air pressure wave that gave rise to it. In fact, we can write a symbolic code to describe the computation of the network.
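
For concreteness, here is a toy version of the computation Hickok describes. It is my own sketch, a crude Jeffress-style coincidence picture with made-up numbers, not the owl’s actual circuitry: the interaural time difference is recovered by finding the internal delay at which the two ear signals line up best.

```python
# A toy version of the computation (my own sketch, a crude Jeffress-style
# coincidence picture with made-up numbers, not the owl's actual circuit):
# the interaural time difference is recovered by finding the internal delay
# at which the two ear signals line up best.
import numpy as np

rng = np.random.default_rng(1)
fs = 40_000                                    # sample rate (Hz)
sound = rng.standard_normal(fs // 10)          # 100 ms of broadband noise

true_itd = 8                                   # sound reaches the left ear 8 samples early
left = sound
right = np.r_[np.zeros(true_itd), sound[:-true_itd]]

lags = np.arange(-20, 21)                      # candidate internal delays
coincidence = [np.dot(left[20:-20], np.roll(right, -lag)[20:-20]) for lag in lags]
best = int(lags[np.argmax(coincidence)])
print(f"estimated ITD: {best} samples ({best / fs * 1e6:.0f} microseconds)")
```

As Hickok says, this is at once embodied (it only works because of the distance between the ears) and easily written down as symbolic code.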

I think, however, that the example is a bit off the subject. Of course there are many examples in the brain of clear computations that could be presented in the form of a computer program or an algorithm for manipulating symbols. And it is generally assumed that the brain manipulates entities that are best called symbols: words, objects, concepts, places and the like. Even the brain’s great ability to work with metaphors is like substituting symbols in schemes that relate a number of symbols in a particular way. Symbols and their manipulation seem useful in understanding the brain. Symbols in the brain, of course, would always be metaphors for actual processes, but then the idea of a symbol is by its nature always a sort of metaphor, standing in for whatever it is a symbol of.

But just because some, or perhaps a great many, processes in the brain can be pictured as manipulations of symbols, in ways akin to algorithms, this does not mean that the brain acts like a general computing device. Embodied cognition is quite clearly computation only in the sense of task-specific processes and architecture, not the actions of a general device. To be understood, the brain has to be seen as an integral part of the body. It is, and does, its part of what the body is and does. The cognitive abilities and facilities of the brain are the ones the body needs to function. If those abilities are sometimes used for arbitrary and abstract things like playing chess, this does not mean that they are not individually ‘grounded’ in the body’s requirements and limitations.

Just because some task could be done in a particular way does not mean that it is done that way. The brain is what it is; metaphors can help us understand its workings, or they can stand in the way of understanding. They do not dictate the nature of the brain. We should always keep in mind that metaphors are somewhat limited tools.