
Echo-location in humans

Humans can echolocate, but it seems the skill can only be fully mastered by the blind. This is because skilled echolocation recruits parts of the visual cortex. A few years ago Thaler et al. published the details (see citation below). Here is their description of this natural ability:

“The enormous potential of this ‘natural’ echolocation ability is realized in a segment of the blind population that has learned to sense silent objects in the environment simply by generating clicks with their tongues and mouths and then listening to the returning echoes. The echolocation click produced by such individuals tends to be short (approximately 10 ms) and spectrally broad. Clicks can be produced in various ways, but it has been suggested that the palatal click, produced by quickly moving the tongue backwards and downwards from the palatal region directly behind the teeth, is best for natural human echolocation. For the skilled echolocator, the returning echoes can potentially provide a great deal of information regarding the position, distance, size, shape and texture of objects.”
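To make the principle concrete, here is a toy sketch (mine, not the authors’; all numbers are illustrative) of how a short broadband click lets a listener recover the distance of an object from the round-trip delay of its echo:

    import numpy as np

    # A ~10 ms broadband click, as described above, plus a fainter echo
    # returning from a hypothetical object 2.5 m away.
    fs = 44_100                                # sample rate (Hz)
    speed_of_sound = 343.0                     # m/s in air
    rng = np.random.default_rng(0)
    click = rng.normal(size=int(0.010 * fs)) * np.hanning(int(0.010 * fs))

    distance = 2.5                             # metres (illustrative)
    delay = round(2 * distance / speed_of_sound * fs)   # round trip, in samples

    recording = np.zeros(fs // 2)
    recording[:click.size] += click                      # the click itself
    recording[delay:delay + click.size] += 0.2 * click   # the quieter echo

    # Recover the delay by cross-correlating the recording with the click,
    # ignoring the direct sound, then convert the delay back to a distance.
    corr = np.correlate(recording, click, mode="valid")
    corr[:click.size] = 0
    est_delay = int(np.argmax(corr))
    print(f"estimated distance: {est_delay / fs * speed_of_sound / 2:.2f} m")

The skilled echolocator is presumably extracting something like this delay (and much more – direction, spectral shape) neurally rather than arithmetically.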

They found that their blind echo-locating subjects (early-blind and late-blind) used visual areas of the cortex to process echo information. When the subjects were played recordings of clicks with and without the resulting echoes, there was activity in the calcarine cortex for the recordings with echoes but not for the echo-free ones, while the auditory cortex responded no differently to the two recordings. There was also activity in other visual areas when subjects listened to echoes reflected by moving objects. The authors conclude that blind echolocation experts use brain regions typically devoted to vision, rather than auditory areas, to process the echoes into a perception of objects in space.

The calcarine cortex has other names: primary visual cortex, striate cortex, visual area V1. It is the area that first receives information from the eye (via the thalamus), and it contains a point-to-point map of the retina. V1 is known for several visual abilities: identifying simple forms such as lines with particular orientations and lengths, aiming the eyes (saccades) towards interesting clues in peripheral vision, and participating in forming images even with the eyes closed. This is the sort of processing area that could be taken over to turn echoes into a spatial image when vision is not using it.

It is likely that our senses are all (to some extent) building 3D models of our surroundings, and they would all contribute to our working model of the world. In particular, what we see, what we hear and what we feel all seem to be part of one reality, not three. This must mean that the models from each sense are fitted together somewhere, or that the models of each sense feed information into each other, or, of course, both. In the end, though, the visual model seems, in our case, to be the more influential part of our working model.

The mechanisms for finding discontinuities in light, and their linear orientation and length, would not be much different from those for finding the same features in echoes. Fitting this sort of information into a perceptual model could use the mechanisms that handle visual lines and objects in sighted people. But is there evidence of this coordination of perception?

Buckingham et al. (see citation below) have looked at this and found “that echolocation is not just a functional tool to help visually-impaired individuals navigate their environment, but actually has the potential to be a complete sensory replacement for vision.” There is an illusion in which the size of an object affects its perceived weight: with boxes weighing exactly the same but differing in size, the smaller boxes feel heavier than the larger ones. This illusion was used to show that the size information, which is usually visual, can be replaced by echolocation information without destroying the illusion.

Here is their abstract: “Certain blind individuals have learned to interpret the echoes of self-generated sounds to perceive the structure of objects in their environment. The current work examined how far the influence of this unique form of sensory substitution extends by testing whether echolocation-induced representations of object size could influence weight perception. A small group of echolocation experts made tongue clicks or finger snaps toward cubes of varying sizes and weights before lifting them. These echolocators experienced a robust size-weight illusion. This experiment provides the first demonstration of a sensory substitution technique whereby the substituted sense influences the conscious.”

Why don’t sighted people echo-locate? I do not believe it has been shown that we don’t. If we do, it is not rendered consciously or used in preference to visual data. But there is no reason to assume that it is not there in the background, helping to form a perceptual working model. For example, if an echo-based edge coincided with an optical edge in V1, it could give additional information about the nature of the edge.

I also think it may be that, in order to simplify auditory perception, our brains suppress the low-level echoes of any sound we make. We would be aware of the sound we made but much less aware of echoes of that sound. The auditory cortex would then be unable to echo-locate, and the visual cortex would be busy with vision (and perhaps some echoes and other sounds) producing a visual model. In this case, we would not consciously hear our echoes and we would not directly consciously ‘see’ them either, although we might be processing them as additions to visual input.

Thaler, L., Arnott, S., & Goodale, M. (2011). Neural Correlates of Natural Human Echolocation in Early and Late Blind Echolocation Experts. PLoS ONE, 6(5). DOI: 10.1371/journal.pone.0020162

Buckingham, G., Milne, J., Byrne, C., & Goodale, M. (2014). The Size-Weight Illusion Induced Through Human Echolocation. Psychological Science. DOI: 10.1177/0956797614561267

Another change in the picture

As I have pointed out in previous postings, there are important new discoveries about the brain every month or so. This time we have a whole new signaling pathway in the brain. This involves new knowledge of biochemistry, physiology and anatomy. This is not a minor addition to knowledge of the brain.

ScienceDaily reports (here) on the paper: (Sakry, Neitz, Singh, Frischknecht, Marongiu, Binamé, Perera, Endres, Lutz, Radyushkin, Trotter, Mittmann; Oligodendrocyte Precursor Cells Modulate the Neuronal Network by Activity-Dependent Ectodomain Cleavage of Glial NG2; PLoS Biology, 2014; 12 (11)).

There are a number of glial cell types. Oligodendrocytes are the glia that myelinate axons by wrapping around them, insulating them and speeding transmission along the axons. They develop from a precursor cell – but this precursor is a widespread, stable and significant (5-8% of cells) population in the brain. OPCs (oligodendrocyte progenitor cells) were shown, a few years ago, to form synapses with neurons and to receive signals from neurons through these synapses, but this was thought to be one-way communication.

“We have now discovered that the precursor cells do not only receive information via the synapses, but in their turn use these to transmit signals to adjacent nerve cells. They are thus an essential component of the network,” explained Professor Jacqueline Trotter… Classically, neurons have been considered as the major players in the brain. Over the past few years, however, increasing evidence has come to light that glial cells may play an equally important role. “Glial cells are enormously important for our brains and we have now elucidated in detail a novel important role for glia in signal transmission,” explained Professor Thomas Mittmann…

A signal from the neuron triggers reactions in the OPC that release a fragment of a protein (NG2) into the local environment, where it acts on neighbouring neurons’ synapses, altering their electrical activity. “The role of NG2 in this process became apparent when the researchers removed the protein: neuronal synaptic function is altered, modifying learning and disrupting the processing of sensory input that manifests in the form of behavioral changes in test animals.”

The way brain networks function is much more complex than our models and less understood than we assumed. I believe there will be many more surprises.

Ways to navigate

When I was a little girl, my father stood me on the door step, pointed across the yard and said, “that’s north”. He went on to say that the house behind me was south, the village was west and the grove of trees was east. To this day when I think of north I see the barn, and so on; my sense of direction is based, even after 70 odd years, on the vision of the farm yard I grew up in. I have a small problem with left and right, but if I just think of facing the barn then left is in the west, towards the village. Until I traveled away from the flat prairies, that was all I needed, and the only skill required was to keep track of where north was. I found later that landmarks were useful, and so was a map.

My husband has his own way of finding his way and never seems to worry about the cardinal directions. He does not seem to keep a continuous, unconscious tally of which way he is facing. His only way of dealing with cardinal directions is to know that the sun is going to be to the south and going from east to west during the day. (This was a problem when he was first in the tropics, where the sun is not always to the south – he could get lost within half a city block.) I had never paid any attention to the sun to know which direction I was going – it had never occurred to me. It is clear to me that there is more than one way to navigate.

A recent paper (citation below) examines types of navigation. Head-direction cells in the entorhinal/subicular area have been known for some time. They appear to be why Alzheimer’s sufferers tend to lose their sense of direction early in the disease: one of the first areas affected is the entorhinal cortex. But head-direction cells alone cannot give navigational accuracy. What is needed is a goal-direction cell to work with the heading signal and keep movement in the direction of the goal. And this directional information has to be framed either in a world view (north, south, east, west) or a self view (left, right, forward, back). The geocentric information appears to be processed in the entorhinal/subicular area, the egocentric information in the precuneus region. Navigation could also be done by following a sequence of visible landmarks, using the place cells of the hippocampus. All of these methods could and would be used, depending on the circumstances.

The researchers looked for goal-direction cells using multivoxel pattern analysis. (This is the method used to guess which video someone was watching, which caused the interest in ‘mind reading’ last year, and, as reported in a previous posting, to distinguish physical from social pain.) They found that the direction of the goal is stored by the same cells as the direction the body is facing. These cells were in the entorhinal/subicular area and geocentric, so the same cells could be used for both heading and goal direction. Exactly how this is done was not clear in this study. “Due to the relatively poor temporal resolution of fMRI, we are not able to determine what the temporal dynamics of head-direction simulation may be. Our assumption is that head-direction populations are initially involved in representing current facing direction and then switch to simulation during navigational planning. However, other temporal dynamics, such as constant oscillation between facing and goal direction, would explain our results equally well. Thus, we remain agnostic regarding the precise temporal dynamics involved in head-direction simulation, which will have to be resolved with alternative methodological approaches.”
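For readers unfamiliar with multivoxel pattern analysis, here is a minimal sketch of the cross-decoding logic – random stand-in data built to embody the paper’s hypothesis, with scikit-learn assumed; none of the numbers or names come from the study:

    import numpy as np
    from sklearn.svm import LinearSVC

    # Stand-in data in which each of four directions has its own voxel
    # pattern, shared between 'facing' trials and 'goal' trials.
    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 100
    direction_codes = rng.normal(size=(4, n_voxels))   # one pattern per direction

    facing_labels = rng.integers(0, 4, n_trials)
    goal_labels = rng.integers(0, 4, n_trials)
    facing_trials = direction_codes[facing_labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
    goal_trials = direction_codes[goal_labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

    # The key test: train on patterns evoked by *facing* a direction, then
    # try to decode the *intended goal* direction of other trials. Transfer
    # above chance (25%) means one shared code for heading and goal.
    clf = LinearSVC(max_iter=5000).fit(facing_trials, facing_labels)
    transfer = (clf.predict(goal_trials) == goal_labels).mean()
    print(f"facing -> goal transfer accuracy: {transfer:.2f} (chance = 0.25)")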

Their findings are relevant to actual navigation. “We found a significant positive correlation between entorhinal/subicular facing direction information and overall task accuracy…. These results therefore show that participants with a stronger representation of current heading direction are both more accurate and faster at making goal direction judgments in this task.”

Here is the abstract: “Navigating to a safe place, such as a home or nest, is a fundamental behavior for all complex animals. Determining the direction to such goals is a crucial first step in navigation. Surprisingly, little is known about how or where in the brain this ‘‘goal direction signal’’ is represented. In mammals, ‘‘head-direction cells’’ are thought to support this process, but despite 30 years of research, no evidence for a goal direction representation has been reported. Here, we used fMRI to record neural activity while participants made goal direction judgments based on a previously learned virtual environment. We applied multivoxel pattern analysis to these data and found that the human entorhinal/subicular region contains a neural representation of intended goal direction. Furthermore, the neural pattern expressed for a given goal direction matched the pattern expressed when simply facing that same direction. This suggests the existence of a shared neural representation of both goal and facing direction. We argue that this reflects a mechanism based on head-direction populations that simulate future goal directions during route planning. Our data further revealed that the strength of direction information predicts performance. Finally, we found a dissociation between this geocentric information in the entorhinal/subicular region and egocentric direction information in the precuneus.”

Chadwick, M., Jolly, A., Amos, D., Hassabis, D., & Spiers, H. (2014). A Goal Direction Signal in the Human Entorhinal/Subicular Region. Current Biology. DOI: 10.1016/j.cub.2014.11.001

What is being humble?

What is humility; what does it mean in folk psychology to be intellectually humble? Is it good or bad? ScienceDaily has an item on a study of this topic (here). The researchers are looking for the real-world definition. “This is more of a bottom-up approach, what do real people think about humility, what are the lay conceptions out there in the real world and not just what comes from the ivory tower. We’re just using statistics to present it and give people a picture of that.”

Being humble is the opposite of being proud. A humble person has a real regard for others and is “not thinking too highly of himself – but highly enough”.

“…analysis found two clusters of traits that people use to explain humility. Traits in the first cluster come from the social realm: Sincere, honest, unselfish, thoughtful, mature, etc. The second and more unique cluster surrounds the concept of learning: curious, bright, logical and aware.” These occur together in the intellectually humble person who appreciates learning from others.

It seems to me that such a person has self-esteem but also has ‘other-esteem’ to coin a phrase. It is not just the opposite of proud but it contrasts with narcissistic and individualistic. The idea of humility would seem to fit well with the Ubuntu philosophy, a very underrated way of approaching life. Other-esteem is important.

Here is the abstract of the paper (Peter L. Samuelson, Matthew J. Jarvinen, Thomas B. Paulus, Ian M. Church, Sam A. Hardy, Justin L. Barrett. Implicit theories of intellectual virtues and vices: A focus on intellectual humility. The Journal of Positive Psychology, 2014; 1):

Abstract: “The study of intellectual humility is still in its early stages and issues of definition and measurement are only now being explored. To inform and guide the process of defining and measuring this important intellectual virtue, we conducted a series of studies into the implicit theory – or ‘folk’ understanding – of an intellectually humble person, a wise person, and an intellectually arrogant person. In Study 1, 350 adults used a free-listing procedure to generate a list of descriptors, one for each person-concept. In Study 2, 335 adults rated the previously generated descriptors by how characteristic each was of the target person-concept. In Study 3, 344 adults sorted the descriptors by similarity for each person-concept. By comparing and contrasting the three person-concepts, a complex portrait of an intellectually humble person emerges with particular epistemic, self-oriented, and other-oriented dimensions.”


The roots of language

If you are not searching for something, you are unlikely to see it. That has been so with language: there was agreement on what language was and how it came to be, and any other way of looking at things was hardly considered. But now language is seen in a different light – as part of the spectrum of animal communication. Recently there have been some very interesting papers – on dogs, birds, monkeys and cows.

Dogs: They understand our speech much as we do. They process the words, or phonemic sounds, separately from the non-word aspects, the prosodic cues. We do this too: we separate the verbal information from the emotional sound envelope. And dogs, like us, do the word-meaning work in the left hemisphere and the tone-of-voice work in the right hemisphere, in similar regions. This implies that the lateralization of aspects of communication is probably an old feature of the mammalian brain. The two abstracts below explain the experimental evidence.

Abstract: (Victoria Ratcliffe, David Reby; Orienting Asymmetries in Dogs’ Responses to Different Communicatory Components of Human Speech; Current Biology, Volume 24, Issue 24, p2908–2912, 15 December 2014) “It is well established that in human speech perception the left hemisphere (LH) of the brain is specialized for processing intelligible phonemic (segmental) content, whereas the right hemisphere (RH) is more sensitive to prosodic (suprasegmental) cues. Despite evidence that a range of mammal species show LH specialization when processing conspecific vocalizations, the presence of hemispheric biases in domesticated animals’ responses to the communicative components of human speech has never been investigated. Human speech is familiar and relevant to domestic dogs (Canis familiaris), who are known to perceive both segmental phonemic cues and suprasegmental speaker-related and emotional prosodic cues. Using the head-orienting paradigm, we presented dogs with manipulated speech and tones differing in segmental or suprasegmental content and recorded their orienting responses. We found that dogs showed a significant LH bias when presented with a familiar spoken command in which the salience of meaningful phonemic (segmental) cues was artificially increased but a significant RH bias in response to commands in which the salience of intonational or speaker-related (suprasegmental) vocal cues was increased. Our results provide insights into mechanisms of interspecific vocal perception in a domesticated mammal and suggest that dogs may share ancestral or convergent hemispheric specializations for processing the different functional communicative components of speech with human listeners.”

Abstract: (Attila Andics, Márta Gácsi, Tamás Faragó, Ádám Miklósi; Voice-Sensitive Regions in the Dog and Human Brain Are Revealed by Comparative fMRI; Current Biology, Volume 24, Issue 5, p574–578, 3 March 2014) “During the approximately 18–32 thousand years of domestication, dogs and humans have shared a similar social environment. Dog and human vocalizations are thus familiar and relevant to both species, although they belong to evolutionarily distant taxa, as their lineages split approximately 90–100 million years ago. In this first comparative neuroimaging study of a nonprimate and a primate species, we made use of this special combination of shared environment and evolutionary distance. We presented dogs and humans with the same set of vocal and nonvocal stimuli to search for functionally analogous voice-sensitive cortical regions. We demonstrate that voice areas exist in dogs and that they show a similar pattern to anterior temporal voice areas in humans. Our findings also reveal that sensitivity to vocal emotional valence cues engages similarly located nonprimary auditory regions in dogs and humans. Although parallel evolution cannot be excluded, our findings suggest that voice areas may have a more ancient evolutionary origin than previously known.”

It has also been shown that some dogs (border collies) can learn a remarkable number of words – many hundreds of names for toy objects, plus some verbs and adjectives. This implies that the structures in our language are not unique. Objects, proper names, actions and attributes are all aspects of our perception of the world and seem to be basic to the mammalian brain’s way of thinking. The idea of an agent causing a change is how a working border collie earns its keep. Nothing new here – these are old architectural features of the brain that language appears to have harnessed.

Birds: Recently more than 100 researchers, using nine supercomputers, analyzed the genomes of 48 species of birds. The results have just been published in 28 papers appearing together in various journals, and there is now a complete outline of the bird family tree. There is a similarity between our genes and those of the bird groups that have vocal abilities, and behaviorally there are similarities in the learning of vocalizations. Besides ourselves, vocal learners include dolphins, sea lions, bats and elephants among mammals, and parrots and hummingbirds as well as the songbirds among birds. The genetic similarity is found in 55 genes shared by us and songbirds, a pattern found only in vocal learners.

Scientific American reviewed this research (here): “The similarity of the gene networks needed for vocal learning between humans and birds is not completely surprising. After all, all vocal-learning species can trace their ancestry back to the same basal branches on the tree of life, White says. Even though the ability evolved independently, it was influenced by a similar initial deal from the genetic deck of cards. Also, the broadly similar environment of this Earth created the evolutionary pressures that shape vocal learners. Just as multiple species came up with similar solutions to the problem of vision, species that evolved vocal learning seem to have settled on common strategies. Viewed from another angle, however, the convergence is striking. “This, to my knowledge, is the first time a learned behavior has been shown to have so much similar molecular underpinnings,” White says. The discoveries open up a host of potential avenues for future exploration: Can nonvocal learners acquire some traits needed for vocal learning simply by tweaking some key genes? Almost certainly, zebra finches have more to tell us about our own ability to babble, shout and sing.”

Monkeys: We have been told that monkeys’ use of calls is nothing like language, because the calls are fixed, neither learned nor elaborated. But a new study examines the differences in the use of the same calls by the same species in different places. The differences can be explained by established human-language mechanisms: when two words compete and one (A) has a more specific meaning while the other (B) has a general meaning, then (B)’s meaning will shift so that it excludes (A) and covers only the other instances of the general meaning. There is a rudimentary ‘primate linguistics’, and it is not so unlike language after all. Here is the abstract.

Abstract: (Philippe Schlenker, Emmanuel Chemla, Kate Arnold, Alban Lemasson, Karim Ouattara, Sumir Keenan, Claudia Stephan, Robin Ryder, Klaus Zuberbühler; Monkey semantics: two ‘dialects’ of Campbell’s monkey alarm calls; Linguistics and Philosophy, 2014; 37 (6)) “We develop a formal semantic analysis of the alarm calls used by Campbell’s monkeys in the Tai forest (Ivory Coast) and on Tiwai island (Sierra Leone)—two sites that differ in the main predators that the monkeys are exposed to (eagles on Tiwai vs. eagles and leopards in Tai). Building on data discussed in Ouattara et al. (PLoS ONE 4(11):e7808, 2009a; PNAS 106(51): 22026–22031, 2009b and Arnold et al. (Population differences in wild Campbell’s monkeys alarm call use, 2013), we argue that on both sites alarm calls include the roots krak and hok, which can optionally be affixed with -oo, a kind of attenuating suffix; in addition, sentences can start with boom boom, which indicates that the context is not one of predation. In line with Arnold et al., we show that the meaning of the roots is not quite the same in Tai and on Tiwai: krak often functions as a leopard alarm call in Tai, but as a general alarm call on Tiwai. We develop models based on a compositional semantics in which concatenation is interpreted as conjunction, roots have lexical meanings, -oo is an attenuating suffix, and an all-purpose alarm parameter is raised with each individual call. The first model accounts for the difference between Tai and Tiwai by way of different lexical entries for krak. The second model gives the same underspecified entry to krak in both locations (= general alarm call), but it makes use of a competition mechanism akin to scalar implicatures. In Tai, strengthening yields a meaning equivalent to non-aerial dangerous predator and turns out to single out leopards. On Tiwai, strengthening yields a nearly contradictory meaning due to the absence of ground predators, and only the unstrengthened meaning is used.”
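The competition mechanism in their second model can be made concrete with a toy sketch. The sets of situations and lexical entries below are my hypothetical simplifications, not the paper’s formal semantics:

    # Situations the monkeys might need to signal, per site (simplified).
    TAI = {"leopard", "eagle", "minor disturbance"}
    TIWAI = {"eagle", "minor disturbance"}

    # Meanings as sets of situations: krak is a general alarm everywhere,
    # hok is specifically aerial, krak-oo is the attenuated (-oo) form.
    def meaning(call, world):
        lexicon = {
            "krak": set(world),
            "hok": {"eagle"} & world,
            "krak-oo": {"minor disturbance"} & world,
        }
        return lexicon[call]

    def strengthened(call, world, alternatives):
        """Competition akin to scalar implicature: remove from a call's
        meaning whatever a strictly more specific competitor covers."""
        result = set(meaning(call, world))
        for alt in alternatives:
            if meaning(alt, world) < meaning(call, world):  # strictly more specific
                result -= meaning(alt, world)
        return result

    print(strengthened("krak", TAI, ["hok", "krak-oo"]))    # {'leopard'}
    print(strengthened("krak", TIWAI, ["hok", "krak-oo"]))  # set()

On Tiwai the strengthened meaning comes out empty (‘nearly contradictory’), so the general, unstrengthened reading survives – just as the abstract describes.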

Cows: ScienceDaily reported (here) on a press release, “Do you speak cow?”, about research led by Monica Padilla de la Torre from Queen Mary University of London. “They identified two distinct maternal ‘calls’. When cows were close to their calves, they communicated with them using low frequency calls. When they were separated — out of visual contact — their calls were louder and at a much higher frequency. Calves called out to their mothers when they wanted to start suckling. And all three types of calls were individualized — it was possible to identify each cow and calf using its calls.”

Many animals have been shown to recognize other individuals and to identify themselves vocally. But it is still a surprise that an animal like a cow has ‘names’. It could be a general ability among mammals.

Work like this on other animals is likely to further illustrate the roots of our language. But it takes looking, rather than accepting the idea that our language has no roots to be found in other animals.


All pain is not the same

A popular illustration of embodied cognition is the notion that physical pain and social pain share the same neural mechanism. The researchers who first published this relationship have now published a new paper finding that the two types of pain do not overlap in the brain but are just close neighbours – close enough to have appeared together on the original fMRI scans – and that their patterns of activity are different. The data have not changed, but a new method of analyzing them has produced a much clearer picture.

Neuroskeptic has a good blog post on this paper and observes: “Woo et al. have shown commendable scientific integrity in being willing to change their minds and update their theory based on new evidence. That sets an excellent example for researchers.” Have a look at the Neuroskeptic post (here).

It would probably be wise for other groups to re-examine, using multivariate analysis, similar data they have previously published.


Abstract of the paper (Woo CW, Koban L, Kross E, Lindquist MA, Banich MT, Ruzic L, Andrews-Hanna JR, & Wager TD (2014). Separate neural representations for physical pain and social rejection. Nature Communications, 5. PMID: 25400102):

“Current theories suggest that physical pain and social rejection share common neural mechanisms, largely by virtue of overlapping functional magnetic resonance imaging (fMRI) activity. Here we challenge this notion by identifying distinct multivariate fMRI patterns unique to pain and rejection. Sixty participants experience painful heat and warmth and view photos of ex-partners and friends on separate trials. FMRI pattern classifiers discriminate pain and rejection from their respective control conditions in out-of-sample individuals with 92% and 80% accuracy. The rejection classifier performs at chance on pain, and vice versa. Pain- and rejection-related representations are uncorrelated within regions thought to encode pain affect (for example, dorsal anterior cingulate) and show distinct functional connectivity with other regions in a separate resting-state data set (N=91). These findings demonstrate that separate representations underlie pain and rejection despite common fMRI activity at the gross anatomical level. Rather than co-opting pain circuitry, rejection involves distinct affective representations in humans.”
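The logic of the new analysis can be sketched in a few lines – toy data in which the two signals occupy the same voxels but lie along orthogonal multivariate directions, so they are distinguishable by pattern even though they overlap anatomically. Everything here is illustrative, not the authors’ pipeline:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, v = 300, 100                       # trials, voxels (illustrative)
    pain_axis = rng.normal(size=v)        # direction of the pain pattern
    rej_axis = rng.normal(size=v)
    rej_axis -= pain_axis * (pain_axis @ rej_axis) / (pain_axis @ pain_axis)  # orthogonalize

    pain_labels = rng.integers(0, 2, n)   # 1 = painful heat, 0 = warmth
    rej_labels = rng.integers(0, 2, n)    # 1 = ex-partner photo, 0 = friend
    pain_trials = np.outer(pain_labels, pain_axis) + rng.normal(scale=3.0, size=(n, v))
    rej_trials = np.outer(rej_labels, rej_axis) + rng.normal(scale=3.0, size=(n, v))

    # A classifier trained on pain discriminates held-out pain trials well,
    # but performs at chance on rejection trials: separate representations.
    clf = LogisticRegression(max_iter=2000).fit(pain_trials[:200], pain_labels[:200])
    print("pain -> pain:     ", clf.score(pain_trials[200:], pain_labels[200:]))
    print("pain -> rejection:", clf.score(rej_trials[200:], rej_labels[200:]))  # ~0.5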


Agency and intention

Nautilus has a post (here) by Matthew Hutson that is a very interesting review of the connection between our perception of time and our perception of causation. If we believe that two events are causally related, we perceive less time between them than a clock would register; and if we believe the events are not causally connected, the perceived time between them increases. The same holds on the other side of the coin: if we perceive a shorter time between two events, we are more likely to believe they are causally connected; and if the time between them is longer, it is harder for us to believe they are causally related. This effect is called intentional binding. The article describes the important experiments that underpin this concept.

But intentional binding is part of a larger concept. How is our sense of agency created, and why? To learn how to do things in this world, we have to know what we set in motion and what was caused by something other than ourselves. Our memory of an event has to be marked as caused by us, if it was, in order to be useful in future situations. As our memory of an event is based on our consciousness of it, our consciousness must reflect whether we caused the outcome. So the question becomes: how do our brains make the call to mark an event as our own agency? If the actual ‘causing’ were a conscious process, there would be no need for a procedure to establish whether we were the agents of the action. However, there is such a procedure.

I wrote about this previously (here) in looking at Chapter 1 of ‘The New Unconscious’, ‘Who is the Controller of Controlled Processes?’. What needs to happen for us to feel that we have willed an action? We have to believe that thoughts which reach our consciousness have caused our actions. Three things are needed for us to make a causal connection between the thoughts and the actions (a toy sketch of the three conditions follows the list):

  1. priority

The thought has to reach consciousness before the action if it is to appear to be a cause. Actually it must occur quite close to the action, within about 30 sec. before it. Wegner and Wheatley investigated this principle with fake thoughts fed through earphones and fake actions gently forced by equipment, giving people the feeling that their thought caused their action.

  2. consistency

The thought has to be about the action in order for it to appear to be the cause. Wegner, Sparrow and Winerman used a mirror so that a subject saw the hands of another person standing behind them instead of their own. If the thoughts fed to the subject through earphones matched the hand movements then the subject experienced willing the movements. If the earphones gave no ‘thoughts’ or contradictory ones, there was no feeling of will.

  3. exclusivity

The thought must be the only apparent source of a cause for the action. If another cause that seems more believable is available, it will be used. The feeling of will can disappear when the subject is in a trance and feels controlled by another agent such as a spirit.
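Here is the toy sketch promised above: the three conditions rendered as a single decision rule. This is my own illustrative encoding of Wegner’s model, not anything from the book:

    def feels_willed(seconds_before_action, thought_matches_action, rival_cause_present):
        """Toy rendering of Wegner's conditions for apparent mental causation."""
        priority = 0 < seconds_before_action <= 30   # thought shortly before the action
        consistency = thought_matches_action         # thought is about the action
        exclusivity = not rival_cause_present        # no more believable cause on offer
        return priority and consistency and exclusivity

    # The trance example: thought and timing fit, but a rival cause
    # (a controlling 'spirit') removes the feeling of will.
    print(feels_willed(2, True, False))   # True  - experienced as willed
    print(feels_willed(2, True, True))    # False - attributed to the other agent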

Also previously (here) I discussed a report in Science, “Movement Intention after Parietal Cortex Stimulation in Humans”, by M. Desmurget and others, with the following summary:

“Parietal and premotor cortex regions are serious contenders for bringing motor intentions and motor responses into awareness. We used electrical stimulation in seven patients undergoing awake brain surgery. Stimulating the right inferior parietal regions triggered a strong intention and desire to move the contralateral hand, arm, or foot, whereas stimulating the left inferior parietal region provoked the intention to move the lips and to talk. When stimulation intensity was increased in parietal areas, participants believed they had really performed these movements, although no electromyographic activity was detected. Stimulation of the premotor region triggered overt mouth and contralateral limb movements. Yet, patients firmly denied that they had moved. Conscious intention and motor awareness thus arise from increased parietal activity before movement execution.”

The feeling of agency is not something that we can change even if we believe it is not true. Here is Rodolfo Llinas describing an experiment that he conducted on himself, which I discussed previously (here); it comes from a video interview with him (video). There are many interesting ideas in this hour-long discussion. The part I am quoting from the transcript is Llinas’ self-experimentation on the subject of free will.

“…I understand that free will does not exist; I understand that it is the only rational way to relate to each other, this is to assume that it does, although we deeply know that it doesn’t. Now the question you may ask me is how do you know? And the answer is, well, I did an actually lovely experiment on myself. It was extraordinary really. There is an instrument used in neurology called a magnetic stimulator…its an instrument that has a coil that you put next to the top of the head and you pass a current such that a big magnetic field is generated that activates the brain directly, without necessary to open the thing. So if you get one of these coils and you put it on top of the head, you can generate a movement. You put it in the back, you see a light, so you can stimulate different parts of the brain and have a feeling of what happens when you activate the brain directly without, in quotes, you doing it. This of course is a strange way of talking but that’s how we talk. So I decide to put it on the top of the head where I consider to be the motor cortex and stimulate it and find a good spot where my foot on the right side would move inwards. It was *pop* no problem. And we did it several time and I tell my colleague, I know anatomy, I know physiology, I can tell you I’m cheating. Put the stimulus and then I move, I feel it, I’m moving it. And he said well, you know, there’s no way to really know. I said, I’ll tell you how I know. I feel it, but stimulate and I’ll move the foot outwards. I am now going to do that, so I stimulate and the foot moves inwards again. So I said but I changed my mind. Do it again. So I do it half a dozen times… (it always moved inward)…So I said, oh my god, I can’t tell the difference between the activity from the outside and what I consider to be a voluntary movement. If I know that it is going to happen, then I think I did it, because I now understand this free will stuff and this volition stuff. Volition is what’s happening somewhere else in the brain, I know about and therefore I decide that I did it…In other words, free will is knowing what you are going to do. That’s all.”

Crows

I think it is time to look at crows again. There are three interesting papers I want to comment on. What reminded me of crows is that I stumbled across a blog post, a few years old, by a linguist (he has probably changed his tune – so no references) who ridiculed the idea that birds were at all smart because they had tiny brains with no ‘higher’ brain anatomy. He was unwilling to take seriously any of the work of Pepperberg with her parrot Alex. How the climate has changed in a few years.

The most recent paper is reviewed in ScienceDaily (here): Martinho, Burns, von Bayern, Kacelnik. “Monocular Tool Control, Eye Dominance, and Laterality in New Caledonian Crows.” Current Biology, 2014. It deals with the seeming ‘handedness’ in the way crows hold tools. It is actually ‘eyedness’: the crows hold the tool on one side of the beak so that they see the end of the tool and the target with their preferred eye. New Caledonian crows have unusually forward-facing eyes and a substantial area of binocular vision. The researchers found that the crows use a monocular part of the opposite-side eye to see clearly when using a tool. This implies that they are anatomically adapted to tool use. “In other words, the birds are using their notable binocular vision for better monocular vision, allowing each eye to see further toward the other side of the beak. The birds’ unusually wide binocular field is among the first known examples of a physical adaptation to enable tool use, the researchers say.”

In another paper from the spring (citation below), Jelbert and others investigate the extent of New Caledonian crows’ understanding of how to displace water to retrieve a reward. Wild crows, after short training, were tested on six Aesop’s-fable-type tasks. They could solve four of them: dropping stones into water-filled but not sand-filled tubes, dropping sinking rather than floating objects and solid rather than hollow ones, and dropping into tubes with higher water levels. They failed to solve two of them: tubes of different diameter and U-shaped tubes. The results show an understanding of the causal idea of volume displacement at about the level of a 5-7 year-old child. “These results are striking as they highlight both the strengths and limits of the crows’ understanding. In particular, the crows all failed a task which violated normal causal rules, but they could pass the other tasks, which suggests they were using some level of causal understanding when they were successful.”

Last year there was a paper reviewed by ScienceDaily (here): Veit, Nieder. “Abstract rule neurons in the endbrain support intelligent behaviour in corvid songbirds.” Nature Communications, 2013; 4. This paper dealt with how crows make strategic decisions. As crows do many things that are thought of as primate strengths and yet have a very different brain architecture, this is a way to look at intelligence in a fundamental way that would apply to both primates and crows.

Crows were trained to do a memory test. They were shown an image on a computer screen, had to remember it, and later had to pick one of two images on the screen. The hard part was that sometimes they had to pick the image that was the same as the remembered one and at other times the one that was different; they had to switch back and forth between two rules-of-the-game. They managed this mental flexibility, which takes effort even for humans. While the birds were engaged in this task, the nidopallium caudolaterale area of their brains was monitored. One group of cells was active for the ‘different image’ rule and another for the ‘same image’ rule.
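A minimal sketch of the trial structure described above – my own reconstruction for illustration, with invented image names:

    import random

    IMAGES = ["waves", "dots", "grid", "rings"]

    def trial(rule, rng):
        """One delayed-choice trial: a sample image, a delay, then two images."""
        sample = rng.choice(IMAGES)
        distractor = rng.choice([i for i in IMAGES if i != sample])
        correct = sample if rule == "same" else distractor
        return sample, sorted([sample, distractor]), correct

    rng = random.Random(1)
    for rule in ["same", "different", "same", "different"]:  # the rule switches
        sample, choices, correct = trial(rule, rng)
        print(f"rule={rule}: remember {sample}, shown {choices}, peck {correct}")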

Crows and primates have different brains, but the cells regulating decision-making are very similar. They represent a general principle that has re-emerged throughout the history of evolution. “Just as we can draw valid conclusions on aerodynamics from a comparison of the very differently constructed wings of birds and bats, here we are able to draw conclusions about how the brain works by investigating the functional similarities and differences of the relevant brain areas in avian and mammalian brains.”

Citation: Sarah A. Jelbert, Alex H. Taylor, Lucy G. Cheke, Nicola S. Clayton, Russell D. Gray. Using the Aesop’s Fable Paradigm to Investigate Causal Understanding of Water Displacement by New Caledonian Crows. PLoS ONE, 2014; 9 (3): e92895 DOI: 10.1371/journal.pone.0092895

Veit, L., & Nieder, A. (2013). Abstract rule neurons in the endbrain support intelligent behaviour in corvid songbirds. Nature Communications, 4. DOI: 10.1038/ncomms3878

Reading patterns

There is a paper (citation below) that takes a different look at language. It attempts to examine what happens in the brain when we read a story. There is the act of reading, the processing of the language, and the engagement in the story, all going on at the same time.

“One of the main questions in the study of language processing in the brain is to understand the role of the multiple regions that are activated in response to reading. A network of multiple brain regions have been implicated in language, and while the view of the field started with a simplistic dissociation between the roles of Broca’s area and Wernicke’s area, the current theories about language comprehension are more complex and most of them involve different streams of information that involve multiple regions (including Broca’s and Wernicke’s).” By studying sub-processes in isolation, previous studies have produced a confused picture. These researchers changed the method and looked at all parts of the brain at the same time during normal, natural reading (reading a chapter of a Harry Potter book). “We extract from the words of the chapter very diverse features and properties (such as semantic and syntactic properties, visual properties, discourse level features) and then examine which brain areas have activity that is modulated by the different types of features, leading us to distinguish between brain areas on the basis of which type of information they represent.”

This is unlike the usual method of finding the areas of the brain that change the most (those that ‘light up’ or ‘go dark’) during some activity or process; here what is being tracked is changes in pattern. They used a program trained to predict the fMRI activation pattern for a piece of text, using training passages in which each word was tagged with 195 features (size, part of speech, role in the parsed sentence, emotion, involvement of a particular character, and the like). The program uses brain-wide patterns, not the activity of individual areas. “The model makes predictions of the fMRI activation for an arbitrary text passage, by capturing how this diverse set of information contributes to the neural activity, then combining these diverse neural encodings into a single prediction of brain-wide fMRI activity over time. Our model not only accounts for the different levels of processing involved in story comprehension; it goes further by explicitly searching for the brain activity encodings for individual stimuli such as the mention of a specific story character, the use of a specific syntactic part-of-speech or the occurrence of a given semantic feature. … It has not been shown previously that one could model in detail the rapidly varying dynamics of brain activity with fMRI while reading at a close to normal speed.”
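The modelling step can be sketched as a simple encoding model: regress brain-wide activity onto per-word features, then identify a held-out passage by which predicted activity best matches the observed scan. The sketch below uses random stand-ins and ridge regression; it shows the general idea, not the authors’ actual pipeline:

    import numpy as np
    from sklearn.linear_model import Ridge

    # Stand-ins: X holds the 195 tagged features of the words read in each
    # fMRI time window; Y holds the brain-wide voxel activations.
    rng = np.random.default_rng(0)
    n_timepoints, n_features, n_voxels = 300, 195, 1000
    X = rng.normal(size=(n_timepoints, n_features))
    Y = X @ rng.normal(size=(n_features, n_voxels)) + rng.normal(scale=0.5, size=(n_timepoints, n_voxels))

    model = Ridge(alpha=1.0).fit(X[:250], Y[:250])   # fit on most of the story

    # Which of two held-out passages is being read? Pick the one whose
    # predicted activity correlates better with the observed scan.
    def match(observed, predicted):
        return np.corrcoef(observed.ravel(), predicted.ravel())[0, 1]

    observed = Y[250:275]                            # scan recorded for passage A
    score_a = match(observed, model.predict(X[250:275]))
    score_b = match(observed, model.predict(X[275:300]))
    print("classified passage:", "A" if score_a > score_b else "B")

The paper reports 74% accuracy for this kind of two-passage identification on real data.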

Many of the results of the natural reading while being scanned are not surprising. But there are some very interesting insights. We think of language, especially syntax, as being primarily a left hemisphere function. “The strong right temporal representation of syntax that we found was not expected. Indeed we did not find other papers that report the large right hemisphere representation of sentence structure or syntax that we obtain. One reason might be that our syntax features are unique: whereas most experiments have approximated syntactic information in terms of processing load (length of constituents, hard vs easy phrase structure etc.) we model syntax and structure using a much more detailed set of features. Specifically, our model learns distinct neural encodings for each of 46 detailed syntax features including individual parts of speech (adjectives, determiners, nouns, etc.), specific substructures in dependency parses (noun modifiers, verb subjects, etc.), and punctuation. Earlier studies considering only increases or decreases in activity due to single contrasts in syntactic properties could not detect detailed neural encodings of this type. We hypothesize that these regions have been previously overlooked.”

There have been questions in the past about how connected syntactic and semantic processing are. “The question whether the semantics and syntactic properties are represented in different location has been partially answered by our results. There seems to be a large overlap in the areas in which both syntax and semantics are represented.”


The characters’ actions seem to engage areas of imagined action. But dialog may make special demands. “Presence of dialog among story characters was found to modulate activity in many regions in the bilateral temporal and inferior frontal cortices; one plausible hypothesis is that dialog requires additional processing in the language regions. More interestingly, it seems like presence of dialog activates the right temporo-parietal junction, a key theory of mind region. This observation raises an exciting hypothesis to pursue: that the presence of dialog increases the demands for perspective interpretation and recruits theory of mind regions.”

This is a great step forward in studying language in the context of actual communication.

Abstract:

“Story understanding involves many perceptual and cognitive subprocesses, from perceiving individual words, to parsing sentences, to understanding the relationships among the story characters. We present an integrated computational model of reading that incorporates these and additional subprocesses, simultaneously discovering their fMRI signatures. Our model predicts the fMRI activity associated with reading arbitrary text passages, well enough to distinguish which of two story segments is being read with 74% accuracy. This approach is the first to simultaneously track diverse reading subprocesses during complex story processing and predict the detailed neural representation of diverse story features, ranging from visual word properties to the mention of different story characters and different actions they perform. We construct brain representation maps that replicate many results from a wide range of classical studies that focus each on one aspect of language processing and offer new insights on which type of information is processed by different areas involved in language processing. Additionally, this approach is promising for studying individual differences: it can be used to create single subject maps that may potentially be used to measure reading comprehension and diagnose reading disorders.”

Wehbe, L., Murphy, B., Talukdar, P., Fyshe, A., Ramdas, A., & Mitchell, T. (2014). Simultaneously Uncovering the Patterns of Brain Regions Involved in Different Story Reading Subprocesses. PLoS ONE, 9(11). DOI: 10.1371/journal.pone.0112575

Fluid, flow, zone and zen

So we have conscious and unconscious, type 1 and type 2 cognitive processes, default and task-related modes, fluid intelligence, being in the flow, being in the zone and the Zen mind. I am wondering which of these are really the same thing, just expressed in different semantic frameworks – what might actually be the same physical thing seen from a different viewpoint. I suspect that these are all ways of expressing various aspects of how we use, or fail to use, unconscious cognition.

There was an interesting Scientific American blog post (here) by SB Kaufman last January, looking at the relationship between fluid reasoning and working memory. Fluid reasoning works across all domains of intelligence and uses very little prior knowledge, expertise or practice to build relationships, patterns and inferences. How much it depends on working memory is governed by speed: if the fluid reasoning is done quickly, it requires good working memory; but it can be done slowly with less need for working memory. Is this the difference between quick and deep thinkers, both described as intelligent?

Fluid reasoning does not fit nicely with the two types of cognitive processes: type 1 (intuitive, fast, automatic, unconscious, effortless, contextualized, error-prone) and type 2 (reflective, slow, deliberate, cogitative, effortful, decontextualized, normatively correct). As type 2 is typified as using working memory and type 1 as not using it, there is an implication that when speed is required for fluid reasoning, more working memory is required, and therefore the thinking leans towards type 2 processing – the slower of the two. It is a bit of a paradox. Perhaps what sets fluid reasoning apart is the type of problem rather than the type of process. Maybe the two types of process are ends of a spectrum rather than opposites. Let’s imagine the reasoning as little spurts of type 1 processing feeding a type 2 use of working memory. This gives a spectrum: at one end, continuous type 1 thinking, with working memory and consciousness involved only at the beginning and the end; at the other end, a continuous back and forth, as working memory steps through a solution. Let’s also imagine that there is little control of efficiency in the type 1 work: the unconscious does not necessarily stick to a plan, while the use of working memory almost dictates a step-wise method. Fluid problems, which occur in areas with little expertise, knowledge and practice, may tax type 1 reasoning unless it is closely monitored and controlled with working memory. A step-wise plan may restrict and slow down progress on a well-practiced task; not having such a plan may overwhelm an unfamiliar task with irrelevant detail and slow it down. There may, for any situation, be an optimal amount of type 2 control of type 1 free-wheeling speed.

People talking about ‘flow’ and ‘zone’ tend to acknowledge the similarity between the two concepts. But flow seems less concentrated and describes a way of living and especially of working, while zone seems to describe short periods of more intense activity, as in a sport. These are almost the opposite of fluid reasoning, in that neither flow nor zone can be achieved without first acquiring skill (expertise, knowledge and practice are basic). This seems to be type 1 processing at its best. In fact, one way to lose the zone is to try to think about or consciously control what you are doing. That is how to choke.

Mihály Csíkszentmihályi has documented flow for most of his career. His theory of Flow has three conditions for achieving the flow state: be involved in an activity with a clear set of goals and progress (direction and structure); have clear and immediate feedback to allow change and adjustment; have balance between the perceived challenges and perceived skills (confidence in one’s ability for the task). The person in flow is experiencing the present moment, a sense of control, a loss of sense of time and of self-consciousness, with a feeling of great reward and enjoyment. There is an automatic connection of action and perception and an effortless relaxation, but still a feeling of control.

Young and Pain have studied being ‘in the zone’. It is described as “a state in which an athlete performs to the best of his or her ability. It is a magical and…special place where performance is exceptional and consistent, automatic and flowing. An athlete is able to ignore all the pressures and let his or her body deliver the performance that has been learned so well. Competition is fun and exciting.” Athletes describing ‘in the zone’ moments mention: “clear inner process”, “felt all together”, “awareness of power”, “clear focus”, “strong sense of self”, “free from outer restrictions”, “need to complete”, “absorption”, “intention”, “process ‘clicked’”, “personal understanding & expression”, “actions & thoughts spontaneous”, “event was practiced”, “performance”, “fulfillment”, “intrinsic reward”, “loss of self”, “spiritual”, “loss of time and space”, “unity of self and environment”, “enjoyed others”, “prior related involvement”, “fun”, “action or behavior”, “goals and structure”. Zone seems more intense and more identified with a very particular event than flow.

The hallmark of both flow and zone is that the unconscious, fully equipped and practiced, appears to be in charge, doing the task well and effortlessly. The other thing to note is that the task-related mode is being used and not the default mode: introspection, memory and imagination are taking second place.

The flow/zone way of acting is even more extreme in some Eastern religious practices and also a few Western ones. The pinnacle of this is perhaps the Zen states of mind. One in particular is like zone. “Mushin means “Without Mind” and it is very similar in practice to the Chinese Taoist principle of wei wuwei. Of all of the states of mind, I think not only is working toward mastery of mushin most important, it’s also the one most people have felt at some point in time. In sports circles, mushin is often referred to as “being in the zone”. Mushin is characterized by a mind that is completely empty of all thoughts and is existing purely in the current moment. A mind in mushin is free from worry, anger, ego, fear or any other emotions. It does not plan, it merely acts. If you’ve ever been playing a sport and you got so into it you stopped thinking about what you were doing and just played, you’ve experienced mushin.” I find the use of ‘mind’ with this meaning misleading, but it is clear in the context that they are referring to just the conscious part of the mind when they use the word. It could be replaced with the word ‘consciousness’ without changing the meaning.

In summary, unconscious control of tasks that have been extremely well learned (the learning likely requires conscious thought) leads to states of mind that are valued: very skilled, effortless and agreeable. The default mode is suppressed, and the self recedes in importance, as do past and future, because introspection, recall of past events and dreaming of future ones require the default mode. It is not an all-or-nothing thing but one of degree.