What is being humble?

What is humility; what does it mean in folk psychology to be intellectually humble? Is it good or bad? ScienceDaily has an item on a study of this topic (here). The researchers are looking for the real-world definition. “This is more of a bottom-up approach, what do real people think about humility, what are the lay conceptions out there in the real world and not just what comes from the ivory tower. We’re just using statistics to present it and give people a picture of that.”

Being humble is the opposite of being proud. A humble person has a real regard for others and is “not thinking too highly of himself – but highly enough”.

“...analysis found two clusters of traits that people use to explain humility. Traits in the first cluster come from the social realm: Sincere, honest, unselfish, thoughtful, mature, etc. The second and more unique cluster surrounds the concept of learning: curious, bright, logical and aware.” These occur together in the intellectually humble person who appreciates learning from others.

It seems to me that such a person has self-esteem but also has ‘other-esteem’, to coin a phrase. Humility is not just the opposite of pride; it also contrasts with narcissism and individualism. The idea of humility would seem to fit well with the Ubuntu philosophy, a very underrated way of approaching life. Other-esteem is important.

Here is the abstract of the paper (Peter L. Samuelson, Matthew J. Jarvinen, Thomas B. Paulus, Ian M. Church, Sam A. Hardy, Justin L. Barrett. Implicit theories of intellectual virtues and vices: A focus on intellectual humility. The Journal of Positive Psychology, 2014; 1):

Abstract: “The study of intellectual humility is still in its early stages and issues of definition and measurement are only now being explored. To inform and guide the process of defining and measuring this important intellectual virtue, we conducted a series of studies into the implicit theory – or ‘folk’ understanding – of an intellectually humble person, a wise person, and an intellectually arrogant person. In Study 1, 350 adults used a free-listing procedure to generate a list of descriptors, one for each person-concept. In Study 2, 335 adults rated the previously generated descriptors by how characteristic each was of the target person-concept. In Study 3, 344 adults sorted the descriptors by similarity for each person-concept. By comparing and contrasting the three person-concepts, a complex portrait of an intellectually humble person emerges with particular epistemic, self-oriented, and other-oriented dimensions.”

 

The roots of language

If you are not searching for something, then you are unlikely to see it. That has been so with language. For a long time there was a settled agreement on what language was and how it came to be; any other way of looking at things was hardly considered. But now language is seen in a different light – as part of the spectrum of animal communication. Recently there have been some very interesting papers – on dogs, birds, monkeys and cows.

Dogs: They understand our language very much as we do. They process the words, or phonemic sounds, separately from the non-word aspects, the prosodic cues. We do this too: we separate the verbal information from the emotional sound envelope. And dogs, like us, do the word-meaning work in the left hemisphere and the tone-of-voice work in the right hemisphere, in similar regions. This implies that the lateralization of aspects of communication is probably an old feature of the mammalian brain. The two abstracts below explain the experimental evidence.

Abstract: (Victoria Ratcliffe, David Reby; Orienting Asymmetries in Dogs’ Responses to Different Communicatory Components of Human Speech; Cell Current Biology Volume 24, Issue 24, p2908–2912, 15 December 2014) “It is well established that in human speech perception the left hemisphere (LH) of the brain is specialized for processing intelligible phonemic (segmental) content, whereas the right hemisphere (RH) is more sensitive to prosodic (suprasegmental) cues. Despite evidence that a range of mammal species show LH specialization when processing conspecific vocalizations, the presence of hemispheric biases in domesticated animals’ responses to the communicative components of human speech has never been investigated. Human speech is familiar and relevant to domestic dogs (Canis familiaris), who are known to perceive both segmental phonemic cues and suprasegmental speaker-related and emotional prosodic cues. Using the head-orienting paradigm, we presented dogs with manipulated speech and tones differing in segmental or suprasegmental content and recorded their orienting responses. We found that dogs showed a significant LH bias when presented with a familiar spoken command in which the salience of meaningful phonemic (segmental) cues was artificially increased but a significant RH bias in response to commands in which the salience of intonational or speaker-related (suprasegmental) vocal cues was increased. Our results provide insights into mechanisms of interspecific vocal perception in a domesticated mammal and suggest that dogs may share ancestral or convergent hemispheric specializations for processing the different functional communicative components of speech with human listeners.”

Abstract: (Attila Andics, Márta Gácsi, Tamás Faragó, Ádám Miklósi; Voice-Sensitive Regions in the Dog and Human Brain Are Revealed by Comparative fMRI; Cell Current Biology Volume 24, Issue 5, p574–578, 3 March 2014) “During the approximately 18–32 thousand years of domestication, dogs and humans have shared a similar social environment. Dog and human vocalizations are thus familiar and relevant to both species, although they belong to evolutionarily distant taxa, as their lineages split approximately 90–100 million years ago. In this first comparative neuroimaging study of a nonprimate and a primate species, we made use of this special combination of shared environment and evolutionary distance. We presented dogs and humans with the same set of vocal and nonvocal stimuli to search for functionally analogous voice-sensitive cortical regions. We demonstrate that voice areas exist in dogs and that they show a similar pattern to anterior temporal voice areas in humans. Our findings also reveal that sensitivity to vocal emotional valence cues engages similarly located nonprimary auditory regions in dogs and humans. Although parallel evolution cannot be excluded, our findings suggest that voice areas may have a more ancient evolutionary origin than previously known.”

It has also been shown that some dogs (border collies) can learn a remarkable number of words – many hundreds of names for toy objects, plus some verbs and adjectives. This implies that the structures in our language are not unique. Objects, proper names, actions and attributes are all aspects of our perception of the world and seem to be basic to the mammalian brain’s way of thinking. The idea of an agent causing a change is how a working border collie earns its keep. Nothing new here – these are old architectural features of the brain that language appears to have harnessed.

Birds: Recently more than 100 researchers, using nine supercomputers, analyzed the genomes of 48 species of birds. The results have just been published in 28 papers appearing together in various journals. There is now a complete outline of the bird family tree. There is a similarity between our genes and those of bird groups that have vocal abilities, and behaviorally there are similarities in the learning of vocalizations. Besides ourselves, vocal learners include dolphins, sea lions, bats and elephants among mammals, and parrots, hummingbirds and songbirds among birds. The genetic similarity is found in 55 genes shared by us and songbirds, a pattern found only in vocal learners.

Scientific American reviewed this research (here): “The similarity of the gene networks needed for vocal learning between humans and birds is not completely surprising. After all, all vocal-learning species can trace their ancestry back to the same basal branches on the tree of life, White says. Even though the ability evolved independently, it was influenced by a similar initial deal from the genetic deck of cards. Also, the broadly similar environment of this Earth created the evolutionary pressures that shape vocal learners. Just as multiple species came up with similar solutions to the problem of vision, species that evolved vocal learning seem to have settled on common strategies. Viewed from another angle, however, the convergence is striking. “This, to my knowledge, is the first time a learned behavior has been shown to have so much similar molecular underpinnings,” White says. The discoveries open up a host of potential avenues for future exploration: Can nonvocal learners acquire some traits needed for vocal learning simply by tweaking some key genes? Almost certainly, zebra finches have more to tell us about our own ability to babble, shout and sing.”

Monkeys: We have been told that monkeys’ use of calls is nothing like language, because the calls are fixed – neither learned nor elaborated. But a new study examines differences in the use of the calls within the same species in different places. The differences can be explained by established human language mechanisms. When two words compete and one (A) has a more specific meaning while the other (B) has a general meaning, then (B)’s meaning will change so that it doesn’t include (A) but covers only the other instances of the general meaning. There is a rudimentary ‘primate linguistics’ that is not so unlike language after all. Here is the abstract (a toy sketch of the strengthening mechanism follows it).

Abstract: (Philippe Schlenker, Emmanuel Chemla, Kate Arnold, Alban Lemasson, Karim Ouattara, Sumir Keenan, Claudia Stephan, Robin Ryder, Klaus Zuberbühler; Monkey semantics: two ‘dialects’ of Campbell’s monkey alarm calls; Linguistics and Philosophy, 2014; 37 (6)) “We develop a formal semantic analysis of the alarm calls used by Campbell’s monkeys in the Tai forest (Ivory Coast) and on Tiwai island (Sierra Leone)—two sites that differ in the main predators that the monkeys are exposed to (eagles on Tiwai vs. eagles and leopards in Tai). Building on data discussed in Ouattara et al. (PLoS ONE 4(11):e7808, 2009a; PNAS 106(51): 22026–22031, 2009b) and Arnold et al. (Population differences in wild Campbell’s monkeys alarm call use, 2013), we argue that on both sites alarm calls include the roots krak and hok, which can optionally be affixed with -oo, a kind of attenuating suffix; in addition, sentences can start with boom boom, which indicates that the context is not one of predation. In line with Arnold et al., we show that the meaning of the roots is not quite the same in Tai and on Tiwai: krak often functions as a leopard alarm call in Tai, but as a general alarm call on Tiwai. We develop models based on a compositional semantics in which concatenation is interpreted as conjunction, roots have lexical meanings, -oo is an attenuating suffix, and an all-purpose alarm parameter is raised with each individual call. The first model accounts for the difference between Tai and Tiwai by way of different lexical entries for krak. The second model gives the same underspecified entry to krak in both locations (= general alarm call), but it makes use of a competition mechanism akin to scalar implicatures. In Tai, strengthening yields a meaning equivalent to non-aerial dangerous predator and turns out to single out leopards. On Tiwai, strengthening yields a nearly contradictory meaning due to the absence of ground predators, and only the unstrengthened meaning is used.”
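To make the competition mechanism concrete, here is a toy sketch in Python of the strengthening idea as I read it, with call meanings modeled as sets of threat types. The threat inventories and the fallback rule are simplifications of mine, not the paper’s formal semantics:

```python
# Toy model of the competition ("strengthening") mechanism: a general call's
# meaning is narrowed to exclude a specific competitor, restricted to the
# threats actually present at a site. Inventories are illustrative only.

def strengthen(general, specific, local_threats):
    """Narrow a general call by excluding the specific competitor's meaning."""
    narrowed = (general - specific) & local_threats
    # If strengthening yields a nearly contradictory meaning (nothing it could
    # refer to locally), fall back to the plain, unstrengthened meaning, as
    # the paper argues happens on Tiwai.
    return narrowed if narrowed else general & local_threats

krak = {"eagle", "leopard", "snake", "falling_branch"}  # general alarm call
hok = {"eagle"}                                         # aerial-specific call

tai = {"eagle", "leopard"}   # both predator types present
tiwai = {"eagle"}            # no ground predators

print(strengthen(krak, hok, tai))    # {'leopard'}: krak behaves as a leopard call
print(strengthen(krak, hok, tiwai))  # {'eagle'}: only the unstrengthened use survives
```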

Cows: ScienceDaily reported (here) on a press release, “Do you speak cow?”, about research led by Monica Padilla de la Torre from Queen Mary University of London. “They identified two distinct maternal ‘calls’. When cows were close to their calves, they communicated with them using low frequency calls. When they were separated — out of visual contact — their calls were louder and at a much higher frequency. Calves called out to their mothers when they wanted to start suckling. And all three types of calls were individualized – it was possible to identify each cow and calf using its calls.”

Many animals have been shown to recognize other individuals and to identify themselves vocally. But it is still a surprise that an animal like a cow has ‘names’. It could be a general ability among mammals.

Work like this on other animals is likely to further illuminate the roots of our language. But it takes actually looking, rather than accepting the idea that our language has no roots to be found in other animals.

 

All pain is not the same

A popular illustration of embodied cognition is the notion that physical pain and social pain share the same neural mechanism. The researchers who first published this relationship have now published a new paper finding that the two types of pain do not overlap in the brain but are just close neighbours – close enough to have appeared together on the original fMRI scans. The patterns of activity, however, are different. The data have not changed, but a new method of analyzing them has produced a much clearer picture.

Neuroskeptic has a good blog post on this paper and observes: “Woo et al. have shown commendable scientific integrity in being willing to change their minds and update their theory based on new evidence. That sets an excellent example for researchers.” Have a look at the Neuroskeptic post (here).

It would probably be wise for other groups to re-examine, using multivariate analysis, similar data they have previously published.


Abstract of the paper (Woo CW, Koban L, Kross E, Lindquist MA, Banich MT, Ruzic L, Andrews-Hanna JR, & Wager TD (2014). Separate neural representations for physical pain and social rejection. Nature Communications, 5 PMID: 25400102):

“Current theories suggest that physical pain and social rejection share common neural mechanisms, largely by virtue of overlapping functional magnetic resonance imaging (fMRI) activity. Here we challenge this notion by identifying distinct multivariate fMRI patterns unique to pain and rejection. Sixty participants experience painful heat and warmth and view photos of ex-partners and friends on separate trials. FMRI pattern classifiers discriminate pain and rejection from their respective control conditions in out-of-sample individuals with 92% and 80% accuracy. The rejection classifier performs at chance on pain, and vice versa. Pain- and rejection-related representations are uncorrelated within regions thought to encode pain affect (for example, dorsal anterior cingulate) and show distinct functional connectivity with other regions in a separate resting-state data set (N=91). These findings demonstrate that separate representations underlie pain and rejection despite common fMRI activity at the gross anatomical level. Rather than co-opting pain circuitry, rejection involves distinct affective representations in humans.”
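To see the logic of the multivariate approach, here is a hedged sketch with synthetic data (my illustration of cross-decoding in general, not the authors’ pipeline): a classifier trained on one domain generalizes to held-out trials of that domain but performs at chance on the other domain when the two underlying patterns are uncorrelated.

```python
# Synthetic illustration of cross-decoding: if "pain" and "rejection" have
# distinct multivariate patterns, a classifier trained on one transfers at
# chance to the other. This is not the authors' actual analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500
w_pain = rng.normal(size=n_voxels)    # hypothetical pain pattern
w_reject = rng.normal(size=n_voxels)  # independent rejection pattern

def simulate(w):
    y = rng.integers(0, 2, n_trials)         # condition (1) vs control (0) trials
    X = rng.normal(size=(n_trials, n_voxels))
    X[y == 1] += 0.3 * w                     # add the pattern on condition trials
    return X, y

X_pain, y_pain = simulate(w_pain)
X_rej, y_rej = simulate(w_reject)

half = n_trials // 2
clf = LogisticRegression(max_iter=2000).fit(X_pain[:half], y_pain[:half])
print("pain -> held-out pain:", clf.score(X_pain[half:], y_pain[half:]))  # high
print("pain -> rejection:", clf.score(X_rej, y_rej))                      # ~0.5, chance
```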

 

Agency and intention

Nautilus has a post (here) by Matthew Hutson that is a very interesting review of the connection between our perception of time and our perception of causation. If we believe that two events are causally related, we perceive less time between them than a clock would register; if we believe the events are not causally connected, the perceived time between them increases. And on the other side of the coin: if we perceive a shorter time between two events, we are more likely to believe they are causally connected, and if the time between them is longer, it is harder for us to believe they are causally related. This effect is called intentional binding. The article describes the important experiments that underpin the concept.

But intentional binding is part of a larger question: how is our sense of agency created, and why? To learn how to do things in this world, we have to know what we set in motion and what was caused by something other than ourselves. Our memory of an event has to be marked as caused by us if it was, in order to be useful in future situations. As our memory of an event is based on our consciousness of it, our consciousness must reflect whether we caused the outcome. So the question becomes: how do our brains make the call to mark an event as our own doing? If the actual ‘causing’ were a conscious process, there would be no need for a procedure to establish whether we were the agents of the action. However, there is such a procedure.

I wrote about this previously (here) in looking at Chapter 1 of ‘The New Unconscious’, ‘Who is the Controller of Controlled Processes?’. What needs to happen for us to feel that we have willed an action? We have to believe that thoughts which reach our consciousness have caused our actions. Three things are needed for us to make a causal connection between the thoughts and the actions (a toy formalization follows the list):

  1. priority

The thought has to reach consciousness before the action if it is going to appear to be a cause. Actually it must occur quite close to the action, within about 30 seconds before it. Wegner and Wheatley investigated the principle with fake thoughts fed through earphones and fake actions gently forced by equipment, to give people the feeling that their thought caused their action.

  2. consistency

The thought has to be about the action in order for it to appear to be the cause. Wegner, Sparrow and Winerman used a mirror so that a subject saw the hands of another person standing behind them instead of their own. If the thoughts fed to the subject through earphones matched the hand movements, then the subject experienced willing the movements. If the earphones gave no ‘thoughts’ or contradictory ones, there was no feeling of will.

  3. exclusivity

The thought must be the only apparent source of a cause for the action. If another cause that seems more believable is available, it will be used. The feeling of will can disappear when the subject is in a trance and feels controlled by another agent such as a spirit.
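As a toy formalization (my gloss of the chapter, not Wegner’s own model), the three conditions behave like a simple conjunction:

```python
# Toy formalization of the three conditions; my gloss, not a model from the book.
def feeling_of_will(priority: bool, consistency: bool, exclusivity: bool) -> bool:
    """priority: the thought reached consciousness shortly before the action.
    consistency: the thought was about the action.
    exclusivity: no more believable rival cause was available."""
    return priority and consistency and exclusivity

print(feeling_of_will(True, True, True))   # action is experienced as willed
print(feeling_of_will(True, True, False))  # e.g. trance: agency ascribed to a spirit
```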

Also previously (here) I discussed a report in Science, “Movement Intention after Parietal Cortex Stimulation in Humans”, by M. Desmurget and others, with the following summary:

“Parietal and premotor cortex regions are serious contenders for bringing motor intentions and motor responses into awareness. We used electrical stimulation in seven patients undergoing awake brain surgery. Stimulating the right inferior parietal regions triggered a strong intention and desire to move the contralateral hand, arm, or foot, whereas stimulating the left inferior parietal region provoked the intention to move the lips and to talk. When stimulation intensity was increased in parietal areas, participants believed they had really performed these movements, although no electromyographic activity was detected. Stimulation of the premotor region triggered overt mouth and contralateral limb movements. Yet, patients firmly denied that they had moved. Conscious intention and motor awareness thus arise from increased parietal activity before movement execution.”

The feeling of agency is not something that we can change even if we believe it is not true. Here is Rodolfo Llinas describing an experiment that he conducted on himself, which I discussed previously (here). It comes from an hour-long video interview (video) containing many interesting ideas. The part I am quoting from the transcript is Llinas’ self-experimentation on the subject of free will.

“…I understand that free will does not exist; I understand that it is the only rational way to relate to each other, this is to assume that it does, although we deeply know that it doesn’t. Now the question you may ask me is how do you know? And the answer is, well, I did an actually lovely experiment on myself. It was extraordinary really. There is an instrument used in neurology called a magnetic stimulator…its an instrument that has a coil that you put next to the top of the head and you pass a current such that a big magnetic field is generated that activates the brain directly, without necessary to open the thing. So if you get one of these coils and you put it on top of the head, you can generate a movement. You put it in the back, you see a light, so you can stimulate different parts of the brain and have a feeling of what happens when you activate the brain directly without, in quotes, you doing it. This of course is a strange way of talking but that’s how we talk. So I decide to put it on the top of the head where I consider to be the motor cortex and stimulate it and find a good spot where my foot on the right side would move inwards. It was *pop* no problem. And we did it several time and I tell my colleague, I know anatomy, I know physiology, I can tell you I’m cheating. Put the stimulus and then I move, I feel it, I’m moving it. And he said well, you know, there’s no way to really know. I said, I’ll tell you how I know. I feel it, but stimulate and I’ll move the foot outwards. I am now going to do that, so I stimulate and the foot moves inwards again. So I said but I changed my mind. Do it again. So I do it half a dozen times… (it always moved inward)…So I said, oh my god, I can’t tell the difference between the activity from the outside and what I consider to be a voluntary movement. If I know that it is going to happen, then I think I did it, because I now understand this free will stuff and this volition stuff. Volition is what’s happening somewhere else in the brain, I know about and therefore I decide that I did it…In other words, free will is knowing what you are going to do. That’s all.”

Crows

I think it is time to look at crows again. There are three interesting papers I want to comment on. What reminded me of crows is that I stumbled across a blog post from a few years ago by a linguist (he has probably changed his tune – so no references) who ridiculed the idea that birds were at all smart, because they had tiny brains with no ‘higher’ brain anatomy. He was unwilling to take seriously any of the work of Pepperberg with her parrot Alex. How the climate has changed in a few years.

The most recent paper is reviewed in ScienceDaily (here): Martinho, Burns, von Bayern, Kacelnik. “Monocular Tool Control, Eye Dominance, and Laterality in New Caledonian Crows.” Current Biology, 2014. It deals with the seeming ‘handedness’ in the way crows hold tools. It is actually ‘eyedness’: the crows hold the tool on one side of the beak, so that they see the end of the tool and the target with their preferred eye. New Caledonian crows have unusually forward-facing eyes and a substantial area of binocular vision. The researchers found that the crows use a monocular part of the eye on the opposite side to see clearly when using a tool. This implies that they are anatomically adapted to tool use. “In other words, the birds are using their notable binocular vision for better monocular vision, allowing each eye to see further toward the other side of the beak. The birds’ unusually wide binocular field is among the first known examples of a physical adaptation to enable tool use, the researchers say.”

In another paper from the spring (citation below), Jelbert and others investigated the extent of New Caledonian crows’ understanding of how to displace water to retrieve a reward. Wild crows, after short training, were tested on six Aesop’s-fable-type tasks. They could solve four of them: dropping stones into water-filled but not sand-filled tubes, dropping sinking rather than floating objects, dropping solid rather than hollow objects, and dropping into the tube with the higher water level. They failed the other two: tubes of different diameters and U-shaped tubes. The results show an understanding of the causal idea of volume displacement at about the level of a 5- to 7-year-old child. “These results are striking as they highlight both the strengths and limits of the crows’ understanding. In particular, the crows all failed a task which violated normal causal rules, but they could pass the other tasks, which suggests they were using some level of causal understanding when they were successful.”
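The physics being probed is simple to state: each dropped stone raises the water level by its volume divided by the tube’s cross-sectional area, which is why both water level and tube diameter matter. A quick worked example with made-up numbers:

```python
# Worked example of the displacement arithmetic (illustrative numbers): each
# stone raises the level by stone volume / tube cross-sectional area, so a
# narrow tube is the better choice for reaching a floating reward.
import math

def rise_per_stone(stone_volume_cm3, tube_diameter_cm):
    area = math.pi * (tube_diameter_cm / 2) ** 2
    return stone_volume_cm3 / area

print(rise_per_stone(2.0, 2.5))  # narrow tube: ~0.41 cm per stone
print(rise_per_stone(2.0, 5.0))  # wide tube:   ~0.10 cm per stone
```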

Last year there was a paper reviewed by ScienceDaily (here): Veit, Nieder. “Abstract rule neurons in the endbrain support intelligent behaviour in corvid songbirds.” Nature Communications, 2013; 4. This paper dealt with how crows make strategic decisions. As crows do many things that are thought of as primate strengths, yet have a very different brain architecture, this offers a way to look at intelligence at a fundamental level that applies to both primates and crows.

Crows were trained to do a memory test. On a computer screen they were shown an image; they had to remember it and later pick one of two images on the screen. The hard part was that sometimes they had to pick the image that was the same and at other times the one that was different. They had to switch back and forth between two rules-of-the-game. They managed this mental flexibility, which takes effort even for humans. While the birds were engaged in this task, the nidopallium caudolaterale area of their brains was monitored. One group of cells was active for the ‘different image’ rule and another for the ‘same image’ rule.
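Here is a hedged sketch of the task structure as I understand the description; the stimuli and trial logic are placeholders, not the authors’ actual protocol:

```python
# Hypothetical sketch of the rule-switching task: a sample image is shown,
# then the current rule ("match" or "non-match") determines which of two test
# images is correct. Stimuli and logic are placeholders, not the authors' code.
import random

images = ["A", "B", "C", "D"]

def trial(rule):
    sample = random.choice(images)
    other = random.choice([img for img in images if img != sample])
    choices = random.sample([sample, other], 2)   # two test images, shuffled
    correct = sample if rule == "match" else other
    return sample, choices, correct

for rule in ["match", "non-match", "match"]:      # the rule switches between trials
    sample, choices, correct = trial(rule)
    print(f"rule={rule} sample={sample} choices={choices} correct={correct}")
```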

Crows and primates have different brains, but the cells regulating decision-making are very similar. They represent a general principle which has re-emerged throughout the history of evolution. “Just as we can draw valid conclusions on aerodynamics from a comparison of the very differently constructed wings of birds and bats, here we are able to draw conclusions about how the brain works by investigating the functional similarities and differences of the relevant brain areas in avian and mammalian brains.”

Citation: Sarah A. Jelbert, Alex H. Taylor, Lucy G. Cheke, Nicola S. Clayton, Russell D. Gray. Using the Aesop’s Fable Paradigm to Investigate Causal Understanding of Water Displacement by New Caledonian Crows. PLoS ONE, 2014; 9 (3): e92895 DOI: 10.1371/journal.pone.0092895

Veit, L., & Nieder, A. (2013). Abstract rule neurons in the endbrain support intelligent behaviour in corvid songbirds. Nature Communications, 4. DOI: 10.1038/ncomms3878


Reading patterns

There is a paper (citation below) that takes a different look at language. It attempts to examine what happens in the brain when we read a story. There is the act of reading, the processing of the language, and the engagement in the story, all going on at the same time.

“One of the main questions in the study of language processing in the brain is to understand the role of the multiple regions that are activated in response to reading. A network of multiple brain regions have been implicated in language, and while the view of the field started with a simplistic dissociation between the roles of Broca’s area and Wernicke’s area, the current theories about language comprehension are more complex and most of them involve different streams of information that involve multiple regions (including Broca’s and Wernicke’s).” By studying sub-processes in isolation, previous studies have produced a confused picture. The researchers changed the method and looked at all parts of the brain at the same time in a normal, natural reading situation (reading a chapter of a Harry Potter book). “We extract from the words of the chapter very diverse features and properties (such as semantic and syntactic properties, visual properties, discourse level features) and then examine which brain areas have activity that is modulated by the different types of features, leading us to distinguish between brain areas on the basis of which type of information they represent.” This is unlike the usual method of finding the areas of the brain with the most change (those that ‘light up’ or ‘go dark’) during some activity or process; here what is being tracked are changes in pattern. They used a model that had been trained to predict the fMRI activation pattern for a piece of text, using training passages in which each word was tagged with 195 features (size, part of speech, role in the parsed sentence, emotion, involvement of a particular character and the like). The model uses brain-wide patterns, not the activity of individual areas. “The model makes predictions of the fMRI activation for an arbitrary text passage, by capturing how this diverse set of information contributes to the neural activity, then combining these diverse neural encodings into a single prediction of brain-wide fMRI activity over time. Our model not only accounts for the different levels of processing involved in story comprehension; it goes further by explicitly searching for the brain activity encodings for individual stimuli such as the mention of a specific story character, the use of a specific syntactic part-of-speech or the occurrence of a given semantic feature. … It has not been shown previously that one could model in detail the rapidly varying dynamics of brain activity with fMRI while reading at a close to normal speed.”
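As a rough sketch of this kind of encoding model (synthetic data throughout; the ridge-regression choice and feature sizes are my assumptions, not necessarily the authors’ exact method): per-word features are regressed onto brain activity, and the fitted mapping then predicts activity for held-out text and decides which of two passages was being read.

```python
# Hedged sketch of a text-to-fMRI encoding model on synthetic data: word
# features predict voxel activity; the fitted model then identifies which of
# two passages was read. Not the authors' pipeline, just the general idea.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 1000, 200, 195, 50

F_train = rng.normal(size=(n_train, n_feat))      # per-word feature vectors
B_true = 0.5 * rng.normal(size=(n_feat, n_vox))   # simulated feature-to-voxel weights
Y_train = F_train @ B_true + rng.normal(size=(n_train, n_vox))

model = Ridge(alpha=10.0).fit(F_train, Y_train)   # learn the encoding

# A held-out passage: its observed activity vs. activity from another passage.
F_test = rng.normal(size=(n_test, n_feat))
Y_observed = F_test @ B_true + rng.normal(size=(n_test, n_vox))
Y_other = rng.normal(size=(n_test, n_vox))
Y_predicted = model.predict(F_test)

def match(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Choose the segment whose observed activity better matches the prediction.
print("correct segment chosen:", match(Y_predicted, Y_observed) > match(Y_predicted, Y_other))
```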

Many of the results of natural reading while being scanned are not surprising. But there are some very interesting insights. We think of language, especially syntax, as being primarily a left hemisphere function. “The strong right temporal representation of syntax that we found was not expected. Indeed we did not find other papers that report the large right hemisphere representation of sentence structure or syntax that we obtain. One reason might be that our syntax features are unique: whereas most experiments have approximated syntactic information in terms of processing load (length of constituents, hard vs easy phrase structure etc.) we model syntax and structure using a much more detailed set of features. Specifically, our model learns distinct neural encodings for each of 46 detailed syntax features including individual parts of speech (adjectives, determiners, nouns, etc.), specific substructures in dependency parses (noun modifiers, verb subjects, etc.), and punctuation. Earlier studies considering only increases or decreases in activity due to single contrasts in syntactic properties could not detect detailed neural encodings of this type. We hypothesize that these regions have been previously overlooked.”

There have been questions in the past about how connected syntactic and semantic processing are. “The question whether the semantics and syntactic properties are represented in different location has been partially answered by our results. There seems to be a large overlap in the areas in which both syntax and semantics are represented.”


The characters’ actions seem to engage areas of imagined action. But dialog may make special demands. “Presence of dialog among story characters was found to modulate activity in many regions in the bilateral temporal and inferior frontal cortices; one plausible hypothesis is that dialog requires additional processing in the language regions. More interestingly, it seems like presence of dialog activates the right temporo-parietal junction, a key theory of mind region. This observation raises an exciting hypothesis to pursue: that the presence of dialog increases the demands for perspective interpretation and recruits theory of mind regions.”

This is a great step forward in studying language in the context of actual communication.

Abstract:

“Story understanding involves many perceptual and cognitive subprocesses, from perceiving individual words, to parsing sentences, to understanding the relationships among the story characters. We present an integrated computational model of reading that incorporates these and additional subprocesses, simultaneously discovering their fMRI signatures. Our model predicts the fMRI activity associated with reading arbitrary text passages, well enough to distinguish which of two story segments is being read with 74% accuracy. This approach is the first to simultaneously track diverse reading subprocesses during complex story processing and predict the detailed neural representation of diverse story features, ranging from visual word properties to the mention of different story characters and different actions they perform. We construct brain representation maps that replicate many results from a wide range of classical studies that focus each on one aspect of language processing and offer new insights on which type of information is processed by different areas involved in language processing. Additionally, this approach is promising for studying individual differences: it can be used to create single subject maps that may potentially be used to measure reading comprehension and diagnose reading disorders.”

Wehbe, L., Murphy, B., Talukdar, P., Fyshe, A., Ramdas, A., & Mitchell, T. (2014). Simultaneously Uncovering the Patterns of Brain Regions Involved in Different Story Reading Subprocesses. PLoS ONE, 9 (11). DOI: 10.1371/journal.pone.0112575


Fluid, flow, zone and zen

So we have conscious and unconscious, type 1 and type 2 cognitive processes, default and task-related modes, fluid intelligence, being in the flow, being in the zone and the Zen mind. I wonder which of these are really the same thing expressed in different semantic frameworks – the same physical process seen from different viewpoints. I suspect that these are all ways of expressing various aspects of how we use, or fail to use, unconscious cognition.

There was an interesting Scientific American blog post (here) by SB Kaufman last January, looking at the relationship between fluid reasoning and working memory. Fluid reasoning works across all domains of intelligence and uses very little prior knowledge, expertise or practice to build relationships, patterns and inferences. How much it depends on working memory is governed by speed: if the fluid reasoning is done quickly, it requires good working memory, but it can be done slowly with less need for working memory. Is this the difference between quick and deep thinkers, both described as intelligent?

Fluid reasoning does not fit nicely with the two types of cognitive processes: type 1 – intuitive, fast, automatic, unconscious, effortless, contextualized, error-prone – and type 2 – reflective, slow, deliberate, cogitative, effortful, decontextualized, normatively correct. As type 2 is typified as using working memory and type 1 as not using it, there is an implication that when speed is required for fluid reasoning, more working memory is required and therefore the thinking leans towards type 2 processing, which is the slower of the two. It is a bit of a paradox. Perhaps what sets fluid reasoning apart is the type of problem rather than the type of process. Maybe the two types of process are ends of a spectrum rather than opposites. Let’s imagine the reasoning as little spurts of type 1 processing feeding a type 2 use of working memory. This could be a spectrum: at one end, continuous type 1 thinking, with working memory and consciousness involved only at the beginning and the end; at the other end, a continuous back and forth as working memory steps through a solution. Let’s imagine that there is little control of efficiency in the type 1 working. The unconscious does not necessarily stick to a plan, while the use of working memory almost dictates a step-wise method. Fluid problems, which occur in areas with little expertise, knowledge and practice, may tax type 1 reasoning unless it is closely monitored and controlled with working memory. A step-wise plan may restrict and slow down progress on a well-practiced task; not having such a plan may overwhelm the process with irrelevant detail and slow down an unfamiliar task. There may, for any situation, be an optimal amount of type 2 control of type 1 free-wheeling speed.

People talking about ‘flow’ and ‘zone’ tend to acknowledge the similarity of the two concepts. But flow seems less concentrated and describes a way of living and especially working, while zone seems to describe short periods of more intense activity, as in a sport. This is almost the opposite of fluid reasoning, in that neither flow nor zone can be achieved without first acquiring skill (expertise, knowledge and practice are basic). This seems to be type 1 processing at its best. In fact, one way to lose the zone is to try to think about or consciously control what you are doing. That is how to choke.

Mihály Csíkszentmihályi has documented flow for most of his career. His theory of Flow has three conditions for achieving the flow state: be involved in an activity with a clear set of goals and progress (direction and structure); have clear and immediate feedback to allow change and adjustment; have balance between the perceived challenges and perceived skills (confidence in one’s ability for the task). The person in flow is experiencing the present moment, a sense of control, a loss of sense of time and of self-consciousness, with a feeling of great reward and enjoyment. There is an automatic connection of action and perception and an effortless relaxation, but still a feeling of control.

Young and Pain have studied being ‘in the zone’. It is described as “a state in which an athlete performs to the best of his or her ability. It is a magical and…special place where performance is exceptional and consistent, automatic and flowing. An athlete is able to ignore all the pressures and let his or her body deliver the performance that has been learned so well. Competition is fun and exciting.” Athletes reporting on ‘in the zone’ moments report: “clear inner process”, “felt all together”, “awareness of power”, “clear focus”, “strong sense of self”, “free from outer restrictions”, “need to complete”, “absorption”, “intention”, “process ‘clicked’”, “personal understanding & expression”, “actions & thoughts spontaneous”, “event was practiced”, “performance”, “fulfillment”, “intrinsic reward”, “loss of self”, “spiritual”, “loss of time and space”, “unity of self and environment”, “enjoyed others”, “prior related involvement”, “fun”, “action or behavior”, “goals and structure”. Zone seems more intense and more identified with a very particular event than flow.

The hallmark of both flow and zone is that the unconscious, fully equipped and practiced, appears to be in charge, doing the task well and effortlessly. The other thing to note is that the task-related mode is being used and not the default mode. Introspection, memory and imagination are taking second place.

The flow/zone way of acting is even more extreme in some Eastern religious exercises and also a few Western ones. The pinnacle of this is perhaps Zen states of mind. One in particular is like zone. “Mushin means “Without Mind” and it is very similar in practice to the Chinese Taoist principle of wei wuwei. Of all of the states of mind, I think not only is working toward mastery of mushin most important, it’s also the one most people have felt at some point in time. In sports circles, mushin is often referred to as “being in the zone”. Mushin is characterized by a mind that is completely empty of all thoughts and is existing purely in the current moment. A mind in mushin is free from worry, anger, ego, fear or any other emotions. It does not plan, it merely acts. If you’ve ever been playing a sport and you got so into it you stopped thinking about what you were doing and just played, you’ve experienced mushin.” I find the use of ‘mind’ with this meaning misleading, but it is clear in the context that they are referring to just the conscious part of the mind. It could be replaced with the word ‘consciousness’ without changing the meaning.

In summary, unconscious control of tasks that have been extremely well learned (the learning itself likely requires conscious thought) leads to states of mind that are valued: very skilled, effortless and agreeable. The default mode is suppressed, and the self recedes in importance, as do past and future, because introspection, recall of past events and dreaming of future ones require the default mode. It is not an all-or-nothing thing but one of degree.

Virtual reality is not that real

Virtual reality is used in many situations and is often seen as equivalent to actual experience. For example, it is used in training where actual experience is too expensive or dangerous. In science, it is used in experiments on the assumption that it can stand in for reality. A recent paper (Z. Aghajan, L. Acharya, J. Moore, J. Cushman, C. Vuong, M. Mehta; Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality; Nature Neuroscience 2014) shows that virtual reality and ‘real’ reality are treated differently in the hippocampus, where spatial mapping occurs. ScienceDaily reports on this paper (here).

It is assumed that cognitive maps are made by the neurons of the hippocampus, which compute the distances to landmarks. Of course, this is not the only way a map could be constructed: sounds and echoes could give clues, smells could identify places, and so on. To test whether visual cues alone could supply the information needed to create a map, the researchers compared the activity of hippocampal neurons during a virtual walk and a real walk that were visually identical. In the real setup the rat walked through an actual environment, while in the virtual setup the rat walked on a treadmill with the equivalent visual ‘movie’ projected all around it.

The results showed that the mapping of the two environments was different. The mapping during real experience involved more activity by more neurons and was not random. In the virtual experiment, the activity was random and more sparse. Judging by neural activity, it was as if the rats could not map virtual reality and were somewhat lost or confused, even though they appeared to be behaving normally. “Careful mathematical analysis showed that neurons in the virtual world were calculating the amount of distance the rat had walked, regardless of where he was in the virtual space.”

The same report describes other research by the group. Mehta describes the complex rhythms involved in learning and memory in the hippocampus: “The complex pattern they make defies human imagination. The neurons in this memory-making region talk to each other using two entirely different languages at the same time. One of those languages is based on rhythm; the other is based on intensity.” The two languages are used simultaneously by hippocampal neurons. “Mehta’s group reports that in the virtual world, the language based on rhythm has a similar structure to that in the real world, even though it says something entirely different in the two worlds. The language based on intensity, however, is entirely disrupted.”

As a rat hippocampus is very similar to a human one, and the virtual reality setup was a very realistic one, this study throws doubt on experiments and techniques that use virtual reality with humans. It is also very interesting to note another surprising ability of neurons: processing two types of signal at the same time.

Abstract: “During real-world (RW) exploration, rodent hippocampal activity shows robust spatial selectivity, which is hypothesized to be governed largely by distal visual cues, although other sensory-motor cues also contribute. Indeed, hippocampal spatial selectivity is weak in primate and human studies that use only visual cues. To determine the contribution of distal visual cues only, we measured hippocampal activity from body-fixed rodents exploring a two-dimensional virtual reality (VR). Compared to that in RW, spatial selectivity was markedly reduced during random foraging and goal-directed tasks in VR. Instead we found small but significant selectivity to distance traveled. Despite impaired spatial selectivity in VR, most spikes occurred within ~2-s-long hippocampal motifs in both RW and VR that had similar structure, including phase precession within motif fields. Selectivity to space and distance traveled were greatly enhanced in VR tasks with stereotypical trajectories. Thus, distal visual cues alone are insufficient to generate a robust hippocampal rate code for space but are sufficient for a temporal code.”

Synesthesia can be learned

Synesthesia is a condition where one stimulus (like a letter) is automatically experienced with another attribute (like a colour) that is not actually present. About 4% of people have some form of this sensory mixing. It has generally been assumed that synesthesia is inherited, because it runs in families. But it has become clear that some learning is involved in triggering and shaping it. “Simner and colleagues tested grapheme-color consistency in synesthetic children between 6 and 7 years of age, and again in the same children a year later. This interim year appeared critical in transforming chaotic pairings into consistent fixed associations. The same cohort were retested 3 years later, and found to have even more consistent pairings. Therefore, GCS (grapheme-color synesthesia) appears to emerge in early school years, where first major pressures to use graphemes are encountered, and then becomes cemented in later years. In fact, for certain abstract inducers, such as graphemes, it is implausible that humans are born with synesthetic associations to these stimuli. Hence, learning must be involved in the development of at least some forms of synesthesia.” There have been attempts to train people to have synesthetic experiences, but these have not produced the conscious experience of genuine synesthesia.

In the paper cited below, Bor and others managed to produce these genuine experiences in people showing no previous signs of synesthesia and no family history of it. They attribute their success to more intensive training. “Here, we implemented a synesthetic training regime considerably closer to putative real-life synesthesia development than has previously been used. We significantly extended training time compared to all previous studies, employed a range of measures to optimize motivation, such as making tasks adaptive, and we selected our letter-color associations from the most common associations found in synesthetic and normal populations. Participants were tested on a range of cognitive and perceptual tasks before, during, and after training. We predicted that this extensive training regime would cause our participants to simulate synesthesia far more closely than previous synesthesia training studies have achieved.”
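For flavor, here is a hypothetical sketch of one adaptive element such a regime might contain – letter-color pairs that are answered incorrectly get drilled more often. The pairs and the update rule are invented for illustration; they are not the study’s tasks:

```python
# Hypothetical adaptive drilling loop for letter-color training; pairs and the
# weighting rule are invented for illustration, not the study's actual tasks.
import random

pairs = {"A": "red", "B": "blue", "C": "green", "D": "purple"}
weights = {letter: 1.0 for letter in pairs}     # sampling weight per letter

def next_letter():
    letters = list(weights)
    return random.choices(letters, weights=[weights[l] for l in letters])[0]

def record_answer(letter, correct):
    # Missed associations become more likely to be drilled again.
    weights[letter] = max(0.2, weights[letter] * (0.8 if correct else 1.5))

for _ in range(10):                             # a few mock trials
    letter = next_letter()
    guess = random.choice(list(pairs.values())) # stand-in for a participant's answer
    record_answer(letter, guess == pairs[letter])

print({letter: round(w, 2) for letter, w in weights.items()})
```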

The phenomenology in these subjects was mild and not permanent, but it was definitely real synesthesia. The work has shown that although there is a genetic tendency, in typical synesthetes the condition is learned, probably through intensive, motivated training during development. It also seems that the condition is a matter of associative memory rather than ‘extra wiring’.

Here is the abstract:

“Synesthesia is a condition where presentation of one perceptual class consistently evokes additional experiences in different perceptual categories. Synesthesia is widely considered a congenital condition, although an alternative view is that it is underpinned by repeated exposure to combined perceptual features at key developmental stages. Here we explore the potential for repeated associative learning to shape and engender synesthetic experiences. Non-synesthetic adult participants engaged in an extensive training regime that involved adaptive memory and reading tasks, designed to reinforce 13 specific letter-color associations. Following training, subjects exhibited a range of standard behavioral and physiological markers for grapheme-color synesthesia; crucially, most also described perceiving color experiences for achromatic letters, inside and outside the lab, where such experiences are usually considered the hallmark of genuine synesthetes. Collectively our results are consistent with developmental accounts of synesthesia and illuminate a previously unsuspected potential for new learning to shape perceptual experience, even in adulthood.”

Bor, D., Rothen, N., Schwartzman, D., Clayton, S., & Seth, A. (2014). Adults Can Be Trained to Acquire Synesthetic Experiences. Scientific Reports, 4. DOI: 10.1038/srep07089


Imagination and reality

ScienceDaily has an item (here) on a paper (D. Dentico, B.L. Cheung, J. Chang, J. Guokas, M. Boly, G. Tononi, B. Van Veen. Reversal of cortical information flow during visual imagery as compared to visual perception. NeuroImage, 2014; 100: 237) looking at EEG dynamics during thought.

The researchers examined electrical activity as subjects alternated between imagining scenes and watching video clips.

Areas of the brain are connected for various functions, and these interactions change during processing. The changing network interactions appear as movement of activity across the cortex. The research groups are trying to develop tools to study these changing networks: Tononi to study sleep and dreaming, and Van Veen to study short-term memory.

The activity seems very directional. “During imagination, the researchers found an increase in the flow of information from the parietal lobe of the brain to the occipital lobe — from a higher-order region that combines inputs from several of the senses out to a lower-order region. In contrast, visual information taken in by the eyes tends to flow from the occipital lobe — which makes up much of the brain’s visual cortex — “up” to the parietal lobe… To zero in on a set of target circuits, the researchers asked their subjects to watch short video clips before trying to replay the action from memory in their heads. Others were asked to imagine traveling on a magic bicycle — focusing on the details of shapes, colors and textures — before watching a short video of silent nature scenes.”

The study served to validate their equipment, methods and calculations: could they discriminate the direction of ‘flow’ in the two situations, imagining and perceiving? It appears they could.

The actual directions of flow are not surprising. In perception, information starts in the primary sensory areas at the back of the brain and becomes more integrated as it moves forward, turning into objects in space, concepts and even word descriptions. During imagining, on the other hand, the starting points are objects, concepts and words, which must be rendered in sensory terms, so processing would be directed back towards the primary sensory areas. In both cases the end point is a connection between sensory qualia and their high-level interpretation: in perception the movement is from the qualia to the interpretation, and in imagining it is from the interpretation to the qualia.
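For readers who want the flavor of a directed-flow analysis, here is a toy sketch using Granger-style causality on synthetic signals (my illustration of the general idea; the authors’ EEG method is more sophisticated):

```python
# Toy directed-flow (Granger-style) analysis on synthetic signals: does one
# region's past help predict another's present? Not the authors' EEG method.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 2000
occipital = rng.normal(size=n)
parietal = np.zeros(n)
for t in range(1, n):
    # "Perception-like" coupling: occipital drives parietal with a one-step lag.
    parietal[t] = 0.6 * occipital[t - 1] + rng.normal()

# grangercausalitytests asks whether column 2 helps predict column 1.
fwd = grangercausalitytests(np.column_stack([parietal, occipital]), maxlag=1)
rev = grangercausalitytests(np.column_stack([occipital, parietal]), maxlag=1)
print("occipital -> parietal p =", fwd[1][0]["ssr_ftest"][1])  # tiny: flow detected
print("parietal -> occipital p =", rev[1][0]["ssr_ftest"][1])  # large: no reverse flow
```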