Category Archives: learning

Does it ring true?

I make a point of not commenting on research into medical and psychological conditions. However, I am dyslexic and feel able to comment on research into that specific condition. I recognize that there are probably many types, levels and causes of dyslexia, and so my reaction might not be the same as others’. But I still automatically judge the research by asking, ‘does it ring true in my case?’

Several theories have fit with my experience of dyslexia. One is the idea that there is a problem with the corpus callosum, the nerve tract that connects the two hemispheres, in the region where sound processing is done, so that the left and right hemispheres do not properly cooperate on auditory information. This fits with my brother’s cleft palate and more severe dyslexia, and with my own high palate. It might explain the lack of consciousness of what I am going to say that often happens to me. (It has only been on rare occasions that I have disagreed with what I have said.) I am left-handed and perhaps am not conscious of what the other hemisphere is preparing to say due to a lack of communication at some point along the corpus callosum.

Another theory points to a fault in the dorsal/ventral streams. This idea is that sensory information leaves the primary sensory areas via two paths called the dorsal and ventral streams, also called the ‘where/how’ and the ‘what’ streams. The dorsal (where) path leads to motor speech areas, is very fast, and not very conscious. The ventral (what) path leads to more cognitive areas where auditory information is converted into semantic information, is slower, and more conscious. These streams interact in some ways – they both map phonemes but in two different maps and those maps need to be consistent with one another. We need to recognize a phoneme and we need to speak a phoneme. Dyslexics appear to have great difficulty consciously recognizing individual phonemes. They also appear to have difficulty with very short phonemes in particular. This appears to have something to do with a lack of communication between the streams.

Reasonable oral skill (as opposed to written) is possible without phonological awareness by dealing with syllables as entities that are not divided into individual phonemes. The vowel in a syllable is modified by the consonants that precede or follow it. So the a in bat is different from the a in cap. It is not necessary to recognize the individual b, t, c or p in order to recognize the two words and produce them in speech, because the short consonants modify the vowel. This also rings true to me – it is how it feels. The inability to consciously recognize sounds as separate when they occur close together in time, and very poor reflex times, also point to this timing problem with short consonants. It is odd, but I find it hard to explain to people how it is to hear a syllable clearly but not hear its components. It seems such a simple, obvious perception to me, a single indivisible sound.

Neither of these theories explains the symptoms of mixing up left and right, clockwise and counterclockwise, confusing something with its mirror image, and the ‘was’ and ‘saw’ problem. Nor do they explain the slight lag between knowing something was said and hearing what it was.

Theories that have to do with vision or with short-term memory do not seem to apply to me. Although I have to admit that I am not sure what a bad short-term memory would feel like. I certainly have an excellent long-term memory.

Recently a paper has appeared with a new theory (Perrachione, Del Tufo, Winter, Murtagh, Cyr, Chang, Halverson, Ghosh, Christodoulou, Gabrieli; Dysfunction of Rapid Neural Adaptation in Dyslexia; Neuron 92, 1383–1397, December 2016). The authors looked at perceptual adaptation in dyslexics and non-dyslexics. Perceptual adaptation is the attenuation of perceptual processing for repetitive stimuli. So for example, if the same voice says a list of words, there is less activity in parts of the brain than if a different voice delivers each word. The brain has adapted to the voice, and that makes processing easier. They measured the adaptation using fMRI, in procedures featuring spoken words, written words, objects and faces, with adult subjects and with children just starting to read. In every case the adaptation was weaker for dyslexics than for controls. The differences also appeared in the areas involved in processing the particular type of stimulus (such as visual areas for visual stimuli). The amount of adaptation in these areas correlated with the dyslexic’s level of reading skill. The research supports the idea that dysfunction in neural adaptation may be an important aspect of dyslexia.
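Repetition suppression of this kind is easy to caricature in code. Here is a toy sketch (my own construction, not the paper’s model) in which each repeat of the same stimulus attenuates the response by a fixed factor:

```python
# Toy model of perceptual adaptation (repetition suppression).
# Assumption (mine, not from the paper): each repeat of the same
# stimulus multiplies the neural response by a fixed adaptation factor.

def responses(stimuli, adapt_factor=0.7, baseline=1.0):
    """Return a simulated response for each stimulus in sequence.

    A novel stimulus gets the full baseline response; each repeat of
    a stimulus already seen is attenuated by adapt_factor per repeat.
    """
    seen = {}  # stimulus -> number of times already presented
    out = []
    for s in stimuli:
        n = seen.get(s, 0)
        out.append(baseline * adapt_factor ** n)
        seen[s] = n + 1
    return out

# Same voice for every word: strong adaptation, low total activity.
same_voice = responses(["voiceA"] * 4)
# A different voice for each word: no adaptation, full activity each time.
diff_voice = responses(["voiceA", "voiceB", "voiceC", "voiceD"])
# A weaker adaptation factor (closer to 1), as reported for dyslexics,
# shrinks the saving that repetition normally provides.
weak_adapt = responses(["voiceA"] * 4, adapt_factor=0.95)

print(sum(same_voice), sum(weak_adapt), sum(diff_voice))
```

With weak adaptation, every repetition must be processed almost as if it were new, which is the burden the paper proposes dyslexics carry.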

Here is part of their conclusion:

“Dyslexia is a specific impairment in developing typical reading abilities. Correspondingly, structural and functional disruptions to the network of brain areas known to support reading are consistently observed in dyslexia. However, these observations confound cause and consequence, especially since reading is a cultural invention that must make use of existing circuitry evolved for other purposes. In this way, differences between brains that exert more subtle influences on non-reading behaviors are likely to be the culprit in a cascade of perceptual and mnemonic challenges that interfere with the development of typical reading abilities. Recent research has begun to elucidate a cluster of behaviorally distinct, but potentially physiologically related, impairments that are evinced by individuals with reading difficulties and observable in their brains. Through this collection of neural signatures—including unstable neural representations, diminished top-down control, susceptibility to noise, and inability to construct robust short-term perceptual representations—we are beginning to see that reading impairments can arise from general dysfunction in the processes supported by rapid neural adaptation.”

Does the theory ring true? It certainly fits with the feeling that the problem is wider than just language. I have to say that I have always found it difficult to mimic other people’s speech and that would fit with a weak adaptation. The theory does not seem impossible to me but it also does not seem to fit closely to how I feel about being dyslexic. I feel a kind of wall between what I hear and written language; I have never felt that I have overcome the wall; but I have felt that I worked around it.

I have to give the paper respect for the convincing data even if it does not seem to be the whole story. The picture may be about some aspect of the dyslexic developmental fault but not actually have much to do with the main symptom, difficulty with phoneme awareness.

It is not about rules

The question of the trolley has always bothered me. You probably have encountered the scenario many times. You are on a bridge over a trolley track with another person you do not know. There are 5 people on the track some way off. A runaway trolley is coming down the track and will hit the 5 people. Do you allow this to happen, or do you throw the person beside you onto the track in front of the trolley to stop it? This question comes in many versions and is used to categorize types of moral reasoning. My problem is that I do not know what I would do in the few seconds I would have to consider the situation, and I don’t believe that others know either.

In another dip into OpenMIND (here) I find a paper on morality by Paul Churchland, “Rules: The Basis of Morality?”. This is the abstract:

Most theories of moral knowledge, throughout history, have focused on behavior-guiding rules. Those theories attempt to identify which rules are the morally valid ones, and to identify the source or ground of that privileged set. The variations on this theme are many and familiar. But there is a problem here. In fact, there are several. First, many of the higher animals display a complex social order, one crucial to their biological success, and the members of such species typically display a sophisticated knowledge of what is and what is not acceptable social behavior —but those creatures have no language at all. They are unable even to express a single rule, let alone evaluate it for moral validity. Second, when we examine most other kinds of behavioral skills—playing basketball, playing the piano, playing chess—we discover that it is surpassingly difficult to articulate a set of discursive rules, which, if followed, would produce a skilled athlete, pianist, or chess master. And third, it would be physically impossible for a biological creature to identify which of its myriad rules are relevant to a given situation, and then apply them, in real time, in any case. All told, we would seem to need a new account of how our moral knowledge is stored, accessed, and applied. The present paper explores the potential, in these three regards, of recent alternative models from the computational neurosciences. The possibilities, it emerges, are considerable.

Apes, wolves/dogs, lions and many other intelligent social animals appear to have a moral sense without any language. They have ways of behaving that show cooperation, empathy, trust, fairness, sacrifice for the group and punishment of bad behavior. They train their young in these ways. No language codifies this behavior. Humans who lose their language through brain damage and cannot speak or comprehend language still have other skills intact, including their moral sense. People who are very literate and very moral often cannot give an account of their moral rules – some can only put forward the Golden Rule. If we were actually using rules, they would be able to report them.

We should consider morality a skill that we learn rather than a set of rules. It is a skill that we learn and continue learning throughout our lives. A skill that can take into consideration a sea of detail and nuance, and that is lightning fast compared to finding the right rule and applying it. “Moral expertise is among the most precious of our human virtues, but it is not the only one. There are many other domains of expertise. Consider the consummate skills displayed by a concert pianist, or an all-star basketball player, or a grandmaster chess champion. In these cases, too, the specific expertise at issue is acquired only slowly, with much practice sustained over a period of years. And here also, the expertise displayed far exceeds what might possibly be captured in a set of discursive rules consciously followed, on a second-by-second basis, by the skilled individuals at issue. Such skills are deeply inarticulate in the straightforward sense that the expert who possesses them is unable to simply tell an aspiring novice what to do so as to be an expert pianist, an effective point guard, or a skilled chess player. The knowledge necessary clearly cannot be conveyed in that fashion. The skills cited are all cases of knowing how rather than cases of knowing that. Acquiring them takes a lot of time and a lot of practice.”

Churchland then describes how the neural basis of this sort of skill is possible (along with perception and action). He uses a model of Parallel Distributed Processing where a great deal of input can quickly be transformed into a perception or an action. It is an arrangement that learns skills. “It has to do with the peculiar way the brain is wired up at the level of its many billions of neurons. It also has to do with the very different style of representation and computation that this peculiar pattern of connectivity makes possible. It performs its distinct elementary computations, many trillions of them, each one at a distinct micro-place in the brain, but all of them at the same time. … a PDP network is capable of pulling out subtle and sophisticated information from a gigantic sensory representation all in one fell swoop.” I found Churchland’s explanation very clear and to the point, but I also thought he was using AI ideas of PDP rather than biological ones in order to be easily understood. If you are not familiar with parallel processing ideas, this paper is a good place to find a readable starting explanation.
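To make the ‘one fell swoop’ idea concrete, here is a minimal sketch of a PDP-style layer, much closer to the AI caricature than to biology: a fixed set of connection weights turns a large sensory vector into a small output pattern, with every unit conceptually doing its weighted sum at the same time:

```python
import math
import random

random.seed(0)

# A PDP-style layer: every output unit sums over ALL inputs in parallel.
# The network's 'knowledge' lives entirely in the connection weights.
n_inputs, n_outputs = 1000, 10   # big sensory vector in, small verdict out

weights = [[random.gauss(0, 1) for _ in range(n_inputs)]
           for _ in range(n_outputs)]

def perceive(sensory_input):
    """One parallel sweep: each unit takes a weighted sum of the whole
    input and squashes it. Conceptually, all the sums happen at once."""
    return [math.tanh(sum(w * x for w, x in zip(row, sensory_input)))
            for row in weights]

stimulus = [random.gauss(0, 1) for _ in range(n_inputs)]
percept = perceive(stimulus)
print(len(percept))   # 10 numbers distilled from 1000 inputs in one sweep
```

Learning, in such a model, is just the gradual adjustment of those weights with practice, which is why the skill analogy fits so naturally.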

Another slight quibble with the paper is that he does not point out that some of the elements of morality appear to be inborn, and those elements probably steer the moral learning process. Babies often seem to ‘get it’ prior to the experience needed to develop and improve the skill.


The thalamus revisited

For a few decades, I have had the opinion that to understand how the brain works it is important to look beyond the neocortex, to the other areas of the brain that may modify, control or even drive the activity of the cortex. Because of my special interest in consciousness, the thalamus was always interesting in this respect. Metaphorically, the cortex seemed to be the big on-line computer run by the thalamus.

A recent paper makes another connection between the cortex and the thalamus, to add to many others – (F. Alcaraz, A. R. Marchand, E. Vidal, A. Guillou, A. Faugere, E. Coutureau, M. Wolff. Flexible Use of Predictive Cues beyond the Orbitofrontal Cortex: Role of the Submedius Thalamic Nucleus. Journal of Neuroscience, 2015; 35 (38): 13183 DOI: 10.1523/JNEUROSCI.1237-15.2015).

The various parts of the thalamus are connected to incoming sensory signals, all parts of the cortex, the hippocampus, the mid-brain areas, the spinal cord and the brain stem. It is one of the ‘hubs’ of the brain and its activity is essential for consciousness. However, the particular bit of the thalamus that is implicated in this particular function (adaptive decision making flexibility) appears to have been mainly studied in relationship to pain and control of pain. There is a lot to learn about the thalamus!

Here is the abstract: “The orbitofrontal cortex (OFC) is known to play a crucial role in learning the consequences of specific events. However, the contribution of OFC thalamic inputs to these processes is largely unknown. Using a tract-tracing approach, we first demonstrated that the submedius nucleus (Sub) shares extensive reciprocal connections with the OFC. We then compared the effects of excitotoxic lesions of the Sub or the OFC on the ability of rats to use outcome identity to direct responding. We found that neither OFC nor Sub lesions interfered with the basic differential outcomes effect. However, more specific tests revealed that OFC rats, but not Sub rats, were disproportionally relying on the outcome, rather than on the discriminative stimulus, to guide behavior, which is consistent with the view that the OFC integrates information about predictive cues. In subsequent experiments using a Pavlovian contingency degradation procedure, we found that both OFC and Sub lesions produced a severe deficit in the ability to update Pavlovian associations. Altogether, the submedius therefore appears as a functionally relevant thalamic component in a circuit dedicated to the integration of predictive cues to guide behavior, previously conceived as essentially dependent on orbitofrontal functions.

SIGNIFICANCE STATEMENT: In the present study, we identify a largely unknown thalamic region, the submedius nucleus, as a new functionally relevant component in a circuit supporting the flexible use of predictive cues. Such abilities were previously conceived as largely dependent on the orbitofrontal cortex. Interestingly, this echoes recent findings in the field showing, in research involving an instrumental setup, an additional involvement of another thalamic nuclei, the parafascicular nucleus, when correct responding requires an element of flexibility (Bradfield et al., 2013a). Therefore, the present contribution supports the emerging view that limbic thalamic nuclei may contribute critically to adaptive responding when an element of flexibility is required after the establishment of initial learning.”

The learning of concepts

I once tried to learn a simple form of a Bantu language and failed (not surprising, as I always fail to learn a new language). One of the problems with this particular attempt was the classes of nouns. There were 10 or so classes, each with its own rules. It actually works like the gender of nouns in most European languages, but it is much more complex and, unlike gender, it is less arbitrary. The nouns are grouped in somewhat descriptive groups like animals, people, places, tools etc. Besides the Bantu languages, there are a number of other language groups that have extensive noun classes, twenty or more.

Years ago I found the noun classes inexplicable. Why did they exist? But there have been a number of hints that this is a quite natural way for concepts to be stored in the brain – faces stored here, tools stored there, places stored somewhere else.

A recent paper (Andrew James Bauer, Marcel Adam Just. Monitoring the growth of the neural representations of new animal concepts. Human Brain Mapping, 2015; DOI: 10.1002/hbm.22842) studies how and where new concepts are stored.

Their review of previous findings illustrates the idea. “Research to date has revealed that object concepts (such as the concept of a hammer) are neurally represented in multiple brain regions, corresponding to the various brain systems that are involved in the physical and mental interaction with the concept. The concept of a hammer entails what it looks like, what it is used for, how one holds and wields it, etc., resulting in a neural representation distributed over sensory, motor, and association areas. There is a large literature that documents the responsiveness (activation) of sets of brain regions to the perception or contemplation of different object concepts, including animals (animate natural objects), tools, and fruits and vegetables. For example, fMRI research has shown that nouns that refer to physically manipulable objects such as tools elicit activity in left premotor cortex in right-handers, and activity has also been observed in a variety of other regions to a lesser extent. Clinical studies of object category-specific knowledge deficits have uncovered results compatible with those of fMRI studies. For example, damage to the inferior parietal lobule can result in a relatively selective knowledge deficit about the purpose and the manner of use of a tool. The significance of such findings is enhanced by the commonality of neural representations of object concepts across individuals. For example, pattern classifiers of multi-voxel brain activity trained on the data from a set of participants can reliably predict which object noun a new test participant is contemplating. Similarity in neural representation across individuals may indicate that there exist domain-specific brain networks that process information that is important to survival, such as information about food and eating or about enclosures that provide shelter.”
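The cross-participant classification mentioned in the quote can be sketched with toy data. Everything below is fabricated for illustration (a simple nearest-centroid classifier over made-up ‘voxel’ patterns, not the authors’ actual method), but it shows the logic: train on patterns from several participants, then decode a brand-new participant:

```python
import random

random.seed(1)

CONCEPTS = ["hammer", "apple", "bear"]
N_VOXELS = 50

# Fabricated ground truth: each concept has a characteristic voxel
# pattern that is (by assumption) shared across individuals.
prototype = {c: [random.gauss(0, 1) for _ in range(N_VOXELS)]
             for c in CONCEPTS}

def scan(concept, noise=0.5):
    """Simulate one participant's noisy multi-voxel pattern for a concept."""
    return [v + random.gauss(0, noise) for v in prototype[concept]]

# 'Train' on five participants: average their patterns per concept.
train_patterns = {c: [scan(c) for _ in range(5)] for c in CONCEPTS}
centroid = {c: [sum(vals) / len(vals) for vals in zip(*pats)]
            for c, pats in train_patterns.items()}

def classify(pattern):
    """Assign the pattern to the nearest concept centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CONCEPTS, key=lambda c: dist(pattern, centroid[c]))

# A new 'test participant' contemplating a hammer:
print(classify(scan("hammer")))
```

The classifier only works across individuals because the fabricated patterns are shared across them, which is precisely the commonality the quote highlights.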

Their study is concerned with how new concepts are formed (they have a keen interest in education). Collectively, the results show that before instruction about a feature, there were no stored representations of the new feature knowledge; and after instruction, the feature information had been acquired and stored in the critical brain regions. The activation patterns in the regions that encode the semantic information that was taught (habitat and diet) changed, reflecting the specific new concept knowledge. This study provides a novel form of evidence (i.e. the emergence of new multi-voxel representations) that newly acquired concept knowledge comes to reside in brain regions previously shown to underlie a particular type of knowledge. Furthermore, this study provides a foundation for brain research to trace how a new concept makes its way from the words and graphics used to teach it, to a neural representation of that concept in a learner’s brain.

This is a different type of learning. It is conceptual knowledge learning rather than learning an intellectual skill such as reading or a motor skill such as juggling.

The storage of conceptual knowledge appears to be quite carefully structured rather than higgledy-piggledy.

Here is the abstract. “Although enormous progress has recently been made in identifying the neural representations of individual object concepts, relatively little is known about the growth of a neural knowledge representation as a novel object concept is being learned. In this fMRI study, the growth of the neural representations of eight individual extinct animal concepts was monitored as participants learned two features of each animal, namely its habitat (i.e., a natural dwelling or scene) and its diet or eating habits. Dwelling/scene information and diet/eating-related information have each been shown to activate their own characteristic brain regions. Several converging methods were used here to capture the emergence of the neural representation of a new animal feature within these characteristic, a priori-specified brain regions. These methods include statistically reliable identification (classification) of the eight newly acquired multivoxel patterns, analysis of the neural representational similarity among the newly learned animal concepts, and conventional GLM assessments of the activation in the critical regions. Moreover, the representation of a recently learned feature showed some durability, remaining intact after another feature had been learned. This study provides a foundation for brain research to trace how a new concept makes its way from the words and graphics used to teach it, to a neural representation of that concept in a learner’s brain.”

Simplifying assumptions

There is an old joke about a group of horse bettors putting out a tender to scientists for a plan to predict the results of races. A group of biologists submitted a plan to genetically breed a horse that would always win. It would take decades and cost billions. A group of statisticians submitted a plan to devise a computer program to predict races. It would cost millions and would only predict a little better than chance. But a group of physicists said they could do it for a few thousand. They would have the program finished in just a few weeks. The bettors wanted to know how they could be so quick and cheap. “Well, we have equations for how the race variables interact. It’s a complex equation but we have made simplifying assumptions. First we said let each horse be a perfect rolling sphere. Then…”

For over three decades, ideas about how the brain must work have come from studies of electronic neural nets. These studies usually make a lot of assumptions. First, they assume that the only active cells in the brain are the neurons. Second, that the neurons are simple (they have inputs which can be weighted, and if the sum of the weighted inputs is over a threshold, the neuron fires its output signals) and that there is only one type (or a very, very few different types). Third, that the connections between the neurons are structured in very simple and often statistically driven nets. There is only so much that can be learned about the real brain from this model.
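The ‘simple neuron’ of these models really is this simple; a few lines cover the whole assumption (weighted inputs, a threshold, a binary output):

```python
def neuron(inputs, weights, threshold):
    """The entire 'neuron' of classic neural-net models: weight each
    input, sum, and fire (1) if the sum clears the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Example: a unit that fires only when both of its inputs are active
# (an AND gate, which is about as far from a real neuron as it sounds).
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights=[1.0, 1.0], threshold=1.5))
```

Compare that with a real neuron, with its thousands of synapse types, dendritic computation and neighbouring glial cells, and the gap between model and brain is obvious.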

But on the basis of electronic neural nets and information theory, with, I believe, only a small input from the physiology of real brains, it became accepted that the brain used a ‘sparse coding’. What does this mean? At one end of a spectrum, the information held in a network depends on the state of just one neuron. This coding is sometimes referred to as grandmother cells, because one and only one neuron would code for your grandmother. At the other end of the spectrum, the information depends on the state of all the neurons; your grandmother would be coded by a particular pattern of activity that includes the states of all of them. Sparse coding uses only a few neurons, so it is near the grandmother-cell end of the spectrum.
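The spectrum is easy to picture as activity patterns. In this sketch (illustrative numbers only, not measurements) a population of 1000 neurons represents a concept with one active neuron, with a handful, or with about half of them:

```python
import random

random.seed(2)
N = 1000  # neurons in the population

# Three points on the coding spectrum, each an activity pattern over N neurons.
# 'Grandmother' coding: exactly one neuron is active for a given concept.
grandmother = [1 if i == 42 else 0 for i in range(N)]
# Sparse coding: a small handful of neurons is active for the concept.
sparse = [1 if i in {3, 42, 77, 500, 801} else 0 for i in range(N)]
# Fully distributed coding: the state of every neuron carries information.
distributed = [random.randint(0, 1) for _ in range(N)]

for name, code in [("grandmother", grandmother),
                   ("sparse", sparse),
                   ("distributed", distributed)]:
    active = sum(code)
    print(f"{name}: {active} of {N} neurons active ({100 * active / N:.1f}%)")
```

The debate in the paper below is essentially about where on this spectrum real brains sit, and at what cost in generalization.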

Since the 1980s it has generally been accepted that the brain uses sparse coding. But experiments with actual brains have been showing that it may not be the case. A recent paper (Anton Spanne, Henrik Jörntell. Questioning the role of sparse coding in the brain. Trends in Neurosciences, 2015; DOI: 10.1016/j.tins.2015.05.005) argues that it may not be sparse after all.

It was assumed that the brain would use the coding system that gives the lowest total activity without losing functionality. But that is not what the brain actually does. It has higher activity than it theoretically needs. This is probably because the brain sits in a fairly active state even at rest (a sort of knife edge) where it can quickly react to situations.

“If sparse coding were to apply, it would entail a series of negative consequences for the brain. The largest and most significant consequence is that the brain would not be able to generalize, but only learn exactly what was happening on a specific occasion. Instead, we think that a large number of connections between our nerve cells are maintained in a state of readiness to be activated, enabling the brain to learn things in a reasonable time when we search for links between various phenomena in the world around us. This capacity to generalize is the most important property for learning.”

Here is the abstract:

Highlights

  • Sparse coding is questioned on both theoretical and experimental grounds.
  • Generalization is important to current brain models but is weak under sparse coding.
  • The beneficial properties ascribed to sparse coding can be achieved by alternative means.

Coding principles are central to understanding the organization of brain circuitry. Sparse coding offers several advantages, but a near-consensus has developed that it only has beneficial properties, and these are partially unique to sparse coding. We find that these advantages come at the cost of several trade-offs, with the lower capacity for generalization being especially problematic, and the value of sparse coding as a measure and its experimental support are both questionable. Furthermore, silent synapses and inhibitory interneurons can permit learning speed and memory capacity that was previously ascribed to sparse coding only. Combining these properties without exaggerated sparse coding improves the capacity for generalization and facilitates learning of models of a complex and high-dimensional reality.

Music’s effects on the brain

A recent paper identified genes that changed their expression as a result of music performance in trained musicians (see citation below). There were a surprising number of affected genes: 51 had increased and 22 had decreased expression, compared with controls who were also trained musicians but were not involved in making or listening to music during the same period. It is also impressive that this set of 73 genes has a very broad range of presumed functions and effects in the brain.

Another interesting aspect is the overlap of a number of these genes with some that have been identified in song birds. This implies that sophisticated sound perception and production has been conserved from a common ancestor of birds and mammals.

It has been known for some time that musical training has a positive effect on intelligence and outlook – that it assists learning. Musical training changes the structure of the brain. Now scientists are starting to trace the biology of music’s effects. Isn’t it about time that education stopped treating music (and other arts for that matter) as unimportant frills? It should not be the first thing to go when money or teaching time is short.

Here is the abstract:

“Music performance by professional musicians involves a wide-spectrum of cognitive and multi-sensory motor skills, whose biological basis is unknown. Several neuroscientific studies have demonstrated that the brains of professional musicians and non-musicians differ structurally and functionally and that musical training enhances cognition. However, the molecules and molecular mechanisms involved in music performance remain largely unexplored. Here, we investigated the effect of music performance on the genome-wide peripheral blood transcriptome of professional musicians by analyzing the transcriptional responses after a 2-hr concert performance and after a ‘music-free’ control session. The up-regulated genes were found to affect dopaminergic neurotransmission, motor behavior, neuronal plasticity, and neurocognitive functions including learning and memory. Particularly, candidate genes such as SNCA, FOS and DUSP1 that are involved in song perception and production in songbirds, were identified, suggesting an evolutionary conservation in biological processes related to sound perception/production. Additionally, modulation of genes related to calcium ion homeostasis, iron ion homeostasis, glutathione metabolism, and several neuropsychiatric and neurodegenerative diseases implied that music performance may affect the biological pathways that are otherwise essential for the proper maintenance of neuronal function and survival. For the first time, this study provides evidence for the candidate genes and molecular mechanisms underlying music performance.”


Kanduri, C., Kuusi, T., Ahvenainen, M., Philips, A., Lähdesmäki, H., & Järvelä, I. (2015). The effect of music performance on the transcriptome of professional musicians. Scientific Reports, 5. DOI: 10.1038/srep09506


Co-evolution of language and tool-making

It has been more or less accepted that genetic evolution can affect culture and that cultural evolution can affect genetics, but many favour one direction over the other. A recent paper looks at a long sustained period of genetic/cultural co-evolution (Morgan, Uomini, Rendell, Chouinard-Thuly, Street, et al.; Experimental evidence for the co-evolution of hominin tool-making teaching and language; Nature Communications 6, 2015). The paper is covered in a ScienceDaily item (here).

Early hominin species, our ancestors Homo habilis and Australopithecus garhi, used stone tools for two and a half million years. For the first 700,000 years the tools, called Oldowan, remained unchanged. The researchers show that stone-knapping is not easy to learn. The lack of any improvement in the Oldowan tools was probably because language would have been required to teach more sophisticated techniques. After this long period, about 1.8 million years ago, a new set of stone tools appeared, called the Acheulean, which were more technologically challenging. The researchers show that this knapping skill would have needed language to be learned from a master.

The researchers set up learning chains where one person was shown and taught a particular knapping skill. That person then taught another, and the skill was passed down a chain of learners. Various teaching techniques were used in the chains. It was found that language was needed to learn some skills successfully. Thus they suggest that the Acheulean improvements to tools were due to the start of proto-languages, and that knapping and language evolved together. The driving evolutionary pressure was the advantage of better tools.

This picture is very different from the ‘history of language’ put forward by Chomsky. First, because the process is seen as long and gradual. Second, because language is seen as basically developed as a teaching aid, a form of communication. “Our findings suggest that stone tools weren’t just a product of human evolution, but actually drove it as well, creating the evolutionary advantage necessary for the development of modern human communication and teaching. Our data show this process was ongoing two and a half million years ago, which allows us to consider a very drawn-out and gradual evolution of the modern human capacity for language and suggests simple ‘proto-languages’ might be older than we previously thought.”

Here is the abstract: “Hominin reliance on Oldowan stone tools—which appear from 2.5 mya and are believed to have been socially transmitted—has been hypothesized to have led to the evolution of teaching and language. Here we present an experiment investigating the efficacy of transmission of Oldowan tool-making skills along chains of adult human participants (N=184) using five different transmission mechanisms. Across six measures, transmission improves with teaching, and particularly with language, but not with imitation or emulation. Our results support the hypothesis that hominin reliance on stone tool-making generated selection for teaching and language, and imply that (i) low-fidelity social transmission, such as imitation/emulation, may have contributed to the ~700,000 year stasis of the Oldowan technocomplex, and (ii) teaching or proto-language may have been pre-requisites for the appearance of Acheulean technology. This work supports a gradual evolution of language, with simple symbolic communication preceding behavioural modernity by hundreds of thousands of years.”


Talking to babies

When babies learn language, they learn more than language. According to a recent paper they also learn cognition. This news reminded me of something I had read months ago and I went back and found it. Here is the abstract of the paper, followed by the story illustrating the absence of good language learning.

Abstract of paper (Vouloumanos, Waxman; Listen up! Speech is for thinking during infancy; Trends in Cognitive Sciences Vol 18, issue 12 Dec 2014): “Infants’ exposure to human speech within the first year promotes more than speech processing and language acquisition: new developmental evidence suggests that listening to speech shapes infants’ fundamental cognitive and social capacities. Speech streamlines infants’ learning, promotes the formation of object categories, signals communicative partners, highlights information in social interactions, and offers insight into the minds of others. These results, which challenge the claim that for infants, speech offers no special cognitive advantages, suggest a new synthesis. Far earlier than researchers had imagined, an intimate and powerful connection between human speech and cognition guides infant development, advancing infants’ acquisition of fundamental psychological processes.”

From Catherine Porter’s Column Aug 2014, Why Senegalese women have been afraid to talk to their babies – Fears of evil spirits have kept parents from talking to their babies, but that is changing thanks to a program that teaches about brain development. (here) : “10-year-old children in Senegal, deemed incomprehensibly dull by an international early literacy test six years ago. … The results were a blow to the Senegalese government, which pours a quarter of its national budget into education. … Tostan, a well-known non-governmental organization in Senegal, began asking the same questions. Staff members launched focus groups, to research local ideas about schools and child development. After four months, they concluded the root of the problem stretched beyond schools into village homes. Parents, although loving, were not speaking directly to their babies. Many avoided looking deeply into their babies’ eyes. … a baby in rural Senegal would hear about 200 words an hour, Tostan founder and chief executive officer Molly Melching says. Most of those were orders. No wonder they weren’t learning how to read, Melching posited. The language part of their brains was vastly underdeveloped. … The concept of djinns comes from both ancient African religions and the Koran. They are spirits, which can be helpful or hurtful. The hurtful ones, locals believe, can possess them. … Djinns are attracted to babies by jealousy, many locals believe. So, looking a baby in the eye is taboo, as is speaking directly to her. … “In our culture, if you talk with your child, you risk losing him,” says Tostan’s Penda Mbaye. She recalls how she was talking to her first baby when her grandmother warned her about djinns. “After that, I didn’t dare to do it.” … It is one thing to change the national course curriculum, or teacher training, or even severe malnutrition that stunts children’s brains. It’s another to change people’s cultural beliefs and corresponding behaviour. 
… Tostan facilitators developed a year-long class curriculum for parents. It includes lessons on everything from infant nutrition and children’s rights to sleep schedules and baby massage. The most important part though, is the new understanding of children’s growing brains. “We delve into brain development in a non-judgmental way,” Melching says.”

This program seems to be working, and mothers are enthusiastic, enjoying being able to interact with and talk to their babies. In a few years the data will be in and it will be seen what difference communication with babies brings. It is expected to improve not just language skills but also IQ and general cognition.

What is being humble?

What is humility; what does it mean in folk psychology to be intellectually humble? Is it good or bad? ScienceDaily has an item on a study of this topic (here). The researchers are looking for the real-world definition. “This is more of a bottom-up approach, what do real people think about humility, what are the lay conceptions out there in the real world and not just what comes from the ivory tower. We’re just using statistics to present it and give people a picture of that.”

Being humble is the opposite of being proud. A humble person has a real regard for others and is “not thinking too highly of himself – but highly enough”.

“…analysis found two clusters of traits that people use to explain humility. Traits in the first cluster come from the social realm: Sincere, honest, unselfish, thoughtful, mature, etc. The second and more unique cluster surrounds the concept of learning: curious, bright, logical and aware.” These occur together in the intellectually humble person who appreciates learning from others.

It seems to me that such a person has self-esteem but also has ‘other-esteem’ to coin a phrase. It is not just the opposite of proud but it contrasts with narcissistic and individualistic. The idea of humility would seem to fit well with the Ubuntu philosophy, a very underrated way of approaching life. Other-esteem is important.

Here is the abstract of the paper (Peter L. Samuelson, Matthew J. Jarvinen, Thomas B. Paulus, Ian M. Church, Sam A. Hardy, Justin L. Barrett. Implicit theories of intellectual virtues and vices: A focus on intellectual humility. The Journal of Positive Psychology, 2014; 1):

Abstract: “The study of intellectual humility is still in its early stages and issues of definition and measurement are only now being explored. To inform and guide the process of defining and measuring this important intellectual virtue, we conducted a series of studies into the implicit theory – or ‘folk’ understanding – of an intellectually humble person, a wise person, and an intellectually arrogant person. In Study 1, 350 adults used a free-listing procedure to generate a list of descriptors, one for each person-concept. In Study 2, 335 adults rated the previously generated descriptors by how characteristic each was of the target person-concept. In Study 3, 344 adults sorted the descriptors by similarity for each person-concept. By comparing and contrasting the three person-concepts, a complex portrait of an intellectually humble person emerges with particular epistemic, self-oriented, and other-oriented dimensions.”


Synesthesia can be learned

Synesthesia is a condition in which one stimulus (like a letter) is automatically experienced with another attribute (like a colour) that is not actually present. About 4% of people have some form of this sensory mixing. It has generally been assumed that synesthesia is inherited because it runs in families. But it has been clear that some learning is involved in triggering and shaping synesthesia. “Simner and colleagues tested grapheme-color consistency in synesthetic children between 6 and 7 years of age, and again in the same children a year later. This interim year appeared critical in transforming chaotic pairings into consistent fixed associations. The same cohort were retested 3 years later, and found to have even more consistent pairings. Therefore, GCS (grapheme-color synesthesia) appears to emerge in early school years, where first major pressures to use graphemes are encountered, and then becomes cemented in later years. In fact, for certain abstract inducers, such as graphemes, it is implausible that humans are born with synesthetic associations to these stimuli. Hence, learning must be involved in the development of at least some forms of synesthesia.” There have been attempts to train people to have synesthetic experiences, but these have not produced the conscious experience of genuine synesthesia.

In the paper cited below, Bor and others managed to produce these genuine experiences in people showing no previous signs of synesthesia or a family history of it. They feel their success is due to more intensive training. “Here, we implemented a synesthetic training regime considerably closer to putative real-life synesthesia development than has previously been used. We significantly extended training time compared to all previous studies, employed a range of measures to optimize motivation, such as making tasks adaptive, and we selected our letter-color associations from the most common associations found in synesthetic and normal populations. Participants were tested on a range of cognitive and perceptual tasks before, during, and after training. We predicted that this extensive training regime would cause our participants to simulate synesthesia far more closely than previous synesthesia training studies have achieved.”

The phenomenology in these subjects was mild and not permanent, but it was definitely real synesthesia. The work has shown that although there is a genetic tendency, in typical synesthetes the condition is learned, probably during intensive, motivated developmental training. It also seems that the condition is one of associative memory rather than ‘extra wiring’.

Here is the abstract:

“Synesthesia is a condition where presentation of one perceptual class consistently evokes additional experiences in different perceptual categories. Synesthesia is widely considered a congenital condition, although an alternative view is that it is underpinned by repeated exposure to combined perceptual features at key developmental stages. Here we explore the potential for repeated associative learning to shape and engender synesthetic experiences. Non-synesthetic adult participants engaged in an extensive training regime that involved adaptive memory and reading tasks, designed to reinforce 13 specific letter-color associations. Following training, subjects exhibited a range of standard behavioral and physiological markers for grapheme-color synesthesia; crucially, most also described perceiving color experiences for achromatic letters, inside and outside the lab, where such experiences are usually considered the hallmark of genuine synesthetes. Collectively our results are consistent with developmental accounts of synesthesia and illuminate a previously unsuspected potential for new learning to shape perceptual experience, even in adulthood.”

Bor, D., Rothen, N., Schwartzman, D., Clayton, S., & Seth, A. (2014). Adults can be trained to acquire synesthetic experiences. Scientific Reports, 4. DOI: 10.1038/srep07089
