I will not be posting to my blog for a while for various reasons including a holiday - see you in a few weeks.
I am both left-handed and dyslexic, so a recent paper on the connection between hemispheric dominance for hand and for language was one I had to read. The Mazoyer study seems to be the first to use a reasonable number of left- as well as right-handed people to look at language lateralization (citation below).
Whether someone was left-handed or right-handed was determined by self-report (the LH and RH identifiers in the paper). However, the subjects were also given the Edinburgh questions, which give an index between -100 (most left-handed) and +100 (most right-handed), with 0 as perfectly ambidextrous. This was used as a measure of the extent and direction of lateralization of the hand's motor control. The index need not tally with self-reporting, but it does quantify the lateralization. Language lateralization was measured with fMRI. Reciting a very over-learned list (like the months of the year) is almost symmetrical (not lateralized), so it was used as a baseline against forming a sentence, which varies in lateralization. Language is usually biased to the left hemisphere, as is hand control in right-handed people.
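The Edinburgh index itself is conventionally computed as a laterality quotient from the right-hand and left-hand tallies across the inventory items. The post does not spell out that scoring formula, so treat this as a sketch of the usual convention:

```python
def edinburgh_lq(right_ticks, left_ticks):
    """Edinburgh laterality quotient: 100 * (R - L) / (R + L).

    right_ticks / left_ticks are the total numbers of 'right hand' and
    'left hand' answers across the inventory items (the standard scoring
    convention; an assumption here, not stated in the post).
    Returns +100 for exclusive right-hand use, -100 for exclusive left.
    """
    return 100.0 * (right_ticks - left_ticks) / (right_ticks + left_ticks)

print(edinburgh_lq(10, 0))   # exclusively right-handed -> 100.0
print(edinburgh_lq(0, 10))   # exclusively left-handed  -> -100.0
print(edinburgh_lq(5, 5))    # perfectly ambidextrous   -> 0.0
```

Note that mixed-handers land anywhere in between, which is why the index quantifies the *degree* of hand lateralization rather than just the category.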
Their conclusion was: “This study demonstrates that, except in a small sample of strong LH with rightward asymmetry, concordance of hemispheric dominance for hand and for language production occurs by chance. The present result thus questions the existence of a link between control of the hand and of language by the same hemisphere, while indicating that a rightward representation of language, although rare, is a normal variant of language lateralization.”
At first glance this is not what the graph appears to show. But if you ignore the white data points at the bottom, the amount of language lateralization (y axis) is heavily biased to the left hemisphere, while that bias is spread evenly across the range of hand lateralization (x axis). The white data points, on the other hand, show that extreme right-hemisphere lateralization of language only seems to occur in a small group of extremely left-handed people, approximately 1% of the population. This group was also identified by the Gaussian analysis, which found 4 peaks, the 4th being this group of atypical left-handed people. Without this group, the peaks for left- and right-handed people were not statistically different.
Lateralization of language plotted against lateralization of hand control: “Figure 5. Plot of hemispheric functional lateralization for language as a function of manual preference strength. Manual preference strength was assessed using the Edinburgh inventory, ranging from 100 (exclusive use of the right hand) to -100 (exclusive use of the left hand). Subjects also self-reported whether they consider themselves as right- handed (RH, squares) or left-handed (LH, circles). HFLI, an index of hemispheric functional lateralization for language measured with fMRI during covert generation of sentences compared to covert generation of list of words, was used for classifying subjects as « Typical » (HFLI>50, bright color symbols), « Ambilateral» (-20<HFLI<50, pale color symbols), or « Strongly-atypical » (HFLI<-20, open symbols).”
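The caption's cut-offs translate directly into a classifier. A minimal sketch (the caption does not say which side of the boundary the exact values 50 and -20 fall on, so they are grouped with "Ambilateral" here as an assumption):

```python
def classify_hfli(hfli):
    """Classify a subject by the HFLI cut-offs in the Mazoyer et al.
    figure caption: Typical (> 50), Ambilateral (-20 to 50),
    Strongly-atypical (< -20). Exact boundary values are grouped with
    'Ambilateral' (an assumption; the caption leaves them unspecified)."""
    if hfli > 50:
        return "Typical"
    if hfli < -20:
        return "Strongly-atypical"
    return "Ambilateral"

print(classify_hfli(72))    # -> Typical
print(classify_hfli(10))    # -> Ambilateral
print(classify_hfli(-60))   # -> Strongly-atypical
```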
Personally I find this very interesting. I have to assume I am in this small strongly atypical group: I score -100 on the Edinburgh test and have fought with dyslexia all my life. But from a more general perspective, it is interesting that the lateralization of language has a natural spread without regard to the other lateralization that gives handedness. Another interesting piece of data is that left-handed people appear (on the surface) not to be as left-handed as right-handed people are right-handed. The crossover seems to be at Edinburgh 50 (not 0 or -50). This may be an artifact: left-handed people may learn to do a number of tasks in a right-handed manner because of the general handedness of the environment, while a right-handed person has no incentive to do any particular task with the left hand. We may be looking at motivation rather than anatomy. Finally, although this is a good start to looking at the lateralization of language, language is a complex function and there may be a lot of detail hidden in a single fMRI procedure. The authors mention this: “Because typical subjects represent 90% of the population, it is important to assess whether or not they constitute a homogeneous group with respect to hemispheric dominance. Gaussian mixture model suggests the existence two distinct subgroups of typical individuals, having strong and moderate left language lateralization, respectively, this holding both for RH and for LH.”
Here is the abstract:
“Hemispheric lateralization for language production and its relationships with manual preference and manual preference strength were studied in a sample of 297 subjects, including 153 left-handers (LH). A hemispheric functional lateralization index (HFLI) for language was derived from fMRI acquired during a covert sentence generation task as compared with a covert word list recitation. The multimodal HFLI distribution was optimally modeled using a mixture of 3 and 4 Gaussian functions in right-handers (RH) and LH, respectively. Gaussian function parameters helped to define 3 types of language hemispheric lateralization, namely ‘‘Typical’’ (left hemisphere dominance with clear positive HFLI values, 88% of RH, 78% of LH), ‘‘Ambilateral’’ (no dominant hemisphere with HFLI values close to 0, 12% of RH, 15% of LH) and ‘‘Strongly-atypical’’ (right-hemisphere dominance with clear negative HFLI values, 7% of LH). Concordance between dominant hemispheres for hand and for language did not exceed chance level, and most of the association between handedness and language lateralization was explained by the fact that all Strongly-atypical individuals were left-handed. Similarly, most of the relationship between language lateralization and manual preference strength was explained by the fact that Strongly-atypical individuals exhibited a strong preference for their left hand. These results indicate that concordance of hemispheric dominance for hand and for language occurs barely above the chance level, except in a group of rare individuals (less than 1% in the general population) who exhibit strong right hemisphere dominance for both language and their preferred hand. They call for a revisit of models hypothesizing common determinants for handedness and for language dominance.”
Mazoyer, B., Zago, L., Jobard, G., Crivello, F., Joliot, M., Perchey, G., Mellet, E., Petit, L., & Tzourio-Mazoyer, N. (2014). Gaussian Mixture Modeling of Hemispheric Lateralization for Language in a Large Sample of Healthy Individuals Balanced for Handedness. PLoS ONE, 9(6). DOI: 10.1371/journal.pone.0101165
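The Gaussian mixture modelling the paper relies on can be sketched with a bare-bones one-dimensional expectation-maximization fit. The data below are synthetic stand-ins for HFLI values (two well-separated components, whereas the paper fit 3 and 4 components to the real RH and LH data), so the numbers are purely illustrative:

```python
import numpy as np

def fit_gmm_1d(x, k, n_iter=200):
    """Fit a 1-D Gaussian mixture with k components by EM.

    Returns (weights, means, variances). Deterministic initialization:
    means spread evenly over the data range, shared initial variance."""
    x = np.asarray(x, dtype=float)
    mu = np.linspace(x.min(), x.max(), k)      # initial means
    var = np.full(k, x.var())                  # shared initial variance
    w = np.full(k, 1.0 / k)                    # uniform initial weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)            # guard against collapse
    return w, mu, var

# Synthetic 'HFLI-like' sample: a large left-lateralized peak near +60
# and a small right-lateralized peak near -40 (illustrative only).
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(60, 10, 500), rng.normal(-40, 10, 100)])
w, mu, var = fit_gmm_1d(x, k=2)
print(np.sort(mu))  # component means, roughly [-40, 60]
```

The peaks the paper reports correspond to the fitted component means; deciding how many components the real distribution supports is a model-selection step (e.g. comparing fits with different k) that this sketch omits.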
What happens when you overcome a distraction and remain focused? How does the brain retain its concentration? Science Daily (here) reports on a paper by Jacobs and Nieder in Neuron, which shows that one part of the brain ignores the distraction completely while another attends to it very briefly and then returns to the memory task at hand.
Science Daily says, “The monkeys had to remember the number of dots in an image and reproduce the knowledge a moment later. While they were taking in the information, a distraction was introduced, showing a different number of dots. And even though the monkeys were mostly able to ignore the distraction, their concentration was disturbed and their memory performance suffered.
Measurements of the electrical activity of nerve cells in two key areas of the brain showed a surprising result: nerve cells in the prefrontal cortex signaled the distraction while it was being presented, but immediately restored the remembered information (the number of dots) once the distraction was switched off. In contrast, nerve cells in the parietal cortex were unimpressed by the distraction and reliably transmitted the information about the correct number of dots.”
The paper’s highlights and summary were:
- Prefrontal suppression of distractors is not required to filter interfering stimuli
- Distractors can be bypassed by storing and retrieving target information
- Frontal and parietal cortex assume complementary functions to control working memory
Prefrontal cortex (PFC) and posterior parietal cortex are important for maintaining behaviorally relevant information in working memory. Here, we challenge the commonly held view that suppression of distractors by PFC neurons is the main mechanism underlying the filtering of task-irrelevant information. We recorded single-unit activity from PFC and the ventral intraparietal area (VIP) of monkeys trained to resist distracting stimuli in a delayed-match-to-numerosity task. Surprisingly, PFC neurons preferentially encoded distractors during their presentation. Shortly after this interference, however, PFC neurons restored target information, which predicted correct behavioral decisions. In contrast, most VIP neurons only encoded target numerosities throughout the trial. Representation of target information in VIP was the earliest and most reliable neuronal correlate of behavior. Our data suggest that distracting stimuli can be bypassed by storing and retrieving target information, emphasizing active maintenance processes during working memory with complementary functions for frontal and parietal cortex in controlling memory content.
It is interesting that this was not what the researchers expected to find. “The researchers were surprised by the two brain areas’ difference in sensitivity to distraction. “We had assumed that the prefrontal cortex is able to filter out all kinds of distractions, while the parietal cortex was considered more vulnerable to disturbances,” says Professor Nieder. “We will have to rethink that. The memory-storage tasks and the strategies of each brain area are distributed differently from what we expected.””
But I’m sure they found it made sense after thinking about it. We can look at it this way: the ventral intraparietal area is involved with the task, concentrating on the task and little else (bottom-up). The prefrontal cortex, on the other hand, is involved in somewhat higher-level executive operations (top-down). It looks at what is happening and, as it is only those researchers trying to distract me, I ignore it and carry on with the task. If, on the other hand, it is a big machine about to hit me, I will not ignore it; I will stop the silly dot test while getting out of the way. Something has to be a look-out, take note of things that are happening, and decide whether to ignore distractions.
Science Daily has an item (here) on musical appreciation in chimpanzees. Previous studies using blues, classical and pop music found that although chimps can distinguish features of music and have preferences, they still preferred silence to the music. So were the chimps able to ‘hear’ the music but not appreciate its beauty? A new paper has different results using non-western music: West African akan, North Indian raga, and Japanese taiko. Here the chimps liked the African and Indian music but not the Japanese. They seemed to base their appreciation on the rhythm: the Japanese music has very regular, prominent beats like western music, while the African and Indian music has varied beats. “The African and Indian music in the experiment had extreme ratios of strong to weak beats, whereas the Japanese music had regular strong beats, which is also typical of Western music.”
It may be that they like a more sophisticated rhythm. Or, as de Waal says, “Chimpanzees may perceive the strong, predictable rhythmic patterns as threatening, as chimpanzee dominance displays commonly incorporate repeated rhythmic sounds such as stomping, clapping and banging objects.”
Here is the abstract for M. Mingle, T. Eppley, M. Campbell, K. Hall, V. Horner, F. de Waal; Chimpanzees Prefer African and Indian Music Over Silence; Journal of Experimental Psychology: Animal Learning and Cognition, 2014:
“All primates have an ability to distinguish between temporal and melodic features of music, but unlike humans, in previous studies, nonhuman primates have not demonstrated a preference for music. However, previous research has not tested the wide range of acoustic parameters present in many different types of world music. The purpose of the present study is to determine the spontaneous preference of common chimpanzees (Pan troglodytes) for 3 acoustically contrasting types of world music: West African akan, North Indian raga, and Japanese taiko. Sixteen chimpanzees housed in 2 groups were exposed to 40 min of music from a speaker placed 1.5 m outside the fence of their outdoor enclosure; the proximity of each subject to the acoustic stimulus was recorded every 2 min. When compared with controls, subjects spent significantly more time in areas where the acoustic stimulus was loudest in African and Indian music conditions. This preference for African and Indian music could indicate homologies in acoustic preferences between nonhuman and human primates.”
There is a paper (F. Gaunet, How do guide dogs of blind owners and pet dogs of sighted owners (Canis familiaris) ask their owners for food?, Animal Cognition 2008) mentioned in a blog (here) that is billed as showing that guide dogs do not know their owners are blind. Here is the abstract:
Although there are some indications that dogs (Canis familiaris) use the eyes of humans as a cue during human-dog interactions, the exact conditions under which this holds true are unclear. Analysing whether the interactive modalities of guide dogs and pet dogs differ when they interact with their blind, and sighted owners, respectively, is one way to tackle this problem; more specifically, it allows examining the effect of the visual status of the owner. The interactive behaviours of dogs were recorded when the dogs were prevented from accessing food that they had previously learned to access. A novel audible behaviour was observed: dogs licked their mouths sonorously. Data analyses showed that the guide dogs performed this behaviour longer and more frequently than the pet dogs; seven of the nine guide dogs and two of the nine pet dogs displayed this behaviour. However, gazing at the container where the food was and gazing at the owner (with or without sonorous mouth licking), gaze alternation between the container and the owner, vocalisation and contact with the owner did not differ between groups. Together, the results suggest that there is no overall distinction between guide and pet dogs in exploratory, learning and motivational behaviours and in their understanding of their owner’s attentional state, i.e. guide dogs do not understand that their owner cannot see (them). However, results show that guide dogs are subject to incidental learning and suggest that they supplemented their way to trigger their owners’ attention with a new distal cue.
It may or may not be true that these dogs do not know that their owners are blind. The experiment suggests it, but not strongly. I could do an experiment with people talking on telephones and ‘show’ that a good many of them believe that the person on the other end of the phone can see them, because they use hand gestures while talking. Or I could ‘show’ that my dog knows the difference between my eyesight and my husband’s. She does not move out of the way if we step over her in the daytime. She moves at night so as not to be stepped on. But if there is a lot of moonlight she moves for my husband, who has poor sight in low light, but not for me. She could have learned this by trial and error or she could have reasoned it out as a difference in eyesight. We don’t know. But we do know that the person on the telephone who gestures is not ignorant of what the other person can see. That person is using a habitual routine without even being aware of how silly it is.
The problem is that we treat other people differently from other animals when we try to understand their thinking. We assume animals are unintelligent as a first assumption and have to prove any instance of smarts. On the other hand, we insist that humans think things out consciously and have to establish any instance of behavior not being under conscious control. We really should be using similar criteria for all animals, ourselves included.
When linguists talk about language they use the idea of a function called Merge. Chomsky's theory is that without Merge there is no Language. The idea is that two things are merged together to make one composite thing, and this can be done iteratively to make longer and longer strings. Is this the magic key to language?
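The operation itself is simple to state. As a toy illustration only (linguists work with sets and features, not tuples), Merge can be written as a two-place function applied iteratively:

```python
def merge(a, b):
    """Chomsky-style Merge: combine two syntactic objects into one unit
    that can itself be an input to further Merges."""
    return (a, b)

# Iterated Merge builds unboundedly deep binary structure:
np_ = merge("the", "book")     # ('the', 'book')
vp = merge("read", np_)        # ('read', ('the', 'book'))
s = merge("she", vp)           # ('she', ('read', ('the', 'book')))
print(s)
```

Each output of `merge` is a single composite object, which is what allows the iteration to grow structures of any depth from a finite vocabulary.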
The ancient Greeks had ‘elements’ and everything was a combination of elements. The elements were water, fire, earth and air. That is a pretty good guess: matter in its three states and energy. This system was used to understand the world. It was not until it became clear that matter was atomic and atoms came in certain varieties that our current idea of elements replaced the Greek one. It was not that the Greek elements were illogical or that they could not be used to describe the world. The problem was that there was now a much better way to describe the world. The new way was less intuitive, less simple, less beautiful but it explained more, predicted better and fit well with other new knowledge about the world.
This illustrates my problem with conventional syntax and especially Merge. Syntax is not a disembodied logic system, because we know it is accomplished by cells and networks of cells in the brain. It is a biological thing. So a description of how language is formatted has to fit with our knowledge of how the brain works. It is not our theories of language that dictate how the brain works; it is the way the brain works that dictates how we understand language. Unfortunately, we have only just begun to understand the brain.
Some of the things that we think the brain does fit well with language. The brain uses the idea of causal links, events are understood in terms of cause and effect and even in terms of actor – action – outcome. So it is not surprising that a great many utterances have a form that expresses this sort of relationship: subject – verb or subject – verb – object. We are not surprised that the brain would use the same type of relationship to express an event as it does to create that event from sensory input and store it. Causal events are natural to the brain.
Association, categorization and attribution are natural too. We see a blue flower, but the blue and the flower are separate in the brain until they are bound together: objects are identified, their color is identified, and then the two are combined. So not only nouns and verbs are natural to the brain’s way of working, but so are attributes – adjectives and adverbs for example. Copula forms are another example: they link an entity with another entity or with an attribute. And so it goes; most things I can think of about language seem natural to the brain (time, place, proper names, interjections etc.).
Even Merge, in a funny way, is normal to the brain in the form of clumping. Working memory is small and holds 4 to 7 items, we think, but by clumping items together and treating them as one item, it is able to deal with more. Clumping is natural to the brain.
This picture is like Changizi’s harnessing theory. The things we have created were created by harnessing pre-existing abilities of the brain. The abilities needed no new mutation to be harnessed to a new function; mutations to make a better fit would come after they were used for the new function – otherwise there would be no selective pressure adapting the ability to the new function.
So what is my problem with conventional syntax and especially with Merge? It is not a problem with most of the entities – parts of speech, cases, tenses, word order and the like. It is a problem with the rigidity of thought. Parsing diagrams make me grind my teeth. There is an implication that these trees are the way the brain works, but I have yet to encounter any good evidence that those diagrams reflect processes in the brain. The idea that a language is a collection of possible sentences also bothers me – why does language have to be confined to sentences? I have read verbatim court records – actual complete and correctly formed sentences appear to be much less common than you would think. It is obvious that utterances are not always (probably not mostly) planned ahead; the mistakes people make often imply that they changed horses in mid-sentence. Most of what we know about our use of language implies that the process is not at all like the diagrams or the approaches of grammarians.
The word ‘Merge’, unlike say ‘modify’, is capitalized. This is apparently because some feel it is the essence of language, the one thing that makes human language unique and the one mutation required for our languages. But if merge is just an ordinary word and pretty much like clumping, which I think it is, then poof goes the magic. My dog can clump and merge things into new wholes – she can organize a group of things into a ritual, recognize that ritual event from a single word or short phrase, or indicate it with a small action.
What is unique about humans is not Merge but the extent and sophistication of our communication. We do not need language to think in the way we do, language is built on the way we think. We need language in order to communicate better.