Language and handedness

 

I am both left-handed and dyslexic, so a recent paper on the connection between hemispheric dominance for hand and for language was one I had to read. The Mazoyer study (citation below) seems to be the first to look at language lateralization in a sample with a reasonable number of left-handed as well as right-handed people.

Whether someone was left-handed or right-handed was determined by self-reported category (the LH and RH labels in the paper). However, the subjects were also given the Edinburgh handedness questions, which yield an index between -100 (most left-handed) and +100 (most right-handed), with 0 perfectly ambidextrous. This was used as a measure of the extent and direction of lateralization of the hand’s motor control. The index need not tally with self-reporting, but it does quantify the lateralization. For the lateralization of language they used fMRI. Reciting a very over-learned list (like the months of the year) is almost symmetrical (not lateralized), so it was used as a baseline against forming a sentence, which varies in lateralization. Language is usually biased to the left hemisphere, as is hand control in right-handed people.
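
For concreteness, here is a minimal sketch of the two indices involved. The Edinburgh laterality quotient has a standard form; the language index below is the generic (L − R)/(L + R) laterality index and is my simplification, not the paper’s exact HFLI computation:

```python
def edinburgh_lq(right_ticks, left_ticks):
    """Standard Edinburgh Handedness Inventory quotient: +100 means
    exclusive right-hand preference, -100 exclusive left-hand preference."""
    return 100.0 * (right_ticks - left_ticks) / (right_ticks + left_ticks)

def laterality_index(left_activation, right_activation):
    """Generic fMRI laterality index; positive = left-hemisphere dominance.
    The paper's HFLI comes from the sentence-minus-word-list contrast, with
    weighting details omitted here (an illustrative simplification)."""
    return 100.0 * ((left_activation - right_activation)
                    / (left_activation + right_activation))
```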

Their conclusion was: “This study demonstrates that, except in a small sample of strong LH with rightward asymmetry, concordance of hemispheric dominance for hand and for language production occurs by chance. The present result thus questions the existence of a link between control of the hand and of language by the same hemisphere, while indicating that a rightward representation of language, although rare, is a normal variant of language lateralization.”

At first glance this is not what the graph appears to show. But if you ignore the white data points at the bottom, the graph shows that language lateralization (y-axis) is heavily biased to the left hemisphere, while the degree of that bias is spread evenly across hand lateralization (x-axis). The white data points, on the other hand, show that extreme right-hemisphere lateralization of language seems to occur only in a small group of extremely left-handed people, roughly 1% of the population. This group was also identified by Gaussian mixture modeling, which found 4 peaks, the 4th being this group of atypical left-handers. Without this group, the distributions for left- and right-handed people were not statistically different.
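
A minimal sketch of how that kind of mixture analysis works, using scikit-learn on simulated HFLI values (the group sizes and means below are loosely inspired by the paper’s figures, not its data):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated HFLI values: a large typical group, an ambilateral group,
# and a rare strongly-atypical (right-dominant) group.
hfli = np.concatenate([
    rng.normal(75, 15, 260),    # typical: strong left dominance
    rng.normal(15, 20, 30),     # ambilateral: near zero
    rng.normal(-60, 15, 7),     # strongly atypical: right dominance
]).reshape(-1, 1)

# Fit mixtures of 1..6 Gaussians and let BIC pick the best, analogous
# to the paper settling on 3 (RH) and 4 (LH) components.
fits = [GaussianMixture(n_components=k, random_state=0).fit(hfli)
        for k in range(1, 7)]
best = min(fits, key=lambda m: m.bic(hfli))
print(best.n_components, best.means_.ravel().round(1))
```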

[Image: lateralization graph]

Lateralization of language plotted against lateralization of hand control: “Figure 5. Plot of hemispheric functional lateralization for language as a function of manual preference strength. Manual preference strength was assessed using the Edinburgh inventory, ranging from 100 (exclusive use of the right hand) to -100 (exclusive use of the left hand). Subjects also self-reported whether they consider themselves as right-handed (RH, squares) or left-handed (LH, circles). HFLI, an index of hemispheric functional lateralization for language measured with fMRI during covert generation of sentences compared to covert generation of list of words, was used for classifying subjects as « Typical » (HFLI>50, bright color symbols), « Ambilateral » (-20<HFLI<50, pale color symbols), or « Strongly-atypical » (HFLI<-20, open symbols).”

Personally I find this very interesting. I have to assume I am in this small strongly-atypical group: I score -100 on the Edinburgh test and have fought with dyslexia all my life. From a more general perspective, it is interesting that the lateralization of language has a natural spread without regard to the other lateralization that gives handedness. Another interesting piece of data is that left-handed people appear (on the surface) not to be as left-handed as right-handed people are right-handed. The crossover seems to be at Edinburgh 50 (not 0 or -50). This may be an artifact. Left-handed people may learn to do a number of tasks in a right-handed manner because of the general handedness of the environment; a right-handed person has no incentive to do any particular task with the left hand. We may be looking at motivation rather than anatomy. Finally, although this is a good start to looking at the lateralization of language, language is a complex function and there may be a lot of detail hidden in a single fMRI procedure. The authors mention this: “Because typical subjects represent 90% of the population, it is important to assess whether or not they constitute a homogeneous group with respect to hemispheric dominance. Gaussian mixture model suggests the existence two distinct subgroups of typical individuals, having strong and moderate left language lateralization, respectively, this holding both for RH and for LH.”

Here is the abstract:

Hemispheric lateralization for language production and its relationships with manual preference and manual preference strength were studied in a sample of 297 subjects, including 153 left-handers (LH). A hemispheric functional lateralization index (HFLI) for language was derived from fMRI acquired during a covert sentence generation task as compared with a covert word list recitation. The multimodal HFLI distribution was optimally modeled using a mixture of 3 and 4 Gaussian functions in right-handers (RH) and LH, respectively. Gaussian function parameters helped to define 3 types of language hemispheric lateralization, namely “Typical” (left hemisphere dominance with clear positive HFLI values, 88% of RH, 78% of LH), “Ambilateral” (no dominant hemisphere with HFLI values close to 0, 12% of RH, 15% of LH) and “Strongly-atypical” (right-hemisphere dominance with clear negative HFLI values, 7% of LH). Concordance between dominant hemispheres for hand and for language did not exceed chance level, and most of the association between handedness and language lateralization was explained by the fact that all Strongly-atypical individuals were left-handed. Similarly, most of the relationship between language lateralization and manual preference strength was explained by the fact that Strongly-atypical individuals exhibited a strong preference for their left hand. These results indicate that concordance of hemispheric dominance for hand and for language occurs barely above the chance level, except in a group of rare individuals (less than 1% in the general population) who exhibit strong right hemisphere dominance for both language and their preferred hand. They call for a revisit of models hypothesizing common determinants for handedness and for language dominance.

Mazoyer, B., Zago, L., Jobard, G., Crivello, F., Joliot, M., Perchey, G., Mellet, E., Petit, L., & Tzourio-Mazoyer, N. (2014). Gaussian Mixture Modeling of Hemispheric Lateralization for Language in a Large Sample of Healthy Individuals Balanced for Handedness. PLoS ONE, 9(6). DOI: 10.1371/journal.pone.0101165


Distractions

 

What happens when you overcome distraction and remain focused? How does the brain retain its concentration? Science Daily (here) reports on a paper by Jacob and Nieder in Neuron, which shows that one part of the brain ignores the distraction completely while another attends to it very briefly and then returns to the memory task at hand.

Science Daily says, “The monkeys had to remember the number of dots in an image and reproduce the knowledge a moment later. While they were taking in the information, a distraction was introduced, showing a different number of dots. And even though the monkeys were mostly able to ignore the distraction, their concentration was disturbed and their memory performance suffered.

Measurements of the electrical activity of nerve cells in two key areas of the brain showed a surprising result: nerve cells in the prefrontal cortex signaled the distraction while it was being presented, but immediately restored the remembered information (the number of dots) once the distraction was switched off. In contrast, nerve cells in the parietal cortex were unimpressed by the distraction and reliably transmitted the information about the correct number of dots.”

The paper’s highlights and summary were:

  • Prefrontal suppression of distractors is not required to filter interfering stimuli
  • Distractors can be bypassed by storing and retrieving target information
  • Frontal and parietal cortex assume complementary functions to control working memory

Prefrontal cortex (PFC) and posterior parietal cortex are important for maintaining behaviorally relevant information in working memory. Here, we challenge the commonly held view that suppression of distractors by PFC neurons is the main mechanism underlying the filtering of task-irrelevant information. We recorded single-unit activity from PFC and the ventral intraparietal area (VIP) of monkeys trained to resist distracting stimuli in a delayed-match-to-numerosity task. Surprisingly, PFC neurons preferentially encoded distractors during their presentation. Shortly after this interference, however, PFC neurons restored target information, which predicted correct behavioral decisions. In contrast, most VIP neurons only encoded target numerosities throughout the trial. Representation of target information in VIP was the earliest and most reliable neuronal correlate of behavior. Our data suggest that distracting stimuli can be bypassed by storing and retrieving target information, emphasizing active maintenance processes during working memory with complementary functions for frontal and parietal cortex in controlling memory content.

It is interesting that this was not what the researchers expected to find. “The researchers were surprised by the two brain areas’ difference in sensitivity to distraction. ‘We had assumed that the prefrontal cortex is able to filter out all kinds of distractions, while the parietal cortex was considered more vulnerable to disturbances,’ says Professor Nieder. ‘We will have to rethink that. The memory-storage tasks and the strategies of each brain area are distributed differently from what we expected.’”

But I’m sure they found it made sense after thinking about it. We can look at it this way: the ventral intraparietal area is involved with the task, concentrating on the task and little else (bottom-up). The prefrontal cortex, on the other hand, is involved in somewhat higher-level executive operations (top-down). It looks at what is happening and, if it is just those researchers trying to distract me, I ignore it and carry on with the task. If on the other hand it is a big machine about to hit me, I will not ignore it; I will stop the silly dot test and get out of the way. Something has to be the look-out, take note of what is happening, and decide whether to ignore distractions.

 

Chimps appreciate rhythm

 

Science Daily has an item (here) on musical appreciation in chimpanzees. Previous studies using blues, classical and pop music found that although chimps can distinguish features of music and have preferences, they still preferred silence to the music. So were the chimps able to ‘hear’ the music but not appreciate its beauty? A new paper reports different results using non-western music: West African akan, North Indian raga, and Japanese taiko. Here the chimps liked the African and Indian music but not the Japanese, and they seemed to base their appreciation on the rhythm. The Japanese music had very regular, prominent beats, like western music, while the African and Indian music had varied beats. “The African and Indian music in the experiment had extreme ratios of strong to weak beats, whereas the Japanese music had regular strong beats, which is also typical of Western music.”

It may be that they like a more sophisticated rhythm. Or, as de Waal says, “Chimpanzees may perceive the strong, predictable rhythmic patterns as threatening, as chimpanzee dominance displays commonly incorporate repeated rhythmic sounds such as stomping, clapping and banging objects.”

Here is the abstract for M. Mingle, T. Eppley, M. Campbell, K. Hall, V. Horner, & F. de Waal; Chimpanzees Prefer African and Indian Music Over Silence; Journal of Experimental Psychology: Animal Learning and Cognition, 2014:

All primates have an ability to distinguish between temporal and melodic features of music, but unlike humans, in previous studies, nonhuman primates have not demonstrated a preference for music. However, previous research has not tested the wide range of acoustic parameters present in many different types of world music. The purpose of the present study is to determine the spontaneous preference of common chimpanzees (Pan troglodytes) for 3 acoustically contrasting types of world music: West African akan, North Indian raga, and Japanese taiko. Sixteen chimpanzees housed in 2 groups were exposed to 40 min of music from a speaker placed 1.5 m outside the fence of their outdoor enclosure; the proximity of each subject to the acoustic stimulus was recorded every 2 min. When compared with controls, subjects spent significantly more time in areas where the acoustic stimulus was loudest in African and Indian music conditions. This preference for African and Indian music could indicate homologies in acoustic preferences between nonhuman and human primates.

 

Animal – human bias

 

There is a paper (F. Gaunet, How do guide dogs of blind owners and pet dogs of sighted owners (Canis familiaris) ask their owners for food?, Animal Cognition 2008) mentioned in a blog (here) that is billed as showing that guide dogs do not know their owners are blind. Here is the abstract:

Although there are some indications that dogs (Canis familiaris) use the eyes of humans as a cue during human-dog interactions, the exact conditions under which this holds true are unclear. Analysing whether the interactive modalities of guide dogs and pet dogs differ when they interact with their blind, and sighted owners, respectively, is one way to tackle this problem; more specifically, it allows examining the effect of the visual status of the owner. The interactive behaviours of dogs were recorded when the dogs were prevented from accessing food that they had previously learned to access. A novel audible behaviour was observed: dogs licked their mouths sonorously. Data analyses showed that the guide dogs performed this behaviour longer and more frequently than the pet dogs; seven of the nine guide dogs and two of the nine pet dogs displayed this behaviour. However, gazing at the container where the food was and gazing at the owner (with or without sonorous mouth licking), gaze alternation between the container and the owner, vocalisation and contact with the owner did not differ between groups. Together, the results suggest that there is no overall distinction between guide and pet dogs in exploratory, learning and motivational behaviours and in their understanding of their owner’s attentional state, i.e. guide dogs do not understand that their owner cannot see (them). However, results show that guide dogs are subject to incidental learning and suggest that they supplemented their way to trigger their owners’ attention with a new distal cue.

It may or may not be true that these dogs do not know that their owners are blind. The experiment points in that direction, but not strongly. I could do an experiment with people talking on telephones and ‘show’ that a good many of them believe the person on the other end can see them, because they use hand gestures while talking. Or I could ‘show’ that my dog knows the difference between my eyesight and my husband’s: she does not move out of the way when we step over her in the daytime, but she moves at night so as not to be stepped on. And if there is a lot of moonlight she moves for my husband, who has poor sight in low light, but not for me. She could have learned this by trial and error or she could have reasoned it out as a difference in eyesight. We don’t know. But we do know that the person on the telephone who gestures is not ignorant of what the other person can see. That person is using a habitual routine without even being aware of how silly it is.

The problem is that we treat other people differently from other animals when we try to understand their thinking. We assume animals are unintelligent as a first assumption and demand proof of any instance of smarts. On the other hand, we insist that humans think things out consciously and demand proof of any instance of behavior not being under conscious control. We really should be using similar criteria for all animals, ourselves included.

 

My problem with Merge

 

When linguists talk about language they use the idea of an operation called Merge. Chomsky’s theory is that without Merge there is no Language. The idea is that two things are merged together to make one composite thing, and this can be done iteratively to make longer and longer strings. Is this the magic key to language?
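
To make the idea concrete, here is a toy rendering of the textbook definition, in which Merge(α, β) simply forms the pair {α, β} and can be applied to its own outputs. This is my gloss for illustration, not anyone’s formal implementation:

```python
def merge(alpha, beta):
    # Merge forms a new composite object from two existing ones; a tuple
    # stands in here for the unordered set {alpha, beta}.
    return (alpha, beta)

# Iterated application builds nested structure:
np_ = merge("the", "dog")          # {the, dog}
vp = merge("saw", np_)             # {saw, {the, dog}}
sentence = merge("she", vp)        # {she, {saw, {the, dog}}}
print(sentence)                    # ('she', ('saw', ('the', 'dog')))
```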

The ancient Greeks had ‘elements’ and everything was a combination of elements. The elements were water, fire, earth and air. That is a pretty good guess: matter in its three states and energy. This system was used to understand the world. It was not until it became clear that matter was atomic and atoms came in certain varieties that our current idea of elements replaced the Greek one. It was not that the Greek elements were illogical or that they could not be used to describe the world. The problem was that there was now a much better way to describe the world. The new way was less intuitive, less simple, less beautiful but it explained more, predicted better and fit well with other new knowledge about the world.

This illustrates my problem with conventional syntax and especially Merge. Syntax is not a disembodied logic system, because we know it is accomplished by cells and networks of cells in the brain. It is a biological thing. So a description of how language is formatted has to fit with our knowledge of how the brain works. It is not our theories of language that dictate how the brain works; it is the way the brain works that dictates how we understand language. Unfortunately, we have only just begun to understand the brain.

Some of the things that we think the brain does fit well with language. The brain uses the idea of causal links; events are understood in terms of cause and effect, and even in terms of actor – action – outcome. So it is not surprising that a great many utterances have a form that expresses this sort of relationship: subject – verb, or subject – verb – object. We are not surprised that the brain would use the same type of relationship to express an event as it does to create that event from sensory input and store it. Causal events are natural to the brain.

Association, categorization and attribution are natural too. We see a blue flower, but ‘blue’ and ‘flower’ are separate in the brain until they are bound together: objects are identified, their color is identified, and then the two are combined. So not only nouns and verbs are natural to the brain’s way of working, but so are attributes – adjectives and adverbs, for example. Copula forms are another example: they link an entity with another entity or with an attribute. And so it goes; most things I can think of about language seem natural to the brain (time, place, proper names, interjections etc.).

Even Merge, in a funny way, is normal to the brain in the character of clumping. Working memory is small, holding perhaps 4 to 7 items, we think. But by clumping items together and treating them as one item, the memory is able to deal with more. Clumping is natural to the brain.

This picture is like Changizi’s harnessing theory. The things we have created were created by harnessing pre-existing abilities of the brain. The abilities needed no new mutation to be harnessed to a new function; mutations to make a better fit would come after they were used for the new function – otherwise there would be no selective pressure adapting the ability to the new function.

So what is my problem with conventional syntax and especially with Merge? It is not a problem with most of the entities – parts of speech, cases, tenses, word order and the like. It is a problem with the rigidity of thought. Parsing diagrams make me grind my teeth. There is an implication that these trees are the way the brain works, and I have yet to encounter any good evidence that the diagrams reflect processes in the brain. The idea that a language is a collection of possible sentences bothers me – why does language have to be confined to sentences? I have read verbatim court records; actual complete and correctly formed sentences appear to be much less common than you would think. It is obvious that utterances are not always (probably not mostly) planned ahead. The mistakes people make often imply that they changed horses in mid-sentence. Most of what we know about our use of language implies that the process is not at all like the diagrams or the approaches of grammarians.

The word ‘Merge’, unlike say ‘modify’, is capitalized. This is apparently because some feel it is the essence of language, the one thing that makes human language unique and the one mutation required for our languages. But if merge is just an ordinary word, and pretty much like clumping, which I think it is, then poof goes the magic. My dog can clump and merge things into new wholes – she can organize a group of things into a ritual, recognize that ritual event from a single word or short phrase, or indicate it with a small action.

What is unique about humans is not Merge but the extent and sophistication of our communication. We do not need language to think in the way we do; language is built on the way we think. We need language in order to communicate better.

 

Children’s effect on language

 

It seems that children can invent language, but adults cannot; adults only invent ‘pidgins’. Once invented, languages are also re-made by each generation that learns them. So languages may carry the marks of how children think and communicate. A recent paper by Clay and others (citation below) investigates this idea.

They note that Nicaraguan Sign Language, in its development by deaf children, appeared to be driven by pre-adolescent children rather than older ones. “In its initial 10 to 15 years, NSL users developed an increasingly strong tendency to segment complex information into elements and express them in a linear fashion. Senghas et al. investigated how NSL signs and Spanish speakers’ gestures expressed a complex motion event, in which a shape’s manner and path of motion are shown simultaneously. They compared signs produced by successive cohorts of deaf NSL signers, who entered the special education school as young children (age 6 or younger) at different periods in the history of NSL…the second and third cohorts showed stronger tendencies to segment manner and path (of a movement) in two separate signs and linearly ordered the two elements.”

However, an artificial language transmitted from one person to another in a chain also develops some segmentation and linear expression of originally complex words. This paper sets out to test whether young children, adolescents and adults differ in their tendency to turn complex actions into segmented, linear language.

Subjects of different ages were asked to pantomime video clips. The clips showed one of two objects going up or down a hill while either bouncing or rotating. So there were three aspects to the motion (object, direction, manner), and the subjects were rated on how much they separated the aspects and mimicked them in a linear string, as opposed to mimicking the total motion in one go.

“Compared with adolescents and adults, young children (under 4) showed the strongest tendencies to segment and linearize the manner and path of a motion event that had been represented to them simultaneously. Moreover, the difference in the pantomime performance between the three age groups cannot be attributed to young children’s poor event perception or memory because the children performed very well in the event-recognition task and because the children’s performances in the pantomime task and the recognition task did not correlate. The results indicate that young children, but not adolescents and adults, have a bias to segment and linearize information in communication.”

The authors suggest that the limited processing capacity of young children may restrict them to dealing with one aspect at a time.

Here is the abstract:

Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children’s learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system.

Clay, Z., Pople, S., Hood, B., & Kita, S. (2014). Young Children Make Their Gestural Communication Systems More Language-Like: Segmentation and Linearization of Semantic Elements in Motion Events. Psychological Science. DOI: 10.1177/0956797614533967


Can fMRI be trusted?

 

The use of brain images is often criticized. A recent article by M. Farah (citation below) looks at the ‘kernels of truth’ behind the critiques and at how far we can trust the images. She is concerned by the conflation of legitimate worries about imaging with false ones.

The first criticism she addresses is that the BOLD signal in fMRI comes from oxygenated blood and not brain activity. True, but she points out that scientific measurements are very often indirect. What matters is the nature of the link involved in the measurement, and in this case, even though the exact nature of the link between brain activity and blood flow is not known, it has been established that they are causally related. One thing she does not make a point of is that there is not necessarily a time lag in the blood flow. The flow is controlled by astrocytes, and these glia appear (at least in the case of attention) to anticipate the need for increased blood flow. “In many ‘cognitive’ paradigms, blood flow modulation occurs in anticipation of or independent of the receipt of sensory input” – Moore & Cao (citation below).

There are complaints that the presentation of images involves fabrications of scale and colour: the colours can be misleading, the differences they represent can be tiny, and scales can be arbitrary. Farah points out that this is true across science. Graphs and illustrations are arbitrary and exaggerated in order to be easier for readers to see and understand, and this is not particularly prominent in fMRI images.

A number of different criticisms have been made about the emphasis that imaging puts on localization and modular thinking. Again this is somewhat true, but only early imaging did localization for localization’s sake – looking for activity in locations that had previously been shown to be involved in a particular process – to prove the validity of the method. Today’s imaging has gone past that. A related gripe is that there are no psychological hypotheses that can be decisively tested by imaging. Her answer is that this is true of all psychological methods; none are decisive. Nevertheless, imaging has helped to resolve issues. There are also complaints that imaging favours the production of modular hypotheses, biasing research. But the questions that science, in general, asks are those it has the tools to answer. This is not new, not true only of imaging, and not an unreasonable way to proceed.

Farah does agree with criticism of ‘wanton reverse inference’, but only when it is wanton. Although you can infer that a particular thought process is associated with a particular brain activity, you cannot turn that around: a particular brain activity does not imply a particular thought process. An area of the brain may do more than one thing. An example I still notice is the idea that amygdala activity has to do with fear, when fear is only one of the things the amygdala processes. Farah says wanton because this criticism should not be applied to MVPA (multivoxel pattern analysis), which is a valid special case of back-and-forth inference.

The statistics of imaging are another area where suspicion is raised. Some are simply concerned that statistics is a filter and not ‘the reality’. But the use of statistics in science is widespread; it is a very useful tool; stats do not mask reality but approximate it better than raw data does. There are, however, two types of statistical analysis that Farah does feel are wrong. They are often referred to as dead salmon activity (multiple comparisons) and voodoo correlations (circularity). These two faulty statistical methods can also be found in large complex data sets in other sciences: psychometrics, epidemiology, genetics, and finance.

“When significance testing is carried out with brain imaging data, the following problem arises: if we test all 50,000 voxels separately, then by chance alone, 2,500 would be expected to cross the threshold of significance at the p<0.05 level, and even if we were to use the more conservative p<0.001 level, we would expect 50 to cross the threshold by chance alone. This is known as the problem of multiple comparisons, and there is no simple solution to it…Statisticians have developed solutions to the problem of multiple comparisons. These include limiting the so-called family-wise error rate and false discovery rate.”
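
The arithmetic is just the expected false-positive count (50,000 × 0.05 = 2,500). As a sketch of one of the corrections she mentions, here is the Benjamini–Hochberg false-discovery-rate procedure run on simulated null data; this is a generic illustration, not the analysis from any particular study:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of tests kept while controlling FDR at q:
    find the largest k with p_(k) <= k*q/m and keep those k tests."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        keep[order[:k + 1]] = True
    return keep

rng = np.random.default_rng(1)
p_null = rng.uniform(size=50_000)            # 50,000 voxels, no real effect
print((p_null < 0.05).sum())                 # ~2,500 uncorrected "hits"
print(benjamini_hochberg(p_null).sum())      # essentially 0 after correction
```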

“Some researchers first identified the voxels most activated by their experimental task and then—with the same data set—carried out analyses only on those voxels to estimate the strength of the effect. Just as differences due to chance alone inflate the uncorrected significance levels in the dead fish experiment, differences due to chance alone contribute to the choice of voxels selected for the second analysis step. The result is that the second round of analyses is performed on data that have been “enriched” by the addition of chance effects that are consistent with the hypothesis being tested. In their survey of the social neuroscience literature, Vul and colleagues found many articles reporting significant and sizeable correlations with proper analyses, but they also found a large number of articles with circular methods that inflated the correlation values and accompanying significance levels.”
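
Circularity is easy to demonstrate by simulation. In this sketch every voxel is pure noise, yet selecting the ‘best’ voxels and estimating the correlation on the same data yields an impressive-looking effect, while an independent split does not (illustrative code, not the Vul et al. analysis):

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_voxels = 30, 5000
behaviour = rng.normal(size=n_subjects)
voxels = rng.normal(size=(n_subjects, n_voxels))   # no true effect anywhere

def corr_with(data, beh):
    # Pearson correlation of each voxel column with the behavioural score.
    b = (beh - beh.mean()) / beh.std()
    v = (data - data.mean(0)) / data.std(0)
    return (v * b[:, None]).mean(0)

# Circular: select the 10 most-correlated voxels, then report their
# correlation using the very same data.
r = corr_with(voxels, behaviour)
top = np.argsort(np.abs(r))[-10:]
print(np.abs(r[top]).mean())        # inflated, roughly 0.5-0.6

# Non-circular: select on one half of the subjects, estimate on the other.
half = n_subjects // 2
top2 = np.argsort(np.abs(corr_with(voxels[:half], behaviour[:half])))[-10:]
r_test = corr_with(voxels[half:], behaviour[half:])
print(np.abs(r_test[top2]).mean())  # near zero, the honest answer
```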

Finally she tackles the question of influence. The complaint is that images are too convincing, especially to the general public. This may be true in some cases, but attempts to replicate many of the undue-influence studies have not shown the effect. It may be the notion of science, rather than imaging in particular, that is convincing. Or it may be that people have become used to images and the coloured blobs no longer have undue impact. There is also the question of resources: some feel that imaging studies get the money, acceptance in important journals, interest from the media, and so on. There seems to be little actual evidence for this, and it may often be sour grapes.

Should we trust fMRI? Yes, within reason. A single paper, with images or without, cannot be taken as True with a capital T, but provided the stats and inferences are sound, images are as trustworthy as other methods.

Farah, M.J. (2014). Brain images, babies, and bathwater: critiquing critiques of functional neuroimaging. The Hastings Center Report, Spec No. PMID: 24634081

Moore, C., & Cao, R. (2008). The Hemo-Neural Hypothesis: On the Role of Blood Flow in Information Processing. Journal of Neurophysiology, 99(5), 2035-2047. DOI: 10.1152/jn.01366.2006


What is in a smile?

 

We distinguish genuine from fake smiles, even though we appreciate the polite sort of fake smile in many cases. I had thought it was a settled matter. Smiles are marked by the raising of the corners of the mouth and pulling them back; a broad smile (fake or real) opens the mouth by lowering the jaw; but only authentic smiles are marked by crow’s feet at the corners of the eyes – the Duchenne marker. Would you believe that it is just not that simple? The smile is a dynamic thing, and research has mostly used static pictures to investigate smiles. A recent paper by Korb (citation below) examines dynamic smiles. Here is the abstract:

The mechanisms through which people perceive different types of smiles and judge their authenticity remain unclear. Here, 19 different types of smiles were created based on the Facial Action Coding System (FACS), using highly controlled, dynamic avatar faces. Participants observed short videos of smiles while their facial mimicry was measured with electromyography (EMG) over four facial muscles. Smile authenticity was judged after each trial. Avatar attractiveness was judged once in response to each avatar’s neutral face. Results suggest that, in contrast to most earlier work using static pictures as stimuli, participants relied less on the Duchenne marker (the presence of crow’s feet wrinkles around the eyes) in their judgments of authenticity. Furthermore, mimicry of smiles occurred in the Zygomaticus Major (smile muscle – positive), Orbicularis Oculi (Duchenne muscle – positive), and Corrugator muscles (frown muscle – negative). Consistent with theories of embodied cognition, activity in these muscles predicted authenticity judgments, suggesting that facial mimicry influences the perception of smiles. However, no significant mediation effect of facial mimicry was found. Avatar attractiveness did not predict authenticity judgments or mimicry patterns.

In these experiments stronger smiles were found to be both more realistic and more authentic. This did not depend as much as previously thought on the eyes: the smile-muscle action, the opening of the mouth and the lack of a frown in the brow were as important as the Duchenne marker. The subjects showed electrical activity in the muscles of their own faces, mimicking the video being shown, and whether a subject found the smile genuine could be predicted from this mimicry. The clearest mimicry was the combination of smile and frown muscles. The two move in opposite directions: in a smile the Zygomaticus is activated and the Corrugator is relaxed, while the opposite happens in a frown. The Masseter (jaw) muscle did not show mimicry. Since this is different from the findings on static smiles, the question is raised whether smiles are judged by a different pathway when they are dynamic.
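
A sketch of the kind of prediction involved: trial-level EMG change scores for three muscles feeding a simple regression onto authenticity ratings. The data here are simulated with the sign pattern the paper reports (positive for Zygomaticus and Orbicularis Oculi, negative for Corrugator); it is an illustration, not the authors’ actual mediation analysis:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_trials = 200
zygomaticus = rng.normal(size=n_trials)    # smile muscle (+)
orbicularis = rng.normal(size=n_trials)    # Duchenne muscle (+)
corrugator = rng.normal(size=n_trials)     # frown muscle (-)

# Simulated authenticity ratings built from the assumed mimicry pattern.
rating = (0.6 * zygomaticus + 0.4 * orbicularis - 0.5 * corrugator
          + rng.normal(scale=0.8, size=n_trials))

X = np.column_stack([zygomaticus, orbicularis, corrugator])
model = LinearRegression().fit(X, rating)
print(model.coef_.round(2))   # recovers the +, +, - pattern
```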

Embodiment theories propose that facial mimicry is a low-level motor process that can generate or modify emotional processes via facial feedback. However, other scholars favor the view that facial expressions are the downstream reflection of an internally generated emotion, and therefore play at best a minor role at a later stage of the emotion generation process. The main critique of the embodiment view is based on the observation that, in addition to their well-documented role in facial mimicry, the Zygomaticus and Corrugator muscles respond, respectively, to positive and negative emotional stimuli not containing facial expressions. However, the Orbicularis Oculi muscle is not clearly associated with positive or negative emotions and contracts, for example, during smiling (producing crow’s feet) as well as during a startle reflex in response to a sudden loud noise.”

This points to a low-level motor process, because the Duchenne marker is mimicked in the Orbicularis muscle even though it is not actually diagnostic of a smile (it can occur in other situations and can be missing in some smiles). It is more likely that the identification of a smile is due to mimicry than that mimicry is due to the identification of a smile. The authors suggest that this should be further investigated.

“Nevertheless, the hypothesis that facial mimicry mediates the effect of smile characteristics on rated authenticity remains the most parsimonious one based on the fact that 1) facial mimicry is a costly behavior for the organism, 2) participants spontaneously mimicked the perceived smiles, and 3) this mimicry predicted ratings of authenticity. Importantly, the reverse hypothesis, i.e. that perceived authenticity may have caused participants’ facial reactions, seems less likely based on the finding that participants’ Orbicularis Oculi muscle was most activated in response to two types of smiles that contained the highest degree of the corresponding (marker), but resulted in very different ratings of authenticity.”

I hope that researchers will follow up on the idea that static and dynamic images of smiles are processed differently. Would there be clues in the order and timing of a smile unfolding that point to its authenticity? If fake and genuine smiles are produced by different mechanisms, then perhaps they would be quite different in their dynamics. Using avatars is a neat way to vary the dynamics of the muscle movements.

Korb, S., With, S., Niedenthal, P., Kaiser, S., & Grandjean, D. (2014). The Perception and Mimicry of Facial Movements Predict Judgments of Smile Authenticity. PLoS ONE, 9(6). DOI: 10.1371/journal.pone.0099194


Why do we get pleasure from sad music?

 

Sadness is a negative emotion; we recognize sadness in some music; and yet we often enjoy listening to sad music. We can be positive about a negative emotion. A recent paper by Kawakami (citation below) weighs several hypotheses that might explain this contradiction.

The hypothesis that the response has to do with musical training (i.e. that the pleasure comes from appreciation of, and familiarity with, the art involved) was shown to be false by finding no difference in response between musicians and non-musicians. “Participants’ emotional responses were not associated with musical training. Music that was perceived as tragic evoked fewer sad and more romantic notions in both musicians and non-musicians. Therefore, our hypothesis—when participants listened to sad (i.e., minor-key) music, those with more musical experience (relative to those with less experience) would feel (subjectively experience) more pleasant emotions than they would perceive (objectively hear in the music)—was not supported.”

The key innovation in this experimental setup was that the subjects were not just asked how sad they found the music but were given an extensive quiz. For each of 2 pieces of music, played in both minor and major keys, the subjects rated the experience in terms of 62 words and phrases, rating both their perception of the music’s emotional message and the personal emotion they actually felt. Four factors were extracted from the 62 emotional descriptions: tragic emotion, heightened emotion, romantic emotion, blithe emotion.
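
The reduction from 62 rating items to four factors is a standard factor analysis. Here is a minimal sketch with scikit-learn on placeholder data (the study’s actual extraction and rotation choices may differ):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
# Placeholder ratings: (listeners x 62 emotion words); real data would
# come from the perceived- and felt-emotion questionnaires.
ratings = rng.normal(size=(200, 62))

fa = FactorAnalysis(n_components=4, random_state=0).fit(ratings)
loadings = fa.components_              # (4 factors x 62 items)

# Inspect the items loading highest on each factor; in the paper these
# groupings were named tragic, heightened, romantic and blithe emotion.
for i, row in enumerate(loadings):
    print(f"factor {i}:", np.argsort(np.abs(row))[-5:])
```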

As would be expected, tragic emotion was rated higher for the minor-key and lower for the major-key music, for both perceived and felt emotion. Likewise, it is no surprise that blithe emotion showed the opposite pattern: high for the major and low for the minor, for both felt and perceived emotion. Heightened emotion was only slightly higher for the sad minor music than for the happy major. Romantic emotion was moderately higher for the happy music than for the sad. However, there were differences between felt and perceived emotion, and these were significant for the minor music: it was felt to be less tragic, more romantic and more blithe than it was perceived. This difference between felt and perceived emotion is not too difficult to imagine. Suppose you are arguing with someone and you make them very angry: you can perceive their anger while your own feelings may be of smug satisfaction. Although emotion can be very contagious, it is not a given that felt emotion will be identical to perceived emotion.

The catharsis hypothesis would imply a deeply felt sadness that lifts depression, but this is not what was seen. The next hypothesis the authors discuss is ‘sweet anticipation’: a listener has certain expectations of what will be heard next, and a positive emotion is felt when the prediction is fulfilled. This could contribute to the effect (though not by way of musical training).

A third hypothesis is that we have an art-experience mode in which we get positive emotions from exposure to art; if we believe we are in the presence of ‘art’, that in itself is positive. “When we listen to music, being in a listening situation is obvious to us; therefore, how emotion is evoked would be influenced by our cognitive appraisal of listening to music. For example, a cognitive appraisal of listening to sad music as engagement with art would promote positive emotion, regardless of whether that music evoked feelings of unpleasant sadness, thereby provoking the experience of ambivalent emotions in response to sad music.” Again, this could contribute.

Their new and favourite hypothesis is ‘vicarious emotion’. “In sum, we consider emotion experienced in response to music to be qualitatively different from emotion experienced in daily life; some earlier studies also proposed that music may evoke music-specific emotions. The difference between the emotions evoked in daily life and music-induced emotions is the degree of directness attached to emotion-evoking stimuli. Emotion experienced in daily life is direct in nature because the stimuli that evoke the emotion could be threatening. However, music is a safe stimulus with no relationship to actual threat; therefore, emotion experienced through music is not direct in nature. The latter emotion is experienced via an essentially safe activity such as listening to music. We call this type of emotion vicarious emotion.” … “That is, even if the music evokes a negative emotion, listeners are not faced with any real threat; therefore, the sadness that listeners feel has a pleasant, rather than an unpleasant, quality to it. This suggests that sadness is multifaceted, whereas it has previously been regarded as a solely unpleasant emotion.”

I find that the notion of vicarious emotion could also explain why we can be entertained by, and enjoy, frightening plays, books and movies. All sorts of negative emotions are sought out as vicarious experiences and enjoyed. Many of the things we do for leisure, and much of our enjoyment of art, have a good deal of vicarious emotional content for us to safely enjoy and even learn from.


Kawakami, A., Furukawa, K., & Okanoya, K. (2014). Music evokes vicarious emotions in listeners. Frontiers in Psychology, 5. DOI: 10.3389/fpsyg.2014.00431