Tag Archives: gestures

Hand gestures

About 20 years ago I took an interest in the non-verbal part of speech communication: gesture, facial expression, posture, tone of voice. During this time I watched the hands of speakers carefully and noted how they gestured. I saw four types of movement that seemed distinct:

One.. Word gestures took the place of words, quite literally. They were made at the point where the word would have been used, and a gap was left in the speech for them to fit into, so to speak. Also treated like words were the (sometimes impolite) gestures for which Italians are famous.

Two.. Illustrating gestures do not interrupt speech but separately ‘say’ the same thing as the words, like miming while talking.

Three.. There are emotional gestures that are very ancient and even understood across species. They are often completely unconscious. Palms towards the body communicate submission or at least non-aggression. Palms away from the body communicate rejection or defense.

Four.. The fourth type is also usually unconscious. I called these baton gestures. They set a rhythm for the speech, and quite often the listener moved in keeping with the baton. They also seemed to emphasize important phrases. The baton beat appeared to mark out groups of words that should be processed together, a great help to listeners if they used it to wrap up one meaning and start analyzing the next words.

It is this last type that has been the subject of a recent paper. Unfortunately I have no access to the paper and must be content with the abstract. (grr) Here are the abstracts of this paper and an earlier one by the same authors.

Abstract of (Biau, Torralba, Fuentemilla, Balaguer, Soto-Faraco; Speaker’s hand gestures modulate speech perception through phase resetting of ongoing neural oscillations; Cortex Dec 2014) “Speakers often accompany speech with spontaneous beat gestures in natural spoken communication. These gestures are usually aligned with lexical stress and can modulate the saliency of their affiliate words. Here we addressed the consequences of beat gestures on the neural correlates of speech perception. Previous studies have highlighted the role of theta oscillations in temporal prediction of speech. We hypothesized that the sight of beat gestures may influence ongoing low-frequency neural oscillations around the onset of the corresponding words. Electroencephalographic (EEG) recordings were acquired while participants watched a continuous, naturally recorded discourse. The phase-locking value (PLV) at word onset was calculated from the EEG from pairs of identical words that had been pronounced with and without a concurrent beat gesture in the discourse. We observed an increase in PLV in the 5-6 Hz theta range as well as a desynchronization in the 8-10 Hz alpha band around the onset of words preceded by a beat gesture. These findings suggest that beats tune low-frequency oscillatory activity at relevant segments during natural speech perception, providing a new insight of how speech and paralinguistic information are integrated.”
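The phase-locking value mentioned in the abstract is a standard measure of how consistently oscillation phase lines up across trials at a given time point. A minimal sketch of the idea (not the authors’ actual pipeline, and assuming the instantaneous phases have already been extracted, e.g. by a Hilbert transform of the theta-band-filtered EEG):

```python
import numpy as np

def phase_locking_value(phases):
    """Phase-locking value across trials.

    phases: array of shape (n_trials, n_samples) holding instantaneous
    phase angles in radians (in a real pipeline these would come from a
    Hilbert transform or wavelet of the band-filtered EEG; here they
    are supplied directly).
    Returns an (n_samples,) array of values in [0, 1]: 1 means every
    trial has the same phase at that time point; values near 0 mean
    the phases are random across trials.
    """
    # Map each phase to a unit vector on the complex plane, average
    # the vectors across trials, and take the length of the mean.
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Perfectly aligned trials -> PLV of 1 at every sample.
base = np.linspace(0, 2 * np.pi, 200)
aligned = np.tile(base, (40, 1))
print(phase_locking_value(aligned).max())  # ~1.0

# Trials with random phase offsets -> PLV well below 1.
rng = np.random.default_rng(0)
jittered = aligned + rng.uniform(0, 2 * np.pi, size=(40, 1))
print(phase_locking_value(jittered).mean())
```

Identical phases across trials give a PLV of 1, while random phases drive it toward 0; a rise in PLV at word onset is the kind of contrast the authors report for words accompanied by a beat.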

Abstract of (Biau, Soto-Faraco; Beat gestures modulate auditory integration in speech perception; Brain and Language 124, 2, Feb 2013) “Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words accompanied by beats elicited a positive shift in ERPs at an early sensory stage (before 100 ms) and at a later time window coinciding with the auditory component P2. The same word tokens produced no ERP differences when participants listened to the discourse without view of the speaker. We conclude that beat gestures are integrated with speech early on in time and modulate sensory/phonological levels of processing. The present results support the possible role of beats as a highlighter, helping the listener to direct the focus of attention to important information and modulate the parsing of the speech stream.”

Also from a summary of an oral presentation by the same group: “We observed an increase in phase-locking at the delta–theta frequency range (2–6 Hz) from around 200 ms before word-onset to 200 ms post word-onset, when words were accompanied with a beat gesture compared to audio alone. Furthermore, this increase in phase-locking, most noticeable at fronto-central electrodes, was not accompanied by an increase in power in the same frequency range, confirming the oscillatory-based nature of this effect. These results suggest that beat gestures are used as robust predictive information capable to tune neural oscillations to the optimal phase for auditory integration of relevant parts of the discourse during natural speech processing.”

This research points to a synchronization between speaker and listener in which a visual cue is used to divide the speech stream into chunks that can be processed (at least to a large degree) in isolation from the words before and after. The warning at the beginning of a new chunk, given automatically by the speaker’s hands, is used automatically by the listener to ‘clear the decks’ and begin on the new chunk. This takes some of the strain out of listening. Of course, this information is probably also carried by the voice; redundancy in oral language is common. Conversation is a wonderful dance of voice, face, hands and body that transfers an idea from one brain to another. It only seems easy because, complicated as it is, it is automatic.


Children’s effect on language


It seems that children can invent language, but adults cannot; adults invent only ‘pidgins’. Once invented, languages are also re-made by each generation as it learns them. So languages may carry the marks of how children think and communicate. A recent paper by Clay and others (citation below) investigates this idea.

They note that the development of Nicaraguan Sign Language by deaf children appeared to be driven by pre-adolescent children rather than older ones. “In its initial 10 to 15 years, NSL users developed an increasingly strong tendency to segment complex information into elements and express them in a linear fashion. Senghas et al. investigated how NSL signs and Spanish speakers’ gestures expressed a complex motion event, in which a shape’s manner and path of motion are shown simultaneously. They compared signs produced by successive cohorts of deaf NSL signers, who entered the special education school as young children (age 6 or younger) at different periods in the history of NSL…the second and third cohorts showed stronger tendencies to segment manner and path (of a movement) in two separate signs and linearly ordered the two elements.”

However, merely transmitting an artificial language from one person to another in a chain also produces some segmentation and linear expression of originally complex words. This paper sets out to test whether young children, adolescents and adults differ in their tendency to turn complex actions into segmented, linear language.

Subjects of different ages were asked to pantomime video clips. Each clip showed one of two objects going up or down a hill, either bouncing or rotating. So there were three aspects to the motion (object, direction, manner), and subjects were rated on how much they separated these aspects and mimed them as a linear string, as opposed to miming the whole motion in one go.

“Compared with adolescents and adults, young children (under 4) showed the strongest tendencies to segment and linearize the manner and path of a motion event that had been represented to them simultaneously. Moreover, the difference in the pantomime performance between the three age groups cannot be attributed to young children’s poor event perception or memory because the children performed very well in the event-recognition task and because the children’s performances in the pantomime task and the recognition task did not correlate. The results indicate that young children, but not adolescents and adults, have a bias to segment and linearize information in communication.”

The authors suggest that the limited processing capacity of young children may restrict them to dealing with one aspect at a time.

Here is the abstract:

Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children’s learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system.

Clay, Z., Pople, S., Hood, B., & Kita, S. (2014). Young Children Make Their Gestural Communication Systems More Language-Like: Segmentation and Linearization of Semantic Elements in Motion Events. Psychological Science. DOI: 10.1177/0956797614533967