
Language in the left hemisphere

Here is the posting mentioned in the last post. A recent paper (Harvey M. Sussman; Why the Left Hemisphere Is Dominant for Speech Production: Connecting the Dots; Biolinguistics, Vol. 9, 2015) deals with the nature of language processing in the left hemisphere and with why, in right-handed people with split brains, only the left cortex can talk although both sides can listen. There is a lot of interesting information in this paper (especially for someone like me who is left-handed and dyslexic). He has a number of ‘dots’ and he connects them.

Dot 1 is infant babbling. The first language-like sounds babies make are coos, which have a very vowel-like quality. Soon they babble repeated consonant-vowel combinations. By noting the asymmetry of the mouth it can be shown that babbling comes from the left hemisphere, non-babbling noises from both, and smiles from the right hemisphere. The baby is building a speech sound map, and it forms where the dorsal pathway projects into the left frontal articulatory network.

Dot 2 is the primacy of the syllable. Syllables are the units of prosodic events. The syllable constraints of a person’s native language are the source of the errors they make in second-language pronunciation. Syllables are also the units of transfer in language play. Early speech sound networks are organized in syllable units (a vowel and its associated consonants) in the left hemisphere of right-handers.

Dot 3 is the inability of the right hemisphere to talk in split-brain people. When language tasks are directed at the right hemisphere, the stimulus exposure must be longer (greater than 150 msec) than when they are directed to the left. The right hemisphere can comprehend language, but it does not evoke a sound image from seen objects and words, even though it understands their meaning. The right hemisphere cannot tell whether two words rhyme from seeing illustrations of the words. So the left hemisphere (in right-handers) has the only language neural network with sound images. This network serves as the neural source for generating speech, and therefore in a split brain only the left side can speak.

Dot 4 deals with the problems of DAS, Developmental Apraxia of Speech. I am going to skip this.

Dot 5 is the understanding of speech errors. The ‘slot-segment’ hypothesis is based on analysis of speech errors. Two-thirds of errors are of the type where phonemes are substituted, omitted, transposed or added. The picture is of a two-tiered neural ‘map’, with serially ordered syllable slots as one tier and an independent network of consonant sounds as the other, the two tiers being interconnected. The vowel, in the nucleus slot, is the heart of the syllable; forms are built around it with consonants (CV, CVC, CCV etc.). Spoonerisms are restricted to consonants exchanging with consonants and vowels exchanging with vowels, and the exchanges occur between the same syllable positions: first with first, last with last and so on.
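
A minimal sketch of that two-tiered picture (my own toy illustration, not code from the paper): syllable frames with ordered onset/nucleus/coda slots on one tier, the segments filling them on the other, and an exchange operation that only swaps like slots, which is the pattern spoonerisms follow.

```python
# Toy illustration of the 'slot-segment' idea (my own, not the paper's code):
# tier 1 = serially ordered syllable frames, tier 2 = the segments filling
# their slots. Exchange errors only swap segments occupying the same slot.

from copy import deepcopy

# "dear queen" as two syllable frames with onset / nucleus / coda slots.
utterance = [
    {"onset": "d",  "nucleus": "ia", "coda": ""},   # "dear"
    {"onset": "kw", "nucleus": "i",  "coda": "n"},  # "queen"
]

def exchange(frames, i, j, slot):
    """Swap the segments that fill the same slot in two syllable frames."""
    out = deepcopy(frames)
    out[i][slot], out[j][slot] = out[j][slot], out[i][slot]
    return out

def spell(frames):
    return " ".join(f["onset"] + f["nucleus"] + f["coda"] for f in frames)

# Onsets exchange with onsets, first syllable with first syllable:
print(spell(exchange(utterance, 0, 1, "onset")))   # 'kwia din' ~ "queer dean"
# The rule never pairs an onset with a coda or a vowel, mirroring the
# constraint that consonants exchange with consonants in the same position.
```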

Dot 6 is Hawkins’ model: “the neo-cortex uses stored memories to produce behaviors.” Motor memories are used sequentially and operate in an auto-associative way. Each memory elicits the next in order (think how hard it is to do things backwards). Motor commands would be produced in serial order, based on syllables: learned articulatory behaviors linked to their sound equivalents.
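
A rough sketch of that chaining idea (a toy of my own, on the simplest reading of the model): each stored item cues the next, so forward recall is a chain of single lookups while backward recall has to search the whole store.

```python
# Toy auto-associative sequence memory: each learned item elicits the next.

def learn_sequence(items):
    """Store 'this item cues the next item' associations."""
    return {a: b for a, b in zip(items, items[1:])}

def recall_forward(memory, start):
    out = [start]
    while out[-1] in memory:                  # each memory elicits the next
        out.append(memory[out[-1]])
    return out

def recall_backward(memory, end):
    out = [end]
    while True:                               # no cue points backward, so we
        prev = [a for a, b in memory.items() if b == out[-1]]   # must search
        if not prev:
            return out[::-1]
        out.append(prev[0])

syllables = ["um", "brel", "la"]              # hypothetical learned chain
mem = learn_sequence(syllables)
print(recall_forward(mem, "um"))              # ['um', 'brel', 'la'] - easy
print(recall_backward(mem, "la"))             # same list, but only via search
```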

Dot 7 is experiments that show representations of speech sounds at the neural level. For example, there is a representation of a generic ‘b’ sound, as well as representations of the various actual ‘b’s that differ from one another. This is why we can clearly hear a ‘b’ yet have difficulty picking the ‘b’ out when the sound pattern is graphed.
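
One way to picture this (a hypothetical sketch, not the paper's analysis): individual ‘b’ tokens vary, but averaging them gives a category-level ‘generic b’, and classifying new tokens against such prototypes is easy even though no two raw tokens match.

```python
# Toy prototype-vs-exemplar sketch with made-up acoustic features:
# variable 'b' tokens all sit nearer a generic 'b' than a generic 'p'.

def centroid(tokens):
    """Average feature vector = the 'generic' representation."""
    return [sum(dim) / len(dim) for dim in zip(*tokens)]

def classify(token, prototypes):
    """Assign a token to the nearest prototype (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(token, prototypes[label]))

# Hypothetical 2-D features, e.g. (voice-onset time in ms, burst amplitude).
b_tokens = [(8, 0.9), (12, 1.1), (10, 0.8), (9, 1.2)]    # varied 'b' tokens
p_tokens = [(45, 0.9), (50, 1.0), (55, 1.1), (48, 0.8)]  # varied 'p' tokens

prototypes = {"b": centroid(b_tokens), "p": centroid(p_tokens)}
print(classify((11, 1.0), prototypes))   # 'b', though it matches no stored token
```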

Here is the abstract:

Evidence from seemingly disparate areas of speech/language research is reviewed to form a unified theoretical account for why the left hemisphere is specialized for speech production. Research findings from studies investigating hemispheric lateralization of infant babbling, the primacy of the syllable in phonological structure, rhyming performance in split-brain patients, rhyming ability and phonetic categorization in children diagnosed with developmental apraxia of speech, rules governing exchange errors in spoonerisms, organizational principles of neocortical control of learned motor behaviors, and multi-electrode recordings of human neuronal responses to speech sounds are described and common threads highlighted. It is suggested that the emergence, in developmental neurogenesis, of a hard-wired, syllabically-organized, neural substrate representing the phonemic sound elements of one’s language, particularly the vocalic nucleus, is the crucial factor underlying the left hemisphere’s dominance for speech production.

Two things on language

There are a couple of interesting reports about language.

First, it has been shown that repeating something aloud helps us remember it. But a recent study goes further: we remember even better if we repeat it aloud to someone. The act of communication helps the memory. The paper is: Alexis Lafleur, Victor J. Boucher. The ecology of self-monitoring effects on memory of verbal productions: Does speaking to someone make a difference? Consciousness and Cognition, 2015; 36: 139. DOI: 10.1016/j.concog.2015.06.015.

From ScienceDaily (here): Previous studies conducted at Professor Boucher’s Phonetic Sciences Laboratory have shown that when we articulate a sound, we create a sensory and motor reference in our brain by moving our mouth and feeling our vocal cords vibrate. “The production of one or more sensory aspects allows for more efficient recall of the verbal element. But the added effect of talking to someone shows that in addition to the sensorimotor aspects related to verbal expression, the brain refers to the multisensory information associated with the communication episode,” Boucher explained. “The result is that the information is better retained in memory.”

No one can tell me that language is not about and for communication.

The second item is reported in ScienceDaily (here): Infants cannot perceive the difference between certain sounds when their tongue is restricted with a teether. They have to be able to mimic the sounds in order to distinguish them. The paper is: Alison G. Bruderer, D. Kyle Danielson, Padmapriya Kandhadai, and Janet F. Werker. Sensorimotor influences on speech perception in infancy. PNAS, October 12, 2015. DOI: 10.1073/pnas.1508631112.

From ScienceDaily: …teething toys were placed in the mouths of six-month-old English-learning babies while they listened to speech sounds: two different Hindi “d” sounds that infants at this age can readily distinguish. When the teethers restricted movements of the tip of the tongue, the infants were unable to distinguish between the two “d” sounds. But when their tongues were free to move, the babies were able to make the distinction. Lead author Alison Bruderer, a postdoctoral fellow in the School of Audiology and Speech Sciences at UBC, said the findings call into question previous assumptions about speech and language development. “Until now, research in speech perception development and language acquisition has primarily used the auditory experience as the driving factor,” she said. “Researchers should actually be looking at babies’ oral-motor movements as well.”

They say that parents do not need to worry about using teething toys, but a child should also have time to use their tongue freely for good development.

It is about communication

Some people understand language as a way of thinking and ignore the obvious: language is a way of communicating. A recent study looks at the start of language in very young babies and shows the importance of communication. (Marno, H. et al. Can you see what I am talking about? Human speech triggers referential expectation in four-month-old infants. Sci. Rep. 5, 13594; doi: 10.1038/srep13594 (2015)) The researchers looked at infants’ ability to recognize that a word can refer to an object in the world, but they also show how important it is that the infant recognizes the act of communication.

The authors review what is known and it is an interesting list. “Human language is a special auditory stimulus for which infants show a unique sensitivity, compared to any other types of auditory stimuli. Various studies found that newborns are not only able to distinguish languages they never heard before based on their rhythmical characteristics, but they can also detect acoustic cues that signal word boundaries, discriminate words based on their patterns of lexical stress and distinguish content words from function words by detecting their different acoustic characteristics. Moreover, they can also recognize words with the same vowels after a 2 min delay. In fact, infants are more sensitive to the statistical and prosodic patterns of language than adults, which provides an explanation of why acquiring a second language is more difficult in adulthood than during infancy. In addition to this unique sensitivity to the characteristics of language, infants also show a particular preference for language, compared to other auditory stimuli. For example, infants at the age of 2 months, and even newborns, prefer to listen to speech compared to non-speech stimuli, even if the non-speech stimuli retain many of the spectral and temporal properties of the speech signal. Thus, there is growing evidence that infants are born with a unique interest and sensitivity to process human language. … it might be that infants are receptive towards speech because they also understand that speech can communicate about something. More specifically, they might understand that speech can convey information about the surrounding world and that words can refer to specific entities. Indeed, without this understanding, they would have great difficulty to accept relations between objects and their labels, and thus language acquisition would become impossible.”

The experiments reported in the paper are designed to show whether infants (about 4 months old) understand that words can refer to objects in the world. They do show this, but they also show that it depends on the infant recognizing the act of communication. The infant attends to eye contact, and when the face speaks real language (not backward language or silent mimed language), the infant appears to recognize it is being communicated with. Without the eye contact or without the actual language, the infant does not assume an act of communication. Then the infant can go on to recognize that reference to something is what is being communicated. “… we suggest that during the perception of a direct eye-gaze, infants can recognize the communicative intention, even before they could assess the content of these intentions. Eye-gaze thus is able to establish a communicative context, which can direct the attention of the infant. However, we also suggest that while an infant-directed gaze acts as a communicative cue signaling that the infant was addressed by someone, additional cues are required to elicit the referential expectation of the infant (i.e. to understand that the speaker is talking about something). Following this, we propose that when the infant hears speech (without being able to actually understand the content of speech) and observes a person directly gazing at her/him (like in the Infant-directed gaze condition in our experiment), s/he will understand the communicative intention of the speaker (i.e. that s/he was addressed by the speaker), but s/he will still have to wait for additional referential cues to make an inference that the speaker is actually talking about something. This additional cue arrives when the direct eye contact is broken: the very moment when the speaker averts her gaze to a new direction, the infant will infer that some new and relevant information is being presented to her via the speech signals, and, as a consequence, will be ready to seek this information.”
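
The cue sequence they describe can be restated as a toy rule set (my own paraphrase in code, not the authors' model): direct gaze plus real speech signals 'I am being addressed'; the subsequent gaze shift signals 'the speech refers to something over there'.

```python
# Toy restatement of the cue sequence described above (not the authors' model).

def infant_inference(direct_gaze: bool, real_speech: bool, gaze_averted: bool) -> str:
    """What a 4-month-old is suggested to infer from the available cues."""
    if not (direct_gaze and real_speech):
        return "no communicative context assumed"
    if not gaze_averted:
        return "addressed by the speaker; waiting for a referential cue"
    return "expects a referent in the direction of the averted gaze"

print(infant_inference(True, True, False))   # addressed, still waiting
print(infant_inference(True, True, True))    # ready to look for the referent
print(infant_inference(True, False, True))   # mimed/backward speech: no expectation
```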

Language is about communication. Children learn language by communicating, for communicating.

Abstract: “Infants’ sensitivity to selectively attend to human speech and to process it in a unique way has been widely reported in the past. However, in order to successfully acquire language, one should also understand that speech is referential, and that words can stand for other entities in the world. While there has been some evidence showing that young infants can make inferences about the communicative intentions of a speaker, whether they would also appreciate the direct relationship between a specific word and its referent is still unknown. In the present study we tested four-month-old infants to see whether they would expect to find a referent when they hear human speech. Our results showed that compared to other auditory stimuli or to silence, when infants were listening to speech they were more prepared to find some visual referents of the words, as signalled by their faster orienting towards the visual objects. Hence, our study is the first to report evidence that infants at a very young age already understand the referential relationship between auditory words and physical objects, thus showing a precursor to appreciating the symbolic nature of language, even if they do not yet understand the meanings of words.”

First and last syllables

Have you wondered why rhyme and alliteration are so common and pleasing, why they assist memorization? They seem to be taking advantage of the way words are ‘filed’ in the brain.

A ScienceDaily item (here) looks at a paper on how babies hear syllables. (Alissa L. Ferry, Ana Fló, Perrine Brusini, Luigi Cattarossi, Francesco Macagno, Marina Nespor, Jacques Mehler. On the edge of language acquisition: inherent constraints on encoding multisyllabic sequences in the neonate brain. Developmental Science, 2015; DOI: 10.1111/desc.12323).

It is known that our cognitive system recognizes the first and last syllables of words better than the middle syllables. For example, there is the trick of still being able to read print in which the middle letters of words have been rearranged. It has also been noted that the edges of words are often information-rich, especially with grammatical information.
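
As a quick demo of that reading trick (my own illustration, not from the paper): shuffle everything but the first and last letter of each word and the text stays fairly readable, because the edges carry the identifying information.

```python
# Demo of the 'scrambled middles' reading trick: keep the first and last
# letter of each word, shuffle the letters in between.

import random

def scramble_middles(text: str, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if len(word) > 3:
            middle = list(word[1:-1])
            rng.shuffle(middle)
            word = word[0] + "".join(middle) + word[-1]
        out.append(word)
    return " ".join(out)

print(scramble_middles("reading scrambled words remains surprisingly easy"))
# e.g. 'rdaineg sacrbmled wrods rmenais snlgrpsiiury esay' - still legible
```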

This paper shows that this is a feature of our brains from birth: there is no need to learn it. “At just two days after birth, babies are already able to process language using processes similar to those of adults. SISSA researchers have demonstrated that they are sensitive to the most important parts of words, the edges, a cognitive mechanism which has been repeatedly observed in older children and adults.” The babies were also sensitive to the very short pause between words as a way to tell when one word ends and another begins.

Here is the abstract: “To understand language, humans must encode information from rapid, sequential streams of syllables – tracking their order and organizing them into words, phrases, and sentences. We used Near-Infrared Spectroscopy (NIRS) to determine whether human neonates are born with the capacity to track the positions of syllables in multisyllabic sequences. After familiarization with a six-syllable sequence, the neonate brain responded to the change (as shown by an increase in oxy-hemoglobin) when the two edge syllables switched positions but not when two middle syllables switched positions (Experiment 1), indicating that they encoded the syllables at the edges of sequences better than those in the middle. Moreover, when a 25 ms pause was inserted between the middle syllables as a segmentation cue, neonates’ brains were sensitive to the change (Experiment 2), indicating that subtle cues in speech can signal a boundary, with enhanced encoding of the syllables located at the edges of that boundary. These findings suggest that neonates’ brains can encode information from multisyllabic sequences and that this encoding is constrained. Moreover, subtle segmentation cues in a sequence of syllables provide a mechanism with which to accurately encode positional information from longer sequences. Tracking the order of syllables is necessary to understand language and our results suggest that the foundations for this encoding are present at birth.”
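
The stimulus manipulation is simple enough to sketch (with made-up syllables; the actual stimuli are in the paper): familiarize on a six-syllable sequence, then swap either the two edge syllables or two middle ones.

```python
# Illustrative construction of the two test conditions described above:
# a familiarized six-syllable sequence, then an edge swap vs. a middle swap.

familiar = ["ka", "lu", "mi", "to", "be", "da"]   # hypothetical syllables

def swap(seq, i, j):
    out = list(seq)
    out[i], out[j] = out[j], out[i]
    return out

edge_swap   = swap(familiar, 0, 5)   # first and last exchanged -> detected
middle_swap = swap(familiar, 2, 3)   # two middle syllables exchanged -> missed

print(" ".join(edge_swap))    # da lu mi to be ka
print(" ".join(middle_swap))  # ka lu to mi be da
```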

Link between image and sound

Babies link the sound of a word with the image of an object in their early learning of language and this is an important ability. How do they come to have this mechanism? Are there predispositions to making links between sounds and images?

Research by Asano and others (citation below) shows one type of link. They show that sound symbolism can be used by infants about to learn language (about 11 months old) to match certain pseudo-words to drawings: "moma" to rounded shapes and "kipi" to sharply angled shapes. Sound symbolism is interesting, but it need not be the first or most important link between auditory and visual information. It seems to me that an 11-month-old child would associate barks with dogs, twitters with birds, honks and engine noises with cars, and so on. They even mimic sounds to identify an object. It is clear that objects are recognized by their feel, smell, and sound as well as by sight. The ability to derive meaning from sound is completely natural, as is deriving it from sight. What is important is not the linking of sound and sight with the same meaning/object; mammals without language have this ability.
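
The pairings used in the study reduce to a small lookup (a toy labeling of the trial types; the word-shape rule is just the one described above):

```python
# Toy labeling of the trial types: "moma" sound-symbolically matches a round
# shape and "kipi" a sharply angled one; the other pairings are mismatches.

SYMBOLIC_MATCH = {"moma": "round", "kipi": "angular"}

def condition(word: str, shape: str) -> str:
    return "match" if SYMBOLIC_MATCH.get(word) == shape else "mismatch"

for word in ("moma", "kipi"):
    for shape in ("round", "angular"):
        print(f"{word} + {shape} shape -> {condition(word, shape)}")
```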

What is important about sound symbolism is that it is arbitrary and abstract. We appear to be born with certain connections of phonemes and meanings ready to be used. These sorts of connections would be a great help to a child grasping the nature of language as opposed to natural sounds.

Here is the abstract: “A fundamental question in language development is how infants start to assign meaning to words. Here, using three Electroencephalogram (EEG)-based measures of brain activity, we establish that preverbal 11-month-old infants are sensitive to the non-arbitrary correspondences between language sounds and concepts, that is, to sound symbolism. In each trial, infant participants were presented with a visual stimulus (e.g., a round shape) followed by a novel spoken word that either sound-symbolically matched (“moma”) or mismatched (“kipi”) the shape. Amplitude increase in the gamma band showed perceptual integration of visual and auditory stimuli in the match condition within 300 msec of word onset. Furthermore, phase synchronization between electrodes at around 400 msec revealed intensified large-scale, left-hemispheric communication between brain regions in the mismatch condition as compared to the match condition, indicating heightened processing effort when integration was more demanding. Finally, event-related brain potentials showed an increased adult-like N400 response - an index of semantic integration difficulty - in the mismatch as compared to the match condition. Together, these findings suggest that 11-month-old infants spontaneously map auditory language onto visual experience by recruiting a cross-modal perceptual processing system and a nascent semantic network within the first year of life.”

Asano, M., Imai, M., Kita, S., Kitajo, K., Okada, H., & Thierry, G. (2015). Sound symbolism scaffolds language development in preverbal infants. Cortex, 63, 196-205. DOI: 10.1016/j.cortex.2014.08.025

Watching hands

Think of a little baby trying to understand the world. One thing they need to be able to deal with is causal relationships. They seem to come with their visual systems able to identify some simple events, for example the ‘launching effect’, where one moving object contacts another stationary one and imparts its motion to it. Identifying objects that seem to control other objects is an important starting point for understanding events. I had presumed this is why infants notice hands and follow their movements: hands, including their own, are movers of other things. Well, I still assume hands are important to infants for that reason, but a recent paper by Yu and Smith (see citation below) adds another role for them: they are useful in establishing joint attention.

Joint attention is a key to communication. Once two people are both attending to the same object or event and know it, they can communicate. Even without language - through gesture, posture, facial expression and little noises - they can ‘discuss’ the joint target of their attention. With language, words become the pointers that steer joint attention, not just to objects in sight but to objects and metaphors in the mind. Being able to establish and maintain joint attention is something infants must master in order to go on to master a number of other skills, including language.

Until recently it was thought that infants established joint attention by following the eye movements of their partner. But this is quite a difficult skill: eye movements are small, not very precise, and not always visible. Hand movements, on the other hand, are clear and accurate, and most people move their eyes in the same direction as their hands when they are using them. It is more efficient for an infant to follow hand movements than eye movements.
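
A back-of-the-envelope simulation of that efficiency argument (entirely my own toy, with made-up noise levels): if the hand cue points at the attended object with less spatial noise than the gaze cue, following hands picks out the right object more often.

```python
# Toy simulation of the efficiency argument: the partner attends to one of
# several objects; the infant sees a noisy directional cue and picks the
# nearest object. The hand cue is assumed less noisy than the gaze cue.

import random

random.seed(1)
OBJECT_POSITIONS = [0.0, 1.0, 2.0, 3.0]          # objects laid out along a line

def follow_cue(target_pos, noise_sd, trials=10_000):
    """Fraction of trials where the noisy cue still picks out the right object."""
    hits = 0
    for _ in range(trials):
        cue = random.gauss(target_pos, noise_sd)               # what the infant sees
        guess = min(OBJECT_POSITIONS, key=lambda p: abs(p - cue))
        hits += (guess == target_pos)
    return hits / trials

print("gaze cue:", follow_cue(target_pos=2.0, noise_sd=0.8))   # noisier, lower accuracy
print("hand cue:", follow_cue(target_pos=2.0, noise_sd=0.2))   # more precise, higher accuracy
```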

Here is part of the Yu/Smith paper:

One-year-olds and their parents temporally coordinated their visual attention to objects and did so smoothly, consistently, and as equal partners, without one partner dominating or leading the interaction. Further, the two partners often shifted attention to the same objects together in time. (One way) to coordinated visual attention: each partner looks to the other’s eyes and the seen direction of gaze of the partner influences the direction of the other partner’s gaze, leading to coupled looking behavior. (The other way is an) alternate pathway: within individuals, a tight coordination of hand and eye in goal-directed action means that hand and eye actions present spatially redundant signals, but with the hand cue being more spatially precise and temporally stable. The results show that the hand actions of an actor have a direct effect on the partner’s looking, leading to coordinated visual attention without direct gaze following. This hand-eye pathway is used by one-year-olds and their parents, and supports a dynamic coordination of the partners’ fixations that is characterized by rapid socially coordinated adjustments of looking behavior. The documentation of a functional alternative to following the eye gaze of a social partner begins to fill the contemporary knowledge gap in understanding just how joint attention between infants and parents might work in cluttered and complex everyday contexts. Joint attention as a means of establishing common reference is essential to infant learning in many domains including language, and the present results show how coordinated looking may be established and maintained in spatially and dynamically complex contexts that include manual actions on objects. Infant attention and sensitivity to hand actions demonstrated in the present results is also consistent with the large and growing literature on their ability to interpret the causal implications of hand movements and gestures.

Successful adult social interactions are known to depend on rapid (within fractions of a second) behavioral adjustments in response to and across a suite of sensory-motor behaviors that include eye, head, hand, mouth, and posture movements. The hand-eye pathway evident in one-year-olds and their parents shows this same character of well-coordinated rapid adjustment in response to the partner.


Yu, C., & Smith, L. B. (2013). Joint Attention without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects through Eye-Hand Coordination. PLoS ONE, 8(11). PMID: 24236151