Babies show the way

It is January and therefore we see the answers to the Edge Question. This year the question is “What do you consider the most interesting recent (scientific) news? What makes it important?” I have to say that I did not find this year’s crop of short essays as interesting as in previous years – but there were some gems.

For example, N.J. Enfield’s ‘Pointing is a Prerequisite for Language’ fits well with what I think and is well expressed (here). I have a problem with the idea that language is not primarily about communication but is rather a way of thinking. I cannot believe that language arose over a short space of time rather than through a long evolution (both biological and cultural). And it began as communication, not as a proper-ish language. “Infants begin to communicate by pointing at about nine months of age, a year before they can produce even the simplest sentences. Careful experimentation has established that prelinguistic infants can use pointing gestures to ask for things, to help others by pointing things out to them, and to share experiences with others by drawing attention to things that they find interesting and exciting. … With pointing, we do not just look at the same thing, we look at it together. This is a particularly human trick, and it is arguably the thing that ultimately makes social and cultural institutions possible. Being able to point and to comprehend the pointing gestures of others is crucial for the achievement of “shared intentionality,” the ability to build relationships through the sharing of perceptions, beliefs, desires, and goals.”

So this is where to start to understand language – with communication and with gestures, and especially joint attention with another person as in pointing. EB Bolles has a lot of information on this, collected over quite a few years in his blog (here).

Get rid of the magic

I have difficulty with the reaction of many people to the idea that consciousness is a process of the brain. They say it is impossible, that consciousness cannot be a physical process. How can that vivid subjective panorama be the product of a physical process? They tend to believe either some variety of dualism – consciousness is not physical but spiritual (or magical) – or that consciousness is a natural primitive, a sort of state of matter/energy that objects possess more or less of (another sort of magic). Or they fudge the issue by believing it is an emergent aspect of physical processes (kind of physical but arising by magic). I find explanations like these far more difficult than a plain and simple physical process in the brain (with no magic).

My questions are really, “What would you expect awareness to be like?” and “Have you a better idea of how to do awareness?” It would certainly not be numeric values. It would not be word descriptions. Why not a simulated model of ourselves in the world, based on what our sensory organs can provide? Making a model seems a perfectly reasonable brain process, with no reason to reject it as impossible. It sounds like what we have. But does it need to be a conscious model? (Chalmers’ idea of philosophical zombies assumes that consciousness is an added extra and not needed for thought.)

But it seems that consciousness is an important aspect of a shared simulation. It is reasonable to suppose that all our senses, our memory, our cognition, our motor plans and our emotional states contribute to creating a simulation. And it is reasonable to assume that they are all responsive to the simulation, using it to coordinate and integrate the various things going on in the brain, making our behaviour as appropriate as possible. If a model is going to be created and used by many very different parts and functions of the brain, it has to be something like a conscious model – a common format, tokens and language.

There have been a number of good and interesting attempts to explain how consciousness might work as a physical process; and there have been a number of attempts to show that such an explanation is impossible. They pass one another like ships in the night. Agreement is not getting any closer. There is not even the start of a consensus, and the reason is that one group will not accept as valid any explanation that includes the magic, and the other group will not accept any explanation that loses the magic. The hard question is all about the magic and not about anything else. It boils down to: how can consciousness be explained scientifically while including the magic? I hope that more and more science throws out the magic and the hard question and gets on with explaining consciousness.

Language in the left hemisphere

Here is the posting mentioned in the last post. A recent paper (Harvey M. Sussman; Why the Left Hemisphere Is Dominant for Speech Production: Connecting the Dots; Biolinguistics Vol 9 Dec 2015), deals with the nature of language processing in the left hemisphere and why it is that in right-handed people with split brains only the left cortex can talk although both sides can listen. There is a lot of interesting information in this paper (especially for someone like me who is left-handed and dyslexic). He has a number of ‘dots’ and he connects them.

Dot 1 is infant babbling. The first language-like sounds babies make are coos and these have a very vowel-like quality. Soon they babble consonant-vowel combinations in repetitions. By noting the asymmetry of the mouth it can be shown that babbling comes from the left hemisphere, non-babbling noises from both, and smiles from the right hemisphere. A speech sound map is being created by the baby and it is formed at the dorsal pathway’s projection in the frontal left articulatory network.

Dot 2 is the primacy of the syllable. Syllables are the unit of prosodic events. A person’s native language syllable constraints are the origin of the types of errors that happen in second language pronunciation. Also syllables are the units of transfer in language play. Early speech sound networks are organized in syllable units (vowel and associated consonants) in the left hemisphere of right-handers.

Dot 3 is the inability of the right hemisphere to talk in split-brain people. When language tasks are directed at the right hemisphere, the stimulus exposure must be longer (greater than 150 msec) than when directed to the left. The right hemisphere can comprehend language but does not evoke a sound image from seen objects and words, although the meaning of the objects and words is understood by that hemisphere. The right hemisphere cannot recognize whether two words rhyme from seeing illustrations of the words. So the left hemisphere (in right-handers) has the only language neural network with sound images. This network serves as the neural source for generating speech; therefore in a split brain only the left side can speak.

Dot 4 deals with the problems of DAS, Developmental Apraxia of Speech. I am going to skip this.

Dot 5 is the understanding of speech errors. The ‘slot-segment’ hypothesis is based on analysis of speech errors. Two thirds of errors are of the type where phonemes are substituted, omitted, transposed or added. The picture is of a two-tiered neural ‘map’ with syllable slots serially ordered as one tier, and an independent network of consonant sounds as the other tier. The tiers are connected together. The vowel is the heart of the syllable, in the nucleus slot. Forms are built around it with consonants (CV, CVC, CCV etc.). Spoonerisms are restricted to consonants exchanging with consonants and vowels exchanging with vowels, and the exchanges occur between the same syllable positions – first with first, last with last etc.
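To make the two-tier picture concrete, here is a minimal sketch in Python (my own toy illustration, nothing from the paper) of syllable frames with ordered slots, where a spoonerism can only swap segments that occupy the same slot position:

```python
# Toy version of the 'slot-segment' idea: syllables are ordered frames
# of slots (onset, nucleus, coda) and segments fill the slots. A
# spoonerism exchanges segments between the SAME slot of two frames --
# onsets with onsets, never an onset with a nucleus.

from dataclasses import dataclass

@dataclass
class Syllable:
    onset: str    # consonant(s) before the vowel
    nucleus: str  # the vowel, the heart of the syllable
    coda: str     # consonant(s) after the vowel

    def __str__(self) -> str:
        return self.onset + self.nucleus + self.coda

def spoonerize(a: Syllable, b: Syllable) -> tuple[Syllable, Syllable]:
    """Exchange onset segments; each syllable frame stays intact."""
    return (Syllable(b.onset, a.nucleus, a.coda),
            Syllable(a.onset, b.nucleus, b.coda))

dear = Syllable("d", "ea", "r")
queen = Syllable("qu", "ee", "n")
print(*spoonerize(dear, queen))  # 'quear deen' -- nuclei and codas stay put
```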

Dot 6 is Hawkins’ model: “the neo-cortex uses stored memories to produce behaviors.” Motor memories are used sequentially and operate in an auto-associative way. Each memory elicits the next in order (think how hard it is to do things backwards). Motor commands would be produced in serial order, based on syllables – learned articulatory behaviors linked to sound equivalents.

Dot 7 is experiments that showed representations of sounds in human language at the neural level. For example there is a representation of a generic ‘b’ sound, as well as representations of various actual ‘b’s that differ from one another. This is why we can clearly hear a ‘b’ but have difficulty identifying a ‘b’ when the sound pattern is graphed.

Here is the abstract:

Evidence from seemingly disparate areas of speech/language research is reviewed to form a unified theoretical account for why the left hemisphere is specialized for speech production. Research findings from studies investigating hemispheric lateralization of infant babbling, the primacy of the syllable in phonological structure, rhyming performance in split-brain patients, rhyming ability and phonetic categorization in children diagnosed with developmental apraxia of speech, rules governing exchange errors in spoonerisms, organizational principles of neocortical control of learned motor behaviors, and multi-electrode recordings of human neuronal responses to speech sounds are described and common threads highlighted. It is suggested that the emergence, in developmental neurogenesis, of a hard-wired, syllabically-organized, neural substrate representing the phonemic sound elements of one’s language, particularly the vocalic nucleus, is the crucial factor underlying the left hemisphere’s dominance for speech production.

Language in the right hemisphere

I am going to write two posts: this one on the right hemisphere and prosody in language, and a later one on the left hemisphere and motor control of language. Prosody is the fancy word for things like rhythm, tone of voice, stress patterns, speed and pitch. It is not things like individual phonemes, words or syntax. In order to properly understand language, we need both.

A recent paper (Sammler, Grosbras, Anwander, Bestelmeyer, and Belin; Dorsal and Ventral Pathways for Prosody; Current Biology, Volume 25, Issue 23, p3079–3085, 7 December 2015) gives evidence that the anatomy of the auditory system in the right hemisphere parallels that in the left. Of course the two hemispheres collaborate in understanding and producing language, but the right side processes the emotional aspects while the left processes the literal meaning.

Here is the abstract:

Our vocal tone—the prosody—contributes a lot to the meaning of speech beyond the actual words. Indeed, the hesitant tone of a “yes” may be more telling than its affirmative lexical meaning. The human brain contains dorsal and ventral processing streams in the left hemisphere that underlie core linguistic abilities such as phonology, syntax, and semantics. Whether or not prosody—a reportedly right-hemispheric faculty—involves analogous processing streams is a matter of debate. Functional connectivity studies on prosody leave no doubt about the existence of such streams, but opinions diverge on whether information travels along dorsal or ventral pathways. Here we show, with a novel paradigm using audio morphing combined with multimodal neuroimaging and brain stimulation, that prosody perception takes dual routes along dorsal and ventral pathways in the right hemisphere. In experiment 1, categorization of speech stimuli that gradually varied in their prosodic pitch contour (between statement and question) involved (1) an auditory ventral pathway along the superior temporal lobe and (2) auditory-motor dorsal pathways connecting posterior temporal and inferior frontal/premotor areas. In experiment 2, inhibitory stimulation of right premotor cortex as a key node of the dorsal stream decreased participants’ performance in prosody categorization, arguing for a motor involvement in prosody perception. These data draw a dual-stream picture of prosodic processing that parallels the established left-hemispheric multi-stream architecture of language, but with relative rightward asymmetry.

The ventral and dorsal pathways are also found in both hemispheres in vision. The ventral is often called the ‘what’ pathway, identifying objects for conscious perception, while the dorsal is called the ‘where’ pathway, supplying spatial location for motor accuracy. The auditory pathways appear to follow the same plan, with the dorsal path going to motor centers and the ventral to perceptual centers. And although they handle different processing functions, the pair of auditory pathways appears in both hemispheres, like the visual ones.


Complexity of conversation

Language is about communication. It can be studied as written sentences, as production of spoken language, or as comprehension of spoken language, but these do not get to the heart of communicating. Language evolved as conversation, each baby learns it in conversation, and most of our use of it each day is in conversations. Exchange – taking turns – is the essence of language. A recent paper by S. Levinson in Trends in Cognitive Sciences, “Turn-taking in Human Communication – Origins and Implications for Language Processing”, looks at the complications of turn-taking.

The world’s languages vary at almost all levels of organization, but there is a striking similarity in exchanges – rapid turns of short phrases or clauses within single sound envelopes. There are few long gaps and little overlapping speech during the changes of speaker. Not only is standard turn-taking universal in human cultures, it is found in all types of primates and is learned by babies before any language is acquired. It may be the oldest aspect of our language.

But it is paradoxical – for the gap between speakers is too short to produce a response to what has been said by the last speaker. In fact, the gap tends to be close to the minimum reflex time. A conversational speaking turn averages 2 seconds (2000ms) and the gap between speakers is about 200ms, but it takes 600ms to prepare the first word (1500ms for a short phrase). So it is clear that production and comprehension must go on at the same time in the same areas of the brain, and that comprehension must include a good deal of prediction of how a phrase is going to end. Because comprehension and production have been studied separately, it is not clear how this multitasking, if that is what it is, is accomplished. First, the listener has to figure out what sort of utterance the speaker is making – statement, question, command or whatever. Without this the listener does not know what sort of reply is appropriate. The listener then must predict (guess) the rest of the utterance, decide what the response should be and formulate it. Finally the listener must recognize the signal(s) of when the end of the utterance will come, so as to begin talking as soon as the utterance ends. There is more to learn about how the brain does this and what effect turn-taking has on the nature of language.
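The timing figures alone show why prediction is unavoidable; here is a back-of-the-envelope check in Python (just the arithmetic on the numbers quoted above):

```python
# All times in ms, from the figures quoted in the text.
turn_length = 2000   # average conversational turn
gap         = 200    # typical silence between speakers
prep_phrase = 1500   # time needed to formulate a short phrase

# To reply with a short phrase after only a 200 ms gap, the listener
# must have started formulating it this long before the speaker stopped:
head_start = prep_phrase - gap
print(head_start, "ms of formulation inside the incoming turn")  # 1300

# That is most of the 2000 ms turn -- so encoding a reply must overlap
# with comprehending (and predicting the end of) the incoming speech.
print(f"{head_start / turn_length:.0%} of the turn")  # 65%
```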

There are cultural conventions that override turn-taking so that speakers can talk for some time without interruption, and even if they pause from time to time, no one jumps in. Of course, if someone speaks for too long without implicit permission, they will be forcibly interrupted fairly soon, people will drift away or some will start new conversations in sub-groups. That’s communication.

Here is the abstract of – Stephen C. Levinson. Turn-taking in Human Communication – Origins and Implications for Language Processing. Trends in Cognitive Sciences, 2015:

Most language usage is interactive, involving rapid turn-taking. The turn-taking system has a number of striking properties: turns are short and responses are remarkably rapid, but turns are of varying length and often of very complex construction such that the underlying cognitive processing is highly compressed. Although neglected in cognitive science, the system has deep implications for language processing and acquisition that are only now becoming clear. Appearing earlier in ontogeny than linguistic competence, it is also found across all the major primate clades. This suggests a possible phylogenetic continuity, which may provide key insights into language evolution.

Trends

The bulk of language usage is conversational, involving rapid exchange of turns. New information about the turn-taking system shows that this transition between speakers is generally more than threefold faster than language encoding. To maintain this pace of switching, participants must predict the content and timing of the incoming turn and begin language encoding as soon as possible, even while still processing the incoming turn. This intensive cognitive processing has been largely ignored by the language sciences because psycholinguistics has studied language production and comprehension separately from dialog.

This fast pace holds across languages, and across modalities as in sign language. It is also evident in early infancy in ‘proto-conversation’ before infants control language. Turn-taking or ‘duetting’ has been observed in many other species and is found across all the major clades of the primate order.


Shared attention

Social interaction or communication requires the sharing of attention. If two people are not paying attention to one another then there is no interaction and no communication. Shared attention is essential for a child’s development of social cognition and communication skills. Two types of shared attention have been identified: mutual gaze, when two people face one another and attend to each other’s eyes; and joint attention, when two people look at a third person or object. Joint attention is not the same for both individuals because one initiates it and the other responds.

In a recent paper, researchers studied shared attention (Takahiko Koike et al.; Neural substrates of shared attention as social memory: A hyperscanning functional magnetic resonance imaging study; NeuroImage 125 (2016) 401–412). This cannot be done at an individual level as it involves social exchange, and so the researchers used fMRI hyperscanning. Real-time video recording and projection allowed two individuals in separate scanners to communicate through facial expression and eye movements while both were being scanned. Previous studies had shown neural synchronization during shared attention and synchronization of eye blinks. They found that it is the task of establishing joint attention, which requires sharing an attentional temporal window, that creates the blink synchrony. This synchrony is remembered in a pair-specific way in social memory.

Mutual gaze is needed to give mutual attention; that is needed to initiate joint attention, which requires a certain synchrony; and that synchronizing results in a specific memory of the pair’s joint attention, which allows further synchrony during subsequent mutual gaze without joint attention coming first.
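One way to see what ‘eye-blink synchronization’ means as a measure: compare the correlation of the pair’s smoothed blink trains against a time-shuffled baseline. A minimal sketch (my own illustration; the paper’s actual analysis differs in detail):

```python
import numpy as np

rng = np.random.default_rng(0)

def blink_rate(blinks: np.ndarray, window: int = 50) -> np.ndarray:
    """Smooth a 0/1 blink train into a local blink rate."""
    return np.convolve(blinks, np.ones(window) / window, mode="same")

# Two fake 60 s recordings at 100 Hz that share some blink timing.
shared = rng.random(6000) < 0.003
a = shared | (rng.random(6000) < 0.002)
b = shared | (rng.random(6000) < 0.002)

observed = np.corrcoef(blink_rate(a), blink_rate(b))[0, 1]
# Baseline: destroy temporal alignment by circularly shifting one train.
shifts = rng.integers(500, 5500, size=100)
baseline = np.mean([np.corrcoef(blink_rate(a),
                                blink_rate(np.roll(b, s)))[0, 1]
                    for s in shifts])
print(f"observed r = {observed:.2f}, shuffled baseline r = {baseline:.2f}")
```

Synchrony above the shuffled baseline is the sort of thing that persisted in the pairs on the second day.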

Here is their abstract: “During a dyadic social interaction, two individuals can share visual attention through gaze, directed to each other (mutual gaze) or to a third person or an object (joint attention). Shared attention is fundamental to dyadic face-to-face interaction, but how attention is shared, retained, and neurally represented in a pair-specific manner has not been well studied. Here, we conducted a two-day hyperscanning functional magnetic resonance imaging study in which pairs of participants performed a real-time mutual gaze task followed by a joint attention task on the first day, and mutual gaze tasks several days later. The joint attention task enhanced eye-blink synchronization, which is believed to be a behavioral index of shared attention. When the same participant pairs underwent mutual gaze without joint attention on the second day, enhanced eye-blink synchronization persisted, and this was positively correlated with inter-individual neural synchronization within the right inferior frontal gyrus. Neural synchronization was also positively correlated with enhanced eye-blink synchronization during the previous joint attention task session. Consistent with the Hebbian association hypothesis, the right inferior frontal gyrus had been activated both by initiating and responding to joint attention. These results indicate that shared attention is represented and retained by pair-specific neural synchronization that cannot be reduced to the individual level.”

The right inferior frontal gyrus (right IFG) region of the brain has been linked in other research with: interfacing between self and other; unconscious incorporation of facial expression in self and others; the release from mutual attention; and neural synchronization during social encounters. The right IFG is active both in initiating and responding to joint attention and in the synchrony during mutual gaze (when it is present). However it is unlikely to cause blinking directly. “Neural synchronization of the right IFG represents learned shared attention. Considering that shared attention is to be understood as a complementary action due to its social salience, relevance in initiating communication, and joint action, the present finding is consistent with a previous study by Newman-Norlund et al. who showed that the right IFG is more active during complementary as compared to imitative actions.” Communication, communication, communication!

This fits with the theory that words steer joint attention to things present or absent, concrete or abstract, in a way that is similar to the eyes steering joint attention to concrete, present things. If this theory is correct (I think it is), language has harnessed the brain’s mechanisms for joint attention.


Close but not quite

I wonder how often we are almost right but not quite. It seems to be a fairly common trap in biology.

It has been thought for many years (140+ years) that the primary motor cortex (lying across the top of the head) mapped the muscles of the body and controlled their contractions. From this we got the comical homunculus with its huge lips and hands on a spindly little body. Each small area on this map was supposed to activate one muscle.

A recent paper by Graziano, Ethological Action Maps: A Paradigm Shift for the Motor Cortex (here), argues that this is not as it appears. What is being mapped are actions and not muscles. Here is the abstract:

The map of the body in the motor cortex is one of the most iconic images in neuroscience. The map, however, is not perfect. It contains overlaps, reversals, and fractures. The complex pattern suggests that a body plan is not the only organizing principle. Recently a second organizing principle was discovered: an action map. The motor cortex appears to contain functional zones, each of which emphasizes an ethologically relevant category of behavior. Some of these complex actions can be evoked by cortical stimulation. Although the findings were initially controversial, interest in the ethological action map has grown. Experiments on primates, mice, and rats have now confirmed and extended the earlier findings with a range of new methods.

Trends – “For nearly 150 years, the motor cortex was described as a map of the body. Yet the body map is overlapping and fractured, suggesting that it is not the only organizing principle. In the past 15 years, a second fundamental organizing principle has been discovered: a map of complex, meaningful movements. Different zones in the motor cortex emphasize different actions from the natural movement repertoire of the animal. These complex actions combine multiple muscles and joints. The ‘action map’ organization has now been demonstrated in primates, prosimians, and rodents with various stimulation, lesion, and neuronal recording methods. The action map was initially controversial due to the use of electrical stimulation. The best argument that the action map is not an artifact of one technique is the growing confirming evidence from other techniques.”

Even settled science, when it is neuroscience, should be taken with a grain of salt. Any part of it could turn out to be something similar but not the same.

Powerful Induction

In an article in the Scientific American (here) Shermer points to ‘consilience of inductions’ or ‘convergence of evidence’. This is a principle that I have held for many, many years. Observations, theories and explanations are only trustworthy when they stop being a string of a few ‘facts’ and become a tissue or fabric of a great many independent ‘facts’.

I find it hard to take purely deductive arguments seriously – they are like rope bridges across a gap. They depend on every link in the argument and more importantly on the mooring points at either end. A causeway across the same gap does not depend on any single rock – it is dependable.
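The metaphor can be put in rough numbers (my own arithmetic, not Shermer’s). A deductive chain fails if any one link fails; a web of independent evidence fails only if every strand fails:

```python
# Rope bridge vs causeway, in probabilities.
p = 0.9   # probability that each link / strand holds
n = 10

chain_holds = p ** n             # a chain needs EVERY link to hold
web_holds   = 1 - (1 - p) ** n   # a web fails only if ALL strands fail

print(f"10-link chain holds with probability {chain_holds:.2f}")  # 0.35
print(f"10-strand web holds with probability {web_holds:.10f}")   # ~1.0
```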

There is one theory that is put forward often and, to many, is ‘proven’: that brains can be duplicated with a computer. The reasoning goes something like this: all computers are Turing machines; any program on a Turing machine can be duplicated on any other Turing machine; brains are computers and therefore Turing machines, and so can be duplicated on other computers. I see this as a very thin linear string of steps.

Step one is a somewhat circular argument, in that being a Turing machine seems to be the definition of a ‘proper’ computer, and so yes, all of those computers are Turing machines. What if there are other machines that do something that resembles computing but that are not Turing machines? Step two is pretty solid – unless someone disproves it, which is unlikely but possible. The unlikely does happen; for example, someone did question the obvious ‘parallel lines do not meet’ to give us non-Euclidean geometry. Step three is the problem. Is the brain a computer in the sense of a Turing machine? People have said things like, “Well, brains do compute things so they are computers.” But no one has shown that any machine that can do any particular computation by any means is a Turing machine.
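For concreteness, this is all that ‘being a Turing machine’ amounts to: a finite rule table applied one discrete step at a time to a tape. A minimal sketch (my own illustration) that increments a binary number:

```python
# A tiny Turing machine: (state, symbol) -> (new state, write, move).

def run(tape: list[str], rules: dict, state: str = "scan") -> list[str]:
    head = 0
    while state != "halt":
        if head == len(tape):
            tape.append("_")          # extend the tape with blanks
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# Rules to add 1: scan right to the end, then carry leftward.
rules = {
    ("scan", "0"):  ("scan", "0", "R"),
    ("scan", "1"):  ("scan", "1", "R"),
    ("scan", "_"):  ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
}
print("".join(run(list("1011"), rules)))  # 1100_ : binary 11 + 1 = 12
```

Whether the brain’s processing is usefully described by anything like this one-rule-per-step picture is exactly what step three assumes.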

No one can say exactly how the brain does its thinking. But there are good reasons to question whether the brain does things step-wise using algorithms. In many ways the brain resembles an analog machine using massively parallel processing. The usual answer is that any processing method can be simulated on a digital algorithmic machine. But there is a difference between duplication and simulation. No one claims that a Turing machine can duplicate any other machine via a simulation; in fact, it is probable that this is not possible.

This is the sort of argument, a deductive one, that is hardly worth making. We will get somewhere with induction. It takes time: many experimental studies; methods have to be developed, models created and tested, and so on. But in the end it will be believable – we can trust that understanding because it is the product of a web or fabric of independent inductions.


Rhythms – always rhythms

Why do we learn trigonometry in our school days and not get past the triangles and on to the waves? Who knows. But waves, rhythms and sine functions are such a constant part of this world. They are certainly important in biology.

We have seasonal rhythms, some of us have monthly rhythms, and we have circadian daily rhythms. Then we have heart rhythms, breathing rhythms, peristaltic gut waves and we have automatic muscle rhythms for walking and eye movements. We use rhythms in our speech, music, and dancing. Then there are the many brain wave patterns that we are only beginning to understand. The brain seems to function using rhythmic waves, waves of many frequencies, overlapping, synchronized and nested.
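‘Nested’ has a simple mathematical reading: a fast wave whose amplitude rides on the phase of a slow wave. A generic sketch (my illustration, not a model of any particular study):

```python
import numpy as np

t = np.arange(0.0, 2.0, 0.001)        # 2 s sampled at 1 kHz
slow = np.sin(2 * np.pi * 6 * t)      # 6 Hz, theta-like
envelope = (1 + slow) / 2             # 0 at slow troughs, 1 at slow peaks
fast = envelope * np.sin(2 * np.pi * 40 * t)  # 40 Hz bursts, gamma-like
nested = slow + 0.5 * fast            # fast rhythm nested in the slow one

# The fast rhythm is strong at slow-wave peaks and silent at troughs:
print(envelope[np.argmax(slow)], envelope[np.argmin(slow)])  # ~1.0  ~0.0
```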

I noted a few things lately on this subject.

A paper in Cell, Descending Command Neurons in the Brainstem that Halt Locomotion, by J Bouvier and others (here), looks at the control of the start and stop of walking. The walking rhythm comes from an automatic network in the spinal cord, but the commands to start and stop walking come from the brain stem. The question was about this signaling. There might be one signal, with walking when it was present and not walking when it was absent. Or there could be two signals, and this is what they found: separate on and off signals. The interesting thing from the standpoint of rhythms is that a ‘stop’ signal was needed. Stopping a rhythm is not simple. The rhythmic dynamic of walking cannot be stopped instantaneously at just any point. There is no arbitrary point at which it can simply be frozen and leave a stable position, with all feet on the ground and the center of gravity not off center. It takes a special functional network to stop the rhythm without stumbling, tripping or falling. Of course the rhythm could just be slowed until it stopped, but most animals want to stop ‘on a dime’ rather than after some time.
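Why a stop needs its own machinery can be seen with a toy phase oscillator (my own illustration, not the paper’s model): the ‘stop’ command does not freeze the rhythm instantly but lets it run on to the nearest stable phase, with all feet planted:

```python
# A step cycle as a phase oscillator (phase 0..1, stable posture at 0).

DT = 0.01         # integration step, s
STEP_RATE = 2.0   # step cycles per second

def walk(stop_at: float, duration: float = 3.0) -> float:
    """Return the time at which the gait actually halts."""
    phase, t = 0.0, 0.0
    while t < duration:
        t += DT
        phase = (phase + STEP_RATE * DT) % 1.0
        # Stop command received AND close enough to the stable posture
        # (distance measured around the circle):
        if t >= stop_at and min(phase, 1.0 - phase) < STEP_RATE * DT:
            return t
    return t

print(f"stop commanded at 1.13 s, gait halts at {walk(1.13):.2f} s")
# halts near 1.5 s -- the next all-feet-down point, not instantly
```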

In a release from UW–Madison (here) there is an outline of the work of J Samaha. He has found that our sight is paced by the alpha rhythm at the back of the brain. We do not process the information that arrives from the eyes during the troughs of the alpha rhythm, but only during the peaks. The faster a person’s alpha frequency, the more often they sample the world and the better they can distinguish close flashes of light as separate.
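As a worked example of the claim (a deliberately crude reading of it, my own, not Samaha’s analysis): if vision takes one sample per alpha cycle, two flashes can only be reliably seen as separate when a cycle boundary can fall between them:

```python
def distinguishable(gap_ms: float, alpha_hz: float) -> bool:
    """Crude one-sample-per-alpha-cycle reading of paced vision."""
    cycle_ms = 1000.0 / alpha_hz
    return gap_ms > cycle_ms

for alpha in (8, 10, 12):   # slow, average, fast alpha
    verdict = "separate" if distinguishable(90, alpha) else "fused"
    print(f"{alpha} Hz alpha, flashes 90 ms apart -> {verdict}")
# 8 Hz (125 ms cycle): fused; 10 Hz (100 ms): fused; 12 Hz (83 ms): separate
```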

ScienceDaily has an item (here) about a paper by R Cho and others about the strengthening of synapses as we form associations during learning, memory and development.

“Over the past 30 years, scientists have found that strong input to a postsynaptic cell causes it to traffic more receptors for neurotransmitters to its surface, amplifying the signal it receives from the presynaptic cell. This phenomenon, known as long-term potentiation (LTP), occurs following persistent, high-frequency stimulation of the synapse. Long-term depression (LTD), a weakening of the postsynaptic response caused by very low-frequency stimulation… Scientists have focused less on the presynaptic neuron’s role in plasticity, in part because it is more difficult to study.”

Presynaptic cells occasionally release transmitters into the synapse when there is no activity in the cell as a whole; these releases, called ‘minis’, were thought of as noise. Cho found that minis are not just random noise: they can also strengthen a synapse if they are delivered at a high frequency. “When we gave a strong activity pulse to these neurons, these mini events, which are normally very low-frequency, suddenly ramped up and they stayed elevated for several minutes before going down.” After a signal was transmitted, activity resembling an action potential continued without an actual signal. High-frequency minis cause the synapse to strengthen, but low-frequency ones do not.
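A toy model of the finding as described (my own sketch, not the paper’s model): a synaptic weight that potentiates only when minis arrive close together in time, i.e. at high frequency:

```python
COINCIDENCE_WINDOW = 0.05   # s: minis this close together potentiate
STEP = 0.01                 # weight increment per close pair

def run_minis(mini_times: list[float], weight: float = 1.0) -> float:
    for prev, curr in zip(mini_times, mini_times[1:]):
        if curr - prev < COINCIDENCE_WINDOW:
            weight += STEP          # high-frequency minis strengthen
    return weight

low  = [i * 1.00 for i in range(20)]   # ~1 Hz background minis
high = [i * 0.02 for i in range(20)]   # ~50 Hz ramped-up minis
print(f"low-frequency minis:  weight -> {run_minis(low):.2f}")   # 1.00
print(f"high-frequency minis: weight -> {run_minis(high):.2f}")  # 1.19
```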

Misjudging criteria

Most people think of memory as the ‘past’ and judge it by how well it preserves the past. But that is not its function. Memory is material to be used in the ‘present’ and the ‘future’. What happened in the past is not important except to help understand the present and predict or plan the future. Bits of memory out of historical context are the ingredients of imagination. With more context they are the tools we use to identify things in the present and understand their dangers and opportunities. We need to know whether we are encountering the old or the new. We need to remember whether someone is trustworthy when we deal with them. When we look at what we remember, how and how long we remember it, and how closely we keep it to the original memory, we should think about what the point of it all is.

What seems a fault with memory – that memories are not fixed but can change or be lost altogether – is only a side effect of their being modified to stay relevant and useful. We need memories that help us perceive the present and model the future, and that is the real criterion, not absolute accuracy. The criteria for a well-constructed memory system are biological, evolutionary, survival ones.

Colour vision is not about accurately perceiving the frequencies of light coming into the eye. It is not about the light; it is about the surface that reflected the light and how it can be identified. There is no use in saying that our vision is not giving us accurate colour, because accurate colour would interfere with accurate characterization of surfaces and identification of objects. The many optical illusions are not faults in the system – they are due to the ways that the visual system protects the stability of our vision so that things do not appear to change colour or size.

Language is not about meaning or logic; it is about communication. People worry about changes in the meaning of words and the use of grammatical forms. Well, here is what happens generation after generation: if people have difficulty communicating, they will change their language. If their way of life changes, if they move to a different region, if the people they are talking to change, then they will change their language. Our language is not the result of biological evolution so much as cultural evolution. But the same idea applies, and the criteria have to do with communication. Is language logical? It may seem so from within a language, but talk to anyone learning it as a new language and see the illogical, arbitrary quirks in it. There are languages that count negatives, where an odd number of them is needed for a negative meaning. There are languages in which all the words of a sentence, or none, must carry a negative marking. Both types of negation seem logical to their speakers. Is language a good communication tool? Without doubt it is better than anything else we have ever tried to invent. No artificial language has ever made a dent in a natural language, no matter how clear its meaning or how logical its grammar.

When we look at biological and even social systems, it is important to consider what their real, primary reason for existence is. We have a tendency to misjudge the criteria and need to watch out for this trap.