Language, music and echolocation

In the previous post, the main idea was that linguistic and musical communication share the same syntactic processing in the brain but not the same semantic processing. How can they share syntax? We need to look at communication and at syntax.

 

The simplest type of human communication is non-verbal signalling: things like posture, facial expression, gesture and tone of voice. These signals are in effect contagious: if you are sad, I will feel a little sad; if I then cheer up, you may too. The signals are indications of emotional states, and we tend to react to another’s emotional state with a sort of mimicry that puts us in sync with them. We can carry on a type of emotional conversation in this way. Music appears to use this emotional communication – it causes emotions in us without any accompanying semantic message. It appears to produce that contagion through three aspects of the sound: the rhythmic rate, the sound envelope and the timbre. For example, a happy musical message has a fairly fast rhythm, a flat loudness envelope with sharp onsets and offsets, lots of pitch variation and a simple timbre with few harmonics. Language seems to use the same system for emotion, or at least for some emotions. The same rhythm, sound envelope and timbre are used in the delivery of spoken language, and they carry the same emotional signals. Whether it is music or language, this sound specification cuts right past the semantic and cognitive processes and goes straight to the emotional ones. Language seems to share these emotional signals with music, but not the semantic meaning that language carries.
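
To make that cue specification a little more concrete, here is a minimal sketch in Python of how the dimensions might be tabulated. The “happy” profile restates the example above; the “sad” profile is my own guess, added only to show the same dimensions being reused, and none of the values are measured acoustic parameters.

```python
# A minimal sketch of the cue dimensions named above (rhythmic rate,
# loudness envelope, pitch variation, timbre). The "happy" entry restates
# the example in the text; the "sad" entry is an invented contrast.
from dataclasses import dataclass

@dataclass
class EmotionalCueProfile:
    rhythm: str           # tempo of the rhythmic rate
    envelope: str         # shape of the loudness envelope over time
    pitch_variation: str  # how much the pitch moves around
    timbre: str           # harmonic richness of the sound

CUE_PROFILES = {
    "happy": EmotionalCueProfile(
        rhythm="fairly fast",
        envelope="flat loudness with sharp onsets and offsets",
        pitch_variation="wide",
        timbre="simple, few harmonics",
    ),
    "sad": EmotionalCueProfile(  # invented contrast, for illustration only
        rhythm="slow",
        envelope="gentle rise and fall",
        pitch_variation="narrow",
        timbre="darker, more harmonics",
    ),
}

if __name__ == "__main__":
    for emotion, profile in CUE_PROFILES.items():
        print(emotion, "->", profile)
```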

 

Syntax has a slippery meaning. Its many definitions usually apply to language, and the term is extended to music as a metaphor. But if we look at the idea in a more basic way, we can see how important it is to the processing of sound. Visual information comes to us in two dimensions, because the retina is a surface with two dimensions and the maps of the retina on the cortex are also two-dimensional; perceptual processing adds depth as a third dimension. Sound, on the other hand, comes to us in one dimension, because the cochlea is essentially a spiral line and is mapped as a line on the cortex. Perception gives us a direction for the source of a sound and sometimes a feeling of distance. Identifying what is in the visual field (objects, movement and so on) is done by a different process from identifying what is in a sound. As with all the senses, in perception we are trying to model the environment and the events in it. Sound is no different: the meaning of sounds is what we can learn from them about what is happening in the world, just as vision gains its meaning from the model of the environment and events that it produces. Language and music must be processed by the sound perception system because they come to us as sounds.

 

One description of syntax is that it deals with trains of sound that are complex, have hierarchical patterns, are abstract, and have rigid or probabilistic relationships (rules) between entities. It could be presumed that any domain involving such trains of sound would be processed, as language and music are, in a syntactical manner. The hierarchy would be established, the abstract patterns and relationships identified, the beauty of the train of sound appreciated. The entities resulting from this processing would then be available to semantic or other processes. There is no reason to rule out a general syntactical processing system, and there is no reason why the domains of sound that use it need to be similar in the sense that they can be mapped one-to-one. Music need not have an exact equivalent of a sentence.
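
To picture what a domain-general syntactic processor could look like, here is a toy sketch of a single grouping routine applied to two different vocabularies. The routine, the rule sets and the labels (Det, Noun, I, V, Cadence and so on) are invented for illustration; this is not a claim about how the brain actually parses.

```python
# A purely illustrative grouping routine: given pairwise rules, it builds a
# hierarchy over any train of tokens, whatever domain they come from.

def label(item):
    """The category of a plain token (a string) or of an already-built group."""
    return item[0] if isinstance(item, tuple) else item

def chunk(tokens, rules):
    """Repeatedly merge any adjacent pair that matches a rule into a
    labelled group, yielding a hierarchy of nested tuples."""
    items = list(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(items) - 1):
            pair = (label(items[i]), label(items[i + 1]))
            if pair in rules:
                items[i:i + 2] = [(rules[pair], items[i], items[i + 1])]
                changed = True
                break
    return items

# Language-like rules and tokens ...
LANG_RULES = {("Det", "Noun"): "NP", ("Verb", "NP"): "VP", ("NP", "VP"): "S"}
print(chunk(["Det", "Noun", "Verb", "Det", "Noun"], LANG_RULES))

# ... and music-like rules: the same machinery over a different vocabulary.
MUSIC_RULES = {("I", "V"): "HalfCadence", ("V", "I"): "Cadence",
               ("HalfCadence", "Cadence"): "Phrase"}
print(chunk(["I", "V", "V", "I"], MUSIC_RULES))
```

The point is only that the same hierarchical grouping machinery can serve very different sound domains once it is given domain-specific rules; nothing in the routine itself is linguistic or musical.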

 

If we looked for them, there might be other domains that use the same type of analysis – perhaps all trains of sound do, to some extent. How do we know that a sound (with its echoes) is thunder rather than a gunshot or a dynamite explosion? Perhaps the sound is processed into a hierarchy of direct sounds and echoes with particular sorts of patterns. Ah, thunder, we think. This idea of echo processing is intriguing – like language and music, it would seem to have a syntax that is complex, hierarchical and so on. Some animals, and the humans who have learned it, use echolocation. Would this not be a candidate for the syntactical type of pattern identification? We do not postulate a newish and dedicated visual process to explain reading, and likewise we do not need a newish and dedicated sound process to explain the syntactical processing of language. We could be using a system that is very old and only mildly tweaked for language, for music and for echoes.
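
As a very rough illustration of what identifying an echo pattern could involve, here is a toy calculation (my own sketch, certainly not a claim about neural mechanisms): a click plus two delayed, quieter copies of itself produces autocorrelation peaks at the echo delays. The sample rate, delays and gains are all invented for the example.

```python
# A toy echo-delay calculation: an idealised click plus two delayed, quieter
# copies of itself (its echoes) shows up as autocorrelation peaks at the delays.
import numpy as np

fs = 8000                              # assumed sample rate in Hz
signal = np.zeros(fs // 5)             # 200 ms of silence
signal[0] = 1.0                        # the outgoing click, idealised as an impulse
for delay_s, gain in [(0.03, 0.6), (0.08, 0.3)]:
    signal[round(delay_s * fs)] = gain # two returning echoes

# Autocorrelation at non-negative lags; peaks away from lag 0 mark the delays.
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
ac[0] = 0.0                            # discard the trivial self-match at lag 0
top_lags = np.argsort(ac)[-2:]         # the two strongest remaining peaks
print(sorted(float(lag) / fs for lag in top_lags))   # -> [0.03, 0.08] seconds
```

Notice that the echoes also correlate with each other (a smaller peak appears at the difference of the two delays), so even this simple case has more structure than a flat list of delays – the sort of layered pattern a syntax-like analysis could work on.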

 

The ingredients that give music a syntax-like architecture are these: scales of permissible notes, chords built from those scales, and key structures based on the changes of chord used within a piece of music. There are similar hierarchies in rhythm, with notes of different lengths and emphases organized into bars, and the bars into larger patterns. But when these sorts of regularities are compared to the words, phrases, sentences and other hierarchies of language, the match is weak at the detailed level.
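
Just to make the first part of that hierarchy visible, here is a toy rendering in code: a scale of permissible notes, triads stacked on each scale degree, and the key seen as the collection of those triads. The music theory is deliberately simplified and the code is illustrative only.

```python
# A toy rendering of the scale -> chord -> key hierarchy described above.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]        # semitone offsets of a major scale

def major_scale(root):
    r = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(r + step) % 12] for step in MAJOR_STEPS]

def triad(scale, degree):
    """Stack thirds above a scale degree (0-based): degrees d, d+2, d+4."""
    return [scale[(degree + i) % 7] for i in (0, 2, 4)]

c_major = major_scale("C")
print(c_major)                                # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print([triad(c_major, d) for d in range(7)])  # the chords the key draws on
```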

 

Given the lack of detailed parallels between the structure of language and that of music, it would be surprising if the two shared a functional area of the brain that was not more general in nature.

 

So, is there any evidence that echolocation shares any processing with language and music? There is no real evidence that I can find. A recent paper (citation below) by Thaler, Arnott and Goodale appears to rule out the possibility, but on reflection does not. Here is the abstract:

 

A small number of blind people are adept at echolocating silent objects simply by producing mouth clicks and listening to the returning echoes. Yet the neural architecture underlying this type of aid-free human echolocation has not been investigated. To tackle this question, we recruited echolocation experts, one early- and one late-blind, and measured functional brain activity in each of them while they listened to their own echolocation sounds.

 

When we compared brain activity for sounds that contained both clicks and the returning echoes with brain activity for control sounds that did not contain the echoes, but were otherwise acoustically matched, we found activity in calcarine cortex in both individuals. Importantly, for the same comparison, we did not observe a difference in activity in auditory cortex. In the early-blind, but not the late-blind participant, we also found that the calcarine activity was greater for echoes reflected from surfaces located in contralateral space. Finally, in both individuals, we found activation in middle temporal and nearby cortical regions when they listened to echoes reflected from moving targets.

 

These findings suggest that processing of click-echoes recruits brain regions typically devoted to vision rather than audition in both early and late blind echolocation experts.

 

The actual locating is done in the otherwise unused visual cortex (the calcarine cortex). This may be a situation like language, where the semantic meaning is extracted in parts of the cortex that are not associated with auditory perception. It seems that echolocation does not require extraordinary auditory perception, but it does require a systematic attention to sound. So a fairly normal sense of hearing is able to provide the visual part of the cortex (in a trained blind individual) with the input it requires to do echolocation. That input is likely to include a sophisticated pattern identification of the echoes. “All subjects also show BOLD activity in the lateral sulcus (i.e. Auditory Complex) of the left and right hemispheres and adjacent and inferior to the right medial frontal sulcus. The former likely reflects the auditory nature of the stimuli. The latter most likely reflects the involvement of higher order cognitive and executive control processes during task performance.” This description and the areas in the illustrations could be parts of Broca’s and Wernicke’s areas, the areas that were shown to be active in language and music communication.

Thaler, L., Arnott, S., & Goodale, M. (2011). Neural correlates of natural human echolocation in early and late blind echolocation experts. PLoS ONE, 6(5). DOI: 10.1371/journal.pone.0020162
