A recent paper (see citation below) has helped to clarify the relationship between linguistic and musical communication. The researchers used a standard form of jazz interplay called "trading fours": the musicians alternate playing four-bar phrases, each relating to the previous one, so that the players in effect answer one another. This back and forth is a musical conversation.
The authors contrasted the "trading fours" sessions with a number of controls that were not musical conversations: scales, a practiced melody, and improvisation without relating to another player. The resulting music was analyzed for "note density, pitch class distribution, pitch class transitions, duration distribution, duration transitions, interval distribution, interval transitions, melodic complexity, and self-organizing maps of key". These measures gave a numeric value for melodic complexity and identified the conversational nature of the "trading fours" sessions. The improvisation in the "trading fours" music was more melodically complex, and the phrases were related in a conversational way.
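To make two of those measures concrete, here is a minimal sketch (not the authors' actual analysis code, which is not published in the post) of how a pitch class distribution and an interval distribution might be computed from a sequence of MIDI note numbers:

```python
from collections import Counter

def pitch_class_distribution(midi_pitches):
    """Normalized frequency of each of the 12 pitch classes (C=0 ... B=11)."""
    counts = Counter(p % 12 for p in midi_pitches)
    total = len(midi_pitches)
    return {pc: counts.get(pc, 0) / total for pc in range(12)}

def interval_distribution(midi_pitches):
    """Normalized frequency of melodic intervals, in semitones,
    between successive notes (negative = descending)."""
    intervals = [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]
    counts = Counter(intervals)
    total = len(intervals)
    return {iv: n / total for iv, n in counts.items()}

# Toy phrase for illustration: C4, E4, G4, E4, C4 (MIDI numbers)
phrase = [60, 64, 67, 64, 60]
print(pitch_class_distribution(phrase)[0])  # pitch class C occurs 2 of 5 notes -> 0.4
print(interval_distribution(phrase))        # {4: 0.25, 3: 0.25, -3: 0.25, -4: 0.25}
```

Distributions like these can then be compared across conditions (for example, scales versus improvised responses) to quantify how one player's phrase relates to the other's.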
One of the players was scanned with fMRI during the sessions. The improvised conversation involved intense activation of two of the language centers (Broca's and Wernicke's areas) and also their right-hemisphere counterparts. The left-side areas "are known to be critical for language production and comprehension as well as processing of musical syntax." The right-hemisphere homologue of Broca's area is "associated with the detection of task relevant cues such as those involved in the identification of salient harmonic and rhythmic elements." These two areas appear to perform syntactic processing for both music and speech. Wernicke's area is involved in harmonic processing, and its right homologue is "implicated in auditory short-term memory, consistent with the maintenance of the preceding musical phrases." These results are similar to a study of linguistic conversation and are consistent with the 'shared syntactic integration resource hypothesis'. In other words, they are consistent with music and language "sharing a common neural network for syntactic operations".
However, music and language are not semantically similar. In the 'trading fours' situation there is a marked deactivation of the angular gyrus, which is related to "semantic processing of auditory and visual linguistic stimuli and the production of written language and written music." It appears that during communication, language and music resemble one another in form (syntax) but not in meaning (semantics).
This points in a particular direction. There may be no language specific system in the brain but rather a communication specific system. Interesting.
Here is the abstract:
Interactive generative musical performance provides a suitable model for communication because, like natural linguistic discourse, it involves an exchange of ideas that is unpredictable, collaborative, and emergent. Here we show that interactive improvisation between two musicians is characterized by activation of perisylvian language areas linked to processing of syntactic elements in music, including inferior frontal gyrus and posterior superior temporal gyrus, and deactivation of angular gyrus and supramarginal gyrus, brain structures directly implicated in semantic processing of language. These findings support the hypothesis that musical discourse engages language areas of the brain specialized for processing of syntax but in a manner that is not contingent upon semantic processing. Therefore, we argue that neural regions for syntactic processing are not domain-specific for language but instead may be domain-general for communication.
Donnay, G., Rankin, S., Lopez-Gonzalez, M., Jiradejvong, P., & Limb, C. (2014). Neural substrates of interactive musical improvisation: An fMRI study of 'Trading Fours' in jazz. PLoS ONE, 9(2). DOI: 10.1371/journal.pone.0088665