Why are some syllables preferred?

In a recent paper, Berent and colleagues (citation below) investigate language universals in syllable structure. Their argument goes: there is a preference for certain syllables over others across languages, even among speakers whose language does not include those syllables; a set of four syllables that do not occur in English shows this preference in English speakers; the preference shows up in behavior and in activity in Broca’s area rather than in auditory or motor areas; therefore the preference is a language universal rather than a constraint on hearing or producing the syllables. This sounds persuasive, but it seems to overlook Changizi’s ideas about the nature of our phonemes.

Berent discusses the basis of the preference among these syllables: “Across languages, syllables like blif are preferred (e.g., more frequent) relative to syllables like bnif, which in turn, are preferred to bdif; least preferred on this scale are syllables like lbif. Linguistic research attributes this hierarchy to universal grammatical restrictions on sonority—a scalar phonological property that correlates with the loudness of segments. Least sonorous are stop consonants (e.g., b, p), followed by nasals (e.g., n, m), and finally the most sonorous consonants—liquids and glides (e.g., l, r, y, w). Accordingly, syllables such as blif exhibit a large rise in sonority, bnif exhibits a smaller rise, in bdif, there is a sonority plateau, whereas lbif falls in sonority. The universal syllables hierarchy (e.g., blif>bnif>bdif>lbif, where > indicates preference) could thus reflect a grammatical principle that favors syllables with large sonority clines—the larger the cline, the better-formed the onset.”
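To make the sonority arithmetic concrete, here is a minimal sketch of how the cline could be computed. The numeric scale values are my own illustrative assumptions (only the ordering stops < nasals < liquids/glides matters), not values from Berent et al.:

```python
# Illustrative sonority scale; the numbers are assumptions for demonstration,
# only their ordering (stops < nasals < liquids/glides) matters.
SONORITY = {
    "b": 1, "p": 1, "d": 1, "t": 1, "g": 1, "k": 1,  # stops
    "n": 2, "m": 2,                                  # nasals
    "l": 3, "r": 3, "y": 3, "w": 3,                  # liquids and glides
}

def onset_cline(onset):
    """Sonority rise across a two-consonant onset: second segment minus first."""
    return SONORITY[onset[1]] - SONORITY[onset[0]]

# Larger cline -> better-formed onset, reproducing blif > bnif > bdif > lbif.
for syllable in ["blif", "bnif", "bdif", "lbif"]:
    print(syllable, onset_cline(syllable[:2]))
# bl: +2 (large rise), bn: +1 (small rise), bd: 0 (plateau), lb: -2 (fall)
```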

What the paper does not ask is why sonority should have this effect on preference. “An alternative explanation (to the sensory-motor one) attributes linguistic preferences to the language faculty itself. At the center of the language system is the grammar—a set of violable algebraic constraints that express tacit linguistic preferences.” This seems to assume, rather than ask, whether there is any way to view language other than as a ‘language faculty’ that is algebra-like right down to the nature of syllables.

Changizi, on the other hand, takes the ‘language faculty’ to be a cultural adaptation that uses pre-existing brain functions. In his theory, the preference for rising sonority would have to do with understanding natural sounds in the environment: cultural evolution harnessed the brain’s strengths for language. Broca’s area is about understanding the meanings of sounds – all sounds that have meaning, not just the meanings of words.

Here is part of an interview by Lende with Changizi (here). “I’ll give you a couple starting samples of how speech has the signature sounds of natural auditory events. In particular, my claim is not, say, that speech sounds like the savanna. Rather, the class of natural sounds is a very fundamental and general one, the sounds of events among solid objects. There are lots of regularities in the sounds of solid-object physical events, and it is possible to begin working them out.

For example, there are primarily three “atoms” of solid-object physical events: hits, slides and rings. Hits are when two objects hit one another, and slides where one slides along the other. Hits and slides are the two fundamental kinds of interaction. The third “atom” is the ring, which occurs to both objects involved in an interaction: each object undergoes periodic vibrations — they ring. They have a characteristic timbre, and your auditory system can usually recognize what kind of objects are involved.

For starters, then, notice how the three atoms of solid-object physical events match up nicely with the three fundamental phoneme types: plosives, fricatives and sonorants. Namely, plosives (like t, k, p, d, g, b) sound like hits, fricatives (s, sh, f, z, v) sound like slides, and sonorants (vowels and also phonemes like y, w, r, l) sound like rings.

Our mouths make their sounds *not* via the interaction of solid-object physical events. Instead, our phonemes are produced via air-flow mechanisms that *mimic* solid-object events. In fact, our air-flow sound-producing mechanisms can do *lots* more kinds of sounds, far beyond the limited range of solid-object sounds. But for language, they rein it in, and keep the words sounding like the solid-object events that are most commonly in nature, the kind our auditory system surely evolved to process efficiently.

As a second starter similarity, notice that solid-object events do not occur via random sequences of hits, slides and rings. There are lots of regularities about how they interact — and that I have tested to see that they apply in language — but a first fairly obvious one is this… Events are essentially sequences of hits and slides. That is, the *causal* sequence concerns the hits and the slides, not the rings. “The ball hit the table and bounced up, and then bumped into the wall, hit the ground again, and slid to a stop.”

Rings happen during all events, but they happen “for free” at each physical interaction. Solid-object events are sequences of the form interaction-ring, interaction-ring, …, where ‘interaction’ can have hit or slide in it. This is perhaps the most fundamental “grammatical rule” of solid-object physical events, and it looks suspiciously like the most fundamental morphological rule in language: the syllable, the fundamentally universal version of which is the CV form, usually a plosive-or-fricative (ahem, a physical interaction) followed by a sonorant (ahem, a ring).

In my research I continue to work out the regularities found among solid-object physical events, and in each case ask if the regularity can be found in the sounds of speech.

As for “the symbolic meaning of a word is not determined by the physical sound structure of that word,” indeed, I agree. My own theory doesn’t propose this, but only that speech has come to have the signature structures found among solid-object events generally, thereby “sliding” easily into our auditory brain.”
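Changizi’s correspondence between phoneme classes and event ‘atoms’ can be written down almost mechanically. The sketch below is only an illustration of that correspondence; the class memberships are simplified assumptions, not a complete phonology:

```python
# Changizi's correspondence: plosives ~ hits, fricatives ~ slides, sonorants ~ rings.
# Class membership is deliberately simplified for illustration.
PLOSIVES   = set("tkpdgb")
FRICATIVES = set("szfv")
SONORANTS  = set("aeiouywrl")

def event_atom(phoneme):
    if phoneme in PLOSIVES:
        return "hit"
    if phoneme in FRICATIVES:
        return "slide"
    if phoneme in SONORANTS:
        return "ring"
    return "other"

# A CV syllable then reads as a physical interaction followed by a ring:
print([event_atom(p) for p in "ba"])  # ['hit', 'ring']
print([event_atom(p) for p in "sa"])  # ['slide', 'ring']
```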

I think Berent et al. missed something when they did not address Changizi’s view of the syllable and what it says about preferences. Here is their abstract:

It is well known that natural languages share certain aspects of their design. For example, across languages, syllables like blif are preferred to lbif. But whether language universals are myths or mentally active constraints—linguistic or otherwise—remains controversial. To address this question, we used fMRI to investigate brain response to four syllable types, arrayed on their linguistic well-formedness (e.g., blif>bnif>bdif>lbif, where > indicates preference). Results showed that syllable structure monotonically modulated hemodynamic response in Broca’s area, and its pattern mirrored participants’ behavioral preferences. In contrast, ill-formed syllables did not systematically tax sensorimotor regions—while such syllables engaged primary auditory cortex, they tended to deactivate (rather than engage) articulatory motor regions. The convergence between the cross-linguistic preferences and English participants’ hemodynamic and behavioral responses is remarkable given that most of these syllables are unattested in their language. We conclude that human brains encode broad restrictions on syllable structure.

Berent, I., Pan, H., Zhao, X., Epstein, J., Bennett, M., Deshpande, V., Seethamraju, R., & Stern, E. (2014). Language Universals Engage Broca’s Area. PLoS ONE, 9(4). DOI: 10.1371/journal.pone.0095155

Metaphor, Exaptation and Harnessing

We are used to the metaphor relating time to distance, as in “back in the 1930s” or “it was a long day”. And there is a noticeable metaphor relating social relationships to distance, as in “a close friend” or “distant relatives”. But these are probably not just verbal metaphors, figures of speech, but much deeper connections. Parkinson (see citations below) has studied the neurobiology of this relationship and shows it is likely to be an exaptation: a shift in function of an existing evolutionary adaptation to a new or enlarged function. We have an old and well-established brain system for dealing with space. This system has been used to deal with time as well (rather than a new system evolving), and was later further co-opted to deal with social relationships.

What spatial, temporal and social perception have in common in this system is that they are egocentric. Space is perceived as distances in every direction from here, with ourselves at the ‘here’ center. In the same way, we are at the center of the present ‘now’. We are also at the center of a social web, with various people at different distances out from our center. Objects are placed in perceptual space at various directions and distances from us. Events are placed at various distances into the future or past. People are placed in the social web depending on the strength of our connection with them. It appears that, with a small amount of adaptation (or learning), almost any egocentric system could be handled by the basically spatial system of the brain.

Parkinson has looked at the regions of the brain that process spatial information to see if and how they also process temporal and social information. The paper has the details, but essentially, “relative egocentric distance could be decoded across all distance domains (spatial, temporal, social) … in voxels in a large cluster in the right inferior parietal lobule (IPL) extending into the posterior superior temporal gyrus (STG). Cross-domain distance decoding was also possible in smaller clusters throughout the right IPL, spanning both the supramarginal (SMG) and angular (AG) gyri, as well as in one cluster in medial occipital cortex”.

“These findings provide preliminary support for speculation that IPL circuitry originally devoted to sensorimotor transformations and representing one’s body in space was “recycled” to operate analogously on increasingly abstract contents as this region expanded during evolution. Such speculations are analogous to cognitive linguists’ suggestions that we may speak about abstract relationships in physical terms (e.g., “inner circle”) because we think of them in those terms. Consistent with representations of spatial distance scaffolding those of more abstract distances, compelling behavioral evidence demonstrates that task-irrelevant spatial information has an asymmetrically large impact on temporal processing.” As well as echoing the linguistic theories of Lakoff and Johnson, this is similar to Changizi’s idea of cultural evolution harnessing the existing functionality of the brain for new uses such as writing.

Here is the abstract of the Parkinson 2014 paper:

Distance describes more than physical space: we speak of close friends and distant relatives, and of the near future and distant past. Did these ubiquitous spatial metaphors arise in language coincidentally or did they arise because they are rooted in a common neural computation? To address this question, we used statistical pattern recognition techniques to analyze human fMRI data. First, a machine learning algorithm was trained to discriminate patterns of fMRI responses based on relative egocentric distance within trials from one distance domain (e.g., photographs of objects relatively close to or far away from the viewer in spatial distance trials). Next, we tested whether the decision boundary generated from this training could distinguish brain responses according to relative egocentric distance within each of two separate distance domains (e.g., phrases referring to the immediate or more remote future within temporal distance trials; photographs of participants’ friends or acquaintances within social distance trials). This procedure was repeated using all possible combinations of distance domains for training and testing the classifier. In all cases, above-chance decoding across distance domains was possible in the right inferior parietal lobule (IPL). Furthermore, the representational similarity structure within this brain area reflected participants’ own judgments of spatial distance, temporal soon-ness, and social familiarity. Thus, the right IPL may contain a parsimonious encoding of proximity to self in spatial, temporal, and social frames of reference.
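The cross-domain decoding logic described in the abstract (train a classifier on near/far response patterns from one distance domain, then test its decision boundary on another domain) can be sketched in a few lines. This toy version uses scikit-learn with random placeholder data; it is not the study’s pipeline, and the array shapes and names are assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy sketch of cross-domain decoding in the spirit of Parkinson et al. (2014).
# X_* are (trials x voxels) response patterns, y_* are near(0)/far(1) labels.
# The random arrays are placeholders, not the study's data.
rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 200
X_spatial  = rng.normal(size=(n_trials, n_voxels))
y_spatial  = rng.integers(0, 2, size=n_trials)
X_temporal = rng.normal(size=(n_trials, n_voxels))
y_temporal = rng.integers(0, 2, size=n_trials)

# Train on one distance domain (spatial)...
clf = LinearSVC().fit(X_spatial, y_spatial)

# ...then ask whether the same decision boundary separates near from far
# in a different domain (temporal). Above-chance accuracy would suggest a
# shared egocentric-distance code across domains.
print("cross-domain accuracy:", clf.score(X_temporal, y_temporal))
```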

Parkinson, C., Liu, S., & Wheatley, T. (2014). A common cortical metric for spatial, temporal, and social distance. The Journal of Neuroscience, 34(5), 1979-87. PMID: 24478377

Parkinson, C., & Wheatley, T. (2013). Old cortex, new contexts: re-purposing spatial perception for social cognition. Frontiers in Human Neuroscience, 7. PMID: 24115928