In a recent paper (citation below), Berent and colleagues investigate language universals in syllable structure. Their argument runs: certain syllables are preferred over others across languages, even by speakers whose language does not include those syllables; a set of four syllables that do not occur in English elicits this preference in English speakers; the preference shows up both in behavior and in activity in Broca's area, as opposed to auditory and motor areas; and so the preference is a language universal rather than a constraint on hearing or producing the syllables. This sounds convincing, but it seems to overlook Changizi's ideas about the nature of our phonemes.
Berent and colleagues discuss the reason for the preference among these syllables. “Across languages, syllables like blif are preferred (e.g., more frequent) relative to syllables like bnif, which in turn, are preferred to bdif; least preferred on this scale are syllables like lbif. Linguistic research attributes this hierarchy to universal grammatical restrictions on sonority—a scalar phonological property that correlates with the loudness of segments. Least sonorous are stop consonants (e.g., b, p), followed by nasals (e.g., n, m), and finally the most sonorous consonants—liquids and glides (e.g., l, r, y, w). Accordingly, syllables such as blif exhibit a large rise in sonority, bnif exhibits a smaller rise, in bdif, there is a sonority plateau, whereas lbif falls in sonority. The universal syllable hierarchy (e.g., blif>bnif>bdif>lbif, where > indicates preference) could thus reflect a grammatical principle that favors syllables with large sonority clines—the larger the cline, the better-formed the onset.”
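The scale described above can be made concrete with a small sketch. This is not from the paper; the numeric sonority ranks below are made up for illustration, and only the onset consonants of the four test syllables are scored. The point is just that the preference order falls out of ranking the onsets by their sonority rise.

```python
# Illustrative sonority ranks (hypothetical values, ordered per the quoted
# hierarchy): stops < nasals < liquids/glides.
SONORITY = {
    # stops (least sonorous)
    "b": 1, "p": 1, "d": 1, "t": 1, "g": 1, "k": 1,
    # nasals
    "n": 2, "m": 2,
    # liquids and glides (most sonorous consonants)
    "l": 3, "r": 3, "y": 3, "w": 3,
}

def onset_cline(syllable):
    """Sonority rise across a two-consonant onset: second minus first."""
    first, second = syllable[0], syllable[1]
    return SONORITY[second] - SONORITY[first]

# Larger rise = better-formed onset, so sort by descending cline.
ranked = sorted(["lbif", "bdif", "bnif", "blif"], key=onset_cline, reverse=True)
print(ranked)  # ['blif', 'bnif', 'bdif', 'lbif']
```

With these toy values, blif rises by 2, bnif by 1, bdif plateaus at 0, and lbif falls by 2, reproducing the blif>bnif>bdif>lbif ordering.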
What is not asked in this paper is why sonority should have this effect on preference. “An alternative explanation (to the sensory-motor one) attributes linguistic preferences to the language faculty itself. At the center of the language system is the grammar—a set of violable algebraic constraints that express tacit linguistic preferences.” This seems to beg the question of whether there is any way to view language other than as a ‘language faculty’ that is algebra-like down to the nature of syllables.
Changizi, on the other hand, assumes that the ‘language faculty’ is a cultural adaptation that uses pre-existing brain functions. In his theory, the preference for rising sonority would have to do with understanding natural sounds in the environment: cultural evolution harnessed the brain’s strengths for language. Broca’s area is about understanding the meanings of sounds, all sounds that have meaning, not just the meanings of words.
Here is part of an interview by Lende with Changizi (here). “I’ll give you a couple starting samples of how speech has the signature sounds of natural auditory events. In particular, my claim is not, say, that speech sounds like the savanna. Rather, the class of natural sounds is a very fundamental and general one, the sounds of events among solid objects. There are lots of regularities in the sounds of solid-object physical events, and it is possible to begin working them out.
For example, there are primarily three “atoms” of solid-object physical events: hits, slides and rings. Hits are when two objects hit one another, and slides where one slides along the other. Hits and slides are the two fundamental kinds of interaction. The third “atom” is the ring, which occurs to both objects involved in an interaction: each object undergoes periodic vibrations — they ring. They have a characteristic timbre, and your auditory system can usually recognize what kind of objects are involved.
For starters, then, notice how the three atoms of solid-object physical events match up nicely with the three fundamental phoneme types: plosives, fricatives and sonorants. Namely, plosives (like t, k, p, d, g, b) sound like hits, fricatives (s, sh, f, z, v) sound like slides, and sonorants (vowels and also phonemes like y, w, r, l) sound like rings.
Our mouths make their sounds *not* via the interaction of solid-object physical events. Instead, our phonemes are produced via air-flow mechanisms that *mimic* solid-object events. In fact, our air-flow sound-producing mechanisms can do *lots* more kinds of sounds, far beyond the limited range of solid-object sounds. But for language, they rein it in, and keep the words sounding like the solid-object events that are most commonly in nature, the kind our auditory system surely evolved to process efficiently.
As a second starter similarity, notice that solid-object events do not occur via random sequences of hits, slides and rings. There are lots of regularities about how they interact — and that I have tested to see that they apply in language — but a first fairly obvious one is this… Events are essentially sequences of hits and slides. That is, the *causal* sequence concerns the hits and the slides, not the rings. “The ball hit the table and bounced up, and then bumped into the wall, hit the ground again, and slid to a stop.”
Rings happen during all events, but they happen “for free” at each physical interaction. Solid-object events are thus sequences of the form (interaction, ring), (interaction, ring), and so on, where an ‘interaction’ can be a hit or a slide. This is perhaps the most fundamental “grammatical rule” of solid-object physical events, and it looks suspiciously like the most fundamental morphological rule in language: the syllable, the fundamental, universal version of which is the CV form, usually a plosive-or-fricative (ahem, a physical interaction) followed by a sonorant (ahem, a ring).
In my research I continue to work out the regularities found among solid-object physical events, and in each case ask if the regularity can be found in the sounds of speech.
As for “the symbolic meaning of a word is not determined by the physical sound structure of that word,” indeed, I agree. My own theory doesn’t propose this, but only that speech has come to have the signature structures found among solid-object events generally, thereby “sliding” easily into our auditory brain.”
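Changizi's analogy between event atoms and phoneme classes can be sketched mechanically. The mapping and the pattern test below are my own hypothetical rendering of his description, not anything he or Berent et al. provide: plosives and fricatives count as interactions, sonorants as rings, and a word is "CV-like" if it parses as a sequence of interaction-plus-ring units.

```python
import re

# Hypothetical mapping of phoneme classes to Changizi's event "atoms".
# (One character per key, so digraphs like "sh" are omitted for simplicity.)
ATOM = {}
for p in "ptkbdg":        # plosives ~ hits
    ATOM[p] = "I"         # I = physical interaction
for f in "szfv":          # fricatives ~ slides (also interactions)
    ATOM[f] = "I"
for s in "aeiouylrw":     # sonorants (vowels, y, l, r, w) ~ rings
    ATOM[s] = "R"

def event_pattern(word):
    """Transcribe a word into its interaction/ring skeleton."""
    return "".join(ATOM.get(ch, "?") for ch in word)

def is_cv_like(word):
    """True if the word is a sequence of (interaction, ring(s)) units."""
    return re.fullmatch(r"(IR+)+", event_pattern(word)) is not None

print(event_pattern("baba"), is_cv_like("baba"))  # IRIR True
print(event_pattern("lbif"), is_cv_like("lbif"))  # RIRI False
```

Under this toy analysis, a CV-shaped word like baba parses cleanly as interaction–ring pairs, while the dispreferred lbif begins with a ring that no interaction produced, which is exactly the kind of sequence solid-object events do not generate.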
I think Berent et al. missed something by not addressing Changizi’s view of the syllable and what it says about preferences. Here is their abstract:
It is well known that natural languages share certain aspects of their design. For example, across languages, syllables like blif are preferred to lbif. But whether language universals are myths or mentally active constraints—linguistic or otherwise— remains controversial. To address this question, we used fMRI to investigate brain response to four syllable types, arrayed on their linguistic well-formedness (e.g., blif>bnif>bdif>lbif, where > indicates preference). Results showed that syllable structure monotonically modulated hemodynamic response in Broca’s area, and its pattern mirrored participants’ behavioral preferences. In contrast, ill-formed syllables did not systematically tax sensorimotor regions—while such syllables engaged primary auditory cortex, they tended to deactivate (rather than engage) articulatory motor regions. The convergence between the cross-linguistic preferences and English participants’ hemodynamic and behavioral responses is remarkable given that most of these syllables are unattested in their language. We conclude that human brains encode broad restrictions on syllable structure.
Berent, I., Pan, H., Zhao, X., Epstein, J., Bennett, M., Deshpande, V., Seethamraju, R., & Stern, E. (2014). Language universals engage Broca’s area. PLoS ONE, 9(4). DOI: 10.1371/journal.pone.0095155