Category Archives: language

The power of words

ScienceDaily has an item (here) on an interesting paper. (B. Boutonnet, G. Lupyan. Words Jump-Start Vision: A Label Advantage in Object Recognition. Journal of Neuroscience, 2015; 35 (25): 9329 DOI: 10.1523/JNEUROSCI.5111-14.2015)

The researchers demonstrated how words can affect perception. A particular brain wave (the P1) that occurs about a tenth of a second after a visual image appears was enhanced by a matching word but not by a matching natural sound. And the word made identification of the image quicker while the natural sound did not. For example, a picture of a dog, the spoken word ‘dog’, and a dog’s bark would be one such set.

They believe this is because the word is about a general category and the natural sound is a specific example from that category. Symbols such as words are the only way to indicate categories. “Language allows us this uniquely human way of thinking in generalities. This ability to transcend the specifics and think about the general may be critically important to logic, mathematics, science, and even complex social interactions.”

Here is the abstract: “People use language to shape each other’s behavior in highly flexible ways. Effects of language are often assumed to be “high-level” in that, whereas language clearly influences reasoning, decision making, and memory, it does not influence low-level visual processes. Here, we test the prediction that words are able to provide top-down guidance at the very earliest stages of visual processing by acting as powerful categorical cues. We investigated whether visual processing of images of familiar animals and artifacts was enhanced after hearing their name (e.g., “dog”) compared with hearing an equally familiar and unambiguous nonverbal sound (e.g., a dog bark) in 14 English monolingual speakers. Because the relationship between words and their referents is categorical, we expected words to deploy more effective categorical templates, allowing for more rapid visual recognition. By recording EEGs, we were able to determine whether this label advantage stemmed from changes to early visual processing or later semantic decision processes. The results showed that hearing a word affected early visual processes and that this modulation was specific to the named category. An analysis of ERPs showed that the P1 was larger when people were cued by labels compared with equally informative nonverbal cues—an enhancement occurring within 100 ms of image onset, which also predicted behavioral responses occurring almost 500 ms later. Hearing labels modulated the P1 such that it distinguished between target and nontarget images, showing that words rapidly guide early visual processing.”


Colour words

It is well known that not all languages have words for what we would call the basic colours of the rainbow – red, orange, yellow, green, blue, purple – along with white and black. How can this be so?

First we can get rid of the idea that because people have no word for a colour, they cannot see it. Of course they can see it; they simply have no category for that particular colour. Take a language without a word for blue: its speakers might call a darker blue a shade of black and a lighter blue a shade of white. To wonder why they would answer this way is like wondering why someone calls both straw and apricot shades of yellow. It is not that they cannot see the difference but that they have not formed those particular categories (because they have never spent hours picking the colours of paints, for example). How many colour names we have, and the exact lines of demarcation between them, depend on the culture/language we live in. When we see a colour in front of us, we see the visual perception and not the category/word/concept of a particular colour. We can compare two shades in front of us and say whether they are the same or different even if we only have one colour word for both of them.

Seeing is one thing but saying is another. All words are categories or concepts and encompass a good deal of variation. In the ‘space-landscape’ of colour, words are like large countries. As children we learn the geography of this space and the borders of each word’s domain. When we are asked to name a colour, we use the word that is the colour’s best category. We roughly understand where in the landscape that colour is and therefore which country it is in. To communicate we need to more or less agree on the borders of the categories and the word for each – otherwise there is no communication. If you say it is a red flower, I will imagine an archetypal flower with an average red colour.
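This "best category" idea can be sketched as a nearest-prototype lookup. The prototype RGB values below are my own illustrative assumptions, not measurements from any colour survey:

```python
# Colour naming as nearest-category lookup: each colour word is a
# "country" in colour space, represented here by one prototype point.
# The RGB prototypes are illustrative assumptions, not survey data.
PROTOTYPES = {
    "black":  (0, 0, 0),
    "white":  (255, 255, 255),
    "red":    (255, 0, 0),
    "yellow": (255, 255, 0),
    "green":  (0, 128, 0),
    "blue":   (0, 0, 255),
}

def name_colour(rgb, prototypes=PROTOTYPES):
    """Return the colour word whose prototype is closest to rgb."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: dist(rgb, prototypes[name]))
```

Deleting the "blue" entry makes the function behave like a language without a blue category: a dark blue such as (0, 0, 139) then lands in "black" and a pale blue such as (100, 150, 255) lands in "white" – the borders of the neighbouring countries simply expand.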

Our culture does more than that. Culture can make connections between objects and colours. Some objects get defined by their colour. What colour is the sky? It is blue. It is a well-known fact that the sky is blue. But the sky is not always blue – black on a dark night, various shades of grey (from almost white to quite dark grey with clouds), pink at dawn, orange and red at sunset, green with northern lights. Water is also blue by agreement although it is often grey, green, brown, yellow or red. If I think of a leaf, green comes along. If I think of a lemon, I also bring up yellow. The sky and blue is one of these conventional pairings. But where the colour is important it (in a sense) splits the object concept. It matters whether a wine is red or white, a chess piece black or white. The culture will force the noticing of colour when it is important in that culture. Quite often colours are identified by an object (like the apricot and straw mentioned above). This has been going on for a long time: orange from a Persian word for the fruit, yellow from a West Germanic word for gold, green from an old Germanic word for new growth, purple from the Greek for a mollusc that gave the royal dye.

Languages acquire colour words over time. Berlin and Kay examined the history of 110 languages and found that words for colour started with light and dark (not just white and black), followed by red (sometimes used for anything brightly coloured), then green and yellow (sometimes together and then separating), then blue. Other colours were added later: brown and orange (sometimes together at first), purple, pink, grey. Then we have many, many subcategories (sky blue, pea green) and border ones (aquamarine/turquoise at the green-blue border). I notice that lately when people list basic colours, they include pink along with the primary colours. This is new and implies that red has split into red and pink. People do not want to call a pink thing red.

Unless it is very important, it seems that colour can be omitted from a memory. It is surprising how little we remember the colour of things. We can see things every day and not be able to remember their colour. Sometimes there is simply no reason to remember.

We cannot know what people experienced just by looking at the words they had. The ancient Greeks lacked many colour words. But the idea that, “It seemed the Greeks lived in a murky and muddy world, devoid of color, mostly black and white and metallic, with occasional flashes of red or yellow”, is just wrong. Their poetry is not full of colourful images but that does not mean that their lives were devoid of colour.


Gibbon calls

There is some interesting news on gibbons. But first, what are gibbons? They are apes – called lesser apes but definitely in our group with chimps, gorillas, and orangs, not with monkeys. The Chinese used to call them “gentlemen of the forest” to separate them from troublesome monkeys. Our lineage split from theirs about 18 million years ago. For context, the separations from orangs, gorillas, and chimps were about 14, 7, and 5 mya respectively.

They are the fastest travellers through the forest canopy, clocked at 55 km/hr, swinging from branch to branch. They have ball-and-socket wrists on their long and powerful arms. When they are forced to the ground they walk upright (more upright than chimps can manage). Gibbons are social, territorial and pair-bond for life. And they sing with very powerful voices thanks to reverberating throat sacs. They sing duets and give family choir performances. But they also whisper or “hoo”. There have been some studies of their song but the Clarke paper (citation below) is the first study of the softer hoos.

This is important to the inquiry into the history of human language. There are two approaches to looking at our language: one is to look at what is unique and separates us from our nearest cousins; the other is to look at what is similar and forms a continuum with our relatives. We can read many articles on the uniqueness but only recently have there been articles on the similarities.

Although language is a uniquely human behaviour, it is likely to have evolved from precursors in the primate lineage, some of which may still be detectable in the vocal behaviour of extant primates. One important candidate for such a precursor is the ability to produce context-specific calls, a prerequisite to referential communication during which an actor refers a recipient’s attention to an external event. … More recently, functionally referential calling behaviour also has been described for other species of monkeys, apes, dogs, dolphins, and birds such as fowl, jays and chickadees.

Overall, context-specific calling behaviour appears to be widespread in animal communication, presumably because the selection pressure to attend to and understand context-specific calls is very strong, especially in evolutionarily urgent situations. In addition, there is good evidence for call comprehension between different species of primates, between primates and birds and between primates and other mammals, suggesting that such phenomena are driven by a generalised cognitive mechanism that is widely available to animals. Whether or not such abilities are relevant for understanding language evolution has triggered much debate with no real consensus. Nevertheless, the comparative study of animal communication, especially across non-human primates, is one of the most useful tools to make progress and address open questions about human language evolution.

Although gibbon hoos sound much the same to human observers, when they are recorded and analyzed for highest pitch, lowest pitch, pitch range, duration, volume, and interval between calls, it is possible to see differences between hoo calls in different situations. The distinct situations noted included: tiger, leopard, raptor, encounter with another group, feeding, and introduction to duet song.
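As a toy illustration of how such features can separate contexts, here is a nearest-centroid classifier. The feature values are invented for the sketch; the real study used statistical analysis of measured spectral parameters, not this procedure:

```python
# Invented per-context mean features: (peak_hz, low_hz, duration_s).
# These numbers are illustrative assumptions, not data from the paper.
CONTEXT_MEANS = {
    "tiger":   (700.0, 400.0, 0.30),
    "raptor":  (500.0, 350.0, 0.20),
    "feeding": (450.0, 300.0, 0.50),
}

def classify_call(peak_hz, low_hz, duration_s):
    """Assign a call to the context with the nearest mean features."""
    call = (peak_hz, low_hz, duration_s)
    def dist(a, b):
        # Rough per-feature scales so duration is not swamped by the
        # much larger frequency values.
        scales = (100.0, 100.0, 0.1)
        return sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, scales))
    return min(CONTEXT_MEANS, key=lambda c: dist(call, CONTEXT_MEANS[c]))
```

A call near one of the centroids is assigned to that context; in the study itself, the point was that the measured parameters clustered by situation at all, not the particular classifier used.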

This communication-based calling, which is fairly common in non-solitary animals, differs from human language in that the calls are relatively fixed to particular situations and small in number for most animals (dolphins and whales may have a surprising number, and it is not known whether theirs are fixed). Some would say that animal calls are automatic and do not involve any decision to call; this is difficult to measure and in any case does not seem to apply to the more intelligent animals. The exchange of information is clearly involved in animal communication – communication and exchange of information are almost synonymous. The idea that our communication is based on affecting one another’s attention, metaphorically pointing at concepts, objects, actions and so on, fits nicely with animal referential communication.

Here is the paper’s abstract:

“Background: Close range calls are produced by many animals during intra-specific interactions, such as during home range defence, playing, begging for food, and directing others. In this study, we investigated the most common close range vocalisation of lar gibbons (Hylobates lar), the ‘hoo’ call. Gibbons and siamangs (family Hylobatidae) are known for their conspicuous and elaborate songs, while quieter, close range vocalisations have received almost no empirical attention, perhaps due to the difficult observation conditions in their natural forest habitats.

Results: We found that ‘hoo’ calls were emitted by both sexes in a variety of contexts, including feeding, separation from group members, encountering predators, interacting with neighbours, or as part of duet songs by the mated pair. Acoustic analyses revealed that ‘hoo’ calls varied in a number of spectral parameters as a function of the different contexts. Males’ and females’ ‘hoo’ calls showed similar variation in these context-specific parameter differences, although there were also consistent sex differences in frequency across contexts.

Conclusions: Our study provides evidence that lar gibbons are able to generate significant, context-dependent acoustic variation within their main social call, which potentially allows recipients to make inferences about the external events experienced by the caller. Communicating about different events by producing subtle acoustic variation within some call types appears to be a general feature of primate communication, which can increase the expressive power of vocal signals within the constraints of limited vocal tract flexibility that is typical for all non-human primates. In this sense, this study is of direct relevance for the on-going debate about the nature and origins of vocally-based referential communication and the evolution of human speech.”

Clarke, E., Reichard, U., & Zuberbühler, K. (2015). Context-specific close-range “hoo” calls in wild gibbons (Hylobates lar) BMC Evolutionary Biology, 15 (1) DOI: 10.1186/s12862-015-0332-2

I'm on ScienceSeeker-Microscope

A new way to parse language

For many years I have followed EB Bolles’ blog Babel’s Dawn (here) while he discussed the origin of human language. He has convinced me of many things about the history and nature of language, and they fit with how I thought of language. Now he has written a chapter in a book, “Attention and Meaning: The Attentional Basis of Meaning”. In his chapter, “Attentional-Based Syntax” (here), Bolles rewrites the mechanics of parsing phrases and sentences. He uses new entities, not nouns and verbs and so on, and very different rules.

The reasons I like this approach so much are the same reasons that I cannot accept Chomsky’s view of language. I see language from a biological point of view: a product of genetic and cultural evolution, and continuous with the communication of other animals. It is a type of biological communication. I imagine (rightly or wrongly) that Chomsky finds biology and especially animals distasteful and that he also has no feel for the way evolution works. I, on the other hand, find a study of language that seems to deal only with complete written sentences on a whiteboard not of much interest. Instead of a form of biological communication, Chomsky gives us a form of logical thought.

Bolles summarizes his chapter like this. “The commonsense understanding of meaning as reference has dominated grammatical thought for thousands of years, producing many paradoxes while leaving many mysteries about language’s nature. The paradoxes wane if we assume that meaning comes by directing attention from one phenomenon to another. This transfer of meaning from objective reality to subjective experience breaks with the objective grammatical accounts produced by many philosophers, lexicographers, and teachers through the ages. The bulk of this paper introduces a formal system for parsing sentences according to an attention-based syntax. The effort proves surprisingly fruitful and is capable of parsing many sentences without reference to predicates, nouns or verbs. It might seem a futile endeavor, producing an alternative to a system used by every educated person in the world, but the approach explains many observations left unexplained by classical syntax. It also suggests a promising approach to teaching language usage.”

The key change of concept is that words do not have meanings, nor do they carry meaning from a speaker to a listener – instead, they pilot attention within the brain. In other words, they work by forcing items into working memory and therefore attention (or attention and therefore working memory). This makes very good sense. Take a simple word like ‘tree’: the speaker says ‘tree’, the listener hears ‘tree’, and memory automatically brings to the surface memories associated with ‘tree’. The word ‘tree’ is held in working memory and as long as it is there, the brain has recall or near recall of tree-ish concepts/images/ideas. The meaning of tree is found within the listener’s brain.

No one thing, word or single element of memory has meaning; the meaning is formed when multiple things form a connection. It is the connections that give meaning. I like this because I have thought for years that single words are without meaning. Words form a network of connections in any culture, and a word’s connections in the network are what define the word. Because we share cultural networks including a language, we can communicate. I also like this starting point because it explains why language is associated with consciousness (an oddity because very little else to do with thinking is so closely tied to consciousness). Consciousness is associated with working memory and attention, and the content of consciousness seems to be (or come from) the focus of attention in working memory.

Bolles uses a particular vocabulary in his parsing method: phenomenon is any conscious experience, sensation is a minimal awareness like a hue or tone, percept is a group of sensations like a loud noise, bound perception is a group of percepts that form a unified experience. We could also say phenomenon is another word for subjective consciousness. Then we have the process of perception. Perception starts with primary sensory input, memory and predictions. It proceeds to bind elements together to form a moment of perception, then serial momentary perceptions are bound into events. It matters little what words are used, the process is fairly well accepted. But what is more, it is not confined to how language is processed – it is how everything that passes through working memory and into the content of consciousness is processed. No magic here! No mutation required! Language uses what the brain more-or-less does naturally.

This also makes the evolution of language easier to visualize. The basic mechanism existed in the way that attention, working memory and consciousness works. It was harnessed by a communication function and that function drove the evolution of language: both biological evolution and a great deal of cultural evolution. This evolution could be slow and steady over a long period of time and does not have to be the result of a recent (only 50-150 thousand years ago) all powerful single mutation.

So – the new method of parsing is essentially to formulate the rules that English uses to bind focuses of attention together to make a meaningful event (or bound perception). Each language would have its own syntax rules. The old syntax rules and the new ones are similar because both describe English. But the rules are no longer arbitrary; they are understandable in the context of working memory and attention. Gone is the feeling of memorizing rules to parse sentences on a whiteboard. In its place is an understanding of English as it is used.

I have to stick in a little rant here about peeves. If someone can understand, without effort or mistake, what someone else has said, then what is the problem? Why are arbitrary rules important if breaking them does not interfere at all with communication? With the new parsing method, it is easy to see what is good communication and what isn’t; it is clear what will hinder communication. The method can be used to improve perfectly good English into even better English. Another advantage is that the method can be used for narratives longer than a sentence.

I hope that this approach to syntax will be taken up by others.


An unnecessary exaggeration

Science 2.0 has a posting (here) on what are called brain-to-brain interfaces, which they and others cannot resist calling telepathy; neither could the original press-release writers.

I really think this ‘telepathy’ label is unnecessary. Telepathy implies communication directly on a mental (in the dualistic sense) rather than physical level. In other words telepathy is not natural but supernatural. What is being discussed now is a very physical communication involving a number of machines. No dualistic mental stage enters into it.

No doubt this technology, when it is perfected, will be useful in a number of ways. But as communication between most humans for most purposes, it will not beat language. In essence it is much like language: one brain has a thought and translates it into a form that can be transmitted, it is transmitted, and the receiver translates it back into a thought. That way of communicating sounds a lot like language to me. Just because it uses the internet to carry the message and non-intrusive machines to get information out of one brain and into another, does not mean it is different from language in principle. Language translates thoughts into words that are broadcast by the motor system, carried by sound through the air, received by the sensory system and made into words which can be translated into thoughts. It works well. If this new BBI stuff is telepathy then so is language (and semaphore for that matter).

Language also has some mind-control aspects. If I yell “STOP” it is very likely that another person will freeze before they can figure out why I yelled or why it may be a good idea to stop. It is as if I reached into their brain and pulled the halt cord. If you say “dog” I am going to look at the dog or search for one if there is no obvious dog. You have reached into my brain and pushed my attention from wherever it was focused onto a dog. If someone says “2 and 2 equals”, people will think “4” just like that. Someone has reached in and set the memory recall to find what completes that equation. People can also point metaphorically to shared concepts and so on. This amounts to people influencing one another’s brains.

With writing we have even managed to have time and distance gaps between speakers and listeners.

Language has other advantages but the greatest is that almost everyone has the mechanism already in a very advanced form. We are built to learn language as children and once learned it is handy, cheap and resilient.

Link between image and sound

Babies link the sound of a word with the image of an object in their early learning of language and this is an important ability. How do they come to have this mechanism? Are there predispositions to making links between sounds and images?

Research by Asano and others (citation below) shows one type of link. They show that sound symbolism can be used by infants about to learn language (about 11 months old) to match certain pseudo-words to drawings – “moma” to rounded shapes and “kipi” to sharply angled shapes. Sound symbolism is interesting but it need not be the first or most important link between auditory and visual information. It seems to me that an 11-month-old child would associate barks with dogs, twitters with birds, honks and engine noises with cars, and so on. They even mimic sounds to identify an object. It is clear that objects are recognized by their feel, smell, and sound as well as by sight. The ability to derive meaning from sound is completely natural, as is deriving it from sight. So the important thing is not the linking of sound and sight to the same meaning/object – mammals without language have this ability.

What is important about sound symbolism is that it is arbitrary and abstract. We appear to be born with certain connections between phonemes and meanings ready to be used. These sorts of connections would be a great help to a child grasping the nature of language as opposed to natural sounds.
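A toy sketch of this bouba/kiki-style pairing could simply score a pseudo-word's letters. The letter sets below are my own illustrative guesses, not the stimuli or criteria of the Asano study:

```python
# A toy heuristic in the spirit of sound symbolism: guess whether a
# pseudo-word "feels" rounded or spiky from its letters. The two
# letter sets are illustrative assumptions only.
ROUND_HINTS = set("moubw")   # letters loosely associated with round shapes
SPIKY_HINTS = set("kitpez")  # letters loosely associated with spiky shapes

def guess_shape(word):
    """Return 'rounded' or 'spiky' by counting hint letters."""
    letters = word.lower()
    round_score = sum(c in ROUND_HINTS for c in letters)
    spiky_score = sum(c in SPIKY_HINTS for c in letters)
    return "rounded" if round_score >= spiky_score else "spiky"
```

Crude as it is, the heuristic agrees with the classic pairings: "moma" and "bouba" come out rounded, "kipi" and "kiki" spiky.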

Here is the abstract: “A fundamental question in language development is how infants start to assign meaning to words. Here, using three Electroencephalogram (EEG)-based measures of brain activity, we establish that preverbal 11-month-old infants are sensitive to the non-arbitrary correspondences between language sounds and concepts, that is, to sound symbolism. In each trial, infant participants were presented with a visual stimulus (e.g., a round shape) followed by a novel spoken word that either sound-symbolically matched (“moma”) or mismatched (“kipi”) the shape. Amplitude increase in the gamma band showed perceptual integration of visual and auditory stimuli in the match condition within 300 msec of word onset. Furthermore, phase synchronization between electrodes at around 400 msec revealed intensified large-scale, left-hemispheric communication between brain regions in the mismatch condition as compared to the match condition, indicating heightened processing effort when integration was more demanding. Finally, event-related brain potentials showed an increased adult-like N400 response – an index of semantic integration difficulty – in the mismatch as compared to the match condition. Together, these findings suggest that 11-month-old infants spontaneously map auditory language onto visual experience by recruiting a cross-modal perceptual processing system and a nascent semantic network within the first year of life.”

Asano, M., Imai, M., Kita, S., Kitajo, K., Okada, H., & Thierry, G. (2015). Sound symbolism scaffolds language development in preverbal infants Cortex, 63, 196-205 DOI: 10.1016/j.cortex.2014.08.025


Another brick gone in the wall

The idea that there is an unbridgeable gap between human language and animal communication has taken another hit. For many years it has been maintained that chimpanzees cannot change their vocal signals, so although the grunts vary in different populations, in any particular group they are fixed. Therefore their vocalizations were not at all like a proto-language. A new paper by Watson and others (citation below) documents change in the vocalization in chimpanzees.

Goodall has said, “the production of sound in the absence of an appropriate emotional state seems to be an almost impossible task for a chimpanzee”. The general consensus was that variation of vocalization depends on emotional not informational factors, and that manual gestures were relatively flexible and intentional, whereas vocal signals were fixed.

The new study shows that chimpanzees can change the grunt for a particular food in order to better communicate with another group that they have joined. They can learn vocal symbols in a social context.

This makes a big difference to our understanding of our own language ability. The proposition that our close relatives lack some important ingredient in the make-up of their brains, and that this is why they did not evolve a proper language, has become extremely weak. It cannot be assumed that language is such an obvious advantage that any animal that has not evolved it must be unable to. The other idea therefore becomes stronger – we have language because we are more cooperative and trusting than our cousins. Language use is risky. Once individuals can risk open communication within a society, language takes off in both cultural and biological evolution (fast, although it probably took a few hundred thousand years). It is likely that all the ingredients for a proto-language were there in our common ancestor with chimpanzees and all that was needed was the safety to talk.

Here is the abstract: “One standout feature of human language is our ability to reference external objects and events with socially learned symbols, or words. Exploring the phylogenetic origins of this capacity is therefore key to a comprehensive understanding of the evolution of language. While non-human primates can produce vocalizations that refer to external objects in the environment, it is generally accepted that their acoustic structure is fixed and a product of arousal states. Indeed, it has been argued that the apparent lack of flexible control over the structure of referential vocalizations represents a key discontinuity with language. Here, we demonstrate vocal learning in the acoustic structure of referential food grunts in captive chimpanzees. We found that, following the integration of two groups of adult chimpanzees, the acoustic structure of referential food grunts produced for a specific food converged over 3 years. Acoustic convergence arose independently of preference for the food, and social network analyses indicated this only occurred after strong affiliative relationships were established between the original subgroups. We argue that these data represent the first evidence of non-human animals actively modifying and socially learning the structure of a meaningful referential vocalization from conspecifics. Our findings indicate that primate referential call structure is not simply determined by arousal and that the socially learned nature of referential words in humans likely has ancient evolutionary origins.”

Watson, S., Townsend, S., Schel, A., Wilke, C., Wallace, E., Cheng, L., West, V., & Slocombe, K. (2015). Vocal Learning in the Functionally Referential Food Grunts of Chimpanzees Current Biology DOI: 10.1016/j.cub.2014.12.032


Some visual-form areas are really task areas

There are two paths for visual information: one to the motor areas (the dorsal ‘where’ stream) and one to the areas concerned with consciousness, memory and cognition (the ventral ‘what’ stream). The visual ventral stream has areas for the recognition of various categories of object: faces, body parts and letters, for example. But are these areas really ‘visual’ areas or can they deal with input from other senses? There is recent research into an area concerned with numerals (see citation below). There are some reasons to doubt ‘vision only’ processing in these areas. “…cortical preference in the ‘visual’ cortex might not be exclusively visual and in fact might develop independently of visual experience. Specifically, an area showing preference for reading, at the precise location of the VWFA (visual word-form area), was shown to be active in congenitally blind subjects during Braille reading. Large-scale segregation of the ventral stream into animate and inanimate semantic categories has also been shown to be independent of visual experience. More generally, an overlap in the neural correlates of equivalent tasks has been repeatedly shown between the blind and sighted using different sensory modalities.” Is an area specialized in one domain because of cultural learning through visual experience, or is the specialization the result of the specific connectivity of the area?

Abboud and others used congenitally blind subjects to see if the numeral area could process numerals arriving as auditory signals. Congenitally blind subjects cannot have categorical areas that are based on visual learning. The letter area and numeral area are separate even though letter symbols and numeral symbols are very similar – in fact they can be identical. The researchers predicted that the word area connected to language areas and the numeral area to quantity areas.

The EyeMusic application

The subjects were trained in the EyeMusic, a sight substitute based on time, pitch, timbre and volume. While being scanned, the subjects heard the same musical description of an object and were asked to identify the object as part of a word, part of a number, or a colour. Roman numerals were used so that many numbers and letters had identical musical descriptions. What they found was that the numeric task gave activation in the same area as it does in a sighted person, and that blind and sighted subjects had the same connections: word area to language network and numeral area to quantity network. It is the connectivity patterns, independent of visual experience, that create the visual numeral-form area. “…neither the sensory-input modality and visual experience, nor the physical sensory stimulation itself, play a critical role in the specialization observed in this area.” It is which network is active (language or quantity) that is critical.
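The principle of such a sensory-substitution code can be sketched as follows. The actual EyeMusic scales, timbres and timings differ, so treat this purely as an illustration of the idea: a left-to-right sweep becomes time, height in the image becomes pitch, and brightness becomes volume.

```python
# Toy visual-to-auditory substitution in the spirit of the EyeMusic.
# Columns are swept left to right in time, row position sets pitch
# (one semitone per row, higher rows higher), brightness sets volume.
# The real device's encoding is different; this is only the principle.
def image_to_notes(image, base_hz=220.0, col_dt=0.1):
    """image: rows of brightness values 0..1, row 0 at the top.
    Returns (onset_seconds, pitch_hz, volume) tuples."""
    notes = []
    n_rows = len(image)
    for col in range(len(image[0])):
        for row in range(n_rows):
            vol = image[row][col]
            if vol > 0:  # silent where the image is dark
                pitch = base_hz * 2 ** ((n_rows - 1 - row) / 12)
                notes.append((col * col_dt, round(pitch, 1), vol))
    return notes
```

Given enough training, a listener can learn to decode such sweeps back into shapes, which is what lets the scanner present "visually" shaped symbols to congenitally blind subjects entirely through sound.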

“…these results are in agreement with the theory of cultural recycling, which suggests that the acquisition of novel cultural inventions is only feasible inasmuch as it capitalizes on prior anatomical and connectional constraints and invades pre-existing brain networks capable of performing a function sufficiently similar to what is needed by the novel invention. In addition, other factors such as the specifics of how literacy and numeracy are learned, as well as the distinctive functions of numerals and letters in our education and culture, could also account for the segregation of their preferences.”

Here is the abstract: “Distinct preference for visual number symbols was recently discovered in the human right inferior temporal gyrus (rITG). It remains unclear how this preference emerges, what is the contribution of shape biases to its formation and whether visual processing underlies it. Here we use congenital blindness as a model for brain development without visual experience. During fMRI, we present blind subjects with shapes encoded using a novel visual-to-music sensory-substitution device (The EyeMusic). Greater activation is observed in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols. Using resting-state fMRI in the blind and sighted, we further show that the areas with preference for numerals and letters exhibit distinct patterns of functional connectivity with quantity and language-processing areas, respectively. Our findings suggest that specificity in the ventral ‘visual’ stream can emerge independently of sensory modality and visual experience, under the influence of distinct connectivity patterns.”

Abboud, S., Maidenbaum, S., Dehaene, S., & Amedi, A. (2015). A number-form area in the blind. Nature Communications, 6. DOI: 10.1038/ncomms7026


Co-evolution of language and tool-making

It has been more or less accepted that genetic evolution can affect culture and that cultural evolution can affect genetics, but many favour one direction over the other. A recent paper looks at a long sustained period of genetic/cultural co-evolution. (Morgan, Uomini, Rendell, Chouinard-Thuly, Street, et al.; Experimental evidence for the co-evolution of hominin tool-making teaching and language. Nature Communications 6, 2015). There is a ScienceDaily item on the paper (here).

Early hominins, among them Homo habilis and Australopithecus garhi, used stone tools for two and a half million years. Through the first 700,000 years the tools, called Oldowan, remained unchanged. The researchers show that stone-knapping is not easy to learn. The lack of any improvement to the Oldowan tools was probably because language would have been required to teach more sophisticated techniques. After this long period, about 1.8 million years ago, a new set of stone tools appeared, called the Acheulean, that were more technologically challenging. The researchers show that this knapping skill would have needed language to learn from a master.

The researchers set up learning chains in which one person was shown and taught a particular knapping skill. That person then taught another, and the skill was passed down a chain of learners. Various teaching techniques were used in the chains. It was found that language was needed to learn some skills successfully. The researchers thus suggest that the Acheulean improvements to tools were due to the start of proto-languages and that knapping and language evolved together, the driving evolutionary pressure being the advantage of better tools.

This picture is very different from the ‘history of language’ put forward by Chomsky: first because the process is seen as long and gradual, and second because language is seen as basically developed as a teaching aid, a form of communication. “Our findings suggest that stone tools weren’t just a product of human evolution, but actually drove it as well, creating the evolutionary advantage necessary for the development of modern human communication and teaching. Our data show this process was ongoing two and a half million years ago, which allows us to consider a very drawn-out and gradual evolution of the modern human capacity for language and suggests simple ‘proto-languages’ might be older than we previously thought.”

Here is the abstract: “Hominin reliance on Oldowan stone tools—which appear from 2.5 mya and are believed to have been socially transmitted—has been hypothesized to have led to the evolution of teaching and language. Here we present an experiment investigating the efficacy of transmission of Oldowan tool-making skills along chains of adult human participants (N=184) using five different transmission mechanisms. Across six measures, transmission improves with teaching, and particularly with language, but not with imitation or emulation. Our results support the hypothesis that hominin reliance on stone tool-making generated selection for teaching and language, and imply that (i) low-fidelity social transmission, such as imitation/emulation, may have contributed to the ~700,000 year stasis of the Oldowan technocomplex, and (ii) teaching or proto-language may have been pre-requisites for the appearance of Acheulean technology. This work supports a gradual evolution of language, with simple symbolic communication preceding behavioural modernity by hundreds of thousands of years.”


Talking to babies

When babies learn language, they learn more than language. According to a recent paper, they also develop cognition. This news reminded me of something I had read months ago, so I went back and found it. Here is the abstract of the paper, followed by the story illustrating the absence of good language learning.

Abstract of paper (Vouloumanos, Waxman; Listen up! Speech is for thinking during infancy; Trends in Cognitive Sciences Vol 18, issue 12 Dec 2014): “Infants’ exposure to human speech within the first year promotes more than speech processing and language acquisition: new developmental evidence suggests that listening to speech shapes infants’ fundamental cognitive and social capacities. Speech streamlines infants’ learning, promotes the formation of object categories, signals communicative partners, highlights information in social interactions, and offers insight into the minds of others. These results, which challenge the claim that for infants, speech offers no special cognitive advantages, suggest a new synthesis. Far earlier than researchers had imagined, an intimate and powerful connection between human speech and cognition guides infant development, advancing infants’ acquisition of fundamental psychological processes.”

From Catherine Porter’s Column Aug 2014, Why Senegalese women have been afraid to talk to their babies – Fears of evil spirits have kept parents from talking to their babies, but that is changing thanks to a program that teaches about brain development. (here) : “10-year-old children in Senegal, deemed incomprehensibly dull by an international early literacy test six years ago. … The results were a blow to the Senegalese government, which pours a quarter of its national budget into education. … Tostan, a well-known non-governmental organization in Senegal, began asking the same questions. Staff members launched focus groups, to research local ideas about schools and child development. After four months, they concluded the root of the problem stretched beyond schools into village homes. Parents, although loving, were not speaking directly to their babies. Many avoided looking deeply into their babies’ eyes. … a baby in rural Senegal would hear about 200 words an hour, Tostan founder and chief executive officer Molly Melching says. Most of those were orders. No wonder they weren’t learning how to read, Melching posited. The language part of their brains was vastly underdeveloped. … The concept of djinns comes from both ancient African religions and the Koran. They are spirits, which can be helpful or hurtful. The hurtful ones, locals believe, can possess them. … Djinns are attracted to babies by jealousy, many locals believe. So, looking a baby in the eye is taboo, as is speaking directly to her. … “In our culture, if you talk with your child, you risk losing him,” says Tostan’s Penda Mbaye. She recalls how she was talking to her first baby when her grandmother warned her about djinns. “After that, I didn’t dare to do it.” … It is one thing to change the national course curriculum, or teacher training, or even severe malnutrition that stunts children’s brains. It’s another to change people’s cultural beliefs and corresponding behaviour. 
… Tostan facilitators developed a year-long class curriculum for parents. It includes lessons on everything from infant nutrition and children’s rights to sleep schedules and baby massage. The most important part, though, is the new understanding of children’s growing brains. “We delve into brain development in a non-judgmental way,” Melching says.”

This program seems to be working, and mothers are enthusiastic, enjoying being able to interact with and talk to their babies. In a few years the data will be in and it will be seen what difference communication with babies makes. It is expected to improve not just language skills but also IQ and general cognition.