Monthly Archives: April 2014

The awareness trick

 

The hard question of consciousness may not be that hard, if one doesn’t give up too soon. Consider magic – the magician does a magic trick and we see the magic but we do not believe it is supernatural. Why? Because magicians in fear of their lives long ago convinced their audiences that it was a trick. They would not reveal how the trick was done but assured us – really, really, it is a trick, we are not dangerous, no supernatural magic here. Let’s consider the idea that consciousness is not supernatural magic but a trick. We don’t know how the trick is done but we know it is a trick. If that is so, then it can be understood with effort (and not by throwing up our hands and saying ‘too hard’). Somehow an information-processing organ produces our consciousness and we just have to figure out how.

This seems to be the route taken by Michael Graziano – assume consciousness is understandable and try to understand it. He concentrates on awareness – how awareness is produced. He goes straight for the hard question.

He reasons that the brain must have a way to deal with others’ actions – to understand and predict them. Our brains use a model of what someone’s actions imply about their future actions. We attribute to others internal entities like intentions and preferences and, of particular interest for Graziano, we attribute awareness to others. We take the trouble to figure out and remember what others are aware of and what they are not aware of. It is important for a predator to calculate what its prey is aware of, and for a potential victim to calculate what its predator is aware of. It is extremely important in social animals for cooperating with others. So we have this (possibly hypothetical) attribute, awareness, that we keep track of in other people and animals. We even have a good idea of where in the brain much of this calculation goes on: the temporoparietal junction (TPJ) and the superior temporal sulcus (STS). Experiments have linked these areas with ‘social attribution’ of various attributes to the internal processing of others. In other words, these areas create a ‘mind’ for the other animal and use it to predict the other’s actions; they do the theory-of-mind calculations and they track awareness in others.

But here is the interesting part – the same areas also seem to do similar theory-of-mind calculations about ourselves. When the TPJ and STS are damaged, the awareness of patients is affected and they lose awareness of one side of space. Experimentally, such patients have been shown to still have vision on the ‘blind’ side and to avoid obstacles on that side, and if they are asked to imagine turning around and facing the opposite way, the loss flips: they become aware of what they could not see before and lose awareness of what they did see. These areas are involved in producing the phenomenon of awareness.

One way of saying this is that we have evolved a process that creates a ‘mind’ for another animal and we use that ‘mind’ to understand and predict the animal; we also use it to understand and predict ourselves, by creating our own ‘mind’.

Graziano says, “The conjunction of these two previous findings suggests that awareness is a computed feature constructed by an expert system in the brain. The feature of awareness can be attributed to other people in the context of social perception. It can also be attributed to oneself, in effect creating one’s own awareness.” But how is this creation done? He proposes that awareness is a model of attention. To say that someone is aware of something is to say that they are attending to it – it has their attention. So our conscious awareness at any moment is the current attention model/schema that the brain has constructed. “One’s own awareness is a schematized model of one’s own attention.” And because it is a model it is approximate and simplified – not a complete and accurate version of attention but a model. We can attend, on occasion, to things we are not aware of. Like our model of the world and our model of our bodies, we cannot rely on its completeness or accuracy. Models/schemas are not the real thing – they are not the world, not our bodies and not our information-processing organs, our brains. Consciousness/awareness/mind is a useful fiction based on attention in the brain.
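To make the schema idea concrete, here is a toy sketch (my own illustration in Python, not anything from Graziano’s work): an agent has a detailed attention state, keeps only a simplified model of that state for itself and for others, and what it can report as ‘awareness’ is read off the model, so something can be attended without showing up in awareness. All the names and numbers are invented for the example.

```python
# Toy illustration of an "attention schema": a crude, thresholded summary of a
# richer attention state, kept both for other agents and for oneself.

class Agent:
    def __init__(self, name):
        self.name = name
        self.attention = {}          # the "real" attention state: item -> graded weight
        self.awareness_schema = {}   # agent name -> set of items modelled as attended

    def attend(self, item, strength):
        self.attention[item] = strength

    def model_attention(self, other):
        """Build a schematic model of another agent's attention (or one's own).
        The schema is simplified: it keeps only items above a threshold."""
        schema = {item for item, weight in other.attention.items() if weight > 0.5}
        self.awareness_schema[other.name] = schema

    def reports_awareness_of(self, item):
        """What the agent would *say* it is aware of: read the self-schema,
        not the underlying attention state."""
        return item in self.awareness_schema.get(self.name, set())


me = Agent("me")
me.attend("red apple", 0.9)
me.attend("ticking clock", 0.4)   # attended weakly, below the schema's threshold
me.model_attention(me)            # the self-model: "creating one's own awareness"

print(me.reports_awareness_of("red apple"))      # True
print(me.reports_awareness_of("ticking clock"))  # False: attended, but not in the schema
```

The point of the sketch is only that a simplified model of attention, applied to oneself, behaves like awareness and can come apart from attention itself.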

Anyway, that is what Graziano’s ‘Attention Schema Theory’ looks like. It seems a good start to solving the hard question. The material is from a podcast: http://brainsciencepodcast.com/bsp/108-graziano

 

Ravens can play politics

Ravens are often featured in mythology – spirit, god, creator, trickster, fortune teller and so on – heroes and villains. They are one of the most intelligent birds. A recent paper by Massen et al (citation below) shows that they are even more remarkable than previously known.

The social brain hypothesis is the idea that intelligence and large brain size are adaptations to social behavior. The more complex an animal’s social life is, the more intelligence it needs to be successful. Social animals are more intelligent than their non-social relatives. But animal societies vary in complexity, and greater complexity demands extra skills. One of the first things a social animal needs is the ability to recognize all the individuals in its group and to know which are more dominant than itself and which less. It is an advantage not to fight with those that can beat you, and also an advantage not to let a weaker opponent bluff you. It is an advantage to have non-competitive bonds, alliances with kin and special friends. It is an advantage to know the etiquette and group tactics, to communicate, to be deceptive, and so on. All this takes memory, cognitive skills, and emotional control, plus some theory of mind if the social adaptations include predicting others’ behaviour.

We have some social behaviours that have not yet been found in other animals, but the list has been getting shorter. Another skill found not to be exclusively ours is a ‘political’ one: the ability to observe another group, without interacting with it, and figure out the dominance structure in that group. Primates can do this other-group trick, but they need to interact with at least some of the strangers directly. The Massen paper shows that ravens can do it with just observation. They can observe another group, learn to tell the individuals apart and learn the dominance hierarchy in that group. Ravens’ “cognitive skills are expressed primarily in the social domain: on one hand, they flexibly switch between group foraging (including active recruitment) and individual strategies (like providing no or false information about food, attributing perception and knowledge states about food caches to others); on the other hand, they form and maintain affiliate social relations aside from reproduction and engage in primate-like social strategies like support during conflicts, and reconciliation and consolation after conflicts. Understanding social relations of others may be key in those behaviours. Ravens also remember former group members and their relationship valence over years.” And now this new skill can be added to that list.

Massen and the other researchers had a group of young ravens in a pen (for other reasons) and another group within sight. They staged fake encounters between pairs of ravens that were just out of sight, using recorded sounds. Sometimes the staged encounters matched the dominance relationship between the hidden birds and sometimes they were the opposite of the expected interaction. The reaction of the test raven to these staged encounters was studied. In this way the researchers could tell, from the reactions to incongruous events, which dominance relationships the raven knew and which it didn’t. There were differences between male and female birds in their reactions and in which staged encounters surprised them most. But overall, ravens can very often learn the dominance hierarchy of another group just by observing it. This may be found in other animals (when it is looked for) but until now we only knew that humans could do this bit of social behavior.
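To see what kind of computation the observing bird would have to do, here is a minimal sketch in Python (purely illustrative; it is not the analysis used in the paper): build a dominance ordering from watched interactions, then flag a staged playback that contradicts it. The bird names and the scoring rule are made up.

```python
# Infer a dominance hierarchy purely from observed (winner, loser) interactions,
# then test whether a staged playback violates that hierarchy.

from collections import defaultdict

observed = [            # interactions the observer has seen or heard
    ("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("B", "C"),
]

wins = defaultdict(int)
losses = defaultdict(int)
for winner, loser in observed:
    wins[winner] += 1
    losses[loser] += 1

birds = set(wins) | set(losses)
# crude rank: more wins minus losses = more dominant
rank = sorted(birds, key=lambda b: wins[b] - losses[b], reverse=True)
print("inferred hierarchy:", rank)   # ['A', 'B', 'C']

def incongruent(playback):
    """Does a staged encounter violate the inferred hierarchy?"""
    winner, loser = playback
    return rank.index(winner) > rank.index(loser)

print(incongruent(("A", "C")))   # False: matches expectation
print(incongruent(("C", "A")))   # True: a dominance reversal, which should surprise the observer
```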

Here is the abstract:

“A core feature of social intelligence is the understanding of third-party relations, which has been experimentally demonstrated in primates. Whether other social animals also have this capacity, and whether they can use this capacity flexibly to, for example, also assess the relations of neighbouring conspecifics, remains unknown. Here we show that ravens react differently to playbacks of dominance interactions that either confirm or violate the current rank hierarchy of members in their own social group and of ravens in a neighbouring group. Therefore, ravens understand third-party relations and may deduce those not only via physical interactions but also by observation.”


Massen, J., Pašukonis, A., Schmidt, J., & Bugnyar, T. (2014). Ravens notice dominance reversals among conspecifics within and outside their social group. Nature Communications, 5. DOI: 10.1038/ncomms4679

Why are some syllables preferred?

In a recent paper (citation below), Berent and others investigate language universals in syllable structure. Their argument goes: there is a preference for certain syllables over others across languages, even in speakers whose language does not include those syllables; a set of four syllable types, most of which do not occur in English, shows this preference in English speakers; the preference shows up in behavior and in activity in Broca’s area, as opposed to auditory and motor areas; and so the preference is a language universal rather than a constraint on hearing or producing the syllables. This sounds very good but it seems to overlook Changizi’s ideas about the nature of our phonemes.

Berent discusses the reason for the preference among these syllables. “Across languages, syllables like blif are preferred (e.g., more frequent) relative to syllables like bnif, which in turn, are preferred to bdif; least preferred on this scale are syllables like lbif. Linguistic research attributes this hierarchy to universal grammatical restrictions on sonority—a scalar phonological property that correlates with the loudness of segments. Least sonorous are stop consonants (e.g., b, p), followed by nasals (e.g., n, m), and finally the most sonorous consonants—liquids and glides (e.g., l, r, y, w). Accordingly, syllables such as blif exhibit a large rise in sonority, bnif exhibits a smaller rise, in bdif, there is a sonority plateau, whereas lbif falls in sonority. The universal syllables hierarchy (e.g., blif>bnif>bdif>lbif, where > indicates preference) could thus reflect a grammatical principle that favors syllables with large sonority clines—the larger the cline, the better-formed the onset.”
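The hierarchy in that passage can be expressed as a small calculation. Here is a rough sketch (illustrative only; the numeric sonority levels are my own stand-ins, not Berent’s): give each consonant class a sonority level and score an onset by the rise from its first to its second consonant.

```python
# Score syllable onsets by their sonority cline: the rise in sonority from the
# first consonant to the second. Larger rise = more preferred onset.

SONORITY = {}
SONORITY.update({c: 1 for c in "bpdtgk"})   # stops: least sonorous
SONORITY.update({c: 2 for c in "nm"})       # nasals
SONORITY.update({c: 3 for c in "lrwy"})     # liquids and glides: most sonorous

def onset_cline(syllable):
    """Sonority rise across the first two consonants of the onset."""
    first, second = syllable[0], syllable[1]
    return SONORITY[second] - SONORITY[first]

for s in ["blif", "bnif", "bdif", "lbif"]:
    print(s, onset_cline(s))
# blif  2   (large rise  -> most preferred)
# bnif  1   (smaller rise)
# bdif  0   (plateau)
# lbif -2   (fall        -> least preferred)
```

The scores reproduce the blif>bnif>bdif>lbif ordering, which is all the sketch is meant to show.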

What is not asked in this paper is why sonority should have this effect on preference. “An alternative explanation (to the sensory-motor one) attributes linguistic preferences to the language faculty itself. At the center of the language system is the grammar—a set of violable algebraic constraints that express tacit linguistic preferences.” This seems to beg the question of whether there is any other way to view language other than the ‘language faculty’ being algebra-like down to the nature of syllables.

On the other hand, Changizi assumes that the ‘language faculty’ is a cultural adaptation that uses pre-existing brain functions. In his theory, the preference for rising sonority would have to do with understanding natural sounds in the environment. Cultural evolution harnessed the brain’s strengths for language. Broca’s area is about understanding the meanings of sounds – all sounds that have meaning, not just the meanings of words.

Here is part of an interview by Lende with Changizi (here). “I’ll give you a couple starting samples of how speech has the signature sounds of natural auditory events. In particular, my claim is not, say, that speech sounds like the savanna. Rather, the class of natural sounds is a very fundamental and general one, the sounds of events among solid objects. There are lots of regularities in the sounds of solid-object physical events, and it is possible to begin working them out.

For example, there are primarily three “atoms” of solid-object physical events: hits, slides and rings. Hits are when two objects hit one another, and slides where one slides along the other. Hits and slides are the two fundamental kinds of interaction. The third “atom” is the ring, which occurs to both objects involved in an interaction: each object undergoes periodic vibrations — they ring. They have a characteristic timbre, and your auditory system can usually recognize what kind of objects are involved.

For starters, then, notice how the three atoms of solid-object physical events match up nicely with the three fundamental phoneme types: plosives, fricatives and sonorants. Namely, plosives (like t, k, p, d, g, b) sound like hits, fricatives (s, sh, f, z, v) sound like slides, and sonorants (vowels and also phonemes like y, w, r, l) sound like rings.

Our mouths make their sounds *not* via the interaction of solid-object physical events. Instead, our phonemes are produced via air-flow mechanisms that *mimic* solid-object events. In fact, our air-flow sound-producing mechanisms can do *lots* more kinds of sounds, far beyond the limited range of solid-object sounds. But for language, they rein it in, and keep the words sounding like the solid-object events that are most commonly in nature, the kind our auditory system surely evolved to process efficiently.

As a second starter similarity, notice that solid-object events do not occur via random sequences of hits, slides and rings. There are lots of regularities about how they interact — and that I have tested to see that they apply in language — but a first fairly obvious one is this… Events are essentially sequences of hits and slides. That is, the *causal* sequence concerns the hits and the slides, not the rings. “The ball hit the table and bounced up, and then bumped into the wall, hit the ground again, and slid to a stop.”

Rings happen during all events, but they happen “for free” at each physical interaction. Solid-object events are sequences of the form interaction-ring, where ‘interaction’ can have a hit or a slide in it. This is perhaps the most fundamental “grammatical rule” of solid-object physical events, and it looks suspiciously like the most fundamental morphological rule in language: the syllable, the fundamentally universal version of which is the CV form, usually a plosive-or-fricative (ahem, a physical interaction) followed by a sonorant (ahem, a ring).

In my research I continue to work out the regularities found among solid-object physical events, and in each case ask if the regularity can be found in the sounds of speech.

As for “the symbolic meaning of a word is not determined by the physical sound structure of that word,” indeed, I agree. My own theory doesn’t propose this, but only that speech has come to have the signature structures found among solid-object events generally, thereby “sliding” easily into our auditory brain.”
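Changizi’s analogy, as quoted, amounts to a simple mapping plus a sequencing rule. Here is a toy rendering in Python (my own simplification, not Changizi’s analysis): classify phonemes as hits, slides or rings, and check whether a syllable follows the interaction-then-ring pattern he compares to the CV form.

```python
# Map phoneme classes onto the three "atoms" of solid-object events, then check a
# syllable against the interaction-followed-by-ring pattern.

EVENT_ATOM = {}
EVENT_ATOM.update({p: "hit"   for p in ["t", "k", "p", "d", "g", "b"]})       # plosives
EVENT_ATOM.update({p: "slide" for p in ["s", "sh", "f", "z", "v"]})            # fricatives
EVENT_ATOM.update({p: "ring"  for p in ["a", "e", "i", "o", "u", "y", "w", "r", "l"]})  # sonorants

def is_cv_like(phonemes):
    """True if the syllable is an interaction (hit or slide) followed by a ring."""
    if len(phonemes) != 2:
        return False
    first, second = EVENT_ATOM[phonemes[0]], EVENT_ATOM[phonemes[1]]
    return first in ("hit", "slide") and second == "ring"

print(is_cv_like(["b", "a"]))   # True: hit then ring, the canonical CV syllable
print(is_cv_like(["s", "o"]))   # True: slide then ring
print(is_cv_like(["a", "b"]))   # False: ring first, not the usual event order
```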

I think Berent et al missed something when they did not address Changizi’s view of the syllable and what it says about preferences. Here is their abstract:

It is well known that natural languages share certain aspects of their design. For example, across languages, syllables like blif are preferred to lbif. But whether language universals are myths or mentally active constraints—linguistic or otherwise—remains controversial. To address this question, we used fMRI to investigate brain response to four syllable types, arrayed on their linguistic well-formedness (e.g., blif>bnif>bdif>lbif, where > indicates preference). Results showed that syllable structure monotonically modulated hemodynamic response in Broca’s area, and its pattern mirrored participants’ behavioral preferences. In contrast, ill-formed syllables did not systematically tax sensorimotor regions—while such syllables engaged primary auditory cortex, they tended to deactivate (rather than engage) articulatory motor regions. The convergence between the cross-linguistic preferences and English participants’ hemodynamic and behavioral responses is remarkable given that most of these syllables are unattested in their language. We conclude that human brains encode broad restrictions on syllable structure.


Berent, I., Pan, H., Zhao, X., Epstein, J., Bennett, M., Deshpande, V., Seethamraju, R., & Stern, E. (2014). Language Universals Engage Broca’s Area. PLoS ONE, 9 (4). DOI: 10.1371/journal.pone.0095155

Neuroscience is not ready for schools

I don’t believe that children’s education should be experimented with. This is a personal concern of mine. I am dyslexic and entered school during a short period when a drastic change in curriculum had banished all phonics from learning to read and write. That may work for some students but it certainly did not work for me or any other dyslexics, or just plain slow readers. Why was I experimented on? Why are children today being experimented on? It seems like the fads in language and mathematics just keep coming year after year. From a distance, I see a four-cornered fight between parents, teachers, academics, and civil servants about who knows best and who should be in charge.

In the middle of this tug-of-war, there appears a new ingredient: neuroscience. D. Bishop drew attention to this in a tweet recommending articles by Stephen Exley and Max Coltheart, a Science editorial, and the Santiago Declaration.

The teachers in the UK are demanding training in neuroscience. They feel that they need this: “It is true that the emerging world of neuroscience presents opportunities as well as challenges for education, and it’s important that we bridge the gulf between educators, psychologists and neuroscientists.” But what do they envisage they will do with this additional knowledge? Apparently one example was to tailor lessons for creative right-brain thinkers. I have to say I cannot think of a better reason not to have this training. What we do not need is people in education following every half-baked popular idea that the press and the companies selling ‘neuro’ wares put out there. The last thing we want is teachers dividing their classes into right-brained and left-brained children. Nor do we want visual learners as opposed to auditory learners, or whatever the next fad is. Even the Common Core mathematics in the US seems very faddish. This is just not fair to the children who will be the subjects of these experiments.

This is a repeat of seven years ago when neuroscientists were asked to explain learning and issued the Santiago Declaration. Nothing has changed – neuroscience is still not a settled body of knowledge on which you could base an education system. The neuroscientists were saying that at present, neuroscience is not the appropriate science to help education; instead it is the developmental and social sciences that will be helpful.

Here is the declaration signed by 136 neuroscientists in 2007. (the underlining is mine)

The education of young children has become an international priority. Science offers irrefutable evidence that high-quality early childhood education better prepares children for the transition to formal education. It helps each child reach his or her potential in reading, mathematics, and social skills. Around the world, there is renewed interest in investing in young children to prepare them for future participation in a global economy. This interest is manifest not only in governmental policies (from Japan to the United States to Chile) but also in popular culture through the media and commercial endeavors marketing educational products to the parents of young children. As internationally recognized scientists in child development, we applaud the attention now directed to the world’s youngest citizens, but we also urge that policies, standards, curricula, and to the extent possible, commercial ventures be based on the best scientific research and be sensitive to evidence-based practice. We also recognize the limitations of our own scientific disciplines. Our research can provide guides in designing the most efficient means to a policy ends, but cannot dictate those ends, which must arise out of political debate and social consensus. Our research can also be abused in attempts to rationalize pre-conceived policies and popular notions about early childhood, putting science to a rhetorical and selective, rather than rational use. For our part, we pledge to actively oppose this practice and to speak out whenever it occurs.

We assert that the following principles enjoy general and collective consensus among developmental scientists in 2007:

  • All policies, programs, and products directed toward young children should be sensitive to children’s developmental age and ability as defined through research-based developmental trajectories. Developmental trajectories and milestones are better construed through ranges and patterns of growth rather than absolute ages.
  • Children are active, not passive, learners who acquire knowledge by examining and exploring their environment.
  • Children, as all humans, are fundamentally social beings who learn most effectively in socially sensitive and responsive environments via their interactions with caring adults and other children.
  • Young children learn most effectively when information is embedded in meaningful contexts rather than in artificial contexts that foster rote learning. It is here where research coupling psychology with the use of emerging technologies (e.g. multimedia and virtual reality) can provide powerful educational insights.
  • Developmental models of child development offer roadmaps for policy makers, educators, and designers who want to understand not only what children learn but how they optimally learn and further imply that educational policies, curricula, and products must focus not only on the content, but also on the process of learning.
  • These developmental models along with advances in our understanding of learning in children at cognitive risk can be applied to improve learning among all children.
  • The principles enunciated above are based primarily on findings from social and behavioral research, not brain research. Neuroscientific research, at this stage in its development, does not offer scientific guidelines for policy, practice, or parenting.
  • Current brain research offers a promissory note, however, for the future. Developmental models and our understanding of learning will be aided by studies that reveal the effects of experience on brain systems working in concert. This work is likely to enhance our understanding of the mechanisms underlying learning.

We, the undersigned, recognize that the political agenda and marketplace forces often proceed without meaningful input from the science of child development. Given the manifest needs of many young children throughout the world, the current state of knowledge and consensus in developmental science, this gap between knowledge and action must be closed. Scientific data and evidence-based practice must be integral to the ongoing global dialogue.


Close to truth

I have been thinking about induction and deduction. I was taught that I could prove something was true with deduction but not with induction. A logical argument gives truth with a capital T. But for years I have not accepted this way of thinking. All a logical argument gives is a relationship: if the axioms are True then the conclusion is True, and if the conclusion is False then one or more of the axioms is False. But how do you get your first couple of True axioms, the axioms that are needed for the first True conclusion? Not with logic, obviously. Axioms have historically been identified by induction. They are statements we find trustworthy because we have never found reason to suspect them. It does seem a bit ironic that deduction is held to be more rigorous than induction when at the bottom of a deduction are axioms arrived at by induction. So I just assume there are no truths with a capital T.

But induction is much stronger than it is usually portrayed. Popper seemed to think that a strong case for inductive arguments could not be made and that the best that could be done was to falsify those that could be falsified and temporarily assume that the rest were OK (but certainly not true, even without a capital). This is somewhat counter-intuitive, because we do trust inductions more if they are ‘confirmed’. Confirmation is somehow more valued than falsification – probably because we are more interested in what has a good chance of being true than in what is almost certain to be false.

The Bayesian probability adherents make the argument that as confirmations pile up, each making it more probable that a statement is true, the statement can become so close to true that it makes no never-mind. Many believe that our minds use a Bayesian approach to understanding the world. Of course nothing that is statistical is going to merit an actual true with a capital T. So I again have to accept that there is no true with a capital T, even if many confirmations and no falsifications come close.
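The Bayesian point can be shown with a few lines of arithmetic. In this sketch (the prior and the likelihoods are made-up numbers, chosen only for illustration), each confirming observation raises the probability of a hypothesis by Bayes’ rule; ten confirmations push it very close to 1, but it never gets there.

```python
# Repeated Bayesian updating on confirming evidence: the posterior approaches 1
# but never reaches it.

prior = 0.5            # initial credence in the hypothesis
p_conf_if_true = 0.9   # chance of seeing a confirmation if the hypothesis is true
p_conf_if_false = 0.3  # chance of seeing the same observation if it is false

p = prior
for n in range(1, 11):                    # ten confirmations in a row
    numerator = p_conf_if_true * p
    denominator = numerator + p_conf_if_false * (1 - p)
    p = numerator / denominator           # Bayes' rule
    print(f"after confirmation {n}: P(hypothesis) = {p:.5f}")

# The probability climbs to roughly 0.99998 here but never equals 1:
# no capital-T Truth, just "close to true".
```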

But there is a deeper problem than even induction under Bayesian rules of probability. Our knowledge is not little bits and pieces that can be confirmed or found false; that is a simplification that can confuse. What we have is a huge web of knowledge, not independent bits. This does not lend itself to actual Bayesian calculations, but the general idea is still valid. New (and therefore suspect) ideas are confirmed or falsified by being set in that web of knowledge – they eventually fit or don’t fit. Each confirmation strengthens the web as well as the new idea, and each falsification can be interpreted as a fault in the web as well as a failure of the new idea, but it is almost always the web that stays and the new idea that is thrown away. This has been going on for a few centuries and the web is very strong. It takes an upheaval every once in a while but it is as close to true as we have. It is in essence a product of induction and not deduction.

Against the evidence

I would have thought that the argument was over (but of course this sort of argument never is). I keep thinking that Chomsky and his adherents will have accepted the evidence and moved on but I keep being surprised that they have not changed their theories.

 

Chomsky has not yet accepted that language has been around for a very long time – since before Homo split into Neanderthals and us. The adaptations to speech in the ear, throat, and brain can be traced ‘way back’, not the short 50 to 140 thousand years ago that he proposed. There is enough time for language to evolve slowly without any miraculous single mutation events, just normal evolution of a function under the pressure of improving an advantageous behavior. In the fossil record, we can almost see the pressure working on the ear, throat, and brain. There is no sensible reason why the body would change in preparation for speech that was not yet in existence; evolution does not foresee an advantage, and that idea makes absolutely no sense. The fossil record makes sense if Homo started to communicate in a way that became a big advantage, so that the ability to communicate was under evolutionary pressure to improve, hence the changes in ear, throat and brain.

 

Chomsky would have us believe that we used language as an internal thinking tool before we used it for communication. This implies that our thinking requires this internal language, and so is qualitatively different from that of other animals. But the trend in the last couple of decades is for the thinking of mammals and birds to be found more similar to our own, not less. Further, there appears to be no evidence from studies of the brain to imply that language is necessary for thought. Concepts seem necessary – concepts for objects, actions, attributes – but it does not seem that these need be verbal concepts, although they often seem to be. The sequence of situation – agent – action – outcome – new situation seems so deeply structured in the brain (and in animal brains) that its resemblance to a sentence appears to be a case of language fitting into the brain’s structure, rather than the brain fitting into the structure of language. It seems that we communicate with ourselves using the facility that evolved to communicate with others.

 

Chomsky appears to be still saying that there is not enough language presented to a child for them to learn the language without some built-in language scaffold. Many researchers who study infant learning disagree and point out that the child actively engages in the learning and is not a passive receiver of language examples. It does appear that children seek out language and avidly learn it, but that does not mean that they necessarily have a universal grammar template. The learning process appears far more complex than Chomsky’s model or the behaviorist model he unseated.

 

The idea of ‘merge’ as the test for true ‘language’ seems fairly thin. It is true that we can replace words with phrases over and over again, to any number of merge levels, and still have a sentence. Great – but who cares? I suspect that a large percentage of people (people who successfully live their lives without any communication problems) have never uttered a sentence with more than one level of merge. And if you spoke to them and used more than three levels, they would shake their heads and walk away, saying that you should learn how to say what you mean. Is there ‘true’ and ‘untrue’ language? Is there some measurement that separates ‘tall’ from ‘short’ people, so that in a single night’s sleep a teenager goes to bed short and wakes up tall? Why do we need a firm line between the earliest steps to language and our present-day language, a line to divide off ‘true’ language? This concern with ‘true’ language seems a way to move the goalposts for the sole purpose of insisting that language is found only in humans, that it is about logic and not about communication, and that it is not something that psychologists, geneticists, paleontologists, neuroscientists etc., or even some schools of linguistics, should have a say about.
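For what it is worth, the recursive part of ‘merge’ is easy to illustrate: phrases nest inside phrases to any depth and the result is still a sentence-like structure. Here is a toy sketch (the sentences and the depth counter are just for illustration, not a linguistic analysis).

```python
# Phrases nested inside phrases: each call to merge() combines two elements into a
# new unit, and the structure can be nested to any depth.

def merge(head, complement):
    """Combine two elements into a nested phrase (represented as a pair)."""
    return (head, complement)

# "the dog barked"                                        -> one merge level
level1 = merge("the dog", "barked")
# "the dog [that chased the cat] barked"                  -> a phrase inside a phrase
level2 = merge(merge("the dog", "that chased the cat"), "barked")
# "the dog [that chased the cat [that ate the mouse]] barked"
level3 = merge(merge("the dog", merge("that chased the cat", "that ate the mouse")), "barked")

def depth(phrase):
    """Count how deeply merged a structure is."""
    if not isinstance(phrase, tuple):
        return 0
    return 1 + max(depth(part) for part in phrase)

for sentence in (level1, level2, level3):
    print(depth(sentence), sentence)   # depths 1, 2, 3: still well-formed at every level
```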

 

This rant was brought on by my reading another great posting to Babel’s Dawn, Biology without Darwin. I recommend it.

 

Don’t forget the cerebellum

Many theories of humanness rely on a simple idea: the cerebral cortex is enlarged in humans relative to other primates, and in primates relative to other mammals, so it must be the cerebral cortex that is the important part of the brain, giving us our smarts and our skills. What is often overlooked is that the cerebellum has increased in the same proportion. Across mammals, the ratio of the number of neurons in the cerebellum to the number in the cerebrum is about 3.6, so whatever was happening to the cerebrum was also happening to the cerebellum. In fact our cerebellum may have gained a little: our cerebrum is slightly smaller than that of earlier Homo species but our cerebellum is not, and may even be a bit bigger.

And what does the cerebellum actually do? It does not appear to initiate anything but modifies what is initiated. In other words, it is not responsible for doing anything, just for doing it much better. It does coordination, timing, accuracy, smoothness, balance. It was once thought to be purely concerned with motor actions but now it appears to also deal in cognition, attention, learning and emotion.

There is a recent paper by Joan Baizer (citation below) on comparisons of the hindbrain. The paper discusses the evidence in the brainstem and cerebellum for (1) structures that are conserved across species but show subtle biochemical differences; (2) structures also conserved but showing major differences in overall organization; (3) structures found in humans and chimpanzees but not in monkeys or cats; (4) structures found only in humans; and (5) two features that are considered exclusive to the cerebral cortex: individual variability and left-right asymmetries. All these changes mean that the hindbrain has been evolving in step with the forebrain. The cerebellum is doing some very important processing tasks for the rest of the brain.

Here is part of the Discussion:

The human brain is distinguished by parallel and functionally linked expansion of the cerebellum and the cerebral cortex. Our studies show that there are also major changes in the human brainstem, most notably in structures that are known or suspected to project to the cerebellum. It is clear that the expansion of the human brain underlies unique aspects of human cognitive and motor function. What is known about the relative contributions of the cerebellum and the cerebral cortex? The cerebral cortex is critical for both cognitive function and motor control. The traditional view of the cerebellum is that it is critical for motor, but not cognitive, function. That view has been challenged on the basis of anatomical, physiological, and behavioral data, with many supporting a role for the cerebellum in cognitive functions.

We will focus on the motor role of the cerebellum and associated brainstem structures. Humans are bipedal and bipedal locomotion imposes very different demands for the control of balance and posture, functions to which the cerebellum contributes. Second, bipedal locomotion frees the forelimbs and hands, allowing the development of fine motor skills, skilled tool use, and the emergence of handedness. There are parallel changes in the visual system, with the evolution of the fovea and parallel changes in voluntary eye movements. The cerebellum also participates in the control of the hands and fingers as well as in the control of eye movements. The specializations of primate brainstem structures may be related to these evolutionary changes.

Here is her observation on the ‘reptilian brain’:

The idea that evolution affects only the cerebral cortex, with brainstem and cerebellum essentially unchanged entered the popular culture of neuroscience through the writings of Paul Maclean, “The Triune Brain” and Carl Sagan’s “reptilian brain”. The concept of the “reptilian brain” maintains that the brainstem and cerebellum are “old” structures that have not changed over evolution. That perspective still colors the understanding of students and the general public today. As shown in this review, it clearly does not reflect the dramatic changes in cerebellar and brainstem structures and their contribution to uniquely human capabilities.


Baizer, J. (2014). Unique Features of the Human Brainstem and Cerebellum. Frontiers in Human Neuroscience, 8. DOI: 10.3389/fnhum.2014.00202

Knowing your grandmother

There is a spectrum of ways in which the brain may hold concepts, ranging from very localized to very distributed, and there is little agreement on where along that spectrum various concepts are held. At one end is the ultimate local storage: a single ‘grandmother’ neuron that recognizes your grandmother no matter how she is presented (words, images, sounds, actions etc.). This single cell, if it exists, would literally be the concept of grandmother. At the other end of the spectrum is completely distributed storage, where a concept is a unique pattern of activity across the whole cortex, with every cell being involved in the patterns of many concepts. Both of these extremes have problems. Our concept of grandmother does not disappear if part of the cortex is destroyed – no extremely small area has ever been found whose loss obliterates grandma. On the other hand, groups of cells have been found that are relatively tuned to one concept. When we look at the extreme of distributed storage, there is the problem of localized specialties such as the fusiform face area. And more telling is the problem of a global pattern being destroyed if multiple concepts are activated at the same time: each neuron would be involved in a significant fraction of all concepts, and so there would be confusion if a dozen or more concepts were part of a thought/memory/process. As we leave the extremes, the local storage becomes a larger, more distributed group of neurons and the distributed storage becomes patterns in smaller groups of neurons.


The idea of localized concepts was thought to be improbable in the 1970s and the grandmother cell became something of a joke. The type of network that computer scientists were creating became the assumed architecture of the brain.


Computer simulations have long used a ‘neural network’ called PDP or parallel distributed processing. This is not a network made of neurons, in spite of the name, but a mathematical network. Put extremely simply, there are layers of units; each unit has a value for its level of activity; the units have inputs from other units and outputs to other units; and the connections between units are weighted in their strength. The bottom layer of units takes input from the experimenter and this travels through ‘hidden’ layers to an output layer which reveals the output to the experimenter. Such a setup can learn and compute in various ways that depend on the programs that control the weightings and other parameters. This PDP model has favoured the distributed network idea when modeling actual biological networks.

Some researchers have made a PDP network do more than one thing at once (but ironically this entails having more localization in the hidden layer). This might seem a small problem for PDP, but PDP does suffer from a limitation that makes rapid one-trial learning difficult. That type of learning is the basis of episodic memory. Because each unit in PDP is involved in many representations, any change in weighting affects most of those representations, and so it takes many iterations to get a new representation worked into the system. Rapid one-trial learning in PDP destroys previous learning; this is termed catastrophic interference, or the stability-plasticity dilemma. The answer has been that the hippocampus may have a largely local arrangement for its fast one-trial learning while the rest of the cortex can have a dense distribution.

But there is a problem. When a fully distributed network tries to represent more than one thing at the same time, it has problems of ambiguity. This is a real problem because the cortex does not handle one concept at a time – in fact, it handles many concepts at once and often some are novel. There is no way that thought processes could work with this kind of chaos. This can be overcome in PDP networks, but again the fix is to move towards local representations.
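The superposition problem is easy to see in miniature. Here is a bare-bones sketch (hand-picked toy vectors, not a trained PDP network): with overlapping distributed codes, the blend of two items can be identical to the blend of two different items, so the co-activation is ambiguous; localist codes, one unit per item, do not have this problem.

```python
# The "superposition catastrophe" in toy form: co-activating two distributed
# patterns can produce a blend that is indistinguishable from the blend of two
# *other* patterns, while localist codes stay unambiguous.

def superpose(a, b):
    """Co-activate two patterns by taking the element-wise maximum."""
    return [max(x, y) for x, y in zip(a, b)]

# Distributed codes over 4 shared units
dist = {
    "cat":  [1, 1, 0, 0],
    "dog":  [0, 0, 1, 1],
    "fish": [1, 0, 1, 0],
    "bird": [0, 1, 0, 1],
}
print(superpose(dist["cat"], dist["dog"]))    # [1, 1, 1, 1]
print(superpose(dist["fish"], dist["bird"]))  # [1, 1, 1, 1]  -> same blend: ambiguous

# Localist codes: one dedicated unit per item
local = {
    "cat":  [1, 0, 0, 0],
    "dog":  [0, 1, 0, 0],
    "fish": [0, 0, 1, 0],
    "bird": [0, 0, 0, 1],
}
print(superpose(local["cat"], local["dog"]))    # [1, 1, 0, 0] -> cat and dog, unambiguous
print(superpose(local["fish"], local["bird"]))  # [0, 0, 1, 1] -> fish and bird
```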


This is the abstract from a paper to be published soon (citation below).

A key insight from 50 years of neurophysiology is that some neurons in cortex respond to information in a highly selective manner. Why is this? We argue that selective representations support the co-activation of multiple “things” (e.g., words, objects, faces) in short-term memory, whereas nonselective codes are often unsuitable for this purpose. That is, the co-activation of nonselective codes often results in a blend pattern that is ambiguous; the so-called superposition catastrophe. We show that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes, and the number of localist codes scales with the level of the superposition. Given that many cortical systems are required to co-activate multiple things in short-term memory, we suggest that the superposition constraint plays a role in explaining the existence of selective codes in cortex.


The result is that our model of the brain moves a good way along the spectrum toward the grandmother-cell end. And lately there has been a new method to study the brain. Epilepsy patients have electrodes placed in their brains to monitor seizures prior to surgery. These patients can volunteer for experiments while waiting for their operations, so it is now possible to record the activity of small groups of neurons in awake, functioning human beings. And something very similar to grandmother cells has been found. Some electrodes respond to a particular person – Halle Berry and Jennifer Aniston were two of the first concepts found to each have their own local patch of a hundred or so neurons. These cells responded not just to various images, but to written names and voices too. It happened with objects as well as people. This home of concepts, held as small local groups of neurons, has been observed in the area of the hippocampus.


The idea that the brain is one great non-localized network has also suffered from the results of brain scans. Areas of the brain (far from the hippocampus) appear to be specialized. Very specific functions can be lost completely by the destruction of smallish areas of the brain as a result of stroke. The old reasons for rejecting a localized brain organization are disappearing while the arguments against a globally distributed organization are growing. This does not mean that there are no distributed operations, or that there are unique single cells for each concept – it just means that we are well toward the local end of the spectrum.


Rodrigo Quian Quiroga, Itzhak Fried and Christof Koch wrote a recent piece in the Scientific American (here) in which they look at this question and explain what it means for memory. The whole article is very interesting and worth looking at.

Concept cells link perception to memory; they give an abstract and sparse representation of semantic knowledge—the people, places, objects, all the meaningful concepts that make up our individual worlds. They constitute the building blocks for the memories of facts and events of our lives. Their elegant coding scheme allows our minds to leave aside countless unimportant details and extract meaning that can be used to make new associations and memories. They encode what is critical to retain from our experiences. Concept cells are not quite like the grandmother cells that Lettvin envisioned, but they may be an important physical basis of human cognitive abilities, the hardware components of thought and memory.


Bowers JS, Vankov II, Damian MF, & Davis CJ (2014). Neural Networks Learn Highly Selective Representations in Order to Overcome the Superposition Catastrophe. Psychological Review. PMID: 24564411

Curious publicity

Our conscious image of what we are seeing usually appears complete; it is the whole visual field. This is an illusion. The image is built up from many narrower views of parts of the scene that we attend to in rapid succession. Our visual system also establishes a knowledge of the general balance of the whole scene, such as its dominant colours, and this seems to help produce the illusion that we see everything simultaneously. Ordinarily we notice changes and they attract our attention. However, there is ‘change blindness’. A change can occur during a blink, an eye movement, or an abrupt blanking of vision. The actual moment of change is missed and the change will not be noticed unless the part that has changed was being attended to at the time. The study of change blindness has given rise to the ‘coherence theory’, which predicts that “whenever a change is detected the observer will always know what has changed since the observer will necessarily have a representation of the corresponding portion of the original image.” Howe and Webb, in a recent paper (citation below), set out to test this prediction. “…there are good reasons to believe that observers should be able to detect changes by monitoring the statistics of the scene, but that monitoring the scene statistics may not provide enough information to identify which object in the scene was changed. However, the behavioural evidence that this actually happens in practice with natural scenes is mixed.” Their experiments appear to confirm that changes can be detected without being identified, and that the detection is probably mediated by the effect of the change on the statistics of the overall scene. It is nicely done but not very surprising.
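The statistical-monitoring idea is simple enough to sketch. Here is a minimal illustration (the ‘images’ and the threshold are made up): compare a global summary statistic of two scenes rather than comparing them point by point; a shift in the statistic signals that something changed, without saying where.

```python
# Detect a change from a global scene statistic (mean luminance) without
# localising which part of the scene changed.

def mean_luminance(image):
    """Global statistic: average pixel value over the whole scene."""
    return sum(sum(row) for row in image) / (len(image) * len(image[0]))

def change_detected(before, after, threshold=0.5):
    """Report that *something* changed if the global statistic shifts enough;
    this says nothing about which pixel or object changed."""
    return abs(mean_luminance(before) - mean_luminance(after)) > threshold

scene_a = [[10, 10, 10],
           [10, 10, 10],
           [10, 10, 10]]
scene_b = [[10, 10, 10],
           [10, 30, 10],    # one region changed
           [10, 10, 10]]

print(change_detected(scene_a, scene_b))  # True: the statistic shifted,
                                          # even though nothing here says *where*
```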


However, for some reason this paper was billed by ScienceDaily (here) as “Debunking the sixth sense - New research has helped debunk the common belief that a sixth sense, also known as extrasensory perception, exists”, even though there is no mention of ESP in the actual paper. ScienceDaily quotes Howe: “There is a common belief that observers can experience changes directly with their mind, without needing to rely on the traditional physical senses such as vision, hearing, taste, smell and touch to identify it. This alleged ability is sometimes referred to as a sixth sense or ESP. We were able to show that while observers could reliably sense changes that they could not visually identify, this ability was not due to extrasensory perception or a sixth sense.”


Who are these people who are interested in neuroscience but still believe in ESP? It’s a curious treatment of some experiments that were not published with this slant!


Here is the article’s abstract: “Does becoming aware of a change to a purely visual stimulus necessarily cause the observer to be able to identify or localise the change or can change detection occur in the absence of identification or localisation? Several theories of visual awareness stress that we are aware of more than just the few objects to which we attend. In particular, it is clear that to some extent we are also aware of the global properties of the scene, such as the mean luminance or the distribution of spatial frequencies. It follows that we may be able to detect a change to a visual scene by detecting a change to one or more of these global properties. However, detecting a change to global property may not supply us with enough information to accurately identify or localise which object in the scene has been changed. Thus, it may be possible to reliably detect the occurrence of changes without being able to identify or localise what has changed. Previous attempts to show that this can occur with natural images have produced mixed results. Here we use a novel analysis technique to provide additional evidence that changes can be detected in natural images without also being identified or localised. It is likely that this occurs by the observers monitoring the global properties of the scene.”


Howe, P., & Webb, M. (2014). Detecting Unidentified Changes. PLoS ONE, 9 (1). DOI: 10.1371/journal.pone.0084490

Fear of cherry blossom

There has been a rash of headlines about mice inheriting memories, for example: Phobias may be memories passed down from ancestors; Fearful memories haunt mouse descendants; Memories pass between generations; Fear of a smell can be passed down several generations; Mice inherit the memories of their grandfathers; Fear can be inherited through sperm. This contrasts with the title of the paper that is being discussed: Parental olfactory experience influences behavior and neural structure in subsequent generations. You will notice that the paper title does not mention either ‘memory’ or ‘inheritance’. How you react to this research depends on how you understand memory and inheritance.


Male mice are trained to fear a particular odour; their children and grandchildren also avoid that smell. Whatever is happening is ‘epigenetic’, that is, it is not a permanent change to the DNA code. It is a change in how that code is used. This is inheritance in a broad meaning of the word but should not be confused with a permanent change to the DNA. Epigenetics is a fairly new field and there are many puzzles still to be solved. It is a way for cells in the body to differ (as heart cells differ from skin cells) and for cells to adapt to particular environmental challenges. The mechanisms of epigenetics are ways to increase or decrease the production of a protein and/or to change the way it is prepared for use. All this has absolutely nothing to do with inheritance. But… sometimes these changes are passed on to offspring, because of the material in the egg cell and/or because of faults in clearing all of the epigenetic markers from the egg and the sperm. (In this case, it appears that the sperm marker was not cleared.) But this is a new field and there are probably a host of little quirks in the system that have not yet come to light. In this study the researchers found that the gene for a particular odour receptor carried an epigenetic change (reduced methylation) that increased that gene’s use and, as a result, increased the strength of the brain’s representation of that odour and the behavior associated with it. It is not clear to me whether the behavior (aversion) is the same but just stronger, or whether the aversion is a new component of the behavior.


It is not just epigenetics that is not well understood. The effect of fear on memory is also a bit vague. Episodic memories are stored in the cortex by the hippocampus. But memories that are fearful also involve the amygdala. Just exactly how the amygdala changes the recall of the memory is not clear. Nor is it clear exactly how memories associated with odours are handled. It is not clear how the odour receptor fields and odour perception are modified to respond to differing environments.


So I have a number of questions. These may be answered in the original paper but I have not been able to read it; the questions are not answered in the abstract nor in other reviews of the paper. How do mice react to strong unusual smells in the absence of fear conditioning? If they react with aversion, then the increase in receptors could explain the increase in aversion. Can the mice be conditioned to some other behavior-odour association, rather than fear, and pass it on to offspring? If this only works for fear conditioning then they may be dealing with a distinct type of memory and not ordinary memories. For how many generations is the effect present, and does its strength diminish more slowly than in other cases of epigenetic inheritance? If it is a robust effect over generations then it is probably protected by a special mechanism – like a ‘do not erase’ tag for epigenetic markers to do with fear and/or smell. If the effect is being protected, it may be a quirk of epigenetics that needs to be investigated. It might be an advantageous adaptation to block the erasing mechanism, under certain conditions, to prepare offspring for the local environment. Why does it only get passed on in sperm? Is this an indication of a general difference between the two sexes in the erasing of epigenetic markers?


I have a general rule not to trust individual papers, especially those with unexpected results. I don’t assume they are wrong; I wait for other scientists to add confirmation or suspicion. I know nothing about the details of these experiments (I wish the paper were open access) and that is another reason for reluctance. But I have no reason to reject the results either, and they are very interesting results, even if they are not results about ‘inheriting’ ‘memories’ as we usually use those words.


Here is the abstract of Dias & Ressler, Parental olfactory experience influences behavior and neural structure in subsequent generations. Nature Neuroscience (2014) 17, 89-96. doi:10.1038/nn.3594


Using olfactory molecular specificity, we examined the inheritance of parental traumatic exposure, a phenomenon that has been frequently observed, but not understood. We subjected F0 mice to odor fear conditioning before conception and found that subsequently conceived F1 and F2 generations had an increased behavioral sensitivity to the F0-conditioned odor, but not to other odors. When an odor (acetophenone) that activates a known odorant receptor (Olfr151) was used to condition F0 mice, the behavioral sensitivity of the F1 and F2 generations to acetophenone was complemented by an enhanced neuroanatomical representation of the Olfr151 pathway. Bisulfite sequencing of sperm DNA from conditioned F0 males and F1 naive offspring revealed CpG hypomethylation in the Olfr151 gene. In addition, in vitro fertilization, F2 inheritance and cross-fostering revealed that these transgenerational effects are inherited via parental gametes. Our findings provide a framework for addressing how environmental information may be inherited transgenerationally at behavioral, neuroanatomical and epigenetic levels.