Memory switch

A new tool has been used for the first time to look at brain activity: ribosomal profiling. The method identifies the proteins being made at any given moment. Ribosomes make proteins using messenger RNA that was copied from the DNA of genes. The trick is to destroy all the messenger RNA that is not actually within a ribosome, that is, not being actively used to make protein. The protected RNA can then be used to identify the genes that were being translated at the moment the cell was broken open and the free RNA destroyed.
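The logic of the method can be caricatured in a few lines of code. This is purely an illustrative toy (the gene names and data are made up, and this is not the authors' pipeline): digestion removes any message not protected inside a ribosome, and counting the surviving fragments identifies what was being translated.

```python
# Toy illustration of the ribosome-profiling logic (made-up genes, not
# the authors' pipeline). Digestion destroys every mRNA that is not
# protected inside a ribosome; counting the surviving fragments tells
# us which genes were being translated when the cell was broken open.
from collections import Counter

transcripts = [
    {"gene": "Arc",   "in_ribosome": True},
    {"gene": "Bdnf",  "in_ribosome": True},
    {"gene": "Gapdh", "in_ribosome": False},  # transcribed, but idle
    {"gene": "Fos",   "in_ribosome": True},
    {"gene": "Fos",   "in_ribosome": False},
]

# Step 1: "digest" everything outside a ribosome.
protected = [t for t in transcripts if t["in_ribosome"]]

# Step 2: sequence the protected footprints and count reads per gene.
translatome = Counter(t["gene"] for t in protected)

print(dict(translatome))  # {'Arc': 1, 'Bdnf': 1, 'Fos': 1}
```

The real method of course works with sequencing reads rather than labeled records, but the filter-then-count structure is the essence of it.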

ScienceDaily reports on a press release from the Institute for Basic Science describing the use of this technique to study memory formation (here). The research was done in the IBS Center for RNA Research and the Department of Biological Sciences at Seoul National University. It points to an on-off switch for the formation of memories that is based on changes in protein production.

When an animal experiences no stimulus in an environment, the hippocampus undergoes gene repression, which prevents the formation of new memories. Upon the introduction of a stimulus, the hippocampus’ repressive gene regulation is turned off, allowing new memories to be created. As Jun Cho puts it, “Our study illustrates the potential importance of negative gene regulation in learning and memory”.

I assume this research will appear in a journal paper and that the technique will be used in other studies of the brain. It is always good to hear of new methods being available.

Islands and ocean of memory

Episodic memories are tagged with information about time and place. If we remember an event, it is almost certain we will remember where it happened and where it lies in the temporal sequence of events. Research has shown that an activity pattern in a part of the brain involved in memory, the entorhinal cortex, feeds ‘where’ and ‘when’ information to the hippocampus, which forms the new memory.

The research is reported in a recent paper: Takashi Kitamura, Chen Sun, Jared Martin, Lacey J. Kitch, Mark J. Schnitzer, Susumu Tonegawa. Entorhinal Cortical Ocean Cells Encode Specific Contexts and Drive Context-Specific Fear Memory. Neuron, 2015; DOI: 10.1016/j.neuron.2015.08.036.

The entorhinal area involved has been likened to an ocean of context-specific ‘where’ cells with islands of ‘when’ cells. The ocean cells signal the CA3 cells of the hippocampus and the island cells signal the CA1 cells. If ocean cells are blocked, animals cannot learn to connect fear with a particular environment. Island cells seem to react to the speed at which an animal is moving, and manipulating their signals changed the gap between events being linked in an animal’s memory. This is probably one of many ingredients in the processing of time-and-space.

Abstract: “Forming distinct representations and memories of multiple contexts and episodes is thought to be a crucial function of the hippocampal-entorhinal cortical network. The hippocampal dentate gyrus (DG) and CA3 are known to contribute to these functions, but the role of the entorhinal cortex (EC) is poorly understood. Here, we show that Ocean cells, excitatory stellate neurons in the medial EC layer II projecting into DG and CA3, rapidly form a distinct representation of a novel context and drive context-specific activation of downstream CA3 cells as well as context-specific fear memory. In contrast, Island cells, excitatory pyramidal neurons in the medial EC layer II projecting into CA1, are indifferent to context-specific encoding or memory. On the other hand, Ocean cells are dispensable for temporal association learning, for which Island cells are crucial. Together, the two excitatory medial EC layer II inputs to the hippocampus have complementary roles in episodic memory.”

The thalamus revisited

For a few decades I have held the opinion that to understand how the brain works it is important to look beyond the neocortex to the other areas of the brain that may modify, control or even drive the activity of the cortex. Because of my special interest in consciousness, the thalamus was always interesting in this respect. Metaphorically, the cortex seemed to be the big on-line computer run by the thalamus.

A recent paper makes another connection between the cortex and the thalamus, to add to many others – (F. Alcaraz, A. R. Marchand, E. Vidal, A. Guillou, A. Faugere, E. Coutureau, M. Wolff. Flexible Use of Predictive Cues beyond the Orbitofrontal Cortex: Role of the Submedius Thalamic Nucleus. Journal of Neuroscience, 2015; 35 (38): 13183 DOI: 10.1523/JNEUROSCI.1237-15.2015).

The various parts of the thalamus are connected to incoming sensory signals, all parts of the cortex, the hippocampus, the mid-brain areas, the spinal cord and the brain stem. It is one of the ‘hubs’ of the brain and its activity is essential for consciousness. However, the part of the thalamus implicated in this particular function (flexibility in adaptive decision making) appears to have been studied mainly in relation to pain and its control. There is a lot to learn about the thalamus!

Here is the abstract: “The orbitofrontal cortex (OFC) is known to play a crucial role in learning the consequences of specific events. However, the contribution of OFC thalamic inputs to these processes is largely unknown. Using a tract-tracing approach, we first demonstrated that the submedius nucleus (Sub) shares extensive reciprocal connections with the OFC. We then compared the effects of excitotoxic lesions of the Sub or the OFC on the ability of rats to use outcome identity to direct responding. We found that neither OFC nor Sub lesions interfered with the basic differential outcomes effect. However, more specific tests revealed that OFC rats, but not Sub rats, were disproportionally relying on the outcome, rather than on the discriminative stimulus, to guide behavior, which is consistent with the view that the OFC integrates information about predictive cues. In subsequent experiments using a Pavlovian contingency degradation procedure, we found that both OFC and Sub lesions produced a severe deficit in the ability to update Pavlovian associations. Altogether, the submedius therefore appears as a functionally relevant thalamic component in a circuit dedicated to the integration of predictive cues to guide behavior, previously conceived as essentially dependent on orbitofrontal functions.

SIGNIFICANCE STATEMENT: In the present study, we identify a largely unknown thalamic region, the submedius nucleus, as a new functionally relevant component in a circuit supporting the flexible use of predictive cues. Such abilities were previously conceived as largely dependent on the orbitofrontal cortex. Interestingly, this echoes recent findings in the field showing, in research involving an instrumental setup, an additional involvement of another thalamic nuclei, the parafascicular nucleus, when correct responding requires an element of flexibility (Bradfield et al., 2013a). Therefore, the present contribution supports the emerging view that limbic thalamic nuclei may contribute critically to adaptive responding when an element of flexibility is required after the establishment of initial learning.”

Forming memories

The contents of episodic memory (but not other types of memory) are formed in the medial temporal lobe (the hippocampus and neocortical areas adjacent to it, including the entorhinal area). This part of the brain seems to pick out moments of our conscious experience and commit them to memory. A recent paper looks at individual neurons involved in this memory formation. (Matias J. Ison, Rodrigo Quian Quiroga, Itzhak Fried. Rapid Encoding of New Memories by Individual Neurons in the Human Brain. Neuron, 2015; 87 (1): 220 DOI: 10.1016/j.neuron.2015.06.016)

The researchers used the method of asking for the cooperation of patients who are awaiting surgery for epilepsy and have been fitted with electrodes in a particular area of the brain to locate the exact focus of the epilepsy. This is probably the only ethical way in which studies involving recording from individual neurons in humans can be done.

It is thought that memories are established by associations. But it is a puzzle how associations can be made from single exposures to unique natural events. The researchers showed patients many images of people, animals and places. Images that activated one of the neurons being recorded were set aside for use. These were made into composites in which a person or animal image was placed within a place image so as to appear to be the person/animal in that setting. Neurons that had responded to one of the images but not the other began responding to both original images after exposure to the composite. A neuron was forming an association between the two parts of the composite. Associations formed rapidly, in an all-or-nothing manner.
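That all-or-nothing expansion of selectivity can be caricatured in code. This is my own sketch, not the paper's model: a 'concept cell' is treated as a set of triggering stimuli that grows in a single step when a composite pairs a known trigger with a new image.

```python
# Toy model (my sketch, not the paper's analysis) of a medial temporal
# lobe "concept cell" expanding its selectivity after a single exposure
# to a composite image, in an all-or-nothing way.

class ConceptNeuron:
    def __init__(self, preferred):
        self.triggers = {preferred}  # stimuli that make the cell fire

    def responds_to(self, stimulus):
        return stimulus in self.triggers

    def see_composite(self, part_a, part_b):
        # One pairing suffices: if either part already drives the cell,
        # the other part becomes a trigger immediately (all-or-nothing).
        if part_a in self.triggers or part_b in self.triggers:
            self.triggers |= {part_a, part_b}

neuron = ConceptNeuron("Eiffel Tower")
assert not neuron.responds_to("family member")

neuron.see_composite("family member", "Eiffel Tower")  # single exposure
assert neuron.responds_to("family member")  # now fires to both
assert neuron.responds_to("Eiffel Tower")   # original images alone
```

A real neuron presumably does this through rapid synaptic change; the point of the sketch is only the one-shot, binary character of the association.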

After rapid capture, these associations are probably manipulated and consolidated (or destroyed) in the production of more permanent and complex memories.

Abstract: “The creation of memories about real-life episodes requires rapid neuronal changes that may appear after a single occurrence of an event. How is such demand met by neurons in the medial temporal lobe (MTL), which plays a fundamental role in episodic memory formation? We recorded the activity of MTL neurons in neurosurgical patients while they learned new associations. Pairs of unrelated pictures, one of a person and another of a place, were used to construct a meaningful association modeling the episodic memory of meeting a person in a particular place. We found that a large proportion of responsive MTL neurons expanded their selectivity to encode these specific associations within a few trials: cells initially responsive to one picture started firing to the associated one but not to others. Our results provide a plausible neural substrate for the inception of associations, which are crucial for the formation of episodic memories.”

What meets where

A paper looking at a newly re-found nerve bundle, the vertical occipital fasciculus, connects it with “The Man Who Mistook His Wife for a Hat”. Why would someone reach for an object in a different place from where it actually was? ScienceDaily has a report (here), “Scientists chart a lost highway in the brain”.

The ‘what’ and ‘where’ visual pathways have been studied for some time, but there appeared to be little connecting them until they had passed out of the early visual area. A nerve tract that was known over a hundred years ago and was lost to the anatomy texts until recently, connects the ‘what’ and ‘where’ maps. “Our new study shows that the VOF may provide the fundamental white matter connection between two parts of the visual system: that which identifies objects, words and faces and that which orients us in space. … The structure forms a ‘highway’ between the lower, ventral part of the visual system, which processes the properties of faces, words and objects, and the upper, dorsal parietal regions, which orients attention to an object’s spatial location.”

This long flat nerve tract was described long ago and then mysteriously disappeared from view. Why? “The answer may be scientific rivalry. In their earlier paper, Pestilli and collaborators attributed the VOF’s disappearance to competing beliefs among 19th-century neuroanatomists. In contrast to Wernicke, Theodor Meynert, another prominent scientist in Germany, never accepted the new structure due to his belief that all white matter pathways ran horizontally. Over time, the VOF faded into obscurity.”

Are other structures being overlooked because they are up and down rather than side to side or back to front, or in some other way are counter to orthodoxy?

Here is the abstract (H. Takemura, A. Rokem, J. Winawer, J. D. Yeatman, B. A. Wandell, F. Pestilli. A Major Human White Matter Pathway Between Dorsal and Ventral Visual Cortex. Cerebral Cortex, 2015; DOI: 10.1093/cercor/bhv064): “Human visual cortex comprises many visual field maps organized into clusters. A standard organization separates visual maps into 2 distinct clusters within ventral and dorsal cortex. We combined fMRI, diffusion MRI, and fiber tractography to identify a major white matter pathway, the vertical occipital fasciculus (VOF), connecting maps within the dorsal and ventral visual cortex. We use a model-based method to assess the statistical evidence supporting several aspects of the VOF wiring pattern. There is strong evidence supporting the hypothesis that dorsal and ventral visual maps communicate through the VOF. The cortical projection zones of the VOF suggest that human ventral (hV4/VO-1) and dorsal (V3A/B) maps exchange substantial information. The VOF appears to be crucial for transmitting signals between regions that encode object properties including form, identity, and color and regions that map spatial information.”


Liking the easy stuff

It is not just that we assume what we do not understand is easily done; it is also true that what is easier to grasp is more likeable. A recent study looked at this connection between fluency and appreciation. (Forster M, Gerger G, Leder H (2015) Everything’s Relative? Relative Differences in Processing Fluency and the Effects on Liking. PLoS ONE 10(8): e0135944. doi:10.1371/journal.pone.0135944)

The question Forster asks is whether the judgement of fluency is absolute or relative. If we have internal reference standards for liking that depend on the ease of perceiving, then the level of liking is an absolute judgement. Internal standards seem to be the case for perfect pitch and for the feeling of familiarity when something is recalled from memory. But in the case of the effort of perception, our feeling of liking turns out to be a relative judgement – a comparison with the amounts of effort for other images.

Abstract: “Explanations of aesthetic pleasure based on processing fluency have shown that ease-of-processing fosters liking. What is less clear, however, is how processing fluency arises. Does it arise from a relative comparison among the stimuli presented in the experiment? Or does it arise from a comparison to an internal reference or standard? To address these questions, we conducted two experiments in which two ease-of-processing manipulations were applied: either (1) within-participants, where relative comparisons among stimuli varying in processing ease were possible, or (2) between-participants, where no relative comparisons were possible. In total, 97 participants viewed simple line drawings with high or low visual clarity, presented at four different presentation durations, and rated for felt fluency, liking, and certainty. Our results show that the manipulation of visual clarity led to differences in felt fluency and certainty regardless of being manipulated within- or between-participants. However, liking ratings were only affected when ease-of-processing was manipulated within-participants. Thus, feelings of fluency do not depend on the nature of the reference. On the other hand, participants liked fluent stimuli more only when there were other stimuli varying in ease-of-processing. Thus, relative differences in fluency seem to be crucial for liking judgements.”

It is about communication

Some people understand language as a way of thinking and ignore the obvious – language is a way of communicating. A recent study looks at the start of language in very young babies and shows the importance of communication. (Marno, H. et al. Can you see what I am talking about? Human speech triggers referential expectation in four-month-old infants. Sci. Rep. 5, 13594; doi: 10.1038/srep13594 (2015)) The researchers looked at infants’ ability to recognize that a word can refer to an object in the world but they also show the importance of the infants’ recognizing the act of communication.

The authors review what is known and it is an interesting list. “Human language is a special auditory stimulus for which infants show a unique sensitivity, compared to any other types of auditory stimuli. Various studies found that newborns are not only able to distinguish languages they never heard before based on their rhythmical characteristics, but they can also detect acoustic cues that signal word boundaries, discriminate words based on their patterns of lexical stress and distinguish content words from function words by detecting their different acoustic characteristics. Moreover, they can also recognize words with the same vowels after a 2 min delay. In fact, infants are more sensitive to the statistical and prosodic patterns of language than adults, which provides an explanation of why acquiring a second language is more difficult in adulthood than during infancy. In addition to this unique sensitivity to the characteristics of language, infants also show a particular preference for language, compared to other auditory stimuli. For example, infants at the age of 2-months, and even newborns prefer to listen to speech compared to non-speech stimuli, even if the non-speech stimuli retain many of the spectral and temporal properties of the speech signal. Thus, there is growing evidence that infants are born with a unique interest and sensitivity to process human language. … it might be that infants are receptive towards speech because they also understand that speech can communicate about something. More specifically, they might understand that speech can convey information about the surrounding world and that words can refer to specific entities. Indeed, without this understanding, they would have great difficulty to accept relations between objects and their labels, and thus language acquisition would become impossible.”

The experiments reported in the paper are designed to show whether infants (about 4 months old) understand that words can refer to objects in the world. They do show this, but also show that this depends on the infant recognizing the act of communication. The infant attends to eye-contact and when the face speaks language (not backward language or silent mimed language), the infant then appears to recognize it is being communicated with. Without the eye-contact or without the actual language, the infant does not assume an act of communication. Then the infant can go on to recognize that reference to something is what is being communicated. “… we suggest that during the perception of a direct eye-gaze, infants can recognize the communicative intention, even before they could assess the content of these intentions. Eye-gaze thus is able to establish a communicative context, which can direct the attention of the infant. However, we also suggest that while an infant-directed gaze acts as a communicative cue signaling that the infant was addressed by someone, additional cues are required to elicit the referential expectation of the infant (i.e. to understand that the speaker is talking about something). Following this, we propose that when the infant hears speech (without being able to actually understand the content of speech) and observes a person directly gazing at her/him (like in the Infant-directed gaze condition in our experiment), s/he will understand the communicative intention of the speaker (i.e. that s/he was addressed by the speaker), but s/he will still have to wait for additional referential cues to make an inference that the speaker is actually talking about something. 
This additional cue arrives when the direct eye contact is broken: the very moment when the speaker averts her gaze to a new direction, the infant will infer that some new and relevant information is being presented to her via the speech signals, and, as a consequence will be ready to seek this information.”

Language is about communication. Children learn language by communicating, for communicating.

Abstract: “Infants’ sensitivity to selectively attend to human speech and to process it in a unique way has been widely reported in the past. However, in order to successfully acquire language, one should also understand that speech is a referential, and that words can stand for other entities in the world. While there has been some evidence showing that young infants can make inferences about the communicative intentions of a speaker, whether they would also appreciate the direct relationship between a specific word and its referent, is still unknown. In the present study we tested four-month-old infants to see whether they would expect to find a referent when they hear human speech. Our results showed that compared to other auditory stimuli or to silence, when infants were listening to speech they were more prepared to find some visual referents of the words, as signalled by their faster orienting towards the visual objects. Hence, our study is the first to report evidence that infants at a very young age already understand the referential relationship between auditory words and physical objects, thus show a precursor in appreciating the symbolic nature of language, even if they do not understand yet the meanings of words.”


REM sleep and saccades

When we sleep we pass through a cycle of types of sleep. One of these types is REM (rapid eye movement) sleep, and if we are awakened during REM sleep we report dreaming. During REM sleep movement of the body is suppressed (except in sleep-walkers). Only the eyes move under the eyelids. These eye movements resemble the regular rapid movements of the eyes from one fixation to another when awake, called saccades.

A recent paper (Thomas Andrillon, Yuval Nir, Chiara Cirelli, Giulio Tononi, Itzhak Fried. Single-neuron activity and eye movements during human REM sleep and awake vision. Nature Communications, 2015; 6: 7884 DOI: 10.1038/ncomms8884) compares REMs and saccades. The research was carried out on epileptic patients being prepared for surgery by having electrodes implanted deep in the brain for 10 days. In this case there were 9 patients with electrodes in the medial temporal lobe. The MTL is thought of as a bridge between vision and imagining visual scenes on the one hand and memory on the other. As well as recording activity from neurons, the researchers also recorded movement of the eye muscles and conventional EEG from various parts of the scalp.

They found that REMs and saccades were very similar in their pattern of brain activity. This says something very interesting about consciousness. When the eyes move in a saccade there is saccadic suppression of the information arriving via the optic nerve from the eyes. Signals are arriving and they affect brain activity, but they do not register in consciousness; the gap in visual information is also hidden from consciousness. We are, in effect, as when watching a movie, viewing a series of still images and creating a continuous effect. So saccades are associated with creating a continuous effect from a train of discrete, discontinuous signals. (At least that is one theory.)

The type of dreaming that is associated with REM sleep can be seen as a very similar process, a form of consciousness that is cut off from actual sensory input and from control of skeletal muscles. The purpose of these dreams is unclear but it probably involves some aspect of memory processing. So perhaps the dreams are a series of discontinuous images divided by REMs, leaving no trace of the gaps during the eye movement, as in awake consciousness.

But why would the eye movement be required to create the conscious process during sleep? Perhaps it drives it, or controls it, or times its components; or perhaps it is not required but cannot be eliminated. What interesting questions! They may help us understand conventional consciousness as well as dreams.

Here is the abstract: “Are rapid eye movements (REMs) in sleep associated with visual-like activity, as during wakefulness? Here we examine single-unit activities (n = 2,057) and intracranial electroencephalography across the human medial temporal lobe (MTL) and neocortex during sleep and wakefulness, and during visual stimulation with fixation. During sleep and wakefulness, REM onsets are associated with distinct intracranial potentials, reminiscent of ponto-geniculate-occipital waves. Individual neurons, especially in the MTL, exhibit reduced firing rates before REMs as well as transient increases in firing rate immediately after, similar to activity patterns observed upon image presentation during fixation without eye movements. Moreover, the selectivity of individual units is correlated with their response latency, such that units activated after a small number of images or REMs exhibit delayed increases in firing rates. Finally, the phase of theta oscillations is similarly reset following REMs in sleep and wakefulness, and after controlled visual stimulation. Our results suggest that REMs during sleep rearrange discrete epochs of visual-like processing as during wakefulness.”

Not what you do but how you do it

I have been interested in communication through non-verbal channels for some time. Communication through posture, facial expression, gesture and tone of voice is an intriguing subject. Lately I have encountered another channel: the vitality forms of actions. A particular action, say handing something to another person, can be done in a number of ways implying rudeness, caring, anger, generosity etc. A person’s actions can have a goal and an intent but can also give hints as to their state of mind or emotions during the action. Of course, we can be conscious, or not, of giving these signals and conscious, or not, of receiving them – but there is communication nonetheless.

There is a new paper on this subject which I cannot access, and an older, similar paper which I have been able to read. The two citations are with the abstracts below. The research looked at what differs between actions that have different vitality forms: time profile, force, space, direction. The diagram illustrates the difference between energetic and gentle action.

vitality graphs


The stimuli were presented to the participants in pairs of consecutive videos, where the observed action (what) and vitality (how) could be the same or changed between video-pairs. To counterbalance all what–how possibilities, four different combinations of action-vitality were created: (i) same action-same vitality; (ii) same action-different vitality; (iii) different action-same vitality and (iv) different action-different vitality. All video combinations were presented in two tasks. The what task required the participants to pay attention to the type of action observed in the two consecutive videos and to decide whether the represented action was the same or different regardless of vitality form. The how task required the participants to pay attention to the vitality form and to decide whether the represented vitality was the same or different between the two consecutive videos regardless of the type of action performed.

A number of areas of the brain are active during an action but only one was active with ‘how’ and not ‘what’ tasks. This was the right dorso-central insula.

Here is the abstract of the older paper (Giuseppe Di Cesare, Cinzia Di Dio, Magali J. Rochat, Corrado Sinigaglia, Nadia Bruschweiler-Stern, Daniel N. Stern, Giacomo Rizzolatti; The neural correlates of ‘vitality’ recognition: a fMRI study; Social Cognitive and Affective Neuroscience 2014, 9 (7): 951-60): “The observation of goal-directed actions performed by another individual allows one to understand what that individual is doing and why he/she is doing it. Important information about others’ behaviour is also carried by the dynamics of the observed action. Action dynamics characterize the vitality form of an action describing the cognitive and affective relation between the performing agent and the action recipient. Here, using the fMRI technique, we assessed the neural correlates of vitality form recognition presenting participants with videos showing two actors executing actions with different vitality forms: energetic and gentle. The participants viewed the actions in two tasks. In one task (what), they had to focus on the goal of the presented action; in the other task (how), they had to focus on the vitality form. For both tasks, activations were found in the action observation/execution circuit. Most interestingly, the contrast how vs what revealed activation in right dorso-central insula, highlighting the involvement, in the recognition of vitality form, of an anatomical region connecting somatosensory areas with the medial temporal region and, in particular, with the hippocampus. This somatosensory-insular-limbic circuit could underlie the observer’s capacity to understand the vitality forms conveyed by the observed action.”

And the abstract of the newer paper (Di Cesare G, Di Dio C, Marchi M, Rizzolatti G; Expressing our internal states and understanding those of others; Proc Natl Acad Sci 2015): “Vitality form is a term that describes the style with which motor actions are performed (e.g., rude, gentle, etc.). They represent one characterizing element of conscious and unconscious bodily communication. Despite their importance in interpersonal behavior, vitality forms have been, until now, virtually neglected in neuroscience. Here, using the functional MRI (fMRI) technique, we investigated the neural correlates of vitality forms in three different tasks: action observation, imagination, and execution. Conjunction analysis showed that, in all three tasks, there is a common, consistent activation of the dorsocentral sector of the insula. In addition, a common activation of the parietofrontal network, typically active during arm movements production, planning, and observation, was also found. We conclude that the dorsocentral part of the insula is a key element of the system that modulates the cortical motor activity, allowing individuals to express their internal states through action vitality forms. Recent monkey anatomical data show that the dorsocentral sector of the insula is, indeed, connected with the cortical circuit involved in the control of arm movements.”

Included graph is Fig S2 of the paper – Giuseppe Di Cesare, Cinzia Di Dio, Magali J. Rochat, Corrado Sinigaglia, Nadia Bruschweiler-Stern, Daniel N. Stern, Giacomo Rizzolatti; The neural correlates of ‘vitality’ recognition: a fMRI study; Social Cognitive and Affective Neuroscience 2014, 9 (7): 951-60

Here is the caption for the graph: Fig. 2 Kinematic and dynamic profiles associated with one of the actions (passing a bottle) performed by the female actress with the two vitality forms (gentle; energetic). (A) Velocity profiles (y-axes) and duration (x-axes). (B) Trajectories (gentle, green line; energetic, red line). (C) Potential energy (blue line), that is the energy that the actress gave to the object during the lifting phase of the action; kinetic energy (red line), that is the energy that the actress gave to the object to move it with a specific velocity from the start to the end point. (D) Power required to perform the action on the object in an energetic (blue solid line) and gentle (blue dashed line) vitalities. As it can be observed in the graphs, the vitality forms gentle and energetic generally differ from each other on each of the tested parameters.


The learning of concepts

I once tried to learn a simple form of a Bantu language and failed (not surprisingly, as I always fail to learn new languages). One of the difficulties with this particular attempt was the classes of nouns. There were ten or so classes, each with its own rules. The system works something like the gender of nouns in most European languages, but it is much more complex and, unlike gender, less arbitrary. The nouns fall into roughly descriptive groups: animals, people, places, tools and so on. Besides the Bantu languages, a number of other language groups have extensive noun classes, twenty or more.
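To give a concrete flavour of what such a system looks like, here is a toy table of a few Swahili noun classes – my own simplified illustration, not drawn from the text or the papers:

```python
# A toy sketch of Bantu-style noun classes: each class pairs a singular
# and plural prefix, and membership is loosely semantic.
noun_classes = {
    "m-/wa- (people)":  [("mtu", "watu"), ("mtoto", "watoto")],    # person, child
    "m-/mi- (plants)":  [("mti", "miti")],                         # tree
    "ki-/vi- (things)": [("kiti", "viti"), ("kitabu", "vitabu")],  # chair, book
}

for cls, nouns in noun_classes.items():
    for singular, plural in nouns:
        print(f"{cls}: {singular} -> {plural}")
```

Adjectives and verbs must agree with the class prefix as well, which is what makes the system so hard for an adult learner.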

Years ago I found the noun classes inexplicable. Why did they exist? But there have been a number of hints that this is quite a natural way for concepts to be stored in the brain – faces stored here, tools stored there, places somewhere else.

A recent paper (Andrew James Bauer, Marcel Adam Just. Monitoring the growth of the neural representations of new animal concepts. Human Brain Mapping, 2015; DOI: 10.1002/hbm.22842) studies how and where new concepts are stored.

Their review of previous findings illustrates the idea. “Research to date has revealed that object concepts (such as the concept of a hammer) are neurally represented in multiple brain regions, corresponding to the various brain systems that are involved in the physical and mental interaction with the concept. The concept of a hammer entails what it looks like, what it is used for, how one holds and wields it, etc., resulting in a neural representation distributed over sensory, motor, and association areas. There is a large literature that documents the responsiveness (activation) of sets of brain regions to the perception or contemplation of different object concepts, including animals (animate natural objects), tools, and fruits and vegetables. For example, fMRI research has shown that nouns that refer to physically manipulable objects such as tools elicit activity in left premotor cortex in right-handers, and activity has also been observed in a variety of other regions to a lesser extent. Clinical studies of object category-specific knowledge deficits have uncovered results compatible with those of fMRI studies. For example, damage to the inferior parietal lobule can result in a relatively selective knowledge deficit about the purpose and the manner of use of a tool. The significance of such findings is enhanced by the commonality of neural representations of object concepts across individuals. For example, pattern classifiers of multi-voxel brain activity trained on the data from a set of participants can reliably predict which object noun a new test participant is contemplating. Similarity in neural representation across individuals may indicate that there exist domain-specific brain networks that process information that is important to survival, such as information about food and eating or about enclosures that provide shelter.”
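The cross-participant classification they mention can be sketched in a few lines. This is a toy version with synthetic “voxel” patterns and a simple nearest-mean classifier – an illustration of the idea, not the method of any of the cited studies:

```python
import numpy as np

# Toy cross-participant MVPA: each object noun has a shared neural
# "signature" across subjects, plus subject-specific noise (the
# commonality across individuals that the passage describes).
rng = np.random.default_rng(0)
n_subjects, n_nouns, n_voxels = 8, 5, 60

signatures = rng.normal(size=(n_nouns, n_voxels))
# patterns[s, noun] = shared signature + noise for subject s
patterns = signatures[None, :, :] + 0.5 * rng.normal(size=(n_subjects, n_nouns, n_voxels))

# Leave-one-subject-out: train on the other subjects, test on the held-out one.
accs = []
for held_out in range(n_subjects):
    train = np.delete(patterns, held_out, axis=0)
    centroids = train.mean(axis=0)            # mean pattern per noun
    test = patterns[held_out]
    # classify each held-out pattern by its nearest noun centroid
    dists = np.linalg.norm(test[:, None, :] - centroids[None, :, :], axis=2)
    predicted = dists.argmin(axis=1)
    accs.append(np.mean(predicted == np.arange(n_nouns)))

print(np.mean(accs))  # well above the 1/5 chance level when signatures are shared
```

If the noun signatures were not shared across subjects, accuracy would fall to chance – which is why above-chance cross-participant decoding is taken as evidence for common neural representations.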

Their study is concerned with how new concepts are formed (they have a keen interest in education). “Collectively, the results show that before instruction about a feature, there were no stored representations of the new feature knowledge; and after instruction, the feature information had been acquired and stored in the critical brain regions. The activation patterns in the regions that encode the semantic information that was taught (habitat and diet) changed, reflecting the specific new concept knowledge. This study provides a novel form of evidence (i.e. the emergence of new multi-voxel representations) that newly acquired concept knowledge comes to reside in brain regions previously shown to underlie a particular type of knowledge. Furthermore, this study provides a foundation for brain research to trace how a new concept makes its way from the words and graphics used to teach it, to a neural representation of that concept in a learner’s brain.”

This is a different type of learning. It is conceptual knowledge learning rather than learning an intellectual skill such as reading or a motor skill such as juggling.

The storage of conceptual knowledge appears to be quite carefully structured rather than higgledy-piggledy.

Here is the abstract. “Although enormous progress has recently been made in identifying the neural representations of individual object concepts, relatively little is known about the growth of a neural knowledge representation as a novel object concept is being learned. In this fMRI study, the growth of the neural representations of eight individual extinct animal concepts was monitored as participants learned two features of each animal, namely its habitat (i.e., a natural dwelling or scene) and its diet or eating habits. Dwelling/scene information and diet/eating-related information have each been shown to activate their own characteristic brain regions. Several converging methods were used here to capture the emergence of the neural representation of a new animal feature within these characteristic, a priori-specified brain regions. These methods include statistically reliable identification (classification) of the eight newly acquired multivoxel patterns, analysis of the neural representational similarity among the newly learned animal concepts, and conventional GLM assessments of the activation in the critical regions. Moreover, the representation of a recently learned feature showed some durability, remaining intact after another feature had been learned. This study provides a foundation for brain research to trace how a new concept makes its way from the words and graphics used to teach it, to a neural representation of that concept in a learner’s brain.”
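The representational-similarity analysis the abstract mentions can be illustrated with a toy computation: if two concepts come to share a learned feature (say, a habitat), the correlation between their multi-voxel patterns should rise after instruction. The data below are entirely synthetic, assuming a shared additive “habitat” component – a sketch of the idea, not the study’s analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 40
habitat = rng.normal(size=n_voxels)  # e.g. a "forest dwelling" component

def pattern(component, noise=0.5):
    # a concept's voxel pattern: shared feature component plus noise
    return component + noise * rng.normal(size=n_voxels)

# Before instruction: no habitat knowledge, the two animals' patterns
# are unrelated noise.
before = [rng.normal(size=n_voxels) for _ in range(2)]
# After instruction: two animals taught the same habitat share structure.
after = [pattern(habitat), pattern(habitat)]

r_before = np.corrcoef(before[0], before[1])[0, 1]
r_after = np.corrcoef(after[0], after[1])[0, 1]
print(r_after > r_before)  # learned shared features raise pattern similarity
```

The emergence of this similarity structure in the habitat- and diet-related regions is, in essence, the “growth of a neural knowledge representation” that the study tracked.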