Category Archives: memory

Memory in and out

Do we have one system to form memories and another to recall them, or are both processes handled by the same system in the brain? This has been a long-standing question about the hippocampus. ScienceDaily has a report (here) on a paper answering this question. (Nozomu H. Nakamura, Magdalena M. Sauvage. Encoding and reactivation patterns predictive of successful memory performance are topographically organized along the longitudinal axis of the hippocampus. Hippocampus, 2015; DOI: 10.1002/hipo.22491).

The researchers used tags for molecules known to be involved in memory formation and also tags for those used in retrieval, and found that the same cells did both jobs. This is not really surprising, because the patterns of cortical activity had been shown to be very similar for a particular memory through its formation, strengthening and recall. These patterns seemed to come from activity in the hippocampus. A single system for formation and recall also makes it easier to understand the ways that memories are changed when they are recalled and used.

For their studies with rats, the researchers adapted a standardised word-based memory test designed for humans, but used scents instead of words. The researchers hid small treats in sand-filled cups. In addition, each cup also contained a different scent, such as thyme or coriander, which could be smelled by the rats when searching for the treats. Each training unit consisted of three phases. During the learning phase, researchers presented several scents to the animals. A pause followed, and subsequently a recognition phase. In the latter, the animals were presented the scents from the learning phase as well as other smells. The animals demonstrated that they recognised a scent from the learning phase by running to the back wall of their cage, where they were rewarded with food for the correct response. If, on the other hand, they recognised that a scent had not been presented during the learning phase, they demonstrated it by digging in the sand with their front paws.

Here is the abstract: “An ongoing debate in human memory research is whether the encoding and the retrieval of memory engage the same part of the hippocampus and the same cells, or whether encoding preferentially involves the anterior part of the hippocampus and retrieval its posterior part. Here, we used a human to rat translational behavioural approach combined to high-resolution molecular imaging to address this issue. We showed that successful memory performance is predicted by encoding and reactivation patterns only in the dorsal part of the rat hippocampus (posterior part in humans), but not in the ventral part (anterior part in humans). Our findings support the view that the encoding and the retrieval processes per-se are not segregated along the longitudinal axis of the hippocampus, but that activity predictive of successful memory is and concerns specifically the dorsal part of the hippocampus. In addition, we found evidence that these processes are likely to be mediated by the activation/reactivation of the same cells at this level. Given the translational character of the task, our results suggest that both the encoding and the retrieval processes take place in the same cells of the posterior part of the human hippocampus.”

A train of discrete places

Place cells are active when an animal is moving about, when it is learning a route, when it is revisiting the path during sleep, when it is planning a route and when it is taking that route. The place cells are active in a sequence that defines the route.

ScienceDaily has an item (here) on a recent paper (B. E. Pfeiffer, D. J. Foster. Autoassociative dynamics in the generation of sequences of hippocampal place cells. Science, 2015; 349 (6244): 180). The paper describes the events in remembering a route.

Foster says, “My own introspective experience of memory tends to be one of discrete snapshots strung together, as opposed to a continuous video recording. Our data from rats suggest that our memories are actually organized that way, with one network of neurons responsible for the snapshots and another responsible for the string that connects them.”

The research showed gaps between the ‘snapshot’ discrete memories of a place. “The trajectories that the rats reconstructed weren’t smooth. We were able to see that neural activity ‘hovers’ in one place for about 20 milliseconds before ‘jumping’ to another place, where it hovers again before moving on to the next point. At first, you get a ‘blurry’ representation of point A because a bunch of place cells all around point A fire, but, as time passes, the activity becomes more focused on A. Then the activity jumps to a ‘blurry’ version of B, which then gets focused. We think that there is a whole network of cells dedicated to this process of fine-tuning and jumping. Without it, memory retrieval would be even messier than it is.”

It seems to me that this discrete series of place memories may well be like consciousness – a discrete train of individual conscious moments rather than a continuous ‘movie’.

Here is the abstract:

Neuronal circuits produce self-sustaining sequences of activity patterns, but the precise mechanisms remain unknown. Here we provide evidence for autoassociative dynamics in sequence generation. During sharp-wave ripple (SWR) events, hippocampal neurons express sequenced reactivations, which we show are composed of discrete attractors. Each attractor corresponds to a single location, the representation of which sharpens over the course of several milliseconds, as the reactivation focuses at that location. Subsequently, the reactivation transitions rapidly to a spatially discontiguous location. This alternation between sharpening and transition occurs repeatedly within individual SWRs and is locked to the slow-gamma (25 to 50 hertz) rhythm. These findings support theoretical notions of neural network function and reveal a fundamental discretization in the retrieval of memory in the hippocampus, together with a function for gamma oscillations in the control of attractor dynamics.
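
To make the hover-and-jump idea concrete, here is a minimal toy sketch in Python (my own illustration, assuming a simplified 1D track, a made-up gamma frequency and Gaussian ‘bumps’ of activity; it is not the authors’ model or analysis). It just shows what a decoded representation that sharpens around one location per slow-gamma cycle and then jumps to the next would look like.

```python
import numpy as np

# Toy illustration (not the authors' analysis): a decoded place
# representation that repeatedly sharpens around one location and
# then jumps to the next, once per slow-gamma cycle (~25-50 Hz).

track = np.linspace(0.0, 1.0, 100)        # positions along a 1 m track
route = [0.1, 0.3, 0.5, 0.7, 0.9]          # the remembered sequence of places
gamma_hz = 33.0                            # assumed slow-gamma frequency
dt = 0.001                                 # 1 ms time step
steps_per_cycle = int(1.0 / (gamma_hz * dt))   # ~30 ms of "hovering"

frames = []
for place in route:
    for t in range(steps_per_cycle):
        # representation starts blurry and sharpens over the cycle
        width = 0.20 - 0.15 * (t / steps_per_cycle)
        bump = np.exp(-0.5 * ((track - place) / width) ** 2)
        frames.append(bump / bump.sum())   # normalized population activity

frames = np.array(frames)
decoded = track[frames.argmax(axis=1)]     # peak of the bump at each ms
print(decoded[::10])                       # hovers at 0.1, 0.3, ... then jumps
```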

A new way to parse language

For many years I have followed EB Bolles’ blog Babel’s Dawn (here) while he discussed the origin of human language. He has convinced me of many things about the history and nature of language. And they fit with how I thought of language. Now he has written a chapter in a book, “Attention and Meaning: The Attentional Basis of Meaning”. In his chapter, “Attentional-Based Syntax” (here), Bolles re-writes the mechanics of parsing phrases and sentences. He uses new entities, not nouns and verbs etc., and very different rules.

The reasons I like this approach so much are the same reasons that I cannot accept Chomsky’s view of language. I see language from a biological point of view, as a product of genetic and cultural evolution, continuous with the communication of other animals. It is a type of biological communication. I imagine (rightly or wrongly) that Chomsky finds biology and especially animals distasteful and that he also has no feel for the way evolution works. I, on the other hand, find a study of language that seems to deal only with complete written sentences on a white board not of much interest. Instead of a form of biological communication, Chomsky gives us a form of logical thought.

Bolles summarizes his chapter like this. “The commonsense understanding of meaning as reference has dominated grammatical thought for thousands of years, producing many paradoxes while leaving many mysteries about language’s nature. The paradoxes wane if we assume that meaning comes by directing attention from one phenomenon to another. This transfer of meaning from objective reality to subjective experience breaks with the objective grammatical accounts produced by many philosophers, lexicographers, and teachers through the ages. The bulk of this paper introduces a formal system for parsing sentences according to an attention-based syntax. The effort proves surprisingly fruitful and is capable of parsing many sentences without reference to predicates, nouns or verbs. It might seem a futile endeavor, producing an alternative to a system used by every educated person in the world, but the approach explains many observations left unexplained by classical syntax. It also suggests a promising approach to teaching language usage.”

The key change of concept is that words do not have meanings, nor do they carry meaning from a speaker to a listener – instead, they pilot attention within the brain. Or in other words, they work by forcing items into working memory and therefore attention (or attention and therefore working memory). This makes very good sense. Take a simple word like ‘tree’: the speaker says ‘tree’, the listener hears ‘tree’, and memory automatically brings to the surface memories associated with ‘tree’. The word ‘tree’ is held in working memory and, as long as it is there, the brain has recall or near recall of tree-ish concepts/images/ideas. The meaning of tree is found within the listener’s brain. No one thing, word or single element of memory has meaning; the meaning is formed when multiple things form a connection. It is the connections that give meaning. I like this because I have thought for years that single words are without meaning. But words form a network of connections in any culture, and a word’s connections in the network are what define the word. Because we share cultural networks including a language, we can communicate. I also like this starting point because it explains why language is associated with consciousness (an oddity, because very little else to do with thinking is so closely tied to consciousness). Consciousness is associated with working memory and attention, and the content of consciousness seems to be (or come from) the focus of attention in working memory.
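
As a thought experiment, here is a toy Python sketch of this idea (my own illustration, not Bolles’ formal system; the association lists, capacity limit and ‘shared connections’ rule are all invented for the example). Each heard word forces its associations into a small working-memory buffer, and ‘meaning’ is whatever those items connect to in common.

```python
from collections import Counter

# Toy sketch of words piloting attention rather than carrying meaning.
# Each heard word pulls its associated memories into working memory;
# "meaning" emerges from the connections shared across what is held there.

associations = {
    "tree":  {"leaves", "wood", "shade", "tall", "garden"},
    "old":   {"aged", "tall", "weathered", "past"},
    "fell":  {"fall", "wood", "crash", "past"},
}

working_memory = []          # small capacity, like real working memory
CAPACITY = 4

def hear(word):
    """Hearing a word forces it (and its associations) into attention."""
    working_memory.append(word)
    if len(working_memory) > CAPACITY:
        working_memory.pop(0)            # oldest item drops out of attention

def current_meaning():
    """Meaning is not in any single word but in the connections:
    memories touched by more than one item held in attention."""
    counts = Counter()
    for w in working_memory:
        counts.update(associations.get(w, set()))
    return {memory for memory, n in counts.items() if n > 1}

for w in ["old", "tree", "fell"]:
    hear(w)
print(current_meaning())     # {'tall', 'wood', 'past'} - the utterance's shared connections
```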

Bolles uses a particular vocabulary in his parsing method: phenomenon is any conscious experience, sensation is a minimal awareness like a hue or tone, percept is a group of sensations like a loud noise, bound perception is a group of percepts that form a unified experience. We could also say phenomenon is another word for subjective consciousness. Then we have the process of perception. Perception starts with primary sensory input, memory and predictions. It proceeds to bind elements together to form a moment of perception, then serial momentary perceptions are bound into events. It matters little what words are used, the process is fairly well accepted. But what is more, it is not confined to how language is processed – it is how everything that passes through working memory and into the content of consciousness is processed. No magic here! No mutation required! Language uses what the brain more-or-less does naturally.

This also makes the evolution of language easier to visualize. The basic mechanism existed in the way that attention, working memory and consciousness work. It was harnessed by a communication function, and that function drove the evolution of language: both biological evolution and a great deal of cultural evolution. This evolution could be slow and steady over a long period of time and does not have to be the result of a recent (only 50-150 thousand years ago) all-powerful single mutation.

So - the new method of parsing is essentially to formulate the rules that English uses to bind focuses of attention together to make a meaningful event (or bound perception). Each language would have its own syntax rules. The old syntax rules and the new ones are similar because they are both describing English. But they are no longer arbitrary rules; they are understandable rules in the context of working memory and attention. Gone is the feeling of memorizing rules to parse sentences on a white board. In its place is an understanding of English as it is used.

I have to stick in a little rant here about peeves. If someone can understand without effort or mistake what someone else has said then what is the problem? Why are arbitrary rules important if breaking them does not interfere at all with communication? With the new parsing method, it is easy to see what is good communication and what isn’t; it is clear what will hinder communication. The method can be used to improve perfectly good English into even better English. Another advantage is that the method can be used for narratives longer than a sentence.

I hope that this approach to syntax will be taken up by others.

 

Adaptive forgetting

We know that memories are changed by updating details, consolidating similar memories, and forgetting some altogether. In a recent paper, researchers have shown that forgetting a memory can be due to recall of other memories (citation and abstract below). Remembering a memory enhances that memory but can suppress similar memories that interfere with its recall. This ‘adaptive forgetting’ strengthens often-recalled memories and causes forgetting of interfering memories.

The New York Times has a report on this paper (here) giving details of the method.

Wimber and others, using scans and pattern analysis, were able to observe the activity of memories in the visual cortex. First the subjects were trained to associate words with unrelated pictures – each word was associated with two different pictures. Then they were given a word and asked to remember the first picture they had been trained to associate with that word. The pattern analysis showed the extent of the pattern for the first picture and for the second picture. This trial was repeated several times amongst other trials. The pattern for the first picture grew stronger over the repeated trials and the pattern for the second picture grew weaker. To see what had happened to the second picture, the subjects were shown each picture along with a similar one and asked which picture they had been trained and tested on in each pair. They knew the correct first picture but had more trouble identifying the correct second picture – in other words, the memory of the word and second picture association was being destroyed.
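
To illustrate the kind of pattern analysis involved, here is a schematic Python sketch (toy data only; the template patterns, weights and noise are invented, and this is not the authors’ canonical template tracking pipeline). It just shows how correlating each retrieval trial’s pattern with a target template and a competitor template would reveal the target strengthening while the competitor is suppressed.

```python
import numpy as np

# Schematic sketch with toy data: track how similar each retrieval
# trial's brain pattern is to a "template" for the target picture
# and to a template for the competing picture.

rng = np.random.default_rng(0)
n_voxels = 200

target_template = rng.normal(size=n_voxels)
competitor_template = rng.normal(size=n_voxels)

def retrieval_pattern(rep):
    """Simulated pattern: target grows stronger, competitor is suppressed
    as the same association is retrieved repeatedly (assumed weights)."""
    target_w = 0.3 + 0.15 * rep          # strengthens with each repetition
    comp_w = 0.4 - 0.08 * rep            # suppressed with each repetition
    noise = rng.normal(size=n_voxels)
    return target_w * target_template + comp_w * competitor_template + noise

for rep in range(4):
    pattern = retrieval_pattern(rep)
    r_target = np.corrcoef(pattern, target_template)[0, 1]
    r_comp = np.corrcoef(pattern, competitor_template)[0, 1]
    print(f"retrieval {rep + 1}: target r={r_target:.2f}, competitor r={r_comp:.2f}")
```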

This has implications for witness testimony after repeated questioning – the questioning may have destroyed some memories by adaptive forgetting. It also weakens the theory that memories are not forgotten but overlaid and hidden by newer memories.

Here is the abstract of the paper (Wimber, Alink, Charest, Kriegeskorte, Anderson; Retrieval induces adaptive forgetting of competing memories via cortical pattern suppression. Nature Neuroscience, 2015) “Remembering a past experience can, surprisingly, cause forgetting. Forgetting arises when other competing traces interfere with retrieval and inhibitory control mechanisms are engaged to suppress the distraction they cause. This form of forgetting is considered to be adaptive because it reduces future interference. The effect of this proposed inhibition process on competing memories has, however, never been observed, as behavioral methods are ‘blind’ to retrieval dynamics and neuroimaging methods have not isolated retrieval of individual memories. We developed a canonical template tracking method to quantify the activation state of individual target memories and competitors during retrieval. This method revealed that repeatedly retrieving target memories suppressed cortical patterns unique to competitors. Pattern suppression was related to engagement of prefrontal regions that have been implicated in resolving retrieval competition and, critically, predicted later forgetting. Thus, our findings demonstrate a cortical pattern suppression mechanism through which remembering adaptively shapes which aspects of our past remain accessible.”

Could this have anything to do with the urban myth about the professor who complained that every time he remembered a student’s name, he forgot the name of a fish?

The BBC report: Dr Wimber told the BBC the implications of the new findings were not as simple as a “one in, one out” policy for memory storage. “It’s not that we’re pushing something out of our head every time we’re putting something new in. The brain seems to think that the things we use frequently are the things that are really valuable to us. So it’s trying to keep things clear - to make sure that we can access those important things really easily, and push out of the way those things that are competing or interfering.” The idea that frequently recalling something can cause us to forget closely related memories is not new; Dr Wimber explained that it had “been around since the 1990s”.

This is probably only one of the ways we forget our memories.

 

Meta-memory surprises

There was a parlor game that was played when I was young. Something in the room would become the focus of attention. Maybe a calendar picture would be remarked on and a short discussion of the picture would follow. The trick was to get people to look carefully at the picture. Then the person who was fooling the rest would suddenly tell everyone to close their eyes and ask them if they thought they could remember the picture. A number of questions would then be asked of whoever was very confident: how many clouds in the sky? how many windows in the house? is the spout of the teapot to the left or right? what colour is the vase with the flowers in it? The amusement was that the confident person often could not answer the questions.

What about something really simple? Researchers (see citation below) used the Apple logo. They found that people were confident but could not remember the logo well enough to draw it accurately. It is seen so often and is not a thing that has to be distinguished from similar images, so we remember the general gist of it but not the details. Myself, I thought I had drawn it correctly, but no, my leaf touched the apple, and my bite was on the wrong side. There are three things here. Do we remember something well enough to recognize it, to reproduce its details, and to have confidence in the memory of it? Most people are very confident, moderately good at recognizing and hopeless with the details. However, we can remember detail if we need to (that seems to me an efficient strategy).

The researchers also make an interesting observation. “However, in naturalistic settings there is probably no intent to encode the details of the Apple logo, leading to an interesting dissociation: Increased exposure increases familiarity and confidence, but does not reliably affect memory. Despite frequent exposure to a simple and visually pleasing logo, attention and memory are not always tuned to remembering what we may think is memorable.” The colours of the Google logo are also ubiquitous and yet not often accurately remembered.

Here is the abstract: “People are regularly bombarded with logos in an attempt to improve brand recognition, and logos are often designed with the central purpose of memorability. The ubiquitous Apple logo is a simple design and is often referred to as one of the most recognizable logos in the world. The present study examined recall and recognition for this simple and pervasive logo and to what degree metamemory (confidence judgements) match memory performance. Participants showed surprisingly poor memory for the details of the logo as measured through recall (drawings) and forced-choice recognition. Only 1 participant out of 85 correctly recalled the Apple logo, and fewer than half of all participants correctly identified the logo. Importantly, participants indicated higher levels of confidence for both recall and recognition, and this overconfidence was reduced if participants made the judgements after, rather than before, drawing the logo. The general findings did not differ between Apple and PC users. The results provide novel support for theories of attentional saturation, inattentional amnesia, and reconstructive memory; additionally they show how an availability heuristic can lead to overconfidence in memory for logos. ”

Blake, A., Nazarian, M., & Castel, A. (2015). The Apple of the mind’s eye: Everyday attention, metamemory, and reconstructive memory for the Apple logo. The Quarterly Journal of Experimental Psychology, 1-8. DOI: 10.1080/17470218.2014.1002798

A radical suggestion

How much and what type of our thinking is consciously done? The naïve feeling is that all our thinking is conscious. But we know better and tend to believe that a good deal of our thoughts are created unconsciously. I want to put forward the notion that none of our thoughts are the product of consciousness. Please set aside your disbelief for a short while in order to understand this idea and then you can resume your critical faculties and judge it.

Consciousness is about memory not thought. We cannot remember experiences unless we consciously experienced them. We can only know that we have been unconscious by noticing a discontinuity in our memory. We are probably only forced to have conscious experience of items that have been held in working memory – this has been called type 2 cognition, which always forms a conscious experience and uses working memory. That does not necessarily mean that type 2 cognition is a product of the mechanism of consciousness.

Memory of experiences has functions. Why would we remember an event? That we might find such information useful in the future is about the only answer. For example, if we know there is a nasty dog in a particular yard, we may want to notice whether the gate is closed before we pass by. The various places we have experienced and mapped in memory have a lot of information associated with them. That is useful to have ‘on tap’ when we find ourselves in a particular place. ‘Where’ is an important element of the memory of an event. Also ‘when’, ‘who’ and ‘what’ are elements of most events. This information is available from the mechanisms of perception, recognition, navigation etc. We know that the processes that create these elements are not conscious, but the end product is. We also want other pieces of information to form an event to remember and use in recall. We want to know the event’s place in chains of cause and effect, whether it was an important event, what our emotional involvement was, and whether it was a surprise or predicted. A very important element has to do with agency. We want to know whether we had any part in causing the event, and if we did, whether it was deliberate or accidental, and whether the outcome was favourable or not. We assume that much of this volition information is created by conscious rather than unconscious mechanisms, but experiments put that in doubt. And quite honestly, there is no way that we could tell the difference.

Consciousness only needs to contain what is worth remembering but not all may be remembered. We can think of consciousness as the leading edge of memory containing all the information needed for the stable memory. However, we really do need to tell the difference between the ‘now’ and the stored memory of the past. And, although a fairly full description of ‘now’ may be delivered to short-term memory, much of it may be discarded before it reaches a more stable form. Memories are sketchy compared to conscious experience. The conscious stage of memory also has access to the current state of much of the brain. Low-level vision, hearing, feeling etc. can be used by the conscious model of ‘now’ to give it vivid realism – this would not be as easy for older memories.

Of course, these episodic memories are not our only memories and there are memories that are not produced from consciousness. Consciousness may have other functions than memory. All that I am trying to show here is that it is possible that consciousness is not involved in cognition. It may record some aspects if they will be important to remember for the future, but consciousness is not a cognition or thought engine in the brain. It is the engine to assemble experiences to be remembered as experiences.

Resume critical faculties…

Stone faced

The extent to which emotions are shown and felt in the body as well as in consciousness is being uncovered. Facial expressions are an example, but so are posture and bodily feelings. A recent paper looks at the effect of an immobilized face on remembering and recalling emotional words. This adds to previous experiments on the initial recognition of emotional words. This face-emotion tie is a case of embodiment. By and large we automatically show our emotions on our faces and we read others’ emotions from their faces. Further, if we force our face into the expression of a particular emotion, we feel that emotion. It is a two-way street as far as communicating and displaying emotion goes. What about processing emotion? Can the response to emotional words be affected by the face? Yes.

Here is the abstract for the paper (Baumeister, Rumiati, Foroni; When the mask falls: The role of facial motor resonance in memory for emotional language; Acta Psychologica Vol 155, Feb 2015; doi:10.1016/j.actpsy.2014.11.012): “The recognition and interpretation of emotional information (e.g., about happiness) has been shown to elicit, amongst other bodily reactions, spontaneous facial expressions occurring in accordance to the relevant emotion (e.g. a smile). Theories of embodied cognition act on the assumption that such embodied simulations are not only an accessorial, but a crucial factor in the processing of emotional information. While several studies have confirmed the importance of facial motor resonance during the initial recognition of emotional information, its role at later stages of processing, such as during memory for emotional content, remains unexplored. The present study bridges this gap by exploring the impact of facial motor resonance on the retrieval of emotional stimuli. In a novel approach, the specific effects of embodied simulations were investigated at different stages of emotional memory processing (during encoding and/or retrieval). Eighty participants underwent a memory task involving emotional and neutral words consisting of an encoding and retrieval phase. Depending on the experimental condition, facial muscles were blocked by a hardening facial mask either during encoding, during retrieval, during both encoding and retrieval, or were left free to resonate (control). The results demonstrate that not only initial recognition but also memory of emotional items benefits from embodied simulations occurring during their encoding and retrieval.”

Processing into memory and retrieval from memory were inhibited for emotional words but not for neutral words when movement of facial muscles was blocked. “Benefits from embodied simulations” is one way to look at it. But it implies that emotion is not an activity of the whole body but just of the brain, with the body giving some assistance (although I suspect the authors feel the assistance is very important). Over the spectrum of emotions we have the involvement, to varying degrees, of the body: gut feelings, heart rate, breathing rate, flushing/blushing, goose bumps, skin temperature, hair movements, and pupil size, as well as the skeletal muscles. This is not a little simulation add-on. We often feel the fright a fraction sooner than we recognize the danger. It sometimes takes a long time to figure out what exactly made us feel angry. And in a social animal the communication of emotion is important to peace and cooperation. We communicate automatically with face, voice, posture, and actions. It takes great skill and concentration to hide “tells”.

I think we should view emotions as integrated reactions of our whole body (the whole nervous system, not just our brain/mind) to our environment.

 

Imagination and reality

ScienceDaily has an item (here) on a paper (D. Dentico, B.L. Cheung, J. Chang, J. Guokas, M. Boly, G. Tononi, B. Van Veen. Reversal of cortical information flow during visual imagery as compared to visual perception. NeuroImage, 2014; 100: 237) looking at EEG dynamics during thought.

The researchers examined electrical activity as subjects alternated between imagining scenes and watching video clips.

Areas of the brain are connected for various functions and these interactions change during processing. The changes to network interactions appear as movement of activity across the cortex. The research groups are trying to develop tools to study these changing networks: Tononi to study sleep and dreaming and Van Veen to study short-term memory.

The activity seems very directional. “During imagination, the researchers found an increase in the flow of information from the parietal lobe of the brain to the occipital lobe — from a higher-order region that combines inputs from several of the senses out to a lower-order region. In contrast, visual information taken in by the eyes tends to flow from the occipital lobe — which makes up much of the brain’s visual cortex — “up” to the parietal lobe… To zero in on a set of target circuits, the researchers asked their subjects to watch short video clips before trying to replay the action from memory in their heads. Others were asked to imagine traveling on a magic bicycle — focusing on the details of shapes, colors and textures — before watching a short video of silent nature scenes.”

The study served to verify their equipment, methods and calculations: could they discriminate the ‘flow’ in the two situations, imagining and perceiving? It appears they could.

The actual directions of flow are not surprising. In perception, information starts in the primary sensory areas at the back of the brain. The information becomes more integrated as it moves forward to become objects in space, concepts and even word descriptions. On the other hand during imagining the starting points are objects, concepts and words. They must be rendered in sensory terms and so processing would be directed back towards the primary sensory areas. In both cases the end point would be a connection between sensory qualia and their high level interpretation. In perception the movement is from the qualia to the interpretation and in imagining it would be from the interpretation to the qualia.
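
A rough sketch of how such directionality can be tested, in the spirit of Granger causality (toy signals, a single lag, ordinary least squares; the real paper used far more sophisticated EEG source modelling): does the past of the occipital signal improve prediction of the parietal signal beyond the parietal signal’s own past?

```python
import numpy as np

# Minimal Granger-style sketch with toy signals: during "perception"
# the parietal signal is driven by the occipital signal's past, so
# knowing occipital history should improve prediction of parietal.

rng = np.random.default_rng(1)
n = 2000
occipital = rng.normal(size=n)
parietal = np.zeros(n)
for t in range(1, n):
    # parietal activity follows occipital activity with a one-step lag
    parietal[t] = 0.6 * occipital[t - 1] + 0.2 * parietal[t - 1] + rng.normal()

def prediction_error(target, *predictors, lag=1):
    """Residual variance of predicting target[t] from predictors[t-lag]."""
    X = np.column_stack([p[:-lag] for p in predictors])
    y = target[lag:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ coef)

err_self = prediction_error(parietal, parietal)
err_both = prediction_error(parietal, parietal, occipital)
print("occipital -> parietal influence:", err_self / err_both)   # > 1 means flow in that direction
```

During imagery, the same comparison run in the other direction would be expected to come out larger instead.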

 

Seeing clearly

Why do we not notice the limitations of our eyes or any time lag in perception? A recent paper by A. Herwig, reported in ScienceDaily (here), looks at the mechanics of vision.

Only one portion of the retina, the fovea, has detailed vision. If we hold an arm out, an area about the size of a thumbnail is seen clearly by the fovea. The rest of vision is not sharp. And yet we seem to have clear vision of a much larger area.

This paper puts forward a model in which memory stores pairs of blurred and detailed images. When there is a blurred object in the visual field (but not in the fovea), it is replaced in the visual system by a detailed image of an object that fits the blurred image coming from the eyes. This is done so quickly that a person never notices the blurred object. These pairings of blurred and detailed objects are continually being updated.
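
Here is a toy Python sketch of that pairing idea (my own illustration, with 1D ‘images’, a crude moving-average blur and a nearest-match lookup standing in for whatever the real mechanism is): memory holds blurred/detailed pairs, and a peripheral glimpse is answered with the stored detailed image whose blurred partner matches best.

```python
import numpy as np

# Toy sketch of blurred/detailed pairings: memory stores pairs of
# (blurred peripheral signature, detailed foveal image), and a blurry
# peripheral glimpse is replaced by the best-matching detailed image.

rng = np.random.default_rng(2)

def blur(image, strength=5):
    """Crude moving-average blur standing in for low peripheral acuity."""
    kernel = np.ones(strength) / strength
    return np.convolve(image, kernel, mode="same")

# learned pairs: blurred signature paired with its detailed foveal image
detailed_memory = [rng.normal(size=64) for _ in range(10)]
pairs = [(blur(img), img) for img in detailed_memory]

def predict_foveal(peripheral_glimpse):
    """Return the stored detailed image whose blurred partner best
    matches what the periphery is currently reporting."""
    errors = [np.sum((blurred - peripheral_glimpse) ** 2) for blurred, _ in pairs]
    return pairs[int(np.argmin(errors))][1]

# a new peripheral glimpse of a known object, plus noise
glimpse = blur(detailed_memory[3]) + 0.1 * rng.normal(size=64)
prediction = predict_foveal(glimpse)
print(np.allclose(prediction, detailed_memory[3]))   # True: the stored detailed pairing wins
```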

The researchers used a very fast camera to follow a subject’s eye movements. During the extremely fast movements, saccades, from one fixed position to another, they changed the object that would be viewed. The subjects did not see the new object but rather the detailed pairing with the old blurred object.

“The experiments show that our perception depends in large measure on stored visual experiences in our memory. … These experiences serve to predict the effect of future actions (‘What would the world look like after a further eye movement?’). In other words: we do not see the actual world, but our predictions.”

This gives us a clear visual picture that appears correct and immediate.

Here is the abstract (A. Herwig, W. Schneider; Predicting object features across saccades: Evidence from object recognition and visual search; Journal of Experimental Psychology: General (2014) 143(5)):

When we move our eyes, we process objects in the visual field with different spatial resolution due to the nonhomogeneity of our visual system. In particular, peripheral objects are only coarsely represented, whereas they are represented with high acuity when foveated. To keep track of visual features of objects across eye movements, these changes in spatial resolution have to be taken into account. Here, we develop and test a new framework proposing a visual feature prediction mechanism based on past experience to deal with changes in spatial resolution accompanying saccadic eye movements. In 3 experiments, we first exposed participants to an altered visual stimulation where, unnoticed by participants, 1 object systematically changed visual features during saccades. Experiments 1 and 2 then demonstrate that feature prediction during peripheral object recognition is biased toward previously associated postsaccadic foveal input and that this effect is particularly associated with making saccades. Moreover, Experiment 3 shows that during visual search, feature prediction is biased toward previously associated presaccadic peripheral input. Together, these findings demonstrate that the visual system uses past experience to predict how peripheral objects will look in the fovea, and what foveal search templates should look like in the periphery. As such, they support our framework based on ideomotor theory and shed new light on the mystery of why we are most of the time unaware of acuity limitations in the periphery and of our ability to locate relevant objects in the periphery.

 

Remembering visual images

There is an interesting recent paper (see citation) on visual memory. The researchers’ intent is to map the areas, and the causal directions between them, for a particular process in healthy individuals, so that sufferers showing loss of that process can be studied in the same way and the faulty areas/connections identified. In this study they were looking at the encoding of vision for memory.

40 healthy subjects were examined. “… participants were presented with stimuli that represented a balanced mixture of indoor (50%) and outdoor (50%) scenes that included both images of inanimate objects as well as pictures of people and faces with neutral expressions. Attention to the task was monitored by asking participants to indicate whether the scene was indoor or outdoor using a button box held in the right hand. Participants were also instructed to memorize all scenes for later memory testing. During the control condition, participants viewed pairs of scrambled images and were asked to indicate using the same button box whether both images in each pair were the same or not (50% of pairs contained the same images). Use of the control condition allowed for subtraction of visuo-perceptual, decision-making, and motor aspects of the task, with a goal of improved isolation of the memory encoding aspect of the active condition.” All the subjects performed well on both tasks and on later recognition of the scene they were asked to remember. “Thirty-two ICA components were identified. Of these, 10 were determined to be task-related (i.e., not representing noise or components related to the control condition) and were included in further analyses and model generation. Each retained component was attributed to a particular network based on previously published data.” Granger causality analysis was carried out on each pair of the 10 components.
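
For readers unfamiliar with the method, here is a schematic sketch of the two analysis steps on toy data (using scikit-learn’s FastICA; the sources, mixing and dimensions are invented, and this is nothing like the authors’ SPM8/GIFT/ICASSO pipeline): decompose many voxel time courses into a few independent component time courses, which could then be fed pairwise into a directed-influence test like the Granger-style sketch shown earlier.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Schematic sketch on toy data: recover a few "network" time courses
# from many mixed voxel time courses with ICA. Directed influence
# between the recovered components could then be assessed pairwise.

rng = np.random.default_rng(3)
n_timepoints, n_voxels = 300, 50

# two hidden "networks" mixed into many voxels, plus noise
sources = np.column_stack([
    np.sin(np.linspace(0, 30, n_timepoints)),           # e.g. a visual network
    np.sign(np.sin(np.linspace(0, 12, n_timepoints))),   # e.g. an attention network
])
mixing = rng.normal(size=(2, n_voxels))
data = sources @ mixing + 0.2 * rng.normal(size=(n_timepoints, n_voxels))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(data)       # (time, component) time courses
print(components.shape)                     # each column is one recovered network's activity
```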

Here is the resulting picture:

The authors give a description of the many functions that have been attributed to their 10 areas (independent components), which is interesting reading but not very significant, because the areas are on the large side and because it is reasonable to argue from a specific function to an active area but not from an active area to a specific function. The information does have a bearing on some theories and models. The fact that this work does not itself produce a model does not make it less useful in studying abnormal visual memory encoding.

The involvement of the ‘what’ visual stream rather than the stream used for motor actions is expected, as is the involvement of working memory. There is clearly a major role for attention in this process. The involvement of language/concepts is interesting. “Episodic memory is defined as the ability to consciously recall dated information and spatiotemporal relations from previous experiences, while semantic memory consists of stored information about features and attributes that define concepts. The visual encoding of a scene in order to remember and recognize it later (i.e., visual memory encoding) engages both episodic and semantic memory, and an efficient retrieval system is needed for later recall.” The data are likely to be useful in evaluating theoretical ideas. The authors mention support for the hemispheric encoding/retrieval asymmetry model.

The abstract:

Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19–59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA) with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33±5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by a specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.”

Nenert, R., Allendorfer, J., & Szaflarski, J. (2014). A Model for Visual Memory Encoding. PLoS ONE, 9(10). DOI: 10.1371/journal.pone.0107761