Monthly Archives: February 2014

The importance of communication

A recent paper (see citation below) has helped to clarify the relationship between linguistic and musical communication. The researchers used a standard type of exchange between jazz players, called “trading fours”. The musicians alternate playing four-bar phrases, each relating to the previous one, so that the players in effect answer one another. This back and forth is a musical conversation.

The authors used a number of controls that were not musical conversations as contrasts to the “trading fours”: scales, a practiced melody, and improvisation without relating to another player. The resulting music was analyzed for “note density, pitch class distribution, pitch class transitions, duration distribution, duration transitions, interval distribution, interval transitions, melodic complexity, and self-organizing maps of key”. This analysis gave a numeric value to the melodic complexity and identified the conversational character of the “trading fours” sessions. The improvisation in the “trading fours” music was more melodically complex and was related in a conversational way.
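
To make those measures concrete, here is a minimal sketch in Python of how two of them could be computed from a toy note sequence. This is only an illustration under my own assumptions (the note representation and the calculations are mine), not the authors' actual pipeline.

```python
# A minimal sketch (not the authors' pipeline) of two of the measures
# named above, computed from a toy note list.
from collections import Counter

# Hypothetical representation: each note is (MIDI pitch, duration in beats).
phrase = [(60, 0.5), (62, 0.5), (64, 1.0), (67, 0.5), (65, 0.5), (64, 1.0)]

# Note density: notes per beat.
total_beats = sum(dur for _, dur in phrase)
note_density = len(phrase) / total_beats

# Pitch class distribution: how often each of the 12 pitch classes occurs.
pitch_classes = [pitch % 12 for pitch, _ in phrase]
pc_distribution = Counter(pitch_classes)

# Pitch class transitions: counts of successive pitch-class pairs, a crude
# stand-in for the transition statistics listed in the paper.
pc_transitions = Counter(zip(pitch_classes, pitch_classes[1:]))

print(note_density, pc_distribution, pc_transitions)
```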

One of the players was scanned with fMRI during the sessions. The improvised conversation involved intense activation of two of the language centers (Broca’s and Wernicke’s areas) and also their right hemisphere counterparts. The left side areas “are known to be critical for language production and comprehension as well as processing of musical syntax.” The right side match to Broca’s area is “associated with the detection of task relevant cues such as those involved in the identification of salient harmonic and rhythmic elements.” These two areas appear to perform syntactic processing for both music and speech. Wernicke’s area is involved in harmonic processing and its right homologue is “implicated in auditory short-term memory, consistent with the maintenance of the preceding musical phrases.” These results are similar to a study of linguistic conversation and are consistent with the ‘shared syntactic integration resource hypothesis’. In other words they are consistent with music and language “sharing a common neural network for syntactic operations”.

However, music and language are not semantically similar. In the ‘trading fours’ situation there is a marked deactivation of the angular gyrus, which is related to “semantic processing of auditory and visual linguistic stimuli and the production of written language and written music.” It appears that during communication, language and music resemble one another in form (syntax) but not in meaning (semantics).

This points in a particular direction. There may be no language-specific system in the brain but rather a communication-specific system. Interesting.

Here is the abstract:

Interactive generative musical performance provides a suitable model for communication because, like natural linguistic discourse, it involves an exchange of ideas that is unpredictable, collaborative, and emergent. Here we show that interactive improvisation between two musicians is characterized by activation of perisylvian language areas linked to processing of syntactic elements in music, including inferior frontal gyrus and posterior superior temporal gyrus, and deactivation of angular gyrus and supramarginal gyrus, brain structures directly implicated in semantic processing of language. These findings support the hypothesis that musical discourse engages language areas of the brain specialized for processing of syntax but in a manner that is not contingent upon semantic processing. Therefore, we argue that neural regions for syntactic processing are not domain-specific for language but instead may be domain-general for communication.

Donnay, G., Rankin, S., Lopez-Gonzalez, M., Jiradejvong, P., & Limb, C. (2014). Neural Substrates of Interactive Musical Improvisation: An fMRI Study of ‘Trading Fours’ in Jazz. PLoS ONE, 9(2). DOI: 10.1371/journal.pone.0088665

Occam’s razor is dull

Occam’s Razor is a very respectable rule of thumb. Basically it says that if you have to choose between two explanations that appear equally strong, choose the simplest. This may sound great and may work in some circles, but IT IS NOT A GOOD RULE IN BIOLOGY, and that includes neuroscience.

When people illustrate simple theories they often pick one of the foundation theories of science: relativity, quantum mechanics, the periodic table, plate tectonics, cell theory, evolution by natural selection, to name a few. These are very wide theories; they cover a lot of ground in their explanations. And on the surface they appear simple because the basic idea of each can more or less be expressed in a paragraph of text and/or a few equations. But that simplicity is an illusion. The details of any of these theories are very complex. A complete textbook on any of them will be huge and dense.

Evolution has resulted in organisms becoming more and more complex and varied. They started out as fairly simple single cells without internal compartments, differing only slightly from one another. And how do they look now? There are still cells somewhat like the early cells, but there is also a multitude of multi-celled plants, animals and fungi with very complex inner workings to their cells. They form communities of various sizes and numbers of different species. Evolution has complicated life - it is not a simplifying process, and its products are not simple to describe in detail. Nothing seems straightforward in biology. Nothing seems really new and efficiently created from scratch for its purpose. Everything seems to be a re-working of some other sort of thing. Most things serve more than one function.

So why does Occam’s razor seem so reasonable? We like the idea of simplicity and often equate it with perfection. Simple theories are easy to put into words, and therefore easy to communicate and understand. But none of this makes a theory more useful or more likely to represent some aspect of reality. We like our theories to fit with previous theories, and even if they are complex, they appear simple because they are more familiar. In many cases, Occam’s razor seems valid because it is only used when it is obvious to the user which theory ‘should’ be chosen and an argument can be made that the favorite is the simpler. But when it comes to it, evidence always trumps simplicity. If it didn’t, we could just dream up explanations from ‘first principles’ and not concern ourselves with anything but the beautiful simplicity of those explanations. Because we insist that good theories make accurate predictions, we cannot just look at how parsimonious a theory is. And in many people’s experience it has not been the simple theories that have made the useful predictions or stood up against the evidence.

Over the years I have grown very suspicious of simplicity. I do not see any reason why the universe should be a simple place. And one thing is for certain: the brain is not simple. We do not expect the brain to be simple. We expect it to be, as they say, ‘quirky’. We expect it to be elegant in a muddled way rather than a streamlined way. We expect it to be elegant the way the eye is. The eye appears to be built backwards, so that the light has to pass through a lot of cells feeding the optic nerve before it can reach the light-sensitive rods and cones. No engineer would do that. But those cells in front of the rods and cones form pathways so that the light reaching the sensitive cells can only come from the source and not from bounces inside the lens and eyeball. They eliminate fuzziness. And the light they obstruct is not required anyway, as the sensitive rods can almost register single photons. It works, and that is what matters. There is no feeling of simplicity here, but no feeling of an inefficient kludge either; just a feeling of biological quirkiness. Biological quirkiness is what I expect we will find in the brain.

Whose phrase is ‘free will’ anyway?

There is a good post by Bill Skaggs on his blog (http://weskaggs.net/?p=1452) in which he comments on the Sam Harris – Daniel Dennett debate on free will. Skaggs puts it nicely: “Both Dennett and Harris agree that the “folk” concept of free will is hopeless nonsense. Dennett has spent a substantial part of his career trying to persuade philosophers and the public to redefine free will in a more reasonable way. Harris does not think that redefining a folk concept is a viable strategy. Regardless of who is right, this is the sort of thing that they should be able to argue about without insulting each other or flaming each other.” Skaggs is not taking sides here.

It has been clear for a long time that the compatibilists do not actually believe in old-fashioned free will, but have redefined it so that they can say that they believe in “free will”. Like Harris, I find this the wrong way to go: it is simply dishonest and confusing. I also think that it just postpones some very needed readjustments. What readjustments?

  1. If we drop the use of the phrase “free will” then we can also drop the idea of “determinism”; we can stop thinking that these are opposites and mutually exclusive. We do not have a conscious will that is free from physical constraints and we also are not part of a clock-work type of causal universe where our decisions are fixed before we do the deciding. This useless argument consumes a lot of time and energy.

  2. We can re-examine responsibility and figure out when, how, why we are responsible for our actions. We should resolve the ways that we are responsible for our values, habits, morals, and unnecessary areas of willful blindness. What is our responsibility to actually put effort into doing things right?

  3. We could find out and face what consciousness is and is not. This is difficult when some of its models are mixed up with theories of free will and others are not. The understanding of decision making is also hampered by the contamination with free will. This is also a problem with other areas of neuroscience.

  4. We can produce a legal system that makes sense. The principles that hold up the current legal systems really need to be cleaned up. And there are other civic ideas that could do with a re-think. I get the feeling that many compatibilists are trying to avoid any changes to the legal system at all costs. It is as if they feel the system is so fragile that it will collapse without free will. But changes in the system can be good and need not lead to any breakdown in law-and-order. I don’t see it as so perfect a system that it needs to be protected from scientific knowledge.

Changing the definition of free will makes these tasks harder and postpones facing them. But there is more. I also think that there is something very arrogant about setting out to redefine well established words. What is wrong with coining a new word? If someone writes a paper or book redefining a word, does he really expect hundreds of millions of people to say “yes, sir, I’ll obey, sir”?

Some words are fairly easy to hive off with some technical definition. But when the word is used by a large non-technical population in the same or similar contexts, then it is not reasonable to continue using the technical term without some marking of it. For example “tolerance” has an engineering definition and a social one, but the context in which they are used is so different that there is little confusion. But “rational” has a philosophical meaning, an economic meaning and a folk meaning. The economic and folk meanings do overlap in the media and this causes confusion. It would be better to use “economically rational” or a different word entirely. When people insist on causing confusion, they should not be surprised if they get accused of wanting the confusion.

Words change meaning naturally as the knowledge, need and context of their use changes. But changing them by fiat when they are in general use takes a lot of coercive power. Nor can people really control how their statements are put to use. Someone may define their terms and use those terms to say something, but they cannot be sure that the definitions and the quote will not get separated, and the quote used to say the opposite of what was intended. Isn’t it better to avoid the confusion?

Memory stability and change

How is it that memories change and yet seem fairly stable? In a recent paper (see citation below), Bridge and Voss report studies on changes to memory. They looked at a particular memory, the location of an object, and changed the background associated with the object. How does the memory change after the change of background? It changes in two different ways, depending on the situation.

If the object remained in the same place when the background changed, the new background was added to the memory along with the old background. The memory was expanded. The original memory was probably strengthened and stabilized. On the other hand, if the object was in a different place with the new background, then the placement of the object was changed in the original memory to match the placement in front of the new background. So the memory was expanded and perhaps strengthened, but in this case changed rather than stabilized. This sort of mechanism for stability versus change would make the memory more useful (as opposed to more faithful to the original event). How would I deal with a garden, which is different every day in some small way – something grows, something dies – if I had to have a new memory every time there was a change? No. I need only a few memories of the garden occasioned by large radical changes and continuous updating of the current state of the garden in memory. Then I know where the carrots are this year, not a couple of years ago, and I know how big they are this week, not last month. This is a memory useful for day to day living.
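
A toy sketch of that update rule, under my own simplifying assumption (not the authors' model) that a memory trace can be reduced to a dominant object location plus the contexts bound to it:

```python
# Toy model of the stability-versus-change rule described above.
# Assumption (mine, not the authors'): a memory is a dominant object
# location plus a list of bound contexts.

def reactivate(memory, new_context, new_location, active):
    if active:
        # Active recall makes the updated location dominant: the memory changes.
        memory["location"] = new_location
    # In both cases the associatively novel context is bound to whichever
    # trace is dominant, expanding (and perhaps strengthening) the memory.
    memory["contexts"].append(new_context)
    return memory

garden = {"location": "bed A", "contexts": ["last spring"]}
reactivate(garden, "this summer", "bed B", active=True)
print(garden)  # location updated to 'bed B'; both contexts now bound
```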

No doubt things are not so simple and there are multiple versions, ways to fix on a particular memory and protect it, and many other mechanisms at work that complicate matters. However, here is a clear demonstration of one way that memories change.

Abstract:

Memory stability and change are considered opposite outcomes. We tested the counterintuitive notion that both depend on one process: hippocampal binding of memory features to associatively novel information, or associative novelty binding (ANB). Building on the idea that dominant memory features, or “traces,” are most susceptible to modification, we hypothesized that ANB would selectively involve dominant traces. Therefore, memory stability versus change should depend on whether the currently dominant trace is old versus updated; in either case, novel information will be bound with it, causing either maintenance (when old) or change (when updated). People in our experiment studied objects at locations within scenes (contexts). During reactivation in a new context, subjects moved studied objects to new locations either via active location recall or by passively dragging objects to predetermined locations. After active reactivation, the new object location became dominant in memory, whereas after passive reactivation, the old object location maintained dominance. In both cases, hippocampal ANB bound the currently dominant object-location memory with a context with which it was not paired previously (i.e., associatively novel). Stability occurred in the passive condition when ANB united the dominant original location trace with an associatively novel newer context. Change occurred in the active condition when ANB united the dominant updated object location with an associatively novel and older context. Hippocampal ANB of the currently dominant trace with associatively novel contextual information thus provides a single mechanism to support memory stability and change, with shifts in trace dominance during reactivation dictating the outcome.

Bridge, D.J., & Voss, J.L. (2014). Hippocampal Binding of Novel Information with Dominant Memory Traces Can Support Both Memory Stability and Change. Journal of Neuroscience, 34(6). DOI: 10.1523/JNEUROSCI.3819-13.2014

How many memory types?

What a lot of different memory types there are in the literature! This is made more confusing because so little is known about memory. So we have: sensory, working, short-term, long-term, explicit, implicit, declarative, procedural, semantic, episodic, without mentioning obscure types like flashbulb memories. It is also not always clear where authors believe things are happening: amygdala, cerebellum, various parts of the cerebral cortex, hippocampus. Divisions seem to be made on the basis of: what is being stored, how it is being stored, how long it is stable, how large the store is, where the store is located, how it is recalled, what it is used for, and whether it appears consciously.

When I think about it, there seems much less division between the types; they meld into one another. For example the phrase short-term memory sometimes seems to mean working memory and at other times seems to mean a memory that is identical to a long-term memory except that it has not gone through a final chemical ‘fixing’ operation. Sometimes implicit is used for a type of learning that is never conscious – the information gathering and its processing is not conscious and neither is its retrieval and use. At other times the process of consciously practicing some skill until it is performed unconsciously is called implicit. But in this case the memory was transformed from explicit to implicit. It seems to me that semantic memories start out as episodic ones. When I was fairly young I had an episodic memory of a particular teacher on a particular day teaching the 7 times table. That memory is long gone, as are the other memories of my early use of 7 in multiplication, but those vanished memories were the source of my semantic memory of the arithmetic facts. And although procedural memories are so often unconscious, many (but not all) can be made conscious serial routines quite easily by just mentally ‘walking’ through them. It seems reasonable to assume that there are actually very few physical types of memory and the apparent differences are due to how/why the physical memory is formed and how/why it is used. It may be that there are actually only a very few memory stores and perhaps only one mechanism of storage (strengthened synapses).

In the literature, there is a background of attempts to force biological memory into a computer model. There is mention of encoding, indexing, bits of information in memory, registers, RAM. Although the words are useful at times, they leave open the question of exactly what is really happening.

A recent paper puts an interesting notion in the mix. Beaudry et al. treat ‘focus of attention’ as a form of memory. This seemed unusual at first but then made great sense. Attention is sometimes considered on its own, or as part of working memory, or as part of consciousness. Something is connected in these three entities. What if the content being stored and accessed is physically the same but the ‘when’ and ‘why’ are different? What if there is no big difference between consciousness and memory? Just because memory is something very particular and separate in the computer, there is no reason, per se, why it needs to be separate from perception, cognition or consciousness in the brain. So I am quite grateful that this paper started me thinking along these lines.

Beaudry Abstract:
According to some current theories, the focus of attention (FOA), part of working memory, represents items in a privileged state that is more accessible than items stored in other memory systems. One line of evidence supporting the distinction between the FOA and other memory systems is the finding that items in the FOA are immune to proactive interference (when something learned earlier impairs the ability to remember something learned more recently). The FOA, then, is held to be unique: it is the only memory system that is not susceptible to proactive interference. We review the literature used to support this claim, and although there are many studies in which proactive interference was not observed, we found more studies in which it was observed. We conclude that the FOA is not immune to proactive interference: items in the FOA are susceptible to proactive interference just like items in every other memory system. And, just as in all other memory systems, it is how the items are represented and processed that plays a critical role in determining whether proactive interference will be observed.

Beaudry, O., Neath, I., Surprenant, A.M., & Tahan, G. (2014). The focus of attention is similar to other memory systems rather than uniquely different. Frontiers in Human Neuroscience, 8. DOI: 10.3389/fnhum.2014.00056

A theory of the evolution of consciousness

In a recent article (citation below) Vandekerckhove, Bulnes and Panksepp put forward a theory of how consciousness has changed with the evolution of the brain. They envisage three types or stages of awareness: anoetic (without knowledge), noetic (with knowledge), and autonoetic (with meta-knowledge). The theory has each type building on the previous, both during evolution and during childhood development. They even postulate that the process may go backwards during the deepening of dementia.

Looking at each stage:

Anoetic: awake with a flow of awareness of here and now (not past and future), of the self (core self) in the world, with phenomenal quality (qualia) of multimodal sensory-perceptual and body experiences, and affective feeling (emotional and homeostatic). It depends on subcortical neural networks, thalamic sensory relay nuclei, basal ganglia, and especially, midline mesencephalic and diencephalic attentional and affective systems.

Noetic: added to the anoetic flow is semantic memory (but not necessarily using language) and learning. This gives knowledge (implicit and explicit) of specific facts about the self and the world, including facts about the past (but not the feeling of being in the past). It depends on the basal ganglia (amygdala, nucleus accumbens etc.), the dorsal medial prefrontal cortex, and the temporal lobes.

Autonoetic: added to noetic awareness is episodic memory and the ability to deal with thoughts, images, fantasies, expectations, memories in ‘the mind’s eye’ and to be aware of one’s awareness. The memory is explicit and in context. The self is biographical. Introspection is possible. It depends on most of the cortex.

I like the idea of the progressive change in consciousness with the evolution of the brain but I have great difficulty with the divisions. It seems to me that there are no postulated mechanisms here. For example, how do we form a semantic memory? I have always pictured this as a by-product of the episodic memory. As episodes pile up illustrating a particular ‘fact’ of the world, this can become a prediction that is separately ‘indexed’ from the episodes that gave rise to it. But without episodes and without semantic-type symbols, how is a factual memory formed? It is in essence an inductive process of forming predictive rules on the basis of experience – so you need the experiences and those are events, the sort that are stored in episodic memory. So, I do not find the problem has been ‘cut at the joints’, but it is such a nice start at studying the evolution of consciousness.

The abstract:

Based on an interdisciplinary perspective, we discuss how primary-process, anoetic forms of consciousness emerge into higher forms of awareness such as knowledge-based episodic knowing and self-aware forms of higher-order consciousness like autonoetic awareness. Anoetic consciousness is defined as the rudimentary state of affective, homeostatic, and sensory-perceptual mental experiences. It can be considered as the autonomic flow of primary-process phenomenal experiences that reflects a fundamental form of first-person “self-experience,” a vastly underestimated primary form of phenomenal consciousness. We argue that this anoetic form of evolutionarily refined consciousness constitutes a critical antecedent that is foundational for all forms of knowledge acquisition via learning and memory, giving rise to a knowledge-based, or noetic, consciousness as well as higher forms of “awareness” or “knowing consciousness” that permits “time-travel” in the brain-mind. We summarize the conceptual advantages of such a multi-tiered neuroevolutionary approach to psychological issues, namely from genetically controlled primary (affective) and secondary (learning and memory), to higher tertiary (developmentally emergent) brain-mind processes, along with suggestions about how affective experiences become more cognitive and object-oriented, allowing the developmental creation of more subtle higher mental processes such as episodic memory which allows the possibility of autonoetic consciousness, namely looking forward and backward at one’s life and its possibilities within the “mind’s eye.”

Vandekerckhove, M., Bulnes, L.C., & Panksepp, J. (2014). The emergence of primary anoetic consciousness in episodic memory. Frontiers in Behavioral Neuroscience, 7. DOI: 10.3389/fnbeh.2013.00210

Unconscious vision

Milner (see citation below) reviews the evidence that visuomotor control is not conscious.

Visual perception starts at the back of the occipital lobe and moves forward in the cortex as processing proceeds. There are two tracks along which visual processing proceeds, called the dorsal stream and the ventral stream. The two streams have few interconnections. The dorsal stream runs from the primary visual cortex to the superior occipito-parietal cortex near the top of the head. The ventral stream runs from the primary visual cortex to the inferior occipito-temporal cortex at the side of the head. Their functions, as far as is known, differ. “The dorsal stream’s principal role is to provide real-time ‘bottom-up’ visual guidance of our movements online. In contrast, the ventral stream, in conjunction with top-down information from visual and semantic memory, provides perceptual representations that can serve recognition, visual thought, planning and memory offline… we have proposed that the visual products of dorsal stream processing are not available to conscious awareness—that they exist only as evanescent raw materials to provide the unconscious moment-to-moment sensory calibration of our movements.”

The studies reviewed used three methods: patients with lesions in their visual system, patients suffering from visual extinction, and fMRI experiments.

One patient had part of their ventral stream destroyed – they could reach and grasp objects that they were not conscious of. The opposite was true of other patients with damage to their dorsal streams – they had difficulty grasping objects that they were consciously aware of.

Visual extinction is a form of spatial neglect: the patient fails to detect a stimulus presented on the side of space opposite the brain damage when, and only when, there is simultaneously a stimulus on the good side. By carefully arranging an experimental setup, researchers showed that a patient with visual extinction took account of an obstacle that they were not conscious of when reaching for an object. Avoiding an obstacle depends on the dorsal stream, because patients with damage to the dorsal stream did not adjust their reaching movements in the presence of obstacles.

There is visual feedback during reaching. “Under normal viewing conditions, the brain continuously registers the visual locations of both the reaching hand and the target, incorporating these two visual elements within a single ‘loop’ that operates like a servomechanism to progressively reduce their mutual separation in space (the ‘error signal’) as the movement unfolds. When the need to use such visual feedback is increased by the occasional introduction of unnoticed perturbations in the location of the target during the course of a reach, a healthy subject will make the necessary adjustments to the parameters of his or her movement quite seamlessly… In contrast, a patient with damage to the dorsal stream was quite unable to take such target changes on board: she first had to complete the reach towards the original location, before then making a post hoc switch to the new target location… It thus seems very likely that the ability to exploit the error signal between hand and target during reaching is dependent on the integrity of the dorsal stream.”
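
The servo idea in that passage can be made concrete with a minimal sketch: a proportional controller that shrinks the hand-target error on each visual update and absorbs an unnoticed target jump mid-reach. The gain and the numbers are arbitrary assumptions of mine, not anything from the paper.

```python
# Minimal sketch of the 'servomechanism' in the quoted passage: a
# proportional controller that reduces the hand-target error signal on
# each visual update, including after an unnoticed target perturbation.

hand, target = 0.0, 10.0
gain = 0.3  # fraction of the error corrected per visual update (assumed)

for step in range(20):
    if step == 8:
        target = 12.0              # unnoticed perturbation of target location
    error = target - hand          # the 'error signal' between hand and target
    hand += gain * error           # move so as to progressively reduce the error
    print(f"step {step:2d}: hand at {hand:.2f}, error was {error:.2f}")
```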

The phenomenon of binocular rivalry, where the subject has different images projected to the two retinas and is alternately conscious of one or the other image, has been studied with fMRI. It is possible to see which image is conscious from the activity in the ventral stream. But the dorsal stream is able to act on information even if it is not being processed by the ventral stream and therefore not consciously available.

The author does point out that he is not saying that the dorsal stream plays no role in conscious perception. It may, for example, have some control over attention.

In the conclusion, they say “according to the model, such ventral-stream processing plays no causal role in the real-time visual guidance of the action, despite our strong intuitive inclination to believe otherwise (what Clark calls ‘the assumption of experienced-based control’). According to the Milner & Goodale model, that real-time guidance is provided through continuous visual monitoring by the dorsal stream of those very same visual inputs that we experience by courtesy of our ventral stream.”

Milner, A.D. (2012). Is visual processing in the dorsal stream accessible to consciousness? Proc R Soc B, 279, 2289-2298. DOI: 10.1098/rspb.2011.2663

Metaphor, Exaptation and Harnessing

We are used to the metaphor of time being related to distance, as in “back in the 1930s” or “it was a long day”. And there is a noticeable metaphor relating social relationships to distance, as in “a close friend” or “distant relatives”. But these are probably not just verbal metaphors, figures of speech, but much deeper connections. Parkinson (see citations below) has studied the neurobiology of this relationship and shows it is likely to be an exaptation, a shift in function of an existing evolutionary adaptation to a new or enlarged function. We have an old and well established brain system for dealing with space. This system has been used to also deal with time (rather than a new system being evolved), and later further co-opted to also deal with social relationships.

What spatial, temporal and social perception have in common in this system is that they are egocentric. Space is perceived as distances in every direction from here, with ourselves in the ‘here’ center. In the same way we are the center of the present ‘now’. We are also at the center of a social web with various people at a relative distance out from our center. Objects are placed in the perceptual space at various directions and distances from us. Events are placed various distances into the future or past. People are placed in the social web depending on the strength of our connection with them. It appears that, with a small amount of adaptation (or learning), almost any egocentric system could be handled by the basically spatial system of the brain.

Parkinson has looked at the regions of the brain that process spatial information to see if and how they process temporal and social information. The paper has details but essentially, “relative egocentric distance could be decoded across all distance domains (spatial, temporal, social) … in voxels in a large cluster in the right inferior parietal lobule (IPL) extending into the posterior superior temporal gyrus (STG). Cross-domain distance decoding was also possible in smaller clusters throughout the right IPL, spanning both the supramarginal (SMG) and angular (AG) gyri, as well as in one cluster in medial occipital cortex”.
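
The cross-domain decoding logic can be sketched schematically with scikit-learn: train a classifier to separate near from far in one distance domain, then test it on another. Everything below is a placeholder illustration of the idea, with random arrays standing in for real voxel patterns; it is not the paper's actual analysis code.

```python
# Schematic sketch of cross-domain distance decoding on placeholder data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 200

# Placeholder voxel patterns and near/far labels for two distance domains.
X_spatial = rng.normal(size=(n_trials, n_voxels))
y_spatial = rng.integers(0, 2, n_trials)   # 0 = near, 1 = far
X_social = rng.normal(size=(n_trials, n_voxels))
y_social = rng.integers(0, 2, n_trials)

# Train on one domain (spatial), test on another (social): above-chance
# accuracy would suggest a shared egocentric-distance code.
clf = LinearSVC().fit(X_spatial, y_spatial)
print("cross-domain accuracy:", clf.score(X_social, y_social))
```

With random data the accuracy hovers around chance, which is the point: only a genuinely shared distance code in real voxel patterns would push it above.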

“These findings provide preliminary support for speculation that IPL circuitry originally devoted to sensorimotor transformations and representing one’s body in space was “recycled” to operate analogously on increasingly abstract contents as this region expanded during evolution. Such speculations are analogous to cognitive linguists’ suggestions that we may speak about abstract relationships in physical terms (e.g., “inner circle”) because we think of them in those terms. Consistent with representations of spatial distance scaffolding those of more abstract distances, compelling behavioral evidence demonstrates that task-irrelevant spatial information has an asymmetrically large impact on temporal processing.” As well as the similarity to the linguistic theories of Lakoff and Johnson, this is also similar to Changizi’s ideas of cultural evolution harnessing the existing functionality of the brain for new uses such as writing.

Here is the abstract of the Parkinson 2014 paper:

Distance describes more than physical space: we speak of close friends and distant relatives, and of the near future and distant past. Did these ubiquitous spatial metaphors arise in language coincidentally or did they arise because they are rooted in a common neural computation? To address this question, we used statistical pattern recognition techniques to analyze human fMRI data. First, a machine learning algorithm was trained to discriminate patterns of fMRI responses based on relative egocentric distance within trials from one distance domain (e.g., photographs of objects relatively close to or far away from the viewer in spatial distance trials). Next, we tested whether the decision boundary generated from this training could distinguish brain responses according to relative egocentric distance within each of two separate distance domains (e.g., phrases referring to the immediate or more remote future within temporal distance trials; photographs of participants’ friends or acquaintances within social distance trials). This procedure was repeated using all possible combinations of distance domains for training and testing the classifier. In all cases, above-chance decoding across distance domains was possible in the right inferior parietal lobule (IPL). Furthermore, the representational similarity structure within this brain area reflected participants’ own judgments of spatial distance, temporal soon-ness, and social familiarity. Thus, the right IPL may contain a parsimonious encoding of proximity to self in spatial, temporal, and social frames of reference.

Parkinson, C., Liu, S., & Wheatley, T. (2014). A common cortical metric for spatial, temporal, and social distance. The Journal of Neuroscience, 34(5), 1979-87. PMID: 24478377

Parkinson, C., & Wheatley, T. (2013). Old cortex, new contexts: re-purposing spatial perception for social cognition. Frontiers in Human Neuroscience, 7. PMID: 24115928

Accuracy in both time and space

There has been a problem with studying the human brain. It has been possible to look at activity in terms of where it is happening using fMRI, but with poor resolution in time. On the other hand, activity can be looked at with a good deal of temporal resolution with MEG and EEG, but the spatial resolution is not good. Only the placement of electrodes in epileptic patients has given clear spatial and temporal resolution. However, these opportunities are not common and the placement of the electrodes is dictated by the treatment and not by any particular studies. This has meant that much of what we know about the brain was gained by studies on animals, especially monkeys. The results on animals have been consistent with what can be seen in humans, but there is rarely detailed specific confirmation. This may be about to change.

Researchers at MIT are using fMRI with a resolution of a millimeter and MEG with a resolution of a millisecond, and combining them with a method called representational similarity analysis. They had subjects look at 92 images of various things for half a second each. Subjects viewed the same series of images multiple times while being scanned with fMRI and multiple times with MEG. The researchers then found the similarities between each image’s fMRI and MEG records for each subject. This allowed them to match the two scans and see both the spatial and the temporal changes as single events, resolved in time and space.
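
In outline, representational similarity analysis compares the pattern of pairwise dissimilarities across the image set in each modality. Here is a bare-bones sketch of that idea with random arrays standing in for the real recordings; the sizes and metric are my assumptions, not the authors' exact pipeline.

```python
# Bare-bones sketch of representational similarity analysis: build a
# dissimilarity matrix per modality across the image set, then correlate
# the two matrices. Arrays here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images = 92

fmri_patterns = rng.normal(size=(n_images, 300))  # voxel pattern per image
meg_patterns = rng.normal(size=(n_images, 64))    # sensor pattern at one time point

# Condensed dissimilarity matrices: one entry per image pair.
rdm_fmri = pdist(fmri_patterns, metric="correlation")
rdm_meg = pdist(meg_patterns, metric="correlation")

# A high rank correlation at a given MEG time point would tie that moment's
# representation to the fMRI region the first matrix came from.
rho, _ = spearmanr(rdm_fmri, rdm_meg)
print("RDM correlation:", rho)
```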

As one of the researchers put it: “We wanted to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s incredibly fast. This is a very complex process, and we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.” This flow was extremely close to the flow found in monkeys.

It appears to take 50 milliseconds after exposure to an image for the visual information to reach the first area of the visual cortex (V1); during this time the information has passed through processing in the retina and the thalamus. The information is then processed by stages in the visual cortex and reaches the inferior temporal cortex at about 120 milliseconds. Here objects are identified and classified, all done by 160 milliseconds.

Here is the abstract:

“A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively late. Using representational similarity analysis, we combined human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing with sources in V1 and IT. Finally, we correlated human MEG signals to single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision.”

Source:

http://www.kurzweilai.net/where-and-when-the-brain-recognizes-categorizes-an-object - a review of the paper: Radoslaw Martin Cichy, Dimitrios Pantazis, & Aude Oliva (2014). Resolving human object recognition in space and time. Nature Neuroscience. DOI: 10.1038/nn.3635

Social groups are different

Have you noticed that there are people who don’t really like the elderly but really enjoy their grandmother and her friends? There are people who are on the side of the poor but cannot stand to be near any homeless person. I find that sort of thing in myself and in everyone I know. The more I think about it, the more I notice a disconnect between how we view individuals and social groups. This is probably a good thing. What would you do about someone who is great, really the sort of person you like, but has the drawback of one single belief that you just can’t take? Of course, you would continue to like your friend and dislike the people with the nasty belief. When thinking about your friend, you overlook their belief; in thinking about the group with that belief, you overlook your friend’s membership. We simply accept the disconnect.

Some recent research has shown how we do it. It has long been known that people treat inanimate and animate things differently. A person can lose the ability to recognize one category while still being able to handle the other. And other people can have the problem the other way around. This ‘double dissociation’ is assumed to show that the brain has separate ways of storing/retrieving the items in the two conceptual categories. What the new research shows is a third category that can be dissociated from these two.

The research paper is: Rumiati, R.I., Carnaghi, A., Improta, E., Diez, A.L., & Silveri, M.C. (2014). Social groups have a representation of their own: Clues from neuropsychology. Cognitive Neuroscience. DOI: 10.1080/17588928.2013.876981. Here is the abstract:

The most relevant evidence for the organization of the conceptual knowledge in the brain was first provided by the patterns of deficits in brain-damaged individuals affecting one or another semantic category. Patients with various etiologies showed a disproportionate impairment in producing and understanding names of either living (fruits, vegetables, animals) or nonliving things (tools, vehicles, clothes). These double dissociations between spared and impaired recognition of living and nonliving things led to suggest that these categories are discretely represented in the brain. Recently social groups were found to be represented independently of traditional living and nonliving categories. Here we tested 21 patients with different types of primary dementia with three word sorting tasks tapping their conceptual knowledge about living and nonliving entities and social groups. Patients double dissociated in categorizing words belonging to the three categories. These findings clarify that knowledge about social groups is distinct from other semantic categories.

More categories of this sort are not unheard of. Bantu languages have noun classes in their grammar that correspond roughly to such categories. For Swahili there are 18 classes, roughly: persons, groups of persons, plants, groups of plants, fruits, groups of fruit, things, groups of things, animals, groups of animals, abstracts, actions, and 3 to do with locations.

In 2012 researchers at MIT showed that the brain organizes objects based on size. “By looking at the arrangement of the responses, they found a systematic organization of big to small object responses across the brain’s cerebral cortex. Large objects, they learned, are processed in the parahippocampal region of the brain, an area located by the hippocampus, which is also responsible for navigating through spaces and for processing the location of different places, like the beach or a building. Small objects are handled in the inferior temporal region of the brain, near regions that are active when the brain has to manipulate tools like a hammer or a screwdriver.”

It seems a puzzle whether these divisions are linguistic or not. Are categories reflecting how we process some types of objects, or are they the categories we learn with our mother tongue? Do we naturally have a lot of categories but lose many, or have naturally a few and create many more? What is the cost/benefit of categories – are they costly to maintain and work with, or do they make thinking easier?