Category Archives: methods

All pain is not the same

A popular illustration of embodied cognition is the notion that physical pain and social pain share the same neural mechanism. The researchers who first published this relationship have now published a new paper finding that the two types of pain do not overlap in the brain but are close neighbours, close enough to have appeared together on the original fMRI scans. The patterns of activity, however, are different. The data have not changed, but a new method of analyzing them has produced a much clearer picture.

Neuroskeptic has a good blog post on this paper and observes: “Woo et al. have shown commendable scientific integrity in being willing to change their minds and update their theory based on new evidence. That sets an excellent example for researchers.” Have a look at the Neuroskeptic post (here).

It would probably be wise for other groups to re-examine, using multivariate analysis, similar data they have previously published.
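To make the idea concrete, here is a minimal sketch of what such a multivariate re-analysis looks like. It is not the Woo et al. pipeline; it uses random placeholder data and assumes scikit-learn, with a linear classifier trained on voxel patterns and tested on held-out subjects, in the spirit of the “out-of-sample individuals” mentioned in the abstract below.

```python
# Minimal MVPA sketch: classify conditions from voxel patterns and test on
# subjects the classifier has never seen. Data are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)

n_subjects, n_trials, n_voxels = 20, 10, 500
X = rng.normal(size=(n_subjects * n_trials, n_voxels))  # voxel pattern per trial
y = rng.integers(0, 2, size=n_subjects * n_trials)      # 0 = control, 1 = pain (or rejection)
groups = np.repeat(np.arange(n_subjects), n_trials)     # subject label per trial

# Leave-subjects-out cross-validation: each test fold contains only
# subjects that were excluded from training.
clf = LinearSVC(dual=False)
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print("mean out-of-sample accuracy:", scores.mean())  # ~0.5 here, since the data are noise
```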


Abstract of paper (Woo CW, Koban L, Kross E, Lindquist MA, Banich MT, Ruzic L, Andrews-Hanna JR, & Wager TD (2014). Separate neural representations for physical pain and social rejection. Nature Communications, 5 PMID: 25400102)

“Current theories suggest that physical pain and social rejection share common neural mechanisms, largely by virtue of overlapping functional magnetic resonance imaging (fMRI) activity. Here we challenge this notion by identifying distinct multivariate fMRI patterns unique to pain and rejection. Sixty participants experience painful heat and warmth and view photos of ex-partners and friends on separate trials. FMRI pattern classifiers discriminate pain and rejection from their respective control conditions in out-of-sample individuals with 92% and 80% accuracy. The rejection classifier performs at chance on pain, and vice versa. Pain- and rejection-related representations are uncorrelated within regions thought to encode pain affect (for example, dorsal anterior cingulate) and show distinct functional connectivity with other regions in a separate resting-state data set (N=91). These findings demonstrate that separate representations underlie pain and rejection despite common fMRI activity at the gross anatomical level. Rather than co-opting pain circuitry, rejection involves distinct affective representations in humans.”

 

Can fMRI be trusted?

 

The use of brain images is often criticized. A recent article by M Farah (citation below) looks at the ‘kernels of truth’ behind the critiques and at how far we can trust the images. She is concerned that legitimate worries about imaging are being confused with false ones.

The first criticism she addresses is that the BOLD signal in fMRI comes from oxygenated blood and not from brain activity. True, but she points out that scientific measurements are very often indirect. What matters is the nature of the link involved in the measurement. In this case, even though the exact nature of the link between brain activity and blood flow is not known, it has been established that they are causally related. One thing she does not make a point of is that there is not necessarily a time lag in the blood flow. The flow is controlled by astrocytes, and these glia appear (at least in the case of attention) to anticipate the need for increased blood flow. “In many ‘cognitive’ paradigms, blood flow modulation occurs in anticipation of or independent of the receipt of sensory input” – Moore & Cao (citation below).
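For readers unfamiliar with just how indirect the measure is, here is a minimal sketch, not from Farah’s article, of the standard modelling assumption: unobserved neural events are taken to produce the BOLD signal through convolution with a hemodynamic response function (HRF). The double-gamma shape used below is a common default, not a value from either paper.

```python
# Sketch of the indirect link: brief neural events are modelled as producing
# a delayed, smoothed BOLD response via convolution with a canonical HRF.
import numpy as np
from scipy.stats import gamma

tr = 1.0                                   # sampling interval in seconds
t = np.arange(0, 30, tr)                   # 30 s of HRF support
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6   # double-gamma: peak ~5 s, late undershoot
hrf /= hrf.sum()

neural = np.zeros(200)
neural[[20, 80, 140]] = 1.0                # three brief neural events

bold = np.convolve(neural, hrf)[:len(neural)]  # predicted BOLD time course
print("first neural event at scan 20; its BOLD response peaks at scan",
      int(np.argmax(bold[:60])))           # roughly 5 scans later
```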

There are complaints that the presentation of images involves fabrications of scale and colour: the colours are misleading, the differences they represent can be tiny, and the scales can be arbitrary. Farah points out that this is true across science. Graphs and illustrations are stylized and exaggerated so that readers can see and understand them more easily, and fMRI images are not particularly extreme in this respect.

A number of different criticisms have been made about the emphasis that imaging puts on localization and modular thinking. Again this is somewhat true. But only early imaging did localization for localization’s sake, looking for activity in locations that had previously been shown to be involved in a particular process in order to prove the validity of the method. Today’s imaging has gone past that. Another related gripe is that there are no psychological hypotheses that can be decisively tested by imaging. Her answer is that this is true of all psychological methods; none are decisive. Nevertheless, imaging has helped to resolve issues. There are complaints that imaging favours the production of modular hypotheses, biasing research. But the questions that science, in general, asks are those it has the tools to answer. This is not new, not true only of imaging, and not an unreasonable way to proceed.

Farah does agree with criticism of ‘wanton reverse inference’, but only when it is wanton. Although you can infer that a particular thought process is associated with a particular brain activity, you cannot turn that around: a particular brain activity does not imply a particular thought process, because an area of the brain may do more than one thing. An example I still notice is the idea that amygdala activity has to do with fear, when fear is only one of the things the amygdala processes. Farah says wanton because this criticism should not be applied to MVPA (multivoxel pattern analysis), which is a valid, special case of inference in both directions.
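A toy calculation makes the asymmetry clear. The numbers below are invented purely for illustration, but they show how Bayes’ rule deflates the reverse inference when a region, like the amygdala, is engaged by many processes besides fear.

```python
# Toy reverse-inference example: fear may reliably activate the amygdala,
# yet amygdala activity is weak evidence for fear. All numbers are invented.
p_amygdala_given_fear = 0.9      # fear usually engages the amygdala
p_fear = 0.1                     # fear is present in only 10% of task states
p_amygdala_given_not_fear = 0.4  # many non-fear processes also engage it

p_amygdala = (p_amygdala_given_fear * p_fear
              + p_amygdala_given_not_fear * (1 - p_fear))

# Bayes' rule: P(fear | amygdala) = P(amygdala | fear) * P(fear) / P(amygdala)
p_fear_given_amygdala = p_amygdala_given_fear * p_fear / p_amygdala
print(round(p_fear_given_amygdala, 2))   # 0.2: far from a safe inference
```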

The statistics of imaging are another area where suspicion is raised. Some are simply concerned that statistics is a filter and not ‘the reality’. But the use of statistics in science is widespread; it is a very useful tool; statistics do not mask reality but approximate it better than raw data does. There are two types of statistical analysis that Farah does feel are faulty. They are often referred to as dead salmon activity (multiple comparisons) and voodoo correlations (circularity). These two faulty statistical methods can also be found in the large, complex data sets of other sciences: psychometrics, epidemiology, genetics, and finance.

“When significance testing is carried out with brain imaging data, the following problem arises: if we test all 50,000 voxels separately, then by chance alone, 2,500 would be expected to cross the threshold of significance at the p<0.05 level, and even if we were to use the more conservative p<0.001 level, we would expect 50 to cross the threshold by chance alone. This is known as the problem of multiple comparisons, and there is no simple solution to it…Statisticians have developed solutions to the problem of multiple comparisons. These include limiting the so-called family-wise error rate and false discovery rate.”
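A quick simulation, with random numbers standing in for a brain in which nothing is happening, reproduces the arithmetic in the quote and shows what the corrections buy:

```python
# Multiple-comparisons demo: the null is true at every voxel, yet thousands
# of voxels cross p < 0.05 by chance. Family-wise (Bonferroni) and FDR
# (Benjamini-Hochberg) corrections rein this in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels, n_subjects = 50_000, 20

data = rng.normal(size=(n_voxels, n_subjects))   # pure-noise contrast values
t, p = stats.ttest_1samp(data, 0.0, axis=1)

print("uncorrected p < 0.05:", np.sum(p < 0.05))       # expect roughly 2,500
print("uncorrected p < 0.001:", np.sum(p < 0.001))     # expect roughly 50
print("Bonferroni p < 0.05/50,000:", np.sum(p < 0.05 / n_voxels))

# Benjamini-Hochberg: reject the k smallest p-values where p_(k) <= k/m * alpha
order = np.argsort(p)
thresh = 0.05 * np.arange(1, n_voxels + 1) / n_voxels
passed = p[order] <= thresh
n_fdr = passed.nonzero()[0].max() + 1 if passed.any() else 0
print("FDR-controlled discoveries:", n_fdr)            # usually 0 for pure noise
```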

“Some researchers first identified the voxels most activated by their experimental task and then—with the same data set—carried out analyses only on those voxels to estimate the strength of the effect. Just as differences due to chance alone inflate the uncorrected significance levels in the dead fish experiment, differences due to chance alone contribute to the choice of voxels selected for the second analysis step. The result is that the second round of analyses is performed on data that have been “enriched” by the addition of chance effects that are consistent with the hypothesis being tested. In their survey of the social neuroscience literature, Vul and colleagues found many articles reporting significant and sizeable correlations with proper analyses, but they also found a large number of articles with circular methods that inflated the correlation values and accompanying significance levels”
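The circularity is easy to demonstrate with pure noise: pick the voxels that correlate best with behaviour, and the “effect” in those voxels looks large until it is re-measured in independent data. The sketch below uses made-up numbers of subjects and voxels, not any real study.

```python
# Circular ("voodoo correlation") demo: selection and estimation on the same
# noise data inflate the estimate; independent data reveal no effect.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_voxels = 20, 5_000

behaviour = rng.normal(size=n_subjects)            # e.g. a distress score
voxels = rng.normal(size=(n_voxels, n_subjects))   # pure-noise voxel responses

# Correlation of each voxel with behaviour
r = np.array([np.corrcoef(v, behaviour)[0, 1] for v in voxels])
best = np.argsort(np.abs(r))[-10:]                 # the 10 "best" voxels

# Circular estimate: same data used for selection and estimation
print("circular mean |r|:", np.abs(r[best]).mean())        # large despite no effect

# Honest estimate: the selected voxels re-measured in independent data
voxels2 = rng.normal(size=(n_voxels, n_subjects))
behaviour2 = rng.normal(size=n_subjects)
r2 = np.array([np.corrcoef(voxels2[i], behaviour2)[0, 1] for i in best])
print("independent mean |r|:", np.abs(r2).mean())          # near zero
```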

Finally she tackles the question of influence. The complaint is that images are too convincing, especially to the general public. This may be true in some cases, but attempts to replicate many of the undue-influence studies have not shown the effect. It may be the notion of science rather than imaging in particular that is convincing. Or it may be that people have become used to images and the coloured blobs no longer have undue impact. There is also the question of resources. Some feel that imaging studies get the money, the acceptance in important journals, the interest from the media and so on. There seems to be little actual evidence for this, and it may often be sour grapes.

Should we trust fMRI? Yes, within reason. No single paper, with images or without, can be taken as True with that capital T, but provided the stats and the inferences are sound, images are as trustworthy as other methods.

Farah MJ (2014). Brain images, babies, and bathwater: critiquing critiques of functional neuroimaging. The Hastings Center report, Spec No PMID: 24634081

Moore, C., & Cao, R. (2008). The Hemo-Neural Hypothesis: On The Role of Blood Flow in Information Processing Journal of Neurophysiology, 99 (5), 2035-2047 DOI: 10.1152/jn.01366.2006


Close to truth

I have been thinking about induction and deduction. I was taught that I could prove something was true with deduction but not with induction: a logical argument gives truth with a capital T. But for years I have not accepted this way of thinking. All a logical argument gives is a relationship. If the axioms are True then the conclusion is True, and if the conclusion is False then one or more of the axioms is False. But how do you get your first couple of True axioms, the ones needed for the first True conclusion? Not with logic, obviously. Axioms have historically been identified by induction: they are statements we find trustworthy because we have never found them to be suspect. It does seem a bit ironic that deduction is held to be more rigorous than induction when at the bottom of every deduction are axioms arrived at by induction. So I just assume there are no truths with a capital T.

But induction is much stronger than it is usually portrayed. Popper seemed to think that a strong case for inductive arguments could not be made, and that the best that could be done was to falsify those that could be falsified and temporarily assume that the rest were OK (but certainly not true, even without a capital). This is somewhat counter-intuitive, because we do trust inductions more when they are ‘confirmed’. Confirmation is somehow valued more than falsification, probably because we are more interested in what has a good chance of being true than in what is almost certain to be false.

Adherents of Bayesian probability make the argument that as confirmations pile up, each making it more probable that a statement is true, the statement can become so close to true that it makes no never-mind. Many believe that our minds use a Bayesian approach to understanding the world. Of course nothing statistical is going to merit an actual true with a capital T. So I again have to accept that there is no true with a capital T, even if many confirmations and no falsifications come close.
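A toy calculation shows the shape of the argument. The likelihoods are invented for illustration; the point is only that repeated confirmations push the credence toward 1 without ever reaching it.

```python
# Repeated Bayesian updating: each confirming observation raises the
# probability of the hypothesis, approaching but never reaching 1.
prior = 0.5                 # initial credence in the hypothesis
p_obs_if_true = 0.8         # chance of a confirming observation if it is true
p_obs_if_false = 0.4        # chance of the same observation if it is false

credence = prior
for n in range(1, 11):      # ten confirmations in a row
    credence = (p_obs_if_true * credence) / (
        p_obs_if_true * credence + p_obs_if_false * (1 - credence))
    print(f"after confirmation {n}: {credence:.4f}")
# close to 1 after ten confirmations, but never exactly 1: no capital-T truth
```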

But there is a deeper problem than even induction under Bayesian rules of probability. Our knowledge is not little bits and pieces that can be confirmed or found false independently; that is a simplification that can confuse. What we have is a huge web of knowledge, not independent bits. This does not lend itself to actual Bayesian calculations, but the general idea is still valid. New (and therefore suspect) ideas are confirmed or falsified by being set in that web of knowledge: they eventually fit or don’t fit. Each confirmation strengthens the web as well as the new idea; and each falsification can be interpreted as a fault in the web as well as a failure of the new idea, but it is almost always the web that stays and the new idea that is thrown away. This has been going on for a few centuries and the web is very strong. It takes an upheaval every once in a while, but it is as close to true as we have. It is in essence a product of induction, not deduction.

Occam’s razor is dull

Occam’s Razor is a very respectable rule of thumb. Basically it says that if you have to choose between two explanations that appear equally strong, choose the simpler. This may sound great and may work in some circles, but IT IS NOT A GOOD RULE IN BIOLOGY, and that includes neuroscience.

When people illustrate simple theories they often pick one of the foundational theories of science: relativity, quantum mechanics, the periodic table, plate tectonics, cell theory, evolution by natural selection, to name a few. These are very broad theories; they cover a lot of ground in their explanations. And on the surface they appear simple, because the basic idea of each can more or less be expressed in a paragraph of text and/or a few equations. But that simplicity is an illusion. The details of any of these theories are very complex; a complete textbook on any of them will be huge and dense.

Evolution has resulted in organisms becoming more and more complex and varied. They started out as fairly simple single cells without internal compartments, differing only slightly from one another. And how do they look now? There are still cells somewhat like those early cells, but there is also a multitude of multi-celled plants, animals and fungi with very complex inner workings to their cells. They form communities of various sizes with various numbers of different species. Evolution has complicated life; it is not a simplifying process, and it is not simple to describe in detail. Nothing seems straightforward in biology. Nothing seems really new and efficiently created from scratch for its purpose; everything seems to be a re-working of some other sort of thing, and most things serve more than one function.

So why does Occam’s razor seem so reasonable? We like the idea of simplicity and often equate it with perfection. Simple theories are easy to put into words, and therefore easy to communicate and understand. But none of this makes a theory more useful or more likely to represent some aspect of reality. We like our theories to fit with previous theories, and even when they are complex they appear simple because they are more familiar. In many cases Occam’s razor seems valid because it is only invoked when it is obvious to the user which theory ‘should’ be chosen and an argument can be made that the favorite is the simpler. But when it comes to it, evidence always trumps simplicity. If it didn’t, we could just dream up explanations from ‘first principles’ and not concern ourselves with anything but the beautiful simplicity of those explanations. Because we insist that good theories make accurate predictions, we cannot just look at how parsimonious a theory is. And in many people’s experience it has not been the simple theories that have made the useful predictions or stood up against the evidence.

Over the years I have grown very suspicious of simplicity. I do not see any reason why the universe should be a simple place. And one thing is for certain: the brain is not simple. We do not expect the brain to be simple. We expect it to be, as they say, ‘quirky’. We expect it to be elegant in a muddled way rather than a streamlined way. We expect it to be elegant the way the eye is. The eye appears to be built backwards, so that light has to pass through a lot of the cells feeding the optic nerve before it can reach the light-sensitive rods and cones. No engineer would do that. But those cells in front of the rods and cones form pathways so that the light reaching the sensitive cells can only come from the source and not from bounces inside the lens and eyeball. They eliminate fuzziness. And the light they obstruct is not needed anyway, as the sensitive rods can all but register single photons. It works, and that is what matters. There is no feeling of simplicity here, but no feeling of an inefficient kludge either, just a feeling of biological quirkiness. Biological quirkiness is what I expect we will find in the brain.

Accuracy in both time and space

There has been a problem with studying the human brain. It has been possible to look at where activity is happening using fMRI, but with poor temporal resolution. On the other hand, activity can be followed with good temporal resolution using MEG and EEG, but the spatial resolution is not good. Only the placement of electrodes in epileptic patients has given clear spatial and temporal resolution; however, these opportunities are not common, and the placement of the electrodes is dictated by the treatment rather than by any particular study. This has meant that much of what we know about the brain was gained from studies on animals, especially monkeys. The results in animals have been consistent with what can be seen in humans, but there is rarely detailed, specific confirmation. This may be about to change.

Researchers at MIT are using fMRI with a spatial resolution of a millimeter and MEG with a temporal resolution of a millisecond, and combining them with a method called representational similarity analysis. They had subjects look at 92 images of various things for half a second each. The subjects viewed the same series of images multiple times while being scanned with fMRI and multiple times with MEG. The researchers then computed the similarities between each image’s fMRI and MEG records for each subject. This allowed them to match the two kinds of scan and to see single events resolved in both space and time.
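For the curious, here is a minimal sketch of the idea behind representational similarity analysis. It uses random placeholder data rather than the MIT recordings: each modality yields a matrix of pairwise dissimilarities between the images, and it is these matrices, not the raw signals, that are compared across fMRI space and MEG time.

```python
# RSA sketch: build a representational dissimilarity matrix (RDM) per
# modality over the image set, then correlate the RDMs. Data are random.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_images = 92

fmri_v1 = rng.normal(size=(n_images, 300))   # voxel pattern per image in one region
meg = rng.normal(size=(n_images, 50, 120))   # sensor pattern per image, 120 time points

# fMRI RDM: pairwise distances between image-evoked patterns (condensed form)
rdm_fmri = pdist(fmri_v1, metric="correlation")

# One MEG RDM per time point, each compared with the fMRI RDM
similarity_over_time = [
    spearmanr(pdist(meg[:, :, t], metric="correlation"), rdm_fmri).correlation
    for t in range(meg.shape[2])
]
print("time point most similar to the fMRI region:",
      int(np.argmax(similarity_over_time)))
```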

One of the researchers is quoted: “We wanted to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s incredibly fast. This is a very complex process, and we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.” This flow was extremely close to the flow found in monkeys.

It appears to take 50 milliseconds after exposure to an image for the visual information to reach the first area of the visual cortex (V1); during this time the information has already passed through processing in the retina and the thalamus. The information is then processed in stages through the visual cortex and reaches the inferior temporal cortex at about 120 milliseconds. Here objects are identified and classified, all done by about 160 milliseconds.

Here is the abstract:

“A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively late. Using representational similarity analysis, we combined human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing with sources in V1 and IT. Finally, we correlated human MEG signals to single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision.”

Source:

http://www.kurzweilai.net/where-and-when-the-brain-recognizes-categorizes-an-object – review of paper: Radoslaw Martin Cichy, Dimitrios Pantazis, Aude Oliva, Resolving human object recognition in space and time, Nature Neuroscience, 2014, DOI: 10.1038/nn.3635