Category Archives: methods

Beta waves

Judith Copithorne image

Brain waves are measured for many reasons and have been linked to various brain activities, but very little is known about how they arise. Are they the result or the cause of the activities they are associated with? How exactly are they produced at a cellular or network level?

One type of wave, the beta wave (18-25 Hz), is associated with consciousness and alertness. In the motor cortex beta waves are found when muscle contractions are isometric (contractions that do not produce movement) but are absent just prior to and during movement. They increase during sensory feedback in static motor control and when movement is resisted or voluntarily suppressed. In the frontal cortex beta waves are found during attention to cognitive tasks directed at the outside world: alert attentive states, problem solving, judgment, decision making, and concentration. The more involved the cognitive activity, the faster the beta waves.

ScienceDaily reports a press release from Brown University on the work of Stephanie Jones and team, who are attempting to understand how beta waves arise. (here) Three types of study are used: MEG recordings, computer models, and implanted electrodes in animals.

The MEG recordings from the somatosensory cortex (sense of touch) and the inferior frontal cortex (higher cognition) showed a very distinct form for the beta waves, “they lasted at most a mere 150 milliseconds and had a characteristic wave shape, featuring a large, steep valley in the middle of the wave.” This wave form was recreated in a computer model of the layers of the cortex. “They found that they could closely replicate the shape of the beta waves in the model by delivering two kinds of excitatory synaptic stimulation to distinct layers in the cortical columns of cells: one that was weak and broad in duration to the lower layers, contacting spiny dendrites on the pyramidal neurons close to the cell body; and another that was stronger and briefer, lasting 50 milliseconds (i.e., one beta period), to the upper layers, contacting dendrites farther away from the cell body. The strong distal drive created the valley in the waveform that determined the beta frequency. Meanwhile they tried to model other hypotheses about how beta waves emerge, but found those unsuccessful.” The model was tested in mice and rhesus monkeys with implanted electrodes and was supported.
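As a rough illustration of that two-drive idea (not the team’s actual model – the real work used detailed biophysical simulations of cortical columns; the shapes and numbers below are invented), summing a weak, broad drive with a strong, brief drive of opposite sign produces a transient waveform with a steep central valley roughly one beta period wide:

```python
import math

def gaussian(t, center, width, amplitude):
    """A Gaussian bump standing in for a synaptic drive."""
    return amplitude * math.exp(-((t - center) ** 2) / (2 * width ** 2))

def toy_beta_event(duration_ms=150):
    """Sum a weak, broad 'proximal' drive with a strong, brief 'distal'
    drive of opposite sign; the brief drive carves the steep central
    valley whose width corresponds to one beta period (~50 ms, ~20 Hz)."""
    times = list(range(duration_ms))   # 1 ms steps
    center = duration_ms / 2
    wave = []
    for t in times:
        proximal = gaussian(t, center, 60.0, 0.4)   # weak, broad
        distal = -gaussian(t, center, 20.0, 1.0)    # strong, brief
        wave.append(proximal + distal)
    return times, wave

times, wave = toy_beta_event()
valley = min(wave)
valley_t = times[wave.index(valley)]
dip = [t for t, v in zip(times, wave) if v < 0]   # extent of the valley
dip_width_ms = dip[-1] - dip[0]
```

With these invented parameters the valley bottoms out at the 75 ms midpoint and the negative dip spans a bit over 50 ms, in the spirit of the "large, steep valley" the recordings showed.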

Where do the signals that drive the pyramidal neurons come from? The thalamus is a reasonable guess at the source. The thalamo-cortico-thalamic feedback loop makes exactly those contacts of thalamic axons within the cortical layers, and the thalamus is known to produce signals of 50 millisecond duration. All of the sensory and motor information that enters the cortex (except smell) comes through the thalamus. It regulates consciousness, alertness and sleep; it is involved in processing sensory input and in voluntary motor control; and it has a hand in language and some types of memory.

The team is continuing their study. “With a new biophysical theory of how the waves emerge, the researchers hope the field can now investigate whether beta rhythms affect or merely reflect behavior and disease. Jones’s team, in collaboration with professor of neuroscience Christopher Moore at Brown, is now testing predictions from the theory that beta may decrease sensory or motor information processing functions in the brain. New hypotheses are that the inputs that create beta may also stimulate inhibitory neurons in the top layers of the cortex, or that they may saturate the activity of the pyramidal neurons, thereby reducing their ability to process information; or that the thalamic bursts that give rise to beta occupy the thalamus to the point where it doesn’t pass information along to the cortex.”

It seems very clear that understanding of overall brain function will depend on understanding the events at a cellular/circuit level; and that those processes in the cortex will not be understood without including other regions like the thalamus in the models.

Powerful Induction

In an article in Scientific American (here) Shermer points to the ‘consilience of inductions’ or ‘convergence of evidence’. This is a principle I have held for many, many years. Observations, theories and explanations are only trustworthy when they stop being a string of a few ‘facts’ and become a tissue or fabric of a great many independent ‘facts’.

I find it hard to take purely deductive arguments seriously – they are like rope bridges across a gap. They depend on every link in the argument and more importantly on the mooring points at either end. A causeway across the same gap does not depend on any single rock – it is dependable.

There is one theory that is put forward often and, to many, is ‘proven’: that brains can be duplicated with a computer. The reasoning goes something like this: all computers are Turing machines; any program on a Turing machine can be duplicated on any other Turing machine; brains are computers and therefore Turing machines, and so can be duplicated on other computers. I see this as a very thin linear string of steps.

Step one is a somewhat circular argument, in that being a Turing machine seems to be the definition of a ‘proper’ computer, so yes, all of those computers are Turing machines. What if there are other machines that do something resembling computing but are not Turing machines? Step two is pretty solid – unless someone disproves it, which is unlikely but possible. The unlikely does happen: someone did question the obvious ‘parallel lines do not meet’ to give us non-Euclidean geometry. Step three is the problem. Is the brain a computer in the sense of a Turing machine? People have said things like, “Well, brains do compute things, so they are computers.” But no one has shown that any machine that can do some particular computation by any means whatever is therefore a Turing machine.

No one can say exactly how the brain does its thinking. But there are good reasons to question whether the brain does things step-wise using algorithms. In many ways the brain resembles an analog machine using massively parallel processing. The usual answer is that any processing method can be simulated on a digital algorithmic machine. But there is a difference between duplication and simulation. No one has shown that a Turing machine can duplicate, rather than merely simulate, any other machine. In fact, it is probable that this is not possible.

This is the sort of argument, a deductive one, that is hardly worth making. We will get somewhere with induction. It takes time: many experimental studies, methods to be developed, models created and tested, and so on. But in the end it will be believable – we can trust that understanding because it is the product of a web or fabric of independent inductions.


Memory switch

A new tool, ribosomal profiling, has been used for the first time to look at brain activity. The method identifies the proteins being made at any given moment. Ribosomes make proteins using messenger RNA copied from the DNA of genes. The method destroys all the messenger RNA that is not actually within a ribosome – in other words, all the RNA not actively being used to make protein. The protected RNA can then be used to identify the genes that were being translated into proteins at the moment the cell was broken open and the free RNA destroyed.
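A toy sketch of the logic of the method (purely illustrative – the real protocol involves nuclease digestion, deep sequencing and read alignment; the gene names, sequences and ribosome positions here are made up):

```python
def ribosome_footprints(transcripts, ribosome_positions, footprint=30):
    """Keep only the ~30-nucleotide stretches of mRNA shielded by a
    ribosome; everything else counts as 'digested'. The surviving
    fragments per gene are a snapshot of what was being translated."""
    protected = {}
    for gene, seq in transcripts.items():
        fragments = [seq[pos:pos + footprint]
                     for pos in ribosome_positions.get(gene, [])]
        if fragments:
            protected[gene] = fragments
    return protected

transcripts = {
    "geneA": "AUG" + "GCU" * 40,   # actively translated
    "geneB": "AUG" + "CCA" * 40,   # transcribed but ribosome-free
}
ribosome_positions = {"geneA": [0, 33, 66], "geneB": []}

footprints = ribosome_footprints(transcripts, ribosome_positions)
# geneA leaves three protected fragments; geneB leaves none, so only
# geneA would be scored as 'being translated' at this moment.
```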

ScienceDaily reports on a press release from the Institute for Basic Science describing the use of this technique to study memory formation. (here) The research was done in the IBS Center for RNA Research and the Department of Biological Sciences at Seoul National University. There is an on-off switch for the formation of memories that is based on changes in protein production.

When an animal experiences no stimulus in its environment, the hippocampus undergoes gene repression, which prevents the formation of new memories. Upon the introduction of a stimulus, this repressive gene regulation is turned off, allowing new memory creation. As Jun Cho puts it, “Our study illustrates the potential importance of negative gene regulation in learning and memory”.

I assume this research will appear in a journal paper and that the technique will be used in other studies of the brain. It is always good to hear of new methods being available.

Simplifying assumptions

There is an old joke about a group of horse bettors putting out a tender to scientists for a plan to predict the results of races. A group of biologists submitted a plan to breed a horse that would always win; it would take decades and cost billions. A group of statisticians submitted a plan for a computer program to predict races; it would cost millions and would only predict a little better than chance. But a group of physicists said they could do it for a few thousand and have the program finished in just a few weeks. The bettors wanted to know how they could be so quick and cheap. “Well, we have equations for how the race variables interact. It’s a complex equation but we have made simplifying assumptions. First we said let each horse be a perfect rolling sphere. Then…”

For over three decades, ideas about how the brain must work have appeared from studies of electronic neural nets. These studies usually make a lot of assumptions. First, they assume that the only active cells in the brain are the neurons. Second, the neurons are simple (they have inputs which can be weighted, and if the sum of the weighted inputs is over a threshold, the neuron fires its output signal) and there is only one type, or very few types. Third, the connections between the neurons are structured in very simple and often statistically driven nets. There is only so much that can be learned about the real brain from this model.
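The ‘simple neuron’ of these models can be written in a few lines – a weighted sum passed through a threshold:

```python
def simple_neuron(inputs, weights, threshold):
    """The 'simple neuron' assumed by classic neural-net studies:
    weight each input, sum, and fire (1) only if the sum clears the
    threshold; otherwise stay silent (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights the unit computes a logical AND of two inputs:
print(simple_neuron([1, 1], [0.6, 0.6], 1.0))  # 1
print(simple_neuron([1, 0], [0.6, 0.6], 1.0))  # 0
```

Everything else in such models – the network’s behaviour, its ‘learning’ – comes from wiring many copies of this one caricature together, which is exactly the simplifying assumption the joke is about.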

But on the basis of electronic neural nets and information theory – with, I believe, only a small input from the physiology of real brains – it became accepted that the brain uses ‘sparse coding’. What does this mean? At one end of a spectrum, the information held in a network depends on the state of just one neuron. This coding is sometimes referred to as grandmother cells, because one and only one neuron would code for your grandmother. At the other end of the spectrum, the information depends on the state of all the neurons: your grandmother would be coded by a particular pattern of activity across the whole population. Sparse coding uses only a few neurons and so sits near the grandmother-cell end of the spectrum.
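One crude way to place a code on that spectrum is the fraction of neurons active in a pattern (real studies use more refined sparseness measures; the populations below are invented):

```python
def activity_fraction(pattern):
    """Fraction of neurons active in one population pattern: near 1/N
    at the 'grandmother cell' end of the spectrum, 1.0 for a fully
    distributed code, and small-but-above-1/N for sparse codes."""
    active = sum(1 for rate in pattern if rate > 0)
    return active / len(pattern)

n = 100
grandmother = [1] + [0] * (n - 1)      # one and only one cell codes the item
sparse      = [1] * 5 + [0] * (n - 5)  # a handful of cells
distributed = [1] * n                  # every cell takes part in the pattern

print(activity_fraction(grandmother))  # 0.01
print(activity_fraction(sparse))       # 0.05
print(activity_fraction(distributed))  # 1.0
```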

Since the 1980s it has generally been accepted that the brain uses sparse coding, but experiments with actual brains have been suggesting otherwise. A recent paper (Anton Spanne, Henrik Jörntell. Questioning the role of sparse coding in the brain. Trends in Neurosciences, 2015; DOI: 10.1016/j.tins.2015.05.005) argues that coding may not be sparse after all.

It was assumed that the brain would use the coding system that gives the lowest total activity without losing functionality. But that is not what the brain actually does: it has higher activity than it theoretically needs. This is probably because the brain sits in a fairly active state even at rest (a sort of knife edge) from which it can quickly react to situations.

If sparse coding were to apply, it would entail a series of negative consequences for the brain. The largest and most significant consequence is that the brain would not be able to generalize, but only learn exactly what was happening on a specific occasion. Instead, we think that a large number of connections between our nerve cells are maintained in a state of readiness to be activated, enabling the brain to learn things in a reasonable time when we search for links between various phenomena in the world around us. This capacity to generalize is the most important property for learning.

Here are the paper’s highlights and abstract:


  • Sparse coding is questioned on both theoretical and experimental grounds.
  • Generalization is important to current brain models but is weak under sparse coding.
  • The beneficial properties ascribed to sparse coding can be achieved by alternative means.

Coding principles are central to understanding the organization of brain circuitry. Sparse coding offers several advantages, but a near-consensus has developed that it only has beneficial properties, and these are partially unique to sparse coding. We find that these advantages come at the cost of several trade-offs, with the lower capacity for generalization being especially problematic, and the value of sparse coding as a measure and its experimental support are both questionable. Furthermore, silent synapses and inhibitory interneurons can permit learning speed and memory capacity that was previously ascribed to sparse coding only. Combining these properties without exaggerated sparse coding improves the capacity for generalization and facilitates learning of models of a complex and high-dimensional reality.

How do morals work?

There is a way of studying morality with little scenarios: hypothetical questions are given to people, who then answer that they would do X or Y as a moral action in that situation. The scenarios have always struck me as simplistic and unbelievable, sometimes even impossible. And the answers people give do not have credibility – people are not always truthful, and besides, in the split second they would have to make a decision if the situation were really happening, with hardly any time for thought, they might do anything. The context is arbitrary and does not widen to include the society or the future beyond perhaps a day at most. This scenario method seems useless and misleading.

One of these scenarios goes like this: if you could time-travel to Austria when Hitler was a small boy, would you try to kill him? First of all, this is not a believable story, so answerers will not actually take it seriously. Secondly, we know what Hitler did, but we have no idea what would have happened if there had been no Hitler. It could have been wonderful. Or there may have been some other tyrant arriving a bit later on the scene, giving a war in the 50s rather than the 40s – after the invention of the nuclear bomb, so that instead of only two, enough bombs were dropped to wipe out civilization.

Another question has been in the news lately: if you had a person who had hidden a bomb, and that bomb was going to go off in a short time and kill many people, would you torture the person to get the location of the bomb? This again is not a believable scenario, although it is possible. This particular combination of knowing some things absolutely, positively for sure, but not knowing the location at all, is unlikely. No one has come up with a case like this having happened, and it is unlikely to happen very often – maybe once in a few hundred years. If you were in this situation you would probably act without thinking and justify your action or inaction later. Again, this scenario leaves out the future – over the course of future years you may cause many more deaths by opening a door to torture than you save by finding the bomb. It also leaves out a wider context by ignoring the fact that torture is not a very successful way to get correct information (you can do the torture and get nothing in return). In such a situation I want to think I would not torture, but I am not sure. If someone says they would torture, I am not sure that they actually would. How people answer the question is next to meaningless.

A popular question (or set of questions really) is the trolley car that is going to kill one person or five, where you can take action resulting in the death of the one or take no action and allow the five to die. Again the scenarios are not very believable, and some are not even credible. Here is one: you are on a bridge over the trolley line and you see five people tied to the track. A trolley car is coming and will hit the five if it is not stopped. Beside you is a fat man, weighing enough to stop the trolley if he is dropped on the track. Would you push him over the bridge onto the track? If you say you would, it is assumed that you are a utilitarian and decide moral questions by what gives the most total good or the least total bad. If you say you would not, it is assumed that you follow moral rules and therefore will not participate in murder. In all the different versions of this set-up there is no look at the future. What if the one who dies to save the five is about to find the cure for some fatal disease, saving thousands of lives? To a certain extent it is important that people do not think that someone may murder them for no other reason than that they were a convenient weight to save some other people. Societies need an amount of trust.

As it happens, I think I would not push the fat man but there are other trolley scenarios where I might sacrifice the 1. And again I am not sure that I know what I would do in some of the trolley questions. But – I am quite sure that I am not a utilitarian all the time or none of the time; ditto, with following rules. Sometimes I do and sometimes I don’t. I am not concerned with being consistent to a philosophical opinion of what should be labeled moral.

The reason we even have moral questions is that we are social beings and the health of our societies is important to our survival. Because we are social, there can be choices we have to make that have no absolutely right answer. We have to choose between two good things, or which of two bad things to avoid. The problems are not clear cut, nor do we have all the information needed to ‘solve’ them. We can use our intellect and find logical answers, but these may not be the best answers because they don’t take into consideration the statistics of unknown repercussions. We can follow the rules of society, but these may not be the best answers for us in certain situations. We can follow our emotional feelings, but they are also not always the best route. In the real world, our brains sort this out using cognition, learned values and emotions. This can be done quickly or more slowly depending on the time available. We end up with an action plan and a justification, should we need it, but with practically no idea of how the action plan was arrived at. We can trust, for what it’s worth, that the brain used a mechanism that has withstood the test of our ancestors’ and societies’ survival. There is no guarantee that evolution has provided us with a way to always be morally right – only that it is likely to be probabilistically better than the alternatives. Children seem to come with a rudimentary moral sense, which they improve with experience and learning from their culture – still no guarantee!

If we want to understand how the brain makes these difficult choices, we will have to use more realistic questions (whether in a scanner or on a questionnaire). Morality is unlikely to be understandable purely in terms of utility or rules, logic or emotion, or self-interest versus societal interest.

New method – BWAS

There is a report of a new method of analyzing fMRI scans – using enormous sets of data and giving very clear results. Brain-wide association analysis (BWAS for short) was used in a comparison of autistic and normal brains in a recent paper (citation below).

The scan data is divided into 47,636 small areas of the brain, called voxels, and these are then analyzed in pairs, each voxel with every other voxel. This gives 1,134,570,430 data points for each brain. This sort of analysis has been done in the past, but only for restricted areas of the brain and not the whole brain. The method was devised by J. Feng of the Department of Computer Science, University of Warwick.
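The pair count quoted above is just ‘n choose 2’ for 47,636 voxels:

```python
def voxel_pairs(n_voxels):
    """Distinct voxel pairs analyzed in a brain-wide association
    study: n * (n - 1) / 2, i.e. n choose 2."""
    return n_voxels * (n_voxels - 1) // 2

print(voxel_pairs(47636))  # 1134570430 data points per brain
```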

This first paper featuring the method shows its strengths. Cheng and others used data from over 900 existing scans from various sources that had matched autistic and normal pairs. The results are in the abstract below. (This blog does not usually deal with information on autism and similar conditions but tries to keep to normal function; I am not a physician. So the results are not being discussed, just the new method.)

“A flow chart of the brain-wide association study [termed BWAS, in line with genome-wide association studies (GWAS)] is shown in Fig. 1. This ‘discovery’ approach tests for differences between patients and controls in the connectivity of every pair of brain voxels at a whole-brain level. Unlike previous seed-based or independent components-based approaches, this method has the advantage of being fully unbiased, in that the connectivity of all brain voxels can be compared, not just selected brain regions. Additionally, we investigated clinical associations between the identified abnormal circuitry and symptom severity; and we also investigated the extent to which the analysis can reliably discriminate between patients and controls using a pattern classification approach. Further, we confirmed that our findings were robust by split data cross-validations.” FC = functional connectivity; ROI = region of interest.

The results are very clear, with very good statistical support.

Abstract: “Whole-brain voxel-based unbiased resting state functional connectivity was analysed in 418 subjects with autism and 509 matched typically developing individuals. We identified a key system in the middle temporal gyrus/superior temporal sulcus region that has reduced cortical functional connectivity (and increased with the medial thalamus), which is implicated in face expression processing involved in social behaviour. This system has reduced functional connectivity with the ventromedial prefrontal cortex, which is implicated in emotion and social communication. The middle temporal gyrus system is also implicated in theory of mind processing. We also identified in autism a second key system in the precuneus/superior parietal lobule region with reduced functional connectivity, which is implicated in spatial functions including of oneself, and of the spatial environment. It is proposed that these two types of functionality, face expression-related, and of one’s self and the environment, are important components of the computations involved in theory of mind, whether of oneself or of others, and that reduced connectivity within and between these regions may make a major contribution to the symptoms of autism.”

Cheng, W., Rolls, E., Gu, H., Zhang, J., & Feng, J. (2015). Autism: reduced connectivity between cortical areas involved in face expression, theory of mind, and the sense of self. Brain. DOI: 10.1093/brain/awv051


An unnecessary exaggeration

Science 2.0 has a posting (here) on what are called brain-to-brain interfaces, which they and others cannot resist calling telepathy; neither could the original press-release writers.

I really think this ‘telepathy’ label is unnecessary. Telepathy implies communication directly on a mental (in the dualistic sense) rather than physical level. In other words telepathy is not natural but supernatural. What is being discussed now is a very physical communication involving a number of machines. No dualistic mental stage enters into it.

No doubt this technology, when perfected, will be useful in a number of ways. But as communication between most humans for most purposes, it will not beat language. In essence it is much like language: one brain has a thought and translates it into a form that can be transmitted; it is transmitted; and the receiver translates it back into a thought. That way of communicating sounds a lot like language to me. Just because it uses the internet to carry the message and non-invasive machines to get information out of one brain and into another does not mean it is different from language in principle. Language translates thoughts into words that are broadcast by the motor system, carried by sound through the air, received by the sensory system and made into words which can be translated back into thoughts. It works well. If this new BBI stuff is telepathy, then so is language (and semaphore for that matter).

Language also has some mind-control aspects. If I yell “STOP” it is very likely that another person will freeze before they can figure out why I yelled or why it may be a good idea to stop. It is as if I reached into their brain and pulled the halt cord. If you say “dog” I am going to look at the dog or search for one if there is no obvious dog. You have reached into my brain and pushed my attention from wherever it was focused onto a dog. If someone says “2 and 2 equals”, people will think “4” just like that. Someone has reached in and set the memory recall to find what completes that equation. People can also point metaphorically to shared concepts and so on. This amounts to people influencing one another’s brains.

With writing we have even managed to have time and distance gaps between speakers and listeners.

Language has other advantages but the greatest is that almost everyone has the mechanism already in a very advanced form. We are built to learn language as children and once learned it is handy, cheap and resilient.

Connectivity is not one idea

Sebastian Seung sold the idea that “we are our connectome”. What does that mean? Connectivity is a problem for me. Of course, the brain works only because there are connections between cells and between larger parts of the brain. But how can we measure and map that connectivity? Apparently there are measurement problems.

When some research says that A is connected to B it can mean a number of things. A could be a sizable area of the brain that has a largish nerve tract to B. This means that some neurons in A have axons that extend all the way to B, and some neurons in B have synapses with each of those axons. We could be talking about smaller and smaller groups of neurons until we have a pair of connected neurons. This is anatomy – it does not tell us when and how the connections are active or what they accomplish, just that a possible path is visible.

On the other hand, A and B may share information: A and B are active at the same time in some circumstance. They are receiving the same information, either one from the other or both from some other source. Quite often this means their activity is synchronized, locked together in a rhythm. Or they may react differently but always to the same type of information. Or one may feed the other information (directly or indirectly). A and B need only be connected when they are involved in the function that gives them shared information. Here we see the informational connection but not necessarily the path.

A and B may be connected by a known causal link. A makes B active. Whenever A is active it causes B to be active too. This causal link gives no automatic information about path or even, at times, what information may be shared.

On a very small scale cells that are close together can be connected by contacts with glial cells, local voltage potentials and chemical gradients. Here the connections are even more difficult to map.

And finally overall there are control mechanisms that switch on and off various connection routes.

The whole brain is somewhat plastic and so can change its connectivity structure over time to better serve the needs of the individual. When it comes down to it, the connectivity that makes us each unique, the results of learning and memory, is the most plastic. It is changing all the time and can be very hard to map.

Saying “connectome” without any detailed specification is next to meaningless and “we are our connectome” is certainly true but somewhat vacuous.

A recent paper (citation below) took four common ways of measuring connectivity and compared them pair-wise. None of the pairs had a high level of agreement, and some pairs had hardly any. There may be many reasons for this, but a big one has to be that the various methods were not measuring the same thing. In general, authors say what they are measuring, by what method, and why. These nuances occasionally do not make it to the abstract or conclusion, often never make it to the press release, and nearly never to news articles.
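The paper’s comparison scheme – every measure against every other over the point pairs they share, summarized by a squared correlation – can be sketched like this (the measure names and values below are invented stand-ins; four measures give six pairings):

```python
from itertools import combinations

def pearson_r2(xs, ys):
    """Squared Pearson correlation between two connectivity measures
    evaluated over the same set of point pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return (cov * cov) / (vx * vy)

# Made-up values for four 'measures' over the same five point pairs:
measures = {
    "stimulation":  [0.9, 0.1, 0.5, 0.7, 0.2],
    "stim-BOLD":    [0.8, 0.2, 0.6, 0.5, 0.3],
    "resting-BOLD": [0.1, 0.9, 0.4, 0.3, 0.8],
    "diffusion":    [0.5, 0.5, 0.5, 0.4, 0.6],
}
for a, b in combinations(measures, 2):   # all 6 pairwise comparisons
    print(a, "vs", b, round(pearson_r2(measures[a], measures[b]), 2))
```

A low r² between two such lists is exactly the paper’s point: a pair of brain points can look strongly connected by one measure and barely connected by another.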

Here is the abstract and a diagram from the Jones paper.

“Measures of brain connectivity are currently subject to intense scientific and clinical interest. Multiple measures are available, each with advantages and disadvantages. Here, we study epilepsy patients with intracranial electrodes, and compare four different measures of connectivity. Perhaps the most direct measure derives from intracranial electrodes; however, this is invasive and spatial coverage is incomplete. These electrodes can be actively stimulated to trigger electrophysical responses to provide the first measure of connectivity. A second measure is the recent development of simultaneous BOLD fMRI and intracranial electrode stimulation. The resulting BOLD maps form a measure of effective connectivity. A third measure uses low frequency BOLD fluctuations measured by MRI, with functional connectivity defined as the temporal correlation coefficient between their BOLD waveforms. A fourth measure is structural, derived from diffusion MRI, with connectivity defined as an integrated diffusivity measure along a connecting pathway. This method addresses the difficult requirement to measure connectivity between any two points in the brain, reflecting the relatively arbitrary location of the surgical placement of intracranial electrodes. Using a group of eight epilepsy patients with intracranial electrodes, the connectivity from one method is compared to another method using all paired data points that are in common, yielding an overall correlation coefficient. This method is performed for all six paired-comparisons between the four methods. While these show statistically significant correlations, the magnitudes of the correlation are relatively modest (r2 between 0.20 and 0.001). In summary, there are many pairs of points in the brain that correlate well using one measure yet correlate poorly using another measure. These experimental findings present a complicated picture regarding the measure or meaning of brain connectivity.”

Jones, S., Beall, E., Najm, I., Sakaie, K., Phillips, M., Zhang, M., & Gonzalez-Martinez, J. (2014). Low Consistency of Four Brain Connectivity Measures Derived from Intracranial Electrode Measurements. Frontiers in Neurology, 5. DOI: 10.3389/fneur.2014.00272


Questioning oxytocin research

“You may have heard of oxytocin as the “moral molecule” or the “hug hormone” or the “cuddle chemical”. Unleashed by hugs, available in a handy nasal spray, and possessed with the ability to boost trust, empathy and a laundry list of virtues, it is apparently the cure to all the world’s social ills. Except it’s not.” That was written by Ed Yong in July 2012. He was neither the first nor the last to question the hype.

And yet six months later we have the io9 website with the headline, “10 Reasons Why Oxytocin Is The Most Amazing Molecule In The World”. And they are: it’s easy to get; a love potion that’s built right in; it helps Mom be Mom; reduces social fears; healing and pain relief; a diet aid; an anti-depressant; stress relief; increases generosity; it’s what makes us human. It even helps autism! But “oxytocin increases in-group trust, it produces the opposite feeling for those in the out-group — so it’s not the “perfect drug” some might proclaim it to be.”

But like the right-brained/left-brained story, it is a myth that just will not go away. The hype continues, with a number of clinics and authors making lots of money from it.

There is no doubt that oxytocin is a powerful hormone and does have some of these effects – but probably not all. Now it turns out that some, perhaps much, of the research is flawed. A new paper (citation below) looks at the tests used to measure oxytocin. The authors found much of the testing unreliable because of how samples were prepared. Christensen and others looked at previously published results and found great variation in typical concentrations of oxytocin in human plasma, including baseline levels.

There is considerable disagreement regarding typical levels for oxytocin. “we identified 47 publications … to demonstrate high variability in ‘‘normal’’ and expected oxytocin concentrations. Average concentrations within each publication ranged from 0.5 pg/mL to 3.6 ng/mL, with a mean of 169 pg/mL across all 47 studies. (note the big difference in units, picograms to nanograms) In analyzing the methods used in these publications, the largest apparent contributor to this variability, by far, was the use of pre-assay sample extraction. (to avoid components in the serum interfering in the test) Without any sort of extraction, 23 publications produced a mean concentration of 360.9 pg/mL (SD: 731.6), while extracted samples produced a mean of 10.4 pg/mL (SD: 20.4) in the remaining 24 publications.” They also cautioned against using rodent data on behaviour in a human context, as rodent levels of oxytocin can be 2000 times those in humans – so there must be some differences in its physiology. They also question how much is known about the relationship between blood oxytocin and the amount in various regions of the brain.
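To see how large that spread really is, it helps to put the reported figures into a single unit. A quick arithmetic check (my own calculation on the numbers quoted above, not an analysis from the paper):

```python
# Convert the reported range of average concentrations to one unit (pg/mL).
low_pg = 0.5              # lowest reported average, pg/mL
high_ng = 3.6             # highest reported average, ng/mL
high_pg = high_ng * 1000  # 1 ng = 1000 pg

fold_spread = high_pg / low_pg
print(f"Reported averages span {low_pg} to {high_pg} pg/mL "
      f"(a {fold_spread:.0f}-fold spread)")

# Extraction effect on the pooled means reported across publications:
unextracted_mean = 360.9  # pg/mL, pooled over 23 publications
extracted_mean = 10.4     # pg/mL, pooled over 24 publications
print(f"Unextracted/extracted mean ratio: "
      f"{unextracted_mean / extracted_mean:.1f}x")
```

A 7200-fold spread in "normal" values, and a roughly 35-fold difference attributable to sample preparation alone, is why the authors regard much of the earlier literature as unreliable.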

In their first experiment they used the two popular kits for measuring oxytocin on samples with and without extraction, and with and without 10 pg/mL of added oxytocin (to measure the percentage recovery in the test). The ELISA test had unacceptable variation without extraction, and the RIA test could not recover the added oxytocin without extraction. They used the RIA test with extraction in the second experiment, which tested the effect of oxytocin on trust in the Prisoner’s Dilemma setting, with known partners and with strangers. Using these improved methods, they could not replicate the published effects.
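The spike-recovery check works like this: add a known amount of oxytocin to a sample, measure again, and see what fraction of the addition the assay reports. A minimal sketch with hypothetical readings (the function name and the numbers are illustrative, not the paper's data):

```python
def spike_recovery_pct(measured_spiked, measured_baseline, added):
    """Percent recovery of a known spike: how much of the added
    oxytocin the assay actually reports back."""
    return (measured_spiked - measured_baseline) / added * 100.0

added = 10.0  # pg/mL spike, as in the experiment described above

# Hypothetical readings (pg/mL): a well-behaved assay reports most of
# the spike; a poorly behaved one (e.g. without extraction) does not.
good = spike_recovery_pct(18.5, 9.0, added)   # near-complete recovery
poor = spike_recovery_pct(10.2, 9.0, added)   # most of the spike "lost"
print(f"good assay: {good:.0f}% recovery, poor assay: {poor:.0f}% recovery")
```

An assay that cannot return a known spike cannot be trusted on unknown samples, which is why the authors settled on RIA with extraction for the trust experiment.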

This clearly demands re-investigation of the various effects attributed to oxytocin.

Here is the abstract: “Expanding interest in oxytocin, particularly the role of endogenous oxytocin in human social behavior, has created a pressing need for replication of results and verification of assay methods. In this study, we sought to replicate and extend previous results correlating plasma oxytocin with trust and trustworthy behavior. As a necessary first step, the two most commonly used commercial assays were compared in human plasma via the addition of a known quantity of exogenous oxytocin, with and without sample extraction. Plasma sample extraction was found to be critical in obtaining repeatable concentrations of oxytocin. In the subsequent trust experiment, twelve samples in duplicate, from each of 82 participants, were collected over approximately six hours during the performance of a Prisoner’s Dilemma task paradigm that stressed human interpersonal trust. We found no significant relationship between plasma oxytocin concentrations and trusting or trustworthy behavior. In light of these findings, previous published work that used oxytocin immunoassays without sample extraction should be reexamined and future research exploring links between endogenous human oxytocin and trust or social behavior should proceed with careful consideration of methods and appropriate biofluids for analysis.”

Christensen, J., Shiyanov, P., Estepp, J., & Schlager, J. (2014). Lack of Association between Human Plasma Oxytocin and Interpersonal Trust in a Prisoner’s Dilemma Paradigm. PLoS ONE, 9 (12). DOI: 10.1371/journal.pone.0116172



Why introspection doesn’t work

What do we have when we introspect? We have consciousness of a memory of a short stretch of the recent stream of consciousness. We are not looking directly at an instant of consciousness; we are looking at recently past consciousness, and not at an instant but at whatever is grouped into one unit of memory. The consciousness that we experience is not permanent – it is gone almost immediately, leaving only a little memory. As soon as we try to examine its details, we are looking at a memory. Unless we have a photographic memory, a lot of detail is lost in forming a memory, and the experience is ‘smudged’ over a somewhat longer period of time in the memory process. There is no reason to believe that a recalled memory is identical to the original conscious experience. We experience consciousness but we cannot actually examine it directly, only the memory of it.

Well, the memory of recent conscious experience might be useful. Suppose it is very close to the conscious experience – what does that give us? Conscious experience is not what it seems. It seems as if consciousness is looking directly at the input of sensory information, but this is not so. Its formation is entirely opaque; we cannot experience the making of conscious experience. The creation of consciousness is a purely unconscious process, and it is complex. The conscious experience is constructed from the sensory input, predictions of what that input was expected to be, and our knowledge of the world. It is many layers of processing away from the raw sensory input. Our consciousness of movement is the movement we planned and not necessarily the resulting movement. Everything is constructed, including the ‘self’ that experiences the conscious stream. Our conscious models of thoughts, decisions, values, and emotions are constructed with even less contact with the real operations of the brain than sensory/motor information. Examining this stream of consciousness with a conscious examination of it is playing in a hall of mirrors.

Consciousness does not exist to allow us to understand our brain. Why should it? Why would there be any evolutionary pressure for our brains to understand our brains? What the brain constructs is experiences and it does it in a way that makes them a useful memory library we can use and learn from. If we want to learn about our own brains there is a problem with the usefulness of introspection. It can only answer some ‘what’ questions of limited value. To understand how the brain works we really want the ‘how’ and ‘why’ questions addressed and they are precisely what memory of consciousness or even consciousness experience itself cannot give us.

We must forget about studying our brains from subjective, inside observation. We must treat our brains objectively to gain understanding of how they work. There are many people who do not accept this and insist that we can study the mind in a subjective way. Indeed, to some people the subjective mind is the only part of thought or the brain worth studying. This subjective approach seems to me a waste of time and effort: rather boring (scientifically) at best and misleading at worst. All that such study would give us is what we already have – a subjective experience of a copy of a subjective experience. It will not tell us what consciousness physically is, or how or why it is as it is.