Clickbait PR in science

ScienceDaily reports on a recent paper (Leon Gmeindl, Yu-Chin Chiu, Michael S. Esterman, Adam S. Greenberg, Susan M. Courtney, Steven Yantis. Tracking the will to attend: Cortical activity indexes self-generated, voluntary shifts of attention. Attention, Perception, & Psychophysics, 2016) which looks at the areas in the brain involved in volition. Here is the abstract:

The neural substrates of volition have long tantalized philosophers and scientists. Over the past few decades, researchers have employed increasingly sophisticated technology to investigate this issue, but many studies have been limited considerably by their reliance on intrusive experimental procedures (e.g., abrupt instructional cues), measures of brain activity contaminated by overt behavior, or introspective self-report techniques of questionable validity. Here, we used multivoxel pattern time-course analysis of functional magnetic resonance imaging data to index voluntary, covert perceptual acts—shifts of visuospatial attention—in the absence of instructional cues, overt behavioral indices, and self-report. We found that these self-generated, voluntary attention shifts were time-locked to activity in the medial superior parietal lobule, supporting the hypothesis that this brain region is engaged in voluntary attentional reconfiguration. Self-generated attention shifts were also time-locked to activity in the basal ganglia, a novel finding that motivates further research into the role of the basal ganglia in acts of volition. Remarkably, prior to self-generated shifts of attention, we observed early and selective increases in the activation of medial frontal (dorsal anterior cingulate) and lateral prefrontal (right middle frontal gyrus) cortex—activity that likely reflects processing related to the intention or preparation to reorient attention. These findings, which extend recent evidence on freely chosen motor movements, suggest that dorsal anterior cingulate and lateral prefrontal cortices play key roles in both overt and covert acts of volition, and may constitute core components of a brain network underlying the will to attend.

I have not been able to read the original paper, but I assume that it is a careful and useful study of how intentions and decisions happen when there is no compulsion involved. It offers further evidence of the dorsal anterior cingulate and lateral prefrontal areas being involved in the preparation of voluntary action. I assume that the authors do not stoop to ‘clickbait’ in the original paper; I assume they use the sort of language that they use in the abstract. The press release put out by Johns Hopkins University is the problem. There are repeated uses of the phrase ‘free will’ and even the phrase “volition, or free will”, implying that these words are interchangeable. And ‘free will’ is even used in the title of the press release, which seems like clear clickbait to me. There is still debate on whether free will exists and, if it does, what its mechanism is. Because of this, many people would be interested in a scientific paper that deals with free will. Mentioning free will in the PR for the paper is clickbait unless the paper actually deals with the subject. Instead, the paper seems to be about how decisions are prepared and executed. The problem is that the study did not involve any measure of whether, when, or how the intention or the act was felt in the subject’s consciousness. We do not know what the subjects thought.

There are a number of definitions of free will: in religion it is the lack of predestination; in philosophy it is the lack of material determination (classic dualism); in jurisprudence it is owning the responsibility for an action (not coerced, accidental or unconscious, but done with conscious intent); in neuroscience it has come to mean a decision taken under conscious control (an action that is started, or can be stopped, by conscious intent) – very similar to the legal meaning. What the last three have in common is control of intent/execution by conscious thought. Volition is a word without any necessary connection to consciousness. Unless an experiment tracks conscious events as well as other events, it has nothing to say about free will. It can have a great deal to say about volition, decision, intention, motor control, action plans and so on, but without involving consciousness it has absolutely nothing to say about free will. As I said above, I have not been able to read the original paper, but if, as I suspect, it does not measure or time conscious feelings of intent or execution, then its PR is misleading.

Synergy and modular control

When we learned the simple overview of the nervous system in grade school, we were taught that the brain sent signals to muscles to contract and that is how we moved. And by brain, we assumed the thinking part up high in the head. But it cannot be so.

A little deer is born and in a very short time is standing, and in a little longer is taking its first wobbly steps. Within a couple of days it is running and frolicking. Deer are not that special; other animals ‘learn’ to get around very quickly too. Even human babies, if they are held upright with their feet touching a surface, will walk along that surface. In a sense, the spinal cord knows how to walk by lifting and moving forward alternate legs. It does not know how to walk well, but the basics are there. Human babies are slower at managing to get around because they are born at a less developed stage and because walking on two legs rather than four is trickier. In all sorts of observations and experiments there is evidence that the ability to walk is innate in the spinal cord and does not require the brain.

The spinal cord has some primitive control modules or muscle synergies. Muscle synergies are present in a number of natural behaviors; they are low-level control networks found in the brain stem and spinal cord that coordinate a group of muscles. They make common movements easier to order up. We have the ‘intent to go over there’ and without any more conscious thought we do it in an automatic way. Now if we had to trigger individual muscles in the right time sequence, it would likely take many hours to get not very far with a number of falls along the way. One could say that we would ‘get the hang of it’ as we did it. But that is saying we would make parts of it automatic (create modules and synergies).

This modularization of motor control is layered. The simplest control is in the spinal cord, but it is modified and adapted to conditions by the brain stem and especially the cerebellum. The cerebellum gets instructions from other parts of the brain and finally these modules within modules are able to execute the simple ‘intention to go over there’.

The synergies in a baby’s spinal cord are an ancient set shared by all mammals (probably all land vertebrates). The muscles work in a rhythm where each event triggers the next in a cycle. There are two primitives involved in human walking that we are born with. One is to bend the leg so that the foot leaves the ground and moves forward, then goes back down as the leg straightens. Two is a forward push against the ground by the straight leg. These two complexes of muscle contractions and relaxations are wired so that their action in one leg inhibits their action in the other. When the left leg does one, the right leg cannot do one but can do two. And when the left leg does two, the right cannot do two but can do one. They are also wired so that in each leg the end of one triggers the start of two and the end of two triggers the start of one. It is the same in four-legged animals, except there is another set of inhibitions between the front and hind legs. At this level it is not very adaptive and can only react to sensory information that comes through the spinal cord from the muscles, joints and skin. Babies cannot use this facility to get around because they do not have the strength to maintain the posture needed with such a large, heavy head on such a little body, and, more importantly, the spinal cord has no information from the inner ears about balance. Balance is very important for bipedal walking. The baby must create two other synergies: one to react to balance information and one to use the hips, back and arms to keep the center of gravity over the legs. In the meantime, while they lack the strength, they can crawl using the four-legged modules.
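The alternation described above can be sketched as a toy state machine. This is only an illustration of the wiring rule in the text (the end of each primitive triggers the other, and inhibition keeps the two legs out of phase), not a model of actual spinal circuitry; all the names are mine.

```python
# Toy sketch of the two walking primitives: "swing" (lift the leg, move it
# forward, set it down) and "push" (straight-leg push against the ground).
# The inhibition rule from the text means the two legs can never swing at once.

SWING, PUSH = "swing", "push"

def step(state):
    """Advance both legs one phase: the end of each primitive
    triggers the other primitive in the same leg."""
    left, right = state
    new_left = PUSH if left == SWING else SWING
    new_right = PUSH if right == SWING else SWING
    # mutual inhibition: both legs must never swing simultaneously
    assert not (new_left == SWING and new_right == SWING)
    return (new_left, new_right)

# start out of phase, as the spinal wiring enforces
state = (SWING, PUSH)
gait = [state]
for _ in range(3):
    state = step(state)
    gait.append(state)

print(gait)
# [('swing', 'push'), ('push', 'swing'), ('swing', 'push'), ('push', 'swing')]
```

Because each leg simply flips between the two primitives and the legs start out of phase, the alternating gait falls out of the wiring with no central coordinator, which is the point of the spinal-cord story above.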

The cerebellum and brain stem add the control of balance and of pace (there are relative changes to the timing of events when the whole process is sped up). They can correct for uneven ground. They can keep the direction of motion toward a target. But the coordination exercised by the lower brain is not just direct signals to muscles; it uses the synergies built into the spinal cord. And it is much more complex than the action in the spinal cord. In fact, the cerebellum has more neurons than the whole rest of the brain. It manages the modules, timing, adjustments to modules, effects from sensory input and feedback, and commands from higher levels of the brain, then packages it all for execution. Another great trick of the cerebellum is to do two things at the same time, say walk and throw a ball. Both may be deep-seated modules, but there are adjustments to be made where they interfere with one another.

The point I am making here is that although movement seems so easy for us to execute, that is because it is not arranged consciously, or even largely in the cerebral hemispheres. It is modularized so that a simple request in the cerebral cortex goes through layers of calculation and fine-tuning to become individual signals to individual muscles. It is synergy/modularization that gives us this flexible but easy-to-use system. We are surprised that it is easier to create a program to play chess in the abstract (and win) than it is to program a robot to physically move the pieces and operate the time clock in a game. When we do not understand how something is done, it appears easy. It is a common trap.


Metaphors and shapes

Judith Copithorne image

Metaphors (including analogs and similitudes) appear to be very basic to thought. They are very important to language and communication. A large share of dictionary meanings of words are actually old metaphors that have been used so much and for so long that the words have lost their figurative roots and become literal in meaning. We simply do not recognize that they were once metaphors. Much of our learning is metaphorical. We understand one complex idea by noticing its similarity to another complex idea that we already understand. For example, electricity is not easy to understand at first, but we have learned a great deal about how water flows by watching it as we have grown up. Basic electrical theory is often taught by comparing it to water. By and large, when we examine our knowledge of the world, we find it is rife with metaphors. We can trace many ways we think about things and events to ‘grounding’ in the experiences of infants. The way babies establish movement and sensory information is the foundation of enormous trees and pyramids of metaphorical understanding.

But what is a metaphor? We can think of it as a number of entities that are related in some way (in space, in time, in cause-effect, in logic, etc.) to form a structure that we can understand, and that we can think of, remember, name, use as a predictive model and treat as a single thing. This structure can be reused without being reinvented. The entities can be re-labeled and so can the relations between them. So if we know that water flowing through a pipe will be limited by a narrower length of pipe, we can envisage an electrical current in a wire being limited by a resistor. Nothing needs to be retained in a metaphor but the abstract structure. This facility for manipulating metaphors is important to thinking, learning and communicating. Is there more? Perhaps.
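The pipe/resistor example can be made concrete in a few lines: the abstract structure ("flow is limited by resistance") is defined once and the entities are re-labeled for each domain. The function name and the numbers are purely illustrative.

```python
# A metaphor as a reusable abstract structure: one relation, two labelings.
# The shared structure is "flow = drive / resistance"; only the labels
# (pressure/pipe vs. voltage/resistor) change between domains.

def flow(drive, resistance):
    """The abstract structure: more resistance means less flow."""
    return drive / resistance

# water domain: pressure pushes water through a narrow length of pipe
water_flow = flow(drive=10.0, resistance=5.0)

# electrical domain: voltage pushes current through a resistor (Ohm's law)
current = flow(drive=10.0, resistance=5.0)

# the prediction transfers: narrowing the pipe (raising resistance)
# reduces flow, in either labeling
assert flow(10.0, 10.0) < flow(10.0, 5.0)

print(water_flow, current)  # 2.0 2.0
```

Nothing about water or electricity survives in `flow` itself; only the relational structure does, which is the sense in which a metaphor can be reused without being reinvented.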

A recent paper (Rolf Inge Godøy, Minho Song, Kristian Nymoen, Mari Romarheim Haugen, Alexander Refsum Jensenius; Exploring Sound-Motion Similarity in Musical Experience; Journal of New Music Research, 2016; 1) talks about the use of a type of metaphor across the senses and movement. Here is the abstract:

People tend to perceive many and also salient similarities between musical sound and body motion in musical experience, as can be seen in countless situations of music performance or listening to music, and as has been documented by a number of studies in the past couple of decades. The so-called motor theory of perception has claimed that these similarity relationships are deeply rooted in human cognitive faculties, and that people perceive and make sense of what they hear by mentally simulating the body motion thought to be involved in the making of sound. In this paper, we survey some basic theories of sound-motion similarity in music, and in particular the motor theory perspective. We also present findings regarding sound-motion similarity in musical performance, in dance, in so-called sound-tracing (the spontaneous body motions people produce in tandem with musical sound), and in sonification, all in view of providing a broad basis for understanding sound-motion similarity in music.

The part of this paper that I found most interesting was a discussion of abstract ‘shapes’ being shared by various senses and motor actions.

A focus on shapes or objects or gestalts in perception and cognition has particularly concerned so-called morphodynamical theory … morphodynamical theory claims that human perception is a matter of consolidating ephemeral sensory streams (of sound, vision, touch, and so on) into somehow more solid entities in the mind, so that one may recall and virtually re-enact such ephemeral sensations as various kinds of shape images. A focus on shape also facilitates motion similarity judgments and typically encompasses, first of all, motion trajectories (as so-called motion capture data) at various timescales (fast to slow, including quasi-stationary postures) and amplitudes (from large to small, including relative stillness). But shapes can also capture perceptually and affectively highly significant derivatives, such as acceleration and jerk of body motion, in addition.

The authors think of sound objects as occurring in the time range of half a second to five seconds. Sonic objects have pitch and timbre envelopes, and rhythmic, melodic and harmonic patterns. In terms of dynamics, a sonic object can be impulsive, with an envelope showing an abrupt onset and then decay; sustained, with a gradual onset and longer duration; or iterative, with rapidly repeated sound, such as a tremolo or drum roll. Sonic objects can have pitch that is stable, variable or just noise. These sonic objects are related to similar motion objects – objects in the same time range that produce music or react to it, for example the motion objects in playing a piano piece or in dancing. They also have envelopes of velocity and so on. This reminds me of the similar emotions that are triggered by similar envelopes in musical sound and speech. Or the objects that fit with the nonsense words ‘bouba’ and ‘kiki’ being smooth or sharp. ‘Shape’ is a very good description of the vague but strong and real correspondences between objects from different domains. It is probably the root of being able to use adjectives across domains. For example, we can have soft light, soft velvet, a soft rustle, soft steps, a soft job, and more or less soft anything. Soft describes different things in different domains but, despite the differences, it is a metaphoric connection between domains, so that concrete objects can be made by combining a number of individual sensory/motor objects which share abstract characteristics like soft.

In several studies of cross-modal features in music, a common element seems to be the association of shape similarity with sound and motion, and we believe shape cognition can be considered a basic amodal element of human cognition, as has been suggested by the aforementioned morphodynamical theory …. But for the implementation of shape cognition, we believe that body motion is necessary, and hence we locate the basis for amodal shape cognition in so-called motor theory. Motor theory is that which can encompass most (or most relevant) modalities by rendering whatever is perceived (features of sound, textures, motion, postures, scenes and so on) as actively traced shape images.

The word ‘shape’, used to describe corresponding characteristics from different domains, is very like the word ‘structure’ in metaphors and may point to the foundation of our cognition mechanisms, including much more than just the commonplace metaphor.


Fish integrate their senses

Judith Copithorne image

Consciousness seems to have at its foundation the melding of information from all the senses into an integrated model of the world (and of ourselves in it). It would be impossible to meld a sound with a sight, for example, without a common framework of space and time. And without the different senses informing one another, they would lose much of their usefulness. Therefore, when we see the melding of sensory information into a model, we can guess that there is a good probability that some level of consciousness exists. Two recent papers on fish show this sort of hint.

The first paper (Thompson, Vanwalleghem, Heap, Scott; Functional Profiles of Visual-, Auditory-, and Water Flow-Responsive Neurons in the Zebrafish Tectum; Current Biology 2016) shows that the tectum integrates sense information in a similar way to the human superior colliculus. “In order to function efficiently, fish and humans need a unified sensory view of the external world contributed to by multiple senses”, says Ethan Scott.

Using calcium imaging in transparent zebrafish, the researchers first showed that the dynamics of visual processing replicated previous studies. When sound or water-flow stimuli were used, a small number of cells in the tectum responded, in a manner similar to the visual response but not in the same cells. The visual response was somewhat weaker when other signals were present at the same time. This is similar to processes in the mammalian superior colliculus, where information from various senses is integrated.

The second paper (Schumacher, de Perera, Thenert, von der Emde; Cross-modal object recognition and dynamic weighting of sensory inputs in a fish; Proceedings of the National Academy of Sciences 2016) showed that fish can switch between senses as do monkeys, dolphins, rats and humans.

The elephantnose fish explores objects in its surroundings by using its eyes or its electrical sense – sometimes both together. The skin contains numerous sensor organs that perceive objects in the water by means of the changed electrical field. “This is a case of active electrolocation, in principle the same as the active echolocation of bats, which use ultrasound to perceive a three-dimensional image of their environment.” Electrolocation is more useful at close range and vision is better at longer distances. The fish can, in effect, turn off one of the senses if the information from the other sense is more reliable.

Using darkness to force electrolocation and electrically transparent objects to force vision, the researchers could study the switching of the senses. They found that the fish could remember, find and recognize shapes experienced with one sense when using the other sense. They form a model of the space which could be used by either or both senses.

It seems that fish form a model of their environment that all their senses can contribute to in an integrated way.

To see others as we see ourselves

In psychology there is a theory about the ‘fundamental attribution error’, the error in how we attribute causes to actions. When we look at our own actions, they are caused by our cognition in the circumstances in which we are deciding what to do. When we look at the actions of others, they are caused by their personality or character traits. So we do not really take into consideration the circumstances of others when we judge their actions. Nor do we consider the fixed patterns of our own behavior that do not enter into our conscious thoughts when we judge our own actions. We just do what is reasonable at the time and they just do what they always do. I can be too busy to help while they can be too thoughtless. This is a problem for us but at least we can understand the problem and occasionally overcome it. (My way to deal with it is to just assume that people are intelligent and well-meaning most of the time. If they do something that seems dumb or nasty, I look at the circumstances to see if there is a reasonable explanation. There very often is. I realize that this view of my own behaviour is somewhat ironic in its internal attribution – well nothing is perfect.)

But this problem with attribution is much greater than human social interaction. We do the same thing with animals. Elephants were tested for self-recognition with the mirror test. If they recognize a black spot appearing on their forehead, then it is clear that they know it is their forehead. Elephants failed the test and so were said to not have a sense of self. It turned out that the mirrors used were too small. The elephants could not make out that it was an elephant in the mirror, let alone themselves. If we start out underestimating an animal’s intelligence, and either do not test that assumption or test it in a way that is inappropriate for the animal, then we are making a big attribution error.

There is an assumption on the part of many that vertebrate brains are quite different in the various sorts of vertebrates. This is not true! All animals with a spine have the same brain pattern with the same regions. All vertebrates have seven parts, no more and no less: accessory olfactory bulb; cerebellum; cerebral hemispheres; medulla oblongata; olfactory bulb; optic tectum; and pituitary gland. There are differences in size, details and subdivisions, but there are no missing parts. (R.G. Northcutt; Understanding Vertebrate Brain Evolution; Integr. Comp. Biol. 2002 42(4) 743-756). There is every reason to believe that the brain works in fundamentally the same way in mammals, birds, reptiles, amphibians and fish. And by and large, this same pattern of brain has the same functions – to move, find/eat food, escape enemies and so on. It is obvious that animals have motor control and sensory perception.

What evidence is there that other animals have emotions, memory, or consciousness? Can they be automatons with no mental life? The reports trickle in year after year that add to the evidence that animals have a mental life similar to ours.

Reptiles probably dream. Most animal species sleep, from invertebrates to primates. However, neuroscientists have until now only actively recorded the sleeping brains of birds and mammals. Shein-Idelson et al. now describe the electrophysiological hallmarks of sleep in reptiles. Recordings from the brains of Australian dragons revealed the typical features of slow-wave sleep and rapid eye movement (REM) sleep. These findings indicate that the brainstem circuits responsible for slow-wave and REM sleep are not only very ancient but were already involved in sleep dynamics in reptiles. (Shein-Idelson, Ondracek, Liaw, Reiter, Laurent; Slow waves, sharp waves, ripples, and REM in sleeping dragons; Science 2016 Vol 352 (6285) 590-596) These wave types in sleep are also evidence for a memory system similar to ours.

Fish don’t make noise or wave their fins to show emotion but that does not mean they don’t have emotions. “Whether fishes are sentient beings remains an unresolved and controversial question. Among characteristics thought to reflect a low level of sentience in fishes is an inability to show stress-induced hyperthermia (SIH), a transient rise in body temperature shown in response to a variety of stressors. This is a real fever response, so is often referred to as ‘emotional fever’. It has been suggested that the capacity for emotional fever evolved only in amniotes (mammals, birds and reptiles), in association with the evolution of consciousness in these groups. According to this view, lack of emotional fever in fishes reflects a lack of consciousness. We report here on a study in which six zebrafish groups with access to a temperature gradient were either left as undisturbed controls or subjected to a short period of confinement. The results were striking: compared to controls, stressed zebrafish spent significantly more time at higher temperatures, achieving an estimated rise in body temperature of about 2–4°C. Thus, zebrafish clearly have the capacity to show emotional fever. While the link between emotion and consciousness is still debated, this finding removes a key argument for lack of consciousness in fishes.” (Rey, Huntingford, Boltana, Vargas, Knowles, Mackenzie; Fish can show emotional fever: stress-induced hyperthermia in zebrafish; 2015 Proc. R. Soc. B 282: 20152266)

One of the problems with comparing the brains of different vertebrates is that their parts have been named differently. When development is followed through the embryos, many differently named regions should really share a single name. Parts of the tectum are the same as our superior colliculus and they have been found to act in the same way. They integrate sensory stimuli from various senses. They can register whether events are simultaneous. For example, in tadpoles the tectum can tell if a sight and a vibration stimulus are simultaneous. That is the same function, with the same development, in the same part of the brain, in an amphibian and a mammal. (Felch, Khakhalin, Aizenman; Multisensory integration in the developing tectum is constrained by the balance of excitation and inhibition. 2016 eLife 5)
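The simultaneity judgment can be pictured as a simple coincidence detector: two stimuli count as simultaneous if they arrive within a short time window. This is only a cartoon of the idea; the 100 ms window here is invented for illustration and is not a figure from the tadpole study.

```python
# Toy coincidence detector for multisensory integration: a sight and a
# vibration are judged "simultaneous" when their arrival times fall within
# a fixed window. The window width (100 ms) is an illustrative assumption.

def coincident(t_sight, t_vibration, window=0.100):
    """Return True when the two stimulus times (in seconds) fall
    within the coincidence window of each other."""
    return abs(t_sight - t_vibration) <= window

print(coincident(0.50, 0.55))  # 50 ms apart -> True
print(coincident(0.50, 0.80))  # 300 ms apart -> False
```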

We should be assuming that other vertebrates think like we do to a large extent – just as we should assume that other people do – and try to understand their actions without an attribution error.

A sense of rhythm

In a recent scientific press release, the opening sentence is, “A sense of rhythm is a uniquely human characteristic.” I am used to this sort of thing in opening sentences; I think to myself that they definitely have no evidence for that statement; they have not studied most animals, done a literature search or watched the videos of parrots dancing on the back of chairs. Never mind, it is an opener, just read on.

But the next paragraph starts, “What most people call the sense of rhythm — the mechanism that enables us to clap along or dance to music — is an intangible ability that is exclusive to human beings.” So it is not just the usual unexamined opener. And to top it off, the third paragraph starts with, “Human beings are the only species that recognise these patterns and scientists suspect that an evolutionary development is at the root of it.” Well I am not convinced that they have even thought much about these statements.

I find it very difficult to believe that anything is really, purely, uniquely human. The first assumption until proven false should be that our anatomy, genome, behavior etc. is part of the general characteristics of mammals. There will be other examples, or very similar examples, or the un-elaborated roots of any human ability to be found in some other animals. That is an assumption that is almost forced on us by the nature of evolution. But so many resist this approach and assume uniqueness without evidence of it.

Having a sense of rhythm would be very useful to many animals in their ordinary lives. And rhythms of many kinds occur in all living bodies. Movement in particular is rhythmic (perhaps particularly for swinging through trees). It would be something of a miracle if being able to entrain to a beat was not found in any other animal – just unbelievable.

And this reminds me of how annoying it is to still run across the rule against being anthropomorphic. It is not that we should assume that animals are like us in their mental lives without testing the idea. But it is also wrong to assume the opposite without testing. If it looks like a duck, and walks like a duck, and quacks like a duck, hey, it just maybe is a duck. If the only way I can understand and predict the actions of my dog is to assume she has emotions similar to mine, then my tentative assumption is that she has those emotions. The rule against seeing similarities between ourselves and other animals shows a level of misunderstanding of both.

The need to see ourselves as unique and as fundamentally different from other animals is a left-over from the old belief in a hierarchy of life with man at the pinnacle. It is about time we got over this notion, just as we had to get over being the center of the universe. Our biggest difference from the rest of the animal world is the extent of our culture, not our basic biology. Other animals have consciousness, memory, emotion and intelligence, just like us. We are all different (each unique in our own way), but as variations on a theme – the fundamental plan of vertebrates is the starting point for all vertebrates. And I would bet money that a sense of rhythm is part of that basic plan.

A look at colour

Judith Copithorne image

Back to the OpenMIND collection and a paper on colour vision (Visual Adaptation to a Remapped Spectrum – Grush, Jaswal, Knoepfler, Brovold) (here). The study has some shortcomings which the authors point out. “A number of factors distinguish the current study from an appropriately run and controlled psychological experiment. The small n and the fact that both subjects were also investigators in the study are perhaps the two most significant differences. These limitations were forced by a variety of factors, including the unusual degree of hardship faced by subjects, our relatively small budget, and the fact that this protocol had never been tried before. Because of these limitations, the experiments and results we report here are intended to be taken only as preliminary results—as something like a pilot study. Even so, the results, we believe, are quite interesting and suggestive.” To quote Chesterton, if a thing is worth doing, it is worth doing badly.

The researchers used LCD goggles driven by a video camera so that the scene the subject saw was shifted in colour. The shift was 120 degrees of a colour wheel (red to blue, green to red, yellow to purple). The result was blue tomatoes, lilac people, and green sky. (video) The study lasted a week with one subject wearing the gear all the time he was not in the dark while the other wore the gear for several hours each day and had normal vision the rest of the time. How did they adapt to the change in colour?
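The remapping can be sketched as a plain HSV hue rotation. This is my own illustration, not the goggles' actual video processing (which is surely more involved); the sign of the rotation is chosen so that red goes to blue, green to red and yellow to purple, matching the examples above.

```python
# Rough sketch of the colour remapping: rotate every hue 120 degrees around
# the colour wheel while leaving saturation and brightness alone.
# colorsys works on RGB floats in the 0-1 range; hue is stored on a 0-1 scale.
import colorsys

def rotate_hue(rgb, degrees=-120):
    """Rotate the hue of an (r, g, b) triple by the given number of degrees.
    The default direction sends red -> blue, green -> red, yellow -> purple."""
    r, g, b = rgb
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + degrees / 360.0) % 1.0  # wrap around the colour wheel
    return colorsys.hsv_to_rgb(h, s, v)

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
print(tuple(round(c, 3) for c in rotate_hue(red)))    # (0.0, 0.0, 1.0)  blue
print(tuple(round(c, 3) for c in rotate_hue(green)))  # (1.0, 0.0, 0.0)  red
```

Applying this per pixel to the camera feed would give exactly the reported scene: blue tomatoes, lilac people and a green sky.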

Colour constancy is an automatic correction the visual system makes so that colours do not appear to change under different light conditions: sunlight, twilight, candlelight, fluorescent lamps and so on. What perception is aiming at is the characteristic of the surface that is reflecting the light, not the nature of the light. Ordinarily we are completely unaware of this correction. The colour-shifting gear disrupted colour constancy until the visual system adapted to the new spectrum.

“We did not test color constancy in any controlled way, but the subjective reports are quite unmistakable. Subject RG noticed that upon first wearing the rotation gear color constancy went “out the window.” To take one example, in normal conditions RG’s office during the day is brightly lit enough that turning on the fluorescent light makes no noticeable difference to the appearance of anything in the office. But when he turned the lights on after first donning the gear, everything had an immediate significant change of hue (though not brightness). He spent several minutes flipping the light on and off in amazement. Another example is that he also noticed that when holding a colored wooden block, the surfaces changed their apparent color quite noticeably as he moved it and rotated it, as if the surfaces were actively altering their color like a chameleon. This was also a source of prolonged amusement. However, after a few days the effect disappeared. Turning the office light on had little noticeable effect on the color of anything in his office, and the surfaces of objects resumed their usual boring constancy as illumination conditions or angles altered.” Interestingly, the subject who wore the gear only part of each day never lost his normal colour constancy as he adapted to the new one; but the subject who wore the gear all the time had to re-adapt when he took off the gear, although it took much less time than the adaptation when the gear was first put on. I have often wondered how difficult it would be to lose this correction, and for a while I used a funny prism child’s toy to look at the uncorrected colours of various shadows.

Did an adaptation occur that brought the colours back to their original appearance? Did the blue tomatoes start to look red again? It seems not, at least in this study. But again there were some interesting events.

On two occasions late into his six-day period of wearing the gear, JK went into a sudden panic because he thought that the rotation equipment was malfunctioning and no longer rotating his visual input. Both times, as he reports it, he suddenly had the impression that everything was looking normal. This caused panic because if there was a glitch causing the equipment to no longer rotate his visual input, then the experimental protocol would be compromised. …However, the equipment was not malfunctioning on either occasion, a fact of which JK quickly convinced himself both times by explicitly reflecting on the colors that objects, specifically his hands, appeared to have: “OK, my hand looks purplish, and purple is what it should look like under rotation, so the equipment is still working correctly.”…the lack of a sense of novelty or strangeness made him briefly fear … He described it as a cessation of a “this is weird” signal.

Before and after the colour adaptation period, they tested the memory-colour effect. This is done by adjusting the colour of an object until it appears a neutral grey. If the object always has a particular colour (bananas are yellow) then people over-correct and move the colour past the neutral grey point. “One possible explanation of this effect is that when the image actually is grey scale, subjects’ top-down expectations about the usual color make it appear (in some way or another) to be slightly tinted in that hue. So when the image of the banana is actually completely grey scale subjects judge it to be slightly yellow. The actual color of the image must be slightly in the direction opposite yellow (periwinkle) in order to cancel this top-down effect and make the image appear grey. This is the memory-color effect.” This effect was slightly reduced after the experiment – as if bananas were not expected to be as yellow as they had been before the experiment.
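The cancellation logic in that quoted explanation can be sketched as a toy model. The one-dimensional hue axis and all the numbers here are invented for illustration; they are not from the study:

```python
# Toy model of the memory-colour adjustment task (illustrative only).
# Hue is a single number on a blue(-) ... grey(0) ... yellow(+) axis.

def achromatic_setting(top_down_bias):
    """Physical hue a subject picks so the object *appears* grey.

    Perceived hue = physical hue + top-down bias, so the setting
    that looks grey must cancel the bias: setting = -bias.
    """
    return -top_down_bias

# A strong 'bananas are yellow' expectation before the experiment:
before = achromatic_setting(0.8)   # pushed past grey, toward blue
# A weaker expectation after the colour adaptation period:
after = achromatic_setting(0.6)    # a smaller over-correction
```

The negative sign of the setting is the “periwinkle” over-correction described above; the shrinking magnitude is the reduced memory-colour effect the study reports.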

They also looked at other aspects of adaptation. “As we found, aesthetic judgments had started to adapt, … And though we did not find evidence of semantic adaptation, it would be quite surprising, given humans’ ability to learn new languages and dialects, if after a more extended period of time semantic adaptation did not occur.” They do not have clear evidence to say anything about qualia versus enactive adaptation, but further similar experiments may give good evidence.

Emotional communication

Judith Copithorne image

It has been suspected for many years that if the body is forced to experience the signs of an emotion then the emotion will be felt. So… when we feel an emotion we will have a particular bodily expression of that emotion; and, if we have the bodily expression of an emotion we feel the emotion. If we are happy we smile and if we smile we will feel happy. This connection does not need to be obvious – if we are a tiny bit happy we will make a tiny bit of a smile and a tiny smile can increase our happiness a tiny bit.

A definitive experiment was done on this connection (Strack, Martin, Stepper; 1988; “Inhibiting and Facilitating Conditions of the Human Smile: A Nonobtrusive Test of the Facial Feedback Hypothesis”; Journal of Personality and Social Psychology 54 (5): 768–777) and here is the abstract: “We investigated the hypothesis that people’s facial activity influences their affective responses. Two studies were designed to both eliminate methodological problems of earlier experiments and clarify theoretical ambiguities. This was achieved by having subjects hold a pen in their mouth in ways that either inhibited or facilitated the muscles typically associated with smiling without requiring subjects to pose in a smiling face. Study 1’s results demonstrated the effectiveness of the procedure. Subjects reported more intense humor responses when cartoons were presented under facilitating conditions than under inhibiting conditions that precluded labeling of the facial expression in emotion categories. Study 2 served to further validate the methodology and to answer additional theoretical questions. The results replicated Study 1’s findings and also showed that facial feedback operates on the affective but not on the cognitive component of the humor response. Finally, the results suggested that both inhibitory and facilitatory mechanisms may have contributed to the observed affective responses.” The important aspect in this study is that the subjects did not think they were mimicking a smile or a frown or that they were being tested for their emotional state.

It later became clear that the reason that emotions are somewhat contagious is that we mimic others’ bodies and expressions. When someone smiles at us, we are inclined to smile back, and it is very difficult to completely inhibit the return of a smile. It seems that this is a form of communication. We read others, and others read us, by our bodily emotional expressions.

What does failure to express an emotion with the body do? It can inhibit the emotion. It was found that people with facial paralysis that interfered with smiling showed increased symptoms of depression, while people with Botox treatment that interfered with frowning had their depression symptoms decreased (Lewis et al., 2009, Journal of Cosmetic Dermatology).

And now it has been found that interference with the bodily expression of emotion can interfere with understanding the emotions of others. It is when we mimic another’s facial expression that we can understand their state of mind.

A recent paper shows this effect. (Baumeister, Papa, Foroni; “Deeper than skin deep – The effect of botulinum toxin-A on emotion processing”; Toxicon, 2016; 118: 86) Here is the abstract:


  • Effect of facial Botox use on perception of emotional stimuli was investigated.
  • Particularly perception of slightly emotional stimuli was blunted after Botox use.
  • The perception of very emotional stimuli was less affected.
  • After Botox use, reaction times to slightly emotional stimuli increased.
  • Specifically weakly emotional stimuli seem to benefit from facial feedback.

“The effect of facial botulinum Toxin-A (BTX) injections on the processing of emotional stimuli was investigated. The hypothesis, that BTX would interfere with processing of slightly emotional stimuli and less with very emotional or neutral stimuli, was largely confirmed. BTX-users rated slightly emotional sentences and facial expressions, but not very emotional or neutral ones, as less emotional after the treatment. Furthermore, they became slower at categorizing slightly emotional facial expressions under time pressure.”

The press release for this paper (here) gives more details. “The thankfully temporary paralysis of facial muscles that this toxin causes impairs our ability to capture the meaning of other people’s facial expressions. … The idea (embodied cognition) is that the processing of emotional information, such as facial expressions, in part involves reproducing the same emotions on our own bodies. In other words, when we observe a smile, our face too tends to smile (often in an imperceptible and automatic fashion) as we try to make sense of that expression. However, if our facial muscles are paralyzed by Botox, then the process of understanding someone else’s emotion expression may turn out to be more difficult.”

It is not about rules

The question of the trolley has always bothered me. You have probably encountered the scenario many times. You are on a bridge over a trolley track with another person you do not know. There are five people on the track some way off. A runaway trolley is coming down the track and will hit the five people. Do you allow this to happen, or do you throw the person beside you onto the track in front of the trolley to stop it? This question comes in many versions and is used to categorize types of moral reasoning. My problem is that I do not know what I would do in the few seconds I would have to consider the situation, and I don’t believe that others know either.

In another dip into OpenMIND (here) I find a paper on morality by Paul Churchland, “Rules: The Basis of Morality?”. This is the abstract:

Most theories of moral knowledge, throughout history, have focused on behavior-guiding rules. Those theories attempt to identify which rules are the morally valid ones, and to identify the source or ground of that privileged set. The variations on this theme are many and familiar. But there is a problem here. In fact, there are several. First, many of the higher animals display a complex social order, one crucial to their biological success, and the members of such species typically display a sophisticated knowledge of what is and what is not acceptable social behavior —but those creatures have no language at all. They are unable even to express a single rule, let alone evaluate it for moral validity. Second, when we examine most other kinds of behavioral skills—playing basketball, playing the piano, playing chess—we discover that it is surpassingly difficult to articulate a set of discursive rules, which, if followed, would produce a skilled athlete, pianist, or chess master. And third, it would be physically impossible for a biological creature to identify which of its myriad rules are relevant to a given situation, and then apply them, in real time, in any case. All told, we would seem to need a new account of how our moral knowledge is stored, accessed, and applied. The present paper explores the potential, in these three regards, of recent alternative models from the computational neurosciences. The possibilities, it emerges, are considerable.

Apes, wolves/dogs, lions and many other intelligent social animals appear to have a moral sense without any language. They have ways of behaving that show cooperation, empathy, trust, fairness, sacrifice for the group and punishment of bad behavior. They train their young in these ways. No language codifies this behavior. Humans who lose their language through brain damage and cannot speak or comprehend language still have other skills intact, including their moral sense. People who are very literate and very moral can often not give an account of their moral rules – some can only put forward the Golden Rule. If we were actually using rules, they would be able to report them.

We should consider morality a skill that we learn rather than a set of rules. It is a skill that we learn and continue learning throughout our lives – a skill that can take into consideration a sea of detail and nuance, and that is lightning fast compared to finding the right rule and applying it. “Moral expertise is among the most precious of our human virtues, but it is not the only one. There are many other domains of expertise. Consider the consummate skills displayed by a concert pianist, or an all-star basketball player, or a grandmaster chess champion. In these cases, too, the specific expertise at issue is acquired only slowly, with much practice sustained over a period of years. And here also, the expertise displayed far exceeds what might possibly be captured in a set of discursive rules consciously followed, on a second-by-second basis, by the skilled individuals at issue. Such skills are deeply inarticulate in the straightforward sense that the expert who possesses them is unable to simply tell an aspiring novice what to do so as to be an expert pianist, an effective point guard, or a skilled chess player. The knowledge necessary clearly cannot be conveyed in that fashion. The skills cited are all cases of knowing how rather than cases of knowing that. Acquiring them takes a lot of time and a lot of practice.”

Churchland then describes how the neural basis of this sort of skill is possible (along with perception and action). He uses a model of Parallel Distributed Processing (PDP) in which a great deal of input can quickly be transformed into a perception or an action. It is an arrangement that learns skills. “It has to do with the peculiar way the brain is wired up at the level of its many billions of neurons. It also has to do with the very different style of representation and computation that this peculiar pattern of connectivity makes possible. It performs its distinct elementary computations, many trillions of them, each one at a distinct micro-place in the brain, but all of them at the same time. … a PDP network is capable of pulling out subtle and sophisticated information from a gigantic sensory representation all in one fell swoop.” I found Churchland’s explanation very clear and to the point, though I also thought he was using AI ideas of PDP rather than biological ones in order to be easily understood. If you are not familiar with parallel processing ideas, this paper is a good place to find a readable starting explanation.
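As a concrete (if cartoonish) sketch of the PDP style Churchland appeals to – not his actual model, and with arbitrary made-up weights – here is a single-pass network in which every unit combines all of its inputs at once, so a whole input vector is transformed “in one fell swoop” rather than by consulting rules one at a time:

```python
import math

def layer(inputs, weights, biases):
    """One PDP-style layer: each output unit sums *all* inputs in parallel."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# Arbitrary weights: 4 'sensory' inputs -> 2 hidden units -> 1 verdict.
sensory = [1.0, 0.0, 1.0, 1.0]
hidden = layer(sensory,
               [[0.5, -0.2, 0.8, 0.1],
                [-0.4, 0.9, 0.2, -0.6]],
               [0.0, 0.1])
verdict = layer(hidden, [[1.2, -0.7]], [-0.3])[0]  # a value in (0, 1)
```

In a trained network it is the weights, not any stated rules, that carry the “knowing how”; learning adjusts them gradually, which is the analogue of slowly acquiring a skill through practice.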

Another slight quibble with the paper is that he does not point out that some of the elements of morality appear to be inborn, and those elements probably steer the moral learning process. Babies often seem to ‘get it’ prior to the experience needed to develop and improve the skill.


A prediction engine

Judith Copithorne image

I have just discovered a wonderful source of ideas about the mind, Open MIND (here), a collection of essays and papers edited by Metzinger and Windt. I ran across mention of it in Derek Bownd’s blog (here). The particular paper that Bownd points to is “Embodied Prediction” by Andy Clark.

Clark argues that we look at the mind backwards. The everyday way we view the working of the brain is: the sensory input is used to create a model of the world, which prompts a plan of action, which is used to create an action. He argues for the opposite – action forces the nature of the sensory input we seek, that sensory input is used to correct an existing model, and it is all done by predicting. The mind is a prediction machine; the process is referred to as PP (predictive processing). “Predictive processing plausibly represents the last and most radical step in this retreat from the passive, input-dominated view of the flow of neural processing. According to this emerging class of models, naturally intelligent systems (humans and other animals) do not passively await sensory stimulation. Instead, they are constantly active, trying to predict the streams of sensory stimulation before they arrive.” Rather than the bottom-up flow of sensory information, the theory has a top-down flow of the current model of the world (in effect, what the incoming sensory data should look like). All that is fed back upwards are the error corrections, where the incoming sensory data differ from what was expected. This seems a faster, more reliable, more efficient system than the one in the more conventional theory. The only effort needed is to deal with the surprises in the incoming data. Prediction errors are the only sensory information that is yet to be explained, and so, most of the time, the only place where the work of perception is required.

Clark doesn’t make much of it, but he has a neat way of understanding attention. Much of our eye movements and posture movements are seen as ways of selecting the nature of the next sensory input. “Action is not so much a response to an input as a neat and efficient way of selecting the next “input”, and thereby driving a rolling cycle.” As the brain seeks certain information (because of uncertainty, the task at hand, or other reasons), it will work harder to solve the error corrections pertaining to that particular information. Action will be driven towards examining the source of that information. Unimportant and small error corrections may be ignored if they are not important to current tasks. This looks like an excellent description of the focus of attention to me.

“Conceptually, this implies a striking reversal, in that the driving sensory signal is really just providing corrective feedback on the emerging top-down predictions. As ever-active prediction engines, these kinds of minds are not, fundamentally, in the business of solving puzzles given to them as inputs. Rather, they are in the business of keeping us one step ahead of the game, poised to act and actively eliciting the sensory flows that keep us viable and fulfilled. If this is on track, then just about every aspect of the passive forward-flowing model is false. We are not passive cognitive couch potatoes so much as proactive predictavores, forever trying to stay one step ahead of the incoming waves of sensory stimulation.”

The prediction process is also postulated for motor control. We predict the sensory input that will happen during an action, that information flows from the top down, and error correction controls the accuracy of the movement. The predicted sensory consequences of our actions cause the actions. “The perceptual and motor systems should not be regarded as separate but instead as a single active inference machine that tries to predict its sensory input in all domains: visual, auditory, somatosensory, interoceptive and, in the case of the motor system, proprioceptive. …This erases any fundamental computational line between perception and the control of action. There remains, to be sure, an obvious (and important) difference in direction of fit. Perception here matches neural hypotheses to sensory inputs, and involves “predicting the present”; while action brings unfolding proprioceptive inputs into line with neural predictions. …Perception and action here follow the same basic logic and are implemented using the same computational strategy. In each case, the systemic imperative remains the same: the reduction of ongoing prediction error.”
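The core loop of this story can be caricatured in a few lines of code. This is a one-number stand-in for the whole model, invented here for illustration and not taken from Clark’s paper: the prediction flows “down”, only the mismatch flows back “up”, and the model is revised by that surprise.

```python
def perceive(model, sensory_stream, learning_rate=0.5):
    """One-number caricature of predictive processing."""
    errors = []
    for sample in sensory_stream:
        error = sample - model          # the only signal passed upward
        model += learning_rate * error  # revise the model from surprise
        errors.append(error)
    return model, errors

# A world that sits at 10.0, then surprises us by dropping to 4.0.
model, errors = perceive(0.0, [10.0, 10.0, 10.0, 4.0, 4.0, 4.0])
# While the input matches the prediction, the upward error signal
# shrinks toward zero; only the change at the fourth sample revives it.
```

In this caricature, attention would amount to weighting some error signals more heavily than others – working harder on the surprises that matter to the current task.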

This theory sits comfortably when I think of conversational language. Unlike much of perception and the control of movement, language is conducted more in the light of conscious awareness. It is (almost) possible to feel a prediction of what is going to be said when listening, and to have work to do in understanding only when there is a surprise mismatch between the expected and the heard word. And when talking, it takes little effort until your tongue makes a slip and has to be corrected.

I am looking forward to browsing through openMIND now that I know it exists.