To see others as we see ourselves

In psychology there is a theory about the ‘fundamental attribution error’, an error in how we attribute causes to actions. When we look at our own actions, we see them as responses to the circumstances in which we are deciding what to do. When we look at the actions of others, we see them as products of their personality or character traits. So we do not really take the circumstances of others into consideration when we judge their actions. Nor do we consider the fixed patterns of our own behaviour, the ones that never enter our conscious thoughts, when we judge our own actions. We just do what is reasonable at the time and they just do what they always do. I can be too busy to help while they can be too thoughtless. This is a problem for us, but at least we can understand the problem and occasionally overcome it. (My way to deal with it is to assume that people are intelligent and well-meaning most of the time. If they do something that seems dumb or nasty, I look at the circumstances to see if there is a reasonable explanation. There very often is. I realize that this view of my own behaviour is somewhat ironic in its internal attribution – well, nothing is perfect.)

But this problem with attribution goes well beyond human social interaction. We do the same thing with animals. Elephants were tested for self-recognition with the mirror test: a mark is placed on the animal’s forehead, and if it investigates the mark while looking in the mirror, it clearly understands that the reflection is its own. Elephants failed the test and so were said to have no sense of self. It turned out that the mirrors used were too small. The elephants could not make out that it was an elephant in the mirror, let alone themselves. If we start out underestimating an animal’s intelligence, and then either fail to test that assumption or test it in a way that is inappropriate for the animal, we are making a big attribution error.

There is an assumption on the part of many that vertebrate brains differ fundamentally among the various sorts of vertebrates. This is not true! All animals with a spine have the same brain pattern with the same regions. All vertebrate brains have seven parts, no more and no fewer: accessory olfactory bulb; cerebellum; cerebral hemispheres; medulla oblongata; olfactory bulb; optic tectum; and pituitary gland. There are differences in size, details and subdivisions, but there are no missing parts. (R.G. Northcutt; Understanding Vertebrate Brain Evolution; Integr. Comp. Biol. 2002 42(4) 743-756). There is every reason to believe that the brain works in fundamentally the same way in mammals, birds, reptiles, amphibians and fish. And by and large, this same pattern of brain has the same functions – to move, find and eat food, escape enemies and so on. It is obvious that animals have motor control and sensory perception.

What evidence is there that other animals have emotions, memory, or consciousness? Can they be automatons with no mental life? The reports trickle in year after year that add to the evidence that animals have a mental life similar to ours.

Reptiles probably dream. Most animal species sleep, from invertebrates to primates. However, neuroscientists have until now only actively recorded the sleeping brains of birds and mammals. Shein-Idelson et al. now describe the electrophysiological hallmarks of sleep in reptiles. Recordings from the brains of Australian dragons revealed the typical features of slow-wave sleep and rapid eye movement (REM) sleep. These findings indicate that the brainstem circuits responsible for slow-wave and REM sleep are not only very ancient but were already involved in sleep dynamics in reptiles. (Shein-Idelson, Ondracek, Liaw, Reiter, Laurent; Slow waves, sharp waves, ripples, and REM in sleeping dragons; Science 2016 Vol 352 (6285) 590-596) These sleep wave types are also evidence for a memory system similar to ours.

Fish don’t make noise or wave their fins to show emotion but that does not mean they don’t have emotions. “Whether fishes are sentient beings remains an unresolved and controversial question. Among characteristics thought to reflect a low level of sentience in fishes is an inability to show stress-induced hyperthermia (SIH), a transient rise in body temperature shown in response to a variety of stressors. This is a real fever response, so is often referred to as ‘emotional fever’. It has been suggested that the capacity for emotional fever evolved only in amniotes (mammals, birds and reptiles), in association with the evolution of consciousness in these groups. According to this view, lack of emotional fever in fishes reflects a lack of consciousness. We report here on a study in which six zebrafish groups with access to a temperature gradient were either left as undisturbed controls or subjected to a short period of confinement. The results were striking: compared to controls, stressed zebrafish spent significantly more time at higher temperatures, achieving an estimated rise in body temperature of about 2–4°C. Thus, zebrafish clearly have the capacity to show emotional fever. While the link between emotion and consciousness is still debated, this finding removes a key argument for lack of consciousness in fishes.” (Rey, Huntingford, Boltana, Vargas, Knowles, Mackenzie; Fish can show emotional fever: stress-induced hyperthermia in zebrafish; 2015 Proc. R. Soc. B 282: 20152266)

One of the problems with comparing the brains of different vertebrates is that the regions have been named differently. When development is followed through the embryos, many differently named regions should really have a single name. Parts of the tectum are the same as our superior colliculus, and they have been found to act in the same way: they integrate stimuli from the various senses and can register whether events are simultaneous. For example, in tadpoles the tectum can tell if a sight stimulus and a vibration stimulus are simultaneous. That is the same function, with the same development, in the same part of the brain in an amphibian and a mammal. (Felch, Khakhalin, Aizenman; Multisensory integration in the developing tectum is constrained by the balance of excitation and inhibition; 2016 eLife 5)

We should be assuming that other vertebrates think like we do to a large extent – just as we should assume that other people do – and try to understand their actions without an attribution error.

A sense of rhythm

In a recent scientific press release, the opening sentence is, “A sense of rhythm is a uniquely human characteristic.” I am used to this sort of thing in opening sentences; I think to myself that they definitely have no evidence for that statement; they have not studied most animals, done a literature search or watched the videos of parrots dancing on the back of chairs. Never mind, it is an opener, just read on.

But the next paragraph starts, “What most people call the sense of rhythm — the mechanism that enables us to clap along or dance to music — is an intangible ability that is exclusive to human beings.” So it is not just the usual unexamined opener. And to top it off, the third paragraph starts with, “Human beings are the only species that recognise these patterns and scientists suspect that an evolutionary development is at the root of it.” Well I am not convinced that they have even thought much about these statements.

I find it very difficult to believe that anything is really, purely, uniquely human. The first assumption, until proven false, should be that our anatomy, genome, behavior etc. are part of the general characteristics of mammals. For any human ability there will be other examples, or very similar examples, or at least its un-elaborated roots, to be found in some other animal. That is an assumption that is almost forced on us by the nature of evolution. But so many resist this approach and assume uniqueness without evidence of it.

Having a sense of rhythm would be very useful to many animals in their ordinary lives. And rhythms of many kinds occur in all living bodies. Movement in particular is rhythmic (perhaps particularly for swinging through trees). It would be something of a miracle if being able to entrain to a beat was not found in any other animal – just unbelievable.

And this reminds me of how annoying it is to still run across the rule against being anthropomorphic. It is not that we should assume that animals are like us in their mental lives without testing the idea. But it is also wrong to assume the opposite without testing. If it looks like a duck, and walks like a duck, and quacks like a duck, hey, it just maybe is a duck. If the only way I can understand and predict the actions of my dog is to assume she has emotions similar to mine, then my tentative assumption is that she has those emotions. The rule against seeing similarities between ourselves and other animals shows a level of misunderstanding of both.

The need to see ourselves as unique and as fundamentally different from other animals is a left-over from the old belief in a hierarchy of life with man at the pinnacle. It is about time we got over this notion, just like we had to get over being the center of the universe. Our biggest difference from the rest of the animal world is the extent of our culture, not our basic biology. Other animals have consciousness, memory, emotion and intelligence, just like us. We are all different (each unique in their own way) but as variations on a theme – the fundamental plan of vertebrates is the starting point for all vertebrates. And I would bet money that a sense of rhythm is part of that basic plan.

A look at colour

Judith Copithorne image


Back to the OpenMIND collection and a paper on colour vision (Visual Adaptation to a Remapped Spectrum – Grush, Jaswal, Knoepfler, Brovold) (here). The study has some shortcomings, which the authors point out. “A number of factors distinguish the current study from an appropriately run and controlled psychological experiment. The small n and the fact that both subjects were also investigators in the study are perhaps the two most significant differences. These limitations were forced by a variety of factors, including the unusual degree of hardship faced by subjects, our relatively small budget, and the fact that this protocol had never been tried before. Because of these limitations, the experiments and results we report here are intended to be taken only as preliminary results—as something like a pilot study. Even so, the results, we believe, are quite interesting and suggestive.” To paraphrase Chesterton, if a thing is worth doing, it is worth doing badly.

The researchers used LCD goggles driven by a video camera so that the scene the subject saw was shifted in colour. The shift was 120 degrees of a colour wheel (red to blue, green to red, yellow to purple). The result was blue tomatoes, lilac people, and green sky. (video) The study lasted a week with one subject wearing the gear all the time he was not in the dark while the other wore the gear for several hours each day and had normal vision the rest of the time. How did they adapt to the change in colour?
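The shift itself is easy to picture in code. Here is a minimal sketch of a 120-degree hue rotation using Python's standard `colorsys` module – my own illustration of the transformation, not the researchers' software, which applied it to a live video feed:

```python
import colorsys

def rotate_hue(r, g, b, degrees=-120):
    """Rotate an RGB colour (components in 0..1) around the hue wheel.
    A -120 degree shift sends red to blue, green to red, yellow to purple."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + degrees / 360.0) % 1.0
    return colorsys.hls_to_rgb(h, l, s)

red_tomato = (1.0, 0.0, 0.0)
blue_tomato = rotate_hue(*red_tomato)   # (0.0, 0.0, 1.0)
```

Applying this per pixel to every frame gives exactly the blue tomatoes and green sky described above.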

Colour constancy is an automatic correction the visual system makes so that colours do not appear to change under different light conditions: sunlight, twilight, candle light, fluorescent lamps etc. What perception is aiming at is the characteristic of the surface that is reflecting the light, not the nature of the light. Ordinarily we are completely unaware of this correction. The colour-shifting gear disrupted colour constancy until the visual system adapted to the new spectrum.
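One classical toy model of this correction is the ‘grey-world’ algorithm: assume the scene averages out to neutral, estimate the illuminant from the channel means, and divide it out. This is only an illustration of the kind of computation involved, not a claim about the visual system's actual mechanism, which is far richer:

```python
def grey_world(pixels):
    """Discount the illuminant by assuming the scene averages to grey:
    scale each channel so its mean equals the overall mean."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3
    return [tuple(p[c] * grey / means[c] for c in range(3)) for p in pixels]

# The same surfaces under a reddish illuminant...
scene = [(0.9, 0.4, 0.4), (0.6, 0.2, 0.2), (0.3, 0.1, 0.1)]
corrected = grey_world(scene)
# ...come out with equal channel means: the colour cast is discounted.
```

The interesting point of the study is that this normally invisible correction became visible to the subjects once the gear disrupted it.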

“We did not test color constancy in any controlled way, but the subjective reports are quite unmistakable. Subject RG noticed that upon first wearing the rotation gear color constancy went “out the window.” To take one example, in normal conditions RG’s office during the day is brightly lit enough that turning on the fluorescent light makes no noticeable difference to the appearance of anything in the office. But when he turned the lights on after first donning the gear, everything had an immediate significant change of hue (though not brightness). He spent several minutes flipping the light on and off in amazement. Another example is that he also noticed that when holding a colored wooden block, the surfaces changed their apparent color quite noticeably as he moved it and rotated it, as if the surfaces were actively altering their color like a chameleon. This was also a source of prolonged amusement. However, after a few days the effect disappeared. Turning the office light on had little noticeable effect on the color of anything in his office, and the surfaces of objects resumed their usual boring constancy as illumination conditions or angles altered.” Interestingly, the subject who wore the gear only part of each day never lost his normal colour constancy as he adapted to the shifted one; but the subject who wore the gear all the time had to re-adapt when he took the gear off, although this took much less time than the adaptation when the gear was first put on. I have often wondered how difficult it would be to lose this correction, and for a while I used a child’s prism toy to look at the uncorrected colours of various shadows.

Did an adaptation happen that brought the colours back to their original appearance? Did the blue tomatoes start to look more red? It seems not, at least in this study. But again there were some interesting events.

“On two occasions late into his six-day period of wearing the gear, JK went into a sudden panic because he thought that the rotation equipment was malfunctioning and no longer rotating his visual input. Both times, as he reports it, he suddenly had the impression that everything was looking normal. This caused panic because if there was a glitch causing the equipment to no longer rotate his visual input, then the experimental protocol would be compromised. …However, the equipment was not malfunctioning on either occasion, a fact of which JK quickly convinced himself both times by explicitly reflecting on the colors that objects, specifically his hands, appeared to have: “OK, my hand looks purplish, and purple is what it should look like under rotation, so the equipment is still working correctly.”…the lack of a sense of novelty or strangeness made him briefly fear … He described it as a cessation of a “this is weird” signal.”

Before and after the colour adaptation period, they tested the memory-colour effect. This is done by adjusting the colour of an object until it appears a neutral grey. If the object always has a particular colour (bananas are yellow) then people over-correct and move the colour past the neutral grey point. “One possible explanation of this effect is that when the image actually is grey scale, subjects’ top-down expectations about the usual color make it appear (in some way or another) to be slightly tinted in that hue. So when the image of the banana is actually completely grey scale subjects judge it to be slightly yellow. The actual color of the image must be slightly in the direction opposite yellow (periwinkle) in order to cancel this top-down effect and make the image appear grey. This is the memory-color effect.” This effect was slightly reduced after the experiment – as if bananas were not expected to be as yellow as they had been before the experiment.
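The adjustment logic behind the memory-colour measurement can be sketched in a few lines. The bias value here is a made-up number on a yellow–blue axis; this is my toy simulation of the procedure, not the authors' code:

```python
def grey_setting(memory_bias, step=0.01):
    """Simulate a subject tinting a banana image until it *looks* grey.
    Perceived tint = physical tint + top-down memory-colour bias, so the
    settled physical tint ends up opposite the remembered colour."""
    physical = 0.0
    while abs(physical + memory_bias) > step / 2:
        physical += -step if physical + memory_bias > 0 else step
    return physical

# A yellow bias of +0.10 drives the grey setting to about -0.10,
# i.e. slightly toward periwinkle, the hue opposite yellow.
setting = grey_setting(0.10)
```

A shrinking bias after the experiment would show up as a grey setting closer to zero, which is what the reduced effect amounts to.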

They also looked at other aspects of adaptation. “As we found, aesthetic judgments had started to adapt, … And though we did not find evidence of semantic adaptation, it would be quite surprising, given humans’ ability to learn new languages and dialects, if after a more extended period of time semantic adaptation did not occur.” They do not have clear evidence to say anything about qualia versus enactive adaptation, but further similar experiments may give good evidence.

Emotional communication

Judith Copithorne image


It has been suspected for many years that if the body is forced to experience the signs of an emotion then the emotion will be felt. So… when we feel an emotion we will have a particular bodily expression of that emotion; and, if we have the bodily expression of an emotion we feel the emotion. If we are happy we smile and if we smile we will feel happy. This connection does not need to be obvious – if we are a tiny bit happy we will make a tiny bit of a smile and a tiny smile can increase our happiness a tiny bit.

A definitive experiment was done on this connection (Strack, Martin, Stepper; 1988; “Inhibiting and Facilitating Conditions of the Human Smile: A Nonobtrusive Test of the Facial Feedback Hypothesis”; Journal of Personality and Social Psychology 54 (5): 768–777) and here is the abstract: “We investigated the hypothesis that people’s facial activity influences their affective responses. Two studies were designed to both eliminate methodological problems of earlier experiments and clarify theoretical ambiguities. This was achieved by having subjects hold a pen in their mouth in ways that either inhibited or facilitated the muscles typically associated with smiling without requiring subjects to pose in a smiling face. Study 1’s results demonstrated the effectiveness of the procedure. Subjects reported more intense humor responses when cartoons were presented under facilitating conditions than under inhibiting conditions that precluded labeling of the facial expression in emotion categories. Study 2 served to further validate the methodology and to answer additional theoretical questions. The results replicated Study 1’s findings and also showed that facial feedback operates on the affective but not on the cognitive component of the humor response. Finally, the results suggested that both inhibitory and facilitatory mechanisms may have contributed to the observed affective responses.” The important aspect of this study is that the subjects did not think they were mimicking a smile or a frown or that they were being tested for their emotional state.

It later became clear that the reason emotions are somewhat contagious is that we mimic others’ bodies and expressions. When someone smiles at us, we are inclined to smile back, and it is very difficult to completely inhibit the return of a smile. This seems to be a form of communication: we read others, and others read us, by our bodily emotional expressions.

What does failure to express an emotion with the body do? It can inhibit the emotion. It was found that people with facial paralysis that interfered with smiling showed increased symptoms of depression, while people whose Botox treatment interfered with frowning had their depression symptoms decreased. (Lewis et al. 2009, J. Cosmetic Dermatology)

And now it has been found that interference with bodily expression of emotion can interfere with understanding the emotions of others. It is when we mimic another’s facial expression that we can understand their state of mind.

A recent paper shows this effect. (Baumeister, Papa, Foroni; “Deeper than skin deep – The effect of botulinum toxin-A on emotion processing”; Toxicon, 2016; 118: 86) Here is the abstract:

Highlights

  • Effect of facial Botox use on perception of emotional stimuli was investigated.
  • Particularly perception of slightly emotional stimuli was blunted after Botox use.
  • The perception of very emotional stimuli was less affected.
  • After Botox use, reaction times to slightly emotional stimuli increased.
  • Specifically weakly emotional stimuli seem to benefit from facial feedback.

“The effect of facial botulinum Toxin-A (BTX) injections on the processing of emotional stimuli was investigated. The hypothesis, that BTX would interfere with processing of slightly emotional stimuli and less with very emotional or neutral stimuli, was largely confirmed. BTX-users rated slightly emotional sentences and facial expressions, but not very emotional or neutral ones, as less emotional after the treatment. Furthermore, they became slower at categorizing slightly emotional facial expressions under time pressure.”

The press release for this paper (here) gives more details. “The thankfully temporary paralysis of facial muscles that this toxin causes impairs our ability to capture the meaning of other people’s facial expressions. … The idea (embodied cognition) is that the processing of emotional information, such as facial expressions, in part involves reproducing the same emotions on our own bodies. In other words, when we observe a smile, our face too tends to smile (often in an imperceptible and automatic fashion) as we try to make sense of that expression. However, if our facial muscles are paralyzed by Botox, then the process of understanding someone else’s emotion expression may turn out to be more difficult.”

It is not about rules

The question of the trolley has always bothered me. You have probably encountered the scenario many times. You are on a bridge over a trolley track with another person you do not know. There are 5 people on the track some way off. A runaway trolley is coming down the track and will hit the 5 people. Do you allow this to happen, or do you throw the person beside you onto the track in front of the trolley to stop it? This question comes in many versions and is used to categorize types of moral reasoning. My problem is that I do not know what I would do in the few seconds I would have to consider the situation, and I don’t believe that others know either.

In another dip into OpenMIND (here) I find a paper on morality by Paul Churchland, “Rules: The Basis of Morality?”. This is the abstract:

Most theories of moral knowledge, throughout history, have focused on behavior-guiding rules. Those theories attempt to identify which rules are the morally valid ones, and to identify the source or ground of that privileged set. The variations on this theme are many and familiar. But there is a problem here. In fact, there are several. First, many of the higher animals display a complex social order, one crucial to their biological success, and the members of such species typically display a sophisticated knowledge of what is and what is not acceptable social behavior —but those creatures have no language at all. They are unable even to express a single rule, let alone evaluate it for moral validity. Second, when we examine most other kinds of behavioral skills—playing basketball, playing the piano, playing chess—we discover that it is surpassingly difficult to articulate a set of discursive rules, which, if followed, would produce a skilled athlete, pianist, or chess master. And third, it would be physically impossible for a biological creature to identify which of its myriad rules are relevant to a given situation, and then apply them, in real time, in any case. All told, we would seem to need a new account of how our moral knowledge is stored, accessed, and applied. The present paper explores the potential, in these three regards, of recent alternative models from the computational neurosciences. The possibilities, it emerges, are considerable.

Apes, wolves/dogs, lions and many other intelligent social animals appear to have a moral sense without any language. They have ways of behaving that show cooperation, empathy, trust, fairness, sacrifice for the group and punishment of bad behavior. They train their young in these ways. No language codifies this behavior. Humans who lose their language through brain damage and cannot speak or comprehend language still have other skills intact, including their moral sense. People who are very literate and very moral can often not give an account of their moral rules – some can only put forward the Golden Rule. If we were actually using rules, we would be able to report them.

We should consider morality a skill rather than a set of rules – a skill that we learn and continue learning throughout our lives, one that can take into consideration a sea of detail and nuance, and that is lightning fast compared to finding the right rule and applying it. “Moral expertise is among the most precious of our human virtues, but it is not the only one. There are many other domains of expertise. Consider the consummate skills displayed by a concert pianist, or an all-star basketball player, or a grandmaster chess champion. In these cases, too, the specific expertise at issue is acquired only slowly, with much practice sustained over a period of years. And here also, the expertise displayed far exceeds what might possibly be captured in a set of discursive rules consciously followed, on a second-by-second basis, by the skilled individuals at issue. Such skills are deeply inarticulate in the straightforward sense that the expert who possesses them is unable to simply tell an aspiring novice what to do so as to be an expert pianist, an effective point guard, or a skilled chess player. The knowledge necessary clearly cannot be conveyed in that fashion. The skills cited are all cases of knowing how rather than cases of knowing that. Acquiring them takes a lot of time and a lot of practice.”

Churchland then describes how this sort of skill (along with perception and action) is neurally possible. He uses a model of Parallel Distributed Processing, in which a great deal of input can quickly be transformed into a perception or an action. It is an arrangement that learns skills. “It has to do with the peculiar way the brain is wired up at the level of its many billions of neurons. It also has to do with the very different style of representation and computation that this peculiar pattern of connectivity makes possible. It performs its distinct elementary computations, many trillions of them, each one at a distinct micro-place in the brain, but all of them at the same time. … a PDP network is capable of pulling out subtle and sophisticated information from a gigantic sensory representation all in one fell swoop.” I found Churchland’s explanation very clear and to the point, though I thought he was using AI ideas of PDP rather than biological ones in order to be easily understood. If you are not familiar with parallel processing ideas, this paper is a good place to find a readable starting explanation.
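The flavour of a PDP computation is easy to show. The sketch below is a generic toy network, not Churchland's model: every unit takes a weighted sum of all its inputs and squashes the result, and a layer's worth of those little computations turns a sensory vector into an output in a single pass.

```python
import math
import random

def layer(inputs, weights):
    """One PDP layer: each unit computes a weighted sum of every input
    and squashes it with a sigmoid -- many small computations that a
    brain (unlike this sequential loop) would run all at the same time."""
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

random.seed(1)
w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w_out = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

sensory = [0.2, 0.9, 0.1, 0.5]                    # a tiny "sensory representation"
percept = layer(layer(sensory, w_hidden), w_out)  # pulled out in one fell swoop
```

Learning a skill, in this picture, is nothing more than the slow adjustment of the weights through practice; no rule is stored anywhere.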

Another slight quibble with the paper is that he does not point out that some elements of morality appear to be inborn, and those elements probably steer the moral learning process. Babies often seem to ‘get it’ prior to the experience needed to develop and improve the skill.


A prediction engine

Judith Copithorne image


I have just discovered a wonderful source of ideas about the mind, Open MIND (here), a collection of essays and papers edited by Metzinger and Windt. I ran across mention of it in Deric Bownds’ blog (here). The particular paper that Bownds points to is “Embodied Prediction” by Andy Clark.

Clark argues that we look at the mind backwards. The everyday way we view the working of the brain is: sensory input is used to create a model of the world, which prompts a plan of action, which is used to create an action. He argues for the opposite: action shapes the sensory input we seek, that input is used to correct an existing model, and it is all done by prediction. The mind is a predicting machine; the process is referred to as PP (predictive processing). “Predictive processing plausibly represents the last and most radical step in this retreat from the passive, input-dominated view of the flow of neural processing. According to this emerging class of models, naturally intelligent systems (humans and other animals) do not passively await sensory stimulation. Instead, they are constantly active, trying to predict the streams of sensory stimulation before they arrive.” Rather than a bottom-up flow of sensory information, the theory has a top-down flow of the current model of the world (in effect, what the incoming sensory data should look like). All that is fed back upwards are the error corrections, the places where the incoming sensory data differ from what is expected. This seems a faster, more reliable, more efficient system than the conventional theory. The only effort needed is dealing with the surprises in the incoming data. Prediction errors are the only sensory information still in need of explanation, the only place where the work of perception is required most of the time.
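The core loop of predictive processing can be caricatured in a few lines. This sketch assumes the simplest possible ‘model’ (a single expected value); real PP models are hierarchical, but the logic is the same: predict, compare, and let only the error flow upward.

```python
def perceive(model, sensory, rate=0.3):
    """Top-down: the model predicts the input.
    Bottom-up: only the prediction error is passed on, nudging the model."""
    error = sensory - model          # the surprise
    return model + rate * error, error

model = 0.0
errors = []
for sample in [1.0] * 10:            # an unchanging stream of stimulation
    model, err = perceive(model, sample)
    errors.append(abs(err))
# the errors shrink as the model comes to expect the input; once
# expectation matches the world there is little left to explain
```

Notice that once the model is accurate, almost nothing flows upward at all, which is what makes the scheme so efficient.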

Clark doesn’t make much of it, but he has a neat way of understanding attention. Many of our eye movements and posture adjustments can be seen as ways of selecting the nature of the next sensory input. “Action is not so much a response to an input as a neat and efficient way of selecting the next “input”, and thereby driving a rolling cycle.” As the brain seeks certain information (because of uncertainty, the task at hand, or other reasons), it works harder to resolve the prediction errors pertaining to that particular information, and action is driven towards examining the source of that information. Unimportant and small errors may be ignored if they do not matter to current tasks. This looks like an excellent description of the focus of attention to me.

“Conceptually, this implies a striking reversal, in that the driving sensory signal is really just providing corrective feedback on the emerging top-down predictions. As ever-active prediction engines, these kinds of minds are not, fundamentally, in the business of solving puzzles given to them as inputs. Rather, they are in the business of keeping us one step ahead of the game, poised to act and actively eliciting the sensory flows that keep us viable and fulfilled. If this is on track, then just about every aspect of the passive forward-flowing model is false. We are not passive cognitive couch potatoes so much as proactive predictavores, forever trying to stay one step ahead of the incoming waves of sensory stimulation.”

The prediction process is also postulated for motor control. We predict the sensory input that will happen during an action; that information flows from the top down, and error correction controls the accuracy of the movement. The predicted sensory consequences of our actions cause the actions. “The perceptual and motor systems should not be regarded as separate but instead as a single active inference machine that tries to predict its sensory input in all domains: visual, auditory, somatosensory, interoceptive and, in the case of the motor system, proprioceptive. …This erases any fundamental computational line between perception and the control of action. There remains, to be sure, an obvious (and important) difference in direction of fit. Perception here matches neural hypotheses to sensory inputs, and involves “predicting the present”; while action brings unfolding proprioceptive inputs into line with neural predictions. …Perception and action here follow the same basic logic and are implemented using the same computational strategy. In each case, the systemic imperative remains the same: the reduction of ongoing prediction error.”
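The same error-reduction logic run ‘in reverse’ gives motor control: instead of revising the prediction, the system moves so that the proprioceptive input comes to match it. A toy sketch, with one scalar joint position and a made-up gain:

```python
def act(predicted, position, gain=0.5):
    """Reduce prediction error by acting: drag the sensed position
    toward the predicted proprioceptive state rather than revising
    the prediction itself."""
    error = predicted - position
    return position + gain * error   # the movement cancels the error

predicted_state = 1.0   # where the system predicts its limb will be
position = 0.0
for _ in range(20):
    position = act(predicted_state, position)
# the limb ends up where it was predicted to be: the same imperative
# (reduce prediction error) with the opposite direction of fit
```

Perception and action are then literally the same computation, differing only in which side of the error gets adjusted.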

This theory feels comfortable when I think of conversational language. Unlike much of perception and motor control, language is conducted more in the light of conscious awareness. It is (almost) possible to feel a prediction of what is going to be said when listening, and to have work to do in understanding only when there is a surprising mismatch between the expected and the heard word. And when talking, it takes little effort until your tongue makes a slip and has to be corrected.

I am looking forward to browsing through Open MIND now that I know it exists.


Roots of communication

Judith Copithorne image


Twenty or so years ago I took an interest in non-verbal communication and how it interacts with speech. A number of ideas became very clear in my thoughts: we communicate with our whole bodies, whether or not we intend to or even realize we are doing it; the gestures, facial expressions, sounds and postures that we use are evolutionarily very old; and, if we try to consciously plan our non-verbal communication, we are likely to send confusing and ambiguous signals. Communication in language alone, stripped of its non-verbal patterns, has to change from the rules of spoken language to the rules of written language or it can be unintelligible. We rely on non-verbal clues to know in what frame to interpret the words, and on the cadence of speech to organize the connection of words and thoughts.

A recent post by M. Graziano in Aeon (here) is very interesting and worth a read. Here I am just pointing to the central idea of Graziano’s revelation. There is much more of interest in the original post.

Most vertebrates have a personal space which they monitor and protect. If they suspect an invasion of their space, they automatically react. Graziano gives a description of this reaction in primates, which protects vulnerable areas such as eyes, face, neck, and abdomen: “… he squints. His upper lip pulls up, bunching the cheeks towards the eyes. The head pulls down, the shoulders lift, the torso curves, the arms pull across the abdomen or face. A swipe near the eyes or a bonk on the nose might even produce tears, another component of a classical defensive reaction. His grunts begin to be tinged with distress calls.” This is not really communication on the part of the primate whose space has been invaded but a defense of himself that is innate and automatic. However, an observing primate can interpret the reaction as meaning that the defending primate actually, honestly feels threatened. Slowly, through evolution, this reaction, and parts of it, can become signals and symbols useful in communication.

In Graziano’s theory, smiles are a mild version of the facial defense of the eyes. A smile simply communicates friendliness and a lack of aggression by mimicking defense as opposed to offense. An exchange of smiles establishes a mutual non-aggression state. Even though we might think that showing teeth is aggressive, it is part of protecting the eyes. This can be seen more clearly in genuine smiles, which start with squinting around the eyes, than in polite or faked smiles, which start with the lifting of the lip.

Play is the situation giving rise to laughter in Graziano’s thinking. Play is governed in mammals by signals that keep the action from getting dangerous even when it looks dangerous, like the safe words in S&M. These signals are universal enough that the young of different species can rough and tumble together without mishap. Laughter mimics the defense of personal space with a facial expression similar to a smile, along with a stereotypical noise somewhat like an alarm cry. When it is intense, there is a protection of the abdomen by bending forward and putting the arms across the stomach. A laugh seems to indicate that the defenses of the personal space have been breached. Someone has reached in and tickled protected parts of the body, or something, a joke perhaps, has surprised you. You are allowing the game to invade your space because you are enjoying it, and the laugh communicates that.

Then there is crying. Now the communication is “enough”, because I am hurt. If it is intense there is a sobbing cry and lots of tears; the hands protect the eyes and the defensive posture curls into a little ball. (Laughter can even turn into crying if it is strong enough.) Tears ask for relief and comfort – and they usually get it, as all children seem to know.

It is somewhat amazing that so much communication might be made out of one innate reaction through the process of evolution. Being able to effectively communicate is a powerful selective force. “And why should so many of our social signals have emerged from something as seemingly unpromising as defensive movements? This is an easy one. Those movements leak information about your inner state. They are highly visible to others and you can rarely suppress them safely. In short, they tattletale about you. Evolution favours animals that can read and react to those signs, and it favours animals that can manipulate those signs to influence whoever is watching. We have stumbled on the defining ambiguity of human emotional life: we are always caught between authenticity and fakery, always floating in the grey area between involuntary outburst and expedient pretence.”

Another look at consciousness

Judith Copithorne image

There is an interesting new paper with a proposed model of consciousness (Michael H. Herzog, Thomas Kammer, Frank Scharnowski. Time Slices: What Is the Duration of a Percept? PLOS Biology, 2016; 14 (4): e1002433 DOI: 10.1371/journal.pbio.1002433). It reviews various theories and experiments in the literature on the subject. Their model is similar to how I have viewed consciousness for a few years, but with important and interesting differences.

They view consciousness as non-continuous, like the frames of a movie, which has seemed the only way to look at consciousness that fits what we know of what appears to happen in the brain. They do not deal with the neurology, though, and give space to the reasons why people have resisted discrete frames and clung to continuous consciousness.

Another aspect of their theory that I was glad to see is that the heavy lifting of perception is done unconsciously. The final product of the unconscious processing is a ‘frame’ of consciousness. This fits with the notion that there is no conscious mind in the sense that we usually think of a mind; there is only the unconscious mind, or simply the mind. Consciousness is a presentation: a moment of experience to remember, a global awareness of a percept.

I have in the past thought of a best-fit-scenario end point of perception, the stable point that would end the iterations of a complex analog computation and be the perception on which consciousness is based. The authors talk of Bayesian statistical computations stopping when they reach an ‘attractor’. This seems the same basic idea but more amenable to experimentation and modeling.

During the unconscious processing period, the brain collects information to solve the ill-posed problems of vision, for example, using Bayesian priors. The percept is the best explanation in accordance with the priors given the input. … One important question is how the brain “knows” when unconscious processing is complete and can be rendered conscious. We speculate that percepts occur when processing has converged to an attractor state. One possibility is that hitting an attractor state leads to a signal that renders the content conscious, similarly to, for example, broadcasting in the global workspace theory. … Related questions are the role of cognition, volition, and attention in these processes. We speculate that these can strongly bias unconscious processing towards specific attractor states. For example, when viewing ambiguous figures, a verbal hint or shifting attention can bias observers to perceive either one of the possible interpretations, each corresponding to a different attractor state.
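The attractor idea in that passage can be caricatured with a toy competition between two interpretations of an ambiguous figure. Everything here is invented for illustration (the growth rule, the constants, the way a “hint” biases the start); it only shows how a small initial bias can settle the system into one of two stable states.

```python
# Toy sketch: two interpretations of an ambiguous figure compete
# until the system settles into an attractor state (illustrative only).

def settle(evidence_a, evidence_b, hint_a=0.0, steps=100):
    a, b = 0.5 + hint_a, 0.5 - hint_a       # a verbal hint biases the start
    for _ in range(steps):
        a *= 1 + 0.5 * evidence_a * a       # rich-get-richer growth,
        b *= 1 + 0.5 * evidence_b * b       # weighted by the evidence
        total = a + b                       # normalization = competition
        a, b = a / total, b / total
    return a                                # near 1.0: interpretation A won

# With equal evidence, a small hint decides which attractor is reached.
print(settle(1.0, 1.0, hint_a=0.05))
print(settle(1.0, 1.0, hint_a=-0.05))
```

The equal-evidence case is the interesting one: the balanced state is unstable, so any nudge (attention, a hint) sends processing to one attractor or the other, which is roughly what the authors speculate about ambiguous figures.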

The most interesting idea (to me) is that the conscious percept is not a snapshot in a series of snapshots but a constructed slab or slice of time in a series of slices. The frames are of short duration but represent slices of time rather than moments. The implication is that we do not, in any sense, have a direct experience of the world, but a highly processed and codified one.

All features become conscious simultaneously, and the percept contains all the feature information derived from the various detectors. Hence, (a green line) is not actually consciously perceived as green during its actual (sensual stimulus) but later when rendered conscious. The same holds true for temporal features. The stimulus is not perceived during the 50 ms when it is presented. The stimulus is even not perceived for a duration of 50 ms. Its duration is just encoded as a “number,” signifying that the duration was 50 ms in the same way that the color is of a specific hue and saturation.

I hope this paper stimulates some ingenious experimentation.

 

The opposite trap

Judith Copithorne image

I vaguely remember as a child that one of the ways to learn new words and get some understanding of their meaning was to learn pairs of words that were opposites. White and black, day and night, left and right, and endless pairs were presented. But in making learning easier for children, this model of how words work makes learning harder for adults.

There are ideas that people insist on seeing as opposites – more of one dictates less of the other. They can be far from opposite but it is difficult for people to abandon this relationship. It seems that a mechanism we have for words is making our understanding of reality more difficult. An example is economy and environment. The notion that what is good for the environment has to be bad for the economy and vice versa is not strictly true because there are actions that are good for both and actions that are bad for both, as well as the actions that favour only one. We do not seem to look for the win-win actions and even distrust people who do try.

Another pair is nurture against nature, or environment against genetics. These are simply not opposites; they are not even a little bit so. Almost every feature of our bodies is under the overlapping control of our genetics and our environment. They are interwoven factors. And it is not just our current environment but our environmental history, and also that of our parents and sometimes our grandparents, that is mixed in with our genetics.

In thinking about our thoughts and actions, opposites just keep being used. We are given a picture of our heads as venues for various parts of our minds to engage in wars and wrestling matches. We can start with an old one: mind versus brain, or non-material mental versus material neural dualism. This opposition is almost dead but its ghost walks still. Some people divide themselves at the neck and ask whether the brain controls the body or the body controls the brain – and they appear to actually want a clear-cut answer. There is the opposition we inherited from Freud: a thought process that is conscious and one that is unconscious, presented as two opposed minds (or three in the original theory). This separation is still with us, although it has been made more reasonable in the form of System 1 and System 2 thinking. System 2 uses working memory and is therefore registered in consciousness; it is slow, takes effort, is limited in scope and is sequential. System 1 does not use working memory and therefore does not register in consciousness; it is fast, automatic, can handle many inputs and is not sequential. These are not separate minds but interlocking processes. We use them both all the time and not in opposition. But they are often presented as opposites.

Recently, there has been added a notion that the hemispheres of the brain can act separately and in opposition. This is nonsense – the two hemispheres complement each other and cooperate in their actions. But people seem to love the idea of one dominating the other and so it does not disappear.

It would be easier to think about many things without the tyranny of some aspects of language, like opposites, that we learn as very young children and have to live with for the rest of our lives. The danger is not in naming the two ends of a spectrum; but when we name two states as mutually exclusive, they had better actually be so or we will have problems. It is fine to label a spectrum from left-handed to right-handed, but if these were true opposites then all the levels of ambidextrous handedness would be a problem. The current problem with LGBT rights would be smaller if the difference between women and men were viewed as a complex of a few spectra rather than a single pair of opposites.

Neuroscience and psychology need to avoid repeatedly falling into opposite-traps. These fields still have too many confusions, errors, things to be discovered, dots to be connected and old baggage to be discarded.

Thanks Judith for the use of your image

 

Out of the box

I have not been reading science reports as much of late and have not been writing. My mind has wandered to less conventional ideas. I hope you find them entertaining and maybe a little useful.

Because we got stuck years ago with a computer model for thinking about the brain, we may have misjudged the importance of memory. It is seen as a storage unit. Memory has been shown to be a very active thing, but it is still seen as an active storage thing. We know it is involved with learning and imagining as well as recalling, but those thinking functions are seen as just ways we may use what is remembered. No matter how people think of the brain or the mind, memory stays over to the side as a separate store. Even though there are many types of memory (implicit, explicit and working, for a start), they are still just storage. They are seen as the RAM and hard disks of the mind.

Suppose (just for an exercise) that we had put memory in the role of an operating system when we first started using the computer model to get our bearings on thought. Think of it as a form of Windows rather than a hard disk. Actually, this is not as far-fetched as you may think. There is a system called MUMPS which runs on a computer without any other operating system under it; it consists of a single large data storage structure and a computer language to use that data. It was invented in the ’60s and is still used in many medical computer systems because it is very fast, and accurate in the sense that it does not impose format restrictions on the data. I am not supposing that the brain is like MUMPS, far from it; I am simply pointing out that there is more than one way to view the role of memory.
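For readers who have never met MUMPS: its data lives in sparse hierarchical arrays (“globals”) addressed by subscript paths, with no schema imposed in advance. A rough Python analogue of that one idea (the dictionary contents and the `tree` helper are invented for illustration; real MUMPS syntax looks like `SET ^Patient(123,"name")="Smith"`):

```python
# Rough analogue of a MUMPS-style hierarchical "global" in Python.
from collections import defaultdict

def tree():
    """A nested, schema-free store: any subscript path is valid."""
    return defaultdict(tree)

globals_store = tree()

# No format restrictions: structure is created on first use.
globals_store["Patient"][123]["name"] = "Smith"
globals_store["Patient"][123]["visits"][1]["date"] = "2016-04-01"

print(globals_store["Patient"][123]["name"])
```

The point of the analogy is that here the data structure *is* the system; there is no separate storage layer underneath a distinct program layer.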

So – back to the ‘what if’.

The interesting thing about the brain is its plasticity. The changes are not rare or special but are happening all the time. Whatever the brain does leaves it changed a bit. The greatest producers of change are remembering, learning, imagining, recalling – anything that involves the memory. Every time one neuron causes another neuron to fire, the synapse between those two neurons is strengthened. Remembering changes the connectivity of the brain; in computer terms, it changes the architecture of the hardware.
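This “fire together, wire together” rule can be sketched in a few lines. The update rule and constants here are illustrative inventions, not biological values; the sketch only shows use-dependent strengthening that saturates.

```python
# Minimal Hebbian sketch: co-activation strengthens a synapse
# (illustrative rule and constants, not biological values).

def hebbian_update(weight, pre_fired, post_fired, rate=0.1):
    """Strengthen the synapse when the pre- and post-synaptic
    neurons fire together; otherwise leave it unchanged."""
    if pre_fired and post_fired:
        weight += rate * (1 - weight)   # saturating strengthening
    return weight

w = 0.2
for _ in range(5):                      # five co-activations
    w = hebbian_update(w, True, True)
print(round(w, 3))                      # stronger than the initial 0.2
```

Run enough such updates across billions of synapses and the point in the text follows: every act of remembering literally rewires the hardware.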

Connecting separate memories (memory integration) is how we make inferences; chains of inferences lead to decisions. If my memory A is connected to B, and B is connected to C, then C and A can be connected. That is the sort of thing that happens when we think. Recognition is also a memory function. If I say it is greenish, you might think of vegetation or Ireland or toys. If it is upside down, it is not Ireland, but other things become more likely. But if I then say that it is furry – well then it is likely to be a sloth or some silly soft toy. Saying that it moves slowly would clinch it. The word green is connected to a great many other words, and so is upside down, but their intersection is small. It gets tiny when it must also overlap with the fur connections.
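The sloth example is, at bottom, set intersection over webs of association. A toy version (the association sets are entirely made up; real associative webs would be vastly larger and weighted):

```python
# Sketch of recognition as intersection of association sets
# (the sets here are made up for illustration).
associations = {
    "greenish":     {"vegetation", "Ireland", "toy", "sloth"},
    "upside down":  {"bat", "toy", "sloth", "acrobat"},
    "furry":        {"cat", "bat", "toy", "sloth"},
    "moves slowly": {"sloth", "snail"},
}

cues = ["greenish", "upside down", "furry", "moves slowly"]
candidates = set.intersection(*(associations[c] for c in cues))
print(candidates)  # each added cue shrinks the set of candidates
```

Each cue alone is connected to many things; it is the shrinking intersection that does the recognizing.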

Memory is waiting to help. When I am someplace, doing something with some aim, everything I sense and everything I know about the place and the activity – all the memories that may be useful to me – are alerted and stand primed, ready to be used. I would not be aware of all these alerted memories until I use them, and even then I might be unaware of them. It is actually extremely difficult (probably impossible) to have memory-free thoughts. Even something like vision is not just stimuli processed into an image; it is wrapped in memories that connect one moment with the next, predict what will come next, identify objects and give meaning to the image.

The mechanisms that store memories appear to provide our sense of place, the consecutive order of events, the flow of time and the assigning of cause and effect links. It even involves part of our sense of self. We either store memories that way because that is how we understand the world or we understand the world that way because of how we remember it. These sound like two opposed ideas but really are the same idea if memory is in effect our ‘operating system’.

We can see memory as the medium of our thoughts and the mechanisms for using memories as part of our cognition. But it could be seen as even more fundamental than that. We live in a model of the world, and of ourselves in that world. We project that model around us. We seem to view the projection through a hole in our heads from a vantage point a couple of inches behind the bridge of the nose. It is not just visual but includes sound and other senses. This model houses our consciousness but also our recollections and our imaginings. It is a sort of universal pattern or framework for consciousness, memory and a fair bit of cognition. It seems possible that this framework and the elements in it may be one of the ways that different parts of the brain share information. (Like Baars’s global workspace and similar theories.)

But what could be the connection between consciousness and explicit memory? Again we can look at something more familiar – a tape recorder. The little head with its gap writes on the tape as the tape passes by. Very close to the writing head there is another head that reads the tape. Through this head and earphones the tape can be monitored almost simultaneously with the sounds being recorded – but what is heard is the sound that has just been recorded, read back from the tape. This may be what consciousness is: an awareness of what has just been put in memory. That is something to think about.
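The tape-recorder analogy reduces to a write head and a read head that trails it by one position. A toy sketch (the class and method names are invented for this illustration):

```python
# Toy tape recorder: the read head trails the write head, so
# "monitoring" always reports what was just recorded (illustrative).

class Tape:
    def __init__(self):
        self.tape = []

    def record(self, sample):
        """Write head: lay the sample down on the tape."""
        self.tape.append(sample)
        return self.monitor()           # monitoring trails the recording

    def monitor(self):
        """Read head: report the most recently written sample."""
        return self.tape[-1] if self.tape else None

t = Tape()
print(t.record("a moment of experience"))  # hears what was just written
```

On this analogy, the monitor never hears the sound itself, only the freshly written record of it, which is the point being made about consciousness.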