Monthly Archives: May 2016

A sense of rhythm

A recent scientific press release opens with the sentence, “A sense of rhythm is a uniquely human characteristic.” I am used to this sort of thing in opening sentences; I think to myself that they surely have no evidence for that statement: they have not studied most animals, done a literature search, or watched the videos of parrots dancing on the backs of chairs. Never mind, it is just an opener; read on.

But the next paragraph starts, “What most people call the sense of rhythm — the mechanism that enables us to clap along or dance to music — is an intangible ability that is exclusive to human beings.” So it is not just the usual unexamined opener. And to top it off, the third paragraph starts with, “Human beings are the only species that recognise these patterns and scientists suspect that an evolutionary development is at the root of it.” Well I am not convinced that they have even thought much about these statements.

I find it very difficult to believe that anything is really, purely, uniquely human. The first assumption, until proven false, should be that our anatomy, genome, behavior and so on are part of the general characteristics of mammals. There will be other examples, or very similar examples, or at least the unelaborated roots of any human ability, to be found in some other animals. That is an assumption almost forced on us by the nature of evolution. Yet so many resist this approach and assume uniqueness without evidence of it.

Having a sense of rhythm would be very useful to many animals in their ordinary lives. And rhythms of many kinds occur in all living bodies. Movement in particular is rhythmic (perhaps particularly for swinging through trees). It would be something of a miracle if being able to entrain to a beat was not found in any other animal – just unbelievable.

And this reminds me of how annoying it is to still run across the rule against being anthropomorphic. It is not that we should assume that animals are like us in their mental lives without testing the idea. But it is also wrong to assume the opposite without testing. If it looks like a duck, walks like a duck and quacks like a duck, hey, it just maybe is a duck. If the only way I can understand and predict the actions of my dog is to assume she has emotions similar to mine, then my tentative assumption is that she has those emotions. The rule against seeing similarities between ourselves and other animals shows a level of misunderstanding of both.

The need to see ourselves as unique and as fundamentally different from other animals is a left-over from the old belief in a hierarchy of life with man at the pinnacle. It is about time we got over this notion, just as we had to get over being the center of the universe. Our biggest difference from the rest of the animal world is the extent of our culture, not our basic biology. Other animals have consciousness, memory, emotion and intelligence, just like us. We are all different (each unique in our own way) but as variations on a theme: the fundamental vertebrate plan is the starting point for all vertebrates. And I would bet money that a sense of rhythm is part of that basic plan.

A look at colour

Judith Copithorne image

Back to the Open MIND collection and a paper on colour vision (Visual Adaptation to a Remapped Spectrum – Grush, Jaswal, Knoepfler, Brovold) (here). The study has some shortcomings, which the authors point out. “A number of factors distinguish the current study from an appropriately run and controlled psychological experiment. The small n and the fact that both subjects were also investigators in the study are perhaps the two most significant differences. These limitations were forced by a variety of factors, including the unusual degree of hardship faced by subjects, our relatively small budget, and the fact that this protocol had never been tried before. Because of these limitations, the experiments and results we report here are intended to be taken only as preliminary results—as something like a pilot study. Even so, the results, we believe, are quite interesting and suggestive.” To quote Chesterton, if a thing is worth doing, it is worth doing badly.

The researchers used LCD goggles driven by a video camera so that the scene the subject saw was shifted in colour. The shift was 120 degrees around a colour wheel (red to blue, green to red, yellow to purple). The result was blue tomatoes, lilac people, and green sky. (video) The study lasted a week: one subject wore the gear all the time he was not in the dark, while the other wore it for several hours each day and had normal vision the rest of the time. How did they adapt to the change in colour?
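The remapping itself can be pictured as a rigid rotation of the hue wheel. Here is a minimal Python sketch of that kind of 120-degree rotation, using the standard-library colorsys module; it is my own illustration of the transform described above, not the authors’ actual goggle software.

```python
# A 120-degree hue rotation of the kind the goggles performed (a sketch,
# not the study's software). A -120 degree shift (equivalently +240)
# maps red to blue, green to red, and yellow to purple/magenta.
import colorsys

def rotate_hue(r, g, b, degrees=-120):
    """Rotate an RGB colour (components in 0..1) around the hue wheel."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + degrees / 360.0) % 1.0      # hue is stored as a 0..1 fraction
    return colorsys.hsv_to_rgb(h, s, v)

for name, rgb in [("red", (1.0, 0.0, 0.0)),
                  ("green", (0.0, 1.0, 0.0)),
                  ("yellow", (1.0, 1.0, 0.0))]:
    shifted = tuple(round(c, 3) for c in rotate_hue(*rgb))
    print(name, "->", shifted)   # red->blue, green->red, yellow->magenta
```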

Colour constancy is an automatic correction the visual system makes so that colours do not appear to change under different lighting conditions: sunlight, twilight, candle light, fluorescent lamps and so on. What perception is aiming at is the characteristic of the surface that is reflecting the light, not the nature of the light itself. Ordinarily we are completely unaware of this correction. The colour-shifting gear disrupted colour constancy until the visual system adapted to the new spectrum.
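For a flavour of how such a correction can be computed at all, here is a minimal sketch of the classic “grey-world” heuristic for colour constancy. It is a standard textbook algorithm offered only as an analogy; the paper does not claim this is the visual system’s mechanism.

```python
# Grey-world colour constancy (a textbook heuristic, not the brain's actual
# mechanism): assume the scene averages to grey, estimate the illuminant
# from the per-channel means, and divide it out.
def grey_world(pixels):
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]  # est. illuminant
    grey = sum(means) / 3.0
    return [tuple(p[c] * grey / means[c] for c in range(3)) for p in pixels]

# A reddish illuminant tints every surface; the correction removes the cast.
scene = [(0.9, 0.5, 0.4), (0.6, 0.3, 0.2), (0.3, 0.15, 0.1)]
for corrected in grey_world(scene):
    print(tuple(round(c, 2) for c in corrected))
```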

“We did not test color constancy in any controlled way, but the subjective reports are quite unmistakable. Subject RG noticed that upon first wearing the rotation gear color constancy went “out the window.” To take one example, in normal conditions RG’s office during the day is brightly lit enough that turning on the fluorescent light makes no noticeable difference to the appearance of anything in the office. But when he turned the lights on after first donning the gear, everything had an immediate significant change of hue (though not brightness). He spent several minutes flipping the light on and off in amazement. Another example is that he also noticed that when holding a colored wooden block, the surfaces changed their apparent color quite noticeably as he moved it and rotated it, as if the surfaces were actively altering their color like a chameleon. This was also a source of prolonged amusement. However, after a few days the effect disappeared. Turning the office light on had little noticeable effect on the color of anything in his office, and the surfaces of objects resumed their usual boring constancy as illumination conditions or angles altered.” Interestingly, the subject who wore the gear only part of each day never lost his normal colour constancy as he adapted to the shifted one, but the subject who wore the gear all the time had to re-adapt when he took it off, although that took much less time than the adaptation when the gear was first put on. I have often wondered how difficult it would be to lose this correction, and for a while I used a child’s prism toy to look at the uncorrected colours of various shadows.

Did an adaptation occur that brought the colours back to their original appearance? Did the blue tomatoes start to look red again? It seems not, at least in this study. But again there were some interesting events.

“On two occasions late into his six-day period of wearing the gear, JK went into a sudden panic because he thought that the rotation equipment was malfunctioning and no longer rotating his visual input. Both times, as he reports it, he suddenly had the impression that everything was looking normal. This caused panic because if there was a glitch causing the equipment to no longer rotate his visual input, then the experimental protocol would be compromised. … However, the equipment was not malfunctioning on either occasion, a fact of which JK quickly convinced himself both times by explicitly reflecting on the colors that objects, specifically his hands, appeared to have: “OK, my hand looks purplish, and purple is what it should look like under rotation, so the equipment is still working correctly.” … the lack of a sense of novelty or strangeness made him briefly fear … He described it as a cessation of a “this is weird” signal.”

Before and after the colour adaptation period, they tested the memory-colour effect. This is done by adjusting the colour of an object until it appears a neutral grey. If the object always has a particular colour (bananas are yellow) then people over-correct and move the colour past the neutral grey point. “One possible explanation of this effect is that when the image actually is grey scale, subjects’ top-down expectations about the usual color make it appear (in some way or another) to be slightly tinted in that hue. So when the image of the banana is actually completely grey scale subjects judge it to be slightly yellow. The actual color of the image must be slightly in the direction opposite yellow (periwinkle) in order to cancel this top-down effect and make the image appear grey. This is the memory-color effect.” This effect was slightly reduced after the experiment, as if bananas were no longer expected to be as yellow as they had been before.
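The logic of the measurement can be put in a toy calculation (entirely my own illustrative numbers, on a single hypothetical blue-yellow axis): if expectation adds a small yellow tint to whatever is shown, then the physical setting the subject accepts as grey must sit slightly on the blue side, and the size of that offset is the memory-colour effect.

```python
# Toy memory-colour measurement on one blue-yellow axis
# (negative = bluish, positive = yellowish; numbers are hypothetical).
memory_bias = 0.08                      # top-down "bananas are yellow" tint

def perceived(actual_tint):
    return actual_tint + memory_bias    # expectation tints the percept

# Find the physical tint the subject would set as "grey" (perceived == 0).
settings = [i / 100 for i in range(-20, 21)]
grey_setting = min(settings, key=lambda t: abs(perceived(t)))
print(grey_setting)                     # -0.08: past grey, toward blue
```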

They also looked at other aspects of adaptation. “As we found, aesthetic judgments had started to adapt, … And though we did not find evidence of semantic adaptation, it would be quite surprising, given humans’ ability to learn new languages and dialects, if after a more extended period of time semantic adaptation did not occur.” They do not have clear evidence to say anything about qualia versus enactive adaptation, but further similar experiments may provide good evidence.

Emotional communication

Judith Copithorne image

It has been suspected for many years that if the body is forced to experience the signs of an emotion then the emotion will be felt. So… when we feel an emotion we will have a particular bodily expression of that emotion; and, if we have the bodily expression of an emotion we feel the emotion. If we are happy we smile and if we smile we will feel happy. This connection does not need to be obvious – if we are a tiny bit happy we will make a tiny bit of a smile and a tiny smile can increase our happiness a tiny bit.

A definitive experiment was done on this connection (Strack, Martin, Stepper; 1988; “Inhibiting and Facilitating Conditions of the Human Smile: A Nonobtrusive Test of the Facial Feedback Hypothesis”; Journal of Personality and Social Psychology 54 (5): 768–777) and here is the abstract: “We investigated the hypothesis that people’s facial activity influences their affective responses. Two studies were designed to both eliminate methodological problems of earlier experiments and clarify theoretical ambiguities. This was achieved by having subjects hold a pen in their mouth in ways that either inhibited or facilitated the muscles typically associated with smiling without requiring subjects to pose in a smiling face. Study 1’s results demonstrated the effectiveness of the procedure. Subjects reported more intense humor responses when cartoons were presented under facilitating conditions than under inhibiting conditions that precluded labeling of the facial expression in emotion categories. Study 2 served to further validate the methodology and to answer additional theoretical questions. The results replicated Study 1’s findings and also showed that facial feedback operates on the affective but not on the cognitive component of the humor response. Finally, the results suggested that both inhibitory and facilitatory mechanisms may have contributed to the observed affective responses.” The important aspect of this study is that the subjects did not think they were mimicking a smile or a frown, or that they were being tested for their emotional state.

It later became clear that the reason emotions are somewhat contagious is that we mimic others’ bodies and expressions. When someone smiles at us, we are inclined to smile back, and it is very difficult to completely inhibit the return of a smile. This seems to be a form of communication: we read others, and others read us, by our bodily emotional expressions.

What does failure to express an emotion with the body do? It can inhibit the emotion. It was found that people with facial paralysis that interfered with smiling showed increased symptoms of depression, while people with botox treatments that interfered with frowning had their depression symptoms decreased (Lewis et al. 2009, Journal of Cosmetic Dermatology).

And now it has been found that interference with the bodily expression of emotion can interfere with understanding the emotions of others. It is when we mimic another’s facial expression that we can understand their state of mind.

A recent paper shows this effect. (Baumeister, Papa, Foroni; “Deeper than skin deep – The effect of botulinum toxin-A on emotion processing”; Toxicon, 2016; 118: 86) Here are the highlights and the abstract:

Highlights

  • Effect of facial Botox use on perception of emotional stimuli was investigated.
  • Particularly perception of slightly emotional stimuli was blunted after Botox use.
  • The perception of very emotional stimuli was less affected.
  • After Botox use, reaction times to slightly emotional stimuli increased.
  • Specifically weakly emotional stimuli seem to benefit from facial feedback.

“The effect of facial botulinum Toxin-A (BTX) injections on the processing of emotional stimuli was investigated. The hypothesis, that BTX would interfere with processing of slightly emotional stimuli and less with very emotional or neutral stimuli, was largely confirmed. BTX-users rated slightly emotional sentences and facial expressions, but not very emotional or neutral ones, as less emotional after the treatment. Furthermore, they became slower at categorizing slightly emotional facial expressions under time pressure.”

The press release for this paper (here) gives more details. “The thankfully temporary paralysis of facial muscles that this toxin causes impairs our ability to capture the meaning of other people’s facial expressions. … The idea (embodied cognition) is that the processing of emotional information, such as facial expressions, in part involves reproducing the same emotions on our own bodies. In other words, when we observe a smile, our face too tends to smile (often in an imperceptible and automatic fashion) as we try to make sense of that expression. However, if our facial muscles are paralyzed by Botox, then the process of understanding someone else’s emotion expression may turn out to be more difficult.”

It is not about rules

The question of the trolley has always bothered me. You probably have encountered the scenario many times. You are on a bridge over a trolley track with another person you do not know. There are 5 people on the track some way off. A run-away trolley is coming down the track and will hit the 5 people. Do you allow this to happen or do you throw the person beside you onto the track in front of the trolley to stop it? This question comes in many versions and is used to categorize types of moral reasoning. My problem is that I do not know what I would do in the few seconds I would have to consider the situation and I don’t believe that others know either.

In another dip into Open MIND (here) I find a paper on morality by Paul Churchland, “Rules: The Basis of Morality?”. This is the abstract:

Most theories of moral knowledge, throughout history, have focused on behavior-guiding rules. Those theories attempt to identify which rules are the morally valid ones, and to identify the source or ground of that privileged set. The variations on this theme are many and familiar. But there is a problem here. In fact, there are several. First, many of the higher animals display a complex social order, one crucial to their biological success, and the members of such species typically display a sophisticated knowledge of what is and what is not acceptable social behavior—but those creatures have no language at all. They are unable even to express a single rule, let alone evaluate it for moral validity. Second, when we examine most other kinds of behavioral skills—playing basketball, playing the piano, playing chess—we discover that it is surpassingly difficult to articulate a set of discursive rules, which, if followed, would produce a skilled athlete, pianist, or chess master. And third, it would be physically impossible for a biological creature to identify which of its myriad rules are relevant to a given situation, and then apply them, in real time, in any case. All told, we would seem to need a new account of how our moral knowledge is stored, accessed, and applied. The present paper explores the potential, in these three regards, of recent alternative models from the computational neurosciences. The possibilities, it emerges, are considerable.

Apes, wolves/dogs, lions and many other intelligent social animals appear to have a moral sense without any language. They have ways of behaving that show cooperation, empathy, trust, fairness, sacrifice for the group and punishment of bad behavior. They train their young in these ways. No language codifies this behavior. Humans who lose their language through brain damage and cannot speak or comprehend language still have other skills intact, including their moral sense. People who are very literate and very moral often cannot give an account of their moral rules; some can only put forward the Golden Rule. If we were actually using rules, they would be able to report them.

We should consider morality a skill that we learn rather than a set of rules. It is a skill that we learn and continue learning throughout our lives, one that can take into account a sea of detail and nuance and that is lightning fast compared with finding the right rule and applying it. “Moral expertise is among the most precious of our human virtues, but it is not the only one. There are many other domains of expertise. Consider the consummate skills displayed by a concert pianist, or an all-star basketball player, or a grandmaster chess champion. In these cases, too, the specific expertise at issue is acquired only slowly, with much practice sustained over a period of years. And here also, the expertise displayed far exceeds what might possibly be captured in a set of discursive rules consciously followed, on a second-by-second basis, by the skilled individuals at issue. Such skills are deeply inarticulate in the straightforward sense that the expert who possesses them is unable to simply tell an aspiring novice what to do so as to be an expert pianist, an effective point guard, or a skilled chess player. The knowledge necessary clearly cannot be conveyed in that fashion. The skills cited are all cases of knowing how rather than cases of knowing that. Acquiring them takes a lot of time and a lot of practice.”

Churchland then describes how the neural basis of this sort of skill is possible (along with perception and action). He uses a model of Parallel Distributed Processing (PDP), in which a great deal of input can quickly be transformed into a perception or an action. It is an arrangement that learns skills. “It has to do with the peculiar way the brain is wired up at the level of its many billions of neurons. It also has to do with the very different style of representation and computation that this peculiar pattern of connectivity makes possible. It performs its distinct elementary computations, many trillions of them, each one at a distinct micro-place in the brain, but all of them at the same time. … a PDP network is capable of pulling out subtle and sophisticated information from a gigantic sensory representation all in one fell swoop.” I found Churchland’s explanation very clear and to the point, though I suspect he was using AI ideas of PDP rather than biological ones in order to be easily understood. If you are not familiar with parallel processing ideas, this paper is a good place to find a readable starting explanation.
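To make the contrast with rule-following concrete, here is a tiny PDP-style network in Python with NumPy. It is my own toy, not the model from Churchland’s paper: it learns XOR, a mapping no single linear rule captures, purely through repeated practice that adjusts connection weights, and its “knowledge” ends up distributed across those weights rather than stated anywhere as a rule.

```python
# A minimal PDP-style network: knowledge lives in the weights, learned by
# practice, with every unit in a layer computed in parallel.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR: no single linear rule

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)     # input -> hidden connections
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)     # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                   # practice: repeat and adjust weights
    h = sigmoid(X @ W1 + b1)             # all hidden units computed at once
    out = sigmoid(h @ W2 + b2)
    g_out = (out - y) * out * (1 - out)  # error signal at the output
    g_h = (g_out @ W2.T) * h * (1 - h)   # error passed back through the weights
    W2 -= h.T @ g_out; b2 -= g_out.sum(axis=0)
    W1 -= X.T @ g_h;   b1 -= g_h.sum(axis=0)

print(out.round(2).ravel())              # approaches [0, 1, 1, 0]
```

No rule for XOR is ever written down anywhere in the program; the skill is simply the settled pattern of weights, which is the point of the PDP picture.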

Another slight quibble with the paper: he does not point out that some elements of morality appear to be inborn, and those elements probably steer the moral learning process. Babies often seem to ‘get it’ prior to the experience needed to develop and improve the skill.


A prediction engine

Judith Copithorne image

I have just discovered a wonderful source of ideas about the mind, Open MIND (here), a collection of essays and papers edited by Metzinger and Windt. I ran across mention of it in Deric Bownds’ blog (here). The particular paper that Bownds points to is “Embodied Prediction” by Andy Clark.

Clark argues that we look at the mind backwards. The everyday view of the working of the brain is: sensory input is used to create a model of the world, which prompts a plan of action, which is used to create an action. He argues for the opposite: action forces the nature of the sensory input we seek, that sensory input is used to correct an existing model, and it is all done by prediction. The mind is a predicting machine; the process is referred to as PP (predictive processing). “Predictive processing plausibly represents the last and most radical step in this retreat from the passive, input-dominated view of the flow of neural processing. According to this emerging class of models, naturally intelligent systems (humans and other animals) do not passively await sensory stimulation. Instead, they are constantly active, trying to predict the streams of sensory stimulation before they arrive.” Rather than a bottom-up flow of sensory information, the theory has a top-down flow of the current model of the world (in effect, what the incoming sensory data should look like). All that is fed back upward are the error corrections, where the incoming sensory data differ from what is expected. This seems a faster, more reliable, more efficient system than the conventional one. The only effort needed is to deal with the surprises in the incoming data. Prediction errors are the only sensory information still to be explained, the only place where the work of perception is required most of the time.
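A cartoon version of the idea, in a dozen lines (my own sketch, not anything from Clark’s paper): the system keeps an estimate of a single hidden quantity, its prediction flows “down”, and only the mismatch flows back “up” to nudge the model. Once the predictions are good, almost nothing needs to travel upward; the error spikes only when the world surprises.

```python
# Predictive processing as a toy loop: only prediction error flows upward.
def predictive_step(estimate, sensory_input, learning_rate=0.3):
    prediction_error = sensory_input - estimate      # the surprise signal
    return estimate + learning_rate * prediction_error

estimate = 0.0
world = [5.0, 5.0, 5.0, 5.0, 9.0, 9.0, 9.0]          # world changes mid-stream
for sample in world:
    estimate = predictive_step(estimate, sample)
    print(f"estimate {estimate:5.2f}   error {sample - estimate:5.2f}")
# Error shrinks as the model settles, spikes at the surprise, shrinks again.
```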

Clark doesn’t make much of it, but he has a neat way of understanding attention. Many of our eye movements and postural adjustments are seen as ways of selecting the nature of the next sensory input. “Action is not so much a response to an input as a neat and efficient way of selecting the next “input”, and thereby driving a rolling cycle.” As the brain seeks certain information (because of uncertainty, the task at hand, or other reasons), it works harder to resolve the prediction errors pertaining to that particular information, and action is driven towards examining its source. Small prediction errors may be ignored if they are not relevant to current tasks. This looks like an excellent description of the focus of attention to me.
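Attention drops neatly into the same toy: weight each prediction error by how much it matters. The “precision” knob below is my own hypothetical stand-in for task relevance or uncertainty; attended errors pull the model hard, unattended ones barely register.

```python
# Attention as precision-weighting of prediction errors (illustrative only).
def attentive_step(estimate, sensory_input, precision):
    prediction_error = sensory_input - estimate
    return estimate + precision * prediction_error

print(attentive_step(5.0, 9.0, precision=0.9))   # attended: jumps to 8.6
print(attentive_step(5.0, 9.0, precision=0.05))  # ignored: barely moves, 5.2
```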

“Conceptually, this implies a striking reversal, in that the driving sensory signal is really just providing corrective feedback on the emerging top-down predictions. As ever-active prediction engines, these kinds of minds are not, fundamentally, in the business of solving puzzles given to them as inputs. Rather, they are in the business of keeping us one step ahead of the game, poised to act and actively eliciting the sensory flows that keep us viable and fulfilled. If this is on track, then just about every aspect of the passive forward-flowing model is false. We are not passive cognitive couch potatoes so much as proactive predictavores, forever trying to stay one step ahead of the incoming waves of sensory stimulation.”

The prediction process is also postulated for motor control. We predict the sensory input that will occur during an action; that prediction flows from the top down, and error correction controls the accuracy of the movement. The predicted sensory consequences of our actions cause the actions. “The perceptual and motor systems should not be regarded as separate but instead as a single active inference machine that tries to predict its sensory input in all domains: visual, auditory, somatosensory, interoceptive and, in the case of the motor system, proprioceptive. … This erases any fundamental computational line between perception and the control of action. There remains, to be sure, an obvious (and important) difference in direction of fit. Perception here matches neural hypotheses to sensory inputs, and involves “predicting the present”; while action brings unfolding proprioceptive inputs into line with neural predictions. … Perception and action here follow the same basic logic and are implemented using the same computational strategy. In each case, the systemic imperative remains the same: the reduction of ongoing prediction error.”
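The motor reading is the same toy run in the other direction of fit (again my own sketch, not Clark’s formalism): instead of revising the estimate to match the input, the system moves the body until the proprioceptive input matches the prediction. The imperative, reducing prediction error, is identical.

```python
# Active inference cartoon: act so the input comes to match the prediction.
def act_step(arm_position, predicted_position, gain=0.5):
    proprioceptive_error = predicted_position - arm_position
    return arm_position + gain * proprioceptive_error   # move to cancel error

arm = 0.0
predicted = 10.0      # e.g. "my hand is at the cup" as predicted proprioception
for _ in range(6):
    arm = act_step(arm, predicted)
    print(round(arm, 2))  # 5.0, 7.5, 8.75, ... converges on the prediction
```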

This theory sits comfortably with conversational language. Unlike much of perception and motor control, language is conducted more in the light of conscious awareness. When listening, it is (almost) possible to feel the prediction of what is about to be said, and real work of understanding is only needed when there is a surprising mismatch between the expected word and the heard one. And when talking, it all happens without much effort until your tongue makes a slip and has to be corrected.

I am looking forward to browsing through Open MIND now that I know it exists.