Category Archives: motor control

Inner speech is close to uttered speech

There has recently been a paper in eLife by Whitford et al., Neurophysiological evidence of efference copies to inner speech, Dec 2017, doi 10.7554/eLife.28197.001, examining inner speech. They find it behaves very much like overt speech.

When we speak, a series of motor commands is prepared and executed by the mouth, throat and vocal cords. Copies of these commands, called efference copies, are used to predict what the auditory area will hear; the mechanism producing this prediction is called the internal forward model. When incoming sounds match the prediction, the auditory area lowers its response to the speech. This efference copy mechanism applies to other motor commands too and is why we cannot tickle ourselves. The sensory pattern that will result from an action is predicted, so that self-generated sensory input is attenuated compared to input that is not self-generated. In the case of speech, the actual sounds are predicted, and when input arrives at the right time and matches the expected sound, the response to that sound is dampened. This dampening can be measured. The sounds produce a particular brain wave whose amplitude tracks the volume of the sound and which can be seen in EEG traces. It is called N1, indicating that it is the first negative wave evoked by the event. This wave has less amplitude for sounds in self-generated speech than for identical sounds that were not self-generated.
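The logic of this dampening can be sketched in a few lines of code (my toy illustration, not the paper's model; the function names and the 40% suppression figure are invented for the example): a copy of the motor command is run through a forward model to predict the sound, and the auditory response is attenuated only when prediction and input match.

```python
def forward_model(motor_command):
    """Predict the sensory consequence of a speech motor command.
    Hypothetical mapping: the command to articulate 'ba' predicts hearing 'ba'."""
    return motor_command

def auditory_response(heard, efference_copy=None,
                      base_amplitude=1.0, suppression=0.4):
    """Return an N1-like response amplitude for an incoming sound."""
    if efference_copy is not None and forward_model(efference_copy) == heard:
        # self-generated and correctly predicted: the response is dampened
        return base_amplitude * (1.0 - suppression)
    # unpredicted (externally generated) sound: full response
    return base_amplitude

# A self-produced 'ba' evokes a smaller response than an unpredicted 'da'
assert auditory_response("ba", efference_copy="ba") < auditory_response("da", efference_copy="ba")
```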

In their introduction the authors say “…the central aim of the present study is to explore whether N1-suppression, which has consistently been observed in response to overt speech, also occurs in response to inner speech, which is a purely mental action. Inner speech – also known as covert speech, imagined speech, or verbal thoughts – refers to the silent production of words in one’s mind. Inner speech is one of the most pervasive and ubiquitous of human activities; it has been estimated that most people spend at least a quarter of their lives engaged in inner speech. An influential account of inner speech suggests that it ultimately reflects a special case of overt speech in which the articulator organs (e.g., mouth, tongue, larynx) do not actually move; that is, inner speech is conceptualized as ‘a kind of action’. Support for this idea has been provided by studies showing that inner speech activates similar brain regions to overt speech, including audition and language-related perceptual areas and supplementary motor areas, but does not typically activate primary motor cortex. While previous data suggest that inner and overt speech share neural generators, relatively few neurophysiological studies have explored the extent to which these two processes are functionally equivalent. If inner speech is indeed a special case of overt speech – ‘a kind of action’ – then it would also be expected to have an associated internal forward model.” The researchers show that thinking of a particular sound (such as ba) attenuates the N1 signal of an external sound if the two are the same sound at the matching timing. The efference copy and its internal forward model are produced in inner speech too and can dampen an external sound if it matches the internal one.

Here is their abstract. “Efference copies refer to internal duplicates of movement-producing neural signals. Their primary function is to predict, and often suppress, the sensory consequences of willed movements. Efference copies have been almost exclusively investigated in the context of overt movements. The current electrophysiological study employed a novel design to show that inner speech – the silent production of words in one’s mind – is also associated with an efference copy. Participants produced an inner phoneme at a precisely specified time, at which an audible phoneme was concurrently presented. The production of the inner phoneme resulted in electrophysiological suppression, but only if the content of the inner phoneme matched the content of the audible phoneme. These results demonstrate that inner speech – a purely mental action – is associated with an efference copy with detailed auditory properties. These findings suggest that inner speech may ultimately reflect a special type of overt speech.”

This probably explains the nature of ‘hearing voices’. If this mechanism failed and inner speech was not properly predicted, it would appear to be external speech. It would not be ‘owned’ by the individual.

Beta waves

Judith Copithorne image


Brain waves are measured for many reasons and have been linked to various brain activities. But very little is known about how they arise. Are they the result or the cause of the activities they are associated with? How exactly are they produced at a cellular or network level? We simply do not know.

One type of wave, the beta wave (18-25 Hz), is associated with consciousness and alertness. In the motor cortex beta waves are found when muscle contractions are isometric (contractions that do not produce movement) but are absent just prior to and during movement. They are increased during sensory feedback to static motor control and when movement is resisted or voluntarily suppressed. In the frontal cortex beta waves are found during attention to cognitive tasks directed at the outside world. They are found in alert, attentive states, problem solving, judgment, decision making, and concentration. The more involved the cognitive activity, the faster the beta waves.

ScienceDaily reports a press release from Brown University on the work of Stephanie Jones and team, who are attempting to understand how beta waves arise. (here) Three types of study are used: MEG recordings, computer models, and implanted electrodes in animals.

The MEG recordings from the somatosensory cortex (sense of touch) and the inferior frontal cortex (higher cognition) showed a very distinct form for the beta waves, “they lasted at most a mere 150 milliseconds and had a characteristic wave shape, featuring a large, steep valley in the middle of the wave.” This wave form was recreated in a computer model of the layers of the cortex. “They found that they could closely replicate the shape of the beta waves in the model by delivering two kinds of excitatory synaptic stimulation to distinct layers in the cortical columns of cells: one that was weak and broad in duration to the lower layers, contacting spiny dendrites on the pyramidal neurons close to the cell body; and another that was stronger and briefer, lasting 50 milliseconds (i.e., one beta period), to the upper layers, contacting dendrites farther away from the cell body. The strong distal drive created the valley in the waveform that determined the beta frequency. Meanwhile they tried to model other hypotheses about how beta waves emerge, but found those unsuccessful.” The model was tested in mice and rhesus monkeys with implanted electrodes and was supported.
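That two-drive recipe can be roughly illustrated in code (a toy sketch, not the team's biophysical model; all numbers are invented for the example): summing a weak, broad drive with a strong, brief drive of opposite polarity reproduces the qualitative shape of a beta event, a short waveform with a steep valley in the middle.

```python
import numpy as np

t = np.arange(0, 150, 1.0)  # one beta event lasts at most ~150 ms

def drive(center_ms, width_ms, amplitude):
    """A smooth bump of synaptic input centered at center_ms."""
    return amplitude * np.exp(-0.5 * ((t - center_ms) / width_ms) ** 2)

# weak, broad input to the lower layers (proximal dendrites)
proximal = drive(center_ms=75, width_ms=40, amplitude=0.5)
# strong, brief (~50 ms, one beta period) input to the upper layers
# (distal dendrites), pulling current the opposite way
distal = -drive(center_ms=75, width_ms=8, amplitude=1.5)

waveform = proximal + distal  # net current across the cortical column
# the strong distal drive carves the characteristic steep central valley
assert waveform[75] < 0 < waveform[10]
```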

Where do the signals come from that drive the pyramidal neurons? The thalamus is a reasonable guess at the source. The thalamo-cortico-thalamic feedback loop makes exactly those contacts of thalamic axons within the cortical layers. The thalamus is known to produce signals of 50 millisecond duration. All of the sensory and motor information that enters the cortex (except smell) comes through the thalamus. It regulates consciousness, alertness and sleep. It is involved in processing sensory input and in voluntary motor control. It has a hand in language and in some types of memory.

The team is continuing their study. “With a new biophysical theory of how the waves emerge, the researchers hope the field can now investigate whether beta rhythms affect or merely reflect behavior and disease. Jones’s team in collaboration with Professor of Neuroscience Christopher Moore at Brown is now testing predictions from the theory that beta may decrease sensory or motor information processing functions in the brain. New hypotheses are that the inputs that create beta may also stimulate inhibitory neurons in the top layers of the cortex, or that they may saturate the activity of the pyramidal neurons, thereby reducing their ability to process information; or that the thalamic bursts that give rise to beta occupy the thalamus to the point where it doesn’t pass information along to the cortex.”

It seems very clear that understanding of overall brain function will depend on understanding the events at a cellular/circuit level; and that those processes in the cortex will not be understood without including other regions like the thalamus in the models.

Fighting Libet’s experiment

A post in Science of Us in February, by Christian Jarrett, reviews the Libet experiment and recent attempts to overturn its implications. (http://nymag.com/scienceofus/2016/02/a-neuroscience-finding-on-free-will.html) I find the struggle to reverse Libet’s finding to be the result of a mistaken way of viewing thought. Over the last 30 years an enormous amount of effort has gone into failed attempts to show the experiment was flawed. Why are the implications so hard for people to accept?

Here is the first bit of Jarrett’s article (underlining is mine).

Back in the 1980s, the American scientist Benjamin Libet made a surprising discovery that appeared to rock the foundations of what it means to be human. He recorded people’s brain waves as they made spontaneous finger movements while looking at a clock, with the participants telling researchers the time at which they decided to waggle their fingers. Libet’s revolutionary finding was that the timing of these conscious decisions was consistently preceded by several hundred milliseconds of background preparatory brain activity (known technically as “the readiness potential”).

The implication was that the decision to move was made nonconsciously, and that the subjective feeling of having made this decision is tagged on afterward. In other words, the results implied that free will as we know it is an illusion — after all, how can our conscious decisions be truly free if they come after the brain has already started preparing for them?

For years, various research teams have tried to pick holes in Libet’s original research. It’s been pointed out, for example, that it’s pretty tricky for people to accurately report the time that they made their conscious decision. But, until recently, the broad implications of the finding have weathered these criticisms, at least in the eyes of many hard-nosed neuroscientists, and over the last decade or so his basic result has been replicated and built upon with ever more advanced methods such as fMRI and the direct recording of neuronal activity using implanted electrodes.

These studies all point in the same, troubling direction: We don’t really have free will. In fact, until recently, many neuroscientists would have said any decision you made was not truly free but actually determined by neural processes outside of your conscious control.

That is the stumbling block: ‘neural processes outside of conscious control’. That is what some scientists are fighting so hard not to lose. The whole notion of what free will is rests on how we view who we are, what our consciousness is, and how control works.

When we think of who we are, we cannot separate self from non-self within our bodies. We are not really divided at the neck, or between the upper and lower parts of the brain, or between different ‘minds’ co-existing in one skull. This idea of two separate minds, inherited from Freud and others, has never been demonstrated to be true. It has not been shown that we have two distinct thinking minds that are somehow separate. Thinking appears to be a complex, widespread but interconnected and unified affair. Whether a particular thought process becomes conscious or remains non-conscious does not change the basic process of thought.

There is every reason to reject the notion of a separate conscious mind that thinks in a ‘conscious’ manner to produce conscious thoughts. We are aware of thoughts (some thoughts) but we are not aware of the mechanisms that produced the thoughts. We do not metaphorically hear the gears of thought production grinding. We are simply not aware of how thought happens. Consciousness is a form of awareness and probably not much more. There is awareness of some things that go on in the brain but not of all things or even the bulk of things.

So why are some thoughts made conscious while others aren’t? A good guess is that consciousness gives a remembered experience, an episodic memory, or at least the material for such memories. With memories of our actions, it would be important information to remember whether the action was our doing or just happened to us, whether it was accidental or intended, whether it was a choice or coerced, carefully planned or an automatic habit and so on. These pieces of information are important to save and so would be incorporated into conscious events. We need that information to learn from experience. Just because the feeling of having an intent, an urge and then an execution of an action is there in our conscious awareness does not mean that they were a form of conscious control. They are there as important parts of the event that consciousness is recording.

We can still control our actions, and we can still be aware of controlling our actions, but that does not mean that our awareness is producing the control that we are aware of. Consciousness does not produce the tree that I am aware of – it just produces the awareness. And you are just you, and not your awareness of you. There is reality and there are models of reality; there is territory and there are maps of the territory; there is an original and there are copies of the original. There is you and there is your awareness of you. You make decisions (with neural activity) but your awareness of a decision is not the same as making it.

I personally find it a little difficult to understand why this idea of a conscious mind as opposed to a conscious awareness is so strong and indestructible an idea to most people. I cannot remember exactly how or when (it was a gradual process) but some time in my late teens, over 50 years ago, my consciousness became a flickering imperfect movie screen and not a thinking mind. So “determined by neural processes outside of conscious control” is obvious because there is no such thing as conscious control and what is more, it is a comforting rather than alarming viewpoint.

I am assuming that the current experiments aiming to show ‘free won’t’ will not turn out to be any more robust than the attempts to show free will. We shall see.

Synergy and Modular control

When we learned the simple overview of the nervous system in grade school, we were taught that the brain sent signals to muscles to contract and that is how we moved. And by brain, we assumed the thinking part up high in the head. But it cannot be so.

A little deer is born, and in a very short time it is standing, and in a little longer it is taking its first wobbly step. Within a couple of days it is running and frolicking. Deer are not that special; other animals ‘learn’ to get around very quickly too. Even human babies, if they are held upright with their feet touching a surface, will walk along that surface. In a sense, the spinal cord knows how to walk by lifting and moving forward alternate legs. It does not know how to walk well, but the basics are there. Human babies are slower at managing to get around because they are born at a less developed stage and because walking on two legs rather than four is trickier. In all sorts of observations and experiments there is evidence that the ability to walk is innate in the spinal cord and does not require the brain.

The spinal cord has some primitive control modules or muscle synergies. Muscle synergies are present in a number of natural behaviors; they are low-level control networks found in the brain stem and spinal cord that coordinate a group of muscles. They make common movements easier to order up. We have the ‘intent to go over there’ and without any more conscious thought we do it in an automatic way. Now if we had to trigger individual muscles in the right time sequence, it would likely take many hours to get not very far with a number of falls along the way. One could say that we would ‘get the hang of it’ as we did it. But that is saying we would make parts of it automatic (create modules and synergies).

This modularization of motor control is layered. The simplest control is in the spinal cord, but it is modified and adapted to conditions by the brain stem and especially the cerebellum. The cerebellum gets instructions from other parts of the brain and finally these modules within modules are able to execute the simple ‘intention to go over there’.

The synergies in a baby’s spinal cord are an ancient set shared by all mammals (probably all land vertebrates). The muscles work in a rhythm where each event triggers the next in a circle. There are two primitives involved in human walking that we are born with. One is to bend the leg so that the foot leaves the ground and moves forward, then goes back down and straightens. Two is a forward push against the ground by the straight leg. These two complexes of muscle contractions and relaxations are wired so that their action in one leg inhibits their action in the other. When the left leg does one, the right leg cannot do one but can do two. And when the left leg does two, the right cannot do two but can do one. They are also wired so that in each leg the end of one triggers the start of two and the end of two triggers the start of one. It is the same in four-legged animals except there is another set of inhibitions between the front and hind legs. At this level the system is not very adaptive and can only react to sensory information that comes through the spinal cord from the muscles, joints and skin. Babies cannot use this facility to get around because they do not have the strength to maintain the posture needed with such a large heavy head on such a little body, and, more importantly, the spinal cord has no information from the inner ears about balance. Balance is very important for bipedal walking. The baby must create two other synergies: to react to balance information and to use the hips, back and arms to keep the center of gravity over the legs. In the meantime, while they lack the strength, they can crawl using the four-legged modules.
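The wiring just described can be caricatured as a tiny state machine (my sketch, purely illustrative): each leg alternates between primitive one (lift-and-swing) and primitive two (push), the end of each primitive triggering the other within a leg, while contralateral inhibition keeps the two legs in antiphase.

```python
SWING, PUSH = "lift-and-swing", "push-against-ground"

def step_cycle(n_phases):
    """Generate (left, right) primitive pairs for successive gait phases."""
    # contralateral inhibition: the legs start, and stay, in opposite primitives
    state = {"left": SWING, "right": PUSH}
    history = []
    for _ in range(n_phases):
        history.append((state["left"], state["right"]))
        # within each leg, the end of one primitive triggers the other
        for leg in state:
            state[leg] = PUSH if state[leg] == SWING else SWING
    return history

for left, right in step_cycle(6):
    assert left != right  # never the same primitive on both legs at once
```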

The cerebellum and brain stem add the control of balance and of pace (there are relative changes to the timing of events when the whole process is sped up). They can correct for uneven ground. They can keep the direction of motion toward a target. But the coordination control of the lower brain is not just direct signals to muscles; it uses the synergies built into the spinal cord. And it is much more complex than the action in the spinal cord. In fact, the cerebellum has more neurons than the whole rest of the brain. It manages the modules, timing, adjustments to modules, effects from sensory input and feedback, and commands from higher levels of the brain, then packages it all for execution. Another great trick of the cerebellum is to do two things at the same time, say walk and throw a ball. Both may be deep-seated modules, but there are adjustments to be made where they interfere with one another.

The point I am making here is that although movement seems so easy for us to execute, that is because it is not arranged consciously, or even largely in the cerebral hemispheres. It is modularized so that a simple request in the cerebral cortex goes through layers of calculation and fine-tuning to become individual signals to individual muscles. It is synergy/modularization that gives us this flexible but easy to use system. We are surprised that it is easier to create a program to play chess in the abstract (and win) than it is to program a robot to physically move the pieces and operate the time clock in a game. When we do not understand how something is done, it appears easy. It is a common trap.


A prediction engine

Judith Copithorne image


I have just discovered a wonderful source of ideas about the mind, Open MIND (here), a collection of essays and papers edited by Metzinger and Windt. I ran across mention of it in Deric Bownds’ blog (here). The particular paper that Bownds points to is “Embodied Prediction” by Andy Clark.

Clark argues that we look at the mind backwards. The everyday way we view the working of the brain is: the sensory input is used to create a model of the world, which prompts a plan of action, which is used to create an action. He argues for the opposite – action forces the nature of the sensory input we seek, that sensory input is used to correct an existing model, and it is all done by predicting. The mind is a predicting machine; the process is referred to as PP (predictive processing). “Predictive processing plausibly represents the last and most radical step in this retreat from the passive, input-dominated view of the flow of neural processing. According to this emerging class of models, naturally intelligent systems (humans and other animals) do not passively await sensory stimulation. Instead, they are constantly active, trying to predict the streams of sensory stimulation before they arrive.” Rather than the bottom-up flow of sensory information, the theory has a top-down flow of the current model of the world (in effect, what the incoming sensory data should look like). All that is fed back upwards is the error corrections where the incoming sensory data differ from what is expected. This seems a faster, more reliable, more efficient system than the one in the more conventional theory. The only effort needed is to deal with the surprises in the incoming data. Prediction errors are the only sensory information still to be explained, the only place where the work of perception is required most of the time.
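The core loop of such a system can be sketched very simply (an illustrative caricature, not Clark's formal account; the learning rate is an invented parameter): the top-down model predicts the input, only the prediction error is passed upward, and the model is nudged by that error.

```python
def predictive_loop(sensory_stream, learning_rate=0.5):
    """Track a sensory stream by correcting a top-down prediction."""
    model = 0.0                         # current top-down estimate of the world
    errors = []
    for observation in sensory_stream:
        error = observation - model     # the only signal fed back upward
        model += learning_rate * error  # correct the model by the surprise
        errors.append(abs(error))
    return model, errors

# With an unchanging world, the surprises shrink toward zero:
model, errors = predictive_loop([10.0] * 8)
assert errors[0] > errors[-1] and abs(model - 10.0) < 0.1
```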

Clark doesn’t make much of it, but he has a neat way of understanding attention. Many of our eye and posture movements are seen as ways of selecting the nature of the next sensory input. “Action is not so much a response to an input as a neat and efficient way of selecting the next “input”, and thereby driving a rolling cycle.” As the brain seeks certain information (because of uncertainty, the task at hand, or other reasons), it will work harder to solve the error corrections pertaining to that particular information. Action will be driven towards examining the source of that information. Unimportant and small error corrections may be ignored if they are not relevant to current tasks. This looks like an excellent description of the focus of attention to me.

Conceptually, this implies a striking reversal, in that the driving sensory signal is really just providing corrective feedback on the emerging top-down predictions. As ever-active prediction engines, these kinds of minds are not, fundamentally, in the business of solving puzzles given to them as inputs. Rather, they are in the business of keeping us one step ahead of the game, poised to act and actively eliciting the sensory flows that keep us viable and fulfilled. If this is on track, then just about every aspect of the passive forward-flowing model is false. We are not passive cognitive couch potatoes so much as proactive predictavores, forever trying to stay one step ahead of the incoming waves of sensory stimulation.

The prediction process is also postulated for motor control. We predict the sensory input which will happen during an action, and that information flows from the top down, with error correction controlling the accuracy of the movement. The predicted sensory consequences of our actions cause the actions. “The perceptual and motor systems should not be regarded as separate but instead as a single active inference machine that tries to predict its sensory input in all domains: visual, auditory, somatosensory, interoceptive and, in the case of the motor system, proprioceptive. …This erases any fundamental computational line between perception and the control of action. There remains, to be sure, an obvious (and important) difference in direction of fit. Perception here matches neural hypotheses to sensory inputs, and involves “predicting the present”; while action brings unfolding proprioceptive inputs into line with neural predictions. …Perception and action here follow the same basic logic and are implemented using the same computational strategy. In each case, the systemic imperative remains the same: the reduction of ongoing prediction error.”

This theory feels comfortable when I think of conversational language. Unlike much of perception and the control of movement, language is conducted more in the light of conscious awareness. It is (almost) possible to feel the prediction of what is going to be said when listening, and to have work to do in understanding only when there is a surprise mismatch between the expected and the heard word. And when talking, it takes little effort until your tongue makes a slip and has to be corrected.

I am looking forward to browsing through Open MIND now that I know it exists.


Language in the left hemisphere

Here is the posting mentioned in the last post. A recent paper (Harvey M. Sussman; Why the Left Hemisphere Is Dominant for Speech Production: Connecting the Dots; Biolinguistics Vol 9 Dec 2015), deals with the nature of language processing in the left hemisphere and why it is that in right-handed people with split brains only the left cortex can talk although both sides can listen. There is a lot of interesting information in this paper (especially for someone like me who is left-handed and dyslexic). He has a number of ‘dots’ and he connects them.

Dot 1 is infant babbling. The first language-like sounds babies make are coos and these have a very vowel-like quality. Soon they babble consonant-vowel combinations in repetitions. By noting the asymmetry of the mouth it can be shown that babbling comes from the left hemisphere, non-babbling noises from both, and smiles from the right hemisphere. A speech sound map is being created by the baby and it is formed at the dorsal pathway’s projection in the frontal left articulatory network.

Dot 2 is the primacy of the syllable. Syllables are the unit of prosodic events. A person’s native language syllable constraints are the origin of the types of errors that happen in second language pronunciation. Also syllables are the units of transfer in language play. Early speech sound networks are organized in syllable units (vowel and associated consonants) in the left hemisphere of right-handers.

Dot 3 is the inability of the right hemisphere to talk in split-brain people. When language tasks are directed at the right hemisphere, the stimulus exposure must be longer (greater than 150 msec) than when directed to the left. The right hemisphere can comprehend language but does not evoke a sound image from seen objects and words, although the meaning of the objects and words is understood by that hemisphere. The right hemisphere cannot recognize whether two words rhyme from seeing illustrations of the words. So the left hemisphere (in right-handers) has the only language neural network with sound images. This network serves as the neural source for generating speech; therefore in a split brain only the left side can speak.

Dot 4 deals with the problems of DAS, Developmental Apraxia of Speech. I am going to skip this.

Dot 5 is the understanding of speech errors. The ‘slot-segment’ hypothesis is based on analysis of speech errors. Two thirds of errors are the type where phonemes are substituted, omitted, transposed or added. The picture is of a two-tiered neural ‘map’ with syllable slots serially ordered as one tier, and an independent network of consonant sounds in the other tier. The tiers are connected together. The vowel is the heart of the syllable in the nucleus slot. Forms are built around it with consonants (CV, CVC, CCV etc.). Spoonerisms are restricted to consonants exchanging with consonants and vowels exchanging with vowels; and, exchanges occurring between the same syllable positions – first with first, last with last etc.
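The two-tier picture can be illustrated with a toy example (my sketch, not Sussman's formalism): each syllable is a frame of positional slots, and a spoonerism can only exchange segments occupying the same slot, onset with onset, never a consonant with a vowel.

```python
def spoonerize(syll1, syll2):
    """Exchange the onset consonants of two (onset, nucleus, coda) frames.
    Only same-position segments trade places: first with first."""
    (onset1, nucleus1, coda1) = syll1
    (onset2, nucleus2, coda2) = syll2
    # the vowels stay anchored in their nucleus slots
    return (onset2, nucleus1, coda1), (onset1, nucleus2, coda2)

# 'dear queen' -> 'quear deen': the onsets swap, the vowels stay put
a, b = spoonerize(("d", "ea", "r"), ("qu", "ee", "n"))
assert a == ("qu", "ea", "r") and b == ("d", "ee", "n")
```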

Dot 6 is Hawkins’s model: “the neo-cortex uses stored memories to produce behaviors.” Motor memories are used sequentially and operate in an auto-associative way. Each memory elicits the next in order (think how hard it is to do things backwards). Motor commands would be produced in a serial order, based on syllables – learned articulatory behaviors linked to sound equivalents.

Dot 7 is experiments that showed representations of sounds in human language at the neural level. For example there is a representation of a generic ‘b’ sound, as well as representations of various actual ‘b’s that differ from one another. This is why we can clearly hear a ‘b’ but have difficulty identifying a ‘b’ when the sound pattern is graphed.

Here is the abstract:

Evidence from seemingly disparate areas of speech/language research is reviewed to form a unified theoretical account for why the left hemisphere is specialized for speech production. Research findings from studies investigating hemispheric lateralization of infant babbling, the primacy of the syllable in phonological structure, rhyming performance in split-brain patients, rhyming ability and phonetic categorization in children diagnosed with developmental apraxia of speech, rules governing exchange errors in spoonerisms, organizational principles of neocortical control of learned motor behaviors, and multi-electrode recordings of human neuronal responses to speech sounds are described and common threads highlighted. It is suggested that the emergence, in developmental neurogenesis, of a hard-wired, syllabically-organized, neural substrate representing the phonemic sound elements of one’s language, particularly the vocalic nucleus, is the crucial factor underlying the left hemisphere’s dominance for speech production.

Close but not quite

I wonder how often we are almost right but not quite. It seems to be a fairly common trap in biology.

It has been thought for many years (140+ years) that the primary motor cortex (lying across the top of the head) mapped the muscles of the body and controlled their contractions. From this we got the comical homunculus with its huge lips and hands on a spindly little body. Each small area on this map was supposed to activate one muscle.

A recent paper by Graziano, Ethological Action Maps: A Paradigm Shift for the Motor Cortex (here), argues that this is not as it appears. What is being mapped are actions and not muscles. Here is the abstract:

The map of the body in the motor cortex is one of the most iconic images in neuroscience. The map, however, is not perfect. It contains overlaps, reversals, and fractures. The complex pattern suggests that a body plan is not the only organizing principle. Recently a second organizing principle was discovered: an action map. The motor cortex appears to contain functional zones, each of which emphasizes an ethologically relevant category of behavior. Some of these complex actions can be evoked by cortical stimulation. Although the findings were initially controversial, interest in the ethological action map has grown. Experiments on primates, mice, and rats have now confirmed and extended the earlier findings with a range of new methods.

“Trends – For nearly 150 years, the motor cortex was described as a map of the body. Yet the body map is overlapping and fractured, suggesting that it is not the only organizing principle. In the past 15 years, a second fundamental organizing principle has been discovered: a map of complex, meaningful movements. Different zones in the motor cortex emphasize different actions from the natural movement repertoire of the animal. These complex actions combine multiple muscles and joints. The ‘action map’ organization has now been demonstrated in primates, prosimians, and rodents with various stimulation, lesion, and neuronal recording methods. The action map was initially controversial due to the use of electrical stimulation. The best argument that the action map is not an artifact of one technique is the growing confirming evidence from other techniques.”

Even settled science, when it is neuroscience, should be taken with a grain of salt; any part of it could turn out to be something similar to, but not the same as, the current description.

Not what you do but how you do it

I have been interested in communication through non-verbal channels for some time. Communication through posture, facial expression, gesture and tone of voice is an intriguing subject. Lately I have encountered another channel: the ‘vitality forms’ of actions. A particular action, say handing something to another person, can be done in a number of ways implying rudeness, caring, anger, generosity etc. A person’s actions can have a goal and an intent but can also give hints as to their state of mind or emotions during the action. Of course, we can be conscious, or not, of giving signals and conscious, or not, of receiving them – but there is communication nonetheless.

There is a new paper on this subject which I cannot access, and an older, similar paper which I have been able to read. Both citations, with their abstracts, are given below. The research has looked at what differs between actions that have different vitality forms: time profile, force, space and direction. The diagram illustrates the difference between energetic and gentle actions.

vitality graphs


The stimuli were presented to the participants in pairs of consecutive videos, where the observed action (what) and vitality (how) could be the same or changed between video-pairs. To counterbalance all what–how possibilities, four different combinations of action-vitality were created: (i) same action-same vitality; (ii) same action-different vitality; (iii) different action-same vitality and (iv) different action-different vitality. All video combinations were presented in two tasks. The ‘what’ task required the participants to pay attention to the type of action observed in the two consecutive videos and to decide whether the represented action was the same or different regardless of vitality form. The ‘how’ task required the participants to pay attention to the vitality form and to decide whether the represented vitality was the same or different between the two consecutive videos regardless of the type of action performed.
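
The counterbalanced design is easy to see written out. Here is a minimal sketch; the labels are mine for illustration, not taken from the paper:

```python
from itertools import product

# The two dimensions that could stay the same or change between the
# videos of a pair: the action ('what') and the vitality form ('how').
actions = ["same", "different"]       # 'what' dimension
vitalities = ["same", "different"]    # 'how' dimension

# The four counterbalanced action-vitality combinations for each video pair.
conditions = [f"{a} action / {v} vitality" for a, v in product(actions, vitalities)]
for c in conditions:
    print(c)
```

Crossing the two dimensions yields exactly the four conditions (i)–(iv) listed above, so every ‘what’ answer is paired equally often with every ‘how’ answer.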

A number of areas of the brain are active during an action but only one was active with ‘how’ and not ‘what’ tasks. This was the right dorso-central insula.

Here is the abstract of the older paper (Giuseppe Di Cesare, Cinzia Di Dio, Magali J. Rochat, Corrado Sinigaglia, Nadia Bruschweiler-Stern, Daniel N. Stern, Giacomo Rizzolatti; The neural correlates of ‘vitality’ recognition: a fMRI study; Social Cognitive and Affective Neuroscience 2014, 9 (7): 951-60): “The observation of goal-directed actions performed by another individual allows one to understand what that individual is doing and why he/she is doing it. Important information about others’ behaviour is also carried by the dynamics of the observed action. Action dynamics characterize the vitality form of an action, describing the cognitive and affective relation between the performing agent and the action recipient. Here, using the fMRI technique, we assessed the neural correlates of vitality form recognition, presenting participants with videos showing two actors executing actions with different vitality forms: energetic and gentle. The participants viewed the actions in two tasks. In one task (what), they had to focus on the goal of the presented action; in the other task (how), they had to focus on the vitality form. For both tasks, activations were found in the action observation/execution circuit. Most interestingly, the contrast how vs what revealed activation in the right dorso-central insula, highlighting the involvement, in the recognition of vitality form, of an anatomical region connecting somatosensory areas with the medial temporal region and, in particular, with the hippocampus. This somatosensory-insular-limbic circuit could underlie the observer’s capacity to understand the vitality forms conveyed by the observed action.”

And the abstract of the newer paper (Di Cesare G, Di Dio C, Marchi M, Rizzolatti G; Expressing our internal states and understanding those of others; Proc Natl Acad Sci 2015): “Vitality form is a term that describes the style with which motor actions are performed (e.g., rude, gentle, etc.). Vitality forms represent one characterizing element of conscious and unconscious bodily communication. Despite their importance in interpersonal behavior, vitality forms have been, until now, virtually neglected in neuroscience. Here, using the functional MRI (fMRI) technique, we investigated the neural correlates of vitality forms in three different tasks: action observation, imagination, and execution. Conjunction analysis showed that, in all three tasks, there is a common, consistent activation of the dorsocentral sector of the insula. In addition, a common activation of the parietofrontal network, typically active during arm movement production, planning, and observation, was also found. We conclude that the dorsocentral part of the insula is a key element of the system that modulates the cortical motor activity, allowing individuals to express their internal states through action vitality forms. Recent monkey anatomical data show that the dorsocentral sector of the insula is, indeed, connected with the cortical circuit involved in the control of arm movements.”

Included graph is Fig S2 of the paper – Giuseppe Di Cesare, Cinzia Di Dio, Magali J. Rochat, Corrado Sinigaglia, Nadia Bruschweiler-Stern, Daniel N. Stern, Giacomo Rizzolatti; The neural correlates of ‘vitality’ recognition: a fMRI study; Social Cognitive and Affective Neuroscience 2014, 9 (7): 951-60

Here is the caption for the graph: Fig. 2 Kinematic and dynamic profiles associated with one of the actions (passing a bottle) performed by the female actress with the two vitality forms (gentle; energetic). (A) Velocity profiles (y-axes) and duration (x-axes). (B) Trajectories (gentle, green line; energetic, red line). (C) Potential energy (blue line), that is the energy that the actress gave to the object during the lifting phase of the action; kinetic energy (red line), that is the energy that the actress gave to the object to move it with a specific velocity from the start to the end point. (D) Power required to perform the action on the object in an energetic (blue solid line) and gentle (blue dashed line) vitalities. As it can be observed in the graphs, the vitality forms gentle and energetic generally differ from each other on each of the tested parameters.
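
To make the caption’s quantities concrete, here is a toy calculation with invented numbers (the mass, speeds and durations are my assumptions, not the paper’s measurements): potential energy m·g·h from the lifting phase, kinetic energy ½·m·v² from the movement, and average power as energy over time. The point is simply that an energetic action delivers more energy in less time than a gentle one.

```python
# Toy sketch of the quantities in the caption, with made-up numbers.
g = 9.81          # gravitational acceleration, m/s^2
mass = 0.5        # assumed bottle mass, kg

def potential_energy(height_m):
    """Energy given to the object during the lifting phase: E_p = m*g*h."""
    return mass * g * height_m

def kinetic_energy(velocity_ms):
    """Energy given to the object to move it at a given speed: E_k = 0.5*m*v^2."""
    return 0.5 * mass * velocity_ms ** 2

def mean_power(total_energy_j, duration_s):
    """Average power over the whole action: P = E / t."""
    return total_energy_j / duration_s

# An energetic pass is faster and briefer than a gentle one (illustrative values).
energetic = kinetic_energy(1.2) + potential_energy(0.15)   # fast, 0.8 s
gentle = kinetic_energy(0.5) + potential_energy(0.15)      # slow, 1.6 s

print(mean_power(energetic, 0.8))   # higher power: more energy in less time
print(mean_power(gentle, 1.6))      # lower power: less energy over a longer time
```

With the same lift height, the energetic version still comes out well ahead on power because both its kinetic energy is larger and its duration is shorter.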


The center of the universe

When we are conscious we look out at the world through a large hole in our heads between our noses and our foreheads, or so it seems. It is possible to pinpoint the exact place inside our heads which is the ‘here’ to which everything is referenced; that spot is about 4-5 centimeters behind the bridge of the nose. Not only sight but also hearing, touch and the feelings from inside our bodies are experienced as lying at some distance, in some direction, from that spot. As far as we are concerned, we carry the center of the universe around in our heads.

Both our sensory system and our motor system use this particular three dimensional arrangement centered on that particular spot, and so locations are the same for both processes. How, why and where in the brain is this first person, ego-centric space produced? Bjorn Merker has a paper in a special topic issue of Frontiers in Psychology, Consciousness and Action Control (here). The paper is entitled “The efference cascade, consciousness and its self: naturalizing the first person pivot of action control”. He believes the evidence points to the roof of the mid-brain, the superior colliculus.

If we consider the center of our space, then attention is like a light or arrow pointing from the center to a particular location in that space and what is in it. That means that we are oriented in that direction. “The canonical form of this re-orienting is the swift and seamlessly integrated joint action of eyes, ears (in many animals), head, and postural adjustments that make up what its pioneering students called the orienting reflex.”

This orientation has to occur before any action directed at the target or any examination of the point of interest by our senses. First the orientation and then the focus of attention. But how does the brain decide which possible focus of attention is the one to orient towards? “The superior colliculus provides a comprehensive mutual interface for brain systems carrying information relevant to defining the location of high priority targets for immediate re-orienting of receptor surfaces, there to settle their several bids for such a priority location by mutual competition and synergy, resulting in a single momentarily prevailing priority location subject to immediate implementation by deflecting behavioral or attentional orientation to that location. The key collicular function, according to this conception, is the selection, on a background of current state and motive variables, of a single target location for orienting in the face of concurrent alternative bids. Selection of the spatial target for the next orienting movement is not a matter of sensory locations alone, but requires access to situational, motivational, state, and context information determining behavioral priorities. It combines, in other words, bottom-up “salience” with top-down “relevance”.”
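
Merker’s description reads almost like an algorithm: several systems place bids for a location, each bid combines bottom-up salience with top-down relevance, and a single winner takes the next orienting movement. Here is a toy sketch of that winner-take-all selection; the locations and weights are invented for illustration:

```python
def select_orienting_target(bids):
    """Pick the single winning location from competing bids.

    bids maps a location name to a (salience, relevance) pair;
    the winner is the location with the highest combined priority.
    """
    return max(bids, key=lambda loc: bids[loc][0] + bids[loc][1])

# Illustrative competing bids (all values assumed for the sketch).
bids = {
    "left_flash":  (0.9, 0.1),   # very salient but behaviorally irrelevant
    "right_voice": (0.4, 0.8),   # moderately salient, highly relevant
    "ahead_prey":  (0.3, 0.5),
}

print(select_orienting_target(bids))   # the combined-priority winner
```

Note that the most salient stimulus does not automatically win; relevance can tip the competition, which is the sense in which the selection combines bottom-up and top-down information.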

We are provided with the illusion that we sit behind our eyes and experience the world from there and from there we plan and direct our actions. A lot of work and geometry that we are unaware of goes into this illusion. It allows us to integrate what we sense with what we do, quickly and accurately.


A tiny eye



A single celled organism called Erythropsidinium has been reported to have a tiny eye. This organism is not a simple bacterium but a eukaryote: it is single-celled but has the kind of cell that is found in multicelled organisms like us. It is not a bag of chemicals but is highly organized, with a nucleus and organelles. Among the organelles are a little eye and a little harpoon – ‘all the better to hunt with, my dear’. The eye (called an ocelloid) is like a camera with a lens and pigment responders, while the harpoon is a piston that can very quickly elongate to 20 or so times its length and has a poison tip. The prey is transparent but has a nucleus that polarizes light, and it is this polarized light that the ocelloid detects. As a result, the harpoon is aimed in the direction of the prey before it is fired.

That sounds like a link between a sensory organelle and a motor organelle. As far as I can see, it is not known how the linking mechanism works, but in a single celled organism the link has to be relatively simple (a mechanical or chemical molecular event, or a short chain of events). This is like a tiny nervous system but without the nerves. There is a sensor and an actor; in a nervous system there would be a web of inter-neurons connecting the two and allowing activity to be appropriate to the situation. Whatever the link is in Erythropsidinium, it does allow the steering of the harpoon in an effective direction. The cell can move the ocelloid and the harpoon. Are they physically tied together? Or is there more information processing than just a ‘fire’ signal?
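
Just to make the idea of a nerve-free sensor-to-actor link concrete, here is a toy sketch. Everything in it – the function names, the detection threshold, the bearings – is invented, since the actual mechanism is unknown; the point is only how simple such a link could be:

```python
def detect_polarized_bearing(signal):
    """Return the bearing (degrees) of the strongest polarized-light signal,
    or None if nothing crosses the (invented) detection threshold."""
    bearing, strength = max(signal.items(), key=lambda kv: kv[1])
    return bearing if strength > 0.5 else None

def aim_and_fire(signal):
    """Couple the sensor directly to the actor: steer to the sensed
    bearing and fire, with no intervening processing."""
    bearing = detect_polarized_bearing(signal)
    if bearing is None:
        return "hold"                    # no prey detected
    return f"fire at {bearing} deg"      # harpoon steered to the sensed bearing

# Simulated polarized-light readings at three bearings; prey at 120 degrees.
print(aim_and_fire({30: 0.2, 120: 0.9, 250: 0.1}))
```

A single if-statement stands in for the whole ‘web of inter-neurons’; whether the real cell does even this much processing, or the two organelles are simply tied together mechanically, is exactly the open question.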

This raises an interesting question. Can we say that this organism is aware? If the ability to sense and to act is found coordinated within a single cell – can that cell be said to be aware of its actions and its environment? And if it is aware, is it conscious in some simple way? That would raise the question of whether complexity is a requirement for consciousness. These are semantic arguments, all about how words are defined and not about how the world works.