Not what you do but how you do it

I have been interested in communication through non-verbal channels for some time. Communication through posture, facial expression, gesture, and tone of voice is an intriguing subject. Lately I have encountered another channel: the vitality forms of actions. A particular action, say handing something to another person, can be done in a number of ways implying rudeness, caring, anger, generosity and so on. A person’s actions can have a goal and an intent but can also give hints as to their state of mind or emotions during the action. Of course, we can be conscious, or not, of giving such signals and conscious, or not, of receiving them – but there is communication nonetheless.

There is a new paper on this subject which I cannot access, and an older, similar paper which I have been able to read. The two citations, with their abstracts, are below. The research looked at what differs between actions that have different vitality forms: time profile, force, space and direction. The diagram illustrates the difference between an energetic and a gentle action.

[Figure: vitality graphs]

The stimuli were presented to the participants in pairs of consecutive videos, where the observed action (what) and vitality (how) could be the same or changed between video-pairs. To counterbalance all what–how possibilities, four different combinations of action-vitality were created: (i) same action-same vitality; (ii) same action-different vitality; (iii) different action-same vitality and (iv) different action-different vitality. All video combinations were presented in two tasks. The what task required the participants to pay attention to the type of action observed in the two consecutive videos and to decide whether the represented action was the same or different regardless of vitality form. The how task required the participants to pay attention to the vitality form and to decide whether the represented vitality was the same or different between the two consecutive videos regardless of the type of action performed.
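Just to make the 2×2 design explicit, here is a trivial enumeration of the four conditions (an illustration only, not the authors’ code):

```python
from itertools import product

# Enumerate the four counterbalanced what-how conditions described above.
for action, vitality in product(["same", "different"], repeat=2):
    print(f"{action} action - {vitality} vitality")
```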

A number of areas of the brain are active during an action, but only one was active in the ‘how’ task and not the ‘what’ task: the right dorso-central insula.

Here is the abstract of the older paper (Giuseppe Di Cesare, Cinzia Di Dio, Magali J. Rochat, Corrado Sinigaglia, Nadia Bruschweiler-Stern, Daniel N. Stern, Giacomo Rizzolatti; The neural correlates of ‘vitality’ recognition: a fMRI study; Social Cognitive and Affective Neuroscience 2014, 9 (7): 951-60): “The observation of goal-directed actions performed by another individual allows one to understand what that individual is doing and why he/she is doing it. Important information about others’ behaviour is also carried by the dynamics of the observed action. Action dynamics characterize the vitality form of an action describing the cognitive and affective relation between the performing agent and the action recipient. Here, using the fMRI technique, we assessed the neural correlates of vitality form recognition presenting participants with videos showing two actors executing actions with different vitality forms: energetic and gentle. The participants viewed the actions in two tasks. In one task (what), they had to focus on the goal of the presented action; in the other task (how), they had to focus on the vitality form. For both tasks, activations were found in the action observation/execution circuit. Most interestingly, the contrast how vs what revealed activation in right dorso-central insula, highlighting the involvement, in the recognition of vitality form, of an anatomical region connecting somatosensory areas with the medial temporal region and, in particular, with the hippocampus. This somatosensory-insular-limbic circuit could underlie the observers’ capacity to understand the vitality forms conveyed by the observed action.”

And the abstract of the newer paper (Di Cesare G, Di Dio C, Marchi M, Rizzolatti G; Expressing our internal states and understanding those of others; Proc Natl Acad Sci 2015): “Vitality form is a term that describes the style with which motor actions are performed (e.g., rude, gentle, etc.). Vitality forms represent one characterizing element of conscious and unconscious bodily communication. Despite their importance in interpersonal behavior, vitality forms have been, until now, virtually neglected in neuroscience. Here, using the functional MRI (fMRI) technique, we investigated the neural correlates of vitality forms in three different tasks: action observation, imagination, and execution. Conjunction analysis showed that, in all three tasks, there is a common, consistent activation of the dorsocentral sector of the insula. In addition, a common activation of the parietofrontal network, typically active during arm movements production, planning, and observation, was also found. We conclude that the dorsocentral part of the insula is a key element of the system that modulates the cortical motor activity, allowing individuals to express their internal states through action vitality forms. Recent monkey anatomical data show that the dorsocentral sector of the insula is, indeed, connected with the cortical circuit involved in the control of arm movements. Thus, the dorsocentral part of the insula seems to represent a fundamental and previously unsuspected node that modulates the cortical motor circuits, allowing individuals to express their vitality forms and understand those of others.”

The included graph is Fig. S2 of the older paper (Di Cesare et al. 2014, cited above).

Here is the caption for the graph: “Fig. 2 Kinematic and dynamic profiles associated with one of the actions (passing a bottle) performed by the female actress with the two vitality forms (gentle; energetic). (A) Velocity profiles (y-axes) and duration (x-axes). (B) Trajectories (gentle, green line; energetic, red line). (C) Potential energy (blue line), that is, the energy that the actress gave to the object during the lifting phase of the action; kinetic energy (red line), that is, the energy that the actress gave to the object to move it with a specific velocity from the start to the end point. (D) Power required to perform the action on the object with energetic (blue solid line) and gentle (blue dashed line) vitality forms. As can be observed in the graphs, the gentle and energetic vitality forms generally differ from each other on each of the tested parameters.”
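The quantities in panels C and D are the standard mechanical ones. Here is a minimal sketch of how they would be computed; the mass, height and velocity figures below are made up for illustration, not the paper’s measurements:

```python
# Standard mechanics behind panels C and D (illustrative numbers only).

g = 9.81              # gravitational acceleration, m/s^2

def potential_energy(mass, height):
    """Energy given to the object during the lifting phase: E_p = m*g*h."""
    return mass * g * height

def kinetic_energy(mass, velocity):
    """Energy given to the object to move it at a given speed: E_k = m*v^2/2."""
    return 0.5 * mass * velocity ** 2

bottle = 0.5          # kg, assumed
print(potential_energy(bottle, 0.2))              # lifting the bottle 20 cm
for style, v in [("gentle", 0.5), ("energetic", 1.5)]:
    # An energetic pass peaks at a higher velocity, so it needs more kinetic
    # energy delivered over a shorter time, i.e. more power (panel D).
    print(style, kinetic_energy(bottle, v), "J at peak velocity")
```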


The learning of concepts

I once tried to learn a simple form of a Bantu language and failed (not surprising, as I always fail to learn a new language). One of the problems with this particular attempt was the classes of nouns. There were ten or so classes, each with its own rules. It actually works like the gender of nouns in most European languages, but it is much more complex and, unlike gender, less arbitrary. The nouns are grouped in somewhat descriptive groups: animals, people, places, tools and so on. Besides the Bantu languages, there are a number of other language groups with extensive noun classes, twenty or more.

Years ago I found the noun classes inexplicable. Why did they exist? But there have been a number of hints that this is a quite natural way for concepts to be stored in the brain – faces stored here, tools stored there, places stored somewhere else.

A recent paper (Andrew James Bauer, Marcel Adam Just. Monitoring the growth of the neural representations of new animal concepts. Human Brain Mapping, 2015; DOI: 10.1002/hbm.22842) studies how and where new concepts are stored.

Their review of previous findings illustrates the idea. “Research to date has revealed that object concepts (such as the concept of a hammer) are neurally represented in multiple brain regions, corresponding to the various brain systems that are involved in the physical and mental interaction with the concept. The concept of a hammer entails what it looks like, what it is used for, how one holds and wields it, etc., resulting in a neural representation distributed over sensory, motor, and association areas. There is a large literature that documents the responsiveness (activation) of sets of brain regions to the perception or contemplation of different object concepts, including animals (animate natural objects), tools, and fruits and vegetables. For example, fMRI research has shown that nouns that refer to physically manipulable objects such as tools elicit activity in left premotor cortex in right-handers, and activity has also been observed in a variety of other regions to a lesser extent. Clinical studies of object category-specific knowledge deficits have uncovered results compatible with those of fMRI studies. For example, damage to the inferior parietal lobule can result in a relatively selective knowledge deficit about the purpose and the manner of use of a tool. The significance of such findings is enhanced by the commonality of neural representations of object concepts across individuals. For example, pattern classifiers of multi-voxel brain activity trained on the data from a set of participants can reliably predict which object noun a new test participant is contemplating. Similarity in neural representation across individuals may indicate that there exist domain-specific brain networks that process information that is important to survival, such as information about food and eating or about enclosures that provide shelter.”

Their study is concerned with how new concepts are formed (they have a keen interest in education). “Collectively, the results show that before instruction about a feature, there were no stored representations of the new feature knowledge; and after instruction, the feature information had been acquired and stored in the critical brain regions. The activation patterns in the regions that encode the semantic information that was taught (habitat and diet) changed, reflecting the specific new concept knowledge. This study provides a novel form of evidence (i.e. the emergence of new multi-voxel representations) that newly acquired concept knowledge comes to reside in brain regions previously shown to underlie a particular type of knowledge. Furthermore, this study provides a foundation for brain research to trace how a new concept makes its way from the words and graphics used to teach it, to a neural representation of that concept in a learner’s brain.”

This is a different type of learning. It is conceptual knowledge learning rather than learning an intellectual skill such as reading or a motor skill such as juggling.

The storage of conceptual knowledge appears to be quite carefully structured rather than higgledy-piggledy.

Here is the abstract. “Although enormous progress has recently been made in identifying the neural representations of individual object concepts, relatively little is known about the growth of a neural knowledge representation as a novel object concept is being learned. In this fMRI study, the growth of the neural representations of eight individual extinct animal concepts was monitored as participants learned two features of each animal, namely its habitat (i.e., a natural dwelling or scene) and its diet or eating habits. Dwelling/scene information and diet/eating-related information have each been shown to activate their own characteristic brain regions. Several converging methods were used here to capture the emergence of the neural representation of a new animal feature within these characteristic, a priori-specified brain regions. These methods include statistically reliable identification (classification) of the eight newly acquired multivoxel patterns, analysis of the neural representational similarity among the newly learned animal concepts, and conventional GLM assessments of the activation in the critical regions. Moreover, the representation of a recently learned feature showed some durability, remaining intact after another feature had been learned. This study provides a foundation for brain research to trace how a new concept makes its way from the words and graphics used to teach it, to a neural representation of that concept in a learner’s brain.”

Simplifying assumptions

There is an old joke about a group of horse bettors putting out a tender to scientists for a plan to predict the results of races. A group of biologists submitted a plan to genetically breed a horse that would always win. It would take decades and cost billions. A group of statisticians submitted a plan to devise a computer program to predict races. It would cost millions and would predict only a little over chance. But a group of physicists said they could do it for a few thousand, with the program finished in just a few weeks. The bettors wanted to know how they could be so quick and cheap. “Well, we have equations for how the race variables interact. It’s a complex equation but we have made simplifying assumptions. First we said let each horse be a perfect rolling sphere. Then…”

For over three decades, ideas about how the brain must work have come from studies of electronic neural nets. These studies usually make a lot of assumptions. First, they assume that the only active cells in the brain are the neurons. Second, that the neurons are simple (they have inputs which can be weighted, and if the sum of the weighted inputs is over a threshold, the neuron fires its output signals) and that there is only one type (or very few different types). Third, that the connections between the neurons are structured in very simple and often statistically driven nets. There is only so much that can be learned about the real brain from this model.
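To show just how simple that model neuron is, here is a minimal sketch (my own toy code, not taken from any paper):

```python
# Toy version of the simplified neuron described above: weighted inputs are
# summed, and the neuron fires if the sum exceeds a threshold.

def threshold_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0   # 1 = fire, 0 = stay silent

# Two inputs, both active: weighted sum 1.2 > 1.0, so the neuron fires.
print(threshold_neuron([1, 1], [0.6, 0.6], 1.0))   # -> 1
# Only one input active: weighted sum 0.6 <= 1.0, so it stays silent.
print(threshold_neuron([1, 0], [0.6, 0.6], 1.0))   # -> 0
```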

But on the basis of electronic neural nets and information theory, with, I believe, only a small input from the physiology of real brains, it became accepted that the brain uses ‘sparse coding’. What does this mean? At one end of a spectrum, the information held in a network depends on the state of just one neuron. This coding is sometimes referred to as grandmother cells, because one and only one neuron would code for your grandmother. At the other end of the spectrum, the information depends on the state of all the neurons; in other words, your grandmother would be coded by a particular pattern of activity across every neuron. Sparse coding uses only a few neurons and so is near the grandmother-cell end of the spectrum.
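To picture the spectrum, here is a toy sketch (hypothetical activity patterns, nothing measured):

```python
import numpy as np

# Three codes for "grandmother" over a 10-neuron population, from the
# grandmother-cell end of the spectrum to the fully distributed end.
grandmother_cell = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0])  # one neuron
sparse_code      = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])  # a few neurons
dense_code       = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])  # most neurons

# One crude measure of density: the fraction of neurons that are active.
for name, code in [("grandmother cell", grandmother_cell),
                   ("sparse code", sparse_code),
                   ("dense code", dense_code)]:
    print(f"{name}: {code.mean():.1f} of the population active")
```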

Since the 1980s it has generally been accepted that the brain uses sparse coding, but experiments with actual brains have been suggesting that this may not be the case. A recent paper (Anton Spanne, Henrik Jörntell. Questioning the role of sparse coding in the brain. Trends in Neurosciences, 2015; DOI: 10.1016/j.tins.2015.05.005) argues that the coding may not be sparse after all.

It was assumed that the brain would use the coding system that gives the lowest total activity without losing functionality. But that is not what the brain actually does: it has higher activity than it theoretically needs. This is probably because the brain sits in a fairly active state even at rest (a sort of knife edge) from which it can quickly react to situations.

“If sparse coding were to apply, it would entail a series of negative consequences for the brain. The largest and most significant consequence is that the brain would not be able to generalize, but only learn exactly what was happening on a specific occasion. Instead, we think that a large number of connections between our nerve cells are maintained in a state of readiness to be activated, enabling the brain to learn things in a reasonable time when we search for links between various phenomena in the world around us. This capacity to generalize is the most important property for learning.”

Here is the abstract:

Highlights

  • Sparse coding is questioned on both theoretical and experimental grounds.
  • Generalization is important to current brain models but is weak under sparse coding.
  • The beneficial properties ascribed to sparse coding can be achieved by alternative means.

Coding principles are central to understanding the organization of brain circuitry. Sparse coding offers several advantages, but a near-consensus has developed that it only has beneficial properties, and these are partially unique to sparse coding. We find that these advantages come at the cost of several trade-offs, with the lower capacity for generalization being especially problematic, and the value of sparse coding as a measure and its experimental support are both questionable. Furthermore, silent synapses and inhibitory interneurons can permit learning speed and memory capacity that was previously ascribed to sparse coding only. Combining these properties without exaggerated sparse coding improves the capacity for generalization and facilitates learning of models of a complex and high-dimensional reality.

Memory in and out

Do we have one system to form memories and another to recall them, or are both processes done by the same system in the brain? This has been a long-standing question about the hippocampus. ScienceDaily has a report (here) on a paper answering this question (Nozomu H. Nakamura, Magdalena M. Sauvage. Encoding and reactivation patterns predictive of successful memory performance are topographically organized along the longitudinal axis of the hippocampus. Hippocampus, 2015; DOI: 10.1002/hipo.22491).

The researchers used tags for molecules known to be involved in memory formation, and also tags for the ones used in retrieval; they found that the same cells did both jobs. This is not really surprising, because the patterns of cortical activity had been shown to be very similar for a particular memory through its formation, strengthening and recall. These patterns seemed to come from activity in the hippocampus. A single system for formation and recall also makes it easier to understand the ways in which memories are changed when they are recalled and used.

For their studies with rats, the researchers adapted a standardised word-based memory test for humans, using scents instead of words. They hid small treats in sand-filled cups. In addition, each cup contained a different scent, such as thyme or coriander, which could be smelled by the rats when searching for the treats. Each training unit consisted of three phases. During the learning phase, the researchers presented several scents to the animals. A pause followed, and subsequently a recognition phase. In the latter, the animals were presented with the scents from the learning phase as well as other smells. The animals demonstrated that they recognised a scent from the learning phase by running to the back wall of their cage, where they were rewarded with food for the correct response. If, on the other hand, they recognised that a scent had not been presented during the learning phase, they demonstrated this by digging in the sand with their front paws.

Here is the abstract: “An ongoing debate in human memory research is whether the encoding and the retrieval of memory engage the same part of the hippocampus and the same cells, or whether encoding preferentially involves the anterior part of the hippocampus and retrieval its posterior part. Here, we used a human to rat translational behavioural approach combined to high-resolution molecular imaging to address this issue. We showed that successful memory performance is predicted by encoding and reactivation patterns only in the dorsal part of the rat hippocampus (posterior part in humans), but not in the ventral part (anterior part in humans). Our findings support the view that the encoding and the retrieval processes per-se are not segregated along the longitudinal axis of the hippocampus, but that activity predictive of successful memory is and concerns specifically the dorsal part of the hippocampus. In addition, we found evidence that these processes are likely to be mediated by the activation/reactivation of the same cells at this level. Given the translational character of the task, our results suggest that both the encoding and the retrieval processes take place in the same cells of the posterior part of the human hippocampus.”

First and last syllables

Have you wondered why rhyme and alliteration are so common and pleasing, and why they assist memorization? They seem to take advantage of the way words are ‘filed’ in the brain.

A ScienceDaily item (here) looks at a paper on how babies hear syllables. (Alissa L. Ferry, Ana Fló, Perrine Brusini, Luigi Cattarossi, Francesco Macagno, Marina Nespor, Jacques Mehler. On the edge of language acquisition: inherent constraints on encoding multisyllabic sequences in the neonate brain. Developmental Science, 2015; DOI: 10.1111/desc.12323).

It is known that our cognitive system recognizes the first and last syllables of words better than the middle syllables. For example, there is the trick of being able to read print in which the middles of the words have been changed. It has also been noted that the edges of words are often information-rich, especially with grammatical information.

This paper shows that this is a feature of our brains from birth – no need to learn it. “At just two days after birth, babies are already able to process language using processes similar to those of adults. SISSA researchers have demonstrated that they are sensitive to the most important parts of words, the edges, a cognitive mechanism which has been repeatedly observed in older children and adults.” The babies were also sensitive to the very short pause between words as a way to tell when one word ends and another begins.

Here is the abstract: “To understand language, humans must encode information from rapid, sequential streams of syllables – tracking their order and organizing them into words, phrases, and sentences. We used Near-Infrared Spectroscopy (NIRS) to determine whether human neonates are born with the capacity to track the positions of syllables in multisyllabic sequences. After familiarization with a six-syllable sequence, the neonate brain responded to the change (as shown by an increase in oxy-hemoglobin) when the two edge syllables switched positions but not when two middle syllables switched positions (Experiment 1), indicating that they encoded the syllables at the edges of sequences better than those in the middle. Moreover, when a 25ms pause was inserted between the middle syllables as a segmentation cue, neonates’ brains were sensitive to the change (Experiment 2), indicating that subtle cues in speech can signal a boundary, with enhanced encoding of the syllables located at the edges of that boundary. These findings suggest that neonates’ brains can encode information from multisyllabic sequences and that this encoding is constrained. Moreover, subtle segmentation cues in a sequence of syllables provide a mechanism with which to accurately encode positional information from longer sequences. Tracking the order of syllables is necessary to understand language and our results suggest that the foundations for this encoding are present at birth.”

The power of words

ScienceDaily has an item (here) on an interesting paper. (B. Boutonnet, G. Lupyan. Words Jump-Start Vision: A Label Advantage in Object Recognition. Journal of Neuroscience, 2015; 35 (25): 9329 DOI: 10.1523/JNEUROSCI.5111-14.2015)

The researchers demonstrated how words can affect perception. A particular wave that occurs a tenth of a second after a visual image appears was enhanced by a matching word but not by a matching natural sound, and the word made identification of the visual image quicker while the natural sound did not. For example, a picture of a dog, the spoken word ‘dog’, and a dog’s bark would form one such set.

They believe this is because the word denotes a general category while the natural sound is a specific example from that category. Symbols such as words are the only way to indicate categories. “Language allows us this uniquely human way of thinking in generalities. This ability to transcend the specifics and think about the general may be critically important to logic, mathematics, science, and even complex social interactions.”

Here is the abstract: “People use language to shape each other’s behavior in highly flexible ways. Effects of language are often assumed to be “high-level” in that, whereas language clearly influences reasoning, decision making, and memory, it does not influence low-level visual processes. Here, we test the prediction that words are able to provide top-down guidance at the very earliest stages of visual processing by acting as powerful categorical cues. We investigated whether visual processing of images of familiar animals and artifacts was enhanced after hearing their name (e.g., “dog”) compared with hearing an equally familiar and unambiguous nonverbal sound (e.g., a dog bark) in 14 English monolingual speakers. Because the relationship between words and their referents is categorical, we expected words to deploy more effective categorical templates, allowing for more rapid visual recognition. By recording EEGs, we were able to determine whether this label advantage stemmed from changes to early visual processing or later semantic decision processes. The results showed that hearing a word affected early visual processes and that this modulation was specific to the named category. An analysis of ERPs showed that the P1 was larger when people were cued by labels compared with equally informative nonverbal cues—an enhancement occurring within 100 ms of image onset, which also predicted behavioral responses occurring almost 500 ms later. Hearing labels modulated the P1 such that it distinguished between target and nontarget images, showing that words rapidly guide early visual processing.”


The center of the universe

When we are conscious, we look out at the world through a large hole in our heads between our noses and our foreheads, or so it seems. It is possible to pinpoint the exact place inside our heads which is the ‘here’ to which everything is referenced. That spot is about 4-5 centimeters behind the bridge of the nose. Not only sight but hearing, touch and the feelings from inside our bodies are located some distance, in some direction, from that spot. As far as we are concerned, we carry the center of the universe around in our heads.

Both our sensory system and our motor system use this particular three-dimensional arrangement centered on that particular spot, and so locations are the same for both processes. How, why and where in the brain is this first-person, ego-centric space produced? Bjorn Merker has a paper in a special topic issue of Frontiers in Psychology, Consciousness and Action Control (here). The paper is entitled “The efference cascade, consciousness and its self: naturalizing the first person pivot of action control”. He believes the evidence points to the roof of the mid-brain, the superior colliculus.

If we consider the center of our space, then attention is like a light or arrow pointing from the center to a particular location in that space and to what is in it. That means that we are oriented in that direction. “The canonical form of this re-orienting is the swift and seamlessly integrated joint action of eyes, ears (in many animals), head, and postural adjustments that make up what its pioneering students called the orienting reflex.”

This orientation has to occur before any action directed at the target or any examination of the point of interest by our senses: first the orientation, then the focus of attention. But how does the brain decide which possible focus of attention is the one to orient towards? “The superior colliculus provides a comprehensive mutual interface for brain systems carrying information relevant to defining the location of high priority targets for immediate re-orienting of receptor surfaces, there to settle their several bids for such a priority location by mutual competition and synergy, resulting in a single momentarily prevailing priority location subject to immediate implementation by deflecting behavioral or attentional orientation to that location. The key collicular function, according to this conception, is the selection, on a background of current state and motive variables, of a single target location for orienting in the face of concurrent alternative bids. Selection of the spatial target for the next orienting movement is not a matter of sensory locations alone, but requires access to situational, motivational, state, and context information determining behavioral priorities. It combines, in other words, bottom-up ‘salience’ with top-down ‘relevance’.”
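One crude way to picture that competition is as a priority map with a single winner. This is a toy sketch of my own, not Merker’s model; in particular, the multiplicative rule for combining the two maps is an assumption:

```python
import numpy as np

# Bottom-up salience and top-down relevance over an 8x8 map of locations.
rng = np.random.default_rng(0)
salience  = rng.random((8, 8))    # how conspicuous each location is
relevance = rng.random((8, 8))    # how relevant each location is right now

# Combine the bids and let a single winner take all: the one location
# the animal will orient toward next.
priority = salience * relevance
winner = np.unravel_index(np.argmax(priority), priority.shape)
print("orient toward map location", winner)
```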

We are provided with the illusion that we sit behind our eyes and experience the world from there and from there we plan and direct our actions. A lot of work and geometry that we are unaware of goes into this illusion. It allows us to integrate what we sense with what we do, quickly and accurately.


Making sense of the sense of smell

This is another post on Morsella’s ideas.

In developing the Passive Frame theory of consciousness, the group uses olfaction as the sensory source to focus on. This seems surprising at first, but they have good reasons for this.

First, it is an old system from an evolutionary viewpoint. As in this quote from Shepherd: “the basic architecture of the neural basis of consciousness in mammals, including primates, should be sought in the olfactory system, with adaptations for the other sensory pathways reflecting their relative importance in the different species”.

Second, its connections are simple compared with those of vision and hearing. Olfactory signals go straight to the cortex rather than arriving via the thalamus, and they enter an old part of the cortex, the paleocortex, rather than the neocortex (which has the primary processing areas for the other senses). The processing of smell is more or less confined to one area in the frontal region and does not extend to the extensive areas at the back of the brain where visual and auditory processing occur. The sense of smell is much easier to track anatomically than the other ‘higher’ senses. To understand minimal consciousness, it is reasonable to use the least elaborate sense as a model.

Third, looking at what lesions interfere with olfactory consciousness, it seems that connections outside the cortex are not needed for awareness of odours. This implies that at a basic level consciousness does not require the thalamus or mid-brain areas (although consciousness of other senses does require those areas). Some links to the thalamus and other areas may be involved in further processing smell signals but not in being conscious of them.

Fourth, the addition of a smell to the contents of consciousness has a sort of purity. The sense is only there when it is there. We are aware of silence and of complete darkness, but we are not aware of a lack of odour unless we question ourselves. If odours are at very low concentrations, or if we have habituated to them because they are not changing in concentration, we are not conscious of those odours and also not conscious of their absence. “The experiential nothingness associated with olfaction yields no conscious contents of any kind to such an extent that, absent memory, one in such a circumstance would not know that one possessed an olfactory system.” So the addition of a smell to the contents of consciousness is a distinct change in awareness and can of itself focus attention.

Fifth, olfaction is not connected with a number of functions: there are no olfactory symbols being manipulated, and the like. It is difficult to hold olfactory ‘images’ in working memory. Also, “olfactory experiences are less likely to occur in a self-generated, stochastic manner: Unlike with vision and audition, in which visually-rich daydreaming or ‘ear worms’ occur spontaneously during an experiment and can contaminate psychophysical measures, respectively, little if any self-generated olfactory experiences could contaminate measures.”

As well as these reasons given by Morsella in justifying the choice of olfaction in developing the Passive Frame theory, it occurs to me that there is a significant difference in memory. There is a type of recall prompted by smell that seems instantaneous, effortless and very detailed. For example, when you enter a house that you have not been in since childhood and the house has changed in so many ways over the years, the first breath gives a forgotten smell and a vivid sense of the original house along with many images from memories you know you could not normally recall. There seems to be some direct line between the memory of a ‘place’ and the faint odour of that place.

This olfactory approach to consciousness does cut away much of the elaborations and fancy details of consciousness and allows the basic essentials to be clearer.

A tiny eye

[Image: Erythropsidinium]

A single-celled organism called Erythropsidinium has been reported to have a tiny eye. This organism is not a simple bacterial sort of cell but a eukaryote. It is single-celled but has the kind of cell that is found in multicellular organisms like us. It is not a bag of chemicals but is highly organized, with a nucleus and organelles. Among the organelles are a little eye and a little harpoon – ‘all the better to hunt with, my dear’. The eye (called an ocelloid) is like a camera with a lens and pigment responders, while the harpoon is a piston that can elongate to 20 or so times its length very quickly and has a poison tip. The prey is transparent but has a nucleus that polarizes light, and it is this polarized light that the ocelloid detects. This results in the harpoon being aimed in the direction of the prey before it is fired.

That sounds like a link between a sensory organelle and a motor organelle. As far as I can see, it is not known how the linking mechanism works, but in a single-celled organism the link has to be relatively simple (a mechanical or chemical molecular event or a short chain of events). This is like a tiny nervous system but without the nerves. There is a sensor and an actor; in a nervous system there would be a web of inter-neurons connecting the two, allowing activity to be appropriate to the situation. Whatever the link is in Erythropsidinium, it does allow the steering of the harpoon in an effective direction. The cell can move the ocelloid and the harpoon. Are they physically tied together? Or is there more information processing than just a ‘fire’ signal?

This raises an interesting question. Can we say that this organism is aware? If the ability to sense and to act is found coordinated within a single cell – can that cell be said to be aware of its actions and its environment? And if it is aware, is it conscious in some simple way? That would raise the question of whether complexity is a requirement for consciousness. These are semantic arguments, all about how words are defined and not about how the world works.

Content generation for passive frame consciousness

This is a continuation of posts on Morsella’s passive frame theory of consciousness.

Content is generated by modules that have input from bottom-up sensory paths, and from top-down paths. The generators are sensitive to context – a picture of a snake and a real snake are different. And they are unconscious – we cannot unsee a visual illusion even if we have knowledge of the real presentation.

The contents enter consciousness in an automatic manner. They are pushed in unconsciously, not pulled in consciously – they just seem to happen, to ‘pop up’. The contents are under the control of unconscious associations – a word presented as a purely visual stimulus can surface as a phonetic representation in consciousness.

As well as sensory content generators, there are generators of action-related urges. Morsella uses the example: “when one holds one’s breath while underwater, or runs barefoot across the hot desert sand in order to reach water, one cannot help but consciously experience the inclinations to inhale or to avoid touching the hot sand, respectively. Regardless of the adaptiveness of the expressed actions, the conscious strife triggered by the external stimuli cannot be turned off voluntarily.”

Thus the sensory presentations and the urges are generated in a way that is insulated or encapsulated from voluntary control. “Thus, although inclinations triggered by external stimuli can be behaviorally suppressed, they often cannot be mentally suppressed. One can think of many cases in which externally triggered conscious contents are more difficult to control than is overt behavior.”

The contents of consciousness are independent of one another, whether they are memories, stimuli from the environment or whatever, and this is adaptive. Cross-contamination would interfere with successful behavior. The safer influence of context-sensitivity is unconscious, not the result of a conscious whim. This is an important point of difference from some other theories. “This view stands in contrast to several influential theoretical frameworks in which both the activation of, and nature of, conscious contents are influenced by what can be regarded as over-arching goals or current task demands. Because of the principle of encapsulation, conscious contents cannot influence each other either at the same time or across time, which counters the everyday notion that one conscious thought can lead to another conscious thought. In the present framework, not only do contents not influence each other in the conscious field, but as Merker concludes, content generators cannot communicate the content they generate to another content generator. For example, the generator charged with generating the color orange cannot communicate ‘orange’ to any other content generator, for only this generator (a perceptual module) can, in a sense, understand and instantiate ‘orange.’ Hence, if the module charged with a particular content is compromised, that content is gone from the conscious field and no other module can ‘step in’ to supplant that content. As Merker notes, in constructing the conscious field, modules can send, not messages with content, but only ‘activation’ to each other. This activation, in turn, influences whether the receiver module will generate, not the kind of content generated by the module from which it received activation, but rather its own kind of content (e.g., a sound). Because messages of content cannot be transmitted to other content generators, the neural correlates of the content for X must include activation of the module that generates X, for a content cannot be segregated from the process by which it was engendered, as stated above.” Thus it seems that the contents of consciousness are not marshalled onto a stage or theatre; rather, a network is formed connecting the original generators or modules.
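As a toy sketch of that last point (my own illustration, not the authors’ formalism): modules exchange bare activation, and each can instantiate only its own kind of content.

```python
# Toy model: content generators send each other only activation, never content.

class ContentGenerator:
    def __init__(self, name, content):
        self.name = name
        self.content = content       # the one kind of content this module makes
        self.activation = 0.0
        self.links = []              # modules this one can send activation to

    def receive(self, amount):
        # Only bare activation crosses module boundaries, never content.
        self.activation += amount
        if self.activation >= 1.0:
            print(f"{self.name} contributes '{self.content}' to the field")
            for other in self.links:
                other.receive(0.5)   # pass on activation, not 'orange' itself

orange = ContentGenerator("color module", "orange")
sound  = ContentGenerator("auditory module", "a sound")
orange.links.append(sound)

orange.receive(1.0)   # the color module fires; the sound module gets only 0.5
sound.receive(0.5)    # with 1.0 total activation, the sound module fires too
```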

The mosaic of independent content is discontinuous and arises anew in each conscious moment, and these moments quickly follow one another. What is watching this content? The passive frame theory says: “Importantly, the collective influence of the combination of contents in the conscious field is not toward the conscious field itself; instead … the conscious field is apprehended by the (unconscious) mechanisms comprising the skeletomotor output system. Thus, the conscious contents of blue, red, a smell, or the urge to blink are the tokens of a mysterious language understood, not by consciousness itself (nor by the physical world), but by the unconscious action mechanisms of the skeletomotor output system. Why do things appear the way they do in the field? Because, in order to benefit action selection, they must differentiate themselves from all other tokens of the field—across various modalities/systems but within the same decision space.”

I have to add that this may indeed be the original evolutionary reason for consciousness and it may be the over-riding determinant of the mechanisms involved. However, it seems to me that having created a moment of consciousness the brain is loath to throw it away. It is somehow saved and used to form an episodic memory.