Monthly Archives: June 2014

Children’s effect on language

 

It seems that children can invent languages, while adults cannot; adults only produce ‘pidgins’. Languages, once invented, are also re-made by each generation that learns them. So languages may carry the marks of how children think and communicate. A recent paper by Clay and others (citation below) investigates this idea.

They notice that Nicaraguan Sign Language (NSL), in its development by deaf children, appeared to be driven by pre-adolescent children rather than older ones. “In its initial 10 to 15 years, NSL users developed an increasingly strong tendency to segment complex information into elements and express them in a linear fashion. Senghas et al. investigated how NSL signs and Spanish speakers’ gestures expressed a complex motion event, in which a shape’s manner and path of motion are shown simultaneously. They compared signs produced by successive cohorts of deaf NSL signers, who entered the special education school as young children (age 6 or younger) at different periods in the history of NSL…the second and third cohorts showed stronger tendencies to segment manner and path (of a movement) in two separate signs and linearly ordered the two elements.”

However, just using an artificial language transmitted from one person to another in a chain also shows some segmentation and linear expression of originally complex words. This paper sets out to test whether young children, adolescents and adults differ in their tendency to make complex actions into segmented and linear language.

Subjects of different ages were asked to do pantomimes of video clips. The clips were of one of two objects going up or down a hill either with bounces or rotations. So there were three aspects of the motion (object, direction, manner) and the subjects were rated on how much they separated the aspects and mimicked them in a linear string as opposed to mimicking the total motion in one go.

“Compared with adolescents and adults, young children (the 4-year-olds) showed the strongest tendencies to segment and linearize the manner and path of a motion event that had been represented to them simultaneously. Moreover, the difference in the pantomime performance between the three age groups cannot be attributed to young children’s poor event perception or memory because the children performed very well in the event-recognition task and because the children’s performances in the pantomime task and the recognition task did not correlate. The results indicate that young children, but not adolescents and adults, have a bias to segment and linearize information in communication.”

The authors suggest that young children’s limited processing capacity may restrict them to dealing with one aspect at a time.

Here is the abstract:

Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children’s learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system.

Clay, Z., Pople, S., Hood, B., & Kita, S. (2014). Young Children Make Their Gestural Communication Systems More Language-Like: Segmentation and Linearization of Semantic Elements in Motion Events. Psychological Science. DOI: 10.1177/0956797614533967

Can fMRI be trusted?

 

The use of brain images is often criticized. A recent article by M. Farah (citation below) looks at what ‘kernels of truth’ lie behind the critiques and how far we can trust the images. She is concerned that legitimate worries about imaging are being confused with false ones.

The first criticism that she addresses is that the BOLD signal in fMRI comes from oxygenated blood and not from brain activity. True, but she points out that scientific measurements are very often indirect. What matters is the nature of the link involved in the measurement. In this case, even though the exact nature of the link between brain activity and blood flow is not known, it has been established that the two are causally related. One thing she does not make a point of is that there is not necessarily a time lag in the blood flow. The flow is controlled by astrocytes, and these glia appear (at least in the case of attention) to anticipate the need for increased blood flow. “In many ‘cognitive’ paradigms, blood flow modulation occurs in anticipation of or independent of the receipt of sensory input” - Moore & Cao (citation below).

There are complaints that the presentation of images involves fabrications of scale and colour: the colours can be misleading, the differences they represent can be tiny, and the scales can be arbitrary. Farah points out that this is true across science. Graphs and illustrations are arbitrary and exaggerated in order to be easier for readers to see and understand, and this practice is no more prominent in fMRI images than elsewhere.

A number of different criticisms have been made about the emphasis that imaging puts on localization and modular thinking. Again this is somewhat true. But only early imaging did localization for localization’s sake, looking for activity in locations that had previously been shown to be involved in a particular process in order to prove the validity of the method. Today’s imaging has gone past that. Another related gripe is that there are no psychological hypotheses that can be decisively tested by imaging. Her answer is that this is true of all psychological methods; none are decisive. Nevertheless, imaging has helped to resolve issues. There are also complaints that imaging favours the production of modular hypotheses, biasing research. But the questions that science, in general, asks are those it has the tools to answer. This is not new, not true only of imaging, and not an unreasonable way to proceed.

Farah does agree with criticism of ‘wanton reverse inference’, but only when it is wanton. Although you can infer that a particular thought process is associated with a particular brain activity, you cannot turn that around: a particular brain activity does not imply a particular thought process, because an area of the brain may do more than one thing. An example I still notice is the idea that amygdala activity has to do with fear, when fear is only one of the things the amygdala processes. Farah says wanton because this criticism should not be applied to MVPA (multivoxel pattern analysis), which is a valid special case of back-and-forth inference.

The statistics of imaging are another area where suspicion is raised. Some are simply concerned that statistics is a filter and not ‘the reality’. But the use of statistics in science is widespread; it is a very useful tool; stats do not mask reality but approximate it better than raw data does. There are two types of statistical analysis that Farah does feel are wrong. They are often referred to as dead salmon activity (multiple comparisons) and voodoo correlations (circularity). These two faulty statistical methods can also be found in large complex data sets in other sciences: psychometrics, epidemiology, genetics, and finance.

“When significance testing is carried out with brain imaging data, the following problem arises: if we test all 50,000 voxels separately, then by chance alone, 2,500 would be expected to cross the threshold of significance at the p<0.05 level, and even if we were to use the more conservative p<0.001 level, we would expect 50 to cross the threshold by chance alone. This is known as the problem of multiple comparisons, and there is no simple solution to it…Statisticians have developed solutions to the problem of multiple comparisons. These include limiting the so-called family-wise error rate and false discovery rate.”
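The arithmetic in that passage is easy to check with a toy simulation (mine, not from Farah’s article): under the null hypothesis, p-values are uniformly distributed, so a “brain” with no real activation anywhere still produces roughly the quoted numbers of chance threshold-crossings, and a family-wise correction such as Bonferroni removes them.

```python
import random

random.seed(42)
n_voxels = 50_000

# Under the null hypothesis (no real activation anywhere), p-values
# are uniform on [0, 1], so we can simulate them directly.
p_values = [random.random() for _ in range(n_voxels)]

false_pos_05 = sum(p < 0.05 for p in p_values)    # expected ~2500
false_pos_001 = sum(p < 0.001 for p in p_values)  # expected ~50

# Bonferroni controls the family-wise error rate by testing each
# voxel at alpha / n_voxels instead of alpha.
bonferroni_alpha = 0.05 / n_voxels
false_pos_bonf = sum(p < bonferroni_alpha for p in p_values)

print(false_pos_05, false_pos_001, false_pos_bonf)
```

With these numbers the uncorrected counts come out close to the 2,500 and 50 quoted above, while the Bonferroni-corrected count is almost always zero; the false-discovery-rate procedures Farah mentions are a less conservative alternative to Bonferroni.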

“Some researchers first identified the voxels most activated by their experimental task and then—with the same data set—carried out analyses only on those voxels to estimate the strength of the effect. Just as differences due to chance alone inflate the uncorrected significance levels in the dead fish experiment, differences due to chance alone contribute to the choice of voxels selected for the second analysis step. The result is that the second round of analyses is performed on data that have been “enriched” by the addition of chance effects that are consistent with the hypothesis being tested. In their survey of the social neuroscience literature, Vul and colleagues found many articles reporting significant and sizeable correlations with proper analyses, but they also found a large number of articles with circular methods that inflated the correlation values and accompanying significance levels”
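The circularity can also be shown with a toy simulation (again illustrative, not from Farah’s article): selecting the “best” voxels and then estimating their effect from the same data inflates the estimate, while an independent data set gives an honest, near-zero answer.

```python
import random

random.seed(0)
n_voxels, n_subjects = 1000, 20

def mean(xs):
    return sum(xs) / len(xs)

# Pure noise: no voxel carries any true effect in either "session".
session_a = [[random.gauss(0, 1) for _ in range(n_subjects)] for _ in range(n_voxels)]
session_b = [[random.gauss(0, 1) for _ in range(n_subjects)] for _ in range(n_voxels)]

# Circular analysis: choose the 10 most "activated" voxels in session A,
# then estimate their effect size from the SAME data.
top = sorted(range(n_voxels), key=lambda v: mean(session_a[v]), reverse=True)[:10]
circular_estimate = mean([mean(session_a[v]) for v in top])

# Honest analysis: same voxels, effect estimated from independent data.
independent_estimate = mean([mean(session_b[v]) for v in top])

print(round(circular_estimate, 2), round(independent_estimate, 2))
```

The circular estimate comes out clearly positive even though every voxel is pure noise, while the independent estimate sits near zero - the “enrichment” by chance effects described in the quote.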

Finally she tackles the question of influence. The complaint is that images are too convincing, especially to the general public. This may be true in some cases, but attempted replication of many of the undue-influence studies has not shown the effect. It may be the notion of science, rather than imaging in particular, that is convincing. Or it may be that people have become used to images and the coloured blobs no longer have undue impact. There is also the question of resources. Some feel that imaging studies get the money, acceptance in important journals, interest from the media and so on. There seems to be little actual evidence for this, and it may often be sour grapes.

Should we trust fMRI? Yes, within reason. No single paper, with images or without, can be taken as True with a capital T, but provided the stats and inferences are sound, images are as trustworthy as other methods.

Farah MJ (2014). Brain images, babies, and bathwater: critiquing critiques of functional neuroimaging. The Hastings Center Report, Spec No. PMID: 24634081

Moore, C., & Cao, R. (2008). The Hemo-Neural Hypothesis: On The Role of Blood Flow in Information Processing Journal of Neurophysiology, 99 (5), 2035-2047 DOI: 10.1152/jn.01366.2006


This post was chosen as an Editor's Selection for ResearchBlogging.org

What is in a smile?

 

We distinguish genuine from fake smiles, even though in many cases we appreciate the polite sort of fake smile. I had thought it was a settled matter. Smiles are marked by the raising of the corners of the mouth and the pulling back of them. A broad smile (fake or real) opens the mouth by lowering the jaw. But only authentic smiles are marked by crow’s feet at the corners of the eyes. This is the Duchenne marker. Would you believe that it is just not that simple? The smile is a dynamic thing, and research has mostly used static pictures to investigate smiles. A recent paper by Korb (citation below) examines dynamic smiles. Here is the abstract:

“The mechanisms through which people perceive different types of smiles and judge their authenticity remain unclear. Here, 19 different types of smiles were created based on the Facial Action Coding System (FACS), using highly controlled, dynamic avatar faces. Participants observed short videos of smiles while their facial mimicry was measured with electromyography (EMG) over four facial muscles. Smile authenticity was judged after each trial. Avatar attractiveness was judged once in response to each avatar’s neutral face. Results suggest that, in contrast to most earlier work using static pictures as stimuli, participants relied less on the Duchenne marker (the presence of crow’s feet wrinkles around the eyes) in their judgments of authenticity. Furthermore, mimicry of smiles occurred in the Zygomaticus Major (smile muscle - positive), Orbicularis Oculi (Duchenne muscle - positive), and Corrugator muscles (frown muscle - negative). Consistent with theories of embodied cognition, activity in these muscles predicted authenticity judgments, suggesting that facial mimicry influences the perception of smiles. However, no significant mediation effect of facial mimicry was found. Avatar attractiveness did not predict authenticity judgments or mimicry patterns.”

In these experiments stronger smiles were found to be both more realistic and more authentic. This did not depend as much as previously thought on the eyes: the smile muscle action, the opening of the mouth and the lack of a frown in the brow were as important as the Duchenne marker. The subjects showed electrical activity in the muscles of their own faces, mimicking the video being shown, and whether a subject found the smile genuine could be predicted from this mimicry. The clearest mimicry was the combination of the smile and frown muscles. These two are correlated: in a smile the Zygomaticus is activated and the Corrugator is relaxed, while the opposite happens in a frown. The Masseter (jaw) muscle did not show mimicry. Since this is different from findings on static smiles, the question is raised whether smiles are judged by a different pathway when they are dynamic.

“Embodiment theories propose that facial mimicry is a low-level motor process that can generate or modify emotional processes via facial feedback. However, other scholars favor the view that facial expressions are the downstream reflection of an internally generated emotion, and therefore play at best a minor role at a later stage of the emotion generation process. The main critique of the embodiment view is based on the observation that, in addition to their well-documented role in facial mimicry, the Zygomaticus and Corrugator muscles respond, respectively, to positive and negative emotional stimuli not containing facial expressions. However, the Orbicularis Oculi muscle is not clearly associated with positive or negative emotions and contracts, for example, during smiling (producing crow’s feet) as well as during a startle reflex in response to a sudden loud noise.”

This points to a low-level motor process because the Duchenne marker is mimicked in the Orbicularis muscle even though it is not actually a diagnostic for a smile. (It can occur in other situations and can be missing in some smiles.) It is more likely that the identification of a smile is due to mimicry than that mimicry is due to the identification of a smile. The authors suggest that this should be further investigated.

“Nevertheless, the hypothesis that facial mimicry mediates the effect of smile characteristics on rated authenticity remains the most parsimonious one based on the fact that 1) facial mimicry is a costly behavior for the organism, 2) participants spontaneously mimicked the perceived smiles, and 3) this mimicry predicted ratings of authenticity. Importantly, the reverse hypothesis, i.e. that perceived authenticity may have caused participants’ facial reactions, seems less likely based on the finding that participants’ Orbicularis Oculi muscle was most activated in response to two types of smiles that contained the highest degree of the corresponding (marker), but resulted in very different ratings of authenticity.”

I hope that researchers will follow up on the idea that static and dynamic images of smiles are processed differently. Would there be clues in the order and timing of a smile unfolding that would point to its authenticity? If fake and genuine smiles are produced by different mechanisms, then perhaps they would be quite different in their dynamics. Using avatars is a neat way to vary the dynamics of the muscle movements.

Korb, S., With, S., Niedenthal, P., Kaiser, S., & Grandjean, D. (2014). The Perception and Mimicry of Facial Movements Predict Judgments of Smile Authenticity. PLoS ONE, 9 (6). DOI: 10.1371/journal.pone.0099194

Why do we get pleasure from sad music?

 

Sadness is a negative emotion, and we recognize sadness in some music, yet we often enjoy listening to sad music. We can be positive about a negative emotion. A recent paper by Kawakami (citation below) differentiates between some hypotheses that might explain this contradiction.

The hypothesis that the response has to do with musical training (i.e., that the pleasure comes from appreciation of and familiarity with the art involved) was shown to be false by the finding of no difference in response between musicians and non-musicians in their experiments. “Participants’ emotional responses were not associated with musical training. Music that was perceived as tragic evoked fewer sad and more romantic notions in both musicians and non-musicians. Therefore, our hypothesis—when participants listened to sad (i.e., minor-key) music, those with more musical experience (relative to those with less experience) would feel (subjectively experience) more pleasant emotions than they would perceive (objectively hear in the music)—was not supported.”

The key innovation in this experimental setup was that the subjects were not just asked how sad they found the music but were given an extensive quiz. For each of two pieces of music, played in both minor and major keys, the subjects rated the experience in terms of 62 words and phrases, rating both their perception of the music’s emotional message and the emotion they personally felt. Four factors were extracted from the 62 emotional descriptions: tragic emotion, heightened emotion, romantic emotion and blithe emotion.

As would be expected the tragic emotion was rated higher for the minor key and lower for the major key music for both perceived and felt emotion. Likewise, there is no surprise that the blithe emotion was the opposite, high for the major and low for the minor for both felt and perceived emotion. The heightened emotion was only slightly higher for the sad minor music over the happy major. Romantic emotion was moderately higher for the happy music over the sad. However, there were differences between felt and perceived emotion. These were significant for the minor music: it was felt to be less tragic, more romantic and more blithe than it was perceived. This difference between felt and perceived is not too difficult to imagine. Suppose you are arguing with someone and you make them very angry. You can perceive their anger while your own feelings may be of smug satisfaction. Although emotion can be very contagious, it is not a given that felt emotion will be identical to perceived emotion.

The hypothesis of catharsis would imply that a deeply felt sadness lifts depression, but this is not what was seen. The next hypothesis the authors discuss is ‘sweet anticipation’: a listener has certain expectations of what will be heard next, and a positive emotion is felt when the prediction is fulfilled. This could contribute to the effect (though not because of musical training).

A third hypothesis is that we have an art-experience mode in which we derive positive emotions from exposure to art. If we believe we are in the presence of ‘art’, that in itself is positive. “When we listen to music, being in a listening situation is obvious to us; therefore, how emotion is evoked would be influenced by our cognitive appraisal of listening to music. For example, a cognitive appraisal of listening to sad music as engagement with art would promote positive emotion, regardless of whether that music evoked feelings of unpleasant sadness, thereby provoking the experience of ambivalent emotions in response to sad music.” Again, this could contribute.

Their new and favourite hypothesis is ‘vicarious emotion’. “In sum, we consider emotion experienced in response to music to be qualitatively different from emotion experienced in daily life; some earlier studies also proposed that music may evoke music-specific emotions. The difference between the emotions evoked in daily life and music-induced emotions is the degree of directness attached to emotion-evoking stimuli. Emotion experienced in daily life is direct in nature because the stimuli that evoke the emotion could be threatening. However, music is a safe stimulus with no relationship to actual threat; therefore, emotion experienced through music is not direct in nature. The latter emotion is experienced via an essentially safe activity such as listening to music. We call this type of emotion vicarious emotion.” … “That is, even if the music evokes a negative emotion, listeners are not faced with any real threat; therefore, the sadness that listeners feel has a pleasant, rather than an unpleasant, quality to it. This suggests that sadness is multifaceted, whereas it has previously been regarded as a solely unpleasant emotion.”

I find that the notion of vicarious emotion could also explain why we can be entertained by, and enjoy, frightening plays, books and movies. All sorts of negative emotions are sought as vicarious experiences and enjoyed. Many things we do for leisure, and our enjoyment of much of art, have a good deal of vicarious emotional content for us to safely enjoy and even learn from.


Kawakami, A., Furukawa, K., & Okanoya, K. (2014). Music evokes vicarious emotions in listeners. Frontiers in Psychology, 5. DOI: 10.3389/fpsyg.2014.00431

Forget the hype

 

I have just done three very difficult posts and I want to do an easy one. How about a rant on mirror neuron theories?

Suppose I find a magic neuron that ‘lights up’ when the subject says ‘tree’. It also reacts if someone else says ‘tree’, or someone points to a tree. But it is silent for a little bush or the word ‘bush’. This magic neuron allows me to understand a tree and to know what is happening in someone’s mind when they say ‘tree’ or point at one. It is probably the foundation of empathy, civilization, language and all good things. You will probably say nonsense: the cell just ‘lights up’ for the concept of tree. First you have to identify this thing and know that it is called ‘tree’ before you can have a cell react to the concept. Understanding of a concept causes a cell to react to the concept - not the other way around. It is not a magic cell, and so neither are mirror neurons. A cell reacting to the concept of ‘reaching’ is no more unusual or special than a cell reacting to the concept of ‘tree’.

I have ranted about this before. Others have ranted too, but somehow the magic just seems to stay associated with mirror neurons.

About a year ago Costandi in the Guardian said the whole subject was based on slim evidence. The neurons may not be where expected or act as described. (here) He was not the first to doubt the hype.

The doubts have come to the fore again with reactions to a paper by Heyes. (here)

Abstract: Fifty years ago, Niko Tinbergen defined the scope of behavioural biology with his four problems: causation, ontogeny, survival value and evolution. About 20 years ago, there was another highly significant development in behavioural biology-the discovery of mirror neurons (MNs). Here, I use Tinbergen’s original four problems (rather than the list that appears in textbooks) to highlight the differences between two prominent accounts of MNs, the genetic and associative accounts; to suggest that the latter provides the defeasible ‘best explanation’ for current data on the causation and ontogeny of MNs; and to argue that functional analysis, of the kind that Tinbergen identified somewhat misleadingly with studies of ‘survival value’, should be a high priority for future research. In this kind of functional analysis, system-level theories would assign MNs a small, but potentially important, role in the achievement of action understanding-or another social cognitive function-by a production line of interacting component processes. These theories would be tested by experimental intervention in human and non-human animal samples with carefully documented and controlled developmental histories.

Nautilus posted a review of the Heyes paper (here), which points out that mirror neurons are produced by associative learning – even Heyes agrees.

Move along folks – no magic here!

 

The John paper 3

 

This is the third post about this paper, E. Roy John; The neurophysics of consciousness; Brain Research Reviews, 39, 2002 pp 1-28. One of the things that stands out in the paper is the idea of a ‘field’ theory of consciousness. John takes time to look at a Quantum theory and the Tononi-Edelman theory to illustrate other ways of looking at non-local brain activity.

“Other contemporary theorists have recognized the need to focus upon the system rather than its individual elements. An electrical field must be generated by synchronized oscillations and the resulting inhomogeneity of ionic charge distribution within the space of the brain. Llinas and his colleagues suggest that consciousness is inherent in a synchronized state of the brain, modulated by sensory inputs. Libet proposed that subjective experience may arise from a field emerging from neural synchrony and coherence, not reducible to any known physical process. Squires suggested that consciousness may be a primitive ingredient of the world, i.e. not reducible to other properties in physics, and includes the qualia of experience. Others have proposed that consciousness arises within a dynamic core, a persisting reverberation of interactions in an ensemble of neurons which maintains unity even if its composition is constantly changing.”

He leans towards the Tononi-Edelman picture and the emergence of consciousness from global brain activity. “This paper illustrates the increasingly recognized need to consider global as well as local processes in the search for better explanations of how the brain accomplishes the transformation from synchronous and distributed neuronal discharges to seamless global subjective awareness.”

John says that consciousness is analog in nature (or a combination of digital local activity and analog non-local activity). What exactly is meant by analog mechanisms? An analog is a mimic of the system you want to solve or understand. The elements, and the relations between elements, are all represented in the analog; analogs are physical copies. One of the most famous analog pairs is the electrical circuit and the hydraulic circuit: there are corresponding pairs of elements and the same forms of equation describing behaviour, voltage being like the head of pressure, and so on. In teaching, the analogy is used both ways, as some things are easier to comprehend in water and some in electrical current. The same elements and equations can be used in a mechanical analog or a pneumatic one. Analogs can be used to make calculations. The analog is a real physical system with real behaviour, and its values are continuous rather than digital. One of the great advantages of analog computers (electrical analogs of other systems, built anew on a patch board for each problem or calculation) was that they did iterative problems in a flash. Digital computers soon became able to do iteration very quickly, and patch boards became a thing of the past. The brain, however, is not a lightning-fast thing. If it were doing iteration, it would take significant time.

The brain is faced with a large number of semi-independent pieces of information from the senses, memory, previous predictions, motor programs, knowledge of the world, on-going tasks and goals, and so on. These pieces of information are held by a huge number of cells. These cells have contact with many others, and that contact is specific to each pair of cells. Step-by-step algorithms are not going to make a moment of perception out of that mass in less than several minutes, maybe much longer, because much of the work is iterative. But that mass, using massively parallel and overlapping feedback loops, can make an analog of the world in that moment ‘in a flash’. Signals may fly in all directions, but the whole thing will only be stable in a few best-fit scenarios, and once at a stable point, it will stay there. Presto: a global perception, including in its scope all the constraints, and not losing or degrading the original pieces of information (the qualia and feelings).
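This settling-into-a-best-fit picture is, in spirit, what an attractor network does. As an illustration only (a textbook Hopfield-style model, not anything from John’s paper), a few binary units with pairwise “constraint” weights relax from a noisy input to the nearest stored pattern, in parallel, without any step-by-step search:

```python
import random

random.seed(1)

# Two stored "percepts" as +/-1 patterns over 16 units.
patterns = [
    [1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
]
n = len(patterns[0])

# Hebbian weights: each pair of units encodes a soft mutual constraint.
w = [[sum(p[i] * p[j] for p in patterns) if i != j else 0 for j in range(n)]
     for i in range(n)]

def settle(state):
    """Update units until no unit wants to flip (a stable best-fit state)."""
    changed = True
    while changed:
        changed = False
        for i in random.sample(range(n), n):
            new = 1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
    return state

# Start from a noisy version of pattern 0 (4 of 16 units flipped).
noisy = patterns[0][:]
for i in [0, 5, 9, 14]:
    noisy[i] = -noisy[i]

result = settle(noisy)
print(result == patterns[0])  # prints True
```

The network reaches the stored pattern in a couple of sweeps; the stable points (attractors) play the role of the few best-fit scenarios described above.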

But then there is the ‘almighty leap’: how is this perception shared, and how are we made consciously aware of it? The ‘hard problem’ is not the qualia but the awareness. John skips over this. His explanation, in its shortest form, is:

“CONSCIOUSNESS EMERGES FROM RESONATING ORGANIZED ENERGY: Simultaneously, the global perception is projected to the consciousness system. - subjective awareness of the percept emerges as a property of an electrical field resonating throughout the consciousness system.”

Now, many neuroscientists are convinced that the ‘unitary self’ is an illusion created from many selves, such as an internal-sense self, a motor self, an external-sense self and so on. And why do we have an illusory unitary self? Well, to be aware of consciousness, to have subjective awareness. But if the subjective self is an illusion, why can’t the awareness be an illusion too?

After all, every cell in the created analog is now, in effect, in possession of the only part of the analog that it can possess. You could say it is ‘aware’ of the analog from its point of view. There is nothing about the new perceptual moment that it needs to be told.

But if self and awareness are illusions, what are these illusions in aid of? I would guess that they are needed for a useful memory. One perceptual moment has to be tied to another to construct a narrative, a biographical narrative, so that there can be a longer-term continuity in our thought and action.

That is the end of my posting on the John paper.

 

The John paper 2

 

This is the second part of a look at an old paper: E. Roy John; The neurophysics of consciousness; Brain Research Reviews, 39, 2002 pp 1-28. It is about brain waves.

Some background is needed to understand his model. EEG measures the voltage potentials on the scalp, and from these are inferred the voltage potentials on and in the outer layers of the cortex. From the changes in these potentials it is possible to look at the underlying waves. The trace is mathematically separated into component waves and these are graphed in separate frequency bands; the amount of energy in each band is calculated. This is the power spectrum. The conventional bands are: gamma (25-50 cycles per second, or Hz), beta (12.5-20), alpha (7.5-12.5), theta (3.5-7.5) and delta (1.5-3.5). These fluctuations in potential affect the activity of neurons. The potential across a neuron’s membrane has a threshold which allows initiation of an electrical signal to be propagated along the neuron’s cell body and axon. The potentials imposed by the electrical waves bring the neuron membrane closer to or further from the threshold and therefore make activity easier or harder. (John does not deal with any contribution glial cells make to this system - perhaps the paper is too early for that.) The imposition of a wave will tend to synchronize the activity of neurons, because they will tend to reach threshold at nearly the same time and then have similar periods of recovery when they cannot signal. Each cycle of the wave will bring more neurons into the synchrony. The waves arise in pacemaker cells that naturally oscillate at a particular frequency (like heart pacemaker cells).
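The band decomposition can be sketched in a few lines: a toy “EEG” with a strong 10 Hz (alpha) component and a weaker 5 Hz (theta) component, a naive discrete Fourier transform, and the conventional band boundaries from the text. (This is an illustrative sketch, not code from the paper; a real EEG pipeline would use an FFT with windowing over many channels.)

```python
import math

# Toy "EEG": 2 seconds sampled at 128 Hz, an alpha rhythm (10 Hz)
# with a weaker theta component (5 Hz).
fs, seconds = 128, 2
n = fs * seconds
signal = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 5 * t / fs)
          for t in range(n)]

def band_power(signal, fs, lo, hi):
    """Sum of DFT power over frequencies in [lo, hi) Hz (naive O(n^2) DFT)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq < hi:
            re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            total += (re * re + im * im) / n
    return total

# The conventional bands listed in the text.
bands = {"delta": (1.5, 3.5), "theta": (3.5, 7.5), "alpha": (7.5, 12.5),
         "beta": (12.5, 20), "gamma": (25, 50)}
powers = {name: band_power(signal, fs, lo, hi) for name, (lo, hi) in bands.items()}
dominant = max(powers, key=powers.get)
print(dominant)  # prints alpha
```

The power spectrum correctly reports alpha as the dominant rhythm, with a smaller theta peak and essentially nothing in the other bands, which is the kind of summary John’s model works from.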

John gives the following picture of the actions of brain waves:

“The observed predictability of the EEG power spectrum arises from regulation by anatomically complex homeostatic systems in the brain. Brainstem, limbic, thalamic and cortical processes involving large neuronal populations mediate this regulation, utilizing all the major neurotransmitters. Pacemaker neurons distributed throughout the thalamus normally oscillate synchronously in the alpha (7.5–12.5 Hz) frequency range. Efferent globally distributed thalamo-cortex projections produce the rhythmic electrical activity known as the alpha rhythm, which dominates the EEG of an alert healthy person at rest.”

Note: there are a number of ‘nuclei reticularis’ or ‘reticular nuclei’ in the brain and John does not say which he is referring to. The problem is not serious. There is an extension of the spinal cord that runs through the brain stem, midbrain and ends in the thalamus, called the reticular formation. Somewhere between the brain stem and the thalamus, very probably at the thalamus end, is the nucleus he is referring to. The reticular system includes the ascending reticular activating system and it must be active for the brain to function normally.

“Nucleus reticularis can hyperpolarize the cell membranes of thalamic neurons by gamma-amino-butyric acid (GABA) release, slowing the dominant alpha rhythm into the lower theta range (3.5–7.5 Hz), and diminishing sensory throughput to the cortex. Theta activity can also be generated in the limbic system, possibly by theta pacemaker cells in the septal nuclei which can be inhibited by entorhinal and hippocampal influences. Slow delta activity (1.5–3.5 Hz) is believed to originate in oscillator neurons in deep cortical layers and in the thalamus, normally inhibited by input from the ascending reticular activating system in the midbrain. Delta activity may reflect hyperpolarization of cortical neurons resulting in dedifferentiation of neural activity. Activity in the beta band (12.5–20 Hz) is believed to reflect cortico-cortical and thalamo-cortical transactions related to specific information processing. Activity in the gamma band (25–50 Hz) may reflect cortico-thalamo-cortical reverberatory circuits, as well as back-propagation of axonal discharges to the dendrites of cortical pyramidal cells, which may play an important role in perception as proposed in this paper.”

Note: Although the cortex does affect these rhythms, it is not the source of the original pacemakers.

John works through an example: “…Assume that a subject is asleep, with diminished activity in the ascending reticular activating system, an EEG dominated by slow delta and theta waves reflecting inhibition of the thalamus by nucleus reticularis and consequent diminution of sensory input to the cortex…a sudden increase of stimuli in the environment results in inhibition of nucleus reticularis releasing the thalamic cells from inhibition by n. reticularis. The dominant activity of the EEG power spectrum becomes more rapid, with return of alpha activity. Increased flow of information through the thalamus to the cortex is facilitated, resulting in cortico-cortical interactions reflected by increased beta activity. Coincidence detection by pyramidal cells comparing this exogenous input with readout of endogenous activity activates cortical-thalamic loops generating gamma activity and mediating perception of the sensory information. Collaterals (side branches) go to n. reticularis from corticothalamic axons. The cortex can activate n. reticularis by these axons indirectly en passage or directly by glutamatergic pathways, to suppress the arrival of information to the cortical level. Indirectly, as an alternative result of cortical influences, dopaminergic striatal projections can inhibit the (reticular formation). Such inhibition enables inhibition of thalamic neurons by n. reticularis, blocking transmission from the thalamus to the cortex. The dominant activity of the power spectrum slows toward or into the theta range. The cortex can thus modulate its own information input. The potential role of this mechanism in awareness and the focusing of attention should be apparent.”

Examination of the momentary voltage fields (LFPs, local field potentials) on the scalp reveals a kaleidoscope with positive hills and negative valleys on a landscape, or ‘microstate’, which changes continuously. Computerized classification of microstates observed in the EEGs of 400 normal subjects, aged 6–80 years, yielded the same small number of basic topographic patterns in every individual, with approximately equal prevalence. The topographies of these instantaneous brain voltage fields closely resemble the computed modes of factor loadings obtained in SPC (spatial principal component analysis) studies. This correspondence suggests that the SPC loadings are not a computational artifact, but may reflect biologically meaningful processes.

The mean microstate duration slowly decreases during childhood, stabilizing for healthy young adults at ~82 ± 4 ms. Although the field strength waxes and wanes, the stable landscapes persist with this duration. … The transition probabilities from microstate to microstate are apparently altered during cognitive tasks. Different microstates seem to correlate with distinctive modes of ideation. The stability of the microstate topographies and their mean duration across much of the human life span again supports the suggestion of genetic regulation.
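The transition probabilities mentioned above are easy to estimate once each stretch of EEG has been labelled with a microstate class. A minimal sketch, assuming the labelling has already been done; the four-class label sequence and the `transition_matrix` function are my inventions for illustration:

```python
import numpy as np

def transition_matrix(labels, n_states):
    """Estimate microstate-to-microstate transition probabilities
    from a sequence of per-segment microstate labels."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        if a != b:              # count only genuine state changes
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # avoid division by zero
    return counts / row_sums

# Invented label sequence for four microstate classes (A=0 .. D=3)
labels = [0, 0, 1, 1, 2, 0, 3, 3, 1, 2, 2, 0]
P = transition_matrix(labels, 4)
# Each row of P sums to 1 (or 0 if a state never transitions out)
```

Comparing such matrices between rest and a cognitive task is the kind of analysis that would reveal the altered transition probabilities John mentions.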

Note: John does not mention modes such as the default mode but this seems like a description of mode changes – again too early for that.

Perceptual time is regulated, parsed into discontinuous intervals. Although subjective time is experienced as continuous, brain time is discontinuous, parsed by some neuro-physiological process into epochs of ~80 ms which define a ‘traveling moment of perception’. Sequential stimuli that occur within this brief time interval will be perceived as simultaneous, while events separated by a longer time are perceived as sequential. Other evidence has led to similar proposals that consciousness is discontinuous and is parsed into sequential episodes by synchronous thalamo-cortical activity. Multimodal asynchronous sensory information may thereby be integrated into a global instant of conscious experience. The correspondence between the experimentally obtained durations of each subjective episode and the mean duration of microstates suggests that a microstate may correspond to a ‘perceptual frame’. The phenomenon of ‘backward masking’ or metacontrast, consisting of the ability of a later sensory input to block perception of an event earlier in time, suggests that perhaps two separate events within a single frame are required for conscious perception. These two events might represent independent inputs to a comparator. (This seems to mean that a stimulus must be stable over a good part of a frame to be saved.)
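The ‘traveling moment’ idea can be written down as a toy rule. The ~80 ms window comes from the text; the function is my illustration, not John's:

```python
FRAME_MS = 80  # approximate duration of a perceptual frame

def perceived_as_simultaneous(t1_ms, t2_ms, frame_ms=FRAME_MS):
    """Two stimuli falling within one traveling moment are
    experienced as simultaneous; wider gaps are sequential."""
    return abs(t1_ms - t2_ms) < frame_ms

assert perceived_as_simultaneous(100, 150)      # 50 ms apart: one moment
assert not perceived_as_simultaneous(100, 300)  # 200 ms apart: sequential
```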

“The exact time at which conscious perception occurs following sensory input is unclear. Certainly, it is delayed beyond 50–100 ms since stimuli are particularly susceptible to masking by a competing stimulus during this period. Psychophysical evidence shows that the perceptual frame closes at ~80–100 ms after occurrence of a specific event. Although it is clear that time for the brain is discontinuous, the frame duration may differ in the various sensory modalities. A mechanism may be required to synchronize sensory elements sampled at different rates in disparate modalities. Based on train duration studies, Libet has suggested that perception may occur as late as 300–500 ms post stimulus. Extending train duration of repetitive direct cortical stimuli up to but not beyond 300–500 ms lowered perceptual threshold. These train duration effects have been reproduced for stimuli applied to the cerebral cortex via intracerebral electrodes. Similar duration effects have been shown using repetitive transcranial magnetic or direct electrical stimulation of the cortex and sensory deficit or neglect in healthy volunteers.”

“In order to achieve the stable persistence of LFP topography revealed by microstate analysis, while displaying such duration effects and susceptibility to disruption by masking stimuli, some reentrant or reverberatory brain process must sustain cortical transactions as a steady state, independent of the activity of individual neurons.”

Note: John seems to be thinking in terms of standing waves here.

“Such a process, called the ‘hyperneuron’, has been postulated and described in some detail. This persistent electrical field, produced by reverberating loops, may correspond to a neural correlate of the ‘dynamic core’ postulated by Tononi and Edelman. According to this concept, there must exist a set of spatially distributed and meta-stable thalamo-cortical elements that sustains continuity of awareness in spite of constantly changing composition of the neurons within that set.”

More in a later post.

 

The John paper 1

 

I have been looking at a paper that is not very recent but nonetheless very interesting. It took ages to find a copy of it that I could access and download, but now when I go back it is no longer there. The paper is E. Roy John; The neurophysics of consciousness; Brain Research Reviews, 39, 2002, pp 1-28. Those of you who have access to various sources will be able to download the pdf but I can no longer supply a link.

The paper describes a proposed global brain process. There are local processes to establish ‘fragments of sensation’ and these are connected to give ‘fragments of perception’, but a complete ‘frame’ of perception requires non-local processing. Consciousness appears to require a brain-wide process. John sees these processes in terms of EEG patterns. I am going to deal with his theory in several posts rather than try to put everything in one. In this post is John’s diagram of consciousness and descriptions of the numbered phases. I feel there are some ‘almighty leaps’ here but there are also some very convincing processes too.

(John’s diagram of consciousness, with its numbered processes, appeared here.)
Perceptual frame opens:

process 1 – stimuli from the environment are captured by the sense organs and directed to the thalamus

process 2 – input in the thalamus is directed to the thalamic regions specific for each modality/sense

process 3 – the thalamic regions send volleys, by fast, direct paths, to the primary sensory areas of the cortex; the volleys are parsed into perceptual frames – the information is distributed in the sensory cortex and decomposed into ‘fragments of sensation’

process 4 – activity in the local ensembles becomes non-random, and local negative entropy deepens

process 5 – each perceptual frame lasts ~70–100 ms (1/alpha frequency) and successive frames are each offset by 20–25 ms (1/gamma frequency) – “this multiplexed activity will produce a steady state which will persist independent of the discharge of particular neurons” (sample and hold).
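The timings in process 5 follow directly from the band frequencies quoted earlier. A quick check of the arithmetic (mine, not John's), taking representative frequencies from within each band:

```python
# Frame duration ~ one alpha cycle; frame offset ~ one gamma cycle
alpha_hz = 10.0   # middle of the 7.5-12.5 Hz alpha band
gamma_hz = 40.0   # within the 25-50 Hz gamma band

frame_ms = 1000.0 / alpha_hz   # ~100 ms per perceptual frame
offset_ms = 1000.0 / gamma_hz  # ~25 ms between successive frame onsets

# So roughly four frames overlap at any instant, which is what lets the
# multiplexed activity behave as a steady state ("sample and hold")
overlap = frame_ms / offset_ms
```

With these representative values the overlap comes out at four concurrent frames; other frequencies within the quoted bands give the ~70–100 ms and 20–25 ms ranges in the text.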

Now we have a perceptual frame – a field potential pattern throughout the cortex controlled by volleys of sensory specific thalamo-cortical signals.

Relevant context is represented:

At the same time as the processes above, 1 to 5, other information is entering the frame.

process 6 – activity in the thalamus also activates other (non-cortical) regions such as the brainstem, cerebellum, and limbic system which are primed by inputs in the immediate past

process 7 – this activity adds information from recent working memory, episodic memories, states of autonomic, emotional, motivational and motor systems to the non-sensory thalamic regions where this information forms a value/meaning and is directed to cortical areas by the thalamus – it arrives after the original sensory data because of its longer path and provides context to the ‘sample and hold’.

Sensory fragments are converted to fragments of perception:

process 8 – as frames coalesce, elements with most relevance (strong value signal) deliver stronger signals to the comparator system distributed throughout the cortex

process 9 – those pyramidal cells whose fields produce ‘fragments of sensation’ and which also have value signal input, shift their membrane potential above a critical threshold, giving a higher rate of cortico-thalamic activity. This selects them automatically to convert to ‘fragments of perception’.

Perceptual elements are bound together:

“A cooperative process is required for the multidimensional binding which provides the fine texture of consciousness and the global nature of a momentary cognitive instant of experience. No cell nor ensemble can subserve the large scale integration required for cognitive interpretation of the totality of significant departures from randomness which constitutes the GNegP (global negative entropy of perception), the integration of LNegP (local negative entropy) activity synchronized across spatially distributed neuronal masses. The actual binding process has been envisaged as a global resonance state, resulting from the coincidence detection of concurrent specific and nonspecific neuronal processes.”

process 10 – rhythmic oscillatory potentials, synchronous and phase-locked across the cortex, act like a scanning voltage which modulates the membrane potentials of the cortical cells. “Rhythmic fluctuation of cortical membrane potentials intensifies a multi-modal cortico-thalamic volley of the distributed LNegP fragments of perception that is synchronously projected from many cortical areas upon appropriate thalamic regions. These fragments of percepts converge as a coherent cortico-thalamic volley upon the intralaminar nuclei of the thalamus, where they are ‘bound’ into the multimodal, global negative entropy of perception, GNegP1. GNegP is the information content of momentary self-awareness.”

process 11 – at the same time, signals cause the nucleus reticularis of the thalamus to inhibit the thalamic regions from sending signals to areas of the cortex that have not been caught up in the concordance that produced the GNegP. This defines the information content of the moment of awareness.

Consciousness emerges from resonating organized energy:

“A reverberatory thalamo-cortico-thalamic interaction arises between the thalamic nodes representing GNegP and those brain regions wherein LNegP arose, which endows GNegP1 with specific sensory and emotional dimensions, the ‘qualia’ of the subjective experience.”

process 12 – at the same time, the global perception is projected from the thalamus to the consciousness system, the set of brain regions which change state reversibly with loss of consciousness, causing the consciousness system to become highly coherent.

process 13 – “coherent activation within this set of structures transposes GNegP into a concentrated electromagnetic field. Establishment of a sufficiently non-random spatio-temporal charge aggregate within a critical neural mass is postulated to produce consciousness, an emergent property of sufficiently organized energy in matter.” This coherent reverberating activation in the CS acts like an ‘analog’ electrical field in a restricted space. John says that the brain is a hybrid digital-analog system.

The content of consciousness and the self:

process 14 – through resonance between the consciousness system and the intralaminar nuclei of the thalamus, the percept is given the qualia of the primary sensory regions of the cortex.

process 15 – subjective awareness emerges as an intrinsic property of the coupling of the analog CS with the digital microstate. “It is postulated that much of the early life of human beings is devoted to learning how to reconcile these two classes of brain activity”

process 16 – “the resonant activity impinging upon the adaptive output systems provides feedback to update the value system. Interactions of the intralaminar nucleus and other thalamic nuclei with CS structures modulate efferent systems to produce adaptive outputs such as speech, movement and emotional expression”

 

More in future posts.

 

 

Descartes again

 

E B Bolles in Babel’s Dawn (here) has a good article on the problem of some linguists’ attitude to evolution by natural selection.

He states the problem: “I have often thought that if I could just get a grip on the reason Chomskyans have such a distaste for natural selection, I would have a much clearer grasp of what lies at the root of our disagreement. Chomsky is making an assumption; I hold a counter-assumption.” Bolles found a clue in Ferretti and Adornetti. They said, “As a good Cartesian, Chomsky has always expressed his complete aversion to the possibility of considering universal grammar as an adaptation of natural selection.” So Bolles looked at Descartes to see if his problem was answered. I recommend reading the posting for the full picture of what Bolles is saying.

I have had slightly different problems with Chomsky and the idea of a Cartesian outlook also illuminates them.

First is the denial that language is ultimately about communication. I find it difficult to see language in any other light. Second is the insistence that language comes in sentences, only sentences, and grammatically correct sentences at that. Third is the notion that language is a logical structure akin to symbolic logic or mathematics. (I know that it is moulded by generations of children learning their mother tongue into grammatical forms that are easy to learn. But these are not always logical as such; rather they are smoothed of awkwardnesses that tax the memory.) Fourth is the notion that thought (other than semantic thought) depends on the internal use of language. This would imply that visual artists, musicians and athletes do not think about and ‘in’ their major activity. Fifth is the dislike of anything biological – no wet, squishy, warm ideas about the actual brain, environment, survival, only clean neat diagrams on white boards. There is little concern about how language is produced and whether it is the way UG theory says it is. Also there is no concern with how people actually speak or why. Sixth is how hard the fight is against any idea that any animal might have any tiny bit of a language.

I always started with the last problem. My notion was that Chomsky is a person who holds that humans are uniquely unique and that is because only they have language which is so powerful as to make humans not just different from other animals but a new and different sort of animal. The result of this is to label anything that animals can do as ‘not real language’. So language cannot be about communication because almost all living things do some form of communication. Language cannot be an ordinary function of living things. Descartes held that only humans thought, animals didn’t. Thought was the primary thing to Descartes – I think therefore I am. This backward idea was his key – not I am but I think was where he started.

Once you are rid of animals having language or of language having survival functions like communication, you then have the awkward problem of how it appeared. Normal evolution would involve slow development of language in pre-human animals and its selection by how well it functioned in communication. As these are no-nos, you are left with an instantaneous creation by one or a few simultaneous mutations. That is really hard for a biologist to swallow. Descartes did not have that problem; he pre-dated the ideas of evolution and natural selection. There did not have to be a start to the characteristics of any animal or of humans.

Bolles’ look at Descartes also made him think about environment. Chomsky is not happy with language having a recent cultural evolution as well as being unhappy with a conventional biological one. That gets in the way of all-or-nothing language. It again points to a communication function. Chomsky wants language without the bother of an environmental reality. If language was independent of the body and its environment, then language could be immune to culture and the biological constraints of the body. Descartes does this nicely – there is no material mind. Mind and matter are separate. For Chomsky, language is firstly internal and independent of the environment and actual speech. Very much like Descartes’ mind.

Bolles uncovers something else, another type of Cartesian dualism. “There is also mathematical dualism, the belief that the mathematical world and physical world both exist and follow separate laws. Chomsky is definitely this kind of a dualist, and many other smart people agree with him. Mathematical dualism asserts the independence of its subject from the other reality. … Chomsky doesn’t mean merely self-governing or even self-developing. He means natural language occupies its own reality, just as the natural numbers do. Furthermore, they are independent of any meaning assigned to them culturally. Chomsky believes language (or at least its Universal Grammar) is the same way. So, of course, he is not going to expect much help from biological questions in trying to understand the nature of language. Who would turn to biology to understand mathematics?” This is interesting. I often find it amusing that some mathematicians insist that they discover pre-existing mathematical structures rather than invent them. How quaintly humble it sounds! This idea is older than Descartes. Oh, the things that Descartes and Plato have to answer for.

 

What is peculiar here?

 

Vaughan Bell (here) has posted a list of Robert Bjork’s from a conference slide:

Important peculiarities of the human memory system

  • “A remarkable capacity for storing information is coupled with a highly fallible retrieval process.
  • What is accessible in memory is highly dependent on the current environmental, interpersonal, emotional and body-state cues.
  • Retrieving information from memory is a dynamic process that alters the subsequent state of the system.
  • Access to competing memory representations regresses towards the earlier representation over time”

I found this an interesting list. The idea that these are peculiar to human memory strikes me as an unnecessary qualifier. I don’t know of any studies of these characteristics in any other animals: elephants for example. They may not be peculiar to humans at all. They seem to me to be the sort of characteristics that we would expect from evolutionary development and it would be surprising if they were not widespread amongst animals.

Take the second item. It makes sense that where you are and what you are doing should make it easier to recall memories that have something to do with that place and activity. It seems that in remembering events they are ‘tagged’ with time and place information. Why? So that they can be retrieved when they would be useful. When I want to remember something, I do not want lots of similar but useless memories, I want the one that helps me now in the current situation. How can this characteristic be thought of as a human peculiarity (or a quirk/kluge)? I am sure it is not peculiar or only found in human memories.

The third item seems so reasonable; how can it be peculiar? Am I expected not to learn? I want my memories to be as up-to-date in their associations with other memories and knowledge as possible. Otherwise they will eventually not be retrievable because their associations will be long gone. And if still retrievable, they would be curious fossils and not very useful. What is particularly peculiar or human-only about this?

Item one is not always so – there are people that cannot forget. They would, I am told, love to lose that ability. A really superhuman ability to remember is actually a handicap. When I was schooled 40-50 years ago I learned a great deal. Most of it is now considered dated or wrong, much of the rest is of no use or interest to me now. I do not want it competing with more recent memory. Again not peculiar and probably widespread in memories.

I cannot make any comment on the last item. It doesn’t make much sense to me and so I assume I am not understanding it.

Perhaps if I had been at the lecture I would have a different take. Perhaps Bjork explained how such characteristics might evolve. With just the slide and its title, I can only imagine Bjork is comparing our memories to those of some animal like the fruit-fly or with computers.