Hallucinogens

A recent article in The Psychologist by Carhart-Harris, Kaelen and Nutt (here) reviews what is known about the action of chemicals that cause hallucinations: LSD from ergot fungi, dimethyltryptamine from ayahuasca and psilocybin from magic mushrooms.

The molecular action of the hallucinogens is to excite particular pyramidal neurons by mimicking the action of the transmitter serotonin. These layer 5 pyramidal neurons project to lower centers outside the cortex as well as to targets within it, and they mostly project to neurons that are inhibitory. The net effect is that exciting the pyramidal cells creates an inhibitory signal from other neurons; it tends to shut things down generally. The oscillations in the cortex, so important to the workings of the brain, are decreased in strength and also desynchronized. The disruption of brain waves seems to stem from this interference with the pyramidal-cell-to-inhibitory-cell chains.

This decrease in activity is especially evident in some very important hubs in the brain: the thalamus, posterior cingulate cortex and the medial prefrontal cortex, all integrating and executive control hubs. This may be the source of the lack of integration and constraint seen in hallucinations. There is a loss of distinctness in the network structure of the brain, and networks seem to melt into one another. One of the effects of this is increased cognitive flexibility and ability to learn, which may be very useful to therapists in controlled doses. The inhibition of some centers allows the areas they control to escape that control. When the cat is away, the mice will play.

It is interesting that the hallucinogens “profoundly alter the quality of consciousness whilst leaving arousal or wakefulness intact.” How does the hallucinogen create the complex vivid visual hallucinations or the loss of ego? During periods of hallucination there are ‘phasic discharges’ from the hippocampus, amygdala and septal nuclei (medial temporal lobe sites), in contrast to the general disorder of brain waves. This resembles the activity of the medial temporal lobe in REM sleep and dreaming. Experimental stimulation of the MTL can produce hallucinations and distortions of vision.

“One of the most common yet abstract experiences described in relation to the hallucinogenic drug state is a disintegration or dissolution of the self or ego. Such an experience is difficult to fathom from the vantage of normal waking consciousness, where an integrated sense of self is felt as pervasive and permanent. It is perhaps not surprising therefore that the experience of ego-disintegration is described as profoundly disconcerting and unusual … Classic accounts of so-called ‘mystical’ or ‘spiritual’ experiences have placed emphasis on the necessity for self or ego disintegration for their occurrence. Thus, in order to investigate the neurobiological basis of ego-disintegration and mystical-type experiences, it is useful to first examine the neural correlates of self-awareness.” The strength of alpha waves in the posterior cingulate cortex, a major hub in the default mode network, is correlated with the strength of the self. In hallucinogen sessions, the activity of the PCC decreases in correlation with ego-disintegration. The self is also weak during dreaming.

Astrocyte role in gamma waves

The study of the brain has been very neuron-centered. Glial cells outnumber neurons by about 10 to 1 in the cortex and are known to be important to brain function, but it is not clear just what they do other than some housekeeping tasks and shepherding neurons to their final locations during development. Astrocyte roles appear to be important but remain largely unknown.

Now Lee et al. (see citation below) have published an excellent paper showing one role connected with gamma oscillations. The work was very impressive, but too specialized to describe here – it is summarized in the abstract below. The paper really ‘nailed down’ one role of the astrocytes.

In hippocampus slices they showed that astrocyte intracellular calcium rises before the start of gamma oscillations. This rise does not trigger the gamma but is required for the oscillations to be sustained. They were able to block glutamate release from astrocytes without affecting neuron activity and showed that this glutamate release was the mechanism maintaining gamma duration. They developed a strain of mouse in which astrocyte glutamate release could be switched on and off, and again they showed that neuron behavior was not affected. When the glutamate release from astrocytes was blocked, the gamma power spectrum decreased in the 20 to 40 Hz range. The power spectrum decrease happened only during waking and not in REM or non-REM sleep. The behavior of the mice was also examined. There was no difference in maze navigation or in fear conditioning, but novel object recognition was defective when astrocyte glutamate release was turned ‘off’ and normal when it was ‘on’. So gamma oscillation in the hippocampus is required for novel object recognition, and this ability depends on glutamate release from astrocytes.
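
To make the measurement concrete, here is a minimal sketch of how a 20 to 40 Hz band-power estimate can be computed from a recorded trace. The synthetic signal and the parameters are my own assumptions for illustration; this is not the paper’s actual analysis pipeline.

```python
# Hypothetical sketch: gamma-band (20-40 Hz) power from an EEG/LFP-like trace.
# The signal is synthetic (a 30 Hz sine plus noise) standing in for a recording.
import numpy as np
from scipy.signal import welch

fs = 1000                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 s of signal
sig = np.sin(2 * np.pi * 30 * t) + 0.5 * np.random.randn(t.size)

f, psd = welch(sig, fs=fs, nperseg=2048)   # power spectral density
band = (f >= 20) & (f <= 40)               # the range the paper reports
gamma_power = psd[band].sum() * (f[1] - f[0])  # integrate the PSD over the band
print(f"20-40 Hz band power: {gamma_power:.4f}")
```

A drop in this number between the ‘on’ and ‘off’ conditions, present in waking but not in sleep, is the shape of the effect the paper reports.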

They explain in their discussion why there would be a difference in the three behavior tests. “Although both the Y-maze task and the NOR test rely on the rodent’s innate exploratory behavior in the absence of externally applied positive or negative reinforcement, defects were selectively observed in the case of the NOR test. This is particularly relevant because the Y-maze task evaluates a simpler form of memory processing, i.e., short-term spatial working memory, whereas NOR involves a higher memory load engaging long-term storage, retrieval, and restorage of memory processing. During the test phase of the NOR test, a novel object needs to be detected and encoded, whereas the memory trace of a familiar object needs to be updated and reconsolidated after long delays. In contrast, fear conditioning might constitute a strong and highly specific form of learning involving a sympathetic reflex reaction with suppression of voluntary movements (freezing), in which subtle changes in memory content might not be detectable. Moreover, there is strong evidence that suggests fear-conditioned learning encodes a long-term memory process involving the amygdala and the hippocampus, whereas the NOR paradigm engages different structures: the hippocampus and adjacent cortical areas including entorhinal, perirhinal, and parahippocampal cortex.”

Here is the abstract:

Glial cells are an integral part of functional communication in the brain. Here we show that astrocytes contribute to the fast dynamics of neural circuits that underlie normal cognitive behaviors. In particular, we found that the selective expression of tetanus neurotoxin (TeNT) in astrocytes significantly reduced the duration of carbachol-induced gamma oscillations in hippocampal slices. These data prompted us to develop a novel transgenic mouse model, specifically with inducible tetanus toxin expression in astrocytes. In this in vivo model, we found evidence of a marked decrease in electroencephalographic (EEG) power in the gamma frequency range in awake-behaving mice, whereas neuronal synaptic activity remained intact. The reduction in cortical gamma oscillations was accompanied by impaired behavioral performance in the novel object recognition test, whereas other forms of memory, including working memory and fear conditioning, remained unchanged. These results support a key role for gamma oscillations in recognition memory. Both EEG alterations and behavioral deficits in novel object recognition were reversed by suppression of tetanus toxin expression. These data reveal an unexpected role for astrocytes as essential contributors to information processing and cognitive behavior.

Perhaps astrocytes are involved in the production of other brain waves in other locations too.

Lee, H., Ghetti, A., Pinto-Duarte, A., Wang, X., Dziewczapolski, G., Galimi, F., Huitron-Resendiz, S., Pina-Crespo, J., Roberts, A., Verma, I., Sejnowski, T., & Heinemann, S. (2014). Astrocytes contribute to gamma oscillations and recognition memory. Proceedings of the National Academy of Sciences, 111(32). DOI: 10.1073/pnas.1410893111


Discovering rules unconsciously

Dijksterhuis and Nordgren put forward a theory of unconscious thought. They propose that there are two types of thought process: conscious and unconscious. “CT (conscious thought) refers to object-relevant or task-relevant cognitive or affective thought processes that occur while the object or task is the focus of one’s conscious attention, whereas UT (unconscious thought) refers to object-relevant or task-relevant cognitive or affective thought processes that occur while conscious attention is directed elsewhere.”

As with Kahneman’s System 1 and System 2, there is no implication here that there is purely conscious thought with no unconscious components, only that conscious awareness is part of the process. I prefer the System names as they avoid the possible interpretation that there might be purely conscious thought. System 1 is like UT and is characterized as: autonomous, fast, effortless, hidden/unconscious, simultaneous/parallel/complex. System 2 is like CT: deliberate, slow, effortful, conscious, serial/logical/simple. The most telling difference is whether working memory is used; working memory restricts the number of items that can be manipulated in thought to about seven or fewer at a time and brings conscious awareness to the manipulation. The difference is often viewed as one between calculation and estimation, or between explicit and implicit knowledge.

The way these two processes are compared is to set out a problem and then compare the results after one of three activities: the subjects can consciously think about the problem for a certain length of time; they can spend the same amount of time doing something that completely engages their consciousness; or they can be given no time at all and asked for the answer immediately after the problem is presented. It has been found that with complex problems with many ingredients, System 1/UT gives better-quality results than System 2/CT, and both are better than immediate answers.

A recent paper by Li, Zhu and Yang looks at another comparison of the two ways of thinking (citation below).

Abstract:

According to unconscious thought theory (UTT), unconscious thought is more adept at complex decision-making than is conscious thought. Related research has mainly focused on the complexity of decision-making tasks as determined by the amount of information provided. However, the complexity of the rules generating this information also influences decision making. Therefore, we examined whether unconscious thought facilitates the detection of rules during a complex decision-making task. Participants were presented with two types of letter strings. One type matched a grammatical rule, while the other did not. Participants were then divided into three groups according to whether they made decisions using conscious thought, unconscious thought, or immediate decision. The results demonstrated that the unconscious thought group was more accurate in identifying letter strings that conformed to the grammatical rule than were the conscious thought and immediate decision groups. Moreover, performance of the conscious thought and immediate decision groups was similar. We conclude that unconscious thought facilitates the detection of complex rules, which is consistent with UTT.

It is a characteristic of System 2/CT that it is used to rigorously follow rules to calculate a result. However, there is a difference between following a rule and discovering one. This rule-discovery activity may be the same as implicit learning. “Mealor and Dienes (2012) combined UT and implicit learning research paradigms to investigate the impact of UT on artificial grammar learning. A classic implicit learning paradigm consists of two stages: training and testing.” The UT group had better results, but they categorized the process as random selection. The current paper shows that the UT group can find the grammatical rules illustrated in the training and then identify grammatical as opposed to ungrammatical strings. System 1/UT is better at uncovering rules and at identifying examples that break the rules. This does not seem to be a rigorous following of rules as in System 2, but more a statistical tendency or stereotypical categorization, in the nature of implicit learning.
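
For readers unfamiliar with the paradigm, here is a minimal sketch of an artificial grammar of the sort these experiments use. The finite-state grammar below is invented for illustration; it is not the grammar Li, Zhu and Yang actually used.

```python
# A made-up finite-state grammar: strings are 'grammatical' if some path
# through the transitions produces them and ends at a legal stop.
import random

# state -> list of (letter, next_state); next_state None marks a legal stop
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 2)],
    2: [("V", 3), ("T", 2)],
    3: [("V", None), ("S", None)],
}

def generate_grammatical(max_len=10):
    """Walk the grammar from the start state until a legal stop."""
    while True:
        state, out = 0, []
        while state is not None and len(out) <= max_len:
            letter, state = random.choice(GRAMMAR[state])
            out.append(letter)
        if state is None:                  # ended at a stop, not truncated
            return "".join(out)

def is_grammatical(s):
    """Accept a string iff some path through the grammar produces it."""
    states = {0}
    for ch in s:
        states = {nxt for st in states if st is not None
                  for (letter, nxt) in GRAMMAR[st] if letter == ch}
        if not states:
            return False
    return None in states

print([generate_grammatical() for _ in range(5)])  # training strings
print(is_grammatical("TSXVV"))   # True: T->S->X->V->V ends at a stop
print(is_grammatical("TVXS"))    # False: no path matches
```

Training exposes participants to generated strings; at test they must separate new grammatical strings from ungrammatical ones – what is_grammatical() does explicitly, implicit learners do without awareness of the rule.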

It is important to be clear that System 2 or CT is thought that has a conscious component; it does not imply that the thought is conducted ‘in’ consciousness. We are aware of the steps in a train of thought, but not of the processes behind them; those are hidden.


Li, J., Zhu, Y., & Yang, Y. (2014). The Merits of Unconscious Thought in Rule Detection. PLoS ONE, 9(8). DOI: 10.1371/journal.pone.0106557


Mind to mind transfer


I read the abstract of a new paper (see citation below) about brain-to-brain communication. I had been thinking while I read the title that we already do brain-to-brain communication – it’s called language. And sure enough the first sentence of the abstract said, “Human sensory and motor systems provide the natural means for the exchange of information between individuals, and, hence, the basis for human civilization.” What Grau and others were aiming for, and succeeded in doing, was to bypass language, motor output and peripheral sensory input without invading the skulls – from conscious thought to conscious thought via computer-based hardware. “The main differences of this work relative to previous brain-to brain research are a) the use of human emitter and receiver subjects, b) the use of fully non-invasive technology and c) the conscious nature of the communicated content. Indeed, we may use the term mind-to-mind transmission here as opposed to brain-to-brain, because both the origin and the destination of the communication involved the conscious activity of the subjects.” Their abstract is below.

But let’s look at how we do mind-to-mind now. We have to share a language, and to a large extent that means we also have to share a good deal of a culture. For normal human communication, it takes a fairly rich language and culture. In the case of the paper’s experiment, the language was patterns of 1s and 0s. The sender and his equipment output the pattern, and the receiver and his equipment took it in. And understanding that the patterns were meaningful required a cultural agreement on their meaning.
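
As a toy illustration of that cultural agreement, here is a sketch of the kind of letter-to-bits code both sides would have to share in advance. The 5-bit mapping is my own invention, not necessarily the encoding the paper used.

```python
# Both ends must hold the same convention; without it the stream is noise.
def encode(word):
    """Map each letter a-z to a 5-bit string (a=00000, b=00001, ...)."""
    return "".join(format(ord(c) - ord("a"), "05b") for c in word.lower())

def decode(bits):
    """Read the stream back 5 bits at a time and invert the mapping."""
    return "".join(chr(int(bits[i:i + 5], 2) + ord("a"))
                   for i in range(0, len(bits), 5))

stream = encode("hola")
print(stream)          # the raw 1s and 0s that get transmitted
print(decode(stream))  # 'hola' -- recoverable only given the shared code
```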

It is the language/culture part that is important to the communication. It is as if I utter a phrase which has meaning to me, you hear the phrase, and with it I seem to reach into your brain to pick out that meaning and put it into your stream of consciousness. Without the shared language and culture this trick would not be possible. If anyone thinks that his thoughts can be loaded into a computer and delivered to someone else’s brain by some means that avoids a shared language/culture of some type – he will be disappointed.

Abstract:

Human sensory and motor systems provide the natural means for the exchange of information between individuals, and, hence, the basis for human civilization. The recent development of brain-computer interfaces (BCI) has provided an important element for the creation of brain-to-brain communication systems, and precise brain stimulation techniques are now available for the realization of non-invasive computer-brain interfaces (CBI). These technologies, BCI and CBI, can be combined to realize the vision of non-invasive, computer-mediated brain-to-brain (B2B) communication between subjects (hyperinteraction). Here we demonstrate the conscious transmission of information between human brains through the intact scalp and without intervention of motor or peripheral sensory systems. Pseudo-random binary streams encoding words were transmitted between the minds of emitter and receiver subjects separated by great distances, representing the realization of the first human brain-to-brain interface. In a series of experiments, we established internet-mediated B2B communication by combining a BCI based on voluntary motor imagery-controlled electroencephalographic (EEG) changes with a CBI inducing the conscious perception of phosphenes (light flashes) through neuronavigated, robotized transcranial magnetic stimulation (TMS), with special care taken to block sensory (tactile, visual or auditory) cues. Our results provide a critical proof-of-principle demonstration for the development of conscious B2B communication technologies. More fully developed, related implementations will open new research venues in cognitive, social and clinical neuroscience and the scientific study of consciousness. We envision that hyperinteraction technologies will eventually have a profound impact on the social structure of our civilization and raise important ethical issues.

Note: Some in the press have been calling this transfer telepathy. It is not telepathy!!


Grau, C., Ginhoux, R., Riera, A., Nguyen, T.L., Chauvat, H., Berg, M., Amengual, J.L., Pascual-Leone, A., & Ruffini, G. (2014). Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technologies. PLoS ONE, 9(8). PMID: 25137064


Language and handedness


I am both left-handed and dyslexic, so a recent paper on the connection between hemispheric dominance for hand and for language was one I had to read. The Mazoyer study seems to be the first to use a reasonable number of left-handed as well as right-handed people to look at language lateralization (citation below).

Whether someone was left-handed or right-handed was determined by self-reported category (the LH and RH identifiers in the paper). However, the subjects were also given the Edinburgh questions, which give an index between -100 (most left-handed) and +100 (most right-handed), with 0 as perfectly ambidextrous. This was used as a measure of the extent and direction of lateralization of the hand’s motor control. The index need not tally with self-reporting, but it does actually quantify the lateralization. They used fMRI measurements for the lateralization of language. Reciting a very over-learned list (like the months of the year) is almost symmetrical (not lateralized), so it was used as a baseline against forming a sentence, which varies in lateralization. Language is usually biased to the left hemisphere, as is hand control in right-handed people.
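
For concreteness, the Edinburgh inventory score reduces to a simple formula over the item responses (Oldfield, 1971); here is a minimal sketch with invented example scores.

```python
def edinburgh_lq(right, left):
    """Laterality quotient from summed right- and left-hand item scores."""
    return 100.0 * (right - left) / (right + left)

print(edinburgh_lq(right=0, left=20))   # -100.0: strongly left-handed
print(edinburgh_lq(right=10, left=10))  #    0.0: perfectly ambidextrous
print(edinburgh_lq(right=20, left=0))   #  100.0: strongly right-handed
```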

Their conclusion was: “This study demonstrates that, except in a small sample of strong LH with rightward asymmetry, concordance of hemispheric dominance for hand and for language production occurs by chance. The present result thus questions the existence of a link between control of the hand and of language by the same hemisphere, while indicating that a rightward representation of language, although rare, is a normal variant of language lateralization.”

At first glance this is not what the graph appears to show. But if you ignore the white data points at the bottom, then the amount of language lateralization (y axis) is heavily biased to the left hemisphere, while that bias is evenly spread across the range of hand lateralization (x axis). The white data points, on the other hand, show that extreme right-hemisphere lateralization of language only seems to occur in a small group of extremely left-handed people. These people would be approximately 1% of the population. This group was also identified by Gaussian mixture analysis, which found 4 peaks, the 4th being this group of atypical left-handed people. Without this group, the peaks for left- and right-handed people were not statistically different.
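
Here is a minimal sketch of the kind of Gaussian mixture analysis involved: fit mixtures with different numbers of components to the lateralization index and let a criterion pick how many peaks the distribution has. The data are synthetic stand-ins, not the study’s values, and the BIC-based selection is my assumption rather than the authors’ exact procedure.

```python
# Sketch: choose the number of Gaussian peaks in a lateralization index.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
hfli = np.concatenate([          # fake HFLI values for four subgroups
    rng.normal(80, 10, 120),     # strongly typical (left language)
    rng.normal(55, 10, 100),     # moderately typical
    rng.normal(10, 15, 40),      # ambilateral
    rng.normal(-60, 15, 10),     # strongly atypical (the rare group)
]).reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(hfli)
        for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(hfli))  # lowest BIC wins
print("components chosen:", best_k)
print("component means:", np.sort(fits[best_k].means_.ravel()).round(1))
```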

[Figure: lateralization graph]

Lateralization of language plotted against lateralization of hand control: “Figure 5. Plot of hemispheric functional lateralization for language as a function of manual preference strength. Manual preference strength was assessed using the Edinburgh inventory, ranging from 100 (exclusive use of the right hand) to -100 (exclusive use of the left hand). Subjects also self-reported whether they consider themselves as right- handed (RH, squares) or left-handed (LH, circles). HFLI, an index of hemispheric functional lateralization for language measured with fMRI during covert generation of sentences compared to covert generation of list of words, was used for classifying subjects as « Typical » (HFLI>50, bright color symbols), « Ambilateral» (-20<HFLI<50, pale color symbols), or « Strongly-atypical » (HFLI<-20, open symbols).”

Personally I find this very interesting. I have to assume I am in this small strongly atypical group: I score -100 on the Edinburgh test and have fought with dyslexia all my life. But from a more general perspective it is interesting that the lateralization of language has a natural spread without regard to the other lateralization that gives handedness. Another interesting piece of data is that left-handed people appear (on the surface) not to be as left-handed as right-handed people are right-handed. The crossover seems to be at Edinburgh 50 (not 0 or -50). This may be an artifact. Left-handed people may learn to do a number of tasks in a right-handed manner because of the general handedness of the environment; a right-handed person has no incentive to do any particular task with the left hand. We may be looking at motivation rather than anatomy. Finally, although this is a good start to looking at the lateralization of language, language is a complex function and there may be a lot of detail hidden in a single fMRI procedure. The authors mention this: “Because typical subjects represent 90% of the population, it is important to assess whether or not they constitute a homogeneous group with respect to hemispheric dominance. Gaussian mixture model suggests the existence [of] two distinct subgroups of typical individuals, having strong and moderate left language lateralization, respectively, this holding both for RH and for LH.”

Here is the abstract:

Hemispheric lateralization for language production and its relationships with manual preference and manual preference strength were studied in a sample of 297 subjects, including 153 left-handers (LH). A hemispheric functional lateralization index (HFLI) for language was derived from fMRI acquired during a covert sentence generation task as compared with a covert word list recitation. The multimodal HFLI distribution was optimally modeled using a mixture of 3 and 4 Gaussian functions in right-handers (RH) and LH, respectively. Gaussian function parameters helped to define 3 types of language hemispheric lateralization, namely “Typical” (left hemisphere dominance with clear positive HFLI values, 88% of RH, 78% of LH), “Ambilateral” (no dominant hemisphere with HFLI values close to 0, 12% of RH, 15% of LH) and “Strongly-atypical” (right-hemisphere dominance with clear negative HFLI values, 7% of LH). Concordance between dominant hemispheres for hand and for language did not exceed chance level, and most of the association between handedness and language lateralization was explained by the fact that all Strongly-atypical individuals were left-handed. Similarly, most of the relationship between language lateralization and manual preference strength was explained by the fact that Strongly-atypical individuals exhibited a strong preference for their left hand. These results indicate that concordance of hemispheric dominance for hand and for language occurs barely above the chance level, except in a group of rare individuals (less than 1% in the general population) who exhibit strong right hemisphere dominance for both language and their preferred hand. They call for a revisit of models hypothesizing common determinants for handedness and for language dominance.

Mazoyer, B., Zago, L., Jobard, G., Crivello, F., Joliot, M., Perchey, G., Mellet, E., Petit, L., & Tzourio-Mazoyer, N. (2014). Gaussian Mixture Modeling of Hemispheric Lateralization for Language in a Large Sample of Healthy Individuals Balanced for Handedness. PLoS ONE, 9(6). DOI: 10.1371/journal.pone.0101165


Distractions


What happens when you overcome distraction and remain focused? How does the brain retain its concentration? Science Daily (here) reports on a paper by Jacob and Nieder in Neuron, which shows that one part of the brain ignores the distraction completely while another attends to it very briefly and then returns to the memory task at hand.

Science Daily says, “The monkeys had to remember the number of dots in an image and reproduce the knowledge a moment later. While they were taking in the information, a distraction was introduced, showing a different number of dots. And even though the monkeys were mostly able to ignore the distraction, their concentration was disturbed and their memory performance suffered.

“Measurements of the electrical activity of nerve cells in two key areas of the brain showed a surprising result: nerve cells in the prefrontal cortex signaled the distraction while it was being presented, but immediately restored the remembered information (the number of dots) once the distraction was switched off. In contrast, nerve cells in the parietal cortex were unimpressed by the distraction and reliably transmitted the information about the correct number of dots.”

The paper’s highlights and summary were:

  • Prefrontal suppression of distractors is not required to filter interfering stimuli
  • Distractors can be bypassed by storing and retrieving target information
  • Frontal and parietal cortex assume complementary functions to control working memory

Prefrontal cortex (PFC) and posterior parietal cortex are important for maintaining behaviorally relevant information in working memory. Here, we challenge the commonly held view that suppression of distractors by PFC neurons is the main mechanism underlying the filtering of task-irrelevant information. We recorded single-unit activity from PFC and the ventral intraparietal area (VIP) of monkeys trained to resist distracting stimuli in a delayed-match-to-numerosity task. Surprisingly, PFC neurons preferentially encoded distractors during their presentation. Shortly after this interference, however, PFC neurons restored target information, which predicted correct behavioral decisions. In contrast, most VIP neurons only encoded target numerosities throughout the trial. Representation of target information in VIP was the earliest and most reliable neuronal correlate of behavior. Our data suggest that distracting stimuli can be bypassed by storing and retrieving target information, emphasizing active maintenance processes during working memory with complementary functions for frontal and parietal cortex in controlling memory content.

It is interesting that this was not what the researchers expected to find. “The researchers were surprised by the two brain areas’ difference in sensitivity to distraction. ‘We had assumed that the prefrontal cortex is able to filter out all kinds of distractions, while the parietal cortex was considered more vulnerable to disturbances,’ says Professor Nieder. ‘We will have to rethink that. The memory-storage tasks and the strategies of each brain area are distributed differently from what we expected.’”

But I’m sure they found it made sense after thinking about it. We can look at it this way: the ventral intraparietal area is involved with the task, concentrating on the task and little else (bottom-up). The prefrontal cortex, on the other hand, is involved in somewhat higher-level executive operations (top-down). It looks at what is happening: if it is just those researchers trying to distract me, I ignore it and carry on with the task; if on the other hand it is a big machine about to hit me, I will not ignore it and will stop the silly dot test while getting out of the way. Something has to be a look-out, take note of things that are happening and decide whether to ignore distractions.


Chimps appreciate rhythm


Science Daily has an item (here) on musical appreciation in chimpanzees. Previous studies using blues, classical and pop music have found that although chimps can distinguish features of music and have preferences, they still preferred silence to the music. So were the chimps able to ‘hear’ the music but not appreciate its beauty? A new paper has different results using non-western music: West African akan, North Indian raga, and Japanese taiko. Here the chimps liked the African and Indian music but not the Japanese. They seemed to base their appreciation on the rhythm. The Japanese music had very regular prominent beats, like western music, while the African and Indian music had varied beats. “The African and Indian music in the experiment had extreme ratios of strong to weak beats, whereas the Japanese music had regular strong beats, which is also typical of Western music.”

It may be that they like a more sophisticated rhythm. Or, as de Waal says, “Chimpanzees may perceive the strong, predictable rhythmic patterns as threatening, as chimpanzee dominance displays commonly incorporate repeated rhythmic sounds such as stomping, clapping and banging objects.”

Here is the abstract for M. Mingle, T. Eppley, M. Campbell, K. Hall, V. Horner, F. de Waal; Chimpanzees Prefer African and Indian Music Over Silence; Journal of Experimental Psychology: Animal Learning and Cognition, 2014:

All primates have an ability to distinguish between temporal and melodic features of music, but unlike humans, in previous studies, nonhuman primates have not demonstrated a preference for music. However, previous research has not tested the wide range of acoustic parameters present in many different types of world music. The purpose of the present study is to determine the spontaneous preference of common chimpanzees (Pan troglodytes) for 3 acoustically contrasting types of world music: West African akan, North Indian raga, and Japanese taiko. Sixteen chimpanzees housed in 2 groups were exposed to 40 min of music from a speaker placed 1.5 m outside the fence of their outdoor enclosure; the proximity of each subject to the acoustic stimulus was recorded every 2 min. When compared with controls, subjects spent significantly more time in areas where the acoustic stimulus was loudest in African and Indian music conditions. This preference for African and Indian music could indicate homologies in acoustic preferences between nonhuman and human primates.


Animal – human bias


There is a paper (F. Gaunet, How do guide dogs of blind owners and pet dogs of sighted owners (Canis familiaris) ask their owners for food?, Animal Cognition 2008) mentioned in a blog (here) that is billed as showing that guide dogs do not know their owners are blind. Here is the abstract:

Although there are some indications that dogs (Canis familiaris) use the eyes of humans as a cue during human-dog interactions, the exact conditions under which this holds true are unclear. Analysing whether the interactive modalities of guide dogs and pet dogs differ when they interact with their blind, and sighted owners, respectively, is one way to tackle this problem; more specifically, it allows examining the effect of the visual status of the owner. The interactive behaviours of dogs were recorded when the dogs were prevented from accessing food that they had previously learned to access. A novel audible behaviour was observed: dogs licked their mouths sonorously. Data analyses showed that the guide dogs performed this behaviour longer and more frequently than the pet dogs; seven of the nine guide dogs and two of the nine pet dogs displayed this behaviour. However, gazing at the container where the food was and gazing at the owner (with or without sonorous mouth licking), gaze alternation between the container and the owner, vocalisation and contact with the owner did not differ between groups. Together, the results suggest that there is no overall distinction between guide and pet dogs in exploratory, learning and motivational behaviours and in their understanding of their owner’s attentional state, i.e. guide dogs do not understand that their owner cannot see (them). However, results show that guide dogs are subject to incidental learning and suggest that they supplemented their way to trigger their owners’ attention with a new distal cue.

It may or may not be true that these dogs do not know that their owners are blind; this experiment indicates it, but not very strongly. I could do an experiment with people talking on telephones and show that a good many of them believe that the person on the other end of the phone can see them, because they use hand gestures while talking. Or I could show that my dog has knowledge of the difference between my eyesight and my husband’s: she does not move out of the way if we step over her in the daytime, but she moves at night so as not to be stepped on. And if there is a lot of moonlight she moves for my husband, who has poor sight in low light, but not for me. She could have learned this by trial and error or she could have reasoned it out as a difference in eyesight. We don’t know. But we do know that the person on the telephone who gestures is not ignorant of what the other person can see. That person is using a habitual routine without even being aware of how silly it is.

The problem is that we treat other people differently from other animals when we try to understand their thinking. We assume animals are unintelligent as a first assumption and have to prove any instance of smarts. On the other hand, we insist that humans think things out consciously and have to establish any instance of behavior not being under conscious control. We really should be using similar criteria for all animals, ourselves included.


My problem with Merge


When linguists talk about language they use the idea of a function called Merge. Chomsky’s theory is that without Merge there is no Language. The idea is that two things are merged together to make one composite thing, and this can be done iteratively to make longer and longer strings. Is this the magic key to language?
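
To make the idea concrete, here is a minimal sketch of Merge as a pairing operation applied recursively. The tuple representation and the example are my own illustration, not a formal definition from the linguistics literature.

```python
# Merge: take two syntactic objects, return one composite object.
def merge(a, b):
    return (a, b)

# Applied iteratively, it builds a whole utterance bottom-up.
np1 = merge("the", "dog")       # ('the', 'dog')
np2 = merge("the", "cat")       # ('the', 'cat')
vp  = merge("chased", np2)      # ('chased', ('the', 'cat'))
s   = merge(np1, vp)            # the sentence as a single object
print(s)  # (('the', 'dog'), ('chased', ('the', 'cat')))
```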

The ancient Greeks had ‘elements’ and everything was a combination of elements. The elements were water, fire, earth and air. That is a pretty good guess: matter in its three states and energy. This system was used to understand the world. It was not until it became clear that matter was atomic and atoms came in certain varieties that our current idea of elements replaced the Greek one. It was not that the Greek elements were illogical or that they could not be used to describe the world. The problem was that there was now a much better way to describe the world. The new way was less intuitive, less simple, less beautiful but it explained more, predicted better and fit well with other new knowledge about the world.

This illustrates my problem with conventional syntax and especially Merge. Syntax is not a disembodied logic system, because we know it is accomplished by cells and networks of cells in the brain. It is a biological thing. So a description of how language is formatted has to fit with our knowledge of how the brain works. It is not our theories of language that dictate how the brain works; it is the way the brain works that dictates how we understand language. Unfortunately, we have only just begun to understand the brain.

Some of the things that we think the brain does fit well with language. The brain uses the idea of causal links: events are understood in terms of cause and effect, and even in terms of actor – action – outcome. So it is not surprising that a great many utterances have a form that expresses this sort of relationship: subject – verb or subject – verb – object. We are not surprised that the brain would use the same type of relationship to express an event as it does to create that event from sensory input and store it. Causal events are natural to the brain.

Association, categorization and attribution are natural too. We see a blue flower, but the blue and the flower are separate in the brain until they are bound together: objects are identified, their color is identified, and then the two are combined. So not only nouns and verbs are natural to the brain’s way of working, but so are attributes – adjectives and adverbs for example. Copula forms are another example: they link an entity with another or with an attribute. And so it goes; most things I can think of about language seem natural to the brain (time, place, proper names, interjections etc.).

Even Merge, in a funny way, is normal to the brain in the character of clumping. Working memory is small and holds 4 to 7 items, we think; but by clumping items together and treating them as one item, the memory is able to deal with more. Clumping is natural to the brain.
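
As a toy illustration of clumping (usually called chunking), here is a sketch of how grouping lets a small store handle a longer sequence; the digits are invented.

```python
# A 12-digit sequence exceeds a 4-7 item span, but grouped into 3-digit
# clumps it becomes four items, each treated as a single unit.
digits = "149217761066"
clumps = [digits[i:i + 3] for i in range(0, len(digits), 3)]
print(len(digits), "separate digits")   # 12
print(len(clumps), "clumps:", clumps)   # 4: ['149', '217', '761', '066']
```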

This picture is like Changizi’s harnessing theory. The things we have created were created by harnessing pre-existing abilities of the brain. The abilities needed no new mutation to be harnessed to a new function; mutations making a better fit would come after they were used for the new function – otherwise there would be no selective pressure modifying the ability toward the new function.

So what is my problem with conventional syntax and especially with Merge? It is not a problem with most of the entities – parts of speech, cases, tenses, word order and the like. It is a problem with the rigidity of thought. Parsing diagrams make me grind my teeth. There is an implication that these trees are the way the brain works, and I have yet to encounter any good evidence that those diagrams reflect processes in the brain. The idea that a language is a collection of possible sentences bothers me – why does language have to be confined to sentences? I have read verbatim court records – actual complete and correctly formed sentences appear to be much less common than you would think. It is obvious that utterances are not always (probably not mostly) planned ahead. The mistakes people make often imply that they changed horses in mid-sentence. Most of what we know about our use of language implies that the process is not at all like the diagrams or the approaches of grammarians.

The word ‘Merge’, unlike say ‘modify’, is capitalized. This is apparently because some feel it is the essence of language, the one thing that makes human language unique and the one mutation required for our languages. But if merge is just an ordinary word and pretty much like clumping, which I think it is, then poof goes the magic. My dog can clump and merge things into new wholes – she can organize a group of things into a ritual, recognize that ritual event from a single word or short phrase, or indicate it with a small action.

What is unique about humans is not Merge but the extent and sophistication of our communication. We do not need language to think in the way we do; language is built on the way we think. We need language in order to communicate better.