Monthly Archives: November 2014

Virtual reality is not that real

Virtual reality is used in many situations and is often seen as equivalent to actual experience. For example, it is used in training where actual experience is too expensive or dangerous. In science, it is used in experiments with the assumption that it can be compared to reality. A recent paper (Z. Aghajan, L. Acharya, J. Moore, J. Cushman, C. Vuong, M. Mehta; Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality; Nature Neuroscience 2014) shows that virtual reality and ‘real’ reality are treated differently in the hippocampus where spatial mapping occurs. ScienceDaily reports on this paper (here).

It is assumed that cognitive maps are made by the neurons of the hippocampus, computing the distances to landmarks. Of course, this is not the only way a map could be constructed: sounds and echoes could give clues, smells could identify places, and so on. To test whether visual cues alone could give the information to create a map, the researchers compared the activity of neurons in the hippocampus during a virtual walk and a real walk that were visually identical. In the real set-up the rat walked across a scene, while in the virtual set-up the rat walked on a treadmill and the equivalent visual ‘movie’ was projected all around it.

The results showed that the mapping of the two environments was different. The mapping during real experience involved more activity by more neurons and was not random. In the virtual experiment, the activity was sparser and random. Judging by the neural activity, the rat could not map virtual reality and was somewhat lost or confused, even though it appeared to be behaving normally. “Careful mathematical analysis showed that neurons in the virtual world were calculating the amount of distance the rat had walked, regardless of where he was in the virtual space.”

The same report describes other research by the same group. Mehta describes the complex rhythms involved in learning and memory in the hippocampus: “The complex pattern they make defies human imagination. The neurons in this memory-making region talk to each other using two entirely different languages at the same time. One of those languages is based on rhythm; the other is based on intensity.” The two languages are used simultaneously by hippocampal neurons. “Mehta’s group reports that in the virtual world, the language based on rhythm has a similar structure to that in the real world, even though it says something entirely different in the two worlds. The language based on intensity, however, is entirely disrupted.”

As a rat hippocampus is very similar to a human one, and the virtual reality set-up was a very realistic one, this study throws doubt on experiments and techniques that use virtual reality with humans. It is also very interesting to note another surprising ability of neurons: processing two types of signal at the same time.

Abstract: “During real-world (RW) exploration, rodent hippocampal activity shows robust spatial selectivity, which is hypothesized to be governed largely by distal visual cues, although other sensory-motor cues also contribute. Indeed, hippocampal spatial selectivity is weak in primate and human studies that use only visual cues. To determine the contribution of distal visual cues only, we measured hippocampal activity from body-fixed rodents exploring a two-dimensional virtual reality (VR). Compared to that in RW, spatial selectivity was markedly reduced during random foraging and goal-directed tasks in VR. Instead we found small but significant selectivity to distance traveled. Despite impaired spatial selectivity in VR, most spikes occurred within ~2-s-long hippocampal motifs in both RW and VR that had similar structure, including phase precession within motif fields. Selectivity to space and distance traveled were greatly enhanced in VR tasks with stereotypical trajectories. Thus, distal visual cues alone are insufficient to generate a robust hippocampal rate code for space but are sufficient for a temporal code.”

Synesthesia can be learned

Synesthesia is a condition where one stimulus (like a letter) automatically is experienced with another attribute (like a colour) that is not actually present. About 4% of people have some form of this sensory mixing. It has been generally assumed that synesthesia is inherited because it runs in families. But it has been clear that some learning is involved in triggering and shaping synesthesia. “Simner and colleagues tested grapheme-color consistency in synesthetic children between 6 and 7 years of age, and again in the same children a year later. This interim year appeared critical in transforming chaotic pairings into consistent fixed associations. The same cohort were retested 3 years later, and found to have even more consistent pairings. Therefore, GCS (grapheme-color synesthesia) appears to emerge in early school years, where first major pressures to use graphemes are encountered, and then becomes cemented in later years. In fact, for certain abstract inducers, such as graphemes, it is implausible that humans are born with synesthetic associations to these stimuli. Hence, learning must be involved in the development of at least some forms of synesthesia.” There have been attempts to train people to have synesthetic experiences but these have not shown the conscious experience of genuine synesthesia.

In the paper cited below, Bor and others managed to produce these genuine experiences in people showing no previous signs of synesthesia or a family history of it. They feel their success is due to more intensive training. “Here, we implemented a synesthetic training regime considerably closer to putative real-life synesthesia development than has previously been used. We significantly extended training time compared to all previous studies, employed a range of measures to optimize motivation, such as making tasks adaptive, and we selected our letter-color associations from the most common associations found in synesthetic and normal populations. Participants were tested on a range of cognitive and perceptual tasks before, during, and after training. We predicted that this extensive training regime would cause our participants to simulate synesthesia far more closely than previous synesthesia training studies have achieved.”

The phenomenology in these subjects was mild and not permanent, but it was definitely real synesthesia. The work has shown that although there is a genetic tendency, in typical synesthetes the condition is learned, probably through intensive, motivated training during development. It also seems that the condition is one of associative memory and not ‘extra wiring’.

Here is the abstract:

“Synesthesia is a condition where presentation of one perceptual class consistently evokes additional experiences in different perceptual categories. Synesthesia is widely considered a congenital condition, although an alternative view is that it is underpinned by repeated exposure to combined perceptual features at key developmental stages. Here we explore the potential for repeated associative learning to shape and engender synesthetic experiences. Non-synesthetic adult participants engaged in an extensive training regime that involved adaptive memory and reading tasks, designed to reinforce 13 specific letter-color associations. Following training, subjects exhibited a range of standard behavioral and physiological markers for grapheme-color synesthesia; crucially, most also described perceiving color experiences for achromatic letters, inside and outside the lab, where such experiences are usually considered the hallmark of genuine synesthetes. Collectively our results are consistent with developmental accounts of synesthesia and illuminate a previously unsuspected potential for new learning to shape perceptual experience, even in adulthood.”

Bor, D., Rothen, N., Schwartzman, D., Clayton, S., & Seth, A. (2014). Adults Can Be Trained to Acquire Synesthetic Experiences. Scientific Reports, 4. DOI: 10.1038/srep07089

Imagination and reality

ScienceDaily has an item (here) on a paper (D. Dentico, B.L. Cheung, J. Chang, J. Guokas, M. Boly, G. Tononi, B. Van Veen. Reversal of cortical information flow during visual imagery as compared to visual perception. NeuroImage, 2014; 100: 237) looking at EEG dynamics during thought.

The researchers examined electrical activity as subjects alternated between imagining scenes and watching video clips.

Areas of the brain are connected for various functions, and these interactions change during processing. The changes to network interactions appear as movement of activity across the cortex. The research groups are trying to develop tools to study these changing networks: Tononi to study sleep and dreaming, and Van Veen to study short-term memory.

The activity seems very directional. “During imagination, the researchers found an increase in the flow of information from the parietal lobe of the brain to the occipital lobe — from a higher-order region that combines inputs from several of the senses out to a lower-order region. In contrast, visual information taken in by the eyes tends to flow from the occipital lobe — which makes up much of the brain’s visual cortex — “up” to the parietal lobe… To zero in on a set of target circuits, the researchers asked their subjects to watch short video clips before trying to replay the action from memory in their heads. Others were asked to imagine traveling on a magic bicycle — focusing on the details of shapes, colors and textures — before watching a short video of silent nature scenes.”

The study was used to verify their equipment, methods and calculations: could they discriminate the ‘flow’ in the two situations of imagining and perceiving? It appears they could.

The actual directions of flow are not surprising. In perception, information starts in the primary sensory areas at the back of the brain. The information becomes more integrated as it moves forward to become objects in space, concepts and even word descriptions. On the other hand during imagining the starting points are objects, concepts and words. They must be rendered in sensory terms and so processing would be directed back towards the primary sensory areas. In both cases the end point would be a connection between sensory qualia and their high level interpretation. In perception the movement is from the qualia to the interpretation and in imagining it would be from the interpretation to the qualia.


A new old discovery

Some say that science has been looking at the brain for some time now and yet there is no agreed explanation of how it works. This is sometimes followed by the conclusion that science may therefore never understand the brain. But the brain is much more complex than most people think, and the tools to examine it are far less powerful than most people assume. New aspects of the brain are discovered on a regular basis, and these are not little details but major discoveries.

Recently a large white-matter tract was found. Really it was re-found, because it had been previously reported, doubted, and forgotten; it was first described in the 1880s. This is basic brain anatomy in the most closely studied part of the cortex, the visual cortex, and it illustrates just how little is known about the brain. It is as if a major artery were missing from our knowledge of the circulatory system.

ScienceDaily has an item on this (here). The announcement is in the paper: Jason D. Yeatman et al. The vertical occipital fasciculus: A century of controversy resolved by in vivo measurements. PNAS, November 2014 DOI: 10.1073/pnas.1418503111.

Carl Wernicke discovered it; Yeatman and Weiner re-discovered it. They call it the vertical occipital fasciculus (VOF). There are three reasons why the knowledge could have been forgotten.

A scientific disagreement — In an 1881 neuroanatomy atlas, Wernicke, a well-known anatomist who in 1874 discovered “Wernicke’s area,” which is essential for language, wrote about a fiber pathway in a monkey brain he was examining. He called it “senkrechte Occipitalbündel” (translated as vertical occipital bundle). But its vertical orientation contradicted the belief of one of the most renowned neuroanatomists of the era, Theodor Meynert, who asserted that brain connections could only travel in between the front and the back of the brain, not up and down.

Haphazard naming methods — The 1880s and 1890s were a fertile time in the neuroanatomy world, but scientists lacked a shared process for naming the brain structures they found. Looking at drawings of the brain from this time period, Yeatman and coauthors saw that the fiber pathway that they were looking for appeared in brain atlases but was called different things, including “Wernicke’s perpendicular fasciculus,” “perpendicular occipital fasciculus of Wernicke,” and “stratum profundum convexitatis.” “When we started, it was just for our own knowledge and curiosity,” said Weiner, who’s also the director of public information at the Institute for Applied Neuroscience, a nonprofit based in Palo Alto, California. “But, after a while, we realized that there was an important story to tell that contained a series of missing links that have been buried for so long within this puzzle of historical conversation among many who are considered the founders of the entire neuroscience field.”

Finally, the way dissections were done changed, so that the VOF became less visible.

There are more details in Mo Costandi’s blog (here).

“The new measurements delineate the full extent of the VOF, revealing it as a flat sheet of white matter tracts that extends up through the brain for a distance of 5.5cm, connecting the ‘lower’ and ‘upper’ streams of the visual pathway. These run in parallel, and are sometimes called the ‘What’ and ‘Where’ pathways, for the type of information they carry: the lower stream connects brain regions involved in processes such as object recognition, including the fusiform gyrus, and the upper stream connects the angular gyrus to other areas involved in attention, motion detection, and visually-guided behaviour. The front portion of the VOF links the intraparietal sulcus, which encodes information about eye movements, to the occipito-temporal sulcus, which encodes representations of word forms. The portion further back links higher order visual areas within the two streams, which encode complex maps of the visual field. Given the functions of these brain regions, the researchers speculate that the VOF likely plays an important role in perceptual processes such as reading and recognising faces.”

It seems a pretty important piece of anatomy to have been lost for a hundred years.


Habits and learning

Habits allow us to perform actions without attending to every detail; we can do complex things and more than one action at a time without overloading our cognitive and motor systems. They are goal-directed macro actions made up of a sequence of simple primitive actions. A habit allows a complex action to be launched as a unit and efficiently reach the goal of the habit without each step needing its own specific goal.

In forming a habit, a sequence of actions is consolidated by passing from a closed reward loop to an open reward loop. In other words, the whole sequence comes to be evaluated rather than each step. Passing from step to step becomes much faster when it is automatic. “To explain how these sequences are consolidated, Dezfouli and Balleine distinguish between closed-loop and open-loop execution. At the beginning of learning, feedback is crucial. The organism needs a reward or some clues in the environment to identify and perform the proper behavior (closed-loop execution). In advanced stages of training, a step in the sequence is conditioned by the previous step, regardless of feedback stimuli or reward (open-loop execution). This independence accounts for the insensitivity to the outcome shown in experiments of reward devaluation and contingency degradation that are standard measures to determine if a habit has been acquired.” It takes persistent failure of the expected reward to disrupt the habit.
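To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not Dezfouli and Balleine’s model; the function and parameter names are invented). Under closed-loop execution every step waits for feedback before the next one runs, while under open-loop execution the whole chain runs once launched and only the final outcome is evaluated.

```python
def run_closed_loop(sequence, feedback_ok):
    """Early learning: each step is performed and then checked against
    feedback (reward or environmental cues) before the next step is chosen."""
    for action in sequence:
        action()
        if not feedback_ok():      # no reward or cue: the sequence is abandoned here
            return False
    return True

def run_open_loop(sequence, outcome_ok):
    """Consolidated habit: each step is cued by the previous step, so the
    chain runs to completion and only the end result is evaluated."""
    for action in sequence:
        action()                   # no step-by-step check: insensitive to devaluation
    return outcome_ok()            # only persistent failure here would weaken the habit
```

In the open-loop version, devaluing the reward on a single trial changes nothing mid-sequence; only repeated failures of the final check would, in a fuller model, break the chain back into separately evaluated steps.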

Learning is the adaptation of an individual to the environment through changes in behavior that result from regularities in the environment; it is adaptive because it is a response to regularity. As habits themselves present regularities, because one step automatically follows another, they can be the basis of learning.

The author, Balderas (see citation below), uses the fast-mapping that dogs do in learning to associate a name with an object to illustrate the intertwining of habit and learning. Only some dogs do fast-mapping: learning that a new word applies to the only new object available, using exclusion logic. Other dogs stand about looking lost. She explains the learning of a particular dog, Rico, that uses two habits (automatic sequences): one is playing fetch and the other is associating a name with an object. The fetch sequence has three main actions: (a) go for, (b) select, (c) deliver. Select, however, can be seen as a sub-sequence: (1) look-for, (2) match, (3) take. If there is no new object/name then a-b-c can be executed without interruption. But during fast-mapping it becomes more complex. “In this case, take can not start because match was not executed. Since Rico does not dispose of a name-object association that enables it to complete the task, it is in a situation where it has to make a decision in the middle of the selection task, so the goal-directed system regains control. After solving the problem, the fetching-game sequence follows its tendency to completion and Rico returns to the sequence: it goes to take and to c (deliver). This description also follows the hierarchical view because at the starting point the behavior begins as a habit, when a decision is required it becomes goal-directed and ends again as a habit after overcoming the difficulty.” The dog uses the exclusion principle, and that involves the matching of previously learned pairs to eliminate them. When the dog finds the only possible answer is the unmatched object, he must select this object in order to deliver and reach the end-point, the habit’s goal. This sequence results in learning a new name/object matching. Habits modulate behavior and guide the animal to detect and solve a problem and thus learn.
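The description above can be written out as a small sketch (my own, with hypothetical names throughout; Balderas gives no code). The fetch habit runs as a fixed sequence, and goal-directed control takes over only at the point where the habitual ‘match’ step fails, which is exactly where the new name-object pair gets learned.

```python
def fetch(name, known_pairs, objects_present):
    """Toy version of Rico's fetching game: (a) go for, (b) select
    [look for, match, take], (c) deliver.  When 'match' fails because the
    name is unknown, goal-directed exclusion reasoning fills the gap and a
    new association is learned before the habit runs on to completion."""
    # (a) go for
    print("go for:", name)
    # (b) select: (1) look for, (2) match against learned name-object pairs
    target = known_pairs.get(name)
    if target is None:
        # goal-directed interruption: exclude every object that already has a name
        unnamed = [o for o in objects_present if o not in known_pairs.values()]
        target = unnamed[0]             # the only remaining candidate
        known_pairs[name] = target      # fast-mapping: the new pair is learned
    # (b3) take, then (c) deliver: the habit resumes and reaches its goal
    print("take and deliver:", target)
    return target

pairs = {"ball": "ball", "rope": "rope"}
fetch("dax", pairs, ["ball", "rope", "strange-new-toy"])   # 'dax' is a made-up word
print(pairs)   # the new name-object association is now part of the repertoire
```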

I have to admit that part of the reason for this post is my love of a former dog (a much missed border collie – husky cross) who could learn vocabulary, including by the exclusion principle. We were building a house and the internal walls were only the studs. I had shown people around and the dog had followed. I would stand in a space and say “this is the kitchen” and then go on to the next room. After a few times the dog preceded the group. Then I would stay in the middle of the house and say, “Badger, show them the kitchen”. She did the tour with me only naming the rooms. Then one day I said, “show them the basement”. The dog looked at me and around the space a couple of times, and then trotted to the top of the stairs to the basement. I don’t think she picked up the word ‘basement’ from conversations or she would not have been puzzled at first, but she did recognize that it was the only space left that she could possibly show them. From then on she could be told to go to the basement and she understood. When the walls were finished she could still be told to go to a particular room, although now she had to use the doors.

Balderas, G. (2014). Habits as learning enhancers. Frontiers in Human Neuroscience, 8. DOI: 10.3389/fnhum.2014.00918

Integration-to-bound decision model

Neuroskeptic has a posting (here) with the title ‘Do Rats have Free Will?’ It is a review of a paper by Murakami and others; the abstract is below.

The paper supports the integration-to-bound model of decision making. A population of secondary motor cortex neurons ramp up their output to a constant threshold. Crossing the threshold triggers the motor action. The researchers found a second group of neurons that appeared to establish the rate of rise of the integrating neurons and therefore the time that elapses before the threshold is reached. This fits the model. But what does it say about free will?
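As a rough illustration of the model, here is a toy simulation of my own (not the authors’ analysis; the parameters are invented, and the noise term stands in crudely for opposing inputs). An integrator sums a noisy input until a fixed bound is crossed, and the action is triggered at that moment; a weaker input produces a slower ramp and therefore a longer waiting time.

```python
import random

def time_to_bound(input_rate, bound=1.0, dt=0.001, noise=0.5):
    """Toy integration-to-bound: accumulate a noisy input until the bound
    is crossed; return the elapsed time, i.e. when the action is triggered."""
    accumulated, t = 0.0, 0.0
    while accumulated < bound:
        accumulated += (input_rate + random.gauss(0.0, noise)) * dt
        t += dt
    return t

# A lower input rate (a slower ramp) gives a longer wait before the bound is hit.
print(time_to_bound(input_rate=0.5))   # roughly 2 s
print(time_to_bound(input_rate=2.0))   # roughly 0.5 s
```

Note that in this picture nothing is settled until the bound is actually reached: up to that moment an opposing input could still avert the action, so the antecedent activity influences the timing of the decision without uniquely specifying it.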

The abstract does not mention free will but Neuroskeptic does. It is fortunate that he has talked with the group and shared the conversation in his post. He points out the similarity between the integration signal and the readiness potential that Libet and others found preceded an action and preceded conscious awareness of a decision to act. He quotes Murakami: “activity preceding bound crossing, either input or accumulated activity, could be said to participate causally in the timing of an action, but does not uniquely specify it. The integration-to-bound theory implies that no decision has been made until the bound has been reached… as at any moment up to bound crossing, the arrival of opposing inputs may avert an action.” Neuroskeptic comments that the readiness potential may be a contributor to a decision rather than the consequence of a decision. And again quotes Murakami: “Crossing the threshold from unawareness to awareness [could be] a reflection of bound crossing [in the integrator]…In this way, the integration-to-bound theory may help to resolve the contradiction between the subjective report of free will and the requirement for causal antecedents to non-capricious, willed actions.…our results provide a starting point for investigating mechanisms underlying concepts such as self, will and intention to act, which might be conserved among mammalian species.”

Although their results do give confirmation to the integration-to-bound theory, I do not think they say much about free will. First, I cannot see how they have any information on when the rats are consciously aware of whatever they may be aware of in a decision. Second, if another signal is controlling the rate of integration, when was it set on course and what are the signals that might control it? This is a long way from an understanding of how decisions are made and whether consciousness is involved.

Abstract of paper (Murakami M, Vicente MI, Costa GM, & Mainen ZF (2014). Neural antecedents of self-initiated actions in secondary motor cortex. Nature neuroscience, 17 (11), 1574-82 PMID: 25262496):

The neural origins of spontaneous or self-initiated actions are not well understood and their interpretation is controversial. To address these issues, we used a task in which rats decide when to abort waiting for a delayed tone. We recorded neurons in the secondary motor cortex (M2) and interpreted our findings in light of an integration-to-bound decision model. A first population of M2 neurons ramped to a constant threshold at rates proportional to waiting time, strongly resembling integrator output. A second population, which we propose provide input to the integrator, fired in sequences and showed trial-to-trial rate fluctuations correlated with waiting times. An integration model fit to these data also quantitatively predicted the observed inter-neuronal correlations. Together, these results reinforce the generality of the integration-to-bound model of decision-making. These models identify the initial intention to act as the moment of threshold crossing while explaining how antecedent subthreshold neural activity can influence an action without implying a decision.


A multitasking neuron with a name

Mention of C. elegans always makes me smile. It is a small, simple worm. It has exactly 302 neurons (each one named) and its connectome is completely known. And yet the relationship between the actions of those neurons and the animal’s behaviour is not yet understood. In a recent paper reviewed by NeuroScienceNews (here), researchers have found a multitasking neuron (AIY by name).

Multitasking neurons have been suspected in other animals and humans, but the ways in which they might do this have not been understood. The researchers found that AIY sends an analog excitatory signal to one circuit having to do with speed of movement and a digital inhibitory signal to another circuit having to do with switching direction. The neurotransmitter is the same for both signals, but the receptor that receives the signal is of a different type in the two circuits.
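Here is a toy numerical sketch (mine, with invented numbers and function names; the paper reports receptor properties, not code) of how a single graded transmitter signal can be read out in two ways: a receptor with a wide dynamic range yields a smoothly graded, analog-like speed signal, while a receptor with a narrow dynamic range saturates almost immediately and behaves like a digital, all-or-none switch.

```python
def speed_circuit(ach_release, gain=1.0, ceiling=10.0):
    """Excitatory readout with a wide dynamic range: output grows smoothly
    with transmitter release over a broad span (analog-like)."""
    return min(gain * ach_release, ceiling)

def direction_switch_circuit(ach_release, threshold=1.0):
    """Inhibitory readout with a narrow dynamic range: once release exceeds
    a low threshold, switching is fully suppressed (digital-like)."""
    return "switch allowed" if ach_release <= threshold else "switch suppressed"

# The same release level drives both readouts at once.
for release in (0.2, 0.8, 2.0, 6.0):
    print(release, speed_circuit(release), direction_switch_circuit(release))
```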

Here is the abstract for Z. Li, J. Liu, M. Zheng, S. Xu; Encoding of Both Analog- and Digital-like Behavioral Outputs by One C. elegans Interneuron; Cell 159(4), Nov 2014:

Model organisms usually possess a small nervous system but nevertheless execute a large array of complex behaviors, suggesting that some neurons are likely multifunctional and may encode multiple behavioral outputs. Here, we show that the C. elegans interneuron AIY regulates two distinct behavioral outputs: locomotion speed and direction-switch by recruiting two different circuits. The “speed” circuit is excitatory with a wide dynamic range, which is well suited to encode speed, an analog-like output. The “direction-switch” circuit is inhibitory with a narrow dynamic range, which is ideal for encoding direction-switch, a digital-like output. Both circuits employ the neurotransmitter ACh but utilize distinct postsynaptic ACh receptors, whose distinct biophysical properties contribute to the distinct dynamic ranges of the two circuits. This mechanism enables graded C. elegans synapses to encode both analog- and digital-like outputs. Our studies illustrate how an interneuron in a simple organism encodes multiple behavioral outputs at the circuit, synaptic, and molecular levels.

The ghost is us

In schizophrenia, some other conditions, and extreme physical situations, people can feel an unseen presence accompanying them, a ghost. But this ghost has been shown to be, in all probability, ourselves. NeuroScienceNews (here) has a review of a new paper, including a video linked below.

The self that we experience is constructed from a number of sources: individual senses, internal body senses, motor prediction. This usually works seamlessly and we feel that we inhabit this self/body. The construct relies on three areas of the brain cooperating. If one of these areas is damaged or the ability to work together is faulty, part of the self may be detached from the rest and then be experienced as a ‘presence’, near but displaced from the rest of the self. “Our brain possesses several representations of our body in space,” added Giulio Rognini. “Under normal conditions, it is able to assemble a unified self-perception of the self from these representations. But when the system malfunctions because of disease – or, in this case, a robot – this can sometimes create a second representation of one’s own body, which is no longer perceived as ‘me’ but as someone else, a ‘presence’.”

The researchers duplicated the effect in the lab with a robotic device which is clearly shown in a video (here).

I have found ghosts interesting since a conversation with my mother many years ago. She did not believe in ghosts or anything like that, but she found that after my father died, she could talk to him. She knew that it was herself talking in his voice in her head. She said that she knew him well enough to know what he would say and how. In fact she encouraged the voice – it was comforting. When she had a problem and wanted to know what he would advise, if he were alive, she would ask him. It worked best just as she was going to sleep. After a time the effect weakened and then was no longer available. Her grief and her immediate change in responsibility would have affected her, and given her problems that she had not faced before. In trying to figure out what he would have done she made those thoughts into a separate verbal presence. At first, she also thought she could see him out of the corner of her eye, but when she turned there was no one there. She put that down to missing him and changing any little movement, half seen, into him.

I figure there were a number of tiny areas of her brain that were dedicated to monitoring my dad. When he died they were not called on to do any work and eventually started creating sightings of him, like our brains react to sensory deprivation with hallucinations. I have been told that such things are quite common, but people do not mention them for fear of being ridiculed. Also, it is reported that many people hear voices from time to time, but do not report it for fear of being thought mad.

Here is the abstract of the paper (“Neurological and Robot-Controlled Induction of an Apparition”; O. Blanke, P. Pozeg, M. Hara, L. Heydrich, A. Serino, A. Yamamoto, T. Higuchi, R. Salomon, M. Seeck, T. Landis, S. Arzy, B. Herbelin, H. Bleuler, and G. Rognini; Current Biology 2014):

Tales of ghosts, wraiths, and other apparitions have been reported in virtually all cultures. The strange sensation that somebody is nearby when no one is actually present and cannot be seen (feeling of a presence, FoP) is a fascinating feat of the human mind, and this apparition is often covered in the literature of divinity, occultism, and fiction. Although it is described by neurological and psychiatric patients and healthy individuals in different situations, it is not yet understood how the phenomenon is triggered by the brain. Here, we performed lesion analysis in neurological FoP patients, supported by an analysis of associated neurological deficits. Our data show that the FoP is an illusory own-body perception with well-defined characteristics that is associated with sensorimotor loss and caused by lesions in three distinct brain regions: temporoparietal, insular, and especially frontoparietal cortex. Based on these data and recent experimental advances of multisensory own-body illusions, we designed a master-slave robotic system that generated specific sensorimotor conflicts and enabled us to induce the FoP and related illusory own-body perceptions experimentally in normal participants. These data show that the illusion of feeling another person nearby is caused by misperceiving the source and identity of sensorimotor (tactile, proprioceptive, and motor) signals of one’s own body. Our findings reveal the neural mechanisms of the FoP, highlight the subtle balance of brain mechanisms that generate the experience of “self” and “other,” and advance the understanding of the brain mechanisms responsible for hallucinations in schizophrenia.


Which is the illusion?

There is a nice recent review of the state of play with regard to ‘free will’ (here). I must say that the comments on that blog post were very frustrating. They seem to bypass important questions and facts.

  1. Almost everyone seems to believe that determinism and free will are opposites. There are compatibilists who say that free will can be defined so that it is not in opposition to determinism. Fine, but why do this? I don’t like the phrase, ‘free will’; I don’t want it saved; I want to be rid of the phrase and its baggage. We do not have to accept determinism either. It is not that one is right and the other wrong, or that both are right; in my opinion, they are both wrong.
  2. What is wrong with free will is the insistence that we make conscious decisions. We make decisions, freely in the sense that they cannot be predicted before we make them. But that does not mean they are in any sense conscious at that point. They (at least sometimes) rise into conscious awareness, but that does not mean that they were ‘made consciously’; they were made and then entered consciousness. The decision is ours whether we are aware of it or not, and if we are aware of it, that awareness is after the decision is made.
  3. Our conscious awareness of the justifications for a decision does not necessarily reflect the real reasons. It is an illusion that we know our actual reasons. We guess, usually correctly but sometimes very incorrectly. Our justification mechanism can be fooled.
  4. Our conscious awareness takes responsibility for any action that appears to be ours, even if it is not. In a situation where we never made a decision or moved a muscle, we can be fooled into being mistakenly aware of doing both.
  5. In order to learn we need not only to remember actions and their outcomes, but also whether we caused the actions or not. We learn by making causal hypotheses. In episodic memory, we remember only the events that reach consciousness. It is important that our own part in causing an event is remembered along with the event. So we remember decisions as appropriate, but those decisions are not ‘made in memory’ any more than they are ‘made in consciousness’. Without this information about causes, we could not learn from experience.
  6. We are of course responsible for every single thing we do. But we are responsible to an extra degree (some would say morally responsible) if we have taken ownership of that action by labeling it with a ‘decision tag’. Again, we can fool ourselves, and some people are very good at not taking responsibility, or taking responsibility but fudging the justifications. People can also, through false memory, take responsibility for an action they were not involved in.
  7. Absolutely nothing has been lost. These effects are noticeable through carefully planned experimental set-ups that are most unnatural. But the experiments can fool this system and bring to light the picture of all thought being unconscious in its construction. This does not mean that we cannot continue to function normally.
  8. Calling what we have ‘free will’ is dangerous. It carries implications that are false. Forgetting ‘free will’ and just talking about decisions is a much better way to go. And given what we know about quantum mechanics (not to mention the practical impossibility of predicting as complex a system as the brain and all that might go into a decision) we should jettison ‘determinism’ too.
  9. The really important change in viewpoint is about the nature of consciousness. Simple consciousness is not an illusion – we have that stream of awareness and we know it. The idea that consciousness is more than an awareness-attention-memory sort of thing is the illusion; conscious mind as opposed to consciousness is an illusion; introspection is an illusion; conscious decision is an illusion; conscious thought is an illusion; a self watching consciousness is an illusion. We do our thinking unconsciously and then, not before, we may or may not be consciously aware of our thoughts. Even in the step-wise linear thinking that appears to be conscious, the creation of each step is still unconscious.

Carving Nature at its joints

If you have done any butchery or even carved the meat at the table, you will understand this metaphor. In order not to hack and end up with a terrible mess, you must follow the actual anatomy of the meat. In particular, the place to separate two bones leaving their muscles attached is at the joint. That is where you cut and break the two bones apart. This was Plato’s metaphor for making valid categories, ones that fit with the underlying ‘anatomy’ of nature.

It seems to me that we are not cutting at the joint in neuroscience. How does a science know if its concepts, categories, technical terms, contrasts/opposites are mirroring nature? Well, strictly speaking, there is no way to know that our categories are in keeping with nature. However, we can tell ways in which they are not. Perfection may not be possible but improvement almost always is. When we have to make room for odd little exceptions, when we can’t use the categories to make good predictions, when they are not easy to use, when they seem fragile to cultural or semantic differences, when they seem part of a slippery slope, when they do not fit with our theories – we have to think again about where the joints might be.

Why should neuroscience be in trouble with its categories? First, it is a very new science. It only really started in the last century; some would say it didn’t get going until the 1980s, and that is only 30 years ago. It does not have any overarching theory (not like Relativity, Quantum mechanics, Molecular theory, Plate Tectonics, Cell theory, Evolution and the like). Its territory is more in ignorance than in light. Finding the joints is almost a matter of luck.

Second, it is immensely complex.

Third, neuroscience has inherited a lot of folk psychology; a great burden of Freudian psychology and other older theories; medical terminology and theories to do with mental illness; dated biological theories; attempts to simulate thought with computers; philosophical, legal and religious notions and theories. It is little wonder that agreed categories are next to impossible at the present time.

Take schizophrenia as an example. Most people treat that name as denoting a single disease. But it more likely denotes a variety of diseases with differing causes, courses, symptoms, treatments and outcomes. There is no reason to accept, and many reasons to doubt, that it is a single disease. So what exactly does a statement like, “people suffering from schizophrenia hear voices”, mean? Not all schizophrenics hear voices and not everyone who hears voices is schizophrenic. And so it is with most symptoms of this ‘disease’. The same problem dogs ‘autism’ and some other conditions.

Intelligence is also hard to see as a clean category. How can it be measured? Is it one general thing or many specific ones? Which specific ones? Do we know what personality is? Can we agree on subdividing it? What is its relationship to other things? There are so many, many words with such vague meanings. Neuroscience has words acquired from many sources. I read a philosophical paper and wonder where these words touch physical reality. What, I wonder, is a ‘mental state’; could it be a real thing? The popular press and some academics talk of ‘ego’. That is a Freudian concept, and his division of the brain (ego, superego, id) is very clearly not at any ‘joints’. The computer set uses ‘algorithm’; just where are we likely to find algorithms in the brain?

It would seem that the closer a scientist is working to the level of cells and cell assemblies, the more likely they are to see the joints. But they would be less likely to be answering questions that people outside of neuroscience want answered. But unless people want to wade through oceans of muddy water, they may have to wait for answers to ‘important’ questions until after many boring questions have been investigated. My guess would be that the semantic arguments will continue because the words in which people are thinking are not doing a good job of the carving.