Category Archives: perception

Making sense of the sense of smell

This is another post on Morsella’s ideas.

In developing the Passive Frame theory of consciousness, Morsella’s group focuses on olfaction as its model sense. This seems surprising at first, but they have good reasons.

First, it is an old system from an evolutionary viewpoint. As Shepherd puts it: “the basic architecture of the neural basis of consciousness in mammals, including primates, should be sought in the olfactory system, with adaptations for the other sensory pathways reflecting their relative importance in the different species”.

Second, its connections are simple compared to those of vision and hearing. Olfactory signals go straight to the cortex rather than arriving via the thalamus, and they enter an old part of the cortex, the paleocortex, rather than the neocortex (which has primary processing areas for the other senses). The processing of smell is more or less confined to one area in the frontal region and does not extend to the extensive areas at the back of the brain where visual and auditory processing occurs. The sense of smell is therefore much easier to track anatomically than the other ‘higher’ senses. To understand minimal consciousness, it is reasonable to use the least elaborate sense as a model.

Third, looking at what lesions interfere with olfactory consciousness, it seems that connections outside the cortex are not needed for awareness of odours. This implies that at a basic level consciousness does not require the thalamus or mid-brain areas (although consciousness of other senses does require those areas). Some links to the thalamus and other areas may be involved in further processing smell signals but not in being conscious of them.

Fourth, the addition of a smell to the contents of consciousness has a sort of purity. The sense is only there when it is there. We are aware of silence and of complete darkness, but we are not aware of a lack of odour unless we question ourselves. If odours are at very low concentrations, or if we have habituated to them because their concentration is not changing, we are not conscious of those odours and also not conscious of their absence. “The experiential nothingness associated with olfaction yields no conscious contents of any kind to such an extent that, absent memory, one in such a circumstance would not know that one possessed an olfactory system.” So the addition of a smell to the contents of consciousness is a distinct change in awareness and in itself draws attention.

Fifth, olfaction is not entangled with a number of other functions. There are no olfactory symbols being manipulated and the like, and it is difficult to hold olfactory ‘images’ in working memory. Also: “olfactory experiences are less likely to occur in a self-generated, stochastic manner: Unlike with vision and audition, in which visually-rich daydreaming or ‘ear worms’ occur spontaneously during an experiment and can contaminate psychophysical measures, respectively, little if any self-generated olfactory experiences could contaminate measures.”

As well as these reasons given by Morsella in justifying the choice of olfaction in developing the Passive Frame theory, it occurs to me that there is a significant difference in memory. There is a type of recall prompted by smell that seems instantaneous, effortless and very detailed. For example, when you enter a house that you have not been in since childhood and the house has changed in so many ways over the years, the first breath gives a forgotten smell and a vivid sense of the original house along with many images from memories you know you could not normally recall. There seems to be some direct line between the memory of a ‘place’ and the faint odour of that place.

This olfactory approach to consciousness cuts away many of the elaborations and fancy details of consciousness and lets the basic essentials stand out more clearly.

A tiny eye

Erythropsidinium

A single-celled organism called Erythropsidinium has been reported to have a tiny eye. This organism is not a simple bacterial cell but a eukaryote. It is single-celled, but it has the kind of cell that is found in multicelled organisms like us. It is not a bag of chemicals but is highly organized, with a nucleus and organelles. Among the organelles are a little eye and a little harpoon – ‘all the better to hunt with, my dear’. The eye (called an ocelloid) is like a camera with a lens and pigment responders, while the harpoon is a piston that can elongate to 20 or so times its length very quickly and has a poison tip. The prey is transparent but has a nucleus that polarizes light, and it is the polarized light that the ocelloid detects. This results in the harpoon being aimed in the direction of the prey before it is fired.

That sounds like a link between a sensory organelle and a motor organelle. As far as I can see, it is not known how the linking mechanism works, but in a single-celled organism the link has to be relatively simple (a mechanical or chemical molecular event, or a short chain of events). This is like a tiny nervous system but without the nerves. There is a sensor and an actor; in a nervous system there would be a web of inter-neurons that connected the two and allowed activity to be appropriate to the situation. Whatever the link is in Erythropsidinium, it does allow the steering of the harpoon in an effective direction. The cell can move the ocelloid and the harpoon. Are they physically tied together? Or is there more information processing than just a ‘fire’ signal?
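Whatever the real mechanism is, the logic of the link can be made concrete with a toy model. Everything below is invented for illustration – the actual coupling in Erythropsidinium is unknown – but it shows how little machinery a sense-to-act loop needs:

```python
# Toy model of the simplest possible sensor-to-actor link: the ocelloid
# reports a bearing, the harpoon copies it and fires. The signal values
# and the 'aim equals sense' coupling are assumptions, not biology.

def ocelloid_bearing(polarized_signals):
    """Pick the direction (in degrees) with the strongest polarized light."""
    return max(polarized_signals, key=polarized_signals.get)

def aim_and_fire(bearing):
    """The 'motor' side: a single fixed response to the sensed bearing."""
    return f"harpoon fired at {bearing} degrees"

# Hypothetical readings: the prey's light-polarizing nucleus lies at 90 degrees.
signals = {0: 0.1, 90: 0.9, 180: 0.2, 270: 0.1}
print(aim_and_fire(ocelloid_bearing(signals)))  # harpoon fired at 90 degrees
```

The point of the sketch is that a direct sensor-to-actor coupling needs no intermediate web of neurons at all – which is exactly what makes the single-cell case interesting.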

This raises an interesting question. Can we say that this organism is aware? If the ability to sense and to act is found coordinated within a single cell – can that cell be said to be aware of its actions and its environment? And if it is aware, is it conscious in some simple way? That would raise the question of whether complexity is a requirement for consciousness. These are semantic arguments, all about how words are defined and not about how the world works.

Lingua Franca of the brain

Ezequiel Morsella has been kind enough to send me more information on the Passive Frame theory of consciousness. So here is another posting on ideas from that source.

From time to time I encounter notions of there being a ‘language of the brain’ or a brain coding system. Although I would not say that there was no extra language layer (who knows?), I have never seen the necessity for it. The idea seems a product of thinking of the brain in the context of software algorithms, digital transmission, information theory, universal Turing machines and the like rather than in biological cell to cell communication.

Look at forming and retrieving episodic memories: they are conscious experiences before they are stored and conscious experiences when they are retrieved. Awareness is in the form of consciousness, and so is the access of various parts of the brain to information from other parts. We understand movement of ourselves and others in similar terms. The Passive Frame proponents talk of perception-like tokens – “they represent well-crafted representations occurring at a stage of processing between sensory analysis and motor programming” – which are presumably accessible to both. Here we have a lingua franca for sensory-motor interaction.

Of course for both sensory and motor processing we need a space and viewpoint for the perception-like tokens. This is often thought of as a stage or a sensorium, but I like to think of it as a model of the environment with the organism active in it. In this ‘space’ the objects we perceive can be placed and our actions can be simulated.

Colour words

It is well known that not all languages have words for what we would call the basic colours of the rainbow – red, orange, yellow, green, blue, purple – along with white and black. How can this be so?

First we can get rid of the idea that because speakers have no word for a colour, they cannot see it. Of course they can see it; they simply have no category for that particular colour. Take a language without a word for blue: its speakers would call a darker blue a shade of black and a lighter blue a shade of white. To wonder at this is like wondering why someone calls both straw and apricot shades of yellow. It is not that they cannot see the difference but that they have not formed those particular categories (because they have never spent hours picking the colours of paints, for example). How many colour names we have, and the exact lines of demarcation between them, depend on the culture/language we live in. When we see a colour in front of us, we see the visual perception and not the category/word/concept of a particular colour. We can compare two shades in front of us and say whether they are the same or different even if we have only one colour word for both of them.

Seeing is one thing but saying is another. All words are categories or concepts and encompass a good deal of variation. In the ‘space-landscape’ of colour, words are like large countries. As children we learn the geography of this space and the borders of each word’s domain. When we are asked to name a colour, we use the word that is the colour’s best category. We sort of understand where in the landscape that colour is and therefore which country it is in. To communicate, we need to more or less agree on the borders of the categories and the word for each – otherwise there is no communication. If you say it is a red flower, I will imagine an archetypal flower with an average red colour.
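The ‘words as countries’ picture can be sketched in code: a shade gets the name of the nearest prototype in colour space. The prototype values and the distance metric below are hypothetical stand-ins for a culture’s actual categories – a minimal sketch, not a model of perception:

```python
# Sketch: colour words as "countries" in RGB space. A shade is named by
# the nearest prototype. The prototypes below are invented placeholders
# for whatever categories a given language actually has.

BASIC_COLOURS = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (255, 0, 0),
    "yellow": (255, 255, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
}

def name_colour(rgb, prototypes=BASIC_COLOURS):
    """Return the colour word whose prototype is nearest in RGB space."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(rgb, p))
    return min(prototypes, key=lambda w: dist(prototypes[w]))

# A language without "blue" splits that territory between its neighbours:
no_blue = {w: p for w, p in BASIC_COLOURS.items() if w != "blue"}
print(name_colour((0, 0, 139)))               # -> 'blue'
print(name_colour((0, 0, 139), no_blue))      # dark blue -> 'black'
print(name_colour((173, 216, 230), no_blue))  # light blue -> 'white'
```

Removing a word does not change the distances between shades – only which ‘country’ each shade falls in, which is the point made above about seeing versus saying.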

Our culture does more than that. Culture can make connections between objects and colours. Some objects get defined by their colour. What colour is the sky? It is blue. It is a well known fact that the sky is blue. But the sky is not always blue – black on a dark night, various shades of grey (from almost white to quite dark grey with clouds), pink in the dawn, orange and red in the sunset, green with northern lights. Water is also blue by agreement, although it is often grey, green, brown, yellow or red. If I think of a leaf, green comes along. If I think of a lemon, I also bring up yellow. The sky and blue is one of these conventional pairings. But where the colour is important, it (in a sense) splits the object concept. It matters whether a wine is red or white, a chess-piece black or white. The culture will force the noticing of colour when it is important in that culture. Quite often colours are identified by an object (like the apricot and straw mentioned above). This has been going on for a long time: orange from a Persian word for the fruit, yellow from a West Germanic word for gold, green from an old Germanic word for new growth, purple from the Greek for a mollusc that gave the royal dye.

Languages acquire colour words over time. Berlin and Kay examined the history of 110 languages and found that words for colour started with light and dark (not just white and black), followed by red (sometimes used for bright-coloured), then green and yellow (sometimes together and then separating), then blue. Other colours were added later: brown and orange (sometimes together at first), purple, pink, grey. Then we have many, many subcategories (sky blue, pea green) and border ones (aquamarine/turquoise at the green-blue border). I notice that lately, when people list basic colours, they include pink along with the primary colours. This is new and implies that red has split into red and pink. People do not want to call a pink thing red.

Unless it is very important, it seems that colour can be omitted from a memory. It is surprising how little we remember the colour of things. We can see things every day and not be able to remember their colour. There sometimes is simply no reason to remember.

We cannot know what people experience by looking at the words they have. The ancient Greeks lacked many colour words. But the idea that, “It seemed the Greeks lived in a murky and muddy world, devoid of color, mostly black and white and metallic, with occasional flashes of red or yellow”, is just wrong. Their poetry is not full of colourful images, but that does not mean that their lives were devoid of colour.

The smell of the land

The sense of smell is intriguing. It is not as readily conscious as sight, hearing, touch and taste, so it is often discounted. However, humans can in fact smell quite well and can learn to do so in sophisticated and conscious ways. Perfumers are an example of this. We also know that a particular smell can bring back the memory of a place in a flash that seems quite miraculous. It is the most important sense for many mammals – used to identify objects and places, track and navigate, and communicate emotional signals. There is no reason to think that we are that much different; we probably use smell as a background (largely unconscious) canvas on which to perceive the world.

Recent research (citation below) has indicated such a canvas. Jacobs and others experimented with human subjects to see if they could map their surroundings using odour gradients. They used a large room with two sources of distinct odours. The subjects were disoriented, placed in a spot and asked to remember its smell. They were then disoriented again and asked to find the spot using their memory of the odour. This was done first with sight and hearing blocked and only the sense of smell available, then repeated with sight as the only sense available, and finally with all three senses blocked. The subjects could come close to the target spot with scent alone, compared with the control condition in which none of the three senses was available.
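The idea of defining a location by its pair of odour concentrations can be sketched with a toy simulation: two sources give two gradients, and a remembered pair of concentrations picks out a point in the room. The decay law, coordinates and search method below are assumptions for illustration, not the experimental setup:

```python
import math

# Toy "odour grid": two sources, each with a concentration that falls off
# with distance. A remembered (c1, c2) pair defines two iso-concentration
# circles whose intersection is the target spot. All numbers are invented.

SOURCES = [(0.0, 0.0), (10.0, 0.0)]  # two odour sources in a 10 x 10 room

def concentrations(x, y):
    """Concentration from each source, decaying smoothly with distance."""
    return tuple(1.0 / (1.0 + math.hypot(x - sx, y - sy)) for sx, sy in SOURCES)

def relocate(remembered, step=0.1):
    """Grid-search the point whose smell best matches the remembered pair."""
    best, best_err = None, float("inf")
    for i in range(101):
        for j in range(101):
            x, y = i * step, j * step
            c = concentrations(x, y)
            err = sum((a - b) ** 2 for a, b in zip(c, remembered))
            if err < best_err:
                best, best_err = (x, y), err
    return best

target = (3.0, 4.0)
found = relocate(concentrations(*target))
print(found)  # close to (3.0, 4.0)
```

Note that one gradient alone only narrows the position to a circle around its source; it takes the second, independent gradient to fix a point, which is why the experiment used two odours.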

This is a distinct ability and not the same as tracking a smell or identifying an object or place. It is the formation of a map based on odour gradients. Spatial maps are created in the hippocampus, and the olfactory bulb is strongly connected to the hippocampus. The authors address the relationship between the odour map, the sound map (echolocation), the visual map and so on: “The ability to navigate accurately is critical to survival for most species. Perhaps for this reason, it is a general property of navigation that locations are encoded redundantly, using multiple orientation mechanisms, often from multiple sensory systems. Encoding the location with independent systems is also necessary to correct and calibrate the accuracy of any one system. As a general principle, then, navigational accuracy and robustness should increase with the number of unique properties exhibited by redundant orientation systems.”

Abstract: “Although predicted by theory, there is no direct evidence that an animal can define an arbitrary location in space as a coordinate location on an odor grid. Here we show that humans can do so. Using a spatial match-to-sample procedure, humans were led to a random location within a room diffused with two odors. After brief sampling and spatial disorientation, they had to return to this location. Over three conditions, participants had access to different sensory stimuli: olfactory only, visual only, and a final control condition with no olfactory, visual, or auditory stimuli. Humans located the target with higher accuracy in the olfaction-only condition than in the control condition and showed higher accuracy than chance. Thus a mechanism long proposed for the homing pigeon, the ability to define a location on a map constructed from chemical stimuli, may also be a navigational mechanism used by humans.”

Citation: Jacobs LF, Arter J, Cook A, Sulloway FJ (2015) Olfactory Orientation and Navigation in Humans. PLoS ONE 10(6): e0129387. doi:10.1371/journal.pone.0129387

Looking at qualia

For years there have been questions about whether we see the same colours, hear the same sounds, smell the same odours. How can we tell what someone else experiences in their conscious awareness? Well plainly, today at least, we can’t tell what someone else experiences.

But multivariate pattern analysis gives a type of decoding of patterns of activity in the brain. It has been used to do a type of ‘mind reading’ – but with the disadvantage that the code to ‘read’ a particular perception or thought is highly personal. The activity pattern resulting from a particular picture can be decoded only after many examples of pictures have been studied in that individual to create a decoding program. This tells us nothing (or very, very little) about how our perceptions may be similar.

Sight, hearing and smell are complex domains, so it is not surprising that the activity patterns are individual. But taste has only a handful of qualities (sweet, salty, sour, bitter, savory) compared to the extremely large numbers of qualities in colour, pitch and basic odours. Of course a perception has added qualities of intensity, various mixtures of the basic qualities, and emotional overtones. Still, a low number of basic qualities and a restricted range of intensities make for a much more tractable decoding program. Taste can also be analysed early in its perception, and some of the pattern’s elements are ‘hardwired’. The Crouzet paper (abstract below) has these highlights: “large-scale electrophysiological response patterns code for taste quality in humans; taste quality is represented early in the central gustatory system; neural response patterns correlate with subjective perceptual experience.”
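A toy version of such a decoding program, in the spirit of nearest-centroid pattern classification (not the authors’ actual analysis), might look like this. The ‘response patterns’ are synthetic, and the channel counts and noise levels are invented:

```python
import random

# Toy multivariate pattern decoding: each "trial" is a noisy multichannel
# response to one of four tastes. A nearest-centroid classifier trained on
# some trials predicts taste quality on held-out trials.

random.seed(0)
TASTES = ["sweet", "salty", "sour", "bitter"]
# Invented underlying response pattern for each taste (16 "channels").
PROTOTYPES = {t: [random.gauss(0, 1) for _ in range(16)] for t in TASTES}

def trial(taste, noise=0.5):
    """One noisy measurement of the pattern for a given taste."""
    return [v + random.gauss(0, noise) for v in PROTOTYPES[taste]]

def train(trials):
    """Average the training trials per taste to get a decoding template."""
    centroids = {}
    for t in TASTES:
        rows = [x for lbl, x in trials if lbl == t]
        centroids[t] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def decode(pattern, centroids):
    """Assign the pattern to the nearest template."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(pattern, c))
    return min(centroids, key=lambda t: dist(centroids[t]))

train_set = [(t, trial(t)) for t in TASTES for _ in range(20)]
test_set = [(t, trial(t)) for t in TASTES for _ in range(20)]
centroids = train(train_set)
accuracy = sum(decode(x, centroids) == t for t, x in test_set) / len(test_set)
print(accuracy)  # well above the 0.25 chance level
```

The personal-code problem discussed above corresponds to the fact that the centroids are fitted per individual: a decoder trained on one subject’s patterns says nothing, by itself, about whether another subject’s patterns are similar.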

If it is further found in future research that these patterns are similar for different individuals tasting the same taste, then it would raise the probability that our experiences of other senses are also similar. If the patterns for different individuals show no similarity, then the probability that we share qualia is low.

Here is the abstract of the paper (S. Crouzet, N. Busch, K. Ohla; Taste Quality Decoding Parallels Taste Sensations; 2015 Current Biology):

“In most species, the sense of taste is key in the distinction of potentially nutritious and harmful food constituents and thereby in the acceptance (or rejection) of food. Taste quality is encoded by specialized receptors on the tongue, which detect chemicals corresponding to each of the basic tastes (sweet, salty, sour, bitter, and savory), before taste quality information is transmitted via segregated neuronal fibers, distributed coding across neuronal fibers, or dynamic firing patterns to the gustatory cortex in the insula. In rodents, both hardwired coding by labeled lines and flexible, learning-dependent representations and broadly tuned neurons seem to coexist. It is currently unknown how, when, and where taste quality representations are established in the cortex and whether these representations are used for perceptual decisions. Here, we show that neuronal response patterns allow to decode which of four tastants (salty, sweet, sour, and bitter) participants tasted in a given trial by using time-resolved multivariate pattern analyses of large-scale electrophysiological brain responses. The onset of this prediction coincided with the earliest taste-evoked responses originating from the insula and opercular cortices, indicating that quality is among the first attributes of a taste represented in the central gustatory system. These response patterns correlated with perceptual decisions of taste quality: tastes that participants discriminated less accurately also evoked less discriminated brain response patterns. The results therefore provide the first evidence for a link between taste-related decision-making and the predictive value of these brain response patterns.”

Note: Kathrin Ohla sent this:

“I just came across your blog entry "looking for qualia" and like to thank you for discussing our work. I am glad it raises so much interest also outside the "taste community". If it's not too much to ask, I would appreciate if you included the hyperlink to the paper. This would allow your readers to find the full article quicker.”

http://dx.doi.org/10.1016/j.cub.2015.01.057

Some visual-form areas are really task areas

There are two paths for visual information: one to the motor areas (the dorsal ‘where’ stream) and one to the areas concerned with consciousness, memory and cognition (the ventral ‘what’ stream). The ventral stream has areas for the recognition of various categories of object: faces, body parts and letters, for example. But are these areas really ‘visual’ areas, or can they deal with input from other senses? There is recent research into an area concerned with numerals (see citation below). There are reasons to doubt ‘vision only’ processing in these areas. “…cortical preference in the ‘visual’ cortex might not be exclusively visual and in fact might develop independently of visual experience. Specifically, an area showing preference for reading, at the precise location of the VWFA (visual word-form area), was shown to be active in congenitally blind subjects during Braille reading. Large-scale segregation of the ventral stream into animate and inanimate semantic categories has also been shown to be independent of visual experience. More generally, an overlap in the neural correlates of equivalent tasks has been repeatedly shown between the blind and sighted using different sensory modalities.” Is an area specialized in one domain because of cultural learning through visual experience, or is the specialization the result of the specific connectivity of the area?

Abboud and others used congenitally blind subjects to see if the numeral area could process numerals arriving as auditory signals. Congenitally blind subjects cannot have categorical areas that are based on visual learning. The letter area and numeral area are separate even though letter symbols and numeral symbols are very similar – in fact they can be identical. The researchers predicted that the word area connects to language areas and the numeral area to quantity areas.

eye-music application

The subjects were trained in EyeMusic, a sight substitute based on time, pitch, timbre and volume. While being scanned, the subjects heard the same musical description of an object and were asked to identify the object as part of a word, part of a number, or a colour. Roman numerals were used to give a large number of identical musical descriptions of numbers and letters. What they found was that the numeric task gave activation in the same area as it does in a sighted person, and that blind and sighted subjects had the same connections: word area to language network and numeral area to quantity network. It is the connectivity patterns, independent of visual experience, that create the visual numeral-form area. “…neither the sensory-input modality and visual experience, nor the physical sensory stimulation itself, play a critical role in the specialization observed in this area.” It is which network is active (language or quantity) that is critical.

“…these results are in agreement with the theory of cultural recycling, which suggests that the acquisition of novel cultural inventions is only feasible inasmuch as it capitalizes on prior anatomical and connectional constraints and invades pre-existing brain networks capable of performing a function sufficiently similar to what is needed by the novel invention. In addition, other factors such as the specifics of how literacy and numeracy are learned, as well as the distinctive functions of numerals and letters in our education and culture, could also account for the segregation of their preferences.”

Here is the abstract: “Distinct preference for visual number symbols was recently discovered in the human right inferior temporal gyrus (rITG). It remains unclear how this preference emerges, what is the contribution of shape biases to its formation and whether visual processing underlies it. Here we use congenital blindness as a model for brain development without visual experience. During fMRI, we present blind subjects with shapes encoded using a novel visual-to-music sensory-substitution device (The EyeMusic). Greater activation is observed in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols. Using resting-state fMRI in the blind and sighted, we further show that the areas with preference for numerals and letters exhibit distinct patterns of functional connectivity with quantity and language-processing areas, respectively. Our findings suggest that specificity in the ventral ‘visual’ stream can emerge independently of sensory modality and visual experience, under the influence of distinct connectivity patterns.”

Abboud, S., Maidenbaum, S., Dehaene, S., & Amedi, A. (2015). A number-form area in the blind. Nature Communications, 6. DOI: 10.1038/ncomms7026


Echo-location in humans

We can echo-locate, but it seems that only the blind can master it well. This is because, to be done well, echolocation uses parts of the visual cortex. A few years ago Thaler et al published the details (see citation below). Here is their description of this natural ability:

“The enormous potential of this ‘natural’ echolocation ability is realized in a segment of the blind population that has learned to sense silent objects in the environment simply by generating clicks with their tongues and mouths and then listening to the returning echoes. The echolocation click produced by such individuals tends to be short (approximately 10 ms) and spectrally broad. Clicks can be produced in various ways, but it has been suggested that the palatal click, produced by quickly moving the tongue backwards and downwards from the palatal region directly behind the teeth, is best for natural human echolocation. For the skilled echolocator, the returning echoes can potentially provide a great deal of information regarding the position, distance, size, shape and texture of objects.”

They found that their blind echo-locating subjects (early-blind and late-blind) used visual areas of cortex in processing echo information. When they were presented with recordings of clicks with and without the resulting echoes, they found activity in the calcarine cortex when there were echoes but not in echo free recordings. But there was no difference in the activity of the auditory cortex in hearing the two recordings. There was also activity of other visual areas when listening to echoes reflected by moving objects. They conclude that blind echo-locating experts use brain regions typically used for vision rather than auditory areas to process the echoes into a perception of objects in space.

The calcarine cortex has other names: primary visual cortex, striate cortex, V1 visual area. It is the area that first receives information from the eye (via the thalamus). It contains a point-to-point map of the retina. V1 is known for several visual abilities: the identification of simple forms such as lines with orientations and lengths, aiming the eyes (saccades) towards interesting clues in the peripheral vision, and participating in forming images even with the eyes closed. This is the sort of processing area that can be taken over to process echoes into a spatial image when vision is not able to use it.

It is likely that our senses are all (to some extent) building 3D models of our surroundings, and they all contribute to our working model of the world. In particular, what we see, what we hear and what we feel all seem to be part of one reality, not three. This must mean that the models from each sense are fitted together somewhere, or that the models of each sense feed information into each other, or, of course, both. In the end, though, the visual model seems, in our case, to be the more influential part of our working model.

The mechanisms for finding discontinuities in light and finding their linear orientation and length, would not be that much different from finding discontinuities in echoes and finding their linear orientation and length. Fitting this sort of information into a perceptual model would use mechanisms that are used for visual lines and objects in sighted people. But is there evidence of this coordination of perception?
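That similarity can be illustrated with a trivial sketch: the same discontinuity detector, applied to a made-up visual intensity profile and a made-up echo profile, finds the same edges. The signals and the threshold are invented for illustration:

```python
# Sketch: one discontinuity detector serving two modalities. A simple
# first-difference threshold finds "edges" whether the samples are light
# intensities across a scene or echo strengths across a sweep.

def find_edges(signal, threshold=1.0):
    """Indices where adjacent samples jump by more than the threshold."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

light = [0.1, 0.1, 0.2, 5.0, 5.1, 5.0, 0.2, 0.1]  # bright bar on a dark field
echo  = [2.0, 2.0, 2.1, 6.5, 6.4, 6.5, 2.1, 2.0]  # nearer surface in the echoes

print(find_edges(light))  # [3, 6]
print(find_edges(echo))   # [3, 6]
```

The detector never asks what kind of signal it is given, which is the sense in which machinery evolved for visual edges could serve echo edges unchanged.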

Buckingham et al (see citation below) have looked at this and found “that echolocation is not just a functional tool to help visually-impaired individuals navigate their environment, but actually has the potential to be a complete sensory replacement for vision.” There is an illusion in which the size of an object affects its perceived weight: with boxes weighing exactly the same but of different sizes, the smaller boxes feel heavier than the larger ones. This illusion was used to show that the size information, which is usually visual, can be replaced by echolocation information without changing the illusion.

Here is their abstract: “Certain blind individuals have learned to interpret the echoes of self-generated sounds to perceive the structure of objects in their environment. The current work examined how far the influence of this unique form of sensory substitution extends by testing whether echolocation-induced representations of object size could influence weight perception. A small group of echolocation experts made tongue clicks or finger snaps toward cubes of varying sizes and weights before lifting them. These echolocators experienced a robust size-weight illusion. This experiment provides the first demonstration of a sensory substitution technique whereby the substituted sense influences the conscious.”

Why don’t sighted people echo-locate? I do not believe it has been shown that we don’t. If we do, it is not rendered consciously or used in preference to visual data. But there is no reason to assume that it is not there in the background, helping to form a perceptual working model. For example, if an echo-based edge coincided with an optical edge in the V1 area, it could give additional information about the nature of the edge.

I also think it may be that in order to simplify auditory perception, our brains suppress the low level echoes of any sound we make. We would be aware of the sound we made but much less aware of echoes of that sound. The auditory cortex would then be unable to echo-locate and the visual cortex would be busy with vision (and perhaps some echoes and other sounds) producing a visual model. In this case, we would not consciously hear our echoes and we would not directly consciously ‘see’ them either, although we might be processing them as additions to visual input.

Thaler, L., Arnott, S., & Goodale, M. (2011). Neural Correlates of Natural Human Echolocation in Early and Late Blind Echolocation Experts. PLoS ONE, 6(5). DOI: 10.1371/journal.pone.0020162

Buckingham, G., Milne, J., Byrne, C., & Goodale, M. (2014). The Size-Weight Illusion Induced Through Human Echolocation. Psychological Science. DOI: 10.1177/0956797614561267


Agency and intention

Nautilus has a post (here) by Matthew Hutson that is a very interesting review of the connection between our perception of time and our perception of causation. If we believe that two events are causally related, we perceive less time between them than a clock would register; if we believe the events are not causally connected, the time between them stretches. And on the other side of the coin: if we perceive a shorter time between two events, we are more likely to believe they are causally connected; if the time between them is longer, it is harder for us to believe they are causally related. This effect is called intentional binding. The article describes the important experiments that underpin this concept.

But intentional binding is part of a larger concept. How is our sense of agency created, and why? To learn how to do things in this world, we have to know what we set in motion and what was caused by something other than ourselves. Our memory of an event has to be marked as caused by us, if it was, in order to be useful in future situations. As our memory of an event is based on our consciousness of it, our consciousness must reflect whether we caused the outcome. So the question becomes: how do our brains make the call to mark an event as our own agency? If the actual ‘causing’ were a conscious process, there would be no need for a procedure to establish whether we were the agents of the action. However, there is such a procedure.

I wrote about this previously (here) in looking at Chapter 1 of ‘The New Unconscious’, ‘Who is the Controller of Controlled Processes?’. What needs to happen for us to feel that we have willed an action? We have to believe that thoughts which reach our consciousness have caused our actions. Three things are needed for us to make a causal connection between the thoughts and the actions:

  1. priority

The thought has to reach consciousness before the action if it is going to appear to be a cause. In fact it must occur quite close to the action, within about 30 sec. before it. Wegner and Wheatley investigated this principle with fake thoughts fed through earphones and fake actions gently forced by equipment, giving people the feeling that their thought had caused their action.

  2. consistency

The thought has to be about the action in order for it to appear to be the cause. Wegner, Sparrow and Winerman used a mirror so that a subject saw the hands of another person standing behind them instead of their own. If the thoughts fed to the subject through earphones matched the hand movements then the subject experienced willing the movements. If the earphones gave no ‘thoughts’ or contradictory ones, there was no feeling of will.

  3. exclusivity

The thought must be the only apparent source of a cause for the action. If another cause that seems more believable is available it will be used. The feeling of will can disappear when the subject is in a trance and feels controlled by another agent such as a spirit.
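The three conditions read like a decision rule, so here is a minimal sketch of them as one. This is my own hypothetical rendering, not anything from the chapter: the function name, arguments, and the 30-second window parameter are all invented for illustration.

```python
# Hypothetical sketch of Wegner's three conditions (priority, consistency,
# exclusivity) as a boolean decision rule. Names and the window value are
# illustrative, not taken from the source.

def feels_willed(thought_to_action_ms, thought_matches_action,
                 other_cause_present, priority_window_ms=30_000):
    """Return True if a thought would be experienced as causing the action."""
    priority = 0 < thought_to_action_ms <= priority_window_ms  # thought came first, and recently
    consistency = thought_matches_action                       # thought was about the action
    exclusivity = not other_cause_present                      # no rival explanation available
    return priority and consistency and exclusivity

# Thought 2 s before a matching action, with no rival cause: feeling of will.
assert feels_willed(2_000, True, False)
# A rival cause (e.g. a 'spirit' in a trance) removes the feeling of will.
assert not feels_willed(2_000, True, True)
```

The conjunction matters: as the experiments above suggest, knocking out any one of the three conditions is enough to abolish the feeling of will.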

Also previously (here) I discussed a report in Science, “Movement Intention after Parietal Cortex Stimulation in Humans”, by M. Desmurget and others, with the following summary:

“Parietal and premotor cortex regions are serious contenders for bringing motor intentions and motor responses into awareness. We used electrical stimulation in seven patients undergoing awake brain surgery. Stimulating the right inferior parietal regions triggered a strong intention and desire to move the contralateral hand, arm, or foot, whereas stimulating the left inferior parietal region provoked the intention to move the lips and to talk. When stimulation intensity was increased in parietal areas, participants believed they had really performed these movements, although no electromyographic activity was detected. Stimulation of the premotor region triggered overt mouth and contralateral limb movements. Yet, patients firmly denied that they had moved. Conscious intention and motor awareness thus arise from increased parietal activity before movement execution.”

The feeling of agency is not something we can change even when we believe it is false. Here is Rodolfo Llinas describing an experiment that he conducted on himself, from a video interview (video) that I discussed previously (here). There are many interesting ideas in this hour-long discussion; the part I quote from the transcript is Llinas’ self-experimentation on the subject of free will.

“…I understand that free will does not exist; I understand that it is the only rational way to relate to each other, this is to assume that it does, although we deeply know that it doesn’t. Now the question you may ask me is how do you know? And the answer is, well, I did an actually lovely experiment on myself. It was extraordinary really. There is an instrument used in neurology called a magnetic stimulator…its an instrument that has a coil that you put next to the top of the head and you pass a current such that a big magnetic field is generated that activates the brain directly, without necessary to open the thing. So if you get one of these coils and you put it on top of the head, you can generate a movement. You put it in the back, you see a light, so you can stimulate different parts of the brain and have a feeling of what happens when you activate the brain directly without, in quotes, you doing it. This of course is a strange way of talking but that’s how we talk. So I decide to put it on the top of the head where I consider to be the motor cortex and stimulate it and find a good spot where my foot on the right side would move inwards. It was *pop* no problem. And we did it several time and I tell my colleague, I know anatomy, I know physiology, I can tell you I’m cheating. Put the stimulus and then I move, I feel it, I’m moving it. And he said well, you know, there’s no way to really know. I said, I’ll tell you how I know. I feel it, but stimulate and I’ll move the foot outwards. I am now going to do that, so I stimulate and the foot moves inwards again. So I said but I changed my mind. Do it again. So I do it half a dozen times… (it always moved inward)…So I said, oh my god, I can’t tell the difference between the activity from the outside and what I consider to be a voluntary movement. If I know that it is going to happen, then I think I did it, because I now understand this free will stuff and this volition stuff. 
Volition is what’s happening somewhere else in the brain, I know about and therefore I decide that I did it…In other words, free will is knowing what you are going to do. That’s all.”

Synesthesia can be learned

Synesthesia is a condition in which one stimulus (like a letter) is automatically experienced with another attribute (like a colour) that is not actually present. About 4% of people have some form of this sensory mixing. It has generally been assumed that synesthesia is inherited because it runs in families, but it has also been clear that some learning is involved in triggering and shaping it. “Simner and colleagues tested grapheme-color consistency in synesthetic children between 6 and 7 years of age, and again in the same children a year later. This interim year appeared critical in transforming chaotic pairings into consistent fixed associations. The same cohort were retested 3 years later, and found to have even more consistent pairings. Therefore, GCS (grapheme-color synesthesia) appears to emerge in early school years, where first major pressures to use graphemes are encountered, and then becomes cemented in later years. In fact, for certain abstract inducers, such as graphemes, it is implausible that humans are born with synesthetic associations to these stimuli. Hence, learning must be involved in the development of at least some forms of synesthesia.” There have been attempts to train people to have synesthetic experiences, but these have not produced the conscious experience of genuine synesthesia.

In the paper cited below, Bor and colleagues managed to produce these genuine experiences in people showing no previous signs of synesthesia and no family history of it. They attribute their success to more intensive training. “Here, we implemented a synesthetic training regime considerably closer to putative real-life synesthesia development than has previously been used. We significantly extended training time compared to all previous studies, employed a range of measures to optimize motivation, such as making tasks adaptive, and we selected our letter-color associations from the most common associations found in synesthetic and normal populations. Participants were tested on a range of cognitive and perceptual tasks before, during, and after training. We predicted that this extensive training regime would cause our participants to simulate synesthesia far more closely than previous synesthesia training studies have achieved.”

The phenomenology in these subjects was mild and not permanent, but it was definitely real synesthesia. The work suggests that although there is a genetic tendency, in typical synesthetes the condition is learned, probably through intensive, motivated training during development. It also seems that the condition is simply a matter of associative memory rather than ‘extra wiring’.
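The associative-memory view can be sketched as a toy training loop: repeated exposure strengthens a letter-color pairing until the trained color dominates whatever the letter evokes. Everything here is invented for illustration (the class, the strengths, the update rule); the actual study used adaptive memory and reading tasks, not this scheme.

```python
# Minimal sketch of letter-color association learning by repeated exposure.
# All names, strengths, and the update rule are illustrative placeholders.

from collections import defaultdict

class LetterColorMemory:
    def __init__(self):
        self.strength = defaultdict(float)   # (letter, color) -> association strength

    def train(self, letter, color, boost=1.0):
        """One exposure to a letter-color pairing strengthens it."""
        self.strength[(letter, color)] += boost

    def evoked_color(self, letter):
        """Return the most strongly associated color for a letter, if any."""
        pairs = {c: s for (l, c), s in self.strength.items() if l == letter}
        return max(pairs, key=pairs.get) if pairs else None

memory = LetterColorMemory()
for _ in range(100):                  # extensive, repeated training
    memory.train("a", "red")
memory.train("a", "blue")             # a single stray exposure does not win

assert memory.evoked_color("a") == "red"
assert memory.evoked_color("z") is None   # untrained letters evoke nothing
```

On this picture, what the intensive regime does is push a handful of pairings so far above all competitors that the color comes to mind automatically, with no dedicated cross-sensory wiring required.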

Here is the abstract:

“Synesthesia is a condition where presentation of one perceptual class consistently evokes additional experiences in different perceptual categories. Synesthesia is widely considered a congenital condition, although an alternative view is that it is underpinned by repeated exposure to combined perceptual features at key developmental stages. Here we explore the potential for repeated associative learning to shape and engender synesthetic experiences. Non-synesthetic adult participants engaged in an extensive training regime that involved adaptive memory and reading tasks, designed to reinforce 13 specific letter-color associations. Following training, subjects exhibited a range of standard behavioral and physiological markers for grapheme-color synesthesia; crucially, most also described perceiving color experiences for achromatic letters, inside and outside the lab, where such experiences are usually considered the hallmark of genuine synesthetes. Collectively our results are consistent with developmental accounts of synesthesia and illuminate a previously unsuspected potential for new learning to shape perceptual experience, even in adulthood.”

Bor, D., Rothen, N., Schwartzman, D., Clayton, S., & Seth, A. (2014). Adults Can Be Trained to Acquire Synesthetic Experiences. Scientific Reports, 4. DOI: 10.1038/srep07089
