
Making sense of the sense of smell

This is another post on Morsella’s ideas.

In developing the Passive Frame theory of consciousness, the group focuses on olfaction as the sensory system to model. This seems surprising at first, but they have good reasons for the choice.

First, it is an old system from an evolutionary viewpoint. As in this quote from Shepherd: “the basic architecture of the neural basis of consciousness in mammals, including primates, should be sought in the olfactory system, with adaptations for the other sensory pathways reflecting their relative importance in the different species”.

Second, its connections are simple compared with vision and hearing. Olfactory signals go straight to the cortex rather than arriving via the thalamus, and they enter an old part of the cortex, the paleocortex, rather than the neocortex (which holds the primary processing areas for the other senses). The processing of smell is more or less confined to one area in the frontal region and does not extend to the extensive areas at the back of the brain where visual and auditory processing occurs. The sense of smell is much easier to track anatomically than the other ‘higher’ senses. To understand minimal consciousness, it is reasonable to use the least elaborate sense as a model.

Third, looking at what lesions interfere with olfactory consciousness, it seems that connections outside the cortex are not needed for awareness of odours. This implies that at a basic level consciousness does not require the thalamus or mid-brain areas (although consciousness of other senses does require those areas). Some links to the thalamus and other areas may be involved in further processing smell signals but not in being conscious of them.

Fourth, the addition of a smell into the contents of consciousness has a sort of purity. The sense is only there when it is there. We are aware of silence and of complete darkness but we are not aware of a lack of odour unless we question ourselves. If odours are at very low concentrations or if we have habituated to them because they are not changing in concentration, we are not conscious of those odours and also not conscious of their absence. “The experiential nothingness associated with olfaction yields no conscious contents of any kind to such an extent that, absent memory, one in such a circumstance would not know that one possessed an olfactory system.” So addition of a smell to the contents of consciousness is a distinct change in awareness and can of itself focus attention on it.
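This change-detecting character of olfaction can be caricatured in a few lines of code. The sketch below is purely illustrative and not Morsella's model; the adaptation rate and the max-with-zero rule are my own assumptions. The perceived intensity tracks increases in concentration against a drifting baseline, so a steady odour fades from awareness and its absence registers nothing at all.

```python
# Toy model of olfactory habituation (an illustration, not Morsella's model):
# the reported signal tracks *changes* in odour concentration, so a constant
# odour fades from awareness and its absence produces no signal whatsoever.

def habituating_response(concentrations, adapt_rate=0.5):
    """Return perceived intensity at each step: the difference between the
    current concentration and a slowly adapting baseline."""
    baseline = 0.0
    perceived = []
    for c in concentrations:
        perceived.append(max(0.0, c - baseline))   # only increases are noticed
        baseline += adapt_rate * (c - baseline)    # baseline drifts toward input
    return perceived

# A smell appears, stays constant, then disappears.
stimulus = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
print([round(p, 2) for p in habituating_response(stimulus)])
# the smell is vivid on arrival, fades, and its removal leaves no trace
```

Note that the offset produces no signal at all, matching the quoted "experiential nothingness": without memory, nothing in the output would tell you an odour had ever been there.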

Fifth, olfaction is not entangled with a number of other functions. There are no olfactory symbols being manipulated and the like. It is difficult to hold olfactory ‘images’ in working memory. Also, “olfactory experiences are less likely to occur in a self-generated, stochastic manner: Unlike with vision and audition, in which visually-rich daydreaming or ‘ear worms’ occur spontaneously during an experiment and can contaminate psychophysical measures, respectively, little if any self-generated olfactory experiences could contaminate measures.”

As well as these reasons given by Morsella in justifying the choice of olfaction in developing the Passive Frame theory, it occurs to me that there is a significant difference in memory. There is a type of recall prompted by smell that seems instantaneous, effortless and very detailed. For example, when you enter a house that you have not been in since childhood and the house has changed in so many ways over the years, the first breath gives a forgotten smell and a vivid sense of the original house along with many images from memories you know you could not normally recall. There seems to be some direct line between the memory of a ‘place’ and the faint odour of that place.

This olfactory approach to consciousness cuts away many of the elaborations and fancy details of consciousness and allows the basic essentials to be seen more clearly.

A tiny eye

Erythropsidinium

A single-celled organism called Erythropsidinium has been reported to have a tiny eye. This organism is not a simple bacterial cell but a eukaryote. It is single-celled but has the kind of cell that is found in multicelled organisms like us. It is not a bag of chemicals but is highly organized, with a nucleus and organelles. Among the organelles are a little eye and a little harpoon – ‘all the better to hunt with, my dear’. The eye (called an ocelloid) is like a camera, with a lens and pigment responders, while the harpoon is a piston that can very quickly elongate to 20 or so times its length and has a poison tip. The prey is transparent but has a nucleus that polarizes light, and it is this polarized light that the ocelloid detects. The result is that the harpoon is aimed in the direction of the prey before it is fired.

That sounds like a link between a sensory organelle and a motor organelle. As far as I can see, it is not known how the linking mechanism works, but in a single-celled organism the link has to be relatively simple (a mechanical or chemical molecular event, or a short chain of events). This is like a tiny nervous system but without the nerves. There is a sensor and an actor; in a nervous system there would be a web of interneurons connecting the two and allowing activity appropriate to the situation. Whatever the link is in Erythropsidinium, it does allow the harpoon to be steered in an effective direction. The cell can move both the ocelloid and the harpoon. Are they physically tied together? Or is there more information processing than just a ‘fire’ signal?

This raises an interesting question. Can we say that this organism is aware? If the ability to sense and to act is found coordinated within a single cell – can that cell be said to be aware of its actions and its environment? And if it is aware, is it conscious in some simple way? That would raise the question of whether complexity is a requirement for consciousness. These are semantic arguments, all about how words are defined and not about how the world works.

Humans can detect polarized light

Here is a most interesting development that fits with the idea that we are still ignorant of many details of the nervous system. Humans have the ability to perceive polarized light. We are not aware of this sense and do not use it, but it is there. Actually, it has been known for some time, though not widely.

The paper by S. Temple (citation below) describes research that started with polarized-light vision in sea animals, moved to its mechanism in humans, and finally appears to offer a method of diagnosing age-related macular degeneration (AMD) before it affects vision.

In the paper there are directions for seeing the polarization. I have to say I was skeptical and quite surprised when the faint yellow bowtie appeared and rocked back and forth as I tilted my head. When I stopped moving my head the bowtie disappeared. “We detect the orientation of polarized light using ‘Haidinger’s brushes’, an entoptic visual phenomenon described by Wilhelm Karl von Haidinger in 1844. He reported that when viewing a polarized light field, with no spatial variation in intensity or colour, it was possible for someone with normal sight to perceive a faint pattern of yellow and blue bowtie-like shapes that intersect at the viewer’s point of fixation. Haidinger’s brushes can be observed by looking at a region of blue sky approximately 90° from the sun, particularly around sunset or sunrise, or by looking at a region of white on a liquid crystal display (LCD). The effect vanishes within about 5 s, but can be maintained and/or increased in salience by rotating the eye around the primary visual axis relative to the light field, e.g. tilting one’s head side to side.” Entoptic means that the phenomenon has an origin within the eye rather than the outside world.

The bowties are created by two structures in the eye. The cornea has layers of collagen molecules arranged to create birefringence. This can be thought of as slow and fast orientations depending on the polarization angle of the light rays. This interacts with carotenoid pigments in the macula or fovea, which are also arranged in a particular way to form an interference filter (a dichroic filter). The center of the lens behind the cornea, the center of the macula and the object of visual attention are in a straight line; therefore the relative orientations of the collagen, the carotenoid pigments and the direction of the light are always the same.
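The geometry can be caricatured with Malus's law. The sketch below is a toy, not an optical model of the eye: it simply treats the macular pigment at each azimuth around fixation as a tiny polarizer. That is enough to produce a two-lobed, bowtie-like pattern that rotates with the polarization angle, as when tilting the head.

```python
import math

# Toy bowtie geometry (not a faithful optical model): treat the pigment at
# azimuth phi around fixation as a small polarizer whose absorption follows
# Malus's law, cos^2(phi - theta), for light polarized at angle theta.
# The result is a two-lobed pattern -- a bowtie -- that rotates with theta.

def brush_intensity(phi_deg, pol_angle_deg):
    d = math.radians(phi_deg - pol_angle_deg)
    return math.cos(d) ** 2

# Sample the pattern around fixation for vertical polarization (theta = 90).
samples = {phi: round(brush_intensity(phi, 90.0), 2) for phi in range(0, 180, 45)}
print(samples)   # maximal along the polarization axis, zero across it
```

Changing the polarization angle (or, equivalently, tilting the head relative to a fixed light field) just rotates the whole pattern, which is the rocking motion described above.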

By studying the Haidinger’s brushes in an individual it is possible to examine aspects of the structure of the macula and the cornea.

Here is the abstract:

“Like many animals, humans are sensitive to the polarization of light. We can detect the angle of polarization using an entoptic phenomenon called Haidinger’s brushes, which is mediated by dichroic carotenoids in the macula lutea. While previous studies have characterized the spectral sensitivity of Haidinger’s brushes, other aspects remain unexplored. We developed a novel methodology for presenting gratings in polarization-only contrast at varying degrees of polarization in order to measure the lower limits of human polarized light detection. Participants were, on average, able to perform the task down to a threshold of 56%, with some able to go as low as 23%. This makes humans the most sensitive vertebrate tested to date. Additionally, we quantified a nonlinear relationship between presented and perceived polarization angle when an observer is presented with a rotatable polarized light field. This result confirms a previous theoretical prediction of how uniaxial corneal birefringence impacts the perception of Haidinger’s brushes. The rotational dynamics of Haidinger’s brushes were then used to calculate corneal retardance. We suggest that psychophysical experiments, based upon the perception of polarized light, are amenable to the production of affordable technologies for self-assessment and longitudinal monitoring of visual dysfunctions such as age-related macular degeneration.”

Citation: Temple SE, McGregor JE, Miles C, Graham L, Miller J, Buck J, Scott-Samuel NE, Roberts NW. 2015 Perceiving polarization with the naked eye: characterization of human polarization sensitivity. Proc. R. Soc. B 282: 20150338. http://dx.doi.org/10.1098/rspb.2015.0338


Curious sponge behaviour

In a little tidy-up of old files, I ran across a paper on sneezing sponges. This is not an April Fool’s joke – today is the 2nd of April. When we sneeze, we fill our lungs and hold the air in while increasing the pressure in them. When we open up and let the air out, it rushes out, moving particles, mucus and irritants as it goes. Sponges take in water all over their surface, but the water exits through one hole. To rid themselves of particles and irritants, they close that single hole while continuing to take in water. When the exit is opened, the water inside comes out with some force.

Sponges are such primitive animals that they have no muscles and no nervous system. Until recently it was believed that they also lacked sense organs. But they do have a sense organ, and that is how they are able to organize a sneeze. The exit hole (osculum) is lined with cells that have little hairs (cilia) protruding into the water stream. These cilia can sense grit and changes in flow. The cilia are of a type called ‘primary’; they cannot move but can sense being moved. Primary cilia are found in a number of sensory organs in other animals, for example in our ears and in the lateral line of fishes. The general molecular structure of primary cilia is more or less conserved across multicellular animals.

Just as the molecules needed for neurons and synapses are seen in organisms that have no nervous systems, this is another component of our nervous system that has a very ancient lineage.

Here is the abstract of the paper ( D. Ludeman, N. Farrar, A. Riesgo, J. Paps, S. Leys; Evolutionary origins of sensation in metazoans: functional evidence for a new sensory organ in sponges. BMC Evolutionary Biology, 2014; 14 (1): 3 ):

“Background: One of the hallmarks of multicellular organisms is the ability of their cells to trigger responses to the environment in a coordinated manner. In recent years primary cilia have been shown to be present as ‘antennae’ on almost all animal cells, and are involved in cell-to-cell signaling in development and tissue homeostasis; how this sophisticated sensory system arose has been little-studied and its evolution is key to understanding how sensation arose in the Animal Kingdom. Sponges (Porifera), one of the earliest evolving phyla, lack conventional muscles and nerves and yet sense and respond to changes in their fluid environment. Here we demonstrate the presence of non-motile cilia in sponges and studied their role as flow sensors.

Results: Demosponges excrete wastes from their body with a stereotypic series of whole-body contractions using a structure called the osculum to regulate the water-flow through the body. In this study we show that short cilia line the inner epithelium of the sponge osculum. Ultrastructure of the cilia shows an absence of a central pair of microtubules and high speed imaging shows they are non-motile, suggesting they are not involved in generating flow. In other animals non-motile, ‘primary’, cilia are involved in sensation. Here we show that molecules known to block cationic ion channels in primary cilia and which inhibit sensory function in other organisms reduce or eliminate sponge contractions. Removal of the cilia using chloral hydrate, or removal of the whole osculum, also stops the contractions; in all instances the effect is reversible, suggesting that the cilia are involved in sensation. An analysis of sponge transcriptomes shows the presence of several transient receptor potential (TRP) channels including PKD channels known to be involved in sensing changes in flow in other animals. Together these data suggest that cilia in sponge oscula are involved in flow sensation and coordination of simple behaviour.

Conclusions: This is the first evidence of arrays of non-motile cilia in sponge oscula. Our findings provide support for the hypothesis that the cilia are sensory, and if true, the osculum may be considered a sensory organ that is used to coordinate whole animal responses in sponges. Arrays of primary cilia like these could represent the first step in the evolution of sensory and coordination systems in metazoans. ”

Connectivity is not one idea

Sebastian Seung sold the idea that “we are our connectome”. What does that mean? Connectivity is a problem to me. Of course, the brain works only because there are connections between cells and between larger parts of the brain. But how can we measure and map it? Apparently there are measurement problems.

When some research says that A is connected to B it can mean a number of things. A could be a sizable area of the brain that has a largish nerve tract to B. This means that some neurons in A have axons that extend all the way to B, and some neurons in B have synapses with each of those axons. We could be talking about smaller and smaller groups of neurons until we have a pair of connected neurons. This is anatomy – it does not tell us when and how the connections are active or what they accomplish, just that a possible path is visible.

On the other hand, A and B may share information. A and B are active at the same time in some circumstance. They are receiving the same information, either one from the other or both from some other source. Quite often this means they are synchronized in their activity; it is locked together in a rhythm. Or they may react differently but always to the same type of information. Or one may feed the other information (directly or indirectly). A and B need only be connected when they are involved in the function that gives them shared information. Here we see the informational connection but not necessarily the path.
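A minimal sketch of this informational sense of connectivity, with invented signals: two regions count as ‘connected’ if their activity time courses correlate, regardless of any anatomical path between them.

```python
# 'Informational' connectivity as temporal correlation (invented signals):
# regions whose activity rises and falls together score high, whatever the
# anatomical route between them may or may not be.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

region_a = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9]
region_b = [0.2, 1.0, 0.3, 0.7, 0.2, 1.1]   # driven by the same rhythm as A
region_c = [0.5, 0.4, 0.6, 0.5, 0.4, 0.6]   # unrelated activity

print(round(pearson(region_a, region_b), 2))  # high: 'connected' by this measure
print(round(pearson(region_a, region_c), 2))  # low: no shared information
```

Note how this measure says nothing at all about whether a fibre runs from A to B; it sees only the shared information.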

A and B may be connected by a known causal link. A makes B active. Whenever A is active it causes B to be active too. This causal link gives no automatic information about path or even, at times, what information may be shared.

On a very small scale cells that are close together can be connected by contacts with glial cells, local voltage potentials and chemical gradients. Here the connections are even more difficult to map.

And finally overall there are control mechanisms that switch on and off various connection routes.

The whole brain is somewhat plastic and so can change its connectivity structure over time to better serve the needs of the individual. When it comes down to it, the connectivity that makes us each unique, the results of learning and memory, is the most plastic. It is changing all the time and can be very hard to map.

Saying “connectome” without any detailed specification is next to meaningless and “we are our connectome” is certainly true but somewhat vacuous.

A recent paper (citation below) took four common ways of measuring connectivity and compared them pair-wise. None of the pairs had a high level of agreement and some pairs had hardly any. There may be many reasons for this, but a big one has to be that the various methods were not measuring the same thing. In general, authors say what they are measuring, by what method and why. These nuances occasionally do not make it to the abstract or conclusion, often never make it to the press release, and nearly never to news articles.

Here is the abstract and a diagram from the Jones paper.

“Measures of brain connectivity are currently subject to intense scientific and clinical interest. Multiple measures are available, each with advantages and disadvantages. Here, we study epilepsy patients with intracranial electrodes, and compare four different measures of connectivity. Perhaps the most direct measure derives from intracranial electrodes; however, this is invasive and spatial coverage is incomplete. These electrodes can be actively stimulated to trigger electrophysical responses to provide the first measure of connectivity. A second measure is the recent development of simultaneous BOLD fMRI and intracranial electrode stimulation. The resulting BOLD maps form a measure of effective connectivity. A third measure uses low frequency BOLD fluctuations measured by MRI, with functional connectivity defined as the temporal correlation coefficient between their BOLD waveforms. A fourth measure is structural, derived from diffusion MRI, with connectivity defined as an integrated diffusivity measure along a connecting pathway. This method addresses the difficult requirement to measure connectivity between any two points in the brain, reflecting the relatively arbitrary location of the surgical placement of intracranial electrodes. Using a group of eight epilepsy patients with intracranial electrodes, the connectivity from one method is compared to another method using all paired data points that are in common, yielding an overall correlation coefficient. This method is performed for all six paired-comparisons between the four methods. While these show statistically significant correlations, the magnitudes of the correlation are relatively modest (r2 between 0.20 and 0.001). In summary, there are many pairs of points in the brain that correlate well using one measure yet correlate poorly using another measure. These experimental findings present a complicated picture regarding the measure or meaning of brain connectivity.”

Jones, S., Beall, E., Najm, I., Sakaie, K., Phillips, M., Zhang, M., & Gonzalez-Martinez, J. (2014). Low Consistency of Four Brain Connectivity Measures Derived from Intracranial Electrode Measurements. Frontiers in Neurology, 5. DOI: 10.3389/fneur.2014.00272


A mapped tiny bit of brain

Kristin Harris has spent years mapping all the cells, and the connections between them, in a very small volume of brain. She introduces the work and shows how the volume is mapped and put together in a video (here). It is very interesting to see just how complex and crowded the brain is. We have been spoiled by microscope slides with individual neurons picked out amongst the multitude, as in a Golgi-stained preparation.

More about neurons

I want to make a point here that we know less about the brain than is generally acknowledged. Our picture of the functioning of a neuron is taken as more or less settled knowledge; only small refinements are likely. But the refinements that are regularly published are not small. Now we have a paper (citation below) that is extraordinary.

Bywalez and others have shown that the little spines on the dendritic trees of neurons can themselves act as miniature neurons, accomplishing computations similar to a full neuron (at least in the olfactory bulb and probably other parts of the brain too), and that some synapses can be two-sided, transmitting signals in both directions. This allows dendrite-to-dendrite communication. In effect, the neck of the spine can isolate the spine from the rest of the neuron, allowing it to reach an action-potential level of voltage locally without interference from the rest of the dendritic tree, and so it is able to send a signal backwards out of the spine.

classic neuron

We are used to thinking of neurons as, in effect, huge add-gates that take a multitude of synapses giving inputs of various strengths; those inputs are combined in the dendrites into a voltage level in the main cell body. If that voltage is above a threshold, an action potential, a signal, is propagated down the neuron’s axon to the dendrites of other, usually distant, neurons. There it influences how those other neurons act by contributing a positive or negative voltage to the receiving dendrites’ totals. It is fairly easy to imagine how this works and to mimic it with electronic circuits.
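That add-gate picture takes only a few lines to mimic. This is the textbook caricature, not a biophysical model; the weights and threshold are invented for illustration.

```python
# The classic 'add-gate' neuron: weighted synaptic inputs are summed, and
# the cell fires only if the total crosses a threshold.  A caricature, not
# a biophysical model.

def classic_neuron(inputs, weights, threshold=1.0):
    """Excitatory weights are positive, inhibitory weights negative."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0   # 1 = action potential

# Three active synapses: two excitatory, one inhibitory.
print(classic_neuron([1, 1, 1], [0.8, 0.6, -0.3]))  # 0.8 + 0.6 - 0.3 = 1.1 -> fires
print(classic_neuron([1, 1, 1], [0.8, 0.6, -0.7]))  # 0.7 -> stays silent
```

Every exception listed below (spine-local action potentials, two-sided synapses, axon-to-axon contacts, backward signals) is something this little function simply cannot express.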

But neuroscience keeps finding exceptions to this theory. There are glial cells assisting and interfering with the process and they can communicate with each other by a different mechanism. There are signals that bypass the whole dendrite calculation and input their signal at the cell body root of the axon, thereby over-riding other inputs. There are axon to axon synapses. Neurons can multitask by calculating and then sending two separate message codes to two separate groups of receiving neurons. Signals can go backwards up the axon. Some neurons can learn timing delays in their signaling. And now this: action potentials can be generated in the little spines of the dendrites and some synapses are not one way transmitters with pre and post halves, but can work both ways. The standard model is getting tattered with exceptions. No doubt there are many more exceptions to come. I venture that we are nowhere near understanding neurons and neuron network behavior.

Bywalez, W., Patirniche, D., Rupprecht, V., Stemmler, M., Herz, A., Pálfi, D., Rózsa, B., & Egger, V. (2015). Local Postsynaptic Voltage-Gated Sodium Channel Activation in Dendritic Spines of Olfactory Bulb Granule Cells. Neuron. DOI: 10.1016/j.neuron.2014.12.051


Some visual-form areas are really task areas

There are two paths for visual information: one to the motor areas (the dorsal ‘where’ stream) and one to the areas concerned with consciousness, memory and cognition (the ventral ‘what’ stream). The ventral visual stream has areas for the recognition of various categories of object: faces, body parts and letters, for example. But are these areas really ‘visual’ areas, or can they deal with input from other senses? There is recent research into an area concerned with numerals (see citation below). There are some reasons to doubt ‘vision only’ processing in these areas. “…cortical preference in the ‘visual’ cortex might not be exclusively visual and in fact might develop independently of visual experience. Specifically, an area showing preference for reading, at the precise location of the VWFA (visual word-form area), was shown to be active in congenitally blind subjects during Braille reading. Large-scale segregation of the ventral stream into animate and inanimate semantic categories has also been shown to be independent of visual experience. More generally, an overlap in the neural correlates of equivalent tasks has been repeatedly shown between the blind and sighted using different sensory modalities.” Is an area specialized in one domain because of cultural learning through visual experience, or is the specialization the result of the specific connectivity of the area?

Abboud and others used congenitally blind subjects to see if the numeral area could process numerals arriving as auditory signals. Congenitally blind subjects cannot have categorical areas based on visual learning. The letter area and the numeral area are separate even though letter symbols and numeral symbols are very similar – in fact they can be identical. The researchers predicted that the word area had connections to language areas and the numeral area to quantity-processing areas.

eye-music application

The subjects were trained in eye-music, a sight substitute based on time, pitch, timbre and volume. While being scanned, the subjects heard the same musical description of an object and were asked to identify the object as part of a word, part of a number, or a colour. Roman numerals were used to give a large number of identical musical descriptions of numbers and letters. What they found was that the numeric task gave activation in the same area as it does in a sighted person and that blind and sighted subjects had the same connections, word area to language network and numeral area to quantity network. It is the connectivity patterns, independent of visual experience, that create the visual numeral-form area. “…neither the sensory-input modality and visual experience, nor the physical sensory stimulation itself, play a critical role in the specialization observed in this area. ” It is which network is active (language or quantity) that is critical.

…these results are in agreement with the theory of cultural recycling, which suggests that the acquisition of novel cultural inventions is only feasible inasmuch as it capitalizes on prior anatomical and connectional constraints and invades pre- existing brain networks capable of performing a function sufficiently similar to what is needed by the novel invention. In addition, other factors such as the specifics of how literacy and numeracy are learned, as well as the distinctive functions of numerals and letters in our education and culture, could also account for the segregation of their preferences.

Here is the abstract: “Distinct preference for visual number symbols was recently discovered in the human right inferior temporal gyrus (rITG). It remains unclear how this preference emerges, what is the contribution of shape biases to its formation and whether visual processing underlies it. Here we use congenital blindness as a model for brain development without visual experience. During fMRI, we present blind subjects with shapes encoded using a novel visual-to-music sensory-substitution device (The EyeMusic). Greater activation is observed in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols. Using resting-state fMRI in the blind and sighted, we further show that the areas with preference for numerals and letters exhibit distinct patterns of functional connectivity with quantity and language-processing areas, respectively. Our findings suggest that specificity in the ventral ‘visual’ stream can emerge independently of sensory modality and visual experience, under the influence of distinct connectivity patterns. ”

Abboud, S., Maidenbaum, S., Dehaene, S., & Amedi, A. (2015). A number-form area in the blind. Nature Communications, 6. DOI: 10.1038/ncomms7026


Another sensory channel

There is another recent discovery to highlight how little we know about our nervous system. Theories are accepted because we believe we have a handle on the anatomy, physiology, biochemistry and biophysics of the nervous systems. But the ‘facts’ change regularly. This time it is connections between the gut and the brain – a direct sensory path and a door through which viruses can pass from gut to brain.

The paper (see citation below) deals with a type of cell in the gut lining that has been thought to communicate with the brain via hormone secretion. The researchers have shown that the cells are in physical contact with neurons and communicate directly. Further, they show that viruses can pass through that physical contact and enter a neuron. This research adds another layer of communication between gut and brain.

The cells, called enteroendocrine cells, are sensory cells reacting to chemicals in the gut. It was thought that the cells produced hormones that traveled through the bloodstream to sensory neurons. This may still be true but it is now known that the enteroendocrine cells grow long processes, neuropods, that reach nerves and form synaptic-like contact with the nerves. They are accompanied by enteric glia cells and respond to neurotrophins.

This communication is important. “Satiety, food preference, and even mood behaviors are a few of the functions modulated by gut chemosensation. Ingested nutrients and bacterial by-products contacting the gut epithelium stimulate enteroendocrine cells.”

Here is the abstract: “Satiety and other core physiological functions are modulated by sensory signals arising from the surface of the gut. Luminal nutrients and bacteria stimulate epithelial biosensors called enteroendocrine cells. Despite being electrically excitable, enteroendocrine cells are generally thought to communicate indirectly with nerves through hormone secretion and not through direct cell-nerve contact. However, we recently uncovered in intestinal enteroendocrine cells a cytoplasmic process that we named neuropod. Here, we determined that neuropods provide a direct connection between enteroendocrine cells and neurons innervating the small intestine and colon. Using cell-specific transgenic mice to study neural circuits, we found that enteroendocrine cells have the necessary elements for neurotransmission, including expression of genes that encode pre-, post-, and transsynaptic proteins. This neuroepithelial circuit was reconstituted in vitro by coculturing single enteroendocrine cells with sensory neurons. We used a monosynaptic rabies virus to define the circuit’s functional connectivity in vivo and determined that delivery of this neurotropic virus into the colon lumen resulted in the infection of mucosal nerves through enteroendocrine cells. This neuroepithelial circuit can serve as both a sensory conduit for food and gut microbes to interact with the nervous system and a portal for viruses to enter the enteric and central nervous systems.”

Bohórquez, D., Shahid, R., Erdmann, A., Kreger, A., Wang, Y., Calakos, N., Wang, F., & Liddle, R. (2015). Neuroepithelial circuit formed by innervation of sensory enteroendocrine cells. Journal of Clinical Investigation. DOI: 10.1172/JCI78361


Echo-location in humans

We can echo-locate, but it seems it can only be mastered well by the blind. This is because, to be done well, echolocation uses parts of the visual cortex. A few years ago Thaler et al published the details (see citation below). Here is their description of this natural ability.

“The enormous potential of this ‘natural’ echolocation ability is realized in a segment of the blind population that has learned to sense silent objects in the environment simply by generating clicks with their tongues and mouths and then listening to the returning echoes. The echolocation click produced by such individuals tends to be short (approximately 10 ms) and spectrally broad. Clicks can be produced in various ways, but it has been suggested that the palatal click, produced by quickly moving the tongue backwards and downwards from the palatal region directly behind the teeth, is best for natural human echolocation. For the skilled echolocator, the returning echoes can potentially provide a great deal of information regarding the position, distance, size, shape and texture of objects.”

Their blind echo-locating subjects (both early-blind and late-blind) used visual areas of the cortex to process echo information. When the subjects listened to recordings of clicks with and without the resulting echoes, the calcarine cortex was active for the recordings containing echoes but not for the echo-free ones, while the auditory cortex showed no difference in activity between the two. Other visual areas were also active when subjects listened to echoes reflected from moving objects. The authors conclude that blind echolocation experts use brain regions typically devoted to vision, rather than auditory areas, to process the echoes into a perception of objects in space.

The calcarine cortex has other names: primary visual cortex, striate cortex, visual area V1. It is the first cortical area to receive information from the eye (via the thalamus), and it contains a point-to-point map of the retina. V1 is known for several visual functions: identifying simple forms such as lines with particular orientations and lengths, aiming the eyes (saccades) towards interesting cues in peripheral vision, and participating in forming images even with the eyes closed. This is the sort of processing area that can be taken over to turn echoes into a spatial image when vision is not using it.

It is likely that our senses are all (to some extent) building 3D models of our surroundings, and that they all contribute to our working model of the world. In particular, what we see, what we hear and what we feel all seem to be part of one reality, not three. This must mean that the models from each sense are fitted together somewhere, or that the models of each sense feed information into each other, or, of course, both. In the end, though, the visual model seems, in our case, to be the most influential part of our working model.

The mechanisms for finding discontinuities in light, and determining their linear orientation and length, would not be much different from those for finding discontinuities in echoes and determining their orientation and length. Fitting this sort of information into a perceptual model would use the same mechanisms that handle visual lines and objects in sighted people. But is there evidence of this coordination of perception?
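Before turning to the evidence, the analogy can be made concrete with a toy sketch: the same discontinuity detector applied once to a luminance profile and once to an echo-strength profile. The signals and threshold here are invented for illustration; the point is only that the operation is indifferent to which sense supplied the signal.

```python
import numpy as np

def edge_positions(signal, threshold=0.5):
    """Find discontinuities as points where the first difference
    exceeds a threshold. The operation is identical whether the
    signal encodes luminance or echo strength."""
    d = np.abs(np.diff(np.asarray(signal, dtype=float)))
    return np.flatnonzero(d > threshold)

# A step in brightness across a row of pixels...
luminance = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]
# ...and a step in echo intensity across listening directions.
echo_strength = [0.0, 0.0, 0.8, 0.8, 0.8, 0.8]

print(edge_positions(luminance))      # edge between samples 2 and 3
print(edge_positions(echo_strength))  # edge between samples 1 and 2
```

A detector like this does not care about the physical origin of the step it finds, which is the intuition behind echo information slotting into visual machinery.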

Buckingham et al. (see citation below) have looked at this and found “that echolocation is not just a functional tool to help visually-impaired individuals navigate their environment, but actually has the potential to be a complete sensory replacement for vision.” There is an illusion in which the size of an object affects its perceived weight: with boxes weighing exactly the same but differing in size, the smaller boxes feel heavier than the larger ones. This illusion was used to show that the size information, which is usually visual, can be supplied instead by echolocation without disrupting the illusion.

Here is their abstract: “Certain blind individuals have learned to interpret the echoes of self-generated sounds to perceive the structure of objects in their environment. The current work examined how far the influence of this unique form of sensory substitution extends by testing whether echolocation-induced representations of object size could influence weight perception. A small group of echolocation experts made tongue clicks or finger snaps toward cubes of varying sizes and weights before lifting them. These echolocators experienced a robust size-weight illusion. This experiment provides the first demonstration of a sensory substitution technique whereby the substituted sense influences the conscious.”

Why don’t sighted people echo-locate? I do not believe it has been shown that we don’t. If we do, it is not rendered consciously or used in preference to visual data. But there is no reason to assume it is not there in the background, helping to form a perceptual working model. For example, if an echo-based edge coincided with an optical edge in the V1 area, it could give additional information about the nature of the edge.

I also think it may be that, in order to simplify auditory perception, our brains suppress the low-level echoes of any sound we make. We would be aware of the sound we made but much less aware of its echoes. The auditory cortex would then be unable to echo-locate, and the visual cortex would be busy with vision (and perhaps some echoes and other sounds) producing a visual model. In this case, we would not consciously hear our echoes, and we would not directly consciously ‘see’ them either, although we might be processing them as additions to visual input.

Thaler, L., Arnott, S., & Goodale, M. (2011). Neural Correlates of Natural Human Echolocation in Early and Late Blind Echolocation Experts. PLoS ONE, 6 (5). DOI: 10.1371/journal.pone.0020162

Buckingham, G., Milne, J., Byrne, C., & Goodale, M. (2014). The Size-Weight Illusion Induced Through Human Echolocation. Psychological Science. DOI: 10.1177/0956797614561267
