Tag Archives: vision

What meets where

A paper looking at a newly re-found nerve bundle, the vertical occipital fasciculus, connects it with “The Man Who Mistook His Wife for a Hat”. Why would someone reach for an object in a place other than where it actually was? ScienceDaily has a report (here), “Scientists chart a lost highway in the brain”.

The ‘what’ and ‘where’ visual pathways have been studied for some time, but there appeared to be little connecting them once they had passed out of the early visual areas. A nerve tract that was known over a hundred years ago, but then lost from the anatomy texts until recently, connects the ‘what’ and ‘where’ maps. “Our new study shows that the VOF may provide the fundamental white matter connection between two parts of the visual system: that which identifies objects, words and faces and that which orients us in space. … The structure forms a ‘highway’ between the lower, ventral part of the visual system, which processes the properties of faces, words and objects, and the upper, dorsal parietal regions, which orients attention to an object’s spatial location.”

This long flat nerve tract was described long ago and then mysteriously disappeared from view. Why? “The answer may be scientific rivalry. In their earlier paper, Pestilli and collaborators attributed the VOF’s disappearance to competing beliefs among 19th-century neuroanatomists. In contrast to Wernicke, Theodor Meynert, another prominent scientist in Germany, never accepted the new structure due to his belief that all white matter pathways ran horizontally. Over time, the VOF faded into obscurity.”

Are other structures being overlooked because they are up and down rather than side to side or back to front, or in some other way are counter to orthodoxy?

Here is the abstract (H. Takemura, A. Rokem, J. Winawer, J. D. Yeatman, B. A. Wandell, F. Pestilli. A Major Human White Matter Pathway Between Dorsal and Ventral Visual Cortex. Cerebral Cortex, 2015; DOI: 10.1093/cercor/bhv064): “Human visual cortex comprises many visual field maps organized into clusters. A standard organization separates visual maps into 2 distinct clusters within ventral and dorsal cortex. We combined fMRI, diffusion MRI, and fiber tractography to identify a major white matter pathway, the vertical occipital fasciculus (VOF), connecting maps within the dorsal and ventral visual cortex. We use a model-based method to assess the statistical evidence supporting several aspects of the VOF wiring pattern. There is strong evidence supporting the hypothesis that dorsal and ventral visual maps communicate through the VOF. The cortical projection zones of the VOF suggest that human ventral (hV4/VO-1) and dorsal (V3A/B) maps exchange substantial information. The VOF appears to be crucial for transmitting signals between regions that encode object properties including form, identity, and color and regions that map spatial information.”


Humans can detect polarized light

Here is a most interesting development that fits with the idea that we are still ignorant of many details of the nervous system. Humans have the ability to perceive polarized light. We are not aware of this sense and do not use it, but it is there. Actually, it has been known for some time, though not generally.

The paper by S. Temple (citation below) describes research that started with polarization vision in sea animals, moved on to its mechanism in humans, and finally appears to offer a method of diagnosing age-related macular degeneration (AMD) before it affects vision.

In the paper there are directions for seeing the polarization. I have to say I was skeptical and quite surprised when the faint yellow bowtie appeared and rocked back and forth as I tilted my head. When I stopped moving my head the bowtie disappeared. “We detect the orientation of polarized light using ‘Haidinger’s brushes’, an entoptic visual phenomenon described by Wilhelm Karl von Haidinger in 1844. He reported that when viewing a polarized light field, with no spatial variation in intensity or colour, it was possible for someone with normal sight to perceive a faint pattern of yellow and blue bowtie-like shapes that intersect at the viewer’s point of fixation. Haidinger’s brushes can be observed by looking at a region of blue sky approximately 90° from the sun, particularly around sunset or sunrise, or by looking at a region of white on a liquid crystal display (LCD). The effect vanishes within about 5 s, but can be maintained and/or increased in salience by rotating the eye around the primary visual axis relative to the light field, e.g. tilting one’s head side to side.” Entoptic means that the phenomenon has an origin within the eye rather than the outside world.

The bowties are created by two structures in the eye. The cornea has layers of collagen molecules arranged to create birefringence, which can be thought of as slow and fast axes whose effect depends on the polarization angle of the incoming light. This interacts with carotenoid pigments in the macula (fovea), which are also arranged in a particular way to form an interference filter (a dichroic filter). The center of the lens behind the cornea, the center of the macula and the object of visual attention lie in a straight line, so the relative orientation of the collagen, the carotenoid pigments and the direction of the light is always the same.
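As a toy illustration (not from the paper), the bowtie pattern can be modeled by treating the radially arranged macular pigments as dichroic absorbers with a Malus-style cos² dependence on the angle between each pigment’s axis and the light’s polarization; the peak absorbance of 0.5 below is an assumed value, chosen only to make the pattern visible.

```python
import numpy as np

def brush_intensity(pigment_axis_deg, polarization_deg):
    """Toy model of Haidinger's brushes: macular carotenoid pigments are
    arranged radially, so absorption of polarized light varies with the
    angle between each pigment's axis and the light's polarization.
    A cos^2 (Malus-like) dependence produces the dark/bright bowtie.
    Angles in degrees; returns relative transmitted intensity in [0.5, 1]."""
    delta = np.deg2rad(np.asarray(pigment_axis_deg) - polarization_deg)
    # Maximal absorption when the pigment axis is parallel to the
    # polarization, none when perpendicular; 0.5 = assumed peak absorbance.
    return 1.0 - 0.5 * np.cos(delta) ** 2

# Pigment axes are radial around the fovea, so the local axis angle equals
# the angular position around the fixation point: darkest along the
# polarization direction, brightest at 90 degrees to it.
angles = np.linspace(0, 180, 7)  # sample orientations, degrees
print(brush_intensity(angles, polarization_deg=0))
```

Tilting the head rotates the pigment axes relative to the light field, which shifts the pattern and refreshes it before retinal adaptation can make it fade.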

By studying the Haidinger’s brushes in an individual it is possible to examine aspects of the structure of the macula and the cornea.

Here is the abstract:

“Like many animals, humans are sensitive to the polarization of light. We can detect the angle of polarization using an entoptic phenomenon called Haidinger’s brushes, which is mediated by dichroic carotenoids in the macula lutea. While previous studies have characterized the spectral sensitivity of Haidinger’s brushes, other aspects remain unexplored. We developed a novel methodology for presenting gratings in polarization-only contrast at varying degrees of polarization in order to measure the lower limits of human polarized light detection. Participants were, on average, able to perform the task down to a threshold of 56%, with some able to go as low as 23%. This makes humans the most sensitive vertebrate tested to date. Additionally, we quantified a nonlinear relationship between presented and perceived polarization angle when an observer is presented with a rotatable polarized light field. This result confirms a previous theoretical prediction of how uniaxial corneal birefringence impacts the perception of Haidinger’s brushes. The rotational dynamics of Haidinger’s brushes were then used to calculate corneal retardance. We suggest that psychophysical experiments, based upon the perception of polarized light, are amenable to the production of affordable technologies for self-assessment and longitudinal monitoring of visual dysfunctions such as age-related macular degeneration.”

Citation: Temple SE, McGregor JE, Miles C, Graham L, Miller J, Buck J, Scott-Samuel NE, Roberts NW. 2015 Perceiving polarization with the naked eye: characterization of human polarization sensitivity. Proc. R. Soc. B 282: 20150338. http://dx.doi.org/10.1098/rspb.2015.0338


Remembering visual images

There is an interesting recent paper (see citation) on visual memory. The researchers’ intent is to map the areas, and the causal directions between them, for a particular process in healthy individuals, so that sufferers showing loss of that process can be studied in the same way and the faulty areas/connections identified. In this study they were looking at the encoding of vision for memory.

Forty healthy subjects were examined. “… participants were presented with stimuli that represented a balanced mixture of indoor (50%) and outdoor (50%) scenes that included both images of inanimate objects as well as pictures of people and faces with neutral expressions. Attention to the task was monitored by asking participants to indicate whether the scene was indoor or outdoor using a button box held in the right hand. Participants were also instructed to memorize all scenes for later memory testing. During the control condition, participants viewed pairs of scrambled images and were asked to indicate using the same button box whether both images in each pair were the same or not (50% of pairs contained the same images). Use of the control condition allowed for subtraction of visuo-perceptual, decision-making, and motor aspects of the task, with a goal of improved isolation of the memory encoding aspect of the active condition.” All the subjects performed well on both tasks and on later recognition of the scenes they were asked to remember. “Thirty-two ICA components were identified. Of these, 10 were determined to be task-related (i.e., not representing noise or components related to the control condition) and were included in further analyses and model generation. Each retained component was attributed to a particular network based on previously published data.” Granger causality analysis was then carried out on each pair of the 10 components.
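The core idea of Granger causality can be sketched in a few lines of Python (a minimal illustration with made-up data, not the GCA pipeline the authors used): a series y “Granger-causes” a series x if y’s past improves prediction of x beyond what x’s own past provides.

```python
import numpy as np

def granger_ratio(x, y, lag=2):
    """Minimal pairwise Granger-causality measure: log ratio of the
    residual sum of squares of a restricted model (x predicted from its
    own past) to a full model (x predicted from its own past plus y's
    past). Values well above 0 suggest y Granger-causes x."""
    n = len(x)
    # Lagged predictors: column k holds the series shifted by k+1 steps.
    own = np.column_stack([x[lag - k - 1:n - k - 1] for k in range(lag)])
    full = np.column_stack([own] + [y[lag - k - 1:n - k - 1] for k in range(lag)])
    target = x[lag:]
    def rss(preds):
        design = np.column_stack([np.ones(len(preds)), preds])  # intercept
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid
    return float(np.log(rss(own) / rss(full)))

# Simulated pair where y drives x (coupling 0.8) but not vice versa.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.4 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

print(granger_ratio(x, y))  # large: y's past predicts x
print(granger_ratio(y, x))  # near zero: x's past adds little about y
```

Running this over every ordered pair of the 10 components, as the authors did with their GCA, yields a directed graph of information flow.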

Here is the resulting picture: [figure “visual plan” omitted]

The authors give a description of the many functions that have been attributed to their 10 areas (independent components), which makes interesting reading. It is not very significant, though, because the areas are on the large side, and because it is reasonable to argue from a specific function to an active area but not from an active area to a specific function. The information does have a bearing on some theories and models. The fact that this work does not itself produce a model does not make it less useful in studying abnormal visual memory encoding.

The involvement of the ‘what’ visual stream rather than the stream used for motor actions is expected, as is the involvement of working memory. Attention clearly plays a major role in the process. The involvement of language/concepts is interesting. “Episodic memory is defined as the ability to consciously recall dated information and spatiotemporal relations from previous experiences, while semantic memory consists of stored information about features and attributes that define concepts. The visual encoding of a scene in order to remember and recognize it later (i.e., visual memory encoding) engages both episodic and semantic memory, and an efficient retrieval system is needed for later recall.” The data is likely to be useful in evaluating theoretical ideas; the authors mention support for the hemispheric encoding/retrieval asymmetry model.

The abstract:

“Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19–59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA) with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33 ± 5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by a specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.”

Nenert, R., Allendorfer, J., & Szaflarski, J. (2014). A Model for Visual Memory Encoding PLoS ONE, 9 (10) DOI: 10.1371/journal.pone.0107761


Accuracy in both time and space

There has long been a problem with studying the human brain. fMRI can show where activity is happening, but with poor temporal resolution. MEG and EEG, on the other hand, track activity with good temporal resolution but poor spatial resolution. Only the placement of electrodes in epileptic patients has given clear spatial and temporal resolution together. However, these opportunities are not common, and the placement of the electrodes is dictated by the treatment rather than by any particular study. This has meant that much of what we know about the brain was gained from studies on animals, especially monkeys. The results in animals have been consistent with what can be seen in humans, but there is rarely detailed specific confirmation. This may be about to change.

Researchers at MIT are using fMRI, with a spatial resolution of about a millimeter, and MEG, with a temporal resolution of a millisecond, and combining them with a method called representational similarity analysis. They had subjects look at 92 images of various things for half a second each. Subjects viewed the same series of images multiple times while being scanned with fMRI and multiple times with MEG. The researchers then found the similarities between each image’s fMRI and MEG records for each subject. This allowed them to match the two scans and see events resolved in both space and time.
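The representational-similarity idea can be sketched as follows, using invented data rather than anything from the study: for each modality, build a dissimilarity matrix over the images, then correlate the matrices. Because a dissimilarity matrix is abstracted away from voxels or sensors, the two modalities become directly comparable.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between response patterns for every pair of conditions. `patterns`
    is (n_conditions, n_features), e.g. images x voxels for fMRI, or
    images x sensors for one MEG time point."""
    return 1.0 - np.corrcoef(patterns)

def rsa_similarity(rdm_a, rdm_b):
    """Correlate the lower triangles of two RDMs -- the core move that
    links fMRI (space) to MEG (time)."""
    i, j = np.tril_indices_from(rdm_a, k=-1)
    return float(np.corrcoef(rdm_a[i, j], rdm_b[i, j])[0, 1])

# Hypothetical data: 92 images with a shared 5-dimensional stimulus
# structure, measured through 200 "voxels" (fMRI) and 60 "sensors" (MEG).
rng = np.random.default_rng(1)
shared = rng.standard_normal((92, 5))
fmri = shared @ rng.standard_normal((5, 200)) + 0.5 * rng.standard_normal((92, 200))
meg = shared @ rng.standard_normal((5, 60)) + 0.5 * rng.standard_normal((92, 60))

print(rsa_similarity(rdm(fmri), rdm(meg)))  # high: same representational geometry
```

Computing this similarity between the fMRI RDM of each brain region and the MEG RDM at each millisecond is what lets the method assign a time course to each location.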

“We wanted to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s incredibly fast. This is a very complex process, and we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.” This flow was extremely close to the flow found in monkeys.

It appears to take 50 milliseconds after exposure to an image for the visual information to reach the first area of the visual cortex (V1); during this time the information has passed through processing in the retina and the thalamus. The information is then processed in stages within the visual cortex and reaches the inferior temporal cortex at about 120 milliseconds. Here objects are identified and classified, all done by 160 milliseconds.

Here is the abstract:

“A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively late. Using representational similarity analysis, we combined human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing with sources in V1 and IT. Finally, we correlated human MEG signals to single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision.”


http://www.kurzweilai.net/where-and-when-the-brain-recognizes-categorizes-an-object – review of paper: Radoslaw Martin Cichy, Dimitrios Pantazis, Aude Oliva, Resolving human object recognition in space and time, Nature Neuroscience, 2014, DOI: 10.1038/nn.3635