Category Archives: perception

Meaning of consciousness - Part 2

In part 1 a particular meaning of consciousness was picked out of the group of meanings. So what can be said about this neural idea of consciousness as simply awareness of self and surrounding world, here and now? Is it as it appears? Well, no.

To start with, it is questionable whether it is a continuous stream, the ‘stream of consciousness’, as it seems to be. Instead it is likely to be a series of discrete displays. A number of experiments (replicated several times) have shown that consciousness is not continuous. But there are also experiments showing that it is not a straightforward series of frames as in a movie film. There is a fairly convincing in-between theory proposed by Herzog (PLOS Biol 2016). “We experience the world as a seamless stream of percepts. However, intriguing illusions and recent experiments suggest that the world is not continuously translated into conscious perception. Instead, perception seems to operate in a discrete manner, just like movies appear continuous although they consist of discrete images. To explain how the temporal resolution of human vision can be fast compared to sluggish conscious perception, we propose a novel conceptual framework in which features of objects, such as their color, are quasi-continuously and unconsciously analyzed with high temporal resolution. Like other features, temporal features, such as duration, are coded as quantitative labels. When unconscious processing is “completed,” all features are simultaneously rendered conscious at discrete moments in time, sometimes even hundreds of milliseconds after stimuli were presented.” They have a two-stage model: a first stage of unconscious processing of features and the binding of these features to entities, which ends after at least 400 milliseconds with a best-fit solution and the triggering of the second stage, integration into a consciously perceived output. This percept is static but is labeled with features like colour, pitch and also duration, movement, location and the like. Although the percept is unchanging, it is experienced as having duration – as a slice of time although it does not exist for that duration.
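
To make the two-stage idea concrete, here is a little toy sketch (my own illustration, not Herzog's actual model): features arrive quasi-continuously and are bound unconsciously, and only when that stage completes is the whole bundle, duration label included, rendered conscious at one discrete moment.

from dataclasses import dataclass

@dataclass
class Percept:
    features: dict         # e.g. {"colour": "red", "motion": "leftward"}
    duration_label: float  # duration is coded as a label, not lived through

def unconscious_stage(samples, window_ms=400):
    # Stage 1: integrate high-resolution samples into a single best-fit solution.
    features = {}
    for t_ms, feature, value in samples:   # samples can arrive every few milliseconds
        features[feature] = value          # later evidence can revise earlier evidence
    return Percept(features=features, duration_label=window_ms)

def conscious_stage(percept):
    # Stage 2: the whole bundle becomes conscious at one discrete moment.
    return f"conscious now: {percept.features} (labelled duration {percept.duration_label} ms)"

samples = [(10, "colour", "red"), (180, "motion", "leftward"), (390, "colour", "dark red")]
print(conscious_stage(unconscious_stage(samples)))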

This view of consciousness implies that we have no direct knowledge of how this conscious experience is created. We can report our conscious experiences but not how they were created or how accurate they may be. Introspection as a method of observing processes of thought is not a reasonable concept - it is not possible to interrogate the ‘mind’ subjectively, from the inside. “The human mind operates largely out of view, and yet people are unaware of their unawareness, confabulating reasons for their actions and preferences.” (Wilson Science 2008). Subjectively, the creation of consciousness is a transparent process: we experience the finished result but have no view of the machinery that produced it. It can only be understood through objective study.

If perception is not done consciously and neither is motor control, exactly why do we need the conscious experience? It is almost like consciousness has no function – an extra that the brain does not need.

Chalmers put forward the distinction between the hard problem and the easy problems of consciousness. This appears to be a new type of dualism, separating objective knowledge of consciousness from the subjective experience of it. There is a thought experiment about philosophical zombies: why could there not be people who have no consciousness but who act the same as someone who does? This would mean that the function of consciousness in the brain does not include the experience of consciousness. The function of the easy part, the objective part, the neural part, is separate from the function-less, hard, subjective, and mystical part. The core idea here is that a physical brain cannot produce a subjective experience, or if it does then it is by way of some special process that is not known to current science. It has been put forward that consciousness is a universal primitive like mass and everything has consciousness, or that it ‘emerges’ through some information-theoretic mathematics in objects that are complex enough, or that it is some unstudied product of quantum mechanics, or that it is simply not physical but spiritual. So some people who try to explain consciousness are told that they have not explained it but ‘explained it away’ because the mystery is gone. And other people who try to explain consciousness are told that they have not explained it but made it a mystery because the physical world is over-stepped. This divide is unlikely to disappear in the near future. I have to say that I personally find it impossible to be a dualist. I want a physical explanation of consciousness, preferably in biological terms.

From my biological point of view, consciousness must have a function in the brain because it is very biologically expensive. It cannot just be entertainment. What function does experiencing ourselves in the world have? Does its reason for existing help to explain it? That is for part 3.

Where is consciousness?

A particular type of epilepsy has been treated by cutting the corpus callosum, the tract of nerves connecting the two hemispheres of the cerebrum. The procedure had very few side effects on the patients. However, with closer experimental studies, the nature of the split brain was examined. Only the left hemisphere spoke, and so only stimuli presented to the right visual field resulted in spoken replies and responses of the right hand. The right hemisphere could understand written language presented to the left visual field and made responses with the left hand but never spoke. Based on this and similar evidence, it was assumed that there were two minds (that is, two consciousnesses) in a split brain.

A recent paper has upset this hypothesis: Pinto, Neville, Otten, Corballis, Lamme, de Haan, Foschi, & Fabri; Split brain: divided perception but undivided consciousness; Brain Jan 2017. Here is the abstract:

In extensive studies with two split-brain patients we replicate the standard finding that stimuli cannot be compared across visual half-fields, indicating that each hemisphere processes information independently of the other. Yet, crucially, we show that the canonical textbook findings that a split-brain patient can only respond to stimuli in the left visual half-field with the left hand, and to stimuli in the right visual half-field with the right hand and verbally, are not universally true. Across a wide variety of tasks, split-brain patients with a complete and radiologically confirmed transection of the corpus callosum showed full awareness of presence, and well above chance-level recognition of location, orientation and identity of stimuli throughout the entire visual field, irrespective of response type (left hand, right hand, or verbally). Crucially, we used confidence ratings to assess conscious awareness. This revealed that also on high confidence trials, indicative of conscious perception, response type did not affect performance. These findings suggest that severing the cortical connections between hemispheres splits visual perception, but does not create two independent conscious perceivers within one brain.

When they showed an object in each visual field and asked whether the two objects were the same or different, the split-brain subjects could not answer the question with either hand or by speech. They could not examine the objects together – so it was correct that perception in the two hemispheres was separate and isolated. But if an object was placed in either or both visual fields, the subjects could say how many objects there were in total, and there was no difference in the answer coming from the left hand, the right hand, or the voice. So although they could not examine the objects together, their consciousness covered the entire visual field – there was only one consciousness.

What can explain this if the results hold up? Perhaps the two hemispheres have learned unusual ways of communicating outside of the normal connections. Perhaps it is some dualistic magic. Or, what seems more likely to me, consciousness is not a product of the cerebrum. It is created in some other part of the brain that can receive information from both hemispheres and can store its creation in immediate memory where it is available to both of them. There is an obvious candidate: the thalamus. It is not cut in half by the cutting of the corpus callosum. It is connected to almost all areas of the brain and almost all information passes through it at some stage of its processing. It is the one part of the brain that must be functioning for consciousness to occur.

There has been for years an assumption that the cerebrum is the engine of thought, and a number of things are puzzles because they cannot be understood by looking at the cerebral cortex alone. It is time to think about the possibility that the thalamus drives the cerebrum: it feeds information to the cortex, it creates the rhythms and synchronization in the cortex, and it controls the communication networks in the cortex. The thalamus may use the cortex as an on-line computer, to use that metaphor. But the thalamus is in the center of the brain while the cortex is laid out on the surface. It is easier to examine the cortex, and so the rest of the brain gets neglected. It is like the man looking for his keys under the street lamp because the light is better there, even though he lost them elsewhere.

 

Metaphors and shapes

Judith Copithorne image

Metaphors (including analogies and similitudes) appear to be very basic to thought. They are very important to language and communication. A large portion of dictionary meanings of words are actually old metaphors that have been used so much and for so long that the words have lost their figurative roots and become literal in meaning. We simply do not recognize that they were once metaphors. Much of our learning is metaphorical. We understand one complex idea by noticing its similarity to another complex idea that we already understand. For example, electricity is not easy to understand at first, but we have watched water flow as we have grown up and have learned a great deal about how it behaves, so basic electrical theory is often taught by comparing it to water. By and large, when we examine our knowledge of the world, we find it is rife with metaphors. We can trace many ways we think about things and events to ‘grounding’ in the experiences of infants. The way babies organize movement and sensory information is the foundation of enormous trees and pyramids of metaphorical understanding.

But what is a metaphor? We can think of it as a number of entities that are related in some way (in space, in time, in cause and effect, in logic, etc.) to form a structure that we can understand, think of, remember, name, use as a predictive model, and treat as a single thing. This structure can be reused without being reinvented. The entities can be re-labeled and so can the relations between them. So if we know that water flowing through a pipe will be limited by a narrower length of pipe, we can envisage an electrical current in a wire being limited by a resistor. Nothing needs to be retained in a metaphor but the abstract structure. This facility for manipulating metaphors is important to thinking, learning and communicating. Is there more? Perhaps.
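
As a rough illustration of ‘nothing but the abstract structure is retained’, here is a tiny sketch (the names and the relation pattern are my own invention): the water-pipe structure is reused for electricity simply by re-labelling the entities.

water_flow = {
    "entities": {"conduit": "pipe", "restriction": "narrow section"},
    "relation": "flow through the {conduit} is limited by the {restriction}",
}

def relabel(structure, new_entities):
    # Reuse the abstract structure; only the labels change.
    labels = {**structure["entities"], **new_entities}
    return structure["relation"].format(**labels)

print(relabel(water_flow, {}))                                              # the source domain
print(relabel(water_flow, {"conduit": "wire", "restriction": "resistor"}))  # the target domain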

A recent paper (Rolf Inge Godøy, Minho Song, Kristian Nymoen, Mari Romarheim Haugen, Alexander Refsum Jensenius; Exploring Sound-Motion Similarity in Musical Experience; Journal of New Music Research, 2016; 1) talks about the use of a type of metaphor across the senses and movement. Here is the abstract:

“People tend to perceive many and also salient similarities between musical sound and body motion in musical experience, as can be seen in countless situations of music performance or listening to music, and as has been documented by a number of studies in the past couple of decades. The so-called motor theory of perception has claimed that these similarity relationships are deeply rooted in human cognitive faculties, and that people perceive and make sense of what they hear by mentally simulating the body motion thought to be involved in the making of sound. In this paper, we survey some basic theories of sound-motion similarity in music, and in particular the motor theory perspective. We also present findings regarding sound-motion similarity in musical performance, in dance, in so-called sound-tracing (the spontaneous body motions people produce in tandem with musical sound), and in sonification, all in view of providing a broad basis for understanding sound-motion similarity in music.”

The part of this paper that I found most interesting was a discussion of abstract ‘shapes’ being shared by various senses and motor actions.

A focus on shapes or objects or gestalts in perception and cognition has particularly concerned so-called morphodynamical theory … morphodynamical theory claims that human perception is a matter of consolidating ephemeral sensory streams (of sound, vision, touch, and so on) into somehow more solid entities in the mind, so that one may recall and virtually re-enact such ephemeral sensations as various kinds of shape images. A focus on shape also facilitates motion similarity judgments and typically encompasses, first of all, motion trajectories (as so-called motion capture data) at various timescales (fast to slow, including quasi-stationary postures) and amplitudes (from large to small, including relative stillness). But shapes can also capture perceptually and affectively highly significant derivatives, such as acceleration and jerk of body motion, in addition.

The authors think of sound objects as occurring in the time range of half a second to five seconds. Sonic objects have pitch and timbre envelopes, and rhythmic, melodic and harmonic patterns. In terms of dynamics, sonic objects can be impulsive, with an envelope showing an abrupt onset and then decay; sustained, with a gradual onset and longer duration; or iterative, with rapidly repeated sound, tremolo, or drum roll. Sonic objects can have pitch that is stable, variable or just noise. These sonic objects are related to similar motion objects – objects in the same time range that produce music or react to it, for example the motions involved in playing a piano piece or in dancing. They also have envelopes of velocity and so on. This reminds me of the similar emotions that are triggered by similar envelopes in musical sound and speech, or of the objects that fit the nonsense words ‘bouba’ and ‘kiki’ being smooth or sharp. ‘Shape’ is a very good description of the vague but strong and real correspondences between objects from different domains. It is probably the root of our being able to use adjectives across domains. For example, we can have soft light, soft velvet, soft rustle, soft steps, soft job, and more or less soft anything. Soft describes different things in different domains but, despite the differences, it is a metaphoric connection between domains, so that concrete objects can be made by combining a number of individual sensory or motor objects which share abstract characteristics like soft.
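
The three dynamic envelope types are easy to picture as curves. This is just my own sketch of what impulsive, sustained and iterative envelopes might look like over one second, not anything taken from the paper.

import numpy as np

t = np.linspace(0.0, 1.0, 1000)                       # one second in millisecond steps
impulsive = np.exp(-8.0 * t)                          # abrupt onset, then decay
sustained = np.clip(5.0 * t, 0.0, 1.0) * (t < 0.9)    # gradual onset, long hold
iterative = 0.8 * (np.sin(2 * np.pi * 12 * t) > 0)    # rapidly repeated bursts (tremolo, drum roll)

for name, envelope in (("impulsive", impulsive), ("sustained", sustained), ("iterative", iterative)):
    print(f"{name:9s}  peak={envelope.max():.2f}  mean={envelope.mean():.2f}")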

In several studies of cross-modal features in music, a common element seems to be the association of shape similarity with sound and motion, and we believe shape cognition can be considered a basic amodal element of human cognition, as has been suggested by the aforementioned morphodynamical theory …. But for the implementation of shape cognition, we believe that body motion is necessary, and hence we locate the basis for amodal shape cognition in so-called motor theory. Motor theory is that which can encompass most (or most relevant) modalities by rendering whatever is perceived (features of sound, textures, motion, postures, scenes and so on) as actively traced shape images.

The word ‘shape’, used to describe corresponding characteristics from different domains, is very like the word ‘structure’ in metaphors and may point to the foundation of our cognitive mechanisms, including much more than just the commonplace metaphor.

 

A look at colour

Judith Copithorne image

Back to the OpenMIND collection and a paper on colour vision (Visual Adaptation to a Remapped Spectrum – Grush, Jaswal, Knoepfler, Brovold) (here). The study has some shortcomings which the authors point out. “A number of factors distinguish the current study from an appropriately run and controlled psychological experiment. The small n and the fact that both subjects were also investigators in the study are perhaps the two most significant differences. These limitations were forced by a variety of factors, including the unusual degree of hardship faced by subjects, our relatively small budget, and the fact that this protocol had never been tried before. Because of these limitations, the experiments and results we report here are intended to be taken only as preliminary results—as something like a pilot study. Even so, the results, we believe, are quite interesting and suggestive.” To quote Chesterton, if it is worth doing it is worth doing poorly.

The researchers used LCD goggles driven by a video camera so that the scene the subject saw was shifted in colour. The shift was 120 degrees of a colour wheel (red to blue, green to red, yellow to purple). The result was blue tomatoes, lilac people, and green sky. (video) The study lasted a week with one subject wearing the gear all the time he was not in the dark while the other wore the gear for several hours each day and had normal vision the rest of the time. How did they adapt to the change in colour?
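
Before getting to the adaptation: I do not know exactly how their goggles implemented the remapping, but a 120 degree rotation of hue, in the direction that sends red to blue, green to red and yellow to purple, can be sketched in a few lines.

import colorsys

def rotate_hue(rgb, degrees=-120):
    # Rotate a colour (r, g, b channels in 0..1) around the hue wheel.
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

print(rotate_hue((1.0, 0.0, 0.0)))   # red    -> roughly (0, 0, 1): blue tomatoes
print(rotate_hue((0.0, 1.0, 0.0)))   # green  -> roughly (1, 0, 0): red
print(rotate_hue((1.0, 1.0, 0.0)))   # yellow -> roughly (1, 0, 1): purple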

Colour constancy is an automatic correction the visual system makes so that colours do not appear to change under different light conditions – in sunlight, twilight, candle light, under fluorescent lamps and so on. What perception is aiming at is the characteristic of the surface that is reflecting the light, not the nature of the light. Ordinarily we are completely unaware of this correction. The colour-shifting gear disrupted colour constancy until the visual system adapted to the new spectrum.
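
One standard way to think about that correction is von Kries-style discounting of the illuminant. This little sketch (illustrative numbers, not from the paper) shows how dividing the sensed colour by an estimate of the light recovers the same surface colour under different illuminants.

def discount_illuminant(sensed_rgb, illuminant_rgb):
    # Estimate the surface reflectance by dividing out the estimated light.
    return tuple(round(s / i, 2) for s, i in zip(sensed_rgb, illuminant_rgb))

surface = (0.6, 0.3, 0.2)                  # the surface's "real" reflectance
daylight = (1.0, 1.0, 1.0)
candlelight = (1.0, 0.7, 0.4)

sensed_in_daylight = tuple(s * l for s, l in zip(surface, daylight))
sensed_in_candlelight = tuple(s * l for s, l in zip(surface, candlelight))

print(discount_illuminant(sensed_in_daylight, daylight))         # (0.6, 0.3, 0.2)
print(discount_illuminant(sensed_in_candlelight, candlelight))   # (0.6, 0.3, 0.2) again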

“We did not test color constancy in any controlled way, but the subjective reports are quite unmistakable. Subject RG noticed that upon first wearing the rotation gear color constancy went “out the window.” To take one example, in normal conditions RG’s office during the day is brightly lit enough that turning on the fluorescent light makes no noticeable difference to the appearance of anything in the office. But when he turned the lights on after first donning the gear, everything had an immediate significant change of hue (though not brightness). He spent several minutes flipping the light on and off in amazement. Another example is that he also noticed that when holding a colored wooden block, the surfaces changed their apparent color quite noticeably as he moved it and rotated it, as if the surfaces were actively altering their color like a chameleon. This was also a source of prolonged amusement. However, after a few days the effect disappeared. Turning the office light on had little noticeable effect on the color of anything in his office, and the surfaces of objects resumed their usual boring constancy as illumination conditions or angles altered.” Interestingly, the subject who wore the gear only part of each day never lost his normal colour constancy as he adapted to the other; but the subject who wore the gear all the time had to re-adapt when he took off the gear, although it took much less time than the adaptation when the gear was first put on. I have often wondered how difficult it would be to lose this correction, and for a while I used a child’s prism toy to look at the uncorrected colour of various shadows.

Did an adaptation happen to bring the colours back to their original appearance? Did the blue tomatoes start to look red again? It seems not, at least in this study. But again there were some interesting events.

On two occasions late into his six-day period of wearing the gear, JK went into a sudden panic because he thought that the rotation equipment was malfunctioning and no longer rotating his visual input. Both times, as he reports it, he suddenly had the impression that everything was looking normal. This caused panic because if there was a glitch causing the equipment to no longer rotate his visual input, then the experimental protocol would be compromised. …However, the equipment was not malfunctioning on either occasion, a fact of which JK quickly convinced himself both times by explicitly reflecting on the colors that objects, specifically his hands, appeared to have: “OK, my hand looks purplish, and purple is what it should look like under rotation, so the equipment is still working correctly.”…the lack of a sense of novelty or strangeness made him briefly fear … He described it as a cessation of a “this is weird” signal.

Before and after the colour adaptation period, they tested the memory-colour effect. This is done by adjusting the colour of an object until it appears a neutral grey. If the object always has a particular colour (bananas are yellow) then people over-correct and move the colour past the neutral grey point. “One possible explanation of this effect is that when the image actually is grey scale, subjects’ top-down expectations about the usual color make it appear (in some way or another) to be slightly tinted in that hue. So when the image of the banana is actually completely grey scale subjects judge it to be slightly yellow. The actual color of the image must be slightly in the direction opposite yellow (periwinkle) in order to cancel this top-down effect and make the image appear grey. This is the memory-color effect.” This effect was slightly reduced after the experiment – as if bananas were not expected to be as yellow as they had been before the experiment.

They also looked at other aspects of adaptation. “As we found, aesthetic judgments had started to adapt, … And though we did not find evidence of semantic adaptation, it would be quite surprising, given humans’ ability to learn new languages and dialects, if after a more extended period of time semantic adaptation did not occur.” They do not have clear evidence to say anything about qualia versus enactive adaptation, but further similar experiments may give good evidence.

A prediction engine

Judith Copithorne image

I have just discovered a wonderful source of ideas about the mind, Open MIND (here), a collection of essays and papers edited by Metzinger and Windt. I ran across mention of it in Deric Bownds’ blog (here). The particular paper that Bownds points to is “Embodied Prediction” by Andy Clark.

Clark argues that we look at the mind backwards. The everyday view of the working of the brain is: sensory input is used to create a model of the world, which prompts a plan of action, which is used to create an action. He argues for the opposite – action determines the nature of the sensory input we seek, that sensory input is used to correct an existing model, and it is all done by prediction. The mind is a predicting machine; the process is referred to as PP (predictive processing). “Predictive processing plausibly represents the last and most radical step in this retreat from the passive, input-dominated view of the flow of neural processing. According to this emerging class of models, naturally intelligent systems (humans and other animals) do not passively await sensory stimulation. Instead, they are constantly active, trying to predict the streams of sensory stimulation before they arrive.” Rather than a bottom-up flow of sensory information, the theory has a top-down flow of the current model of the world (in effect, what the incoming sensory data should look like). All that is fed back upwards are the error corrections, where the incoming sensory data differ from what is expected. This seems a faster, more reliable, more efficient system than the more conventional theory. The only effort needed is to deal with the surprises in the incoming data. Prediction errors are the only sensory information that is yet to be explained, the only place where the work of perception is required most of the time.
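
Here is a bare-bones sketch of that error-correction loop (a toy illustration of the general PP idea, not Clark's or anyone else's actual model): the current model predicts the input, only the mismatch is passed upward, and the model is nudged just enough to explain the surprise.

def predictive_loop(sensory_stream, learning_rate=0.3):
    estimate = 0.0                         # the current top-down model of the world
    for observed in sensory_stream:
        prediction = estimate              # top-down: what the input should look like
        error = observed - prediction      # bottom-up: only the surprise is passed on
        estimate += learning_rate * error  # revise the model to explain the surprise
        yield prediction, error, estimate

stream = [1.0, 1.0, 1.0, 4.0, 4.0, 4.0]    # a stable world with one surprise in the middle
for prediction, error, estimate in predictive_loop(stream):
    print(f"predicted {prediction:4.2f}   error {error:5.2f}   new estimate {estimate:4.2f}")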

Clark doesn’t make much of it, but he has a neat way of understanding attention. Many of our eye movements and postural adjustments are seen as ways of selecting the nature of the next sensory input. “Action is not so much a response to an input as a neat and efficient way of selecting the next “input”, and thereby driving a rolling cycle.” As the brain seeks certain information (because of uncertainty, the task at hand, or other reasons), it works harder to resolve the prediction errors pertaining to that particular information, and action is driven towards examining the source of that information. Small prediction errors may be ignored if they are not relevant to current tasks. This looks to me like an excellent description of the focus of attention.

Conceptually, this implies a striking reversal, in that the driving sensory signal is really just providing corrective feedback on the emerging top-down predictions. As ever-active prediction engines, these kinds of minds are not, fundamentally, in the business of solving puzzles given to them as inputs. Rather, they are in the business of keeping us one step ahead of the game, poised to act and actively eliciting the sensory flows that keep us viable and fulfilled. If this is on track, then just about every aspect of the passive forward-flowing model is false. We are not passive cognitive couch potatoes so much as proactive predictavores, forever trying to stay one step ahead of the incoming waves of sensory stimulation.

The prediction process is also postulated for motor control. We predict the sensory input which will happen during an action, that information flows from the top down, and error correction controls the accuracy of the movement. The predicted sensory consequences of our actions cause the actions. “The perceptual and motor systems should not be regarded as separate but instead as a single active inference machine that tries to predict its sensory input in all domains: visual, auditory, somatosensory, interoceptive and, in the case of the motor system, proprioceptive. …This erases any fundamental computational line between perception and the control of action. There remains, to be sure, an obvious (and important) difference in direction of fit. Perception here matches neural hypotheses to sensory inputs, and involves “predicting the present”; while action brings unfolding proprioceptive inputs into line with neural predictions. …Perception and action here follow the same basic logic and are implemented using the same computational strategy. In each case, the systemic imperative remains the same: the reduction of ongoing prediction error.”
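
The motor side can be sketched with the same logic run the other way round (again, just my own toy illustration of the idea): the proprioceptive prediction is held fixed and the limb is moved until the error disappears.

def act_to_fulfil_prediction(predicted_angle, actual_angle, gain=0.5, steps=6):
    # Instead of revising the prediction, action changes the input to match it.
    for _ in range(steps):
        error = predicted_angle - actual_angle   # proprioceptive prediction error
        actual_angle += gain * error             # the movement reduces the error
        print(f"arm at {actual_angle:5.2f} degrees (remaining error {error:5.2f})")
    return actual_angle

act_to_fulfil_prediction(predicted_angle=30.0, actual_angle=0.0)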

This theory feels comfortable when I think of conversational language. Unlike much of perception and the control of movement, language is conducted more in the light of conscious awareness. When listening, it is (almost) possible to feel the prediction of what is going to be said, and to notice that the work of understanding only arises when there is a surprising mismatch between the expected and the heard word. And when talking, it all happens without much effort until your tongue makes a slip and has to be corrected.

I am looking forward to browsing through Open MIND now that I know it exists.

 

Cooperation of sight and sound

As a child you were probably taught how to tell how far away lightning is. When there is a flash, you count at a slow, steady rhythm (roughly one count every five seconds) until you hear the thunder, and the count is about how many miles away the lightning is. Parents are not going to stop teaching this, because it gives a nervous child something to do in a thunderstorm and it convinces them that they are usually a safe distance from danger. But it only works for distant events.
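
The arithmetic behind the counting rule is simple: light arrives essentially instantly, sound travels at about 343 metres per second, so the delay gives the distance (roughly five seconds per mile, or three per kilometre). A quick sketch:

SPEED_OF_SOUND_M_PER_S = 343.0
METRES_PER_MILE = 1609.34

def lightning_distance(delay_seconds):
    # Light is effectively instantaneous; the delay is all down to the sound.
    metres = SPEED_OF_SOUND_M_PER_S * delay_seconds
    return metres, metres / METRES_PER_MILE

for delay in (1, 5, 10):
    metres, miles = lightning_distance(delay)
    print(f"{delay:2d} s delay -> {metres:6.0f} m  ({miles:.1f} miles)")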

Events that are close by are synchronized by the brain: consciously we collapse the visual and auditory cues, both in time and in space, to make a single event. We are not conscious of a difference in the timing or of any slight difference in the placing of the event. A particular region of the brain does this aligning - “the superior colliculus, a midbrain region that functions imperatively for integrating auditory and visual signals for attending to and localizing audiovisual stimuli”. But if the difference between vision and hearing is too large, the collapse into a single event does not happen.

However, we know that, even though it is not consciously experienced, information about small differences in sound arrival can be used: blind humans can echo-locate by making continuous little clicking noises. Could it be that the discrepancy between sound and sight is used in other ways? A recent paper (Jaekl P, Seidlitz J, Harris LR, Tadin D (2015) Audiovisual Delay as a Novel Cue to Visual Distance. PLoS ONE 10(10): e0141125. doi:10.1371/journal.pone.0141125) studies the effect of sound delays on the perception of distance. It is like the lightning calculation, but done unconsciously.

Here is the abstract:

For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance.

Two things on language

There are a couple of interesting reports about language.

First, it has been shown that repeating something aloud helps us remember it. But a recent study goes further – we remember even better if we repeat it aloud to someone. The act of communication helps the memory. The paper is: Alexis Lafleur, Victor J. Boucher. The ecology of self-monitoring effects on memory of verbal productions: Does speaking to someone make a difference? Consciousness and Cognition, 2015; 36: 139 DOI:10.1016/j.concog.2015.06.015.

From ScienceDaily (here): Previous studies conducted at Professor Boucher’s Phonetic Sciences Laboratory have shown that when we articulate a sound, we create a sensory and motor reference in our brain, by moving our mouth and feeling our vocal cords vibrate. “The production of one or more sensory aspects allows for more efficient recall of the verbal element. But the added effect of talking to someone shows that in addition to the sensorimotor aspects related to verbal expression, the brain refers to the multisensory information associated with the communication episode,” Boucher explained. “The result is that the information is better retained in memory.”

No one can tell me that language is not about and for communication.

The second item is reported in ScienceDaily (here): infants cannot perceive the difference between certain sounds when their tongue is restricted with a teether. They have to be able to mimic the sounds in order to distinguish them. The paper is: Alison G. Bruderer, D. Kyle Danielson, Padmapriya Kandhadai, and Janet F. Werker. Sensorimotor influences on speech perception in infancy. PNAS, October 12, 2015 DOI: 10.1073/pnas.1508631112.

From ScienceDaily: …teething toys were placed in the mouths of six-month-old English-learning babies while they listened to speech sounds - two different Hindi “d” sounds that infants at this age can readily distinguish. When the teethers restricted movements of the tip of the tongue, the infants were unable to distinguish between the two “d” sounds. But when their tongues were free to move, the babies were able to make the distinction. Lead author Alison Bruderer, a postdoctoral fellow in the School of Audiology and Speech Sciences at UBC, said the findings call into question previous assumptions about speech and language development. “Until now, research in speech perception development and language acquisition has primarily used the auditory experience as the driving factor,” she said. “Researchers should actually be looking at babies’ oral-motor movements as well.”

They say that parents do not need to worry about using teething toys, but a child should also have time to freely use their tongue for good development.

 

Liking the easy stuff

It is not only true that if something is not understood it is assumed to be easily done; it is also true that if something is easier to grasp then it is more likeable. A recent study looked at this connection between fluency and appreciation. (Forster M, Gerger G, Leder H (2015) Everything’s Relative? Relative Differences in Processing Fluency and the Effects on Liking. PLoS ONE 10(8): e0135944. doi:10.1371/journal.pone.0135944)

The question Forster asks is whether the judgement of fluency is absolute or relative. If we have internal reference standards for liking that depend on the ease of perceiving then the level of liking is an absolute judgement. Internal standards seem to be the case for perfect pitch and the feeling of familiarity when something is recalled from memory. But in the case of the effort of perception, our feeling of liking is a relative judgement – a comparison with other amounts of effort for other images.

Abstract: “Explanations of aesthetic pleasure based on processing fluency have shown that ease-of-processing fosters liking. What is less clear, however, is how processing fluency arises. Does it arise from a relative comparison among the stimuli presented in the experiment? Or does it arise from a comparison to an internal reference or standard? To address these questions, we conducted two experiments in which two ease-of-processing manipulations were applied: either (1) within-participants, where relative comparisons among stimuli varying in processing ease were possible, or (2) between-participants, where no relative comparisons were possible. In total, 97 participants viewed simple line drawings with high or low visual clarity, presented at four different presentation durations, and rated for felt fluency, liking, and certainty. Our results show that the manipulation of visual clarity led to differences in felt fluency and certainty regardless of being manipulated within- or between-participants. However, liking ratings were only affected when ease-of-processing was manipulated within-participants. Thus, feelings of fluency do not depend on the nature of the reference. On the other hand, participants liked fluent stimuli more only when there were other stimuli varying in ease-of-processing. Thus, relative differences in fluency seem to be crucial for liking judgements.”

The power of words

ScienceDaily has an item (here) on an interesting paper. (B. Boutonnet, G. Lupyan. Words Jump-Start Vision: A Label Advantage in Object Recognition. Journal of Neuroscience, 2015; 35 (25): 9329 DOI: 10.1523/JNEUROSCI.5111-14.2015)

The researchers demonstrated how words can affect perception. A particular wave that occurs a tenth of a second after a visual image appears was enhanced by a matching word but not by a matching natural sound. And the word made the identification of the visual quicker but the natural sound did not. For example a picture of a dog, the spoken word ‘dog’, and a dog’s bark would be a set.

They believe this is because the word is about a general category while the natural sound is a specific example from that category. Symbols such as words are the only way to indicate categories. “Language allows us this uniquely human way of thinking in generalities. This ability to transcend the specifics and think about the general may be critically important to logic, mathematics, science, and even complex social interactions.”

Here is the abstract: “People use language to shape each other’s behavior in highly flexible ways. Effects of language are often assumed to be “high-level” in that, whereas language clearly influences reasoning, decision making, and memory, it does not influence low-level visual processes. Here, we test the prediction that words are able to provide top-down guidance at the very earliest stages of visual processing by acting as powerful categorical cues. We investigated whether visual processing of images of familiar animals and artifacts was enhanced after hearing their name (e.g., “dog”) compared with hearing an equally familiar and unambiguous nonverbal sound (e.g., a dog bark) in 14 English monolingual speakers. Because the relationship between words and their referents is categorical, we expected words to deploy more effective categorical templates, allowing for more rapid visual recognition. By recording EEGs, we were able to determine whether this label advantage stemmed from changes to early visual processing or later semantic decision processes. The results showed that hearing a word affected early visual processes and that this modulation was specific to the named category. An analysis of ERPs showed that the P1 was larger when people were cued by labels compared with equally informative nonverbal cues—an enhancement occurring within 100 ms of image onset, which also predicted behavioral responses occurring almost 500 ms later. Hearing labels modulated the P1 such that it distinguished between target and nontarget images, showing that words rapidly guide early visual processing.”

 

The center of the universe

When we are conscious, we look out at the world through a large hole in our heads between our noses and our foreheads, or so it seems. It is possible to pin-point the exact place inside our heads which is the ‘here’ to which everything is referenced. That spot is about 4-5 centimeters behind the bridge of the nose. Not only sight but hearing, touch and the feelings from inside our bodies are registered as being some distance, in some direction, from that spot. As far as we are concerned, we carry the center of the universe around in our heads.

Both our sensory system and our motor system use this particular three-dimensional arrangement centered on that spot, so locations are the same for both processes. How, why and where in the brain is this first-person, ego-centric space produced? Bjorn Merker has a paper in a special topic issue of Frontiers in Psychology, Consciousness and Action Control (here). The paper is entitled “The efference cascade, consciousness and its self: naturalizing the first person pivot of action control”. He believes the evidence points to the roof of the mid-brain, the superior colliculus.
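
What a shared ego-centred frame means in practice can be sketched as a simple change of coordinates (the numbers and axes are of my own choosing, purely for illustration): a location sensed in world coordinates is translated to the ego-centre and rotated into the head's frame, and the same coordinates can then serve both perception and action.

import numpy as np

def to_egocentric(target_world, ego_centre_world, head_yaw_degrees):
    # Translate to the ego-centre, then rotate into the head's frame of reference.
    yaw = np.radians(head_yaw_degrees)
    rotation = np.array([[ np.cos(yaw), np.sin(yaw), 0.0],
                         [-np.sin(yaw), np.cos(yaw), 0.0],
                         [ 0.0,         0.0,         1.0]])
    return rotation @ (np.asarray(target_world) - np.asarray(ego_centre_world))

# A cup a metre ahead and a little to the left, with the head turned 30 degrees:
print(to_egocentric(target_world=[1.0, 0.2, 0.0],
                    ego_centre_world=[0.0, 0.0, 0.05],
                    head_yaw_degrees=30.0))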

If we consider the center of our space, then attention is like a light or arrow pointing from the center to a particular location in that space and what is in it. That means that we are oriented in that direction. “The canonical form of this re-orienting is the swift and seamlessly integrated joint action of eyes, ears (in many animals), head, and postural adjustments that make up what its pioneering students called the orienting reflex.

This orientation has to occur before any action directed at the target or any examination of the point of interest by our senses. First the orientation and then the focus of attention. But how does the brain decide which possible focus of attention is the one to orient towards? “The superior colliculus provides a comprehensive mutual interface for brain systems carrying information relevant to defining the location of high priority targets for immediate re-orienting of receptor surfaces, there to settle their several bids for such a priority location by mutual competition and synergy, resulting in a single momentarily prevailing priority location subject to immediate implementation by deflecting behavioral or attentional orientation to that location. The key collicular function, according to this conception, is the selection, on a background of current state and motive variables, of a single target location for orienting in the face of concurrent alternative bids. Selection of the spatial target for the next orienting movement is not a matter of sensory locations alone, but requires access to situational, motivational, state, and context information determining behavioral priorities. It combines, in other words, bottom-up “salience” with top-down “relevance.”
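
The selection the quote describes can be caricatured as a priority map in which bottom-up salience and top-down relevance are combined for each candidate location and a single winner is taken for the next orienting movement (the locations and weights below are invented for illustration).

candidate_locations = {
    "doorway": {"salience": 0.2, "relevance": 0.9},   # the task says watch the door
    "flicker": {"salience": 0.8, "relevance": 0.1},   # eye-catching but irrelevant
    "coffee":  {"salience": 0.3, "relevance": 0.4},
}

def next_orienting_target(locations, w_salience=0.5, w_relevance=0.5):
    # Combine the bids and let a single priority location win.
    scores = {name: w_salience * bids["salience"] + w_relevance * bids["relevance"]
              for name, bids in locations.items()}
    winner = max(scores, key=scores.get)
    return winner, scores

print(next_orienting_target(candidate_locations))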

We are provided with the illusion that we sit behind our eyes and experience the world from there and from there we plan and direct our actions. A lot of work and geometry that we are unaware of goes into this illusion. It allows us to integrate what we sense with what we do, quickly and accurately.