Monthly Archives: October 2014

Why no brain-in-a-vat

A comment on the previous blog asked for a discussion of embodied cognition. I will try to express why I find embodied cognition a more attractive model than classic cognition. My natural approach to living things is biological – I just think that way – and if something does not make much sense from a biological standpoint then I am suspicious.

So to start, why don't all living things have brains? Brains seem to be confined to animals, organisms that move. This makes sense: to move, an organism needs mechanisms for propulsion (muscles, for example), mechanisms to sense the environment (eyes, for example), and mechanisms for coordinating and planning movement (nervous systems). So we have motor neurons that activate muscles and sensory neurons that sample the environment, and the two are connected in the simplest nervous systems. All we have in this simple setup is reflexes and habituation. But if there are nets of interneurons between the motor and sensory ones, then complex actions and thoughts become possible, including learning, memory, a working model of reality, emotion and problem solving – in other words, brains. I picture cognition as coming into being, and then being honed by evolution, as an integral part of the whole organism: its niche or way of life, its behaviour, its anatomy.

Did the evolutionary process give us a brain that is a general computer? Why would it? Anatomy and physiology tend to be lost when they are not particularly useful. For example, moles lost sight because their niche is without light; parasites can lose all functions except nutrition and reproduction. A general computer would be a costly organ, so it would only evolve if it were definitely useful.

Today science does not hold that there are exactly three dimensions but talks of 4, 11½, 37 and so on. We can accept more than 3, believe there are more than 3, but we cannot put ourselves in more than 3 dimensions no matter how we try. Our brain is constructed to create a model of the world with 3 dimensions and that is that. Why? We sense our orientation, acceleration and balance with the semicircular canals of the inner ear. There are 3 canals and they are at mutual right angles to each other – physical x, y, z planes are evident in this arrangement. The parts of the brain that do the cognitive processing to track orientation, acceleration and balance are built to use signals from the inner ear. This is not a general computing ability that could deal with the mathematics of any number of dimensions – no, it is a task-specific cognitive ability that only deals in 3 dimensions. I think that all our cognitive abilities are like this; they are very sophisticated in what they do but limited to tasks that are useful and matched to what the body and environment can supply.

Further, when evolutionary pressures are forcing new behaviours and reality modeling, new cognitive abilities are not created from scratch, because changes to old cognitive abilities are faster. They will win the race. Take time, for example. Animals usually have circadian rhythms and often seasonal/tidal rhythms too. But to incorporate time into our model of reality would probably require a lot of change if done from scratch. However, we already have an excellent system for incorporating space in our reality: the elaborate system of place cells, grid cells, border cells, heading cells and so on. So we can just deal with time as if it were space. Many of these re-uses of old abilities can be seen in the metaphors that people use. A whole branch of embodiment research is dedicated to identifying these metaphors in our normal thinking.

This business of re-using one ability to serve other domains brings up the question of 'grounding'. People often remark on the circularity of dictionaries: each word is defined by other words. As we pile up metaphoric schemes, each an elaboration and re-identification of elements of other metaphors, the situation appears circular and unsupported. But with a dictionary, what breaks the circle is that a few primitive words are defined by pointing at the object. In the same way, each pile of metaphors needs to be grounded in the body. There are primitive schemes that babies are born with or that they learn naturally as they learn to use their bodies. In other words, all the cognitive abilities can be traced back to the nature of the body and environment.

There is one case where it can be shown that the cognition is embodied and not classic. When a fielder catches a fly ball, the path he runs is that of an embodied method and not a classic one. The fielder makes no calculations or predictions; he simply keeps running in such a way as to keep the image of the ball in the sky in a particular place, and he ends up where his glove and the ball meet along that image line. There are good write-ups of this. (here)
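This strategy (often called the gaze heuristic, or optical acceleration cancellation) is simple enough to simulate. Here is a minimal one-dimensional sketch in Python; the launch speeds, the fielder's starting position and the correction gain are all invented numbers, and the point is only that nulling the drift of the ball's image, with no trajectory calculation at all, steers the fielder toward the ball.

```python
# Toy 1-D simulation of the fielder's strategy: no landing-point
# computation, just speed adjustments that keep the climb of the
# ball's image steady. All numbers are made up for illustration.

DT = 0.01          # time step (s)
G = 9.81           # gravity (m/s^2)

bx, bz = 0.0, 0.0          # ball: horizontal position, height (m)
bvx, bvz = 18.0, 22.0      # ball velocities (m/s)

fx, fvx = 55.0, 0.0        # fielder position (m) and speed (m/s)
GAIN = 40.0                # strength of the correction (assumed)

prev_tan = prev_rate = None

while bz >= 0.0:
    bx += bvx * DT
    bvz -= G * DT
    bz += bvz * DT
    fx += fvx * DT

    gap = fx - bx
    if gap > 0.5:                      # ball still in front of him
        tan_theta = bz / gap           # elevation of the ball's image
        if prev_tan is not None:
            rate = (tan_theta - prev_tan) / DT
            if prev_rate is not None:
                # Image accelerating upward -> ball will sail over: back up.
                # Image decelerating -> it will fall short: run in.
                fvx += GAIN * ((rate - prev_rate) / DT) * DT
                fvx = max(-9.0, min(9.0, fvx))   # human-ish top speed
            prev_rate = rate
        prev_tan = tan_theta

print(f"ball lands at {bx:.1f} m; fielder is at {fx:.1f} m")
```

The interesting design point is what is missing: there is no physics model, no wind estimate, no prediction. The environment does the computing and the body just cancels an optical error signal.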

By contrast, classical cognition is seen as isolated and independent from the body and environment, using algorithms to manipulate symbols and capable of running any algorithm (i.e. a general computer). It just does not ring true to me. I see the brain-in-a-vat as about as useful as a car engine in a washing machine. Why would anyone want a brain-in-a-vat? As a thought experiment to support skepticism it is so-so, because like many philosophical ideas it is concerned with Truth, capitalized, whereas the brain is not aiming at truth but at appropriate behaviour. A heart can be kept alive on an isolated perfusion apparatus and it will beat away and pump a liquid – but to what purpose? Even robots need bodies to really think in a goal-directed, real-time, real-place way, and so they are fitted with motors, cameras, arms and the like. Robots can be embodied.

 

Embodied thinking

TalkingBrains has a posting, "Embodied or Symbolic? Who Cares?" (here). Greg Hickok is asking what exactly the difference is between embodied and symbolic cognition. He takes a nice example of a neurocomputation that is understood: the way a barn owl turns its head to a sound source. If you have not seen it before, have a look at the link – it is well explained and easy to follow.

He asks:

Question: what do we call this kind of neural computation? Is it embodied? Certainly it takes advantage of body-specific features, the distance between the two ears (couldn't work without that!) and I suppose we can talk of a certain "resonance" of the external world with neural activation. In that sense, it's embodied. On the other hand, the network can be said to represent information in a neural code – the pattern of activity in a network of cells – that no longer resembles the air pressure wave that gave rise to it. In fact, we can write a symbolic code to describe the computation of the network.
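For readers who do not follow the link: the owl's computation is usually described as a bank of coincidence detectors fed through delay lines (the Jeffress model), and it can be caricatured in a few lines of code. The head size, sample rate and noise stimulus below are invented, and the cross-correlation stands in for the owl's actual wiring; it is a sketch of the principle only.

```python
import numpy as np

# Jeffress-style toy: one "coincidence detector" per candidate internal
# delay; the detector whose delay cancels the interaural time
# difference (ITD) responds most strongly. Numbers are illustrative.

FS = 100_000                  # sample rate (Hz)
SPEED = 343.0                 # speed of sound (m/s)
HEAD = 0.06                   # ear separation (m), assumed

rng = np.random.default_rng(0)
sound = rng.normal(size=2000)          # a noise burst

azimuth_true = np.radians(20)          # where the sound really is
itd = HEAD * np.sin(azimuth_true) / SPEED
shift = int(round(itd * FS))           # ITD in samples (here ~6)

pad = np.zeros(abs(shift))
left = np.concatenate([sound, pad])    # left ear hears it first
right = np.concatenate([pad, sound])   # right ear hears it later

delays = range(-30, 31)                # candidate internal delays
responses = [np.dot(np.roll(left, d), right) for d in delays]
best = list(delays)[int(np.argmax(responses))]

est = np.degrees(np.arcsin(np.clip(best / FS * SPEED / HEAD, -1, 1)))
print(f"true azimuth 20.0 deg, estimated {est:.1f} deg")
```

As Hickok notes, the same computation can be described as embodied (it only works because of the physical distance between the ears) or as symbolic (it is a few lines of algorithm); that ambiguity is his point.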

I think, however, that the example is a bit off the subject. Of course there are many examples in the brain of clear computations that could be presented in the form of a computer program or an algorithm for manipulating symbols. And it is generally assumed that the brain manipulates entities that are best called symbols: words, objects, concepts, places and the like. Even the brain's great ability to work with metaphors is like substituting symbols in schemes that relate a number of symbols in a particular way. Symbols and their manipulation seem useful in understanding the brain. Symbols in the brain, of course, would always be metaphors for actual processes, but then the idea of a symbol is by its nature always a sort of metaphor, standing in for whatever it is a symbol of.

But just because some, or perhaps a great many, processes in the brain can be pictured as manipulations of symbols, in ways akin to algorithms, this does not mean that the brain acts like a general computing device. Embodied cognition is quite clearly computation only in the sense of task-specific processes and architecture, not the actions of a general device. To be understood, the brain has to be seen as an integral part of the body. It is, and does, its part of what the body is and does. The cognitive abilities and facilities of the brain are the ones the body needs to function. If those abilities are sometimes used for arbitrary and abstract things like playing chess, this does not mean that they are not individually 'grounded' in the body's requirements and limitations.

Just because some task could be done in a particular way does not mean that it is done that way. The brain is what it is; metaphors can help us understand its workings, or they can stand in the way of understanding. They do not dictate the nature of the brain. We should always keep in mind that metaphors are somewhat limited tools.

Seeing clearly

Why do we not notice the limitations of our eyes or any time lag in perception? A recent paper by A. Herwig, reported in ScienceDaily (here), looks at the mechanics of vision.

Only one portion of the retina, the fovea, has detailed vision. If we hold an arm out, an area about the size of a thumbnail is seen clearly by the fovea; the rest of vision is not sharp. And yet we seem to have clear vision over a much larger area.

This paper puts forward a model in which memory stores pairs of blurred and detailed images. When there is a blurred object in the visual field (but not in the fovea), it is replaced in the visual system by a detailed image of an object that fits the blurred image coming from the eyes. This is done so quickly that a person never notices the blurred object. These pairings of blurred and detailed objects are continually updated.
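The substitution the model proposes is easy to caricature in code. In the sketch below the "images" are just feature vectors, peripheral blurring is stood in for by simple averaging, and a nearest-neighbour lookup stands in for whatever associative machinery the visual system really uses – all of these are assumptions for illustration only.

```python
import numpy as np

# Cartoon of the blurred/detailed pairing idea: memory holds pairs of
# (coarse peripheral appearance, detailed foveal appearance); a blurred
# input is replaced by the sharp image whose stored blur matches best.

rng = np.random.default_rng(1)

def blur(img):
    """Stand-in for peripheral vision: average away the detail."""
    return img.reshape(-1, 4).mean(axis=1)

# Learning phase: every foveated object is stored with its blurred twin.
memory = []
for _ in range(50):
    sharp = rng.normal(size=64)            # a "detailed foveal image"
    memory.append((blur(sharp), sharp))

# Later: an object appears in the periphery, so only its blur is seen.
target_blur, target_sharp = memory[17]
percept = target_blur + 0.05 * rng.normal(size=16)   # noisy blurred input

# Substitute the best-matching stored detail for the blurred input.
best = min(memory, key=lambda pair: np.sum((pair[0] - percept) ** 2))
predicted_sharp = best[1]

print("retrieved the right detailed image:",
      bool(np.allclose(predicted_sharp, target_sharp)))
```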

The researchers used a very fast camera to follow a subject's eye movements. During the extremely fast movements (saccades) from one fixation point to another, they changed the object being viewed. The subjects did not see the new object but rather the detailed image paired with the old blurred object.

"The experiments show that our perception depends in large measure on stored visual experiences in our memory. … These experiences serve to predict the effect of future actions ('What would the world look like after a further eye movement?'). In other words: we do not see the actual world, but our predictions."

This gives us a clear visual picture that appears correct and immediate.

Here is the abstract (A. Herwig, W. Schneider; Predicting object features across saccades: Evidence from object recognition and visual search; Journal of Experimental Psychology: General (2014) 143(5)):

When we move our eyes, we process objects in the visual field with different spatial resolution due to the nonhomogeneity of our visual system. In particular, peripheral objects are only coarsely represented, whereas they are represented with high acuity when foveated. To keep track of visual features of objects across eye movements, these changes in spatial resolution have to be taken into account. Here, we develop and test a new framework proposing a visual feature prediction mechanism based on past experience to deal with changes in spatial resolution accompanying saccadic eye movements. In 3 experiments, we first exposed participants to an altered visual stimulation where, unnoticed by participants, 1 object systematically changed visual features during saccades. Experiments 1 and 2 then demonstrate that feature prediction during peripheral object recognition is biased toward previously associated postsaccadic foveal input and that this effect is particularly associated with making saccades. Moreover, Experiment 3 shows that during visual search, feature prediction is biased toward previously associated presaccadic peripheral input. Together, these findings demonstrate that the visual system uses past experience to predict how peripheral objects will look in the fovea, and what foveal search templates should look like in the periphery. As such, they support our framework based on ideomotor theory and shed new light on the mystery of why we are most of the time unaware of acuity limitations in the periphery and of our ability to locate relevant objects in the periphery.

 

More on the definition of consciousness

In my last post, I said that the phrase “subjective mental states”, used by Mark Conard, was without meaning. I did not explain why I find it meaningless, so I will now. You can read Conard’s review of my last post (here).

First, subjective: what can it mean? A thing, an event, a process or whatever either exists or it doesn't. And if it exists, it can be viewed in different ways. I can view something subjectively or objectively; how I view the something does not change what it is. And if I can view it subjectively then I most certainly can view it objectively, and vice versa. It makes absolutely no sense to say that something is solely subjective. As it happens, consciousness can be viewed by introspection, and it can also be viewed by inspecting the neural correlates of consciousness (NCC). Introspection does not make consciousness exclusively subjective, and the NCC do not make it exclusively objective. I think we get a better, more useful view if we look objectively. You cannot say that the definition of consciousness is that it IS subjective. Subjectivity is in the mind of the beholder.

Second, mental: what does that mean? Mental as opposed to what? In this use it cannot mean just vaguely to do with thought. It must be taking the dualist meaning: to do with mind as opposed to matter. I cannot deal in terms of magic-mind-matter; it is just meaningless.

And finally, state: what can it mean in this context? It implies that consciousness is a noun sort of thing rather than a verb sort of thing. If it is a state then it has to be somewhat static and be somewhere, but nothing in the brain seems static and in one place. We have to think of consciousness as a process, not a state.

I see consciousness as a process that is not yet clearly understood but involves the integration of a number of sources (sensory, motor/sensory prediction, emotion, volition) into a momentary perception of the world and our interaction within it. There are a number of events associated with this, such as the synchronous two-way communication between the cortex and the thalamus, and the use of working memory. There may be many functions for consciousness, but one important one is to create experience to be stored in episodic memory. Our awareness of this moment of consciousness has the same basic form as our experience of a memory. Introspection seems to be the steering of attention onto the moment of consciousness and experiencing it as a sort of immediate memory. This way of looking at consciousness has the ring of truth about it; it is easy for me to live with.

But if consciousness has the definition of "subjective mental state", then as far as I am concerned it does not exist and I must find another name for the beautiful perceptions and emotions that I experience. However, I have every right to use the word consciousness for the experiences I have and the ones others say they have, which sound very similar to mine. I do not accept that my consciousness is described by 'subjective mental state', and I insist that I have consciousness. And further, I am not a freak of nature; I have a sane, working, experiencing brain.

 

What is consciousness?

Consciousness is a word that we can almost point at. When I say it, I am fairly sure I don't have to give a definition – I mean, everyone experiences consciousness and so they will know what I am talking about. But it is not so. As Inigo Montoya says, "You keep using that word. I do not think it means what you think it means".

I read in a comment somewhere, long ago, that there are three ways to approach a physical explanation of consciousness: you can claim that, as consciousness is not a physical thing, the explanation is impossible; you can claim that it is physical but too mysterious to explain, so the explanation is too hard; or you can claim that it is not what it appears to be and the explanation is obvious – in which case it is not explained but explained away. It has been said that Dennett did this in his book Consciousness Explained – just explained it away.

As I said in a previous post (seeing past the trick) you cannot explain a magic trick as it appears but you can if you don’t believe the trick and look for the sleight of hand or the misdirection. If the subjective, non-physical, experience of a conscious mind is what has to be explained then that is a dead end and will remain a mystery. We have to give up our naïve sense of what consciousness is in order to understand it.

Michael Graziano did a piece in the New York Times Sunday Review (here) that portrays consciousness in a useful way.

… I believe a major change in our perspective on consciousness may be necessary, a shift from a credulous and egocentric viewpoint to a skeptical and slightly disconcerting one: namely, that we don’t actually have inner feelings in the way most of us think we do. …

How does the brain go beyond processing information to become subjectively aware of information? The answer is: It doesn’t. The brain has arrived at a conclusion that is not correct. When we introspect and seem to find that ghostly thing — awareness, consciousness, the way green looks or pain feels — our cognitive machinery is accessing internal models and those models are providing information that is wrong. The machinery is computing an elaborate story about a magical-seeming property. And there is no way for the brain to determine through introspection that the story is wrong, because introspection always accesses the same incorrect information. …

But the argument here is that there is no subjective impression; there is only information in a data-processing device. When we look at a red apple, the brain computes information about color. It also computes information about the self and about a (physically incoherent) property of subjective experience. The brain’s cognitive machinery accesses that interlinked information and derives several conclusions: There is a self, a me; there is a red thing nearby; there is such a thing as subjective experience; and I have an experience of that red thing. Cognition is captive to those internal models. Such a brain would inescapably conclude it has subjective experience. …

In the attention schema theory, attention is the physical phenomenon and awareness is the brain’s approximate, slightly incorrect model of it. In neuroscience, attention is a process of enhancing some signals at the expense of others. It’s a way of focusing resources. Attention: a real, mechanistic phenomenon that can be programmed into a computer chip. Awareness: a cartoonish reconstruction of attention that is as physically inaccurate as the brain’s internal model of color.

In this theory, awareness is not an illusion. It’s a caricature. Something — attention — really does exist, and awareness is a distorted accounting of it.”

I have picked out these bits of the argument but it is worth the time to read the original article. He (like philosophers Dennett, Churchland, Metzinger and others) is not explaining consciousness away but looking at what consciousness may actually be. Most scientists working on consciousness are also on this route – they are assuming that consciousness has a physical explanation, looking for evidence and, like Graziano, building theoretical models.

We cannot explain magic but we can explain why some things happen while appearing to be impossible. Look for what really happened and ignore what appeared to happen.

After writing this post but before posting it, I ran across a near-perfect example of the problem. A philosopher called Mark Conard has a post called 'When Science Gets Stupid' (here). I doubt that he understood Graziano's piece, because he starts right out by defining consciousness in exactly the form that it probably isn't: "to be conscious is to be aware. It's to have subjective mental states about one's environment". He does not refute Graziano's argument but ignores it. Well, if you start with that as a firm definition, then you have already pre-judged the issue. You cannot scientifically explain 'subjective mental states', but possibly you can explain something that appears to be a subjective mental state. I have consciousness, personally, and I call it consciousness, but I very definitely do not feel I have subjective mental states. That is not the explanation I am looking for – I want an explanation of my consciousness, not of some other definition, subjective mental states, that seems meaningless. What on earth is a subjective mental state?

I found it offensive that Graziano was referred to as "a guy named Michael Graziano". He is a very well respected scientist. Conard also downgrades Dennett and Churchland by implying that they are somehow not doing philosophy right (not with a capital P): "With it's methods, science is wonderful, helpful, generates real knowledge about the world; but it's incapable of investigating lived human experience in all its richness and meaningfulness. That isn't to say, mind you, that there is no reasoned approach to human experience, no arguments to be made, no evidence to examine. It's only to say that we need a different methodology–that of Philosophy!" As I had never encountered Conard before, his pulling rank does not impress me. And his arguments just miss the point entirely. "You keep using that word. I do not think it means what you think it means".

Remembering visual images

There is an interesting recent paper (see citation) on visual memory. The researchers' intent is to map the areas, and the causal directions between them, for a particular process in healthy individuals, so that sufferers showing loss of that process can be studied in the same way and the faulty areas/connections identified. In this study they were looking at the encoding of vision for memory.

40 healthy subjects were examined. "… participants were presented with stimuli that represented a balanced mixture of indoor (50%) and outdoor (50%) scenes that included both images of inanimate objects as well as pictures of people and faces with neutral expressions. Attention to the task was monitored by asking participants to indicate whether the scene was indoor or outdoor using a button box held in the right hand. Participants were also instructed to memorize all scenes for later memory testing. During the control condition, participants viewed pairs of scrambled images and were asked to indicate using the same button box whether both images in each pair were the same or not (50% of pairs contained the same images). Use of the control condition allowed for subtraction of visuo-perceptual, decision-making, and motor aspects of the task, with a goal of improved isolation of the memory encoding aspect of the active condition." All the subjects performed well on both tasks and on later recognition of the scenes they were asked to remember. "Thirty-two ICA components were identified. Of these, 10 were determined to be task-related (i.e., not representing noise or components related to the control condition) and were included in further analyses and model generation. Each retained component was attributed to a particular network based on previously published data." Granger causality analysis was carried out on each pair of the 10 components.
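Granger causality simply asks whether the past of one time series improves the prediction of another. The sketch below shows the idea on two made-up component time courses; it is not the authors' pipeline (which ran the analysis across all pairs of the 10 ICA components), and the lag, coefficients and noise are invented.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Two synthetic "component time courses": the follower echoes the
# driver two time points later, so the driver's past should improve
# prediction of the follower (i.e. driver Granger-causes follower).

rng = np.random.default_rng(0)
n = 300
driver = rng.normal(size=n)             # e.g. a visual-cortex component
follower = np.zeros(n)                  # e.g. an attention component
for t in range(2, n):
    follower[t] = 0.8 * driver[t - 2] + 0.3 * rng.normal()

# statsmodels tests whether column 2 Granger-causes column 1.
data = np.column_stack([follower, driver])
result = grangercausalitytests(data, maxlag=3, verbose=False)
p = result[2][0]["ssr_ftest"][1]        # p-value of the F test at lag 2
print(f"p-value for driver -> follower at lag 2: {p:.2g}")
```

Run over every ordered pair of components, small p-values in one direction but not the other are what let the authors draw arrows between networks.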


The authors give a description of the many functions that have been attributed to their 10 areas (independent components), which is interesting reading but not very significant, because the areas are on the large side and because it is reasonable to argue from a specific function to an active area but not from an active area to a specific function. The information does have a bearing on some theories and models. The fact that this work does not itself produce a model does not make it less useful in studying abnormal visual memory encoding.

The involvement of the 'what' visual stream, rather than the stream used for motor actions, is expected, as is the involvement of working memory. There is clearly a major role for attention in this process. The involvement of language/concepts is interesting. "Episodic memory is defined as the ability to consciously recall dated information and spatiotemporal relations from previous experiences, while semantic memory consists of stored information about features and attributes that define concepts. The visual encoding of a scene in order to remember and recognize it later (i.e., visual memory encoding) engages both episodic and semantic memory, and an efficient retrieval system is needed for later recall." The data are likely to be useful in evaluating theoretical ideas; the authors mention support for the hemispheric encoding/retrieval asymmetry model.

The abstract:

Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19–59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA) with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33±5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by a specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists."

Nenert, R., Allendorfer, J., & Szaflarski, J. (2014). A Model for Visual Memory Encoding. PLoS ONE, 9(10). DOI: 10.1371/journal.pone.0107761

Another new neuron type

 

In a press release (here) about a paper in the journal Neuron (see citation below), it was announced that there are neurons in the hippocampus with a newly discovered anatomy; in fact, they are common there.

The model of a neuron is a cell body with branches (dendrites) in one area that receive input from other neurons, and a long extension (axon) whose branches at the far end output signals to other neurons. The standard picture is that there is a complex summation of synaptic inputs on the dendrite branches, and then a summation of the dendritic signals at the body of the cell, which either reaches the threshold for firing or not. If the threshold is reached, the activity travels down the axon to the synapses with other neurons.

The newly discovered neurons have a bypass, shunt or privileged path. The axon in these cells does not start on the cell body but on a dendrite that is on the axon side of the cell body. Input to this particular dendrite therefore does not have to pass through the cell body but can send signals directly down the axon. The axon can fire if the dendrite it is attached to reaches threshold, or if the cell body reaches threshold due to activity on the other dendrites.

A metaphor might be this: the decision whether or not to fire is taken by small committees with pro and con members, and the results of those committees go to a higher committee. If that committee decides to fire, then firing happens. On the other hand, the boss and his advisors can just walk in and order firing if they choose. A toy version of the two routes is sketched below.
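In this sketch the thresholds and input weights are invented, and real dendritic integration is far richer than a weighted sum; it only makes the either/or logic of the two routes concrete.

```python
# Toy rendering of the two routes to firing described above: the usual
# soma summation, plus a privileged axon-carrying dendrite (AcD) that
# bypasses the soma. Thresholds and inputs are made-up numbers.

SOMA_THRESHOLD = 1.0       # the "higher committee"
ACD_THRESHOLD = 0.8        # the privileged axon-carrying dendrite

def fires(regular_inputs, acd_inputs):
    """Spike if EITHER the soma integrates enough from the ordinary
    dendrites OR the axon-carrying dendrite crosses threshold alone."""
    soma_drive = sum(regular_inputs)     # summed over regular dendrites
    acd_drive = sum(acd_inputs)          # bypasses the cell body
    return soma_drive >= SOMA_THRESHOLD or acd_drive >= ACD_THRESHOLD

# Ordinary route: lots of weak input on regular dendrites.
print(fires(regular_inputs=[0.3, 0.4, 0.5], acd_inputs=[0.1]))   # True

# Privileged route: the AcD alone triggers output while the soma is quiet.
print(fires(regular_inputs=[0.1, 0.2], acd_inputs=[0.9]))        # True

# Neither route reaches threshold: no spike.
print(fires(regular_inputs=[0.2, 0.3], acd_inputs=[0.4]))        # False
```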

These pyramidal cells in the hippocampus would have an important role in memory. The function of this particular arrangement has not yet been researched.

Here is the abstract:

Neuronal processing is classically conceptualized as dendritic input, somatic integration, and axonal output. The axon initial segment, the proposed site of action potential generation, usually emanates directly from the soma. However, we found that axons of hippocampal pyramidal cells frequently derive from a basal dendrite rather than from the soma. This morphology is particularly enriched in central CA1, the principal hippocampal output area. Multiphoton glutamate uncaging revealed that input onto the axon-carrying dendrites (AcDs) was more efficient in eliciting action potential output than input onto regular basal dendrites. First, synaptic input onto AcDs generates action potentials with lower activation thresholds compared with regular dendrites. Second, AcDs are intrinsically more excitable, generating dendritic spikes with higher probability and greater strength. Thus, axon-carrying dendrites constitute a privileged channel for excitatory synaptic input in a subset of cortical pyramidal cells.

Citation: C. Thome, T. Kelly, A. Yanez, C. Schultz, M. Engelhardt, S. B. Cambridge, M. Both, A. Draguhn, H. Beck and A. V. Egorov (2014): Axon-Carrying Dendrites Convey Privileged Synaptic Input in Hippocampal Neurons. Neuron, 83, 1418-1430.

Fine control

My last blog, on timing in some neurons of the cerebellum, has started a string of thoughts. Here we have a part of the brain with an anatomy that is well mapped, as opposed to many other parts. It has more neurons than the rest of the brain put together. It has grown relatively larger in human evolution than any other part of the brain. There are theories about how the system works, and yet its actions are not understood in detail and new information on one of its important cell types was a surprise. (previous blog)

Abstract (Barton; see citation below):

Humans’ unique cognitive abilities are usually attributed to a greatly expanded neocortex, which has been described as “the crowning achievement of evolution and the biological substrate of human mental prowess”. The human cerebellum, however, contains four times more neurons than the neocortex and is attracting increasing attention for its wide range of cognitive functions. Using a method for detecting evolutionary rate changes along the branches of phylogenetic trees, we show that the cerebellum underwent rapid size increase throughout the evolution of apes, including humans, expanding significantly faster than predicted by the change in neocortex size. As a result, humans and other apes deviated significantly from the general evolutionary trend for neocortex and cerebellum to change in tandem, having significantly larger cerebella relative to neocortex size than other anthropoid primates. These results suggest that cerebellar specialization was a far more important component of human brain evolution than hitherto recognized and that technical intelligence was likely to have been at least as important as social intelligence in human cognitive evolution. Given the role of the cerebellum in sensory-motor control and in learning complex action sequences, cerebellar specialization is likely to have underpinned the evolution of humans’ advanced technological capacities, which in turn may have been a preadaptation for language.

This enlargement has been in the neocerebellum, which is not primarily concerned with the fine tuning of movements of the whole body and limbs. What appears to have increased is: the ability to learn by being taught, by imitating and by practice; fine control of the hands, as is needed for tool making; fine control of the larynx, as is needed for speaking; and, it might be said, fine control of any sequential process, including language and some types of thought.

Wikipedia gives a summary of the neocerebellar connections. "The lateral zone, which in humans is by far the largest part, constitutes the cerebrocerebellum, also known as neocerebellum. It receives input exclusively from the cerebral cortex (especially the parietal lobe) via the pontine nuclei (forming cortico-ponto-cerebellar pathways), and sends output mainly to the ventrolateral thalamus (in turn connected to motor areas of the premotor cortex and primary motor area of the cerebral cortex) and to the red nucleus. There is disagreement about the best way to describe the functions of the lateral cerebellum: it is thought to be involved in planning movement that is about to occur, in evaluating sensory information for action, and in a number of purely cognitive functions as well, such as determining the verb which best fits with a certain noun (as in "sit" for "chair")."

A computing mechanism for fine control of a process using feedback from the environment has an almost universal usefulness. It does not initiate an action but controls it. It evolved to give us balance and posture, to smooth our actions and make them more accurate, to move the eyes and give us stationary vision from moving eyes, and to steer the eyes to points of attention. What we appear to have gained is extremely fine control of some muscles and the ability to use the same mechanisms for language, music and other forms of thought and social communication. It appears essential to supervised learning. And here is a biggy – it may be responsible for knitting together the fragments of memory and knowledge that produce imagination.
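That controller role can be made concrete with the simplest possible example: a feedback loop that does not choose the target but continuously corrects the movement toward it. The gains and the toy "limb" dynamics below are invented for illustration; they are a bare-bones proportional-derivative controller, not a model of the cerebellum.

```python
# A minimal feedback controller: correct proportionally to the error
# (distance from target) and damp with the current speed, so the
# movement is smooth rather than ballistic. All numbers are assumed.

DT = 0.01                    # time step (s)
KP, KD = 30.0, 8.0           # correction gains (invented)

position, velocity = 0.0, 0.0
target = 1.0                 # where the movement is supposed to end up

for step in range(500):      # 5 seconds of simulated time
    error = target - position             # sensory feedback
    command = KP * error - KD * velocity  # damped correction
    velocity += command * DT              # toy "muscle + limb" dynamics
    position += velocity * DT

print(f"after 5 s the limb sits at {position:.3f} (target {target})")
```

The design point matches the text: the controller never decides where to go; it only keeps the ongoing action smooth and accurate against feedback, which is why the same machinery can be re-used for any sequential process.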

Barton, R., & Venditti, C. (2014). Rapid Evolution of the Cerebellum in Humans and Other Great Apes. Current Biology. DOI: 10.1016/j.cub.2014.08.056

A new feature of neurons

There are articles asking, "Are we ever going to understand the brain?" They imply that we have been studying the brain long enough that we should be able to say how it works, if we are ever going to, thereby hinting that it is a permanent mystery. But every week or so some new wrinkle in the brain's nature comes to light. The brain is far more complicated and far less understood than many think.

Recently a paper appeared that points to a wholly new feature of neurons (citation below). Johansson and his colleagues demonstrate a surprising property of at least some neurons. They looked at a well-known response: when a puff of air is directed at the eye, there is a blink. If this is done over and over with the same time interval between a signal and the puff, a reflex is formed so that the blink happens at just the right time to protect the eye from the puff. This is a standard conditioned reflex, and we thought we understood conditioned reflexes. The researchers found that the learning of the time between signal and puff was not a function of a network of cells but an internal function of one type of cell. "The data strongly suggest that the main timing mechanism is within the Purkinje cell and that its nature is cellular rather than a network property. Parallel fiber input lacking any temporal pattern can elicit Purkinje cell responses timed to intervals at least as long as 300 ms. … In addition, the data show that a main part of the timing of the conditioned response relies on intrinsic cellular mechanisms rather than on a temporal pattern in the input signal." We have been modeling neurons as firing, or not, as a result of the strength of the signals at their synapses – and firing, if they do, immediately. Any timing effects were assumed to be produced by network structures; neurons were modeled as very fancy switches with no timing capabilities of their own. Now that understanding has changed. Large changes in understanding, like this one, happen regularly. We are a long way from understanding the mechanisms of the brain.
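To see what has changed, compare the pictures in toy form. Below, instead of a unit that fires the instant its input crosses threshold, the cell itself stores a learned interval and responds around that moment. The "learning rule" (just averaging the experienced intervals) is a placeholder for the unknown cellular mechanism; only the behaviour matches the finding.

```python
# Cartoon of the Johansson et al. result: the timing lives inside one
# cell, not in the network. How the cell stores the interval is
# unknown; the averaging below is purely illustrative.

class TimingCell:
    def __init__(self):
        self.learned_interval = None   # ms between signal and air puff

    def train(self, intervals_ms):
        # Behaviourally, the cell ends up tuned to the experienced
        # signal-to-puff delay, whatever the real mechanism is.
        self.learned_interval = sum(intervals_ms) / len(intervals_ms)

    def respond(self, t_since_signal_ms, window_ms=25):
        """Respond only around the learned moment, not immediately."""
        if self.learned_interval is None:
            return False
        return abs(t_since_signal_ms - self.learned_interval) <= window_ms

cell = TimingCell()
cell.train([300, 295, 305, 300])       # conditioning trials, ~300 ms

for t in (50, 150, 300, 450):
    print(f"{t:3d} ms after the signal -> respond: {cell.respond(t)}")
```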

Here is the Significance and Abstract:

The standard view of neural signaling is that a neuron can influence its target cell by exciting or inhibiting it. An important aspect of the standard view is that learning consists of changing the efficacy of synapses, either strengthening (long-term potentiation) or weakening (long-term depression) them. In studying how cerebellar Purkinje cells change their responsiveness to a stimulus during learning of conditioned responses, we have found that these cells can learn the temporal relationship between two paired stimuli. The cells learn to respond at a particular time that reflects the time between the stimuli. This finding radically changes current views of both neural signaling and learning.

The standard view of the mechanisms underlying learning is that they involve strengthening or weakening synaptic connections. Learned response timing is thought to combine such plasticity with temporally patterned inputs to the neuron. We show here that a cerebellar Purkinje cell in a ferret can learn to respond to a specific input with a temporal pattern of activity consisting of temporally specific increases and decreases in firing over hundreds of milliseconds without a temporally patterned input. Training Purkinje cells with direct stimulation of immediate afferents, the parallel fibers, and pharmacological blocking of interneurons shows that the timing mechanism is intrinsic to the cell itself. Purkinje cells can learn to respond not only with increased or decreased firing but also with an adaptively timed activity pattern.

Johansson, F., Jirenhed, D., Rasmussen, A., Zucca, R., & Hesslow, G. (2014). Memory trace and timing mechanism localized to cerebellar Purkinje cells. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1415371111

 

Down with untrue intros

 

There are often opening sentences like, “only humans can x” or “only primates can x”. Why do people assume these sorts of statements are true without checking? Why does no one seem to complain? Either authors and readers don’t really care if the statements are true – they are just openers and not the important part of the piece; or they want the statements to be true and so are shy about looking at any evidence.

A recent paper (Anna Kis, Ludwig Huber, Anna Wilkinson. Social learning by imitation in a reptile (Pogona vitticeps). Animal Cognition, 2014) was reported in ScienceDaily with the opening line, "The ability to acquire new skills through the 'true imitation' of others' behaviour is thought to be unique to humans and advanced primates, such as chimpanzees." I knew this was not true and that other animals have this skill (a number of mammals besides primates, and a number of birds). Looking at the abstract of the paper, I found a similar opening line. It was not so restrictive: "The ability to learn through imitation is thought to be the basis of cultural transmission and was long considered a distinctive characteristic of humans. There is now evidence that both mammals and birds are capable of imitation." But even this is a bit restricted, as octopuses also learn from one another. We should not be sure that some animal (a social insect, for example) does not do x unless we have looked to see. And throw-away openings should at least be true.

But the paper has interesting news: bearded dragons can learn from one another! Reptiles can be included too, and we are as closely related to these reptiles as we are to birds. This finding strengthens the idea that social learning is an ancient skill in vertebrates, rather than something that evolved separately in the various types of vertebrates. Although it is still reasonable to think that it evolved separately in invertebrates.

Here is the abstract:

The ability to learn through imitation is thought to be the basis of cultural transmission and was long considered a distinctive characteristic of humans. There is now evidence that both mammals and birds are capable of imitation. However, nothing is known about these abilities in the third amniotic class—reptiles. Here, we use a bidirectional control procedure to show that a reptile species, the bearded dragon (Pogona vitticeps), is capable of social learning that cannot be explained by simple mechanisms such as local enhancement or goal emulation. Subjects in the experimental group opened a trap door to the side that had been demonstrated, while subjects in the ghost control group, who observed the door move without the intervention of a conspecific, were unsuccessful. This, together with differences in behaviour between experimental and control groups, provides compelling evidence that reptiles possess cognitive abilities that are comparable to those observed in mammals and birds and suggests that learning by imitation is likely to be based on ancient mechanisms.