
All pain is not the same

A popular illustration of embodied cognition is the notion that physical pain and social pain share the same neural mechanism. The researchers who first published this relationship have now published a new paper finding that the two types of pain do not overlap in the brain but are merely close neighbours, close enough to have appeared together on the original fMRI scans. The patterns of activity are in fact different. The data have not changed, but a new method of analyzing them has produced a much clearer picture.

Neuroskeptic has a good blog post on this paper and observes: “Woo et al. have shown commendable scientific integrity in being willing to change their minds and update their theory based on new evidence. That sets an excellent example for researchers.” Have a look at the Neuroskeptic post (here).

It would probably be wise for other groups to re-examine, using multivariate analysis, similar data they have previously published.
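The univariate/multivariate distinction is easy to illustrate with a toy simulation (a sketch with invented numbers, not the paper’s actual analysis): two conditions can produce identical average activation in a region while a simple pattern classifier separates them trial by trial.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 100

# Two "conditions" share the same mean activation level (both light up the
# region) but differ in WHICH voxels carry the signal -- a toy stand-in for
# overlapping-yet-distinct fMRI patterns.
pattern_a = rng.permutation(np.repeat([1.0, 0.0], n_voxels // 2))
pattern_b = rng.permutation(np.repeat([1.0, 0.0], n_voxels // 2))

trials_a = pattern_a + rng.normal(0, 0.5, (n_trials, n_voxels))
trials_b = pattern_b + rng.normal(0, 0.5, (n_trials, n_voxels))

# Univariate view: average activation per condition is indistinguishable.
print(trials_a.mean(), trials_b.mean())   # both around 0.5

# Multivariate view: a nearest-centroid classifier trained on half the trials
# separates held-out trials easily.
train_a, test_a = trials_a[:50], trials_a[50:]
train_b, test_b = trials_b[:50], trials_b[50:]
ca, cb = train_a.mean(axis=0), train_b.mean(axis=0)

def classify(x):
    return 'a' if np.linalg.norm(x - ca) < np.linalg.norm(x - cb) else 'b'

acc = (sum(classify(x) == 'a' for x in test_a) +
       sum(classify(x) == 'b' for x in test_b)) / 100
print(f"decoding accuracy: {acc:.2f}")
```

The point is that averaging over voxels (the univariate view) throws away exactly the pattern information the classifier uses.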


Abstract of paper (Woo CW, Koban L, Kross E, Lindquist MA, Banich MT, Ruzic L, Andrews-Hanna JR, & Wager TD (2014). Separate neural representations for physical pain and social rejection. Nature Communications, 5. PMID: 25400102)

“Current theories suggest that physical pain and social rejection share common neural mechanisms, largely by virtue of overlapping functional magnetic resonance imaging (fMRI) activity. Here we challenge this notion by identifying distinct multivariate fMRI patterns unique to pain and rejection. Sixty participants experience painful heat and warmth and view photos of ex-partners and friends on separate trials. FMRI pattern classifiers discriminate pain and rejection from their respective control conditions in out-of-sample individuals with 92% and 80% accuracy. The rejection classifier performs at chance on pain, and vice versa. Pain- and rejection-related representations are uncorrelated within regions thought to encode pain affect (for example, dorsal anterior cingulate) and show distinct functional connectivity with other regions in a separate resting-state data set (N=91). These findings demonstrate that separate representations underlie pain and rejection despite common fMRI activity at the gross anatomical level. Rather than co-opting pain circuitry, rejection involves distinct affective representations in humans.”

 

Fluid, flow, zone and zen

So we have conscious and unconscious, type 1 and type 2 cognitive processes, default and task-related modes, fluid intelligence, being in the flow, being in the zone and the Zen mind. I wonder which of these are really the same, just expressed in different semantic frameworks: which might actually be the same physical thing seen from a different viewpoint. I suspect that they are all ways of expressing various aspects of how we use, or fail to use, unconscious cognition.

There was an interesting Scientific American blog post (here) by SB Kaufman last January, looking at the relationship between fluid reasoning and working memory. Fluid reasoning works across all domains of intelligence and uses very little prior knowledge, expertise or practice to build relationships, patterns and inferences. How much it depends on working memory is governed by speed: if the fluid reasoning is done quickly, it requires good working memory, but it can be done slowly with less need for working memory. Is this the difference between quick and deep thinkers, both described as intelligent?

Fluid reasoning does not fit nicely with the two types of cognitive process: type 1 (intuitive, fast, automatic, unconscious, effortless, contextualized, error-prone) and type 2 (reflective, slow, deliberate, cogitative, effortful, decontextualized, normatively correct). As type 2 is typified as using working memory and type 1 as not, there is an implication that when speed is required for fluid reasoning, more working memory is required, and therefore the thinking leans towards type 2 processing, which is the slower of the two. It is a bit of a paradox. Perhaps what sets fluid reasoning apart is the type of problem rather than the type of process. Maybe the two types of process are ends of a spectrum rather than opposites.

Let’s imagine the reasoning as little spurts of type 1 processing feeding a type 2 use of working memory. This could be a spectrum: at one end, continuous type 1 thinking with working memory and consciousness involved only at the beginning and the end; at the other, a continuous back and forth as working memory steps through a solution. Let’s imagine that there is little control of efficiency in the type 1 working: the unconscious does not necessarily stick to a plan, while the use of working memory almost dictates a step-wise method. Fluid problems, which occur in areas with little expertise, knowledge and practice, may tax type 1 reasoning unless it is closely monitored and controlled with working memory. A step-wise plan may restrict and slow down progress on a well-practiced task; not having such a plan may overwhelm the process with irrelevant detail and slow down an unfamiliar task. There may, for any situation, be an optimal amount of type 2 control of type 1 free-wheeling speed.

People talking about ‘flow’ and ‘zone’ tend to acknowledge the similarity of the two concepts, but flow seems less concentrated, describing a way of living and especially of working, while zone seems to describe short periods of more intense activity, as in a sport. This is almost the opposite of fluid reasoning, in that neither flow nor zone can be achieved without first acquiring skill (expertise, knowledge and practice are basic). This seems to be type 1 processing at its best. In fact, one way to lose the zone is to try to think about or consciously control what you are doing. That is how to choke.

Mihály Csíkszentmihályi has documented flow for most of his career. His theory of Flow has three conditions for achieving the flow state: be involved in an activity with a clear set of goals and progress (direction and structure); have clear and immediate feedback to allow change and adjustment; have balance between the perceived challenges and perceived skills (confidence in one’s ability for the task). The person in flow is experiencing the present moment, a sense of control, a loss of sense of time and of self-consciousness, with a feeling of great reward and enjoyment. There is an automatic connection of action and perception and an effortless relaxation, but still a feeling of control.

Young and Pain have studied being ‘in the zone’. It is described as “a state in which an athlete performs to the best of his or her ability. It is a magical and…special place where performance is exceptional and consistent, automatic and flowing. An athlete is able to ignore all the pressures and let his or her body deliver the performance that has been learned so well. Competition is fun and exciting.” Athletes reporting on ‘in the zone’ moments report: “clear inner process”, “felt all together”, “awareness of power”, “clear focus”, “strong sense of self”, “free from outer restrictions”, “need to complete”, “absorption”, “intention”, “process ‘clicked’”, “personal understanding & expression”, “actions & thoughts spontaneous”, “event was practiced”, “performance”, “fulfillment”, “intrinsic reward”, “loss of self”, “spiritual”, “loss of time and space”, “unity of self and environment”, “enjoyed others”, “prior related involvement”, “fun”, “action or behavior”, “goals and structure”. Zone seems more intense and more identified with a very particular event than flow.

The hallmark of both flow and zone is that the unconscious, fully equipped and practiced, appears to be in charge, doing the task well and effortlessly. The other thing to note is that the task-related mode is being used and not the default mode: introspection, memory and imagination take second place.

The flow/zone way of acting is even more extreme in some Eastern religious exercises, and also a few Western ones. The pinnacle of this is perhaps the Zen states of mind, one of which in particular is like zone. “Mushin means “Without Mind” and it is very similar in practice to the Chinese Taoist principle of wei wuwei. Of all of the states of mind, I think not only is working toward mastery of mushin most important, it’s also the one most people have felt at some point in time. In sports circles, mushin is often referred to as “being in the zone”. Mushin is characterized by a mind that is completely empty of all thoughts and is existing purely in the current moment. A mind in mushin is free from worry, anger, ego, fear or any other emotions. It does not plan, it merely acts. If you’ve ever been playing a sport and you got so into it you stopped thinking about what you were doing and just played, you’ve experienced mushin.” I find the use of ‘mind’ with this meaning misleading, but it is clear in context that they are referring to just the conscious part of the mind. The word could be replaced with ‘consciousness’ without changing the meaning.

In summary, unconscious control of tasks that have been extremely well learned (the learning itself likely requires conscious thought) leads to states of mind that are valued: very skilled, effortless and agreeable. The default mode is suppressed, and the self recedes in importance, as do past and future, because introspection, recall of past events and dreaming of future ones require the default mode. It is not an all-or-nothing thing but one of degree.

Virtual reality is not that real

Virtual reality is used in many situations and is often seen as equivalent to actual experience. For example, it is used in training where actual experience is too expensive or dangerous. In science, it is used in experiments with the assumption that it can be compared to reality. A recent paper (Z. Aghajan, L. Acharya, J. Moore, J. Cushman, C. Vuong, M. Mehta; Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality; Nature Neuroscience 2014) shows that virtual reality and ‘real’ reality are treated differently in the hippocampus where spatial mapping occurs. ScienceDaily reports on this paper (here).

It is assumed that cognitive maps are made by the neurons of the hippocampus, computing the distances to landmarks. Of course, this is not the only way a map could be constructed: sounds and echoes could give clues, smells could identify places, and so on. To test whether visual cues alone could supply the information to create a map, the researchers compared the activity of neurons in the hippocampus during a virtual walk and a real walk that were visually identical. In the real set-up the rat walked across a scene, while in the virtual set-up the rat walked on a treadmill while the equivalent visual ‘movie’ was projected all around it.

The results showed that the mapping of the two environments was different. The mapping during real experience involved more activity by more neurons and was not random. In the virtual experiment, the activity was random and more sparse. Judging by neuron activity, it was as if the rats could not map virtual reality and were somewhat lost or confused, even though they appeared to be behaving normally. “Careful mathematical analysis showed that neurons in the virtual world were calculating the amount of distance the rat had walked, regardless of where he was in the virtual space.”
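Spatial selectivity of this kind is commonly quantified with the Skaggs spatial-information score: the bits of information about position carried per spike. A toy sketch with made-up firing rates (not the paper’s data or analysis) shows why a sharply tuned place cell scores high while a diffusely firing cell scores near zero:

```python
import numpy as np

def spatial_information(rate_map, occupancy):
    """Skaggs-style spatial information (bits per spike) for a firing-rate map."""
    p = occupancy / occupancy.sum()          # probability of being in each bin
    mean_rate = (p * rate_map).sum()
    nz = rate_map > 0
    return (p[nz] * rate_map[nz] / mean_rate *
            np.log2(rate_map[nz] / mean_rate)).sum()

bins = 20
occupancy = np.ones(bins)                    # equal time spent in every bin

# A sharply tuned "real world" place cell (fires only in one stretch of track)...
place_cell = np.full(bins, 0.1)
place_cell[8:12] = 10.0

# ...versus a diffusely firing "virtual reality" cell.
diffuse_cell = np.full(bins, 2.0) + 0.1 * np.sin(np.arange(bins))

real_si = spatial_information(place_cell, occupancy)
vr_si = spatial_information(diffuse_cell, occupancy)
print(real_si, vr_si)   # high (around 2 bits/spike) vs. near zero
```

A spike from the tuned cell tells you a lot about where the rat is; a spike from the diffuse cell tells you almost nothing.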

The same report describes other research by the group. Mehta describes the complex rhythms involved in learning and memory in the hippocampus: “The complex pattern they make defies human imagination. The neurons in this memory-making region talk to each other using two entirely different languages at the same time. One of those languages is based on rhythm; the other is based on intensity.” The two languages are used simultaneously by hippocampal neurons. “Mehta’s group reports that in the virtual world, the language based on rhythm has a similar structure to that in the real world, even though it says something entirely different in the two worlds. The language based on intensity, however, is entirely disrupted.”

As a rat hippocampus is very similar to a human one and the virtual reality set-up was a very realistic one, this study throws doubt on experiments and techniques that use virtual reality with humans. It is also very interesting to note another surprising ability of neurons: processing two types of signal at the same time.

Abstract: “During real-world (RW) exploration, rodent hippocampal activity shows robust spatial selectivity, which is hypothesized to be governed largely by distal visual cues, although other sensory-motor cues also contribute. Indeed, hippocampal spatial selectivity is weak in primate and human studies that use only visual cues. To determine the contribution of distal visual cues only, we measured hippocampal activity from body-fixed rodents exploring a two-dimensional virtual reality (VR). Compared to that in RW, spatial selectivity was markedly reduced during random foraging and goal-directed tasks in VR. Instead we found small but significant selectivity to distance traveled. Despite impaired spatial selectivity in VR, most spikes occurred within ~2-s-long hippocampal motifs in both RW and VR that had similar structure, including phase precession within motif fields. Selectivity to space and distance traveled were greatly enhanced in VR tasks with stereotypical trajectories. Thus, distal visual cues alone are insufficient to generate a robust hippocampal rate code for space but are sufficient for a temporal code.”

Why no brain-in-a-vat

A comment on the previous blog asked for a discussion of embodied cognition. I will try to express why I find embodied cognition a more attractive model than classic cognition. My natural approach to living things is biological – I just think that way – and if something does not make much sense from a biological standpoint then I am suspicious.

So to start, why don’t all living things have brains? Brains seem to be confined to animals, organisms that move. This makes sense: to move, an organism needs mechanisms for propulsion (muscles, for example), mechanisms to sense the environment (eyes, for example), and mechanisms for coordinating and planning movement (nervous systems). So we have motor neurons that activate muscles and sensory neurons that sample the environment, and the two are connected in the simplest nervous systems. All we have in this simple setup, though, is reflexes and habituation. If there are nets of inter-neurons between the motor and sensory ones, then complex actions and thoughts become possible, including learning, memory, a working model of reality, emotion and problem solving: in short, brains. In other words, I picture cognition as coming into being and then being honed by evolution as an integral part of the whole organism: its niche or way of life, its behaviour, its anatomy.

Did the evolutionary process give us a brain that is a general computer? Why would it? Anatomy and physiology tend to be lost when they are not particularly useful. For example, moles lost sight because their niche is without light; parasites can lose all functions except nutrition and reproduction. A general computer would be a costly organ, so it would only evolve if it were definitely useful.

Today science does not hold that there are exactly three dimensions but talks of 4, 11 ½, 37 etc. We can accept more than 3, believe there are more than 3, but we cannot put ourselves in more than 3 dimensions no matter how we try. Our brain is constructed to create a model of the world with 3 dimensions and that is that. Why? We sense our orientation, acceleration, balance from the semi-circular canals of the inner ear. There are 3 canals and they are at mutual right angles to each other – physical x,y,z planes are evident in this arrangement. The parts of the brain that do the cognitive processes to track orientation, acceleration and balance are built to use signals from the inner ear. It is not a general computing ability that could deal with the mathematics of any number of dimensions – no, it is a task-specific cognitive ability that only deals in 3 dimensions. I think that all our cognitive abilities are like this; they are very sophisticated in what they do but limited to tasks that are useful and matched to what the body and environment can supply.

Further, when evolutionary pressures are forcing new behaviours and reality modeling, new cognitive abilities are not created from scratch, because changes to old cognitive abilities are faster; they will win the race. Take time, for example. Animals usually have circadian rhythms and often seasonal or tidal rhythms too, but to incorporate time into our model of reality would probably require a lot of change if done from scratch. However, we already have an excellent system for incorporating space in our reality: the elaborate system of place cells, grid cells, border cells, heading cells and so on. So we can just deal with time as if it were space. Many of these re-uses of old abilities can be seen in the metaphors that people use, and a whole branch of embodiment research is dedicated to identifying the metaphors used in our normal thinking.

This business of re-using one ability to serve other domains brings up the question of ‘grounding’. People often remark on the circularity of dictionaries. Each word is defined by other words. As we pile up metaphoric schemes each an elaboration and re-identification of elements of other metaphors, the situation appears circular and unsupported. But with a dictionary, what is needed is that a few primitive words are defined by pointing at the object. In the same way each pile of metaphors needs to be grounded in the body. There are primitive schemes that babies are born with or that they learn naturally as they learn to use their bodies. In other words all the cognitive abilities can be traced back to the nature of the body and environment.

There is one case where it can be shown that the cognition is embodied and not classic. When a fielder catches a fly ball, the path he runs is that of an embodied method, not a classic one. The fielder makes no calculations or predictions; he simply keeps running in such a way as to keep the image of the ball in the sky in a particular place, and he ends up with the ball and his glove meeting along that image line. There are good write-ups of this (here).
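The usual name for this strategy is optical acceleration cancellation: run so that the tangent of the ball’s elevation angle rises at a constant rate, and you converge on the landing point without ever predicting a trajectory. A minimal sketch with an idealized drag-free ball and invented numbers:

```python
import numpy as np

grav = 9.8
vx, vy = 15.0, 20.0                  # ball launch velocity (m/s), made up
T = 2 * vy / grav                    # flight time; the ball lands at x = vx * T
landing_x = vx * T

# The heuristic: move so that tan(elevation angle of the ball, seen from the
# fielder) grows at a constant rate c.  The fielder never computes the landing
# point; c is simply fixed by where he happens to start.
c = 0.35
ts = np.linspace(0.05, T - 0.001, 200)
ball_x = vx * ts
ball_y = vy * ts - 0.5 * grav * ts**2

# Position (fielder downrange of the ball) that keeps tan(angle) = c * t.
fielder_x = ball_x + ball_y / (c * ts)

# As the ball comes down (ball_y -> 0) the fielder converges on landing_x.
print(fielder_x[0], fielder_x[-1], landing_x)
```

With no drag the required run works out to a smooth, constant-speed jog ending exactly at the landing point; the heuristic needs no knowledge of gravity, launch speed, or distance.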

By contrast, classical cognition is seen as isolated and independent from the body and environment, using algorithms to manipulate symbols and capable of running any algorithm (i.e. a general computer). It just does not ring true to me. I see the brain-in-a-vat as about as useful as a car engine in a washing machine. Why would anyone want a brain-in-a-vat? As a thought experiment to support skepticism it is so-so, because like many philosophical ideas it is concerned with Truth, capitalized, whereas the brain is not aiming at truth but at appropriate behaviour. A heart can be kept alive on an isolated heart perfusion apparatus and it will beat away and pump a liquid – but to what purpose? Even robots need bodies to really think in a goal-directed, real-time, real-place way, and so they are fitted with motors, cameras, arms and the like. Robots can be embodied.


Embodied thinking

TalkingBrains has a posting, “Embodied or Symbolic? Who Cares?” (here). Greg Hickok is asking what exactly is the difference between embodied and symbolic cognition. He takes a nice example of a neurocomputation that is understood, the way a barn owl turns its head to a sound source. If you have not seen it before have a look at the link – it is well explained and easy to follow.

He asks:

Question: what do we call this kind of neural computation? Is it embodied? Certainly it takes advantage of body-specific features, the distance between the two ears (couldn’t work without that!) and I suppose we can talk of a certain “resonance” of the external world with neural activation. In that sense, it’s embodied. On the other hand, the network can be said to represent information in a neural code–the pattern of activity in network of cells–that no longer resembles the air pressure wave that gave rise to it. In fact, we can write a symbolic code to describe the computation of the network.
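The owl’s interaural-delay computation is often caricatured as cross-correlation: a bank of coincidence detectors, each tuned to a different internal delay, with the best-matching delay indicating the sound’s direction. A toy numpy version (the signal and delay are invented, and this is a caricature of the Jeffress model, not the owl’s actual circuit):

```python
import numpy as np

rng = np.random.default_rng(1)
true_delay = 12                           # samples; the interaural time difference

sound = rng.normal(size=4000)             # broadband noise source
left = sound
right = np.concatenate([np.zeros(true_delay), sound[:-true_delay]])  # delayed ear

def corr_at(lag):
    """Correlation of left[t] with right[t + lag]: one 'coincidence detector'."""
    if lag >= 0:
        return np.dot(left[:len(left) - lag], right[lag:])
    return np.dot(left[-lag:], right[:lag])

# The detector whose internal delay matches the interaural delay fires hardest.
lags = np.arange(-30, 31)
best = lags[np.argmax([corr_at(l) for l in lags])]
print(best)   # recovers the true delay of 12 samples
```

As Hickok notes, the same computation can be described either as a body-dependent resonance (it only works because the ears are a fixed distance apart) or as a symbolic algorithm.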

I think, however, that the example is a bit off the subject. Of course there are many examples in the brain of clear computations that could be presented in the form of a computer program or an algorithm for manipulating symbols. And it is generally assumed that the brain manipulates entities that are best called symbols: words, objects, concepts, places and the like. Even the brain’s great ability to work with metaphors is like substituting symbols in schemes that relate a number of symbols in a particular way. Symbols and their manipulation seem useful in understanding the brain. Symbols in the brain, of course, would always be metaphors for actual processes, but then the idea of a symbol is by its nature always a sort of metaphor, standing in for whatever it is a symbol of.

But just because some, or perhaps a great many, processes in the brain can be pictured as manipulations of symbols, in ways akin to algorithms, this does not mean that the brain acts like a general computing device. Embodied cognition is computation only in the sense of task-specific processes and architecture, not the actions of a general device. To be understood, the brain has to be seen as an integral part of the body: it is and does its part of what the body is and does. The cognitive abilities and facilities of the brain are the ones the body needs to function. If those abilities are sometimes used for arbitrary and abstract things like playing chess, this does not mean that they are not individually ‘grounded’ in the body’s requirements and limitations.

Just because some task could be done in a particular way, does not mean that it is done that way. The brain is what it is; metaphors can help us understand its workings or they can also stand in the way of understanding. They do not dictate the nature of the brain. We always should keep in mind that metaphors are somewhat limited tools.

Sometimes choices are not thought out

In some competitive situations animals can produce random behavior rather than behavior based on prior experience. The anterior cingulate cortex is where strategies based on models of reality and history are generated; switching to random behavior is done by inputs to this part of the brain from the locus coeruleus. This was reported in a recent paper (citation below).

We generally assume that deciding what to do is based on the best guess of what will be successful. Why would random behavior ever be better? It would be if the world seemed to have changed and a new model needed to be constructed; random exploration would then be helpful. Or there is the case of an opponent that is better at the fight. “We find that when faced with a competitor that they cannot defeat by counterprediction, animals switch to a distinct mode of action selection consistent with stochastic choice. In this mode, characterized by highly variable choice sequences, behavior becomes dramatically less dependent on the history of outcomes associated with different actions and becomes independent from the ACC.” Primates appear to always try counterprediction before resorting to random choice.

The random behaviour is not the product of the ACC system but is generated elsewhere. A mixture of modelling in the ACC and a random overlay seems to be the normal state, with the amount of randomness depending on the confidence in the modelling: a balance between exploitation and exploration set by the performance of the ACC model. “We note that complete abandonment of an internal model and adoption of a fully stochastic behavioral mode is normally maladaptive because of the associated insensitivity to new information. In rats, such a mode appears to be triggered when repeated modeling efforts prove to be ineffective and thus bears a similarity to the condition of learned helplessness thought to follow the sustained experience of the futility of one’s actions. Intriguingly, functional imaging studies in humans have suggested that a chronic reduction in ACC activity might play a role in this disorder.”
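The switch can be caricatured in a few lines (a toy agent, not the paper’s rat data or model): a history-based strategy such as win-stay/lose-shift loses every round against a perfect counterpredictor, but once the agent notices that its model keeps failing and goes stochastic, its outcomes return to chance.

```python
import random
random.seed(3)

def wsls(prev_choice, prev_win):
    """Win-stay / lose-shift: a simple history-based 'model' strategy."""
    return prev_choice if prev_win else 1 - prev_choice

choice, won, mode, wins = 0, True, "model", []
for trial in range(2000):
    predicted = wsls(choice, won)        # the competitor models the agent...
    opponent = 1 - predicted             # ...and plays to defeat it
    choice = predicted if mode == "model" else random.randint(0, 1)
    won = (choice == opponent)           # agent wins by matching the opponent
    wins.append(won)
    # Switch to the stochastic mode when the model keeps losing.
    if mode == "model" and len(wins) >= 20 and sum(wins[-20:]) / 20 < 0.3:
        mode = "random"

print(mode, sum(wins[-500:]) / 500)      # random mode; win rate back near chance
```

In model mode the agent is fully predictable and loses every trial; random choice is the only strategy an unbeatable counterpredictor cannot exploit.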

This arrangement also seems to fit with the ‘deliberate’ errors that happen in well-learned sequences in sports, bird song, and children’s speech. Confidence in the model is occasionally tested.

Here is the abstract:

Behavioral choices that ignore prior experience promote exploration and unpredictability but are seemingly at odds with the brain’s tendency to use experience to optimize behavioral choice. Indeed, when faced with virtual competitors, primates resort to strategic counterprediction rather than to stochastic choice. Here, we show that rats also use history- and model-based strategies when faced with similar competitors but can switch to a “stochastic” mode when challenged with a competitor that they cannot defeat by counterprediction. In this mode, outcomes associated with an animal’s actions are ignored, and normal engagement of anterior cingulate cortex (ACC) is suppressed. Using circuit perturbations in transgenic rats, we demonstrate that switching between strategic and stochastic behavioral modes is controlled by locus coeruleus input into ACC. Our findings suggest that, under conditions of uncertainty about environmental rules, changes in noradrenergic input alter ACC output and prevent erroneous beliefs from guiding decisions, thus enabling behavioral variation.

Tervo, D., Proskurin, M., Manakov, M., Kabra, M., Vollmer, A., Branson, K., & Karpova, A. (2014). Behavioral Variability through Stochastic Choice and Its Gating by Anterior Cingulate Cortex. Cell, 159 (1), 21-32. DOI: 10.1016/j.cell.2014.08.037


Doing a task while asleep

A recent paper (citation below) describes subjects working away at a task, categorizing words, while asleep. Here is the abstract:

Falling asleep leads to a loss of sensory awareness and to the inability to interact with the environment. While this was traditionally thought as a consequence of the brain shutting down to external inputs, it is now acknowledged that incoming stimuli can still be processed, at least to some extent, during sleep. For instance, sleeping participants can create novel sensory associations between tones and odors or reactivate existing semantic associations, as evidenced by event-related potentials. Yet, the extent to which the brain continues to process external stimuli remains largely unknown. In particular, it remains unclear whether sensory information can be processed in a flexible and task-dependent manner by the sleeping brain, all the way up to the preparation of relevant actions. Here, using semantic categorization and lexical decision tasks, we studied task-relevant responses triggered by spoken stimuli in the sleeping brain. Awake participants classified words as either animals or objects (experiment 1) or as either words or pseudowords (experiment 2) by pressing a button with their right or left hand, while transitioning toward sleep. The lateralized readiness potential (LRP), an electrophysiological index of response preparation, revealed that task-specific preparatory responses are preserved during sleep. These findings demonstrate that despite the absence of awareness and behavioral responsiveness, sleepers can still extract task-relevant information from external stimuli and covertly prepare for appropriate motor responses.

This study does not address whether a task can be initiated while asleep because the subjects fell asleep while engaged in the task. And, of course, as movement is blocked during REM sleep, the initiation of movement while unconscious was also not tested. What was tested was the processing required to carry on the task and prepare for movement.

Some previous postings have looked at unconscious processes. Some experiments used unconscious priming to test whether such priming can result in particular processes: in (does control of cognition have to be conscious?) it was indicated that control of cognition (conflict adaptation) can be unconscious, and in (unconscious effects) it was shown that unconscious priming could be responsible for perceiving, doing semantic operations and making decisions. Other experiments have controlled the use of consciousness by forcing its content: in (discovering rules unconsciously), blocking the use of consciousness for a particular problem showed that unconscious processing was superior to conscious processing for discovering ‘grammatical’ rules. Now we have a third method, the comparison between awake and asleep states, showing that task-related processing can proceed unconsciously, from perception, through processing and decision making, to the preparation of motor responses.

It is not news any more that most of the processes in the brain can be done unconsciously. We are not aware of these processes, naturally, because they are unconscious, but that does not mean they do not happen. The bulk of the brain’s activity is unconscious, and we should not be surprised at unconscious thought. The exceptions that, to date, appear to require consciousness are the formation of explicit memories, the use of working memory, and the particular form of awareness that we associate with consciousness. Perhaps consciousness has more to do with a particular use of memory than with a particular type of thought process.

Kouider, S., Andrillon, T., Barbosa, L., Goupil, L., & Bekinschtein, T. (2014). Inducing Task-Relevant Responses to Speech in the Sleeping Brain. Current Biology, 24 (18), 2208-2214. DOI: 10.1016/j.cub.2014.08.016


Discovering rules unconsciously

Dijksterhuis and Nordgren put forward a theory of unconscious thought. They propose that there are two types of thought process: conscious and unconscious. “CT (conscious thought) refers to object-relevant or task-relevant cognitive or affective thought processes that occur while the object or task is the focus of one’s conscious attention, whereas UT (unconscious thought) refers to object-relevant or task-relevant cognitive or affective thought processes that occur while conscious attention is directed elsewhere.”

Like Kahneman’s System 1 and System 2, there is no implication here that there is purely conscious thought with no unconscious components, only that conscious awareness is part of the process. I prefer the System names as they avoid the possible interpretation that there might be purely conscious thought. System 1 is like UT and is characterized as autonomous, fast, effortless, hidden/unconscious, simultaneous/parallel/complex; System 2 is like CT: deliberate, slow, effortful, conscious, serial/logical/simple. The most telling difference is whether working memory is used; working memory restricts the number of items that can be manipulated in thought to about seven or fewer at a time and brings them into conscious awareness. It is often viewed as a difference between calculation and estimation, or between explicit and implicit knowledge.

The way these two processes are compared is to set out a problem and then compare the results after one of three activities: the subjects can consciously think about the problem for a certain length of time; they can spend the same amount of time doing something that completely engages their consciousness; or they can be given no time at all and asked for the answer immediately after the problem is presented. It has been found that with complex problems with many ingredients, System 1/UT gives better-quality results than System 2/CT, and both are better than immediate answers.

A recent paper by Li, Zhu and Yang looks at another comparison of the two ways of thinking. (citation below)

Abstract:

According to unconscious thought theory (UTT), unconscious thought is more adept at complex decision-making than is conscious thought. Related research has mainly focused on the complexity of decision-making tasks as determined by the amount of information provided. However, the complexity of the rules generating this information also influences decision making. Therefore, we examined whether unconscious thought facilitates the detection of rules during a complex decision-making task. Participants were presented with two types of letter strings. One type matched a grammatical rule, while the other did not. Participants were then divided into three groups according to whether they made decisions using conscious thought, unconscious thought, or immediate decision. The results demonstrated that the unconscious thought group was more accurate in identifying letter strings that conformed to the grammatical rule than were the conscious thought and immediate decision groups. Moreover, performance of the conscious thought and immediate decision groups was similar. We conclude that unconscious thought facilitates the detection of complex rules, which is consistent with UTT.

It is characteristic of System 2/CT that it is used to rigorously follow rules to calculate a result. However, there is a difference between following a rule and discovering one. This rule-discovery activity may be the same as implicit learning. “Mealor and Dienes (2012) combined UT and implicit learning research paradigms to investigate the impact of UT on artificial grammar learning. A classic implicit learning paradigm consists of two stages: training and testing.” The UT group had better results, but the authors characterized the process as random selection. The current paper shows that the UT group can find the grammatical rules illustrated in the training and then distinguish grammatical from ungrammatical strings. System 1/UT is better at uncovering rules and at identifying examples that break them. This does not seem to be a rigorous following of rules as in System 2, but more a statistical tendency or stereotypical categorization, in keeping with the nature of implicit learning.
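The training-and-testing paradigm lends itself to a concrete illustration. Below is a toy sketch of an artificial grammar check in Python; the finite-state grammar, its letters and its transitions are invented for illustration and are not the materials used by Li, Zhu and Yang or by Mealor and Dienes. Subjects never see rules like these explicitly; they only see strings the grammar generates, which is what makes the learning implicit.

```python
# Toy finite-state grammar, in the spirit of artificial grammar learning.
# Each state maps the letters it allows to the state they lead to.
GRAMMAR = {
    0: {"T": 1, "P": 2},
    1: {"S": 1, "X": 2},
    2: {"V": 3, "T": 2},
    3: {"V": 4, "S": 1},
    4: {},  # accepting state: a legal string may stop here
}
ACCEPTING = {4}

def is_grammatical(string):
    """Return True if the string can be generated by the grammar."""
    state = 0
    for letter in string:
        if letter not in GRAMMAR[state]:
            return False          # no legal transition: ungrammatical
        state = GRAMMAR[state][letter]
    return state in ACCEPTING

print(is_grammatical("TSXVV"))  # True:  T->1, S->1, X->2, V->3, V->4
print(is_grammatical("TXVS"))   # False: ends in a non-accepting state
```

In the experiments, of course, participants classify such strings without ever being shown the transition table; the interesting result is that the unconscious thought group approximates this check better than the conscious thought group.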

It is important to be clear that System 2 or CT is thought that has a conscious component; it does not imply that the thought is conducted ‘in’ consciousness. We are aware of the steps in a train of thought, but not of the process behind them, which remains hidden.


Li, J., Zhu, Y., & Yang, Y. (2014). The Merits of Unconscious Thought in Rule Detection. PLoS ONE, 9 (8) DOI: 10.1371/journal.pone.0106557


Knowing your grandmother

There is a spectrum of ways in which the brain may hold concepts, ranging from very localized to very distributed, and there is little agreement on where along that spectrum various concepts are held. At one end is the ultimate local storage: a single ‘grandmother’ neuron that recognizes your grandmother no matter how she is presented (words, images, sounds, actions etc.). This single cell, if it exists, would literally be the concept of grandmother. At the other end of the spectrum is completely distributed storage, where a concept is a unique pattern of activity across the whole cortex, with every cell involved in the patterns of many concepts. Both of these extremes have problems. Our concept of grandmother does not disappear if part of the cortex is destroyed – no extremely small area has ever been found whose loss obliterates grandma. On the other hand, groups of cells have been found that are relatively tuned to one concept. At the extreme of distributed storage, there is the problem of localized specialties such as the fusiform face area. More telling is the problem of a global pattern being destroyed if multiple concepts are activated at the same time: each neuron would be involved in a significant fraction of all the concepts, and so there would be confusion if a dozen or more concepts were part of a thought/memory/process. As we leave the extremes, the local storage becomes a larger group of neurons with more distribution, and the distributed storage becomes patterns in smaller groups of neurons.

The idea of localized concepts was thought improbable in the 1970s, and the grandmother cell became something of a joke. The type of network that computer scientists were creating became the assumed architecture of the brain.

Computer simulations have long used a ‘neural network’ called PDP or parallel distributed processing. This is not a network made of neurons, in spite of the name, but a mathematical network. Put extremely simply: there are layers of units; each unit has a value for its level of activity; the units have inputs from other units and outputs to other units; and the connections between units are weighted in their strength. The bottom layer of units takes input from the experimenter, and this travels through ‘hidden’ layers to an output layer, which reveals the output to the experimenter. Such a setup can learn and compute in various ways that depend on the programs that control the weightings and other parameters.

This PDP model has favoured the distributed network idea when modeling actual biological networks. Some researchers have made a PDP network do more than one thing at once (but ironically this entails having more localization in the hidden layer). This might seem a small problem for PDP, but PDP does suffer from a limitation that makes rapid one-trial learning difficult, and that type of learning is the basis of episodic memory. Because each unit in PDP is involved in many representations, any change in weighting affects most of those representations, and so it takes many iterations to get a new representation worked into the system. Rapid one-trial learning in PDP destroys previous learning; this is termed catastrophic interference or the stability-plasticity dilemma. The answer has been that the hippocampus may have a largely local arrangement for its fast one-trial learning while the rest of the cortex can have a dense distribution. But there is a problem: when a fully distributed network tries to represent more than one thing, it produces ambiguity. This is a real problem because the cortex does not handle one concept at a time – in fact, it handles many concepts at once, and often some are novel. There is no way that thought processes could work with this kind of chaos. This can be overcome in PDP networks, but again the fix is to move towards local representations.
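The layered arrangement just described can be sketched in a few lines of Python. This is only a bare illustration of a PDP-style forward pass; the layer sizes and weight values are arbitrary, and no learning rule is included.

```python
import math

def sigmoid(x):
    """Squashing function that keeps a unit's activity between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(layer_input, weights):
    """Propagate activity through one layer: each output unit sums its
    weighted inputs and squashes the total into an activity level."""
    return [sigmoid(sum(w * a for w, a in zip(row, layer_input)))
            for row in weights]

# Arbitrary example: 3 input units -> 2 hidden units -> 1 output unit.
hidden_weights = [[0.5, -0.3, 0.8],
                  [0.2,  0.7, -0.5]]
output_weights = [[1.0, -1.0]]

inputs = [1.0, 0.0, 1.0]          # activity supplied by the experimenter
hidden = forward(inputs, hidden_weights)
output = forward(hidden, output_weights)
print(output)                     # a single activity level between 0 and 1
```

Learning, in a real PDP model, consists of nudging the numbers in the weight matrices over many iterations; it is exactly because each weight serves many representations at once that a single rapid nudge disturbs what was learned before.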

This is the abstract from a paper to be published soon (citation below).

A key insight from 50 years of neurophysiology is that some neurons in cortex respond to information in a highly selective manner. Why is this? We argue that selective representations support the co-activation of multiple “things” (e.g., words, objects, faces) in short-term memory, whereas nonselective codes are often unsuitable for this purpose. That is, the co-activation of nonselective codes often results in a blend pattern that is ambiguous; the so-called superposition catastrophe. We show that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes, and the number of localist codes scales with the level of the superposition. Given that many cortical systems are required to co-activate multiple things in short-term memory, we suggest that the superposition constraint plays a role in explaining the existence of selective codes in cortex.
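The superposition catastrophe the abstract refers to can be shown with a toy example. The four-unit codes below are invented for illustration, not taken from the paper: under a distributed code, the blend of two co-active concepts can be identical to the blend of two entirely different concepts, while localist one-unit-per-concept codes keep blends decodable.

```python
from itertools import combinations

def blend(p, q):
    """Superimpose two patterns: a unit is active if it is active in either."""
    return tuple(max(a, b) for a, b in zip(p, q))

def pairs_matching(target, codes):
    """All concept pairs whose blend is indistinguishable from target."""
    return [pair for pair in combinations(codes, 2)
            if blend(codes[pair[0]], codes[pair[1]]) == target]

# Distributed codes: every unit participates in several concepts.
distributed = {"A": (1, 1, 0, 0), "B": (0, 0, 1, 1),
               "C": (1, 0, 1, 0), "D": (0, 1, 0, 1)}
# Localist codes: one dedicated unit per concept.
localist = {"A": (1, 0, 0, 0), "B": (0, 1, 0, 0),
            "C": (0, 0, 1, 0), "D": (0, 0, 0, 1)}

# Co-activating A and B under the distributed code is ambiguous:
print(pairs_matching(blend(distributed["A"], distributed["B"]), distributed))
# -> [('A', 'B'), ('C', 'D')]  -- the blend could equally be C with D
print(pairs_matching(blend(localist["A"], localist["B"]), localist))
# -> [('A', 'B')]              -- the localist blend stays decodable
```

With only four concepts the ambiguity already appears; with many concepts held in short-term memory at once, a fully distributed code makes such collisions the rule rather than the exception, which is the pressure toward selective codes the authors describe.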

The result is that our model of the brain moves a good way along the spectrum toward the grandmother-cell end. And lately there is a new method of studying the brain. Epilepsy patients have electrodes placed in their brains to monitor seizures prior to surgery, and these patients can volunteer for experiments while waiting for their operations. So it is now possible to record the activity of small groups of neurons in awake, functioning human beings. And something very similar to grandmother cells has been found. Some electrodes respond to a particular person – Halle Berry and Jennifer Aniston were two of the first concepts found to each have their own local patch of a hundred or so neurons. These cells responded not just to various images, but to written names and voices too. It happened with objects as well as people. This home of concepts, held as small local groups of neurons, has been observed in the area of the hippocampus.

The idea that the brain is one great non-localized network has also suffered from the results of brain scans. Areas of the brain (far from the hippocampus) appear to be specialized. Very specific functions can be lost completely by the destruction of smallish areas of the brain as a result of stroke. The old reasons for rejecting a localized brain organization are disappearing while the arguments against a globally distributed organization are growing. This does not mean that there are no distributed operations, or that there are unique single cells for each concept – it just means that we are well toward the local end of the spectrum.

Rodrigo Quian Quiroga, Itzhak Fried and Christof Koch wrote a recent piece in Scientific American (here) in which they look at this question and explain what it means for memory. The whole article is very interesting and worth reading.

Concept cells link perception to memory; they give an abstract and sparse representation of semantic knowledge—the people, places, objects, all the meaningful concepts that make up our individual worlds. They constitute the building blocks for the memories of facts and events of our lives. Their elegant coding scheme allows our minds to leave aside countless unimportant details and extract meaning that can be used to make new associations and memories. They encode what is critical to retain from our experiences. Concept cells are not quite like the grandmother cells that Lettvin envisioned, but they may be an important physical basis of human cognitive abilities, the hardware components of thought and memory.


Bowers JS, Vankov II, Damian MF, & Davis CJ (2014). Neural Networks Learn Highly Selective Representations in Order to Overcome the Superposition Catastrophe. Psychological review PMID: 24564411


Metaphor, Exaptation and Harnessing

We are used to the metaphor relating time to distance, as in “back in the 1930s” or “it was a long day”. And there is a noticeable metaphor relating social relationships to distance, as in “a close friend” or “distant relatives”. But these are probably not just verbal metaphors, figures of speech; they are much deeper connections. Parkinson (see citations below) has studied the neurobiology of this relationship and shows it is likely to be an exaptation: the shift of an existing evolutionary adaptation to a new or enlarged function. We have an old and well established brain system for dealing with space. This system has been used to also deal with time (rather than a new system being evolved), and was later further co-opted to also deal with social relationships.

What spatial, temporal and social perception have in common in this system is that they are egocentric. Space is perceived as distances in every direction from here, with ourselves at the ‘here’ center. In the same way, we are the center of the present ‘now’. We are also at the center of a social web, with various people at a relative distance out from our center. Objects are placed in perceptual space at various directions and distances from us. Events are placed various distances into the future or past. People are placed in the social web depending on the strength of our connection with them. It appears that, with a small amount of adaptation (or learning), almost any egocentric system could be handled by the basically spatial system of the brain.

Parkinson has looked at the regions of the brain that process spatial information to see if and how they also process temporal and social information. The paper has the details, but essentially: “relative egocentric distance could be decoded across all distance domains (spatial, temporal, social) … in voxels in a large cluster in the right inferior parietal lobule (IPL) extending into the posterior superior temporal gyrus (STG). Cross-domain distance decoding was also possible in smaller clusters throughout the right IPL, spanning both the supramarginal (SMG) and angular (AG) gyri, as well as in one cluster in medial occipital cortex”.

“These findings provide preliminary support for speculation that IPL circuitry originally devoted to sensorimotor transformations and representing one’s body in space was “recycled” to operate analogously on increasingly abstract contents as this region expanded during evolution. Such speculations are analogous to cognitive linguists’ suggestions that we may speak about abstract relationships in physical terms (e.g., “inner circle”) because we think of them in those terms. Consistent with representations of spatial distance scaffolding those of more abstract distances, compelling behavioral evidence demonstrates that task-irrelevant spatial information has an asymmetrically large impact on temporal processing.” As well as the similarity to the linguistic theories of Lakoff and Johnson, this is also similar to Changizi’s idea of cultural evolution harnessing the existing functionality of the brain for new uses such as writing.

Here is the abstract of the Parkinson 2014 paper:

Distance describes more than physical space: we speak of close friends and distant relatives, and of the near future and distant past. Did these ubiquitous spatial metaphors arise in language coincidentally or did they arise because they are rooted in a common neural computation? To address this question, we used statistical pattern recognition techniques to analyze human fMRI data. First, a machine learning algorithm was trained to discriminate patterns of fMRI responses based on relative egocentric distance within trials from one distance domain (e.g., photographs of objects relatively close to or far away from the viewer in spatial distance trials). Next, we tested whether the decision boundary generated from this training could distinguish brain responses according to relative egocentric distance within each of two separate distance domains (e.g., phrases referring to the immediate or more remote future within temporal distance trials; photographs of participants’ friends or acquaintances within social distance trials). This procedure was repeated using all possible combinations of distance domains for training and testing the classifier. In all cases, above-chance decoding across distance domains was possible in the right inferior parietal lobule (IPL). Furthermore, the representational similarity structure within this brain area reflected participants’ own judgments of spatial distance, temporal soon-ness, and social familiarity. Thus, the right IPL may contain a parsimonious encoding of proximity to self in spatial, temporal, and social frames of reference.
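The cross-domain decoding procedure in the abstract can be caricatured in a few lines of Python. Everything here is synthetic: the “patterns” are made-up four-element vectors built on a shared near/far base, and a simple nearest-centroid rule stands in for the paper’s machine learning classifier. The point is only to show the train-on-one-domain, test-on-another logic.

```python
import random

random.seed(0)

def centroid(vectors):
    """Average pattern of a set of trials."""
    return [sum(v[i] for v in vectors) / len(vectors)
            for i in range(len(vectors[0]))]

def classify(v, near_c, far_c):
    """Nearest-centroid decision: label by the closer class centroid."""
    d_near = sum((a - b) ** 2 for a, b in zip(v, near_c))
    d_far = sum((a - b) ** 2 for a, b in zip(v, far_c))
    return "near" if d_near < d_far else "far"

def fake_pattern(base, noise=0.3):
    """Synthetic trial: a shared base pattern plus random noise."""
    return [b + random.uniform(-noise, noise) for b in base]

# Pretend there is a common distance code: 'near' and 'far' share a base
# pattern across domains, with domain-irrelevant noise on each trial.
near_base, far_base = [1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]
spatial_near = [fake_pattern(near_base) for _ in range(20)]
spatial_far = [fake_pattern(far_base) for _ in range(20)]

# Train the decision boundary on the spatial domain...
near_c, far_c = centroid(spatial_near), centroid(spatial_far)

# ...and decode the social domain with the same boundary.
social_trials = ([(fake_pattern(near_base), "near") for _ in range(10)] +
                 [(fake_pattern(far_base), "far") for _ in range(10)])
correct = sum(classify(v, near_c, far_c) == label
              for v, label in social_trials)
print(f"cross-domain accuracy: {correct}/20")
```

Because the toy data share their base pattern across domains by construction, decoding transfers; the empirical finding is that something like this transfer actually holds in right IPL voxel patterns.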


Parkinson C, Liu S, & Wheatley T (2014). A common cortical metric for spatial, temporal, and social distance. The Journal of neuroscience : the official journal of the Society for Neuroscience, 34 (5), 1979-87 PMID: 24478377

Parkinson C, & Wheatley T (2013). Old cortex, new contexts: re-purposing spatial perception for social cognition. Frontiers in human neuroscience, 7 PMID: 24115928
