Monthly Archives: December 2013

Bonn model of volition

In a recent paper, Bonn (citation below) puts forward a model of free will. Although I wish we could simply stop using the words free will and determinism, Bonn's kind of treatment is the next best thing.

 

First he redefines free will. (This is the part that bothers my sensibilities. By denying conscious free will but not some other kind of free will, communication gets difficult and often misleading.)

 

“... psychologists tend to operationalize free will by relating it to self-report which requires a form of self-reflective conscious awareness. The implicit condition is that one must be able to report upon all the processes leading up to a decision or behavior in order for it to be “free,” and conversely, if my brain generates an idea or initiates an action without my conscious awareness it is somehow not “me” doing the thinking or acting. The conception of freedom argued for here, on the other hand, merely requires that thoughts and resulting actions be novel and internally generated, that they result from a combination of experiences and characteristics which is unique to the individual. Unconscious, or implicit, processes are, in this view, essential components of how an individual processes information: Regardless of whether a particular process can be observed and narrated by the conscious, self-aware part of the brain, it can still make unique and important contributions toward thought and action, and thus, to the independence of the individual. The arguments here, thus, specifically reject the simplistic notion that free will requires complete conscious awareness of the processes involved.”

 

He makes clear that he believes that internal processes belong to the individual whether they are conscious or unconscious, with no exclusively conscious “me”.

 

Then he begins to describe a model of how volition may work with our memory being an important part of this model.

 

Our memory is known to be inaccurate and many feel this is a fault (kludge if you like). “the lack of factual accuracy in our recollections may instead be the signature of a system that evolved, not to store accurate representations of the past, but instead, to provide a means of flexibly imagining the future, as well as conceiving of other hypothetical scenarios. Surviving in the real world does not depend upon accurate recall of every past detail as much as an ability to predict future contingencies. A system that can integrate details of multiple past events and is more sensitive to broad patterns and associations rather than accurately representing minutia would be well suited to this purpose. ..Growing evidence points to a core network of brain regions involved in remembering the past and imagining the future, as well as other forms of mental simulation… remembering the past and predicting the future incorporate memory systems in the medial temporal lobes, the lateral parietal lobes and the hippocampal formation, in addition to areas in the medial frontal lobes which are involved in perspective taking and theory of mind, or understanding others’ mental states. It seems that many forms of self-projection; imagining the past and future, navigation (imagining the self in different physical locations) and theory of mind (taking the perspective of other people) depend on this same core network of memory-related brain areas …When the brain is not occupied with processing external stimuli, activity reverts to this area where stored impressions are consolidated and reorganized. The default network seems to facilitate the internal experience of scenarios and perspectives that transcend simple recall, and it seems to do so automatically through making connections between, or recombining, elements of multiple memory traces. ”

 

This system is very personal to each of us – our history, values, habits, emotions all make the plans and goals that result from this memory system unique to us.

 

Bonn describes two motor systems. “The first motor control system runs from the sensory cortices to the primary motor region via the premotor area. Activity in these areas relates to stimulus-driven, or reflexive, responses to sensory input as well as to habitual behaviors such as grasping, eating, and walking which are performed largely unconsciously. The second motor system involves multiple regions, including the cingulate, frontal cortices, and basal ganglia, which connect to the primary motor cortex via the pre-supplementary and supplementary motor areas. Behaviors that require planning and goal maintenance engage some or all of this system. Processes mediated by pre-supplementary motor area (preSMA) connections generally allow for the flexible, online integration of goal states, decisions, and action priorities with feedback from the environment…The preSMA, along with the frontopolar cortex and the rostral cingulate, is active in tasks requiring decisions between multiple options…The frontopolar cortex is also involved in maintaining goal states such as suppressing responses to immediate environmental demands and, along with the anterior cingulate (ACC), is seemingly involved in the production of goal-directed action sequences. The ACC, through the preSMA, also seems capable of selecting and initiating action in the absence of external prompts, as well as monitoring and adjusting those actions in response to feedback. All told, there are extensive findings indicating that the preSMA is involved in interfacing multiple goal and decision-related subsystems with the primary motor cortex.” There seems little doubt that we have the ability to control our actions. The supplementary and pre-supplementary motor areas can internally guide choice and selectively inhibit action in a ‘volition-like’ way. We have control.

 

Bonn puts together the memory and the motor abilities in his model.

 

“To this point we have established two important concepts. First, processing in the default network allows humans to create novel combinations of information. Information stored in memory is broken down to elemental form and connections made between elements during times of reduced sensory input. This allows for patterns and relationships among multiple impressions to be extracted and for the flexible generation of counterfactual simulations. Second, faculties exist for internally maintained goals to exert flexible control over behavior. Humans can replace automatic, reflexive behaviors with internally guided, goal-directed action.”

 

The task-related network and the default network have been pictured as mutually exclusive, never active at the same time, but recent work shows that they interface and engage together in planning.

 

A diagram of the model appears in the paper.

Here is the paper’s abstract:

 

This paper examines the concept of free will, or independent action, in light of recent research in psychology and neuroscience. Reviewing findings in memory, prospection, and mental simulation, as well as the neurological mechanisms underlying behavioral control, planning, and integration, it is suggested in accord with previous arguments (e.g., Wegner, 2003; Harris, 2012) that a folk conception of free will as entirely conscious control over behavior should be rejected. However, it is argued that, when taken together, these findings can also support an alternative conception of free will. The constructive nature of memory and an integrative “default network” provide the means for novel and creative combinations of information, such as the imagining of counterfactual scenarios and alternative courses of action. Considering recent findings of extensive functional connections between these systems and those that subsume motor control and goal maintenance, it is argued that individuals have the capability of producing novel ideas and translating them into actionable goals. Although most of these processes take place beneath conscious awareness, it is argued that they are unique to the individual and thus, can be considered a form of independent control over behavior, or free will.


Bonn GB (2013). Re-conceptualizing free will for the 21st century: acting independently with a limited role for consciousness. Frontiers in Psychology, 4. PMID: 24367349

Another viewpoint

In my last post I gave my own ideas about consciousness and unconsciousness. I have to say that my outlook would be considered a bit extreme by some very respected neuroscientists, and so here I give a more orthodox view.

 

Bargh and Morsella (citation below), in “The Unconscious Mind”, have examined the various conceptions of the unconscious. Here is the abstract:

 

“The unconscious mind is still viewed by many psychological scientists as the shadow of a “real” conscious mind, though there now exists substantial evidence that the unconscious is not identifiably less flexible, complex, controlling, deliberative, or action-oriented than is its counterpart. This “conscious-centric” bias is due in part to the operational definition within cognitive psychology that equates unconscious with subliminal. We review the evidence challenging this restricted view of the unconscious emerging from contemporary social cognition research, which has traditionally defined the unconscious in terms of its unintentional nature; this research has demonstrated the existence of several independent unconscious behavioral guidance systems: perceptual, evaluative, and motivational. From this perspective, it is concluded that in both phylogeny and ontogeny, actions of an unconscious mind precede the arrival of a conscious mind—that action precedes reflection.”

 

The paper is realistic in its view of the limits of consciousness but still models the brain as having two minds, a conscious mind and an unconscious mind. In contrast, my view is that consciousness and unconsciousness are two parts of a single mind with consciousness not having much of a role outside of awareness.

 

Bargh and Morsella contrast the views of cognitive psychology and social psychology. Cognitive psychology concerned itself with the unconscious processing of subliminal information. “Because subliminal-strength stimuli are relatively weak and of low intensity by definition, the mental processes they drive are necessarily minimal and unsophisticated, and so these studies have led to the conclusion that the powers of the unconscious mind are limited and that the unconscious is rather “dumb”.” Social psychology looked at mental processes that were hidden from awareness. “This research, in contrast with the cognitive psychology tradition, has led to the view that the unconscious mind is a pervasive, powerful influence over such higher mental processes.” There is also the popular view of the unconscious, the Freudian model. The details of Freud’s model have not survived later science, but the authors feel that the general idea survives: “… in broad-brush terms the cognitive and social psychological evidence does support Freud as to the existence of unconscious mentation and its potential to impact judgments and behavior.” Pre-Freudian ideas of the unconscious are rare; before Freud, the conscious mind was naively viewed as ‘the mind’, or most of it.

 

The authors say that there is a consensus on conscious thought but not on unconscious thought. “…the qualities of conscious thought processes: they are intentional, controllable, serial in nature (consumptive of limited processing resources), and accessible to awareness (i.e., verbally reportable).” They point out two unconscious processes that have been studied in some detail: the pre-conscious perceptual processes that supply conscious awareness, and the acquisition of skills through practice until they become unconscious. Also in the mix is the idea, derived from hypnosis, of ‘unconscious’ meaning a person being unaware of the causes of their behavior.

 

“And this equation of unconscious with unintentional is how unconscious phenomena have been conceptualized and studied within social psychology for the past quarter century or so. Nisbett and Wilson’s seminal article posed the question, “To what extent are people aware of and able to report on the true causes of their behavior?” The answer was “not very well”, which was surprising and controversial at the time given the overall assumption of many that judgments and behavior (the higher mental processes) were typically consciously intended and thus available to conscious awareness. If these processes weren’t accessible to awareness, then perhaps they weren’t consciously intended, and if they weren’t consciously intended, then how in fact were they accomplished? This latter question motivated the social psychology research into priming and automaticity effects, which investigated the ways in which the higher mental processes such as judgment and social behavior could be triggered and then operate in the absence of conscious intent and guidance. Consequently, this research operationally defined unconscious influences in terms of a lack of awareness of the influences or effects of a triggering stimulus and not of the triggering stimulus itself. And what a difference this change in operational definition makes! If one shifts the operational definition of the unconscious from the processing of stimuli of which one is not aware to the influences or effects of stimulus processing of which one is not aware, suddenly the true power and scope of the unconscious in daily life become apparent. Defining the unconscious in terms of the former leads directly to the conclusion that it is dumb as dirt, whereas defining it in terms of the latter affords the opinion that it is highly intelligent and adaptive….social cognition research on priming and automaticity effects have shown the existence of sophisticated, flexible, and adaptive unconscious behavior guidance systems. These would seem to be of high functional value, especially as default behavioral tendencies when the conscious mind, as is its wont, travels away from the present environment into the past or the future. It is nice to know that the unconscious is minding the store when the owner is absent.”

 

So although this paper shows that the consciousness-bias is no longer strongly in vogue, it also shows that my viewpoint, that consciousness is awareness and not processes of cognition (or perception, action, emotion, volition etc.), is not generally accepted. I am not alone though – there is Thomas Metzinger.

 

John A. Bargh & Ezequiel Morsella (2008). The Unconscious Mind. Perspectives on Psychological Science, 3 (1), 73-79

If it is not understood, it is simple

There is a truism that I cannot find a good source for – but it is a truism all the same, and no doubt there is a quote somewhere. ‘If I don’t understand something, it is simple; if I don’t know how to do something, it is easy.’ It is similar to ‘don’t underestimate what you don’t know’, and to the Dunning-Kruger effect: unskilled people tend to overestimate their skill. It is a constant trap waiting to catch us out. It is also related to the ‘unknown unknown’.

 

In the mid-1950s, automatic machine translation of natural languages was said to be just a few years away, and it remained ‘just a few years away’ for the next thirty or so years. Twenty years further on, there is still no really good machine translation system. Why was the problem underestimated? Because it was not understood; natural language was not understood. What could be difficult? There are words, dictionaries, grammars, idioms – we just use them, like a person who simply opens their mouth and meaningful utterances come out. People could not see the problems before they tried to solve them.

 

Playing master-level chess is considered difficult but moving chess pieces is easy – it takes no intelligence at all to reach out quickly, move a pawn, and hit the clock button. It is figuring out what to move that takes the intelligence. Yet computers could play good chess long before they could move the pieces without being too slow or knocking over other pieces. Playing chess is the easier problem because we know how it is done, and it seems the harder one for the same reason. Moving pieces is the harder problem because we do not know how it is done, and it seems the easier one for the same reason.

 

This is part of our problem with consciousness. We have a ‘thought’ and we believe it was consciously produced. We are not just aware of this thought but ‘we had the thought consciously’ and it is a ‘conscious thought’. But the process that created that thought is not conscious. Did we hear any metaphorical gears or motors, see any metaphorical flashing lights, smell any metaphorical chemical reactions? No, the thought just happened, like a virgin birth. We are simply not able to examine the processes of thought. They are hidden, invisible and transparent to us. And so thought seems simple to understand: we just do it, easy-peasy. There are no conscious processes – there are processes that supply content to consciousness and processes that create the state of consciousness – but there are no processes within consciousness, no processes we are aware of. Our perception, cognition, emotion, action, volition, executive control, all those sorts of processes are not done in consciousness. We are just aware of the outcomes and sometimes of important signposts or steps on the way to outcomes. In other words, we have consciousness but not a conscious mind, because ‘mind’ implies a type of process that we are not conscious of.

 

This does not mean, as some would have it, that consciousness is useless or that philosophical zombies are possible. No, consciousness is an integral part of how the brain functions. It has got to be, because it is too expensive to be useless. Our behavior also changes when consciousness is absent for any extended period of time. We need to figure out its role, but that will be awkward if we keep thinking of consciousness as a mind. Mind implies a certain sort of whole person-ness. It implies processes that do not occur in consciousness. And unconscious processes, taken together, also do not add up to a mind – so no unconscious mind either. There is a mind, and part of it, a smallish part, is consciousness. The single, undivided mind has a certain sort of whole person-ness.

 

When are we going to understand this mind? Do not count on it happening in a few years. Don’t hold your breath. The neuroscientists seem to be discovering new questions faster than new answers. Understanding is going to take a while. But in order to get to that understanding, it is time to stop being dualistic in every sense: mind-body, spiritual-material, mental-physical, conscious-unconscious.

 

 

Does control of cognition have to be conscious?

What are the functions of consciousness in cognition? In fact, are there any? Over many experiments, it has been shown that unconscious information processing is common, powerful, sophisticated and not completely unlike ‘conscious processing’. Unconscious processing can reach higher semantic levels. But many theories, some very widely accepted such as the Global Neuronal Workspace, contain the idea of cognitive control and postulate that it is always associated with consciousness. These theories assume that unconscious stimuli cannot trigger top-down cognitive control, planning of strategies, or correction of possible errors. Other theories do not assume this and accept the possibility of unconscious control of cognition.

 

The authors of a recent paper, Desender and colleagues (see citation), set out to test unconscious control in a particular setting: conflict adaptation.

 

“Cognitive control kicks in when routine activation of behavior is no longer sufficient for optimal performance. When people encounter interference they adjust their behavior to overcome it. This interference can take various forms. For example, in a situation where relevant and irrelevant information can activate differential responses, this potential response conflict requires remedial action. In the current study we will focus on this particular expression of cognitive control, known as conflict adaptation. To study this effect we used a priming paradigm in which subjects are instructed to categorize a target (i.e. the relevant information) as fast as possible, while ignoring a preceding prime (i.e. the irrelevant information). When prime and target trigger the same response (i.e. congruent trial) responses are typically fast and error rates low. However, when prime and target trigger a different response (i.e. incongruent trial), both sources are highly conflicting, which typically leads to slower response times and elevated error rates. The interesting observation is that subjects continuously adapt to this conflicting information. When they experience a conflict on the previous trial, they will react to this by reducing the detrimental influence of the irrelevant information, leading to reduced priming effects (i.e. faster responses to congruent compared to incongruent trials) on the current trial. This is achieved by inhibiting irrelevant information and/or focusing on relevant information. This effect, also known as the Gratton effect, is typically calculated by computing the difference between congruency effects following congruent and following incongruent trials. It is a highly robust finding, independent of the particular paradigm being used.”
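
To make the quoted calculation concrete, here is a minimal sketch (my own illustration, not code from the paper) of how a Gratton effect could be computed from trial-level reaction times. The trial format, the function names and the millisecond values are all hypothetical.

# Minimal sketch of the Gratton effect calculation (illustrative only).

def congruency_effect(trials):
    # mean reaction time on incongruent trials minus mean RT on congruent trials
    incong = [t["rt"] for t in trials if not t["congruent"]]
    cong = [t["rt"] for t in trials if t["congruent"]]
    return sum(incong) / len(incong) - sum(cong) / len(cong)

def gratton_effect(trials):
    # congruency effect after congruent trials minus after incongruent trials;
    # a positive value means the priming effect shrank after conflict (adaptation)
    after_cong = [cur for prev, cur in zip(trials, trials[1:]) if prev["congruent"]]
    after_incong = [cur for prev, cur in zip(trials, trials[1:]) if not prev["congruent"]]
    return congruency_effect(after_cong) - congruency_effect(after_incong)

# hypothetical reaction times in milliseconds
trials = [
    {"congruent": True, "rt": 420}, {"congruent": False, "rt": 505},
    {"congruent": False, "rt": 490}, {"congruent": True, "rt": 430},
    {"congruent": True, "rt": 415}, {"congruent": False, "rt": 470},
]
print(gratton_effect(trials))   # a positive number would indicate conflict adaptation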

 

The researchers used this method (in ways that avoid several weaknesses of some previous, similar experiments) with the prime being either visible, and therefore conscious, or masked to be subliminal, invisible and therefore unconscious. Cognitive control would either be triggered by the unconscious information or not. They found it was triggered; there was unconscious conflict adaptation. “Consequently, our results add to the growing literature showing that many aspects of cognitive control do not seem to have an exclusive link with consciousness.”

 

The study also showed, using neutral primes, that in both the conscious and the unconscious conditions the adaptation effect was caused by conflict in the incongruent trials and not by a lack of conflict in the congruent trials. The adaptation could originate either through facilitation (faster, more accurate congruent trials) or through interference (slower, less accurate incongruent trials). Again comparing with neutral primes, they found a clear pattern of interference as the source of the adaptation in the conscious trials, but the unconscious trials were less clear and may show facilitation. The authors feel this last observation requires more study.
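
The neutral-prime logic can be made concrete with the same kind of toy numbers (again my own illustration; the millisecond values are invented): facilitation is the benefit of a congruent prime relative to a neutral one, interference is the cost of an incongruent prime relative to a neutral one.

# Hypothetical mean reaction times (ms); the neutral prime provides the baseline.
mean_rt = {"congruent": 430, "neutral": 450, "incongruent": 495}

facilitation = mean_rt["neutral"] - mean_rt["congruent"]    # benefit of a helpful prime
interference = mean_rt["incongruent"] - mean_rt["neutral"]  # cost of a conflicting prime

print(f"facilitation: {facilitation} ms, interference: {interference} ms")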

 

“We conclude that conflict adaptation is possible when the conflicting information remains unconscious, confirming the findings of Gaal et al. Thus, conflict adaptation, as a prevailing expression of cognitive control, does not seem to be a function exclusively reserved for consciousness. This observation contributes to the search for the limits and possibilities of unconscious processing and can be helpful to further unravel the mystery of the function of consciousness.”

 

This is another question mark for the idea of exclusive conscious control of anything. There seems to be growing evidence of conscious control not being needed for perception, action, volition, emotion, or cognition. When do we start thinking of consciousness as awareness not control? When do we start thinking of ourselves as whole beings and not disembodied consciousnesses? When do we stop identifying our very ‘selves’ with a flickering image?


Desender K, Van Lierde E, & Van den Bussche E (2013). Comparing conscious and unconscious conflict adaptation. PLoS ONE, 8 (2). PMID: 23405242

I have a point of view – here is a look at it

I have recently been in a conversation with a computer expert and I notice again how different my ideas are from those of the computer community. There are differences in starting points, methods of proof, definitions, and goals, to mention a few.

 

Starting point: I start dealing with the notion of thought at the point in evolution when animals start to actively move. In order to move, an animal must know (at some level) where it is, where it wants to go, and how to get there. This seems to be the first time in evolution that a nervous system was needed (or occurred), although many of the building blocks existed with other functions. The animal needed motor neurons to do the moving and sensory neurons to monitor the movement. Now that the ball was rolling, evolution would result in ever more sophistication: habituation, learning, memory, planning etc. These complexities demand interneurons between the motor and sensory neurons. The interneuron web would then become larger and more complex, ending up as a brain capable of thought. The nervous systems of different types of organisms may seem very different, but in essence they are very similar: a motor side and a sensory side with a thinking web between them.

 

Now the common approach is to put the concept of thinking in a basket with logic, problem solving, symbol manipulation, and the like. I put thinking in a basket with movement processes, modeling the world, intention, decision making, and the like.

 

One of the common examples of this difference is the ‘catching a fly ball’ problem. One way to do it is to develop an algorithm that calculates the path of the ball and therefore where it can be intercepted; the fielder then runs to that spot and catches the ball. The thing that is wrong with this is that the fielder has neither the information nor the time to do the calculation. However, the fielder can simply monitor the bearing and height of the ball and run with directions and speeds that keep the ball at the same spot in his visual field. If he can successfully do this he will end up intercepting the ball. This sort of brain process is more like a servo system than a calculation. The oddly curved paths that fielders run show that this is indeed how they catch the fly ball.
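
As a toy illustration of the servo idea (my own sketch, not taken from any paper discussed on this blog), the fragment below simulates a fly ball and a fielder who never computes a landing point. He only watches how fast the ball’s image is rising and runs back if it speeds up, in if it slows down – a rule known in the literature as optical acceleration cancellation, a close cousin of the strategy described above. All the numbers are made up.

# Toy sketch of a servo-like catching strategy (optical acceleration cancellation).
# The fielder never predicts the landing point; he just watches how fast the ball's
# image is rising and runs back if it speeds up, in if it slows down.

g, dt = 9.8, 0.02
ball_x, ball_y = 0.0, 1.0        # ball launch position (metres)
vx, vy = 18.0, 22.0              # ball launch velocity (m/s)
fielder_x = 55.0                 # fielder starts 55 m from the batter
run_speed = 7.0                  # he only ever chooses a direction, not a path

prev_tan, prev_rate = None, None
while ball_y > 0.0:
    # simple projectile motion for the ball
    ball_x += vx * dt
    vy -= g * dt
    ball_y += vy * dt

    # optical elevation of the ball as seen by the fielder
    tan_elev = ball_y / max(abs(ball_x - fielder_x), 0.1)
    if prev_tan is not None:
        rate = (tan_elev - prev_tan) / dt
        if prev_rate is not None:
            if rate > prev_rate:
                fielder_x += run_speed * dt   # image accelerating upward: ball will carry over, back up
            elif rate < prev_rate:
                fielder_x -= run_speed * dt   # image decelerating: ball will drop short, run in
        prev_rate = rate
    prev_tan = tan_elev

print(f"ball comes down near {ball_x:.1f} m; fielder ends up near {fielder_x:.1f} m")

The point of the sketch is that the fielder homes in on the landing spot by continuous small corrections, without ever solving for the trajectory – a servo, not a calculation.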

 

Thinking is probably done in many ways, and those ways probably tend to be specific to the type of task, because the brain was not built to be a general computing device; it evolved as a survival kit.

 

Methods of proof: History is littered with logical proofs of ideas that turned out to be false. There are two difficulties with logic. First, it cannot really disprove anything: we take agreed axioms and apply impeccable logic to reach a conclusion. Does this prove that an alternative conclusion is wrong? Well, we have a choice – either the alternative conclusion is wrong or one of our assumed axioms is wrong – take your pick.

 

The second problem is that logic is not impeccable if it is done in natural language about actual real entities. Words are too slippery and they can shift their meaning with the context. So for instance, the word ‘death’ can be a noun for a state of being (not being) or a noun for the event of entering that state. It can change back and forth in a logical verbal argument.

 

I prefer the observation of reality to logic. That is, I am an empiricist rather than a rationalist. Now, I know that any experiment or observation can be mistaken – perhaps even faulty more often than logical arguments are. However, experiments and observations do not purport to be individually correct; it is the weight of evidence that is convincing. I follow science as opposed to philosophy, and I follow it in a critical frame of mind.

 

So people can argue, using logic, that what they know about conventional computers (digital, algorithmic) also applies to the brain, but I want to see each element of the mapping between computers and brains observed and tested experimentally. Brains are only metaphorically like computers – if they are more than that, it has to be demonstrated. The experimental evidence for brains being general computing devices, digital and algorithmic, is extremely scant at best.

 

Definitions: Different fields of study have different definitions for the same word. A couple of years ago I was reading a paper without any knowledge of the authors or their field, and I found it very unusual and surprising. I kept thinking that they should say what their evidence was for what they were saying – they were implying models and theories that were not (or not yet) accepted. Finally I gave up and looked at some of the papers they referenced. They turned out to be all about computer science. This was not a paper about the brain at all but about computer simulations of the brain. It used the word neuron without any explanation that this ‘neuron’ was not a neuron at all but a little electronic circuit. I accept that words are used metaphorically and that is useful. But in this case a little hint would have helped. I re-read the paper and got the idea that the authors had been taken in by their re-definition of neuron and actually believed that they were saying something applicable to cellular networks in the brain because they were seeing it in electronic networks of e-neurons.

 

Understanding is not about semantics. Each word or standard phrase can have many meanings and a load of implied baggage from old theories. Words are useful for communication but cannot be taken too seriously. What is important is the understanding, not its verbal expression. This is why I have more faith in science than in many other fields – scientists are in contact with their reality and they can see, hear, smell, feel, prod and manipulate the reality they are studying.

 

Jim Andrews, in a blog post (http://netartery.vispo.com/?p=1174), says, “Oh, by the way, here is a mathematization of all conceivable machines – here is the universal machine, the machine that can compute anything that any conceivable machine can compute.” This uses an almost tautological definition of machine. The concept of machine appears to cover only those that can be mathematized and can compute. So it follows that any (such) machine is able to compute things and the universal machine (which is a mathematical creation) can match them. This says nothing about anything that is not a mathematizable, computing device. Machine has many other definitions too. For example, the classic simple machines are six in number: lever, wheel, pulley, inclined plane, wedge and screw. None of these is a mathematized computing machine. And computing has lost much of its general meaning and become married to the computer, so many people do not use it for the processes of the brain and call those processes cognition instead.

 

Playing with semantics is not what interests me. I do not confuse words with the real things they stand in for.

 

Goals: People who study brains have many different goals. Some want to prove or disprove some theory. Some want to identify what (exactly) is the unique difference between man and other animals that makes man uniquely unique. Some want to cure mental disabilities and illnesses. Some just want to understand how the brain works in the same way as any other organ works (the heart or the stomach, say). Some want to understand themselves. I find the complexity of the brain intriguing and just want to understand it. I am also interested in cultural and scientific shocks. The Copernican revolution, for example, or the Darwinian revolution were awesome events, and we are entering another which does not yet have a name but will be just as big. It has only just started, but every month there are surprising new discoveries. I want to follow the progress.

 

 

So that is my viewpoint: biological, evidential/empirical, avoiding semantic traps, open-minded but critical, watching the science unfold.

 

Metaphors are basic


A few weeks ago, a friend asked what I thought about metaphors. Actually I think they are extremely important to cognition. Many years ago I was looking at a list of rhetorical devices/figures of speech. Each had its Latin name under which it was taught as part of rhetoric in ancient and medieval times. What stood out was how different metaphor, simile, allegory, analogue (and the figurative by any other name) were from the other devices and how similar they were to each other. It was as if these were ways of thinking as well as forms of speaking.

This prompted me to look at investigators such as Lakoff and Johnson. Many of the ideas and theories about metaphor are very well known and I do not want to repeat them here. I want to deal with some less well known ideas.

Embodied cognition bridges the gap between one extreme, babies being born with an empty mind, a ‘blank slate’, and having to figure everything out for themselves, and the other extreme, in which babies are born with all the cognitive concepts they need to understand the world. Neither of these extremes is credible. But being born with a small group of very useful starting points and tools can allow the child to reach a general understanding relatively quickly. We can think of metaphor in this sense. The child has embodied cognition that uses metaphor to get from a physical grounding point to complex and abstract notions.

Take the structure that can be built from the child’s idea of motion, which is grounded in the child’s own ability to engage in intentional movement. We could draw a little map of this: there is ‘here’ where I am now, there is ‘start’ where I was when this movement started, ‘target’ where I want to get to, ‘path’, ‘goal’, ‘obstacle’, ‘finish’ and so on. As the child matures, other grounded concepts get added. Eventually the child has the concept of a journey, which is more complex but still heavily grounded in the child’s physical experience. But journey can become another map including many more ingredients in its structure. Lakoff did a lot of work on this particular metaphoric structure and I will not repeat those structures (like career, life, transport, exploration) here. As adults we end up (metaphorically) with nested piles of maps, each giving a structure: concepts, and the relationships between the concepts, for a group of things that can be related by metaphor.

If I want to explain a computer memory, I say that each bit of data is stored in memory at a particular address. What does this do? The word address brings up a map set; let’s call it the postal-system map set. Here everything has an address and there is a standard way to identify an address. Things (letters) can be delivered to an address by a system (the postal system) using various forms of transport, and so on. Once we understand the postal system, we can understand many other systems with similar structures by relabeling the concepts and making small modifications to the relationships – a little tweaking and a new map goes on the pile. In a sense, what the words in a language do is point out to the listener the appropriate metaphorical maps to aid in understanding what is being said. It is not just language; we can get these prods and nudges from many things in our environment and from our own thoughts. There are visual, auditory and kinesthetic metaphorical ‘maps’ too. One of the problems with experiments in this area is that very small unnoticed clues can affect the results – a sort of human ‘Clever Hans’ effect.
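
Purely as a toy illustration (not a claim about how the brain actually does it), the relabeling idea can be put in literal terms: keep the relations of a familiar map and swap in the labels of the new domain. Every structure and name below is invented for the example.

# Toy sketch: a familiar "map" is a set of roles plus relations; a new domain is
# understood by relabelling the roles and keeping the relations. Purely illustrative.

postal_map = {
    "roles": ["address", "item", "carrier"],
    "relations": ["every item is delivered to exactly one address",
                  "the carrier locates the address by a standard scheme"],
}

def relabel(source_map, substitutions):
    # reuse the source map's structure while swapping in the new domain's labels
    def swap(text):
        for old, new in substitutions.items():
            text = text.replace(old, new)
        return text
    return {"roles": [swap(r) for r in source_map["roles"]],
            "relations": [swap(r) for r in source_map["relations"]]}

memory_map = relabel(postal_map, {"address": "memory address",
                                  "item": "datum",
                                  "carrier": "memory bus"})
print(memory_map["relations"])
# ['every datum is delivered to exactly one memory address',
#  'the memory bus locates the memory address by a standard scheme']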

There is a sense in which language is just one huge metaphoric machine. There are dead metaphors. If you take a page of a dictionary and examine a word’s different meanings and etymology, you can see how many words are obviously derived from metaphors that have lost their figurativeness through long use and become literal. Look at the word ‘go’ as a good example. What does it mean for a metaphor to die and become literal? One, it is processed in a different part of the brain. Two, it has lost some of its poetic and emotional power. But more importantly, its metaphoric base has changed type; it no longer seems to call up its metaphorical roots.

It is a very important question for neuroscience and linguistics to answer: how does what I have (metaphorically) described as grounding, mapping, dying, pointing-to and so on actually happen in the brain? In terms of autism, it is also a medical question. How is this powerful tool of learning, thinking and communicating realized in the flesh?

A way to talk to yourself

What does talking to oneself actually do? Who knows? There may be some science out there on this subject but I have not encountered it. If any readers have info, please let me know.

 

However, even without firm ground we can still speculate. Talking to ourselves seems to me to be one part of the brain, or one neural process, sending information to other parts of the brain, or other neural processes, in the form of language, with consciousness as the link. Why not just communicate without the conscious link? Presumably that is how most communication in the brain is done, but for some reason it is not always possible. And why does the communication involve language? Again, presumably it usually doesn’t: all the sense information that is made globally accessible through consciousness does not have to pass through a language description. Instead we experience the perceptual model of ourselves in the world.

 

I notice that a large proportion of internal talk has to do with action: setting goals, planning action, doing actions, resisting action, preparing not to do a reflex act. It may be that this self-talk is a way for the motor system to communicate within itself and with other systems.

 

One of the simplest instances of internal talk to understand is the stopping of a reflex. If we pick up something very hot, we immediately drop it. This is a spinal reflex that does not use the brain at all. It would seem that I have no choice, but I know that if I decide ahead of time (and it must be ahead of time) that it is more important to not drop the object than it is to avoid a burn, then I can tell myself to be prepared for the heat and steel myself against the reflex to drop a hot object. I say to myself, “OK now, keep your concentration, whatever happens, don’t let it drop!”. So the process that decided priorities for goals/actions and can see the reflex coming has communicated through conscious language with the process that can reach down into the spinal cord and interfere with a spinal reflex. These two processes may have other ways to communicate (or not), but the conscious language route may be the fastest or the easiest. I suspect they can communicate by other means because it is possible to form a habit in these situations (I can drop red objects but not white ones, or the like) and this habit does not need the verbal command to myself each time it is used.

 

It is thought that although we do not speak aloud and do not actually hear the words, internal language uses all the normal language processes, including the motor and sensory ones. We actually form the action plan to say a word and we actually mimic the auditory input from hearing the word. And the word, although not said, affects memory recall and formation, emotional states and cognitive thinking just like a spoken word would. So an internal command (or question or other type of statement) has the same effect on our brain as an external command. Whether we follow the command or answer the question depends on who/what/why and so on. We do not always follow commands like a robot. But the process is probably the same in responding to language whether it is internal or external in origin. If someone shouted at me just before I reached for a hot object, “Don’t drop it!”, I would probably react in the same way as when I tell myself the same message. We are harnessing a very powerful phenomenon in language – as a communication tool IT WORKS. When someone says something to us it forces us to find the meaning, and that amounts to finding what those meanings are associated with in our brains. It is as if that person has reached into our brain and assembled, out of what they found there, a thought matching their own. We can barely stop the process no matter how hard we try. But we can accept or reject the usefulness of the new thought. When people communicate with us they do not necessarily convince us, and the same goes for internal language – it can broadcast the thought but not force any part of the brain to use the thought if it is not actually useful.

 

It is said that you can’t play games with your mind. I presume that is true in the same way that others cannot play games with your mind. If I find that someone is untrustworthy or gaming me for a fool, I simply ignore them. If I try to lie to myself, I simply ignore the message. I get the message but I do not use it. If something unexpectedly bad happens and someone says, “Don’t let that happen again”, my reaction would be, “And how am I supposed to stop it, smarty-pants?” So I would be wise not to give that command to myself but instead ask the question, “How am I going to see this coming next time and stop it?” That is something the brain will work on.

 

There are very many people who imagine they are divided in two, picturing one part (the ‘me’ part) as the commander and the other (the ‘my brain’ part) as the slave. The commander needs only will: if he holds his mouth right, he can force the brain to do things the way he wants. But the slave brain is not to be trusted and will sabotage the commander. The commander can summon more will, and the brain can become more stubborn. They have created an image of a state of battle, a power struggle, like the one Freud described. If we are used to thinking this way, we have to remind ourselves that we are one person and we are simply using language internally to be more effective – it is not half of us talking to the other half. We have to talk to ourselves in a realistic, honest, effective way if we want to get things done and solve problems using internal speech.

 

So there is my speculation without very much scientific evidence. I find it useful and I hope you can too.