Monthly Archives: May 2014

Discrete or continuous consciousness


Here is a confession: I asserted, in a recent posting, that consciousness is discrete like the frames of a movie. A regular commenter, Quentin Ruyant, asked for the evidence that this was an accepted consensus in neuroscience. I have found that it is not accepted by some important neuroscientists. It is an old idea, still current, but not in that sense ‘established’. It is still the way I view consciousness, because that is how it seems to me, but I will be more open-minded about it.

One reason the idea of a cyclic mechanism of consciousness appeals to me is that I have experienced it, or something like it. On several occasions I have been a passenger in a car at night on a winding road and have fought falling asleep by keeping my eyes open. This produced a discrete series of images rather than a smooth vision of the road. It proves nothing, but it does make me very comfortable with the idea. I had not heard anyone else describe this effect until I began reviewing the literature on discrete consciousness again.

I am not alone in feeling I have experienced the discontinuity. Oliver Sacks says the following:

There is a rare but dramatic neurological disturbance that a number of my patients have experienced during attacks of migraine, when they may lose the sense of visual continuity and motion and see instead a flickering series of “stills.”

The stills may be clear-cut and sharp, and succeed one another without superimposition or overlap, but more commonly they are somewhat blurred, as with a too-long photographic exposure, and they persist for so long that each is still visible when the next “frame” is seen, and three or four frames, the earlier ones progressively fainter, are apt to be superimposed on each other. While the effect is somewhat like that of a film (albeit an improperly shot and presented one, in which each exposure has been too long to freeze motion completely and the rate of presentation too slow to achieve fusion), it also resembles some of E.J. Marey’s “chronophotographs” of the 1880s, in which one sees a whole array of photographic moments or time frames superimposed on a single plate.

I heard several accounts of such visual effects while working in the late 1960s with a large number of migraine patients, and when I wrote about this in my 1970 book Migraine, I noted that the rate of flickering in these episodes seemed to be between six and twelve per second. There might also be, in cases of migraine delirium, a flickering of kaleidoscopic patterns or hallucinations. (The flickering might then accelerate to restore the appearance of normal motion or of a continuously modulated hallucination.) Finding no good accounts of the phenomenon in the medical literature—perhaps not entirely surprising, for such attacks are brief, rare, and not readily predicted or provoked—I used the term “cinematographic” vision for them; for patients always compared them to films run too slow.

In his article (here) for the New York Review of Books, Sacks gives a very readable and interesting history of this idea.

From what I can gather, the main problem with the discrete model of consciousness has to do with EEG measurements and how much of the cyclic nature of the waves is due to eye muscles rather than brain waves. The papers I have been reading have not produced a clear idea of the controversy so I intend to study this further in the future.


Remember the thalamus


What are we fairly sure about with regard to consciousness?

  • It only happens when the thalamus and cerebrum are in communication;

  • it seems to be associated with a positive signal 300 ms after an event (the P300);

  • it seems to be connected with working memory and the start of further stages of memory;

  • it seems to be connected to the focus of attention;

  • it is probably discontinuous (like frames of a movie);

  • it seems to be involved in communication of its contents to many parts of the brain;

  • gamma waves (about 40 per second) start in the thalamus, move front to back across the cortex, create synchrony along their path, and are essential for consciousness.

There are theories that put these observations together, such as the global workspace model. I am not convinced that these theories are complete, because they appear to bypass the importance of the thalamus. For years there has been a concentration on the cerebral cortex as if it were the “brain”.

Where is the model that puts the thalamus in the center of the action?

Let us assume for a while that the thalamus is the seat of consciousness. When we wake up, a group of neurons in the brain stem “wakes up” the thalamus; the thalamus “wakes up” the cortex and establishes the thalamo-cortical loops; then we are conscious. When we go to sleep, the brain stem puts the thalamus “to sleep” and the thalamus puts the cortex “to sleep”. Signals from the thalamus control the activity of most regions of the cortex: the input from outside, activity levels, wave synchrony. Signals from the thalamus coordinate attention and the use of working memory – but they are also the source of the cycle that produces a “frame” of consciousness and feeds it to the hippocampus to become part of a memory. The gathering of the contents of a frame of consciousness, through synchrony or short-term memory in the hippocampus, allows a certain type of global access to relevant information across the cortex.

Of course the cortex also affects the activity in the thalamus – this is a two-way street. But it does seem to me that the thalamus drives the mechanics of consciousness. Something like attention is probably controlled by the cortical products of cognitive processes, acting through cortical executive processes which feed back to the thalamus to be implemented by its control of cortical activity. There would be all sorts of complex interactions like this, where the control is in effect circular. But the thalamus would be the timekeeper and the trigger for each stage of the cycle that produces consciousness.

I cannot see a model of consciousness that ignores the thalamus as any more acceptable than one that ignores the cortex.


Tyson and philosophy


Neil deGrasse Tyson is a science TV star. He is very popular but perhaps not with philosophers because he often shows his low regard for their subject. He has raised a lot of ire by advising bright students to go into science rather than philosophy. He just seems to lack respect for philosophy.

Philosophers answer that he is being anti-intellectual, even philistine, but I have to say that, apart from philosophy and religion, he is not critical of the arts and humanities. He does not seem anti-intellectual, just anti-philosophical. I suspect that a lot of scientists would not quite agree with Tyson but come very close to it. They are just not as outspoken.

The problem seems to be different views of what is a big, deep or important question and what is to be done with such questions. Science, philosophy and religion all deal in ‘big questions’ and their questions overlap. Each has its own criteria for what an answer would look like. They are bound to disagree often. To many, the solution is to cut up the inquiry with boundaries, but science in particular never stays within its boundaries if it sees a method to tackle a question. Thus it always seems to be muscling in on other subjects’ territory and ignoring their ‘knowledge’.

Massimo Pigliucci, who claims to be his friend, has written an article (here), Neil Tyson And The Value Of Philosophy. In it Pigliucci gives a philosopher’s answers to Tyson. He is also a biologist and so his remarks are more nuanced than some. He claims not to be upset by Tyson’s amount of air time. This is obviously not true of some of Tyson’s critics.

Tyson, like many people, is frustrated and annoyed by semantic discussions, and points out that discussions in philosophy seem to end up being about words and not about ideas or actual things. It seems to me that this is one of the things that prompts some people to lean towards science and others towards philosophy. Pigliucci describes it differently, but it amounts to the same division. He has philosophy as a conceptual exploration as opposed to science as an empirical one. Exactly – and another way of saying that is that philosophy is about verbal concepts and science is about the physical world.

Pigliucci says both science and philosophy are dwelling on the same questions. That may be, but the nature of acceptable answers is so different that the questions are actually not the same. Tyson is frustrated with the lack of pursuit of a question caused by the distractions of all the philosophical baggage that a question has accumulated. He just wants to leave the philosophy to the side and get on with solving the question.

Tyson has said that philosophy is not helpful or useful to science. Pigliucci disagrees and his main argument is that science is the child of philosophy. True, but children leave home and do not always end up the way their parents had hoped. I have noted recently that many philosophers are annoyed that neuroscience has not followed their lead in many ways. Too bad.

This brings us to the final point. Tyson says that philosophy cannot help with the frontiers of physical sciences (like quantum mechanics) because there is a limit to what can be done thinking in an armchair. We have to agree with that: quantum mechanics would not and could not have been developed without experimentation. Pigliucci seems to have only a weak answer – that some good things can be just thought up.

Personally, I think you can be interested in philosophy or not (I am moderately interested), but philosophy does not have much to do with science or how science should be done.


Split-brain that is not so noticeable

My brother had a cleft palate when he was born. During foetal development, when the two sides of his face came together, there was a fault in the process and the two sides of the roof of his mouth did not meet. This had to be corrected with surgery. He had great difficulty with language as a child. When he started school no one could understand him except his family. In many ways he was a normal child but there was always something different about him. I am quite sure he was never examined by a doctor for the odd aspects of his behavior. Later, a number of people thought he had mild Asperger’s syndrome. But when he was young that sort of thing was not known on the rural prairies.

But in the back of my mind is a conversation I had with a neuroscience tutor many years ago. Drinking in a pub with other students and tutors, I was asked to explain my dyslexia (also never looked at by a doctor) – how it felt from the inside. I started, and at some point said that it might be because I was left-handed but right-eyed; one of the tutors interrupted and said that theory was wrong. He asked me a lot of questions that had nothing to do with reading or writing, but they did seem at the time to be hitting a bunch of odd things where my answers were surprising to the group. I remember two in particular: do you hear something but not clearly, and then about the time you ask ‘what?’ you know what was said?; and, which do you identify with most, your conscious mind or your unconscious mind? Some in the group thought I was not being truthful and said things like, “if you really were like that then we would be able to tell, and you seem perfectly normal.” Then came the weird question: does anyone closely related to you have a harelip or a cleft palate? When I said my brother did, the next question was – is he ‘normal’? I had to say that he was not as normal as all that, and maybe somewhat handicapped with language. “Well,” said the tutor, “you probably have part of the connection between the two hemispheres missing, and probably your brother has more missing than you.” It was an eye-opener for me about how others viewed themselves, just as it was for them to hear my answers.

That conversation stayed there in the back of my mind, without proof or disproof, for close to 40 years. Recently there has been work on dyslexics showing that a very particular part of the nerve connections between the two hemispheres is missing (the corpus callosum is partially missing in the auditory/language region). The fault is graphically shown in the paper: Plessen et al; Less developed corpus callosum in dyslexic subjects – a structural MRI study; Neuropsychologia (2002) (pdf).

Very recently it has been shown that some corpus callosum faults may be partially made up for by the creation of unusual communication nerve connections between the hemispheres. Tovar-Moll et al; Structural and functional brain rewiring clarifies preserved interhemispheric transfer in humans born without the corpus callosum; PNAS (2014) (PNAS). Here is the abstract:

Significance: Individuals subjected to surgical transection of the corpus callosum (“split-brains”) fail to transfer information between the cerebral hemispheres, a condition known as “disconnection syndrome.” On the other hand, subjects born without the callosum (callosal dysgenesis, CD) typically show preserved interhemispheric communication. To clarify this paradox, which has defied neuroscientists for decades, we investigated CD subjects using functional and structural neuroimaging and neuropsychological tests. Results demonstrated the existence of anomalous interhemispheric tracts that cross through the midbrain and ventral forebrain, linking the parietal cortices bilaterally. These findings provide an explanation for the preserved cross-transfer of tactile information between hemispheres in CD. We suggest that this condition is associated with extensive brain rewiring, generating a new circuitry that provides functional compensatory interhemispheric integration.

Abstract: Why do humans born without the corpus callosum, the major interhemispheric commissure, lack the disconnection syndrome classically described in callosotomized patients? This paradox was discovered by Nobel laureate Roger Sperry in 1968, and has remained unsolved since then. To tackle the hypothesis that alternative neural pathways could explain this puzzle, we investigated patients with callosal dysgenesis using structural and functional neuroimaging, as well as neuropsychological assessments. We identified two anomalous white-matter tracts by deterministic and probabilistic tractography, and provide supporting resting-state functional neuroimaging and neuropsychological evidence for their functional role in preserved interhemispheric transfer of complex tactile information, such as object recognition. These compensatory pathways connect the homotopic posterior parietal cortical areas (Brodmann areas 39 and surroundings) via the posterior and anterior commissures. We propose that anomalous brain circuitry of callosal dysgenesis is determined by long-distance plasticity, a set of hardware changes occurring in the developing brain after pathological interference. So far unknown, these pathological changes somehow divert growing axons away from the dorsal midline, creating alternative tracts through the ventral forebrain and the dorsal midbrain midline, with partial compensatory effects to the interhemispheric transfer of cortical function.



Can we upload our brains to computers?


Some years ago Chris Chatham posted a look at the differences between a brain and a computer (Chatham post), and recently Steven Donne revisited the idea in a post (Donne post). These are both interesting reading.

I part company with Donne on several points. The first has to do with the definition of a computer. Some people define ‘computer’ so widely that it includes anything that computes anything. In that case the brain is a computer and there is no metaphor to examine. On the other hand, it is reasonable to include more than the stock home or business computer. Super-computers, robotic computers and those that are just around the corner are metaphor material. Donne brings up computers that are built precisely to mimic and explore the brain – simulations of the brain. As a metaphor this is lame: if I build a replica of something, there is nothing to be gained in understanding by a metaphor between the original and the replica. So we are left with brain simulations in fairly conventional but advanced computers, or some more faithful replica of the brain.

Second, Donne feels that there will not be a problem with size, and appeals to the idea that computing power increases exponentially, so it cannot be all that long before a computer could be built that would handle a brain simulation in real time. He points to 1 second of brain activity having been simulated. Well, that should be ‘sort-of-simulated’. The 1 second took 40 minutes to compute (a factor of 2400). The brain activity for the simulation was a simple network exercise – not really brain activity, missing the complications of real brain physiology (factor of ?). The amount of brain simulated was small – 1.73 billion neurons simulated with about 83,000 processors (factor of 50), and 10.4 trillion synapses were modeled (factor of 100+). I assume that the glial calcium-ion communication, magnetic and chemical fields and so on were not part of the simulation (factor of ?). So I am assuming that something like 5 million times the size of this simulation would be needed for a realistic one, and that would take 40-50 years of Moore’s-Law-type exponential growth at a bare minimum. But this would not give a brain-receiving computer that could accept the upload of a real human brain. That is a much bigger problem than a standard simulation. There would have to be an understanding of how and where all information was held in that human brain, and a way to ‘read it out’ and place it in the simulation so that it has the same usefulness. Are we going to understand the brain at that level within 50 years? Maybe, but I doubt it.
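A few lines can check this arithmetic (a sketch under my own assumptions: only the three factors given explicit numbers are multiplied, the ‘?’ factors are ignored, and a Moore’s-Law doubling time of 1.5-2 years is assumed). The product of the explicit factors alone is about 12 million – the same order of magnitude as the 5-million guess – and it lands in roughly the same 40-50 year range:

```python
import math

# The three factors given explicit numbers (the unknown '?' factors
# for physiology, glia, fields etc. are left out of this sketch):
slowdown = 2400   # 1 s of activity took 40 min to compute
scale = 50        # only a fraction of the brain's neurons simulated
synapses = 100    # synapse-count shortfall

shortfall = slowdown * scale * synapses   # combined factor
doublings = math.log2(shortfall)          # Moore's-Law doublings needed

# Assuming the historical 18-24 months per doubling:
print(f"combined factor: {shortfall:,}")             # 12,000,000
print(f"doublings needed: {doublings:.1f}")          # 23.5
print(f"years at 1.5-2 yr/doubling: "
      f"{doublings * 1.5:.0f}-{doublings * 2:.0f}")  # 35-47
```

Even this lower bound, with the unknown factors set to 1, puts a realistic real-time simulation several decades away.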

Thirdly, Donne says that if it is possible, it will happen. I think that is possible – once. But the idea that anyone who wants to be immortal could just have their brain uploaded at death is plainly silly. It would be too expensive to do more than a few times, even if it were possible. I can imagine what would happen the first time there was not enough ‘power’ for both the living people and the simulated brains: the power would be switched off for some simulations. It seems the height of arrogance for someone to assume that they have the right to be immortal and to have future generations honour that right. The people at a time more than 50 years into the future will have more pressing problems, given current predictions of climate change, population growth, resource depletion, pollution, more destructive wars and whatever else is in store. Immortal brains in simulations seem to me part of the optimistic, myopic vision of science fiction lovers – futures of space travel, infinite resources, even time travel. Humans will be lucky to live through the century without being reduced to a rough and hard dark age.


Seeing past the trick

We are used to this happening: we see a magic trick, it is convincing magic, but we know it is not actually magic even though we do not know how the trick is done. If we ask someone to explain what happened, we do not want an explanation of the apparent magic; we want an explanation of the trick. It is not an explanation to tell me about the powers of a wand to make an object disappear. What I want as an explanation is a non-magical one. I want to know how the magician led me, and many other people, to have our attention on one spot while he manipulated something at another spot.


When I was a child, I thought there were little people in the radio, singing and talking. I found this hard to believe but I saw no alternative. I was very relieved when I learned about radio waves. I did not resist the explanation but welcomed it, even though I hardly understood it at that age. Anything was going to be welcome compared to tiny people – I did not feel robbed when the radio was not filled with singers and announcers.


The situation with consciousness is similar but the reaction is different. We know from experience that things are not always as we perceive them. We can be fooled by magic, can hallucinate, can be hypnotized, or just simply can get things wrong. We can see things differently because of what we hear, and vice versa. There are optical illusions. There is synesthesia and other oddities and abnormalities of perception. Yet consciousness is not usually dealt with as a fragile thing prone to distortions or as an illusion. We also know that much of what seems to be the product of conscious thought is not. But conscious thought is not usually dealt with as a fraud. Here we have something that is very much like a magic trick, but usually it is the magic that is believed and not the trick.


It is clear that you cannot solve a magic trick by believing what you are intended to see. To understand, you have to look at the problem from a different angle, bypass the ingredients of the illusion and concentrate on the ingredients that could cause what actually happened. We have to refuse to accept the theory of the wand causing things to disappear. It is ruled out, and we look at what is left.


But when someone does this with consciousness – when they rule out the supernatural, refuse to treat it as a mystery, and do not invoke new laws of physics – they are criticized for not actually explaining the mystery in the mystery’s terms. They are said to have sidestepped the real question.


It is possible to just think of consciousness as a monitor screen showing some of what is going on and consider that it is not ‘you’ but just a small part of you. Then there would be a chance of understanding things without the illusion.




Avoid the simplistic


Why are popular notions of neuroscience so often simplistic? I am going to list eight of the many reasons. I am sure there are more.

  1. Neuroscience is very young and very active. That means there are new facts to be considered almost weekly. Even areas that you would think should be relatively firm are not: plain anatomy and biochemistry of the brain, for example. Also, many new ‘facts’ evaporate with further investigation. In other words, everything is in flux. This would not be a problem if there were not great public interest in the subject, with reporters publishing new results whenever possible. But the public would actually like some firm answers that they can understand, remember and use in their lives. At the present time many popular ‘answers’ are bound to be simplistic for that very reason.

  2. We are used to simplistic explanations of the brain/mind. There is a saying that what you don’t understand is simple. There were millennia of no explanation at all. We want to move – we put one foot in front of another. We want to say something – we open our mouths and the words come out. We want to recall something – it pops into our heads. There was no effort, no feeling of complex happenings in the background. What’s to explain? When explanations were finally sought, they were simplistic because they were arrived at by looking at the behavior that comes out of a ‘black box’ and thinking of the simplest way that behavior could arise.

  3. The brain is actually not easy to study and the methods are complex. Quite often, with scans for example, more seems to be shown than actually is. The results are naturally misleading and are presented in misleading ways in the popular press. The notion that this spot ‘lights up’ because it is the spot for recognizing sports cars is just not reasonable.

  4. Philosophers and other thinkers had thought about the mind (without the brain being actually involved) for a long time and had developed a number of concepts that together made a model of the mind. But when these long-standing concepts were looked for in the brain there was not an easy fit. Some would say no fit at all. So there were many very simplistic explanations of brain functions. ‘Self’, for example, is about as unitary as things get in a mind model, but in a ‘brain’ model it is possible to find a number of self-like functions, and no unitary one. The application of these older mind concepts to brain functions leads to some very simplistic notions. What is more, many thinkers feel that these legacy concepts have more validity than the process they are describing – for example, that ‘willpower’ is real and that a process in the brain must conform to that specification.

  5. There have been brain descriptions prior to our current one that have left behind metaphors, often with simplistic explanations. We have had vapour systems, hydraulics, telephone exchanges, computers and, I am sure, others. Bits and pieces like ‘pressure’ are still used in explanations although they have not been shown to apply to our present picture of the brain and do not actually explain anything. As well as these older mind and brain concepts, there is a current one that treats the brain (and the mind) as a calculating mechanism. Many explanations based on calculation are put forward without any evidence from neuroscience or behavioural science, only that they work in an electronic network. Models based on a metaphor can be very simplistic.

  6. There are many theoretical models of how the brain works (or how some parts sort of work) that have some evidence to back them, but not enough to make them convincing to a consensus of neuroscientists. In fact, it seems that no relatively deep explanation has yet emerged that is accepted by most of the field. Each of these theories has concepts and mechanisms associated with it, and the followers of that theory use these words as if they described real, obvious or proven things. Nothing is wrong with that – it is how science works – but it is confusing to those trying to make practical sense of all these words. They sometimes end up with bits from different models in a structure of their own making. That is everyone’s right, but these simplistic hodge-podges should not be published in the popular press as science.

  7. Neuroscience has become a popular way to bolster an idea. Want to sell something? Make it good for your memory. Want to stop something? Make it bad for your child’s upbringing. Some of the claims contain a grain of truth, some are maybe not untrue, and some are just rubbish. But they all tend to be simplistic, because simple, symmetrical, catch-phrasy things work in salesmanship. They are not out there to enlighten you but to manipulate you.

  8. There are also ways that neuroscience is used by those who have an axe to grind rather than a product to sell. In legal, religious and political arguments, neuroscience in a very simplistic form is being used. If the point is to win the argument rather than find the best result, anything that will work is OK.

The brain is extremely complex and is not yet fully known (maybe not more than a fraction of it is). It is not as it appears to us but has an illusory quality. Anyone who gives a neat, comfortable, easy-to-grasp model is likely to be wrong, so you need to develop your antennae for recognizing the simplistic. Here is an example, a graphic from Eric Braverman.

This is neat; it has a certain symmetry; it has words that you have encountered elsewhere, used by knowledgeable people; it implies a completeness; there is no hedging. But if you notice these attributes, you will see that it is simplistic (rather than simple). If a car had this much solidity, you would not buy it from a used car salesman. If you are not put off by this chart, you might read a bit by him. You then might notice that the idea of balance, without any idea of what is balanced and how balance is achieved, is suspect. Or you may notice that a prominent idea, “the edge effect”, is credited to Llinas but used with an entirely different meaning. There are little statements that are factually wrong, and an unreasonable attachment to the number 4 (Greek elements, humours etc.). It will then not surprise you that this doctor makes a huge amount of money from supplements, tests, books and consulting based on his theory. Nor will you be surprised that he has been criticized for his methods. All these things are there to notice and check up on, but the important thing is to learn to be suspicious of the simplistic approach in the first place.





There is a new posting on Babel’s Dawn that is very interesting. Bolles is outlining a new, higher-level analysis of language: phonetics, vocabulary, syntax – and he adds sociality. Roughly, sociality is a set of limits on language that allow speaker and listener to cope. He starts with the notion that a sentence can have a topic and a sub-topic can be added, but a sub-sub-topic is something we cannot cope with.

It is a nice post and I recommend reading it (a fourth component of language).

This sociality idea touches on ideas that I have been playing with.

  1. Dealing with written language leaves out something very important. Oral language is limited in its complexity by the number of things that can be held in working memory (quoted as 2-7, average about 4). This is not literally 4 words because there can be chunking, where a number of words form one ‘thing’ in the mind. So: understanding an oral sentence requires that the words are in an order that allows chunking and that the total number of resultant ‘things’ is not greater than the working memory limit of the listener. In other words, the limit of coping may be a firm physical limit.

  2. The sociality level of analysis is really about communication. If language is being used to communicate then the principles that allow language to achieve communication are the tools of sociality analysis. Oral communication is the primary function of language and therefore the highest level of analysis should properly be how oral communication is possible.

  3. Once we get to the sociality analysis, I think we can dispense with the sentence as THE unit of language. Phrases on the one hand, and multi-sentence groups on the other, can be analyzed at this level. For example, the use of ‘so’, not just in its usual senses but to signal a change of speaker or of the overall topic of conversation, has to be thought of as a meta-communication. In a parsing diagram it would have to appear outside the sentence.
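The working-memory constraint in point 1 can be sketched as a toy check (illustrative only: the chunk groupings are hand-assigned, and the limit of 4 is just the average working-memory figure quoted above, not a model of how the brain actually chunks):

```python
# Toy illustration of the working-memory limit on oral sentences.
# Assumptions (mine, not the post's): chunk groupings are hand-assigned,
# and the limit of 4 is the average working-memory figure quoted above.
WM_LIMIT = 4

def fits_working_memory(chunks, limit=WM_LIMIT):
    """A sentence can be 'coped with' if its chunks fit within the limit."""
    return len(chunks) <= limit

# "the old man / by the door / bought / a red hat" -> 4 chunks: fits.
chunked = [["the", "old", "man"], ["by", "the", "door"],
           ["bought"], ["a", "red", "hat"]]
print(fits_working_memory(chunked))  # True

# The same 10 words treated as unchunked items would not fit.
unchunked = [[word] for chunk in chunked for word in chunk]
print(fits_working_memory(unchunked))  # False
```

The point of the sketch is only that word order which permits chunking can bring a 10-word sentence under a 4-item limit, which is the sense in which the limit of coping may be a firm physical one.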


Going up and coming down


Most people think of speaking as a top-down process and listening as a bottom-up one. So if I say something, the assumption is: I have an idea, it is put into words and then into commands to muscles, and the sounds of the words come out of my mouth. All is top-down, driven from a high-level intent. And, if I listen: the sound waves enter my ears, the input is processed into words and then into meaning. This describes a bottom-up process based on a low-level input becoming high-level perception. There is an implication that these are serial operations – like one person descending a staircase and another climbing a similar one. But there are experimental results that make the picture more confused.

My grandmother was a great one for finishing other people’s sentences. When all went smoothly, Grandma and her friends would end their sentences in unison. But if the friend hesitated, Grandma didn’t stop and finished the sentence for them. In the stairs metaphor, she didn’t wait for the speech to come up the stairs but started down to meet it. It seems we all do this to varying degrees but usually not aloud.

Dikker et al (citation below) investigated the prediction of language. They made a series of somewhat silly or surreal drawings and had them rated for how predictable a description of them would be. They then had one speaker and 9 listeners in fMRI scanners view each picture, followed by the descriptive sentence being uttered. The activity in the posterior superior temporal gyrus, which is associated with language and with the prediction of language, was measured (this measure was given the title of ‘attentional gain’).

“We here adopt the term “attentional gain” to describe how generating internal forward models/predictions may increase the excitability of neuronal populations associated with predicted representations in language production as well as comprehension. During speech planning, it has been argued that speakers internally simulate articulatory commands, and that highly predictable speech acts increase the attentional gain for their expected perceptual consequences, the neural effects of which persist into the perceptual stage; during perception, listeners also show prediction error responses to unpredicted words, whereas lexical-semantic prediction error appears to play no role in the speaker. The speaker likely produced each sentence exactly as planned/anticipated. Predictability more strongly affects attentional gain in comprehension, not only during anticipation. Thus, as summarized in the figure, our results suggest that both speakers and listeners take predictability into account when generating estimates of upcoming linguistic stimuli. These changes in activation resulting from predictive processing, in turn, impact the extent to which brain activity is correlated between speakers and listeners.” In other words, listeners predict the speaker’s words (if they feel they are predictable) and react if the prediction is wrong.

I have a different peculiarity from my grandmother - I often first know I have said something when I hear it. Many people have this happen to them when they are very upset or angry; they just say things and are surprised that they came with no warning, intention, or preparation. It happens to me often in normal conversation, and the words I hear are usually ones I like, the sort of thing I wanted to say. This implies that a certain amount of top-down preparation is absent, at least from conscious awareness. Can it be that there is some bottom-up processing in speaking?

Lind et al (citation below) studied to what extent people are actually aware of what they are saying before it is said. They manipulated the auditory feedback speakers heard of their own voices, so that the speakers appeared to give different answers than they actually had. In many cases this substitution was accepted, along with the implications of its meaning.

Abstract: Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one’s utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one’s own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops.

Things are not always as they appear or as simple as we think.

Dikker, S., Silbert, L., Hasson, U., & Zevin, J. (2014). On the Same Wavelength: Predictable Language Enhances Speaker-Listener Brain-to-Brain Synchrony in Posterior Superior Temporal Gyrus. Journal of Neuroscience, 34(18), 6267-6272. DOI: 10.1523/JNEUROSCI.3796-13.2014
Lind, A., Hall, L., Breidegard, B., Balkenius, C., & Johansson, P. (2014). Speakers’ Acceptance of Real-Time Speech Exchange Indicates That We Use Auditory Feedback to Specify the Meaning of What We Say. Psychological Science. DOI: 10.1177/0956797614529797