Monthly Archives: January 2014

Is consciousness a state of matter?

Max Tegmark has a new theory and everyone is thinking it over. I am confident that I will never feel comfortable with this theory; I fear I will not be convinced. The whole thing seems to be an elaborate waste of time.

 

There is a problem with that great tool, mathematics. It is the same problem we have with language and logic. We take a situation and divide it into entities, more or less arbitrarily, hoping it is less rather than more arbitrary. We give these entities symbols and relationships so that we can play with them using rules that we are confident of. A description of the situation is created and now we have an understanding of it. But wait – the description is only as good as its translation into symbols/relationships and its translation back out. In other words, we have to have enough actual contact with the situation to do this translation well and to judge whether the process has been a success. Metaphorically, we have to play with the actual situation, hold it in our hands, look at it from all angles, and get a feel for it. Without that contact we will just do a ‘garbage in, garbage out’ exercise. I am not saying that there is anything wrong with mathematics or that it is not a magnificent tool, but even if the formulas are error-free, that does not mean they have been interpreted reasonably.

 

What Tegmark appears to have done is to relate two very prominent unknowns: the why, how and even what of consciousness; and what the meaning/interpretation of quantum mechanics should be, in terms of how the classical world relates to the quantum world. That sort of thing is great science when it is convincing. Theories like plate tectonics, DNA structure, atomic structure and evolution by natural selection were great because they tied up a lot of little problems and loose ends in one large solution. They became fundamental platforms for whole branches of science. It is also true that great changes are often made by newcomers or outsiders to a scientific area, who have not been so indoctrinated that they cannot think outside the box.

 

But even allowing for the need to keep an open mind on new, unusual theories, I find this one just out to lunch. Tegmark’s paper (here) has a somewhat unfortunate idea of what consciousness is. Table 2 lists “Conjectured necessary conditions for consciousness that we explore in this paper. Principle of Information: a conscious system has substantial information storage capacity; Principle of Dynamics: a conscious system has substantial information processing capacity; Principle of Independence: a conscious system has substantial independence from the rest of the world; Principle of Integration: a conscious system cannot consist of nearly independent parts; Principle of Utility: a conscious system records mainly information that is useful for it; Principle of Autonomy: a conscious system has substantial dynamics and independence.” I find these principles unbelievable.

 

What is a conscious system as opposed to a system that has consciousness? Is there somewhere within our brains a conscious system alongside an unconscious one? Or is the whole person what is being referred to?

 

Does consciousness alone have substantial information storage capacity? No; some include working memory in consciousness (others don’t), but that is about the extent of its capacity. It has its contents at any moment, but that could hardly be called substantial. Does consciousness alone have substantial information processing capacity? No, it has practically none. Does consciousness alone have substantial independence from the rest of the world? No, it does not have much independence (none, I would think) from the other processes in the brain. As far as ‘parts’ are concerned, they would have to be specified, because it is not clear how consciousness might be divided. I don’t think that consciousness has control over its contents, and those contents are probably useful to our whole bodies rather than just to our consciousness. The idea that consciousness has autonomy is plainly not so. If we are talking about consciousness, then we are talking about a series of momentary awarenesses/sharings of small packets of information across the brain. It has none of the properties that Tegmark’s principles require. If he is talking about a system which has consciousness as one of its processes, then we cannot draw a boundary short of the whole nervous system – in other words, approximately the whole person.

 

But Tegmark is not talking about the whole nervous system. In his section about the integration paradox he is clearly talking about a much smaller process, like consciousness alone.

 

“This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits? If Tononi’s information and integration principles from Section I are correct, the integration paradox forces us to draw at least one of the following three conclusions: 1. Our brains use some more clever scheme for encoding our conscious bits of information… 2. These conscious bits are much fewer than we might naively have thought from introspection… 3. To be relevant for consciousness, the definition of integrated information that we have used must be modified or supplemented by at least one additional principle.”

 

You can probably guess that it is conclusion 3 that he runs with, and this leads deep into quantum mechanics, several dead ends and finally the theory. But the whole argument starts with a view of consciousness that is not in keeping with current research. Reading the paper feels like reading Alice in Wonderland. So here is the theory (the abstract):

 

“We examine the hypothesis that consciousness can be understood as a state of matter, “perceptronium”, with distinctive information processing abilities. We explore five basic principles that may distinguish conscious matter from other physical systems such as solids, liquids and gases: the information, integration, independence, dynamics and utility principles. If such principles can identify conscious entities, then they can help solve the quantum factorization problem: why do conscious observers like us perceive the particular Hilbert space factorization corresponding to classical space (rather than Fourier space, say), and more generally, why do we perceive the world around us as a dynamic hierarchy of objects that are strongly integrated and relatively independent? Tensor factorization of matrices is found to play a central role, and our technical results include a theorem about Hamiltonian separability (defined using Hilbert-Schmidt superoperators) being maximized in the energy eigenbasis. Our approach generalizes Giulio Tononi’s integrated information framework for neural-network-based consciousness to arbitrary quantum systems, and we find interesting links to error-correcting codes, condensed matter criticality, and the Quantum Darwinism program, as well as an interesting connection between the emergence of consciousness and the emergence of time.”
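(A gloss from me, using standard quantum-mechanical notation rather than anything specific to the paper: the “factorization” in question is the ordinary tensor-product decomposition of a Hilbert space and its Hamiltonian,

H = H_A ⊗ H_B,   Ĥ = Ĥ_A ⊗ I_B + I_A ⊗ Ĥ_B + Ĥ_int,

and a factorization into subsystems A and B counts as a “good” one roughly to the extent that the interaction term Ĥ_int is small, so the parts behave as nearly independent objects. The quantum factorization problem is then: out of the many mathematically allowed ways of carving the same Hilbert space into parts, why do observers like us perceive the particular carving that corresponds to ordinary objects in classical space?)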

 

As well as the weird definition of consciousness, there is a reliance on computer-style networks rather than cellular ones, and on the idea that processing is algorithmic. The only tools brought to bear on the question are those of information theory and quantum mechanics. These are, of course, relevant tools, but given our lack of understanding of how the brain works, it is not clear in what ways they are relevant. They are certainly not the obvious tools for understanding consciousness, and it seems positively perverse to restrict the problem-solving to those two alone. The metaphor of a new state of matter is also somewhat confusing. In gases, liquids and solids the atoms and molecules do not change; only their interactions do. So the idea is that there are atoms/molecules in the brain that interact slightly differently from ordinary matter. It sounds a bit like vitalism, with its special type of matter that was once used to explain life. Here we have perceptronium, the special type of matter that explains consciousness. This seems a step backwards rather than forwards in our understanding. There is the nagging feeling that information is not the important thing; I feel we have elaborate nervous systems because of action, not because of perception. He seems to ignore what is known about the perception of objects and treats it as a mystery. There is no mention of oscillations in the brain and their function in communicating information. I could go on but won’t, because to be truthful it is somewhat boring.

 

The upshot is that Tegmark’s theory does not have the ring of truth about it for me. It doesn’t even sound vaguely truthy. Further, I find it incomprehensible. It appears to be a silly exercise, going from nowhere to nowhere.

 

However, Tegmark’s mind must be very interesting. He is a theoretical physicist who is well known for his defense of multiverses, for his mathematical universe hypothesis (the universe, and each universe of the multiverse, is literally a mathematical structure: not just well described by mathematics but actually just the mathematics all by itself), and now for his consciousness-is-a-state-of-matter theory. All three are things I cannot really even conceive of, let alone accept.

 

 

The Edge Question 6

This is the last posting on the Edge Question responses. You can find all (over 100) answers (here). The question was: What scientific idea is ready for retirement?

 

Some responses were critical of essentialist views: Barrett, essentialist views of the mind; Richerson, human nature; Shafir, opposites can’t both be right; Dawkins, essentialism.

 

Lisa Barrett (University Distinguished Professor of Psychology, Northeastern University; Research Scientist and Neuroscientist, Massachusetts General Hospital/Harvard Medical School) “In pre-Darwinian biology, for example, scholars believed each species had an underlying essence or physical type, and variation was considered error. Darwin challenged this essentialist view, observing that a species is a conceptual category containing a population of varied individuals, not erroneous variations on one ideal individual….In my field of psychology, essentialist thought still runs rampant….This (subcategorization of emotions) technique of creating ever finer categories, each with its own biological essence, is considered scientific progress, rather than abandoning essentialism as Darwin and Einstein did….Essentialism can also be seen in studies that scan the human brain, trying to locate the brain tissue that is dedicated to each emotion. At first, scientists assumed that each emotion could be localized to a specific brain region (e.g., fear occurs in the amygdala), but they found that each region is active for a variety of emotions, more than one would expect by chance. Since then, scientists have been searching for the brain essence of each emotion in dedicated brain networks, and in probabilistic patterns across the brain, always with the assumption that each emotion has an essence to be found, rather than abandoning essentialism….The data are screaming out that essentialism is wrong: individual brain regions, circuits, networks and even neurons are not single-purpose…. every psychological theory in which emotions and cognitions battle each other, or in which cognitions regulate emotions, is wrong….This discussion is more than a bunch of metaphysical musings. Adherence to essentialism has serious, practical impacts on national security, the legal system, treatment of mental illness, the toxic effects of stress on physical illness… the list goes on. Essentialism leads to simplistic “single cause” thinking when the world is a complex place.”

 

Peter Richerson (Distinguished Professor Emeritus, University of California-Davis; Visiting Professor, Institute of Archaeology, University College London)

 

“The concept of human nature has considerable currency among evolutionists who are interested in humans. Yet when examined closely it is vacuous. Worse, it confuses the thought processes of those who attempt to use it. Useful concepts are those that cut nature at its joints. Human nature smashes bones. Human nature implies that our species is characterized by a common core of features that define us. Evolutionary biology teaches us that this sort of essentialist concept of species is wrong. A species is an assemblage of variable individuals, albeit individuals who are sufficiently genetically similar that they can successfully interbreed. The concept of human nature causes people to look for explanations under the wrong rock. Take the most famous human nature argument: are people by nature good or evil? In recent years, experimentalists have conducted tragedy of the commons games and observed how people solve the tragedy (if they do). A common finding is that roughly a third of participants act as selfless leaders, using whatever tools the experimenters make available to solve the dilemma of cooperation, roughly a tenth are selfish exploiters of any cooperation that arises, and the balance are guarded cooperators with flexible morals….In no field is the deficiency of the human nature concept better illustrated than in its use to try to understand learning, culture and cultural evolution. Human nature thinking leads to the conclusion that causes of behavior can be divided into nature and nurture. Nature is conceived of as causally prior to nurture both in evolutionary and developmental time. What evolves is nature, and cultural variation, whatever it is, has to be the causal handmaiden of nature. This is simply counterfactual…. Using the human nature concept, like essentialism more generally, makes it impossible to think straight about human evolution.”

 

Eldar Shafir (William Stewart Tod Professor of Psychology and Public Affairs, Princeton University; Co-author, Scarcity)

 

“We typically assume, for example, that happiness and sadness are polar opposites and, thus, mutually exclusive. But recent research on emotion suggests that positive and negative affects should not be thought of as existing on opposite sides of a continuum, and that, in fact, feelings of happiness and sadness can co-occur. When participants are surveyed immediately after watching certain films, or graduating from college, they are found to feel both profoundly happy and sad. …(people can be) both caring and indifferent, displaying one trait or the other depending on arbitrary twists of fate. From the little I understand, physicists question the classical distinction between wave and matter, and biologists refuse to choose between nature and nurture. But let me stay close to what I know best. In the social sciences, there is ongoing, and often quite heated, debate about whether or not people are rational, and about whether they’re selfish. And there are compelling studies in support of either camp… People can be cold, precise, selfish and calculating. Or they can be hot-headed, confused, altruistic, emotional and biased. In fact, they can be a little of both; they can exhibit these conflicting traits at the very same time.”

 

Richard Dawkins(Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, The Greatest Show on Earth, The Magic of Reality) “Essentialism—what I’ve called “the tyranny of the discontinuous mind”—stems from Plato, with his characteristically Greek geometer’s view of things. For Plato, a circle, or a right triangle, were ideal forms, definable mathematically but never realised in practice. A circle drawn in the sand was an imperfect approximation to the ideal Platonic circle hanging in some abstract space. That works for geometric shapes like circles, but essentialism has been applied to living things… The whole system of labelling species with discontinuous names is geared to a time slice, the present, in which ancestors have been conveniently expunged from our awareness…Essentialism rears its ugly head in racial terminology. The majority of “African Americans” are of mixed race. Yet so entrenched is our essentialist mind-set, American official forms require everyone to tick one race/ethnicity box or another: no room for intermediates….I mainly want to call attention to our society’s essentialist determination to dragoon a person into one discrete category or another. We seem ill-equipped to deal mentally with a continuous spectrum of intermediates. We are still infected with the plague of Plato’s essentialism. Moral controversies such as those over abortion and euthanasia are riddled with the same infection. At what point is a brain-dead accident-victim defined as “dead”? At what moment during development does an embryo become a “person”? Only a mind infected with essentialism would ask such questions…. Our essentialist urge toward rigid definitions of “human” (in debates over abortion and animal rights) and “alive” (in debates over euthanasia and end-of-life decisions) makes no sense in the light of evolution and other gradualistic phenomena. We define a poverty “line”: you are either “above” or “below” it. But poverty is a continuum. Why not say, in dollar-equivalents, how poor you actually are?

 

You can surely think of many other examples of “the dead hand of Plato”—essentialism. It is scientifically confused and morally pernicious. It needs to be retired.”

 

The next group does not have much in common except that they complain about widely held opinions in science and/or among the public.

 

Chalupa responded with a request to retire brain plasticity. He is not saying that the brain is not plastic, but that it always is, and therefore there is no need to keep saying it.

 

Leo M. Chalupa (Ophthalmologist and Neurobiologist, George Washington University) “Brain plasticity refers to the fact that neurons are capable of changing their structural and functional properties with experience. …The field of brain plasticity primarily derives from the pioneering studies of Torsten Wiesel and David Hubel ….These studies convincingly demonstrated that early brain connections are not hard-wired, but could be modified by early experience hence they were plastic…. Since that time there have been thousands of studies showing a wide diversity of neuronal changes in virtually every region of the brain, ranging from molecular to the systems level….As a result, by the end of the 20th century our view of the brain evolved from the hard wired to the seemingly ever changeable…. the widespread use of “brain plasticity” to virtually every type of change in neuronal structure and function has rendered this term largely meaningless…. many studies invoke brain plasticity as the underlying cause of modified behavioral states without having any direct evidence for neuronal changes….There are large profits to be made as evident by the number of (brain training) companies that have proliferated in this sector in recent years….But please refrain from invoking brain plasticity, remarkable or otherwise, to explain the resulting improvements.”

 

Blackmore would like to retire the Neural Correlates of Consciousness, NCCs, and the theory behind them. This one is difficult for me because I agree with everything she says except the claim that there are no neural correlates of consciousness. Is the activity of the thalamo-cortical loops not part of the NCCs? I don’t think one has to be a dualist to look for the way the brain creates each individual conscious moment, or how it creates the vivid illusions that are the content of each moment.

 

Susan Blackmore (Psychologist; Author, Consciousness: An Introduction) “Consciousness is a hot topic in neuroscience and some of the brightest researchers are hunting for the neural correlates of consciousness (NCCs)—but they will never find them. The implicit theory of consciousness underlying this quest is misguided and needs to be retired…the mystery of how subjective experience arises from (or is created by or generated by) objective events in a brain—then it’s easy to imagine that there must be a special place in the brain where this happens. Or if there is no special place then some kind of ‘consciousness neuron’, or process or pattern or series of connections. We may not have the first clue how any of these objective things could produce subjective experience but if we could identify which of them was responsible (so the thinking goes), then we would be one step closer to solving the mystery….The underlying intuition is that consciousness is an added extra—something additional to and different from the physical processes on which it depends. Searching for the NCCs relies on this difference. On one side of the correlation you measure neural processes using EEG, fMRI or other kinds of brain scan; on the other you measure subjective experiences or ‘consciousness itself’….Dualist thinking comes so naturally to us. We feel as though our conscious experiences are of a different order from the physical world. But this is the same intuition that leads to the hard problem seeming hard. It is the same intuition that produces the philosopher’s zombie—a creature that is identical to me in every way except that it has no consciousness. It is the same intuition that leads people to write, apparently unproblematically, about brain processes being either conscious or unconscious….Am I really denying this difference? Yes. Intuitively plausible as it is, this is a magic difference. Consciousness is not some weird and wonderful product of some brain processes but not others. Rather, it is an illusion constructed by a clever brain and body in a complex social world. We can speak, think, refer to ourselves as agents and so build up the false idea of a persisting self that has consciousness and free will….All we will ever find is the neural correlates of thoughts, perceptions, memories and the verbal and attentional processes that lead us to think we are conscious. When we finally have a better theory of consciousness to replace these popular delusions we will see that there is no hard problem, no magic difference and no NCCs.”

 

Churchland goes after Brain Modules. The word ‘module’ has implications that do not fit what is known about the brain.

 

Patricia S. Churchland (Philosopher and Neuroscientist, UC San Diego; Author, Touching a Nerve: The Self as Brain) “The concept of ‘module’ in neuroscience (meaning sufficient for a function, given gas-in-the-tank background conditions) invariably causes more confusion than clarity. The problem is that any neuronal business of any significant complexity is underpinned by spatially distributed networks, and not just incidentally but essentially—and not just cortically, but between cortical and subcortical networks. ….What is poorly understood is how nervous systems solve the coordination problem; i.e. how does the brain orchestrate the right pattern of neuronal activation across networks to get the job done?…This is not all that is amiss with ‘module’. Traditionally, modules are supposed to be encapsulated – aka insulated. I think of ‘module’ in the way I think of ‘nervous breakdown’—mildly useful in the old days when we had no clue about what was going on under the skull, but of doubtful explanatory significance these days.”

 

Sacktor wants to get rid of the idea that Long-Term Memory is Immutable. My impression is that this idea is gone from science but is still current among the general public.

 

Todd C. Sacktor (Distinguished Professor of Physiology, Pharmacology, and Neurology, State University of New York Downstate Medical Center) For over a century psychological theory held that once memories are consolidated from a short-term into long-term form, they remain stable and unchanging. Whether certain long-term memories are very slowly forgotten or are always present but cannot be retrieved was a matter of debate…. (This made sense for 50 years.) Two recent lines of evidence have relegated this dominant theory of long-term memory ready for retirement. First is the discovery of reconsolidation. When memories are recalled, they undergo a brief period in which they are once again susceptible to disruption by many of the same biochemical inhibitors that affect the initial conversion of short- into long-term memory. This means that long-term memories are not immutable, but can be converted into short-term memory, and then reconverted back into long-term memory. If this reconversion doesn’t happen, the specific long-term memory is effectively disrupted. The second is the discovery of a few agents that do indeed erase long-term memories…. Memory reconsolidation allows specific long-term memories to be manipulated. Memory erasure is extraordinarily potent and likely disrupts many, if not all long-term memories at the same time. When these two fields are combined, specific long-term memories will be erased or strengthened in ways never conceivable in prior theories.

 

Clark goes after the Input-Output Model of Perception and Action.

 

Andy Clark (Philosopher and Cognitive Scientist, University of Edinburgh; Author: Supersizing the Mind: Embodiment, Action, and Cognitive Extension) “It’s time to retire the image of the mind as a kind of cognitive couch potato—a passive machine that spends its free time just sitting there waiting for an input to arrive to enliven its day. When an input arrives, this view suggests, the system swings briefly into action, processing the input and preparing some kind of output (the response, which might be a motor action or some kind of decision, categorization, or judgement). Output delivered, the cognitive couch potato in your head slumps back awaiting the next stimulation. The true story looks to be almost the reverse. Naturally intelligent systems (humans, other animals) are not passively awaiting sensory stimulation. Instead, they are constantly active, trying to predict the streams of sensory stimulation before they arrive. … Systems like that are already (pretty much constantly) poised to act, and all they need to process are any sensed deviations from the predicted state. Action itself then needs to be reconceived. …These hyperactive systems are constantly predicting their own upcoming states, and moving about so as to bring some of them into being. In this way we bring forth the evolving streams of sensory information that keep us viable (keeping us fed, warm, and watered) and that serve our increasingly recondite ends. As ever-active prediction engines these kinds of minds are not, fundamentally, in the business of solving puzzles given to them as inputs. Rather, they are in the business of keeping us one step ahead of the game, poised to act and actively eliciting the sensory flows that keep us viable and fulfilled.”
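As a toy illustration of this prediction-engine picture (my own sketch in Python, not anything from Clark), consider an agent that keeps a running prediction of its sensory stream and does work only in proportion to the prediction error:

```python
import random

# Toy sketch (mine, not Clark's): an agent that continuously predicts its input
# and processes only the deviation from that prediction.
prediction = 0.0
learning_rate = 0.2

def sense():
    # A hypothetical, mostly predictable sensory signal with a little noise.
    return 10.0 + random.gauss(0.0, 0.5)

for step in range(50):
    observation = sense()
    error = observation - prediction       # only the deviation is "news"
    prediction += learning_rate * error    # nudge the internal model toward the input
    # After a few steps the error hovers near zero: the system is not passively
    # reacting to each stimulus, it is correcting a continuously generated prediction.
```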

 

Hoffman wants to lose the idea that Truer Perceptions are Fitter Perceptions.

 

Donald D. Hoffman (Cognitive Scientist, UC, Irvine; Author, Visual Intelligence) “Those of our predecessors who perceived the world more accurately enjoyed a competitive advantage over their less-fortunate peers. They were thus more likely to raise children and to become our ancestors. … But with these provisos noted, it is fair to conclude on evolutionary grounds that our perceptions are, in general, reliable guides to reality. This is the consensus of researchers studying perception via brain imaging, computational modeling and psychophysical experiments. It is mentioned in passing in many professional publications, and stated as fact in standard textbooks. But it gets evolution wrong. Fitness and truth are distinct concepts in evolutionary theory. To specify a fitness function one must specify not just the state of the world but also, inter alia, a particular organism, a particular state of that organism, and a particular action. …Monte Carlo simulations using evolutionary game theory, with a wide range of fitness functions and a wide range of randomly created environments, find that truer perceptions are routinely driven to extinction by perceptions that are tuned to the relevant fitness functions. Perceptions tuned to fitness are typically far less complex than those tuned to truth. They require less time and resources to compute, and are thus advantageous in environments where swift action is critical. …We must take our perceptions seriously. They have been shaped by natural selection to guide adaptive behaviors and to keep us alive long enough to reproduce. We should avoid cliffs and snakes. But we must not take our perceptions literally. They are not the truth; they are simply a species-specific guide to behavior. Observation is the empirical foundation of science. The predicates of this foundation, including space, time, physical objects and causality, are a species-specific adaptation, not an insight. Thus this view of perception has implications for fields beyond perceptual science, including physics, neuroscience and the philosophy of science. The old assumption that fitter perceptions are truer perceptions is deeply woven into our conception of science.”
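Hoffman’s claim rests on simulations, and a stripped-down toy version (my construction, not his actual models) shows the flavor: make fitness a non-monotonic function of a resource quantity, and let a perceiver that acts on the true quantity compete with one that perceives payoff directly.

```python
import random

# Toy sketch, not Hoffman's simulations: fitness is non-monotonic in the resource
# quantity (too little or too much is bad), so perceiving the true quantity is not
# the same as perceiving the payoff. Assumes the veridical perceiver acts
# monotonically on the quantity it sees ("more is seen as better").
def fitness(quantity, optimum=5.0, width=2.0):
    return max(0.0, 1.0 - ((quantity - optimum) / width) ** 2)

def trial(rng):
    options = [rng.uniform(0, 10) for _ in range(3)]   # resource quantities on offer
    truth_pick = max(options)                          # "truth" strategy: picks the largest quantity
    fit_pick = max(options, key=fitness)               # "fitness" strategy: picks the highest payoff
    return fitness(truth_pick), fitness(fit_pick)

rng = random.Random(0)
truth_total = fit_total = 0.0
for _ in range(10000):
    t, f = trial(rng)
    truth_total += t
    fit_total += f

print(f"truth-tuned payoff:   {truth_total / 10000:.3f}")
print(f"fitness-tuned payoff: {fit_total / 10000:.3f}")
# The fitness-tuned perceiver reliably out-collects the veridical one, which is
# the pattern Hoffman reports from his evolutionary game simulations.
```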

 

Dennett wants the Hard Problem buried.

 

Daniel C. Dennett (Philosopher; Austin B. Fletcher Professor of Philosophy, Co-Director, Center for Cognitive Studies, Tufts University; Author, Intuition Pumps) “One might object that the Hard Problem of consciousness (so dubbed by philosopher David Chalmers in his 1996 book, The Conscious Mind) isn’t a scientific idea at all, and hence isn’t an eligible candidate for this year’s question, but since the philosophers who have adopted the term have also persuaded quite a few cognitive scientists that their best scientific work addresses only the “easy” problems of consciousness, this idea qualifies as scientific: it constrains scientific thinking, distorting scientists’ imaginations as they attempt to formulate genuinely scientific theories of consciousness. No doubt on first acquaintance the philosophers’ thought experiments succeed handsomely at pumping the intuitions that zombies are “conceivable” and hence “possible” and that this prospect, the (mere, logical) possibility of zombies, “shows” that there is a Hard Problem of consciousness untouched by any neuroscientific theories of how consciousness modulates behavioral control, introspective report, emotional responses, etc., etc. But if the scientists impressed by this “result” from philosophers were to take a good hard look at the critical literature in philosophy exploring the flaws in these thought experiments, they would—I hope—recoil in disbelief. (I am omitting Dennett’s discussion of the faults in zombies and other philosophical contortions.) …Is the Hard Problem an idea that demonstrates the need for a major revolution in science if consciousness is ever to be explained, or an idea that demonstrates the frailties of human imagination? That question is not settled at this time, so scientists should consider adopting the cautious course that postpones all accommodation with it. That’s how most neuroscientists handle ESP and psychokinesis—assuming, defeasibly, that they are figments of imagination.”

 

 

The Edge Question 5

This post, the fifth in this series, covers two other areas where there were several similar responses to the Edge Question: What scientific idea is ready for retirement? (here) The first area is around concepts of self, free will and agency and the second is around the separation of man from other animals.

 

There has recently been a fair amount of discussion about how science should treat free will: deny its possibility or change its definition. My personal opinion is that both free will and determinism should be declared flawed – both wrong, not both right or only one right. This is not the general approach to the question.

 

Hood objects to the self, which he sees used as a stand-in for complex mechanisms in a way that impedes understanding of those mechanisms.

 

Bruce Hood (Director of the Bristol Cognitive Development Centre in the Experimental Psychology Department at the University of Bristol; Author, The Self-Illusion) “It seems almost redundant to call for the retirement of the free willing self as the idea is neither scientific nor is this the first time that the concept has been dismissed for lacking empirical support. The self did not have to be discovered as it is the default assumption that most of us experience, so it was not really revealed by methods of scientific enquiry. Challenging the notion of a self is also not new….Yet, the self, like a conceptual zombie, refuses to die. It crops up again and again in recent theories of decision-making as an entity with free will that can be depleted. It re-appears as an interpreter in cognitive neuroscience as capable of integrating parallel streams of information arising from separable neural substrates. Even if these appearances of the self are understood to be convenient ways of discussing the emergent output of multiple parallel processes, students of the mind continue to implicitly endorse that there is a decision-maker, an experiencer, a point of origin….We know that the self is constructed because it can be so easily deconstructed through damage, disease and drugs. It must be an emergent property of a parallel system processing input, output and internal representations. It is an illusion because it feels so real, but that experience is not what it seems. The same is true for free will. Although we can experience the mental anguish of making a decision, our free will cannot be some kind of King Solomon in our mind weighing up the pros and cons as this would present the problem of logical infinite regress …How notable that we do this (use ‘self’ as a convenience) all so easily when talking about humans but as soon as we apply the same approach to animals, one gets accused of anthropomorphism! By abandoning the free willing self, we are forced to re-examine the factors that are really behind our thoughts and behavior and the way they interact, balance, over-ride and cancel out. Only then we will begin to make progress in understanding how we really operate.”

 

Coyne objects to Free Will, plain and simple.

 

Jerry Coyne (Professor, Department of Ecology and Evolution, University of Chicago; Author, Why Evolution Is True) “Among virtually all scientists, dualism is dead….Our choices, therefore, must also obey those laws (laws of physics). This puts paid to the traditional idea of dualistic or “libertarian” free will: that our lives comprise a series of decisions in which we could have chosen otherwise. We know now that we can never do otherwise…In short, the traditional notion of free will—defined by Anthony Cashmore as “a belief that there is a component to biological behavior that is something more than the unavoidable consequences of the genetic and environmental history of the individual and the possible stochastic laws of nature”—is dead on arrival….recent experiments support the idea that our “decisions” often precede our consciousness of having made them. Increasingly sophisticated studies using brain scanning show that those scans can often predict the choices one will make several seconds before the subject is conscious of having chosen! Indeed, our feeling of “making a choice” may itself be a post hoc confabulation, perhaps an evolved one. When pressed, nearly all scientists and most philosophers admit this. Determinism and materialism, they agree, win the day. But they’re remarkably quiet about it. …they’d rather invent new “compatibilist” versions of free will: versions that comport with determinism….In the end, there’s nothing “free” about compatibilist free will. It’s a semantic game in which choice becomes an illusion: something that isn’t what it seems….reminds me of the (probably apocryphal) statement of the Bishop of Worcester’s wife when she heard about Darwin’s theory: “My dear, descended from the apes! Let us hope it is not true, but if it is, let us pray it will not become generally known.” What puzzles me is why compatibilists spend so much time trying to harmonize determinism with a historically non-deterministic concept instead of tackling the harder but more important task of selling the public on the scientific notions of materialism, naturalism, and their consequence: the mind is produced by the brain. These consequences of “incompatibilism” mean a complete rethinking of how we punish and reward people.” Coyne does not think this rethink would be a bad thing.

 

“Accepting incompatibilism also dissolves the notion of moral responsibility….by rejecting moral responsibility, we are free to judge actions not by some dictate, divine or otherwise, but by their consequences: what is good or bad for society. Finally, rejecting free will means rejecting the fundamental tenets of the many religions that depend on freely choosing a god or a savior. The fears motivating some compatibilists—that a version of free will must be maintained lest society collapse—won’t be realized. The illusion of agency is so powerful that even strong incompatibilists like myself will always act as if we had choices, even though we know that we don’t. We have no choice in this matter. But we can at least ponder why evolution might have bequeathed us such a powerful illusion.”

 

I cannot go this far with Coyne, for I think that determinism is flawed as well as free will. Our brains make decisions, and the decisions we make we own, because they depend in large part on our values, ideas, motivations and so on. They are our decisions. We are responsible for those decisions and for maintaining our own values, ideas, etc. Children, the mentally ill, and those with little intelligence may not be responsible for the good maintenance of their attitudes and habits, but most of us are responsible for who we are.

 

Metzinger is concerned with eliminating, or perhaps clarifying, the idea of cognitive agency.

 

Thomas Metzinger (Philosophisches Seminar, Johannes Gutenberg-Universität Mainz; Author, The Ego Tunnel) “Western culture, traditional philosophy of mind and even cognitive neuroscience have been deeply influenced by the Myth of Cognitive Agency. It is the myth of the Cartesian Ego, the active thinker of thoughts, the epistemic subject that acts—mentally, rationally, in a goal-directed manner—and that always has the capacity to terminate or suspend its own cognitive processing at will. It is the theory that conscious thought is a personal-level process, something that by necessity has to be ascribed to you, the person as a whole. This theory has now been empirically refuted. As it now turns out, most of our conscious thoughts are actually the product of subpersonal processes, like breathing or the peristaltic movements in our gastrointestinal tract. The Myth of Cognitive Agency says that we are mentally autonomous beings. We can now see that this is an old, but self-complacent fairy tale. It is time to put it to rest….The sudden loss of inner autonomy (mind wandering)—which all of us experience many hundred times every day—seems to be based on a cyclically recurring process in the brain. The ebb and flow of autonomy and meta-awareness might well be a kind of attentional see-sawing between our inner and outer worlds, caused by a constant competition between the brain networks underlying spontaneous subpersonal thinking and goal-oriented cognition….There are also periods of “mind blanking”, and these episodes may often not be remembered and also frequently escape detection by external observers. In addition, there is clearly complex, but uncontrollable cognitive phenomenology during sleep….A conservative estimate would therefore be that for much more than half of our life-time, we are not cognitive agents in the true sense of the word. This still excludes periods of illness, intoxication, or insomnia, in which people suffer from dysfunctional forms of cognitive control….I think that one global function of Mind Wandering may be “autobiographical self-model maintenance”. Mind Wandering creates an adaptive form of self-deception, namely, an illusion of personal identity across time. It helps to maintain a fictional “self” that then lays the foundation for important achievements like reward prediction or delay discounting. As a philosopher, my conceptual point is that only if an organism simulates itself as being one and the same across time will it be able to represent reward events or the achievement of goals as a fulfillment of its own goals, as happening to the same entity. I like to call this the “Principle of Virtual Identity Formation”: Many higher forms of intelligence and adaptive behavior, including risk management, moral cognition and cooperative social behavior, functionally presuppose a self-model that portrays the organism as a single entity that endures over time. Because we are really only cognitive systems, complex processes without any precise identity criteria, the formation of an (illusory) identity across time can only be achieved on a virtual level, for example through the creation of an automatic narrative. This could be the more fundamental and overarching computational goal of mind wandering, and one it may share with dreaming. If I am right, the default mode of the autobiographical self-modeling constructs a domain-general functional platform enabling long-term motivation and future planning…. 
the ability to act autonomously implies not only reasons, arguments and rationality. Much more fundamentally it refers to the capacity to wilfully inhibit, suspend, or terminate our own actions—bodily, socially, or mentally. The breakdown of this ability is what we call Mind Wandering. It is not an inner action at all, but a form of unintentional behavior, an involuntary form of mental activity.”

 

As usual, Metzinger is one of the most understandable, convincing and useful philosophers around.

 

Now a change of subject. There were four replies that attacked the idea of a great difference between humans and other animals: Pepperberg, humaniqueness; Baron-Cohen, Radical Behaviorism; Das, Anthropocentricity; Jeffery, Animal Mindlessness. In other posts I have dealt with this problem until I am afraid of boring readers, so here I just give a few quotes to convey the flavor of the responses. This human-centered attitude really must be retired as quickly as possible.

 

Irene Pepperberg (Research Associate & Lecturer, Harvard; Adjunct Associate Professor, Brandeis; Author, Alex & Me) “Clearly I don’t contest data that show that humans are unique in many ways, and I certainly favor studying the similarities and differences across species, but think it is time to retire the notion that human uniqueness is a pinnacle of some sort, denied in any shape, way, or form to other creatures.”

 

Simon Baron-Cohen (Psychologist, Autism Research Centre, Cambridge University; Author, The Science of Evil) “Every student of psychology is taught that Radical Behaviorism was displaced by the cognitive revolution, because it was deeply flawed scientifically. Yet it is still practiced in animal behavior modification, and even in some areas of contemporary human clinical psychology. Here I argue that the continued application of Radical Behaviorism should be retired not just on scientific but also on ethical grounds….Given these scientific arguments, you’d have thought Radical Behaviorism would have been retired long ago, and yet it continues to be the basis of ‘behavior modification’ programs, in which a trainer aims to shape another person’s or an animal’s behavior, rewarding them for producing surface behavior whilst ignoring their underlying evolved neurocognitive make-up.” This is disrespectful to intelligent animals including humans.

 

Satyajit Das (Expert, Financial Derivatives and Risk; Author, Extreme Money: The Masters of the Universe and the Cult of Risk) “The human mind has evolved a specific physical structure and bio-chemistry that shapes thought processes. The human cognitive system determines our reasoning and therefore our knowledge. Language, logic, mathematics, abstract thought, cultural beliefs, history and memories create a specific human frame of reference, which may restrict what we can know or understand…Transcending anthropocentricity may allow new frames of reference expanding the boundary of human knowledge. It may allow human beings to think more clearly, consider different perspectives and encourage possibilities outside the normal range of experience and thought. It may also allow a greater understanding of our existential place within nature and in the order of things.”

 

Kate Jeffery (Professor of Behavioural Neuroscience, Head, Dept. of Cognitive, Perceptual and Brain Sciences, University College, London) “We humans have had a tough time coping with our unremarkable place in the grand scheme of things. First Copernicus trashed our belief that we live at the centre of the universe, followed shortly thereafter by Herschel and co. who suggested that our sun was not at the centre of it either; then Darwin came along and showed that according to our biological heritage, we are just another animal. But we have clung on for dear life to one remaining belief about our specialness; that we, and we alone, have conscious minds. It is time to retire, or indeed euthanize and cremate, this anthropocentric pomposity….Behaviorism arose from the argument of parsimony (Occam’s razor)—why postulate mental states in animals when their behavior can be explained in simpler ways? The success of Behaviorism arose in part from the fact that the kinds of behaviors studied back then could, indeed, be explained by operation of mindless, automatic processes….When we look into the animal brain we see the same things we see in our own brains. Of course we do, because we are just animals after all. It is time to admit yet again that we are not all that special. If we have minds, creatures with brains very like ours probably do too.”

 

 

The Edge Question 4

This post looks at the Edge Question responses that want to retire the genetic-environmental dichotomy. See (here) for the full responses to the question – What scientific idea needs to be retired?

 

It seems that ‘nature vs nurture’ is so out of favour that very few scientists refer to it in their work, and when they do they are sharply criticized by other scientists. But it is something that the popular press and the general public will not drop. It is the cat that comes back. I believe the reason is that NvN has worked its way into how people view child rearing, policy decisions, politics and ideology. Each side of many arguments gathers any scrap of evidence it can to bolster its side with either genetic or environmental causation of some effect in the population. So this fixation on NvN is no better for politics than it was for science. It is no wonder that four people chose this idea to retire.

 

Gopnik and Everett want to be rid of innateness. They give good and similar reasons based on the way that genetics and environment cannot be separated. Some very good examples are included.

 

Alison Gopnik (Psychologist, UC, Berkeley; Author, The Philosophical Baby)

 

Gopnik gives three reasons to challenge the idea of innate traits. One development is the very important new work exploring what are called epigenetic accounts of development, and the new empirical evidence for those epigenetic processes. These studies show the many complex ways that gene expression, which is what ultimately leads to traits, is itself governed by the environment. Next: The increasingly influential Bayesian models of human learning, models that have come to dominate recent accounts of human cognition, also challenge the idea of innateness in a different way…the hypotheses and evidence are inextricably intertwined. Finally: The third development is increasing evidence for a new picture of the evolution of human cognition….The evolutionary theorist Eva Jablonka has described the evolution of human cognition as more like the evolution of a hand—a multipurpose flexible tool capable of performing unprecedented behaviors and solving unprecedented problems—than like the construction of a Swiss Army Knife…All three of these scientific developments suggest that almost everything we do is not just the result of the interaction of nature and nurture, it is both simultaneously.
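The Bayesian point is easy to see in a toy calculation (mine, not Gopnik’s): a learner’s conclusion is always the joint product of the prior it brings and the evidence it meets, so there is no clean line where the ‘innate’ part stops and the ‘learned’ part starts.

```python
# Toy Bayesian update (my example, not Gopnik's): the learner's conclusion is a
# joint product of its prior and the evidence, so neither factor alone "owns" it.
def posterior(prior_h, likelihood_h, likelihood_not_h):
    evidence = prior_h * likelihood_h + (1 - prior_h) * likelihood_not_h
    return prior_h * likelihood_h / evidence

# Same evidence (data twice as likely under the hypothesis), different starting priors:
for prior in (0.1, 0.5, 0.9):
    print(prior, round(posterior(prior, 0.8, 0.4), 3))
# The resulting beliefs differ, yet each one is inseparably "prior x evidence";
# hypotheses and data are intertwined, as Gopnik puts it.
```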

 

Daniel L. Everett (Linguistic Researcher; Dean of Arts and Sciences, Bentley University; Author, Language: The Cultural Tool)

 

Everett says that the terms innate and instinctive are not useful. (The newborn’s) cells have been thoroughly bathed in their environment before their parents mated—a bath whose properties are determined by their parents’ behavior, environment, and so on. The effects of the environment on development are so numerous, unstudied, and untested in this sense that we currently have no basis for distinguishing environment from innate predispositions or instincts. And further, many things that we believe to be instinctual can change radically when the environment changes radically, even aspects of the environment that we might not have thought relevant. In order to use the concept we would need to understand the genetic and evolutionary details of a trait. We are in no position at present to know the answers, and we will never be able to know some of them. Therefore, there simply is no utility to the terms instinct and innate. Let’s retire these terms so the real work can begin.

 

Pinker and Sapolsky go further and argue that the notion of gene-environment interaction, and the formula behavior = genes + environment, are close to meaningless. They discuss how this ‘interaction’ is not simple and how the terms of the formula cannot be well defined.

 

Steven Pinker (Johnstone Family Professor, Department of Psychology; Harvard University; Author, The Better Angels of Our Nature)

 

Pinker starts by looking at the words in behavior = genes + environment and finding them confused.

 

Behavior: More than half a century after the cognitive revolution, people still ask whether a behavior is genetically or environmentally determined. Yet neither the genes nor the environment can control the muscles directly. The cause of behavior is the brain…

 

Genes: Molecular biologists have appropriated the term “gene” to refer to stretches of DNA that code for a protein. Unfortunately, this sense differs from the one used in population genetics, behavioral genetics, and evolutionary theory, namely any information carrier that is transmissible across generations and has sustained effects on the phenotype…. DNA is regulated by signals from the environment. How else could it be? The alternative is that every cell synthesizes every protein all the time! The epigenetics bubble inflated by the science media is based on a similar confusion.

 

Environment: This term for the inputs to an organism is also misleading. Of all the energy impinging on an organism, only a subset, processed and transformed in complex ways, has an effect on its subsequent information processing… The bad habit of assuming that anything not classically genetic must be “environmental” has blinkered behavioral geneticists (and those who interpret their findings) into the fool’s errand of looking for environmental effects for what may be randomness in developmental processes. Pinker describes the mess well – the mess of how the concept of environment is used, and of how the so-called percentages of inherited and environmental influence on traits are calculated.

 

He also attacks the + sign. A final confusion in the equation is the seemingly sophisticated add-on of “gene-environment interactions.” This is also designed to confuse. Gene-environment interactions do not refer to the fact that the environment is necessary for genes to do their thing (which is true of all genes). It refers to a flipflop effect in which genes affect a person one way in one environment but another way in another environment, whereas an alternative gene has a different pattern.
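A hypothetical numeric illustration of that flip-flop (crossover) pattern; the numbers are mine and purely for illustration:

```python
# Hypothetical numbers, purely illustrative of a crossover ("flip-flop")
# gene-environment interaction of the kind Pinker describes.
trait = {
    ("genotype_A", "environment_1"): 10,
    ("genotype_A", "environment_2"): 4,
    ("genotype_B", "environment_1"): 4,
    ("genotype_B", "environment_2"): 10,
}

avg_A = (trait[("genotype_A", "environment_1")] + trait[("genotype_A", "environment_2")]) / 2
avg_B = (trait[("genotype_B", "environment_1")] + trait[("genotype_B", "environment_2")]) / 2
print(avg_A, avg_B)  # 7.0 7.0 - averaged over environments the genotypes look identical,
                     # yet genotype and environment jointly flip the outcome, so there is
                     # no meaningful "genes + environment" split for this trait.
```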

 

Robert Sapolsky (Neuroscientist, Stanford University; Author, Monkeyluv)

 

Despite starting out writing about something else, he ends up talking about ‘gene-environment interaction’. He thinks the phrase itself is fine; what misleads is the particularist use – speaking of ‘a’ gene-environment interaction, as if such interactions were the exception rather than the rule. “My problem with the concept is with the particularist use of “a” gene-environment interaction, the notion that there can be one. This is because, at the most benign, this implies that there can be cases where there aren’t gene-environment interactions. Worse, that those cases are in the majority. Worst, the notion that lurking out there is something akin to a Platonic ideal as to every gene’s actions—that any given gene has an idealized effect, that it consistently “does” that, and that circumstances where that does not occur are rare and represent either pathological situations or inconsequential specialty acts. Thus, a particular gene may have a Platonically “normal” effect on intelligence unless, of course, the individual was protein malnourished as a fetus, had untreated phenylketonuria, or was raised as a wild child by meerkats…The problem with “a gene-environment interaction” is the same as asking what height has to do with the area of a rectangle, and being told that in this particular case, there is a height/length interaction.”

 

The Edge Question 3

I am continuing my read-through of some responses to the Edge Question: What Scientific Idea is ready for retirement? The question was asked by Laurie Santos this year. (here) One of the most popular answers was a rejection of the computer metaphor for the brain. There were also complaints about the idea of human rationality as used in economic theory. Interestingly, the three responses about the computer metaphor were made by computer experts.

 

Schank feels Artificial Intelligence should be shelved as a goal and we should just make better computer applications rather than mimic the human mind.

 

Roger Schank (Psychologist & Computer Scientist; Engines for Education Inc.; Author, Teaching Minds: How Cognitive Science Can Save Our Schools)

 

“It was always a terrible name, but it was also a bad idea. Bad ideas come and go but this particular idea, that we would build machines that are just like people, has captivated popular culture for a long time. Nearly every year, a new movie with a new kind of robot that is just like a person appears in the movies or in fiction. But that robot will never appear in reality. It is not that Artificial Intelligence has failed, no one actually ever tried. (There I have said it.)…The fact is that the name AI made outsiders to AI imagine goals for AI that AI never had. The founders of AI (with the exception of Marvin Minsky) were obsessed with chess playing, and problem solving (the Tower of Hanoi problem was a big one.) A machine that plays chess well does just that, it isn’t thinking nor is it smart….I declare Artificial Intelligence dead. The field should be renamed ” the attempt to get computers to do really cool stuff” but of course it won’t be….There really is no need to create artificial humans anyway. We have enough real ones already.”

 

Brooks discusses the shortcomings of the Computational Metaphor that has become so popular in cognitive science.

 

Rodney A. Brooks (Roboticist; Panasonic Professor of Robotics (emeritus) , MIT; Founder, Chairman & CTO, Rethink Robotics; Author, Flesh and Machines)

 

“But does the metaphor of the day have impact on the science of the day? I claim that it does, and that the computational metaphor leads researchers to ask questions today that will one day seem quaint, at best….The computational model of neurons of the last sixty plus years excluded the need to understand the role of glial cells in the behavior of the brain, or the diffusion of small molecules effecting nearby neurons, or hormones as ways that different parts of neural systems effect each other, or the continuous generation of new neurons, or countless other things we have not yet thought of. They did not fit within the computational metaphor, so for many they might as well not exist. The new mechanisms that we do discover outside of straight computational metaphors get pasted on to computational models but it is becoming unwieldy, and worse, that unwieldiness is hard to see for those steeped in its traditions, racing along to make new publishable increments to our understanding. I suspect that we will be freer to make new discoveries when the computational metaphor is replaced by metaphors that help us understand the role of the brain as part of a behaving system in the world. I have no clue what those metaphors will look like, but the history of science tells us that they will eventually come along.”

 

Gelernter tackles the question of whether the Grand Analogy of computer and brain is going to help in understanding the brain.

 

David Gelernter (Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, America-Lite: How Imperial Academia Dismantled our Culture (and ushered in the Obamacrats))

 

Today computationalists and cognitive scientists—those researchers who see digital computing as a model for human thought and the mind—are nearly unanimous in believing the Grand Analogy and teaching it to their students. And whether you accept it or not, the analogy is milestone of modern intellectual history. It partly explains why a solid majority of contemporary computationalists and cognitive scientists believe that eventually, you will be able to give your laptop a (real not simulated) mind by downloading and executing the right software app. …”

 

Gelernter gives his reasons for this conclusion. (One) “The software-computer system relates to the world in a fundamentally different way from the mind-brain system. Software moves easily among digital computers, but each human mind is (so far) wedded permanently to one brain. The relationship between software and the world at large is arbitrary, determined by the programmer; the relationship between mind and world is an expression of personality and human nature, and no one can re-arrange it…. (Two) The Grand Analogy presupposes that minds are machines, or virtual machines—but a mind has two equally-important functions, doing and being; a machine is only for doing. We build machines to act for us. Minds are different: yours might be wholly quiet, doing (“computing”) nothing; yet you might be feeling miserable or exalted—or you might merely be conscious. Emotions in particular are not actions, they are ways to be. … (Three) The process of growing up is innate to the idea of human being. Social interactions and body structure change over time, and the two sets of changes are intimately connected. A toddler who can walk is treated differently from an infant who can’t. No robot could acquire a human-like mind unless it could grow and change physically, interacting with society as it did…. (Four) Software is inherently recursive; recursive structure is innate to the idea of software. The mind is not and cannot be recursive. A recursive structure incorporates smaller versions of itself: an electronic circuit made of smaller circuits, an algebraic expression built of smaller expressions. Software is a digital computer realized by another digital computer. (You can find plenty of definitions of digital computer.) “Realized by” means made-real-by or embodied-by. The software you build is capable of exactly the same computations as the hardware on which it executes. Hardware is a digital computer realized by electronics (or some equivalent medium)….

 

He wants to stop the pretending. “Computers are fine, but it’s time to return to the mind itself, and stop pretending we have computers for brains; we’d be unfeeling, unconscious zombies if we had.”

 

Another model of human behavior got some criticism, again from within the fold. Levi wants to retire Homo economicus and instead base our understanding of our actions on a realistic model of humans.

 

Margaret Levi (Political Scientist, University Professor, University of Washington & University of Sydney)

 

Homo economicus is an old idea and a wrong idea, deserving a burial of pomp and circumstance but a burial nonetheless. …The theories and models derived from the assumption of homo economicus generally depend on a second, equally problematic assumption: full rationality….Even if individuals can do no better than “satisfice,” that wonderful Simon term, they might still be narrowly self-interested, albeit—because of cognitive limitations—ineffective in achieving their ends. This perspective, which is at the heart of homo economicus, must also be laid to rest. …The power of the concept of Homo economicus was once great, but its power has now waned, to be succeeded by new and better paradigms and approaches grounded in more realistic and scientific understandings of the sources of human action.”

 

The notion of rationality and H. econ got another thumbs down from Fiske, who wanted to retire Rational Actor Models: the Competence Corollary.

 

Susan Fiske (Eugene Higgins Professor, Department of Psychology, Princeton University)

 

The idea that people operate mainly in the service of narrow self-interest is already moribund, as social psychology and behavioral economics have shown. We now know that people are not rational actors, instead often operating on automatic, based on bias, or happy with hunches. Still, it’s not enough to make us smarter robots, or to accept that we are flawed. The rational actor’s corollary—all we need is to show more competence—also needs to be laid to rest. …People are most effective in social life if we are—and show ourselves to be—both warm and competent. This is not to say that we always get it right, but the intent and the effort must be there. This is also not to say that love is enough, because we do have to prove capable to act on our worthy intentions. The warmth-competence combination supports both short-term cooperation and long-term loyalty. In the end, it’s time to recognize that people survive and thrive with both heart and mind.”

 

It looks like we are on the way to changes in the metaphors for human thought and action. No metaphor is perfect, and we cannot expect to find perfect ones, but there comes a time when an old or inappropriate metaphor becomes a drag on science.

 

 

 

The Edge Question 2

In the last post I looked at some answers to this year’s Edge Question: what scientific idea is ready for retirement? (here) I agreed with those answers. This post is about some responses to the question that I disagree with.

 

There are two answers that deal with ‘culture’ – just that, the single word ‘culture’ – as the scientific idea that should be retired. Betzig is against the idea that culture is something superzoological (that) shapes the course of human events. Boyer is against the use of culture to explain material phenomena— representations and behaviors—in terms of a non-material entity. So the culture they complain about is either non-biological or even non-material. Personally, I do not believe that it is possible to understand human behaviour without the concept of culture (or something very similar with a different name). Both of these responders are anthropologists and so they may be coming from an environment where culture is over-used as an explanation. If so, I would say that we need not throw out the baby with the bath water. First I will give their ideas a good airing, before countering some of their arguments.

 

Laura Betzig (Anthropologist; Historian)

 

Betzig puts an historical case for viewing human civilizations as mechanisms of rulers’ reproductive success. “What if the 100,000-odd year-old evidence of human social life—from the arrowheads in South Africa, to the Venus figurines at Dordogne—is the effect of nothing, more or less, but our efforts to become parents?  What if the 10,000-odd year-old record of civilization—from the tax accounts at temples in the Near East, to the inscription on a bronze statue in New York Harbor—is the product of nothing, more or less, but our struggle for genetic representation in future generations?”

 

The history is interesting and has a lot of credibility. Next is a jump to a different use of the word culture. “CULTURE is a 7-letter word for GOD. Good people—some of the best, and intelligent people—some of the smartest, have found meaning in religion: they have faith that something supernatural guides what we do. Other good, intelligent people have found meaning in culture: they believe that something superzoological shapes the course of human events. Their voices are often beautiful; and it’s wonderful to be part of a chorus. But in the end, I don’t get it. For me, the laws that apply to animals apply to us. And in that view of life, there is grandeur enough.”

 

Pascal Boyer (Anthropologist and Psychologist, Washington University in St. Louis; Author, Religion Explained: The Evolutionary Origins of Religious Thought)

 

Boyer takes aim at exactly what culture is. “Culture is like trees. Yes, there are trees around. But that does not mean that we can have a science of trees. … the notion is of no use to scientists… Don’t get me wrong—we can and should engage in a scientific study of ‘cultural stuff’. Against the weird obscurantism of many traditional sociologists, historians or anthropologists, human behavior and communication can and should be studied in terms of their natural causes. But this does not imply that there will or should be a science of culture in general….When we say that some notion or behavior is “cultural”, we are just saying that it bears some similarity to notions and behaviors of other people. That is a statistical fact. It does not tell us much about the processes that caused that behavior or notion.”

 

But all this is not news. So why is Boyer trying to rid science of culture? “Is the idea of culture really a Bad Thing? Yes, a belief in culture as a domain of phenomena has hindered the development of a proper science of human behavior in groups—what ought to be the domain of social sciences.”

 

It seems that these are not really pleas to retire culture from science; they are some other complaint about how culture is studied or used. Boyer’s comment that ‘culture’ is only a statistical similarity is true, but so is ‘species’. A species is only a statistical similarity between real individual animals, and it cannot be understood as something divorced from the rest of science; yet species is a concept within science and a very useful one. Likewise, culture is a statistical similarity between real individual animals and is a useful concept within science. How exactly would we go about examining behavior without using the concept of culture? Betzig seems to be saying that using the concept of culture means denying we are animals, but biology also studies culture in some other animals as well as in humans. Culture is not a good answer to the question of what scientific idea is ready for retirement. Where the word is used as a scientific concept it is useful; where it is used as something else it is not scientific. Culture as a scientific idea does not bypass biology but is part of it. Put as a dull, semantic point – there can be more than one way to view culture, and there is at least one way of viewing it that we actually need in science.

 

Another response that I didn’t agree with was Lombrozo’s; she wanted to retire “the mind is just the brain”. This again seems to be a semantic problem, but in this case an important one rather than a dull one.

 

Tania Lombrozo (Assistant Professor of Psychology, University of California)

 

Lombrozo starts with a clear denial of dualism. “In fact, it appears the mind is just the brain. Or perhaps, to quote Marvin Minsky, “the mind is what the brain does.” But then comes a switch. “In our enthusiasm to find a scientifically-acceptable alternative to dualism, some of us have gone too far the other way, adopting a stark reductionism. Understanding the mind is not just a matter of understanding the brain.” To illustrate ‘stark reductionism’ she gives a discussion of cake baking which I have read over and over without being able to understand what it says about reductionism. If mind is what the brain does, then we can try to understand mind and to understand how the brain does it. That is how reduction works. And when we try to understand the brain we will, of course, also try to understand how cellular physiology does cells, and so on down to quarks. There are hierarchies in any science; there are ways of understanding/studying/theorizing best suited to each level, and each level tries to fit onto the understanding of the one beneath it. What is the problem with this? Why is this not reductionism? Or (as it obviously is reductionism) why is reductionism not acceptable?

 

The third answer that struck me as wrong was Waytz’s response, “humans are by nature social animals”. Again it is a semantic problem. He seems to think social-animal means nice-social-animal.

 

Adam Waytz (Psychologist; Assistant Professor of Management and Organizations, Kellogg School of Management at Northwestern University)

 

Waytz states how social we are and then puts limits on it. “Certainly sociality is a dominant force that shapes thought, behavior, physiology, and neural activity. However, enthusiasm over the social brain, social hormones, and social cognition must be tempered with evidence that being social is far from easy, automatic, or infinite.” He finds that in experiments people have to view the situation as social in order to react socially. This does not seem surprising. “Humans may be ready and willing to view the world through a social lens, but they do not do so automatically.” True, they do it in what they see as a social context.

 

Our social nature is not infinite. “Despite possessing capacities far beyond other animals to consider others’ minds, to empathize with others’ needs, and to transform empathy into care and generosity, we fail to employ these abilities readily, easily, or equally. We engage in acts of loyalty, moral concern, and cooperation primarily toward our inner circles, but do so at the expense of people outside of those circles. Our altruism is not unbounded; it is parochial.” True, but is there any social animal that extends its empathy easily outside its actual social groups – not wolves, chimps, elephants or bees? I cannot think of any. And finally he says, “At the same time, the concept of humans as “social by nature” has lent credibility to numerous significant ideas: that humans need other humans to survive, that humans tend to be perpetually ready for social interaction, and that studying specifically the social features of human functioning is profoundly important.” If we are not social animals then why would we live in societies?

 

This is the end of the semantic nit-picking. The next post will be back to positive reactions.

 

 

The Edge Question 1

It is January again and time for the Edge Question. The answers are out (here). This year’s question:

 

Science advances by discovering new things and developing new ideas. Few truly new ideas are developed without abandoning old ones first. As theoretical physicist Max Planck (1858-1947) noted, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” In other words, science advances by a series of funerals. Why wait that long?

 

WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT?

 

Ideas change, and the times we live in change. Perhaps the biggest change today is the rate of change. What established scientific idea is ready to be moved aside so that science can advance?

 

I have picked out some answers here.

 

Two people responded with a plea to retire the left-brain right-brain myth. It seems a stretch even to call this a ‘scientific idea’; I think it is a positively harmful one, often part of a con or a half-baked self-help method. Blakemore and Kosslyn wrote on this one.

 

Sarah-Jayne Blakemore (Royal Society University Research Fellow and Full Professor of Cognitive Neuroscience at the Institute of Cognitive Neuroscience, University College London; co-author, The Learning Brain)

 

…The notion that the two hemispheres of the brain are involved in different ‘modes of thinking’ and that one hemisphere dominates over the other has become widespread, in particular in schools and the workplace. There are numerous websites where you can find out whether you are left-brained or right-brained and that offer to teach you how to change this. This is pseudo-science and is not based on knowledge of how the brain works….Whether left-brain/right-brain notions should influence the way people are educated is highly questionable. There is no validity in categorizing people in terms of their abilities as either a left-brain or a right-brain person. In terms of education, such categorization might even act as an impediment to learning, not least because it might be interpreted as being innate or fixed to a large degree. Yes, there are large individual differences in cognitive strengths. But the idea that people are left-brained or right-brained needs to be retired.

 

Stephen M. Kosslyn (Founding Dean, Minerva Schools at the Keck Graduate Institute)

 

Solid science sometimes devolves into pseudoscience, but the imprimatur of being science nevertheless may remain. No better example of this is the popular “left brain/right brain” narrative about the specializations of the cerebral hemispheres. According to this narrative, the left hemisphere is logical, analytic, and linguistic whereas the right is intuitive, creative, and perceptual. Moreover, each of us purportedly relies primarily on one half-brain, making us “left-brain thinkers” or “right-brain thinkers.” …First, the idea that each of us relies primarily on one or the other hemisphere is not empirically justifiable. The evidence indicates that each of us uses all of our brain, not primarily one side or the other. The brain is a single, interactive system, with the parts working in concert to accomplish a given task. Second, the functions of the two hemispheres have been mischaracterized…

 

Gruber puts forward another candidate for retirement that also features often in the popular press and is even found in some therapies. It has always bothered me - why is sadness always treated as bad and happiness as good?

 

June Gruber (Assistant Professor of Psychology, Yale University)

 

One idea in the study of emotion and its impact on psychological health is overdue for retirement: that negative emotions (like sadness or fear) are inherently bad or maladaptive for our psychological well-being, and positive emotions (like happiness or joy) are inherently good or adaptive….(evidence that negative emotion is not always bad) First, from an evolutionary perspective, negative emotions aid in our survival—they provide important clues to threats or problems that need our attention (such as an unhealthy relationship or dangerous situation). Second, negative emotions help us focus: they facilitate more detailed and analytic thinking, reduce stereotypic thinking, enhance eyewitness memory, and promote persistence on challenging cognitive tasks. Third, attempting to thwart or suppress negative emotions—rather than accept and appreciate them—paradoxically backfires and increases feelings of distress and intensifies clinical symptoms of substance abuse, overeating, and even suicidal ideation. Counter to these hedonic theories of well-being, negative emotions are hence not inherently bad for us. Moreover, the relative absence of them predicts poorer psychological adjustment….(evidence that positive emotion is not always good) First, positive emotions foster more self-focused behavior, including increased selfishness, greater stereotyping of out-group members, increased cheating and dishonesty, and decreased empathic accuracy in some contexts. Second, positive emotions are associated with greater distractibility and impaired performance on detail-oriented cognitive tasks. Third, because positive emotion may promote decreased inhibition it has been associated with greater risk-taking behaviors and mortality rates. Indeed, the presence of positive emotions is not always adaptive and sometimes can impede our well-being and even survival….the context in which an emotion unfolds can determine whether it helps or hinders an individual’s goal, or which types of emotion regulatory strategies (reappraising or distracting) will best match the situation…

 

Freud’s theories have mostly disappeared except for those that have entered the language as metaphors. But repression still seems to be generally accepted as more than a metaphor. Again this belief may actually be causing harm.

 

David G. Myers (Social Psychologist; Hope College; Author, Psychology, 10th Edition)

 

In today’s Freud-influenced popular psychology, repression remains big. People presume, for example, that unburying repressed traumas is therapeutic. …Actually, say today’s memory researchers, there is little evidence of such repression, and much evidence of its opposite. …Traumas more commonly get etched on the mind as persistent, haunting memories. Moreover, extreme stress and its associated hormones enhance memory, producing unwanted flashbacks that plague survivors….The scientist-therapist “memory war” lingers, but it is subsiding. Today’s psychological scientists appreciate the enormity of unconscious, automatic information processing, even as mainstream therapists and clinical psychologists report increasing skepticism of repressed and recovered memories.

 

Do we understand sleep?

There is a new theory to explain changes in memory during sleep. It is not that new; Tononi introduced it a decade ago, and since then Tononi and his group have been amassing confirmations of the idea. A recent paper (see citations, Tononi 2014) discusses SHY, the synaptic homeostasis hypothesis, in a general way. In essence, one reason for sleep is to promote learning (or plasticity), and that requires being disconnected from the outside world. Their logic goes: the brain must economize on energy and therefore needs to signal as little as possible (sparse signaling). Sparse signals need a very selective response (high signal to noise). The way we learn while awake is to strengthen the synapses that need to be enhanced, but as the day wears on, synapses get stronger all over the brain and on individual neurons. This means the neurons fire more frequently and also respond more to noise; they are stressed and use more energy. During sleep this situation is corrected by weakening all the synapses so that total signaling returns to the base level. The weakening is done in a way that protects old memories and reduces noise more than the new learning of the day. The group has proposed slow wave sleep as the mechanism for this weakening, and has studied aspects of the theory from insects to humans, in many preparations and in live animals, and with computer simulations (see citations). The theory appears to explain a number of aspects of memory and learning: consolidation of procedural and declarative memories, gist extraction, and the integration of new with old memories. There are a number of very interesting concepts in these papers and I intend to post on some in the future.
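The wake-strengthening / sleep-downscaling logic is easy to picture in a toy calculation. Here is a minimal sketch (my own illustration with invented numbers, not the group’s actual simulation): a few synapses are potentiated by the day’s learning, everything also drifts upward nonspecifically, and then a multiplicative down-scaling during “sleep” restores the total strength while preserving the relative advantage of the learned synapses.

```python
# Toy sketch of SHY's wake/sleep logic (my own illustration, invented numbers).
import numpy as np

rng = np.random.default_rng(0)

baseline_total = 1000.0                      # total synaptic strength the brain can "afford"
weights = np.full(1000, 1.0)                 # 1000 synapses, unit strength at the start of the day

# Wake: a small set of synapses is potentiated by the day's learning,
# and there is also diffuse, nonspecific strengthening everywhere.
learned = rng.choice(weights.size, size=50, replace=False)
weights[learned] += 0.8
weights += rng.uniform(0.0, 0.2, size=weights.size)
print("end of wake:  total strength =", round(weights.sum(), 1))

# Sleep: multiplicative down-scaling renormalizes total strength to baseline,
# keeping signalling (and energy use) sparse again...
weights *= baseline_total / weights.sum()
print("end of sleep: total strength =", round(weights.sum(), 1))

# ...while the learned synapses remain relatively stronger than the rest,
# so the day's learning survives the down-scaling.
others = np.delete(weights, learned)
print("learned synapse mean:", round(weights[learned].mean(), 3),
      " other synapse mean:", round(others.mean(), 3))
```

The design point the toy makes is only the arithmetic one: down-scaling by a common factor reduces total signalling without erasing relative differences between synapses.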

 

But for now I want to sound a note of caution. On a number of counts this theory should not be welcomed with open arms. First, SHY really deals with a smallish part of the subject of learning and memory. In the papers there is very little discussion of REM sleep, the hippocampus, the amygdala, or the cerebellum – most of the work deals with slow wave sleep and the neocortex. Next, I find the computer simulation very informative for understanding the theory but not convincing that the theory accurately models reality; there are just too many assumptions. In looking for studies by other groups, I found a review by Frank (see citation). Here is the abstract:

 

Converging lines of evidence strongly support a role for sleep in brain plasticity. An elegant idea that may explain how sleep accomplishes this role is the “synaptic homeostasis hypothesis (SHY).” According to SHY, sleep promotes net synaptic weakening which offsets net synaptic strengthening that occurs during wakefulness. SHY is intuitively appealing because it relates the homeostatic regulation of sleep to an important function (synaptic plasticity). SHY has also received important experimental support from recent studies in Drosophila melanogaster. There remain, however, a number of unanswered questions about SHY. What is the cellular mechanism governing SHY? How does it fit with what we know about plasticity mechanisms in the brain? In this review, I discuss the evidence and theory of SHY in the context of what is known about Hebbian and non-Hebbian synaptic plasticity. I conclude that while SHY remains an elegant idea, the underlying mechanisms are mysterious and its functional significance unknown. ”

 

Frank argues that learning is characterized by a number of mechanisms – Hebbian long-term potentiation and depression, downscaling and upscaling – and so we would expect sleep to involve multiple mechanisms too. He points out that, for all the evidence put forward, what is missing is a causal link between the changes during sleep and the effects on memory and learning. The only link is the computer simulation, and it does not address realistic mechanisms.

 

As new experiments accumulated, their predictive power failed, and they became little theories that only explained—often imperfectly—single sleep phenomena. It is too soon to say where SHY fits in this story. SHY is a seminal theory, bold in its scope and challenging in its implications, but it seems oddly disconnected from our rapidly evolving views of synaptic plasticity. The proponents of SHY have also amassed an impressive set of supportive findings, but these have yet to be pursued in depth. These are not trivial matters. In the absence of a clearly proposed mechanism (informed by current views on synaptic plasticity), the empirical supports of SHY are hard to interpret. Therefore, the significance of SHY—and what it may one day reveal about sleep and synaptic plasticity—remains elusive. ”


Tononi G, & Cirelli C (2014). Sleep and the price of plasticity: from synaptic and cellular homeostasis to memory consolidation and integration. Neuron, 81(1). DOI: 10.1016/j.neuron.2013.12.025

Hashmi A, Nere A, & Tononi G (2013). Sleep-dependent synaptic down-selection (II): single-neuron level benefits for matching, selectivity, and specificity. Frontiers in Neurology, 4. DOI: 10.3389/fneur.2013.00148

Nir Y, & Tononi G (2010). Dreaming and the brain: from phenomenology to neurophysiology. Trends in cognitive sciences, 14 (2), 88-100 PMID: 20079677

Frank MG (2012). Erasing synapses in sleep: is it time to be SHY? Neural plasticity, 2012 PMID: 22530156

Measuring consciousness

Recently there was a Scientific American article (here) about a paper by A. Casali (abstract here) on a method of measuring whether consciousness is present in patients. There was an implied question in the Sc Am article as to whether the method was actually measuring consciousness.

 

P. Mitra says, “The scientists report that their measure performs impressively in distinguishing states of consciousness within subjects, as well as across subjects in different clinically identified consciousness stages. These promising results will no doubt attract further study. However, the claim that the measure is theoretically grounded in a conceptual understanding of consciousness deserves a closer look. It is tempting to think that a concretely grounded clinical study of consciousness naturally advances our scientific understanding of the phenomenon, but is this necessarily the case? It is common in medicine to see engineering-style associative measurements, measurements which aid pragmatic actions but do not originate from a fundamental understanding.” In other words, he asks whether this study gets us any closer to a neural correlate of consciousness.

 

One thing we know about consciousness is that it depends on the thalamus-cortex conversation; when this loop stops functioning, consciousness disappears. The other thing we know is that the cortical activity present during some states of unconsciousness is very local.

 

Imagine a pond with a light rain falling on it. Each drop sets off a train of ripples that travel a long way before they fade away, so the disturbance is not just local (it is integrated). The ripples of many raindrops interact and give complex patterns to the surface of the water, so the disturbance is complex (information rich). All that Casali is saying is that he can measure, by using a disturbance, whether the brain is in a state that is complex and integrated. His measurement is straightforward – EEG records taken after a disturbance by transcranial magnetic stimulation (TMS) are put through a mathematical procedure, like image compression, to give his perturbational complexity index (PCI). There is nothing questionable here and all the pieces are trusted methods. The compression algorithm is simple to imagine. Think of a picture on a TV screen: if a block is all the same colour then not every pixel need be transmitted to reproduce that block, and if some part of the picture stays exactly the same for a period of time then the information for that block need not be re-transmitted over and over. How much an image can be compressed is a measure of its spatial and temporal complexity. TMS does perturb the activity of the brain by causing electrical fields to change, and EEGs do reflect the activity in the brain; the trials confirmed that the method behaves as expected.
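For readers who want the compression idea in concrete form, here is a rough sketch (my own simplification with invented data; as I understand it, the published PCI uses a normalized Lempel-Ziv complexity on statistically thresholded, source-localized TMS-evoked activity, not raw zlib on made-up matrices). The point is only that a widespread, varied response compresses poorly, while a local, stereotyped response compresses very well.

```python
# Rough sketch of the compression idea behind a PCI-style index (my own
# simplification; invented data, and zlib stands in for the actual
# Lempel-Ziv calculation on thresholded TMS-evoked sources).
import zlib
import numpy as np

rng = np.random.default_rng(1)

def compressed_size(binary_matrix):
    """Bytes needed to compress a binarized channels-x-time response.
    Rich, varied spatiotemporal patterns compress poorly; local,
    repetitive ones compress very well."""
    return len(zlib.compress(np.packbits(binary_matrix).tobytes()))

channels, samples = 60, 300

# "Awake-like" response: widespread activity with varied, changing patterns
# (random fill is only a stand-in for a complex, differentiated response).
awake_like = (rng.random((channels, samples)) < 0.5).astype(np.uint8)

# "Unconscious-like" response: a few channels repeat one short stereotyped
# pattern; the rest of the cortex stays silent (local and simple).
unconscious_like = np.zeros((channels, samples), dtype=np.uint8)
pattern = (rng.random(30) < 0.5).astype(np.uint8)
unconscious_like[:5] = np.tile(pattern, samples // 30)

print("awake-like:      ", compressed_size(awake_like), "bytes")
print("unconscious-like:", compressed_size(unconscious_like), "bytes")
```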

 

But what is the connection between the thalamo-cortical loops and the loss of integration and complexity? I ask you, for a few moments, to consider an idea; suspend disbelief for a while to take in the idea and then bring your critical faculties back. The thalamus is a group of small regions tucked up under the cortex. Almost all the information that goes to the cortex gets there through the thalamus – it gets to the spots that the thalamus sends it to. In effect the cortex knows very, very little that the thalamus has not told it. The thalamus sits on top of the end of the spinal cord (if you see the brain stem and the reticular formation as extensions of the spinal cord). It receives information from most parts of the brain, like a sort of Grand Central Station. Sleep starts when a place down in the brain stem signals to the thalamus. The thalamus then shuts down the thalamo-cortical loops and the cortex is left on its own; shortly after this the cortex loses its waking sort of activity. What if each local region of the cortex cannot communicate with other parts of the cortex without the thalamus opening a gate? Then the cortex is not only isolated from the rest of the world but is also broken into little local systems separate from each other. Consciousness, which is all about the global integration of information, would disappear without the information and the integration. If we must use a computer metaphor: we can think of the thalamus as having its own computer called the cortex that it uses all day (for things like the stream of consciousness) and turns off at night (or lets it do its cleanups, refreshes and backups). Now you can come back to your critical persona and see if you like the idea.
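To make the thought experiment a little more tangible, here is a toy sketch (entirely my own illustration, not a model from the Casali paper or from anyone else’s work): cortical patches can exchange signals long-range only while a thalamic “gate” is open, and a perturbation started in one patch reaches the rest of the network only in that case.

```python
# Toy illustration of the speculative thalamic-gate idea (my own sketch,
# not a model from the Casali paper or the SHY papers).
from collections import deque

def patches_reached(n_patches, local_links, gate_open):
    """Breadth-first spread of a perturbation that starts in patch 0."""
    links = {i: set(local_links.get(i, [])) for i in range(n_patches)}
    if gate_open:
        # While the thalamic gate is open, every patch can reach every other
        # patch through the thalamus (long-range routing).
        for i in links:
            links[i] |= {j for j in range(n_patches) if j != i}
    seen, queue = {0}, deque([0])
    while queue:
        for nxt in links[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

# Ten cortical patches; only patches 0 and 1 are direct (local) neighbours.
local_wiring = {0: [1], 1: [0]}
print("gate open:   perturbation reaches", patches_reached(10, local_wiring, True), "of 10 patches")
print("gate closed: perturbation reaches", patches_reached(10, local_wiring, False), "of 10 patches")
```

With the gate open the perturbation reaches everything; with it closed it stays in its own little local system, which is the picture of lost integration sketched above.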

 

Words for odours

Recent research by Majid et al. has found a language that has words for abstract odours. Here is the abstract:

 

From Plato to Pinker there has been the common belief that the experience of a smell is impossible to put into words. Decades of studies have confirmed this observation. But the studies to date have focused on participants from urbanized Western societies. Cross-cultural research suggests that there may be other cultures where odors play a larger role. The Jahai of the Malay Peninsula are one such group. We tested whether Jahai speakers could name smells as easily as colors in comparison to a matched English group. Using a free naming task we show on three different measures that Jahai speakers find it as easy to name odors as colors, whereas English speakers struggle with odor naming. Our findings show that the long-held assumption that people are bad at naming smells is not universally true. Odors are expressible in language, as long as you speak the right language.”

 

I think the important thing about the dozen or so Jahai words is that they are abstract (like our words for colours – only our word for orange might seem to refer to a concrete object, and it is probably the other way around). They are words that describe different qualities of smell. Jahai and English speakers were tested with “what colour is this?” and “what odour is this?” questions. Answers were compared by the time it took to answer, the type of answer, and how much the speakers agreed with one another in the words they used. Jahai speakers performed the same on colour and odour questions in time, type of words and agreement. English speakers took five times longer to answer odour questions than colour ones, and their answers varied in type and did not agree across speakers. So it seems that not having words for odours is cultural and not a biological fact of the brain’s architecture.
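The agreement measure is the easiest of the three to picture. Here is a hedged sketch of one way naming agreement could be scored (the paper’s actual statistic may differ, and the responses below are invented placeholders, not real Jahai or English data): a group that converges on a couple of abstract terms scores high, while a group that reaches for varied source descriptions scores near zero.

```python
# Hedged sketch of a naming-agreement score (invented placeholder responses;
# the paper's actual statistic may differ).
from collections import Counter

def naming_agreement(responses):
    """Probability that two randomly chosen speakers gave the same name
    for a stimulus (a Simpson-style agreement index)."""
    counts = Counter(responses)
    n = len(responses)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Ten speakers each name one odour stimulus (placeholder answers).
abstract_term_group = ["term_A"] * 8 + ["term_B"] * 2
source_description_group = ["smoky", "like bacon", "campfire", "burnt", "woody",
                            "smoke", "bonfire", "like a shed", "ash", "barbecue"]

print("abstract-term group agreement:      ", round(naming_agreement(abstract_term_group), 2))
print("source-description group agreement: ", round(naming_agreement(source_description_group), 2))
```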

 

But this is not a new idea. We have known for many years that there are people who learn to identify odours and have agreed words for abstract types of odour: the perfumers and the people who do quality control on wines, cheeses and other products which depend on their odour for much of their quality. These professionals use specific terminologies to describe and classify the components of the odours they are interested in. A recent paper by Royet and others looks at the nature and acquisition of this skill. Here is some of what they have to say about the language of odours.

 

…perfumers (or wine professionals) are less prone to classify odors in terms of their hedonic quality than non-experts, suggesting that they are able to discern (or label) perceptual qualities not available to untrained individuals. Chollet and Valentin suggested that the perceptual representation of wine is similar in experts and novices but the verbalization of this representation varies with the level of expertise. Experts use analytical terms, whereas non-experts use holistic terms…it was demonstrated, in an experimental frame, that discrimination and memory performances can partly be improved by verbalization of the stimuli or the knowledge of their names.

 

With regards to olfaction, the widespread assertion is that it is very difficult for the average person to mentally imagine odors, in contrast to our ability to mentally imagine images, sounds, or music. Despite behavioral and psychophysical studies demonstrating the existence of odor imagery, several authors have even claimed that recalling physically absent odors is not possible. However, odor experts do not appear to have difficulty in mentally smelling odors. When perfumers are questioned, they claim that they are quite able to do this and that these images provide the same sensations as the olfactory experiences evoked by odorous stimuli themselves.”

 

I would assume that the big difference between the Jahai speaker and the perfumer is that the perfumer is learning a language and skill as an adult while the Jahai child learns the odour words at the same time as the colour words. We are not talking about a language type difference or a sensory difference but a cultural one – how important is odour to the culture? When and why are individuals taught to identify odours and to be able to converse about them?


Royet JP, Plailly J, Saive A, Veyrac A, & Delon-Martin C (2013). The impact of expertise in olfaction. Frontiers in Psychology, 4. DOI: 10.3389/fpsyg.2013.00928

Majid A, & Burenhult N (2014). Odors are expressible in language, as long as you speak the right language. Cognition, 130 (2), 266-70 PMID: 24355816