
Meaning of consciousness – part 3

What is the function of consciousness? Is the function thinking? There is type 1 thinking, which is unconscious, and type 2 thinking, which we are conscious of. But it appears that what we are really conscious of is working memory, not the process by which an item is created and placed in working memory. Type 2 thought is just unconscious processes using working memory as a tool for certain sorts of processing (some language, or step-wise logic chains and calculations, for example), with the contents of working memory rendered into consciousness. If type 2 thinking is a function of consciousness, this implies that working memory is somehow dependent on consciousness.

We tend to associate moral responsibility with decisions made consciously, but for thirty or so years there has been growing evidence that we make decisions and execute actions unconsciously before registering them consciously. Libet’s experiment and its descendants just will not go away, in spite of decades of trying. The notion that free will and ‘free won’t’ are functions of consciousness just will not work. What seems to be in consciousness is a metaphorical note saying, “I intended this action, I did it, and I morally own it.” There is a phrase, ‘fringe qualia’, for what seem to be metaphorical notes about non-sensory information: states of mood and emotion, recognitions, ownership of actions and thoughts, flags that something is important, and so on. None of these arise within consciousness; they are added by unconscious processes. Consciousness can register responsibility for an action but not actually cause the action. There is a theory that consciousness is required to ensure that there are no overlaps or gaps in motor plans; the idea is that the motor system needs a working model of the body and environment against which to check its plans. This is plausible, but not established.

Is the function to give us a sense of self? The impression we have is that we are seeing the world through a hole in our heads around the bridge of the nose, from an inch and a half or so into the brain. But the ‘self’ is a complex mixture: what we control with our muscles, sensory feelings from inside our bodies, sensory signals from the skin, our memories woven into a personal narrative, and very especially our consciousness. We naturally seem to identify with some sort of conscious ‘me/I’. Consciousness, as an awareness of ‘ourselves in the world’, has to create the watcher, listener and actor that is in the world. Self seems essential to consciousness, but perhaps not its central function.

Can memory be a function of consciousness? If we think about it, consciousness and memory do seem to go together, at least episodic memory. We remember things that we are conscious of and not things that we are unconscious of; we are aware that we have been unconscious when there is a discontinuity in our memory train. Bringing a memory into consciousness does not seem to require any sort of translation – it appears to happen easily. Imaginings seem to be constructed of bits and pieces of memories, and they also fit into consciousness without effort. In order to remember experiences, we have to have experiences, and what is it that we experience? Consciousness. Consciousness can be experimentally tricked into being wrong, taking responsibility for actions the individual did not cause, but we are usually right. Knowing which actions we intentionally caused must be important for judging outcomes and learning from experience. Consciousness seems connected to various short-term memory systems: working memory, sensory memory, verbal memory. Episodic memory is also held together by a continuous self: all events and episodes happen to the same self. Consciousness may be what is prepared for episodic memory – the ‘leading edge’ of episodic memory, so to speak – or it may be a monitor of newly formed memories, like the monitor head on a tape recorder. The creation of episodic memory would certainly be a function worth the biological cost of consciousness, and being part of the episodic memory system would fit with being an important anchor of the ‘self’. Even the metaphorical notes of the fringe qualia would fit into this episodic memory.

The question is: what exactly is the dependency of memory on consciousness? Episodic memory, imagination and consciousness seem to share the same basic nature, structure or coding, and this structure must be the vehicle of subjective experience. I have looked for a clear statement of this idea in the literature, and the closest seems to be the global workspace of Bernard Baars. He proposed an architecture that would give momentary active subjective experience of events in working memory, consciousness, recalled memory, inner speech and visual imagery.
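The core of the global workspace idea can be sketched in a few lines of code. This is only my own toy illustration of the competition-and-broadcast notion, not Baars’s formalism; the names and the salience numbers are invented for the example.

```python
# Toy sketch of the global-workspace idea: many unconscious specialist
# processes offer content; they compete for access to a shared workspace;
# the winner's content is broadcast widely -- and that broadcast content
# corresponds to what is momentarily "in consciousness".

def global_workspace_cycle(candidates):
    # candidates: list of (salience, content) pairs from specialist processes.
    # Competition: the most salient candidate gains access to the workspace.
    salience, content = max(candidates)
    # Broadcast: the winning content is returned to all processes.
    return content

candidates = [
    (0.2, "background hum"),
    (0.9, "inner speech: 'where are my keys?'"),
    (0.5, "visual image of the hallway"),
]

print(global_workspace_cycle(candidates))  # the inner speech wins the competition
```

One cycle per moment, with the winning content feeding the next round of competition, gives the stream-like quality the paragraph above describes.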

Do other animals have consciousness? It certainly seems reasonable to assume that most vertebrates do. The source of the awake state lies deep in the brain stem; activity from there activates higher regions, the thalamus in particular. Awake, in animals, may not necessarily mean aware, but it would be wiser to assume awareness until proven otherwise than to do as we have been doing and assume no awareness until proven otherwise. The cerebral cortex does not mount consciousness by itself; it does so in partnership with the thalamus, and is probably driven by the thalamus. A rudimentary consciousness would therefore seem possible without a cerebral cortex. It has been found recently that split-brain subjects have one consciousness, not two. This implies that the source of consciousness is not in the cerebral hemispheres but in some lower region, although the vivid detail of the content must come from the cortex.

We still do not have an explanation of the subjective nature of consciousness, but that is for part 4.

 

It is not about rules

The trolley problem has always bothered me. You have probably encountered the scenario many times: you are on a bridge over a trolley track with another person you do not know; five people are on the track some way off; a runaway trolley is coming down the track and will hit the five people. Do you allow this to happen, or do you throw the person beside you onto the track in front of the trolley to stop it? This question comes in many versions and is used to categorize types of moral reasoning. My problem is that I do not know what I would do in the few seconds I would have to consider the situation, and I do not believe that others know either.

In another dip into OpenMIND (here) I find a paper on morality by Paul Churchland, “Rules: The Basis of Morality?”. This is the abstract:

Most theories of moral knowledge, throughout history, have focused on behavior-guiding rules. Those theories attempt to identify which rules are the morally valid ones, and to identify the source or ground of that privileged set. The variations on this theme are many and familiar. But there is a problem here. In fact, there are several. First, many of the higher animals display a complex social order, one crucial to their biological success, and the members of such species typically display a sophisticated knowledge of what is and what is not acceptable social behavior —but those creatures have no language at all. They are unable even to express a single rule, let alone evaluate it for moral validity. Second, when we examine most other kinds of behavioral skills—playing basketball, playing the piano, playing chess—we discover that it is surpassingly difficult to articulate a set of discursive rules, which, if followed, would produce a skilled athlete, pianist, or chess master. And third, it would be physically impossible for a biological creature to identify which of its myriad rules are relevant to a given situation, and then apply them, in real time, in any case. All told, we would seem to need a new account of how our moral knowledge is stored, accessed, and applied. The present paper explores the potential, in these three regards, of recent alternative models from the computational neurosciences. The possibilities, it emerges, are considerable.

Apes, wolves and dogs, lions, and many other intelligent social animals appear to have a moral sense without any language. They have ways of behaving that show cooperation, empathy, trust, fairness, sacrifice for the group and punishment of bad behavior, and they train their young in these ways. No language codifies this behavior. Humans who lose their language through brain damage and cannot speak or comprehend language still have other skills intact, including their moral sense. People who are very literate and very moral often cannot give an account of their moral rules – some can only put forward the Golden Rule. If we were actually using rules, we would be able to report them.

We should consider morality a skill rather than a set of rules – a skill that we learn and continue learning throughout our lives, one that can take into account a sea of detail and nuance and that is lightning fast compared with finding the right rule and applying it. “Moral expertise is among the most precious of our human virtues, but it is not the only one. There are many other domains of expertise. Consider the consummate skills displayed by a concert pianist, or an all-star basketball player, or a grandmaster chess champion. In these cases, too, the specific expertise at issue is acquired only slowly, with much practice sustained over a period of years. And here also, the expertise displayed far exceeds what might possibly be captured in a set of discursive rules consciously followed, on a second-by-second basis, by the skilled individuals at issue. Such skills are deeply inarticulate in the straightforward sense that the expert who possesses them is unable to simply tell an aspiring novice what to do so as to be an expert pianist, an effective point guard, or a skilled chess player. The knowledge necessary clearly cannot be conveyed in that fashion. The skills cited are all cases of knowing how rather than cases of knowing that. Acquiring them takes a lot of time and a lot of practice.”

Churchland then describes how the neural basis of this sort of skill is possible (along with perception and action). He uses a model of Parallel Distributed Processing in which a great deal of input can quickly be transformed into a perception or an action; it is an arrangement that learns skills. “It has to do with the peculiar way the brain is wired up at the level of its many billions of neurons. It also has to do with the very different style of representation and computation that this peculiar pattern of connectivity makes possible. It performs its distinct elementary computations, many trillions of them, each one at a distinct micro-place in the brain, but all of them at the same time. … a PDP network is capable of pulling out subtle and sophisticated information from a gigantic sensory representation all in one fell swoop.” I found Churchland’s explanation very clear and to the point, although I also thought he was using AI ideas of PDP rather than biological ones in order to be easily understood. If you are not familiar with parallel processing ideas, this paper is a good place to find a readable starting explanation.
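The “one fell swoop” style of computation is easy to see in a minimal sketch. This is the generic artificial-network version of PDP, not a biological model and not Churchland’s own; the layer sizes and random weights are invented for illustration. The point is that every unit in a layer computes at once, and the “knowledge” lives in the learned weight matrices rather than in any stored rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(weights, biases, x):
    # Each unit takes a weighted sum of all its inputs and squashes it.
    # Conceptually all units in the layer fire in parallel: one matrix
    # multiply transforms the whole input vector in a single sweep.
    return np.tanh(weights @ x + biases)

# Hypothetical sizes: a 100-dimensional "sensory" input mapped through
# a 20-unit hidden layer to a 3-way "perception/action" output.
W1, b1 = rng.normal(size=(20, 100)), np.zeros(20)
W2, b2 = rng.normal(size=(3, 20)), np.zeros(3)

sensory_input = rng.normal(size=100)
hidden = layer(W1, b1, sensory_input)
output = layer(W2, b2, hidden)

print(output.shape)  # (3,)
```

Training would adjust W1, W2 gradually from examples, which is what makes the rule-free, slowly acquired, hard-to-articulate character of skills a natural consequence of the architecture.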

Another slight quibble with the paper is that he does not point out that some elements of morality appear to be inborn, and those elements probably steer the moral learning process. Babies often seem to ‘get it’ prior to the experience needed to develop and improve the skill.