How do morals work?

There is a way of studying morality that uses little hypothetical scenarios: they are put to people, who then answer that they would do X or Y as the moral action in that situation. The scenarios have always struck me as simplistic and unbelievable, sometimes even impossible. And the answers people give have little credibility – people are not always truthful, and besides, in the split second they would have to make a decision if the situation were really happening, with hardly any time for thought, they might do anything. The context is arbitrary and does not widen to include the society or a future beyond perhaps a day at most. This scenario method seems useless and misleading.

One of these scenarios goes like this: if you could time travel to Austria when Hitler was a small boy, would you try to kill him? Well, first of all, this is not a believable story, so the answerers will not actually take it seriously. Secondly, we know what Hitler did, but we have no idea what would have happened if there had been no Hitler. It could have been wonderful. Or, with no Hitler, some other tyrant might have arrived on the scene a bit later, giving a war in the 50’s rather than the 40’s. That war would have come after the invention of the nuclear bomb, so that instead of only 2 bombs, enough might have been dropped to wipe out civilization.

Another question has been in the news lately: if a person had hidden a bomb that was going to go off in a short time and kill many people, would you torture that person to get the bomb’s location? This again is not a believable scenario, although it is possible. The particular combination of knowing some things with absolute certainty while not knowing the location at all is unlikely. No one has come up with a real case like this, and it is unlikely to happen very often – maybe once in a few hundred years. If you were in this situation you would probably act without thinking and justify your action or inaction later. Again, this scenario leaves out the future: over the course of future years you might cause many more deaths by opening the door to torture than you save by finding the bomb. It also leaves out a wider context by ignoring the fact that torture is not a very successful way to get correct information (you can do the torture and get nothing in return). In such a situation I want to think I would not torture, but I am not sure. If someone says they would torture, I am not sure that they actually would. How people answer the question is next to meaningless.

A popular question (or set of questions, really) is the trolley car that is going to kill 1 person or 5, where you can take action resulting in the death of the 1 or take no action and allow the 5 to die. Again the scenarios are not very believable, and some are not even credible. Here is one: you are on a bridge over the trolley line and you see 5 people tied to the track. A trolley car is coming and will hit the 5 if it is not stopped. Beside you is a fat man, weighing enough to stop the trolley if he is dropped on the track. Would you push him over the bridge onto the track? If you say you would, it is assumed that you are a utilitarian and decide moral questions by what gives the most total good or the least total bad. If you say you would not, it is assumed that you follow moral rules and therefore will not participate in murder. In all the different versions of this set-up there is no look at the future. What if the 1 who dies to save the 5 is about to find the cure for some fatal disease, saving thousands of lives? To a certain extent it is important that people not fear being murdered for no other reason than that they were a convenient weight to save some other people. Societies need a certain amount of trust.

As it happens, I think I would not push the fat man, but there are other trolley scenarios where I might sacrifice the 1. And again, I am not sure that I know what I would do in some of the trolley questions. But I am quite sure that I am neither a utilitarian all the time nor none of the time; the same goes for following rules. Sometimes I do and sometimes I don’t. I am not concerned with being consistent with a philosophical opinion of what should be labeled moral.

The reason we even have moral questions is that we are social beings, and the health of our societies is important to our survival. Because we are social, there are choices we have to make that have no absolutely right answer. We have to choose between two good things, or which of two bad things to avoid. The problems are not clear cut, nor do we have all the information needed to ‘solve’ them. We can use our intellect and find logical answers, but these may not be the best answers because they do not take into consideration the odds of unknown repercussions. We can follow the rules of society, but these may not be the best answers for us in certain situations. We can follow our emotional feelings, but they are not always the best route either. In the real world, our brains sort this out using cognition, learned values and emotions. This can be done quickly or more slowly depending on the time available. We end up with an action plan and a justification, should we need it, but with practically no idea of how the action plan was arrived at. We can trust, for what it’s worth, that the brain used a mechanism that has withstood the test of our ancestors’ and societies’ survival. There is no guarantee that evolution will have provided us with a way to always be morally right – just that it will likely be probabilistically better than the alternatives. Children seem to come with a rudimentary moral sense which they improve with experience and learning from their culture – still no guarantee!

If we want to understand how the brain makes these difficult choices, we will have to use more realistic questions (whether in a scanner or on a questionnaire). Morality is unlikely to be understandable purely in terms of utility or rules, logic or emotion, or self-interest versus societal interest.