Massimo Pigliucci has written an excellent review of Sam Harris’s book on morality.
John Lawrence Aspden has an interesting post on morality. As someone who has studied moral theory a bit (not as much as I have other areas in philosophy like metaphysics and epistemology), I thought I might be able to enlighten the discussion a little bit.
I read the other day that thousands of years of philosophical thought had produced three ethical schools, and that they were called utilitarian ethics, deontological ethics, and virtue ethics.
I hadn’t heard that there were three answers. Ten minutes of research seems to indicate that they can be characterized as: ‘act for best consequences’, ‘follow rules’, and ‘be virtuous of character’?
There are at least a few others missing: divine command theory – do what God says; intuitionism – do what your intuitions tell you. There are probably a few more out there in the books and journals, but that covers most of the theories. They are all a bit more complicated than this, though.
Let’s start with consequentialism, the family of theories of which utilitarianism is the best-known member. John dismisses these:
Which leaves only ‘act for best consequences’, but of course, we need to say who is to judge the best consequences. If the judger is me, then surely that’s the definition of evil? If the judger is some sort of average of everyone, then it defines a sort of altruism. I don’t like either of those.
This is an interesting objection, but it isn’t a persuasive one. To see why, you have to see what utilitarians actually say. Bentham reckoned that a utilitarian calculus could be constructed such that you could calculate the effects of an action on all. Such a calculus would be an objective, shared, non-contextual tool that lets you measure the consequences of actions in the world. As with mathematics, from which the name of the calculus derives, two people should be able to weigh up the same consequences and come to the same conclusion. A better wording of the utilitarian principle is “act for the greatest good for the greatest number”.
Take the classic easy case for the utilitarian: you are walking along in a brand new suit with some very expensive shoes and you see a small child struggling in a pond. You immediately see that she is quite likely to drown, but the water is shallow. You can quite easily step into the pond and save her from an untimely death but it is quite likely that you will ruin your shoes and maybe your clothes too. Peter Singer and others have argued that it would be very, very wrong for you to not save the child. A child’s life is worth more than even the most expensive clothes and shoes.
The utilitarian will say that you can calculate the costs to yourself – namely, the cost of replacing the shoes and clothes, the coldness of the water against your skin, and maybe the cost of missing, being late to, or arriving in waterlogged shoes and spoiled clothes to whatever it is you are on your way to, perhaps having to make a witness statement to the police, or using up a small amount of your phone’s battery to call an ambulance – and weigh them up with the rewards you get – the feeling of satisfaction, the shiny medal given to you by the mayor, the gratification of the parents, possible reciprocal effects (the idea that your actions might inspire others to act in a morally brave or helpful way) – and the obvious good effects for others: for the child, for the child’s family and friends, for society overall.
If the utilitarians are right, the utilitarian calculus would be possible for anyone to do. That’s not to say that everyone in your place would act the same way. The claim is rather that another person, presented with the same facts, would be able to do the same calculation of costs and benefits and come to the same conclusion.
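Since John is a programmer, the shape of the claim might be clearer as code. Here is a deliberately toy sketch of a Bentham-style calculus in Python – the utility numbers are entirely made up, and real utilitarians disagree about whether such quantities can be measured at all; the point is only that the *procedure* is shared, so anyone given the same inputs reaches the same verdict.

```python
# Toy "utilitarian calculus": illustrative only, with invented utility numbers.

def weigh(consequences):
    """Sum the (hypothetical) utility of each consequence for each affected party."""
    return sum(utility for _party, utility in consequences)

# The drowning-child case, with made-up magnitudes.
save_child = [
    ("rescuer", -50),     # ruined suit and shoes, cold water, lost time
    ("child", +10_000),   # a life saved
    ("family", +5_000),   # grief averted
    ("society", +100),    # inspiration, reciprocal effects
]
walk_past = [
    ("rescuer", 0),
    ("child", -10_000),
    ("family", -5_000),
    ("society", -100),
]

# The point of the "calculus" analogy: anyone given the same facts
# (the same list of consequences) computes the same verdict.
best = max([save_child, walk_past], key=weigh)
assert best is save_child
```

The interesting philosophical disputes are, of course, hidden inside the made-up numbers: how you assign a utility to a life or a ruined suit is exactly what the calculus itself does not tell you.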
The utilitarian is not ignorant of the fact that people will come to different conclusions: they are saying that with maximal knowledge, people would ideally act for the best consequences. You can subscribe to an ethical theory without believing that you’ll always obey it. In fact, that seems to be something of a feature of moral acts: for something to be a moral act, you actually need to make some kind of effort.1 One of the reasons I’m not a moral vegetarian is because it isn’t something where I am working against my own inclinations: I’m a vegetarian because I don’t like meat; the moral objections I could have to eating meat are not my primary reason for not eating meat.2
Let’s deal with another example: if I find myself on a Saturday morning lying in bed, I may be rather enjoying myself doing exactly nothing. But just down the road are considerable quantities of litter. It would promote the general good if I were to spend my morning removing the litter from the side of the road. It would come at little cost to me: I wouldn’t be able to lie in bed and read Twitter (I might also risk getting hit by a car while walking on the road, or I might encounter some vicious snake on the verge – not that there are many in rural Sussex). But I would get some exercise, perhaps feel good for helping others, and others would benefit aesthetically from the lack of litter. I would reduce the cost to the council in cleaning litter from the verges (and thus perhaps reduce by a negligible amount the taxes we all have to pay), and I would perhaps increase the value of properties in the area. I might even save some poor little animals from dying a horrific death inside an empty crisp packet, or do something equally good.
That’s all well and good, but I still don’t feel an obligation to do this. Laying on more benefits and reducing the costs even further do not change this. That something can be utilitarian-good but still impose on me no obligation to do it seems to suggest that morality might be a little bit more complicated than this.
There are some other problems too. Take the little girl in the pond example again. I’m walking along, and before acting, am I supposed to weigh up these costs and benefits to determine whether it is the right thing to do? That takes time and energy. And I’m supposed to do this cost-benefit analysis before I act – or, I guess, don’t act. Some utilitarians have responded to this and other critiques3 by suggesting that we instead apply the utilitarian calculus only to rules: act according to rules that as a whole cause more good than harm. The problem with this is that to satisfy the intuitions that led one to utilitarian ethics in the first place – namely, that of judging acts by consequences – one still needs to give some kind of opt-out clause for acts that do not fall under some rule or where applying the rule leads to terrible consequences. In (what I believe to be good and just) legal systems, laws are there to serve justice, rather than the institutes of justice being there simply to serve the arbitrary or badly-written laws.
Let’s move on then. What does John say about deontological ethics?
'Follow rules' seems at best silly and at worst evil. If you've made up your own list of rules, then again, you need some way of working out what's on the list. If you're following someone else's rules, then they had the same problem, plus you've now got to worry that they might be trying to get you to act in their interests, plus their rules might have been corrupted in the process of being transmitted from their head to yours.
Absolutely right. The question is what rules you use. When people say ‘deontological ethics’, they tend not to mean simply “follow rules”; rather, they are pointing to the specific rules in question. Whose rules? Well, Immanuel Kant’s. Or – if you believe Wikipedia – Ayn Rand’s. But in this case, I’d rather not worry about Ayn Rand. So, instead, we worry about Kant.
What’s he say then? “Always act according to that maxim whose universality as a law you can at the same time will”.
That’s not quite the same as saying “follow rules”, but rather follow rules that can be made universal. Kant asks you to imagine a possible world where everyone acts in the way you consider acting and to consider if any “contradictions or irrationalities” arise.
This isn’t simply following rules, this is following rules that have a specific property - that of being capable of being followed universally.
To show a problem with this, consider a rule that most Kantians, including Kant, seem to think is justified: not lying. The obvious objection – and I claim no originality here – is simply that of the Gestapo turning up at your door and asking if you have seen a particular Jewish person whom they are trying to round up. You are in fact sheltering them. Surely, if you are protecting someone from being sent to a concentration camp, a little white lie to the Gestapo is not a breach of one’s moral duty? It is, says the orthodox Kantian: you may not lie, but you are allowed to not tell the whole truth. The example I heard was that you are allowed to say “oh, I saw that person going to the shops”. Earlier on in the day, they had gone to the shops and you saw them walking up the street to the shops. You are telling the truth, but in such a way as to misdirect them.
If this seems legalistic and unsatisfying, you are right. It is legalistic and unsatisfying. You can reframe it though. “You should not lie unless the consequences of telling the truth will lead to grossly unjust outcomes for another”. Congratulations, you’ve just become a Kantian utilitarian, which makes about as much sense as a redheaded blonde or an abstinent sexaholic.
The problem, it seems, is that neither of these answers our moral intuitions very well. That may be because our moral intuitions are actually inconsistent. We ought to do cost-benefit analysis for something like the much-maligned National Institute for Clinical Excellence4, and no amount of utilitarian hand-waving gives you the right to feed Christians to the lions in the Colosseum or innocent men to electric chairs in Texas. And, err, good luck meshing the two together.
That brings us to virtue ethics. John says:
'Be virtuous of character' seems vacuous. How are you supposed to decide what virtue is?
Here, I am going to refer to the Stanford Encyclopedia of Philosophy. The entry on virtue ethics is quite good.
The important thing about virtue ethics is that it isn’t about actions or rules so much as it is about persons. John is a Clojure programmer, so I’ll give him an analogy: some programming languages can be swapped for one another easily – you can go from Python to Ruby without much trouble – but the difference between, say, Forth and Lisp is quite huge. Virtue ethics is trying to answer the meta-ethical question by reformulating it as one of how to live a flourishing life: it says the fundamental unit of ethical reflection isn’t actions or rules but what enables people to flourish. The very question that virtue ethics is trying to address is different: it is trying to push one to reflect differently on moral questions.
I’ll leave it there and I’d like to encourage John to look deeper: there is so much more to ethics than the potted summaries, and the questions of ethics require a lot of reflection to answer well.
The pious and chaste wannabe saint would wish for a complete lack of sexual desire (or perhaps a helpful friend to tie his legs together with rope when he felt lust), but would find it hard to be a saint as he would no longer be struggling morally against his own desires. ↩
As an aside: I am agnostic as to the moral argument for eating meat. I do find the environmental argument about meat eating reasonably convincing, and I do think there is a major problem with cruelty to animals. I cannot work out whether, if the meat industry were reformed such that cruelty to animals were no longer an issue, I would be for or against eating meat. On a personal level, this is not a concern as I do not feel the need to eat meat. ↩
The objection I am thinking of is the age-old one of the utilitarian seeming to justify the execution of an innocent for a crime he didn’t commit to justify the vengeful bloodlust of the mob if the cost-benefit analysis ends up being in favour of the execution. ↩
Woe unto the government for fucking that up, by the way. NICE is a necessary evil because the alternative to having someone do a rational cost-benefit analysis on drugs is we decide on the basis of who shouts loudest. Which is self-evidently a much worse way of doing it for fucking fuck’s sake. ↩
I’m just listening to a Premier Christian Radio debate between Andrew Copson of the BHA and a Catholic apologist called Peter D. Williams: said apologist has decided to drag the debate into a 20-minute harangue on the objectivity and transcendence of morality. Said apologist used a little philosophical learning like, as the proverb says, a drunkard uses a lamppost – for support rather than illumination. Really, where do people like this learn their ethics? I dread to think – or rather, the apologist seems not to.
Said apologist was successful: he managed to throw out so many red herrings that very little time could be spent on the child abuse scandals. You just got the pompous Catholic Englishman’s version of being Gish galloped, Andrew!