We have previously, in the session on free will, discussed determinism: the idea that everything that happens in the world is part of a determinate – albeit unknown – causal chain. Some philosophers take this to mean that there is no free will, since determinism implies that the future is fixed and we cannot change it. Furthermore, some would argue that this means that although we may take ourselves and each other as morally responsible for our actions, in fact we are not, because we could not do otherwise.
One view in moral philosophy is consequentialism: the view that what matters morally are the consequences of our actions, rather than our intentions or our will. On this view, it doesn’t matter whether we have free will or not, since only the consequences of what we do matter. Whether we acted freely has no bearing on the consequences of our actions, and it is therefore morally irrelevant.
Both consequentialism and the idea that free will is an illusion are controversial doctrines in philosophy. Many philosophers reject them, for various reasons; Anscombe is one of them. What is distinctive about Anscombe’s argument against consequentialism and determinism is that it is founded on her development of the idea of intentional action.
The case that intentions matter morally, and not only consequences, is fairly easily made. Imagine you’re working high up on scaffolding, laying bricks. In spite of your careful precautions, you lose your grip on one of the bricks and it slips out of your hands, falling down towards the street, where, unfortunately, it hits a passer-by on the head, killing him instantly. You did not intend to kill him. Now imagine you are laying bricks, high up on scaffolding, but this time you throw a brick at a passer-by, aiming at his head in order to kill him – and you are successful in your wicked plan. In both cases, the consequences are the same, but we would normally judge these situations differently; we would judge the intentional killer more harshly than the accidental killer. So intentions seem to matter, morally.
But what does it mean to act intentionally? To answer that question we should look at what sort of action an intentional action is, says Anscombe.
All sorts of movements happen in the world, but only some of those movements are the behaviour of agents. And not every instance of behaviour is an action, so actions are in turn a subset of the behaviour of agents. Scratching a mosquito bite in your sleep, for instance, is behaviour, but it differs from an action in that you are in no better position than any other observer to account for it: you were asleep and didn’t even know that you were doing it. Actions, on the other hand, are things an agent does knowingly.
In the category of actions, we can make a further distinction: that between intentional and unintentional actions. When we do something, we can have a certain intention with which we do it. I hammer nails in wood with the intention to construct a table, for instance. But in addition, there can be unintentional side effects. By hammering nails in wood, I am bound to make noise, but making noise is not my intention. My action, hammering nails in wood, is therefore intentional when described in terms of making a table, but it is an unintentional action when described in terms of making noise.
An important claim made by Anscombe is that intentions are not causes. Intentions are an answer to the question “why?”, which can only be answered by the agent, and not by an observer. Causes, on the other hand, can only be identified upon observation. To take the carpentry example again: any observer can point out a causal chain that eventually results in my behaviour of hammering nails in wood. Parts of that causal explanation can refer to my brain, my hands, the movement of my muscles et cetera. But no matter how adequate the causal explanation is, it says nothing about why I’m hammering nails in wood. The answer to the why-question is: because I want to make a table. Observers cannot give this explanation, only the agent can.
Of course, you might point out that an observer could perhaps identify my desire to make a table in a state-of-the-art brain scan. Or you might argue (with Davidson) that my intention to make a table is just one link in the causal chain. But that does not refute the point that an answer to the why-question is a different kind of answer from an answer to the question ‘What caused this behaviour?’. Intentions explain the significance of an action to the agent; causes do not. Intentions correspond to the reasons for which an action is done; causes do not.
If we accept this distinction between intentions and causes, it sheds light on the difference between two mental states: beliefs and desires. A belief is a description of the world – or at least of how we take it to be – in our mind: the belief that Paris is the capital of France, the belief that H2O is the chemical composition of water, et cetera. Desires are not descriptive. They do not describe facts, but indicate what we would like the world to be like. John Searle calls this a difference in ‘direction of fit’: whereas beliefs are meant to fit the world as it is (mind-to-world), desires are meant to make the world fit them (world-to-mind).
Anscombe uses the example of a shopping list to illustrate this difference. Imagine you are walking in a supermarket with a shopping list you wrote in advance. The shopping list expresses your intention to buy the things listed on it; it expresses your desire to acquire those items. It is not a prediction of what you are in fact going to buy. It does not express your belief that you will in the future buy these items, although you might very well have that belief, too. Now imagine that there is a detective following you, writing down exactly which items you pay for at the till. His list of items might be the same as your shopping list, but there is an important difference: his list is not an expression of anyone’s intention to purchase these items. Instead, it is a description of the items you have in fact bought.

You can see the difference in ‘direction of fit’ when we consider a mistake in either case. Suppose you come home to discover that instead of butter – which was on your list – you accidentally bought lard. You would not correct this mistake by crossing out the word ‘butter’ on your shopping list and replacing it with the word ‘lard’. Instead, you would go back to the shop to exchange it for butter. You would make the world fit your shopping list, not make your shopping list fit the world. On the other hand, if the detective had accidentally written down that you had bought butter, and later discovered that you in fact bought lard, he would simply correct his list, because his list is merely a record. It is meant to fit the world as it is, not to express desired changes in the world.
Anscombe used her action theory to defend a version of the Doctrine of Double Effect. This doctrine holds that there is a difference between intending harm and merely foreseeing harm, and that this difference is morally relevant. Whereas intending harm is morally forbidden, according to this doctrine, it may not always be morally forbidden to perform an action that has harm as a foreseen side-effect. For example: a doctor might administer drugs to a patient with the intention of relieving the patient’s pain, while foreseeing that death might occur as a side-effect of the drugs. By contrast, a doctor might administer heavy painkillers with the intention of killing the patient. Those who accept the Doctrine of Double Effect would evaluate these two cases differently: in the first case, the doctor’s action is permissible; in the second case, it is not (assuming the patient wants to live, if possible, but not in pain). The test included in the Doctrine of Double Effect is the following: if the agent would avoid the foreseen side-effect if they could while still performing the intended action (by using a different, less harmful painkiller, for instance), then the action is permissible, in spite of the foreseen harm. The Doctrine of Double Effect does not, however, excuse all actions with horrible side-effects, since it does not relieve the agent of the responsibility to give these foreseen side-effects due consideration.
This doctrine explains the difference between a strategic bomber and a terror bomber. A strategic bomber intends to bomb a particular target, but foresees that, in doing so, civilians living in the area will probably be killed. The strategic bomber may or may not accept that side-effect, but it is not his intention. A terror bomber, however, intends to kill civilians. The death of the civilians is not a foreseen side-effect, but the very reason for the bombing in the first place.
Anscombe’s main concern is that the Doctrine of Double Effect is often abused – that is, appealed to without taking this test into consideration. She made this argument against the University of Oxford’s decision to award an honorary degree to Harry Truman, who was, in Anscombe’s eyes, a mass murderer. Defenders of Truman argued that the bombings of Hiroshima and Nagasaki were strategic bombings, with large numbers of civilian casualties as foreseen but unintended side-effects. Anscombe, however, believed that a demonstration of atomic weapons – if that had been the intention – could have been conducted in a more isolated area. She believed that Truman intended to kill the population, and was therefore a terror bomber rather than a strategic bomber.