The Trolley Problem (aka the Trolley Dilemma) is
a thought experiment sometimes used by psychologists and philosophers to
gauge a person’s moral compass. The thought experiment goes something like this:
Suppose there is a runaway train car (a trolley)
rolling down the tracks towards five people who are tied to the tracks, or who are
lying on the tracks and otherwise unable to move. You are off in the
distance observing this and happen to see a lever next to you that, if pulled,
will switch the runaway train car from its current course onto another set of
tracks. However, on the diverted track there is a single person tied to the
tracks who will be killed if you pull the lever. The question is: do you pull the
lever to save five people and kill one, or take no action and let five people
die?
Keep in mind this is the Trolley Problem in its
simplest iteration. There are several variations of this thought experiment
which involve intentionally pushing a fat man onto the tracks to save five
people (the Fat Man version) or intentionally pushing the man (a “fat villain”)
who tied up the potential victims onto the tracks to avert the deaths of the
innocent. Let’s not concern ourselves with these versions or ask questions
about the characters of all the potential victims. For the sake of realism,
however, we are going to alter the details of the thought experiment into a more
likely scenario than the one initially presented. We’re going to do this because the
Trolley Problem, as described above, doesn’t present a realistic
situation one would find oneself in and be forced to make a moral judgement about.
What I’d like to do is introduce a modern equivalent to the Trolley Problem,
changing the problem to something more akin to what philosopher Judith Jarvis
Thomson had in mind with her “brilliant surgeon” version of the Trolley
Problem.
Here’s my version: Suppose you are a politician,
and if you don’t vote for a particular bill, five random people will lose their
health care coverage and die from pre-existing conditions. If you do vote for
the bill, the five people will keep their health care coverage, but
another random person will lose their coverage and die from a pre-existing
condition. In summary: if you don’t vote – if you take no action regarding the
bill – five people will die. If you do vote for the bill – take action on the
bill – one person will die and five will be saved. Do you vote for the bill or
not?
Philosophically, what you as the politician are
likely to do depends upon whether you are a utilitarian or a deontologist.
That is, if you seek to do the greatest good, as a utilitarian you’re going to
vote for the bill. If, on the other hand, you think that committing certain
actions, like intentionally harming someone, is wrong, then you’re not going to
vote for the bill. Of course, the obvious flaw in the deontologist’s position
is that not voting for the bill – an inaction
– is just as bad as intentional harm if it is your intention to abstain from the vote. In other words, an inaction is
just as bad as an action if one intends
the inaction. (The major flaw in deontology is that intentions matter – and the
intention to abstain is still an intention.)
There is a choice to be made – vote or don’t vote – and once it is made, there
is intention behind the given choice; this fact cannot be escaped. The
deontologist likes to think that by not intending to ‘intentionally’ cause
harm, they are absolved from whatever harm does happen. Obviously, this is
madness, as not voting for the bill – intentionally – causes harm and makes the
deontologist’s choice morally impermissible.
This deconstruction of the deontologist’s
position philosophically compels you to vote for the bill, thus taking the
utilitarian route (if there’s no false dilemma here, which there may well be).
Without knowledge of any of the random people involved, without knowing whether
saving the five will result in a better or worse world, you should vote for the
bill on the assumption that the death of five people is likely to wreak more
sorrow and havoc than the death of one person. All things being equal among the
random people, you are compelled to vote for the bill on purely philosophical
grounds if you want to be considered a morally just person (as morality is
construed in the Western industrialized world). However, what people actually
do is much different.
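To make the utilitarian arithmetic explicit, here is a minimal sketch of the calculus above in Python. It assumes each death carries the same fixed harm and that nothing else distinguishes the random people involved; the harm weight, option labels, and helper name are illustrative choices of mine, not part of the original dilemma.

# Minimal sketch of the utilitarian calculus for the bill scenario.
# Assumption: every death carries the same harm (weight 1), per the
# "all things being equal" premise; labels are illustrative.
HARM_PER_DEATH = 1

def expected_harm(deaths):
    # Total harm is simply deaths times the (assumed uniform) harm per death.
    return deaths * HARM_PER_DEATH

options = {
    "vote for the bill": expected_harm(1),      # one random person dies
    "abstain from the vote": expected_harm(5),  # five random people die
}

# The utilitarian picks whichever option minimizes expected harm.
best = min(options, key=options.get)
print(best)  # -> vote for the bill

The point of the sketch is how little the utilitarian case rests on: nothing but the body count and the ‘all things being equal’ assumption.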
In reality, most people take the deontologist’s
route and think they are avoiding an action that intentionally harms a
single person. This outcome was supported by a 2007 online experiment conducted
by psychologists Fiery Cushman and Liane Young and biologist Marc Hauser. They
concluded that taking a positive action (doing something) that resulted in a
decidedly negative consequence evoked
emotions that clouded ‘better’ judgement. But why should this be the case? Why
would ‘taking action’ produce feelings that the outcome will be worse
than taking no action at all? Why does an apparently personal investment in an
outcome change what a person will decide to do?
We may want an evolutionary psychologist to weigh
in here, or we may hypothesize that people generally ‘don’t want to get their
hands dirty’ for fear of negative consequences – meaning, precisely, being held
responsible for one’s actions. As those of us familiar with the workings of
Western culture know, we tend to forgive inaction that leads to harm because we work
from the assumption that such consequences weren’t malicious in nature. Yet
if one knows the consequences – that five people will die through the inaction
of not voting – it is difficult to see why this outcome isn’t just as
malicious as voting. Again, the deontologist works from the premise that an
action can be wrong and an inaction not wrong, but I’ve already argued this is
demonstrably false: inaction is in fact an action because the decision itself
is based on intention. The deontologist
intends not to intentionally kill someone, not realizing that this initial intention
intentionally kills five people. No matter what angle you view such moral
dilemmas from, given only two choices, the deontological reasoning falters.
None of this takes into account other reasons why
you might vote or not vote. Perhaps you view situations like this and feel the
need to do something, and therefore decide to vote. Or perhaps you’re a
misanthrope and are indifferent to the people dying. The way
such dilemmas are presented (for example, whether they are written out or
experienced in a virtual world) may also influence decision-making. Regardless of
which of the two choices is made – if those are all that are given – the choice still tells us
something about the moral compass of the person making it. In my
example, a deontologist has no firm philosophical ground to stand on. They are,
in other words, irrational.
And this is why the Trolley Problem, formulated
in 1967, is still relevant today. It would be wise to know when a populace is
too irrational, if for no other reason than to prompt a re-evaluation of, say, educational
programs. Of course, there is the other side of the coin, in which people in
powerful positions rely on an irrational populace; it would be wise for them to
administer such moral tests as well, so they might know when the citizenry
is becoming too smart for them to fool. Thought experiments, long
considered the realm of lowly philosophers, are beneficial to everyone. And
when they’re not, they still make for good conversations when you’re high.