
Ethics: FAQ

This section will never be finished.

You may also find Siskind's "Consequentialism FAQ" interesting.

Slavery of the Few

Some people argue that utilitarianism can justify the slavery of the few to the benefit of the many.

Suppose we're considering whether 50% of the population should be enslaved and assigned 1-to-1 to the other half. From a preference utilitarian perspective, this question is equivalent to asking whether you would accept a 50% chance of becoming a slave in exchange for a 50% chance of owning one. The answer for the vast majority of people is no, so utilitarianism clearly doesn't advocate this.

If the proportion of the population is smaller - say 5% - this does reduce the utilitarian costs by an order of magnitude, but it reduces the benefits (to the owners) by the same order of magnitude. Would you prefer to live in a society in which you have a 5% chance of being a slave? I thought not.
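To put the veil-of-ignorance argument in rough expected-utility terms (the symbols here are my own shorthand, and the utilities are illustrative rather than measured): enslaving a fraction p of the population and assigning them 1-to-1 to another fraction p changes a random citizen's expected utility by roughly

\[
\Delta \mathbb{E}[U] \;\approx\; p\,(U_{\text{slave}} - U_0) \;+\; p\,(U_{\text{owner}} - U_0),
\]

where U_0 is the utility of being neither slave nor owner. This is negative whenever the harm of enslavement, U_0 - U_slave, exceeds the benefit of ownership, U_owner - U_0 - and shrinking p from 0.5 to 0.05 scales both terms equally, which is why the conclusion doesn't change.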

Similar arguments apply to non-preference utilitarianism. Moreover, my preferred interpretation would require that each person's ideal self actually want this slave-holding society.

TV Show Accident

Consider this story:

A TV studio is live broadcasting an event to billions of people. A piece of equipment falls during the broadcast and threatens to kill Alice - an employee. The studio can save her, but that will require cutting the broadcast short - what should they do?

The naive answer is that a utilitarian would let Alice die. After all, the "utility" gained by literally billions of people watching the show outweighs Alice's life.

I would agree that hedonistic utilitarianism implies this, but more nuanced forms do not. For instance, under preference utilitarianism, reaching that conclusion would require that the average person in this society prefer Alice die so that the broadcast can continue. In the version I advocate, the ideal version of the average person would need to want that. That seems unlikely.

Moreover, all of this tacitly admits that allowing Alice to die is wrong - I'm not convinced it is. Human intuition is pretty awful when it comes to huge numbers, so I'm not terribly trusting of it here. Besides, there are much more important ethical issues to resolve before we start crossing out ethical systems for relatively minor nuisances like this.

Intractability

Some people claim that this is all great in theory - but that it's totally intractable in practice. After all, it's not like we can measure these utilities.

On the one hand, I want to say - fine, but then this is still all correct - right? And isn't the fact that this is theoretically correct fascinating and important?

On the other hand, I think that this framework isn't just the most consistent and coherent moral framework ever devised, it's also the most useful. There are deep connections between utilitarianism and economics, for instance, which means this framework is automatically useful for about half of political issues. Socially, utilitarianism generally supports giving people greater freedom and equality, which puts it in line with the general liberalizing trends of recent centuries.

More philosophically, there actually is a rich literature on dealing with the real-world difficulties of implementing utilitarian policies - quality-adjusted life years, for instance - and the fact that this literature isn't widely known isn't an argument against utilitarianism.

Finally, this isn't charitable, but I worry that the real reason some (but not all!) people don't like utilitarianism is that it implies you can't just yell political catchphrases - that you should actually do real intellectual work on the consequences before having a strong opinion on political issues.

Too Demanding

On a related note, others argue that utilitarianism is too demanding, because it demands that people ascribe equal value to strangers and to friends, family, and themselves. This strikes me as a particularly odd criticism.

Who ever claimed that being a perfect soccer player should be easy - or a perfect artist? Why on Earth should behaving perfectly ethically be easy? If anything, shouldn't we expect moral perfection to be difficult, if not impossible?

This is all kind of moot, though. Utilitarianism doesn't talk about what is morally required; it simply spits out a ranking of actions. The "morally required" interpretation is foisted upon it by us humans, and for some reason some people seem to think "always choose the optimal action" is the "proper" interpretation.

Trolley Problems

The trolley problem is probably the most commonly brought up thought experiment relating to utilitarianism. It goes something like this (from Wikipedia's "Trolley problem" article):

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track. You have two options:

  1. Do nothing, and the trolley kills the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the most ethical choice?

This problem ultimately targets the question of whether there's an important distinction between letting someone die and killing someone.

As with the slavery issue, this problem is 100% resolved behind a veil of ignorance. If you don't know whether you'll be the lever-puller, the lone person, or in the group of five, then you should rationally want the lever to be pulled. Indeed, about 90% of people agree that pulling the lever is the better choice.

However, this problem has prompted several variations that push the non-utilitarian point further. Here's one:

Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed.

This thought experiment is less pure than the previous one, because we can expect significant second-order effects. In particular, if judges make a policy of appeasing rioters with framings, then we can expect an increase in riots (which are now more "effective"). It also becomes more likely that the truly guilty person won't be apprehended, which should increase crime rates in the long run.

If we assume these effects away, then I agree that utilitarianism says the judge should lie. However, again, this is supported by the veil of ignorance - if you're part of that community and the victim is chosen randomly, then it's rational to support the judge lying.

Finally, summatarianism might differ significantly from utilitarianism here, because it's plausible that many people consider honest judges an intrinsic part of their welfare.

A third variation is to imagine you're a doctor with five patients in need of organ transplants. Should you kill one healthy patient to save the five? The analysis here is similar to the judge case.

Intuition Clashes

Which brings me to what is probably the most common counter-argument against utilitarianism: it just feels wrong.

I believe this is only half true. As consequences get larger, everyone eventually becomes utilitarian. For instance, some people might not pull the trolley lever to save five people, but virtually everyone would pull the lever to prevent 5 billion deaths.

And these "larger" consequences need not merely be numeric. Imagine we revisit the trolley problem, except that I tell you I've taken your child and five other people and, at random, tied five of them to the default track and one to the side track. In this scenario, pulling the lever increases the odds of your child living from 17% to 83%. Suddenly, everyone decides that doing the math and following it might be worth it.
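To make the arithmetic explicit (assuming I assign the six people to the tracks uniformly at random):

\[
P(\text{your child survives} \mid \text{don't pull}) = \tfrac{1}{6} \approx 17\%,
\qquad
P(\text{your child survives} \mid \text{pull}) = \tfrac{5}{6} \approx 83\%.
\]

If you don't pull, the five on the default track die, so your child survives only if they happen to be the one tied to the side track; pulling the lever reverses those odds.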

The only time people choose non-consequentialist reasoning is when the consequences are small - which happens to be nearly all the time in real life. In these situations, other considerations dominate, such as maintaining the narrative - to yourself and to others - that you're a good, trustworthy person, not a cold, unfeeling machine.

todo Huemer

Quality-adjusted life year. (2017, July 14). In Wikipedia. Retrieved December 8, 2017, from https://en.wikipedia.org/w/index.php?title=Quality-adjusted_life_year&oldid=790601363

Trolley problem. (2017, November 8). In Wikipedia. Retrieved November 13, 2017, from https://en.wikipedia.org/w/index.php?title=Trolley_problem&oldid=809345261#Original_dilemma

Siskind, S. (2011). The Consequentialism FAQ. http://archive.today/DczV1

Huemer, M. (2022). Why I Am Not a Utilitarian. Fake Nous. https://fakenous.net/?p=2757