
Ethics: Numerical Goodness

Expected Value

If you don't know what "expected value" means, let me quickly explain; otherwise, skip down to the next section. Suppose you have \$10 and I offer to play a game with you. In the game, I flip a fair coin: if it lands heads, you get \$5 more; if it lands tails, you give me your \$10.

Most people have an intuitive sense that this game is unfair. The mathematical justification for that intuition uses expected value.

If you don't play the game, you have a 100% chance of having \$10, so the expected value (in terms of dollars) of that decision is \$10.

If you play the game, you have a 50% chance of ending with \$15 and a 50% chance of ending with \$0. To compute the expected value, we sum the values of all possible outcomes, each weighted by its probability (Expected value). In this case, the expected value of playing the game is \$7.50: $$0.5 \cdot 15 + 0.5 \cdot 0 = 7.5$$

Since not playing has an expected value of \$10, you shouldn't play the game, assuming you're trying to maximize your expected number of dollars.
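
If it helps to see that calculation written out, here is a minimal Python sketch of the same arithmetic (representing outcomes as probability/value pairs is just for illustration):

```python
# Illustrative sketch only: an outcome is a (probability, dollars) pair,
# and the expected value is the probability-weighted sum of the values.

def expected_value(outcomes):
    return sum(p * value for p, value in outcomes)

dont_play = [(1.0, 10)]            # keep your $10 with certainty
play      = [(0.5, 15), (0.5, 0)]  # heads: $15 total; tails: $0

print(expected_value(dont_play))   # 10.0
print(expected_value(play))        # 7.5
```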

Von Neumann-Morgenstern Utility Theory

John von Neumann and Oskar Morgenstern proved that if we add two more assumptions to completeness and transitivity, then decision-making is equivalent to assigning a number (i.e. a utility) to every possible universe branch and then maximizing the expected value of utility (Von Neumann–Morgenstern utility theorem).
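
To make that conclusion concrete, here is a small Python sketch of what "assign a utility to every branch and maximize expected utility" looks like; the branch names and utility numbers are made up purely for illustration:

```python
# Illustrative sketch only: a decision is a lottery over universe branches,
# i.e. a list of (probability, branch) pairs.  The theorem says a consistent
# decision-maker acts as if they assign each branch a utility and pick the
# lottery with the highest expected utility.

utility = {
    "status quo": 0.0,
    "slightly better world": 1.0,
    "catastrophe": -1e9,
}

def expected_utility(lottery):
    return sum(p * utility[branch] for p, branch in lottery)

options = {
    "do nothing":   [(1.0, "status quo")],
    "risky action": [(0.999999, "slightly better world"), (0.000001, "catastrophe")],
}

best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # with these made-up numbers: "do nothing"
```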

Those two assumptions are

  1. continuity - for any $A \leq B \leq C$, there is some sufficiently small probability, $p$, such that the following options are equally good:
    1. 100% certainty that $B$ occurs
    2. $p$ chance of $A$ occurring, otherwise $C$
  2. independence - for any branches $A$ and $B$, any third branch $C$, and any probability $p$: if $A$ is better than $B$, then a $p$ chance of $A$ (and otherwise $C$) is better than a $p$ chance of $B$ (and otherwise $C$).

These both seem essentially irrefutable. Here's why.

Continuity

Imagine you rejected continuity. This effectively means you think there are some possible futures $A \leq B \leq C$ such that either

  1. $A$ is so bad that risking it trying to get $C$ can never be justified - no matter how small the risk.
  2. $C$ is so good that we should try to get it, no matter how unlikely it is.

I'm going to address the first scenario, and leave you to use similar reasoning to address the second.

Someone who objects to continuity because of (1) is probably thinking of a scenario vaguely like this:

Imagine an evil alien enters your room and tells you that if you turn on your TV at any point in the next hour, then he will generate a random number between 1 and 10. If it is 1, he will destroy the Earth. Any reasonable person wouldn't watch TV. Surely, even if the random number were between 1 and a billion, this reasoning would still hold true.

The answer is that this reasoning is mostly correct, even if the random number were between 1 and a billion. After all, in terms of expected value, that's still equivalent to killing 7 people so you can watch TV for an hour. However, I still maintain there is a probability small enough that you should be willing to risk watching TV. Here's my proof.
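
To spell out that arithmetic (assuming a world population of roughly 7 billion, which is what the 7-person figure implies):

```python
# Illustrative arithmetic for the "killing 7 people" equivalence above,
# assuming a world population of about 7 billion.
world_population = 7_000_000_000
p_destruction = 1 / 1_000_000_000        # the 1-in-a-billion number coming up

print(p_destruction * world_population)  # 7.0 expected deaths
```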

Reader. If you watch TV in the next hour, I will destroy the world.

Will you still watch TV? There is some nonzero probability that I am, in fact, an alien with that kind of power, so if your answer is yes, then you've accepted continuity.

You could argue that there is no way I have that kind of power - that the probability is, in fact, zero. However, the problem with assigning any theory a probability of 0 is that, if you do, you can never consistently come to believe that theory, even with overwhelming evidence (Bayes' theorem). That is, no matter what evidence I give you, you have decided you will never, ever believe I'm a powerful alien.
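
To see why, write out Bayes' theorem: the posterior probability is proportional to the prior, so a prior of zero forces the posterior to zero no matter how strong the evidence $E$ is: $$P(H \mid E) = \frac{P(E \mid H) \, P(H)}{P(E)} = \frac{P(E \mid H) \cdot 0}{P(E)} = 0$$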

For instance, even if you saw me destroy Jupiter, you'd still have to think it's literally impossible for me to destroy the Earth. Therefore, any reasonable person will assign a nonzero probability to every hypothesis.

Now, you can, of course, argue that real people don't assign probabilities like this, but this misses a crucial point: the entire point of ethics is to discuss how people should ideally make decisions - not how they do make decisions.

For these reasons, you must assign some nonzero probability to me destroying the Earth if you watch TV in the next hour. So, the only way to be consistent and not be compelled to forgo TV is to assign a very, very, very small probability to my threat being real - a probability small enough that the chance of world destruction is outweighed by the joy of TV.

Thus, you must accept the assumption of continuity. You must accept that no matter how bad a possible future could be, you're willing to accept some tiny risk of it to do things.

If you want a real-world example, consider driving. You have roughly a 1-in-100-million chance of dying for every mile you drive (Fatality Analysis Reporting System (FARS) Encyclopedia), but you drive anyway because the benefits outweigh the cost.
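
As a rough illustration of the size of that risk (the 12,000 miles per year is an assumed round number for annual mileage, not a sourced statistic):

```python
# Rough illustration of the driving example.  The per-mile risk is the
# 1-in-100-million figure cited above; the annual mileage is an assumption.
p_death_per_mile = 1 / 100_000_000
miles_per_year = 12_000

p_die_this_year = 1 - (1 - p_death_per_mile) ** miles_per_year
print(p_die_this_year)  # ~0.00012, i.e. roughly a 1-in-8,000 chance per year
```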

Independence

So, this brings us to independence. I don't really have too much to say about this assumption, because I've only ever seen one objection to it, and I (sadly) can't find the source. To paraphrase:

Imagine Alice and Bob both value a cookie equally, but we only have one cookie to split between them. Now, it's clear that it doesn't matter who we choose to give the cookie to. However, consider this solution: we flip a coin and give the cookie to Alice if it lands heads and to Bob if it lands tails. Independence implies that it doesn't matter if this coin is biased.

I'm personally not convinced by this argument that independence is false. While independence does imply not caring whether the coin is biased, that's only if all else is equal - which it isn't. In reality, if the coin were biased towards Alice, Alice and Bob would probably both have negative feelings regardless of the outcome. For the argument to go through, you actually have to accept that, between two people who don't intrinsically care whether the coin is biased, it is still morally better to use a fair coin. I think this is pretty obviously not the case. If you really are bothered by this, I'd suggest reading a proof for utilitarianism that doesn't assume independence (Fairness and Utilitarianism without Independence).

Takeaways

All I'm claiming here is that ethics is equivalent to maximizing the expected value of each universe-branch's "goodness". If you reject this conclusion, then you must, mathematically, either reject logical consistency or reject one of the four assumptions discussed so far. So, which one is it?

Glib rhetoric aside, it's important to note what I'm not saying. I'm not saying that regular people should actually make decisions this way. Humans have limited mental abilities, which makes this kind of explicit calculation often infeasible. Moreover, there might be a simpler way to think about ethics that is completely equivalent to assigning "goodness" numbers to universe branches. I'm definitely not saying we should be adding people's happiness or preferences or freedom together and maximizing that.

References

Bayes' theorem. In Wikipedia. Retrieved from https://en.wikipedia.org/w/index.php?title=Bayes%27_theorem&oldid=782846370

Expected value. (2017, September 3). In Wikipedia. Retrieved September 7, 2017, from https://en.wikipedia.org/w/index.php?title=Expected_value&oldid=798723995

Ma, S., & Safra, Z. (2016). Fairness and Utilitarianism without Independence. https://economics.sas.upenn.edu/system/files/event_papers/Ma%20Safra%20Fairness%20and%20Utilitarianism%20November%202016.pdf

National Highway Traffic Safety Administration. Fatality Analysis Reporting System (FARS) Encyclopedia. Retrieved from https://www-fars.nhtsa.dot.gov/Main/index.aspx

Von Neumann–Morgenstern utility theorem. (2017, February 19). In Wikipedia. Retrieved May 4, 2017, from https://en.wikipedia.org/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&oldid=766396489