
Ethics: Personal Utility

In the previous post, I showed that any reasonable ethical system is equivalent to maximizing expected utility - where utility is a "goodness" number assigned to potential universe branches.
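In symbols (the notation here is mine; the previous post states this in prose): among the available actions $a$, pick the one maximizing

$$\mathbb{E}[u \mid a] = \sum_i p_i \, u(x_i),$$

where the $x_i$ are the outcomes action $a$ could lead to, $p_i$ is the probability of $x_i$, and $u(x_i)$ is its "goodness" number.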

If you look at the four assumptions again, it should be apparent that individual wellbeing also satisfies them:

  1. Either I'm better off in universe $X$ or in universe $Y$ (or I'm equally well off in both).
  2. If I'm better off in $X$ than in $Y$ and in $Y$ than in $Z$, then I'm better off in $X$ than in $Z$.
  3. No matter how bad an outcome is, there is some sufficiently small probability of it that I'm willing to risk in exchange for good things.
  4. If I prefer $X$ to $Y$, then I prefer a $p$ chance of $X$ to a $p$ chance of $Y$.

The arguments are more-or-less identical:

  1. This is needed to make coherent decisions.
  2. Without this, you are exploitable (see the money-pump sketch after this list).
  3. Without this, you won't respond sensibly to unreasonable threats.
  4. This seems obviously true, and I've never heard an argument against it.
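The exploitability in point 2 is the classic money-pump argument: if your preferences cycle, say you prefer $X$ to $Y$, $Y$ to $Z$, and $Z$ to $X$, someone can charge you a small fee for each "upgrade" around the cycle and drain you indefinitely. A toy sketch in Python (the items and the fee are arbitrary; only the cycle matters):

```python
# An agent with cyclic preferences (X beats Y, Y beats Z, Z beats X)
# happily pays a small fee for every trade up, forever.

beats = {"X": "Y", "Y": "Z", "Z": "X"}      # key is preferred to value
upgrade = {worse: better for better, worse in beats.items()}

holding, wealth, fee = "X", 100.0, 1.0
for _ in range(300):
    holding = upgrade[holding]              # trade up to the preferred item...
    wealth -= fee                           # ...paying a small fee each round

print(holding, wealth)                      # X -200.0: back where it started, 300 fees poorer
```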

In short, even though we're not sure exactly how "individual wellbeing" should be defined, we can be reasonably confident that any coherent definition satisfies these assumptions. From this, it follows that individual wellbeing can be fully represented by a single number from a decision-making perspective: this is the Von Neumann–Morgenstern utility theorem.
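To make the representation concrete, here's a minimal sketch in Python. The outcomes, probabilities, and utility numbers are all invented for illustration; the point is just that once every outcome has a single "goodness" number, choosing reduces to comparing probability-weighted sums.

```python
# A lottery is a list of (probability, outcome) pairs; preferences
# over lotteries are recovered by comparing expected utilities.
# The utility numbers below are made up purely for illustration.

def expected_utility(lottery, utility):
    """Probability-weighted sum of the utilities of a lottery's outcomes."""
    return sum(p * utility[outcome] for p, outcome in lottery)

# Hypothetical "goodness" numbers (assumption 1: any two outcomes
# are comparable, so each one gets a number).
utility = {"great job": 10.0, "okay job": 4.0, "no job": 0.0}

# Two hypothetical options, each a gamble over those outcomes.
option_x = [(0.6, "great job"), (0.4, "no job")]
option_y = [(1.0, "okay job")]

# The theorem says the preferred option is whichever lottery has the
# higher expected utility: here option_x, since 0.6 * 10 = 6 > 4.
best = max([option_x, option_y], key=lambda L: expected_utility(L, utility))
```

Roughly speaking, assumption 4 (independence) is what forces this comparison to be linear in the probabilities.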

This is not to say that all decisions will be easy, or that everyone will always know, for instance, which college to go to or which car to buy. Indeed, nothing I've said indicates what "individual wellbeing" even is. All I'm claiming is that whatever process you should use to make decisions is equivalent to maximizing expected utility.

Von Neumann–Morgenstern utility theorem. (2017, February 19). In Wikipedia. Retrieved 22:35, May 4, 2017, from https://en.wikipedia.org/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&oldid=766396489