Ethics: Personal Utility
In the previous post, I showed that any reasonable ethical system is equivalent to maximizing expected utility - where utility is a "goodness" number assigned to potential universe branches.
If you look at the four assumptions again, it should become apparent that individual wellbeing also satisfies them (they're restated more formally below):
- Either I'm better off in universe $X$ or $Y$ (or I'm equally well off in both).
- If I'm better off in $X$ than in $Y$ and in $Y$ than in $Z$, then I'm better off in $X$ than in $Z$.
- No matter how bad an outcome is, there is some sufficiently small probability of it that I'm willing to risk in exchange for good things.
- If I prefer $X$ to $Y$, then I prefer a $p$ chance of $X$ to a $p$ chance of $Y$ (with whatever happens the other $1-p$ of the time held fixed).
The arguments are more-or-less identical:
- This is needed to make coherent decisions.
- Without this, you are exploitable.
- Without this, you can't respond reasonably to threats of terrible outcomes, however unlikely they are.
- This seems obviously true, and I've never heard an argument against it.
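For reference, these informal assumptions are just the standard von Neumann–Morgenstern axioms over lotteries $L$, $M$, $N$ (writing $L \succeq M$ for "$L$ is at least as good as $M$"):

$$
\begin{aligned}
&\textbf{Completeness:} && L \succeq M \ \text{ or } \ M \succeq L \\
&\textbf{Transitivity:} && L \succeq M \ \text{ and } \ M \succeq N \ \implies \ L \succeq N \\
&\textbf{Continuity:} && L \succeq M \succeq N \ \implies \ \exists\, p \in [0,1] : \ pL + (1-p)N \sim M \\
&\textbf{Independence:} && L \succeq M \ \iff \ pL + (1-p)N \succeq pM + (1-p)N \ \text{ for all } p \in (0,1]
\end{aligned}
$$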
In short, even though we're not sure exactly how "individual wellbeing" should be defined, we can be reasonably confident that any coherent definition follows these assumptions. From this, it follows that individual wellbeing can be fully represented by a single number from a decision-making perspective (this is the von Neumann–Morgenstern utility theorem).
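Concretely, the theorem says that preferences satisfying these axioms are exactly the ones representable by some utility function $u$:

$$
L \succeq M \iff \mathbb{E}_L[u] \geq \mathbb{E}_M[u],
$$

and $u$ is unique up to positive affine transformations ($u \mapsto au + b$ with $a > 0$), so the particular numbers mean nothing beyond the expected-utility comparisons they produce.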
This is not saying that all decisions will be easy - that everyone will always know, for instance, which college to go to or which car to buy. Indeed, nothing I've said has indicated what "individual wellbeing" even is. All this claims is that whatever process you should use to make decisions is equivalent to maximizing expected utility.
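To make "maximizing expected utility" concrete, here's a minimal sketch of the decision procedure. The options, probabilities, and utility numbers are entirely made up for illustration; the point is only the shape of the computation: score each option by its probability-weighted utility and take the best one.

```python
# Minimal sketch: a decision as expected-utility maximization.
# All options, probabilities, and utilities below are hypothetical.

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs whose probabilities sum to 1."""
    return sum(p * u for p, u in lottery)

# A hypothetical "which car to buy" decision: each option is a lottery over outcomes.
options = {
    "reliable used car": [(0.95, 70), (0.05, 20)],   # small chance of expensive repairs
    "cheap fixer-upper": [(0.60, 85), (0.40, 10)],   # bigger upside, bigger downside
}

for name, lottery in options.items():
    print(f"{name}: expected utility = {expected_utility(lottery):.1f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("choose:", best)
```

Of course, the arithmetic is the easy part; the entire difficulty lives in assigning the probabilities and the utility numbers, which is exactly the part this post stays agnostic about.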