
Ethics: What Is Utility?

The Summatarian Framework

So far, we have developed a framework to talk about ethical theories - a framework I've dubbed "summatarianism". The basic premise is that there is "individual wellbeing" and that, for a fixed population, the ethical thing to do is to maximize the expected sum of individual wellbeings. I've argued that all reasonable people should accept it. Among those who do accept it, discussion of ethics is reduced to two questions:

  1. What is "utility"?
  2. How do we account for decisions that change populations?

Having established this framework, we're going to discuss (1) here and (2) in the next post. I should stress that I consider the rest of this chapter much more tentative than the first six posts, which are basically an argument that the vast majority of ideas for ethical systems are at best incomplete and at worst completely incoherent. These last three posts attempt (with mixed success) to pin down a single ethical system.
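
To make the fixed-population premise concrete, here is one way to write it down. The notation is mine, and nothing in the argument depends on these particular symbols:

    \max_a \; \mathbb{E}\left[ \sum_{i=1}^{n} u_i \;\middle|\; a \right]

where u_i is the well-being of individual i in a fixed population of n people, and the expectation is taken over how the universe might unfold given action a. The rest of this post is about what u_i should actually mean.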

Individual Utilities

In 2004, Eliezer Yudkowsky offered an idea, outside the utilitarian context, that he calls "coherent extrapolated volition" (Yudkowsky, 2004). It inspired what I'm about to propose: a theoretical formula for individual utility.

Imagine Alice asks you for the utility you assign to various universe branches. Like any reasonable human, you reply that your mind is neither fully consistent nor powerful enough to actually do so.

Alice responds that, to aid you, she will let you specify a sequence of steps she can take to determine your utility function for you. You might respond:

Alice, I want you to make someone as much like me as possible, but smarter, wiser, more experienced, and generally more like the person I wish I were, whose sole goal is to assign utilities to universes in a way that corresponds maximally with my values. Then ask them the same question.

If Alice does so, she may find that this meta-person makes the same request. And so on, until we arrive at someone whose near-infinite knowledge and intelligence, combined with complete empathy for you, allows them to assign utilities to universe branches on your behalf.

I believe that these utilities are the correct ones. This is my definition of well-being.
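
To make the recursive structure of this thought experiment explicit, here is a toy sketch of the loop in Python. Everything in it - the universe branches, the starting judgments, and the "idealize" step - is an invented stand-in; nothing like this is computable for an actual person.

    # Toy sketch: repeatedly replace the evaluator's judgments with an idealized
    # successor's judgments until nothing changes (a fixed point), then read off
    # the resulting utilities. All names and numbers here are made up.

    IDEAL = {"branch A": 0.9, "branch B": 0.4, "branch C": 0.1}  # stand-in for fully extrapolated values

    def idealize(judgments):
        """Return the judgments of a smarter, wiser successor (a made-up update rule)."""
        return {b: round(u + 0.5 * (IDEAL[b] - u), 3) for b, u in judgments.items()}

    def extrapolated_utilities(initial_judgments, max_rounds=100):
        current = initial_judgments
        for _ in range(max_rounds):
            successor = idealize(current)
            if successor == current:  # fixed point: the meta-person makes no further request
                break
            current = successor
        return current

    print(extrapolated_utilities({"branch A": 0.2, "branch B": 0.8, "branch C": 0.5}))

The only point of the sketch is the shape of the procedure - idealize, ask again, stop at the fixed point - and whatever the fixed-point evaluator assigns is what I'm calling well-being.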

Practical Implications

Now, I freely admit that this algorithm is impossible to follow in practice, which limits the (um) utility of this definition. However, I think that accepting this as the gold standard helps focus discussion - similar to how Solomonoff induction is impossible to implement in practice but yields a notion of ideal reasoning ("Solomonoff's theory of inductive inference," 2017).

Of course, it would be silly to simply assert that this gold standard is useful without giving examples of its usefulness.

The immediate suggestion from this gold standard is that we should more or less default to people's preferences when measuring their utility. After all, of all the people in the world, there is exactly one with access to the inner workings of your mind: yourself.

That's not to say that I think the proper ethical system simply reverts to preference utilitarianism in practice. Rather, I think this pins down a particular flavor of preference utilitarianism - which is, in fact, a fairly big umbrella of ethical systems, owing largely to people's differing beliefs about what counts as a valid preference.

For instance, I know people who believe that if you can't (and won't ever be able to) tell whether you're in situation A or situation B, then you can't prefer one to the other. This is the subjectivist spin on utilitarianism. For example, if you can't tell whether you live in a simulation, then on this view you can't have a preference about whether you do. Since, as far as I can tell, your ideal self could care about this, I view it as a valid preference.

Likewise, if you're religious, would you want to believe in God only if He exists, or would you want to believe regardless? Either preference is valid under my specification, but not under the subjectivist one.

Another way this vagueness about valid preferences shows up is when you have conflicting preferences. For instance, suppose you want to lose weight but pass a doughnut shop on the way to work. You may, in a moment of weakness, buy a doughnut. A naive economic view of preferences would say you preferred the doughnut to losing weight. However, you can simultaneously wish the doughnut shop weren't there, so that you could never buy a doughnut. Which is the valid preference? Whichever your ideal self would advocate (probably not eating the doughnut). This whole scenario is closely related to the more general problem of people not caring enough about their future selves. My position - that people should be helped with this problem - has pretty significant implications for real-world policy.
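
One standard way to model this kind of preference reversal - not something my argument depends on, and the numbers below are made up - is the quasi-hyperbolic "beta-delta" model of present bias, in which an immediate reward escapes a discount that every delayed reward shares:

    # Beta-delta (quasi-hyperbolic) discounting: an immediate reward is taken at
    # face value, while a reward arriving `delay` periods later is scaled by
    # beta * delta**delay. Payoffs and parameters are illustrative, not empirical.

    def discounted_value(reward, delay, beta=0.3, delta=0.99):
        return reward if delay == 0 else beta * (delta ** delay) * reward

    DOUGHNUT = 1.0  # small, immediate pleasure
    DIET = 3.0      # larger payoff from sticking to the diet, arriving one period later

    # Planning the night before, both options are still in the future: the diet wins.
    print(discounted_value(DOUGHNUT, delay=1), discounted_value(DIET, delay=2))  # roughly 0.30 vs 0.88

    # Standing at the shop, the doughnut is immediate and escapes the beta penalty: it wins.
    print(discounted_value(DOUGHNUT, delay=0), discounted_value(DIET, delay=1))  # roughly 1.00 vs 0.89

The planned ranking and the at-the-shop ranking disagree, which is exactly the sort of conflict the ideal-self criterion is meant to adjudicate.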

But I digress. My main point is that there are many flavors of utilitarianism that differ only in how they define "utility". I think the one I've described here is superior to the others I've seen and results in basically the same interpretation as Harsanyi's (1996).

References

Harsanyi, J. C. (1996). Utilities, preferences, and substantive goods. Social Choice and Welfare, 14(1), 129-145.

Solomonoff's theory of inductive inference. (2017, July 23). In Wikipedia. Retrieved November 12, 2017, from https://en.wikipedia.org/w/index.php?title=Solomonoff%27s_theory_of_inductive_inference&oldid=791942425

Yudkowsky, E. (2004). Coherent Extrapolated Volition. Singularity Institute for Artificial Intelligence. https://intelligence.org/files/CEV.pdf