# Interpersonal Utility Comparison

## Unique up to Linear Transformations

This still leaves an important problem with individual utility: utility functions are only unique up to positive linear transformation.

More precisely, if Alice has some function $f(x)$ that assigns each universe branch a utility for her, then there is no way to tell whether $f$ is her function or whether $10 \cdot f(x) + 3$ is. There's nothing special about $10$ and $3$, either - any linear function of $f(x)$ with positive slope will be impossible to distinguish from a decision-making perspective.
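To see why the two functions are behaviorally indistinguishable, here's a minimal sketch (the branch names and utility values are hypothetical) showing that a positive linear transformation preserves every ranking of branches, and therefore every decision:

```python
# Hypothetical utilities Alice assigns to three universe branches.
branches = {"A": 0.0, "B": 1.0, "C": 0.4}

def transform(u, a=10.0, b=3.0):
    """Apply the positive linear transformation a*u + b (a > 0)."""
    return a * u + b

# Rank branches under the original and the transformed function.
original = sorted(branches, key=lambda s: branches[s])
transformed = sorted(branches, key=lambda s: transform(branches[s]))

print(original == transformed)  # True: the ranking is identical
```

Since any decision reduces to picking the highest-ranked available branch, no observation of Alice's choices can distinguish $f$ from $10 \cdot f + 3$.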

At first glance, this doesn't seem like a huge deal. After all, if two functions are identical in terms of making decisions, then who cares if they're mathematically different?

The problem pops up when you start adding people's utilities together. For example, suppose Alice assigns a utility of $0$ to branch $A$ and a utility of $1$ to branch $B$. Suppose, moreover, that Bob assigns the opposite: $1$ to branch $A$ and $0$ to branch $B$. In this case, we'd say that, ethically, these two branches are equally good.

Here's the problem: suppose we double Alice's utility function so that it assigns $2$ to branch $B$ (and still $0$ to branch $A$). Alice's decisions are unchanged, but now the sum says it's more ethical to choose $B$ over $A$.
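The flip can be checked directly. This sketch uses the utilities from the example above and a hypothetical `scale_alice` factor for the rescaling:

```python
# Utilities from the example: Alice and Bob value the branches oppositely.
alice = {"A": 0.0, "B": 1.0}
bob = {"A": 1.0, "B": 0.0}

def totals(scale_alice=1.0):
    """Sum of utilities per branch, with Alice's function rescaled."""
    return {s: scale_alice * alice[s] + bob[s] for s in alice}

print(totals())     # {'A': 1.0, 'B': 1.0} - the branches tie
print(totals(2.0))  # {'A': 1.0, 'B': 2.0} - now B wins
```

A transformation that changes nothing about Alice's own choices changes the verdict of the summed function, which is exactly the interpersonal-comparison problem.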

## Solutions

Now, it isn't hard to prove that adding a constant amount to someone's utility function doesn't change anything - at least outside of population ethics. Instead, this interpersonal-comparison problem is caused by the fact that we don't know how to scale these individual utilities before adding them.
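The constant-offset half of that claim is easy to verify numerically. Reusing the two-person example from above, with a hypothetical offset `c` added to Alice's function:

```python
# Utilities from the earlier example.
alice = {"A": 0.0, "B": 1.0}
bob = {"A": 1.0, "B": 0.0}

def totals(c=0.0):
    """Sum of utilities per branch, with constant c added to Alice's."""
    return {s: (alice[s] + c) + bob[s] for s in alice}

base = totals()
shifted = totals(5.0)

# Every branch's total moves by exactly c, so the ranking can't change.
print(all(shifted[s] - base[s] == 5.0 for s in alice))  # True
```

Since the offset adds the same amount to every branch's total, it can never reorder the branches; only the scaling (the slope) is a genuine degree of freedom.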

When Newton wrote his gravity equation using $G$ as an unknown constant specifying the strength of gravity, the fact that he didn't know the constant's value wasn't evidence against his theory of gravity; it was just a blank to be filled in.

Similarly, this "problem" with interpersonal utility comparisons isn't an argument that the summatarian framework is false, but is instead a blank to be filled in - a degree of freedom not yet specified.

This means it is wrong to point to this problem as an argument that the summatarian framework is false. Even if the issue turns out to be impossible to resolve, it remains the case that rational people should view ethics as the sum of individual welfare.

With that being said, I think there are some promising avenues. The obvious one is to hope that neuroscience can eventually provide a more scientific basis for our notions of utility. In the meantime, I have a different proposal that's a little vague but, I think, a useful and interesting direction of inquiry.

Imagine you live in a society with $R$ resources to divvy up among $N$ people. Behind the veil of ignorance, we still don't know how to scale people's utilities, but we do know that whatever scaling we come up with, we want it to advocate an equitable distribution of resources, because utility has diminishing returns in resources/income.

The way to do this is mathematically straightforward. Suppose Alice's utility as a function of resources is $f_A$ and Bob's is $f_B$. If we want to advocate for an equal resource distribution, we should scale these functions so that $df_A/dr = df_B/dr$ at the equal allocation. More generally, everyone's function should be scaled so that $df/dr$ takes the same value for everyone at $r = R/N$.
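As a concrete sketch of this normalization, suppose (hypothetically) that Alice and Bob both have logarithmic utility in resources, but with different arbitrary scales. We can solve for the scale factor that makes each person's marginal utility equal $1$ at the equal share $R/N$:

```python
import math

R, N = 100.0, 2        # hypothetical: 100 units of resources, 2 people
r_star = R / N         # the equal share, R/N

# Hypothetical utility functions with diminishing returns; the leading
# coefficients (3 and 7) stand in for the arbitrary original scaling.
def f_alice(r): return 3.0 * math.log(r)
def f_bob(r):   return 7.0 * math.log(r)

def deriv(f, r, h=1e-6):
    """Numerical derivative df/dr via central differences."""
    return (f(r + h) - f(r - h)) / (2 * h)

# Rescale each function so its marginal utility at R/N equals 1.
k_alice = 1.0 / deriv(f_alice, r_star)
k_bob = 1.0 / deriv(f_bob, r_star)

# After scaling, the marginal utilities agree at the equal split.
print(abs(k_alice * deriv(f_alice, r_star)
          - k_bob * deriv(f_bob, r_star)) < 1e-6)  # True
```

With the marginal utilities equalized at $R/N$, the summed utility is (locally) stationary there: shifting a unit of resources from one person to another no longer raises the total, so the scaling itself encodes the egalitarian allocation.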

In reality, of course, policies can change the size of $R$, so some more thinking is needed, but I think this is a fruitful line of inquiry.

For more thoughtful commentary on this, see sections 6 and 7 of u/no_bear_so_low.