What is of increasing marginal value, and what is of decreasing?

See also: this comment

There’s a key question that should govern the way we engage with the world: “Which things have increasing marginal returns, and which things have decreasing marginal returns?”

Sometimes people are like “I’ll compromise between what I like doing and what has impact, finding something that scores pretty well on both.” Or they’ll say, “I was already planning to get a PhD in [field] / run a camp for high schoolers / do personal development training / etc. I’ll make this more EA on the margin.”

They are doomed. There’s a logistic success curve, and there are orders of magnitude of difference in the impact of different interventions. Which problem you work on is by far the most important determinant of your impact on the world, since most things don’t matter very much at all. The difference between the best project you can find and some pretty good project is often so large as to swamp every other choice that you make. And within a project, there are a bunch of choices to be made which themselves can make orders of magnitude of difference, often the difference between “this is an amazing project” and “this is basically worthless.”

By deciding to compromise, to only half-way optimize, you’re knowingly throwing away most of your selection pressure. You’ve lost pretty much all your hope of doing anything meaningful. In a domain where the core problems are not solved by additive incremental improvements, a half-assed action rounds down to worthless.

On the other hand, sometimes people push themselves extra hard to work one more hour at the end of the day when they’re tired and flagging. Often, but not always, they are pwned. Your last hour of the day is not your best hour of the day. Very likely, it’s your worst hour. For most of the work that we have to do, higher quality hours are superlinearly more valuable than lower quality hours. Slowly burning yourself out, or limiting your rest (which might curtail your highest quality hours tomorrow), so that you can eke out one more low quality hour of work, is a bad trade. You would be better off not worrying that much about it, and you definitely shouldn’t be taking on big costs for “optimizing your productivity” if “optimizing your productivity” is mainly about getting in increasingly low marginal value work hours.

Some variables have increasing marginal returns. We need to identify those, so that we can aggressively optimize as hard as we can on those, including making hard sacrifices to climb a little higher on the ladder.

Some variables have decreasing marginal returns. We want to identify those so that we can sacrifice on them, and otherwise not spend much attention or effort on them.

Getting which is which right is basically crucial. Messing up in either direction can leak most of the potential value. Given that, it seems like more people attempting to be ambitiously altruistic should be spending more cognitive effort trying to get this question right.

My policy on attempting to get people cryopreserved

My current policy: If, for whatever reason, I have been allocated decision-making power over what to do with a person’s remains, I will, by default, attempt to get them cryopreserved. But if they expressed a different preference while alive, I will honor that preference.

For instance, if [my partner] was incapacitated right now, and legally dead, and I was responsible for making that decision, I would push to cryopreserve her. This is not a straightforward extrapolation of her preferences, since, currently, she is not opposed in principle, but doesn’t want to spend money that could be allocated for a better altruistic payoff. But she’s also open to being convinced. If, after careful consideration, she prefers not to be cryopreserved, I would respect and act on that preference. But if I needed to make a decision right now, without the possibility of any further discussion, I would try to get her cryopreserved.

(Also, I would consider whether there was an acausal trade that I could make with her values as I understand them, such that those values would benefit from the situation, attempting to simulate how the conversation that we didn’t have would have gone. But I don’t commit to fully executing her values as I currently understand them. In places of ambiguity, I would err on the side of what I think is good to do, from my own perspective. That said, after having written the previous sentence, I think it is wrong, in that it doesn’t pass the golden rule test of what I would hope she would do if our positions were reversed. That suggests that, on general principles, when I am deciding on behalf of a person, I should attempt to execute their values as faithfully as I can (modulo my own clearly stated ethical injunctions), and if I want something else, to attempt to acausally compensate their values for the trade… That does seem like the obviously correct thing to do.

Ok. I now think that that’s what I would do in this situation: cryopreserve my partner, in part on behalf of my own desire that she live and in part on behalf of the possibility that she would want to be cryopreserved on further reflection, had she had the opportunity for further reflection. And insofar as I am acting on behalf of my own desire that she live, I would attempt to make some kind of trade with her values such that the fraction of probability in which she would have concluded that this is not what she wants, had she had more time to reflect, is appropriately compensated, somehow.

That is a little bit tricky, because most of my budget is already eaten up by optimization for the cosmic altruistic good, so I’m not sure what I would have to trade that I counterfactually would not have given anyway. And the fact that I’m in this situation suggests that I actually do need more of a slack budget that isn’t committed to the cosmic altruistic good, so that I have a budget to trade with. But it seems like something weird has happened if considering how to better satisfy my partner’s values has resulted in my generically spending less of my resources on what my partner values, as a policy. So it seems like something is wonky here.)

Same policy with my family: If my dad or sister was incapacitated, soon to be legally dead, I would push to cryopreserve him/her. But if they had seriously considered the idea and decided against it, for any reason, including reasons that I think are stupid, I would respect, and execute, their wishes. [For what it is worth, my dad doesn’t have “beliefs” exactly, so much as postures, but last time he mentioned cryonics, he said something like “I’m into cryonics” / “I think it makes sense.”]

This policy is in part because I guess that cryonics is the right choice, and in part because this option preserves optionality in a way that the other doesn’t. If a person is undecided, or hasn’t thought about it much, I want to pick the reversible option for them.

[Indeed, this is mostly why I am signed up myself. I suspect that the philosophy of the future won’t put much value on personal identity. But also, it seems crazy to permanently lock in a choice on the basis of philosophical speculations, produced with my monkey brain, in a confused pre-intelligence-explosion civilization.]

Separately, if a person expressed a wish to be cryopreserved, including casually in conversation (eg “yeah, I think cryonics makes sense”), but hadn’t filled out the goddamn forms, I’ll spend some budget of heroics on trying to get them cryopreserved.

I have now been in that situation twice in my life. :angry: Sign up for cryonics, people! Don’t make me do a bunch of stressful coordination and schlepping to get you a much worse outcome than if you had just done the paperwork.

I do not think it is ok to push for cryopreservation unless one of these conditions obtains: I have been given some authority to decide, or the person specifically requested cryo. I think it is not ok to randomly seize control of what happens to a person’s remains, counter to their wishes.

Disembodied materialists and embodied hippies?

An observation:

Philosophical materialists (for instance, Yudkowskian rationalists) are often rather disembodied. In contrast, hippies, who express a (sometimes vague) philosophy of non-material being, are usually very embodied.

On the face of it, this seems backwards. If materialists were living their philosophy in practice, it seems like they would be doing something different. This isn’t merely a matter of preference or aesthetics; I think that materialists often mis-predict reality on this dimension. I’ve several times heard an atheist materialist express surprise that, after losing weight or getting in shape, their mood or their ability to think is different. Usually, they would not have verbally endorsed the proposition that one’s body doesn’t impact one’s cognition, but nevertheless, the experience is a surprise for them, as if their implicit model of reality were one of dualism. [An example: Penn Jillette expressing this sentiment following his weight loss.]

Ironically, we materialists tend to have an intuitive view of ourselves as disembodied minds inhabiting a body, as opposed to the (more correct) view that flows from our abstract philosophy: that the mind is the body, and that changing the body changes the person. Hippies, by contrast, seem much less likely to make that sort of error.

Why is this?

One possibility is that the causality mostly goes in the other direction: the reason why a person is a materialist is due to a powerfully developed capacity for abstract thought, which is downstream of disembodiment.

The default perspective for a human is dualism; you reach another conclusion only through that kind of abstract reasoning.