Values are extrapolated urges

[Epistemic status: a non-rigorous theory, representing my actual belief about how it works.]

Related to: Value Differences As Differently Crystallized Metaphysical Heuristics

In this post I want to outline my understanding of what “values” are, at least for human beings. This idea or something very like it may already have standard terminology in academic philosophy, in which case, I would appreciate being pointed to the relevant references. This may be obvious, but I want to say it to lay the groundwork for a puzzle that I want to talk about in the next post.

Basically, I posit that “values”, in the case of human beings, are crystallizations or abstractions of simple response patterns.

[Google doc version for commenting]

Abstracting values from reactions

All animals have a huge suite of automatic reactions to stimuli, both behavioral and affective, and both learned and hard-coded.

When thirsty and near water, a lion will drink. When a rabbit detects a predator, it will freeze (and panic?). When in heat (and the opportunity presents itself), a giraffe will copulate. When his human comes home, a dog will wag its tail and run up in greeting, presumably in a state of excited happiness. [I note that all of my examples are of mammals.]

Some of these behaviors might be pretty complex, but their basic structure is TAP-like (trigger-action plan): something happens, and there is some response in the physiology of the animal. I’m going to call this category of “contextualized behaviors and affects” “urges”.

Humans understand language, which means that the range of situations that they can respond to is correspondingly vaster than that of most animals. For instance, a human might be triggered (have a specific kind of fear response) by another human making a speech act.

But that isn’t the main thing that differentiates humans in this context. The big difference between humans and most other animals is that humans can abstract from a multitude of behaviors, to infer and crystallize the “latent intentionality” among those behaviors.

For instance, an early human can reason,

When I see a tiger, I run, and feel extreme overriding panic. If the tiger catches me, I’ll try to fight it. When a heavy rock falls from a cliff, and I hear it falling, I also have a moment of panic, and duck out of the way.

When I am hungry, I eat. When I am thirsty, I drink.

When other people in my tribe have died, I’ve felt sad, and sometimes angry.

…I guess I don’t want to die.

[edit 2022-06-01: More specifically, what’s going on is that the human simulates a bunch of possible scenarios in which he comes to harm or dies, and has a negatively-valenced (flee, retreat, resist) reaction to each one. He intuits the similarity between those scenarios, abstracts out general concepts of harm or death, and associatively learns a general negatively-valenced reaction to those outcomes. He develops a flee-retreat-resist response to anything that involves his dying. He ends up with a goal of “staying alive”. (By default, all of this happens non-verbally, and without any conscious reflection.)]

Across these disparate, contextualized urges-to-action and affective responses (which, by the way, I posit are not two distinctly different things, but rather two ends of a spectrum), a person notices the common thread: “what do each of these behaviors seem to be aiming towards?”

And abstracting that goal from the urges, he/she then “owns” it. He/she thinks of him/herself as an entity wanting, valuing, caring about that thing (rather than as a bundle of TAPs, some of which are correlated).

My guess is that this abstraction operation is an application of primate (maybe earlier than primate?) social-modeling software to one’s self. It is too expensive to track all of the individual response behaviors of all of the members of your band, but fortunately, you can compress most of the information by modeling them not as adaptation-executors, but as goal-directed agents, and keeping track of their goals and their states of knowledge.

When one applies the same trick to one’s own behavior and mental states, one can compress a plethora of detail about a bunch of urges into a compact story about what one wants. Voilà: you’ve started running an ego, or a self.

This is the origin of “values.” Values are compressions / abstractions / inferences based on / extrapolated from a multitude of low level reactions to different situations.
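As a toy illustration of that compression step (my own sketch; the outcome labels and the grouping rule are crude stand-ins for a much richer, mostly non-verbal inference process), the “extrapolation” looks roughly like grouping TAP-like urges by the outcome they seem to aim at:

```python
# Toy illustration: "values" as compressions of TAP-like urges.
# The outcome_of mapping is a hypothetical stand-in for the real
# (mostly non-verbal) inference process described in the post.
from collections import defaultdict

urges = [
    ("see tiger",         "run, panic"),
    ("tiger catches me",  "fight"),
    ("hear falling rock", "duck"),
    ("hungry",            "eat"),
    ("thirsty",           "drink"),
]

# Assumed mapping from a response to the outcome it tends to produce.
outcome_of = {
    "run, panic": "not dying",
    "fight":      "not dying",
    "duck":       "not dying",
    "eat":        "not dying",   # via not starving
    "drink":      "not dying",   # via not dehydrating
}

def abstract_values(urges):
    """Group urges by the outcome they aim at; each group becomes one 'value'."""
    values = defaultdict(list)
    for trigger, response in urges:
        values[outcome_of[response]].append((trigger, response))
    return dict(values)

print(abstract_values(urges))
# {'not dying': [('see tiger', 'run, panic'), ...]} -- one compact "value"
# now stands in for five separate contextualized reactions.
```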

I think that most animals can’t and don’t do this kind of inference. Chipmunks (I think) don’t have values. They have urges. Humans can, additionally, extrapolate their urges into values.

I’m pretty sure that something like this process is how people come to their values (in the conventional sense of “the things they prioritize”) in real life.

For instance, I am triggered by claims and structures that I perceive as threats to my autonomy. I flinch away defensively. I think that this has shaped a lot of my personality, and choices, including leading me into prizing rationality.

Furthermore, I posit that something like this process is how people tend to adopt political ideologies. When someone hears about the idea of redistribution, and their visceral sense of that is someone taking things from them, they have a (maybe subtle) aversion / threatened feeling.* This discomfort gives rise to an urge toward skepticism of the idea. And if such a person hangs out with a bunch of other people that have similar low-level reactions, eventually, it becomes common knowledge, and this becomes the seed of an ideology, which gets modified and reinforced by all the usual tribal mechanisms.

I think the same basic thing can happen when someone feels (probably less than consciously) threatened by all kinds of ideologies. And this + social mimesis is how people end up with “conservative values” or “liberal values” or “libertarian values” or what have you.

* – I have some model of how this works, the short version being: “stimuli trigger associated (a lot of the action here is in the association function) mental imagery, which gives rise to a valence, which guides immediate action, modulo further, more consequentialist deliberation.” In fact, you can learn to consciously catch glimpses of this happening.

Of course all of this is a simplification. Probably this process occurs hierarchically, where we abstract some goals from TAP-like urges, and then extrapolate more abstract goals from those, and so on until we get to the “top” (if it turns out that there is a “top”, as opposed to a cycle that has some tributaries that flow into it).

For that reason, the abstraction / crystallization / triangulation process is not deterministic. It is probably very path-dependent. Two people with the exact same base-level pattern of urges, in different contexts, will probably grow into people with very different crystallized values.

Values influence behavior

Now a person might abstract out their values from their behavior in a way that is largely non-consequential. They model themselves, and describe themselves, in terms of their values, but that is just talk. The vast majority of their engagement in the world is still composed of the behaviors stemming from their urges in response to specific situations.

But, it also sometimes happens that abstracting out values, and modeling one’s self as an optimizer (or something like an optimizer) for those values, can substantially affect behavior.

For one thing, having a shorthand description of what one cares about means that one can use that description for deliberation. Now, when considering what to do in a situation, a person might follow a mental process that involves asking how they can achieve some cached goal, instead of reflexively acting on the basis of the lower-level urges that the goal was originally abstracted from.

This means that a person might end up acting in a way that is distinctly in opposition to those low level reactions.

For instance, a person might want status and respect, and they can feel the tug to go drink and socialize “with the guys” of their age group, but they instead stay home and study, because they reason that this will let them get a good job, which will let them get rich, which they equate with having a lot of status.

Or a person might take seriously that they don’t want to die, and sign up for cryonics, even though none of their urges recommended that particular action, and in fact, it flies in the face of their social conformity heuristics.

Furthermore, in this vein, a person might notice inconsistencies between their professed values and the way they behave, or between multiple diverse values. And if they are of a logical turn of mind, they may attempt to modify their own behavior to be more in line with their values. Thus we end up with moral striving (though moral striving might not be the only version of this dynamic).

Bugs

Just to say this explicitly, humans, uniquely (I think? maybe some other animals also abstract their values), can examine some particular behavior or reaction and consider it to be a bug, a misfiring, where the system is failing to help them achieve their values.

For instance, I’m told that a frog will reflexively flick out its tongue to ensnare anything small and black that enters its field of vision. From the perspective of evolution, this is a bug: the behavior is “intended” for catching and eating flies, and eating bits of felt that human researchers throw in the air (or whatever) is not part of the behavior selected for. [Note: in talking about what evolution “intended”, we’re executing the same mental move of abstracting goals and values from behavior. Evolution is just the fact of what happened to replicate, but we can extrapolate from a bunch of specific contextualized adaptations to reason about what evolution is “trying to do”.]

But, I claim here, that asking “is this behavior a bug, from the frog’s perspective?” is a mis-asked question, because the frog has not abstracted its values from its behaviors, in order to reflect back on its behaviors and judge them.

In the parallel case of a human masturbating, the human can abstract his or her values from his or her behavior, and could deem masturbation either 1) a bug, a dis-endorsed behavior that arises from a hormonal system that is partially implementing his or her values, but which misfires in this instance, or 2) an expression of what he or she actually values, part of a life worth living.

(Now it might or might not be the case that only one of these options is reflectively stable. If only one of them is, for humans in general, there is still a meaningful sense in which one can be mistaken about which things are Good. That is, a person can evaluate something as aligned with their values, but would come to think differently in the limit of reflection.)

A taxonomy of Cruxes

[crossposted to LessWrong]

This is a quick theoretical post.

In this post, I want to outline a few distinctions between different kinds of cruxes. Sometimes folks will find what seems to be a crux, but they feel some confusion, because it seems like it doesn’t fit the pattern that they’re familiar with, or it seems off somehow. Often this is because they’re familiar with one half of a dichotomy, but not the other.

Conjunctive, unitary, and disjunctive cruxes

As the Double Crux method is typically presented, double cruxes are described as single propositions such that, if you changed your mind about them, you would change your mind about some other belief.

But as people often ask,

“What if there are two propositions, B and C, and I wouldn’t change my mind about A if I just changed my mind about B, and I wouldn’t change my mind about A if I just changed my mind about C, and I would only change my mind about A if I shift on both B and C?”

This is totally fine. In this situation we would just say that your crux for A is a conjunctive crux of B and C.

In fact, this is pretty common, because people often have more than one concern in any given situation.

Some examples:

  • Someone is thinking about quitting their job to start a business, but they will only pull the trigger if a) they think that their new work would actually be more fulfilling for them, and b) they know that their family won’t suffer financial hardship.
  • A person is not interested in signing up for cryonics, but offers that they would if a) it were inexpensive (on the order of $50 a month) and b) the people associated with cryonics were the sort of people that he wanted to be identified with. [These are the stated cruxes of a real person that I had this discussion with.]
  • A person would go vegetarian if, a) they were sure it was healthy for them and b) doing so would actually reduce animal suffering (going a level deeper: how elastic is the supply curve for meat?).

In each of these cases there are multiple considerations, none of which is sufficient to cause one to change one’s mind, but which together represent a crux.

As I said, conjunctive cruxes are common, but I will say that sometimes folks are too quick to assert that they would only change their mind if they turned out to be wrong about a large number of conjunctive terms.

When you find yourself in the position of only changing your mind on the basis of a large number of separate pieces, that is a flag that there may be a more unified crux that you’re missing.

In this situation I would back up and offer very “shallow” cruxes. Instead of engaging with all the detail of your model, look for a very high-level / superficial summary, and check if that is a crux. Following a chain of many shallow cruxes is often easier than trying to get into the details of complicated models right off the bat.

(Alternatively, you might move into something more like consideration factoring.)

As a rule of thumb, the number of parts to a conjunction should be small: two is common, three is not that common. Having a ten-part conjunction is implausible. Most people can’t hold that many elements in their head all at once!

I’ve occasionally seen on-the-order-of-ten-part disjunctive arguments / conjunctive cruxes in technical papers, though I think it is correct to be suspicious of them. They’re often of the form “argument one is sufficient, but even if it fails, argument two is sufficient, and even if that one fails…” But errors are often correlated, and the arguments are likely not as independent as they may at first appear. It behooves you to identify the deep commonality between your lines of argument, the assumptions that multiple arguments are resting on, because then you can examine those directly. (Related: the “multiple stage fallacy”.)

Now of course, one could in principle have a disjunctive crux, where if they changed their mind about B or about C, they would change their mind about A. But, in that case there’s no need to bundle B and C. I would just say that B is a crux for A and also C is a crux for A.
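As a minimal sketch of the logic (my own formalization, with hypothetical belief names, not anything official from the Double Crux material): a conjunctive crux only flips the top-level position when every conjunct flips, while a “disjunctive crux” decomposes into separate, individually sufficient cruxes.

```python
# Minimal sketch: cruxes as functions from sub-beliefs to a top-level position.
# The belief names ("new_work_more_fulfilling", "family_finances_ok", "B", "C")
# are hypothetical stand-ins for the examples above.

def quit_job(beliefs: dict) -> bool:
    # Conjunctive crux: the person changes their mind (and quits) only if BOTH conjuncts hold.
    return beliefs["new_work_more_fulfilling"] and beliefs["family_finances_ok"]

assert quit_job({"new_work_more_fulfilling": True,  "family_finances_ok": False}) is False
assert quit_job({"new_work_more_fulfilling": False, "family_finances_ok": True}) is False
assert quit_job({"new_work_more_fulfilling": True,  "family_finances_ok": True}) is True

# A "disjunctive crux" for A (flipping B *or* C flips A) just decomposes:
# B is a crux for A, and C is also a crux for A, so there is no need to bundle them.
def believe_A(beliefs: dict) -> bool:
    return beliefs["B"] and beliefs["C"]  # if B and C are both currently believed, flipping either one flips A
```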

Causal cruxes vs. evidential cruxes

A causal crux back-traces the causal arrows of your belief structure. It’s found by answering the question “why do I believe [x]?” or “what caused me to think [x] in the first place?” and checking whether the answer is a crux.

For instance, someone is intuitively opposed to school uniforms. Introspecting on why they feel that way, they find that they expect (or fear) that that kind of conformity squashes creativity. They check if that’s a crux for them (“what if school uniforms actually don’t squash creativity?”), and find that it is: they would change their mind about school uniforms if they found that they were wrong about the impact on creativity.

Causal cruxes trace back to the reason why you believe the proposition.

In contrast, an evidential crux is a proxy for your belief. You might find evidential cruxes by asking a question like “what could I observe, or find out, that would make me change my mind?”

For instance (this one is from a real double crux conversation that happened at a training session I ran), two participants were disagreeing about whether advertising destroys value on net. Operationalizing, one of them stated that he’d change his mind if he realized that beer commercials, in particular, didn’t destroy value.

It wasn’t as if he believed that advertising is harmful because beer commercials destroy value. Rather it was that he thought that advertising for beer was a particularly strong example of the general trend that advertising is harmful. So if he changed his mind in that instance, where he was most confident, he expected that he would be compelled in the general case.

In this case, “beer commercials” are serving as a proxy for “advertising.” If the proxy is well chosen, this can totally serve as a double crux. (It is, of course, possible that one will be convinced that they were mistaken about the proxy, in a way that doesn’t generalize to the underlying trend. But I don’t think that this is significantly more common than following a chain of cruxes down, resolving at the bottom, and then finding that the crux that you named was actually incomplete. In both cases, you move up as far as needed, adjust the crux (probably by adding a conjunctive term), and then traverse a new chain.)

Now, logically, these two kinds of cruxes both have the structure “If not B, then not A” (“if uniforms don’t squash creativity, then I wouldn’t be opposed to them anymore.” and “if I found that beer commercials in fact do create value, then I would think that advertising doesn’t destroy value on net”). In that sense they are equivalent.

But psychologically, causal cruxes traverse deeper into one’s belief structure, teasing out why one believes something, and evidential cruxes traverse outward, teasing out testable consequences or implications of the belief.

Monodirectional vs. Bidirectional cruxes

Say that you are the owner of a small business. You and your team are considering undertaking a major new project. One of your employees speaks up and says “we can’t do this project. The only way to execute on it would bankrupt the company.”

Presumably, this would be a crux for you. If you knew that the project under consideration would definitely bankrupt the company, you would definitively think that you shouldn’t pursue that project.

However, it also isn’t a crux, in this sense: if you found out that that claim was incorrect, that actually you could execute on the project without bankrupting your company, you would not, on that basis alone, definitively decide to pursue the project.

This is an example of a monodirectional crux. If the project bankrupts the company, then you definitely won’t do it. But if it doesn’t bankrupt the company, then you’re merely uncertain. This consideration dominates all the other considerations: it is sufficient to determine the decision when it is pointing in one direction, but it doesn’t necessarily dominate when it points in the other direction.

(Oftentimes, double cruxes are composed of two opposite monodirectional cruxes. This can work totally fine. It isn’t necessary that, for each participant, the whole question turns on the double crux, so long as for each participant, flipping their view on the crux (from their current view) would also cause them to change their mind about the proposition in question.)

In contrast, we can occasionally identify a bidirectional crux.

For instance, if a person thinks that public policy ought to optimize for Quality Adjusted Life Years, and they’ll support whichever health care scheme does that, then “maximizing QALYs” is a bidirectional crux. That single piece of information (which plan maximizes QALYs), completely determines their choice.

“A single issue voter” is a person voting on the basis of a bidirectional crux.

In all of these cases you’re elevating one of the considerations over and above all of the others.
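A tiny sketch of what makes a crux bidirectional (the QALY example above, with invented plan names and numbers): the single consideration determines the choice in either direction, so flipping it flips the decision.

```python
# Sketch: a bidirectional crux fully determines the decision in both directions.
# Plan names and QALY figures are invented for illustration.

def pick_policy(qalys_by_plan: dict) -> str:
    # "Maximizing QALYs" is the bidirectional crux: whichever plan wins on
    # this one consideration is chosen, regardless of anything else.
    return max(qalys_by_plan, key=qalys_by_plan.get)

print(pick_policy({"plan_A": 1_200_000, "plan_B": 950_000}))  # -> plan_A
print(pick_policy({"plan_A": 900_000,   "plan_B": 950_000}))  # -> plan_B (flipping the crux flips the choice)
```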

Pseudo cruxes

[This section is quite esoteric, and is of little practical relevance, except for elucidating a confusion that folks sometimes encounter.]

Because of the nature of monodirectional cruxes, people will sometimes find pseudo-cruxes: propositions that seem like cruxes, but are nevertheless irrelevant to the conversation.

To give a (silly) example, let’s go back to the canonical disagreement about school uniforms. And let’s consider the proposition “school uniforms eat people.”

Take a person who is in favor of school uniforms. The proposition that “school uniforms eat people” is almost certainly a crux for them. The vast majority of people who support school uniforms would change their mind if they were convinced that school uniforms were carnivorous.

(Remember, in the context of a Double Crux conversation, you should be checking for cruxy-ness independently of your assessment of how likely the proposition is. The absurdity heuristic is insidious, and many claims that turn out to be correct, seem utterly ridiculous at first pass, lacking a lot of detailed framing and background.)

This is a simple crux. If the uniform preferring person found out that uniforms eat people, they would come to disprefer uniforms.

Additionally, this is probably a crux for folks who oppose school uniforms as well, in one pretty specific sense: were all of their other arguments to fall away, knowing “that school uniforms eat people” would still be sufficient reason for them to oppose school uniforms. Note that this doesn’t mean that they do think that school uniforms eat people, nor does it mean that finding out that school uniforms don’t eat people (duh) would cause them to change their mind and think that school uniforms are good. We might call this an over-determining hypothetical crux. It’s a monodirectional crux that points exclusively in the direction that the person already believes, and which, furthermore, the person currently assumes to be false.

A person might say,

I already think that school uniforms are a bad idea, but if I found out they eat people, that would be further reason for me to reject them. Furthermore, now that we’re discussing the possibility, “school uniforms don’t eat people” is such an important consideration that it would have to be a component of any conjunctive crux that would cause me to change my mind and think that school uniforms are a good idea. But I don’t actually think that school uniforms eat people, so it isn’t a relevant part of that hypothetical conjunction.

This is a complicated series of claims. Essentially, this person is saying that in a hypothetical world where they thought differently than they currently do, this consideration, if it held up, would be a crux for them (one that would bring them to the position that they actually hold in reality).

Occasionally (on the order of once in a hundred?), a novice participant will find their way to a pseudo crux like that one, and find themselves confused. They can tell that the proposition “school uniforms eat people”, if true, matters to them. It would be relevant to their belief. But it doesn’t actually help them push the disagreement forward, because, at best, it pushes further in the direction of what they already think.

(And secondarily, it isn’t really an opening for helping their partner change their mind, because the uniform-dispreferring person doesn’t actually think that school uniforms eat people, and so would only try to argue that they do if they had abandoned any pretense of truth-seeking in favor of trying to convince someone using whatever arguments will persuade, regardless of their validity.)

So this seems like a crux, but it can’t do work in the Double Crux process.

There is another kind of pseudo crux stemming from monodirectional cruxes. This is when a proposition is not a crux, but its inverse would be.

In our school uniform example, suppose that in a conversation someone boldly, and apropos of nothing, asserted “but school uniforms don’t eat people.” Uniforms eating people is a monodirectional crux that dominates all the other considerations, but school uniforms not eating people is so obviously true that it is unlikely to be a crux for anyone (unless the reason they were opposed to school uniforms was kids getting eaten). Nevertheless, there is something about it that seems (correctly) cruxy. It is the ineffectual side of a monodirectional crux. It isn’t a crux, but its inverse is. We might call this a crux shadow or something.

Thus, there is a four-fold pattern of monodirectional cruxes, where one quadrant is a useful, progress-bearing crux, and the other three contain different flavors of pseudo cruxes.

Proposition: “If school uniforms eat people, then I would oppose school uniforms”

  • If I am opposed to school uniforms:
    • Suppose school uniforms eat people: Overdetermining hypothetical crux. “I would oppose school uniforms anyway, but this would be a crux for me, if (hypothetically) I was in favor of school uniforms.”
    • Suppose school uniforms don’t eat people: Non-crux / crux shadow. “Merely not eating people is not sufficient to change my mind. Not a crux.”
  • If I am in favor of school uniforms:
    • Suppose school uniforms eat people: Relevant (real) monodirectional crux. “If school uniforms actually eat people, that would cause me to change my mind.”
    • Suppose school uniforms don’t eat people: Non-crux / crux shadow. “While finding out that uniforms do eat people would sway me, that they don’t eat people isn’t a crux for me.”

And in the general case,

Proposition: X is sufficient for A, but Not X is not sufficient for B

  • If I believe A:
    • X is true: Overdetermining hypothetical crux
    • X is false: Non-crux / crux shadow
  • If I believe B:
    • X is true: Relevant monodirectional crux
    • X is false: Non-crux / crux shadow
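A small sketch restating that table as a classification rule (my own shorthand; the labels are exactly the four quadrants above):

```python
# Sketch of the four-fold pattern for a monodirectional crux:
# the proposition "X is sufficient for A, but not-X is not sufficient for B".

def classify(i_believe: str, suppose_x: bool) -> str:
    """i_believe is 'A' or 'B'; suppose_x is whether we suppose X is true."""
    if not suppose_x:
        return "non-crux / crux shadow"             # not-X isn't sufficient for anything
    if i_believe == "A":
        return "overdetermining hypothetical crux"  # X points where I already am
    return "relevant monodirectional crux"          # X, if true, would flip me

for belief in ("A", "B"):
    for x in (True, False):
        print(belief, x, "->", classify(belief, x))
```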

Note that the basic double crux pattern avoids accidentally landing on pseudo cruxes.


A conversational pattern I observe

[This is super rough]

[I’m pretty sure that there is prior work on this question that I don’t know about yet.]

Here’s a conversational pattern that I’ve noticed recently. I observed this specifically in conversations with my parents, but on further reflection, I do this too, and postulate that pretty much everyone does.

Basically, one person will say something, either sharing an opinion, or (maybe this is an atypical rationalist version?) sharing a fact, or something in between.

Then the other person will share an opinion or a fact that is at least somewhat related.

Basically the people go back and forth, sharing what they know about / what they think / what they’re interested in.

For instance, a few weeks ago, someone was telling me about how it wasn’t accidental that Hitler lost the war, which I agreed with, and I shared some anecdotes from my recent reading of The Rise and Fall of the Third Reich, particularly junctures where the Nazis could have won the war if things had gone slightly differently (like if, during the Battle of Britain, the Luftwaffe had continued bombing the areas housing the RAF command bunkers, from which the British were using radar and radio to coordinate their planes).

This may not be a great example, since I might have been making a point about historical contingency, and hindsight bias. I’m very confident that this pattern expressed itself in my most recent conversation with a friend to “catch up” though.

She would say something about something, and I would say something vaguely related.

(Ok. In both these cases, I had the phenomenology of “desire to impress” and “pressure to show sophistication.” This is less evident in my conversations with my parents, but it isn’t implausible that there was a less vivid current of the same. That may be the driving force of this weird thing.)

In any case, this conversational pattern is common, but it is particularly evident when talking with my parents. I’ll often share something, and (misunderstanding what I’m saying) they will say something else that is only barely or vaguely related. (I think this might be offered as if what is being said is in agreement with me.) This particular situation seems to highlight how much conversation is just an opportunity to share things that we want to talk about with someone else, who is likewise taking the opportunity to talk about things that they want to talk about.

What’s going on here? Why do we like doing this?

Some thoughts:

  • Maybe we just like the feeling of being listened to, because it validates our opinions and perspectives. This is pretty interesting, given that it’s fake: the other person isn’t actually engaging with our opinions except to riff on them. It’s as if we’re masturbating each other.
  • Maybe this is about intelligence or sophistication signalling? We get to show off our thoughts?
  • Maybe it’s a kind of bonding / tribe-affirming thing: we pseudo-agree with each other’s propositions (without engaging with them critically).
  • We just like talking about what we’re interested in for some reason, and the real question is why we like that.

Some musings on historical contingency, randomness, and my own desire to be an optimizer

It just hit me that this pandemic probably escaped from a lab in China, which means that this event, with all its global and personal consequences, is deeply contingent. There are other Everett branches where it didn’t happen, and I am going about my life as I would have.

I have a weight in my stomach about this. Especially insofar as the quarantine is good for me, and I am learning things or building skills / meta-skills that will have a permanent impact on my life, it is unsettling to think that they depend on fate. Am I going to be consistently less effective in those other worlds, because this “lucky” break didn’t happen? (Interestingly, that is the horrifying thing to me, not the thought of all the luck that I’ve missed out on that went to other worlds.) My growth and power are so fragile.

Of course there have been other events like this: WWII was contingent on Hitler’s birth, and that massively shaped the world I live in now. Most Earths don’t look like this one for that reason. There’s still an intact European Jewry, the balance of power looks very different, and science may not have been institutionalized and may not be state-funded. How were atomic bombs discovered in those worlds? How did the computer revolution happen?

And of course my personal life is incredibly contingent. If I hadn’t come across Grimoire for the Apprentice Wizard, would I be on a path of ambition? If I hadn’t gone to U Chicago, and met Stefan, would I have even encountered LessWrong? Reading the Sequences, at that time in my life, radically, radically transformed my trajectory.

But this is a huge contingent event occurring after I came into myself as a conscious entity, and after I had “gotten my footing” in the world. This pandemic has changed, and is apt to change a lot for me, but only by luck.

I think that this one stings in particular (at least in part) because, in principle, I could have taken the time and space to do the things that I’m doing now. But I predict that I wouldn’t have, unless forced. I can feel how that’s a bias, how across the multiverse I’m less strong than I might otherwise be, because I’m not resisting that temptation. And here, I needed random chance to save me, instead of bootstrapping by my own power.

This is a reminder to me that I want to be an optimizer: I want to be the sort of thing that, whatever sort of universe it finds itself in, convergently steers that world towards the same state. Obviously, the worlds don’t actually converge, because of how much is determined by the randomness, and because I am small. But I want it to be the case that my situation defines the shape of the problems I have to solve, and the tools that I have to work with (and build myself from), but that my situation never sways my basic ability to steer.

I want to be the sort of thing that systematically moves in the right direction, no matter where in the multiverse I find myself.

Holding both the positive aspects of a problem and the reality of there being a problem

[Probably obvious to many (most?) of my readers, but the obvious often bears saying aloud.]

This morning, as part of my rest day, I did the first exercise of Claudia Altucher’s book, Become an Idea Machine, which is mostly a series of prompts for generating 10 (or more) new ideas, one prompt a day.

The first prompt is to take 10 complaints that you have, and for each one, turn it into a positive expression of gratitude. She gives the example of hating being stuck in traffic every day, and generating from that, “I am grateful that I live in a city that has so much traffic because that means there [are] plenty of opportunities here, and I can meet lots of interesting people.” You’re supposed to do this ten times, on your own problems.

I think that this is a great exercise. I imagine that I tend to be too rigid in my assessment of circumstances, and there are in fact a lot of good things resulting from a “problem” which I fail to apprehend, because I’m stuck in my existing mindset. Done well, I think this mental motion shouldn’t feel like a self-trick, whereby you convince yourself that “the grapes were probably sour anyway.” Instead it feels like uncovering actual value, hitherto unnoticed, all around you.

As an example, my fourth most pressing problem is the lack of satisfying romantic partnership. And it is natural to focus on the pain or longing of that situation. But also, not having a partner means that I have a lot of solitary time, to think and work and do what I want, which I really value. By default, I think I wouldn’t notice how much I value that alone-time unless I did have a partner, at which point I might start to feel stifled. Hedonically, it seems like a bug if I only pay attention to the aspects of a situation that I dislike.

So I think we should appreciate what’s good about things that we might reflexively label as bad.

However, I think that it is crucial, if we want to be balanced and sane, that we not lose track of the problem being bad in this process.

It is right and good for the paraplegic to savor the way his disability forces him to slow down and be reflective, and “stop and smell the roses.” But that doesn’t mean that he can’t assess the situation, the pros and cons, and determine that on net, it would be better to be able to walk (and climb and run).

Likewise, it is extremely appropriate to look around at the way the recent pandemic has brought communities together, and the way that it has brought out heroism in many, and feel pleased and proud. But that doesn’t mean that we have to say that the pandemic is good, or that we shouldn’t wish that it hadn’t happened.

The claim “there is often unrecognized good in what seems to be bad” does not entail that “everything that seems bad is actually good” (or the related “everything happens for a reason”). When we conflate the first with the second, we’re setting ourselves up for a kind of Stockholm syndrome of the present, a massive status quo bias.

There is more good in the world than we often appreciate by default. And the world is complex, and it is often hard to tell what the ultimate consequence of something will be. But that does not mean that we should abdicate our sacred right to weigh up the pros and cons, make the best assessment we can of whether a thing is good or bad, and then try to steer the world towards the good. When we do otherwise, the consequences can be literally catastrophic.

(Of course this is not to say that everything that we reflexively consider bad is in fact bad on net. We might take something that we hate, consider all the benefits that it gives us and all the costs, and conclude that actually, our initial impression was wrong, and that, on reflection, we prefer the world with the supposed-to-be-bad thing. Just as we might reconsider our reflexive attitude to something that is supposed to be good, and conclude that on net, we disprefer it. [For instance, Brienne’s comment in this thread has had me reevaluating my attitude towards romantic love, lately.])

 

 

Engineering vs. Valence

Despite often being maligned, the contemporary “spiritual” worldview has some things going for it. It emphasizes love, compassion, and gratitude, which are probably close to the most important things to focus on for individual and community eudaimonia (though emphatically not for safely or sanely steering civilization). New-agey types, despite their woo, do have a number of very effective tools, frameworks, and insights, like NVC, Focusing, and Circling. The metaphysics is very confused, but metaphysics is really hard to get right (and if anything like panpsychism is true (which my inside view currently puts a lot more mass on than most of my peers do), then they will turn out to have been not that far off the mark).

But I suggest that the thing that is most wrong in the “spiritual” worldview is the general propensity to posit “things with intrinsic valence”. In my caricature of the spiritual worldview (which I more-or-less believed from the ages of 15 to 20), many things (actions, energies, materials) are seen as fundamentally “good” and others as fundamentally “bad”, with “good” things having good effects and “bad” things having bad effects.

  • For instance, according to David Hawkins, meditation, prayer, optimistic thoughts, and positive emotions “raise your vibration“, attract good things into your life, and promote peace locally and across the world. (Negative thoughts, negative emotions, artificial supplements, etc. lower your vibration.)
  • Or orgonite, a substance which is supposed to transmute “negative energy” into “positive energy.” Cellphone towers supposedly emit “negative orgone energy” (which “promotes drought, negativity, fear, and so on”).
  • “Toxins” is a sort of generic term for “bad things that are in your body”. You want to get rid of them.
  • The phrases “good karma” and “bad karma” speak for themselves.

The list goes on and on. All of these invoke a sense of a thing that is good and safe vs. a thing that is bad and harmful. Obviously, you want more of the good, and less of the bad.

The problem is that almost nothing in the world seems to work like this. Things aren’t intrinsically bad or good. They just have effects. And whether those effects are good or bad depend on mechanistic details of the systems involved.

For example, there is no substance that is fundamentally “healthy” or “unhealthy” for humans. The effects depend on the dosage and the location.

Air is obviously good for people, but an air bubble in the bloodstream can be fatal. Potassium is critical to the functioning of our neurons, but too much potassium disrupts that functioning, which is why we use potassium solutions for lethal injection. Water is prototypically “good for you”, but drinking too much causes health problems and, of course, having water in your lungs is a good way to drown.

These substances aren’t fundamentally “bad.” The harm comes from too much of them in the wrong place.

I claim that (almost) everything works like this.

Facebook, or the internet, isn’t good or evil, it just has effects, some of which are positive and some of which are negative.

Smoking will kill you, but its stimulant effects can boost your effective intelligence very slightly.

Even compassion and Christly forgiveness, are harmful in some systems.

The world is a bunch of complicated systems, and in order to get good effects, we don’t just mash the “good” button. Rather, we mostly need to figure out which precise elements go in exactly the right places.

 

 

I have a new Blog

More than a month ago, I started a new blog at https://efficacyengineering.wordpress.com/ .

I’m using it to document my personal development projects, as I build up complete, robust, systems for maintaining effective psychological states. That entails designing experiments, semi-rigorous analysis of results, phenomenological feature extraction, and brainstorming and debugging on the places where I’m stuck. Eventually, I might write up more permanent articles on that site, outlining the general psychological principles and mechanisms that underlie optimal learning and efficiency.

It is probably best thought of as a “productivity blog”, though for me, it is a research project in applied psychology.

I’m going to keep posting my thoughts on other topics to this blog, but from now on anything having to do with productivity or learning will go over there.

Leading indicators are crucial for control systems

[This is probably obvious to some of you.]

Control systems

I’ve been thinking about building self contained systems lately, specifically (mostly) in the context of personal productivity. If you want a self contained system that is robust to disruption, you want it to incorporate control systems.

My approach to building a self contained system for my own effort-less efficiency has been to identify the inputs to, and intermediate states of, the psychological states that I’m aiming for, and then to build control systems around those inputs and intermediaries.

For instance, if you know that subjective mental energy is one important input to flow, and when you don’t have mental energy, you become draggy, and concentration is elusive, then you want to set up an automatic system so that whenever your mental energy is low, you automatically take actions to recover it.

The problem of systems that depend on the inputs they control

However, this has a problem. If you have control systems that themselves depend on the input they are supposed to control, then they can’t really function as control systems.

For instance, speaking of mental energy again, suppose you know that if you go to the gym and physically exert yourself, you’ll experience a gain in subjective mental energy. But, unfortunately, going to the gym itself has some activation energy, and so if you are low on mental energy, you’re unlikely to do it. In this situation, you’re stuck in a less-than-optimal attractor, where feeling draggy prevents you from doing things that would help you not feel draggy.

(There’s also an attractor on the other side of the hill, where having energy makes it easy to do the things that help you maintain energy. But that attractor is less stable, because if anything disrupts any part of the virtuous cycle, the whole thing grinds to a stop.)

My solution to this, in practice, in most cases, is to find workarounds that have minimal activation energy, so that falling into the preferred attractor is easy. But, I’ve also assumed that there are some places where I would just have to bite the bullet and be satisfied with non-responsive systems. That is, you just rigidly make sure to exercise every day, because you know that it supports everything else.

This solution is maybe fine, but it is also pretty fragile.

Leading indicators save the day

Actually though, this is already a solved problem. Living organisms are (or are made of) homeostatic control systems that regulate their own inputs.

An animal needs calories in order to function and it spends calories to control the level of calories it has stored.

And the key thing is that there is a long lag in the system. An animal doesn’t wait until its cells are starved to go hunt. It goes to hunt, or at least goes to the refrigerator, on the basis of a much much earlier indicator, when it is hungry. It gets hungry much earlier than when it is starting to literally run out of calories.

Or thinking somewhat more abstractly: Suppose you have a control system that regulates the input of gasoline, but the control system itself depends on gasoline to function?

If such a system were constructed with very little lag, it would fail at the first sizeable shock. But if the system had look-ahead (perhaps because the gas flows through a reservoir that fuels the regulator, even when there is no gas flowing through the regulator at the moment), it could act to refill itself before it lost the ability to act at all.
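Here is a toy simulation of that difference (my own sketch with made-up numbers, not a real control-systems model): a regulator that only reacts when its reservoir is nearly empty dies at the shock, while one that reacts to a leading indicator (a generous threshold) refills early enough to absorb it.

```python
# Toy simulation of a regulator that runs on the resource it regulates.
# All numbers are invented; the point is only that acting on a leading
# indicator (a generous refill threshold) buys slack, while acting only
# when nearly empty fails at the first sizeable shock.

def run(steps: int, refill_threshold: float) -> str:
    reservoir = 10.0                                # buffer that also fuels the regulator
    for t in range(steps):
        if reservoir < refill_threshold:
            reservoir += 8.0                        # corrective action (go hunt / hit the gym)
        demand = 1.0 + (4.0 if t == 6 else 0.0)     # baseline burn, plus one shock at t=6
        reservoir -= demand
        if reservoir <= 0:
            return f"failed at t={t}"               # no fuel left: the regulator can't act at all
    return f"ok, reservoir={reservoir:.1f}"

print(run(12, refill_threshold=1.5))  # reacts only when nearly empty -> fails at the shock
print(run(12, refill_threshold=5.0))  # hunger-like leading indicator -> survives
```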

Conclusion

So it seems that I should be looking out for very early leading indicators of unwanted phenomenological states, and hooking up control systems to those, as opposed to building systems that only respond once a phenomenological state has already started to fall apart. I may find that more things can be control systems than I had previously thought.

Some sleep thoughts very roughly

My current model: falling asleep depends on three things happening. If these three things happen, then you will be asleep:

  1. Relaxed body
  2. Lowered pulse (around 50 bpm)
  3. Mind clear of thoughts (or leaning into visual imagery)

But actually, most of the action is in the prerequisite 0th step: disengaging from whatever is interesting, so that your attention can actually go to relaxing and falling asleep.

How to do that?

Some ideas:

  • IDC with the thoughts
  • Physically remove the felt senses from the body
  • Meditate?
  • Distract yourself
    • By drawing
    • By reading fiction
    • By trying to get absorbed in some thoughts?
  • “Leaning out” from the thoughts?
  • By jotting down everything that you’re excited about?
  • By scheduling specific time in the morning to try and boot up those motivating considerations / making reminders of everything important.
    • The main thing, might be the sense of time scarcity or urgency which is salient at the time.

 

 

Modes, not traits, and decoupling cognitive energy from intentionality

[Epistemic status: speculation from a single n-of-1 experience that I’m excited about]

I had a really effective second half of my day today, and right now I’m going to speculate about some of the mechanics of that.

Modes, not traits

Some relevant background to this is a thought that I had two weeks ago:

“I shouldn’t think in terms of ways that I should be or things that I should do, but rather __modes__ that I could get into that are useful sometimes. Even if I want to be in those modes most days, I should still think of them as separate modes and not as default states.”

This is important because a lot of the ways that I want to be consume some resource, so I can’t actually maintain them perpetually. I might want to be habitually hyper-productive, but since I probably can’t be hyper-productive for literally all of my waking hours (I need rest and stuff), if I try to always be hyper-productive, I’ll fail, and never really build the habit. Instead, I should have a hyper-productive mode and build the habit of getting into that mode regularly.

This is maybe obvious to a lot of you, but it seems like a useful insight to me. (I wonder if I’m on the verge of reinventing “work/life balance” in the same way that I reinvented “it’s good to have a room.”)

I think this goes pretty deep, and there are a number of different, not-necessarily-mutually-exclusive modes at various levels of subtlety. (For one thing, I have a mode for syncing with Anna: our natures tend to clash by default, but these days either she meets me in something like my way of being, or I meet her in something like her way of being.)

But here are three high level, high granularity modes / ways of being / high-level intentions that it seems like I would want to operate from on a regular basis.

  • Rest
  • Executing intentions mindset / committed engagement [manager time]
  • Slow thinking / deep work / mono-focus [maker time]

The rest of this post will be about holding the Executing intentions mindset.

Maintaining tautness

My current sense of how to do the Executing Intentions mode well involves maintaining “tautness” / “tension” across the whole period that you’re in that mode. That’s a phenomenological description: it feels like there’s a sort of tension that I can let go slack. It has something to do with remembering the executing-intentions meta-intention? Or having the context of the tasks that I’m doing loaded up and available?

One reason why this worked well today, I think, was that I decided that I was going to stop listening to audiobooks for the time being. I might usually listen to an audiobook as I walk somewhere, but this tends to take me out of the EI mindset: what was taut becomes slack.

Other ways that I can fall out of it:

  • Looking at my phone in the bathroom.
  • Making food and eating, and especially listening to audio while making food.

Each of these has a character of “I’m doing this mundane thing right now, I might as well occupy my mind with something entertaining or informative.” It might be that engagement with that material kicks out the EI meta-intention, because my mind is filled with other content. But it feels more like all of those behaviors have a kind of lackadaisical attitude, like “it’s fine to slow down and spend time here”, instead of the momentum of one thing after another.

I did take breaks today, but they had a different character than most breaks I take. They were more intentional: more circumscribed, less distracted. I intentionally decided how long each one was going to be, and set a timer, but more importantly than the timer, there was a part of me that didn’t turn off during the break; I was still geared up to take the next thing coming at me. I’m confident this is less restful than other kinds of breaks, and thus it is crucial that this is only a mode, and that one also have a rest mode, when you release all the tension and tautness.

Decoupling energy levels from intentionality

Another thing that happened today is that at the end of my session, I was feeling cognitively drained. I think that usually, that would cause me to disengage from the EI mindset and release my conscious hold on my intentionality. But this time, I was more like “I notice that I am cognitively drained. My job in this time-slice is to recover cognitive energy.” and I went to go strength train.

I think there’s something important here: I was decoupling how tired I felt from how intentional I was going to be.

This seems important on a number of counts:

  • For one thing it caused me to strength train during my work day, which sometimes gets skipped.
  • Additionally, this sort of rolling with the punches enabled me to maintain the tautness, instead of abandoning the attitude whenever I lose cognitive resources.

Remember, this really depends on having high quality rest.