A taxonomy of Cruxes

[crossposted to LessWrong]

This is a quick theoretical post.

In this post, I want to outline a few distinctions between different kinds of cruxes. Sometimes folks will find what seems to be a crux, but they feel some confusion, because it seems like it doesn’t fit the pattern that they’re familiar with, or it seems off somehow. Often this is because they’re familiar with one half of a dichotomy, but not the other.

Conjunctive, unitary, and disjunctive cruxes

As the Double Crux method is typically presented, double cruxes are described as single propositions such that, if you changed your mind about one, you would change your mind about another belief.

But as people often ask,

“What if there are two propositions, B and C, and I wouldn’t change my mind about A if I just changed my mind about B, and I wouldn’t change my mind about A if I just changed my mind about C, and I would only change my mind about A if I shift on both B and C?”

This is totally fine. In this situation we would just say that your crux for A is a conjunctive crux of B and C.

In fact, this is pretty common, because people often have more than one concern in any given situation.

Some examples:

  • Someone is thinking about quitting their job to start a business, but they will only pull the trigger if a) they think that their new work would actually be more fulfilling for them, and b) they know that their family won’t suffer financial hardship.
  • A person is not interested in signing up for cryonics, but offers that they would be if a) it were inexpensive (on the order of $50 a month), and b) the people associated with cryonics were the sort of people they wanted to be identified with. [These are the stated cruxes of a real person that I had this discussion with.]
  • A person would go vegetarian if, a) they were sure it was healthy for them and b) doing so would actually reduce animal suffering (going a level deeper: how elastic is the supply curve for meat?).

In each of these cases there are multiple considerations, none of which is sufficient to cause one to change one’s mind, but which together represent a crux.

As I said, conjunctive cruxes are common. That said, sometimes folks are too quick to assert that they would only change their mind if they turned out to be wrong about a large number of conjunctive terms.

When you find yourself in this position of only changing your mind on the basis of a large number of separate pieces, this is a flag that there may be a more unified crux that you’re missing.

In this situation I would back up and offer very “shallow” cruxes. Instead of engaging with all the detail of your model, look for a very high-level / superficial summary, and check if that is a crux. Following a chain of many shallow cruxes is often easier than trying to get into the details of complicated models right off the bat.

(Alternatively, you might move into something more like consideration factoring.)

As a rule of thumb, the number of parts to a conjunction should be small: two is common, three is less common. Having a 10-part conjunction is implausible. Most people can’t hold that many elements in their head all at once!

I’ve occasionally seen on-the-order-of-10-part disjunctive arguments / conjunctive cruxes in technical papers, though I think it is correct to be suspicious of them. They’re often of the form “argument one is sufficient, but even if it fails, argument two is sufficient, and even if that one fails…” But errors are often correlated, and the arguments are likely not as independent as they may at first appear. It behooves you to identify the deep commonality between your lines of argument, the assumptions that multiple arguments are resting on, because then you can examine those assumptions directly. (Related: the “multiple stage fallacy”.)

Now of course, one could in principle have a disjunctive crux, where if they changed their mind about B or about C, they would change their mind about A. But, in that case there’s no need to bundle B and C. I would just say that B is a crux for A and also C is a crux for A.
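Logically, a conjunctive or disjunctive crux is just a statement about when belief A survives. Here is a minimal boolean sketch (the function names are my own, purely illustrative):

```python
def a_holds_conjunctive(b: bool, c: bool) -> bool:
    """B-and-C is a conjunctive crux for A: A flips only when
    BOTH B and C are overturned; doubting one alone isn't enough."""
    return b or c  # A survives while at least one of B, C still holds

def a_holds_disjunctive(b: bool, c: bool) -> bool:
    """B and C are each, separately, cruxes for A: flipping
    EITHER one flips A."""
    return b and c  # A survives only while both B and C hold

# Changing your mind about only B (b=False) leaves A intact under a
# conjunctive crux, but flips A under a disjunctive one:
assert a_holds_conjunctive(False, True)
assert not a_holds_conjunctive(False, False)  # both overturned: A flips
assert not a_holds_disjunctive(False, True)   # either alone flips A
```

This also shows why there is no need to bundle a disjunction: each disjunct is simply a crux on its own.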

Causal cruxes vs. evidential cruxes

A causal crux back-traces the causal arrow of your belief structure. It’s found by answering the question “why do I believe [x]?” or “what caused me to think [x] in the first place?” and checking if the answer is a crux.

For instance, someone is intuitively opposed to school uniforms. Introspecting on why they feel that way, they find that they’re expecting (or afraid) that that kind of conformity squashes creativity. They check if that’s a crux for them (“what if actually school uniforms don’t squash creativity?”), and find that it is: they would change their mind about school uniforms if they found that they were wrong about the impact on creativity.

Causal cruxes trace back to the reason why you believe the proposition.

In contrast, an evidential crux is a proxy for your belief. You might find evidential cruxes by asking a question like “what could I observe, or find out, that would make me change my mind?”

For instance (this one is from a real double crux conversation that happened at a training session I ran), two participants were disagreeing about whether advertising destroys value on net. Operationalizing, one of them stated that he’d change his mind if he realized that beer commercials, in particular, didn’t destroy value.

It wasn’t as if he believed that advertising is harmful because beer commercials destroy value. Rather it was that he thought that advertising for beer was a particularly strong example of the general trend that advertising is harmful. So if he changed his mind in that instance, where he was most confident, he expected that he would be compelled in the general case.

In this case “beer commercials” are serving as a proxy for “advertising.” If the proxy is well chosen, this can totally serve as a double crux. (It is, of course, possible that one will be convinced that they were mistaken about the proxy, in a way that doesn’t generalize to the underlying trend. But I don’t think that this is significantly more common than following a chain of cruxes down, resolving at the bottom, and then finding that the crux that you named was actually incomplete. In both cases, you move up as far as needed, adjust the crux (probably by adding a conjunctive term), and then traverse a new chain.)

Now, logically, these two kinds of cruxes both have the structure “If not B, then not A” (“if uniforms don’t squash creativity, then I wouldn’t be opposed to them anymore.” and “if I found that beer commercials in fact do create value, then I would think that advertising doesn’t destroy value on net”). In that sense they are equivalent.

But psychologically, causal cruxes traverse deeper into one’s belief structure, teasing out why one believes something, and evidential cruxes traverse outward, teasing out testable consequences or implications of the belief.

Monodirectional vs. Bidirectional cruxes

Say that you are the owner of a small business. You and your team are considering undertaking a major new project. One of your employees speaks up and says “we can’t do this project. The only way to execute on it would bankrupt the company.”

Presumably, this would be a crux for you. If you knew that the project under consideration would definitely bankrupt the company, you would definitively think that you shouldn’t pursue that project.

However, it also isn’t a crux, in this sense: if you found out that that claim was incorrect, that actually you could execute on the project without bankrupting your company, you would not, on that basis alone, definitively decide to pursue the project.

This is an example of a monodirectional crux. If the project bankrupts the company, then you definitely won’t do it. But if it doesn’t bankrupt the company, then you’re merely uncertain. This consideration dominates all the other considerations: it is sufficient to determine the decision when it is pointing in one direction, but it doesn’t necessarily dominate when it points in the other direction.

(Oftentimes, double cruxes are composed of two opposite monodirectional cruxes. This can work totally fine. It isn’t necessary that, for each participant, the whole question turns on the double crux, so long as, for each participant, flipping their view on the crux (from their current view) would also cause them to change their mind about the proposition in question.)

In contrast, we can occasionally identify a bidirectional crux.

For instance, if a person thinks that public policy ought to optimize for Quality Adjusted Life Years, and they’ll support whichever health care scheme does that, then “maximizing QALYs” is a bidirectional crux. That single piece of information (which plan maximizes QALYs) completely determines their choice.

A “single-issue voter” is a person voting on the basis of a bidirectional crux.

In all of these cases you’re elevating one of the considerations over and above all of the others.
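The asymmetry between the two kinds can be sketched in a toy model (the function names, the example predicates, and the `None`-for-uncertain convention are all my own illustration):

```python
from typing import Optional

def monodirectional_crux(bankrupts_company: bool) -> Optional[bool]:
    """'Would the project bankrupt the company?' dominates in one
    direction only: a yes settles the decision against the project,
    a no leaves the decision open to other considerations."""
    if bankrupts_company:
        return False  # definitely don't pursue the project
    return None       # merely uncertain: other considerations decide

def bidirectional_crux(maximizes_qalys: bool) -> bool:
    """'Does this plan maximize QALYs?' settles the decision either
    way, for the single-issue QALY optimizer."""
    return maximizes_qalys
```

The `None` return is the whole point: one arm of a monodirectional crux carries no decision at all, whereas a bidirectional crux never leaves you uncertain.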

Pseudo cruxes

[This section is quite esoteric, and is of little practical relevance, except for elucidating a confusion that folks sometimes encounter.]

Because of the nature of monodirectional cruxes, people will sometimes find pseudo-cruxes: propositions that seem like cruxes, but are nevertheless irrelevant to the conversation.

To give a (silly) example, let’s go back to the canonical disagreement about school uniforms. And let’s consider the proposition “school uniforms eat people.”

Take a person who is in favor of school uniforms. The proposition that “school uniforms eat people” is almost certainly a crux for them. The vast majority of people who support school uniforms would change their mind if they were convinced that school uniforms were carnivorous.

(Remember, in the context of a Double Crux conversation, you should be checking for cruxy-ness independently of your assessment of how likely the proposition is. The absurdity heuristic is insidious, and many claims that turn out to be correct, seem utterly ridiculous at first pass, lacking a lot of detailed framing and background.)

This is a simple crux. If the uniform-preferring person found out that uniforms eat people, they would come to disprefer uniforms.

Additionally, this is probably a crux for folks who oppose school uniforms as well, in one pretty specific sense: were all of their other arguments to fall away, knowing that school uniforms eat people would still be sufficient reason for them to oppose school uniforms. Note that this doesn’t mean that they do think that school uniforms eat people, nor does it mean that finding out that school uniforms don’t eat people (duh) would cause them to change their mind and think that school uniforms are good. We might call this an over-determining hypothetical crux. It’s a monodirectional crux that points exclusively in the direction that a person already believes, and which, furthermore, the person currently assumes to be false.

A person might say,

I already think that school uniforms are a bad idea, but if I found out they eat people, that would be further reason for me to reject them. Furthermore, now that we’re discussing the possibility, “school uniforms don’t eat people” is such an important consideration that it would have to be a component of any conjunctive crux that would cause me to change my mind and think that school uniforms are a good idea. But I don’t actually think that school uniforms eat people, so it isn’t a relevant part of that hypothetical conjunction.

This is a complicated series of claims. Essentially, this person is saying that in a hypothetical world where they thought differently than they currently do, this consideration, if it held up, would be a crux for them (one that would bring them to the position that they actually hold, in reality).

Occasionally (on the order of once out of 100?), a novice participant will find their way to a pseudo crux like that one, and find themselves confused. They can tell that the proposition “school uniforms eat people”, if true, matters for them. It would be relevant to their belief. But it doesn’t actually help them push the disagreement forward, because, at best, it pushes further in the direction of what they already think.

(And secondarily, it isn’t really an opening for helping their partner change their mind, because the uniform-dispreferring person doesn’t actually think that school uniforms eat people, and so would only try to argue that they do if they had abandoned any pretense of truth-seeking in favor of trying to convince someone using whatever arguments will persuade, regardless of their validity.)

So this seems like a crux, but it can’t do work in the Double Crux process.

There is another kind of pseudo crux stemming from monodirectional cruxes. This is when a proposition is not a crux, but its inverse would be.

In our school uniform example, suppose that in a conversation, someone boldly, and apropos of nothing, asserted “but school uniforms don’t eat people.” Uniforms eating people is a monodirectional crux that dominates all the other considerations, but school uniforms not eating people is so passé that it is unlikely to be a crux for anyone (unless the reason they were opposed to school uniforms was kids getting eaten). Nevertheless, there is something about it that seems (correctly) cruxy. It is the ineffectual side of a monodirectional crux. It isn’t a crux, but its inverse is. We might call this a crux shadow or something.

Thus, there is a four-fold pattern of monodirectional cruxes, where one quadrant is a useful progress bearing crux, and the other three contain different flavors of pseudo cruxes.

Proposition: “If school uniforms eat people, then I would oppose school uniforms”

  • If I am opposed to school uniforms:
    • Suppose school uniforms eat people -> Overdetermining hypothetical crux: “I would oppose school uniforms anyway, but this would be a crux for me, if (hypothetically) I was in favor of school uniforms.”
    • Suppose school uniforms don’t eat people -> Non-crux / Crux shadow: “Merely not eating people is not sufficient to change my mind. Not a crux.”
  • If I am in favor of school uniforms:
    • Suppose school uniforms eat people -> Relevant (real) monodirectional crux: “If school uniforms actually eat people, that would cause me to change my mind.”
    • Suppose school uniforms don’t eat people -> Non-crux / Crux shadow: “While finding out that uniforms do eat people would sway me, that they don’t eat people isn’t a crux for me.”

And in the general case,

Proposition: X is sufficient for A, but Not X is not sufficient for B

  • If I believe A:
    • X is true -> Overdetermining hypothetical crux
    • X is false -> Non-crux / Crux shadow
  • If I believe B:
    • X is true -> Relevant monodirectional crux
    • X is false -> Non-crux / Crux shadow
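The four-fold pattern is small enough to capture in a few lines of code. The labels follow the general case above; the classifier itself is just my illustration, not part of the Double Crux method:

```python
def classify(believes_a: bool, x_is_true: bool) -> str:
    """Classify a proposed crux of the form 'X is sufficient for A',
    given the holder's current belief (A or B) and which direction
    of X is being supposed."""
    if not x_is_true:
        # The ineffectual side of the monodirectional crux: not-X
        # settles nothing, whichever side you're on.
        return "non-crux / crux shadow"
    if believes_a:
        # X would only push further toward what they already think.
        return "overdetermining hypothetical crux"
    # Believes B, and X (if true) would flip them: the one quadrant
    # that can do work in a Double Crux conversation.
    return "relevant monodirectional crux"

# School uniforms: A = "oppose uniforms", X = "uniforms eat people"
assert classify(believes_a=False, x_is_true=True) == "relevant monodirectional crux"
assert classify(believes_a=True, x_is_true=True) == "overdetermining hypothetical crux"
assert classify(believes_a=True, x_is_true=False) == "non-crux / crux shadow"
```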

Note that the basic double crux pattern avoids accidentally landing on pseudo cruxes.



Coordinating Conversation I: micro-coordination of intention

[This is a draft, mostly written in 2018, with very minimal editing. See the caveats here.]

Follow up to: Something simple to try in conversations

In this essay, I want to describe a simple, but important abstraction for how to think about what’s happening in a conversation, and how to make use of that abstraction for helping a conversation go more smoothly. This idea applies at multiple time-scales: in this post, I’ll describe the simpler case of second-to-second interactions, and then in the next, I’ll explain the broader minute-to-minute case.

Let’s consider some two-person epistemic conversation, say two people attempting to converge on some topic they disagree about.

There are two roles (moves / strategies) that one can play in a two-person epistemic conversation. One can either be listening [/ be the listener], trying to understand their partner’s point, or one can be explaining [/ be the explainer], trying to convey a point to their partner.

Both participants will almost certainly do both over the course of the conversation, and each may switch very rapidly between explaining and listening, but for any discrete time-slice of the conversation, each person will be holding an intention either to convey, or to come to understand.

It’s useful to track conversations in terms of which participant is implementing which role, second to second. Thinking about conversations this way, and being conscious of what intention you are implementing, as a participant, is useful because it enables conversational coordination.

At any given time, no more than one person should be explaining, and at least one person should be trying to understand what is being said. Specifically, this means that it should never be the case that both participants are in the explaining mode at the same time.

At any given moment, one person should be explaining, and the other person should be listening and trying to understand.

This may seem like an obvious point, but conversations frequently fail to meet this standard. This is a common failure mode: both parties are excited about the point they have to make, and have a visceral compulsion to make that point clear to the other person. And so both people are explaining at each other. When both people are in the explaining role at the same time, communication usually fails.

(The way Double Crux has been traditionally described aggravates this issue. The concept of a Double Crux can leave one with the impression that one is supposed to try and do both roles at once, both making their point and attempting to understand the point of their partner in one swoop. I, at least, don’t hold that Double Crux conversations should be conducted this way. [Instead, I think they should usually look like this.])

From the inside, this situation often feels like seeing that your partner is just missing this one point, and if you just convey it to them, they’ll realize that you’re right. It feels so easy to tell them the reason why they’re wrong. Or it feels like what they’re saying is so absurd that there’s almost a compulsion to point out exactly how absurd it is.

When I notice this happening in a conversation I’m participating in, I step back to the listening role and try and understand the point that my interlocutor is making. Only after I’ve paraphrased and they feel like I’ve understood them, do I return to making my point. [1]

TAP: notice that I’m pushing to make a point over my interlocutor -> mentally step back and paraphrase

This is super simple, but it is also important for making conversations go well.

[1] If it’s still relevant. Often, once I’ve understood what they were trying to tell me, my objection is obviated.



Intro to some draft posts on Double Crux, Epistemic Mediation, and Conversational Facilitation

This is the intro page to a short series of posts on topics relating to Double Crux, Epistemic Mediation, and Conversational Facilitation.

I’ve spent a lot of time developing methods for resolving tricky disagreements, and trying to teach those methods, via running test sessions, facilitating conversations, etc.

For the most part, I haven’t prioritized writing things up. However, I do have a series of incomplete drafts, most of which I wrote in 2018.

They’ve been sitting in my drafts folder since then (with some occasional additions every few months). But they don’t do any good sitting in my drafts folder, and this blog is specifically for posting rough drafts. So I’m posting them here, where other people can take a look at them.

All the caveats:

  • They are really, actually drafts. I’ve made minimal changes for readability. I’m sorry if these are not comprehensible, but the alternative was not posting anything, at least for a while.
  • I don’t expect these to convey skills, only point at skills.
    • Part of the reason why I have had such high standards for these and similar posts is that, in running test sessions, I did a lot of iteration to develop units that would cause people to actually implement the mental motions described, instead of simply verbally agreeing that those moves are sensible, and then going back to using their default conversational habits. I want to do the same via writing, but I don’t know how to do that yet.
  • My thinking in this area has evolved since I wrote these. While I still think that all of these are at least pointed in the right direction, some of my descriptions no longer seem to me to be the most natural way to conceptualize the relevant mechanisms.
  • I don’t claim originality for any of this content. Just like everything in the space of rationality, I’m sure that most of this has been discovered and rediscovered before, and many people are already doing something like this.
  • This is definitely not a complete catalogue of everything that I think is important in this space of disagreement resolution or conversational facilitation.

In future, I’m going to post more of my iterative, semi-rambling, incomplete, and maybe-incomprehensible content here, instead of letting it sit in my drafts folder.


Coordinating Conversation / micro-coordination (listening and explaining)


The Problem of “Agreeing to Agree”

Action-Oriented Operationalization

Other posts on these topics that I’ve already published:

Shallow Cruxes

The Crux-checking move (Using your partner to find your own Cruxes)

Agree first

Some other relevant posts

Some things I think about Double Crux and related topics

The Basic Double Crux Pattern

Basic Double Crux pattern example

Using the facilitator to make sure that each person’s point is held

Consideration Factoring: a relative of Double Crux

Consideration Factoring: a relative of Double Crux

[Epistemic status: work in progress, at least insofar as I haven’t really nailed down the type-signature of “factors” in the general case. Nevertheless, I do this or something like this pretty frequently and it works for me. There are probably a bunch of prerequisites, only some of which I’m tracking, though.]

This post describes a framework I sometimes use when navigating (attempting to get to the truth of and resolve) a disagreement with someone. It is clearly related to the Double Crux framework, but is distinct enough that I think of it as an alternative to Double Crux. (Though in my personal practice, of course, I sometimes move flexibly between frameworks.)

I claim no originality. Just like everything in the space of rationality, many people already do this, or something like this.

Articulating the taste that inclines me to use one method in one conversational circumstance and a different method in a different circumstance is tricky. But a main trigger for using this one is when I am in a conversation with someone, and it seems like they keep “jumping all over the place” or switching between different arguments and considerations. Whenever I try to check if a consideration is a crux (or share an alternative model of that consideration), they bring up a different consideration. The conversation jumps around, and we don’t dig into any one thing for very long. Everything feels kind of slippery somehow.

(I want to emphasize that this pattern does not mean the other person is acting in bad faith. Their belief is probably a compressed gestalt of a bunch of different factors, which are probably not well organized by default. So when you make a counterargument to one point, they refer to their implicit model, and the counterpoint you made seems irrelevant or absurd, and they try to express what that counterpoint is missing.)

When something like that is happening, it’s a trigger to get paper (this process absolutely requires externalized, shared, working memory), and start doing consideration factoring.

Step 1: Factor the Considerations

1a: List factors

The first step is (more or less) to goal factor. You want to elicit from your partner all of the considerations that motivate their position, and write those down on a piece of paper.

For me, so far, usually this involves formulating the disagreement as an action or a world state, and then asking what are the important consequences of that action or world-state. If your partner thinks that it is a good idea to invest 100,000 EA dollars in project X, and you disagree, you might factor all of the good consequences that your partner expects from project X.

However, the type signature of your factors is not always “goods.” I don’t yet have a clean formalism that describes what the correct type signature is, in full generality. But it is something like “reasons why Z is important”, or “ways that Z is important”, where the two of you disagree about the importance of Z.

For instance, I had a disagreement with someone about how important / valuable it is that rationality development happen within CFAR, as opposed to some other context: he thought it was all but crucial, or at least that moving it elsewhere would throw away huge swaths of value, while I thought it didn’t matter much one way or the other. More specifically, he said that he thought that CFAR had a number of valuable resources that it would be very costly for some outside group to accrue.

So together, we made a list of those resources. We came up with:

  1. Ability to attract talent
  2. The ability to propagate content through the rationality and EA communities.
  3. The Alumni network
  4. Funding
  5. Credibility and good reputation in the rationality community.
  6. Credibility and good reputation in the broader world outside of the rationality community.

My scratch paper:

[photo of scratch paper]

(We agreed that #5 was really only relevant insofar as it contributed to #2, so we lumped them together. The check marks are from later in the conversation, after we resolved some factors.)

Here, we have a disagreement which is something like “how replaceable are the resources that CFAR has accrued”, and we factor into the individual resources, each of which we can engage with separately. (Importantly, when I looked at our list, I thought that for each resource, either 1) it isn’t that important, 2) CFAR doesn’t have much of it, or 3) it would not be very hard for a new group to acquire it from scratch.)

1b: Relevance and completeness checks

Importantly, don’t forget to do relevance and completeness checks:

  • If all of these considerations but one were “taken care of” to your satisfaction, would you change your mind about the main disagreement? Or is that last factor doing important work that you don’t want to lose?
  • If all of these considerations were “taken care of” to your satisfaction, would you change your mind about the main disagreement? Or is something missing?

[Notice that the completeness check and the relevance checks on each factor, together, are isomorphic to a crux-check on the conjunction of all of the factors.]

Step 2: Investigate each of the factors

Next, discuss each of the factors in turn.

2a: Rank the factors

Do a breadth-first analysis of which branches seem most interesting to talk about, where “interesting” is some combination of “how cruxy that factor is to your view”, “how cruxy that factor is for your partner’s view”, and “how much the two of you disagree about that factor.”

You’ll get to everything eventually, but it makes sense to do the most interesting factors first.

The two of you spend a few minutes superficially discussing each one, and assessing which seems most juicy to continue with first.

2b: Discuss each factor in turn

Usually, I’ll take out a new sheet of paper for each factor.

Here you’ll need to be seriously and continuously applying all of the standard Double Crux / convergence TAPs. In particular, you should be repeatedly...

  • Operationalizing to specific cases
  • Paraphrasing what you understand your partner to have said
  • Crux-checking (for yourself) all of their claims, as they make them

[I know. I know, I haven’t even written up all of these basics, yet. I’m working on it.]

This is where the work is done, and where most of the skill lies. As a general heuristic, I would not share an alternative model or make a counterargument until we’ve agreed on a specific, visualizable story that describes my partner’s point and I can paraphrase that point to my partner’s satisfaction (pass their ITT).

In general, a huge amount of the heavy lifting is done by being ultra specific. You want to be working with very specific stories, with clarity about who is doing what and what the consequences are. If my partner says “MIRI needs prestige in order to attract top technical talent”, I’ll attempt to translate that into a specific story…

“Ok, so for instance, there’s a 99.9 percentile programmer, let’s call him Bob, who works at Google, and he comes to an AIRCS workshop, and has a good time, and basically agrees that AI safety is important. But he also doesn’t really want to leave his current job, which is comfortable and prestigious, and so he sort of slides off of the whole x-risk thing. But if MIRI were more prestigious, in the way that say, RAND used to be prestigious (most people who read the New York Times know about MIRI, and people are impressed when you say you work at MIRI), Bob is much more likely to actually quit his job and go work on AI alignment at MIRI?”

…and then check if my partner feels like that story has captured what they were trying to say. (Checking is important! Much of the time, my partner wants to correct my story in some way. I keep offering modified versions of it until I give a version that they certify as capturing their view.)

Very often, telling specific stories clears out misconceptions: either correcting my mistaken understanding of what the other person is saying, or helping me to notice places where some model that I’m proposing doesn’t seem realistic in practice. [One could write several posts on just the skillful use of specificity in convergence conversations.]

Similarly, you have to be continually maintaining the attitude of trying to change your own mind, not trying to convince your partner.

Sometimes the factoring is recursive: it makes sense to further subdivide the considerations within each factor. (For instance, in the conversation referenced above about rationality development at CFAR, we took the factor of “CFAR has or could easily get credibility outside of the rationality / EA communities” and asked “what does extra-community credibility buy us?” This produced the factors “access to government agencies, Fortune 500 companies, universities, and other places of power” and “leverage for raising the sanity waterline.” Then we might talk about how much each of those sub-factors matters.)

(In my experience) your partner will probably still try to jump around between the factors: you’ll be discussing factor 1, and they’ll bring in a consideration from factor 4. Because of this, one of the things you need to be doing is, gently and firmly, keeping the discussion on one factor at a time. Every time my partner seems about to jump, I’ll suggest that the point they’re making seems more relevant to [that other factor] than to this one, and check if they agree. (The checking is really important! It’s pretty likely that I’ve misunderstood what they’re saying.) If they agree, then I’ll say something like “cool, so let’s put that to the side for a moment, and just focus on [the factor we’re talking about], for the moment. We’ll get to [the other factor] in a bit.” I might also make a note of the point they were starting to make on the paper for [the other factor]. Often, they’ll try to jump a few more times, and then get the hang of this.

In general, while you should be leading and facilitating the process, every step should be a consensus between the two of you. You suggest a direction to steer the conversation, and check if that direction seems good to your partner. If they don’t feel interested in moving in that direction, or feel like that is leaving something important out, you should be highly receptive to that.

If at any point your partner feels “caught out”, or annoyed that they’ve trapped themselves, you’ve done something wrong. This procedure, and mapping things out on paper, should feel like a relief to them, because you can take things one at a time, and you can both trust that everything important will be gotten to.

Sometimes, you will semi-accidentally stumble across a Double Crux for your top level disagreement that cuts across your factors. In this case you could switch to using the Double Crux methodology, or stick with Consideration Factoring. In practice, finding a Double Crux means that it becomes much faster to engage with each new factor, because you’ve already done the core untangling work for each one, before you’ve even started on it.


This is just one framework among a few, but I’ve gotten a lot of mileage from it lately.

Basic Double Crux pattern example

Followup to: The Basic Double Crux Pattern


This is a continuation of my previous post, in which I outlined what I currently see as the core pattern of double crux finding. This post will mostly be an example of that pattern in action.

While this example will be more realistic than the intentionally silly example in the last post, it will still be simplified, for the sake of brevity and clarity. There’s a tradeoff between realism and clarity of demonstration. The following conversation is mostly fictional (though inspired by Double Cruxes that I have actually observed). It is intended to be realistic enough that this could have been a real conversation, while also specifically demonstrating the basic Double Crux pattern (as opposed to other moves or techniques that are often relevant).


As a review, the key conversational moves of Double Crux are…

  1. Finding a single crux:
    1. Checking for understanding of P1’s point.
    2. Checking if that point is a crux for P1.
  2. Finding a double crux:
    1. Checking P2’s belief about P1’s crux.
    2. Checking if P2’s crux is also a crux for P1.

As an exercise, I encourage you to go through the example below and track what is currently happening in the context of this frame: Which parts of the conversation correspond to which conversational move?

Note that, in general, participants will be intent on saying all kinds of things that aren’t cruxes, and part of the facilitator’s role is to keep the conversation on track, while also engaging with their concerns.

Example: Carol and Daniel on gun control legislation

[Important note: This example is entirely fictional. It doesn’t represent the views of anyone, including me. The arguments that these two characters make are constructions of arguments that I have heard from CFAR participants, things that I heard in podcasts years ago, and things I made up. I have no idea whether these are in fact the most important considerations about gun control, or whether the facts that these characters reference are actually true, only that they are plausible positions that a person could hold.]

Two people, Carol and Daniel, are having a discussion about politics. In particular, they’re discussing new gun laws that would heavily restrict citizens’ ability to buy and sell firearms. Carol is in favor of the laws, but Daniel is opposed.

Facilitator: Daniel, do you want to tell us why you think this law is a bad idea?

Daniel: Look, every time there’s a mass shooting, everyone gets up in arms about it, and politicians scramble over each other to show how tough they are, and how bad an event the shooting was, and then they pass a bunch of laws that don’t actually help, and add a bunch of red tape for people who want to buy firearms for legitimate uses.

Facilitator: Ok. There were a lot of things in there, but one part of it is that you said these laws don’t work?

Daniel: That’s right.

Facilitator: You mean that they don’t reduce the incidence of mass shootings?

Daniel: Yeah, but mass shootings are almost a rounding error.

Facilitator: [suggesting an operationalization] So what do you mean when you say they don’t help? That they don’t reduce the murder rate?

Daniel: Yeah. That’s right.

Facilitator: Ok. If you found out that this law in particular did reduce the US murder rate, would you be in favor of it?

Daniel: Well, it would have to have a significant impact. If the effect is so small as to be basically unnoticeable, then it probably isn’t worth the constraint on millions of people across the US.

Facilitator: Ok. But if you knew that this law would “meaningfully” reduce the murder rate, would you change your mind about this law? (Where meaningfully means something like “more than 5%”?)

Daniel: Yeah, that sounds right.

Facilitator: Great. Ok. So it sounds like whether this law would impact the murder rate is a crux for you.

Carol, do you think that this law would reduce the murder rate?

Carol: Yeah.

Facilitator: By more than 5%?

Carol: I’m not sure by how much specifically, but something like that.

Facilitator: Ok. If you found out that this law would actually have no effect (or close to no effect) on the murder rate, would you change your mind?

Carol: I think it would reduce the murder rate.

Facilitator: Cool. You currently think that this law would reduce the murder rate. I want to talk about why you think that in a moment. But first, I want to check if it’s a crux. Suppose you found really strong evidence that this law actually wouldn’t reduce the murder rate at all (I realize that you currently think that it will). Would you still support this law?

Carol: Yeah. I guess there wouldn’t be much of a point to it in that case.

Facilitator: Cool. So it sounds like this is a Double Crux. You, Daniel, would change your mind about this law if you found out that it would substantially reduce the murder rate, and you, Carol, would change your mind if you found out it wouldn’t reduce the murder rate. High-five!

The participants high-five, and write down their cruxes.

[insert picture]

Facilitator: Alright. So it sounds like we should talk about this double crux: why do each of you think that this law would (or wouldn’t) reduce the murder rate?

Carol: Well, I’m aware of other countries that implemented similar reforms, and it did in fact reduce the incidence of violent crime in those countries.

Facilitator: Ok. So the reason why you think that this law would reduce the murder rate is that it worked in other countries?

Carol: That’s right.

Facilitator: Ok. Suppose you found out that actually laws like this did not help in those other countries? Would that change your mind about how likely this reform is to work in the US?

Carol: I’m pretty sure that it did work. In Sweden, for instance.

Facilitator: Ok. Suppose you found out that the evidence in those countries, like Sweden, didn’t hold up, or it turns out you misread the studies or something.

Carol: If all those studies turned out to be bunk…Yeah. I guess that would cause me to change my mind.

Facilitator: Cool. So it sounds like that’s a crux for you.

Daniel, presumably, you think that laws like this don’t lower the murder rate in other countries. Is that right?

Daniel: Well, I don’t know about those studies in particular. But I’m generally skeptical here, because this topic is politicized. I think you can’t rely on most of the papers in this area.

Facilitator: Ok. Is that a crux for you?

Daniel: I don’t think so. There are a lot of gun law interventions that are effective in other countries, but don’t work in the US, because there are just so many more guns in the US than in other countries in the world. Like, because there are so many guns already in circulation, making it harder to buy a gun today makes almost no difference for how easy it is to get your hands on a gun. In other countries where there are fewer weapons lying around, things like gun buybacks, or restrictions on gun purchases, have more of an effect, because the pool of existing guns is smaller, and so those interventions have a much larger proportional impact.

Like, you have to account for the black market. There’s a big black market in weapons in the US, much bigger than in most European countries, I think.

Facilitator: [Summarizing] Ok. It sounds like you’re saying that you don’t think making it harder to buy guns has an impact on the murder rate because, if you want to kill someone in the US, you can get your hands on a gun one way or another.

Daniel: Yeah.

Facilitator: Ok. Is that a crux for you? Suppose that you found out that, actually reforms like this one do make it much harder for would-be criminals to get a gun. Would that change your mind about whether this reform would shift the murder rate?

Daniel: Let’s see…I might want to think about it more, but yeah, I think if I knew that it actually made it harder for those people to get guns, that would change my mind, but it would have to be a lot harder. Like, it would have to be hard enough that those people end up just not having guns, not like they would have to wait an extra 2 weeks.

Facilitator: Ok. Cool. If laws like this made it so that fewer criminals had guns, you would expect those laws to reduce the murder rate?

Daniel: Yeah.

Facilitator: Carol, do you expect that this new gun law would cause fewer criminals to own guns?

Carol: Yeah. But it’s not just the guns. The other thing is the ammunition. This law would not just make it harder to buy guns, but would make it harder to buy bullets. It doesn’t matter how criminals got their guns (they might have inherited them from their grandmother, for all I know), if they can’t buy bullets for those guns.

Facilitator: OH-k. That’s pretty interesting. You’re saying that the way that you limit gun violence is by restricting the flow of ammunition.

Carol: Exactly.

Daniel: Huh. Ok. I hadn’t thought about that. That does seem pretty relevant.

Facilitator (to Daniel): Is that a crux for you? If this law makes it harder for criminals to get bullets, would you expect it to impact the murder rate?

Daniel: Yeah. I hadn’t thought about that. Yeah, if would-be murderers have trouble getting ammunition, then I would expect that to have an impact on the murder rate. But I don’t know how they get bullets. It might be mostly via black market channels that aren’t much affected by this kind of regulation. Also, it may be that it’s similar to guns, in that there are already so many bullets in circulation that it…

Facilitator (to Daniel): Cool. Let’s hold off on that question until we check if it is also a crux for Carol.

Facilitator (to Carol): Is that a crux? Suppose that you found out that criminals are able to get their hands on ammunition via “unofficial”, illegal methods. Would that change your mind about how likely this policy is to reduce the murder rate in the US?

Carol: Let me think.

. . .

I’m still kind of suspicious. I would want to see some hard data from states or countries that have a similar gun culture to the US, and also have these sorts of restrictions…

Daniel [Interrupting]: Yeah, I would obviously want that too.

Carol: …but in the absence of evidence like that, then it makes sense that if criminals get guns and ammo via illegal channels, then a reform that makes it harder to buy weapons wouldn’t have any impact on the murder rate.

Facilitator: Ok. Awesome. So it sounds like we have a Double Crux: “would this regulation limit the availability of ammunition for would-be killers?” Separately, you, Carol, have a single crux about “Would this make it harder for criminals to buy guns?” We could talk about that, but Daniel doesn’t think it’s as relevant.

Carol: No. I’m excited to just talk about the ammo for now.

Facilitator: Cool. That also makes sense to me.

And maybe underneath “Would this regulation limit the availability of ammunition to would-be killers”, there’s another question of “How do criminals mostly get their ammunition?” If they get their ammo from sources that are relevantly affected by these sorts of restrictions, then we expect it to restrict the flow of ammo to criminals. But if they have a stockpile, or get it from the black market (which gets it from other countries, or something), then we wouldn’t expect the availability of ammo for criminals to be impacted much.

Does that sound right?


Facilitator: Ok then. Cool. I think you guys should high five.

Carol and Daniel high five, and then write the new cruxes on their piece of paper.


Facilitator: Also, it sounds like there was maybe another Double Crux there? That is, if there was evidence from a country or state that tried a policy like this one, and also had a similar number of guns in circulation?

Does that sound right?

Daniel: Yeah.

Carol: Yep.

Facilitator: Ok. That just seems good to note, even if it isn’t actionable right now. If either of you encounter evidence of that sort, it seems like it would be relevant to your beliefs.

But it seems like the most useful thing to do right now is to see if we can figure out how criminals typically get ammunition.

Daniel and Carol researched this question, found a bunch of conflicting evidence, but settled on a shared epistemic state about how most criminals get their ammo. One or both of them updated, and they lived with a more detailed model of this small part of the world ever after.


Hopefully this example gives a flavor of what a Double Crux conversation is (often) like.

Again, I want to point out that this example uses the person of the facilitator for convenience. While I do usually recommend getting a third person to be a facilitator, it is also totally possible for one or both of the conversational participants to execute these moves and steer the conversation.

This conversation is specifically intended to demonstrate the basic Double Crux pattern (although the normal order was disrupted a bit towards the end of the conversation), but nevertheless, we see some examples of other principles or techniques of productive disagreement resolution.

Distillation: As is common, Daniel said a lot of things in his opening statement, not all of which are relevant, or cruxes. One of the things that the facilitator needs to do is draw out the key point or points from all of those words, and then check if those are crucial. Much of the work of Double Crux is drawing out the key points each person is making, so you can check if they are cruxes.

Operationalizing: This particular example was pretty low on operationalization, but nevertheless, early on, the conversation settled on the operationalization of “reducing the murder rate”.

Persistent crux checking: Every time either Carol or Daniel made a point, the facilitator would ask them if it was a crux.

Signposting: The facilitator frequently states what just happened (“It sounds like that’s a crux for you.”). This is often surprisingly helpful to do in a conversation.

Distinguishing between whether a proposition is a crux, and what the truth value of a crux is: One thing that you’ll notice is that several times, the facilitator asks Carol if some proposition “A” is a crux for her, and she responds by saying, in effect, “I think A is true”. This is a fairly common thing that happens, even for quite smart people. It is important to make mental space between the questions “is X a crux for Y?”, and “is X true?”. It is often very tempting to jump in and refute a claim, before checking if it is a crux. But it is important to keep those questions separate.

False starts: There were several places where the facilitator checked for a Double Crux, and “missed”. This is natural and normal.

This conversation was unusual in that most of the considerations named were cruxes for someone. People are often inclined to share considerations that, upon reflection, they agree don’t matter at all.

Conversational avenues avoided: This is maybe the key value add of Double Crux. For every false start, there was a long, and ultimately irrelevant conversation that could have been had.

It might have been tempting for Daniel to argue about whether gun control regulation actually worked in other countries. (It is just sooo juicy to tell people exactly how they are wrong, when they’re wrong.) But that would have been a pointless exercise (for Daniel’s belief improvement, at least), because Daniel thinks that the data from other countries is irrelevant. At every point they stay on a track that both parties consider promising for updating.

(Of course, sometimes you do have information that is relevant to your partner’s crux, but not to your own, and it is good and proper to share that info with them.)

Inside view / outside view separation: Towards the end, we saw Carol distinguishing between outside view reasoning based on empirical data, and inside view reasoning based on thinking through probable consequences. She clearly separated these two, for herself. I think this is sometimes a useful move, so that one can feel psychologically permitted to do blue-sky reasoning, without forgetting that you would defer to the empirical data over and above your theorizing.


I also again want to clarify my claims:

  • I am not claiming that this pattern solves all disagreements.
  • I am not claiming that this is the only thing a person should do in conversation.
  • I am not claiming that one should always stick religiously to this pattern, even when Double Cruxing. (Though it is good to try sticking religiously to this pattern as a training exercise. This is one of the exercises that I have people do at Double Crux trainings.)

I am saying…

  1. This is my current best attempt to delineate how finding double cruxes usually works, and
  2. I have found double cruxes to be extremely useful for productively resolving disagreements.


Hope this helps. More to come, soon probably.


[As always, I’m generally willing to facilitate conversations and disagreements (Double Crux or not) for rationalists and EAs. Email me at eli [at] rationality [dot] org if that’s something you’re interested in.]

The Basic Double Crux Pattern

[This is a draft, to be posted on LessWrong soon.]

I’ve spent a lot of time developing tools and frameworks for bridging “intractable” disagreements. I’m also the person affiliated with CFAR who has taught Double Crux the most, and done the most work on it.

People often express to me something to the effect of, “The important thing about Double Crux is all the low level habits of mind: being curious, being open to changing your mind, paraphrasing to check that you’ve understood, operationalizing, etc. The ‘Double Crux’ framework itself is not very important.”

I half agree with that sentiment. I do think that those low level cognitive and conversational patterns are the most important thing, and at Double Crux trainings that I have run, most of the time is spent focusing on specific exercises to instill those low level TAPs.

However, I don’t think that the only value of the Double Crux schema is in training those low level habits. Double cruxes are extremely powerful machines that allow one to identify, if not the most efficient conversational path, a very high efficiency conversational path. Effectively navigating down a chain of Double Cruxes is like magic. So I’m sad when people write it off as useless.

In this post, I’m going to try to outline the basic Double Crux pattern, the series of four moves that makes Double Crux work, and give a (simple, silly) example of that pattern in action.

These four moves are not (always) sufficient for making a Double Crux conversation work; that also depends on a number of other mental habits and TAPs. But this pattern is, according to me, at the core of the Double Crux formalism.

The pattern:

The core Double Crux pattern is as follows. For simplicity, I have described this in the form of a 3-person Double Crux conversation, with two participants and a facilitator. Of course, one can execute these same moves in a 2 person conversation, as one of the participants. But that additional complexity is hard to manage for beginners.

The pattern has two parts (finding a crux, and finding a double crux), and each part is composed of 2 main facilitation moves.

Those four moves are…

  1. Clarifying that you understood the first person’s point.
  2. Checking if that point is a crux.
  3. Checking the second person’s belief about the truth value of the first person’s crux.
  4. Checking if the first person’s crux is also a crux for the second person.

In practice

The conversational flow of these moves looks something like this:

Finding a crux of participant 1:

P1: I think [x] because of [y]

Facilitator: (paraphrasing, and checking for understanding) It sounds like you think [x] because of [y]?

P1: Yep!

Facilitator: (checking for cruxyness) If you didn’t think [y], would you change your mind about [x]?

P1: Yes.

Facilitator: (signposting) It sounds like [y] is a crux for [x] for you.

Checking if it is also a crux for participant 2

Facilitator: Do you think [y]?

P2: No.

Facilitator: (checking for a Double Crux) if you did think [y] would that change your mind about [x]?

P2: Yes.

Facilitator: It sounds like [y] is a Double Crux

[Recurse, running the same pattern on [Y] ]

Obviously, in actual conversation, there is a lot more complexity, and a lot of other things that are going on.

For one thing, I’ve only outlined the best case pattern, where the participants give exactly the most convenient answer for moving the conversation forward (yes, yes, no, yes). In actual practice, it is quite likely that one of those answers will be reversed, and you’ll have to compensate.

For another thing, this formalism is rarely so simple. You might have to do a lot of conversational work to clarify the claims enough that you can ask if B is a crux for A (for instance when B is nonsensical to one of the participants). Getting through each of these steps might take fifteen minutes, in which case rather than four basic moves, this pattern describes four phases of conversation. (I claim that one of the core skills of a savvy facilitator is tracking which stage the conversation is at, which goals have you successfully hit, and which is the current proximal subgoal.)

There is also a judgment call about which person to treat as “participant 1” (the person who generates the point that is tested for cruxyness). As a first-order heuristic, the person who is closer to making a positive claim over and above the default should usually be “P1”. But this is only one heuristic.


This is an intentionally silly, over-the-top example, for demonstrating the pattern without any unnecessary complexity. I’ll publish a somewhat more realistic example in the next few days.

Two people, Alex and Barbra, disagree about tea: Alex thinks that tea is great, and drinks it all the time, and thinks that more people should drink tea, and Barbra thinks that tea is bad, and no one should drink tea.

Facilitator: So, Barbra, why do you think tea is bad?

Barbra: Well it’s really quite simple. You see, tea causes cancer.

Facilitator: Let me check if I’ve got that: you think that tea causes cancer?

Barbra: That’s right.

Facilitator: Wow. Ok. Well if you found out that tea actually didn’t cause cancer, would you be fine with people drinking tea?

Barbra: Yeah. Really the main thing that I’m concerned with is the cancer-causing. If tea didn’t cause cancer, then it seems like tea would be fine.

Facilitator: Cool. Well it sounds like this is a crux for you Barb. Alex, do you currently think that tea causes cancer?

Alex: No. That sounds like crazy-talk to me.

Facilitator: Ok. But aside from how realistic it seems right now, if you found out that tea actually does cause cancer, would you change your mind about people drinking tea?

Alex: Well, to be honest, I’ve always been opposed to cancer, so yeah, if I found out that tea causes cancer, then I would think that people shouldn’t drink tea.

Facilitator: Well, it sounds like we have a double crux!

In a real conversation, it often doesn’t go this smoothly. But this is the rhythm of Double Crux, at least as I apply it.

That’s the basic Double Crux pattern. As noted there are a number of other methods and sub-skills that are (often) necessary to make a Double Crux conversation work, but this is my current best attempt at a minimum compression of the basic engine of finding double cruxes.

I made up a more realistic example here, and I might make more or better examples.


Some things I think about Double Crux and related topics

I’ve spent a lot of my discretionary time working on the broad problem of developing tools for bridging deep disagreements. I’m also probably the person who has spent the most time explicitly thinking about and working with CFAR’s Double Crux framework. It seems good for at least some of my high level thoughts to be written up some place, even if I’m not going to go into detail, defend, or substantiate, most of them.

The following are my own beliefs and do not necessarily represent CFAR, or anyone else. I, of course, reserve the right to change my mind.

Here are some things I currently believe:


  1. Double Crux is one (highly important) tool/framework among many. I want to distinguish between the overall art of untangling and resolving deep disagreements and the Double Crux tool in particular. The Double Crux framework is maybe the most important tool (that I know of) for that goal, but it is only one tool/framework in an ensemble.
    1. Some other tools/ frameworks, that are not strictly part of Double Crux (but which are sometimes crucial to bridging disagreements) include NVC, methods for managing people’s intentions and goals, various forms of co-articulation (helping to draw out an inchoate model from one’s conversational partner), etc.
    2. In some contexts other tools are substitutes for Double Crux (i.e., another framework is more useful), and in some cases other tools are helpful or necessary complements (i.e., they solve problems or smooth the process within the Double Crux frame).
    3. In particular, my personal conversational facilitation repertoire is about 60% Double Crux-related techniques, and 40% other frameworks that are not strictly within the frame of Double Crux.
  2. Just to say it clearly: I don’t think Double Crux is the only way to resolve disagreements, or the best way in all contexts. (Though I think it may be the best way, that I know of, in a plurality of common contexts?)
  3. The ideal use case for Double Crux is when…
    1. There are two people…
    2. …who have a real, action-relevant, decision…
    3. …that they need to make together (they can’t just do their own different things)…
    4. …in which both people have strong, visceral intuitions.
  4. Double Cruxes are almost always conversations between two people’s system 1’s.
  5. You can Double Crux between two people’s unendorsed intuitions. (For instance, Alice and Bob are discussing a question about open borders. They both agree that neither of them are economists, that neither of them trust their intuitions here, and that if they had to actually make this decision, it would be crucial to spend a lot of time doing research, examining the evidence, and consulting experts. But nevertheless Alice’s current intuition leans in favor of open borders, and Bob’s current intuition leans against open borders. This is a great starting point for a Double Crux.)
  6. Double cruxes (as in a crux that is shared by both parties in a disagreement) are common, and useful. Most disagreements have implicit Double Cruxes, though identifying them can sometimes be tricky.
  7. Conjunctive cruxes (I would change my mind about X, if I changed my mind about Y and about Z, but not if I only changed my mind about Y or Z) are common.
  8. Folks sometimes object that Double Crux won’t work, because their belief depends on a large number of considerations, each one of which has only a small impact on their overall belief, and so no one consideration is a crux. In practice, I find that there are double cruxes to be found even in cases where people expect their beliefs have this structure.
    1. Theoretically, it makes sense that we would find double cruxes here: if a person has a strong disagreement (including a disagreement of intuition) with someone else, we should expect that there are a small number of considerations doing most of the work of causing one person to think one thing and the other to think something else. It is improbable that each person’s beliefs depend on 50 factors, and that for Alice, most of those 50 factors point in one direction, while for Bob, most point in the other direction, unless those factors are not independent. If considerations are correlated, you can abstract out the fact or belief that generates the differing predictions in all of those separate considerations. That “generating belief” is a crux.
    2. That said, there is a different conversational approach that I sometimes use, which involves delineating all of the key considerations (then doing Goal-factoring-style relevance and completeness checks), and then dealing with each consideration one at a time (often via a fractal tree structure: listing the key considerations of each of the higher-level considerations).
      1. This approach absolutely requires paper, and skillful (firm, gentle) facilitation, because people will almost universally try to hop around between considerations, and they need to be viscerally assured that their other concerns are recorded and will be dealt with in due course, in order to engage deeply with any given consideration.
  9. About 60% of the power of Double Crux comes from being specific.
    1. I quite like Liron’s recent sequence on being specific. It re-reminded me of some basic things that have been helpful in several recent conversations. In particular, I like the move of having a conversational partner paint a specific, best-case scenario, as a starting point for discussion.
      1. (However, I’m concerned about Less Wrong readers trying this with a spirit of trying to “catch out” one’s conversational partner in inconsistency, instead of trying to understand what their partner wants to say, and thereby shooting themselves in the foot. I think the attitude of looking to “catch out” is usually counterproductive to both understanding and to persuasion. People rarely change their mind when they feel like you have trapped them in some inconsistency, but they often do change their mind if they feel like you’ve actually heard and understood their belief / what they are trying to say / what they are trying to defend, and then provide relevant evidence and argument. In general (but not universally) it is more productive to adopt a collaborative attitude of sincerely trying to help your partner articulate, clarify, and substantiate the point they are trying to make, even if you suspect that their point is ultimately wrong and confused.)
  10. Many (~50%) disagreements evaporate upon operationalization, though this happens less frequently than people think. If you seem to agree about all of the facts, and agree about all specific operationalizations, but nevertheless seem to have differing attitudes about a question, that should be a flag. [I have a post that I’ll publish soon about this problem.]
  11. You should be using paper when Double Cruxing. Keep track of the chain of Double Cruxes, and keep them in view.
  12. People talk past each other all the time, and often don’t notice it. Frequently paraphrasing your current understanding of what your conversational partner is saying helps with this. [There is a lot more to say about this problem, and details about how to solve it effectively.]
  13. I don’t endorse the Double Crux “algorithm” described in the canonical post. That is, I don’t think that the best way to steer a Double Crux conversation is to hew to those 5 steps in that order. Actually finding double cruxes is, in practice, much more complicated, and there are a large number of heuristics and TAPs that make the process work. I regard that algorithm as an early (and self-conscious) attempt to delineate moves that would help move a conversation towards double cruxes.
  14. This is my current best attempt at distilling the core moves that makes Double Crux work, though this leaves out a lot.
  15. In practice, I think that double cruxes most frequently emerge not from people independently generating their own lists of cruxes (though this is useful). Rather, double cruxes usually emerge from the move of “checking if the point that your partner made is a crux for you.”
  16. I strongly endorse facilitation of basically all tricky conversations, Double Crux oriented or not. It is much easier to have a third party track the meta and help steer, instead of the participants, whose working memory is (and should be) full of the object level.
  17. So-called “Triple Crux” is not a feasible operation. If you have more than two stakeholders, have two of them Double Crux, and then have one of those two Double Crux with the third person. Things get exponentially trickier as you add more people. I don’t think that Double Crux is a feasible method for coordinating more than ~6 people.
  18. Double Crux is much easier when both parties are interested in truth-seeking and in changing their mind, and are assuming good faith about the other. But, these are not strict prerequisites, and unilateral Double Crux is totally a thing.
  19. People being defensive, emotional, or ego-filled does not preclude a productive Double Crux. Some particular auxiliary skills are required for navigating those situations, however.
    1. This is a good start for the relevant skills.
  20. If a person wants to get better at Double Crux skills, I recommend they cross-train with IDC. Any move that works in IDC you should try in Double Crux. Any move that works in Double Crux you should try in IDC. This will seem silly sometimes, but I am pretty serious about it, even in the silly-seeming cases. I’ve learned a lot this way.
  21. I don’t think Double Crux necessarily runs into a problem of “black box beliefs”, wherein one can no longer make progress because one or both parties come down to a fundamental disagreement about System 1 heuristics/models that they learned from some training data, but into which they can’t introspect. Almost always, there are ways to draw out those models.
    1. The simplest way to do this (which is not the only or best way, depending on the circumstances) involves generating many examples and testing the “black box” against them. Vary the hypothetical situations to triangulate to the exact circumstances in which the “black box” outputs which suggestions.
    2. I am not making the universal claim that one never runs into black box beliefs that can’t be dealt with.
  22. Disagreements rarely come down to “fundamental value disagreements”. If you think that you have gotten to a disagreement about fundamental values, I suspect there was another conversational tack that would have been more productive.
  23. Also, you can totally Double Crux about values. In practice, you can often treat values like beliefs: often there is some evidence that a person could observe, at least in principle, that would convince them to hold or not hold some “fundamental” value.
    1. I am not making the claim that there are no such thing as fundamental values, or that all values are Double Crux-able.
  24. A semi-esoteric point: cruxes are (or can be) contiguous with operationalizations. For instance, if I’m having a disagreement about whether advertising produces value on net, I might operationalize to “beer commercials, in particular, produce value on net”, which (if I think that operationalization actually captures the original question) is isomorphic to “The value of beer commercials is a crux for the value of advertising. I would change my mind about advertising in general, if I changed my mind about beer commercials.” (This is an evidential crux, as opposed to the more common causal crux. (More on this distinction in future posts.))
  25. People’s beliefs are strongly informed by their incentives. This makes me somewhat less optimistic about tools in this space than I would otherwise be, but I still think there’s hope.
  26. There are a number of gaps in the repertoire of conversational tools that I’m currently aware of. One of the most important holes is the lack of method for dealing with psychological blindspots. These days, I often run out of ability to make a conversation go well when we bump into a blindspot in one person or the other (sometimes, there seem to be psychological blindspots on both sides). Tools wanted, in this domain.

(The Double Crux class)

  1. Knowing how to identify Double Cruxes can be kind of tricky, and I don’t think that most participants learn the knack from the 55 to 70 minute Double Crux class at a CFAR workshop.
  2. Currently, I think I can teach the basic knack (not including all the other heuristics and skills) to a person in about 3 hours, but I’m still playing around with how to do this most efficiently. (The “Basic Double Crux pattern” post is the distillation of my current approach.)
    1. This is one development avenue that would particularly benefit from parallel search: If you feel like you “get” Double Crux, and can identify Double Cruxes fairly reliably and quickly, it might be helpful if you explicated your process.
  3. That said, there are a lot of relevant complements and sub-skills to Double Crux, and to bridging disagreements more generally.
  4. The most important function of the Double Crux class at CFAR workshops is teaching and propagating the concept of a “crux”, and to a lesser extent, the concept of a “double crux”. These are very useful shorthands for one’s personal thinking and for discourse, which are great to have in the collective lexicon.

(Some other things)

  1. Personally, I am mostly focused on developing deep methods (perhaps for training high-expertise specialists) that increase the range of problems (in this case, disagreements) that the x-risk ecosystem can solve at all. I care more about this goal than developing shallow tools that are useful “out of the box” for smart non-specialists, or in trying to change the conversational norms of various relevant communities (though both of those are secondary goals).
  2. I am highly skeptical of teaching many, or most, of the important skills for bridging deep disagreement via anything other than ~one-on-one, in-person interaction.
  3. As such, I am publishing my drafts of Double Crux stuff here, over the next few weeks. But making writeups for the public internet, particularly of content that I don’t yet know how to teach via writing, is not a priority for me.

I have a standing offer to facilitate conversations and disagreements (Double Crux or not) for rationalists and EAs. Email me at eli [at] rationality [dot] org if that’s something you’re interested in.

On Double Crux tests and tournaments

Most of the tests I’ve heard people pitch for DC don’t seem very valuable to me, and I want to at least gesture at why.

Other folks seem to be thinking of Double Crux as a complete method, to be directly compared with other methods: “which one works better”. I think of Double Crux as one (very important) pattern in an ensemble for the overall goal of bridging disagreements. “Testing Double Crux”, as I often hear people talk about it, sounds to me a little like “testing bank shots” in basketball: it is clearly useful sometimes, it isn’t always the right thing to go for, and it depends heavily on personal skill.

I think that example overstates it somewhat: Double Crux is more of a broad framework for disagreement bridging than bank shots are for basketball. And that’s not to say that you can’t test bank shots: it’s plausible that there are superstitions about it, and it isn’t as effective as many practitioners believe. But the value of information seems lower to me (at least at this stage, where approximately no one has put in more than 20 hours explicitly training disagreement bridging, compared to basketball, which has hundreds of highly skilled experts).

I would be more excited about organizing a “disagreement resolution tournament”, where experts who have developed their art and trained to excellence compete, rather than (for instance) a setup where we give 20 undergrads a 30 minute long double crux lecture with 30 minutes of practice, and compare them to a control group.

(That second thing isn’t useless, but I care a lot less about developing shallow tools that are helpful for ~0-skilled folks, out of the box, than I do about deep experts who increase the range of problems (in this case, disagreements) that humanity / the x-risk ecosystem can solve at all.)

The logistics of such a tournament seem hard to make work, because there’s not an obvious way to standardize disagreements to resolve, and in practice there are very few highly skilled experts of differing schools. So the value of information in 2019 still seems low. But it seems more promising than most of the tests I hear proposed.

Using the facilitator to make sure that each person’s point is held

[Epistemic status: This is a strategy that I know works well from my own experience, but also depends on some prereqs.

I guess this is a draft for my Double Crux Facilitation sequence.]

Followup to: Something simple to try in conversations

Related to: Politics is the Mind Killer, Against Disclaimers

Here’s a simple model that is extremely important to making difficult conversations go well:

Sometimes, when a person is participating in a conversation, or an argument, they will be holding onto a “point” that they want to convey.

For instance…

  • A group is deciding which kind of air conditioner to get, and you understand that one brand is much more efficient than the others, for the same price.
  • You’re listening to a discussion between two intellectuals who you can tell are talking past each other, and you have the perfect metaphor that will clarify things for both of them.
  • Your startup is deciding how to respond to an embarrassing product failure, and one of the cofounders wants to release a statement that you think will be off-putting to many of your customers.

As a rule, when a person is “holding onto” a point that they want to make, they are unable to listen well.

The point that a person wants to make relates to something that’s important to them. If it seems that their conversational-partners are not going to understand or incorporate that point, that important value is likely going to be lost. Reasonably, this entails a kind of anxiety.

So, to the extent that it seems to you that your point won’t be heard or incorporated, you’ll agitatedly push for airtime, at the expense of good listening. Which, unfortunately, results in a coordination problem of each person pushing to get their point heard and no one listening. Which, of course, makes it more likely that any given point won’t be heard, triggering a positive feedback loop.

In general, this means that conversations are harder to the degree that…

  1. The topic matters to the participants.
  2. The participants’ visceral expectation is that they won’t be heard.

(Which is a large part of the reason why difficult conversations get harder as the number of participants increases. More people means more points competing to be heard, which exacerbates the death spiral.)


I think this goes a long way towards explicating why politics is a mind killer. Political discourse is a domain which…

  1. Matters personally to many participants, and
  2. Includes a vast number of “conversational participants”,
  3. Who might take unilateral action, on the basis of whatever arguments they hear, good or bad.

Given that setup, it is quite reasonable to treat arguments as soldiers. When you see someone supporting, or even appearing to support, a policy or ideology that you consider abhorrent or dangerous, there is a natural and reasonable anxiety that the value you’re protecting will be lost. And there is a natural (if usually poorly executed) desire to correct the misconception in the common knowledge before it gets away from you. Or failing that, to tear down the offending argument / discredit the person making it.

(To see an example of the thing that one is viscerally fearing, see the history of Eric Drexler’s promotion of nanotechnology. Drexler made arguments about nanotech which he hoped would direct resources in such a way that the future could be made much better. His opponents attacked strawmen of those arguments. The conversation “got away” from Drexler, and the whole audience discounted the ideas he supported, thus preventing any progress towards the potential future that Drexler was hoping to help bring into being.

I think the visceral fear of something like this happening to you is what motivates “treating arguments as soldiers”.)

End digression

Given this, one of the main things that needs to happen to make a conversation go well, is for each participant to (epistemically!) alieve that their point will be gotten to and heard. Otherwise, they can’t be expected to put it aside (even for a moment) in order to listen carefully to their interlocutor (because doing so would increase the risk of their point in fact not being heard).

When I’m mediating conversations, one strategy that I employ to facilitate this is to use my role as the facilitator to “hold” the points of both sides. That is (sometimes before the participants even start talking to each other), I’ll first have each one (one at a time) convey their point to me. And I don’t go on until I can pass the ITT of that person’s point, to their (and my) satisfaction.

Usually, when I’m able to pass the ITT, there’s a sense of relief from that participant. They now know that I understand their point, so whatever happens in the conversation, it won’t get lost or neglected. Now, they can relax and focus on understanding what the other person has to say.

Of course, with sufficient skill, one of the participants can put aside their point (before it’s been heard by anyone) in order to listen. But that is often asking too much of your interlocutors, because doing the “putting aside” motion, even for a moment is hard, especially when what’s at stake is important. (I can’t always do it.)

Outsourcing this step to the facilitator is much easier, because the facilitator has less that is viscerally at stake for them (and has more metacognition to track the meta-level of the conversation).

I’m curious if this is new to folks or not. Give me feedback.


Something simple to try in conversations

Last night, I was outlining conversational techniques. By “conversational technique” I mean things like “ask for an example / generate a hypothesis”, “repeat back, in your own words, what the other person just said”, “consider what would make you change your mind”, etc. and the times when it would be useful to use them, so that I could make more specific Trigger Action Plans. I quickly noticed a way of carving up the space which seemed useful to me and potentially interesting to my (currently non-existent) readers.

In a conversation, second to second, you may be trying to understand what another person is saying, or you may be trying to help them understand what you are trying to convey. There are perhaps some other possibilities, such as trying to figure out a new domain together, but even then, at any given moment one of you is likely to be explaining and the other listening.

It seems quite useful to me to track which you are aiming to do at any given moment: understand, or get someone to understand.

Being aware of who is doing what allows two conversationalists to coordinate, verbally and explicitly, if need be. A conversation is apt to go better when participant A is focused on listening while participant B is focused on explicating, and vice versa. Discussions often become less manageable when both parties are too busy explaining to listen.

Before I start training more specific conversational TAPs, I’ve started paying attention to which of the two I’m doing second to second.