Using the facilitator to make sure that each person’s point is held

[Epistemic status: This is a strategy that I know works well from my own experience, but also depends on some prereqs.

I guess this is a draft for my Double Crux Facilitation sequence.]

Followup to: Something simple to try in conversations

Related to: Politics is the Mind Killer, Against Disclaimers

Here’s a simple model that is extremely important to making difficult conversations go well:

Sometimes, when a person is participating in a conversation or an argument, they will be holding onto a “point” that they want to convey.

For instance…

  • A group is deciding which kind of air conditioner to get, and you understand that one brand is much more efficient than the others, for the same price.
  • You’re listening to a discussion between two intellectuals who you can tell are talking past each other, and you have the perfect metaphor that will clarify things for both of them.
  • Your startup is deciding how to respond to an embarrassing product failure, and one of the cofounders wants to release a statement that you think will be off-putting to many of your customers.

As a rule, when a person is “holding onto” a point that they want to make, they are unable to listen well.

The point that a person wants to make relates to something that’s important to them. If it seems that their conversational partners are not going to understand or incorporate that point, that important value is likely to be lost. Reasonably, this entails a kind of anxiety.

So, to the extent that it seems to you that your point won’t be heard or incorporated, you’ll agitatedly push for airtime, at the expense of good listening. Which, unfortunately, results in a coordination problem of each person pushing to get their point heard and no one listening. Which, of course, makes it more likely that any given point won’t be heard, triggering a positive feedback loop.

In general, this means that conversations are harder to the degree that…

  1. The topic matters to the participants.
  2. The participants’ visceral expectation is that they won’t be heard.

(Which is a large part of the reason why difficult conversations get harder as the number of participants increases. More people means more points competing to be heard, which exacerbates the death spiral.)


I think this goes a long way towards explicating why politics is the mind killer. Political discourse is a domain which…

  1. Matters personally to many participants, and
  2. Includes a vast number of “conversational participants”,
  3. Who might take unilateral action, on the basis of whatever arguments they hear, good or bad.

Given that setup, it is quite reasonable to treat arguments as soldiers. When you see someone supporting, or even appearing to support, a policy or ideology that you consider abhorrent or dangerous, there is a natural and reasonable anxiety that the value you’re protecting will be lost. And there is a natural (if usually poorly executed) desire to correct the misconception in the common knowledge before it gets away from you. Or failing that, to tear down the offending argument / discredit the person making it.

(To see an example of the thing that one is viscerally fearing, see the history of Eric Drexler’s promotion of nanotechnology. Drexler made arguments about nanotech, which he hoped would direct resources in such a way that the future could be made much better. His opponents attacked strawmen of those arguments. The conversation “got away” from Drexler, and the whole audience discounted the ideas he supported, thus preventing any progress towards the potential future that Drexler was hoping to help bring into being.

I think the visceral fear of something like this happening to you is what motivates “treating arguments as soldiers.”)

End digression

Given this, one of the main things that needs to happen to make a conversation go well is for each participant to (epistemically!) alieve that their point will be gotten to and heard. Otherwise, they can’t be expected to put it aside (even for a moment) in order to listen carefully to their interlocutor (because doing so would increase the risk of their point in fact not being heard).

When I’m mediating conversations, one strategy that I employ to facilitate this is to use my role as the facilitator to “hold” the points of both sides. That is (sometimes before the participants even start talking to each other), I’ll first have each one (one at a time) convey their point to me. And I don’t go on until I can pass the ITT (Ideological Turing Test) of that person’s point, to their (and my) satisfaction.

Usually, when I’m able to pass the ITT, there’s a sense of relief from that participant. They now know that I understand their point, so whatever happens in the conversation, it won’t get lost or neglected. Now, they can relax and focus on understanding what the other person has to say.

Of course, with sufficient skill, one of the participants can put aside their point (before it’s been heard by anyone) in order to listen. But that is often asking too much of your interlocutors, because doing the “putting aside” motion, even for a moment, is hard, especially when what’s at stake is important. (I can’t always do it.)

Outsourcing this step to the facilitator is much easier, because the facilitator has less viscerally at stake (and has more metacognition free to track the meta-level of the conversation).

I’m curious if this is new to folks or not. Give me feedback.


Some possible radical changes to the world

Strong AI displaces humans as the dominant force on the planet.

A breakthrough is made in the objective study of meditation, which makes triggering enlightenment much easier. Millions of people become enlightened.

Narrow AI solves protein folding, and Atomically Precise Manufacturing (nanotech) becomes possible and affordable. (Post-scarcity?)

The existing political order collapses.

The global economy collapses, supply chains break down. (Is this a thing that could happen?)

Civilization abruptly collapses.

Nuclear war between two or more nuclear powers.

A major terrorist attack pushes the US into heretofore unprecedented levels of surveillance and law-enforcement.

Sufficient progress is made on human health extension that many powerful people anticipate being within range of longevity escape velocity.

Genetic engineering (of one type or another) gives rise to a generation that includes a large number of people who are much smarter than the historical human distribution.

Advanced VR?

Significant rapid global climate change.


Some varieties of feeling “out of it”

[Epistemic status: phenomenology. I don’t know if this is true for anyone other than me. Some of my “responses” are wrong, but I don’t know which ones yet.

Part of some post on the phenomenology and psychology of productivity. There are a lot of stubs, places for me to write more.

This is badly organized. A draft.]

One important skill for maintaining productivity through phenomenology is distinguishing between the different kinds of low-energy states. I think that people typically conflate a large number of mental states under the label of “tired” or that of “don’t feel like working.” The problem is that the phenomenology of these different states points to different underlying mechanisms, and each one should be responded to differently.

If you can distinguish between these flavors of experience, then each one can be the trigger for a TAP (trigger-action plan), to bring you back to a more optimal state.

I don’t think I’ve learned to make all relevant state-distinctions here, but these are some that I can recognize.

Sleep deprivation: Feels like a kind of buzzy feeling in my head that goes with “low energy”.

The best response is a nap. If that doesn’t work, then maybe try a stimulant. You can also just wait: after a while your circadian system will be strongly countering your sleep pressure, and you’ll feel more alert.

Fuzzy-headed: Often from overeating, or not having gotten enough physical activity in the past few days.

The best response is exercise. (Maybe exercise intense enough that you get an endorphin response?)

Hungry: You probably know what this is. I think maybe the best response is to ignore it?

Running out of thinking-steam due to need to eat: This feels distinctly different from the one above. Sort of like my thoughts running out, due to something like an empty head?

Usually, eating entails some drop in energy level, but if you time it right, both not eating and eating can be energizing. Though I’ve never done this for long periods, and I don’t know if it’s sustainable.

Cognitive exhaustion: This is the one I understand the least. I don’t know what it is. Maybe needing to process, or consolidate info, or do subconscious processing? I don’t know if emotional exhaustion is meaningfully different (my guess is no?).

The default thing to do here is to take a break, but I’m not sure if that’s the best thing to do. I think maybe you can just switch tasks and get the same effect?


I’ll write about aversions more sometime, because they are the ones that are most critical to productivity. Aversions come in two different types: Anxiety/Fear/Stress aversions and “glancing off” aversions.

Anxiety/Fear/Stress/Belief Aversion: This sort of aversion is almost always accompanied by a tension-feeling in the gut and stems from some flavor-of-fear about the thing I’m averse to. A common template for the fear is “I’ve already failed / I’ve already fucked up.” Another is a fear of being judged.

The response to this one is to use Focusing to bring the concern that your body is holding on to into conscious attention, and to figure out a way to handle it.

“Glancing off” Aversions: This is closer to the feeling of slipping off a task, or “just not feeling like doing it”, or finding your attention going elsewhere. This is often due to a task being aversive not due to its goal-relevant qualities, but due to its ambiguity, or its being too big to hold in mind.

The response, as I’ll write about later, is to chunk out the smallest concrete next action and to visualize doing it.

Ego depletion: Feels sort of like my brain is tired? This feels kind of like cognitive exhaustion, and they might be the same thing. I think this is due to other subsystems in me wanting something other than work.

The correct response, I think, is to take a break and do whatever I feel like doing in that moment, though I don’t have a good understanding of mental energy, and it may be that I’m supposed to do something that has clear and satisfying reward signals? (I don’t think that’s right, though. Feels a bit too mechanical.)

Urgy-ness: I’ll also have to write more about this another time. This is a felt compulsion toward short-term gratification, often of several varieties in sequence, without satisfaction. This is often a second-order response to an anxiety or fear aversion, and can also be about some unhandled goal or an unmet need. See also: the reactiveness scalar (which I also haven’t written about yet.)

Response: exercise, then Focusing

I wrote this fast. Questions are welcome.



RAND needed the “say oops” skill

[Epistemic status: a middling argument]

A few months ago, I wrote about how RAND and the “Defense Intellectuals” of the Cold War represent another precious datapoint of “very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk.”

Since then I spent some time doing additional research into what cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea being that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed.

However, this proved somewhat less fruitful than I was hoping, and I’ve put it aside for the time being. I might come back to it in the coming months.

It does seem worth sharing at least one relevant anecdote, along with some analysis, from Daniel Ellsberg’s excellent book, The Doomsday Machine, given that I’ve already written it up.

The missile gap

In the late nineteen-fifties it was widely understood that there was a “missile gap”: that the Soviets had many more ICBMs (intercontinental ballistic missiles armed with nuclear warheads) than the US.

Estimates varied widely on how many missiles the Soviets had. The Army and the Navy gave estimates of about 40 missiles, which was about at parity with the US’s strategic nuclear force. The Air Force and the Strategic Air Command, in contrast, gave estimates of as many as 1000 Soviet missiles, 20 times more than the US’s count.

(The Air Force and SAC were incentivized to inflate their estimates of the Russian nuclear arsenal, because a large missile gap implied the need for more nuclear weapons, which would be under SAC control and entail increases in the Air Force budget. Conversely, the Army and Navy were incentivized to lowball their estimates, because a comparatively weaker Soviet nuclear force made conventional military forces more relevant and implied allocating budget-resources to the Army and Navy.)

So there was some dispute about the size of the missile gap, including an unlikely possibility of nuclear parity with the Soviet Union. Nevertheless, the Soviets’ nuclear superiority was the basis for all planning and diplomacy at the time.

Kennedy campaigned on the basis of correcting the missile gap. Perhaps more critically, all of RAND’s planning and analysis was concerned with the possibility of the Russians launching a nearly-or-actually debilitating first or second strike.

The revelation

In 1961 it came to light, on the basis of new satellite photos, that all of these estimates were dead wrong. It turned out that the Soviets had only 4 nuclear ICBMs, one tenth as many as the US controlled.

The importance of this development should be emphasized. It meant that several of the fundamental assumptions of US nuclear planners were in error.

First of all, it meant that the Soviets were not bent on world domination (as had been assumed). Ellsberg says…

Since it seemed clear that the Soviets could have produced and deployed many, many more missiles in the three years since their first ICBM test, it put in question—it virtually demolished—the fundamental premise that the Soviets were pursuing a program of world conquest like Hitler’s.

That pursuit of world domination would have given them an enormous incentive to acquire at the earliest possible moment the capability to disarm their chief obstacle to this aim, the United States and its SAC. [That] assumption of Soviet aims was shared, as far as I knew, by all my RAND colleagues and with everyone I’d encountered in the Pentagon:

The Assistant Chief of Staff, Intelligence, USAF, believes that Soviet determination to achieve world domination has fostered recognition of the fact that the ultimate elimination of the US, as the chief obstacle to the achievement of their objective, cannot be accomplished without a clear preponderance of military capability.

If that was their intention, they really would have had to seek this capability before 1963. The 1959–62 period was their only opportunity to have such a disarming capability with missiles, either for blackmail purposes or an actual attack. After that, we were programmed to have increasing numbers of Atlas and Minuteman missiles in hard silos and Polaris sub-launched missiles. Even moderate confidence of disarming us so thoroughly as to escape catastrophic damage from our response would elude them indefinitely.

Four missiles in 1960–61 was strategically equivalent to zero, in terms of such an aim.

This revelation about Soviet goals was not only of obvious strategic importance; it also took the wind out of the ideological motivation for this sort of nuclear planning. As Ellsberg relays early in his book, many, if not most, RAND employees were explicitly attempting to defend the US and the world from what was presumed to be an aggressive communist state, bent on conquest. This just wasn’t true.

But it had even more practical consequences: this revelation meant that the Russians had no first strike (or for that matter, second strike) capability. They could launch their ICBMs at American cities or military bases, but such an attack had no chance of debilitating US second strike capacity. It would unquestionably trigger a nuclear counterattack from the US, which, with its 40 missiles, would be able to utterly annihilate the Soviet Union. The only effect of a Russian nuclear attack would be to doom their own country.

[Eli’s research note: What about all the Russian planes and bombs? ICBMs aren’t the only way of attacking the US, right?]

This means that the primary consideration in US nuclear war planning, at RAND and elsewhere, was fallacious. The Soviets could not meaningfully destroy the US.

…the estimate contradicted and essentially invalidated the key RAND studies on SAC vulnerability since 1956. Those studies had explicitly assumed a range of uncertainty about the size of the Soviet ICBM force that might play a crucial role in combination with bomber attacks. Ever since the term “missile gap” had come into widespread use after 1957, Albert Wohlstetter had deprecated that description of his key findings. He emphasized that those were premised on the possibility of clever Soviet bomber and sub-launched attacks in combination with missiles or, earlier, even without them. He preferred the term “deterrent gap.” But there was no deterrent gap either. Never had been, never would be.

To recognize that was to face the conclusion that RAND had, in all good faith, been working obsessively and with a sense of frantic urgency on a wrong set of problems, an irrelevant pursuit in respect to national security.

This realization invalidated virtually all of RAND’s work to date. Virtually every analysis, study, and strategy had been useless, at best.

The reaction to the revelation

How did RAND employees respond to this revelation that their work had been completely off base?

That is not a recognition that most humans in an institution are quick to accept. It was to take months, if not years, for RAND to accept it, if it ever did in those terms. To some degree, it’s my impression that it never recovered its former prestige or sense of mission, though both its building and its budget eventually became much larger. For some time most of my former colleagues continued their focus on the vulnerability of SAC, much the same as before, while questioning the reliability of the new estimate and its relevance to the years ahead. [Emphasis mine]

For years the specter of a “missile gap” had been haunting my colleagues at RAND and in the Defense Department. The revelation that this had been illusory cast a new perspective on everything. It might have occasioned a complete reassessment of our own plans for a massive buildup of strategic weapons, thus averting an otherwise inevitable and disastrous arms race. It did not; no one known to me considered that for a moment. [Emphasis mine]

According to Ellsberg, many at RAND were unable to adapt to the new reality and continued (fruitlessly) with what they were doing, as if by inertia, when the thing that they needed to do (to use Eliezer’s turn of phrase) was to “halt, melt, and catch fire.”

This suggests that one failure of this ecosystem, which was working in the domain of existential risk, was a failure to “say oops”: to notice a mistaken belief, concretely acknowledge that it was mistaken, and to reconstruct one’s plans and worldviews.

Relevance to people working on AI safety

This seems to be at least some evidence (though only weak evidence, I think) that we should be cautious of this particular cognitive failure ourselves.

It may be worth rehearsing the motion in advance: how will you respond when you discover that a foundational crux of your planning is actually a mirage, and the world is different than it seems?

What if you discovered that your overall approach to making the world better was badly mistaken?

What if you received a strong argument against the orthogonality thesis?

What about a strong argument for negative utilitarianism?

I think that many of the people around me have effectively absorbed the impact of a major update at least once in their life, on a variety of issues (religion, x-risk, average vs. total utilitarianism, etc.), so I’m not that worried about us. But it seems worth pointing out the importance of this error mode.

A note: Ellsberg relays later in the book that, during the Cuban Missile Crisis, he perceived Kennedy as offering baffling terms to the Soviets: terms that didn’t make sense in light of the actual strategic situation, but might have been sensible under the premise of a Soviet missile gap. Ellsberg wondered, at the time, whether Kennedy had also failed to propagate the update regarding the actual strategic situation.

I believed it very unlikely that the Soviets would risk hitting our missiles in Turkey even if we attacked theirs in Cuba. We couldn’t understand why Kennedy thought otherwise. Why did he seem sure that the Soviets would respond to an attack on their missiles in Cuba by armed moves against Turkey or Berlin? We wondered if—after his campaigning in 1960 against a supposed “missile gap”—Kennedy had never really absorbed what the strategic balance actually was, or its implications.

I mention this because additional research suggests that this is implausible: that Kennedy and his staff were aware of the true strategic situation, and that their planning was based on that premise.