The seed of a theory of triggeredness

[epistemic status: not even really a theory, just some observations, and self-observations at that. Unedited.]

Related: “Flinching away from truth” is often about *protecting* the epistemology

“Triggered” seems to be a pretty specific state that has something of rage, something of panic, and a general sort of “closing in” of experience. I think it might be a pointer to something important (I postulate a related triad of triggeredness, trauma, and blindspots, and blindspots seem like a crucial thing to have a better grasp on). So I’ve been paying attention to my own triggeredness.

I’ve noticed that I feel triggered in only two situations.

Adversarial forces

The first is when there’s something that I think is important, but I anticipate adversarial forces, either in me or external to me, that are threatening to erode my commitment to that important thing.

For instance, if I have a standard that I’m trying to hold to, but I expect (or project) that someone is about to try to argue me out of it, or social-pressure me out of it. (Probably, it is necessary that I be unsteady in my commitment to that standard, in such a way that some part of me expects me to be improperly argued out of it, and that something important will be lost? If I were confident in my view, or confident in my ability to respond and update sensibly, there wouldn’t be an issue.)

An example: If someone makes even mild, good-natured attempts to convince me that I should impair my cognition, or drink alcohol to relax, I might become filled with triggered rage.

[This is not quite a real example for me, but it is very close to a real example. I, in fact, have trouble writing a real example, because my every attempt to fill in what they are suggesting I do produces obvious strawmen that don’t come close to passing the ITT. I get things like “meld with the crowd” or “surrender my independence” and start feeling slightly triggered. I think I can’t currently see the real thing clearly.]

Another example: I think that I should only teach CFAR units that I personally use. I agreed to teach Aversion Factoring, explicitly with the condition that I say clearly that I used to use it, but now use Focusing with a dash of IDC for processing aversions. Someone who wasn’t aware of that asked (in a way that I guess felt pressure-y to me?) if they “could convince me not to tell the participants that I use Focusing/IDC instead?” I got slightly triggered and snapped back, “absolutely not” (in a kind of mean way).

Impossibilities of crucial communication

The other is when there’s something important to protect, but I don’t expect to be able to communicate what it is to the relevant actors, perhaps because the true reasons don’t seem defensible.

For instance, if I’m on a team and we’re considering bringing on a new member. Most people on the team feel excited about the new guy. I don’t want him to join, but despair of compelling them. (It feels to me like the excited people are being reckless with our team and I’m going to end up leaving it.) I feel a triggered panic.

This impossibility of communication is often due to some conflation of separate things, or bucket error, either in me, or in others.

Example: a person is considering taking some action, X. I think X is doomed to fail, but it is nearby to action Y, which I think is important or valuable. I’m afraid that the person will try X and it will go poorly, and onlookers will not be able to distinguish X and Y, and so everyone gives up on Y as untenable. If I could convey that X and Y were meaningfully distinct, then there wouldn’t be an issue, and I wouldn’t need to be triggered about it.

Common thread

There’s a thread in both of these of “something important to me is threatened because I can’t articulate what it is or name it right.”

Why does outlining my day in advance help so much?

[epistemic status: Hypothesizing. Pretty stream of consciousness. I’m rereading Thinking, Fast and Slow right now, and that has clearly been influencing my thinking.]

Advance outlines

More than a year ago, I read Cal Newport’s Deep Work: Rules for Focused Success in a Distracted World. Overall, I wasn’t that impressed with it: it seemed to be mostly fluff. There was one practice that I picked up from that book, however, that made the time cost of reading it (actually, listening to the audiobook) worthwhile.

Newport recommends outlining your day, hour by hour, before the day starts. This outline is not intended to be a rigid schedule, however: you’re allowed to deviate from the plan. However, if you do decide to change what you do in a given time block, you have to put that on the outline, and also reschedule the rest of your day in light of that change.

(It’s possible that I’m misremembering the actual procedure that Newport recommends. I think that his version has two side by side columns, one with a pre-made outline and the other to be filled in with how you actually spend your time? What I do, at least, is fill out a new column every time I make a decision to deviate from my schedule outline. It looks something like this:

[Image: a photo of one day’s outline, with side-by-side columns of time blocks. [1]]

In practice, I often don’t keep this up for the whole day. For the day shown above, “writing” ended up turning into a debugging meeting with a friend/collaborator, alternating with writing, and then going home to pack. [2] )
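(To make the column bookkeeping concrete, here is a minimal sketch, in Python, of the procedure described above. The block names, times, and the revise helper are hypothetical illustrations of my own, not anything Newport specifies; the only point is that deviating produces a fresh column, one that keeps the hours that already happened and re-lays the remaining tasks over the remaining hours.)

# A hypothetical sketch: each "column" is just an hour-by-hour plan,
# represented as a list of (24h start hour, task) pairs.
def revise(plan, now, new_task):
    """Return a new column: hours before `now` are kept as-is, the current hour
    records the deviation, and the tasks I still intended to do get pushed later."""
    past = [(hour, task) for (hour, task) in plan if hour < now]
    remaining_tasks = [task for (hour, task) in plan if hour >= now]
    new_column = past + [(now, new_task)]
    # Re-schedule the tasks I still intend to do into the hours after `now`.
    for offset, task in enumerate(remaining_tasks, start=1):
        new_column.append((now + offset, task))
    return new_column

if __name__ == "__main__":
    # A made-up morning outline, written before the day starts.
    column_1 = [(8, "math"), (9, "writing"), (10, "writing"), (11, "email")]
    # At 9:00 I decide to take a debugging meeting instead of writing,
    # so I write a second column and everything else slides back an hour.
    column_2 = revise(column_1, now=9, new_task="debugging meeting")
    for hour, task in column_2:
        print(f"{hour:02d}:00  {task}")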

Outlining my day in advance like this has a pretty large effect on “how well my day goes” overall, my subjective sense of my own focus and productivity. The effect is not as large as waking up early and doing Deep Work [3], but it is larger than the effect of a 20-minute meditation. My guess is that the effect is larger than regular exercise, but I’m much less sure of that. (All of these are eyeballed subjective estimates. It’s quite possible that my affect heuristic is failing me here, if my subjective sense of wellbeing does not correlate well with my actually getting things done and moving towards my goals. I really need to figure out some better metrics for my own effectiveness.)

A priori, it’s a bit surprising that writing a schedule that I’m not even going to stick to would have such a large effect. Why would this be?

I don’t know. But here are some hypotheses. These aren’t mutually exclusive. For all I know they all apply. I think at least some of these point at interesting psychological phenomena.

Hypotheses

Hypothesis 1: It causes me to load up my goals and priorities in some kind of short-term memory or background awareness.

This might be subtle; I don’t know. There’s a thing about having my goals “loaded up”, or at hand to me, not far from my thoughts. Sometimes (like after a workshop, and before I have had time to orient) I don’t have my goals loaded up. I’m not taking actions to hit them, and I’m not experiencing any anxiety about them. I might spend the morning (or the day) doing whatever random thing, because I’m something like not tracking / not paying attention to / not primed to pay attention to / not remembering the things that I care about and want to accomplish? [I should probably study this experience more, so that I have a better sense of what’s going on.]

I think that one of the things that’s happening is that the outlining activity causes me to “load up” my goals in short term memory.

Hypothesis 2: It clarifies time scarcity and tradeoffs

There’s a temptation (for me at least) to act as if there’s infinite time. “I do want to write today, but I’ll do it later.” That kind of postponement feels costless, but it really isn’t. Something has to give. The procedure outlined above gives me a much more visceral sense of the scarcity of the time resource, and forces me to confront the tradeoffs. (For instance, I didn’t do math on July 1, I met with Diva instead. But that was a conscious choice.)

Being aware of the limits on my time supports me in spending it well. I’m less apt to waste time if I’m viscerally aware of what that actually costs.

Hypothesis 3: It allows me to rehearse my day / set TAPs / bias later decision moments

There’s something magical about walking through my day in some detail that, for instance, just making a to-do list of three or four priorities doesn’t do.

In order to schedule in blocks like that I have to visualize how my day will go in at least a little detail. And I think that future-pacing my day like that makes it easier to execute.

I’m not quite sure why this is. It might be that the walkthrough lightly sets some TAPs, in particular, TAPs for transitioning between tasks. For instance:

TAP: Finish meeting with Ben -> walk over to my desk, take out “How to Prove It”, and start reading the introduction.

Note that my current procedure does not have me visualizing the scene in detail like that, or explicitly setting TAPs. But maybe something like that is happening subliminally, as I think about how long I need to do a task and where I’ll be at that time of day, etc.

Another model in this vein (or maybe another frame on the same model) is that scheduling introduces a bias or directional tendency to my decision points. Throughout the day, I have a few hundred moments when I need to determine my next action. Those moments include when I feel like getting up from writing to pace, or when I’m deciding if I should go make food right now, or if I’m going to sit down to work on that Python script I was writing, or if I should do Focusing on that thing in my belly.

Such decision points inherently entail ambiguity. Furthermore, there are really a large number of factors to take into account: my energy levels, what I feel like doing, whether I have enough time to make progress on a thing, the nature of the tradeoffs between the various good things that I could do, etc. I have policies and TAPs for making some of these decisions (one wants to live a choice-minimal lifestyle), but most of these moments still entail some level of ambiguity and cognitive effort. And the more of the decision that falls to my current, less reflective self, the more likely I am to follow a path of least resistance: taking a break instead of finishing this post, or doing something good but not crucial.

I think having rehearsed the decision in advance takes some of the load off: there’s a sort of echo of having already chosen. I’ve carved a shallow rut, so that the thing that my more reflective self decided was best to do at this time is the path of least (or lesser) resistance.

Interestingly, this may be the same mechanism as hypothesis 1, except where Hyp 1 is about loading up goals, Hyp 3 is about loading up task-transitions. And the mechanism in question is starting to look suspiciously like priming.

Let’s clarify that claim explicitly: the main reason why prescheduling works is that it briefly puts my attention on my goals and the tasks for achieving them. This leaves a kind of mental “residue” [4]: those goals and actions are more cognitively available. And therefore, those actions are given higher decision weightings at ambiguous decision points. [Plus, it makes time scarcity feel real. (Hyp. 2)]

Next steps

I’m not sure if any of that was even coherent, or, if it is, whether I’ll think that this is correct in a week.

After writing this, it seems like the natural next thing to do is goal-factor. Is there a way that I can get all the benefits of this procedure more cheaply? If I find a strictly better procedure, that’s a win. If I find a procedure that hits some but not all of the benefits, that would give me more data about the psychological structure in this area.

 

Notes

[1] I was nocturnal for this day because I was transitioning in advance of a Europe trip.

[2] I can easily check, because I separately track all my time in Toggl.

[3] I find that my day goes better the earlier I wake up, and that this trend is robust for wake-up times as early as 3:00 AM. It’s really amazing to have long blocks of uninterrupted work time, while it’s dark and the rest of the world is sleeping. Unfortunately, this has the obvious tradeoff of making it hard to meet with / spend time with other humans.

[4] I believe this is a technical term used for the cost of attention switching?

The two-way connection between thought-content and physiological state

[epistemic status: argument, followed by hypothesizing.]

Exercise for state-shifting

Here’s a useful trick for those of you who don’t know of it yet: you can use very brief exercise to quickly shift your physical/mental/emotional state.

Suppose that you’re agitated or anxious or energized about something, but you don’t have time to engage with it at the moment. You’re about to go into an important meeting, and it would be disruptive for you to be experiencing agitation about something unrelated.

One thing that you can do in this scenario is 90 seconds of cardio: do 60 pushups, or do jumping jacks, or sprint. At least in my experience, this disrupts the agitation (clearing my mental palate, as it were), so that I can go in and put my full attention on the meeting.

I recently experienced this on a larger scale, after touching a very deep trigger / trauma for me, and having a more visceral reaction than I’ve yet experienced. I was still very triggered about it and ruminating on it an hour after the initial trigger-event.

The advisor I consulted told me to exhaust myself: to do squats to failure, or to do Tabata sprints. Not having a squat rack available at the time, I went outside and did some (bad) 20-second sprints. I was much calmer by the time I finished.

Implications

In my sleep post from last month, I ended by outlining a very simple model:

I’m awake because my body is physiologically aroused.

…Which is caused by attention being absorbed by something that’s in some way energizing or exciting.

…Which is probably because a goal directed process in me is trying to get something (by ruminating or planning or whatever).

Or, stated visually:

[Diagram: physiological activation, version 1 — the one-way chain described above]

However, the fact that you can use exercise to shift your state suggests that this causal flow is not so simple.

Short, intense, physical exertion is sort of like manually resetting the physiological activation node, by “washing it out” with all the state characteristics implied by exercise (or something).

But the fact that this works, and that (at least sometimes) you don’t immediately go back to ruminating, suggests that the causal connection between mental content and physiological activation can go in both directions: your thoughts can change your level of arousal, and your level of arousal can change your thoughts. Which gives us a causal diagram more like this one:

[Diagram: version 2 — the same chain, with a two-way arrow between mental content and physiological activation]

Elaborating on that model

[Epistemic state: The following is a working hypothesis.]

My current working model has it that you have effectively two “immediate states” or “working memories”: that of your system 2 (that’s the standard one), and that of your system 1 (the felt senses and bodily arousal).

Each one has a limited capacity. Just as you can’t keep track of more than a few ideas at a time, your body can only have one(?) overall physiological state. Otherwise 90 seconds of cardio would not “wipe the slate”.

Each of these “states” can influence the other: Your physiological state can influence your mental content (this happens deliberately when one does Focusing), and your mental content can influence your physiological activation (remembering a task I forgot can induce panic).

More thoughts

I frequently experience myself becoming more activated when I lie down to go to sleep. I hypothesize that when I let my mind wander as I’m falling asleep, I often hit upon either a new exciting idea or some area that I’m anxious or fearful about. This triggers an activation response, and then a positive feedback loop between the two states.

(Notably, distracting myself by, for instance, reading a comic book for a while, allows me to fall asleep. Eating something also helps, and sometimes masturbating. I speculate that distraction is intervening on the mental content, and eating is intervening on my physiological activation, because digestion activates the PSNS. Masturbating might be both?)

 

 

Committed Engagement and the Critical Importance of Ambiguity

[epistemic status: the basic idea has been validated by at least my experience, and it seems to resonate with others. But I’m not confident that I have the right framing or am using the right concepts.]

[Part of my Psychological Principles of Productivity drafts.]

In this essay, I want to point out a fact about human psychology, and some interventions based on that fact.

First, an example. There’s a rule that my mom taught me for cleaning my room when I was growing up: never pick up an object more than once. Once you have an item in your hand, you must put it where it goes; never put it back down where you found it. The reason for this is that you otherwise tend to get stuck in a loop: you pick up a thing, are not quite sure where it goes, and so pick up another thing. Finding yourself in the same situation, you pick up the first thing again.

In my adult life, I sometimes find myself in a similar situation when processing email. I’m going through my inbox, and I get to an email that I’m not quite sure how to respond to, and I notice myself flicking back to my inbox without having made a decision about how to reply.

There’s an important truth about human psychology in this phenomenon: ambiguity, that is, unclarity about specific next actions, is micro-hedonically aversive, and the human mind tends to flinch away from it.

Productivity

In fact, I think that ambiguity is the primary cause of ugh fields that can curtail my (your?) productivity.

Committed engagement

That’s because resolving ambiguity, clarifying what your options are, and choosing which one to commit to, is hard work. It requires conscious, System-2-style effort. For most of us, being so-called “knowledge workers”, resolving ambiguity is the bulk of our work. The hard part is figuring out what to do. Doing it is often comparatively easy.

Often, when Aversion Factoring, I find that the only reason why I don’t feel like doing something is the effort of chunking out what exactly the next actions are. After I’ve done that, I have no aversion at all.

Accordingly, I now think of processing my various inboxes (and particularly the inbox of reminders that I leave for myself), not as a low-energy, time-limited [as opposed to energy-limited] task, but as a key component of the work that I do.

And when I’m processing inboxes, I step into a mode that I call committed engagement: I make it my intention to plow through and empty the inbox. Given that I’m going to get to and deal with every item, there’s no incentive to look at a thing and put it back. In committed engagement, the natural thing to do with an item is figure out what needs to be done with it. (Committed engagement is an energized state, with some pressure to get through the task rapidly.)

This is in contrast to a sort of “shallow engagement” in which I skim over the inbox, clicking on things that seem quick or interesting, and then marking them as unread again, if they require even a little bit of thought.

Simulation for resolving ambiguity

I have a variety of useful TAPs based on this principle that my mind avoids ambiguity. When I feel averse to a thing in a way that has the flavor of ambiguity (which I do have specific phenomenology for), I visualize the very first smallest steps of the action in my Inner Simulator, which often lowers the activation energy so substantially that it becomes basically easy to take the action.

For instance, Trigger: “I should start writing, but I don’t feel like it” -> Action: “Visualize opening up my laptop” tends to automatically lead to opening up my laptop and beginning to write.

If I know that I should strength train, but I don’t feel like it, I’ll simulate concretely standing up, walking to the elevator, and pushing the button. Which, in most cases, is sufficient to cause me to get up, walk over, and push the button. And once I’m in the elevator, I’m on my way to the gym.

I think of this as taking advantage of “the smallest atomic action” principle of setting good TAPs. But instead of setting a plan for the future, you’re “setting a plan” for the very next moment. It’s almost humorous how much motivation cascades from merely imagining a simple atomic action.

Similarly, if I’m lost in working on a problem, I might write down the first step, or the main blocker, just to make it clear to me what that is. From there, the next actions are often clear and I can make progress.

Epistemology

This psychological fact is extremely important for productivity, but it is also relevant to epistemology. Your mind is averse to ambiguity. When considering a problem, you have a tendency to deflect away from the parts that are non-concrete, which are often where the important thinking is to be done.

This is at least a part of the reason why “rubber ducking” or talking with a friend is often helpful: stating your problem out loud forces you to clarify the points where you have ambiguity, which you might otherwise skim over.

A shout out

I think my mom probably learned that rule from David Allen (whom she met in person), or at least from his excellent book, Getting Things Done. He says:

You may find you have a tendency, while processing your in-basket, to pick something up, not know exactly what you want to do about it, and then let your eyes wander onto another item farther down the stack and get engaged with it. That item may be more attractive to your psyche because you know right away what to do with it – and you don’t feel like thinking about what’s in your hand. This is dangerous territory. What’s in your hand is likely to land on a “hmppphhh” stack on the side of your desk because you become distracted by something easier, more important, or more interesting below it.

Furthermore, this idea that clarifying your work, and resolving your “stuff” into next actions is the bulk of one’s intellectual labor, is an important theme of the book.

. . .

Keep this in mind: Your mind flinches away from ambiguity. But you can learn to notice, and counter-flinch.

 

Related: Microhedonics, Attention, Visualization

References: Getting Things Done: The Art of Stress-Free Productivity

Culture vs. Mental Habits

[epistemic status: personal view of the rationality community.]

In this “post”, I’m going to outline two dimensions on which one could assess the rationality community and the success of the rationality project. This is hardly the only possible breakdown, but it is one that underlies a lot of my thinking about rationality community building, and what I would do if I decided rationality community building were a strong priority.

I’m going to call those two dimensions Culture and Mental Habits. As we’ll see, these are not cleanly distinct categories, and they tend to bleed into each other. But they have separate enough focuses that one can meaningfully talk about the differences between them.

Culture

By “culture” I mean something like…

  • Which good things are prioritized?
  • Which actions and behaviors are socially rewarded?
  • Which concepts and ideas are in common parlance?

Culture is about groups of people, what those groups share and what they value.

My perception is that on this dimension, the Bay area rationality community has done extraordinarily well.

Truth-seeking is seen as paramount: individuals are socially rewarded for admitting ignorance and changing their minds. Good faith and curiosity about other people’s beliefs is common.

Analytical and quantitative reasoning is highly respected, and increasingly, so is embodied intuition.

People get status for doing good scholarship (e.g. Sarah Constantin), for insightful analysis of complicated situations (e.g. Scott Alexander), or for otherwise producing good or interesting intellectual content (e.g. Eliezer).

Betting (putting your money where your mouth is) is socially-encouraged. Concepts like “crux” and “rationalist taboo” are well known enough to be frequently invoked in conversation.

Compared to the backdrop of mainline American culture, where admitting that you were wrong means losing face, and trying to figure out what’s true is secondary (if not outright suspicious, since it suggests political non-allegiance), the rationalist bubble’s culture of truth seeking is an impressive accomplishment.

Mental habits

For lack of a better term, I’m going to call this second dimension “mental habits” (or perhaps to borrow Leverage’s term “IPs”).

The thing that I care about in this category is “does a given individual reliably execute some specific cognitive move, when the situation calls for it?” or “does a given individual systematically avoid a given cognitive error?”

Some examples, to gesture at what I mean:

  • Never falling prey to the planning fallacy
  • Never falling prey to sunk costs
  • Systematically noticing defensiveness and deflinching or a similar move
  • Systematically noticing and responding to rationalization phenomenology
  • Implementing the “say oops” skill, when new evidence comes to light that overthrows an important position of yours
  • Systematic avoidance of the sorts of errors I outline in my Cold War Cognitive Errors investigation (this is the only version that is available at this time).

The element of reliability is crucial. There’s a way that culture is about “counting up” (some people know concept X, and use it sometimes) and mental habits is about “counting down” (each person rarely fails to execute relevant mental process Y).

The reliability of mental habits (in contrast with some mental motion that you know how to do and have done once or twice) is crucial, because it puts one in a relevantly different paradigm.

For one thing, there’s a frame under which rationality is about avoiding failure modes: how to succeed in a given domain depends on the domain, but rationality is about how not to fail, generally. Under that frame, executing the correct mental motion 10% of the time is much less interesting and impressive than executing it every time (or even 90% of the time).

If the goal is to avoid the sorts of errors in my cold war post, then it is not even remotely sufficient for individuals to be familiar with the patches: they have to reliably notice the moments of intervention and execute the patches, almost every time, in order to avoid the error in the crucial moment.

Furthermore, systematic execution of a mental TAP allows for more complicated cognitive machines. Lots of complex skills depend on all of the pieces of the skills working.

It seems to me that, along this dimension, the rationality community has done dismally.

Eliezer wrote about mental habits of this sort in the Sequences and in his other writing, but when I consider even very advanced members of my community, I think very few of them systematically notice rationalization, or will reliably avoid sunk costs, or consistently respond to their own defensiveness.

I see very few people around me who explicitly attempt to train 5-second or smaller rationality skills. (Anna and Matt Fallshaw are exceptions who come to mind).

Anna gave a talk at the CFAR alumni reunion this year, in which she presented two low-level cognitive skills of that sort. There were about 40 people in the room watching the lecture, but I would be mildly surprised if even 2 of those people reliably execute the skills described, in the relevant-trigger situation, 6 months from that talk.

But I can imagine a nearby world, where the rationality community was more clearly a community of practice, and most of the people in that room would watch that talk and then train the cognitive habit to that level of reliability.

This is not to say that fast cognitive skills of this sort are what we should be focusing on. I can see arguments that culture really is the core thing. But nevertheless, it seems to me that the rationality community is not excelling on the dimension of training its members in mental TAPs.

[Added note: Brienne’s Tortoise skills is nearly archetypal of what I mean by “mental habits”.]

Some possible radical changes to the world

Strong AI displaces humans as the dominant force on the planet.

A breakthrough is made in the objective study of meditation, which makes triggering enlightenment much easier. Millions of people become enlightened.

Narrow AI solves protein folding, and Atomically Precise Manufacturing (nanotech) becomes possible and affordable. (Post-scarcity?)

The existing political order collapses.

The global economy collapses, supply chains break down. (Is this a thing that could happen?)

Civilization abruptly collapses.

Nuclear war between two or more nuclear powers.

A major terrorist attack pushes the US into heretofore unprecedented levels of surveillance and law-enforcement.

Sufficient progress is made on human health extension that many powerful people anticipate being within range of longevity escape velocity.

Genetic engineering (of one type or another) gives rise to a generation that includes a large number of people who are much smarter than the historical human distribution.

Advanced VR?

Significant rapid global climate change.

 

RAND needed the “say oops” skill

[Epistemic status: a middling argument]

A few months ago, I wrote about how RAND, and the “Defense Intellectuals” of the cold war represent another precious datapoint of “very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk.”

Since then, I spent some time doing additional research into what cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea being that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed.

However, this proved somewhat less fruitful than I was hoping, and I’ve put it aside for the time being. I might come back to it in the coming months.

It does seem worth sharing at least one relevant anecdote, and analysis, from Daniel Ellsberg’s excellent book, The Doomsday Machine, given that I’ve already written it up.

The missile gap

In the late nineteen-fifties it was widely understood that there was a “missile gap”: that the Soviets had many more ICBMs (intercontinental ballistic missiles armed with nuclear warheads) than the US.

Estimates varied widely on how many missiles the Soviets had. The Army and the Navy gave estimates of about 40 missiles, which was about at parity with the US’s strategic nuclear force. The Air Force and the Strategic Air Command, in contrast, gave estimates of as many as 1000 Soviet missiles, roughly 25 times the US’s count.

(The Air Force and SAC were incentivized to inflate their estimates of the Russian nuclear arsenal, because a large missile gap strongly necessitated the creation of more nuclear weapons, which would be under SAC control and entail increases in the Air Force budget. Similarly, the Army and Navy were incentivized to lowball their estimates, because a comparatively weaker Soviet nuclear force made conventional military forces more relevant and implied allocating budget resources to the Army and Navy.)

So there was some dispute about the size of the missile gap, including an unlikely possibility of nuclear parity with the Soviet Union. Nevertheless, the Soviets’ nuclear superiority was the basis for all planning and diplomacy at the time.

Kennedy campaigned on the basis of correcting the missile gap. Perhaps more critically, all of RAND’s planning and analysis was concerned with the possibility of the Russians launching a nearly-or-actually debilitating first or second strike.

The revelation

In 1961 it came to light, on the basis of new satellite photos, that all of these estimates were dead wrong. It turned out that the Soviets had only 4 nuclear ICBMs, one tenth as many as the US controlled.

The importance of this development should be emphasized. It meant that several of the fundamental assumptions of US nuclear planners were in error.

First of all, it meant that the Soviets were not bent on world domination (as had been assumed). Ellsberg says…

Since it seemed clear that the Soviets could have produced and deployed many, many more missiles in the three years since their first ICBM test, it put in question—it virtually demolished—the fundamental premise that the Soviets were pursuing a program of world conquest like Hitler’s.

That pursuit of world domination would have given them an enormous incentive to acquire at the earliest possible moment the capability to disarm their chief obstacle to this aim, the United States and its SAC. [That] assumption of Soviet aims was shared, as far as I knew, by all my RAND colleagues and with everyone I’d encountered in the Pentagon:

The Assistant Chief of Staff, Intelligence, USAF, believes that Soviet determination to achieve world domination has fostered recognition of the fact that the ultimate elimination of the US, as the chief obstacle to the achievement of their objective, cannot be accomplished without a clear preponderance of military capability.

If that was their intention, they really would have had to seek this capability before 1963. The 1959–62 period was their only opportunity to have such a disarming capability with missiles, either for blackmail purposes or an actual attack. After that, we were programmed to have increasing numbers of Atlas and Minuteman missiles in hard silos and Polaris sub-launched missiles. Even moderate confidence of disarming us so thoroughly as to escape catastrophic damage from our response would elude them indefinitely.

Four missiles in 1960–61 was strategically equivalent to zero, in terms of such an aim.

This revelation about Soviet goals was not only of obvious strategic importance, it also took the wind out of the ideological motivation for this sort of nuclear planning. As Ellsberg relays early in his book, many, if not most, RAND employees were explicitly attempting to defend the US and the world from what was presumed to be an aggressive communist state, bent on conquest. This just wasn’t true.

But it had even more practical consequences: this revelation meant that the Russians had no first strike (or for that matter, second strike) capability. They could launch their ICBMs at American cities or military bases, but such an attack had no chance of debilitating US second strike capacity. It would unquestionably trigger a nuclear counterattack from the US, which, with its 40 missiles, would be able to utterly annihilate the Soviet Union. The only effect of a Russian nuclear attack would be to doom their own country.

[Eli’s research note: What about all the Russian planes and bombs? ICBMs aren’t the only way of attacking the US, right?]

This means that the primary consideration in US nuclear war planning at RAND and elsewhere was fallacious. The Soviets could not meaningfully destroy the US.

…the estimate contradicted and essentially invalidated the key RAND studies on SAC vulnerability since 1956. Those studies had explicitly assumed a range of uncertainty about the size of the Soviet ICBM force that might play a crucial role in combination with bomber attacks. Ever since the term “missile gap” had come into widespread use after 1957, Albert Wohlstetter had deprecated that description of his key findings. He emphasized that those were premised on the possibility of clever Soviet bomber and sub-launched attacks in combination with missiles or, earlier, even without them. He preferred the term “deterrent gap.” But there was no deterrent gap either. Never had been, never would be.

To recognize that was to face the conclusion that RAND had, in all good faith, been working obsessively and with a sense of frantic urgency on a wrong set of problems, an irrelevant pursuit in respect to national security.

This realization invalidated virtually all of RAND’s work to date. Virtually every analysis, study, and strategy had been useless, at best.

The reaction to the revelation

How did RAND employees respond to this revelation that their work had been completely off base?

That is not a recognition that most humans in an institution are quick to accept. It was to take months, if not years, for RAND to accept it, if it ever did in those terms. To some degree, it’s my impression that it never recovered its former prestige or sense of mission, though both its building and its budget eventually became much larger. For some time most of my former colleagues continued their focus on the vulnerability of SAC, much the same as before, while questioning the reliability of the new estimate and its relevance to the years ahead. [Emphasis mine]

For years the specter of a “missile gap” had been haunting my colleagues at RAND and in the Defense Department. The revelation that this had been illusory cast a new perspective on everything. It might have occasioned a complete reassessment of our own plans for a massive buildup of strategic weapons, thus averting an otherwise inevitable and disastrous arms race. It did not; no one known to me considered that for a moment. [Emphasis mine]

According to Ellsberg, many at RAND were unable to adapt to the new reality and continued (fruitlessly) with what they were doing, as if by inertia, when the thing that they needed to do (to use Eliezer’s turn of phrase) was “halt, melt, and catch fire.”

This suggests that one failure of this ecosystem, which was working in the domain of existential risk, was a failure to “say oops”: to notice a mistaken belief, concretely acknowledge that it was mistaken, and to reconstruct one’s plans and world views.

Relevance to people working on AI safety

This seems to be at least some evidence (though, only weak evidence, I think), that we should be cautious of this particular cognitive failure ourselves.

It may be worth rehearsing the motion in advance: how will you respond, when you discover that a foundational crux of your planning is actually a mirage, and the world is different than it seems?

What if you discovered that your overall approach to making the world better was badly mistaken?

What if you received a strong argument against the orthogonality thesis?

What about a strong argument for negative utilitarianism?

I think that many of the people around me have effectively absorbed the impact of a major update at least once in their lives, on a variety of issues (religion, x-risk, average vs. total utilitarianism, etc.), so I’m not that worried about us. But it seems worth pointing out the importance of this error mode.


A note: Ellsberg relays later in the book that, during the Cuban missile crisis, he perceived Kennedy as offering baffling terms to the Soviets: terms that didn’t make sense in light of the actual strategic situation, but might have been sensible under the premise of a Soviet missile gap. Ellsberg wondered, at the time, if Kennedy had also failed to propagate the update regarding the actual strategic situation.

I believed it very unlikely that the Soviets would risk hitting our missiles in Turkey even if we attacked theirs in Cuba. We couldn’t understand why Kennedy thought otherwise. Why did he seem sure that the Soviets would respond to an attack on their missiles in Cuba by armed moves against Turkey or Berlin? We wondered if—after his campaigning in 1960 against a supposed “missile gap”—Kennedy had never really absorbed what the strategic balance actually was, or its implications.

I mention this because additional research suggests that this is implausible: that Kennedy and his staff were aware of the true strategic situation, and that their planning was based on that premise.

Honor is about common knowledge?

[epistemic status: musing, just barely grounded in history. Inspired by chapter four of the Elephant in the Brain.]

I’ve often mused about how in (most parts of) the modern world, physical violence is utterly out of the question (no one expects to be murdered over a business deal), but we often take for granted that people are going to scam us, lie to us, try to cheat us, fail to follow through on an agreement, or otherwise take advantage of us, in the name of profit (the prototypical used-car salesman is an excellent example, though it is broader than that).

In contrast, in many other times and places, the reverse has held true: physical violence is common and expected, but lying or reneging on a promise is seen as an atrocious crime. To take a concrete instance, the Homeric epics are populated with brutal thugs and outright murderers (Odysseus is praised for “cleverly” slitting the throats of a group of sleeping Thracians and taking their stuff), but their norms of hospitality and ξενία (“guest friendship”) are regarded as sacred.

It is almost as if the locus of “civilization” has shifted. It used to be that to be civilized meant keeping one’s commitments, and now it means not outright murdering people.

I want to explore the connection between honesty and violence, and why they seem to trade off.

Why do “honor cultures” go with violence? 

There’s one natural reason why violence and sacred honor go together: if you don’t keep to your commitments, the other guy’ll kill you.

Indeed, he probably has to kill you, because your screwing him over represents an insult. If he doesn’t challenge that insult, it implies weakness or cowardice. He’s embedded in a system that depends on his physical courage: he has serfs to oppress, he has vassals to protect (and extract taxes from), and a lord or sovereign to whom he owes military service.

If it becomes common knowledge that he’s not challenging insults, and therefore, is presumably not confident in his own military prowess, those serfs and vassals might think that this is a good time to try and throw off his yoke, and other enemies might think that his land and stuff is ripe for the picking, and attack.

In the ancient world, the standard of what behavior was unacceptable was determined by the common knowledge that that behavior was unacceptable, because that common knowledge impelled the victim to seek recompense.

If an affront is weird enough to not land in the common knowledge as an insult, one might be able to let it pass. But in practice, one can’t be sure what others will view as an insult, so people probably erred on the side of treating ambiguous affronts as insults, to avoid the possibility of seeming weak. So, actions that we would consider innocuous (like minor lies) are big deals, and this becomes encoded in the culture.

A lord cannot afford to appear weak. Honor culture, and respect for commitments, follows from that.

Why does the modern world expect liars?

There’s a crucial difference in the modern world, though. In our era, the government maintains a monopoly on force.

This means, of course, that directly attacking someone who’s wronged you is out of the question. The government frowns on such behavior. Instead, disputes are handled not by duels or armed conflict, but by the courts.

This changes the rules of the game. Any scam that is legal enough to hold up in court is “fair game”, and if you don’t read the fine print on a contract that you signed, you might get screwed over.

In the modern world, the standard of what behavior is unacceptable is determined by the government and the courts. Anything that is clearly breaking the law is out, but any infraction that adversarially bypasses the legal system (like a scam-y contract) is treated as “the sort of thing that happens” in the marketplace, a risk to protect against.

 

 

I’m sure I’m not the first person to say something like this. Anyone have sources for academic analyses of the roots of honor cultures?

What is productivity momentum?

[Epistemic status: question, and fragmented thoughts. I’m confident that this is a thing, and quite uncertain about the mechanism that governs it. I expect to have a much better organized write-up on this topic soon, but if you want to watch the sausages get made, be my guest.]

[This is another fragment of my upcoming “Phenomenology and Psychology of Personal Productivity” posts (plentiful plosive ‘p’s!).]

There seems to be something like momentum or inertia to my productivity. If my first hour of the day is focused and I’m clipping through tasks, all of the later hours will be similarly focused and productive. But if I get off to a bad start, it curtails the rest of my day. How well my morning goes is the #2 predictor of how well the rest of my day will go (the first is sleep quality).

My day is strongly influenced by the morning, but this effect seems more general than that: how focused and “on point” I am in any given hour strongly influences how “on point” I will be in the coming hours. If I can successfully build momentum in the middle of the day, it is easier to maintain for the rest of the day. (Note: What do I mean by “on point”?)

Why does this phenomenon work like this?

(Note: these hypotheses are not mutually exclusive. Some, I think, are special cases of others.)

Hypothesis 1: My mind is mostly driven by short-term gratification. I can get short-term gratification in one of two ways: via immediate stimulation, or by making progress towards goals. Making progress towards goals is more satisfying, but it also has some delay. Switching from immediate stimulation to satisfaction by making progress on goals entails a period of time when you’re not receiving immediate stimulation, and also not being satisfied by goal-progress, because you’re still revving up and getting oriented. It takes a while to get into the flow of working, which is when it starts being enjoyable.

But once you’re experiencing satisfaction from goal-progress, it feels good and you’re motivated to continue doing that.

Hypothesis 1.5: Same as above, but it isn’t about gratification from immediate stimulation vs. gratification from goal-progress. It’s about gratification from immediate stimulation vs. gratification from self-actualization or self-exertion, the pleasure of pushing yourself and exhausting yourself.

Hypothesis 2: There’s an activation energy or start up cost to the more effortful mode of being productive, but once that cost is paid, it’s easy.

[I notice that the sort of phenomenon described in Hyp. 1, 1.5, and  2, is not unique to “productivity”. It also seems to occur in other domains. I often feel a disinclination to go exercise, but once I start, it feels good and I want to push myself. (Though, notably, this “broke” for me in the past few months. Perhaps investigating why it broke would reveal something about how this sort of momentum works in general?)]

Hypothesis 3: It’s about efficacy. Once I’ve made some progress, spent an hour in deep work, or whatever, the relevant part of my mind alieves that I am capable of making progress on my goals, and so is more motivated to do that.

In other words, being productive is evidence that something good will happen if I try, which makes it worthwhile to try.

(This would suggest that other boosts to one’s self-confidence or belief in ability to do things would also jump start momentum chains, which seems correct.)

Hypothesis 4: It’s about a larger time budget inducing parts-coordination. I have a productive first hour and get stuff done. A naive extrapolation says that if all of the following hours have a similar density of doing and completing, then I will be able to get many things done. Given this, all my parts that are advocating for different things that are important to them settle down, confident that their thing will be gotten to.

In contrast, if I have a bad morning, each part is afraid that its goal will be left by the wayside, and so they all scramble to drag my mind to their thing, and I can’t focus on any one thing.

[This doesn’t seem right. The primary contrasting state is more like lazy and lackadaisical, rather than frazzled.]

Hypothesis 5: It is related to failing with abandon. It’s much more motivating to be aiming to have an excellent day than it is to be aiming to recover from a bad morning to have a decent day. There’s an inclination to say “f*** it”, and not try as hard, because the payoffs are naturally divided into chunks of a day.

Or another way to say this: my motivation increases after a good morning because I alieve that I can get all the things done, and getting all the things done is much more motivating than getting 95% of the things done, because of completion heuristics (which I’ve already noted, but not written about anywhere).

Hypothesis 6: It’s about attention. There’s something that correlates with productivity which is something like “crispness of attention” and “snappiness of attentional shifts.” Completing a task and then moving on to the next one has this snappiness.

Having a “good morning” means engaging deeply with some task or project and really getting immersed in it. This sort of settledness is crucial to productivity and it is much easier to get into if I was there recently. (Because of fractionation?!)

Hypothesis 7: It’s about setting a precedent or a set point for executive function, or something? There’s a thing that happens throughout the day, which is that an activity is suggested, by my mind or by my systems, and some relevant part of me decides “Yes, I’ll do that now”, or “No, I don’t feel like it.”

I think those choices are correlated for some reason? The earlier ones set the standard for the later ones? Because of consistency effects? (I doubt that that is the reason. I would more expect a displacement effect (“ah, I worked hard this morning, I don’t need to do this now”) than a consistency effect (“I chose to work earlier today, so I’m a choose-to-work person”). In any case, this effect is way subverbal, and doesn’t involve the social mind at all, I think.)

This one feels pretty right. But why would it be? Maybe one of hypotheses 1-5?

Hypothesis 8: Working has two components: the effort of starting and the reward of making progress / completing.

If you’re starting cold, you have to force yourself through the effort, and it’s easier to procrastinate, putting the task off for a minute or an hour.

But if you’ve just been working on or just completed something else and are feeling the reward high from that, then the reward component of tasks in general is much more salient: it is pulled into near-mode immediacy, which makes the next task more compelling.

I think this captures a lot of my phenomenological experience regarding productivity momentum and it also explains the related phenomena with exercise and similar.

(Also, there’s something like an irrational fear of effort, which builds up higher and higher as long as you’re avoiding it, but which dissipates once you exert some effort?)

(M/T on Hyp. 8:) If this were the case, it seems like it would predict that momentum would decay if one took a long break in the middle of the day. I think in practice this isn’t quite right, because the “productivity high” of a good morning can last for a long time, into the afternoon or evening.

 

My best guess is that it is hypothesis 8, but that the dynamic of hypothesis 5 is also in play. I’ll maybe consolidate all this thinking into a theory sometime this week.

Note: Also, maybe I’m only talking about flow?

 

Added 2018-11-28: Thinking about it further, I hit upon two other hypotheses that fit with my experience.

Hypothesis 7.5: [related to 1, 1.5, and 3. More or less a better reformulation of 7.] There’s a global threshold of distraction, or of acting on (or reacting to) thoughts and urges flashing through one’s mind. This threshold can be lowered on the scale of weeks and months, but it also varies day by day. Momentum entails lowering that threshold, so that one’s focus on any given task can be deep, instead of shallow.

This predicts that meditation and meditation-like practices would lower the threshold and potentially start up a cycle of productivity momentum. Indeed, the only mechanism that I’ve found that has reliably helped me recover from unproductive mornings and afternoons is a kind of gently-enforced serenity process.

I think this one is pretty close to correct.

Hypothesis 10: [related to 2 and 8] It’s just about ambiguity resolution. Once I start working, I have a clear and concrete sense of what that’s like, which bounds the possible hedonic downside. (I should write more about ambiguity avoidance.)

 

 

Where did the radically ambitious thinking go?

[Epistemic status: Speculation based on two subjective datapoints (which I don’t cite).]

Turing and I.J. Good famously envisioned the possibility of a computer superintelligence, and furthermore presaged AI risk (in at least some throwaway lines). In our contemporary era, in contrast, these are fringe topics among computer scientists. The persons who have most focused on AI risk are outside of the AI establishment. And Yudkowsky and Bostrom had to fight to get that establishment to take the problem seriously.

Contemporaneously with Turing, the elite physicists of the generation (Szilard, in particular, but also others) were imagining the possibility of an atomic bomb and atomic power. I’m not aware of physicists today geeking out over anything similarly visionary. (Interest in fictional, but barely grounded, technologies like warp drives doesn’t cut it. Szilard could forecast atomic bombs and their workings from his knowledge of physics. Furthermore, it was plausible for Szilard to accomplish the development of such a device in a human lifetime, and he actively tried for it. This seems of a different type than idle speculation about Star Trek technologies.) At least one similarly world-altering technology, Drexlerian nanotechnology, is firmly not a part of the discourse among mainstream scientists. (I think. I suppose they talk about it among themselves surreptitiously, but in that case, I would be surprised by how little the scientific community has pursued it.)

I have the impression (not based on any hard data) that the scientists of the first half of the 20th century regularly explored huge, weird ideas, technologies and eventualities that, if they came to pass, could radically reshape the world. I further have the impression that most scientists today don’t do the same, or at least don’t do so in a serious way.

One way to say it: in 1920, talk of the possibility of an atomic bomb was in the domain of science, to be treated as a real and important possibility, but in 2018, nanotech is in the domain of science fiction, to be treated as entertainment. (Noting that atomic bombs were written about in science fiction.)

I don’t know why this is.

  • Maybe because current scientists are more self-conscious, not wanting to seem unserious?
  • Maybe because they have less of a visceral sense that the world can change in extreme ways?
  • Or maybe “science fiction” ideas became low status somehow? Perhaps because less rigorous people were hyping all sorts of nonsense, and intellectuals wanted to distance themselves from such people? So they adopted a more cynical attitude?
  • Maybe the community of scientists was smaller then, so it was easier to create common knowledge that an idea wasn’t “too out there”.
  • Maybe because nanotech and superintelligence are actually less plausible than atomic bombs, or are at least, more speculative?

I want to know: if this effect is real, what happened?