Culture vs. Mental Habits

[epistemic status: personal view of the rationality community.]

In this “post”, I’m going to outline two dimensions on which one could assess the rationality community and the success of the rationality project. This is hardly the only possible break-down, but it is one that underlies a lot of my thinking about rationality community building, and what I would do, if I decided rationality community building were a strong priority.

I’m going to call those two dimensions Culture and Mental Habits. As we’ll see, these are not cleanly distinct categories, and they tend to bleed into each other. But their focuses are separate enough that one can meaningfully talk about the differences between them.

Culture

By “culture” I mean something like…

  • Which good things are prioritized?
  • Which actions and behaviors are socially rewarded?
  • Which concepts and ideas are in common parlance?

Culture is about groups of people, what those groups share and what they value.

My perception is that on this dimension, the Bay Area rationality community has done extraordinarily well.

Truth-seeking is seen as paramount: individuals are socially rewarded for admitting ignorance and changing their minds. Good faith and curiosity about other people’s beliefs is common.

Analytical and quantitative reasoning is highly respected, and increasingly, so is embodied intuition.

People get status for doing good scholarship (e.g. Sarah Constantin), for insightful analysis of complicated situations (e.g. Scott Alexander), or for otherwise producing good or interesting intellectual content (e.g. Eliezer).

Betting (putting your money where your mouth is) is socially-encouraged. Concepts like “crux” and “rationalist taboo” are well known enough to be frequently invoked in conversation.

Compared to the backdrop of mainline American culture, where admitting that you were wrong means losing face, and trying to figure out what’s true is secondary (if not outright suspicious, since it suggests political non-allegiance), the rationalist bubble’s culture of truth seeking is an impressive accomplishment.

Mental habits

For lack of a better term, I’m going to call this second dimension “mental habits” (or perhaps to borrow Leverage’s term “IPs”).

The thing that I care about in this category is “does a given individual reliably execute some specific cognitive move, when the situation calls for it?” or “does a given individual systematically avoid a given cognitive error?”

Some examples, to gesture at what I mean:

  • Never falling prey to the planning fallacy
  • Never falling prey to sunk costs
  • Systematically noticing defensiveness and deflinching or a similar move
  • Systematically noticing and responding to rationalization phenomenology
  • Implementing the “say oops” skill, when new evidence comes to light that overthrows an important position of yours
  • Systematic avoidance of the sorts of errors I outline in my Cold War Cognitive Errors investigation (this is the only version that is available at this time).

The element of reliability is crucial. There’s a way that culture is about “counting up” (some people know concept X, and use it sometimes) and mental habits is about “counting down” (each person rarely fails to execute relevant mental process Y).

The reliability of mental habits (in contrast with some mental motion that you know how to do and have done once or twice) is crucial, because it puts one in a relevantly different paradigm.

For one thing, there’s a frame under which rationality is about avoiding failure modes: how to succeed in a given domain depends on the domain, but rationality is about how not to fail, generally. Under that frame, executing the correct mental motion 10% of the time is much less interesting and impressive than executing it every time (or even 90% of the time).

If the goal is to avoid the sorts of errors in my cold war post, then it is not even remotely sufficient for individuals to be familiar with the patches: they have to reliably notice the moments of intervention and execute the patches, almost every time, in order to avoid the error in the crucial moment.

Furthermore, systematic execution of a mental TAP allows for more complicated cognitive machines. Lots of complex skills depend on all of the pieces of the skills working.

It seems to me that, along this dimension, the rationality community has done dismally.

Eliezer wrote about Mental Habits of this sort in the sequences and in his other writing, but when I consider even very advanced members of my community, I think very few of them systematically notice rationalization, or will reliably avoid sunk costs, or consistently respond to their own defensiveness.

I see very few people around me who explicitly attempt to train 5-second or smaller rationality skills. (Anna and Matt Fallshaw are exceptions who come to mind).

Anna gave a talk at the CFAR alumni reunion this year, in which she presented two low-level cognitive skills of that sort. There were about 40 people in the room watching the lecture, but I would be mildly surprised if even 2 of those people reliably execute the skills described, in the relevant trigger situations, 6 months after that talk.

But I can imagine a nearby world, where the rationality community was more clearly a community of practice, and most of the people in that room would watch that talk and then train the cognitive habit to that level of reliability.

This is not to say that fast cognitive skills of this sort are what we should be focusing on. I can see arguments that culture really is the core thing. But nevertheless, it seems to me that the rationality community is not excelling on the dimension of training its members in mental TAPs.

[Added note: Brienne’s Tortoise skills is nearly archetypal of what I mean by “mental habits”.]

Using the facilitator to make sure that each person’s point is held

[Epistemic status: This is a strategy that I know works well from my own experience, but also depends on some prereqs.

I guess this is a draft for my Double Crux Facilitation sequence.]

Followup to: Something simple to try in conversations

Related to: Politics is the Mind Killer, Against Disclaimers

Here’s a simple model that is extremely important to making difficult conversations go well:

Sometimes, when a person is participating in a conversation, or an argument, he or she will be holding onto a “point” that he or she wants to convey.

For instance…

  • A group is deciding which kind of air conditioner to get, and you understand that one brand is much more efficient than the others, for the same price.
  • You’re listening to a discussion between two intellectuals who you can tell are talking past each other, and you have the perfect metaphor that will clarify things for both of them.
  • Your startup is deciding how to respond to an embarrassing product failure, and one of the cofounders wants to release a statement that you think will be off-putting to many of your customers.

As a rule, when a person is “holding onto” a point that they want to make, they are unable to listen well.

The point that a person wants to make relates to something that’s important to them. If it seems that their conversational-partners are not going to understand or incorporate that point, that important value is likely going to be lost. Reasonably, this entails a kind of anxiety.

So, to the extent that it seems to you that your point won’t be heard or incorporated, you’ll agitatedly push for airtime, at the expense of good listening. Which, unfortunately, results in a coordination problem of each person pushing to get their point heard and no one listening. Which, of course, makes it more likely that any given point won’t be heard, triggering a positive feedback loop.

In general, this means that conversations are harder to the degree that…

  1. The topic matters to the participants.
  2. The participants’ visceral expectation is that they won’t be heard.

(Which is a large part of the reason why difficult conversations get harder as the number of participants increases. More people means more points competing to be heard, which exacerbates the death spiral.)

Digression

I think this goes a long way towards explicating why politics is a mind killer. Political discourse is a domain which…

  1. Matters personally to many participants, and
  2. Includes a vast number of “conversational participants”,
  3. Who might take unilateral action, on the basis of whatever arguments they hear, good or bad.

Given that setup, it is quite reasonable to treat arguments as soldiers. When you see someone supporting, or even appearing to support, a policy or ideology that you consider abhorrent or dangerous, there is a natural and reasonable anxiety that the value you’re protecting will be lost. And there is a natural (if usually poorly executed) desire to correct the misconception in the common knowledge before it gets away from you. Or failing that, to tear down the offending argument / discredit the person making it.

(To see an example of the thing that one is viscerally fearing, see the history of Eric Drexler’s promotion of nanotechnology. Drexler made arguments about Nanotech, which he hoped would direct resources in such a way that the future could be made much better. His opponents attacked strawmen of those arguments. The conversation “got away” from Drexler, and the whole audience discounted the ideas he supported, thus preventing any progress towards the potential future that Drexler was hoping to help bring into being.

I think the visceral fear of something like this happening to you is what motivates “treating arguments as soldiers.”)

End digression

Given this, one of the main things that needs to happen to make a conversation go well is for each participant to (epistemically!) alieve that their point will be gotten to and heard. Otherwise, they can’t be expected to put it aside (even for a moment) in order to listen carefully to their interlocutor (because doing so would increase the risk of their point in fact not being heard).

When I’m mediating conversations, one strategy that I employ to facilitate this is to use my role as the facilitator to “hold” the points of both sides. That is (sometimes before the participants even start talking to each other), I’ll first have each one (one at a time) convey their point to me. And I don’t go on until I can pass the ITT of that person’s point, to their (and my) satisfaction.

Usually, when I’m able to pass the ITT, there’s a sense of relief from that participant. They now know that I understand their point, so whatever happens in the conversation, it won’t get lost or neglected. Now, they can relax and focus on understanding what the other person has to say.

Of course, with sufficient skill, one of the participants can put aside their point (before it’s been heard by anyone) in order to listen. But that is often asking too much of your interlocutors, because doing the “putting aside” motion, even for a moment, is hard, especially when what’s at stake is important. (I can’t always do it.)

Outsourcing this step to the facilitator is much easier, because the facilitator has less viscerally at stake (and has more metacognition to track the meta-level of the conversation).

I’m curious if this is new to folks or not. Give me feedback.

 

Some possible radical changes to the world

Strong AI displaces humans as the dominant force on the planet.

A breakthrough is made in the objective study of meditation, which makes triggering enlightenment much easier. Millions of people become enlightened.

Narrow AI solves protein folding, Atomically Precise Manufacturing (nanotech) becomes possible and affordable. (Post scarcity?)

The existing political order collapses.

The global economy collapses, supply chains break down. (Is this a thing that could happen?)

Civilization abruptly collapses.

Nuclear war between two or more nuclear powers.

A major terrorist attack pushes the US into heretofore unprecedented levels of surveillance and law-enforcement.

Sufficient progress is made on human health extension that many powerful people anticipate being within range of longevity escape velocity.

Genetic engineering (of one type or another) gives rise to a generation that includes a large number of people who are much smarter than the historical human distribution.

Advanced VR?

Significant rapid global climate change.

 

Some varieties of feeling “out of it”

[Epistemic status: phenomenology. I don’t know if this is true for anyone other than me. Some of my “responses” are wrong, but I don’t know which ones yet.

Part of some post on the phenomenology and psychology of productivity. There are a lot of stubs, places for me to write more.

This is badly organized. A draft.]

One important skill of maintaining productivity through phenomenology is distinguishing between the different kinds of low energy states. I think that people typically conflate a large number of mental states under the label of “tired” or that of “don’t feel like working.” The problem with this is that the phenomenology of these different states points to different underlying mechanisms, and each one should be responded to differently.

If you can distinguish between these flavors of experience, then each one can be the trigger for a TAP, to bring you back to a more optimal state.

I don’t think I’ve learned to make all relevant state-distinctions here, but these are some that I can recognize.

Sleep deprivation: Feels like a kind of buzzy feeling in my head that goes with “low energy”.

The best response is a nap. If that doesn’t work, then maybe try a stimulant. You can also just wait: after a while your circadian system will be strongly countering your sleep pressure, and you’ll feel more alert.

Fuzzy-headed: Often from overeating, or not having gotten enough physical activity in the past few days.

The best response is exercise. (Maybe exercise intense enough that you get an endorphin response?)

Hungry: You probably know what this is. I think maybe the best response is to ignore it?

Running out of thinking-steam due to needing to eat: This feels distinctly different from the one above. Sort of like my thoughts running out, due to something like an empty head?

Usually, eating entails some drop in energy level, but if you time it right, both not eating and eating can be energizing. Though I’ve never done this for long periods, and I don’t know if it is sustainable.

Cognitive exhaustion: This is the one I understand the least. I don’t know what it is. Maybe needing to process, or consolidate info, or do subconscious processing? I don’t know if emotional exhaustion is meaningfully different (my guess is no?).

The default thing to do here is to take a break, but I’m not sure if that’s the best thing to do. I think maybe you can just switch tasks and get the same effect?

Aversions

I’ll write about aversions more sometime, because they are the ones that are most critical to productivity. Aversions come in two different types: Anxiety/Fear/Stress aversions and “glancing off” aversions.

Anxiety/Fear/Stress/Belief Aversion: This sort of aversion is almost always accompanied by a tension-feeling in the gut and stems from some flavor-of-fear about the thing I’m averse to. A common template for the fear is “I’ve already failed / I’ve already fucked up.” Another is a fear of being judged.

The response to this one is to use Focusing to bring the concern that your body is holding on to into conscious attention, and to figure out a way to handle it.

“Glancing off” Aversions: This is closer to the feeling of slipping off a task, or “just not feeling like doing it”, or finding your attention going elsewhere. This is often due to a task that is aversive not due to its goal-relevant qualities, but due to its ambiguity, or its being too big to hold in mind.

The response, as I’ll write about later, is to chunk out the smallest concrete next action and to visualize doing it.

Ego depletion: Feels sort of like my brain is tired? This feels kind of like cognitive exhaustion, and they might be the same thing. I think this is due to other subsystems in me wanting something other than work.

The correct response, I think, is to take a break and do whatever I feel like doing in that moment, though I don’t have a good understanding of mental energy, and it may be that I’m supposed to do something that has clear and satisfying reward signals? (I don’t think that’s right, though. Feels a bit too mechanical.)

Urgy-ness: Also have to write more about this another time. This is a feeling of compulsion for short term gratification, often of several varieties in sequence, without satisfaction. This is often a second-order response to an anxiety or fear aversion, and can also be about some goal that’s unhandled or an unmet need. See also: the reactiveness scalar (which I also haven’t written about yet).

Response: exercise, then Focusing

I wrote this fast. Questions are welcome.

 

 

RAND needed the “say oops” skill

[Epistemic status: a middling argument]

A few months ago, I wrote about how RAND and the “Defense Intellectuals” of the Cold War represent another precious datapoint of “very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk.”

Since then, I’ve spent some time doing additional research into which cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea being that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed.

However, this proved somewhat less fruitful than I was hoping, and I’ve put it aside for the time being. I might come back to it in the coming months.

It does seem worth sharing at least one relevant anecdote from Daniel Ellsberg’s excellent book, The Doomsday Machine, along with some analysis, given that I’ve already written it up.

The missile gap

In the late nineteen-fifties it was widely understood that there was a “missile gap”: that the Soviets had many more ICBMs (intercontinental ballistic missiles armed with nuclear warheads) than the US.

Estimates varied widely on how many missiles the Soviets had. The Army and the Navy gave estimates of about 40 missiles, which was about at parity with the US’s strategic nuclear force. The Air Force and the Strategic Air Command, in contrast, gave estimates of as many as 1000 Soviet missiles, 20 times more than the US’s count.

(The Air Force and SAC were incentivized to inflate their estimates of the Russian nuclear arsenal, because a large missile gap strongly necessitated the creation of more nuclear weapons, which would be under SAC control and entail increases in the Air Force budget. Similarly, the Army and Navy were incentivized to lowball their estimates, because a comparatively weaker Soviet nuclear force made conventional military forces more relevant and implied allocating budget-resources to the Army and Navy.)

So there was some dispute about the size of the missile gap, including an unlikely possibility of nuclear parity with the Soviet Union. Nevertheless, the Soviets’ nuclear superiority was the basis for all planning and diplomacy at the time.

Kennedy campaigned on the basis of correcting the missile gap. Perhaps more critically, all of RAND’s planning and analysis was concerned with the possibility of the Russians launching a nearly-or-actually debilitating first or second strike.

The revelation

In 1961 it came to light, on the basis of new satellite photos, that all of these estimates were dead wrong. It turned out that the Soviets had only 4 nuclear ICBMs, one tenth as many as the US controlled.

The importance of this development should be emphasized. It meant that several of the fundamental assumptions of US nuclear planners were in error.

First of all, it meant that the Soviets were not bent on world domination (as had been assumed). Ellsberg says…

Since it seemed clear that the Soviets could have produced and deployed many, many more missiles in the three years since their first ICBM test, it put in question—it virtually demolished—the fundamental premise that the Soviets were pursuing a program of world conquest like Hitler’s.

That pursuit of world domination would have given them an enormous incentive to acquire at the earliest possible moment the capability to disarm their chief obstacle to this aim, the United States and its SAC. [That] assumption of Soviet aims was shared, as far as I knew, by all my RAND colleagues and with everyone I’d encountered in the Pentagon:

The Assistant Chief of Staff, Intelligence, USAF, believes that Soviet determination to achieve world domination has fostered recognition of the fact that the ultimate elimination of the US, as the chief obstacle to the achievement of their objective, cannot be accomplished without a clear preponderance of military capability.

If that was their intention, they really would have had to seek this capability before 1963. The 1959–62 period was their only opportunity to have such a disarming capability with missiles, either for blackmail purposes or an actual attack. After that, we were programmed to have increasing numbers of Atlas and Minuteman missiles in hard silos and Polaris sub-launched missiles. Even moderate confidence of disarming us so thoroughly as to escape catastrophic damage from our response would elude them indefinitely.

Four missiles in 1960–61 was strategically equivalent to zero, in terms of such an aim.

This revelation about Soviet goals was not only of obvious strategic importance, it also took the wind out of the ideological motivation for this sort of nuclear planning. As Ellsberg relays early in his book, many, if not most, RAND employees were explicitly attempting to defend the US and the world from what was presumed to be an aggressive communist state, bent on conquest. This just wasn’t true.

But it had even more practical consequences: this revelation meant that the Russians had no first strike (or for that matter, second strike) capability. They could launch their ICBMs at American cities or military bases, but such an attack had no chance of debilitating US second strike capacity. It would unquestionably trigger a nuclear counterattack from the US, which, with its 40 missiles, would be able to utterly annihilate the Soviet Union. The only effect of a Russian nuclear attack would be to doom their own country.

[Eli’s research note: What about all the Russian planes and bombs? ICBMs aren’t the only way of attacking the US, right?]

This means that the primary consideration in US nuclear war planning, at RAND and elsewhere, was fallacious. The Soviets could not meaningfully destroy the US.

…the estimate contradicted and essentially invalidated the key RAND studies on SAC vulnerability since 1956. Those studies had explicitly assumed a range of uncertainty about the size of the Soviet ICBM force that might play a crucial role in combination with bomber attacks. Ever since the term “missile gap” had come into widespread use after 1957, Albert Wohlstetter had deprecated that description of his key findings. He emphasized that those were premised on the possibility of clever Soviet bomber and sub-launched attacks in combination with missiles or, earlier, even without them. He preferred the term “deterrent gap.” But there was no deterrent gap either. Never had been, never would be.

To recognize that was to face the conclusion that RAND had, in all good faith, been working obsessively and with a sense of frantic urgency on a wrong set of problems, an irrelevant pursuit in respect to national security.

This realization invalidated virtually all of RAND’s work to date. Virtually every analysis, study, and strategy had been useless, at best.

The reaction to the revelation

How did RAND employees respond to this revelation, that their work had been completely off base?

That is not a recognition that most humans in an institution are quick to accept. It was to take months, if not years, for RAND to accept it, if it ever did in those terms. To some degree, it’s my impression that it never recovered its former prestige or sense of mission, though both its building and its budget eventually became much larger. For some time most of my former colleagues continued their focus on the vulnerability of SAC, much the same as before, while questioning the reliability of the new estimate and its relevance to the years ahead. [Emphasis mine]

For years the specter of a “missile gap” had been haunting my colleagues at RAND and in the Defense Department. The revelation that this had been illusory cast a new perspective on everything. It might have occasioned a complete reassessment of our own plans for a massive buildup of strategic weapons, thus averting an otherwise inevitable and disastrous arms race. It did not; no one known to me considered that for a moment. [Emphasis mine]

According to Ellsberg, many at RAND were unable to adapt to the new reality and continued (fruitlessly) with what they were doing, as if by inertia, when the thing that they needed to do (to use Eliezer’s turn of phrase) was to “halt, melt, and catch fire.”

This suggests that one failure of this ecosystem, which was working in the domain of existential risk, was a failure to “say oops”: to notice a mistaken belief, concretely acknowledge that it was mistaken, and to reconstruct one’s plans and world views.

Relevance to people working on AI safety

This seems to be at least some evidence (though only weak evidence, I think) that we should be cautious of this particular cognitive failure ourselves.

It may be worth rehearsing the motion in advance: how will you respond, when you discover that a foundational crux of your planning is actually a mirage, and the world is different than it seems?

What if you discovered that your overall approach to making the world better was badly mistaken?

What if you received a strong argument against the orthogonality thesis?

What about a strong argument for negative utilitarianism?

I think that many of the people around me have effectively absorbed the impact of a major update at least once in their life, on a variety of issues (religion, x-risk, average vs. total utilitarianism, etc), so I’m not that worried about us. But it seems worth pointing out the importance of this error mode.


A note: Ellsberg relays later in the book that, during the Cuban missile crisis, he perceived Kennedy as offering baffling terms to the Soviets: terms that didn’t make sense in light of the actual strategic situation, but might have been sensible under the premise of a Soviet missile gap. Ellsberg wondered, at the time, if Kennedy had also failed to propagate the update regarding the actual strategic situation.

I believed it very unlikely that the Soviets would risk hitting our missiles in Turkey even if we attacked theirs in Cuba. We couldn’t understand why Kennedy thought otherwise. Why did he seem sure that the Soviets would respond to an attack on their missiles in Cuba by armed moves against Turkey or Berlin? We wondered if—after his campaigning in 1960 against a supposed “missile gap”—Kennedy had never really absorbed what the strategic balance actually was, or its implications.

I mention this because additional research suggests that this is implausible: that Kennedy and his staff were aware of the true strategic situation, and that their planning was based on that premise.

Honor is about common knowledge?

[epistemic status: musing, just barely grounded in history. Inspired by chapter four of the Elephant in the Brain.]

I’ve often mused about how in (most parts of) the modern world, physical violence is utterly out of the question (no one expects to be murdered over a business deal), but we often take for granted that people are going to scam us, lie to us, try to cheat us, fail to follow through on an agreement, or otherwise take advantage of us, in the name of profit. (The prototypical used car salesman is an excellent example, though the phenomenon is broader than that.)

In contrast, in many other times and places, the reverse has held true: physical violence is common and expected, but lying or reneging on a promise is seen as an atrocious crime. To take a concrete instance, the Homeric Epics are populated with brutal thugs and outright murderers (Odysseus is praised for “cleverly” slitting the throats of a group of sleeping Thracians and taking their stuff), but their norms of hospitality and ξενία (“guest friendship”) are regarded as sacred.

It is almost as if the locus of “civilization” has shifted: it used to be that being civilized meant keeping one’s commitments, and now it means not outright murdering people.

I want to explore the connection between honesty and violence, and why they seem to trade off.

Why do “honor cultures” go with violence? 

There’s one natural reason why violence and sacred honor go together: if you don’t keep to your commitments, the other guy’ll kill you.

Indeed, he probably has to kill you, because your screwing him over represents an insult. If he doesn’t challenge that insult, it implies weakness or cowardice. He’s embedded in a system that depends on his physical courage: he has serfs to oppress, he has vassals to protect (and extract taxes from), and a lord or sovereign to whom he owes military service.

If it becomes common knowledge that he’s not challenging insults, and therefore, is presumably not confident in his own military prowess, those serfs and vassals might think that this is a good time to try and throw off his yoke, and other enemies might think that his land and stuff is ripe for the picking, and attack.

In the ancient world, the standard of what behavior was unacceptable was determined by common knowledge: a behavior was an insult if it was commonly known to be an insult, because that common knowledge impels the victim to seek recompense.

If an affront is weird enough not to land in the common knowledge as an insult, one might be able to let it pass, but in practice, one can’t be sure what others will view as an insult, so people probably erred on the side of treating ambiguous affronts as insults, to avoid the possibility of seeming weak. So actions that we would consider innocuous (like minor lies) become big deals, and this becomes encoded in the culture.

A lord cannot afford to appear weak. Honor culture, and respect for commitments, follows from that.

Why does the modern world expect liars?

There’s a crucial difference in the modern world, though. In our era, the government maintains a monopoly on force.

This means, of course, that directly attacking someone who’s wronged you is out of the question. The government frowns on such behavior. Instead, disputes are handled not by duels or armed conflict, but by the courts.

This changes the rules of the game. Any scam that is legal enough to hold up in court is “fair game”, and if you don’t read the fine print on a contract that you signed, you might get screwed over.

In the modern world, the standard of what behavior is unacceptable is determined by the government and the courts. Anything that is clearly breaking the law is out, but any infraction that adversarially bypasses the legal system (like a scam-y contract) is treated as “the sort of thing that happens” in the marketplace, a risk to protect against.

 

 

I’m sure I’m not the first person to say something like this. Anyone have sources for academic analyses of the roots of honor cultures?

What is productivity momentum?

[Epistemic status: question, and fragmented thoughts. I’m confident that this is a thing, and quite uncertain about the mechanism that governs it. I expect to have a much better organized write up on this topic soon, but if you want to watch the sausages get made, be my guest.]

[This is another fragment of my upcoming “Phenomenology and Psychology of Personal Productivity” posts (plentiful plosive ‘p’s!).]

There seems to be something like momentum or inertia to my productivity. If my first hour of the day is focused and I’m clipping through tasks, all of the later hours will be similarly focused and productive. But if I get off to a bad start, it curtails the rest of my day. How well my morning goes is the #2 predictor of how well the rest of my day will go (the first is sleep quality).

My day is strongly influenced by the morning, but this effect seems more general than that: how focused and “on point” I am in any given hour strongly influences how “on point” I will be in the coming hours. If I can successfully build momentum in the middle of the day, it is easier to maintain for the rest of the day. (Note: What do I mean by “on point”?)

Why does this phenomenon work like this?

(Note: these hypotheses are not mutually exclusive. Some, I think, are special cases of others.)

Hypothesis 1: My mind is mostly driven by short-term gratification. I can get short term gratification in one of two ways: via immediate stimulation, or by making progress towards goals. Making progress towards goals is more satisfying, but it also has some delay. Switching from immediate stimulation to satisfaction by making progress on goals entails a period of time when you’re not receiving immediate stimulation, and also not being satisfied by goal-progress, because you’re still revving up and getting oriented. It takes a while to get into the flow of working, before it starts being enjoyable.

But once you’re experiencing satisfaction from goal-progress, it feels good and you’re motivated to continue doing that.

Hypothesis 1.5: Same as above, but it isn’t about gratification from immediate stimulation vs. gratification from goal-progress. It’s about gratification from immediate stimulation vs. gratification from self actualization or self exertion, the pleasure of pushing yourself and exhausting yourself.

Hypothesis 2: There’s an activation energy or start up cost to the more effortful mode of being productive, but once that cost is paid, it’s easy.

[I notice that the sort of phenomenon described in Hyp. 1, 1.5, and 2 is not unique to “productivity”. It also seems to occur in other domains. I often feel a disinclination to go exercise, but once I start, it feels good and I want to push myself. (Though, notably, this “broke” for me in the past few months. Perhaps investigating why it broke would reveal something about how this sort of momentum works in general?)]

Hypothesis 3: It’s about efficacy. Once I’ve made some progress, spent an hour in deep work, or whatever, the relevant part of my mind alieves that I am capable of making progress on my goals, and so is more motivated to do that.

In other words, being productive is evidence that something good will happen if I try, which makes it worthwhile to try.

(This would suggest that other boosts to one’s self-confidence or belief in one’s ability to do things would also jump-start momentum chains, which seems correct.)

Hypothesis 4: It’s about a larger time budget inducing parts-coordination. I have a productive first hour and get stuff done. A naive extrapolation says that if all of the following hours have a similar density of doing and completing, then I will be able to get many things done. Given this, all my parts that are advocating for different things that are important to them settle down, confident that their thing will be gotten to.

In contrast, if I have a bad morning, each part is afraid that its goal will be left by the wayside, and so they all scramble to drag my mind to their thing, and I can’t focus on any one thing.

[This doesn’t seem right. The primary contrasting state is more like lazy and lackadaisical, rather than frazzled.]

Hypothesis 5: It is related to failing with abandon. It’s much more motivating to be aiming to have an excellent day than it is to be aiming to recover from a bad morning to have a decent day. There’s an inclination to say “f*** it”, and not try as hard, because the payoffs are naturally divided into chunks of a day.

Or another way to say this: my motivation increases after a good morning because I alieve that I can get all the things done, and getting all the things done is much more motivating than getting 95% of the things done, because of completion heuristics (which I’ve already noted, but not written about anywhere).

Hypothesis 6: It’s about attention. There’s something that correlates with productivity which is something like “crispness of attention” and “snappiness of attentional shifts.” Completing a task and then moving on to the next one has this snappiness.

Having a “good morning” means engaging deeply with some task or project and really getting immersed in it. This sort of settledness is crucial to productivity and it is much easier to get into if I was there recently. (Because of fractionation?!)

Hypothesis 7: It’s about setting a precedent or a set point for executive function, or something? There’s a thing that happens throughout the day, which is that an activity is suggested, by my mind or by my systems, and some relevant part of me decides “Yes, I’ll do that now”, or “No, I don’t feel like it.”

I think those choices are correlated for some reason? The earlier ones set the standard for the later ones? Because of consistency effects? (I doubt that that is the reason. I would more expect a displacement effect (“ah, I worked hard this morning, I don’t need to do this now”) than a consistency effect (“I chose to work earlier today, so I’m a choose-to-work person”). In any case, this effect is way subverbal, and doesn’t involve the social mind at all, I think.)

This one feels pretty right. But why would it be? Maybe one of hypotheses 1-5?

Hypothesis 8: Working has two components: the effort of starting and the reward of making progress / completing.

If you’re starting cold, you have to force yourself through the effort, and it’s easier to procrastinate, putting the task off for a minute or an hour.

But if you’ve just been working on or just completed something else and are feeling the reward high from that, then the reward component of tasks in general, is much more salient, is pulled into near-mode immediacy. Which makes the next task more compelling.

I think this captures a lot of my phenomenological experience regarding productivity momentum and it also explains the related phenomena with exercise and similar.

(Also, there’s something like an irrational fear of effort, which builds up higher and higher as long as you’re avoiding it, but which dissipates once you exert some effort?)

(M/T on Hyp. 8:) If this were the case, it seems like it would predict that momentum would decay if one took a long break in the middle of the day. I think in practice this isn’t quite right, because the “productivity high” of a good morning can last for a long time, into the afternoon or evening.

 

My best guess is that it is hypothesis 8, but that the dynamic of hypothesis 5 is also in play. I’ll maybe consolidate all this thinking into a theory sometime this week.

Note: Also, maybe I’m only talking about flow?

 

Added 2018-11-28: Thinking about it further, I hit upon two other hypotheses that fit with my experience.

Hypothesis 7.5: [related to 1, 1.5, and 3. More or less a better reformulation of 7.] There’s a global threshold of distraction, or of acting on (or reacting to) thoughts and urges flashing through one’s mind. This threshold can shift on the scale of weeks and months, but it also varies day by day. Momentum entails lowering that threshold, so that one’s focus on any given task can be deep, instead of shallow.

This predicts that meditation and meditation-like practices would lower the threshold and potentially start up a cycle of productivity momentum. Indeed, the only mechanism that I’ve found that has reliably helped me recover from unproductive mornings and afternoons is a kind of gently-enforced serenity process.

I think this one is pretty close to correct.

Hypothesis 10: [related to 2 and 8] It’s just about ambiguity resolution. Once I start working, I have a clear sense of what that’s like, which bounds the possible hedonic downside. (I should write more about ambiguity avoidance.)

 

 

Where did the radically ambitious thinking go?

[Epistemic status: Speculation based on two subjective datapoints (which I don’t cite).]

Turing and I.J. Good famously envisioned the possibility of a computer superintelligence, and furthermore presaged AI risk (in at least some throwaway lines). In our contemporary era, in contrast, these are fringe topics among computer scientists. The persons who have most focused on AI risk are outside of the AI establishment. And Yudkowsky and Bostrom had to fight to get that establishment to take the problem seriously.

Contemporaneously with Turing, the elite physicists of the generation (Szilard, in particular, but also others) were imagining the possibility of an atomic bomb and atomic power. I’m not aware of physicists today geeking out over anything similarly visionary. (Interest in fictional, but barely grounded, technologies like warp drives doesn’t cut it. Szilard could forecast atomic bombs and their workings from his knowledge of physics. Furthermore, it was plausible for Szilard to accomplish the development of such a device in a human lifetime, and he actively tried for it. This seems of a different type than idle speculation about Star Trek technologies.) At least one similarly world-altering technology, Drexlerian nanotechnology, is firmly not a part of the discourse between mainstream scientists. (I think. I suppose they talk about it among themselves surreptitiously, but in that case, I would be surprised by how little the scientific community has pursued it.)

I have the impression (not based on any hard data) that the scientists of the first half of the 20th century regularly explored huge, weird ideas, technologies and eventualities that, if they came to pass, could radically reshape the world. I further have the impression that most scientists today don’t do the same, or at least don’t do so in a serious way.

One way to say it: in 1920 talk of the possibility of an atomic bomb is in the domain of science, to be treated as a real and important possibility, but in 2018, nanotech is in the domain of science fiction, to be treated as entertainment. (Noting that atomic bombs were written about in science fiction.)

I don’t know why this is.

  • Maybe because current scientists are more self-conscious, not wanting to seem unserious?
  • Maybe because they have less of a visceral sense that the world can change in extreme ways?
  • Or maybe “science fiction” ideas became low status somehow? Perhaps because less rigorous people were hyping all sorts of nonsense, and intellectuals wanted to distance themselves from such people? So they adopted a more cynical attitude?
  • Maybe the community of scientists was smaller then, so it was easier to create common knowledge that an idea wasn’t “too out there”.
  • Maybe because nanotech and superintelligence are actually less plausible than atomic bombs, or are at least, more speculative?

I want to know: if this effect is real, what happened?

Cravyness – A hypothesis

[epistemic status: something I think is at least partially true, this week.]

[This is one of the fragments of thought that is leading up to some posts on “the Psychology and Phenomenology of Productivity” that I have bubbling inside of me.]

I sometimes find myself feeling “cravy.” I’ll semi-compulsively seek instant gratification, from food, from stimulation from youtube or webcomics, from masturbation. My attention will flit from object to object, instead of stabilizing on anything. None of that frantic activity is very satisfying, but it’s hard to break the pattern in the moment.

I think this state is the result of two highly related situations.

  1. I have some need (a social need, a literal nutrient, a sexual desire, etc.) that is going unfulfilled, and I’m flailing, trying to get it. I don’t know what I’m missing or wanting, so I’m just scrabbling after anything that has given me a dopamine hit in the past.
  2. Some relevant part of me currently alieves that one of my core goals (not necessarily an IG, but a core component of my path) is impossible, and is panicking. I’m seeking short term gratification because that part of me thinks that short term gratification is the only kind that’s possible, or is trying to distract itself from the pain of the impossibility.

 

 

(Eli’s notes to himself: Notably, both of these hypotheses suggest that Focusing would likely be effective… – Ah. Right. But I don’t usually do the “clearing a space” step.)

A hill and a blanket of consciousness (terrible metaphor)

[epistemic status: A malformed thought I had a couple of weeks ago, which turned into something else, and which seems important to me right now. As I was writing, it became more for me and less for public consumption.]

Some inspiration: I’m reading (well, listening to the audiobook of) Consciousness Explained. I’m also thinking about this Slate Star Codex post.

What does it mean for a thought to be conscious vs. unconscious? Taking for granted that there’s something like a chamber of Guf: there are a bunch of competing thoughts or thought fragments or associations or plans or plan fragments or whatever, occurring “under the surface”?

There’s a typical view of consciousness which is that it is discrete and boolean: you have a bunch of unconscious thoughts and some of them become conscious. You have a working memory, and you can manipulate the objects in working memory. (Working memory isn’t quite the same thing, though. You don’t have to be aware of the objects in working memory, you just need to be able to recall them when needed.)

But a lot of sources (Gendlin, the authors of The Mind Illuminated, Shinzen Young, (indirectly) Yudkowsky, and my own phenomenological experience) suggest that it’s more like a scalar gradient: some thoughts are more conscious, but there are also less conscious thoughts on the edges of awareness, that you can become more aware of with training.

Something like this metaphor:

Thoughts are like grains of sand piled into a hill or pyramid. The grains at the top are the most conscious, the easiest to see. The ones a bit further down are peripherally conscious. The further down you go, the less conscious it is.

Conscious awareness itself is like a blanket that you throw over the top of the hill. Most people’s blankets are pretty small: they only cover the very top of the hill. But with training, you can stretch out your blanket, so that it can cover more of the hill. You can become aware of more “unconscious” phenomena. (I need a different word for how high on the hill a thought is, something like its “absolute accessibility”, and how far the blanket reaches. Whether a thing is conscious depends on both the height on the hill and the size of the blanket.)

And to complicate the metaphor, thoughts are not really grains of sand. They’re more like ants, each trying to get to the top of the hill (I think? Maybe not all thoughts “want to be conscious”. In fact I think many don’t. ok. Scratch that.)

…They’re more like ants, many of which are struggling to get to the top of the hill, by climbing over their brethren. And also, some of the ants are attached to some of the other ants with strings, so that if one of them gets pulled up, it pulls up the other one.