Honor is about common knowledge?

[epistemic status: musing, just barely grounded in history. Inspired by chapter four of The Elephant in the Brain.]

I’ve often mused on how, in (most parts of) the modern world, physical violence is utterly out of the question (no one expects to be murdered over a business deal), but we take for granted that people are going to scam us, lie to us, try to cheat us, fail to follow through on agreements, or otherwise take advantage of us in the name of profit (the prototypical used car salesman is an excellent example, though the pattern is broader than that).

In contrast, in many other times and places, the reverse has held true: physical violence is common and expected, but lying or reneging on a promise is seen as an atrocious crime. To take a concrete instance, the Homeric epics are populated with brutal thugs and outright murderers (Odysseus and Diomedes are praised for “cleverly” slitting the throats of a group of sleeping Thracians and taking their stuff), but their norms of hospitality and ξενία (“guest friendship”) are regarded as sacred.

It is almost as if the locus of “civilization” has shifted. It used to be that being civilized meant keeping one’s commitments; now it means not outright murdering people.

I want to explore the connection between honesty and violence, and why they seem to trade off.

Why do “honor cultures” go with violence? 

There’s one natural reason why violence and sacred honor go together: if you don’t keep to your commitments, the other guy’ll kill you.

Indeed, he probably has to kill you, because your screwing him over represents an insult. If he doesn’t challenge that insult, it implies weakness or cowardice. He’s embedded in a system that depends on his physical courage: he has serfs to oppress, vassals to protect (and extract taxes from), and a lord or sovereign to whom he owes military service.

If it becomes common knowledge that he doesn’t challenge insults, and is therefore presumably not confident in his own military prowess, those serfs and vassals might think that this is a good time to try to throw off his yoke, and other enemies might think that his land and stuff are ripe for the picking, and attack.
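As a toy illustration of that logic (entirely my own construction, not anything from the historical literature; all probabilities are made-up illustrative numbers), treat each unanswered insult as Bayesian evidence about the lord’s strength, with a vassal rebelling once his estimate drops low enough:

```python
# Toy Bayesian sketch of the "unanswered insults invite rebellion" logic.
# Suppose a strong lord answers insults with probability 0.9 and a weak
# lord with probability 0.7, and a vassal rebels once his estimate of
# P(lord is strong) drops below 0.5. All numbers are made up for
# illustration.
P_ANSWER_IF_STRONG = 0.9
P_ANSWER_IF_WEAK = 0.7

p_strong = 0.85  # vassal's prior belief that the lord is strong
for insults in range(1, 6):
    # Bayesian update after observing one more unanswered insult
    num = (1 - P_ANSWER_IF_STRONG) * p_strong
    den = num + (1 - P_ANSWER_IF_WEAK) * (1 - p_strong)
    p_strong = num / den
    print(f"after {insults} unanswered insult(s): P(strong) = {p_strong:.2f}")
    if p_strong < 0.5:
        print("-> rebellion now looks worth the risk")
        break
```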

In the ancient world, the standard of what behavior was unacceptable was set by the common knowledge that the behavior was unacceptable, because it was that common knowledge that impelled the victim to seek recompense.

If an affront is weird enough not to register in common knowledge as an insult, one might be able to let it pass. But in practice, one can’t be sure what others will view as an insult, so people probably erred on the side of treating ambiguous affronts as insults, to avoid any possibility of seeming weak. So actions that we would consider innocuous (like minor lies) were big deals, and this became encoded in the culture.

A lord cannot afford to appear weak. Honor culture, and respect for commitments, follows from that.

Why does the modern world expect liars?

There’s a crucial difference in the modern world, though. In our era, the government maintains a monopoly on force.

This means, of course, that directly attacking someone who’s wronged you is out of the question. The government frowns on such behavior. Instead, disputes are handled not by duels or armed conflict, but by the courts.

This changes the rules of the game. Any scam that is legal enough to hold up in court is “fair game”, and if you don’t read the fine print on a contract that you signed, you might get screwed over.

In the modern world, the standard of what behavior is unacceptable is determined by the government and the courts. Anything that clearly breaks the law is out, but any affront that adversarially slips through the legal system (like a scammy contract) is treated as “the sort of thing that happens” in the marketplace, a risk to protect against.

I’m sure I’m not the first person to say something like this. Anyone have sources for academic analyses of the roots of honor cultures?

What is productivity momentum?

[Epistemic status: question, and fragmented thoughts. I’m confident that this is a thing, and quite uncertain about the mechanism that governs it. I expect to have a much better organized write-up on this topic soon, but if you want to watch the sausages get made, be my guest.]

[This is another fragment of my upcoming “Phenomenology and Psychology of Personal Productivity” posts (plentiful plosive ‘p’s!).]

There seems to be something like momentum or inertia to my productivity. If my first hour of the day is focused and I’m clipping through tasks, all of the later hours will be similarly focused and productive. But if I get off to a bad start, it curtails the rest of my day. How well my morning goes is the #2 predictor of how well the rest of my day will go (the first is sleep quality).

My day is strongly influenced by the morning, but this effect seems more general than that: how focused and “on point” I am in any given hour strongly influences how “on point” I will be in the coming hours. If I can successfully build momentum in the middle of the day, it is easier to maintain for the rest of the day. (Note: What do I mean by “on point”?)

Why does this phenomenon work like this?

(Note: these hypotheses are not mutually exclusive. Some, I think, are special cases of others.)

Hypothesis 1: My mind is mostly driven by short-term gratification. I can get short-term gratification in one of two ways: via immediate stimulation, or by making progress towards goals. Making progress towards goals is more satisfying, but it also has some delay. Switching from immediate stimulation to satisfaction from goal-progress entails a period of time when you’re not receiving immediate stimulation, and also not yet being satisfied by goal-progress, because you’re still revving up and getting oriented. It takes a while to get into the flow of working, at which point it starts being enjoyable.

But once you’re experiencing satisfaction from goal-progress, it feels good and you’re motivated to continue doing that.

Hypothesis 1.5: Same as above, but it isn’t about gratification from immediate stimulation vs. gratification from goal-progress. It’s about gratification from immediate stimulation vs. gratification from self-actualization or self-exertion, the pleasure of pushing yourself and exhausting yourself.

Hypothesis 2: There’s an activation energy or start-up cost to the more effortful mode of being productive, but once that cost is paid, it’s easy.

[I notice that the sort of phenomenon described in Hyp. 1, 1.5, and 2 is not unique to “productivity”. It also seems to occur in other domains. I often feel a disinclination to go exercise, but once I start, it feels good and I want to push myself. (Though, notably, this “broke” for me in the past few months. Perhaps investigating why it broke would reveal something about how this sort of momentum works in general?)]

Hypothesis 3: It’s about efficacy. Once I’ve made some progress, spent an hour in deep work, or whatever, the relevant part of my mind alieves that I am capable of making progress on my goals, and so is more motivated to do that.

In other words, being productive is evidence that something good will happen if I try, which makes it worthwhile to try.

(This would suggest that other boosts to one’s self-confidence or belief in one’s ability to do things would also jump-start momentum chains, which seems correct.)

Hypothesis 4: It’s about a larger time budget inducing parts-coordination. I have a productive first hour and get stuff done. A naive extrapolation says that if all of the following hours have a similar density of doing and completing, then I will be able to get many things done. Given this, all my parts that are advocating for different things that are important to them settle down, confident that their thing will be gotten to.

In contrast, if I have a bad morning, each part is afraid that its goal will be left by the wayside, and so they all scramble to drag my mind to their thing, and I can’t focus on any one thing.

[This doesn’t seem right. The primary contrasting state is more like lazy and lackadaisical, rather than frazzled.]

Hypothesis 5: It is related to failing with abandon. It’s much more motivating to be aiming to have an excellent day than it is to be aiming to recover from a bad morning to have a decent day. There’s an inclination to say “f*** it”, and not try as hard, because the payoffs are naturally divided into chunks of a day.

Or another way to say this: my motivation increases after a good morning because I alieve that I can get all the things done, and getting all the things done is much more motivating than getting 95% of the things done, because of completion heuristics (which I’ve already noted, but not written about anywhere).

Hypothesis 6: It’s about attention. There’s something that correlates with productivity which is something like “crispness of attention” and “snappiness of attentional shifts.” Completing a task and then moving on to the next one has this snappiness.

Having a “good morning” means engaging deeply with some task or project and really getting immersed in it. This sort of settledness is crucial to productivity and it is much easier to get into if I was there recently. (Because of fractionation?!)

Hypothesis 7: It’s about setting a precedent or a set point for executive function, or something? There’s a thing that happens throughout the day, which is that an activity is suggested, by my mind or by my systems, and some relevant part of me decides “Yes, I’ll do that now”, or “No, I don’t feel like it.”

I think those choices are correlated for some reason? The earlier ones set the standard for the later ones? Because of consistency effects? (I doubt that that is the reason. I would sooner expect a displacement effect (“ah, I worked hard this morning, I don’t need to do this now”) than a consistency effect (“I chose to work earlier today, so I’m a choose-to-work person”). In any case, this effect is way subverbal, and doesn’t involve the social mind at all, I think.)

This one feels pretty right. But why would it be? Maybe one of hypotheses 1-5?

Hypothesis 8: Working has two components: the effort of starting and the reward of making progress / completing.

If you’re starting cold, you have to force yourself through the effort, and it’s easier to procrastinate, putting the task off for a minute or an hour.

But if you’ve just been working on or just completed something else and are feeling the reward high from that, then the reward component of tasks in general is much more salient; it is pulled into near-mode immediacy, which makes the next task more compelling.
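To make the mechanics concrete, here is a toy formalization of this hypothesis (a sketch of my own construction, not a claim about actual cognitive machinery; the start cost, decay rate, and boost are all made-up illustrative parameters):

```python
import math

# Toy model of hypothesis 8: starting a task has a fixed effort cost, and
# the felt salience of reward spikes after completing a task, then decays
# exponentially while idle. You start the next task only while salience
# exceeds the start cost. All numbers are arbitrary illustrative choices.
START_COST = 1.0   # hypothetical fixed effort of starting a task
DECAY_RATE = 0.5   # hypothetical decay of reward salience per idle hour
BOOST = 2.0        # hypothetical salience right after completing a task

def salience(hours_idle):
    """Felt reward salience, some hours after finishing the last task."""
    return BOOST * math.exp(-DECAY_RATE * hours_idle)

for idle in [0.5, 1, 2, 4]:
    s = salience(idle)
    verdict = "start next task" if s > START_COST else "procrastinate"
    print(f"{idle}h after finishing: salience = {s:.2f} -> {verdict}")

# Short gaps keep salience above the start cost (momentum); a long break
# lets it decay below the cost, which is exactly the decay-after-a-break
# prediction discussed a few paragraphs down.
```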

I think this captures a lot of my phenomenological experience regarding productivity momentum and it also explains the related phenomena with exercise and similar.

(Also, there’s something like an irrational fear of effort, which builds up higher and higher as long as you’re avoiding it, but which dissipates once you exert some effort?)

(M/T on Hyp. 8:) If this were the case, it seems like it would predict that momentum would decay if one took a long break in the middle of the day. I think in practice this isn’t quite right, because the “productivity high” of a good morning can last for a long time, into the afternoon or evening.

My best guess is that it is hypothesis 8, but that the dynamic of hypothesis 5 is also in play. I’ll maybe consolidate all this thinking into a theory sometime this week.

Note: Also, maybe I’m only talking about flow?

Added 2018-11-28: Thinking about it further, I hit upon two other hypotheses that fit with my experience.

Hypothesis 7.5: [related to 1, 1.5, and 3. More or less a better reformulation of 7.] There’s a global threshold for distraction, for acting on (or reacting to) thoughts and urges flashing through one’s mind. This threshold can be lowered on the scale of weeks and months, but it also varies day by day. Momentum entails lowering that threshold, so that one’s focus on any given task can be deep, instead of shallow.

This predicts that meditation and meditation-like practices would lower the threshold and potentially start up a cycle of productivity momentum. Indeed, the only mechanism that I’ve found that has reliably helped me recover from unproductive mornings and afternoons is a kind of gently-enforced serenity process.

I think this one is pretty close to correct.

Hypothesis 10: [related to 2, and 8] It’s just about ambiguity resolution. Once I start working, I have a clear sense of what that’s like, which bounds the possible hedonic downside. (I should write more about ambiguity avoidance.)

Where did the radically ambitious thinking go?

[Epistemic status: Speculation based on two subjective datapoints (which I don’t cite).]

Turing and I.J. Good famously envisioned the possibility of a computer superintelligence, and furthermore presaged AI risk (in at least some throwaway lines). In our contemporary era, in contrast, these are fringe topics among computer scientists. The people who have focused most on AI risk are outside of the AI establishment, and Yudkowsky and Bostrom had to fight to get that establishment to take the problem seriously.

Contemporaneously with Turing, the elite physicists of the generation (Szilard, in particular, but also others) were imagining the possibility of an atomic bomb and atomic power. I’m not aware of physicists today geeking out over anything similarly visionary. (Interest in fictional but barely grounded technologies like warp drives doesn’t cut it. Szilard could forecast atomic bombs and their workings from his knowledge of physics. Furthermore, it was plausible for Szilard to see such a device developed within a human lifetime, and he actively worked toward it. This seems of a different type than idle speculation about Star Trek technologies.) At least one similarly world-altering technology, Drexlerian nanotechnology, is firmly not a part of the discourse among mainstream scientists. (I think. I suppose they might talk about it among themselves quietly, but in that case, I would be surprised by how little the scientific community has pursued it.)

I have the impression (not based on any hard data) that the scientists of the first half of the 20th century regularly explored huge, weird ideas, technologies and eventualities that, if they came to pass, could radically reshape the world. I further have the impression that most scientists today don’t do the same, or at least don’t do so in a serious way.

One way to say it: in 1920, talk of the possibility of an atomic bomb was in the domain of science, to be treated as a real and important possibility; in 2018, nanotech is in the domain of science fiction, to be treated as entertainment. (Noting that atomic bombs were also written about in science fiction.)

I don’t know why this is.

  • Maybe because current scientists are more self-conscious, not wanting to seem unserious?
  • Maybe because they have less of a visceral sense that the world can change in extreme ways?
  • Or maybe “science fiction” ideas became low status somehow? Perhaps because less rigorous people were hyping all sorts of nonsense, and intellectuals wanted to distance themselves from such people? So they adopted a more cynical attitude?
  • Maybe the community of scientists was smaller then, so it was easier to create common knowledge that an idea wasn’t “too out there”.
  • Maybe because nanotech and superintelligence are actually less plausible than atomic bombs, or are at least, more speculative?

I want to know: if this effect is real, what happened?

Cravyness – A hypothesis

[epistemic status: a thing I think is at least partially true, this week.]

[This is one of the fragments of thought that is leading up to some posts on “the Psychology and Phenomenology of Productivity” that I have bubbling inside of me.]

I sometimes find myself feeling “cravy.” I’ll semi-compulsively seek instant gratification: from food, from stimulation from YouTube or webcomics, from masturbation. My attention will flit from object to object, instead of stabilizing on anything. None of that frantic activity is very satisfying, but it’s hard to break the pattern in the moment.

I think this state is the result of two highly related situations.

  1. I have some need (a social need, a literal nutrient, a sexual desire, etc.) that is going unfulfilled, and I’m flailing, trying to get it. I don’t know what I’m missing or wanting, so I’m just scrabbling after anything that has given me a dopamine hit in the past.
  2. Some relevant part of me currently alieves that one of my core goals (not necessarily an IG, but a core component of my path) is impossible, and is panicking. I’m seeking short-term gratification because that part of me thinks short-term gratification is the only kind that’s possible, or is trying to distract itself from the pain of the impossibility.

(Eli’s notes to himself: Notably, both of these hypotheses suggest that Focusing would likely be effective… – Ah. Right. But I don’t usually do the “clearing a space” step.)

A hill and a blanket of consciousness (terrible metaphor)

[epistemic status: A malformed thought I had a couple of weeks ago, which turned into something else, and which seems important to me right now. As I was writing, it became more for me and less for public consumption.]

Some inspiration: I’m reading (well, listening to the audiobook of) Consciousness Explained. I’m also thinking about this Slate Star Codex post.

What does it mean for a thought to be conscious vs. unconscious? Taking for granted that there’s something like a chamber of Guf: there are a bunch of competing thoughts or thought fragments or associations or plans or plan fragments or whatever, occurring “under the surface”?

There’s a typical view of consciousness on which it is discrete and boolean: you have a bunch of unconscious thoughts and some of them become conscious. You have a working memory, and you can manipulate the objects in working memory. (Working memory isn’t quite the same thing, though. You don’t have to be aware of the objects in working memory; you just need to be able to recall them when needed.)

But a lot of sources (Gendlin, the authors of The Mind Illuminated, Shinzen Young, (indirectly) Yudkowsky, and my own phenomenological experience) suggest that it’s more like a scalar gradient: some thoughts are more conscious, but there are also less conscious thoughts on the edges of awareness, which you can become more aware of with training.

Something like this metaphor:

Thoughts are like grains of sand piled into a hill or pyramid. The grains at the top are the most conscious, the easiest to see. The ones a bit further down are peripherally conscious. The further down you go, the less conscious they are.

Conscious awareness itself is like a blanket that you throw over the top of the hill. Most people’s blankets are pretty small: they only cover the very top of the hill. But with training, you can stretch out your blanket, so that it can cover more of the hill. You can become aware of more “unconscious” phenomena. (I need different words for how high on the hill a thought is, something like its “absolute accessibility”, and for how far the blanket reaches. Whether a thing is conscious depends on both its height on the hill and the size of the blanket.)

And to complicate the metaphor, thoughts are not really grains of sand. They’re more like ants, each trying to get to the top of the hill (I think? Maybe not all thoughts “want to be conscious”. In fact I think many don’t. ok. Scratch that.)

…They’re more like ants, many of which are struggling to get to the top of the hill by climbing over their brethren. And also, some of the ants are attached to other ants with strings, so that if one of them gets pulled up, it pulls up the other one.

The top of the pyramid is constant

[epistemic status: incomplete thought, perhaps to be followed up on in later posts]

I just read most of this article in The Atlantic, which points out that despite increasing investment (of both money and manpower) in science, the rate of scientific discovery is, at best, commensurate with scientific progress in the 1930s, and may not even be meeting that bar.

(This basic idea is something that I’ve been familiar with for several years. Furthermore, this essay reminds me of something I read a few months ago: that the number of scientific discoveries named after their discoverers (a baseline metric for importance?) is about the same decade to decade, despite vastly more scientists. [I know the source, but I can’t be bothered to cite it right now. Drop a message in the comments if you want it.])

When I read the headline of this article, my initial hypothesis was this:

Very few people in the world can do excellent groundbreaking science. Doing excellent scientific research requires both a very high intrinsic intelligence, and additionally, some other cognitive propensities and dispositions which are harder to pin down. In earlier decades science was a niche enterprise that attracted only these unusual people.

Today, science is a gigantic network of institutions that includes many times as many people. It still attracts the few individuals capable of being excellent scientists, but it also includes 10 to 1000 times as many people who don’t have the critical properties.

My posit: The great scientists do good work. Any additional manpower put into the scientific institutions is approximately useless. So the progress of science is constant.

(There’s probably a second-order factor whereby all those extra people, and especially the bureaucracy that is required to manage and organize them all, get in the way and make it harder for the best scientists to do their work. (And in particular, it might dilute the attention of the best scientists in training their successors, which weakens the transmission of the cognitive-but-non-biological factors that contribute to “great-scientist-ness.”)

But I would guess that this is mostly a minor factor.)

But…

Between 1900 and 2015, the world population increased by close to 5 times. It seems like, if my model were correct, the number of “great scientists” today would be higher than it was in 1930, if only because of population growth (ignoring things like the Flynn effect).
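To make the arithmetic explicit, here’s a minimal sketch, assuming (purely for illustration) that great-scientist capability is a fixed-rarity trait; the population figures are rough and the one-in-ten-million rarity is an arbitrary made-up parameter:

```python
# Minimal sketch of the naive fixed-rarity model: if the capacity for great
# science is a trait of fixed rarity, the count of great scientists should
# scale linearly with population. Populations are rough; the rarity is an
# arbitrary illustrative assumption.
POP_1900 = 1.65e9   # approximate world population in 1900
POP_2015 = 7.35e9   # approximate world population in 2015
RARITY = 1e-7       # hypothetical: one "great scientist" per 10 million people

for year, pop in [(1900, POP_1900), (2015, POP_2015)]:
    print(year, round(pop * RARITY))  # 1900 -> 165, 2015 -> 735

# Roughly 4.5x as many great scientists in 2015 under this model, yet the
# measured rate of discovery looks flat decade to decade.
```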

Why aren’t there 5x as many great scientists? Maybe the bureaucracies-getting-in-the-way factor was bigger than I thought?

Maybe the “adjacent possible” of scientific discoveries increases linearly, for some reason, instead of exponentially, as one would expect?

Or maybe “discoveries named after their discoverers” is not a good proxy for “important discoveries”, because it’s a status symbol, and the number of people at the top of a status hierarchy is constant, even if the hierarchy itself is much bigger.