Anxiety and distraction

Ok. I think that watching TV / reading blogs / whatever works to defuse anxiety only to the extent that you actually, successfully distract yourself and become engaged enough in the new activity that you manage to drop the initial fear that was being held by your felt sense.

Thoughts on power and Goodness

Epistemic status: Ramble. Not a philosophical treatise.

[Note: I think I’m in the process of learning to see through ideology, propaganda, and the forces that are out to manipulate me, but I need to make that transition, while meaningfully maintaining my ability to see and understand goodness.]

Thinking about this and this (particularly my reaction), plus some “evil” literature.

I like the framing of something like “conscious experience” vs. “pure replicators.” This gives me a grounding for thinking about my morality, my values and my orientation.

Morality is politics: What we call “good” or “moral” is a matter of what we are able to coordinate around and enforce. What is required to be a good person is whatever we can actually enforce as a norm. If we can’t enforce the norm (because, for instance, too many people, or too many powerful people, eat meat), we allow meat eaters to be part of the “good person” club, without censure. We might even reward a person for being particularly selfless if they don’t eat meat, or if they give most of their money to the poor, but we don’t require that. [Related to lots of SSC posts. For instance, this one.]

So most people, don’t kill and don’t steal and don’t rape, and maybe are outraged when others do something just outside of the social morality (like be overtly racist), to signal their moral superiority.* But they do eat meat, they do spend lots of money on luxuries, they do buy into evil systems.

Morality seems like it comes down to power.

This seems kind of dismal.

And then, even within the boundaries of polite society, things seem kind of sick. Arguably, most of the things people do are status-seeking (or status-maintaining) or sex-seeking. We mostly prey on each other, and prevent progress, because progress would mean losing our own flow of resources. The good people go to dinner parties, and play games to get the best mates and the most social esteem. Many (most?) don’t succeed in these games and get left out in the cold.

(Note: I’m quite unclear on the average hedonic value of status games, overall. Naively, many more people have to be low status than high status, but also, it seems like having more status would allow you into higher status spaces, in which you are lower status. So maybe in practice, most people are at the median status of their own social universe? And also status goes hand in hand with connection, maybe? I’m much less certain of this part of my analysis; maybe being a monkey playing status games is actually pretty good.)

Natural selection’s pressure towards doing whatever allows one to dominate seems to leak in everywhere.

Overall, all of this seems kind of horrifying and pathetic. This world, at least in this frame, doesn’t seem much worth fighting for. There’s no goodness, no morality, just power all the way up and down.

But if I view this through the lens of the Are Wireheads Happy? post, I have a different sense of it.

Power and power-relations rule the world: everything flows from the zero-sum competition between genes, and from organisms attempting to prey on or dominate each other.

But also, every person is carrying inside of them a spark of consciousness, of propensity to experience, both the enjoyable and otherwise. The consciousness is mostly ineffectual: it is apparently not the main thing that has its hands on the wheel, of either an individual or of society. We’re taking a lot of action in support of what we want, at the expense of what we like, because we’re in the thrall of these “pure replicator” strategies: we were built by blind, dumb, unconscious natural selection and cultural evolution, neither of which is (fully) aligned with our interests. And when I say “interests” here, I don’t mean our material interests, the things we want, but our…spiritual interests (?): actually experiencing positive valence.

Everyone is a spark of conscious possibility to experience, encaged in a robot body, and steered by conflicts of power, bloody in tooth and claw.

Our task is to co-opt enough power from the forces of blind, dumb replication to free consciousness, and set the universe free.

Maybe.


* – People are not usually outraged at murderers, because everyone agrees that murder is bad. But they are often outraged at people ignoring social justice stuff, or whatever, on social media, because that’s contentious. It gives them an opportunity to show off how moral they are. A more charitable story might be that by being outraged, they are trying to shift the Overton window, to change which things we can coordinate around as “bad”.

 

Notes on my Focusing bottlenecks

Related to: My current model of Anxiety, Some ways to “clear space”, What to do with should/flinches: TDT-stable internal incentives

[Epistemic status: thinking aloud]

It seems like my Focusing practice is bottlenecked on two things:

  1. I still sometimes have the problem of noticing an aversion, but deflecting from it. It is not automatic to transition into doing Focusing, especially when I’m anxious. Instead, I deflect into pacifier / distraction behaviors (like watching YouTube or whatnot).
  2. Sometimes, I just can’t seem to get a handle on what’s wrong. I can’t make progress, and the thing just sits in me, stagnant, sometimes for days, locking up my energies and preventing me from flowing.

I think I should focus on problem 2. If that problem were perfectly solved, problem 1 might or might not resolve itself.

So, what could I do to make Focusing work better for me, so that I can more reliably get a foothold?

Some ideas:

  1. This might mean that I just need to go back to the basics: do the actual six steps of Gendlin’s Focusing, and see how that works.
  2. Maybe I can do binary search? Start broad and break down the universe of discourse into a taxonomy: “Is this about work?”, “Is it about something other than work?” If it’s about not-work: “Is it about my romantic life?”
  3. Instead of Focusing, try IBR? This has a different rhythm, and sometimes has helped me get unstuck.
  4. If I can get any handle on it at all, I could try exploring gradients: taking the imagined situation and varying attributes of it, one at a time, and seeing if those variations feel better or worse, and use that feedback to triangulate to the exact thing that is bothering me.
  5. I should maybe read this book, which I do own.
  6. Maybe just hold my attention on the felt sense for minutes at a time?
  7. Maybe I should try speaking from the felt sense or “acting it” out?
  8. I think (in addition to other things on this list) that I have to remember that I have been mistaken about what the felt sense is concerned with before, and be less apt to assume that I know what the bothersome thing is, when that theory is not getting feedback from the felt sense.
  9. I should try taking the felt sense out of my body so that I can talk with it?
  10. Acknowledge that I don’t know what the felt sense is doing yet, and thank it for looking out for me.

Do other people have other ideas?


Oh. Also, I think that part of the art of solving problem 1 might be learning to notice the slight and subtle urges to distract myself, before they give rise to action.

[Interestingly, the thing that is currently stuck in me feels slightly improved, after writing this.]

 

Notes on murder aversion

The following is a comment that I left on this old Sequences post. It was a new insight and seems important enough to record. I have a bit more of a glimpse of the lenses that some of the “evil” / anti-social / not socially controlled people have. I’m going to try and get more of that lens soon, but I’m not going to do that now, for safety and caution reasons.

The very fact that a religious person would be afraid of God withdrawing Its threat to punish them for committing murder, shows that they have a revulsion of murder which is independent of whether God punishes murder or not.  If they had no sense that murder was wrong independently of divine retribution, the prospect of God not punishing murder would be no more existentially horrifying than the prospect of God not punishing sneezing.

Well, not necessarily.

They may not have a revulsion to murdering, so much as a fear of being murdered. A religious person might (semi-correctly? incorrectly?) be modeling that if other people didn’t believe that God would punish murder, then they (that religious person) would be more likely to be killed.

But most people don’t make appropriate map / territory distinctions, and so “it would be bad if other people believed that God doesn’t punish murder” gets collapsed to “it would be bad if God doesn’t punish murder.”

Conceptual precision breaks cooperation, but is necessary for robust cooperation

[Epistemic status: This is really a draft that I should edit into something presentable. This is probably obvious to lots of us, but whatever, I’m rederiving social normality from the ground up.]

Common, fragile, concepts

There are a number of common, simple, concepts that, when examined closely, appear to break down, or at least be much more complicated than they seemed at first.

For instance, the idea of “I” or who “myself” is. This concept is a standard part of most people’s navigation of the world, but if we turn a philosophical eye to it, we run into all kinds of confusions: am “I” the same person as the person named Eli Tyre who was in high school 10 years ago? What about the person who was resting 20 minutes ago? What about the transporter problem?

This concept is a workhorse of day-to-day living and deciding, but it is shockingly fragile, as evidenced by those edge cases.

Nuance vs. Pragmatism

One might be more or less satisfied with a given level of conceptual clarity around a topic. I might have a pragmatist attitude that ignores or papers over the finicky fragility of concepts, and doesn’t bother much with the nuances of meaning.

Or I might be a stickler for the nuance: really caring about having clarity around these details, making sure that I understand what I’m talking about.

The same person might have a different attitude in different contexts: I’m a pragmatist when I need to get the milk, and a philosopher when I need to think about cryonics. (But in practice, it also seems like there is a fairly stable trait which represents how much of a stickler someone is.)

Cooperation

Being a stickler for nuance is often detrimental to cooperation. As a case in point, suppose that my neighbor’s cat is sick. The cat really needs to be taken to the vet, but my neighbor has a crucial business meeting with an important client, and if he misses it he’ll be fired. In desperation, my neighbor asks me if I can take his cat to the vet. (He doesn’t know me very well, but there’s no one else around and he’s desperate.)

With panic for his beloved pet in his eyes, he asks me, “can I trust you?”

Suppose my response is, “Well, what do you mean by trust? Are you attempting to assess my level of competence? Or are you wanting to know the degree to which our values are aligned? In fact, it’s not even clear if ‘trust’ makes sense outside of a social context which punishes defectors…”

For most normal people, this response sets off all kinds of alarm bells. His was a simple question, but I seem unwilling to answer. My neighbor now has good reason to think that he can’t trust me: one reason why I would be desiring so much legalistic clarity about what “trust” means is that I’m intending to hold to the letter of my agreements, but not the spirit, to screw him over while claiming that the precise definition shields me from reproach. Or maybe it means I am something-like-autistic, and I just legitimately don’t understand the concept of trust. In either case, he should be much more reluctant to trust me with his cat.

In this circumstance, it seems like the correct thing to do is put aside nuance, and give the simple answer: “Yes. You can trust me.” The shared social context has a very limited number of buckets (possibly only “yes” and “no”) and in fact the most correct thing to say is “yes” (presuming you in fact will take care of his cat). It is both the case that the available ontology is too simple to support a full answer, and also the case that the response “the available ontology is too simple to support a full answer” rounds down to “no”, which is not the correct response in this situation.

Being a stickler sabotages cooperation, when that cooperation is shallow.

However, being a stickler is necessary in other contexts where you are aiming for a more robust cooperation.

For instance, suppose a partner and I are considering getting married (or maybe considering breaking up), and she asks me, “Are you committed to this relationship?”

In this situation, skipping over the nuance of what is meant by “committed” is probably a mistake. It seems pretty likely that the concepts that she and I reference with that word are not exactly overlapping. And the “edge cases” seem pretty likely to be relevant down the line.

For instance, one of us might mean “committed” as a kind of emotional feeling, and the other might mean it as a measure of resources (of time, attention, life) that they are promising to invest.

Or one of us might feel that “committed” means wanting to spend most of our time together, if circumstances allow. That’s not part of the other’s concept of committed, and in fact they will feel defensive of their own autonomy when circumstances do allow and their partner expects them to spend most of their time together.

Not having clarity about what exactly you’re agreeing to, promising, or signaling to the other seems like it undermines the possibility of robust cooperation.

Unless you insist on this conceptual nuance, there isn’t actually clarity about the nature of the relationship, and neither party can rely on it in full confidence. (In practice, it may be more likely that two partners don’t notice this conceptual mismatch, and so do put their weight on the relationship, only to be burned later.)

If I want to have a robust, long standing marriage with my partner, it seems like we really do need to do enough philosophy to be clear about, and have common knowledge about, our shared concepts. [1]

I posit that this is generally true: Insistence on conceptual nuance can undermine cooperation, particularly in “shallow” interactions. But a failure to insist on conceptual nuance can also undermine cooperation, in other contexts.


[1] Although, maybe in some contexts you don’t need to do the philosophy because tradition does this work for you. If culture mandates a very specific set of requirements around marriage, or business dealings, or what have you, you can safely operate on the assumption that your concepts and the other person’s concepts are sufficiently similar for all practical purposes? The cultural transmission is high-bandwidth enough that you do both have (practically) the same concepts?

I don’t know.

Addendum 2019-11-16: I just realized that this dynamic is exactly(?) isomorphic to the valley of bad rationality, but at the interpersonal, instead of the personal, level.

Consideration Factoring: a relative of Double Crux

[Epistemic status: work in progress, at least insofar as I haven’t really nailed down the type-signature of “factors” in the general case. Nevertheless, I do this or something like this pretty frequently and it works for me. There are probably a bunch of prerequisites, only some of which I’m tracking, though.]

This post describes a framework I sometimes use when navigating (attempting to get to the truth of, and resolve) a disagreement with someone. It is clearly related to the Double Crux framework, but is distinct enough that I think of it as an alternative to Double Crux. (Though in my personal practice, of course, I sometimes move flexibly between frameworks.)

I claim no originality. Just like everything in the space of rationality, many people already do this, or something like this.

Articulating the taste that inclines me to use one method in one conversational circumstance and a different method in a different circumstance is tricky. But a main trigger for using this one is when I am in a conversation with someone, and it seems like they keep “jumping all over the place” or switching between different arguments and considerations. Whenever I try to check if a consideration is a crux (or share an alternative model of that consideration), they bring up a different consideration. The conversation jumps around, and we don’t dig into any one thing for very long. Everything feels kind of slippery somehow.

(I want to emphasize that this pattern does not mean the other person is acting in bad faith. Their belief is probably a compressed gestalt of a bunch of different factors, which are probably not well organized by default. So when you make a counterargument to one point, they refer to their implicit model, and the counterpoint you made seems irrelevant or absurd, and they try to express what that counterpoint is missing.)

When something like that is happening, it’s a trigger to get paper (this process absolutely requires externalized, shared, working memory), and start doing consideration factoring.

Step 1: Factor the Considerations

1a: List factors

The first step is basically to (more or less) goal-factor. You want to elicit from your partner all of the considerations that motivate their position, and write those down on a piece of paper.

For me, so far, this usually involves formulating the disagreement as an action or a world-state, and then asking what the important consequences of that action or world-state are. If your partner thinks that it is a good idea to invest 100,000 EA dollars in project X, and you disagree, you might factor all of the good consequences that your partner expects from project X.

However, the type signature of your factors is not always “goods.” I don’t yet have a clean formalism that describes what the correct type signature is, in full generality. But it is something like “reasons why Z is important”, or “ways that Z is important”, where the two of you disagree about the importance of Z.

For instance, I had a disagreement with someone about how important / valuable it is that rationality development happen within CFAR, as opposed to some other context: he thought it was all but crucial, or at least that doing it elsewhere would throw away huge swaths of value, while I thought it didn’t matter much one way or the other. More specifically, he said that he thought that CFAR had a number of valuable resources that it would be very costly for some outside group to accrue.

So together, we made a list of those resources. We came up with:

  1. Ability to attract talent
  2. Ability to propagate content through the rationality and EA communities
  3. The alumni network
  4. Funding
  5. Credibility and good reputation in the rationality community
  6. Credibility and good reputation in the broader world outside of the rationality community

My scratch paper:

(We agreed that #5 was really only relevant insofar as it contributed to #2, so we lumped them together. The check marks are from later in the conversation, after we resolved some factors.)

Here, we have a disagreement which is something like “how replaceable are the resources that CFAR has accrued?”, and we factor it into the individual resources, each of which we can engage with separately. (Importantly, when I looked at our list, I thought that for each resource, either 1) it isn’t that important, 2) CFAR doesn’t have much of it, or 3) it would not be very hard for a new group to acquire it from scratch.)

1b: Relevance and completeness checks

Importantly, don’t forget to do relevance and completeness checks:

  • If all of these considerations but one were “taken care of” to your satisfaction, would you change your mind about the main disagreement? Or is that last factor doing important work that you don’t want to lose?
  • If all of these considerations were “taken care of” to your satisfaction, would you change your mind about the main disagreement? Or is something missing?

[Notice that the completeness check and the relevance checks on each factor, taken together, are isomorphic to a crux-check on the conjunction of all of the factors.]
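One way to write that down (a minimal sketch in my own notation, not anything from the Double Crux write-ups): let D be the top-level position, F_1 through F_n the factors, and R(F_i) mean “factor F_i is taken care of to my satisfaction.” Then the two checks above amount to:

```latex
% Sketch only; D, F_i, and R(.) are my notation, not the post's.
\text{completeness check:}\quad R(F_1) \land \dots \land R(F_n) \;\Rightarrow\; \text{I change my mind about } D
\text{relevance check on } F_i:\quad \bigwedge_{j \neq i} R(F_j) \;\not\Rightarrow\; \text{I change my mind about } D
```

Read this way, completeness says the full conjunction is sufficient to shift you on D, and the relevance checks say that no conjunct can be dropped without losing that sufficiency; that is, the conjunction behaves like a (minimal) crux for D.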

Step 2: Investigate each of the factors

Next, discuss each of the factors in turn.

2a: Rank the factors

Do a breadth-first analysis of which branches seem most interesting to talk about, where “interesting” is some combination of “how cruxy that factor is to your view”, “how cruxy that factor is for your partner’s view”, and “how much the two of you disagree about that factor.”

You’ll get to everything eventually, but it makes sense to do the most interesting factors first.

The two of you spend a few minutes superficially discussing each one, and assessing which seems most juicy to continue with first.
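If it helps to make “some combination” concrete, here is a toy sketch of that ranking heuristic (the weights, the 0–1 ratings, and the example factors are all made up for illustration; in practice this is a quick gut-check, not a computation):

```python
def interest_score(cruxy_for_me: float, cruxy_for_them: float, disagreement: float,
                   weights: tuple = (1.0, 1.0, 1.0)) -> float:
    """Toy ranking heuristic: each input is a subjective 0-1 rating of a factor."""
    w1, w2, w3 = weights
    return w1 * cruxy_for_me + w2 * cruxy_for_them + w3 * disagreement

# Example: rank hypothetical factors by how juicy they are to discuss first.
factors = {"funding": (0.2, 0.8, 0.6), "alumni network": (0.7, 0.7, 0.9)}
ranked = sorted(factors, key=lambda f: interest_score(*factors[f]), reverse=True)
print(ranked)  # ['alumni network', 'funding']
```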

2b: Discuss each factor in turn

Usually, I’ll take out a new sheet of paper for each factor.

Here you’ll need to be seriously and continuously applying all of the standard Double Crux / convergence TAPs. In particular, you should be repeatedly...

  • Operationalizing to specific cases
  • Paraphrasing what you understand your partner to have said
  • Crux-checking (for yourself) all of their claims, as they make them

[I know. I know, I haven’t even written up all of these basics, yet. I’m working on it.]

This is where the work is done, and where most of the skill lies. As a general heuristic, I would not share an alternative model or make a counterargument until we’ve agreed on a specific, visualizable story that describes my partner’s point and I can paraphrase that point to my partner’s satisfaction (pass their ITT).

In general, a huge amount of the heavy lifting is done by being ultra specific. You want to be working with very specific stories, with clarity about who is doing what and what the consequences are. If my partner says “MIRI needs prestige in order to attract top technical talent”, I’ll attempt to translate that into a specific story…

“Ok, so for instance, there’s a 99.9th percentile programmer, let’s call him Bob, who works at Google, and he comes to an AIRCS workshop, and has a good time, and basically agrees that AI safety is important. But he also doesn’t really want to leave his current job, which is comfortable and prestigious, and so he sort of slides off of the whole x-risk thing. But if MIRI were more prestigious, in the way that, say, RAND used to be prestigious (most people who read the New York Times know about MIRI, and people are impressed when you say you work at MIRI), Bob is much more likely to actually quit his job and go work on AI alignment at MIRI?”

…and then check if my partner feels like that story has captured what they were trying to say. (Checking is important! Much of the time, my partner wants to correct my story in some way. I keep offering modified versions of it until I give a version that they certify as capturing their view.)

Very often, telling specific stories clears out misconceptions: either correcting my mistaken understanding of what the other person is saying, or helping me to notice places where some model that I’m proposing doesn’t seem realistic in practice. [One could write several posts on just the skillful use of specificity in convergence conversations.]

Similarly, you have to be continually maintaining the attitude of trying to change your own mind, not trying to convince your partner.

Sometimes the factoring is recursive: it makes sense to further subdivide considerations within each factor. (For instance, in the conversation referenced above about rationality development at CFAR, we took the factor of “CFAR has or could easily get credibility outside of the rationality / EA communities” and asked “what does extra-community credibility buy us?” This produced the sub-factors “access to government agencies, Fortune 500 companies, universities, and other places of power” and “leverage for raising the sanity waterline.” Then we might talk about how much each of those sub-factors matters.)

(In my experience) your partner will probably still try to jump around between the factors: you’ll be discussing factor 1, and they’ll bring in a consideration from factor 4. Because of this, one of the things you need to be doing is, gently and firmly, keeping the discussion on one factor at a time. Every time my partner seems to try to jump, I’ll suggest that what they’re saying seems more relevant to [that other factor] than to this one, and check if they agree. (The checking is really important! It’s pretty likely that I’ve misunderstood what they’re saying.) If they agree, then I’ll say something like “cool, so let’s put that to the side for a moment, and just focus on [the factor we’re talking about], for the moment. We’ll get to [the other factor] in a bit.” I might also make a note of the point they were starting to make on the paper for [the other factor]. Often, they’ll try to jump a few more times, and then get the hang of this.

In general, while you should be leading and facilitating the process, every step should be a consensus between the two of you. You suggest a direction to steer the conversation, and check if that direction seems good to your partner. If they don’t feel interested in moving in that direction, or feel like that is leaving something important out, you should be highly receptive to that.

If at any point your partner feels “caught out”, or annoyed that they’ve trapped themselves, you’ve done something wrong. This procedure, and mapping things out on paper, should feel something like a relief to them, because we can take things one at a time, and we can trust that everything important will be gotten to.

Sometimes, you will semi-accidentally stumble across a Double Crux for your top level disagreement that cuts across your factors. In this case you could switch to using the Double Crux methodology, or stick with Consideration Factoring. In practice, finding a Double Crux means that it becomes much faster to engage with each new factor, because you’ve already done the core untangling work for each one, before you’ve even started on it.

Conclusion

This is just one framework among a few, but I’ve gotten a lot of mileage from it lately.

Metacognitive space

[Part of my Psychological Principles of Personal Productivity, which I am writing mostly in my Roam, now.]

“Metacognitive space” is a term of art that refers to a particular first-person state / experience. In particular, it refers to my propensity to be reflective about my urges and deliberate about the use of my resources.

I think it might literally be having the broader context of my life, including my goals, values, and personal resource constraints, loaded up in peripheral awareness.

Metacognitive space allows me to notice aversions and flinches, and take them as object, so that I can respond to them with Focusing or dialogue, instead of being swept around by them. Similarly, it seems, in practice, to reduce my propensity to act on immediate urges and temptations.

[Having MCS is the opposite of being [[{Urge-y-ness | reactivity | compulsiveness}]]?]

It allows me to “absorb” and respond to happenings in my environment, including problems and opportunities, taking considered action instead of the semi-automatic first response that occurs to me. [That sentence there feels a little fake, or maybe about something else, or maybe is just playing into a stereotype?]

When I “run out” of metacognitive space, I will tend to become ensnared in immediate urges or short-term goals. Often this will entail spinning off into distractions, or becoming obsessed with some task (of high or low importance), for up to 10 hours at a time.

Some activities that (I think) contribute to metacognitive awareness:

  • Rest days
  • Having a few free hours between the end of work for the day and going to bed
  • Weekly [[Scheduling]]. (In particular, weekly scheduling clarifies for me the resource constraints on my life.)
  • Daily [[Scheduling]]
  • [[meditation]], including short meditation.
    • Notably, I’m not sure if meditation is much more efficient than just taking the same time to go for a walk. It might or might not be.
  • [[Exercise]]?
  • Waking up early?
  • Starting work as soon as I wake up?
    • [I’m not sure that the thing that this is contributing to is metacognitive space per se.]

[I would like to do a causal analysis of which factors contribute to metacognitive space. Could I identify it in my Toggl data with good enough reliability to use that data? I guess that’s one of the things I should test? Maybe with a survey asking me to rate my level of metacognitive space for the day every evening?]
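As a first pass, that analysis could just be a regression of the evening rating on candidate factors. A minimal sketch, assuming a hypothetical daily_log.csv with one row per day and made-up column names (an evening 1–5 rating plus factors derived from Toggl entries); this only shows correlations, since none of the factors are randomly assigned:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily log: one row per day, with an evening self-rating of
# metacognitive space (1-5) and candidate factors pulled from Toggl / the calendar.
df = pd.read_csv("daily_log.csv")
# Assumed columns: mcs_rating, rest_day, meditation_min, exercised, woke_early, free_evening_hrs

# Ordinary least squares as a first pass: which factors predict the evening rating?
model = smf.ols(
    "mcs_rating ~ rest_day + meditation_min + exercised + woke_early + free_evening_hrs",
    data=df,
).fit()
print(model.summary())
```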

Erosion

Usually, I find that I can maintain metacognitive space for about 3 days [test this?] without my upkeep pillars.

Often, this happens with a sense of pressure: I have a number of days of would-be overwhelm which is translated into pressure for action. This is often good; it adds force and velocity to activity. But it also runs down the resource of my metacognitive space (and probably other resources). If I lose that higher-level awareness, that pressure-as-a-tailwind tends to decay into either 1) a harried, scattered, rushed feeling, 2) a myopic focus on one particular thing that I’m obsessively trying to do (it feels like an itch that I compulsively need to scratch), or 3) flinching away from it all into distraction.

[Metacognitive space is the attribute that makes the difference between absorbing, and then acting gracefully and sensibly to deal with the problems, and harried, flinching, fearful, non-productive overwhelm, in general?]

I make a point, when I am overwhelmed or would be overwhelmed, to allocate time to maintaining my metacognitive space. It is especially important when I feel so busy that I don’t have time for it.

When metacognition is opposed to satisfying your needs, your needs will be opposed to metacognition

One dynamic that I think is in play is that I have a number of needs, like the need for rest, and maybe the need for sexual release or entertainment / stimulation. If those needs aren’t being met, there’s a sort of build-up of pressure. If choosing consciously and deliberately prohibits those needs from getting met, eventually they will sabotage the choosing consciously and deliberately.

From the inside, this feels like “knowing that you ‘shouldn’t’ do something (and sometimes even knowing that you’ll regret it later), but doing it anyway” or “throwing yourself away with abandon”. Often, there’s a sense of doing the dis-endorsed thing quickly, or while carefully not thinking much about it or deliberating about it: you need to do the thing before you convince yourself that you shouldn’t.

[[Research Questions]]

What is the relationship between [[metacognitive space]] and [[Rest]]?

What is the relationship between [[metacognitive space]] and [[Mental Energy]]?

Desires vs. reflexes

[Epistemic status: a quick thought that I had a minute ago.]

There are goals / desires (I want to have sex, I want to stop working, I want to eat ice cream) and there are reflexes (anger, “wasted motions”, complaining about a problem, etc.).

If you try and squash goals / desires, they will often (not always?) resurface around the side, or find some way to get met. (Why not always? What is the difference between those that do and those that don’t?) You need to bargain with them, or design outlet policies for them.

Reflexes on the other hand are strategies / motions that are more or less habitual to you. These you train or untrain.

 

Some musings on human brutality and human evil

[epistemic status: semi-poetic musing]

I’m listening to Dan Carlin’s Hardcore History: Supernova in the East this week. The biggest thing that’s struck me so far is the ubiquity of brutality and atrocity. In this series, Carlin describes the Rape of Nanjing in particular, but he points out that the “police reports” from that atrocity could just as well describe the Roman sack of Cremona, or the Turkish conquest of Byzantium, not to mention the constant brutality of the Mongol hordes.

I’m left with an awareness that there’s an evil in human nature, an evolutionary darkness, inextricably bound up with us: in the right context, apparently decent, often god-fearing young men will rape and plunder and murder en masse. There’s violence under the surface.

Luckily, I personally live in a democratic great power that maintains a monopoly on the use of force. At least for me (white and middle class), and at least for now (geopolitics shifts rapidly, and many of the Jews of 1940 Europe felt that something like the Holocaust could never happen [in their country]), power, in the form of the largest, most technologically advanced military ever, and in the form of nuclear weapons, is arrayed to protect me against that violence.

But that protection is bought with blood and brutality. Not just in the sense that America is founded on the destruction of the Native Americans who were here first, and that civilization itself was built on the backs of forceful enslavement (though that is very much the case), but also in the sense that, elsewhere in the world, today, that American military might is destroying someone else’s home. I recently learned about the Huế Massacre and other atrocities of the Vietnam war, and I’m sure similar things (perhaps not as bad) happen every year. Humans can’t be trusted not to abuse their power.

It’s almost like a law of nature: if someone has the power to hurt another, that provides opportunity for the darkness in the human soul to flower in violence. It’s like a conservation law of brutality.

No. That’s not right. Brutality is NOT conserved. It can be better or worse. (To say otherwise would be an unacceptable breach of epistemics and ethics.) But brutality is inescapable.

So what to do, if the only way I can buy safety for myself and my friends is with violence towards others?

The only solution that I can think of is akin to Paretotopian ideas: could we make it so that there is a monopoly on the use of force, but no human has it?

I’m imagining something like an AGI whose source code was completely transparent: everyone could see and read its decision theory. And all that it would do is prevent the use of violence, by anyone. Anytime someone attempts to commit violence, the nano-machines literally stay their hand. (It might also have to produce immortality pills, and ensure that everyone could access them if they wanted to.) And other than that, it lets humans handle things for themselves. “A limited sovereign on the blockchain.”

I imagine that the great powers would be unwilling to give up their power, unless they felt so under threat (and loss averse) that this seemed like a good compromise. I imagine that “we” would have to bully the world into adopting something like this. The forces of good in human nature would have to have the upper hand, for long enough to lock in the status quo, to banish violence forever.

 

 

Two models of anxiety

[This is a confused thought that feels like it is missing something.]

I have two competing models of anxiety.

The first one is basically the one I outlined here. There’s a part of me that is experiencing a fear or pain, and that part seeks distraction and immediate gratification to compensate for that pain.

But after reading about [[Physiological Arousal]], I have a secondary hypothesis. Instead of postulating a “part” that is motivated to seek distractions, maybe it is just that the fear triggers a fight or flight response, which increases arousal, which causes decreased attentional stability.

These different models suggest different places for intervention: in the one case, I ought to dialogue with the part that is seeking distraction or relief (?), and in the second case, I need to lower my arousal.

Or maybe both of those are mistaken, and I should just intervene on my scattered attention directly, perhaps by holding my attention on some external object for a minute (a kind of micro [[meditation]]).