A mechanistic description of status

[This is an essay that I’ve had bopping around in my head for a long time. I’m not sure if this says anything usefully new, but it might click with some folks. If you haven’t read Social Status: Down the Rabbit Hole on Kevin Simler’s excellent blog, Melting Asphalt, read that first. I think this is pretty bad and needs to be rewritten and maybe expanded substantially, but this blog is called “musings and rough drafts.”]

In this post, I’m going to outline how I think about status. In particular, I want to give a mechanistic account of how status necessarily arises, given some set of axioms, in much the same way one can show that evolution by natural selection must necessarily occur given the axioms of 1) inheritance of traits, 2) variance in reproductive success based on variance in traits, and 3) mutation.

(I am not claiming any particular skill at navigating status relationships, any more than a student of sports-biology is necessarily a skilled basketball player.)

By “status” I mean prestige-status.

Axiom 1: People have goals.

That is, for any given human, there are some things that they want. This can include just about anything. You might want more money, more sex, a ninja-turtles lunchbox, a new car, to have interesting conversations, to become an expert tennis player, to move to New York, etc.

Axiom 2: There are people who control resources relevant to other people achieving their goals.

The kinds of resources are as varied as the goals one can have.

Thinking about status dynamics and the like, people often focus on the particularly convergent resources, like money. But resources that are only relevant to a specific goal are just as much a part of the dynamics I’m about to describe.

Knowing a bunch about late 16th century Swedish architecture is controlling a goal-relevant resource, if someone has the goal of learning more about 16th century Swedish architecture.

Just being a fun person to spend time with (due to being particularly attractive, or funny, or interesting to talk to, or whatever) is a resource relevant to other people’s goals.

Axiom 3: People are more willing to help (offer favors to) a person who can help them achieve their goals.

Simply stated, you’re apt to offer to help a person with their goals if it seems like they can help you with yours, because you hope they’ll reciprocate. You’re willing to make a trade with, or ally with such people, because it seems likely to be beneficial to you. At minimum, you don’t want to get on their bad side.

(Notably, there are two factors that go into one’s assessment of another person’s usefulness: whether they control a resource relevant to one of your goals, and whether you expect them to reciprocate.

This produces a dynamic whereby A’s willingness to ally with B is determined by something like the product of

  • A’s assessment of B’s power (as relevant to A’s goals), and
  • A’s assessment of B’s probability of helping (which might translate into integrity, niceness, etc.)

If a person is a jerk, they need to be very powerful-relative-to-your-goals to make allying with them worthwhile.)
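
To make the shape of that pairwise assessment concrete, here’s a minimal sketch in Python. The function name, the bare product, and the numbers are all my own illustrative assumptions; the point is only the rough tradeoff, not a claim about how the assessment is actually computed.

```python
# A toy version of the pairwise heuristic: willingness to ally is roughly
# the product of perceived power (relative to my goals) and perceived
# probability of reciprocating. All names and numbers are illustrative.

def alliance_value(power_relative_to_my_goals: float,
                   prob_of_reciprocating: float) -> float:
    """How attractive is it for A to offer favors to B?"""
    return power_relative_to_my_goals * prob_of_reciprocating

# A powerful jerk and a modestly useful but reliable ally come out similar:
print(alliance_value(0.9, 0.1))  # powerful, unreliable   -> ~0.09
print(alliance_value(0.3, 0.5))  # modest power, reliable -> ~0.15
```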

All of this seems good so far, but notice that we have up to this point only described individual pair-wise transactions and pair-wise relationships. People speak about “status” as an attribute that someone can possess or lack. How does the dynamic of a person being “high status” arise from the flux of individual transactions?

Lemma 1: One of the resources that a person can control is other people’s willingness to offer them favors.

With this lemma, the system folds in on itself, and the individual transactions cohere into a mostly-stable status hierarchy.

Given Lemma 1, a person doesn’t need to personally control resources relevant to your goals; they just need to be in a position such that someone who does control such resources will privilege them.

As an example, suppose that you’re introduced to someone who is very well respected in your local social group: Wendy. Your assessment might be that Wendy, directly, doesn’t have anything that you need. But because Wendy is well-respected by others in your social group, they are likely to offer favors to her. Therefore, it’s useful to you for Wendy to like you, because then she is more apt to call in other people’s favors on your behalf.

(All the usual caveats apply about how this is subconscious: humans are adaptation-executers who don’t do explicit, verbal assessments of how useful a person will be to them, but rely on emotional heuristics that approximate such assessments.)

This causes the mess of status transactions to reinforce and stabilize into a mostly-static hierarchy. The mass of individual A-privileges-B-on-the-basis-of-A’s-goals flattens out into each person having a single “score” that determines to what degree each other person privileges them.

(It’s a little more complicated than that, because people who have access to their own resources have less need of help from others. So a person’s effective status (the status-level at which you treat them) is closer to their status minus your status. But this is complicated again because people are motivated not to be dicks (that’s bad for business), and respecting other people’s status is important to not being a dick.)
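
One way to see how Lemma 1 folds the pairwise assessments into a single score is a toy iterative model, in the spirit of eigenvector centrality. This is my own construction, not something the account above commits to; every name, weight, and number in it is an illustrative assumption.

```python
# Toy model: willingness-to-help is itself a resource, so each person's score
# depends on the scores of the people willing to do them favors. Iterating
# this feedback settles the web of pairwise assessments into one score each.

def status_scores(direct_resources, regard, rounds=100, damping=0.5):
    """direct_resources[i]: how much person i's own resources matter to others' goals.
    regard[i][j]: how willing person i is to offer favors to person j (0..1).
    Returns one normalized score per person."""
    n = len(direct_resources)
    scores = list(direct_resources)
    for _ in range(rounds):
        new = []
        for j in range(n):
            # Favors j can call in are worth more when they come from
            # people whose own scores are high.
            callable_favors = sum(regard[i][j] * scores[i] for i in range(n) if i != j)
            new.append(direct_resources[j] + damping * callable_favors)
        total = sum(new) or 1.0
        scores = [s / total for s in new]  # keep scores on a fixed scale
    return scores

# Person 2 controls little directly, but the other two are very willing to do
# them favors, so their score ends up well above what direct resources alone
# would give.
print(status_scores(
    direct_resources=[1.0, 1.0, 0.2],
    regard=[[0.0, 0.1, 0.9],
            [0.1, 0.0, 0.9],
            [0.2, 0.2, 0.0]],
))
```

The point of the toy is just that once willingness-to-help counts as a resource, the pairwise assessments feed back on themselves and settle into something that behaves like a single hierarchy.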

Initial thoughts about the early history of Circling

I spent a couple of hours over the past week looking into the origins and early history of Circling, as part of a larger research project.

If you want to read some original sources, this was the most useful and informative post on the topic that I found.

You can also read my curated notes (only the things that were most interesting to me), including my thinking about the Rationality Community.


A surprising amount of the original work was done while people were in college. Notably, Bryan, Decker, and Sara all taught and developed Circling / AR in the living spaces of their colleges:

“Even before this, Bryan Bayer and Decker Cunov had independently discovered the practice as a tool to resolve conflicts in their shared college household in Missouri,”

“Sara had been a college student, had discovered Authentic Relating Games, had introduced them into her college dorm with great success”

It reminds me that a lot of the existence and growth of EA was driven by student groups. I wonder if most movements are seeded by people in their early 20s, and therefore college campuses have been the backdrop for the origins of most movements throughout the past century.


There’s a way in which the teaching of Circling spread that the teaching of rationality didn’t.

It sounds like many of the people who frequently attended the early weekend programs that Guy and Jerry (and others) were putting on had ambitions to develop and run similar programs of their own one day. And to a large degree, they did. There have been something like 10 to 15 for-pay Circling-based programs, across at least 4 organizations. In contrast, rationality has one CFAR, which primarily runs a single program.

I wonder what accounts for the difference?

Hypotheses:

  • Circlers tend to be poor, whereas rationalists tend to be software engineers. Circlers could dream of doing Circling full time, but there’s not much appeal for rationalists in teaching rationality full time. (That would be a pay cut, and there’s no “activity” that rationalists love and would get to do as their job.)
  • Rationality is too discrete and explicit. Once you’ve taught the rationality techniques you know, you’re done (or you have to be in the business of inventing new ones), whereas teaching Circling is more like a service: there’s not a distinct point when the student “has it” and doesn’t need your teaching, but a gradual apprenticeship.
  • Relatedly, maybe there’s just not enough demand for rationality training. A CFAR workshop is, for most rationalists, a thing that you do once, whereas Circlers might attend several Circling immersions or trainings in a year. Rationality can become a culture and a way of life, but CFAR workshops are not. As a result, the demand for rationality training amounts to 1 workshop per community member, instead of something like 50 events per community member.
    • Notably, if CFAR had a slightly different model, this feature could change.
  • Rationality is less of a concrete thing, separate from the CFAR or LessWrong brands.
    • Because of this, I think most people don’t feel enough ownership of “Rationality” as an independent thing. It’s Eliezer’s thing or CFAR’s thing. Not something that is separate from either of them.
    • Actually, the war between the founders might be relevant here. That Guy and Decker were both teaching Circling highlighted that it was separate from any one brand.
    • I wonder what the world would look like if Eliezer had coined a new term for the thing we call rationality, instead of taking on a word that already has meaning in the wider world. I expect there would be less potential for a mass movement, but more affordance to teach the thing, and a feeling that one could become expert at it.
  • Maybe the fact that Circling was independently discovered by Guy and Jerry, and by Decker and Bryan, made it obvious that no one owned it.
    • If we caused a second rationality-training organization to crop up, would that cause a profusion of rationality orgs?
  • Circling people acquired enough confidence in their own skills that they felt comfortable charging for them; rationalists haven’t.
    • It is more obvious who the people skilled in Circling are, because you can see it in a Circle.
    • Circling has an activity that is engaging enough to spend many hours at and includes a feedback loop, so people become skilled at it in a way that rationalists don’t.

There aren’t people who are trying to build Rationality empires the way Jordan is trying to build a Circling empire.


I get the sense that a surprising number of the core people of Circling are what I would call “jocks.” (Though my actual sample is pretty limited.)

  • Guy originally worked as a personal trainer.
  • Sean Wilkinson and John Thompson ran a personal tennis academy before teaching Circling.
  • Jordan was a model.

“Many of us lived together in communal houses and/or were in relationships with other community members.”

They had group houses and called themselves “the community”. I wonder how common those threads are, in subcultures across time (or at least across the past century).

Goal-factoring as a tool for noticing narrative-reality disconnect

[The idea of this post, as well as the opening example, were relayed to me by Ben Hoffman, who mentioned it as a thing that Michael Vassar understands well. This was written with Ben’s blessing.]

Suppose you give someone a choice of one of three fruits: a radish, a carrot, and an apple. The person chooses the carrot. When you ask them why, they reply “because it’s sweet.”

Clearly, there’s something funny going on here. While the carrot is sweeter than the radish, the apple is sweeter than the carrot. So sweetness must not be the only criterion your fruit-picker is using to make their decision. They might be choosing partially on that basis, but there must also be some other, unmentioned factor guiding their choice.

Now imagine someone is describing the project that they’re working on (project X). They explain their reasoning for undertaking this project and the good outcomes that will result from it: reasons a, b, and c.

When someone is presenting their reasoning like this, it can be useful to take a, b, and c as premises, and try to project what seems to you like the best course of action that optimizes for those goals. That is, do a quick goal-factoring, to see if you can discover a Y that seems to fulfill goals a, b, and c better than X does.

If you can come up with such a Y, this is suggestive of some unmentioned factor in your interlocutor’s reasoning, just as there was in the choice of your fruit-picker.
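
To make the check concrete, here’s a minimal sketch using the fruit example. The scores are made up for illustration, and of course the real assessment is informal rather than numerical.

```python
# Score each option on the *stated* criteria only, and check whether those
# criteria alone would predict the actual choice. If not, some unmentioned
# factor is at work. All values here are illustrative.

fruits = {
    "radish": {"sweetness": 0.1},
    "carrot": {"sweetness": 0.4},
    "apple":  {"sweetness": 0.9},
}

def best_on_stated_criteria(options, criteria):
    """Return the option that scores highest on the stated criteria alone."""
    return max(options, key=lambda o: sum(options[o][c] for c in criteria))

chosen = "carrot"
stated = ["sweetness"]
predicted = best_on_stated_criteria(fruits, stated)

if predicted != chosen:
    # The stated criteria alone would have picked something else, so there
    # must be some unmentioned factor in the choice.
    print(f"Stated criteria predict {predicted!r}, but they chose {chosen!r}.")
```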

Of course this could be innocuous. Maybe Y has some drawback you’re unaware of, and so actually X is the better plan. Maybe the person you’re speaking with just hadn’t thought of Y.

But it also might be that they’re lying outright about why they’re doing X. Or maybe they have some motive that they’re not even admitting to themselves.

Whatever the case, the procedure of taking someone else’s stated reasons as axioms and then trying to build out the best plan that satisfies them is a useful way of drawing out the dynamics that are driving a situation under the surface.

I’ve long used this technique effectively on myself, but I suggest that it might be an important lens for viewing the actions of institutions and other people. It’s often useful to tease out exactly how their declared stories about themselves deviate from their revealed agency, and this is one way of doing that.