Here’s a sketch. All of the following are generalizations, and some are wrong.
There are rationalists.
The rationalists are unusually intelligent, even, I think, for the tech culture that is their backdrop. But they are, by and large, kind of aspy: on the whole, they are weak on social skills, or there is something broken about their social perceptions (broken in a different way for each one).
Rationalists rely heavily on explicit reasoning, and usually start their journeys pretty disconnected from their bodies.
They are staunch mistake theorists.
They have very, very strong STEM-y epistemics. They can follow, and are compelled by, arguments. They are masterful at weighing evidence and coming to good conclusions on uncertain questions where there is something like a dataset or an academic evidence base.
They are honest.
They generally extend a good deal of trust to other people and assume good faith, or else they are cynical about humans and human behavior, relying on (explicit) models of “signaling” and “evo psych.”
I think they maybe have a collective blind spot with regard to Power, and are maybe(?) gullible (related to the general assumption of good faith). I suspect that rationalists might find it hard to generate the hypothesis that “this real person right in front of me, right now, is lying to me / trying to manipulate me.”
They are, generally, concerned about x-risk from advanced AI, and track that as the “most likely thing to kill us all.”
There’s also this other cluster of smart people. This includes Leverage people, some Thiel people, and some who call themselves post-rationalists.
They are more “humanities” leaning. They are likely to think that much of classical philosophy is not only good but practically useful (where some rationalists would be apt to deride it as the “ramblings of dead fools”).
They are more likely to study history or sociology than math or machine learning.
They are keenly aware of the importance of power and power relations, and are better able to take ideology as object, and treat speech as strategic action rather than mere representation of belief.
Their worldview emphasizes “skill” and extremely skilled people who shape the world.
They are more likely to think of “beliefs” as having a proper function other than reflecting the true state of the world: for instance, facilitating coordination, or producing an effective psychology. A rationalist would think of instrumentally useful false beliefs as something kind of dirty.
They tend to get some factual questions wrong (as near as I can tell): one common error is disregarding IQ and positing that all mental abilities are a matter of learning.
These people are much more likely to think that institutional decay or civilizational collapse is more pressing than AI.
It seems like both of these groups have blind spots, but I would really like to have a better sense of the likelihood of each of these disasters, so it would be good if we could get all the virtues into one place, to look at both of them.