Social reasoning about two clusters of smart people

Here’s a sketch. All of the following are generalizations, and some are wrong.

There are rationalists.

The rationalists are unusually intelligent, even, I think, for the tech culture that is their sort of backdrop. But they are, by and large, kind of aspie: on the whole, they are weak on social skills, or there is something broken about their social perceptions (broken in a different way for each one).

Rationalists rely heavily on explicit reasoning, and usually start their journeys pretty disconnected from their bodies.

They are strongly mistake theorists.

They have very, very strong STEM-y epistemics. They can follow, and are compelled by, arguments. They are masterful at weighing evidence and coming to good conclusions on uncertain questions, where there is something like a dataset or academic evidence base.

They are honest.

They generally have a good deal of trust in, and assumption of good faith about, other people, or else they are cynical about humans and human behavior, using (explicit) models of “signaling” and “evo psych.”

I think they maybe have a collective blind spot with regard to Power, and are maybe(?) gullible (related to the general assumption of good faith). I suspect that rationalists might find it hard to generate the hypothesis that “this real person right in front of me, right now, is lying to me / trying to manipulate me.”

They are, generally, concerned about x-risk from advanced AI, and track that as the “most likely thing to kill us all”.

 

There’s also this other cluster of smart people. This includes Leverage people, some Thiel people, and some who call themselves post-rationalists.

They are more “humanities” leaning. They probably think that lots of classic philosophy is not only good, but practically useful (where some rationalists would be apt to deride that as the “rambling of dead fools”).

They are more likely to study history or sociology than math or machine learning.

They are keenly aware of the importance of power and power relations, and are better able to take ideology as object, and treat speech as strategic action rather than mere representation of belief.

Their worldview emphasizes “skill” and extremely skilled people who shape the world.

They are more likely to think of “beliefs” as having a proper function other than reflecting the true state of the world: for instance, facilitating coordination, or producing an effective psychology. The rationalist would think of instrumentally useful false beliefs as something that is kind of dirty.

They tend to get some factual questions wrong (as near as I can tell): one common one is disregarding IQ, and positing that all mental abilities are a matter of learning.

These people are much more likely to think that institutional decay or civilizational collapse is more pressing than AI.

 

It seems like both of these groups have blind spots, but I would really like to have a better sense of the likelihood of both of these disasters, so it would be good if we could get all the virtues into one place, to look at both of them.

 

 

A view of the main kinds of problems facing us

I’ve decided that I want to make more of a point of writing down my macro-strategic thoughts, because writing things down often produces new insights and refinements, and so that other folks can engage with them.

This is one frame or lens that I tend to think with a lot. This might be more of a lens or a model-let than a full break-down.

There are two broad classes of problems that we need to solve: we have some pre-paradigmatic science to figure out, and we have the problem of civilizational sanity.

Preparadigmatic science

There are a number of hard scientific or scientific-philosophical problems that we’re facing down as a species.

Most notably, the problem of AI alignment, but also finding technical solutions to various risks caused by biotechnology, possibly getting our bearings with regard to what civilizational collapse means and how it is likely to come about, possibly getting a handle on the risk of a simulation shutdown, and possibly making sense of the large-scale cultural, political, and cognitive shifts that are likely to follow from new technologies that disrupt existing social systems (like VR?).

Basically, for every x-risk, and every big shift to human civilization, there is work to be done even making sense of the situation, and framing the problem.

As this work progresses, it eventually transitions into incremental science / engineering, as the problems are clarified and specified, and good methodologies for attacking those problems solidify.

(Work on bio-risk might already be in this phase. And I think that work towards human genetic enhancement is basically incremental science.)

To my rough intuitions, it seems like these problems, in order of pressingness, are:

  1. AI alignment
  2. Bio-risk
  3. Human genetic enhancement
  4. Social, political, civilizational collapse

…where that ranking is mostly determined by which one will have a very large impact on the world first.

So there’s the object-level work of just trying to make progress on these puzzles, plus a bunch of support work for doing that object level work.

The support work includes:

  • Operations that make the research machines run (ex: MIRI ops)
  • Recruitment (and acclimation) of people who can do this kind of work (ex: CFAR)
  • Creating and maintaining infrastructure that enables intellectually fruitful conversations (ex: LessWrong)
  • Developing methodology for making progress on the problems (ex: CFAR, a little, but in practice I think that this basically has to be done by the people trying to do the object level work.)
  • Other stuff.

So we have a whole ecosystem of folks who are supporting this preparadigmatic development.

Civilizational Sanity

I think that in most worlds, if we completely succeeded at the pre-paradigmatic science, and the incremental science and engineering that follows it, the world still wouldn’t be saved.

Broadly, one way or the other, there are huge technological and social changes heading our way, and human decision makers are going to decide how to respond to those changes, possibly in ways that will have very long-term repercussions on the trajectory of earth-originating life.

As a central example, if we more or less completely solved AI alignment, from a full theory of agent foundations all the way down to the specific implementation, we would still find ourselves in a world where humanity has attained god-like power over the universe, which we could very well abuse, and end up with a much, much worse future than we might otherwise have had. And by default, I don’t expect humanity to refrain from using new capabilities rashly and unwisely.

Completely solving alignment does give us a big leg up on this problem, because we’ll have the aid of superintelligent assistants in our decision making, or we might just have an AI system implement our CEV in classic fashion.

I would say that “aligned superintelligent assistants” and “AIs implementing CEV” are civilizational sanity interventions: technologies or institutions that help humanity’s high-level decision-makers make wise decisions in response to huge changes that, by default, they will not comprehend.

I gave some examples of possible Civ Sanity interventions here.

Also, I think that some forms of governance / policy work that OpenPhil, OpenAI, and FHI have done count as part of this category, though I want to cleanly distinguish between pushing for object-level policy proposals that you’ve already figured out, and instantiating systems that make it more likely that good policies will be reached and acted upon in general.

Overall, this class of interventions seems neglected by our community, compared to doing and supporting preparadigmatic research. That might be justified. There’s reason to think that we are well equipped to make progress on hard, important research problems, but changing the way the world works seems like it might be harder on some absolute scale, or less suited to our abilities.

 

 

 

 

Why is the media consumption of adult millennials the same as it was when they were children?

[Random musings.]

Recently, I’ve seen ads for a number of TV shows that are re-instantiations of TV shows from the early 2000s, apparently targeted at people in their late twenties and early thirties today.

For instance, there’s a new Lizzie McGuire show that follows a 30-year-old Lizzie as a practicing lawyer. (In the original show, she was a teenager in high school.) In a similar vein, there’s a new That’s So Raven show, about Raven being a mom.

Also, recently, Disney released a final season of Star Wars: The Clone Wars (which ran from 2008 to 2014).

These examples seem really interesting to me, because this seems like a new phenomenon. Something like: Millennials unironically like, and are excited about, the same media that they liked when they were kids. I think this is new. My impression is that it would be extremely unusual for a 30-year-old in 1990 to show similar enthusiasm for the media they consumed as a 12-year-old. I imagine that for that person there is a narrative that you are supposed to “grow out of childish things”, and a person who doesn’t do that is worthy of suspicion. (Though I wasn’t there in 1990, so maybe I’m mis-modeling this.)

My impression (which is maybe mistaken), is that Millennials did not “grow up” in the sense that earlier generations did. Instead of abandoning their childhood interests to consume “adult media”, they maintained their childhood interests into their 30s. What could be going on here?

  • (One thing to note is that all three of the examples that I gave above are not just Disney properties, but specifically Disney+ shows. Maybe this is a Disney thing, as opposed to a Millennial thing?)

Some hypotheses:

  • One theory is that in the streaming era, demographics are much more fragmented, and there is an explosion of content creation for every possible niche, instead of aiming for broad appeal. So while there always would have been some people who are still excited about the content from their childhood, now media companies are catering to that desire, in order to capture that small demographic.
  • Another possibility is that the internet allowed for self-sustaining fandoms. In the past, if you liked a thing, at best you could talk about it with your friends, until that content ended and your friends moved on. But with the internet, you could go on message boards, and YouTube, and Reddit, and be excited about the things you love, with other people who love those things, even decades after they aired. The internet keeps your childhood fresh and alive for you, in a way that wasn’t really possible for previous generations.
  • Maybe being a geek became destigmatized. I think there is one group of adults in 1990 that would have been unironically excited about the content that they enjoyed as kids and teenagers: nerds, who still love Star Wars, or Star Trek, or comic books, or whatever. (I posit that this is because nerds tend to like things because of how natively cool they seem, which is pretty stable over a lifetime, as opposed to tracking the Keynesian beauty contest of which things are popular with the zeitgeist / which things are cool to like, which fluctuates a lot over years and decades.) For some reason (probably related to the above bullet point), being a geek became a lot less socially stigmatized over the early 2000s, and there was less social backlash for liking nerdy things, and for being unironically excited about content that was made for children.
    • I feel like this is deeply related to sex. I posit that the reason that most young men “grow out of childish things” is that when they become interested in girls, they start to focus near-exclusively on getting laid, and childish interests are a liability to that. (Nerds either 1) care more about the things that they like, so that they are less willing to give them up, even for sex, or 2) are more oblivious to the impact that their interests have on their prospects for getting laid.) But I have the sense that unironically liking your childhood media is less of a liability to your sex life in 2000 than it was in 1990, for reasons that are unclear.
    • (Again, maybe it is because the internet allows people to live in communities that also appreciate that media, or maybe because nerds provided a ton of social value and can get rich and successful, so being a nerd is less stigmatized on the dating market, or maybe because special effects got so good that the things that were cool to nerds are now more obviously cool to everyone (e.g. superhero movies have mass appeal).)
  • Maybe the content from the early 2000s is just better, in some objective sense, than the content of the 1970s – 1980s. Like, maybe my dad grew out of the content that he watched as a kid because it was just less sophisticated, whereas the content that my generation watched as kids is more interesting to adults?
  • Maybe the baby boomers had an exciting adult world to grow into, which was more compelling than their childhood interests. Millennials feel adrift in the world, and so default to the media they liked as kids, because they don’t have better things to do?