What’s up with the Anthropic board?

[Edit: This post was based on a factual error. Reid Hoffman is not on the Anthropic board. Reed Hastings is. Thank you to Neel for correcting my mistake!]

What are the dynamics of Anthropic board meetings like, given that some of the board seem to not really understand or believe in Superintelligence?

Reid Hoffman is on the board. He’s the poster child for “AI doesn’t replace humans, it’s a tool that empowers humans”. Like he wrote two(!) whole books about it (titles: Impromptu: Amplifying Our Humanity Through AI and Superagency: Empowering Humanity in the Age of AI).

For instance, here:

In this early period, many companies haven’t yet figured out how to integrate new engineers into AI-native workflows.

But I still believe there will be essentially unlimited demand for people who think computationally.

and

If you’re entering the workforce today, you have a unique advantage: you can grow up working with copilots, understanding the leverage they give you as an employee, and help your companies figure out how to integrate AI into their work

It sure doesn’t sound like he’s living in a mental world where there will be AIs that are better than almost all people at almost all tasks by 2030!

He was expressing broadly similar talking points about AI amplifying human work as recently as three weeks ago.[1]

It seems like he’s not really Superintelligence-pilled, at least for the most important versions of superintelligence?

I imagine Dario coming into the board meetings and saying “Alright guys, I expect AI that is better than almost all humans at almost all tasks, possibly by 2027 and almost certainly no later than 2030. Our mainline projection is that Anthropic will have a country of geniuses in a datacenter within 5 years.”

What is going on here?

  • Does Reid internally translate that to “we’re building awesome software tools that will empower people, not replace them”?
  • Does he think Dario is exaggerating for effect?
  • Does he think that Dario is just factually wrong about projections that are extremely central to Anthropic’s business, but they haven’t bothered to get to ground about it (or at least haven’t succeeded)?
  • Does Dario not say these things to his board, but only in essays and interviews that he publishes to the whole world?!
  • Is Reid posturing about what he believes?

I don’t have a hypothesis that explains these observations that doesn’t seem bizarre. My best bad guess is that Reid is basically filtering out anything that doesn’t match his existing impressions about AI, despite being an early investor in OpenAI and being on the board of Anthropic!
