Perhaps the most important thing to understand about EA is that, from the beginning, it was composed of two entwined memes.
Charity is inefficient
The first of these memes was that by using reason and evidence you could do much more good with charity than the default.
In the early 2010s, there were some nerds on the internet (LessWrong and GiveWell and some other places) writing about, among other things, how to optimize charitable giving.
It seemed like the charity sector, in general, was not very efficient. With careful thought and research you could easily outcompete the charity sector as a whole, making investments that are orders of magnitude more effective than donating with a typical amount of thought.
The basic claim here is about competence: by being more thoughtful and rational, by doing research, we can outcompete a charitable industry that isn’t trying very hard, at least by our standards.
Normal people can save lives
The second meme was that with relatively small lifestyle changes, a normal person in the first world could do a shocking amount of good, and that it is good for people to do that.
My guess is that this meme started with Peter Singer’s drowning child paper, which argued that by spending tiny amounts of money and giving up some small creature comforts, one could literally save lives in the third world. And given that, it’s really really good when people decide to live that way; the more people who do, the more lives are saved. The more resources that are committed to these problems, the more good we can do.
This basic meme became the seed of Giving What We Can, for instance.
Note that in its initial form, this was something like a redistributive argument: We have so much more stuff than others who are in dire need that there’s a moral pressure (if not an obligation) to push for wealth transfers that help address the difference.
To summarize the difference between these ideas: One of these is about using the altruistic budget more effectively, and the other is about expanding the size of the altruistic budget.
Synergies
These memes are obviously related. They’re both claims about the outsized impact one can have via charitable donation (and career prioritization).
The claim that a first-worlder can save lives relatively cheaply means that there’s not an efficient market in saving lives.
And in my experience at least, the kind of analytical person that is inclined to think “One charity is going to be the most effective one. Which one is that?” also tends to be the kind of person that thinks, at some point in their life, “There are people in need who are just as real as I am, and the money I’m spending could be spent on helping them. I should give most of my money away.” There’s an “EA type”, and both of these memes are appealing to that type of person.
So in some ways these two memes are synergistic. Indeed, they’re synergistic enough that they fused together into the core of the Effective Altruism movement.
Tensions
But, as it turns out, those memes are also in tension with each other in important respects, in ways that played out in cultural conflicts inside of EA, as it grew.
These two memes encode importantly different worldviews, which have different beliefs about the constraints on doing more good in the world. To make an analogy to AI alignment, one of these views holds that the limiting factor is capability, how much of an influence you can have on the world, and the other holds that the limiting factor is steering, the epistemology to identify outsized philanthropic opportunities.
Obviously, both of these can be constraints on your operation, and doing the most good will entail finding an optimal point on the Pareto frontier of their tradeoff.
Implications of a “resources moved” frame
If the operating model is that the huge philanthropic gains are primarily redistributive, the primary limiting factor is the sheer amount of resources moved. That tends to imply…
The Effective Altruism community that wants to be a mass movement
If the good part of EA is more people deciding to commit more of their resources to effective charities, then a high priority is converting as many people as possible to become EAs.
Indeed this was the takeaway from Peter Singer’s TED talk all the way back in 2013: You can do more good by donating to effective charities than you can by being a doctor (a career generally regarded as “helping people”), but you can do even better than that by converting more people to EA.
Branding matters
If you want to spread that core message, to get more people to donate more of their resources to effective charities, there’s an incentive to reify the EA brand, to be the sort of thing that people can join, rather than an ad hoc collection of ideas and bloggers.
And branding is really important. If you want lots of people to join your movement, it matters a lot if the general public perception of your movement is positive.
Implications of a “calling the outsized opportunities” frame
In contrast, if you’re operating on a model where the philanthropic gains are the result of doing better and more careful analysis, the limiting constraint on the project is not (necessarily) getting more material resources to direct, but the epistemic capability to correctly pinpoint good opportunities. This tends to imply…
Unusually high standards of honesty and transparency are paramount
If you’re engaged in the intellectual project of trying to figure out how the world works, and to figure out which interventions make things better, it is an indispensable feature of your epistemic community that it has strict honesty norms that stay firmly at Simulacrum level 1.
You need to have expectations for what counts as honest that are closer to those of the scientific community, compared to the standards of marketing.
We might take for granted that in many facets of the world, people are not ever really trying to be accurate (when making small talk or crafting slogans) and that people and organizations will put their best foot forward, highlighting their success and quietly downplaying failures.
But that normal behavior is counter to a collective epistemic process of putting forward and critiquing ideas, and learning from (failed) attempts to do stuff.
Furthermore, if you have a worldview that holds that the charity sector is incredibly inefficient, you’re apt to ask why that is. And part of the answer is that this kind of normal “covering one’s ass” / “putting forward a good face” behavior kills the accountability that causes charities to be effective in their missions. This background tends to make people more paranoid about these effects in their “let’s try to outcompete the charity sector” community.
“More people” is not an unadulterated good
Adding more people to a conversation does not, in the typical case, make the reasoning of that conversation more rigorous.
A small community of bloggers and intellectuals engaged in an extended conversation, aiming to make progress together on some questions about how to most effectively get altruistic gains at scale, doesn’t necessarily benefit from more participants.
And it definitely doesn’t benefit from the indiscriminate addition of participants. Only a small number of people will contribute more to the extended conversation than the communication and upfront enculturation costs that each new person imposes.
These two world views give rise to two impulses in the egregore of EA: the faction that is in favor of professionalism and the faction in vocal support of epistemics and integrity.
We see this play out every time someone posts something arguably unseemly on the EA forum, and someone comments, “I think it was bad to post this; it makes EA look bad and has a negative impact in expectation.”
And I think the tension between these impulses goes a long way towards explaining why EA seems so much less promising to me now than it did 5 years ago.
I have more to say here, about how the incentive to do the hard work of rigorously thinking things through and verifying lines of argument is somewhat self-cannibalizing, but I don’t feel like writing all that right now, so I’m shipping this as a post.