[Draft. This post really has a lot of prerequisites that I’m not going to bother trying to explain. I’m just writing it to get it out of me. I’ll have to come back and make it understandable later, if that seems worth doing. This is really not edited.]
We live in an inadequate world. Things are kind of a mess. The vast majority of human resources are squandered, by Moloch, on ends that we would not reflectively endorse. And we’re probably all going to die.
The reason the world is so messed up can be traced back to a handful of fundamental problems or fundamental constraints. By “fundamental problem” I have something pretty specific in mind, but Inadequate Equilibria points in the right direction. They’re the deep reasons why we can’t “just fix” the world’s problems.
Some possible fundamental problems / constraints that I haven’t done the work to formulate correctly:
- The world is too big and fast for any one person to know all of the important pieces.
- The game-theoretic constraints that make rulers act against the common good.
- People in power take power-preserving actions, so bureaucracies resist change, including correct change.
- People really want to associate with prestigious people, and make decisions on that basis.
- We can’t figure out what’s true anywhere near efficiently enough.
- People can’t actually communicate about the important things.
- We don’t know how, even in principle, to build an aligned AGI.
- Molochian race dynamics.
- Everyone is competing to get information to the people with power, and the people in power don’t know enough to know who to trust.
- We’re not smart enough.
- There is no system that is keeping track of the wilderness between problems.
I recently had the thought that some of these problems have different characters than the others. They fall into two camps, which, of course, actually form a spectrum.
For some of these problems, if you solved them, the solution would be self-aligning.
By that I mean something like: for some of these problems, their solutions would be a pressure or force that would push towards solving the other problems. In the best case, if you successfully solved that problem, in due course this would cause all of the other problems to automatically get solved. The flow-through effects of such a solution are structurally positive.
For other problems, even though they represent a fundamental constraint, solving them wouldn’t push towards the solving of the other problems. In fact, solving that one fundamental problem in isolation might make the other problems worse.
A prototypical case of a problem whose solution is self-aligning [I need to come up with better terminology] is aligned AI. If we knew how to build an AI that could do what we actually want, this would perhaps automatically solve all of our other problems. It could tell us how (if not fix the problems itself) to have robust science, or optimal economic policy, or incentive-aligned leaders, or whatever.
Aligned AI is the lollapalooza of altruistic interventions. We could solve everything in one sweep. (Except, of course, for the problems that were prerequisites for solving aligned AI. Those we can’t count on the AI to solve for us.)
Another example: if we implemented robust systems that incentivized leaders to act in the interest of the public good, it seems like this has the potential to (eventually) break all of the other problems. It would be a jolt that knocks our civilization into the attractor basin of a sane, adequate civilization (if our civilization is not in that attractor basin already).
In contrast, researcher ability is a fundamental constraint of our civilization (though maybe not a fundamental problem?), but it is not obvious that the flow-through effects of breaking through that fundamental constraint are structurally positive. On the face of it, it seems like it would be bad if everyone in the world doubled their research acumen: that seems like it would speed us toward doom.
This gives a macro-strategic suggestion, and a possible solution to the last term problem: identify all of the fundamental problems that you can, determine which ones have self-aligning solutions, and dedicate your life to solving whichever of those problems offers the best combination of tractability and size of (self-aligned) impact.
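To make the shape of that heuristic concrete, here is a toy sketch in Python. Everything in it is a placeholder of my own invention: the problem names, the scores, and the scoring rule (I’ve used a simple product of tractability and impact, restricted to self-aligning solutions) are illustrative assumptions, not real estimates or the author’s actual procedure.

```python
# Toy sketch of the prioritization heuristic above.
# All problems, scores, and the scoring rule are hypothetical placeholders;
# the point is only the shape of the procedure, not the numbers.

from dataclasses import dataclass


@dataclass
class FundamentalProblem:
    name: str
    tractability: float   # 0..1: how solvable the problem looks
    impact: float         # 0..1: size of the impact if it were solved
    self_aligning: bool   # does the solution push towards solving the others?


def priority(p: FundamentalProblem) -> float:
    """Score a problem; only self-aligning solutions count for this strategy."""
    if not p.self_aligning:
        return 0.0
    return p.tractability * p.impact


problems = [
    FundamentalProblem("aligned AGI", tractability=0.1, impact=1.0, self_aligning=True),
    FundamentalProblem("incentive-aligned leaders", tractability=0.2, impact=0.8, self_aligning=True),
    FundamentalProblem("raw researcher ability", tractability=0.4, impact=0.6, self_aligning=False),
]

for p in sorted(problems, key=priority, reverse=True):
    print(f"{p.name}: {priority(p):.2f}")
```

The structure is the point: filter for problems whose solutions are self-aligning, then rank by whatever tractability-and-impact trade-off you actually believe in.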
I may be reinventing symmetric vs. asymmetric weapons here, but I think I am actually pointing at something deeper, or at least extending the idea further.
[Edit / note to self: I could maybe explain this with reference to personal productivity: you want to find the thing that is easy to do but that does the most to make the other things easy. I’m not sure this captures the key thing I want to convey.]