I’m currently reading The Doomsday Machine: Confessions of a Nuclear War Planner by Daniel Ellsberg (the man who leaked the Pentagon Papers), on the suggestion of Anna Salamon.
I’m interested in the Cold War planning communities because they might be relevant to the sort of thinking that is happening, or needs to happen, around AI x-risk today. And indeed, there are substantial resemblances between the RAND Corporation and at least some of the orgs that form the core of the contemporary x-risk ecosystem.
For instance…
A narrative of “saving the world”:
[M]y colleagues were driven men. They shared a feeling—soon transmitted to me—that we were in the most literal sense working to save the world. A successful Soviet nuclear attack on the United States would be a catastrophe, and not only for America.
A perception of the inadequacy of the official people in power:
But above all, precisely in my early missile-gap years at RAND and as a consultant in Washington, there was our sense of mission, the burden of believing we knew more about the dangers ahead, and what might be done about them, than did the generals in the Pentagon or SAC, or Congress or the public, or even the president. It was an enlivening burden.
We were rescuing the world from our Soviet counterparts as well as from the possibly fatal lethargy and bureaucratic inertia of the Eisenhower administration and our sponsors in the Air Force.
Furthermore, a major theme of the book is the insanity of US nuclear command-and-control policies. Ellsberg points repeatedly to failures of decision-making and morality within the US government.
A sense of intellectual camaraderie:
In the middle of the first session, I ventured—though I was the youngest, assigned to be taking notes, and obviously a total novice on the issues—to express an opinion. (I don’t remember what it was.) Rather than showing irritation or ignoring my comment, Herman Kahn, brilliant and enormously fat, sitting directly across the table from me, looked at me soberly and said, “You’re absolutely wrong.” A warm glow spread throughout my body. This was the way my undergraduate fellows on the editorial board of the Harvard Crimson (mostly Jewish, like Herman and me) had routinely spoken to each other; I hadn’t experienced anything like it for six years. At King’s College, Cambridge, or in the Society of Fellows, arguments didn’t remotely take this gloves-off, take-no-prisoners form. I thought, “I’ve found a home.”
Visceral awareness of existential failure:
At least some of the folks at RAND had a visceral sense of the impending end of the world. They didn’t feel like they were just playing intellectual games.
I couldn’t believe that the world would long escape nuclear holocaust. Alain Enthoven and I were the youngest members of the department. Neither of us joined the extremely generous retirement plan RAND offered. Neither of us believed, in our late twenties, we had a chance of collecting on it.
That last point seems particularly relevant. Folks in our cluster invest in the development and practice of tools like IDC (Internal Double Crux) in part because of the psychological pressures that accompany the huge stakes of x-risk.
At least some of the “defense intellectuals” of the Cold War were under similar pressures.[1]
For this reason, the social and intellectual climate around RAND and similar organizations during the Cold War is an important case study: a second data point for comparison with our contemporaries working on existential risk.
How did RAND employees handle the psychological pressures? Did they spontaneously invent strategies for thinking clearly in the face of stakes that large? If so, can we emulate those strategies? If not, does that imply that their thinking about their work was compromised? Or does it suggest that our emphasis on psychological integration methods is misplaced?
And perhaps most importantly, what mistakes did they make? Can we use their example to foresee similar mistakes of our own and avoid them?
[1] – Indeed, it seems they were under even greater pressures. There’s a franticness and urgency in Ellsberg’s description that I don’t feel around MIRI. But I think this is due to the time horizons that RAND and co. were operating under compared to those that MIRI is operating under. I expect that as we near the arrival of AGI, there will be a sense of urgency and psychological pressure at least as great as that of the Cold War planners.
End note: In addition to all these more concrete parallels, there’s also the intriguing intertwining of existential risk and decision theory in both data points, nuclear war planning and AI safety. I wonder if that is merely a coincidence or represents some deeper connection.
One reply to that end note: it’s a deeper connection. It’s hard for the world to end “randomly” soon, because it has been around for a long time under basically the same distribution of risks. The way for the world to not end “randomly” is for some agent to try to end it [bold, maybe sketchy claim]. If you want that agent to not try to end it, you need to understand decision theory, because that’s the theory of how that agent works.
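To make the “randomly” claim a bit more concrete, here is one toy formalization (my gloss via Laplace’s rule of succession; the reply itself doesn’t spell out a model). Assume a constant but unknown per-year probability of background catastrophe with a uniform prior; then after observing n catastrophe-free years,

\[ P(\text{catastrophe in year } n+1 \mid \text{survival through year } n) = \frac{1}{n+2}, \]

which keeps shrinking as survival accumulates. So “random” doom becomes steadily less probable on the evidence, while the appearance of a new optimizing agent is exactly the kind of distribution shift this prior doesn’t cover.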