Some musings on human brutality and human evil

[epistemic status: semi-poetic musing]

I’m listening to Dan Carlin’s Hardcore History: Supernova in the East this week. The biggest thing that’s struck me so far is the ubiquity of brutality and atrocity. In this series, Carlin describes the Rape of Nanjing in particular, but he points out that the “police reports” from that atrocity could just as well describe the Roman sack of Cremona, or the Turkish conquest of Byzantium, not to mention the constant brutality of the Mongol hordes.

I’m left with an awareness that there’s an evil in human nature, an evolutionary darkness, inextricably bound up with us: in the right context, apparently decent, often god-fearing young men will rape and plunder and murder en masse. There’s violence under the surface.

Luckily, I personally live in a democratic great power that maintains a monopoly on the use of force. At least for me (white and middle class), and at least for now (geopolitics shifts rapidly, and many of the Jews of 1940 Europe felt that something like the Holocaust could never happen [in their country]), power, in the form of the largest, most technologically advanced military ever, and in the form of nuclear weapons, is arrayed to protect me against that violence.

But that protection is bought with blood and brutality. Not just in the sense that America is founded on the destruction of the Native Americans who were here first, and civilization itself was built on the backs of forceful enslavement (though that is very much the case). In the sense that elsewhere in the world, today, that American military might is destroying someone else’s home. I recently learned about the Huế Massacre and other atrocities of the Vietnam War, and I’m sure similar things (perhaps not as bad) happen every year. Humans can’t be trusted not to abuse their power.

It’s almost like a law of nature: if someone has the power to hurt another, that provides opportunity for the darkness in the human soul to flower in violence. It’s like a conservation law of brutality.

No. That’s not right. Brutality is NOT conserved. It can be better or worse. (To say otherwise would be an unacceptable breach of epistemics and ethics.) But brutality is inescapable.

So what to do? Is violence towards others the only way I can buy safety for myself and my friends?

The only solution that I can think of is akin to Paretotopian ideas: could we make it so that there is a monopoly on the use of force, but no human has it?

I’m imagining something like an AGI whose source code was completely transparent: everyone could see and read its decision theory. And all that it would do is prevent the use of violence, by anyone. Anytime someone attempts to commit violence, the nano-machines literally stay their hand. (It might also have to produce immortality pills, and ensure that everyone could access them if they wanted to.) And other than that, it lets humans handle things for themselves. “A limited sovereign on the blockchain.”

I imagine that the great powers would be unwilling to give up their power, unless they felt so under threat (and loss averse) that this seemed like a good compromise. I imagine that “we” would have to bully the world into adopting something like this. The forces of good in human nature would have to have the upper hand, for long enough to lock in the status quo, to banish violence forever.

Some notes on Von Neumann, as a human being

I recently read Prisoner’s Dilemma, which is half an introduction to very elementary game theory, and half a biography of John Von Neumann, and watched this old PBS documentary about the man.

I’m glad I did. Von Neumann has legendary status in my circles, as the smartest person ever to live. [1] Many times I’ve written the words “Von Neumann Level Intelligence” in an AI strategy document, or speculated about how many coordinated Von Neumanns it would take to take over the world. (For reference, I now think that 10 is far too low, mostly because he didn’t seem to have the entrepreneurial or managerial dispositions.)

Learning a little bit more about him was humanizing. Yes, he was the smartest person ever to live, but he was also an actual human being, with actual human traits.

Watching this first clip, I noticed that I was surprised by a number of things.

  1. That VN had an accent. I had known that he was Hungarian, but somehow it had never quite propagated that he would speak with a Hungarian accent.
  2. That he was of middling height (somewhat shorter than the presenter he’s talking to).
  3. The thing he is saying is the sort of thing that I would expect to hear from any scientist in the public eye, “science education is important.” There is something revealing about Von Neumann, despite being the smartest person in the world, saying basically what I would expect Neil DeGrasse Tyson to say in an interview. A lot of the time he was wearing his “scientist / public intellectual” hat, not the “smartest person ever to live” hat.

Some other notes of interest:

He was not a skilled poker player, which punctured my assumption that Von Neumann was omnicompetent. (pg. 5) Nevertheless, poker was among the first inspirations for game theory. (When I told this to Steph, she quipped “Oh. He wasn’t any good at it, so he developed a theory from first principles, describing optimal play?” For all I know, that might be spot on.)

Perhaps relatedly, he claimed he had low sales resistance, and so would have his wife come clothes shopping with him. (pg. 21)


He was sexually crude, and perhaps a bit misogynistic. Eugene Wigner stated that “Johny believed in having sex, in pleasure, but not in emotional attachment. He was interested in immediate pleasure and little comprehension of emotions in relationships and mostly saw women in terms of their bodies.” The journalist Steve Heims wrote “upon entering an office where a pretty secretary was working, von Neumann habitually would bend way over, more or less trying to look up her dress.” (pg. 28) Not surprisingly, his relationship with his wife, Klara, was tumultuous, to say the least.

He did, however, maintain a strong, lifelong relationship with his mother (who died the same year that he did).

Overall, he gives the impression of being a brilliant, overgrown child.


Unlike many of his colleagues, he seemed not to share the pangs of conscience that afflicted many of the bomb’s creators. Rather than going back to academia following the war, he continued doing work for the government, including the development of the hydrogen bomb.

Von Neumann advocated preventative war: giving the Soviet Union an ultimatum to join a world government, backed by the threat (and probable enactment) of nuclear attack, while the US still had a nuclear monopoly. He famously said of the matter, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock.”

This attitude was certainly influenced by his work on game theory, but it should also be noted that Von Neumann hated communism.

Richard Feynman reports that Von Neumann, in their walks through the Los Alamos desert, convinced him to adopt an attitude of “social irresponsibility”, that one “didn’t have to be responsible for the world he was in.”


Prisoner’s Dilemma says that he and his collaborators “pursued patents less aggressively than they could have”. Edward Teller commented, “probably the IBM company owes half its money to John Von Neumann.” (pg. 76)

So he was not very entrepreneurial, which is a bit of a shame, because if he’d had the disposition he probably could have made sooooo much money / really taken substantial steps towards taking over the world. (He certainly had the energy to be an entrepreneur: he only slept for a few hours a night, and was working for basically all his waking hours.)


He famously always wore a grey oxford three-piece suit, including when playing tennis with Stanislaw Ulam, or when riding a donkey down the Grand Canyon. But I am not clear why. Was that more comfortable? Did he think it made him look good? Did he just not want to ever have to think about clothing, and so preferred to be over-hot in the middle of the Los Alamos desert, rather than need to think about whether today was “shirt sleeves weather”?


Von Neumann himself once commented on the strange fact of so many Hungarian geniuses of his generation growing up in such a small area:

Stanislaw Ulam recalled that when Von Neumann was asked about this “statistically unlikely” Hungarian phenomenon, Von Neumann “would say that it was a coincidence of some cultural factors which he could not make precise: an external pressure on the whole society of this part of Central Europe, a subconscious feeling of extreme insecurity in individuals, and the necessity of producing the unusual or facing extinction.” (pg. 66)


One of the things that surprised me most is that, despite being possibly the smartest person in modernity, he seems like he would have benefited from attending a CFAR workshop.

For one thing, at the end of his life, he was terrified of dying. But throughout the course of his life he made many reckless choices with his health.

He ate gluttonously and became fatter and fatter over the course of his life. (One friend remarked that he “could count anything but calories.”)

Furthermore, he seemed to regularly risk his life when driving.

Von Neumann was an aggressive and apparently reckless driver. He supposedly totaled his car every year or so. An intersection in Princeton was nicknamed “Von Neumann corner” for all the auto accidents he had there. Records of accidents and speeding arrests are preserved in his papers. [The book goes on to list a number of such accidents.] (pg. 25)

(Amusingly, Von Neumann’s reckless driving seems to have been due not to drinking and driving, but to singing and driving. “He would sway back and forth, turning the steering wheel in time with the music.”)

I think I would call this a bug.

On another thread, one of his friends (the documentary didn’t identify which) expressed that he was over-impressed by powerful people, and didn’t make effective tradeoffs.

I wish he’d been more economical with his time in that respect. For example, if people called him to Washington or elsewhere, he would very readily go and so on, instead of having these people come to him. It was much more important, I think, he should have saved his time and effort.

He felt, when the government called, [that] one had to go, it was a patriotic duty, and as I said before he was a very devoted citizen of the country. And I think one of the things that particularly pleased him was any recognition that came sort-of from the government. In fact, in that sense I felt that he was sometimes somewhat peculiar that he would be impressed by government officials or generals and so on. If a big uniform appeared that made more of an impression than it should have. It was odd.

But it shows that he was a person of many different and sometimes self contradictory facets, I think.

Stanislaw Ulam speculated, “I think he had a hidden admiration for people and organizations that could be tough and ruthless.” (pg. 179)

From these statements, it seems like Von Neumann leapt at chances to seem useful or important to the government, somewhat unreflectively.

These anecdotes suggest that Von Neumann would have gotten value out of Goal Factoring, or Units of Exchange, or IDC (possibly there was something deeper going on, regarding blindspots around death or status, but I think the point still stands, and he would have benefited from IDC).

Despite being the discoverer/inventor of VNM utility theory, and founding the field of Game Theory (concerned with rational choice), it seems to me that Von Neumann did far less to import the insights of the math into his actual life than, say, Critch.

(I wonder aloud if this is because Von Neumann was born and came of age before the development of cognitive science. I speculate that the importance of actually applying theories of rationality in practice only becomes obvious after Tversky and Kahneman demonstrated that humans are not rational by default. (In evidence against this view: Eliezer seems to have been very concerned with thinking clearly, and being sane, before encountering Heuristics and Biases in his (I believe) mid 20s. He was exposed to Evo Psych, though.))


Also, he converted to Catholicism at the end of his life, based on Pascal’s Wager. He commented “So long as there is the possibility of eternal damnation for nonbelievers it is more logical to be a believer at the end”, and “There probably has to be a God. Many things are easier to explain if there is than if there isn’t.”
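To spell out the wager’s logic as he seems to have been applying it, here is a minimal expected-value sketch. This is my own gloss, and the symbols (p, D, c) are illustrative assumptions I’m introducing, not anything from Von Neumann or the book:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Pascal's Wager as an expected-value comparison (my gloss; symbols are illustrative assumptions).
% p : credence that nonbelievers face eternal damnation (any p > 0)
% D : disutility of damnation (taken to be enormous, effectively unbounded)
% c : finite cost of believing
\[
  \underbrace{-\,pD}_{\text{expected payoff of not believing}}
  \;<\;
  \underbrace{-\,c}_{\text{payoff of believing}}
  \quad\Longleftrightarrow\quad
  pD > c,
\]
which holds for any credence $p > 0$ once $D$ is taken large enough; hence
``so long as there is the possibility of eternal damnation \dots\ it is more
logical to be a believer at the end.''
\end{document}
```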

(According to Wikipedia, this deathbed conversion did not give him much comfort.)

This suggests that he would have gotten value out of reading the sequences, in addition to attending a CFAR workshop.


Initial Comparison between RAND and the Rationality Cluster

I’m currently reading The Doomsday Machine: Confessions of a Nuclear War Planner by Daniel Ellsberg (the man who leaked the Pentagon Papers), on the suggestion of Anna Salamon.

I’m interested in the Cold War planning communities because they might be relevant to the sort of thinking that is happening, or needs to happen, around AI x-risk today. And indeed, there are substantial resemblances between the RAND Corporation and at least some of the orgs that form the core of the contemporary x-risk ecosystem.

For instance…

A narrative of “saving the world”:

[M]y colleagues were driven men. They shared a feeling—soon transmitted to me—that we were in the most literal sense working to save the world. A successful Soviet nuclear attack on the United States would be a catastrophe, and not only for America.

A perception of the inadequacy of the official people in power:

But above all, precisely in my early missile-gap years at RAND and as a consultant in Washington, there was our sense of mission, the burden of believing we knew more about the dangers ahead, and what might be done about them, than did the generals in the Pentagon or SAC, or Congress or the public, or even the president. It was an enlivening burden.

We were rescuing the world from our Soviet counterparts as well as from the possibly fatal lethargy and bureaucratic inertia of the Eisenhower administration and our sponsors in the Air Force.

Furthermore, a major theme of the book is the insanity of US Nuclear Command and Control policies. Ellsberg points repeatedly at the failures of decision-making and morality within the US government.

A sense of intellectual camaraderie:

In the middle of the first session, I ventured—though I was the youngest, assigned to be taking notes, and obviously a total novice on the issues—to express an opinion. (I don’t remember what it was.) Rather than showing irritation or ignoring my comment, Herman Kahn, brilliant and enormously fat, sitting directly across the table from me, looked at me soberly and said, “You’re absolutely wrong.” A warm glow spread throughout my body. This was the way my undergraduate fellows on the editorial board of the Harvard Crimson (mostly Jewish, like Herman and me) had routinely spoken to each other; I hadn’t experienced anything like it for six years. At King’s College, Cambridge, or in the Society of Fellows, arguments didn’t remotely take this gloves-off, take-no-prisoners form. I thought, “I’ve found a home.”

Visceral awareness of existential failure:

At least some of the folks at RAND had a visceral sense of the impending end of the world. They didn’t feel like they were just playing intellectual games.

I couldn’t believe that the world would long escape nuclear holocaust. Alain Enthoven and I were the youngest members of the department. Neither of us joined the extremely generous retirement plan RAND offered. Neither of us believed, in our late twenties, we had a chance of collecting on it.

That last point seems particularly relevant. Folks in our cluster invest in the development and practice of tools like IDC in part because of the psychological pressures that accompany the huge stakes of x-risk.

At least some of the “defense intellectuals” of the Cold War were under similar pressures.[1]

For this reason, the social and intellectual climate around RAND and similar organizations during the Cold War represents an important case study, a second data point for comparison to our contemporaries working on existential risk.

How did RAND employees handle the psychological pressures? Did they spontaneously invent strategies for thinking clearly in the face of the magnitude of the stakes? If so, can we emulate those strategies? If not, does that imply that their thinking about their work was compromised? Or does it suggest that our emphasis on psychological integration methods is misplaced?

And perhaps most importantly, what mistakes did they make? Can we use their example to foresee similar mistakes of our own and avoid them?


[1] – Indeed, it seems like they were under greater pressures. There’s a sense of franticness and urgency that I feel in Ellsberg’s description that I don’t feel around MIRI. But I think that this is due to the time horizons that RAND and co. were operating under compared to those that MIRI is operating under. I expect that as we near the arrival of AGI, there will be a sense of urgency and psychological pressure that is just as great as, or greater than, that of the Cold War planners.

End note: In addition to all these more concrete parallels, there’s also the intriguing intertwining of existential risk and decision theory in both data points: nuclear war planning and AI safety. I wonder if that is merely coincidence or represents some deeper connection.