Approaches to this thing called “Rationality” (or alternatively, a history of our lineage)

[Posted to the CFAR mailing list]

[Somewhat experimental: Looking for thumbs up and thumbs down on this kind of writing. I’m trying to clarify some of the fuzziness around why we are calling the-thing-some-of-us-are-calling-rationality “rationality.”]

So what is this rationality thing anyway?

Simply stated, some behavior works better than other behavior for achieving a given goal. In fact, for formal, well-defined environments ("games"), this is provably true. In the early-to-mid 20th century, academic mathematicians developed game theory and decision theory: mathematical formalizations of idealized decision algorithms that give provably optimal outcomes (in expectation).
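To make "provably optimal in expectation" concrete, here is a minimal sketch of the core decision rule: pick the action whose probability-weighted payoff is highest. The action names, probabilities, and payoffs are invented purely for illustration.

```python
# Toy decision problem: choose the action with the highest expected payoff.
# Each action maps to a list of (probability, payoff) outcome pairs.
actions = {
    "safe_bet":  [(1.0, 50)],
    "risky_bet": [(0.6, 100), (0.4, -20)],
    "wild_bet":  [(0.1, 400), (0.9, -10)],
}

def expected_value(outcomes):
    """Sum of payoffs weighted by their probabilities."""
    return sum(p * payoff for p, payoff in outcomes)

# The decision-theoretic choice: maximize expected value.
best = max(actions, key=lambda a: expected_value(actions[a]))

for name, outcomes in actions.items():
    print(f"{name}: EV = {expected_value(outcomes):.1f}")
print("Expected-value maximizer picks:", best)
```

Here the rule picks "risky_bet" (EV 52) over the certain 50, which is exactly the sense in which the formal machinery is "optimal": not that it wins every time, but that no other policy does better on average.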

One school of rationality (let’s call it “formal rationality”) is largely about learning and relying on these decision rules. For a rationalist of this type, progress in the field means doing more math, and discovering more theorems or decisions rules. Since most non-trivial decision problems involve dealing with uncertainty, and uncertainty in the real world is quantified using statistics, statistics is central to the practice of formal rationality.

MIRI does the sort of work that a formal rationalist would consider progress on rationality: trying to develop solutions to decision theory problems. (This is NOT to say that most, or even any, of the actual people who work at MIRI are themselves of the "formal rationality" school as opposed to the schools described below. In fact, I have reason to think that NONE of them would identify as such.) The other large frontiers of "formal rationality" are mostly in economics. The economy can be thought of as a single gigantic game-theoretic game.

For the formal rationalist, rationality is almost entirely solved. We have game theory. We have probability theory. We have decision theory. There may be edge-case scenarios left to solve (Pascal's mugging, for instance), but for the most part, the "art" has already been invented. Declaring oneself a rationalist in the formal sense is a statement of philosophy: it means you trust the approximations of the formal decision rules over intuition, common sense, tradition, or, well, anything. One doesn't need to qualify with the word "aspiring."

(There’s a framework nearby to formal rationality which is largely captured by the term “evidence-based.” This is the position that one should base one’s actions and beliefs on evidence, over intuition or superstition. We can call this traditional rationality.  Traditional rationality includes science, and evidence-seeking in general.)

If you have formalized decision rules that describe the behavior of goal-directed agents, you now have the affordance to check what humans are actually doing. Enter Kahneman and Tversky. Over the course of the 1970s to 1990s, they do many experiments and determine that 1) most people are not optimal goal-directed agents (i.e., they are "irrational"; little surprise to anyone, I think), 2) those with advanced knowledge of "formal rationality" (e.g., statistics, economics, probability theory, game theory, decision theory) also fail to be optimal goal-directed agents (WE'RE irrational too), and 3) humans tend to deviate from ideal behavior in systematic, predictable ways.

Thus develops the Heuristics and Biases project in psychology, which gives rise to another approach to the project of rationality. If humans are intrinsically and systematically biased, and simply telling a person about a bias doesn't fix it (as is often the case), then the greater share of rationality training involves coming up with methods to counteract native cognitive biases. We can call this approach to rationality "the debiasing approach." It inherits many of the formalizations of formal rationality (which do describe ideal behavior), but the emphasis is on dealing with the actual human mind and correcting its faults. The project of rationality still involves math, but now it is mostly in the domain of psychology.

This is, in large part, the approach to rationality that Eliezer took in the sequences (though the sequences are a philosophical treatise, and his aims went beyond debiasing), and it fairly well characterizes LessWrong.

In 2012, CFAR is founded, and initially takes the debiasing approach. But the organization pretty quickly pivots away from that sort of model (you'll notice that there are no modules in the current workshop of the form "this is the X fallacy/bias and here is the technique that eliminates or mitigates it"). Developing debiasing protocols proves to be difficult, but there's a nearby thing which is very useful and much more tractable. CFAR borrows the System 1 / System 2 framework from heuristics and biases and develops methods to get those facets of the mind to communicate with one another.

For instance, sometimes a person intellectually endorses an action but doesn’t feel emotionally motivated about it. Propagating urges (or aversion factoring) is a technique that facilitates the dialogue between those parts of the mind, such that one (or both) of them updates and they are both on the same page. Similarly, sometimes a part of the mind has information about how likely a project is to succeed, but that data needs to be queried to be useful. The Inner Simulator / Murphyjitsu is a technique that lets the conscious, verbal system query the part of the mind that automatically makes predictions of that sort.

This approach isn't about mitigating specific biases, but rather propagating information that one already has in one part of one's mind to other parts of the cognitive system. We can call this the "parts-propagation approach." It's about unifying the mind (Minding Our Way style, mostly, but not exclusively) such that all parts of the mind are on the same page and pulling in the same direction, so that we can be more "agenty ducks" (i.e., better approximations of the simplified goal-directed agents of formal rationality, with stable, ordered goals) that can get shit done in the world.

These are three rather different approaches to rationality, and each one entails very different ideas of what the “art of rationality” should look like and what directions research should take. I have thoughts about which of these approaches are most tractable and important, but my goal here is only to clarify some of the confusion about what is meant by “rationality” and why.

Thoughts? Are these categories good ones? Do they carve reality at the joints?

 

Progress on brokenness

[This post is about me.]

Relating to a brief interaction between Mark, Andrew, and Eli, yesterday.

Over the past few months, my sense of myself and my oddities has changed significantly and become more sophisticated.

I've long known that my internal processes differed in many respects from those of most people, and while there are clear drawbacks to my cognitive style, there are also clear advantages. It wasn't necessarily clear that the advantages outweighed the drawbacks in the abstract, but I happened to like my cognitive style, and it seemed useful for the sake of variety if nothing else. In any case there was a sense of "that's just the way Eli is."

In my interactions with the cooler people of the rationalist community, my oddities started to "come into focus" more. Instead of being black boxes of "the ways Eli is weird," I began developing much deeper causal models of what I was doing differently and why. It wasn't an either-or proposition: I could figure out how to refine the things that I was doing to avoid the drawbacks without blunting the advantages.

Then more recently, largely as a result of my interactions with this group, I’ve gotten a fuller sense of what my mind is doing, moment-to-moment, on a phenomenological level, and I’m excited because I think I have (or am close to having) the tools, the phenomenology, and the community to flatly resolve issues that have plagued me for my whole life, and which pretty much everyone assumed were un-fixable and would just need to be accommodated.

An anecdote: When I was in second grade, in the winter, I would wear long-sleeved shirts. I would also wash my hands a lot (due to OCD-like tendencies). However, I wouldn't roll up my sleeves, so I would walk around with damp sleeves, and even if I dried my hands, it was as if they were constantly being submerged in water. This was in Arizona, a desert. My hands would get so dry that they had the texture of sandpaper, and they were constantly cracked to the point of bleeding. This was uncomfortable, to say the least.

My mom asked me, with exasperation, "why don't you just roll up your sleeves?" and I responded, "because no one told me to."

I said that a lot growing up. It was (and is) a common pattern for me. There have been many things that most kids just pick up, that most people find obvious, that I had to be told or taught explicitly.

There are a lot of things going wrong in the story above. There’s the thing that I felt like I needed to keep washing my hands. There’s clearly some sort of loop missing that does error-correction or automatic hypothesis generation or something.

If I needed to be told to roll up my sleeves, how many more subtle and implicit processes am I missing that most people don't even have the words to describe? If that wasn't clear to me, how much worse am I at all the other things that people do automatically without even realizing they are doing them?

But this anecdote makes me optimistic. Because when someone finally told me to roll up my sleeves before washing my hands, I did. The hard part was figuring out what to do, not doing it. There may be (there likely are) simple things that I could be taught to do explicitly, that are not actually much harder for me to do than for everyone else (but which I just don't come to do automatically), which would close-to solve the cognitive bottlenecks. I suspect that there is enormous low-hanging fruit, if someone can just figure out how to point it out to me in a way I can understand.

I’m tinkering a lot, moving towards fuller abstract models and finding phenomenological levers. I’m figuring out what the pieces are. I’m trying stuff. And you guys are the best people in the world to help me figure out what those processes are.

Now, I shouldn’t be too excited. There may be thousands of implicit micro-protocols that I’m missing. I’m not sure that there are, but it seems possible. But even if that’s the case, if I can fill the most important holes that I’m missing, maybe I can bootstrap. If I can learn the process that others use to learn implicitly, explicitly, I can solve the rest myself and go FOOM.

This is super interesting to me, because 1) it would help me a bunch practically, 2) I'm super curious about what my mind does and why, and this is an abstractly interesting project, and 3) if many of these things are in fact hard for me to learn, but easy for me to do, then I could fill some long-standing holes very rapidly, which means that I could approximately know how to be a "normal human," while still having access to all the exceptional competencies that I've been forced to develop as workarounds over the course of my 22 years missing parts of my brain. I mean, I'm a little crippled, and I'm still largely functional. What does the non-crippled version look like?
Optimism bias and stuff. Everything is harder than it seems. But I am actually making progress and I’m excited about what comes next.

My Hamming problem: Making dealing with overwhelm automatic?

[I posted this to a neurophenomenology mailing list, here, on February 11, 2016]
Most of my wasted time (and most of my wasted potential value) is lost in "procrastination." But "procrastination" isn't reductionistic enough: the phenomenon has parts.

In particular, much of it stems from an aversion to a specific sensation of overwhelm. This overwhelm has certain characteristics. For instance, I have never (I think) experienced it as resulting from a task that didn't have a deadline.

I think this overwhelm is the result of System 1 not believing that it can accomplish a task or several tasks. That fear is anxiety-provoking. It causes my mind to glance away, or to become absorbed in something that I consider to be much less important (or even not important at all).

There's more, though. I can notice this sensation and then use some process (Aversion Factoring, or Focusing) to pay attention to the anxiety. In this self-dialog I am "forced" to come to terms with the fact that I, on reflection, want to accomplish the task (or at least want the task accomplished). [Because of the deadline] simply not doing the thing in question is not a live option, once I have stared it in the face. I'll then typically negotiate between the parts of myself: "What would it take for me to want to do the thing?" I might do just one bite-sized unit as a CoZE. Usually this dialog ends with some sort of "just start," which I then proceed to do, unless I have an available affordance which I can bullshit myself into believing takes priority over doing the aversive thing.

But when I do "just start," most of the time it goes well, in one of two ways.

Sometimes, I start, and I make progress and it isn’t as bad as I thought, or as overwhelming as I thought, and I relax.

Sometimes, I start, and I make progress, and I'm feeling confused, and it still feels overwhelming, but now the next steps are clear, and the fear propels me instead of paralyzing me. I'm still aroused, but now my arousal has an outlet, and this also feels damn good; even though I'm feeling pressured, I'm no longer averse to looking at, or thinking about, the task to be accomplished.

First of all, this phenomenon seems to indicate that my anxiety and overwhelm are the result of not knowing how to do a thing. Once the path forward is clear, I feel pressured to take it; but not knowing what the path is, I'm anxious and will take any opportunity to be distracted from my fearful thought.

(I think this is a special case of the more general “my system 1 doesn’t believe that it/I can accomplish a given task, and so doesn’t want to think about it.” But with the exception of brute physical skills, if I don’t believe I can do a thing, it’s because I don’t know how.)

I can force myself to stare a scary thing in the face, and come to grips with it, but this is really will-power-y, and hence unreliable. I want to figure out how to make this process perfectly automatic.

This is my Hamming problem, and I think it is the key bottleneck on the productivity of most people. If I had a technique that would reliably and efficiently cause me to flinch towards the things I don't know how to do, and which consequently scare me, it would be a God-damn superpower. This is the main difference between my most productive days, when I typically work ten hours at a clip, then rest (because I need it, not because I'm avoiding something), and then go back for more, and most of my days, which are flowing and efficient until I hit something aversive and grind to a halt.

There are a bunch of ideas in this space, but this is MY HAMMING PROBLEM. I'm not looking for some idea that helps a little. I want this problem solved. I want to be at the point where I never have this problem again and I just churn through would-be aversions of this type, effortlessly, every day. I want the bottleneck on my productivity to be my time and my physical needs, not my micro-hedonic fears.

I think this is possible, and I’m determined to figure out how.
