Cravyness – A hypothesis

[epistemic status: something I think is at least partially true, this week.]

[This is one of the fragments of thought leading up to some posts on “the Psychology and Phenomenology of Productivity” that I have bubbling inside of me.]

I sometimes find myself feeling “cravy.” I’ll semi-compulsively seek instant gratification: from food, from the stimulation of YouTube or webcomics, from masturbation. My attention will flit from object to object instead of stabilizing on anything. None of that frantic activity is very satisfying, but it’s hard to break the pattern in the moment.

I think this state is the result of two highly related situations.

  1. I have some need (a social need, a literal nutrient, a sexual desire, etc.) that is going unfulfilled, and I’m flailing, trying to get it. I don’t know what I’m missing or wanting, so I’m just scrabbling after anything that has given me a dopamine hit in the past.
  2. Some relevant part of me currently alieves that one of my core goals (not necessarily an IG, but a core component of my path) is impossible, and is panicking. I’m seeking short-term gratification because that part of me either thinks that short-term gratification is the only kind that’s possible, or is trying to distract itself from the pain of the impossibility.

(Eli’s notes to himself: Notably, both of these hypotheses suggest that Focusing would likely be effective… – Ah. Right. But I don’t usually do the “clearing a space” step.)

A hill and a blanket of consciousness (terrible metaphor)

[epistemic status: A malformed thought I had a couple of weeks ago, which turned into something else, and which seems important to me right now. As I was writing, it became more for me and less for public consumption.]

Some inspiration: I’m reading (well, listening to the audiobook of) Consciousness Explained. I’m also thinking about this Slate Star Codex post.

What does it mean for a thought to be conscious vs. unconscious? Taking for granted that there’s something like a chamber of Guf: there are a bunch of competing thoughts, thought fragments, associations, plans, plan fragments, or whatever, occurring “under the surface.”

There’s a typical view of consciousness on which it is discrete and boolean: you have a bunch of unconscious thoughts, and some of them become conscious. You have a working memory, and you can manipulate the objects in working memory. (Working memory isn’t quite the same thing, though. You don’t have to be aware of the objects in working memory; you just need to be able to recall them when needed.)

But a lot of sources (Gendlin, the authors of The Mind Illuminated, Shinzen Young, (indirectly) Yudkowsky, and my own phenomenological experience) suggest that it’s more like a scalar gradient: some thoughts are more conscious, but there are also less conscious thoughts on the edges of awareness, which you can become more aware of with training.

Something like this metaphor:

Thoughts are like grains of sand piled into a hill or pyramid. The grains at the top are the most conscious, the easiest to see. The ones a bit further down are peripherally conscious. The further down you go, the less conscious the thought is.

Conscious awareness itself is like a blanket that you throw over the top of the hill. Most people’s blankets are pretty small: they only cover the very top of the hill. But with training, you can stretch out your blanket so that it covers more of the hill. You can become aware of more “unconscious” phenomena. (I need different words for how high on the hill a thought is, something like its “absolute accessibility,” and for how far the blanket reaches. Whether a thing is conscious depends on both its height on the hill and the size of the blanket.)

And to complicate the metaphor, thoughts are not really grains of sand. They’re more like ants, each trying to get to the top of the hill (I think? Maybe not all thoughts “want to be conscious”. In fact I think many don’t. ok. Scratch that.)

…They’re more like ants, many of which are struggling to get to the top of the hill by climbing over their brethren. And also, some of the ants are attached to other ants with strings, so that if one of them gets pulled up, it pulls up the other one.

The top of the pyramid is constant

[epistemic status: incomplete thought, perhaps to be followed up on in later posts]

I just read most of this article in The Atlantic, which points out that despite increasing investment (of both money and manpower) in science, the rate of scientific discovery is at best commensurate with the rate of progress in the 1930s, and may not even be meeting that bar.

(This basic idea is something that I’ve been familiar with for several years. Furthermore, this essay reminds me of something I read a few months ago: that the number of scientific discoveries named after their discoverers (a baseline metric for importance?) is about the same decade to decade, despite vastly more scientists. [I know the source, but I can’t be bothered to cite it right now. Drop a message in the comments if you want it.])

When I read the headline of this article, my initial hypothesis was this:

Very few people in the world can do excellent, groundbreaking science. Doing excellent scientific research requires both very high intrinsic intelligence and some other cognitive propensities and dispositions which are harder to pin down. In earlier decades, science was a niche enterprise that attracted only these unusual people.

Today, science is a gigantic network of institutions that includes many times as many people. It still attracts the few individuals capable of being excellent scientists, but it also includes 10 to 1000 times as many people who don’t have the critical properties.

My posit: The great scientists do good work. Any additional manpower put into the scientific institutions is approximately useless. So the progress of science is constant.

(There’s probably a second-order factor whereby all those extra people, and especially the bureaucracy that is required to manage and organize them all, get in the way and make it harder for the best scientists to do their work. (In particular, it might dilute the attention the best scientists give to training their successors, which weakens the transmission of the cognitive-but-non-biological factors that contribute to “great-scientist-ness.”)

But I would guess that this is mostly a minor factor.)

But…

Between 1900 and 2015, the world population increased by close to 5 times. It seems like, if my model were correct, the number of “great scientists” today would be higher than it was in 1930, if only because of population growth (ignoring things like the Flynn effect).
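(A quick sanity check on that multiplier, using the commonly cited estimates of roughly 1.6 billion people in 1900 and 7.3 billion in 2015:

$$\frac{7.3 \text{ billion}}{1.6 \text{ billion}} \approx 4.6$$

which is indeed close to 5 times.)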

Why aren’t there 5x as many great scientists? Maybe the bureaucracies-getting-in-the-way thing was bigger than I thought?

Maybe the “adjacent possible” of scientific discoveries increases linearly, for some reason, instead of exponentially, as one would expect?

Or maybe “discoveries named after their discoverers” is not a good proxy for “important discoveries,” because such naming is a status symbol, and the number of people at the top of a status hierarchy is constant even when the hierarchy itself is much bigger.

Circling vs. Unrolling

[Musing]

In reference to Critch’s post here.

I’m intrigued by the explicit unrolling, in contrast to circling. I wonder how much circling is an instance of developing overpowered tools on weird, partly-orthogonal dimensions (like embodiment) because you haven’t yet discovered the basic simple structure of the domain.

Like, a person might have a bunch of cobbled-together hacks and heuristics (including things about narrative, and chunking next actions, and discipline) for maintaining their productivity. But a crisp understanding of the relevant psychology makes “maintaining productivity” a simple and mostly effortless thing to do.

Or a person who spends years doing complicated math without paper. They will discover all kinds of tricks for doing mental computation, and they might get really good at these tricks, and building that skill might even have benefits in other domains. But at the end of the day, all of that training is blown out of the water as soon as they have paper. Paper makes the thing they were training hard to do easy.

To what extent is Circling working hard to train capacities that are being used as workarounds for limited working memory and an insufficient theoretical understanding of the structure of human interaction?

(This is a real question. My guess is, “some, but less than 30%”.)

A lot of my strategies for dealing with situations of this sort are circling-y, and it feels like a lot of that is superfluous. If I had a better theoretical understanding, I could do the thing with much more efficiency.

For instance, I exert a lot of effort to be attuned to the other person in general, to pick up subtle signs from them, and to track where they’re at. If I had a more correct theoretical understanding, a better ontology, I would only need to track the few things that turn out to actually be relevant.

Since humans don’t currently know what those factors are, people are skilled at this sort of interaction insofar as they can track everything that’s happening with the other person and, as a result, also capture the few things that are relevant to the underlying structure.

I suspect that others disagree strongly with me here.

A mechanistic description of status

[This is an essay that I’ve had bopping around in my head for a long time. I’m not sure if it says anything usefully new, but it might click with some folks. I think this is pretty bad and needs to be rewritten and maybe expanded substantially, but this blog is called “musings and rough drafts.”]

In this post, I’m going to outline how I think about status. In particular, I want to give a mechanistic account of how status necessarily arises, given some set of axioms, in much the same way one can show that evolution by natural selection must necessarily occur given the axioms of 1) inheritance of traits, 2) variance in reproductive success based on variance in traits, and 3) mutation.

(I am not claiming any particular skill at navigating status relationships, any more than a student of sports-biology is necessarily a skilled basketball player.)

By “status” I mean prestige-status.

Axiom 1: People have goals.

That is, for any given human, there are some things that they want. This can include just about anything. You might want more money, more sex, a Ninja Turtles lunchbox, a new car, to have interesting conversations, to become an expert tennis player, to move to New York, etc.

Axiom 2: There are people who control resources relevant to other people achieving their goals.

The kinds of resources are as varied as the goals one can have.

Thinking about status dynamics and the like, people often focus on the particularly convergent resources, like money. But resources that are only relevant to a specific goal are just as much a part of the dynamics I’m about to describe.

Knowing a bunch about late-16th-century Swedish architecture is controlling a goal-relevant resource, if someone has the goal of learning more about 16th-century Swedish architecture.

Just being a fun person to spend time with (due to being particularly attractive, or funny, or interesting to talk to, or whatever) is a resource relevant to other people’s goals.

Axiom 3: People are more willing to help (offer favors to) a person who can help them achieve their goals.

Simply stated, you’re apt to offer to help a person with their goals if it seems like they can help you with yours, because you hope they’ll reciprocate. You’re willing to make a trade with, or ally with such people, because it seems likely to be beneficial to you. At minimum, you don’t want to get on their bad side.

(Notably, there are two factors that go into one’s assessment of another person’s usefulness: whether they control a resource relevant to one of your goals, and whether you expect them to reciprocate.

This produces a dynamic whereby A’s willingness to ally with B is determined by something like the product of

  • A’s assessment of B’s power (as relevant to A’s goals), and
  • A’s assessment of B’s probability of helping (which might translate into integrity, niceness, etc.)

If a person is a jerk, they need to be very powerful-relative-to-your-goals to make allying with them worthwhile.)
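(A minimal sketch of that product rule, with toy numbers; the function name and values are my own illustration, not anything canonical:

```python
# Toy model of the pairwise alliance rule above: A's willingness to ally
# with B is the product of A's assessment of B's power (relative to A's
# goals) and A's assessment of B's probability of actually helping.

def willingness_to_ally(power: float, p_help: float) -> float:
    """Both arguments are A's subjective assessments of B, scaled to [0, 1]."""
    return power * p_help

# A powerful jerk vs. a moderately powerful, reliable ally:
print(willingness_to_ally(0.9, 0.2))  # 0.18 -- high power, low integrity
print(willingness_to_ally(0.5, 0.8))  # 0.40 -- less power, but trustworthy
```

Note how the multiplication captures the jerk clause: a low probability-of-helping drags the product down no matter how large the power term is.)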

All of this seems good so far, but notice that up to this point we have only described individual pairwise transactions and pairwise relationships. People speak about “status” as an attribute that someone can possess or lack. How does the dynamic of a person being “high status” arise from the flux of individual transactions?

Lemma 1: One of the resources that a person can control is other people’s willingness to offer them favors.

With this lemma, the system folds in on itself, and the individual transactions cohere into a mostly-stable status hierarchy.

Given Lemma 1, a person doesn’t need to personally control resources relevant to your goals; they just need to be in a position such that someone who is relevant to your goals will privilege them.

As an example, suppose that you’re introduced to someone who is very well respected in your local social group: Wendy. Your assessment might be that Wendy, directly, doesn’t have anything that you need. But because Wendy is well respected by others in your social group, they are likely to offer favors to her. Therefore, it’s useful for Wendy to like you, because then she is more apt to call in other people’s favors on your behalf.

(All the usual caveats apply about how this is subconscious: humans are adaptation-executors and don’t do explicit verbal assessments of how useful a person will be to them, but rely on emotional heuristics that approximate explicit assessment.)

This causes the mess of status transactions to reinforce and stabilize into a mostly-static hierarchy. The mass of individual A-privileges-B-on-the-basis-of-A’s-goals flattens out, into each person having a single “score” which determines to what degree each other person privileges them.

(It’s a little more complicated than that, because people who have access to their own resources have less need of help from others. So a person’s effective status (the status-level at which you treat them) is closer to their status minus your status. But this is complicated again because people are motivated not to be dicks (that’s bad for business), and respecting other people’s status is important to not being a dick.)

Goal-factoring as a tool for noticing narrative-reality disconnect

[The idea of this post, as well as the opening example, were relayed to me by Ben Hoffman, who mentioned it as a thing that Michael Vassar understands well. This was written with Ben’s blessing.]

Suppose you give someone the option of one of three fruits: a radish, a carrot, and an apple. The person chooses the carrot. When you ask them why, they reply “because it’s sweet.”

Clearly, there’s something funny going on here. While the carrot is sweeter than the radish, the apple is sweeter than the carrot. So sweetness must not be the only criterion your fruit-picker is using to make his/her decision. He/she might be choosing partially on that basis, but there must also be some other, unmentioned factor guiding his/her choice.

Now imagine someone is describing the project that they’re working on (project X). They explain their reasoning for undertaking this project, the good outcomes that will result from it: reasons a, b, and c.

When someone is presenting their reasoning like this, it can be useful to take a, b, and c as premises and try to project what seems to you like the best course of action for optimizing those goals. That is, do a quick goal-factoring, to see if you can discover a Y that seems to fulfill goals a, b, and c better than X does.

If you can come up with such a Y, this is suggestive of some unmentioned factor in your interlocutor’s reasoning, just as there was in the choice of your fruit-picker.

Of course this could be innocuous. Maybe Y has some drawback you’re unaware of, and so actually X is the better plan. Maybe the person you’re speaking with just hadn’t thought of Y.

But it also might be that he/she is lying outright about why he/she is doing X. Or maybe he/she has some motive that he/she is not even admitting to him/herself.

Whatever the case, the procedure of taking someone else’s stated reasons as axioms and then trying to build out the best plan that satisfies them is a useful procedure for drawing out dynamics that are driving situations under the surface.

I’ve long used this technique effectively on myself, but I suggest that it might also be an important lens for viewing the actions of institutions and other people. It’s often useful to tease out exactly how their declared stories about themselves deviate from their revealed agency, and this is one way of doing that.

Approaches to this thing called “Rationality” (or alternatively, a history of our lineage)

[Posted to the CFAR mailing list]

[Somewhat experimental: Looking for thumbs up and thumbs down on this kind of writing. I’m trying to clarify some of the fuzziness around why we are calling the-thing-some-of-us-are-calling-rationality “rationality.”]

So what is this rationality thing anyway?

Simply stated, some behavior works better than other behavior for achieving a given goal. In fact, for formal, well-defined environments (“games”), this is provably true. In the early-to-mid 20th century, academic mathematicians developed game theory and decision theory: mathematical formalizations of idealized decision algorithms that give provably optimal outcomes (in expectation).

One school of rationality (let’s call it “formal rationality”) is largely about learning and relying on these decision rules. For a rationalist of this type, progress in the field means doing more math and discovering more theorems or decision rules. Since most non-trivial decision problems involve dealing with uncertainty, and uncertainty in the real world is quantified using statistics, statistics is central to the practice of formal rationality.
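(To make “these decision rules” concrete, here is a toy sketch of the canonical one, expected-utility maximization; the scenario, numbers, and names are invented for illustration:

```python
# Pick the action with the highest expected utility: the sum over outcomes
# of P(outcome) * utility(outcome). Toy decision: take an umbrella or not.

actions = {
    "take_umbrella":  [(0.3, 5.0), (0.7, 4.0)],    # (P(outcome), utility)
    "leave_umbrella": [(0.3, -10.0), (0.7, 6.0)],  # rain vs. no rain
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "take_umbrella": expected utility 4.3 beats 1.2
```

The formal rationalist’s claim is that, given the probabilities and utilities, this rule is provably the right one; the hard part in real life is supplying those numbers.)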

MIRI does the sort of work that a formal rationalist would consider to be progress on rationality: trying to develop solutions to decision theory problems. (This is NOT to say that most, or even any, of the actual people who work at MIRI are themselves of the “formal rationality” school, as opposed to the schools that follow. In fact, I have reason to think that NONE of them would identify as such.) The other large frontiers of “formal rationality” are mostly in economics. The economy can be thought of as a single gigantic game-theoretic game.

For the formal rationalist, rationality is almost entirely solved. We have game theory. We have probability theory. We have decision theory. There may be edge-case scenarios that need to be solved (Pascal’s mugging, for instance), but for the most part, the “art” has already been invented. Declaring oneself a rationalist in the formal sense is a statement of philosophy: it means you trust the approximations of the formal decision rules over intuition, common sense, tradition, or, well, anything. One doesn’t need to qualify with the word “aspiring.”

(There’s a framework nearby to formal rationality which is largely captured by the term “evidence-based.” This is the position that one should base one’s actions and beliefs on evidence, over intuition or superstition. We can call this traditional rationality.  Traditional rationality includes science, and evidence-seeking in general.)

If you have formalized decision rules that describe the behavior of goal-directed agents, you now have the affordance to check what humans are actually doing. Enter Kahneman and Tversky. Over the course of the 1970s to 1990s, they did many experiments and determined 1) that most people are not optimal goal-directed agents (i.e., they are “irrational”; little surprise to anyone, I think), 2) that those with advanced knowledge of “formal rationality” (e.g. statistics, economics, probability theory, game theory, decision theory) also fail to be optimal goal-directed agents (we’re irrational too), and 3) that humans tend to deviate from ideal behavior in systematic, predictable ways.

Thus develops the Heuristics and Biases project in psychology, which gives rise to another approach to the project of rationality. If humans are intrinsically and systematically biased, and simply telling a person about the bias doesn’t fix it (as is often the case), then the greater share of rationality training involves coming up with methods to counteract native cognitive bias. We can call this approach to rationality “the debiasing approach.” It inherits many of the formalizations of formal rationality (which do reflect ideal behavior), but the emphasis is on dealing with the actual human mind and correcting its faults. The project of rationality still involves math, but now it is mostly in the domain of psychology.

This is, in large part, the approach to rationality that Eliezer took in the sequences (though the sequences are a philosophical treatise, and his aims went beyond debiasing), and it fairly well characterizes LessWrong.

In 2012, CFAR is founded, and initially takes the debiasing approach. But the organization pretty quickly pivots away from that sort of model (you’ll notice that there are no modules in the current workshop of the form “this is the X fallacy/bias, and here is the technique that eliminates or mitigates it”). Developing debiasing protocols proves to be difficult, but there’s a nearby thing which is very useful and much more tractable. CFAR borrows the System 1 / System 2 framework from heuristics and biases and develops methods to get those facets of the mind to communicate with one another.

For instance, sometimes a person intellectually endorses an action but doesn’t feel emotionally motivated about it. Propagating urges (or aversion factoring) is a technique that facilitates the dialogue between those parts of the mind, such that one (or both) of them updates and they are both on the same page. Similarly, sometimes a part of the mind has information about how likely a project is to succeed, but that data needs to be queried to be useful. The Inner Simulator / Murphyjitsu is a technique that lets the conscious, verbal system query the part of the mind that automatically makes predictions of that sort.

This approach isn’t about mitigating specific biases, but rather about propagating information that one already has in one part of his/her mind to other parts of the cognitive system. We can call this the “parts-propagation approach.” It’s about unifying the mind (Minding Our Way style, mostly, but not exclusively) such that all parts of the mind are on the same page and pulling in the same direction, so that we can be more “agenty ducks” (i.e., better approximations of the simplified goal-directed agents of formal rationality, with stable, ordered goals) that can get shit done in the world.

These are three rather different approaches to rationality, and each one entails very different ideas of what the “art of rationality” should look like and what directions research should take. I have thoughts about which of these approaches are most tractable and important, but my goal here is only to clarify some of the confusion about what is meant by “rationality” and why.

Thoughts? Are these categories good ones? Do they carve reality at the joints?

Progress on brokenness

[This post is about me.]

Relating to a brief interaction between Mark, Andrew, and Eli, yesterday.

Over the past few months, my sense of myself and my oddities has gone through some significant changes and gained sophistication.

I’ve long known that my internal processes differ in many respects from those of most people, and while there are clear drawbacks to my cognitive style, there are also clear advantages. It wasn’t necessarily clear that the advantages outweighed the drawbacks in the abstract, but I happened to like my cognitive style, and it seemed useful for reasons of variety, if nothing else. In any case, there was a sense of “that’s just the way Eli is.”

In my interactions with the cooler people of the rationalist community, my oddities started to “come into focus” more. Instead of being black boxes of “the ways Eli is weird,” I began developing much deeper causal models of what I was doing differently and why. It wasn’t an either-or proposition: I could figure out how to refine the things that I was doing to avoid the drawbacks without blunting the advantages.

Then more recently, largely as a result of my interactions with this group, I’ve gotten a fuller sense of what my mind is doing, moment-to-moment, on a phenomenological level, and I’m excited because I think I have (or am close to having) the tools, the phenomenology, and the community to flatly resolve issues that have plagued me for my whole life, and which pretty much everyone assumed were un-fixable and would just need to be accommodated.

An anecdote: When I was in second grade, in the winter, I would wear long-sleeved shirts. I would also wash my hands a lot (due to OCD-like tendencies). However, I wouldn’t roll up my sleeves, so I would walk around with damp sleeves, and even if I dried my hands, it was as if they were constantly being submerged in water. This was in Arizona, a desert. My hands would get so dry that they had the texture of sandpaper, and they were constantly cracked to the point of bleeding. This was uncomfortable, to say the least.

My mom asked me, with exasperation, “why don’t you just roll up your sleeves?” and I responded, “because no one told me to.”

I said that a lot growing up. It was (and is) a common pattern for me. There have been many things that most kids just pick up, that most people find obvious, that I had to be told or taught explicitly.

There are a lot of things going wrong in the story above. There’s the thing that I felt like I needed to keep washing my hands. There’s clearly some sort of loop missing that does error-correction or automatic hypothesis generation or something.

If I needed to be told to roll up my sleeves, how much more am I missing of the subtle and implicit processes that most people don’t even have the words to describe? If that wasn’t clear to me, how much worse am I at all the other things that people do automatically without even realizing that they are doing them?

But, this anecdote makes me optimistic. Because when someone finally told me to roll up my sleeves before washing my hands, I did. The hard part was the figuring out what to do, not the doing it. There may be (there likely are) simple things that I could be taught to do, explicitly, that are not actually much harder for me to do than for everyone else (but which I just don’t come to do automatically), which would close-to solve the cognitive bottlenecks. I suspect that there is enormously low-hanging fruit, if someone can just figure out how to point it out to me in a way I can understand.

I’m tinkering a lot, moving towards fuller abstract models and finding phenomenological levers. I’m figuring out what the pieces are. I’m trying stuff. And you guys are the best people in the world to help me figure out what those processes are.

Now, I shouldn’t be too excited. There may be thousands of implicit micro-protocols that I’m missing. I’m not sure that there are, but it seems possible. But even if that’s the case, if I can fill the most important holes, maybe I can bootstrap: if I can explicitly learn the process that others use to learn implicitly, I can solve the rest myself and go FOOM.

This is super interesting to me, because 1) It would help me a bunch practically, 2) because I’m super curious about what my mind does and why, and this is an abstractly interesting project, and 3) because if many of these things are in fact hard for me to learn, but easy for me to do, then I could fill some long-standing holes very rapidly, which means that I could approximately know how to be a “normal human”, while still having access to all the exceptional competencies that I’ve been forced to develop as workarounds over the course of my 22 years missing parts of my brain. I mean, I’m a little crippled, and I’m still largely functional. What does the non-crippled version look like?

Optimism bias and stuff. Everything is harder than it seems. But I am actually making progress, and I’m excited about what comes next.

My Hamming problem: Making dealing with overwhelm automatic?

[I posted this to a neurophenomenology mailing list, here, on February 11, 2016]

Most of my wasted time (and most of my wasted potential value) is lost in “procrastination.” But “procrastination” isn’t reductionistic enough: the phenomenon has parts.

In particular, this is due to a particular kind of aversion to a particular kind of sensation of overwhelm. This overwhelm has certain characteristics. For instance, I have never (I think) experienced it as resulting from a task that didn’t have a deadline.

I think this overwhelm is the result of System 1 not believing that it can accomplish a task or several tasks. That fear is anxiety-provoking. It causes my mind to glance away, or to become absorbed in something that I consider to be much less important (or even not important at all).

There’s more, though. I can notice this sensation and then use some process (aversion factoring, or Focusing) to pay attention to the anxiety. In this self-dialog, I am “forced” to come to terms with the fact that I, on reflection, want to accomplish the task (or at least want the task accomplished). [Because of the deadline] simply not doing the thing in question is not a live option, once I have stared it in the face. I’ll then, typically, negotiate between the parts of myself: “What would it take for me to want to do the thing?” I might do just one bite-sized unit as a CoZE. Usually this dialog ends with some sort of “just start,” which I then proceed to do, unless I have an available affordance which I can bullshit myself into believing takes priority over doing the aversive thing.

But when I do “just start,” most of the time it goes well, in one of two ways.

Sometimes, I start, and I make progress and it isn’t as bad as I thought, or as overwhelming as I thought, and I relax.

Sometimes, I start, and I make progress, and I’m feeling confused, and it still feels overwhelming, but now the next steps are clear, and the fear propels me instead of paralyzing me. I’m still aroused, but now my arousal has an outlet, and this also feels damn good, and even though I’m feeling pressured, I’m no longer averse to looking at, or thinking about, the task to be accomplished.

First of all, this phenomenon seems to indicate that my anxiety and overwhelm are the result of not knowing how to do a thing. Once the path forward is clear, I feel pressured to take it; but not knowing what the path is, I’m anxious and will take any opportunity to be distracted from my fearful thought.

(I think this is a special case of the more general “my System 1 doesn’t believe that it/I can accomplish a given task, and so doesn’t want to think about it.” But with the exception of brute physical skills, if I don’t believe I can do a thing, it’s because I don’t know how.)

I can force myself to stare a scary thing in the face, and come to grips with it, but this is really will-power-y, and hence unreliable. I want to figure out how to make this process perfectly automatic.

This is my Hamming problem, and I think it is the key bottleneck on the productivity of most people. If I had a technique that would reliably and efficiently cause me to flinch towards the things I don’t know how to do, and which consequently scare me, it would be a God-damn superpower. This is the main difference between my most productive days, when I typically work ten hours at a clip, then rest (because I need it, not because I’m avoiding something), and then go back for more, and most of my days, which are flowing and efficient until I hit something aversive and grind to a halt.

There are a bunch of ideas in this space, but this is MY HAMMING PROBLEM. I’m not looking for some idea that helps a little. I want this problem solved. I want to be at the point where I never have this problem again and I just churn through would-be aversions of this type, effortlessly, every day. I want the bottleneck on my productivity to be my time and my physical needs, not my micro-hedonic fears.

I think this is possible, and I’m determined to figure out how.
