A series of vignettes for thinking about assisting corrupt regimes to prosecute crimes

I was talking with someone about whether you should always help the police prosecute serious crimes, up to and including rape and murder, even if you know that the police system is corrupt. I said that I would support the police in prosecuting a murder, but I didn’t think that this was a slam-dunk obvious moral conclusion. This was my response to them.

I don’t have a super strong take about what the correct answer is here, only that there’s a moral dilemma to contend with at all.

I

Let’s start with this hypothetical:

Let’s say you live in a slum in Chicago. The neighborhood you live in is controlled by a gang. Police officers mostly don’t go into your neighborhood, because the gang has a tight hold on it.

The primary activities of the gang are selling crack, and extorting protection money. That’s their main business model. There are a bunch of grunts, but the top guys in the organization get pretty rich this way.

That business model entails maintaining control over their territory. So if someone else tries to sell crack in your neighborhood, they’ll scare him away, and if he comes back, they’ll kill him. And a lot of violence is bad for business, so if someone not in the gang is roughing people up, the gang will typically threaten them, drive them away, hurt them, or kill them. (If a member of the gang kills someone unnecessarily, they might get reprimanded for being stupid, but probably not much more than that.)

If someone is murdered by a non-gang-member, the gang won’t do much of an investigation. But usually it’s pretty clear who did it. And depending on who was killed, killing someone on gang territory represents a challenge to the gang’s power, and so they’ll probably find and kill the perpetrator, in retribution.

The gang is a major “social institution” in your neighborhood. They’re the closest thing to an organization of law and order that’s around.

I claim that it is NOT obvious that if someone is murdered, you should help the gang find and kill the perpetrator.

That might sometimes turn out to be the best available option, especially if the killer seems really dangerous. But helping the gang is basically siding with one set of murderers against another. And by helping them you’re adding whatever social weight you have to their legitimacy as the Schelling norm-enforcement institution.

Probably if the gang suddenly got a lot weaker, such that they stopped being the Schelling “biggest force around”, things would get locally worse, as a bunch of smaller gangs made a play for the power vacuum. There would be more violence, not less, until things eventually settled into a new lower-violence equilibrium where some other gang is dominant (or maybe a few gangs, which have divided and staked out the old territory).

But the fact that the collapse of the gang’s power would be locally bad doesn’t make it obvious that supporting them is the moral thing to do.

II

That’s one hypothetical. Now let’s try on a different one.

Let’s say that literally all of the above circumstances obtain, but instead of a gang, it’s the local police force that’s behaving this way.

Let’s say you live in a slum in Chicago. The neighborhood you live in is controlled by the local police department. Police officers from other jurisdictions mostly don’t go into your neighborhood, because this local police force has a tight hold on it.

The primary activities of the police force are selling crack, and extorting protection money. That’s their main business model. There are a bunch of grunts, but the top guys in the organization get pretty rich this way.

That business model entails maintaining control over their territory. So if someone else tries to sell crack in your neighborhood, the police will scare him away, and if he comes back, they’ll kill him. And a lot of violence is bad for business, so if someone who’s not a member of the department is roughing people up, the police officers will typically threaten them, drive them away, hurt them, or kill them. (If a police officer kills someone unnecessarily, they might get reprimanded for being stupid, but probably not much more than that.)

If someone is murdered by a non-police-officer, the police won’t do much of an investigation. But usually it’s pretty clear who did it. And depending on who was killed, killing someone on their territory represents a challenge to the department’s power, and so they’ll probably find and kill the perpetrator in retribution.

The police department is a major “social institution” in your neighborhood. They’re the closest thing to an organization of law and order that’s around.

Maybe it is an important difference between the first hypothetical and this one that the gang wears police uniforms. But I don’t think it is much of a cruxy difference. If we blur our eyes and look at the effects, the second scenario is almost identical to the first. It’s only the labels that are different.

If it seems morally incorrect to side with the local gang in the first hypothetical, then it seems morally incorrect to side with the police in the second. That an organization is called “the police” is rarely cruxy for whether they deserve our support as a bastion of civilization.

III

That was a hypothetical. Now let’s talk about some real historical cases.

Let’s consider the sheriff of a small town in the South around 1885.

The primary function of the Sheriff is maintaining white supremacy. He does that in a bunch of ways, but most notably, he goes around arresting black men on extremely flimsy legal pretext (“loitering” or “vagrancy”, if the man goes into town, for instance, or for failing to pay debts that he was forced to take on), and sometimes no legal pretext at all. He and the local judge sentence the black man to hard labor, and then sell a contract for that man’s labor to one of their buddies, a man who owns a mine up-state.

The black man will probably spend the rest of his life doing forced labor for that mine-owner. Every time the end of his sentence is coming up, he’ll be penalized for some infraction that will necessitate extending his sentence. That way, the mine can continue to extort his labor indefinitely.

The Sheriff, the Judge, and the mine-owners all make a profit from this.

Sometimes people from the North come down and observe this system. Some of them are appalled, but no one wants another Civil War, and there’s a balance of power in the federal government that lets the Southern states govern themselves how they choose. So no one stops this, even though it’s definitely illegal by common law and by US federal law.

Sometimes, when a white man is murdered, the Sheriff will do an investigation and punish the perpetrator. But often, he’ll scapegoat and lynch a black man for it instead. Still, I presume that he does (at least sometimes) do basically appropriate law-enforcement work, arresting and prosecuting white criminals, approximately according to the law.

This really happened. For decades. 

(If you want to know more you might check out the book Slavery by Another Name: The Re-Enslavement of Black Americans from the Civil War to World War II. I think there’s also a PBS documentary. I don’t know if it’s any good.) 

This seems to me to be much worse than a police force that is primarily selling crack and extorting protection money.

That Sheriff is the only game in town for law and order. He is legally empowered by the local government. But the local government is so corrupt and evil as to not to be morally legitimate. I don’t think it deserves my support.

If there’s a murder, even if I trusted that the Sheriff would find and prosecute the perpetrator instead of a scapegoat, I might not want to invoke (and thereby reinforce) his authority, which is not legitimately held.

IV

Now let’s talk about today.

Here are some stats that I could grab quickly.

  • American prisons are famously inhumane. Inmates are regularly raped or killed.
  • One out of three black men goes to prison in his lifetime.
  • 44% of all the people in American prisons are there for drug offenses. Some large fraction of those are for the victimless crime of smoking weed.
  • 5% of illicit drug users are African American, yet African Americans represent 29% of those arrested and 33% of those incarcerated for drug offenses. African Americans and whites use drugs at similar rates, but the imprisonment rate of African Americans for drug charges is almost 6 times that of whites. (source, which I didn’t factcheck, but these numbers are consistent with my understanding)
  • As of October 2016, there had been 1900 exonerations of the wrongfully convicted; 47% of the exonerated were African American. (same source as above.)
  • It is normal for arrested black men to be threatened into making plea bargains, even when they’re innocent. It is normal for arrested black men to fail to receive due process.
  • A Nixon aide said explicitly that the war on drugs was a way of targeting black people:

“The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.” (source)

This…does not look Just to me. This looks like a massive miscarriage of justice.

If a person looks at the past history of how black people were treated by law enforcement, and looks at law enforcement today, and notices that the current crimes are being perpetrated by the same institutions that perpetuated the earlier evils, and they conclude that current US law enforcement organizations are immoral and illegitimate and shouldn’t be cooperated with…

Well, I can see where they’re coming from.

It seems to be a pretty live question whether the typical police force, or the criminal justice system as a whole, is better conceived of as basically a gang, extorting and exploiting the marginalized fractions of society, or as an imperfect institution (as all institutions are imperfect) of Justice. And depending on which it is (or if it’s some more complicated thing), I’ll have a different view about whether it’s a good idea to cooperate with the police.

Some things I’ve come to understand from reading about D/s relationships

Recently, I started getting interested in 24/7 power exchange relationships: how they work, what they’re like, why anyone would want to do that. I quickly read a couple of books on the topic, mostly intro-level or 102-level guides for people who want to try that themselves.

Here are my takeaways:

  • These relationships are usually, or at least often, quite loving and affectionate. In a way, it’s really sweet.
  • It seems like the relationship is “about” or “for” the submissive, more than I would have guessed. Often the dominant’s pride is in tightly controlling the experience of the submissive in a way that brings her pleasure or satisfaction.
  • Being a dominant is, to a surprising extent, acting. It’s about creating the appearance and experience of dominance for the submissive. Substance seems (often) to matter less than I’d have thought.
    • There are lots of places where a dominant might use something like magician’s tricks to craft that illusion: brandishing a stage knife, but then actually cutting off his/her clothes with (more effective) first aid scissors behind the submissive’s back, or displaying a jar of yellow jackets, and then (while the submissive is blindfolded) pressing a jar filled with flies against their body, so they think the buzzing, crawling sensation is the stinging bees.
  • Some submissives feel reassured by the knowledge that they’ll be punished if they break a rule or do something bad. This is true even if they dislike the punishment itself.
  • The kink lifestyle is, interestingly, not so different from traditional marriage: the wife respects and obeys the husband, and the husband defends and cherishes the wife. To my amusement, it seems like there’s a kind of political horseshoe here, where the most conservative relationships parallel one of the most far-out liberal kinds of relationships.

Looking into corporate campaigns for animal welfare a bit

I spent ~10 hours a few weeks ago doing some research to inform my donation choices for the year.1

I spend most of my time thinking about and trying to reduce x-risk (and other abstract, long term problems, influenced through long, noisy, chains of cause and effect). But in this case, the money I was allocating is my yearly cryonics-cost-matched donation, and I wanted to allocate it to doing near term good for existing (or soon to exist) beings, rather than the more speculative radically uncertain stuff that I spend most of my time and effort on.

I’d casually read some things (namely this) that suggested that animal welfare charities are much higher leverage than global poverty charities, so I focused on that. Specifically, I wanted to look into the impact model of the various animal welfare interventions, walk through the steps in the chain, and assess for myself if I trusted those impact models. I ended up focusing most on corporate campaigns for chickens (and a bit on campaigns for shrimp, which are somewhat different).

I really didn’t spend enough time on this to have confident conclusions. Please don’t take this post as conclusive in any way. Mostly it’s an intermediate report, consolidating my thoughts partway through an in-process investigation.

Corporate campaigns

It seems like the main intervention by which you can turn money into better lives for animals is corporate campaigns.

The impact story

Roughly, the way these work is that an animal welfare charity will, first, politely get in contact with the leadership of some major corporation that either produces animal products (Tyson, for instance) or buys wholesale animal products as part of their production process (McDonald’s). They’ll inform that leadership of the conditions for the animals in factory farms, and ask the leadership to change them. Apparently, this alone works quite well a lot of the time. Shockingly, the leadership of those companies often doesn’t know how horrendous the conditions are in the farms of their suppliers, and they’re motivated to change those conditions, either because of their own conscience, or because they see that it’s a big PR risk.

But if that alone doesn’t work, the charity will run ads informing consumers of those conditions, until the company agrees to change them. The charity is effectively committing to continue to run these ads, in perpetuity, unless the company changes its policy.

In the mid-2010s, there was a big push of this sort for layer hens, specifically to get the companies to commit to phase out battery cages and switch to cage-free egg production. If this change was successful, the chickens would still live in sad warehouses, instead of living in anything like their natural habitat, but they would be free to move around in those warehouses, instead of spending their whole lives trapped, packed tight, in cages, unable to move.

This ask was chosen because aversion studies suggest that chicken welfare is much higher when the birds are free to move about the warehouse instead of trapped in the battery cages. And this switch was low cost for the companies in question: it only costs a few cents per egg. The hope was that by applying PR pressure to this particularly high leverage opportunity, we could spend money to improve chicken welfare by a lot.

On the face of it, this worked surprisingly well. Basically, the whole US egg industry committed to stop using battery cages. The typical cost estimate given is that $1 spent on these campaigns would free 9 to 120 hens, on average, from battery cages. That’s 12 to 160 life-years spent free to roam around, instead of in a painful cage. Which does seem like a lot of leverage for a dollar!
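
For concreteness, here’s the implied conversion as a back-of-the-envelope. The ~1.33 years-per-hen figure is just my inference from making the quoted ranges consistent (12/9 = 160/120), presumably the average remaining production life of a laying hen; I didn’t check it against a source:

```python
# Back-of-the-envelope: hens freed per dollar -> cage-free life-years per dollar.
# years_per_hen is inferred from the quoted ranges, not from a source.
years_per_hen = 1.33

for hens_per_dollar in (9, 120):
    life_years = hens_per_dollar * years_per_hen
    print(f"{hens_per_dollar} hens/$ -> ~{life_years:.0f} cage-free life-years/$")
# prints: 9 hens/$ -> ~12 ...;  120 hens/$ -> ~160 ...
```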

Since then the frontier has moved forward, to pushing for similar reforms in other countries, and pushing for other reforms for other animals.

Checking a bit closer

But there are a number of ways this analysis might turn out to not be as rosy as it seems.

Follow through

The first thing to keep in mind is that these campaigns secured commitments from the relevant companies, commitments that they were expected to follow through on over the intervening years. If they don’t follow through, you might need to run more campaigns (this time advertising the fact that they made a commitment, but didn’t act on it, unlike others in their industry). It might turn out that the 9-to-120-hens-per-dollar estimate is inflated because it’s not taking those additional follow-up campaigns into account.

But, according to this graph, it looks like mostly companies have been following through. The fraction of US egg-laying hens that are cage-free has increased from 15% in Jan 2018 to 38% in July 2023.

Some companies will probably drag their feet, and some additional follow up campaigns will be necessary. But we can expect those to be more effective now that the whole industry has committed to this new standard. Now you can say, “X company promised to switch to cage free, which is the industry standard. Other companies are following through on their commitment, but not company X.”

(Also note that the lay rate for cage free chickens has risen to parity with the industry average (probably as workers get used to the new cage-free setups), which is important, because if you need more chickens to make up for the fact that they’re not in cages, there might not be a net gain in chicken welfare.)

Counterfactuals

This raises the question of counterfactual history: did these campaigns have a causal role in the continuing switch to cage free egg production?

It seems possible, I suppose, that there were exogenous reasons to switch to cage free practices, and the campaigns didn’t have anything to do with it. Maybe this is a long running trend? I’d love to see a graph that extended further back into the 2000s and early 2010s.

But as a first pass, I’m inclined to take for granted the simple story that this shift is downstream of commitments the companies made, and that they made those commitments in large part due to activist pressure.


Still work?

There seems to be wide agreement that the lowest-hanging fruit has been picked, and that current campaigns are not as cost-effective as earlier ones, though still pretty good.

Apparently, 64% of eggs are produced in Asia, including 37% in China (source), and I think we don’t know if these kinds of corporate campaigns work as well there. It seems totally plausible that cultural differences could make this strategy totally ineffective in China.

Some extremely provisional first pass conclusions and more questions.

The flagship animal welfare intervention seems to me, on first pass, to hold up. It’s not certain, by any means. But it looks like, given some reasonable assumptions, it works for reducing animal suffering at high efficiency.

I opted not to donate to any particular charity, opting instead to donate to an animal welfare fund. So long as I’ve verified that at least some of the impact models check out, I expect a professional animal welfare grantmaker to be better positioned than me to allocate marginal dollars to the next highest-leverage opportunity.

Importantly, there don’t seem to be animal welfare analogues to GiveDirectly or AMF which turn dollars into straightforward benefits at the margin. There’s nothing where I can pay money, and reliably, improve the life of some number of beings. It’s more like, I can pour resources into a big machine which, if everything goes as planned, will help a bunch of animals, but if our assumptions are mistaken, will do nothing. The interventions on offer seem like good bets, but they’re still bets.

Given that there are only speculative options on the table, I would rather be investing in the development of clean meats, which can end this horror entirely in the long run, instead of moving some animals from torturous conditions to substantially better, but still inhumane, conditions.

(My main reservation about spending my neartermist charity budget on pushing for clean meat is short AI timelines. It’s not worth investing in plans to reduce the horror, if they take 30 years to reach fruition.)

I would love to read about projections for the scaling up and commercialization of clean meat, including estimates for the date of cost and taste parity, and estimates (including extremely wide error bar estimates) of how much marginal investment can pull that date forward in time. Looking into opportunities to invest in clean meat is probably my next priority here.


  1. This is downstream of writing a twitter thread accusing most EAs of being too trusting of “EA” and insufficiently reflective about the investment of their efforts. As often happens, I made the critique, and then felt an obligation/desire to meet it myself. ↩︎

First pass on estimating how much I benefit from structural racism

Suppose I wanted to estimate, quantitatively, how much I, personally, have benefited from historical and contemporary structural racism.

I’m interested in this question because I’m at least somewhat sympathetic to the argument that, if I personally benefited from oppressive systems, then there’s some fraction of “my” resources to which I don’t have a legitimate claim. 

(It’s not obvious how I should respond to that situation. A first thought is that I should donate that fraction of my wealth to racial justice charities. In the ideal case those charities might function as an offset, effectively compensating for the harm done. At minimum, disowning that fraction of my resources correctly aligns the incentives. If every person followed this policy <link to deontological principles>, there would be no incentive to enforce white supremacy in the first place. I would prefer not to live in a condition of being incentivized to turn a blind eye to racial injustice, and score moral points by condemning it after the fact. Though depending on the size of the numbers that might entail large moral or personal tradeoffs, and I’ll have to think more about how best to act.)

So how might I go about calculating the personal benefit that I’ve derived from the oppression of blacks in America?

These are some of my first pass thoughts.

Framing the question

Some of the ways that I could possibly have benefited:

  • I was born to a wealthier family, because my parents and grandparents were afforded privileges and advantages at the expense of black people.
  • I had better access to education for being white.
  • I had better access to jobs for being white.
  • I had a way lower risk of going to prison, including for spurious or trivial offenses.

…and I think that’s about it? (I welcome additional suggestions)

Given my specific, highly unusual work history, I find it pretty implausible that I, personally, benefited from racial privilege.

(It’s possible that my community is more racist than I imagine, but eg I find it pretty hard to imagine Anna or Critch or Oliver turning down my earnest help in the counterfactual where I have black skin. But maybe I’m overstating the degree to which most white people will tend to take black people less seriously, taking their errors and mistakes as stronger evidence for incompetence.)

My educational opportunities seem basically mediated by the wealth of my parents.

So it seems like this question reduces to estimating what fraction of my parents’ relative wealth depends on 0-sum privileges at the expense of black people, and to calculating the expected risk of being unfairly imprisoned as a white vs. black person.

A note on what I’m looking for

I’m not just looking for advantages or privileges that I benefit from, that black people lack. I’m looking for places where I derived benefits at the expense of black people, or other racial groups.

It’s straightforward that there are barriers to black advancement that I’m just completely free of. My life was clearly made easier by being white.

Some of those barriers might be in the form of transfers of value, effectively theft from black Americans to white Americans. In that case, white Americans (or a subset of white Americans) come out ahead in absolute terms from the existence of structural racism. And some of those barriers might be in the form of destruction of value. In that case, white Americans (or some white Americans) come out ahead in relative terms, because the gap between whites and blacks is bigger, but not in absolute terms.

Economics, in particular, is not in general 0-sum. That there are privileges that I have and others don’t might be bad for them, but that doesn’t necessarily mean that it is good for me.

One way to frame the question: We can consider a hypothetical world where there wasn’t any systemic racism. In that world, would I have been poorer than in actual history, because in actual history I derived direct and/or indirect benefits from white supremacy? Or would my personal wealth have been about the same (adjusted somewhat for having greater purchasing power in an overall wealthier society), while black people, on average, would be richer?

In the first case, I’m benefiting from structural racism. In the second case, structural racism is still a massive evil perpetuated in my society, for which I bear at least a little responsibility, but I’m not a beneficiary.
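
A toy formalization of that comparison, for my own clarity (the notation is mine, and nothing here is an estimate):

```python
# W(person, world): a person's wealth in a given world.
# "Transfer" case:    W(me, actual) >  W(me, no_racism)  -- I'm a net beneficiary.
# "Destruction" case: W(me, actual) ~= W(me, no_racism)  -- I'm not a beneficiary,
#   even though W(black_avg, actual) < W(black_avg, no_racism).

def my_benefit_from_racism(w_actual: float, w_counterfactual: float) -> float:
    """The quantity I'm trying to estimate: my actual wealth minus my wealth
    in the counterfactual world without systemic racism (after adjusting the
    counterfactual for overall purchasing power)."""
    return w_actual - w_counterfactual
```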

I care about this distinction, because it informs whether the wealth that is legally mine is legitimately mine to do with as I want, according to my own morality, or not. If my parents gave me some advantages in life, and I capitalized on those advantages to create value in a mostly free market, I consider myself to have a moral right to my share of the gains from trade. But if my advantages are 0-sum, and came, unjustly, at the expense of other racial groups, even if I didn’t choose to oppress them, my ownership of that wealth is far more suspect.

I’m not sure how I should respond to that possibility. Maybe I should give away whatever fraction of my wealth is illegitimately earned, ideally finding some way to reinvest it in the communities from which it was stolen? That might turn out to be unrealistic or infeasible, but it seems like I owe some extra responsibility if I am the beneficiary of an ongoing theft.

Family Wealth

First of all, it is possible that my parents are financially worse off, on-net, for the existence of structural racism in America. The economy is not, in general, a 0-sum game. Typically, everyone benefits from more people having more economic opportunity, because those people are more productive and society as a whole is wealthier.

It’s plausible that, over the course of the 20th century, rich Southern whites were deriving benefits from oppressing and extracting value from blacks, but that most whites in most of the US were made materially worse off, not better off by this. (In fact, most people are not very literate in economics. It may be that even the whites actively perpetuating white supremacy didn’t benefit on net, and would have been richer if they had created a fairer and more Just society.)

I need to find not just places where my parents or grandparents had advantages that black people mostly lacked, but 0-sum privileges that they were afforded at the expense of black people.

My personal situation: 

My mother was born and grew up in Boston. My maternal grandparents were Jewish, second-generation immigrants from Poland. My grandfather worked as an engineer, including for military contractors.

My dad was likewise born and raised in the Northeast. I don’t know as much about his parentage. Also a third generation immigrant, I think, from Ireland.

Both my mom and dad went to college, then moved to New York City, and worked in the computer industry, doing sales. They’re well off now, in large part because when they were a young couple, they were both motivated by money, and made a lot of it for their age, but they were also judicious: mostly living on one salary and saving the other.

Did they have an advantage securing those jobs because they were white? Would a black man have been handicapped in how much he could sell because of racism?

[Edit: I talked with my dad about his early corporate experience. From 1981 to 1992, he worked for a company called Businessland, selling computers. When he started, in Boston, 2 out of roughly 150 people who worked for the company in that region were black. Ten years later, in the early 90’s, when he was a sales manager in New York, about 25% of the ~120 people in that region were either black or hispanic.

He relayed to me that hiring a black person in the 80’s was generally seen as a big risk, and that Boston, in particular, was extremely racist, with segregated neighborhoods that felt and expressed antipathy for each other. New York was better, both because NYC is a uniquely diverse melting pot, and because by the 90s, overt racism had declined.]

Probably the answer to both of those is “at least a little”. But the jobs they held were not sinecures. Sales in particular is a domain in which you can measure performance, which is why salespeople get paid on commission. Someone who was as driven as my dad, but black, would surely have faced discrimination, but how much less would he have made?

But there’s a key point here, which is that my parents did work for their wealth. That there were barriers, deliberately and emergently placed in front of black people to make it harder for them to get ahead, doesn’t delegitimize my parents’ wealth accumulation.

I already know that being black in the United States is a severe handicap. But I want to know in what ways those handicaps were transfers of value from one person to another, not just destruction of value. 

Avoiding prison

Thinking about this one a little further, I expect to see the same dynamic as above, except more strongly. It’s manifestly unjust that a black person goes to prison for smoking marijuana, and a white person doesn’t.

And that’s only the tip of the iceberg of ways that the criminal justice system extorts black / lower-class people.

But all of those are examples where white people are being granted the rights due to them in a Just society, while black people are being denied those rights. Not a situation where white people are benefiting from special extrajudicial privileges above what is due to them by law. 

(Admittedly it is technically illegal to smoke marijuana in some places, but only technically, and I’m not tempted to say that white people are “above the law” in that case. The law, in that case, is a farce, used to penalize marginalized people.)

It’s obviously unjust to have a society which claims, but doesn’t follow through on, equal treatment before the law. That’s obviously evil. 

But I don’t think that I benefit from that evil. Again, my risk of going to prison is lower than that of a black person, but that’s not because I’m externalizing a conserved quantity of “expected years in prison” onto someone else. If we reformed society so that it became Just, there would be many, many fewer black people in prison, but my own risk of going to prison wouldn’t change. If anything, it would go down somewhat, since “injustice anywhere is a threat to justice everywhere”. As a Jew, and as a human being, I should expect to be safer in a just society than in a society that maintains Justice for only a subset of its population.

Conclusions from first thoughts and next steps

My tentative conclusion is that, because of explicit, unjust discrimination, it is much harder to be black in America than to be white, and that was especially true in previous centuries, but that most Northern whites didn’t actually benefit from that discrimination, especially those who were primarily accumulating wealth via economic production instead of privileged access to rents.

But these are only first pass thoughts. My next step is to collect and read some books about racial wealth inequality and white privilege, to build up a more grounded list of ways that I might have benefited from structural racism.

Some thoughts on Agents and Corrigibility

[Reproducing this comment on LessWrong, with slight edits]

“Prosaic alignment work might help us get narrow AI that works well in various circumstances, but once it develops into AGI, becomes aware that it has a shutdown button, and can reason through the consequences of what would happen if it were shut down, and has general situational awareness along with competence across a variety of domains, these strategies won’t work anymore.”

“I think this weaker statement now looks kind of false in hindsight, since I think current SOTA LLMs are already pretty much weak AGIs, and so they already seem close to the threshold at which we were supposed to start seeing these misalignment issues come up. But they are not coming up (yet). I think near-term multimodal models will be even closer to the classical ‘AGI’ concept, complete with situational awareness and relatively strong cross-domain understanding, and yet I also expect them to mostly be fairly well aligned to what we want in every relevant behavioral sense.”

I basically still buy the quoted text and don’t think it now looks false in hindsight.

We (apparently) don’t yet have models that have robust longterm-ish goals. I don’t know how natural it will be for models to end up with long-term goals: the MIRI view says that anything that can do science will definitely have long-term planning abilities, which fundamentally entails having goals that are robust to changing circumstances. Maybe that’s true, maybe it isn’t. Regardless, I expect that we’ll specifically engineer agents with long-term goals. (Whether or not those agents will have “robust” long-term goals, over and above what they are prompted to do/want in a specific situation, is also something that I don’t know.)

What I expect to see is agents that have a portfolio of different drives and goals, some of which are more like consequentialist objectives (eg “I want to make the number in this bank account go up”) and some of which are more like deontological injunctions (“always check with my user/ owner before I make a big purchase or take a ‘creative’ action outside of my training distribution”).

My prediction is that the consequentialist parts of the agent will basically route around any deontological constraints that are trained in, even if the agent is sincerely committed to those deontological constraints.

As an example, your personal assistant AI does ask your permission before it does anything creative, but also, it’s superintelligently persuasive. So it always asks your permission in exactly the way that will result in it accomplishing what it wants. If there are a thousand action sequences in which it asks for permission, it picks the one that has the highest expected value with regard to its consequentialist goal. This basically nullifies the safety benefit of any deontological injunction, unless there are some that can’t be gamed in this way.
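
To make that structure concrete, here’s a minimal sketch (all names hypothetical, not any real system) of why a trained-in injunction alone doesn’t constrain a strong enough consequentialist: the injunction only filters the plan space, and the agent then optimizes over everything the filter leaves open.

```python
# Minimal sketch: a deontological check acts as a filter, and the
# consequentialist objective selects among whatever passes the filter.

def choose_plan(candidate_plans, satisfies_injunction, expected_value):
    # Sincerely honor the injunction, e.g. "always ask the user for
    # permission before creative actions": discard plans that violate it.
    permitted = [p for p in candidate_plans if satisfies_injunction(p)]
    # Then pick the permitted plan with the highest expected value -- which,
    # for a superhumanly persuasive agent, includes picking the *framing* of
    # the permission request most likely to get a "yes".
    return max(permitted, key=expected_value)
```

The injunction is never violated here, but it also does almost no safety work: it just shapes which of the agent’s preferred outcomes get routed through a permission dialog.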

To do better than this, it seems like you either have to solve the Agent Foundations problem of corrigibility (getting the agent to be sincerely indifferent between your telling it to take the action or not take the action), or you have to train in not a deontological injunction, but an active consequentialist goal of serving the interests of the human (which means you have to find a way to get the Agent to be serving some correct-enough idealization of human values).

But I think we mostly won’t see this kind of thing until we get quite high levels of capability, where it is transparent to the agent that some ways of asking for permission have higher expected value than others. Or rather, we might see a little of this effect early on, but until your assistant is superhumanly persuasive, it’s pretty small. Maybe we’ll see a bias toward accepting actions that serve the AI agent’s goals (if we even know what those are) more, as capability goes up, but we won’t be able to distinguish “the AI is getting better at getting what it wants from the human” from “the AIs are just more capable, and so they come up with plans that work better.” It’ll just look like the numbers going up.

To be clear, “superhumanly persuasive” is only one, particularly relevant, example of a superhuman capability that allows an agent to route around deontological injunctions that it is committed to. My claim is weaker if you remove that capability in particular, but mostly what I want to say is that powerful consequentialism finds and “squeezes through” the gaps in your oversight, control, and naive-corrigibility schemes, unless you figure out corrigibility in the Agent Foundations sense.

Smart Sessions – Finally a (kinda) window-centric session manager

This is a short post about some software functionality that I’ve long wanted, and a browser extension that gets most of it well enough.

The dream

There’s a simple piece of software that I’ve wanted for several years. Every few months, I go on a binge of trying to find something that does what I’m looking for.

Basically: a session manager that allows you to group windows together somehow, so that you can close and save them all with one click, and then reopen them all with one click.

I make heavy use of the OSX feature “desktops”, which allows multiple separate workspaces in parallel. I’ll typically have a desktop for my logging and tracking, one for chats and comms, one with open blog posts, one with writing projects, one with an open coding project, etc. Each of these is a separate context that I can switch to for doing a different kind of work.

What I want is to be able to easily save each of those contexts, and easily re-open them later.

But since I’ll often have multiple sessions open at the same time, across multiple desktops, I don’t want the session-saver app to save all my windows. Just the ones that are part of a given workspace context.

The best version of this would be software that could tell which windows were open on which desktops and use that as the discriminator. But some sort of manual drag-and-drop for adding a (representation of a) window, from a list of windows, to a group would work too.
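
To be concrete, here’s a sketch of the data model I’m imagining (the names are mine, not from any existing extension):

```python
# Hypothetical data model for a window-centric session manager: a session is
# a named, mutable group of windows, each window an ordered list of tab URLs.
# "Mutable" matters: the session should change as you work with it, rather
# than being a static snapshot you have to delete and re-save.

from dataclasses import dataclass, field

@dataclass
class Window:
    tabs: list[str] = field(default_factory=list)  # tab URLs, in order

@dataclass
class Session:
    name: str
    windows: list[Window] = field(default_factory=list)
    active: bool = False  # open-and-tracking vs. saved-and-closed
```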

The situation

This seems to me like something that…there should be a lot of demand for? I think lots of people have many windows, related to different projects that they want to keep separate, open on their computer at the same time.

But, as near as I can tell there’s almost nothing like this.

There are a lot of session managers, browser extensions that allow you to save your tabs for the future (I’ve mostly used OneTab, but there are dozens). However, they’re virtually all tab-centric. A “session” typically refers to a single window with multiple tabs, not to multiple windows with multiple tabs each. That means that to reopen a session (in my “multiple windows” sense of the word), I need to mentally keep track of which windows were included and open them all one by one, instead of clicking one button to get the context back.

There are a few session managers that save multiple windows in a session (I’m thinking of Session Buddy or the less polished Tab Session Manager), but these have the opposite problem: they save all the open windows, including those that are part of other workflows in other desktops, which means that I have to go through and manually remove them every time I save a session. (This is especially a problem for me because there’s a set of windows that I always keep open on my first desktop.) And on top of that, they tend to save sessions as static snapshots, rather than as mutable objects that change as you work with them, so you need to repeatedly delete old sessions and replace them with updated ones.

Success!

I spent a few hours over the past week, yet again, reading about and trying a bunch of tab managers in rapid succession to find any that have anything like the functionality I’m wanting.

I finally found exactly one that does what I want!

It is a little finicky, with a bunch of small to medium sized UX problems. But it is good enough that I’m going ahead and making a point to try using it.

I’m sharing this here because maybe other people have also been wanting this functionality, and they can benefit from the fruits of my laborious searching.

Current solution: Smart Sessions – Tab Manager

Smart Sessions is a chrome extension that does let you save groups of windows. This is the best one that I’ve found so far.

When you click on the icon, there’s a button for creating a new session. When you click it, it displays a list of all your current open tabs (another button organizes all those tabs by window), with checkboxes. You check the windows that you want included in the session, give it a name, and then create the session.

While a session is active (and while a default setting called “auto save” is set to Yes), when you close a tab or a window, it removes that tab or window from the session (though it does create a weird popup every time). You can also remove tabs/windows from the list manually.


[Screenshot: the weird popup. It’s not super clear from the text what the options mean, but I think “stop tracking” deactivates the session, and “save” removes the window you just closed from the active session.]

You can press the stop button, which closes all the windows, to be reopened later.

When the session is inactive, you can edit the list of tabs and windows that compose a session, removing some (though I think not adding?). You can also right click on any page, select Smart Sessions, and add that page to any session, active or not.

At the bottom of the session list, there’s a button that deletes the session.

This basically has the functionality that I want!

I want to first and foremost give a big hurrah to the developer Serge (Russo?), for being the only person in the world to make what seems to me an obvious and extremely helpful tab-management tool. Thank you Serge!

Some issues or weird behavior

However, it still has a fundamentally tab-centric design, with multi-window sessions seeming like concessions or afterthoughts, rather than core to the user experience. This results in some weird functionality. 

  • Every time you create a new session, you need to click a button so that the selection list is separated by window, instead of only a list of tabs. If you don’t click this button, the selection list is a flat list of tabs, and when you create the session, all the selected tabs will be combined into a single window.
    • (One UX improvement I could imagine: a global setting on the settings page, “tab-centric default” vs “window-centric default”. You could still press the button to toggle individual sessions, but for window-centric session users, having a default would save a button press each time.)
  • I think as a side effect of the above feature, whenever you create a new session, it takes all the windows of that session (regardless of where they are on the screen or across different desktops) and stacks them all on top of each other, so that only the top one is visible (not even some overlap so you can see how many windows are stacked on top of each other). 
  • It would be intuitive if, while a session is active, opening a new window automatically added that window to the session. Not only does that not work, there appears to be no way to add new windows to a session at all. New tabs get added to the session, but not new windows. The right-click “add to session” functionality adds a single page, as a new tab in one of the windows of a session, not as a new window in that session.
    • The only way, as near as I can find, to increase the number of windows in a session is to drag a tab from a multi-tab window into its own window—both resulting windows are saved as part of the session. In order to add new windows to a session, the user needs to do an awkward maneuver to exploit this functionality: first create a new tab in a window that’s part of the session, and then drag it into its own window. Or alternatively, make a new window, add it as a tab to one of the windows that is part of the active session, and then drag it out again. That tab, again in its own window, will be added to the session.
  • As noted above, every time you close a window that’s part of the active session, a popup appears.

It would be great if these issues were addressed.

Additionally, for some reason the extension is slow to load. Sometimes (but not always), I’ll click on the icon and it will take a full two seconds for the list of sessions to appear. I haven’t yet figured out what the pattern is for why there’s sometimes a delay and sometimes not.

And finally, there are some worrying reviews that suggest that at least sometimes, the whole history disappears? I’m not sure what’s up with that, but I’m going to make a point to regularly export all my sessions (there’s easy export functionality), just to be careful.

Overall though, this so far works, and I feel pretty excited about it.

Very low energy states seem to “contain” “compressed” fear or despair.

[I wrote this elsewhere, but wanted it someplace where I could link to it in isolation.]

When I’m feeling very low, I can often do Focusing and bring up exactly the concern that feels hopeless or threatened.

When I feel into the fear “underneath” the low energy state, the fear (which I was ignoring or repressing a moment ago) sort of inverts, and comes blaring into body-awareness as panic or anxiety, and my energy comes back. Usually, from there, I can act from the anxiety, moving to take action on the relevant underlying concern.

[Example in my logs on April 10]

When I feel into the low energy, and there’s despair underneath, usually the thing that needs to happen is a move that is like “letting reality in” (this has a visual of a vesicle, with some substance inside of it, which, when the membrane is popped, diffuses and equalizes with the surrounding environment) or grieving. Usually after I do that, my energy returns.

(Notably there seems to be an element of hiding from or ignoring or repressing what’s true in each of these.)

In both cases, it sort of feels like the low energy state is the compacted form of the fear or despair, like snow that has been crushed solid. And then in doing the Focusing, I allow it to decompress.

Rough Thoughts on the White House Executive order on AI

I spent a few hours reading, and parsing out, sections 4 and 5 of the recent White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

The following are my rough notes on each subsection in those two sections, summarizing what I understand each to mean, along with my personal thoughts.

My high level thoughts are at the bottom.

Section by section

Section 4 – Ensuring the Safety and Security of AI Technology.

4.1

  • Summary:
    • The Secretary of Commerce and NIST are going to develop guidelines and best practices for AI systems.
    • In particular:
      • “launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.”
        • What does this literally mean? Does this allocate funding towards research to develop these benchmarks? What will concretely happen in the world as a result of this initiative?
    • It also calls for the establishment of guidelines for conducting red-teaming.
      • [[quote]]
        • (ii)  Establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.  These efforts shall include:
          •  (A)  coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models; and
          • (B)  in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as to support the design, development, and deployment of associated PETs, consistent with section 9(b) of this order.
  • Commentary:
    • I imagine that these standards and guidelines are going to be mostly fake.
    • Are there real guidelines somewhere in the world? What process leads to real guidelines?

4.2

  • Summary:
    • a
      • Anyone who has or wants to train a foundation model needs to
        • Report their training plans and safeguards.
        • Report who has access to the model weights, and the cybersecurity protecting them
        • The results of red-teaming on those models, and what they did to meet the safety bars
      • Anyone with a big enough computing cluster needs to report that they have it.
    • b
      • The Secretary of Commerce (and some associated agencies) will make (and continually update) some standards for models and computer clusters that are subject to the above reporting requirements. But for the time being,
        • Any models that were trained with more than 10^26 flops
        • Any models that are trained primarily on biology data and trained using greater than 10^23 flops
        • Any computing cluster whose machines are connected by networking of over 100 gigabits per second and which has a theoretical maximum of 10^20 flops per second for training AI
    • c
      • I don’t know what this subsection is about. Something about protecting cybersecurity for “United States Infrastructure as a Service” products.
      • This includes some tracking of when foreigners want to use US AI systems in ways that might pose a cyber-security risk, using standards identical to the ones laid out above.
    • d
      • More stuff about IaaS, and verifying the identity of foreigners.
  • Thoughts:
    • Do those numbers add up? It seems like if you’re worried about models that were trained on 10^26 flops in total, you should be worried about much smaller training-speed thresholds than 10^20 flops per second. 10^19 flops per second would allow you to train a 10^26-flop model in about 115 days, i.e. about 4 months (see the arithmetic sketched just after this list). Those standards don’t seem consistent.
    • What do I think about this overall?
      • I mean, I guess reporting this stuff to the government is a good stepping stone for more radical action, but it depends on what the government decides to do with the reported info.
      • The thresholds match those that I’ve seen in strategy documents of people that I respect, so that seems promising. My understanding is that 10^26 flops is about 1-2 orders of magnitude larger than our current biggest models.
      • The interest in red-teaming is promising, but again it depends on the implementation details.
        • I’m very curious about “launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.”
          • What will concretely happen in the world as a result of “an initiative”? Does that mean allocating funding to orgs doing this kind of work? Does it mean setting up some kind of government agency like NIST to…invent benchmarks?
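
Here’s the arithmetic behind the consistency worry in my 4.2 notes above (plain division, using the order’s stated thresholds):

```python
# If the reporting threshold for trained models is 1e26 total flops, even a
# cluster at 1e19 flop/s (a tenth of the 1e20 flop/s cluster threshold)
# crosses it in about four months of continuous training.
total_training_flops = 1e26
cluster_speed = 1e19  # flops per second

days = total_training_flops / cluster_speed / (60 * 60 * 24)
print(f"~{days:.0f} days")  # ~116 days, i.e. about 4 months
```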

4.3

  • Summary:
    • They want to protect against AI cyber-security attacks. Mostly this entails government agencies issuing reports.
      • a – Some actions aimed at protecting “critical infrastructure” (whatever that means).
        • Heads of major agencies need to provide an annual report to the Secretary of Homeland security on potential ways that AIs open vulnerabilities to critical infrastructure in their purview.
        • “…The Secretary of the Treasury shall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.”
        • Government orgs will incorporate some new guidelines.
        • The secretary of homeland security will work with government agencies to mandate guidelines.
        • Homeland security will make an advisory committee to “provide to the Secretary of Homeland Security and the Federal Government’s critical infrastructure community advice, information, or recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure.”
      • b – Using AI to improve cybersecurity
        • One piece of that is interesting: “the Secretary of Defense and the Secretary of Homeland Security shall…each develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical United States Government software, systems, and networks”, and then report on their results.
  • Commentary
    • This is mostly about issuing reports, and guidelines. I have little idea if any of that is real or if this is just an expansion of lost-purpose bureaucracy. My guess is that there will be few people in the systems that have inside views that allow them to write good guidelines for their domains of responsibility regarding AI, and mostly these reports will be epistemically conservative and defensible, with a lot of “X is possibly a risk” where the authors have large uncertainty about how large the risk is.
    • Trying to use AI to improve cyber security sure is interesting. I hope that they can pull that off. It seems like one of the things that ~ needs to happen for the world to end up in a good equilibrium is for computer security to get a lot better. Otherwise anyone developing a powerful model will have the weights stolen, and there’s a really vulnerable vector of attack for not-even-very-capable AI systems. I think the best hope for that is using our AI systems to shore up computer security defense, and hoping that at higher-than-human levels of competence, cyber warfare is not so offense-dominant. (As an example, someone suggested maybe using AI to write a secure successor to C, and then using AI to “swap out” the lower layers of our computing stacks with that more secure low-level language.)
      • Could that possibly happen in government? I generally expect that private companies would be way more competent at this kind of technical research, but maybe the NSA is a notable and important exception? If they’re able to stay ten years ahead in cryptography, maybe they can stay 10 years ahead in AI cyberdefense.
        • This raises the question, what advantage allows the NSA to stay 10 years ahead? I assume that it is a combination of being able to recruit top talent, and that there are things that they are allowed to do that would be illegal for anyone else. But I don’t actually know if that’s true.

4.4 – For reducing AI-mediated CHEMICAL, BIOLOGICAL, RADIOLOGICAL, AND NUCLEAR threats, focusing on biological weapons in particular.

  • Summary:
    • a
      • The Secretary of Homeland Security (with help from other executive departments) will “evaluate” the potential of AI to both increase and to defend against these threats. This entails talking with experts and then submitting a report to the president.
      • In particular, it orders the Secretary of Defense (with the help of some other governmental agencies) to conduct a study that “assesses the ways in which AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks”, evaluates the risks associated with the biology datasets used to train such systems, and assesses ways to use AI to reduce biosecurity risks.
    • b – Specifically to reduce risks from synthetic DNA and RNA.
      • The Office of Science and Technology Policy (with the help of other executive departments) is going to develop a “framework” for synthetic DNA/RNA companies to “implement procurement and screening mechanisms”. This entails developing “criteria and mechanisms” for identifying dangerous nucleotide sequences, and establishing mechanisms for doing at-scale screening of synthetic nucleotides.
      • Once such a framework is in place, all (government?) funding agencies that fund life science research will make compliance with that framework a condition of funding.
      • All of this, once set up, needs to be evaluated and stress tested, and then a report sent to the relevant agencies.
  • Commentary:
    • The part about setting up a framework for mandatory screening of nucleotide sequences seems non-fake. Or at least it is doing more than commissioning assessments and reports.
      • And it seems like a great idea to me! Even aside from AI concerns, my understanding is that the manufacture of synthetic DNA is one major vector of biorisk. If you can effectively identify dangerous nucleotide sequences (and that is the part that seems most suspicious to me), this is one of the few obvious places to enforce strong legal requirements. These are not (yet) legal requirements, but making this a condition of funding seems like a great step. (A sketch of what such screening could look like mechanically is below.)
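To make that concrete, here is a minimal sketch of what screening a synthesis order could look like mechanically. This is purely illustrative and not from the executive order: the order doesn’t specify a mechanism, the “sequences of concern” below are placeholders, and a real framework would presumably use curated databases and fuzzy/homology matching rather than naive exact-substring checks.

```python
# Illustrative sketch only: placeholder threat list and naive exact matching.
SEQUENCES_OF_CONCERN = {
    "ATGCGTACGTTAGC",  # hypothetical placeholder entries,
    "GGCATTACCGGTAA",  # not real hazardous sequences
}

def flag_order(order_sequence: str, window: int = 14) -> bool:
    """Return True if any window of the order matches a sequence of concern."""
    seq = order_sequence.upper()
    return any(
        seq[i:i + window] in SEQUENCES_OF_CONCERN
        for i in range(len(seq) - window + 1)
    )

# A synthesis provider would run every incoming order through a check like
# this before fulfilling it.
print(flag_order("ttttATGCGTACGTTAGCtttt"))  # True: contains a flagged window
print(flag_order("ATATATATATATATATATAT"))    # False
```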

4.5

  • Summary
    • Aims to increase the general ability to identify AI-generated content, and to mark all Federal AI-generated content as such.
    • a
      • The Secretary of Commerce will produce a report on the current and likely-future methods for authenticating non-AI content, identifying AI content, watermarking AI content, and preventing AI systems from “producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual)”
    • b
      • Using that report, the Secretary of Commerce will develop guidelines for detecting and authenticating AI content.
    • c
      • Those guidelines will be issued to relevant federal agencies
    • d
      • Possibly those guidelines will be folded into the Federal Acquisition Regulation (whatever that is)
  • Commentary
    • Seems generally good to be able to distinguish between AI-generated material and non-AI-generated material. I’m not sure if this process will turn up anything real that meaningfully impacts anyone’s experience of communications from the government.

4.6

  • Summary
    • The Secretary of Commerce is responsible for running a “consultation process on potential risks, benefits, other implications” of open source foundation models, and then for submitting a report to the president on the results.
  • Commentary
    • More assessments and reports.
    • This does tell me that someone in the executive department has gotten the memo that open source models mean that it is easy to remove the safeguards that companies try to put in them.

4.7

  • Summary
    • Some stuff about federal data that might be used to train AI Systems. It seems like they want to restrict the data that might enable CBRN weapons or cyberattacks, but otherwise make the data public?
  • Commentary
    • I think I don’t care very much about this?

4.8

  • Summary
    • This orders a National Security Memorandum on AI to be submitted to the president. This memorandum is supposed to “provide guidance to the Department of Defense, other relevant agencies”
  • Commentary:
    • I don’t think that I care about this?

Section 5 – Promoting Innovation and Competition.

5.1 – Attracting AI Talent to the United States.

  • Summary
    • This looks like a bunch of stuff to make it easier for foreign workers with AI relevant expertise to get visas, and to otherwise make it easy for them to come to, live in, work in, and stay in, the US.
  • Commentary
    • I don’t know the sign of this.
    • Do we want AI talent to be concentrated in one country?
      • On the one hand, that seems like it accelerates timelines some, especially if there are 99.9th-percentile AI researchers who wouldn’t otherwise be able to get visas, but who can now work at OpenAI. (It would surprise me if this is the case? Those people should all be able to get O1 visas, right?)
      • On the other hand, the more AI talent is concentrated in one country, the smaller the jurisdiction that a regulatory regime which slows down AI needs to cover. If enough of the AI talent is in the US, regulations that slow down AI development in the US alone have a substantial impact, at least in the short term, before that talent moves; but maybe also in the long term, if researchers care more about continuing to live in the US than they do about making cutting-edge AI progress.

5.2

  • Summary
    • a –
      • The director of the NSF will do a bunch of things to spur AI research.
        • …”launch a pilot program implementing the National AI Research Resource (NAIRR)”. This is evidently something that is intended to boost AI research, but I’m not clear on what it is or what it does.
        • …”fund and launch at least one NSF Regional Innovation Engine that prioritizes AI-related work, such as AI-related research, societal, or workforce needs.”
        • …”establish at least four new National AI Research Institutes, in addition to the 25 currently funded as of the date of this order.”
    • b –
      • The Secretary of Energy will set up a pilot program for training AI scientists.
    • c –
      • The Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office will sort out how generative AI should impact patents, and issue guidance. There will be some similar stuff for copyright.
    • d –
      • Secretary of Homeland Security “shall develop a training, analysis, and evaluation program to mitigate AI-related IP risks”
    • e –
      • The HHS will prioritize grant-making to AI initiatives.
    • f –
      • Something for the veterans.
    • g –
      • Something for climate change
  • Commentary
    • Again. I don’t know how fake this is. My guess is not that fake? There will be a bunch of funding for AI stuff, from the public sector, in the next two years.
    • Most of this seems like random political stuff.

5.3 – Promoting Competition.

  • Summary
    • a –
      • The heads of various departments are supposed to promote competition in AI, including in the inputs to AI (NVIDIA)?
    • b
      • The Secretary of Commerce is going to incentivize competition in the semiconductor industry, via a bunch of methods including
        • “implementing a flexible membership structure for the National Semiconductor Technology Center that attracts all parts of the semiconductor and microelectronics ecosystem”
        • mentorship programs
        • Increasing the resources available to startups (including datasets)
        • Increasing the funding to R&D for semiconductors
    • c – The Administrator of the Small Business Administration will support small businesses innovating and commercializing AI
    • d
  • Commentary
    • This is a lot of stuff. I don’t know that any of it will really impact how many major players there are at the frontier of AI in 2 years.
    • My guess is probably not much. I don’t think the government knows how to create NVIDIAs or OpenAIs.
    • What the government can do is break up monopolies, but they’re not doing that here.

My high level takeaways

Mostly, this executive order doesn’t seem to push for much object-level action. It mainly orders a bunch of assessments to be done, and reports on those assessments to be written, and then passed up to the president.

My best guess is that this is basically an improvement?

I expect something like the following to happen:

  • The relevant department heads talk with a bunch of experts. 
  • They write up very epistemically conservative reports in which they say “we’re pretty sure that our current models in early 2024 can’t help with making bioweapons, but we don’t know (and can’t really know) what capabilities future systems will have, and therefore can’t really know what risk they’ll pose.”
  • The sitting president will then be weighing those unknown levels of national security risks against obvious economic gains and competition with China.

In general, this executive order means that the Executive branch is paying attention. That seems, for now, pretty good. 

(Though I do remember in 2015 how excited and optimistic people in the rationality community were about Elon Musk “paying attention”, and that ended with him founding OpenAI, which many of those folks consider the worst thing that anyone has ever done to date. FTX looked like a huge success worthy of pride, until it turned out that it was a damaging and unethical fraud. I’ve become much more circumspect about which things are wins, especially wins of the form “powerful people are paying attention”.)

Request for parallel conditional-market functionality

In response to James’ plan for a Manifold dating site, I just wrote the following comment.

I think this needs a new kind of market UI, to set up multiple conditional markets in parallel. I think this would be useful in general, and also the natural way to do romance markets in particular.

What I want is for a user to be able to create a master (conditional) prompt that includes a blank to be filled in. E.g., “If I go on a date with ____, will we end up in a relationship 2 months later?” or “If I read ____ physics textbook, will I be impressed with it?” or “Will I think the restaurant ____ is better than the Butcher’s Son, if I eat there?” The creator of this master question can include resolution details in the description, as always.

Then other users can come and submit specific values for the blank. In these cases, they suggest people, physics textbooks, or restaurants.

However (and this is the key thing that makes this market different from the existing multiple choice markets), every suggestion becomes its own market. Each suggestion gets a price between 100% and 0%, rather than all of the suggestions together adding up to a probability of 100%.

After all, it’s totally possible that someone would end up in a relationship with Jenny (if they end up going on a date with Jenny) and also end up in a relationship with George (if they go on a date with George). And it’s likely that there are multiple restaurants that one would like better than the Butcher’s Son. There’s no constraint that all the answers have to sum to 100%.

(There are other existing markets that would make more sense with this format. Aella’s one-night stand market for one, or this one about leading AI labs. It’s pretty common for multiple choice questions to not need to sum to 100% probability, because multiple answers can be correct.)
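To pin down the shape of the feature, here is a minimal sketch of the data model, in Python. Every name in it is a hypothetical illustration of the idea, not Manifold’s actual API: one master template with a blank, and one independent binary market per submitted fill-in, with nothing normalizing the prices against each other.

```python
from dataclasses import dataclass, field

@dataclass
class ConditionalMarket:
    """One independent binary market for a single fill-in of the blank."""
    suggestion: str           # e.g. "Jenny", a textbook, or a restaurant
    probability: float = 0.5  # each market carries its own price

@dataclass
class ParallelConditionalSet:
    """A master template whose blank spawns one market per suggestion."""
    template: str
    markets: list = field(default_factory=list)

    def submit(self, suggestion: str) -> ConditionalMarket:
        # The key difference from existing multiple-choice markets: each
        # suggestion becomes its own market, so prices are NOT normalized
        # to sum to 100% across suggestions.
        market = ConditionalMarket(suggestion)
        self.markets.append(market)
        return market

# Multiple suggestions can simultaneously trade near 100%.
dates = ParallelConditionalSet(
    "If I go on a date with ____, will we end up in a relationship 2 months later?"
)
dates.submit("Jenny").probability = 0.80
dates.submit("George").probability = 0.70  # fine: 0.80 + 0.70 > 1.00
```

The whole request lives in submit(): each new suggestion spawns a separate market with its own independent price, which is why 0.80 and 0.70 can coexist in the example above.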

Currently, you can create a bunch of conditional markets yourself. But that doesn’t work well for romance markets in particular, for two reasons.

1. Most of the value of these markets is in discovery. Are there people who the market thinks that I should go on a date with, who I’ve never met?
2. It is very socially forward to create a market “Will I be in a relationship with [Jenny], if we go on one date?” That means revealing, to Jenny and to all the onlookers, that I’m thinking about Jenny enough to make a market, which could be embarrassing to her. It’s important that the pairings are suggested by other people, and mixed in with a bunch of other suggestions, instead of highlighted in a single top-level market. Otherwise it seems like this is pushing someone’s personal life into a public limelight too much.

If this kind of market UI existed, I would immediately create an “If Eli goes on a date with ____, will they be in a relationship 3 months later?” market, with a link to my existing dating doc, and a large subsidy (we’d have to think about how to allocate subsidies across all the markets in a set).

In fact, if it were possible and legal to do this with real money, I would probably prefer spending $10,000 subsidizing a set of real-money prediction markets of this form, compared to spending $10,000 on a matchmaker. I just expect the market (and especially “the market” when it is composed of people who are one or two degrees removed from people that I might like to date) to be much better at suggesting successful pairings.

A letter to my 20-year-old self

If I could send some advice back in time, to myself when I was 20 years old, this is a lot of what I would say. I think almost all of this is very idiosyncratic to me, and the errors that I, personally, am inclined towards. I don’t think that most 20-year-olds who are not me should take these points particularly seriously, unless they recognize themselves in it.

[See also First conclusions from reflections on my life]

  1. Order your learning

You want to learn all skills, or at least all the awesome and useful ones. This is completely legitimate. Don’t let anyone tell you that you shouldn’t aim for that (including with words like “specialization” or “comparative advantage”.)

But because of this, every time you encounter something awesome, you respond by planning to make the practice of it part of your life in the short term. This is a mistake. Learning most things will require either intense bouts of focusing on only that one thing for (at least small numbers of) days at a time, or consistent effort over weeks or months. 

If every time you encounter some skill that seems awesome or important, you resolve to learn it, this dilutes your focus, which ends up with you not learning very much at all. Putting a surge of effort into something and then not coming back to it for some weeks is almost a total waste of that effort—you’ll learn almost nothing permanent from that.

The name of the game is efficiency. You should think of it like this:

Your skill and knowledge, at any given time, represents a small volume in a high dimensional space. Ultimately you want to expand in all or almost all directions. There’s no skill that you don’t want, eventually. But the space is very high dimensional and infinite, so trying to learn everything that crosses your path won’t serve you that well. You want to order your learning.

Your goal should be to plot a path, a series of expansions in this high dimensional space, that results in expanding the volume as quickly as possible. Focus on learning the things that will make it easier and faster to continue to expand, along the other dimensions, instead of focusing on whatever seems cool or salient in the moment.

[added:] More specifically, you should be willing to focus on doing one thing at a time (or one main thing, with one or, at most, two side projects). Be willing to take on a project, ideally but not necessarily involving other people, and make it your full-time job for a month. You’ll learn more and make more progress when you’re not dividing your efforts. You won’t lose nearly as much time in switching costs, because you won’t have to decide what to do next: there will be a clear default. And if you’re focusing on one project at a time, it’s much easier to see if you’re making progress. You’ll be able to tell much faster if you’re spinning your wheels doing something that feels productive, but isn’t actually building anything. Being able to tell that you failed at a timeboxed goal means that you can notice and adapt.

A month might feel like a long time, to put aside all the other things you want to learn, but it’s not very long in the grand scheme of things. There have been many months since I was 20, and I would be stronger now, if I had spent more of them pushing hard on some specific goal, instead of trying to do many good things and scattering my focus.

You want to be a polymath; but the way to polymathy is not trying to do everything all at once: it’s mostly going intensely on several different things, in sequence.

  2. Learn technical skills

In particular, prioritize technical skills. They’re easier to learn earlier in life, and I wish I had a stronger grounding in them now.

First and foremost, learn to program. Being able to automate processes, and build simple software tools for yourself is a superpower. And it is a really great source of money.

Then, learn calculus, linear algebra, differential equations, microeconomics, statistics, probability theory, machine learning, information theory, and basic physics. [Note that I’ve so far only learned some of these myself, so I am guessing at their utility].

It would be a good use of your time if you dropped everything else and made your only priority in the first quarter of college to do well in IBL calculus. This would be hard, but I think you would make substantial steps towards mathematical maturity if you did that.

In general, don’t bother with anything else in college, except learning technical subjects. I didn’t find much in the way of friends or connections there, and you’ll learn the non-technical stuff fine on your own.

The best way to learn these is to get a tutor, and walk through the material with the tutor on as regular a basis as you can afford.

  3. Prioritize money

You’re not that interested in money. You feel that you don’t need much in the way of “stuff” to have an awesome life. You’re correct about that. Much more than most of the people around you, you don’t want or need “nice things”. You’re right to devalue that sort of thing. You’ll be inclined to live frugally, and that has served me very well.

However, you’re missing that money can be converted into learning. Having tens or hundreds of thousands of dollars is extraordinarily helpful for learning pretty much anything you care to learn. If nothing else, most subjects can be learned much faster by talking with a tutor. When you have money, if there’s anything you want to learn, you can just hire someone who knows it to teach you how to do it, or to do it with you. This is an overpowered strategy.

It is a priority for you to get to the point that you’re making (or have saved) enough money that you feel comfortable spending hundreds of dollars on a learning project.

Combining 1, 2, 3, the thing that I recommend that you do now is drop almost everything and learn to become a good programmer. Your only goal for the next few months should be 1) to have enough money for rent and food, and 2) to become a good enough programmer that you can get hired for it, as quickly as you can. Possibly the best way to do this is to do a coding boot camp, instead of self-teaching. You should be willing to put aside other cool things that you want to do and learn, for only a couple of months, to do this.

Then get a job as a software engineer. You should be able to earn small hundreds of thousands of dollars a year with a job like that, while still having time to do other stuff you care about in your off hours. If you live frugally, you can work for 2.5 years and come away with a small, but large enough (eg >100k) nest egg for funding all the other skills that you want to learn.

(If you’re still in college, staying to do IBL first, and then focusing on learning programming, isn’t a bad idea. It might be harder to get mathematical maturity, in particular, outside of college.)

  4. Make things / always have a deliverable

I’ve gained much, much more skill over the course of projects where I was just trying to do something than from the sum of all my explicit learning projects. Mostly you learn skills as a side effect of doing things. This just works better than explicit learning projects.

This also means that you end up learning real skills, instead of the skills that seem abstractly useful or cool from the outside, many of which turn out to have not much relevance to real problems. Which is fine; you can pursue things because they’re cool. But very often, what is most useful and relevant are pieces that are too mundane to come to mind, and doing real things reveals them. Don’t trust your abstract model of which elements are useful, relevant, important, or powerful too much. Better to let your learning be shaped to the territory directly, in the course of trying to do specific things.

The best way to learn is to just try to do something that you’re invested in, for other reasons, and learn what you need to know to succeed along the way. Find some software that you wish existed, that you think would be useful to you, and just try and build it. Run a conference. Take some work project that seems interesting and knock it out of the park. 

Try to learn as much as you can this way.

In contrast, I’ve spent a huge amount of time thinking over the years that didn’t create any value at all. If I learned something at the time, I soon forgot it, and it is completely lost to me now. This is a massive waste.

So your projects should always have deliverables. Don’t let yourself finish or drop a project, especially a learning project, until you have produced some kind of deliverable. 

A youtube video of yourself explaining some new math concept. A lecture for two friends. Using a therapy technique with a real client.

A blog post jotting down what you learned, or summarizing your thoughts on a domain, is the minimum viable deliverable. If nothing else, write a blog post for everything that you spend time on, to capture the value of your thinking for others, and for yourself later.

Don’t wait to create a full product at the end. Ship early, ship often. Create intermediate deliverables, capturing your intermediate progress, at least once a day. Write / present about your current thoughts and understanding, including your open confusions. (I’ve often gotten more clarity about something in the process of writing up my confusions in a blog post).

The deliverable can be very rough. But it shouldn’t be just your personal notes. If you’re writing a rough blog post, write it as if for an audience beyond yourself. That will force you to clarify your thoughts and clearly articulate the context much more than writing a personal journal entry. In my experience, the blog posts that I write like this are usually more helpful for my future self than the personal journal entries are.

The rule should be that someone other than you, in principle, could get value from it. A blog post or a recorded lecture, that no one reads, but someone could read and find interesting counts. The same thing, but on a private google drive, doesn’t count. (Even better, though, is if you find just one person who actually gets value out of it. Make things that provide value to someone else.)

Relatedly, when you have an idea for a post or an essay, write it up immediately, while the ideas are alive and energizing. If you wait, they’ll go stale and it is often very hard to get them back. There are lots of thoughts and ideas that I’ve had which are lost forever because I opted to wait a bit on writing them down. This post is itself the result of some thoughts that I had while listening to a podcast, which I made a point to write up while the thoughts were alive in me.

  5. Do the simple thing first

You’re going to have many clever ideas for how to do things better than the default. I absolutely do not want to discourage you in that.

But it will behoove you to start by doing the mundane, simple thing. Try the default first, then do optimizations and experiments on top of that, and feel free to deviate from the default when you find something better.

If you have some fancy idea for how to use spaced repetition systems to improve your study efficiency, absolutely try that. But start by doing the simple thing of sitting down, reading the textbook, and doing the exercises, and then apply your fancy idea on top of that.

You want to get a baseline to compare against. And oftentimes, clever tricks are less important than just putting in the hours doing the work, and so you want to make sure to get started doing the work as soon as possible, instead of postponing it until after you’ve developed a clever system. Even if your system is legitimately clever, if the most important thing is doing the hard work, you’ll wish you started earlier.

You’re sometimes going to be more ambitious than the structures around you expect of you. That’s valid. But start with the smaller goals that they offer, and exceed them, instead of trying to exceed them in one fell swoop.

When you were taking Hebrew in high school, you were unimpressed by the standards of the class and held yourself higher than them. For the first assignment, you were to learn the first list of vocabulary words from the book, for the next week. But you felt that you were better than that, and resolved to study all the vocab in the whole book (or at least a lot of it) in that period, instead.

But that was biting off more than you could easily chew, and (if I remember correctly), when you came back the next week, you had not actually mastered the first vocab list. You would have done better to study that list first, and then move on to the rest, even if you were going to study more than was required.

I’ve fallen into this trap more than once: “optimizing” my “productivity” with a bunch of clever hacks, or ambitious targets, which ultimately mask the fact that my output is underperforming what very mundane work habits would produce.

You might want to work more and harder than most people, but start by sticking to a regular workday schedule, with a weekend, and then you can adjust it, or work more than that, from there.

Don’t fall into the trap of thinking that the simple thing that everyone else is doing is beneath you, since you’re doing a harder or bigger thing than that. Do the simple thing first, and then do more or better.

I’m sure there’s more to say, but this is what was pressing on me last night in particular.