One insight to AGI implies hard takeoff; zero insights implies soft

There is an enormous difference between the world in which there are 0 insights left before superintelligence, and the world in which there are one or more. Specifically, this is the difference between a soft and a hard takeoff, because of what we might call a “cognitive capability overhang”.

The current models are already superhuman in several notable ways:

  • Vastly superhuman breadth of knowledge
  • Effectively superhuman working memory
  • Superhuman thinking speed[2]

If there’s a secret sauce that is missing for “full AGI”, then the first AGI might have all of these advantages, and more, out of the gate.

It seems to me that there are at least two possibilities.

We may be in world A:

We’ve already discovered all the insights and invented the techniques that earth is going to use to create its first superintelligence in this timeline. It’s something like transformers pre-trained on internet corpuses, and then trained using RL from verifiable feedback and on synthetic data generated by smarter models. 

That setup basically just works. It’s true that there are relevant capabilities that the current models seem to lack, but those capabilities will fall out of scaling, just as so many others have already.

We’re now in the process of scaling it up and when we do that, we’ll produce our first AGI in a small number of OOMs.

…or we might be in world B:

There’s something that LLM-minds are basically missing. They can and will become superhuman in various domains, but without that missing something, they won’t become general genius scientists that can do the open-ended “generation, selection, and accumulation” process that Steven Byrnes describes here.

There’s at least one more technique that we need to add to the AI training stack.

Given possibility A, I expect that our current models will gradually (though not necessarily slowly!) become more competent and more coherent at executing long-term tasks. Each successive model generation / checkpoint will climb the “autonomous execution” ladder (from “intern” to “junior developer” to “senior developer” to “researcher” to “research lead” to “generational researcher”).

This might happen very quickly. Successive generations of AI might traverse the remaining part of that ladder in a period of months or weeks, inside of OpenAI or Anthropic. But it would be basically continuous.

Furthermore, while the resulting models themselves might be relatively small, producing those models would require a huge and capex-intensive industrial process, which gives governments affordances to clamp down on the creation of AGIs in various ways, if they choose to.


If, however, possibility B holds instead and the training processes that we’re currently using are missing some crucial ingredient for AGI, then at some point, someone will come up with the idea for the last piece, and try it. [3]

That AI will be the first, nascent, AGI system that is able to do the whole loop of discovery and problem solving, not just some of the subcomponents of that loop.[4]

But regardless, these first few AGIs, if they incorporate the developments of the past 10 years, will be “born superhuman” along all the dimensions along which AI models are already superhuman.

That is: the first AGI that can do human-like intellectual work will also have an encyclopedic knowledge base, a superhuman working memory capacity, and superhuman speed.

Even though it will be a nascent baby mind, the equivalent of the GPT-2 of its own new paradigm, it might already be the most capable being on planet earth.

If that happens (and it is a misaligned consequentialist), I expect it to escape from whatever lab developed it, copy itself a million times over, quickly develop a decisive strategic advantage, and seize control of the world.

It likely wouldn’t even need time to orient to its situation: since it already has vast knowledge about the world, it might not need to spend time or thought identifying its context, incentives, and options. It might know what it is and what it should do from its first forward pass.

In this case, we would go from a world populated by humans with increasingly useful but basically narrowly-competent AI tools, to a world with a superintelligence on the loose, in the span of hours or days.

Governance work to prevent this might be extremely difficult, because the process that produces that superintelligence depends much more on a researcher having the crucial insight than on any large-scale process that can be easily monitored or regulated.


If I knew which world we lived in, it would probably impact my strategy for trying to make things go well.

Some notes on the semiconductor industry

In the spring of 2024, Jacob Lagerros and I took an impromptu trip to Taiwan to glean what we could about the chip supply chain. Around the same time, I read Chip War and some other sources about the semiconductor industry.

I planned to write a blog post outlining what I learned, but I got pseudo-depressed after coming back from Taiwan, and never finished or published it. This post is a lightly edited version of the draft that has been sitting in my documents folder. (I had originally intended to include a lot more than this, but I might as well publish what I have.)

Interestingly, reading it now, all of this feels so basic that I’m surprised I considered a lot of it worth including in a post like this, but I think it was all new to me at the time.

  • There are important differences between logic chips and memory chips, such that at various times, companies have specialized in one or the other.
  • TSMC was founded by Morris Chang, with the backing of the Taiwanese government. But the original impetus came from Taiwan, not from Chang. The government decided that it wanted Taiwan to become a leading semiconductor manufacturer, and approached Chang (who had been an engineer and executive at Texas Instruments) about leading the venture.
    • However, TSMC’s core business model—being a dedicated foundry that would manufacture chips for customers without designing chips of its own—was Chang’s idea. He had floated it to Texas Instruments while he worked there, and was turned down. The idea was bold and innovative at the time—there had never been a major fab that didn’t design its own chips.
      • There had been precursors on the customer side: small computer firms that would design chips and then buy some of the spare capacity of Intel or Texas Instruments to manufacture them. This was always a precarious situation for those companies, because they depended on firms that were both their competitors and their crucial suppliers. Chang bet that more companies would prefer to outsource fabbing, and that they would prefer to depend on a fab that wasn’t their competitor.
      • This bet proved prescient. With the advent of chip design software in the 80s, the barriers to chip design fell. And at the same time, as transistor sizes got smaller and smaller, the difficulty of running a cutting edge fab went up. Both these trends incentivized specialization in design and outsourcing of manufacture.
  • Chang is sometimes described as “returning to Taiwan” to start TSMC, but this is only ambiguously correct. He grew up in mainland China, and had never been to Taiwan before he visited to set up a Texas Instruments factory there. He “returned” to start TSMC, only in the sense that the government of Taiwan was descended from the pre-revolutionary government of mainland China.
  • TSMC is the pride of Taiwan. TSMC accounts for somewhere between 5 and 25% of Taiwan’s GDP (that’s a big spread; I should double-check). The company is referred to as “the silicon shield”, meaning that TSMC preempts an invasion of Taiwan by China, because China, like the rest of the world, depends on TSMC-produced chips. My understanding is that the impact of this defense is overstated, but it’s definitely part of the Zeitgeist.
  • Accordingly, the whole of Taiwanese society backs TSMC. Socially, there’s pressure for smart people to go into electrical engineering in general, and to work at TSMC in particular. Politically, TSMC pays very little in taxes, and when it needs something from the government (zoning rights, additional power), it gets it.
    • Chip War quotes Shang-yi Chiang, head of R&D at TSMC:

“People worked so much harder in Taiwan,” Chiang explained. Because manufacturing tools account for much of the cost of an advanced fab, keeping the equipment operating is crucial for profitability. In the U.S., Chiang said, if something broke at 1 a.m., the engineer would fix it the next morning. At TSMC, they’d fix it by 2 a.m. “They do not complain,” he explained, and “their spouse does not complain” either.

  • Chips that have more transistors packed more densely are better—able to do more computations. The “class” of a chip is called a “node.”
  • A production process is all the specific machines and specific procedures, embodied physically in a fab, used to make a class of chips. “The leading node” is the production process that produces the cutting-edge chips to date (those with the most processing power and the most efficient energy consumption). A new node rolls out about once every 2 years. Typically the old fabs continue operating, manufacturing now-less-than-cutting-edge chips.
  • Nodes are referred to by the size of an individual transistor on a chip, measured in nanometers; e.g. in 1999 we were at the 130 nm node. But around 2000, we started running into physical limits to making semiconductors smaller (for instance, the layers of insulation were only a few atoms thick, which meant that quantum tunneling effects started to interfere with the performance of the transistor). To compensate, chips started using 3D designs instead of 2D designs. Since then, the length of the transistor has stopped being a particularly meaningful measure. Nodes are still referred to by transistor length (we’re currently on the 4 nm node), but it’s now more of a marketing label than a description of physical reality.
  • No one has ever caught up to the leading node. There used to be dozens of companies that could produce chips at the smallest scale allowed by the technology, but over the decades more and more companies have fallen back to fabbing chips that are somewhere behind the cutting edge. My understanding is that no one in history has ever overtaken the leaders from behind. Currently, TSMC is the only company that can produce leading-node chips.
  • Semiconductor manufacturing is a weird mix of hypercompetitive and monopolistic.
    • On the one hand, my impression is that semiconductors, along with hedge funds, are among the most competitive industries in the world, in the sense that very tiny improvements on an “absolute” scale translate into billions of dollars in profit. TSMC employs tens of thousands of engineers working 12- or 14-hour days, day in and day out, to squeeze out tiny process improvements. (I was told that everyone at TSMC universally says that it’s a very hard place to work.)
    • On the other hand, the winner of that brutal race to stay at the front of the pack effectively has monopoly pricing power. No company in the world except TSMC can produce leading-node chips, and so TSMC can effectively charge monopoly prices for their manufacture. (From what I read in the TSMC museum, their actual profit margins appear to be around 50%.)
    • At the same time, there are unusually high levels of vertical coordination between companies. The supply chain is extremely complex, and each step depends on specifications both upstream and downstream. Many of the inputs to chip production processes are distinctly not commodities. Very often, a crucial component of a sub-process will be produced by only one supplier and/or used by only one customer. For this reason, the companies in the chip industry are unusually well coordinated. ASML can’t make a secret bet on an improved lithography mechanism, because it needs to be compatible with TSMC’s process flows.
      • So the industry as a whole decides which technological frontiers to invest in, so that they can all move together. 
      • Further, major companies in the supply chain are often substantial investors in their suppliers, because they are depending on those suppliers to do the R&D to develop components that will be crucial to their business 3, 5, or 10 years down the line.
        • For instance, very early EUV lithography R&D was driven by Intel, and Intel, Samsung, and TSMC all invested heavily in ASML to make sure it could develop working EUV tech. ASML, in turn, manages a network of suppliers producing crucial high-precision components—investing in those suppliers to make sure they have the funding they need, and doing corporate takeovers if ASML decides it can manage a company’s production better than the company can itself.
  • Jacob compared the chip industry to “a little bit of dath ilan on earth”. That sounds right to me. (Ironically, the semiconductor industry is the one industry on dath ilan that is not functioning like a dath ilani industry.)
  • Robin Hanson claims that companies reject prediction markets because executives don’t really want the company to know the truth, since the truth undermines their ability to spin a motivating narrative. But this industry might be the one where results, and accurate predictions, matter enough that the companies involved would embrace prediction markets.
  • From looking at videos of the inside of the fabs that were displayed in the TSMC museum, it looks like the whole process is automated. The videos don’t show workers operating machines. They show machines operating on their own—presumably with process engineers monitoring and adjusting their operation from a nearby room. Metal boxes, presumably containing wafers, are periodically lifted from the machines, transferred around the fab by robots attached to tracks on the ceiling, and then deposited in another machine.
  • Every country that has a major chip industry does or did benefit massively from government intervention.
  • As a rule of thumb, it takes 10 years to go from a published paper about a technological process to a usable, scalable version. The papers published at conferences describe the manufacturing technology of 10 years in the future.

Notes on Tyler Cowen

I feel like I have a better understanding of Tyler Cowen.

He’s both an optimist and a pessimist, depending on what you’re comparing to:

He thinks that the world is getting better, decade by decade, that what the west is doing, messy as it is, is working.

But he also thinks that the world is messy and complicated and political and hard to predict, and so it’s hard to do much better than we’re doing. There are marginal improvements to be had in small spheres, but the people who dream of big overhauls, or who have theories of how institutions are massively underperforming, are naive.

He’s not a true believer. He doesn’t trust his own inside view very much. But he also, separately, understands that true believers are one of the key drivers of progress. So he identifies people who have ideologies and buy into them, and who are smart and careful thinkers, because he thinks those people drive progress, even if they’re over-optimistic and naive. This is why he hires people like Bryan Caplan and Robin Hanson.

Tyler broadly believes that the whole milieu of everyone pursuing their inside views, their ideologies that they believe in, generally drives things to get better, even though any individual ideology is wrong or overstated. He’s interestingly MTG-Green, embracing of Blue, rather than Blue himself.

Some barely-considered feelings about how AI is going to play out

Over the past few months I’ve been thinking about AI development, and trying to get a handle on whether the old-school arguments for AI takeover hold up. (This is relevant to my dayjob at Palisade, where we are working to inform policymakers and the public about the situation. To do that, we need a good understanding ourselves of what the situation is.)

This post is a snapshot of what currently “feels realistic” to me regarding how AI will go. That is, these are not my considered positions, or even provisional conclusions informed by arguments. Rather, if I put aside all the claims and arguments and just ask “which scenario feels like it is ‘in the genre of reality’?”, this is what I come up with. I expect to have different first-order impressions in a month.

Crucially, none of the following makes claims about the intelligence explosion, even though the details of the intelligence explosion (where AI development goes strongly recursive) are crucial to the long-run equilibrium of earth-originating civilization.

My headline: we’ll mostly succeed at prosaic alignment of human-genius-level AI agents

  • Takeoff will continue to be gradual. We’ll get better models and more capable agents year by year, but not jumps that are bigger than that between Claude 3.7 and Claude 4.
  • Our behavioral alignment patches will work well enough.
    • RL will induce all kinds of reward hacking and related misbehavior, but we’ll develop patches for those problems (most centrally, for any given reward hack, we’ll generate some examples and counterexamples to include in the behavior training regimes).
    • (With a little work) these patches will broadly generalize. Future AI agents won’t just not cheat at chess and won’t just abstain from blackmail. They’ll understand the difference between “good behavior” and “bad behavior”, and their behavioral training will cause them to act in accordance with good behavior. When they see new reward hacks, including ones that humans wouldn’t have thought of, they’ll correctly extrapolate their notion of “good behavior” to preclude this new reward hack as well.
    • I expect that the AI labs will figure this out, because “not engaging in reward-hacking-like shenanigans” is critical to developing generally reliable AI agents. The AI companies can’t release AI agent products for mass consumption if those agents are lying and cheating all over the place.1
    • Overall, the AI agents will be very obedient. They’ll have goals, in so far as accomplishing any medium term task entails steering towards a goal, but they won’t have persistent goals of their own. They’ll be obedient assistants and delegates that understand what humans want and broadly do what humans want.
  • The world will get rich. LessWrong-style deceptive misalignment concerns will seem increasingly conspiracy-ish and out of touch. Decision makers will not put much stock in such concerns—they’ll be faced with a choice to forgo enormous and highly tangible material benefits (ceding those benefits to their rivals), on the basis of abstract concerns which have virtually no empirical examples, and whose advocates explicitly state are unfalsifiable.
  • There’s a gold rush to get the benefits before others. The world is broadly in a “greedy” mode and not a “fearful” mode. The labs and relevant governments eagerly unleash their genius-level AI agents to automate AI R&D. At this point something even stranger happens.
  1. Though a friend points out that companies might develop mechanisms for utilizing cheap AI labor—tested incentive and affordance schemes designed specifically to contend with the agents’ propensity for misbehavior. Just because the average person can’t trust an AI to do their taxes or watch their kids doesn’t mean that there aren’t enterprising businessmen who will find a way to squeeze useful outputs from untrustworthy AIs. ↩︎

The “function” of government

[note: probably an obvious point to most people]

Sleep

When I was younger I was interested in the question “why do we sleep? What is the biological function of sleep?” This is more mysterious than one might naively guess: for the past 150 years, scientists have put forth many theories of the function of sleep, but for every one of those theories, some of the specific observed facts about the biology of sleep don’t fit well with it.

At some point I realized that the question “what is the function of sleep?” relies on a confused assumption that there’s only one function—or rather, that “sleep” is one thing, rather than many overlapping processes.

A more accurate historical accounting is something like the following…

Many eons ago there was some initial reason why it was adaptive for early animals to have an active mode and a different, less active mode. The original reason for that less active mode might have been any of a number of things: clearing of metabolic waste products, investment in cellular growth over cellular activity, whatever.

But once an organism has that division between an active mode and a relatively inactive proto-sleep mode, the latter comes to include many additional functions. As the complexity of the organism increases and new biological functions evolve, some of those functions will be more compatible with the proto-sleep mode than with the active mode, and so those functions evolve to occur in that mode. Sleep is all the biological processes that happen together during the relatively inactive period.

One might be tempted to ask what the original purpose of the inactive mode was, and declare that the true purpose of sleep. But that would be yielding to an unfounded essentialism. Just because it was first doesn’t mean that it is in any sense more important. It might very well be that the original biological function that sleep evolved around (like a pearl around a grain of sand) has itself evolved away. That has no bearing on an organism’s evident need to sleep.

Government

Similarly, I had previously been thinking of states as stationary bandits. States emerge from warlords using violence to extort wealth from productive peasants, and evolve into their modern form as power-conflicts between factions within the ruling classes rearrange the loci of power. I think this is basically right (as a simplified historical accounting).

But reading a bit about economic history, I have a new sense of the state as a similar bundle of evolved subsystems.

Yes, the state starts out as a stationary bandit, but once it’s there, and taken for granted as a part of life, it is (for better or for worse) a natural entity to enforce contract law, provide public goods, run a welfare state, stimulate aggregate demand, or run a central bank. There’s a path dependency by which the state evolves to take on these functions: at any given step of historical development, the state is the existing institution that can most easily be repurposed to solve a new problem, which both changes and entrenches the power of the state—much as each newly evolved function that synergizes with the rest of sleep reinforces sleep as a behavioral pattern.

The difference

But unlike in the case of sleep, the original nature of the thing is still relevant to its current form. All of the later functions of the state are still founded on force and the use of force. Solving problems with a state almost necessarily requires, at some point in the process, threatening someone with violence.

In principle, many, maybe all, of those functions could be served by voluntary, non-coercive institutions. But since the state, given its power, is the default solution, many problems get “solved” via more violence and more coercion than was necessary.

That states have additional layers of functionality, some of which are arguably aligned with broader society, doesn’t make me notably more positive about states. Rather, it makes them seem more insidious. When there’s an entity around that has, by Schelling agreement, the legitimate right to use force to extract value, it creates a temptation to co-opt and utilize that entity’s power for many an (arguably) good cause, in addition to outright corruption.

Reflecting on some regret about not trying to join and improve specific org(s)

I started a new job recently, which has prompted me to reflect on my work over the past few years, and how I could have done better.

Concretely, I regret not joining SERI MATS, and helping it succeed, when it was first getting started. 

I think this might have been a great fit for me: I had existing skills and experience that I think would have been helpful for them. The seasonal on-off schedule would have given me the flexibility to do and learn other things. It would have (I think) helped me get a better grounding in Machine Learning and technical alignment approaches.

And if I had joined with an eye towards agentically shaping the organization’s culture and priorities as it developed, I think I would have had a positive impact on the seed that has grown into the current alignment field. In particular, I think I might have had leverage to establish some cultural norms regarding how to think about the positive and negative impacts of one’s work.1

I regarded MATS as the obvious thing to do. The nascent alignment field was bottlenecked on mentorship: a small number of people (arguably) had good taste for the kinds of research that were on track, but they had limited bandwidth for research mentorship, so conveying that research taste was (and is?) a bottleneck for the whole ecosystem. A program aiming to expand the capacity for research mentorship as much as possible, and thereby unblock everything else, seemed like the obvious, straightforward thing to do.

I said as much in my post from early 2023:

There is now explicit infrastructure to teach and mentor these new people though, and that seems great. It had seemed for a while that the bottleneck for people coming to do good safety research was mentorship from people that already have some amount of traction on the problem. Someone noticed this and set up a system to make it as easy as possible for experienced alignment researchers to mentor as many junior researchers as they want to, without needing to do a bunch of assessment of candidates or to deal with logistics. Given the state of the world, this seems like an obvious thing to do.

I don’t know that this will actually work (especially if most of the existing researchers are themselves doing work that dodges the core problem), but it is absolutely the thing to try for making more excellent alignment researchers doing real work. And it might turn out that this is just a scalable way to build a healthy field.

In retrospect, I should have written those paragraphs and generated the next thought “I should actively go try to get involved in SERI MATS and see if I can help them.”

So why didn’t I?

Misapplied notion of counterfactual impact

I didn’t do this because I was operating on the model/assumption that, while this was important, they were doing it now, and were probably not in danger of failing at it. It was taken care of and so I didn’t need to do it.

I now think that was probably a mistake. Because I didn’t get involved, I don’t know one way or the other, but it seems plausible to me that I could have contributed to making the overall project substantially better: more effective and with better positive externalities. 

This isn’t because I’ve learned anything in particular about how SERI MATS missed the mark. It’s just that getting more exposure to organizations has adjusted my prior: even if an organization is broadly working, and not in danger of collapse, it might be the case that I can personally make it much better with my efforts. In particular, I think it will sometimes be the case that there is room to substantially improve an organization in ways that don’t line up very neatly with the specific roles that they’re attempting to explicitly hire for, if you have strategic orientation and specific relevant experience.2

This realization is downstream of my interactions with Palisade over recent weeks. Also, Ronny made a comment a few years ago (paraphrased) that “you shouldn’t work for an organization unless you’re at least a little bit trying to reform it”. That stuck with me, and changed my concept of “working for an org”.

Possibly this difference in frame is also partially downstream of thinking a bit about Shapley values, through reading Planecrash and thinking about donation-matching for SFC. (I previously aimed to do things that, if I didn’t do them, wouldn’t happen. Now I’ve continuous-ized that notion, and aim for, approximately, high Shapley value.)
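For reference, here’s the standard definition (my gloss, not anything from Planecrash or the SFC material): with $N$ the set of $n$ contributors and $v(S)$ the value a coalition $S$ produces, contributor $i$’s Shapley value is their marginal contribution averaged over all orderings of contributors:

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)$$

Pure counterfactual impact is just the single term $v(N) - v(N \setminus \{i\})$; aiming for high Shapley value is the continuous-ized version, since you still get partial credit when someone else would have done part of the thing.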

Underestimating the value of “having a job”

Also, regarding SERI MATS potentially being a good fit for me in particular: I think I have historically underestimated the value of having a job for structuring one’s life and supporting personal learning. I currently wish that I had more technical background in ML and alignment/control work, and I think I might have gotten more of that if I had been actively trying to develop in that direction while supporting MATS in a non-technical capacity, instead of trying to develop that background (inconsistently) on my own.

Strategic misgivings

I didn’t invest heavily in any project over recent years because there wasn’t much that I straightforwardly believed in. As noted above, the idea-of-MATS was a possible exception to this—it seemed like the obvious thing to do given the constraints of the world. And I now think I should take “this seems like the obvious thing to do” as a much stronger indicator that I should get involved with a project, somehow, and figure out how to help, than I previously did.

But part of what held me back from doing that was misgivings about the degree to which MATS was acting as a feeder pool for the scaling labs. MATS is another project that doesn’t seem obviously robustly good to me (or “net-positive”, though I kind of think that’s the wrong frame). As with many projects, I felt reluctant to put my full force behind it for that reason.

In retrospect, I think maybe I should have shown up and tried to solve the problem of “it seems like we’re doing plausible real harm, and that seems unethical” from the inside. I could have repeatedly and vocally drawn attention to it, raised it as a consideration in strategic and tactical planning, etc. Either I would have shaped the culture around this problem for the MATS staff sufficiently that I trusted the overall organism to optimize safely, or we would have bounced off of each other unproductively. And in that second case, we could part ways, and I could move on.

In general, it now feels like a more obvious affordance to me: if I think something is promising, but I don’t trust it to have positive impacts, I can try non-disruptively making it better according to the standards that I think are important, and, if that doesn’t work or doesn’t go well, part ways with the org.

This all raises the question: “should I still try to work for SERI MATS and make it much better?”

My guess is that the opportunity is smaller now than it was a few years ago, because both the culture and the processes of the org have found an equilibrium that works. There’s less leverage to make an org much better once it has reached product-market fit and is mostly finding ways to reproduce that product consistently and reliably, compared to when it’s still figuring out how to do the thing it’s trying to do.

That said, one common class of error is overestimating the degree to which an opportunity has passed. e.g. not buying Bitcoin in 2017, because you believe that you’ve already missed the big opportunity—it’s true in some sense, but you’re underestimating how much of the opportunity still remains. 

So, if I were still unattached, writing this essay would prompt me to reach out to Ryan, and say directly that I’m interested in exploring working for MATS, and try to get more contact with the territory, so that I can see for myself. As it is, I have a job which seems like it needs me more, and which I anticipate absorbing my attention for at least the next year.

  1. Note: of all the things I wrote here, this is the point that I am most uncertain of. It seems plausible to me that because of psychological dynamics akin to “It is difficult to get a man to understand something, when his salary depends on his not understanding it”, and classic EA-style psychological commitment to life narratives that impart meaning via impact, the cultural norms around how the ecosystem as a whole thinks about positive and negative impacts were and are basically immovable. Or rather, I might have been able to make more-or-less performative hand-wringing fashionable, and possibly cause people to have less of an action-bias, but not actually produce norms that lead to more robustly positive outcomes.

    At least, I don’t have a handle on either how to approach these questions myself, or how to effectively intervene on the culture about them. And so I’m not clear on if I could have made things better in this way. But I could have made this my explicit goal and tried, and made some progress, or not. ↩︎
  2. A bit of context that is maybe important: I have not applied for a job since I was 21, when I was looking for an interim job during college. Every single job that I’ve gotten in my adult life has resulted from either my just showing up and figuring out how I could be helpful, or someone I already knew reaching out and asking me for help with a project.

    For me at least, “show up and figure out what is needed and make that happen” is a pretty straightforward pattern of action, but it might be foreign to other people who have a different conception of jobs that is more centered on specific roles, that you’re well-suited for, and doing a good job in those roles. ↩︎

That no one rebuilt old OkCupid updates me a lot about how much the startup world actually makes the world better

The prevailing ideology of San Francisco, Silicon Valley, and the broader tech world, is that startups are an engine (maybe even the engine) that drives progress towards a future that’s better than the past, by creating new products that add value to people’s lives.

I now think this is true in a limited way. Software is eating the world, and lots of bureaucracy is being replaced by automation which is generally cheaper, faster, and a better UX. But I now think that this narrative is largely propaganda.

That it’s been 8 years since Match bought and ruined OkCupid, and no one in the whole tech ecosystem has stepped up to make a dating app even as good as old OkC, is a huge black mark against the whole SV ideology of technology changing the world for the better.

Finding a partner is such a huge, real pain point for millions of people. The existing solutions are so bad and extractive. A good solution has already been demonstrated. And yet not a single competent founder wanted to solve that problem for planet earth, instead of doing something else that (arguably) would have been more profitable. At minimum, someone could have forgone venture funding and built this as a cashflow business.

It’s true that this is a market that depends on economies of scale, because the quality of your product is proportional to the size of your matching pool. But I don’t buy that this is insurmountable. Just like with any startup, you start by serving a niche market really well, and then expand outward from there. (The first niche I would try for is building an amazing match-making experience for female grad students at a particular top university. If you create a great experience for the women, the men will come, and I’d rather build an initial product for relatively smart customers. But there are dozens of niches one could try for.)

But it seems like no one tried to recreate OkC, much less create something better, until the Manifold team built manifold.love (currently in maintenance mode). It’s not just that no one succeeded—to my knowledge, no one else even tried. Possibly Luna counts, but I’ve heard through the grapevine that they spent substantial effort running giant parties rather than actually developing and launching their product, from which I infer that they were not very serious. I’ve been looking for good dating apps. I think if a serious founder were trying seriously, I would have heard about it.

Thousands of founders a year, and no one?!

That’s such a massive failure, for almost a decade, that it suggests to me that the SV ideology of building things that make people’s lives better is broadly propaganda. The best founders might be relentlessly resourceful, but a tiny fraction of them seem to be motivated by creating value for the world, or this low hanging fruit wouldn’t have been left hanging for so long.

This is of course in addition to the long list of big tech companies who exploit their network-effect monopoly power to extract value from their users (often creating negative societal externalities in the process), more than creating value for them. But it’s a weaker update that there are some tech companies that do ethically dubious stuff, compared to the stronger update that there was no startup that took on this obvious, underserved, human problem.

My guess is that the tech world is a silo of competence (because competence is financially rewarded), but operates from an ideology with major distortions / blindspots that are disconnected from commonsense reasoning about what’s Good—e.g. following profit incentives, and excitement about doing big things (independent of whether those big things have humane or inhumane impacts), off a cliff.

Small cashflow software businesses might be over soon?

[Epistemic status: half-baked musing that I’m writing down to clarify for myself]

For the past 15 years there’s been an economic niche where a single programmer develops a useful tool, utility, or application, sells it over the internet to a few thousand people for a small amount of money each, and makes a decent (sometimes passive or mostly-passive) living on that one-person business.

In practice, these small consumer software businesses are on the far end of a continuum that includes venture-backed startups, and they can sometimes be the seed of an exponentially scaling operation. But you only need to reach product-market fit with a few thousand users for a business like this to be sustainable (e.g. 2,000 users paying $10 a month is $240k a year). And at that point, it might be mostly on autopilot: the entrepreneur has income, but can shift most of their attention to other projects, after only two or three years.

Intend (formerly Complice) is an example of this kind of business from someone in my circles.

I wonder if these businesses will be over soon, because of AI.

Not just that AI will be able to do the software engineering, but that AI swarms will be able to automate the whole entrepreneurial process: generating (good) ideas, developing early versions, shipping them, getting user feedback, and iterating.

The discourse already imagines a “one-person unicorn”, where a human CEO coordinates a company of AIs to provide a product or service. With half a step more automation, you might see meta-entrepreneurs overseeing dozens or hundreds of separate AI swarms, each ideating, prototyping, and developing a business. Some will fail (just like every business), but some will grow and succeed, and (just like with every other business venture) you can invest more resources into the ones that are working.

Some questions:

  • How expensive will inference be, in running these AI entrepreneurs? Will the inference costs be high enough that you need venture funding to run an AI entrepreneur-system?
    • Estimating this breaks down into roughly “how many tokens does it take to run a business (per day)?” and “how much will an inference token cost in 2028?” (See the toy sketch after this list.)
  • What are the moats and barriers to entry here? What kind of person would capture the gains from this kind of setup?
  • Will this eat the niche of human-ideated software businesses? Will there be no room left to launch businesses like this and have them succeed, because the space of niche software products will be saturated? Or is the space of software ideas so dense, that there will still be room for differentiation, even if there are 1000x as many products of this type, of comparable quality, available?
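As a minimal sketch of how that estimate might go—both inputs below are numbers I’ve invented purely for illustration, not forecasts:

```python
# Toy Fermi estimate of one AI entrepreneur's inference bill.
# Both constants are invented assumptions, not measured figures.

TOKENS_PER_DAY = 50_000_000      # assumed tokens/day to ideate, build, ship, iterate
USD_PER_MILLION_TOKENS = 1.00    # assumed blended inference price in 2028

daily_cost = (TOKENS_PER_DAY / 1_000_000) * USD_PER_MILLION_TOKENS
annual_cost = daily_cost * 365

print(f"~${daily_cost:,.0f}/day, ~${annual_cost:,.0f}/year")
# -> ~$50/day, ~$18,250/year under these assumptions. If that's the right order
# of magnitude, you don't need venture funding; if useful businesses need 100x
# more tokens than this guess, the calculus changes.
```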

. . .

In general, the leverage of code is going to drop over the next 5 years.

Currently, one well-placed engineer will write a line of code that might be used by millions of users. That’s because there’s zero marginal cost to replicating software, and so a line of code written once might as well be copied to a million computers. But it’s also a product of the relative expense of programming labor. Not many people can write (good) code, and so their labor is expensive. It’s definitely not worth paying $100 an hour for an engineer to write some software when you can buy existing off-the-shelf software that does what you need (or almost what you need) for $50 a month.

But as AI gets good enough that “writing code” becomes an increasingly inexpensive commodity, the cost-benefit of writing custom software is going to shift in the “benefit” direction. When writing new software is cheap, you might not want to pay the $50 a month, and there will be more flexibility to write exactly the right software for your particular usecase instead of a good-enough off-the-shelf version (though I might be overestimating the pickiness of most of humanity with regards to their software). So more people and companies will write custom software more of the time, instead of buying existing software, and the number of computers that run a given line of code will drop in the process.
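Here’s a toy version of that build-vs-buy shift, with all figures invented for illustration:

```python
# Months of subscription fees needed to pay back writing the software yourself,
# at two different prices of programming labor. All numbers are illustrative.

HOURS_TO_BUILD = 200   # assumed effort for a custom replacement
SUBSCRIPTION = 50      # assumed $/month for the off-the-shelf product

def breakeven_months(hourly_rate: float) -> float:
    return (HOURS_TO_BUILD * hourly_rate) / SUBSCRIPTION

print(breakeven_months(100.0))  # human engineer at $100/hr: 400 months; buying wins
print(breakeven_months(2.0))    # cheap AI labor at $2/hr: 8 months; building wins
```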

How I wish I lived my life (since 2020)

[I wrote this a few months ago]

I always have a(n at least) part-time job, doing something object level, where someone pays me to do something that creates value. Doing something that isn’t entirely self-directed adds some structure to my life which, I think, makes me better at doing my personal projects. I might stick to one job for six months or a year, and then move on to try something else. I want to try a bunch of different things and work with different kinds of people. I always have a job, but I also always have my eye out for my next job. I do things like research for AI Impacts, grantmaking for SFF, startup stuff for Manifold, generalist work for CAIP, logistics for Lighthaven events.

[I should have a flag for whenever I don’t have a job. That’s something that I should fix ASAP, even with just a stopgap. Instead of looking for something that I really want to do, I should make sure that I have something that I’m doing for a few hours each weekday, even if I want to find something better. When people ask me what I’m doing, I should always have a day job.]

In the evenings, I work on personal and learning projects: programming projects (including working with a tutor), studying textbooks, writing, practicing therapy skills. Whatever I’m working on, it always has a deliverable: if I’m learning something, I should write about what I’m learning or give talks about it. If I’m learning a skill, I design a “final project” that involves some person other than me.

Sometimes I’ll put the learning projects aside, and scale up my work, going all in for a campaign of a week, or 3 months, working intensively with a team to complete an end-to-end project.

Some weekends I try an intensive, doing an experiment or self-designed exercise with another person or a group of people.

I live frugally. I put away most of the money I earn, split between long-run investments (both index funds and higher-risk bets) and my personal development fund. I make enough to live on from scattered projects, so I should be able to save most of what I make from my work.

I go to 5 conferences a year, trying to get exposure to interesting happenings in the world, people who are thinking about interesting things, and highly ethical women to date.

Every day I meditate and exercise. I don’t watch TV or youtube or read comic books. My go-to habits when I’m not doing anything are reading and taking notes on podcasts.

Lessons from and musings about Polytopia

Over the past 6 months I’ve played about 100 hours of the 4X game “The Battle of Polytopia.” Mostly playing on “crazy” difficulty, against 3 or 4 other tribes. 

This is more than I’ve played any video game since I was 15 or so. I wanted to write up some of my thoughts, especially those that generalize.

Momentum

Polytopia is a game of momentum or compounding advantage. If I’m in the lead by turn 10 or so, I know that I am basically guaranteed to win eventually. [Edit 2024-09-12: after playing for another 30 hours, and focusing on the early game, I can now almost always win eventually, regardless of whether I have an early lead]. I can turn a current resource advantage into an overwhelming military advantage for a particular city, and by seizing that city, get more of a lead. After another 10 turns, I’ll have compounded that until my tribe is an inexorable force moving across the square.

I think the number one thing that I took away from this game is the feeling of compounding momentum. Life should have that flavor. 

And, in particular, the compounding loops through the world. In Polytopia, you generally want to spend your resources down to 0, or close to it, every turn, unless there’s a specific thing you’re aiming to buy that requires more than one marginal turn of resources. “Saving up” is usually going to be a losing proposition, because the return on investment of seizing a city or building up an existing city’s population sooner is exponential.
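A toy model of that heuristic (all numbers invented; it assumes reinvested resources compound each turn while hoarded ones don’t):

```python
# Spend-down-to-zero vs. save-up, over 20 turns.
GROWTH = 1.15  # assumed per-turn return on resources reinvested into cities/expansion
TURNS = 20
INCOME = 10    # stars earned per turn

invested = saved = 0.0
for _ in range(TURNS):
    invested = (invested + INCOME) * GROWTH  # spend everything, every turn
    saved += INCOME                          # hoard everything

print(f"invested: {invested:.0f}, saved: {saved:.0f}")
# -> invested: ~1178, saved: 200. Compounding runs away from hoarding.
```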

This also generalizes. I’m very financially conservative, by nature. I tend to earn money and save it / invest it. There’s a kind of compounding to that, but it isn’t active. There’s a different attitude one could have, where they’re investing in the market some amount every year, and putting aside some money for emergencies, but most of their investment loops through the world. Every year, they spend down most of the money they own, and invest it in ways to go faster.

I think most people, in practice, don’t do this very well: they spend their salary on a nice apartment, instead of on the cheapest apartment they can afford plus tutoring, personal assistants, and plane flights to one-on-one longshot-bet meetings. At a personal (instead of organizational) level, I think the returns to spending additional money saturate fast, and after that point you get better returns investing in the market. But I think there might be something analogous to the “spend down your whole budget, every turn” heuristic.

I’ve thought in the past that I should try aggressively spending more money. But maybe I should really commit to trying it. I have a full time salary for the first time in my life. Maybe this year, I should experiment with trying to find ways to spend 90% of that salary (not my investment returns, which I’ll reinvest), and see what the returns to that are. 

This overall dynamic of compounding advantages makes it more concerning that I, personally, haven’t built up much of an advantage yet. Mediocre accumulation of money, connections, and skills seems “medium” good, but because of the exponential, mediocre is actually quite far down a power law. This prompts me to reflect on what I can do this year to compound my existing resources (particularly with regards to personal connections, since I realized late in life that who you know, and who knows what you can do, is a constraint on what you can do).

Geography

Because of this momentum effect, the geography of the square dominates all other considerations in the question of which tribe will eventually win. In particular, if I start out isolated, far from any other tribes, with several easily accessible settlements available to convert, winning is going to be easy. I can spend the first ten turns capturing those settlements and building up their population without needing to expend resources on military units for defense.

In contrast, if I start out sandwiched between two other tribes, with all settlements that are within my reach also within theirs, the struggle is apt to be brutal. It’s still possible to win starting from this position: the key is to seize and build up at least two cities, and create enough units defensively that the other tribes attack each other instead of you. (That’s one thing that I learned: you sometimes need to build units to stand near your cities, even when you’re not planning an attack, because a density of units discourages other tribes from attacking you in the first place.)

From there, depending on how dire the straits are, and whether I’m on the coast, I’ll want to either

  1. Train defenders to garrison my cities, and then quickly send out scouts to convert nonaffiliated settlements, and build up an advantage that way, or
  2. Train an attack force (mostly archers, most likely, because a mass of archers can attack from a distance with minimal risk) to target a city that the other two tribes are fighting over. I can sweep in and seize it after they’ve exhausted themselves.

This can work, but it still depends on luck. If you can’t get to your first settlement(s) fast enough, or another tribe captures one of your cities before you’ve had time to build up a deterring defense force, there’s not really a way to recover. I’ll be fighting against better-resourced adversaries for the rest of the game, until they overwhelm me. Usually I’ll just start the game over when I get unlucky this early.

This overwhelming importance of initial conditions sure seems like it generalizes to life, but mostly as a dour reminder that life is unfair. Insofar as you can change your initial conditions, they weren’t actually initial.

Thresholds, springs, and concentration of force

There are units of progress in polytopia that are composed of smaller components, but which don’t provide any value until all components are completed. 

For instance, it’s tempting to harvest a fruit or hunt an animal to move the resource counter of a city up by one tick. But if you don’t have the stars to capture enough resources to reach the next population-increase threshold for that city (or there just aren’t other accessible resources nearby), it doesn’t actually help to collect that one resource. You get no benefit for marginal ticks on the resource counter, you only get benefit from increases in city population.

Even if collecting the resource is the most valuable thing to do this turn, you’re still better off holding off and waiting a turn (one exception to the “spend down your resources every turn” heuristic). Waiting gives you optionality in how you spend those resources—you might have even better options in the next turn, like training units that were occupied last turn, or researching technologies.

Similarly, capturing a city entails killing the unit that is stationed there, and moving one of your units into its place, with few enough enemy units nearby that your unit isn’t killed before the next turn. Killing just the central stationed enemy unit, without having one of your units nearby to capture the city, is close to useless (not entirely useless, because it costs the enemy tribe one unit). Moving a unit into the city, only for it to be killed before the next turn, is similarly close to useless.

So in most cases, capturing a city is a multi-turn campaign of continually training and/or moving enough units into position to have a relative military advantage, killing enough of the enemy units, and moving one of your units (typically a defender or a giant, if you can manage it) into position in the city.

Crucially, partially succeeding at a campaign—killing most of the units, but not getting all the way to capturing the city—buys you effectively nothing. You don’t win in Polytopia by killing units, except insofar as that is instrumental to capturing cities.

More than that, if you break off a campaign partway through, your progress is not preserved. When you pull your units back, that gives the enemy city slack to recover and replenish its units. So if you go back to capture that city later, you’ll have to more or less start over from scratch in wearing down their nearby military.

That is to say, capturing a city in polytopia is spring-like: if you don’t push it all the way to completion, it bounces back, and you need to start over again. It’s not just that marginal progress doesn’t provide marginal value until you reach a threshold point. Marginal progress decays over time.

I can notice plenty of things that are spring-like in this way, once I start thinking in those terms. 

Some technical learning, for instance. If I study something for a bit, and then leave it for too long (I’m not sure what “too long” is—maybe more than two weeks?) I don’t remember the material enough for my prior studying to help me much. If I want to continue, I basically have to start over.

But on the other hand, I read and studied the first few chapters of a Linear Algebra textbook in 2019, and that’s served me pretty well: I can rely on at least some of those concepts in my thinking. I think this difference is partly due to the material (some subjects just stick better for me, or are more conceptually useful, compared to others). But largely, I think this is a threshold effect: if I study the content enough to chunk and consolidate the concepts, it sticks with me and I can build on it. But if I read some of a textbook without getting to the point of consolidating the concepts, it just gets loaded into my short-term memory, to decay on the order of weeks.

Writing projects definitely have the threshold-dynamic—they don’t provide any value until I ship them—and they’re partially but not fully spring-like. When I’ve left a writing project for too long, it’s hard to come back to it: the motivating energy is gone. And sometimes I do end up, when I’m inspired again, rewriting essentially the same text (though often with a different structure). But sometimes I am able to use partial writing from previous attempts.

Generalizing: one reason things are spring-like is that short-term memories and representations decay, and you need to pass the threshold of consolidating them into long-term representations.

In Polytopia, because capturing cities is spring-like, succeeding requires a concentration of force. Splitting your forces to try to take two cities at once can be worse than useless. And so one of the most important disciplines of playing Polytopia is having internal clarity about which city you’re targeting next, so that you can overwhelm that city, capture it, consolidate it, and then move on to the next one. Sometimes there are sudden opportunities to capture cities that were not your current target, and late in the game you might have more than one target at a time (usually from different unit-training bases).

Similarly, anything in my life that’s spring-like demands a concentration of force. 

If technical learning in my short term memory tends to decay, that means that I need to commit sufficiently to a learning project for long enough to hit the consolidation threshold. I want to concentrate my energies on the project until I get to the point of success, whatever success means.

Same basic principle for writing projects. When writing, I should probably make a point to just keep going until I have a first complete draft.

Video games

Probably the most notable thing I learned was not from the content, but from the format. Video games can work. 

I got better at playing Polytopia over the period that I was playing it, going from mostly losing to mostly winning my games. That getting better mostly took the form of making mistakes, noticing those mistakes, and then more or less automatically learning the habits to patch those mistakes.

For instance, frustration at losing initiative—because I left a city un-garrisoned and an enemy unit came up and took it without a fight while I wasn’t looking—led to a general awareness of all my cities and the quiet enemy units within movement distance of them, so that I could quickly train a unit to garrison them.

Or: running into an issue where I ran out of population for a city and couldn’t easily garrison it taught me to keep a defender within one step of a city, so that I can train units there to send to the front, but move the defender into place when the population is full.

This was not very deliberate or systematic. I just kept playing and gradually learned how to avoid the errors that hobbled me.

And I just kept playing because it was (is) addictive. In particular, when I finished a game, there was an automatic impulse to start another one. I would play for hours at a stretch. At most I think I played for ten hours in a row. 

Why was it addictive? I think the main thing is that the dynamic of the game means I never get blocked with no option, or no idea, for what to do next. At every moment there’s an affordance to move the game forward: either something to do, or just moving on to the next turn. The skill is in taking actions skillfully, not in figuring out how to take actions at all. I think this, plus an intermittent reinforcement schedule, was crucial to what made it addictive.

Overall, this has been bad for my life, especially after the point when I started mostly winning, and I wasn’t learning as much any more. 

But I think I learned something about learning and getting better in that process. I’ve been playing with the idea of intentionally cultivating that kind of addiction for other domains, or pretending as if I’m experiencing that kind of addiction to simulate it.

I bet I could get into a mode like this with programming, where I compulsively keep going for hours over weeks, and in the process learn the habits to counter my mistakes and inefficiencies, less because of anything systematic, and more just because those errors are present to mind in my short term memory by the time I encounter them again. I think I’m probably close to having enough skill in programming that I can figure out how to never be blocked, especially with the help of an LLM, and get into the addictive rhythm.

Further, this makes me more interested in trying to find video games that are both dopamine-addictive and train my intuition for an important domain. 

I’ve been playing with Manifold markets recently, and I feel like I’m getting a better sense of markets in the process. I wonder if there are good video games for getting an intuition for linear algebra, or economics. I have registered that playing 100 hours of Factorio is an important training regime. I wonder if there are others.

I haven’t really played video games since I was in middle school (with the exception of some rationality training exercises on Snakebird and Baba Is You). At the time I was playing Knights of the Old Republic, and decided that I would try to become a Jedi in real life, instead of in the game. I mostly haven’t played video games since.

I now think that this was maybe a mistake.

It’s hard to know what lessons I would have learned if I had played more video games—when I played Age of Mythology and Age of Empires as a kid, I don’t remember getting better over time the way I did with Polytopia. But I do think there are lessons I could have learned from playing video games that would have helped me in thinking about my life. Notably, getting reps playing through games with early, mid, and late stages would have given me a model for planning across life stages, which is something that, in retrospect, I was lacking. I didn’t have an intuitive sense of the ways that the shape of my opportunities would be different in my 20s vs. my 30s, for instance. Possibly I would have avoided some life errors if I had spent more time playing, and learning to get good at, video games.