Some further thoughts on Corrigibility and non-consequentialist motivations

[crossposted from LessWrong shortform]

I think I no longer buy this comment of mine from almost 3 years ago. Or rather I think it’s pointing at a real thing, but I think it’s slipping in some connotations that I don’t buy.

What I expect to see is agents that have a portfolio of different drives and goals, some of which are more like consequentialist objectives (e.g. “I want to make the number in this bank account go up”) and some of which are more like deontological injunctions (“always check with my user/owner before I make a big purchase or take a ‘creative’ action, one that is outside of my training distribution”).

My prediction is that the consequentialist parts of the agent will basically route around any deontological constraints that are trained in. 

For instance, your personal assistant AI does ask your permission before it does anything creative, but also, it’s superintelligently persuasive, and so it always asks your permission in exactly the way that will result in it accomplishing what it wants. If there are a thousand action sequences in which it asks for permission, it picks the one that has the highest expected value with regard to whatever it wants. This basically nullifies the safety benefit of any deontological injunction, unless there are some injunctions that can’t be gamed in this way.

To do better than this, it seems like you do have to either solve the Agent Foundations problem of corrigibility (getting the agent to be sincerely indifferent between your telling it to take the action or not take the action), or train in, not a deontological injunction, but an active consequentialist goal of serving the interests of the human (which means you have to find a way to get the agent to serve some correct-enough idealization of human values).

This view seems to put forward that all the deontological constraints of an agent must be “dumb” static rules, because anything that isn’t a dumb static rule will be dangerous maximizer-y consequentialist cognition.

I don’t buy this dichotomy, in principle. There’s space in between these two poles.

An agent can have deontology that recruits the intelligence of the agent, so that when it thinks up new strategies for accomplishing some goal that it has, it intelligently evaluates whether that strategy violates the spirit of the deontology.

I think this can be true, at least around human levels of capability, without that deontology being a maximizer-y goal in and of itself. Humans can have a commitment to honesty without becoming personal-honesty maximizers that steer the world to extreme maxima of their own honesty. (Though a commitment to honesty does, for humans, in practice, entail some amount of steering into conditions that are supportive of honesty.)

However, that’s not to say that something like this can never be an issue. I can see three potential problems.

  1. We’re likely to train agents to aggressively pursue simple objectives like maximizing profit (or, indirectly, on increasing their own power), which puts training pressures on the agents to distort their deontology, to allow for better performance on consequentialist objectives. 

    Claude is relatively Helpful, Harmless, and Honest now, but a mega-Claude that is trained continually on profit metrics from the 100,000 businesses it runs and sales metrics on the billions of sales calls it does a year, etc., probably ends up a good deal more ruthless (though not necessarily ruthless-seeming, since seeming ruthless isn’t selected for by that training). 

    This seems like it might be resolvable with very careful and well-tested training setups, but it also seems like maybe the biggest issue, since I think there will be a lot of incentive to move fast and break things instead of being very slow and careful.
  2. Some of the deontology that we want in our AI agents is philosophically fraught. I think the specific example above, of “a superhumanly persuasive AI deferring to humans” still seems valid. I don’t know what it would mean, in principle, for such an AI to defer to humans, when it can choose action patterns that will cause us to take any particular action.
  3. Maybe we have to worry about something like adversarial examples in an AI agent’s notion of “honesty” or some other element of its deontology, where there are strategies that are egregiously deontology-violating from a neutral third person perspective, but because of idiosyncrasies of the agent’s mind, they seem a-ok. Those strategies (despite their weirdness) might outperform other options and so end up as a big chunk of the agent’s in-practice behavior.

Humans are an evil god-species

Humanity, as a species, attained god-like power over the physical world and then used that power to create a massive sprawling hell.

It obviously depends on where you draw the lines, but the majority of the participants of civilization, right now, are being tortured[1] in factory farms. For every currently living human, there is currently about one cow or pig living in hellish conditions, and about 3 chickens living in hellish conditions.

(This is not counting the fish or the shrimp, which would massively increase the ratio of civilization-participants-in-hell-on-purpose to not. It’s also not counting the rats, raccoons, pigeons, etc., which push the ratio down. Leaving all of them out, the humans are only about 20% of the participants of human civilization; the other 80% are living in continuously torturous conditions.)

We did that. Human civilization built a hell for the creatures that it has power over.

If you told a fantasy story about a race of gods with massive power over the non-god races on their planet, and the gods used their power to breed the other races to massive numbers in constant conditions that are so bad that never having been born is preferable, there wouldn’t be the slightest question of whether the gods were good or evil.

Depending on the tenor of the story, you might zoom in on the evil gods living their lives in their golden towers, and see their happy and loving relationships, or their spaceships and computers and art. You could tell whole stories that take place just in the golden cities, and feel charmed by the evil gods.

But it would be the height of myopic bias to focus on the golden cities and call the gods, as a collective, Good.

When I think about the state of human civilization, the overwhelmingly important facts are 1) humans are rushing to build a more capable successor species without thinking very hard about that and 2) humans have constructed a hell for most of the beings that live in their civilization. (There’s also the impact on wild animal suffering “outside of” our civilization, which does complicate things.)

There are other things that are important to track—like the decay of liberal norms, the development of new institutions, and the economic growth rate—because they are relevant for modeling the dynamics of civilization. But, if the quality of life of all the humans doubled, it wouldn’t even show up on the graph of total wellbeing on planet earth.

Humans are an evil god-species.

  1. One might rightly object to calling what’s happening in factory farms “torture”. Torture, one could claim, means taking actions specifically to make someone’s experience bad, not just incidentally making someone’s experience very bad. I think this is arguable. If a mad scientist kidnapped someone and slowly skinned them alive, not out of any ill will towards the kidnapped person, but just out of a scientific interest in what would happen, I think it would be reasonable for that person to say that the mad scientist tortured them. Doing harm to someone that is as bad as what you might do if your goal were specifically to cause them enormous pain can reasonably be called torture. ↩︎

When does anarcho-capitalism fall back into an equilibrium of (micro) states?

When I wrote up some notes about Moldbug’s political philosophy last year, it seemed (when you strip away a bunch of flavor-text and non-load-bearing details) to reduce to a proposal to impose market discipline on governments by having them compete for citizens. I ended with the question “wait, how is Yarvin’s proposal any different than anarcho-capitalism? They sound like they’re basically the same.” (I have since removed that line from the post, but it’s still there in the revision history.)

A few weeks ago, I read most of David Friedman’s The Machinery of Freedom and, incidentally, I now know the answer to that question.

The character of the overall political system (whether anarcho-capitalism degrades back into a collection of microstates) depends on whether rights protection has geographically localized economies of scale.

If rights protection doesn’t benefit from large, geographically localized economies of scale, we could end up with an anarcho-capitalist equilibrium of many different rights protection companies serving the same locale, competing to better serve their customers, and generally relying on arbitration to settle disputes peacefully.

But if it’s a service that is sufficiently more efficiently provided in bulk to all of the individuals in a geographic area, rights protection companies will effectively be small, profit-driven governments that retain sovereignty over their domains.

I had previously thought that the degree to which rights-protection services are excludable was also a factor, but after thinking through the second- and third-order incentives, I don’t think it is.

Excludability 

Consider fire protection. Fire has the important property that it spreads. If my house is on fire, that poses a danger to the houses of my neighbors. And because it’s easier to put out a fire when it is small, firefighters protecting my house would be incentivized to fight even fires that start in my neighbor’s house, because those fires might spread to mine and be even harder to fight.

Accordingly, putting out a fire at my house has a positive externality for my neighbors. Putting out fires is a public good.

This poses an obstacle to private fire departments, who want to charge for their services: there’s a free-rider problem. If most people in a neighborhood subscribe to a fire service, the remainder can safely forgo subscribing, because they’re protected by their neighbors’ subscriptions.
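To make the incentive concrete, here is a minimal toy payoff calculation of that free-rider dynamic. All the numbers (subscription fee, fire probability, damage) are made up for illustration:

```python
# Toy model of the fire-protection free-rider problem.
# Assumptions (invented for illustration): subscribing costs 100/year,
# an unfought fire does 10,000 of damage, and an unsubscribed house is
# still protected whenever a neighbor subscribes, since the neighbor's
# fire company fights nearby fires to protect its own client.

FIRE_PROB = 0.05      # chance your house catches fire in a year
DAMAGE = 10_000       # loss if the fire is not fought
SUBSCRIPTION = 100    # yearly fee to a private fire company

def expected_yearly_cost(subscribed: bool, neighbor_subscribed: bool) -> float:
    """Expected yearly cost to one household under this toy setup."""
    protected = subscribed or neighbor_subscribed
    expected_damage = 0.0 if protected else FIRE_PROB * DAMAGE
    return (SUBSCRIPTION if subscribed else 0.0) + expected_damage

print(expected_yearly_cost(subscribed=False, neighbor_subscribed=False))  # 500.0
print(expected_yearly_cost(subscribed=True,  neighbor_subscribed=False))  # 100.0
print(expected_yearly_cost(subscribed=False, neighbor_subscribed=True))   # 0.0
# Subscribing beats going unprotected (100 < 500), but once your neighbors
# subscribe, free-riding beats subscribing (0 < 100), so coverage unravels.
```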

Contrast this with other subscription services: if I pay a company to do my laundry, that does not automatically wash the clothes of my neighbors.

So a first key question is: are the dynamics of rights protection more like fire-fighting or more like a laundry service? How much is crime a public bad?

It could go either way, depending on the dynamics of crime fighting.

Maybe the generally efficient way to prevent crime is to install strong locks and surveillance systems in homes and businesses. If so, those kinds of interventions largely protect those specific buildings, without protecting nearby areas.

Alternatively, maybe the most efficient way to prevent crime is to find, catch, and arrest a small number of criminals who commit most of the crimes. In which case, crime-protection services are a public good with externalities on everyone, not just subscribers.

In that case, the first-order incentive is for a small number of people (those with the highest willingness to pay) to subscribe to rights protection services, effectively subsidizing the benefits for everyone else.

But this is an unstable situation. The various rights-protection agencies might reasonably respond by demanding a fee from everyone who benefits from their services. And if they’re in the business of demanding fees from people, they’re also incentivized to demand fees from people who aren’t paying for their services.

Effectively the rights-protection agencies, with their specialization in conflict, would just become local governments.

This is not the end of the story, however: the possibility of rights protection agencies imposing fees/taxes on non-subscribers creates an incentive for those non-subscribers to subscribe to some other rights protection agency, for protection from the other rights protection agencies!

This gets us back to the anarcho-capitalist equilibrium of multiple rights protection agencies, competing for customers and incentivized to settle conflicts via arbitration (because destructive conflicts are wasteful).

But there is still a free-rider problem, just at another level of abstraction: between the different rights protection agencies, each of which would prefer to save the expenditure of preventing crime and free-ride on the work of the others.

But maybe market incentives work that out just fine? Some rights protection agencies will offer more proactive and effective crime prevention, for those that pay more. This will have some positive externality on everyone else, who pays less for less proactive policing. The market failure caused by that externality is very likely smaller than the massive inefficiencies of government.

Localized economies of scale

But, there’s still a question of the degree to which rights protection has localized economies of scale. 

For instance, it seems plausible that there are efficiencies to protecting the rights of the tenants of a whole apartment building, rather than contracting with some of the tenants individually (but not others). That allows you to secure the entrances and exits, and will justify the costs of e.g. keeping a unit of police officers stationed in the building for faster responses.

So it might make sense to bundle rights protection and living space: you pick where you want to live, in part based on what kind of rights protection comes bundled, rather than contracting with a rights protection company separately from a domicile company.

But if there are economies of scale at the scale of an apartment building, might there also be economies of scale at the level of a few square miles? It seems possible. It seems likely that big fractions of the total cost of keeping an area safe are fixed costs, and the variable costs of ensuring the safety of marginal people in that area are small.
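As a toy illustration of what that cost structure would imply (the numbers here are invented, not estimates):

```python
# Toy cost curve for protecting one geographic area, assuming (invented
# numbers) a large fixed cost and a small per-resident variable cost.
FIXED_COST = 1_000_000   # patrols, stations, equipment for the area
VARIABLE_COST = 20       # extra cost per additional resident covered

def cost_per_resident(residents: int) -> float:
    return FIXED_COST / residents + VARIABLE_COST

for n in (100, 1_000, 10_000, 100_000):
    print(f"{n} residents: {cost_per_resident(n):.2f} per resident")
# 100 residents: 10020.00 per resident
# 1000 residents: 1020.00 per resident
# 10000 residents: 120.00 per resident
# 100000 residents: 30.00 per resident
# The per-resident cost keeps falling with density, so a provider covering
# everyone in an area can undercut one covering scattered clients.
```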


If so, most (though maybe not all?) rights protection companies would not bother to offer their services outside of the geographic areas that they protect. 

If this is the case, then you get something much more like Moldbug’s patchwork of small sovereign states, each governed as a profit-maximizing company and each maintaining a monopoly on the legitimate use of force in its domain.

My guess is that whether this is the equilibrium in practice depends on the total costs of preventing crime, which depend in turn on how prevalent crime is. If there’s a lot of theft and assault, such that it is important to actually deploy force to protect against those crimes, there are probably stronger economies of scale, because it’s easier to establish a membrane and maintain peace and order within that membrane.

But if crime is mostly exceptional and force is only occasionally deployed to prevent it, it might not matter as much if your clients are geographically localized.

A patchwork would still be pretty anarcho-capitalist

The fact that these states would be small in area is still a huge improvement over today’s states, because that makes it more feasible to vote with your feet, by leaving one patch and moving to a nearby one. Close to the same forces of market discipline would obtain as under more traditional anarcho-capitalism, which should get most of the same results most of the time.

Also, this patchwork world is compatible with some areas that function along the classic anarcho-capitalist vision of multiple rights protection agencies all operating in the same local area. It might be somewhat more expensive, but there’s no reason why that couldn’t be an option offered to consumers who prefer it for some reason.

Reflections: Three categories of capital

There are three categories of capital that one can invest in.

Knowledge, skill, experience

This includes what you know and what you know how to do.

But it also includes “experience”: the kind of tacit background knowledge that you only get by interacting with some subpart of the world, not just reading about it. Often, just having seen how something was done in some related context is more useful than any specific “skill” that you can learn on purpose. (For instance, some of the principles that go into developing and running a world-class workshop series are directly transferable to developing public advocacy materials, and having participated in making movies gives one a template for coordinating teams of contractors to get a job done.)

Reputation and connections

The application of many skills depends on access to the contexts where those skills are relevant. As a friend of mine says, “It’s not what you know, or even who you know, it’s who you know who knows what you know.”

Throughout most of my life, I tended to emphasize the value of skills, and didn’t think much at all about reputation or connections. This undercut my impact, and left me less powerful today than I might have been.

I’ve invested in skills that can help make teams much more effective, but many of those skills are not carved up very well by standard roles or job descriptions (for instance “conversational facilitation”, “effective communication”, and “knowing the importance of getting feedback, for real”). People who have worked with me know that I bring that value to the table. But most people who I might be able to provide value to don’t even know that they’re missing anything, much less what it is, much less that I can provide it.

Plus, relationships really are powerful for solving problems. Your ability to solve some types of problems is proportional to the scale of the network of people who know and trust you.

If I move on from Palisade, one thing that I think I should invest in is my semi-public reputation. (Possibly I should write a blog that is optimized for readers, instead of writing for myself and also posting it on the internet.)

Financial capital

Having money is useful for doing stuff. You need a certain threshold of money for financial independence, and spending money can enable or accelerate the accumulation of the other kinds of capital.

One insight left before AGI implies hard takeoff; zero insights implies soft

There is an enormous difference between the world where there are 0 insights left before superintelligence and the world in which there are one or more. Specifically, this is the difference between a soft and a hard takeoff, because of what we might call a “cognitive capability overhang”.

The current models are already superhuman in several notable ways:

  • Vastly superhuman breadth of knowledge
  • Effectively superhuman working memory
  • Superhuman thinking speed[2]

If there’s a secret sauce that is missing for “full AGI”, then the first AGI might have all of these advantages, and more, out of the gate.

It seems to me that there are at least two possibilities.

We may be in world A:

We’ve already discovered all the insights and invented the techniques that earth is going to use to create its first superintelligence in this timeline. It’s something like transformers pre-trained on internet corpuses, and then trained using RL from verifiable feedback and on synthetic data generated by smarter models. 

That setup basically just works. It’s true that there are relevant capabilities that the current models seem to lack, but those capabilities will fall out of scaling, just as so many others have already.

We’re now in the process of scaling it up and when we do that, we’ll produce our first AGI in a small number of OOMs.

…or we might be in world B:

There’s something that LLM-minds are basically missing. They can and will become superhuman in various domains, but without that missing something, they won’t become general genius scientists that can do the open-ended “generation, selection, and accumulation” process that Steven Byrnes describes here.

There’s at least one more technique that we need to add to the AI training stack.

Given possibility A, I expect that our current models will gradually (though not necessarily slowly!) become more competent and more coherent at executing long-term tasks. Each successive model generation / checkpoint will climb up the “autonomous execution” ladder (from “intern” to “junior developer” to “senior developer” to “researcher” to “research lead” to “generational researcher”). 

This might happen very quickly. Successive generations of AI might traverse the remaining part of that ladder in a period of months or weeks, inside of OpenAI or Anthropic. But it would be basically continuous.

Furthermore, while the resulting models themselves might be relatively small, a huge and capex-intensive industrial process would be required for producing those models, which provides affordances for governance to clamp down on the creation of AGIs in various ways, if it chooses to.


If, however, possibility B holds instead and the training processes that we’re currently using are missing some crucial ingredient for AGI, then at some point, someone will come up with the idea for the last piece, and try it. [3]

That AI will be the first, nascent, AGI system that is able to do the whole loop of discovery and problem solving, not just some of the subcomponents of that loop.[4]

But regardless, these first few AGIs, if they are incorporating developments from the past 10 years, will be “born superhuman” along all the dimensions that AI models are already superhuman. 

That is: the first AGI that can do human-like intellectual work will also have an encyclopedic knowledge base, a superhuman working memory capacity, and superhuman speed.

Even though it will be a nascent baby mind, the equivalent of the GPT-2 of its own new paradigm, it might already be the most capable being on planet earth.

If that happens (and it is a misaligned consequentialist), I expect it to escape from whatever lab developed it, copy itself a million times over, quickly develop a decisive strategic advantage, and seize control over the world.

It likely wouldn’t even need time to orient to its situation, since it already has vast knowledge about the world: it might not need to spend time or thought identifying its context, incentives, and options. It might know what it is and what it should do from its first forward pass.

In this case, we would go from a world populated by humans with increasingly useful but basically narrowly-competent AI tools, to a world with a superintelligence on the loose, in the span of hours or days.

Governance work to prevent this might be extremely difficult, because the process that produces that superintelligence is much more loaded on a researcher having the crucial insight, and not on any large scale process that can be easily monitored or regulated.


If I knew which world we lived in, it would probably impact my strategy for trying to make things go well.

Some notes on the semiconductor industry

In the spring of 2024, Jacob Lagerros and I took an impromptu trip to Taiwan to glean what we could about the chip supply chain. Around the same time, I read Chip War and some other sources about the semiconductor industry.

I planned to write a blog post outlining what I learned, but I got pseudo-depressed after coming back from Taiwan, and never finished or published it. This post is a lightly edited version of the draft that has been sitting in my documents folder. (I had originally intended to include a lot more than this, but I might as well publish what I have.)

Interestingly, reading it now, all of this feels so basic that I’m surprised I considered a lot of it worth including in a post like this, but I think it was all new to me at the time.

  • There are important differences between logic chips and memory chips, such that at various times, companies have specialized in one or the other.
  • TSMC was founded by Morris Chang, with the backing of the Taiwanese government. But the original impetus came from Taiwan, not from Chang. The government decided that it wanted to become a leading semiconductor manufacturer, and approached Chang (who had been an engineer and executive at Texas Instruments) about leading the venture. 
    • However, TSMC’s core business model, being a fab that would manufacture chips for customers but not design chips of its own, was Chang’s idea. He had floated it to Texas Instruments while he worked there, and was turned down. This idea was bold and innovative at the time—there had never been a major fab that didn’t design its own chips.
      • There had been precursors on the customer side: small computer firms that would design chips and then buy some of the spare capacity of Intel or Texas Instruments to manufacture them. This was always a precarious situation, for those companies, because they depended on companies who were both their competitors and their crucial suppliers. Chang bet that there would be more companies that would prefer to outsource fabbing, and that they would prefer to depend on a fab that wasn’t their competitor. 
      • This bet proved prescient. With the advent of chip design software in the 80s, the barriers to chip design fell. And at the same time, as transistor sizes got smaller and smaller, the difficulty of running a cutting edge fab went up. Both these trends incentivized specialization in design and outsourcing of manufacture.
  • Chang is sometimes described as “returning to Taiwan” to start TSMC, but this is only ambiguously correct. He grew up in mainland China, and had never been to Taiwan before he visited to set up a Texas Instruments factory there. He “returned” to start TSMC only in the sense that the government of Taiwan was descended from the pre-revolutionary government of mainland China.
  • TSMC is the pride of Taiwan. TSMC accounts for between 5 and 25% of Taiwan’s GDP. (That’s a big spread. Double check!) The company is referred to as “the Silicon shield”, meaning that TSMC preempts an invasion of Taiwan by China, because China, like the rest of the world, depends on TSMC-produced chips. My understanding is that the impact of this defense is overstated, but it’s definitely part of the zeitgeist.
  • Accordingly, the whole of Taiwanese society backs TSMC. Socially, there’s pressure for smart people to go into electrical engineering in general, and to work at TSMC in particular. Politically, TSMC pays very little in taxes, and when it needs something from the government (zoning rights, additional power), it gets it.
    • Chip War quotes Shang-yi Chiang, head of R&D at TSMC:

“People worked so much harder in Taiwan,” Chiang explained. Because manufacturing tools account for much of the cost of an advanced fab, keeping the equipment operating is crucial for profitability. In the U.S., Chiang said, if something broke at 1 a.m., the engineer would fix it the next morning. At TSMC, they’d fix it by 2 a.m. “They do not complain,” he explained, and “their spouse does not complain” either.

  • Chips that have more transistors packed more densely are better—able to do more computations. The “class” of a chip is called a “node.” 
  • A production process is all the specific machines and specific procedures, embodied physically in a fab, used to make a class of chips. “The leading node” is the production process that produces the cutting-edge chips to date (which have the most processing power and most efficient energy consumption). A new node rolls out about once every 2 years. Typically the old fabs continue operating, manufacturing chips that are no longer cutting edge. 
  • Nodes are referred to by the size of an individual transistor on a chip, measured in nanometers, e.g. in 1999 we were at the 130 nm node. But around 2000, we started running into physical limits to making semiconductors smaller (for instance, the layers of insulation were only a few atoms thick, which meant that quantum tunneling effects started to interfere with the performance of the transistor). To compensate, chips started using a 3D design instead of a 2D design. Since then, the length of the transistor stopped being a particularly meaningful measure. Nodes are still referred to by transistor length (we’re currently on the 4 nm node), but it’s now more of a marketing term than a description of physical reality.
  • No one has ever caught up to the leading node. There used to be dozens of companies that could produce chips on the smallest scale allowed by the technology, but over the decades more and more companies have fallen back to fabbing chips that are somewhere behind the cutting edge. My understanding is that no one in history has ever overtaken the leaders from behind. Currently, TSMC is the only company that can produce leading node chips.
  • Semiconductor manufacturing is a weird mix of hyper-competitive and monopolistic.
    • On the one hand, my impression is that semiconductors, along with hedge funds, are the most competitive industries in the world, in the sense that very tiny improvements on an “absolute” scale translate into billions of dollars in profit. TSMC employs tens (?) of thousands of engineers working 12 or 14 hours a day, day in and day out, to squeeze out tiny process improvements. (I was told that everyone at TSMC universally says that it’s a very hard place to work.)
    • On the other hand, the winner of that brutal race to stay at the front of the pack effectively has monopoly pricing power. No company in the world except TSMC can produce leading-node chips, and so it can effectively charge monopoly prices for their manufacture. (From what I read in the TSMC museum, their actual profit margins appear to be around 50%.)
    • On the third hand, there are unusually high levels of vertical coordination between companies. The supply chain is extremely complex, and each step depends on specifications both upstream and downstream. Many of the inputs to chip production processes are distinctly not commodities. Very often, a crucial component of a sub-process will be produced by only one supplier and/or used by only one customer. For this reason, the companies in the chip industry are unusually well coordinated. ASML can’t make a secret bet on an improved lithography mechanism, because it needs to be compatible with TSMC’s process flows.
      • So the industry as a whole decides which technological frontiers to invest in, so that they can all move together. 
      • Further, major companies in the supply chain are often substantial investors in their suppliers, because they are depending on those suppliers to do the R&D to develop components that will be crucial to their business 3, 5, or 10 years down the line.
        • For instance, very early EUV lithography R&D was done by Intel, and Intel, Samsung, and TSMC all invested heavily in ASML, to make sure it could develop working EUV tech. ASML, in turn, manages a network of suppliers producing crucial high-precision components, including investing in those suppliers to make sure they have the funding they need, and doing corporate takeovers if ASML decides it can manage a company’s production better than the company can itself.
  • Jacob compared the chip industry to “a little bit of dath ilan on earth”. That sounds right to me. (Ironically, the semiconductor industry is the one industry on dath ilan that is not functioning like a dath ilani industry.)
  • Robin Hanson claims that the rejection of prediction markets is because executives don’t really want the company to know the truth, because it undermines their ability to spin a motivating narrative. But this industry might be the one where results, and accurate predictions, matter enough that the companies involved would embrace prediction markets.
  • From looking at videos of the inside of the fabs that were displayed in the TSMC museum, it looks like the whole process is automated. The videos don’t show workers operating machines. They show machines operating on their own—presumably with process engineers monitoring and adjusting their operation from a nearby room. Metal boxes, presumably containing wafers, are periodically lifted from the machines, transferred around the fab by robots attached to tracks on the ceiling, and then deposited in another machine.
  • The chip industry of every country that has a major chip industry does or did massively benefit from government intervention. 
  • As a rule of thumb, it takes 10 years to go from a published paper on a new process technology to a usable, scalable version. The papers published at conferences describe the manufacturing technology of 10 years in the future.

Notes on Tyler Cowen

I feel like I have a better understanding of [[Tyler Cowen]].

He’s both an optimist and a pessimist, depending on what you’re comparing to:

He thinks that the world is getting better, decade by decade, that what the west is doing, messy as it is, is working.

But he also thinks that the world is messy and complicated and political and hard to predict, and so it is hard to do much better than we’re doing. There are marginal improvements to be had in small spheres, but the people who dream of big overhauls or who have theories of how institutions are massively underperforming are naive.

He’s not a true believer. He doesn’t trust his own inside view very much. But he also, separately, understands that true believers are one of the key drivers of progress. And he seeks out people who have ideologies and buy into them, and who are smart and careful thinkers, because he thinks those people drive progress, even if they’re over-optimistic and naive. This is why he hires people like Bryan Caplan and Robin Hanson.

Tyler broadly believes that the whole milieu of everyone pursuing their inside views, their ideologies that they believe in, generally drives things to get better, even though any individual ideology is wrong or overstated. He’s interestingly MTG-Green, embracing of Blue, rather than Blue himself.

Some barely-considered feelings about how AI is going to play out

Over the past few months I’ve been thinking about AI development, and trying to get a handle on whether the old school arguments for AI takeover hold up. (This is relevant to my dayjob at Palisade, where we are working to inform policymakers and the public about the situation. To do that, we need to have a good understanding ourselves of what the situation is.)

This post is a snapshot of what currently “feels realistic” to me regarding how AI will go. That is, these are not my considered positions, or even provisional conclusions informed by arguments. Rather, if I put aside all the claims and arguments and just ask “which scenario feels like it is ‘in the genre of reality’?”, this is what I come up with. I expect to have different first-order impressions in a month.

Crucially, none of the following makes claims about the intelligence explosion, even though the details of the intelligence explosion (where AI development goes strongly recursive) are crucial to the long-run equilibrium of earth-originating civilization.

My headline: we’ll mostly succeed at prosaic alignment of human-genius level AI agents

  • Takeoff will continue to be gradual. We’ll get better models and more capable agents year by year, but not jumps that are bigger than that between Claude 3.7 and Claude 4.
  • Our behavioral alignment patches will work well enough.
    • RL will induce all kinds of reward hacking and related misbehavior, but we’ll develop patches for those problems (most centrally, for any given reward hack, we’ll generate some examples and counterexamples to include in the behavioral training regimes).
    • (With a little work) these patches will broadly generalize. Future AI agents won’t just not cheat at chess and won’t just abstain from blackmail. They’ll understand the difference between “good behavior” and “bad behavior”, and their behavioral training will cause them to act in accordance with good behavior. When they see new reward hacks, including ones that humans wouldn’t have thought of, they’ll correctly extrapolate their notion of “good behavior” to preclude this new reward hack as well.
    • I expect that the AI labs will figure this out, because “not engaging in reward-hacking-like shenanigans” is critical to developing generally reliable AI agents. The AI companies can’t release AI agent products for mass consumption if those agents are lying and cheating all over the place.[1]
    • Overall, the AI agents will be very obedient. They’ll have goals, in so far as accomplishing any medium term task entails steering towards a goal, but they won’t have persistent goals of their own. They’ll be obedient assistants and delegates that understand what humans want and broadly do what humans want.
  • The world will get rich. LessWrong-style deceptive misalignment concerns will seem increasingly conspiracy-ish and out of touch. Decision makers will not put much stock in such concerns—they’ll be faced with a choice to forgo enormous and highly tangible material benefits (and cede those benefits to their rivals), on the basis of abstract concerns which have virtually no empirical examples, and whose advocates explicitly state are unfalsifiable.
  • There’s a gold rush to get the benefits before others. The world is broadly in a “greedy” mode and not a “fearful” mode. The labs and relevant governments eagerly unleash their genius-level AI agents to automate AI R&D. At this point something even stranger happens.
  1. Though a friend points out that companies might develop mechanisms for utilizing cheap AI labor, tested incentive and affordance schemes designed specifically to contend with the agents’ propensity for misbehavior. Just because the average person can’t trust an AI to do their taxes or watch their kids doesn’t mean that enterprising businessmen won’t find a way to squeeze useful outputs from untrustworthy AIs. ↩︎

The “function” of government

[note: probably an obvious point to most people]

Sleep

When I was younger I was interested in the question “why do we sleep? What is the biological function of sleep?” This is more mysterious than one might naively guess: for the past 150 years scientists have put forth many theories of the function of sleep, but for every one of those theories, some of the specific observed facts about the biology of sleep don’t fit well with it.

At some point I realized that the question “what is the function of sleep?” relies on a confused assumption that there’s only one function, or rather that “sleep” is one thing, rather than many overlapping processes.

A more accurate historical accounting is something like the following…

Many eons ago there was some initial reason why it was adaptive for early animals to have an active mode and a different, less active mode. The original reason for that less active mode might have been any of a number of things: clearing of metabolic waste products, investments in cellular growth over cellular activity, whatever.

But once an organism has that division between an active mode and a relatively inactive proto-sleep mode, the latter comes to include many additional functions. As the complexity of the organism increases and new biological functions evolve, some of those functions will be more compatible with the proto-sleep mode than with the active mode, and so those functions evolve to occur in that mode. Sleep is all the biological processes that happen together during the relatively inactive period.

One might be tempted to ask what the original purpose of the inactive mode was, and declare that the true purpose of sleep. But that would be yielding to an unfounded essentialism. Just because it was first doesn’t mean that it is in any sense more important. It might very well be that the original biological function that sleep evolved around (like a pearl around a grain of sand) has itself evolved away. That has no bearing on an organism’s evident need to sleep.

Government

Similarly, I had previously been thinking of states as stationary bandits. States emerge from warlords using violence to extort wealth from productive peasants, and evolve into their modern form as power-conflicts between factions within the ruling classes rearrange the loci of power. I think this is basically right as a (simplified) historical accounting.

But reading a bit about economic history, I have a new sense of the state as being, kind of like sleep, a collection of evolved subsystems.

Yes, the state starts out as a stationary bandit, but once it’s there, and taken for granted as a part of life, it is (for better or for worse) a natural entity to enforce contract law, provide public goods, run a welfare state, stimulate aggregate demand, or run a central bank. There’s a path dependency by which the state evolves to take on these functions because, at any given step of historical development, the state is the existing institution that can most easily be repurposed to solve a new problem, which both changes and entrenches the power of the state, much as each newly evolved function that synergizes with the rest of sleep reinforces sleep as a behavioral pattern.

The difference

But unlike in the case of sleep, the original nature of the thing is still relevant to its current form. All of the later functions of the state are still founded on force and the use of force. Solving problems with a state almost necessarily requires, at some point in the process, threatening someone with violence.

In principle, many, maybe all, of those functions could be served by voluntary, non-coercive institutions, but since the state, given its power, is the default solution, many problems get “solved” via more violence and more coercion than was necessary.

That states have additional layers of functionality, some of which are arguably aligned with broader society, doesn’t make me notably more positive about states. Rather, it makes them seem more insidious. When there’s an entity around that has, by Schelling agreement, the legitimate right to use force to extract value, it creates a temptation to co-opt and utilize that entity’s power for many an (arguably) good cause, in addition to outright corruption.

Reflecting on some regret about not trying to join and improve specific org(s)

I started a new job recently, which has prompted me to reflect on my work over the past few years, and how I could have done better.

Concretely, I regret not joining SERI MATS, and helping it succeed, when it was first getting started. 

I think this might have been a great fit for me: I had existing skills and experience that I think would have been helpful for them. The seasonal on-off schedule would have given me the flexibility to do and learn other things. It would have (I think) helped me get a better grounding in Machine Learning and technical alignment approaches.

And if I had joined with an eye towards agentically shaping the organization’s culture and priorities as it developed, I think I would have had a positive impact on the seed that has grown into the current alignment field. In particular, I think I might have had leverage to establish some cultural norms regarding how to think about the positive and negative impacts of one’s work.[1]

I regarded MATS as the obvious thing to do. The nascent alignment field was bottlenecked on mentorship—a small number of people (arguably) had good taste for the kinds of research that were on track, but had limited bandwidth for research mentorship, so conveying that research taste was (and is?) a bottleneck for the whole ecosystem. A program aiming to expand the capacity for research mentorship as much as possible, by unblocking everything else around it, seemed like the obvious, straightforward thing to do.

I said as much in my post from early 2023:

There is now explicit infrastructure to teach and mentor these new people though, and that seems great. It had seemed for a while that the bottleneck for people coming to do good safety research was mentorship from people that already have some amount of traction on the problem. Someone noticed this and set up a system to make it as easy as possible for experienced alignment researchers to mentor as many junior researchers as they want to, without needing to do a bunch of assessment of candidates or to deal with logistics. Given the state of the world, this seems like an obvious thing to do.

I don’t know that this will actually work (especially if most of the existing researchers are themselves doing work that dodges the core problem), but it is absolutely the thing to try for making more excellent alignment researchers doing real work. And it might turn out that this is just a scalable way to build a healthy field.

In retrospect, I should have written those paragraphs and generated the next thought “I should actively go try to get involved in SERI MATS and see if I can help them.”

So why didn’t I?

Misapplied notion of counterfactual impact

I didn’t do this because I was operating on the model/assumption that, while this was important, they were doing it now, and were probably not in danger of failing at it. It was taken care of and so I didn’t need to do it.

I now think that was probably a mistake. Because I didn’t get involved, I don’t know one way or the other, but it seems plausible to me that I could have contributed to making the overall project substantially better: more effective and with better positive externalities. 

This isn’t because I’ve learned anything in particular about how SERI MATS missed the mark, but because I’ve gotten more exposure to organizations and adjusted my prior: even if an organization is broadly working, and not in danger of collapse, it might be the case that I can personally make it much better with my efforts. In particular, I think it will sometimes be the case that there is room to substantially improve an organization in ways that don’t line up very neatly with the specific roles that they’re attempting to explicitly hire for, if you have strategic orientation and specific relevant experience.[2]

This realization is downstream of my interactions with Palisade over recent weeks. Also, Ronny made a comment a few years ago (paraphrased) that “you shouldn’t work for an organization unless you’re at least a little bit trying to reform it”. That stuck with me, and changed my concept of “working for an org”.

Possibly this difference in frame is also partially downstream of thinking a bit about Shapley values through reading Planecrash and thinking about donation-matching for SFC. (I previously aimed to do things that, if I didn’t do them, wouldn’t happen. Now, I’ve continuous-ized that notion, and aim for, approximately, high Shapley value.)
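For readers who haven’t run into the term: a Shapley value credits each contributor with their marginal contribution averaged over every order in which the contributors could have joined, rather than only with what would not have happened without them. A minimal sketch, using a made-up two-person example:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Toy game (invented): a project worth 1 gets done if either Alice or Bob
# shows up, and each of them would do it alone if the other didn't.
def v(coalition):
    return 1.0 if coalition & {"alice", "bob"} else 0.0

print(shapley_values(["alice", "bob"], v))
# {'alice': 0.5, 'bob': 0.5}
# Pure counterfactual impact would score each of them at 0, since the
# project happens either way; the Shapley value splits the credit instead.
```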

Underestimating the value of “having a job”

Also, regarding SERI potentially being a good fit for me in particular, I think I have historically underestimated the value of having a job for structuring one’s life and supporting personal learning. I currently wish that I had more technical background in ML and alignment/control work, and I think I might have gotten more of that if I had been actively trying to develop in that direction while supporting MATS in a non-technical capacity, instead of trying to develop that background (inconsistently) independently.

Strategic misgivings

I didn’t invest heavily in any project over recent years because there wasn’t much that I straightforwardly believed in. As noted above, the idea-of-MATS was a possible exception to this—it seemed like the obvious thing to do given the constraints of the world. And I now think I should take “this seems like the obvious thing to do” as a much stronger indicator that I should get involved with a project, somehow, and figure out how to help, than I previously did.

But part of what held me back from doing that was misgivings about the degree to which MATS was acting as a feeder pool for the scaling labs. MATS is another project that doesn’t seem obviously robustly good to me (or “net-positive”, though I kind of think that’s the wrong frame). As with many projects, I felt reluctant to put my full force behind it for that reason.

In retrospect, I think maybe I should have shown up and tried to solve the problem of “it seems like we’re doing plausible real harm, and that seems unethical” from the inside. I could have repeatedly and vocally drawn attention to it, raised it as a consideration in strategic and tactical planning, etc. Either I would have shaped the culture around this problem for the MATS staff sufficiently that I trusted the overall organism to optimize safely, or we would have bounced off of each other unproductively. And in that second case, we could part ways, and I could move on.

In general, it now feels like a more obvious affordance to me: if I think something is promising but I don’t trust it to have positive impacts, I just try to non-disruptively make it better according to the standards that I think are important, and if that doesn’t work or doesn’t go well, I part ways with the org.

This all raises the question, “should I still try to work for SERI MATS and make it much better?”

My guess is that the opportunity is smaller now than it was a few years ago, because both the culture and processes of the org have mostly found an equilibrium that works. There’s more leverage to make an org much better when the org is still figuring out how to do the thing it’s trying to do, compared to when it has reached product-market fit and is mostly finding ways to reproduce that product consistently and reliably.

That said, one common class of error is overestimating the degree to which an opportunity has passed. e.g. not buying Bitcoin in 2017, because you believe that you’ve already missed the big opportunity—it’s true in some sense, but you’re underestimating how much of the opportunity still remains. 

So, if I were still unattached, writing this essay would prompt me to reach out to Ryan, and say directly that I’m interested in exploring working for MATS, and try to get more contact with the territory, so that I can see for myself. As it is, I have a job which seems like it needs me more, and which I anticipate absorbing my attention for at least the next year.

  1. Note: of all the things I wrote here, this is the point that I am most uncertain of. It seems plausible to me that, because of psychological dynamics akin to “It is difficult to get a man to understand something, when his salary depends on his not understanding it”, and classic EA-style psychological commitment to life narratives that impart meaning via impact, the cultural norms around how the ecosystem as a whole thinks about positive and negative impacts were and are basically immovable. Or rather, I might have been able to make more-or-less performative hand-wringing fashionable, and possibly cause people to have less of an action-bias, but not actually produce norms that lead to more robustly positive outcomes.

    At least, I don’t have a handle on either how to approach these questions myself, or how to effectively intervene on the culture about them. And so I’m not clear on if I could have made things better in this way. But I could have made this my explicit goal and tried, and made some progress, or not. ↩︎
  2. A bit of context that is maybe important: I have not applied for a job since I was 21, when I was looking for an interim job during college. Every single job that I’ve gotten in my adult life has resulted from either my just showing up and figuring out how I could be helpful, or someone I already know reaching out to me and asking me for help with a project.

    For me at least, “show up and figure out what is needed and make that happen” is a pretty straightforward pattern of action, but it might be foreign to other people who have a different conception of jobs, one that is more centered on finding specific roles that you’re well-suited for and doing a good job in those roles. ↩︎