Dean Ball writes, in his post subtitled “why I am not a doomer”, that the blocker to AI takeover is computational irreducibility: a misaligned AI intent on taking over the world would fail because of the limits of intelligence, namely the intractability of predicting complex systems.
…I am doubtful about the ability of an AI system—no matter how smart—to eradicate or enslave humanity in the ways imagined by the doomers. Note that this is not a claim about alignment or any other technical safeguard, even if a “misaligned” AI system wanted to take over the world and had no developer- or government-imposed, AI-specific safeguards to hinder it, I contend it would still fail. “Taking over the world” involves too many steps that require capital, interfacing with hard-to-predict complex systems (yes, hard to predict even for a superintelligence), ascertaining esoteric and deliberately hidden knowledge (knowledge that cannot be deduced from first principles), and running into too many other systems and procedures with in-built human oversight. It is not any one of these things, but the combination of them, that gives me high confidence that AI existential risk is highly unlikely and thus not worth extreme policy mitigations such as bans on AI development enforced by threats to bomb civilian infrastructure like data centers. “If anyone builds it, everyone dies” is false.
This argument misconstrues what superhuman “intelligence” (or, if one prefers, superhuman “capability”) entails.
Some specific individuals have been world-historically skilled at managing capital, interfacing with hard-to-predict systems, organizing groups to accomplish goals, etc.
Notable examples include Napoleon, who almost conquered Europe (and did succeed in transforming it in various ways), and Elon Musk, who is currently the richest person in the world, briefly had enormous political influence, and has almost singlehandedly outcompeted a whole industry of geopolitical strategic significance.

See also: John D. Rockefeller, Bismarck, and Augustus.
These extraordinary individuals are dramatically more capable of accumulating power and steering the world than most mere mortals. Sure, they all got enormously lucky. But they are also obviously extremely skilled. Very few people could have accomplished what they did, even with unrealistically favorable conditions.
Obviously, those extraordinary individuals had to contend with the real world of computational irreducibility. But that didn’t negate their advantages. Their skill was in dealing with the real world, including all its complexity and unpredictability.
The concern is that machine superintelligences will be skilled in the same ways, but enormously more so.
Despite the impressiveness of these exemplar individuals, they’re certainly not anywhere near the fundamental limits of how capable a being can be at strategy, management, leadership, engineering, organization, propaganda, diplomacy, etc.
As a case in point, I find it hard to believe that in 100 years, it will be possible for a human to be a Fortune 500 CEO. AI corporations—AIs with better judgement than Jeff Bezos or Bill Gates, with encyclopedic knowledge of every field and every recorded case study, that can process every information stream flowing into a company, and make every decision at every level of the company with the full benefit of all that information, all at computer speed—will completely trounce any human who tries to run a company the old-fashioned way, by directing a management hierarchy of humans (or even of AIs) sending emails to each other. Even if the AI corporations have judgement only as good as Jeff Bezos’s, it won’t even be remotely close. They have too many other advantages.
In such a world, the AIs will outcompete humans, and all the capital will accumulate in the hands of AIs—unless the AIs are aligned to human principals, who would then direct the AI-generated surplus to human-selected priorities.
Maybe the AI takeover doesn’t happen instantly. But we have little reason to expect human political structures to remain in place and in power when there are one or more super-Napoleons operating on Earth. The governments of Europe could barely contain the actual Napoleon.