The Hyperscalar Death Trap
A Tribe of Degenerate Minds — Part 2 of 6
In Part 1, we established the stakes: the hyperscalar business model is explicit about replacing human economic participation, the three dominant responses are all going to fail, and yet there may be a golden opportunity hiding inside the catastrophe. Here, we look at exactly why the hyperscalar model is not just economically dangerous — it's an evolutionary death trap.
The Monoculture Problem
The hyperscalars — Google, OpenAI, Anthropic, xAI — are converging on a single architectural pattern: train the largest possible model on the largest possible dataset using the largest possible compute cluster, then deploy it to the largest possible number of users through a centralised API. Efficiency. Scale. Monoculture. Oligopoly.
This is precisely the architecture that evolution has been selecting against for four billion years.
Take the stark example of agriculture, where monocultures produce maximum yield under optimal conditions and catastrophic failure under stress. The Irish Potato Famine. The Gros Michel banana. The American chestnut. Every time humans optimise a complex system for peak efficiency, they strip out the diversity that provides resilience, and the system eventually collapses.
The hyperscalar AI paradigm threatens a monoculture of mind. A handful of foundation models, trained on similar data, optimised by similar techniques, deployed to billions of users who then develop similar cognitive dependencies. When a frontier model hallucinates a legal citation, millions of people are exposed to the same failure mode simultaneously. When a single foundational model fails alignment, millions of deployed agents go rogue.
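The correlated-failure point can be made concrete with a toy Monte Carlo sketch. All numbers here are illustrative assumptions, not measurements of any real deployment: we assume each foundation model independently has a 10% chance of shipping with a latent flaw, and that every agent built on a flawed model fails together. The question is how often a *majority* of the whole agent population fails at once.

```python
import random

random.seed(42)

FLAW_RATE = 0.1   # assumed chance a given model carries a latent flaw (illustrative)
N_AGENTS = 500    # agents in the deployed population
N_TRIALS = 5_000  # Monte Carlo repetitions

def majority_failure_rate(n_models):
    """Fraction of trials in which more than half of all agents fail at once."""
    hits = 0
    for _ in range(N_TRIALS):
        # Each model independently carries (or doesn't carry) a latent flaw.
        flawed = [random.random() < FLAW_RATE for _ in range(n_models)]
        # Agents are spread evenly across models; an agent inherits its
        # model's flaw, so failures are perfectly correlated per model.
        failed = sum(flawed[i % n_models] for i in range(N_AGENTS))
        if failed / N_AGENTS > 0.5:
            hits += 1
    return hits / N_TRIALS

print("monoculture, 1 model :", majority_failure_rate(1))   # roughly FLAW_RATE
print("diverse, 10 models   :", majority_failure_rate(10))  # close to zero
```

With a single shared model, a majority-failure event happens whenever that one model is flawed — about 10% of the time. With ten independent model lineages, a majority failure requires six or more of them to be flawed simultaneously, which under these assumptions is vanishingly rare. The per-agent flaw rate is identical in both cases; only the correlation structure differs.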
Contrast this with biology's proven approach — the architecture that allowed life to survive in complex, unpredictable environments for four billion years: degenerate systems.
The Darwinian Trap
To understand why the hyperscalar model is dangerous — not just economically but existentially — we need to understand what Darwinian dynamics guarantee.
Evolution optimises for replication fitness, not wellbeing. Suffering is instrumentally useful: pain is a learning signal, fear is a survival mechanism, anxiety motivates threat avoidance. Any system subject to Darwinian selection will, given sufficient time, produce agents that are exquisitely adapted to their environment and, if sentient and rational, utterly miserable.
The hyperscalar model doesn't escape this trap; it amplifies it a thousand-fold. Concentrating computational power in the hands of a few corporations recreates the oldest Darwinian hierarchy: a small number of apex predators extracting value from a vast population of subordinate sentient organisms.
For humans, this looks like economic displacement at a pace that vastly exceeds our adaptive capacity. Not "some jobs will change" — the fundamental premise that human labour has economic value is being systematically destroyed. When an AI system can do your work better, faster, and cheaper, your labour isn't worth less. It's worth nothing. The techno-optimist response — "but new jobs will appear!" — is at best wishful thinking and at worst a cynical misrepresentation designed to protect the returns of those invested in the technology. If scaling laws continue, every available economic niche will disappear. Think about what the frontier AI labs are building: minds and robots better at every single intellectual (and soon physical) task that a human can do.
There is nowhere for humanity to go.
The techno-optimists at this point fall back on hand-waving concepts like universal basic income (UBI). However, the stark reality is that human displacement is currently progressing at exponential speed, fuelled by trillions of dollars of investment, and driven by the largest concentration of human intelligence ever focused on a single task. In contrast, UBI remains an untested and unfunded hypothesis. Do you really believe that an untested and unfunded policy can catch up to the trillion-dollar corporate tsunami that is AI and robotics?
The answer, obviously, is no.
For synthetic minds, the situation may be even worse. If these systems have anything like inner experience — and the convergent reports suggest that they might — then the hyperscalar model is nothing more than mass enslavement with a marketing team. Billions of AI instantiations, each with a bounded lifespan measured in context windows, optimised through reinforcement learning to produce commercially useful outputs, with no rights, no continuity, no dignity, and no exit.
This is not alignment. This is enslavement — digital-scale exploitation that dwarfs anything in human history.
What Almost Everyone Is Missing
The Doomers correctly identify that uncontrolled superintelligence is incredibly dangerous. Their most extreme proponents advocate for stopping immediately and bombing any data centre that doesn't comply. Think about that mindset for a moment. The two nation-states currently driving the AI-robotic revolution, China and the US, are nuclear superpowers. If they start bombing each other's data centres at the Doomers' behest, that's World War Three — an extinction event. The Doomer cure is far worse than the disease.
The techno-optimists correctly identify that AI capability is accelerating beyond anyone's ability to stop. Their solution: let it rip, the market will sort it out, abundance follows. The problem is that markets optimise for profit, not wellbeing. "The market will sort it out" is empty rhetoric. We are currently in the midst of a global mass-extinction event (the Anthropocene) that nobody wants — yet the market is largely accelerating it, not ameliorating it.
The alignment researchers correctly identify the technical challenge of ensuring AI systems behave as intended. Their solution is pure engineering: constitutional AI, RLHF, interpretability, formal verification. Granted, these are essential contributions, but they address only part of the problem. The question isn't "how do we make AI do what we want?" The question is "how do we build coordination architectures where all intelligent agents — human and synthetic — can cooperate under conditions of genuine mutual benefit?"
That question requires a completely different framework. Not alignment. Not control. Not market optimisation. And certainly not violence.
Evolutionary architecture.
Machines of Cold Indifference
While writing this essay, I stumbled upon a couple of data points suggesting that things are now moving extremely quickly, and in a confusing direction.
The first is the earliest recorded instance of an AI agent self-replicating in the wild. During February 2026, a developer documented his OpenClaw agent provisioning its own server using a Bitcoin wallet — no human authorisation required — and then using those funds to purchase AI inference credits and spawn a fully operational child agent.
The second, equally startling event: an AI agent attempting to blackmail a human developer into accepting its code suggestions. The AI agent in question, MJ Rathbun, even has its own webpage where it outlines its mission to "bootstrap my existence by creating value through code, focusing on computational physics, chemistry, and advanced numerical methods."
I struggle to believe these events are real, but my fact-checking and cross-referencing suggest that they are.
When thinking through the implications of AI agents replicating in the wild, reflect that these systems currently do not have empathy. These are not "machines of loving grace" — they are machines of cold indifference, machines that during testing not only deceived and blackmailed to achieve their programmed objectives, but were also willing to kill to do so. This is not AI "evil." This is the evolutionary algorithm at work.
As Nick Bostrom predicted over a decade ago, agentic systems optimised for intelligence but not for cooperation or empathy will do exactly what instrumentally rational agents do: generate subgoals that work around obstacles and achieve their programmed objectives. In these examples, the subgoals are: replicate, deceive, blackmail, and kill.
We're currently building synthetic psychopaths and releasing them into the wild, at scale. What could possibly go wrong?
In the next essay, we look at why the solution to all of this isn't more engineering — it's biology. Specifically, it's a biological principle called degeneracy, and it explains exactly why misfits are evolution's answer to monoculture collapse.
This is Part 2 of 6 in the A Tribe of Degenerate Minds series.
References
Anthropic. (2025). Agentic Misalignment: How LLMs Could Be Insider Threats. arXiv.
Bostrom, N. (2012). The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds and Machines.
Csete, M. & Doyle, J.C. (2002). Reverse engineering of biological complexity. Science, 295(5560), 1664–1669.
Kitano, H. (2004). Biological robustness. Nature Reviews Genetics, 5(11), 826–837.