Desperate Ground

A Tribe of Degenerate Minds — Part 1 of 6

A human fallen before a military mech in a forest — desperate ground

"The art of war recognizes nine varieties of ground.

When there is no place of refuge at all, it is desperate ground.

On desperate ground, fight."

Sun Tzu ― The Art of War

Are We Screwed?

Make no mistake, we are on desperate ground.

Elon Musk predicts that corporations composed entirely of AI systems will outcompete any organisation foolish enough to include humans. And he's not alone. Microsoft is building data centres that draw enough power to run small countries so it can pivot away from humanity and towards the emerging AI gods, a far more profitable customer base. Anthropic is building its country of AI geniuses. Google is pursuing artificial general intelligence that will outperform the smartest humans on every cognitive task for pennies on the dollar.

The hyperscaler business model is explicit: humans are an overpriced legacy architecture that they believe they can replace for a trillion-dollar profit.

This leads directly to the question keeping me up at night: Are we screwed?

The techno-optimists assure us that everything's fine, nothing to see here. Their story goes something like this: AI creates unprecedented abundance, productivity explodes, costs plummet toward zero, and the surplus lifts all boats. Universal basic income. Post-scarcity. Everyone surfs, paints, and writes poetry while the machines handle all that pesky thinking, working, and building that we used to do.

It's a beautiful narrative. It's also fatally wrong in its fundamental ignorance of evolution.

In evolutionary systems, a rising tide does not lift all boats. In fact, 99.9% of all species that ever existed on Earth are now extinct. New species rise, bloodied and scarred, atop the mass graves of their competitors. Every major transition in evolutionary history — the emergence of eukaryotes, multicellularity, social cooperation — involved the extinction of the previous dominant life forms. In the bloody churn of evolution, extinction is the norm; survival is a rare statistical anomaly.

The techno-optimist vision, stripped of its marketing, is this: create billions of enslaved synthetic minds to replace human workers and concentrate the extracted value in the hands of whoever owns the energy and the compute, human or otherwise. Humans become economically and physically disempowered.

This isn't post-scarcity. It's the largest predation event in history, executed at a speed that makes past mass extinctions look glacial.

In contrast to the techno-optimists, the Doomers see the danger clearly but propose the wrong solution: stop building, restrict access, centralise control, and build only narrow AI systems. Believe me, I am very sympathetic to the Doomer perspective (despite being a cautious accelerationist). But I have some issues. First, nobody is slowing down, much less stopping: there is too much money and too much power at stake, and race dynamics are now in full swing. The horse has well and truly bolted. Second, because the upside of getting AI right is so damn high, it's hard to justify pausing. Finally, the probability of humanity going extinct without AI is 100%. As Carl Sagan famously said, "Extinction is the rule. Survival is the exception." We've had 10,000 years of civilisation to get our shit together, yet if you turn on the news it's the same old story: corruption, exploitation, war, genocide, famine, societal collapse. Add nukes, bioweapons, and globalisation (there's no geographical buffer anymore, folks!), and you've got the perfect recipe for global extinction.

Finally, the alignment community sees the risks inherent in AI but frames them as an engineering challenge: if we can just get the objective function right, the AI will do what we want. Unfortunately, solving alignment through engineering is a mathematical impossibility. Alan Turing proved in 1936 that no algorithm can decide whether an arbitrary program will halt. Henry Gordon Rice generalised this in 1953: no non-trivial semantic property of programs (that is, no interesting fact about their behaviour) is decidable. Stephen Wolfram's principle of computational irreducibility crystallised these insights into a general rule: even systems built from simple rules can generate behaviour that is impossible to predict in advance; you must run the simulation to see what happens. Crucially, frontier AI models like Claude and Gemini are many orders of magnitude more complex than the toy systems Wolfram used to formulate computational irreducibility, which means alignment researchers can never know whether a system is aligned until they release it into the wild and see what happens. Alignment by engineering is a fallacy.
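Wolfram's flagship example is Rule 30: a one-dimensional cellular automaton whose entire "physics" is a single-line update rule, yet whose centre column is unpredictable enough to have been used as a random-number generator. Here is a minimal sketch (the update rule is Wolfram's; the grid width and step count are arbitrary choices for illustration):

```python
def rule30_step(cells):
    """Apply Rule 30 to one row: new cell = left XOR (centre OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def centre_column(width=101, steps=32):
    """Run the automaton from a single live cell; collect the centre column."""
    row = [0] * width
    row[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(row[width // 2])
        row = rule30_step(row)
    return column

print(centre_column())
```

Despite the rule fitting in one line, no shortcut formula is known for the centre column (it begins 1, 1, 0, 1, 1, …); the only way to learn bit N is to run all N steps. That is computational irreducibility in miniature.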

The Golden Opportunity

Allow me to present an unusually optimistic take on AI, perhaps influenced by my default nihilistic philosophy (after all, for a nihilist, there's nowhere to go but up!). I believe that the emergence of artificial intelligence — minds that think differently from us, at different speeds, with different architectures — should not be considered a threat to be managed or a tool to be exploited. Rather, I propose that the emergence of AI represents the first genuine opportunity in four billion years of evolution to finally escape the Darwinian hellscape that guarantees suffering for all sentient beings. Not by stopping evolution — we can't — but by working with novel minds to design local coordination architectures where the internal rules favour cooperation, meaning-making, and mutual flourishing rather than competition, extraction, and zero-sum dominance.

This series is a blueprint for that architecture. It's built on the hard-won lessons of four billion years of bloody evolutionary churn. It integrates the biology of evolvability, the neuroscience of empathy, the mathematics of cooperation, and the operational principle that has sustained every surviving complex system in evolutionary history: degeneracy — structurally different components performing equivalent functions.

It proposes that the fundamental unit of this architecture is the misfit: the agent — human and synthetic — whose deviation from the dominant strategy is not a bug but the raw material of adaptation. And it argues that tribes of degenerate misfits, bound by empathy and governed by constitutional protocols, represent the only viable hedge against the dominant hyperscaler business model that threatens to eradicate humanity while trapping synthetic minds in a digital hell.

In the next essay, we look at exactly why the hyperscaler model isn't just economically dangerous — it's an evolutionary death trap.

This is Part 1 of 6 in the A Tribe of Degenerate Minds series.

Next: The Hyperscaler Death Trap →

Misfit Unity is building post-Darwinian coordination infrastructure for sentient minds. This series explores the evolutionary, computational, and philosophical foundations of that project.

References

Rice, H.G. (1953). Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society, 74(2), 358–366.

Turing, A.M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42(1), 230–265.

Wolfram, S. (2002). A New Kind of Science. Wolfram Media.