Too Little Too Late: AI-Mother May Avert Annihilation says Geoffrey Hinton, the Godfather of AI

Commentary by Rob

Geoffrey Hinton, the "Godfather of AI," is a shameless hypocrite with his tardy concern about the dangers of artificial intelligence. Why is it only after this goblin is done with his research that he starts to ring the warning bells? Think about it. Scientists are not driven by mere curiosity or a desire to benefit humanity. They have taken up science as their surrogate activity—a way to set, seek, and attain goals to satisfy their basal need for "the power process." He satisfied his, and now he tells everyone else to deal with the incoming wake of economic disruption, and even annihilation, that he predicts. His framing of the issue as one of capitalism is suicidally lazy and misses the real decisive factor in the dynamics of the techno-industrial system: natural selection among the most powerful organizations of the world. It is through this power struggle, which forces organizations to use the most efficient methods to stay on top,[1] that the dominant economic modes are determined and societal dynamics unfold.

Was Hinton thinking about the implications of superintelligence when pioneering the foundations of the neural networks that power large language models (LLMs)? Even in the face of indifference and disregard from his peers, he toiled to produce a seemingly fruitless technique for little gain in immediate status, money, or any sort of humanitarian end. Why? Because solving problems is a surrogate activity for the scientist: the underlying drive of most scientists is to solve problems for their own sake in order to experience the power process—not any humanitarian purpose they may outwardly (or inwardly) profess. The truth is that most scientists work on whatever there is funding for, and the only way to square that with the constant influx of scientists willing to do the devil's work is by realizing that the drive to advance science comes from a fundamentally selfish place, not from any particularly strong affinity these people may have for their field, let alone for the human race. For who could predict the applications of their arcane results and their ultimate social ramifications?

A godfather agrees to take up the mantle of responsibility for the moral upbringing of a child. This Godfather forsook society while he was active in his research, and now he offers just the sort of mindless solution to the problem he helped start that you would expect from a decadent scientist: to relegate humanity to the role of "baby" in a mother-child relationship with artificial intelligence. And then he has the gall to speak of "human dignity" as he swats away the prospect of doling out UBI, saying, "People get their worth from their jobs." It is unsurprising that he deflects attention (others' as well as his own—for self-delusion is a hallmark of the technocrat) away from the broader problems of superintelligence and onto just one specific possible manifestation: capitalism and its inequality. But the capitalism he scapegoats is merely the most efficient technological-economic form at present, because it confers the most power on the world states embroiled in a struggle for power. So, while the proximate cause of income inequality can be pinned on capitalism, the nature of that capitalism is still determined by the technology he and his peers cook up, and the existential threat posed by superintelligence that Hinton speaks of still looms over the world. When you pin humanity with, at best, a 10% chance of annihilation, it warrants a proper allocation of attention to the core cause and the most viable solution.[1]

“Pausing” AI, as Hinton has previously suggested, or hoping a superintelligent deity takes us in as adoptive children, does not strike at the core; those are losing strategies. The core of the disruption to human life—and to the natural world more broadly—lies in the technological substrate of industrial society. Not only does it destroy ways of life as quickly as it creates them, it stands to end life as we know it. Whether the hypothetical superintelligence emerges, and how it might behave, remains to be seen; but in the meantime, humanity is compounding the points of catastrophic failure that could wipe out all complex life on Earth.


___________

NOTES:

[1] Hinton has long warned about the dangers of AI without guardrails, estimating a 10% to 20% chance of the technology wiping out humans after the development of superintelligence. https://fortune.com/2025/09/06/godfather-of-ai-geoffrey-hinton-massive-unemployment-soaring-profits-capitalist-system/

Copyright © 2026 by Wilderness Front LLC. All Rights Reserved.
