Sam Altman, AI, and the Energy Myth: A Sceptic's Take, by Brian Simpson
Sam Altman recently waded into the perpetual storm around AI's energy use. Speaking to Goenka, he dismissed viral claims about ChatGPT consuming "17 gallons of water per query" as "totally insane" and insisted the real concern is total energy consumption, not some per-query metric. He then offered an analogy that has sparked more controversy than clarity: training humans versus training AI.
Altman's argument, in essence, is this: yes, training an AI model takes energy, but so does raising a human. Twenty years of food, shelter, and accumulated cultural knowledge go into producing a single thinking adult. If you factor in the millennia of evolution that produced humans capable of science and reasoning, AI is — he claims — already competitive on an energy-efficiency basis.
On the surface, it's clever. It reframes AI as a continuation of human ingenuity rather than a standalone energy drain. But beneath the veneer lies a strange techno-anthropological perspective, one that's anti-human in its subtle assumptions. Altman reduces the messy, fragile, error-prone, and socially entangled human being to a "trainable energy package," then compares it directly to a digital system. Humans are not simply inference machines — they are networks of biological, social, and cultural processes. Ignoring that makes the comparison not just misleading but anthropologically shallow.
From a techno-sceptical viewpoint, there are at least three points worth unpacking:
1. The myth of neutrality: Altman's framing suggests that AI energy costs are "fair" because humans consume energy too. But fairness is not the point. Energy consumption drives environmental impact and material scarcity. Digital efficiency may match or exceed humans in abstraction, but that does nothing to offset the infrastructural and ecological costs of AI proliferation. The world doesn't care whether AI or humans consume the energy — it only cares about the consequences.
2. Evolution as justification: Invoking human evolution to defend AI efficiency feels like a category error. Evolution produced humans under extreme scarcity, high mortality, and social complexity. Treating it as an energy ledger to justify AI's impact ignores context. Evolution doesn't operate on a per-query basis — it operates through survival and adaptation. Translating that into energy economics is a metaphysical leap, not a rational comparison.
3. Anti-human anthropology: The backlash on X/Twitter — that these thinkers are "deeply antisocial and antihuman" — is telling. By equating humans to long-training, energy-intensive models, you flatten our social, emotional, and ethical lives into mere computation. Techno-sceptics see this pattern often: the more you normalise AI as the rational successor to humans, the more you diminish the value of human complexity, unpredictability, and fallibility.
Altman's energy defence is, at best, a rhetorical dodge, and at worst, a sign of a deeper cognitive bias among Silicon Valley elites: the belief that computational abstraction is the ultimate lens through which to measure everything, including human life. Techno-sceptics should note the irony: in attempting to make AI seem less costly, he inadvertently exposes a worldview that is hostile to the very qualities that make humans worth the energy they consume.
We're not against efficiency. But if energy and environmental responsibility are the lens, the conversation shouldn't be about clever analogies to human evolution—it should be about accountability, limits, and the ecological cost of treating digital models as new species of human proxies. Until then, these techno-utopian narratives risk substituting hubris for wisdom, abstraction for ethics, and computation for humanity.
https://www.technocracy.news/openai-ceo-sam-altman-goes-dystopian-on-chatgpt-energy-consumption/
