Effective Altruism: A Hypocritical Charade of Elite Power

By Charles Taylor (Florida)

Effective Altruism (EA), a philosophy championed by tech billionaires like Elon Musk, Dustin Moskovitz, and Sam Bankman-Fried, claims to maximise global good through reason and evidence. Yet its lofty rhetoric masks a darker reality: EA is a hypocritical scam, weaponising genuine concerns about suffering to enrich and empower a technocratic elite. By prioritising esoteric causes like AI safety, shrimp welfare, and space colonisation over immediate human crises, such as the 42 million people facing starvation in 2021, the movement reveals a disconnect between its stated goals and its actions. I will argue that EA's selective altruism, opaque funding, and manipulation of public narratives expose it as a tool for the superrich to justify their greed and control, cloaked in the guise of philanthropy.

EA's core tenet, "cause prioritisation," claims to objectively identify the most effective ways to alleviate suffering. Yet its chosen causes, which favour farmed-animal welfare, biosecurity, AI safety, and "longtermism" over starving children and global poverty, reek of hypocrisy. In 2021, when the UN World Food Programme appealed for $6 billion to save 42 million people from starvation, Elon Musk dismissed the plea, demanding proof of impact. Instead, he donated a similar sum to his own Musk Foundation, which lacks transparency and has no clear record of addressing urgent human needs. Meanwhile, Open Philanthropy, funded by Moskovitz, allocated only a third of its $440 million in 2021 grants to global health, with 28% going to longtermist causes like space exploration and AI governance.

This selective focus betrays EA's supposed rationality. If "doing the most good" is the goal, why prioritise speculative future risks, like the welfare of "digital minds" or unborn generations trillions of years hence, over tangible suffering today? Philosopher Nick Bostrom's "Astronomical Waste" paper, endorsed by Musk, claims that delaying space colonisation "wastes" 100 trillion human lives per second! Such outlandish calculations justify funnelling billions into transhumanist pet projects while ignoring the 783 million people currently undernourished. This hypocrisy exposes EA as a vehicle for elite priorities, not universal good.

EA's funding structure further reveals its duplicity. The movement's wealth, which ballooned by $600 million in 2021 alone, comes primarily from tech billionaires like Moskovitz and Bankman-Fried, with Musk's contributions obscured by his secretive Musk Foundation. The foundation's bare-bones website offers no details on grants or applications, and in 2021–2022 it donated less than 5% of its multi-billion-dollar assets. This lack of transparency allows elites to channel funds into self-serving causes, like AI safety aligned with their tech empires, while claiming philanthropic virtue.

The rapid rise of EA from a fringe group of 100 in 2009 to 10,000 supporters by 2022 was no grassroots triumph. It was engineered through strategic media exposure and elite patronage. Joe Rogan's 2017 and 2018 podcast invitations to EA philosophers William MacAskill and Peter Singer, followed by Musk's 2022 tweet promoting MacAskill's book to 100 million followers, catapulted EA into the mainstream. Coincidences such as Igor Kurganov, the partner of a former MacAskill flatmate and a Musk Foundation collaborator, suggest an orchestrated network. Glowing articles in major outlets and Sam Harris's podcast endorsements further cemented EA's legitimacy, despite its meagre public support of roughly 0.000125% of the global population. This manufactured prominence exposes EA as a top-down elite project, not a popular movement.
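A quick back-of-the-envelope check of that percentage, assuming a world population of roughly eight billion (a figure not stated in the original):

\[
\frac{10{,}000 \text{ supporters}}{8 \times 10^{9} \text{ people}} = 1.25 \times 10^{-6} \approx 0.000125\%
\]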

EA's hypocrisy lies in its weaponisation of compassion to advance transhumanist goals. By framing speculative issues, like AI sentience or shrimp welfare, as morally urgent, EA diverts resources from immediate crises. MacAskill's claim that a world in which 80 billion animals are slaughtered annually is a "f-cked up place to be" contrasts starkly with his silence on human starvation. Similarly, Bostrom's focus on "future people" dismisses present suffering as trivial. This skewed moral calculus, cloaked in "reason and evidence," appeals to the cold, utilitarian mindset of tech billionaires who prioritise abstract futures over human lives.

The movement's flexibility is its greatest hypocrisy. EA's vague definition of "the most good" allows elites to justify any pet project. Musk's obsession with space colonisation, Moskovitz's AI safety ventures, and Bankman-Fried's biosecurity grants align suspiciously with their business interests, yet are branded as altruistic. The Center for Effective Altruism's Julia Wise, who wept over a $4 candy apple out of guilt over malaria nets, embodies this pathological empathy, a performative sacrifice that masks elite control over charity priorities. By elevating tilapia welfare over human hunger, EA distorts compassion into a tool for transhumanist propaganda.

Philosophically, EA's reliance on "cause neutrality" is a sham. Defining "good" through mathematical models ignores the subjective, human nature of suffering. As philosophers like Kant argued, saving a drowning child is instinctively moral, yet MacAskill's choice to save a Picasso painting for its charity proceeds defies common sense. This cold utilitarianism, embraced by what critics describe as "half-autistic" tech nerds, strips charity of empathy, reducing it to a heartless equation. The claim that future lives outweigh present ones is intellectual hubris, assuming technocrats can predict outcomes millennia away. As spiritual traditions teach, suffering is mind-based and best addressed in the present with wisdom and compassion, not grandiose calculations.

EA's moral bankruptcy is evident in its dismissal of immediate needs. The 42 million people Musk could have saved in 2021 were deemed less "effective" than his space ambitions. This echoes the broader transhumanist agenda: depopulate, control, and escape to Mars while guilting the masses into sacrificing for "the future." The parallels to "by 2030, you'll own nothing and be happy" are stark: EA's guilt-inducing narrative pushes voluntary austerity while elites hoard power.

EA is a charade, a carefully crafted scam to enrich and empower the superrich under the guise of philanthropy. By funnelling billions into non-transparent NGOs and esoteric causes, tech billionaires like Musk avoid accountability while shaping global narratives. The movement's growth, driven by elite funding and media manipulation, masks its unpopularity: 10,000 supporters after $600 million and mass exposure is a failure by any measure. Yet its influence persists, as elites leverage EA to justify tax-dodging donations, like Musk's $6 billion to his own foundation, and to push transhumanist agendas that align with their wealth and power.

The hypocrisy is glaring: EA claims to save the world but ignores its most vulnerable. It champions "evidence" but cherry-picks causes that benefit its patrons. It preaches sacrifice while its leaders live lavishly, with Musk's fortune untouched by the starvation he dismissed. This is not altruism but a power grab, using guilt and fear to control minds and resources. The war on our consciousness, as critics note, targets the young and impressionable, convincing them to prioritise shrimp over humans and future AI over present lives.

Effective Altruism is a hypocritical facade, a tool for tech billionaires to mask their greed and God-complexes as philanthropy. By prioritising speculative, self-serving causes over immediate human suffering, EA exposes its moral and philosophical flaws. Its opaque funding, engineered prominence, and transhumanist agenda reveal a scam designed to enrich and empower elites while guilting the masses into compliance. True compassion addresses suffering in the here and now, not in some distant, unprovable future. As the world grapples with real crises of hunger, poverty, and inequality, EA's heartless utilitarianism must be rejected, and its backers held accountable for their manipulative charade.

https://markusmutscheller.substack.com/p/elon-musks-charity-philosophy-let 

 
