Effective Altruism

From P2P Foundation

Description

Daniel Pinchbeck:

“The leading proponent of effective altruism is Oxford moral philosopher William MacAskill, author of a number of books including his latest, What We Owe the Future. Fellow travelers include Nick Bostrom (Superintelligence) and Toby Ord (The Precipice: Existential Risk and the Future of Humanity). For MacAskill, longtermism is “the idea that positively influencing the longterm future is a key moral priority of our time. Longtermism is about taking seriously just how big the future could be and how high the stakes are in shaping it.””

(https://danielpinchbeck.substack.com/p/hospicing-effective-altruism)


Discussion

A critique of EA by Molly White: "The one-sentence description of effective altruism sounds like a universal goal rather than an obscure pseudo-philosophy. After all, most people are altruistic to some extent, and no one wants to be ineffective in their altruism. From the group’s website: “Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.” Pretty benign stuff, right?

Dig a little deeper, and the rationalism and utilitarianism emerges. Unsatisfied with the generally subjective attempts to evaluate the potential positive impact of putting one’s financial support towards — say — reducing malaria in Africa versus ending factory farming versus helping the local school district hire more teachers, effective altruists try to reduce these enormously complex goals into “impartial”, quantitative equations.

In order to establish such a rubric in which to confine the messy, squishy, human problems they have claimed to want to solve, they had to establish a philosophy. And effective altruists dove into the philosophy side of things with both feet. Countless hours have been spent around coffee tables in Bay Area housing co-ops, debating the morality of prioritizing local causes above ones that are more geographically distant, or where to prioritize the rights of animals alongside the rights of human beings. Thousands of posts and far more comments have been typed on sites like LessWrong, where individuals earnestly fling around jargon about “Bayesian mindset” and “quality adjusted life years”.

The problem with removing the messy, squishy, human part of decisionmaking is you can end up with an ideology like effective altruism: one that allows a person to justify almost any course of action in the supposed pursuit of maximizing their effectiveness.

Take, for example, the widely held belief among EAs that it is more effective for a person to take an extremely high-paying job than to work for a non-profit, because the impact of donating lots of money is far higher than the impact of one individual’s work. (The hypothetical person described in this belief, I will note, tends to be a student at an elite university rather than an average person on the street — a detail I think is illuminating about effective altruism’s demographic makeup.) This is a useful way to justify working for a company that many others might view as ethically dubious: say, a defense contractor developing weapons, a technology firm building surveillance tools, or a company known to use child labor. It’s also an easy way to justify life’s luxuries: if every hour of my time is so precious that I must maximize the amount of it spent earning so I may later give, then it’s only logical to hire help to do my housework, or order takeout every night, or hire a car service instead of using public transit.

The philosophy has also justified other not-so-altruistic things: one of effective altruism’s ideological originators, William MacAskill, has urged people not to boycott sweatshops (“there is no question that sweatshops benefit those in poor countries”, he says). Taken to the extreme, someone could feasibly justify committing massive fraud or other types of wrongdoing in order to obtain billions of dollars that they could, maybe someday, donate to worthy causes. You know, hypothetically.

Other issues arise when it comes to the task of evaluating who should be prioritized when it comes to aid. A prominent contributor to the effective altruist ideology, Peter Singer, wrote an essay in 1971 arguing that a person should feel equally obligated to save a child halfway around the world as they do a child right next to them. Since then, EAs have taken this even further: why prioritize a child next to you when you could help ease the suffering of a better child somewhere else? Why help a child next to you today when you could instead help hypothetical children born one hundred years from now? Or help artificial sentient beings one thousand years from now?

The focus on future artificial sentience has become particularly prominent in recent times, with “effective altruists” emerging as one synonym for so-called “AI safety” advocates, or “AI doomers”. Despite their contemporary prominence in AI debates, these tend not to be the thoughtful researchers who have spent years advocating for responsible and ethical development of machine learning systems, and trying to ground discussions about the future of AI in what is probable and plausible. Instead, these are people who believe that artificial general intelligence — that is, a truly sentient, hyperintelligent artificial being — is inevitable, and that one of the most important tasks is to slowly develop AI such that this inevitable superintelligence is beneficial to humans and not an existential threat."

(https://newsletter.mollywhite.net/p/effective-obfuscation)


More information

  • Daniel Pinchbeck recommends ‘The Grift Brothers’, an excellent exposé from Truthdig that explores their alliance as well as the questionable ethics behind the effective altruism project.

URL = https://www.truthdig.com/articles/the-grift-brothers/