Effective Altruism

From P2P Foundation

Description

Daniel Pinchbeck:

“The leading proponent of effective altruism is Oxford moral philosopher William MacAskill, author of a number of books including his latest, What We Owe the Future. Fellow travelers include Nick Bostrom (Superintelligence) and Toby Ord (The Precipice: Existential Risk and the Future of Humanity). For MacAskill, longtermism is “the idea that positively influencing the longterm future is a key moral priority of our time. Longtermism is about taking seriously just how big the future could be and how high the stakes are in shaping it.”

(https://danielpinchbeck.substack.com/p/hospicing-effective-altruism)


Discussion

A critique of EA by Molly White:

"The one-sentence description of effective altruism sounds like a universal goal rather than an obscure pseudo-philosophy. After all, most people are altruistic to some extent, and no one wants to be ineffective in their altruism. From the group’s website: “Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.” Pretty benign stuff, right?

Dig a little deeper, and the rationalism and utilitarianism emerges. Unsatisfied with the generally subjective attempts to evaluate the potential positive impact of putting one’s financial support towards — say — reducing malaria in Africa versus ending factory farming versus helping the local school district hire more teachers, effective altruists try to reduce these enormously complex goals into “impartial”, quantitative equations.

In order to establish such a rubric in which to confine the messy, squishy, human problems they have claimed to want to solve, they had to establish a philosophy. And effective altruists dove into the philosophy side of things with both feet. Countless hours have been spent around coffee tables in Bay Area housing co-ops, debating the morality of prioritizing local causes above ones that are more geographically distant, or where to prioritize the rights of animals alongside the rights of human beings. Thousands of posts and far more comments have been typed on sites like LessWrong, where individuals earnestly fling around jargon about “Bayesian mindset” and “quality adjusted life years”.

The problem with removing the messy, squishy, human part of decisionmaking is you can end up with an ideology like effective altruism: one that allows a person to justify almost any course of action in the supposed pursuit of maximizing their effectiveness.

Take, for example, the widely held belief among EAs that it is more effective for a person to take an extremely high-paying job than to work for a non-profit, because the impact of donating lots of money is far higher than the impact of one individual’s work. (The hypothetical person described in this belief, I will note, tends to be a student at an elite university rather than an average person on the street — a detail I think is illuminating about effective altruism’s demographic makeup.) This is a useful way to justify working for a company that many others might view as ethically dubious: say, a defense contractor developing weapons, a technology firm building surveillance tools, or a company known to use child labor. It’s also an easy way to justify life’s luxuries: if every hour of my time is so precious that I must maximize the amount of it spent earning so I may later give, then it’s only logical to hire help to do my housework, or order takeout every night, or hire a car service instead of using public transit.

The philosophy has also justified other not-so-altruistic things: one of effective altruism’s ideological originators, William MacAskill, has urged people not to boycott sweatshops (“there is no question that sweatshops benefit those in poor countries”, he says). Taken to the extreme, someone could feasibly justify committing massive fraud or other types of wrongdoing in order to obtain billions of dollars that they could, maybe someday, donate to worthy causes. You know, hypothetically.

Other issues arise when it comes to the task of evaluating who should be prioritized when it comes to aid. A prominent contributor to the effective altruist ideology, Peter Singer, wrote an essay in 1971 arguing that a person should feel equally obligated to save a child halfway around the world as they do a child right next to them. Since then, EAs have taken this even further: why prioritize a child next to you when you could help ease the suffering of a better child somewhere else? Why help a child next to you today when you could instead help hypothetical children born one hundred years from now? Or help artificial sentient beings one thousand years from now?

The focus on future artificial sentience has become particularly prominent in recent times, with “effective altruists” emerging as one synonym for so-called “AI safety” advocates, or “AI doomers”. Despite their contemporary prominence in AI debates, these tend not to be the thoughtful researchers who have spent years advocating for responsible and ethical development of machine learning systems, and trying to ground discussions about the future of AI in what is probable and plausible. Instead, these are people who believe that artificial general intelligence — that is, a truly sentient, hyperintelligent artificial being — is inevitable, and that one of the most important tasks is to slowly develop AI such that this inevitable superintelligence is beneficial to humans and not an existential threat."

(https://newsletter.mollywhite.net/p/effective-obfuscation)


Effective Altruism Without the Hubris

Yascha Mounk:

"The trouble with effective altruism has, from the beginning, been its hubris. The movement’s leaders treated doing good as a kind of intellectual game, one that is so deeply constituted by abstract rules about creating happiness and eradicating pain that it can dispense with such messy concerns as human psychology or the dynamics of the political world.

Any attempt to rescue effective altruism from these shortcomings has to begin with a big helping of modesty. It is very hard to predict how somebody will act three decades from now and virtually impossible to predict what challenges humanity as a whole will face in three centuries. The first step to doing good in an effective manner is to be honest about our cognitive limitations—and to take seriously the ever-present danger that even the most well-meaning and thoroughly “researched” interventions may prove to be counterproductive.

A reconstituted effective altruism must give up on long-termist ambitions that make for fascinating sci-fi but completely fail to guide action of which we can be reasonably confident that it will actually have a positive impact. It must understand that the human psyche is messy, giving us reason to be deeply skeptical about overly clever hacks to hard problems, like telling young idealists to turn themselves into efficient money-making machines. And it must dispense with the technocratic ethos of omniscient sages who are convinced that they are superior to their compatriots and, when push comes to shove, may even be justified in ignoring the ordinary rules of morality—an ethos that both leads good people astray and runs the danger of attracting some of the very worst people to the cause.

But none of this is a reason to throw the baby out with the bathwater. Effective altruists are right that people spend billions of dollars on charitable contributions every year. It is true that much of that money goes to building new gyms at fancy universities or upgrading local cat shelters. And it is hard to argue with the idea that it would, insofar as possible, be better to direct donors’ altruistic instincts to more impactful endeavors, potentially saving the lives of thousands of people.

Even in this more modest form, effective altruism will face some serious empirical obstacles. The history of seemingly obvious interventions that unexpectedly turned out to have adverse consequences is long. But when stripped to its core, the core intuition behind effective altruism does not depend on the more dubious assumptions about human psychology or our ability to predict the future that leading effective altruists have embraced. Rather, it merely claims that charitable donations can make a big impact in the world; that this gives people who are reasonably affluent by global standards good reason to donate a generous share of their income; and that they should think hard about what kinds of donations are most likely to make a real difference to people in genuine need.

Put in these simple terms—and stripped of the hubris to which the movement they inspired sadly succumbed—these premises are virtually impossible to contest."

(https://yaschamounk.substack.com/p/the-problem-with-effective-altruism)


EA as an imperfect example of Idea Machines

Nadia Asparouhova:

"I want to address why effective altruism, as I’ve stated elsewhere, “cannot singlehandedly meet the civil purpose of philanthropy.” In other words, if effective altruism is so good already, why do we need other idea machines at all?

I think of philanthropy as a type of idea marketplace for public goods, funded by private capital. Like all idea marketplaces – startups, media, philosophy – it’s inherently pluralistic. We don’t have a single government-funded media channel, for example, but instead get our news, entertainment, and ideas from a multitude of sources.

There are certainly better and worse ways of executing a philanthropic initiative, just as there are better and worse ways of building a startup. But once we look beyond best practices, there’s way more variance in approaches than, say, effective altruism might advocate for.

We seem to understand that entrepreneurship operates in a free market of ideas, so I’m not sure where the idea comes from that there is, or could be, One True Approach to philanthropy. I’d guess it’s because there are so many egregious examples of mismanaged funds and middling outcomes, which have led people to feel understandably suspicious about its effectiveness.

If we were to take EA literally, however, we’d be saying that there is an objectively best way to accomplish these outcomes, and that that way is discoverable: that complex social problems are a finite, solvable game.

If philanthropy is pluralistic – and, like any idea marketplace, that is one of its virtues – then there is no single school of thought that can “solve” complex social questions, because everyone has a different vision for the world. If you’re pro-pluralism in startups, you should also be pro-pluralism in philanthropy.

The scholar Peter Frumkin describes philanthropy as having both instrumental and expressive value. Effective altruism can be understood as a movement that heavily prioritizes instrumental value (which, ironically, is its own form of self-expression). As a private citizen, renouncing my right to expressive value, in favor of donating to wherever GiveWell tells me to, feels like I might as well just pay more taxes to the government. Why have a market of choice if we can’t exercise it?

I expect that effective altruism will always be an example of what I’ve called “club” communities elsewhere: high retention of existing members, but limited acquisition of new members, like a hobbyist club. EA will continue to grow, but it will never become the dominant narrative because it’s so morally opinionated. I don’t think that’s a problem, though, because ideally we want lots of people conducting lots of public experiments.

The more interesting question, then, is: why aren’t there more effective altruisms? It’d be like if there were just one startup, or one blogger, or one news channel. When it comes to deploying private capital towards public outcomes, the idea marketplace is woefully barren.

Although I don’t personally identify with the ethos of effective altruism, I also think they’ve done a lot of things well. EA has a remarkably good infrastructure for attracting and retaining members, identifying cause areas, and directing time and dollars towards those efforts. A common critique of EA is that it fails to attract operational talent, but despite its weaknesses, it’s still the best example of what I’ve been calling an “Idea Machine” in my head – maybe not the best term in the world, but let’s roll with it because I’m bad at naming."

(https://nadia.xyz/idea-machines)

More information

  • Daniel Pinchbeck recommends ‘The Grift Brothers’, an exposé from TruthDig that explores their alliance as well as the questionable ethics behind the effective altruism project.

URL = https://www.truthdig.com/articles/the-grift-brothers/