Effective Accelerationism
Description
Luo P/Acc:
"The E/acc (Effective Accelerationism) movement ... believes that humanity should accelerate the development of cutting-edge technologies such as AI without restriction, ultimately promoting human evolution. The term E/acc was originally coined as a joke playing on Effective Altruism (EA), and first appeared in a casual conversation between two Silicon Valley programmers in an X Space. Later, the well-known venture capitalist Marc Andreessen, co-founder of a16z, and Garry Tan, CEO of the startup accelerator Y Combinator, added “E/acc” to their X account profiles. After these prominent figures endorsed it, many others followed suit and added the E/acc suffix. Although there are forums and websites dedicated to E/acc, it remains a diffuse movement: the people claiming the label share no unified definition of it.
...
E/Acc was invented to counter the arguments of AI skeptics, blending fragments of older accelerationist thought with engineers’ Darwinian epistemology. Its core sophistry is the developmentalist (“party of industry”) routine: the claim that the social problems created by rapid technological progress can themselves be solved by even more rapid technological progress. This amounts to overdrawing society’s resources by extrapolating techno-centrism to its extreme. E/Acc’s starting point appropriates the laws of thermodynamics, creating the illusion of a fate in which intervention is forbidden. The notion that technology is sacred, detached from society, and able to determine social progress from outside can be summarized as techno-centrism or technological determinism, which traces back to Saint-Simon’s theory of expert technocratic rule and accompanies the popular textbook understanding of the history of science and technology. This kind of technological fetishism actually hinders people’s understanding of the relationship between technology and society."
Characteristics
"According to the E/Acc forum, the main points can be summarized as follows:
- Accelerationism refers to the accelerating spiral state of positive feedback coupling between technology and capital.
- Analogous to the second law of thermodynamics, the complexity of the universe is irreversibly increasing, and the level of technological intelligence in human society should also accelerate.
- The development of AI technology is unstoppable, and those who oppose AI worry only because they do not understand the technology. Artificial intelligence does carry certain risks, which is why more people should be encouraged to join this technological wave: open-sourcing and accelerating are what promote the benign development of the technology, not restricting or delaying it.
- The public sector of human society (governments, NGOs, scientific associations) cannot manage AI and should step back entirely to let AI accelerate; AI systems will evolve toward a state of mutual balance.
- Existing social problems can be set aside; letting technology advance unchecked may exacerbate social conflicts, but once technology advances far enough, the old social problems will be solved effortlessly."
(via [1])
Discussion
Vitalik Buterin:
"Over the last few months, the "e/acc" ("effective accelerationist") movement has gained a lot of steam. Summarized by "Beff Jezos" here, e/acc is fundamentally about an appreciation of the truly massive benefits of technological progress, and a desire to accelerate this trend to bring those benefits sooner.
I find myself sympathetic to the e/acc perspective in a lot of contexts. There's a lot of evidence that the FDA is far too conservative in its willingness to delay or block the approval of drugs, and bioethics in general far too often seems to operate by the principle that "20 people dead in a medical experiment gone wrong is a tragedy, but 200000 people dead from life-saving treatments being delayed is a statistic". The delays to approving covid tests and vaccines, and malaria vaccines, seem to further confirm this. However, it is possible to take this perspective too far.
In addition to my AI-related concerns, I feel particularly ambivalent about the e/acc enthusiasm for military technology. In the current context in 2023, where this technology is being made by the United States and immediately applied to defend Ukraine, it is easy to see how it can be a force for good. Taking a broader view, however, enthusiasm about modern military technology as a force for good seems to require believing that the dominant technological power will reliably be one of the good guys in most conflicts, now and in the future: military technology is good because military technology is being built and controlled by America and America is good. Does being an e/acc require being an America maximalist, betting everything on both the government's present and future morals and the country's future success?
On the other hand, I see the need for new approaches in thinking of how to reduce these risks. The OpenAI governance structure is a good example: it seems like a well-intentioned effort to balance the need to make a profit to satisfy investors who provide the initial capital with the desire to have a check-and-balance to push against moves that risk OpenAI blowing up the world. In practice, however, their recent attempt to fire Sam Altman makes the structure seem like an abject failure: it centralized power in an undemocratic and unaccountable board of five people, who made key decisions based on secret information and refused to give any details on their reasoning until employees threatened to quit en-masse. Somehow, the non-profit board played their hands so poorly that the company's employees created an impromptu de-facto union... to side with the billionaire CEO against them.
Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems. This philosophy also goes quite a bit broader than AI, and I would argue that it applies well even in worlds where AI risk concerns turn out to be largely unfounded. I will refer to this philosophy by the name of d/acc."
(https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html)
Critique by Molly White
Molly White:
"While effective altruists view artificial intelligence as an existential risk that could threaten humanity, and often push for a slower timeline in developing it (though they push for developing it nonetheless), there is a group with a different outlook: the effective accelerationists.
This ideology has been embraced by some powerful figures in the tech industry, including Andreessen Horowitz’s Marc Andreessen, who published a manifesto in October in which he worshipped the “techno-capital machine” as a force destined to bring about an “upward spiral” if not constrained by those who concern themselves with such concepts as ethics, safety, or sustainability.
Those who seek to place guardrails around technological development are no better than murderers, he argues, for putting themselves in the way of development that might produce lifesaving AI.
This is the core belief of effective accelerationism: that the only ethical choice is to put the pedal to the metal on technological progress, pushing forward at all costs, because the hypothetical upside far outweighs the risks identified by those they brush aside as “doomers” or “decels” (decelerationists).
Despite their differences on AI, effective altruism and effective accelerationism share much in common (in addition to the similar names). Just like effective altruism, effective accelerationism can be used to justify nearly any course of action an adherent wants to take.
Both ideologies embrace as a given the idea of a super-powerful artificial general intelligence being just around the corner, an assumption that leaves little room for discussion of the many ways that AI is harming real people today. This is no coincidence: when you can convince everyone that AI might turn everyone into paperclips tomorrow, or on the flip side might cure every disease on earth, it’s easy to distract people from today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others. This is incredibly convenient for the powerful individuals and companies who stand to profit from AI.
And like effective altruists, effective accelerationists are fond of waxing philosophical, often with great verbosity and with great surety that their ideas are the peak of rational thought.
Effective accelerationists in particular also like to suggest that their ideas are grounded in scientific concepts like thermodynamics and biological adaptation, a strategy that seems designed to woo the technologist types who are primed to put more stock in something that sounds scientific, even if it’s nonsense. For example, the inaugural Substack post defining effective accelerationism’s “principles and tenets” name-drops the “Jarzynski-Crooks fluctuation dissipation theorem” and suggests that “thermodynamic bias” will ensure that only positive outcomes reward those who insatiably pursue technological development. Effective accelerationists also claim to have “no particular allegiance to the biological substrate”, with some believing that humans must inevitably forgo these limiting, fleshy forms of ours “to spread to the stars”, embracing a future that they see mostly — if not entirely — revolving around machines."
(https://newsletter.mollywhite.net/p/effective-obfuscation)
Movements
Alternatives to E/Acc:
AI Alignment
Luo P/Acc:
"AI Alignment
Advocates slowing the pace of development of general artificial intelligence, placing greater focus on the public benefit, ethics, and human values of the technology, and introducing humanistic value judgments into the AI development process, to ensure that AI does not spiral out of control and threaten human society. Geoffrey Hinton, the British-born professor whose foundational work on neural networks underpins the current wave of AI, is known as the “godfather of artificial intelligence.” After leaving Google, he has been calling for cautious handling of AI technology and has become a representative of the AI Alignment camp. OpenAI’s chief scientist Ilya Sutskever is a former student of Hinton’s, and within OpenAI he has been promoting a “Superalignment” project to ensure that AI systems align with human intentions and values, covering both the explicit and implicit intentions of humans, such as truthfulness, fairness, and safety. Previously, OpenAI’s two independent directors, Helen Toner and Tasha McCauley, both leaned toward the AI Alignment camp."
D/Acc
Luo P/Acc:
"Vitalik Buterin, founder of Ethereum and a prominent figure in the contemporary blockchain industry, introduced the concept of D/acc in his blog post “My techno-optimism”, written in response to Andreessen’s “Techno-Optimist Manifesto”. (Original article link: https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html)
Vitalik has a dual background as a developer and a writer, inheriting cypherpunk ideals and advocating the public value of technology for good. As the leader of Ethereum, the industry’s largest public blockchain, Vitalik holds considerable influence among developers and investors.
In this article, Vitalik attempts to embrace various technological ideologies within the vision framework of D/Acc, including E/Acc, effective altruism (EA), libertarianism, pluralism, public healthcare, blockchainism, solarpunk, and lunarpunk. The “d” in D/Acc can stand for many things, especially defense, decentralization, democracy, and differential.
Vitalik draws on “The Art of Not Being Governed”, a classic work by anthropologist James C. Scott, in which Scott describes the social forms and modes of production by which the mountainous Zomia region of Southeast Asia resisted the formation of centralized power, echoing the proverb “the mountains are high and the emperor is far away”.
Vitalik lists a series of blockchain technologies that can serve as methods for individuals to resist data surveillance. Zero-knowledge proofs can be used for privacy protection, allowing users to prove claims about themselves without revealing personal information. Such technologies let us keep the benefits of privacy and anonymity, attributes widely considered necessary for applications such as voting, while still providing security guarantees and combating spam and malicious actors. This can allow users and communities to verify trustworthiness without compromising privacy, protect their security, and avoid relying on centralized bottlenecks that impose definitions of who is good or bad.
For example:
- Digital passport signatures wrapped in a ZK-SNARK can prove that you are a unique citizen of a specific country/region without revealing which country you are from.
- Zupass, incubated by Zuzalu, has been used by hundreds of people and more recently at Devconnect; it lets users hold tickets, memberships, (non-transferable) digital collectibles, and other proofs.
- Pol.is uses an algorithm similar to Community Notes (and applied it earlier than Community Notes) to help communities identify points of consensus between sub-tribes.
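To make the "prove without revealing" idea above concrete, here is a minimal sketch of a Schnorr proof of knowledge in Python. This is not a ZK-SNARK (those involve far heavier machinery), and everything here is for illustration only: the prover demonstrates knowledge of a secret key behind a public credential without disclosing the key itself, which is the same basic property the passport example relies on.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge (made non-interactive via Fiat-Shamir).
# NOT production cryptography; demonstration only.

# 1024-bit MODP group from RFC 2409 (Oakley Group 2), generator 2.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
    "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
    "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
    "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
    "49286651ECE65381FFFFFFFFFFFFFFFF", 16)
G = 2
Q = P - 1  # exponent modulus (the order of G divides P - 1)

def keygen():
    """Secret key x, public credential y = G^x mod P."""
    x = secrets.randbelow(Q - 2) + 2
    return x, pow(G, x, P)

def prove(x, y):
    """Prove knowledge of x such that y = G^x, without revealing x."""
    r = secrets.randbelow(Q - 2) + 2
    t = pow(G, r, P)  # commitment
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % Q
    s = (r + c * x) % Q  # response blinds x with the random r
    return t, s

def verify(y, t, s):
    """Check G^s == t * y^c mod P; the verifier never sees x."""
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % Q
    return pow(G, s, P) == (t * pow(y, c, P)) % P

x, y = keygen()
t, s = prove(x, y)
print(verify(y, t, s))  # True: proof accepted without x ever being revealed
```

The check works because G^s = G^(r + c·x) = t · y^c (mod P), while s itself leaks nothing about x thanks to the random blinding term r. Real credential systems like the passport example replace this simple relation with a SNARK circuit proving a statement such as "I hold a valid government signature" without exposing the signature or the country.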
In the article, Vitalik also acknowledges “superalignment” as a practical compromise that allows developers to more consciously ensure that what they are building helps human flourishing, without interrupting the progress of research and development: “By embedding human feedback at every step of the decision-making process, we reduce the incentive to hand high-level planning responsibility to the AI itself, thereby reducing the chance that the AI does something completely at odds with human values.” This, he argues, is what is needed if we want a future where superintelligence coexists with humanity, where humans are not merely pets but actually retain meaningful agency in the world."