Existential Risk


= ""an existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population, leaving the survivors without sufficient means to rebuild society to current standards of living". [1]


Description

1. SERI (Stanford Existential Risks Initiative):


"We think of existential risks, or global catastrophic risks, as risks that could cause the collapse of human civilization or even the extinction of the human species. Prominent examples of human-driven global catastrophic risks include 1) nuclear winter, 2) an infectious disease pandemic engineered by malevolent actors using synthetic biology, 3) catastrophic accidents/misuse involving AI, and 4) climate change and/or environmental degradation creating biological and physical conditions that thriving human civilizations would not survive. Other significant catastrophic risks exist as well."

(https://seri.stanford.edu/)

2. Future of Life Institute:

"An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population, leaving the survivors without sufficient means to rebuild society to current standards of living.

Until relatively recently, most existential risks (and the less extreme version, known as global catastrophic risks) were natural, such as the supervolcanoes and asteroid impacts that led to mass extinctions millions of years ago. The technological advances of the last century, while responsible for great progress and achievements, have also opened us up to new existential risks.

Nuclear war was the first man-made global catastrophic risk, as a global war could kill a large percentage of the human population. As more research into nuclear threats was conducted, scientists realized that the resulting nuclear winter could be even deadlier than the war itself, potentially killing most people on earth.

Biotechnology and genetics often inspire as much fear as excitement, as people worry about the possibly negative effects of cloning, gene splicing, gene drives, and a host of other genetics-related advancements. While biotechnology provides incredible opportunity to save and improve lives, it also increases existential risks associated with manufactured pandemics and loss of genetic diversity.

Artificial intelligence (AI) has long been associated with science fiction, but it's a field that's made significant strides in recent years. As with biotechnology, there is great opportunity to improve lives with AI, but if the technology is not developed safely, there is also the chance that someone could accidentally or intentionally unleash an AI system that ultimately causes the elimination of humanity.

Climate change is a growing concern that people and governments around the world are trying to address. As the global average temperature rises, droughts, floods, extreme storms, and more could become the norm. The resulting food, water and housing shortages could trigger economic instabilities and war. While climate change itself is unlikely to be an existential risk, the havoc it wreaks could increase the likelihood of nuclear war, pandemics or other catastrophes."

(https://futureoflife.org/existential-risk/existential-risk/)


History

From Issarice:

  • 19th century–1945 Early development:

"Concerns about human extinction can be traced back to the 19th century[2], with geology unveiling a radically nonhuman past.[3] French scientist Georges Cuvier popularizes the concept of catastrophism in the early 1800s. The first near-Earth asteroid is discovered. Toward the first half of the twentieth century, chemical and biological weapons become a case of concern.


  • 1945 onwards Atomic Age/Anthropocene:

"The nuclear holocaust becomes a theoretical scenario shortly after the beginning of this age, which starts following the detonation of the first nuclear weapon. In the 1950s, humanity enters a new age, facing not only existential risks from our natural environment, but also the possibility that we might be able to extinguish ourselves. During the 1960s, mutual assured destruction leads to the expansion of nuclear-armed submarines by both Cold War adversaries. In the same decade, the anti-nuclear movement launches, and the environmentalist movement soon adopts the cause of fighting climate change. Supervolcanoes are discovered in the early 1970s.[6] Global warming becomes widely recognized as a risk in the 1980s. The term "existential threat", beginning to spread around the 1960–80s during the Cold War, takes off in the 1990s and early 2000s.


  • 21st century Field of study consolidation:

"Nick Bostrom introduces the term "existential risk", and the topic emerges as a unified field of study.[2] By the early 2000s, scientists identify many other threats to human survival, including threats associated with artificial intelligence, biological weapons, nanotechnology, and high-energy physics experiments. Unaligned artificial intelligence is recognized by some as the main threat within a century. Today, it is understood that natural risks are dwarfed by human-caused ones, turning the risk of extinction into an especially urgent issue."

(https://timelines.issarice.com/wiki/Timeline_of_existential_risk)



Typology

AI Risk

From Issarice:

  • Until 1950 Fictional portrayals only

"Most discussion of AI safety is in the form of fictional portrayals. It warns of the risks of robots who, through either stupidity or lack of goal alignment, no longer remain under the control of humans.

  • 1950 to 2000 Scientific speculation + fictional portrayals

"During this period, discussion of AI safety moves from merely being a topic of fiction to one that scientists who study technological trends start talking about. The era sees commentary by I. J. Good, Vernor Vinge, and Bill Joy."


  • 2000 to 2012 Birth of AI safety organizations, close connections with transhumanism

"This period sees the creation of the Singularity Institute for Artificial Intelligence (SIAI) (which would later become the Machine Intelligence Research Institute (MIRI)) and the evolution of its mission from creating friendly AI to reducing the risk of unfriendly AI. The Future of Humanity Institute (FHI) and Global Catastrophic Risk Institute (GCRI) are also founded. AI safety work during this time is closely tied to transhumanism and has close connections with techno-utopianism. Peter Thiel and Jaan Tallinn are key funders of the early ecosystem."


  • 2013 to present Mainstreaming of AI safety, separation from transhumanism

"SIAI changes its name to MIRI, sells off the "Singularity" brand to Singularity University, grows considerably in size, and attracts substantial funding. Superintelligence, the book by Nick Bostrom, is released. The Future of Life Institute (FLI) and OpenAI are started, and the latter grows considerably. Other new organizations include the Center for the Study of Existential Risk (CSER), the Leverhulme Centre for the Future of Intelligence (CFI), the Center for Human-Compatible AI (CHAI), the Berkeley Existential Risk Initiative (BERI), Ought, and the Center for Security and Emerging Technology (CSET). OpenAI in particular becomes quite famous and influential. Prominent individuals such as Elon Musk, Sam Altman, and Bill Gates talk about the importance of AI safety and the risks of unfriendly AI. Key funders of this ecosystem include Open Philanthropy and Elon Musk."

(https://timelines.issarice.com/wiki/Timeline_of_AI_safety)


Nuclear Risk

Broad timeline from Issarice:

  • 1960s: The Cuban Missile Crisis threatens nuclear war


  • 1970s

"In addition to that, Zuberi notes that “by the late 1970s the defniition of proliferation changed from acquiring nuclear weapons or other explosive devices to developing a ‘nuclear explosive capability’”, and “consequently, the objective of safeguards changed from early detection of diversion of signifcant quantities of nuclear materials from peaceful to military pursuits to ‘prevention of development of nuclear explosive 4 ON NUCLEAR (DIS-)ORDER 121 capability’” (Zuberi 2003, p. 44)."


  • 1980s

"The decade was dominated by the Cold War superpower competition of the United States and the Soviet Union. Much of the world held its collective breath during the first years of the decade as tensions and the nuclear arms race heated up between the two rivals, leading to popular anti-nuclear protests worldwide and the nuclear freeze movement in the United States. The international community exhaled a bit in the second half of the decade as the United States and the Soviet Union earnestly sat down at the arms negotiating table and for the first time eliminated an entire category of nuclear weapons through the 1987 Intermediate-Range Nuclear Forces Treaty. The two countries also proceeded to negotiate cuts to their strategic nuclear forces, which ultimately would be realized in the landmark 1991 Strategic Arms Reduction Treaty. Although the U.S.-Soviet nuclear arms race was center stage, efforts to advance and constrain the nuclear weapons ambitions and programs of other countries played out in the wings, sometimes as part of the superpower drama. For instance, the United States shunted nonproliferation concerns aside in ignoring Pakistan’s nuclear weapons program because of that country’s role in fighting Soviet forces inside Afghanistan. Meanwhile, Iraq, North Korea, and South Africa advanced their nuclear weapons efforts in relative secrecy. In this decade, Iran began to secretly acquire uranium-enrichment-related technology from Pakistani suppliers. Taiwan’s covert nuclear weapons program, however, was squelched by U.S. pressure. Other nonproliferation gains included a joint declaration by Argentina and Brazil to pursue nuclear technology only for peaceful purposes, alleviating fears of a nuclear arms race between the two, and the conclusion of a nuclear-weapon-free zone in the South Pacific. Moreover, the NPT added 30 new states parties during the decade, including North Korea."


  • 2000s

"many countries began expressing a newfound interest in nuclear energy during the early 2000s."

(https://timelines.issarice.com/wiki/Timeline_of_nuclear_risk)


Pollution Risk

From Issarice:

  • 17th century Conversion of coal to coke for iron smelting develops, causing considerable air pollution.[1]
  • 18th century During the Industrial Revolution, coal comes into large-scale use. The resulting smog and soot starts having serious health impacts on the residents of growing urban centers.
  • 19th century The Industrial Revolution of the mid-century introduces new sources of air and water pollution.
  • 20th century Pollution grows rapidly in Western countries during the economic boom that follows the Second World War.
    • By the late 1950s, pollution becomes a serious issue, leading to a powerful environmental movement in the 1960s, which gains force during the 1970s.
    • Towards the 1990s, sulfur dioxide emissions start to peak in developing countries.
  • 21st century Today, carbon dioxide and other air pollutants, water pollutants, and land pollutants are the most common types of substances contaminating the Earth.

(https://timelines.issarice.com/wiki/Timeline_of_pollution)


More information

Bibliography

For perspectives on the risks associated with emerging technologies, see for example:

  • Deudney, D. (2020). Dark skies: Space expansionism, planetary geopolitics, and the ends of humanity. Oxford University Press, USA.
  • Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … Amodei, D. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. ArXiv [Cs.AI]. https://arxiv.org/abs/1802.07228
  • Tucker, J. B. (Ed.). (2012). Innovation, dual use, and security: Managing the risks of emerging biological and chemical technologies. MIT Press.