
Talk:Global catastrophic risk/Archive 3

From Wikipedia, the free encyclopedia

Adding new section (only if it's okay with all of you) from User:DemocraticSocialism

I wish to add a new section, "Volatile World Leaders" to this article because I view people like Donald Trump, Vladimir Putin and Kim Jong-un as a real threat to humanity (especially Trump and Kim). I will only do so if it's alright with all of you. — Preceding unsigned comment added by DemocraticSocialism (talkcontribs) 03:56, 17 August 2017 (UTC)

Overlarded lead

Insufficient global governance creates risks in the social and political domain (potentially leading to a global war with or without a nuclear holocaust, bioterrorism using genetically modified organisms, cyberterrorism destroying critical infrastructures like the electrical grid, or the failure to manage a natural pandemic) as well as problems and risks in the domain of earth system governance (with risks resulting from global warming, environmental degradation, including extinction of species, or famine as a result of non-equitable resource distribution, human overpopulation, crop failures and non-sustainable agriculture).

This sentence has a high glass-over coefficient. — MaxEnt 21:55, 6 October 2017 (UTC)

Comments

The following potential causes of a global catastrophe are fictional:

  • Artificial intelligence
  • Extraterrestrial invasion
  • Nanotechnology
I would agree myself :). But some people do take them seriously as potential global catastrophes, and publish papers on them, so Wikipedia can't take a position on that, though it can cite other people if there are sources that say they are fictional. So it's a case of looking for good cites to use on that. That is to say - AI in the weak sense is real, as is nanotechnology, and many astronomers think there is a possibility of extraterrestrial intelligences living around distant stars or in distant galaxies. But the idea that any of those would cause a global catastrophe is another matter. For nanotech, the worry is about nanoscale self-replicators, which are so far away at present that I think it's a bit premature. We can't yet make a self-replicating city in a lunar crater, likely a far easier task; people forget that a self-replicator has to source its own materials. Life makes it look easy, but we are nowhere near achieving anything like that with 3D printers and the like. For extraterrestrial intelligences, why would they want to invade, and how could they unless they are here already or can get here easily - in which case, if they wanted to, why didn't they take over Earth before the pesky humans evolved here (given that the chance of them reaching technology at the same time as us is minute)? For AI, the worry is about AIs that are as intelligent as us and purposeful, with objectives, and that can rapidly increase their intelligence and power over a short period of time. But there's no evidence of even the first steps of a truly intelligent AI, and chatbots are pathetic. Sophia is smoke and mirrors.
Still it is no good just saying that. You need to find sources that say that in WP:RS. I've written blog posts saying all of that but they wouldn't count. There doesn't seem to be a lot written by way of skeptical literature about global catastrophes (which is why I've taken to blogging about it and why I'm interested in this article - some people get panicked and very scared by fictional scenarios promoted by the likes of Michio Kaku and Stephen Hawking). Robert Walker (talk) 00:00, 17 February 2018 (UTC)

Cyberattacks can lead to major disasters, but the probability of a global catastrophe is low. Successful cyberattacks on critical infrastructure such as the global positioning system, internet backbones, military infrastructure, etc. could be a global disaster.


   "They argue that risks from biological warfare and bioterrorism are distinct from nuclear and chemical threats because biological pathogens are easier to mass-produce and their production is hard to control (especially as the technological capabilities are becoming available even to individual users)."

A global disaster caused by nuclear and chemical accidents is just as likely as a major biotechnology catastrophe.



   "The Chicxulub asteroid, for example, is theorized to have caused the extinction of the non-avian dinosaurs 66 million years ago at the end of the Cretaceous."


   "One such event, the Toba eruption, occurred in Indonesia about 71,500 years ago. According to the Toba catastrophe theory"


   "When the supervolcano at Yellowstone last erupted 640,000 years ago"


That's not what happened. — Preceding unsigned comment added by Georgengelmann (talkcontribs) 16:48, 21 August 2017 (UTC)

I agree the supervolcano section has a fair few errors in it; for instance, it said that it covered much of the US with magma. Actually the magma spread from Yellowstone Lake to Idaho Falls. I've fixed it a bit - but there may be more errors - do feel free to fix any obvious mistakes or discuss here. Robert Walker (talk) 23:51, 16 February 2018 (UTC)

Hello fellow Wikipedians,

I have just modified one external link on Global catastrophic risk. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 19:58, 19 October 2017 (UTC)

Claim that 3 - 10 km asteroid could make humans extinct

"For this to occur, the asteroid would need to be at least 1 km (0.62 mi) in diameter, but probably between 3 and 10 km (2–6 miles)."

This is sourced to Nick Bostrom [1].

"In order to cause the extinction of human life, the impacting body would probably have to be greater than 1 km in diameter (and probably 3 - 10 km)".

But he is a philosopher with a special interest in existential risks - not an asteroid specialist or indeed a scientist - and he does not give any arguments or sources for his conclusion. For those reasons, though it is an academic source, on this particular topic I think it is a rather weak source. After all, the dinosaurs were hit by a 10 km diameter asteroid. Though many species went extinct, the birds survived, as did turtles and small mammals. Humans are warm-blooded omnivores, able to make fire, capable of agriculture, and able to eat almost anything, including for instance seafood along the shoreline, a staple of early humans in cold places. Humans can survive with minimal technology anywhere from the Kalahari to Siberia. Even an asteroid winter would not make the tropics colder than Siberia. Even if you take the worst case of a worldwide firestorm followed by close to total darkness, there will be fish, seafood, animals that died and lie frozen, etc. Surely humans would not be made extinct by such an event - there's a big difference between knocking back our civilization seriously for a few decades and going extinct.

Now a 48 km (30 mile) asteroid would make us extinct unless we could deflect it. That's enough to raise air temperatures worldwide to 500 °C and boil the surface of the ocean away to a depth of maybe 100 meters.

“Researchers led by Don Lowe of Stanford University describe the effects of two asteroids measuring 30 to 60 miles across that hit about 3.29 and 3.23 billion years ago. (For context, the asteroid that killed the dinosaurs was probably a measly six miles across.) The dual impacts sent temperatures in the atmosphere up to 932 degrees Fahrenheit for weeks and boiled the oceans for a year, long enough that seawater evaporated and they dropped perhaps 328 feet. The researchers reported their findings in the journal Geology.”

See Geologic record of partial ocean evaporation triggered by giant asteroid impacts, 3.29-3.23 billion years ago

So, this is not questioning that there is a size of asteroid that would make us extinct. The issue is, how large is it? There's an enormous difference between a 10 km and a 48 km diameter asteroid. It's more than 100 times the mass, if similar in density. Luckily we haven't had one of those for over 3 billion years.
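(As a quick back-of-envelope check of that ratio - a sketch assuming both bodies are spherical and of equal density, not taken from any source - mass scales with the cube of diameter:)

    # Back-of-envelope check: at equal density, mass scales with the cube of diameter.
    d_small_km = 10.0   # Chicxulub-class impactor
    d_large_km = 48.0   # the 48 km (30 mile) case discussed above
    mass_ratio = (d_large_km / d_small_km) ** 3
    print(f"a {d_large_km:.0f} km body has ~{mass_ratio:.0f} times the mass of a {d_small_km:.0f} km body")
    # prints roughly 110x, consistent with "more than 100 times the mass"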

As for a 1 km asteroid, that is normally taken to be the smallest asteroid (approximately) that would have some global effects. E.g. slight global cooling or whatever.

So, I think the correct figure here is somewhere between 10 and 48 km but significantly over 10 km. The 48 km event is definitely an extinction level event, only survivable by survivor colonies either in the ocean depths or on the Moon or some such. They would also return to a sterilized Earth which they would have to reinhabit with Earth life from seeds or whatever they have.

I haven't, though, found a WP:RS on this, and it may be a bit awkward to source it without WP:OR. The Nick Bostrom quote I think is dubious and probably wrong. He is not himself the best qualified person to make this assessment, especially not without any reasoning to explain his figure. Indeed, to get a really good assessment would probably be a multidisciplinary affair involving experts in several research areas working together. But he doesn't give another source either. So, where can we find a better source? Robert Walker (talk) 00:23, 13 February 2018 (UTC)

It matters where and how it strikes. The dinosaur asteroid struck in a shallow coastal sea with deep mud, causing a huge amount of material to be flung up into the atmosphere; had it hit deeper water, or solid bedrock, it wouldn't have been as bad. It struck at an angle that maximized material ejection. Finally, it triggered volcanic eruptions globally based on where it struck the plates, causing massive CO2 release. Size isn't everything. -- GreenC 02:18, 13 February 2018 (UTC)
Yes, that's true of course. There is discussion over the details for Chicxulub, but agreed on the general point that where it strikes, and the speed, of course matter.
But you are talking about things that may have made the Chicxulub impactor worse than a typical 10 km impact, which would suggest that other 10 km impactors would be less likely to cause extinctions, if anything. It's not at all a proof that a 10 km asteroid can cause extinction of humans. It would be WP:OR anyway, whatever we might try to figure out here by discussing it amongst ourselves, unless we can find a source to cite. Nick Bostrom gives a range of sizes in kilometers, which is what you'd expect, with smaller ones requiring special conditions for extinction and large enough ones always leading to extinction of humans. That I think is correct. But I don't believe the numbers he has there. Instead of 3 - 10 km, the lowest number of the range would be greater than 10 km, I'd have thought, given that the Chicxulub impactor is about as bad as it gets for a 10 km asteroid and didn't lead to extinction of animals that are, if anything, less robust than a human with our knowledge, fire, clothing, and ability to cultivate crops and use animal husbandry.
Hope that's a bit clearer. I think we need to find a better source than him on this topic. Do you agree? Or do you think he is an adequate source, and if so, perhaps you can say in more detail why you think he is a WP:RS on this? Robert Walker (talk) 03:27, 13 February 2018 (UTC)

Update on this - found a WP:RS on risk of 10% of humanity killed by an impact

I have found a WP:RS, though it is a bit out of date, on the risk of 10% of humanity being killed by an impact. It's David Morrison's paper from 1994, "Impacts on Earth by asteroids and comets: assessing the impact" [2]. It's based on the size of asteroid large enough to cause a significant impact winter. He says "We adopt impact by a 1.5 km diameter stony object with a yield of ~ 200,000 MT as the nominal threshold value for a global catastrophe".

That's not about human extinction, which would also be far harder to characterize. I'm wondering if it is asking too much to try to set a size range for human extinction, as it depends so much on the technology. For instance, what about nuclear submarines? The idea is a nuclear submarine stocked with food for several years, with several in different places so that at least some would not be destroyed by the impact tsunami. See Aquatic refuges for surviving a global catastrophe. At any time there would be many nuclear submarines anyway that could help with rebuilding society in all except the very worst of impacts, which would surely have to be far worse than a 10 km impactor.

Just a suggestion, one possibility is to just replace this sentence with one about the threshold size for global catastrophe - citing David Morrison for now, until we find a more recent cite?? Just an idea. Robert Walker (talk) 04:36, 13 February 2018 (UTC)

I don't think there's a contradiction; I read Bostrom as saying "probably also higher than 3-10 km" and he cites The Impact Hazard in the paragraph above it, which is also from Morrison 1994. I just added the Impact Hazard to the article.
Okay, it reads a lot better now, to my eye. One other thing on re-reading it: where it talks about long period comets, it should probably mention Comet Swift–Tuttle. In 4479 AD it will have a close encounter of less than 0.05 AU, with a probability of impact of 1 in a million[1]. At 26 km in diameter it is significantly larger than the Chicxulub impactor - as an example, if that impactor was 10 km across and of similar density, Swift–Tuttle's mass would be around 17.5 times larger.
If there's a source that mentions it in the context of catastrophic risks, feel free to add it. Rolf H Nelson (talk) 06:01, 14 February 2018 (UTC)
We can just take the text from the page Comet Swift-Tuttle and edit it down. That page says:
"It is the largest Solar System object that makes repeated close approaches to Earth with a relative velocity of 60 km/s.[2][3] An Earth impact would have an estimated energy of ≈27 times that of the Cretaceous–Paleogene impactor.[4] Comet Swift–Tuttle has been described as "the single most dangerous object known to humanity".[3]" Robert Walker (talk) 14:00, 14 February 2018 (UTC)
Verschuur is fine as WP:RS. Keep in mind non-summary material can go in the child articles, Asteroid-impact avoidance and Impact event. Rolf H Nelson (talk) 05:13, 16 February 2018 (UTC)
Minor quibble on the Earth crossers: the List of Aten asteroids (semi-major axis less than Earth's) shows the largest as 2 - 7 km, so much less than 10 km. However, I think those are just estimates based on their brightness and could be wrong - at least I assume so with such a large size range - so I'm not sure you can say definitively that there is no 10 km diameter Aten asteroid if there happens to be a particularly dark large asteroid there. For the Apollos, 1862 Apollo is only 1.5 km - not sure if it is the largest Apollo, but if it is, it's well within the range.
The source for "No sufficiently large asteroid currently exists in an Earth-crossing orbit" for human extinction seems to me to make a reasonable case, but feel free to weaken it to "according to Hamilton" or state "Probably no sufficiently large asteroid currently exists in an Earth-crossing orbit" if you disagree. Rolf H Nelson (talk) 06:01, 14 February 2018 (UTC)
Okay, sorry I don't have access to the book, but this is an article from it by Thomas Ahrens[3] - it's a collation of articles by multiple authors, and it says "the largest Earth crossing asteroids have diameters approaching 10 km", which seems a reasonable summary still today (that's back in 1994). That might be a better thing to say there. I think "sufficient size to cause human extinction" is probably best avoided, as there is almost nothing really reliable in the literature on what size of asteroid could cause human extinction. It's hard to define what would; it would need to be multiple disasters, not just a single one like an impact, but other things combining with it in a cumulative way, and even then it's hard to see how it would lead to complete extinction - I just haven't seen any detailed treatment of this anywhere for an impact of size 10 km or so. Unless we can find a WP:RS on this. Or a WP:RS on the difficulty of estimating what size of impact would cause extinction. There just doesn't seem to be much published on this either way that I've seen. Well, not found it yet anyway. I've written a number of blog posts on it myself, and nobody has mentioned any sources to me either outside of my own work, which is not a WP:RS as it is self published and I'm no expert :).
I would question the 10^-8 figure as out of date. That was the figure most often given for the chance of a 10 km asteroid or comet hitting Earth per century (now of course reduced to zero for this century for near-Earth objects, leaving only long period objects). Not for comets alone, if I understand right. It's based on the intervals between 10 km impacts on Earth, I think. It is really hard to get an estimate for comets. They are much rarer than asteroids at present (though at times, maybe after break up of a Centaur, they may be more common for many thousands of years). The closest recent flyby of a comet is Lexell's Comet in the eighteenth century, at six times further away than the Moon, and a tiny fragment of a few meters at a similar but lesser distance last century. Probably orders of magnitude rarer than a large asteroid flyby, but we just don't have enough data, and I've yet to find anyone who has published an attempt at an estimate of the probability for comets. But it is significantly smaller than that 10^-8. We would also have several years of warning; even a 1 km comet would give us more than two years of warning, by analogy with comet Siding Spring. Robert Walker (talk) 14:00, 14 February 2018 (UTC)
I think it would be good to say that there can't be any more undetected Earth crossers of 10 km or larger - assuming that is true. We need to find a WP:RS for that; it is often said, but writing this I realize I'm not sure what the basis is, so I should check.
Also it might be worth mentioning that there are larger Earth-approaching but not Earth-crossing asteroids which still count as NEAs, though there is no risk of them hitting Earth at present. The largest Amor asteroid is 1036 Ganymed, a Mars crosser approaching Earth from further away but not crossing our orbit. It's well over 30 km in diameter, but with no risk of hitting Earth at present of course. The second largest NEA, also an Amor, is 433 Eros at a mean diameter of 16.8 km. The largest Atira (always interior to Earth's orbit) is 163693 Atira at around 4.8 km in diameter, with its diameter determined to within 1 km, so it is well under 10 km. Robert Walker (talk) 06:01, 13 February 2018 (UTC)

References

  1. ^ Chambers, J. E. (1995). "The long-term dynamical evolution of Comet Swift–Tuttle". Icarus. 114 (2). Academic Press: 372–386. Bibcode:1995Icar..114..372C. doi:10.1006/icar.1995.1069.
  2. ^ Weissman, Paul R. (2007). "The cometary impactor flux at the Earth". In Milani, A.; Valsecchi, G.B.; Vokrouhlicky, D. (eds.). Near Earth Objects, our Celestial Neighbors: Opportunity and Risk; Proceedings IAU Symposium No. 236, 2006. Vol. 2. Cambridge University Press. pp. 441–450. doi:10.1017/S1743921307003559. Archived from the original on 2009-08-15. Retrieved 2009-08-13.
  3. ^ a b Verschuur, Gerrit L. (1997). Impact!: the threat of comets and asteroids. Oxford University Press. pp. 256 (see p. 116). ISBN 978-0-19-511919-0.
  4. ^ This calculation can be carried out in the manner given by Weissman for Comet Hale–Bopp, as follows: A radius of 13.5 km and an estimated density of 0.6 g/cm3 gives a cometary mass of 6.2×1018 g. An encounter velocity of 60 km/s yields an impact velocity of 61 km/s, giving an impact energy of 1.15×1032 ergs, or 2.75×109 megatons, about 27.5 times the estimated energy of the K–T impact event.
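(For what it's worth, the arithmetic in note 4 can be re-run directly from the numbers stated there; the following is only a consistency check under those stated assumptions, not an independent source:)

    # Consistency check of note 4, using only the figures stated in the note.
    import math
    radius_cm = 13.5e5                 # 13.5 km radius
    density_g_cm3 = 0.6                # assumed cometary density
    mass_g = (4.0 / 3.0) * math.pi * radius_cm ** 3 * density_g_cm3   # ~6.2e18 g
    impact_velocity_cm_s = 61e5        # 61 km/s
    energy_erg = 0.5 * mass_g * impact_velocity_cm_s ** 2             # ~1.15e32 erg
    energy_megatons = energy_erg / 4.184e22    # 1 megaton TNT = 4.184e22 erg
    kt_impact_megatons = 1e8           # ~10^8 MT, the estimate the note uses for the K-T impact
    print(f"mass ~{mass_g:.1e} g, energy ~{energy_erg:.2e} erg "
          f"(~{energy_megatons:.2e} MT, ~{energy_megatons / kt_impact_megatons:.1f}x K-T)")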

An ordered format for each specific risk

1. describe the nature/cause of the risk

2. describe its potential consequences

3. describe opinions about its validity and likelihood

4. describe proposed strategies for preventing it

5. describe proposed strategies for mitigating its effects

(insofar as there are sources for any of the above) K.Bog 21:40, 26 August 2018 (UTC)

Lede issue

"A global catastrophic risk is a hypothetical future event which could damage human well-being on a global scale" - this assessment is based on a single authors opinion, from Nick Bostrom, who's name is listed 24 times in this article. There are other opinions, from more authoritative sources, which argue that we already face a global catastrophic risk with climate change. prokaryotes (talk) 19:00, 6 January 2019 (UTC)

My preference is to keep it as-is, as a list of collapse scenarios, and not pick and choose favorites. They all have proponents and critics. -- GreenC 02:32, 7 January 2019 (UTC)

Article often uses the same source

If you search the page for Bostrom you get about 24 entries, including, until recently, his blog in the external links section, a link in the lede to one of his books, etc. This content has multiple issues which we need to discuss on a case by case basis if someone wants to re-add it. prokaryotes (talk) 03:11, 7 January 2019 (UTC) Before you comment, read this review by an MIT author (No, the Experts Don't Think Superintelligent AI is a Threat to Humanity). prokaryotes (talk) 03:14, 7 January 2019 (UTC)

There's nothing wrong with Bostrom. You seem to be deleting most of his material simply because he is Bostrom. He is well known in this field, one of the first and most significant thinkers and writers; the article would be strange without it. -- GreenC 03:29, 7 January 2019 (UTC)
No, I did not delete most of his material, and that he is one of the first and most significant thinkers and writers is an exaggeration based on your opinion. If this were true, you should be able to find a quality source which states that. If you want to discuss article space content, please explain why we should link his homepage in the external links section, or why a redirect to his stub book entry should be above the lede. prokaryotes (talk) 03:35, 7 January 2019 (UTC)
You removed his homepage from EL, and I didn't add it back. Why are you making that an issue? Bostrom is being used as an inline citation in the lead, is that a problem? -- GreenC 13:47, 7 January 2019 (UTC)
I point this out because of your argument, "You seem to be deleting most of his material simply because he is Bostrom." We should use the most authoritative sources, reliable secondary sources, instead of using self published references. prokaryotes (talk) 14:45, 7 January 2019 (UTC)
Like I said above, every scenario has its proponents and critics - so what? We are listing multiple POVs here. You seem to be aiming for a singular POV - that global warming is the most dangerous and that AI is not so dangerous. That is not presenting multiple POVs; it is framing one as being the most accurate, which is biased. -- GreenC 03:32, 7 January 2019 (UTC)

This is what inline attribution is for, but beware of doing it so much it gives undue weight to any one person. NewsAndEventsGuy (talk) 13:55, 13 January 2019 (UTC)

Regarding the particular edit, I agree with removing the reference to the source (Bostrom) here. First, the author is introduced later by full name "Nick Bostrom", and his book is mentioned later, too. To reference him here by last name only is awkward, as the reader thinks "who the heck is Bostrom?". More to the point, though, this is an inappropriate academic style. Specifically, it's called an "author prominent citation". See [this guideline]: "In author prominent citations the focus is on the author as the source of some original idea or information." For me, the debate here is whether the focus is on the author. I would agree that even if Bostrom is a leader in this field, the focus should not be on him and his contribution, but on the topic. Wikipedia uses author prominent citations much less than academic papers, where scholars are participating in an ongoing dialog among peers in their field. Coastside (talk) 16:11, 23 January 2019 (UTC)

Bostrom survey

This 2008 poll, presented in a table in the section called Likelihood, uses an untrustworthy link for the reference, and it has been pointed out how Bostrom's use of surveys can easily lead to wrong results. Because this polling appears a) outdated, b) too selective, and c) poorly sourced, it should not be part of the article, at least not in the form of a table. prokaryotes (talk) 03:49, 7 January 2019 (UTC)
Yeah, OK, 2008 is 11 years ago; it is dated. So let's replace it with something, right? It's not hard to find good sources; the Global_catastrophic_risk#Organizations section lists the major players, and the oldest and one of the most respected is the Bulletin of the Atomic Scientists. Just a few days ago they released this: "Quantum computing, biotech, and climate change among threats of most concern to US". That is US-specific, but anyway, these orgs are releasing studies of this nature all the time. -- GreenC 13:47, 7 January 2019 (UTC)
The poll results are also part of the section we are discussing here; the topic is human extinction. I would suggest removing the poll, because it uses up so much space, improving the reference for the survey mention in the written segment, and then trying to briefly point to the related, more in-depth articles for this subject, instead of selectively presenting poll results in tables. prokaryotes (talk) 14:55, 7 January 2019 (UTC)

I have reinstated the poll and table, as well as a few other items whose omission has not been justified or explained. It seems many edits are being carried out 'as per talk page' without a consensus being reached on the talk page in the least. I would therefore ask that certain editors pay some consideration before making wholesale and arbitrary alterations, or leave the page alone completely. AbrahamCat (talk) 08:12, 13 January 2019 (UTC)

delete reference to this small but illustrious group of experts' informal showing of hands The Bostrom "survey" is not a survey at all. Someone passed out some paper or asked for a show of hands among "a small but illustrious group of experts". So conflating this into a "survey" gives it undeserved gravitas. Much worse, there is a difference between
  • Risk that the ______ disaster will happen, and
  • Risk that if the ______ disaster happens, humans will go extinct
This source does not attempt to give numbers for the first one. It gives numbers for the second. Our article implies the numbers are for both. In addition, the source adds up their numbers (if this happens there is X percent chance of extinction, if that happens there is Y percent chance). I don't remember how to add probability, but something seems off about the math behind the 19% sum. And even if the math is fine, when we cite it we falsely imply there is that much risk that this will happen. But again, the source didn't assign likelihood numbers, and the source didn't even include all the biggest threats, like climate change. NewsAndEventsGuy (talk) 11:33, 13 January 2019 (UTC)

I believe the citation is notable and should be included in the article. The reason for this is that, contrary to the claim by Newsandeventsguy, the numbers add up perfectly well, and climate change, while a long term environmental risk, poses zero chance of causing human extinction by 2100. Therefore I shall continue to reinstate it until a proper attempt at consensus is reached on this talk page. AbrahamCat (talk) 13:00, 13 January 2019 (UTC)

"I shall continue to reinstate it until a proper attempt at consensus is reached" - that's not how this place works. If you want to know the probability of us going extinct by (for example) an explosion in a marshmallow factory, you take the probability of the explosion occurring TIMES the probability it will bring about extinction. For each number in the table, there is no likelihood number to multiply with the extinction risk for that event given in the table. And even if there were, you can't just add the individual numbers to get an end result that we might go extinct. If the WP:PRIMARY source did this, then I suppose we can regrettably say right in the text that they did the math this way, but we should look hard for other sources that give the paper a critical review. If it turns out there are none (either favorable or condemning) then the PRIMARY source itself is so invisible as to deserve no promotional help from us. Please address the substance of the criticisms before edit warring to put it back in. NewsAndEventsGuy (talk) 14:09, 13 January 2019 (UTC) PS human extinction from global warming has indeed been floated, but I haven't seen anything saying "before 2100". For example, here. NewsAndEventsGuy (talk) 14:12, 13 January 2019 (UTC)
Er, yes, that is how this place works. You need to attempt to discuss with others on the talk page before making edits and deletions of such an unjustified and arbitrary nature. None of the points of your challenge is a reason for removal of a valid and sourced piece of text, and your points about the mathematics of the table are not applicable to the text. AbrahamCat (talk) 22:16, 13 January 2019 (UTC)
See WP:Arguments to avoid in discussions and WP:Closing discussions. Notice I'm not opposed to talking about the 2008 thing. You seem determined to ONLY talk about it YOUR way. Better than lines in the sand is if we work on coverage that complies with WP:NPOV. Just reiterating the line in the sand helps no one. NewsAndEventsGuy (talk) 22:23, 13 January 2019 (UTC)
Hi. On further reflection, it seems the reason you've removed the table is mistaken, so I'm reinstating it. If the table were not the product of a wide range of expert opinion, and if it were not a clear, concise summary of the previous paragraphs, then its removal might be justifiable. Likewise if the data were flawed in some way, that's fine, but the table is not supposed to multiply the numbers with the extinction risks, given that the final figure on each row is the stated extinction risk given by the experts, so there's nothing wrong with the maths of it either. Please take greater care to study the sources and material at hand before descending into edit warring and accusing others of doing the same (the table has been removed three times now on the basis of what seems to be a simple misunderstanding). AbrahamCat (talk) 19:58, 23 January 2019 (UTC)
AbrahamCat, the consensus here is to remove that particular table. I don't understand why you want it mentioned; pick something newer maybe, but a flawed, outdated 2008 poll - which is already part of the article (see the paragraph above the poll) - is not a good choice. prokaryotes (talk) 20:27, 23 January 2019 (UTC)
Additionally, the poll does not mention climate change, which has regularly been cited by experts as a top catastrophic risk in recent years. prokaryotes (talk) 20:31, 23 January 2019 (UTC)
Hi, I have opened a dispute resolution segment [[4]]. I'm asking for a further opinion because at the moment I simply don't have the Wikipedian legal knowhow to take on two persistent and experienced editors. You and NewsAndEventsGuy are welcome to post your complaints there. On the table/poll itself, while it admits it was informal, and it is quite old, as tables go it is quite compelling, and on the basis that it is a survey of experts in the field it still carries relevance. I do not dispute the dangers of anthropogenic climate change, but I am yet to see a credible article that claims it may bring about human extinction in the next century or two, so the fact that this is not included is not remarkable. I hope that clarifies some of my thinking behind this. AbrahamCat (talk) 21:11, 23 January 2019 (UTC)

First, the paragraph introducing the table properly expresses caution about over-interpreting the data in the table, which is good. However, I would delete the very first line: "Given the limitations...". Just start with the finding: "In 2008, an informal survey...". The rest of the paragraph is sufficient to give the data context. Second, I don't particularly like the table either, for a variety of reasons. Since the results are informal, it's reasonable to capture the overall result and say humans have a 1 in 5 shot of going extinct by 2100. But using precise percentages like 0.03% for nuclear terrorism is meaningless. It implies a degree of precision that is not commensurate with the survey. You might as well say the respondents surveyed estimated humans have a zero chance of going extinct on account of a terrorist blowing everyone up. Why put this in a table? It's not meaningful. At minimum, this table should drop any row with less than 1% from the survey. Also, there is indeed a math problem. The numbers fall short of 19%, which means some significant risks are not included. They should actually add up to more than 19% to account for the possibility of overlapping events, such as "all wars" overlapping with "nuclear war" and "Superintelligent AI" overlapping with "engineered pandemic" (an AI might use a pandemic to do away with everyone). The fact that the math doesn't make sense is an issue. I think the main problem with this table is that in tabular form the data appears scientific and categorical, when it really isn't. It's a finger-in-the-wind guess. It's OK to have the paragraph summarize the finding, but the table presentation gives it too much weight, makes it appear as a tool for looking up the likelihood of meaningful scenarios, and suggests it is a more significant result than it actually is. Presentation matters. Coastside (talk) 22:22, 23 January 2019 (UTC)
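(A minimal numeric sketch of the point about combining per-scenario figures - every number below is invented purely for illustration, and none of them come from the 2008 survey:)

    # Illustrative only: invented numbers, not from the 2008 survey.
    # Each scenario needs BOTH the chance the event happens and the chance it
    # causes extinction given that it happens; the table only reports the latter.
    scenarios = {
        "scenario A": (0.10, 0.02),  # (P(event), P(extinction | event))
        "scenario B": (0.05, 0.10),
        "scenario C": (0.30, 0.01),
    }
    per_scenario = {name: p_event * p_ext for name, (p_event, p_ext) in scenarios.items()}
    # Simply summing per-scenario risks ignores overlap; treating the scenarios as
    # independent, the combined risk is 1 minus the chance that none of them occurs.
    naive_sum = sum(per_scenario.values())
    p_no_extinction = 1.0
    for p in per_scenario.values():
        p_no_extinction *= (1.0 - p)
    combined = 1.0 - p_no_extinction
    print(per_scenario)
    print(f"naive sum: {naive_sum:.4f}, combined under independence: {combined:.4f}")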

How we characterize the 2008 conference report

This is a response to a new thread on the same topic, I'm answering here per WP:MULTI.
(A) I look at behavioral commentary instead of a focus on content, and at leading talk page headings like this one, and shake my head. At issue is "Global Catastrophic Risks Survey", by A. Sandberg and N. Bostrom, Technical Report #2008-1, Future of Humanity Institute, Oxford University: pp. 1-5
(B) False attribution. We can't attribute anything to the institute itself. This is because page 1 says, right at the bottom, "The views expressed herein are those of the author(s) and do not necessarily reflect the views of the Future of Humanity Institute"
(C) Neutrality-Who? See WP:ATTRIBUTEPOV. This source does not say they did a "survey", but rather an "informal survey" of a "small but illustrious group of experts". If we're going to report the opinion of this group to comply with ATTRIBUTEPOV we must also provide the context that answers who? In this case, a "small but illustrious group of experts" or, if there is an RS reason for a different way to answer who? that could also work. But just implying gravitas and rigor of a serious academic research survey is false and misleading.
(D) Neutrality-context and weight. The source itself says they (1) left out a lot of potential risks, and (2) to the unwary reader "survey" can easily be read to imply a rigorous scientific survey of a representative group of international thinkers, planners, military contingency experts and so forth. Instead, they got a "small but illustrious group of experts" together and took a quickie straw poll, an "informal survey", during the week(end) - e.g., a show of hands, back of the envelope. The source also says the answers are likely rife with cognitive biases and should be taken with a grain of salt. If we talk about this project at all, to comply with WEIGHT and UNDUE etc. we must explain this context.
(E) YES! By all means we can neutrally talk about their project - if and only if we accurately describe the context, which we have not been doing.
(F) So in answer to another editor's NPA-violating question "What's going on", in a separate thread, the answer would be: edit warring instead of discussing the source in the context of our policies and guidelines in a collaborative way, and being patient while we try to form a consensus. Please address the substance of these criticisms, not me personally, and in that manner we can go forward. Thanks.
PS If you're just joining us here, my immediate remarks (B-F above) are about characterizing Bostrom's post-conference opinion paper and are one thing. My comments criticizing how we use the content are something else, and appear in the prior subsection; the pinpoint diff is here.
NewsAndEventsGuy (talk) 17:36, 13 January 2019 (UTC)

Artificial intelligence

The section on artificial intelligence goes to great lengths to promote Bostrom's view of the threat from A.I., citing his book, and then goes on with, "..Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have echoed these concerns". However, it was actually Hawking who warned of the possible dangers from AI long before Bostrom. Again, this doom and gloom about AI has been criticized as wrong. Yes, we should mention ideas involving AI, but there are more sources than Bostrom, and the article should make clear who first warned of it. prokaryotes (talk) 03:56, 7 January 2019 (UTC)
The section could also mention Kurzweil et al., i.e. see 2045 The Singularity, emphasis "..cannot be accurately predicted." prokaryotes (talk)
As for "promoting" Bostrom.. again this is unwarranted bad faith. More likely when this material was written the person(s) doing so used Bostrom as their source. I'm a little concerned about your vehement anti-Bostromism it doesn't suggest a neutral editor re: Bomstrom. The thing to do is just improve the article, replace it with something better. I'm not familiar with the historiography of the AI-threat but if Hawkings was first the article should say so. -- GreenC 14:02, 7 January 2019 (UTC)

Bostrom image

The section Classifications contains an image from a Bostrom 2013 paper, which is interesting; however, it actually violates WP:SYNTH if you compare it with the original. Even setting that aside: a) the article should not be primarily defined by a single source, one which can at least partly be considered controversial or even fringe; b) there are likely much easier to read classifications of this kind; and c) I find it confusing - it is unclear why this definition, since a global risk can develop into an existential risk, but then you have some types of risks only for a specific entry. prokaryotes (talk) 04:02, 7 January 2019 (UTC)
The graph (the scope/intensity grid from Bostrom's paper "Existential Risk Prevention as Global Priority") is interesting only because it's hard to figure out its purpose! Raquel Baranow (talk) 04:56, 7 January 2019 (UTC)

Pings

Pinging recent article authors, @Funkquake:, @Drbogdan:, @Niceguyedc:, @Zefr:, @Adam Hauner:, @Julietdeltalima:, @Paine Ellsworth:, @Richard001:, @Hyperbolick:, @Kbog:, @PaleoNeonate:, @Sam Sailor:, @StarWarsGlenny:, @Davearthurs:, see recent above discussions - input needed. prokaryotes (talk) 20:38, 7 January 2019 (UTC)

@Prokaryotes: Thank you for your consideration re input - seems my most recent contribution (10:15, 26 November 2018) to the main article was limited to a possible "asteroid collision" (see below) - nevertheless - I may present more input to the article and/or talk at my next opportunity:

Copied from => main article (version - 15:41, 7 January 2019):

In April 2018, the B612 Foundation reported "It's a 100 per cent certain we'll be hit [by a devastating asteroid], but we're not 100 per cent sure when."[1][2] Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet.[3][4][5] In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare.[6][7][8][9][10] According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.[11]

References

  1. ^ Harper, Paul (28 April 2018). "Earth will be hit by asteroid with 100% CERTAINTY – space experts warn - EXPERTS have warned it is "100pc certain" Earth will be devastated by an asteroid as millions are hurling towards the planet undetected". Daily Star. Retrieved 23 June 2018.
  2. ^ Homer, Aaron (28 April 2018). "Earth Will Be Hit By An Asteroid With 100 Percent Certainty, Says Space-Watching Group B612 - The group of scientists and former astronauts is devoted to defending the planet from a space apocalypse". Inquisitr. Retrieved 23 June 2018.
  3. ^ Stanley-Becker, Isaac (15 October 2018). "Stephen Hawking feared race of 'superhumans' able to manipulate their own DNA". The Washington Post. Retrieved 26 November 2018.
  4. ^ Haldevang, Max de (14 October 2018). "Stephen Hawking left us bold predictions on AI, superhumans, and aliens". Quartz. Retrieved 26 November 2018.
  5. ^ Bogdan, Dennis (18 June 2018). "Comment - Better Way To Avoid Devastating Asteroids Needed?". The New York Times. Retrieved 26 November 2018.
  6. ^ Staff (21 June 2018). "National Near-Earth Object Preparedness Strategy Action Plan" (PDF). White House. Retrieved 23 June 2018.
  7. ^ Mandelbaum, Ryan F. (21 June 2018). "America Isn't Ready to Handle a Catastrophic Asteroid Impact, New Report Warns". Gizmodo. Retrieved 23 June 2018.
  8. ^ Myhrvold, Nathan (22 May 2018). "An empirical examination of WISE/NEOWISE asteroid analysis and results". Icarus. Bibcode:2018Icar..314...64M. doi:10.1016/j.icarus.2018.05.004. Retrieved 23 June 2018.
  9. ^ Chang, Kenneth (14 June 2018). "Asteroids and Adversaries: Challenging What NASA Knows About Space Rocks - Two years ago, NASA dismissed and mocked an amateur's criticisms of its asteroids database. Now Nathan Myhrvold is back, and his papers have passed peer review". The New York Times. Retrieved 23 June 2018.
  10. ^ Chang, Kenneth (14 June 2018). "Asteroids and Adversaries: Challenging What NASA Knows About Space Rocks - Relevant Comments". The New York Times. Retrieved 23 June 2018.
  11. ^ U.S.Congress (19 March 2013). "Threats From Space: a Review of U.S. Government Efforts to Track and mitigate Asteroids and Meteors (Part I and Part II) – Hearing Before the Committee on Science, Space, and Technology House of Representatives One Hundred Thirteenth Congress First Session" (PDF). United States Congress. p. 147. Retrieved 26 November 2018.
Hope this helps in some way - in any case - Enjoy! :) Drbogdan (talk) 21:10, 7 January 2019 (UTC)

What's going on?

I look at edits like this and shake my head. There seems to be some form of extreme PC going on, where any mention of a particular person or organization is treated as being biased and expunged. There's nothing wrong with mentioning who said it, and what they said. It doesn't make the article biased, it makes it accurate and precise. We have to report what people say, there is no other way to do it. That's what this article is, a reporting about what people say. Removing those things or their attribution is silly and does not improve the article. -- GreenC 15:54, 13 January 2019 (UTC)

Per the WP:TPG, headings should be neutrally phrased and not used for implied NPA violations. Also per the TPG, discussions should be kept in one place (see WP:MULTI). I have replied above; the pinpoint diff is here. NewsAndEventsGuy (talk) 17:12, 13 January 2019 (UTC)
I added a comment on this particular edit above. I suggest not continuing here, in order to keep discussion in one place (as per NewsAndEventsGuy's comment). Coastside (talk) 16:11, 23 January 2019 (UTC)

That chart

The chart in question

I'm proposing to remove this chart -- will do, if not gainsaid here -- for two reasons. First, it is my belief that, technically, it is not free of copyright (and also can't be used under fair use), see here for details. Wikimedia Commons has not yet ruled on this, may not, and may rule that it's free; whatever, Commons is people and they get these things wrong a lot. I'm not inclined to take the judgement of any other organization (especially as they are frequently wrong, and insufficiently rigorous) over our rules -- not Commons, or anyone else. But even more importantly, the chart is wrong and makes no sense.

Specifically, what is the "Aging?" entry? "Aging" makes no sense, aging of what? I have read thru the original paper where the chart appears, looking for explanation -- there isn't any. This is probably why someone at some point in the chart's convoluted history appended a "?" to that entry. Including a chart with a meaningless and confusing entry, and adding a "?" to indicate "We don't know what this means either" is not good. And "Well, but most of it is correct and makes sense" is not a good argument for including material generally.

The chart could be edited (if it were free, which I'm pretty sure it isn't), but how and by whom? Anyone editing the chart would be doing original research. If we do want a corrected chart, at least let a subject matter expert here build one from scratch on their own. Herostratus (talk) 14:14, 16 June 2019 (UTC)

I'm a participant on Commons who wants to keep it. However, as I noted in that conversation, this chart may or may not be under copyright, and the evidence is sketchy either way. Regardless, the only reason we have it here is because it is a Risk Assessment Matrix (RAM), a particular type of graph, which is an excellent way to quickly and visually understand risk of any kind, in particular the kind of risk this article discusses. RAMs are commonly used, and there is no copyright on them any more than one can copyright how a bar chart works. So all we need to do is make our own RAM. The problem there is someone will claim original research. But the new chart can reference the Bostrom chart as evidence that RAMs are used when talking about global risk. For example, the new chart would replace "Loss of one hair" with "Mosquito bite". Someone might still claim it is too close a paraphrase, to which I don't have an answer, because there's no way to make a RAM much more different, and there's no way to prevent someone from making their own RAM on this topic just because Bostrom did at one point. It's nothing more than a chart showing a continuum of risk from small to large, like the powers of 10 video. -- GreenC 14:31, 16 June 2019 (UTC)
All this is fine with me. My real main objection is "Aging" - it makes no sense - and the "?" after it, which makes us look bad. Making a new similar chart is fine with me personally. As to just replacing "Aging?" with something, I defer to your judgement whether that is allowable per copyright or whatever. Herostratus (talk) 02:07, 17 June 2019 (UTC)
re: Aging is in pink, meaning it is a 'catastrophic risk'. Possibly it refers to demographics. Bostrom is probably referring to the aging of populations creating a reverse pyramid shape, like what is happening in Japan - not enough young people to care for the elderly, create GDP, tax revenue etc. to support society. -- GreenC 02:40, 17 June 2019 (UTC)
And that's supposed to be a terminal-level global catastrophic risk, worse than the destruction of the ozone layer? I can't buy that... also I don't think Bostrom made the chart, I think a grad or post-grad assistant did... an earlier (2002) version, which may have been made by Bostrom himself, just has an "X" in that spot. I'm also not sure what "X" is supposed to mean (the paper doesn't make it clear), but one possibility is that it's a placeholder until he could think of something, and he forgot to before the paper was sent. Another possibility is it just means "nothing goes here". It could mean "end of the world", but why "X"?
Anyway, the "X" has been moved up to a level that was added sometime later (by whom I'm not sure), trans-generational, where it apparently does mean "end of the world", or at any rate "existential risk". But I mean what kind of example is "X" anyway? Does it mean "there's actually no such thing, or at any rate any example we could think of"? (I don't see why like "global thermonuclear war" or "asteroid the size of Manhattan" or something wouldn't go, but I'm not up on the subject.) Is "X" just an artifact place-holder mark? Does it mean "too terrible to say out loud"? Or what? If there's no credible example of an existential risk, it should say "none"; if there is, it should give it instead of saying "X". Another reason the chart is no good. Herostratus (talk) 19:09, 17 June 2019 (UTC)

Mercury "could" obliterate life on Earth?

Even if it were travelling at a velocity of 10% of its own diameter per second, it would have a kinetic energy of 3.9x10^34 J, which is around 200 times the gravitational binding energy of Earth. I think it unlikely any life would survive that. 146.199.221.249 (talk) 14:57, 10 August 2019 (UTC)
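(A rough way to check that figure, taking Mercury's mass as roughly 3.3×10^23 kg and its diameter as roughly 4,880 km, and using the commonly quoted approximate value of 2.5×10^32 J for Earth's gravitational binding energy - these inputs are assumptions for the sketch, not figures from the comment above:)

    # Rough check of the kinetic-energy figure quoted above.
    mercury_mass_kg = 3.30e23                  # approximate mass of Mercury
    mercury_diameter_m = 4.88e6                # approximate diameter of Mercury (~4,880 km)
    velocity_m_s = 0.10 * mercury_diameter_m   # "10% of its own diameter per second", ~488 km/s
    kinetic_energy_j = 0.5 * mercury_mass_kg * velocity_m_s ** 2
    earth_binding_energy_j = 2.5e32            # commonly quoted approximate value
    print(f"KE ~ {kinetic_energy_j:.1e} J, "
          f"~{kinetic_energy_j / earth_binding_energy_j:.0f}x Earth's binding energy")
    # gives ~3.9e34 J and a ratio of order 10^2, consistent with the comment above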

Article is outdated and not on point

The article relies too much on the assessment of single sources. For instance, is nanotechnology or AI really such a great danger? Basically this is just based on "visions" from a few authors. Additionally, it mixes global catastrophic risks with existential risk. I would think that the consensus today is that the greatest catastrophic risk comes from climate change. prokaryotes (talk) 02:19, 5 January 2019 (UTC)

The charts are useful for understanding collapse. Replace them with something better, don't delete them. Also, the article concerns two types of risks - global catastrophic and existential - but they both involve the same risks; it doesn't make sense to split articles along these terms, as they are the same thing, just a matter of arbitrary degree. -- GreenC 02:43, 7 January 2019 (UTC)
"they both involve the same risks it doesn't make sense to split articles along these terms, they are both the same thing just a matter of arbitrary degree" This claim isn't true - there isn't simply a matter of degree between GCRs and existential risks. Existential risks are a special class of catastrophes that result in the permanent destruction of humanity's long-term potential - this is what makes the special in a number of ways, and why researchers generally study them as a distinct class e.g. existential risks are necessarily unprecedented, and are irrevocable - neither is true of global catastrophic risks. From an ethical perspective, this is what makes existential risks so important, and not merely continuous with other very bad things that could befall us - they are a special class of events where the entire future of humanity is at stake, and from which we can never come back. This is evidenced by the section being 'the moral importance of existential risk' - the considerations raised in this section do not apply to GCRs.
We disagree on this point, but I wanted to thank you (GreenC) for your work on this page in recent years. I think you've done a fantastic job and agree with most of your contributions! I'm excited to work together on improving the page. Vermeer dawn (talk) 13:09, 16 February 2020 (UTC)

'Existential risk' should be its own entry

I wanted to reopen the discussion about the future of this page and ‘existential risk’, which I see were merged a few years back. ‘Existential risk’ deserves its own page, distinct from this one. I understand that the level of overlap between GCR/existential risk made it seem unnecessary to have both pages, but I think the correct response is to have only the ‘existential risk’ page, using this article as a starting point.

1. ‘Existential risk’ is a well-defined concept, that is importantly distinct from ‘global catastrophic risk’ and ‘human extinction.’ This is why existential risk is its own field of academic inquiry - these risks raise a unique set of methodological, ethical, and practical considerations, many of which derive from the fact that existential risks are by definition unprecedented, and irrevocable.

2. Lots of this article is actually about existential risk - e.g. 'Moral Importance of Existential Risk'. The phrase 'existential risk' appears 29 times in this article.

3. ‘Existential risk’ is a much more widely used term (~10x the number of google search results)

Vermeer dawn (talk) 13:15, 16 February 2020 (UTC)

Yes, existential risk should have its own page. GCR should retain its page, as it is a notable concept. WeyerStudentOfAgrippa (talk) 14:19, 16 February 2020 (UTC)

Skeleton plan for a new article on existential risk

Following the above, I'm keen to get started on an existential risk article. I've put together a quick skeleton plan and would welcome any feedback, particularly from more experienced WP users.

• Introduction
• Definition & concept
° competing definitions
° difference from global catastrophic risks (broader class)
° difference from human extinction (narrower class)
• Methodological challenges
° existential risks are unprecedented
° existential risk reduction is a global public good
° cognitive biases and existential risk
• Moral importance of existential risk (adapted from this article)
• Sources of existential risk
°Natural vs. anthropogenic
°Non-extinction risks - e.g. permanent collapse of civilisation; permanent totalitarianism
°Brief list of major risks with links to wikipedia articles
• Estimates of existential risk
° I am aware of the following estimates - 2008 survey; Nick Bostrom; Martin Rees; Carl Sagan; Toby Ord; 2016 AI survey
• Organizations working on existential risk reduction
• History of existential risk
° Early history of thinking about human extinction (Darwin, Kant, Byron etc.)
° Twentieth century - Russell, Sagan, Parfit writing about existential risk with regards to nuclear
° Modern - John Leslie, Nick Bostrom’s coining of the term, recent heightened interest
• Skepticism about existential risk
° Steven Pinker, skeptics of AI risk arguments, etc.

Vermeer dawn (talk) 15:15, 18 February 2020 (UTC)

The introduction should just be the lead section; I'm not sure if that's what you meant. The history section should either be the first section or the second after the definition section. The skepticism section should be titled "criticism". Non-extinction existential risks should be introduced in the definition section. WeyerStudentOfAgrippa (talk) 20:48, 18 February 2020 (UTC)
As a minor interjection, history sections can be placed at the end of an article as well. The more important thing is really that precedents should be respected in order to avoid pointless disputes. Consistency with similar articles is a form of precedent, but usually there is also significant flexibility for precedent to be set by the page's primary author. Sunrise (talk) 11:12, 20 February 2020 (UTC)

Please consider that we already have Human extinction and Global catastrophic risk. Both of these have covered this topic well for 20 years. WP:CFORK is a thing, and simply reframing the issue under a different name isn't going to do much except dilute and confuse. Yeah, sure, one can slice this off into separate articles based on a slightly different scale of perspective, but it just creates a mess of articles to deal with, and eventually someone will come along and try to recombine them. Wikipedia follows what the real world does; we don't create our own vision of things. In the real world, existential risk and global catastrophic risk are generally not treated as separate domains of study. "Brief list of major risks with links to wikipedia articles" - we have that in two articles. "Estimates of existential risk" - we have that in this article. "Organizations working on existential risk reduction" - we have that in this article. "History of existential risk" - this would be a good one to include. -- GreenC 00:39, 19 February 2020 (UTC)

  • This sounds like a very comprehensive plan, and I do think that it is possible to improve Wikipedia's coverage of the area while taking account of User:GreenC's concerns about overlap with existing articles. I have a few suggestions, but I think it is best to work in the context of a new draft. I suggest you start by pasting in your plan above to Draft:Existential risk where we can work on and comment on it. As soon as we have furnished a few sources and figured out how to manage the overlap with existing articles, we can move it into mainspace. — Charles Stewart (talk) 06:06, 19 February 2020 (UTC)
Well, not to repeat myself, but it is unclear that existential risk is a topic separate from what this article is and was intended to be. One could write an essay and pick the word "existential" from sources that use it to make it look like a separate topic, but then we have a false dichotomy. I understand people want to write more about this topic area generally and see an opportunity with the word "Existential", but let's not create something that doesn't exist. Global catastrophic risk and existential risk are interchangeable, fungible terms that people use without hard definitions. It's not as if there are scholars, books and organizations that separate the study of existential risk from that of global catastrophic risk. -- GreenC 15:56, 19 February 2020 (UTC)
  • Hi GreenC. Thanks for all these considerations, and I appreciate your concerns. As I said above, I think this article is pretty good and I don't want to come off as disrespectful to the work of editors on this page over the years.
I do have to disagree about the separation of the topics, and your claims about how these are treated by organisations and researchers. I work in the field, researching existential risk at the University of Oxford - we do not use the terms interchangeably, and keep them very distinct. As you can read in my posts above, there are clear distinctions between the two concepts. It is not merely the scale of perspective - 'existential risks' are necessarily unprecedented, and threaten to destroy the entire future of humanity - these two features generate profoundly different ethical and methodological implications for approaching the topics. To use a crude analogy - consider the differences between being seriously injured and dying. In some sense, these outcomes differ merely in scale, but there is a deep difference between the two. There is also a close link though - since being seriously injured can raise the likelihood of dying. (This is why those of us who research existential risks are also interested in global catastrophic risks.) This link should not be mistaken for interchangeability though.
The fact that the two terms are used interchangeably in wider contexts is a regrettable imprecision. It does not align with how scholars, and organisations working on the topic, treat them. I fear that this WP page continues to contribute to this confusion - particularly the redirect from 'existential risk' to 'global catastrophic risks', and the mixture of sections pertaining to one or the other.
I'm new here and realise I won't get very far on this without convincing the more experienced users of this point, so I'm keen to keep this discussion going before I invest more time in a draft article. I fear I risk repeating myself, though. Maybe you could tell me what would convince you that there is enough conceptual distinction between the two ideas?
Vermeer dawn (talk) 16:30, 19 February 2020 (UTC)
  • Thanks, Bostrom's 2013 paper is very clear on the distinction:
"Existential risks are distinct from global endurable risks. Examples of the latter kind include: threats to the biodiversity of Earth’s ecosphere, moderate global warming, global economic recessions (even major ones), and possibly stifling cultural or religious eras such as the “dark ages”, even if they encompass the whole global community, provided they are transitory ... Risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have not evolved mechanisms, either biologically or culturally, for managing such risks." – Vermeer dawn (talk) 21:35, 19 February 2020 (UTC)
Vermeer dawn If you work at Oxford in this field I certainly respect that. I am no expert and don't want to interfere with development of new perspectives and evolving changes in the field so long as they are broadly understood and not for example a minority view. My concern is how to organize/integrate it into Wikipedia so we don't have overlap (WP:CFORK). The other article to be concerned with is human extinction which also seems to cover existential risk, though not as well as your proposed outline. Perhaps if your article absorbed much of that content we could merge those two into a single article. -- GreenC 16:49, 20 February 2020 (UTC)
Thanks GreenC, I appreciate these concerns. I agree that it will be a challenge to ensure that the three articles - global catastrophic risk, human extinction, and existential risk - complement each other properly and that we limit the overlap, and I'm excited to work with you and the other editors to improve this part of Wikipedia. My own view is that the GCR article is a candidate for removal once there is a good 'existential risk' page - much of the best content here would be better housed elsewhere, and it's not clear that the term, which is not widely used and lacks a crisp definition, warrants an article. I agree that existential risk could absorb much of the content from human extinction, and that the latter article may not need to exist. Thank you for pointing this out. I will continue developing a good draft of the existential risk page, and we can go from there - hopefully this will make the decisions clearer, and we can pick up the conversation about overlaps/redundancies from there. - Vermeer dawn (talk) 18:33, 20 February 2020 (UTC)
As you wrote, "Existential risks should be understood as a special subset of global catastrophic risks" so there would be a need for this article. The term GCR is widely used (see Google) and we have many articles on topics without agreed on definitions. BTW the quoted sentence lacks a citation and reads like a personal essay "should be understood" .. should according to who? -- GreenC 05:11, 21 February 2020 (UTC)
According to whom... Toby Ord, by any chance, who works at the Future of Humanity Institute (the same as yourself) and is about to publish a major book on the topic of existential risk? See our policy on Conflict of Interest. -- GreenC 01:36, 5 March 2020 (UTC)

Geomagnetic reversal

In this edit editor GreenC re-added the section on Geomagnetic reversal. However, the cited reference (1993) does not conclude that there is a global catastrophic risk involved. Apparently this subject is unrelated to the subject at hand, and hence should not be part of this article. I ask editor GreenC to explain his reasoning, since it is so far not backed up by sources. Additionally, the article Geomagnetic reversal does not offer reasoning related to catastrophic risks. There is one study from 1985 cited in regard to mass extinctions, but this argument seems to be unsupported by more recent science. The current consensus appears to be that evidence for extinctions correlated with magnetic reversals is considered "unreliable". prokaryotes (talk) 01:23, 8 January 2019 (UTC)

The topic is risk. Where there is enough fear in the public eye that RSs such as NatGeo report on the risk, that makes the specific phenomenon relevant to this article, even if the risk involved approaches zero. The litmus test should not be a huge nonzero risk factor, but whether there is public attention and concern. That said, in my opinion, this article would be best reduced to a navigation WP:LIST that rattles off issues and points to those articles. I suppose there might be some text to write about more than one thing happening at once, or about the international community's ability to talk and deal with issues. In that case the aforementioned list would be a WP:EMBEDDEDLIST. Either way, the list of issues itself should point to the articles without trying to repeat them in brief here. (Can you say maintenance headache?) NewsAndEventsGuy (talk) 12:51, 8 January 2019 (UTC)
Yes, it is a risk, but not catastrophic. We could add this type of scenario to the article Catastrophes in popular culture; otherwise the addition can easily be misread by people who just browse the section titles. Leaving this here is the equivalent of dedicating a section to the film The Day After Tomorrow at global warming. prokaryotes (talk) 18:15, 8 January 2019 (UTC)
Titling a section for what (so far) appears to be excessive worry is a good idea, P. NewsAndEventsGuy (talk) 18:53, 8 January 2019 (UTC)
"In Popular Culture" is not a good section title, generally speaking. It's popular on Wikipedia but rarely accurate or a good idea. It's a form of bias to separate it out from the rest. -- GreenC 19:10, 8 January 2019 (UTC)
While it's better than nothing, I agree with GreenC. On the other hand, I have not taken time to think of anything better. NewsAndEventsGuy (talk) 19:13, 8 January 2019 (UTC)
We don't know; maybe it would be catastrophic. Also, in reply to User talk:NewsAndEventsGuy: there is an industry (?) or field of catastrophic risk, as detailed in the organizations section; we need an article on this topic. -- GreenC 18:28, 8 January 2019 (UTC)
Can you cite a source that makes the statement, in support for, "We don't know maybe it would be catastrophic"? prokaryotes (talk) 18:39, 8 January 2019 (UTC)
You are looking for absolute assurance on a topic that has none. There is little we can say with much certainty about these catastrophic risks, because we have so little data on them (except past climate change). As for this particular one, this article lists some potential dangers. Some suggest it could lead to "triggering massive earthquakes, rapid climate change and species extinctions". That doesn't mean everyone agrees those things will happen, or that it will happen. But some scientists say it might. Thus we cover it. -- GreenC 19:10, 8 January 2019 (UTC)

Since geomagnetic reversal is a topic I know nothing about, I read Geomagnetic reversal and the abstracts of the citations given here, which I should have done sooner. There is nothing there to suggest this topic should be included here, other than some old theories that seem to be debunked or at least no longer in fashion. I'm OK with removal. -- GreenC 19:22, 8 January 2019 (UTC)

LOL, okay. I just wanted to point out that 1-10 year ozone holes (according to your cite, a worry for skin cancer) and the proposed link with the extinction of Neanderthals are considered fringe views and do not fit the GCR term definition. However, I do now think it would be nice to have an article on catastrophes in popular culture; otherwise, with the new section on this, we will likely get many more entries here. I think this article deserves a more professional scope. prokaryotes (talk) 19:29, 8 January 2019 (UTC)
There have actually been many things removed from here over the years for being too speculative. I would caution against adding them back in, as it can cause a WEIGHT problem (the illegitimate probably outnumber the legit), and trying to debunk things in this article is not a good idea; that is better left for the main article. -- GreenC 19:46, 8 January 2019 (UTC)
I don't know much about it either. What about the synergistic timing of a flip (and magnetic field reduction) happening at the same time we get some serious space weather (or exchange EMP attacks)? Was that discussed in the abstracts you read, GreenC? Not sure how RS this is, but article one. Not sure, but it might be mentioned in Electronic armageddon NewsAndEventsGuy (talk) 19:31, 8 January 2019 (UTC) PS To be clear, whatever ya'll want to do is OK by me. I was just jawing trying to be helpful from the bleachers. NewsAndEventsGuy (talk) 19:35, 8 January 2019 (UTC)
there are so many possible synergistic combinations that it is not practical to consider all of them, nor do we have any basis for considering which combinations are likely. DGG ( talk ) 05:53, 17 January 2019 (UTC)
I wasn't trying to FORUM, you know. I was only asking whether RSs discuss synergistic effects. Where one rock to my head might hurt, two together might be fatal. Is this idea in RSs that discuss geomagnetic reversal? NewsAndEventsGuy (talk) 11:55, 17 January 2019 (UTC)

Electromobility causes a random statistical flux much stronger than the Earth's magnetic field, so what we will see is the effect of all the main roads, not the Earth's magnetic field below.

This could cause major changes to the ionosphere and the water and gas balance. Wikistallion (talk) 11:12, 4 August 2020 (UTC)

Existential risk draft

@Robert McClenon and Vermeer dawn: Regarding the recent submission of Draft:Existential risk, the overlapping content issue has already been discussed in the two discussion sections above. The draft (which could be expanded further) contains much more content about existential risk than the sections in this article, which is already long. If the draft were approved, the existential risk content in this article could be reduced. WeyerStudentOfAgrippa (talk) 16:24, 14 August 2020 (UTC)

Took a quick look at the draft. The first line of the body, "Nick Bostrom introduced the concept of existential risk in a 2002 article", is problematic. Bostrom, the founder of FLI, did not "introduce" the concept of existential risk in 2002. I'm not sure if that's even what you meant to write, but I'm sure you'll agree that "the concept of existential risk" had been introduced prior to 2002; obviously scholars have been studying existential risk for much longer than that (like for all of human history). The source for that statement is equally problematic: a technical report from FLI. Look, FLI can't write another version of this article citing all of its own scholarship. That's obviously not kosher. If you want to add content to the existing articles, you're more than welcome, but please source it to WP:RSes (meaning published in a peer-reviewed journal, not self-published reports). Also, please don't describe FLI's scholarship as being "seminal" (or "groundbreaking", "pioneering", "legendary", being the first to introduce or define a concept, etc.) unless it's cited to multiple RSes, independent of FLI, using that exact wording. Bottom line, a Wikipedia article is not a venue for promoting an institute's scholarship. Lev!vich 16:53, 15 August 2020 (UTC)
Good point about getting independent sourcing for "introduced"; I changed the wording for now. Of the other problematic words you list, I found only one instance of "seminal" in reference to John Leslie's 1996 book (unaffiliated with FHI), which I then removed.
Regarding the rest of your comment, you clearly mean the Future of Humanity Institute (FHI) rather than the Future of Life Institute (FLI). I agree that it would be highly inappropriate for FHI to "write another version of this article citing all of its own scholarship"; however, I am puzzled as to why you think this applies to the draft in question. Not only is the subject of the draft a legitimately distinct concept from global catastrophic risk in general, but also the sourcing in the draft extends well beyond FHI and contains only a small portion of FHI's research output. It is also trivially false that a WP:RS has to be a peer-reviewed journal article. WeyerStudentOfAgrippa (talk) 18:17, 16 August 2020 (UTC)
You're right, I meant FHI not FLI, my mistake. I think the recent revisions you made to the draft are good improvements, but I still think the draft is duplicative of existing articles, largely for the reasons already stated at Draft:Existential risk and here. It's like having separate articles on "Gray" and "Light gray". I agree the collection of articles surrounding the topic of human extinction and its potential causes needs to be improved and reorganized with appropriate parent articles and spin-offs, etc., but I'm not sure we need to have separate articles on "global catastrophic risks" that are "existential" vs. those that are "global but not necessarily existential". I think the content in the draft is good, but it should probably be merged into various different articles. Lev!vich 04:14, 17 August 2020 (UTC)

Moving this article to ‘Existential risk’

@GreenC, WeyerStudentOfAgrippa, Charles Stewart, and Robert McClenon:

In a discussion on the existential risk draft, GreenC writes: “Look, we have articles about topics, what we name that topic can be a matter of discussion but we don't duplicate the same topic just because people call it different things”

This is a good point. In the same discussion I gave a number of reasons why ‘existential risk’ is more deserving of being in the namespace than ‘global catastrophic risk’, which I’ll recap here:

  • Google Trends: “existential risk” gets 4x the search traffic over the past 5 years.
  • Google Search: “existential risk” has 10x the search results of “global catastrophic risk”.
  • Google Scholar: 4,910 papers containing “existential risk”; 543 containing “global catastrophic risk”.
  • Across Wikipedia articles, there are 626 instances of "existential risk", vs. 551 of "global catastrophic risk" (a sketch of how such counts can be reproduced follows this list).
  • Even the Global catastrophic risk article contains more uses of ‘existential risk’ (30) than of ‘global catastrophic risk’ (27).
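A minimal sketch of how such on-wiki phrase counts can be reproduced, assuming the standard MediaWiki search API and its insource: operator; the exact figures will differ from those quoted above depending on when the query is run and which search options are used:

    # Minimal sketch (assumptions noted above): compare how often two phrases
    # appear in English Wikipedia page text via the MediaWiki search API.
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def count_phrase(phrase):
        params = {
            "action": "query",
            "list": "search",
            "srsearch": f'insource:"{phrase}"',  # literal phrase in page source
            "srlimit": 1,                        # only the hit count is needed
            "srinfo": "totalhits",
            "format": "json",
        }
        resp = requests.get(API, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json()["query"]["searchinfo"]["totalhits"]

    for term in ("existential risk", "global catastrophic risk"):
        print(term, count_phrase(term))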

Given this, I propose moving the page ‘Global Catastrophic Risk’ to ‘Existential Risk’. In combination with this renaming, I would like to incorporate some of the improved content from Draft:Existential_risk, including the lead section (to align with the new article name). The changes to the lead section will need to be made at the same time as the move to avoid inconsistency.— Preceding unsigned comment added by Vermeer dawn (talkcontribs)

  • Comment If this is a move proposal, see WP:RFM for how to properly notify the wider community rather than a select set of editors. -- GreenC 14:12, 25 August 2020 (UTC)
  • Oppose for now Until there is a draft. It's one thing to rename the article, another to rewrite it with a different scope. I'm willing to keep an open mind so long as there are no COI-related problems, such as an over-reliance on the FHI, who you work for. The article should not be a showcase for FHI. Otherwise, if an organization or idea doesn't fit neatly into the FHI definition of existential risk, this creates a problem. The topic area is broad and somewhat generic, with multiple POVs; this is how Wikipedia works: it is inclusive, and the article should be flexible enough to hold all current views and organizations. -- GreenC 14:43, 25 August 2020 (UTC)
  • Oppose - These arguments are not at all persuasive, not relevant to the issue, and in fact, not even accurate. Let's just take Google search results, the second bullet point. The number of "hits" that Google reports when you search for something is not the actual number of pages on which that term appears. The "hits" reported on the first page of results is not accurate, ever. To get to the accurate count, one must page through the search results until they get to the last page. Only then will Google report the actual number of pages returned. If you search "existential risk", the number of actual results is 150 (there are 15 pages of results). If you search "global catastrophic risk", the number of actual results is 111 (11 pages of results). Thus, the two terms actually appear in Google search results almost the same amount. Then there's the issue of false positives. Searching Google Scholar for "existential risk" also turns up articles using the term not exactly in the human-extinction sense, e.g. one about technology posing an "existential risk" to the workforce or "Sensation seeking and risk-taking in the Norwegian population". Those false positives would have to be weeded out before we could figure out which term is the WP:COMMONNAME. Perhaps the number one reason not to make this proposed move is that, as the proposer has argued many times already, "global catastrophic risk" and "existential risk" are not the same thing. They are points on a spectrum. There are plenty of GScholar results that use both terms. For example, [5] and [6]. The latter describes "incident", "event", "disaster", "crisis", "global catastrophic risk" (100m+ dead), and "existential risk" (all humans dead, no future generations). Are we going to have separate articles about each one of those categories? No, of course not. I agree this topic area needs to be reworked, but I don't see an outcome where we have a separate article called "Existential risk", even though the proposer really, really wants us to have one (because the proposer works for the guy who wrote a book called "Existential Risk"). "Global catastrophic risk" isn't a great title, either. What this topic area needs is a parent-level article (Human extinction) and a sub-article called something like Potential causes of human extinction or more succinctly, Human extinction risks. That article would then describe the risks, and also discuss how the risks (depending on severity) may be existential risks, or just global catastrophic risks, or perhaps mere crises or disasters. Lev!vich 16:21, 25 August 2020 (UTC)

"Existential threat" listed at Redirects for discussion

A discussion is taking place to address the redirect Existential threat. The discussion will occur at Wikipedia:Redirects for discussion/Log/2021 January 22#Existential threat until a consensus is reached, and readers of this page are welcome to contribute to the discussion. (t · c) buidhe 09:12, 22 January 2021 (UTC)

"Existential threats" listed at Redirects for discussion

A discussion is taking place to address the redirect Existential threats. The discussion will occur at Wikipedia:Redirects for discussion/Log/2021 May 24#Existential threats until a consensus is reached, and readers of this page are welcome to contribute to the discussion. –LaundryPizza03 (d) 05:26, 24 May 2021 (UTC)

Where is the sober exposition of actual risks?

I started reading the article, and it's madness. A roomful of unidentified "experts" decided sometime that there's a 19% chance that all of humanity will be destroyed in the next 80 years -- and it's reported in great detail as if it was factual, citing a single primary source. (Even though this source does not identify the "experts", nor how many there were, beyond saying that they showed up for a meeting on this inflammatory topic.)

The main definition of the main article topic says that 'A "global catastrophic risk" is any risk that is at least "global" in scope, and is not subjectively "imperceptible" in intensity.' But that is the fallacy of the excluded middle -- there are plenty of risks that are not imperceptible and are also not catastrophic. The errors just compound from there. The article contains muddled thinking that confuses risks to the human race with risks to the entire biosphere, and risks short of extinction with the risk of extinction (of either humans or the biosphere). I'm surprised that the mythology of the Judeo-Christian Bible is not cited for the story of Noah as an alleged global catastrophe from the past, not to mention the Book of Revelation as a credible scientific prediction of how the world will end in the future (with the blowing of golden trumpets and plagues of locusts). And what about the Y2K bug catastrophe and the end of the Maya calendar and List of dates predicted for apocalyptic events?

How can we reclaim this article to actually discuss past and potential future global catastrophes without falling prey to mere apocalyptic foolishness, clickbait fictional speculation, and environmentalist extremism? Perhaps we need to rename the page to a title that attracts less fuzzy thinking? For example, Human extinction is a much more substantial, succinct, better edited, more credible article today. Maybe this Global catastrophic risk article should go away entirely while migrating any worthwhile content to that page. This is a serious question. Gnuish (talk) 11:31, 21 January 2020 (UTC)

We report what reliable sources say, not what we believe is true. The experts you cite, from Oxford, are reliable. Not sure why you put an Oxford source in scare-quotes. You call this an "inflammatory topic"; thank you for revealing your personal bias against the field of study. You cite the Bible, Noah, the Y2K bug, the Maya, etc. Can you point to where these things are mentioned in the article? You are making things up because you lack evidence for your arguments. Please point to the "environmentalist extremism" ... did something trigger you? All of the scenarios are covered in reliable sources. Human extinction is a type of risk, but this article covers any risk that is catastrophic; it doesn't need to result in human extinction. None of the scenarios in this article is necessarily a threat to the human species - they might be, but that is not necessarily the point of the article, though it can discuss that too, because we don't know how severe these scenarios will be. That you are confused on this point explains why you think the article is "muddled". -- GreenC 15:12, 21 January 2020 (UTC)
Can we at least fix the fallacious definition of "catastrophic" as identical to "perceptible"? Perhaps if we find consensus on something small, we can move to other areas of consensus. I would suggest that the risk from the Covid-19 pandemic is perceptible but not catastrophic. Even if you disagree with that, surely you can find a risk that is perceivable but not catastrophic, such as the risk that someone will invent a better way of powering transportation than internal combustion. Gnuish (talk) 01:28, 18 October 2020 (UTC)

I was fairly impressed with the article, but I do use the same term, and have presented a few security conference papers on the topic. I used different criteria and rankings based on DREAD, and would love to see more community involvement. It might be worthwhile for those interested to work on the topic outside this discussion thread - does anyone know of an ongoing working group? Oxford was a long time ago. Chadnibal (talk) 21:01, 16 June 2021 (UTC)
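For readers unfamiliar with DREAD, here is a minimal sketch of how such a ranking is commonly computed; the five category names come from the standard DREAD model, while the 0-10 scale and the simple average are illustrative assumptions (teams weight and scale these differently):

    # Illustrative DREAD-style ranking: score each category 0-10 and average.
    # The scale and the plain mean are assumptions; real teams often weight them.
    from statistics import mean

    DREAD_CATEGORIES = ("Damage", "Reproducibility", "Exploitability",
                        "Affected users", "Discoverability")

    def dread_score(scores):
        """Average the five DREAD category scores (each 0-10)."""
        return mean(scores[c] for c in DREAD_CATEGORIES)

    example = {"Damage": 9, "Reproducibility": 3, "Exploitability": 2,
               "Affected users": 10, "Discoverability": 5}
    print(f"DREAD score: {dread_score(example):.1f}")  # -> 5.8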

Artificial simulation program termination missing

I notice in the "Global catastrophic risks" template at the bottom of the article there is no link to the Simulation hypothesis. Surely if we are living in a computer simulation, it can be switched off or otherwise triggered by some event to end. Myth and fictional possibilities are included, seems to me one of you regular editors might consider finding a way to include it. You know, "Computer. End program." 5Q5| 16:14, 22 February 2021 (UTC)

They don't appear as prominently in the literature as some of the other threats, but maybe it would merit a sentence here and/or in human extinction. Rolf H Nelson (talk) 04:24, 23 February 2021 (UTC)
Good point, added to this article. -- Beland (talk) 03:33, 9 July 2021 (UTC)

Merge with Human extinction

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
There was no consensus to merge yet. Since this discussion petered out, Global catastrophe scenarios was spun off of this article. That makes the proposed target shorter, but also creates a destination for material from Human extinction which was overlapping. Some editors had suggestions for refactoring which they wanted to see before considering a possible merge. I will work on implementing suggestions and we can have another merge discussion later if needed. -- Beland (talk) 01:08, 14 February 2022 (UTC)

I have no strong feelings if the result is put under this title or Human extinction or some third title, but the two articles have significant overlap and if overlapping sections were merged there wouldn't be enough left over to justify a separate article on either topic. To be specific, both articles have sections on likelihood, causes, prevention, and ethics. The non-overlapping sections are on classification of risks, organizations, psychology, and fiction. I don't see any strong reason to consider any of the non-overlapping sections more closely related to one of the articles and not the other. If there's going to be tweaking or rewriting of this content as is being discussed above, that would be easier if there is one article instead of duplicate content across two articles. -- Beland (talk) 03:27, 9 July 2021 (UTC)

Support, the articles are quite similar Sahaib3005 (talk) 10:45, 27 July 2021 (UTC)

(comment) First off, I disagree that we are at the point of having a single "Oppose/Support" type discussion, because this is more complicated than "hey, there is overlap" - and if you go this route and lose, it just makes it that much harder in the future because of the precedent of prior consensus. This conversation has been ongoing forever, since Human extinction was first created over 16 years ago (!). The talk pages here and there have many discussions. They should be reviewed, and previous arguments and discussions examined and discussed. If we over-simplify the discussion to "overlap", we may end up with an over-simplified result that would not be an improvement. There is a lot to this. BTW, I am not outright opposed to a merge, but many people over the past 16 years have been. -- GreenC 14:58, 27 July 2021 (UTC)

I think reading all the previous discussion would take too long - which is maybe why no one else has commented - suggest people just skim read the current articles. Chidgk1 (talk) 11:02, 13 August 2021 (UTC)
Looks like I made the same proposal in 2005, but no one replied. It was also made in 2014, see Talk:Global catastrophic risk/Archive 2#Merger proposal. The suggestions made by the folks who opposed the merge and wanted things rearranged and rescoped haven't come to pass in the six or so years since, so I take that as a sign that there isn't support for doing those things among the many editors who actually work on these pages, or at least not anyone actually willing to do them. There were only four replies to the 2014 proposal; maybe there was never really consensus there or consensus has changed. -- Beland (talk) 09:23, 27 August 2021 (UTC)
Oh and the overlap problem was also raised in 2015 with only two editors and no clear decision: Talk:Human extinction/Archive 1#Overlapping with the article: "Global catastrophic risk" -> Merge or find clarity. Seems like this will just keep coming up until it's dealt with comprehensively. -- Beland (talk) 09:29, 27 August 2021 (UTC)

Support as long as the title remains "Global catastrophic risk" - because loss of culture is also covered not just biological extinction. Chidgk1 (talk) 11:06, 13 August 2021 (UTC)

(comment) I'm not sure whether or not the pages should be merged. Human extinction is a distinct concept from global catastrophic risk or existential risk/catastrophe ("risks that threaten the destruction of humanity's long-term potential"). Humans evolving into a different species would be an example of human extinction but not necessarily an existential catastrophe; an unrecoverable dystopia is an example of an existential catastrophe but not human extinction. Human extinction is also a much older concept, and many of the people writing about human extinction are probably not familiar with the terms "global catastrophic risk" or "existential risk" and are not necessarily thinking in terms of humanity's long-term potential. In contrast, people who discuss global catastrophic risk are generally the same people who are discussing existential risk, so it makes sense to have global catastrophic risk and existential risk in the same article. There is unfortunately a lot of overlap between the two pages, which makes sense as scenarios that pose a global catastrophic risk also tend to pose an existential risk or risk of human extinction. I agree that it would be easier if we didn't have to copy edits across both articles. —Enervation (talk) 03:34, 19 August 2021 (UTC)

Oppose: The current article's different paragraphs keep jumping back and forth between "existential risks" and "global catastrophic risks", as if the editors can't seem to notice the difference. The two are vastly different: one results in no more humans and has never yet happened; the other doesn't and has happened many times before, like the Black Death or Covid-19. (Even the Chicxulub asteroid impact which long preceded the evolution of humans but killed off the dinosaurs, was catastrophic but not existential for life on earth: though most species went extinct, the remaining ones rapidly radiated to fill ecological niches.) So if all the "existential risk" discussions from this article were moved out into the "Human extinction" / "Existential risk" article, the remaining text about Global catastrophic risks would be significantly improved. Then there would be no need to merge them. -- Gnuish (talk) 00:53, 20 August 2021 (UTC)

Oppose: A wide gulf exists between "apocalypse", a great disaster, and human extinction. Ebola with the infection pattern of Covid-19 Delta (or worse) could reduce the human population below that needed to maintain current infrastructure and logistics; indeed, Covid-19 has already had very significant logistical effects. This scenario could lead to the loss of high technology, possibly for a very long time, and kill billions without human extinction. Similarly for nuclear war, an asteroid, etc. "Apocalypse" means extremely serious, and might be further limited to disasters that represent a serious setback for human civilization; however, on a scale of 1 to 10, not every scenario comes in at strength 11. See Apocalyptic_and_post-apocalyptic_fiction. Laguna CA (talk) 06:34, 21 August 2021 (UTC)

Oppose - There's a huge gap between an apocalypse, a great disaster, and human extinction. Hansen SebastianTalk 12:36, 21 August 2021 (UTC)

Support: The articles are very largely redundant, and I think the concerns above could be addressed. @Beland: All the concerns above focus on the issue of separating existential risks from just "humanity has a shitty day" risks. To get around that perhaps you could reorganize this article so that "human extinction risks" are a separate category, and the drilldown article from there is "human extinction". From that point, a merge would be more natural (or maybe unnecessary). Efbrazil (talk) 22:18, 26 August 2021 (UTC)

Comment responding @Enervation, Gnuish, Laguna CA, and Hansen Sebastian:

(And @Efbrazil: -- Beland (talk) 09:32, 27 August 2021 (UTC))

The difficulty with dividing coverage of extinction hazards from catastrophic hazards is that due to uncertainty about how bad it would be and how well we would cope, pretty much every extinction threat is also a catastrophic risk, and mostly the other way around as well.

There are a few events which are not survivable, such as the Big Crunch, heat death of the universe or destruction of the universe by false vacuum decay. Unless we discover some way to travel between dimensions or universes or something fun like that, and make it not an extinction event.

When the Earth is destroyed by expansion of the Sun, or if the Earth is blown to bits by a huge impact or falls into a black hole, it's possible humanity will perish, but if it has established colonies on other planets or in other solar systems by that time, it might continue and not be an extinction event.

Most of the hazards currently on Human extinction fall on a spectrum from "pretty bad but humans are pretty smart and scrappy so maybe a few would survive somehow" to "probably catastrophic but not an extinction but could go badly and we're all dead". I'd put those in some order like: global tectonic turnover that makes the Earth's entire surface lava, hostile alien invasion, hostile nanotech, hostile AI, post-humanity people, extraterrestrial radiation burst, nuclear war, moderate asteroid impact, bioweapon, pandemic, moderate supervolcano eruption, climate change, ecological collapse, low birth rate. These are currently explored in more detail on Global catastrophic risk. If we don't want these duplicated across two articles, it sounds like folks are suggesting we move/merge these sections to Human extinction even if we're just explaining why they probably won't result in that (which is what we do now for some, like climate change).

There are a few events which could definitely be catastrophic but are not plausible extinction triggers, such as a cyberattack that takes out IT systems and electric grids, running out of minerals to mine, or reaching the world's maximum agricultural capacity. If we move anything that's a plausible extinction threat to human extinction, then that leaves only these few things for Global catastrophic risk, and it would not be very interesting reading. Thoughts?

-- Beland (talk) 09:00, 27 August 2021 (UTC)

(Comment) Good discussion, @Beland:.
I have less opposition to merging under Global catastrophic risk, but many of the billions-of-years-in-the-future risks are merely current cosmological hypotheses, which have not been accepted for anything like a significant period of time relative to the span of the prediction. To reinforce Beland's point about the Sun engulfing the Earth: yes, that will happen—the case is long and well established—but equally, the means of building a self-sufficient space habitat and parking it as far away as necessary, possibly behind a protective planet or planetoid, is also well established.
Part of my problem with merging is that Human extinction is quite exaggerated. Nuclear war could be anything from a deplorable but minor inconvenience for a country or two, to everyone firing every nuke they have, which would not be close to an extinction event: stable human populations exist in many places that are just not worth nuking; yes, cancer and mutations would skyrocket, but that population wouldn't be threatened by extinction; why would someone nuke every canyon in Switzerland or Borneo or Tasmania or Mauritius? And not enough warheads exist: the Earth is huge, and nukes are quite inefficient: Hiroshima's damage was 1 square mile; Nagasaki's was 2 or 3; the conventional bombing of Tokyo on 9–10 March 1945 destroyed almost 16 square miles (41 km2) with only about 1.7 kt of explosives. Nukes do, roughly, flash damage (including radiation and Electromagnetic Pulse [EMP]), blast damage, and over-pressure damage; so after glassing an area with radiation, the blast and over-pressure are wasted. Nuke too low, and effects are shadowed by terrain and the curvature of the Earth; nuke too high, and you're in space, reducing or negating blast and over-pressure damage. Not well publicly documented are the effects of EMP: roughly, "Go to the Steam Age. Go directly to the Steam Age. Do not pass Go. Do not collect $200", but that's inside an area of effect and EMP doesn't itself kill people. Yes, I know too much about this.[1]
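A rough back-of-the-envelope check of the "not enough warheads" point above; the warhead count and the per-warhead damage footprint are illustrative assumptions rather than sourced estimates:

    # Back-of-the-envelope sketch (illustrative assumptions only): even a
    # generous severe-damage footprint per warhead covers a small fraction
    # of Earth's land surface.
    warheads = 12_500              # assumed rough size of the global stockpile
    damage_per_warhead_km2 = 50    # assumed generous severe-damage area
    land_area_km2 = 149_000_000    # approximate land surface of Earth

    covered_km2 = warheads * damage_per_warhead_km2
    share = 100 * covered_km2 / land_area_km2
    print(f"{covered_km2:,} km^2 covered, about {share:.2f}% of land area")
    # -> 625,000 km^2 covered, about 0.42% of land area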
Please excuse the detail above, though the reference will be useful. The point is that Human extinction is rather hypothetical, poor quality, and unreasonably pessimistic, while Global catastrophic risk is higher quality. Yes, a meteor with a significant fraction of the mass of Earth would cause extinction, possibly of the entire biosphere or even remove the ability for biological processes to restart evolution; but that's so unlikely and so obvious, it really doesn't need more than a line or two in a discussion of the far more likely chance of a merely catastrophic meteor strike.
So I think Global catastrophic risk is the better foundation and title for a comprehensive article, but it does have problems. The risk percentage table is far too prominent and insufficiently qualified. The 5% AI risk, which is the risk I'm most qualified to judge, appears from a fast skim of the reference to be based on "Group Expert Elicitation via the simple aggregation of subjective opinions", apparently by economists, not computer scientists. AI computes; it does not by itself act. So why give it catastrophe-capable scope in the first place? As I outlined above, accidental nuclear war is not an extinction event, though we have come close to such an event, both by computer—not AI!—and by human mistake. Putting nuclear war aside, an apocalyptic AI would need a very long life span and/or inherent (needing no outside resources—how does that work?) means of self-reproduction and defenses against any possible human intervention (how?) and some reason for driving the human race extinct, which implies giving it emotions—some of these things, notably emotions, are not difficult, but why would any rational person do all those things? This is beyond improbable. Sure, MIT makes empathetic AIs, but they don't make the AI itself emotional—why would you want your computer or phone to get into a huff and decide it's bored with the silly uses you're putting it to?
So, I still oppose (actual vote, above). I think the best way forward is to improve Global catastrophic risk until Human extinction is redundant and unneeded: then point the latter at the former, and delete the latter.
Laguna CA (talk) 04:44, 30 August 2021 (UTC)

Yeah, those are good points. You can't really separate out the serious stuff into one article without making the other article garbage. Also, most risks may or may not have a human extinction component, depending on severity. One thing that might help build support is to re-propose the merge by clarifying how the TOCs fit together. Something like this, where the annotated items are new or merged:

  • 1 Definition and classification
  • 1.1 Defining global catastrophic risks
  • 1.2 Defining existential risks: Rename to Defining human extinction risks, merge in overview of the Human extinction article
  • 1.2.1 Non-extinction risks
  • 2 Likelihood
  • 2.1 Natural vs. anthropogenic: Move to intro of "4 Potential sources of risk"
  • 2.2 Risk estimates: Move to 5 since the whole thing is talking about human extinction
  • 2 Methodological challenges
  • 2.1 Lack of historical precedent
  • 2.2 Incentives and coordination
  • 2.3 Cognitive biases Merge in 6 Psychology from human extinction
  • 3 Moral importance of existential risk Merge in 7 Ethics from human extinction
  • 4 Potential sources of risk merge in 3 Causes from human extinction
  • 4.1 Anthropogenic
  • 4.2 Non-anthropogenic
  • 5 Human extinction probability: Merge in 3 sections- Risk estimates table from 2.2 above, plus 2 Likelihood and 4 Probability from human extinction
  • 6 Proposed mitigation merge in 5 Prevention
  • 6.1 Global catastrophic risks and global governance
  • 6.2 Climate emergency plans
  • 7 Organizations
  • 7.1 Research: Part of organizations section plus merge in Research from human extinction
  • 8 History merge in 1 History
  • 8.1 Early history of thinking about human extinction
  • 8.2 Atomic era
  • 8.3 Modern era

Efbrazil (talk) 22:09, 27 August 2021 (UTC)

  • Oppose for now: there's a large overlap, with possible HE events appearing to be a subset of GCRs, and it may be that we are best off with a single GCR article. However, a merge is tricky and we do not have consensus for any particular plan. I recommend that supporters of a merge continue to edit in the absence of a merge, trying to harmonise the two articles, with a view to having HE be a sub article of GCR. If and when this looks achievable, a merge should be much less controversial. — Charles Stewart (talk) 05:01, 30 August 2021 (UTC)
  • Oppose: the receiving article is already too large. —¿philoserf? (talk) 16:05, 23 October 2021 (UTC)
  • Oppose splitting content among multiple articles along arbitrary and unsourceable criteria. -- GreenC 18:11, 12 December 2021 (UTC)

References

  1. ^ Glasstone, Samuel; Dolan, Philip J. "The Effects of Nuclear Weapons" (PDF). Defense Threat Reduction Agency. US Department of Defense and US Department of Energy.
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Quality issues

@Laguna CA: I certainly understand your skepticism that the risk estimates of human extinction are too high. There is more specific coverage at Nuclear holocaust#Likelihood of complete human extinction which I just linked in. I merged the sections from Global catastrophic risk that assigned numeric probabilities to human extinction into Human extinction#Probability. So whether that content is good or bad, the question is now which article it belongs in. It is referenced, however, so if you think it needs trimming, are there any of the cited sources you would describe as unreliable? If you think the probabilities are too high, are there reliable sources which give lower estimates or express skepticism of numbers like these? You make good points, but our individual opinions as Wikipedia editors don't override reliable sources when it comes to determining article content. -- Beland (talk) 01:00, 16 February 2022 (UTC)

Coordination with Human extinction

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
The result of this discussion was no consensus. Chidgk1 (talk) 13:32, 12 April 2022 (UTC)

I merged all the duplicate sections across both articles, and tried to split specific coverage of human extinction out so we can see if we do in fact think they make sense as separate articles. This has resulted in a peculiar situation where the history of the idea, probability, and ethics are covered on Human extinction, but methodological challenges quantifying and mitigating the risk, proposed mitigation measures, and related organizations are covered on Global catastrophic risk. I have patched this over with hatnotes for now, but it does not seem like the most logical distinction or friendly to readers.

The two concepts are intertwined because human extinction is the global catastrophe we have the most content on, all global catastrophic risks are hazards that could cause human extinction, and many global catastrophes that could cause human extinction could also be sub-extinction events and thus fall both inside and outside the scope of "human extinction". I still think merging the articles under a suitably generic title would be the best solution, but if people feel strongly they should continue to exist separately, there are other approaches.

The creation of Global catastrophe scenarios has made it possible to have both of these articles be the parent to that article, with the result that we were able to eliminate a lot of duplication. We could take the same approach with more content. For example, I think the Proposed mitigation, Methodological challenges, and Organizations sections could be spun out of Global catastrophic risk into Prevention of global catastrophe or Mitigation of global catastrophic risks or Prevention of human extinction and summarized in both articles. Human extinction#Probability could be spun out into Probability of human extinction and summarized in both articles.

What are your thoughts? -- Beland (talk) 00:30, 16 February 2022 (UTC)

Oppose I don't believe this article is improved by splitting off these sections. It creates obscure article topics, which reduces readership and weakens this article. Such splits also tend to harden the organization, making it less flexible for future re-organizations: you have to work around preexisting articles to merge and delete, versus merging and deleting sections within an article. The correct solution is to merge human extinction here, but failing that, make this article the primary for those sections, and in human extinction create a summary section with a main-article link to the section here. For example, the Organizations section here would be in full, while in human extinction it would be a summary, or only what is relevant, with a main link to the section here. No need for many specialized separate articles. -- GreenC 21:47, 17 February 2022 (UTC)
Oppose This article is not very big so I don't think it needs splitting Chidgk1 (talk) 15:05, 18 February 2022 (UTC)

@Chidgk1 and GreenC: Okay, we won't do that. Do you think a merge with Human extinction would be an improvement, is there some other reorganization that would help with this weird split, or are you happy letting things grow from where they are? -- Beland (talk) 19:36, 11 March 2022 (UTC)

Sorry too busy to think about this. Up to you guys. Chidgk1 (talk) 06:16, 12 March 2022 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Split proposal

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
The result of this discussion was no consensus. Chidgk1 (talk) 13:34, 12 April 2022 (UTC)

The current article is in WP:TOOBIG territory (55kb according to Prosesize) and IMO feels too long for comfortable reading. The potential sources section is the longest part of the article and has a more detailed scope than the rest of the article. Therefore, I propose splitting the section off into a new article with a title such as "Potential sources of global catastrophic risk". WeyerStudentOfAgrippa (talk) 15:56, 22 October 2021 (UTC)

The article is longer than 100,000!

The article is longer than 100,000! According to Wikipedia:Splitting, it should be split. Agree --Geysirhead (talk) 14:22, 14 November 2021 (UTC)

I suggest reducing it to Global catastrophic risk (non-exterminative) and putting the rest into Human extinction. --Geysirhead (talk) 17:02, 13 November 2021 (UTC)
Also, a natural vs. anthropogenic split is possible. --Geysirhead (talk) 17:10, 13 November 2021 (UTC)
There should be a difference between Omnicide (Anthropogenic global catastrophic risk) and Anthropogenic hazard (Non-global and non-exterminational risks)--Geysirhead (talk) 17:22, 13 November 2021 (UTC)
Something like that:
              | Local                             | Non-exterminative global                                                 | Human extinction
Anthropogenic | Anthropogenic hazard              | Anthropogenic global catastrophic risk, Human impact on the environment | Omnicide
Natural       | Natural hazards, Natural disaster | Natural global catastrophic risk                                         | Naturally caused human extinction
Support, I agree the article is currently too long and like WeyerStudentOfAgrippa's suggestion most so far. I disagree with Geysirhead's suggestions since I think they are too fine grained and rely too much on categories (in particular whether a risk is "exterminative") that (i) people don't commonly use and that (ii) aren't clear cut. E.g., some people worry about engineered pathogens as an extinction risk while others believe that no infectious disease by itself could kill all of humanity, so it doesn't naturally classify as either "exterminative" or not. -- Ego.Eudaimonia (talk) 15:04, 27 November 2021 (UTC)

Now that the split has been made, can we delete the split template at the top of the page? If no one objects, I'll do this soon. —— Ego.Eudaimonia (talk) 21:08, 4 December 2021 (UTC)

To clarify what happened, this article spun off Global catastrophe scenarios, and the split tag was removed. None of the other proposed articles were created (and I'd oppose creating them). -- Beland (talk) 01:00, 14 February 2022 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Split History section off to Human extinction#History

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
The result of this discussion was no consensus. Chidgk1 (talk) 13:35, 12 April 2022 (UTC)

I suggest reducing this section and keeping all history about human extinction there. Only thoughts about global catastrophic risk and existential risk should be kept in this history. --Geysirhead (talk) 18:21, 12 December 2021 (UTC)

  • Oppose for the same reason the discussion above, "Merge with Human extinction", was opposed. These are arbitrary distinctions: one cannot speak of a global catastrophic risk without also discussing the possibility of human extinction, since we have no idea to what degree any catastrophic risk will escalate. -- GreenC 18:49, 12 December 2021 (UTC)
Previous discussion does not apply here, it was about a merge, not split.--Geysirhead (talk) 10:33, 22 December 2021 (UTC)

So Global catastrophic risk#History and Human extinction#History were nearly identical, word for word, 3 subsections long. The only exceptions were reference formatting, a slight difference in order of sentences, and the fact that the "Human extinction" copy had some sentences that were not in the "Global catastrophic risk" copy. Since the copy in the "Global catastrophic risk" section only discussed the history of thinking about human extinction, I dropped it from that article completely after syncing it up with the other version. If we want a "history of thinking about human extinction" section in Global catastrophic risk, it should follow Wikipedia:Summary style and be a lot shorter. Or maybe the articles should just be merged; that depends on other sections. -- Beland (talk) 01:25, 14 February 2022 (UTC)

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Category errors & redirect issues: Creating a new "Omnicide" page?

Posting this on both the 'Human extinction' page and the 'Global catastrophic risk' page to suggest either 1. the creation of an 'Omnicide' page, 2. the creation of a separate 'Existential risk' page, and/or 3. the retitling of the 'Global catastrophic risk' page.

There are some gnarly issues with the terminology in this constellation of pages & page redirects: 'Human extinction' covers...human extinction, whereas omnicide, properly understood in the context of the literature on this subject, refers to the extinction of all terrestrial life. We can concede that, while these may be potentially related domains, they are meant to describe consequentially and meaningfully distinct outcomes. Human extinction is, so to speak, a sub-domain of what is being referred to in the word 'omnicide'--as a header it doesn't even remotely cover what is meant to be called up in the word omnicide. Meanwhile 'mass extinction' no longer has the right ring, because most people entering middle age who have had access to a K-12 education have been aware that we're living through a mass extinction since they were young children. Mass extinction, in other words, begins to sound like a normative element, and is more likely to be associated with ancient, pre-human events than to evoke the threat of a future event without example in terrestrial history.

Meanwhile, "existential risk" redirects to "global catastrophic risk." The distinction here is even more subtle but it's still a problem. Existential risk refers to all of the following: 1. the risk of omnicide 2. the risk of human extinction 3. the risk of a civilizational collapse so severe that would evacuate the meaning or desirability of a continuity of human life. <<The modifying phrase here is important. Civilizational collapse, historically, encompasses plenty of situations that might have been experienced as desirable or less severe than the sort of situation that's being gestured towards in the term existential risk. As a term for this constellation of existential threats 'global catastrophic risk' appears to discount or downplay or fall short of the extremity of these potentials. "Global catastrophic risk" sounds like it could just as easily be applied to the risk of the bond market collapsing as to, say for example, the extinction of all terrestrial life. As a description of omnicide or even of existential risk the header "global catastrophic risk" shades into classical apocalyptic thinking--the end is conceived as potentially redemptive. Conceiving of or talking about the actual cessation of all terrestrial life without the implication of cyclic reinvention or hanging onto the possibility of a silver lining is avoided and repressed by apocalyptic thinking. Clarifying distinctions between these styles of thought about the complex of issues relating to omnicide or to existential risk requires some sort of revision in this space.

The reworking of *either* the Human extinction page or the Global catastrophic risk page to cover what is being discounted, downplayed or missed in this conversation might end up being extensive, difficult to negotiate, or even uncalled for, since these pages do manage to cover what the terms in their titles describe--they just don't describe omnicide or adequately deal with the maximal extremity of what is being described. Really, it seems to me, the most appropriate move would be to make a new, separate Existential Risk page that has a slightly different emphasis and organization than the Global Catastrophic Risk page. But this is likely to be somewhat duplicative of the Global catastrophic risk page. Therefore: maybe a new Omnicide page? But that term is more exotic. Either option seems worth pursuing.

So the question, I'm raising is: Should we look into re-titling and revising the Global catastrophic risk page or creating a new omnicide page?

[1][2][3]


  ThomasMikael (talk) 18:10, 23 January 2024 (UTC)

References

  1. ^ Moynihan, Thomas (2020). X-risk: how humanity discovered its own extinction. Falmouth: Urbanomic. ISBN 978-1-913029-84-5.
  2. ^ Mohaghegh, Jason Bahbak (2023). Mania, doom, and the future-in-deception. Omnicide. Falmouth, UK: Urbanomic. ISBN 978-1-7336281-6-7.
  3. ^ Mohaghegh, Jason Bahbak (2019). Mania, fatality, and the future-in-delirium. Omnicide. Falmouth, UK: Urbanomic Media Ltd. ISBN 978-0-9975674-6-5.