Is there an excessive overlap between belief in "AI extinction risk" and longtermism?
Never closes
Yes
No
Related questions
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
41% chance
At the beginning of 2025, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
64% chance
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
70% chance
At the beginning of 2028, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
65% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
66% chance
By end of 2028, will AI be considered a bigger x risk than climate change by the general US population?
41% chance
Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029?
15% chance
Contingent on AI being perceived as a threat, will humans deliberately cause an AI winter before 2030?
41% chance
Will someone take desperate measures due to expectations of AI-related risks by January 1, 2030?
66% chance