What will be the median p(doom) of AI researchers after AGI is reached?
Above 5%: 83%
Above 10%: 73%
Above 20%: 62%
Above 50%: 24%
Above 80%: 10%
AGI is defined as an AI that is better at AI research than the average human AI researcher not using AI.
p(doom) is defined as the probability of human extinction or outcomes that are similarly bad.
In Katja Grace's 2022 survey, the median values were 5% for "extremely bad outcome (e.g., human extinction)" and 5-10% for human extinction specifically.
All answers that are true resolve Yes.
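Because the answers are cumulative "Above X%" thresholds, the implied probability mass in each p(doom) interval is the difference between consecutive values. A minimal sketch, using the market percentages quoted above (these figures move as the market trades):

```python
# Survival function implied by the market: P(median p(doom) > threshold).
# Values are the market percentages quoted above and will drift over time.
survival = {
    0.05: 0.83,
    0.10: 0.73,
    0.20: 0.62,
    0.50: 0.24,
    0.80: 0.10,
}

# Convert the cumulative answers into per-interval probability mass.
buckets = {}
prev, lo = 1.0, 0.0
for t in sorted(survival):
    buckets[(lo, t)] = round(prev - survival[t], 2)
    prev, lo = survival[t], t
buckets[(lo, 1.0)] = round(prev, 2)

for (lo, hi), mass in buckets.items():
    print(f"{lo:.0%}-{hi:.0%}: {mass:.0%}")
```

For the quoted figures this yields 17% mass at or below 5%, 10% in 5-10%, 11% in 10-20%, 38% in 20-50%, 14% in 50-80%, and 10% above 80%, which sums to 100% as a set of cumulative answers must.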
Related questions
Will we have at least one more AI winter before AGI is realized?
33% chance
What will be the average P(doom) of AI researchers in 2025?
20% chance
Will AGI be achieved in the next 5 years?
38% chance
Will we reach "weak AGI" by the end of 2025?
33% chance
Will we get AGI before 2032?
61% chance
Will OpenAI be in the lead in the AGI race end of 2026?
41% chance
Will Paul Christiano publicly announce a greater than 10% increase in his p(doom | AGI before 2100) within the next 5 years?
49% chance
In what year will we achieve AGI?
If a large, good survey of AI engineers in the United States is run, what will be the average p(doom) within 10 years?
14% chance
In which year will a majority of AI researchers concur that a superintelligent, fairly general AI has been realized?