
If AGI has an okay outcome, will there be an AGI singleton?
25% chance
An okay outcome is defined in Eliezer Yudkowsky's market as:
An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
This resolves YES if I can easily point to the single AGI responsible for the okay outcome, and NO otherwise.
Related questions
Will we get AGI before 2031? (61% chance)
Will we get AGI before 2030? (58% chance)
Will we get AGI before 2029? (53% chance)
A multipolar AGI scenario is safer than a singleton AGI scenario (30% chance)
Will AGI be a problem before non-G AI? (20% chance)
By when will we have AGI?
Will we get AGI before 2026? (4% chance)
Will we get AGI before 2026? (11% chance)
Will AI create the first AGI? (41% chance)
If Artificial General Intelligence has an okay outcome, what will be the reason?