If Artificial General Intelligence has a poor outcome, what will be the reason?
86%: Someone finds a solution to alignment, but fails to communicate it before dangerous AI gains control.
80%: Someone successfully aligns AI to cause a poor outcome.
75%: Something from Eliezer's list of lethalities occurs.
25%: Alignment is impossible.
This market is the inverse of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6.

This market will not resolve. It exists primarily to let users explore particular lethalities. Please add responses.

Here, "poor" means human extinction or mass human suffering.
Related questions
If Artificial General Intelligence has an okay outcome, what will be the reason?
If Artificial General Intelligence has an okay outcome, which of these tags will make up the reason?
If we survive general artificial intelligence, what will be the reason?
Will Eliezer's "If Artificial General Intelligence has an okay outcome, what will be the reason?" market resolve N/A? (29% chance)
Will General Artificial Intelligence happen before 2035? (58% chance)
Why will "If Artificial General Intelligence has an okay outcome, what will be the reason?" resolve N/A?
Who first builds an Artificial General Intelligence?
If we survive general artificial intelligence before 2100, what will be the reason?