Will prioritizing corrigible AI produce safe results?
45% chance
This market is conditional on the market "Will the company that produces the first AGI have prioritized Corrigibility?" (https://manifold.markets/PeterMcCluskey/will-the-company-that-produces-the). If that market resolves NO or N/A, this market will resolve N/A.
If that market resolves YES, this market will resolve one year later, matching the resolution of the market "Will AGI create a consensus among experts on how to safely increase AI capabilities?" (https://manifold.markets/PeterMcCluskey/will-agi-create-a-consensus-among-e).
I will not trade in this market.
Related questions
Will Anthropic be the best on AI safety among major AI labs at the end of 2025? (85% chance)
Is slowing down AGI good for AI safety? [resolves to poll] (83% chance)
Is RLHF good for AI safety? [resolves to poll] (45% chance)
Will AGI create a consensus among experts on how to safely increase AI capabilities? (31% chance)
By 2027 will there be a well-accepted training procedure(s) for making AI honest? (15% chance)
Will I still consider improving AI X-Safety my top priority on EOY 2024? (73% chance)
Will AI be considered safe in 2030? (resolves to poll) (72% chance)
Will the ARC Prize Foundation succeed at making a new benchmark that is easy for humans but still hard for the best AIs? (82% chance)
AI honesty #2: by 2027 will we have a reasonable outer alignment procedure for training honest AI? (25% chance)
Will there be serious AI safety drama at Meta AI before 2026? (58% chance)