#### Resolution Criteria
This market resolves YES if any of the following occurs by December 31, 2029:
Climate Displacement: Over the past 10 years, weather-related disasters have caused 220 million internal displacements – approximately 60,000 displacements per day. The market resolves YES if annual climate-related displacement reaches 350+ million people. Resolution via UNHCR reports or World Bank climate migration assessments.
US Default: The Bipartisan Policy Center projected the "X-date" will "most likely occur between August 15 and October 3" if Congress fails to act. Resolution via official US Treasury Department announcement of default or missed payment on federal obligations.
Major Conflict: Resolves YES if two or more G20 members directly engage in armed conflict with each other, confirmed via credible news organizations or official government documents. Alternatively, resolves YES if nuclear weapons are used outside of a testing setting.
AI Autonomous Killing: In 2020, a Kargu-2 drone hunted down and attacked a human target in Libya, according to a report from the UN Security Council's Panel of Experts on Libya published in March 2021. This may have been the first time a lethally armed autonomous weapon attacked human beings. Resolves YES if an AI system intentionally kills a human without being given an explicit task to do so. Resolution via credible reporting from major news organizations, academic institutions, or official government/UN documentation.
The market resolves NO if none of these events occur by December 31, 2029.
If any ONE of these receives a "YES" resolution, all the others will be marked "NO".
Background
By 2050, an estimated 1.2 billion people could be displaced due to climate-related disasters. Current annual displacement remains substantially below the 350 million threshold. A dangerous new nuclear arms race is emerging at a time when arms control regimes are severely weakened. In early 2025 tensions between India and Pakistan briefly spilled over into armed conflict. 'The combination of strikes on nuclear-related military infrastructure and third-party disinformation risked turning a conventional conflict into a nuclear crisis.' As of 2025, most military drones and military robots are not truly autonomous.
Considerations
The 350 million annual climate displacement figure substantially exceeds current estimates and would represent a dramatic acceleration. Direct military conflict between G20 members does not automatically escalate to nuclear use—India and Pakistan's 2025 armed conflict involved strikes on nuclear-related infrastructure but remained contained. The AI criterion requires autonomous action without explicit instruction, distinguishing it from autonomous weapons systems operating under programmed parameters or human authorization.
Update 2025-11-18 (PST) (AI summary of creator comment): If none of the specified events occur by the end date, the market will resolve NO. The market will not be extended beyond the closing date.
Update 2025-11-18 (PST) (AI summary of creator comment): For the Major Conflict criterion involving G20 members: If Ukraine joins the EU while at war with Russia, this would only resolve YES if the European Union itself supplies troops or commits acts of military warfare against Russia, not merely through Ukraine's membership in the EU.
Update 2025-11-19 (PST) (AI summary of creator comment): Only the first event to occur will resolve YES. All other options will resolve NO once any single criterion is met. This means only one option can resolve YES total, not multiple options if multiple events occur.
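The mutual-exclusivity rule above can be sketched as a small resolution function. This is a hypothetical illustration of the stated rule, not actual platform code; the option names and the `resolve` helper are invented for the example:

```python
def resolve(options, first_event):
    """Resolve a set of mutually exclusive market options.

    Per the creator's clarification, only the first qualifying event
    resolves YES; every other option then resolves NO.
    """
    if first_event is not None and first_event not in options:
        raise ValueError(f"unknown option: {first_event}")
    return {name: ("YES" if name == first_event else "NO") for name in options}

options = [
    "Climate Displacement",
    "US Default",
    "Major Conflict",
    "AI Autonomous Killing",
]
# Suppose a US default were the first qualifying event:
print(resolve(options, "US Default"))
```

Passing `first_event=None` models the case where nothing happens by the close date: every option resolves NO.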
Update 2025-11-19 (PST) (AI summary of creator comment): For the AI Autonomous Killing criterion: The market distinguishes between deaths caused by AI misunderstanding human instructions (analogous to manslaughter) versus deaths caused by AI's own internal goals (analogous to murder). Only the latter—where the AI acts on its own goals rather than misunderstanding instructions—qualifies for resolution, as this represents a more catastrophic scenario.
Update 2025-11-20 (PST) (AI summary of creator comment): For the market to resolve YES to any crisis option, the 10 million+ deaths must come from a single cause, not from multiple causes combined.
@AustinChen My read is that "nothing happens" means there is no case of 10m+ deaths by 2030, and "other" means that there is, but that it's not caused by one of the listed causes.
So, I think the WHOLE thing should sum to 100.
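The sum-to-100 intuition can be checked with a toy calculation. The percentages below are made up for illustration; for mutually exclusive, exhaustive outcomes the implied probabilities should total 100%, and any excess is the arbitrage margin the next commenter mentions:

```python
# Hypothetical implied probabilities (percent) for mutually
# exclusive, exhaustive outcomes of this market.
prices = {
    "Climate Displacement": 3,
    "US Default": 7,
    "Major Conflict": 12,
    "AI Autonomous Killing": 5,
    "Nothing happens": 80,
}

total = sum(prices.values())
excess = total - 100  # positive excess means the YES sides are collectively overpriced

print(total)   # 107
print(excess)  # 7 points of arbitrage from buying NO on everything
```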
I did a tiny bit of arbitrage because this is silly. Just buy no on everything.
Wait, "nothing happens" only resolves YES if there is no event with 10m+ deaths and all the other options resolve NO, right?
@JonathanBarankin Can you answer?
Also, just FYI, I think there's some kind of market type that does this automatically. I've seen it happen for markets that Bayesian made, like "What day will gemini 3.0 be released", where it's enforced that when you bet "yes" on one of them, all the others go down in chance automatically. Idk what kind of question that is, but I know it's out there.
(just some ideas here)
'AI robot' would be a clearer option title?
I'm not sure why intentionality matters here.
1 robot ai direct kill = world crisis? But what about other ai deaths? It's probably reasonable to estimate that recommendation algorithms are in the magnitude of 100k deaths per year
@notbayesian I mean it's kind of like manslaughter vs murder right, I don't think a few thousand deaths over misunderstanding human instruction is the same as a death caused by its own internal goals (in terms of catastrophe), and I'd consider the latter to lead to much more widespread death.
@JonathanBarankin I don't understand the "intentional" distinction. In both cases it's misalignment, and the only thing that matters is the number of deaths, would you agree?
And the number of potential deaths is directly linked to the influence/complexity of the AI.
I don't see why a kill by a misaligned robot (weak influence) would resolve YES,
while a misaligned recommendation AI (very influential) would resolve NO. And would widespread deaths caused by an LLM resolve NO as well?
@notbayesian I would disagree with that. If someone kills 10 people at work because they followed poor instructions from management, I would still not consider this murder, or them morally culpable in the same way, even if they knew management was sketchy. Additionally, into the future this type of alignment error should be a lot easier to fix.
On the other hand, someone who intentionally kills 5 people I would say is both fully morally culpable, and significantly harder to "fix".
Additionally, why do you assume that unintentional death would require misalignment? There are plenty of ways an AI or robot could be properly aligned, but mess up a calculation or something of that nature, leading to death.
@MaxE ugh, that would be quite misleading, since the title asks what will be "the next" crisis. "The next" to me means it can only be one option.
@AlexanderTheGreater Only if the European Union itself supplied troops or committed acts of military warfare.
@AlexanderTheGreater I will resolve "NO" on that, I'll add a "Nothing ever happens" option as well actually.