
This market resolves to YES if, by December 31, 2027, the AI safety community successfully advocates for and achieves a significant global slowdown in frontier AI development.
A "significant global slowdown" requires at least two of the following:
- A formal international agreement, signed by at least 3 of the top 5 AI-producing countries, to limit or pause certain types of AI development
- Major AI labs (at least 3 of the top 5) publicly committing to and implementing significant voluntary slowdowns
- Implementation of substantial regulatory barriers to rapid AI development in at least 3 of the top 5 AI-producing countries
The slowdown must be explicitly connected to AI safety concerns and must represent a material change from the previous development pace.
The top 5 countries will be determined by the most recent version of this report: https://hai.stanford.edu/news/global-ai-power-rankings-stanford-hai-tool-ranks-36-countries-in-ai or the most up-to-date report using a similar methodology.
EDITED Aug 4 2025: added "(of the top 5)" to condition 3, added report example
@JussiVilleHeiskanen I guess I'll make that 'of the top 5', like the previous bullets, unless anyone objects. I'll edit the description. As for what those 5 countries are, we can use a report like this one: https://hai.stanford.edu/news/global-ai-power-rankings-stanford-hai-tool-ranks-36-countries-in-ai or a more up-to-date equivalent by the time of resolution.