
Will the US implement AI incident reporting requirements by 2028?
83% chance
This market will resolve YES if, by 2028, the US establishes a policy requiring certain kinds of AI incident reporting, similar to existing requirements for aviation incidents or data breaches. The policy may allow many incidents to be kept confidential within a regulatory body. The goal is to enable regulators to track specific types of harms and near-misses from AI systems, allowing them to identify dangers and quickly develop mitigation strategies.
Luke Muehlhauser of Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market was proposed by Michael Chen.
Related questions
Will the US implement information security requirements for frontier AI models by 2028?
88% chance
Will the US regulate AI development by end of 2025?
34% chance
Will the US implement testing and evaluation requirements for frontier AI models by 2028?
82% chance
Will the US government require AI labs to run safety/alignment evals by 2025?
20% chance
Will the US establish a clear AI developer liability framework for AI harms by 2028?
39% chance
Will the US require a license to develop frontier AI models by 2028?
50% chance
Will the US government enact legislation before 2026 that substantially slows US AI progress?
18% chance
Will a regulatory body modeled on the FDA regulate AI in the US by the end of 2027?
16% chance
Will the US government adopt a mandatory labeling system for AI-generated content by 2025?
25% chance
Will someone be arrested for a felony offense committed in the name of AI safety in the US before 2026?
67% chance