At each of the dates in this market I'll run a poll asking:
"Would it be safer for humanity if all AI was open source?"
1st poll:
2nd poll: Jan 2025
3rd poll: Aug 2025
Update 2025-08-12 (PST) (AI summary of creator comment):
- The creator missed the Llama 4 checkpoint; the current poll will resolve both GPT‑5 and Llama 4.
- There will be no separate Llama 4 poll; both share the same result.
I think the problem with this question is the wording. "Safer for humanity" makes me think about existential risks. I think economic risks are MUCH higher, and it would be better if more models were open source, but that doesn't translate to "safer for humanity" because I don't think unemployment and inequality will lead to extinction.
In my opinion, existential risks are very low at the moment: from what I've seen, all current models completely fail at displaying agentic behavior, and they are also, architecturally, not optimisers, which is what most "AI kills everyone" theories assume.