Will Manifold think "it would be safer if all AI was open source" when:
It's 2026 Jan: 18%

GPT 5 comes out: 6%

Llama 4 comes out: 7%

It's 2030: 23%

At each of the dates in this market I'll run a poll asking:

"Would it be safer for humanity if all AI was open source?"

First poll:

2nd poll - Jan 2025

3rd poll - Aug 2025

- Update 2025-08-12 (PST) (AI summary of creator comment):
  - The creator missed the Llama 4 checkpoint; the current poll will resolve both GPT-5 and Llama 4.
  - There will be no separate Llama 4 poll; both share the same result.


Looks like I missed Llama 4, so this poll will resolve both GPT-5 and Llama 4.

Latest poll out

I think the problem with this question is its wording. "Safer for humanity" makes me think about existential risks. I think the economic risks are MUCH higher, and it would be better if more models were open source, but that doesn't translate to "safer for humanity" because I don't think unemployment and inequality will lead to extinction.
In my opinion, existential risks are very low at the moment: from what I've seen, all current models completely fail at displaying agentic behavior, and they are also, architecturally, not optimisers, which is what most "AI kills everyone" theories assume.

This interview with Zuck was my inspiration for this market.

At 38 minutes they discuss the dangers of open source, and at 48:50 Zuck makes an interesting point that maybe the biggest harms aren't the existential ones but the real ones that exist today.