How will reading "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" change my P(Doom)?
22 traders · Ṁ2,230 · resolved Sep 19

No significant change. — 80% (resolved 100%)
Increase it. — 15%
Decrease it. — 4%

My current estimate of P(Doom) is 35%. I believe that ASI will be invented in the near future, but that alignment is tractable enough that this will most likely end well for humanity. I believe that an international treaty banning frontier AI development, as Yudkowsky advocates, is extremely unlikely to happen and would probably not reduce P(Doom) even if it did. I have read most of Yudkowsky's work, including HPMOR, the Sequences, and much of LessWrong.

My preorder of "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" was delivered while I was writing this question. I plan to read it over the next week, but might need a few weeks.

Given all of that, feel free to speculate on how reading this book will affect my P(Doom). I plan to talk about the book and read what other people think of it, but I will not seek out anything else that might change my view on P(Doom).


If I do not finish reading the book by October 20, I will resolve this N/A. If there is a significant change in my P(Doom) unrelated to this book, I will also resolve N/A.

No significant change.

I finished reading the book yesterday. It was good, but none of the arguments were new, and my particular disagreements with Yudkowsky were not sufficiently addressed. I agree with this review: https://x.com/willmacaskill/status/1968759901620146427