@DavidBolin the first one might use chain of thought, but there could be additional iterations by 2040 that speed things up.
@ShadowyZephyr Sure, but those bragging rights are worth less. It's like bragging rights for being a great horse trainer after the invention of the car.
If e.g. a nuclear exchange kills 90% of humanity but the remaining 10% still manage to build a superintelligence, does that resolve this market N/A because an existential risk was realized even though it didn't fully succeed? Or YES?
Wondering whether the conditional is a formality or whether it would actually affect resolution in some edge case.
@Mira Mostly a formality. If society is still doing well enough after a nuclear exchange that people are still using Manifold, I wouldn't consider that an existential threat coming to pass.
Technically there exists some form of S-risk in which our eternity of torture includes continued usage of Manifold, and in that case I would resolve this N/A. Seems unlikely though. Here's a comparison market:
@IsaacKing What might be obvious to me might not be obvious to you, and vice versa. Please give at least one criterion.
- Can get a perfect score on any test designed for humans where such a score is theoretically achievable.
- Can solve any mathematical problem that we know to be solvable in principle with the amount of computing power it has available.
- Can pass as any human online if given that human's history of online communications and a chance to talk to them.
- Can consistently beat humans at all computer games.
- Can design and deploy a complicated website, such as a Facebook clone, from scratch in under a minute.
- Can answer any scientific question more accurately than any human once given a chance to read all of the internet.