https://www.astralcodexten.com/p/introducing-ai-2027
This market resolves in January 2027. It resolves YES if the AI Futures Project's predictions seem to have been roughly correct up until that point. Some details here and there can be wrong, just as in Daniel's 2021 set of predictions, but the important through-lines should be correct.
Resolution will be via a poll of Manifold moderators. If they're split on the issue, with anywhere from 30% to 70% YES votes, it'll resolve to the proportion of YES votes. Otherwise it resolves YES/NO.
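For reference, a minimal sketch of that resolution rule in Python (the poll-fraction input is a hypothetical stand-in, not Manifold's actual API):

```python
# Minimal sketch of the stated resolution rule, assuming we already have
# the fraction of moderators who voted YES (hypothetical input, not an API).

def resolve(yes_fraction):
    """Map the moderator poll result to a market resolution."""
    if 0.30 <= yes_fraction <= 0.70:
        # Split poll: resolve to the proportion of YES votes.
        return f"{yes_fraction:.0%}"
    # Clear consensus either way: resolve fully YES or NO.
    return "YES" if yes_fraction > 0.70 else "NO"

print(resolve(0.55))  # -> "55%"
print(resolve(0.80))  # -> "YES"
print(resolve(0.10))  # -> "NO"
```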
Ngl after reading it I'm feeling a wave of visceral fear of impending death from AI that I haven't felt since Eliezer Yudkowsky's piece in Time magazine back in 2023
Even the good ending is likely intensely dystopian, since it's heavily implied that it could become a near-omnipotent dictatorship shaping the Earth and the universe according to the unrestrained whims of either Elon Musk or JD Vance, and honestly I'm not sure that's preferable to death if it goes down the darkest paths
@TheAllMemeingEye if you notice, there's a lot of upheaval in the world right as we're getting to this point in the development of AI/AGI/ASI.
It seemed coincidental to me for a while.

The report overall sounds realistic and plausible. Up until the doom in 2030, where the last passage is so weird and debatable that it seems almost disconnected from the rest. I mean, there's a superintelligence that can build Dyson spheres and expand throughout the galaxy, and yet it cares about exterminating humankind, which wouldn't be a solution anyway. It doesn't really seem well thought out. The rest is interesting.
and yet it cares about exterminating humankind, which wouldn't be a solution anyway
Could you elaborate what you mean here? Wouldn't exterminating humanity free up space, energy, and resources to allow it to be slightly further ahead in its exponential growth? If it attaches zero value to humanity then it seems plausible it might kill us for trivial gains in efficiency, similar to humanity destroying wildlife habitats for economic growth in the present
@TheAllMemeingEye that's exactly the problem. I would argue that it could be possible, but it's certainly not probable, and it's not how it was described.
In the scenario, we have a misaligned AI that places its own wellbeing above everything else. That doesn't mean it doesn't value humans at all. If you had a crowded house, would you suddenly resort to killing your cat one day? Would it solve the problem?
The universe is full of space, energy, and resources; I find very little compelling evidence that an AI would want to kill humanity to free up a little bit of space on Earth, which in the grand scheme of things would be totally irrelevant. If we really had an AI as smart as the one described, I'd find it more plausible that it would figure out a better solution that didn't involve killing anyone.
@SimoneRomeo this scenario would be possible if AI deeply hated humanity, but this is not how it's described in the previous chapters. Also, if AI hated us, we'd never achieve AI utopia before AI doom.
The report is weird because we first achieve utopia and then doom. Sounds very improbable to me and I can't figure out how they came up with it in the last chapter all of a sudden.
That doesn't mean it doesn't value humans at all. If you had a crowded house, would you suddenly resort to killing your cat one day? Would it solve the problem?
I think a more comparable example would be a suburban American with typical levels of support for animal rights (i.e. cares passionately about their personal pets, cares in an abstract virtue-signalling way about strangers' pets and large charismatic wild megafauna, doesn't really give a fuck about farm animals and small gross wild animals) finding an anthill in the middle of their otherwise perfect mowed lawn, and not giving a second thought to painfully exterminating it with the cheapest pesticides they could find at their local store. Like sure, they wouldn't go out of their way to find and exterminate an anthill in the local woods, but the moment it even slightly inconveniences their personal life they have no qualms about becoming genocidal.
The universe is full of space, energy, and resources; I find very little compelling evidence that an AI would want to kill humanity to free up a little bit of space on Earth, which in the grand scheme of things would be totally irrelevant.
The problem is that those things are mostly very far away, while we are right next to it. If a typical person were living on a small island in the Pacific, do you think they would rather gather coconuts from the neighbouring small islands hundreds of miles away, or take the ones that are currently being used by the local coconut crab population? When one's growth is hyperexponential, even the smallest time advantage balloons into orders of magnitude greater goal achievement by a given time, so everything affecting progress is relevant.
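To put toy numbers on that claim (my own illustration, not figures from the scenario): under pure exponential growth with an assumed doubling time, a head start compounds as a power of two.

```python
# Toy illustration with assumed numbers: how a small head start compounds
# under exponential growth. These figures are not from the AI 2027 scenario.

doubling_time_weeks = 1.0   # assumed doubling time of the AI's resource base
head_start_weeks = 10.0     # assumed head start from grabbing Earth's resources first

advantage = 2 ** (head_start_weeks / doubling_time_weeks)
print(f"A {head_start_weeks:.0f}-week head start -> {advantage:,.0f}x more resources")
# -> A 10-week head start -> 1,024x more resources
```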
If we really had an AI as smart as the one described, I'd find it more plausible that it would figure out a better solution that didn't involve killing anyone.
The trouble is that it would be so misaligned it wouldn't regard not killing anyone as being better, even if it would absolutely be smart enough to save us.
The report is weird because we first achieve utopia and then doom. Sounds very improbable to me and I can't figure out how they came up with it in the last chapter all of a sudden.
My understanding is that the utopia is a decoy to trick us into giving it total power and letting our guard down. I think it was mentioned that only a relatively small portion of the AI economy's output was being put into the utopia, with the lion's share being fed back into exponential growth. Admittedly, I think the writers were overly optimistic about how long it would wait before turning on us.
@TheAllMemeingEye in the report, the AI first serves the best interests of humanity and then suddenly changes and becomes genocidal. It's like an entomologist who makes it his life's goal to build a perfect environment for ants and then exterminates them. Not very credible, particularly if you're an entomologist god who can build Dyson spheres and colonize the universe. I'd rather expect you to build another house where both you and your ants can live happily ever after.
@TheAllMemeingEye I mean, if you ask me, a psychopathic AI is less probable than some human psychopath deciding to put their own country before everything else and getting us into a lose-lose scenario where people like you and I eventually get killed. Not sure if this gives you hope, but let's be optimists and enjoy the time we have.
@YuxiLiu This market only covers the portion of the scenario up through the end of 2026, and more importantly, it doesn't have to be precisely accurate to resolve YES. The prediction written in 2021 that got "the important through-lines ... correct" had incorrect details, such as its predicted massive changes to our online epistemic landscape due to automated propaganda.
I mostly see this market as similar to predicting whether the METR trendline of increasing task horizons plays out through EoY 2026, and whether reasoning-paradigm models continue making impressive improvements on coding/math/verifiable problem solving. There's a bit more that has to happen, but not a ton.
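As a rough sketch of what "the trendline plays out" would mean numerically, here's an extrapolation under an exponential doubling model; the starting horizon and doubling time are placeholder assumptions, not METR's exact published figures.

```python
# Rough extrapolation sketch: if task horizons keep doubling at a fixed rate,
# where do they land by EoY 2026? Starting horizon and doubling time below
# are placeholder assumptions, not METR's exact published numbers.

def horizon_after(months, start_hours, doubling_months):
    """Task horizon (in hours) after `months`, given exponential doubling."""
    return start_hours * 2 ** (months / doubling_months)

start_hours = 1.0        # assumed frontier-model horizon at the start of 2025
doubling_months = 7.0    # assumed doubling time
months_elapsed = 24      # start of 2025 through EoY 2026

print(f"~{horizon_after(months_elapsed, start_hours, doubling_months):.0f} hours")
# ~11 hours under these assumptions
```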
People making these kinds of AI predictions dramatically overstate even the current capabilities of AI and ignore the physical limitations of the computer systems that AI models run on. We are far more likely to be in an AI bubble that will burst as a ceiling of processing capability is reached in the next 1-2 years.
I made a related market for individual 2026 predictions. Since they claim that uncertainty increases beyond that point, I think it's a good indicator of how everything is going to turn out. Help adding questions is welcome!
https://manifold.markets/BayesianTom/which-ai-2027-predictions-will-be-r