Artificial General Intelligence (AGI) refers to a type of artificial intelligence that has the ability to understand, learn, and apply its intelligence to a wide variety of problems, much like a human being. Unlike narrow or weak AI, which is designed and trained for specific tasks (like language translation, playing a game, or image recognition), AGI can theoretically perform any intellectual task that a human being can. It involves the capability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.
Resolves as YES if such a system is created and publicly announced before January 1st, 2026.
Here are markets with the same criteria:
Will we get AGI before 2024? NO
Will we get AGI before 2025? NO
Will we get AGI before 2026? 5% (this question)
Will we get AGI before 2027? 19%
Will we get AGI before 2028? 34%
Will we get AGI before 2029? 52%
Will we get AGI before 2030? 65%
Will we get AGI before 2031? 66%
Will we get AGI before 2032? 69%
Will we get AGI before 2033? 69%
Will we get AGI before 2034? 73%
Will we get AGI before 2035? 74%
Will we get AGI before 2036? 76%
Will we get AGI before 2037? 78%
Will we get AGI before 2038? 79%
Will we get AGI before 2039? 78%
Will we get AGI before 2040? 80%
Will we get AGI before 2041? 81%
Will we get AGI before 2042? 82%
Will we get AGI before 2043? 83%
Will we get AGI before 2044? 84%
Will we get AGI before 2045? 87%
Will we get AGI before 2046? 88%
Will we get AGI before 2047? 89%
Will we get AGI before 2048? 90%
Related markets:
Will we get ASI before 2027? 6%
Will we get ASI before 2028? 8%
Will we get ASI before 2029? 15%
Will we get ASI before 2030? 22%
Will we get ASI before 2031? 34%
Will we get ASI before 2032? 40%
Will we get ASI before 2033? 47%
Will we get ASI before 2034? 54%
Will we get ASI before 2035? 60%
Other questions for 2026:
Will there be a crewed mission to Lunar orbit before 2026? 4%
Will we get room temperature superconductors before 2026? 6%
Will we discover alien life before 2026? 4%
Will a significant AI generated meme occur before 2026? 55%
Will we get fusion reactors before 2026? 4%
Will we get a cure for cancer before 2026? 3%
Other reference points for AGI:
Will we get AGI before Vladimir Putin stops being the leader of Russia? 50%
Will we get AGI before Xi Jinping stops being the leader of China? 40%
Will we get AGI before a human walks on the Moon again? 33%
Will we get AGI before a human walks on Mars? 69%
Will we get AGI before we get room temperature superconductors? 83%
Will we get AGI before we discover alien life? 85%
Will we get AGI before we get fusion reactors? 56%
Will we get AGI before 1M humanoid robots are manufactured? 60%
https://arxiv.org/pdf/2503.23674

Participants picked GPT-4.5, prompted to act human, as the real person 73% of the time, well above chance. Only GPT-4.5 passed the test.
The creator of this market (and sister markets) has deleted their account. (Thanks to @Primer for the nudge to clarify what will happen with these markets.) What do @traders think of making these markets mirror https://manifold.markets/ManifoldAI/agi-when-resolves-to-the-year-in-wh-d5c5ad8e4708 ?
@dreev Personally I'm dumping money here precisely because I don't think the Turing test is a good criterion. So I'd rather not. I'm fine with the drop-in remote worker definition. Because there are no clear criteria here, I assume this will resolve YES only when it's incontrovertibly clear to the non-rat-adjacent man in the street that there is AGI, which I'm also fine with.
@CamillePerrin It sounds like you might like my new AGI market that's aiming to avoid being triggered by any technicalities. (Note my huge bias though -- I'm betting heavily on NO at the probabilities most here seem to think reasonable.)
I don't actually think the Metaculus and Longbets versions are useless. I think it's pretty unlikely that something less than a true AGI will manage to pass a long, informed, adversarial Turing test. That's why I've been betting NO in these markets even while proposing that these markets mirror those Turing-test-based ones.
It seems the one thing everyone agrees on is that passing a long, informed, adversarial Turing test is a necessary if not sufficient condition for these markets to resolve YES.
@dreev I don't agree. I think it's a pretty useless test. It's likely that a large number of humans would fail to pass it if the examiners genuinely thought they might be machines.
@dreev a (rigorous) Turing test around the advent of AGI is virtually guaranteed to have a very high false positive rate and a very high false negative rate, which makes this test useless. This wasn't obvious to people 50 years ago, but it's obvious now
the closer you get to AGI, the closer the apparent value of the Turing test (no matter how "adversarial") converges to zero
If you were to design a high-quality adversarial Turing test today, it would be full of gotchas the examiners think might trip up LLMs, all of which will be useless 2 years from now. And many of which would already filter out a large part of the human control group.
@UnspecifiedPerson Agreed on avoiding N/A. Hopefully we can settle on fair and reasonably objective resolution criteria here. Clearly some people want something stricter than even the strictest Turing test. (Again, note my own bias for having stricter criteria.) Aschenbrenner's drop-in remote workers may be the answer. Please do chime in in my other market about this, whether or not it has any bearing on this one.
PS: @MalachiteEagle can you edit your points into a single response? This isn't Discord!
@dreev The drop-in remote worker is a better criterion than the Turing test, but it's still weaker than this question's criteria, because it's possible to have a drop-in remote worker that can get work done without "learning quickly, and learning from experience". A drop-in remote worker can get work done without being AGI.
@VitorBosshard Oh, no one wants to change anything; sorry to give that impression. We want to pin down what things like "learns quickly" mean.
The market description as the creator left it is not clear, unless you mean that it's clear we're not there yet. I agree with that.
Also since the title just says "AGI" like many other markets do, I think we should err in the direction of what people mean by "AGI" based on other markets. Especially prominent ones like the one that the big countdown timer at manifold.markets/ai is based on. But also I agree that the creator's intent in this market was to be a bit stricter in the definition of what counts as AGI. So, again, we're aiming to pin that down better.
@MalachiteEagle I actually just presumed they're unreachable since their account is deleted. But if anyone can lure them back to weigh in, that'd be awesome.
Arbitrage opportunity: https://manifold.markets/dreev/in-what-year-will-we-have-agi
(I think that market is a bit more likely to resolve NO than this one, as it's been defined so far.)
Would it be fair to treat these AGI-when markets as mirroring the following question on Metaculus?
https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/
Or maybe this one:
https://www.metaculus.com/questions/11861/when-will-ai-pass-a-difficult-turing-test/
My current thinking is that passing a long, informed, adversarial Turing test (human foils are PhD-level experts in at least one STEM field and judges are PhD-level experts in AI) is a necessary but perhaps not sufficient condition for AGI. What we really mean by AGI is along the lines of Aschenbrenner's drop-in remote workers. Here's how I put it elsewhere:
Set physical capabilities aside and imagine hiring a remote worker — someone who can participate in Zoom calls, send emails, write code, and do anything else on any website or web app. If an AI can do all those things at least as well as humans, it counts as AGI.
I think that comports best with the market description here?
@dreev neither of the metaculus questions properly addresses (long-term) autonomy, which I think is generally considered part of AGI. Drop-in remote workers comes very close but might need to be specified, for example that it means the ability to do a job (including planning, prioritizing, etc), and not just a set of individual tasks
I think excluding physical tasks is the right call
@MalachiteEagle The "learn quickly, and learn from experience" part is quite tricky.
One could argue it would at least have to finetune itself constantly, and maybe even retrain itself to check that box.
One could also argue a bit of rather extensive notetaking like we're currently seeing with ClaudePlaysPokemon might be enough.
That's a wide range of interpretations.
@Primer I disagree. I think there is a key step between now and AGI which specifically tackles "learn quickly, and learn from experience". There are no deployed models today that can do those things in a way that remotely resembles human memory/skill acquisition. This is why N/A'ing these questions is wrong.
@MalachiteEagle I've got no skin in the game here. But if I were invested in any of those markets, I'd push for a clarification. I wouldn't be surprised if a mod resolved these YES based on a combination of "Metaculus" and "reasonable expectations". I'd also think about creating new markets, as some will argue a bit of note-taking qualifies (or mods may decide Manifold doesn't have the manpower to resolve those tricky cases) and all these markets might end up N/A'ed.
@Primer resolving one of these markets a year isn't exactly going to trigger a manpower shortage. The good thing about manifold is that it's a marketplace of ideas, and the best market wins. Creating new questions is great, but this question already does more in 3 sentences than any of those metaculus questions manage in 3 pages.
I don't think these questions will end up N/A. Furthermore, I've seen these ones getting linked a fair bit on Twitter and elsewhere, which brings more interest to the platform.
resolving one of these markets a year isn't exactly going to trigger a manpower shortage
There may well be hundreds of those. I doubt (though I might of course be wrong) that the current procedure is sustainable with a growing userbase.
@MalachiteEagle By the way, I absolutely agree with this part:
I think there is a key step between now and AGI which specifically tackles "learn quickly, and learn from experience". There are no deployed models today that can do those things in a way that remotely resembles human memory/skill acquisition.
Any objections to making this market mirror https://manifold.markets/ManifoldAI/agi-when-resolves-to-the-year-in-wh-d5c5ad8e4708 ?
@MalachiteEagle See Daniel's proposal above. The "learn quickly, and learn from experience" part would be gone, and we'd additionally get other problems, like when an AI is clearly smart enough but nobody performs the required tests, so we'd be trading on the question of if and when the respective tests happen.
@dreev I appreciate mod involvement; it's important to clear this up well before those markets close. But any proposals need to ping @traders, and not only here but in all the related markets, because traders in the 2031 version should have the same rights. The same holds for all markets that condition on these ones.
Maybe there should be @mods involved who don't hold positions here or in related markets, as traders are already arbitraging with potential substitute resolution criteria. And maybe N/A would thus be smarter. Again, this applies to all of the sister markets as well.