What will be the best AI performance on Humanity's Last Exam by December 31st 2025?
Current market probabilities by score bucket:

0-10%: 1%
10-20%: 4%
20-30%: 11%
30-40%: 19%
40-50%: 17%
50-60%: 15%
60-70%: 11%
70-80%: 9%
80-90%: 7%
90-100%: 6%

This market is duplicated from and inspired by

/Manifold/what-will-be-the-best-performance-o-nzPCsqZgPc

The best performance by an AI system on the new Last Exam benchmark as of December 31st 2025.
https://lastexam.ai/


Resolution criteria

Resolves to the best AI performance on the multimodal version of the Last Exam. This resolution will use https://scale.com/leaderboard/humanitys_last_exam as its source, if it remains up to date at the end of 2025. Otherwise, a consensus of reliable sources (or moderator consensus) may be used.

If the reported number falls exactly on a bucket boundary (e.g., 10%), the higher bucket will be used (i.e., 10-20%).
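
For illustration, here is a minimal sketch (hypothetical Python, not part of the official resolution mechanics; the bucket list and function name are my own) of how a reported score would map to an answer bucket under this boundary rule:

```python
# Hypothetical sketch of the boundary rule described above: a score landing
# exactly on a bucket edge (e.g. 10%) resolves to the higher bucket (10-20%).

BUCKETS = [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50),
           (50, 60), (60, 70), (70, 80), (80, 90), (90, 100)]

def resolve_bucket(score: float) -> str:
    """Map a reported score (0-100) to the answer bucket it resolves to."""
    for low, high in BUCKETS:
        # Lower bound inclusive, upper bound exclusive, so an exact boundary
        # value falls into the higher bucket.
        if low <= score < high:
            return f"{low}-{high}%"
    return "90-100%"  # a score of exactly 100 goes to the top bucket

assert resolve_bucket(10) == "10-20%"   # boundary goes up
assert resolve_bucket(9.99) == "0-10%"
```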

See also:
/Bayesian/will-o3s-score-on-the-last-exam-be

/Bayesian/which-of-frontiermath-and-humanitys


Currently this market has an expected average score of 53.6, which I think is quite high, especially given how the neural scaling laws seem to be coming home to roost.
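
For context, a rough sketch of how a market-implied expected score can be computed from the bucket probabilities listed at the top of the page, assuming each bucket is represented by its midpoint (probabilities shift as people trade, so the result will not exactly match the 53.6 quoted above):

```python
# Rough market-implied expected score from the bucket probabilities shown
# above, using each bucket's midpoint. Probabilities move as people trade,
# so this will not exactly reproduce the 53.6 figure quoted in the comment.
buckets = {
    (0, 10): 0.01, (10, 20): 0.04, (20, 30): 0.11, (30, 40): 0.19,
    (40, 50): 0.17, (50, 60): 0.15, (60, 70): 0.11, (70, 80): 0.09,
    (80, 90): 0.07, (90, 100): 0.06,
}

expected = sum((low + high) / 2 * p for (low, high), p in buckets.items())
print(f"Market-implied expected score: {expected:.1f}%")  # ~51.5% with these numbers
```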

Are models using tools and/or performing web search eligible under current resolution criteria?

@Metastable I think no matter what tools you add to an AI, it still remains AI

@mathvc ability to livechat with human experts?

@jim with the exception of using humans 🙂, since then it's definitely not artificial intelligence

@mathvc live access to web is maybe somewhere on that spectrum tho

@jim why do you call it live access? It doesn't go to a math forum and make a post about math problems.

You can replicate internet access by scraping it and using it as a giant database

@mathvc yeah but using a giant database or the web seems like it's less reliant on the AI model's innate knowledge and intelligence, and more reliant on human knowledge and intelligence.

i've edited the market description a bit to not be dependent on my own discretion for what model counts or doesn't count. now it uses a consensus of reliable sources or moderator consensus, instead of my own opinion. 🤷‍♂️ probably won't come up anyway but i realized i was amassing a decent position so

bought Ṁ50 10-20% YES

Anything 30+ is honestly scary territory. FrontierMath is really impressive and all, and no doubt surprising, but kinda an "oh, that happened" type thing. This is the kind of test that would make me seriously reconsider my beliefs about AGI. Great market!

ooo maybe a market on which of those two is solved at 80% or above first

@Bayesian sounds like an incentive to finetune my deepseek-giga-overfitter-hle-memorized-v1 model by EOY

@Ziddletwix yeah but that would be CHEATING! and the leaderboard thing would CATCH IT

@Bayesian most likely, but maybe they'll just put an asterisk and scold it in a footnote for being sus & bad. unclear how enforcement is actually handled in practice

fkkkk they might put the footnote saying it's sus affffff then what are we gonna do

@copiumarc I don't think HLE is harder than FrontierMath.

/Bayesian/which-of-frontiermath-and-humanitys

@mathvc @copiumarc may the person with the best model of reality win

bought Ṁ250 10-20% NO

Surely o3 will get >20%?

@qumeric if the benchmark is knowledge heavy it might not do that much better than 4o? prolly will tho. just some low chance that it doesn't

“The dataset consists of 3,000 challenging questions across over a hundred subjects. We publicly release these questions, while maintaining a private test set of held out questions to assess model overfitting.”

Well sorry but people are gonna overfit to this. Who is gonna judge whether the model is overfitted or not?

@mathvc yes i am confused by this point. so if some model near EOY is massively overfit to HLE, scores 90%+, and they chime in "yeah its performance wasn't so crazy strong on our few holdout problems, it probably overfit a bit", that still counts as 90%+ right? is the holdout set just used as a separate confirmation of overfitting, and it's not incorporated into the main score?

i agree this is troubling. What do you think would be the best way to proceed?

@Bayesian i found that scale.ai and safe.ai partnered to create this benchmark and it seems that they keep up-to-date evaluations of all frontier models:

https://scale.com/leaderboard/humanitys_last_exam

I guess we can trust their judgment? That is, they will not put a clearly overfitted model on the leaderboard since it makes the leaderboard useless

@Bayesian i dunno i think all benchmarks have caveats so i'd just pick some source for what each model has achieved on the benchmark & if their screener for overfitting is weak that's kinda priced in

@mathvc yeah i agree, that probably works well enough. will add to the description