Is code-davinci-002 just the largest non-GPT-4 model in the GPT-4 scaling law experiment?
Ṁ35 · Feb 17
44% chance
https://arxiv.org/pdf/2303.08774.pdf
Resolves Yes/No when I have >90% confidence.
Related questions
- Will GPT-4 be trained (roughly) compute-optimally using the best-known scaling laws at the time? (30% chance)
- Will GPT-4 improve on the Chinchilla scaling law? (43% chance)
- What will be true about GPT-5?
- If we find out in 2024, was o1's Transformer base trained on 10+x as much compute as GPT-4's? (45% chance)
- What will be true about GPT-5? (See description)
- Will the performance jump from GPT-4 to GPT-5 be less than the one from GPT-3 to GPT-4? (76% chance)
- What is the main reason behind GPT-4o's speed improvement relative to the GPT-4 base model?
- What hardware will GPT-5 be trained on?
- GPT-4 #5: Will GPT-4 be a dense model? (3% chance)
- Will GPT-5 have over 10 trillion parameters? (59% chance)