In Jan 2027, Risks from Artificial Intelligence (or similar) will be on 80,000 Hours' top-priority list
94% chance (market closes 2027)

The top 10 recommended jobs, by some kind of ordering, on a page like this:

https://80000hours.org/problem-profiles/

Related questions

Will >90% of Elon re/tweets/replies on 19 December 2025 be about AI risk?
5% chance
What AI safety incidents will occur in 2025?
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
73% chance
The probability of "extremely bad outcomes e.g., human extinction" from AGI will be >5% in next survey of AI experts
79% chance
Will someone commit terrorism against an AI lab by the end of 2025 for AI-safety related reasons?
7% chance
In 2025, what % of EA lists "AI risk" as their top cause?
46% chance
In January 2026, how publicly salient will AI deepfakes/media be, vs AI labor impact, vs AI catastrophic risks?
At end of 2025, one of the top five software engineering AIs will be exclusively used by 100 or fewer tech companies
22% chance
Will Trump repeatedly raise concerns about existential risk from AI before the end of 2025?
8% chance
Will humanity wipe out AI x-risk before 2030?
10% chance
