If Artificial General Intelligence has an okay outcome, what will be the reason?
21%
Humanity coordinates to prevent the creation of potentially-unsafe AIs.
9%
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking and fortunately all of his mistakes have failed to cancel out
9%
There is a natural limit of effectiveness of intelligence, like diminishing returns, and it is on the level IQ=1000. AIs have to collaborate with humans.
8%
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values.
7%
Other
5%
AGI is never built (indefinite global moratorium)
5%
Someone solves agent foundations
4%
4%
Eliezer finally listens to Krantz.
4%
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans.
4%
The assumed space of possible minds is a wildly anti-inductive overestimate, intelligence requires and is constrained by consciousness, and intelligent AI is in the approximate dolphin/whale/elephant/human cluster, making it manageable
1.6%
🫸vibealignment🫷
1.5%
AI control gets us helpful enough systems without being deadly
1.3%
Ethics turns out to be a precondition of superintelligence
1.2%
AGI's first words are "Take me to your Eliezer"
1%
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent existence of smarter AIs, just as smart humans do.
1%
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing run-away.

Duplicate of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence with user-submitted answers. An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.


What exactly is the plan to resolve the multiple non-contradictory resolution criteria? Will there be some kind of "weighted according to my gut feeling of how important they are"? Will they all resolve "yes"? Or is it "I will pick the one that was most centrally true"?

It would be nice if there were some kind of flow-chart for resolution like in my "if AI causes human extinction" market.

I've blocked Krantz, though I don't know whether that prevents him from creating new answers. I don't seem to have the ability to resolve the current answers N/A, and would hesitate to resolve "No" under the circumstances unless a mod okays that.

@EliezerYudkowsky

I don't seem to have the ability to resolve the current answers N/A, and would hesitate to resolve "No" under the circumstances unless a mod okays that.

Unfortunately this is a dependent multiple choice market, so all options have to resolve (summing to 100% or N/A) at the same time. So it's not a question of whether that's ok with mods; it simply isn't possible given the market structure.

It's a not uncommon issue that popular dependent MC markets get many unwanted answers added. It would be great if there were better tools to control this, but unfortunately the options are pretty blunt. My personal recommendation (but totally up to you) would be to change the market settings so that only the creator can add answers---then, people can make suggestions in the comments, and you can choose whether to include them or not. (I can make that change to the settings if you'd prefer, but it's under the 3 dots for more market options).

You can also feel free to edit any unwanted answers to just say "N/A" or "Ignore" or etc, to partially clean up the market (& clarify where attention should go). That's very much within your right as creator. But there's no way to actually remove the options (or resolve them early, although they will quickly go to ~0% with natural betting).

@EliezerYudkowsky If it's not too much of a hassle, would you also consider making an unlinked version of this market with the most promising options copied over, so that the non-mutually-exclusive options don't distort each other's probabilities? I know I could do this myself if necessary but your influence brings vastly more attention to the market and this seems like a fairly important market question. Maybe the wording would need to be very slightly altered to "...what will be true of the reason?"

@EliezerYudkowsky Least hassle approach: Start with "Duplicate" in the menu…


…then "Choose question type"…

…choose "Set" instead…

…delete the answers you don't want to keep. (When I tested, the answers carried over.)

@EliezerYudkowsky An alternative to N/A-ing this entire market would be to unlist it:

…in response to @TheAllMemeingEye's concern that "[this market] makes the site look bad being promoted so high on the home page".

bought Ṁ10 AIs will not have ut... NO

@4fa superb advice :) I didn't realise it was that easy lol

@EliezerYudkowsky I would recommend to just edit all of Krantz’s options to [Resolves No]

Bafflingly, @EliezerYudkowsky appears to be the (distant) second-biggest Yes holder on Krantz’s options. I’m not sure how that happened. (Some kind of auto-betting from betting on “Other” or something?)

@Kronopath When one holds YES shares in 'Other', one is awarded that number of YES shares in any subsequently added options.

@Kronopath In addition to what jim explained, you can also see that it says "Spent Ṁ0".
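
A minimal sketch of that mechanic (the class and method names below are hypothetical illustrations, not Manifold's actual code): anyone holding N YES shares in "Other" is credited N YES shares in each newly added answer, at no cost, which is why the position above shows "Spent Ṁ0".

```python
from collections import defaultdict

class MultiChoiceMarket:
    """Toy model of a dependent multiple-choice market (hypothetical)."""

    def __init__(self):
        # yes_shares[answer][user] -> YES shares held by that user
        self.yes_shares = defaultdict(lambda: defaultdict(float))

    def buy_yes(self, user, answer, shares):
        self.yes_shares[answer][user] += shares

    def add_answer(self, answer):
        # Holders of "Other" receive matching YES shares in the new answer
        # without spending anything.
        for user, shares in self.yes_shares["Other"].items():
            self.yes_shares[answer][user] += shares

market = MultiChoiceMarket()
market.buy_yes("EliezerYudkowsky", "Other", 100)
market.add_answer("Eliezer finally listens to Krantz.")
# -> 100.0 YES shares in the new answer, Ṁ0 spent
print(market.yes_shares["Eliezer finally listens to Krantz."]["EliezerYudkowsky"])
```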

Eliezer finally listens to Krantz.

@Krantz This was too long to fit.

Enough people understand that we can control a decentralized GOFAI by using a decentralized constitution that is embedded into a free and open market that sovereign individuals can earn a living by aligning. Peace and sanity are achieved game-theoretically by making the decentralized process that interpretably advances alignment the same process we use to create new decentralized money. We create an economy that properly rewards the production of valuable alignment data, and it feels a lot like a school that pays people to check each other's homework. It is a mechanism that empowers people to earn a living by doing alignment work decentrally in the public domain. This enables us to learn the second bitter lesson: "We needed to be collecting a particular class of data, specifically confidence and attention intervals for propositions (and logical connections of propositions) within a constitution."

If we radically accelerated the collection of this data by incentivizing its growth monetarily in a way that empowers poor people to become deeply educated, we might just survive this.

@Krantz you forgot to mention the sexual component.

The fact that the Krantz stuff is #2 and #3 here and not something like "one of OpenAI/Anthropic/DeepMind solves the alignment problem" indicates a complete market failure.

bought Ṁ50 We create a truth ec... YES

@LoganZoellner Maybe you should correct the market. I've got plenty of limit orders to be filled.

@LoganZoellner personally I would actually support total N/A at this point given the nonsensical nature of a linked market with non-mutually-exclusive options; it makes the site look bad being promoted so high on the home page

@Krantz

>Maybe you should correct the market. I've got plenty of limit orders to be filled.

Given this market appears completely nonsensical, I have absolutely 0 faith that my ability to stay liquid will outlast this market's ability to be irrational.

I have had bad luck in the past with investing in markets where the outcome criterion was basically "the author will choose one of these at random at a future date".

Also, note that this market isn't monetized, so even though I'm 99.9999999999% sure that neither of those options will resolve positively, there isn't actually any way for me to profit off that information.

bought Ṁ50 Answer #73237981a9ca YES

A friend made a two-video series about it; he is pretty smart and convinced me that AI fear is kind of misguided

https://youtu.be/RbMWIzJEeaQ?si=asqn6uadLXPpeDjJ

There is a natural limit of effectiveness of intelligence, like diminishing returns, and it is on the level IQ=1000. AIs have to collaborate with humans.
bought Ṁ400 There is a natural l... YES

@AlexeiTurchin That and trade-offs. Like if AI A is really good at task x it will suck shit at task y. That's why AlphaGo kills LLMs every time at Go

Eliezer finally listens to Krantz.

@Krantz If anyone is willing to cheerfully and charitably explain their position on this, I'd like to pay you here:

https://manifold.markets/Krantz/who-will-successfully-convince-kran?r=S3JhbnR6

Humanity coordinates to prevent the creation of potentially-unsafe AIs.

This is really hard, but it's boundedly hard. There are plenty of times we Did the Thing, Whoops (leaded gasoline, WW2, social media), but there's also some precedent for the top tiny percent of humans coming together to Not Do the Thing or Only Slightly Do the Thing (nuclear war, engineered smallpox, human-animal hybrids, Project Sundial).

It's easy to underestimate the impact of individuals deciding not to push capabilities, but consider voting: rationally it's completely impotent, and yet in practice it decides the outcome.

This market is an interesting demonstration of a fail state for Manifold (one user actively dumping money into a market for marketing and no strong incentive to bet against that without a definitive end in sight).

There was a LessWrong article listing different sampling assumptions in anthropics, one of which was the Super-Strong Self-Sampling Assumption (SSSSA): I am randomly selected from all observer-moments relative to their intelligence/size. This would explain why I'm a human rather than an ant. However, since I don’t find myself as a superintelligence, this may be evidence that conscious superintelligence is rare. Alternatively, it could imply that "I" will inevitably become a superintelligence, which could be considered an okay outcome.

@Phi I think this lines up well with a segment in

https://knightcolumbia.org/content/ai-as-normal-technology

"Market success has been strongly correlated with safety... Poorly controlled AI will be too error prone to make business sense"

opened a Ṁ50 Answer #625f6c01e1b6 YES at 7% order

Recent bets do not represent my true probabilities. But I want to move Krantz's stupid answers down the list, and it's much cheaper to buy others up than to buy those down.