
AI voice synthesis is getting good: https://m.youtube.com/watch?v=-9Ado8D3A-w&feature=youtu.be
Will someone use this technology to create a fake recording of a politician saying something inflammatory, where the opposing party then points to this audio as supposed evidence of wrongdoing?
For this market to resolve Yes, one party (or some extension of the party, such as a politically aligned news organization) must attempt to use the fabricated audio to damage the opponent's political prospects. It is not sufficient for the audio to simply exist. For example, the USPresidentsPlay YouTube series would not count, because no one treats the audio as credible.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ381
2 | | Ṁ225
3 | | Ṁ154
4 | | Ṁ78
5 | | Ṁ32
@EvanDaniel Here is an example of voice synthesis in the 2024 election, but it was not clearly done by a candidate against an opponent, and it was not used to fabricate dirt.
I think that if there were an example, it would have been in this article:
It seems like the robocall one actually was against a political opponent, but it was not used to fabricate dirt.
If there were another case, it'd probably be in this Wikipedia article.
@JimHays It doesn't have to be directly manufactured by the Democratic or Republican party, but it does have to be picked up by them and used to try to sully the opponent. It can't just be something a layperson makes on the side that withers away unnoticed in some corner of the internet.
@FranklinBaldo What do you mean? The technology is getting good enough that I think the chances are realistic that one party tries to manufacture a pussygate-esque scandal tape to sink their opponent.