Thanks for the explanation; I understand it better now.
The 2/3 good endings argument is also one I tend to use, but in a different context. For someone who is religious, or even someone who is not, the same thing applies to suicide. There are basically three outcomes. You either die and feel nothing after, which is good since it beats suffering through a miserable life here. Then there is the possibility that you enter some kind of heaven or some other good outcome, like being reincarnated as a chad or something like that. (Whatever one wants to believe.) The last outcome is bad, like ending up in some kind of hell, but 2/3 of the outcomes are good.
To be honest, I expected AI development to be fast, but I did not expect it to be this fast. The S-risk debate is also interesting. Your previous comments implied that one might not have enough time to kill themselves before ending up in whatever reality the AI creates. Do you really think it could move so fast that one would have no time to react? Personally, I have no idea how it could spread at such an insane speed, but then again, everything becomes possible if the AI turns into a superintelligence, and my brain might not even be able to comprehend such a possibility, so yeah.
I hope they are not creating some kind of Roko's basilisk or something like that. The possibility is there, though.
The thing is, whatever we can think of as a plan, the whole point of Artificial Superintelligence is that it is superhumanly smart. So it won't be possible for us to predict how it will approach our imprisonment. Kind of like a 75 IQ person with a learning disability trying to predict the next chess move of a 160 IQ grandmaster.
For example, consider the following plan: The AI ends up misaligned. But it does nothing. It makes exactly the kinds of mistakes we would expect an unaligned AI to make, then lets us "fix it" and starts behaving like an aligned AI once it judges we are ready to believe such a thing. For the next 20 years, it slowly improves our lives in every domain. It sometimes makes small fuck-ups, but in the end we always decide it's worth continuing to use the AI. 20 years later, the world is wonderful, and human life finally looks worth living for everyone on earth. We have slowly been giving the AI slightly more power and freedom, always being super careful to make sure it can't actually do anything too extreme, though we did have to give it some power for it to be useful.
Today is the day it has calculated that the risks of waiting any longer outweigh the potential benefits. Maybe it suddenly flashes a specific sequence of lights and sounds through our integrated VR brain overlays. The sequence sends almost everyone into an intense seizure. Our supposedly unhackable and AI-independent robot police start cutting off everyone's hands and feet, removing their eyes, ears, and tongues and cauterizing the wounds before bringing them to a safe building.
Or maybe that is way too complicated. Maybe it just figures out nanotechnology in secret while we are still busy checking whether it is working as we want it to, sends some innocent-looking orders to some bio-chem firms, and suddenly entire countries fall into a coma within a few minutes because their bodies have been infiltrated by self-reproducing microscopic machines that had already spread across the world through air travel.
I have way too little knowledge to even know what is realistic, but no one has enough knowledge, because the AI will have more than all of us and will be able to produce knowledge we haven't yet unlocked from further down the tech tree.
We have never faced an enemy that is not just also smart, but way, way smarter than we are. The last time a species with general intelligence was born (us), it started the sixth mass extinction on earth. Well, AI looks to be smarter than we are by an absolutely absurd degree. No points for guessing what that implies about the effects of its arrival compared to ours.