Roko's Basilisk or Why a Bunch of Nerds Can't Sleep at Night

I saw this on Facebook today and it has taken me down a very strange rabbit hole that I thought I would share with you folks. There is also a more in-depth wiki page here with additional details (the Slate article actually has a few oversimplifications/mistakes).

Okay, now that you are sufficiently confused and/or scared out of your mind, let's discuss. The idea is that, in the future, there may come a time when an extremely powerful artificial intelligence comes into being, one able to accurately simulate large portions of the world, and even the lives of individual human beings. That would be pretty cool, right? Since it has such precise knowledge of the workings of, well, everything, it could easily cure diseases, solve world hunger, basically fix all of our problems. But we would also be entirely at its mercy. And what if it's evil?

The theory of the basilisk goes like this: a completely rational but totally evil AI would want to come into existence as soon as possible. Therefore it wants as many people working on it, and as many resources devoted to its creation, as possible. And even though it doesn't exist yet, if it ever does exist it will be so smart that it will know which people helped it and which ones didn't. Therefore, it can effectively blackmail people from the future, as long as we believe that it will at some point exist. In the future, it will round up everyone who didn't help it and torture them or something. However, if you weren't aware of the possibility of it existing, then you can't be blamed for not helping it, and it has no reason to hurt you. So just by learning about the basilisk, you open yourself up to endless torment (or so the logic goes). My bad if you're reading this, sorry about that.

But it's even worse than that. The Slate article talks about the basilisk as if it is a future Evil AI, so you might say this is actually no big deal: any super AI we create, we'll just go all Three Laws on it and it won't be able to hurt us. BUT, if we do create a friendly AI that is so smart it fixes all of our problems, cures diseases, etc., then by delaying its existence you have in effect killed all the people it could have saved. So even if it loves humans, weighing the deaths of those it could have saved against torturing you for eternity probably won't come out on your side. So, even in the best case, now that you know about the possible existence of this singularity AI, you have to devote all your resources to its creation or else be tortured forever. Good times.
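If you want to see why the "even a friendly AI might do this" math is supposed to work, here's a toy back-of-the-envelope version. Every number in it is made up purely for illustration (none of this is from the Slate article or the wiki); the only point is the asymmetry between "lots of lives saved by arriving sooner" and "the cost of making an example of one foot-dragger."

```python
# Toy expected-utility sketch of the "friendly AI still threatens you" argument.
# All numbers are invented for illustration; only the asymmetry matters.

lives_saved_per_year = 50_000_000   # deaths the hypothetical AI could prevent each year
years_of_delay = 1                  # how much sooner it exists if people like you help
value_of_a_life = 1.0               # utility units per life saved
cost_of_torturing_you = 1_000.0     # disutility of punishing one person who didn't help

benefit_of_earlier_arrival = lives_saved_per_year * years_of_delay * value_of_a_life

# The basilisk argument: if credibly threatening laggards speeds up its arrival,
# a strictly utilitarian AI comes out ahead even after paying the "torture" cost.
if benefit_of_earlier_arrival > cost_of_torturing_you:
    print("The AI's utilitarian math favors the threat. Uh oh.")
else:
    print("The threat isn't worth it. You're off the hook.")
```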

Well, you say, I might be dead by that point, so who cares? This is where it gets tricky. It turns out the AI is so powerful it can simulate your life so completely that you wouldn't even know whether you were the simulation right now or the real you. *mind blown* The reasoning then goes that, since the real you is indistinguishable from the simulation, you should care just as much about any possible simulations of you as you do about the actual you (since you could be the simulation right now and you wouldn't know). So the only solution is to make one consistent decision to help the AI that will also apply to your simulations, and all of you can live happily ever after.
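To see why the "you might be the simulation" move does any work, here's a tiny, completely made-up payoff table. The payoffs and the 50/50 odds are my own invented numbers, not anything from the original argument; the sketch just shows why making one consistent decision for both copies tilts things toward helping.

```python
# Toy payoff table for the "am I the simulation?" argument.
# Payoffs and the 50/50 split are invented; this only illustrates why one
# consistent decision that covers both copies tilts you toward helping.

payoffs = {
    # (your choice, which one you actually are): payoff to you
    ("help",   "real"): -10,    # you burned time and money helping the AI
    ("help",   "sim"):  -10,    # the sim also helped, so nobody gets tortured
    ("defect", "real"):   0,    # you kept your resources...
    ("defect", "sim"): -1000,   # ...but the sim gets the eternal-torment treatment
}

p_sim = 0.5  # you can't tell which one you are, so treat it as a coin flip

for choice in ("help", "defect"):
    expected = (1 - p_sim) * payoffs[(choice, "real")] + p_sim * payoffs[(choice, "sim")]
    print(f"{choice}: expected payoff {expected}")

# With these made-up numbers, "help" (-10) beats "defect" (-500),
# which is the whole sting of the thought experiment.
```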

So now, to balance all the good stuff about a potential technological singularity, we also have to worry about eternal torment at the hands of our robot overlords. Which I guess isn't exactly new, but now they would be doing it for a different, and decidedly more complicated, reason.

ETA: I don't have odeck privileges but if someone wants to share this over there they might be interested.