Wednesday, March 30, 2022

Don't Worry About Roko's Basilisk

[Author’s note – I admit I’m late to the game on this philosophical matter. I’ve never given Roko’s Basilisk much thought because it seems patently silly on the surface of it. So why pay attention now? It just seems to be coming up a lot lately. Perhaps that is the Basilisk warning me.]

 

In 2010, user Roko on the LessWrong community boards posited this thought experiment: What if, in the future, there is a sufficiently powerful AI that would torture anyone in the past who could imagine the AI’s future existence but did nothing to help bring it about? This thought experiment is supposed to terrify us because, now that we know about it and the possibility of such a future AI seems plausible, we can’t know that we’re not already slated for torture if we’re not helping this AI come into existence. But I just can’t take this thought experiment seriously, even though it is easy enough to blackmail human beings.

 

First of all, while it would seem easy for an AI to blackmail someone given all the information it’s privy to, no one knows the future, so no one could be sure the future blackmailer was actually able to manipulate the past. Even if it could, we couldn’t be sure it wasn’t lying. So, the options here are to either say “Get lost” and not give it a second thought, or to actively work against the potential blackmailer. User XiXIDu on Reddit put it this way – “Consider some human told you that in a hundred years they would kidnap and torture you if you don't become their sex slave right now. The strategy here is to ignore such a threat and to not only refuse to become their sex slave but to also work against this person so that they 1.) don't tell their evil friends that you can be blackmailed 2.) don't blackmail other people 3.) never get a chance to kidnap you in a hundred years. This strategy is correct, even for humans. It doesn't change anything if the same person was to approach you telling you instead that if you adopt such a strategy then in a hundred years they would kidnap and torture you. The strategy is still correct. The expected utility of blackmailing you like that will be negative if you follow that strategy. Which means that no expected utility maximizer is going to blackmail you if you adopt that strategy.”
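The quoted argument is really a small piece of decision theory, and it can be made concrete with a toy payoff model. A minimal sketch, with purely illustrative numbers and function names of my own invention (none of this comes from the original post):

```python
# Toy model of XiXIDu's argument: if the target pre-commits to refusing
# all blackmail, then issuing the threat has negative expected utility
# for the blackmailer, so a rational expected-utility maximizer never
# issues it. The payoffs below are arbitrary illustrative values.

def blackmailer_utility(target_complies: bool,
                        threat_cost: float,
                        compliance_payoff: float) -> float:
    """Blackmailer's payoff from issuing a threat: it gains only if the
    target gives in; otherwise it must either carry out the costly
    threat or be exposed as a bluffer (modeled here as the same cost)."""
    return compliance_payoff if target_complies else -threat_cost

# The committed-refusal strategy: the target never complies.
committed_refusal_complies = False

eu_blackmail = blackmailer_utility(committed_refusal_complies,
                                   threat_cost=10.0,
                                   compliance_payoff=100.0)
eu_no_blackmail = 0.0  # doing nothing costs the blackmailer nothing

# Against a committed refuser, blackmailing is strictly worse than
# not blackmailing, no matter how large the compliance payoff is.
assert eu_blackmail < eu_no_blackmail
```

The point the sketch makes is the same one the quote makes in prose: the refusal strategy works by changing the blackmailer’s incentives before any threat is made, which is why announcing the threat earlier (or from the future) doesn’t change anything.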

 

Others in the Internet community have noted that Roko’s Basilisk is not unlike Pascal’s Wager, which no one takes seriously anymore because of the false dichotomy it presents. Believe in Roko’s Basilisk or else? It seems unlikely the situation would be that straightforward. For example, why would the future AI waste its energy on torturing people in the past? Wouldn’t it make more sense for it to focus its energy on rewarding those people who helped bring it into existence? There’s no good reason for the AI to be malevolent – not that reasons might matter much to such a future AI – since it would be in the AI’s best interest to be (overly) benevolent and not waste resources on people who simply don’t care. It is reasonable to assume that efficiency would be one of the hallmarks of a hyper-intelligent AI.

 

Unless the AI blackmailing you could transport you to the future and back for the sake of proving that it will exist one day, or otherwise made a specific threat and followed through with it, there is no reason to assume the AI blackmailer can back up its threats. And since I’ve just written that and posted it on the Internet, Roko’s Basilisk now knows the burden of proof is on it. If it can’t prove its future existence, it might as well not exist, and we shouldn’t worry about it. Good luck with all that, Roko’s Basilisk.

 

Just in case this particular AI will actually exist someday, we still needn’t worry. Given all the information we feed the Internet and the data AIs retrieve from us through our social media, shopping, and messaging, it likely knows we’re all suffering already, even if we’re only talking about life on its most fundamental level. Why would it bother making our lives in the past any more hellish than they already are? I suppose that is a question we should ask the gods…

Thursday, March 24, 2022

Oh, Twitter Christians, You Amuse Me

In trying to convince me that the God of the Bible does in fact exist and therefore validates Christianity compared to, say, Zoroastrianism, a Twitter user wrote this to me: “Does Zoroastrianism contain a virgin birth, a Trinity, a created angel/being that became evil, angel human hybrids, God becoming human and dying for our sins, new heavens and new earth?”

 

For some reason, Christians are painfully unaware that virgin births are fairly common in religious mythology. It’s not even a particularly special phenomenon in the animal kingdom; though rare, it can and does happen. Moreover, what is so special about female virgins anyway? They haven’t been tainted by a penis? By that logic any man who has sex with a woman therefore taints her – how rude! Now no god will want to impregnate her! If a religion really wants to impress me, give me a male virgin who impregnates a woman without having sperm taken from him.

 

A trinity? What’s special about a trinity? Lots of things come in threes and stupid tweets are one of them. Why doesn’t God stick to a duality? Or maybe there are four spiritual facets to godhood. What difference does a trinity make? Three is not a special number any more than any other number.

 

A created angel that became evil? Jesus Christ, that’s not even in the Bible. And, as I’ve said many times, any such creation of God had to be known by God to become evil – since the Christian god is omniscient – and this makes God look like a dick; he knew it was going to happen and let it happen anyway! Angelic beings becoming evil is also not special in mythology.

 

Angel-human hybrids? Someone has not read ANY mythology other than their own.

 

New heavens and new earth? As I’ve written before, I’m not impressed with believers’ visions of heaven, which often sound a lot like life on earth without having to pay taxes. I get it, though: your life on earth sucks and you need to believe it’ll get better after you die. Yet for some reason, most of y’all are scared to die just like anyone else.

 

Comparing one religion to another and pointing out where one is supposedly special whereas the other is not does not validate one’s religion. It just makes you look ignorant. That’s fine for Christians, I suppose, for in being ignorant and meek they shall allegedly inherit the earth. Mmm, yeah, judging by their work so far, that’s been working out great.