
Wednesday, August 31, 2022

Why The Christian God Cannot Be Proven Without The Bible

A thought experiment: Imagine you’ve never read or heard of the Bible and don’t know anything about religion(s). Now think about yourself and the world around you. Also think about the breadth of the entire universe while you’re at it. Is there anything about your body, the planet Earth, and the universe at large that SPECIFICALLY points to a single omnipotent, omniscient, omnibenevolent being that created the universe who also exists outside of the universe? Moreover, is there anything about your body, the planet Earth, and the universe at large that SPECIFICALLY points to the plans or desires of this entity?

No.

The greatest problem facing the validity of the creator-god myth is something that doesn’t get brought up enough, if at all. 100% of the time, knowledge of such a god precedes the alleged evidence found in nature for that deity’s existence. No one in their right mind* with no knowledge of religion has ever looked around at themselves and at nature and said, “This is all so incredible, a single entity of some sort must’ve created everything.” No one would say this having any knowledge of how complex things are created and built. While some complex objects can be built by a single designer and engineer, we know that this is no small feat and requires lots of time; typically more than six 24-hour days. There is also every indication that the more complex something is to design and build, the more people are required to complete the task. The Empire State Building in New York City had four architects and required hundreds of people to build it. No one, not even a person who knows next to nothing about erecting buildings, would say the Empire State Building looks like something a single person designed and built.

[*By ‘in their right mind,’ we should say ‘in possession of analytic skills’ as primitive men obviously possessed little in the way of reason. Modern man still doesn’t.]

Every single time, knowledge about a religion exists prior to viewing one’s self, the Earth, and the universe through that lens and concluding that what one sees aligns with and affirms one’s beliefs. Here, we should ask why, then, scriptures are the only thing that establishes the existence of a creator-god. Why isn’t the existence of any such deity (and their plans) obvious from our existence and the world around us? A person left to their own devices, growing up alone and never coming into contact with another person, would not arrive at the biblical god, for example. There is absolutely nothing about our bodies, our minds, the world outside of us, or the universe beyond Earth that specifically states we should obey the 10 Commandments or accept Jesus as our Savior, for instance. No one is born with that specific knowledge. Christians are fond of saying everyone is born a sinner (thanks to Original Sin), while atheists are fond of saying everyone is born an atheist; the difference is that the atheist’s claim can’t be disputed, and that’s no small thing.

It might be objected that, well, a book is just the way a monotheistic god goes about teaching people about his existence and the need to be saved. I can’t help but think, though, that imprinting his existence and desires directly into our minds, without the need for other people’s input, would be a much better idea, especially considering you risk eternal damnation for not believing in him. Considering that, God does not seem too wise to me when I can think of a better way of doing things, particularly in the creation of humans. In creating a person, I would also re-design the knee, which is a poorly ‘designed’ joint. I would dispense with much of the universe as well, seeing how humans will never traverse most of that space. So why would I worship a deity I can outsmart on matters of design? Why would I worship a deity whose own book, the only way to ‘truly’ know them, is so obtuse as to spawn numerous sects of Christianity that all profess to be the One True Religion? If this deity did exist, I wouldn’t have much respect for their intellect.

So the challenge to apologists stands: Is there anything about your body, the planet Earth, and the universe at large that SPECIFICALLY points to a single omnipotent, omniscient, omnibenevolent being that created the universe who also exists outside of the universe? Moreover, is there anything about your body, the planet Earth, and the universe at large that SPECIFICALLY points to the plans or desires of this entity?

I already know (because I’m omniscient) that at least one apologist will chime in with DNA as their proof. Only, the complex structure of DNA does not speak to a single creator, as I’ve already pointed out, nor does the complexity of DNA tell us anything about the plans or desires of any deity beyond the proclivity to reproduce.

I’ll be waiting a long time for a good answer because all apologists are already tainted by and biased towards their belief, unable to be objective. Meanwhile, I am willing to be objective because I am rational: open to the possibility that a creator-god exists given the appropriate proofs, those proofs being arguments or evidence for the single creator described in the monotheistic traditions that do not fail and cannot be objected to.

Come, Watson, come. The Game is afoot.

Wednesday, March 30, 2022

Don't Worry About Roko's Basilisk

[Author’s note – I admit I’m late to the game on this philosophical matter. I’ve never given Roko’s Basilisk much thought because it seems patently silly on the surface of it. So why pay attention now? It just seems to be coming up a lot lately. Perhaps that is the Basilisk warning me.]


In 2010, user Roko on the LessWrong community boards posited this thought experiment: What if, in the future, there is a sufficiently powerful AI that will torture anyone in its past who could imagine the AI’s future existence but didn’t do anything to help bring the AI into existence? This thought experiment is supposed to terrify us because, now that we know about it and the possibility of such a future AI seems plausible, we can’t be sure we aren’t risking that torture right now by not helping this AI come into existence. But I just can’t take this thought experiment seriously, even though human beings are easy enough to blackmail.


First of all, while it would seem easy for an AI to blackmail someone given all the information it’s privy to, no one knows the future, and therefore no one could be sure the future blackmailer was actually able to reach into the past. Even if it could, we couldn’t be sure it wasn’t lying. So, the options here are to either say “Get lost” and not give it a second thought, or to actively work against the potential blackmailer. User XiXIDu on Reddit put it this way – “Consider some human told you that in a hundred years they would kidnap and torture you if you don't become their sex slave right now. The strategy here is to ignore such a threat and to not only refuse to become their sex slave but to also work against this person so that they 1.) don't tell their evil friends that you can be blackmailed 2.) don't blackmail other people 3.) never get a chance to kidnap you in a hundred years. This strategy is correct, even for humans. It doesn't change anything if the same person was to approach you telling you instead that if you adopt such a strategy then in a hundred years they would kidnap and torture you. The strategy is still correct. The expected utility of blackmailing you like that will be negative if you follow that strategy. Which means that no expected utility maximizer is going to blackmail you if you adopt that strategy.”
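To make the quoted point concrete, here is a minimal sketch in Python of the expected-utility arithmetic. The payoff numbers are invented purely for illustration; only their signs matter.

# A toy model of the blackmail argument above. The payoffs are
# made up for illustration; the conclusion depends only on their signs.

GAIN_IF_TARGET_COMPLIES = 25.0      # what the blackmailer wins if you cave
COST_OF_CARRYING_OUT_THREAT = 10.0  # resources burned actually punishing you

def blackmailer_expected_utility(p_comply):
    """Expected utility of threatening a target who complies with
    probability p_comply and must otherwise be punished."""
    return (p_comply * GAIN_IF_TARGET_COMPLIES
            - (1 - p_comply) * COST_OF_CARRYING_OUT_THREAT)

print(blackmailer_expected_utility(1.0))  # 25.0: a known pushover is worth threatening
print(blackmailer_expected_utility(0.0))  # -10.0: a precommitted refuser is a pure loss

As long as carrying out the threat costs the blackmailer anything at all, a target who precommits to never complying makes the threat a guaranteed loss, so an expected-utility maximizer simply won’t issue it.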


Others in the Internet community have mentioned that Roko’s Basilisk is not unlike Pascal’s Wager, which no one takes seriously anymore because of the false dichotomy it presents. Believe in Roko’s Basilisk or else? It seems unlikely the situation would be that straightforward. For example, why would the future AI waste its energy on torturing people in the past? Wouldn’t it make more sense for it to focus its energy on rewarding the people who help bring it into existence? There’s no good reason for the AI to be malevolent (not that reasons might matter much to such a future AI), since it would be in the AI’s best interest to be (overly) benevolent and not waste resources on people who simply don’t care. It is reasonable to assume that efficiency would be one of the hallmarks of a hyper-intelligent AI.


Unless the AI blackmailing you could transport you to the future and back to prove that it will exist one day, or otherwise make a specific threat and follow through with it, there is no reason to assume the AI blackmailer can back up its threats. And since I’ve just written that and posted it on the Internet, Roko’s Basilisk now knows the burden of proof is on it. If it can’t prove its future existence, it might as well not exist, and we shouldn’t worry about it. Good luck with all that, Roko’s Basilisk.


Just in case this particular AI will actually exist someday, we still needn’t worry. Given all the information we feed the Internet and the data AIs retrieve from us through our social media, shopping, and messaging, it likely knows we’re all suffering already, even if we’re only talking about life on its most fundamental level. Why would it bother making our past lives any more hellish than they already are? I suppose that is a question we should ask the gods…

Tuesday, February 7, 2017

A Modern Day Trolley Problem

The Trolley Problem (aka the Trolley Dilemma) is a thought experiment sometimes used by psychologists and philosophers to gauge a person’s moral compass. The thought experiment goes something like this:

Suppose there is a runaway train car (a trolley) rolling down the tracks towards five people who are tied to the tracks, or who are lying on the tracks and otherwise unable to move. You are off in the distance observing this and happen to see a lever next to you that, if pulled, will switch the runaway train car from its current course onto another set of tracks. However, on the diverted track there is a single person tied to the tracks who will be killed if you pull the lever. The question is: Do you pull the lever to save five people and kill one, or take no action and let five people die?

Keep in mind this is the Trolley Problem in its simplest iteration. There are several variations of this thought experiment, which involve intentionally pushing a fat man onto the tracks to save five people (the Fat Man version) or intentionally pushing the man (a “fat villain”) who tied up the potential victims onto the tracks to avert the deaths of the innocent. Let’s not concern ourselves with these versions or ask questions about the characters of all the potential victims. For the sake of realism, however, we are going to alter the details of the thought experiment to a more likely scenario than the one initially presented. We’re going to do this because the Trolley Problem as described above doesn’t present a realistic situation in which one would find oneself forced to make a moral judgement. What I’d like to do is introduce a modern equivalent to the Trolley Problem, changing the problem to something more akin to what philosopher Judith Jarvis Thomson had in mind with her “brilliant surgeon” version of the Trolley Problem.

Here’s my version: Suppose you are a politician, and if you don’t vote for a particular bill, five random people will lose their health care coverage and die from pre-existing conditions. If you do vote for this particular bill, the five people will keep their health care coverage but another single random person will lose their coverage and die from their pre-existing condition. In summary, if you don’t vote – if you take no action regarding the bill – five people will die. If you do vote for the bill – take action on the bill – one person will die and five will be saved. Do you vote for the bill or not?
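Stripped to a bare decision table, the dilemma looks like this. A minimal Python sketch; the action names and death counts simply restate the scenario above:

# The two available actions and their stipulated outcomes.
OUTCOMES = {
    "abstain":  5,  # five people lose coverage and die
    "vote_yes": 1,  # one person loses coverage and dies
}

def utilitarian_choice(outcomes):
    """A strict utilitarian minimizes total deaths and considers nothing else."""
    return min(outcomes, key=outcomes.get)

print(utilitarian_choice(OUTCOMES))  # vote_yes

The deontologist’s objection is precisely that this table is not all there is: to them, “abstain” and “vote_yes” differ in kind (inaction versus intentional harm), not just in body count. The next paragraphs argue that this distinction collapses.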

Philosophically, what you as the politician are likely to do is based upon whether you are a utilitarian or a deontologist. That is, if you seek to do the greatest good, as a utilitarian you’re going to vote for the bill. If, on the other hand, you think that committing certain actions, like intentionally harming someone, is wrong, then you’re not going to vote for the bill. Of course, the obvious flaw in the deontologist’s position is that not voting for the bill – an inaction – is just as bad as intentional harm if it is your intention to abstain from the vote. In other words, an inaction is just as bad as an action if one intends towards the inaction. (The major flaw in deontology is that intentions matter.) There is a choice to be made – vote or don’t vote – and once it is made, there is intention behind the given choice; this fact cannot be escaped. The deontologist likes to think that by not intending to ‘intentionally’ cause harm, they are absolved from whatever harm does happen. Obviously, this is madness, as not voting for the bill – intentionally – causes harm and makes the deontologist’s actions morally impermissible.

This deconstruction of the deontologist’s position philosophically compels you to vote for the bill, thus taking the utilitarian route (if there’s no false dilemma here, which there may well be). Without knowledge of any of the random people involved, and without knowing whether saving the five will result in a better or worse world, you should vote for the bill on the assumption that the deaths of five people are likely to wreak more sorrow and havoc than the death of one person. All things being equal among the random people, you are compelled to vote for the bill on purely philosophical grounds if you want to be considered a morally just person (as morality is construed in the Western industrialized world). However, what people are likely to do in reality is much different.

In reality, most people take the deontologist’s route and think they are avoiding an action that intentionally harms a single person. This outcome was confirmed by a 2007 online experiment conducted by psychologists Fiery Cushman and Liane Young and biologist Marc Hauser. They concluded that taking a positive action (doing something) that resulted in a decidedly negative consequence evoked emotions that clouded ‘better’ judgement. But why should this be the case? Why would ‘taking action’ produce feelings that assume the outcome will be worse than taking no action at all? Why does an apparently personal investment in an outcome change what a person will decide to do?

We may want an evolutionary psychologist to weigh in here, or we may hypothesize that people generally ‘don’t want to get their hands dirty’ for fear of negative consequences – meaning, precisely, being held responsible for one’s actions. As those of us familiar with the workings of Western culture know, we tend to forgive inaction that leads to harm, working from the assumption that such consequences weren’t malicious in nature. Only, if one knows the consequences – that five people will die through the inaction of not voting – it is difficult to see why this outcome isn’t just as malicious as voting. Again, the deontologist works from the premise that an action can be wrong and an inaction not wrong, but I’ve already argued this is demonstrably false, as inaction is in fact an action because the decision itself is based on intention. The deontologist intends not to intentionally kill someone, not realizing that this initial intention intentionally kills five people. No matter what angle you view such moral dilemmas from, given only two choices, the deontological reasoning falters.

None of this takes into account other reasons why you might vote or not vote. Perhaps you view situations like this and feel the need to do something, and therefore decide to vote. Or perhaps you’re a misanthrope and are indifferent to the people dying. The way such dilemmas are presented (for example, whether they are written out or experienced in a virtual world) may also influence decision-making. Regardless of which of the two choices is made – if those are all that are given – the choice still tells us something about the moral compass of the person making it. In my example, a deontologist has no firm philosophical ground to stand on. They are, in other words, irrational.


And this is why the Trolley Problem, formulated in 1967, is still relevant today. It would be wise to know when a populace is too irrational, if for no other reason than to prompt a re-evaluation of, say, educational programs. Of course, there is the other side of the coin, in which people in powerful positions rely on an irrational populace, so it would be wise for them to administer such moral tests as well, to be aware of when the citizenry might be becoming too smart for them to fool. Thought experiments, long considered the realm of lowly philosophers, are beneficial to everyone. And when they’re not, they still make for good conversations when you’re high.