|
Post by goz on Mar 22, 2018 1:01:25 GMT
I believe in Occam's Razor, so absence of evidence is evidence of absence. But in this case the evidence against afterlife exists. It's indirect evidence. You don't have to prove that there is no afterlife; you just have to prove that body and "soul" are a unit, and that there is no dualism. Libet proved that. Meaning: When the body dies, so does the "soul". Occam's Razor is useful and quite valuable as a guiding principle, but it should not be confused with actual evidence. Occam's Razor is useful in choosing between two competing theories with equal support from experimental evidence (where the preferred one is whichever is simplest). I often see people referring to Occam's Razor as though it is on an equal footing with evidence, and it certainly is not; it is clearly weaker, and by itself is insufficient for the purposes of science. I firmly believe that body and "soul" are a unit, and it may be that Libet demonstrated that this is the case. But even if he proved a monist model of body and soul, it doesn't necessarily follow that there is no afterlife. To shed more light on that would require a better understanding of what consciousness is and the "binding problem": why is my consciousness bound to me and not someone else? If I got duplicated in a Transporter Room-like contraption, which copy of me is me? If in the future someone with advanced technology were able to perfectly recreate my brain and have it functioning, including all the details of the synaptic weights that encode my memories, would I then regain consciousness as myself? We can theorize about what is likely the case, but at this point it is speculation, not anything proven by science. It is worth noting that this happens in nature with identical twins (clones), and that they each have a separate consciousness because they each have a separate brain and body. At birth their genetics are identical, but this is completely separate from their identity.
|
|
|
Post by Eva Yojimbo on Mar 22, 2018 1:36:39 GMT
I believe in Occam's Razor, so absence of evidence is evidence of absence. But in this case the evidence against afterlife exists. It's indirect evidence. You don't have to prove that there is no afterlife; you just have to prove that body and "soul" are a unit, and that there is no dualism. Libet proved that. Meaning: When the body dies, so does the "soul". Occam's Razor is useful and quite valuable as a guiding principle, but it should not be confused with actual evidence. Occam's Razor is useful in choosing between two competing theories with equal support from experimental evidence (where the preferred one is whichever is simplest). I often see people referring to Occam's Razor as though it is on an equal footing with evidence, and it certainly is not; it is clearly weaker, and by itself is insufficient for the purposes of science. This is somewhere between misleading and incorrect. Even before any testing is done, hypotheses are not equally likely to be true. The prior probability that any hypothesis is likely to be true is directly related to its simplicity--in technical terms its Kolmogorov complexity. Evidence would be any observation that changes the probability of these hypotheses with new information. That evidence can be strong, in that it dramatically changes them from what they were, or it can be relatively weak in that it doesn't change them much at all. Whether "Occam's Razor" (in this case, the simplest/most probable hypothesis) has a bigger impact on how probable a hypothesis is VS any other kind of evidence would depend on how probable the simplest hypothesis was beforehand (the prior) and how strong the evidence was to change that (the posterior). So Occam's Razor is not "weaker" than evidence; that entirely depends on initial complexity/simplicity of the hypotheses involved, and any evidence that could potentially change the probability that those hypotheses are true. 
To follow up on that point: absence of evidence is most certainly evidence of absence, mathematically provably so.
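In Bayesian terms the proof is short: if some piece of evidence E is more likely to be observed when hypothesis H is true than when it is false, then failing to observe E must lower the probability of H. A quick sketch in Python, with all numbers made up purely for illustration:

```python
# "Absence of evidence is evidence of absence," Bayes-style.
# Illustrative (made-up) numbers: E is more likely if H is true.
p_h = 0.5               # prior P(H)
p_e_given_h = 0.8       # P(E | H)
p_e_given_not_h = 0.3   # P(E | ~H)

# Probabilities of NOT observing E under each hypothesis:
p_no_e_given_h = 1 - p_e_given_h          # 0.2
p_no_e_given_not_h = 1 - p_e_given_not_h  # 0.7

# Bayes' theorem after E fails to appear:
posterior = (p_no_e_given_h * p_h) / (
    p_no_e_given_h * p_h + p_no_e_given_not_h * (1 - p_h)
)
print(posterior)  # ~0.222: lower than the 0.5 prior, so P(H) dropped
```

The drop is modest rather than decisive, which is the usual situation: absence of evidence is (weak) evidence of absence, not proof of absence.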
|
|
|
Post by general313 on Mar 22, 2018 15:25:14 GMT
Occam's Razor is useful and quite valuable as a guiding principle, but it should not be confused with actual evidence. Occam's Razor is useful in choosing between two competing theories with equal support from experimental evidence (where the preferred one is whichever is simplest). I often see people referring to Occam's Razor as though it is on an equal footing with evidence, and it certainly is not; it is clearly weaker, and by itself is insufficient for the purposes of science. This is somewhere between misleading and incorrect. Even before any testing is done, hypotheses are not equally likely to be true. The prior probability that any hypothesis is likely to be true is directly related to its simplicity--in technical terms its Kolmogorov complexity. Evidence would be any observation that changes the probability of these hypotheses with new information. That evidence can be strong, in that it dramatically changes them from what they were, or it can be relatively weak in that it doesn't change them much at all. Whether "Occam's Razor" (in this case, the simplest/most probable hypothesis) has a bigger impact on how probable a hypothesis is VS any other kind of evidence would depend on how probable the simplest hypothesis was beforehand (the prior) and how strong the evidence was to change that (the posterior). So Occam's Razor is not "weaker" than evidence; that entirely depends on initial complexity/simplicity of the hypotheses involved, and any evidence that could potentially change the probability that those hypotheses are true. To follow up on that point: absence of evidence is most certainly absence of evidence, mathematically provably so. I know you're a fan of Bayesian probability, but I think it's a mistake to treat properties of the universe as though they are random variables with well-defined probabilities. 
When modern physics came to the conclusion that Newtonian physics was a simplified approximation of reality and that Einstein's equations were a better description, particularly at relativistic speeds, scientists didn't do a Bayesian calculation on the likelihood of Newton being right or wrong before and after the experiments of the late 19th century. So I agree with your statement "Even before any testing is done, hypotheses are not equally likely to be true" (that's where Occam is useful), but after testing is done, it doesn't matter what seemed more likely before; we only care about which hypothesis is the better predictor of our test results (given the knowledge available to us at that time), and at that point we no longer care about which is simpler. It is in this sense that I maintain that Occam is weaker than evidence (i.e., a scientist will always be more confident in a theory that is backed by experimental results than one that can only appeal to Occam's Razor).
|
|
|
Post by general313 on Mar 22, 2018 15:51:21 GMT
Occam's Razor is useful and quite valuable as a guiding principle, but it should not be confused with actual evidence. Occam's Razor is useful in choosing between two competing theories with equal support from experimental evidence (where the preferred one is whichever is simplest). I often see people referring to Occam's Razor as though it is on an equal footing with evidence, and it certainly is not; it is clearly weaker, and by itself is insufficient for the purposes of science. I firmly believe that body and "soul" are a unit, and it may be that Libet demonstrated that this is the case. But even if he proved a monist model of body and soul, it doesn't necessarily follow that there is no afterlife. To shed more light on that would require a better understanding of what consciousness is and the "binding problem": why is my consciousness bound to me and not someone else? If I got duplicated in a Transporter Room-like contraption, which copy of me is me? If in the future someone with advanced technology were able to perfectly recreate my brain and have it functioning, including all the details of the synaptic weights that encode my memories, would I then regain consciousness as myself? We can theorize about what is likely the case, but at this point it is speculation, not anything proven by science. It is worth noting that this happens in nature with identical twins (clones), and that they each have a separate consciousness because they each have a separate brain and body. At birth their genetics are identical, but this is completely separate from their identity. But identical twins aren't really identical, despite having identical genomes. Their fingerprints, retinas and most importantly their brains are different. They will have different memories.
|
|
|
Post by clusium on Mar 22, 2018 16:14:32 GMT
...what, exactly, was Pascal's definition of God? The Christian Catholic deity, nobody else. Well, of course! Because he was Catholic.
|
|
|
Post by goz on Mar 22, 2018 22:04:57 GMT
It is worth noting that this happens in nature with identical twins (clones) and that they each have a separate consciousness because they each have a separate brain and body. At birth their genetics are identical, but this is completely separate from their identity. But identical twins aren't really identical, despite having identical genomes. Their fingerprints, retinas and most importantly their brains are different. They will have different memories. "Identical twins come from the same zygote. This means that the egg and sperm are the exact same, which equates to identical DNA. From birth, yes they should be 100% genetically identical. However, they aren't necessarily genetically identical as life progresses." (a quote from Wikipedia). There is some controversy over how the two different babies develop in the 'nurture' of the womb, however, as you stated about fingerprints, retinas etc. Theoretically they start out the same. I actually find this fascinating from two perspectives. 1. The fact that their genome is identical, having split in the early days of pregnancy, and how this affects brain development in those early weeks. 2. The nature/nurture effects on the unborn identical genome and on life after birth. Apparently it also depends on at what stage of development the initial twin cell division took place. Great stuff to ponder in terms of this discussion on consciousness, and also the abortion debate.
|
|
|
Post by Terrapin Station on Mar 22, 2018 22:20:12 GMT
Pascal's wager is useless because it excludes options. For example: God doesn't exist, but there is life after death, and people who believed in God are punished whereas people who were atheists are rewarded.
|
|
|
Post by Eva Yojimbo on Mar 23, 2018 2:45:00 GMT
This is somewhere between misleading and incorrect. Even before any testing is done, hypotheses are not equally likely to be true. The prior probability that any hypothesis is likely to be true is directly related to its simplicity--in technical terms its Kolmogorov complexity. Evidence would be any observation that changes the probability of these hypotheses with new information. That evidence can be strong, in that it dramatically changes them from what they were, or it can be relatively weak in that it doesn't change them much at all. Whether "Occam's Razor" (in this case, the simplest/most probable hypothesis) has a bigger impact on how probable a hypothesis is VS any other kind of evidence would depend on how probable the simplest hypothesis was beforehand (the prior) and how strong the evidence was to change that (the posterior). So Occam's Razor is not "weaker" than evidence; that entirely depends on initial complexity/simplicity of the hypotheses involved, and any evidence that could potentially change the probability that those hypotheses are true. To follow up on that point: absence of evidence is most certainly absence of evidence, mathematically provably so. I know you're a fan of Bayesian probability, but I think it's a mistake to treat properties of the universe as though they are random variables with well-defined probabilities. When modern physics came to the conclusion that Newtonian physics was a simplified approximation of reality and that Einstein's equations were a better description, particularly at relativistic speeds, scientists didn't do a Bayesian calculation on the likelihood of Newton being right or wrong before and after the experiments of the late 19th century. 
So I agree with your statement "Even before any testing is done, hypotheses are not equally likely to be true" (that's where Occam is useful), but after testing is done, it doesn't matter what seemed more likely before, we only care about which hypothesis is the better predictor of our test results (given the knowledge available to us at that time), and at that point we no longer care about which is simpler. It is in this sense that I maintain that Occam is weaker than evidence, (i.e. a scientist will always be more confident in a theory that is backed by experimental results than one that only can appeal to Occam's Razor). Bayes isn't necessarily about treating "properties of the universe as though they are random variables with well-defined probabilities." "Well-defined probabilities" sounds like a frequentist view of probability, and Bayes isn't really treating "properties of the universe" as random variables, but rather our beliefs/hypotheses/theories about the universe. At the fundamental level, Bayes is how all correct reasoning works when it comes to processing how evidence affects beliefs (propositions, theories, etc.), including in science. That's why when ET Jaynes wrote his textbook on probability theory, the subtitle was "the logic of science," because Bayes is basically a type of formalized logic. I don't know what you think is "mistaken" about this. This: "after testing is done, it doesn't matter what seemed more likely before..." is absolutely incorrect. The prior probability is what the new evidence of any experiment is altering. Bayes is the formalization of how it alters it. Even with something like Einstein/Newton you can express what happened in a Bayesian way. Let's say that before the eclipse experiment the probability Einstein was right was 10%. The eclipse experiment provided evidence, which you express as conditional probabilities: Given GR is true, the probability of the eclipse experiment's results is (let's say) 99%.
Given GR isn't true, the probability of the eclipse experiment's results is 0.1%. So even though the prior wasn't in Einstein's favor, the experiment altered it so much that the probability Einstein was right became overwhelmingly high (given those made-up numbers, it would've made Einstein about 99% likely to be correct). Thing is, though, the vast, vast majority of experiments in science aren't eclipse experiments that provide overwhelming evidence for a hypothesis/theory; most experiments provide some evidence that favors one theory over others, and in those cases the prior probability can have a bigger effect on what the posterior probability ends up being than the evidence from the experiments does. Evolution is an example where it was the steady accumulation of evidence over a long period of time that eventually made it overwhelmingly likely. No single experiment provided evidence so strong that it made the prior probability irrelevant. So when you say "we only care about which hypothesis is the better predictor of our test results," that entirely depends on how well each hypothesis predicts the results. In Bayes, that's what you call the conditional probabilities: "Given (h)ypothesis is true, what is the probability of observing (r)esult?" and "given ~h, what is the probability of observing r?" These two conditionals are not always going to be so lopsided (as in the GR eclipse) that the results overwhelm the prior probability. Most of the time they're just going to alter it to a much smaller degree, and in those cases the prior probability will often play as much of a role (if not a bigger one) than the conditionals in what the posterior probability ultimately is.
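Plugging the made-up eclipse numbers above straight into Bayes' theorem makes the update explicit (a sketch of the arithmetic only, not anything that was actually computed historically):

```python
# Bayesian update with the post's made-up eclipse numbers.
prior = 0.10              # P(GR) before the experiment
p_r_given_gr = 0.99       # P(results | GR true)
p_r_given_not_gr = 0.001  # P(results | GR false)

posterior = (p_r_given_gr * prior) / (
    p_r_given_gr * prior + p_r_given_not_gr * (1 - prior)
)
print(round(posterior, 3))  # 0.991: the evidence swamps the unfavorable prior
```

Halve or double any of those inputs and the posterior barely moves, which is what "overwhelming evidence" means here.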
|
|
|
Post by Eva Yojimbo on Mar 23, 2018 2:46:30 GMT
Pascal's wager is useless because it excludes options. For example: God doesn't exist, but there is life after death, and people who believed in God are punished whereas people who were atheists are rewarded. Or, as I postulated here, God exists but it's a rational God that rewards atheists who rationally disbelieved and punishes believers who irrationally believed.
|
|
|
Post by Terrapin Station on Mar 23, 2018 10:30:05 GMT
Pascal's wager is useless because it excludes options. For example: God doesn't exist, but there is life after death, and people who believed in God are punished whereas people who were atheists are rewarded. Or, as I postulated here, God exists but it's a rational God that rewards atheists who rationally disbelieved and punishes believers who irrationally believed. Right, there are tons of other possibilities that the wager doesn't cover.
|
|
|
Post by general313 on Mar 23, 2018 16:49:35 GMT
I know you're a fan of Bayesian probability, but I think it's a mistake to treat properties of the universe as though they are random variables with well-defined probabilities. When modern physics came to the conclusion that Newtonian physics was a simplified approximation of reality and that Einstein's equations were a better description, particularly at relativistic speeds, scientists didn't do a Bayesian calculation on the likelihood of Newton being right or wrong before and after the experiments of the late 19th century. So I agree with your statement "Even before any testing is done, hypotheses are not equally likely to be true" (that's where Occam is useful), but after testing is done, it doesn't matter what seemed more likely before, we only care about which hypothesis is the better predictor of our test results (given the knowledge available to us at that time), and at that point we no longer care about which is simpler. It is in this sense that I maintain that Occam is weaker than evidence, (i.e. a scientist will always be more confident in a theory that is backed by experimental results than one that only can appeal to Occam's Razor). Bayes isn't necessarily about treating "properties of the universe as though they are random variables with well-defined probabilities." "Well-defined probabilities" sounds like a frequentist view of probability, and Bayes isn't really treating "properties of the universe" as random variables, but rather our beliefs/hypotheses/theories about the universe. At the fundamental level, Bayes is how all correct reasoning works when it comes to processing how evidence affects beliefs (propositions, theories, etc.), and that's including in science. That's why when ET Jaynes wrote his textbook on probability theory, the subtitle was "the logic of science," because Bayes is basically a type of formalized logic. I don't know what you think is "mistaken" about this. 
In assigning probabilities to estimate how likely it is that our theories are correct, there is an utter lack of precision. In rolling two dice there is a strong mathematical definition of the probability of rolling snake eyes, and it is easily demonstrated physically and mathematically. This is not so with physical theories. For example, a medieval astronomer might assert that there's a 98 percent chance that the earth is the center of the solar system, but does that really even mean anything? How and why would the dubious estimation of a medieval astronomer have any effect on our own modern evaluation of astronomical theories? I'm not following you here. If we never had geocentrist astronomers, would the "probabilities" of our modern theories change? But once evolution was accepted, scientists stopped bothering with the details of Lamarckism or whatever came before. One doesn't need to know the history of a science to understand its current status (though it may be informative and interesting). If a 6th century BC Athenian thought there was an 80 percent chance that Apollo was the true god of truth, what difference does that make to a 21st century Christian? So when you say: "we only care about which hypothesis is the better predictor of our test results," that entirely depends on how well any hypothesis predicts the results. In Bayes, that's what you call the conditional probabilities: "Given (h)ypothesis is true, what is the probability of observing (r)esult?" and "given ~h, what is the probability of observing r?" These two conditionals are not always going to be so ridiculously high and low (as in the GR eclipse) that the results make the posterior probability the inverse of the prior probability. Most of the time they're just going to alter them to a much smaller degree, and in those cases the prior probability will often play as much of a role (if not a bigger one) than the conditionals in what the posterior probability ultimately is.
As I've stated above, one need not know the full history of a scientific theory to understand its current status. In addition, theories are usually modified as soon as they fail to predict a certain result. It is more like Boolean logic; a misprediction represents a falsification and an adjustment to the theory is needed. I stand by my statement: a scientist will always be more confident in a theory that is backed by experimental results than one that only can appeal to Occam's Razor. Fortunately for science there is always evidence (otherwise it isn't science).
|
|
|
Post by Lugh on Mar 23, 2018 17:56:47 GMT
Stupid, but misunderstood by some moronic atheists
|
|
|
Post by Deleted on Mar 23, 2018 18:19:24 GMT
Stupid, but misunderstood by some moronic atheists Then I think you should tell us what is misunderstood.
|
|
|
Post by phludowin on Mar 23, 2018 23:14:53 GMT
In assigning probabilities to estimate how likely our theories are correct, there is an utter lack of precision to it. In rolling two dice there is a strong mathematical definition of the probability of rolling snake eyes, and it is easily demonstrated physically and mathematically. This is not so with physical theories. For example, a medieval astronomer might assert that there's a 98 percent chance that the earth is the center of the solar system, but does that really even mean anything? Actually, no. What astronomers like Kopernikus or Galilei did was to observe the "movements" of the celestial bodies. And the evidence they collected contradicted geocentric models. We could reformulate this using probability vocabulary. Observation A: The celestial bodies move in a certain way. Hypothesis B: The Earth is the center of the universe. Now when P(A|B) is zero, or close to zero (if the Earth were the center of the universe, then the celestial bodies could not have moved the way they did, unless they were behaving weirdly), but the celestial bodies did in fact move in that way (meaning: P(A) > 0), then it follows that the posterior P(B|A) is zero, or at most very, very low. Of course, Galilei and Kopernikus didn't know anything about Boolean algebra or Kolmogorov axioms; but we do.
|
|
|
Post by general313 on Mar 24, 2018 0:18:03 GMT
In assigning probabilities to estimate how likely our theories are correct, there is an utter lack of precision to it. In rolling two dice there is a strong mathematical definition of the probability of rolling snake eyes, and it is easily demonstrated physically and mathematically. This is not so with physical theories. For example, a medieval astronomer might assert that there's a 98 percent chance that the earth is the center of the solar system, but does that really even mean anything? Actually, no. What astronomers like Kopernikus or Galilei did was to observe the "movements" of the celestial bodies. And the evidence they collected contradicted geocentric models. We could reformulate this using probability vocabulary. Observation A: The celestial bodies move in a certain way. Hypothesis B: The Earth is the center of the universe. Now when P(A|B) is zero, or close to zero (if the Earth were the center of the universe, then the celestial bodies could not have moved the way they did, unless they were behaving weirdly), but the celestial bodies did in fact move in that way (meaning: P(A) > 0), then it follows that the posterior P(B|A) is zero, or at most very, very low. Of course, Galilei and Kopernikus didn't know anything about Boolean algebra or Kolmogorov axioms; but we do. I'm not referring to the precision of the experimental data used to validate scientific theories; I'm referring to the idea of assigning a probability to the likelihood that a theory is valid, in particular before the experimental data is collected. The validation of their results depended crucially on the data they collected (which, as you pointed out, they were able to do without knowledge of Kolmogorov axioms and the like). Without that evidence there is no science.
|
|
|
Post by Eva Yojimbo on Mar 24, 2018 2:12:28 GMT
Bayes isn't necessarily about treating "properties of the universe as though they are random variables with well-defined probabilities." "Well-defined probabilities" sounds like a frequentist view of probability, and Bayes isn't really treating "properties of the universe" as random variables, but rather our beliefs/hypotheses/theories about the universe. At the fundamental level, Bayes is how all correct reasoning works when it comes to processing how evidence affects beliefs (propositions, theories, etc.), and that's including in science. That's why when ET Jaynes wrote his textbook on probability theory, the subtitle was "the logic of science," because Bayes is basically a type of formalized logic. I don't know what you think is "mistaken" about this. In assigning probabilities to estimate how likely our theories are correct, there is an utter lack of precision to it. In rolling two dies there is a strong mathematical definition of the probability of rolling snake eyes, and is easily demonstrated physically and mathematically. This is not so with physical theories. For example, a medieval astronomer might assert that there's a 98 percent chance that the earth is the center of the solar system, but does that really even mean anything? First, at least ideally, there is a precision to it. In Solomonoff induction hypotheses are expressed as binary code whose simplicity is calculable and thus directly comparable to other hypotheses. The problem with Solomonoff is that it's (currently) impractical; but at least in theory, there is a way to precisely tell us how likely our hypotheses/theories are. Second, that we're imprecise about our uncertainty isn't really a problem for Bayesian approaches; you aren't any less uncertain by NOT applying Bayes. The thing to understand is that all probability is about modeling uncertainty. 
The only difference with stuff like dice and coins is that we know precisely what we know (how many sides) and precisely what we don't know (effect of gravity on roll/toss) so it makes it easy to express in numbers. That doesn't mean we can't make educated guesses when it comes to modeling our less precise uncertainty in other areas. I do this in poker all the time. The only thing I know precisely is my hand, the community cards, and the cards remaining either in the deck or other players' hands; however, this doesn't mean much if I can't put my opponent on a range of possible hands given how they're playing. One can't be precise about that, but one can make educated guesses, and the quality of those guesses will have a direct impact on the quality of one's decision making. How and why would the dubious estimation of a medieval astronomer have any effect on our own modern evaluation of astronomical theories? I'm not following you here. If we never had geocentrist astronomers, would the "probabilities" of our modern theories change? It wouldn't. I don't know why you think it would. Our estimations change as our knowledge changes. That's the entire idea behind Bayes. It's the lesson to be learned from the Monty Hall Problem and its variants. I'm skipping the rest of your post because I think it's mostly just restatements of the above point. I think you're rather confused about what Bayes is and the use of probability theory as a model for how scientific evidence works. So instead of responding to those points, let me devise a simple thought experiment: You've found a coin and immediately start flipping it. It lands on heads four times in a row. There are two hypotheses: 1. It's a fair coin 2. It's a trick coin 1. predicts your results only 6% of the time (.5^4). 2. Predicts your results 100% of the time. Do you think it's a trick coin? If so, why? If not, how many flips with the same result would it take to convince you? I'll leave it there for now. 
I'll have more to say in the next post, as this thought experiment is a pretty good illustration of my point(s).
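One way to make the thought experiment concrete (assuming, as the "100%" figure implies, that a "trick" coin means a two-headed coin, and with made-up priors):

```python
# Bayesian update for the coin thought experiment.
# Assumption: a "trick" coin is two-headed, so it always lands heads.
def posterior_trick(prior_trick, num_heads):
    """P(trick coin | num_heads heads in a row)."""
    p_data_given_trick = 1.0 ** num_heads  # trick coin: heads every time
    p_data_given_fair = 0.5 ** num_heads   # fair coin: 0.5 per flip
    numerator = p_data_given_trick * prior_trick
    return numerator / (numerator + p_data_given_fair * (1 - prior_trick))

# With little prior reason to suspect the coin (say 1% of coins are trick
# coins), four heads only raise suspicion to ~14%; with a 50/50 prior,
# the same four heads make "trick" ~94% likely.
print(posterior_trick(0.01, 4))  # ~0.139
print(posterior_trick(0.50, 4))  # ~0.941
```

Either way, each additional head multiplies the odds in favor of "trick" by a factor of two, so the prior decides how many flips it takes to be convinced, not whether the evidence eventually wins.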
|
|
|
Post by general313 on Mar 24, 2018 18:50:48 GMT
In assigning probabilities to estimate how likely our theories are correct, there is an utter lack of precision to it. In rolling two dies there is a strong mathematical definition of the probability of rolling snake eyes, and is easily demonstrated physically and mathematically. This is not so with physical theories. For example, a medieval astronomer might assert that there's a 98 percent chance that the earth is the center of the solar system, but does that really even mean anything? First, at least ideally, there is a precision to it. In Solomonoff induction hypotheses are expressed as binary code whose simplicity is calculable and thus directly comparable to other hypotheses. The problem with Solomonoff is that it's (currently) impractical; but at least in theory, there is a way to precisely tell us how likely our hypotheses/theories are. Second, that we're imprecise about our uncertainty isn't really a problem for Bayesian approaches; you aren't any less uncertain by NOT applying Bayes. The thing to understand is that all probability is about modeling uncertainty. The only difference with stuff like dice and coins is that we know precisely what we know (how many sides) and precisely what we don't know (effect of gravity on roll/toss) so it makes it easy to express in numbers. That doesn't mean we can't make educated guesses when it comes to modeling our less precise uncertainty in other areas. I do this in poker all the time. The only thing I know precisely is my hand, the community cards, and the cards remaining either in the deck or other players' hands; however, this doesn't mean much if I can't put my opponent on a range of possible hands given how they're playing. One can't be precise about that, but one can make educated guesses, and the quality of those guesses will have a direct impact on the quality of one's decision making. 
How and why would the dubious estimation of a medieval astronomer have any effect on our own modern evaluation of astronomical theories? I'm not following you here. If we never had geocentrist astronomers, would the "probabilities" of our modern theories change? It wouldn't. I don't know why you think it would. Our estimations change as our knowledge changes. That's the entire idea behind Bayes. It's the lesson to be learned from the Monty Hall Problem and its variants. I'm skipping the rest of your post because I think it's mostly just restatements of the above point. I think you're rather confused about what Bayes is and the use of probability theory as a model for how scientific evidence works. So instead of responding to those points, let me devise a simple thought experiment: You've found a coin and immediately start flipping it. It lands on heads four times in a row. There are two hypotheses: 1. It's a fair coin 2. It's a trick coin 1. predicts your results only 6% of the time (.5^4). 2. Predicts your results 100% of the time. Do you think it's a trick coin? If so, why? If not, how many flips with the same result would it take to convince you? I'll leave it there for now. I'll have more to say in the next post as this thought experiment is pretty good illustration of my point(s). I think there's an error in your odds calculation for the coin tossing. If you flipped the coin once, it would be wrong to summarize thusly: 1. predicts your results only 50% of the time (.5^1). 2. predicts your results 100% of the time With a single coin toss, landing on heads, I have equal reason to believe that the coin is a fair or a trick coin. But in any case repeated tosses of the coin ending up heads would increase my suspicion that it is a trick coin. After 4 tosses the probability that they land the same side up would be .5^3 (for a fair coin). 
I will confess that in my physics and engineering background I never ran across Bayes, even in my statistical physics classes, so I do have limited familiarity with its uses. I do have an appreciation of its utility in some problems (for example medical diagnosis applications). I'm just skeptical that it can bring meaningful results when the "input data" is so speculative. I think this article gives a pretty balanced look at it. Here's an excerpt: But let's rewind for a bit, and we can then talk more about Bayes afterwards. phludowin claimed that science had proven that there is no afterlife, then appealed to Occam's razor to justify that stance. Do you agree with that assessment?
|
|
|
Post by Eva Yojimbo on Mar 25, 2018 2:13:12 GMT
I'm not sure why you think my odds calculation is in error, but you're free to use whatever odds you want as I don't want to get tripped up over that.
When you say "repeated tosses of the coin ending up heads would increase my suspicion that it is a trick coin," that's close to what I'm getting at. But HOW much do repeated tosses increase your suspicion? How many tosses does it take? See, the answer to those questions entirely depends on what your priors are; in this case, the prior would be the probability that you have a trick coin vs a fair coin to begin with. We all know fair coins are more common, but we have no precise idea of HOW much more common. Yet, despite not having a precise idea of how much more common, you admit that repeated tosses increase your suspicion. See, that's the thing: when you say that, you're doing Bayesian reasoning even if you aren't able to put a precise number on your priors. If you were to say "it would take 10 tosses to convince me it was most likely a trick coin," you can actually work backwards from that (someone better with numbers than me could write it algebraically): each heads doubles the odds in favor of the trick coin, so if your prior odds were 5000:1 (fair:trick), ten heads in a row would shrink that to about 5:1, still fair:trick. The number of tosses you'd demand before being convinced implicitly fixes how strong a prior you were carrying.

To extend this metaphor, most science works more like the above than, say, finding an experiment that allows you to just turn the coin over and see definitively whether it's a trick coin or not. Most science proceeds by finding evidence that progressively alters our priors until eventually we have strong or overwhelming evidence that one hypothesis/theory is most likely. But there are plenty of scientific areas where we don't have that level of evidence, or we have conflicting evidence, or we have no evidence but the simplicity of the various hypotheses (interpretations of quantum physics are an example of the latter). You can express all of these ideas in Bayesian terms, regardless of how precise/imprecise a number you can put on your priors.

Eh, I wouldn't go as far as to say it's proved there's no afterlife. I don't even think that's the right way of looking at such issues.
I view that more like how I view Many Worlds in quantum physics: it's the simplest explanation that should be assumed by default until any evidence is presented for a more complex hypothesis. I don't think one should bother wasting time on proving (or claiming it's been proved that) more complex hypotheses are false. I'm also rather curious as to why he thinks the Libet experiment proved that, since I thought that was mostly an experiment about free will. It's easy for someone to just propose that all our thoughts/mind are connected to our brain until death, at which point they uncouple. That's basically what people seem to think about the soul/body connection in general (I mean, those that believe this malarkey to begin with).
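The "work backwards from your priors" idea in the coin example above can be made concrete in a few lines of Python. This is only an illustrative sketch: the 5000:1 prior is the made-up figure from the post, and the trick coin is assumed to show heads every time:

```python
# Bayesian odds update for the coin example (illustrative sketch).
# Assumed prior odds that a found coin is fair rather than a trick
# (two-headed) coin; 5000:1 is the post's made-up figure.
PRIOR_FAIR_TO_TRICK = 5000.0

def odds_fair_to_trick(num_heads, prior=PRIOR_FAIR_TO_TRICK):
    """Posterior odds (fair : trick) after num_heads consecutive heads.

    Each heads is twice as likely under the trick coin (1.0 vs 0.5),
    so every toss halves the odds in favor of the fair coin.
    """
    bayes_factor_for_trick = 2.0 ** num_heads
    return prior / bayes_factor_for_trick

# Ten heads in a row: 5000 / 2^10 = 5000 / 1024, roughly 4.9:1,
# i.e. the coin is still slightly more likely to be fair.
print(odds_fair_to_trick(10))
```

Note that with these numbers, ten heads leaves the posterior at roughly 5:1 in favor of the fair coin, not the trick coin; it takes about 13 heads (2^13 = 8192 > 5000) before the trick hypothesis overtakes. Conversely, if ten heads would convince you, your implicit prior must have been weaker than about 1024:1.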
|
|
|
Post by phludowin on Mar 25, 2018 8:03:17 GMT
To clarify: I believe the Libet experiments proved that there is no body/"soul" dualism. It's true that he was researching free will; but it's not that uncommon that scientists are looking for one thing and find something else. Viagra, anyone? In my opinion the nonexistence of an immortal "soul" that can exist without a body is enough evidence against an afterlife. Where do nonexistent souls go? Answer: Nowhere.
|
|
|
Post by The Herald Erjen on Mar 25, 2018 8:29:51 GMT
I don't remember ever hearing of Pascal's Wager before I went nutty religious back in '12, and someone on the old board had to explain it to me. Normally I avoid the whole thing, but your post has reminded me that if you're right, nothing is hurt, because we all go nowhere when we die. However, if you're wrong, you're going to look like the world's biggest chump, and I find that funny.
|
|