Post by Eva Yojimbo on Mar 27, 2018 11:02:11 GMT
You're not assuming anything. The number of flips is evidence that should be altering your initial assumption that the coin is fair. 5000 heads in a row? There has to be some point where you start to think the coin probably isn't fair.

Why, though? If the first coin flip is 50/50, why isn't the second one?

Assuming a fair coin, the second flip is 50/50. Any individual flip is 50/50. Still, the odds of 10 heads in a row are quite small (about 0.1%), and 50 in a row is vanishingly small (0.00000000000009%, or about 1 in 1.1 quadrillion if my math is right). At some point you should think a trick coin, which would predict heads 100% of the time, is more likely. That's how evidence works.
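The run probabilities quoted above are easy to verify. A minimal sketch (the helper name is mine, not from the thread):

```python
# Probability that n independent flips of a fair coin all land heads.
def p_all_heads(n):
    return 0.5 ** n

print(p_all_heads(10))  # 0.0009765625 -- roughly 0.1%
print(p_all_heads(50))  # ~8.88e-16 -- roughly 1 in 1.1 quadrillion
```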
Post by Eva Yojimbo on Mar 27, 2018 11:03:02 GMT
My OP said: "It's the hypotheses that predict the results, not the coins themselves. A 'fair-coin hypothesis' predicts 50/50 heads/tails; a 'trick-coin hypothesis' predicts 100/0 heads/tails."

Oh, OK. My reading comprehension has always been very poor.

Your avatar retention is also poor.
Post by Deleted on Mar 27, 2018 11:30:35 GMT
Why, though? If the first coin flip is 50/50, why isn't the second one? Assuming a fair coin, the second flip is 50/50. Any individual flip is 50/50. Still, the odds of 10 heads in a row are quite small (about 0.1%), and 50 in a row is vanishingly small (0.00000000000009%, or about 1 in 1.1 quadrillion if my math is right). At some point you should think a trick coin, which would predict heads 100% of the time, is more likely. That's how evidence works.

Correct. The exact odds of 50 heads in a row are 1 in 1,125,899,906,842,624 (that's 2^50). To put into perspective how big that number is: the odds of winning the UK national lottery are about 14 million to 1, so you'd be almost six times more likely to win the lottery TWICE in a row than to hit 50 heads in a row. Or if you ran a million 50-flip trials every single day, it would take you around three million years before you'd have a decent chance of seeing 50 heads in a row. Thousands of civilisations would rise and fall before you ever saw it. So yeah, at this stage we can be virtually certain it is a rigged coin. After 4 tosses, though, you can't draw any firm conclusions. What are you getting at with the thought experiment, though?
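Those comparisons can be checked with a couple of lines. A sketch (variable names are mine, and the 14-million-to-1 lottery figure is the one assumed in the post):

```python
# Odds of 50 consecutive heads, compared against the post's lottery figure.
odds_50_heads = 2 ** 50       # 1,125,899,906,842,624 to 1
lottery_odds = 14_000_000     # assumed UK lottery odds, per the post

print(odds_50_heads)                      # 1125899906842624
print(odds_50_heads / lottery_odds ** 2)  # ~5.7: two jackpots in a row is still likelier
```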
Post by Eva Yojimbo on Mar 27, 2018 12:38:53 GMT
Assuming a fair coin, the second flip is 50/50. Any individual flip is 50/50. Still, the odds of 10 heads in a row are quite small (about 0.1%), and 50 in a row is vanishingly small (0.00000000000009%, or about 1 in 1.1 quadrillion if my math is right). At some point you should think a trick coin, which would predict heads 100% of the time, is more likely. That's how evidence works.

Correct. The exact odds of 50 heads in a row are 1 in 1,125,899,906,842,624 (that's 2^50). To put into perspective how big that number is: the odds of winning the UK national lottery are about 14 million to 1, so you'd be almost six times more likely to win the lottery TWICE in a row than to hit 50 heads in a row. Or if you ran a million 50-flip trials every single day, it would take you around three million years before you'd have a decent chance of seeing 50 heads in a row. Thousands of civilisations would rise and fall before you ever saw it. So yeah, at this stage we can be virtually certain it is a rigged coin. After 4 tosses, though, you can't draw any firm conclusions. What are you getting at with the thought experiment, though?

Thanks for the elaboration. I linked to the original thread/context in my OP, but basically Cham313 and I were debating the importance of priors (the initial probability that a hypothesis is true) when it comes to scientific evidence. He was arguing that priors really don't matter once we get experimental evidence, while I was arguing that whether priors or evidence matters more depends on the prior and evidence in question. We also discussed whether one can have any kind of precision when assessing the prior probability of a hypothesis. Cham had once claimed that all that mattered was which hypothesis fit the evidence better, while I argued that knowing the fit doesn't mean much without the prior.

I devised this thought experiment as a way of showing that even when your priors are imprecise--in this experiment, the prior would be the initial likelihood that the coin is fair/trick before you've flipped it--and even though you don't have any definitive evidence right away, eventually you will be convinced the coin is a trick one. Further, once you are convinced, you can retroactively calculate your prior algebraically. I thought it also presented a more common illustration of how scientific evidence works; i.e., there's rarely one experiment that provides overwhelming evidence for a hypothesis. Rather, there is a series of experiments that provide progressively more evidence for a hypothesis until the evidence is overwhelming, even in cases where we can't put precise numbers on the priors OR the conditionals. Basically, this is my nefarious attempt at turning the forum into proper Bayesians. Gooble Gobble, One of Us!
Post by phludowin on Mar 27, 2018 13:36:06 GMT
I linked to the original thread/context in my OP, but basically Cham313 and I were debating the importance of priors (the initial probability that a hypothesis is true) when it comes to scientific evidence. He was arguing that priors really don't matter once we get experimental evidence, while I was arguing that whether priors or evidence matters more depends on the prior and evidence in question. We also discussed whether one can have any kind of precision when assessing the prior probability of a hypothesis. Cham had once claimed that all that mattered was which hypothesis fit the evidence better, while I argued that knowing the fit doesn't mean much without the prior.

A problem with the Bayes formula: very often the priors are unknown and just assumed. In this case, I believe the prior is unknowable. The events in the thought experiment are:

A: The coin comes up heads every time it is flipped (4 times in your example, or 10 times, 50 times...).
B: The coin is a trick coin.

You said P(A|B) = 1, and we know that P(A|not B) = 0.0625 (or lower, depending on the number of flips). But what is P(B)? How many trick coins are there in the world? Where was the coin found? On a random street, at an archeology site, or in front of a joke shop? Did a child manufacture the coin when making toy money? Did the coin belong to Harvey Dent? We don't know. Therefore, it can be anything between 0.00000000000000...1 and 0.999999999...9, and using Bayes won't do us any good. In this case, we need to drop Bayes and do a different type of science: examining the coin. The first obvious examination would be checking that the coin has a heads and a tails side. Afterwards we can see if it is evenly weighted and formed, and then check for further properties that might alter the odds of flips, like magnetic properties.

I devised this thought experiment as a way of showing that even when your priors are imprecise--in this experiment, the prior would be the initial likelihood that the coin is fair/trick before you've flipped it--and even though you don't have any definitive evidence right away, eventually you will be convinced the coin is a trick one.

But the prior P(A) is not known either. What if the finder is a stage magician who can flip any coin with at least one heads side in a way that heads will always come up? Then it won't matter whether the coin is a trick coin or not. In probability terms: P(A) = 1, and this means P(A|B) = 1 and P(A|not B) = 1. Therefore, we not only have to check the coin but also the coin thrower. More science that has nothing to do with Bayes. Therefore, ruling out examining the coin (like you did in a previous post) is anti-science. I believe you are not anti-science. Therefore, there are at least two possible events to be examined.

Event 1: The previous post of yours does not really exist and is just a hallucination, which proves that I am currently in an alternate dimension where all coins can simultaneously be trick coins and non-trick coins.
Event 2: The thought experiment is not thought out very well.

Bonus question: which of these two events has the higher probability?
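phludowin's point about the unknown P(B) can be made concrete: with four heads observed, the posterior swings enormously depending on the assumed prior. A sketch (the function and the sample priors are mine, for illustration):

```python
# Posterior P(trick | n heads in a row) under an assumed prior P(trick).
def posterior(prior, n_heads=4):
    like_trick = 1.0             # P(all heads | trick coin)
    like_fair = 0.5 ** n_heads   # P(all heads | fair coin) = 0.0625 for n = 4
    return prior * like_trick / (prior * like_trick + (1 - prior) * like_fair)

for p in (1e-6, 0.01, 0.5, 0.99):
    print(f"prior {p:g} -> posterior {posterior(p):.6f}")
```

With only four flips the evidence (a likelihood ratio of 16) cannot overcome a tiny prior, which is exactly why the choice of P(B) dominates here.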
Post by theoncomingstorm on Mar 27, 2018 14:09:19 GMT
These types of threads (thought experiments, hypotheticals) never cease to attract people who want to argue against the premise of the stated situation.
OP: Pretend you are given a choice between having sex with an octopus or sex with a manta ray. Which would you choose?
Commenter 1: Why the hell would I do either one?
OP: It's a hypothetical.
Commenter 2: Okay, but what kind of sick fuck would have sex with either one?
OP: It's a hypothetical.
Commenter 3: Right, it's a hypothetical but how would you even go about having sex with either one?
OP: It's a hypothetical.
C3: I know it's a hypothetical. I said as much.
OP: You said it, but you don't understand it, or else you wouldn't be questioning the stated premise. The point of a hypothetical is that the stated premise is accepted as fact: you have a choice between the two.
C3: But I don't accept the stated premise as fact because it's nonsensical.
OP: It's a hypothetical.
C1: I'd just make sure to avoid ever being in such a ridiculous situation.
OP: It's a hypothetical.
C2: Admit it, you just want to fuck an octopus.
Post by Deleted on Mar 27, 2018 14:38:37 GMT
Correct. The exact odds of 50 heads in a row are 1 in 1,125,899,906,842,624 (that's 2^50). To put into perspective how big that number is: the odds of winning the UK national lottery are about 14 million to 1, so you'd be almost six times more likely to win the lottery TWICE in a row than to hit 50 heads in a row. Or if you ran a million 50-flip trials every single day, it would take you around three million years before you'd have a decent chance of seeing 50 heads in a row. Thousands of civilisations would rise and fall before you ever saw it. So yeah, at this stage we can be virtually certain it is a rigged coin. After 4 tosses, though, you can't draw any firm conclusions. What are you getting at with the thought experiment, though?

Thanks for the elaboration. I linked to the original thread/context in my OP, but basically Cham313 and I were debating the importance of priors (the initial probability that a hypothesis is true) when it comes to scientific evidence. He was arguing that priors really don't matter once we get experimental evidence, while I was arguing that whether priors or evidence matters more depends on the prior and evidence in question. We also discussed whether one can have any kind of precision when assessing the prior probability of a hypothesis. Cham had once claimed that all that mattered was which hypothesis fit the evidence better, while I argued that knowing the fit doesn't mean much without the prior. I devised this thought experiment as a way of showing that even when your priors are imprecise--in this experiment, the prior would be the initial likelihood that the coin is fair/trick before you've flipped it--and even though you don't have any definitive evidence right away, eventually you will be convinced the coin is a trick one. Further, once you are convinced, you can retroactively calculate your prior algebraically. I thought it also presented a more common illustration of how scientific evidence works; i.e., there's rarely one experiment that provides overwhelming evidence for a hypothesis. Rather, there is a series of experiments that provide progressively more evidence for a hypothesis until the evidence is overwhelming, even in cases where we can't put precise numbers on the priors OR the conditionals. Basically, this is my nefarious attempt at turning the forum into proper Bayesians. Gooble Gobble, One of Us!

I'm not 100% sure I know what you are getting at, but yeah, degree of belief can change (or stay the same) with more experiments. In your thought experiment, if we are attempting to calculate the probability that a coin is real or fake, that does change things compared to simply calculating the probability of a coin landing heads x times.

A general point in science, though: no theory could ever be demonstrated to be 100% true, because no matter how many times an experiment is repeated, there could always be exceptions that we don't know about yet. A good example is something relatively simple like Newton's 2nd law of motion, F = ma. To this day it is still a great approximation for classical physics and everyday life, and two centuries' worth of experiments have validated it, but then it turned out you can't use this rule for particles/quantum mechanics. Also, a scientific theory being correct (or not) is not really a probabilistic event. There can be probabilistic events within a scientific theory (for example in quantum mechanics), but you could never say Newton's 2nd law of motion is true 80% of the time and false 20% of the time; it is either true given conditions x or it isn't. Of course, none of this should ever be an excuse to casually dismiss any of our current scientific theories (without exceptionally good reasons).
Post by Lugh on Mar 27, 2018 20:41:34 GMT
Oh, OK. My reading comprehension has always been very poor. Your avatar retention is also poor.

I like to keep it fresh.
Post by goz on Mar 27, 2018 22:37:31 GMT
Assuming a fair coin, the second flip is 50/50. Any individual flip is 50/50. Still, the odds of 10 heads in a row are quite small (about 0.1%), and 50 in a row is vanishingly small (0.00000000000009%, or about 1 in 1.1 quadrillion if my math is right). At some point you should think a trick coin, which would predict heads 100% of the time, is more likely. That's how evidence works.

Correct. The exact odds of 50 heads in a row are 1 in 1,125,899,906,842,624 (that's 2^50). To put into perspective how big that number is: the odds of winning the UK national lottery are about 14 million to 1, so you'd be almost six times more likely to win the lottery TWICE in a row than to hit 50 heads in a row. Or if you ran a million 50-flip trials every single day, it would take you around three million years before you'd have a decent chance of seeing 50 heads in a row. Thousands of civilisations would rise and fall before you ever saw it. So yeah, at this stage we can be virtually certain it is a rigged coin. After 4 tosses, though, you can't draw any firm conclusions. What are you getting at with the thought experiment, though?

Forgive the interruption, but this is so cool. So you are saying that IF I flipped a coin (fair or dark--as I am not racist) 50 times and it came down either heads or tails 50 times in a row... I should buy a lottery ticket? Do you think I should buy it from the supermarket or the newsagent? Which has better odds?
Post by Deleted on Mar 27, 2018 23:17:51 GMT
Correct. The exact odds of 50 heads in a row are 1 in 1,125,899,906,842,624 (that's 2^50). To put into perspective how big that number is: the odds of winning the UK national lottery are about 14 million to 1, so you'd be almost six times more likely to win the lottery TWICE in a row than to hit 50 heads in a row. Or if you ran a million 50-flip trials every single day, it would take you around three million years before you'd have a decent chance of seeing 50 heads in a row. Thousands of civilisations would rise and fall before you ever saw it. So yeah, at this stage we can be virtually certain it is a rigged coin. After 4 tosses, though, you can't draw any firm conclusions. What are you getting at with the thought experiment, though?

Forgive the interruption, but this is so cool. So you are saying that IF I flipped a coin (fair or dark--as I am not racist) 50 times and it came down either heads or tails 50 times in a row... I should buy a lottery ticket? Do you think I should buy it from the supermarket or the newsagent? Which has better odds?

Well, you've made it to 50, may as well go for 51.
Post by goz on Mar 27, 2018 23:27:38 GMT
Forgive the interruption, but this is so cool. So you are saying that IF I flipped a coin (fair or dark--as I am not racist) 50 times and it came down either heads or tails 50 times in a row... I should buy a lottery ticket? Do you think I should buy it from the supermarket or the newsagent? Which has better odds?

Well, you've made it to 50, may as well go for 51.

So two lottery tickets, one from each?
Post by Eva Yojimbo on Mar 28, 2018 6:08:44 GMT
I linked to the original thread/context in my OP, but basically Cham313 and I were debating the importance of priors (the initial probability that a hypothesis is true) when it comes to scientific evidence. He was arguing that priors really don't matter once we get experimental evidence, while I was arguing that whether priors or evidence matters more depends on the prior and evidence in question. We also discussed whether one can have any kind of precision when assessing the prior probability of a hypothesis. Cham had once claimed that all that mattered was which hypothesis fit the evidence better, while I argued that knowing the fit doesn't mean much without the prior.

A problem with the Bayes formula: very often the priors are unknown and just assumed. In this case, I believe the prior is unknowable. The events in the thought experiment are: A: The coin comes up heads every time it is flipped (4 times in your example, or 10 times, 50 times...). B: The coin is a trick coin. You said P(A|B) = 1, and we know that P(A|not B) = 0.0625 (or lower, depending on the number of flips). But what is P(B)? How many trick coins are there in the world? Where was the coin found? On a random street, at an archeology site, or in front of a joke shop? Did a child manufacture the coin when making toy money? Did the coin belong to Harvey Dent? We don't know. Therefore, it can be anything between 0.00000000000000...1 and 0.999999999...9, and using Bayes won't do us any good. In this case, we need to drop Bayes and do a different type of science: examining the coin. The first obvious examination would be checking that the coin has a heads and a tails side. Afterwards we can see if it is evenly weighted and formed, and then check for further properties that might alter the odds of flips, like magnetic properties.

Not having a precise prior is not really a problem for Bayes, and that is part of what I wanted to demonstrate with this thought experiment.

For this I'm going to use "T" for "trick coin" and "F" for "flips" to keep things straight. Asking "what is P(T)?" is a fine question. But you can just as easily ask "how many flips does it take to convince you the coin is a trick coin?" Once you answer that--and the answer will be quite subjective, even though I'm sure we'd all agree some answers are more or less reasonable--then we can find P(T) algebraically. If we wanted to set our level of certainty at 95%, and let's say we agreed that 20 flips would be enough to convince us, then we'd find P(T) like this:

0.95 = (1 * T) / ((1 * T) + (1 - T) * 0.000001)

(using (1/2)^20, which is about 0.000001), which gives T of about 0.002%. So answering how much evidence it takes to convince you is just another way of defining your priors. All of the questions you ask ("Where was the coin found?" etc.) are just additional info that would modify the prior. Let's say the coin was found randomly on the street; how many flips to convince you? Let's say it was found on the floor of a magic shop; how many now? Presumably in the latter case you will be convinced with fewer flips, but why? You can no more put a precise prior on T on the street than on T on the floor of a magic shop. You just assume (not irrationally) that the latter is more common. The crucial point is this: even when you can't define your priors, the evidence will still eventually convince you it's a trick coin. Debating how much evidence it takes is just another way of debating your priors.

When you describe dropping Bayes, you're not actually dropping Bayes. What you're describing is performing experiments that would give one of the conditionals--E(vidence)|~T or E|T--a value of zero. Finding experiments that achieve that is an ideal that science aspires to, but in the vast majority of fields for the vast majority of history that's not how it's worked.
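The back-solved prior can be checked by rearranging the posterior equation for T. A sketch (the function name is mine; it uses 2^-20 for the 20-flip likelihood rather than the rounded 0.000001):

```python
# Solve posterior = T / (T + (1 - T) * like_fair) for the implied prior T.
def implied_prior(posterior=0.95, n_flips=20):
    like_fair = 0.5 ** n_flips
    return posterior * like_fair / (1 - posterior + posterior * like_fair)

print(implied_prior())  # ~1.8e-5, i.e. about 0.002%
```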
One thing I'm showing with this thought experiment is that, even without knowing our priors precisely, and without the benefit of an experiment that can rule out one of our hypotheses, we are still capable of being convinced by the evidence of the flips. Indeed, it would be rather absurd to never be convinced through the flips alone.

I devised this thought experiment as a way of showing that even when your priors are imprecise--in this experiment, the prior would be the initial likelihood that the coin is fair/trick before you've flipped it--and even though you don't have any definitive evidence right away, eventually you will be convinced the coin is a trick one.

But the prior P(A) is not known either. What if the finder is a stage magician who can flip any coin with at least one heads side in a way that heads will always come up? Then it won't matter whether the coin is a trick coin or not. In probability terms: P(A) = 1, and this means P(A|B) = 1 and P(A|not B) = 1. Therefore, we not only have to check the coin but also the coin thrower. More science that has nothing to do with Bayes. Therefore, ruling out examining the coin (like you did in a previous post) is anti-science. I believe you are not anti-science. Therefore, there are at least two possible events to be examined. Event 1: The previous post of yours does not really exist and is just a hallucination, which proves that I am currently in an alternate dimension where all coins can simultaneously be trick coins and non-trick coins. Event 2: The thought experiment is not thought out very well.

First, I worded my OP as "you've found a coin and immediately start flipping it," and since I didn't specify that you were a magician with this ability, it shouldn't be assumed. Second, in your scenario, since P(F|T) and P(F|~T) are both 1, the flips don't provide any evidence for whether the coin is fair or trick. Your guess as to whether it's fair would be identical to your prior.

Checking the coin thrower would be more science that gets factored into Bayes, rather than having nothing to do with it. Ruling out examining the coin is a way of showing how often in science we aren't able to just "examine the coin." That's not anti-science; that's just a fact. The coin is just a metaphor here. Physicists don't actually put cats in boxes with poison that might or might not kill them either... at least I hope not!
Post by Eva Yojimbo on Mar 28, 2018 7:10:16 GMT
Thanks for the elaboration. I linked to the original thread/context in my OP, but basically Cham313 and I were debating the importance of priors (the initial probability that a hypothesis is true) when it comes to scientific evidence. He was arguing that priors really don't matter once we get experimental evidence, while I was arguing that whether priors or evidence matters more depends on the prior and evidence in question. We also discussed whether one can have any kind of precision when assessing the prior probability of a hypothesis. Cham had once claimed that all that mattered was which hypothesis fit the evidence better, while I argued that knowing the fit doesn't mean much without the prior. I devised this thought experiment as a way of showing that even when your priors are imprecise--in this experiment, the prior would be the initial likelihood that the coin is fair/trick before you've flipped it--and even though you don't have any definitive evidence right away, eventually you will be convinced the coin is a trick one. Further, once you are convinced, you can retroactively calculate your prior algebraically. I thought it also presented a more common illustration of how scientific evidence works; i.e., there's rarely one experiment that provides overwhelming evidence for a hypothesis. Rather, there is a series of experiments that provide progressively more evidence for a hypothesis until the evidence is overwhelming, even in cases where we can't put precise numbers on the priors OR the conditionals. Basically, this is my nefarious attempt at turning the forum into proper Bayesians. Gooble Gobble, One of Us!

I'm not 100% sure I know what you are getting at, but yeah, degree of belief can change (or stay the same) with more experiments. In your thought experiment, if we are attempting to calculate the probability that a coin is real or fake, that does change things compared to simply calculating the probability of a coin landing heads x times. A general point in science, though: no theory could ever be demonstrated to be 100% true, because no matter how many times an experiment is repeated, there could always be exceptions that we don't know about yet. A good example is something relatively simple like Newton's 2nd law of motion, F = ma. To this day it is still a great approximation for classical physics and everyday life, and two centuries' worth of experiments have validated it, but then it turned out you can't use this rule for particles/quantum mechanics. Also, a scientific theory being correct (or not) is not really a probabilistic event. There can be probabilistic events within a scientific theory (for example in quantum mechanics), but you could never say Newton's 2nd law of motion is true 80% of the time and false 20% of the time; it is either true given conditions x or it isn't. Of course, none of this should ever be an excuse to casually dismiss any of our current scientific theories (without exceptionally good reasons).

What I'm getting at is trying to show how, ideally, science is Bayesian, and that most of the common objections to this don't hold up under scrutiny. The coin-flip thought experiment illustrates a lot of these principles. Yes, trying to calculate the probability that the coin is a trick coin is part of it, but I tried to show how Bayes works in a more common-sense scenario. By asking how many flips it takes to convince you the coin is a trick coin, you can use that to figure your estimation that the coin was trick beforehand, even if beforehand you'd admit to not knowing. I agree that science is not about certainty, but I disagree that we can't view theories/hypotheses probabilistically.

The most idealized form of science would be Solomonoff Induction, and because it expresses hypotheses as strings of binary code, hypotheses are directly comparable and their probability is directly related to their complexity. I think one of the biggest roadblocks to people getting Bayes on a gut level is the lingering false notion that probability is a property of reality rather than minds, or that it's only for situations where we have some strong degree of certainty/evidence of the frequentist variety. This just isn't true. Probability is for modeling uncertainty, period. Whether we know something precisely or imprecisely, whether we know what we don't know precisely or imprecisely, it doesn't matter. The coin-flip example shows this principle at work. We know two things precisely: the probability the coin will land on heads X times if fair, and the probability the coin will land on heads X times if trick. We DON'T know precisely the probability that we have a trick or fair coin beforehand. But not knowing the latter precisely doesn't paralyze us from eventually being convinced the coin is a trick one by flipping it, nor should it; and saying at what point we're convinced is just a way of estimating our prior of having found a trick coin. You can't do one without the other.
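The convergence claim is easy to demonstrate numerically: under a run of heads, wildly different priors are all driven toward the trick-coin hypothesis as flips accumulate. A sketch (the function name and the particular priors are mine):

```python
# Posterior P(trick) after observing n heads in a row, for several assumed priors.
def update(prior, n_heads):
    like_fair = 0.5 ** n_heads  # P(n heads | fair); P(n heads | trick) = 1
    return prior / (prior + (1 - prior) * like_fair)

for prior in (1e-8, 1e-4, 0.5):
    row = [round(update(prior, n), 4) for n in (10, 30, 50)]
    print(f"prior {prior:g}: {row}")
```

Even a one-in-a-hundred-million prior ends up virtually certain by 50 heads; disagreements about the prior only shift how many flips it takes.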
Post by Deleted on Mar 28, 2018 9:53:16 GMT
I'm not 100% sure if I know what you are getting at, but yeah, degree of belief can change (or stay the same) with more experiments. In your thought experiment, if we are attempting to calculate the probability that a coin is real or fake, that does change things, compared to if we were simply calculating the probability of a coin landing heads x times. A general point in science though, no theory could ever be demonstrated to be 100% true, because no matter how many times an experiment is repeated there could always be exceptions that we don't know about yet. A good example is something relatively simple like Newton's 2nd law of motion. F=ma. To this day it is still a great approximation for classical physics and everyday life, and two centuries worth of experiments have validated it, but then it turned out you can't use this rule for particles/quantum mechanics. Also a scientific theory being correct (or not) is not really a probabilistic event, there could be probabilistic events within a scientific theory (for example in quantum mechanics), but you could never say Newton's 2nd law of Motion is true 80% of the time, and 20% of the time it isn't, it is either true given conditions x or it isn't. Of course, none of this should ever be an excuse to casually dismiss any of our current scientific theories (without exceptionally good reasons). What I'm getting at is trying to show how ideally science is Bayesian, and that most of the common objections to this don't hold up under scrutiny. The coin flip thought experiment illustrates a lot of these principles. Yes, trying to calculate the probability the coin is a trick coin is part of it, but I tried to show how Bayes works in a more common sense scenario. By asking how many flips it takes to convince you the coin is a trick coin, you can use that to figure your estimation that the coin was trick beforehand, even if beforehand you'd admit to not knowing. 
I agree with science not being about certainty, but I disagree that we can't view theories/hypotheses probabilistically. The most idealized form of science would be Solomonoff Induction, and because it expresses hypotheses as strings of binary code, hypotheses are directly comparable and their probability is directly related to their complexity. I think one of the biggest roadblocks to people getting Bayes on a gut level is the lingering false notion that probability is a property of reality rather than minds, or that it's only for situations where we have some strong degree of certainty/evidence of the frequentist variety. This just isn't true. Probability is for modeling uncertainty, period. Whether we know something precisely or imprecisely, whether we know what we don't know precisely or imprecisely, it doesn't matter. The coin flip example shows this principle at work: we know two things precisely: the probability the coin will land on heads X times if fair and the probability the coin will land on heads X times if trick. We DON'T know precisely the probability we have a trick or fair coin beforehand. But saying we don't know the latter precisely doesn't paralyze us from eventually being convinced the coin is a trick one by flipping it, nor should it; and saying at what point we're convinced is just a way of estimating our prior of having found a trick coin. You can't do one without the other. Okay, I think I know what you saying, I'm just not sure if I can think of a practical use for Bayes when modelling the probability that a given scientific theory is true or not. I'm interested on your thoughts though. 
For example, let's take Newton's 2nd law, F=ma A physicist (before quantum mechanics) might say "look, we've got a centuries worth of experiments that demonstrate 'Newton's laws of motion' is as solid a set of laws as you are ever going to find" A sceptic might reply "there are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." The physicist at this point might tear his hair out hearing this. As it turned out the sceptic would have been more onto something than previously imagined, but I guess that is besides the point. How, or maybe the more pertinent question is, why would either of them use Bayes theorem to demonstrate their certainty or uncertainty? Then after they've both gone away and come back with something, how would you reconcile these opposing views? I'm just not sure if it has a practical use here. Or take any other established scientific theory there is. Let's go 'natural selection'. The biologist might say "look, all the evidence, from the fossil records, to geology, to DNA testing, to carbon dating all point to the same thing, that evolution is a rock solid theory, the evidence is overwhelming". The sceptic might say "but what about god?". So same question again, how practical would it be for either of them to use Bayes theorem to model their certainty? Okay, we can say later on, with more experiments or evidence, any of our theories could fall apart or could become more established, but I'm not sure at any point in the past or future why Bayes would be useful here. Of course I'm not saying Bayes has no use, period, it has many applications within statistics, testing, science, quantum mechanics, I'm just not convinced it is ever useful for modelling our established scientific interpretations. Just to pick you up on one thing though, you mentioned that some people have a false notion that probability is a property of reality rather than minds. 
I get why you would think that, but the wave function of an electron or photon or particle is probabilistic, in that the particle doesn't have a definite state until it is measured; until then it exists in a superposition that can only be expressed as a probability. Although in practical terms we don't notice the probabilistic nature of particles, because there are so many they average out when you start getting to huge numbers. So in a sense you could say all of reality is probabilistic, we just don't notice it in classical physics or everyday life where everything, even the toss of a coin, can be measured and predicted relatively precisely (given the right tools). I guess the overall problem is, we can make great predictions with maths, but human interpretations can be found wanting, and I don't know how you can model the probability that an interpretation is correct or not...
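The earlier back-and-forth about individual flips being 50/50 while long streaks are still evidence can be sketched numerically. This is my own illustration, not anything posted in the thread; the one-in-a-million prior is an arbitrary stand-in for "trick coins are rare":

```python
# Bayesian update for the trick-coin hypothesis after a run of all heads.
# A trick (two-headed) coin predicts heads with probability 1; a fair coin
# predicts a run of n heads with probability 0.5**n.
def posterior_trick(prior_trick, n_heads):
    """P(trick | n heads in a row), via Bayes' theorem."""
    p_run_given_trick = 1.0
    p_run_given_fair = 0.5 ** n_heads
    numerator = prior_trick * p_run_given_trick
    return numerator / (numerator + (1 - prior_trick) * p_run_given_fair)

# Each flip is 50/50 in isolation, yet the streak eventually overwhelms
# even a one-in-a-million prior:
for n in (10, 20, 50):
    print(n, posterior_trick(1e-6, n))
```

With these made-up numbers, 10 heads barely moves the needle, 20 heads puts the trick-coin hypothesis at roughly even odds, and 50 heads makes it a near certainty, which matches the intuition in the posts above.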
|
|
|
Post by general313 on Mar 28, 2018 15:06:23 GMT
I think one of the biggest roadblocks to people getting Bayes on a gut level is the lingering false notion that probability is a property of reality rather than minds, or that it's only for situations where we have some strong degree of certainty/evidence of the frequentist variety. This just isn't true. Probability is for modeling uncertainty, period. I wouldn't go that far. There are some pretty important real-world consequences to the second law of thermodynamics, where descriptions of entropy are tied to probability and randomness.
|
|
|
Post by phludowin on Mar 28, 2018 19:33:07 GMT
I guess I'm a bit too tired to respond to the first part of your post; not to mention frustrated by the fact that message boards do not allow a proper rendering of math formulas. I don't know where the 0.002% comes from. But I have just enough brain power to respond to the second part of the post. But: The prior P(A) is not known either. What if the finder is a stage magician who can flip any coin with at least one heads side in a way that heads will always come up? Then it won't matter if the coin is a trick coin or not. In probability terms: P(A) = 1. And this means: P(A|B) = 1, P(A|not B) = 1. Therefore, we not only have to check the coin, but also the coin thrower. More science that has nothing to do with Bayes. Therefore, ruling out examining the coin (like you did in a previous post) is anti-science. I believe you are not anti-science. Therefore, there are at least two possible events to be examined. Event 1: The previous post of yours does not really exist and is just a hallucination, which proves that I am currently in an alternate dimension, where all coins can simultaneously be trick coins and non-trick coins. Event 2: The thought experiment is not thought out very well. First, I worded my OP as "you've found a coin and immediately start flipping it," and since I didn't specify you were a magician with this ability it shouldn't be assumed. When I find a coin, I don't start with flipping it: I examine it first. If it has two heads, then I already know it's a trick coin without having to flip it. This approach also works when the coin is just a metaphor. I work in a software company. While I don't work as a full-time programmer, I know a bit about computer programming. When there is a problem to be solved, it makes sense to think about the algorithm first, and only then write code that implements the algorithm. 
Just "flipping the coin to see if it's a trick coin" would be the equivalent of just copying and pasting various lines of code, putting them together, and hoping that it works. This might work for simple problems; but if your problem is a complex combination of various problems, with lots of variables, then the odds are high that you have a solution that will not really work. Flipping the coin (testing the software) is the next step. On the other hand, there are programming approaches that are test-driven. Meaning: You write code so that it fulfills test scenarios, and then simplify it. Maybe my programming metaphor is worse than the coin metaphor after all... Sorry this post isn't well thought out. You put effort in your last reply; I guess you deserve at least a reply, even if it's not a very good one.
|
|
|
Post by OpiateOfTheMasses on Mar 28, 2018 22:28:24 GMT
10 flips is probably enough to start you thinking there may be something hinky about the coin, but given how quickly you can conduct the "test" I'd probably flip it 30 or 40 times before I was willing to come to any firmer conclusions.
|
|
|
Post by Eva Yojimbo on Mar 29, 2018 1:28:07 GMT
What I'm getting at is trying to show how ideally science is Bayesian, and that most of the common objections to this don't hold up under scrutiny. The coin flip thought experiment illustrates a lot of these principles. Yes, trying to calculate the probability the coin is a trick coin is part of it, but I tried to show how Bayes works in a more common sense scenario. By asking how many flips it takes to convince you the coin is a trick coin, you can use that to figure your estimation that the coin was trick beforehand, even if beforehand you'd admit to not knowing. I agree with science not being about certainty, but I disagree that we can't view theories/hypotheses probabilistically. The most idealized form of science would be Solomonoff Induction, and because it expresses hypotheses as strings of binary code, hypotheses are directly comparable and their probability is directly related to their complexity. I think one of the biggest roadblocks to people getting Bayes on a gut level is the lingering false notion that probability is a property of reality rather than minds, or that it's only for situations where we have some strong degree of certainty/evidence of the frequentist variety. This just isn't true. Probability is for modeling uncertainty, period. Whether we know something precisely or imprecisely, whether we know what we don't know precisely or imprecisely, it doesn't matter. The coin flip example shows this principle at work: we know two things precisely: the probability the coin will land on heads X times if fair and the probability the coin will land on heads X times if trick. We DON'T know precisely the probability we have a trick or fair coin beforehand. But saying we don't know the latter precisely doesn't paralyze us from eventually being convinced the coin is a trick one by flipping it, nor should it; and saying at what point we're convinced is just a way of estimating our prior of having found a trick coin. You can't do one without the other. 
Okay, I think I know what you're saying, I'm just not sure if I can think of a practical use for Bayes when modelling the probability that a given scientific theory is true or not. I'm interested in your thoughts though. For example, let's take Newton's 2nd law, F=ma. A physicist (before quantum mechanics) might say "look, we've got centuries' worth of experiments that demonstrate 'Newton's laws of motion' is as solid a set of laws as you are ever going to find". A sceptic might reply "there are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." The physicist at this point might tear his hair out hearing this. As it turned out the sceptic would have been more onto something than previously imagined, but I guess that is beside the point. How, or maybe the more pertinent question is, why would either of them use Bayes' theorem to demonstrate their certainty or uncertainty? Then after they've both gone away and come back with something, how would you reconcile these opposing views? I'm just not sure if it has a practical use here. Or take any other established scientific theory there is. Let's go 'natural selection'. The biologist might say "look, all the evidence, from the fossil records, to geology, to DNA testing, to carbon dating all point to the same thing, that evolution is a rock solid theory, the evidence is overwhelming". The sceptic might say "but what about god?". So same question again, how practical would it be for either of them to use Bayes' theorem to model their certainty? Okay, we can say later on, with more experiments or evidence, any of our theories could fall apart or could become more established, but I'm not sure at any point in the past or future why Bayes would be useful here. Of course I'm not saying Bayes has no use, period, it has many applications within statistics, testing, science, quantum mechanics, I'm just not convinced it is ever useful for modelling our established scientific interpretations. 
First, I must make a distinction between Bayes on three levels: ideal, descriptive, and practical. Bayes is ideal because it's a complete description of how correct evidence processing works. Bayes is descriptive in that any and all correct evidence processing can be described in Bayesian terms. When you talk about examples from science, you can describe experiments/evidence in terms of conditionals, you can describe the hypothesis as the prior, and the posterior represents how the experiment/evidence alters the prior. Mostly what you're talking about in this section of your post is Bayes as practical, meaning you're asking how working scientists can explicitly use Bayes the way that statisticians use Bayes. I would say that they don't and probably don't need to. It's more important to understand the principles and try to apply them as best as possible rather than putting actual numbers to priors, conditionals, and posteriors. However, that scientists don't and don't need to do this doesn't mean we can't describe what they do in those terms. It's much the same in my profession of poker, where I can't explicitly work out hands using Bayes at the table, but I can try to think in Bayesian terms while making decisions even on an intuitive level (and work them out explicitly afterwards). As for your examples, taking Newton first, before QM I think it's fine to simply say that the evidence for Newton's laws was overwhelming and our estimation of their probability of being true was extremely close to 100%. It's not enough for the skeptic just to say "well, they could be wrong, we don't know for sure" to offer any kind of serious challenge to Newton. All we have to do to account for that "maybe we could be wrong" is not put our level of certainty at 100%. 
I don't really see how this is a challenge to Bayes though; it's really just a lesson on not assuming certainty (which is a lesson Bayes can impart as well since if you were 100% certain about anything then new evidence could never change your mind). With natural selection and God the situation is a bit different. This is really more of an issue of Occam and priors rather than Bayes, because in both cases the hypotheses are attempting to explain the same set of data, so which is more likely has everything to do with their complexity, including how much is known VS how much is being assumed. In the case of natural selection, we're basically just taking what we know happens--genetic change across generations--and extrapolating that from the origin of life to the present. You can use natural selection to make certain predictions and retrodictions that you can't do with God. So with God all you've done is add complexity without adding any more predictive power. As I said elsewhere, in these situations, without the aid of Solomonoff Induction, it's most practical to simply dismiss/ignore the more complex hypothesis unless it produces evidence (in the form of unique predictions) above and beyond what the simpler hypothesis can. Thus, this is more of an Occamian principle than a Bayesian one (but Solomonoff incorporates both Occam and Bayes into its formalism). Just to pick you up on one thing though, you mentioned that some people have a false notion that probability is a property of reality rather than minds. I get why you would think that, but the wave function of an electron or photon or particle is probabilistic, in that the particle doesn't have a definite state until it is measured; until then it exists in a superposition that can only be expressed as a probability. Although in practical terms we don't notice the probabilistic nature of particles, because there are so many they average out when you start getting to huge numbers. 
So in a sense you could say all of reality is probabilistic, we just don't notice it in classical physics or everyday life where everything, even the toss of a coin, can be measured and predicted relatively precisely (given the right tools). I'd highly encourage you to read that QM thread I linked to because this is a whole different can of worms. I do not believe the wavefunction is probabilistic. I think we treat it as probabilistic because that's how we experience it. I used this example in that QM thread: imagine you have a line of gunpowder. The gunpowder forks into three paths. You are the fire, the gunpowder is the wavefunction, the split is the measurement. As the fire, you will experience having gone left, middle, OR right after the split/measurement. You can ask "what's the probability of me going left VS going middle or right?" and produce an answer of "33%," but the truth is that the probability of you going left, right, AND middle is 100%. You will go all three ways, even though the "you" after the split will have experienced only having gone left, middle, OR right. It will seem to you as if the other two "paths" have disappeared. But I have to stress that the probabilistic nature of QM is entirely dependent on interpretation, not science. The equation that models how quantum systems behave (the Schrödinger wave equation) is deterministic, and you have to interpret it by adding to it a stochastic collapse in order to make it probabilistic. The simplest interpretation is Many-Worlds, which just takes the Schrödinger wave equation as a complete description of QM, and like I said above, in such situations it's best to just assume the simplest explanation until unique evidence is produced that favors the more complex explanations. So you can't really turn to QM in order to make the argument that probability is inherent in reality. In my strong opinion, probability only exists in the mind. It expresses various states of ignorance about reality. 
Any time we're using probability, we're acknowledging there's some facet of reality we don't know or can't account for. I guess the overall problem is, we can make great predictions with maths, but human interpretations can be found wanting, and I don't know how you can model the probability that an interpretation is correct or not... If you want to know how you can (ideally, again, this is currently completely impractical) precisely model the probability that an interpretation/hypothesis/theory is correct then you need to research Solomonoff Induction; in particular concepts like Kolmogorov complexity, which is about describing reality and our theories about reality in binary code. Once you've described them in binary code, it's just a matter of comparing the length of code to generate a probability that they're correct. But, again, that's only ideally. Practically speaking, we just have to do the best we can. To go back to my OP's thought experiment, nobody knows the ratio of trick coins to fair coins in the world. So we can all say we're uncertain about that. If you put a number to that uncertainty like "I think the ratio might be something like 10,000:1," are you any more or less sure by doing that than in just saying "I don't know?" I don't think there's anything wrong with putting a number to our uncertainty as long as it doesn't make us think the number is accurate, and thus unduly lessen our uncertainty. But like I've said, whether or not you know the ratio of fair to trick coins, you DO know that if you flip heads enough times, you'll eventually be convinced it's a trick coin. That's what I mean by using Bayes to be descriptive. You can give a reasonable answer about how many flips it will take to convince you. You can do that without ever once doing an explicitly Bayesian calculation. 
But once you've become convinced, once you've put a number to how many flips it takes, it's possible to describe this entire process in Bayesian terms, including having the ability to put a number on your prior of the ratio of trick to fair coins.
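The "how many flips until you're convinced" relationship described above can be made concrete. A minimal sketch of my own, assuming an always-heads trick coin, treating the 10,000:1 ratio mentioned in the post as a prior, and picking 95% as an arbitrary conviction threshold:

```python
# Given a prior P(trick) and a target posterior, count the all-heads flips
# needed before Bayes' theorem pushes P(trick | flips) past the target.
def flips_to_convince(prior_trick, target=0.95):
    n = 0
    posterior = prior_trick
    while posterior < target:
        n += 1
        num = prior_trick * 1.0              # P(n heads | trick) = 1
        den = num + (1 - prior_trick) * 0.5 ** n
        posterior = num / den
    return n

print(flips_to_convince(1 / 10_000))      # prior of 1 in 10,000  -> 18 flips
print(flips_to_convince(1 / 1_000_000))   # prior of 1 in a million -> 25 flips
```

Running this forward (from a prior to a flip count) or backward (from the flip count that convinces you to an implied prior) is the symmetry the post is describing.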
|
|
|
Post by Eva Yojimbo on Mar 29, 2018 1:28:56 GMT
I think one of the biggest roadblocks to people getting Bayes on a gut level is the lingering false notion that probability is a property of reality rather than minds, or that it's only for situations where we have some strong degree of certainty/evidence of the frequentist variety. This just isn't true. Probability is for modeling uncertainty, period. I wouldn't go that far. There are some pretty important real-world consequences to the second law of thermodynamics, where descriptions of entropy are tied to probability and randomness. I'm not certain why you think this contradicts my post; I didn't imply that probability (in general) doesn't have real-world consequences. If you're suggesting that the probability/randomness of thermodynamics is inherent in reality, then I'd say that like with QM this is actually a matter of interpretation. I don't know as much about thermodynamics (really not much at all) as I do QM, but I do know that Maximum Entropy Thermodynamics interprets thermodynamics in terms of inferences. Yudkowsky has a post explaining this, interpreting thermodynamics as observer-dependent. Though I haven't read it myself, I usually find myself agreeing with him. I also found with a quick Google search that Sean Carroll (another guy I usually agree with) recently published a paper on this and discussed it on his blog. I'm posting these links as much for myself as for you so I'll remember to read them tomorrow.
|
|
|
Post by Eva Yojimbo on Mar 29, 2018 1:53:50 GMT
I guess I'm a bit too tired to respond to the first part of your post; not to mention frustrated by the fact that message boards do not allow a proper rendering of math formulas. I don't know where the 0.002% comes from. But I have just enough brain power to respond to the second part of the post. First, I worded my OP as "you've found a coin and immediately start flipping it," and since I didn't specify you were a magician with this ability it shouldn't be assumed. When I find a coin, I don't start with flipping it: I examine it first. If it has two heads, then I already know it's a trick coin without having to flip it. This approach also works when the coin is just a metaphor. I work in a software company. While I don't work as a full-time programmer, I know a bit about computer programming. When there is a problem to be solved, it makes sense to think about the algorithm first, and only then write code that implements the algorithm. Just "flipping the coin to see if it's a trick coin" would be the equivalent of just copying and pasting various lines of code, putting them together, and hoping that it works. This might work for simple problems; but if your problem is a complex combination of various problems, with lots of variables, then the odds are high that you have a solution that will not really work. Flipping the coin (testing the software) is the next step. On the other hand, there are programming approaches that are test-driven. Meaning: You write code so that it fulfills test scenarios, and then simplify it. Maybe my programming metaphor is worse than the coin metaphor after all... Sorry this post isn't well thought out. You put effort in your last reply; I guess you deserve at least a reply, even if it's not a very good one. I got the 0.002% using Bayes where the prior is unknown but the conditionals and posterior are known. You can just plug: 0.95 = 1·t/(1·t + (1−t)(0.000001)) into HERE. 
Generally, you use Bayes where the prior and conditionals are known in order to find the posterior, but you can also use a known posterior and conditionals to find the prior. That's what I'm getting at with the OP thought experiment. We know the conditionals P(flips|trick) and P(flips|not trick). If you say that it will take, say, 20 flips to make you 95% sure it's a trick coin, you use 0.95 as your posterior, and 0.000001 (the probability of 20 heads in a row given a fair coin) as your second conditional to find the prior t. I think you need to read Cash's post because now you're just arguing with the scenario of the experiment rather than trying to understand the points I'm trying to illustrate with it. Nobody argues with Copenhagen by saying we don't put cats in poison-filled boxes. I use coins precisely because the math of the conditionals is simple and well-understood. That takes one element of difficulty/controversy out of it. It's not the only example I could use, but it's convenient for addressing the underlying issues. Pointing out that you can just examine the coin to know if it's a trick one seems like avoiding trying to understand the points being made with it.
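The inverse calculation described above, recovering the prior from a stated posterior, can be checked directly. A short sketch using the post's own numbers (posterior 0.95 after 20 heads, P(heads|trick) = 1):

```python
# Solve 0.95 = t / (t + (1 - t) * q) for the prior t,
# where q = P(20 heads | fair) = 0.5**20 (the 0.000001 in the post).
q = 0.5 ** 20
posterior = 0.95
# Rearranging: posterior * (t + (1 - t) * q) = t
#           => t = posterior * q / (1 - posterior + posterior * q)
t = posterior * q / (1 - posterior + posterior * q)
print(t)  # ~1.8e-05, i.e. roughly 0.002%, matching the figure in the reply
```

So being 95% convinced after 20 straight heads implies you walked in assigning trick coins a prior of about 1 in 55,000, which is the "backing out the prior" move the post is making.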
|
|