|
Post by Arlon10 on Feb 13, 2020 8:48:08 GMT
1) 2) The Monty Hall Problem is so simple even you could solve it. Okay, maybe you couldn't, but it is easy. I did not bother applying Bayes' Theorem because it is not needed. I am still not convinced you ever did apply Bayes' Theorem correctly. 3) There's the pity. You have a singular talent for confounding things, for making them obscure. That's not what I do. I make the complicated clear. My illustration of the 100-door Monty Hall problem, although also not needed to solve it, makes it clear why the popular solution is correct.

1) That's not a response. If you want to give me an example of evidence for a hypothesis where Bayes doesn't apply, feel free. 2) The issue isn't the simplicity, the issue is that you can't fully explain how to solve it without Bayes in some form. Saying "2/3 goes to the other door" is not an explanation when it can't account for alternative scenarios where it doesn't (Monty Crawl, Monty Fall). Of course you're not convinced I applied Bayes's Theorem correctly, because you don't understand it yourself. LOL, you have to understand things before you can clarify them. The 100-door Monty Hall might allow someone to intuitively get what happens in the problem, but you can apply the same math (Bayes) to a 3-door, 5-door, 10-door, or any-door variation of the problem and it works. You can also apply it to many other scenarios in which intuitive explanations aren't as easy to come by. Plus, once you've applied it enough, Bayes becomes intuitive itself. It makes it easy to see why stuff like "absence of evidence isn't evidence of absence" and "wet sidewalks aren't evidence of rain" are patently false.

Learning formulas by rote and solving word problems that are designed to fit those formulas with easily plugged-in values does not adequately prepare you for the real world. Real-world problems are full of surprises that throw the simple formulas off. Probability mathematics can be very accurate when working with playing cards, dice and other scenarios where the multitude and magnitude of unknown and unmanageable factors is unusually low. The more necessary skill is seeing the unique problem, finding the thing that is confusing people in a particular scenario. The thing that confuses people about the Monty Hall Problem is that when Monty eliminates a door he knows for a fact the prize is not behind it and he tells you so. The number to plug into whatever formula you use for the chance the prize is behind door 2, if he eliminates door 2, is zero. It's that simple. It would be different if he randomly chose which door to eliminate. It is important to read the word problem correctly. That's all. The chance the prize is behind a randomly chosen eliminated door is greater than zero. That would readjust things. It is because things are not readjusted that the chance the prize is behind the door (or doors) that were not picked at first remains one minus the chance the first pick has the prize. If only one door remains and, this is critical, you know for a fact the eliminated doors do not have the prize, the chance the prize is behind the remaining doors is still what it originally was, even if it is only one door. I do not see Bayes' Theorem in that. If there were a hundred doors, the chance the prize is behind the first one picked is 1/100 and the chance it is behind the remaining doors is 99/100.

If 98 doors are randomly eliminated, the chance the prize is behind the remaining door is 1/100, the chance the prize is behind the first door picked is 1/100, and the chance the prize is not behind either remaining door is 98/100. There you see how critical it is to know eliminated doors do not have the prize. If we know the 98 eliminated doors do not have the prize, then the chance the prize is behind the first door picked is still 1/100 and the chance the prize is behind the remaining doors is still 99/100, even if only one remains.
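For anyone who wants to check these numbers by brute force, here is a minimal Python simulation sketch of the many-door game under both rules (the function and names are mine, purely illustrative). In the random case it conditions on the rounds where the prize was not accidentally revealed:

import random

def play(n_doors=100, host_knows=True, switch=True, trials=100_000):
    # Simulate one strategy over many rounds. In the random-host case,
    # rounds where the prize is accidentally revealed are discarded,
    # so the result is conditional on reaching the final choice.
    wins = games = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        others = [d for d in range(n_doors) if d != pick]
        if host_knows:
            # Host opens every other door but one, never the prize door.
            remaining = prize if prize != pick else random.choice(others)
        else:
            # Host leaves one random door shut and opens the rest blind.
            remaining = random.choice(others)
            if prize not in (pick, remaining):
                continue  # prize was revealed; round discarded
        games += 1
        final = remaining if switch else pick
        wins += (final == prize)
    return wins / games

print(play(host_knows=True))   # ~0.99: switching wins 99/100
print(play(host_knows=False))  # ~0.50: random elimination, then 50/50

With the host knowing, switching wins about 99% of the time; with random opening, the surviving rounds split about 50/50.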
|
|
fatpaul
Sophomore
@fatpaul
Posts: 502
Likes: 193
|
Post by fatpaul on Feb 13, 2020 14:17:39 GMT
1. Absence Of Evidence Is Not Evidence Of Absence - No, Absence Of Evidence sure as shit Is Evidence Of Absence. Imagine there's a mass murderer in your town. All homes in your neighborhood are searched, everyone is questioned, nothing is found to incriminate anyone for anything. Yes it's possible, though unlikely, that the murderer(s) resides in that house with mom, dad, two kids and a dog. The available evidence is that it's someone from a different locale. Of course evidence of something is not proof of anything. Here's a write-up on that topic.

Here is my two cents worth on this phrase and the other examples brought up. Notation: ∃ = some or at least; ∀ = any or all; E = evidence; ¬ = negation; x belongs to X (a set/domain) and y ∈ Y. If absence of evidence then evidence of absence: [absence of evidence] → [evidence of absence], i.e. [no evidence of x] → [evidence of no y].

a) ∃x(¬Ex) → ¬∃y(Ey) -- [no evidence of some x] → [evidence of no y at least]
b) ∃x(¬Ex) → ∀y(¬Ey) -- [no evidence of some x] → [no evidence of any y]
c) ∃y(Ey) → ∀x(Ex) -- [evidence of some y] → [evidence of any x]

The a) and b) statements are equivalent to each other via quantifier negation, and the c) statement is the contrapositive of both the a) and b) statements, with either a) or b) likewise being the contrapositive of c). With the equivalent b) statement and the contrapositive c) statement going from "some" quantification in the antecedent to "any" quantification in the consequent, this is an inductive relationship, in that the conclusion is broader than the premises. An inductive argument is about how likely the premise(s) make the conclusion, and it is a type of rational reasoning, so you can have proof by induction as well as proof by deduction.

Let's say r ∈ Rain and w ∈ Wet Sidewalk:

1. ∃r(¬Er) → ¬∃w(Ew) -- [no evidence of some rain] → [evidence of no wet sidewalk at least]
2. ∃r(¬Er) → ∀w(¬Ew) -- [no evidence of some rain] → [no evidence of any wet sidewalk]
3. ∃w(Ew) → ∀r(Er) -- [evidence of some wet sidewalk] → [evidence of any rain]

Stating that a wet sidewalk is evidence for rain is stating 3, with the contrapositive statement 1 stating: if absence of evidence of rain, then evidence of absence of a wet sidewalk. Indeed, other liquid-type instances from other sets may also be responsible for a wet sidewalk, but then again rain could be true also; such is the nature of inductive arguments.

Let's say a ∈ Aliens:

i) ∃a(¬Ea) → ¬∃a(Ea) -- [no evidence of some alien] → [evidence of no alien at all]
ii) ∃a(¬Ea) → ∀a(¬Ea) -- [no evidence of some alien] → [no evidence of any alien]
iii) ∃a(Ea) → ∀a(Ea) -- [evidence of some alien] → [evidence of any alien]

I think that line iii) is likely false, because evidence of a Martian is not evidence for a Venusian or a Saturnian, or any other alien. Given that line i) is the contrapositive of iii), I also think that in this case it's likely that absence of evidence isn't evidence of absence. When I state that absence of evidence isn't evidence of absence, I'm proposing it isn't the case that if absence of evidence then evidence of absence:

A. ¬[∃x(¬Ex) → ¬∃y(Ey)] -- not ( [no evidence of some x] → [evidence of no y at least] )
B. ¬[∃x(¬Ex) → ∀y(¬Ey)] -- not ( [no evidence of some x] → [no evidence of any y] )
C. ¬[∃y(Ey) → ∀x(Ex)] -- not ( [evidence of some y] → [evidence of any x] )

Using the previous instances of wet sidewalk and rain, line C becomes: it is not the case that if evidence of some wet sidewalk, then evidence of any rain. But I can't conclusively say this is not the case, given that a wet sidewalk being evidence for rain can be, and often is, the case. The contrapositive of this statement is a variation of "absence of evidence is not evidence of absence," and so it cannot be asserted for the case of rain and wet sidewalks unless I'm willing to totally discount the possibility of rain being causal for a wet sidewalk.

In summary, all I can say is that "the absence of evidence of rain is evidence of absence of a wet sidewalk" may be true or false, but I cannot conclusively say that the absence of evidence of rain isn't evidence of absence of a wet sidewalk, because the absence of evidence of rain can be evidence of absence of a wet sidewalk. I could have just said this summary and saved myself a lot of time and effort, but if I did so, would it not be considered that if absence of evidence of support for my claims, then evidence of absence of support for my claims?
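For anyone who would rather not chase the quantifiers by hand, the claimed equivalences (a ≡ b via quantifier negation, c as their contrapositive) can be brute-force checked over small finite domains; a minimal Python sketch, with encodings that are mine and purely illustrative:

from itertools import product

# Encode E over a finite domain as a tuple of booleans; for Python
# bools, p <= q is material implication (p -> q).
def a(Ex, Ey): return (not all(Ex)) <= (not any(Ey))  # ∃x(¬Ex) → ¬∃y(Ey)
def b(Ex, Ey): return (not all(Ex)) <= all(not e for e in Ey)  # ∃x(¬Ex) → ∀y(¬Ey)
def c(Ex, Ey): return any(Ey) <= all(Ex)  # ∃y(Ey) → ∀x(Ex)

for nx, ny in [(1, 1), (2, 2), (3, 2)]:
    for Ex in product([False, True], repeat=nx):
        for Ey in product([False, True], repeat=ny):
            assert a(Ex, Ey) == b(Ex, Ey) == c(Ex, Ey)
print("a), b) and c) agree on every valuation tested")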
|
|
|
Post by general313 on Feb 13, 2020 15:57:59 GMT
Arlon has a point here. You made the claim that "Bayes's Theorem is the logic underlying science," which is an exaggeration. Bayes's Theorem is quite useful when statistical mathematics plays an important role, but much of science proceeds on reasoning that is more Boolean/binary. "The world is round because the shadow of the earth cast on the moon always has the same shape," or "mass is conserved because when we weigh the reactants before and after a chemical reaction the total weights are always the same," for example. (edit: changed 'energy' to 'mass' in the second example)

It's not an exaggeration at all. ET Jaynes wrote an entire textbook explaining this with numerous examples from numerous fields. The problem you (and, I think, Arlon) have is that you can only see Bayes as a statistical tool. It is not. Probability applies in any situation in which there are unknowns. Sometimes those knowns/unknowns can be precisely enumerated, and sometimes they cannot. The Monty Hall Problem is an example that can be precisely enumerated, but I can just as easily apply Bayes to the question of "do I have a ham sandwich left in the fridge?", in which case I'm placing a probability on the proposition based on my memory. When you talk about Boolean/binary truth statements, I think such things are useful fictions. They're things we assume because the evidence is overwhelming. It's a case of treating "99.9999999%" as "1" and "0.0000001%" as "0." Truth never actually reaches 1 or 0, because if it did then it would be literally impossible for evidence to convince us we were wrong, and we know damn well that science doesn't operate like this. The fact that evidence must be able to convince us we're wrong is a clue that all science is probabilistic, and Boolean truth values are just useful fictions we use for convenience.

You're welcome to consider Boolean logic a "fiction"; however, it doesn't make sense to give it a different status than Bayes or probability math in general (or, for that matter, any other kind of math). Here's a simple example Boolean proposition, to demonstrate its utility to science: if a material is made of atoms, then it cannot be infinitely subdivided. If I decide that water is composed of atoms, and I decide that the proposition is valid, then I can conclude that water is not infinitely divisible. Simple hard Boolean logic here. The universe is thoroughly disinterested in our estimation of how likely it is that something is true or not. At one point scientists were very uncertain whether matter was composed of atoms; now the scientific community is extremely confident that it is. But the basic truth of the question is unchanged by what scientists have learned: water either is or isn't composed of atoms, the same today as 300 years ago. Here are a couple of examples of scientific discovery that don't have anything to do with a probability assessment, at least in terms of the essential thought processes.

1. About 100 years ago it was an open question whether the Andromeda nebula was a dust cloud within the galaxy (like the Crab Nebula) or something else external to the galaxy. At the time "galaxy" and "universe" were synonymous. There was much debate, and resolving the question depended on developing techniques to measure the distance to Andromeda. Improved telescopes and the discovery of particular properties of Cepheid variables provided the yardstick to measure the distance and thus answer the question (Edwin Hubble being one of the chief scientists involved), leading to the discovery that the Milky Way is not the only galaxy in the universe.

2. When Albert Einstein tried to solve the riddle of why the Michelson-Morley experiments always came up null, he approached it by performing thought experiments of a highly visual and geometric nature. They were along the lines of "imagine what it would be like to be travelling alongside a light beam at the same speed" or "what is the difference between being in an accelerating elevator in space and a stationary one on the ground?" These and many other thought experiments are detailed in the book Relativity: The Special and the General Theory, written by Einstein himself. In the book he makes clear that the thought experiments weren't just contrived to explain the theory to a lay audience; they played a pivotal role in Einstein's development and understanding of his relativity theories.

There are many other scientific discoveries made over the centuries (from Kepler to Hawking) that similarly cannot be characterized in terms of mathematical probability, hence why I don't accept your claim about Bayes.
|
|
|
Post by goz on Feb 13, 2020 20:21:30 GMT
fatpaul said: [ full text here] < clip >

A for effort.
|
|
|
Post by phludowin on Feb 13, 2020 21:57:57 GMT
A wet sidewalk is also, indeed, evidence of rain. Given that it rained, you'd expect to see a wet sidewalk 100% of the time, while if it didn't rain you'd expect to see a wet sidewalk <100% of the time. If there is some other situation in which you'd also expect to see a wet sidewalk 100% of the time--say your neighbor's sprinkler always wets your sidewalk--then a wet sidewalk would be equal evidence for both rain and your neighbor's sprinkler, and whichever was more likely would depend on your prior information about both (how often does your neighbor run their sprinkler? Was it expected to rain today? etc.).

Actually, a wet sidewalk is not evidence of rain. Rain is just a possible explanation for a wet sidewalk, but it doesn't have to be the most likely one. The statement that rain means a wet sidewalk 100% of the time is also false. It's possible that the sidewalk stayed dry during a rainfall. In fact, since you like calculations with Bayes, I'll give you a task for beginners in statistics.

Let's consider two events. Event A: the sidewalk is wet. Event B: it has rained. If it rains, the probability that a sidewalk is wet is 99%. In statistical notation: P(A|B) = 0.99. If it doesn't rain, the probability that the sidewalk is wet is 10%. Or: P(A|¬B) = 0.1. (I made these numbers up, but for the sake of argument, let's run with them.) How big is the probability that it has rained when you observe that the sidewalk is wet? In other words: we are looking for P(B|A).

The solution means applying Bayes' formula, P(B|A) = P(A|B)P(B) / (P(A|B)P(B) + P(A|¬B)(1 - P(B))), and the result, using the probabilities I gave at the beginning, is: P(B|A) = 0.99*P(B)/(0.89*P(B) + 0.1). Now, if the a priori probability of B is sufficiently small, you will find that P(B|A) can indeed be smaller than P(¬B|A). In layman's terms: a wet sidewalk is not evidence for rain. At least not when rain is unlikely in general. When you live in a place where it seldom rains and people use plenty of sprinklers, then a wet sidewalk is not equal evidence for rain or a sprinkler. It's more evidence for a sprinkler than for rain. Sorry for harping on this, but I happen to like statistics, and when someone gets it wrong, I sometimes feel compelled to answer.
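For the lazy, here is that beginner exercise worked in a few lines of Python (a sketch using the made-up numbers above; the function name is just illustrative):

def p_rain_given_wet(prior, p_wet_if_rain=0.99, p_wet_if_dry=0.10):
    # Bayes' formula with the made-up numbers from above.
    return (p_wet_if_rain * prior) / (
        p_wet_if_rain * prior + p_wet_if_dry * (1 - prior))

for prior in (0.01, 0.05, 0.10, 0.50):
    print(f"P(B) = {prior:.2f}  ->  P(B|A) = {p_rain_given_wet(prior):.2f}")
# P(B) = 0.01  ->  P(B|A) = 0.09
# P(B) = 0.05  ->  P(B|A) = 0.34
# P(B) = 0.10  ->  P(B|A) = 0.52
# P(B) = 0.50  ->  P(B|A) = 0.91

Even at a 1% prior the wet sidewalk multiplies the probability of rain by roughly nine, though the posterior stays well below 50%.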
|
|
|
Post by general313 on Feb 14, 2020 0:29:21 GMT
Eva Yojimbo said: [ full text here] < clip >

phludowin said: Actually, a wet sidewalk is not evidence of rain. Rain is just a possible explanation for a wet sidewalk, but it doesn't have to be the most likely one.

I would say it is evidence of rain, perhaps qualified as weak evidence, but still evidence nonetheless. It can also be evidence that sprinklers have been run. Evidence of both at the same time. Further evidence would be needed to determine the most likely cause. One could look at how spread out the wetness is. If it's confined to the sidewalk in front of a few houses with others completely dry, or if the street is dry, that would suggest sprinklers as the more likely cause.
|
|
|
Post by Eva Yojimbo on Feb 14, 2020 1:16:01 GMT
Arlon10 said: [ full text here] < clip >

We've been over this before. There are at least three different "applications" of Bayes: the theoretical, the statistical, and the practical. When you speak of Bayes you only want to discuss the statistical and the practical, but when I discuss Bayes I'm more interested (usually) in the theoretical. When I say Bayes models every situation in which we correctly reason from evidence to its effect on a hypothesis, I mean just that. This does not mean we can put precise numbers to all situations and apply it (meaning practically), nor does it mean we're treating all situations statistically. One can colloquially say "I thought I had a ham sandwich in the fridge, but when I opened it and saw no ham sandwich, I realized I didn't," and that's a perfectly reasonable and intuitively understandable chain of reasoning. But it is entirely possible to also see how Bayes applies to that reasoning: your memory of a ham sandwich in the fridge is the prior, and the absence of a ham sandwich is the evidence modifying that prior.

The fact that you don't see Bayes' Theorem in the Monty Hall Problem is, indeed, your problem, and the reason we keep hitting this same roadblock on this subject. When you talk about how our knowledge changes the probability, this is PRECISELY what Bayes models with its conditional probabilities: "Given the prize is behind Door 1 (.33), what's the probability Monty opens Door 2 (.5)? Given the prize is behind Door 3 (.33), what's the probability Monty opens Door 2 (1)?" Using Bayes, we see what REALLY changes here is that when you've chosen the right door, Monty opens Door 2 50% of the time, while when you've chosen the wrong door, Monty opens Door 2 100% of the time. Because 100% is twice 50%, it's twice as likely the prize is behind Door 3. We also see why the numbers are different if Monty opens the door randomly, because our knowledge changes: "Given the prize is behind Door 1 (.33), what's the probability Monty opens Door 2 (.5)? Given the prize is behind Door 3 (.33), what's the probability Monty opens Door 2 (.5)?" In that variation, we see that both conditional probabilities are equal (.5), rather than unequal as in the first situation. What changed? Our knowledge. We can express how our knowledge changes the probabilities with Bayes, as I just did.

I have no idea what you're saying with your 100-door variation. If you choose Door 1, and 98 doors are randomly opened not revealing the prize, then the probability the prize is behind Door 1 vs. the last remaining door is 50/50. If you open 98 doors and the prize isn't behind any of them, the probability the prize is behind them is 0/100, not 98/100. Don't know how you screwed that up. If you're randomly opening doors, then you can't use the logic that Monty (or whoever) opened all 98 doors because they didn't have the prize. One of the points of the original Monty Hall Problem is that randomness vs. intention makes a big difference in the probability. With randomness you get 50/50; with intention you get 67/33.
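Those two conditional-probability readings can be dropped straight into Bayes' formula; a minimal Python sketch (the names are mine, purely illustrative):

def posterior_door3(p_open2_if_prize1, p_open2_if_prize3):
    # P(prize behind Door 3 | you picked Door 1, Monty opened empty Door 2).
    # Each door starts with prior 1/3; Door 2 is ruled out once opened empty.
    joint1 = (1/3) * p_open2_if_prize1  # prize at Door 1, Monty opens 2
    joint3 = (1/3) * p_open2_if_prize3  # prize at Door 3, Monty opens 2
    return joint3 / (joint1 + joint3)

print(posterior_door3(0.5, 1.0))  # ~0.667: standard Monty, switching wins 2/3
print(posterior_door3(0.5, 0.5))  # 0.5: random "Monty Fall" variant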
|
|
|
Post by Eva Yojimbo on Feb 14, 2020 1:16:46 GMT
fatpaul said: [ full text here] < clip >

I tried to follow your post but it's a struggle... seems you've complicated this even more than Arlon is accusing me of doing! I'm not even sure I can properly follow your summary here. To me, it's pretty simple: if you see an effect, that is evidence for all possible causes. Which cause is most likely will depend on your prior knowledge of the likelihood of those causes. If you think there is a cause, but see no effect of that cause when you'd expect to if the cause existed, then that lack of an effect is evidence that the cause doesn't exist.

To go back to the wet sidewalk: a wet sidewalk is evidence for rain and all other possible causes of its wetness. Basically, all this means is that whatever prior probability you had that it would rain that day has increased since you observed a wet sidewalk because, if it did rain, that's what you'd expect to see. If it didn't rain, you'd only expect to see that some of the time (for all other possible causes). The reverse is just as true: NOT seeing a wet sidewalk, an absence of evidence of rain, is obviously evidence it didn't rain since, again, you'd expect to see a wet sidewalk if it had rained.

If you want to put numbers to this, let's say you thought there was only a 10% chance of rain that day. Let's also say the only other way your sidewalk gets wet is from your neighbor's sprinkler, which runs once a week (14%). If it did rain, obviously the sidewalk would be wet 100% of the time, so that keeps your prior probability of rain at 10%. Now let's consider the other 90% of the time when it doesn't rain. Of that 90%, only 14% of the time would you expect a wet sidewalk, and .14*.9 is about .13. Now you have .10 for "rain + wet sidewalk" and .13 for "no rain + wet sidewalk." If you divide .10 by the total of both you get .43. Suddenly, the probability it rained went from 10% to 43% because of the "wet sidewalk" evidence. If an observation makes your hypothesis (in this case "rain") 4.3x more likely, that's obviously evidence. What else would one call it?
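The arithmetic in that last paragraph, spelled out as a short Python sketch (same made-up numbers; variable names are mine):

p_rain = 0.10          # prior: 10% chance of rain that day
p_wet_if_rain = 1.00   # the post assumes rain always wets the sidewalk
p_wet_if_dry = 0.14    # neighbor's sprinkler, about once a week

joint_rain = p_rain * p_wet_if_rain      # 0.10
joint_dry = (1 - p_rain) * p_wet_if_dry  # 0.9 * 0.14 = 0.126, ~.13
posterior = joint_rain / (joint_rain + joint_dry)
print(round(posterior, 2))  # ~0.44, the 43% above up to rounding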
|
|
|
Post by Eva Yojimbo on Feb 14, 2020 1:50:33 GMT
general313 said: [ full text here] < clip >

Depends on what you mean by "give it a different status." In what sense? If you just mean we consider it as such for its utility, then fine. I'm speaking more fundamentally, theoretically. I never denied the utility of using it. Ultimately, though, it relies on the assumption of truth of its propositions. I'm sure I don't have to tell you that in deductive logic propositions must be assumed true rather than proved themselves. If those assumptions rest on conclusions from other propositions, then that evidence must also rest on assumed propositions, and so on back to infinity. Any assumed truth could be wrong, even down to the most basic one: that we can (generally) rely on our senses to tell us facts about reality. Plus, Hume knew long ago that you can't get from empirical evidence, which is inductive, to binary/universal truth values. To me, the easy way around this is to just say that everything is really induction/probability, and deduction/Boolean truth values are useful fictions.

I agree, of course, that the universe is disinterested in our estimations of how likely something is, but that ignores the fact that we don't have direct access to how the universe is. Our only access is via our senses and reasoning, both of which can be fallible, both of which are limited. One reason science works is that it's able to update itself when new evidence is presented, but if we were absolutely, 100% certain that something's true then it would be impossible for new evidence to convince us otherwise. This should be an obvious indicator that we never get to 100% certainty about anything. We just get awfully close sometimes.

Both of your examples very much involve "probability assessment." In 1, there were hypotheses about Andromeda. Those hypotheses can be treated as our priors, which have some probability of being true. The invention of better telescopes provided new and better evidence as to which was true, and "galaxy" won out over "dust cloud" thanks to that evidence. In 2, you could say that "aether theory" was the prior hypothesis involved, which had some probability of being true, while the absence of evidence in things like the Michelson-Morley experiments was the evidence that aether wasn't true. This prompted Einstein to begin thinking about alternative explanations. That led to the hypothesis of Relativity, a hypothesis that also had some probability of being true, which eventually led to the various experiments that provided evidence that Relativity was very probably true.

Again, if you really want to be convinced of this, I'd recommend reading at least some of that Jaynes textbook. The biggest problem you're having is, again, not realizing that our hypotheses can be treated as priors, and experiments supply the conditional probabilities that either provide evidence for them or not.
|
|
|
Post by Eva Yojimbo on Feb 14, 2020 2:03:55 GMT
phludowin said: [ full text here] < clip >

If we say that "X is evidence for Y," we don't necessarily mean that "X made Y the most probable explanation," and we certainly don't mean that "X proved Y is true." Any observation that increases our prior probability of a hypothesis counts as "evidence." The strength of that evidence, how much it increases our prior, is another matter entirely. See my fourth paragraph HERE, where I essentially did something similar with made-up numbers. I obviously agree that if the prior is small then the evidence might not even make it "more likely than not." Indeed, in my own example the probability of rain after observing a wet sidewalk went from 10% to 43%, so even there a "wet sidewalk" still means "rain" less than 50% of the time. THIS DOESN'T MAKE THE WET SIDEWALK NOT EVIDENCE! The wet sidewalk increased our probability of rain from 10% to 43%. That's what evidence does: it increases our prior probability. It can increase it a small amount, a medium amount, or a large amount. It can increase it a huge amount and the hypothesis can still be quite unlikely to be true: any evidence that increased a probability from .1% to 25% would be a 250x increase, and still be unlikely to be true.
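One way to keep "is it evidence?" separate from "how strong is it?" is the odds form of Bayes: posterior odds = prior odds × likelihood ratio, where the likelihood ratio (Bayes factor) measures the evidence by itself, independent of the prior. A quick sketch reusing phludowin's numbers (0.99/0.10 = 9.9); the function name is mine:

def update(prior, likelihood_ratio):
    # Posterior via odds: posterior odds = prior odds * Bayes factor.
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

for prior in (0.001, 0.10, 0.50):
    print(f"prior {prior:.3f} -> posterior {update(prior, 9.9):.3f}")
# prior 0.001 -> posterior 0.010   (still evidence; the prior dominates)
# prior 0.100 -> posterior 0.524
# prior 0.500 -> posterior 0.908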
|
|
|
Post by Aj_June on Feb 14, 2020 2:08:37 GMT
Arlon10 said: [ full text here] < clip >

Eva Yojimbo said: [ full text here] < clip >

Behavioural finance uses the inconsistent application of Bayes by individuals as one of its strong arguments against traditional finance (traditional finance assumes investors are rational and take into account all the information when making decisions). Many of the biases that financial market participants display because of cognitive errors are linked to not updating views based on new information. Conservatism bias, in particular, is very strongly associated with a failure to properly apply Bayes. Research has shown that quite a lot of investors suffer from biases such as conservatism and representativeness, and that these biases often lead to overweighting prior beliefs while underweighting new information.
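A toy illustration of that under-updating next to a proper Bayesian update; the half-power damping of the likelihood ratio is my own illustrative assumption, not a calibrated behavioral model:

def bayes_update(prior, lr):
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

def conservative_update(prior, lr, drag=0.5):
    # Underweight the new information by damping the likelihood ratio.
    return bayes_update(prior, lr ** drag)

prior, lr = 0.5, 4.0  # even odds; news is 4x likelier if the firm is good
print(bayes_update(prior, lr))         # ~0.80: full Bayesian update
print(conservative_update(prior, lr))  # ~0.67: under-reaction to the news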
|
|
|
Post by Eva Yojimbo on Feb 14, 2020 2:27:40 GMT
Aj_June said: [ full text here] < clip >

Absolutely. In fact, there's a huge list of cognitive biases that divert our reasoning away from the correctness of what Bayes would dictate. I personally think there are three necessary books that anyone wanting to understand rationality and epistemology should read: one is ET Jaynes's Probability Theory, another is Judea Pearl's Causality (or The Book of Why for a more layman-friendly version), and the final one is Daniel Kahneman's Thinking, Fast and Slow. If Jaynes and Pearl tell us what ideal reasoning SHOULD be, Kahneman is about all the ways in which we are irrational.
|
|
|
Post by Arlon10 on Feb 14, 2020 11:25:04 GMT
Eva Yojimbo said: [ full text here] < clips >
1) This does not mean we can put precise numbers to all situations and apply it (meaning practically), nor does it mean we're treating all situations statistically. 2) When you talk about how our knowledge changes the probability, this is PRECISELY what Bayes models with its conditional probabilities. 3) I have no idea what you're saying with your 100-door variation. 4) If you choose Door 1, and 98 doors are randomly opened not revealing the prize, then the probability the prize is behind Door 1 VS the last remaining door is 50/50. 5) If you open 98 doors and the prize isn't behind any of them, the probability the prize is behind them is 0/100, not 98/100. 6) With randomness you get 50/50, with intention you get 67/33.

1) Knock me over with a feather. 2) I seem to remember a sample problem listed on this board where the wording said "you get new information" when you did not. The math might be sound, but the English isn't always. 3) It must be lonely out there; most people get it. 4) That's what I explained. 5) That is not what I said. There is your problem. However much trouble you're having with the math, you won't get over it till your English improves. What I did say is that the probability is 98/100 before they were opened. You're conflating the Monty Hall problem and the random "opening" or "elimination" of doors. Apparently Bayes' Theorem has not improved your ability to roll with the changes. 6) What I said.

Sometimes when people get lost it helps to go to the top of the page and write everything out.

Given: The probability the prize is behind exactly one of x doors (x = 3, e.g.) is a certainty, or P = 1.
Given: With no further information, the contestant selects one of the x doors.
Given: The host, knowing full well where the prize is, opens or eliminates one of the two remaining doors, giving the contestant new information.

Solution: The probability the contestant chose the door with the prize is 1/x (e.g. 1/3). The probability the prize is behind the remaining doors is 1 - 1/x (e.g. 2/3). Notice that the sum of the probabilities is 1. That is important; see the first given. The contestant gets "new information" that the probability the prize is behind one of the remaining doors is zero, as it is opened or eliminated. However, it is not new information to the person who selected it; the host already knew it was zero. This is the critical point to understand, and the reason some people have difficulty with the problem. The sum of the probabilities must still add to 1. The probability the prize is behind the door the contestant selected is still 1/x, since "we" (host, contestant and audience) have no new information about it. The probability it is behind the opened or eliminated remaining door is zero, as we all now see. That leaves a probability of 1 - 1/x for the remaining closed door. For example: Door 1, P = 1/x or 1/3; Door 2, P = 0; Door 3, P = 1 - 1/x or 2/3.

For randomly selected "remaining" doors the probabilities must also add to 1. The probability the host selects the right door is 1/x before any doors are opened. That leaves 1 - 1/x - 1/x for the remaining door, or, if x = 3, then 1/3. A good question: if P = 1/3 for the contestant's door and P = 1/3 for the remaining door, where is the other 1/3? That is in the times the host accidentally opens the door with the prize, which is 1/3 of the time. That of course can only happen in this variation. I should also mention that if the only two choices left, whose probabilities were determined to be the same, are considered by themselves, the probability becomes one half for each, or "equal" probability. That change happens when you do not count the times the host accidentally opened the door with the prize. Another note: when I say the probability is 98/100 that the prize is behind a randomly opened door, that means that 98 percent of the time the host will accidentally open the door with the prize. Now you try it. Go to the top of the page and list, line by line, where you got your conditional and marginal probabilities and your 'A' and 'B' events.
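That accounting for the random variation can be written out exactly with fractions; a small Python sketch (the door numbering is illustrative):

from fractions import Fraction

# Contestant holds Door 1; the host opens Door 2 or Door 3 by coin flip.
weight = Fraction(1, 3) * Fraction(1, 2)  # each (prize, opened) pair
cases = [(prize, opened) for prize in (1, 2, 3) for opened in (2, 3)]

reveal = sum(weight for prize, opened in cases if prize == opened)
safe = 1 - reveal
stay = sum(weight for prize, opened in cases
           if prize == 1 and prize != opened)
switch = sum(weight for prize, opened in cases
             if prize not in (1, opened))

print(reveal)         # 1/3: host accidentally opens the prize door
print(stay / safe)    # 1/2: contestant's door, given no accident
print(switch / safe)  # 1/2: the remaining door, given no accident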
|
|
fatpaul
Sophomore
@fatpaul
Posts: 502
Likes: 193
|
Post by fatpaul on Feb 14, 2020 11:26:15 GMT
Wtf? Even my ex-wives give me A+ for effort!
|
|
fatpaul
Sophomore
@fatpaul
Posts: 502
Likes: 193
|
Post by fatpaul on Feb 14, 2020 11:30:37 GMT
You didn't even bother to ask me for clarification, but I will ask you a question for clarity, because I want to understand what you're saying. So my question is: what are the implications embedded in the '...'? Because the statement after the ellipsis doesn't follow from the prior statement. I ask because it might help me understand why you have forwarded your probabilistic take on the issue at hand as less complicated than my predicate-logic take on the issue, so that I may say, 'Yes, we are coming to the same conclusions about the issue at hand, so as to make a solid comparison for complexity, and so yes, the probabilistic method is in fact less complicated.' Otherwise I, like you, am at a loss as to what your point is, because as your post stands, all you've really said is: I don't understand what you are saying, so it is complicated, and so here is an example of something I do understand that's not complicated. Edit for grammatical errors because my grammar does suck. If you struggled due to grammar, I hold my hand up!
|
|
|
Post by general313 on Feb 14, 2020 16:31:31 GMT
You're welcome to consider Boolean logic as a "fiction", however it doesn't make sense to give it a different status than Bayes or probability math in general (or for that matter any other kind of math). Here's a simple example Boolean proposition, to demonstrate its utility to science: If a material is made of atoms then it cannot be infinitely subdivided.If I decide that water is composed of atoms, and I decide that the proposition is valid, then I can conclude that water is not infinitely divisible. Simple hard Boolean logic here. The universe is thoroughly disinterested in our estimation of how likely it is that something is true or not. At one point scientists were very uncertain if matter was composed of atoms, now the scientific community is extremely confident that it is. But the basic truth of the question is unchanged by what scientists have learned: water either is or isn't composed of atoms, the same today as 300 years ago. Here are a couple of examples of scientific discovery that don't have anything to do with a probability assessment, at least in terms of the essential thought processes. 1. About 100 years ago it was an open question whether the Andromeda nebula was a dust cloud within the galaxy (like the crab nebula) or something else external to the galaxy. At the time "galaxy" and "universe" were synonymous. There was much debate, and resolving the question depended on developing techniques to measure the distance to Andromeda. Improved telescopes and discovery of particular properties of Cepheid variables provided the yardstick to measure the distance and thus answer the question (Edwin Hubble being one of the chief scientists involved), leading to the discovery that the Milky Way galaxy is not unique to the universe. 2. When Albert Einstein tried to solve the riddle of why the Michelson-Morley experiments always came up nil, he approached it by performing thought experiments of a highly visual and geometric nature. They were along the lines of "imagine what it would be like to be travelling alongside a light beam at the same speed" or "what is the difference between being in an accelerating elevator in space or a stationary one on the ground"? These and many other thought experiments are detailed in the book Relativity: The Special and the General Theory, written by Einstein himself. In the book he makes clear that the thought experiments weren't just contrived to explain the theory to a lay audience, they played a pivotal role in Einstein's development and understanding of his relativity theories. There are many other scientific discoveries made over the centuries (from Kepler to Hawking) that similarly cannot be characterized in terms of mathematical probability, hence why I don't accept your claim about Bayes. Depends on what you mean by "give it a different status." In what sense? If you just mean we consider it as such for its utility, then fine. I'm speaking more fundamentally, theoretically. I never denied the utility of using it. Ultimately, though, it relies on the assumption of truth of its propositions. I'm sure I don't have to tell you that in deductive logic propositions must be assumed true rather than proved themselves. If those assumptions rest on conclusions from other propositions, then that evidence must also rest on assumed propositions and so-on back to infinity. Any assumed truth could be wrong, even down to the most basic one that we can (generally) rely on our senses to tell us facts about reality. 
Plus, Hume knew long ago that you can't get from empirical evidence, which is inductive, to binary/universal truth values. To me, the easy way around this is to just say say that everything is really induction/probability, and deduction/Boolean truth values are useful fictions. I agree, of course, that the universe is disinterested in our estimations of how likely something is; that ignores the fact that we don't have direct access to how the universe is. Our only access is via our senses and reasoning, both of which can be fallible, both of which are limited. One reason science works is that it's able to update itself when new evidence is presented, but if we are absolutely, 100% certain that something's true then it would be impossible for new evidence to convince us otherwise. This should be an obvious indicator we never get to 100% certainty about anything. We just get awfully close sometimes. Both of your examples very much involve "probability assessment." In 1, there were hypotheses about Andromeda. Those hypotheses can be treated as our priors, which have some probability of being true. The invention of better telescopes provided new and better evidence as to which was true, and "galaxy" won out over "dust cloud" thanks to that evidence. In 2. you could say that "aether theory" was the prior hypothesis involved, that had some probability of being true, while the absence of evidence in things like the Michelson-Morley experiments was the evidence that aether wasn't true. This prompted Einstein to begin thinking about alternative explanations. This lead to the hypothesis of Relativity, a hypothesis that also had some probability of being true, which eventually lead to the various experiments which provided evidence that Relativity was very probably true. Again, if you really want to be convinced of this I'd recommend reading at least some of that Jaynes textbook. The biggest problem you're having is, again, not realizing that our hypotheses can be treated as priors, and experiments are the conditional probabilities that either provide evidence for them or not. What I mean by "give it a different status" is "give it a different status according to how fictional" is is. You described Boolean algebra as a "useful fiction". You could start talking about the useful fiction of integer math ("it's impossible to have exactly 3 bales of hay"). What really bugs me about Bayesian thinking is that it always goes on about priors. What difference does it make if ancient philosophers thought there was an 80 percent chance that the earth rode on the back of a giant tortoise, or some other number? When we gradually gained a more modern view of the nature of earth and space, we can just throw out the entirety of the old thinking (the baby with the bathwater) and any estimated probabilities (however formal, informal or implied). In my examples the point I'm making is that the crucial thinking process that led to the discovery, the breakthrough in insight wasn't an analysis of prior probabilities but of a measurement in one case and geometric insight in the other. It didn't matter if 60 percent or 90 percent of astronomers favored the dust cloud theory for Andromeda; once a scientific measurement was made it was clear who was right, and the prior probabilities became completely irrelevant. Einstein wasn't thinking at all of prior probabilities while doing his thought experiments. 
If you can give a satisfying answer to the last paragraph I may consider taking a look at Jaynes' book, though the title, Probability Theory, doesn't really suggest it would address these topics.
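To make the "hypotheses as priors" mechanics being disputed here concrete, here is a minimal sketch in Python. Every number in it is invented for illustration (the 60/40 prior and both likelihoods are purely hypothetical); only the update rule itself is the point:

# Hypothetical priors for the two hypotheses about Andromeda.
prior = {"dust cloud": 0.60, "external galaxy": 0.40}

# Hypothetical likelihoods: the probability of measuring a Cepheid-based
# distance far beyond the Milky Way's edge under each hypothesis.
likelihood = {"dust cloud": 0.01, "external galaxy": 0.95}

# Bayes' theorem: posterior is proportional to likelihood times prior,
# then normalize so the probabilities sum to 1.
unnormalized = {h: likelihood[h] * prior[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # roughly {'dust cloud': 0.02, 'external galaxy': 0.98}

Rerun it with the prior flipped to 90/10 in favor of the dust cloud and the posterior for "external galaxy" still lands above 90 percent. The priors were there, but a decisive measurement swamps them, which is the sense in which both posts above are partly right.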
|
|
|
Post by goz on Feb 14, 2020 21:03:27 GMT
Wtf? Even my ex-wives give me A+ for effort!

It would have needed to be longer, harder and illustrated in pretty colours for an A+.
|
|
|
Post by Arlon10 on Feb 15, 2020 0:30:07 GMT
Aj_June said: [ full text here] < clip >
Many of the biases that financial market participants display because of cognitive errors are linked to not updating views based on new information. Conservatism bias, in particular, is very strongly associated with a failure to properly apply Bayes. Research has shown that quite a lot of investors suffer from biases such as conservatism and representativeness, and that these biases often lead to overweighting prior beliefs while underweighting new information.

I think it is important to note here that "probability" and "statistics" classes are not the same thing. Probability classes tend to deal with "known" probabilities and how to determine other probabilities from those, as your taxicab problem does. Statistics classes tend to deal with the proper ways to gather data to arrive at any probability in the first place. The probabilities in the taxicab problem are simple to assess with considerable accuracy, and there is little harm in granting them. (The color identification test is a little shaky and suspicious.) However, most real-life applications of probabilities are not so simple. It takes very careful data-gathering techniques to avoid the multitude and magnitude of unknown and unmanageable variables as far as possible. Most often that is not very far.
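For reference, the taxicab calculation itself is short. This is a quick sketch using the commonly cited figures, which are assumed here since the thread doesn't restate them: 15% of the city's cabs are Blue, 85% are Green, and the witness identifies colors correctly 80% of the time.

# Taxicab problem with the commonly cited numbers (assumed, not quoted
# from this thread): base rates for cab colors, plus witness reliability.
p_blue, p_green = 0.15, 0.85
p_says_blue_given_blue = 0.80   # witness correct
p_says_blue_given_green = 0.20  # witness mistaken

# P(cab was Blue | witness says Blue), by Bayes' theorem.
numerator = p_says_blue_given_blue * p_blue
posterior = numerator / (numerator + p_says_blue_given_green * p_green)
print(posterior)  # about 0.41

The posterior comes out near 41 percent: even a witness who is right 80 percent of the time shouldn't be believed outright when the base rate cuts against them, which is the base-rate neglect that the representativeness bias describes.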
|
|
|
Post by Eva Yojimbo on Feb 15, 2020 1:12:52 GMT
Eva Yojimbo said: [ full text here] < clips >
1) This does not mean we can put precise numbers to all situations and apply it (meaning practically), nor does it mean we're treating all situations statistically. 2) When you talk about how our knowledge changes the probability, this is PRECISELY what Bayes models with its conditional probabilities. 3) I have no idea what you're saying with your 100-door variation. 4) If you choose Door 1, and 98 doors are randomly opened not revealing the prize, then the probability the prize is behind Door 1 vs. the last remaining door is 50/50. 5) If you open 98 doors and the prize isn't behind any of them, the probability the prize is behind them is 0/100, not 98/100. 6) With randomness you get 50/50, with intention you get 67/33.

4) That's what I explained. 5) That is not what I said. There is your problem. However much trouble you're having with the math, you won't get over it till your English improves. What I did say is that the probability is 98/100 before they were opened. You're conflating the Monty Hall problem and the random "opening" or "elimination" of doors. Apparently Bayes' Theorem has not improved your ability to roll with the changes. 6) What I said.

Sometimes when people get lost it helps to go to the top of the page and write everything out.

Given: The probability the prize is behind exactly one of x (x = 3, e.g.) doors is a certainty, or P = 1.
Given: With no further information, the contestant selects one of the x doors.
Given: The host, knowing full well where the prize is, opens or eliminates one of the two remaining doors, giving the contestant new information.

Solution: The probability the contestant chose the door with the prize is 1/x (e.g. 1/3). The probability the prize is behind the remaining doors is 1 - 1/x (e.g. 2/3). Notice that the sum of the probabilities is 1. That is important; see the first given.

The contestant gets "new information" that the probability the prize is behind one of the remaining doors is zero, as it is opened or eliminated. However, it is not new information to the host, who selected it and already knew it was zero. This is the critical point to understand, and the reason some people have difficulty with the problem. The sum of the probabilities must still add to 1. The probability the prize is behind the door the contestant selected is still 1/x, since "we" (host, contestant and audience) have no new information about that door. The probability it is behind the opened or eliminated remaining door is zero, as we all now see. That leaves a probability of 1 - 1/x for the remaining closed door. For example: Door 1, P = 1/x or 1/3; Door 2, P = 0; Door 3, P = 1 - 1/x or 2/3.

For randomly selected "remaining" doors the probabilities must also add to 1.
The probability the host selects the right door is 1/x before any doors are opened. That leaves 1 - 1/x - 1/x for the remaining door, or 1/3 if x = 3. A good question: if P = 1/3 for the contestant's door and P = 1/3 for the remaining door, where is the other 1/3? It is in the times the host accidentally opens the door with the prize, which happens 1/3 of the time. That, of course, can only happen in this variation. I should also mention that if the two remaining choices, whose probabilities were determined to be the same, are considered by themselves, the probability becomes one half for each, or "equal" probability. That change happens when you do not count the times the host accidentally opened the door with the prize. Another note: when I say the probability is 98/100 that the prize is behind one of the doors randomly opened, that means that 98 percent of the time the host will accidentally open the door with the prize.

Now you try it. Go to the top of the page and list line by line where you got your conditional and marginal probabilities and your 'A' and 'B' events.

4-6. You do realize your post is still there for everyone to read, yes? If so, I don't know why you just blatantly lied about what you said. What you actually said: "If 98 doors are randomly eliminated the chance the prize is behind the remaining door 1/100, the chance the prize is behind the first door picked is 1/100 and the chance the prize is not behind either remaining door is 98/100." You most certainly DID NOT say that "the probability is 98/100 before they were opened." You also most certainly didn't say that "with randomness it's 50/50" because, again, I quote: "If we know the 98 eliminated doors do not have the prize then the chance the prize is behind the first door picked is still 1/100 and the chance the prize is behind the remaining doors is still 99/100 even if only one remains."
All I can surmise from this is that you realized from my post that you were wrong, and now you're gaslighting to try to convince people you didn't say exactly what I quoted you as having said. Everything else you said is just restating the same thing you did the first time around: that the "2/3 goes to the third door." Why does it do this? You've never explained why the 2/3 goes to the third door rather than to the first door, or why it doesn't just become 50/50. Saying that it is so is not an explanation. Bayes, as I used it, explained this using conditional reasoning. If you want it longhand (I've already done this before in the last thread, and you didn't get it then):

P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|~A)P(~A))

P(A|B) = probability the prize is behind D3 given Monty opened D2
P(B|A) = probability Monty opened D2 given the prize is behind D3 (1)
P(A) = probability the prize is behind D3 (.33)
P(B|~A) = probability Monty opened D2 given the prize isn't behind D3 (.25 — within ~A the prize is equally likely behind D1 or D2; Monty opens D2 half the time if it's behind D1 and never if it's behind D2)
P(~A) = probability the prize isn't behind D3 (.67)

P(A|B) = 1(.33) / (1(.33) + .25(.67))
P(A|B) = .33 / .5
P(A|B) = .67
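Since the two of you keep disputing what the random-host variant actually yields, the arithmetic is easy to check by simulation. Here's a minimal Python sketch (the door labels and trial count are arbitrary choices, not anything from either post) that runs both versions: the host who knowingly avoids the prize, and a random host whose accidental reveals are discarded, as described above.

import random

def trial(host_knows):
    """One round. Returns (switching_wins, round_counts); a random host's
    round doesn't count if he accidentally reveals the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    if host_knows:
        # Monty deliberately opens a door that is neither the pick nor the prize.
        opened = random.choice([d for d in doors if d not in (pick, prize)])
    else:
        # "Monty Fall": he opens a random unpicked door, possibly the prize.
        opened = random.choice([d for d in doors if d != pick])
        if opened == prize:
            return False, False  # prize revealed; discard this round
    switch_to = next(d for d in doors if d not in (pick, opened))
    return switch_to == prize, True

for host_knows in (True, False):
    rounds = [trial(host_knows) for _ in range(100000)]
    wins = [w for w, counts in rounds if counts]
    print(host_knows, sum(wins) / len(wins))
# True  -> about 0.667 (knowing host: switching wins 2/3)
# False -> about 0.500 (random host: no advantage to switching)

The knowing host's choice carries information, since he could never open the prize door; the random host's choice carries none once the lucky rounds are discarded. That asymmetry is exactly what the P(B|A) and P(B|~A) terms above encode.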
|
|
|
Post by Eva Yojimbo on Feb 15, 2020 1:13:06 GMT
You didn't even bother to ask me for clarification, but I will ask you a question for clarity because I want to understand what you're saying. So my question is: what are the implications embedded in the '...'? Because the statement after the ellipsis doesn't follow from the prior statement. I ask because it might help me understand why you have forwarded your probabilistic take on the issue at hand as less complicated than my predicate-logic take on the issue, so I may say, 'Yes, we are coming to the same conclusions about the issue at hand, so as to make a solid comparison for complexity, and so yes, the probabilistic method is in fact less complicated.' Otherwise I, like you, am at a loss as to what your point is, because as your post stands, all you've really said is: I don't understand what you are saying, so it is complicated, and so here is an example of something I do understand that's not complicated. Edit for grammatical errors because my grammar does suck.

If you struggled due to grammar, I hold my hand up! I figured the implication followed pretty obviously: I struggled to follow because I found your explanation complicated. I think my probabilistic take is less complicated because it can easily be translated into everyday language. In fact, my post was just explaining it linguistically and THEN I did the math to prove what I said, but even if you can't follow the math you can follow what I said before then. Meanwhile, your post was predicate logic from the beginning, and predicate logic is evil. Maybe we were saying the same thing in different ways, I don't know. I'll leave it to others, though, to arbitrate which post was more complicated. My opinion on the matter is just that, and we all know what opinions are like.
|
|