Post by Eva Yojimbo on Jun 20, 2017 12:08:24 GMT
But which form of determinism? I was reading this article on the subject: www.benbest.com/science/quantum.html Here he rejects the many worlds theory as an extreme violation of Occam. Though as he says (and I'm inclined to agree) the use of Occam in these matters as to what is really simpler is largely a judgement call.

I'm only speaking of causal determinism in the scientific context since that seems most appropriate given the thread subject.

As for that article, first, I have no idea who Ben Best is and his webpage only advertises himself as a "life-extensionist and an inveterate scribbler," so I don't see any credentials that would hint as to his expertise on QM. While I'm not big on appealing to authority (if I were, I wouldn't have any business discussing most anything!), it is sometimes helpful as a quick-and-dirty BS detector.

Second, all he says on the subject is here: "Again, my "Occam's Razor bias" is that it is outrageous to propose countless parallel universes being generated to satisfy a model — conveniently interacting/noninteracting in just the right ways to create just the right results — such as "quantum" interference with our universe (probability-wave "interference", no less) — without other interaction." This really tells me he has no clue as to what he's talking about. First, MWI does not state that the results we see are from "Parallel universes... conveniently interacting/noninteracting in just the right ways." That's not what it is at all. Secondly, the parallel universes are not being "proposed" or "generated."

Sean Carroll covered this objection definitively in the article I linked to above. To quote the most relevant parts: I excised most of his explanation as to how he gets to that conclusion, but you can read the article if you want. While Carroll doesn't mention Occam explicitly, that's basically what he's referring to. There is another article there by another physics professor, Matthew Rave, that does mention Occam explicitly: manyworldstheory.com/2014/01/13/dont-cut-yourself-on-occams-razor/ Again, to quote the most relevant bits:

Assessing simplicity as per Occam is not (or should not be) a "judgment call." Occam can be/has been formalized in both information theory and Solomonoff Induction, which are not subjective. While it's often impractical (if not impossible) to describe theories in those terms, it's often not necessary to do so in order to understand what Occam would favor objectively, which is basically what Rave describes above when he says: "For you can start with three postulates, and add a fourth about wave-function collapse, and you get CI. Or you can start with just three, and say nothing of wave-function collapse, and you get MWI." 3 is objectively simpler than 4 (though it's more like "3 is simpler than those same 3+1").

I'm not sure how you're trying to distinguish between a simpler theory VS ontological simplicity. The only real way to address ontological simplicity/complexity to begin with is to find a method that can model whatever it is we're describing as accurately as possible and then check the length of that description. Besides that, all you have are intuitions, and our intuitions evolved to deal with things on a much more complex level than, say, QM. As for your hypothetical about random/unpredictable being simpler, this could theoretically be true, but it doesn't seem to be the case in the world we're living in.
I mean, it's possible you could devise a program to create extremely random behavior and that would be simpler than any you could devise to create determined behavior, but how well would it resemble our world? The simple fact is that the best model we have of classical physics is deterministic, and the simplest model we have of quantum physics is also deterministic. I have no idea why you'd want to reject the latter or ignore Occam. If you're trying to state that a random interpretation of QM is ontologically simpler, then I don't know what you think the justification for that is.
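To make the "3 postulates vs. those same 3+1" point a little more concrete, here's a toy sketch of how a Solomonoff-style prior penalizes description length. The bit counts are invented placeholders (nobody has actually encoded the postulates of QM this way), and real Solomonoff induction is uncomputable, so treat this purely as an intuition pump:

```python
def prior_weight(program_length_bits: int) -> float:
    """Solomonoff-style prior: a hypothesis whose shortest encoding is L bits gets weight 2**-L."""
    return 2.0 ** -program_length_bits

# Hypothetical bit counts, purely for illustration.
three_postulates = prior_weight(300)       # the bare formalism, encoded in ~300 bits
three_plus_collapse = prior_weight(340)    # the same formalism plus a collapse postulate, ~340 bits

# Every extra bit of description halves the prior, so the shorter theory is
# favored by 2**40 (about a trillion to one) before any evidence is weighed.
print(three_postulates / three_plus_collapse)
```

The point isn't the specific numbers, just that "add a postulate" always means "add description length," and the formalized razor charges for every bit of it.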
Post by The Lost One on Jun 20, 2017 16:38:38 GMT
As for that article, first, I have no idea who Ben Best is and his webpage only advertises himself as a "life-extensionist and an inveterate scribbler," so I don't see any credentials that would hint as to his expertise on QM. While I'm not big on appealing to authority (if I were, I wouldn't have any business discussing most anything!), it is sometimes helpful as a quick-and-dirty BS detector.

Hey don't dis life-extensionists, they made this country great! Point taken though.

This is a pretty good explanation. Though doesn't it rely on reality resembling a computer program at heart? My thinking was more that these considerations are so beyond our usual experience that we may not be capable of saying one is simpler than the other. For instance saying CI has 4 postulates while MWI has only 3 may have more to do with our poor comprehension of what we are witnessing than there actually being more to CI.

If it were truly random it would have to resemble some sort of world though. Why not ours as opposed to any other? But anyway that isn't quite what I meant. I meant what if randomness is an essential part of reality? If randomness exists in all possible worlds then we could not use Occam to eliminate it in this one.

I don't particularly. Just throwing a few thoughts out there. I just have some misgivings about how reliably it can be applied in this situation.

I'm not. My point is more that we might be able to assess what is conceptually simplest but the ontologically simplest might be different.
Post by Eva Yojimbo on Jun 20, 2017 18:54:48 GMT
This is a pretty good explanation. Though doesn't it rely on reality resembling a computer program at heart?

It's less reliant on reality resembling a computer program and more reliant on the notion that a computer program can model reality. If you doubt that, consider the various methods we use to model reality, like math, and then consider that computer programs can be written to run mathematical functions quite easily. Solomonoff Induction and a lot of Information theory like Kolmogorov Complexity kinda rely on the notion that you can model just about anything in binary code since that's the simplest, most basic language possible. They're beyond our everyday experience, sure, but that doesn't mean that they aren't extremely well understood in a purely mathematical/scientific context.

The point is that all QM, no matter what interpretation you subscribe to, relies on the same 3 postulates (I assume by that he means the Schrodinger Wave Equation, Born Rule, and Uncertainty Principle); but some interpretations are not satisfied with JUST those three because of their implications/consequences, so they add another (the collapse) that just complicates matters. I don't know how you look at that and say it's anything but an obvious violation of Occam.

If randomness was an essential part of reality then it's unclear to me (and all science, apparently) how it can suddenly morph into order and determinism on some level. What I mean is that we know General Relativity is deterministic, local, and real; QM under the "random" interpretations is indeterministic, non-local, and non-real, and there's no apparent reason when or how the latter transitions to the former. To quote the physicist from "Ask a physicist" on the matter:

"One may be tempted to say “the physics at small scales is just different!”. Fair enough. However, there are no physical laws that work differently on different scales. For example, at very small scales water acts like honey, and to swim you need to use things like flagella. At the other end of the scale (our scale) water behaves… like water, and things like fins and flippers suddenly work really well, but flagella don’t. However, the same physical laws (specifically, the Navier-Stokes equation) govern everything."

"More generally, all laws apply at all scales, it’s just a question of degree. Relativity works at all velocities, but you don’t notice the weird effects until you’re moving really fast. What we call “Newton’s laws” are just an approximation that work at low speeds."

"If the Copenhagen “size argument” (that larger objects somehow have different laws) holds up, it’ll be the first of its kind."

So it seems to me that to make this "randomness is fundamental to reality" idea more than just an empty postulate--like, say, trans-dimensional unicorns--you have to come up with some reason for how and why we have these two different theories working at two different levels with no connection/bridge between them, and you have to do this while realizing there's no current mathematical/empirical basis and that the idea is fundamentally more complicated to start with.
Further, you have to justify why the quantum world gets to violate so many physical laws like (to quote the physicist again): "Conservation of information (supported by everything else, including logic), time reversibility (again, everything else), and information flowing backward through time (spacelike information exchange or “spooky action at a distance”)" I'm not sure what the reason is for such rigmarole when the simplest interpretation avoids all of these problems.

I don't know why, unless there's just something you're misunderstanding, because it seems pretty straightforward to me. I have no idea how you'd even hope to assess ontological simplicity without having some kind of formal descriptive language like math. I'm also not sure if addressing ontological simplicity/complexity even makes much sense outside of our ability to model such things.
Post by The Lost One on Jun 21, 2017 13:20:26 GMT
If randomness was an essential part of reality then it's unclear to me (and all science, apparently) how it can suddenly morph into order and determinism on some level. What I mean is that we know General Relativity is deterministic, local, and real; QM under the "random" interpretations is indeterministic, non-local, and non-real, and there's no apparent reason when or how the latter transitions to the former. To quote the physicist from "Ask a physicist" on the matter: "One may be tempted to say “the physics at small scales is just different!”. Fair enough. However, there are no physical laws that work differently on different scales. For example, at very small scales water acts like honey, and to swim you need to use things like flagella. At the other end of the scale (our scale) water behaves… like water, and things like fins and flippers suddenly work really well, but flagella don’t. However, the same physical laws (specifically, the Navier-Stokes equation) govern everything." "More generally, all laws apply at all scales, it’s just a question of degree. Relativity works at all velocities, but you don’t notice the weird effects until you’re moving really fast. What we call “Newton’s laws” are just an approximation that work at low speeds." "If the Copenhagen “size argument” (that larger objects somehow have different laws) holds up, it’ll be the first of its kind."

My understanding of it (correct me if I make any mistakes in any of the below) is that proponents of CI aren't saying that larger objects have different laws. Large events are as indeterministic under CI as very small events. Why the former appears to be deterministic is due to probability.

So think of a quantum event like rolling a single 6-sided die. The result will be between 1 and 6 and each possible result is as likely as any other: the extremes (1 and 6) are just as likely as the mid-points (3 and 4). However roll two dice and compare the chance of rolling the extremes (2 and 12) to the midpoint (7). You have roughly a 17% chance of rolling the midpoint, a 28% chance of rolling 1 away from the midpoint, but only a 6% chance of rolling an extreme. Up the number of dice to 4 and you get an 11% chance of rolling the midpoint (14), then those odds decrease the further you get from that midpoint, culminating in a 0.16% chance of rolling either extreme (4 or 24). Now imagine rolling trillions upon trillions of dice and adding the total. The chances of rolling an extremely low or high total are so minuscule as to be all but impossible. The chances of rolling a number in or around the mid-point are so high they're almost certain. So when you observe a large event, all the particles involved may be acting randomly but when they come together on a large scale they will do the same thing with such regularity that the chances of ever observing an outlier are so remote as to be unworthy of even considering.

I think the above provides a bridge - everything is indeterminate but it just doesn't appear that way on a large scale.

I don't think there is a good basis for either. If they are not random, you're still left with the reason why they behave the way they do. In which case you either have to suggest some hidden explanation (God of the Gaps?) or say they just do. Which is no more justifiable than saying randomness is just the way they are.

Will (attempt to) cover this below. Way out of my depth here so won't embarrass myself by trying to answer any of these.
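(For what it's worth, the dice figures above can be checked by brute-force enumeration. A small sketch, assuming fair six-sided dice; the percentages come out where stated, give or take rounding:)

```python
from collections import Counter
from itertools import product

def sum_distribution(n_dice: int) -> Counter:
    """Exact distribution of the sum of n fair six-sided dice."""
    return Counter(sum(roll) for roll in product(range(1, 7), repeat=n_dice))

for n in (2, 4):
    dist = sum_distribution(n)
    total = 6 ** n
    midpoint = int(3.5 * n)
    print(f"{n} dice: P(midpoint {midpoint}) = {dist[midpoint] / total:.1%}, "
          f"P(either extreme) = {(dist[n] + dist[6 * n]) / total:.2%}")
# 2 dice: P(midpoint 7) = 16.7%, P(either extreme) = 5.56%
# 4 dice: P(midpoint 14) = 11.3%, P(either extreme) = 0.15%
```

With more and more dice the spread around the midpoint keeps shrinking relative to the total, which is the "bridge" being described: individually random rolls, but an aggregate that looks as good as fixed.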
What do the physicists who favour CI (a minority now if I understand correctly) say about these? Quite possible! I'm not sure there is a hope. I'm not saying we should assess ontological simplicity instead, only that conceptual simplicity may not correspond to ontological simplicity.
For instance maybe there's some fundamental flaw in how we understand the world, a category error where we think of two entities as different when in actuality they are one and the same. Any hypothesis that would posit both these entities therefore might be considered more complex than it is.
So suppose everyone agreed with Descartes that he had shown mind and matter to be distinct. We then considered whether a universe with just mind would be more probable than one with both mind and matter. If we plugged that into Solomonoff, we would find that, yes, a universe with just mind is more likely than one with mind and matter. However, Descartes' thinking is largely considered fallacious now. Mind and matter may be distinct but we can't be sure one is not just a mode of representation of the other. If that is the case, then a universe with both mind and matter is no more complex than one with just mind. Applying Solomonoff to our Cartesian model could therefore lead us to think there is complexity where there is none ontologically.

Since we cannot be sure we are not making category mistakes in our models, we cannot really be sure we are correct when we say something is more likely to be true (an ontological claim) because it is less complex (a conceptual claim).

To summarise, while I think I get why a more complex world is less likely to exist, I have basically two misgivings: 1. If there is some sort of necessary complexity, then we cannot posit a world without that complexity as more likely. While it's easy to see some things as contingent rather than necessary (such as whether Bob and Jane will have a daughter), when we get right down to what seem to be the basic building blocks of reality, I think it's hard to be sure anything is unnecessary. 2. We may be limited in our assessment of complexity - what seems complex to us may not actually be complex in reality, and is therefore no less likely.
Post by Eva Yojimbo on Jun 21, 2017 15:43:58 GMT
If randomness was an essential part of reality then it's unclear to me (and all science, apparently) how it can suddenly morph into order and determinism on some level. What I mean is that we know General Relativity is deterministic, local, and real; QM under the "random" interpretations is indeterminstic, non-local, and non-real, and there's no apparent reason when or how the latter transitions to former. To quote the physicist from "Ask a physicist" on the matter: "One may be tempted to say “the physics at small scales is just different!”. Fair enough. However, there are no physical laws that work differently on different scales. For example, at very small scales water acts like honey, and to swim you need to use things like flagella. At the other end of the scale (our scale) water behaves… like water, and things like fins and flippers suddenly work really well, but flagella don’t. However, the same physical laws (specifically, the Navier-Stokes equation) govern everything." "More generally, all laws apply at all scales, it’s just a question of degree. Relativity works at all velocities, but you don’t notice the weird effects until you’re moving really fast. What we call “Newton’s laws” are just an approximation that work at low speeds." "If the Copenhagen “size argument” (that larger objects somehow have different laws) holds up, it’ll be the first of its kind." My understanding of it (correct me if I make any mistakes in any of the below) is that proponents of CI aren't saying that larger objects have different laws. Large events are as inderterministic under CI as very small events. Why the former appears to be deterministic is due to probability. So think of a quantum event like rolling a single 6-sided die. The result will be between 1 and 6 and each possible result is as likely as any other: the extremes (1 and 6) are just as likely as the mid-points (3 and 4). However roll two dice and compare the chance of rolling the extremes (2 and 12) to the midpoint (7). You have roughly a 17% chance of rolling the midpoint, a 28% chance of rolling 1 away from the midpoint, but only a 6% chance of rolling an extreme. Up the number of dice to 4 and you get an 11% chance of rolling the midpoint (14), then those odds decrease the further you get from that midpoint, culminating in a 0.16% of rolling either extreme (4 or 24). Now imagine rolling trillions upon trillions of dice and adding the total. The chances of rolling an extremely low or high total are so miniscule as to be all but impossible. The chance of rolling a number in or around the mid-point are so high they're almost certain. So when you observe a large event, all the particles involved may be acting randomly but when they come together on a large scale they will do the same thing with such regularity that the chances of ever observing an outlier are so remote as to be unworthy of even considering. LOL, that dice example is one I've used before to explain how the appearance of quantum randomness could aggregate to 'deterministic means' on larger scales! It's also an example I didn't copy from anyone so I'm wondering if you accidentally copied it from me! (Of course, it's entirely possible someone else has used the example and I just don't know about it!) Anyway, that is incorrect. The only way the aggregation works is if there isn't a collapse and everything is actually in a state of superposition. Under CI, there is a definite "split" between the randomness of quantum events and the determinism of macro events. 
Whether that "split" is because of size, or because of observation-caused collapse, is debatable, but under CI you most certainly aren't getting randomness aggregating to a mean. This works in MWI because as the superpositions get larger the probabilities of any measurement would skew towards a mean; but the only way you get to that mean is if the other states of the superposition don't just go away (like they do in CI). An important thing to keep in mind is that MWI is basically able to subsume the idea of the probabilistic collapse in CI by saying that it's merely a result of us restricting our attention to one state of the wave function. The other states are still there, and are always there at all sizes; they don't just go away because we're measuring to see the probability of ending up in any given state. EDIT: It occurred to me that probably the best way to explain this is to stick with the dice example. The numbers on the die represent the superpositioned states, and the number of dice represent the number of particles. In MWI, since everything is in a superpositioned state, you just keep adding more dice for more particles. In CI, at some point all the other numbers go away except for one (this is the collapse postulate). It's as if the numbers change from 1-6 to just 1, and you can add more dice/particles that are also just 1. I'm not sure what "either" you're referring to here in response to what I posted. What I meant was that the collapse postulate of CI isn't justified either mathematically or empirically, and it's the collapse postulate that creates the randomness. Most say "shut up and calculate." Seriously. Thing is, the interpretation "debate" in science isn't even really a debate. Mathew Rave covered this rather succinctly in the article I posted where he said: "...I was never taught about interpretations of quantum mechanics. Ever. Everything I know about such things, I learned on my own since graduation. Thinking of taking a quantum mechanics class at your local university? Guess what: they will probably not talk about MWI, or the Copenhagen interpretation, or Schrodinger’s f***ing cat. Why not? Because those are philosophy topics, not physics. You can do quantum mechanics without ever interpreting a single thing. There’s no crying in baseball, and there’s no philosophy in quantum mechanics. It is a purely mathematical theory, that undeniably works, and most people just leave it at that." Copenhagen got proposed first. It created all of these problems that bothered Einstein and other philosophically-minded physicists, but it worked, so most just accepted it and moved on. I'm guessing the notion of a collapse still gets mentioned in textbooks (even if the interpretation isn't mentioned explicitly), and articles like the one cham linked to implicitly assume the collapse interpretation. It's just kind of a "given" since most working/experimental physicists just don't bother with the philosophy of it because it doesn't matter a hill of beans to the work they do. Remember when I linked you that Sean Carroll article where he lamented the fact that physicists don't care much for philosophy because it doesn't help them in their work? Well, I think you can blame that for why CI hasn't been more vehemently rejected in science: the basic reason is because it doesn't matter to what they do. 
Yudkowsky wrote a rather interesting and humorous article about this here: lesswrong.com/lw/qa/the_dilemma_science_or_bayes/

And that might be true but, again, I don't know what you think you make of it beyond an empty hypothesis. It reminds me of the issue of trying to justify our most basic truth-finding tools like rationality and even Occam. Yudkowsky again has a good article on this: lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/ It's long, but the basic idea is the attempt to distinguish between circular logic (bad) VS reflective loops (which are good and necessary). The idea is that the latter is a cross-talk between our theories, models, etc. and what we empirically observe. If we trust something like Occam, and Information Theory/Solomonoff as a way of assessing Occam, it's mostly because it seems to work, and there seems to be awfully good rational and mathematical justification for why it would be a good reflection of ontological simplicity/complexity. Perhaps it isn't perfect, and you can always ask "well is it REALLY a reflection of ontological simplicity/complexity?" in a similar way you can ask "well is what we see/hear/etc. REALLY a reflection of ontological existence?," but such questions seem rather useless to me unless you're going to propose a better/more accurate alternative.

As for your example, we couldn't just plug the idea of "mind and matter" into Solomonoff because to do so would require a binary description of what "mind" and "matter" are. The very fact that Solomonoff requires a binary description of whatever data you want to find theoretical explanations for is one thing that helps prevent these cognitive category errors. This is also why I tend to find Yudkowsky so enlightening on such issues because he's essentially an AI researcher who had to learn about these philosophical issues in order to be able to program an AI. In such programming you can't take a very vague idea that you can't reduce and then make an AI out of it; you actually have to know how to reduce it and model it from the ground up.

So though I'm not sure if you picked a good example to argue your case, I understand what you're trying to say. My basic response would be, yes, in some (perhaps even most) cases, the way we describe our models would have a huge impact on how we assess Occam. You can't just categorize a huge complex of things under a single label and then call that "one thing" and then say it's simpler than "two things" that are actually much simpler on a more fundamental level. Again, Yudkowsky has addressed this in his own post on Occam: lesswrong.com/lw/jp/occams_razor/ using the example "the woman down the street is a witch, she did it" as an example of how you can't confuse seemingly simple linguistic descriptions (like "mind" and "matter") with the kind of reductive descriptions that you achieve with Solomonoff. The former is indeed a type of cognitive category error, and it's also one that is greatly mitigated by the kind of modeling we do in math and algorithms.

Coming back to QM, when we're referring to the postulates we're referring to extremely precise mathematical descriptions: Schrodinger's describing the wave-function/superpositioning of particles, Heisenberg describing our ability to accurately measure position/momentum, and Born describing how to derive the probability of any given measurement. All three are essential to make QM work on a practical level.
But the consequence of taking these (especially the first) at face value, as an accurate description of something real, is that it gives you many-worlds. In order to get rid of those other worlds/states, you have to add an additional postulate in the wave function collapse. I don't know how you would mathematically model this collapse via Solomonoff, but I don't know how you escape the notion that postulating it to begin with is adding something to the basic formalism of QM. Further, the collapse seems to actually DO something; i.e., the act of observation seems to cause a change. You can't describe any causal change without adding to the overall description, so I don't think you can say that the collapse can simply be something that's already embedded in the other postulates so isn't adding complexity.

Anyway, this is getting rather long and I think I've addressed most of your points, but you can let me know if I missed something or if anything is unclear (I'm not sure I was as lucid as possible on why the dice example only works in MWI but not CI... I'll try to think of a better way if it's not clear).
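Maybe a toy sketch of the dice analogy helps. To be clear, this is only a cartoon of the analogy as described above, not of the actual quantum formalism: the "MWI-style" function keeps every face of every die on the books, while the "CI-style" function throws away all but one face per die at "measurement":

```python
import random

def mwi_style_branches(n_dice: int) -> dict:
    """Keep every combination of faces "live": a weight for every possible total."""
    dist = {0: 1.0}
    for _ in range(n_dice):
        new_dist = {}
        for total, weight in dist.items():
            for face in range(1, 7):
                new_dist[total + face] = new_dist.get(total + face, 0.0) + weight / 6
        dist = new_dist
    return dist

def ci_style_roll(n_dice: int) -> int:
    """Each die "collapses" to one face; the other five are simply discarded."""
    return sum(random.randint(1, 6) for _ in range(n_dice))

branches = mwi_style_branches(10)
print(max(branches, key=branches.get))   # the heaviest-weighted total is the mean, 35
print(ci_style_roll(10))                 # one definite total per run
```

In the first picture the whole distribution is always there and the weight simply piles up around the mean as dice are added; in the second, each run leaves you with a single definite number and nothing else left to aggregate.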
Post by The Lost One on Jun 21, 2017 20:35:05 GMT
LOL, that dice example is one I've used before to explain how the appearance of quantum randomness could aggregate to 'deterministic means' on larger scales! It's also an example I didn't copy from anyone so I'm wondering if you accidentally copied it from me! (Of course, it's entirely possible someone else has used the example and I just don't know about it!) Haha, well I did think when I was writing it "Hmm, I'm talking about dice probabilities to a professional gambler with an interest in Bayesian probability. There's a pretty good chance he's already considered this!" I don't think I took this from you though, I don't think we've ever discussed this before. It's possible I saw you mention it to someone else but I have a feeling I heard a dice analogy some time ago. But if I did nick it from you and forgot, apologies! Right, I think I get it now. Cheers! Yes, I misunderstood what you were saying. Pay it no heed. Makes sense. I'm not really making anything of it per se, just sharing some misgivings I have about using Occam to find truth. That I don't have an alternative doesn't bother me because I'm not really interested in truth bar its practical use. I agree with you Occam's Razor is a good practical tool in that it usually seems to work well in many situations. But here it doesn't really deliver anything. Since all the interpretations make the same predictions, it doesn't matter from a pragmatic point of view which you favour or for what reason (even the reasons Carroll mocked). If the main draw of the razor here is that it reveals ontological truth, then that would be good for the Einsteins of this world who are interested in ontological truth. But I'm not sure it even does that all that effectively. Sure. It was just an example of a potential category error that came to mind. Probably not a great one for this scenario. Probably not, no. But then maybe we're both falling foul of a very insidious fallacy here? No probs, very illuminating. Will check out your links.
Post by general313 on Jun 21, 2017 23:30:36 GMT
Even if QM could be proven to be compatible with a deterministic universe, collecting information to predict future results would be so utterly impractical that it is effectively meaningless. In order to gather the necessary state information to accurately predict the future, one would have to use more material to model the thing in question than is in the thing itself. To model the universe would require far more atoms than are contained in the universe.
Some definitions of randomness (e.g. Kolmogorov randomness) are based on the idea that a string is random if it cannot be produced by any computer program shorter than the string itself.
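A crude way to get a feel for that idea is to use a general-purpose compressor as a stand-in for "shortest program" (true Kolmogorov complexity is uncomputable, so this is only a rough upper bound):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    # zlib only gives an upper bound on the shortest description,
    # but it's enough to show the contrast.
    return len(zlib.compress(data, 9))

patterned = b"0123456789" * 1000    # 10,000 bytes with obvious structure
patternless = os.urandom(10_000)    # 10,000 bytes with no exploitable pattern

print(compressed_size(patterned))    # tiny: the repetition compresses away
print(compressed_size(patternless))  # roughly 10,000: essentially incompressible, i.e. "random" in this sense
```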
Post by Terrapin Station on Jun 22, 2017 11:59:53 GMT
What's non-ontologically deterministic? I don't get why you're asking this. I had asked this: "So you don't agree that it wasn't clear what the Schrodinger equation meant (ontologically)?" To which you responded: "I'm saying it's still unclear to scientists what the Schrodinger equation means ontologically, hence one reason for the differing interpretations." So I said: "Okay, but above you'd said the equation is deterministic. You're saying that it's deterministic in some non-ontological sense? What in the world would that even mean?" Either the equation is or isn't clear semantically in terms of ontological implications. If you say that it's clear, then for one, you're disagreeing with Ian Taylor (Ph.D., Theoretical Physics (Cambridge), PhD (Durham)), for example--which is fine, you don't have to agree with him just because he's a physics professor, of course, but you agreed by saying "I'm saying it's still unclear to scientists what the Schrodinger equation means ontologically" (well, unless you meant that it's still unclear to scientists, but not to you). So on your view that the equation is clear(ly) deterministic, but not with respect to ontology, I'm asking you how it could be clearly deterministic non-ontologically. That's not the only issue that will arise here--if you were saying that it's clear with respect to ontology, rather, then one thing we'd do is examine how any mathematical formula can be clear with respect to ontology in general. But we can get there later if we need to.
Post by Eva Yojimbo on Jun 22, 2017 15:20:30 GMT
LOL, that dice example is one I've used before to explain how the appearance of quantum randomness could aggregate to 'deterministic means' on larger scales! It's also an example I didn't copy from anyone so I'm wondering if you accidentally copied it from me! (Of course, it's entirely possible someone else has used the example and I just don't know about it!)

Haha, well I did think when I was writing it "Hmm, I'm talking about dice probabilities to a professional gambler with an interest in Bayesian probability. There's a pretty good chance he's already considered this!" I don't think I took this from you though, I don't think we've ever discussed this before. It's possible I saw you mention it to someone else but I have a feeling I heard a dice analogy some time ago. But if I did nick it from you and forgot, apologies!

It's all good. I'm guessing I'm not the first to think of that analogy, but I knew I'd used it on IMDb before, not in talking with you but with others.

I don't think I'd say that Occam is really for finding "truth," it's more for determining which theories/interpretations require evidence to consider over others. I mean, for any phenomenon the number of theories we could concoct to explain it is limited only by our imagination, and it's entirely possible to take well-understood phenomena (like, say, electricity) and then just add stuff to it that's untestable and makes no practical difference. Occam essentially says that we ignore those "added on" theories/interpretations until there is additional evidence that supports them. In the case of QM, MWI should be the default interpretation and the ones like CI that are adding to the formalism should require more evidence before we seriously consider them.

While I can sympathize with yours and most physicists' pragmatic stance, I do think this issue is a good illustration of where science runs afoul of rationality. I mean, there's no rational justification for why CI became the "standard" interpretation to begin with, and for why it's just been implicitly assumed ever since. It's the primary culprit for why QM has its "weird" reputation, when it's really the interpretation (and the minds that generated it) that were "weird." Sure, in this case it makes no pragmatic difference, but the anti-rational mind-set that gave precedence to the Occam-violating interpretation could most certainly have pragmatic consequences in other areas. It's just a very clear way of demonstrating the problem.

Which is?
Post by Eva Yojimbo on Jun 22, 2017 15:40:09 GMT
I don't get why you're asking this. I had asked this: "So you don't agree that it wasn't clear what the Schrodinger equation meant (ontologically)?" To which you responded: "I'm saying it's still unclear to scientists what the Schrodinger equation means ontologically, hence one reason for the differing interpretations." So I said: "Okay, but above you'd said the equation is deterministic. You're saying that it's deterministic in some non-ontological sense? What in the world would that even mean?" Either the equation is or isn't clear semantically in terms of ontological implications. If you say that it's clear, then for one, you're disagreeing with Ian Taylor (Ph.D., Theoretical Physics (Cambridge), PhD (Durham)), for example--which is fine, you don't have to agree with him just because he's a physics professor, of course, but you agreed by saying "I'm saying it's still unclear to scientists what the Schrodinger equation means ontologically" (well, unless you meant that it's still unclear to scientists, but not to you). So on your view that the equation is clear(ly) deterministic, but not with respect to ontology, I'm asking you how it could be clearly deterministic non-ontologically. That's not the only issue that will arise here--if you were saying that it's clear with respect to ontology, rather, then one thing we'd do is examine how any mathematical formula can be clear with respect to ontology in general. But we can get there later if we need to. I'm trying to think of how to make the distinction I'm making clearest... If you just take the equation itself it's as deterministic as those in General Relativity. That the equation itself is deterministic doesn't prevent anyone from interpreting it in an indeterministic way. They do this by assuming that what it describes (the superpositioning of particles) is unreal, and that observation causes collapse of that superpositioning: the probabilities of collapse are determined by the wave equation, but the singular outcome of any given measurement appears random. This assumption plus added collapse postulate renders the equation, or at least the phenomena of what it refers to, as indeterministic because we're essentially changing what it is. When I'm talking about the ontology of the equation I'm really talking about the ontology of what it refers to, namely the superpositioning of particles and how it evolves over time. I think I've covered the reasons for why it's mostly treated as indeterministic. Remember the gunpowder example I provided? Would you describe it as deterministic, indeterministic, or both?
Post by Eva Yojimbo on Jun 22, 2017 15:45:31 GMT
Even if QM could be proven to be compatible with a deterministic universe, collecting information to predict future results would be so utterly impractical that it is effectively meaningless. In order to to gather the necessary state information to accurately predict the future, one would have to use more material to model the thing in question than the thing itself. To model the universe would require far more atoms than are contained in the universe. Some definitions of randomness (e.g. Kolmogorov randomness) are based on the idea that a string is random if it is shorter than any computer program that can produce that string. QM isn't compatible with A (emphasis on the singular a) deterministic universe, but it's compatible with a deterministic multiverse. The problem is that we want the determinism to be contained to the singular universe we find ourselves in when that's simply not the universe(s) that the deterministic wave function is describing. It's trying to fit a square peg into a round hole. As long as we're just interested in one state of the wave function, basically meaning whatever universe "we" happen to find ourselves in, it's never going to be deterministic in the way General Relativity is. Sadly, we can't help being victims of conservation laws. Never heard that definition of randomness though. Interesting.
Post by The Lost One on Jun 22, 2017 16:50:52 GMT
While I can sympathize with yours and most physicists' pragmatic stance, I do think this issue is a good illustration of where science runs afoul of rationality. I mean, there's no rational justification for why CI became the "standard" interpretation to begin with, and for why it's just been implicitly assumed ever since. It's the primary culprit for why QM has its "weird" reputation, when it's really the interpretation (and the minds that generated it) that were "weird." You don't think many worlds and hidden properties are also kinda weird in their own way? Was it anti-rational though? My understanding is CI was the best Bohr and Heisenberg could come up with and the naysayers were hard pressed to find anything better at the time so it became the default. No idea! That's kinda my point. Descartes didn't realise he'd committed a fallacy and he was undeniably a very smart guy. Maybe we're all committing some unknown fallacy and aren't smart enough to realise.
Post by Eva Yojimbo on Jun 22, 2017 17:20:30 GMT
While I can sympathize with yours and most physicists' pragmatic stance, I do think this issue is a good illustration of where science runs afoul of rationality. I mean, there's no rational justification for why CI became the "standard" interpretation to begin with, and for why it's just been implicitly assumed ever since. It's the primary culprit for why QM has its "weird" reputation, when it's really the interpretation (and the minds that generated it) that were "weird."

You don't think many worlds and hidden properties are also kinda weird in their own way?

Just to be clear, there are no hidden properties in MWI. The standard rationalist view is that reality is not "weird." When it seems weird to us it's really because WE'RE weird, that we've evolved in a way that runs askew to reality, and that we cling to faulty intuitions even when they clearly seem to be wrong. When you really look at QM I think MWI is the only interpretation that isn't weird in light of the facts and models we have. It reminds me of what Yudkowsky said in the concluding post to his QM sequence (lesswrong.com/lw/q8/many_worlds_one_best_guess/) about Special Relativity:

"Special Relativity seems counterintuitive to us humans—like an arbitrary speed limit, which you could get around by going backward in time, and then forward again. A law you could escape prosecution for violating, if you managed to hide your crime from the authorities. But what Special Relativity really says is that human intuitions about space and time are simply wrong. There is no global "now", there is no "before" or "after" across spacelike gaps. The ability to visualize a single global world, even in principle, comes from not getting Special Relativity on a gut level. Otherwise it would be obvious that physics proceeds locally with invariant states of distant entanglement, and the requisite information is simply not locally present to support a globally single world."

It's more anti-rational that it's remained default even after Everett proposed the simpler interpretation over 60 years ago. When the simpler interpretation got proposed, CI should've been abandoned until there was more evidence, but that didn't happen. Even at the time I think Einstein's theory of hidden variables was probably the slightly more rational/likely notion (at least it seems to me more likely that there could be something we don't know accounting for the appearance of randomness than that there's actual randomness that makes the model incompatible with General Relativity).

Maybe, but maybe it's time for me to take the pragmatic stance on this notion.
Post by The Lost One on Jun 22, 2017 20:00:33 GMT
Just to be clear, there are no hidden properties in MWI. Oh I know. I meant hidden properties as another alternative to CI apart from MWI. But wouldn't that be equally true for why we find CI weird? I don't quite get why MWI is less weird. Hidden variables theories strike me as the least weird even if they are less simple. Ok, but is that really that worrying? Hardly anyone understands quantum physics at all and many of those that do have no interest in the underlying metaphysics. Is there likely to be an outbreak of anti-rationality as a result of the few who favour CI?
Post by Eva Yojimbo on Jun 23, 2017 14:42:13 GMT
But wouldn't that be equally true for why we find CI weird? I don't quite get why MWI is less weird. Hidden variables theories strike me as the least weird even if they are less simple.

I'd say there are two very different sources of "weird" for both interpretations. For CI, it's weird because it violates a number of known physical laws and is incompatible with the other best model of physics we have. When you ask why it's allowed to do this, the answers are really no better than "it's magic!" For MWI, it's the notion that everything is in superpositioned states that decohere across different worlds. This seems weird because it doesn't feel like we're in multiple states/worlds. However, if you really think about it, what SHOULD it feel like to be in a superpositioned state decohering across multiple worlds? Why would it feel any differently than what we experience now? The obvious answer is that there's no way to tell. Much like with, say, the notion of Geocentrism or a flat earth, it seems our intuitions about a single world are simply wrong. Plus, when you really understand the QM models and what the interpretations assume, MWI actually doesn't seem weird at all; it seems like the only sensible option until additional evidence is produced for any of the more complex interpretations.

First, physicists understand QM extremely well; it's just the philosophy they don't understand. Second, the point I'm making is that the philosophy they don't understand is basic rationality, things like the primacy of Occam's Razor in such situations. I could even argue that they don't know when they accidentally slip into philosophy (the entire quantum non-realism position is really a metaphysical position, and a rather silly one IMO). Pragmatically, these mistakes don't make any difference in the ability of physicists to make use of QM, but I think it's foolish to assume that this same ignorance can't, couldn't, or doesn't affect other scientists in other fields of study. It's just a good illustration for why Bayesian rationality should be complementing traditional, empirical-based scientific inquiry.
Post by general313 on Jun 23, 2017 15:32:03 GMT
But wouldn't that be equally true for why we find CI weird? I don't quite get why MWI is less weird. Hidden variables theories strike me as the least weird even if they are less simple.

I'd say there are two very different sources of "weird" for both interpretations. For CI, it's weird because it violates a number of known physical laws and is incompatible with the other best model of physics we have. When you ask why it's allowed to do this, the answers are really no better than "it's magic!" For MWI, it's the notion that everything is in superpositioned states that decohere across different worlds. This seems weird because it doesn't feel like we're in multiple states/worlds. However, if you really think about it, what SHOULD it feel like to be in a superpositioned state decohering across multiple worlds? Why would it feel any differently than what we experience now? The obvious answer is that there's no way to tell. Much like with, say, the notion of Geocentrism or a flat earth, it seems our intuitions about a single world are simply wrong. Plus, when you really understand the QM models and what the interpretations assume, MWI actually doesn't seem weird at all; it seems like the only sensible option until additional evidence is produced for any of the more complex interpretations.

First, physicists understand QM extremely well; it's just the philosophy they don't understand. Second, the point I'm making is that the philosophy they don't understand is basic rationality, things like the primacy of Occam's Razor in such situations. I could even argue that they don't know when they accidentally slip into philosophy (the entire quantum non-realism position is really a metaphysical position, and a rather silly one IMO). Pragmatically, these mistakes don't make any difference in the ability of physicists to make use of QM, but I think it's foolish to assume that this same ignorance can't, couldn't, or doesn't affect other scientists in other fields of study. It's just a good illustration for why Bayesian rationality should be complementing traditional, empirical-based scientific inquiry.

I think part of what makes it hard to accept MWI is the colossal number of parallel universes required. If every particle in the universe is in a quantum superposition, it would seem on the surface that there would need to be A raised to the power B parallel universes, where A is the number of superimposed states per particle and B is the number of particles in the universe. Intuition would suggest that each duplication would require duplication of resources (especially energy), but I suppose conservation laws could be consistent with superimposed states.

Going off on a tangent, I wonder if there's a variant of the anthropic principle where a person finds themselves alive in a particular universe because in the ones where they're dead they're simply not conscious and therefore unable to think about it. A sign of that might be if you one day notice that you are improbably old, the oldest person in the world, and by strokes of fortune are still alive. Solipsist immortality.
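To put a rough number on that intuition (the A and B below are made-up toy values; this is just the counting argument, not a claim about how MWI actually tracks branches):

```python
# A**B joint configurations for A states per particle and B particles.
A, B = 2, 300   # hypothetical: 2 states per particle, a mere 300 particles
branches = A ** B
print(branches)            # 2**300, roughly 2e90
print(len(str(branches)))  # 91 digits -- already dwarfing the ~10**80 atoms usually estimated for the observable universe
```

Of course, whether simple state-counting like this is even the right way to think about the branches is part of what gets debated below.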
Post by Eva Yojimbo on Jun 23, 2017 15:55:05 GMT
I'd say there are two very different sources of "weird" for both interpretations. For CI, it's weird because it violates a number of known physical laws and is incompatible with the other best model of physics we have. When you ask why it's allowed to do this, the answers are really no better than "it's magic!" For MWI, it's the notion that everything is in superpositioned states that decohere across different worlds. This seems weird because it doesn't feel like we're in multiple states/worlds. However, if you really think about it, what SHOULD it feel like to be in a superpositioned state decohering across multiple worlds? Why would it feel any differently than what we experience now? The obvious answer is that there's no way to tell. Much like with, say, the notion of Geocentrism or a flat earth, it seems our intuitions about a single world are simply wrong. Plus, when you really understand the QM models and what the interpretations assume, MWI actually doesn't seem weird at all; it seems like the only sensible option until additional evidence is produced for any of the more complex interpretations. First, physicists understand QM extremely well; it's just the philosophy they don't understand. Second, the point I'm making is that the philosophy they don't understand is basic rationality, things like the primacy of Occam's Razor in such situations. I could even argue that they don't know when they accidentally slip into philosophy (the entire quantum non-realism position is really a metaphysical position, and a rather silly one IMO). Pragmatically, these mistakes don't make any difference in the ability of physicists to make use of QM, but I think it's foolish to assume that this same ignorance can't, couldn't, or doesn't affect other scientists in other fields of study. It's just a good illustration for why Bayesian rationality should be complementing traditional, empirical-based scientific inquiry.

I think part of what makes it hard to accept MWI is the colossal number of parallel universes required. If every particle in the universe is in a quantum superposition, it would seem on the surface that there would need to be A raised to the power B parallel universes, where A is the number of superimposed states per particle and B is the number of particles in the universe. Intuition would suggest that each duplication would require duplication of resources (especially energy), but I suppose conservation laws could be consistent with superimposed states.

This argument is a bit like saying we can argue against evolutionary theory because it would produce a colossal number of different species over time. If we couldn't observe all the other species but the theory seemed to account for genetic change across generations, would there be any reason not to accept the consequences? The different species/other worlds are just a complex consequence of the simplest theory. As Sean Carroll notes in the article I posted, quantum states are found in Hilbert Space rather than the phase space of Classical Mechanics, and there's no reason Hilbert Space couldn't accommodate a near infinity of worlds. As for the energy required, that seems to be there in the superposition itself.

It seems there are only three ways to escape the consequence of many worlds: assume a stochastic collapse, rendering QM essentially like magic that conflicts with Relativity while breaking a number of known physical/logical laws; assume the quantum state is non-real--but what does that even mean, exactly? With that argument, all you've done is invent a new ontology: now we have non-real entities and real entities, and you haven't really accomplished anything by proposing it (other than easing the anxiety about the consequences of the multiple quantum states). The third way would be to assume that, at some level, particles aren't in a state of superposition anymore; but we've increasingly observed the superpositioning at larger and larger levels. Last I've read it was a molecule of 108 atoms, or 2,424 particles, observed interfering with itself in the double-slit experiment. Plus, as the Physicist notes in that article, if the size argument holds it would be the first of its kind.

This is the idea behind the notion of quantum suicide and immortality.
Post by general313 on Jun 23, 2017 16:09:54 GMT
I think part of what makes it hard to accept MWI is the collosal numbers of parallel universes required. If every particle in the universe is in a quantum superposition, it would seem on the surface that there would need to be A raised to the power B parallel universe, where A is the number of superimposed states per particle and B is the number of particles in the universe. Intuition would suggest that each duplication would require duplication of resources (especially energy), but I suppose conservation laws could be consistent with superimposed states. This argument is a bit like saying we can argue against evolutionary theory because it would produce a colossal number of different species over time. Even if we couldn't observe all the other species but the theory seemed to account for genetic change across generations, would there be any reason not to accept the consequences? The different species/other worlds are just a complex consequence of the simplest theory. As Sean Carroll notes in the article I posted, quantum states are found in Hilbert Space rather than the phase space of Classical Mechanics, and there's no reason Hibert Space couldn't accommodate a near infinity of worlds. As for the energy required, that seems to be there in the superposition itself. It seems there are only three ways to escape the consequence of many worlds: assume a stochastic collapse, rendering QM essentially like magic that conflicts with Relativity while breaking a number of known physical/logical laws; assume the quantum state is non-real--but what does that even mean, exactly? With that argument, all you've done is invent a new ontology: now we have non-real entities and real entities, and you haven't really accomplished anything by proposing it (other than easing the anxiety about the consequences of the multiple quantum states). The third way would be to assume that, at some level, particles aren't in a state of superposition anymore; but we've increasingly observed the superpositoning at larger and larger levels. Last I've read it was 108 atoms, or 2,424 particles, observed interfering with itself in the double-slit experiment. Plus, as the Physicist notes in that article, if the size argument holds it would be the first of its kind. This is the idea behind the notion of quantum suicide and immortality. I wasn't really stating it as an argument so much as an explanation of why physicists were (at least initially) resistant to Everett's interpretation. A similar explanation might be made as to why people initially resisted the idea of a heliocentric solar system. The earth feels so solid that, to the ancient mind it must have seemed very counter-intuitive to imagine that the earth was flying in space around the sun.
Post by heeeeey on Jun 24, 2017 13:59:16 GMT
Yes.
Post by permutojoe on Jun 24, 2017 16:00:33 GMT
This has been a good read. QM presented in a way that a layman can kind of understand is not all that common, especially on a message board. Thanks to all who have contributed and chalk one up for IMDB2.