Salman2
Any comments (pro-con) on this Quantum Bayesian interpretation of QM by Fuchs & Schack?
http://arxiv.org/pdf/1301.3274.pdf
Salman2 said: Any comments (pro-con) on this Quantum Bayesian interpretation of QM by Fuchs & Schack?
http://arxiv.org/pdf/1301.3274.pdf
wayseeker said: The discussion of QBism poses epistemological and semantic problems for the reader. ...
DrChinese said:Are you familiar with the PBR theorem? Although I can't say I fully understand the examples in the OP's QBism paper, it seems to flow directly opposite to PBR. One says the wave function maps directly to reality, the other says it does not.
DiracPool said:Yes, I'd also like to know if anyone has any insights on this new model, dubbed the "QBism" model. The general idea is that the quantum wave function does NOT represent any actuality in the real physical world.
vanhees71 said: So for the practice of physics the Bayesian view of probabilities is simply pointless, because it doesn't tell us anything about the outcome of real experiments.
vanhees71 said: The Bayesian view, for me, is just a play with words, trying to give a physically meaningful interpretation of probability for a single event. In practice, however, you cannot prove anything about a probabilistic statement by looking at only a single event.
bhobba said: I think it goes beyond physics. My background is in applied math and it invariably uses the frequentist interpretation, which is basically the same as the ensemble interpretation. To me this Bayesian stuff seems just a play on words as well.
stevendaryl said: The frequentist interpretation really doesn't make any sense, to me. As a statement about ensembles, it doesn't make any sense, either. If you perform an experiment, such as flipping a coin, there is no guarantee that the relative frequency approaches anything at all in the limit as the number of coin tosses goes to infinity. Furthermore, since we don't really ever do things infinitely often, what can you conclude, as a frequentist, from 10 trials of something? Or 100? Or 1000? You can certainly dutifully write down the frequency, but every time you do another trial, that number is going to change, by a tiny amount. Is the probability changing every time you perform the experiment?
It's not at all easy to define the concept of probability. If you ask most people, a coin has probability 1/2 of landing heads up if, when you flip it a large number of times, it lands heads up close to half the time. But this is fatally vague!
After all, what counts as a "large number" of times? And what does "close to half" mean? If we don't define these concepts precisely, the above definition is useless for actually deciding when a coin has probability 1/2 of landing heads up!
Say we start flipping a coin and it keeps landing heads up, as in the play Rosencrantz and Guildenstern are Dead by Tom Stoppard. How many times does it need to land heads up before we decide that this is not happening with probability 1/2? Five? Ten? A thousand? A million?
This question has no good answer. There's no definite point at which we become sure the probability is something other than 1/2. Instead, we gradually become convinced that the probability is higher. It seems ever more likely that something is amiss. But, at any point we could turn out to be wrong. We could have been the victims of an improbable fluke.
Note the words "likely" and "improbable". We're starting to use concepts from probability theory - and yet we are in the middle of trying to define probability! Very odd. Suspiciously circular.
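A quick way to quantify that creeping suspicion, under the hypothesis that the coin really is fair: a run of n consecutive heads has probability (1/2)^n, so the run only ever becomes improbable, never impossible. A minimal sketch follows (the particular values of n are arbitrary, chosen just for illustration); it also makes the circularity explicit, since calling the run "improbable" already uses the probability assignment being questioned.

```python
# Probability of n consecutive heads under the assumption the coin is fair (p = 1/2).
# Note the circularity: labelling a long run "improbable" already presupposes
# the very probability assignment we are trying to justify.
for n in (5, 10, 20, 100):
    p_run = 0.5 ** n
    print(f"{n:>3} heads in a row: probability {p_run:.3e}")
```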
Some people try to get around this as follows. They say the coin has probability 1/2 of landing heads up if over an infinite number of flips it lands heads up half the time. There's one big problem, though: this criterion is useless in practice, because we can never flip a coin an infinite number of times!
Ultimately, one has to face the fact that probability cannot be usefully defined in terms of the frequency of occurrence of some event over a large (or infinite) number of trials. In the jargon of probability theory, the frequentist interpretation of probability is wrong.
Note: I'm not saying probability has nothing to do with frequency. Indeed, they're deeply related! All I'm saying is that we can't usefully define probability solely in terms of frequency.
You should really invest a little time into it. The Bayesian approach to probability is more in line with the scientific method than the frequentist approach.
vanhees71 said: Well, maybe I'm just too biased by my training as a physicist to make sense of the whole Bayesian interpretation of probabilities. ...
So for the practice of physics the Bayesian view of probabilities is simply pointless, because it doesn't tell us anything about the outcome of real experiments.
bhobba said: My background is in applied math and it invariably uses the frequentist interpretation, which is basically the same as the ensemble interpretation.
DaleSpam said: You should really invest a little time into it. The Bayesian approach to probability is more in line with the scientific method than the frequentist approach.
DaleSpam said:First, I don't know enough QM to have any opinion on interpretations of QM, but I do use Bayesian statistics in other things (e.g. analysis of medical tests)
You should really invest a little time into it. The Bayesian approach to probability is more in line with the scientific method than the frequentist approach.
stevendaryl said: The frequentist interpretation really doesn't make any sense, to me. As a statement about ensembles, it doesn't make any sense, either. If you perform an experiment, such as flipping a coin, there is no guarantee that the relative frequency approaches anything at all in the limit as the number of coin tosses goes to infinity. Furthermore, since we don't really ever do things infinitely often, what can you conclude, as a frequentist, from 10 trials of something? Or 100? Or 1000? You can certainly dutifully write down the frequency, but every time you do another trial, that number is going to change, by a tiny amount. Is the probability changing every time you perform the experiment?
DaleSpam said:Most likely, your preference for frequentist statistics is simply a matter of familiarity, born of the fact that the tools are well-developed and commonly-used. This seems to be the case for bhobba also.
bhobba said: The law of large numbers is rigorously provable from the axioms of probability.
What it says is if a trial (experiment or whatever) is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified outcome occurs approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be.
This guarantees that a sufficiently large, but finite, number of trials exists (i.e. an ensemble) that for all practical purposes contains the outcomes in proportion to their probabilities.
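A minimal simulation of bhobba's point, assuming a fair coin and using numpy's default generator (the trial counts are arbitrary): the observed relative frequency drifts toward the single-trial probability as the number of independent flips grows, though it still fluctuates at every finite n, which is the loose end stevendaryl picks at below.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed so the sketch is reproducible
p_heads = 0.5                    # assumed single-trial probability

for n in (10, 100, 1_000, 10_000, 100_000):
    flips = rng.random(n) < p_heads   # True = heads
    freq = flips.mean()               # observed relative frequency of heads
    print(f"n = {n:>6}: relative frequency = {freq:.4f}, deviation = {abs(freq - p_heads):.4f}")
```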
stevendaryl said:But the meaning of "tends to" is the part that makes no sense, under a frequentist account. What does that mean? It is possible, when flipping coins, to have a run of 1,000 flips in a row that are all heads. It is possible to have a run of 1,000,000 flips in a row with all heads. So what does this "tends to" mean? Well, you can say that such a run of heads is unlikely, but according to what meaning of "unlikely"?
Yes, I have also, particularly with respect to the wide variety of powerful software with pre-packaged standard statistical tests. Personally, I think that the hard-core Bayesians need to spend less time promoting their viewpoint and more time developing and standardizing their tools.
bhobba said: I always found the frequentist interpretation more than adequate.
DaleSpam said:Yes, I have also, particularly with respect to the wide variety of powerful software with pre-packaged standard statistical tests. Personally, I think that the hard-core Bayesians need to spend less time promoting their viewpoint and more time developing and standardizing their tools.
DaleSpam said:First, I don't know enough QM to have any opinion on interpretations of QM, but I do use Bayesian statistics in other things (e.g. analysis of medical tests)
You should really invest a little time into it. The Bayesian approach to probability is more in line with the scientific method than the frequentist approach.
In the scientific method you formulate a hypothesis, then you acquire data, then you use that data to decide to keep or reject your hypothesis. In other words, you want to determine the likelihood of the hypothesis given the data, which is exactly what Bayesian statistics calculates. Unfortunately, frequentist statistical tests simply don't measure that. Instead they calculate the likelihood of the data given the hypothesis.
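A concrete sketch of that distinction, using the medical-test setting mentioned above (the numbers are hypothetical, chosen only for illustration): the frequentist quantities are P(positive | disease) and P(positive | healthy), while what one actually wants is P(disease | positive), which needs a prior (the prevalence) and Bayes' theorem.

```python
# Hypothetical numbers for a screening test, for illustration only.
prevalence  = 0.01   # prior P(disease)
sensitivity = 0.95   # P(positive | disease)  -- likelihood of the data given the hypothesis
false_pos   = 0.05   # P(positive | healthy)

# Bayes' theorem: P(disease | positive) = P(pos | disease) * P(disease) / P(pos)
p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
posterior  = sensitivity * prevalence / p_positive

print(f"P(positive | disease) = {sensitivity:.2f}")
print(f"P(disease | positive) = {posterior:.3f}")   # about 0.16 with these numbers
```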
I think that the big problem with Bayesian statistics right now is the lack of standardized tests. If you say "my t-test was significant with p=0.01" then everyone understands what mathematical test you ran on your data and what you got. There is no corresponding "Bayesian t-test" that you can simply report and expect everyone to know what you did.
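For contrast, the frequentist report really is standardized: the two lines below are enough for anyone to know exactly which calculation was run. A minimal sketch using scipy, with made-up data; there is no equally conventional one-line Bayesian counterpart to report, which is the point being made above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=0.3, scale=1.0, size=50)   # made-up sample, for illustration

# One-sample t-test of the null hypothesis "the population mean is 0".
t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")    # reportable simply as "t-test, p = ..."
```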
Most likely, your preference for frequentist statistics is simply a matter of familiarity, born of the fact that the tools are well-developed and commonly-used. This seems to be the case for bhobba also.
vanhees71 said: For me the subjectivistic view of probabilities, as it seems to be used in Bayesian statistics, is not scientific at all.
bhobba said: The meaning of such things lies in a rigorous development of probability. That's how it is proved, and it rests on ideas like almost-sure convergence and convergence in probability.
You are putting the cart before the horse. In the frequentist interpretation you have an ensemble that is the EXPECTED outcome of a very large number of trials - and that's what the law of large numbers converges to.
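For reference, the two senses of convergence mentioned above are usually stated as follows, with $\bar{X}_n$ the relative frequency after $n$ independent, identically distributed trials and $\mu$ the single-trial probability (standard textbook forms):

$$\text{Weak law (convergence in probability):}\quad \lim_{n\to\infty} P\big(|\bar{X}_n - \mu| > \varepsilon\big) = 0 \ \text{ for every } \varepsilon > 0$$
$$\text{Strong law (almost-sure convergence):}\quad P\big(\lim_{n\to\infty} \bar{X}_n = \mu\big) = 1$$

Both statements are themselves probabilistic, which is what stevendaryl's objection below turns on.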
bhobba said: What Stevendaryl seems to be concerned about is that it is only infinitesimally close to it. As I say - each to their own I suppose - but I suspect virtually everyone that applies probability will accept it.
stevendaryl said:"Infinitesimally close" is defined in terms of probability, but a notion of probability that is NOT relative frequency. So frequencies cannot be the core definition of probability.
I don't have any complaints with the use of frequencies as "frequentists" use them. It works for practical purposes, but it's not an actually consistent theory. In that sense, frequentism is sort of like the Copenhagen or "collapse" version of quantum mechanics. It's a set of rules for using probabilities, and it works well enough, but doesn't actually count as a rigorous theory.
stevendaryl said:That's my point---frequentism can't be the basis of a rigorous development of probability.
vanhees71 said: Perhaps it would help me to understand the Bayesian view, if you could explain how to test a probabilistic theoretical statement empirically from this point of view. Is there a good book for physicists to understand the Bayesian point of view better?
bhobba said: It isn't - the Kolmogorov axioms are. And that is what rigorous treatments use - not the frequentist interpretation. But the law of large numbers shows that for all practical purposes the frequentist interpretation is equivalent to it.
However, I am taken back to what I was warned about all those years ago - you wouldn't read the tomes based on it - and in my experience that is very true. Even Feller's book, especially volume 2, is hard going, and believe me there are worse than that about.
Thanks
Bill
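For reference, the Kolmogorov axioms bhobba refers to can be stated compactly (standard textbook form): for a sample space $\Omega$ with a σ-algebra of events $\mathcal{F}$,

$$P(A) \ge 0 \ \text{ for all } A \in \mathcal{F}, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \ \text{ for pairwise disjoint } A_i$$

Relative frequencies appear nowhere in the axioms themselves; connecting them to frequencies is exactly the job the law of large numbers, and the frequentist interpretation, are being asked to do.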
stevendaryl said:Okay, I guess if by "frequentism" you mean a particular methodology for using probabilities, then I don't have any big problems with it. But if it's supposed to explain the meaning of probabilities, I don't think it can actually do that, because you have to already have a notion of probability in order to connect relative frequencies to probabilities.