Bayesian Statistics Explained: Why Guess Tails After Getting Heads?

  • Thread starter: Winzer
  • Tags: Statistics
In summary, Bayesian statistics takes into account subjective probability and allows for the estimation of likelihood based on data, rather than assumptions about underlying distributions. However, this does not mean that prior flips of a coin influence the outcome of the next flip, as both frequentist and Bayesian inference assume independent events. The advantage of Bayesian inference lies in cases where assumptions about underlying distributions are weak, and it is not recommended to make inferences based on one flip of a coin.
  • #1
Winzer
Suppose I have a regular quarter and I had to guess heads or tails. I have a 50% chance of getting heads or tails. After I flip it, say I get the result: heads. If it is to be flipped a second time, classically I would say I still have a 50% chance of getting heads or tails. However, from Bayesian statistics I was told that I should lean more towards tails.
Why?

Apparently Bayesian statistics accounts for subjective probability. Having been born and raised on classical stats, I feel "subjective" and "probability" should not go together. Could someone give me some clear reasoning on Bayesian stats?
 
  • #2
Are you sure that's what it says?
Then how would you calculate the probability of getting heads again immediately after getting a head?
 
  • #3
Winzer said:
Suppose I have a regular quarter and I had to guess heads or tails. I have a 50% chance of getting heads or tails. After I flip it, say I get the result: heads. If it is to be flipped a second time, classically I would say I still have a 50% chance of getting heads or tails. However, from Bayesian statistics I was told that I should lean more towards tails.
Why?

No. Bayesian statistics does not say that prior flips of a coin influence the outcome of the next coin flip. These are assumed to be independent events under both frequentist and Bayesian inference. There's a lot of misunderstanding about this.

First, Bayes' theorem is a statement about conditional probability; it is not, by itself, what is called Bayesian statistics.

So-called Bayesian statistics is really about the concept of statistical likelihood. A likelihood (L) is derived from probabilities but is not itself a probability: it ranges over all positive real numbers, whereas probabilities range over the closed interval [0, 1]. In practice, ln L and likelihood ratios are used.

The important difference between frequentist inference and Bayesian inference is that in the former, the distribution is assumed and the probability of the data is estimated under this assumption. In Bayesian inference the likelihood of a distribution is estimated given the data. This means that maximum likelihood estimation (MLE) is robust for any underlying distribution whereas frequentist inference is not.
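To make the distinction concrete, here is a minimal Python sketch (not from the original post; the counts are hypothetical) comparing the fixed-assumption view with the likelihood view for a run of coin flips:

```python
# A minimal sketch contrasting an assumed p = 0.5 model with the
# likelihood view, for hypothetical coin-flip data.
import math

def log_likelihood(p, heads, flips):
    """Binomial log-likelihood ln L(p) for `heads` heads in `flips` tosses."""
    return heads * math.log(p) + (flips - heads) * math.log(1 - p)

heads, flips = 7, 10

# Frequentist-style check: assume p = 0.5 and ask how probable the data are.
assumed = log_likelihood(0.5, heads, flips)

# Likelihood view: treat p as unknown and compare candidate values.
for p in [0.3, 0.5, 0.7, 0.9]:
    # Likelihood ratio of candidate p against the assumed p = 0.5 model.
    ratio = math.exp(log_likelihood(p, heads, flips) - assumed)
    print(f"p = {p}: ln L = {log_likelihood(p, heads, flips):.3f}, LR vs 0.5 = {ratio:.2f}")

# The maximum likelihood estimate is simply the sample proportion.
print("MLE:", heads / flips)
```

Here the MLE is just the sample proportion, and the fair-coin model is evaluated only as one candidate among others.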
 
Last edited:
  • #4
SW VandeCarr said:
No. Bayesian statistics does not say that prior flips of a coin influence the outcome of the next coin flip. These are assumed to be independent events under both frequentist and Bayesian inference. There's a lot of misunderstanding about this.
An unfair coin is often used as an example in Bayesian statistics.

The joke is that after 50 heads a frequentist still believes that the next flip has a 50:50 chance of being tails.
While a Bayesian at least starts to suspect he is dealing with a rigged coin!
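
A minimal sketch, in Python, of what that suspicion looks like in numbers, assuming a uniform Beta(1, 1) prior over the coin's heads probability (the prior choice is an assumption for illustration, not something stated in the thread):

```python
# Beta-Binomial update after 50 straight heads, starting from a
# uniform Beta(1, 1) prior over the heads probability.
heads, tails = 50, 0
alpha, beta = 1 + heads, 1 + tails       # conjugate posterior is Beta(51, 1)

p_next_heads = alpha / (alpha + beta)    # posterior predictive for heads
print(f"{p_next_heads:.3f}")             # ~0.981, versus the assumed-fair 0.500
```

The frequentist who keeps assuming a fair coin still answers 0.5; the Bayesian's posterior predictive has drifted almost entirely to heads.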
 
  • #5
mgb_phys said:
An unfair coin is often used as an example in Bayesian statistics.

The joke is that after 50 heads a frequentist still believes that the next flip has a 50:50 chance of being tails.
While a Bayesian at least starts to suspect he is dealing with a rigged coin!

That is a joke. Unfortunately, many people believe it. With either type of inference, assumptions need to be made regarding independent events. However, if you don't make this assumption, the data is the basis for inference under MLE, not a presumed underlying distribution.
 
Last edited:
  • #6
SW VandeCarr said:
That is a joke. Unfortunately, many people believe it. With either type of inference, assumptions need to be made regarding independent events.
But the whole point of Bayesian inference is that it's for when you don't know the underlying distribution - like most of science!
 
  • #7
mgb_phys said:
But the whole point of Bayesian inference is that it's for when you don't know the underlying distribution - like most of science!

I edited the post you're responding to. I agree, but in some cases you can get into trouble. In the coin flip example the presumed underlying distribution is a uniform p = 0.5 for H or T. This is widely accepted. Deviations from this are to be expected. How much deviation is acceptable? There is no particular advantage to using Bayesian inference here, as the frequentist will notice when the results deviate significantly from expectations. The real advantage of Bayesian inference comes where the assumption of a particular underlying distribution is weak.
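
For illustration, a minimal sketch (the counts are hypothetical) of how the frequentist notices such a deviation, using an exact binomial tail probability under the fair-coin hypothesis:

```python
# Exact binomial upper-tail probability under H0: p = 0.5.
from math import comb

def upper_tail(heads, flips, p=0.5):
    """P(X >= heads) for X ~ Binomial(flips, p)."""
    return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
               for k in range(heads, flips + 1))

# 50 heads out of 50 flips: vanishingly unlikely under a fair coin.
print(upper_tail(50, 50))   # 0.5**50 ~ 8.9e-16

# A milder run, 15 heads in 20 flips, is unusual but less decisive.
print(upper_tail(15, 20))   # ~0.021
```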

Edit: What I was particularly objecting to was the suggestion to the OP that if the first toss was heads, the Bayesian would say that the probability slightly favored tails on the second toss. This is simply wrong. If you were to make a Bayesian inference regarding n coin tosses, the inference would be based on the outcome of n tosses. If you were dumb enough to make an inference based on one toss that came out heads, Bayesian inference would favor heads again.
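
A minimal sketch of that last point, assuming a uniform prior over the coin's bias (an assumption made for illustration; the post does not specify one). With a Beta(1, 1) prior, one observed head gives a posterior predictive of 2/3 for heads, via Laplace's rule of succession:

```python
# Posterior predictive after a single head, uniform Beta(1, 1) prior.
heads, tails = 1, 0
alpha, beta = 1 + heads, 1 + tails   # posterior is Beta(2, 1)

# Laplace's rule of succession: P(next = heads) = (heads + 1) / (flips + 2)
p_next_heads = alpha / (alpha + beta)
print(p_next_heads)   # 2/3: the one observed head tilts the prediction toward heads
```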
 
Last edited:

Related to Bayesian Statistics Explained: Why Guess Tails After Getting Heads?

1. What is Bayesian statistics?

Bayesian statistics is a statistical approach that uses probability to quantify uncertainty in the results of a hypothesis or experiment. It is based on Bayes' theorem, which describes how to update our beliefs about a hypothesis as we gather new evidence.
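
As an illustration (the numbers below are hypothetical, not from this thread), Bayes' theorem applied to the coin example looks like this:

```python
# Bayes' theorem: updating the belief that a coin is double-headed
# after seeing a single head. All numbers are hypothetical.
p_rigged = 0.01                     # prior P(rigged)
p_heads_given_rigged = 1.0          # a double-headed coin always shows heads
p_heads_given_fair = 0.5

# Total probability of the evidence.
p_heads = (p_heads_given_rigged * p_rigged
           + p_heads_given_fair * (1 - p_rigged))

# P(rigged | heads) = P(heads | rigged) * P(rigged) / P(heads)
posterior = p_heads_given_rigged * p_rigged / p_heads
print(f"{posterior:.4f}")   # ~0.0198: one head barely moves a strong prior
```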

2. How is Bayesian statistics different from traditional statistics?

Traditional statistics relies on frequentist methods, which use the long-run frequency of events to make predictions and draw conclusions. Bayesian statistics, on the other hand, incorporates prior knowledge and beliefs into the analysis and updates them as new evidence arrives.

3. Why is it important to consider prior beliefs in Bayesian statistics?

Prior beliefs are important in Bayesian statistics because they provide a starting point for the analysis and can influence the final results. By incorporating prior beliefs, Bayesian statistics can adapt to new evidence and update the beliefs accordingly.

4. What is the role of probability in Bayesian statistics?

Probability is fundamental to Bayesian statistics as it is used to quantify uncertainty and update beliefs. In Bayesian analysis, prior beliefs and new evidence are combined using probability, resulting in a probability distribution that reflects the uncertainty in the results.
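
A minimal sketch of such an update, using a conjugate Beta prior for a coin's heads probability (the prior and data below are made up for illustration):

```python
# Conjugate Beta-Binomial update: prior + evidence -> posterior.
a0, b0 = 5, 5          # prior Beta(5, 5): a mild belief that the coin is fair
heads, tails = 8, 2    # observed evidence

a, b = a0 + heads, b0 + tails   # posterior is Beta(13, 7)

mean = a / (a + b)
sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
print(f"posterior mean {mean:.2f}, sd {sd:.2f}")
# The mean (0.65) sits between the prior mean (0.50) and the sample
# proportion (0.80), weighted by how much evidence has accumulated.
```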

5. How does Bayesian statistics handle complex and large datasets?

Bayesian statistics can handle complex and large datasets by using computational methods such as Markov chain Monte Carlo (MCMC) algorithms. These algorithms allow for efficient sampling from complex probability distributions, making it possible to analyze large datasets and obtain accurate results.
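
A toy example of the idea, assuming a flat prior and hypothetical data: a random-walk Metropolis sampler (a simple MCMC method) targeting the posterior over a coin's heads probability. This is an illustrative sketch, not production MCMC:

```python
# Random-walk Metropolis sampler for a coin's heads probability p,
# with a flat prior and hypothetical data.
import math
import random

random.seed(0)
heads, flips = 140, 200   # hypothetical data: 140 heads in 200 flips

def log_post(p):
    """Log posterior for a flat prior: just the binomial log-likelihood."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return heads * math.log(p) + (flips - heads) * math.log(1.0 - p)

samples, p = [], 0.5
for _ in range(20000):
    proposal = p + random.gauss(0, 0.05)             # random-walk proposal
    delta = log_post(proposal) - log_post(p)
    if random.random() < math.exp(min(0.0, delta)):  # Metropolis accept step
        p = proposal
    samples.append(p)

kept = samples[5000:]   # discard burn-in
print(f"posterior mean ~ {sum(kept) / len(kept):.3f}")   # close to 140/200 = 0.70
```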
