Expected value of a Bernoulli random variable.

In summary, the definition E(X) = p makes sense because the average of many independent trials is close to p by the law of large numbers.
  • #1
kidsasd987
"Let X be a Bernoulli random variable. That is, P(X = 1) = p and P(X = 0) = 1 − p. Then E(X) = 1 × p + 0 × (1 − p) = p. Why does this definition make sense? By the law of large numbers, in n independent Bernoulli trials where n is very large, the fraction of 1’s is very close to p, and the fraction of 0’s is very close to 1 − p. So, the average of the outcomes of n independent Bernoulli trials is very close to 1 × p + 0 × (1 − p)."
I don't understand why it gives the average of 1 × p + 0 × (1 − p).
So, we are given a total of n independent trials. Then, let's say we have k successes and n − k failures.

then, 1*p*k will be our success fraction, and (1-p)(n-k)*0 will be the failure fraction. If we find the average for n trials, it must be pk/n.

How do we get 1 × p + 0 × (1 − p) as our average?
 
  • #2
kidsasd987 said:
then, 1*p*k will be our success fraction, and (1-p)(n-k)*0 will be the failure fraction.
Hi kidsasd:

This is where you went astray. Your success fraction is the number of successes divided by the number of trials, that is, k/n ≈ pn/n = p for large n.

Regards,
Buzz
 
  • #3
Buzz Bloom said:
Hi kidsasd:

This is where you went astray. Your success fraction is the number of successes divided by the number of trials, that is, k/n ≈ pn/n = p for large n.

Regards,
Buzz

Thanks. But I guess the n independent trials have to consist of k successes and n − k failures. Since each trial is independent, we cannot have only successes.

For example, if I toss a fair coin, I'd observe two possible outcomes, heads and tails. If I toss a coin n times, it would not give all heads or all tails. That's what I thought, and why I introduced k. Please correct me where I got this wrong.
 
  • #4
kidsasd987 said:
Thanks. But I guess it (the n independent trials) has to be divided into k successes and n − k failures. Since each trial is independent, we cannot have only successes.
Hi kidsasd:

You said:
kidsasd987 said:
So, we are given a total of n independent trials. Then, let's say we have k successes and n − k failures.

Do you agree that the "success fraction" is the number of successes divided by the number of trials? If so, then what is the number of successes, and what is the number of trials?

Regards,
Buzz
 
  • #5
Buzz Bloom said:
Hi kidsasd:

Do you agree that the "success fraction" is the number of successes divided by the number of trials? If so, then what is the number of successes, and what is the number of trials?

Regards,
Buzz
Isn't the success fraction 1*P(X=1)*k,
where the number of trials = n and the number of successes = k?

avg = {1*P(X=1)*k + P(X=0)*(n-k)*0}/n

Oh, now I see where I went wrong.

So if n is small, the observed fraction can deviate noticeably from p, but that deviation shrinks as we set n to a large number.

Thanks!
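The convergence discussed above can be checked with a quick simulation (not part of the original thread; the probability p = 0.3 and the trial counts are illustrative choices):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def success_fraction(p, n):
    """Simulate n independent Bernoulli(p) trials and return k/n."""
    k = sum(1 for _ in range(n) if random.random() < p)
    return k / n

# The fraction of successes k/n approaches p as n grows (law of large numbers).
p = 0.3
for n in [10, 100, 10_000, 1_000_000]:
    print(n, success_fraction(p, n))
```

For small n the printed fraction can sit well away from 0.3; by n = 1,000,000 it is very close, which is exactly the deviation-shrinking behavior described above.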
 
  • #6
Hi kidsasd:

Glad to have been of help.

Regards,
Buzz
 

Related to Expected value of a Bernoulli random variable.

1. What is the expected value of a Bernoulli random variable?

The expected value of a Bernoulli random variable is the sum of the possible outcomes, weighted by their respective probabilities. For a Bernoulli variable, which can only take on two values (usually labeled 0 and 1), the expected value is the probability of success multiplied by 1, plus the probability of failure multiplied by 0. This simplifies to p, the probability of success.

2. How is the expected value of a Bernoulli random variable calculated?

The expected value of a Bernoulli random variable is calculated by multiplying the possible outcomes by their respective probabilities, and then summing these values. For example, if the probability of success is 0.6 and the probability of failure is 0.4, the expected value would be (0.6*1) + (0.4*0) = 0.6.
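This calculation can be written as a one-line helper (a minimal sketch; the function name is not from the thread):

```python
def bernoulli_expected_value(p):
    """E(X) = 1*p + 0*(1-p) for a Bernoulli(p) random variable."""
    return 1 * p + 0 * (1 - p)

# The worked example above: p = 0.6, so E(X) = 0.6.
print(bernoulli_expected_value(0.6))  # → 0.6
```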

3. What does the expected value of a Bernoulli random variable represent?

The expected value of a Bernoulli random variable represents the long-term average of repeated trials. In other words, if the experiment is repeated many times, the average value of the outcomes will approach the expected value.

4. Can the expected value of a Bernoulli random variable be negative?

No, the expected value of a Bernoulli random variable cannot be negative. Since E(X) = 1 × p + 0 × (1 − p) = p, and p is a probability between 0 and 1, the expected value always lies between 0 and 1 and is therefore non-negative.

5. How is the expected value of a Bernoulli random variable used in decision making?

The expected value of a Bernoulli random variable can be used in decision making to gauge the long-run success rate of an option. This can guide choices between alternatives with different probabilities of success. However, the expected value is not the only factor to consider in decision making; other factors such as risk and potential consequences should also be taken into account.
