How Do You Calculate Conditional Probability with Complements?

In summary: with the partition ##\{A, \overline{A}\}##, Bayes' theorem gives ##P(A|\overline{B}) = \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|\overline{A})} = \frac{0.16}{0.52} \approx 0.3077##. The original attempt went wrong by writing ##P(\overline{B}|A)## instead of ##P(\overline{B}|\overline{A})## in the second term of the denominator.
  • #1
TheSodesa

Homework Statement


[tex]P(A | \overline{B}) = ?[/tex]

Homework Equations


Multiplicative rule:
\begin{equation}
P(A | B) = \frac{P(A \cap B)}{P(B)}
\end{equation}
Additive rule:
\begin{equation}
P(A \cup B) = P(A) + P(B) - P(A \cap B)
\end{equation}
Difference:
\begin{equation}
A \backslash B = A \cap \overline{B}
\end{equation}
A hint:
\begin{equation}
P(\overline{A} \backslash B) = P(\overline{A} \cap \overline{B})
\end{equation}

The Attempt at a Solution



Using equation (1):
[tex]P(A | \overline{B}) = \frac{P(A \cap \overline{B})}{P(\overline{B})}[/tex]

This is where I'm stuck. I don't see how ##(3)## or ##(4)## would help me here, since I can't find an identity that turns the difference into something I can actually compute with.

What to do?
 
  • #2
What's the full question?
 
  • #3
micromass said:
What's the full question?

Ah damn, sorry! My blood sugar is low and I'm a bit stressed out.

They gave us ##P(A) = 0.4##, ##P(B|A)=0.60## and ##P(B|\overline{A})=0.40## and asked us to calculate a few probabilities:

\begin{align*}
&a) P(A \cap B) &= 0.24\\
&b) P(B) &= 0.48\\
&c) P(A \cup B) &= 0.64\\
&d) P(A|B) &= 0.50\\
&e) P(A|\overline{B}) &= ?\\
&f) P(\overline{A} \backslash B) &= ?
\end{align*}

I'm having trouble with e) and f) (possibly just e). I'm somehow supposed to use the identities above to manipulate these expressions into a form I can plug the given or the previously calculated values into.
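
For reference, a)–d) can be sanity-checked with a few lines of plain Python arithmetic (just a sketch; the variable names are my own shorthand, not anything from the problem sheet):

[code=python]
# Given values
p_A = 0.4
p_B_given_A = 0.60
p_B_given_notA = 0.40

p_A_and_B = p_A * p_B_given_A                 # a) multiplicative rule
p_B = p_A_and_B + (1 - p_A) * p_B_given_notA  # b) total probability over A, A-bar
p_A_or_B = p_A + p_B - p_A_and_B              # c) additive rule
p_A_given_B = p_A_and_B / p_B                 # d) multiplicative rule, rearranged

print(f"{p_A_and_B:.2f} {p_B:.2f} {p_A_or_B:.2f} {p_A_given_B:.2f}")
# -> 0.24 0.48 0.64 0.50
[/code]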
 
  • #4
Are you familiar with Bayes' theorem?
 
  • #5
micromass said:
Are you familiar with Bayes' theorem?

Looking at my course handout, it is mentioned under Kokonaistodennäköisyys ja Bayesin kaava (Total probability and Bayes' theorem), but we didn't yet cover it in class. Just a sec and I'll see if I can understand it.
 
  • #6
micromass said:
Are you familiar with Bayes' theorem?

Ok, so basically it goes like this:

Let's assume that our sample space ##\Omega## is partitioned into separate subsets like so:

[tex]\Omega = B_1 \cup \cdots \cup B_n[/tex]

Then if we have a subset of ##\Omega##, ##A##, that intersects all or some of the partitions, we can write ##A## like this:

[tex]A = (A \cap B_1) \cup (A \cap B_2) \cup \cdots \cup (A \cap B_n)[/tex]

Then
[tex]P(A) = \sum_{i=1}^{n} P(A \cap B_i)[/tex]

If ##P(B_i) > 0## for each ##i##, then applying the multiplicative identity to each term gives the law of total probability:

[tex]P(A) = \sum_{i=1}^{n} P(B_i)P(A|B_i)[/tex]

Bayes' theorem can then be derived by combining the total probability formula above with the multiplicative identity:

[tex]P(B_k|A) = \frac{P(B_k)P(A|B_k)}{\sum_{i=1}^{n} P(B_i)P(A|B_i)}[/tex]
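
As a quick numerical illustration of those two formulas, here is a Python sketch with a made-up three-part partition (the numbers are arbitrary, not the problem's data):

[code=python]
# Made-up partition, purely to illustrate the formulas above.
p_B = [0.2, 0.3, 0.5]          # P(B_i); sums to 1, as a partition requires
p_A_given_B = [0.1, 0.4, 0.8]  # P(A | B_i)

# Law of total probability: P(A) = sum over i of P(B_i) * P(A | B_i)
p_A = sum(pb * pa for pb, pa in zip(p_B, p_A_given_B))

# Bayes: P(B_k | A) = P(B_k) * P(A | B_k) / P(A)
posterior = [pb * pa / p_A for pb, pa in zip(p_B, p_A_given_B)]

print(round(p_A, 4))                     # 0.54
print([round(p, 4) for p in posterior])  # [0.037, 0.2222, 0.7407]
print(round(sum(posterior), 4))          # 1.0 -- posteriors over a partition sum to 1
[/code]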
 
  • #7
Yes, here the partition is ##A## and ##\overline{A}##.

You can do this without Bayes, but I think Bayes is the most natural approach here.
 
  • #8
micromass said:
Yes, here the partition is ##A## and ##\overline{A}##.

You can do this without Bayes, but I think Bayes is the most natural approach here.

I'll see if I can figure out how to apply it. But first dinner.
 
  • #9
micromass said:
Yes, here the partition is ##A## and ##\overline{A}##.

You can do this without Bayes, but I think Bayes is the most natural approach here.

Just to clarify, are you sure the partition is just ##A## and ##\overline{A}##? My understanding of set theory is very limited, but I'd drawn up the situation like this (not to scale, of course):
[Attachment: Venn diagram of the sample space ##\Omega## containing sets ##A## and ##B##]

I'm not sure I understand why I should partition the space into ##A## and ##\overline{A}##. Is it because ##A## intersects both ##B## and ##\Omega##?

Then the Bayes theorem would give me the following result:
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}
\end{align*}
Now
[tex]
P(\overline{B}|A) = \frac{P(\overline{B}\cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} = \frac{P(A)-P(A \cap B)}{P(A)} = \frac{0.4 - 0.24}{0.4} = 0.4
[/tex]
Then
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times 0.4}\\
&= 0.4
\end{align*}
I'm told this is still wrong. :frown:
 
  • #10
TheSodesa said:
Just to clarify, are you sure the partition is just ##A## and ##\overline{A}##? My understanding of set theory is very limited, but I'd drawn up the situation like this (not to scale, of course):
[Attachment: Venn diagram of the sample space ##\Omega## containing sets ##A## and ##B##]
I'm not sure I understand why I should partition the space into ##A## and ##\overline{A}##. Is it because ##A## intersects both ##B## and ##\Omega##?

Then the Bayes theorem would give me the following result:
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}
\end{align*}
Now
[tex]
P(\overline{B}|A) = \frac{P(\overline{B}\cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} = \frac{P(A)-P(A \cap B)}{P(A)} = \frac{0.4 - 0.24}{0.4} = 0.4
[/tex]
Then
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times 0.4}\\
&= 0.4
\end{align*}
I'm told this is still wrong. :frown:

##A, \bar{A}## form a partition of ##\Omega## because ##A \cap \bar{A} = \emptyset## (they are disjoint) and ##A \cup \bar{A} = \Omega## (together, they make up the whole space).
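
A toy Python illustration, if it helps (the sets here are arbitrary, nothing to do with the problem):

[code=python]
omega = set(range(10))  # a small toy sample space
A = {0, 1, 2, 3}
A_bar = omega - A       # the complement of A within omega

print(A & A_bar == set())  # True: A and A-bar are disjoint
print(A | A_bar == omega)  # True: together they cover the whole space
[/code]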
 
  • #11
TheSodesa said:
Then the Bayes theorem would give me the following result:
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}
\end{align*}

Are you sure about this? I would double check for some typos.
 
  • #12
micromass said:
Are you sure about this? I would double check for some typos.

If ##A## and ##\overline{A}## form the partition, then their probabilities should be the coefficients in front of the ##P(\overline{B}|A)## terms in the denominator of Bayes' theorem, no? And at least according to the handout, ##B_k## and ##A## do switch places like this ##P(B_k|A) \leftrightarrow P(A|B_k)## as we move from one side of the equals sign to the other; unless I've completely misunderstood the formula, that is.
 
  • #13
Ray Vickson said:
##A, \bar{A}## form a partition of ##\Omega## because ##A \cap \bar{A} = \emptyset## (they are disjoint) and ##A \cup \bar{A} = \Omega## (together, they make up the whole space).

Ahh, so the parts of the partition have to cover the entire space. Got it.
 
  • #14
TheSodesa said:
If ##A## and ##\overline{A}## form the partition, then their probabilities should be the coefficients in front of the ##P(\overline{B}|A)## terms in the denominator of Bayes' theorem, no? And at least according to the handout, ##B_k## and ##A## do switch places like this ##P(B_k|A) \leftrightarrow P(A|B_k)## as we move from one side of the equals sign to the other; unless I've completely misunderstood the formula, that is.

Shouldn't there be a ##P(B|\overline{A})## in the denominator somewhere?
 
  • #15
micromass said:
Shouldn't there be a ##P(B|\overline{A})## in the denominator somewhere?

Wait, let's recap. So our conditional probability:

[tex]P(B_k|A) = \frac{P(B_k \cap A)}{P(A)}[/tex]

becomes the Bayes' formula

[tex]P(B_k|A) = \frac{P(B_k) \times P(A|B_k)}{\sum_{i=1}^{n} P(B_i) \times P(A|B_i)}[/tex],

when the product identity is applied to the numerator and the total probability formula to ##P(A)## in the denominator. Here the ##B_i## are the parts of the partition. So if we apply this to my situation:

\begin{align*}
P(A|\overline{B})
&= \frac{P(A)\times P(\overline{B}|A)}{P(A) \times P(\overline{B} | A) + P(\overline{A}) \times P(\overline{B} | \overline{A})}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times P(\overline{B} | \overline{A})}
\end{align*}

Alright, this looks different. Now I just need to figure out what ##P(\overline{B} | \overline{A})## is.
 
  • #16
You know, there's no need for Bayes. So although I think it's most natural, here's a way to do it without:
Notice that ##P(A|\overline{B}) = \frac{P(A\cap \overline{B})}{P(\overline{B})}##
Now use also that ##P(\overline{B}|A) = \frac{P(A\cap \overline{B})}{P(A)}##.
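
Combining those two expressions eliminates ##P(A\cap \overline{B})##. As a sketch of that route, using the values already computed earlier in this thread:

[code=python]
p_A = 0.4
p_B = 0.48            # part b)
p_notB = 1 - p_B      # 0.52
p_notB_given_A = 0.4  # (0.4 - 0.24) / 0.4, as computed in post #9

# Both conditionals share the joint probability P(A and not-B), so
# P(A | not-B) = P(A) * P(not-B | A) / P(not-B)
p_A_given_notB = p_A * p_notB_given_A / p_notB
print(round(p_A_given_notB, 5))  # 0.30769
[/code]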
 
  • #17
micromass said:
You know, there's no need for Bayes. So although I think it's most natural, here's a way to do it without:
Notice that ##P(A|\overline{B}) = \frac{P(A\cap \overline{B})}{P(\overline{B})}##
Now use also that ##P(\overline{B}|A) = \frac{P(A\cap \overline{B})}{P(A)}##.

Ok. :biggrin:

I'm pretty sure my last iteration of the formula was finally correct; there's just that pain-in-the-butt term in the denominator.

But if we take the above approach:
[tex]P(\overline{B}) \times P(A | \overline{B}) = P(A) \times P(\overline{B} | A)[/tex]

We've already shown above that ##P(\overline{B} | A) = 0.4## (in the post with the diagram; that derivation used only basic set theory, nothing to do with Bayes). Then:

[tex]P(A|\overline{B}) = \frac{P(A) \times P(\overline{B} | A)}{P(\overline{B})} = \frac{0.4 \times 0.4}{0.52} = 0.30769[/tex]

Apparently this was still wrong. My derivation of ##P(\overline{B} | A)## was probably wrong.
 
  • #18
TheSodesa said:
Wait, let's recap. So our conditional probability:

[tex]P(B_k|A) = \frac{P(B_k \cap A)}{P(A)}[/tex]

becomes the Bayes' formula

[tex]P(B_k|A) = \frac{P(B_k) \times P(A|B_k)}{\sum_{i=1}^{n} P(B_i) \times P(A|B_i)}[/tex],

when the product identity is applied to the numerator and the total probability formula to ##P(A)## in the denominator. Here the ##B_i## are the parts of the partition. So if we apply this to my situation:

\begin{align*}
P(A|\overline{B})
&= \frac{P(A)\times P(\overline{B}|A)}{P(A) \times P(\overline{B} | A) + P(\overline{A}) \times P(\overline{B} | \overline{A})}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times P(\overline{B} | \overline{A})}
\end{align*}

Alright, this looks different. Now I just need to figure out what ##P(\overline{B} | \overline{A})## is.

Since ##(B, \overline{B})## is a partition of ##\Omega## we have ##P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})##. Can you figure out what is ## P(\Omega|\overline{A})##?
 
  • #19
Ray Vickson said:
Since ##(B, \overline{B})## is a partition of ##\Omega## we have ##P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})##. Can you figure out what is ## P(\Omega|\overline{A})##?

It's ##1##, isn't it?
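
One way to convince myself, straight from equation (1):

[tex]P(\Omega|\overline{A}) = \frac{P(\Omega \cap \overline{A})}{P(\overline{A})} = \frac{P(\overline{A})}{P(\overline{A})} = 1[/tex]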
 
  • #20
Ray Vickson said:
Since ##(B, \overline{B})## is a partition of ##\Omega## we have ##P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})##. Can you figure out what is ## P(\Omega|\overline{A})##?

If ## P(\Omega|\overline{A}) = 1##, then
\begin{align*}
P(\overline{B}|\overline{A})
&= 1 - P(B|\overline{A})\\
&= 1 - 0.4\\
&= 0.6
\end{align*}

Then
\begin{align*}
P(A|\overline{B})
&= \frac{0.4 \times 0.4}{0.4^2 + 0.6^2} \approx 0.30769
\end{align*}

This is the same answer I got with micromass' other method, but it is wrong. Again, my guess is that my derivation of ##P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} \stackrel{error?}{=} \frac{P(A) - P(A \cap B)}{P(A)} = 0.4## was wrong.
 
  • #21
Ray Vickson said:
Since ##(B, \overline{B})## is a partition of ##\Omega## we have ##P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})##. Can you figure out what is ## P(\Omega|\overline{A})##?

Scratch what I just said!

It was correct after all. The online system was just **very** picky about significant figures.
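
To make the precision issue concrete, here is a cross-check of both routes using exact fractions (a Python sketch; the decimal expansion is where the significant-figure trap lives):

[code=python]
from fractions import Fraction

p_A = Fraction(2, 5)                # P(A) = 0.4
p_notB = 1 - Fraction(12, 25)       # P(B-bar) = 1 - 0.48 = 0.52
p_notB_given_A = Fraction(2, 5)     # 0.4, from post #9
p_notB_given_notA = Fraction(3, 5)  # 0.6, from post #20

# Route 1: Bayes with the partition {A, A-bar}
bayes = (p_A * p_notB_given_A) / (p_A * p_notB_given_A
                                  + (1 - p_A) * p_notB_given_notA)
# Route 2: direct, P(A | B-bar) = P(A) * P(B-bar | A) / P(B-bar)
direct = p_A * p_notB_given_A / p_notB

print(bayes, direct)  # 4/13 4/13 -- the two routes agree exactly
print(float(bayes))   # 0.3076923076923077
[/code]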
 
  • #22
micromass said:
You know, there's no need for Bayes. So although I think it's most natural, here's a way to do it without:
Notice that ##P(A|\overline{B}) = \frac{P(A\cap \overline{B})}{P(\overline{B})}##
Now use also that ##P(\overline{B}|A) = \frac{P(A\cap \overline{B})}{P(A)}##.

Your answer was correct after all. I messed up with the significant figures when I put the answer through the system.
 
  • #23
TheSodesa said:
If ## P(\Omega|\overline{A}) = 1##, then
\begin{align*}
P(\overline{B}|\overline{A})
&= 1 - P(B|\overline{A})\\
&= 1 - 0.4\\
&= 0.6
\end{align*}

Then
\begin{align*}
P(A|\overline{B})
&= \frac{0.4 \times 0.4}{0.4^2 + 0.6^2} \approx 0.30769
\end{align*}

This is the same answer I got with micromass' other method, but it is wrong. Again, my guess is that my derivation of ##P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} \stackrel{error?}{=} \frac{P(A) - P(A \cap B)}{P(A)} = 0.4## was wrong.

$$P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A|\overline{B}) P(\overline{B})}{P(A)} $$
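
Plugging the thread's numbers back in confirms that everything is consistent:

[tex]P(\overline{B}|A) = \frac{P(A|\overline{B})\,P(\overline{B})}{P(A)} = \frac{(4/13) \times 0.52}{0.4} = \frac{0.16}{0.4} = 0.4[/tex]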
 
  • #24
Ray Vickson said:
$$P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A|\overline{B}) P(\overline{B})}{P(A)} $$

Thanks for your patience. It was correct after all.
 

Related to How Do You Calculate Conditional Probability with Complements?

1. What is the difference between probability and set theory?

Probability theory studies the likelihood of events, while set theory studies collections of objects and the relationships between them. In probability, events are modeled as sets (subsets of the sample space), so set theory provides the language for combining events and computing their probabilities.

2. How are set operations used in probability?

Set operations such as union, intersection, and complement are used to combine and compare events: ##P(A \cup B)## is the probability that at least one of ##A## and ##B## occurs, ##P(A \cap B)## the probability that both occur, and ##P(\overline{A})## the probability that ##A## does not occur.

3. What is the sample space in probability?

The sample space, usually denoted ##\Omega##, is the set of all possible outcomes of an experiment, favorable or not. Since every event is a subset of ##\Omega## and ##P(\Omega) = 1##, it fixes the total collection of outcomes over which probability is distributed.

4. How are Venn diagrams used in basic probability with set theory?

Venn diagrams are a visual representation of sets and their relationships. In basic probability with set theory, Venn diagrams can be used to visually show the intersection, union, and complement of sets, which can help in determining the probability of certain outcomes.

5. Can set theory be applied to real-life situations?

Yes, set theory can be applied to real-life situations, such as analyzing the probability of outcomes in a game or the likelihood of a certain event occurring in a given scenario. It can also be applied in fields such as economics, genetics, and computer science.
