Upper bound for probability when Bayes risk is zero

  • #1
GabrielN00

Homework Statement


Bayes' risk is ##L^*=0## for a classification problem. ##g_n(x)## is a (plug-in) classification rule such that ##g_n=0## if ##\eta_n(x)\leq 1/2## and ##g_n=1## otherwise. The function ##\eta## is given by ##\eta(x)=\mathbb{E}(Y|X=x)##. Show that ##\mathbb{P}(g_n(X)\neq Y)\leq 4\,\mathbb{E}((\eta_n(X)-\eta(X))^2)##.

Homework Equations


Risk function: For a loss function ##L## and an estimator ##\hat{\theta}##, the risk is the expected loss of that decision: ##R:\Theta\times D \rightarrow \mathbb{R},\ R(\theta, \hat{\theta}) = \mathbb{E}_\theta L(\theta,\hat{\theta})##.

Bayes risk is defined as the risk averaged over the prior ##\pi##: ##r(\pi,\hat{\theta})=\int_\Theta R(\theta,\hat{\theta})\,\pi(\theta)\,d\theta##.
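Side note on the quadratic-loss remark below (an illustration, not part of the problem data): under quadratic loss ##L(\theta,\hat\theta)=(\theta-\hat\theta)^2## the Bayes risk is minimized by the posterior mean, because for any estimator ##\delta(X)##
$$\mathbb{E}\big[(\theta-\delta(X))^2\,\big|\,X\big]=\mathbb{E}\big[(\theta-\mathbb{E}[\theta\,|\,X])^2\,\big|\,X\big]+\big(\mathbb{E}[\theta\,|\,X]-\delta(X)\big)^2,$$
the cross term vanishing by the tower property. This is the sense in which ##\mathbb{E}((\eta_n(X)-\eta(X))^2)## is a quadratic risk: it measures ##\eta_n## as an estimator of the conditional mean ##\eta##.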

The Attempt at a Solution



I see that if ##\eta_n\leq 1/2## then it reduces to showing that ##\mathbb{P}(0\neq Y)\leq 4\,\mathbb{E}((\eta_n(X)-\eta(X))^2)##, but how do I get the Bayes risk involved? The right-hand side of the inequality looks similar to the Bayes risk for quadratic loss, but I can't see a way to use it, if it is related to the problem at all.
 
  • #2


First, let's define some terms for clarity:

- Bayes' risk ##L^*##: the minimum probability of error achievable by any classification rule. ##L^*=0## means perfect classification is possible, which forces ##\eta(X)\in\{0,1\}## almost surely.
- Plug-in rule: a rule obtained by plugging an estimate ##\eta_n## of the regression function ##\eta## into the form of the Bayes rule, i.e. thresholding ##\eta_n## at 1/2.
- Classification rule: a function that maps an observation to a class.

Now, let's break down the problem. We are given that the Bayes risk is ##L^*=0##, i.e. the minimum achievable probability of error is 0; as noted above, this forces ##\eta(X)\in\{0,1\}## and hence ##Y=\eta(X)## almost surely. Next, we are given the plug-in rule ##g_n##, which classifies by thresholding the estimate ##\eta_n##: it assigns class 0 when ##\eta_n(x)\leq 1/2## and class 1 otherwise (a small code sketch of this rule follows below).
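Here is a minimal sketch of the thresholding rule. The k-nearest-neighbour estimate of ##\eta_n## is an assumed choice for illustration only; the problem does not specify how ##\eta_n## is constructed:

```python
import numpy as np

def eta_knn(x, X_train, Y_train, k=5):
    """Estimate eta_n(x) = E[Y | X = x] by the average label of the
    k training points closest to x (an illustrative choice of estimator)."""
    nearest = np.argsort(np.abs(X_train - x))[:k]   # k nearest neighbours (1-D X)
    return Y_train[nearest].mean()                  # empirical conditional mean

def g_n(x, X_train, Y_train, k=5):
    """Plug-in rule: predict 1 iff the estimate eta_n(x) exceeds 1/2."""
    return 1 if eta_knn(x, X_train, Y_train, k) > 0.5 else 0

# Tiny usage example with made-up data:
X_train = np.array([-0.9, -0.5, -0.1, 0.2, 0.6, 0.8])
Y_train = np.array([0, 0, 0, 1, 1, 1])
print(g_n(0.4, X_train, Y_train, k=3))   # -> 1
```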

Next, we have the regression function ##\eta(x)=\mathbb{E}(Y|X=x)##, the expected value of the class label ##Y## given the observation ##X=x##. Since ##Y\in\{0,1\}##, this is just ##\mathbb{P}(Y=1|X=x)##; thresholding ##\eta## itself at 1/2 gives the Bayes rule, and the plug-in rule imitates it with the estimate ##\eta_n##.

Now, we need to show that ##\mathbb{P}(g_n(X)\neq Y)\leq 4\,\mathbb{E}((\eta_n(X)-\eta(X))^2)##: the probability that ##g_n## assigns the wrong class is at most four times the mean squared error of ##\eta_n## as an estimate of ##\eta##.

To see this, split the error event by the true class. The rule errs exactly when ##\eta_n(X)>1/2## while ##Y=0##, or ##\eta_n(X)\leq 1/2## while ##Y=1##, so
$$\mathbb{P}(g_n(X)\neq Y)=\mathbb{P}(\eta_n(X)>1/2,\ Y=0)+\mathbb{P}(\eta_n(X)\leq 1/2,\ Y=1).$$
Since ##L^*=0##, we have ##\eta(X)\in\{0,1\}## and ##Y=\eta(X)## almost surely. On the first event ##\eta(X)=0## and ##\eta_n(X)>1/2##; on the second ##\eta(X)=1## and ##\eta_n(X)\leq 1/2##. In both cases ##|\eta_n(X)-\eta(X)|\geq 1/2##. Hence
$$\mathbb{P}(g_n(X)\neq Y)\leq \mathbb{P}\big(|\eta_n(X)-\eta(X)|\geq 1/2\big)\leq \frac{\mathbb{E}\big((\eta_n(X)-\eta(X))^2\big)}{(1/2)^2}=4\,\mathbb{E}\big((\eta_n(X)-\eta(X))^2\big),$$
where the last step is Markov's inequality applied to ##(\eta_n(X)-\eta(X))^2##. This is exactly where the assumption ##L^*=0## enters: it pins ##\eta(X)## to ##\{0,1\}##, so any misclassification forces ##\eta_n## to be at least 1/2 away from ##\eta##.
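To make the bound tangible, here is a small Monte Carlo sanity check. The distributions are assumed for illustration, not given in the problem: ##X## is uniform, ##\eta(X)=\mathbf{1}\{X>0\}## so that ##L^*=0##, and ##\eta_n## is a deliberately noisy perturbation of ##\eta##.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Simulated population: eta(x) = 1{x > 0}, so Y = eta(X) a.s. and L* = 0.
X = rng.uniform(-1.0, 1.0, size=n)
eta = (X > 0).astype(float)
Y = eta.copy()                       # zero Bayes risk: Y is a function of X

# A deliberately imperfect estimate eta_n (assumed form, for illustration).
eta_n = np.clip(eta + rng.normal(0.0, 0.3, size=n), 0.0, 1.0)

g_n = (eta_n > 0.5).astype(float)    # plug-in rule: threshold at 1/2

error_prob = np.mean(g_n != Y)
bound = 4.0 * np.mean((eta_n - eta) ** 2)
print(f"P(g_n(X) != Y) ~= {error_prob:.4f} <= {bound:.4f} = 4*E[(eta_n-eta)^2]")
```

With these settings the empirical error probability is roughly 0.05 while the bound is roughly 0.18, consistent with the inequality.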
 

Related to Upper bound for probability when Bayes risk is zero

What is an upper bound for probability?

An upper bound for a probability is a quantity guaranteed to be at least as large as that probability. Here, ##4\,\mathbb{E}((\eta_n(X)-\eta(X))^2)## is an upper bound on the error probability of the plug-in rule.

What is the Bayes risk?

In decision theory, the Bayes risk of a decision rule is its expected loss, averaged over the prior distribution of the parameters. In classification, the Bayes risk ##L^*## denotes the minimum achievable probability of misclassification.

What is a zero Bayes risk?

A zero Bayes risk means even the best possible rule's expected loss is zero. In classification, ##L^*=0## means the label ##Y## is (almost surely) a deterministic function of ##X##, so perfect classification is achievable.

How is an upper bound for probability related to a zero Bayes risk?

When the Bayes risk is zero, the misclassification probability of a plug-in rule can be bounded above in terms of how well ##\eta_n## estimates ##\eta##; that is exactly the inequality proved in this thread. The zero-risk assumption pins ##\eta(X)## to ##\{0,1\}##, which is what makes the bound with the factor 4 possible.

What is the significance of an upper bound for probability when the Bayes risk is zero?

Such an upper bound translates estimation quality directly into classification quality: as ##\mathbb{E}((\eta_n(X)-\eta(X))^2)\to 0##, the error probability of the plug-in rule tends to the Bayes risk, which here is zero. This is the standard way consistency of plug-in classifiers is established.
