Solving Minimization Problem Involving Variance & Covariance

In summary, minimizing variance and covariance is crucial in scientific research because it allows for more precise and accurate measurements, making it easier to identify patterns and draw meaningful conclusions. The optimal solution can be found with methods such as gradient descent, the method of Lagrange multipliers, and linear programming. Common applications include data analysis, experimental design, and optimization problems in statistics, economics, and engineering. Challenges include non-normally distributed data and outliers. Despite these challenges, minimizing variance and covariance contributes to more reliable and accurate results, a better understanding of underlying mechanisms, and the development of new theories.
  • #1
OhMyMarkov
Hello Everyone!

What $b$ minimizes $E[(X-b)^2]$, where $b$ is a constant? Isn't it $b=E[X]$? Is it right to go about the proof as follows:

$E[(X-b)^2] = E[X^2+b^2-2bX] = E[X^2] + b^2 - 2bE[X]$, since $E[b^2]=b^2$ for a constant $b$. Differentiating with respect to $b$ and setting the derivative to zero, we obtain $b=E[X]$. Is this proof correct? I was thinking it was until I got this problem:

What $Y$ minimizes $E[(Y-aX-b)^2]$? The given solution contains variances and covariances, but all I got was $Y=aE[X]+b$.

What am I doing wrong here?

Any help is appreciated! :D
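The first claim is easy to check numerically. Below is a small Python sketch (the distribution parameters are arbitrary) confirming that the sample mean minimizes the sample version of $E[(X-b)^2]$:

```python
import random

random.seed(0)
xs = [random.gauss(5.0, 2.0) for _ in range(100_000)]
mean_x = sum(xs) / len(xs)

def mse(b):
    """Sample estimate of E[(X - b)^2]."""
    return sum((x - b) ** 2 for x in xs) / len(xs)

# The sample mean minimizes the sample mean-squared error:
assert mse(mean_x) < mse(mean_x + 0.5)
assert mse(mean_x) < mse(mean_x - 0.5)
```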
 
  • #2
OhMyMarkov said:
Hello Everyone!

What $b$ minimizes $E[(X-b)^2]$, where $b$ is a constant? Isn't it $b=E[X]$? Is it right to go about the proof as follows:

$E[(X-b)^2] = E[X^2+b^2-2bX] = E[X^2] + b^2 - 2bE[X]$, since $E[b^2]=b^2$ for a constant $b$. Differentiating with respect to $b$ and setting the derivative to zero, we obtain $b=E[X]$. Is this proof correct?


Correct


I was thinking it was until I got this problem:

What $Y$ minimizes $E[(Y-aX-b)^2]$? The given solution contains variances and covariances, but all I got was $Y=aE[X]+b$.

What am I doing wrong here?

Any help is appreciated! :D

The problem with this second question is that, under normal naming conventions, \(Y\) is a random variable, not a constant; if it were a constant, what you got would be correct. If \(Y\) is a random variable, you are left with a minimisation problem whose variables are \( \overline{Y}\), \(Var(Y)\), and \( Covar(Y,aX+b)\). This is a constrained minimisation problem, since \( | Covar(Y,aX+b)| \le \sqrt{Var(Y)\,Var(aX+b)}\).
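The difference the random-variable interpretation makes can be checked numerically. In the Python sketch below (with arbitrary example values for $a$ and $b$), the constant choice $Y = aE[X]+b$ leaves the variance of $aX+b$ as irreducible error, while the random-variable choice $Y = aX+b$, which has maximal covariance with $aX+b$, drives the error to zero:

```python
import random

random.seed(1)
a, b = 2.0, 3.0
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mean_x = sum(xs) / len(xs)

def mse(ys):
    """Sample estimate of E[(Y - aX - b)^2]."""
    return sum((y - a * x - b) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Constant choice Y = a E[X] + b: error is approximately a^2 Var(X).
const_err = mse([a * mean_x + b] * len(xs))
# Random-variable choice Y = aX + b: error vanishes.
rv_err = mse([a * x + b for x in xs])

assert rv_err < 1e-9 < const_err
```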

CB
 
  • #3
Thank you CaptainBlack!

But you mentioned \(E[Y]\), \(Var[Y]\), and \(Covar[Y,aX+b]\); what about \(E[X]\) and \(Var[X]\)? Can we substitute them for \(Covar[Y,aX+b]\)? Or are the variables intentionally used in this fashion so that the hand calculation becomes easier?
 
  • #4
OhMyMarkov said:
Thank you CaptainBlack!

But you mentioned \(E[Y]\), \(Var[Y]\), and \(Covar[Y,aX+b]\); what about \(E[X]\) and \(Var[X]\)? Can we substitute them for \(Covar[Y,aX+b]\)? Or are the variables intentionally used in this fashion so that the hand calculation becomes easier?

\(E(X)\) and \(Var(X)\) are constants for this problem, while \(u=Covar(Y,aX+b)\) is one of the variables in the optimisation problem.

CB
 
  • #5


Hello!

Your proof for the first problem is correct. Expanding gives $E[(X-b)^2] = E[X^2] + b^2 - 2bE[X]$; differentiating with respect to $b$ and setting the derivative to zero gives $2b - 2E[X] = 0$, so $b = E[X]$ is the minimizer.

For the second problem, the variances and covariances appear once you split the mean-squared error into a variance part and a bias part. Writing $Z = Y - aX - b$ and using $E[Z^2] = Var(Z) + (E[Z])^2$ together with $Var(Y - aX) = Var(Y) + a^2 Var(X) - 2a\,Cov(X,Y)$, we get:

$E[(Y-aX-b)^2] = Var(Y) + a^2 Var(X) - 2a\,Cov(X,Y) + (E[Y] - aE[X] - b)^2$

If $Y$ is restricted to a constant, then $Var(Y) = Cov(X,Y) = 0$, only the bias term survives, and the minimizer is $Y = aE[X] + b$, which is exactly what you found. If $Y$ is a random variable, the bias term is minimized by $E[Y] = aE[X] + b$, and you are left minimizing $Var(Y) + a^2 Var(X) - 2a\,Cov(X,Y)$ subject to $|Cov(X,Y)| \le \sqrt{Var(X)Var(Y)}$; the minimum of zero is attained by $Y = aX + b$ itself.
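The decomposition $E[(Y-aX-b)^2] = Var(Y) + a^2 Var(X) - 2a\,Cov(X,Y) + (E[Y] - aE[X] - b)^2$ holds exactly for sample moments too, so it can be verified numerically. A Python sketch with arbitrary example values:

```python
import random

random.seed(2)
a, b = 1.5, -0.5
xs = [random.gauss(0.0, 1.0) for _ in range(10_000)]
ys = [0.8 * x + random.gauss(0.0, 0.5) for x in xs]  # Y correlated with X

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return mean([(t - m) ** 2 for t in v])

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return mean([(s - mu) * (t - mv) for s, t in zip(u, v)])

# Left side: mean-squared error; right side: variance part + bias part.
lhs = mean([(y - a * x - b) ** 2 for x, y in zip(xs, ys)])
rhs = (var(ys) + a ** 2 * var(xs) - 2 * a * cov(xs, ys)
       + (mean(ys) - a * mean(xs) - b) ** 2)

assert abs(lhs - rhs) < 1e-9
```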

I hope this helps! Let me know if you have any further questions.
 

Related to Solving Minimization Problem Involving Variance & Covariance

1. What is the significance of minimizing variance and covariance in a scientific context?

Minimizing variance and covariance is important because it allows for more accurate and precise measurements in scientific experiments. By reducing the variability of the data and the dependence between variables, scientists can more easily identify patterns and draw meaningful conclusions from their data.

2. How do you determine the optimal solution for minimizing variance and covariance?

There are several methods for solving minimization problems involving variance and covariance, such as gradient descent, the method of Lagrange multipliers, and linear programming. The optimal solution will depend on the specific problem and data set, so it is important to carefully consider the appropriate approach for each situation.
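Of these methods, plain gradient descent is the simplest to sketch. The Python snippet below (with an arbitrary synthetic sample) minimizes the sample version of $E[(X-b)^2]$ by following the gradient $-2E[X-b]$ and recovers the sample mean:

```python
import random

random.seed(3)
xs = [random.gauss(4.0, 1.0) for _ in range(10_000)]
mean_x = sum(xs) / len(xs)

b, lr = 0.0, 0.1
for _ in range(200):
    # d/db of mean((x - b)^2) is -2 * mean(x - b)
    grad = -2.0 * sum(x - b for x in xs) / len(xs)
    b -= lr * grad

# Gradient descent converges to the sample mean, the known minimizer.
assert abs(b - mean_x) < 1e-6
```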

3. What are some common applications of minimizing variance and covariance in scientific research?

Minimizing variance and covariance is used in a variety of scientific fields, including statistics, economics, and engineering. It is commonly applied in data analysis, experimental design, and optimization problems. For example, scientists may use it to minimize errors in measurements or to identify the most efficient way to allocate resources.

4. What are some potential challenges when solving minimization problems involving variance and covariance?

One challenge is that the data used to calculate variance and covariance may not be normally distributed, which can impact the accuracy of the results. Additionally, the presence of outliers or missing data can also complicate the analysis. It is important to carefully consider the assumptions and limitations of the chosen method when solving these types of problems.

5. How can minimizing variance and covariance contribute to the overall advancement of scientific knowledge?

By minimizing variance and covariance, scientists can obtain more accurate and reliable results, which can help to identify trends and relationships in data. This can lead to a better understanding of underlying mechanisms and processes, and contribute to the development of new theories and advancements in scientific knowledge.
