musicgold
Hi,
My question is about a common procedure for finding the minimum and maximum values of a function. In many problems we take the first derivative of a function and set it equal to zero. I understand this method when the goal is to find the function's minimum or maximum.
However, I get confused when I see people use that 'set the derivative to 0' condition as a proof of something else.
To better explain my question, I have attached a file. It contains the equations used to derive the coefficients of a least-squares regression line.
The OLS method starts by taking the partial derivatives of equation 3.1.2, setting them to 0, and solving to get the coefficients. I follow it up to this point.
However, in the last section, to prove that the sum of the residuals is 0, the author uses the first-order conditions from that partial differentiation as the proof.
I don't understand how an assumption can serve as the proof of something.
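To make the question concrete, here is a quick numerical sketch of my own (not from the attached file): it fits a least-squares line by solving the normal equations, i.e. the very 'set the partial derivatives to 0' conditions, and then checks that the residuals do sum to zero. The data and variable names here are just made up for illustration.

```python
import numpy as np

# Illustrative data (my own example, not from the attached file):
# a noisy line y = 2 + 3x.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 + 3.0 * x + rng.normal(0, 1, 50)

# Design matrix with an intercept column of ones.
X = np.column_stack([np.ones_like(x), x])

# Solving (X'X) b = X'y is exactly solving the first-order
# conditions obtained by setting the partial derivatives of the
# sum of squared residuals to zero.
b = np.linalg.solve(X.T @ X, X.T @ y)

residuals = y - X @ b
print(residuals.sum())  # ~0, up to floating-point error
```

The intercept's first-order condition is literally the statement "sum of residuals = 0", which is why the author can cite it directly; since the fitted coefficients are defined as the solution of those conditions, the conditions hold as facts about the fitted line, not as unproven assumptions.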
Thanks,
MG.