# Critical point of function of two variables

#### GreenGoblin

##### Member
Hello,
Please help me solve this problem, and point out any mistake I may have made.
Thank you

$f(x,y) = 2\cos(2x) + \sin(x^{2}-y^{2})$

Find all the first and second order derivatives; hence show that the origin is a critical point, and determine which type of critical point it is.
This is my first time attempting a critical-point question. I know it has to do with the second derivative, but I am not sure what the definition is.

I computed the derivatives below (I have no way to verify them other than my own working, and they need to be right to check the critical-point criteria, so please point out any mistakes):

$df/dx = -4\sin(2x) + 2x\sin(x^{2})\cos(y^{2}) - \cos(x^{2})\sin(y^{2})$

$df/dy = -2y(\sin(y^{2})\sin(x^{2}) + \cos(y^{2})\cos(x^{2}))$

$d^{2}f/dx^{2} = -8\cos(2x) + 4x^{2}\cos(x^{2})(\sin(y^{2}) + \cos(y^{2})) + 2\sin(x^{2})(\sin(y^{2}) + \cos(y^{2}))$

#### dwsmith

##### Well-known member
http://en.wikipedia.org/wiki/Second_partial_derivative_test

Check out the example.
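Applied to the $f$ in this thread, the test in that article can be sketched numerically at the origin. The following is a rough check using central finite differences in plain Python; the step size `h` and the helper name `second_partials` are illustrative choices, not something from the thread:

```python
import math

def f(x, y):
    return 2 * math.cos(2 * x) + math.sin(x**2 - y**2)

def second_partials(f, x, y, h=1e-4):
    # Central-difference approximations to f_xx, f_yy, f_xy.
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fyy, fxy

fxx, fyy, fxy = second_partials(f, 0.0, 0.0)
D = fxx * fyy - fxy**2  # the discriminant tested in the article
print(fxx, fyy, fxy, D)  # approximately -6, -2, 0, 12
```

Since $D > 0$ and $f_{xx} < 0$ at the origin, the test classifies the origin as a local maximum; the numbers also give something to compare hand-computed derivatives against.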

#### GreenGoblin

##### Member
Hi,
Thanks,

Why is it that the second derivative with respect to x is used to find the critical point type, but not y? What property makes x more relevant than y in this case? Since the function is of both variables, I don't understand why x is more involved in the evaluation than y.

#### dwsmith

##### Well-known member
What you find with x, you plug into the y derivative. You can start with y as well.

#### GreenGoblin

##### Member
No no, what I mean is: why is it that $d^{2}f/dx^{2}$ (or $f_{xx}$, whatever notation you prefer) is used to find out whether it's a maximum or a minimum? I can't see why x or y should be any different, since they're both independent variables, but the test specifies that the second x derivative is to be used. What makes this the case?

#### dwsmith

##### Well-known member
That is the determinant of the Hessian matrix.
For example, for a $2\times 2$ matrix,
$$\begin{vmatrix}a & b \\ c & d \end{vmatrix} = ad-bc.$$

Examining $f_{xx}$ has to do with positive and negative definiteness.
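As a concrete instance of that determinant for the thread's function: the second partials below are recomputed directly from the chain rule on $\sin(x^2-y^2)$ (an independent check, not copied from the attempt earlier in the thread), then evaluated at the origin:

```python
import math

# Second partials of f(x, y) = 2cos(2x) + sin(x^2 - y^2),
# obtained by the chain rule with u = x^2 - y^2.
def fxx(x, y):
    return (-8 * math.cos(2 * x) + 2 * math.cos(x**2 - y**2)
            - 4 * x**2 * math.sin(x**2 - y**2))

def fyy(x, y):
    return -2 * math.cos(x**2 - y**2) - 4 * y**2 * math.sin(x**2 - y**2)

def fxy(x, y):
    return 4 * x * y * math.sin(x**2 - y**2)

# Hessian determinant at the origin: ad - bc, where the symmetric
# off-diagonal entries are b = c = f_xy.
a, b, c = fxx(0.0, 0.0), fxy(0.0, 0.0), fyy(0.0, 0.0)
det = a * c - b * b
print(a, c, b, det)  # -6.0 -2.0 0.0 12.0
```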


#### GreenGoblin

##### Member
No, the determinant is used for the first step; I'm asking about the second. Given that it is a maximum or a minimum and not a saddle (from using the determinant), what is the process for determining which of the two it is? I'm querying why the second x derivative is specified rather than y, since x and y are, in essence, interchangeable in problems like this: you could switch them around with no effect, so why isn't that true here as well?

Could someone else respond, please?


#### Jester

##### Well-known member
MHB Math Helper
To answer your question: yes, you can use either $f_{xx}$ or $f_{yy}$. It doesn't matter.

#### GreenGoblin

##### Member
Thank you

I like to have a justification for a formula rather than just the formula itself, and I couldn't reconcile this as written; I saw no mention of it in the source provided.

Thanks,
GreenGoblin

#### HallsofIvy

##### Well-known member
MHB Math Helper
The condition for a max or min is that $f_{xx}f_{yy}- f_{xy}^2$ be greater than 0. Since $f_{xy}$ is squared, $f_{xy}^2$ is non-negative, so $-f_{xy}^2$ is non-positive. In order that $f_{xx}f_{yy}- f_{xy}^2$ be greater than 0, then, $f_{xx}f_{yy}$ must be positive, which in turn means that $f_{xx}$ and $f_{yy}$ must have the same sign, so it is sufficient to check either of them to see whether it is a max or a min.

The problem with a justification for this formula (which is typically just "given" in a calculus text without proof or justification) is that it requires some fairly deep linear algebra. In a strict sense, "the" derivative of a function of two variables at a given point (as opposed to the partial derivatives) is a linear transformation from $R^2$ to $R$, which can be represented by the vector $\begin{bmatrix}\frac{\partial f}{\partial x} & \frac{\partial f}{\partial y}\end{bmatrix}$, the "gradient" of the function f.

And, then, the second derivative is the linear transformation from $R^2$ to $R^2$ which can be represented by the matrix
$\begin{bmatrix}\frac{\partial^2f}{\partial x^2} & \frac{\partial^2f}{\partial x\partial y} \\ \frac{\partial^2f}{\partial x\partial y} & \frac{\partial^2f}{\partial y^2}\end{bmatrix}$

Now, that is a symmetric matrix, so there exists a basis (coordinate system) in which it becomes the diagonal matrix
$\begin{bmatrix}\frac{\partial^2f}{\partial x'^2} & 0 \\ 0 & \frac{\partial^2f}{\partial y'^2}\end{bmatrix}$

Further, the determinants of those two matrices are equal. That means that $f_{xx}f_{yy}- f_{xy}^2= f_{x'x'}f_{y'y'}$, which in turn means that $f_{xx}f_{yy}- f_{xy}^2$ will be positive if and only if there exists a coordinate system, $x'y'$, in which $f_{x'x'}$ and $f_{y'y'}$ have the same sign. Of course, you only check this if the first derivatives, $f_x$ and $f_y$, are 0. So, in terms of the Taylor series, $f(x',y')= f(x'_0, y'_0)+ \frac{1}{2}f_{x'x'}(x'- x'_0)^2+ \frac{1}{2}f_{y'y'}(y'- y'_0)^2$ plus higher powers of $x'$ and $y'$. For $x'$ and $y'$ sufficiently close to $x'_0$ and $y'_0$, those higher-power terms are negligible. And if $f_{x'x'}$ and $f_{y'y'}$ are both positive, $f(x'_0, y'_0)+ \frac{1}{2}f_{x'x'}(x'- x'_0)^2+ \frac{1}{2}f_{y'y'}(y'- y'_0)^2$ is a paraboloid opening upward, so we have a minimum at $(x'_0, y'_0)$, while if they are both negative it is a paraboloid opening downward, so we have a maximum.
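The diagonalization step above can be made concrete for the $f$ in this thread. A symmetric $2\times2$ matrix has a closed form for its eigenvalues, and those eigenvalues are exactly $f_{x'x'}$ and $f_{y'y'}$ in the rotated coordinates. A small sketch (the Hessian entries $-6$, $0$, $-2$ are recomputed from the chain rule, and the helper name is illustrative):

```python
import math

def sym2_eigenvalues(a, b, c):
    # Eigenvalues of the symmetric matrix [[a, b], [b, c]]:
    # mean of the diagonal, plus/minus sqrt(((a-c)/2)^2 + b^2).
    mean = (a + c) / 2
    r = math.hypot((a - c) / 2, b)
    return mean - r, mean + r

# Hessian of f at the origin: f_xx = -6, f_xy = 0, f_yy = -2.
lam1, lam2 = sym2_eigenvalues(-6.0, 0.0, -2.0)
print(lam1, lam2)  # -6.0 -2.0
# Both eigenvalues are negative, so in the primed coordinates both
# f_{x'x'} and f_{y'y'} are negative: the origin is a local maximum.
# Their product, 12, equals f_xx f_yy - f_xy^2, the Hessian determinant.
```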
