Understanding the Position Vector in Calculus Problems

In summary, the conversation discusses the computation of the Laplacian of the vector function ##\vec{f} = \vec{r}/r^n##. The poster attempts to apply the product rule to differentiate twice in one step, but is reminded to first compute ##f_{ij} = \partial_j(x_i/r^n)##, which itself requires the product rule. The difficulty lies in simplifying the term ##r^{-n}\partial_j r_i + r_i\partial_j r^{-n}##. The responder clarifies that ##x_i## is interchangeable with ##x##, ##y##, or ##z## and can be treated algebraically.
  • #1
member 428835

Homework Statement


With ##\vec{r}## the position vector and ##r## its norm, we define
$$ \vec{f} = \frac{\vec{r}}{r^n}.$$
Show that
$$ \nabla^2\vec{f} = n(n-3)\frac{\vec{r}}{r^{n+2}}.$$

Homework Equations


Basic rules of calculus.

The Attempt at a Solution


From the definition of the Laplacian
$$
\nabla^2 \vec{f} = \nabla \cdot \nabla \vec{f}\\
= \left( \partial_i \partial_j \delta_{ij} \right) \frac{\vec{r}}{r^n}\\
= \vec{r}\left( \partial_i \partial_i \right) r^{-n}\\
= \vec{r}(-n)(-n-1) r^{-n-2}\\
= n(n+1) \frac{\vec{r}}{r^{n+2}}.$$

Clearly this does not agree with the proposed solution. I think my error is assuming ##\vec{r}## is constant, but we know this is not always true in other coordinate systems. Any ideas?
 
  • #2
The Laplacian in spherical coordinates is not simply the second derivative with respect to ##r##, and you have ignored the ##\vec{r}## in the numerator, which itself depends on position.
 
  • #3
joshmccraney said:

Homework Statement


With ##\vec{r}## the position vector and ##r## its norm, we define
$$ \vec{f} = \frac{\vec{r}}{r^n}.$$
Show that
$$ \nabla^2\vec{f} = n(n-3)\frac{\vec{r}}{r^{n+2}}.$$

Homework Equations


Basic rules of calculus.

The Attempt at a Solution


From the definition of the Laplacian
$$
\nabla^2 \vec{f} = \nabla \cdot \nabla \vec{f}\\
= \left( \partial_i \partial_j \delta_{ij} \right) \frac{\vec{r}}{r^n}\\
= \vec{r}\left( \partial_i \partial_i \right) r^{-n}\\
= \vec{r}(-n)(-n-1) r^{-n-2}\\
= n(n+1) \frac{\vec{r}}{r^{n+2}}.$$

Clearly this does not agree with the proposed solution. I think my error is assuming ##\vec{r}## is constant, but we know this is not always true in other coordinate systems. Any ideas?

Your ##\vec{f}## is a vector whose ##i##th component is ##f_i = x_i/r^n##. What is ##\nabla^2 f_i##?
 
  • #4
Ray Vickson said:
Your ##\vec{f}## is a vector whose ##i##th component is ##f_i = x_i/r^n##. What is ##\nabla^2 f_i##?

##\nabla^2 f_i = \partial_j \partial_j f_i = \partial_j \partial_j (x_i/r^n)##. Does this look correct?
 
  • #5
joshmccraney said:
##\nabla^2 f_i = \partial_j \partial_j f_i = \partial_j \partial_j (x_i/r^n)##. Does this look correct?

Why not finish the computations? When you have done that, if your final result agrees with what you were asked to show, then you know you have done it correctly---at least if the dimensionality is 3 (that is, vectors are ##(x,y,z)## or ##(x_1,x_2,x_3)##). For dimensions other than 3 the results ought to be a bit different.
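For reference, a quick sketch of how the dimension enters: assuming Cartesian coordinates in ##d## dimensions, so that ##\delta_{jj} = d##, carrying the index computation through gives
$$ \nabla^2\,\frac{\vec{r}}{r^n} = n(n-d)\,\frac{\vec{r}}{r^{n+2}}, $$
which reduces to the claimed ##n(n-3)\,\vec{r}/r^{n+2}## when ##d = 3##.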
 
  • #6
Ray Vickson said:
Why not finish the computations?

Ok, so I think we continue by saying ##r^{-n}\partial_j \partial_j(x_i) + x_i\partial_j \partial_j(r^{-n})## by the product rule, right? However, I really don't know how to proceed from here. The partial derivatives are with respect to ##x, y, z##, and ##x_i## is technically not the same thing. Any help in the right direction here?
 
  • #7
joshmccraney said:
Ok, so I think we continue by saying ##r^{-n}\partial_j \partial_j(x_i) + x_i\partial_j \partial_j(r^{-n})## by the product rule, right? However, I really don't know how to proceed from here. The partial derivatives are with respect to ##x, y, z##, and ##x_i## is technically not the same thing. Any help in the right direction here?

No, not right. First compute ##f_{ij} = \partial_j (x_i/r^n)##, which will certainly involve using the product rule. Then, compute ##\partial_j f_{ij}##, first without summing over ##j##, then doing ##j##-sum; those derivatives will involve the product rule again. So, when you differentiate twice, you cannot just use the product rule on ##\partial^2##; it is a bit more complicated than that:
$$ \partial^2 (fg) =(\partial^2 f) g + 2 (\partial f) (\partial g) + f (\partial^2 g) $$
However, using this formula is not much quicker than just going ahead and differentiating twice, re-using the product rule each time.

Anyway, I do not understand the rest of your difficulties: of course ##x_i## is the same thing as one of the ##x##, ##y## or ##z##; in fact, when YOU resorted to the use of the Einstein summation convention in post #1 you must have been assuming that your coordinates are ##x_1, x_2, x_3##. These are just names; instead of using ##x_i, i=1,2,3## or ##x,y,z## I could call the coordinates red, white and blue, or Tom, Dick and Harry. The only thing about them that matters is how they appear in formulas and how they can be manipulated algebraically.
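To make the first step concrete, here is a sketch of ##f_{ij}##, using the product rule together with ##\partial_j x_i = \delta_{ij}## and ##\partial_j r = x_j/r##:
$$ f_{ij} = \partial_j\!\left(\frac{x_i}{r^n}\right) = \frac{\partial_j x_i}{r^n} + x_i\,\partial_j r^{-n} = \frac{\delta_{ij}}{r^n} - \frac{n\,x_i x_j}{r^{n+2}}. $$
The second step is then ##\partial_j f_{ij}##, applying the product rule again and summing over ##j##.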
 
  • #8
Ray Vickson said:
No, not right. First compute ##f_{ij} = \partial_j (x_i/r^n)##, which will certainly involve using the product rule. Then, compute ##\partial_j f_{ij}##, first without summing over ##j##, then doing ##j##-sum; those derivatives will involve the product rule again. So, when you differentiate twice, you cannot just use the product rule on ##\partial^2##; it is a bit more complicated than that:
$$ \partial^2 (fg) =(\partial^2 f) g + 2 (\partial f) (\partial g) + f (\partial^2 g) $$
However, using this formula is not much quicker than just going ahead and differentiating twice, re-using the product rule each time.
Yea, I spaced it big time! So just to be clear, how about we call ##\vec{r} = r_i \vec{e_i}##. Then we have $$\partial_j \partial_j (r_i \vec{e_i} r^{-n}) = \partial_j (\partial_j (r_i \vec{e_i} r^{-n}))\\
= \vec{e_i} \partial_j(\partial_j (r_i r^{-n}))\\
= \vec{e_i} \partial_j(r^{-n} \partial_j r_i + r_i \partial_j r^{-n})$$
But how do you simplify the term ##r^{-n} \partial_j r_i + r_i \partial_j r^{-n}##?

Ray Vickson said:
Anyway, I do not understand the rest of your difficulties: of course ##x_i## is the same thing as one of the ##x##, ##y## or ##z##; in fact, when YOU resorted to the use of the Einstein summation convention in post #1 you must have been assuming that your coordinates are ##x_1, x_2, x_3##. These are just names; instead of using ##x_i, i=1,2,3## or ##x,y,z## I could call the coordinates red, white and blue, or Tom, Dick and Harry. The only thing about them that matters is how they appear in formulas and how they can be manipulated algebraically.
I was referring to the fact that if we define the components of ##\vec{r}## to be ##x_i## then ##x_i## is not the same as x,y,z since we don't know that ##\vec{r} = <x,y,z>##.
 
  • #9
Ray Vickson said:
No, not right. First compute ##f_{ij} = \partial_j (x_i/r^n)##, which will certainly involve using the product rule. Then, compute ##\partial_j f_{ij}##, first without summing over ##j##, then doing ##j##-sum; those derivatives will involve the product rule again. So, when you differentiate twice, you cannot just use the product rule on ##\partial^2##; it is a bit more complicated than that:
$$ \partial^2 (fg) =(\partial^2 f) g + 2 (\partial f) (\partial g) + f (\partial^2 g) $$
However, using this formula is not really much easier than just going ahead and differentiating twice, re-using the product rule each time.

Anyway, I do not understand the rest of your difficulties: of course ##x_i## is the same thing as one of the ##x##, ##y## or ##z##; in fact, when YOU resorted to the use of the Einstein summation convention in post #1 you must have been assuming that your coordinates are ##x_1, x_2, x_3##. These are just names; instead of using ##x_i, i=1,2,3## or ##x,y,z## I could call the coordinates red, white and blue, or Tom, Dick and Harry. The only thing about them that matters is how they appear in formulas and how they can be manipulated algebraically.
joshmccraney said:
Yea, I spaced it big time! So just to be clear, how about we call ##\vec{r} = r_i \vec{e_i}##. Then we have $$\partial_j \partial_j (r_i \vec{e_i} r^{-n}) = \partial_j (\partial_j (r_i \vec{e_i} r^{-n}))\\
= \vec{e_i} \partial_j(\partial_j (r_i r^{-n}))\\
= \vec{e_i} \partial_j(r^{-n} \partial_j r_i + r_i \partial_j r^{-n})$$
But how do you simplify the term ##r^{-n} \partial_j r_i + r_i \partial_j r^{-n}##?

I was referring to the fact that if we define the components of ##\vec{r}## to be ##x_i## then ##x_i## is not the same as x,y,z since we don't know that ##\vec{r} = <x,y,z>##.

Call the components whatever you want. If you like ##\vec{r}=(x_1, x_2, x_3)## then go ahead and use those; if you prefer ##\vec{r}=(x,y, z)## then go with those; if you prefer ##\vec{r}=(\psi, \lambda, \sigma)## then use those instead. All that matters is how you use them.

However, if you want to use summation notation (and, possibly, the Einstein summation convention) then you had better use something like ##\vec{r}=(x_1, x_2, x_3)## or ##\vec{r}=(\xi_1,\xi_2, \xi_3)## or ##\vec{r}=(p_1, p_2, p_3)##, or even ##\vec{r} = (r_1,r_2,r_3)## as you seem to like best. It really and truly does not matter, but books and papers tend to use ##x,y,z## or ##x_1,x_2,x_3## most commonly, and the latter is what I was referring to.

As to the rest of your question: you seem to be saying that you do not know how to find derivatives like ##\partial x_i /\partial x_j## (of course, for ##i = j## and for ##i \neq j##), or derivatives like ##(\partial /\partial x_j) (x_1^2+x_2^2+x_3^2)^{-n/2}##, and I find that hard to believe.
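For reference, a sketch of the two derivatives in question, writing ##r = (x_1^2+x_2^2+x_3^2)^{1/2}##:
$$ \frac{\partial x_i}{\partial x_j} = \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}, \qquad \frac{\partial}{\partial x_j}\,(x_1^2+x_2^2+x_3^2)^{-n/2} = -\frac{n}{2}\,(x_1^2+x_2^2+x_3^2)^{-n/2-1}\,(2x_j) = -\frac{n\,x_j}{r^{n+2}}. $$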
 
  • #10
All of your post makes sense. I think what I'm having trouble with is thinking about ##\vec{r}##. Let's say ##\vec{r} = r_i \vec{e_i} = <r_1,r_2,r_3>##. Now let's get concrete and define ##<r_1,r_2,r_3> \equiv <2x,z,y+2>##. Then ##\partial_1 r_1 = \partial (r_1) / \partial x_1 = 2 \neq 1##. However, if we say ##\vec{r} = x_i \vec{e_i} = <x_1,x_2,x_3>\equiv <2x,z,y+2>## then ##\partial_1 x_1 = \partial (x_1) / \partial x_1 = 1 \neq 2##. Can you see what I'm confused about?
 
  • #11
joshmccraney said:
All of your post makes sense. I think what I'm having trouble with is thinking about ##\vec{r}##. Let's say ##\vec{r} = r_i \vec{e_i} = <r_1,r_2,r_3>##. Now let's get concrete and define ##<r_1,r_2,r_3> \equiv <2x,z,y+2>##. Then ##\partial_1 r_1 = \partial (r_1) / \partial x_1 = 2 \neq 1##. However, if we say ##\vec{r} = x_i \vec{e_i} = <x_1,x_2,x_3>\equiv <2x,z,y+2>## then ##\partial_1 x_1 = \partial (x_1) / \partial x_1 = 1 \neq 2##. Can you see what I'm confused about?

No. The ##x,y,z## (or ##r_1, r_2, r_3## if you prefer) are not functions of anything in this problem. Something else is a function of THEM, though, and that is what you have. That is, you are given a vector function
$$\vec{f} = \vec{i} \frac{x}{(x^2+y^2+z^2)^{n/2}}+ \vec{j} \frac{y}{(x^2+y^2+z^2)^{n/2}}+ \vec{k} \frac{z}{(x^2+y^2+z^2)^{n/2}}, $$
where ##\vec{i}##, ##\vec{j}## and ##\vec{k}## are the unit vectors along the x-, y- and z-axes. You want to compute
$$\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) \vec{f}, $$
and this can be done by doing it separately for the x-, y- and z-components of ##\vec{f}## (because the unit coordinate vectors ##\vec{i}##, ##\vec{j}## and ##\vec{k}## are constants).

Just to simplify the work---not for any other reason at all---I prefer to use the symbols ##x_1, x_2, x_3## instead of the symbols ##x,y,z## and to speak of components 1,2, and 3 of ##\vec{f}## instead of the x-, y- and z-components. That's all, nothing mysterious about it. Anyway, that is what YOU did in post #1 when you, yourself used the notation ##\partial_i##, etc. I am just following YOUR lead!
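As a sanity check on the target identity, here is a minimal symbolic verification sketch (assuming SymPy is available); it checks only the ##x##-component, since the ##y##- and ##z##-components follow by symmetry.

[code]
# Minimal symbolic check (assuming SymPy is available) that the x-component of
# f = r_vec / r^n satisfies Laplacian(f_x) = n (n - 3) x / r^(n + 2).
import sympy as sp

x, y, z, n = sp.symbols('x y z n')
r = sp.sqrt(x**2 + y**2 + z**2)

fx = x / r**n  # x-component of f; the y- and z-components follow by symmetry

lap_fx = sp.diff(fx, x, 2) + sp.diff(fx, y, 2) + sp.diff(fx, z, 2)
claimed = n*(n - 3)*x / r**(n + 2)

# simplify() should reduce the difference to zero if the identity holds
print(sp.simplify(lap_fx - claimed))  # expected output: 0
[/code]

Substituting a specific value such as ##n = 1## and spot-checking numerically at a few points is another quick way to catch sign errors.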
 
  • #12
Ray Vickson said:
No. The ##x,y,z## (or ##r_1, r_2, r_3## if you prefer) are not functions of anything in this problem. Something else is a function of THEM, though, and that is what you have.
This is exactly what I was confused about. So "position vector" implies the vector ##<x,y,z>##, or yea, any other name you want. It would never (unless otherwise stated) imply something like what I wrote, e.g. ##<x^2,z,4>##, right? This makes sense, since that must be what your position is. Sorry for all the other stuff; at the end of the day this was my real problem. Thanks!

In fact, that's why I seemingly didn't know derivatives or understand the dummy-variables behind the symbols ##x_i## and ##r_i##, etc. To me, it seemed like taking ##\partial r## was as arbitrary as taking ##\partial f##.
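For completeness, a sketch of how the computation finishes, starting from ##f_{ij} = \delta_{ij}\,r^{-n} - n\,x_i x_j\,r^{-n-2}## and summing over ##j## (with ##\delta_{jj} = 3## in three dimensions):
$$ \nabla^2 f_i = \partial_j f_{ij} = -\frac{n\,x_i}{r^{n+2}} - n\left(\frac{x_i}{r^{n+2}} + \frac{3\,x_i}{r^{n+2}} - \frac{(n+2)\,x_i}{r^{n+2}}\right) = \frac{n(n-3)\,x_i}{r^{n+2}}, $$
which is exactly the identity to be shown.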
 

Related to Understanding the Position Vector in Calculus Problems

1. What is the Laplacian of a vector?

The Laplacian of a vector field is a differential operator, written ##\nabla^2##, obtained by taking the divergence of the gradient. In Cartesian coordinates it acts component by component, and it describes the second-order spatial variation of a vector field at a given point.

2. How is the Laplacian of a vector calculated?

In Cartesian coordinates, the Laplacian of a vector is calculated component by component: for each component of the vector, take the second partial derivative with respect to each coordinate and add them together. It can also be expressed as the dot product of the gradient operator with itself, ##\nabla \cdot \nabla##, applied to each component.
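Writing ##\vec{F}## for a generic vector field with Cartesian components ##F_1, F_2, F_3##, this works out component by component as
$$ \left(\nabla^2 \vec{F}\right)_i = \frac{\partial^2 F_i}{\partial x^2} + \frac{\partial^2 F_i}{\partial y^2} + \frac{\partial^2 F_i}{\partial z^2}, \qquad i = 1, 2, 3. $$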

3. What is the significance of the Laplacian of a vector in physics?

The Laplacian of a vector is commonly used in physics to describe the behavior of vector fields, such as electric and magnetic fields. It also appears in many differential equations of physics, for example in the viscous term of the Navier-Stokes equations describing fluid flow.

4. How does the Laplacian of a vector relate to the Laplace equation?

The two are closely related. The Laplace equation, ##\nabla^2 \phi = 0##, states that the Laplacian of a scalar field vanishes. A vector field satisfies the corresponding vector Laplace equation exactly when each of its Cartesian components satisfies the scalar Laplace equation.

5. Can the Laplacian of a vector be extended to higher dimensions?

Yes, the Laplacian of a vector can be extended to higher dimensions. In Cartesian coordinates the definition is the same in any number of dimensions: sum the second partial derivatives with respect to every coordinate, applied to each component of the vector. On curved spaces, the appropriate generalization is the Laplace-Beltrami operator.
