Where does the gradient operator come from?

In summary, the gradient of a function is, by definition, the vector of its partial derivatives. The rationale parallels ordinary single-variable calculus, where we look at the derivative of a function to say something sensible about its behavior, for instance its extrema.
  • #1
I_laff
Can someone explain why the gradient of a function is just a vector made up of partial derivatives of the function?
 
  • #2
That's the definition. The rationale is the same as in ordinary calculus, where we look at the derivative of a function to say something sensible about extrema. If you consider a function of multiple arguments/variables which are independent (!), you stack all its partial derivatives in one vector* and call it the gradient.

* Technically, a gradient is a so-called one-form and not a vector, a distinction which becomes important when you want to use non-Cartesian coordinates or consider curved manifolds.
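For concreteness (in the Cartesian setting the thread assumes), for a scalar function of three variables that stacking reads
$$\nabla f(x,y,z) = \left(\frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y},\ \frac{\partial f}{\partial z}\right).$$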
 
  • #3
The main idea comes from functions of one variable f(x), where the derivative with respect to x at the point ##x##, that is ##f'(x)##, is the slope of the tangent line to the graph of f at the point ##(x,f(x))##.
So, suppose we want to make an operator that takes as input a function f(x) and gives as output a vector such that:

1) the magnitude of the output vector is the slope of the function with respect to that variable, and
2) the direction of the output vector is the direction of the line of that variable (in one variable, or one dimension, we have just a single direction, a single line).

Then the operator is just ##\vec{i}\frac{\partial}{\partial x}##.

The most straightforward generalization of this operator to 3 dimensions is just the gradient operator. Think about it: each component of the output vector of the gradient is now the slope of the function f(x,y,z) with respect to the corresponding variable. That is, the ##\vec{i}## component gives us the slope of f with respect to the variable x, the ##\vec{j}## component gives us the slope with respect to y, and the ##\vec{k}## component gives us the slope with respect to z.
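A quick worked example (the function here is chosen just for illustration): for ##f(x,y,z)=x^2y+z^3##,
$$\nabla f = \vec{i}\,\frac{\partial f}{\partial x} + \vec{j}\,\frac{\partial f}{\partial y} + \vec{k}\,\frac{\partial f}{\partial z} = 2xy\,\vec{i} + x^2\,\vec{j} + 3z^2\,\vec{k}.$$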
 
  • #4
I_laff said:
Can someone explain why the gradient of a function is just a vector made up of partial derivatives of the function?
I think it helps to consider the partial derivatives as the basis of the tangent space. The gradient is then a certain derivative (derivatives are always directional), expressed in this basis.
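In formulas, using standard tangent-space notation: a tangent vector is ##v=\sum_i v^i\,\partial_i##, and the derivative of ##f## along ##v## is
$$D_v f = \sum_i v^i\,\frac{\partial f}{\partial x^i},$$
so the coefficients ##\partial f/\partial x^i## are exactly the components that get collected into the gradient.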
 
  • #5
It is a straightforward generalization of the single-variable derivative to a multivariable function. Recall that if ##f:\mathbb{R}\rightarrow\mathbb{R}##, then
$$f'(x) = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}$$
If we try to generalize this to two-variable functions, we have a problem with the denominator:
$$f'(x,y) = \lim_{(h,k)\to 0} \frac{f(x+h, y+k) - f(x,y)}{(h,k)}$$
The denominator is a displacement vector, and there is no consistent way to divide by a vector. However, what we want to do is divide by the length of the displacement vector, which is what we really mean when we divide by h in 1 dimension too. So let's do that instead:
$$f'(x,y) = \lim_{(h,k)\to 0} \frac{f(x+h, y+k) - f(x,y)}{||(h,k)||}$$
This looks like a derivative. But recall that in 1-dimension, the derivative was defined so that it provided the best linear approximation to f around the point at which it was taken. It was not really a number: it was a tool for finding the correct tangent line.
That is what we want in a multivariable derivative: a tool for finding the best linear approximation to f(x,y) at the point the derivative is applied. In 1 dimension, the derivative was applied to the displacement h to get the linear approximation: ##f(x + h) \approx f(x) + f'(x)\cdot h##.
A linear 2-variable approximation based on the displacement vector (h, k) would then be ##f(x_0 + h, y_0 + k) \approx f(x_0, y_0) + u\cdot h + v\cdot k##, where u and v are constants.
So ##f'(x,y)## somehow has to give us two numbers, u and v, to apply to the displacement vector (h,k) in order to get our linear approximation. We know the dot product gives us that expression, so f'(x,y) must be the vector (u, v). Applying f'(x,y) to a particular displacement vector (h,k) must then be done with the dot product.
But hold on. Our definition doesn't look like it is going to give us a vector:
$$f'(x,y) = \lim_{(h,k)\to 0} \frac{f(x+h, y+k) - f(x,y)}{||(h,k)||}$$
If ##f(x,y)## is a scalar-valued function, then the right side is a single number, not a 2-component vector. So whatever that is, it is not what we want. To get what we want, mathematicians made a slight adjustment to the definition. Recalling that for single-variable functions we have:
$$f'(x) = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}$$
we can do some sleight of hand to unify the two sides. If ##f'(x)## exists, then ##\lim_{h\to 0} f'(x) = f'(x)##, so
$$\lim_{h\to 0} f'(x) = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}$$
These two sides are just numbers, so we can do algebra:
$$0 = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h} - \lim_{h\to 0} f'(x)$$
$$0 = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h} - \lim_{h\to 0} \frac{f'(x)\cdot h}{h}$$
$$0 = \lim_{h\to 0} \frac{f(x+h) - f(x) - f'(x)\cdot h}{h}$$
It follows that this is an equivalent definition of the derivative, where we can see the linear approximation that it provides directly in the numerator. This motivates us to try the following definition for multivariable derivatives: ##f'(x,y)## is, at each point ##(x,y)##, the unique linear function ##L(h,k)## such that:
$$0 = \lim_{(h,k)\to 0} \frac{f(x+h, y+k) - f(x,y) - L(h,k)}{||(h,k)||}$$
That is, the derivative at each point is the unique linear function that vanishes at the same rate as the function with the displacement vector. Recalling that L is a linear function of the displacement vector (h,k), it must be of the form ##L(h,k) = u\cdot h + v\cdot k## (indeed, by linearity ##L(h,k)=h\,L(1,0)+k\,L(0,1)##, so take ##u=L(1,0)## and ##v=L(0,1)##).
Use this fact in the limit, and find out what u and v must be. You will answer your own question then. :-)
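For completeness, here is that last step sketched out (spoiler for the exercise above): take the displacement along one axis at a time, say ##(h,k)=(h,0)##. Then
$$0 = \lim_{h\to 0}\frac{f(x+h,y)-f(x,y)-u\cdot h}{|h|} \quad\Longrightarrow\quad u=\frac{\partial f}{\partial x},$$
and displacements ##(0,k)## give ##v = \partial f/\partial y##. So ##L## is the dot product with ##\left(\frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y}\right)##, which is exactly the gradient ##\nabla f##.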
 
  • #6
Another way to answer is that vectors contain information about something; for example, the velocity vector gives us information on how fast something is moving and the direction in which it is moving.

The gradient vector of a function gives us information about each of the first partial derivatives (the magnitude of each component gives us the first partial derivative with respect to the corresponding variable). The direction of the gradient gives us some information about which partial derivative is greater; for example, if the partial derivative with respect to x is much greater than the partial derivatives with respect to y and z, then the gradient vector will tend to point in the x-direction.
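A tiny example (numbers chosen just to illustrate the point): for ##f(x,y)=5x+y##,
$$\nabla f = (5,\ 1),$$
which points almost along the x-axis because ##\partial f/\partial x = 5## dominates ##\partial f/\partial y = 1##.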
 
  • #7
Another way to generalize is to use the Gâteaux differential and derivative definitions. View the derivative of a function at a point as a linear operator mapping the differentials of the input variables to differentials of the output variables.

So if ##y=f(x)## is a single-variable function, then ##dy = f'(x)dx##, and the derivative acts as the simple 1-dimensional linear operator we call "multiplication by a number".
But in higher dimensions, where ##\mathbf{x}## is a vector, say ##\mathbf{x}=\langle x,y,z\rangle## and we have a scalar valued function of this vector:
## u = f(\mathbf{x}) = f(x,y,z)##
the derivative must map a vector differential to a scalar differential:
[tex] du = f'(\mathbf{x})[ d\mathbf{x}][/tex]
(I'm using square brackets here as function notation for a specifically linear function. Note we have a distinct linear function ##f'(\mathbf{x})## at each value of ##\mathbf{x}## which then acts on the differential ##d\mathbf{x}##.)

In this case the derivative must be something called a dual vector, or a linear functional. It is also a vector but living in the dual space of the space where ##d\mathbf{x}## lives. But we don't like to get into this deeper aspect of linear algebra in the Calc III courses and we also have a nice way of avoiding explicit reference to dual spaces (when we work in finite dimensions) by use of something called the Riesz Representation Theorem. It says, effectively, that we can always (in finite dimensional inner product spaces) define the action of a dual vector on a vector as the action of taking the dot product ##\bullet## with some specific vector. In technical terms:

"For ##\phi## a linear functional there exists some vector ##\mathbf{v}## such that ##\phi[\mathbf{x}] = \mathbf{v}\bullet \mathbf{x}##."

In the context of defining the derivative it means we can define the derivative functional as the dot product of some vector.

[tex] du = f'(\mathbf{x})[d\mathbf{x}] \equiv \nabla f (\mathbf{x})\bullet d\mathbf{x}[/tex]
([edit]: in short, [itex]f'(\mathbf{x}) = \nabla f(\mathbf{x})\bullet[/itex].)
This vector ##\nabla f##, used to express the derivative, is what we define as the gradient vector. It is not ideal, because we don't always have a dot product (or just one of them) defined on our space, and the gradient depends on the definition of the dot product while this generalized derivative does not. But it is very convenient if you don't want to whip out the full toolbox of linear algebra with its dual spaces and such.
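In Cartesian coordinates this is just the familiar total-differential formula (spelled out here for concreteness):
[tex] du = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy + \frac{\partial f}{\partial z}dz = \nabla f \bullet \langle dx, dy, dz\rangle,[/tex]
which is why, with the standard dot product, the representing vector ##\nabla f## ends up having the partial derivatives as its components.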

Do note that the gradient operator is something more: it is the operator mapping a function to its gradient vector, ##\nabla: f \mapsto \nabla f##. But it can be applied to vector-valued functions as well, and can be combined with other vector operations to define, for example, the curl and divergence operators.
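For example, treating ##\nabla## formally as the vector of partial-derivative operators gives the familiar Cartesian expressions
[tex]\nabla\bullet\mathbf{F} = \frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}+\frac{\partial F_z}{\partial z},\qquad \nabla\times\mathbf{F} = \left(\frac{\partial F_z}{\partial y}-\frac{\partial F_y}{\partial z},\ \frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x},\ \frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right).[/tex]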

Now as to why the gradient is just the vector of partial derivatives: this is not always the case, and it depends specifically on the fact that you are working with rectilinear coordinates where the coordinate basis is orthonormal. You'll find, for example, that if you re-express a quantity in polar coordinates, its gradient vector is no longer simply the vector of partial derivatives. However, it does obey a very simple "chain rule", so you can work out gradients in any coordinate system simply by remembering:
[tex]\nabla f(u,v,\ldots) = \frac{\partial f}{\partial u}\nabla u + \frac{\partial f}{\partial v}\nabla v + \ldots[/tex]
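For example (standard polar-coordinate facts): in the plane with polar coordinates ##(r,\theta)## we have ##\nabla r = \hat{r}## and ##\nabla \theta = \hat{\theta}/r##, so the rule gives
[tex]\nabla f(r,\theta) = \frac{\partial f}{\partial r}\,\hat{r} + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{\theta},[/tex]
which is indeed not simply the vector of partial derivatives ##(\partial f/\partial r,\ \partial f/\partial\theta)##.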
 
  • #8
I_laff said:
Can someone explain why the gradient of a function is just a vector made up of partial derivatives of the function?

Assuming you are asking this question for Euclidean 3 space, how do you define the gradient of a function if not the vector of partials with respect to the Euclidean coordinate axes? Are you using some other definition?

If you are thinking of a gradient as the direction and rate at which a physical quantity ##f(x,y,z)## changes most rapidly, then this turns out to be the vector of partial derivatives, ##(∂f/∂x,∂f/∂y,∂f/∂z)##.

To show this one needs to show:

1) The function changes most rapidly in the normal direction to the surfaces ##f(x,y,z)=k## on which ##f## is constant.

2) ##∇f=(∂f/∂x,∂f/∂y,∂f/∂z)## is normal to the surface ##f(x,y,z)=k##.

3) The directional derivative of ##f## in the direction of ##∇f## is the length of ##∇f##.

The last two properties follow from the Chain Rule.
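Sketch of those Chain Rule computations (standard arguments, filled in for completeness): for the second property, take any curve ##\mathbf{r}(t)## lying in the level surface, so ##f(\mathbf{r}(t))=k##; differentiating with the Chain Rule gives ##\nabla f\cdot\mathbf{r}'(t)=0##, i.e. ##∇f## is perpendicular to every tangent direction of the surface. For the third, the Chain Rule gives
$$\left.\frac{d}{dt}f\!\left(\mathbf{x}+t\,\frac{\nabla f}{\|\nabla f\|}\right)\right|_{t=0} = \nabla f\cdot\frac{\nabla f}{\|\nabla f\|} = \|\nabla f\|.$$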
 

Related to Where does the gradient operator come from?

1. What is the gradient operator?

The gradient operator is a vector differential operator used in vector calculus: applied to a scalar field it produces the gradient, from which directional derivatives of the field are obtained. It is represented by the symbol ∇ (del) and is also known as the nabla operator.

2. Where did the gradient operator come from?

The gradient operator was first introduced by the mathematician William Rowan Hamilton in 1837. It was later refined by Lord Kelvin and Peter Guthrie Tait in the 1850s and became an essential tool in vector calculus.

3. How is the gradient operator used?

The gradient operator is used to calculate the rate of change of a scalar field. It takes the partial derivative of the field with respect to each of its variables and collects them into a vector; the rate of change in a particular direction is then obtained by taking the dot product of that vector with a unit vector in that direction.

4. What is the relation between the gradient operator and the gradient vector?

The gradient operator is used to calculate the gradient vector, the vector whose components are the partial derivatives of a scalar field. The gradient vector is obtained by applying the gradient operator to the scalar field.

5. Are there any other applications of the gradient operator?

Yes, the gradient operator is also used in physics and engineering to calculate various physical quantities, such as the electric field from the electric potential, temperature gradients, and pressure gradients in fluid flow. It is also used in optimization problems to find the maximum or minimum value of a function.
