Differentiation on R^n ... need/use of norms ...

  • #1
Math Amateur
I am reading Hugo D. Junghenn's book: "A Course in Real Analysis" ...

I am currently focused on Chapter 9: "Differentiation on ##\mathbb{R}^n##" ...

I need some help with an aspect of Theorem 9.1.10 ...

Theorem 9.1.10 reads as follows:
(Attached image: Junghenn - Theorem 9.1.10.png)

The proof of Theorem 9.1.10 relies on the definition of the derivative of a vector-valued function of several variables ... that is, Definition 9.1.6 ... so I am providing the same ... as follows:
(Attached image: Junghenn - Definition 9.1.6.png)

In Junghenn's proof of Theorem 9.1.10 above, we read the following:

" ... ... and

##\eta (h) = \frac{ f(a + h ) - f(a) - df_a (h) }{ \| h \| }## if ##h \neq 0##

... ... "Now there are no norm signs around this expression (with the exception of around ##h## in the denominator ...) ... and indeed no norm signs around the expression ##\lim_{ h \rightarrow 0 } \eta(h) = 0## ... nor indeed are there any norm signs in the limit shown in Definition 9.1.6 above (with the exception of around ##h## in the denominator ...) ...

... BUT ...

... ... this lack of norm signs seems in contrast to the last few lines of the proof of Theorem 9.1.10 as follows ... where we read ...

" ... ... Conversely if (9.6) holds for some ##\eta## and ##T##, then##\lim_{ h \rightarrow 0 } \frac{ \| f( a + h ) - f(a) - Th \| }{ \| h \| } = \lim_{ h \rightarrow 0 } \| \eta(h) \| = 0##... ... "Here, in contrast to the case above, there are norm signs around the numerator and indeed around ##\eta(h)## ... ...
Can someone please explain why norm signs are used in the numerator and, indeed, around ##\eta(h)## in one case ... yet not in the other ...?
Help will be appreciated ...

Peter
 

  • #2
The norm signs are not needed in the numerator. The line that contains those is still true if they are removed. It's just that the 0 being referred to is the zero of the vector space ##\mathbb R^m## rather than the scalar 0 of ##\mathbb R_+##.

It may be that the author put the norm signs in the numerator in order to exactly match what was used in the definition of 'differentiable'.

You should satisfy yourself that, if ##g:\mathbb R^n\to \mathbb R^m##, then:
$$\lim_{\mathbf x\to \mathbf a}g(\mathbf x) = \mathbf 0 \Rightarrow
\lim_{\mathbf x\to \mathbf a}\|g(\mathbf x)\| = 0
$$
You can use the theorem for the limit of the composition of two functions, each of which has a limit.
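
In a bit more detail (a sketch; ##N## here is just shorthand for the norm map, not notation from Junghenn): write
$$N:\mathbb R^m\to \mathbb R,\qquad N(\mathbf y)=\|\mathbf y\|,\qquad\text{so that}\qquad \|g(\mathbf x)\| = (N\circ g)(\mathbf x).$$
Since ##N## is continuous at ##\mathbf 0## and ##N(\mathbf 0)=0##, the composition-of-limits theorem applied to ##g## (which tends to ##\mathbf 0##) and ##N## gives ##\lim_{\mathbf x\to \mathbf a}(N\circ g)(\mathbf x) = 0##.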
 
  • #3
andrewkirk said:
You should satisfy yourself that, if ##g:\mathbb R^n\to \mathbb R^m##, then:
$$\lim_{\mathbf x\to \mathbf a}g(\mathbf x) = \mathbf 0 \Rightarrow
\lim_{\mathbf x\to \mathbf a}\|g(\mathbf x)\| = 0
$$
You can use the theorem for the limit of the composition of two functions, each of which has a limit.
Hi Andrew ... thanks for the help ...

But your post has really got me thinking ... I think I need some more help, though ...

Can you explain how the theorem for the limit of the composition of two functions could be used to prove the above implication?

Peter
 
  • #4
Math Amateur said:
Can you explain how the theorem for the limit of the composition of two functions could be used to prove the above implication?

Peter

The norm is continuous. Therefore, you can swap limit and norm. The statement then becomes trivial. It is also not difficult to prove this implication starting from the epsilon-delta definition, using the reverse triangle inequality.
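
Spelled out as a sketch, the "swap" is
$$\lim_{\mathbf x\to \mathbf a}\|\mathbf g(\mathbf x)\| = \Big\|\lim_{\mathbf x\to \mathbf a}\mathbf g(\mathbf x)\Big\| = \|\mathbf 0\| = 0,$$
where the first equality holds because the norm ##\|\cdot\|:\mathbb R^m\to\mathbb R## is continuous.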
 
  • #5
Math_QED said:
The norm is continuous. Therefore, you can swap limit and norm. The statement then becomes trivial. It is also not difficult to prove this implication starting from the epsilon-delta definition, using the reverse triangle inequality.
Thanks Math_QED ...

I cannot quite see how to prove the implication from the epsilon-delta definition using the reverse triangle inequality ... but to start the process ...

We want to show that if
##g:\mathbb R^n\to \mathbb R^m##, then:

##\lim_{\mathbf x\to \mathbf a} \mathbf g (\mathbf x) = \mathbf 0 \Rightarrow
\lim_{\mathbf x\to \mathbf a}\| \mathbf g (\mathbf x)\| = 0##

Now ...

##\lim_{\mathbf x\to \mathbf a} \mathbf g (\mathbf x) = \mathbf 0##

##\Longrightarrow## ... for every ##\epsilon \gt 0## there exists a ##\delta \gt 0## such that ...

## \| \mathbf{g} ( \mathbf{x} ) - \mathbf{0} \| = \| \mathbf{g} ( \mathbf{x} ) \| \lt \epsilon ##

whenever ##0 \lt \| \mathbf{x} - \mathbf{a} \| \lt \delta ##

... ... ... But ... where do we go from here ... how do we get a situation where we can use the reverse triangle inequality ...?

Can you help ...?

Peter
 
  • #6
Math Amateur said:
... ... ... But ... where do we go from here ... how do we get a situation where we can use the reverse triangle inequality ...?

We prove a more general statement. Let ##a## be a limit point of ##A \subseteq \mathbb{R}^n##.

Let ##f: A \subseteq \mathbb{R}^n \to \mathbb{R}^m## be a function. If ##\lim_{x \to a} f(x) = b##, then ##\lim_{x \to a} \Vert f(x) \Vert = \Vert b \Vert##. (Your case is ##A = \mathbb{R}^n## with ##b = \mathbf 0##.)

Proof: Let ##\epsilon > 0##. Choose ##\delta>0## such that for all ##x \in A## satisfying ##0 < \Vert x-a \Vert < \delta##, we have that ##\Vert f(x) - b \Vert < \epsilon##. Then, whenever ##x \in A## satisfies ##0 <\Vert x - a \Vert < \delta##, we have:

##|\Vert f(x) \Vert - \Vert b \Vert | \leq \Vert f(x) - b \Vert < \epsilon##

Hence, we have proven:

##\forall \epsilon > 0: \exists \delta > 0: \forall x \in A: 0 < \Vert x - a \Vert < \delta \implies |\Vert f(x) \Vert - \Vert b \Vert | < \epsilon##, which is what we wanted to show.
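
For completeness: the inequality ##|\Vert f(x) \Vert - \Vert b \Vert| \leq \Vert f(x) - b \Vert## used above is the reverse triangle inequality, which follows from the ordinary triangle inequality:
$$\Vert f(x)\Vert \leq \Vert f(x)-b\Vert + \Vert b\Vert \quad\text{and}\quad \Vert b\Vert \leq \Vert b-f(x)\Vert + \Vert f(x)\Vert,$$
so both ##\Vert f(x)\Vert - \Vert b\Vert## and ##\Vert b\Vert - \Vert f(x)\Vert## are at most ##\Vert f(x)-b\Vert##. Taking ##b = \mathbf 0## gives exactly the implication asked about earlier in the thread.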
 
  • #7
Thanks Math_QED ... appreciate your help...

Peter
 

What is differentiation on R^n?

Differentiation on R^n is a mathematical concept that involves finding the rate of change of a function in n-dimensional space. It generalizes the familiar notion of differentiation for functions of one, two, or three variables to functions of n variables.

Why is differentiation on R^n important?

Differentiation on R^n is important because it allows us to understand the behavior of functions in multiple dimensions and to solve problems in fields such as physics, engineering, and economics. It also plays a crucial role in optimization and calculus.

How is differentiation on R^n different from differentiation on R^2 or R^3?

The main difference is that in R^n, we are considering functions that have multiple input variables (n) instead of just two or three. This makes the process of differentiation more complex, as we have to account for all the variables and their interactions.

What is the need for norms in differentiation on R^n?

Norms are essential in differentiation on R^n because they allow us to measure the size or magnitude of a vector in n-dimensional space. This is crucial for defining the derivative: the defining limit compares the size of the approximation error ##f(\mathbf a + \mathbf h) - f(\mathbf a) - df_{\mathbf a}(\mathbf h)## with the size of the increment ##\mathbf h##.

How are norms used in differentiation on R^n?

Norms are used in differentiation on R^n to define the concept of differentiability itself. A function is said to be differentiable at a point if the norm of the error of its best linear approximation, divided by the norm of the increment, tends to zero as the increment tends to zero, as shown below.
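
Concretely, this is the limit that appears in the thread above (Definition 9.1.6 / condition (9.6) in Junghenn): ##f:\mathbb R^n \to \mathbb R^m## is differentiable at ##\mathbf a## precisely when there is a linear map ##T:\mathbb R^n \to \mathbb R^m## with
$$\lim_{\mathbf h\to\mathbf 0}\frac{\Vert f(\mathbf a+\mathbf h)-f(\mathbf a)-T\mathbf h\Vert}{\Vert\mathbf h\Vert}=0,$$
where the norm in the numerator is taken in ##\mathbb R^m## and the norm in the denominator in ##\mathbb R^n##.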
