The "Operator Norm" for Linear Transformations .... Browder, Lemma 8.4, Section 8.1, Ch. 8 .... ....

In summary, the thread discusses the operator norm for linear transformations, focusing on Lemma 8.4 in Section 8.1 of Chapter 8 of Andrew Browder's book "Mathematical Analysis: An Introduction". The discussion works through the key computation in the proof of Lemma 8.4 and clarifies the author's use of the same symbol for the standard bases of both the domain and the codomain of the linear operator.
  • #1
Math Amateur
The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ...

I am reading Andrew Browder's book: "Mathematical Analysis: An Introduction" ... ...

I am currently reading Chapter 8: Differentiable Maps and am specifically focused on Section 8.1 Linear Algebra ...

I need some help in fully understanding Lemma 8.4 ...

Lemma 8.4 reads as follows:

[Attachments 7452 and 7453: statement and proof of Lemma 8.4]

In the proof of the above Lemma we read the following:

" ... ... $$\lvert Tv \rvert^2 = \left\lvert \sum_{j=1}^m \sum_{k=1}^n a_k^j v^k e_j \right\rvert^2 = \sum_{j=1}^m \left( \sum_{k=1}^n a_k^j v^k \right)^2$$ ... ... "

Can someone please demonstrate why/how

$$\lvert Tv \rvert^2 = \left\lvert \sum_{j=1}^m \sum_{k=1}^n a_k^j v^k e_j \right\rvert^2 = \sum_{j=1}^m \left( \sum_{k=1}^n a_k^j v^k \right)^2 \ ?$$

Help will be much appreciated ... ...

Peter
 
  • #2
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

Hi Peter,

When orthonormal bases, say $\{e_{1},\ldots, e_{n}\}$ and $\{u_{1},\ldots, u_{m}\}$, are selected for $\mathbb{R}^{n}$ and $\mathbb{R}^{m}$, the linear operator $T$ can be represented by a matrix, which, in this case, the author denotes by $A$. Moreover, a vector $v$ in $\mathbb{R}^{n}$ can be expressed as a column vector whose component $v^{k}$ is the coefficient of $e_{k}$; i.e.,

$$v = \begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}=v^{1}e_{1}+\cdots + v^{n}e_{n} = \sum_{k=1}^{n}v^{k}e_{k}.$$

Thus the function/linear operator $T:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ can be computed as a matrix vector product:

$$Tv = A\begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}\qquad (*).$$

Now here's the rub, and the part I imagine is a bit confusing: The author is using the same symbol $e_{k}$ to simultaneously mean the standard basis for the domain of $T$, $\mathbb{R}^{n}$ - indicated by writing $v=\sum_{k}v^{k}e_{k}$ - as well as for the codomain of $T$, $\mathbb{R}^{m}$ - indicated in the double-sum over $j.$ Now, eventually, you will adjust and get used to this as it's common and not considered bad notation. But for someone trying to work out all the details for the first time, it can be a sticking point. Be that as it may, I will proceed from here assuming that everything up to and including the starred equation made sense.

In the equation $(*)$, $v$ is being expressed in the domain basis $\{e_{1},\ldots, e_{n}\}.$ Once we work out the matrix vector product, however, the column vector we obtain is tacitly being written in the codomain basis, $\{e_{1},\ldots, e_{m}\}$:

$$ Tv=A\begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}=\begin{bmatrix}\sum_{k=1}^{n}a^{1}_{k}v^{k}\\ \vdots \\ \sum_{k=1}^{n}a^{m}_{k}v^{k} \end{bmatrix}$$

The transition is subtle, often unstated, and often overlooked. To stress what I am saying, the column vector

$$\begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}$$

has height $n$ and the column vector

$$\begin{bmatrix}\sum_{k=1}^{n}a^{1}_{k}v^{k}\\ \vdots \\ \sum_{k=1}^{n}a^{m}_{k}v^{k} \end{bmatrix}$$

has height $m$. Now, if all that made sense, since the column vector of height $m$ is really a set of coefficients for the codomain basis $\{e_{1},\ldots, e_{m}\}$, we can write

$$Tv=\begin{bmatrix}\sum_{k=1}^{n}a^{1}_{k}v^{k}\\ \vdots \\ \sum_{k=1}^{n}a^{m}_{k}v^{k} \end{bmatrix}=\left( \sum_{k=1}^{n}a^{1}_{k}v^{k}\right)e_{1}+\cdots +\left(\sum_{k=1}^{n}a^{m}_{k}v^{k} \right)e_{m}=\sum_{j=1}^{m}\sum_{k=1}^{n}a^{j}_{k}v^{k}e_{j},$$

which is where the first equality you asked about comes from. The second equality says that the square of the length of a vector in Euclidean space, written with respect to the standard orthonormal basis $\{e_{1},\ldots, e_{m}\}$, is given by the Pythagorean theorem (i.e., sum the squares of the coefficients of the basis vectors). Note: this is exactly what you would do if asked to compute the square of the distance from the origin to the point $(x,y)=xe_{1}+ye_{2}$ in the plane: $x^{2}+y^{2}.$
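If it helps to see both equalities numerically, here is a minimal NumPy sketch (my own illustration with arbitrary numbers, not anything from Browder's text):

```python
import numpy as np

# Toy data (my own example, not from Browder): A is the matrix of T in the
# standard bases, with m = 2 rows and n = 3 columns.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
v = np.array([1.0, -1.0, 2.0])

# First equality: Tv is the vector whose j-th coefficient is sum_k a^j_k v^k.
Tv = A @ v                                        # array([ 5., 11.])

# Second equality (Pythagoras): |Tv|^2 equals the sum of squared coefficients.
lhs = Tv @ Tv                                     # |Tv|^2 computed directly
rhs = sum((A[j] @ v) ** 2 for j in range(A.shape[0]))
print(lhs, rhs)                                   # both print 146.0
```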

Hope this helps.
 
  • #3
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

GJA said:
Hope this helps.
GJA ... thanks so much for your help ...

Your post is a major help to me in understanding multivariable calculus/analysis ...

It is much appreciated...

Thanks again,

Peter
 
  • #4
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

Peter said:
GJA ... thanks so much for your help ...
Hi GJA

I've now worked through your post ... and I now (thanks to you) understand the first equality ... but am stuck on the details of the second equality (despite your hint) ...

Can you please help further with the second equality ...?

Peter
 
  • #5
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

Hi Peter,

The squared length of a vector $x$ written in terms of the standard basis $x=x^{1}e_{1}+\cdots + x^{m}e_{m}$ is given by the generalized Pythagorean theorem

$$|x|^{2}=\left(x^{1}\right)^{2}+\cdots +\left(x^{m}\right)^{2}=\sum_{j=1}^{m}(x^{j})^{2}.$$

The second equality is obtained by noting that, in your case, $x^{j}=\sum_{k=1}^{n}a^{j}_{k}v^{k}.$
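For a concrete instance (toy numbers of my own, not from the book): take $m=n=2$ with $a^{1}_{1}=1,\ a^{1}_{2}=2,\ a^{2}_{1}=3,\ a^{2}_{2}=4$ and $v=(1,1)$. Then

$$x^{1}=1\cdot 1+2\cdot 1=3,\qquad x^{2}=3\cdot 1+4\cdot 1=7,\qquad |Tv|^{2}=3^{2}+7^{2}=58.$$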
 
  • #6
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

GJA said:
The second equality is obtained by noting that, in your case, $x^{j}=\sum_{k=1}^{n}a^{j}_{k}v^{k}.$
Oh! Indeed ...!

Thanks GJA ...

Peter
 

Related to The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ...

What is the "Operator Norm" for Linear Transformations?

The "Operator Norm" for Linear Transformations is a mathematical concept that measures the size or magnitude of a linear transformation. It is also known as the norm or the length of a linear transformation. It is denoted by ||T|| and is defined as the supremum of the ratio of ||T(x)|| and ||x||, where x is a non-zero vector in the domain of the transformation T.

What is Browder's Lemma 8.4?

In the context of this thread, Lemma 8.4 in Browder's "Mathematical Analysis: An Introduction" concerns a linear map $T:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ represented by a matrix $A=(a^{j}_{k})$. Starting from the identity $|Tv|^{2}=\sum_{j=1}^{m}\big(\sum_{k=1}^{n}a^{j}_{k}v^{k}\big)^{2}$ discussed above, the Cauchy–Schwarz inequality yields $|Tv|\le \big(\sum_{j,k}(a^{j}_{k})^{2}\big)^{1/2}|v|$, so every linear map between Euclidean spaces is bounded and its operator norm is finite. Bounds of this kind are what make the operator norm usable in the study of differentiable maps.
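Spelling out the estimate (a standard Cauchy–Schwarz step, not quoted verbatim from Browder):

$$|Tv|^{2}=\sum_{j=1}^{m}\left(\sum_{k=1}^{n}a^{j}_{k}v^{k}\right)^{2}\le\sum_{j=1}^{m}\left(\sum_{k=1}^{n}\left(a^{j}_{k}\right)^{2}\right)\left(\sum_{k=1}^{n}\left(v^{k}\right)^{2}\right)=\left(\sum_{j=1}^{m}\sum_{k=1}^{n}\left(a^{j}_{k}\right)^{2}\right)|v|^{2}.$$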

What is Section 8.1 in Chapter 8 about?

Section 8.1, "Linear Algebra", is part of Chapter 8, "Differentiable Maps", of Browder's book. It reviews the linear-algebra background needed for multivariable differentiation: representing linear maps $T:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ by matrices and controlling their size via the operator norm. Lemma 8.4, discussed in this thread, belongs to that review.

What is the significance of the "Operator Norm" for Linear Transformations?

The "Operator Norm" for Linear Transformations is a crucial concept in functional analysis, as it allows us to measure the size or magnitude of a linear transformation. It also plays a significant role in studying the properties and behavior of linear operators, which are essential in many areas of mathematics, including differential equations, optimization, and quantum mechanics.

How is the "Operator Norm" for Linear Transformations computed?

The "Operator Norm" for Linear Transformations is computed by taking the supremum of the ratio of the norm of the transformed vector and the norm of the original vector. Mathematically, it can be expressed as ||T|| = sup{||T(x)|| : ||x|| = 1}. This value can also be approximated by using numerical methods, such as the power method or the inverse power method, which are commonly used in numerical linear algebra.
