Adjoint of an adjoint of a linear map

  • #1
"Don't panic!"
My question is as it says in the title really. I've been reading Nakahara's book on geometry and topology in physics and I'm slightly stuck on a part concerning adjoint mappings between vector spaces. It is as follows:

Let [itex]W=W(n,\mathbb{R})[/itex] be a vector space with a basis [itex]\lbrace\mathbf{f}_{\alpha}\rbrace[/itex] and a vector space isomorphism [itex]G:W\rightarrow W^{\ast}[/itex].
Given a map [itex]f:V\rightarrow W[/itex] we may define the [itex]\bf{adjoint}[/itex] of [itex]f[/itex], denoted by [itex]\tilde{f}[/itex], by [tex]G(\mathbf{w},f\mathbf{v}) =g(\mathbf{v},\tilde{f}\mathbf{w})[/tex] where [itex]\mathbf{v}\in V[/itex] and [itex]\mathbf{w}\in W[/itex], and [itex]g(\cdot,\cdot)[/itex] is the inner product on [itex]V[/itex], here evaluated on the vectors [itex]\mathbf{v}[/itex] and [itex]\tilde{f}\mathbf{w}[/itex].

He then goes on to say that "it is easy to see from this, that [itex]\widetilde{(\tilde{f})}=f[/itex]".
I'm having trouble showing that this is true given the definitions above.
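
(For anyone who wants to experiment numerically: here is a minimal sketch of the real, finite-dimensional case, assuming the metrics are given by symmetric positive definite matrices ##g## and ##G## — the matrices and helper names below are illustrative, not from Nakahara. Requiring the defining relation for all vectors forces ##\tilde{f}=g^{-1}f^{T}G##, which can be checked on random vectors.)

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4                      # dim V = 3, dim W = 4

def random_spd(k):
    """A random symmetric positive definite matrix (a valid real metric)."""
    a = rng.standard_normal((k, k))
    return a @ a.T + k * np.eye(k)

g = random_spd(m)                # metric on V:  g(x, y) = x^T g y
G = random_spd(n)                # metric on W:  G(x, y) = x^T G y
f = rng.standard_normal((n, m))  # a linear map f : V -> W

# Requiring  w^T G (f v) = v^T g (f~ w)  for all v, w  gives  f~ = g^{-1} f^T G.
f_adj = np.linalg.solve(g, f.T @ G)   # the adjoint  f~ : W -> V

v, w = rng.standard_normal(m), rng.standard_normal(n)
assert np.isclose(w @ G @ (f @ v),        # G(w, f v)
                  v @ g @ (f_adj @ w))    # g(v, f~ w)
```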
 
  • #2
Edit: I started writing this before you posted your answer (since deleted). Our proofs are essentially the same. The main difference is that I assumed that we're working with complex inner product spaces.

The adjoint of a linear map ##A:V\to W## is the unique ##A^*:W\to V## such that
$$\langle w,Av\rangle_W=\langle A^*w,v\rangle_V$$ for all ##v\in V## and all ##w\in W##. So (assuming that ##A^*## is linear...this needs to be proved) the adjoint of ##A^*## is the unique ##A^{**}:V\to W## such that
$$\langle v,A^*w\rangle_V=\langle A^{**}v,w\rangle_W$$ for all ##w\in W## and all ##v\in V##.

I will use these facts about the adjoint operation and the definition of the inner product to prove that ##A^{**}=A##. For all ##v\in V## and all ##w\in W##, we have
$$\langle A^{**}v,w\rangle_W = \langle v,A^*w\rangle_V =\langle A^*w,v\rangle_V^* = \langle w,Av\rangle_W^* =\langle Av,w\rangle_W,$$ and therefore
$$\langle A^{**}v-Av,w\rangle_W =0.$$ This implies that for all ##v\in V##, we have
$$\langle A^{**}v-Av,A^{**}v-Av\rangle_W=0,$$ and therefore
$$A^{**}v-Av=0.$$ This implies that ##A^{**}=A##.
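
A quick numerical sanity check of the above (my own sketch, assuming the standard complex inner product ##\langle x,y\rangle=x^\dagger y##, conjugate-linear in the first slot as in the convention used here, so that the adjoint of a matrix is its conjugate transpose):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))  # A : V -> W

def adjoint(M):
    """Adjoint w.r.t. the standard inner product <x, y> = x^dagger y."""
    return M.conj().T

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# Defining property:  <w, A v>_W = <A* w, v>_V   (np.vdot conjugates its first argument)
assert np.isclose(np.vdot(w, A @ v), np.vdot(adjoint(A) @ w, v))

# Double adjoint:  A** = A
assert np.allclose(adjoint(adjoint(A)), A)
```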
 
  • #3
Thanks Fredrik :-) ... I got cold-feet with my answer and so deleted it.
So is the notation [itex]G(\cdot,\cdot)[/itex] just denoting the inner product for the vector space [itex]W[/itex]?
Also, in the notation that I used is it correct to say that the adjoint mapping of [itex]f:V\rightarrow W[/itex] is the linear map [itex]\tilde{f}:W\rightarrow V[/itex], and therefore the adjoint mapping of [itex]\tilde{f}:W\rightarrow V[/itex] is the linear map [itex]\widetilde{(\tilde{f})}:V\rightarrow W[/itex]?

To prove that the adjoint map [itex]\tilde{f}:W\rightarrow V[/itex] of [itex]f[/itex] is unique, let [itex]h:W\rightarrow V[/itex] be any other map satisfying [tex]\langle \mathbf{w}, f\mathbf{v}\rangle_{W}=\langle \mathbf{v}, h\mathbf{w}\rangle_{V}[/tex]
then [tex]\langle \mathbf{v}, h\mathbf{w}\rangle_{V}=\langle \mathbf{w}, f\mathbf{v}\rangle_{W}=\langle \mathbf{v}, \tilde{f}\mathbf{w}\rangle_{V}[/tex] which implies that [itex]h=\tilde{f}[/itex], i.e. the adjoint map of [itex]f[/itex] is unique. (Is this correct?!)

To prove that [itex]\tilde{f}[/itex] is linear, let [itex]\mathbf{w}_{1}, \mathbf{w}_{2}\in W[/itex] and [itex]\mathbf{v}\in V[/itex]. Then, as [itex]f[/itex] is linear, we have [tex] \langle \mathbf{v}, \tilde{f}(\mathbf{w}_{1}+\mathbf{w}_{2})\rangle_{V}= \langle \mathbf{w}_{1}+\mathbf{w}_{2}, f\mathbf{v}\rangle_{W}=\langle \mathbf{w}_{1}, f\mathbf{v}\rangle_{W}+\langle \mathbf{w}_{2}, f\mathbf{v}\rangle_{W}[/tex]
[tex]\qquad\qquad\qquad\qquad =\langle \mathbf{v}, \tilde{f}\mathbf{w}_{1}\rangle_{V}+\langle \mathbf{v}, \tilde{f}\mathbf{w}_{2}\rangle_{V}[/tex] where we have also used the linearity of the inner product.
Next, let [itex]\mathbf{v}\in V, \mathbf{w}\in W[/itex] and [itex]c\in \mathbb{R}[/itex]. Then,
[tex]\langle \mathbf{v}, \tilde{f}(c\mathbf{w})\rangle_{V}= \langle c\mathbf{w}, f\mathbf{v}\rangle_{W} =c\langle \mathbf{w}, f\mathbf{v}\rangle_{W}= c\langle \mathbf{v}, \tilde{f}\mathbf{w}\rangle_{V}=\langle \mathbf{v}, c\tilde{f}\mathbf{w}\rangle_{V}[/tex]
Since [itex]\mathbf{v}[/itex] is arbitrary, these two calculations imply that [itex]\tilde{f}(\mathbf{w}_{1}+\mathbf{w}_{2})=\tilde{f}\mathbf{w}_{1}+\tilde{f}\mathbf{w}_{2}[/itex] and [itex]\tilde{f}(c\mathbf{w})=c\tilde{f}\mathbf{w}[/itex]. Would this be correct?
 
  • #4
"Don't panic!" said:
So is the notation [itex]G(\cdot,\cdot)[/itex] just denoting the inner product for the vector space [itex]W[/itex]?
I don't fully understand what's going on in post #1. You say that ##G:W\to W^*## is an isomorphism. Is ##W^*## the dual space of ##W##? If the domain of ##G## is ##W##, then why do you write ##G(\mathbf w,f\mathbf v)## (which suggests that the domain is ##W\times W##)? Is the latter ##G## defined from the first by ##G(x,y)=G(x)(y)##? And finally, you don't mention that a map must be linear to have an adjoint. I'm not aware of a definition that applies to non-linear maps between vector spaces.

"Don't panic!" said:
Also, in the notation that I used is it correct to say that the adjoint mapping of [itex]f:V\rightarrow W[/itex] is the linear map [itex]\tilde{f}:W\rightarrow V[/itex], and therefore the adjoint mapping of [itex]\tilde{f}:W\rightarrow V[/itex] is the linear map [itex]\widetilde{(\tilde{f})}:V\rightarrow W[/itex]?
These statements are certainly true if for every linear transformation ##A## you denote its adjoint by ##\tilde A##. But if that's how you define the notation, the statements are saying things like "the adjoint of ##f## is the adjoint of ##f##".

Your notation is non-standard. The standard notations are ##A^*## or ##A^\dagger## rather than ##\tilde A##. Also, linear transformations are usually denoted by uppercase symbols, like ##A## or ##T##.

"Don't panic!" said:
To prove that the adjoint map [itex]\tilde{f}:W\rightarrow V[/itex] of [itex]f[/itex] is unique, let [itex]h:W\rightarrow V[/itex] be any other map satisfying [tex]\langle \mathbf{w}, f\mathbf{v}\rangle_{W}=\langle \mathbf{v}, h\mathbf{w}\rangle_{V}[/tex]
then [tex]\langle \mathbf{v}, h\mathbf{w}\rangle_{V}=\langle \mathbf{w}, f\mathbf{v}\rangle_{W}=\langle \mathbf{v}, \tilde{f}\mathbf{w}\rangle_{V}[/tex] which implies that [itex]h=\tilde{f}[/itex], i.e. the adjoint map of [itex]f[/itex] is unique. (Is this correct?!)
Yes. There are some minor inaccuracies in the language, but you have the right idea.

"Don't panic!" said:
To prove that [itex]\tilde{f}[/itex] is linear, let [itex]\mathbf{w}_{1}, \mathbf{w}_{2}\in W[/itex] and [itex]\mathbf{v}\in V[/itex]. Then, as [itex]f[/itex] is linear, we have [tex] \langle \mathbf{v}, \tilde{f}(\mathbf{w}_{1}+\mathbf{w}_{2})\rangle_{V}= \langle \mathbf{w}_{1}+\mathbf{w}_{2}, f\mathbf{v}\rangle_{W}=\langle \mathbf{w}_{1}, f\mathbf{v}\rangle_{W}+\langle \mathbf{w}_{2}, f\mathbf{v}\rangle_{W}[/tex]
[tex]\qquad\qquad\qquad\qquad =\langle \mathbf{v}, \tilde{f}\mathbf{w}_{1}\rangle_{V}+\langle \mathbf{v}, \tilde{f}\mathbf{w}_{2}\rangle_{V}[/tex] where we have also used the linearity of the inner product.
Next, let [itex]\mathbf{v}\in V, \mathbf{w}\in W[/itex] and [itex]c\in \mathbb{R}[/itex]. Then,
[tex]\langle \mathbf{v}, \tilde{f}(c\mathbf{w})\rangle_{V}= \langle c\mathbf{w}, f\mathbf{v}\rangle_{W} =c\langle \mathbf{w}, f\mathbf{v}\rangle_{W}= c\langle \mathbf{v}, \tilde{f}\mathbf{w}\rangle_{V}=\langle \mathbf{v}, c\tilde{f}\mathbf{w}\rangle_{V}[/tex]
Since [itex]\mathbf{v}[/itex] is arbitrary, these two calculations imply that [itex]\tilde{f}(\mathbf{w}_{1}+\mathbf{w}_{2})=\tilde{f}\mathbf{w}_{1}+\tilde{f}\mathbf{w}_{2}[/itex] and [itex]\tilde{f}(c\mathbf{w})=c\tilde{f}\mathbf{w}[/itex]. Would this be correct?
Yes.
 
  • #5
In the first post I was basically quoting verbatim what Nakahara has written in his book. I was confused by this also; I don't really understand his notation (it was his notation I was using, with the tilde [itex]\tilde{f}[/itex] everywhere instead of [itex]f^{\ast}[/itex] or [itex]f^{\dagger}[/itex], which as you say are the standard notations). Hence my difficulty in understanding how one can show from his definitions that [itex]A^{\ast\ast}=A[/itex].

Fredrik said:
Is ##W^*## the dual space of ##W##?
As far as I understand what is written in the book, yes.
 
  • #6
I'm pretty sure this holds only in finite-dimensional vector spaces. It doesn't make much difference in Nakahara, but keep it in mind if you try to apply these things to QM: there are subtleties there regarding adjoints.
 
  • #7
I think I can clarify a bit the language used in the book. The spaces ##V## and ##W## can be treated as the spaces ##\mathbb R^m## and ##\mathbb R^n## respectively, but with a non-standard inner product. Namely, the inner product in ##V## is defined using a (symmetric) positive definite matrix ##g##, $$(x,y)_V=(x,y)_g = (gx, y)_{\mathbb R^m} = \sum_{j,k} g_{j,k}x^k y^j;$$
here ##(x,y)_g## is the inner product in ##V##, which is what is denoted in the book as ##g(x,y)##, and ##(gx, y)_{\mathbb R^m}## is the standard inner product of the vectors ##gx## and ##y## in ##\mathbb R^m##. Note that if we are given an inner product in ##V##, then the matrix ##g## is computed by $$g_{j,k} = (e_k, e_j)_V.$$ The matrix ##g## is what is often called the metric tensor, and is interpreted as a bilinear form on ##V\times V##,
$$g(x,y) = \sum_{j,k} g_{j,k} y^j x^k .$$ For a fixed ##x\in V## the map ##y\mapsto g(x,y)## is a linear functional on ##V##, i.e. an element of the dual space ##V^*##, so the bilinear form ##g## defines a mapping from ##V## to ##V^*##; it is easy to show that this map is linear and invertible. It is denoted in the book by the same letter ##g##.

It is possible to start with the isomorphism ##g:V\to V^*##, $$(gx)_j = \sum_{k} g_{j,k} x^k,$$ and then define the inner product as above, and that is exactly what is done in the book. Not all isomorphisms ##g## give rise to a "good" inner product: if we want the inner product to be symmetric, the matrix ##g## has to be symmetric, and if we want the inner product to be positive definite, the matrix needs to be positive definite. So, for me it is more natural to start from the bilinear form than from the isomorphism ##V\to V^*##.
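
To make this concrete, here is a small sketch (my own construction following the description above, with an illustrative choice of ##g##): the same symmetric positive definite matrix both defines the inner product ##(x,y)_g=(gx,y)## and acts as the map ##V\to V^*## sending ##x## to the functional ##y\mapsto g(x,y)##.

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # symmetric positive definite: a valid metric on V = R^2

def inner(x, y):
    """The inner product defined by the metric: (x, y)_g = (g x, y)_{R^2}."""
    return (g @ x) @ y

def lower(x):
    """g as the isomorphism V -> V*: x goes to the functional y |-> g(x, y)."""
    return lambda y: inner(x, y)

x, y = np.array([1.0, -2.0]), np.array([0.5, 4.0])
assert np.isclose(inner(x, y), inner(y, x))   # symmetric, since g = g^T
assert inner(x, x) > 0                        # positive definite (for x != 0)
assert np.isclose(lower(x)(y), inner(x, y))   # the functional g(x, .) evaluated at y
```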

Note that indefinite inner products are studied in math and physics (see for example the Minkowski metric), but I haven't met a non-symmetric inner product. (OK, in the case of complex spaces the inner product is not symmetric but conjugate symmetric; I include it in the "symmetric" case.)

I agree with Fredrik that the notation ##\tilde f## for the adjoint operator is quite non-standard, and that the adjoint is defined only for linear transformations.

One more detail: for complex inner product spaces the author follows the "physical" convention (more logical, in my opinion), in which the inner product ##(x,y)_V## is linear in the second variable ##y## and conjugate linear in the first variable ##x##; in the mathematical literature it is the other way around.

And finally, the definition of the adjoint transformation is usually written ##(y, Ax)_W = (A^*y, x)_V##, as Fredrik wrote (or, equivalently, ##(Ax, y)_W=(x, A^*y)_V##), which works in both the real and complex cases. In the book the definition is written ##(x, Ay)_W = (y, A^*x)_V##, which does not matter in the real case because of symmetry; but in the complex case, as you can see in the book, an ugly complex conjugation sign appears in the definition of the adjoint.
 
  • #8
Thanks Hawkeye, appreciate your comments.

I'm still a little unsure on some of the notation used in the book though. When he writes [itex] G:W\rightarrow W^{\ast} [/itex] is he meaning that for some fixed [itex]\mathbf{w}[/itex] the isomorphism [itex]G[/itex] takes [itex]\mathbf{w}\in W[/itex] to [itex]G(\mathbf{w},\cdot)\in W^{\ast}[/itex], i.e. [tex]G:\mathbf{w}\mapsto \tilde{w}=G(\mathbf{w},\cdot)\in W^{\ast}[/tex] If so, is it correct to say that the inner product in [itex]W[/itex] is defined as [tex]G(\mathbf{w},f \mathbf{v})=\langle\mathbf{w},f \mathbf{v}\rangle_{W}, \qquad\mathbf{w},f \mathbf{v}\in W[/tex] and the inner product in [itex]V[/itex] defined as [tex]g(\cdot,\cdot)=\langle\cdot , \cdot\rangle_{V}[/tex] Then the adjoint mapping [itex]f^{\ast}[/itex] is defined such that [itex]\langle\mathbf{w},f \mathbf{v}\rangle_{W}= \langle\mathbf{v},f ^{\ast}\mathbf{w}\rangle_{V}[/itex]. Would this be correct (using the notation from the book anyway)?!
 
  • #9
"Don't panic!" said:
I'm still a little unsure on some of the notation used in the book though. When he writes [itex] G:W\rightarrow W^{\ast} [/itex] is he meaning that for some fixed [itex]\mathbf{w}[/itex] the isomorphism [itex]G[/itex] takes [itex]\mathbf{w}\in W[/itex] to [itex]G(\mathbf{w},\cdot)\in W^{\ast}[/itex], i.e. [tex]G:\mathbf{w}\mapsto \tilde{w}=G(\mathbf{w},\cdot)\in W^{\ast}[/tex]

Yes, it is correct. He uses the same letter for the isomorphism and for the bilinear form, but he uses ##G(x,y)## when he treats ##G## as a bilinear form and ##Gx## when he treats ##G## as an isomorphism; keeping this in mind will help you avoid confusion.
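
(A closure-based sketch of this double role, with an illustrative metric matrix ##G##: the one object serves both as a bilinear form and as the map ##\mathbf{w}\mapsto G(\mathbf{w},\cdot)##.)

```python
import numpy as np

G = np.array([[2.0, 0.5],
              [0.5, 1.0]])       # an illustrative metric matrix on W

def G_form(x, y):
    """G treated as a bilinear form: G(x, y)."""
    return x @ G @ y

def G_iso(w):
    """G treated as the isomorphism W -> W*:  w |-> G(w, .)."""
    return lambda y: G_form(w, y)

w, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.isclose(G_iso(w)(y), G_form(w, y))  # Gw, as a covector, evaluated at y
```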

What could be confusing is that in formula (2.19) the symbols ##f##, ##g## and ##G## stand not for the respective objects but for their matrices. For example, ##g## stands for the matrix of the isomorphism ##g## with respect to the bases ##e_k: 1\le k\le m## and ##e_k': 1\le k\le m## in the domain and in the target space respectively (or, equivalently, for the matrix of the bilinear form in the basis ##e_k: 1\le k\le m##). The symbol ##f## in this formula stands for the matrix of the transformation ##f## in the basis ##e_k: 1\le k\le m##.

"Don't panic!" said:
If so, is it correct to say that the inner product in [itex]W[/itex] is defined as [tex]G(\mathbf{w},f \mathbf{v})=\langle\mathbf{w},f \mathbf{v}\rangle_{W}, \qquad\mathbf{w},f \mathbf{v}\in W[/tex] and the inner product in [itex]V[/itex] defined as [tex]g(\cdot,\cdot)=\langle\cdot , \cdot\rangle_{V}[/tex] Then the adjoint mapping [itex]f^{\ast}[/itex] is defined such that [itex]\langle\mathbf{w},f \mathbf{v}\rangle_{W}= \langle\mathbf{v},f ^{\ast}\mathbf{w}\rangle_{V}[/itex]. Would this be correct (using the notation from the book anyway)?!

Yes, you are correct.
 
  • #10
Ah ok, thank you very much for your help!

So, using the notation from the book and following Fredrik's input, I assume the following is correct (for the real case at least)?!

Let [itex]f:V\rightarrow W[/itex] be a linear map between [itex]V[/itex] and [itex]W[/itex] and [itex]G:W\rightarrow W^{\ast}[/itex] be an isomorphism between [itex]W[/itex] and [itex]W^{\ast}[/itex]. Then the adjoint of [itex]f[/itex], denoted [itex]\tilde{f}[/itex], is defined such that [tex] G(\mathbf{w},f \mathbf{v})= g(\mathbf{v},\tilde{f} \mathbf{w})[/tex]
Then, as [itex]\tilde{f}[/itex] is itself a linear map, we may define the adjoint of this map, denoted [itex]\widetilde{(\tilde{f})}[/itex] such that [tex]g(\mathbf{v},\tilde{f} \mathbf{w})= G(\mathbf{w},\widetilde{(\tilde{f})} \mathbf{v})[/tex]
It follows then, that [tex] G(\mathbf{w},\widetilde{(\tilde{f})} \mathbf{v})=g(\mathbf{v},\tilde{f} \mathbf{w})=G(\mathbf{w},f \mathbf{v})[/tex]
and as the inner product is linear in its second argument, we may write this as [tex]G(\mathbf{w},\widetilde{(\tilde{f})} \mathbf{v}-f \mathbf{v}) =0[/tex]
As [itex]\mathbf{w}\in W[/itex] is arbitrary, we may in particular choose [itex]\mathbf{w}=\widetilde{(\tilde{f})} \mathbf{v}-f \mathbf{v}[/itex], which gives [tex]G(\widetilde{(\tilde{f})} \mathbf{v}-f \mathbf{v},\widetilde{(\tilde{f})} \mathbf{v}-f \mathbf{v})=0 \quad\forall \mathbf{v}\in V[/tex] and by positive definiteness this implies that [tex]\widetilde{(\tilde{f})} \mathbf{v}-f \mathbf{v}=\mathbf{0}\Rightarrow \widetilde{(\tilde{f})}=f[/tex]
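
(A numerical check of this chain, under the same illustrative matrix conventions as the sketch in post #1: with ##\tilde{f}=g^{-1}f^{T}G##, the same recipe with the roles of ##g## and ##G## swapped gives ##\widetilde{(\tilde{f})}=G^{-1}\tilde{f}^{T}g##, and it reproduces ##f##.)

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 4

def random_spd(k):
    """A random symmetric positive definite matrix (a valid real metric)."""
    a = rng.standard_normal((k, k))
    return a @ a.T + k * np.eye(k)

g, G = random_spd(m), random_spd(n)        # metrics on V and W
f = rng.standard_normal((n, m))            # f : V -> W

f_adj  = np.linalg.solve(g, f.T @ G)       # f~  : W -> V,  from G(w, f v) = g(v, f~ w)
f_adj2 = np.linalg.solve(G, f_adj.T @ g)   # f~~ : V -> W,  same recipe, roles swapped

assert np.allclose(f_adj2, f)              # the double adjoint is f itself
```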
 
  • #11
If T is a matrix, then the associativity of matrix multiplication implies that for a row vector v and a column vector w, we have (vT)w = v(Tw). Since (vT)w = < (vT)^t, w > = < T^t(v^t), w >, this says that for column vectors v, w we have < T^t(v), w > = < v, T(w) >. I.e. the transpose of T acts like an adjoint; in matrix terms, the adjoint is just the transpose.

Since (T^t)^t = T, i.e. the transpose of the transpose is the original matrix, the adjoint of the adjoint is the original map.
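
(A two-line check of the matrix version, with the standard dot product; the shapes are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((4, 3))                  # T : R^3 -> R^4
v, w = rng.standard_normal(3), rng.standard_normal(4)

assert np.isclose((T.T @ w) @ v, w @ (T @ v))    # <T^t w, v> = <w, T v>
assert np.allclose(T.T.T, T)                     # (T^t)^t = T
```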
 
  • #12
Thanks for your input. Yes, I understand it conceptually, but I was just trying to show it rigorously in general (as the transpose is just a special case of an adjoint mapping, as far as I understand).
 
  • #13
well i would say my argument is rigorous, and yours is abstract. but every finite dimensional adjoint can be described as the transpose of a matrix (or, in the complex case, a conjugate transpose), so the matrix argument is also general.

however you may be of the mind of e. artin, who said matrices should not be used in vector space arguments. his point however was that they usually make the argument longer and less insightful. I thought this case was an exception where they make it simpler and more obvious.

Actually the problem you are considering has nothing to do with dot products, since the double adjoint of a map from V-->W is a map from V**-->W**, and these spaces are always (in finite dimensions) naturally isomorphic to V and W.

The word "naturally" here refers to the problem you are solving. I.e. given the "obvious" isomorphisms between V and V**, and W and W** (defined by evaluation), show that for any linear map f:V-->W the corresponding map f**:V**-->W** corresponds to f under those isomorphisms.

I.e. "the double dual functor, is naturally equivalent to the identity functor, on finite dimensional vector spaces."

but forgive me, as this is almost certainly not helpful right now, and you are already getting very fine assistance. I just couldn't help writing something about the "natural" way one usually looks at these things. Of course it is then incumbent to relate it to the somewhat "less natural", i.e. dot product oriented, way your book is doing it.

And of course I am now making it even more abstract, after claiming to make it simpler.

To sum up, the "natural" point of view is that the adjoint of a map f:V-->W is a map f*:W*-->V*, and the double adjoint is then a map f**:V**-->W**. There is always an evaluation map V-->V** taking x to "evaluation at x", which is an isomorphism in finite dimensions. Then we can ask whether under this isomorphism, f becomes f**, and it does.

An inner product on V is equivalent to an isomorphism V-->V*, since both things let us evaluate a vector of V on another vector from V. Thus given a dot product we have even more isomorphisms among duals, and it is a (bit of a messy) job to relate the two isomorphisms we get of the double duals, the natural one with the dot product one.
 
  • #14
Sorry, I probably didn't word my post very well; I hadn't meant to 'have a dig'. I guess I was just trying to understand it in the terms used in the book really.
Thanks very much for all the extra information though, it is interesting for insight into the area.
Would what I put in my earlier post be correct though (the one where I show that [itex]\widetilde{(\tilde{f})}=f[/itex])?
 
  • #15
well i find this stuff highly confusing myself and decline to critique again what is above. Fredrik's proof in #2 cannot be improved upon in the desired language.

Let me make one point about “naturality” and compatibility (always in finite dimensions, and over real scalars, with symmetric dot products).

A dot product on V defines an isomorphism *:V-->V*, sending x to x*, where for all y in V, the value of x* on y is x*(y) = <x,y> = y*(x), since this product is symmetric.

A different dot product on V will give a different isomorphism of V with V*.

The adjoint of this isomorphism gives an isomorphism V**-->V*, taking a functional f on V* to the functional (f o *) on V. Now composing this with the inverse isomorphism *^(-1):V*-->V gives an isomorphism V**-->V, taking f in V** above to the unique vector x in V such that x* = (f o *), i.e. such that for all y in V, x*(y) = f(y*).

Now this says that f(y*) = x*(y) = <x,y> = <y,x> = y*(x). Thus the functional f corresponding to x, acting on y*, is merely “evaluation of y* at x”. I.e. for all functionals t in V*, f(t) = t(x).

Note that this last description of f does not use the isomorphism * from V to V*, (since there is no * in the equation f(t) = t(x)), hence does not use the dot product. Thus in fact all dot products give the SAME isomorphism from V to V**, even though they all give different isomorphisms from V to V*, and these isomorphisms were used in the composite isomorphism V-->V**.

This is ultimately why, for a linear map A, although A* changes when the dot product changes, A** does not, indeed it is always equal to A.
 
  • #16
Well I knew I would regret this, but I might as well finish up the abstract way of doing it with no dot products. Now that we know that the natural isomorphism from V to V** takes a vector to "evaluation at that vector", to show that A = A** under these isomorphisms, amounts to showing that given A:V-->W, then for any vector x in V,
A**(evaluation at x ) = evaluation at A(x).

So denote evaluation at x by x**, and then we want to show that A**(x**) = (A(x))**, as elements of W**.

I.e. for every t in W*, we have (A(x))**(t) = t(A(x)) = (t o A)(x) = A*(t)(x) (since by defn A* means "precede by A") = x**(A*(t)) = (x** o A*)(t) = A**(x**)(t).

I.e. (A(x))** = A**(x**). taadaa! oink.

By the way, this shows the compositions x-->A(x)-->(A(x))**, and x-->x**-->A**(x**), are equal as maps from V to W**, even when the maps V-->V** and W-->W** are not isomorphisms, i.e. even in infinite dimensions.

But for a general element s of V**, and t in W*, all we get is that (A**(s))(t) = (s o A*)(t) = s(A*(t)) = s(t o A), which is really just the definition of A**. On the other hand, given that fact, and setting s = x**, one (slightly more briefly) gets (A**(x**))(t) = x**(t o A) = (t o A)(x) = t(A(x)) = (A(x))**(t).
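
(For completeness, a small computational rendering of this argument — my own sketch, representing functionals as Python callables; the matrix M and covector c are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 3))          # matrix of A : V -> W
c = rng.standard_normal(4)               # coordinates of a sample functional t in W*

A = lambda x: M @ x                                # A   : V   -> W
A_star = lambda t: (lambda x: t(A(x)))             # A*  : W*  -> V*,   t |-> t o A
A_star_star = lambda s: (lambda t: s(A_star(t)))   # A** : V** -> W**,  s |-> s o A*
ev = lambda x: (lambda t: t(x))                    # the natural map x |-> x** ("evaluation at x")

x = rng.standard_normal(3)
t = lambda w: c @ w                      # a functional t in W*

# A**(x**) = (A(x))** , checked pointwise at t:
assert np.isclose(A_star_star(ev(x))(t), ev(A(x))(t))
```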
 
  • #17
Thanks, really appreciate the effort that you've gone to to help me with this. I think I'll have to go and have a think about it all a bit more now.
 

Related to Adjoint of an adjoint of a linear map

1. What is the adjoint of an adjoint of a linear map?

The adjoint of an adjoint of a linear map is the original linear map itself. In other words, taking the adjoint of a linear map twice results in the original linear map.

2. Why is the adjoint of an adjoint of a linear map important?

The adjoint of an adjoint of a linear map is important because it allows us to define and study important properties of linear maps, such as self-adjointness and normality. These properties have applications in various areas of mathematics and physics.

3. How is the adjoint of an adjoint of a linear map related to the transpose of a matrix?

In matrix terms, taking the adjoint corresponds to taking the conjugate transpose of the matrix representing the map, which for real matrices is just the transpose. The adjoint of the adjoint therefore corresponds to transposing (and conjugating) twice, which returns the original matrix.

4. Can the adjoint of an adjoint of a linear map be expressed in terms of the original map?

Yes, the adjoint of an adjoint of a linear map can be expressed in terms of the original map: it is equal to the original map itself. This can be written as (T*)* = T.

5. How does the adjoint of an adjoint of a linear map behave under composition?

The adjoint operation is an involution: (T*)* = T, so taking the adjoint twice returns the original map. With respect to composition, the adjoint reverses order: (ST)* = T*S*.
