Proving the Relationship between Operators on a Finite-Dimensional Space

  • #1
alexfloo
"Given operators σ,τ on a finite-dimensional space V, show that στ=i, and that σ=p(τ) for some polynomial p in F[x]."

The first part was no problem. As for the second, I have a strong suspicion that p is the characteristic polynomial, mostly because I believe I heard of that fact before (that inserting a matrix into its own characteristic polynomial yields its inverse). However, I can't seem to find anything about that, and furthermore, the characteristic polynomial has not yet been mentioned in the text I'm using.

Any idea how the proof should proceed?
 
  • #2
Given operators σ,τ on a finite-dimensional space V, show that στ=i
What is "i", the identity operator? Unless there are conditions on σ and τ you haven't mentioned, that simply isn't true.
 
  • #3
Yes, i is the identity. Here is the proof that I used:

We know that
dim V = rk i = rk(στ) ≤ min { rk σ, rk τ },
so both σ and τ have rank dim V, and so both are isomorphisms.

Where did I go wrong here?

It's from Steven Roman's Advanced Linear Algebra, I believe 1st edition. I'm told there are many many errors, and can't seem to find any errata online. In any case, I'm much more interested in the second part.
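
As a side note, the rank argument can be checked numerically. Here is a minimal Python/NumPy sketch; the matrices are arbitrary illustrative examples, not taken from the problem:

```python
import numpy as np

rk = np.linalg.matrix_rank

# The rank of a product never exceeds the rank of either factor:
S = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # rank 2
T = np.array([[1.0, 2.0],
              [2.0, 4.0]])      # rank 1
assert rk(S @ T) <= min(rk(S), rk(T))

# And when sigma tau = i on a finite-dimensional space, the inequality
# forces both factors to have full rank:
T2 = np.array([[1.0, 1.0],
               [0.0, 1.0]])
S2 = np.linalg.inv(T2)          # chosen so that S2 @ T2 = I
assert np.allclose(S2 @ T2, np.eye(2))
assert rk(S2) == rk(T2) == 2
```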
 
  • #4
alexfloo said:
Yes, i is the identity. Here is the proof that I used:



You cannot prove something wrong unless you cheat. As you wrote it in your OP, the conclusion is simply false. Check and correct this.

We know that
dim V = rk i = rk(στ) ≤ min { rk σ, rk τ },
so both σ and τ have rank dim V, and so both are isomorphisms.

Where did I go wrong here?

It's from Steven Roman's Advanced Linear Algebra, I believe 1st edition. I'm told there are many many errors, and can't seem to find any errata online. In any case, I'm much more interested in the second part.


DonAntonio
 
  • #5
Oh my, that was a stupid mistake. I must not have been paying attention as I was typing. It was supposed to be:

(sigma composed with tau equals i) implies (sigma and tau are both invertible).

Apologies for the mixup. In either case DonAntonio, what you said about proofs is not at all correct: every sentence has a negation.
 
  • #6
alexfloo said:
Oh my, that was a stupid mistake. I must not have been paying attention as I was typing. It was supposed to be:

(sigma composed with tau equals i) implies (sigma and tau are both invertible).

Apologies for the mixup. In either case DonAntonio, what you said about proofs is not at all correct: every sentence has a negation.



So?? It still is true that you cannot prove something wrong unless you cheat. Of course, this must be understood under the usual, standard assumption: that one has a consistent axiomatic system, with the usual logical rules, etc.

DonAntonio
 
  • #7
alexfloo said:
Oh my, that was a stupid mistake. I must not have been paying attention as I was typing. It was supposed to be:

(sigma composed with tau equals i) implies (sigma and tau are both invertible).

Apologies for the mixup. In either case DonAntonio, what you said about proofs is not at all correct: every sentence has a negation.



So you want to prove [itex]\,\,A,B\in GL(V)\,\,,\,\,\dim V<\infty\,\,,\,\,AB=I\Longrightarrow B=p(A)\,\,,\,\,p(x)\in\mathbb{F}[x]\,\,,\,\,\mathbb{F}=\,[/itex] the field of definition of V.

Well, you were then on the right track: since A (and also B, of course) is non-singular, its characteristic polynomial [itex]\,\,p_A(x)\,\,[/itex] has a non-zero constant term,

so if we write [itex]\,\,p_A(x)=x^n+a_{n-1}x^{n-1}+...+a_1x+a_0\,\,,\,\,a_0\neq 0\,\,[/itex], then by the Cayley-Hamilton Theorem we

have the operator (matrix) equality [itex]\,\,p_A(A)=0\,\,[/itex], so [itex]\,\,I=-\frac{1}{a_0}A^n-...-\frac{a_1}{a_0}A\Longrightarrow\,\,[/itex] ...finish the argument

DonAntonio
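
This Cayley-Hamilton construction can be verified numerically. A minimal Python/NumPy sketch, where A is an arbitrary invertible example matrix (not from the original problem):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
n = A.shape[0]

coeffs = np.poly(A)              # [1, a_{n-1}, ..., a_0]: char. poly of A
a0 = coeffs[-1]                  # nonzero exactly when A is invertible

# Cayley-Hamilton: A^n + a_{n-1} A^{n-1} + ... + a_1 A + a_0 I = 0,
# so A^{-1} = -(1/a_0) (A^{n-1} + a_{n-1} A^{n-2} + ... + a_1 I).
B = np.zeros_like(A)
for k in range(n):               # k-th coefficient times A^{n-1-k}
    B += coeffs[k] * np.linalg.matrix_power(A, n - 1 - k)
B *= -1.0 / a0

assert np.allclose(B @ A, np.eye(n))   # B = p(A) is indeed A^{-1}
```

So the inverse is itself a polynomial in A, with coefficients read off the characteristic polynomial.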
 
  • #8
My point is that every time one proves "p," one disproves "not p." In fact, it's only when one has an INconsistent system that this fails, not the other way around. In any event, do you have any idea about the polynomial question?
 
  • #9
alexfloo said:
My point is that every time one proves "p," one disproves "not p." In fact, it's only when one has an INconsistent system that this fails, not the other way around. In any event, do you have any idea about the polynomial question?


1) Well, it looks like you continue to misunderstand what I meant, so let's leave it

2) You rushed so much to answer the above that you didn't see my answer to your OP.

DonAntonio
 
  • #10
I'd really like to know what you meant by that, if you wouldn't mind giving further explanation.

And apologies for missing the response, I was replying through e-mail since I wasn't at my computer, and it would only display the first intervening post. However, I don't think that's the proof I'm seeking: the book hasn't yet made any mention of the characteristic polynomial or the Cayley-Hamilton theorem, so I'm guessing there's a more subtle, elementary proof somewhere. Or else it may have just been another error; like I said, I've spotted more than a few in my edition of the text.

Thanks anyways!
 

Related to Proving the Relationship between Operators on a Finite-Dimensional Space

1. What is a characteristic polynomial?

The characteristic polynomial of a square matrix is a polynomial whose roots are the matrix's eigenvalues. It is a basic tool in linear algebra and plays a crucial role in determining the properties of a matrix.

2. How is a characteristic polynomial calculated?

To calculate the characteristic polynomial of a matrix A, subtract λ times the identity matrix from A, where λ is a scalar variable, and then take the determinant: p(λ) = det(A − λI). The roots of this polynomial are the eigenvalues of A.
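
For instance, this computation can be sketched in Python with NumPy (the 2×2 matrix is a hypothetical example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# np.poly returns the coefficients of det(lambda*I - A), highest degree
# first; for this A that is lambda^2 - 5*lambda + 5.
coeffs = np.poly(A)

# The roots of the characteristic polynomial are the eigenvalues of A.
eigenvalues = np.sort(np.roots(coeffs))
assert np.allclose(eigenvalues, np.sort(np.linalg.eigvals(A)))
```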

3. What is the significance of eigenvalues in a characteristic polynomial?

Eigenvalues are the roots of the characteristic polynomial. Each eigenvalue is the factor by which the matrix scales its corresponding eigenvector without changing that vector's direction. They have many applications in different fields of science, including physics, engineering, and computer science.

4. Can a matrix have multiple characteristic polynomials?

No, a square matrix has exactly one characteristic polynomial (up to the sign convention det(A − λI) versus det(λI − A)). How that polynomial factors, however, depends on the field over which the matrix is considered.

5. How are characteristic polynomials used in practical applications?

Characteristic polynomials have many real-world applications, such as in image processing, signal analysis, and data compression. They are also used in solving differential equations and in finding the stability of dynamic systems.
