Motivation behind eigenvalue and eigenvector

In summary, an eigenvector is a non-zero vector that, when multiplied by a given square matrix, results in the vector being scaled by a constant factor. This is useful for finding uncorrelated variables in large data sets and simplifying matrices through a change of coordinates. The process involves finding the eigenvalues and eigenvectors of the matrix.
  • #1
Avichal
An eigenvector is defined as a non-zero vector 'v' such that A.v = λ.v

I don't understand the motive behind this. We are trying to find a vector that when multiplied by a given square matrix preserves the direction of the vector.

Shouldn't the motive be the opposite i.e. finding the matrix A given the vector v?

I suppose eigenvectors were defined this way with some application in mind.
 
  • #2
Avichal said:
I suppose eigenvectors were defined this way with some application in mind.

There are many applications. For me, one obvious application is finding a set of uncorrelated variables in large data sets. Such data sets are usually described initially by correlated variables, and the object of the analysis is to transform the data into its principal components (this is called Principal Component Analysis, or PCA). The principal components are the eigenvectors of the covariance matrix describing the relations among the initial set of variables. Each eigenvector represents one new variable. These new variables are uncorrelated with each other (the eigenvectors are orthogonal). The eigenvalue of each principal component corresponds to the variance of that component.
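
As a rough illustration (my own sketch with made-up data, not part of the original analysis), the whole procedure fits in a few lines of numpy: build the covariance matrix, take its eigenvectors as the principal components, and read the variances off the eigenvalues.

[code]
# Minimal PCA sketch with made-up data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
# Two correlated variables as columns of the data matrix.
data = np.column_stack([x, 0.8 * x + 0.3 * rng.normal(size=500)])

# Covariance matrix of the variables (columns).
cov = np.cov(data, rowvar=False)

# For a symmetric matrix, eigh returns real eigenvalues and orthonormal eigenvectors.
eigenvalues, eigenvectors = np.linalg.eigh(cov)

print(eigenvalues)    # variance captured by each principal component
print(eigenvectors)   # each column is one principal component (new variable)
[/code]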
 
  • #3
Avichal said:
An eigenvector is defined as a non-zero vector 'v' such that A.v = λ.v

I don't understand the motive behind this. We are trying to find a vector that when multiplied by a given square matrix preserves the direction of the vector.

It's not just "a" vector, it could be several.

One path to understanding is to think about the convenience that a "change of coordinates" brings to many problems. If M is a matrix that represents something in a given set of coordinates, what happens to M when you change coordinates? (If you look into this you'll find that a linear change of coordinates amounts to the multiplication [itex] A^{-1} M A [/itex] for some matrix [itex] A [/itex] )

If M is a complicated matrix, and you want to change coordinates to make it simple, what is the "simplest" type of matrix to deal with? I think a diagonal matrix is simplest. You can't always change coordinates so that the information in M is displayed as a diagonal matrix, but in many important cases you can. Suppose you do get a diagonal matrix. The eigenvectors of a diagonal matrix are simple to find, aren't they? The eigenvalues of those eigenvectors are just the numbers on the diagonal. If you only had to deal with diagonal matrices, then eigenvectors and eigenvalues would obviously be important concepts - although they would be trivial to find.

If we use a change of coordinates that doesn't change any vectorial properties of the information in M then the eigenvectors of the diagonalized version of the information should be the eigenvectors of M expressed in a different system of coordinates. If you don't know how to diagonalize M by changing coordinates then finding the eigenvectors of M can give you a hint about how to do it.
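
To make that concrete, here is a small worked example of my own (not from the original post). Take
[tex] M = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, [/tex]
where the columns of [itex]A[/itex] are the eigenvectors [itex](1,1)[/itex] and [itex](1,-1)[/itex] of [itex]M[/itex], with eigenvalues 3 and 1. Then
[tex] A^{-1} M A = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}, [/tex]
so in the coordinate system built from the eigenvectors, M acts as a diagonal matrix.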

For matrices that can't be diagonalized by a change of coordinates, it turns out that they can be changed to a somewhat simple form consisting of "blocks" of numbers called "Jordan blocks". Eigenvalues also play a role in the theory behind that.
 
  • #5
Consider the differential equation
[tex] \frac{d \vec{x}}{dt}=A \vec{x},[/tex]
where [itex] \vec{x}[/itex] is a vector and [itex]A[/itex] is a matrix. If the problem were one-dimensional, then you'd like to say the solution is just [itex]x=e^{tA}(x_{0})[/itex]. What about the multi-dimensional version? We'd still like to say that [itex] \vec{x}=e^{tA}( \vec{x}_{0})[/itex], but we have to make sense out of the expression [itex]e^{tA}.[/itex] What does it mean to exponentiate a matrix? Well, one approach would be to use the Maclaurin expansion for the exponential:
[tex]e^{x}= \sum_{k=0}^{\infty} \frac{x^{k}}{k!},[/tex]
and simply substitute in the matrix:
[tex]e^{tA}= \sum_{k=0}^{ \infty} \frac{t^{k}A^{k}}{k!}.[/tex]
How do we compute an arbitrary power of a matrix? Well, suppose you can write [itex]A[/itex] this way: [itex]A=PDP^{-1},[/itex] where [itex]D[/itex] is diagonal. Then [itex]A^{k}=PD^{k}P^{-1}[/itex], and arbitrary powers of diagonal matrices are easy to compute.
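
Here is a minimal numpy sketch of that idea (my own illustration, with an assumed example matrix): compute [itex]e^{tA}[/itex] as [itex]Pe^{tD}P^{-1}[/itex] and check it against a truncated Maclaurin series.

[code]
# Sketch: e^{tA} via diagonalization A = P D P^{-1} (illustrative example).
from math import factorial
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # example matrix with eigenvalues -1 and -2
t = 0.5

# Columns of P are eigenvectors; w holds the eigenvalues.
w, P = np.linalg.eig(A)

# e^{tD} is diagonal with entries e^{t*lambda_i}, so e^{tA} = P e^{tD} P^{-1}.
exp_tA = P @ np.diag(np.exp(t * w)) @ np.linalg.inv(P)

# Compare with the truncated Maclaurin series sum_k (tA)^k / k!
series = sum(np.linalg.matrix_power(t * A, k) / factorial(k) for k in range(25))
print(np.allclose(exp_tA, series))   # True
[/code]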

Guess what? Diagonalizing a matrix, when possible, is a matter of finding the eigenvalues and eigenvectors. The eigenvalues form the diagonal of [itex]D[/itex], and the eigenvectors form the columns of the matrix [itex]P[/itex] (when [itex]A[/itex] is symmetric, they can even be chosen orthonormal, so that [itex]P^{-1}=P^{T}[/itex]).
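
Putting the pieces together (a summary of the above, assuming [itex]A[/itex] is diagonalizable with eigenpairs [itex](\lambda_{i}, \vec{v}_{i})[/itex]), the solution becomes
[tex] \vec{x}(t)=e^{tA}\vec{x}_{0}=Pe^{tD}P^{-1}\vec{x}_{0}=\sum_{i}c_{i}e^{\lambda_{i}t}\vec{v}_{i},[/tex]
where the [itex]c_{i}[/itex] are the coordinates of [itex]\vec{x}_{0}[/itex] in the eigenvector basis. Each eigenvector evolves independently, which is exactly why the eigendecomposition is useful here.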

The differential equation I began with is a very useful one. You can write many circuit DE's and mass-spring DE's in that form.
 
  • #6
Sorry for the late reply.
I was brushing up my linear algebra when I stumbled upon https://www.khanacademy.org/math/linear-algebra/alternate_bases/eigen_everything/v/linear-algebra--introduction-to-eigenvalues-and-eigenvectors

It basically said we need to find a transformation T such that T(v) = λv. Here λ is the eigenvalue and v the eigenvector.
Aren't we doing the opposite? For a given T we find all v.
Why not the other way around? For a given v, find all T such that T(v) = λv. This makes more sense in my opinion.

E.g.: you have a vector and want to increase its length while preserving its direction, so you find a T for this v.
But what we are doing is the opposite: we have T and find all v for it.
 

Related to Motivation behind eigenvalue and eigenvector

1. What is the significance of eigenvalues and eigenvectors in linear algebra?

Eigenvalues and eigenvectors are essential concepts in linear algebra because they help us understand the behavior of linear transformations. They provide important information about a matrix, such as the directions it leaves unchanged and the factors by which it stretches along those directions. They also allow us to decompose a matrix into simpler and more manageable forms, which can be useful for solving complex problems in various fields of science and engineering.

2. How are eigenvalues and eigenvectors related to each other?

Eigenvalues and eigenvectors are closely related. An eigenvector is a vector that remains in the same direction after being transformed by a matrix. The corresponding eigenvalue represents the factor by which the eigenvector is scaled during the transformation. In other words, an eigenvector and its eigenvalue together describe one invariant direction of the transformation and how strongly vectors along that direction are stretched.
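
A tiny worked example (added here for illustration):
[tex] \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = 3 \begin{pmatrix} 1 \\ 0 \end{pmatrix}, [/tex]
so [itex](1,0)[/itex] is an eigenvector with eigenvalue 3: the matrix keeps its direction and scales it by a factor of 3.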

3. What is the motivation behind studying eigenvalues and eigenvectors?

The study of eigenvalues and eigenvectors has several motivations. One of the main reasons is that they provide a powerful tool for solving systems of linear equations and understanding the behavior of linear transformations. They also have various applications in fields such as physics, engineering, computer science, and economics, making them an important topic for researchers and practitioners.

4. How do eigenvalues and eigenvectors contribute to the understanding of eigenproblems?

Eigenvalues and eigenvectors play a crucial role in solving eigenproblems, which involve finding the eigenvalues and eigenvectors of a given matrix. By finding the eigenvalues, we can determine the scale of the transformation, and by finding the eigenvectors, we can determine the direction of the transformation. These pieces of information are essential for understanding and solving eigenproblems in various contexts.

5. Can you provide an example of how eigenvalues and eigenvectors are used in practical applications?

One practical application of eigenvalues and eigenvectors is in image compression. In this process, an image is represented as a matrix of pixels, and by finding the eigenvalues and eigenvectors of the matrix, we can identify the most significant features of the image. This information can then be used to compress the image without losing too much quality. Other examples include using eigenvalues and eigenvectors in principal component analysis for data analysis and in quantum mechanics for solving the Schrödinger equation.
