Vector Space Basis: Clarifying Linear Independence

In summary, the conversation addresses confusion about linear independence and the definition of a basis of a vector space, particularly the role of the word "other" in that definition. It also touches on the use of orthonormal bases in quantum mechanics, the significance of representing measuring devices by Hermitian operators, and the connection between the spectral theorem and projection-valued measures.
  • #1
bugatti79
Hi Folks,

I find this link http://mathworld.wolfram.com/VectorSpaceBasis.html confusing regarding linear independence.

One of the requirements for a basis of a vector space is that the vectors in a set S are linearly independent, which implies that no vector can be written in terms of the other vectors in the set S.

Yet the first paragraph states that the vectors form a basis if and only if every vector can be uniquely written as a linear combination of the others, which to me is a contradiction!
Can someone clarify my misinterpretation?
regards
 
  • #2
>it states that the vectors form a basis if and only if every vector can be uniquely written as a linear combination of the other vectors in the set S.

The word "other" does not belong in that statement. For ## x\in S ##, the unique linear combination is simply ## x=x ##.
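To make the uniqueness concrete, here is a minimal NumPy sketch (the basis and vectors below are made up purely for illustration): for a basis, solving for the coefficients always yields exactly one answer, and for a basis vector itself that answer is the trivial ## x = x ## combination.

```python
import numpy as np

# Columns of B are three linearly independent vectors in R^3,
# so they form a basis.
B = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]]).T  # columns are the basis vectors

# Linear independence <=> the matrix of column vectors has full rank.
assert np.linalg.matrix_rank(B) == 3

v = np.array([2.0, 3.0, 5.0])
coords = np.linalg.solve(B, v)      # the unique coefficients
assert np.allclose(B @ coords, v)   # v is recovered exactly

# For a basis vector itself, the unique combination is trivial:
b1 = B[:, 0]
c = np.linalg.solve(B, b1)
assert np.allclose(c, [1.0, 0.0, 0.0])  # "x = x": coefficient 1 on itself
```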
 
  • #3
ok, thanks.

On a slight tangent, why do we use this "orthonormal basis" in quantum mechanics calculations? I.e., what is the physical significance, if any, of using it?

Pardon my ignorance.
 
  • #4
bugatti79 said:
On a slight tangent, why do we use this "orthonormal basis" in quantum mechanics calculations? I.e., what is the physical significance, if any, of using it?
The objects on which we do measurements are represented by unit vectors. Measuring devices are represented by linear operators. A measurement changes the object so that after the measurement it's represented by an eigenvector of the linear operator that represents the measuring device. The result of the measurement is always an eigenvalue of the linear operator.

Since results are eigenvalues, a measuring device whose output (the result of the measurement) is always a real number must be represented by a linear operator whose eigenvalues are real numbers. Such linear operators have the following properties: a) their eigenvectors span the vector space, and b) eigenvectors corresponding to different eigenvalues are orthogonal (and therefore linearly independent). So if the eigenspaces associated with the eigenvalues are all 1-dimensional, the normalized eigenvectors form an orthonormal basis. If the eigenspaces aren't all 1-dimensional, things get more complicated, but still not very different from what I just said.
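Properties a) and b) can be illustrated with a small NumPy sketch (the matrix here is a random stand-in, not any particular observable): forcing a matrix to be Hermitian yields real eigenvalues and an orthonormal eigenbasis.

```python
import numpy as np

# Build a random complex matrix and symmetrize it so that H equals
# its conjugate transpose, i.e. H is Hermitian.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2

# eigh is NumPy's eigensolver for Hermitian matrices.
eigvals, eigvecs = np.linalg.eigh(H)

# Eigenvalues are real (eigh returns them as real floats).
assert np.isrealobj(eigvals)

# The columns of eigvecs form an orthonormal basis: V^dagger V = I.
assert np.allclose(eigvecs.conj().T @ eigvecs, np.eye(4))
```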
 
  • #5
[...] a linear operator whose eigenvalues are real numbers. Such linear operators have the following properties: [...]

These properties follow from the assumption that the operators are Hermitian - actually, being Hermitian is equivalent to having real eigenvalues and orthogonal eigenspaces.

As to why measurement operators in QM must be Hermitian, I don't know - I think it goes together with the whole idea that QM is formulated in Hilbert spaces, with the existence of amplitudes such that the probability of an outcome is the sum of the squared moduli of the amplitudes, and with the idea of unitary evolution, but I don't know what the precise statement is.
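The "actually" above matters: real eigenvalues alone are not enough. A tiny made-up counterexample (a 2x2 upper-triangular matrix) shows a non-Hermitian operator with real eigenvalues whose eigenvectors fail to be orthogonal.

```python
import numpy as np

# Upper-triangular, so its eigenvalues are its diagonal entries 1 and 2
# (both real) -- but the matrix is not Hermitian.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

assert not np.allclose(A, A.conj().T)   # not Hermitian

eigvals, eigvecs = np.linalg.eig(A)
assert np.allclose(sorted(eigvals.real), [1.0, 2.0])
assert np.allclose(np.imag(eigvals), 0.0)  # eigenvalues are real anyway

# The eigenvectors are NOT orthogonal: their inner product is far from 0.
v1, v2 = eigvecs[:, 0], eigvecs[:, 1]
assert abs(np.vdot(v1, v2)) > 1e-6
```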
 
  • #6
wabbit said:
These properties follow from the assumption that the operators are Hermitian - actually, being Hermitian is equivalent to having real eigenvalues and orthogonal eigenspaces.
Ah, I was remembering a theorem about the equivalence of self-adjointness and having a spectrum that's a subset of ##\mathbb R##, but I forgot that the theorem I had in mind is specifically for normal operators. OK, I agree that real eigenvalues alone don't imply self-adjointness.

wabbit said:
As to why measurement operators in QM must be Hermitian, I don't know - I think it goes together with the whole idea that QM is formulated in Hilbert spaces, with the existence of amplitudes such that the probability of an outcome is the sum of the squared moduli of the amplitudes, and with the idea of unitary evolution, but I don't know what the precise statement is.
I don't have all the details worked out either, but I think the strategy to answer this question should be to argue that measuring devices should be represented by projection-valued measures, and that the spectral theorem specifies a correspondence between projection-valued measures and self-adjoint operators.
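In finite dimensions that correspondence is easy to sketch. Here is a NumPy illustration (a random symmetric matrix, assumed to have nondegenerate eigenvalues) of the orthogonal projectors that the spectral theorem attaches to a self-adjoint operator: they resolve the identity and reconstruct the operator as ## A = \sum_i \lambda_i P_i ##.

```python
import numpy as np

# A random real symmetric (hence Hermitian) matrix stands in for
# a self-adjoint operator.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2

eigvals, eigvecs = np.linalg.eigh(A)

# One rank-1 projector per (nondegenerate) eigenvalue: P_i = v_i v_i^T.
projectors = [np.outer(v, v) for v in eigvecs.T]

for P in projectors:
    assert np.allclose(P @ P, P)   # idempotent: a true projection
    assert np.allclose(P, P.T)     # symmetric: an orthogonal projection

# The projectors resolve the identity and reconstruct A.
assert np.allclose(sum(projectors), np.eye(3))
assert np.allclose(sum(l * P for l, P in zip(eigvals, projectors)), A)
```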
 

Related to Vector Space Basis: Clarifying Linear Independence

1. What is a vector space basis?

A vector space basis is a set of vectors that can be used to express every vector in the space as a linear combination. It is a fundamental concept in linear algebra and is used to define the dimension of a vector space.

2. How do you determine if a set of vectors is a basis for a vector space?

A set of vectors is a basis for a vector space if it is linearly independent, meaning that no vector in the set can be written as a linear combination of other vectors in the set, and it spans the entire vector space, meaning that any vector in the space can be expressed as a linear combination of the set of vectors.

3. What is the difference between linear independence and spanning?

Linear independence refers to the property of a set of vectors in which no vector can be written as a linear combination of the others. Spanning, on the other hand, refers to the property that every vector in the space can be expressed as a linear combination of vectors from the set.
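The two properties really are independent of each other, which a short NumPy sketch can show (the vectors below are made up for illustration): in R^2, three vectors can span without being independent, and a single vector can be independent without spanning.

```python
import numpy as np

# Three vectors in R^2: they span the plane (rank 2) but cannot be
# independent, since there are more vectors than the rank.
spanning_dependent = np.array([[1.0, 0.0],
                               [0.0, 1.0],
                               [1.0, 1.0]]).T  # 2x3, columns are the vectors
rank = np.linalg.matrix_rank(spanning_dependent)
assert rank == 2                               # rank 2 -> spans R^2
assert rank < spanning_dependent.shape[1]      # rank < 3 -> dependent

# One vector in R^2: independent (rank equals the number of vectors)
# but it does not span the plane.
independent_not_spanning = np.array([[1.0, 2.0]]).T
rank2 = np.linalg.matrix_rank(independent_not_spanning)
assert rank2 == 1   # equals number of vectors: independent
assert rank2 < 2    # less than dim(R^2): does not span
```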

4. Can a vector space have more than one basis?

Yes, a vector space can have multiple bases. In fact, any set of linearly independent vectors that span a vector space can be considered a basis for that space. However, all bases for a given vector space must have the same number of vectors, which is known as the dimension of the vector space.
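As a quick sketch of this (using two made-up bases of R^2): both sets below are independent and spanning, both have exactly 2 vectors, and the same vector simply gets different coordinates in each.

```python
import numpy as np

# Two different bases for R^2, stored as columns.
standard = np.array([[1.0, 0.0], [0.0, 1.0]]).T
rotated  = np.array([[1.0, 1.0], [1.0, -1.0]]).T

# Each is a basis: full rank means independent and spanning in R^2.
for B in (standard, rotated):
    assert np.linalg.matrix_rank(B) == 2

# The same vector has different coordinates in each basis.
v = np.array([3.0, 1.0])
assert np.allclose(np.linalg.solve(standard, v), [3.0, 1.0])
assert np.allclose(np.linalg.solve(rotated, v), [2.0, 1.0])
```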

5. How can a vector space basis be useful in solving linear algebra problems?

A vector space basis is useful in solving linear algebra problems because it provides a way to represent and manipulate vectors in a concise and efficient manner. It also allows for the determination of the dimension of a vector space, which is a key factor in solving many linear algebra problems.
