- #1
brydustin
I was taught that for a matrix T representing a transformation, the columns are indexed by the basis of the domain vector space and the rows by the basis of the range vector space.
i.e. T(v_k) = t_(1,k)*w_1 + ... + t_(m,k)*w_m
So v_k would be the k-th basis vector of the domain space, V, and the w's are the basis vectors of W (the range space). The coefficients (the t's) make up the k-th column.
In other words, the image of a single basis (input) vector is a linear combination of the range's basis vectors.
This is the convention in Linear Algebra Done Right, Wikipedia, and every text I've read... except recently one on mathematical physics, which uses the reverse style (rows act like columns and columns like rows, relative to the definition above). Is there a common convention? Or is one of the authors just plain wrong?
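To make the two conventions concrete, here is a small numpy sketch (the map T and all the numbers are my own illustrative choices, not from any of the texts mentioned). Under the column convention, column k holds the coordinates of T(v_k), and the matrix acts on column vectors as M @ v; the "reverse" convention stores those same coordinates in row k, giving the transpose, which acts on row vectors as v @ M.T. Both describe the same transformation.

```python
import numpy as np

# Example map T: R^2 -> R^3, T(x, y) = (x + y, x, 2y), in the standard bases.
# Images of the domain basis vectors:
#   T(e_1) = (1, 1, 0),  T(e_2) = (1, 0, 2)

# Column convention (Axler / Wikipedia): column k = coordinates of T(e_k).
M = np.array([[1, 1],
              [1, 0],
              [0, 2]])

# Reverse (row) convention: row k = coordinates of T(e_k), i.e. the transpose.
M_row = M.T

v = np.array([3, 4])          # coordinates of an input vector

out_col = M @ v               # column convention: matrix acts on the left
out_row = v @ M_row           # row convention: matrix acts on the right

print(out_col)                # [7 3 8], i.e. T(3, 4) = (3+4, 3, 2*4)
print(out_row)                # [7 3 8], same transformation either way
```

So neither author is computing a different map; the physics text's matrix is just the transpose of the one Axler would write, paired with the opposite side of multiplication.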