Looking for a nice basis for the general linear group

In summary, the thread searches for a nice set of basis matrices that cover the ##n \times n## matrices and have the property that multiplying two basis matrices reduces to a simple computation on their indices. Several almost-nice bases are presented, but each falls short of the desired simplicity, picking up either an extra phase factor or a second index operation. Whether such a basis would allow matrix multiplication in ##O(n^2)## essential multiplications is also raised; that depends on the cost of the basis transformation.
  • #1
Strilanc
I'm looking for a nice set of basis matrices ##B_{i,j}## that cover the matrices of size ##n \times n## when linear combinations are allowed. The nice property I want them to satisfy is something like ##B_{i,j} \cdot B_{a,b} = B_{i+a, j+b}##, i.e. I want multiplication of two basis matrices to turn into a simple computation inside the indices (using a single simple binary operator).

For example, define ##U^{\oplus x} = \otimes_k^{\lg n} U^{x \land 2^k}##, where ##\otimes_k^b f(k) = f(0) \otimes f(1) \otimes ... \otimes f(b-1)## is a Kronecker product expression, and ##a \land b## returns the number of bit positions where both ##a## and ##b## have a 1-bit in binary (so ##x \land 2^k## is just bit ##k## of ##x##). Then an almost-nice basis, for a power-of-2 sized ##n##, is ##B_{i,j} = X^{\oplus i} Z^{\oplus j}##, where ##X## and ##Z## are the usual Pauli matrices. This is the basis of observables that quantum superdense coding is typically done in. Consider:

##B_{i,j} \cdot B_{a,b}##
##= X^{\oplus i} Z^{\oplus j} X^{\oplus a} Z^{\oplus b}##
##= X^{\oplus i} X^{\oplus a} Z^{\oplus j} Z^{\oplus b} (-1)^{a \land j}##
##= X^{\oplus i \oplus a} Z^{\oplus j \oplus b}(-1)^{a \land j}##
##= B_{i \oplus a, j \oplus b} (-1)^{a \land j}##

Which is pretty close to having the operation happen entirely inside the indices with a single operator. There's just that nasty extra sign factor. (The same thing happens if you work from the proper Pauli set ##I, X, Y, Z## instead of ##I, X, Z, XZ##; the factor just gets more complicated.)
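
Here's a minimal numpy sketch of that basis (the helper name `pauli_power` is just mine for illustration), checking the multiplication rule exhaustively for ##n = 4##:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def pauli_power(U, x, qubits):
    """Computes U^{(+)x}: the Kronecker product over k of U^(bit k of x)."""
    result = np.eye(1, dtype=int)
    for k in range(qubits):
        result = np.kron(result, U if (x >> k) & 1 else np.eye(2, dtype=int))
    return result

def B(i, j, qubits):
    return pauli_power(X, i, qubits) @ pauli_power(Z, j, qubits)

qubits = 2  # n = 4
n = 2 ** qubits
for i in range(n):
    for j in range(n):
        for a in range(n):
            for b in range(n):
                sign = (-1) ** bin(a & j).count("1")  # (-1)^(a AND j)
                assert np.array_equal(B(i, j, qubits) @ B(a, b, qubits),
                                      sign * B(i ^ a, j ^ b, qubits))
print("verified: B_{i,j} B_{a,b} = (-1)^(a∧j) B_{i⊕a, j⊕b}")
```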

Another almost-nice example is the combination of "offset" and "stride" matrices. To make an offset matrix, take the identity matrix and cycle its entries vertically by the given offset; that is, ##\text{Offset}_d## sets every ##(i, j)## entry satisfying ##i = j + d \pmod{n}## to 1. The stride matrices work similarly: ##\text{Stride}_c## sets every ##(i, j)## entry satisfying ##i = cj \pmod{n}## to 1.

For example, ##\text{Offset}_1 = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}## and ##\text{Stride}_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}## when working with ##n = 3##. Offset matrices add their offsets when combined, whereas stride matrices multiply their stride.
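
A quick sketch of those constructions, with `offset` and `stride` as illustrative names; the assertions check the add/multiply behavior for ##n = 3##:

```python
import numpy as np

def offset(d, n):
    """1 at every entry (i, j) with i = j + d (mod n): identity cycled down by d."""
    return np.array([[int(i == (j + d) % n) for j in range(n)] for i in range(n)])

def stride(c, n):
    """1 at every entry (i, j) with i = c*j (mod n)."""
    return np.array([[int(i == (c * j) % n) for j in range(n)] for i in range(n)])

print(offset(1, 3))  # matches Offset_1 above
print(stride(2, 3))  # matches Stride_2 above

# Offsets add, strides multiply:
assert np.array_equal(offset(1, 3) @ offset(2, 3), offset(0, 3))  # 1 + 2 = 0 mod 3
assert np.array_equal(stride(2, 3) @ stride(2, 3), stride(1, 3))  # 2 * 2 = 1 mod 3
```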

If you define ##B_{i,j} = \text{Offset}_i \cdot \text{Stride}_{g^j}## and ##n## is a prime with generator ##g##, then:

##B_{i,j} \cdot B_{a,b}##
##= \text{Offset}_i \cdot \text{Stride}_{g^j} \cdot \text{Offset}_a \cdot \text{Stride}_{g^b}##
##= \text{Offset}_i \cdot \text{Offset}_{a g^j} \cdot \text{Stride}_{g^j} \cdot \text{Stride}_{g^b}##
##= B_{i + a g^j, j+b}##

Which does do all the computation inside the indices, but doesn't manage to do it with one operator. Also, it misses the zero-stride matrix, since ##g^j## only ranges over the nonzero residues mod ##n##.
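
A numerical check of that rule (redefining the same `offset`/`stride` helpers so the snippet stands alone). Note the first index is reduced mod ##n## while the second is reduced mod ##n - 1##, since it's an exponent of ##g##:

```python
import numpy as np

def offset(d, n):
    return np.array([[int(i == (j + d) % n) for j in range(n)] for i in range(n)])

def stride(c, n):
    return np.array([[int(i == (c * j) % n) for j in range(n)] for i in range(n)])

n, g = 5, 2  # n prime, g = 2 generates the multiplicative group mod 5

def B(i, j):
    return offset(i % n, n) @ stride(pow(g, j, n), n)

for i in range(n):
    for j in range(n - 1):  # stride exponents live mod n-1 (Fermat)
        for a in range(n):
            for b in range(n - 1):
                assert np.array_equal(
                    B(i, j) @ B(a, b),
                    B((i + a * pow(g, j, n)) % n, (j + b) % (n - 1)))
print("verified: B_{i,j} B_{a,b} = B_{i + a g^j, j + b}")
```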
 
  • #2
Apparently the standard name for what I called "offset" matrices is "shift" matrices, and they're more typically combined with "clock" matrices. That gives you a system where ##B_{i,j} \cdot B_{a,b} = B_{i+a, j+b} \, \omega^{aj}## with ##\omega = e^{2 \pi i / n}##, whose extra term is directly analogous to the sign factor from the Pauli-matrix-XZ-tensor basis. So still not as simple as desired.
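
For concreteness, a sketch checking the clock/shift rule (the names `S` and `C` are mine):

```python
import numpy as np

n = 5
w = np.exp(2j * np.pi / n)
S = np.roll(np.eye(n), 1, axis=0)  # shift: e_k -> e_{k+1 mod n}
C = np.diag(w ** np.arange(n))     # clock: e_k -> w^k e_k

def B(i, j):
    return np.linalg.matrix_power(S, i) @ np.linalg.matrix_power(C, j)

for i in range(n):
    for j in range(n):
        for a in range(n):
            for b in range(n):
                assert np.allclose(B(i, j) @ B(a, b),
                                   w ** (a * j) * B((i + a) % n, (j + b) % n))
print("verified: B_{i,j} B_{a,b} = w^{aj} B_{i+a, j+b}")
```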
 
  • #3
Wouldn't this imply you could perform matrix multiplication in ##O(n^2)## essential multiplications, which cannot be done?
 
  • #4
It depends on how expensive the basis transformation is. If it takes ##n^3## time to get into the nice multiplication basis, or if the basis has ##n^{1.5}## elements due to necessary redundancy, then it wouldn't help much with multiplication. The basis examples I gave are in fact cheap to get into, though, and applying that to multiplication is what I had in mind.
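
To make "cheap to get into" concrete, here's a sketch, my own illustration rather than anything from the thread, of expanding an arbitrary matrix in the clock/shift basis of post #2 in ##O(n^2 \log n)## time. The coefficient of ##B_{a,b} = S^a C^b## works out to ##\frac{1}{n} \operatorname{tr}(B_{a,b}^\dagger M) = \frac{1}{n} \sum_k \omega^{-bk} M_{(k+a) \bmod n,\, k}##, which is a DFT of the ##a##-th wrapped diagonal of ##M##:

```python
import numpy as np

n = 5
w = np.exp(2j * np.pi / n)
M = np.random.rand(n, n)  # arbitrary matrix to decompose

# The coefficients for a fixed diagonal offset a are one length-n FFT,
# so all n^2 coefficients cost n FFTs: O(n^2 log n) total.
coeffs = np.empty((n, n), dtype=complex)
for a in range(n):
    diagonal = np.array([M[(k + a) % n, k] for k in range(n)])
    coeffs[a] = np.fft.fft(diagonal) / n  # fft gives sum_k diag[k] * w^{-b k}

# Sanity check: M is recovered from sum_{a,b} coeffs[a,b] * S^a C^b.
S = np.roll(np.eye(n), 1, axis=0)
C = np.diag(w ** np.arange(n))
recovered = sum(coeffs[a, b]
                * np.linalg.matrix_power(S, a) @ np.linalg.matrix_power(C, b)
                for a in range(n) for b in range(n))
assert np.allclose(M, recovered)
print("decomposed and recovered M with n FFTs")
```

So entering this basis costs ##O(n^2 \log n)##, well under the ##n^3## of naive multiplication; the obstruction is the leftover ##\omega^{aj}## factor in the product rule, not the transform itself.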
 

Related to Looking for a nice basis for the general linear group

1. What is the general linear group?

The general linear group is the group of all invertible linear transformations of a vector space. Concretely, it is the set of all invertible ##n \times n## matrices, with matrix multiplication as the group operation. It is denoted GL(n), where n is the dimension of the vector space.

2. Why is finding a nice basis important for the general linear group?

Finding a nice basis for the general linear group is important because it allows us to easily represent and work with linear transformations. A nice basis is one that is easy to manipulate and has desirable properties, such as being orthogonal or diagonal.

3. What makes a basis "nice" for the general linear group?

A basis is considered "nice" for the general linear group if it satisfies properties that make computations involving linear transformations simpler and more efficient, such as being orthogonal or diagonal. In this thread, "nice" specifically means that the product of any two basis matrices is again a basis matrix (up to a scalar), with the new indices computed by a single simple operation on the old ones.

4. How is a nice basis for the general linear group found?

Finding a nice basis for the general linear group involves using techniques from linear algebra, such as Gram-Schmidt orthogonalization or diagonalization methods. It also requires a thorough understanding of the properties and characteristics of the specific linear transformation being studied.

5. What are the practical applications of finding a nice basis for the general linear group?

Finding a nice basis for the general linear group has many practical applications in fields such as physics, engineering, and computer science. It allows for easier and more efficient computations involving linear transformations, and can also provide insight into the behavior and properties of these transformations in various applications.
