# Wedge Product and Determinants ... Tu, Proposition 3.27

#### Peter

##### Well-known member
MHB Site Helper
In Loring W. Tu's book "An Introduction to Manifolds" (Second Edition) ... Proposition 3.27 reads as follows:

The above proposition gives the wedge product of k linear functions as a determinant ...

Walschap, in his book "Multivariable Calculus and Differential Geometry", gives the definition of the determinant as follows:

From Tu's proof above we can say that ...

$$\displaystyle \text{det} [ \alpha^i ( v_j ) ]$$

$$\displaystyle = \text{det} \begin{bmatrix} \alpha^1 ( v_1 ) & \alpha^1 ( v_2 ) & \cdots & \alpha^1 ( v_k ) \\ \alpha^2 ( v_1 ) & \alpha^2 ( v_2 ) & \cdots & \alpha^2 ( v_k ) \\ \vdots & \vdots & & \vdots \\ \alpha^k ( v_1 ) & \alpha^k ( v_2 ) & \cdots & \alpha^k ( v_k ) \end{bmatrix}$$

$$\displaystyle = \sum_{ \sigma \in S_k } ( \text{sgn} \, \sigma ) \, \alpha^1 ( v_{ \sigma (1) } ) \cdots \alpha^k ( v_{ \sigma (k) } )$$
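For concreteness, the $$\displaystyle k = 2$$ case of this formula (my own expansion, not Tu's text) reads:

$$\displaystyle ( \alpha^1 \wedge \alpha^2 ) ( v_1, v_2 ) = \text{det} \begin{bmatrix} \alpha^1 ( v_1 ) & \alpha^1 ( v_2 ) \\ \alpha^2 ( v_1 ) & \alpha^2 ( v_2 ) \end{bmatrix} = \alpha^1 ( v_1 ) \alpha^2 ( v_2 ) - \alpha^1 ( v_2 ) \alpha^2 ( v_1 )$$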

Thus Tu is indicating that the column index $$\displaystyle j$$ is the one permuted ... that is, each term of the sum takes one entry from each row $$\displaystyle i$$, namely the entry in column $$\displaystyle \sigma(i)$$ ...

But in the definition of the determinant given by Walschap we have

$$\displaystyle \text{det} \begin{bmatrix} a_{11} & \cdots & a_{ 1n } \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{ nn } \end{bmatrix}$$

$$\displaystyle = \sum_{ \sigma \in S_n } \varepsilon ( \sigma ) \, a_{ \sigma (1) 1 } \cdots a_{ \sigma (n) n }$$

Thus Walschap is indicating that the row index $$\displaystyle i$$ is the one permuted ... that is, each term of the sum takes one entry from each column $$\displaystyle j$$, namely the entry in row $$\displaystyle \sigma(j)$$ ... in contrast to Tu, who permutes the column index ...

Can someone please reconcile these two approaches ... do we get the same answer to both ...?
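As a direct numerical check on whether the two sums agree, here is a small Python sketch (my own, not from either book) that evaluates both conventions on a sample matrix:

```python
from itertools import permutations
from math import prod

def sign(p):
    # Sign of a permutation p given as a tuple of 0..n-1,
    # computed by counting inversions.
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_columns_permuted(a):
    # Tu's form: sum over sigma of sgn(sigma) * a[0][s(0)] * ... * a[n-1][s(n-1)]
    n = len(a)
    return sum(sign(s) * prod(a[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def det_rows_permuted(a):
    # Walschap's form: sum over sigma of sgn(sigma) * a[s(0)][0] * ... * a[s(n-1)][n-1]
    n = len(a)
    return sum(sign(s) * prod(a[s[j]][j] for j in range(n))
               for s in permutations(range(n)))

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
print(det_columns_permuted(A), det_rows_permuted(A))  # both give 25
```

Both sums return the same value for any square matrix, which is a numerical restatement of $$\displaystyle \text{det}(A) = \text{det}(A^T)$$.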

Clarification of the above issues will be much appreciated ... ...

Peter


#### GJA

##### Well-known member
MHB Math Scholar
Hi Peter,

$\text{Det}(A)=\text{Det}(A^{T})$ for all square matrices, where $A^{T}$ denotes the transpose. There are many books and websites with proofs of this fact.

One possible proof goes something like this: \begin{align*}\sum_{\sigma\in S_{n}} \text{sgn}(\sigma)a_{\sigma(1)1}\cdots a_{\sigma(n)n}&=\sum_{\sigma\in S_{n}}\text{sgn}(\sigma\sigma^{-1})\text{sgn}(\sigma^{-1})a_{\sigma(1)1}\cdots a_{\sigma(n)n}\\ & = \sum_{\sigma\in S_{n}}\text{sgn}(\sigma^{-1})a_{\sigma(1)\sigma^{-1}\sigma(1)}\cdots a_{\sigma(n)\sigma^{-1}\sigma(n)}. \end{align*} From here note that since each $\sigma\in S_{n}$ is a bijection from $\{1,\ldots, n\}$ to itself, the commutative property of multiplication can be used to arrange the $a$'s so that the row indices of all the terms in the sum are written in numerical order. Thus, we have \begin{align*}\sum_{\sigma\in S_{n}} \text{sgn}(\sigma)a_{\sigma(1)1}\cdots a_{\sigma(n)n}&=\sum_{\sigma\in S_{n}}\text{sgn}(\sigma\sigma^{-1})\text{sgn}(\sigma^{-1})a_{\sigma(1)1}\cdots a_{\sigma(n)n}\\ & = \sum_{\sigma\in S_{n}}\text{sgn}(\sigma^{-1})a_{\sigma(1)\sigma^{-1}\sigma(1)}\cdots a_{\sigma(n)\sigma^{-1}\sigma(n)}\\ &= \sum_{\sigma\in S_{n}}\text{sgn}(\sigma^{-1})a_{1\sigma^{-1}(1)}\cdots a_{n\sigma^{-1}(n)} \\&=\sum_{\sigma\in S_{n}}\text{sgn}(\sigma)a_{1\sigma(1)}\cdots a_{n\sigma(n)},\end{align*} where the last line follows from the fact that $\text{sgn}(\sigma)=\text{sgn}(\sigma^{-1})$ and that we are summing over all of $S_{n}.$
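The bijection $\sigma \mapsto \sigma^{-1}$ driving the proof can also be checked termwise with a small Python sketch (my own illustration, not from any text): for every $\sigma$, the $\sigma$-term of the column-permuted sum equals the $\sigma^{-1}$-term of the row-permuted sum.

```python
from itertools import permutations
from math import prod

def sign(p):
    # Sign via inversion count; p is a tuple of 0..n-1.
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def inverse(p):
    # Inverse permutation: q[p[i]] = i.
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def terms_match(a):
    # For every sigma, compare the sigma-term of the column-permuted sum
    # with the sigma^{-1}-term of the row-permuted sum.
    n = len(a)
    for s in permutations(range(n)):
        t = inverse(s)
        col_term = sign(s) * prod(a[i][s[i]] for i in range(n))
        row_term = sign(t) * prod(a[t[j]][j] for j in range(n))
        if col_term != row_term:
            return False
    return True

print(terms_match([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))  # True
```

Since the terms agree in matching pairs and inversion is a bijection of $S_n$ onto itself, the two sums as wholes must agree.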

It is worth noting that the idea for this proof came from looking at the question you're asking in the case of $3\times 3$ matrices. For example, the term $a_{13}a_{21}a_{32}$ coming from the permutation $(132)$ in the case of permuted column indices corresponds to (indeed equals) the term $a_{21}a_{32}a_{13}$ coming from the permutation $(123)$ in the case of permuted row indices. From here note that $(132)$ and $(123)$ are inverses in $S_{3}.$

#### Peter

##### Well-known member
MHB Site Helper

Thanks GJA ...

Thanks in particular for a very helpful proof!

Working through the details now ...

Thanks again ...

Peter