An alternating quantity can be represented in several different ways. For example, the exterior product applied to multiple vectors is defined to change sign under the exchange of any two of its vector arguments. This can be written
\displaystyle v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k}=\mathrm{sign}(\pi)\,v_{\pi(1)}\wedge v_{\pi(2)}\wedge\dotsb\wedge v_{\pi(k)},
where {\pi} is any permutation of the {k} indices, and {\mathrm{sign}(\pi)} is the sign of the permutation. Another way of writing this is
\displaystyle v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k}=\frac{1}{k!}\underset{i_{1},i_{2},\dotsc,i_{k}}{\sum}\varepsilon_{i_{1}i_{2}\dots i_{k}}v_{i_{1}}\wedge v_{i_{2}}\wedge\dotsb\wedge v_{i_{k}},
where each index ranges from {1} to {k} and {\varepsilon} is the permutation symbol (AKA completely anti-symmetric symbol, Levi-Civita symbol, alternating symbol, {\varepsilon}-symbol), defined to be {+1} for even index permutations, {-1} for odd, and {0} otherwise (i.e. when any index is repeated). In order to remove the summation sign by using the Einstein summation convention, the permutation symbol with upper indices is defined identically.
Δ Some texts, especially in the context of special relativity, define {\varepsilon^{0\dots n}=1} as we do, but define {\varepsilon_{0\dots n}=-1} (by lowering the indices with {\eta_{\mu\nu}}, as we will shortly cover). |
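As a concrete illustration, the permutation symbol can be computed by counting the transpositions needed to sort the index sequence. Below is a minimal Python sketch; the function name `levi_civita` and the sorting approach are illustrative, not from the text:

```python
def levi_civita(*indices):
    """Permutation symbol: +1 for even permutations of 1..k, -1 for odd,
    0 otherwise (repeated index or values outside 1..k)."""
    idx = list(indices)
    k = len(idx)
    if sorted(idx) != list(range(1, k + 1)):
        return 0  # repeated index, or not a permutation of 1..k
    sign = 1
    for i in range(k):                # sort while counting transpositions;
        for j in range(i + 1, k):     # each swap flips the sign
            if idx[j] < idx[i]:
                idx[i], idx[j] = idx[j], idx[i]
                sign = -sign
    return sign

print(levi_civita(1, 2, 3))  # +1 (identity permutation)
print(levi_civita(2, 1, 3))  # -1 (one transposition)
print(levi_civita(1, 1, 3))  #  0 (repeated index)
```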
The generalized Kronecker delta
\displaystyle \delta^{\mu_{1}\dots\mu_{k}}_{\nu_{1}\dots\nu_{k}}\equiv\underset{\pi}{\sum}\mathrm{sign}(\pi)\delta^{\mu_{1}}_{\nu_{\pi(1)}}\dotsm\delta^{\mu_{k}}_{\nu_{\pi(k)}}
gives the sign of the permutation taking the upper indices to the lower indices, and vanishes if the lower indices are not a permutation of the upper ones or if any index is repeated. We can then relate this to the permutation symbol:
\displaystyle \delta^{\mu_{1}\dots\mu_{k}}_{\nu_{1}\dots\nu_{k}}=\frac{1}{(n-k)!}\varepsilon^{\mu_{1}\dots\mu_{k}\lambda_{k+1}\dots\lambda_{n}}\varepsilon_{\nu_{1}\dots\nu_{k}\lambda_{k+1}\dots\lambda_{n}}\;\Rightarrow\;\varepsilon^{\lambda_{1}\dots\lambda_{n}}\varepsilon_{\lambda_{1}\dots\lambda_{n}}=n!
Δ It is important to remember that {\varepsilon} gives the sign of the permutation of sequential integers, and is only defined for {n} indices which take values {1\dots n}, while {\delta} gives the sign of the permutation of any number of indices. For example, if we write matrices with all upper indices, for two-dimensional symmetric and anti-symmetric matrices {S^{ij}} and {A^{ij}} we have {S^{ij}\varepsilon_{ij}=0} and {A^{ij}\varepsilon_{ij}=2A^{12}}, while for {n}-dimensional matrices we have {S^{ij}\delta^{kl}_{ij}=0} and {A^{ij}\delta^{kl}_{ij}=2A^{kl}}. |
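The definition of the generalized Kronecker delta as a signed sum over permutations, along with the two-dimensional matrix contractions in the note above, can be checked numerically. This is a sketch with illustrative names; the sample matrices are arbitrary:

```python
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation given as a tuple of positions 0..k-1."""
    sign, seen = 1, list(p)
    for i in range(len(seen)):
        for j in range(i + 1, len(seen)):
            if seen[j] < seen[i]:
                seen[i], seen[j] = seen[j], seen[i]
                sign = -sign
    return sign

def gen_delta(upper, lower):
    """delta^{mu_1...mu_k}_{nu_1...nu_k}: sum over permutations pi of
    sign(pi) * delta^{mu_1}_{nu_pi(1)} * ... * delta^{mu_k}_{nu_pi(k)}."""
    k = len(upper)
    total = 0
    for p in permutations(range(k)):
        term = perm_sign(p)
        for a in range(k):
            if upper[a] != lower[p[a]]:
                term = 0
                break
        total += term
    return total

# Two-dimensional examples: S symmetric, A anti-symmetric
S = [[1, 2], [2, 5]]
A = [[0, 7], [-7, 0]]
eps = lambda i, j: gen_delta((1, 2), (i, j))   # epsilon_{ij} in 2 dims
print(sum(S[i][j] * eps(i + 1, j + 1) for i in range(2) for j in range(2)))  # 0
print(sum(A[i][j] * eps(i + 1, j + 1) for i in range(2) for j in range(2)))  # 14 = 2*A[0][1]
```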
For objects with many indices, multi-index notation is sometimes used, in which a multi-index {I} can be defined as {I\equiv i_{1},i_{2},\dotsc,i_{k}}, but can also represent a sum or product. For example, the previous expression can be written
\displaystyle v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k}=\frac{1}{k!}\underset{I}{\sum}\varepsilon^{I}v_{i_{1}}\wedge v_{i_{2}}\wedge\dotsb\wedge v_{i_{k}}=\frac{1}{k!}\varepsilon^{I}v_{I}.
Δ Note that multi-index notation is potentially ambiguous and much must be inferred from context, since the number of indices {k} is not explicitly noted, and the sequence of indices may be applied to either one object or any sum or product. |
Another example of an alternating quantity is the determinant of an {n\times n} matrix {M^{i}{}_{j}}, which can be written
\displaystyle \det(M)=\underset{\pi}{\sum}\mathrm{sign}(\pi)M^{1}{}_{\pi(1)}M^{2}{}_{\pi(2)}\dotsm M^{n}{}_{\pi(n)}=\underset{i_{1},i_{2},\dotsc,i_{n}}{\sum}\varepsilon^{i_{1}i_{2}\dots i_{n}}M^{1}{}_{i_{1}}M^{2}{}_{i_{2}}\dotsm M^{n}{}_{i_{n}},
where the first sum is over all permutations {\pi} of the {n} second indices of the matrix {M^{i}{}_{j}}. Using the previous relation for the exterior product in terms of the permutation symbol, we can see that the transformation of the top exterior product of basis vectors under a change of basis {e'_{\mu}=M^{\nu}{}_{\mu}e_{\nu}} is
\displaystyle e'_{1}\wedge e'_{2}\wedge\dotsb\wedge e'_{n}=\det(M)\,e_{1}\wedge e_{2}\wedge\dotsb\wedge e_{n},
which reminds us of the Jacobian determinant from integral calculus, and as we will see makes the exterior product a natural way to express the volume element.
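The permutation-sum formula for the determinant can be verified directly against a library determinant. A brief sketch; `det_by_permutations` is an illustrative name and the sample matrix is arbitrary:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation given as a tuple of positions 0..n-1."""
    sign, seen = 1, list(p)
    for i in range(len(seen)):
        for j in range(i + 1, len(seen)):
            if seen[j] < seen[i]:
                seen[i], seen[j] = seen[j], seen[i]
                sign = -sign
    return sign

def det_by_permutations(M):
    """det(M) as the sum over pi of sign(pi) * M[0][pi(0)] * ... * M[n-1][pi(n-1)]."""
    n = len(M)
    total = 0.0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

M = np.array([[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 1.0]])
print(det_by_permutations(M), np.linalg.det(M))  # both ~7.0
```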
◊ The above relationship between the exterior product and the determinant means that under a positive definite inner product, for {k} arbitrary vectors {v_{\mu}=M^{\nu}{}_{\mu}\hat{e}_{\nu}} the quantity {P\equiv v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k}} satisfies {\left\Vert P\right\Vert =\sqrt{\det\left(M^{\mathrm{T}}M\right)}}, which equals the volume of the parallelepiped defined by the vectors for orthonormal {\hat{e}_{\nu}}. There are other ways in which {P} behaves like a parallelepiped, and it is often useful to picture it as such. |
Δ Since the specific vectors in {P=v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k}} can have many values without changing {P} itself (e.g. {v\wedge w=(v+w)\wedge w}), a more accurate visualization might be the oriented subspace associated with the parallelepiped along with a basis-independent specification of volume. In particular, the change of basis formula above means that given any pseudo inner product, {P} can always be expressed as the exterior product of {k} orthogonal vectors. |
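The volume interpretation of {P} can be checked numerically: with respect to an orthonormal basis, the components of {v_{1}\wedge\dotsb\wedge v_{k}} on the ordered wedge basis are the {k\times k} minors of the component matrix {M}, and the sum of their squares equals {\det(M^{\mathrm{T}}M)} by the Cauchy–Binet identity, whose square root is the Gram-determinant volume of the parallelepiped. A sketch under these assumptions (variable names are illustrative):

```python
import numpy as np
from itertools import combinations

# Columns of M are k arbitrary vectors in R^n (orthonormal basis assumed)
rng = np.random.default_rng(0)
n, k = 4, 2
M = rng.standard_normal((n, k))

# Components of P = v_1 ^ ... ^ v_k on the ordered wedge basis are the
# k x k minors of M (rows chosen by each ordered index subset)
minors = [np.linalg.det(M[list(rows), :]) for rows in combinations(range(n), k)]
norm_P = np.sqrt(sum(m**2 for m in minors))

# Gram-determinant volume of the parallelepiped spanned by the columns
vol = np.sqrt(np.linalg.det(M.T @ M))
print(np.isclose(norm_P, vol))  # True (Cauchy-Binet identity)
```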
If {V} is {n}-dimensional and has a basis {e_{\mu}}, a general element {A} of {\Lambda^{k}V} can be written in terms of a basis for {\Lambda^{k}V} as
\displaystyle A=\underset{\mu_{1} < \dotsb < \mu_{k}}{\sum}A^{\mu_{1}\dots\mu_{k}}e_{\mu_{1}}\wedge\dotsb\wedge e_{\mu_{k}}.
Here the sum is over only ordered sequences of indices, since after anti-symmetric elements are identified, only these are linearly independent. Each index can take on any value between {1} and {n}. We can also write
\displaystyle A=\frac{1}{k!}\underset{\mu_{1},\dotsc,\mu_{k}}{\sum}A^{\mu_{1}\dots\mu_{k}}e_{\mu_{1}}\wedge\dotsb\wedge e_{\mu_{k}},
where the coefficient is now defined for all combinations of indices, and its value changes sign under any exchange of indices (and thus vanishes if any two indices have the same value). The factor of {1/k!} ensures that the coefficients for ordered sequences of indices match the above expression.
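The equivalence of the ordered-index expansion and the full expansion with the {1/k!} factor can be checked on a small example. A sketch for {n=3}, {k=2} with arbitrary anti-symmetric coefficients; all names are illustrative:

```python
from itertools import product

# Represent each basis element e_i ^ e_j (i < j) by the tuple (i, j) and
# accumulate coefficients from both expansions.
n, k = 3, 2
A = {}  # anti-symmetric coefficients: A[(i, j)] = -A[(j, i)]
A[(1, 2)], A[(1, 3)], A[(2, 3)] = 5.0, -2.0, 0.5
for (i, j), v in list(A.items()):
    A[(j, i)] = -v
for i in range(1, n + 1):
    A[(i, i)] = 0.0

def sort_sign(idx):
    """Sorted index tuple and the sign of the permutation that sorts it."""
    idx, sign = list(idx), 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[j] < idx[i]:
                idx[i], idx[j] = idx[j], idx[i]
                sign = -sign
    return tuple(idx), sign

# Expansion over ordered index sequences only
ordered = {(i, j): A[(i, j)] for i in range(1, n) for j in range(i + 1, n + 1)}

# Full expansion over all index sequences, with the 1/k! factor
full = {key: 0.0 for key in ordered}
for idx in product(range(1, n + 1), repeat=k):
    if len(set(idx)) < k:
        continue  # wedge of repeated basis vectors vanishes
    key, sign = sort_sign(idx)
    full[key] += A[idx] * sign / 2  # 1/k! with k = 2

print(ordered == full)  # True
```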
The first expression shows that {\Lambda^{k}V} is a vector space with dimension equal to the number of distinct subsets of {k} indices from the set of {n} available, i.e. its dimension is equal to the binomial coefficient “{n} choose {k}”
\displaystyle \left(\begin{array}{c} n\\ k \end{array}\right)\equiv\frac{n!}{k!\left(n-k\right)!}.
A general element of {\Lambda V} then has the form
\displaystyle \underset{0\leq k\leq n}{\bigoplus}\left[\underset{\mu_{1} < \dotsb < \mu_{k}}{\sum}A^{\mu_{1}\dots\mu_{k}}e_{\mu_{1}}\wedge\dotsb\wedge e_{\mu_{k}}\right],
from which we can calculate that {\Lambda V} has dimension {2^{n}}.
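The dimension count can be confirmed numerically: summing the binomial coefficients over {0\leq k\leq n} always gives {2^{n}}, the dimension of the full exterior algebra {\Lambda V}. A brief sketch:

```python
from math import comb

# Dimension of each Lambda^k V is C(n, k); their sum is 2^n.
for n in (3, 4, 10):
    dims = [comb(n, k) for k in range(n + 1)]
    print(n, dims, sum(dims), 2**n)
```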