Exponential of a bivector as the geometric product of two vectors

I think there is something wrong in the book ‘Geometric Algebra for Computer Science’.

On page 186 of the book it says that the exponential of a bivector can have the form:

e^{-B/2} = (b_{2k}b_{2k-1})\dots(b_2b_1)

in which b_i are unit vectors. B is expressed as the sum of k commuting 2-blades
B = B_1+B_2+\dots+B_k and each 2-blade is expressed as B_i=b_{2i-1}\wedge b_{2i}.

This implies that e^{-b_{2i-1}\wedge b_{2i}/2} = b_{2i}b_{2i-1}, which in general is not true.

Rewriting the expression (after relabeling the vectors) as e^{a\wedge b/2} = ab

and substituting the 2-blade a\wedge b = I\sin\alpha, where I is the unit 2-blade of the plane of a and b and \alpha the angle between them, the left-hand side becomes

e^{I\sin(\alpha)/2} = \cos(\frac{1}{2}\sin\alpha) + I\sin(\frac{1}{2}\sin\alpha)

while the right-hand side becomes

ab = a\cdot b+a\wedge b = \cos\alpha+I\sin\alpha

concluding that ab is not in general equal to e^{a\wedge b/2}, since

\cos(\frac{1}{2}\sin\alpha) + I\sin(\frac{1}{2}\sin\alpha) \neq \cos\alpha+I\sin\alpha
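
As a quick numerical sanity check (a sketch using the standard identification of the even subalgebra of the Euclidean plane with the complex numbers, I ↔ i; the value of alpha is an arbitrary choice):

```python
import cmath
from math import cos, sin

alpha = 0.7  # angle between the unit vectors a and b, arbitrary choice

lhs = cmath.exp(1j * sin(alpha) / 2)   # e^{I sin(alpha)/2}
rhs = cos(alpha) + 1j * sin(alpha)     # ab = a.b + a^b

print(lhs)   # approx (0.949 + 0.317j)
print(rhs)   # approx (0.765 + 0.644j) -- not equal
```

The two sides agree only at \alpha = 0.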

I don’t mean any offense to the author of that book, as I don’t know them personally, but that author and some others are unfortunately perpetuating several flawed presentations of geometric algebra. There are flaws in their work which lead to inconsistency issues. However, there aren’t really any sources that currently are perfect and get everything right. You always have to be vigilant and think for yourself; I certainly don’t rely on any of these authors and have built up my own understanding, which I verified myself.

It does seem to be confusing. As stated on the previous page 185, the expansion into commuting blades is only possible for Euclidean or Minkowski spaces. Your first equation for exp(-B/2) can be read as a statement of the Cartan–Dieudonné theorem, with the reflection factors combined in pairs to form rotations… I guess that you can always arrange things (i.e. make the factors in the wedge products orthogonal) so that the wedge products may be replaced with geometric products. In your notation the term a\wedge b/2 is then ab/2 = \beta uv, where u and v square to +1 or -1. The expansion is then \cos\beta + uv\sin\beta [or cosh and sinh, depending on the sign of (uv)^2]. If u squares to +1, then \cos\beta + uv\sin\beta = \cos(\beta)uu + uv\sin\beta = u(\cos(\beta)u + \sin(\beta)v), which gives the geometric product of two plane reflections as required. One of the factors is u, as Dorst states, but the other is a linear combination of u and v. Your expression for a\wedge b captures the angle between them (which I redefine as 90 degrees), but you also need a factor \beta to incorporate their lengths. A numerical check of this expansion is sketched below. Hope this helps.
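
Here is that expansion verified numerically (a sketch: u and v are represented by 2×2 real matrices squaring to +1, a faithful matrix model of R(2,0); the names and the value of beta are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

# Two orthogonal unit vectors squaring to +1, modeled as 2x2 matrices;
# their product uv then squares to -1.
u = np.array([[1.0, 0.0], [0.0, -1.0]])
v = np.array([[0.0, 1.0], [1.0, 0.0]])
uv = u @ v                              # [[0,1],[-1,0]], (uv)^2 = -I

beta = 0.7
R = expm(beta * uv)                                        # e^{beta uv}
expansion = np.cos(beta) * np.eye(2) + np.sin(beta) * uv   # cos + uv sin
factored = u @ (np.cos(beta) * u + np.sin(beta) * v)       # u(cos u + sin v)

print(np.allclose(R, expansion), np.allclose(R, factored))  # True True
```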

I am one of the authors of GA4CS, and I agree that the statement is not correct as it stands. In the proper conversion of the vectors at ground level to those in the exponential, there is indeed a log involved (duh), which for your rotational example takes the form of a GA atan function that converts the trig factors to their linear arguments. And there is always that annoying factor of 1/2…
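
Spelled out for the rotational example, with unit vectors a and b at angle \alpha and unit 2-blade I = (a\wedge b)/\sin\alpha:

ab = \cos\alpha + I\sin\alpha = e^{I\alpha}

so the bivector that belongs in the exponent is \log(ab) = I\alpha = \alpha(a\wedge b)/\sin\alpha rather than a\wedge b itself; recovering \alpha from the factors \cos\alpha and \sin\alpha is that atan step, and the rotor ab rotates over twice the angle between a and b, which is where the factor of 1/2 comes in.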
By the way, the issue of general bivector decomposition has only very recently been fully satisfactorily resolved by Martin Roelfs; recommended reading!

@chakravala I am glad that the consistency issues are being resolved; I am still continually learning myself. Getting things out there for the scrutiny of others, testing them in practice, it all helps to mold this into the tool we need. And then we write a better introduction, with more subtle shortcomings.

Are you referring to this paper?

Since this is BIVECTOR.NET, I send along this idea. It is rather different in approach from the Roelfs/De Keninck arXiv paper, but it seems to work for all R(p,q), n = p+q even.
As long as the eigenvalues of the vector-valued function B.v (B a bivector, v a vector) are distinct [this means no 0 eigenvalues], you may calculate the decomposition as follows.

  1. Form a matrix B whose columns are the vectors B.ek, where the ek are the n basis vectors.
  2. Find the eigenvalues λk and eigenvectors uk of B and vk of B.' (the transpose). Reindex them so that the eigenvectors for the same k have the same eigenvalue. In a sense, the vk form a reciprocal frame to the uk.
  3. Since the characteristic polynomial is real and contains only even powers, the eigenvalues will be as follows (a, b, c, d real):
    (i) real pairs +a and -a
    (ii) imaginary pairs ib and -ib
    (iii) complex quads +/-c +/-id (all four sign combinations).
  4. Calculate the n (rank-one) matrices Mk = uk vk.' / (vk.' uk). (I use .' since we want the transpose, not the Hermitian transpose.) Matrix theory then states that B = sum λk Mk.
  5. Suppose as an example that a real pair corresponds to indices 1 and 2 (+a and -a), an imaginary pair to 3 and 4 (+ib and -ib), and a quad to 5, 6, 7, 8 (c+id, c-id, -c+id, -c-id). Then B = a[M1-M2] + b[iM3-iM4] + c[M5+M6-M7-M8] + d[iM5-iM6+iM7-iM8]. It can be shown that the matrices in square brackets are real and have the same relation to GA bivectors as in step 1. Thus,
  6. Convert back to bivectors: B = aB1+bB2+cB3+dB4. Everything is real. This is the bivector decomposition. The n/2 bivectors (4 in this example) commute with each other (and with B). B1 and B2 are simple, but B3 and B4 are not, and they generate isoclinic rotations [though B3 and B4 never occur in Euclidean or Minkowski spaces]. So the good news is that if you understand rotations due to real eigenvalues (cosh and sinh), imaginary eigenvalues (cos and sin), and complex eigenvalues (mutually annihilating isoclinic rotations), you understand all rotations in GA. But I don’t understand isoclinic rotations, so don’t ask me about them.
    Note 1. The terms in the bivector decomposition do not seem to be useful eigenbivectors of anything; since they commute with B they lie in the null space of the bivector-valued function BxW, the commutator of a bivector B and a bivector W.
    Note 2. It is possible to derive efficient methods in GA involving elementary Jacobi (Givens) rotations and Hessenberg (Householder) reflection preambles [essentially a simplification of the Francis ‘QR’ algorithm], but these methods seem to work best in Euclidean and Minkowski spaces, and I haven’t located a method for other spaces that doesn’t involve at least knowing the eigenvalues.
    Note 3. To convince yourself of this in MATLAB or OCTAVE, make a random skew-symmetric matrix A, multiply it by a diagonal matrix K with p +1s and q -1s on the diagonal, calculate [u,lambda]=eig(K*A) and [v,lambda]=eig((K*A).'), re-index, and carry through the indicated calculations. The matrices in square brackets should be real! (A numpy transcription of this check is sketched below.)
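
Here is that check as a rough numpy sketch (my transcription, not the poster's code: the signature p, q and the seed are arbitrary choices, and for simplicity each eigenvalue orbit {+/-λ, +/-conj(λ)} is lumped into a single real commuting piece, so a quad yields cB3+dB4 combined rather than B3 and B4 separately):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2, 2                        # signature R(p,q), n = p + q even
n = p + q
K = np.diag([1.0] * p + [-1.0] * q)

# Note 3: the matrix of v -> B.v is (metric) x (random skew-symmetric).
A = rng.standard_normal((n, n))
Bmat = K @ (A - A.T)

# Steps 1-2: right eigenvectors of Bmat; the rows of inv(U) are the
# matching left eigenvectors (the reciprocal frame), already re-indexed.
lam, U = np.linalg.eig(Bmat)
V = np.linalg.inv(U)

# Step 4: rank-one matrices M_k = u_k v_k.' / (v_k.' u_k); with V = inv(U)
# the normalization v_k.' u_k = 1 holds automatically, and B = sum lam_k M_k.
M = [np.outer(U[:, k], V[k, :]) for k in range(n)]
assert np.allclose(Bmat, sum(lam[k] * M[k] for k in range(n)))

# Step 5: group the eigenvalues into orbits under negation and conjugation
# ({+a,-a}, {+ib,-ib}, or quads {+/-c +/-id}); each orbit yields one real
# commuting piece.
used, pieces = set(), []
for k in range(n):
    if k in used:
        continue
    orbit = [j for j in range(n)
             if any(np.isclose(lam[j], t) for t in
                    (lam[k], -lam[k], np.conj(lam[k]), -np.conj(lam[k])))]
    used.update(orbit)
    piece = sum(lam[j] * M[j] for j in orbit)
    assert np.allclose(np.imag(piece), 0)   # real, as claimed in step 5
    pieces.append(np.real(piece))

# Step 6: the pieces sum to B, commute pairwise, and are again matrices
# of bivectors (K @ piece is skew-symmetric, as in step 1).
assert np.allclose(sum(pieces), Bmat)
for P in pieces:
    assert np.allclose(K @ P + (K @ P).T, 0)
    for Q in pieces:
        assert np.allclose(P @ Q, Q @ P)
print(len(pieces), "commuting real pieces recovered")
```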