Why is there no "Inverse" operator defined in Bivector.net?

Hi All,

I know this is a bit of a controversial topic to post here, after reading many posts in this forum on similar topics such as Dual and Conjugate, but I will ask the question anyway.

I don’t see an Inverse operation defined in this cheat sheet, nor in the expression evaluator.

Also, I noticed that in the video thumbnails the "Reverse" operation is used for the sandwich product, whereas in the slides the Inverse operation is used.

I’m not sure why the slides differ.

To compare with other implementations, I looked at the Python implementation, which uses the Inverse operation but not the "Reverse". After digging into the code and googling, I figured out that the Python implementation was trying to implement this paper.

I was also reading the definitions of Reverse, Inverse, etc. on this page, but none of these sources match one another when it comes to implementation.

These differences have caused more confusion than they have helped my understanding. Any constructive input or suggestions to clear up this concept would be appreciated.

Note: I’m not a hardcore mathematician like most of you, but a programmer with a good understanding of how systems work, trying to implement a PGA library in Swift for use in computer graphics and robotics. If you are into Swift, here is my current WIP implementation.

Best,
SanMan

It is essentially the same: for a normalized versor (a geometric product of unit vectors), the inverse is simply the reverse.
But on my introductory slides, I preferred to avoid funny symbols that put people off and which they forget the meaning of. Simply ‘dividing by a vector’ needs no explanation after the initial shock.
In an implementation, an inverse sounds like an expensive operation, so we actually denote the reverse, which is just some sign swaps.
But again, for a (normalized) versor, they are the same. Do not forget that normalization!
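To make that concrete, here is a minimal Python sketch (a toy bitmask-based Cl(2,0) written purely for illustration; `gp` and `reverse` are my own names, not from any of the libraries mentioned in this thread) showing that the reverse of a *normalized* versor is its inverse, and that the normalization really is essential:

```python
import math

# Toy Euclidean GA Cl(2,0): a multivector is a dict {blade_bitmask: coeff}.
# Bit i set in the mask means basis vector e_(i+1) is a factor of the blade.

def _sign(a, b):
    """Sign from reordering the basis vectors of blades a, b into canonical
    order (Euclidean signature, so e_i * e_i = +1)."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(A, B):
    """Geometric product of two multivectors."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            out[ba ^ bb] = out.get(ba ^ bb, 0.0) + _sign(ba, bb) * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def reverse(A):
    """Reverse: a grade-k blade picks up (-1)^(k*(k-1)/2) -- just sign swaps."""
    return {b: c * (-1) ** (bin(b).count("1") * (bin(b).count("1") - 1) // 2)
            for b, c in A.items()}

# A normalized versor: the rotor R = cos(t) + sin(t) * e1e2
t = 0.3
R = {0b00: math.cos(t), 0b11: math.sin(t)}
print(gp(R, reverse(R)))  # scalar 1 (up to rounding): reverse(R) IS R's inverse

# The same versor scaled by 2 is no longer normalized:
V = {0b00: 2 * math.cos(t), 0b11: 2 * math.sin(t)}
print(gp(V, reverse(V)))  # scalar 4: here the reverse is NOT the inverse
```

So in a toy model like this, the "cheap" reverse only plays the role of an inverse after dividing out that scalar norm, which is exactly the normalization being stressed above.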

The inverse of a general multivector is not easy to compute; there have been some papers about that recently. They go into the math, but not into why you would actually want to compute it in general. I see no need yet.
A typical case of Clifford algebra vs. geometric(ally meaningful) algebra.

Thanks @LeoDorst for your response. One follow-up question about the normalization you mention:

Is it required for all the elements in the expression, or only for the one undergoing the "Reverse" operation?

I.e., for a sandwich product between ‘a’ and ‘b’, "-a * b * Reverse(a)", which of the following is the correct representation?

  1. Normalized(a) * Normalized(b) * Reverse(Normalized(a))
  2. Normalized(a) * b * Reverse(Normalized(a))
  3. a * b * Reverse(Normalized(a))
    etc…

Which of the vectors need to be normalized?
In my current implementation I have gone with option 1: I normalize all the vectors before feeding them into the geometric product.
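To try the options outside my Swift code, I also sketched them in a toy Python Cl(2,0) (bitmask blades; `gp`, `reverse`, `normalized` and `sandwich` are throwaway names of my own, and `sandwich` uses the "-a * b * Reverse(a)" convention from above — so this is an illustration, not any of the real libraries):

```python
import math

# Toy Euclidean Cl(2,0) again: multivector = {blade_bitmask: coeff}.

def _sign(a, b):
    """Reordering sign for the product of two blades (e_i * e_i = +1)."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(A, B):
    """Geometric product."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            out[ba ^ bb] = out.get(ba ^ bb, 0.0) + _sign(ba, bb) * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def reverse(A):
    """Grade-k blade picks up (-1)^(k*(k-1)/2)."""
    return {b: c * (-1) ** (bin(b).count("1") * (bin(b).count("1") - 1) // 2)
            for b, c in A.items()}

def normalized(A):
    """Scale a versor so that A * reverse(A) = 1."""
    return {b: c / math.sqrt(gp(A, reverse(A))[0]) for b, c in A.items()}

def sandwich(a, b):
    """-a * b * reverse(a), the reflection convention from this thread."""
    return {k: -v for k, v in gp(gp(a, b), reverse(a)).items()}

a = {0b01: 2.0}   # 2*e1 -- deliberately not unit length
b = {0b10: 3.0}   # 3*e2

print(sandwich(normalized(a), normalized(b)))  # option 1: {2: 1.0} -- b's 3 is gone
print(sandwich(normalized(a), b))              # option 2: {2: 3.0} -- keeps |b|
print(sandwich(a, b))                          # no normalization: {2: 12.0}
```

In this toy model, only the versor ‘a’ doing the reflecting needs to be normalized (option 2); normalizing ‘b’ as well (option 1) silently discards b’s magnitude, and skipping normalization entirely scales the result by |a|². Whether that lost magnitude matters presumably depends on whether the operands always arrive unit-weight.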

In general, in my current implementation I always normalize before evaluating any of the operations. Do you foresee any problems with this approach?

By going with option 1, I have found there is no need to explicitly multiply the equation by -1.
Here is an example code snippet in Swift:

let a = (1.0, e(1))
let b = (1.0, e(2))

// Here <*> is my representation of the sandwich product
print("\(a) <*> \(b) = ", a |<*>| b )  //  (1.0, [e(2)]) 
print("\(a) <*> \(a) = ", a |<*>| a )  //  (-1.0, [e(1)])
print("\(b) <*> \(b) = ", b |<*>| b )  //  (-1.0, [e(2)])

Even though I’m getting results that seem correct so far by normalizing everything before doing any geometric/dot/wedge products, I would like to know whether this backfires in any scenarios I have not encountered yet.