Seeking guidance in algorithmically implementing GA operations

I am a computer scientist with a passion for computer graphics, and when I discovered GA a year ago, I was immediately seduced by its elegance and exception-free approach to construction. I am currently trying to grasp the basics and hope that you wise people can help me with chewing through some of the tougher stuff. One challenge is that my mathematical knowledge is rudimentary at best — I did study some mathematics, but that was over 10 years ago and my courses never went beyond basic analysis and group theory. That is, I can read (and mostly understand) simpler definitions like what a vector space is, but I lack the intuition and the experience in dealing with more complex structures, which makes reading many papers rather difficult for me (as some things which I don’t understand are assumed to be trivial). So I would kindly ask you to be patient with me and to explain things as if I were particularly slow.

In particular, I would like to solidify my grasp on the various operations pertaining to GA and the relationships between them. To this end I am trying to implement an algorithm that generates the operations for a chosen metric, not unlike the existing clifford or Ganja.js libraries.

I think I understand how to properly implement the geometric product itself using the distributive property in combination with the fact that it anti-commutes for 1-blades. What I am doing here is recursively applying the distributive law until I end up with a long sum of 1-blades and then “canonicalise” these blades by “sorting” the bases in order (flipping the sign as I go) and applying the metric to eliminate products of a base vector with itself. This algorithm produces multiplication tables that are identical to what I find here, so I assume it is correct.
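Concretely, the canonicalisation step I describe can be sketched like this in Python (a simplification of what I do, assuming an orthogonal diagonal metric; a blade is a list of basis indices and `metric` holds the squares of the basis vectors):

```python
# Sketch of the canonicalisation step, assuming an orthogonal (diagonal)
# metric.  A product of basis 1-vectors is held as a list of indices, e.g.
# [2, 1] for e2 e1; metric[i] is the square of e_i.

def canonicalise(indices, metric):
    """Sort the basis indices, flipping the sign per swap, then eliminate
    repeated basis vectors via the metric.  Returns (sign, blade)."""
    idx, sign = list(indices), 1.0
    changed = True
    while changed:                         # bubble sort; each adjacent swap
        changed = False                    # anti-commutes two 1-vectors
        for k in range(len(idx) - 1):
            if idx[k] > idx[k + 1]:
                idx[k], idx[k + 1] = idx[k + 1], idx[k]
                sign, changed = -sign, True
    out = []
    for i in idx:                          # sorting makes equal indices
        if out and out[-1] == i:           # adjacent: e_i e_i = metric[i]
            out.pop()
            sign *= metric[i]
        else:
            out.append(i)
    return sign, tuple(out)

print(canonicalise([2, 1], [1.0, 1.0, 1.0]))       # e2 e1   -> -e12
print(canonicalise([1, 2, 1], [1.0, 1.0, 1.0]))    # e1 e2 e1 -> -e2
print(canonicalise([0, 0], [0.0, 1.0, 1.0, 1.0]))  # e0 e0 -> 0 in 3D PGA
```

Applied blade-by-blade after distributing, this reproduces the multiplication tables I compared against.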

Where I am stuck, however, is implementing the general exterior, regressive and inner products. The documents linked in the doc section are not really helping me either (maybe because I lack understanding). I found a really helpful set of definitions on Wikipedia where the exterior product and various inner products are defined using the geometric product and grade projection. While it doesn’t really help me in understanding what is going on, it is trivial to turn into an algorithm, which at least is something. It is unclear to me, however, whether these definitions are universally valid or only work for non-degenerate setups (in the common literature the assumption of non-degeneracy seems to be standard from what I have seen).

Speaking of degenerate metrics, what I am really struggling with is the dual map and the regressive product here. In the SIGGRAPH notes, Charles Gunn defines the regressive product using the dual map, but I do not understand how to construct the dual map in the first place. All other literature I have seen suggests right-multiplying a multivector by the inverse of the pseudoscalar, but in PGA the pseudoscalar is not invertible, if I understand correctly. At the same time, people seem to be computing with duals without any issues, and Eric Lengyel even provides a multiplication table for his geometric antiproduct, which as I understand can be computed using the dual and the regular product, so there must be a general form. I have read multiple topics on these forums, including this one, but I am still very confused. I have also read section 5.10.2 in the SIGGRAPH notes multiple times, but I do not understand how the Poincaré duality map can be operationalized. The notes state that it is defined as x -> x* but this does not really help in implementing the map itself. I have tried to look at the source code of existing libraries but I don’t understand what is going on there.

Thank you in advance for helpful suggestions and your patience.

Hi tza,
It sounds like you are getting well stuck into the coding and theory. In my opinion, definitely the best way to define the inner and outer products is in terms of the geometric product and grade selection. There is a good set of axioms for constructing the inner and outer product in Chapter 4 of Doran and Lasenby’s Geometric Algebra for Physicists (I have some slides on using these definitions that you might find helpful here).
Outer (wedge) product:

X_a\wedge Y_b = \left< X_a Y_b \right>_{a+b}

and inner (dot) product:

X_a\cdot Y_b = \left< X_a Y_b \right>_{|a-b|}

With these two definitions in place and with a geometric product table for the basis elements it is possible to build inner and outer product tables (and indeed left and right contraction tables as well if you so wish).
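To make that concrete, here is a rough self-contained sketch (my quick illustration, not the clifford source, and only for orthogonal diagonal metrics) of how those grade checks carve the outer and inner products out of a basis geometric product, with basis blades as integer bitmaps (set bits = basis vectors present):

```python
# Illustration only: basis blades as bitmaps (bit k set = e_k present), for an
# orthogonal diagonal metric given as the list of basis-vector squares.

def grade(b):
    return bin(b).count('1')

def gp(a, b, metric):
    """Geometric product of basis blades: a sign times the XOR of the bitmaps."""
    t, swaps = a >> 1, 0
    while t:                          # count anticommutation swaps needed
        swaps += bin(t & b).count('1')
        t >>= 1
    sign = -1.0 if swaps % 2 else 1.0
    shared, i = a & b, 0
    while shared:                     # shared basis vectors square to metric[i]
        if shared & 1:
            sign *= metric[i]
        shared >>= 1
        i += 1
    return sign, a ^ b

def outer(a, b, metric):              # X_a ^ Y_b = <X_a Y_b>_{a+b}
    s, r = gp(a, b, metric)
    return (s, r) if s and grade(r) == grade(a) + grade(b) else (0.0, 0)

def inner(a, b, metric):              # X_a . Y_b = <X_a Y_b>_{|a-b|}
    s, r = gp(a, b, metric)
    return (s, r) if s and grade(r) == abs(grade(a) - grade(b)) else (0.0, 0)

euclid = [1.0, 1.0, 1.0]
print(outer(0b001, 0b010, euclid))    # e1 ^ e2  -> +e12
print(outer(0b001, 0b001, euclid))    # e1 ^ e1  -> 0
print(inner(0b001, 0b011, euclid))    # e1 . e12 -> +e2
```

Looping `gp` over all pairs of basis blades and keeping only the entries that pass the grade check gives the full outer and inner product tables, and the same works for degenerate diagonal metrics like PGA’s, since the metric only enters through the shared-vector squares.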

To construct the tables in a fast way I would recommend using the bitmaps described in Chapter 19 of Dorst, Fontijne and Mann’s book: ‘Geometric algebra for computer science: an object-oriented approach to geometry’. The relevant code from the Clifford library that does these bitmap operations is around here:

And you can see our grade checks for the inner and outer products in these functions:

These product tables work for degenerate metrics as well as non-degenerate metrics, but what does have to change is the dual, as you have correctly pointed out. I’ll show you the clifford dual code and explain how it works…

First, when dual is called on a multivector we check what metric we are in:

If it’s a non-degenerate metric then great, divide by the pseudoscalar and we are done :+1: .

If we are in a degenerate metric, however, then we swap to using the right complement. The right complement is basically: what thing X^* do we have to right-multiply X by to get the pseudoscalar?
Here is our current code that implements it:

This is maybe a little difficult to get your head around at first because it is written to work with a general outer product table (this is so we can generate the left complement too…) but if we look at my first commit of the right complement it might be a bit easier.
The main idea is that, because of the way we have ordered the multivectors in memory, we can in fact iterate forwards and backwards over the blade list at the same time and the outer product of the blades will always give the pseudoscalar, up to sign. We therefore calculate the sign for each of these products, so that the final function is effectively: flip the multivector upside down and multiply by the sign list. In clifford we then bake this into a JITed closure and return it as a function:

    def gen_right_complement_func(self):
        """Generates the right complement of a multivector"""
        dims = self.gaDims
        bl = self.blades_list
        signlist = np.zeros(self.gaDims)
        for n in range(len(bl)):
            i = bl[n]
            j = bl[dims - 1 - n]
            # wedging blade n with the mirrored blade gives +/- the pseudoscalar;
            # record the sign from the last (pseudoscalar) coefficient
            signval = (-1) ** ((i ^ j).value[-1] < 0.001)
            signlist[n] = signval

        def right_comp_func(Xval):
            # reverse the coefficient array and multiply by the sign list
            Yval = np.zeros(dims)
            for i, s in enumerate(signlist):
                Yval[i] = Xval[dims - 1 - i] * s
            return Yval

        return right_comp_func

Now I’m sure this is not the only or the best way to compute the right complement… but it works. If you decide to go for a different memory layout, then what you would need to do is look at your outer product table and find which right-hand operand gives the pseudoscalar, then take that as your right complement for each canonical blade element. You could probably do these operations with the bitmap representation as well if you are smart about it, although we haven’t had a go as it’s not a pain point for us performance-wise in clifford.

Hope this is helpful!


Also forgot to mention, a really good investigation of GA computing (especially for high dimensions and non-orthogonal metrics) is Breuils, Nozick, Fuchs: ‘Garamon: A Geometric Algebra Library Generator’, which is here:


Thanks Hugo, this is all very useful! I will have a look (it will probably take me some time to digest it all). A note on my implementation: I am using R and my goal is to have a symbolic implementation generator for any metric choice (I already have it working for the conformal case). On top of that there are optimizer passes that are supposed to auto-generate a SIMD-friendly data layout for the final high-performance implementation.

A quick question about the dual: does the dual always have the property of being the right complement? Do all elements of a GA have right complements? From other discussions I have gathered that there can be multiple alternative ways to construct the dual within PGA (the core idea being that you map objects in one algebra to analogous objects in another algebra, e.g. an intersection of planes to a point, but I never understood how exactly this is done). Does your method take this ambiguity into account?

Sounds like some pretty gnarly symbolic stuff, it will be cool to add an R package to the GA software ecosystem :)!

About the dual. Let’s first talk about non-degenerate metrics with the example of CGA. With non-degenerate metrics it’s all the same space: a multivector and its dual are the same object, just in a different form, and you can bounce back and forth between the two forms by just multiplying by the pseudoscalar (sometimes you then need to worry about sign changes etc.). In these standard metrics it doesn’t really matter which representation you take as the ‘standard’ form and which as the ‘dual’, because the objects behave the same when we sandwich them with rotors, i.e.:

RX^*\tilde{R} \equiv (RX\tilde{R})^*

this makes these algebras mathematically convenient to work with. This is probably one of the main reasons that a lot of the literature is full of people working in non-degenerate metrics: you can design null structures in these algebras (think of the null vectors of CGA) to make translation rotors etc. and still have a very simple definition for duality. The trade-off of course is efficiency in implementation. If all you care about is rigid body transformations, lines, points and planes, then CGA is not the smallest algebra that can represent all of these; it has 2x the number of canonical basis blades that PGA has! (CGA uses these extra basis vectors to describe circles, spheres, dilation and the other conformal rotors, which are useful in some contexts/problems but superfluous in others.)

When we move to PGA we no longer have objects transforming in the same way as their duals, and as a result you have to pick one form as your main way of representing objects and one as your ‘dual’ to that. Steven De Keninck and Charles Gunn pick a representation with points defined as the intersection of 3 planes, i.e. a 3-vector. This representation allows them to have rotors that are even elements of the algebra with a standard exponential map to the Lie algebra of bivectors, and generally ties into the rest of the GA, Lie theory and screw theory literature. From this point of view you can embed PGA as a subalgebra of CGA, specifically a subalgebra over the so-called ‘flat’ elements (lines, planes, ‘flat’ points), check out for an example.

This is all a bit mathematically technical, but the upshot is that if you want to transform PGA objects in a ‘normal’ GA way, with rotors as the even objects in the algebra, you can represent them in this kind of setup. The downside of course is that when people first start using it, it’s pretty confusing, with this whole point-as-the-intersection-of-3-planes business. A way around this of course is to say: what if I represent all my PGA stuff in a way that makes points look like standard 4D homogeneous coordinate points and just apply the PGA dual before and after every operation that I do? This is the Eric Lengyel approach to PGA (please correct me if you are reading this and I’m wrong, Eric, I don’t claim to be an expert on degenerate metric algebras, just an interested observer!): he rolls his dual operation before and after the geometric product into a combo product which he then labels the geometric antiproduct. He then goes about constructing other anti-products and a whole kind of anti-PGA that is based on them. This all seems fine from a mathematical perspective; it’s not all that unusual to roll your own product, it’s GA, everyone and their mum has some custom product they have defined to do their specific take on the only correct way to do GA etc. etc. :grin: .

I’m mentioning both approaches to PGA here not because I am interested in sparking off a big debate about it, but because as far as I am aware the dual operation for both should be the same, and can be implemented as the right complement. As to whether duals in other algebras are always right complements, let’s have a look at CGA:

>>> from clifford.g3c import *
>>> from clifford.tools.g3c import random_line, random_plane, random_conformal_point
>>> L = random_line()
>>> Lstar = L.dual()
>>> L^Lstar
>>> P = random_plane()
>>> Pstar = P.dual()
>>> P^Pstar
>>> X = random_conformal_point()
>>> X^X.dual()

So the (1/pseudoscalar) dual is not always a right complement; we use the right-complement dual for the degenerate metrics because we can’t use (1/pseudoscalar) when the pseudoscalar is not invertible.

So that is my quick and dirty take on duals, again I don’t know the degenerate metric stuff all that well as I don’t use it in my day to day so will happily be corrected by people who know more!


Awesome summary. Just wanted to make a quick mention about signs and the dual. For supporting differential geometry applications, your dual may not be an involution in every algebra, i.e. applying it twice can give **M = -M. This is desirable for implementing the Hodge dual.

My own library has three “duals”: one which is much like Hugo’s and switches its actual implementation if it detects a degenerate metric; one that is strictly intended to give you the orthogonal complement (in the same space); and finally a Hodge star which satisfies k∧star(k) == (k⋅k)*𝑖.


Hi @tza,
Thanks for your question. Since you indicated that you didn’t understand my treatment of the Poincaré duality in the course notes, I thought I would wait and see how the discussion developed before piping up, preferring to see how others might explain things better than I had.
If you’ve watched the SIGGRAPH course video then you’ll know that I see my task as helping you move from the stance of “PGA as magic” to “PGA as understanding” – if you want to. By magic I mean that you are satisfied when you have code that runs and does what you want it to, but you might not be able to explain why it works to your colleague. I consider that a valid position and am not interested in forcing anyone to go further than they want in the direction of understanding.
If you do want however to understand how euclidean PGA works then I think there are some remaining issues that I’d like to address in this and my next post.

In this post I’d like to deal with the “right complement” and in a separate post I’ll address the issue of why euclidean PGA uses the dual construction (planes as 1-vectors).

The right complement is defined above as follows:

The right complement is basically: what thing X^* do we have to right-multiply X by to get the pseudoscalar?

This definition is well-defined as long as X^{-1} exists: then X^*=X^{-1}I, where I is the pseudoscalar.

But when the metric is degenerate, as in euclidean PGA, and X^2=0, then it’s not well-defined. For example, let X = e_{012}, the ideal point in the z-direction. Then the z-plane e_3 =: X^* certainly satisfies XX^*=I, but it’s not unique. In general, let X_\alpha^*=e_3 + \alpha e_0; then X_\alpha^* also satisfies XX_\alpha^*=I, since e_0^2=0. Geometrically, X_\alpha^* represents the plane parallel to the z-plane at a signed distance of \alpha from it. So if one plane is a right complement then all planes parallel to it are also right complements according to the above definition. Hence the “right complement” isn’t well-defined.

So if we want to have a well-defined map we need to specify it otherwise. There’s a standard way to do this called the canonical dual basis. [More on this in this post here.] It’s basically the algorithm that the previous posts show: given a simple blade X as the wedge of k basis 1-vectors, define X^* to be the blade formed by wedging together the (n-k) basis 1-vectors that do NOT occur in X (with a sign adjustment if necessary). That certainly will satisfy the previous “right complement” condition XX^{*}=I but due to the restrictive definition, it is now unique. The final step is to use linearity to extend the definition to a general k-vector.
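Operationally, with bitmap basis blades (set bits naming the basis 1-vectors present; my own illustrative Python, not taken from any particular library), the recipe reads:

```python
# Canonical dual basis / "right complement" on bitmap basis blades.
# Convention: right_complement(x, n) returns (s, c) meaning X* = s * e_c,
# with the sign s chosen so that e_x ^ X* = +I in the n-dim algebra.

def wedge_sign(a, b):
    """Sign of e_a ^ e_b for bitmap blades a, b; 0 if they share a vector."""
    if a & b:
        return 0
    t, swaps = a >> 1, 0
    while t:                      # count index pairs (i in a, j in b, j < i)
        swaps += bin(t & b).count('1')
        t >>= 1
    return -1 if swaps % 2 else 1

def right_complement(x, n):
    full = (1 << n) - 1
    comp = full ^ x               # the basis 1-vectors NOT occurring in x
    return wedge_sign(x, comp), comp

# 2D example (basis {e0, e1}, bit k <-> e_k):
print(right_complement(0b01, 2))    # e0* = +e1
print(right_complement(0b10, 2))    # e1* = -e0
# the PGA example above: (e012)* = +e3
print(right_complement(0b0111, 4))
```

Note that no metric appears anywhere: this is a purely combinatorial map on the chosen basis, which is exactly why it works when the pseudoscalar is not invertible, and also why it is basis-dependent, as the next paragraph shows.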

The right complement defined using the canonical dual basis in this way is, however, not coordinate-free! To see this, in the 2D Grassmann algebra \bigwedge(\mathbb{R}^2) consider replacing e_1 with e_0+e_1, so the new basis is \{e_0', e_1'\} = \{e_0, e_0+e_1\}. Then I' = e'_0 \wedge e'_1 = I. Construct the canonical dual bases: for the first basis \{e_0^*, e_1^*\} = \{e_1, -e_0\}, while WRT the second basis \{{e'}_0^*, {e'}_1^*\} = \{{e'}_1, -{e'}_0\} = \{e_0+e_1, -e_0\}, where the second equality follows by substitution. So the dual of e_0 = e_0' WRT the first basis is e_1, while WRT the second basis it is e_1' = e_0+e_1. So this is not coordinate-free.

Exercise: Let \{e_0, e_1, e_2\} be a basis of the 3D real Grassmann algebra \bigwedge(\mathbb{R}^3). Define a second basis \{e_0', e_1', e_2'\} = \{e_0, e_1, e_0+e_2\}. Show that the canonical dual of e_0 WRT the first basis is e_1\wedge e_2 = e_{12}, while WRT the second basis it is e_1\wedge (e_0 + e_2) = -e_{01}+e_{12}.

Takeaway: One of the big selling points of geometric algebra is that it is coordinate-free. A danger of coordinate-dependent code in such a system (even when it works for a given coordinate system) is that once that coordinate system changes, the code will probably not work any longer and the idea of looking at the coordinates won’t occur to people since they have been told “GA is coordinate-free.”

Everyone has to decide for themselves how comfortable they are with non-coordinate-free code. Read on if you want to learn how one further step yields a coordinate-free “dual” that works in all metrics.

It’s called “Poincare duality” or the “dual coordinate map” and it’s described in various forms in my articles for the past 10 years.

Here’s the short form: take the canonical dual map X^* obtained above. We first write it somewhat more precisely than we did above. Let e_I be some basis k-blade where I is the index set consisting of k of the first n+1 integers starting with 0. Let I^\perp be the index set obtained by taking the complement of I within \{0,1,...n\}. Then e_I \wedge e_{I^\perp} = \pm I.

To continue with our focus on n-D euclidean PGA: it can be considered as built on the dual Grassmann algebra G^* in which the 1-vectors are planes and the (n-1)-vectors are points, and the wedge operation is meet. Consider the standard Grassmann algebra G in which the 1-vectors are points and the (n-1)-vectors are planes, and the wedge operation is join. Let the elements of G be written with superscripts instead of subscripts, so the 1-vectors are written as e^i, etc. Then the dual coordinate map J: G^* \rightarrow G is defined on basis blades by J(e_I) = e^{(I^\perp)}. Extend it by linearity to the whole algebra.

Basic result: J maps a geometric element of G^* to the same geometric element of G, and is coordinate-free. (Hint: to understand how this works, note that the map D: G^* \rightarrow G given by D(e_I) = e^I is essentially the canonical dual basis map that was previously defined above as a self-mapping G^*\rightarrow G^*. So J is essentially obtained by applying the canonical dual basis map twice: e_I \rightarrow e_{I^\perp} \rightarrow e^{I^\perp}, which yields the identity, since the dual of the dual is the identity map. This also explains why J is coordinate-free since the second step undoes any coordinate-dependence introduced by the first.)

Example: if P \in \bigwedge^3 is a point in G^*, then J(P) \in \bigwedge^1 represents the same point in G, etc.

Takeaway: if you’re already calculating I^\perp for the canonical dual basis approach (above), all you need to do to make it coordinate-free is to interpret it as an index in the “other” algebra, i.e., instead of e_{I^\perp}, think e^{(I^\perp)}, and, to be safe, keep track that you’ve moved over to G. Since the main application of this dual map (in my experience) is to calculate the regressive product, this “visit” to the other algebra should be short-lived: a\vee b := J^{-1}(J(a) \wedge J(b)).
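On bitmap basis blades (set bits naming the basis 1-vectors; my own sketch, with one fixed sign convention) the whole round trip collapses to wedging the two complementary index sets and complementing back:

```python
# a v b = J^{-1}( J(a) ^ J(b) ) on bitmap basis blades (bit k set = e_k).
# J is just a re-labelling, blade I -> blade I-complement in the other
# algebra, so per pair of basis blades the formula reduces to wedging the
# two complements.  One fixed sign convention is used here; libraries may
# differ by +/- adjustments.

def wedge_sign(a, b):
    """Sign of e_a ^ e_b for bitmap blades; 0 if they share a basis vector."""
    if a & b:
        return 0
    t, swaps = a >> 1, 0
    while t:
        swaps += bin(t & b).count('1')
        t >>= 1
    return -1 if swaps % 2 else 1

def regressive(x, y, n):
    """e_x v e_y in an n-dimensional Grassmann algebra."""
    full = (1 << n) - 1
    cx, cy = full ^ x, full ^ y         # J: pass to the complementary blades
    s = wedge_sign(cx, cy)              # wedge in the "other" algebra
    return (s, x & y) if s else (0, 0)  # J^{-1}: complement of cx | cy

# n = 3: the pseudoscalar e012 is the identity of v, two distinct 2-blades
# meet in a 1-blade, and two 1-blades have no meet:
print(regressive(0b111, 0b011, 3))     # I v e01   -> +e01
print(regressive(0b011, 0b101, 3))     # e01 v e02 -> -e0 (in this convention)
print(regressive(0b001, 0b010, 3))     # e0 v e1   -> 0
```

No metric is consulted anywhere, which is the point: this regressive product works unchanged in degenerate metrics.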

Implementation: A single bit in the data structure for your GA element is sufficient to keep track of whether the element lives in G or G^*, and protects against invalid operations (like trying to take the wedge of an element in G and an element in G^*).

Wrap-up: Essentially I have described three versions of the “dual” map. I would also (continue to) recommend that more care be taken with the notation/terminology used here, in order to know exactly which of these three maps is meant in a given context. Note that in the below I use G^* as the source algebra, since this post is in the context of euclidean PGA, for which that is the case (why that is, I’ll address in my next post). There’s nothing magic about it; if your GA is based on a standard Grassmann algebra, you should flip the roles of G and G^* below.

  1. “dual of X”: The grade-reversing dual map G^*\rightarrow G^* that I described above, based on the “canonical dual basis”, which is coordinate-dependent. It’s also referred to in a previous post as the “right complement” (the same goes in principle for the “left complement”).
  2. “dual coordinates of X”: Poincaré duality, also grade-reversing, G^*\rightarrow G, coordinate-free, mapping an element of one Grassmann algebra to the same geometric element of the other Grassmann algebra, and
  3. “orthogonal complement of X” or “polar of X”: P: G^* \rightarrow G^*, P(X) = X I. I don’t use the pseudoscalar inverse here since in a degenerate metric it doesn’t exist. This operation is essential for doing metric geometry in both degenerate and non-degenerate metrics. It’s sometimes called the pole-polar pairing or metric polarity (a term from projective geometry). I’ll have more to say about its usage in euclidean PGA in my next post.

The regressive product: Note that 1) and 2) can be used to calculate the regressive product for any metric but only 2) is coordinate-free. 3) can be used when the metric is non-degenerate. In all cases the regressive product looks like a \vee b = M^{-1}(M(a) \wedge M(b)) where M is one of these 3 maps and \wedge is the native wedge product in G or G^* as the case may be.

Observant readers of my other PGA writings will notice that the “canonical dual basis” is the only new theme here, throwing new light on Poincaré duality (see the above discussion). Hopefully now that there are more people actually programming and using euclidean PGA (hurrah!), these comments can be read with fresh eyes and help clarify some of the new and unfamiliar features of PGA. And perhaps they can contribute to more conscious use of the proper terminology :-)

Disclaimer: in the interests of readability I have been quite cavalier about the various \pm sign adjustments that have to be made to make the whole thing work as advertised. I trust that the reader can take it as an exercise to figure out where to insert them.

@tza: if you still have questions on what I’ve written please don’t hesitate to post or send a message – it will naturally help if you can specify as exactly as possible where you lost the thread.

I welcome general feedback – I’ll be updating the SIGGRAPH course notes with this information so I want to make it as clear as I can without sacrificing accuracy.


Dear Charles, dear all,

Thank you so much for taking the time to write these detailed explanations; this is certainly going to be of tremendous value for my journey through GA! Right now, unfortunately, the demands of my day job have caught up with me, but I hope that I will find some time (and cognitive capacity) in the coming weeks to sit down and try all of this out.

@cgunn3 takes a lot of time explaining things and I hope everything said on this forum will make for an even better and updated SIGGRAPH course in a few years. :slight_smile:

With regard to my promised post on the reasons that euclidean geometry requires that 1-vectors be planes: I’ve decided it makes more sense to write this up as an observable notebook.

This notebook will be complementary to the excellent Observable notebook from @enki in the following sense:

Steven’s notebook proves, on the basis of a wide variety of typical calculations, that the algebra P(\mathbb{R}_{3,0,1}) does not provide the correct answers when applied to euclidean space.

My notebook will focus on showing that the algebra P(\mathbb{R}_{3,0,1}) does provide the correct answers … when applied to dual euclidean space.

I will post again when I have something to share. In the meantime, here is a simple example of how to “produce” dual euclidean space by dualizing euclidean space (see also Sec. 5.9 of SIGGRAPH course notes):

In the euclidean plane the ideal elements consist of a single ideal line and all of its points – in coordinates e_0 and x e_{20} + y e_{01}. Every line has a unique ideal point, its intersection point with the ideal line. Two lines are parallel if their intersection point is an ideal point.

Here’s the dual of this:

In the dual euclidean plane the ideal elements consist of a single ideal point and all its lines – in coordinates e^0 and x e^{20} + y e^{01}. Every point has a unique ideal line, its joining line with the ideal point. Two points are parallel when their joining line is an ideal line.

As Dorothy says to her little dog Toto in “Wizard of Oz”, “Toto, I’ve got a feeling we’re not in Kansas anymore.”