biVector forum

Why use dual space in PGA - R*(3,0,1) vs R(3,0,1)

@cgunn3 how do you feel about Eric Lengyel’s proposal at ?

Cheat sheet here,

I haven’t really had time to read this material in detail. But I read enough to suspect that it’s worth looking at the claim that the dual construction used in euclidean PGA is anti-intuitive. This argument appears to be based on the perception that having the planes be 1-vectors means that you start with the higher-dimensional subspaces and build up the lower dimensional ones. I suspect however this argument is based on an incomplete understanding of duality.

I’ve ended up writing a somewhat long post to explain what I mean. Apologies to the speed freaks out there.

Why do we say that a point is 0-dimensional? It’s equivalent to saying that it’s indivisible, elemental, simple: It’s the geometric primitive I start with (in the standard construction) and out of which I incrementally build up all the other primitives by wedging. For example the wedge m := P_1 \wedge P_2 of two points in the standard construction is a 2-vector representing the joining line of the two points. The line is 1-dimensional since an arbitrary point P incident with m (that is, satisfying m \wedge P = 0) can be represented by the points P_1 + \beta P_2 for \beta \in \mathbb{R} (to include P_2 in this expression you can use homogeneous coordinate \alpha : \beta in \alpha P_1 + \beta P_2 but I don’t want to overly complicate things here). The line m in this sense is conceived of as consisting of all the points incident with it, it’s called a point range in projective geometry. Similar remarks apply to a plane p = P_1 \wedge P_2 \wedge P_3 as a 2-dimensional set of points, called a point field in projective geometry. To sum up: the dimension of a geometric primitive in the standard construction measures how many linearly independent points you need to generate the primitive.
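This point-range picture is easy to check numerically. Below is a minimal numpy sketch (illustrative only, not from the post; the helper names are made up): the 2-vector m = P_1 \wedge P_2 is stored as an antisymmetric matrix of Plücker coordinates, and incidence is tested as m \wedge P = 0. Only incidence is involved, so no metric enters.

```python
from itertools import combinations
import numpy as np

def wedge_point_point(P1, P2):
    """Join of two homogeneous points: a 2-vector stored as the
    antisymmetric 4x4 matrix of its Pluecker coordinates."""
    return np.outer(P1, P2) - np.outer(P2, P1)

def wedge_line_point(L, Q):
    """Wedge of the 2-vector L with a point Q: the four components of
    the resulting 3-vector. All zero exactly when Q is incident with L."""
    return np.array([L[i, j] * Q[k] - L[i, k] * Q[j] + L[j, k] * Q[i]
                     for i, j, k in combinations(range(4), 3)])

P1 = np.array([1.0, 2.0, 3.0, 1.0])     # homogeneous points, w = 1
P2 = np.array([4.0, -1.0, 0.0, 1.0])
m = wedge_point_point(P1, P2)           # the joining line m = P1 ^ P2

# every point P1 + beta*P2 of the point range satisfies m ^ P = 0 ...
for beta in (0.0, 1.0, -2.5, 7.0):
    assert np.allclose(wedge_line_point(m, P1 + beta * P2), 0.0)

# ... while a point off the line does not
off = np.array([0.0, 0.0, 1.0, 1.0])
assert not np.allclose(wedge_line_point(m, off), 0.0)
```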

What does this look like in the dual construction? Now we build up all the other primitives out of planes. A plane in the dual construction is just as simple and indivisible as a point is in the standard construction. We’re used to saying that the wedge of two planes m = p_1 \wedge p_2 is the intersection line of the two planes. But we can be, and need to be, more precise. A plane p is incident with this line if it satisfies m \wedge p = 0. The set of such incident planes is called a plane pencil and is analogous to the point range defined above. If we’re going to allow saying “the joining line consists of all the points incident with it” then we also have to allow saying that “the intersection line consists of all the planes incident with it.” This pencil can be parametrized (exactly as above) as p_1 + \beta p_2 for \beta \in \mathbb{R}, so it’s 1-dimensional (of course with respect to the 0-dimensional building block, the plane).

Similar remarks apply to the wedge of three planes to produce a point: P = p_1 \wedge p_2 \wedge p_3. A plane p incident with this point satisfies p \wedge P = 0. Just as the plane in the standard construction can be thought of as all the points incident to it, the point in the dual construction can be thought of as consisting of all the planes that are incident with it. This set of planes is dual to the point field above and is called a plane bundle in projective geometry.
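The dual picture can be checked the same way with plain linear algebra (a sketch of my own, not from the post): compute the common point of three planes, then verify that every plane of the bundle \alpha p_1 + \beta p_2 + \gamma p_3 is incident with it.

```python
import numpy as np

# three planes a*x + b*y + c*z + d = 0, stored as coefficient 4-vectors
p1 = np.array([1.0, 0.0, 0.0, -2.0])   # x = 2
p2 = np.array([0.0, 1.0, 0.0, -3.0])   # y = 3
p3 = np.array([1.0, 1.0, 1.0, -6.0])   # x + y + z = 6

# the common point of the three planes (the dual-construction wedge
# p1 ^ p2 ^ p3, computed here with ordinary linear algebra)
xyz = np.linalg.solve(np.vstack([p1[:3], p2[:3], p3[:3]]),
                      -np.array([p1[3], p2[3], p3[3]]))
P = np.append(xyz, 1.0)                # homogeneous point (2, 3, 1, 1)

# every plane of the bundle alpha*p1 + beta*p2 + gamma*p3 is incident with P
rng = np.random.default_rng(0)
for _ in range(5):
    a, b, g = rng.normal(size=3)
    assert np.isclose((a * p1 + b * p2 + g * p3) @ P, 0.0)
```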

To sum up: once you’ve understood that

  1. dimension isn’t inherent in the geometric primitive but depends on the way that space is conceptualized, and
  2. that the projective geometric principle of duality provides a logically rigorous and historically proven basis for an alternative conceptualization of space,

then, based on my experience, the conviction that planes don’t make good 1-vectors can be overcome.

(Note that one still has access in PGA to both aspects of the geometric primitive – the Poincaré duality map allows you to move over from the plane-based to the point-based Grassmann algebra and vice versa. In fact that’s how the join operator (regressive product) is implemented. Example: you can’t join two bundles; you have to join two (naked) points. We’re not committed unilaterally to the dual construction for everything, only insofar as it mirrors the metric relationships of euclidean space.)

One final comment: I don’t want to minimize the seriousness of the mental revolution required to think of points and planes in the dual way that I’ve described above. We are hard-wired at some level to prefer the “reality is built out of points” point of view (excuse the pun, it’s plane silly) to the newer alternative “reality is built out of planes”. It’s one thing to say that mathematically the two statements are equivalent but it’s something else to inwardly experience the equivalence as reasonable or believable.

Fortunately, using euclidean PGA successfully does not require that you subscribe to any view about how reality is built up. An educated skepticism is always justified. My recommendation: first make sure that the results are mathematically impeccable (I hope I have made a start with this post), then implement, apply to reality, check for discrepancies, and repeat. I’ve been doing that for 10 years now with PGA and haven’t found anything that doesn’t agree 100% with everything else I know about the world.

Also, my article “Geometric algebras for euclidean geometry”, especially Section 5, “Homogeneous models using a degenerate metric”, has some things to say that may be of interest to readers of this thread.


My impression is that this can be seen as a milder shift in perspective from Gunn’s PGA than the author makes it out to be.

PGA recognizes that objects like points, lines, and planes, have equivalent representations in direct space R(n,0,1) and dual space R*(n,0,1). You can do non-metric projective operations like the meet and join in either space: in direct space the meet and join are implemented with \vee and \wedge respectively, and in dual space, it’s the opposite. But if you want to do metric operations like rotations and reflections with the geometric product, you have to do them in dual space. For this reason, the dual space representations are emphasized more in Gunn’s PGA.

Lengyel proposes emphasizing the direct space representations more, because they are more similar to more traditional approaches to projective geometry. To implement metric operations (rotations and reflections, and also translations as versors), Lengyel proposes a new “anti-geometric product” operation, which could be interpreted as an instruction to transform the operands to dual space, do a geometric product there, and then transform the result back to direct space, so that all geometric products are still actually happening in dual space. I suspect Lengyel wouldn’t put it exactly this way, though, because I think he wants to get rid of the need for the concept of dual space. And you don’t strictly need dual space, because you can just write down the multiplication table for the anti-geometric product directly, or you can implement it using “complement” operations (see the cheat sheet) and the standard geometric product.
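To make the “dualize, multiply, undualize” reading concrete, here is a minimal Python sketch of \mathbb R_{3,0,1} with basis blades as bitmasks. The complement sign convention (right complement chosen so that e_A \wedge \bar e_A = +e_{1234}) is one common choice and may not match Lengyel’s exact definitions; the point is only that the antiproduct built this way has the pseudoscalar, not the scalar, as its unit.

```python
from itertools import product

N = 4                      # basis e1..e4, direct space R(3,0,1)
METRIC = (1, 1, 1, 0)      # e4 squares to 0
FULL = (1 << N) - 1        # bitmask of the pseudoscalar e1234

def reorder_sign(a, b):
    """Sign of the permutation that merges two blade bitmasks."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp_blades(a, b):
    """Geometric product of two basis blades -> (blade, coefficient)."""
    sign = reorder_sign(a, b)
    for i in range(N):                 # contract repeated basis vectors
        if (a & b) >> i & 1:
            sign *= METRIC[i]          # e4*e4 = 0 kills the term
    return a ^ b, sign

def gp(x, y):
    """Geometric product of multivectors (length-16 coefficient lists)."""
    out = [0.0] * (1 << N)
    for a, b in product(range(1 << N), repeat=2):
        if x[a] and y[b]:
            blade, sign = gp_blades(a, b)
            out[blade] += sign * x[a] * y[b]
    return out

def rcomp(x):
    """Right complement: e_A -> s*e_{~A} with e_A ^ comp(e_A) = +e1234."""
    out = [0.0] * (1 << N)
    for a in range(1 << N):
        out[a ^ FULL] = reorder_sign(a, a ^ FULL) * x[a]
    return out

def lcomp(x):
    """Left complement, the inverse of the right complement."""
    out = [0.0] * (1 << N)
    for a in range(1 << N):
        out[a ^ FULL] = reorder_sign(a ^ FULL, a) * x[a]
    return out

def antigp(x, y):
    """Geometric antiproduct: dualize, multiply, undualize."""
    return lcomp(gp(rcomp(x), rcomp(y)))

# the unit of the antiproduct is the pseudoscalar e1234, not the scalar 1
I = [0.0] * (1 << N); I[FULL] = 1.0
x = [0.0] * (1 << N); x[0b0011] = 2.0; x[0b1000] = -1.0   # 2*e12 - e4
assert antigp(I, x) == x and antigp(x, I) == x
```

Whether one then keeps the dual space explicit (as here) or just tabulates the antiproduct directly is, as noted above, an implementation choice.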

I can see value in both points of view, and I’m not sure that one needs to win over the other. You can either have a more standard space (direct space) and a less standard geometric product (Lengyel’s anti-geometric product), or a less standard space (Gunn’s R*(n,0,1) dual space), and a more standard geometric product.

I do think it’s useful to get some exposure to dual space at some point because of the intuitions it gives about things like being able to view a line as a set of planes that contain it (compared to a set of points that it contains). But if you don’t want to think about dual space all the time, it seems that Lengyel’s approach lets you put it in the background.

I actually do have some issues with Lengyel’s “direct construction”, as it were, despite it being my worldview, so to speak, before encountering @cgunn3’s material, which sort of forced me to re-evaluate things.

As I mentioned to him in a Twitter debate, there is a “choice” in how to conceptualize a point (ray-plane intersection, 3-plane intersection), but when we consider motors and the exp/log maps, the formalism doesn’t really work with the direct construction.

The entire point of introducing quaternions and dual-quaternions in the first place is that they form a continuous group which implies interpolability and differentiability. In particular, this means that exp/log have closed form solutions. In Lengyel’s construction, exp/log don’t come about naturally so there’s no correspondence to the all important Lie-group/Lie-algebra mapping you get with the dualized approach.
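For reference, here are the closed forms in the plain quaternion case the paragraph alludes to: the standard axis-angle exp/log pair, which is exactly what makes interpolation work (a sketch under the usual unit-quaternion conventions, not anyone’s PGA motor implementation).

```python
import numpy as np

def qexp(v):
    """Exponential of a pure quaternion (half the rotation vector)."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(t)], np.sin(t) / t * v))

def qlog(q):
    """Logarithm of a unit quaternion: the inverse of qexp."""
    w, v = q[0], q[1:]
    s = np.linalg.norm(v)
    if s < 1e-12:
        return np.zeros(3)
    return np.arctan2(s, w) / s * v

axis = np.array([0.0, 0.0, 1.0])
q = qexp(0.5 * np.pi / 2 * axis)     # rotor for a 90-degree turn about z
assert np.allclose(qexp(qlog(q)), q)

# interpolability: exp(t*log(q)) traces out the rotation continuously
half = qexp(0.5 * qlog(q))           # rotor for a 45-degree turn about z
assert np.allclose(qlog(half) * 2, qlog(q))
```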

This is a very lucid take on the topic. The math ends up working out the same no matter which way you go. You could basically just swap the names of everything, and you’d get the opposite approach, leaving only one difference: the mapping between grade and dimensionality. In one case, elements of grade 1, 2, and 3 map to geometric objects of dimension 0, 1, and 2, in that order. In the other case, elements of grade 1, 2, and 3 map to geometric objects of dimension 2, 1, and 0, in that order. In the absence of any other valid reasons to choose one approach over the other, I choose the 1-up structure in which grades are one greater than the dimension instead of the 1-up structure in which grades are (n − 1) minus the dimension.


That is not correct. In both constructions, exp and log arise equally as naturally. I mentioned at the end of my post that I have investigated this issue and found that everything still works fine. Is it that you didn’t read that statement, or is it that you don’t believe me?


I’m trying not to be offended here, but damn, you’re making it difficult. I would ask that you show some professionalism and refrain from posts like these that can easily be taken as personal insults. In my view, you are mocking me in violation of the rules of conduct for your own forums.

My interpretation of the mathematics is that both products belong to both spaces. You yourself use the exterior ∧ and ∨ products in the same setting, and they are duals of each other. The geometric analogs, the ⟑ and ⟇ products, can also coexist peacefully. In my opinion, using the dual product is no more confusing than using dual representations of geometries.

My preference for the geometric antiproduct ⟇ starts with my preference that planes be trivectors. As I attempted to explain in my post, the regressive product ∨ would then be used to calculate the line where two planes intersect. Because a rotation about such a line corresponds to a pair of reflections through the two planes, it’s natural that the operator representing such a rotation would incorporate the regressive product ∨, and this leads to the use of the geometric antiproduct ⟇.

Now why do I insist on planes being trivectors? It mainly has to do with the linear transformations that arise in computer graphics. Regardless of what methods we use to transform objects before they are drawn, we eventually have to apply a non-affine transformation to perform the view frustum projection, and we always use a 4x4 matrix for this. (Camera transformations can sometimes include reflections as well.) Such a 4x4 matrix transforms a vector v from one coordinate space to another as Mv, where v is treated as a column. But if our vectors represent planes, then those planes won’t be transformed correctly. Planes are correctly transformed using v det(M) M⁻¹, where v is treated as a row. To flip the meanings of vector and trivector, we would have to change a lot of code involving a lot of different uses of vectors and reformulate a lot of long-established conventions.
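The two transformation rules are easy to verify with numpy (a sketch; the det(M) factor only matters for orientation and scale, since plain M⁻¹ already preserves incidence):

```python
import numpy as np

# a frustum-style (non-affine) 4x4 transform: the last row copies z into w
M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, -1.0],
              [0.0, 0.0, 1.0, 0.0]])

P = np.array([1.0, 2.0, 3.0, 1.0])      # homogeneous point (column vector)
n = np.array([1.0, 1.0, 1.0, -6.0])     # plane x + y + z - 6 = 0 (row vector)
assert np.isclose(n @ P, 0.0)           # P lies on the plane

P2 = M @ P                              # points transform as M v
adj = np.linalg.det(M) * np.linalg.inv(M)
n2 = n @ adj                            # planes transform as v det(M) M^-1
assert np.isclose(n2 @ P2, 0.0)         # incidence is preserved

# transforming the plane as if it were a point gets it wrong in general
assert not np.isclose((M @ n) @ P2, 0.0)
```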

Btw, it would be helpful to clarify whether by “dual space” your intent is to work in the reciprocal vector space or the complementary vector space. The term “dual vector” is often used ambiguously, and there are subtle differences in meaning. For example, a complementary vector, or antivector, transforms as v′ = v det(M) M⁻¹, but a reciprocal vector transforms as v′ = v M⁻¹, without the det(M) factor.


When we exponentiate to get the motor algebra, we generally have a series of the form 1 + x + \frac{x^2}{2} + \dots. Your exponentiation would give us the pseudoscalar as the 1 there. I’m not sure I would refer to this as “arising naturally.” Furthermore, your rearrangement of the Cayley table produces an oscillating motor algebra parity (odd → even → odd, etc.).

(edit for clarification) By “oscillating” I mean depending on the space dimension, the parity of the motor subalgebra oscillates. This is what I mean by messing with the Lie correspondence.

Hi @vincent,

Both the point-based and hyperplane-based views are equally valuable. In my opinion, going from \mathbb R_3, where the point-based view is a good starting point, via \mathbb R^*_{n,0,1}, where you can introduce the hyperplane-based view, is the perfect step up to \mathbb R_{4,1}, where both views are inherently tied into one another (1-vectors and 4-vectors are always both points (spheres with zero radius) and spheres or planes (spheres with infinite radius)). After that I feel it is very useful to go back and consider, for example, the hyperplane-based view in \mathbb R_3 (to understand why in \mathbb R_3 we can only rotate around the origin). Just as one can pick a GA to match the problem, the same goes for the geometry.


Welcome and thank you for joining the forum! This is a great place for an open discussion on a topic that is clearly important to all of us. As I’ve told you before, I was very happy to see your use of the degenerate metric (which is still considered controversial), and I’ve appreciated your writeup on PGA very much. I hope we can continue a constructive conversation here as I strongly feel that our messages are very close together. Although the casual reader right now might miss that :wink:

So, importantly, and still in contrast to popular belief, it is possible to do Euclidean geometry, with versor representations for \mathfrak{SE}(n), the translations and rotations, using just a 1-D up geometric algebra. This has eluded us for many years, and I feel we all agree on the key points:

  • The projective model only works in the dual algebra / dual space.
  • for Euclidean geometry this projective model needs a degenerate projective dimension.
  • The non-invertibility of the pss under the GP is no problem for a dual.

Let me stress that these three points are not generally obvious to others, and it remains our main responsibility to convey their importance. It would be a shame to have these points be lost in the crossfire.

To implement these points I think we can summarize both methods as follows :

\mathbb R^*_{3,0,1}

  • elements are defined dually (perhaps contrary to what some expect).
  • rotors/motors are defined in the primary space (resulting in e.g. the expected value for identity etc).

\mathbb R_{3,0,1}

  • elements are defined in the primary space.
  • rotors/motors are defined dually. (perhaps contrary to what some expect).

In practice, these choices really are almost about notation only. Both algebras are trivially isomorphic, and it is perhaps good for readers to understand this. Whether you dualize your elements and take the normal GP, or leave them undualized and use the anti-GP, is just a matter of brackets.

I’d suggest that in a situation where it makes more sense to have your transformations in the primary space, pick \mathbb R^*_{n,0,1}, and in the other case pick \mathbb R_{n,0,1}.

I’m not sure I get the matrix argument? The memory layout of \mathbb R^*_{3,0,1} 3-vectors is exactly the same as that of \mathbb R_{3,0,1} 1-vectors, I’m using the standard projection matrices and have never had one complain the four floats passed in used to be called a trivector. It seems to me that when going to LA one loses these types either way and it always comes with the bookkeeping of providing the correct matrix?

In closing, welcome again to the forum (although fair warning - those who dare suggest the projective model have been called ‘the degenerate club’), and looking forward to a continued productive cooperation !




Hi there,

I’m very happy @elengyel takes some time to describe his work here. His work feels “fresh” from my newbie point of view.

It would be very nice to make a spreadsheet with both PGA “implementations” and compare how they handle all useful computations. The thread is becoming very long and I’m sure there is still a lot to discuss.



The zen of GA lies in the acceptance that both approaches to a symmetrical concept are equally valid and the universe has no preference for either one. Yes, exponentiation under the geometric antiproduct ⟇ has the pseudoscalar as its unity, and there’s nothing wrong with that.

I like to think about objects in GA as having a number of present / full dimensions and a complementary number of absent / empty dimensions. These always add up to the total dimension of the algebra, and it’s equally valid to do math from either perspective. (In 4D GA, for example, vectors have one full dimension and three empty dimensions, while trivectors have three full dimensions and one empty dimension.) There is always symmetry. In the dual space, the elements of the motor algebra always have an even number of full dimensions and an oscillating number of empty dimensions. In the non-dual space, the elements of the motor algebra always have an oscillating number of full dimensions and an even number of empty dimensions. Neither one is better than the other.

Indeed, every Cayley table can be inverted along with its elements to give a strict isomorphism. This is basically the same statement as saying that all rows of the table can be relabeled to anything we want (“dog” and “cat” for example) so long as the column labels are swapped accordingly. It’s one of the reasons we study abstract algebra after all. By that token though, if you want to continue with your convention, you will need to provide an inverted inner product definition for your relabeling scheme so that inner-product measures of angles make sense. Without that, you lose the ability to have the standard projection and rejection operators. Or I suppose you could say you don’t need that in your formulation and proceed without it.
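For readers following along, these are the standard inner-product-based projection and rejection operators being referred to, in the familiar non-degenerate Euclidean setting (a generic sketch, nothing specific to either construction):

```python
import numpy as np

def project(a, b):
    """Projection of a onto b via the inner product: (a.b / b.b) b."""
    return (a @ b) / (b @ b) * b

def reject(a, b):
    """Rejection of a from b: the component of a orthogonal to b."""
    return a - project(a, b)

a = np.array([3.0, 4.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
assert np.allclose(project(a, b), [3.0, 0.0, 0.0])
assert np.allclose(reject(a, b), [0.0, 4.0, 0.0])
assert np.isclose(reject(a, b) @ b, 0.0)   # rejection is orthogonal to b
```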

Thank you.

Hopefully, it won’t take much longer for everyone to agree that there are two isomorphic approaches, as you have outlined.

I was surprised to learn that the degenerate metric is so controversial. I find it to be entirely unobjectionable, perhaps even advantageous, and it clearly produces spectacular results.

I’ve always disliked the way in which multiplication by I or I^{-1} has been used to calculate duals, and I regard its nonviability under the degenerate metric as a plus. This is an area where I’d like to promote the notion of equally significant left and right complements more. To speak of “the” dual of some object misses part of the picture. For example, the equation x \vee y = (x^* \wedge y^*)^* is not quite correct because the same dual can’t be used inside and outside the parentheses. They must be opposite complements (and either way works).

Agreed. To continue along the lines of a few things I mentioned in previous posts above, renaming everything in one choice would give you the other choice and leave only one difference. In the primary non-dual space, the grade corresponds to the number of full dimensions, and in the dual space, the grade corresponds to the number of empty dimensions. If, in addition to renaming, you also subtract grades from n, then the distinction between the two choices disappears entirely.

It’s not super important, but it basically comes down to the correspondence between names and types. I have C++ classes named Vector and Trivector, and these are expected to behave as points and planes, respectively. The type system enforces the proper matrix multiplication rules so you don’t accidentally transform a vector like a trivector and vice-versa. When a million lines of engine code are already using this convention, there’s going to be a lot of resistance to reversing the meanings of those two names. The small abstraction provided by changing the names to Point and Plane would certainly eliminate the problem in this one case, but there’s more to it. For example, I actually make a distinction between direction vectors and points by using the class names Vector and Point to avoid storing a w coordinate, which is presumed to be zero for a Vector and one for a Point. I really don’t want to start calling physical quantities like velocity a trivector instead of a vector in order to be consistent with the notion that a point is a trivector.


Yes, I know. This is trivial in the context of what has already been done.

Great, I’m curious what you will come up with. Because all vector quantities are degenerate under your metric, I imagine you’ll need to take complements to provide an inner anti-product or something.

I’m not sure what you’re getting at. The dot product and anti-dot product are just as symmetric as everything else. There is no degeneracy that appears on one side that does not also appear for the same type of geometry on the other side.

A plane in your construction is a sum of trivectors. The inner product between two planes normalized this way would normally produce the cosine of the angle between them, but as trivectors, the dot product will be 1 regardless of the plane orientation. What I was trying to get at over Twitter was that, yes, the symmetry does exist, in particular when we consider the Grassmann exterior algebra, but because the metric itself isn’t symmetric (it’s \mathbb{R}_{(3, 0, 1)}, not \mathbb{R}_{(4, 0, 0)}), you’ll necessarily run into such artifacts once the geometric product is involved (which mixes the inner-product measure with the exterior product). After all, this is why you needed to introduce the anti-product table. Looking at the inner product shows most directly what’s happening. The inner product between your points as vectors in the direct construction does return the cosine of the angle between them (which was one of the motivations of the dual construction in the first place). Well, not so much a motivation perhaps as an indicator that there is some significance to it beyond just a mental exercise.


I’m sympathetic to this issue. One suggestion: I use “adapter classes” in C++ that map the notion of a point, line, plane, etc. to the Grassmannian basis in a manner that is agnostic to metric or dimensionality (this helps generalize points in 2D vs 3D, for example). As simple template classes, there is no virtual dispatch overhead, and conversion routines between different bases are trivial for the compiler to elide with constant-folding optimizations. I also haven’t noticed any compile-time regressions specifically due to this, likely because the types are easy to parse/instantiate and the template code itself is minimal (even in a larger codebase).

Likewise. I’m optimistic that there is a bunch of new math to be discovered because I don’t think the universe would waste half of the algebra. As a first example of something new, consider the formula used to calculate the norm ||{\bf x}|| (in the dual space):

||{\bf x}|| = {\bf x} ⟑ \widetilde{\bf x}

I would bet money that someone has looked for a similarly concise formula for the \infty-norm ||{\bf x}||_\infty without success. However, upon realization that all operations have a mirror operation, we see that the formula is:

||{\bf x}||_\infty = {\bf x} ⟇ \utilde{\bf x}

I don’t know about anyone else, but I find this symmetry to be exceptionally satisfying, and you only get it by acknowledging the existence of both operations ⟑ and ⟇. (Note that in the non-dual construction, these formulas are swapped.)

The notation \utilde{\bf x} means the reverse operator’s complement, or antireverse. Just as the reverse operator reverses the order in which grade-1 elements are multiplied with the progressive product \wedge, the antireverse operator reverses the order in which grade-(n - 1) elements are multiplied with the regressive product \vee.

The metric is symmetric, though. Under the geometric product, the metric applies to vectors, and we have

{\bf e}_1^2 = {\bf e}_2^2 = {\bf e}_3^2 = 1 and {\bf e}_4^2 = 0.

Under the geometric antiproduct, the metric applies to antivectors, and we have

{\bf \bar e}_1^2 = {\bf \bar e}_2^2 = {\bf \bar e}_3^2 = 1 and {\bf \bar e}_4^2 = 0.

(The overbar notation means right complement, so {\bf \bar e}_1 = {\bf e}_{234}, {\bf \bar e}_2 = {\bf e}_{314}, {\bf \bar e}_3 = {\bf e}_{124}, and {\bf \bar e}_4 = {\bf e}_{321}.)
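Those complement choices can be sanity-checked by permutation parity: each listed product e_i ē_i must come out to +e_{1234} (a quick sketch using a helper of my own, not notation from the post):

```python
from itertools import combinations

def perm_sign(seq):
    """Parity of the permutation that sorts seq (distinct entries)."""
    inversions = sum(1 for i, j in combinations(range(len(seq)), 2)
                     if seq[i] > seq[j])
    return -1 if inversions & 1 else 1

# right complements as listed above: e_i followed by its complement
# must multiply out to +e_1234
complements = {1: (2, 3, 4), 2: (3, 1, 4), 3: (1, 2, 4), 4: (3, 2, 1)}
for i, comp in complements.items():
    assert perm_sign((i,) + comp) == +1
```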
