Why use dual space in PGA - R*(3,0,1) vs R(3,0,1)

I really don’t believe that anyone, including @enki, thought that relabeling all elements and inverting all operators wouldn’t have produced the same results. This is neither surprising nor mathematically interesting. Honestly, though, reading this stuff as a bystander leaves me a bit incredulous. Just move on, man, no need to cast aspersions.

Similar calculations made directly with wedge products have been in use for a long time, but I never thought of them in terms of norms. Very nice.

It seems like a more symmetric notation might be appropriate to make it more clear that (a) the norm involves only a subset of the components and (b) the two types of norm involve complementary subsets of components. For example, somebody new to PGA, but with an otherwise decent math background, would probably expect the undecorated norm ||{\bf v}|| for a vector or trivector {\bf v} to mean \sqrt{v_x^2+v_y^2+v_z^2+v_w^2}.

Yes. His remark there with respect to symmetry is the same as mine. The metric breaks the symmetry when using the same operators. Looking at the original twitter post, I’m not finding any suggestion that all operators be inverted as well (which, again, is nothing more than a cosmetic relabeling in my book). AFAICT it just looks like someone spent a lot of time trying to put a cohesive argument together and, instead of being appreciated, was met with general hostility. They’ve already given you the olive branch; just take it, and stop replying to me.

https://twitter.com/EricLengyel/status/1157745644682936320
https://twitter.com/EricLengyel/status/1158649131633528832

Thanks for the links. As I mentioned, prior to the notebook you linked from Observable, you had not mentioned the notion of a “dual operator.” Steven later writes that there isn’t, as far as he knows, a “Clifford algebra with the desired signature for points (000-) and planes (0111).” This is correct under all formal definitions of a Clifford algebra as discussed, but he adds that he looks forward to your writeup.

@elengyel Did you “picture” any geometric interpretations using the new operators you introduce? I saw the quick reference poster, but it focuses on definitions and computations.

Can you define more precisely what you mean by “dimensional inversion” in the above Twitter post? 140 characters isn’t much space for a meaningful mathematical conversation :-)

I would assume a reader with any basic background in complex numbers, quaternions, Minkowski metric, … would assume the norm to be \sqrt{x\tilde x}, which works fine for all blades in PGA. (and a similar formula with a fourth root exists for general multivectors). For a reader with e.g. mostly linear algebra background (i.e. \mathbb R^n) I could see them making that assumption - Charles did cover that in plenty of detail in the course notes (including both the coordinate-free and coefficient versions of the formulas).

For the infinite norm, I feel the \lVert x\rVert_\infty gives an understandable hint, but there is a good reason that we did not use \lVert x^* \rVert (the norm of the dual, as I implemented it above) in the course notes, and instead opted to specifically define this infinite norm for each of the degenerate basis blades. (the reason is that {P(\mathbb R^*_{3,0,1})} does not define or need a metric on the dual Grassmann algebra)
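For readers coming from a linear-algebra background, here is a minimal sketch (plain JavaScript, independent of any GA library) of how the two norms split a point’s coefficients between complementary subsets, as discussed above. The coefficient layout w e123 + x e032 + y e013 + z e021 is an assumption following the common plane-based convention; only the split itself is the point.

```javascript
// Sketch: the norm and the "infinite norm" of a PGA point use complementary
// subsets of the coefficients.  Assumed layout (plane-based convention):
// P = w*e123 + x*e032 + y*e013 + z*e021, where e123 is the only
// non-degenerate blade and the e0-containing blades are degenerate.

function pointNorm(p) {            // p = {x, y, z, w}
  return Math.abs(p.w);            // only the e123 coefficient survives x * ~x
}

function pointInfNorm(p) {
  // complementary subset: the degenerate (e0-containing) coefficients
  return Math.hypot(p.x, p.y, p.z);
}

const p = { x: 3, y: 4, z: 0, w: 2 };
console.log(pointNorm(p));     // 2
console.log(pointInfNorm(p));  // 5
```

This also illustrates the remark above about an undecorated ||v||: neither norm is the naive square root over all four components.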

That reason also brings me to some questions I still have about the details of your version of PGA. (and I would expect these same questions have caused some of the responses in this thread.)
You’ve mentioned before that you were unaware of the still existing controversy around the degenerate metric and the implementation of the duality operator. Allow me to try and create the context in which I can ask my questions - hopefully, this will also help other readers that are perhaps still new to some of these ideas to understand how things fit together.

The subject that deserves some extra attention here is the dualization procedure. It lies at the heart of the join of P(\mathbb R^*_{3,0,1}), in the form of its regressive product, and it is the basis upon which all of the anti- operators of P(\mathbb R_{3,0,1}) are built. Unfortunately, dualization is one of those very overloaded terms in math, so I believe it makes sense to clear up which type of duality we are talking about. To set the perspective, let us first specify the geometric version, as ultimately this is the duality we are after. I will call this geometric version, which defines a map between geometric elements, projective duality, introduced by Poncelet in the very first treatment of projective geometry (Traité des propriétés projectives des figures). (And in the plane - easier, and it avoids the risk of confusion that exists in 3D space, where lines are dual to lines.)

The projectivization procedure extends the set of Euclidean points with a set of infinite points, and the set of Euclidean lines with one infinite line. By doing so, the cardinalities of both these sets become equal, i.e. there is exactly one projective point for each projective line (and this is not the case with standard Euclidean lines and points). With the same number of elements, we can define a bijection between these projective points and projective lines. The map we are after has only one requirement: it needs to map ‘points on one line’ to ‘lines through one point’. Many such bijections exist (find one and apply any isometry to find another one). All of these choices are valid duality maps that result in valid complementary meet and join operators.
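The one requirement above can be checked numerically. A minimal sketch in the projective plane, using the simplest duality map (reinterpret a point’s homogeneous coordinates as line coefficients and vice versa); the specific line and points are arbitrary illustrations:

```javascript
// Projective plane P^2: a point (x:y:z) lies on a line [a:b:c]
// iff a*x + b*y + c*z = 0.  The simplest duality map just reinterprets
// a point's coordinate triple as line coefficients (and vice versa).
const incident = (pt, ln) => pt[0]*ln[0] + pt[1]*ln[1] + pt[2]*ln[2] === 0;
const dual = (coords) => coords.slice();   // same triple, opposite role

const L = [1, 1, -1];                      // the line x + y - z = 0
const P1 = [1, 0, 1], P2 = [0, 1, 1];      // two points on L

console.log(incident(P1, L), incident(P2, L));   // true true
// Dually: the point dual(L) must lie on both lines dual(P1) and dual(P2),
// i.e. 'points on one line' became 'lines through one point'.
console.log(incident(dual(L), dual(P1)), incident(dual(L), dual(P2))); // true true
```

Because the incidence form is symmetric, this particular bijection satisfies the requirement automatically; composing it with any isometry gives the other valid choices mentioned above.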

Once we understand the geometry behind projective duality, we can focus on the question of how to implement this map algebraically. I believe it is useful to look at the algebras with a non-degenerate signature first. In these algebras, multiplication/division by the pseudoscalar provides an easy algebraic procedure that produces the desired geometric mapping. In these scenarios, the dual algebra (not the same as the dual geometry) is never explicitly constructed. There is no need, as multiplication with the pseudoscalar is well defined and both the geometric elements and their geometric duals live in the same algebra.

When a degenerate metric is introduced, multiplication with the pseudoscalar can no longer be used to algebraically implement the duality map (it instead provides another useful operator, called the polarity operator in the course notes). So for duality a new approach is needed. Several approaches have been suggested, and PGA itself (its formulas and constructions) depends only on the geometric projective duality, not on the particular algebraic implementation.
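To make the failure concrete, here is a minimal basis-blade geometric product (the standard bitmask sketch, not tied to any of the libraries discussed). In R(3,0,0), multiplying by the pseudoscalar maps every blade to (a signed copy of) its complement; in R(3,0,1), every blade containing the degenerate e0 is annihilated, so the map loses information and cannot serve as a duality.

```javascript
// Minimal geometric product of two BASIS blades.  Blades are bitmasks
// (bit i <-> basis vector i); the metric lists the squares of the
// basis vectors.  A sketch to show why pseudoscalar multiplication
// implements duality only when the metric is non-degenerate.

const popcount = (x) => { let c = 0; while (x) { c += x & 1; x >>= 1; } return c; };

// sign picked up while sorting the concatenated blade into canonical order
function reorderSign(a, b) {
  let sum = 0;
  for (let t = a >> 1; t !== 0; t >>= 1) sum += popcount(t & b);
  return (sum & 1) ? -1 : 1;
}

// geometric product of two basis blades; returns [coefficient, blade]
function gp(a, b, metric) {
  let coef = reorderSign(a, b);
  for (let i = 0; i < metric.length; i++) {
    if (a & b & (1 << i)) coef *= metric[i];   // contracted pairs pick up e_i^2
  }
  return [coef, a ^ b];
}

// R(3,0,0): dualizing e1 by pseudoscalar multiplication gives +e23
console.log(gp(0b001, 0b111, [1, 1, 1]));      // [1, 0b110]

// R(3,0,1), with e0 the degenerate direction (square 0): e0 * e0123 vanishes,
// so the map is not invertible - it kills every e0-containing blade.
console.log(gp(0b0001, 0b1111, [0, 1, 1, 1])); // coefficient is 0
```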

The first method (as introduced in the Siggraph course notes, and implemented in the reference implementations) constructs a Grassmann algebra for the dual vector space explicitly; important to note here is that this dual algebra is an exterior algebra, with no inner product or metric. It defines the vectors of the algebra to be points, and those of the dual algebra to be lines, etc. The duality map is then a map (called J) that simply associates points/lines in the algebra with the same points/lines in the dual algebra. Next, the outer (non-metric) product in the dual algebra is used to define the join operator, and results are brought back using the same J map. The dual algebra only shows up in the definition of the regressive product, and all elements and operations live in a single algebra.

The second method I would like to mention is the one used in my ganja.js implementation. It does not construct a dual algebra, but instead uses the left/right complement to implement the projective duality map. The complement operations are defined on basis blades in the usual way, so that the geometric product of a basis blade with its complement produces the pseudoscalar (where the element, its complement, and the pseudoscalar all live in the same algebra). Similar to the non-degenerate scenario, this is a duality map that keeps all elements and operations in the same algebra. (This method compares very well to the one in use for non-degenerate scenarios: for example, one can use the right and left complement to dualize/undualize, or simply use the right complement twice and embrace the projective equivalence - very similar to the choice of dividing by the pss or multiplying again in e.g. CGA.)
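A sketch of this complement on basis blades (my own minimal bitmask version, not ganja.js source code): the sign is fixed so that blade ∧ complement = +pseudoscalar, and crucially no metric appears anywhere, which is why it survives the degenerate signature.

```javascript
// Right complement on basis blades (bitmasks over e0..e3, bit i <-> e_i).
// Sign chosen so that  blade ^ complement = +e0123.  No metric involved.

const FULL = 0b1111;                        // 4 basis vectors: e0, e1, e2, e3
const popcount = (x) => { let c = 0; while (x) { c += x & 1; x >>= 1; } return c; };

function reorderSign(a, b) {                // sign of sorting e_a e_b into canonical order
  let sum = 0;
  for (let t = a >> 1; t !== 0; t >>= 1) sum += popcount(t & b);
  return (sum & 1) ? -1 : 1;
}

// right complement: returns [sign, blade] with  blade_in ^ result = +e0123
function rightComp(blade) {
  const comp = FULL ^ blade;
  return [reorderSign(blade, comp), comp];
}

console.log(rightComp(0b0001));  // e0 -> [ 1, 0b1110], i.e. +e123
console.log(rightComp(0b0010));  // e1 -> [-1, 0b1101], i.e. -e023

// Applying it twice returns the original blade up to the sign (-1)^(k(n-k)):
const [s1, b1] = rightComp(0b0010);
const [s2, b2] = rightComp(b1);
console.log(s1 * s2, b2);        // -1 0b0010 : projectively the same e1
```

The final sign illustrates the "embrace the projective equivalence" option above: the double complement returns ±e1, which represents the same projective element.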

I would like to provide a similar analysis for the third method, by @elengyel, but am running into some issues; hopefully Eric can shed some light. For starters, it is unclear to me exactly what the complement operators Eric is using are. From the context, I have to assume that they take elements from the vector space and produce elements in the dual vector space (and the other way around)? If that is the case, how are they defined? (A product between elements from the vector space and elements from the dual space is not defined, so the standard definition x \overline x = \underline x x = I is not applicable?)
The dual algebra in Dr. Gunn’s PGA is a Grassmann algebra, without a metric. How are you defining the metric that is listed in the Cayley table of the anti-geometric product? I’m not sure I understand how you can arrive at that metric - unless you create a map like the J map that can make the association via the geometry?

In my no doubt limited understanding, one either constructs the dual algebra, in which case the duality map moves an element from one algebra to the other, and you will need some geometric (non-algebraic) association that links the two worlds (and J is there the obvious choice, the geometric identity map if you wish). Or you implement the desired projective duality using the left/right complement, but in this case there is only one algebra and the dual algebra is never constructed (or needed).

So, summarizing (and of course I could be missing important bits): for P(\mathbb R^*_{3,0,1}), both options are available, and one has the choice of either using the complement and staying in one algebra, or creating the dual algebra to be able to define the regressive product. P(\mathbb R_{3,0,1}) does not offer this freedom, as it plays entirely in the dual algebra, an algebra that cannot be constructed directly. (I’m assuming Eric will be able to resolve the above-mentioned ambiguity, to make at least the indirect creation of the dual algebra (with metric) possible.)

I hope that helps either provide some perspective - or that it helps pinpoint my mistakes or misunderstandings! Looking forward to your insights, @elengyel.

Steven.

Great post @enki. Still, I got a bit turned around reading it.

To clarify: If I’m looking at a point with cartesian coordinates (x,0,0) modeled in P(R*(3,0,1)):
In ganja.js it will be represented as: (-?)xe032 + 1e123
In Dr. Gunn’s notes/papers it will be represented as: xe₁ # should I add 1e₀?

Applying J to the same point, mapping from dual to primal representation:
in ganja.js: (-?)xe1 + 1e0
Dr. Gunn: xe⁰²³ + 1e¹²³

The same point modeled in P(R(3,0,1)) will be: xe₁ + 1e₀

The same point modeled in trusty old R(3): xe₁

Where are you seeing this? In the “Projective Geometric Algebra” paper, a point in \mathbf{P}(\mathbb{R}_{3, 0, 1}) is defined as the meet of 3 planes. The homogeneous \mathbf{e_{123}} comes out naturally after applying the equivalence relation of the projective space (viewed as a quotient space). The cheatsheet corresponds exactly but handles the signs for you when entities are expressed with respect to the cyclical basis.

Some confusing language and figures: Figure 6 from “Geometric Algebra for Computer Graphics” had me all turned around. Using superscripts for primal coordinates, where I’m used to it being reversed, messed me up, along with it not being terribly clear which labels belong to which geometry. There are two lines crossing at e¹, which indicated on first glance “the meet of two planes”, but wait, isn’t that a primal-space thing… oh, superscript on the left side, that means… wait, let’s go read that last section again.

The language indicating elements in the * algebra are to be considered as standard and we should be thinking in dual space also caused much cognitive dissonance while reading. If I’m thinking of a point being the intersection of 3 planes, I don’t feel like I’m thinking in the dual space, I’m thinking how the dual space represents something using planes from the primal space. If you were really in dual-land the basis vectors should be canonical and reflect the degree of the manifold, i.e. one index not three for canonical x.

If the dual is standard, then the dual of something there should be in primal space. If v is a k-vector ( in dual space ) then v* is in primal space. So * raises indices. Algebraically nice, but conceptually odd.

I felt like I should already know projective geometry before reading this for it to make sense.

Totally get it, I had to read it a few times myself but to be honest, I actually reread it a couple days ago and I can’t imagine myself framing the argument any better.

In the figure, the caption mentions “written with raised indices” and “written with lowered indices” and that is, I believe, the only place where the convention is used, just to disambiguate the presence of both \mathbf{e_0}, \mathbf{e_1} and \mathbf{e_2} in close proximity.

To think in the dual space, the first thing to do (I think) is to realize that in \mathbb{R}^N, the fundamental element is a vector emanating from the origin. Then you have a choice: does the vector correspond to a point, or to a plane (referring to \mathbb{R}^3)? The dual sense uses planes as the fundamental building block, from which we can build lines as intersections of planes, and points from the intersection of a line and a plane, or equivalently the intersection of three planes.

There isn’t any index-raising that I’m aware of, like you would find in a tensor notation or something.

Raised indices could be implemented if you liked. My library has that option. Might be useful in some situations.

One thing I struggle with when thinking in dual space is that the planes are “tilted” in the opposite direction from what I would expect if I view the 1-vector as a normal to the plane, compared to how they would be in projective R(N+1, 0, 0). I.e., if you were to visualize the plane with x coord = a, ae1+1e0, a plane in P(R(2,0,1)) intersecting the projective plane, it does so at the line x = -a. Which is fine, since you’ve decided it is the line at x = a and you build everything up from that. But there is this sign flip if you visualize its projection on the e0=1 plane. Maybe I should visualize the projective plane at e0=-1.

I actually ran into that issue too initially. Remember that the “normal vector” to a plane is actually the plane’s polarity. A more natural way to think of the plane’s equation is from the implicit equation ax + by + cz + d = 0. This equation works even if we scale the LHS by -1. So in that sense, the sign is unimportant (the same point field is encoded). The reason this ends up being more intuitive is its relationship to the exterior algebra. Two planes represented by two vectors is analogous to two such homogeneous equations which restrict the degrees of freedom by one (producing a line). The connection becomes more clear when you consider the set of homogeneous equations expressed in matrix form. Linear dependence implies the determinant vanishes, which is the same as the exterior product vanishing as well.
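The sign-unimportance point is easy to check numerically; a trivial sketch (arbitrary example plane, nothing library-specific):

```javascript
// The implicit plane ax + by + cz + d = 0 encodes the same point set when
// the coefficient 4-tuple is scaled by any nonzero factor, including -1,
// so the overall sign of the 1-vector encoding a plane is projectively
// unimportant.
const onPlane = ([a, b, c, d], [x, y, z]) => a*x + b*y + c*z + d === 0;

const plane = [1, 0, 0, -2];               // the plane x = 2
const flipped = plane.map((v) => -v);      // scaled by -1: same point set

console.log(onPlane(plane, [2, 5, -1]));   // true
console.log(onPlane(flipped, [2, 5, -1])); // true
```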

Another way to think about this is that when you take the cross product and use the right hand rule to get a plane’s normal, you are enforcing an orientation of the space at this point. In contrast, the basis ordering we choose defines the orientation in GA and the exterior product itself exhibits no such right hand rule (and so, is associative, compared to the cross product which just satisfies the Jacobi identity instead).

Right, that helps a lot. Thanks! Starting to understand the difference between thinking dually and just thinking of the dual objects represented by primal objects.

The 1-vector encoding the plane is not the normal vector. It really is the dual of the primal plane (sounds obvious now). That was my error, since they look similar. If it were a normal vector, it would have the undesirable effect of not transforming via reflection the way we want. It’s a pseudovector! Like the magnetic field, which is much better represented as a 2-blade. So if you wanted to encode the magnetic field as a 1-vector, you would be best served with something like e0 to encode the polarity.

Continuing the discussion from Why use dual space in PGA - R*(3,0,1) vs R(3,0,1):

Hello, thank you everyone for all the insight carefully displayed in this thread.
We are left with a few quick questions as we learn the ropes.

We haven’t finished grasping everything said in @enki’s Observable notebook, Plane vs Point based PGA. However, could we take the notes as a proof, or almost a proof, that the two different approaches are not isomorphic? If not, could anyone point us in the right direction of a proof or counter-proof-ish? We are also aware of CI/CD challenges which could persuade one to adopt one way versus another. What could be some crucial advantages from the CI/CD perspective, given everyone on the team knows some PGA?

Conclusion: only in the hyperplane-based interpretation can you have versors that represent translations. The algebra and its dual are NOT ISOMORPHIC - exactly because of the degenerate signature.

We apologize if our misunderstanding sounds trivial for the group. And, we are glad to re-read anything suggested. Thank you very much.

Hi Rho,

Welcome to the forum! Working through this thread probably misses some of the timeline details, so allow me to give a short reconstruction - by heart (please correct/add if needed @elengyel, @cgunn3).

  • SIGGRAPH 2019 : Charles and myself presented PGA to the CG audience. This is using the standard geometric product, and associating grade-1 elements to vectors.
  • The post you referenced above was a demonstration of why (with the standard products) the two are indeed not isomorphic. (btw, some other non-degenerate GA’s are isomorphic to their dual algebras …)
  • Following up on that popular course, I regularly made the statement that the dual algebra (again assuming the standard geometric product) does not offer a representation of the Euclidean group (but instead the generalized dihedral group), and thus is not isomorphic. @elengyel responded by insisting that an approach with grade-1 elements being points and offering translations should still be possible.
  • Several weeks later, @elengyel introduces new operators (the anti-geometric product, anti-dot product, anti-exponential, anti-logarithm, etc.): all the original calls wrapped in dualisation/anti-dualisation, following the pattern A antiop B = undual(dual(A) op dual(B)), similar to how the regressive product is often implemented.
  • The new conclusion is now that PGA with a representation of the Euclidean group can be achieved either plane-based (using the standard products) or point-based (using the products defined by @elengyel).

This removes any ‘computational’ argument; that is, all coefficients that can be computed with the plane-based approach and the geometric product can also be computed with the point-based approach and the anti-geometric product.

  • GAME2020 - my lecture on dual quaternions motivates my view on why, even if both options are available, the plane-based approach still offers substantial advantages - in short, it matches the grading of the group with the algebra. The remaining difference with the point-based/anti-operator approach now lies in the readability and factorizability of the elements. The higher-grade elements like e12 can be read as e1 * e2, which has a clear meaning in the plane-based approach (geometric product = composition, here of two reflections e1 and e2). In the point-based approach, e12 is still written as e1 * e2 (GP, not anti-GP), but I am not aware of any sensible meaning of the geometric product in the point-based approach. (Similarly, I haven’t got a single use case for the anti-geometric product in the plane-based approach.) (Simple intuition: one can write rotations as the composition of two reflections, but not reflections as the composition of two rotations; hence reflections are the fundamental, irreducible grade-1 elements.)
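For concreteness, the A antiop B = undual(dual(A) op dual(B)) pattern from the timeline above can be written as a generic wrapper. This is a sketch of the pattern only, not of @elengyel’s actual operator definitions: elements here are single weighted basis blades, and dual/undual are taken to be the right/left complements (one valid choice among those discussed earlier in the thread).

```javascript
// Anti-operator pattern:  A antiop B = undual(dual(A) op dual(B)).
// Elements are single weighted basis blades [coef, bitmask] over e0..e3
// (bit i <-> e_i); dual/undual are the right/left complements (no metric).

const FULL = 0b1111;
const popcount = (x) => { let c = 0; while (x) { c += x & 1; x >>= 1; } return c; };

function reorderSign(a, b) {                  // sign of sorting e_a e_b into canonical order
  let sum = 0;
  for (let t = a >> 1; t !== 0; t >>= 1) sum += popcount(t & b);
  return (sum & 1) ? -1 : 1;
}

function wedge([ca, a], [cb, b]) {            // outer product of weighted basis blades
  if (a & b) return [0, 0];                   // shared basis vector -> zero
  return [ca * cb * reorderSign(a, b), a ^ b];
}

const dual   = ([c, b]) => [c * reorderSign(b, FULL ^ b), FULL ^ b]; // right complement
const undual = ([c, b]) => [c * reorderSign(FULL ^ b, b), FULL ^ b]; // left complement

// anti-wedge (regressive product): the wedge conjugated by duality
const antiWedge = (A, B) => undual(wedge(dual(A), dual(B)));

// The pseudoscalar is the identity of the anti-wedge, just as the scalar 1
// is the identity of the wedge:
console.log(antiWedge([1, FULL], [5, 0b0110]));  // [ 5, 6 ], i.e. 5*e12 back
```

Every anti-operator in the list above follows this same conjugation shape, with the wedge replaced by the corresponding original product.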

Hope that helps,

Steven.

It’s a bit disingenuous to dismiss the concept of anti-operators when one of the most important operators in plane-based PGA is an anti-operator. Sure, there are fewer of them, but hey, duality. Neither method can do without the other.

We should call them “dual operators”, not “anti-operators”, because otherwise the dual commutator is called the anti-commutator, which can be confused with the usual anti-commutator, and the dual anti-commutator, being called the anti-anti-commutator, can be confused with the commutator.

I think it helps to have them all in the algebra, for completeness, and possible unknown applications. This does not affect the correctness of enki’s arguments.
