Dual operator disambiguation

I’m implementing a Geometric Algebra package that supports degenerate metrics, i.e. one that can implement 3D PGA as outlined at biVector.net.

Now I’m a bit confused as to how to implement the dual operator, or indeed about the precise definition of the dual operator for all metrics. Most GA literature I’ve seen agrees that in a Euclidean space of dimension n the dual is found via dual(A) = A/I, where I is the pseudoscalar, A/I = AI⁻¹, and I⁻¹ = Ĩ/(IĨ).

Not sure what to do for negative or degenerate metrics.

It seems we want the dual to be the orthogonal complement of its argument? I read somewhere that a basis vector that squares to 0 (its quadratic form) is orthogonal to itself. What do I make of that?

Does the wikipedia definition of orthogonal complement apply without modification?

It appears from the 3D PGA cheat sheet that the dual is simply the blade formed from all the basis vectors not part of a given k-vector, up to a sign. So the orthogonal complement of a degenerate basis vector is the unit blade formed from all the other canonical basis vectors. I can implement a lookup table or a bitwise-XOR-based dual informed by this inference.

Final thing: how do I sort out the sign, especially when a degenerate or non-Euclidean metric is involved? The orthogonal complement of a k-vector could have either sign and still be an orthogonal complement, no?
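For what it’s worth, here is a sketch of that lookup-table/XOR idea with one possible sign convention (Python; the helper names are mine, not any package’s API, and the convention chosen here is blade ∧ dual(blade) = +I):

```python
# Basis blades as bitmasks: bit i set means e_i is a factor, so in 3D PGA
# (generators e0..e3) the blade e01 is 0b0011 and the pseudoscalar e0123
# is 0b1111. Helper names are hypothetical, not from any package.
N = 4
FULL = (1 << N) - 1  # bitmask of the pseudoscalar

def complement_blade(blade: int) -> int:
    """Unsigned part of the dual: the basis factors NOT in `blade`."""
    return blade ^ FULL

def complement_sign(blade: int) -> int:
    """Sign s such that e_blade ∧ (s * e_complement) = +I, assuming the
    convention blade ∧ dual(blade) = +pseudoscalar."""
    comp = blade ^ FULL
    swaps = 0
    for i in range(N):
        if comp & (1 << i):
            # factor e_i of the complement must move left past every
            # factor of `blade` with a higher index
            swaps += bin(blade >> (i + 1)).count("1")
    return -1 if swaps % 2 else 1

assert complement_blade(0b0011) == 0b1100  # dual of e01 is built from e23
assert complement_sign(0b0011) == 1        # e01 ∧ e23 = +e0123
assert complement_sign(0b0010) == -1       # e1 ∧ e023 = -e0123, so flip
```

The sign is purely combinatorial (a permutation parity), so this construction never touches the metric; whether that is the sign convention you want is the orientation question discussed below.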

Bonus points: to be fully general, would I need to consider canonical basis vectors that are not orthogonal to each other, i.e. a quadratic form with values outside {0, 1, -1} for a given eᵢ? Or does that never happen / is it not useful in GA?

Edit: Bonus question


In geometric algebra, there are 2 complement operators: a left complement and a right complement.

The formula you posted does not correspond to either of them. As described in my paper, the right complement is \star \langle\omega\rangle_p = \langle\tilde\omega\rangle_p I and the left complement is I\langle\tilde\omega\rangle_p.

The metric is handled by the geometric product.

Yes, that’s correct: modulo grade orientation and left/right handedness, it is the orthogonal complement.

No, additional information is required; that article is not enough to define the complement in GA (it lacks orientation).

Yes and no. In my own formulation of differential geometric algebra, there are 3 types of basis element that can square to 0, and each of the 3 has a different complement calculation. For example, there are Grassmann basis elements with metric 0, null-basis elements from CGA, and finally what I would call Leibniz-Taylor symmetric basis elements. Each is treated differently in my algebra.

What language are you using? Please share your code.

I’m trying to build up GA from a minimal set of operations/objects, more akin to the approach of @enki.
Many relations can be succinctly expressed using a dual operator. However, I don’t have a nice formula/algorithm for the dual in non-Euclidean or anti-Euclidean metrics.

I believe your right complement is what I’ve seen defined as the undual. The idea is that the undual of a dual gets you the original object back (same orientation).

Let me try to work it out; correct me where I’m wrong. Most of the material I have is very “+”-metric focused.
Assume we define the dual as the orthogonal complement (of a subspace W in V).
I’ll try to use the minimal set of GA axioms I can:

In GA we can use the pseudoscalar I for V and a blade B spanning our subspace W. Now we are looking for dual(B), the blade spanning U, where U ⟂ W and U ⊂ V.

We want this relation to be true:
dual(B)∧B = aI, where a is a scalar (maybe a < 0), i.e. dual(B) and B together span I.
Actually we really want a = 1.0, but my math went off the rails so I had to add that scalar.

dual(B) = aᵢeᵢ∧ aⱼeⱼ∧…; by construction, each factor eᵢ is perpendicular to B.
dual(B)∧B = dual(B)B = aI

The object dual(B) we are looking for is a simple blade (i.e. aᵢeᵢ∧ aⱼeⱼ∧…), so dual(B)dual(B) is a scalar, and we can get something close to the formula I had:
dual(B)(BI⁻¹) = a, so the object BI⁻¹ equals dual(B) up to a scaling (or a sign flip).
In other words, BI⁻¹ is made of the basis factors in dual(B), and contains no factors outside dual(B) (otherwise we’d have a non-scalar result). They span the same space.
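One way to sanity-check this reasoning numerically is with a tiny blade-level geometric product over a diagonal metric. This is only a sketch (hypothetical helper names, the 2D Euclidean plane, basis blades as bitmasks), not a full multivector product:

```python
# A minimal geometric product of basis blades over a diagonal metric,
# just to check that B I⁻¹ lands on the complementary blade in a
# non-degenerate algebra. Bitmask blades, hypothetical names; a sketch.
metric = [1, 1]      # squares of the generators: e1² = e2² = 1
N = len(metric)

def gp(a: int, b: int):
    """Geometric product of two basis blades; returns (sign, blade)."""
    sign = 1
    for i in range(N):
        if b & (1 << i):
            # reorder: e_i moves left past the higher-index factors of a
            if bin(a >> (i + 1)).count("1") % 2:
                sign = -sign
            if a & (1 << i):
                sign *= metric[i]   # repeated factor contracts: e_i e_i
                a &= ~(1 << i)
            else:
                a |= 1 << i
    return sign, a

I = (1 << N) - 1                  # pseudoscalar e12
assert gp(I, I) == (-1, 0)        # e12² = -1, hence I⁻¹ = -e12
assert gp(0b01, I) == (1, 0b10)   # e1 e12 = e2
# so dual(e1) = e1 I⁻¹ = e1 (-e12) = -e2: the complementary blade up to sign
```

So in a non-degenerate metric the BI⁻¹ recipe does produce the complementary blade, with the sign fixed by I⁻¹.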

Am I any closer to a formula or algorithm to compute dual(B)?

Julia. I was inspired by your work on Grassmann.jl, @chakravala. I will share the code; my supervisor has said he’s cool with it. I’ll make it available when I get this dual function written :slight_smile:

I didn’t clearly point out the problems with using the pseudoscalar to compute the dual via contraction or inverse. The pseudoscalar contains the canonical 1-vector whose quadratic form is 0, so the inverse of the pseudoscalar doesn’t exist.
Contracting any blade containing the degenerate canonical 1-vector ends up = 0: e01*e0123 = 0, whereas what you want is for the dual to be e23.
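The failure mode can be seen concretely in a small sketch (hypothetical helper names, the metric given as a list of generator squares): with e0² = 0, the blade product against the pseudoscalar vanishes whenever e0 repeats, while the metric-free bitwise complement still yields e23:

```python
# With a degenerate generator, the blade product against the pseudoscalar
# vanishes (e0 appears twice and e0² = 0), while the metric-free bitwise
# complement still gives the expected answer. Sketch, hypothetical names.
metric = [0, 1, 1, 1]    # 3D PGA: e0² = 0, e1² = e2² = e3² = 1
N = len(metric)
FULL = (1 << N) - 1      # pseudoscalar e0123

def gp(a: int, b: int):
    """Geometric product of two basis blades; returns (sign, blade)."""
    sign = 1
    for i in range(N):
        if b & (1 << i):
            if bin(a >> (i + 1)).count("1") % 2:
                sign = -sign
            if a & (1 << i):
                sign *= metric[i]   # e0 e0 = 0 kills the whole term
                a &= ~(1 << i)
            else:
                a |= 1 << i
    return sign, a

e01 = 0b0011
assert gp(e01, FULL)[0] == 0     # e01 * e0123 = 0: no dual this way
assert e01 ^ FULL == 0b1100      # the bitwise complement is still e23
```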

Found a solution (well, someone on Stack Exchange did). It works really nicely, and it appears to give the same result as the old formula when the basis contains no degenerate element.


I have discussed this question in several documents, all of which can be accessed from this ResearchGate project.

  • the SIGGRAPH 2019 course notes, Section 5.10, under the heading “Poincare duality”,
  • the discussion in Section 3.4 of my article “Geometric algebras for euclidean geometry”,
  • Section 2.3.1, “The Isomorphism J”, of my thesis “Geometry, Kinematics, and Rigid Body Mechanics in Cayley-Klein Geometries” (see especially for the n-dimensional discussion)

The short form: there are two maps involved, duality (or the dual coordinate map) and polarity. The first is non-metric and works independently of the metric to provide the regressive product (in this case, join); the second is metric and can only be used for this purpose with non-degenerate metrics.

Additionally, Jon Selig, in his book “Geometric Fundamentals of Robotics” (Springer, 2005), pp. 226-228, describes an alternative but equivalent approach called the shuffle product, which gives rise to a so-called Grassmann-Cayley algebra.


Thanks @cgunn3. I found that “On the Homogeneous Model of Euclidean Geometry” disambiguated things nicely, although to fully understand the subtleties between the polarity and duality operators/maps I’d need to study the references from that paper.

What I have done is to provide what I hope is a flexible enough set of methods/functions to allow a user of my package to create operators for a wide range of geometric algebras. There is the option to create both dual and standard generating 1-vector bases, with types “spelled” eᵢ and eⁱ, equipped with an operator to lower and raise indices directly.

raise(k)*inverse(raise(dual(1.0))) should give you (something isomorphic to?) the J operator. Currently, the signs of my dual(k) differ from those of ganja.js in some metrics, but they agree on dual(dual(k)).

edit: removed description of polarity map

I suppose it’s inevitable that many different notations and terminologies co-exist in the real world of GA programming. @Orbots, it’s not clear to me what the difference between raise(k)*inverse(raise(dual(1.0))) and dual(k) is, since I think both should be equal to the J duality map. That’s probably connected to the fact that, according to my understanding, dual(k) should not depend on the metric at all. I notice that in an edit of your post you “removed description of polarity map”. That might have explained my confusion here, since the polarity map (multiplying by the pseudoscalar) is often confused with the duality operator (at least in the terminology I’ve adopted to clear up this often-confusing topic).

It seems that there is historically a great deal of confusion around this topic. In my opinion, this controversy can be resolved by relying on the differential geometric algebra formalism from my paper combined with the definitions of conformal geometric algebra. As stated in my post above, the definition is ⋆ω == (~ω)*I.

As I have mentioned many times before, the right-handed complement is the one and only true complement which I use, since it works in all forms of geometric algebra and co/homology theory. To properly do homology and cohomology with the correct orientation, I found that only the right complement works.

To address the situation when a degenerate basis is involved, it is also as I have stated before:

Essentially, I consider 3 types of degeneracy (basis elements squaring to zero):

  1. regular Grassmann basis element with metric set to zero
  2. null-basis element derived as sum of a positive and negative non-degenerate metric
  3. symmetric Leibniz basis element with 1st order differentiability

In the first kind, we are talking about a traditional Grassmann basis \langle v_1,v_2,v_3,v_4\rangle with a degenerate bilinear metric defined on it, e.g. D"1,1,1,0" in the DirectSum.jl formalism. This naturally behaves as

julia> @basis D"1,1,1,0"
(⟨1,1,1,0⟩, v, v₁, v₂, v₃, v₄, v₁₂, v₁₃, v₁₄, v₂₃, v₂₄, v₃₄, v₁₂₃, v₁₂₄, v₁₃₄, v₂₃₄, v₁₂₃₄)

julia> v1^2, v2^2, v3^2, v4^2
(1v, 1v, 1v, 0v)

julia> V(I) # pseudoscalar

julia> ⋆v1, ⋆v2, ⋆v3, ⋆v4
(1v₂₃₄, -1v₁₃₄, 1v₁₂₄, 0v₁₂₃)

It’s clear that with an actually degenerate metric, the geometric product and complement will also be degenerate when that index is involved. That is naturally to be expected with a degenerate metric.

Now let’s suppose we are talking about a space with the null-basis, e.g. S"∞∅++", which is isomorphic to a space with the non-degenerate metric S"+-++". However, each null-basis element is a sum of the positive and negative metric elements. Therefore, the null-basis behaves differently under the complement,

julia> @basis S"∞∅++"
(⟨∞∅++⟩, v, v∞, v∅, v₁, v₂, v∞∅, v∞₁, v∞₂, v∅₁, v∅₂, v₁₂, v∞∅₁, v∞∅₂, v∞₁₂, v∅₁₂, v∞∅₁₂)

julia> v∞^2, v∅^2, v1^2, v2^2
(0v, 0v, v, v)

julia> V(I) # pseudoscalar

julia> ⋆v∞, ⋆v∅, ⋆v1, ⋆v2
(v∞₁₂, -1v∅₁₂, v∞∅₂, -1v∞∅₁)

julia> ⋆v∞∅

In this case it is as if the index of the null-basis element is ignored.

Last but not least, there is the first-order tangent(V) space, which shares certain aspects of the first two.
For example, tangent(ℝ^3) has an additional element ∂1 which is degenerate but behaves more like a scalar (and is thus similar to the null-basis; however, the similarity is due to it being a symmetric Leibniz derivation, as opposed to an anti-symmetric Grassmann tensor or a composite null-basis element).

julia> @basis tangent(ℝ^3)
(⟨+++₁⟩, v, v₁, v₂, v₃, ∂₁, v₁₂, v₁₃, ∂₁v₁, v₂₃, ∂₁v₂, ∂₁v₃, v₁₂₃, ∂₁v₁₂, ∂₁v₁₃, ∂₁v₂₃, ∂₁v₁₂₃)

julia> v1^2, v2^2, v3^2, ∂1^2
(v, v, v, 0v)

julia> V(I) # pseudoscalar

julia> ⋆v1, ⋆v2, ⋆v3, ⋆∂1
(v₂₃, -1v₁₃, v₁₂, ∂₁v₁₂₃)

The most important aspect of the symmetric Leibniz derivations is that, unlike an anti-symmetric basis with a degenerate metric, they do NOT make the complement degenerate, while still squaring to zero.

In other words, if a regular anti-symmetric basis with a degenerate metric is used, then the definition of the complement will also involve degeneracy, due to the metric of the pseudoscalar. If a null-basis is used, then the pseudoscalar metric is not actually degenerate, although the composite null-basis squares to zero. Finally, with a symmetric Leibniz derivation, the pseudoscalar does not actually include the symmetric basis element, since the pseudoscalar is the highest-grade Grassmann element by definition. Therefore, the metric of the pseudoscalar is always non-degenerate.

As you can see, the complement will in fact be degenerate if a degenerate metric is actually used. Ways to avoid a degenerate complement would be to either use a null-basis (based on non-degenerate metric) or to use what I call the symmetric Leibniz derivation basis (which does not add to the pseudoscalar grade).

The newly released v0.3.1 of Grassmann.jl supports all 3 of the types of degeneracy I mentioned here. This is the first computer algebra software I know of that properly disambiguates between all 3 of these types of degeneracy. Thanks to the formalism in my paper, the foundations are also consistent.

@cgunn3 Yes, my goal is to have the function named dual produce the least surprising result across all metrics, which is simply an API design choice. I chose the algorithm that generates the map from the standard to the dual basis so that GA301 or GA201 would have nearly the same behavior as GA400 or GA300 (same orientation, same implied basis).
In my API the actual dual function is chosen based on the metric. If it’s degenerate, it outputs something very similar, if not equivalent, to what your J duality map would. I think it differs by a sign. If the metric is not degenerate, then it uses the common definition of the dual operator from most GA material I’ve read, which is dual(A) = A/I, where I is the pseudoscalar. I gather you want this rather than dual(A) = A*I because you generally want to stay in the same basis. dual(A)*I = A is the actual relation you start with, which is trying to say that the orthogonal complement of A (left-)contracted with the highest-grade space (the pseudoscalar) is equal to A.

I actually define dual in a degenerate metric as

  𝐼 = raise(dual(1.0))

So you can see the implied basis remains in the standard space; it’s actually a map from the standard basis to the standard basis, like the polarity map, but it works on a degenerate metric. I do prefer your definition of dual on a conceptual level, though I find “polarity map” less descriptive. I suppose I could rename mine to orthogonal_complement and have dual return raised indices. I probably won’t do that, as I want to keep raised indices behind the scenes unless the user is an expert.

@chakravala I did consider the technique you describe, but chose not to go that route, as I wanted to retain the dual(A) = A/I definition and it’s more difficult to implement. This is an example of what differentiates our packages: yours has a far greater scope than what I’m aiming for.


In my case, for the right complement the relation is dual(A) = (~A)*I instead, if the metric is assumed positive only. In general, with any metric, the relation I end up with is \tilde AI = \star A; the reason it isn’t A^{-1}I is that the coefficients are not supposed to be inverted, only reversed.

This is the definition that is in general most compatible with all types of geometric algebra and co/homology.

@Orbots I ran into the same decision point in my library, and I opted to maintain the same consistent definition of the metric-free dual proposed by @cgunn3 for all metric signatures (up to a sign).

For performing general multi-grade measurements that involve the metric, the contraction and geometric products are there, and intuitively I empathize with the position that a dual should not depend on the metric.

I chalk up the choice of contraction with the inverse pseudoscalar as the dual definition to most texts not covering projective space, which I think is a shame given its computational advantages.

@Orbots I’m still a little confused about your dual() function. What is the target space? On the one hand, the dual() function in the sense of Poincare duality maps to the “other” Grassmann algebra (I would say “dual”, but the word is a bit overloaded in this context). On the other hand, you say you are retaining, for non-degenerate metrics, the dual() function that is, in projective geometric terminology, the “polarity on the metric quadric”. This latter function is multiplication by the pseudoscalar (or its inverse), and hence its target space is the same as its domain.

I would advise carefully considering whether you want to “blend” these two features together in this way. To choose one example from many: in a degenerate metric, how will you provide the operation of “multiplication by the pseudoscalar” if you have renamed dual() as you describe above to mimic a non-degenerate metric? Just because the metric is degenerate doesn’t mean that multiplication by the pseudoscalar doesn’t yield correct results! For example, two euclidean planes A, B are parallel if and only if A^\perp = B^\perp, where X^\perp = XI, etc.

In light of this, wouldn’t it make sense to rename the metric-dependent dual() to polar() (or some other equivalent name, such as orthogonalComplement), and implement a true dual() function that is independent of the metric? That’s the beauty of mathematical language: it can be refined and improved when the need arises. In this case, the terminology is not being invented but adopted from the centuries-old projective geometry literature. In this respect I agree with @ninepoints’ observation that ignorance of projective geometry lies at the base of the current confusion. Changing an API of course shouldn’t be taken lightly, but when it is accompanied by a deeper understanding of the underlying mathematics, I think it’s worth considering. In this case, renaming dual() would also help raise community awareness of the central role projective geometry plays in geometric algebra. Of course, it’s a decision you have to make based on your own priorities and resources.

Note that the casual user doesn’t have to be confronted with the new dual() function directly. Its primary application is to compute the regressive product (in the case of PGA, the join operator \vee) via A \vee B := (A^* \wedge B^*)^*, where A^* = dual(A) and \wedge is the exterior product in the “other” Grassmann algebra. (With a non-degenerate metric this can of course be obtained in a similar way via A \vee B := (A^\perp \wedge B^\perp)^\perp, up to a sign.) The end user won’t need to know how you’ve implemented the join operator.
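For what it’s worth, the shape of that computation on basis blades can be sketched like this (Python, hypothetical helper names, signs omitted for brevity):

```python
# Join via the metric-free complement, on basis blades only and with
# signs omitted. Blades are bitmasks over 4 generators (3D PGA sized).
N = 4
FULL = (1 << N) - 1

def wedge(a: int, b: int):
    """Exterior product of basis blades; None if they share a factor."""
    return None if a & b else a | b

def dual(a: int) -> int:
    """Metric-free complement (unsigned): XOR with the pseudoscalar."""
    return a ^ FULL

def vee(a: int, b: int):
    """A ∨ B = dual(dual(A) ∧ dual(B)), up to sign."""
    w = wedge(dual(a), dual(b))
    return None if w is None else dual(w)

# joining the trivectors e012 and e013 yields their common line e01
assert vee(0b0111, 0b1011) == 0b0011
assert dual(dual(0b0111)) == 0b0111   # unsigned complement is involutive
```

Note that no metric appears anywhere in this computation, which is exactly the point being made about the J map.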


@cgunn3 when do you ever use the non-metric version of the complement, aside from, let’s say, inside the regressive product? Is there any further use?

Good question, @chakravala. As I mentioned in my recent reply to @Orbots, the regressive product is the primary usage of J, Poincare duality, but not the only one. In rigid body mechanics, the inertia tensor acts on a (velocity) bivector to produce a (momentum) “co-bivector” that then has to be “brought back” to the base algebra via J. There are many similar operations involving vectors and co-vectors where J is the canonical way to shift back and forth.

It’s partly a matter of raising awareness that J exists, so that practitioners and developers can begin to integrate it into their mental maps and use it instead of metric-dependent variants that don’t, IMHO, represent the mathematical reality adequately. I conjecture that J leads to cleaner code and clearer thinking.

@cgunn3 could you comment on differentiability and J? Is it a continuous map in some/all/no situations/metrics, and if so, what do I need for that to be the case? Sorry, very broad questions. I have multiple uses for GA and want to be sure my package covers what I need.

I’m leaning towards ⟂(b) as my orthogonal complement operator (previously known as dual). But then I still need a dual, which is J. Is J the same as polarity if the metric is non-degenerate?


I’m a newer practitioner in the field, but my interpretation is that the main thing \vee requires to work is an isometry that takes \wedge_i blades to \wedge_{n-i} blades in a graded algebra with n grades. J acts only on subspaces themselves, so you don’t lose any continuity (or differentiability) by definition.

J is not the same as polarity if the metric is non-degenerate, but it will be the same up to a sign (consider metric inner products that are negative between basis elements, or metrics which contract or expand space). You can prove trivially that the grade will be the same (contraction with the pseudoscalar is indeed a \wedge_i \rightarrow \wedge_{n-i} map, but it isn’t bijective).

edited: typo (but more likely still remain, haha)

What do you mean exactly by J working on subspaces?

If the metric is non-degenerate and positive, contraction by the pseudoscalar should be bijective, since it has an inverse: contraction with the inverse pseudoscalar. Under scaling this should also work. Correct me if I’m wrong; I’m a programmer who learns enough math to make stuff work :slight_smile:

If the metric is non-degenerate and positive

This is precisely the issue at hand: we want a notion of a “dual” map that does not place such restrictions on the metric. The point is that we want an exterior product that works in the dual space (the “join” operator), so the map is quite useful indeed.

edit to clarify:

If you don’t use projective geometry at all, it may be hard to see why a degenerate metric is useful, but projective geometry has always been computationally fundamental, because it’s more compact than the alternatives.


@Orbots Oops, I forgot to address your first question. By “acting on subspaces” I mean that J depends only on the subspace of its argument: given a blade ae_{i} for some i \in \{0 \leq i \lt 2^k\} (using natural indexing on the basis elements), the coefficient a is irrelevant to the computation of J.
