Dual operator disambiguation

For the exterior product to work in the dual space, all you need is an outermorphism. I can see that J is the only outermorphism that works in all metrics; it's not the only outermorphism, though.

My concern with J is that, from a not-so-thorough test, I found it inconsistent with the left contraction by the inverse pseudoscalar. I really need to run more tests though. I'll respect the notion of dual we are trying to move towards and rename it.

I’d still like to use dual where it is consistent across maps.
dual(1.0) = pseudoscalar? Both in J and polarity operators? Seems so. I think I'll keep that one, since "orthogonal complement of 1.0" is weirder than "dual of 1.0".

edit: changed dual(scalar) to dual(1.0) since I want to avoid scaling

Are we using the same definition of "outermorphism"? My understanding is that outermorphisms take vectors to vectors and are extensions of linear maps to the entire graded algebra, and so must be grade-preserving by definition.

The target space of dual (now called orthogonal_complement) is the same as the domain, which is actually the non-dual space. Which drives home why it shouldn't be called "dual"!

As an aside: it seems geometric algebra lacks a nice way of tracking which space you are in (when you need to track it), unlike differential geometry, which carries extra notation around. Should this be addressed in some way in GA?

You would trivially just (left)multiply by the pseudoscalar if that’s what you wanted.
I’m assuming a user of projectivized GA would know enough to do so.
But a user of, say, GA3 coming from Vector Algebra would just want a convenient way to swap between cross products and blades/wedges. After reading an intro book/paper on GA they see dual and want to use it, expecting dual to do exactly what they learned.
Oh dear, I’ve almost convinced myself to rename back to dual :stuck_out_tongue:

When I see polar I assume polar coordinates, so I wouldn't use that. I think I'm happy with orthogonal_complement and dual = J (up to a sign flip).

The way I'm structuring it is so join would be implemented in another package, so in a sense the user could be someone who cares about how it is implemented. The user of that user's package won't, though.

Thanks for the great discussion!

My understanding of an outermorphism: f(A∧B) = f(A)∧f(B), where f is a linear transform. Grade preserving? Not sure; I guess only if f is. Right, by definition it would be.

You missed an important part of the definition, which is that given a vector x, \underline{f}[x] = f[x].

edit: The wiki page makes this more clear IMO: https://en.wikipedia.org/wiki/Outermorphism
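
For concreteness, here is a tiny sketch of both properties in ℝ³ (plain arrays and hypothetical helper names, not any package's API): the extension \underline{f} agrees with f on vectors and distributes over ∧, so grade 2 stays grade 2.

# Wedge of two vectors in ℝ³, coefficients in the bivector basis (e₁₂, e₁₃, e₂₃).
wedge(a, b) = [a[1]*b[2] - a[2]*b[1],
               a[1]*b[3] - a[3]*b[1],
               a[2]*b[3] - a[3]*b[2]]

# Outermorphism of the linear map F applied to the bivector a∧b:
# apply F to each vector factor (on vectors it is F itself), then wedge.
outermorphism(F, a, b) = wedge(F*a, F*b)

julia> F = [1.0 2 0; 0 1 0; 1 0 3];

julia> outermorphism(F, [1.0, 0, 0], [0.0, 1, 0])   # F(e₁)∧F(e₂): still a bivector
3-element Vector{Float64}:
  1.0
 -2.0
 -1.0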

@Orbots, @ninepoints. It appears that the underlying structures are a bit foggy.

  • There are two different algebras involved with duality; call them G and G*. They are considered as Grassmann algebras (the metric is not relevant) and one is the dual algebra of the other. The dual() map is the grade-reversing map that takes a k-vector to the (n-k)-vector that represents the same geometry in the dual algebra. So J is an identity map of sorts. (As such, @Orbots, there aren't any issues with differentiability, etc.) That's why it's sometimes called the dual coordinate map.
  • There is only one algebra involved with the polarity() (aka orthogonalComplement or \perp map). This is just multiplication by the pseudoscalar and maps a vector onto its orthogonal complement. This is defined for both degenerate and non-degenerate metrics, but it’s only invertible in the latter case.
  • Both maps are grade-reversing but very different, since they have different target spaces (a concrete example follows below).
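
For a concrete instance of the difference, take the Euclidean plane with pseudoscalar I = e_1 e_2 (dual-basis notation E_i used here just for illustration): polarity(e_1) = e_1 I = e_1(e_1 e_2) = e_2, which stays in G, while dual(e_1) = E_2, the element of G^* representing the same line. Both send grade k to grade n-k; only the algebra the result lives in differs.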

What all of you probably don't understand yet about differential geometric algebra: what I found is that if you want to exclude a basis element from the metric for the purpose of automatic differentiation, then the best way to do that is with my generalized geometric algebraic product! The new degenerate basis element it provides is a symmetric derivation and thus is not included in the pseudoscalar I. Having this basis does not make the metric degenerate, and the metric does not need to be bypassed at all in the first place…

This is why, when an actual degenerate metric is used, the complement can also be degenerate. If you don't want a degenerate metric / complement / product, then don't use a degenerate metric… the correct way would be to use either the null basis or the Leibniz derivations, which are properly degenerate without making the metric degenerate.

I don’t mind implementing an additional metric-independent map, but my point with differential geometric algebra is that it is actually better to work with the tangent space instead of with a degenerate metric. In the tangent space we can still have basis elements that square to 0, but the metric never becomes degenerate.

@cgunn3 Your last comment resonates with me the most (with respect to formalizing on the dual-algebra), thanks for summarizing it better than I could.

In the case of a non-degenerate metric (positive only?), you have a trivial map to G* though, don't you?

I view it a bit like how you can consider a matrix in SO(3) as either a transformation to a different frame of reference or as a rotation "acting" on an object. Same object, different interpretations. You need the right mental model though, or you won't understand how to debug the inevitable orientation flips.

This is how I’m thinking I may implement the naming convention in Grassmann in the future:

  1. Right complement: metric-independent Poincaré dual, so I = x*complementright(x)
  2. Left complement: metric-independent Poincaré dual, so I = complementleft(x)*x
  3. Hodge complement: metric-dependent Hodge dual, so \det[g_x] I = x(\star x)

Note that x is a unit blade here. The reason for the last Hodge definition is very specific: it is needed to satisfy \star\omega = \tilde\omega I, which is essential for it to be equivalent to complementright with the metric multiplied in (a worked ℝ³ example follows below).
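
To illustrate all three on a concrete unit blade (ℝ³ with diagonal metric g): for x = v_1 we get complementright(v_1) = v_{23}, since v_1 v_{23} = v_{123} = I; likewise complementleft(v_1) = v_{23}, because left and right differ only by (-1)^{k(n-k)}, which is +1 here; and the Hodge complement carries the metric factor, \star v_1 = g_{11} v_{23}, so that v_1(\star v_1) = g_{11} I = \det[g_{v_1}] I, matching the third definition.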

For usability, the Hodge dual will be the ⋆ operator and the right complement will be ! for v0.3.2 Grassmann. This set of three definitions is the most compatible (for my purposes) with everything discussed so far.

P.S. I realized after making my last post that perhaps it's useful to point out an implementation detail regarding the use of J: G \leftrightarrow G^*, the duality map. Newcomers in particular may not have "grokked" this convenient feature yet.

In the formula for the join operator A\vee B = (A^*\wedge B^*)^* (where A^* := J(A)), the \wedge on the RHS naturally has to be evaluated in G^*. However, the isomorphism of G and G^* is so strong that you can actually use the \wedge in G, which you already have at your disposal, to do this evaluation. That is, you can pretend that A^* and B^* are in G for this wedge operation. You get the correct answer in G^*.

So although it appears you have to manage two algebras in this approach, you actually only need to calculate in the original one. It is, however, important to keep track of which algebra a given multivector belongs to. As long as both belong to the same algebra (as A^* and B^* do), you can evaluate the wedge product. You can devote a flag in your data structure for this purpose. If a binary operation (such as \wedge) is called on elements from different algebras, throw an error; otherwise the operation will produce the correct answer; set the flag in the result to the same algebra as the two arguments.
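
A minimal sketch of that flag idea in Julia (hypothetical names, not any package's API; the multivector payload is left opaque since only the gating is being illustrated):

struct Tagged{T}
    mv::T             # underlying multivector, always stored in G coordinates
    algebra::Symbol   # :G or :Gstar
end

# Gate any binary operation: both operands must carry the same flag,
# and the result inherits it. `op` is whatever ∧ you already have in G.
function gated(op, a::Tagged, b::Tagged)
    a.algebra == b.algebra || error("operands belong to different algebras")
    Tagged(op(a.mv, b.mv), a.algebra)
end

julia> A = Tagged([1.0, 0.0], :Gstar); B = Tagged([0.0, 1.0], :Gstar);

julia> gated(+, A, B).algebra     # same algebra: allowed, flag propagates
:Gstar

julia> gated(+, A, Tagged([1.0, 0.0], :G))
ERROR: operands belong to different algebras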

@cgunn3 in Grassmann there is a so-called dual algebra available:

julia> using Grassmann; mixedbasis"3"
(⟨+++---⟩*, v, v₁, v₂, v₃, w¹, w², w³, v₁₂, v₁₃, v₁w¹, v₁w², v₁w³, v₂₃, v₂w¹, v₂w², v₂w³, v₃w¹, v₃w², v₃w³, w¹², w¹³, w²³, v₁₂₃, v₁₂w¹, v₁₂w², v₁₂w³, v₁₃w¹, v₁₃w², v₁₃w³, v₁w¹², v₁w¹³, v₁w²³, v₂₃w¹, v₂₃w², v₂₃w³, v₂w¹², v₂w¹³, v₂w²³, v₃w¹², v₃w¹³, v₃w²³, w¹²³, v₁₂₃w¹, v₁₂₃w², v₁₂₃w³, v₁₂w¹², v₁₂w¹³, v₁₂w²³, v₁₃w¹², v₁₃w¹³, v₁₃w²³, v₁w¹²³, v₂₃w¹², v₂₃w¹³, v₂₃w²³, v₂w¹²³, v₃w¹²³, v₁₂₃w¹², v₁₂₃w¹³, v₁₂₃w²³, v₁₂w¹²³, v₁₃w¹²³, v₂₃w¹²³, v₁₂₃w¹²³)

This generates a dual geometric algebra V+V' over the space V = ℝ^3.

julia> A,B = v1 + v2, v2 + v3
(1v₁ + 1v₂ + 0v₃, 0v₁ + 1v₂ + 1v₃)

julia> !A # complement
0v₁₂ - 1v₁₃ + 1v₂₃

julia> A' # ' raise indices
1w¹ + 1w² + 0w³

julia> !A' # complement and raise indices
0w¹² - 1w¹³ + 1w²³

As you can see, the ' operator will take you from V to V' if you want.

Could you please clarify what you are proposing in terms of the formalism used in my software? For example, I use the index notation v_{1\dots n} for basis elements in \Lambda(V) and w^{1\dots n} for \Lambda(V').

One could think of the hyperplane element \star v_1 = v_{23} isomorphically in \Lambda(V') in two different ways:

  1. the hyperplane element is represented by v_1' = w^1 and \wedge,\vee behave differently
  2. the hyperplane element is represented by \star v_1' = w^{23} and \wedge,\vee behave the same way

In Grassmann I have opted for interpretation 2, so that both \star and ' need to be applied successively.

Note, the mixed algebra is in fact set up as a full 2n-dimensional geometric algebra, and it is possible to construct mixed tensors with both covariant and contravariant indices combined in Grassmann.

@cgunn3 Ok, so my understanding is that the dual map is essentially swapping e_i elements to E_j elements and back, in the unoptimized form. If J is used, parity is preserved as well, since the swap count for the concatenated elements after a \wedge is the same as the sum of swaps needed for the left and right. That leaves the actual indices themselves. A wedge acts as an xor on the index bitsets and J acts like a bitwise negation. If the wedge would have produced a zero, the regressive product would have as well (trivial proof). Finally, bitwise negations cancel in pairs under xor (~S ⊻ ~T = S ⊻ T), so I believe this proves the claim.
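
A quick bit-level check of those identities (a sketch with blades encoded as index bitmasks and all signs ignored):

julia> n = 4; full = UInt(2^n - 1);          # pseudoscalar index mask

julia> compl(S) = full & ~S;                 # J on indices: bitwise negation

julia> S, T = UInt(0b0011), UInt(0b0100);    # e₁₂ and e₃

julia> S & T == 0, S ⊻ T == UInt(0b0111)     # wedge: nonzero iff disjoint, index is xor
(true, true)

julia> compl(S) ⊻ compl(T) == S ⊻ T          # negations cancel in pairs under xor
true

julia> S2, T2 = UInt(0b0111), UInt(0b1101);  # e₁₂₃ and e₁₃₄ together cover all indices

julia> compl(compl(S2) ⊻ compl(T2)) == S2 & T2   # regressive index is the meet
true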

Regarding your suggestion about gating the various operators to ensure the left and right operands are members of the same algebra, that’s a good suggestion and I should be able to do this easily enough at compile time. Thanks

This feature has now been implemented: the ⋆ operator continues to be the Hodge complement, and the alternative ! operator has been changed to the metric-independent Poincaré duality.

This change is also reflected in the null basis, for example

julia> @basis S"∞∅++"
(⟨∞∅++⟩, v, v∞, v∅, v₁, v₂, v∞∅, v∞₁, v∞₂, v∅₁, v∅₂, v₁₂, v∞∅₁, v∞∅₂, v∞₁₂, v∅₁₂, v∞∅₁₂)

julia> ⋆v∞, !v∞
(v∞₁₂, 2v∅₁₂)

In fact the Hodge complement of an infinity null-basis element is different from its Poincaré dual. This can be verified by defining it in terms of the isomorphic geometric algebra:

julia> @basis S"+-++"
(⟨+-++⟩, v, v₁, v₂, v₃, v₄, v₁₂, v₁₃, v₁₄, v₂₃, v₂₄, v₃₄, v₁₂₃, v₁₂₄, v₁₃₄, v₂₃₄, v₁₂₃₄)

julia> ni = v1 + v2
1v₁ + 1v₂ + 0v₃ + 0v₄

julia> ⋆ni
0v₁₂₃ + 0v₁₂₄ + 1v₁₃₄ + 1v₂₃₄

julia> !ni
0v₁₂₃ + 0v₁₂₄ - 1v₁₃₄ + 1v₂₃₄

This validates the fact that for null-basis the metric-dependent and metric-independent complements differ not only by their orientation but also by a swap of their null-basis index… which I didn’t expect until now.

julia> ⋆ni*v34 == -ni
true

julia> ⋆no*v34 == no
true

julia> !ni*v34 == -2no
true

julia> !no*v34 == ni/2
true

Not only do the null-bases swap indices, they also differ by factors of 2, which I have not properly implemented yet in the first example above. That factor of 2 will be fixed in a following commit.


I'm doing exactly this. I'm also implementing an inner product (quadratic form) between G and G* elements, which is more of a differential geometry thing, but it works well for me. I'm explicitly representing the dual basis types with raised indices like @chakravala is, also allowing mixed indices. Both are Julia packages, so it seems that is the most natural thing to do there! I hide all the raised indices behind the scenes by directly lowering return values, which corresponds to @cgunn3's comment about the strong isomorphism between G and G*.

@cgunn3’s paper “On the Homogeneous Model of Euclidean Geometry” has an appendix item with an algorithm for calculating J.

I’m happy with my implementation as I get consistency between dual and orthogonal_complement in both sign and subspaces between G201 and G3, which was my goal.

Edit: the factor of 2 has now been properly implemented and the examples above updated:

julia> using Grassmann; @basis S"∞∅++"
(⟨∞∅++⟩, v, v∞, v∅, v₁, v₂, v∞∅, v∞₁, v∞₂, v∅₁, v∅₂, v₁₂, v∞∅₁, v∞∅₂, v∞₁₂, v∅₁₂, v∞∅₁₂)

julia> ⋆v∞, !v∞
(v∞₁₂, 2v∅₁₂)

julia> ⋆v∅, !v∅
(-1v∅₁₂, -0.5v∞₁₂)

julia> !v∞ * v12 == -2v∅
true

julia> !v∅ * v12 == v∞/2
true

julia> ⋆v∞ * v12 == -v∞
true

julia> ⋆v∅ * v12 == v∅
true

So now the Hodge complement and the metric-independent complement are fully implemented in v0.3.2

julia> v∞ * !v∞
0 - 2v₁₂ + 2v∞∅₁₂

julia> v∅ * !v∅
-0.0 + 0.5v₁₂ + 0.5v∞∅₁₂

julia> @basis S"+-++"; ni = v1+v2; no = (v2-v1)/2;

julia> ni * !ni
0 - 2v₃₄ + 2v₁₂₃₄

julia> no * !no
0.0 + 0.5v₃₄ + 0.5v₁₂₃₄

Hodge complement and metric-independent complement both work with the null basis, and they give different basis blades.

Hi @Orbots ,

I'm unable to find the algorithm in the paper you mentioned. Could you copy and paste the link and the page here (or maybe the algorithm)?

Thanks

PS: I must be blind. :thinking:

Have a look at sections 2.3 (pages 16-18) and 5.10 (pages 47-48) in @cgunn3's thesis on Geometry, Kinematics, and Rigid Body Mechanics in Cayley-Klein Geometries.

His definition is basically the same as mine, but he doesn't use the geometric product as a foundation. My definition is based on the geometric product.


It’s in Appendix 1. It’s a wordy description of an algorithm, not exactly an algorithm.

Actually, after re-reading, there is a bit that says you can use the algorithm for generating the Hodge dual instead, which makes sense: I implemented the Hodge ⋆ recently and thought it was very familiar! So check here for a quicker explanation: https://en.wikipedia.org/wiki/Hodge_star_operator#Computation_of_the_Hodge_star

I do use the metric in my Hodge dual though, as I wanted the behaviour described by @chakravala.
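
Concretely, for a diagonal metric the computation in that link boils down to something like the following sketch (blades as index bitmasks; this is my reading of the formula, not Grassmann.jl's internals, and sign conventions may differ):

# Hodge dual of a basis blade S (bitmask) for a diagonal metric g:
# ⋆e_S = (parity of the concatenation S,S⊥) × (product of metric entries over S) × e_{S⊥}
function hodge_blade(S::UInt, g::Vector{Int}, n::Int)
    comp = UInt(2^n - 1) & ~S        # complementary index set S⊥
    swaps, coeff = 0, 1
    for i in 0:n-1
        if (S >> i) & 1 == 1
            coeff *= g[i+1]          # metric factor for each index in S
            # inversions contributed by index i against smaller indices in S⊥
            swaps += count(j -> (comp >> j) & 1 == 1, 0:i-1)
        end
    end
    ((-1)^swaps * coeff, comp)       # (scalar factor, bitmask of the dual blade)
end

julia> hodge_blade(UInt(0b0010), [1, -1, 1, 1], 4)   # ⋆v₂ in ⟨+-++⟩ → v₁₃₄
(1, 0x000000000000000d)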

Here's a cut and paste from the paper:
“Appendix 1: J, metric polarity, and the regressive product

Because of its importance in our approach to the homogeneous model, we provide here a more stringent, dimension-independent treatment of the map J and its close connection to the polarity \Pi on the elliptic metric, concluding with reasons to prefer the use of J to that of \Pi for implementing duality in the Clifford algebra setting.

Canonical basis. A subset S = \{i_1, i_2, \dots, i_k\} of N = \{1, 2, \dots, n\} is called a canonical k-tuple of N if i_1 < i_2 < \dots < i_k. For each canonical k-tuple of N, define S^\perp to be the canonical (n-k)-tuple consisting of the elements N \setminus S. For each unique pair \{S, S^\perp\}, swap a pair of elements of S^\perp if necessary so that the concatenation SS^\perp, as a permutation P of N, is even. Call the collection of the resulting sets \mathcal{S}. For each S \in \mathcal{S}, define e^S = e^{i_1} \dots e^{i_k}. We call the resulting set \{e^S\} the canonical basis for W generated by \{e^i\}.

Case 1: Equip W = P(\bigwedge(\mathbb{R}^n)) with the euclidean metric to form the Clifford algebra P(\mathbb{R}_{n,0,0}) with pseudoscalar I = e^N. Then, by construction, e^S I = e^{(S^\perp)} (remember (e^S)^{-1} = e^S). This is the polarity \Pi : W \to W on the elliptic metric quadric (see Sect. 3.1).

Case 2: Consider W^*, the dual algebra to W. Choose a basis \{e_1, e_2, \dots, e_n\} for W^* so that e_i represents the same oriented subspace represented by the (n-1)-vector e^{(i^\perp)} of W. Construct the canonical basis (as above) of W^* generated by the basis \{e_i\}. Then define a map J : W \to W^* by J(e^S) = e_{S^\perp} and extend by linearity. J is an “identity” map on the subspace structure of \mathbb{RP}^n: it maps a k-blade B \in W to the (n-k)-blade \in W^* which represents the same geometric entity as B does in \mathbb{RP}^n. Proof: By construction, e^S represents the join of the 1-vectors e^{i_j} (i_j \in S) in W. This is however the same subspace as the meet of the n-k basis 1-vectors e_{i_j} (i_j \in S^\perp) of W^*, since e_i contains e^j exactly when j \neq i.

Conclusion: Both J and \Pi represent valid grade-reversing involutive isomorphisms. The only difference is the target space: \Pi : P(W) \to P(W), while J : P(W) \to P(W^*).

The regressive product via a metric. Given the point-based exterior algebra with outer product A \wedge B (representing the join of subspaces A and B), the meet operator A \vee B (also known as the regressive product) is often defined \Pi(\Pi(A) \wedge \Pi(B)) [HS87]. That is, the euclidean metric (any nondegenerate metric suffices) is introduced in order to provide a solution to a projective (incidence) problem. By the above, the same result is also given via J(J(A) \wedge J(B)). (Here, the \wedge denotes the outer product in W^*.)

The Hodge \star operator. An equivalent method for producing dual coordinates is described in [PW01], p. 150. The \star operator is presented as a way of generating dual coordinates, which is an apt description of the J operator also. One can also define the regressive product by \star(\star A \wedge \star B). Formally, however, since \star is a map from W to W, it is identical to the metric polarity \Pi.

Comparison. The two methods yield the same result, but they have very different conceptual foundations.”


From @cgunn3's thesis, section 2.3.1.3:

For each unique pair \{S,S^⊥\}, swap a pair of elements of S^⊥ if necessary so that the concatenation S S^⊥, as a permutation P of N, is even.

@Orbots, @chakravala How do you implement this part? Checking the parity is easy, but how do you pick the pair?
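
Naively, since a single transposition always flips parity, I would just swap one fixed pair (say the first two elements of S^⊥) whenever the concatenation comes out odd, along these lines (a sketch, surely not how either package actually does it):

# Parity of a permutation p of 1:n via its inversion count.
parity(p) = count(p[i] > p[j] for i in 1:length(p) for j in i+1:length(p)) % 2

# Return S⊥ ordered so that the concatenation S·S⊥ is an even permutation of 1:n.
function canonical_complement(S::Vector{Int}, n::Int)
    comp = sort(setdiff(1:n, S))
    if parity(vcat(S, comp)) == 1 && length(comp) ≥ 2
        comp[1], comp[2] = comp[2], comp[1]   # any one swap flips odd to even
    end
    comp
end

julia> canonical_complement([2], 3)   # (2,1,3) is odd, so swap: (2,3,1) is even
2-element Vector{Int64}:
 3
 1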