Dual operator disambiguation

Hi,

I tried to build J following @cgunn3 but I’m unable to verify the duality/polarity formula e^S I = e^{S^\bot} using \textbf{P}( \mathbb{R}_{n, 0, 0}). Actually, I get a -1 factor for grades 2 and 3. Any ideas?

Thanks

S = \{\}
S^\bot = \{0,1,2,3\}
SS^\bot = \{0,1,2,3\} is an even permutation
J(1) = \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}
e^SI = \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}

S = \{0\}
S^\bot = \{1,2,3\}
SS^\bot = \{0,1,2,3\} is an even permutation
J(\boldsymbol{e}_{0}) = \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}
e^SI = \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}

S = \{1\}
S^\bot = \{0,2,3\}
SS^\bot = \{1,0,2,3\} is an odd permutation
J(\boldsymbol{e}_{1}) = - \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}
e^SI = - \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}

S = \{2\}
S^\bot = \{0,1,3\}
SS^\bot = \{2,0,1,3\} is an even permutation
J(\boldsymbol{e}_{2}) = \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{3}
e^SI = \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{3}

S = \{3\}
S^\bot = \{0,1,2\}
SS^\bot = \{3,0,1,2\} is an odd permutation
J(\boldsymbol{e}_{3}) = - \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}
e^SI = - \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}

S = \{0,1\}
S^\bot = \{2,3\}
SS^\bot = \{0,1,2,3\} is an even permutation
J(\boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}) = \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}
e^SI = - \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}

S = \{0,2\}
S^\bot = \{1,3\}
SS^\bot = \{0,2,1,3\} is an odd permutation
J(\boldsymbol{e}_{0}\wedge \boldsymbol{e}_{2}) = - \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{3}
e^SI = \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{3}

S = \{0,3\}
S^\bot = \{1,2\}
SS^\bot = \{0,3,1,2\} is an even permutation
J(\boldsymbol{e}_{0}\wedge \boldsymbol{e}_{3}) = \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}
e^SI = - \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}

S = \{1,2\}
S^\bot = \{0,3\}
SS^\bot = \{1,2,0,3\} is an even permutation
J(\boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}) = \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{3}
e^SI = - \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{3}

S = \{1,3\}
S^\bot = \{0,2\}
SS^\bot = \{1,3,0,2\} is an odd permutation
J(\boldsymbol{e}_{1}\wedge \boldsymbol{e}_{3}) = - \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{2}
e^SI = \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{2}

S = \{2,3\}
S^\bot = \{0,1\}
SS^\bot = \{2,3,0,1\} is an even permutation
J(\boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}) = \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}
e^SI = - \boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}

S = \{0,1,2\}
S^\bot = \{3\}
SS^\bot = \{0,1,2,3\} is an even permutation
J(\boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}) = \boldsymbol{e}_{3}
e^SI = - \boldsymbol{e}_{3}

S = \{0,1,3\}
S^\bot = \{2\}
SS^\bot = \{0,1,3,2\} is an odd permutation
J(\boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{3}) = - \boldsymbol{e}_{2}
e^SI = \boldsymbol{e}_{2}

S = \{0,2,3\}
S^\bot = \{1\}
SS^\bot = \{0,2,3,1\} is an even permutation
J(\boldsymbol{e}_{0}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}) = \boldsymbol{e}_{1}
e^SI = - \boldsymbol{e}_{1}

S = \{1,2,3\}
S^\bot = \{0\}
SS^\bot = \{1,2,3,0\} is an odd permutation
J(\boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}) = - \boldsymbol{e}_{0}
e^SI = \boldsymbol{e}_{0}

S = \{0,1,2,3\}
S^\bot = \{\}
SS^\bot = \{0,1,2,3\} is an even permutation
J(\boldsymbol{e}_{0}\wedge \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}) = 1
e^SI = 1
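For reference, the table above can be reproduced mechanically. Here is a small Python sketch (my own code; `blade_mul` and `parity_sign` are made-up names, not from any library) that computes both the parity of SS^\bot and the sign of the geometric product e^S I in a Euclidean metric, and checks that the two differ exactly by the reversion sign (-1)^{k(k-1)/2}, i.e. by -1 precisely on grades 2 and 3:

```python
from itertools import combinations

def blade_mul(a, b):
    """Geometric product of Euclidean basis blades e_a * e_b.
    a, b are index sequences; returns (sign, sorted index list)."""
    s = list(a) + list(b)
    sign = 1
    i = 0
    while i < len(s) - 1:
        if s[i] == s[i + 1]:
            del s[i:i + 2]                 # e_i e_i = +1 (Euclidean)
            i = max(i - 1, 0)
        elif s[i] > s[i + 1]:
            s[i], s[i + 1] = s[i + 1], s[i]
            sign = -sign                   # e_i e_j = -e_j e_i
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, s

def parity_sign(seq):
    """(-1)^(number of inversions) of an index sequence."""
    inv = sum(1 for i in range(len(seq))
              for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return -1 if inv % 2 else 1

n = 4
I = list(range(n))
for k in range(n + 1):
    for S in combinations(range(n), k):
        Sperp = [i for i in I if i not in S]
        j_sign = parity_sign(list(S) + Sperp)  # naive J: +1 iff SS^perp is even
        gp_sign, rest = blade_mul(S, I)        # e^S I
        assert rest == Sperp
        # the two signs differ exactly by the reversion sign of e_S
        assert gp_sign == j_sign * (-1) ** (k * (k - 1) // 2)
```

So the table is internally consistent: the parity-based J and the polarity e^S I can only agree up to this reversion sign.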

PS: I also can’t understand how @enki obtains the Dual function for PGA. It’s just a reversal of the indices, but there are even and odd permutations.

Hi Meuns,

The default map I use is the following. The dual of the basis blade x is the basis blade x^* so that:

xx^* = \mathbf e_{0123},

which is then extended by linearity to arbitrary multivectors. Following this scheme, you will find that some basis blades will always get a minus sign. However, if one takes care in picking the proper basis (like the one used in the course notes), these minus signs are all on the trivectors.

When the plan is to use the duality operator only for the regressive product, or to implement the ideal norm, one can then use projective equivalence (for projective points, lines, and planes, a and -a represent the same element) and leave the sign changes out altogether. We decided to go for this option for the course notes, as it should make clear that it is possible to implement the duality operator using casting (i.e. without any code…).

Please do note that this only works with a properly chosen basis (and the natural order you’ve used in your post is not one where that works).
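To illustrate the casting idea, here is a toy sketch (my own code under my own assumptions, not the actual ganja.js implementation): if the 2^n basis blades are ordered so that the blade at index i and the blade at index 2^n-1-i are complementary (the antidiagonal of the Cayley table), and projective equivalence lets us drop residual signs, then dualization is just reading the coefficient array backwards:

```python
def dual_by_cast(coeffs):
    """Duality as a pure cast: reverse the coefficient array.
    Assumes a basis ordered so that mirrored slots hold complementary
    blades, with residual signs dropped by projective equivalence (a ~ -a)."""
    return coeffs[::-1]

# e.g. 16 coefficients for a 3D PGA multivector (2^4 basis blades):
m = [0.0] * 16
m[1] = 1.0                            # some coefficient on blade index 1
assert dual_by_cast(m)[14] == 1.0     # it lands on the mirrored slot 16 - 1 - 1
```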

Cheers,

Enki

edit: if you look at the antidiagonal of the Cayley table on the cheat sheets, you should be able to see what I mean.

@meuns

I formulate my algorithm for the Hodge star so that B * ⋆(B) = I. My implementation is probably different from other folks’ as I’m using set operations rather than bit-wise operations, which is OK in Julia because metaprogramming allows you to perform all kinds of computation at compile time.

So I’m trying to find the set of indices missing from B to make I. This will be my dual. Then you need to account for any parity swaps as you shuffle indices into B. The easiest thing for me was to view ⋆(B) as two parts: one part shuffles into B to fill the holes, and the other part gets wedged onto the result of that (no parity swap there). Track all the swaps as you shuffle anything into B.
Here’s an example with a “++++++” metric (R6):

julia> B = 1e₂₄
1e₂₄

julia> 𝐼 = 1e₁₂₃₄₅₆
1e₁₂₃₄₅₆

julia> ⋆B
-1e₁₃₅₆

julia> B*⋆(B) == 𝐼
true

julia> subspace(B)
2-element Array{Int64,1}:
 2
 4

julia> subspace(⋆B)
4-element Array{Int64,1}:
 1
 3
 5
 6

julia> subspace(𝐼)
6-element Array{Int64,1}:
 1
 2
 3
 4
 5
 6


julia> (-1)^3*1e₂₄∧(1e₁₃∧1e₅₆) == 1e₁₂₃₄₅₆
true

The (-1)^3 term counts the number of index swaps I made as I moved [2,4] <-- [1,3]
since a∧b == -b∧a
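A minimal Python paraphrase of this shuffle (my own sketch, not the actual Julia implementation; assumes a Euclidean metric and sorted, 1-based index lists):

```python
def hodge_star(B, n):
    """Complement of a sorted basis blade e_B in a '+...+' metric R^n,
    with sign chosen so that e_B * (sign * e_C) = e_1...e_n = I.
    Returns (sign, complement index list)."""
    C = [i for i in range(1, n + 1) if i not in B]
    # parity of shuffling the concatenation B + C back into 1..n order,
    # obtained by counting inversions
    seq = list(B) + C
    inversions = sum(1 for i in range(len(seq))
                     for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    sign = -1 if inversions % 2 else 1
    return sign, C

# reproduces the example above: ⋆(1e₂₄) == -1e₁₃₅₆ in R6
assert hodge_star([2, 4], 6) == (-1, [1, 3, 5, 6])
```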

It’s good that the definition of J (and its inverse) is being looked into more closely. I’ve been repeating the mantra “projectively X and -X are equivalent” – but my experience is that it is always a good idea to remove unnecessary sign flips despite this equivalence. In some sense I’m following my sense of beauty here more than my sense of truth.

@meuns: I believe the condition used in my description of J is that e_S e_{S^\perp}=\bf{I}, which I don’t think is the same as e_S \bf{I} = e_{S^\perp}. If you use the former rather than the latter, then the sign flips on the 2-vectors disappear, I think.

Regarding the sign-flips on the 3-vectors: these can’t be gotten rid of so easily. In fact, I need to slightly correct the general description of J in 2.3.1.3 of my thesis for this case.

The algorithm described there assumes that given an ordered subset S of some ordered set U it’s always possible to find an ordering of the complementary subset S^\perp such that SS^\perp is an even permutation of U. This is however not always possible.

Consider the case n=4. There, if I have S=\{0\}, then S^\perp=\{123\} satisfies: \{0\}\{123\} is even. But for S=\{123\} and S^\perp = \{0\}, \{123\}\{0\} is not even – and I cannot flip any elements in S^\perp to obtain an even permutation, since S^\perp consists of a single element.

Exactly when k=(n-1) and k is odd, it isn’t possible to find S^\perp so that SS^\perp is an even permutation.

Proof: Note that if n-k > 1 then I can always flip elements in S^\perp so that SS^\perp is an even permutation. And if k=n-1 and k is even, then SS^\perp and S^\perp S are both even or both odd. We can assume that S^\perp S is even (since we can flip elements of S to achieve this), so SS^\perp is also even. But this isn’t possible for k odd.

Here’s the refined definition of J. I am using superscripts for G and subscripts for G* to be consistent with the original discussion in my thesis.
Let e^{S} \in G be some basis k-vector. Define
J(e^S) := -e_{S^\perp} if k=n-1 and k is odd, and
J(e^S) := e_{S^\perp} (where SS^\perp is an even permutation of 0123) otherwise.

The minus sign reflects the fact that in this case e_S e_{S^\perp} = -\bf{I}.

Then J^{-1} has to be accordingly adjusted:
J^{-1}(e_S) := -e^{S^\perp} if k=1 and n-1 is odd, and
J^{-1}(e_S) := e^{S^\perp} for S\in\bf{S} (where S^\perp S is an even permutation of 0123) otherwise.
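As a concrete reading of this refined definition, a hedged Python sketch (my own naming, not thesis code; indices run 0..n-1 and no metric is involved):

```python
def refined_J(S, n):
    """Sketch of the refined J: returns (sign, Sperp), with Sperp ordered so
    that the concatenation S + Sperp is an even permutation of 0..n-1 whenever
    possible. When k = n-1 and k is odd, no such ordering exists, so the
    sorted complement is kept and an explicit -1 appears instead."""
    S = list(S)
    Sperp = [i for i in range(n) if i not in S]
    seq = S + Sperp
    inv = sum(1 for i in range(len(seq))
              for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    if inv % 2 == 0:
        return 1, Sperp                 # S Sperp is already even
    if len(Sperp) >= 2:
        Sperp[0], Sperp[1] = Sperp[1], Sperp[0]  # one transposition fixes parity
        return 1, Sperp
    return -1, Sperp                    # k = n-1, k odd: cannot reorder a singleton

# grade-1 example from the table at the top: J(e^1) = e_{203} = -e_{023}
assert refined_J([1], 4) == (1, [2, 0, 3])
# the exceptional case: J(e^{123}) = -e_0
assert refined_J([1, 2, 3], 4) == (-1, [0])
```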

Note: it may make sense for implementation purposes to further amend the J algorithm as follows:
Arrange the 2^n index sets in lexicographical order: {{}, {0},{1},{2},{3},{01}, …}. Run the J algorithm on the first half of this list to obtain ordered representatives of the second half. Use this full ordered set of index pairs \{S, S^\perp\} to fix “canonical” bases for both G and G^*. Then run the second half of the J algorithm without allowing any flipping in S^\perp, but rather insert minus signs into the definition of J where the resulting permutation SS^{\perp} is odd. If I’m not mistaken, this will be the case exactly when both k and n-k are odd:

J(e^S) := (-1)^{k(n-k)}e_{S^\perp} for S in the second half of the full set of index sets.
J(e^S) := e_{S^\perp} for S otherwise.

The advantage of this approach is that J is expressed directly using the canonical basis obtained in the first half of the algorithm. It will first differ from the existing (as amended above) version of the algorithm for n=6 and k=3 if I’m not mistaken. Then for example 012345 is even and 345012 is odd.

@cgunn3 I implemented the refined version (not amended) of J and J^{-1}, and the algorithm fixed an issue on 2-vectors (all my rotors were clockwise instead of counterclockwise :smiley: ).

I’m really interested in understanding how this map works (the whole Poincaré duality thing). John Browne introduces the cobasis of a basis for defining the regressive product \vee, but he picks another algorithm for the sign. He doesn’t motivate his choice. Do you have any reference paper in mind that a developer could understand?

Regards

@meuns all of my complement operations are consistent with John Browne’s definitions also. I have studied his work and tried to explain the theory here in this thread and in my differential geometric algebra paper (which will be updated this evening). If you want to understand it better, read my posts again; the foundation is the geometric product. Let me know if my posts are confusing.

@chakravala the whole discussion in this post and your paper lack some basics for me (I never worked with manifolds, tensors…), and it is not simple to fill the gaps. I understand no one will write down a fully detailed explanation, but I know experienced practitioners have already read a lot of papers, and I hope one or two of them will be easy to follow. :slight_smile:

@meuns: I took a quick look at John Browne’s book on Google Books. Thanks for pointing that out! Indeed, his idea of a co-basis is very similar to what I use in defining Poincaré duality. His co-basis appears to be in the same Grassmann algebra as the original basis, and he then “interprets” the elements differently (as points or hyperplanes, for example) depending on the context.

The sign differences arise, I believe, because he doesn’t fully use the freedom to choose his basis elements to begin with. The co-basis elements for the basis 1-vectors in R3 (on p. 54) are e1e2, e1e3, and e2e3 (alphabetical order). He obtains in this way two minus signs in the co-basis, corresponding to the elements e2 and e1e3. If instead he chooses the basis 2-vectors e1e2, e2e3, and e3e1, then the minus signs go away. Similarly in R4, he chooses e2e4 instead of e4e2 and harvests more minus signs. I mention this latter example since the so-called Plücker coordinates for lines use e4e2 (correctly, I would say); this simplifies many of the formulas of classical line geometry and confirms that other folks already noticed that the basis elements in G ought to be chosen carefully (before even thinking about the regressive product).

Recall that I choose half of the basis vectors in G (the “second half”) by making sure that the condition e_Se_{S^\perp}=\bf{I} is satisfied. That’s the key, I think, to minimizing the number of minus signs when defining the co-basis. For example, it guarantees that no minus signs occur in the first half of the dual basis.

Does that make sense?

Notice that in the Poincaré duality approach (two algebras G and G*) one gets the equality “co-basis of the co-basis = original basis”, since the maps G->G* and G*->G can be different, while in the single-algebra approach one does not have that freedom and ends up with some minus signs in the “co-basis of the co-basis” when compared to the original basis.

@cgunn3 that makes sense. Thank you :slight_smile:

It seems the choice of dual basis/map is, in some ways, a design decision. There are multiple “correct” choices, especially if you have a degenerate metric, where some basis vector is orthogonal to everything. From what I understand, there are many Poincaré maps you could use.

Design decision and mathematics sounds weird, but that’s the way I look at it. I.e., I chose to use a dual map so that it is in agreement with the dual of the dual ending up with a minus sign in some algebras. Well, I actually end up with multiple “dual” functions, which is part of the design decision. I did this because I’m not certain a projectivized algebra is the only algebra I will want to use.

I agree. I have trouble reading your paper, @chakravala. Once I see the direct product, I’m lost. I’m learning tensor algebra in parallel, but it’s pretty esoteric, and it seems you end up replacing a lot of it when you get into applied Grassmann/geometric algebras. To make your paper more accessible you’ll need a preliminaries section, or alternate definitions that use more common mathematics. I know, you’d end up with massive volume(s) then, so maybe that can’t be done so easily.

@cgunn3
FWIW, even if the cyclic ordering is more natural when working on pen and paper, I think the lexicographic order may be more likely to come up in implementation, if only because bitfield representations of the basis blades are so ubiquitous for efficiency.

@ninepoints you can write a generator without relying on the bitmap method described by Leo Dorst and keep things fast.

Do you think you could elaborate a bit? @meuns

For low-dimensional geometric algebras, you can use a table and pick any basis without fighting too much with signs. You can also extend the bitmap method to handle an additional sign, but it feels clunky (I did that to use PGA with galgebra; I’m sure bugs remain).

I really liked Stephane Breuils’ thesis. It provides a nice overview of all implementations.

Thanks for sharing! I’ll have to take a closer look, but my first thought is, I’m not sure I’m a believer. Granted, my perspective is as a C++ developer so YMMV (if certain points of the following argument don’t apply to you, feel free to ignore :slight_smile:)

A table-based method requires a load, which seems bizarrely expensive for what should just be a single bitwise instruction (^, &, or | in various cases). The biggest tell is that the table can’t even fit in L1 cache for most useful dimensions, so we’re talking dozens to hundreds of cycles per access, depending on the situation. Worse, the access is likely to be random, as we aren’t looping through the table or anything. Even with latency hiding, at least in my code, there are many cases where execution cannot continue until the result of such a table query is known (mainly because one side of the branch often extinguishes a routine immediately, e.g. in the case of encountering a zero). It will also waste static storage space, which I care greatly about because such a header will likely be included in every translation unit. If we globally inline such a table so that one copy exists, we’ve now increased our link times. In my particular case, the biggest impediment to the approach is actually compile times, which I sort of have no choice but to obsess over if my approach is to remain tractable (I’m optimistic, but there is still a bit of work to do).

That said, I’ve bookmarked the link and will take an even closer look when I’m afforded the time. Thanks!

@ninepoints the table is only used by the generator (in my case I’m tweaking galgebra out of laziness, and it uses some kind of bitmaps). The runtime code uses a flat array, a list of grades, or several classes representing geometric objects, because we rarely need true multivectors. In the runtime code, the basis vectors become an implementation detail, and you can use a pile of tricks in the generator. The generator table will be inlined into the various products or other handy methods. I’m not sure about binary size, but it will probably depend more on your overall program inlining strategy than on a few float values added to the products.

In some ways, it is a design decision. Poincaré duality does not associate an orientation, so there are many variations of it. Hodge duality is very specific, and there is only one variation of it. I found there are only 2 types of complement that actually matter: the Hodge complement and the metric-independent complement. The other features, such as degenerate Leibniz elements, do not conflict with those definition choices, as those basis elements are not anti-symmetric. The complement operation only changes the Grassmann basis.

Also, it is not necessary to know differential geometry to learn my definition of complement. All you need to know is the geometric product and the reverse.

So in other words, you prefer to re-orient your basis to eliminate minus signs. I think this is okay to do in writing and in papers, but not in a computer program.

I always use an increasing index, like v123 for example, which is equal to v231 by the exterior basis equivalence relation, but there is no need to create a separate instance of the class. I only need one instance, so I only need v123 and don’t need a representation for v231 in my calculations; I automatically make sure the indices are sorted. Hence, I always use increasing order, requiring a single element from each equivalence class for representation.
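A minimal sketch of that normalization (my own code, not from any particular library): insertion-sort the indices while tracking swap parity, so only the sorted representative of each equivalence class is ever stored.

```python
def canonical(indices):
    """Normalize a blade's index sequence to sorted order, tracking the sign
    from anti-symmetry; e.g. v231 -> +v123, while v213 -> -v123."""
    idx = list(indices)
    sign = 1
    for i in range(len(idx)):              # insertion sort, counting swaps
        j = i
        while j > 0 and idx[j - 1] > idx[j]:
            idx[j - 1], idx[j] = idx[j], idx[j - 1]
            sign = -sign                   # each adjacent swap flips the sign
            j -= 1
    return sign, idx

assert canonical([2, 3, 1]) == (1, [1, 2, 3])    # even permutation of v123
assert canonical([2, 1, 3]) == (-1, [1, 2, 3])   # odd permutation of v123
```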

This does not mean that I am opposed to using other orderings in writing. It only means that for computing purposes, it is simpler and more efficient to only represent a single element of each equivalence class, instead of complicating things by having multiple representations for the same object.