Ordering of basis vectors?

Cheers.

Is that any better than having e213 as its dual?
If I have e0 with e123 as dual, does that mean I want e1 to have e230 as dual, and e2 to have e301 as dual, and e3 with e012 as dual?

How did the edge 02 get determined?

I’m still a little unclear on how every edge in your picture got a direction. I can see how I could come up with directions for 01, 12, and 23. I was semi-expecting the 03 edge to be flipped too.

Let’s say that I have managed to get your edge directions. I think from that I can see how e0 pairs with e123: I go to the face opposite e0, then pick any vertex and follow the edges in their direction. How does this work when I pick e2? The 03 and 31 edges are going one way around the face, but the 01 edge is going the other way.

I realise that the types of questions I’m asking suggest that I’ve missed something fundamental, and answering all the questions probably doesn’t make sense. I’m hoping that by asking all these questions you can see where I’ve gone astray.


Mainly your sanity :slight_smile: The ordering as “chosen” is just easier to follow since we already have a “natural” ordering from the natural numbers after all.

This was one of our initial choices where we decided (perhaps somewhat arbitrarily) that e_{01}, e_{02} and e_{03} would occupy our initial edges. The goal was ultimately to define the map with the involutory and grade-reversing properties we wanted. As I mention at the end, there are definitely other choices for this map that will work. The one we chose is largely about being somewhat easier to remember. Also, part of the point of the visualization is to show that even if you try to “improve” the situation and reduce the number of flips, you’ll just end up pushing the flips to a different edge/face. For any of the suggestions you made, I encourage you to just fill in the rest of the blanks and you’ll find a map with similar properties but with indices jumbled in a different arrangement.

As an aside, one other reason to prefer edges leaving 0 is that the convention extends to higher dimensions more easily.

If you read John Browne’s Grassmann algebra book, it seems to say that the definition of the dual comes from your definition of the metric, because the inner product is defined as a ∨ dual b. The book also describes how to compute duals and the metric for projective spaces. They seem very similar in behavior to what’s described in Gunn’s paper.

The exact same thing appears in linear algebra. If you think of the transpose of a vector as its dual, then transpose(v) * w defines a dot product, which lets you compute magnitudes, angles and projections of vectors. However, this seems to mean that your definition of the dual should change when you change the basis vectors.

I haven’t entirely understood what I’ve read.

Welcome @htuhola :slight_smile:

Neither John Browne’s book nor Grassmann’s original work needed to consider a space that has a degenerate element \mathbf{e}_i such that \mathbf{e}_i^2 = 0, hence they had no need for a definition that is metric-free. Both that book and others define the dual in terms of the geometric product and pseudoscalar, which is a valid dual map in cases where the metric has full rank.

As for your second paragraph, could you clarify a bit what you mean by “your definition for dual should change when you change the basis vectors?”

Take an orthonormal basis. The transpose of a vector ‘v’ (1 -> 3) in that basis gives you (3 -> 1). When you multiply any other vector by that row you get the dot product of ‘v’ with the other vector. It allows you to compute projections, angles and distances. Because of this, the transpose is understood as a tool for producing duals of vectors.

But you can also take a non-orthogonal basis. Now, to compute an operation that produces projections, angles and distances, you first have to multiply the vectors by the basis matrix: dot(Bv, Bw). So the plain transpose no longer produces them. Instead you have to take transpose(B*v)*B to get a ‘dual’ that has the same effect. In some sense, your choice of basis or metric selects what the dual should be in linear algebra.
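
This is easy to check numerically. A minimal NumPy sketch (the particular B, v and w are just made-up examples):

```python
import numpy as np

# Slanted (non-orthogonal) basis: columns are the basis vectors.
B = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Coordinates of v and w in that basis.
v = np.array([2.0, 1.0])
w = np.array([1.0, 3.0])

true_dot = (B @ v) @ (B @ w)  # inner product of the actual vectors
naive_dot = v @ w             # plain transpose on coordinates: wrong here

dual_of_v = (B @ v) @ B       # transpose(B*v)*B, a row covector acting on coordinates
print(true_dot, naive_dot, dual_of_v @ w)  # 14.0 5.0 14.0
```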

Browne describes “bound vector” complements in his “Exploring Complements” section. The complements for 2D space are given as:

1 ~ e0^e1^e2
e0 ~ e1^e2
e1 ~ e2^e0
e2 ~ e0^e1
e0^e1 ~ e2
e2^e0 ~ e1
e1^e2 ~ e0
e0^e1^e2 ~ 1
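
If I read it right, each row satisfies x ^ (complement of x) = e0^e1^e2; for instance, e1 ~ e2^e0 works because e1^e2^e0 = e0^e1^e2 (moving e0 to the front passes two vectors, so it picks up a sign of (-1)^2 = +1).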

They seem to be described in such a way that you can select a different basis and the dual adjusts.

I get that duality in itself does not depend on the metric to start with. But once you ‘compress’ it back into your multivector space, it would appear to require the metric.

In the definition that appears earlier in this thread, the map is fully defined without any reference to the metric. I believe the issue stems from believing that the dual should be defined with respect to the inner product, as opposed to the exterior product in the context of this thread. The inner product has an intrinsic metric dependence, but the exterior product does not. In other words, an involutory grade-reversing map is definable without any notion of a metric on every possible Clifford algebra, so the metric-dependent definitions are, in this sense, not the most general.

The dual is defined through the notion of antivectors. They are elements containing everything except the original vector. That would seem to require that you already know which vectors are orthogonal to each other.

If you have a slanted basis, e.g. [[1,1],[1,0]], the dual is going to be different than if it were an orthogonal basis. The dual also depends on the dimensionality of the basis.

Is there some really good explanation of these Poincare maps that you are using? Something that would explain it clearly? Basically, how does it work around the need to know the metric?

For the non-orthogonal bases, the dual map presented as \mathbf{J} extends by linearity so you have a trivial bijection for any element in the algebra. See the exposition on the Poincare dual map in Charles Gunn’s thesis for additional information here: https://www.researchgate.net/publication/265672134_Geometry_Kinematics_and_Rigid_Body_Mechanics_in_Cayley-Klein_Geometries

To clarify the point further, when you say “they are vectors containing everything else except the original vector,” this statement needs to be more mathematically precise. In particular, the words “containing” and “everything else except.”

John Browne uses a different definition; the complement map is certainly not an involution, as discussed by John Browne in his book. I have already pointed this out repeatedly in this thread.

Also, he doesn’t use the geometric product at all to define anything. He doesn’t even go into Clifford algebra in the first and only published volume. His definitions are based only on Grassmann algebra.

It is possible to use the geometric product to define the same thing as John Browne, and that’s the definition I have provided, but John Browne doesn’t use the geometric product in the definition.

John Browne’s and Grassmann’s complement operation can be understood more easily with the geometric product, but that’s not necessary, and he certainly never uses the geometric product to define it. I prefer using the geometric product in the definition because it highlights how it really works and makes it simple to understand.

I did look into Gunn’s thesis and read about the Poincare isomorphism in the references. I understand something of it, but not enough.

I’ve understood that the dual basis is defined through a ^ b = I. E.g. in 2D with I = x^y: x^y = I while y^x = -x^y. You get the dual x => y, y => -x, which does change sign. This is mentioned in Gunn’s thesis. Also, the 2D projective dual would seem to be the same operation as the 3D dual.
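
To spell out the sign: with I = x^y, the requirement a ^ (dual a) = I forces dual(x) = y, since x ^ y = I, and dual(y) = -x, since y ^ (-x) = -(y ^ x) = x ^ y = I.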

It just seems to be different: meet(a, dual b) would not be the inner product then. I don’t know if that’s a necessary property, but if it is, then the dual mapping depends on the metric.

How to be precise about the terms “containing” and “everything else except”? I guess you need the notion of projection to define these terms.

I’d be interested in a demonstration of how differential geometry fails with the J map. I don’t know why you haven’t already requested this from @chakravala.

It seems none of you have actually studied the axioms of the complement.

[screenshot: book page stating the axioms of the complement]

Look at the diagram on that page: you’ll see the complement operation in \mathbb R^2 is like a \pi/2 rotation. This stops happening with the false definitions used by the others in this thread.

The formula \star\star \omega = (-1)^{m(n-m)}\omega (for a grade-m element \omega in n dimensions) is an AXIOM. This means that it is NOT a proposition or theorem; it is an axiom from which the other properties are derived.
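
Checking it against the \pi/2-rotation picture in \mathbb R^2: with \star x = y and \star y = -x, we get \star\star x = \star y = -x = (-1)^{1(2-1)} x, exactly as the axiom demands.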

This complement axiom is a well-known fact in differential geometry; it’s the starting point of the theory.

[screenshot: the same axiom as it appears in a differential geometry text]

Axioms cannot be proven, as they are the starting point of a theory. If you ignore this axiom, then you are simply not doing Grassmann algebra, you are doing something else.

The J map and several other conventions favoured on this site/forum are, from my pov, in service to the dualized projective geometric algebra formalism. Which makes sense since the chief proponents set up this forum in the first place. I’m fine with that and totally grateful for them providing this resource for the GA community.

The J map is great for intersection tests etc., but loses a really important property, as @chakravala has pointed out. I personally use a reciprocal basis to define my non-metric dual. This would be similar to something I’ve seen Hestenes do, but with a modification for a degenerate metric. I’m not a mathematician, so I’m not sure what I’ve done is completely correct, but it seems to work if you have fewer degenerate basis vectors than non-degenerate ones.

Let’s say our only degenerate basis vector is e_0, with e_0\cdot e_0 = 0.
The reciprocal basis e^j is defined by e_i \cdot e^j = \delta_i^j.
To be more precise about e^k, we have the relation e^k = E^kI^{-1}, where E^k is a blade with every basis vector except the k-th: E^k = (-1)^{k+1}e_0\wedge \dots \wedge e_{k-1} \wedge e_{k+1} \wedge \dots \wedge e_n, and I is the unit pseudoscalar using the reciprocal basis.

You’ll notice I’ve defined e^0 in terms of e^0 (since the pseudoscalar I uses the reciprocal basis), so we have to fix that. We can do so by first defining e^i for i > 0 and then defining e^0 using the other reciprocal basis vectors.

Define e^i for i > 0 as above, but using I^{1} (which is I without e^0) and E^{1k} (which is E^k without e_0).
Finally, define e^0 as above.
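
For the non-degenerate block this amounts to inverting the Gram matrix. A minimal NumPy sketch of just that part (the particular basis is made up, and the degenerate e_0 still needs the special-casing above):

```python
import numpy as np

# Rows of E are the non-degenerate basis vectors (non-orthogonal on purpose).
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

G = E @ E.T                # Gram matrix G_ij = e_i . e_j (Euclidean metric assumed)
R = np.linalg.inv(G) @ E   # rows of R are the reciprocal vectors e^j

# Defining relation e_i . e^j = delta_i^j
print(np.allclose(E @ R.T, np.eye(3)))  # True
```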

I had a bit of trouble following Dr. Gunn’s definition in his paper, but I believe the way I’m doing things has pretty much the same effect as his computation/use of J but using reciprocal basis rather than temporarily euclideanizing.

Now you maintain the complement axiom, which you do want for differential geometry; really, it’s just very useful to have consistent behaviour when switching between projective and non-projective algebras. As a bonus, you have a reciprocal basis implemented in your codebase, which is handy for differential geometry on curved surfaces.

I originally defined the J operator to go to and from the dual Grassmann algebra since I found (and still find) it mathematically cleaner.

There is, however, an alternative, at least for computing the regressive product (the join operator), that more practically-minded people may find preferable and that doesn’t require the dual Grassmann algebra.

Rather than try to explain it here, I’m including a short write-up on it. You should focus on the discussion of the “canonical dual basis” on p. 2 rather than what’s said about the reciprocal frame, which belongs to a different but related discussion. The new map J takes a basis 1-vector e_i to the (n-1)-vector e^i where e_i and e^j satisfy e_i \wedge e^j = \delta_i^j \bf{I}. So you stay in the same algebra but the grade is reversed. It can be extended to other grades in a natural way.
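
To make this concrete, here is a minimal Python sketch of such a same-algebra dual on basis blades encoded as bitmasks. The encoding and names (reorder_sign, FULL, etc.) are mine rather than from the write-up, and I fix N = 3 so that applying J twice is the identity and the regressive product needs no extra sign bookkeeping:

```python
N = 3                 # e.g. the 2D PGA basis e0, e1, e2; odd N makes this J an involution
FULL = (1 << N) - 1   # bitmask of the pseudoscalar e0^e1^e2

def reorder_sign(a, b):
    """Sign from sorting the concatenation of blades a and b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return 1 - 2 * (swaps & 1)

def wedge(x, y):
    """Exterior product of multivectors stored as {bitmask: coefficient} dicts."""
    out = {}
    for ma, ca in x.items():
        for mb, cb in y.items():
            if ma & mb:   # repeated basis vector: the wedge vanishes
                continue
            m = ma | mb
            out[m] = out.get(m, 0) + reorder_sign(ma, mb) * ca * cb
    return {m: c for m, c in out.items() if c != 0}

def J(x):
    """Map e_m to s * e_c, with c the complement mask and s fixed by e_m ^ J(e_m) = I."""
    out = {}
    for m, c in x.items():
        comp = FULL ^ m
        out[comp] = out.get(comp, 0) + reorder_sign(m, comp) * c
    return out

def regressive(x, y):
    """Join via a v b = J(J(a) ^ J(b)); for even N an inverse of J would be needed."""
    return J(wedge(J(x), J(y)))
```

For N = 3 this reproduces the 2D complement table quoted earlier in the thread: J({0b001: 1}) gives {0b110: 1}, i.e. e0 ~ e1^e2, and J({0b010: 1}) gives {0b101: -1}, i.e. e1 ~ e2^e0.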

The main drawback of this approach from a mathematical point of view is that the result is not coordinate-free, as the original Poincare map is. But since the Poincare map seems to mystify many readers, and the coordinates used in PGA are not usually going to change, it seems better to use this more practical approach to define J.

An alternative interpretation of what is going on: interpret the coordinates provided by the Poincare map in the original algebra instead of in the dual Grassmann algebra.

If you use this version of J, it should no longer be called the Poincare map. A good name would be the “dual map”. You just have to remember not to confuse it with multiplication by the pseudoscalar, which has also been called “duality” but should really be called “polarity” or “orthogonal complement” (it’s metric-dependent, while J is not). I have written about this terminology issue in my articles; if you’re interested, look at “Geometric algebras for euclidean geometry”.
If you’re using J to get the regressive product, then it doesn’t really matter what you call it, because you apply it twice and hence come back to the original “space”. But if you apply J just once, the results will differ from using the Poincare map, and you should be careful to understand exactly what you are doing.
[Added 22.10.2020: I would recommend the terminology “Hodge duality” for the version that maps to the original algebra, and “Poincare duality” for the (original) version that maps to the other (dual) algebra.]


I don’t think it’s really mystifying many readers. I think some of us just want a construction of GA that doesn’t favour a given algebra/metric above others.

@cgunn3 nice whitepaper. I am also not an expert in multi-linear algebra, but I have been studying differential geometry in parallel with learning GA.
Your geometric notion of a metric-free way of getting scalar coordinates from a geometric object is very much what k-forms are designed for. Often the geometric description of how a 1-form measures a 1-vector is given by counting the number of the 1-form’s iso-surfaces (planes) pierced by the 1-vector. Flip that around to the description you give of displacing basis planes and you have the same effect.

@Orbots I agree with you. What I’m describing is the local behavior of vector fields and 1-forms (which vary from point to point in differential geometry). A 1-form “eats” a vector field to produce a real number at every point: a scalar field. Given a basis for the vector field, there is a “canonical” basis for the 1-forms such that e_i \wedge e^j = \delta^j_i\bf{I}. That means I suppose you could say that each basis 1-form is particularly interested in eating a particular basis vector.
It’s clear – I hope – that this depends on the choice of basis, hence isn’t “coordinate-free” and hence I don’t personally endorse it as a full replacement for J, which is coordinate-free. But it has begun to be used in PGA implementations and for this reason I think it’s important to understand what’s going on.

Hi again all, thanks for the interesting discussion. It is still a little bit over my head unfortunately.

Mainly your sanity :slight_smile: The ordering as “chosen” is just easier to follow since we already have a “natural” ordering from the natural numbers after all.

Understood.

This was one of our initial choices where we decided (perhaps somewhat arbitrarily) that e_{01}, e_{02} and e_{03} would occupy our initial edges.

Ok, so you “somewhat arbitrarily” picked e0 as a starting point and made outward-pointing edges from e0 to all the other vertices, and had you chosen e1 instead, you would have just ended up with another ordering (which would have worked fine, just been less obvious).

The goal was ultimately to define the map with the involutory and grade-reversing properties we wanted.

Ok, this is where I’m a bit lost. My understanding of the grade-reversing property is that we want to match the grade-0 thing with the grade-max thing, and the grade-1 things with the grade-(max-1) things. I don’t see how the direction of the edges has any relationship to that property. I am also unclear on what the involutory properties are.

So, after choosing 01, 02, 03 as the starting points, you then do 12, 23, 31. I’m guessing this is a little bit arbitrary as well, and you could have done 13, 32, 21?

Or if we had 5 points we could start with 01, 02, 03, 04 and then 12, 23, 34, 41. Now we need to decide on an ordering for 13 and 24, and it seems like we could pick any of those independently (i.e. we have 4 options here). Does that sound accurate?

Finally, even once I’ve got the directions figured out for the edges, I’m still not clear on how to translate that into the end product (1, e0, e1, e2, e3, e01, e02, etc.). My assumption was that if I just took some ordering like 230, then I could check the direction of 2->3, 3->0, and 0->2, count the number of times I was going against the arrow (2->3 no, 3->0 yes, 0->2 no), and if that count was odd (in this case yes), reverse the order of two adjacent items to get 320 (or 203). I don’t think that is actually correct though.
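
For comparison, the one sign computation I do know is standard: the sign of any index ordering relative to the ascending canonical one is the parity of its inversion count, independent of the edge-arrow picture. A tiny sketch (the function name is mine):

```python
def blade_sign(indices):
    """Sign of a basis blade relative to ascending index order.
    E.g. blade_sign((2, 3, 0)) == +1, i.e. e2^e3^e0 = +e0^e2^e3."""
    inversions = sum(1 for i in range(len(indices))
                     for j in range(i + 1, len(indices))
                     if indices[i] > indices[j])
    return 1 if inversions % 2 == 0 else -1
```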

Hi @thenewandy,

The choices above may be arbitrary in the sense that everything keeps working if you make other choices and carefully keep track of signs. They are however not arbitrary in that the choices Dr Gunn made in PGA provide trivial mappings to the well-known subalgebras. This means the quaternion and dual quaternion ‘layout’ is mimicked by picking e_0 as the starting point (grouping the degenerate bivectors), and similarly for the other choices (so that i, j, k map directly).

While it is good to have some feel for this, the truly coordinate-free nature of GA means it really doesn’t matter, and when practicing GA one will hardly ever go into these details.

Hope that helps :+1:

Right, I’m ok with arbitrary choices, and even with making those choices so as to provide nice mappings to existing algebras for convenience. What I don’t understand right now is how I would generate a sane ordering for something with e0, e1, e2, e3, e4, e5, e6. My definition of “sane” is likely to be suspect; maybe it’s just any ordering where my dual operator swaps only things of “opposite” grades, and also avoids swapping e.g. e1 and e123456 (but would instead swap e1 and e023456).

Currently I’m thinking of a new axiom for the complement, which specifically takes degenerate metrics into account. Consider \mathbb{R}^{p,q,r} with n = p + q. Then the complement-of-the-complement axiom would still be the same formula \star\star\omega = (-1)^{m(n-m)}\omega, except that n does not count r. This could also be useful.

In fact, I already have this set up to work with the other “Leibniz” symmetric basis. That’s the whole point of it: the symmetric basis doesn’t count towards grade, it only counts towards degree/order.