Ordering of basis vectors?

No, it’s not baked in there. It only takes care of the sign in one direction: you would then get (-1)^k(-1)^k instead of (-1)^k(-1)^{n-k}, and so you’d be off by a factor of (-1)^k(-1)^{n-k} either way. It doesn’t matter if you use a different ordering there, because you’ll still have grade reversal to deal with.

Rather than talking past each other (I suspect I’m being misunderstood again), how about we just look at code?

Here’s the implementation of J in Klein. No sign flips in sight. @thenewandy, the implementation there might help you with yours.
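For anyone who doesn’t want to chase the link, here is the idea as a minimal Python sketch (illustrative only, not Klein’s actual C++ source): with the coefficients stored in the cyclic basis order discussed further down, the dual map is nothing more than reversing the array.

```python
# Minimal sketch (not Klein's actual code). A 3D PGA multivector is a
# 16-element coefficient list over the cyclic basis order
# [1, e0,e1,e2,e3, e01,e02,e03,e12,e31,e23, e021,e013,e032,e123, e0123].

def J(coeffs):
    """Poincare dual: the coefficient of basis element i moves to slot 15 - i.
    No sign flips, and applying it twice is the identity (an involution)."""
    return coeffs[::-1]
```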

You aren’t using the correct definition, but if you don’t care about being mathematically correct, that’s fine.

We are talking past each other because you have assumed an incorrect proposition: that the complement map is an involution, and this is false. Everything else built on that idea will therefore not be fully compatible with e.g. discrete exterior calculus.

In PGA, multiplication with the pseudoscalar is non-invertible. So how can this definition be used for the duality? The entire point of the J map is that it is a trivial isomorphism on the geometry in both algebras (i.e. swapping a point in the algebra for the same point in the dual algebra, and the same for lines and planes). And so yes, the J map is indeed an involution. (The right complement is not.)

So when the J map is used, there is no complement at all.

Furthermore, if you use that as the definition of duality, it is metric dependent (as it uses the geometric product) - yet the very definition of duality

a\wedge b = (a^* \vee b^*)^{-*}

implies it is a non-metric operation. (The left side is the outer product, which is non-metric, and the thing in the middle is an equals sign, so the dual map itself cannot be allowed to introduce a metric.)

You’ll either have to read that again, or explain it again - and just a small hint - when you find yourself explaining something twenty times … maybe you need another approach?

@enki it seems you are not really familiar with my arguments either: you can use a non-metric version of the Hodge complement (I prefer the ! symbol instead of J). Nevertheless, the factor of (-1)^{k(n-k)} is unrelated to the metric of the algebra; whether you use the metric or non-metric version doesn’t change it. Changing the basis ordering doesn’t affect that factor either.
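Concretely, in n = 4 (the algebra of 3D PGA), the factor is (-1)^{1\cdot 3} = -1 on vectors and trivectors and (-1)^{2\cdot 2} = +1 on bivectors - no metric enters that computation anywhere.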

@chakravala,

You seem to be completely missing the point. There is no complement being used here whatsoever. The map we are after, which we use solely to define the regressive product, has only one (geometric, not algebraic) requirement:

  • it must map ‘points on a line’ to ‘lines through a point’ (in 2D)

The J map is the simplest, trivial map that does that. It has no sign swaps anywhere and it uses no complements. It is an involution and is the most efficient way to define the regressive product.

Basis order and metric are completely irrelevant - you can add as many factors as you like; you’d only be correct if you take them back out in the very next instruction after the outer product.

(since, to define the regressive product, we apply J, do one outer product, and then apply the inverse of J - which is J again, as it’s an involution):

a \vee b = J^{-1}(J(a) \wedge J(b))
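To make that concrete, here is a small self-contained Python sketch of exactly this pipeline (my own illustrative code, not any library’s implementation), using the cyclic basis order that comes up later in the thread:

```python
from itertools import product

# 3D PGA basis in the cyclic order; J pairs slot i with slot 15 - i.
BASIS = ["1", "e0", "e1", "e2", "e3",
         "e01", "e02", "e03", "e12", "e31", "e23",
         "e021", "e013", "e032", "e123", "e0123"]

def parse(name):
    """Blade name -> (sign, sorted index tuple), e.g. 'e021' -> (-1, (0, 1, 2))."""
    idx = [] if name == "1" else [int(c) for c in name[1:]]
    sign = 1
    for i in range(len(idx)):                # bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

CANON = [parse(b) for b in BASIS]                     # per-slot (sign, indices)
SLOT = {t: (i, s) for i, (s, t) in enumerate(CANON)}  # indices -> (slot, sign)

def J(a):
    """The entire dual map: reverse the 16 coefficients. It is its own inverse."""
    return a[::-1]

def wedge(a, b):
    """Outer product: purely combinatorial, no metric used anywhere."""
    out = [0.0] * 16
    for (i, (sa, ta)), (j, (sb, tb)) in product(enumerate(CANON), repeat=2):
        if set(ta) & set(tb):
            continue                          # repeated basis index -> zero
        s, t = parse(("e" + "".join(map(str, ta + tb))) if ta + tb else "1")
        slot, sk = SLOT[t]
        out[slot] += sa * sb * s * sk * a[i] * b[j]
    return out

def vee(a, b):
    """Regressive product: a v b = J(J(a) ^ J(b)), with J^-1 = J."""
    return J(wedge(J(a), J(b)))

# Example: join two points (trivectors) into the line (bivector) through them.
P = [0.0] * 16; P[14] = 1.0                # e123
Q = [0.0] * 16; Q[14] = 1.0; Q[13] = 1.0   # e123 + e032
print(vee(P, Q))                           # nonzero only in the e23 slot
```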


Unrelated to the definition of the regressive product, you say

As I explained many times, the best and most straightforward definition is the following: \star\omega = \tilde\omega I, which is simply the reversed element times the pseudoscalar, using the geometric product.

This is clearly wrong when there is a degenerate metric, i.e. that map is clearly not an anti-involution. For proof, calculate, in \mathbb R_{2,0,1}, the double application of your map on e_{01}, where e_0^2 = 0 and e_1^2 = 1:

“Simply the reverse times the pseudoscalar”: \tilde{e}_{01} e_{012} = e_{10} e_{012} = 0
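(Writing it out: \tilde{e}_{01} e_{012} = e_{10} e_{012} = e_1 (e_0 e_0) e_1 e_2 = 0, since e_0^2 = 0. The e_{01} component is annihilated on the first application already, so a second application cannot restore it, and no factor of \pm 1 can fix that.)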

This is why people keep ‘ignoring’ your definition in PGA. (Hope that helps, because I’m tired of hearing how many times you’ve explained it - when I’ve never seen you explain it.)

Hi all, thanks for all your help. I’m afraid that most of what you are talking about is above my head.

My simple-minded view is that I can implement “dual” by reversing the order of my coefficients, so the one which applied to “1” now applies to “e0123”, and vice versa.

If I choose the wrong order of basis vectors, then this won’t work.

What isn’t clear to me is:

  1. How is the order “1”,“e0”,“e1”,“e2”,“e3”,“e01”,“e02”,“e03”,“e12”,“e31”,“e23”,“e021”,“e013”,“e032”,“e123”,“e0123” determined?
  2. (bonus credit, I think if I understand (1) then I’m happy enough): Are there other orderings which can work with the “reverse the coefficients for dual” approach?

Just trying to follow along with the discussion, can I check my assumptions? (hopefully this communicates my level of ignorance…):

  • The “pseudoscalar” in PGA3D is e0123
  • The “grade” of “1” is 0, the grade of “e0” is 1, the grade of “e01” is 2, the grade of “e012” is 3, and the grade of “1+e0+e01+e012” is also 3.
  • The “dual map” is the same as the “Poincare dual”
  • “Complement” is negating some subset of the coefficients (all but the one applying to “1”?)

@enki, your definition used is actually incorrect for defining the regressive product, there are several mistakes in your approach.

The one true original definition from Hermann Grassmann is what I use; it is described in John Browne’s book, discussed by Hodge and de Rham, and used in differential geometry for deriving the chain co/homology theories… which you will all get incorrect with your definitions.

The extra factor included in the double application of the complement is not there by accident. It is deliberate, and it has been used from the start by Grassmann, Hodge, de Rham, and many others.

You are basically ignoring the original definitions used by these people for differential geometry.

The correct definition for Grassmann algebra is the one I described, it is also used to define the regressive product, as explained in John Browne’s Grassmann algebra book. There is no need for me to elaborate on it further, since it is already well known in the literature and was used from the start.

If you keep assuming that this map needs to be an involution (to make things “simpler”), then you are actually preventing yourself from using it for differential geometry and Hodge-de Rham co/homology.

There is a non-metric version of this definition, which I associate with the symbol ! - yet you keep repeating the same thing… and you are literally ignoring the things I said, no wonder you don’t understand.

Anyway, the only difference between the ! and the J map is that a double application of ! will always give you that factor (-1)^{k(n-k)}, which is absolutely essential for differential geometry. If you don’t want to do differential geometry and would rather do things incorrectly, that’s fine… I’m just worried that you will be teaching this and giving people definitions that aren’t compatible with OG (Original Grassmann).

Great question @thenewandy! The key is to think about “cycles,” hence the name “cyclic basis.” For the elements of grade 0 and grade 1, there is no reasonable “ordering” since they consist of either no indices or a single index. In space, the vectors e_0, e_1, e_2, and e_3 can be thought of as vectors that terminate at four points of a tetrahedron.

In this view, edges of this tetrahedron can be mapped to the bivector basis elements by considering the start and end vertex. This is also consistent with the viewpoint of bivectors as “oriented areas.” After all e_{12} should be opposite e_{21} in sign, and on this tetrahedron (which we are using as a representation of our basis elements), edges can be oriented as well.

Starting from vertex 0, there’s no reason to consider it as anything other than the “starting vertex,” so let’s start with 3 elements e_{01}, e_{02} and e_{03} emanating from e_0. Here’s a sketch I made of what this looks like so far:

Now, if we move to the next point at e_1, we see that we already have an edge from e_0 terminating here, which is now our “official” basis element e_{01}. But now we have a choice. Should I connect e_1 to e_2 to make e_{12}? Should I connect it to e_3 to make e_{13}? Both? To decide how best to label it, I should explain the whole point of the tetrahedron in the first place. For each vertex of the tetrahedron, I can map it uniquely (and bijectively) to the opposite face of the tetrahedron. That face is associated with 3 vertices, which is… a trivector! To figure out the index order, we can start from any vertex of that opposite face, and list the other two indices in order by following the edges. Similarly, for each edge in the tetrahedron, there is a unique “opposite” edge that doesn’t share any vertices, which uniquely maps bivectors to bivectors, and all these maps are reversible. This tetrahedron IS the manifestation of a dual map with all the properties that we want.

OK, so knowing that, this is where we can be more clever with how we arrange things. One thing we “want” is to arrange it so that e_0 has e_{123} as its dual. Notice that e_{123} is equivalent to e_{312} and e_{231} (see the cycle yet?). This means we need to select the edge directions of the (123) face of the tetrahedron to preserve this order. Starting from e_1 then, we’ll move to 2, and then 3 to define e_{12}, e_{23} and e_{31} respectively. I’ve amended my picture to show this:

One magical thing happened here. All the edges have been determined, which means that all the tetrahedron faces have been determined too! Let’s look at e_3, for example. We can see that the opposite face with the 0, 1, and 2 vertices has the unfortunate property that it doesn’t create a cycle. That is, starting from 0, I can go to 1, and then 2, but the edge from 2 to 0 is reversed. The remedy is to introduce a “flip” so that we have a cycle in our mapping. Thus e_{021} (equivalent to -e_{012}) is part of our canonical basis. Note that it doesn’t matter if we try to traverse 0 \rightarrow 1 \rightarrow 2 and flip one edge, or traverse the opposite direction 0 \rightarrow 2 \rightarrow 1 and flip two edges. The result is the same since the latter traversal introduces two flips resulting in the same cycle (021). If we repeat this for the other faces, we get e_{013} and e_{032}.

What about the bivectors (edges)? It’s easy to verify that each one has a unique mapping to and from the opposite edge. For example, e_{02} there on the left has for its dual edge e_{31}. Similarly e_{31} maps back to e_{02}.

Now, we have a well-defined scheme that maps every basis element to its dual element and back. It is easy to see here that doing this map twice gets you to the same element, and because of the cyclic ordering, we need no sign changes to perform the mapping. There is some ambiguity in the “canonical” trivectors, but only because we are free, for a given trivector, to use instead a trivector whose indices differ by an even permutation. Thus, we have the desired Poincare isomorphism we’ve been looking for.

Now, for your second question, are there alternative bases that could have worked (aside from swapping indices on the trivector)? My answer there is “yes.” Take all the edges above and swap the orientations. Done :). Alternatively, we can swap all the edges of the (123) face of the tetrahedron. You can see that in both cases, this tetrahedron still has all the properties of the map we need, and due to projective equivalence, all our computation will be unchanged. Other arrangements are possible of course!
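If you want to machine-check a candidate ordering for your second question, here is one way to phrase the construction above programmatically (a rough sketch; the function names are mine): coefficient reversal acts as a sign-free dual exactly when each basis element in the first half of the list wedges with its mirror partner to +e_{0123}.

```python
def perm_sign(indices):
    """Sign of the permutation that sorts `indices`; 0 if an index repeats."""
    idx = list(indices)
    if len(set(idx)) != len(idx):
        return 0
    sign = 1
    for i in range(len(idx)):                # bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign

def reversal_is_a_dual(order):
    """True when basis[i] ^ basis[15 - i] = +e0123 for the first half of the
    list, so that plain coefficient reversal needs no sign fixes."""
    for i in range(len(order) // 2):
        a = [] if order[i] == "1" else [int(c) for c in order[i][1:]]
        b = [] if order[15 - i] == "1" else [int(c) for c in order[15 - i][1:]]
        if sorted(a + b) != [0, 1, 2, 3] or perm_sign(a + b) != 1:
            return False
    return True

cyclic = ["1", "e0", "e1", "e2", "e3", "e01", "e02", "e03", "e12", "e31",
          "e23", "e021", "e013", "e032", "e123", "e0123"]
lex = ["1", "e0", "e1", "e2", "e3", "e01", "e02", "e03", "e12", "e13",
       "e23", "e012", "e013", "e023", "e123", "e0123"]
print(reversal_is_a_dual(cyclic))  # True
print(reversal_is_a_dual(lex))     # False: lexicographic order needs sign flips
```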

I hope this helps (and I also hope that I didn’t make too many egregious errors in this somewhat rushed post).


Cheers.

Is that any better than having e213 as its dual?
If I have e0 with e123 as dual, does that mean I want e1 to have e230 as dual, and e2 to have e301 as dual, and e3 with e012 as dual?

How did the edge 02 get determined?

I’m still a little unclear on how every edge in your picture got a direction. I can see how I could come up with directions for 01, 12, and 23. I was semi-expecting the 03 edge to be flipped too.

Let’s say that I have managed to get your edge directions. I think from that I can see how e0 pairs with e123 - I go to the face opposite e0, and then pick any vertex and follow the edges in their direction. How does this work when I pick e2 - the 03 and 31 edges are going one way around the face, but the 01 edge is going the other way?

I realise that the types of questions I’m asking suggest that I’ve missed something fundamental, and answering all the questions probably doesn’t make sense. I’m hoping that by asking all these questions you can see where I’ve gone astray.


Mainly your sanity :) The ordering as “chosen” is just easier to follow since we already have a “natural” ordering from the natural numbers after all.

This was one of our initial choices where we decided (perhaps somewhat arbitrarily) that e_{01}, e_{02} and e_{03} would occupy our initial edges. The goal was ultimately to define the map with the involutory and grade-reversing properties we wanted. As I mention at the end, there are definitely other choices for this map that will work. The one we chose is largely about being somewhat easier to remember. Also, part of the point of the visualization is to show that even if you try to “improve” the situation and reduce the number of flips, you’ll just end up pushing the flips to a different edge/face. For any of the suggestions you made, I encourage you to just fill in the rest of the blanks and you’ll find a map with similar properties but with indices jumbled in a different arrangement.

As an aside, one other reason to prefer edges leaving 0 is that the convention extends to higher dimensions more easily.

If you read up on John Browne’s Grassmann algebra book, it seems to say that the definition of duals comes from your choice of metric, because the inner product is defined as a ∨ dual b. The book also describes how to compute duals and metrics for projective spaces. They seem very similar in behavior to what’s described in Gunn’s paper.

The exact same thing appears in linear algebra. If you think of the transpose of a vector as its dual, then transpose(v) * w defines a dot product for v, which lets you compute magnitudes, angles, and projections of vectors. However, this seems to mean that your definition of the dual should change when you change the basis vectors.

I haven’t entirely understood what I’ve read.

Welcome @htuhola :slight_smile:

Neither John Browne’s book nor Grassmann’s original work needed to consider a space that has a degenerate element \mathbf{e}_i such that \mathbf{e}_i^2 = 0, hence the need here for a definition that is metric-free. Both that book and others define the dual in terms of the geometric product and pseudoscalar, which is a valid dual map in cases where the metric has full rank.

As for your second paragraph, could you clarify a bit what you mean by “your definition for dual should change when you change the basis vectors?”

Take an orthogonal basis. The transpose of a vector ‘v’ (1 -> 3) in that basis gives you (3 -> 1). When you multiply any other vector by that matrix, you get the dot product between ‘v’ and the other vector. It allows you to compute projections, angles, and distances. The transpose is understood as a tool for producing duals of vectors because of this.

But you can also take a non-orthogonal basis. Now, to compute the operation that produces projections, angles, and distances, you have to first multiply the vectors by the basis: dot (Bv) (Bw). So the transpose no longer produces them. Instead you need (transpose(B*v)*B) to get a ‘dual’ that has the same effect. In some sense, your choice of basis or metric selects what the dual should be in linear algebra.
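A small numpy sketch of that point (the numbers are just for illustration):

```python
import numpy as np

B = np.array([[1.0, 1.0],     # a slanted (non-orthogonal) basis
              [1.0, 0.0]])
v = np.array([1.0, 2.0])
w = np.array([3.0, 1.0])

true_dot = (B @ v) @ (B @ w)  # inner product of the actual vectors: 15.0
naive_dot = v @ w             # plain transpose-times: 5.0 - no longer correct

G = B.T @ B                   # the metric (Gram matrix) the basis induces
metric_dot = v @ G @ w        # transpose(B*v)*B applied to w: 15.0 again
```

So the ‘dual’ row vector that reproduces the inner product is transpose(v)*G rather than transpose(v), which is exactly the basis/metric dependence described above.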

Browne describes “bound vector” complements in his “exploring complements” section. The complements for 2D space are provided as:

1 ~ e0^e1^e2
e0 ~ e1^e2
e1 ~ e2^e0
e2 ~ e0^e1
e0^e1 ~ e2
e2^e0 ~ e1
e1^e2 ~ e0
e0^e1^e2 ~ 1

They seem to have been described in a way that you can select a different basis and the dual adjusts.

I get that the duality in itself does not depend on the metric to start with. But once you ‘compress’ it back into your multivector space, it would appear to require the metric.

In the definition that appears earlier in this thread, the map is fully defined without any reference to the metric. I believe the issue stems from believing that the dual should be defined with respect to the inner product, as opposed to the exterior product in the context of this thread. The inner product has an intrinsic metric dependence, but the exterior product does not. In other words, an involutory grade-reversing map is definable without any notion of a metric on every possible Clifford algebra, so the metric-dependent definitions are, in this sense, not the most general.

The dual is defined through the notion of antivectors. They are vectors containing everything else except the original vector. That would seem to require that you already know which vectors are orthogonal to each other.

If you have a slanted basis, e.g. [[1,1], [1,0]], the dual is going to be different than for an orthogonal basis. The dual also depends on the dimensionality of the basis.

Is there some really good explanation of these Poincare maps that you are using? Something that would explain it clearly? Basically, how does it work around the need to know the metric?

For the non-orthogonal bases, the dual map presented as \mathbf{J} extends by linearity so you have a trivial bijection for any element in the algebra. See the exposition on the Poincare dual map in Charles Gunn’s thesis for additional information here: https://www.researchgate.net/publication/265672134_Geometry_Kinematics_and_Rigid_Body_Mechanics_in_Cayley-Klein_Geometries

To clarify the point further, when you say “they are vectors containing everything else except the original vector,” this statement needs to be more mathematically precise. In particular, the words “containing” and “everything else except.”

John Browne uses a different definition: the complement map is certainly not an involution, as discussed by John Browne in his book. I have already pointed this out repeatedly in this thread.

Also, he doesn’t use the geometric product at all to define anything. He doesn’t even go into Clifford algebra in the first (and only published) volume. His definitions are based purely on Grassmann algebra.

It is possible to use the geometric product to define the same thing as John Browne, and that’s the definition I have provided, but John Browne doesn’t use the geometric product in the definition.

John Browne’s and Grassmann’s complement operation can be more easily understood with the geometric product, but it’s not necessary, and he certainly never uses the geometric product to define it. I prefer using the geometric product in the definition because it highlights how it really works and makes it simple to understand.

I did look into Gunn’s thesis and read up on the Poincare isomorphism in the references. I understand some of it, but not enough.

I’ve understood that the dual is defined through a ^ dual(a) = I. E.g. in 2D, x^y = x^y and y^x = -x^y, so you get the dual x => y, y => -x, which does change sign. This is mentioned in Gunn’s thesis. Also, the 2D projective dual would seem to be the same operation as the 3D dual.
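(Indeed, applying that dual twice sends x => y => -x, a factor of -1 on vectors - exactly the (-1)^{k(n-k)} factor with k = 1, n = 2 that was debated earlier in the thread.)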

It just seems to be different. The meet(a, dual b) would not be the inner product then. I don’t know if that’s a necessary property, but if it is, then the dual mapping depends on the metric.

How to be precise about the terms “containing” and “everything else except”? I guess you need the notion of projection to define these terms.

I’d be interested in a demonstration of how differential geometry fails with the J map. I dunno why you haven’t already requested this from @chakravala.

It seems none of you have actually studied the axioms of the complement.

[screenshot: a page showing the axioms of the complement, with a diagram]

Look at the diagram on that page: you’ll see the complement operation in \mathbb R^2 is like a \pi/2 rotation. This stops happening with the false definitions used by the others in this thread.

The formula \star\star\omega = (-1)^{m(n-m)}\omega is an AXIOM. This means that it is NOT a proposition or theorem; it is an axiom from which the other properties are derived.
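For example, with n = 2 and m = 1, the axiom gives \star\star\omega = (-1)^{1\cdot 1}\omega = -\omega: two quarter-turn complements compose to a half turn, which is exactly the \pi/2 rotation picture above.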

This complement axiom is a well-known fact in differential geometry; it’s the starting point for the theory.


Axioms cannot be proven, as they are the starting point of a theory. If you ignore this axiom, then you are simply not doing Grassmann algebra - you are doing something else.