biVector forum

2D-PGA Cheat Sheet - Dual Basis Superscript

The cheat sheet shows the dual with superscripts for the bases. Is that a notation I’m not familiar with or simply a formatting mistake?
(screenshot: the "dual-reverse" table from the cheat sheet)
Cheers.

PS: Just recently got into GA and loving all about it. Thanks for being a great community with such a huge effort in providing resources and insight to get started!

The use of superscripts in these formulas refers to elements of the dual algebra; in particular, that means the map defined in the table is the dual coordinate map, or Poincaré duality map. More recently, others have introduced Hodge duality for very similar purposes – essentially the same formulas using subscripts instead of superscripts, so that the table refers to elements of a single algebra. You can read about this issue in more detail in this post on the biVector forum. Please post any further questions here.

Introduction

Thanks a bunch for the quick and concise reply! I took some days to go through the linked post, followed by the forum in general. Still digesting all the information, but I figured I would write some of my thoughts down while they’re still fresh, with the slight chance that it might be of interest to some in the community. Apologies if it is not, and worst case I’m thankful for a place to order some of my own confusion :wink: .

I’m a computer scientist by trade, so my point of view is that of a student dipping his toes into an exciting new field. It will also be highly subjective at points, so absolutely feel free to disagree!

Kind of a tl;dr at the end.

About the Dual

The concept of duality is such a beautiful one. In the context of GA (and PGA specifically), however, it seems to be a frequent cause of both confusion among newcomers and discussion among experts. The by now infamous thread about why use dual space in PGA was, as @bellinterlab puts it, “quite enlightening”.

So I started out by trying to understand the various concepts, from Poincaré and dual maps to Hodge duals and reciprocal frames. That was followed by quite the adventure of figuring out some of the missing puzzle pieces and gaps, including the choice of basis and how to actually compute the dual.

At this point I had already done some very intriguing experiments using ganja.js, where mocking up a 2D-shadow caster using PGA is just such a fun project to get started!

\mathbb{R}_{2,0,1} and the available implementations are a great starting point for newbies.

However, I hit a wall of understanding, with some unanswered questions lingering in my head. So I decided to sit down and start from scratch, drawing some pretty Cayley tables and deriving some of the operations by hand.

The following outlines some of my findings and conclusions of these early adventures.

The Choice of Basis

The first time I stumbled upon the topic was related to ganja.js, which uses \mathbf{e}_{02} instead of \mathbf{e}_{20}, allegedly in order to account for the Y-down orientation in 2D-rendering. I soon came to realize though that the implications of how we name, write and order our basis vectors and blades go much deeper.

Notation

The first point I want to make highlights two aspects in particular, both of which are of somewhat subjective nature.

0-indexing vs. 1-indexing

For me, starting e.g. from two basis vectors \mathbf{e}_1, \mathbf{e}_2 \in \mathbb{R}_{2,0,1}, it felt more natural to add the additional 1-up dimension at the end, rather than at the front, i.e. introduce \mathbf{e}_3 rather than \mathbf{e}_0. I likely have been primed this way during my studies and career, but looking at Cayley tables with \mathbf{e}_0 in the first column feels “off” to me. Maybe also due to the fact that all my math courses in university used 1-indexing.

More objectively, an argument I’ve seen made in favor of 0-indexing is that the translational element stays consistent in all dimensions. I find this argument can be flipped, though: with 1-indexing, the basis always starts with the “bulk” of the object instead.

In practice, I’d assume that a slight majority of enthusiasts would be more comfortable with 1-indexing for basis vectors, lowering the bar for cross-discipline beginners. This is of course speculative; it was at least true for myself.

Pseudoscalar Symbol

A second aspect is a notational one, concerning the pseudoscalar. In written form, it’s often denoted by I. In calculations and tables, it’s often written in terms of its basis vectors, e.g. \mathbf{e}_{1234}. This clutters formulas and tables alike, especially in higher dimensions.

There’s an alternative notation that I first saw in @elengyel’s work, which uses blackboard boldface \mathbb{1} instead. I’ve come to like this notation a lot and hope to further advocate its adoption in the GA community, in both prose and table contexts.

Side-by-Side Comparison

I drew up the Cayley tables for \mathbb{R}_{2,0,1} as defined by ganja.js, biVector, as well as my own favorite form, respectively.

(images: Cayley tables for the ganja.js and biVector orderings)

Complements

This is where it got interesting. Drawing up my first \mathbb{R}_{2,0,1} Cayley tables, I noticed that the order of blades and, more significantly, the order of basis vectors that define those blades have implications reaching farther than I was yet aware of.

Order of Blades

Rather obvious - but still something I needed to discover first - is the fact that by simply arranging blades in one way or another, it is possible to shape the table’s anti-diagonal.

For any PGA Cayley table, it is possible to arrange the blades in such a way that the anti-diagonal items result in \pm\mathbb{1}.
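This claim can be spot-checked mechanically for \mathbb{R}_{2,0,1}. Below is a minimal sketch (helper names `blade_product` and `METRIC` are my own, not ganja.js API), assuming the 1-indexed convention with the degenerate direction as \mathbf{e}_3:

```python
# Metric squares for R(2,0,1): e1^2 = e2^2 = 1, e3^2 = 0.
METRIC = {1: 1, 2: 1, 3: 0}

def blade_product(a, b):
    """Geometric product of two basis blades, given as index tuples.
    Returns (sign, blade); sign is 0 if a null vector gets squared."""
    idx = list(a) + list(b)
    sign = 1
    changed = True
    while changed:
        changed = False
        for i in range(len(idx) - 1):
            if idx[i] > idx[i + 1]:
                # Distinct basis vectors anticommute: each swap flips the sign.
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                sign = -sign
                changed = True
            elif idx[i] == idx[i + 1]:
                # e_k e_k contracts to its metric square.
                sign *= METRIC[idx[i]]
                del idx[i:i + 2]
                changed = True
                break
    return sign, tuple(idx)

# A blade order whose anti-diagonal is purely +pseudoscalar:
blades = [(), (1,), (2,), (3,), (1, 2), (3, 1), (2, 3), (1, 2, 3)]
for i, blade in enumerate(blades):
    sign, result = blade_product(blade, blades[len(blades) - 1 - i])
    assert (sign, result) == (1, (1, 2, 3))  # every entry is +1 times e123
```

Anti-diagonal entries pair complementary blades, so no contractions occur and the geometric product agrees with the wedge there.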

Order of Basis Vectors

Similarly, we can influence the sign of the pseudoscalars on the anti-diagonal by swapping the order in which we multiply basis vectors. As mentioned before, there is for example a discrepancy between the biVector cheat sheet and the ganja.js implementation, where the latter has -\mathbb{1} on some of its anti-diagonal elements. In this case it was likely just an implementation detail and doesn’t really matter, but for educational purposes, I would argue that having non-negative pseudoscalars on the anti-diagonal is desirable.

And apart from simplicity and visuals, there are mathematical implications I’ll get into in the next section.

While for \mathbb{R}_{2,0,1} it is rather simple to find such an ordering, it is impossible to do so in \mathbb{R}_{3,0,1}. Consider for example \mathbf{e}_{234} \wedge \mathbf{e}_{1}. The vector element needs to swap places three times to get to the front and form \mathbf{e}_{1234}, no matter what. There’s simply no way around it! This inherently introduces a negation due to anticommutativity. @enki already notes that with a proper choice of basis, the minus signs end up on the trivectors, similar to the biVector cheat sheet:

1, \mathbf{e}_{0}, \mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{3}, \mathbf{e}_{01}, \mathbf{e}_{02}, \mathbf{e}_{03}, \mathbf{e}_{12}, \mathbf{e}_{31}, \mathbf{e}_{23}, \mathbf{e}_{021}, \mathbf{e}_{013}, \mathbf{e}_{032}, \mathbf{e}_{123}, \mathbb{1}

This property is imo due to the axioms of the complement, which @chakravala mentions here. While I agree that this forms a “nicer” basis, @elengyel on his poster chose an order in which the anti-diagonal entries for both the bi- and trivectors become negated.

1, \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3, \mathbf{e}_4, \mathbf{e}_{23}, \mathbf{e}_{31}, \mathbf{e}_{12}, \mathbf{e}_{43}, \mathbf{e}_{42}, \mathbf{e}_{41}, \mathbf{e}_{321}, \mathbf{e}_{124}, \mathbf{e}_{314}, \mathbf{e}_{234}, \mathbb{1}
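The transposition count behind these sign flips can be checked mechanically. Here is a minimal sketch (my own hypothetical helper `reorder_sign`, not part of any library): it bubble-sorts a blade's index list and flips the sign once per neighbor swap, i.e. once per anticommuting transposition.

```python
def reorder_sign(indices):
    """Sign picked up when sorting a blade's index list; each
    neighbor swap is one anticommuting transposition."""
    sign, idx = 1, list(indices)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(idx) - 1):
            if idx[i] > idx[i + 1]:
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                sign = -sign
                swapped = True
    return sign

# e_{234} e_1: the 1 must hop past three vectors, so three sign flips.
print(reorder_sign([2, 3, 4, 1]))  # → -1
# e_{21}: a single hop.
print(reorder_sign([2, 1]))        # → -1
```

So \mathbf{e}_{234} \wedge \mathbf{e}_{1} = -\mathbf{e}_{1234} no matter how cleverly the blades are arranged.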

I’m curious as to why that choice was made! All of this, however, led me to the discovery I made next.

The Great Collapse

For the math-savvy here this is old news, but I made a small jump when I discovered the connection between basis ordering and the complement operators. To recap - and this is only my understanding at this point - the (some™) dual operator generalizes to a left- and a right-complement.

Bluntly put, they flip the coefficients of the current basis, sometimes with some seemingly magic sign flips thrown in for good measure. The revelation for me was to realize that the sign correlates directly with the Cayley table’s anti-diagonal, which in turn finally let me make the cognitive connection between blades and their duals. In some spaces the complements inherently “flip”, as is the case in \mathbb{R}_{3,0,1} for all trivectors, as described above. The complement operator of course mirrors this behavior by flipping the sign on the corresponding coordinates to generate a proper dual representation over the same basis.

I was confused for quite a while about why Eric’s poster kinda had two dual operators, left- and right-complement. With this newfound understanding, it finally made sense, and it also helped me understand the necessity for both J and J^{-1} and similar constructs in other models. Now, here comes the part where my mind got blown. In \mathbb{R}_{2,0,1} (and by extension any odd-dimensional geometric algebra), because the anti-diagonal can be constructed completely non-negative, the left- and right-complements collapse to a single dual operator!

\mathrm{lc}(a) = \mathrm{rc}(a) \quad \forall\, a \in \mathbb{R}_{n_-,n_+,n_0} \quad \Leftrightarrow \quad (n_- + n_+ + n_0) \in (2\mathbb{Z} + 1)

I don’t have the mathematical prowess to prove this claim. It just feels right and somewhat intuitive to me. Glad if anyone can chime in with comments or references, this is likely no news at all, some good paper to read maybe?
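A numerical spot-check of the conjecture is easy, since the complements only involve permutation signs and never touch the metric. This is a sketch with my own helper names (`perm_sign`, `complements_collapse`), defining lc and rc per basis blade by lc(b) \wedge b = \mathbb{1} and b \wedge rc(b) = \mathbb{1}:

```python
from itertools import combinations

def perm_sign(seq):
    """Sign of the permutation that sorts seq (entries distinct)."""
    sign, s = 1, list(seq)
    for i in range(len(s)):
        for j in range(len(s) - 1 - i):
            if s[j] > s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]
                sign = -sign
    return sign

def complements_collapse(n):
    """True if lc(b) == rc(b) for every basis blade of an n-dimensional
    algebra; complements are metric-independent, so only n matters."""
    full = tuple(range(1, n + 1))
    for k in range(n + 1):
        for blade in combinations(full, k):
            comp = tuple(i for i in full if i not in blade)
            lc = perm_sign(comp + blade)   # sign making lc(b) ∧ b = 𝟙
            rc = perm_sign(blade + comp)   # sign making b ∧ rc(b) = 𝟙
            if lc != rc:
                return False
    return True

print([n for n in range(1, 8) if complements_collapse(n)])  # → [1, 3, 5, 7]
```

The underlying reason: lc and rc differ by (-1)^{k(n-k)} for a grade-k blade, and when n is odd one of k and n-k is always even, so the factor is always +1.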

As a last comment on this, it seems to me that choosing a basis with as few negations as possible would also benefit GA implementations, minimizing operations. It would be interesting to see what influence the choice of basis has on common GA operations like join, meet, project etc. themselves - if any. Maybe after factorizing everything out, it’s just as many ops (additions, multiplications) one way or the other?

Where to Next

Thanks for reading so far! Some closing thoughts and tl;dr.

Curriculum

I only got into GA recently, and it’s been a lot of fun and excitement. In hindsight, I have a couple of suggestions based on what at least helped me understand and get a better grasp on things.

2D PGA is a Great Starting Point

  • The projective aspect can still be visualized and conceptualized by our minds in 3D
  • Cayley tables behave nicely, not much ambiguity to figure out
  • The left- and right-complements collapse to a single dual operation
  • Coding up some fun examples is straightforward and very rewarding!

The Move to 3D PGA

  • Use Cayley table to intuitively illustrate complement “flip”
  • Introduce left- and right-complement as generalized dual operator
  • Outline both the geometric product and the wedge, along with their dual operators

Blackboard Boldface Pseudoscalar

  • It’s the way to go :slight_smile:
  • Improves readability of calculations and tables
  • Simply fun to write

Personal Agenda

Things I want to do next.

Open Questions

  • Is the whole complement-collapse idea correct in the first place?
  • Does it in fact hold for odd-dimensional algebras?
  • What are further implications of choice of basis, e.g. on implementation performance?

2D PGA Duality Cheat Sheet

  • Introduce dual operators in a beginner-friendly 2D PGA setting
  • Side-by-side comparison of common formulas and their duals
  • Collaboration welcome!

2D PGA Applications

  • Already working on a fun little browser game
  • Looking into a primal-based implementation using the dual geometric product

Conclusion

Thank you if you actually took the time, effort and patience to crunch through this wall of text! Hope it was at least an interesting insight into the experience of learning GA from my outside perspective, if nothing else.

Cheers and all the best!

@gurki Thanks for the detailed post. I haven’t had time to digest everything.

Regarding bold-face 1 for the pseudoscalar: I’d like to suggest that there are reasons to reserve \mathbf{1} for the generator of the vector space \bigwedge^0 of 0-vectors. We usually don’t think about this, since we immediately identify the grade-0 vectors as scalars and these are identified usually as the real numbers \mathbb{R} (or whatever the base field is). But this is not mathematically precise. Precise is to consider this 1-dimensional vector space of 0-vectors (\bigwedge^0) over \mathbb{R} to be generated by the basis set \{\mathbf{1}\}, so that the scalar a for example is just a shorthand for a\mathbf{1}. This is analogous then to a pseudoscalar a\mathbf{I} (where \mathbf{I} is the unit pseudoscalar of the algebra and the generator of \bigwedge^n), or in general, to being able to write a k-vector as a linear combination of basis vectors, for example, \mathbf{v} = a \mathbf{e}_0 + b \mathbf{e}_1 + c \mathbf{e}_2 for a 1-vector in \mathbb{R}_{2,0,1}. Without \mathbf{1}, how would you write a 0-vector in this form?

\mathbf{Conclusion:} Although it is common practice to refer to a 0-vector by its scalar value a, mathematically the vector space of 0-vectors \bigwedge^0 has a single basis vector and \mathbf{1} is traditionally used to represent it.

Notation

Interesting, that is a good point; I agree and appreciate the explanation! To avoid a potential misunderstanding though: I was proposing the “blackboard boldface” typed 1 rather than normal “boldface”. However, I only noticed after posting that the forum doesn’t support blackboard-typed numbers yet, which would need \usepackage{bbold}. So I’ll stick with I for now. Here is a comparison:

Choice of Basis

In the meantime, I was also able to answer one of my own questions, which was about any further implications of vector ordering for higher-order blade construction. For 2D PGA, ganja.js chose their basis such that -I appear on the anti-diagonal, while the biVector cheat sheets kept it purely positive. Thanks to a discussion with @cgunn3, I realized that these choices influence the mapping of geometric objects to blade coefficients.

Here is a comparison for the 2D PGA wedge product of two objects
a = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + \mathbf{e}_3,
b = b_1\mathbf{e}_1+b_2\mathbf{e}_2+\mathbf{e}_3,
where \mathbf{e}_{31} is swapped for \mathbf{e}_{13}.

Basis A (~biVector): \{1,\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3,\mathbf{e}_{12},\mathbf{e}_{31},\mathbf{e}_{23},\mathbf{e}_{123}\}
a \wedge b = (a_1b_2-a_2b_1)\mathbf{e}_{12}+(b_1-a_1)\mathbf{e}_{31}+(a_2-b_2)\mathbf{e}_{23}

Basis B (~ganja.js): \{1,\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3,\mathbf{e}_{12},\mathbf{e}_{13},\mathbf{e}_{23},\mathbf{e}_{123}\}
a \wedge b = (a_1b_2-a_2b_1)\mathbf{e}_{12}+(a_1-b_1)\mathbf{e}_{13}+(a_2-b_2)\mathbf{e}_{23}

Here \mathbf{e}_1^2=\mathbf{e}_2^2=1, \mathbf{e}_3^2=0.
Examining this from the “points are vectors” view, the wedge result in Basis A yields exactly the standard-form line equation coefficients (a_2-b_2)x+(b_1-a_1)y+(a_1b_2-a_2b_1)=0.
In Basis B, however, to translate line equation coefficients to the PGA representation or vice versa, one needs to take the dual first, which accounts for the negation in \mathbf{e}_{13} accordingly (“Y-axis flip”).
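A quick numeric sanity check of the Basis A reading: the wedge coefficients of two points, read off as (A, B, C) with Ax + By + C = 0, vanish on both input points. The helper name `line_through` is my own sketch, not from any library:

```python
def line_through(p, q):
    """Line coefficients (A, B, C) with A x + B y + C = 0 through
    points p = p1 e1 + p2 e2 + e3 and q likewise (Basis A reading)."""
    p1, p2 = p
    q1, q2 = q
    e12 = p1 * q2 - p2 * q1   # constant term C
    e31 = q1 - p1             # y coefficient B
    e23 = p2 - q2             # x coefficient A
    return e23, e31, e12

A, B, C = line_through((1, 2), (3, 5))
for x, y in [(1, 2), (3, 5)]:
    print(A * x + B * y + C)  # → 0 for both points, as expected
```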

Conclusion

In conclusion, it seems to me that choosing a proper basis can make a significant difference, in both interpreting objects, as well as understanding and teaching the subject. On the other hand, different basis choices might be beneficial for e.g. implementation purposes or performance reasons.

Cheers, have a great weekend :).

EDIT: Updated and corrected “choice of basis” section after aforementioned excellent remarks.

Great post. Thanks for doing this.

I’m a computer scientist first as well. I really like the blackboard boldface 1 for the pseudoscalar, mostly because capital “I” is reserved for the identity matrix in linear algebra. I’ve been using italic I recently, but I’m not really happy with that. Julia is my preferred programming language; from the Julia REPL: \bbone[Tab] => 𝟙. Yay! I think I’m actually going to start using this going forward.

“Count-ordering”, or “lexical ordering” as I’ve seen it called more often, is the nicest way to implement in a programming language, as it’s in sorted order and extends to any dimension trivially. Indeed, that’s how I’ve implemented it.
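Assuming "count-ordering" means grade-major with lexicographic order within each grade (my reading of the comment, not a definition from the thread), generating it is a one-liner:

```python
from itertools import combinations

def lex_blades(n):
    """Basis blades of an n-dimensional algebra: grade-major order,
    lexicographic within each grade; extends to any dimension."""
    return [b for k in range(n + 1)
              for b in combinations(range(1, n + 1), k)]

print(lex_blades(3))
# → [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
```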

However, I do a lot of simulation programming, and blade order is really what you want for physics, as the Cayley table => matrix for bivectors, dual cross product coords, partials in curl, etc. end up with the correct signs for 3D. Using anything else is very error-prone. I’m a little torn about what to do here since it doesn’t really generalize nicely to other dimensions/metrics.
